Methods and apparatus for graphical display and editing of flight plans
NASA Technical Reports Server (NTRS)
Gibbs, Michael J. (Inventor); Adams, Jr., Mike B. (Inventor); Chase, Karl L. (Inventor); Lewis, Daniel E. (Inventor); McCrobie, Daniel E. (Inventor); Omen, Debi Van (Inventor)
2002-01-01
Systems and methods are provided for an integrated graphical user interface which facilitates the display and editing of aircraft flight-plan data. A user (e.g., a pilot) located within the aircraft provides input to a processor through a cursor control device and receives visual feedback via a display produced by a monitor. The display includes various graphical elements associated with the lateral position, vertical position, flight-plan and/or other indicia of the aircraft's operational state as determined from avionics data and/or various data sources. Through use of the cursor control device, the user may modify the flight-plan and/or other such indicia graphically in accordance with feedback provided by the display. In one embodiment, the display includes a lateral view, a vertical profile view, and a hot-map view configured to simplify the display and editing of the aircraft's flight-plan data.
Perspective traffic display format and airline pilot traffic avoidance
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.; Mcgreevy, Michael W.; Hitchcock, Robert J.
1987-01-01
Part-task experiments have examined perspective projections of cockpit displays of traffic information as a means of presenting aircraft separation information to airline pilots. Ten airline pilots served as subjects in an experiment comparing the perspective projection with plan-view projections of the same air traffic situations. The pilots' task was to monitor the traffic display in order to decide if an avoidance maneuver was needed. Pilots took more time to select avoidance maneuvers with a conventional plan-view display than with an experimental perspective display. In contrast to previous results, if the pilots selected a maneuver with the perspective display, they were more likely to choose one with a vertical component. Tabulation of the outcomes of their initial avoidance decisions with both perspective and plan-view displays showed that they were more likely to achieve required separation with maneuvers chosen with the aid of perspective displays.
Graphics simulation and training aids for advanced teleoperation
NASA Technical Reports Server (NTRS)
Kim, Won S.; Schenker, Paul S.; Bejczy, Antal K.
1993-01-01
Graphics displays can be of significant aid in accomplishing a teleoperation task throughout all three phases of off-line task analysis and planning, operator training, and online operation. In the first phase, graphics displays provide substantial aid to investigate work cell layout, motion planning with collision detection and with possible redundancy resolution, and planning for camera views. In the second phase, graphics displays can serve as very useful tools for introductory training of operators before training them on actual hardware. In the third phase, graphics displays can be used for previewing planned motions and monitoring actual motions in any desired viewing angle, or, when communication time delay prevails, for providing predictive graphics overlay on the actual camera view of the remote site to show the non-time-delayed consequences of commanded motions in real time. This paper addresses potential space applications of graphics displays in all three operational phases of advanced teleoperation. Possible applications are illustrated with techniques developed and demonstrated in the Advanced Teleoperation Laboratory at JPL. The examples described include task analysis and planning of a simulated Solar Maximum Satellite Repair task, a novel force-reflecting teleoperation simulator for operator training, and preview and predictive displays for on-line operations.
Integrating SIMNET into Heavy Task Force Tactical Training
1994-10-01
the overlays will allow the battlemaster time to input them for display on the Plan View Display (PVD) for use in AARs. The time schedule must also...reconnaissance for both friendly and enemy forces to occupy. The stealth station, in conjunction with the Plan View Display (PVD), can be used to...was moved to another assembly area for armor crewman and Nuclear, Biological, and Chemical (NBC) skills testing. The battalion observer tested 25% to
NASA Technical Reports Server (NTRS)
Gibbs, Michael J. (Inventor); Adams, Michael B. (Inventor); Chase, Karl L. (Inventor); Van Omen, Debi (Inventor); Lewis, Daniel E. (Inventor); McCrobie, Daniel E. (Inventor)
2003-01-01
A method and system for displaying a flight plan such that an entire flight plan is viewable through the use of scrolling devices is disclosed. The flight plan display may also include a method and system for collapsing and expanding the flight plan display, provisions for the conspicuous marking of changes to a flight plan, the use of tabs to switch between various displays of data, and access to a navigation database that allows a user to view information about various navigational aids. The database may also allow access to the information about the navigational aids to be prioritized based on proximity to the current position of the aircraft.
Presentation Extensions of the SOAP
NASA Technical Reports Server (NTRS)
Carnright, Robert; Stodden, David; Coggi, John
2009-01-01
A set of extensions of the Satellite Orbit Analysis Program (SOAP) enables simultaneous and/or sequential presentation of information from multiple sources. SOAP is used in the aerospace community as a means of collaborative visualization and analysis of data on planned spacecraft missions. The following definitions of terms also describe the display modalities of SOAP as now extended: in SOAP terminology, a) "View" signifies an animated three-dimensional (3D) scene, two-dimensional still image, plot of numerical data, or any other visible display derived from a computational simulation or other data source; b) "Viewport" signifies a rectangular portion of a computer-display window containing a view; c) "Palette" signifies a collection of one or more viewports configured for simultaneous (split-screen) display in the same window; d) "Slide" signifies a palette with a beginning and ending time and an animation time step; and e) "Presentation" signifies a prescribed sequence of slides. For example, multiple 3D views from different locations can be crafted for simultaneous display and combined with numerical plots and other representations of data for both qualitative and quantitative analysis. The resulting sets of views can be temporally sequenced to convey visual impressions of a sequence of events for a planned mission.
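The View/Viewport/Palette/Slide/Presentation vocabulary defined above maps naturally onto a nested containment structure. The sketch below (Python; class and field names are assumptions for illustration, not the actual SOAP API) shows one way that containment could be modeled:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class View:                      # any displayable product: 3D scene, plot, image
        name: str
        kind: str                    # e.g. "3d-scene", "plot", "image"

    @dataclass
    class Viewport:                  # rectangular region of the window holding one View
        view: View
        x: int
        y: int
        width: int
        height: int

    @dataclass
    class Palette:                   # split-screen collection of viewports in one window
        viewports: List[Viewport] = field(default_factory=list)

    @dataclass
    class Slide:                     # a palette animated over a time interval
        palette: Palette
        t_start: float               # beginning time (s)
        t_end: float                 # ending time (s)
        dt: float                    # animation time step (s)

    @dataclass
    class Presentation:              # prescribed sequence of slides
        slides: List[Slide] = field(default_factory=list)

A presentation is then just a list of slides, each of which re-uses palettes and views, which matches the split-screen and sequencing behavior described in the abstract.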
A walk through the planned CS building. M.S. Thesis
NASA Technical Reports Server (NTRS)
Khorramabadi, Delnaz
1991-01-01
Using the architectural plan views of our future computer science building as test objects, we have completed the first stage of a building walkthrough system. The inputs to our system are AutoCAD files. An AutoCAD converter translates the geometrical information in these files into a format suitable for 3D rendering. Major model errors, such as incorrect polygon intersections and random face orientations, are detected and fixed automatically. Interactive viewing and editing tools are provided to view the results, to modify and clean the model, and to change surface attributes. Our display system provides a simple-to-use interface for interactive exploration of buildings. Using only the mouse buttons, the user can move inside and outside the building and change floors. Several viewing and rendering options are provided, such as restricting the viewing frustum, avoiding wall collisions, and selecting different rendering algorithms. A plan view of the current floor, with the position of the eye point and viewing direction marked on it, is displayed at all times. The scene illumination can be manipulated by interactively controlling intensity values for five light sources.
EverVIEW: a visualization platform for hydrologic and Earth science gridded data
Romañach, Stephanie S.; McKelvy, James M.; Suir, Kevin J.; Conzelmann, Craig
2015-01-01
The EverVIEW Data Viewer is a cross-platform desktop application that combines and builds upon multiple open source libraries to help users to explore spatially-explicit gridded data stored in Network Common Data Form (NetCDF). Datasets are displayed across multiple side-by-side geographic or tabular displays, showing colorized overlays on an Earth globe or grid cell values, respectively. Time-series datasets can be animated to see how water surface elevation changes through time or how habitat suitability for a particular species might change over time under a given scenario. Initially targeted toward Florida's Everglades restoration planning, EverVIEW has been flexible enough to address the varied needs of large-scale planning beyond Florida, and is currently being used in biological planning efforts nationally and internationally.
Comparison of tablet-based strategies for incision planning in laser microsurgery
NASA Astrophysics Data System (ADS)
Schoob, Andreas; Lekon, Stefan; Kundrat, Dennis; Kahrs, Lüder A.; Mattos, Leonardo S.; Ortmaier, Tobias
2015-03-01
Recent research has revealed that incision planning in laser surgery deploying a stylus and tablet outperforms state-of-the-art micro-manipulator-based laser control. To provide more detailed quantitative evidence for that approach, a comparative study of six tablet-based strategies for laser path planning is presented. The reference strategy is defined by monoscopic visualization and continuous path drawing on a graphics tablet. Further concepts deploying stereoscopic or synthesized laser views, point-based path definition, real-time teleoperation, or a pen display are compared with the reference scenario. Volunteers were asked to redraw and ablate stamped lines on a sample. Performance is assessed by measuring planning accuracy, completion time, and ease of use. Results demonstrate that significant differences exist between the proposed concepts. The reference strategy provides more accurate incision planning than the stereo or laser view scenario. Real-time teleoperation performs best with respect to completion time without any significant deviation in accuracy or usability. Point-based planning and the pen display provide the most accurate planning and increased ease of use compared to the reference strategy. As a result, combining the pen display approach with point-based planning has the potential to become a powerful strategy, benefiting from improved hand-eye coordination on the one hand and from a simple but accurate technique for path definition on the other. These findings, together with overall usability scores indicating high acceptance and consistency across the proposed strategies, motivate further tablet-based planning in laser microsurgery.
Greene, Jessica; Hibbard, Judith H; Sacks, Rebecca M
2016-04-01
Starting in 2017, all state and federal health insurance exchanges will present quality data on health plans in addition to cost information. We analyzed variations in the current design of information on state exchanges to identify presentation approaches that encourage consumers to take quality as well as cost into account when selecting a health plan. Using an online sample of 1,025 adults, we randomly assigned participants to view the same comparative information on health plans, displayed in different ways. We found that consumers were much more likely to select a high-value plan when cost information was summarized instead of detailed, when quality stars were displayed adjacent to cost information, when consumers understood that quality stars signified the quality of medical care, and when high-value plans were highlighted with a check mark or blue ribbon. These approaches, which were equally effective for participants with higher and lower numeracy, can inform the development of future displays of plan information in the exchanges.
Modern Display Technologies for Airborne Applications.
1983-04-01
the case of LED head-down direct view displays, this requires that special attention be paid to the optical filtering, the electrical drive/address...effectively attenuates the LED specular reflectance component, the colour and neutral density filtering attenuate the diffuse component and the... filter techniques are planned for use with video, multi-colour and advanced versions of numeric, alphanumeric and graphic displays; this technique
Software Aids Visualization Of Mars Pathfinder Mission
NASA Technical Reports Server (NTRS)
Weidner, Richard J.
1996-01-01
Report describes Simulator for Imager for Mars Pathfinder (SIMP) computer program. SIMP generates "virtual reality" display of view through video camera on Mars lander spacecraft of Mars Pathfinder mission, along with display of pertinent textual and graphical data, for use by scientific investigators in planning sequences of activities for mission.
NASA Astrophysics Data System (ADS)
Zamorano, Lucia J.; Dujovny, Manuel; Ausman, James I.
1990-01-01
"Real time" surgical treatment planning utilizing multimodality imaging (CT, MRI, DA) has been developed to provide the neurosurgeon with 2D multiplanar and 3D views of a patient's lesion for stereotactic planning. Both diagnostic and therapeutic stereotactic procedures have been implemented utilizing a workstation (SUN 1/10) and specially developed software and hardware (developed in collaboration with TOMO Medical Imaging Technology, Southfield, MI). This provides complete 3D and 2D free-tilt views as part of the system instrumentation. The 2D multiplanar mode includes reformatted sagittal, coronal, paraaxial and free-tilt oblique views at any arbitrary plane through the patient's lesion. The 3D mode includes features for extracting a view of the target volume localized by a process comprising automatic segmentation, thresholding, and/or boundary detection, with 3D display of the volumes of interest. The system also includes the capability of interactive playback of reconstructed 3D movies, which can be viewed at strategic locations on any hospital network having compatible software, or at remote sites through data transmission, with record documentation by image printers. Both the 2D and 3D menus include real-time stereotactic coordinate measurements and trajectory definition capabilities as well as statistical functions for computing distances, angles, areas, and volumes. A combined interactive 3D-2D multiplanar menu allows simultaneous display of the selected trajectory, final optimization, and multiformat 2D display of free-tilt reformatted images perpendicular to the selected trajectory over the entire target volume.
A visualization tool to support decision making in environmental and biological planning
Romañach, Stephanie S.; McKelvy, James M.; Conzelmann, Craig; Suir, Kevin J.
2014-01-01
Large-scale ecosystem management involves consideration of many factors for informed decision making. The EverVIEW Data Viewer is a cross-platform desktop decision support tool to help decision makers compare simulation model outputs from competing plans for restoring Florida's Greater Everglades. The integration of NetCDF metadata conventions into EverVIEW allows end-users from multiple institutions within and beyond the Everglades restoration community to share information and tools. Our development process incorporates continuous interaction with targeted end-users for increased likelihood of adoption. One of EverVIEW's signature features is side-by-side map panels, which can be used to simultaneously compare species or habitat impacts from alternative restoration plans. Other features include examination of potential restoration plan impacts across multiple geographic or tabular displays, and animation through time. As a result of an iterative, standards-driven approach, EverVIEW is relevant to large-scale planning beyond Florida, and is used in multiple biological planning efforts in the United States.
Military display market: third comprehensive edition
NASA Astrophysics Data System (ADS)
Desjardins, Daniel D.; Hopper, Darrel G.
2002-08-01
Defense displays comprise a niche market whose continually high performance requirements drive technology. The military displays market is being characterized to ascertain opportunities for synergy across platforms and needs for new technology. All weapons systems are included. Some 382,585 displays are either now in use or planned in DoD weapon systems over the next 15 years, comprising displays designed into direct-view, projection-view, and virtual-image-view applications. This defense niche market is further fractured into 1163 micro-niche markets by some 403 program offices that make decisions independently of one another. By comparison, a consumer electronics product has volumes of tens of millions of units for a single fixed design. Some 81% of defense displays are ruggedized versions of consumer-market-driven designs. Some 19% of defense displays, especially in avionics cockpits and combat crewstations, are custom designs to gain the additional performance available in the technology base but not available in consumer-market-driven designs. Defense display sizes range from 13.6 to 4543 mm. More than half of defense displays are now based on some form of flat panel display technology, especially thin-film-transistor active matrix liquid crystal displays (TFT AMLCD); the cathode ray tube (CRT) is still widely used but continuing to drop rapidly in defense market share.
LiveView3D: Real Time Data Visualization for the Aerospace Testing Environment
NASA Technical Reports Server (NTRS)
Schwartz, Richard J.; Fleming, Gary A.
2006-01-01
This paper addresses LiveView3D, a software package and associated data visualization system for use in the aerospace testing environment. The LiveView3D system allows researchers to graphically view data from numerous wind tunnel instruments in real time in an interactive virtual environment. The graphical nature of the LiveView3D display provides researchers with an intuitive view of the measurement data, making it easier to interpret the aerodynamic phenomenon under investigation. LiveView3D has been developed at the NASA Langley Research Center and has been applied in the Langley Unitary Plan Wind Tunnel (UPWT). This paper discusses the capabilities of the LiveView3D system, provides example results from its application in the UPWT, and outlines features planned for future implementation.
NASA Technical Reports Server (NTRS)
Maldague, Pierre; Page, Dennis; Chase, Adam
2005-01-01
Activity Plan Generator (APGEN), now at version 5.0, is a computer program that assists in generating an integrated plan of activities for a spacecraft mission that does not oversubscribe spacecraft and ground resources. APGEN generates an interactive display, through which the user can easily create or modify the plan. The display summarizes the plan by means of a time line, whereon each activity is represented by a bar stretched between its beginning and ending times. Activities can be added, deleted, and modified via simple mouse and keyboard actions. The use of resources can be viewed on resource graphs. Resource and activity constraints can be checked. Types of activities, resources, and constraints are defined by simple text files, which the user can modify. In one of two modes of operation, APGEN acts as a planning expert assistant, displaying the plan and identifying problems in the plan. The user is in charge of creating and modifying the plan. In the other mode, APGEN automatically creates a plan that does not oversubscribe resources. The user can then manually modify the plan. APGEN is designed to interact with other software that generates sequences of timed commands for implementing details of planned activities.
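The "does not oversubscribe spacecraft and ground resources" condition described above amounts to checking that, at every instant, the summed demand of overlapping activities stays within capacity. A minimal sweep-line sketch of that check (illustrative only; not the APGEN implementation):

    from typing import List, Tuple

    Activity = Tuple[float, float, float]   # (start_time, end_time, resource_demand)

    def oversubscribed(activities: List[Activity], capacity: float) -> List[float]:
        """Return the times at which total resource demand exceeds capacity."""
        events = []
        for start, end, demand in activities:
            events.append((start, +demand))      # demand turns on at start
            events.append((end, -demand))        # and off at end
        events.sort()                            # ends sort before coincident starts
        level, violations = 0.0, []
        for t, delta in events:
            level += delta
            if level > capacity:
                violations.append(t)
        return violations

    plan = [(0, 4, 2.0), (2, 6, 2.0), (5, 9, 1.0)]   # hypothetical activities
    print(oversubscribed(plan, capacity=3.0))         # -> [2] (demand 4.0 > 3.0)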
NASA Technical Reports Server (NTRS)
Gershzohn, Gary R.; Sirko, Robert J.; Zimmerman, K.; Jones, A. D.
1990-01-01
This task concerns the design, development, testing, and evaluation of a new proximity operations planning and flight guidance display and control system for manned space operations. A forecast, derivative manned maneuvering unit (MMU) was identified as a candidate for the application of a color, highway-in-the-sky display format for the presentation of flight guidance information. A Silicon Graphics 4D/20-based simulation is being developed to design and test display formats and operations concepts. The simulation includes the following: (1) real-time color graphics generation to provide realistic, dynamic flight guidance displays and control characteristics; (2) real-time graphics generation of spacecraft trajectories; (3) MMU flight dynamics and control characteristics; (4) control algorithms for rotational and translational hand controllers; (5) orbital mechanics effects for rendezvous and chase spacecraft; (6) inclusion of appropriate navigation aids; and (7) measurement of subject performance. The flight planning system under development provides for: (1) selection of appropriate operational modes, including minimum cost, optimum cost, minimum time, and specified ETA; (2) automatic calculation of rendezvous trajectories, en route times, and fuel requirements; and (3) provisions for manual override. Man/machine function allocations in planning and en route flight segments are being evaluated. Planning and en route data are presented on one screen composed of two windows: (1) a map display presenting a view perpendicular to the orbital plane, depicting the flight planning trajectory and time data; and (2) an attitude display presenting local vertical-local horizontal attitude data superimposed on a highway-in-the-sky or flight channel representation of the flight-planned course. Both display formats are presented while the MMU is en route. In addition to these displays, several original display elements are being developed, including a 3DOF flight director for attitude commanding, a different flight director for translation commands, and a pictorial representation of velocity deviations.
Effect of image scaling on stereoscopic movie experience
NASA Astrophysics Data System (ADS)
Häkkinen, Jukka P.; Hakala, Jussi; Hannuksela, Miska; Oittinen, Pirkko
2011-03-01
Camera separation affects the perceived depth in stereoscopic movies. Through control of the separation and thereby the depth magnitudes, the movie can be kept comfortable but interesting. In addition, the viewing context has a significant effect on the perceived depth, as a larger display and longer viewing distances also contribute to an increase in depth. Thus, if the content is to be viewed in multiple viewing contexts, the depth magnitudes should be carefully planned so that the content always looks acceptable. Alternatively, the content can be modified for each viewing situation. To identify the significance of changes due to the viewing context, we studied the effect of stereoscopic camera base distance on the viewer experience in three different situations: 1) small sized video and a viewing distance of 38 cm, 2) television and a viewing distance of 158 cm, and 3) cinema and a viewing distance of 6-19 meters. We examined three different animations with positive parallax. The results showed that the camera distance had a significant effect on the viewing experience in small display/short viewing distance situations, in which the experience ratings increased until the maximum disparity in the scene was 0.34 - 0.45 degrees of visual angle. After 0.45 degrees, increasing the depth magnitude did not affect the experienced quality ratings. Interestingly, changes in the camera distance did not affect the experience ratings in the case of television or cinema if the depth magnitudes were below one degree of visual angle. When the depth was greater than one degree, the experience ratings began to drop significantly. These results indicate that depth magnitudes have a larger effect on the viewing experience with a small display. When a stereoscopic movie is viewed from a larger display, other experiences might override the effect of depth magnitudes.
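The disparity values reported above (0.34-0.45 degrees, one degree) are angles subtended at the eye by the on-screen parallax. For a screen parallax p and viewing distance d, the subtended angle is approximately

    \theta = 2\arctan\!\left(\frac{p}{2d}\right) \approx \frac{p}{d} \quad \text{(radians, for } p \ll d\text{)}

which is why the same stereoscopic content must be re-checked, or the camera separation re-planned, for each combination of screen size and viewing distance.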
Dose factor entry and display tool for BNCT radiotherapy
Wessol, Daniel E.; Wheeler, Floyd J.; Cook, Jeremy L.
1999-01-01
A system for use in Boron Neutron Capture Therapy (BNCT) radiotherapy planning where a biological distribution is calculated using a combination of conversion factors and a previously calculated physical distribution. Conversion factors are presented in a graphical spreadsheet so that a planner can easily view and modify the conversion factors. For radiotherapy in multi-component modalities, such as Fast-Neutron and BNCT, it is necessary to combine each conversion factor component to form an effective dose which is used in radiotherapy planning and evaluation. The Dose Factor Entry and Display System is designed to facilitate planner entry of appropriate conversion factors in a straightforward manner for each component. The effective isodose is then immediately computed and displayed over the appropriate background (e.g. digitized image).
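For multi-component modalities such as BNCT, the effective dose described above is conventionally a weighted sum of the physically computed dose components, with one planner-entered conversion factor per component. A minimal sketch under that assumption (component names and factor values are illustrative, not taken from the paper):

    def effective_dose(physical_dose, conversion_factors):
        """Combine per-component physical doses into an effective dose.

        physical_dose and conversion_factors are dicts keyed by component name,
        e.g. boron, nitrogen, fast-neutron and gamma components.
        """
        return sum(conversion_factors[c] * physical_dose[c] for c in physical_dose)

    # Hypothetical voxel values and conversion factors (illustrative numbers only)
    dose    = {"boron": 1.2, "nitrogen": 0.3, "fast_neutron": 0.4, "gamma": 0.8}
    factors = {"boron": 3.8, "nitrogen": 3.2, "fast_neutron": 3.2, "gamma": 1.0}
    print(effective_dose(dose, factors))   # effective dose for this voxel

In a planning display like the one described, recomputing this sum immediately after a factor is edited is what allows the effective isodose to be redrawn over the digitized image.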
An Experimental Study of the Effect of Shared Information on Pilot/Controller Re-Route Negotiation
NASA Technical Reports Server (NTRS)
Farley, Todd C.; Hansman, R. John
1999-01-01
Air-ground data link systems are being developed to enable pilots and air traffic controllers to share information more fully. The sharing of information is generally expected to enhance their shared situation awareness and foster more collaborative decision making. An exploratory, part-task simulator experiment is described which evaluates the extent to which shared information may lead pilots and controllers to cooperate or compete when negotiating route amendments. The results indicate an improvement in situation awareness for pilots and controllers and a willingness to work cooperatively. Independent of data link considerations, the experiment also demonstrates the value of providing controllers with a good-quality weather representation on their plan view displays. Observed improvements in situation awareness and separation assurance are discussed. It is argued that deployment of this relatively simple, low-risk addition to the plan view displays be accelerated.
Interactive orbital proximity operations planning system
NASA Technical Reports Server (NTRS)
Grunwald, Arthur J.; Ellis, Stephen R.
1988-01-01
An interactive graphical proximity operations planning system was developed, which allows on-site design of efficient, complex, multiburn maneuvers in a dynamic multispacecraft environment. Maneuvering takes place in and out of the orbital plane. The difficulty in planning such missions results from the unusual and counterintuitive character of orbital dynamics and complex time-varying operational constraints. This difficulty is greatly overcome by visualizing the relative trajectories and the relevant constraints in an easily interpretable graphical format, which provides the operator with immediate feedback on design actions. The display shows a perspective bird's-eye view of a Space Station and co-orbiting spacecraft on the background of the Station's orbital plane. The operator has control over the two modes of operation: a viewing system mode, which enables the exploration of the spatial situation about the Space Station and thus the ability to choose and zoom in on areas of interest; and a trajectory design mode, which allows the interactive editing of a series of way points and maneuvering burns to obtain a trajectory that complies with all operational constraints. A first version of this display was completed. An experimental program is planned in which operators will carry out a series of design missions which vary in complexity and constraints.
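The "counterintuitive character of orbital dynamics" that such a display helps overcome is commonly modeled, for a chaser near a station in a circular orbit, with the Clohessy-Wiltshire (Hill) equations. The abstract does not name the dynamics model, so the in-plane propagation sketch below is an assumption, not the system's actual implementation:

    import numpy as np

    def cw_relative_state(x0, y0, vx0, vy0, n, t):
        """In-plane Clohessy-Wiltshire propagation of a chaser relative to a station
        in circular orbit. x: radial offset, y: along-track offset, n: mean motion (rad/s)."""
        s, c = np.sin(n * t), np.cos(n * t)
        x = (4 - 3 * c) * x0 + (s / n) * vx0 + (2 / n) * (1 - c) * vy0
        y = (6 * (s - n * t)) * x0 + y0 - (2 / n) * (1 - c) * vx0 + ((4 * s - 3 * n * t) / n) * vy0
        return x, y

    # Example: 1 km ahead of the station, no relative velocity, one orbit later
    n = 2 * np.pi / 5580.0            # mean motion for a ~93-minute orbit
    print(cw_relative_state(0.0, 1000.0, 0.0, 0.0, n, t=5580.0))

A pure along-track offset stays fixed over one orbit, while a small radial offset drifts along-track; visualizing exactly this kind of behavior is what the bird's-eye trajectory display provides.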
Web-Based Customizable Viewer for Mars Network Overflight Opportunities
NASA Technical Reports Server (NTRS)
Gladden, Roy E.; Wallick, Michael N.; Allard, Daniel A.
2012-01-01
This software displays a full summary of information regarding the overflight opportunities between any set of lander and orbiter pairs that the user has access to view. The information display can be customized, allowing the user to choose which fields to view/hide and filter. The software works from a Web browser on any modern operating system. A full summary of information pertaining to an overflight is available, including the proposed, tentative, requested, planned, and implemented states. This gives the user a chance to quickly check for inconsistencies and fix any problems. Overflights from multiple lander/orbiter pairs can be compared instantly, and information can be filtered through the query and shown/hidden, giving the user a customizable view of the data. The information can be exported to a CSV (comma separated value) or XML (extensible markup language) file. The software only grants access to users who are authorized to view the information. This application is an addition to the MaROS Web suite. Prior to this addition, information pertaining to overflight opportunities had a limited amount of data (displayed graphically) and could only be shown in strict temporal ordering. This new display shows more information, allows direct comparisons between overflights, and allows the data to be manipulated in ways that were not possible in the past. The current software solution is to use CSV files to view the overflight opportunities.
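The field show/hide, filtering, and CSV export described above is conventional tabular-data handling; the sketch below illustrates the idea with hypothetical field names rather than the actual MaROS schema:

    import csv

    overflights = [   # hypothetical records for two lander/orbiter pairs
        {"orbiter": "ODY", "lander": "MSL", "state": "planned",     "start_utc": "2012-08-10T02:15"},
        {"orbiter": "MRO", "lander": "MSL", "state": "implemented", "start_utc": "2012-08-10T03:40"},
    ]

    def filter_overflights(records, **criteria):
        """Keep only records whose fields match every supplied criterion."""
        return [r for r in records if all(r.get(k) == v for k, v in criteria.items())]

    def export_csv(records, path, fields=("orbiter", "lander", "state", "start_utc")):
        """Write the selected fields of each record to a CSV file."""
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=list(fields), extrasaction="ignore")
            writer.writeheader()
            writer.writerows(records)

    export_csv(filter_overflights(overflights, state="planned"), "planned_overflights.csv")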
Advanced Visualization of Experimental Data in Real Time Using LiveView3D
NASA Technical Reports Server (NTRS)
Schwartz, Richard J.; Fleming, Gary A.
2006-01-01
LiveView3D is a software application that imports and displays a variety of wind tunnel derived data in an interactive virtual environment in real time. LiveView3D combines the use of streaming video fed into a three-dimensional virtual representation of the test configuration with networked communications to the test facility Data Acquisition System (DAS). This unified approach to real time data visualization provides a unique opportunity to comprehend very large sets of diverse forms of data in a real time situation, as well as in post-test analysis. This paper describes how LiveView3D has been implemented to visualize diverse forms of aerodynamic data gathered during wind tunnel experiments, most notably at the NASA Langley Research Center Unitary Plan Wind Tunnel (UPWT). Planned future developments of the LiveView3D system are also addressed.
Format and basic geometry of a perspective display of air traffic for the cockpit
NASA Technical Reports Server (NTRS)
Mcgreevy, Michael Wallace; Ellis, Stephen R.
1991-01-01
The design and implementation of a perspective display of air traffic for the cockpit is discussed. Parameters of the perspective are variable and interactive so that the appearance of the projected image can be widely varied. This approach makes allowances for exploration of perspective parameters and their interactions. The display was initially used to study the cases of horizontal maneuver biases found in experiments involving a plan view air traffic display format. Experiments to determine the effect of perspective geometry on spatial judgements have evolved from the display program. Several scaling techniques and other adjustments to the perspective are used to tailor the geometry for effective presentation of 3-D traffic situations.
How controllers compensate for the lack of flight progress strips.
DOT National Transportation Integrated Search
1996-02-01
The role of the Flight Progress Strip, currently used to display important flight data, has been debated because of long-range plans to automate the air traffic control (ATC) human-computer interface. Currently, the Flight Progress Strip is viewed by ...
Airport surface traffic control TAGS planning alternatives and cost/benefit
DOT National Transportation Integrated Search
1977-01-01
The findings of a cost/benefit analysis of the deployment of a new airport ground surveillance system, TAGS (Tower Automated Ground Surveillance), are presented. TAGS will provide a plan view display of aircraft on the airport's taxiways and runways li...
Study to design and develop remote manipulator systems
NASA Technical Reports Server (NTRS)
Hill, J. W.; Salisbury, J. K., Jr.
1977-01-01
A description is given of part of a continuing effort both to develop models for and to augment the performance of humans controlling remote manipulators. The project plan calls for the performance of several standard tasks with a number of different manipulators, controls, and viewing conditions, using an automated performance measuring system; in addition, the project plan calls for the development of a force-reflecting joystick and supervisory display system.
Viewing Chinese art on an interactive tabletop.
Hsieh, Chun-ko; Hung, Yi-Ping; Ben-Ezra, Moshe; Hsieh, Hsin-Fang
2013-01-01
To protect fragile paintings and calligraphy, Taiwan's National Palace Museum (NPM) has policies controlling the frequency and duration of their exposure. So, visitors might not see the works they planned to see. To address this problem, the NPM installed an interactive tabletop for viewing the works. This tabletop, the first to feature multiresolution and gigapixel photography technology, displays extremely high-quality images revealing brushwork-level detail. A user study at the NPM examined the tabletop's performance and collected visitor feedback.
Consolidated Cab Display (CCD) System, Project Planning Document (PPD),
1981-02-01
(1980-1981 milestone schedule chart) ...12. Software Documentation: a. Overall Computer Program Description (OCPD); b. ...Approve OCPD; c. Computer Program Functional Specifications (CPFS); d. Data Base Table Design Specification (DBTDS); e. Software Interface Control Document...Parts List, Master Pattern and Plan View Reproducible Drawings, Instruction Book, Training Aids/Materials; b. Software: OCPD, CPFS, SI CD, PDS, DBTDS, SDD
Optical cross-talk and visual comfort of a stereoscopic display used in a real-time application
NASA Astrophysics Data System (ADS)
Pala, S.; Stevens, R.; Surman, P.
2007-02-01
Many 3D systems work by presenting to the observer stereoscopic pairs of images that are combined to give the impression of a 3D image. Discomfort experienced when viewing for extended periods may be due to several factors, including the presence of optical crosstalk between the stereo image channels. In this paper we use two video cameras and two LCD panels viewed via a Helmholtz arrangement of mirrors, to display a stereoscopic image inherently free of crosstalk. Simple depth discrimination tasks are performed whilst viewing the 3D image and controlled amounts of image crosstalk are introduced by electronically mixing the video signals. Error monitoring and skin conductance are used as measures of workload as well as traditional subjective questionnaires. We report qualitative measurements of user workload under a variety of viewing conditions. This pilot study revealed a decrease in task performance and increased workload as crosstalk was increased. The observations will assist in the design of further trials planned to be conducted in a medical environment.
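The paper states that controlled crosstalk was introduced by "electronically mixing the video signals"; a symmetric linear mix is the simplest reading of that and is assumed in the sketch below (not necessarily the exact circuit or weights used in the study):

    import numpy as np

    def mix_crosstalk(left, right, c):
        """Simulate symmetric stereo crosstalk by linearly mixing the two video channels.

        c is the crosstalk fraction (0 = none, 0.1 = 10% leakage); frames are arrays
        of normalized intensities.
        """
        left_out  = (1.0 - c) * left  + c * right
        right_out = (1.0 - c) * right + c * left
        return left_out, right_out

    # Example with hypothetical 2x2 grey-level frames
    L = np.array([[1.0, 0.0], [0.5, 0.2]])
    R = np.array([[0.0, 1.0], [0.5, 0.8]])
    print(mix_crosstalk(L, R, c=0.05))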
Thomas, Lisa C; Wickens, Christopher D
2008-08-01
Two experiments explored the effects of display dimensionality, conflict geometry, and time pressure on pilot maneuvering preferences for resolving en route conflicts. With the presence of a cockpit display of traffic information (CDTI) that provides graphical airspace information, pilots can use a variety of conflict resolution maneuvers in response to how they perceive the conflict. Inconsistent preference findings from previous research on conflict resolution using CDTIs may be attributable to inherent ambiguities in 3-D perspective displays and/or a limited range of conflict geometries. Pilots resolved predicted conflicts using CDTIs with three levels of display dimensionality; the first had two 2-D orthogonal views, the second depicted the airspace in two alternating 3-D perspective views, and the third had a pilot-controlled swiveling viewpoint. Pilots demonstrated the same preferences that have been observed in previous research for vertical over lateral maneuvers in low workload and climbs over descents for level-flight conflicts. With increasing workload the two 3-D perspective displays, but not the 2-D displays, resulted in an increased preference for lateral over vertical maneuvers. Increased time pressure resulted in increased vertical maneuvers, an effect again limited to the two 3-D perspective displays. Resolution preferences were more affected by workload and time pressure when the 3-D perspective displays were used, as compared with the 2-D displays, although overall preferences were milder than in previous studies. Investigating maneuver preferences using the strategic flight planning paradigm employed in this study may be the key to better ensure pilot acceptance of computer-generated resolution maneuvers.
3D laptop for defense applications
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Chenault, David
2012-06-01
Polaris Sensor Technologies has developed numerous 3D display systems using a US Army patented approach. These displays have been developed as prototypes for handheld controllers for robotic systems and closed hatch driving, and as part of a TALON robot upgrade for 3D vision, providing depth perception for the operator for improved manipulation and hazard avoidance. In this paper we discuss the prototype rugged 3D laptop computer and its applications to defense missions. The prototype 3D laptop combines full temporal and spatial resolution display with the rugged Amrel laptop computer. The display is viewed through protective passive polarized eyewear, and allows combined 2D and 3D content. Uses include robot tele-operation with live 3D video or synthetically rendered scenery, mission planning and rehearsal, enhanced 3D data interpretation, and simulation.
Solid models for CT/MR image display: accuracy and utility in surgical planning
NASA Astrophysics Data System (ADS)
Mankovich, Nicholas J.; Yue, Alvin; Ammirati, Mario; Kioumehr, Farhad; Turner, Scott
1991-05-01
Medical imaging can now take wider advantage of computer-aided manufacturing through rapid prototyping technologies (RPT) such as stereolithography, laser sintering, and laminated object manufacturing to directly produce solid models of patient anatomy from processed CT and MR images. While conventional surgical planning relies on consultation with the radiologist combined with direct reading and measurement of CT and MR studies, 3-D surface and volumetric display workstations are providing a more easily interpretable view of patient anatomy. RPT can provide the surgeon with a life-size model of patient anatomy constructed layer by layer with full internal detail. Although this life-size anatomic model is more easily understandable by the surgeon, its accuracy and true surgical utility remain untested. We have developed a prototype image processing and model fabrication system based on stereolithography, which provides the neurosurgeon with models of the skull base. Parallel comparison of the model with the original thresholded CT data and with a CRT-displayed surface rendering showed that both have an accuracy of 99.6 percent. Because of the ease of exact voxel localization on the model, its precision was high, with a standard deviation of measurement of 0.71 percent. The measurements on the surface-rendered display proved more difficult to locate exactly and yielded a standard deviation of 2.37 percent. This paper presents our accuracy study and discusses ways of assessing the quality of neurosurgical plans when 3-D models are made available as planning tools.
The Effect of Perspective on Presence and Space Perception
Ling, Yun; Nefs, Harold T.; Brinkman, Willem-Paul; Qu, Chao; Heynderickx, Ingrid
2013-01-01
In this paper we report two experiments in which the effect of perspective projection on presence and space perception was investigated. In Experiment 1, participants were asked to score a presence questionnaire when looking at a virtual classroom. We manipulated the vantage point, the viewing mode (binocular versus monocular viewing), the display device/screen size (projector versus TV) and the center of projection. At the end of each session of Experiment 1, participants were asked to set their preferred center of projection such that the image seemed most natural to them. In Experiment 2, participants were asked to draw a floor plan of the virtual classroom. The results show that field of view, viewing mode, the center of projection and display all significantly affect presence and the perceived layout of the virtual environment. We found a significant linear relationship between presence and perceived layout of the virtual classroom, and between the preferred center of projection and perceived layout. The results indicate that the way in which virtual worlds are presented is critical for the level of experienced presence. The results also suggest that people ignore veridicality and they experience a higher level of presence while viewing elongated virtual environments compared to viewing the original intended shape. PMID:24223156
All-around viewing display system for group activity on life review therapy
NASA Astrophysics Data System (ADS)
Sakamoto, Kunio; Okumura, Mitsuru
2009-10-01
This paper describes a 360 degree viewing display system that can be viewed from any direction. A conventional monitor display is viewed from one direction, i.e., the display has a narrow viewing angle and observers cannot view the screen from the opposite side. To solve this problem, we developed a 360 degree viewing display for collaborative tasks on a round table. The developed 360 degree viewing system has a liquid crystal display screen and a motor-driven 360 degree rotating table. The principle is very simple: the screen of a monitor rotates at a uniform speed, but optical techniques are also utilized. Moreover, we have developed a floating 360 degree viewing display that can be viewed from any direction. This new viewing system has a display screen, a rotating table, and dual parabolic mirrors. In order to float only the image of the screen above the table, the rotating mechanism works inside the parabolic mirrors. Because the dual parabolic mirrors generate a "mirage" image above the upper mirror, observers can view a floating 2D image on the virtual screen in front of them. The observer can thus view a monitor screen from any position surrounding the round table.
Davidson, George S.; Anderson, Thomas G.
2001-01-01
A display controller allows a user to control a base viewing location, a base viewing orientation, and a relative viewing orientation. The base viewing orientation and relative viewing orientation are combined to determine a desired viewing orientation. An aspect of a multidimensional space visible from the base viewing location along the desired viewing orientation is displayed to the user. The user can change the base viewing location, base viewing orientation, and relative viewing orientation by changing the location or other properties of input objects.
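Combining a base viewing orientation with a relative viewing orientation to obtain the desired viewing orientation is rotation composition. A minimal sketch using quaternion-backed rotations (illustrative only; the Euler-angle convention and values are assumptions, not the patented controller's implementation):

    from scipy.spatial.transform import Rotation as R

    # Base orientation of the viewpoint (e.g. set by one input object), as yaw/pitch/roll
    base = R.from_euler("zyx", [30.0, 10.0, 0.0], degrees=True)

    # Relative orientation commanded by another input object (e.g. head or hand pose)
    relative = R.from_euler("zyx", [-15.0, 5.0, 0.0], degrees=True)

    # Desired viewing orientation = base composed with relative
    desired = base * relative
    print(desired.as_euler("zyx", degrees=True))   # orientation used to render the view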
Interactive orbital proximity operations planning system
NASA Technical Reports Server (NTRS)
Grunwald, Arthur J.; Ellis, Stephen R.
1989-01-01
An interactive, graphical proximity operations planning system was developed which allows on-site design of efficient, complex, multiburn maneuvers in the dynamic multispacecraft environment about the space station. Maneuvering takes place in, as well as out of, the orbital plane. The difficulty in planning such missions results from the unusual and counterintuitive character of relative orbital motion trajectories and complex operational constraints, which are both time varying and highly dependent on the mission scenario. This difficulty is greatly overcome by visualizing the relative trajectories and the relative constraints in an easily interpretable, graphical format, which provides the operator with immediate feedback on design actions. The display shows a perspective bird's-eye view of the space station and co-orbiting spacecraft on the background of the station's orbital plane. The operator has control over two modes of operation: (1) a viewing system mode, which enables him or her to explore the spatial situation about the space station and thus choose and frame in on areas of interest; and (2) a trajectory design mode, which allows the interactive editing of a series of way-points and maneuvering burns to obtain a trajectory which complies with all operational constraints. Through a graphical interactive process, the operator will continue to modify the trajectory design until all operational constraints are met. The effectiveness of this display format in complex trajectory design is presently being evaluated in an ongoing experimental program.
NASA Technical Reports Server (NTRS)
Prinzel, III, Lawrence J. (Inventor); Pope, Alan T. (Inventor); Williams, Steven P. (Inventor); Bailey, Randall E. (Inventor); Arthur, Jarvis J. (Inventor); Kramer, Lynda J. (Inventor); Schutte, Paul C. (Inventor)
2012-01-01
Embodiments of the invention permit flight paths (current and planned) to be viewed from various orientations to provide improved path and terrain awareness via graphical two-dimensional or three-dimensional perspective display formats. By coupling the flight path information with a terrain database, uncompromising terrain awareness relative to the path and ownship is provided. In addition, missed approaches, path deviations, and any navigational path can be reviewed and rehearsed before performing the actual task. By rehearsing a particular mission, check list items can be reviewed, terrain awareness can be highlighted, and missed approach procedures can be discussed by the flight crew. Further, the use of Controller Pilot Datalink Communications enables data-linked path, flight plan changes, and Air Traffic Control requests to be integrated into the flight display of the present invention.
Evaluation of viewing experiences induced by a curved three-dimensional display
NASA Astrophysics Data System (ADS)
Mun, Sungchul; Park, Min-Chul; Yano, Sumio
2015-10-01
Despite an increased need for three-dimensional (3-D) functionality in curved displays, comparisons pertinent to human factors between curved and flat panel 3-D displays have rarely been tested. This study compared stereoscopic 3-D viewing experiences induced by a curved display with those of a flat panel display by evaluating subjective and objective measures. Twenty-four participants took part in the experiments and viewed 3-D content with two different displays (flat and curved 3-D display) within a counterbalanced and within-subject design. For the 30-min viewing condition, a paired t-test showed significantly reduced P300 amplitudes, which were caused by engagement rather than cognitive fatigue, in the curved 3-D viewing condition compared to the flat 3-D viewing condition at P3 and P4. No significant differences in P300 amplitudes were observed for 60-min viewing. Subjective ratings of realness and engagement were also significantly higher in the curved 3-D viewing condition than in the flat 3-D viewing condition for 30-min viewing. Our findings support that curved 3-D displays can be effective for enhancing engagement among viewers based on specific viewing times and environments.
High-quality remote interactive imaging in the operating theatre
NASA Astrophysics Data System (ADS)
Grimstead, Ian J.; Avis, Nick J.; Evans, Peter L.; Bocca, Alan
2009-02-01
We present a high-quality display system that enables the remote access within an operating theatre of high-end medical imaging and surgical planning software. Currently, surgeons often use printouts from such software for reference during surgery; our system enables surgeons to access and review patient data in a sterile environment, viewing real-time renderings of MRI & CT data as required. Once calibrated, our system displays shades of grey in Operating Room lighting conditions (removing any gamma correction artefacts). Our system does not require any expensive display hardware, is unobtrusive to the remote workstation and works with any application without requiring additional software licenses. To extend the native 256 levels of grey supported by a standard LCD monitor, we have used the concept of "PseudoGrey" where slightly off-white shades of grey are used to extend the intensity range from 256 to 1,785 shades of grey. Remote access is facilitated by a customized version of UltraVNC, which corrects remote shades of grey for display in the Operating Room. The system is successfully deployed at Morriston Hospital, Swansea, UK, and is in daily use during Maxillofacial surgery. More formal user trials and quantitative assessments are being planned for the future.
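The 1,785-shade figure follows from the PseudoGrey idea of inserting, between consecutive true grey levels, steps in which only one or two of the R, G, B channels are incremented by one: roughly 255 x 7 steps on an 8-bit panel (the exact count depends on whether the endpoints are included). The sketch below generates such a ramp; the channel ordering, chosen here to increase approximately with luminance contribution, is an assumption rather than the published PseudoGrey table:

    def pseudogrey_ramp():
        """Extended grey ramp built by nudging single channels between true grey levels.

        Increments are ordered roughly by luminance contribution (B < R < R+B < G < G+B < R+G);
        this ordering is an assumption for illustration.
        """
        ramp = []
        for g in range(255):
            for dr, dg, db in [(0, 0, 0), (0, 0, 1), (1, 0, 0), (1, 0, 1),
                               (0, 1, 0), (0, 1, 1), (1, 1, 0)]:
                ramp.append((g + dr, g + dg, g + db))   # slightly off-grey shade
        ramp.append((255, 255, 255))
        return ramp

    shades = pseudogrey_ramp()
    print(len(shades))   # 255*7 + 1 = 1786 near-grey shades on an 8-bit panel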
NASA Technical Reports Server (NTRS)
Clement, W. F.
1976-01-01
The use which pilots make of a moving map display from en route through the terminal area and including the approach and go-around flight phases was investigated. The content and function of each of three primary STOLAND displays are reviewed from an operational point of view. The primary displays are the electronic attitude director indicator (EADI), the horizontal situation indicator (HSI), and the multifunction display (MFD). Manually controlled flight with both flight director guidance and raw situation data is examined in detail in a simulated flight experiment with emphasis on tracking reference flight plans and maintaining geographic orientation after missed approaches. Eye-point-of-regard and workload measurements, coupled with task performance measurements, pilot opinion ratings, and pilot comments are presented. The experimental program was designed to offer a systematic objective and subjective comparison of pilots' use of the moving map MFD in conjunction with the other displays.
Smartphone-Guided Needle Angle Selection During CT-Guided Procedures.
Xu, Sheng; Krishnasamy, Venkatesh; Levy, Elliot; Li, Ming; Tse, Zion Tsz Ho; Wood, Bradford John
2018-01-01
In CT-guided intervention, translation from a planned needle insertion angle to the actual insertion angle is estimated only with the physician's visuospatial abilities. An iPhone app was developed to reduce reliance on operator ability to estimate and reproduce angles. The iPhone app overlays the planned angle on the smartphone's camera display in real-time based on the smartphone's orientation. The needle's angle is selected by visually comparing the actual needle with the guideline in the display. If the smartphone's screen is perpendicular to the planned path, the smartphone shows the Bull's-Eye View mode, in which the angle is selected after the needle's hub overlaps the tip in the camera. In phantom studies, we evaluated the accuracies of the hardware, the Guideline mode, and the Bull's-Eye View mode and showed the app's clinical efficacy. A proof-of-concept clinical case was also performed. The hardware accuracy was 0.37° ± 0.27° (mean ± SD). The mean error and navigation time were 1.0° ± 0.9° and 8.7 ± 2.3 seconds for a senior radiologist with 25 years' experience and 1.5° ± 1.3° and 8.0 ± 1.6 seconds for a junior radiologist with 4 years' experience. The accuracy of the Bull's-Eye View mode was 2.9° ± 1.1°. Combined CT and smart-phone guidance was significantly more accurate than CT-only guidance for the first needle pass (p = 0.046), which led to a smaller final targeting error (mean distance from needle tip to target, 2.5 vs 7.9 mm). Mobile devices can be useful for guiding needle-based interventions. The hardware is low cost and widely available. The method is accurate, effective, and easy to implement.
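The Guideline mode described above reduces to comparing the phone's current orientation, from its inertial sensors, with the planned insertion angle and rendering the difference as an overlay. A minimal sketch of that comparison, assuming a gravity vector from the accelerometer and a planned angle measured from vertical (hypothetical axis convention and values; not the actual app code):

    import numpy as np

    def needle_angle_from_vertical(gravity_xyz):
        """Angle (degrees) between the device's long axis and vertical, computed from the
        accelerometer's gravity vector in device coordinates (long axis assumed to be +y)."""
        g = np.asarray(gravity_xyz, float)
        long_axis = np.array([0.0, 1.0, 0.0])            # assumed device axis
        cosang = abs(np.dot(g, long_axis)) / np.linalg.norm(g)
        return float(np.degrees(np.arccos(np.clip(cosang, 0.0, 1.0))))

    planned_angle = 20.0                                  # degrees from vertical, from the CT plan
    current_angle = needle_angle_from_vertical([0.15, 9.63, 1.7])
    print(f"adjust by {planned_angle - current_angle:+.1f} degrees")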
Three-dimensional holographic display of ultrasound computed tomograms
NASA Astrophysics Data System (ADS)
Andre, Michael P.; Janee, Helmar S.; Ysrael, Mariana Z.; Hodler, Jeurg; Olson, Linda K.; Leopold, George R.; Schulz, Raymond
1997-05-01
Breast ultrasound is a valuable adjunct to mammography but is limited by a very small field of view, particularly with high-resolution transducers necessary for breast diagnosis. We have been developing an ultrasound system based on a diffraction tomography method that provides slices through the breast on a large 20-cm diameter circular field of view. Eight to fifteen images are typically produced in sequential coronal planes from the nipple to the chest wall with either 0.25 or 0.5 mm pixels. As a means to simplify the interpretation of this large set of images, we report experience with 3D life-sized displays of the entire breast of human volunteers using a digital holographic technique. The compound 3D holographic images are produced from the digital image matrix, recorded on 14 X 17 inch transparency and projected on a special white-light viewbox. Holographic visualization of the entire breast has proved to be the preferred method for 3D display of ultrasound computed tomography images. It provides a unique perspective on breast anatomy and may prove useful for biopsy guidance and surgical planning.
A computer graphics system for visualizing spacecraft in orbit
NASA Technical Reports Server (NTRS)
Eyles, Don E.
1989-01-01
To carry out unanticipated operations with resources already in space is part of the rationale for a permanently manned space station in Earth orbit. The astronauts aboard a space station will require an on-board, spatial display tool to assist the planning and rehearsal of upcoming operations. Such a tool can also help astronauts to monitor and control such operations as they occur, especially in cases where first-hand visibility is not possible. A computer graphics visualization system designed for such an application and currently implemented as part of a ground-based simulation is described. The visualization system presents to the user the spatial information available in the spacecraft's computers by drawing a dynamic picture containing the planet Earth, the Sun, a star field, and up to two spacecraft. The point of view within the picture can be controlled by the user to obtain a number of specific visualization functions. The elements of the display, the methods used to control the display's point of view, and some of the ways in which the system can be used are described.
Securing information display by use of visual cryptography.
Yamamoto, Hirotsugu; Hayasaki, Yoshio; Nishida, Nobuo
2003-09-01
We propose a secure display technique based on visual cryptography. The proposed technique ensures the security of visual information. The display employs a decoding mask based on visual cryptography. Without the decoding mask, the displayed information cannot be viewed. The viewing zone is limited by the decoding mask so that only one person can view the information. We have developed a set of encryption codes to maintain the designed viewing zone and have demonstrated a display that provides a limited viewing zone.
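Displays of this kind typically build on the classic (2,2) visual cryptography construction, in which each secret pixel is expanded into complementary subpixel patterns on the displayed share and the decoding mask, so the secret appears only when the mask is physically aligned. The sketch below shows that textbook scheme, not the authors' specific encryption codes:

    import random

    # Complementary 2-subpixel patterns for a (2,2) visual cryptography scheme
    PATTERNS = [((1, 0), (0, 1)), ((0, 1), (1, 0))]

    def encrypt(secret_bits):
        """Split a row of secret bits (0=white, 1=black) into a display share and a mask share."""
        display, mask = [], []
        for bit in secret_bits:
            a, b = random.choice(PATTERNS)
            display.extend(a)
            mask.extend(a if bit == 0 else b)   # same pattern -> white, complement -> black
        return display, mask

    def overlay(display, mask):
        """Physical stacking: a subpixel is dark if either share is dark (logical OR)."""
        return [d | m for d, m in zip(display, mask)]

    secret = [1, 0, 1, 1]
    d, m = encrypt(secret)
    print(overlay(d, m))   # black pixels become fully dark pairs, white pixels half dark

Each share alone is a uniformly random pattern, which is what keeps the displayed information unreadable without the decoding mask.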
Plots, Calculations and Graphics Tools (PCG2). Software Transfer Request Presentation
NASA Technical Reports Server (NTRS)
Richardson, Marilou R.
2010-01-01
This slide presentation reviews the development of the Plots, Calculations and Graphics Tools (PCG2) system. PCG2 is an easy to use tool that provides a single user interface to view data in a pictorial, tabular or graphical format. It allows the user to view the same display and data in the Control Room, engineering office area, or remote sites. PCG2 supports extensive and regular engineering needs that are both planned and unplanned and it supports the ability to compare, contrast and perform ad hoc data mining over the entire domain of a program's test data.
Wentink, M; Jakimowicz, J J; Vos, L M; Meijer, D W; Wieringa, P A
2002-08-01
Compared to open surgery, minimally invasive surgery (MIS) relies heavily on advanced technology, such as endoscopic viewing systems and innovative instruments. The aim of the study was to objectively compare three technologically advanced laparoscopic viewing systems with the standard viewing system currently used in most Dutch hospitals. We evaluated the following advanced laparoscopic viewing systems: a Thin Film Transistor (TFT) display, a stereo endoscope, and an image projection display. The standard viewing system was comprised of a monocular endoscope and a high-resolution monitor. Task completion time served as the measure of performance. Eight surgeons with laparoscopic experience participated in the experiment. The average task time was significantly greater (p <0.05) with the stereo viewing system than with the standard viewing system. The average task times with the TFT display and the image projection display did not differ significantly from the standard viewing system. Although the stereo viewing system promises improved depth perception and the TFT and image projection displays are supposed to improve hand-eye coordination, none of these systems provided better task performance than the standard viewing system in this pelvi-trainer experiment.
PTSD in Limb Trauma and Recovery
2008-10-16
...Samsung display provides wider field of view, much greater image fidelity and more comfortable viewing than the eMagin head-mounted display, and is well-suited to deployment in a...run on display platforms other than the eMagin Head-Mounted Display (HMD). This will include Brown University’s Cave, an eight-foot immersive VR...
Demonstration of arbitrary views based on autostereoscopic three-dimensional display system
NASA Astrophysics Data System (ADS)
Liu, Boyang; Sang, Xinzhu; Yu, Xunbo; Li, Liu; Yang, Le; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu
2017-10-01
A method to realize arbitrary views for a lenticular-lens-array-based autostereoscopic three-dimensional display system is demonstrated. Normally, the number of views is proportional to the pitch of the lenticular lens array, and increasing the number of views reduces resolution and increases granular sensation. Thirty-two dense views can be achieved with one lenticular lens pitch covering 5.333 sub-pixels, which significantly increases the number of views without affecting the resolution; however, with that approach the lens pitch and the number of views are fixed. Here, a 3D display method in which the number of views can be changed for most lenticular lens structures is presented. Compared with the previous 32-view display method, the smoothness of motion parallax and the display depth of field are significantly improved.
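The view-interleaving step behind such a design can be sketched as follows; this is a simplified, non-slanted one-dimensional mapping under assumed parameters (fractional pitch of 5.333 sub-pixels, 32 views), not the authors' actual arbitrary-view algorithm.

```python
import math

def view_index(subpixel_col, pitch_subpixels, num_views, offset=0.0):
    """Map a sub-pixel column to a view number for a 1-D (non-slanted) lenticular sheet.

    pitch_subpixels may be fractional (e.g. 5.333 sub-pixels per lens);
    num_views is the number of views interleaved under each lens.
    """
    # Phase 0..1 of this sub-pixel across one lens pitch
    phase = math.fmod(subpixel_col + offset, pitch_subpixels) / pitch_subpixels
    return int(phase * num_views) % num_views

if __name__ == "__main__":
    # Values from the abstract (5.333 sub-pixels per lens, 32 dense views);
    # changing num_views alone re-interleaves the same panel for a different view count.
    mapping = [view_index(k, 5.333, 32) for k in range(16)]
    print(mapping)
```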
NASA Technical Reports Server (NTRS)
Harris, H. M.; Bergam, M. J.; Kim, S. L.; Smith, E. A.
1987-01-01
Shuttle Mission Design and Operations Software (SMDOS) assists in design and operation of missions involving spacecraft in low orbits around Earth by providing orbital and graphics information. SMDOS performs the following five functions: display two world and two polar maps or any user-defined window 5 degrees high in latitude by 5 degrees wide in longitude in one of eight standard projections; designate Earth sites by points or polygon shapes; plot spacecraft ground track with 1-min demarcation lines; display, by means of different colors, availability of Tracking and Data Relay Satellite to Shuttle; and calculate available times and orbits to view particular site, and corresponding look angles. SMDOS is written in Laboratory Micro-systems FORTH (1979 standard).
Overconfidence, preview, and probability in strategic planning
NASA Technical Reports Server (NTRS)
Wickens, Christopher D.; Pizarro, David; Bell, Brian
1991-01-01
The performance of eight subjects in a 'rescue' video game, which required choices as to which node they should fly to in order to rescue simulated casualties, is studied with a view to identifying biases and display support criteria in strategic planning. After each choice, the subjects had to fly a challenging tracking task along a path to reach the next node. The results indicate that the subjects' choices were less optimal when full preview was offered, perhaps due to their reliance on the simple strategy of choosing routes with the greatest number of casualties.
Del Fiol, Guilherme; Butler, Jorie; Livnat, Yarden; Mayer, Jeanmarie; Samore, Matthew; Jones, Makoto; Weir, Charlene
2016-01-01
Summary Objective Big data or population-based information has the potential to reduce uncertainty in medicine by informing clinicians about individual patient care. The objectives of this study were: 1) to explore the feasibility of extracting and displaying population-based information from an actual clinical population’s database records, 2) to explore specific design features for improving the population display, 3) to explore perceptions of population information displays, and 4) to explore the impact of population information display on cognitive outcomes. Methods We used the Veterans Affairs (VA) database to identify similar complex patients based on a similar complex patient case. Study outcome measures were 1) preferences for population information display, 2) time looking at the population display, 3) time to read the chart, and 4) appropriateness of plans with pre- and post-presentation of population data. Finally, we redesigned the population information display based on our findings from this study. Results The qualitative data analysis of preferences for population information display resulted in four themes: 1) trusting the big/population data can be an issue, 2) embedded analytics is necessary to explore patient similarities, 3) tools are needed to control the view (overview, zoom, and filter), and 4) different presentations of the population display can be beneficial. We found that appropriateness of plans was at 60% for both groups (t(9) = -1.9, p = 0.08), and overall time looking at the population information display was 2.3 minutes versus 3.6 minutes, with experts processing information faster than non-experts (t(8) = -2.3, p = 0.04). Conclusion A population database has great potential for reducing complexity and uncertainty in medicine to improve clinical care. The preferences identified for the population information display will guide future health information technology system designers toward better and more intuitive displays. PMID:27437065
NASA Astrophysics Data System (ADS)
Lee, Sam; Lucas, Nathan P.; Ellis, R. Darin; Pandya, Abhilash
2012-06-01
This paper presents a seamlessly controlled human multi-robot system composed of semiautonomous ground and aerial robots for source localization tasks. The system combines augmented reality interface capabilities with a human supervisor's ability to control multiple robots. The role of this human multi-robot interface is to allow an operator to control groups of heterogeneous robots in real time in a collaborative manner. It uses advanced path planning algorithms to ensure that obstacles are avoided and that the operators are free for higher-level tasks. Each robot knows the environment and obstacles and can automatically generate a collision-free path to any user-selected target. Sensor information from each individual robot is displayed directly on the robot in the video view. In addition, a sensor-data-fused AR view is displayed, which helps users pinpoint source information or helps the operator with the goals of the mission. The paper presents a preliminary human factors evaluation of this system in which several interface conditions are tested for source detection tasks. Results show that the novel augmented reality multi-robot control (Point-and-Go and Path Planning) reduced mission completion times compared to traditional joystick control for target detection missions. Usability tests and operator workload analysis are also reported.
NASA Technical Reports Server (NTRS)
Post, R. B.; Welch, R. B.
1996-01-01
Visually perceived eye level (VPEL) was measured while subjects viewed two vertical lines which were either upright or pitched about the horizontal axis. In separate conditions, the display consisted of a relatively large pair of lines viewed at a distance of 1 m, or a display scaled to one third the dimensions and viewed at a distance of either 1 m or 33.3 cm. The small display viewed at 33.3 cm produced a retinal image the same size as that of the large display at 1 m. Pitching all three displays top-toward and top-away from the observer caused upward and downward VPEL shifts, respectively. These effects were highly similar for the large display and the small display viewed at 33.3 cm (i.e., equal retinal size), but were significantly smaller for the small display viewed at 1 m. In a second experiment, perceived size of the three displays was measured and found to be highly accurate. The results of the two experiments indicate that the effect of optical pitch on VPEL depends on the retinal image size of stimuli rather than on perceived size.
Kim, Joowhan; Min, Sung-Wook; Lee, Byoungho
2007-10-01
Integral floating display is a recently proposed three-dimensional (3D) display method which provides a dynamic 3D image in the vicinity of an observer. It has a viewing window, and correct 3D images can be observed only through that window. However, the positional difference between the viewing window and the floating image limits the viewing zone of an integral floating system. In this paper, we present the principle and experimental results of adjusting the location of the viewing window of the integral floating display system by modifying the elemental image region used for integral imaging. We explain the characteristics of the viewing window and propose how to move the viewing window to maximize the viewing zone.
Beck, Donghyun; Lee, Minho; Park, Woojin
2017-12-01
This study conducted a driving simulator experiment to comparatively evaluate three in-vehicle side-view display layouts for camera monitor systems (CMS) and the traditional side-view mirror arrangement. The three layouts placed the two electronic side-view displays near the traditional mirror positions, on the dashboard at each side of the steering wheel, and on the centre fascia with the two displays joined side-by-side, respectively. Twenty-two participants performed a time- and safety-critical driving task that required rapidly gaining situation awareness through the side-view displays/mirrors and making a lane change to avoid collision. The dependent variables were eye-off-the-road time, response time, and ratings of perceived workload, preference, and perceived safety. Overall, the layout placing the side-view displays on the dashboard at each side of the steering wheel was found to be the best. The results indicated that reducing eye gaze travel distance and maintaining compatibility were both important for the design of CMS display layouts. Practitioner Summary: A driving simulator study was conducted to comparatively evaluate three in-vehicle side-view display layouts for camera monitor systems (CMS) and the traditional side-view mirror arrangement in a critical lane-changing situation. Reducing eye movement and maintaining compatibility were found to be both important for the ergonomic design of CMS display layouts.
Evaluation of viewing experiences induced by curved 3D display
NASA Astrophysics Data System (ADS)
Mun, Sungchul; Park, Min-Chul; Yano, Sumio
2015-05-01
As advanced display technology has developed, much attention has been given to flexible panels. On top of that, with the momentum of the 3D era, the stereoscopic 3D technique has been combined with curved displays. However, despite the increased need for 3D functionality in curved displays, comparisons between curved and flat panel displays presenting 3D views have rarely been made. Most of the previous studies have investigated basic ergonomic aspects such as viewing posture and distance with only 2D views. It has generally been known that curved displays are more effective in enhancing involvement in specific content stories because the field of view and the distances from the viewers' eyes to both edges of the screen are more natural on curved displays than on flat panel ones. For flat panel displays, ocular torsion may occur when viewers try to move their eyes from the center to the edges of the screen to continuously capture rapidly moving 3D objects. This is due in part to differences between the viewing distance from the center of the screen to the viewers' eyes and that from the edges of the screen to the eyes. Thus, this study compared S3D viewing experiences induced by a curved display with those of a flat panel display by evaluating subjective and objective measures.
NASA Technical Reports Server (NTRS)
Merwin, David H.; Wickens, Christopher D.
1996-01-01
We examined the cockpit display representation of traffic to support the pilot in tactical planning and conflict avoidance. Such displays may support the "free flight" concept, but can also support greater situation awareness in a non-free-flight environment. Two perspective views and a coplanar display were contrasted in scenarios in which pilots needed to navigate around conflicting traffic, either in the absence (low workload) or presence (high workload) of a second intruder aircraft. All three formats were configured with predictive aiding vectors that explicitly represented the predicted point of closest pass and predicted penetration of an alert zone around ownship. Ten pilots were assigned to each of the display conditions, and each flew a series of 60 conflict maneuvers that varied in workload and in the complexity of the conflict geometry. Results indicated a tendency to choose vertical over lateral maneuvers, a tendency which was amplified with the coplanar display. Vertical maneuvers by the intruder produced an added source of workload. Importantly, the coplanar display supported performance in all measures that was equal to or greater than either of the perspective displays (i.e., fewer predicted and actual conflicts, less extreme maneuvers). Previous studies that have indicated perspective superiority contrasted perspective formats only with uniplanar displays rather than with the coplanar display used here.
Determination of depth-viewing volumes for stereo three-dimensional graphic displays
NASA Technical Reports Server (NTRS)
Parrish, Russell V.; Williams, Steven P.
1990-01-01
Real-world, 3-D, pictorial displays incorporating true depth cues via stereopsis techniques offer a potential means of displaying complex information in a natural way to prevent loss of situational awareness and provide increases in pilot/vehicle performance in advanced flight display concepts. Optimal use of stereopsis requires an understanding of the depth viewing volume available to the display designer. Suggested guidelines are presented for the depth viewing volume from an empirical determination of the effective region of stereopsis cueing (at several viewer-CRT screen distances) for a time multiplexed stereopsis display system. The results provide the display designer with information that will allow more effective placement of depth information to enable the full exploitation of stereopsis cueing. Increasing viewer-CRT screen distances provides increasing amounts of usable depth, but with decreasing fields-of-view. A stereopsis hardware system that permits an increased viewer-screen distance by incorporating larger screen sizes or collimation optics to maintain the field-of-view at required levels would provide a much larger stereo depth-viewing volume.
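As a simple geometric companion to this result (not the paper's empirical procedure), the sketch below relates on-screen parallax to perceived depth for a given viewer-screen distance; the eye separation and distances used are assumed example values.

```python
def screen_parallax(e_mm, viewer_dist_mm, point_dist_mm):
    """On-screen horizontal parallax (mm) needed to place a point at point_dist_mm
    from the viewer, for eye separation e_mm and viewer-to-screen distance viewer_dist_mm.
    Positive = uncrossed disparity (behind the screen), negative = crossed (in front)."""
    return e_mm * (point_dist_mm - viewer_dist_mm) / point_dist_mm

if __name__ == "__main__":
    e, D = 65.0, 500.0  # assumed eye separation and viewer-CRT distance, in mm
    for Z in (300.0, 500.0, 1000.0, 1e9):
        print(f"point at {Z:.0f} mm from viewer -> parallax {screen_parallax(e, D, Z):6.1f} mm")
```

Because the required parallax saturates toward the eye separation for far points but grows quickly for near points, increasing the viewer-screen distance stretches the usable depth budget, consistent with the trend reported in the abstract.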
Invariant spatial context is learned but not retrieved in gaze-contingent tunnel-view search.
Zang, Xuelian; Jia, Lina; Müller, Hermann J; Shi, Zhuanghua
2015-05-01
Our visual brain is remarkable in extracting invariant properties from the noisy environment, guiding selection of where to look and what to identify. However, how the brain achieves this is still poorly understood. Here we explore interactions of local context and global structure in the long-term learning and retrieval of invariant display properties. Participants searched for a target among distractors, without knowing that some "old" configurations were presented repeatedly (randomly inserted among "new" configurations). We simulated tunnel vision, limiting the visible region around fixation. Robust facilitation of performance for old versus new contexts was observed when the visible region was large but not when it was small. However, once the display was made fully visible during the subsequent transfer phase, facilitation did become manifest. Furthermore, when participants were given a brief preview of the total display layout prior to tunnel view search with 2 items visible, facilitation was already obtained during the learning phase. The eye movement results revealed contextual facilitation to be coupled with changes of saccadic planning, characterized by slightly extended gaze durations but a reduced number of fixations and shortened scan paths for old displays. Taken together, our findings show that invariant spatial display properties can be acquired based on scarce, para-/foveal information, while their effective retrieval for search guidance requires the availability (even if brief) of a certain extent of peripheral information. (c) 2015 APA, all rights reserved.
NASA Technical Reports Server (NTRS)
1977-01-01
A preliminary design for a helicopter/VSTOL wide angle simulator image generation display system is studied. The visual system is to become part of a simulator capability to support Army aviation systems research and development within the near term. As required for the Army to simulate a wide range of aircraft characteristics, versatility and ease of changing cockpit configurations were primary considerations of the study. Due to the Army's interest in low altitude flight and descents into and landing in constrained areas, particular emphasis is given to wide field of view, resolution, brightness, contrast, and color. The visual display study includes a preliminary design, demonstrated feasibility of advanced concepts, and a plan for subsequent detail design and development. Analysis and tradeoff considerations for various visual system elements are outlined and discussed.
Analysis of the viewing zone of the Cambridge autostereoscopic display.
Dodgson, N A
1996-04-01
The Cambridge autostereoscopic three-dimensional display is a time-multiplexed device that gives both stereo and movement parallax to the viewer without the need for any special glasses. This analysis derives the size and position of the fully illuminated, and hence useful, viewing zone for a Cambridge display. The viewing zone of such a display is shown to be completely determined by four parameters: the width of the screen, the optimal distance of the viewer from the screen, the width over which an image can be seen across the whole screen at this optimal distance, and the number of views. A display's viewing zone can thus be completely described without reference to the internal implementation of the device. An equation that describes what the eye sees from any position in front of the display is derived. The equations derived can be used in both the analysis and design of this type of time-multiplexed autostereoscopic display.
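As a back-of-the-envelope illustration of how the quoted parameters constrain the viewing zone, the sketch below divides the full-illumination width at the optimal distance among the views; the numbers are hypothetical and the equal-split assumption is an illustration, not the paper's derivation.

```python
def per_view_window_mm(window_width_mm, num_views):
    """Width of a single view's window at the optimal viewing distance, assuming the
    full-screen-visibility width quoted for that distance is shared equally by the views."""
    return window_width_mm / num_views

if __name__ == "__main__":
    # Hypothetical values, not taken from the paper:
    window_width, num_views, ipd = 300.0, 8, 65.0
    w = per_view_window_mm(window_width, num_views)
    print(f"each view occupies ~{w:.1f} mm at the optimal distance")
    # True means the two eyes can never fall inside the same view window,
    # so a stereo pair is always delivered at that distance.
    print("eyes always see different views:", ipd >= w)
```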
The Electronic View Box: a software tool for radiation therapy treatment verification.
Bosch, W R; Low, D A; Gerber, R L; Michalski, J M; Graham, M V; Perez, C A; Harms, W B; Purdy, J A
1995-01-01
We have developed a software tool for interactively verifying treatment plan implementation. The Electronic View Box (EVB) tool copies the paradigm of current practice but does so electronically. A portal image (online portal image or digitized port film) is displayed side by side with a prescription image (digitized simulator film or digitally reconstructed radiograph). The user can measure distances between features in prescription and portal images and "write" on the display, either to approve the image or to indicate required corrective actions. The EVB tool also provides several features not available in conventional verification practice using a light box. The EVB tool has been written in ANSI C using the X window system. The tool makes use of the Virtual Machine Platform and Foundation Library specifications of the NCI-sponsored Radiation Therapy Planning Tools Collaborative Working Group for portability into an arbitrary treatment planning system that conforms to these specifications. The present EVB tool is based on an earlier Verification Image Review tool, but with a substantial redesign of the user interface. A graphical user interface prototyping system was used in iteratively refining the tool layout to allow rapid modifications of the interface in response to user comments. Features of the EVB tool include 1) hierarchical selection of digital portal images based on physician name, patient name, and field identifier; 2) side-by-side presentation of prescription and portal images at equal magnification and orientation, and with independent grayscale controls; 3) "trace" facility for outlining anatomical structures; 4) "ruler" facility for measuring distances; 5) zoomed display of corresponding regions in both images; 6) image contrast enhancement; and 7) communication of portal image evaluation results (approval, block modification, repeat image acquisition, etc.). The EVB tool facilitates the rapid comparison of prescription and portal images and permits electronic communication of corrections in port shape and positioning.
Kim, Hwi; Hahn, Joonku; Choi, Hee-Jin
2011-04-10
We investigate the viewing angle enhancement of a lenticular three-dimensional (3D) display with a triplet lens array. The theoretical limitations on the viewing angle and view number of the lenticular 3D display with the triplet lens array are analyzed numerically. For this, a genetic-algorithm-based design method for the triplet lens is developed. We show that a lenticular 3D display with a viewing angle of 120° and 144 views without inter-view crosstalk can be realized with the use of an optimally designed triplet lens array. © 2011 Optical Society of America
Optimizing height presentation for aircraft cockpit displays
NASA Astrophysics Data System (ADS)
Jordan, Chris S.; Croft, D.; Selcon, Stephen J.; Markin, H.; Jackson, M.
1997-02-01
This paper describes an experiment conducted to investigate the type of display symbology that most effectively conveys height information to users of head-down plan-view radar displays. The experiment also investigated the use of multiple information sources (redundancy) in the design of such displays. Subjects were presented with eight different height display formats, constructed from a control format plus one, two, or three sources of redundant information. The three redundant information sources were letter coding, analogue scaling, and toggling (spatially switching the position of the height information from above to below the aircraft symbol). Subjects were required to indicate altitude awareness via a four-key, forced-choice keyboard response. Error scores and response times were taken as performance measures. There were three main findings. First, there was a significant performance advantage when the altitude information was presented above and below the symbol to aid the representation of height information. Second, the analogue scale, a line whose length indicated altitude, proved significantly detrimental to performance. Finally, no relationship was found between the number of redundant information sources employed and performance. The implications for future aircraft and displays are discussed in relation to current aircraft tactical displays and in the context of perceptual psychological theory.
Recent Developments in the VISRAD 3-D Target Design and Radiation Simulation Code
NASA Astrophysics Data System (ADS)
Macfarlane, Joseph; Golovkin, Igor; Sebald, James
2017-10-01
The 3-D view factor code VISRAD is widely used in designing HEDP experiments at major laser and pulsed-power facilities, including NIF, OMEGA, OMEGA-EP, ORION, Z, and LMJ. It simulates target designs by generating a 3-D grid of surface elements, utilizing a variety of 3-D primitives and surface removal algorithms, and can be used to compute the radiation flux throughout the surface element grid by computing element-to-element view factors and solving power balance equations. Target set-up and beam pointing are facilitated by allowing users to specify positions and angular orientations using a variety of coordinate systems (e.g., that of any laser beam, target component, or diagnostic port). Analytic modeling of laser beam spatial profiles for OMEGA DPPs and NIF CPPs is used to compute laser intensity profiles throughout the grid of surface elements. VISRAD includes a variety of user-friendly graphics for setting up targets and displaying results, can readily display views from any point in space, and can be used to generate image sequences for animations. We will discuss recent improvements for conveniently assessing beam capture on target and beam clearance of diagnostic components, as well as plans for future developments.
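For orientation, here is a hedged sketch of the kind of element-to-element view-factor evaluation such codes perform, using the standard small-element approximation and ignoring occlusion; it is not VISRAD's implementation, and the patch geometry is a made-up example.

```python
import numpy as np

def element_view_factor(c1, n1, c2, n2, area2):
    """Approximate view factor from a small element at c1 (unit normal n1) to a small
    element at c2 (unit normal n2, area area2), using
        F_12 ~= cos(theta1) * cos(theta2) * A2 / (pi * r**2).
    Valid only when both elements are small compared with their separation r;
    surface-removal / visibility tests are omitted here."""
    d = np.asarray(c2, dtype=float) - np.asarray(c1, dtype=float)
    r = np.linalg.norm(d)
    d_hat = d / r
    cos1 = np.dot(np.asarray(n1, dtype=float), d_hat)
    cos2 = np.dot(np.asarray(n2, dtype=float), -d_hat)
    if cos1 <= 0.0 or cos2 <= 0.0:
        return 0.0  # the elements face away from each other
    return cos1 * cos2 * area2 / (np.pi * r * r)

if __name__ == "__main__":
    # Two 1 mm^2 patches facing each other, 10 mm apart (hypothetical numbers)
    print(element_view_factor([0, 0, 0], [0, 0, 1], [0, 0, 10.0], [0, 0, -1], 1.0))
```

In a full radiosity-style calculation, these pairwise factors populate the coupling matrix of the power balance equations solved over the whole surface-element grid.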
Toward a 3D video format for auto-stereoscopic displays
NASA Astrophysics Data System (ADS)
Vetro, Anthony; Yea, Sehoon; Smolic, Aljoscha
2008-08-01
There has been increased momentum recently in the production of 3D content for cinema applications; for the most part, this has been limited to stereo content. There are also a variety of display technologies on the market that support 3DTV, each offering a different viewing experience and having different input requirements. More specifically, stereoscopic displays support stereo content and require glasses, while auto-stereoscopic displays avoid the need for glasses by rendering view-dependent stereo pairs for a multitude of viewing angles. To realize high quality auto-stereoscopic displays, multiple views of the video must either be provided as input to the display, or these views must be created locally at the display. The former approach has difficulties in that the production environment is typically limited to stereo, and transmission bandwidth for a large number of views is not likely to be available. This paper discusses an emerging 3D data format that enables the latter approach to be realized. A new framework for efficiently representing a 3D scene and enabling the reconstruction of an arbitrarily large number of views prior to rendering is introduced. Several design challenges are also highlighted through experimental results.
[Display technologies for augmented reality in medical applications].
Eck, Ulrich; Winkler, Alexander
2018-04-01
One of the main challenges for modern surgery is the effective use of the many available imaging modalities and diagnostic methods. Augmented reality systems can be used in the future to blend patient and planning information into the view of surgeons, which can improve the efficiency and safety of interventions. In this article we present five visualization methods for integrating augmented reality displays into medical procedures and explain their advantages and disadvantages. Based on an extensive literature review, the various existing approaches for integrating augmented reality displays into medical procedures are divided into five categories and the most important research results for each approach are presented. A large number of mixed and augmented reality solutions for medical interventions have been developed as research prototypes; however, only very few systems have been tested on patients. In order to integrate mixed and augmented reality displays into medical practice, highly specialized solutions need to be developed. Such systems must comply with the requirements with respect to accuracy, fidelity, ergonomics and seamless integration into the surgical workflow.
Wang, Annabel Z; Scherr, Karen A; Wong, Charlene A; Ubel, Peter A
2017-01-01
Many health policy experts have endorsed insurance competition as a way to reduce the cost and improve the quality of medical care. In line with this approach, health insurance exchanges, such as HealthCare.gov, allow consumers to compare insurance plans online. Since the 2013 rollout of HealthCare.gov, administrators have added features intended to help consumers better understand and compare insurance plans. Although well-intentioned, changes to exchange websites affect the context in which consumers view plans, or choice architecture, which may impede their ability to choose plans that best fit their needs at the lowest cost. By simulating the 2016 HealthCare.gov enrollment experience in an online sample of 374 American adults, we examined comprehension and choice of HealthCare.gov plans under its choice architecture. We found room for improvement in plan comprehension, with higher rates of misunderstanding among participants with poor math skills (P < 0.05). We observed substantial variations in plan choice when identical plan sets were displayed in different orders (P < 0.001). However, regardless of order in which they viewed the plans, participants cited the same factors as most important to their choices (P > 0.9). Participants were drawn from a general population sample. The study does not assess for all possible plan choice influencers, such as provider networks, brand recognition, or help from others. Our findings suggest two areas of improvement for exchanges: first, the remaining gap in consumer plan comprehension and second, the apparent influence of sorting order - and likely other choice architecture elements - on plan choice. Our findings inform strategies for exchange administrators to help consumers better understand and select plans that better fit their needs.
Semiautomated Management Of Arriving Air Traffic
NASA Technical Reports Server (NTRS)
Erzberger, Heinz; Nedell, William
1992-01-01
System of computers, graphical workstations, and computer programs developed for semiautomated management of approach and arrival of numerous aircraft at airport. System comprises three subsystems: traffic-management advisor, used for controlling traffic into terminal area; descent advisor generates information integrated into plan-view display of traffic on monitor; and final-approach-spacing tool used to merge traffic converging on final approach path while making sure aircraft are properly spaced. Not intended to restrict decisions of air-traffic controllers.
Xia, Xinxing; Zheng, Zhenrong; Liu, Xu; Li, Haifeng; Yan, Caijie
2010-09-10
We utilized a high-frame-rate projector, a rotating mirror, and a cylindrical selective-diffusing screen to present a novel three-dimensional (3D) omnidirectional-view display system without the need for any special viewing aids. The display principle and image size are analyzed, and the common display zone is proposed. The viewing zone for one observation place is also studied. The experimental results verify this method, and a vivid color 3D scene with occlusion and smooth parallax is also demonstrated with the system.
Secure information display with limited viewing zone by use of multi-color visual cryptography.
Yamamoto, Hirotsugu; Hayasaki, Yoshio; Nishida, Nobuo
2004-04-05
We propose a display technique that ensures security of visual information by use of visual cryptography. A displayed image appears as a completely random pattern unless viewed through a decoding mask. The display has a limited viewing zone with the decoding mask. We have developed a multi-color encryption code set. Eight colors are represented in combinations of a displayed image composed of red, green, blue, and black subpixels and a decoding mask composed of transparent and opaque subpixels. Furthermore, we have demonstrated secure information display by use of an LCD panel.
Simple measurement of lenticular lens quality for autostereoscopic displays
NASA Astrophysics Data System (ADS)
Gray, Stuart; Boudreau, Robert A.
2013-03-01
Lenticular lens based autostereoscopic 3D displays are finding many applications in digital signage and consumer electronics devices. A high quality 3D viewing experience requires the lenticular lens be properly aligned with the pixels on the display device so that each eye views the correct image. This work presents a simple and novel method for rapidly assessing the quality of a lenticular lens to be used in autostereoscopic displays. Errors in lenticular alignment across the entire display are easily observed with a simple test pattern where adjacent views are programmed to display different colors.
Portrait view of ESA Spacelab Specialists
NASA Technical Reports Server (NTRS)
1978-01-01
Portrait view of European Space Agency (ESA) Spacelab Specialist Byron K. Lichtenberg in civilian clothes standing in front of a display case. The photo was taken at the Marshall Space Flight Center (MSFC), Huntsville, Alabama (31779); portrait view of ESA Spacelab Specialist Michael L. Lampton, also in civilian clothes in front of display at MSFC (31780); portrait view of Wubbo Ockels, also in civilian clothes in front of display at MSFC (31781).
Choudhri, Asim F; Radvany, Martin G
2011-04-01
Medical imaging is commonly used to diagnose many emergent conditions, as well as plan treatment. Digital images can be reviewed on almost any computing platform. Modern mobile phones and handheld devices are portable computing platforms with robust software programming interfaces, powerful processors, and high-resolution displays. OsiriX mobile, a new Digital Imaging and Communications in Medicine viewing program, is available for the iPhone/iPod touch platform. This raises the possibility of mobile review of diagnostic medical images to expedite diagnosis and treatment planning using a commercial off the shelf solution, facilitating communication among radiologists and referring clinicians.
Optimum viewing distance for target acquisition
NASA Astrophysics Data System (ADS)
Holst, Gerald C.
2015-05-01
Human visual system (HVS) "resolution" (a.k.a. visual acuity) varies with illumination level, target characteristics, and target contrast. For signage, computer displays, cell phones, and TVs, a viewing distance and display size are selected. Then the number of display pixels is chosen such that each pixel subtends 1 arcmin. Resolution of low contrast targets is quite different. It is best described by Barten's contrast sensitivity function. Target acquisition models predict maximum range when the display pixel subtends 3.3 arcmin. The optimum viewing distance is nearly independent of magnification. Noise increases the optimum viewing distance.
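The pixel-subtense guideline translates directly into a viewing distance, as in the following sketch; the pixel pitch used is an assumed example value, not a number from the paper.

```python
import math

ARCMIN = math.pi / (180.0 * 60.0)  # radians per minute of arc

def viewing_distance(pixel_pitch_mm, subtense_arcmin):
    """Distance (mm) at which one pixel of the given pitch subtends the given angle."""
    return pixel_pitch_mm / math.tan(subtense_arcmin * ARCMIN)

if __name__ == "__main__":
    pitch = 0.25  # assumed pixel pitch in mm (illustrative only)
    for a in (1.0, 3.3):  # 1 arcmin: display-design rule; 3.3 arcmin: target acquisition
        print(f"{a} arcmin per pixel -> view from ~{viewing_distance(pitch, a):.0f} mm")
```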
NASA Technical Reports Server (NTRS)
2008-01-01
NASA's advanced visual simulations are essential for analyses associated with life cycle planning, design, training, testing, operations, and evaluation. Kennedy Space Center, in particular, uses simulations for ground services and space exploration planning in an effort to reduce risk and costs while improving safety and performance. However, it has been difficult to circulate and share the results of simulation tools among the field centers, and distance and travel expenses have made timely collaboration even harder. In response, NASA joined with Valador Inc. to develop the Distributed Observer Network (DON), a collaborative environment that leverages game technology to bring 3-D simulations to conventional desktop and laptop computers. DON enables teams of engineers working on design and operations to view and collaborate on 3-D representations of data generated by authoritative tools. DON takes models and telemetry from these sources and, using commercial game engine technology, displays the simulation results in a 3-D visual environment. Multiple widely dispersed users, working individually or in groups, can view and analyze simulation results on desktop and laptop computers in real time.
Developing a Prototype ALHAT Human System Interface for Landing
NASA Technical Reports Server (NTRS)
Hirsh, Robert L.; Chua, Zarrin K.; Heino, Todd A.; Strahan, Al; Major, Laura; Duda, Kevin
2011-01-01
The goal of the Autonomous Landing and Hazard Avoidance Technology (ALHAT) project is to safely execute a precision landing anytime/anywhere on the moon. This means the system must operate in any lighting conditions, operate in the presence of any thruster-generated regolith clouds, and operate without the help of pre-deployed navigational aids or a prepared landing site. In order to reach this ambitious goal, computer-aided technologies such as ALHAT will be needed to permit these landings to be done safely. Although there will be advanced autonomous capabilities onboard future landers, humans will still be involved (either onboard as astronauts or remotely from mission control) in any mission to the moon or other planetary body. Because many time-critical decisions must be made quickly and effectively during the landing sequence, the descent and landing displays need to be designed to be as effective as possible at presenting the pertinent information to the operator, and to allow the operator's decisions to be implemented as quickly as possible. The ALHAT project has established the Human System Interface (HSI) team to lead the development of these displays and to study the best way to provide operators enhanced situational awareness during landing activities. These displays are prototypes that were developed based on multiple design and feedback sessions with the astronaut office at NASA/Johnson Space Center. By working with the astronauts in a series of plan/build/evaluate cycles, the HSI team has obtained astronaut feedback from the very beginning of the design process. In addition to developing prototype displays, the HSI team has also worked to provide realistic lunar terrain (and shading) to simulate an "out the window" view that can be adjusted to various lighting conditions (based on a desired date/time), allowing the same terrain to be viewed under varying lighting. This capability will be critical to determining the effect of terrain and lighting on the human pilot, and how pilots use windows and displays during landing activities. The Apollo missions were limited to about 28 possible launch days a year due to lighting and orbital constraints. In order to take advantage of more landing opportunities and venture to more challenging landing locations, future landers will need to utilize sensors besides human eyes for scanning the surface. The ALHAT HSI system must effectively convey ALHAT-produced information to the operator, so that landings can occur during less "optimal" conditions (lighting, surface terrain, slopes, etc.) than was possible during the Apollo missions. By proving this capability, ALHAT will simultaneously provide more flexible access to the moon and greater safety margins for future landers. This paper will specifically focus on the development of prototype displays (the Trajectory Profile Display (TPD), Landing Point Designation (LPD), and Crew Camera View (CCV)), implementation of realistic planetary terrain, human modeling, and future HSI plans.
T-38 Primary Flight Display Prototyping and HIVE Support Abstract & Summary
NASA Technical Reports Server (NTRS)
Boniface, Andrew
2015-01-01
This fall I worked in EV3 within NASA's Johnson Space Center in The HIVE (Human Integrated Vehicles & Environments). The HIVE is responsible for human-in-the-loop testing, getting new technologies in front of astronauts, operators, and users early in the development cycle to make the interfaces more human friendly. Some projects the HIVE is working on include user interfaces for future spacecraft, wearables to alert astronauts about important information, and test beds to simulate mock missions. During my internship I created a prototype for T-38 aircraft displays using LabVIEW, learned how to use microcontrollers, and helped out with other small tasks in the HIVE. The purpose of developing a prototype for T-38 displays in LabVIEW is to analyze functions of the display, such as navigation, in a cost- and time-effective manner. The LabVIEW prototypes allow Ellington Field AOD to easily make adjustments to the display before hardcoding the final product. LabVIEW was used to create a user interface for simulation almost identical to the real aircraft display. Goals for the T-38 PFD (Primary Flight Display) prototype included creating a T-38 PFD hardware display in a software environment, designing navigation for the menus, incorporating vertical and horizontal navigation bars, and adding a heading bug for compass controls connected to the HSI (Horizontal Situation Indicator). To get started with the project, measurements of the entire display were taken, which enabled an accurate model of the hardware display to be created. Navigation of the menus required some exploration of different buttons on the display; the T-38 simulator and aircraft were used for examining the display. After one piece of the prototype was finished, another trip to the simulator took place, and this continued until all goals for the prototype were complete. Some possible integration ideas for displays in the near future are autopilot selection, touch screen displays, and crew member preferences. Complete navigation, control, and function customization will be achievable once a display is fully developed. Other than the T-38 prototyping, I spent time learning how to design small circuits and write code for them to function. This was done by adding electronic circuit components to a breadboard and microcontroller, then writing code to communicate with those components through the microcontroller. I went through an Arduino starter kit to build circuits and write software that controlled the hardware. This work was planned to assist in a lighting project this fall, but another solution was ultimately found for that project. Other tasks that I assisted with included hands-on work such as mock-up construction/removal, logic analyzer repairs, and soldering circuits. The unique opportunity to work with NASA has significantly changed my educational and career goals. This opportunity has only opened the door to my career in engineering. I have learned over the span of this internship that I am fascinated by the type of work that NASA does. My desire to work in the aerospace industry has increased immensely. I hope to return to NASA to be more involved in the advancement of science, engineering, and spaceflight. My interests for my future education and career lie in NASA's work: pioneering the future in space exploration, scientific discovery, and aeronautics research.
Development of scanning holographic display using MEMS SLM
NASA Astrophysics Data System (ADS)
Takaki, Yasuhiro
2016-10-01
Holography is an ideal three-dimensional (3D) display technique, because it produces 3D images that naturally satisfy human 3D perception, including physiological and psychological factors. However, its electronic implementation is quite challenging because ultra-high resolution is required of display devices to provide sufficient screen size and viewing zone. We have developed holographic display techniques that enlarge the screen size and the viewing zone by use of microelectromechanical systems spatial light modulators (MEMS-SLMs). Because MEMS-SLMs can generate hologram patterns at a high frame rate, the time-multiplexing technique is utilized to virtually increase the resolution. Three kinds of scanning systems have been combined with MEMS-SLMs: the screen scanning system, the viewing-zone scanning system, and the 360-degree scanning system. The screen scanning system reduces the hologram size to enlarge the viewing zone, and the reduced hologram patterns are scanned on the screen to increase the screen size: a color display system with a screen size of 6.2 in. and a viewing zone angle of 11° was demonstrated. The viewing-zone scanning system increases the screen size, and the reduced viewing zone is scanned to enlarge the viewing zone: a screen size of 2.0 in. and a viewing zone angle of 40° were achieved. The two-channel system increased the screen size to 7.4 in. The 360-degree scanning system increases the screen size, and the reduced viewing zone is scanned circularly: a display system having a flat screen with a diameter of 100 mm was demonstrated, which generates 3D images viewed from any direction around the flat screen.
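The screen-size versus viewing-zone trade-off discussed here follows from the grating equation; the sketch below estimates the full viewing-zone angle from an SLM pixel pitch, with the wavelength and pitch values chosen as assumptions rather than the paper's device parameters.

```python
import math

def viewing_zone_angle_deg(wavelength_nm, pixel_pitch_um):
    """Full viewing-zone angle (degrees) of a hologram shown on an SLM with the given
    pixel pitch, from the grating relation sin(theta_max) = lambda / (2 * pitch)."""
    s = (wavelength_nm * 1e-9) / (2.0 * pixel_pitch_um * 1e-6)
    return 2.0 * math.degrees(math.asin(min(s, 1.0)))

if __name__ == "__main__":
    # Hypothetical pitches; a smaller pitch widens the zone, which is why time-multiplexed
    # scanning is used to trade screen size against viewing-zone angle.
    for pitch in (8.0, 4.0, 1.0):
        print(f"pitch {pitch:>4.1f} um -> zone ~{viewing_zone_angle_deg(532, pitch):.1f} deg")
```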
3D Graphics For Interactive Surgical Simulation And Implant Design
NASA Astrophysics Data System (ADS)
Dev, P.; Fellingham, L. L.; Vassiliadis, A.; Woolson, S. T.; White, D. N.; Young, S. L.
1984-10-01
The combination of user-friendly, highly interactive software, 3D graphics, and the high-resolution detailed views of anatomy afforded by X-ray computer tomography and magnetic resonance imaging can provide surgeons with the ability to plan and practice complex surgeries. In addition to providing a realistic and manipulable 3D graphics display, this system can drive a milling machine in order to produce physical models of the anatomy or prosthetic devices and implants which have been designed using its interactive graphics editing facilities.
The Challenge of New and Emerging Information Operations
1999-06-01
Information Dominance Center (IDC) are addressing the operational and technological needs. The IDC serves as a model for the DoD and a proposed virtual hearing room for Congress. As the IDC and its supporting technologies mature, individuals will be able to freely enter, navigate, plan, and execute operations within Perceptual and Knowledge Landscapes. This capability begins the transition from Information Dominance to Knowledge Dominance. The IDC is instantiating such entities as smart rooms, avatars, square pixel displays, polymorphic views, and
Using virtual reality and game technology to assist command and control
NASA Astrophysics Data System (ADS)
Riead, Lorien H.; Straub, James; Mangino, Joseph
2017-04-01
Recent improvements in virtual reality hardware have brought this technology to the point where easily-obtained commercial equipment can conceivably provide an affordable and relatively unexplored alternative to the traditional monitor and keyboard view of the tactical space. In addition, commercially available game engines provide several advantages for tactical applications. Using these technologies, we have created a concept of a low-cost display that allows for real-time immersive planning and strategy, with suggestions for further exploration.
Assessment of a User Guide for One Semi-Automated Forces (OneSAF) Version 2.0
2009-09-01
OneSAF uses a two-dimensional feature named a Plan View Display (PVD) as the primary graphical interface. The PVD replicates a map with a series...primary interface, the PVD is how the user watches the scenario unfold and requires the most interaction with the user. As seen in Table 3, all...participant indicated never using these seven map-related functions. Graphic control measures. Graphic control measures are applied to the PVD map to
A Plan for Collecting Ada Software Development Cost, Schedule, and Environment Data.
1987-04-02
separate comment section for events that affected individual CSCIs. ... SYSTEM LEVEL OR CSCI LEVEL DOCUMENTATION FORM INSTRUCTIONS This... COMPUTER SOFTWARE CONFIGURATION ITEM SUMMARY DATA FORM INSTRUCTIONS 4.1.4. Development Methods Experience Enter the average number of...users see what is happening and view results). It is often a video display terminal of some sort, although it could be a hard copy printer and keyboard or
Displays enabling mobile multimedia
NASA Astrophysics Data System (ADS)
Kimmel, Jyrki
2007-02-01
With the rapid advances in telecommunications networks, mobile multimedia delivery to handsets is now a reality. While a truly immersive multimedia experience is still far ahead in the mobile world, significant advances have been made in the constituent audio-visual technologies to make this become possible. One of the critical components in multimedia delivery is the mobile handset display. While such alternatives as headset-style near-to-eye displays, autostereoscopic displays, mini-projectors, and roll-out flexible displays can deliver either a larger virtual screen size than the pocketable dimensions of the mobile device can offer, or an added degree of immersion by adding the illusion of the third dimension in the viewing experience, there are still challenges in the full deployment of such displays in real-life mobile communication terminals. Meanwhile, direct-view display technologies have developed steadily, and can provide a development platform for an even better viewing experience for multimedia in the near future. The paper presents an overview of the mobile display technology space with an emphasis on the advances and potential in developing direct-view displays further to meet the goal of enabling multimedia in the mobile domain.
Yoon, Ki-Hyuk; Ju, Heongkyu; Kwon, Hyunkyung; Park, Inkyu; Kim, Sung-Kyu
2016-02-22
We present optical characteristics of the view images provided by a high-density multi-view autostereoscopic 3D display (HD-MVA3D) with a parallax barrier (PB). Diffraction effects, which become of great importance in a display system that uses a PB, are considered in a one-dimensional model of the 3D display, in which light propagation from display panel pixels through the PB slits to the viewing zone is numerically simulated. The simulation results are then compared to the corresponding experimental measurements and discussed. We demonstrate that, as a main parameter for view image quality evaluation, the Fresnel number can be used to determine the PB slit aperture for the best performance of the display system. It is revealed that a set of display parameters giving a Fresnel number of ∼ 0.7 offers maximized brightness of the view images, while a set corresponding to a Fresnel number of 0.4 ∼ 0.5 offers minimized image crosstalk. The compromise between brightness and crosstalk enables optimization of their relative magnitudes and leads to the choice of a display parameter set for the HD-MVA3D with a PB that satisfies the condition where the Fresnel number lies between 0.4 and 0.7.
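For orientation, a minimal sketch of a Fresnel-number calculation in the textbook form N_F = a^2 / (lambda * L) is given below; the aperture convention, wavelength, and panel-to-barrier gap are assumptions and may not match the definitions used in the paper.

```python
def fresnel_number(half_width_m, wavelength_m, distance_m):
    """Fresnel number N_F = a**2 / (lambda * L) for a slit of half-width a and
    propagation distance L. The aperture and distance conventions in the paper
    may differ from this textbook form."""
    return half_width_m ** 2 / (wavelength_m * distance_m)

if __name__ == "__main__":
    lam = 550e-9   # assumed green wavelength, metres
    gap = 2.5e-3   # assumed panel-to-barrier gap, metres (hypothetical)
    for half_width_um in (20, 25, 30, 35):
        nf = fresnel_number(half_width_um * 1e-6, lam, gap)
        # The abstract reports best brightness near N_F ~ 0.7 and least crosstalk near
        # N_F ~ 0.4-0.5; this loop only shows how slit width moves N_F for fixed geometry.
        print(f"slit half-width {half_width_um} um -> N_F = {nf:.2f}")
```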
A new AS-display as part of the MIRO lightweight robot for surgical applications
NASA Astrophysics Data System (ADS)
Grossmann, Christoph M.
2010-02-01
The DLR MIRO is the second generation of versatile robot arms for surgical applications, developed at the Institute for Robotics and Mechatronics at the Deutsches Zentrum für Luft- und Raumfahrt (DLR) in Oberpfaffenhofen, Germany. With its low weight of 10 kg and dimensions similar to those of the human arm, the MIRO robot can assist the surgeon directly at the operating table, where space is scarce. The planned scope of applications of this robot arm ranges from guiding a laser unit for the precise separation of bone tissue in orthopedics, to positioning holes for bone screws, to robot-assisted endoscope guidance, and on to the multi-robot concept for endoscopic minimally invasive surgery. A stereo endoscope delivers two full HD video streams that can even be augmented with information, e.g. vectors indicating the forces that act on the surgical tool at any given moment. SeeFront's new autostereoscopic 3D display SF 2223, as part of the MIRO assembly, will let the surgeon view the stereo video stream in excellent quality, in real time and without the need for any viewing aids. The presentation is meant to provide an insight into the principles underlying the SeeFront 3D technology and how they allow the creation of autostereoscopic display solutions ranging from the smallest "stamp-sized" displays to 30" desktop versions, all of which provide comfortable freedom of movement for the viewer along with excellent 3D image quality.
NASA Astrophysics Data System (ADS)
Castro, José J.; Pozo, Antonio M.; Rubiño, Manuel
2013-11-01
In this work we studied the dependence of color on horizontal viewing angle and the colorimetric characterization of two liquid-crystal displays (LCDs) using two different backlights: cold cathode fluorescent lamps (CCFLs) and light-emitting diodes (LEDs). The LCDs studied had identical resolution, size, and technology (TFT, thin film transistor). The colorimetric measurements were made with the SpectraScan PR-650 spectroradiometer following the procedure recommended in the European guideline EN 61747-6. For each display, we measured at the centre of the screen the chromaticity coordinates at horizontal viewing angles of 0, 20, 40, 60 and 80 degrees for the achromatic (A), red (R), green (G) and blue (B) channels. Results showed a greater color-gamut area for the display with the LED backlight than for the CCFL backlight, indicating a greater range of colors perceptible by human vision. This color-gamut area diminished with viewing angle for both displays. Larger differences between the trends across viewing angles were observed for the LED backlight, especially for the R and G channels, demonstrating a higher variability of the chromaticity coordinates with viewing angle. The best additivity was reached by the LED-backlit display (a lower error percentage). Overall, the LED-backlit display provided better color performance.
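The gamut-area comparison reported here can be computed from the measured primary chromaticities with the shoelace formula, as in the sketch below; the coordinates used are generic sRGB-like values for illustration, not the measured CCFL or LED primaries.

```python
def gamut_area(xy_red, xy_green, xy_blue):
    """Area of the RGB triangle in the CIE 1931 xy chromaticity diagram (shoelace
    formula); a larger area means a wider range of reproducible colors."""
    (xr, yr), (xg, yg), (xb, yb) = xy_red, xy_green, xy_blue
    return abs(xr * (yg - yb) + xg * (yb - yr) + xb * (yr - yg)) / 2.0

if __name__ == "__main__":
    # Illustrative primaries only (close to sRGB), not the measured display values:
    area = gamut_area((0.64, 0.33), (0.30, 0.60), (0.15, 0.06))
    print(f"gamut area = {area:.4f}")
```

Evaluating this area from the primaries measured at each viewing angle gives the angle-dependent gamut shrinkage described in the abstract.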
Military display market segment: avionics (Invited Paper)
NASA Astrophysics Data System (ADS)
Desjardins, Daniel D.; Hopper, Darrel G.
2005-05-01
The military display market is analyzed in terms of one of its segments: avionics. Requirements are summarized for 13 technology-driving parameters for direct-view and virtual-view displays in cockpits and cabins. Technical specifications are discussed for selected programs. Avionics stresses available technology and usually requires custom display designs.
2014-01-01
Background To validate the association between accommodation and visual asthenopia by measuring objective accommodative amplitude with the Optical Quality Analysis System (OQAS®, Visiometrics, Terrassa, Spain), and to investigate associations among accommodation, ocular surface instability, and visual asthenopia while viewing 3D displays. Methods Fifteen normal adults without any ocular disease or surgical history watched the same 3D and 2D displays for 30 minutes. Accommodative ability, ocular protection index (OPI), and total ocular symptom scores were evaluated before and after viewing the 3D and 2D displays. Accommodative ability was evaluated by the near point of accommodation (NPA) and OQAS to ensure reliability. The OPI was calculated by dividing the tear breakup time (TBUT) by the interblink interval (IBI). The changes in accommodative ability, OPI, and total ocular symptom scores after viewing 3D and 2D displays were evaluated. Results Accommodative ability evaluated by NPA and OQAS, OPI, and total ocular symptom scores changed significantly after 3D viewing (p = 0.005, 0.003, 0.006, and 0.003, respectively), but yielded no difference after 2D viewing. The objective measurement by OQAS verified the decrease of accommodative ability while viewing 3D displays. The change of NPA, OPI, and total ocular symptom scores after 3D viewing had a significant correlation (p < 0.05), implying direct associations among these factors. Conclusions The decrease of accommodative ability after 3D viewing was validated by both subjective and objective methods in our study. Further, the deterioration of accommodative ability and ocular surface stability may be causative factors of visual asthenopia in individuals viewing 3D displays. PMID:24612686
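The ocular protection index defined in this abstract is a simple ratio of the two measured times; a minimal sketch with hypothetical TBUT and IBI readings follows.

```python
def ocular_protection_index(tbut_s, interblink_interval_s):
    """Ocular protection index: tear breakup time (TBUT) divided by the interblink
    interval (IBI), as defined in the abstract. OPI < 1 means the tear film breaks
    up before the next blink, leaving the ocular surface unprotected."""
    return tbut_s / interblink_interval_s

if __name__ == "__main__":
    # Hypothetical readings (seconds) before and after 3D viewing, for illustration only:
    print(f"before: OPI = {ocular_protection_index(8.0, 5.0):.2f}")   # > 1, protected
    print(f"after:  OPI = {ocular_protection_index(4.0, 6.5):.2f}")   # < 1, tear film fails between blinks
```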
Flight Simulator Evaluation of Display Media Devices for Synthetic Vision Concepts
NASA Technical Reports Server (NTRS)
Arthur, J. J., III; Williams, Steven P.; Prinzel, Lawrence J., III; Kramer, Lynda J.; Bailey, Randall E.
2004-01-01
The Synthetic Vision Systems (SVS) Project of the National Aeronautics and Space Administration's (NASA) Aviation Safety Program (AvSP) is striving to eliminate poor visibility as a causal factor in aircraft accidents as well as enhance operational capabilities of all aircraft. To accomplish these safety and capacity improvements, the SVS concept is designed to provide a clear view of the world around the aircraft through the display of computer-generated imagery derived from an onboard database of terrain, obstacle, and airport information. Display media devices with which to implement SVS technology that have been evaluated so far within the Project include fixed field of view head up displays and head down Primary Flight Displays with pilot-selectable field of view. A simulation experiment was conducted comparing these display devices to a fixed field of view, unlimited field of regard, full color Helmet-Mounted Display system. Subject pilots flew a visual circling maneuver in IMC at a terrain-challenged airport. The data collected for this experiment is compared to past SVS research studies.
Distributed rendering for multiview parallax displays
NASA Astrophysics Data System (ADS)
Annen, T.; Matusik, W.; Pfister, H.; Seidel, H.-P.; Zwicker, M.
2006-02-01
3D display technology holds great promise for the future of television, virtual reality, entertainment, and visualization. Multiview parallax displays deliver stereoscopic views without glasses to arbitrary positions within the viewing zone. These systems must include a high-performance and scalable 3D rendering subsystem in order to generate multiple views at real-time frame rates. This paper describes a distributed rendering system for large-scale multiview parallax displays built with a network of PCs, commodity graphics accelerators, multiple projectors, and multiview screens. The main challenge is to render various perspective views of the scene and assign rendering tasks effectively. In this paper we investigate two different approaches: Optical multiplexing for lenticular screens and software multiplexing for parallax-barrier displays. We describe the construction of large-scale multi-projector 3D display systems using lenticular and parallax-barrier technology. We have developed different distributed rendering algorithms using the Chromium stream-processing framework and evaluate the trade-offs and performance bottlenecks. Our results show that Chromium is well suited for interactive rendering on multiview parallax displays.
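One way to sketch the per-view camera generation and rendering-task assignment described here is shown below, under assumed view and node counts; a real system would also construct an off-axis (asymmetric-frustum) projection per view, which is omitted in this sketch.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ViewCamera:
    view_id: int
    eye_x_mm: float        # horizontal eye offset within the viewing zone
    assigned_node: int     # render node (PC) responsible for this view

def make_view_cameras(num_views: int, zone_width_mm: float, num_nodes: int) -> List[ViewCamera]:
    """Spread num_views virtual cameras evenly across the viewing zone and assign each
    view to a render node round-robin, mimicking the task-distribution step of a
    multi-projector multiview pipeline."""
    cams = []
    for i in range(num_views):
        frac = (i + 0.5) / num_views - 0.5          # -0.5 .. +0.5 across the zone
        cams.append(ViewCamera(i, frac * zone_width_mm, i % num_nodes))
    return cams

if __name__ == "__main__":
    # Hypothetical configuration: 8 views over a 600 mm zone, 4 render PCs
    for cam in make_view_cameras(num_views=8, zone_width_mm=600.0, num_nodes=4):
        print(cam)
```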
Polymer Dispersed Liquid Crystal Displays
NASA Astrophysics Data System (ADS)
Doane, J. William
The following sections are included: * INTRODUCTION AND HISTORICAL DEVELOPMENT * PDLC MATERIALS PREPARATION * Polymerization induced phase separation (PIPS) * Thermally induced phase separation (TIPS) * Solvent induced phase separation (SIPS) * Encapsulation (NCAP) * RESPONSE VOLTAGE * Dielectric and resistive effects * Radial configuration * Bipolar configuration * Other director configurations * RESPONSE TIME * DISPLAY CONTRAST * Light scattering and index matching * Incorporation of dyes * Contrast measurements * PDLC DISPLAY DEVICES AND INNOVATIONS * Reflective direct view displays * Large-scale, flexible displays * Switchable windows * Projection displays * High definition spatial light modulator * Haze-free PDLC shutters: wide angle view displays * ENVIRONMENTAL STABILITY * ACKNOWLEDGEMENTS * REFERENCES
Dual redundant display in bubble canopy applications
NASA Astrophysics Data System (ADS)
Mahdi, Ken; Niemczyk, James
2010-04-01
Today's cockpit integrator, whether for a state-of-the-art military fast jet or piston-powered general aviation, is striving to utilize all available panel space for AMLCD-based displays to enhance situational awareness and increase safety. The benefits of a glass cockpit have been well studied and documented. The technology used to create these glass cockpits, however, is driven by commercial AMLCD demand, which far outstrips the combined worldwide avionics requirements. In order to satisfy the wide variety of human factors and environmental requirements, large-area displays have been developed to maximize the usable display area while also providing necessary redundancy in case of failure. Driven by the flat-panel TV market, the AMLCD has been optimized for extremely wide viewing angles. In some cockpit applications, wide viewing cones are desired. In bubble canopy cockpits, however, narrow viewing cones are desired to reduce canopy reflections. American Panel Corporation has developed AMLCD displays that maximize viewing area and provide redundancy while also providing a very narrow viewing cone, even though commercial AMLCD technology suitable for high-performance displays is employed. This paper investigates the large-area display architecture, with several available options for providing redundancy, as well as beam-steering techniques to limit canopy reflections.
Yoon, Ki-Hyuk; Kang, Min-Koo; Lee, Hwasun; Kim, Sung-Kyu
2018-01-01
We study optical technologies for viewer-tracked autostereoscopic 3D display (VTA3D), which provides improved 3D image quality and an extended viewing range. In particular, we utilize a technique, the so-called dynamic fusion of viewing zone (DFVZ), for each 3D optical line to realize image quality equivalent to that achievable at the optimal viewing distance, even when a viewer is moving in the depth direction. In addition, we examine quantitative properties of the viewing zones provided by the VTA3D system that adopted DFVZ, revealing that the optimal viewing zone can be formed at the viewer's position. Last, we show that the comfort zone is extended due to DFVZ. This is demonstrated by a viewer's subjective evaluation of the 3D display system that employs both multiview autostereoscopic 3D display and DFVZ.
Autostereoscopic display technology for mobile 3DTV applications
NASA Astrophysics Data System (ADS)
Harrold, Jonathan; Woodgate, Graham J.
2007-02-01
Mobile TV is now a commercial reality, and an opportunity exists for the first mass market 3DTV products based on cell phone platforms with switchable 2D/3D autostereoscopic displays. Compared to conventional cell phones, TV phones need to operate for extended periods of time with the display running at full brightness, so the efficiency of the 3D optical system is key. The desire for increased viewing freedom to provide greater viewing comfort can be met by increasing the number of views presented. A four view lenticular display will have a brightness five times greater than the equivalent parallax barrier display. Therefore, lenticular displays are very strong candidates for cell phone 3DTV. Selection of Polarisation Activated Microlens TM architectures for LCD, OLED and reflective display applications is described. The technology delivers significant advantages especially for high pixel density panels and optimises device ruggedness while maintaining display brightness. A significant manufacturing breakthrough is described, enabling switchable microlenses to be fabricated using a simple coating process, which is also readily scalable to large TV panels. The 3D image performance of candidate 3DTV panels will also be compared using autostereoscopic display optical output simulations.
Vainio, L; Alén, H; Hiltunen, S; Lehikoinen, K; Lindbäck, H; Patrikainen, A; Paavilainen, P
2013-02-01
Previous research has shown that subliminally presented arrows produce a negative priming effect in which responses are slower when the prime and target call for the same response than when they call for different responses. This phenomenon has been attributed to self-inhibitory mechanisms of response processes. Similar negative priming was recently observed when participants responded to the direction of the target arrow and the prime was a briefly displayed image of a left or right hand. Responses were slower when the left-right identity of the viewed hand was compatible with the responding hand. This was suggested to demonstrate that the proposed motor self-inhibition is a general and basic functional principle in manual control processes. However, the behavioural evidence observed in that study was not capable of showing whether the negative priming associated with a briefly displayed hand could reflect inhibitory processes other than motor self-inhibition. The present study uses an electrophysiological indicator of automatic response priming, the lateralized readiness potential (LRP), to investigate whether the negative priming triggered by the identity of the viewed hand does indeed reflect motor self-inhibition processes. The LRP revealed a pattern of motor activation that was in line with the motor self-inhibition hypothesis. Thus, the finding supports the view that the self-inhibition mechanisms are not restricted to arrow stimuli that are presented subliminally. Rather, they are general sensorimotor mechanisms that operate in the planning and control of manual actions. Copyright © 2012 Elsevier Ltd. All rights reserved.
Exploring interaction with 3D volumetric displays
NASA Astrophysics Data System (ADS)
Grossman, Tovi; Wigdor, Daniel; Balakrishnan, Ravin
2005-03-01
Volumetric displays generate true volumetric 3D images by actually illuminating points in 3D space. As a result, viewing their contents is similar to viewing physical objects in the real world. These displays provide a 360 degree field of view, and do not require the user to wear hardware such as shutter glasses or head-trackers. These properties make them a promising alternative to traditional display systems for viewing imagery in 3D. Because these displays have only recently been made available commercially (e.g., www.actuality-systems.com), their current use tends to be limited to non-interactive output-only display devices. To take full advantage of the unique features of these displays, however, it would be desirable if the 3D data being displayed could be directly interacted with and manipulated. We investigate interaction techniques for volumetric display interfaces, through the development of an interactive 3D geometric model building application. While this application area itself presents many interesting challenges, our focus is on the interaction techniques that are likely generalizable to interactive applications for other domains. We explore a very direct style of interaction where the user interacts with the virtual data using direct finger manipulations on and around the enclosure surrounding the displayed 3D volumetric image.
Super long viewing distance light homogeneous emitting three-dimensional display
NASA Astrophysics Data System (ADS)
Liao, Hongen
2015-04-01
Three-dimensional (3D) display technology has continuously been attracting public attention with the progress of today's 3D television and mature display technologies. The primary characteristics of conventional glasses-free autostereoscopic displays, such as spatial resolution, image depth, and viewing angle, are often limited due to the use of optical lenses or optical gratings. We present a 3D display using a MEMS-scanning-mechanism-based light homogeneous emitting (LHE) approach and demonstrate that the display can directly generate an autostereoscopic 3D image without the need for optical lenses or gratings. The generated 3D image has the advantages of being aberration-free and having high-definition spatial resolution, making it the first to exhibit animated 3D images with an image depth of six meters. Our LHE 3D display approach can be used to produce a natural flat-panel 3D display with a super long viewing distance and real-time image updating.
Preferred viewing distance and screen angle of electronic paper displays.
Shieh, Kong-King; Lee, Der-Song
2007-09-01
This study explored the viewing distance and screen angle for electronic paper (E-Paper) displays under various light sources, ambient illuminations, and character sizes. Data analysis showed that the mean viewing distance and screen angle were 495 mm and 123.7 degrees. The mean viewing distance for the Kolin cholesteric liquid crystal display was 500 mm, significantly longer than that for the Sony electronic ink display, 491 mm. The screen angle for Kolin was 127.4 degrees, significantly greater than that of Sony, 120.0 degrees. The various light sources revealed no significant effect on viewing distance; nevertheless, they showed a significant effect on screen angle. The screen angle for the sunlight lamp (D65) was similar to that of the fluorescent lamp (TL84), but greater than that of the tungsten lamp (F). Ambient illumination and E-Paper type had significant effects on viewing distance and screen angle. The higher the ambient illumination, the longer the viewing distance and the smaller the screen angle. Character size had a significant effect on viewing distance: the larger the character size, the longer the viewing distance. The results of this study indicated that the viewing distance for E-Paper was similar to that for a visual display terminal (VDT), at around 500 mm, but greater than that for normal paper, at about 360 mm. The mean screen angle was around 123.7 degrees, which in terms of viewing angle is 29.5 degrees below horizontal eye level. This result is similar to the generally suggested viewing angle of between 20 degrees and 50 degrees below the horizontal line of sight.
Design of a single projector multiview 3D display system
NASA Astrophysics Data System (ADS)
Geng, Jason
2014-03-01
Multiview three-dimensional (3D) display is able to provide horizontal parallax to viewers, with high-resolution and full-color images being presented to each view. Most multiview 3D display systems are designed and implemented using multiple projectors, each generating images for one view. Although this multi-projector design strategy is conceptually straightforward, implementation of such a design often leads to a very expensive system and complicated calibration procedures. Even for a multiview system with a moderate number of projectors (e.g., 32 or 64 projectors), the cost of a multi-projector 3D display system may become prohibitive due to the cost and complexity of integrating multiple projectors. In this article, we describe an optical design technique for a class of multiview 3D display systems that use only a single projector. In this single projector multiview (SPM) system design, multiple views for the 3D display are generated in a time-multiplexed fashion by the single high-speed projector with specially designed optical components, a scanning mirror, and a reflective mirror array. Images of all views are generated sequentially and projected via the specially designed optical system from different viewing directions towards a 3D display screen. Therefore, the single projector is able to generate the equivalent number of multiview images from multiple viewing directions, thus fulfilling the tasks of multiple projectors. An obvious advantage of the proposed SPM technique is the significant reduction in cost, size, and complexity, especially when the number of views is high. The SPM strategy also alleviates the time-consuming procedures for multi-projector calibration. The design method is flexible and scalable and can accommodate systems with different numbers of views.
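The time-multiplexed design implies a simple throughput constraint: the single projector must refresh fast enough to serve every view in each display frame. The sketch below (hypothetical numbers, not the paper's specification) makes that arithmetic explicit.

```python
def required_projector_rate(num_views: int, per_view_rate_hz: float) -> float:
    """Minimum projector frame rate for a time-multiplexed multiview display."""
    return num_views * per_view_rate_hz

# Example: 32 views refreshed at 30 Hz each would need a 960 Hz projector.
print(required_projector_rate(num_views=32, per_view_rate_hz=30.0))
```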
Demonstration of a real-time implementation of the ICVision holographic stereogram display
NASA Astrophysics Data System (ADS)
Kulick, Jeffrey H.; Jones, Michael W.; Nordin, Gregory P.; Lindquist, Robert G.; Kowel, Stephen T.; Thomsen, Axel
1995-07-01
There is increasing interest in real-time autostereoscopic 3D displays. Such systems allow 3D objects or scenes to be viewed by one or more observers with correct motion parallax without the need for glasses or other viewing aids. Potential applications of such systems include mechanical design, training and simulation, medical imaging, virtual reality, and architectural design. One approach to the development of real-time autostereoscopic display systems has been to develop real-time holographic display systems. The approach taken by most of these systems is to compute and display a number of holographic lines at one time, and then use a scanning system to replicate the images throughout the display region. The approach taken in the ICVision system being developed at the University of Alabama in Huntsville is very different. In the ICVision display, the display creates a set of discrete viewing regions called virtual viewing slits. Each pixel is required to fill every viewing slit with different image data. When the images presented in two virtual viewing slits separated by an interocular distance form a stereoscopic pair, the observer sees a 3D image. The images are computed so that a different stereo pair is presented each time the viewer moves one eye pupil diameter (approximately mm), thus providing a series of stereo views. Each pixel is subdivided into smaller regions, called partial pixels. Each partial pixel is filled with a diffraction grating designed to direct light into an individual virtual viewing slit. The sum of all the partial pixels in a pixel then fills all the virtual viewing slits. The final version of the ICVision system will form diffraction gratings in a liquid crystal layer on the surface of VLSI chips in real time. Processors embedded in the VLSI chips will compute the display in real time. In the current version of the system, a commercial AMLCD is sandwiched with a diffraction grating array. This paper discusses the design details of a portable 3D display based on the integration of a diffractive optical element with a commercial off-the-shelf AMLCD. The diffractive optic contains several hundred thousand partial-pixel gratings, and the AMLCD modulates the light diffracted by the gratings.
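For readers unfamiliar with how a partial-pixel grating steers light toward one viewing slit, the sketch below applies the standard first-order grating equation (d sin θ = mλ at normal incidence) to estimate the grating period needed for a given deflection angle. The wavelength and angle are illustrative values, not parameters of the ICVision design.

```python
import math

def grating_period_nm(wavelength_nm: float, deflection_deg: float, order: int = 1) -> float:
    """Grating period d satisfying d * sin(theta) = m * lambda (normal incidence)."""
    return order * wavelength_nm / math.sin(math.radians(deflection_deg))

# Illustrative numbers only: green light deflected by 5 degrees needs a ~6.3 micron period.
print(grating_period_nm(wavelength_nm=550.0, deflection_deg=5.0))  # ~6309 nm
```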
NASA Astrophysics Data System (ADS)
Venolia, Dan S.; Williams, Lance
1990-08-01
A range of stereoscopic display technologies exist which are no more intrusive to the user than a pair of spectacles. Combining such a display system with sensors for the position and orientation of the user's point of view results in a greatly enhanced depiction of three-dimensional data. As the point of view changes, the stereo display channels are updated in real time. The face of a monitor or display screen becomes a window on a three-dimensional scene. Motion parallax naturally conveys the placement and relative depth of objects in the field of view. Most of the advantages of "head-mounted display" technology are achieved with a less cumbersome system. To derive the full benefits of stereo combined with motion parallax, both stereo channels must be updated in real time. This may limit the size and complexity of databases which can be viewed on processors of modest resources, and restrict the use of additional three-dimensional cues, such as texture mapping, depth cueing, and hidden-surface elimination. Effective use of "full 3D" may still be undertaken in a non-interactive mode. Integral composite holograms have often been advanced as a powerful 3D visualization tool. Such a hologram is typically produced from a film recording of an object on a turntable, or a computer animation of an object rotating about one axis. The individual frames of film are multiplexed, in a composite hologram, in such a way as to be indexed by viewing angle. The composite may be produced as a cylinder transparency, which provides a stereo view of the object as if enclosed within the cylinder and which can be viewed from any angle. No vertical parallax is usually provided (this would require increasing the dimensionality of the multiplexing scheme), but the three-dimensional image is highly resolved and easy to view and interpret. Even a modest processor can duplicate the effect of such a precomputed display, provided sufficient memory and bus bandwidth. This paper describes the components of a stereo display system with user point-of-view tracking for interactive 3D, and a digital realization of integral composite display which we term virtual integral holography. The primary drawbacks of holographic display (film processing turnaround time and the difficulty of displaying scenes in full color) are obviated, and motion parallax cues provide easy 3D interpretation even for users who cannot see in stereo.
A Low-Cost PC-Based Image Workstation for Dynamic Interactive Display of Three-Dimensional Anatomy
NASA Astrophysics Data System (ADS)
Barrett, William A.; Raya, Sai P.; Udupa, Jayaram K.
1989-05-01
A system for interactive definition, automated extraction, and dynamic interactive display of three-dimensional anatomy has been developed and implemented on a low-cost PC-based image workstation. An iconic display is used for staging predefined image sequences through specified increments of tilt and rotation over a solid viewing angle. Use of a fast processor facilitates rapid extraction and rendering of the anatomy into predefined image views. These views are formatted into a display matrix in a large image memory for rapid interactive selection and display of arbitrary spatially adjacent images within the viewing angle, thereby providing motion parallax depth cueing for efficient and accurate perception of true three-dimensional shape, size, structure, and spatial interrelationships of the imaged anatomy. The visual effect is that of holding and rotating the anatomy in the hand.
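The display-matrix idea above amounts to indexing a grid of pre-rendered views by the requested tilt and rotation. A minimal Python sketch of that lookup, with hypothetical angular increments and matrix dimensions rather than the paper's values, is shown below.

```python
def view_index(tilt_deg: float, rot_deg: float,
               tilt_step_deg: float = 10.0, rot_step_deg: float = 10.0,
               n_tilt: int = 7, n_rot: int = 36) -> tuple[int, int]:
    """Map a requested tilt/rotation to the nearest pre-rendered view in the matrix.

    Tilt is clamped to the rendered range; rotation wraps around 360 degrees.
    Step sizes and matrix dimensions here are illustrative assumptions.
    """
    row = max(0, min(n_tilt - 1, round(tilt_deg / tilt_step_deg)))
    col = round(rot_deg / rot_step_deg) % n_rot
    return row, col

print(view_index(tilt_deg=23.0, rot_deg=355.0))  # (2, 0): nearest stored view
```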
Viewing zone duplication of multi-projection 3D display system using uniaxial crystal.
Lee, Chang-Kun; Park, Soon-Gi; Moon, Seokil; Lee, Byoungho
2016-04-18
We propose a novel multiplexing technique for increasing the viewing zone of a multi-view based multi-projection 3D display system by employing double refraction in a uniaxial crystal. When linearly polarized images from a projector pass through the uniaxial crystal, two possible optical paths exist according to the polarization state of the image. Therefore, the optical path of the image can be changed, and the viewing zone is shifted in the lateral direction. The polarization modulation of the image from a single projection unit enables us to generate two viewing zones at different positions. For realizing full-color images at each viewing zone, a polarization-based temporal multiplexing technique is adopted with a conventional polarization switching device of a liquid crystal (LC) display. Through experiments, a prototype of a ten-view multi-projection 3D display system presenting full-color view images is implemented by combining five laser scanning projectors, an optically clear calcite (CaCO3) crystal, and an LC polarization rotator. For each time sequence of the temporal multiplexing, the luminance distribution of the proposed system is measured and analyzed.
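A minimal sketch of the temporal-multiplexing schedule implied above: each display frame, the LC polarization rotator toggles between the two polarization states, so the same projectors alternately feed the original and the laterally shifted (double-refracted) set of viewing zones. The state names and frame count here are illustrative, not taken from the prototype.

```python
import itertools

def multiplexing_schedule(num_frames: int):
    """Alternate the LC rotator state every frame.

    The 'ordinary' state leaves the viewing zones in place; the 'extraordinary'
    state is refracted along the second path in the uniaxial crystal, shifting
    the zones laterally so the view count is doubled over two frames.
    """
    states = itertools.cycle(["ordinary", "extraordinary"])
    return [(frame, next(states)) for frame in range(num_frames)]

for frame, state in multiplexing_schedule(6):
    print(f"frame {frame}: drive LC rotator for the {state} ray path")
```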
Military display performance parameters
NASA Astrophysics Data System (ADS)
Desjardins, Daniel D.; Meyer, Frederick
2012-06-01
The military display market is analyzed in terms of four of its segments: avionics, vetronics, dismounted soldier, and command and control. Requirements are summarized for a number of technology-driving parameters, to include luminance, night vision imaging system compatibility, gray levels, resolution, dimming range, viewing angle, video capability, altitude, temperature, shock and vibration, etc., for direct-view and virtual-view displays in cockpits and crew stations. Technical specifications are discussed for selected programs.
Applications of graphics to support a testbed for autonomous space vehicle operations
NASA Technical Reports Server (NTRS)
Schmeckpeper, K. R.; Aldridge, J. P.; Benson, S.; Horner, S.; Kullman, A.; Mulder, T.; Parrott, W.; Roman, D.; Watts, G.; Bochsler, Daniel C.
1989-01-01
Researchers describe their experience using graphics tools and utilities while building an application, AUTOPS, that uses a graphical Macintosh (TM)-like interface for the input and display of data, and animation graphics to enhance the presentation of results of autonomous space vehicle operations simulations. AUTOPS is a test bed for evaluating decisions for intelligent control systems for autonomous vehicles. Decisions made by an intelligent control system, e.g., a revised mission plan, might be displayed to the user in textual format, or the user can witness the effects of those decisions via out-of-window graphics animations. Although a textual description conveys the essentials, a graphics animation conveys the replanning results in a more convincing way. Similarly, iconic and menu-driven screen interfaces provide the user with more meaningful options and displays. Presented here are experiences with the SunView and TAE Plus graphics tools used for interface design, and the Johnson Space Center Interactive Graphics Laboratory animation graphics tools used for generating out-of-the-window graphics.
Investigation of designated eye position and viewing zone for a two-view autostereoscopic display.
Huang, Kuo-Chung; Chou, Yi-Heng; Lin, Lang-chin; Lin, Hoang Yan; Chen, Fu-Hao; Liao, Ching-Chiu; Chen, Yi-Han; Lee, Kuen; Hsu, Wan-Hsuan
2014-02-24
Designated eye position (DEP) and viewing zone (VZ) are important optical parameters for designing a two-view autostereoscopic display. Although much research has been done to date, little empirical evidence has been found to establish a direct relationship between design and measurement. More rigorous studies and verifications to investigate DEP and to ascertain the VZ criterion will be valuable. We propose evaluation metrics based on equivalent luminance (EL) and binocular luminance (BL) to figure out DEP and VZ for a two-view autostereoscopic display. Simulation and experimental results prove that our proposed evaluation metrics can be used to find the DEP and VZ accurately.
NASA Astrophysics Data System (ADS)
Preston, Sandra; Cianciolo, F.; Jones, T.; Wetzel, M.; Mace, K.; Barrick, R.; Kelton, P.; Cochran, A.; Johnson, R.
2007-05-01
Of the 100,000 visitors that come to McDonald Observatory each year, about half visit the Harlan J. Smith 2.7-m Telescope. Visitors experience the 2.7-m telescope as part of a guided tour, a self-guided tour, and during the once-a-month special viewing nights that are unique to a telescope of this size. Recent safety requirements limiting visitor access to the dome-floor level, and a need to modernize out-of-date displays in the 2.7-m lobby area, motivated us to develop this new exhibit. A planning team consisting of McDonald Observatory personnel from Outreach & Education, Physical Plant, and Administration came together via videoconferences (between Austin and Fort Davis) to develop an exhibit for the lobby area of this telescope. As the planning process unfolded, the team determined that a mix of static displays and modern technology, such as flat panel displays and DVD video, was key to presenting the history of the facility, introducing basic concepts about the telescope and current research, and giving virtual access to the dome floor to visitors on the self-guided tour. This approach also allows for content development and much of the production to be done in-house, which was important from both a cost and maintenance standpoint. A representative of the Smith family was also consulted throughout the development of the exhibit to ensure that the exhibit plan was seen as an acceptable memorial to the late director. The exhibit was installed in January 2007.
46 CFR 131.945 - Display of plans.
Code of Federal Regulations, 2011 CFR
2011-10-01
§ 131.945 Display of plans. Each vessel must have permanently exhibited, for the guidance of the master and crew members, general arrangement plans showing, for each deck, the various fire-retardant...
46 CFR 131.945 - Display of plans.
Code of Federal Regulations, 2012 CFR
2012-10-01
§ 131.945 Display of plans. Each vessel must have permanently exhibited, for the guidance of the master and crew members, general arrangement plans showing, for each deck, the various fire-retardant...
46 CFR 169.853 - Display of plans.
Code of Federal Regulations, 2013 CFR
2013-10-01
Tests, Drills, and Inspections. § 169.853 Display of plans. (a) Each vessel of 100 gross tons and over must have permanently exhibited for the guidance of the master, general arrangement plans for each deck...
46 CFR 131.945 - Display of plans.
Code of Federal Regulations, 2014 CFR
2014-10-01
§ 131.945 Display of plans. Each vessel must have permanently exhibited, for the guidance of the master and crew members, general arrangement plans showing, for each deck, the various fire-retardant...
46 CFR 131.945 - Display of plans.
Code of Federal Regulations, 2013 CFR
2013-10-01
§ 131.945 Display of plans. Each vessel must have permanently exhibited, for the guidance of the master and crew members, general arrangement plans showing, for each deck, the various fire-retardant...
46 CFR 169.853 - Display of plans.
Code of Federal Regulations, 2014 CFR
2014-10-01
Tests, Drills, and Inspections. § 169.853 Display of plans. (a) Each vessel of 100 gross tons and over must have permanently exhibited for the guidance of the master, general arrangement plans for each deck...
46 CFR 169.853 - Display of plans.
Code of Federal Regulations, 2010 CFR
2010-10-01
Tests, Drills, and Inspections. § 169.853 Display of plans. (a) Each vessel of 100 gross tons and over must have permanently exhibited for the guidance of the master, general arrangement plans for each deck...
46 CFR 169.853 - Display of plans.
Code of Federal Regulations, 2012 CFR
2012-10-01
Tests, Drills, and Inspections. § 169.853 Display of plans. (a) Each vessel of 100 gross tons and over must have permanently exhibited for the guidance of the master, general arrangement plans for each deck...
46 CFR 169.853 - Display of plans.
Code of Federal Regulations, 2011 CFR
2011-10-01
Tests, Drills, and Inspections. § 169.853 Display of plans. (a) Each vessel of 100 gross tons and over must have permanently exhibited for the guidance of the master, general arrangement plans for each deck...
Dual-view-zone tabletop 3D display system based on integral imaging.
He, Min-Yang; Zhang, Han-Le; Deng, Huan; Li, Xiao-Wei; Li, Da-Hai; Wang, Qiong-Hua
2018-02-01
In this paper, we propose a dual-view-zone tabletop 3D display system based on integral imaging by using a multiplexed holographic optical element (MHOE) that has the optical properties of two sets of microlens arrays. The MHOE is recorded by a reference beam using the single-exposure method. The reference beam records the wavefronts of a microlens array from two different directions. Thus, when the display beam is projected on the MHOE, two wavefronts with the different directions will be rebuilt and the 3D virtual images can be reconstructed in two viewing zones. The MHOE has angle and wavelength selectivity. Under the conditions of the matched wavelength and the angle of the display beam, the diffraction efficiency of the MHOE is greatest. Because the unmatched light just passes through the MHOE, the MHOE has the advantage of a see-through display. The experimental results confirm the feasibility of the dual-view-zone tabletop 3D display system.
Virtual navigation performance: the relationship to field of view and prior video gaming experience.
Richardson, Anthony E; Collaer, Marcia L
2011-04-01
Two experiments examined whether learning a virtual environment was influenced by field of view and how it related to prior video gaming experience. In the first experiment, participants (42 men, 39 women; M age = 19.5 yr., SD = 1.8) performed worse on a spatial orientation task displayed with a narrow field of view in comparison to medium and wide field-of-view displays. Counter to initial hypotheses, wide field-of-view displays did not improve performance over medium displays, and this was replicated in a second experiment (30 men, 30 women; M age = 20.4 yr., SD = 1.9) presenting a more complex learning environment. Self-reported video gaming experience correlated with several spatial tasks: virtual environment pointing and tests of Judgment of Line Angle and Position, mental rotation, and Useful Field of View (with correlations between .31 and .45). When prior video gaming experience was included as a covariate, sex differences in spatial tasks disappeared.
Sitting in the Pilot's Seat; Optimizing Human-Systems Interfaces for Unmanned Aerial Vehicles
NASA Technical Reports Server (NTRS)
Queen, Steven M.; Sanner, Kurt Gregory
2011-01-01
One of the pilot-machine interfaces (the forward-viewing camera display) for an Unmanned Aerial Vehicle called the DROID (Dryden Remotely Operated Integrated Drone) will be analyzed for optimization. The goal is to create a visual display for the pilot that resembles an out-the-window view as closely as possible. There are currently no standard guidelines for designing pilot-machine interfaces for UAVs. Typically, UAV camera views have a narrow field, which limits the situational awareness (SA) of the pilot. Also, at this time, pilot-UAV interfaces often use displays that have a diagonal length of around 20". Using a small display may result in a distorted and disproportional view for UAV pilots. Making use of a larger display and a camera lens with a wider field of view may minimize the occurrences of pilot error associated with the inability to see "out the window" as in a manned airplane. It is predicted that the pilot will have a less distorted view of the DROID's surroundings, quicker response times, and more stable vehicle control. If the experimental results validate this concept, other UAV pilot-machine interfaces will be improved with this design methodology.
Display technologies for augmented reality
NASA Astrophysics Data System (ADS)
Lee, Byoungho; Lee, Seungjae; Jang, Changwon; Hong, Jong-Young; Li, Gang
2018-02-01
With the virtue of rapid progress in optics, sensors, and computer science, we are witnessing commercial products and prototypes for augmented reality (AR) penetrating the consumer markets. AR is in the spotlight because it is expected to provide a much more immersive and realistic experience than ordinary displays. However, there are several barriers to be overcome for successful commercialization of AR. Here, we explore challenging and important topics for AR such as image combiners, enhancement of display performance, and focus cue reproduction. Image combiners are essential to integrate virtual images with the real world. Display performance (e.g., field of view and resolution) is important for a more immersive experience, and focus cue reproduction may mitigate the visual fatigue caused by the vergence-accommodation conflict. We also demonstrate emerging technologies to overcome these issues: the index-matched anisotropic crystal lens (IMACL), retinal projection displays, and 3D displays with focus cues. For image combiners, a novel optical element called the IMACL provides a relatively wide field of view. Retinal projection displays may enhance the field of view and resolution of AR displays. Focus cues can be reconstructed via multi-layer displays and holographic displays. Experimental results of our prototypes are explained.
Semi-autonomous wheelchair system using stereoscopic cameras.
Nguyen, Jordan S; Nguyen, Thanh H; Nguyen, Hung T
2009-01-01
This paper is concerned with the design and development of a semi-autonomous wheelchair system using stereoscopic cameras to assist hands-free control technologies for severely disabled people. The stereoscopic cameras capture an image from both the left and right cameras, which are then processed with a Sum of Absolute Differences (SAD) correlation algorithm to establish correspondence between image features in the different views of the scene. This is used to produce a stereo disparity image containing information about the depth of objects away from the camera in the image. A geometric projection algorithm is then used to generate a 3-Dimensional (3D) point map, placing pixels of the disparity image in 3D space. This is then converted to a 2-Dimensional (2D) depth map allowing objects in the scene to be viewed and a safe travel path for the wheelchair to be planned and followed based on the user's commands. This assistive technology utilising stereoscopic cameras has the purpose of automated obstacle detection, path planning and following, and collision avoidance during navigation. Experimental results obtained in an indoor environment displayed the effectiveness of this assistive technology.
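The SAD correlation step described above can be sketched in a few lines of Python/NumPy. This is a generic block-matching illustration (the window size and disparity range are arbitrary), not the wheelchair system's implementation.

```python
import numpy as np
from scipy.signal import convolve2d

def sad_disparity(left: np.ndarray, right: np.ndarray,
                  max_disp: int = 32, window: int = 5) -> np.ndarray:
    """Per-pixel disparity from rectified grayscale images via SAD block matching."""
    h, w = left.shape
    kernel = np.ones((window, window), dtype=np.float32)
    costs = np.full((max_disp, h, w), np.inf, dtype=np.float32)
    for d in range(max_disp):
        # Absolute difference between the left image and the right image shifted by d.
        diff = np.abs(left[:, d:].astype(np.float32) - right[:, :w - d].astype(np.float32))
        costs[d, :, d:] = convolve2d(diff, kernel, mode="same")  # windowed SAD
    return np.argmin(costs, axis=0).astype(np.uint8)  # disparity with minimum cost

# Toy usage: two random "rectified" frames of the same size.
left = np.random.randint(0, 255, (120, 160)).astype(np.uint8)
right = np.roll(left, -4, axis=1)  # a constant 4-pixel shift
print(sad_disparity(left, right).mean())
```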
Protective laser beam viewing device
Neil, George R.; Jordan, Kevin Carl
2012-12-18
A protective laser beam viewing system or device including a camera selectively sensitive to laser light wavelengths and a viewing screen receiving images from the laser sensitive camera. According to a preferred embodiment of the invention, the camera is worn on the head of the user or incorporated into a goggle-type viewing display so that it is always aimed at the area of viewing interest to the user and the viewing screen is incorporated into a video display worn as goggles over the eyes of the user.
46 CFR 35.10-3 - Display of plans-TB/ALL.
Code of Federal Regulations, 2010 CFR
2010-10-01
§ 35.10-3 Display of plans-TB/ALL. Barges with sleeping accommodations for more than six persons... charge of the vessel the following plans: (a) General arrangement plans showing for each deck the fire...
View generation for 3D-TV using image reconstruction from irregularly spaced samples
NASA Astrophysics Data System (ADS)
Vázquez, Carlos
2007-02-01
Three-dimensional television (3D-TV) will become the next big step in the development of advanced TV systems. One of the major challenges for the deployment of 3D-TV systems is the diversity of display technologies and the high cost of capturing multi-view content. Depth image-based rendering (DIBR) has been identified as a key technology for the generation of new views for stereoscopic and multi-view displays from a small number of views captured and transmitted. We propose a disparity compensation method for DIBR that does not require spatial interpolation of the disparity map. We use a forward-mapping disparity compensation with real precision. The proposed method deals with the irregularly sampled image resulting from this disparity compensation process by applying a re-sampling algorithm based on a bi-cubic spline function space that produces smooth images. The fact that no approximation is made on the position of the samples implies that geometrical distortions in the final images due to approximations in sample positions are minimized. We also paid attention to the occlusion problem. Our algorithm detects the occluded regions in the newly generated images and uses simple depth-aware inpainting techniques to fill the gaps created by newly exposed areas. We tested the proposed method in the context of generation of views needed for viewing on SynthaGram TM auto-stereoscopic displays. We used as input either a 2D image plus a depth map or a stereoscopic pair with the associated disparity map. Our results show that this technique provides high quality images to be viewed on different display technologies such as stereoscopic viewing with shutter glasses (two views) and lenticular auto-stereoscopic displays (nine views).
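The forward-mapping step can be illustrated with a short Python sketch: each source pixel is pushed to a real-valued target position x + d(x, y), producing the irregularly spaced samples that the paper then reconstructs with bicubic splines. The spline resampling and inpainting stages are not reproduced here; a simple nearest-pixel splat stands in for them.

```python
import numpy as np

def forward_warp_samples(image: np.ndarray, disparity: np.ndarray):
    """Forward disparity compensation: returns irregularly spaced samples.

    Each pixel (x, y) of the source view lands at the real-valued position
    (x + d(x, y), y) in the new view. No rounding is applied here, which is
    the point: the output is a scattered sample set to be reconstructed by a
    resampling stage (bicubic splines in the paper).
    """
    h, w = disparity.shape
    ys, xs = np.mgrid[0:h, 0:w]
    target_x = xs + disparity                      # real-valued horizontal positions
    return target_x.ravel(), ys.ravel(), image.reshape(h * w, -1)

def splat_nearest(target_x, ys, values, h, w):
    """Placeholder reconstruction: nearest-pixel splat (stand-in for spline resampling).

    Later samples overwrite earlier ones; a real renderer would order by depth.
    """
    out = np.zeros((h, w) + values.shape[1:], dtype=values.dtype)
    xi = np.clip(np.round(target_x).astype(int), 0, w - 1)
    out[ys, xi] = values
    return out
```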
Display of high dynamic range images under varying viewing conditions
NASA Astrophysics Data System (ADS)
Borer, Tim
2017-09-01
Recent demonstrations of high dynamic range (HDR) television have shown that superb images are possible. With the emergence of an HDR television production standard (ITU-R Recommendation BT.2100) last year, HDR television production is poised to take off. However research to date has focused principally on HDR image display only under "dark" viewing conditions. HDR television will need to be displayed at varying brightness and under varying illumination (for example to view sport in daytime or on mobile devices). We know, from common practice with conventional TV, that the rendering intent (gamma) should change under brighter conditions, although this is poorly quantified. For HDR the need to render images under varying conditions is all the more acute. This paper seeks to explore the issues surrounding image display under varying conditions. It also describes how visual adaptation is affected by display brightness, surround illumination, screen size and viewing distance. Existing experimental results are presented and extended to try to quantify these effects. Using the experimental results it is described how HDR images may be displayed so that they are perceptually equivalent under different viewing conditions. A new interpretation of the experimental results is reported, yielding a new, luminance invariant model for the appropriate display "gamma". In this way the consistency of HDR image reproduction should be improved, thereby better maintaining "creative intent" in television.
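As one concrete, published example of the luminance-dependent rendering intent discussed above, ITU-R BT.2100 adjusts the HLG system gamma with the display's nominal peak luminance. The sketch below implements that published adjustment; the paper's own surround-dependent model is not reproduced here.

```python
import math

def hlg_system_gamma(peak_luminance_cd_m2: float) -> float:
    """HLG system gamma as a function of nominal peak display luminance.

    BT.2100 specifies gamma = 1.2 at 1000 cd/m^2 and adjusts it as
    1.2 + 0.42 * log10(Lw / 1000) for other peak luminances, so brighter
    displays render with a higher system gamma.
    """
    return 1.2 + 0.42 * math.log10(peak_luminance_cd_m2 / 1000.0)

for lw in (500, 1000, 2000, 4000):
    print(lw, round(hlg_system_gamma(lw), 3))
```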
Bird's Eye View - A 3-D Situational Awareness Tool for the Space Station
NASA Technical Reports Server (NTRS)
Dershowitz, Adam; Chamitoff, Gregory
2002-01-01
Even as space-qualified computer hardware lags well behind the latest home computers, the possibility of using high-fidelity interactive 3-D graphics for displaying important on-board information has finally arrived and is being used on board the International Space Station (ISS). With the quantity and complexity of space-flight telemetry, 3-D displays can greatly enhance the ability of users, both onboard and on the ground, to interpret data quickly and accurately. This is particularly true for data related to vehicle attitude, position, configuration, and relation to other objects on the ground or in orbit. Bird's Eye View (BEV) is a 3-D real-time application that provides a high degree of situational awareness for the crew. Its purpose is to instantly convey important motion-related parameters to the crew and mission controllers by presenting 3-D simulated camera views of the International Space Station (ISS) in its actual environment. Driven by actual telemetry, and running on board as well as on the ground, BEV lets the user visualize the Space Station relative to the Earth, Sun, stars, various reference frames, and selected targets, such as ground sites or communication satellites. Since the actual ISS configuration (geometry) is also modeled accurately, everything from the alignment of the solar panels to the expected view from a selected window can be visualized accurately. A virtual representation of the Space Station in real time has many useful applications. By selecting different cameras, the crew or mission control can monitor the station's orientation in space, position over the Earth, transition from day to night, direction to the Sun, the view from a particular window, or the motion of the robotic arm. By viewing the vehicle attitude and solar panel orientations relative to the Sun, the power status of the ISS can be easily visualized and understood. Similarly, the thermal impacts of vehicle attitude can be analyzed and visually confirmed. Communication opportunities can be displayed, and line-of-sight blockage due to interference by the vehicle structure (or the Earth) can be seen easily. Additional features in BEV display targets on the ground and in orbit, including cities, communication sites, landmarks, satellites, and special sites of scientific interest for Earth observation and photography. Any target can be selected and tracked. This gives the user a continual line of sight to the target of current interest and real-time knowledge about its visibility. Similarly, the vehicle ground track, and an option to show "visibility circles" around displayed ground sites, provide continuous insight regarding current and future visibility to any target. BEV was designed with inputs from many disciplines in the flight control and operations community, both at NASA and from the International Partners. As such, BEV is setting the standards for interactive 3-D graphics for spacecraft applications. One important contribution of BEV is a generic graphical interface for camera control that can be used for any 3-D application. This interface has become part of the International Display and Graphics Standards for the 16-nation ISS partnership. Many other standards related to camera properties and the display of 3-D data also have been defined by BEV. Future enhancements to BEV will include capabilities related to simulating ahead of the current time. This will give the user tools for analyzing off-nominal and future scenarios, as well as for planning future operations.
Air Traffic Complexity Measurement Environment (ACME): Software User's Guide
NASA Technical Reports Server (NTRS)
1996-01-01
A user's guide for the Air Traffic Complexity Measurement Environment (ACME) software is presented. The ACME consists of two major components, a complexity analysis tool and user interface. The Complexity Analysis Tool (CAT) analyzes complexity off-line, producing data files which may be examined interactively via the Complexity Data Analysis Tool (CDAT). The Complexity Analysis Tool is composed of three independently executing processes that communicate via PVM (Parallel Virtual Machine) and Unix sockets. The Runtime Data Management and Control process (RUNDMC) extracts flight plan and track information from a SAR input file, and sends the information to GARP (Generate Aircraft Routes Process) and CAT (Complexity Analysis Task). GARP in turn generates aircraft trajectories, which are utilized by CAT to calculate sector complexity. CAT writes flight plan, track and complexity data to an output file, which can be examined interactively. The Complexity Data Analysis Tool (CDAT) provides an interactive graphic environment for examining the complexity data produced by the Complexity Analysis Tool (CAT). CDAT can also play back track data extracted from System Analysis Recording (SAR) tapes. The CDAT user interface consists of a primary window, a controls window, and miscellaneous pop-ups. Aircraft track and position data is displayed in the main viewing area of the primary window. The controls window contains miscellaneous control and display items. Complexity data is displayed in pop-up windows. CDAT plays back sector complexity and aircraft track and position data as a function of time. Controls are provided to start and stop playback, adjust the playback rate, and reposition the display to a specified time.
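To make the data flow described above concrete, here is a schematic Python sketch of the RUNDMC, GARP, and CAT stages; it is not the ACME code, the function names are hypothetical, and sector complexity is stood in for by a trivial aircraft-count proxy.

```python
from dataclasses import dataclass

@dataclass
class TrackPoint:
    callsign: str
    time_s: float
    x_nm: float
    y_nm: float
    sector: str

def extract_tracks(records):
    """RUNDMC stand-in: pull (callsign, time, position, sector) tuples from input records."""
    return [TrackPoint(*rec) for rec in records]

def generate_trajectories(tracks):
    """GARP stand-in: group track points by aircraft, ordered in time."""
    routes = {}
    for p in sorted(tracks, key=lambda p: p.time_s):
        routes.setdefault(p.callsign, []).append(p)
    return routes

def sector_complexity(tracks, time_s, window_s=60.0):
    """CAT stand-in: trivial proxy, track points per sector within a time window.

    A real complexity metric would deduplicate aircraft and weight many factors.
    """
    counts = {}
    for p in tracks:
        if abs(p.time_s - time_s) <= window_s:
            counts[p.sector] = counts.get(p.sector, 0) + 1
    return counts

records = [("AAL1", 0.0, 10.0, 20.0, "ZFW48"), ("UAL2", 30.0, 12.0, 22.0, "ZFW48")]
tracks = extract_tracks(records)
print(list(generate_trajectories(tracks)), sector_complexity(tracks, time_s=15.0))
```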
A study of payload specialist station monitor size constraints. [space shuttle orbiters
NASA Technical Reports Server (NTRS)
Kirkpatrick, M., III; Shields, N. L., Jr.; Malone, T. B.
1975-01-01
Constraints on the CRT display size for the shuttle orbiter cabin are studied. The viewing requirements placed on these monitors were assumed to involve display of imaged scenes providing visual feedback during payload operations and display of alphanumeric characters. Data on target recognition/resolution, target recognition, and range rate detection by human observers were utilized to determine viewing requirements for imaged scenes. Field-of-view and acuity requirements for a variety of payload operations were obtained along with the necessary detection capability in terms of range-to-target size ratios. The monitor size necessary to meet the acuity requirements was established. An empirical test was conducted to determine required recognition sizes for displayed alphanumeric characters. The results of the test were used to determine the number of characters which could be simultaneously displayed based on the recognition size requirements using the proposed monitor size. A CRT display of 20 x 20 cm is recommended. A portion of the display area is used for displaying imaged scenes and the remaining display area is used for alphanumeric characters pertaining to the displayed scene. The entire display is used for the character alone mode.
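The character-size portion of the study reduces to simple visual-angle geometry. The sketch below computes the character height that subtends a given angle at a given viewing distance and how many such characters fit across a 20 cm display; the angular threshold and width-to-height ratio are illustrative assumptions, not the study's measured values.

```python
import math

def char_height_mm(viewing_distance_mm: float, visual_angle_arcmin: float) -> float:
    """Height of a character subtending the given visual angle at the given distance."""
    theta = math.radians(visual_angle_arcmin / 60.0)
    return 2.0 * viewing_distance_mm * math.tan(theta / 2.0)

def chars_per_line(display_width_mm: float, height_mm: float,
                   width_to_height: float = 0.7) -> int:
    """How many characters of that height fit across the display width."""
    return int(display_width_mm // (height_mm * width_to_height))

# Illustrative only: 16 arcmin characters viewed from 710 mm on a 200 mm wide display.
h = char_height_mm(viewing_distance_mm=710.0, visual_angle_arcmin=16.0)
print(round(h, 2), chars_per_line(display_width_mm=200.0, height_mm=h))
```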
NASA Technical Reports Server (NTRS)
Clark, T. A.; Brainard, G.; Salazar, G.; Johnston, S.; Schwing, B.; Litaker, H.; Kolomenski, A.; Venus, D.; Tran, K.; Hanifin, J.;
2017-01-01
NASA has demonstrated an interest in improving astronaut health and performance through the installation of a new lighting countermeasure on the International Space Station. The Solid State Lighting Assembly (SSLA) system is designed to positively influence astronaut health by providing a daily change in light spectrum to improve circadian entrainment. Unfortunately, existing NASA standards and requirements define ambient light level requirements for crew sleep and other tasks, yet the number of light-emitting diode (LED) indicators and displays within a habitable volume is currently uncontrolled. Because each of these light sources has its own unique spectral properties, the additive lighting environment ends up becoming something different from what was planned or researched. Restricting the use of displays and indicators is not a solution because these systems provide beneficial feedback to the crew. The research team for this grant used computer-based computational modeling and real-world lighting mockups to document the contribution that light sources other than the ambient lighting system make to the ambient spectral lighting environment. In particular, the team was focused on understanding the impacts of long-term tasks located in front of avionics or computer displays. The team also wanted to understand options for mitigating the changes to the ambient light spectrum in the interest of maintaining the performance of a lighting countermeasure. The project utilized a variety of physical and computer-based simulations to determine direct relationships between system implementation and light spectrum. Using real-world data, computer models were built in the commercially available optics analysis software Zemax Optics Studio(c). The team also built a mockup test facility that had the same volume and configuration as one of the Zemax models. The team collected over 1200 spectral irradiance measurements, each representing a different configuration of the mockup. Analysis of the data showed a measurable impact on ambient light spectrum. The data showed that obvious design techniques exist that can be used to bind the ambient light spectrum closer to the planned spectral operating environment for the observer's eye point. The following observations should be considered when designing an operational environment that is dominated by computer displays. The more light that is directed into the observer's field of view, the greater its impact on the various human factors issues that depend on spectral shape and intensity. Because viewing angle plays a large part in the amount of light flux reaching the crewmember's retina, beam shape combined with light source location is an important factor in determining the percent probable incident flux on the observer from any combination of light sources. Computer graphics design and display lumen output are major factors influencing the amount of spectrally intense light projected into the environment and in the viewer's direction. Use of adjustable white point display software was useful only if the predominant background color was white and if it matched the ambient light system's color. Display graphics that used a predominantly black background had the least influence on unplanned spectral energy projected into the environment.
Percent reflectance makes a difference in the total energy reflected back into an environment, and within certain architectural geometries, reflectance can be used to control the amount of a light spectrum that is allowed to perpetuate in the environment. Data showed that room volume and distance from significant light sources influence the total spectrum in a room. Smaller environments had a homogenizing effect on the total light spectrum, whereas light from multiple sources in larger environments was less mixed. The findings indicated above should be considered when making recommendations for practice or standards for architectural systems. The ambient lighting system, surface reflectance, and display and indicator implementation all factor into the users' spectral environment. A variety of low-cost solutions exist to mitigate the impact of light from non-architectural lighting systems, and there is much potential for system automation and for integration of display systems with the ambient environment. This team believes that proper planning can be used to avoid integration problems, and also believes that human-in-the-loop evaluations, real-world test and measurement, and computer modeling can be used to determine how changes to a process, display graphics, and architecture will help maintain the planned spectral operating lighting environment.
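A minimal NumPy sketch of the additive-spectrum point made above: the spectral irradiance reaching the observer's eye point is the sum of each source's spectral power weighted by a geometric/view factor, so a blue-rich display can pull the combined spectrum away from the planned ambient spectrum. The spectra and weights here are invented placeholders, not the project's measurements.

```python
import numpy as np

wavelengths_nm = np.arange(380, 781, 20)

def eye_point_spectrum(sources, weights):
    """Combined spectral irradiance at the eye point.

    sources: list of spectral power arrays sampled on wavelengths_nm
    weights: geometric/view factors (fraction of each source's output
             reaching the eye point), one per source.
    """
    return sum(w * s for w, s in zip(weights, sources))

# Invented placeholder spectra: a warm ambient source and a blue-rich display.
ambient = np.interp(wavelengths_nm, [380, 580, 780], [0.2, 1.0, 0.8])
display = np.interp(wavelengths_nm, [380, 450, 780], [0.1, 1.0, 0.3])

combined = eye_point_spectrum([ambient, display], weights=[0.8, 0.4])
print(combined.round(2))
```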
HTML 5 Displays for On-Board Flight Systems
NASA Technical Reports Server (NTRS)
Silva, Chandika
2016-01-01
During my internship at NASA in the summer of 2016, I was assigned to a project which dealt with developing a web server that would display telemetry and other system data using HTML 5, JavaScript, and CSS. By doing this, it would be possible to view the data across a variety of screen sizes and establish a standard that could be used to simplify communication and software development between NASA and other countries. Utilizing a web-based approach allowed us to add in more functionality, as well as make the displays more aesthetically pleasing for the users. When I was assigned to this project, my main task was to first establish communication with the current display server. This display server would output data from the on-board systems in XML format. Once communication was established, I was then asked to create a dynamic telemetry table web page that would update its header and change as new information came in. After this was completed, certain minor functionalities were added to the table, such as a hide-column option and a filter-by-system option. This was mainly for the purpose of making the table more useful for the users, as they can now filter and view relevant data. Finally, my last task was to create a graphical system display for all the systems on the spacecraft. This was by far the most challenging part of my internship, as finding a JavaScript library that was both free and contained useful functions to assist me in my task was difficult. In the end I was able to use the JointJs library and accomplish the task. With the help of my mentor and the HIVE lab team, we were able to establish stable communication with the display server. We also succeeded in creating a fully dynamic telemetry table and in developing a graphical system display for the advanced modular power system. Working at JSC for this internship has taught me a lot about coding in JavaScript and HTML 5. I was also introduced to the concept of developing software as a team and exposed to the different types of programs that are used to simplify team coding, such as GitLab. While at JSC, I took full advantage of and attended the lectures that were held on site. I learned a lot about what NASA does and about the interesting projects that are conducted here. One of the lectures I attended was about the selection process and the criteria used to select future astronauts for flight missions. This truly had an impact on my future plans, as it showed me that this path was a viable option for me. After this internship I plan on completing my undergraduate coursework and then moving on to a master's degree. However, during the time in which I will be completing my master's coursework, I would like to apply for the NASA Pathways graduate program and, if I am accepted, eventually move on to being a full-time civil servant. Working at NASA has not only been enjoyable, but full of information and great experiences that have motivated me to seek full-time employment here in the near future.
Real object-based 360-degree integral-floating display using multiple depth camera
NASA Astrophysics Data System (ADS)
Erdenebat, Munkh-Uchral; Dashdavaa, Erkhembaatar; Kwon, Ki-Chul; Wu, Hui-Ying; Yoo, Kwan-Hee; Kim, Young-Seok; Kim, Nam
2015-03-01
A novel 360-degree integral-floating display based on a real object is proposed. The general procedure of the display system is similar to that of conventional 360-degree integral-floating displays. Unlike previously presented 360-degree displays, the proposed system displays a 3D image generated from a real object in a 360-degree viewing zone. In order to display the real object in the 360-degree viewing zone, multiple depth cameras have been utilized to acquire the depth information around the object. Then, the 3D point cloud representations of the real object are reconstructed according to the acquired depth information. By using a special point cloud registration method, the multiple virtual 3D point cloud representations captured by each depth camera are combined into a single synthetic 3D point cloud model, and the elemental image arrays are generated for the newly synthesized 3D point cloud model from the given anamorphic optic system's angular step. The theory has been verified experimentally, and the results show that the proposed 360-degree integral-floating display can be an excellent way to display a real object in the 360-degree viewing zone.
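The merging step described above, in simplified form: once each depth camera's extrinsic pose is known, its points are transformed into a common frame and concatenated. The paper's special registration method, which estimates those poses, is not reproduced here; the transforms below are assumed to be given.

```python
import numpy as np

def merge_point_clouds(clouds, extrinsics):
    """Combine per-camera point clouds into one synthetic cloud.

    clouds:     list of (N_i, 3) arrays of points in each camera's frame
    extrinsics: list of 4x4 camera-to-world transforms (assumed already
                estimated, e.g. by the paper's registration method)
    """
    merged = []
    for pts, T in zip(clouds, extrinsics):
        homo = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous coordinates
        merged.append((homo @ T.T)[:, :3])                # apply camera-to-world pose
    return np.vstack(merged)

# Two toy clouds: the second camera is placed 1 m along +x.
T0 = np.eye(4)
T1 = np.eye(4); T1[0, 3] = 1.0
print(merge_point_clouds([np.zeros((2, 3)), np.zeros((2, 3))], [T0, T1]))
```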
Variable acuity remote viewing system flight demonstration
NASA Technical Reports Server (NTRS)
Fisher, R. W.
1983-01-01
The Variable Acuity Remote Viewing System (VARVS), originally developed under contract to the Navy (ONR) as a laboratory brassboard, was modified for flight demonstration. The VARVS system was originally conceived as a technique which could circumvent the acuity/field of view/bandwidth tradeoffs that exist in remote viewing, to provide a nearly eye-limited display in both field of view (160 deg) and resolution (2 min arc) while utilizing conventional TV sensing, transmission, and display equipment. The modifications for flight demonstration consisted of modifying the sensor so it could be installed and flown in a Piper PA20 aircraft equipped for remote control, and modifying the display equipment so it could be integrated with the NASA Research RPB (RPRV) remote control cockpit.
A novel emissive projection display (EPD) on transparent phosphor screen
NASA Astrophysics Data System (ADS)
Cheng, Botao; Sun, Leonard; Yu, Ge; Sun, Ted X.
2017-03-01
A new paradigm of digital projection is on the horizon, based on an innovative emissive screen that is made fully transparent. It can be readily applied to convert any surface into a high-image-quality emissive digital display without affecting the surface's appearance. For example, it can convert any glass window or windshield into a completely see-through display with unlimited field of view and viewing angles. It also enables a scalable and economical projection display on a pitch-black emissive screen, with a black level and image contrast that rival other emissive displays such as plasma displays or OLEDs.
NASA Astrophysics Data System (ADS)
Marson, Avishai; Stern, Adrian
2015-05-01
One of the main limitations of horizontal-parallax autostereoscopic displays is the horizontal resolution loss due to the need to repartition the pixels of the display panel among the multiple views. Recently we have shown that this problem can be alleviated by applying a color sub-pixel rendering technique [1]. Interpolated views are generated by down-sampling the panel pixels at the sub-pixel level, thus increasing the number of views. The method takes advantage of the lower acuity of the human eye for chromatic resolution. Here we supply further support for the technique by analyzing the spectra of the subsampled images.
A see-through holographic head-mounted display with the large viewing angle
NASA Astrophysics Data System (ADS)
Chen, Zhidong; sang, Xinzhu; Lin, Qiaojun; Li, Jin; Yu, Xunbo; Gao, Xin; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu; Xie, Songlin
2017-02-01
A novel solution for a large-viewing-angle holographic head-mounted display (HHMD) is presented. Divergent light is used for the hologram illumination to construct a large three-dimensional object outside the display at a short distance. A specially designed projection lens with a large numerical aperture projects the object constructed by the hologram to its real location. The presented solution can realize a compact HHMD system with a large field of view. The basic principle and the structure of the system are described. An augmented reality (AR) prototype with a size of 50 mm × 40 mm and a viewing angle above 60° is demonstrated.
Predicted Weather Display and Decision Support Interface for Flight Deck
NASA Technical Reports Server (NTRS)
Johnson, Walter W. (Inventor); Wong, Dominic G. (Inventor); Koteskey, Robert W. (Inventor); Wu, Shu-Chieh (Inventor)
2017-01-01
A system and method for providing visual depictions of a predictive weather forecast for in-route vehicle trajectory planning. The method includes displaying weather information on a graphical display, displaying vehicle position information on the graphical display, selecting a predictive interval, displaying predictive weather information for the predictive interval on the graphical display, and displaying predictive vehicle position information for the predictive interval on the graphical display, such that the predictive vehicle position information is displayed relative to the predictive weather information, for in-route trajectory planning.
Static omnidirectional stereoscopic display system
NASA Astrophysics Data System (ADS)
Barton, George G.; Feldman, Sidney; Beckstead, Jeffrey A.
1999-11-01
A unique three-camera stereoscopic omnidirectional viewing system is described, based on the periscopic panoramic camera presented in the 11/98 SPIE proceedings (AM13). The three panoramic cameras are combined equilaterally so that each leg of the triangle approximates the human inter-ocular spacing, allowing each panoramic camera to view 240° of the panoramic scene, the most counter-clockwise 120° being the left-eye field and the other 120° segment being the right-eye field. Field definition may be by green/red filtration or by time discrimination of the video signal. In the first instance a two-color spectacle is used in viewing the display; in the second instance LCD goggles are used to differentiate the right/left fields. Radially scanned vidicons or re-mapped CCDs may be used. The display consists of three vertically stacked 120° segments of the panoramic field of view with two fields per frame, Field A being the left-eye display and Field B the right-eye display.
Influence of viewing distance and size of tv on visual fatigue and feeling of involvement.
Sakamoto, Kiyomi; Asahara, Shigeo; Yamashita, Kuniko; Okada, Akira
2012-12-01
Using physiological and psychological measurements, we carried out experiments to investigate the influence of viewing distance and TV screen size on visual fatigue and feeling of involvement using 17-inch, 42-inch and 65-inch displays. The experiment was an ordinary viewing test with the content similar to everyday TV programs for one hour including scenery, sport, drama, etc., with commercials sandwiched in between. The number of participants was 16 (8 persons aged 21-31, and 8 persons aged 50-70) for each display size. In all, 48 participants viewed 3 display sizes. In our physiological evaluation, CFF (critical flicker fusion frequency), blink rate and a sympathetic nerve activity index were used; and in the psychological evaluation, questionnaires and interviews were employed. Our results, based on physiological and psychological measurements, suggest the optimum viewing distance to be around 165-220 cm, irrespective of screen size. Our evaluations, which are based on optimum viewing distance for minimal visual fatigue and a closer feeling of involvement, might therefore not agree with the currently recommended viewing distance, which is defined as 2 or 3 times the display's height.
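To see why a roughly fixed 165-220 cm optimum is in tension with a height-based rule, the short calculation below compares 2-3 times the screen height for the three diagonals tested; it assumes 16:9 panels, which may not match the actual sets used. Only for the largest set does the height-based range overlap the 165-220 cm band reported here.

import math

def height_cm(diagonal_in, aspect=(16, 9)):
    """Screen height in cm for a given diagonal (inches) and aspect ratio (assumed 16:9)."""
    w, h = aspect
    return diagonal_in * 2.54 * h / math.hypot(w, h)

for d in (17, 42, 65):
    h = height_cm(d)
    print(f"{d}-inch: height {h:4.1f} cm, 2-3x height = {2*h:5.1f}-{3*h:5.1f} cm")
# 17-inch: ~21 cm ->  ~42-64 cm
# 42-inch: ~52 cm -> ~105-157 cm
# 65-inch: ~81 cm -> ~162-243 cm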
Emissive and reflective properties of curved displays in relation to image quality
NASA Astrophysics Data System (ADS)
Boher, Pierre; Leroux, Thierry; Bignon, Thibault; Collomb-Patton, Véronique; Blanc, Pierre; Sandré-Chardonnal, Etienne
2016-03-01
Different aspects of the characterization of curved displays are presented. The limit of validity of viewing-angle measurements without angular distortion on such displays, using a goniometer or a Fourier-optics viewing-angle instrument, is given. If the condition cannot be fulfilled, the measurement can be corrected using a general angular distortion formula, as demonstrated experimentally using a Samsung Galaxy S6 edge phone display. The reflective properties of the display are characterized by measuring the spectral BRDF using a multispectral Fourier-optics viewing-angle system. The surface of a curved OLED TV has been measured. The BRDF patterns show a mirror-like behavior with an additional strong diffraction along the pixel lines and columns that affects the quality of the display when observed under parasitic lighting. These diffraction effects are very common on OLED surfaces. We finally introduce a commercial ray-tracing software package that can directly use the measured emissive and reflective properties of the display to make realistic simulations under any lighting environment.
Development and design of a late-model fitness test instrument based on LabView
NASA Astrophysics Data System (ADS)
Xie, Ying; Wu, Feiqing
2010-12-01
Undergraduates are pioneers of China's modernization program and undertake the historic mission of rejuvenating the nation in the 21st century, so their physical fitness is vital. A smart fitness test system can help them understand their fitness and health conditions, so that they can choose more suitable approaches and make practical exercise plans according to their own situation. Following future trends, a late-model fitness test instrument based on LabVIEW has been designed to remedy defects of today's instruments. The system hardware consists of five types of sensors with their peripheral circuits, an NI USB-6251 acquisition card, and a computer, while the system software, on the basis of LabVIEW, includes modules for user registration, data acquisition, data processing and display, and data storage. The system, featuring modularization and an open structure, can be revised according to actual needs. Test results have verified the system's stability and reliability.
Autostereoscopic display based on two-layer lenticular lenses.
Zhao, Wu-Xiang; Wang, Qiong-Hua; Wang, Ai-Hong; Li, Da-Hai
2010-12-15
An autostereoscopic display based on two-layer lenticular lenses is proposed. The two-layer lenticular lenses consist of one layer of conventional lenticular lenses and an additional layer of light-concentrating lenticular lenses. Two prototypes, of the proposed and the conventional autostereoscopic displays, are developed. At the optimum three-dimensional viewing distance, the luminance distribution of the prototypes along the horizontal direction is measured. By calculating the luminance distribution, the crosstalk of the prototypes is obtained. Compared with the conventional autostereoscopic display, the proposed autostereoscopic display has less crosstalk, a wider viewing angle, and higher efficiency of light utilization.
14. AERIAL VIEW OF ENGINE DISPLAY INSIDE PASSENGER CAR SHOP ...
14. AERIAL VIEW OF ENGINE DISPLAY INSIDE PASSENGER CAR SHOP (NOW A TRANSPORTATION MUSEUM) - Baltimore & Ohio Railroad, Mount Clare Passenger Car Shop, Southwest corner of Pratt & Poppleton Streets, Baltimore, Independent City, MD
ListingAnalyst: A program for analyzing the main output file from MODFLOW
Winston, Richard B.; Paulinski, Scott
2014-01-01
ListingAnalyst is a Windows® program for viewing the main output file from MODFLOW-2005, MODFLOW-NWT, or MODFLOW-LGR. It organizes and displays large files quickly without using excessive memory. The sections and subsections of the file are displayed in a tree-view control, which allows the user to navigate quickly to desired locations in the files. ListingAnalyst gathers error and warning messages scattered throughout the main output file and displays them all together in an error and a warning tab. A grid view displays tables in a readable format and allows the user to copy the table into a spreadsheet. The user can also search the file for terms of interest.
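The error- and warning-gathering feature can be pictured with a minimal sketch like the one below; it is not ListingAnalyst's actual parser, and the keyword strings and file name are assumptions.

def collect_messages(listing_path, keywords=("ERROR", "WARNING")):
    """Collect lines from a MODFLOW listing file that contain given keywords.

    The keyword strings are illustrative; real listing files may flag
    problems with different wording.
    """
    found = {k: [] for k in keywords}
    with open(listing_path, "r", errors="replace") as fh:
        for lineno, line in enumerate(fh, start=1):
            upper = line.upper()
            for key in keywords:
                if key in upper:
                    found[key].append((lineno, line.rstrip()))
    return found

# messages = collect_messages("model.lst")            # hypothetical file name
# print(len(messages["WARNING"]), "warning lines found")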
Distributed volume rendering and stereoscopic display for radiotherapy treatment planning
NASA Astrophysics Data System (ADS)
Hancock, David J.
The thesis describes attempts to use direct volume rendering techniques to produce visualisations useful in the preparation of radiotherapy treatment plans. The selected algorithms allow the generation of data-rich images which can be used to assist the radiologist in comprehending complicated three-dimensional phenomena. The treatment plans are formulated using a three dimensional model which combines patient data acquired from CT scanning and the results of a simulation of the radiation delivery. Multiple intersecting beams with shaped profiles are used and the region of intersection is designed to closely match the position and shape of the targeted tumour region. The proposed treatment must be evaluated as to how well the target region is enveloped by the high dose occurring where the beams intersect, and also as to whether the treatment is likely to expose non-tumour regions to unacceptably high levels of radiation. Conventionally the plans are reviewed by examining CT images overlaid with contours indicating dose levels. Volume visualisation offers a possible saving in time by presenting the data in three dimensional form thereby removing the need to examine a set of slices. The most difficult aspect is to depict unambiguously the relationships between the different data. For example, if a particular beam configuration results in unintended irradiation of a sensitive organ, then it is essential to ensure that this is clearly displayed, and that the 3D relationships between the beams and other data can be readily perceived in order to decide how to correct the problem. The user interface has been designed to present a unified view of the different techniques available for identifying features of interest within the data. The system differs from those previously reported in that complex visualisations can be constructed incrementally, and several different combinations of features can be viewed simultaneously. To maximise the quantity of relevant data presented in a single view, large regions of the data are rendered very transparently. This is done to ensure that interesting features buried deep within the data are visible from any viewpoint. Rendering images with high degrees of transparency raises a number of problems, primarily the drop in quality of depth cues in the image, but also the increase in computational requirements over surface-based visualisations. One solution to the increase in image generation times is the use of parallel architectures, which are an attractive platform for large visualisation tasks such as this. A parallel implementation of the direct volume rendering algorithm is described and its performance is evaluated. Several issues must be addressed in implementing an interactive rendering system in a distributed computing environment: principally overcoming the latency and limited bandwidth of the typical network connection. This thesis reports a pipelining strategy developed to improve the level of interactivity in such situations. Stereoscopic image presentation offers a method to offset the reduction in clarity of the depth information in the transparent images. The results of an investigation into the effectiveness of stereoscopic display as an aid to perception in highly transparent images are presented. Subjects were shown scenes of a synthetic test data set in which conventional depth cues were very limited. 
The experiments were designed to discover what effect stereoscopic viewing of the transparent, volume rendered images had on user's depth perception.
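The accumulation behind direct volume rendering with very transparent transfer functions can be sketched as standard front-to-back alpha compositing along a single ray; the transfer function and sample values below are illustrative only, not the thesis' actual settings.

import numpy as np

def composite_front_to_back(samples, transfer):
    """Front-to-back compositing of scalar samples along one ray.

    samples  : 1-D array of scalar values at increasing depth
    transfer : function mapping a scalar value to (rgb, alpha)
    returns  : accumulated rgb colour and opacity for the ray
    """
    color = np.zeros(3)
    alpha = 0.0
    for s in samples:
        rgb, a = transfer(s)
        color += (1.0 - alpha) * a * np.asarray(rgb)  # weight by remaining transparency
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                              # early ray termination
            break
    return color, alpha

# a deliberately transparent transfer function (at most 5% opacity per sample)
transfer = lambda s: ((s, 0.2, 1.0 - s), 0.05 * s)
ray = np.linspace(0.0, 1.0, 200)                      # fake scalar field along the ray
print(composite_front_to_back(ray, transfer))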
Integration of a 3D perspective view in the navigation display: featuring pilot's mental model
NASA Astrophysics Data System (ADS)
Ebrecht, L.; Schmerwitz, S.
2015-05-01
Synthetic vision systems (SVS) are a spreading technology in the avionics domain. Several studies show enhanced situational awareness when using synthetic vision. Since the introduction of synthetic vision, a steady evolution has been underway concerning the primary flight display (PFD) and the navigation display (ND). The main improvements of the ND comprise the representation of colored enhanced ground proximity warning system (EGPWS), weather radar, and TCAS information. Synthetic vision seems to offer high potential to further enhance cockpit display systems. Especially concerning the current trend of having a 3D perspective view in an SVS-PFD while leaving the navigational content as well as methods of interaction unchanged, the question arises if and how the gap between both displays might evolve into a serious problem. This issue becomes important in relation to the transition and combination of strategic and tactical flight guidance. Hence, the pros and cons of 2D and 3D views generally, as well as the gap between the egocentric perspective 3D view of the PFD and the exocentric 2D top and side views of the ND, will be discussed. Further, a concept for the integration of a 3D perspective view, i.e., a bird's-eye view, into a synthetic vision ND will be presented. The combination of 2D and 3D views in the ND enables a better correlation of the ND and the PFD. Additionally, this supports the building of the pilot's mental model. The authors believe it will improve situational and spatial awareness. It might prove to further raise the safety margin when operating in mountainous areas.
West wall, display area (room 101), view 4 of 4: ...
West wall, display area (room 101), view 4 of 4: northwest corner, with D.M. logistics office below (room 137), and D.O./D.D.O. offices above. Lower stairs lead to entry shown in view 13 - March Air Force Base, Strategic Air Command, Combat Operations Center, 5220 Riverside Drive, Moreno Valley, Riverside County, CA
Wrap-Around Out-the-Window Sensor Fusion System
NASA Technical Reports Server (NTRS)
Fox, Jeffrey; Boe, Eric A.; Delgado, Francisco; Secor, James B.; Clark, Michael R.; Ehlinger, Kevin D.; Abernathy, Michael F.
2009-01-01
The Advanced Cockpit Evaluation System (ACES) includes communication, computing, and display subsystems, mounted in a van, that synthesize out-the-window views to approximate the views of the outside world as they would be seen from the cockpit of a crewed spacecraft or aircraft, or from the remote-control station of a ground vehicle or UAV (unmanned aerial vehicle). The system includes five flat-panel display units arranged approximately in a semicircle around an operator, like cockpit windows. The scene displayed on each panel represents the view through the corresponding cockpit window. Each display unit is driven by a personal computer equipped with a video-capture card that accepts live input from any of a variety of sensors (typically, visible and/or infrared video cameras). Software running in the computers blends the live video images with synthetic images that could be generated, for example, from heads-up-display outputs, waypoints, corridors, or from satellite photographs of the same geographic region. Data from a Global Positioning System receiver and an inertial navigation system aboard the remote vehicle are used by the ACES software to keep the synthetic and live views in registration. If the live image were to fail, the synthetic scenes could still be displayed to maintain situational awareness.
Network of fully integrated multispecialty hospital imaging systems
NASA Astrophysics Data System (ADS)
Dayhoff, Ruth E.; Kuzmak, Peter M.
1994-05-01
The Department of Veterans Affairs (VA) DHCP Imaging System records clinically significant diagnostic images selected by medical specialists in a variety of departments, including radiology, cardiology, gastroenterology, pathology, dermatology, hematology, surgery, podiatry, dental clinic, and emergency room. These images are displayed on workstations located throughout a medical center. All images are managed by the VA's hospital information system, allowing integrated displays of text and image data across medical specialties. Clinicians can view screens of 'thumbnail' images for all studies or procedures performed on a selected patient. Two VA medical centers currently have DHCP Imaging Systems installed, and others are planned. All VA medical centers and other VA facilities are connected by a wide area packet-switched network. The VA's electronic mail software has been modified to allow inclusion of binary data such as images in addition to the traditional text data. Testing of this multimedia electronic mail system is underway for medical teleconsultation.
The hubris hypothesis: The downside of comparative optimism displays.
Hoorens, Vera; Van Damme, Carolien; Helweg-Larsen, Marie; Sedikides, Constantine
2017-04-01
According to the hubris hypothesis, observers respond more unfavorably to individuals who express their positive self-views comparatively than to those who express their positive self-views non-comparatively, because observers infer that the former hold a more disparaging view of others and particularly of observers. Two experiments extended the hubris hypothesis in the domain of optimism. Observers attributed less warmth (but not less competence) to, and showed less interest in affiliating with, an individual displaying comparative optimism (the belief that one's future will be better than others' future) than with an individual displaying absolute optimism (the belief that one's future will be good). Observers responded differently to individuals displaying comparative versus absolute optimism, because they inferred that the former held a gloomier view of the observers' future. Consistent with previous research, observers still attributed more positive traits to a comparative or absolute optimist than to a comparative or absolute pessimist. Copyright © 2016. Published by Elsevier Inc.
Pixel-level tunable liquid crystal lenses for auto-stereoscopic display
NASA Astrophysics Data System (ADS)
Li, Kun; Robertson, Brian; Pivnenko, Mike; Chu, Daping; Zhou, Jiong; Yao, Jun
2014-02-01
Mobile video and gaming are now widely used, and delivery of a glasses-free 3D experience is of both research and development interest. The key drawbacks of a conventional 3D display based on a static lenticular lenslet array and parallax barriers are low resolution, limited viewing angle and reduced brightness, mainly because of the need for multiple pixels for each object point. This study describes the concept and performance of pixel-level cylindrical liquid crystal (LC) lenses, which are designed to steer light to the left and right eye sequentially to form stereo parallax. The width of the LC lenses can be as small as 20-30 μm, so that the associated auto-stereoscopic display will have the same resolution as the 2D display panel in use. Such a thin sheet of tunable LC lens array can be applied directly on existing mobile displays, and can deliver a 3D viewing experience while maintaining 2D viewing capability. Transparent electrodes were laser patterned to achieve single-pixel lens resolution, and a highly birefringent LC material was used to realise a large diffraction angle for a wide field of view. Simulation was carried out to model the intensity profile at the viewing plane and optimise the lens array based on the measured LC phase profile. The measured viewing angle and intensity profile were compared with the simulation results.
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.; Liston, Dorion B.
2011-01-01
Visual motion and other visual cues are used by tower controllers to provide important support for their control tasks at and near airports. These cues are particularly important for anticipated separation. Some of them, which we call visual features, have been identified from structured interviews and discussions with 24 active air traffic controllers or supervisors. The visual information that these features provide has been analyzed with respect to possible ways it could be presented at a remote tower that does not allow a direct view of the airport. Two types of remote towers are possible. One could be based on a plan-view, map-like computer-generated display of the airport and its immediate surroundings. An alternative would present a composite perspective view of the airport and its surroundings, possibly provided by an array of radially mounted cameras positioned at the airport in lieu of a tower. An initial, more detailed analysis of one of the specific landing cues identified by the controllers, landing deceleration, is provided as a basis for evaluating how controllers might detect and use it. Understanding other such cues will help identify the information that may be degraded or lost in a remote or virtual tower not located at the airport. Some initial suggestions as to how some of the lost visual information might be presented in displays are mentioned. Many of the cues considered involve visual motion, though some important static cues are also discussed.
Glasses-free large size high-resolution three-dimensional display based on the projector array
NASA Astrophysics Data System (ADS)
Sang, Xinzhu; Wang, Peng; Yu, Xunbo; Zhao, Tianqi; Gao, Xing; Xing, Shujun; Yu, Chongxiu; Xu, Daxiong
2014-11-01
Natural three-dimensional (3D) display similar to real life normally requires a huge amount of spatial information in order to increase the number of views and to provide smooth motion parallax. To realize natural 3D video display without eyewear, however, the minimum 3D information required by the eyes should be used, so as to reduce the requirements on display devices and processing time. For a 3D display with smooth motion parallax similar to a holographic stereogram, the size of the virtual viewing slit should be smaller than the pupil size of the eye at the largest viewing distance. To increase the resolution, two glasses-free 3D display systems, rear- and front-projection, are presented based on space multiplexing with a micro-projector array and specially designed 3D diffuse screens with sizes above 1.8 m × 1.2 m. The displayed clear depths are larger than 1.5 m. The flexibility in terms of digitized recording and reconstruction based on the 3D diffuse screen relieves the limitations of conventional 3D display technologies, enabling fully continuous, natural 3D display. In the display system, aberration is well suppressed and low crosstalk is achieved.
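A back-of-the-envelope check of the "viewing slit smaller than the pupil" condition, with assumed view counts, viewing range and distance (not the authors' actual parameters):

import math

def slit_width_mm(view_count, fov_deg, distance_m):
    """Approximate width of one virtual viewing slit at a given distance.

    Adjacent views are separated by fov/view_count; at distance D the slit
    spans roughly D * angular_pitch. All numbers here are assumptions.
    """
    pitch_rad = math.radians(fov_deg / view_count)
    return distance_m * pitch_rad * 1000.0

# e.g. views spread over a 45 degree viewing range, checked against a
# ~4-5 mm pupil at an assumed 5 m maximum viewing distance
print(f"{slit_width_mm(300, 45.0, 5.0):.1f} mm per slit at 5 m")   # ~13 mm -> wider than the pupil
print(f"{slit_width_mm(1000, 45.0, 5.0):.1f} mm per slit at 5 m")  # ~3.9 mm -> below pupil size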
UNAVCO Software and Services for Visualization and Exploration of Geoscience Data
NASA Astrophysics Data System (ADS)
Meertens, C.; Wier, S.
2007-12-01
UNAVCO has been involved in visualization of geoscience data to support education and research for several years. An early and ongoing service is the Jules Verne Voyager, a web browser applet built on the GMT that displays any area on Earth, with many data set choices, including maps, satellite images, topography, geoid heights, sea-floor ages, strain rates, political boundaries, rivers and lakes, earthquake and volcano locations, focal mechanisms, stress axes, and observed and modeled plate motion and deformation velocity vectors from geodetic measurements around the world. As part of the GEON project, UNAVCO has developed the GEON IDV, a research-level, 4D (earth location, depth and/or altitude, and time), Java application for interactive display and analysis of geoscience data. The GEON IDV is designed to meet the challenge of investigating complex, multi-variate, time-varying, three-dimensional geoscience data anywhere on earth. The GEON IDV supports simultaneous displays of data sets from differing sources, with complete control over colors, time animation, map projection, map area, point of view, and vertical scale. The GEON IDV displays gridded and point data, images, GIS shape files, and several other types of data. The GEON IDV has symbols and displays for GPS velocity vectors, seismic tomography, earthquake focal mechanisms, earthquake locations with magnitude or depth, seismic ray paths in 3D, seismic anisotropy, convection model visualization, earth strain axes and strain field imagery, and high-resolution 3D topographic relief maps. Multiple data sources and display types may appear in one view. As an example of GEON IDV utility, it can display hypocenters under a volcano, a surface geology map of the volcano draped over 3D topographic relief, town locations and political boundaries, and real-time 3D weather radar clouds of volcanic ash in the atmosphere, with time animation. The GEON IDV can drive a GeoWall or other 3D stereo system. IDV output includes imagery, movies, and KML files for Google Earth use of IDV static images, where Google Earth can handle the display. The IDV can be scripted to create display images on user request or automatically on data arrival, offering the use of the IDV as a back end to support a data web site. We plan to extend the power of the IDV by accepting new data types and data services, such as GeoSciML. An active program of online and video training in GEON IDV use is planned. UNAVCO will support users who need assistance converting their data to the standard formats used by the GEON IDV. The UNAVCO Facility provides web-accessible support for Google Earth and Google Maps display of any of more than 9500 GPS stations and survey points, including metadata for each installation. UNAVCO provides corresponding Open Geospatial Consortium (OGC) web services with the same data. UNAVCO's goal is to facilitate data access, interoperability, and efficient searches, exploration, and use of data by promoting web services, standards for GEON IDV data formats and metadata, and software able to simultaneously read and display multiple data sources, formats, and map locations or projections. Retention and propagation of semantics and metadata with observational and experimental values is essential for interoperability and understanding diverse data sources.
Using virtual reality for science mission planning: A Mars Pathfinder case
NASA Technical Reports Server (NTRS)
Kim, Jacqueline H.; Weidner, Richard J.; Sacks, Allan L.
1994-01-01
NASA's Mars Pathfinder Project requires a Ground Data System (GDS) that supports both engineering and scientific payloads with reduced mission operations staffing and short planning schedules. Also, successful surface operation of the lander camera requires efficient mission planning and accurate pointing of the camera. To meet these challenges, a new software strategy was developed that integrates virtual reality technology with existing navigational ancillary information and image processing capabilities. The result is interactive, workstation-based application software that provides a high-resolution, 3-dimensional, stereo display of Mars as if it were viewed through the lander camera. The design, implementation strategy, and parametric specification phases for the development of this software have been completed, and the prototype tested. When completed, the software will allow scientists and mission planners to access simulated and actual scenes of Mars' surface. The perspective from the lander camera will enable scientists to plan activities more accurately and completely. The application will also support the sequence and command generation process and will allow testing and verification of camera pointing commands via simulation.
Can a More User-Friendly Medicare Plan Finder Improve Consumers' Selection of Medicare Plans?
Martino, Steven C; Kanouse, David E; Miranda, David J; Elliott, Marc N
2017-10-01
To evaluate the efficacy for consumers of two potential enhancements to the Medicare Plan Finder (MPF)-a simplified data display and a "quick links" home page designed to match the specific tasks that users seek to accomplish on the MPF. Participants (N = 641) were seniors and adult caregivers of seniors who were recruited from a national online panel. Participants browsed a simulated version of the MPF, made a hypothetical plan choice, and reported on their experience. Participants were randomly assigned to one of eight conditions in a fully factorial design: 2 home pages (quick links, current MPF home page) × 2 data displays (simplified, current MPF display) × 2 plan types (stand-alone prescription drug plan [PDP], Medicare Advantage plan with prescription drug coverage [MA-PD]). The quick links page resulted in more favorable perceptions of the MPF, improved users' understanding of the information, and increased the probability of choosing the objectively best plan. The simplified data display resulted in a more favorable evaluation of the website, better comprehension of the displayed information, and, among those choosing a PDP only, an increased probability of choosing the best plan. Design enhancements could markedly improve average website users' understanding, ability to use, and experience of using the MPF. © Health Research and Educational Trust.
SU-E-T-154: Establishment and Implement of 3D Image Guided Brachytherapy Planning System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, S; Zhao, S; Chen, Y
2014-06-01
Purpose: The inability to observe the dose distribution intuitively is a limitation of existing 2D pre-implantation dose planning. Meanwhile, a navigation module is essential to improve the accuracy and efficiency of the implantation. Hence, a 3D image guided brachytherapy planning system conducting dose planning and intra-operative navigation based on 3D multi-organ reconstruction is developed. Methods: Multiple organs, including the tumor, are reconstructed in one sweep of all the segmented images using the multi-organ reconstruction method. The reconstructed organ group establishes a three-dimensional visualized operative environment. The 3D dose maps of the three-dimensional conformal localized dose planning are calculated with the Monte Carlo method, while the corresponding isodose lines and isodose surfaces are displayed in a stereo view. The real-time intra-operative navigation is based on an electromagnetic tracking system (ETS) and the fusion between MRI and ultrasound images. Applying the least-squares method, the coordinate registration between the 3D models and the patient is realized by the ETS, which is calibrated by a laser tracker. The system is validated by working on eight patients with prostate cancer. The navigation has passed precision measurement in the laboratory. Results: The traditional marching cubes (MC) method reconstructs one organ at a time and assembles the results together. Compared to MC, the presented multi-organ reconstruction method better preserves the integrity and connectivity of the reconstructed organs. The 3D conformal localized dose planning, realizing the 'exfoliation display' of different isodose surfaces, helps ensure that the dose distribution encompasses the nidus while avoiding injury to healthy tissues. During navigation, surgeons can observe the coordinates of the instruments in real time using the ETS. After calibration, the accuracy error of the needle position is less than 2.5 mm according to the experiments. Conclusion: The speed and quality of 3D reconstruction, the efficiency in dose planning, and the accuracy in navigation can all be improved simultaneously.
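The least-squares coordinate registration mentioned in the Methods can be sketched with the standard closed-form rigid fit (Kabsch/SVD); this is a generic formulation, not necessarily the authors' exact implementation.

import numpy as np

def rigid_fit(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q.

    P, Q : (N, 3) arrays of corresponding points (e.g. fiducials located by
           the tracking system and the same fiducials in image coordinates).
    Returns R (3x3 rotation) and t (3,) such that R @ p + t ~= q.
    """
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t

# quick self-test with a known rotation and translation
rng = np.random.default_rng(0)
P = rng.normal(size=(6, 3))
angle = np.radians(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
Q = P @ R_true.T + np.array([2.0, -1.0, 0.5])
R_est, t_est = rigid_fit(P, Q)
print(np.allclose(R_est, R_true), np.allclose(t_est, [2.0, -1.0, 0.5]))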
NASA Technical Reports Server (NTRS)
Fuller, H. V.
1974-01-01
A display system was developed to provide flight information to the ground based pilots of radio controlled models used in flight research programs. The display system utilizes data received by telemetry from the model, and presents the information numerically in the field of view of the binoculars used by the pilots.
Sakata, S; Grove, P M; Hill, A; Watson, M O; Stevenson, A R L
2017-07-01
This study compared precision of depth judgements, technical performance and workload using two-dimensional (2D) and three-dimensional (3D) laparoscopic displays across different viewing distances. It also compared the accuracy of 3D displays with natural viewing, along with the relationship between stereoacuity and 3D laparoscopic performance. A counterbalanced within-subjects design with random assignment to testing sequences was used. The system could display 2D or 3D images with the same set-up. A Howard-Dolman apparatus assessed precision of depth judgements, and three laparoscopic tasks (peg transfer, navigation in space and suturing) assessed performance (time to completion). Participants completed tasks in all combinations of two viewing modes (2D, 3D) and two viewing distances (1 m, 3 m). Other measures administered included the National Aeronautics and Space Administration Task Load Index (perceived workload) and the Randot® Stereotest (stereoacuity). Depth judgements were 6.2 times as precise at 1 m and 3.0 times as precise at 3 m using 3D versus 2D displays (P < 0.001). Participants performed all laparoscopic tasks faster in 3D at both 1 and 3 m (P < 0.001), with mean completion times up to 64 per cent shorter for 3D versus 2D displays. Workload was lower for 3D displays (up to 34 per cent) than for 2D displays at both viewing distances (P < 0.001). Greater viewing distance inhibited performance for two laparoscopic tasks, and increased perceived workload for all three (P < 0.001). Higher stereoacuity was associated with shorter completion times for the navigating in space task performed in 3D at 1 m (r = -0.40, P = 0.001). 3D displays offer large improvements over 2D displays in precision of depth judgements, technical performance and perceived workload. © 2017 The Authors. BJS published by John Wiley & Sons Ltd on behalf of BJS Society Ltd.
Dual-view integral imaging three-dimensional display using polarized glasses.
Wu, Fei; Lv, Guo-Jiao; Deng, Huan; Zhao, Bai-Chuan; Wang, Qiong-Hua
2018-02-20
We propose a dual-view integral imaging (DVII) three-dimensional (3D) display using polarized glasses. The DVII 3D display consists of a display panel, a polarized parallax barrier, a microlens array, and two pairs of polarized glasses. Two kinds of elemental images, which are captured from two different 3D scenes, are alternately arranged on the display panel. The polarized parallax barrier is attached to the display panel and composed of two kinds of units that are also alternately arranged. The polarization directions between adjacent units are perpendicular. The polarization directions of the two pairs of polarized glasses are the same as those of the two kinds of units of the polarized parallax barrier, respectively. The lights emitted from the two kinds of elemental images are modulated by the corresponding polarizer units and microlenses, respectively. Two different 3D images are reconstructed in the viewing zone and separated by using two pairs of polarized glasses. A prototype of the DVII 3D display is developed and two 3D images can be presented simultaneously, verifying the hypothesis.
Thomas, W P; Gaber, C E; Jacobs, G J; Kaplan, P M; Lombard, C W; Moise, N S; Moses, B L
1993-01-01
Recommendations are presented for standardized imaging planes and display conventions for two-dimensional echocardiography in the dog and cat. Three transducer locations ("windows") provide access to consistent imaging planes: the right parasternal location, the left caudal (apical) parasternal location, and the left cranial parasternal location. Recommendations for image display orientations are very similar to those for comparable human cardiac images, with the heart base or cranial aspect of the heart displayed to the examiner's right on the video display. From the right parasternal location, standard views include a long-axis four-chamber view and a long-axis left ventricular outflow view, and short-axis views at the levels of the left ventricular apex, papillary muscles, chordae tendineae, mitral valve, aortic valve, and pulmonary arteries. From the left caudal (apical) location, standard views include long-axis two-chamber and four-chamber views. From the left cranial parasternal location, standard views include a long-axis view of the left ventricular outflow tract and ascending aorta (with variations to image the right atrium and tricuspid valve, and the pulmonary valve and pulmonary artery), and a short-axis view of the aortic root encircled by the right heart. These images are presented by means of idealized line drawings. Adoption of these standards should facilitate consistent performance, recording, teaching, and communicating results of studies obtained by two-dimensional echocardiography.
Biocular vehicle display optical designs
NASA Astrophysics Data System (ADS)
Chu, H.; Carter, Tom
2012-06-01
Biocular vehicle display optics is a fast collimating lens (f / # < 0.9) that presents the image of the display at infinity to both eyes of the viewer. Each eye captures the scene independently and the brain merges the two images into one through the overlapping portions of the images. With the recent conversion from analog CRT based displays to lighter, more compact active-matrix organic light-emitting diodes (AMOLED) digital image sources, display optical designs have evolved to take advantage of the higher resolution AMOLED image sources. To maximize the field of view of the display optics and fully resolve the smaller pixels, the digital image source is pre-magnified by relay optics or a coherent taper fiber optics plate. Coherent taper fiber optics plates are used extensively to: 1. Convert plano focal planes to spherical focal planes in order to eliminate Petzval field curvature. This elimination enables faster lens speed and/or larger field of view of eye pieces, display optics. 2. Provide pre-magnification to lighten the work load of the optics to further increase the numerical aperture and/or field of view. 3. Improve light flux collection efficiency and field of view by collecting all the light emitted by the image source and guiding imaging light bundles toward the lens aperture stop. 4. Reduce complexity of the optical design and overall packaging volume by replacing pre-magnification optics with a compact taper fiber optics plate. This paper will review and compare the performance of biocular vehicle display designs without and with taper fiber optics plate.
Characterization and optimization of 3D-LCD module design
NASA Astrophysics Data System (ADS)
van Berkel, Cees; Clarke, John A.
1997-05-01
Autostereoscopic displays with flat panel liquid crystal display and lenticular sheets are receiving much attention. Multiview 3D-LCD is truly autostereoscopic because no head tracking is necessary and the technology is well poised to become a mass market consumer 3D display medium as the price of liquid crystal displays continues to drop. Making the viewing experience as natural as possible is of prime importance. The main challenges are to reduce the picket fence effect of the black mask and to try to get away with as few perspective views as possible. Our solution is to 'blur' the boundaries between the views. This hides the black mask image by spreading it out and softens the transition between one view and the next, encouraging the user to perceive 'solid objects' instead of a succession of flipping views. One way to achieve this is by introducing a new pixel design in which the pixels are slanted with respect to the column direction. Another way is to place the lenticular at a small (9.46 degree) angle with respect to the LCD columns. The effect of either method is that, as the observer moves sideways in front of the display, he always 'sees' a constant amount of black mask. This renders the black mask, in effect, invisible and eliminates the picket fence effect.
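The quoted 9.46 degree slant is numerically consistent with arctan(1/6), a value often tied to the RGB sub-pixel layout in slanted-lenticular designs; a quick check:

import math
print(math.degrees(math.atan(1 / 6)))   # 9.462..., matching the quoted 9.46 degree slant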
Raster graphic helmet-mounted display study
NASA Technical Reports Server (NTRS)
Beamon, William S.; Moran, Susanna I.
1990-01-01
A design of a helmet-mounted display system is presented, including a design specification and development plan for the selected design approach. The requirements for the helmet-mounted display system and a survey of applicable technologies are presented. Three helmet display concepts are then described, which utilize lasers, liquid crystal displays (LCDs), and subminiature cathode ray tubes (CRTs), respectively. The laser approach is further developed in a design specification and a development plan.
Digital 3D holographic display using scattering layers for enhanced viewing angle and image size
NASA Astrophysics Data System (ADS)
Yu, Hyeonseung; Lee, KyeoReh; Park, Jongchan; Park, YongKeun
2017-05-01
In digital 3D holographic displays, the generation of realistic 3D images has been hindered by limited viewing angle and image size. Here we demonstrate a digital 3D holographic display using volume speckle fields produced by scattering layers in which both the viewing angle and the image size are greatly enhanced. Although volume speckle fields exhibit random distributions, the transmitted speckle fields have a linear and deterministic relationship with the input field. By modulating the incident wavefront with a digital micro-mirror device, volume speckle patterns are controlled to generate 3D images of micrometer-size optical foci with 35° viewing angle in a volume of 2 cm × 2 cm × 2 cm.
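The deterministic input-output relation exploited above can be pictured with a toy transmission-matrix model: a random complex matrix stands in for the scattering layer, and a binary DMD pattern keeps only the input segments that add roughly in phase at one chosen output point. This is a generic wavefront-shaping sketch with made-up sizes, not the authors' calibration or image-generation pipeline.

import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 1024, 4096                       # DMD segments, output speckle grains

# toy transmission matrix: complex Gaussian entries stand in for the measured
# linear relation between DMD segments and the output field
T = (rng.normal(size=(n_out, n_in)) + 1j * rng.normal(size=(n_out, n_in))) / np.sqrt(2 * n_in)

target = 123                                   # output point we want to brighten
# binary amplitude rule: switch on only the segments whose contribution at the
# target adds roughly in phase (real part of the matrix element positive)
pattern = (T[target].real > 0).astype(float)

out_all_on = T @ np.ones(n_in)                 # unshaped illumination
out_shaped = T @ pattern                       # wavefront-shaped illumination

enhancement = np.abs(out_shaped[target])**2 / np.mean(np.abs(out_all_on)**2)
print(f"target intensity enhancement ~ {enhancement:.0f}x over an average speckle grain")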
NASA Astrophysics Data System (ADS)
Meng, Yang; Yu, Zhongyuan; Jia, Fangda; Zhang, Chunyu; Wang, Ye; Liu, Yumin; Ye, Han; Chen, Laurence Lujun
2017-10-01
A multi-view autostereoscopic three-dimensional (3D) system is built by using a 2D display screen and a customized parallax-barrier shutter (PBS) screen. The shutter screen is controlled dynamically by an address-driving matrix circuit and is placed in front of the display screen at a certain location. The system can achieve the densest viewpoints due to its special optical and geometric design, which is based on the concept of "eye space". The resolution of 3D imaging is not reduced compared to 2D mode, thanks to limited time-division multiplexing technology. Diffraction effects may play an important role in 3D display imaging quality, especially for small screens such as an iPhone screen. For small screens, diffraction may contribute to crosstalk between binocular views and degrade image brightness uniformity. Therefore, diffraction effects are analyzed and considered in a one-dimensional shutter-screen model of the 3D display, in which a numerical simulation of light propagating from display pixels on the display screen through the parallax-barrier slits to each viewing zone in eye space is performed. The simulation results provide guidance on the critical screen size above which the impact of diffraction effects is negligible and below which diffraction effects must be taken into account. Finally, the simulation results are compared to the corresponding experimental measurements and observations, with discussion.
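The scale at which slit diffraction starts to matter can be estimated with a simple Fraunhofer single-slit model; the slit width, wavelength and viewing distance below are assumptions for illustration, not values from the paper.

import numpy as np

wavelength = 550e-9        # green light, m
slit_width = 30e-6         # assumed barrier aperture, m (order of a sub-pixel)
distance = 0.4             # assumed viewing distance, m

theta = np.linspace(-0.03, 0.03, 2001)                              # rad
intensity = np.sinc(slit_width * np.sin(theta) / wavelength) ** 2   # np.sinc(x) = sin(pi x)/(pi x)

half_width_rad = wavelength / slit_width               # first zero of the pattern
spread_mm = 2 * half_width_rad * distance * 1000.0
print(f"central diffraction lobe spans ~{spread_mm:.1f} mm at {distance*100:.0f} cm")  # ~14.7 mm
# a lobe this wide overlaps neighbouring viewing zones, which is one way
# diffraction can introduce crosstalk on small, fine-pitch screens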
Computational see-through near-eye displays
NASA Astrophysics Data System (ADS)
Maimone, Andrew S.
See-through near-eye displays with the form factor and field of view of eyeglasses are a natural choice for augmented reality systems: the non-encumbering size enables casual and extended use and large field of view enables general-purpose spatially registered applications. However, designing displays with these attributes is currently an open problem. Support for enhanced realism through mutual occlusion and the focal depth cues is also not found in eyeglasses-like displays. This dissertation provides a new strategy for eyeglasses-like displays that follows the principles of computational displays, devices that rely on software as a fundamental part of image formation. Such devices allow more hardware simplicity and flexibility, showing greater promise of meeting form factor and field of view goals while enhancing realism. This computational approach is realized in two novel and complementary see-through near-eye display designs. The first subtractive approach filters omnidirectional light through a set of optimized patterns displayed on a stack of spatial light modulators, reproducing a light field corresponding to in-focus imagery. The design is thin and scales to wide fields of view; see-through is achieved with transparent components placed directly in front of the eye. Preliminary support for focal cues and environment occlusion is also demonstrated. The second additive approach uses structured point light illumination to form an image with a minimal set of rays. Each of an array of defocused point light sources is modulated by a region of a spatial light modulator, essentially encoding an image in the focal blur. See-through is also achieved with transparent components and thin form factors and wide fields of view (>= 100 degrees) are demonstrated. The designs are examined in theoretical terms, in simulation, and through prototype hardware with public demonstrations. This analysis shows that the proposed computational near-eye display designs offer a significantly different set of trade-offs than conventional optical designs. Several challenges remain to make the designs practical, most notably addressing diffraction limits.
Development of 40-in hybrid hologram screen for auto-stereoscopic video display
NASA Astrophysics Data System (ADS)
Song, Hyun Ho; Nakashima, Y.; Momonoi, Y.; Honda, Toshio
2004-06-01
Auto-stereoscopic displays usually have two problems. The first is that a large image display is difficult, and the second is that the viewing zone (the zone in which both eyes must be placed for stereoscopic or 3-D image observation) is very narrow. We have been developing an auto-stereoscopic large video display system (over 100 inches diagonal) which a few people can view simultaneously [1,2]. Displays over 100 inches diagonal usually use an optical video projection system. As one type of auto-stereoscopic display system, the hologram screen has been proposed [3-6]. However, if the hologram screen becomes too large, the viewing zone (corresponding to the reconstructed diffused object) suffers from color dispersion and color aberration [7]. We therefore proposed attaching an additional Fresnel lens to the hologram screen. We call this screen a "hybrid hologram screen" (HHS). We made an HHS of 866 mm (H) × 433 mm (V) (about 40 inches diagonal) [8-11]. By using the lens in the reconstruction step, the angle between the object light and the reference light can be kept small compared to the case without the lens, so the spread of the viewing zone caused by color dispersion and color aberration becomes small. Also, the virtual image reconstructed from the hologram screen can be transformed into a real image (viewing zone), so it is not necessary to use a large lens or concave mirror when making a large hologram screen.
West wall, display area (room 101), view 1 of 4: ...
West wall, display area (room 101), view 1 of 4: southwest corner, showing stairs to commander's quarters and viewing bridge, windows to controller's room (room 102), south end of control consoles, and holes in pedestal floor for computer equipment cables (tape drive I/O?) - March Air Force Base, Strategic Air Command, Combat Operations Center, 5220 Riverside Drive, Moreno Valley, Riverside County, CA
Crosstalk in automultiscopic 3-D displays: blessing in disguise?
NASA Astrophysics Data System (ADS)
Jain, Ashish; Konrad, Janusz
2007-02-01
Most 3-D displays suffer from interocular crosstalk, i.e., the perception of an unintended view in addition to the intended one. The resulting "ghosting" at high-contrast object boundaries is objectionable and interferes with depth perception. In automultiscopic (no glasses, multiview) displays using microlenses or a parallax barrier, the effect is compounded since several unintended views may be perceived at once. However, we recently discovered that crosstalk in automultiscopic displays can also be beneficial. Since spatial multiplexing of views in order to prepare a composite image for automultiscopic viewing involves sub-sampling, prior anti-alias filtering is required. To date, anti-alias filter design has ignored the presence of crosstalk in automultiscopic displays. In this paper, we propose a simple multiplexing model that takes crosstalk into account. Using this model we derive a mathematical expression for the spectrum of a single view with crosstalk, and we show that it leads to reduced spectral aliasing compared to the crosstalk-free case. We then propose a new criterion for the characterization of the ideal anti-alias pre-filter. In the experimental part, we describe a simple method to measure optical crosstalk between views using a digital camera. We use the measured crosstalk parameters to find the ideal frequency response of the anti-alias filter and we design practical digital filters approximating this response. Having applied the designed filters to a number of multiview images prior to multiplexing, we conclude that, due to their increased bandwidth, the filters lead to visibly sharper 3-D images without increasing aliasing artifacts.
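A toy one-dimensional version of the idea: if leakage from neighbouring views is modeled as a small convolution kernel across adjacent (multiplexed) samples, its frequency response already attenuates the band that would alias after sub-sampling, which is why the anti-alias pre-filter can be given a wider passband. The weights here are assumptions, not the paper's measured crosstalk values.

import numpy as np

# assumed crosstalk weights: 80% intended view, 10% leakage from each neighbour
crosstalk = np.array([0.1, 0.8, 0.1])

# frequency response of the crosstalk kernel (acts like a mild low-pass filter)
freqs = np.linspace(0.0, 0.5, 6)             # cycles/sample up to Nyquist
H = np.abs(np.array([np.sum(crosstalk * np.exp(-2j * np.pi * f * np.arange(-1, 2)))
                     for f in freqs]))
for f, h in zip(freqs, H):
    print(f"f = {f:.1f}  |H| = {h:.2f}")
# |H| falls from 1.0 at DC to 0.6 at Nyquist: energy near the sub-sampling
# Nyquist rate is already attenuated, so less of it folds back as aliasing
# and the explicit anti-alias pre-filter can be made less aggressive.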
Fan, Zhencheng; Weng, Yitong; Chen, Guowen; Liao, Hongen
2017-07-01
Three-dimensional (3D) visualization of preoperative and intraoperative medical information becomes more and more important in minimally invasive surgery. We develop a 3D interactive surgical visualization system using mobile spatial information acquisition and autostereoscopic display for surgeons to observe surgical target intuitively. The spatial information of regions of interest (ROIs) is captured by the mobile device and transferred to a server for further image processing. Triangular patches of intraoperative data with texture are calculated with a dimension-reduced triangulation algorithm and a projection-weighted mapping algorithm. A point cloud selection-based warm-start iterative closest point (ICP) algorithm is also developed for fusion of the reconstructed 3D intraoperative image and the preoperative image. The fusion images are rendered for 3D autostereoscopic display using integral videography (IV) technology. Moreover, 3D visualization of medical image corresponding to observer's viewing direction is updated automatically using mutual information registration method. Experimental results show that the spatial position error between the IV-based 3D autostereoscopic fusion image and the actual object was 0.38±0.92mm (n=5). The system can be utilized in telemedicine, operating education, surgical planning, navigation, etc. to acquire spatial information conveniently and display surgical information intuitively. Copyright © 2017 Elsevier Inc. All rights reserved.
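The warm-start idea can be sketched with a basic point-to-point ICP loop in which the previous frame's transform seeds the next registration; the point-selection strategy of the actual system is not reproduced, and this is a generic formulation rather than the authors' code.

import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, init_R=np.eye(3), init_t=np.zeros(3), iters=30):
    """Basic point-to-point ICP aligning src (N,3) onto dst (M,3).

    init_R, init_t implement the warm start: pass the previous frame's
    result instead of the identity to cut down the number of iterations.
    """
    tree = cKDTree(dst)
    R, t = init_R.copy(), init_t.copy()
    for _ in range(iters):
        moved = src @ R.T + t
        _, idx = tree.query(moved)             # closest-point correspondences
        P, Q = moved, dst[idx]
        # closed-form least-squares rigid update (Kabsch/SVD step)
        pm, qm = P.mean(axis=0), Q.mean(axis=0)
        U, _, Vt = np.linalg.svd((P - pm).T @ (Q - qm))
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        dR = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        dt = qm - dR @ pm
        R, t = dR @ R, dR @ t + dt             # compose update with current estimate
    return R, t

# usage sketch: R1, t1 = icp(cloud_frame1, model)
#               R2, t2 = icp(cloud_frame2, model, init_R=R1, init_t=t1)  # warm start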
Moreno, Megan A.; Swanson, Michael J.; Royer, Heather; Roberts, Linda J.
2011-01-01
Study Objective: Sexual reference display on a social networking web site (SNS) is associated with self-reported sexual intention; females are more likely to display sexually explicit content on SNSs. The purpose of this study was to investigate male college students' views towards sexual references displayed on publicly available SNSs by females. Design: Focus groups. Setting: One large state university. Participants: Male college students age 18-23. Interventions: All tape-recorded data was fully transcribed, then discussed to determine thematic consensus. Main Outcome Measures: A trained male facilitator asked participants about views on sexual references displayed on SNSs by female peers and showed examples of sexual references from females' SNS profiles to facilitate discussion. Results: A total of 28 heterosexual male participants participated in 7 focus groups. Nearly all participants reported using Facebook to evaluate potential female partners. Three themes emerged from our data. First, participants reported that displays of sexual references on social networking web sites increased sexual expectations. Second, sexual reference display decreased interest in pursuing a dating relationship. Third, SNS data was acknowledged as imperfect but valuable. Conclusion: Females who display sexual references on publicly available SNS profiles may be influencing potential partners' sexual expectations and dating intentions. Future research should examine females' motivations and beliefs about displaying such references, and educate women about the potential impact of these sexual displays. PMID:21190872
Large-screen display industry: market and technology trends for direct view and projection displays
NASA Astrophysics Data System (ADS)
Castellano, Joseph A.; Mentley, David E.
1996-03-01
Large screen information displays are defined as dynamic electronic displays that can be viewed by more than one person and are at least 2-feet wide. These large area displays for public viewing provide convenience, entertainment, security, and efficiency to the viewers. There are numerous uses for large screen information displays including those in advertising, transportation, traffic control, conference room presentations, computer aided design, banking, and military command/control. A noticeable characteristic of the large screen display market is the interchangeability of display types. For any given application, the user can usually choose from at least three alternative technologies, and sometimes from many more. Some display types have features that make them suitable for specific applications due to temperature, brightness, power consumption, or other such characteristic. The overall worldwide unit consumption of large screen information displays of all types and for all applications (excluding consumer TV) will increase from 401,109 units in 1995 to 655,797 units in 2002. On a unit consumption basis, applications in business and education represent the largest share of unit consumption over this time period; in 1995, this application represented 69.7% of the total. The market (value of shipments) will grow from $3.1 billion in 1995 to $3.9 billion in 2002. The market will be dominated by front LCD projectors and LCD overhead projector plates.
Moreno, Megan A; Swanson, Michael J; Royer, Heather; Roberts, Linda J
2011-04-01
Sexual reference display on a social networking web site (SNS) is associated with self-reported sexual intention; females are more likely to display sexually explicit content on SNSs. The purpose of this study was to investigate male college students' views towards sexual references displayed on publicly available SNSs by females. Focus groups. One large state university. Male college students age 18-23. All tape recorded discussion was fully transcribed, then discussed to determine thematic consensus. A trained male facilitator asked participants about views on sexual references displayed on SNSs by female peers and showed examples of sexual references from female's SNS profiles to facilitate discussion. A total of 28 heterosexual male participants participated in seven focus groups. Nearly all participants reported using Facebook to evaluate potential female partners. Three themes emerged from our data. First, participants reported that displays of sexual references on social networking web sites increased sexual expectations. Second, sexual reference display decreased interest in pursuing a dating relationship. Third, SNS data was acknowledged as imperfect but valuable. Females who display sexual references on publicly available SNS profiles may be influencing potential partners' sexual expectations and dating intentions. Future research should examine females' motivations and beliefs about displaying such references and educate women about the potential impact of these sexual displays. Copyright © 2011 North American Society for Pediatric and Adolescent Gynecology. Published by Elsevier Inc. All rights reserved.
Lofthag-Hansen, Sara; Thilander-Klang, Anne; Gröndahl, Kerstin
2011-11-01
To evaluate subjective image quality for two diagnostic tasks, periapical diagnosis and implant planning, for cone beam computed tomography (CBCT) using different exposure parameters and fields of view (FOVs). Examinations were performed in the posterior part of the jaws on a skull phantom with the 3D Accuitomo (FOV 3 cm×4 cm) and 3D Accuitomo FPD (FOVs 4 cm×4 cm and 6 cm×6 cm). All combinations of 60, 65, 70, 75, 80 kV and 2, 4, 6, 8, 10 mA with rotations of 180° and 360° were used. The dose-area product (DAP) was determined for each combination. The images were presented, displaying the object in axial, cross-sectional and sagittal views, without scanning data, in a random order for each FOV and jaw. Seven observers assessed image quality on a six-point rating scale. Intra-observer agreement was good (κw=0.76) and inter-observer agreement moderate (κw=0.52). Stepwise logistic regression showed kV, mA and diagnostic task to be the most important variables. Periapical diagnosis, regardless of jaw, required higher exposure parameters than implant planning. Implant planning in the lower jaw required higher exposure parameters than in the upper jaw. The overall ranking of FOVs was 4 cm×4 cm, then 6 cm×6 cm, followed by 3 cm×4 cm. This study has shown that exposure parameters should be adjusted according to the diagnostic task. For this particular CBCT unit, a rotation of 180° gave good subjective image quality; hence, a substantial dose reduction can be achieved without loss of diagnostic information. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
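The agreement statistics reported above (κw) are weighted kappas on the six-point rating scale. The sketch below shows one common way such a statistic is computed (linear disagreement weights); the example ratings are made up and the implementation is ours, not the authors'.

    import numpy as np

    def weighted_kappa(r1, r2, n_categories, weighting="linear"):
        """Cohen's kappa with linear (or quadratic) disagreement weights."""
        r1, r2 = np.asarray(r1), np.asarray(r2)
        obs = np.zeros((n_categories, n_categories))     # observed confusion matrix
        for a, b in zip(r1, r2):
            obs[a, b] += 1
        obs /= obs.sum()
        exp = np.outer(obs.sum(axis=1), obs.sum(axis=0)) # chance-agreement matrix
        i, j = np.indices((n_categories, n_categories))
        d = np.abs(i - j) / (n_categories - 1)           # disagreement weights
        w = d if weighting == "linear" else d ** 2
        return 1.0 - (w * obs).sum() / (w * exp).sum()

    # Toy example: two readings of the same images on a six-point scale (0-5).
    first  = [0, 1, 2, 3, 4, 5, 2, 3, 4, 1]
    second = [0, 1, 2, 3, 5, 5, 2, 2, 4, 1]
    print(f"kappa_w = {weighted_kappa(first, second, 6):.2f}")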
Inland area contingency plan and maps for Pennsylvania (on CD-ROM). Data file
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1996-12-01
EPA Region III has assembled on this CD a multitude of environmental data, in both visual and textual formats. While targeted for Facility Response Planning under the Oil Pollution Act of 1990, this information will prove helpful to anyone in the environmental arena. Specifically, the CD will aid contingency planning and emergency response personnel. Combining innovative GIS technology with EPA's state-specific data allows you to display maps, find and identify map features, look at tabular information about map features, and print out maps. The CD was designed to be easy to use and incorporates example maps as well as help sections describing the use of the environmental data on the CD, and introduces you to the IACP Viewer and its capabilities. These help features will make it easy for you to conduct analysis, produce maps, and browse the IACP Plan. The IACP data are included in two formats: shapefiles, which can be viewed with the IACP Viewer or ESRI's ArcView software (Version 2.1 or higher), and ARC/INFO export files, which can be imported into ARC/INFO or converted to other GIS data formats. Point Data Sources: Sensitive Areas, Surface Drinking Water Intakes, Groundwater Intakes, Groundwater Supply Facilities, NPL (National Priority List) Sites, FRP (Facility Response Plan) Facilities, NPDES (National Pollutant Discharge Elimination System) Facilities, Hospitals, RCRA (Resource Conservation and Recovery Act) Sites, TRI (Toxic Release Inventory) Sites, CERCLA (Comprehensive Environmental Response, Compensation, and Liability Act) Sites. Line Data Sources: TIGER Roads, TIGER Railroads, TIGER Hydrography, Pipelines. Polygon Data Sources: State Boundaries, County Boundaries, Watershed Boundaries (8-digit HUC), TIGER Hydrography, Public Lands, Populated Places, IACP Boundaries, Coast Guard Boundaries, Forest Types, US Congressional Districts, One-half Mile Buffer of Surface Drinking Water Intakes.
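Since the IACP layers ship as shapefiles, a modern GIS script can read them directly. The sketch below assumes GeoPandas (version 0.10 or later) is installed; the layer file names are hypothetical and may not match the actual names on the CD.

    # Sketch only: inspecting an IACP point layer and joining it to counties.
    import geopandas as gpd

    intakes = gpd.read_file("surface_drinking_water_intakes.shp")   # hypothetical name
    print(intakes.crs)       # coordinate reference system, if recorded
    print(intakes.head())    # attribute table preview

    counties = gpd.read_file("county_boundaries.shp")               # hypothetical name
    joined = gpd.sjoin(intakes, counties, predicate="within")       # geopandas >= 0.10
    print(joined.groupby("index_right").size())                     # intakes per county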
Inland area contingency plan and maps for Virginia (on CD-ROM). Data file
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1996-12-01
EPA Region III has assembled on this CD a multitude of environmental data, in both visual and textual formats. While targeted for Facility Response Planning under the Oil Pollution Act of 1990, this information will prove helpful to anyone in the environmental arena. Specifically, the CD will aid contingency planning and emergency response personnel. Combining innovative GIS technology with EPA's state-specific data allows you to display maps, find and identify map features, look at tabular information about map features, and print out maps. The CD was designed to be easy to use and incorporates example maps as well as help sections describing the use of the environmental data on the CD, and introduces you to the IACP Viewer and its capabilities. These help features will make it easy for you to conduct analysis, produce maps, and browse the IACP Plan. The IACP data are included in two formats: shapefiles, which can be viewed with the IACP Viewer or ESRI's ArcView software (Version 2.1 or higher), and ARC/INFO export files, which can be imported into ARC/INFO or converted to other GIS data formats. Point Data Sources: Sensitive Areas, Surface Drinking Water Intakes, Groundwater Intakes, Groundwater Supply Facilities, NPL (National Priority List) Sites, FRP (Facility Response Plan) Facilities, NPDES (National Pollutant Discharge Elimination System) Facilities, Hospitals, RCRA (Resource Conservation and Recovery Act) Sites, TRI (Toxic Release Inventory) Sites, CERCLA (Comprehensive Environmental Response, Compensation, and Liability Act) Sites. Line Data Sources: TIGER Roads, TIGER Railroads, TIGER Hydrography, Pipelines. Polygon Data Sources: State Boundaries, County Boundaries, Watershed Boundaries (8-digit HUC), TIGER Hydrography, Public Lands, Populated Places, IACP Boundaries, Coast Guard Boundaries, Forest Types, US Congressional Districts, One-half Mile Buffer of Surface Drinking Water Intakes.
Parallax barrier engineering for image quality improvement in an autostereoscopic 3D display.
Kim, Sung-Kyu; Yoon, Ki-Hyuk; Yoon, Seon Kyu; Ju, Heongkyu
2015-05-18
We present an image quality improvement in a parallax barrier (PB)-based multiview autostereoscopic 3D display system under real-time tracking of the positions of a viewer's eyes. The system exploits a parallax barrier engineered to offer significantly improved quality of three-dimensional images for a moving viewer without eyewear under dynamic eye tracking. The improved image quality includes enhanced uniformity of image brightness, reduced point crosstalk, and no pseudoscopic effects. We control the relative ratio between two parameters, i.e., the pixel size and the aperture of a parallax barrier slit, to improve uniformity of image brightness at a viewing zone. The eye tracking that monitors the positions of a viewer's eyes enables pixel data control software to turn on only the pixels for view images near the viewer's eyes (the other pixels being turned off), thus reducing point crosstalk. The software combined with eye tracking provides the correct images for the respective eyes, therefore producing no pseudoscopic effects at zone boundaries. The viewing zone can be spanned over an area larger than the central viewing zone offered by a conventional PB-based multiview autostereoscopic 3D display (no eye tracking). Our 3D display system also provides multiple views for motion parallax under eye tracking. More importantly, we demonstrate a substantial reduction of point crosstalk of images at the viewing zone, its level being comparable to that of a commercialized eyewear-assisted 3D display system. The multiview autostereoscopic 3D display presented can greatly resolve the point crosstalk problem, which is one of the critical factors that have made it difficult for previous multiview autostereoscopic 3D display technologies to replace their eyewear-assisted counterparts.
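The pixel-size-to-slit-aperture ratio the authors tune sits on top of the textbook parallax-barrier geometry. The sketch below derives the barrier gap, barrier pitch, and slit aperture from similar triangles; it is a first-order illustration with placeholder numbers, not the paper's engineered design.

    # First-order parallax-barrier design from similar triangles (sketch only).
    def barrier_design(n_views, subpixel_pitch_mm, eye_sep_mm, view_dist_mm,
                       aperture_ratio=0.5):
        # Gap between pixel plane and barrier so adjacent subpixels map to
        # viewing zones separated by eye_sep at the nominal viewing distance.
        gap = subpixel_pitch_mm * view_dist_mm / eye_sep_mm
        # Barrier pitch is slightly smaller than n_views * subpixel pitch so the
        # zones converge at the nominal viewing distance.
        barrier_pitch = n_views * subpixel_pitch_mm * view_dist_mm / (view_dist_mm + gap)
        # Slit aperture expressed as a fraction of one view's share of the pitch
        # (the ratio the abstract says is tuned for brightness uniformity).
        slit_aperture = aperture_ratio * barrier_pitch / n_views
        return gap, barrier_pitch, slit_aperture

    gap, pitch, slit = barrier_design(n_views=4, subpixel_pitch_mm=0.1,
                                      eye_sep_mm=65.0, view_dist_mm=800.0)
    print(f"gap = {gap:.3f} mm, barrier pitch = {pitch:.4f} mm, slit = {slit:.4f} mm")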
46 CFR 35.10-3 - Display of plans-TB/ALL.
Code of Federal Regulations, 2012 CFR
2012-10-01
46 CFR 35.10-3, Display of plans-TB/ALL (Shipping; COAST GUARD, DEPARTMENT OF HOMELAND SECURITY; TANK VESSELS; OPERATIONS; Fire and Emergency ...), 2012-10-01 edition: "... charge of the vessel the following plans: (a) General arrangement plans showing for each deck the fire ..."
46 CFR 35.10-3 - Display of plans-TB/ALL.
Code of Federal Regulations, 2013 CFR
2013-10-01
46 CFR 35.10-3, Display of plans-TB/ALL (Shipping; COAST GUARD, DEPARTMENT OF HOMELAND SECURITY; TANK VESSELS; OPERATIONS; Fire and Emergency ...), 2013-10-01 edition: "... charge of the vessel the following plans: (a) General arrangement plans showing for each deck the fire ..."
46 CFR 35.10-3 - Display of plans-TB/ALL.
Code of Federal Regulations, 2014 CFR
2014-10-01
46 CFR 35.10-3, Display of plans-TB/ALL (Shipping; COAST GUARD, DEPARTMENT OF HOMELAND SECURITY; TANK VESSELS; OPERATIONS; Fire and Emergency ...), 2014-10-01 edition: "... charge of the vessel the following plans: (a) General arrangement plans showing for each deck the fire ..."
46 CFR 35.10-3 - Display of plans-TB/ALL.
Code of Federal Regulations, 2011 CFR
2011-10-01
46 CFR 35.10-3, Display of plans-TB/ALL (Shipping; COAST GUARD, DEPARTMENT OF HOMELAND SECURITY; TANK VESSELS; OPERATIONS; Fire and Emergency ...), 2011-10-01 edition: "... charge of the vessel the following plans: (a) General arrangement plans showing for each deck the fire ..."
Enhancing multi-view autostereoscopic displays by viewing distance control (VDC)
NASA Astrophysics Data System (ADS)
Jurk, Silvio; Duckstein, Bernd; Renault, Sylvain; Kuhlmey, Mathias; de la Barré, René; Ebner, Thomas
2014-03-01
Conventional multi-view displays spatially interlace various views of a 3D scene and form appropriate viewing channels. However, they only support sufficient stereo quality within a limited range around the nominal viewing distance (NVD). If this distance is maintained, two slightly divergent views are projected to the viewer's eyes, both covering the entire screen. With increasing deviation from the NVD, the stereo image quality decreases. As a major drawback in usability, this distance has so far been fixed by the manufacturer. We propose a software-based solution that corrects false view assignments depending on the distance of the viewer. Our novel approach enables continuous view adaptation based on the calculation of intermediate views and a column-by-column rendering method. The algorithm controls each individual subpixel and generates a new interleaving pattern from selected views. In addition, we use color-coded test content to verify its efficacy. This novel technology helps shift the physically determined NVD to a user-defined distance, thereby supporting stereopsis. The new viewing positions can fall in front of or behind the NVD of the original setup. Our algorithm can be applied to all multi-view autostereoscopic displays, independent of the slant or the periodicity of the optical element. In general, the viewing distance can be corrected by a factor of more than 2.5. By creating a continuous viewing area, the visualized 3D content is suitable even for persons with widely differing interocular distances, adults and children alike, without any deficiency in spatial perception.
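To make the column-by-column idea concrete, the sketch below assigns a (possibly fractional) view index to every subpixel and rescales that assignment by the ratio of nominal to measured viewing distance. This is a deliberately simplified, illustrative model, not the authors' algorithm; the function name, the multiplicative correction, and all parameters are assumptions.

    import numpy as np

    def subpixel_view_map(width_px, n_views, nominal_dist, actual_dist,
                          slant=0.0, rows=1):
        """Per-subpixel view indices for a multiview panel (illustrative only).

        The nominal assignment cycles through the views column by column; the
        ratio nominal_dist/actual_dist stretches or compresses that cycle so the
        viewing zones re-converge at the viewer's measured distance.  Fractional
        indices would be served by blending precomputed intermediate views.
        """
        sub_x = np.arange(width_px * 3)              # RGB subpixel columns
        y = np.arange(rows)[:, None]
        phase = sub_x[None, :] + slant * y           # optional slanted optics
        correction = nominal_dist / actual_dist      # distance-dependent rescale
        return (phase * correction) % n_views

    view_idx = subpixel_view_map(width_px=1920, n_views=8,
                                 nominal_dist=3000.0, actual_dist=2400.0)
    print(view_idx.shape, view_idx[0, :12])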
Accommodation response measurements for integral 3D image
NASA Astrophysics Data System (ADS)
Hiura, H.; Mishina, T.; Arai, J.; Iwadate, Y.
2014-03-01
We measured accommodation responses under integral photography (IP), binocular stereoscopic, and real-object display conditions, each under binocular and monocular viewing. The equipment we used was an optometric device and a 3D display. We developed the 3D display for IP and binocular stereoscopic images; it comprises a high-resolution liquid crystal display (LCD) and a high-density lens array. The LCD has a resolution of 468 dpi and a diagonal size of 4.8 inches. The high-density lens array comprises 106 × 69 micro lenses that have a focal length of 3 mm and a diameter of 1 mm. The lenses are arranged in a honeycomb pattern. The 3D display was positioned 60 cm from the observer under the IP and binocular stereoscopic display conditions. The target was presented at eight depth positions relative to the 3D display: 15, 10, and 5 cm in front of the 3D display, on the 3D display panel, and 5, 10, 15, and 30 cm behind the 3D display under the IP and binocular stereoscopic display conditions. Under the real-object display condition, the target was displayed on the 3D display panel, and the 3D display was placed at the eight positions. The results suggest that the IP image induced more natural accommodation responses than the binocular stereoscopic image. The accommodation responses to the IP image were weaker than those to a real object; however, they showed a similar tendency to those of the real object under the two viewing conditions. Therefore, IP can induce accommodation to the depth positions of 3D images.
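A rough back-of-envelope check on the hardware described (468 dpi panel, 1 mm lenses, 3 mm focal length) gives the elemental-image size and angular ray pitch of the integral display; these are our derived estimates, not figures from the paper.

    import math

    dpi = 468
    pixel_pitch_mm = 25.4 / dpi                  # ~0.054 mm
    lens_pitch_mm, focal_mm = 1.0, 3.0

    pixels_per_lens = lens_pitch_mm / pixel_pitch_mm                   # ~18 pixels
    ray_angle_step_deg = math.degrees(math.atan(pixel_pitch_mm / focal_mm))

    print(f"pixel pitch        ~ {pixel_pitch_mm:.3f} mm")
    print(f"pixels per lens    ~ {pixels_per_lens:.1f}")
    print(f"angular ray pitch  ~ {ray_angle_step_deg:.2f} deg")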
Branching out with filmless radiology.
Carbajal, R; Honea, R
1999-05-01
Texas Children's Hospital, a 456 bed pediatric hospital located in the Texas Medical Center, has been constructing a large-scale picture archiving and communications system (PACS), including ultrasound (US), computed tomography (CT), magnetic resonance (MR), and computed radiography (CR). Until recently, filmless radiology operations have been confined to the imaging department, the outpatient treatment center, and the emergency center. As filmless services expand to other clinical services, the PACS staff must engage each service in a dialog to determine the appropriate level of support required. The number and type of image examinations, the use of multiple modalities and comparison examinations, and the relationship between viewing and direct patient care activities have a bearing on the number and type of display stations provided. Some of the information about customer services is contained in documentation already maintained by the imaging department. For example, by a custom report from the radiology information system (RIS), we were able to determine the number and type of examinations ordered by each referring physician for the previous 6 months. By compiling these by clinical service, we were able to determine our biggest customers by examination type and volume. Another custom report was used to determine who was requesting old examinations from the film library. More information about imaging usage was gathered by means of a questionnaire. Some customers view images only where patients are also seen, while some services view images independently from the patient. Some services use their conference rooms for critical image viewing such as treatment planning. Additional information was gained from geographical surveys of where films are currently produced, delivered by the film library, and viewed. In some areas, available space dictates the type and configuration of display station that can be used. Active participation in the decision process by the clinical service is a key element to successful filmless operations.
Real-time free-viewpoint DIBR for large-size 3DLED
NASA Astrophysics Data System (ADS)
Wang, NengWen; Sang, Xinzhu; Guo, Nan; Wang, Kuiru
2017-10-01
Three-dimensional (3D) display technologies have made great progress in recent years, and lenticular-array-based 3D display is a relatively mature technology that is among the most likely to be commercialized. In naked-eye 3D display, the screen size is one of the most important factors affecting the viewing experience. In order to construct a large-size naked-eye 3D display system, an LED display is used. However, pixel misalignment is an inherent defect of the LED screen, which degrades rendering quality. To address this issue, an efficient image synthesis algorithm is proposed. The Texture-Plus-Depth (T+D) format is chosen for the display content, and a modified Depth-Image-Based Rendering (DIBR) method is proposed to synthesize new views. To achieve real-time performance, the whole algorithm is implemented on the GPU. With state-of-the-art hardware and the efficient algorithm, a naked-eye 3D display system with an LED screen size of 6 m × 1.8 m is achieved. Experiments show that the algorithm can process 43-view 3D video at 4K × 2K resolution in real time on the GPU, and a vivid 3D experience is perceived.
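At the core of any DIBR pipeline is a forward warp: each texture pixel is shifted horizontally by a disparity derived from its depth value, with a z-buffer resolving occlusions. The sketch below shows that basic warp on a tiny array; it omits the GPU implementation, hole filling, and the LED-misalignment corrections the paper addresses.

    import numpy as np

    def dibr_horizontal_warp(texture, depth, baseline_px, z_near=1.0, z_far=100.0):
        """Forward-warp `texture` to a virtual view shifted by `baseline_px`.

        `depth` holds 8-bit depth-map values (255 = near).  Disparity is inversely
        proportional to metric depth, so nearer pixels shift more.  Simple z-buffer
        merging; holes are left as zeros (real systems inpaint them).
        """
        h, w = depth.shape
        z = 1.0 / (depth / 255.0 * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far)
        disparity = np.round(baseline_px / z).astype(int)
        out = np.zeros_like(texture)
        zbuf = np.full((h, w), np.inf)
        for y in range(h):
            for x in range(w):
                xs = x - disparity[y, x]
                if 0 <= xs < w and z[y, x] < zbuf[y, xs]:
                    zbuf[y, xs] = z[y, x]
                    out[y, xs] = texture[y, x]
        return out

    tex = np.random.randint(0, 255, (4, 8), dtype=np.uint8)
    dep = np.random.randint(0, 255, (4, 8), dtype=np.uint8)
    print(dibr_horizontal_warp(tex, dep, baseline_px=30))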
Single DMD time-multiplexed 64-views autostereoscopic 3D display
NASA Astrophysics Data System (ADS)
Loreti, Luigi
2013-03-01
Based on the previous prototype of the real-time 3D holographic display developed last year, we developed a new concept for an autostereoscopic multiview (64-view), wide-angle (90°), full-color 3D display. The display is based on an RGB laser light source illuminating a DMD (Discovery 4100, 0.7") at 24,000 fps, and an image deflection system made with an AOD (acousto-optic deflector) driven by a piezoelectric transducer generating a variable standing acoustic wave on the crystal, which acts as a phase grating. The DMD projects 64 points of view of the image onto the crystal cube in fast sequence. Depending on the frequency of the standing wave, the input picture sent by the DMD is deflected into a different angle of view. A holographic screen at the proper distance diffuses the rays vertically (60°) and horizontally selects (1°) only the rays directed to the observer. A telescope optical system enlarges the image to the right dimensions. VHDL firmware that renders 64 views (16-bit 4:2:2) of a CAD model (OBJ, DXF, or 3DS) and depth-map-encoded video images in real time (16 ms) was developed on the resident Virtex-5 FPGA of the Discovery 4100 SDK, thus eliminating the need for image transfer and high-speed links.
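The timing budget implied by the cited figures is easy to check: 24,000 DMD frames per second spread over 64 views, and a 16 ms render window for a full set of views. A quick calculation, not the authors' firmware:

    dmd_fps, n_views = 24_000, 64
    per_view_rate = dmd_fps / n_views     # DMD sub-frames available per view per second
    render_budget_ms = 16.0               # stated real-time rendering window

    print(f"sub-frames per view: {per_view_rate:.0f} /s")           # 375 /s
    print(f"views per render:    {n_views} in {render_budget_ms} ms "
          f"-> {render_budget_ms / n_views:.2f} ms per view")        # 0.25 ms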
Optimization of reading conditions for flat panel displays.
Thomas, J A; Chakrabarti, K; Kaczmarek, R V; Maslennikov, A; Mitchell, C A; Romanyukha, A
2006-06-01
Task Group 18 (TG 18) of the American Association of Physicists in Medicine has developed guidelines for the Assessment of Display Performance for Medical Imaging Systems. In this document, a method for determining the maximum room lighting for displays is suggested. It is based on luminance measurements of a black target displayed on each display device at different room illuminance levels. Linear extrapolation of these luminance measurements vs. room illuminance allows one to determine the diffuse and specular reflection coefficients. The TG 18 guidelines establish a recommended maximum room lighting based on the characterization of the display by its minimum and maximum luminance and the description of the room by its diffuse and specular coefficients. We carried out these luminance measurements for three selected displays, one cathode ray tube and two flat panels, to determine their optimum viewing conditions. We found some problems with the application of the TG 18 guidelines to optimize viewing conditions for IBM T221 flat panels. Introduction of a requirement for minimum room illuminance allows a more accurate determination of the optimal viewing conditions (maximum and minimum room illuminance) for the IBM flat panels. It also addresses the possible loss of contrast in medical images on flat panel displays caused by the nonlinear dependence of luminance on room illuminance at low room lighting.
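The extrapolation step described above amounts to a straight-line fit of black-target luminance against room illuminance: the intercept estimates the display's own minimum luminance and the slope the effective reflection coefficient. A sketch with made-up measurements (the 25% criterion at the end is an illustrative threshold, not the TG 18 value):

    import numpy as np

    # Illuminance at the display face (lux) and measured luminance of a black
    # target (cd/m^2).  Values are made up for illustration.
    E = np.array([0.0, 10.0, 25.0, 50.0, 100.0])
    L = np.array([0.50, 0.55, 0.62, 0.75, 1.00])

    slope, intercept = np.polyfit(E, L, 1)
    print(f"L_min of display (E=0):      {intercept:.3f} cd/m^2")
    print(f"effective reflection coeff.: {slope:.4f} cd/m^2 per lux")

    # Room lighting at which reflected luminance equals, say, 25% of L_min:
    E_max = 0.25 * intercept / slope
    print(f"illuminance where reflection = 0.25*L_min: {E_max:.1f} lux")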
Characterization of crosstalk in stereoscopic display devices.
Zafar, Fahad; Badano, Aldo
2014-12-01
Many different types of stereoscopic display devices are used for commercial and research applications. Stereoscopic displays offer the potential to improve performance in detection tasks for medical imaging diagnostic systems. Due to the variety of stereoscopic display technologies, it remains unclear how these compare with each other for detection and estimation tasks. Different stereo devices have different performance trade-offs due to their display characteristics. Among these characteristics, crosstalk is known to affect observer perception of 3D content and might affect detection performance. We measured and report the detailed luminance output and crosstalk characteristics for three different types of stereoscopic display devices. We also recorded the effect of other factors, such as viewing angle, the use of different eyewear, and screen location, on the measured luminance profiles. Our results show that the crosstalk signature for viewing 3D content can vary considerably when using different types of 3D glasses for active stereo displays. We also show that significant differences are present in crosstalk signatures when varying the viewing angle from 0 degrees to 20 degrees for a stereo-mirror 3D display device. Our detailed characterization can help emulate the effect of crosstalk in computational observer image quality assessments that minimize costly and time-consuming human reader studies.
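A common way to reduce such luminance measurements to a single crosstalk figure is the black-corrected ratio of leakage from the unintended channel to the intended signal. Definitions vary between studies, so the sketch below is illustrative, with made-up luminances rather than the paper's data.

    def crosstalk_percent(l_leak, l_signal, l_black):
        """Black-corrected crosstalk: unintended leakage / intended signal."""
        return 100.0 * (l_leak - l_black) / (l_signal - l_black)

    # Example: the left eye measures 120 cd/m^2 when its own image is white,
    # 3.5 cd/m^2 when only the right-eye image is white, 0.8 cd/m^2 when both
    # are black (illustrative values).
    print(f"crosstalk = {crosstalk_percent(3.5, 120.0, 0.8):.2f} %")   # ~2.27 %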
Projection display industry market and technology trends
NASA Astrophysics Data System (ADS)
Castellano, Joseph A.; Mentley, David E.
1995-04-01
The projection display industry is diverse, embracing a variety of technologies and applications. In recent years, there has been a high level of interest in projection displays, particularly those using LCD panels or light valves because of the difficulty in making large screen, direct view displays. Many developers feel that projection displays will be the wave of the future for large screen HDTV (high-definition television), penetrating the huge existing market for direct view CRT-based televisions. Projection displays can have the images projected onto a screen either from the rear or the front; the main characteristic is their ability to be viewed by more than one person. In addition to large screen home television receivers, there are numerous other uses for projection displays including conference room presentations, video conferences, closed circuit programming, computer-aided design, and military command/control. For any given application, the user can usually choose from several alternative technologies. These include CRT front or rear projectors, LCD front or rear projectors, LCD overhead projector plate monitors, various liquid or solid-state light valve projectors, or laser-addressed systems. The overall worldwide market for projection information displays of all types and for all applications, including home television, will top $4.6 billion in 1995 and $6.45 billion in 2001.
Sociability modifies dogs' sensitivity to biological motion of different social relevance.
Ishikawa, Yuko; Mills, Daniel; Willmott, Alexander; Mullineaux, David; Guo, Kun
2018-03-01
Preferential attention to living creatures is believed to be an intrinsic capacity of the visual system of several species, with perception of biological motion often studied and, in humans, it correlates with social cognitive performance. Although domestic dogs are exceptionally attentive to human social cues, it is unknown whether their sociability is associated with sensitivity to conspecific and heterospecific biological motion cues of different social relevance. We recorded video clips of point-light displays depicting a human or dog walking in either frontal or lateral view. In a preferential looking paradigm, dogs spontaneously viewed 16 paired point-light displays showing combinations of normal/inverted (control condition), human/dog and frontal/lateral views. Overall, dogs looked significantly longer at frontal human point-light display versus the inverted control, probably due to its clearer social/biological relevance. Dogs' sociability, assessed through owner-completed questionnaires, further revealed that low-sociability dogs preferred the lateral point-light display view, whereas high-sociability dogs preferred the frontal view. Clearly, dogs can recognize biological motion, but their preference is influenced by their sociability and the stimulus salience, implying biological motion perception may reflect aspects of dogs' social cognition.
NASA Astrophysics Data System (ADS)
Veligdan, James T.; Beiser, Leo; Biscardi, Cyrus; Brewster, Calvin; DeSanto, Leonard
1997-07-01
The polyplanar optical display (POD) is a unique display screen which can be used with any projection source. This display screen is 2 inches thick and has a matte black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft, which uses a monochrome ten-inch display. The new display uses a 100 milliwatt green solid state laser as its optical source. In order to produce real-time video, the laser light is being modulated by a digital light processing (DLP) chip manufactured by Texas Instruments, Inc. A variable astigmatic focusing system is used to produce a stigmatic image on the viewing face of the POD. In addition to the optical design, we discuss the electronic interfacing to the DLP chip, the opto-mechanical design and viewing angle characteristics.
Laser-driven polyplanar optic display
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veligdan, J.T.; Biscardi, C.; Brewster, C.
1998-01-01
The Polyplanar Optical Display (POD) is a unique display screen which can be used with any projection source. This display screen is 2 inches thick and has a matte-black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft which uses a monochrome ten-inch display. The new display uses a 200 milliwatt green solid-state laser (532 nm) as its optical source. In order to produce real-time video, the laser light is being modulated by a Digital Light Processing (DLP) chip manufactured by Texas Instruments, Inc. A variable astigmatic focusing system is used to produce a stigmatic image on the viewing face of the POD. In addition to the optical design, the authors discuss the DLP chip, the optomechanical design and viewing angle characteristics.
Laser-driven polyplanar optic display
NASA Astrophysics Data System (ADS)
Veligdan, James T.; Beiser, Leo; Biscardi, Cyrus; Brewster, Calvin; DeSanto, Leonard
1998-05-01
The Polyplanar Optical Display (POD) is a unique display screen which can be used with any projection source. This display screen is 2 inches thick and has a matte-black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft which uses a monochrome ten-inch display. The new display uses a 200 milliwatt green solid-state laser (532 nm) as its optical source. In order to produce real-time video, the laser light is being modulated by a Digital Light Processing (DLP™) chip manufactured by Texas Instruments, Inc. A variable astigmatic focusing system is used to produce a stigmatic image on the viewing face of the POD. In addition to the optical design, we discuss the DLP™ chip, the opto-mechanical design and viewing angle characteristics.
Credit BG. Interior view of the building displays temporary wooden ...
Credit BG. Interior view of the building displays temporary wooden building construction, pump, and water piping arrangements. The well is currently used as an observation post for changes in ground water levels - Edwards Air Force Base, North Base, Well No. 2, East of Second Street, Boron, Kern County, CA
Accommodation measurements of horizontally scanning holographic display.
Takaki, Yasuhiro; Yokouchi, Masahito
2012-02-13
Eye accommodation is considered to function properly for three-dimensional (3D) images generated by holography. We developed a horizontally scanning holographic display technique that enlarges both the screen size and viewing zone angle. A 3D image generated by this technique can be easily seen by both eyes. In this study, we measured the accommodation responses to a 3D image generated by the horizontally scanning holographic display technique that has a horizontal viewing zone angle of 14.6° and screen size of 4.3 in. We found that the accommodation responses to a 3D image displayed within 400 mm from the display screen were similar to those of a real object.
Apparent minification in an imaging display under reduced viewing conditions.
Meehan, J W
1993-01-01
When extended outdoor scenes are imaged with magnification of 1 in optical, electronic, or computer-generated displays, scene features appear smaller and farther than in direct view. This has been shown to occur in various periscopic and camera-viewfinder displays outdoors in daylight. In four experiments it was found that apparent minification of the size of a planar object at a distance of 3-9 m indoors occurs in the viewfinder display of an SLR camera both in good light and in darkness with only the luminous object visible. The effect is robust and survives changes in the relationship between object luminance in the display and in direct view and occurs in the dark when subjects have no prior knowledge of room dimensions, object size or object distance. The results of a fifth experiment suggest that the effect is an instance of reduced visual size constancy consequent on elimination of cues for size, which include those for distance.
Multiple-viewing-zone integral imaging using a dynamic barrier array for three-dimensional displays.
Choi, Heejin; Min, Sung-Wook; Jung, Sungyong; Park, Jae-Hyeung; Lee, Byoungho
2003-04-21
In spite of many advantages of integral imaging, the viewing zone in which an observer can see three-dimensional images is limited within a narrow range. Here, we propose a novel method to increase the number of viewing zones by using a dynamic barrier array. We prove our idea by fabricating and locating the dynamic barrier array between a lens array and a display panel. By tilting the barrier array, it is possible to distribute images for each viewing zone. Thus, the number of viewing zones can be increased with an increment of the states of the barrier array tilt.
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.
2006-01-01
The visual requirements for augmented reality or virtual environment displays that might be used in real or virtual towers are reviewed with respect to similar displays already used in aircraft. As an example of the type of human performance studies needed to determine the useful specifications of augmented reality displays, an optical see-through display was used in an ATC Tower simulation. Three different binocular fields of view (14°, 28°, and 47°) were examined to determine their effect on subjects' ability to detect aircraft maneuvering and landing. The results suggest that binocular fields of view much greater than 47° are unlikely to dramatically improve search performance and that partial binocular overlap is a feasible display technique for augmented reality Tower applications.
Science opportunity analyzer - a multi-mission tool for planning
NASA Technical Reports Server (NTRS)
Streiffert, B. A.; Polanskey, C. A.; O'Reilly, T.; Colwell, J.
2002-01-01
For many years the diverse scientific community that supports JPL's wide variety of interplanetary space missions has needed a tool in order to plan and develop their experiments. The tool needs to be easily adapted to various mission types and portable to the user community. The Science Opportunity Analyzer, SOA, now in its third year of development, is intended to meet this need. SOA is a Java-based application that is designed to enable scientists to identify and analyze opportunities for science observations from spacecraft. It differs from other planning tools in that it does not require an in-depth knowledge of the spacecraft command system or operation modes to begin high-level planning. Users can, however, develop increasingly detailed levels of design. SOA consists of six major functions: Opportunity Search, Visualization, Observation Design, Constraint Checking, Data Output and Communications. Opportunity Search is a GUI-driven interface to existing search engines that can be used to identify times when a spacecraft is in a specific geometrical relationship with other bodies in the solar system. This function can be used for advanced mission planning as well as for making last-minute adjustments to mission sequences in response to trajectory modifications. Visualization is a key aspect of SOA. The user can view observation opportunities in either a 3D representation or a 2D map projection. The user is given extensive flexibility to customize what is displayed in the view. Observation Design allows the user to orient the spacecraft and visualize the projection of the instrument field of view for that orientation, using the same views as Opportunity Search. Constraint Checking is provided to validate various geometrical and physical aspects of an observation design. The user has the ability to easily create custom rules or to use official project-generated flight rules. This capability may also allow scientists to easily assess the cost to science if flight rule changes occur. Data Output generates information based on the spacecraft's trajectory, opportunity search results, or a created observation. The data can be viewed either in tabular format or as a graph. Finally, SOA is unique in that it is designed to communicate with a variety of existing planning and sequencing tools. From the very beginning SOA was designed with the user in mind. Extensive surveys of the potential user community were conducted in order to develop the software requirements. Throughout the development period, close ties have been maintained with the science community to ensure that the tool maintains its user focus. Although development is still in its early stages, SOA is already developing a user community on the Cassini project, which is depending on this tool for its science planning. There are other tools at JPL that do various pieces of what SOA can do; however, there is no other tool which combines all these functions and presents them to the user in such a convenient, cohesive, and easy to use fashion.
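Opportunity Search as described above boils down to scanning time and collecting intervals where a geometric predicate holds. The toy sketch below illustrates that pattern with a hypothetical stand-in geometry function; it is not SOA's Java implementation, and the angle function would in practice come from an ephemeris library such as SPICE.

    # Toy opportunity search: step through time and collect intervals where a
    # geometric condition holds.  `sun_probe_earth_angle` is a fake stand-in,
    # NOT part of SOA or any ephemeris toolkit.
    import math

    def sun_probe_earth_angle(t_hours):
        return 90.0 + 80.0 * math.sin(2 * math.pi * t_hours / 240.0)  # fake geometry

    def find_opportunities(t0, t1, step, predicate):
        intervals, start, t = [], None, t0
        while t <= t1:
            if predicate(t):
                start = t if start is None else start
            elif start is not None:
                intervals.append((start, t - step))
                start = None
            t += step
        if start is not None:
            intervals.append((start, t1))
        return intervals

    # Times (in hours) when the angle drops below 30 degrees:
    print(find_opportunities(0, 480, 1.0, lambda t: sun_probe_earth_angle(t) < 30.0))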
High-resolution, continuous field-of-view (FOV), non-rotating imaging system
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance L. (Inventor); Stirbl, Robert C. (Inventor); Aghazarian, Hrand (Inventor); Padgett, Curtis W. (Inventor)
2010-01-01
A high resolution CMOS imaging system especially suitable for use in a periscope head. The imaging system includes a sensor head for scene acquisition, and a control apparatus inclusive of distributed processors and software for device-control, data handling, and display. The sensor head encloses a combination of wide field-of-view CMOS imagers and narrow field-of-view CMOS imagers. Each bank of imagers is controlled by a dedicated processing module in order to handle information flow and image analysis of the outputs of the camera system. The imaging system also includes automated or manually controlled display system and software for providing an interactive graphical user interface (GUI) that displays a full 360-degree field of view and allows the user or automated ATR system to select regions for higher resolution inspection.
A framework for interactive visualization of digital medical images.
Koehring, Andrew; Foo, Jung Leng; Miyano, Go; Lobe, Thom; Winer, Eliot
2008-10-01
The visualization of medical images obtained from scanning techniques such as computed tomography and magnetic resonance imaging is a well-researched field. However, advanced tools and methods to manipulate these data for surgical planning and other tasks have not seen widespread use among medical professionals. Radiologists have begun using more advanced visualization packages on desktop computer systems, but most physicians continue to work with basic two-dimensional grayscale images or do not work directly with the data at all. In addition, new display technologies that are in use in other fields have yet to be fully applied in medicine. In our estimation, usability is the key factor keeping this new technology from being more widely used by the medical community at large. Therefore, we have developed a software and hardware framework that not only makes use of advanced visualization techniques but also features powerful, yet simple-to-use, interfaces. A virtual reality system was created to display volume-rendered medical models in three dimensions. It was designed to run in many configurations, from a large cluster of machines powering a multiwalled display down to a single desktop computer. An augmented reality system was also created for, literally, hands-on interaction when viewing models of medical data. Last, a desktop application was designed to provide a simple visualization tool, which can be run on nearly any computer at a user's disposal. This research is directed toward improving the capabilities of medical professionals in the tasks of preoperative planning, surgical training, diagnostic assistance, and patient education.
NASA Astrophysics Data System (ADS)
Maurer, Calvin R., Jr.; Sauer, Frank; Hu, Bo; Bascle, Benedicte; Geiger, Bernhard; Wenzel, Fabian; Recchi, Filippo; Rohlfing, Torsten; Brown, Christopher R.; Bakos, Robert J.; Maciunas, Robert J.; Bani-Hashemi, Ali R.
2001-05-01
We are developing a video see-through head-mounted display (HMD) augmented reality (AR) system for image-guided neurosurgical planning and navigation. The surgeon wears a HMD that presents him with the augmented stereo view. The HMD is custom fitted with two miniature color video cameras that capture a stereo view of the real-world scene. We are concentrating specifically at this point on cranial neurosurgery, so the images will be of the patient's head. A third video camera, operating in the near infrared, is also attached to the HMD and is used for head tracking. The pose (i.e., position and orientation) of the HMD is used to determine where to overlay anatomic structures segmented from preoperative tomographic images (e.g., CT, MR) on the intraoperative video images. Two SGI 540 Visual Workstation computers process the three video streams and render the augmented stereo views for display on the HMD. The AR system operates in real time at 30 frames/sec with a temporal latency of about three frames (100 ms) and zero relative lag between the virtual objects and the real-world scene. For an initial evaluation of the system, we created AR images using a head phantom with actual internal anatomic structures (segmented from CT and MR scans of a patient) realistically positioned inside the phantom. When using shaded renderings, many users had difficulty appreciating overlaid brain structures as being inside the head. When using wire frames, and texture-mapped dot patterns, most users correctly visualized brain anatomy as being internal and could generally appreciate spatial relationships among various objects. The 3D perception of these structures is based on both stereoscopic depth cues and kinetic depth cues, with the user looking at the head phantom from varying positions. The perception of the augmented visualization is natural and convincing. The brain structures appear rigidly anchored in the head, manifesting little or no apparent swimming or jitter. The initial evaluation of the system is encouraging, and we believe that AR visualization might become an important tool for image-guided neurosurgical planning and navigation.
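The core overlay step described above takes the tracked HMD pose and the camera intrinsics and projects registered preoperative structures into each video frame. The sketch below shows that projection with a plain pinhole model and placeholder numbers; it is an illustration of the geometry, not the SGI-based rendering pipeline in the paper.

    import numpy as np

    def project_points(pts_3d, K, R, t):
        """Project Nx3 world points into pixel coordinates with a pinhole model."""
        cam = R @ pts_3d.T + t.reshape(3, 1)      # world -> camera frame
        uvw = K @ cam                             # camera frame -> image plane
        return (uvw[:2] / uvw[2]).T               # perspective divide -> Nx2 pixels

    # Placeholder intrinsics and pose (in the system above these would come from
    # camera calibration and the infrared head tracker).
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0,   0.0,   1.0]])
    R = np.eye(3)
    t = np.array([0.0, 0.0, 500.0])               # 500 mm in front of the camera

    structure_outline = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 5.0]])
    print(project_points(structure_outline, K, R, t))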
User's Guide for MetView: A Meteorological Display and Assessment Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glantz, Clifford S.; Pelton, Mitchell A.; Allwine, K Jerry
2000-09-27
MetView Version 2.0 is an easy-to-use model for accessing, viewing, and analyzing meteorological data. MetView provides both graphical and numerical displays of data. It can accommodate data from an extensive meteorological monitoring network that includes near-surface monitoring locations, instrumented towers, sodars, and meteorologist observations. MetView is used operationally for routine, emergency response, and research applications at the U.S. Department of Energy's Hanford Site. At the Site's Emergency Operations Center, MetView aids in the access, visualization, and interpretation of real-time meteorological data. Historical data can also be accessed and displayed. Emergency response personnel at the Emergency Operations Center use MetView products in the formulation of protective action recommendations and other decisions. In the initial stage of an emergency, MetView can be operated using a very simple, five-step procedure. This first-responder procedure allows non-technical staff to rapidly generate meteorological products and disseminate key information. After first-responder information products are produced, the Emergency Operations Center's technical staff can conduct more sophisticated analyses using the model. This may include examining the vertical variation in winds, assessing recent changes in atmospheric conditions, evaluating atmospheric mixing rates, and forecasting changes in meteorological conditions. This user's guide provides easy-to-follow instructions for both first-responder and routine operation of the model. Examples, with explanations, are provided for each type of MetView output display. Information is provided on the naming convention, format, and contents of each type of meteorological data file used by the model. This user's guide serves as a ready reference for experienced MetView users and a training manual for new users.
NASA Astrophysics Data System (ADS)
Ito, Shusei; Uchida, Keitaro; Mizushina, Haruki; Suyama, Shiro; Yamamoto, Hirotsugu
2017-02-01
Security is one of the big issues in automated teller machines (ATMs). In an ATM, two types of security have to be maintained. One is to secure the displayed information. The other is to avoid contamination of the screen by touch. This paper gives a solution for these two security issues. In order to secure information against peeping at the screen, we utilize visual cryptography for the displayed information and limit the viewing zone. Furthermore, an aerial information screen based on aerial imaging by retro-reflection (AIRR) enables users to avoid direct touch on the information screen. The purpose of this paper is to propose an aerial secure display technique that ensures security of the displayed information as well as security against the contamination problem of screen touch. We have developed a polarization-processing display that is composed of a backlight, a polarizer, a background LCD panel, a gap, a half-wave retarder, and a foreground LCD panel. The polarization angle is rotated with the LCD panels. We have constructed a polarization encryption code set. The size of the displayed images is designed to limit the viewing position. Furthermore, this polarization-processing display has been introduced into our aerial imaging optics, which employs a reflective polarizer and a retro-reflector covered with a quarter-wave retarder. The polarization-modulated light forms a real image over the reflective polarizer. We have successfully formed an aerial information screen that shows the secret image at a limited viewing position. This is the first realization of an aerial secure display by use of a polarization-processing display with a retarder film and a retro-reflector.
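The display builds on visual cryptography, whose classic 2-out-of-2 scheme is easy to sketch: each secret pixel becomes a pair of 2x2 subpixel patterns that are identical for a white pixel and complementary for a black one, so combining the two layers (here, via the two polarization-processing panels) reveals the secret. The code below shows only that share-generation idea; the polarization optics are not modeled.

    import random

    # 2x2 subpixel patterns, each with two opaque (1) and two clear (0) subpixels.
    PATTERNS = [(1, 1, 0, 0), (0, 0, 1, 1), (1, 0, 1, 0), (0, 1, 0, 1)]

    def make_shares(secret_rows):
        """secret_rows: list of strings of '0' (white) and '1' (black) pixels."""
        share_a, share_b = [], []
        for row in secret_rows:
            a_row, b_row = [], []
            for ch in row:
                p = random.choice(PATTERNS)
                q = p if ch == "0" else tuple(1 - v for v in p)   # complement if black
                a_row.append(p)
                b_row.append(q)
            share_a.append(a_row)
            share_b.append(b_row)
        return share_a, share_b

    def stack(share_a, share_b):
        """OR the shares; a black secret pixel yields a fully opaque 2x2 block."""
        return [[tuple(x | y for x, y in zip(p, q)) for p, q in zip(ra, rb)]
                for ra, rb in zip(share_a, share_b)]

    a, b = make_shares(["0110", "1001"])
    print(stack(a, b))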
Polyplanar optic display for cockpit application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veligdan, J.; Biscardi, C.; Brewster, C.
1998-04-01
The Polyplanar Optical Display (POD) is a high contrast display screen being developed for cockpit applications. This display screen is 2 inches thick and has a matte black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft which uses a monochrome ten-inch display. The new display uses a long lifetime (10,000 hour), 200 mW green solid-state laser (532 nm) as its optical source. In order to produce real-time video, the laser light is being modulated by a Digital Light Processing (DLP™) chip manufactured by Texas Instruments, Inc. A variable astigmatic focusing system is used to produce a stigmatic image on the viewing face of the POD. In addition to the optical design and speckle reduction, the authors discuss the electronic interfacing to the DLP™ chip, the opto-mechanical design and viewing angle characteristics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veligdan, J.; Biscardi, C.; Brewster, C.
1997-07-01
The Polyplanar Optical Display (POD) is a unique display screen which can be used with any projection source. This display screen is 2 inches thick and has a matte black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft which uses a monochrome ten-inch display. The new display uses a 100 milliwatt green solid state laser (532 nm) as its optical source. In order to produce real-time video, the laser light is being modulated by a Digital Light Processing (DLP™) chip manufactured by Texas Instruments, Inc. A variable astigmatic focusing system is used to produce a stigmatic image on the viewing face of the POD. In addition to the optical design, the authors discuss the electronic interfacing to the DLP™ chip, the opto-mechanical design and viewing angle characteristics.
Polyplanar optic display for cockpit application
NASA Astrophysics Data System (ADS)
Veligdan, James T.; Biscardi, Cyrus; Brewster, Calvin; DeSanto, Leonard; Freibott, William C.
1998-09-01
The Polyplanar Optical Display (POD) is a high contrast display screen being developed for cockpit applications. This display screen is 2 inches thick and has a matte black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft which uses a monochrome ten-inch display. The new display uses a long lifetime (10,000 hour), 200 mW green solid-state laser (532 nm) as its optical source. In order to produce real-time video, the laser light is being modulated by a Digital Light Processing (DLP™) chip manufactured by Texas Instruments, Inc. A variable astigmatic focusing system is used to produce a stigmatic image on the viewing face of the POD. In addition to the optical design and speckle reduction, we discuss the electronic interfacing to the DLP™ chip, the opto-mechanical design and viewing angle characteristics.
Display challenges resulting from the use of wide field of view imaging devices
NASA Astrophysics Data System (ADS)
Petty, Gregory J.; Fulton, Jack; Nicholson, Gail; Seals, Ean
2012-06-01
As focal plane array technologies advance and imagers increase in resolution, display technology must outpace the imaging improvements in order to adequately represent the complete data collection. Typical display devices tend to have an aspect ratio similar to 4:3 or 16:9; however, a breed of Wide Field of View (WFOV) imaging devices exists that skews from the norm, with aspect ratios as high as 5:1. This particular quality, when coupled with a high spatial resolution, presents a unique challenge for display devices. Standard display devices must choose between resizing the image data to fit the display and displaying the image data at native resolution while truncating potentially important information. The problem is compounded when considering the applications; WFOV high-situational-awareness imagers are sought for space-limited military vehicles. Trade-offs between these issues are assessed with respect to the image quality of the WFOV sensor.
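The resize-versus-truncate trade-off is easy to put in numbers: fitting a 5:1 frame onto a 16:9 panel either shrinks it severely or crops most of its width. The sketch below works through both options with hypothetical resolutions.

    def fit_letterbox(img_w, img_h, disp_w, disp_h):
        """Scale to fit entirely on the display (bars fill the unused area)."""
        s = min(disp_w / img_w, disp_h / img_h)
        return round(img_w * s), round(img_h * s), s

    def fit_crop(img_w, img_h, disp_w, disp_h):
        """Show at native resolution; report how many columns/rows are cut off."""
        return max(0, img_w - disp_w), max(0, img_h - disp_h)

    # Hypothetical 5:1 WFOV sensor shown on a 1920x1080 (16:9) panel.
    img_w, img_h = 5000, 1000
    print(fit_letterbox(img_w, img_h, 1920, 1080))   # (1920, 384, 0.384): heavy downsampling
    print(fit_crop(img_w, img_h, 1920, 1080))        # (3080, 0): most columns lost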
The viewpoint-specific failure of modern 3D displays in laparoscopic surgery.
Sakata, Shinichiro; Grove, Philip M; Hill, Andrew; Watson, Marcus O; Stevenson, Andrew R L
2016-11-01
Surgeons conventionally assume the optimal viewing position during 3D laparoscopic surgery and may not be aware of the potential hazards to team members positioned across different suboptimal viewing positions. The first aim of this study was to map the viewing positions within a standard operating theatre where individuals may experience visual ghosting (i.e. double vision images) from crosstalk. The second aim was to characterize the standard viewing positions adopted by instrument nurses and surgical assistants during laparoscopic pelvic surgery and report the associated levels of visual ghosting and discomfort. In experiment 1, 15 participants viewed a laparoscopic 3D display from 176 different viewing positions around the screen. In experiment 2, 12 participants (randomly assigned to four clinically relevant viewing positions) viewed laparoscopic suturing in a simulation laboratory. In both experiments, we measured the intensity of visual ghosting. In experiment 2, participants also completed the Simulator Sickness Questionnaire. We mapped locations within the dimensions of a standard operating theatre at which visual ghosting may result during 3D laparoscopy. Head height relative to the bottom of the image and large horizontal eccentricities away from the surface normal were important contributors to high levels of visual ghosting. Conventional viewing positions adopted by instrument nurses yielded high levels of visual ghosting and severe discomfort. The conventional viewing positions adopted by surgical team members during laparoscopic pelvic operations are suboptimal for viewing 3D laparoscopic displays, and even short periods of viewing can yield high levels of discomfort.
JuxtaView - A tool for interactive visualization of large imagery on scalable tiled displays
Krishnaprasad, N.K.; Vishwanath, V.; Venkataraman, S.; Rao, A.G.; Renambot, L.; Leigh, J.; Johnson, A.E.; Davis, B.
2004-01-01
JuxtaView is a cluster-based application for viewing ultra-high-resolution images on scalable tiled displays. We present in JuxtaView a new parallel computing and distributed memory approach for out-of-core montage visualization, using LambdaRAM, a software-based network-level cache system. The ultimate goal of JuxtaView is to enable a user to interactively roam through potentially terabytes of distributed, spatially referenced image data such as those from electron microscopes, satellites and aerial photographs. In working towards this goal, we describe our first prototype implemented over a local area network, where the image is distributed using LambdaRAM, on the memory of all nodes of a PC cluster driving a tiled display wall. Aggressive pre-fetching schemes employed by LambdaRAM help to reduce the latency involved in remote memory access. We compare LambdaRAM with a more traditional memory-mapped file approach for out-of-core visualization. © 2004 IEEE.
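The heart of out-of-core roaming is deciding which image tiles the current viewport needs and which neighbouring tiles to prefetch so panning does not stall (in JuxtaView, the fetching itself goes through LambdaRAM). The toy index computation below illustrates that idea only; it is not the LambdaRAM API.

    def visible_tiles(view_x, view_y, view_w, view_h, tile_size):
        """Tile (col, row) indices covering the viewport."""
        c0, r0 = view_x // tile_size, view_y // tile_size
        c1 = (view_x + view_w - 1) // tile_size
        r1 = (view_y + view_h - 1) // tile_size
        return {(c, r) for c in range(c0, c1 + 1) for r in range(r0, r1 + 1)}

    def prefetch_ring(tiles, ring=1):
        """Tiles adjacent to the visible set, to be fetched speculatively."""
        around = {(c + dc, r + dr) for (c, r) in tiles
                  for dc in range(-ring, ring + 1) for dr in range(-ring, ring + 1)}
        return around - tiles

    vis = visible_tiles(view_x=10_240, view_y=4_096, view_w=3_072, view_h=1_536,
                        tile_size=1024)
    print(sorted(vis))
    print(sorted(prefetch_ring(vis)))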
JSC MCC Bldg 30 Instrumentation and Communications Officer (INCO) RTDS
1988-06-02
Instrumentation and Communications Officer (INCO) John F. Muratore monitors conventional workstation displays during an STS-26 simulation in JSC Mission Control Center (MCC) Bldg 30 Flight Control Room (FCR). Next to Muratore an operator views the real time data system (RTDS), an expert system. During the STS-29 mission two conventional monochrome console display units will be removed and replaced with RTDS displays. View is for the STS-29 press kit from Office of Aeronautics and Space Technology (OAST) RTDS.
Real Image Visual Display System
1992-12-01
[List-of-figures excerpt: DTI-100M autostereoscopic display; lenticular screen; lenticular screen parameters and pixel position; general viewing of the stereoscopic couple; viewing zones for lenticular screens.] "... involves using a lenticular screen for imaging. Lenticular screens are probably most familiar in the form of "3-D postcards," which consist of ..."
37. View of detection radar environmental display (DRED) console for ...
37. View of detection radar environmental display (DRED) console for middle DR 2 (structure no. 736) antenna, located in MWOC facility. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK
Endeavour backdropped against space with Sun displaying rayed effect
1993-12-09
STS061-105-024 (2-13 Dec. 1993) --- One of Endeavour's space walkers captured this view of Endeavour backdropped against the blackness of space, with the Sun displaying a rayed effect. The extended Remote Manipulator System (RMS) arm that the astronaut was standing on is seen on the left side of the view.
Software components for medical image visualization and surgical planning
NASA Astrophysics Data System (ADS)
Starreveld, Yves P.; Gobbi, David G.; Finnis, Kirk; Peters, Terence M.
2001-05-01
Purpose: The development of new applications in medical image visualization and surgical planning requires the completion of many common tasks such as image reading and re-sampling, segmentation, volume rendering, and surface display. Intra-operative use requires an interface to a tracking system and image registration, and the application requires basic, easy-to-understand user interface components. Rapid changes in computer and end-application hardware, as well as in operating systems and network environments, make it desirable to have a hardware- and operating-system-independent collection of reusable software components that can be assembled rapidly to prototype new applications. Methods: Using the OpenGL-based Visualization Toolkit as a base, we have developed a set of components that implement the above-mentioned tasks. The components are written in both C++ and Python, but all are accessible from Python, a byte-compiled scripting language. The components have been used on the Red Hat Linux, Silicon Graphics Iris, Microsoft Windows, and Apple OS X platforms. Rigorous object-oriented software design methods have been applied to ensure hardware independence and a standard application programming interface (API). There are components to acquire, display, and register images from MRI, MRA, CT, Computed Rotational Angiography (CRA), Digital Subtraction Angiography (DSA), 2D and 3D ultrasound, video and physiological recordings. Interfaces to various tracking systems for intra-operative use have also been implemented. Results: The described components have been implemented and tested. To date they have been used to create image manipulation and viewing tools, a deep brain functional atlas, a 3D ultrasound acquisition and display platform, a prototype minimally invasive robotic coronary artery bypass graft planning system, a tracked neuro-endoscope guidance system and a frame-based stereotaxy neurosurgery planning tool. The frame-based stereotaxy module has been licensed and certified for use in a commercial image guidance system. Conclusions: It is feasible to encapsulate image manipulation and surgical guidance tasks in individual, reusable software modules. These modules allow for faster development of new applications. The strict application of object-oriented software design methods allows individual components of such a system to make the transition from the research environment to a commercial one.
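Because the components are built on the Visualization Toolkit and scripted from Python, the kind of display task they encapsulate can be sketched with standard VTK classes, as below. This is a minimal stand-alone illustration, not the authors' component API; the reader choice, directory path, and transfer-function values are placeholders, and the real components additionally wrap registration, tracking, and user-interface layers.

    # Minimal VTK (Python) volume rendering of a CT-like dataset.
    import vtk

    reader = vtk.vtkDICOMImageReader()           # placeholder: any VTK image reader
    reader.SetDirectoryName("ct_series/")        # hypothetical path

    color = vtk.vtkColorTransferFunction()
    color.AddRGBPoint(-1000, 0.0, 0.0, 0.0)      # air -> black
    color.AddRGBPoint(400, 1.0, 0.9, 0.8)        # bone -> off-white

    opacity = vtk.vtkPiecewiseFunction()
    opacity.AddPoint(-1000, 0.0)
    opacity.AddPoint(400, 0.8)

    prop = vtk.vtkVolumeProperty()
    prop.SetColor(color)
    prop.SetScalarOpacity(opacity)
    prop.ShadeOn()

    mapper = vtk.vtkSmartVolumeMapper()
    mapper.SetInputConnection(reader.GetOutputPort())

    volume = vtk.vtkVolume()
    volume.SetMapper(mapper)
    volume.SetProperty(prop)

    renderer = vtk.vtkRenderer()
    renderer.AddVolume(volume)
    window = vtk.vtkRenderWindow()
    window.AddRenderer(renderer)
    interactor = vtk.vtkRenderWindowInteractor()
    interactor.SetRenderWindow(window)
    interactor.Initialize()
    window.Render()
    interactor.Start()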
Han, Jian; Liu, Juan; Yao, Xincheng; Wang, Yongtian
2015-02-09
A compact waveguide display system integrating freeform elements and volume holograms is presented here for the first time. The use of freeform elements can broaden the field of view, which otherwise limits the applications of a holographic waveguide. An optimized system can achieve a diagonal field of view of 45° when the thickness of the planar waveguide is 3 mm. The freeform-element in-coupler and the volume-hologram out-coupler were designed in detail in our study, and the influence of grating configurations on diffraction efficiency was analyzed thoroughly. The off-axis aberrations were well compensated by the in-coupler, and the diffraction efficiency of the optimized waveguide display system could reach 87.57%. With the integrated design, stability and reliability of this monochromatic display system were achieved, and the alignment of the system was easily controlled by the recording of the volume holograms, which makes mass production possible.
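As a point of reference for how diffraction efficiency depends on the grating configuration (not the authors' own model), Kogelnik's coupled-wave result for a lossless transmission volume grating at Bragg incidence gives an efficiency of sin²(πΔn·d / (λ·cosθ)). A numeric sketch with placeholder grating parameters:

    import math

    def kogelnik_transmission_efficiency(delta_n, thickness_um, wavelength_um,
                                         bragg_angle_deg):
        """Kogelnik first-order efficiency for a lossless transmission volume
        grating at Bragg incidence (reference formula, not the paper's model)."""
        nu = math.pi * delta_n * thickness_um / (
            wavelength_um * math.cos(math.radians(bragg_angle_deg)))
        return math.sin(nu) ** 2

    # Placeholder index modulation, thickness, wavelength, and Bragg angle:
    print(f"eta = {kogelnik_transmission_efficiency(0.02, 12.0, 0.532, 30.0):.3f}")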
Mono-stereo-autostereo: the evolution of 3-dimensional neurosurgical planning.
Stadie, Axel T; Kockro, Ralf A
2013-01-01
In the past decade, surgery planning has changed significantly, mainly because of improvements in computer graphics rendering power and display technology, which have turned the plain graphics of the mid-1990s into interactive stereoscopic objects. To report our experiences with 2 virtual reality systems used for planning neurosurgical operations. A series of 208 operations were planned with the Dextroscope (Bracco AMT, Singapore), which requires the use of liquid crystal display shutter glasses. The participating neurosurgeons answered a questionnaire after the planning procedure and postoperatively. In a second prospective series of 33 patients, we used an autostereoscopic monitor system (MD20-3-D; Setred SA, Sweden) to plan intracranial operations. A questionnaire regarding the value of surgery planning was answered preoperatively and postoperatively. The Dextroscope could be integrated into the daily surgical routine. Surgeons regarded their understanding of the pathoanatomical situation as improved, leading to enhanced intraoperative orientation and confidence compared with conventional planning. The autostereoscopic Setred system was regarded as helpful in establishing the surgical strategy and analyzing the pathoanatomical situation compared with conventional planning. Both systems were perceived as a backup in case of failure of the standard navigation system. Improvement of display and interaction techniques adds to the realism of the planning process and enables precise structural understanding preoperatively. This minimizes intraoperative guesswork and exploratory dissection. Autostereoscopic display techniques will further increase the value and acceptance of 3-dimensional planning and intraoperative navigation.
Tracking a head-mounted display in a room-sized environment with head-mounted cameras
NASA Astrophysics Data System (ADS)
Wang, Jih-Fang; Azuma, Ronald T.; Bishop, Gary; Chi, Vernon; Eyles, John; Fuchs, Henry
1990-10-01
This paper presents our efforts to accurately track a Head-Mounted Display (HMD) in a large environment. We review our current benchtop prototype (introduced in [WCF90]), then describe our plans for building the full-scale system. Both systems use an inside-out optical tracking scheme, where lateral-effect photodiodes mounted on the user's helmet view flashing infrared beacons placed in the environment. Church's method uses the measured 2D image positions and the known 3D beacon locations to recover the 3D position and orientation of the helmet in real-time. We discuss the implementation and performance of the benchtop prototype. The full-scale system design includes ceiling panels that hold the infrared beacons and a new sensor arrangement of two photodiodes with holographic lenses. In the full-scale system, the user can walk almost anywhere under the grid of ceiling panels, making the working volume nearly as large as the room.
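The pose-recovery step described above (2D image measurements of known 3D beacons) can be sketched generically as a space-resection least-squares fit. The code below is such a sketch, not the paper's Church's-method implementation; the beacon coordinates, initial guess, and function names are assumptions for illustration.

```python
# Generic pose-from-beacons sketch (space resection): recover position and
# orientation from known 3D beacon locations and their measured 2D image positions.
# This is an illustration, not the Church's-method code used in the paper.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(pose, beacons, focal=1.0):
    """Pinhole projection of world-frame beacons into the sensor image plane."""
    rvec, t = pose[:3], pose[3:]
    cam_pts = Rotation.from_rotvec(rvec).apply(beacons) + t   # world -> sensor frame
    return focal * cam_pts[:, :2] / cam_pts[:, 2:3]           # perspective divide

def recover_pose(beacons_3d, image_2d, pose0=(0, 0, 0, 0, 0, 1.0)):
    """Least-squares fit of the 6-DOF pose (rotation vector + translation)."""
    residual = lambda p: (project(p, beacons_3d) - image_2d).ravel()
    return least_squares(residual, np.asarray(pose0, dtype=float)).x

# Tiny synthetic example: four beacons viewed from roughly 2 m away.
beacons = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0], [1.0, 1.0, 0.1]])
true_pose = np.array([0.05, -0.02, 0.1, 0.2, 0.3, 2.0])  # rotvec + translation
measured = project(true_pose, beacons)
print(recover_pose(beacons, measured))  # should land close to true_pose
```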
Perceived change in orientation from optic flow in the central visual field
NASA Technical Reports Server (NTRS)
Dyre, Brian P.; Andersen, George J.
1988-01-01
The effects of internal depth within a simulation display on perceived changes in orientation have been studied. Subjects monocularly viewed displays simulating observer motion within a volume of randomly positioned points through a window which limited the field of view to 15 deg. Changes in perceived spatial orientation were measured by changes in posture. The extent of internal depth within the display, the presence or absence of visual information specifying change in orientation, and the frequency of motion supplied by the display were examined. It was found that increased sway occurred at frequencies equal to or below 0.375 Hz when motion at these frequencies was displayed. The extent of internal depth had no effect on the perception of changing orientation.
NASA Astrophysics Data System (ADS)
Sanford, James L.; Schlig, Eugene S.; Prache, Olivier; Dove, Derek B.; Ali, Tariq A.; Howard, Webster E.
2002-02-01
The IBM Research Division and eMagin Corp. jointly have developed a low-power VGA direct view active matrix OLED display, fabricated on a crystalline silicon CMOS chip. The display is incorporated in IBM prototype wristwatch computers running the Linux operating system. IBM designed the silicon chip and eMagin developed the organic stack and performed the back-end-of-line processing and packaging. Each pixel is driven by a constant current source controlled by a CMOS RAM cell, and the display receives its data from the processor memory bus. This paper describes the OLED technology and packaging, and outlines the design of the pixel and display electronics and the processor interface. Experimental results are presented.
Magnifying Smartphone Screen Using Google Glass for Low-Vision Users.
Pundlik, Shrinivas; HuaQi Yi; Rui Liu; Peli, Eli; Gang Luo
2017-01-01
Magnification is a key accessibility feature used by low-vision smartphone users. However, small screen size can lead to loss of context and make interaction with magnified displays challenging. We hypothesize that controlling the viewport with head motion can be natural and help in gaining access to magnified displays. We implement this idea using a Google Glass that displays the magnified smartphone screenshots received in real time via Bluetooth. Instead of navigating with touch gestures on the magnified smartphone display, the users can view different screen locations by rotating their head, and remotely interact with the smartphone. It is equivalent to looking at a large virtual image through a head-contingent viewing port, in this case, the Glass display with ~15° field of view. The system can transfer seven screenshots per second at 8× magnification, sufficient for tasks where the display content does not change rapidly. A pilot evaluation of this approach was conducted with eight normally sighted and four visually impaired subjects performing assigned tasks using calculator and music player apps. Results showed that performance in the calculation task was faster with the Glass than with the phone's built-in screen zoom. We conclude that head-contingent scanning control can be beneficial in navigating magnified small smartphone displays, at least for tasks involving familiar content layout.
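A head-contingent viewport of the kind described can be sketched as a simple mapping from head yaw and pitch to a crop rectangle of the magnified screenshot. The sketch below assumes an illustrative pan gain, window size, and smartphone screen resolution; only the ~15° field of view and 8× magnification come from the abstract.

```python
# Sketch of head-contingent viewport selection over a magnified screenshot.
# The gain, window size, screen size, and function names are illustrative
# assumptions, not the authors' implementation.
def viewport_from_head(yaw_deg, pitch_deg,
                       screen_w=1080, screen_h=1920,   # smartphone screen (assumed)
                       magnification=8.0,              # from the abstract
                       win_w=640, win_h=360,           # head-worn window (assumed)
                       gain_px_per_deg=60.0):          # pan gain (assumed)
    """Map head yaw/pitch to a (left, top, width, height) crop of the magnified image."""
    virt_w, virt_h = int(screen_w * magnification), int(screen_h * magnification)
    # The crop center pans with head rotation, starting from the image center.
    cx = virt_w / 2 + yaw_deg * gain_px_per_deg
    cy = virt_h / 2 - pitch_deg * gain_px_per_deg
    # Clamp so the window stays inside the virtual (magnified) image.
    left = min(max(cx - win_w / 2, 0), virt_w - win_w)
    top = min(max(cy - win_h / 2, 0), virt_h - win_h)
    return int(left), int(top), win_w, win_h

# Example: looking 10 degrees to the right pans the crop 600 px to the right.
print(viewport_from_head(yaw_deg=10.0, pitch_deg=0.0))
```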
NASA Astrophysics Data System (ADS)
Yan, Zhiqiang; Yan, Xingpeng; Jiang, Xiaoyu; Gao, Hui; Wen, Jun
2017-11-01
An integral imaging based light field display method is proposed by use of a holographic diffuser, and enhanced viewing resolution is gained over conventional integral imaging systems. The holographic diffuser is fabricated with controlled diffusion characteristics, which interpolates the discrete light field of the reconstructed points to approximate the original light field. The viewing resolution can thus be improved and made independent of the limitation imposed by the Nyquist sampling frequency. An integral imaging system with low Nyquist sampling frequency is constructed, and reconstructed scenes of high viewing resolution using the holographic diffuser are demonstrated, verifying the feasibility of the method.
Efficient fabrication method of nano-grating for 3D holographic display with full parallax views.
Wan, Wenqiang; Qiao, Wen; Huang, Wenbin; Zhu, Ming; Fang, Zongbao; Pu, Donglin; Ye, Yan; Liu, Yanhua; Chen, Linsen
2016-03-21
Without any special glasses, multiview 3D displays based on diffractive optics can present high resolution, full-parallax 3D images in an ultra-wide viewing angle. The enabling optical component, namely the phase plate, can produce arbitrarily distributed view zones by carefully designing the orientation and the period of each nano-grating pixel. However, such a 3D display screen is restricted to a limited size due to the time-consuming process of fabricating nano-gratings on the phase plate. In this paper, we proposed and developed a lithography system that can fabricate the phase plate efficiently. Here we made two phase plates with full nano-grating pixel coverage at a speed of 20 mm²/min, a 500-fold increase in efficiency compared to E-beam lithography. One 2.5-inch phase plate generated 9-view 3D images with horizontal parallax, while the other 6-inch phase plate produced 64-view 3D images with full parallax. The angular divergence in the horizontal axis and vertical axis was 1.5 degrees and 1.25 degrees, respectively, slightly larger than the simulated value of 1.2 degrees by Finite Difference Time Domain (FDTD) analysis. The intensity variation was less than 10% for each viewpoint, consistent with the simulation results. On top of each phase plate, a high-resolution binary masking pattern containing the amplitude information of all viewing zones was well aligned. We achieved a resolution of 400 pixels/inch and a viewing angle of 40 degrees for 9-view 3D images with horizontal parallax. In another prototype, the resolution of each view was 160 pixels/inch and the viewing angle was 50 degrees for 64-view 3D images with full parallax. As demonstrated in the experiments, the homemade lithography system provided the key fabrication technology for multiview 3D holographic displays.
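The view-steering role of each nano-grating pixel can be illustrated with the scalar grating equation. The sketch below assumes normal incidence, first-order diffraction, and a 532 nm wavelength (none of which are stated design values of the paper); it returns a period and in-plane orientation for a desired view direction.

```python
# Hedged sketch: nano-grating period and orientation that steer normally incident
# light of wavelength lam_nm toward a desired first-order view direction, using the
# simple scalar grating equation sin(theta) = order * lam / period. The wavelength
# and angles below are illustrative, not the paper's design values.
import math

def grating_for_view(theta_deg, phi_deg, lam_nm=532.0, order=1):
    """Return (period_nm, orientation_deg) for a view at polar angle theta and
    azimuth phi, with the grating vector aligned to the view azimuth."""
    period_nm = order * lam_nm / math.sin(math.radians(theta_deg))
    return period_nm, phi_deg % 360.0

# Example: a view 20 degrees off-normal at azimuth 45 degrees needs a period of
# roughly 1.56 micrometers at 532 nm.
print(grating_for_view(20.0, 45.0))
```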
Subjective and objective evaluation of visual fatigue on viewing 3D display continuously
NASA Astrophysics Data System (ADS)
Wang, Danli; Xie, Yaohua; Yang, Xinpan; Lu, Yang; Guo, Anxiang
2015-03-01
In recent years, three-dimensional (3D) displays have become more and more popular in many fields. Although they can provide a better viewing experience, they cause extra problems, e.g., visual fatigue. Subjective or objective methods are usually used in discrete viewing processes to evaluate visual fatigue. However, little research combines subjective indicators and objective ones in an entirely continuous viewing process. In this paper, we propose a method to evaluate real-time visual fatigue both subjectively and objectively. Subjects watch stereo content on a polarized 3D display continuously. Visual Reaction Time (VRT), Critical Flicker Frequency (CFF), Punctum Maximum Accommodation (PMA) and subjective scores of visual fatigue are collected before and after viewing. During the viewing process, the subjects rate the visual fatigue whenever it changes, without breaking the viewing process. At the same time, the blink frequency (BF) and percentage of eye closure (PERCLOS) of each subject are recorded for comparison with a previous study. The results show that the subjective visual fatigue and PERCLOS increase with time and are greater in a continuous viewing process than in a discrete one. The BF increased with time during the continuous viewing process. Besides, the visual fatigue also induced significant changes in VRT, CFF and PMA.
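The two ocular measures named above can be computed from a per-frame eyelid-closure signal roughly as follows; the 80% closure threshold (the common P80 convention) is an assumption of this sketch, not necessarily the authors' definition.

```python
# Hedged sketch of PERCLOS and blink frequency from a per-frame eyelid closure
# signal in [0, 1] (1 = fully closed). The 80% threshold (P80) is a common
# convention assumed here, not necessarily the definition used in the study.
import numpy as np

def perclos(closure, threshold=0.8):
    """Fraction of frames in which the eye is closed beyond the threshold."""
    closure = np.asarray(closure)
    return float(np.mean(closure >= threshold))

def blink_frequency(closure, fps, threshold=0.8):
    """Blinks per minute, counted as rising edges of the 'closed' state."""
    closed = np.asarray(closure) >= threshold
    onsets = np.count_nonzero(closed[1:] & ~closed[:-1])
    minutes = len(closed) / fps / 60.0
    return onsets / minutes

# Synthetic 10-second example at 30 fps with three brief blinks.
sig = np.zeros(300)
for start in (50, 150, 250):
    sig[start:start + 4] = 1.0
print(perclos(sig), blink_frequency(sig, fps=30))
```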
System requirements for head down and helmet mounted displays in the military avionics environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flynn, M.F.; Kalmanash, M.; Sethna, V.
1996-12-31
The introduction of flat panel display technologies into the military avionics cockpit is a challenging proposition, due to the very difficult system level requirements which must be met. These relate to environmental extremes (temperature and vibration), severe ambient lighting conditions (10,000 fL to nighttime viewing), night vision system compatibility, and wide viewing angle. At the same time, the display system must be packaged in minimal space and use minimal power. The authors will present details on the display system requirements for both head down and helmet mounted systems, as well as information on how these challenges may be overcome.
Operator vision aids for space teleoperation assembly and servicing
NASA Technical Reports Server (NTRS)
Brooks, Thurston L.; Ince, Ilhan; Lee, Greg
1992-01-01
This paper investigates concepts for visual operator aids required for effective telerobotic control. Operator visual aids, as defined here, mean any operational enhancement that improves man-machine control through the visual system. These concepts were derived as part of a study of vision issues for space teleoperation. Extensive literature on teleoperation, robotics, and human factors was surveyed to definitively specify appropriate requirements. This paper presents these visual aids in three general categories of camera/lighting functions, display enhancements, and operator cues. In the area of camera/lighting functions concepts are discussed for: (1) automatic end effector or task tracking; (2) novel camera designs; (3) computer-generated virtual camera views; (4) computer assisted camera/lighting placement; and (5) voice control. In the technology area of display aids, concepts are presented for: (1) zone displays, such as imminent collision or indexing limits; (2) predictive displays for temporal and spatial location; (3) stimulus-response reconciliation displays; (4) graphical display of depth cues such as 2-D symbolic depth, virtual views, and perspective depth; and (5) view enhancements through image processing and symbolic representations. Finally, operator visual cues (e.g., targets) that help identify size, distance, shape, orientation and location are discussed.
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.
2006-01-01
The visual requirements for augmented reality or virtual environments displays that might be used in real or virtual towers are reviewed with respect to similar displays already used in aircraft. As an example of the type of human performance studies needed to determine the useful specifications of augmented reality displays, an optical see-through display was used in an ATC Tower simulation. Three different binocular fields of view (14 deg, 28 deg, and 47 deg) were examined to determine their effect on subjects' ability to detect aircraft maneuvering and landing. The results suggest that binocular fields of view much greater than 47 deg are unlikely to dramatically improve search performance and that partial binocular overlap is a feasible display technique for augmented reality Tower applications.
Television, computer and portable display device use by people with central vision impairment
Woods, Russell L; Satgunam, PremNandhini
2011-01-01
Purpose To survey the viewing experience (e.g. hours watched, difficulty) and viewing metrics (e.g. distance viewed, display size) for television (TV), computers and portable visual display devices for normally-sighted (NS) and visually impaired participants. This information may guide visual rehabilitation. Methods The survey was administered either in person or by telephone interview to 223 participants, of whom 104 had low vision (LV, worse than 6/18, age 22 to 90y, 54 males), and 94 were NS (visual acuity 6/9 or better, age 20 to 86y, 50 males). Depending on their situation, NS participants answered up to 38 questions and LV participants answered up to a further 10 questions. Results Many LV participants reported at least “some” difficulty watching TV (71/103), reported at least “often” having difficulty with computer displays (40/76) and extreme difficulty watching videos on handheld devices (11/16). The average daily TV viewing was slightly, but not significantly, higher for the LV participants (3.6h) than the NS (3.0h). Only 18% of LV participants used visual aids (all optical) to watch TV. Most LV participants obtained effective magnification from a reduced viewing distance for both TV and computer display. Younger LV participants also used a larger display when compared to older LV participants to obtain increased magnification. About half of the TV viewing time occurred in the absence of a companion for both the LV and the NS participants. The mean number of TVs at home reported by LV participants (2.2) was slightly but not significantly (p=0.09) higher than that of NS participants (2.0). LV participants were equally likely to have a computer but were significantly (p=0.004) less likely to access the internet (73/104) compared to NS participants (82/94). Most LV participants expressed an interest in image enhancing technology for TV viewing (67/104) and for computer use (50/74), if they used a computer. Conclusion In this study, both NS and LV participants had comparable video viewing habits. Most LV participants in our sample reported difficulty watching TV, and indicated an interest in assistive technology, such as image enhancement. As our participants reported that at least half their video viewing hours are spent alone and that there is usually more than one TV per household, this suggests that there are opportunities to use image enhancement on the TVs of LV viewers without interfering with the viewing experience of NS viewers. PMID:21410501
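For context on the "effective magnification from a reduced viewing distance" noted above, the standard relative-size relation (general optics, not a result of this survey) is:

```latex
% Relative magnification obtained by moving closer to the same display:
M_{\mathrm{rel}} = \frac{d_{\mathrm{ref}}}{d}
\qquad\text{e.g. } d_{\mathrm{ref}} = 3\,\mathrm{m},\ d = 1\,\mathrm{m}
\ \Rightarrow\ M_{\mathrm{rel}} = 3 .
```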
PlanWorks: A Debugging Environment for Constraint Based Planning Systems
NASA Technical Reports Server (NTRS)
Daley, Patrick; Frank, Jeremy; Iatauro, Michael; McGann, Conor; Taylor, Will
2005-01-01
Numerous planning and scheduling systems employ underlying constraint reasoning systems. Debugging such systems involves the search for errors in model rules, constraint reasoning algorithms, search heuristics, and the problem instance (initial state and goals). In order to effectively find such problems, users must see why each state or action is in a plan by tracking causal chains back to part of the initial problem instance. They must be able to visualize complex relationships among many different entities and distinguish between those entities easily. For example, a variable can be in the scope of several constraints, as well as part of a state or activity in a plan; the activity can arise as a consequence of another activity and a model rule. Finally, they must be able to track each logical inference made during planning. We have developed PlanWorks, a comprehensive system for debugging constraint-based planning and scheduling systems. PlanWorks assumes a strong transaction model of the entire planning process, including adding and removing parts of the constraint network, variable assignment, and constraint propagation. A planner logs all transactions to a relational database that is tailored to support queries from a set of specialized views that display different forms of data (e.g. constraints, activities, resources, and causal links). PlanWorks was specifically developed for the Extensible Universal Remote Operations Planning Architecture (EUROPA(sub 2)) developed at NASA, but the underlying principles behind PlanWorks make it useful for many constraint-based planning systems. The paper is organized as follows. We first describe some fundamentals of EUROPA(sub 2). We then describe PlanWorks' principal components, discuss each component in detail, and describe inter-component navigation features. We close with a discussion of how PlanWorks is used to find model flaws.
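A minimal sketch of the kind of transaction log that such specialized views can query is shown below, using Python's built-in sqlite3; the table and column names are illustrative assumptions, not EUROPA's or PlanWorks' actual schema.

```python
# Minimal sketch of a planner transaction log backing specialized debugging views,
# in the spirit of the system described above; table and column names are
# illustrative assumptions, not the actual EUROPA/PlanWorks schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE transactions (
    step   INTEGER,   -- planning step at which the transaction occurred
    kind   TEXT,      -- e.g. ADD_CONSTRAINT, ASSIGN_VARIABLE, PROPAGATE
    entity TEXT,      -- constraint, variable, or activity affected
    source TEXT       -- model rule or decision that caused it
);
CREATE TABLE causal_links (
    effect_entity TEXT,  -- state/activity that appears in the plan
    cause_entity  TEXT   -- earlier entity (or initial-state fact) behind it
);
""")

conn.executemany("INSERT INTO transactions VALUES (?, ?, ?, ?)", [
    (1, "ASSIGN_VARIABLE", "rover.at", "initial-state"),
    (2, "ADD_CONSTRAINT", "path(rover.at, goal)", "rule:navigate"),
    (3, "PROPAGATE", "path(rover.at, goal)", "solver"),
])
conn.execute("INSERT INTO causal_links VALUES (?, ?)",
             ("activity:navigate", "rover.at"))

# A 'constraint view' style query: all constraint-related transactions, in order.
for row in conn.execute(
        "SELECT step, entity, source FROM transactions "
        "WHERE kind = 'ADD_CONSTRAINT' ORDER BY step"):
    print(row)
```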
Eye height scaling of absolute size in immersive and nonimmersive displays
NASA Technical Reports Server (NTRS)
Dixon, M. W.; Wraga, M.; Proffitt, D. R.; Williams, G. C.; Kaiser, M. K. (Principal Investigator)
2000-01-01
Eye-height (EH) scaling of absolute height was investigated in three experiments. In Experiment 1, standing observers viewed cubes in an immersive virtual environment. Observers' center of projection was placed at actual EH and at 0.7 times actual EH. Observers' size judgments revealed that the EH manipulation was 76.8% effective. In Experiment 2, seated observers viewed the same cubes on an interactive desktop display; however, no effect of EH was found in response to the simulated EH manipulation. Experiment 3 tested standing observers in the immersive environment with the field of view reduced to match that of the desktop. Comparable to Experiment 1, the effect of EH was 77%. These results suggest that EH scaling is not generally used when people view an interactive desktop display because the altitude of the center of projection is indeterminate. EH scaling is spontaneously evoked, however, in immersive environments.
Toward the light field display: autostereoscopic rendering via a cluster of projectors.
Yang, Ruigang; Huang, Xinyu; Li, Sifang; Jaynes, Christopher
2008-01-01
Ultimately, a display device should be capable of reproducing the visual effects observed in reality. In this paper we introduce an autostereoscopic display that uses a scalable array of digital light projectors and a projection screen augmented with microlenses to simulate a light field for a given three-dimensional scene. Physical objects emit or reflect light in all directions to create a light field that can be approximated by the light field display. The display can simultaneously provide many viewers from different viewpoints a stereoscopic effect without head tracking or special viewing glasses. This work focuses on two important technical problems related to the light field display; calibration and rendering. We present a solution to automatically calibrate the light field display using a camera and introduce two efficient algorithms to render the special multi-view images by exploiting their spatial coherence. The effectiveness of our approach is demonstrated with a four-projector prototype that can display dynamic imagery with full parallax.
Panoramic projection avionics displays
NASA Astrophysics Data System (ADS)
Kalmanash, Michael H.
2003-09-01
Avionics projection displays are entering production in advanced tactical aircraft. Early adopters of this technology in the avionics community used projection displays to replace or upgrade earlier units incorporating direct-view CRT or AMLCD devices. Typical motivations for these upgrades were the alleviation of performance, cost, and display device availability concerns. In these systems, the upgraded (projection) displays were one-for-one form/fit replacements for the earlier units. As projection technology has matured, this situation has begun to evolve. The Lockheed-Martin F-35 is the first program in which the cockpit has been specifically designed to take advantage of one of the more unique capabilities of rear projection display technology, namely the ability to replace multiple small screens with a single large conformal viewing surface in the form of a panoramic display. Other programs are expected to follow, since the panoramic formats enable increased mission effectiveness, reduced cost and greater information transfer to the pilot. Some of the advantages and technical challenges associated with panoramic projection displays for avionics applications are described below.
Virtual viewpoint generation for three-dimensional display based on the compressive light field
NASA Astrophysics Data System (ADS)
Meng, Qiao; Sang, Xinzhu; Chen, Duo; Guo, Nan; Yan, Binbin; Yu, Chongxiu; Dou, Wenhua; Xiao, Liquan
2016-10-01
Virtual viewpoint generation is one of the key technologies for three-dimensional (3D) displays; it renders new scene perspectives from the existing viewpoints. The three-dimensional scene information can be effectively recovered at different viewing angles to allow users to switch between different views. However, in the process of matching multiple viewpoints, when N free viewpoints are received, the viewpoints must be matched pairwise, namely C(N,2) = N(N-1)/2 times, and errors can occur when matching across different baselines. To address the great complexity of the traditional virtual viewpoint generation process, a novel and rapid virtual viewpoint generation algorithm is presented in this paper, and actual light field information is used rather than geometric information. Moreover, to keep the data physically meaningful, we mainly use nonnegative tensor factorization (NTF). A tensor representation is introduced for virtual multilayer displays. The light field emitted by an N-layer, M-frame display is represented by a sparse set of non-zero elements restricted to a plane within an Nth-order, rank-M tensor. The tensor representation allows for optimal decomposition of a light field into time-multiplexed, light-attenuating layers using NTF. Finally, compressive light field synthesis for the multilayer display is used to obtain virtual viewpoints by multiplying the multiple layers. Experimental results show that the approach not only restores the original light field with high image quality (PSNR of 25.6 dB), but also overcomes the deficiency of traditional matching, so that any viewpoint can be obtained from the N free viewpoints.
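The layered factorization idea can be illustrated in stripped-down form as a two-layer, matrix special case of the nonnegative factorization described above, using standard Lee-Seung multiplicative updates; this sketch is not the paper's NTF solver, and the toy data are arbitrary.

```python
# Hedged sketch: factor a (matricized) light field into two nonnegative
# multiplicative layers with Lee-Seung updates. This is a matrix special case of
# the nonnegative tensor factorization described above, not the paper's solver.
import numpy as np

def layered_factorization(L, rank, iters=200, eps=1e-9):
    """Approximate L (views x rays, nonnegative) as A @ B with A, B >= 0.
    Each of the 'rank' components plays the role of one time-multiplexed frame."""
    rng = np.random.default_rng(0)
    A = rng.random((L.shape[0], rank))
    B = rng.random((rank, L.shape[1]))
    for _ in range(iters):
        # Standard multiplicative updates that keep A and B nonnegative.
        B *= (A.T @ L) / (A.T @ A @ B + eps)
        A *= (L @ B.T) / (A @ B @ B.T + eps)
    return A, B

# Toy example: a random nonnegative "light field" with 4 views and 64 rays per view.
L = np.abs(np.random.default_rng(1).random((4, 64)))
A, B = layered_factorization(L, rank=3)
print("relative error:", np.linalg.norm(L - A @ B) / np.linalg.norm(L))
```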
Upadhyayula, Venkata K K; Meyer, David E; Curran, Mary Ann; Gonzalez, Michael A
2014-01-21
Carbon nanotube (CNT) field emission displays (FEDs) are currently in the product development stage and are expected to be commercialized in the near future because they offer image quality and viewing angles comparable to a cathode ray tube (CRT) while using a thinner structure, similar to a liquid crystal display (LCD), and enable more efficient power consumption during use. To address concerns regarding the environmental performance of CNT-FEDs, a screening-level, cradle-to-grave life cycle assessment (LCA) was conducted based on a functional unit of 10,000 viewing hours, the viewing lifespan of a CNT-FED. Contribution analysis suggests the impacts for material acquisition and manufacturing are greater than the combined impacts for use and end-of-life. A scenario analysis of the CNT paste composition identifies the metal components used in the paste as key contributors to the impacts of the upstream stages, due to the impacts associated with metal preparation. Further improvement of the manufacturing impacts is possible by considering the use of plant-based oils, such as rapeseed oil, as alternatives to organic solvents for dispersion of CNTs. Given the differences in viewing lifespan, the impacts of the CNT-FED were compared with an LCD and a CRT display to provide more insight on how to improve the CNT-FED to make it a viable product alternative. When compared with CRT technology, CNT-FEDs show better environmental performance, whereas a comparison with LCD technology indicates the environmental impacts are roughly the same. Based on the results, the enhanced viewing capabilities of CNT-FEDs will make them a more viable display option if manufacturers can increase the product's expected viewing lifespan.
Virtual displays for 360-degree video
NASA Astrophysics Data System (ADS)
Gilbert, Stephen; Boonsuk, Wutthigrai; Kelly, Jonathan W.
2012-03-01
In this paper we describe a novel approach for comparing users' spatial cognition when using different depictions of 360-degree video on a traditional 2D display. By using virtual cameras within a game engine and texture mapping of these camera feeds to an arbitrary shape, we were able to offer users a 360-degree interface composed of four 90-degree views, two 180-degree views, or one 360-degree view of the same interactive environment. An example experiment is described using these interfaces. This technique for creating alternative displays of wide-angle video facilitates the exploration of how compressed or fish-eye distortions affect spatial perception of the environment and can benefit the creation of interfaces for surveillance and remote system teleoperation.
Scalable screen-size enlargement by multi-channel viewing-zone scanning holography.
Takaki, Yasuhiro; Nakaoka, Mitsuki
2016-08-08
Viewing-zone scanning holographic displays can enlarge both the screen size and the viewing zone. However, limitations exist in the screen size enlargement process even if the viewing zone is effectively enlarged. This study proposes a multi-channel viewing-zone scanning holographic display comprising multiple projection systems and a planar scanner to enable the scalable enlargement of the screen size. Each projection system produces an enlarged image of the screen of a MEMS spatial light modulator. The multiple enlarged images produced by the multiple projection systems are seamlessly tiled on the planar scanner. This screen size enlargement process reduces the viewing zones of the projection systems, which are horizontally scanned by the planar scanner comprising a rotating off-axis lens and a vertical diffuser to enlarge the viewing zone. A screen size of 7.4 in. and a viewing-zone angle of 43.0° are demonstrated.
Is eye damage caused by stereoscopic displays?
NASA Astrophysics Data System (ADS)
Mayer, Udo; Neumann, Markus D.; Kubbat, Wolfgang; Landau, Kurt
2000-05-01
A normally developing child will achieve emmetropia in youth and maintain it, with the cornea, lens and axial length of the eye growing in an astonishingly coordinated way. In recent years, research has shown that this coordinated growth process is a visually controlled closed loop. The mechanism has been studied particularly in animals. It was found that the growth of the axial length of the eyeball is controlled by image focus information from the retina. It was shown that maladjustments of this visually guided growth control mechanism can occur that result in ametropia. It has thereby been proven that, e.g., short-sightedness is not only caused by heredity, but is acquired under certain visual conditions. It is shown that these conditions are similar to the conditions of viewing stereoscopic displays, where the normal accommodation-convergence coupling is decoupled. An evaluation is given of the potential for damaging the eyes by viewing stereoscopic displays. In this regard, different viewing methods for stereoscopic displays are evaluated. Moreover, clues are given as to how the environment and display conditions should be set, and which users should be chosen, to minimize the risk of eye damage.
Augmenting digital displays with computation
NASA Astrophysics Data System (ADS)
Liu, Jing
As we inevitably step deeper and deeper into a world connected via the Internet, more and more information will be exchanged digitally. Displays are the interface between digital information and each individual. Naturally, one fundamental goal of displays is to reproduce information as realistically as possible since humans still care a lot about what happens in the real world. Human eyes are the receiving end of such information exchange; therefore it is impossible to study displays without studying the human visual system. In fact, the design of displays is rather closely coupled with what human eyes are capable of perceiving. For example, we are less interested in building displays that emit light in the invisible spectrum. This dissertation explores how we can augment displays with computation, which takes both display hardware and the human visual system into consideration. Four novel projects on display technologies are included in this dissertation: First, we propose a software-based approach to driving multiview autostereoscopic displays. Our display algorithm can dynamically assign views to hardware display zones based on multiple observers' current head positions, substantially reducing crosstalk and stereo inversion. Second, we present a dense projector array that creates a seamless 3D viewing experience for multiple viewers. We smoothly interpolate the set of viewer heights and distances on a per-vertex basis across the array's field of view, reducing image distortion, crosstalk, and artifacts from tracking errors. Third, we propose a method for high dynamic range display calibration that takes into account the variation of the chrominance error over luminance. We propose a data structure for enabling efficient representation and querying of the calibration function, which also allows user-guided balancing between memory consumption and the amount of computation. Fourth, we present user studies that demonstrate that the ~60 Hz critical flicker fusion rate for traditional displays is not enough for some computational displays that show complex image patterns. The study focuses on displays with hidden channels, and their application to 3D+2D TV. By taking advantage of the fast growing power of computation and sensors, these four novel display setups - in combination with display algorithms - advance the frontier of computational display research.
Preliminary display comparison for dental diagnostic applications
NASA Astrophysics Data System (ADS)
Odlum, Nicholas; Spalla, Guillaume; van Assche, Nele; Vandenberghe, Bart; Jacobs, Reinhilde; Quirynen, Marc; Marchessoux, Cédric
2012-02-01
The aim of this study is to predict the clinical performance and image quality of a display system for viewing dental images. At present, the use of dedicated medical displays is not uniform among dentists - many still view images on ordinary consumer displays. This work investigated whether the use of a medical display improved the perception of dental images by a clinician, compared to a consumer display. Display systems were simulated using the MEdical Virtual Imaging Chain (MEVIC). Images derived from two carefully performed studies, on periodontal bone lesion detection and endodontic file length determination, were used. Three displays were selected: a medical grade one and two consumer displays (Barco MDRC-2120, Dell 1907FP and Dell 2007FPb). Typical characteristics of the displays were evaluated by measurements and simulations, such as the Modulation Transfer Function (MTF), the Noise Power Spectrum (NPS), backlight stability and calibration. For the MTF, the display with the largest pixel pitch logically has the worst MTF. Moreover, the medical grade display has a slightly better MTF, and the displays have similar NPS. The study shows the instability of the emitted intensity of the consumer displays compared to the medical grade one. Finally, the study on the calibration methodology of the display shows that the signal in the dental images will always be more perceivable on the DICOM GSDF display than on a gamma 2.2 display.
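For reference, the display metrics named above are conventionally defined as follows; these are standard textbook forms assumed here, not MEVIC-specific formulas.

```latex
% Standard definitions assumed here (textbook forms, not MEVIC-specific):
% the modulation transfer function as the normalized magnitude of the optical
% transfer function, and the noise power spectrum of a uniform exposure.
\mathrm{MTF}(u,v) = \frac{\lvert \mathrm{OTF}(u,v)\rvert}{\lvert \mathrm{OTF}(0,0)\rvert},
\qquad
\mathrm{NPS}(u,v) = \frac{\Delta x\,\Delta y}{N_x N_y}\,
\Big\langle \big\lvert \mathrm{DFT}\{ I(x,y) - \bar{I} \} \big\rvert^{2} \Big\rangle .
```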
Facial Displays Are Tools for Social Influence.
Crivelli, Carlos; Fridlund, Alan J
2018-05-01
Based on modern theories of signal evolution and animal communication, the behavioral ecology view of facial displays (BECV) reconceives our 'facial expressions of emotion' as social tools that serve as lead signs to contingent action in social negotiation. BECV offers an externalist, functionalist view of facial displays that is not bound to Western conceptions about either expressions or emotions. It easily accommodates recent findings of diversity in facial displays, their public context-dependency, and the curious but common occurrence of solitary facial behavior. Finally, BECV restores continuity of human facial behavior research with modern functional accounts of non-human communication, and provides a non-mentalistic account of facial displays well-suited to new developments in artificial intelligence and social robotics. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
Comparative Study of the MTFA, ICS, and SQRI Image Quality Metrics for Visual Display Systems
1991-09-01
reasonable image quality predictions across select display and viewing condition parameters.
Saenz, Daniel L.; Paliwal, Bhudatt R.; Bayouth, John E.
2014-01-01
ViewRay, a novel technology providing soft-tissue imaging during radiotherapy, is investigated for treatment planning capabilities by assessing treatment plan dose homogeneity and conformity compared with linear accelerator plans. ViewRay offers both adaptive radiotherapy and image guidance. The combination of cobalt-60 (Co-60) with 0.35 Tesla magnetic resonance imaging (MRI) allows for magnetic resonance (MR)-guided intensity-modulated radiation therapy (IMRT) delivery with multiple beams. This study investigated head and neck, lung, and prostate treatment plans to understand what is possible on ViewRay and to narrow focus toward sites with optimal dosimetry. The goal is not to provide a rigorous assessment of planning capabilities, but rather a first-order demonstration of ViewRay planning abilities. Images, structure sets, points, and dose from treatment plans created in Pinnacle for patients in our clinic were imported into ViewRay. The same objectives were used to assess plan quality, and all critical structures were treated as similarly as possible. Homogeneity index (HI), conformity index (CI), and volume receiving <20% of the prescription dose (DRx) were calculated to assess the plans. The 95% confidence intervals were recorded for all measurements and presented with the associated bars in graphs. The homogeneity index (D5/D95) showed a 1-5% inhomogeneity increase for head and neck, 3-8% for lung, and 4-16% for prostate. CI revealed a modest conformity increase for lung. The volume receiving 20% of the prescription dose increased 2-8% for head and neck and up to 4% for lung and prostate. Overall, for head and neck, Co-60 ViewRay treatments planned with its Monte Carlo treatment planning software were comparable with 6 MV plans computed with the convolution superposition algorithm on the Pinnacle treatment planning system. PMID:24872603
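The two plan-quality indices can be computed from voxel dose arrays roughly as below; HI = D5/D95 follows the abstract, while the CI formula (prescription isodose volume over target volume) is one common convention assumed here, not necessarily the authors' exact definition.

```python
# Sketch of the plan-quality indices used above, computed from voxel dose arrays.
# HI = D5/D95 follows the abstract; the CI formula below is one common convention
# and is an assumption, not necessarily the authors' exact definition.
import numpy as np

def homogeneity_index(target_dose):
    """D5/D95 within the target: dose to the hottest 5% over the coldest 5%."""
    d5 = np.percentile(target_dose, 95)
    d95 = np.percentile(target_dose, 5)
    return d5 / d95

def conformity_index(all_dose, target_mask, prescription):
    """Volume receiving >= prescription dose, divided by the target volume."""
    v_rx = np.count_nonzero(all_dose >= prescription)
    v_target = np.count_nonzero(target_mask)
    return v_rx / v_target

# Toy example: a roughly uniform 62 Gy dose inside a small cubic target.
rng = np.random.default_rng(0)
dose = np.full((20, 20, 20), 20.0)
target = np.zeros_like(dose, dtype=bool)
target[8:12, 8:12, 8:12] = True
dose[target] = 62.0 + rng.normal(0.0, 1.0, target.sum())
print(homogeneity_index(dose[target]), conformity_index(dose, target, 60.0))
```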
Informap... a computerized information system for fire planning and fire control
Theodore G. Storey; Ross D. Carder; Ernest T. Tolin
1969-01-01
INFORMAP (Information Necessary for Optimum Resource Management and Protection) is a computerized system under development for storing, manipulating, retrieving, and displaying data for fire planning and fire control. A prototype for planning applications has been developed and tested. It is programed in Fortran IV for the IBM 7040 computer, and displays information in...
ERIC Educational Resources Information Center
Hassa, Samira
2012-01-01
This study examines language planning as displayed in street names, advertising posters, billboards, and supermarket product displays in three Moroccan cities: Casablanca, Fes, and Rabat. The study reveals somewhat confusing language planning stemming from on-going political, economic, and social transformation in Morocco. More than 50 years after…
Color moiré simulations in contact-type 3-D displays.
Lee, B-R; Son, J-Y; Chernyshov, O O; Lee, H; Jeong, I-K
2015-06-01
A new method of color moiré fringe simulation in contact-type 3-D displays is introduced. The method allows simulating the color moirés appearing in these displays, which cannot be modeled by the conventional cosine approximation of a line grating. The color moirés are mainly introduced by the width of the boundary lines between the elemental optics in, and the plate thickness of, the viewing-zone-forming optics. This is because the boundary lines hide some parts of the pixels under the viewing-zone-forming optics, and the plate thickness induces a virtual contraction of the pixels. The simulated color moiré fringes closely match those appearing on the displays.
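For context, the classic two-grating moiré relation, i.e. the line-grating approximation whose limitations motivate the paper, gives the moiré period for pitches p1 and p2 crossed at a small angle θ:

```latex
% Classic moire period of two line gratings with pitches p1, p2 crossed at angle
% theta (the line-grating approximation argued above to be insufficient for
% contact-type 3-D displays):
p_{m} \;=\; \frac{p_{1}\,p_{2}}{\sqrt{p_{1}^{2} + p_{2}^{2} - 2\,p_{1}p_{2}\cos\theta}} .
```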
NASA Astrophysics Data System (ADS)
Teng, Dongdong; Liu, Lilin; Zhang, Yueli; Pang, Zhiyong; Wang, Biao
2014-09-01
Through the creative usage of a shiftable cylindrical lens, a wide-view-angle holographic display system is developed for medical object display in real three-dimensional (3D) space based on a time-multiplexing method. The two-dimensional (2D) source images for all computer generated holograms (CGHs) needed by the display system are only one group of computerized tomography (CT) or magnetic resonance imaging (MRI) slices from the scanning device. Complicated 3D message reconstruction on the computer is not necessary. A pelvis is taken as the target medical object to demonstrate this method and the obtained horizontal viewing angle reaches 28°.
Initial experience with a nuclear medicine viewing workstation
NASA Astrophysics Data System (ADS)
Witt, Robert M.; Burt, Robert W.
1992-07-01
Graphical user interface (GUI) workstations are now available from commercial vendors. We recently installed a GUI workstation in our nuclear medicine reading room for the exclusive use of staff and resident physicians. The system is built upon a Macintosh platform and has been available as a DELTAmanager from MedImage and more recently as an ICON V from Siemens Medical Systems. The workstation provides only display functions and connects to our existing nuclear medicine imaging system via ethernet. The system has some processing capabilities to create oblique, sagittal and coronal views from transverse tomographic views. Hard copy output is via a screen save device and a thermal color printer. The DELTAmanager replaced a MicroDELTA workstation which had both process and view functions. The mouse-activated GUI has made remarkable changes to physicians' use of the nuclear medicine viewing system. Training time to view and review studies has been reduced from hours to about 30 minutes. Generation of oblique views and display of brain and heart tomographic studies has been reduced from about 30 minutes of technician's time to about 5 minutes of physician's time. Overall operator functionality has been increased so that resident physicians with little prior computer experience can access all images on the image server and display pertinent patient images when consulting with other staff.
NASA Astrophysics Data System (ADS)
Dixon, Kevin W.; Krueger, Gretchen M.; Rojas, Victoria A.; Hubbard, David C.
1989-09-01
Helmet mounted displays provide required field of regard, out of the cockpit visual imagery for tactical training while maintaining acceptable luminance and resolution levels. An important consideration for visual system designers is the horizontal and vertical dimensions of the instantaneous field of view. This study investigated the effect of various instantaneous field of view sizes on the performance of low level flight and 30 degree manual dive bomb tasks. An in-simulator transfer of training design allowed pilots to be trained in an instantaneous field of view condition and transferred to a wide FOV condition for testing. The selected instantaneous field of view sizes cover the range of current and proposed helmet mounted displays. The field of view sizes used were 127° H x 67° V, 140° H x 80° V, 160° H x 80° V, and 180° H x 80° V. The 300° H x 150° V size provided a full field of view control condition. An A-10 dodecahedron simulator configured with a color light valve display, computer generated imagery, and a Polhemus magnetic head tracker provided the cockpit and display apparatus. The Polhemus magnetic head tracker allowed the electronically masked field of view sizes to be moved on the seven window display of the dodecahedron. The dependent measures were: 1) Number of trials to reach criterion for low level flight tasks and dive bombs, 2) Performance measures of the low level flight route, 3) Performance measures of the dive bombing task, and 4) Subjective questionnaire data. Thirty male instructor pilots from Williams AFB, Arizona served as subjects for the study. The results revealed significant field of view effects for the number of trials required to reach criterion in the two smallest FOV conditions for right 180° turns and dive bomb training. The data also revealed pilots performed closer to the desired pitch angle for all but the two smallest conditions. The questionnaire data revealed that pilots felt their performance was degraded and they relied more on information from their instruments in the smaller field of view conditions. The conclusions of this study are that for tasks requiring close course adherence to a desired flight profile a minimum of 160° H X 80° V instantaneous field of view should be used for training. Future investigations into the instantaneous field of view size will be conducted to validate the results on other tactical tasks.
Testing optimum viewing conditions for mammographic image displays.
Waynant, R W; Chakrabarti, K; Kaczmarek, R A; Dagenais, I
1999-05-01
The viewbox luminance and viewing room light level are important parameters in medical film display, but these parameters have received little attention. Spatial variations and too much room illumination can mask a real signal or create the false perception of a signal. This presentation looks at whether scotopic light sources and dark-adapted radiologists may identify more real disease.
Display device-adapted video quality-of-experience assessment
NASA Astrophysics Data System (ADS)
Rehman, Abdul; Zeng, Kai; Wang, Zhou
2015-03-01
Today's viewers consume video content from a variety of connected devices, including smart phones, tablets, notebooks, TVs, and PCs. This imposes significant challenges for managing video traffic efficiently to ensure an acceptable quality-of-experience (QoE) for the end users as the perceptual quality of video content strongly depends on the properties of the display device and the viewing conditions. State-of-the-art full-reference objective video quality assessment algorithms do not take into account the combined impact of display device properties, viewing conditions, and video resolution while performing video quality assessment. We performed a subjective study in order to understand the impact of aforementioned factors on perceptual video QoE. We also propose a full reference video QoE measure, named SSIMplus, that provides real-time prediction of the perceptual quality of a video based on human visual system behaviors, video content characteristics (such as spatial and temporal complexity, and video resolution), display device properties (such as screen size, resolution, and brightness), and viewing conditions (such as viewing distance and angle). Experimental results have shown that the proposed algorithm outperforms state-of-the-art video quality measures in terms of accuracy and speed.
Looking forward: In-vehicle auxiliary display positioning affects carsickness.
Kuiper, Ouren X; Bos, Jelte E; Diels, Cyriel
2018-04-01
Carsickness is associated with a mismatch between actual and anticipated sensory signals. Occupants of automated vehicles, especially when using a display, are at higher risk of becoming carsick than drivers of conventional vehicles. This study aimed to evaluate the impact of the positioning of in-vehicle displays, and the subsequent available peripheral vision, on carsickness of passengers. We hypothesized that increased peripheral vision during display use would reduce carsickness. Seated in the front passenger seat, 18 participants were driven along a 15-min slalom course on two occasions while performing a continuous visual search task. The display was positioned either (1) at eye height in front of the windscreen, allowing peripheral view of the outside world, or (2) at the height of the glove compartment, allowing only limited view of the outside world. Motion sickness was reported at 1-min intervals. Using a display at windscreen height resulted in less carsickness compared to a display at glove compartment height. Copyright © 2017 Elsevier Ltd. All rights reserved.
Effect of Display Technology on Perceived Scale of Space.
Geuss, Michael N; Stefanucci, Jeanine K; Creem-Regehr, Sarah H; Thompson, William B; Mohler, Betty J
2015-11-01
Our goal was to evaluate the degree to which display technologies influence the perception of size in an image. Research suggests that factors such as whether an image is displayed stereoscopically, whether a user's viewpoint is tracked, and the field of view of a given display can affect users' perception of scale in the displayed image. Participants directly estimated the size of a gap by matching the distance between their hands to the gap width and judged their ability to pass unimpeded through the gap in one of five common implementations of three display technologies (two head-mounted displays [HMD] and a back-projection screen). Both measures of gap width were similar for the two HMD conditions and the back projection with stereo and tracking. For the displays without tracking, stereo and monocular conditions differed from each other, with monocular viewing showing underestimation of size. Display technologies that are capable of stereoscopic display and tracking of the user's viewpoint are beneficial as perceived size does not differ from real-world estimates. Evaluations of different display technologies are necessary as display conditions vary and the availability of different display technologies continues to grow. The findings are important to those using display technologies for research, commercial, and training purposes when it is important for the displayed image to be perceived at an intended scale. © 2015, Human Factors and Ergonomics Society.
On consistent inter-view synthesis for autostereoscopic displays
NASA Astrophysics Data System (ADS)
Tran, Lam C.; Bal, Can; Pal, Christopher J.; Nguyen, Truong Q.
2012-03-01
In this paper we present a novel stereo view synthesis algorithm that is highly accurate with respect to inter-view consistency, thus enabling stereo content to be viewed on autostereoscopic displays. The algorithm finds identical occluded regions within each virtual view and aligns them together to extract a surrounding background layer. The background layer for each occluded region is then used with an exemplar-based inpainting method to synthesize all virtual views simultaneously. Our algorithm requires the alignment and extraction of background layers for each occluded region; however, these two steps are done efficiently, with lower computational complexity than previous approaches using exemplar-based inpainting algorithms. Thus, it is more efficient than existing algorithms that synthesize one virtual view at a time. This paper also describes the implementation of a simplified GPU-accelerated version of the approach and its implementation in CUDA. Our CUDA method has sublinear complexity in terms of the number of views that need to be generated, which makes it especially useful for generating content for autostereoscopic displays that require many views to operate. An objective of our work is to allow the user to change depth and viewing perspective on the fly. Therefore, to further accelerate the CUDA variant of our approach, we present a modified version of our method that warps the background pixels from reference views to a middle view to recover background pixels. We then use an exemplar-based inpainting method to fill in the occluded regions. We use warping of the foreground from the reference images and of the background from the filled regions to synthesize new virtual views on the fly. Our experimental results indicate that the simplified CUDA implementation decreases running time by orders of magnitude with negligible loss in quality.
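The warping step that feeds the inpainting stage can be sketched as a per-pixel horizontal shift with a depth test, as below; this is a simplified illustration, not the authors' CUDA implementation, and the exemplar-based hole filling is omitted.

```python
# Simplified sketch of warping pixels from a reference view to a virtual view by
# per-pixel disparity with a depth test; only the warping step is shown (the
# exemplar-based inpainting of the remaining holes is omitted), and this is not
# the authors' CUDA implementation.
import numpy as np

def warp_view(image, disparity, alpha):
    """Shift each pixel horizontally by alpha * disparity; nearer pixels win."""
    h, w = disparity.shape
    out = np.zeros_like(image)
    depth = np.full((h, w), -np.inf)      # keep the largest disparity (nearest)
    for y in range(h):
        for x in range(w):
            xt = int(round(x + alpha * disparity[y, x]))
            if 0 <= xt < w and disparity[y, x] > depth[y, xt]:
                depth[y, xt] = disparity[y, x]
                out[y, xt] = image[y, x]
    return out, depth == -np.inf          # second output marks holes to inpaint

# Toy example: a 4x8 gray image with a "foreground" stripe of larger disparity.
img = np.full((4, 8), 50.0)
img[:, 3:5] = 200.0
disp = np.where(img > 100.0, 2.0, 0.0)
virt, holes = warp_view(img, disp, alpha=0.5)
print(virt, holes, sep="\n")
```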
Reijntjes, Albert; Thomaes, Sander; Kamphuis, Jan Henk; de Castro, Bram Orobio; Telch, Michael J.
2010-01-01
Research among adults has consistently shown that people holding negative self-views prefer negative over positive feedback. The present study tested the hypothesis that this preference is less robust among pre-adolescents, such that it will be mitigated by a preceding positive event. Pre-adolescents (n = 75) holding positive or negative global self-esteem were randomized to a favorable or unfavorable peer evaluation outcome. Next, preferences for positive versus negative feedback were assessed using an unobtrusive behavioral viewing time measure. As expected, results showed that after being faced with the success outcome children holding negative self-views were as likely as their peers holding positive self-views to display a significant preference for positive feedback. In contrast, children holding negative self-views displayed a stronger preference for negative feedback after being faced with the unfavorable outcome that matched their pre-existing self-views. PMID:21151482
Evaluating Education and Science in the KSC Visitor Complex Exhibits
NASA Technical Reports Server (NTRS)
Erickson, Lance K.
2000-01-01
The continuing development of exhibits at the Kennedy Space Center's Visitor Complex is an excellent opportunity for NASA personnel to promote science and provide insight into NASA programs and projects for the approximately 3 million visitors that come to KSC annually. Stated goals for the Visitor Complex, in fact, emphasize science awareness and recommend broadening the appeal of the displays and exhibits for all age groups. To this end, this summer project seeks to evaluate the science content of planned exhibits/displays in relation to these developing opportunities and identify specific areas for enhancement of existing or planned exhibits and displays. To help expand the educational and science content within the developing exhibits at the Visitor Complex, this project was structured to implement the goals of the Visitor Center Director. To accomplish this, the exhibits and displays planned for completion within the year underwent review and evaluation for science content and educational direction. Planning emphasis for the individual displays was directed at combining the elements of effective education with fundamental scientific integrity, within an appealing format.
NASA Technical Reports Server (NTRS)
Marmolejo, Jose (Inventor); Smith, Stephen (Inventor); Plough, Alan (Inventor); Clarke, Robert (Inventor); Mclean, William (Inventor); Fournier, Joseph (Inventor)
1990-01-01
A helmet mounted display device is disclosed for projecting a display on a flat combiner surface located above the line of sight where the display is produced by two independent optical channels with independent LCD image generators. The display has a fully overlapped field of view on the combiner surface and the focus can be adjusted from a near field of four feet to infinity.
NASA Astrophysics Data System (ADS)
D'Haene, Nicky; Maris, Calliope; Rorive, Sandrine; Moles Lopez, Xavier; Rostang, Johan; Marchessoux, Cédric; Pantanowitz, Liron; Parwani, Anil V.; Salmon, Isabelle
2013-03-01
User experience with viewing images in pathology is crucial for accurate interpretation and diagnosis. With digital pathology, images are read on a display system, and this poses new types of questions, such as what the differences are in terms of pixelation, refresh lag, or obscured features compared to an optical microscope, and whether there is a resultant change in user performance in terms of speed of slide review, perception of adequacy and quality, or diagnostic confidence. A prior psychophysical study comparing various display modalities for whole slide imaging (WSI) in pathology was carried out at the University of Pittsburgh Medical Center (UPMC) in the USA. This prior study compared professional and non-professional grade display modalities and highlighted the importance of using a medical grade display to view pathological digital images. This study was duplicated in Europe at the Department of Pathology in Erasme Hospital (Université Libre de Bruxelles (ULB)) in an attempt to corroborate these findings. Digital WSI with corresponding glass slides of 58 cases, including surgical pathology and cytopathology slides of varying difficulty, were employed. Similar non-professional and professional grade display modalities were compared to an optical microscope (Olympus BX51). Displays ranged from a laptop (DELL Latitude D620), to a consumer grade display (DELL E248WFPb), to two professional grade monitors (Eizo CG245W and Barco MDCC-6130). Three pathologists were selected from the Department of Pathology in Erasme Hospital (ULB) in Belgium to view and interpret the pathological images on these different displays. The results show that the non-professional grade displays (laptop and consumer) provide an inferior user experience compared to the professional grade monitors and the optical microscope.
Hwang, Alex D.; Peli, Eli
2014-01-01
Watching 3D content using a stereoscopic display may cause various discomforting symptoms, including eye strain, blurred vision, double vision, and motion sickness. Numerous studies have reported motion-sickness-like symptoms during stereoscopic viewing, but no causal linkage between specific aspects of the presentation and the induced discomfort has been explicitly proposed. Here, we describe several causes, in which stereoscopic capture, display, and viewing differ from natural viewing resulting in static and, importantly, dynamic distortions that conflict with the expected stability and rigidity of the real world. This analysis provides a basis for suggested changes to display systems that may alleviate the symptoms, and suggestions for future studies to determine the relative contribution of the various effects to the unpleasant symptoms. PMID:26034562
Alam, Md Ashraful; Piao, Mei-Lan; Bang, Le Thanh; Kim, Nam
2013-10-01
Viewing-zone control of integral imaging (II) displays using a directional projection and elemental image (EI) resizing method is proposed. Directional projection of EIs with the same size as the microlens pitch causes an EI mismatch at the EI plane. In this method, EIs are generated computationally using a newly introduced algorithm, the directional elemental image generation and resizing algorithm, which considers the directional projection geometry of each pixel and applies an EI resizing step to prevent the EI mismatch. Generated EIs are projected as a collimated projection beam with a predefined directional angle, either horizontally or vertically. The proposed II display system allows reconstruction of a 3D image within a predefined viewing zone that is determined by the directional projection angle.
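As an illustration only (not the authors' algorithm), the resizing step can be pictured with a simplified geometry: a collimated beam tilted by an angle theta from the screen normal stretches its footprint by 1/cos(theta) along the tilt direction, so the projected elemental image can be pre-scaled by cos(theta) to keep its footprint matched to the microlens pitch. A minimal Python sketch of that assumed relation:

import math

def resized_ei_pitch(lens_pitch_mm: float, projection_angle_deg: float) -> float:
    """Pre-scaled elemental-image pitch so that an obliquely projected EI
    still lands with a footprint equal to the microlens pitch.

    Simplified geometry only: a collimated beam tilted by projection_angle_deg
    from the screen normal stretches its footprint by 1/cos(angle) along the
    tilt direction, so the projected EI is pre-shrunk by cos(angle)."""
    theta = math.radians(projection_angle_deg)
    return lens_pitch_mm * math.cos(theta)

# Example with assumed numbers: 1.0 mm microlens pitch, 20-degree projection
print(resized_ei_pitch(1.0, 20.0))  # ~0.94 mm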
Display gamma is an important factor in Web image viewing
NASA Astrophysics Data System (ADS)
Zhang, Xuemei; Lavin, Yingmei; Silverstein, D. Amnon
2001-06-01
We conducted a perceptual image preference experiment over the web to find out (1) whether typical computer users have significant variations in their display gamma settings, and (2) if so, whether the gamma settings have a significant perceptual effect on the appearance of images in their web browsers. The digital image renderings used were found to have preferred tone characteristics in a previous lab-controlled experiment. They were rendered with 4 different gamma settings. The subjects were asked to view the images over the web, with their own computer equipment and web browsers, and made pair-wise subjective preference judgements on which rendering they liked best for each image. Each subject's display gamma setting was estimated using a 'gamma estimator' tool, implemented as a Java applet. The results indicated that (1) the users' gamma settings, as estimated in the experiment, span a wide range from about 1.8 to about 3.0; and (2) the subjects preferred images that were rendered with a 'correct' gamma value matching their display setting, and disliked images rendered with a gamma value not matching their displays'. This indicates that display gamma estimation is a perceptually significant factor in web image optimization.
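As a hedged illustration of the rendering step described above (not the authors' applet or their four actual gamma values), a linear-light image can be re-encoded for an assumed power-law display so that a display with the matching gamma reproduces the intended tones:

import numpy as np

def encode_for_display(linear_image: np.ndarray, display_gamma: float) -> np.ndarray:
    """Encode a linear-light image (values in [0, 1]) so that a display
    following out = in**display_gamma reproduces the intended luminance."""
    return np.clip(linear_image, 0.0, 1.0) ** (1.0 / display_gamma)

# Render the same linear test image for several assumed display gammas,
# mimicking the idea of gamma-specific renderings used in the experiment.
linear = np.linspace(0.0, 1.0, 256)          # a simple gray ramp
renderings = {g: encode_for_display(linear, g) for g in (1.8, 2.2, 2.6, 3.0)}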
Using the Plan View to Teach Basic Crystallography in General Chemistry
ERIC Educational Resources Information Center
Cushman, Cody V.; Linford, Matthew R.
2015-01-01
The plan view is used in crystallography and materials science to show the positions of atoms in crystal structures. However, it is not widely used in teaching general chemistry. In this contribution, we introduce the plan view, and show these views for the simple cubic, body-centered cubic, face-centered cubic, hexagonal close packed, CsCl, NaCl,…
Heading perception in patients with advanced retinitis pigmentosa
NASA Technical Reports Server (NTRS)
Li, Li; Peli, Eli; Warren, William H.
2002-01-01
PURPOSE: We investigated whether retinitis pigmentosa (RP) patients with residual visual field of < 100 degrees could perceive heading from optic flow. METHODS: Four RP patients and four age-matched normally sighted control subjects viewed displays simulating an observer walking over a ground. In experiment 1, subjects viewed either the entire display with free fixation (full-field condition) or through an aperture with a fixation point at the center (aperture condition). In experiment 2, patients viewed displays of different durations. RESULTS: RP patients' performance was comparable to that of the age-matched control subjects: heading judgment was better in the full-field condition than in the aperture condition. Increasing display duration from 0.5 s to 1 s improved patients' heading performance, but giving them more time (3 s) to gather more visual information did not consistently further improve their performance. CONCLUSIONS: RP patients use active scanning eye movements to compensate for their visual field loss in heading perception; they might be able to gather sufficient optic flow information for heading perception in about 1 s.
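For context, heading from optic flow is commonly estimated as the focus of expansion of the flow field. The sketch below is not the study's stimulus or analysis code; it is a generic least-squares estimate assuming pure observer translation, with synthetic flow vectors:

import numpy as np

def focus_of_expansion(points: np.ndarray, flows: np.ndarray) -> np.ndarray:
    """Least-squares focus of expansion (heading point) from optic flow.

    points: (N, 2) image positions; flows: (N, 2) flow vectors at those points.
    For pure translation every flow vector points away from the FOE, so the
    FOE minimizes the perpendicular distance to every line (point, flow dir)."""
    d = flows / np.linalg.norm(flows, axis=1, keepdims=True)   # unit directions
    n = np.stack([-d[:, 1], d[:, 0]], axis=1)                  # perpendiculars
    A = n                                                      # N x 2
    b = np.sum(n * points, axis=1)                             # N
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe

# Synthetic check: flow radiating from a heading point at (10, -5)
rng = np.random.default_rng(0)
pts = rng.uniform(-50, 50, size=(200, 2))
flow = pts - np.array([10.0, -5.0])
print(focus_of_expansion(pts, flow))   # ~[10, -5]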
103. View of transmitter building no. 102, missile warning operation ...
103. View of transmitter building no. 102, missile warning operation center, overall view of center in operation with staff at consoles. Note defcon (defense condition) display panel (upper right) showing "simulated status" activity level. Also note fiber optic display panel at upper right-center. Official photograph BMEWS Project by Hansen 30 September, 1976, clear as negative no. A-14568. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK
2009-12-01
Acronym list fragment: forward-looking infrared; FOV, field-of-view; HDU, helmet display unit; HMD, helmet-mounted display; IHADSS, Integrated Helmet and Display Sighting System. Report text fragment: "...monocular Integrated Helmet and Display Sighting System (IHADSS) helmet-mounted display (HMD) in the British Army's Apache AH Mk 1 attack helicopter has any..." Keywords: Integrated Helmet and Display Sighting System, IHADSS, helmet-mounted display, HMD, Apache helicopter, visual performance.
NASA Astrophysics Data System (ADS)
To, T.; Nguyen, D.; Tran, G.
2015-04-01
Vietnam's heritage system has declined because of poor conservation conditions. Sustainable development requires firm control, spatial planning, and reasonable investment. Moreover, in the field of Cultural Heritage, automated photogrammetric systems based on Structure from Motion (SfM) techniques are widely used. With the potential of high resolution, low cost, a large field of view, ease of use, rapidity, and completeness, the derivation of 3D metric information from Structure-and-Motion images is receiving great attention. In addition, heritage objects in the form of 3D physical models are recorded not only for documentation purposes, but also for historical interpretation, restoration, and cultural and educational purposes. The study presents the archaeological documentation of the One Pillar Pagoda in Hanoi, Vietnam. The data were acquired with a Canon EOS 550D digital camera (CMOS APS-C sensor, 22.3 x 14.9 mm). Camera calibration and orientation were carried out with VisualSFM, CMPMVS (Multi-View Reconstruction), and SURE (Photogrammetric Surface Reconstruction from Imagery) software. The final result is a scaled 3D model of the One Pillar Pagoda, displayed from different views in MeshLab software.
NASA Technical Reports Server (NTRS)
Schwartz, Richard J.; Fleming, Gary A.
2007-01-01
Virtual Diagnostics Interface technology, or ViDI, is a suite of techniques utilizing image processing, data handling, and three-dimensional computer graphics. These techniques aid in the design, implementation, and analysis of complex aerospace experiments. LiveView3D is a software application component of ViDI used to display experimental wind tunnel data in real time within an interactive, three-dimensional virtual environment. The LiveView3D software application has been under development at NASA Langley Research Center (LaRC) for nearly three years. LiveView3D was recently upgraded to perform real-time (as well as post-test) comparisons of experimental data with pre-computed Computational Fluid Dynamics (CFD) predictions. This capability was utilized to compare experimental measurements with CFD predictions of the surface pressure distribution of a NASA Ares I Crew Launch Vehicle (CLV)-like vehicle tested in the NASA LaRC Unitary Plan Wind Tunnel (UPWT) in the December 2006 to January 2007 timeframe. The wind tunnel tests were conducted to develop a database of experimentally measured aerodynamic performance of the CLV-like configuration for validation of CFD predictive codes.
Visual Costs of the Inhomogeneity of Luminance and Contrast by Viewing LCD-TFT Screens Off-Axis.
Ziefle, Martina; Groeger, Thomas; Sommer, Dietmar
2003-01-01
In this study the anisotropic characteristics of TFT-LCD (Thin-Film-Transistor Liquid Crystal Display) screens were examined. Anisotropy occurs as the distribution of luminance and contrast changes over the screen surface due to different viewing angles. On the basis of detailed photometric measurements, detection performance in a visual reaction task was measured under different viewing conditions. Viewing angle (0 degrees, frontal view; 30 degrees, off-axis; 50 degrees, off-axis) as well as ambient lighting (a dark or illuminated room) were varied. Reaction times and accuracy of detection performance were recorded. Results showed the TFT's anisotropy to be a crucial factor deteriorating performance: with an increasing viewing angle, performance decreased. It is concluded that TFT anisotropy is a limiting factor for the overall suitability and usefulness of this new display technology.
NASA Astrophysics Data System (ADS)
Gao, Xin; Sang, Xinzhu; Yu, Xunbo; Zhang, Wanlu; Yan, Binbin; Yu, Chongxiu
2018-06-01
The floating 3D display system based on a Tessar array and a directional diffuser screen is proposed. The directional diffuser screen smooths the gaps of the lens array and makes the 3D image's brightness continuous. The optical structure and aberration characteristics of the floating three-dimensional (3D) display system are analyzed. Simulation and experiment are carried out, which show that 3D image quality deteriorates as the distance of the image plane and the viewing angle increase. To suppress the aberrations, the Tessar array is proposed according to the aberration characteristics of the floating 3D display system. A 3840 × 2160 liquid crystal display panel (LCD) with a size of 23.6 inches, a directional diffuser screen, and a Tessar array are used to display the final 3D images. The aberrations are reduced and the definition is improved compared with that of a display using a single-lens array. A display depth of more than 20 cm and a viewing angle of more than 45° are achieved.
Evaluation of Equivalent Vision Technologies for Supersonic Aircraft Operations
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Williams, Steven P.; Wilz, Susan P.; Arthur, Jarvis J., III; Bailey, Randall E.
2009-01-01
Twenty-four air transport-rated pilots participated as subjects in a fixed-base simulation experiment to evaluate the use of Synthetic/Enhanced Vision (S/EV) and eXternal Vision System (XVS) technologies as enabling technologies for future all-weather operations. Three head-up flight display concepts were evaluated: a monochromatic, collimated Head-Up Display (HUD) and color, non-collimated XVS displays with a field-of-view (FOV) equal to, and also significantly larger than, that of the collimated HUD. Approach, landing, departure, and surface operations were conducted. Additionally, the apparent angle-of-attack (AOA) was varied (high/low) to investigate the vertical field-of-view display requirements, and peripheral, side window visibility was experimentally varied. The data showed that lateral approach tracking performance and lateral landing position were excellent regardless of the display type and AOA condition being evaluated or whether or not there were peripheral cues in the side windows. Longitudinal touchdown and glideslope tracking were affected by the display concepts. Larger FOV display concepts showed improved longitudinal touchdown control, superior glideslope tracking, significant situation awareness improvements, and workload reductions compared to smaller FOV display concepts.
Ultrahigh-definition dynamic 3D holographic display by active control of volume speckle fields
NASA Astrophysics Data System (ADS)
Yu, Hyeonseung; Lee, Kyeoreh; Park, Jongchan; Park, Yongkeun
2017-01-01
Holographic displays generate realistic 3D images that can be viewed without the need for any visual aids. They operate by generating carefully tailored light fields that replicate how humans see an actual environment. However, the realization of high-performance, dynamic 3D holographic displays has been hindered by the capabilities of present wavefront modulator technology. In particular, spatial light modulators have a small diffraction angle range and a limited pixel count, restricting the viewing angle and image size of a holographic 3D display. Here, we present an alternative method to generate dynamic 3D images by controlling volume speckle fields, significantly enhancing image definition. We use this approach to demonstrate a dynamic display of micrometre-sized optical foci in a volume of 8 mm × 8 mm × 20 mm.
Design of a projection display screen with vanishing color shift for rear-projection HDTV
NASA Astrophysics Data System (ADS)
Liu, Xiu; Zhu, Jin-lin
1996-09-01
Using bi-convex cylindrical lenses in a matrix structure, transmissive projection display screens with high contrast and wide viewing angle have been widely used in large rear-projection TVs and video projectors, but they exhibit an inherent color shift that complicates the in-line adjustment of the RGB projection tubes for the screen designer. Based on a light-beam tracing method, general software for designing projection display screens has been developed, and a computer model of a vanishing-color-shift screen for rear-projection HDTV has been completed. This paper discusses the practical design method for eliminating the color-shift defect and describes the relations between the primary optical parameters of the display screen and the relative geometric sizes of the lens surfaces. The distribution of optical gain versus viewing angle and its influence on engineering design are briefly analyzed.
SU-F-J-72: A Clinical Usable Integrated Contouring Quality Evaluation Software for Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, S; Dolly, S; Cai, B
Purpose: To introduce the Auto Contour Evaluation (ACE) software, a clinically usable, user-friendly, efficient, all-in-one toolbox for automatically identifying common contouring errors in radiotherapy treatment planning using supervised machine learning techniques. Methods: ACE is developed in C# using the Microsoft .NET framework and Windows Presentation Foundation (WPF) for elegant GUI design and smooth GUI transition animations, through the integration of graphics engines and high dots-per-inch (DPI) settings on modern high-resolution monitors. The industry-standard software design pattern, Model-View-ViewModel (MVVM), is chosen as the major architecture of ACE for a clean coding structure, deep modularization, easy maintainability, and seamless communication with other clinical software. ACE consists of 1) a patient data importing module integrated with the clinical patient database server, 2) a module that simultaneously displays 2D DICOM images and RT structures, 3) a 3D RT structure visualization module using the Visualization Toolkit (VTK) library, and 4) a contour evaluation module using supervised pattern recognition algorithms to detect contouring errors and display the detection results. ACE relies on supervised learning algorithms to handle all image-processing and data-processing jobs. Implementations of the related algorithms are powered by the Accord.NET scientific computing library for better efficiency and effectiveness. Results: ACE can take a patient's CT images and RT structures from commercial treatment planning software via direct user input or from the patient database. All functionalities, including 2D and 3D image visualization and RT contour error detection, have been demonstrated with real clinical patient cases. Conclusion: ACE implements supervised learning algorithms and combines image processing and graphical visualization modules for RT contour verification. ACE has great potential for automated radiotherapy contouring quality verification. Structured with the MVVM pattern, it is highly maintainable and extensible, and supports smooth connections with other clinical software tools.
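ACE itself is a C#/.NET application built on Accord.NET; purely as a language-neutral sketch of the supervised contour-error detection idea (scikit-learn stands in for Accord.NET, and the geometric features and labels are hypothetical):

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-structure geometric features extracted from an RT contour,
# e.g. volume, surface area, centroid offset from an atlas, slice-to-slice
# overlap; labels mark contours flagged as erroneous during expert review.
X_train = np.array([
    [120.0, 98.0, 2.1, 0.92],
    [118.0, 95.0, 2.3, 0.91],
    [ 40.0, 60.0, 9.8, 0.55],   # e.g. a mislabeled or truncated contour
])
y_train = np.array([0, 0, 1])   # 0 = acceptable, 1 = likely contouring error

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

new_contour_features = np.array([[119.0, 97.0, 2.0, 0.93]])
print(clf.predict(new_contour_features))        # 0 -> no error flagged
print(clf.predict_proba(new_contour_features))  # confidence a GUI could display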
Analysis of Crosstalk in 3D Circularly Polarized LCDs Depending on the Vertical Viewing Location.
Zeng, Menglin; Nguyen, Truong Q
2016-03-01
Crosstalk in circularly polarized (CP) liquid crystal displays (LCDs) with polarized glasses (passive 3D glasses) is mainly caused by two factors: 1) the polarizing system, including the wave retarders, and 2) the vertical misalignment (VM) of light between the LC module and the patterned retarder. We show that the latter, which is highly dependent on the vertical viewing location, is a much more significant factor of crosstalk in CP LCDs than the former. There are three contributions in this paper. First, a display model for CP LCDs that accurately characterizes VM is proposed. Second, a novel display calibration method for VM characterization is presented that only requires pictures of the screen taken at four viewing locations. Third, we prove that the VM-based crosstalk cannot be efficiently reduced by either preprocessing the input images or optimizing the polarizing system. Furthermore, we derive the analytic solution for the viewing zone in which the entire screen is free of VM-based crosstalk.
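The VM mechanism can be illustrated with a deliberately crude geometry that is not the paper's display model or its analytic viewing-zone solution: if an emitting pixel row sits a glass gap g behind retarder stripes of the same pitch p, light aimed at a vertical angle theta reaches the neighboring stripe once its lateral offset g*tan(theta) exceeds about p/2. A toy sketch under those assumptions:

import math

def vm_crosstalk_onset_angle(row_pitch_mm: float, glass_gap_mm: float) -> float:
    """Vertical viewing angle (degrees) at which light from a pixel row starts
    passing through the neighboring retarder stripe, in a toy geometry where
    the stripe pitch equals the pixel-row pitch and the row is centered on its
    stripe.  Real CP LCDs also involve refraction and black-matrix margins,
    which this sketch ignores."""
    return math.degrees(math.atan((row_pitch_mm / 2.0) / glass_gap_mm))

# Assumed toy numbers, not taken from the paper: 0.3 mm row pitch, 0.5 mm gap
print(vm_crosstalk_onset_angle(0.3, 0.5))   # ~16.7 degrees above/below center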
How long do people look and listen to forest-oriented exhibits?
James William Shiner; Elwood L., Jr. Shafer
1975-01-01
To gain a better understanding of public reaction to I & E displays, average visitor-viewing time was measured for a variety of exhibits at the Adirondack Museum, Blue Mountain Lake, N.Y. Visitors viewed displays 15 to 64 percent of the time required to read or listen to the total message presented. The longer the message per exhibit, the less time was spent...
2006-09-01
application with the aim of finding an affordable display with acceptable resolution and field of view (5DT, Cyvisor, eMagin). The HMD that was chosen was the... eMagin z800, which contains OLED displays capable of 800x600 (SVGA) resolution with a 40-degree diagonal field of view (http://www.emagin.com
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1996-12-01
EPA Region III has assembled on this CD a multitude of environmental data, in both visual and textual formats. While targeted for Facility Response Planning under the Oil Pollution Act of 1990, this information will prove helpful to anyone in the environmental arena. Specifically, the CD will aid contingency planning and emergency response personnel. Combining innovative GIS technology with EPA's state-specific data allows you to display maps, find and identify map features, look at tabular information about map features, and print out maps. The CD was designed to be easy to use and incorporates example maps as well as help sections describing the use of the environmental data on the CD, and introduces you to the IACP Viewer and its capabilities. These help features will make it easy for you to conduct analysis, produce maps, and browse the IACP Plan. The IACP data are included in two formats: shapefiles, which can be viewed with the IACP Viewer or ESRI's ArcView software (Version 2.1 or higher), and ARC/INFO export files, which can be imported into ARC/INFO or converted to other GIS data formats. Point Data Sources: Sensitive Areas, Surface Drinking Water Intakes, Groundwater Intakes, Groundwater Supply Facilities, NPL (National Priority List) Sites, FRP (Facility Response Plan) Facilities, NPDES (National Pollutant Discharge Elimination System) Facilities, Hospitals, RCRA (Resource Conservation and Recovery Act) Sites, TRI (Toxic Release Inventory) Sites, CERCLA (Comprehensive Environmental Response, Compensation, and Liability Act) Sites. Line Data Sources: TIGER Roads, TIGER Railroads, TIGER Hydrography, Pipelines. Polygon Data Sources: State Boundaries, County Boundaries, Watershed Boundaries (8-digit HUC), TIGER Hydrography, Public Lands, Populated Places, IACP Boundaries, Coast Guard Boundaries, Forest Types, US Congressional Districts, One-half Mile Buffer of Surface Drinking Water Intakes.
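The shapefile layers described above predate modern tooling, but data in that format can still be loaded and mapped with current libraries; a brief sketch using geopandas (the layer file names are hypothetical, not the actual file names on the CD):

import geopandas as gpd
import matplotlib.pyplot as plt

# Hypothetical layer names standing in for the CD's point and polygon sources.
intakes = gpd.read_file("surface_drinking_water_intakes.shp")
counties = gpd.read_file("county_boundaries.shp")

# Inspect the attribute table and plot the point layer over county outlines.
print(intakes.head())
ax = counties.plot(edgecolor="gray", facecolor="none")
intakes.plot(ax=ax, markersize=5)
plt.show()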
Supervisory Presentation for Research, Information, Integration and Testing (SPRINT)
2015-03-29
autonomous UAVs in subsequent tests. The Vigilant Spirit Control Station (VSCS) is a test bed designed by the Air Force Research Laboratory for studying... VSCS has tactical situation displays (i.e., geo-spatial maps), vehicle status displays, route planning interfaces for creating vehicle flight plans...is considered one of those novel displays; Figure 2). The model builder software was integrated into the VSCS that constructs a mission model that is
NASA Technical Reports Server (NTRS)
Wichmann, Benjamin C.
2013-01-01
I work directly with the System Monitoring and Control (SMC) software engineers who develop, test, and release custom and commercial software in support of the Kennedy Space Center Spaceport Command and Control System (SCCS). SMC uses Commercial Off-The-Shelf (COTS) Enterprise Management Systems (EMS) software, which provides a centralized subsystem for configuring, monitoring, and controlling SCCS hardware and software used in the Control Rooms. There are multiple projects being worked on using the COTS EMS software. I am currently working with the HP Operations Manager for UNIX (OMU) software, which allows Master Console Operators (MCO) to access, view, and interpret messages regarding the status of the SCCS hardware and software. The OMU message browser gets cluttered with messages, which can make it difficult for the MCO to manage. My main project involves determining ways to reduce the number of messages being displayed in the OMU message browser. I plan to accomplish this task in two ways: (1) by correlating multiple messages into a single displayed message and (2) by creating policies that determine the significance of each message and whether or not it needs to be displayed to the MCO. The core idea is to lessen the number of messages being sent to the OMU message browser so the MCO can use it more effectively.
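The two reduction strategies described above can be sketched generically; the snippet below is not the OMU policy language, and the message fields, severities, and thresholds are illustrative assumptions:

from collections import defaultdict

# Hypothetical message structure: (source, severity, text)
messages = [
    ("gateway01", "minor",    "heartbeat missed"),
    ("gateway01", "minor",    "heartbeat missed"),
    ("gateway01", "critical", "link down"),
    ("fileserver", "normal",  "nightly sync complete"),
]

DISPLAY_SEVERITIES = {"critical", "major"}   # policy: what the operator must see

def reduce_messages(msgs):
    """Correlate duplicate messages per source and drop low-significance ones."""
    counts = defaultdict(int)
    for source, severity, text in msgs:
        counts[(source, severity, text)] += 1
    reduced = []
    for (source, severity, text), n in counts.items():
        if severity in DISPLAY_SEVERITIES or n >= 3:   # example policy thresholds
            suffix = f" (x{n})" if n > 1 else ""
            reduced.append(f"{source}: [{severity}] {text}{suffix}")
    return reduced

print(reduce_messages(messages))   # only the critical 'link down' is displayed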
Binocular device for displaying numerical information in field of view
NASA Technical Reports Server (NTRS)
Fuller, H. V. (Inventor)
1977-01-01
An apparatus is described for superimposing numerical information on the field of view of binoculars. The invention has application in the flying of radio-controlled model airplanes. Information such as airspeed and angle of attack is sensed on a model airplane and transmitted back to earth, where this information is changed into numerical form. Optical means are attached to the binoculars that a pilot is using to track the model airplane for displaying the numerical information in the field of view of the binoculars. The device includes means for focusing the numerical information at infinity whereby the user of the binoculars can see both the field of view and the numerical information without refocusing his eyes.
Web-based CERES Clouds QC Property Viewing Tool
NASA Astrophysics Data System (ADS)
Smith, R. A.; Chu, C.; Sun-Mack, S.; Chen, Y.; Heckert, E.; Minnis, P.
2014-12-01
This presentation will display the capabilities of a web-based CERES cloud property viewer. Terra data will be chosen for examples. It will demonstrate viewing of cloud properties in gridded global maps, histograms, time series displays, latitudinal zonal images, binned data charts, data frequency graphs, and ISCCP plots. Images can be manipulated by the user to narrow boundaries of the map as well as color bars and value ranges, compare datasets, view data values, and more. Other atmospheric studies groups will be encouraged to put their data into the underlying NetCDF data format and view their data with the tool. A laptop will hopefully be available to allow conference attendees to try navigating the tool.
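Since the tool encourages other groups to adopt the underlying NetCDF format, a short sketch shows how such gridded data could be opened and displayed outside the web tool (file and variable names are hypothetical, not actual CERES product names):

import xarray as xr
import matplotlib.pyplot as plt

# Hypothetical file and variable names; actual CERES products differ.
ds = xr.open_dataset("ceres_terra_cloud_properties.nc")
cloud_fraction = ds["cloud_fraction"]          # dims assumed: (time, lat, lon)

# A gridded global map for one time step, plus a latitudinal zonal mean.
fig, (ax_map, ax_zonal) = plt.subplots(1, 2, figsize=(10, 4))
cloud_fraction.isel(time=0).plot(ax=ax_map)
cloud_fraction.isel(time=0).mean(dim="lon").plot(ax=ax_zonal)
plt.tight_layout()
plt.show()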
NASA Astrophysics Data System (ADS)
Lee, Seokhee; Lee, Kiyoung; Kim, Man Bae; Kim, JongWon
2005-11-01
In this paper, we propose a design of a multi-view stereoscopic HD video transmission system based on MPEG-21 Digital Item Adaptation (DIA). It focuses on compatibility and scalability to meet various user preferences and terminal capabilities. There exists a large variety of multi-view 3D HD video types according to the methods for acquisition, display, and processing. By following the MPEG-21 DIA framework, the multi-view stereoscopic HD video is adapted according to user feedback. A user can be served multi-view stereoscopic video that corresponds with his or her preferences and terminal capabilities. In our preliminary prototype, we verify that the proposed design can support two different types of display device (stereoscopic and auto-stereoscopic) and switching between two available viewpoints.
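The adaptation decision can be sketched conceptually; the snippet below is not MPEG-21 DIA syntax, and the capability and preference field names are assumptions:

def select_adaptation(terminal: dict, preference: dict) -> str:
    """Pick a video variant from terminal capabilities and user preferences.

    terminal and preference are stand-ins for MPEG-21 DIA usage-environment
    and user-characteristics descriptions (field names are hypothetical)."""
    if not terminal.get("supports_3d", False):
        return "2d_single_view"
    if terminal.get("display_type") == "auto-stereoscopic":
        return f"multi_view_{preference.get('viewpoint', 0)}"
    # stereoscopic display viewed with polarized or shutter glasses
    return f"stereo_pair_{preference.get('viewpoint', 0)}"

print(select_adaptation({"supports_3d": True, "display_type": "auto-stereoscopic"},
                        {"viewpoint": 1}))    # -> multi_view_1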
NASA Astrophysics Data System (ADS)
Samsuri, Norlyiana; Reza, Faruque; Begum, Tahamina; Yusoff, Nasir; Idris, Badrisyah; Omar, Hazim; Isa, Salmi Mohd
2016-10-01
This study focused on which advertisement display design attracts the most attention, measured via cognitive responses and gaze behavior. A total of 15 subjects were recruited from among USM undergraduate medical students. The event-related potential (ERP), as a cognitive response during viewing of the different display designs, was recorded from 17 electrode sites using a 128-electrode sensor net applied to the subject's scalp according to the 10-20 international electrode placement system. The amplitudes of the evoked N100 and P300 ERP components were identified. To determine statistical significance, amplitude data were analyzed using a one-way ANOVA test and reaction time was analyzed using an independent t-test. Two of the 15 subjects who participated in the ERP recording also underwent eye tracking to measure fixation duration, pupil size, and attention maps of eye movement as gaze behavioral responses to the different display designs. The ERP and gaze behavior results were consistent. Higher amplitudes of the N100 and P300 ERP components during the RLG view showed that participants had greater visual selective attention and visual cognitive processing during visual presentation of the RLG view. Visual interpretation of the attention maps, together with the fixation duration and pupil size gaze behavior data from the two case studies, revealed that the RLG view attracted more attention than its counterpart. With regard to color as a confounder, the gaze performance data from the two cases revealed an interesting finding: both subjects showed a common interest in red during both the LLG and RLG views, indicating that color may play a distinct role in display design. The findings of this research provide important information for marketers designing advertisements to be cost-effective within limited advertising space. In that regard, the RLG view is the display design to prioritize.
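The statistical analysis described (one-way ANOVA on ERP amplitudes, independent t-test on reaction times) follows a standard pattern; a sketch with synthetic amplitude and reaction-time arrays standing in for the recorded LLG and RLG data:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic P300 amplitudes (microvolts) for the two display designs mentioned
# in the abstract (LLG and RLG views); real values come from the ERP recordings.
p300_llg = rng.normal(4.0, 1.0, 15)
p300_rlg = rng.normal(5.5, 1.0, 15)   # hypothetically larger for the RLG view

f_stat, p_value = stats.f_oneway(p300_llg, p300_rlg)
print(f"one-way ANOVA on P300 amplitude: F={f_stat:.2f}, p={p_value:.4f}")

# Reaction times compared with an independent-samples t-test.
rt_llg = rng.normal(520, 40, 15)
rt_rlg = rng.normal(480, 40, 15)
t_stat, p_rt = stats.ttest_ind(rt_llg, rt_rlg)
print(f"independent t-test on reaction time: t={t_stat:.2f}, p={p_rt:.4f}")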
Ragan, Eric D; Scerbo, Siroberto; Bacim, Felipe; Bowman, Doug A
2017-08-01
Many types of virtual reality (VR) systems allow users to use natural, physical head movements to view a 3D environment. In some situations, such as when using systems that lack a fully surrounding display or when opting for convenient low-effort interaction, view control can be enabled through a combination of physical and virtual turns to view the environment, but the reduced realism could potentially interfere with the ability to maintain spatial orientation. One solution to this problem is to amplify head rotations such that smaller physical turns are mapped to larger virtual turns, allowing trainees to view the entire surrounding environment with small head movements. This solution is attractive because it allows semi-natural physical view control rather than requiring complete physical rotations or a fully-surrounding display. However, the effects of amplified head rotations on spatial orientation and many practical tasks are not well understood. In this paper, we present an experiment that evaluates the influence of amplified head rotation on 3D search, spatial orientation, and cybersickness. In the study, we varied the amount of amplification and also varied the type of display used (head-mounted display or surround-screen CAVE) for the VR search task. By evaluating participants first with amplification and then without, we were also able to study training transfer effects. The findings demonstrate the feasibility of using amplified head rotation to view 360 degrees of virtual space, but noticeable problems were identified when using high amplification with a head-mounted display. In addition, participants were able to more easily maintain a sense of spatial orientation when using the CAVE version of the application, which suggests that visibility of the user's body and awareness of the CAVE's physical environment may have contributed to the ability to use the amplification technique while keeping track of orientation.
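The core of the amplification technique is a simple gain applied to physical head yaw; a minimal sketch (the gain values are illustrative, not those tested in the study):

def virtual_yaw(physical_yaw_deg: float, gain: float) -> float:
    """Map a physical head yaw to an amplified virtual yaw, wrapped to [0, 360)."""
    return (physical_yaw_deg * gain) % 360.0

# With a gain of 2.0, turning the head +/-90 degrees sweeps the full 360-degree
# virtual surround; gain 1.0 reproduces unamplified, natural view control.
for yaw in (-90, -45, 0, 45, 90):
    print(yaw, "->", virtual_yaw(yaw, gain=2.0))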
Fringe periods of color moirés in contact-type 3-D displays.
Lee, Hyoung; Kim, Sung-Kyu; Sohn, Kwanghoon; Son, Jung-Young; Chernyshov, Oleksii O
2016-06-27
A mathematical formula for calculating the fringe periods of the color moirés appearing in contact-type 3-D displays is derived. Typically, the color moirés are chirped and the period of the line pattern in the viewing-zone-forming optics is more than twice that of the pixel pattern in the display panel. These conditions make it impossible to calculate the fringe periods of the color moirés with the conventional beat frequency formula. The derived formula works very well for any combination of two line patterns having either the same line period or different line periods. This is proved experimentally. Furthermore, it is also shown that the fringe period can be expressed in terms of the viewing distance and the focal length of the viewing-zone-forming optics.
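For reference, the conventional beat-frequency formula that the authors show breaks down for chirped patterns with a pitch ratio above two is the classical expression below; this sketch implements only that classical formula, not the generalized one derived in the paper:

def classical_moire_period(p1: float, p2: float) -> float:
    """Classical beat period of two superposed line patterns with pitches p1, p2:
    1/P = |1/p1 - 1/p2|  =>  P = p1*p2 / |p1 - p2|.
    Valid for two uniform (non-chirped) patterns of similar pitch; the paper
    derives a generalized formula for the contact-type 3-D display case."""
    if p1 == p2:
        return float("inf")            # identical pitches: no finite beat period
    return p1 * p2 / abs(p1 - p2)

print(classical_moire_period(0.100, 0.104))   # 2.6, in the same units as the pitches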
Recent improvements in SPE3D: a VR-based surgery planning environment
NASA Astrophysics Data System (ADS)
Witkowski, Marcin; Sitnik, Robert; Verdonschot, Nico
2014-02-01
SPE3D is a surgery planning environment developed within TLEMsafe project [1] (funded by the European Commission FP7). It enables the operator to plan a surgical procedure on the customized musculoskeletal (MS) model of the patient's lower limbs, send the modified model to the biomechanical analysis module, and export the scenario's parameters to the surgical navigation system. The personalized patient-specific three-dimensional (3-D) MS model is registered with 3-D MRI dataset of lower limbs and the two modalities may be visualized simultaneously. Apart from main planes, any arbitrary MRI cross-section can be rendered on the 3-D MS model in real time. The interface provides tools for: bone cutting, manipulating and removal, repositioning muscle insertion points, modifying muscle force, removing muscles and placing implants stored in the implant library. SPE3D supports stereoscopic viewing as well as natural inspection/manipulation with use of haptic devices. Alternatively, it may be controlled with use of a standard computer keyboard, mouse and 2D display or a touch screen (e.g. in an operating room). The interface may be utilized in two main fields. Experienced surgeons may use it to simulate their operative plans and prepare input data for a surgical navigation system while student or novice surgeons can use it for training.
Automatic view synthesis by image-domain-warping.
Stefanoski, Nikolce; Wang, Oliver; Lang, Manuel; Greisen, Pierre; Heinzle, Simon; Smolic, Aljosa
2013-09-01
Today, stereoscopic 3D (S3D) cinema is already mainstream, and almost all new display devices for the home support S3D content. S3D distribution infrastructure to the home is already established partly in the form of 3D Blu-ray discs, video-on-demand services, or television channels. The necessity to wear glasses is, however, often considered an obstacle, which hinders broader acceptance of this technology in the home. Multiview autostereoscopic displays enable a glasses-free perception of S3D content for several observers simultaneously, and support head-motion parallax in a limited range. To support multiview autostereoscopic displays in an already established S3D distribution infrastructure, a synthesis of new views from S3D video is needed. In this paper, a view synthesis method based on image-domain-warping (IDW) is presented that synthesizes new views directly from S3D video and functions completely automatically. IDW relies on an automatic and robust estimation of sparse disparities and image saliency information, and enforces target disparities in synthesized images using an image warping framework. Two configurations of the view synthesizer in the scope of a transmission and view synthesis framework are analyzed and evaluated. A transmission and view synthesis system that uses IDW was recently submitted in response to MPEG's call for proposals on 3D video technology, where it was ranked among the four best performing proposals.
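IDW itself warps on sparse disparities with saliency-aware constraints; as a much simpler stand-in that conveys the basic idea of view synthesis by horizontal warping, the sketch below shifts each pixel by a scaled dense disparity value:

import numpy as np

def warp_view(image: np.ndarray, disparity: np.ndarray, alpha: float) -> np.ndarray:
    """Synthesize a view between the left image (alpha=0) and the right image
    (alpha=1) by shifting each pixel horizontally by alpha * disparity.

    image: (H, W) grayscale view; disparity: (H, W) per-pixel disparity in px.
    A simple backward warp with nearest-neighbor sampling; IDW instead warps
    on sparse disparities with saliency-aware regularization."""
    h, w = image.shape
    xs = np.arange(w)
    out = np.empty_like(image)
    for y in range(h):
        src_x = np.clip(np.round(xs - alpha * disparity[y]).astype(int), 0, w - 1)
        out[y] = image[y, src_x]
    return out

# Tiny example: a constant 4-pixel disparity shifts the image by 2 px at alpha=0.5
img = np.tile(np.arange(8, dtype=float), (4, 1))
disp = np.full_like(img, 4.0)
print(warp_view(img, disp, alpha=0.5)[0])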
High luminance monochrome vs. color displays: impact on performance and search
NASA Astrophysics Data System (ADS)
Krupinski, Elizabeth A.; Roehrig, Hans; Matsui, Takashi
2011-03-01
To determine if diagnostic accuracy and visual search efficiency with a high luminance medical-grade color display are equivalent to a high luminance medical-grade monochrome display. Six radiologists viewed DR chest images, half with a solitary pulmonary nodule and half without. Observers reported whether or not a nodule was present and their confidence in that decision. Total viewing time per image was recorded. On a subset of 15 cases eye-position was recorded. Confidence data were analyzed using MRMC ROC techniques. There was no statistically significant difference (F = 0.0136, p = 0.9078) between color (mean Az = 0.8981, se = 0.0065) and monochrome (mean Az = 0.8945, se = 0.0148) diagnostic performance. Total viewing time per image did not differ significantly (F = 0.392, p = 0.5315) as a function of color (mean = 27.36 sec, sd = 12.95) vs monochrome (mean = 28.04, sd = 14.36) display. There were no significant differences in decision dwell times (true and false, positive and negative) overall for color vs monochrome displays (F = 0.133, p = 0.7154). The true positive (TP) and false positive (FP) decisions were associated with the longest dwell times, the false negatives (FN) with slightly shorter dwell times, and the true negative decisions (TN) with the shortest (F = 50.552, p < 0.0001) and these trends were consistent for both color and monochrome displays. Current color medical-grade displays are suitable for primary diagnostic interpretation in clinical radiology.
Performance characterization of a single bi-axial scanning MEMS mirror-based head-worn display
NASA Astrophysics Data System (ADS)
Liang, Minhua
2002-06-01
The NomadTM Personal Display System is a head-worn display (HWD) with a see-through, high-resolution, high-luminance display capability. It is based on a single bi-axial scanning MEMS mirror. In the Nomad HWD system, a red laser diode emits a beam of light that is scanned bi-axially by a single MEMS mirror. A diffractive beam diffuser and an ocular expand the beam to form a 12mm exit pupil for comfortable viewing. The Nomad display has an SVGA (800x600) resolution, 60Hz frame rate, 23-degree horizontal field of view (FOV) and 3:4 vertical to horizontal aspect ratio, a luminance of 800~900 foot-Lamberts, see-through capability, 30mm eye-relief distance, and 1-foot to infinity focusing adjustment. We have characterized the performance parameters, such as field of view, distortion, contrast ratio (4x4 black and white checker board), modulation depth, exit pupil size, eye relief distance, maximum luminance, dynamic range ratio (full-on-to-full-off ratio), dimming ratio, and luminance uniformity at image plane. The Class-1 eye-safety requirements per IEC 60825-1 Amendment 2 (CDRH Laser Notice No. 50) are analyzed and verified by experiments. The paper describes all of the testing methods and set-ups as well as the representative test results. The test results demonstrate that the Nomad display is an eye-safe display product with good image quality and good user ergonomics.
Report of mass communication Ceylon: October 1969-December 31, 1970.
1971-01-01
Experience with media usage by the FPA (Family Planning Association of Ceylon) between October 1969 and December 1970 is summarized. During this time, the Association purchased 100-200 column inches each of contract advertising space in 26 newspapers. The press has published 268 press releases, I.P.P.F. and U.N. features, and international press clippings, in addition to specialized medical articles on family planning methods and 8 articles by FPA office-bearers. In January 1970, the Association launched local radio's first 5-minute daily commercial in Sinhala and Tamil. The program was repeated from April to July 1970. A series of 5 slides on family planning has been shown in movie theaters, and more sets are being prepared for viewing. Posters have been used on buses and are currently on display on the National Railways project. Folders, leaflets, and poster calendars have been produced and used. Family planning stickers have been put up in 700 barber salons. The FPA had stalls in the 1970 3-day National Exhibition at Batticaloa, the 4-day U.N. Poster Exhibition at Badulla, and the 2-week Ceylon Medical College Centenary Exhibition in Colombo. The Information Unit of the FPA has answered 18,541 written inquiries. A family planning communication is regularly dispatched to members of the Cabinet, government and opposition members of parliament, senators, chairmen of local bodies, and key trade union officials.
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)
1991-01-01
A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinate for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.
Composite video and graphics display for camera viewing systems in robotics and teleoperation
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)
1993-01-01
A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinate for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.
Retinal projection type super multi-view head-mounted display
NASA Astrophysics Data System (ADS)
Takahashi, Hideya; Ito, Yutaka; Nakata, Seigo; Yamada, Kenji
2014-02-01
We propose a retinal projection type super multi-view head-mounted display (HMD). The smooth motion parallax provided by the super multi-view technique enables a precise superposition of virtual 3D images on the real scene. Moreover, if a viewer focuses his or her eyes on the displayed 3D image, the stimulus for the accommodation of the human eye is produced naturally. Therefore, although the proposed HMD is a monocular HMD, it provides observers with natural 3D images. The proposed HMD consists of an image projection optical system and a holographic optical element (HOE). The HOE is used as a combiner, and also works as a condenser lens to implement the Maxwellian view. Parallax images are projected onto the HOE, converged on the pupil, and then projected onto the retina. In order to verify the effectiveness of the proposed HMD, we constructed a prototype HMD. In the prototype HMD, the number of parallax images and the number of convergent points on the pupil are both three. The distance between adjacent convergent points is 2 mm. We displayed virtual images at distances from 20 cm to 200 cm in front of the pupil, and confirmed the accommodation response. This paper describes the principle of the proposed HMD and also describes the experimental results.
Coronary artery stenosis detection with holographic display of 3D angiograms
NASA Astrophysics Data System (ADS)
Ritman, Erik L.; Schwanke, Todd D.; Simari, Robert D.; Schwartz, Robert S.; Thomas, Paul J.
1995-05-01
The objective of this study was to establish the accuracy of a holographic display approach for the detection of stenoses in coronary arteries. The rationale for using a holographic display is that multiple angles of view of the coronary arteriogram are provided by a single 'x-ray'-like film, backlit by a special light box. This should be more convenient in that the viewer does not have to page back and forth through a cine angiogram to obtain the multiple angles of view. The method used to test this technique involved viewing 100 3D coronary angiograms. These images were generated from the 3D angiographic images of nine normal coronary arterial trees generated with the Dynamic Spatial Reconstructor (DSR) fast CT scanner. Using our image processing programs, the image of the coronary artery lumen was locally 'narrowed' by an amount and length and at a location determined by a random look-up table. Two independent, blinded, experienced angiographers viewed the holographic displays of these angiograms and recorded their confidence about the presence, location, and severity of the stenoses. This procedure evaluates the sensitivity and specificity of the detection of coronary artery stenoses as a function of the severity, size, and location along the arteries.
Overview of FTV (free-viewpoint television)
NASA Astrophysics Data System (ADS)
Tanimoto, Masayuki
2010-07-01
We have developed a new type of television named FTV (Free-viewpoint TV). FTV is the ultimate 3DTV that enables us to view a 3D scene by freely changing our viewpoint. We proposed the concept of FTV and constructed the world's first real-time system including the complete chain of operation from image capture to display. FTV is based on the ray-space method, which represents one ray in real space with one point in the ray-space. We have developed ray capture, processing, and display technologies for FTV. FTV can be carried out today in real time on a single PC or on a mobile player. We also realized FTV with free listening-point audio. The international standardization of FTV has been conducted in MPEG. The first phase of FTV was MVC (Multi-view Video Coding) and the second phase is 3DV (3D Video). MVC was completed in May 2009. The Blu-ray 3D specification has adopted MVC for compression. 3DV is a standard that targets serving a variety of 3D displays. The view generation function of FTV is used to decouple capture and display in 3DV. FDU (FTV Data Unit) is proposed as a data format for 3DV. The FDU can compensate for errors in the synthesized views caused by depth errors.
Interactive display of molecular models using a microcomputer system
NASA Technical Reports Server (NTRS)
Egan, J. T.; Macelroy, R. D.
1980-01-01
A simple, microcomputer-based, interactive graphics display system has been developed for the presentation of perspective views of wire frame molecular models. The display system is based on a TERAK 8510a graphics computer system with a display unit consisting of microprocessor, television display and keyboard subsystems. The operating system includes a screen editor, file manager, PASCAL and BASIC compilers and command options for linking and executing programs. The graphics program, written in UCSD Pascal, involves the centering of the coordinate system, the transformation of centered model coordinates into homogeneous coordinates, the construction of a viewing transformation matrix to operate on the coordinates, clipping invisible points, perspective transformation and scaling to screen coordinates; commands available include ZOOM, ROTATE, RESET, and CHANGEVIEW. Data file structure was chosen to minimize the amount of disk storage space. Despite the inherent slowness of the system, its low cost and flexibility suggest general applicability.
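The pipeline stages listed (centering, homogeneous coordinates, viewing transformation, clipping, perspective, screen scaling) correspond to the standard wireframe projection sequence; a minimal Python rendition of that sequence follows, not a translation of the original UCSD Pascal program:

import numpy as np

def project_wireframe(points, eye_distance=5.0, screen_size=512):
    """Project centered 3-D model coordinates to 2-D screen coordinates.

    points: (N, 3) atom coordinates.  Steps mirror those in the abstract:
    center the model, move to homogeneous coordinates, apply a viewing
    transform, perspective-divide, and scale to screen pixels."""
    pts = np.asarray(points, dtype=float)
    pts = pts - pts.mean(axis=0)                       # center the coordinate system
    homo = np.hstack([pts, np.ones((len(pts), 1))])    # homogeneous coordinates

    view = np.eye(4)                                   # viewing transform (a ROTATE
    view[2, 3] = eye_distance                          # command would modify this matrix)
    cam = homo @ view.T

    z = cam[:, 2]
    visible = z > 0.1                                  # clip points behind the eye
    x = cam[visible, 0] / z[visible]                   # perspective transformation
    y = cam[visible, 1] / z[visible]

    scale = screen_size / 4.0                          # scale to screen coordinates
    return np.stack([x, y], axis=1) * scale + screen_size / 2.0

print(project_wireframe([[1, 0, 0], [0, 1, 0], [-1, 0, 0], [0, -1, 0]]))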
Inexpensive Monocular Pico-Projector-based Augmented Reality Display for Surgical Microscope
Shi, Chen; Becker, Brian C.; Riviere, Cameron N.
2013-01-01
This paper describes an inexpensive pico-projector-based augmented reality (AR) display for a surgical microscope. The system is designed for use with Micron, an active handheld surgical tool that cancels hand tremor of surgeons to improve microsurgical accuracy. Using the AR display, virtual cues can be injected into the microscope view to track the movement of the tip of Micron, show the desired position, and indicate the position error. Cues can be used to maintain high performance by helping the surgeon to avoid drifting out of the workspace of the instrument. Also, boundary information such as the view range of the cameras that record surgical procedures can be displayed to tell surgeons the operation area. Furthermore, numerical, textual, or graphical information can be displayed, showing such things as tool tip depth in the work space and on/off status of the canceling function of Micron. PMID:25264542
Visualizing UAS-collected imagery using augmented reality
NASA Astrophysics Data System (ADS)
Conover, Damon M.; Beidleman, Brittany; McAlinden, Ryan; Borel-Donohue, Christoph C.
2017-05-01
One of the areas where augmented reality will have an impact is in the visualization of 3-D data. 3-D data has traditionally been viewed on a 2-D screen, which has limited its utility. Augmented reality head-mounted displays, such as the Microsoft HoloLens, make it possible to view 3-D data overlaid on the real world. This allows a user to view and interact with the data in ways similar to how they would interact with a physical 3-D object, such as moving, rotating, or walking around it. A type of 3-D data that is particularly useful for military applications is geo-specific 3-D terrain data, and the visualization of this data is critical for training, mission planning, intelligence, and improved situational awareness. Advances in Unmanned Aerial Systems (UAS), photogrammetry software, and rendering hardware have drastically reduced the technological and financial obstacles in collecting aerial imagery and in generating 3-D terrain maps from that imagery. Because of this, there is an increased need to develop new tools for the exploitation of 3-D data. We will demonstrate how the HoloLens can be used as a tool for visualizing 3-D terrain data. We will describe: 1) how UAS-collected imagery is used to create 3-D terrain maps, 2) how those maps are deployed to the HoloLens, 3) how a user can view and manipulate the maps, and 4) how multiple users can view the same virtual 3-D object at the same time.
Vertical Field of View Reference Point Study for Flight Path Control and Hazard Avoidance
NASA Technical Reports Server (NTRS)
Comstock, J. Raymond, Jr.; Rudisill, Marianne; Kramer, Lynda J.; Busquets, Anthony M.
2002-01-01
Researchers within the eXternal Visibility System (XVS) element of the High-Speed Research (HSR) program developed and evaluated display concepts that will provide the flight crew of the proposed High-Speed Civil Transport (HSCT) with integrated imagery and symbology to permit path control and hazard avoidance functions while maintaining required situation awareness. The challenge of the XVS program is to develop concepts that would permit a no-nose-droop configuration of an HSCT and expanded low-visibility HSCT operational capabilities. This study was one of a series of experiments exploring the 'design space' restrictions for physical placement of an XVS display. The primary experimental issue here was 'conformality' of the forward display's vertical position with respect to the side window in simulated flight. 'Conformality' refers to the condition in which the horizon and objects appear in the same relative positions when viewed through the forward windows or display and the side windows. This study quantified the effects of visual conformality on pilot flight path control and hazard avoidance performance. Here, conformality related to the positioning and relationship of the artificial horizon line and associated symbology presented on the forward display and the horizon and associated ground, horizon, and sky textures as they would appear in the real view through a window presented in the side window display. No significant performance consequences were found for the non-conformal conditions.
An Investigation of Interval Management Displays
NASA Technical Reports Server (NTRS)
Swieringa, Kurt A.; Wilson, Sara R.; Shay, Rick
2015-01-01
NASA's first Air Traffic Management (ATM) Technology Demonstration (ATD-1) was created to transition the most mature ATM technologies from the laboratory to the National Airspace System. One selected technology is Interval Management (IM), which uses onboard aircraft automation to compute speeds that help the flight crew achieve and maintain precise spacing behind a preceding aircraft. Since ATD-1 focuses on a near-term environment, the ATD-1 flight demonstration prototype requires radio voice communication to issue an IM clearance. Retrofit IM displays will enable pilots to both enter information into the IM avionics and monitor IM operation. These displays could consist of an interface to enter data from an IM clearance and also an auxiliary display that presents critical information in the primary field-of-view. A human-in-the-loop experiment was conducted to examine usability and acceptability of retrofit IM displays, which flight crews found acceptable. Results also indicate the need for salient alerting when new speeds are generated and the desire to have a primary field of view display available that can display text and graphic trend indicators.
NASA Technical Reports Server (NTRS)
2002-01-01
Dimension Technologies Inc. developed a line of 2-D/3-D Liquid Crystal Display (LCD) screens, including a 15-inch model priced at consumer levels. DTI's family of flat panel LCD displays, called the Virtual Window(TM), provides real-time 3-D images without the use of glasses, head trackers, helmets, or other viewing aids. Most of the company's initial 3-D display research was funded through NASA's Small Business Innovation Research (SBIR) program. The images on DTI's displays appear to leap off the screen and hang in space. The display accepts input from computers or stereo video sources, and can be switched from 3-D to full-resolution 2-D viewing with the push of a button. The Virtual Window displays have applications in data visualization, medicine, architecture, business, real estate, entertainment, and other research, design, military, and consumer applications. Displays are currently used for computer games, protein analysis, and surgical imaging. The technology greatly benefits the medical field, as surgical simulators are helping to increase the skills of surgical residents. Virtual Window(TM) is a trademark of Dimension Technologies Inc.
Controllable 3D Display System Based on Frontal Projection Lenticular Screen
NASA Astrophysics Data System (ADS)
Feng, Q.; Sang, X.; Yu, X.; Gao, X.; Wang, P.; Li, C.; Zhao, T.
2014-08-01
A novel auto-stereoscopic three-dimensional (3D) projection display system based on a frontal projection lenticular screen is demonstrated. It can provide a highly realistic 3D experience and freedom of interaction. In the demonstrated system, the content can be changed and the density of viewing points can be freely adjusted according to the viewers' demands. Densely spaced viewing points provide smooth motion parallax and larger image depth without blurring. The basic principle of the stereoscopic display is described first. Then, the design architecture, including hardware and software, is presented. The system consists of a frontal projection lenticular screen, an optimally designed projector array, and a set of multi-channel image processors. The parameters of the frontal projection lenticular screen are based on the viewing requirements, such as the viewing distance and the width of the view zones. Each projector is arranged on an adjustable platform. The set of multi-channel image processors is made up of six PCs. One of them is used as the main controller; the other five client PCs process 30 channels of signals and transmit them to the projector array. A natural 3D scene is then perceived on the frontal projection lenticular screen with more than 1.5 m of image depth in real time. The control section is presented in detail, including parallax adjustment, system synchronization, distortion correction, etc. Experimental results demonstrate the effectiveness of this novel controllable 3D display system.
Real time 3D scanner: investigations and results
NASA Astrophysics Data System (ADS)
Nouri, Taoufik; Pflug, Leopold
1993-12-01
This article presents a concept for the reconstruction of 3-D objects using non-invasive, contactless techniques. The principle of the method is to project parallel optical interference fringes onto an object and then to record the object from two angles of view. With appropriate processing, the 3-D object is reconstructed even when it has no plane of symmetry. The 3-D surface data are available immediately in digital form for computer visualization and for analysis software tools. The optical set-up for recording the 3-D object, the 3-D data extraction and treatment, as well as the reconstruction of the 3-D object are reported and commented on. This application is dedicated to reconstructive/cosmetic surgery, CAD, animation, and research purposes.
DOT National Transportation Integrated Search
1995-07-01
In support of the Federal Aviation Administration (FAA) Airborne Data Link Program, CTA INCORPORATED researched airlines' anticipated near-future cockpit control and display capabilities and associated plans for Data Link communication. This ef...
One application of mega-geomorphology in education
NASA Technical Reports Server (NTRS)
Blair, R. W., Jr.
1985-01-01
One advantage of a synoptic view displaying landform assemblages provided by imagery is that one can often identify geomorphic processes which have shaped the region and which may affect the habitability of the area over a human lifetime. Considering the continued growth of the world population and the resultant pressure on and exploitation of land, usually without any consideration given to geologic processes, it is imperative that we attempt to educate as large a segment of the population as we can about geologic processes and how they influence land use. Space platform imagery which exhibits regional landscapes can be used: (1) to show students the impact of geologic processes over relatively short periods of time (e.g., the Mount St. Helens lateral blast); (2) to display the effects of poor planning because of a lack of knowledge of the local geologic processes (e.g., the 1973 image of the Mississippi River flood around St. Louis, MO); and (3) to show the association of certain types of landforms with building materials and other resources (e.g., drumlins and gravel deposits).
Expert system for neurosurgical treatment planning
NASA Astrophysics Data System (ADS)
Cheng, Andrew Y. S.; Chung, Sally S. Y.; Kwok, John C. K.
1996-04-01
A specially designed expert system is in development for neurosurgical treatment planning. The knowledge base contains knowledge and experience on neurosurgical treatment planning from consultant neurosurgeons, who also determine the risks of different regions of the human brain. When completed, the system will simulate the decision-making process of neurosurgeons to determine the safest probing path for an operation. The Computed Tomography (CT) or Magnetic Resonance Imaging (MRI) scan images for each patient are acquired as input. The system also allows neurosurgeons to include additional patient-specific information, such as how the tumor affects neighboring functional regions, which is also important for calculating the safest probing path. It can then consider all the relevant information and find the most suitable probing path through the patient's brain. A 3D brain model is constructed for each set of CT/MRI scan images and is displayed in real time together with the possible probing paths found. The precise risk value of each path is shown as a number between 0 and 1, together with a textual description of the possible damage. Neurosurgeons can view more than one possible path simultaneously and make the final decision on the path selected for the operation.
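The abstract does not spell out how a path's risk value in [0, 1] is computed. A minimal Python sketch of one plausible approach, assuming a voxelized risk map derived from the expert-labeled regions and a straight probing path; the sampling scheme and the "at least one hit" combination rule are illustrative assumptions, not the authors' method:

```python
import numpy as np

def path_risk(risk_volume, entry, target, samples=200):
    """Estimate the risk of a straight probing path through a 3D risk map.

    risk_volume: 3D array of per-voxel risk values in [0, 1] (assumed to come
                 from expert-labeled CT/MRI regions).
    entry, target: (z, y, x) voxel coordinates of probe entry and target.
    The per-sample risks are combined as the probability of hitting at least
    one risky voxel, which is only one of many possible combination rules.
    """
    entry, target = np.asarray(entry, float), np.asarray(target, float)
    ts = np.linspace(0.0, 1.0, samples)
    points = entry[None, :] + ts[:, None] * (target - entry)[None, :]
    idx = np.clip(np.round(points).astype(int), 0,
                  np.array(risk_volume.shape) - 1)
    sampled = risk_volume[idx[:, 0], idx[:, 1], idx[:, 2]]
    return 1.0 - np.prod(1.0 - sampled)

# Toy example: a 50^3 volume with one high-risk spherical region.
vol = np.zeros((50, 50, 50))
zz, yy, xx = np.mgrid[0:50, 0:50, 0:50]
vol[(zz - 25) ** 2 + (yy - 25) ** 2 + (xx - 25) ** 2 < 64] = 0.02
print(path_risk(vol, entry=(0, 0, 0), target=(49, 49, 49)))   # passes through the region
print(path_risk(vol, entry=(0, 0, 49), target=(0, 49, 49)))   # avoids it
```

A planner of this kind would evaluate many candidate entry points and rank the resulting paths by the returned risk.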
Table screen 360-degree holographic display using circular viewing-zone scanning.
Inoue, Tatsuaki; Takaki, Yasuhiro
2015-03-09
A table screen 360-degree holographic display is proposed, with an increased screen size and a viewing zone expanded over all horizontal directions around the table screen. It consists of a microelectromechanical systems spatial light modulator (MEMS SLM), a magnifying imaging system, and a rotating screen. The MEMS SLM generates hologram patterns at a high frame rate, the magnifying imaging system enlarges the image of the MEMS SLM screen, and the reduced viewing zones are scanned circularly by the rotating screen. The viewing zones are localized to practically realize wavefront reconstruction. An experimental system has been constructed. The generation of 360-degree three-dimensional (3D) images was achieved by scanning 800 reduced and localized viewing zones circularly. The table screen had a diameter of 100 mm, and the frame rate of 3D image generation was 28.4 Hz.
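The quoted figures imply the hologram rate the MEMS SLM must sustain. A back-of-the-envelope check in Python, assuming one hologram pattern per viewing zone per 3D frame (an assumption consistent with, but not stated in, the abstract):

```python
viewing_zones = 800        # reduced, localized viewing zones scanned per 3D frame
volume_rate_hz = 28.4      # 3D image generation rate quoted above

slm_frame_rate_hz = viewing_zones * volume_rate_hz
print(f"required SLM hologram rate ~ {slm_frame_rate_hz:,.0f} frames/s")
# ~ 22,720 frames/s, i.e. a binary MEMS SLM running in the tens-of-kHz range
```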
Hegarty, Mary; Canham, Matt S; Fabrikant, Sara I
2010-01-01
Three experiments examined how bottom-up and top-down processes interact when people view and make inferences from complex visual displays (weather maps). Bottom-up effects of display design were investigated by manipulating the relative visual salience of task-relevant and task-irrelevant information across different maps. Top-down effects of domain knowledge were investigated by examining performance and eye fixations before and after participants learned relevant meteorological principles. Map design and knowledge interacted such that salience had no effect on performance before participants learned the meteorological principles; however, after learning, participants were more accurate if they viewed maps that made task-relevant information more visually salient. Effects of display design on task performance were somewhat dissociated from effects of display design on eye fixations. The results support a model in which eye fixations are directed primarily by top-down factors (task and domain knowledge). They suggest that good display design facilitates performance not just by guiding where viewers look in a complex display but also by facilitating processing of the visual features that represent task-relevant information at a given display location. (PsycINFO Database Record (c) 2009 APA, all rights reserved).
The design of electronic map displays
NASA Technical Reports Server (NTRS)
Aretz, Anthony J.
1991-01-01
This paper presents a cognitive analysis of a pilot's navigation task and describes an experiment comparing a new map display that employs the principle of visual momentum with the two traditional approaches, track-up and north-up. The data show that the advantage of a track-up alignment is its congruence with the ego-centered forward view; however, the inconsistency of the rotating display hinders development of a cognitive map. The stability of a north-up alignment aids the acquisition of a cognitive map, but there is a cost associated with the mental rotation of the display to a track-up alignment for tasks involving the ego-centered forward view. The data also show that the visual momentum design captures the benefits and reduces the costs associated with the two traditional approaches.
Simulation and display of macromolecular complexes
NASA Technical Reports Server (NTRS)
Nir, S.; Garduno, R.; Rein, R.; Macelroy, R. D.
1977-01-01
In association with an investigation of the interaction of proteins with DNA and RNA, an interactive computer program for building, manipulating, and displaying macromolecular complexes has been designed. The system provides perspective, planar, and stereoscopic views on the computer terminal display, as well as views for standard and nonstandard observer locations. The molecule or its parts may be rotated and/or translated in any direction; bond connections may be added or removed by the viewer. Molecular fragments may be juxtaposed in such a way that given bonds are aligned, and given planes and points coincide. Another subroutine provides for the duplication of a given unit such as a DNA or amino-acid base.
GlastCam: A Telemetry-Driven Spacecraft Visualization Tool
NASA Technical Reports Server (NTRS)
Stoneking, Eric T.; Tsai, Dean
2009-01-01
Developed for the GLAST project, which is now the Fermi Gamma-ray Space Telescope, GlastCam software ingests telemetry from the Integrated Test and Operations System (ITOS) and generates four graphical displays of geometric properties in real time, allowing visual assessment of the attitude, configuration, position, and various cross-checks. Four windows are displayed: a "cam" window shows a 3D view of the satellite; a second window shows the standard position plot of the satellite on a Mercator map of the Earth; a third window displays star tracker fields of view, showing which stars are visible from the spacecraft in order to verify star tracking; and the fourth window depicts
NASA Technical Reports Server (NTRS)
Douard, Stephane
1994-01-01
Known as a Graphic Server, the system presented was designed for the control ground segment of the Telecom 2 satellites. It is a tool used to dynamically display telemetry data within graphic pages, also known as views. The views are created off-line through various utilities and then, on the operator's request, displayed and animated in real time as data is received. The system was designed as an independent component, and is installed in different Telecom 2 operational control centers. It enables operators to monitor changes in the platform and satellite payloads in real time. It has been in operation since December 1991.
Motion-Base Simulator Evaluation of an Aircraft Using an External Vision System
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Williams, Steven P.; Arthur, J. J.; Rehfeld, Sherri A.; Harrison, Stephanie
2012-01-01
Twelve air transport-rated pilots participated as subjects in a motion-base simulation experiment to evaluate the use of eXternal Vision Systems (XVS) as enabling technologies for future supersonic aircraft without forward-facing windows. Three head-up flight display concepts were evaluated: a monochromatic, collimated Head-Up Display (HUD) and a color, non-collimated XVS display with a field-of-view (FOV) equal to, and also one significantly larger than, that of the collimated HUD. Approach, landing, departure, and surface operations were conducted. Additionally, the apparent angle-of-attack (AOA) was varied (high/low) to investigate vertical field-of-view display requirements, and peripheral side-window visibility was experimentally varied. The data showed that lateral approach tracking performance and lateral landing position were excellent regardless of AOA, display FOV, display collimation, or whether peripheral cues were present. However, the data showed that glide slope approach tracking appears to be affected by display size (i.e., FOV) and collimation. The monochrome, collimated HUD and the color, uncollimated XVS with Full FOV display had (statistically equivalent) glide path performance improvements over the XVS with HUD FOV display. Approach path performance results indicated that collimation may not be a requirement for an XVS display if the XVS display is large enough and employs color. Subjective assessments of mental workload and situation awareness also indicated that an uncollimated XVS display may be feasible. Motion cueing appears to have improved localizer tracking and touchdown sink rate across all displays.
Field emitter displays for future avionics applications
NASA Astrophysics Data System (ADS)
Jones, Susan K.; Jones, Gary W.; Zimmerman, Steven M.; Blazejewski, Edward R.
1995-06-01
Field emitter array-based display technology offers CRT-like characteristics in a thin flat-panel display with many potential applications for vehicle-mounted, crew workstation, and helmet-mounted displays, as well as many other military and commercial applications. In addition to thinness, high brightness, wide viewing angle, wide temperature range, and low weight, field emitter array displays also offer potential advantages such as row-at-a-time matrix addressability and the ability to be segmented.
Ultrabright Head Mounted Displays Using LED-Illuminated LCOS
2006-01-01
... ferroelectric liquid-crystal-on-silicon microdisplay and a red-green-blue LED. With an 8x viewing optic giving a 35 degree diagonal field of view, the ...
ERIC Educational Resources Information Center
Koeninger, Jimmy G.
The instructional package was developed to provide the distributive education teacher-coordinator with visual materials that can be used to supplement existing textbook offerings in the area of display (visual merchandising). Designed for use with 35mm slides of retail store displays, the package allows the student to view the slides of displays…
1983-08-01
AD-R136 99: The Integrated Mission-Planning Station: Functional Requirements, Aviator-... (U). Anacapa Sciences Inc., Santa Barbara, CA; S. P. Rogers; Aug 1983. Keywords: Interactive Systems; Aviation; Control-Display; Functional Requirements; Plan-Computer Dialogue; Avionics Systems; Map Display; Army Aviation; Design Criteria; Helicopters; Mission Planning; Cartography; Digital Map; Human Factors; Navigation.
Negative emotional stimuli reduce contextual cueing but not response times in inefficient search.
Kunar, Melina A; Watson, Derrick G; Cole, Louise; Cox, Angeline
2014-02-01
In visual search, previous work has shown that negative stimuli narrow the focus of attention and speed reaction times (RTs). This paper investigates these two effects by first asking whether negative emotional stimuli narrow the focus of attention to reduce the learning of a display context in a contextual cueing task and, second, whether exposure to negative stimuli also reduces RTs in inefficient search tasks. In Experiment 1, participants viewed either negative or neutral images (faces or scenes) prior to a contextual cueing task. In a typical contextual cueing experiment, RTs are reduced if displays are repeated across the experiment compared with novel displays that are not repeated. The results showed that a smaller contextual cueing effect was obtained after participants viewed negative stimuli than when they viewed neutral stimuli. However, in contrast to previous work, overall search RTs were not faster after viewing negative stimuli (Experiments 2 to 4). The findings are discussed in terms of the impact of emotional content on visual processing and the ability to use scene context to help facilitate search.
Psychophysical Comparison Of A Video Display System To Film By Using Bone Fracture Images
NASA Astrophysics Data System (ADS)
Seeley, George W.; Stempski, Mark; Roehrig, Hans; Nudelman, Sol; Capp, M. P.
1982-11-01
This study investigated the possibility of using a video display system instead of film for radiological diagnosis. Also investigated were the relationships between characteristics of the system and the observer's accuracy level. Radiologists were used as observers. Thirty-six clinical bone fractures were separated into two matched sets of equal difficulty. The difficulty parameters and ratings were defined by a panel of expert bone radiologists at the Arizona Health Sciences Center, Radiology Department. These two sets of fracture images were then matched with verifiably normal images using parameters such as film type, angle of view, size, portion of anatomy, the film's density range, and the patient's age and sex. The two sets of images were then displayed, using a counterbalanced design, to each of the participating radiologists for diagnosis. Whenever a response was given to a video image, the radiologist used enhancement controls to "window in" on the grey levels of interest. During the TV phase, the radiologist was required to record the settings of the calibrated controls of the image enhancer during interpretation. At no time did any single radiologist see the same film in both modes. The study was designed so that a standard analysis of variance would show the effects of viewing mode (film vs TV), the effects due to stimulus set, and any interactions with observers. A signal detection analysis of observer performance was also performed. Results indicate that the TV display system is almost as good as the view box display; an average of only two more errors were made on the TV display. The difference between the systems has been traced to four observers who had poor accuracy on a small number of films viewed on the TV display. This information is now being correlated with the video system's signal-to-noise ratio (SNR), signal transfer function (STF), and resolution measurements, to obtain information on the basic display and enhancement requirements for a video-based radiologic system. Due to time constraints the results are not included here. The complete results of this study will be reported at the conference.
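The signal detection analysis mentioned above reduces to comparing hit and false-alarm rates for each viewing mode. A minimal Python sketch of the standard equal-variance d' computation, using hypothetical counts rather than the study's data:

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Classic equal-variance signal-detection index d' = z(HR) - z(FAR).

    A 0.5 correction is applied so that rates of 0 or 1 do not produce
    infinite z-scores (one common convention; others exist).
    """
    hr = (hits + 0.5) / (hits + misses + 1.0)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hr) - norm.ppf(far)

# Hypothetical counts for one observer on one display mode (not the study's data):
print(round(d_prime(hits=15, misses=3, false_alarms=2, correct_rejections=16), 2))
```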
Interactive 3D Mars Visualization
NASA Technical Reports Server (NTRS)
Powell, Mark W.
2012-01-01
The Interactive 3D Mars Visualization system provides high-performance, immersive visualization of satellite and surface vehicle imagery of Mars. The software can be used in mission operations to provide the most accurate position information for the Mars rovers to date. When integrated into the mission data pipeline, this system allows mission planners to view the location of the rover on Mars to 0.01-meter accuracy with respect to satellite imagery, with dynamic updates to incorporate the latest position information. Given this information so early in the planning process, rover drivers are able to plan more accurate drive activities for the rover than ever before, significantly increasing the amount of science activity that can be executed. Scientifically, this 3D mapping information puts all of the science analyses to date into geologic context on a daily basis instead of weeks or months, as was the norm prior to this contribution. This allows the science planners to judge the efficacy of their previously executed science observations much more efficiently, and achieve greater science return as a result. The Interactive 3D Mars surface view is a Mars terrain browsing software interface that encompasses the entire region of exploration for a Mars surface exploration mission. The view is interactive, allowing the user to pan in any direction by clicking and dragging, or to zoom in or out by scrolling the mouse or touchpad. The toolset currently includes a tool for selecting a point of interest and a ruler tool for displaying the positions of, and distance between, two points of interest. The mapping information can be harvested and shared through ubiquitous online mapping tools like Google Mars, NASA WorldWind, and Worldwide Telescope.
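As an illustration of what a ruler tool of this kind computes, the short Python sketch below returns the great-circle distance between two Mars surface points from their latitude/longitude. The real tool works against its terrain model and map projection, so the spherical simplification and the example coordinates here are assumptions:

```python
import math

MARS_MEAN_RADIUS_M = 3_389_500.0   # approximate mean radius of Mars

def surface_distance_m(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two Mars surface points,
    given latitude/longitude in degrees. Ignores terrain relief, which the
    real tool accounts for via its 3D terrain model."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * MARS_MEAN_RADIUS_M * math.asin(math.sqrt(a))

# Two hypothetical points of interest a short drive apart:
print(round(surface_distance_m(-14.5684, 175.4726, -14.5670, 175.4750), 1), "m")
```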
Chen, Yan; James, Jonathan J; Turnbull, Anne E; Gale, Alastair G
2015-10-01
To establish whether lower resolution, lower cost viewing devices have the potential to deliver mammographic interpretation training. On three occasions over eight months, fourteen consultant radiologists and reporting radiographers read forty challenging digital mammography screening cases on three different displays: a digital mammography workstation, a standard LCD monitor, and a smartphone. Standard image manipulation software was available for use on all three devices. Receiver operating characteristic (ROC) analysis and ANOVA (Analysis of Variance) were used to determine the significance of differences in performance between the viewing devices with/without the application of image manipulation software. The effect of reader's experience was also assessed. Performance was significantly higher (p < .05) on the mammography workstation compared to the other two viewing devices. When image manipulation software was applied to images viewed on the standard LCD monitor, performance improved to mirror levels seen on the mammography workstation with no significant difference between the two. Image interpretation on the smartphone was uniformly poor. Film reader experience had no significant effect on performance across all three viewing devices. Lower resolution standard LCD monitors combined with appropriate image manipulation software are capable of displaying mammographic pathology, and are potentially suitable for delivering mammographic interpretation training. • This study investigates potential devices for training in mammography interpretation. • Lower resolution standard LCD monitors are potentially suitable for mammographic interpretation training. • The effect of image manipulation tools on mammography workstation viewing is insignificant. • Reader experience had no significant effect on performance in all viewing devices. • Smart phones are not suitable for displaying mammograms.
Case study: using a stereoscopic display for mission planning
NASA Astrophysics Data System (ADS)
Kleiber, Michael; Winkelholz, Carsten
2009-02-01
This paper reports on the results of a study investigating the benefits of using an autostereoscopic display in the training targeting process of the German Air Force. The study examined how stereoscopic 3D visualizations can help to improve flight path planning and the preparation of a mission in general. An autostereoscopic display was used because it allows the operator to perceive stereoscopic images without shutter glasses, which facilitates integration into a workplace with conventional 2D monitors and arbitrary lighting conditions.
Power management of direct-view LED backlight for liquid crystal display
NASA Astrophysics Data System (ADS)
Lee, Xuan-Hao; Lin, Che-Chu; Chang, Yu-Yu; Chen, He-Xiang; Sun, Ching-Cherng
2013-03-01
In this paper, we present a study of power management as a function of the luminous efficacy of white LEDs, as well as of the efficiency enhancement of a direct-view backlight with photon recycling. A cavity efficiency as high as 90.7% is demonstrated for a direct-view backlight with photon recycling. In the future, with a 90% backlight cavity, a luminous efficacy of 200 lm/W for white LEDs, and a transmission efficiency of 10% for the liquid crystal panel, the required LED power could be as low as 16 W. Up to 85% energy savings could be achieved in comparison with the power consumption of current liquid crystal displays.
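The quoted 16 W figure can be checked with simple arithmetic. The abstract does not state the target luminous flux leaving the panel; the value of roughly 290 lm below is inferred here so that the quoted result is reproduced, and is therefore an assumption:

```python
# Back-of-the-envelope check of the quoted figures.
target_panel_flux_lm = 290.0   # assumed luminous flux leaving the LC panel (not stated above)
led_efficacy_lm_per_w = 200.0  # projected white-LED efficacy (quoted)
cavity_efficiency = 0.90       # backlight cavity efficiency with photon recycling (quoted)
panel_transmission = 0.10      # LC panel transmission (quoted)

led_power_w = target_panel_flux_lm / (
    led_efficacy_lm_per_w * cavity_efficiency * panel_transmission
)
print(f"required LED electrical power ~ {led_power_w:.1f} W")   # ~ 16 W
```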
Advanced methods for displays and remote control of robots.
Eliav, Ami; Lavie, Talia; Parmet, Yisrael; Stern, Helman; Edan, Yael
2011-11-01
An in-depth evaluation of the usability and situation awareness performance of different displays and destination controls for robots is presented. In two experiments we evaluate the way information is presented to the operator and assess different means for controlling the robot. Our study compares three types of displays: a "blocks" display, a HUD (head-up display), and a radar display, and two types of controls: touch screen and hand gestures. The HUD demonstrated better performance when compared to the blocks display and was perceived to have greater usability compared to the radar display. The HUD was also found to be more useful when the operation of the robot was more difficult, i.e., when using the hand-gesture method. The experiments also pointed to the importance of using a wide viewing angle to minimize distortion and for easier coping with the difficulties of locating objects in the field-of-view margins. The touch screen was found to be superior in terms of both objective performance and its perceived usability. No differences were found between the displays and the controllers in terms of situation awareness. This research sheds light on the preferred display type and control method for operating robots from a distance, making it easier to cope with the challenges of operating such systems. Copyright © 2011 Elsevier Ltd and The Ergonomics Society. All rights reserved.
NASA Technical Reports Server (NTRS)
Chouinard, Caroline; Fisher, Forest; Estlin, Tara; Gaines, Daniel; Schaffer, Steven
2005-01-01
The Grid Visualization Tool (GVT) is a computer program for displaying the path of a mobile robotic explorer (rover) on a terrain map. The GVT reads a map-data file in either portable graymap (PGM) or portable pixmap (PPM) format, representing a gray-scale or color map image, respectively. The GVT also accepts input from path-planning and activity-planning software. From these inputs, the GVT generates a map overlaid with one or more rover path(s), waypoints, locations of targets to be explored, and/or target-status information (indicating success or failure in exploring each target). The display can also indicate different types of paths or path segments, such as the path actually traveled versus a planned path or the path traveled to the present position versus planned future movement along a path. The program provides for updating of the display in real time to facilitate visualization of progress. The size of the display and the map scale can be changed as desired by the user. The GVT was written in the C++ language using the Open Graphics Library (OpenGL) software. It has been compiled for both Sun Solaris and Linux operating systems.
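GVT itself is written in C++ with OpenGL; as a rough illustration of the kind of input it consumes and the overlay it produces, the Python sketch below loads a PGM/PPM map and marks a planned path and target points on it. The file names, waypoint format, and colors are illustrative assumptions rather than GVT's actual interface, and the Pillow imaging library is assumed to be available:

```python
from PIL import Image, ImageDraw

terrain = Image.open("terrain_map.pgm").convert("RGB")   # PGM (gray-scale) or PPM (color) map
draw = ImageDraw.Draw(terrain)

planned_path = [(12, 40), (35, 52), (60, 48), (88, 70)]   # (x, y) pixel waypoints
targets = {"rock_1": (35, 52), "trench_2": (88, 70)}      # target name -> location

draw.line(planned_path, fill=(0, 128, 255), width=2)      # planned route overlay
for name, (x, y) in targets.items():
    draw.ellipse([x - 3, y - 3, x + 3, y + 3], outline=(255, 0, 0), width=2)
    draw.text((x + 5, y - 5), name, fill=(255, 0, 0))      # label the target

terrain.save("terrain_with_path.png")
```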
NASA Astrophysics Data System (ADS)
Lyon, A. L.; Kowalkowski, J. B.; Jones, C. D.
2017-10-01
ParaView is a high-performance visualization application not widely used in High Energy Physics (HEP). It is a long-standing open-source project led by Kitware that involves several Department of Energy (DOE) and Department of Defense (DOD) laboratories. Furthermore, it has been adopted by many DOE supercomputing centers and other sites. ParaView achieves its speed and efficiency by using state-of-the-art techniques developed by the academic visualization community that are often not found in applications written by the HEP community. In-situ visualization of events, where event details are visualized during processing/analysis, is a common task for experiment software frameworks. Kitware supplies Catalyst, a library that enables scientific software to serve visualization objects to client ParaView viewers, yielding a real-time event display. Connecting ParaView to the Fermilab art framework is described and the capabilities it brings are discussed.
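For readers unfamiliar with ParaView scripting, a minimal sketch of a scripted pipeline is shown below. It is a generic stand-in run with pvpython, not the Catalyst/art integration the paper describes, and it assumes ParaView's Python bindings are installed:

```python
# Minimal paraview.simple sketch: build a toy source, render it, save an image.
from paraview.simple import Sphere, Show, Render, SaveScreenshot

source = Sphere(Radius=1.0, ThetaResolution=32, PhiResolution=32)  # placeholder geometry
Show(source)                        # add the source to the active render view
Render()                            # draw the scene
SaveScreenshot("event_view.png")    # write the rendered view to disk
```

In an in-situ setup, a Catalyst adaptor would feed the framework's own event geometry into such a pipeline at run time instead of a toy source.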
Wide-view transflective liquid crystal display for mobile applications
NASA Astrophysics Data System (ADS)
Kim, Hyang Yul; Ge, Zhibing; Wu, Shin-Tson; Lee, Seung Hee
2007-12-01
A high-optical-efficiency, wide-view transflective liquid crystal display based on a fringe-field switching structure is proposed. The transmissive part has a homogeneous liquid crystal (LC) alignment and is driven by a fringe electric field, which exhibits excellent electro-optic characteristics. The reflective part has a hybrid LC alignment with quarter-wave phase retardation and is also driven by a fringe electric field. Consequently, the transmissive and reflective parts have similar gamma curves.
Design of retinal-projection-based near-eye display with contact lens.
Wu, Yuhang; Chen, Chao Ping; Mi, Lantian; Zhang, Wenbo; Zhao, Jingxin; Lu, Yifan; Guo, Weiqian; Yu, Bing; Li, Yang; Maitlo, Nizamuddin
2018-04-30
We propose a design of a retinal-projection-based near-eye display for achieving ultra-large field of view, vision correction, and occlusion. Our solution is highlighted by a contact lens combo, a transparent organic light-emitting diode panel, and a twisted nematic liquid crystal panel. Its design rules are set forth in detail, followed by the results and discussion regarding the field of view, angular resolution, modulation transfer function, contrast ratio, distortion, and simulated imaging.
1998-08-07
This aerial view, looking north, shows the Apollo/Saturn V Center, part of the KSC Visitor Complex. Located about 2 miles north of the Vehicle Assembly Building on the Kennedy Parkway, it is near the current Banana Creek VIP Shuttle launch viewing site. The 100,000-square-foot attraction includes a refurbished 363-foot-long Apollo-era Saturn V rocket that had been displayed previously near the VAB. Inside, the center includes artifacts and historical displays, plus two film theaters
Wide-Field-of-View, High-Resolution, Stereoscopic Imager
NASA Technical Reports Server (NTRS)
Prechtl, Eric F.; Sedwick, Raymond J.
2010-01-01
A device combines video feeds from multiple cameras to provide wide-field-of-view, high-resolution, stereoscopic video to the user. The prototype under development consists of two camera assemblies, one for each eye. One of these assemblies incorporates a mounting structure with multiple cameras attached at offset angles. The video signals from the cameras are fed to a central processing platform where each frame is color processed and mapped into a single contiguous wide-field-of-view image. Because the resolution of most display devices is typically smaller than the processed map, a cropped portion of the video feed is output to the display device. The positioning of the cropped window will likely be controlled through the use of a head tracking device, allowing the user to turn his or her head side-to-side or up and down to view different portions of the captured image. There are multiple options for the display of the stereoscopic image. The use of head-mounted displays is one likely implementation; the use of 3D projection technologies is another option under consideration. The technology can be adapted in a multitude of ways. The computing platform is scalable, such that the number, resolution, and sensitivity of the cameras can be leveraged to improve image resolution and field of view. Miniaturization efforts can be pursued to shrink the package down for better mobility. Power savings studies can be performed to enable unattended, remote sensing packages. Image compression and transmission technologies can be incorporated to enable an improved telepresence experience.
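The head-tracked cropping step described above can be illustrated with a short Python/NumPy sketch. The linear angle-to-pixel mapping, field-of-view values, and array sizes are assumptions for illustration, not the prototype's actual processing:

```python
import numpy as np

def crop_for_head_pose(stitched, yaw_deg, pitch_deg, out_w, out_h,
                       horiz_fov_deg=180.0, vert_fov_deg=60.0):
    """Cut a display-sized window out of a stitched wide-FOV frame.

    stitched: HxWx3 array assumed to cover horiz_fov_deg x vert_fov_deg with a
              simple linear angle-to-pixel mapping.
    yaw_deg / pitch_deg: head orientation from the tracker, 0 = straight ahead.
    """
    h, w = stitched.shape[:2]
    cx = (yaw_deg / horiz_fov_deg + 0.5) * w       # window centre in pixels
    cy = (-pitch_deg / vert_fov_deg + 0.5) * h
    x0 = int(np.clip(cx - out_w / 2, 0, w - out_w))
    y0 = int(np.clip(cy - out_h / 2, 0, h - out_h))
    return stitched[y0:y0 + out_h, x0:x0 + out_w]

panorama = np.zeros((1200, 6000, 3), dtype=np.uint8)     # stitched frame (toy data)
view = crop_for_head_pose(panorama, yaw_deg=25.0, pitch_deg=-5.0,
                          out_w=1280, out_h=720)
print(view.shape)   # (720, 1280, 3)
```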
3D brain MR angiography displayed by a multi-autostereoscopic screen
NASA Astrophysics Data System (ADS)
Magalhães, Daniel S. F.; Ribeiro, Fádua H.; Lima, Fabrício O.; Serra, Rolando L.; Moreno, Alfredo B.; Li, Li M.
2012-02-01
Magnetic resonance angiography (MRA) can be used to examine blood vessels in key areas of the body, including the brain. In MRA, a powerful magnetic field, radio waves, and a computer produce the detailed images. Physicians use the procedure on brain images mainly to detect atherosclerotic disease in the carotid artery of the neck, which may limit blood flow to the brain and cause a stroke, and to identify small aneurysms or arteriovenous malformations inside the brain. Multi-autostereoscopic displays provide multiple views of the same scene, rather than just two as in autostereoscopic systems. Each view is visible from a different range of positions in front of the display. This allows the viewer to move left-right in front of the display and see the correct view from any position. The use of 3D imaging in the medical field has proven to be a benefit to doctors when diagnosing patients. For different medical domains a stereoscopic display could be advantageous in terms of a better spatial understanding of anatomical structures, better perception of ambiguous anatomical structures, better performance of tasks that require a high level of dexterity, increased learning performance, and improved communication with patients or between doctors. In this work we describe a multi-autostereoscopic system and how to produce 3D MRA images to be displayed with it. We show results for brain MR angiography images and discuss how 3D visualization can help physicians reach a better diagnosis.
Smith, Richard W.
1979-01-01
An acoustic imaging system for displaying an object viewed by a moving array of transducers as the array is pivoted about a fixed point within a given plane. A plurality of transducers are fixedly positioned and equally spaced within a laterally extending array and operatively directed to transmit and receive acoustic signals along substantially parallel transmission paths. The transducers are sequentially activated along the array to transmit and receive acoustic signals according to a preestablished sequence. Means are provided for generating output voltages for each reception of an acoustic signal, corresponding to the coordinate position of the object viewed as the array is pivoted. Receptions from each of the transducers are presented on the same display at coordinates corresponding to the actual position of the object viewed to form a plane view of the object scanned.
Display area, looking north towards the classified storage rooms, D.M. ...
Display area, looking north towards the classified storage rooms, D.M. Logistics and D.O. Offices in northwest corner. Viewing bridge is at upper left, and alert status display at upper right - March Air Force Base, Strategic Air Command, Combat Operations Center, 5220 Riverside Drive, Moreno Valley, Riverside County, CA
ERIC Educational Resources Information Center
Rozga, Agata; King, Tricia Z.; Vuduc, Richard W.; Robins, Diana L.
2013-01-01
We examined facial electromyography (fEMG) activity to dynamic, audio-visual emotional displays in individuals with autism spectrum disorders (ASD) and typically developing (TD) individuals. Participants viewed clips of happy, angry, and fearful displays that contained both facial expression and affective prosody while surface electrodes measured…
64.1: Display Technologies for Therapeutic Applications of Virtual Reality
Hoffman, Hunter G.; Schowengerdt, Brian T.; Lee, Cameron M.; Magula, Jeff; Seibel, Eric J.
2015-01-01
A paradigm shift in image source technology for VR helmets is needed. Using scanning fiber displays to replace LCD displays creates lightweight, safe, low cost, wide field of view, portable VR goggles ideal for reducing pain during severe burn wound care in hospitals and possibly in austere combat-transport environments. PMID:26146424
Planning in sentence production: Evidence for the phrase as a default planning scope
Martin, Randi C.; Crowther, Jason E.; Knight, Meredith; Tamborello, Franklin P.; Yang, Chin-Lung
2010-01-01
Controversy remains as to the scope of advanced planning in language production. Smith and Wheeldon (1999) found significantly longer onset latencies when subjects described moving picture displays by producing sentences beginning with a complex noun phrase than for matched sentences beginning with a simple noun phrase. While these findings are consistent with a phrasal scope of planning, they might also be explained on the basis of: 1) greater retrieval fluency for the second content word in the simple initial noun phrase sentences and 2) visual grouping factors. In Experiments 1 and 2, retrieval fluency for the second content word was equated for the complex and simple initial noun phrase conditions. Experiments 3 and 4 addressed the visual grouping hypothesis by using stationary displays and by comparing onset latencies for the same display for sentence and list productions. Longer onset latencies for the sentences beginning with a complex noun phrase were obtained in all experiments, supporting the phrasal scope of planning hypothesis. The results indicate that in speech, as in other motor production domains, planning occurs beyond the minimal production unit. PMID:20501338
Augmented reality system for CT-guided interventions: system description and initial phantom trials
NASA Astrophysics Data System (ADS)
Sauer, Frank; Schoepf, Uwe J.; Khamene, Ali; Vogt, Sebastian; Das, Marco; Silverman, Stuart G.
2003-05-01
We are developing an augmented reality (AR) image guidance system in which information derived from medical images is overlaid onto a video view of the patient. The interventionalist wears a head-mounted display (HMD) that presents him with the augmented stereo view. The HMD is custom fitted with two miniature color video cameras that capture the stereo view of the scene. A third video camera, operating in the near IR, is also attached to the HMD and is used for head tracking. The system achieves real-time performance of 30 frames per second. The graphics appear firmly anchored in the scene, without any noticeable swimming, jitter, or time lag. For the application of CT-guided interventions, we extended our original prototype system to include tracking of a biopsy needle to which we attached a set of optical markers. The AR visualization provides very intuitive guidance for planning and placement of the needle and reduces radiation to the patient and radiologist. We used an interventional abdominal phantom with simulated liver lesions to perform an initial set of experiments. The users were consistently able to locate the target lesion with the first needle pass. These results provide encouragement to move the system towards clinical trials.
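The core overlay operation in a video-see-through AR system of this kind is projecting tracked 3D points into the camera image. A minimal Python sketch of the standard pinhole projection is given below; the pose, intrinsics, and point values are placeholders, not the authors' calibration:

```python
import numpy as np

def project_point(p_world, R, t, fx, fy, cx, cy):
    """Project a tracked 3D point into the HMD camera image (pinhole model).

    R, t: rotation and translation taking tracker/world coordinates into the
          camera frame (would come from head tracking and calibration;
          the numbers below are made-up placeholders).
    fx, fy, cx, cy: camera intrinsics in pixels.
    """
    p_cam = R @ np.asarray(p_world, float) + t
    u = fx * p_cam[0] / p_cam[2] + cx
    v = fy * p_cam[1] / p_cam[2] + cy
    return u, v

R = np.eye(3)                       # placeholder pose: camera aligned with tracker
t = np.array([0.0, 0.0, 0.4])       # target region 0.4 m in front of the camera
print(project_point([0.02, -0.01, 0.0], R, t,
                    fx=800.0, fy=800.0, cx=320.0, cy=240.0))   # -> (360.0, 220.0)
```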
User interface for ground-water modeling: Arcview extension
Tsou, Ming‐shu; Whittemore, Donald O.
2001-01-01
Numerical simulation for ground-water modeling often involves handling large input and output data sets. A geographic information system (GIS) provides an integrated platform to manage, analyze, and display disparate data and can greatly facilitate modeling efforts in data compilation, model calibration, and display of model parameters and results. Furthermore, GIS can be used to generate information for decision making through spatial overlay and processing of model results. ArcView is the most widely used Windows-based GIS software that provides a robust user-friendly interface to facilitate data handling and display. An extension is an add-on program to ArcView that provides additional specialized functions. An ArcView interface for the ground-water flow and transport models MODFLOW and MT3D was built as an extension for facilitating modeling. The extension includes preprocessing of spatially distributed (point, line, and polygon) data for model input and postprocessing of model output. An object database is used for linking user dialogs and model input files. The ArcView interface utilizes the capabilities of the 3D Analyst extension. Models can be automatically calibrated through the ArcView interface by externally linking to programs such as PEST. The efficient pre- and postprocessing capabilities and calibration link were demonstrated for ground-water modeling in southwest Kansas.
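As a rough illustration of the kind of preprocessing such an extension automates, the Python sketch below grids scattered conductivity measurements onto a model grid and writes a plain-text array. The nearest-neighbour gridding, file name, and array layout are illustrative assumptions, not the extension's workflow or MODFLOW's exact input format:

```python
import numpy as np
from scipy.spatial import cKDTree

# Scattered field measurements (x, y) and hydraulic conductivity -- made-up values.
points = np.array([[2.0, 3.0], [7.5, 1.0], [4.0, 8.0]])
kvals = np.array([12.0, 30.0, 18.0])          # m/day

# Model grid: 10 columns x 8 rows of unit cells (cell-centre coordinates).
xc, yc = np.meshgrid(np.arange(10) + 0.5, np.arange(8) + 0.5)
centres = np.column_stack([xc.ravel(), yc.ravel()])

# Nearest-neighbour assignment of K to each cell (one simple gridding choice).
_, nearest = cKDTree(points).query(centres)
k_grid = kvals[nearest].reshape(yc.shape)

# Write a whitespace-delimited array; real MODFLOW array readers are more
# particular about formatting, so treat this as illustrative only.
np.savetxt("hk_layer1.txt", k_grid, fmt="%8.2f")
print(k_grid.shape)   # (8, 10)
```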
Immersive Input Display Device (I2D2) for tactical information viewing
NASA Astrophysics Data System (ADS)
Tremper, David E.; Burnett, Kevin P.; Malloy, Andrew R.; Wert, Robert
2006-05-01
Daylight readability of hand-held displays has been an ongoing issue for both commercial and military applications. In an effort to reduce the effects of ambient light on the readability of military displays, the Naval Research Laboratory (NRL) began investigating and developing advanced hand-held displays. Analysis and research of display technologies, with consideration for vulnerability to environmental conditions, resulted in the complete design and fabrication of the hand-held Immersive Input Display Device (I2D2) monocular. The I2D2 combines an Organic Light Emitting Diode (OLED) SVGA+ micro-display developed by eMagin Corporation with an optics configuration inside a cylindrical housing. A rubber pressure-eyecup allows viewing only when the eyecup is depressed, preventing light from either entering or leaving the device. This feature allows the I2D2 to be used during the day without ambient light affecting readability. It simultaneously controls light leakage, effectively eliminating illumination of the user in the dark and thus preserving the user's tactical position. This paper will examine the characteristics and introduce the design of the I2D2.
Peterka, Tom; Kooima, Robert L; Sandin, Daniel J; Johnson, Andrew; Leigh, Jason; DeFanti, Thomas A
2008-01-01
A solid-state dynamic parallax barrier autostereoscopic display mitigates some of the restrictions present in static barrier systems, such as fixed view-distance range, slow response to head movements, and fixed stereo operating mode. By dynamically varying barrier parameters in real time, viewers may move closer to the display and move faster laterally than with a static barrier system, and the display can switch between 3D and 2D modes by disabling the barrier on a per-pixel basis. Moreover, Dynallax can output four independent eye channels when two viewers are present, and both head-tracked viewers receive an independent pair of left-eye and right-eye perspective views based on their position in 3D space. The display device is constructed by using a dual-stacked LCD monitor where a dynamic barrier is rendered on the front display and a modulated virtual environment composed of two or four channels is rendered on the rear display. Dynallax was recently demonstrated in a small-scale head-tracked prototype system. This paper summarizes the concepts presented earlier, extends the discussion of various topics, and presents recent improvements to the system.
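For context on the geometry such a barrier system adjusts at run time, the sketch below evaluates the textbook two-view parallax-barrier relations (with the viewing distance measured from the barrier plane); it is not the Dynallax control algorithm, and the example numbers are illustrative rather than the paper's hardware:

```python
def barrier_parameters(pixel_pitch_mm, eye_sep_mm, view_dist_mm):
    """Textbook two-view parallax-barrier geometry:

    gap   g = p * D / e          distance between barrier and pixel plane
    pitch b = 2p * D / (D + g)   period of the barrier slits

    where p is the per-view pixel (or subpixel) pitch, e the eye separation,
    and D the design viewing distance from the barrier.
    """
    g = pixel_pitch_mm * view_dist_mm / eye_sep_mm
    b = 2.0 * pixel_pitch_mm * view_dist_mm / (view_dist_mm + g)
    return g, b

g, b = barrier_parameters(pixel_pitch_mm=0.294, eye_sep_mm=65.0, view_dist_mm=600.0)
print(f"barrier gap ~ {g:.2f} mm, barrier pitch ~ {b:.4f} mm")
```

A dynamic barrier like Dynallax re-evaluates these parameters per frame from the tracked head position instead of fixing them at manufacture.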
The relationship between ambient illumination and psychological factors in viewing of display images
NASA Astrophysics Data System (ADS)
Iwanami, Takuya; Kikuchi, Ayano; Kaneko, Takashi; Hirai, Keita; Yano, Natsumi; Nakaguchi, Toshiya; Tsumura, Norimichi; Yoshida, Yasuhiro; Miyake, Yoichi
2009-01-01
In this paper, we clarify the relationship between ambient illumination and psychological factors in the viewing of display images. Psychological factors were obtained by factor analysis of the results of the semantic differential (SD) method. In the psychological experiments, subjects evaluated their impressions of displayed images under changing ambient illumination conditions. The illumination conditions were controlled by a fluorescent ceiling light and a color LED illumination located behind the display. We experimented under two kinds of conditions: one varied the brightness of the ambient illumination, and the other varied the color of the background illumination. In the results of the experiment, two factors, "realistic sensation, dynamism" and "comfortable," were extracted under different brightness levels of the ambient illumination of the display surroundings. It was shown that "comfortable" improved with the brightness of the display surroundings. On the other hand, when the illumination color of the surroundings was changed, three factors, "comfortable," "realistic sensation, dynamism," and "activity," were extracted. It was also shown that the values of "comfortable" and "realistic sensation, dynamism" increased when the display surroundings were illuminated by the average color of the image contents.
Multiple Views of Space: Continuous Visual Flow Enhances Small-Scale Spatial Learning
ERIC Educational Resources Information Center
Holmes, Corinne A.; Marchette, Steven A.; Newcombe, Nora S.
2017-01-01
In the real word, we perceive our environment as a series of static and dynamic views, with viewpoint transitions providing a natural link from one static view to the next. The current research examined if experiencing such transitions is fundamental to learning the spatial layout of small-scale displays. In Experiment 1, participants viewed a…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-18
... internal distributors count and report each server and display device that processes TotalView-ITCH data as a professional TotalView and OpenView user. Some firms report upwards of 500 devices, while other.... Nasdaq has offered similar enterprise licenses for professional and non-professional usage of TotalView...
NASA Astrophysics Data System (ADS)
Fan, Hang; Li, Kunyang; Zhou, Yangui; Liang, Haowen; Wang, Jiahui; Zhou, Jianying
2016-09-01
The recent upsurge in virtual and augmented reality (VR and AR) has re-ignited interest in immersive display technology. VR/AR technology based on stereoscopic display is believed to be in its early stage, as glasses-free (autostereoscopic) display will ultimately be adopted for viewing convenience, visual comfort, and multi-viewer use. On the other hand, autostereoscopic displays have not yet received a positive market response in recent years, nor have stereoscopic displays using shutter or polarized glasses. We present an analysis of real-world applications, user demands, and the drawbacks of existing barrier- and lenticular-lens-based LCD autostereoscopy. We emphasize emerging autostereoscopic displays, notably directional-backlight LCD technology using a hybrid spatial- and temporal-control scheme. We report the numerical simulation of a display system using the Monte-Carlo ray-tracing method with the human retina as the real image receiver. The system performance is optimized using a newly developed figure of merit for system design. The reduced crosstalk of the autostereoscopic system and the enhanced display quality, including the high resolution received by the retina and the display homogeneity free of Moiré and defect patterns, are highlighted. Recent research progress is introduced, including a novel scheme for diffraction-free backlight illumination, an expanded viewing zone for autostereoscopic display, and a novel Fresnel lens array to achieve a near-perfect display in 2D/3D mode. An experimental demonstration is presented of an autostereoscopic display with the highest resolution, low crosstalk, and freedom from Moiré and defect patterns.
Design of virtual display and testing system for moving mass electromechanical actuator
NASA Astrophysics Data System (ADS)
Gao, Zhigang; Geng, Keda; Zhou, Jun; Li, Peng
2015-12-01
Aiming at the problems of control, measurement, and virtual display of the movement of a moving mass electromechanical actuator (MMEA), a virtual testing system for the MMEA was developed based on a PC-DAQ architecture and the LabVIEW software platform; it can accomplish comprehensive test tasks such as drive control of the MMEA, measurement of kinematic parameters, measurement of centroid position, and virtual display of movement. The system resolves the alignment of acquisition times between multiple measurement channels in different DAQ cards. On this basis, the research focused on dynamic 3D virtual display in LabVIEW, and the virtual display of the MMEA was realized both by calling a DLL and by using 3D graph drawing controls. Considering the collaboration with the virtual testing system, including the hardware drivers and the data-acquisition measurement software, the 3D graph drawing controls method was selected, which enables synchronized measurement, control, and display. The system can measure the dynamic centroid position and kinematic position of the movable mass block while controlling the MMEA, and the 3D virtual display interface has a realistic appearance and smooth motion, solving the problem of display and playback for the MMEA enclosed in its shell.
Re-engineering the stereoscope for the 21st Century
NASA Astrophysics Data System (ADS)
Kollin, Joel S.; Hollander, Ari J.
2007-02-01
While discussing the current state of stereo head-mounted and 3D projection displays, the authors came to the realization that flat-panel LCD displays offer higher resolution than projection for stereo display at a low (and continually dropping) cost. More specifically, where head-mounted displays of moderate resolution and field-of-view cost tens of thousands of dollars, we can achieve an angular resolution approaching that of the human eye with a field-of-view (FOV) greater than 90° for less than $1500. For many immersive applications head tracking is unnecessary and sometimes even undesirable, and a low cost/high quality wide FOV display may significantly increase the application space for 3D display. After outlining the problem and potential of this solution we describe the initial construction of a simple Wheatstone stereoscope using 24" LCD displays and then show engineering improvements that increase the FOV and usability of the system. The applicability of a high-immersion, high-resolution display for art, entertainment, and simulation is presented along with a content production system that utilizes the capabilities of the system. We then discuss the potential use of the system for VR pain control therapy, treatment of post-traumatic stress disorders and other serious games applications.
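To see how a mirror stereoscope built from desktop panels reaches a 90-degree-class field of view, the short Python calculation below works through the geometry; the panel width and the effective eye-to-panel distance are assumptions for illustration, not the authors' measured optical path:

```python
import math

# Illustrative geometry for a Wheatstone-style stereoscope built from 24-inch,
# 1920x1200 LCD panels.
panel_width_mm = 517.0        # active width of a 24-inch 16:10 panel (approx.)
horizontal_pixels = 1920
eye_to_panel_mm = 250.0       # assumed effective viewing distance via the mirrors

fov_deg = 2.0 * math.degrees(math.atan((panel_width_mm / 2.0) / eye_to_panel_mm))
avg_ppd = horizontal_pixels / fov_deg

print(f"horizontal FOV ~ {fov_deg:.1f} deg, average ~ {avg_ppd:.1f} pixels/deg")
# ~ 92 deg and ~ 21 pixels/deg for these assumed numbers
```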
Buttussi, Fabio; Chittaro, Luca
2018-02-01
The increasing availability of head-mounted displays (HMDs) for home use motivates the study of the possible effects that adopting this new hardware might have on users. Moreover, while the impact of display type has been studied for different kinds of tasks, it has been scarcely explored in procedural training. Our study considered three different types of displays used by participants for training in aviation safety procedures with a serious game. The three displays were respectively representative of: (i) desktop VR (a standard desktop monitor), (ii) many setups for immersive VR used in the literature (an HMD with narrow field of view and a 3-DOF tracker), and (iii) new setups for immersive home VR (an HMD with wide field of view and 6-DOF tracker). We assessed effects on knowledge gain, and different self-reported measures (self-efficacy, engagement, presence). Unlike previous studies of display type that measured effects only immediately after the VR experience, we considered also a longer time span (2 weeks). Results indicated that the display type played a significant role in engagement and presence. The training benefits (increased knowledge and self-efficacy) were instead obtained, and maintained at two weeks, regardless of the display used. The paper discusses the implications of these results.
Features and limitations of mobile tablet devices for viewing radiological images.
Grunert, J H
2015-03-01
Mobile radiological image display systems are becoming increasingly common, necessitating a comparison of the features of these systems, specifically the operating system employed, connection to stationary PACS, data security, and the range of image display and image analysis functions. In the fall of 2013, a total of 17 PACS suppliers were surveyed regarding the technical features of 18 mobile radiological image display systems using a standardized questionnaire. The study also examined to what extent the technical specifications of the mobile image display systems satisfy the provisions of the German Medical Devices Act as well as the provisions of the German X-ray ordinance (RöV). There are clear differences in terms of how the mobile systems connect to the stationary PACS. Web-based solutions allow the mobile image display systems to function independently of their operating systems. The examined systems differed very little in terms of image display and image analysis functions. Mobile image display systems complement stationary PACS and can be used to view images. The impacts of the new quality assurance guidelines (QS-RL) as well as the upcoming new standard DIN 6868-157 on the acceptance testing of mobile image display units for the purpose of image evaluation are discussed. © Georg Thieme Verlag KG Stuttgart · New York.
Spatial Linkage and Urban Expansion: AN Urban Agglomeration View
NASA Astrophysics Data System (ADS)
Jiao, L. M.; Tang, X.; Liu, X. P.
2017-09-01
Urban expansion displays different characteristics in each period. From the perspective of the urban agglomeration, studying the spatial and temporal characteristics of urban expansion plays an important role in understanding the complex relationship between urban expansion and the network structure of the urban agglomeration. We analyze urban expansion in the Yangtze River Delta Urban Agglomeration (YRD) through accessibility to and spatial interaction intensity with core cities as well as accessibility of the road network. Results show that: (1) The correlation between urban expansion intensity and spatial indicators such as location and space syntax variables is remarkable and positive, while it decreases after rapid expansion. (2) Urban expansion velocity displays a positive correlation with the spatial indicators mentioned above in the first (1980-1990) and second (1990-2000) periods. However, it exhibits a negative relationship in the third period (2000-2010), i.e., cities located on the periphery of the urban agglomeration developed more quickly. Consequently, a hypothesis of convergence of urban expansion in the rapid expansion stage is put forward. (3) Results for Zipf's law and Gibrat's law show that urban expansion in the YRD displays a convergent trend in the rapid expansion stage, with small and medium-sized cities growing faster. This study shows that spatial linkage plays an important but evolving role in urban expansion within the urban agglomeration. In addition, it serves as a reference for the planning of the Yangtze River Delta Urban Agglomeration and the regulation of urban expansion of other urban agglomerations.
NASA Technical Reports Server (NTRS)
Bolton, Matthew L.; Bass, Ellen J.; Comstock, James R., Jr.
2006-01-01
Synthetic Vision Systems (SVS) depict computer-generated views of terrain surrounding an aircraft. In the assessment of textures and field of view (FOV) for SVS, no studies have directly measured the 3 levels of spatial awareness: identification of terrain, its relative spatial location, and its relative temporal location. This work introduced spatial awareness measures and used them to evaluate texture and FOV in SVS displays. Eighteen pilots made 4 judgments (relative angle, distance, height, and abeam time) regarding the location of terrain points displayed in 112 5-second, non-interactive simulations of an SVS head-down display. Texture produced significant main effects and trends for the magnitude of error in the relative distance, angle, and abeam time judgments. FOV was significant for the directional magnitude of error in the relative distance, angle, and height judgments. Pilots also provided subjective terrain awareness ratings that were compared with the judgment-based measures. The study found that elevation fishnet, photo fishnet, and photo elevation fishnet textures best supported spatial awareness for both the judgments and the subjective awareness measures.
Assessing the impact of PACS on patient care in a medical intensive care unit
NASA Astrophysics Data System (ADS)
Shile, Peter E.; Kundel, Harold L.; Seshadri, Sridhar B.; Carey, Bruce; Brikman, Inna; Kishore, Sheel; Feingold, Eric R.; Lanken, Paul N.
1993-09-01
In this paper we present data from pilot studies to estimate the impact of an intensive care unit display station on patient care. The data were collected during two separate one-month periods in 1992. We compared these two periods in terms of the relative speeds with which images were first viewed by MICU physicians. First, we found that images for routine chest radiographs (CXRs) are viewed by a greater number of physicians, and slightly sooner, with the PACS display station operating in the MICU than when it is not. Thus, for routine exams, PACS provides the potential for shortening the time intervals between exam completion and image-based clinical action. A second finding is that the use of the display station for viewing non-routine CXRs is strongly influenced by the speed with which films are digitized. Hence, if film digitization is not rapid, the presence of a MICU display station is unlikely to contribute to a shortening of the time intervals between exam completion and image-based clinical action. This finding supports the use of computed radiography for CXRs in an intensive care unit.
The zone of comfort: Predicting visual discomfort with stereo displays
Shibata, Takashi; Kim, Joohwan; Hoffman, David M.; Banks, Martin S.
2012-01-01
Recent increased usage of stereo displays has been accompanied by public concern about potential adverse effects associated with prolonged viewing of stereo imagery. There are numerous potential sources of adverse effects, but we focused on how vergence–accommodation conflicts in stereo displays affect visual discomfort and fatigue. In one experiment, we examined the effect of viewing distance on discomfort and fatigue. We found that conflicts of a given dioptric value were slightly less comfortable at far than at near distance. In a second experiment, we examined the effect of the sign of the vergence–accommodation conflict on discomfort and fatigue. We found that negative conflicts (stereo content behind the screen) are less comfortable at far distances and that positive conflicts (content in front of screen) are less comfortable at near distances. In a third experiment, we measured phoria and the zone of clear single binocular vision, which are clinical measurements commonly associated with correcting refractive error. Those measurements predicted susceptibility to discomfort in the first two experiments. We discuss the relevance of these findings for a wide variety of situations including the viewing of mobile devices, desktop displays, television, and cinema. PMID:21778252
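The dioptric conflict measure discussed above is simply the difference between the reciprocals of the screen distance and the simulated distance. A minimal Python sketch, with the sign convention and the example distances chosen here for illustration:

```python
def va_conflict_diopters(screen_distance_m, simulated_distance_m):
    """Vergence-accommodation conflict for a stereo display, in diopters.

    Accommodation is driven by the physical screen distance, vergence by the
    simulated (disparity-specified) distance; the conflict is the difference
    of their reciprocals. Sign convention here: positive means the content is
    presented in front of the screen.
    """
    return 1.0 / simulated_distance_m - 1.0 / screen_distance_m

# Same 0.5 m screen; content simulated at 0.33 m (in front of the screen)
# and at 1.5 m (behind it):
print(round(va_conflict_diopters(0.5, 0.33), 2))   # ~ +1.03 D (crossed disparity)
print(round(va_conflict_diopters(0.5, 1.50), 2))   # ~ -1.33 D (uncrossed disparity)
```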
NASA Technical Reports Server (NTRS)
Parrish, Russell V.; Williams, Steven P.
1993-01-01
To provide stereopsis, binocular helmet-mounted display (HMD) systems must trade some of the total field of view available from their two monocular fields to obtain a partial overlap region. The visual field then provides a mixture of cues, with monocular regions on both peripheries and a binoptic (the same image in both eyes) region or, if lateral disparity is introduced to produce two images, a stereoscopic region in the overlapped center. This paper reports on in-simulator assessment of the trade-offs arising from the mixture of color cueing and monocular, binoptic, and stereoscopic cueing information in peripheral monitoring displays as utilized in HMD systems. The accompanying effect of stereoscopic cueing in the tracking information in the central region of the display is also assessed. The pilot's task for the study was to fly at a prescribed height above an undulating pathway in the sky while monitoring a dynamic bar chart displayed in the periphery of their field of view. Control of the simulated rotorcraft was limited to the longitudinal and vertical degrees of freedom to ensure the lateral separation of the viewing conditions of the concurrent tasks.
Extended experience with digital radiography and viewing in an ICU environment
NASA Astrophysics Data System (ADS)
Humphrey, Louis M.; Fitzpatrick, Kevin; Paine, Susan; Ravin, Carl E.
1992-07-01
After several years of continual operation, the utility of digital viewing stations was investigated by distributing questionnaires to past and present users. The results of the questionnaire indicated that the respondents preferred using the workstations over handling film. For evaluation of line placements, chest tubes and pleural effusions, softcopy display was preferred over hardcopy. However, for analysis of air space disease and pneumothorax, images displayed on the workstation were not perceived to be as useful as standard hardcopy.
Prevention: lessons from video display installations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Margach, C.B.
1983-04-01
Workers interacting with video display units for periods in excess of two hours per day report significantly increased visual discomfort, fatigue and inefficiencies, as compared with workers performing similar tasks, but without the video viewing component. Difficulties in focusing and the appearance of myopia are among the problems being described. With a view to preventing or minimizing such problems, principles and procedures are presented providing for (a) modification of physical features of the video workstation and (b) improvement in the visual performances of the individual video unit operator.
13. Photographic copy of site plan displaying Test Stand 'C' ...
13. Photographic copy of site plan displaying Test Stand 'C' (4217/E-18), Test Stand 'D' (4223/E-24), and Control and Recording Center (4221/E-22) with ancillary structures, and connecting roads and services. California Institute of Technology, Jet Propulsion Laboratory, Facilities Engineering and Construction Office 'Repairs to Test Stand 'C,' Edwards Test Station, Legend & Site Plan M-1,' drawing no. ESP/115, August 14, 1987. - Jet Propulsion Laboratory Edwards Facility, Test Stand C, Edwards Air Force Base, Boron, Kern County, CA
The Effects of Shared Information on Pilot-Controller Situation Awareness And Re-Route Negotiation
NASA Technical Reports Server (NTRS)
Farley, Todd C.; Hansman, R. John; Endsley, Mica R.; Amonlirdviman, Keith
1999-01-01
The effect of shared information is assessed in terms of pilot-controller negotiating behavior and shared situation awareness. Pilot goals and situation awareness requirements are developed and compared against those of air traffic controllers to identify areas of common and competing interest. An exploratory, part-task simulator experiment is described which evaluates the extent to which shared information may lead pilots and controllers to cooperate or compete when negotiating route amendments. Results are presented which indicate that shared information enhances situation awareness and can engender more collaborative interaction between pilots and air traffic controllers. Furthermore, the value of providing controllers with a good-quality weather overlay on their plan view displays is demonstrated. Observed improvements in situation awareness and separation assurance are discussed.
Cost of ownership for military cargo aircraft using a common versus disparate display configuration
NASA Astrophysics Data System (ADS)
Desjardins, Daniel D.; Most, Marvin C.
2010-04-01
A 2009 paper considered possibilities for applying a common display suite to various front-line bubble canopy fighters, whereas further research suggests the cost savings, post Milestone C production/deployment, might not be advantageous. The situation for military cargo and tanker aircraft may offer a different paradigm. The primary objective of Defense acquisition is to acquire quality products that satisfy user needs with measurable improvements to mission capability and operational support, in a timely manner, and at a fair and reasonable price. DODD 5000.01 specifies that all participants in the acquisition system shall recognize the reality of fiscal constraints, viewing cost as an independent variable. DoD Components must therefore plan programs based on realistic projections of the dollars and manpower likely to be available in future years and also identify the total costs of ownership, as well as the major drivers of total ownership costs. In theory, therefore, this has already been done for existing cargo/tanker aircraft programs accommodating independent, disparate display suites. This paper goes beyond that stage by exploring total costs of ownership for a hypothetical common approach to cargo/tanker display avionics, bounded by looking at a limited number of such aircraft, e.g., C-5, C-17, C-130H (variants), and C-130J. It is the purpose of this paper to reveal whether there are total cost of ownership advantages for a common approach over and above the existing disparate approach. Aside from cost issues, other considerations, i.e., availability and supportability, may also be analyzed.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-16
Notice concerning the Safety Zones; Annual Firework Displays Within the Captain of the Port, Puget Sound Area of Responsibility, published in the Federal Register on October 4, 2011.
ERIC Educational Resources Information Center
Hegarty, Mary; Canham, Matt S.; Fabrikant, Sara I.
2010-01-01
Three experiments examined how bottom-up and top-down processes interact when people view and make inferences from complex visual displays (weather maps). Bottom-up effects of display design were investigated by manipulating the relative visual salience of task-relevant and task-irrelevant information across different maps. Top-down effects of…
Erdenebat, Munkh-Uchral; Kwon, Ki-Chul; Yoo, Kwan-Hee; Baasantseren, Ganbat; Park, Jae-Hyeung; Kim, Eun-Soo; Kim, Nam
2014-04-15
We propose a 360-degree integral-floating display with an enhanced vertical viewing angle. The system projects two-dimensional elemental image arrays via a high-speed digital micromirror device projector and reconstructs them into 3D perspectives with a lens array. Double floating lenses relay the initial 3D perspectives to the center of a vertically curved convex mirror. The anamorphic optic system tailors the initial 3D perspectives horizontally and disperses light rays more widely in the vertical direction. By the proposed method, the entire 3D image provides both monocular and binocular depth cues, a full-parallax demonstration with high angular ray density, and an enhanced vertical viewing angle.
Depth-tunable three-dimensional display with interactive light field control
NASA Astrophysics Data System (ADS)
Xie, Songlin; Wang, Peng; Sang, Xinzhu; Li, Chenyu; Dou, Wenhua; Xiao, Liquan
2016-07-01
A software-defined depth-tunable three-dimensional (3D) display with interactive 3D depth control is presented. With the proposed post-processing system, the disparity of the multi-view media can be freely adjusted. Benefiting from the wealth of information inherently contained in dense multi-view images captured with a parallel-arrangement camera array, the 3D light field is built and its structure is controlled to adjust the disparity without additionally acquired depth information, since the light field structure itself contains depth information. A statistical analysis based on least squares is carried out to extract the depth information inherent in the light field structure, and the accurate depth information can be used to re-parameterize light fields for the autostereoscopic display, so that a smooth motion parallax can be guaranteed. Experimental results show that the system is convenient and effective for adjusting the 3D scene performance on the 3D display.
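A least-squares fit of image position against camera index is one way to read disparity, and hence depth, out of a parallel-camera light field; the sketch below is a hypothetical illustration of that idea, not the authors' implementation.

```python
import numpy as np

def disparity_from_views(x_positions):
    """Estimate a feature's disparity from its column position in each view.

    For a parallel (shift-only) camera array, a feature's image position is
    roughly linear in the camera index: x_k ~= x_0 + d*k.  A least-squares
    line fit gives the disparity d, which encodes depth.
    """
    k = np.arange(len(x_positions))
    d, x0 = np.polyfit(k, x_positions, 1)   # slope = disparity per view step
    return d

def rescale_disparity(x_positions, gain):
    """Re-parameterize the light field: scale disparities about the centre view."""
    x = np.asarray(x_positions, float)
    centre = x[len(x) // 2]
    return centre + gain * (x - centre)

views = [100.0, 102.1, 103.9, 106.0, 108.1]   # feature column in 5 views
print(disparity_from_views(views))            # ~2 px per view
print(rescale_disparity(views, 0.5))          # flattened depth
```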
Description of a landing site indicator (LASI) for light aircraft operation
NASA Technical Reports Server (NTRS)
Fuller, H. V.; Outlaw, B. K. E.
1976-01-01
An experimental cockpit-mounted head-up type display system was developed and evaluated by LaRC pilots during the landing phase of light aircraft operations. The Landing Site Indicator (LASI) system display consists of angle of attack, angle of sideslip, and indicated airspeed images superimposed on the pilot's view through the windshield. The information is made visible to the pilot by means of a partially reflective viewing screen which is suspended directly in front of the pilot's eyes. Synchro transmitters are operated by vanes, located at the left wing tip, which sense angle of attack and sideslip angle. Information is presented near the center of the display in the form of a moving index on a fixed grid. The airspeed is sensed by a pitot-static pressure transducer and is presented in numerical form at the top center of the display.
Portrait view of ESA Spacelab Specialists
NASA Technical Reports Server (NTRS)
1978-01-01
Portrait view of European Space Agency (ESA) Spacelab Specialist Ulf Merbold in civilian clothes standing in front of a display case. The photo was taken at the Marshall Space Flight Center (MSFC), Huntsville, Alabama.
Web-based CERES Clouds QC Property Viewing Tool
NASA Astrophysics Data System (ADS)
Smith, R. A.
2015-12-01
Churngwei Chu (1), Rita Smith (1), Sunny Sun-Mack (1), Yan Chen (1), Elizabeth Heckert (1), Patrick Minnis (2); (1) Science Systems and Applications, Inc., Hampton, Virginia; (2) NASA Langley Research Center, Hampton, Virginia. This presentation will display the capabilities of a web-based CERES cloud property viewer. Aqua/Terra/NPP data will be chosen for examples. It will demonstrate viewing of cloud properties in gridded global maps, histograms, time series displays, latitudinal zonal images, binned data charts, data frequency graphs, and ISCCP plots. Images can be manipulated by the user to narrow the boundaries of the map, adjust color bars and value ranges, compare datasets, view data values, and more. Other atmospheric studies groups will be encouraged to put their data into the underlying NetCDF data format and view their data with the tool.
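Since the viewer works from an underlying NetCDF format, a minimal sketch of writing a gridded field to NetCDF with the netCDF4 library follows; the variable names and attributes are placeholders, not the tool's required schema.

```python
from netCDF4 import Dataset
import numpy as np

# Write a gridded cloud property to NetCDF so a viewer of this kind could
# ingest it.  Names and layout are hypothetical, not the CERES tool's schema.
lats = np.arange(-89.5, 90.0, 1.0)
lons = np.arange(0.5, 360.0, 1.0)
cloud_fraction = np.random.rand(lats.size, lons.size)

with Dataset("cloud_fraction.nc", "w") as nc:
    nc.createDimension("lat", lats.size)
    nc.createDimension("lon", lons.size)
    nc.createVariable("lat", "f4", ("lat",))[:] = lats
    nc.createVariable("lon", "f4", ("lon",))[:] = lons
    var = nc.createVariable("cloud_fraction", "f4", ("lat", "lon"))
    var.units = "1"            # dimensionless fraction
    var[:] = cloud_fraction
```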
Change Blindness Phenomena for Virtual Reality Display Systems.
Steinicke, Frank; Bruder, Gerd; Hinrichs, Klaus; Willemsen, Pete
2011-09-01
In visual perception, change blindness describes the phenomenon that persons viewing a visual scene may apparently fail to detect significant changes in that scene. These phenomena have been observed in both computer-generated imagery and real-world scenes. Several studies have demonstrated that change blindness effects occur primarily during visual disruptions such as blinks or saccadic eye movements. However, until now the influence of stereoscopic vision on change blindness has not been studied thoroughly in the context of visual perception research. In this paper, we introduce change blindness techniques for stereoscopic virtual reality (VR) systems, providing the ability to substantially modify a virtual scene in a manner that is difficult for observers to perceive. We evaluate techniques for semi-immersive VR systems, i.e., a passive and an active stereoscopic projection system, as well as an immersive VR system, i.e., a head-mounted display, and compare the results to those of monoscopic viewing conditions. For stereoscopic viewing conditions, we found that change blindness phenomena occur with the same magnitude as in monoscopic viewing conditions. Furthermore, we have evaluated the potential of the presented techniques for allowing abrupt, and yet significant, changes of a stereoscopically displayed virtual reality environment.
NASA Astrophysics Data System (ADS)
Chun, Won-Suk; Napoli, Joshua; Cossairt, Oliver S.; Dorval, Rick K.; Hall, Deirdre M.; Purtell, Thomas J., II; Schooler, James F.; Banker, Yigal; Favalora, Gregg E.
2005-03-01
We present a software and hardware foundation to enable the rapid adoption of 3-D displays. Different 3-D displays - such as multiplanar, multiview, and electroholographic displays - naturally require different rendering methods. The adoption of these displays in the marketplace will be accelerated by a common software framework. The authors designed the SpatialGL API, a new rendering framework that unifies these display methods under one interface. SpatialGL enables complementary visualization assets to coexist through a uniform infrastructure. Also, SpatialGL supports legacy interfaces such as the OpenGL API. The authors' first implementation of SpatialGL uses multiview and multislice rendering algorithms to exploit the performance of modern graphics processing units (GPUs) to enable real-time visualization of 3-D graphics from medical imaging, oil & gas exploration, and homeland security. At the time of writing, SpatialGL runs on COTS workstations (both Windows and Linux) and on Actuality's high-performance embedded computational engine that couples an NVIDIA GeForce 6800 Ultra GPU, an AMD Athlon 64 processor, and a proprietary, high-speed, programmable volumetric frame buffer that interfaces to a 1024 x 768 x 3 digital projector. Progress is illustrated using an off-the-shelf multiview display, Actuality's multiplanar Perspecta Spatial 3D System, and an experimental multiview display. The experimental display is a quasi-holographic view-sequential system that generates aerial imagery measuring 30 mm x 25 mm x 25 mm, providing 198 horizontal views.
Reduced-thickness backlighter for autostereoscopic display and display using the backlighter
NASA Technical Reports Server (NTRS)
Eichenlaub, Jesse B (Inventor); Gruhlke, Russell W (Inventor)
1999-01-01
A reduced-thickness backlighter for an autostereoscopic display is disclosed having a lightguide and at least one light source parallel to an edge of the lightguide so as to be substantially coplanar with the lightguide. The lightguide is provided with a first surface which has a plurality of reflective linear regions, such as elongated grooves or glossy lines, parallel to the illuminated edge of the lightguide. Preferably the lightguide further has a second surface which has a plurality of lenticular lenses for reimaging the reflected light from the linear regions into a series of thin vertical lines outside the guide. Because of the reduced thickness of the backlighter system, autostereoscopic viewing is enabled in applications requiring thin backlighter systems. In addition to taking up less space, the reduced-thickness backlighter uses fewer lamps and less power. For accommodating 2-D applications, a 2-D diffuser plate or a 2-D lightguide parallel to the 3-D backlighter is disclosed for switching back and forth between 3-D viewing and 2-D viewing.
Build YOUR All-Sky View with Aladin
NASA Astrophysics Data System (ADS)
Oberto, A.; Fernique, P.; Boch, T.; Bonnarel, F.
2011-07-01
From the need to extend the display outside the boundaries of a single image, the Aladin team recently developed a new feature to visualize wide areas or even all of the sky. This all-sky view is particularly useful for visualization of very large objects and, with coverage of the whole sky, maps from the Planck satellite. To improve on this capability, some catalogs and maps have been built from many surveys (e.g., DSS, IRIS, GLIMPSE, SDSS, 2MASS) in mixed resolutions, allowing progressive display. The maps are constructed by mosaicing individual images. Now, we provide a new tool to build an all-sky view with your own images. From the images you have selected, it will compose a mosaic with several resolutions (HEALPix tessellation), and organize them to allow their progressive display in Aladin. For convenience, you can export it to a HEALPix map, or share it with the community through Aladin from your web site or eventually from the CDS image collection.
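The progressive all-sky maps rest on the HEALPix tessellation; the sketch below shows only the underlying binning idea with healpy (synthetic positions, not Aladin's HiPS-building tool).

```python
import numpy as np
import healpy as hp

# Bin measurements at (RA, Dec) positions into HEALPix pixels at a chosen
# resolution.  This illustrates the tessellation behind the all-sky view,
# not Aladin's actual map builder.
nside = 64                                   # HEALPix resolution parameter
npix = hp.nside2npix(nside)
ra = np.random.uniform(0, 360, 10000)        # degrees, synthetic pointings
dec = np.random.uniform(-90, 90, 10000)
values = np.random.rand(ra.size)

pix = hp.ang2pix(nside, ra, dec, lonlat=True)
sky_sum = np.zeros(npix)
counts = np.zeros(npix)
np.add.at(sky_sum, pix, values)
np.add.at(counts, pix, 1)
mean_map = np.where(counts > 0, sky_sum / np.maximum(counts, 1), hp.UNSEEN)

hp.mollview(mean_map, title="All-sky mean value")   # quick-look display
```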
Using a Low Cost Flight Simulation Environment for Interdisciplinary Education
NASA Technical Reports Server (NTRS)
Khan, M. Javed; Rossi, Marcia; ALi, Syed F.
2004-01-01
A multi-disciplinary and inter-disciplinary education is increasingly being emphasized for engineering undergraduates. However, often the focus is on interaction between engineering disciplines. This paper discusses the experience at Tuskegee University in providing interdisciplinary research experiences for undergraduate students in both Aerospace Engineering and Psychology through the utilization of a low-cost flight simulation environment. The environment, which is PC-based, runs low-cost off-the-shelf software and is configured for multiple out-of-the-window views and a synthetic head-down display with joystick, rudder, and throttle controls. While the environment is being utilized to investigate and evaluate various strategies for training novice pilots, students were involved to provide them with experience in conducting such interdisciplinary research. On the global inter-disciplinary level these experiences included developing experimental designs and research protocols, consideration of human participant ethical issues, and planning and executing the research studies. During the planning phase students were apprised of the limitations of the software in its basic form and the enhancements desired to investigate human factors issues. A number of enhancements to the flight environment were then undertaken, from creating Excel macros for determining the performance of the 'pilots', to interacting with the software to provide various audio/video cues based on the experimental protocol. These enhancements involved understanding the flight model and performance, stability, and control issues. Throughout this process, discussions of data analysis included a focus from a human factors perspective as well as an engineering point of view.
Projection displays and MEMS: timely convergence for a bright future
NASA Astrophysics Data System (ADS)
Hornbeck, Larry J.
1995-09-01
Projection displays and microelectromechanical systems (MEMS) have evolved independently, occasionally crossing paths as early as the 1950s. But the commercially viable use of MEMS for projection displays has been elusive until the recent invention of Texas Instruments' Digital Light Processing (DLP) technology. DLP technology is based on the Digital Micromirror Device (DMD) microchip, a MEMS technology that is a semiconductor digital light switch that precisely controls a light source for projection display and hardcopy applications. DLP technology provides a unique business opportunity because of the timely convergence of market needs and technology advances. The world is rapidly moving to an all-digital communications and entertainment infrastructure. In the near future, most of the technologies necessary for this infrastructure will be available at the right performance and price levels. This will make commercially viable an all-digital chain (capture, compression, transmission, reception, decompression, hearing, and viewing). Unfortunately, the digital images received today must be translated into analog signals for viewing on today's televisions. Digital video is the final link in the all-digital infrastructure and DLP technology provides that link. DLP technology is an enabler for digital, high-resolution, color projection displays that have high contrast, are bright and seamless, and have the accuracy of color and grayscale that can be achieved only by digital control. This paper contains an introduction to DMD and DLP technology, including the historical context from which to view their development. The architecture, projection operation, and fabrication are presented. Finally, the paper includes an update about current DMD business opportunities in projection displays and hardcopy.
Experiments using electronic display information in the NASA terminal configured vehicle
NASA Technical Reports Server (NTRS)
Morello, S. A.
1980-01-01
The results of research experiments concerning pilot display information requirements and visualization techniques for electronic display systems are presented. Topics deal with display related piloting tasks in flight controls for approach-to-landing, flight management for the descent from cruise, and flight operational procedures considering the display of surrounding air traffic. Planned research of advanced integrated display formats for primary flight control throughout the various phases of flight is also discussed.
Ageist attitudes block young adults' ability for compassion toward incapacitated older adults.
Bergman, Yoav S; Bodner, Ehud
2015-09-01
Upon encountering older adults, individuals display varying degrees of prosocial attitudes and behaviors. While some display compassion and empathy, others draw away and wish to maintain their distance from them. The current study examined if and how ageist attitudes influence the association between the sight of physical incapacity in older age and compassionate reactions toward them. We predicted that ageist attitudes would interfere with the ability to respond to them with compassion. Young adults (N = 149, ages 19-29) were randomly distributed into two experimental conditions, each viewing a short video portraying different aspects of older adult physicality; one group viewed older adults displaying incapacitated behavior, and the other viewed fit behavior. Participants subsequently filled out scales assessing aging anxieties, and ageist and compassionate attitudes. Ageism was associated with reduced compassion toward the figures. Moreover, viewing incapacitated older adults led to increased concern toward them and perceived efficacy in helping them. However, significant interactions proved that higher scores of ageism in response to the videos led to increased need for distance and reduced efficacy toward incapacitated adults, an effect not observed among subjects with lower ageism scores. Ageism seems to be a factor which disengages individuals from older adults displaying fragility, leading them to disregard social norms which dictate compassion. The results are discussed from the framework of terror management theory, as increased mortality salience and death-related thoughts could have led to the activation of negative attitudes which, in turn, reduce compassion.
NASA Astrophysics Data System (ADS)
Meertens, C. M.; Murray, D.; McWhirter, J.
2004-12-01
Over the last five years, UNIDATA has developed an extensible and flexible software framework for analyzing and visualizing geoscience data and models. The Integrated Data Viewer (IDV), initially developed for visualization and analysis of atmospheric data, has broad interdisciplinary application across the geosciences including atmospheric, ocean, and most recently, earth sciences. As part of the NSF-funded GEON Information Technology Research project, UNAVCO has enhanced the IDV to display earthquakes, GPS velocity vectors, and plate boundary strain rates. These and other geophysical parameters can be viewed simultaneously with three-dimensional seismic tomography and mantle geodynamic model results. Disparate data sets of different formats, variables, geographical projections and scales can automatically be displayed in a common projection. The IDV is efficient and fully interactive allowing the user to create and vary 2D and 3D displays with contour plots, vertical and horizontal cross-sections, plan views, 3D isosurfaces, vector plots and streamlines, as well as point data symbols or numeric values. Data probes (values and graphs) can be used to explore the details of the data and models. The IDV is a freely available Java application using Java3D and VisAD and runs on most computers. UNIDATA provides easy-to-follow instructions for download, installation and operation of the IDV. The IDV primarily uses netCDF, a self-describing binary file format, to store multi-dimensional data, related metadata, and source information. The IDV is designed to work with OPeNDAP-equipped data servers that provide real-time observations and numerical models from distributed locations. Users can capture and share screens and animations, or exchange XML "bundles" that contain the state of the visualization and embedded links to remote data files. A real-time collaborative feature allows groups of users to remotely link IDV sessions via the Internet and simultaneously view and control the visualization. A Jython-based formulation facility allows computations on disparate data sets using simple formulas. Although the IDV is an advanced tool for research, its flexible architecture has also been exploited for educational purposes with the Virtual Geophysical Exploration Environment (VGEE) development. The VGEE demonstration added physical concept models to the IDV and curricula for atmospheric science education intended for the high school to graduate student levels.
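The Jython-based formulation facility accepts short user formulas over loaded data; a hypothetical formula in that spirit is shown below (plain Python/Jython with scalars; the way the IDV applies such an expression element-wise to grids is not modelled here).

```python
# Hypothetical user formula of the kind a Jython formulation facility accepts:
# derive a horizontal speed from eastward and northward components.  Shown for
# plain numbers; in the IDV the operation would act on gridded fields.
from math import sqrt

def horizontalSpeed(u, v):
    return sqrt(u * u + v * v)

print(horizontalSpeed(3.0, 4.0))   # 5.0
```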
Integrating Satellite, Radar and Surface Observation with Time and Space Matching
NASA Astrophysics Data System (ADS)
Ho, Y.; Weber, J.
2015-12-01
The Integrated Data Viewer (IDV) from Unidata is a Java™-based software framework for analyzing and visualizing geoscience data. It brings together the ability to display and work with satellite imagery, gridded data, surface observations, balloon soundings, NWS WSR-88D Level II and Level III RADAR data, and NOAA National Profiler Network data, all within a unified interface. Applying time and space matching on the satellite, radar and surface observation datasets will automatically synchronize the display from different data sources and spatially subset to match the display area in the view window. These features allow the IDV users to effectively integrate these observations and provide 3 dimensional views of the weather system to better understand the underlying dynamics and physics of weather phenomena.
Method and apparatus for providing a seamless tiled display
NASA Technical Reports Server (NTRS)
Dubin, Matthew B. (Inventor); Johnson, Michael J. (Inventor)
2002-01-01
A display for producing a seamless composite image from at least two discrete images. The display includes one or more projectors for projecting each of the discrete images separately onto a screen such that at least one of the discrete images overlaps at least one other of the discrete images by more than 25 percent. The amount of overlap that is required to reduce the seams of the composite image to an acceptable level over a predetermined viewing angle depends on a number of factors including the field-of-view and aperture size of the projectors, the screen gain profile, etc. For rear-projection screens and some front projection screens, an overlap of more than 25 percent is acceptable.
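Seams in overlapped projections are commonly hidden by ramping each projector's intensity across the overlap so the summed output stays constant; a minimal numpy sketch of that idea follows (a linear ramp, not the patented method that accounts for screen gain and projector aperture).

```python
import numpy as np

def blend_ramps(width, overlap):
    """Per-column intensity weights for two side-by-side projected images.

    Inside the overlap region the left image ramps down while the right ramps
    up so their sum stays 1.0, hiding the seam.  A linear ramp is used for
    simplicity; real systems shape the ramp to the screen's gain profile.
    Illustrative only, not the patent's method.
    """
    left = np.ones(width)
    right = np.ones(width)
    ramp = np.linspace(1.0, 0.0, overlap)
    left[-overlap:] = ramp          # fade the left image out
    right[:overlap] = 1.0 - ramp    # fade the right image in
    return left, right

l, r = blend_ramps(800, 200)        # 25 percent overlap of an 800-column image
assert np.allclose(l[-200:] + r[:200], 1.0)
```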
Image volume analysis of omnidirectional parallax regular-polyhedron three-dimensional displays.
Kim, Hwi; Hahn, Joonku; Lee, Byoungho
2009-04-13
Three-dimensional (3D) displays having regular-polyhedron structures are proposed and their imaging characteristics are analyzed. Four types of conceptual regular-polyhedron 3D displays, i.e., hexahedron, octahedron, dodecahedron, and icosahedron, are considered. In principle, a regular-polyhedron 3D display can present omnidirectional full-parallax 3D images. Design conditions of structural factors, such as the viewing angle of the facet panel and the observation distance, for a 3D display with omnidirectional full parallax are studied. As a main issue, the image volumes containing virtual 3D objects represented by the four types of regular-polyhedron displays are comparatively analyzed.
Helmet-Mounted Display Symbology and Stabilization Concepts
NASA Technical Reports Server (NTRS)
Newman, Richard L.
1995-01-01
The helmet-mounted display (HMD) presents flight, sensor, and weapon information in the pilot's line of sight. The HMD was developed to allow the pilot to retain aircraft and weapon information and to view sensor images while looking off boresight.
NASA Astrophysics Data System (ADS)
Horikawa, H.; Takaesu, M.; Sueki, K.; Takahashi, N.; Sonoda, A.; Miura, S.; Tsuboi, S.
2014-12-01
Mega-thrust earthquakes are anticipated to occur in the Nankai Trough in southwest Japan. In the source areas, we deployed a seafloor seismic network, DONET (Dense Ocean-floor Network System for Earthquakes and Tsunamis), in 2010 in order to monitor seismicity, crustal deformation, and tsunamis. The DONET system consists of 20 stations in total, each composed of six kinds of sensors, including strong-motion seismometers and quartz pressure gauges. The stations are densely distributed with an average spatial interval of 15-20 km and cover the region from near the trench axis to coastal areas. Observed data are transferred to a land station through a fiber-optic cable and then to the JAMSTEC (Japan Agency for Marine-Earth Science and Technology) data management center through a private network in real time. After the 2011 off the Pacific coast of Tohoku Earthquake, local governments close to the Nankai Trough have been trying to plan disaster prevention schemes. JAMSTEC will disseminate DONET data combined with research accomplishments so that they will be widely recognized as important earthquake information. In order to make DONET research data available to local governments, we have developed a web application system, REIS (Real-time Earthquake Information System). REIS is providing seismic waveform data to some local governments close to the Nankai Trough as a pilot study. As soon as operation of DONET is ready, REIS will start full-scale operation. REIS can display DONET seismic waveform data in real time; users can select strong-motion and pressure data and configure options for trace arrangement, time scale, and amplitude. In addition to real-time monitoring, REIS can display past seismic waveform data and show earthquake epicenters on a map. In this presentation, we briefly introduce the DONET system and then show our web application system. We also discuss our future plans for further development of REIS.
A Framework for Realistic Modeling and Display of Object Surface Appearance
NASA Astrophysics Data System (ADS)
Darling, Benjamin A.
With advances in screen and video hardware technology, the type of content presented on computers has progressed from text and simple shapes to high-resolution photographs, photorealistic renderings, and high-definition video. At the same time, there have been significant advances in the area of content capture, with the development of devices and methods for creating rich digital representations of real-world objects. Unlike photo or video capture, which provide a fixed record of the light in a scene, these new technologies provide information on the underlying properties of the objects, allowing their appearance to be simulated for novel lighting and viewing conditions. These capabilities provide an opportunity to continue the computer display progression, from high-fidelity image presentations to digital surrogates that recreate the experience of directly viewing objects in the real world. In this dissertation, a framework was developed for representing objects with complex color, gloss, and texture properties and displaying them onscreen to appear as if they are part of the real-world environment. At its core, there is a conceptual shift from a traditional image-based display workflow to an object-based one. Instead of presenting the stored patterns of light from a scene, the objective is to reproduce the appearance attributes of a stored object by simulating its dynamic patterns of light for the real viewing and lighting geometry. This is accomplished using a computational approach where the physical light sources are modeled and the observer and display screen are actively tracked. Surface colors are calculated for the real spectral composition of the illumination with a custom multispectral rendering pipeline. In a set of experiments, the accuracy of color and gloss reproduction was evaluated by measuring the screen directly with a spectroradiometer. Gloss reproduction was assessed by comparing gonio measurements of the screen output to measurements of the real samples in the same measurement configuration. A chromatic adaptation experiment was performed to evaluate color appearance in the framework and explore the factors that contribute to differences when viewing self-luminous displays as opposed to reflective objects. A set of sample applications was developed to demonstrate the potential utility of the object display technology for digital proofing, psychophysical testing, and artwork display.
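Computing surface color "for the real spectral composition of the illumination" amounts to multiplying the illuminant spectrum by the surface reflectance and integrating against color-matching functions; the sketch below uses crude placeholder spectra, not the dissertation's measured data or full multispectral rendering pipeline.

```python
import numpy as np

# Spectral shading sketch: reflect an illuminant spectrum off a surface
# reflectance and integrate against colour-matching functions to get XYZ.
# The Gaussians below are crude stand-ins for the CIE 1931 curves, and the
# illuminant/reflectance are placeholders, not measured data.
wl = np.arange(400.0, 701.0, 10.0)                     # wavelength, nm
illuminant = np.ones_like(wl)                          # flat (equal-energy) light
reflectance = np.linspace(0.2, 0.8, wl.size)           # hypothetical surface
xbar = np.exp(-((wl - 600.0) / 40.0) ** 2)             # stand-in for CIE xbar
ybar = np.exp(-((wl - 555.0) / 45.0) ** 2)             # stand-in for CIE ybar
zbar = 1.8 * np.exp(-((wl - 450.0) / 25.0) ** 2)       # stand-in for CIE zbar

stimulus = illuminant * reflectance                    # light reaching the eye
k = 100.0 / np.trapz(illuminant * ybar, wl)            # normalize white to Y = 100
X = k * np.trapz(stimulus * xbar, wl)
Y = k * np.trapz(stimulus * ybar, wl)
Z = k * np.trapz(stimulus * zbar, wl)
print(X, Y, Z)
```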
NASA Technical Reports Server (NTRS)
Laudeman, Irene V.; Brasil, Connie L.; Stassart, Philippe
1998-01-01
The Planview Graphical User Interface (PGUI) is the primary display of air traffic for the Conflict Prediction and Trial Planning, function of the Center TRACON Automation System. The PGUI displays air traffic information that assists the user in making decisions related to conflict detection, conflict resolution, and traffic flow management. The intent of this document is to outline the human factors issues related to the design of the conflict prediction and trial planning portions of the PGUI, document all human factors related design changes made to the PGUI from December 1996 to September 1997, and outline future plans for the ongoing PGUI design.
Planning in sentence production: evidence for the phrase as a default planning scope.
Martin, Randi C; Crowther, Jason E; Knight, Meredith; Tamborello, Franklin P; Yang, Chin-Lung
2010-08-01
Controversy remains as to the scope of advanced planning in language production. Smith and Wheeldon (1999) found significantly longer onset latencies when subjects described moving-picture displays by producing sentences beginning with a complex noun phrase than for matched sentences beginning with a simple noun phrase. While these findings are consistent with a phrasal scope of planning, they might also be explained on the basis of: (1) greater retrieval fluency for the second content word in the simple initial noun phrase sentences and (2) visual grouping factors. In Experiments 1 and 2, retrieval fluency for the second content word was equated for the complex and simple initial noun phrase conditions. Experiments 3 and 4 addressed the visual grouping hypothesis by using stationary displays and by comparing onset latencies for the same display for sentence and list productions. Longer onset latencies for the sentences beginning with a complex noun phrase were obtained in all experiments, supporting the phrasal scope of planning hypothesis. The results indicate that in speech, as in other motor production domains, planning occurs beyond the minimal production unit. Copyright (c) 2010 Elsevier B.V. All rights reserved.
[Bone drilling simulation by three-dimensional imaging].
Suto, Y; Furuhata, K; Kojima, T; Kurokawa, T; Kobayashi, M
1989-06-01
The three-dimensional display technique has a wide range of medical applications. Pre-operative planning is one typical application: in orthopedic surgery, three-dimensional image processing has been used very successfully. We have employed this technique in pre-operative planning for orthopedic surgery, and have developed a simulation system for bone-drilling. Positive results were obtained by pre-operative rehearsal; when a region of interest is indicated by means of a mouse on the three-dimensional image displayed on the CRT, the corresponding region appears on the slice image which is displayed simultaneously. Consequently, the status of the bone-drilling is constantly monitored. In developing this system, we have placed emphasis on the quality of the reconstructed three-dimensional images, on fast processing, and on the easy operation of the surgical planning simulation.
High-immersion three-dimensional display of the numerical computer model
NASA Astrophysics Data System (ADS)
Xing, Shujun; Yu, Xunbo; Zhao, Tianqi; Cai, Yuanfa; Chen, Duo; Chen, Zhidong; Sang, Xinzhu
2013-08-01
High-immersion three-dimensional (3D) displays are valuable tools for many applications, such as designing and constructing buildings, industrial architecture design, aeronautics, scientific research, entertainment, media advertisement, and military areas. However, most technologies provide 3D display in front of screens that are parallel with the walls, and the sense of immersion is decreased. To get the right multi-view stereo ground image, the cameras' photosensitive surfaces should be parallel to the common focus plane, and the cameras' optical axes should be offset toward the center of the common focus plane in both the vertical and horizontal directions. It is very common to use virtual cameras, which are ideal pinhole cameras, to display a 3D model in a computer system. We can use virtual cameras to simulate the shooting method for multi-view ground-based stereo images. Here, two virtual shooting methods for ground-based high-immersion 3D display are presented. The position of the virtual camera is determined by the observer's eye position in the real world. When the observer stands inside the circumcircle of the 3D ground display, offset perspective projection virtual cameras are used. If the observer stands outside the circumcircle of the 3D ground display, offset perspective projection virtual cameras and orthogonal projection virtual cameras are adopted. In this paper, we mainly discuss the parameter settings of the virtual cameras. The near clip plane setting is the main point in the first method, while the rotation angle of the virtual cameras is the main point in the second method. In order to validate the results, we use D3D and OpenGL to render scenes from different viewpoints and generate a stereoscopic image. A realistic visualization system for 3D models is constructed and demonstrated for viewing horizontally, which provides high-immersion 3D visualization. The displayed 3D scenes are compared with real objects in the real world.
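The "offset perspective projection" cameras correspond to an asymmetric viewing frustum whose axis is shifted off the display normal; a minimal sketch of computing such a frustum for a tracked eye position follows (coordinate conventions are assumptions, not the paper's exact parameterization).

```python
import numpy as np

def offset_perspective(eye, half_w, half_h, near, far):
    """Asymmetric ("offset") perspective projection for an off-axis eye.

    Assumed conventions: the display plane is z = 0, centred at the origin
    with half-extents half_w, half_h; the eye sits at (ex, ey, ez) with
    ez > 0 and looks along -z.  Returns an OpenGL-style frustum matrix.
    """
    ex, ey, ez = eye
    l = (-half_w - ex) * near / ez
    r = ( half_w - ex) * near / ez
    b = (-half_h - ey) * near / ez
    t = ( half_h - ey) * near / ez
    return np.array([
        [2 * near / (r - l), 0, (r + l) / (r - l), 0],
        [0, 2 * near / (t - b), (t + b) / (t - b), 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0],
    ])

# Observer 0.3 m right of centre, 1.5 m above a 1 m x 1 m ground display:
print(offset_perspective((0.3, 0.0, 1.5), 0.5, 0.5, 0.1, 10.0))
```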
How colorful! A feature it is, isn't it?
NASA Astrophysics Data System (ADS)
Lebowsky, Fritz
2015-01-01
A display's color subpixel geometry provides an intriguing opportunity for improving the readability of text. TrueType fonts can be positioned at the precision of subpixel resolution. With such a constraint in mind, how does one need to design font characteristics? On the other hand, display manufacturers try hard to address the color display's dilemma: smaller pixel pitch and larger display diagonals strongly increase the total number of pixels. Consequently, the cost of column and row drivers as well as power consumption increase. Perceptual color subpixel rendering using color component subsampling may save about 1/3 of color subpixels (and reduce power dissipation). This talk will try to elaborate on the following questions, based on simulation of several different layouts of subpixel matrices: Up to what level are display device constraints compatible with software-specific ideas of rendering text? How much color contrast will remain? How best to consider preferred viewing distance for readability of text? How much does visual acuity vary at 20/20 vision? Can simplified models of human visual color perception be easily applied to text rendering on displays? How linear is human visual contrast perception around the band limit of a display's spatial resolution? How colorful does the rendered text appear on the screen? How much does viewing angle influence the performance of subpixel layouts and color subpixel rendering?
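The basic mechanics of subpixel text rendering on an RGB-stripe panel can be sketched as mapping a 3x horizontally supersampled glyph onto individual subpixels; the snippet below shows only that classic idea, not the alternative subpixel matrices or perceptual filtering discussed in the talk.

```python
import numpy as np

def subpixel_render(glyph_3x):
    """Map a 3x horizontally supersampled grayscale glyph onto RGB subpixels.

    Each group of three horizontal samples drives the R, G and B subpixels of
    one output pixel, tripling apparent horizontal resolution on a standard
    RGB-stripe panel.  Classic subpixel rendering; no perceptual filtering.
    """
    h, w3 = glyph_3x.shape
    assert w3 % 3 == 0
    return glyph_3x.reshape(h, w3 // 3, 3)   # last axis = per-pixel R, G, B coverage

glyph = np.random.rand(8, 24)                # hypothetical 8 x 24 coverage map
print(subpixel_render(glyph).shape)          # (8, 8, 3)
```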
Effects of field-of-view restrictions on speed and accuracy of manoeuvring.
Toet, Alexander; Jansen, Sander E M; Delleman, Nico J
2007-12-01
Effects of field-of-view restrictions on the speed and accuracy of participants performing a real-world manoeuvring task through an obstacled environment were investigated. Although field-of-view restrictions are known to affect human behaviour and to degrade performance for a range of different tasks, the relationship between human manoeuvring performance and field-of-view size is not known. This knowledge is essential to evaluate a trade-off between human performance, cost, and ergonomic aspects of field-of-view limiting devices like head-mounted displays and night vision goggles, which are frequently deployed for tasks involving human motion through environments with obstacles. In this study the speed and accuracy of movement were measured in 15 participants (8 men, 7 women, 22.9 +/- 2.8 yr. of age) traversing a course formed by three wall segments for different field-of-view restrictions. Analysis showed speed decreased linearly with decreasing field-of-view extent, while accuracy was consistently reduced for all restricted field-of-view conditions. Present results may be used to evaluate cost and performance trade-offs for field-of-view restricting devices deployed to perform time-limited human-locomotion tasks in complex structured environments, such as night-vision goggles and head-mounted displays.
Acquisition of stereo panoramas for display in VR environments
NASA Astrophysics Data System (ADS)
Ainsworth, Richard A.; Sandin, Daniel J.; Schulze, Jurgen P.; Prudhomme, Andrew; DeFanti, Thomas A.; Srinivasan, Madhusudhanan
2011-03-01
Virtual reality systems are an excellent environment for stereo panorama displays. The acquisition and display methods described here combine high-resolution photography with surround vision and full stereo view in an immersive environment. This combination provides photographic stereo-panoramas for a variety of VR displays, including the StarCAVE, NexCAVE, and CORNEA. The zero parallax point used in conventional panorama photography is also the center of horizontal and vertical rotation when creating photographs for stereo panoramas. The two photographically created images are displayed on a cylinder or a sphere. The radius from the viewer to the image is set at approximately 20 feet, or at the object of major interest. A full stereo view is presented in all directions. The interocular distance, as seen from the viewer's perspective, displaces the two spherical images horizontally. This presents correct stereo separation in whatever direction the viewer is looking, even up and down. Objects at infinity will move with the viewer, contributing to an immersive experience. Stereo panoramas created with this acquisition and display technique can be applied without modification to a large array of VR devices having different screen arrangements and different VR libraries.
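The per-direction horizontal displacement of the two spherical images can be sketched as offsetting each eye half the interocular distance perpendicular to the gaze direction; a minimal illustration follows (geometry assumed, not the acquisition rig's exact parameters).

```python
import numpy as np

def eye_positions(view_azimuth_deg, iod=0.065):
    """Left/right eye positions for a given horizontal gaze direction.

    Each eye is offset half the interocular distance (iod, metres) along the
    horizontal direction perpendicular to the gaze, which keeps the two
    spherical panorama images correctly separated in every viewing direction.
    Illustrative sketch only.
    """
    az = np.radians(view_azimuth_deg)
    gaze = np.array([np.sin(az), np.cos(az), 0.0])     # unit gaze (x east, y north)
    right = np.array([np.cos(az), -np.sin(az), 0.0])   # horizontal perpendicular
    return -0.5 * iod * right, 0.5 * iod * right, gaze

left_eye, right_eye, gaze = eye_positions(30.0)
print(left_eye, right_eye)
```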
Bartha, Michael C; Allie, Paul; Kokot, Douglas; Roe, Cynthia Purvis
2015-01-01
Computer users continue to report eye and upper body discomfort even as workstation flexibility has improved. Research shows a relationship between character size, viewing distance, and reading performance. Few reports exist regarding text height viewed under normal office work conditions and eye discomfort. This paper reports self-selected computer display placement, text characteristics, and subjective comfort for older and younger computer workers under real-world conditions. Computer workers were provided with monitors and adjustable display support(s). In Study 1, older workers wearing progressive-addition lenses (PALs) were observed. In Study 2, older workers wearing multifocal lenses and younger workers were observed. Workers wearing PALs experienced less eye and body discomfort with adjustable displays, and less eye and neck discomfort for text visual angles near or greater than ergonomic recommendations. Older workers wearing multifocal correction positioned displays much lower than younger workers. In general, computer users did not adjust character size to ensure that foveal images of text fell within the recommended range. Ergonomic display placement recommendations should be different for computer users wearing multifocal correction for presbyopia. Ergonomic training should emphasize adjusting text size for user comfort.
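The text-size guidance turns on the visual angle a character subtends at the eye; a minimal sketch of that arithmetic follows (the recommended-angle figure in the comment is an assumption, not taken from this paper).

```python
import math

def text_visual_angle_arcmin(char_height_mm, viewing_distance_mm):
    """Visual angle subtended by a character, in arcminutes."""
    angle_rad = 2 * math.atan(char_height_mm / (2 * viewing_distance_mm))
    return math.degrees(angle_rad) * 60

# 3 mm characters viewed at 650 mm subtend ~15.9 arcmin; common ergonomic
# guidance (assumed here, check the applicable standard) suggests on the
# order of 20-22 arcmin.
print(round(text_visual_angle_arcmin(3.0, 650.0), 1))
```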
Evaluation of stereoscopic display with visual function and interview
NASA Astrophysics Data System (ADS)
Okuyama, Fumio
1999-05-01
The influence of a binocular stereoscopic (3D) television display on the human eye was compared with that of a 2D display, using visual function testing and interviews. A 40-inch double lenticular display was used for the 2D/3D comparison experiments. Subjects observed the display for 30 minutes at a distance of 1.0 m, with a combination of 2D material and 3D material. The participants were twelve young adults. The main visual functions measured were visual acuity, refraction, phoria, near vision point, accommodation, etc. The interview consisted of 17 questions. Testing procedures were performed just before watching, just after watching, and forty-five minutes after watching. Changes in visual function are characterized as prolongation of the near vision point, decrease in accommodation, and increase in phoria. The 3D viewing interview results show much more visual fatigue in comparison with the 2D results. The conclusions are: 1) changes in visual function are larger and visual fatigue is more intense when viewing 3D images; 2) the evaluation method combining visual function testing and interviews proved very satisfactory for analyzing the influence of a stereoscopic display on the human eye.
Context based configuration management system
NASA Technical Reports Server (NTRS)
Gurram, Mohana M. (Inventor); Maluf, David A. (Inventor); Mederos, Luis A. (Inventor); Gawdiak, Yuri O. (Inventor)
2010-01-01
A computer-based system for configuring and displaying information on changes in, and present status of, a collection of events associated with a project. Classes of icons for decision events, configurations and feedback mechanisms, and time lines (sequential and/or simultaneous) for related events are displayed. Metadata for each icon in each class is displayed by choosing and activating the corresponding icon. Access control (viewing, reading, writing, editing, deleting, etc.) is optionally imposed for metadata and other displayed information.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lyon, A. L.; Kowalkowski, J. B.; Jones, C. D.
ParaView is a high performance visualization application not widely used in High Energy Physics (HEP). It is a long-standing open source project led by Kitware and involves several Department of Energy (DOE) and Department of Defense (DOD) laboratories. Furthermore, it has been adopted by many DOE supercomputing centers and other sites. ParaView is unique in speed and efficiency by using state-of-the-art techniques developed by the academic visualization community that are often not found in applications written by the HEP community. In-situ visualization of events, where event details are visualized during processing/analysis, is a common task for experiment software frameworks. Kitware supplies Catalyst, a library that enables scientific software to serve visualization objects to client ParaView viewers, yielding a real-time event display. Connecting ParaView to the Fermilab art framework will be described and the capabilities it brings discussed.
NASA Astrophysics Data System (ADS)
Tan, S. L. E.
2005-03-01
Stereoscopy was used in medicine as long ago as 1898, but has not gained widespread acceptance except for a peak in the 1930's. It retains a use in orthopaedics in the form of Radiostereogrammetrical Analysis (RSA), though this is now done by computer software without using stereopsis. Combining computer assisted stereoscopic displays with both conventional plain films and reconstructed volumetric axial data, we are reassessing the use of stereoscopy in orthopaedics. Applications include use in developing nations or rural settings, erect patients where axial imaging cannot be used, and complex deformity and trauma reconstruction. Extension into orthopaedic endoscopic systems and teaching aids (e.g. operative videos) are further possibilities. The benefits of stereoscopic vision in increased perceived resolution and depth perception can help orthopaedic surgeons achieve more accurate diagnosis and better pre-operative planning. Limitations to currently available stereoscopic displays which need to be addressed prior to widespread acceptance are: availability of hardware and software, loss of resolution, use of glasses, and image "ghosting". Journal publication, the traditional mode of information dissemination in orthopaedics, is also viewed as a hindrance to the acceptance of stereoscopy - it does not deliver the full impact of stereoscopy and "hands-on" demonstrations are needed.
NASA Technical Reports Server (NTRS)
2003-01-01
Dark smoke from oil fires extends for about 60 kilometers south of Iraq's capital city of Baghdad in these images acquired by the Multi-angle Imaging SpectroRadiometer (MISR) on April 2, 2003. The thick, almost black smoke is apparent near image center and contains chemical and particulate components hazardous to human health and the environment. The top panel is from MISR's vertical-viewing (nadir) camera. Vegetated areas appear red here because this display is constructed using near-infrared, red and blue band data, displayed as red, green and blue, respectively, to produce a false-color image. The bottom panel is a combination of two camera views of the same area and is a 3-D stereo anaglyph in which red band nadir camera data are displayed as red, and red band data from the 60-degree backward-viewing camera are displayed as green and blue. Both panels are oriented with north to the left in order to facilitate stereo viewing. Viewing the 3-D anaglyph with red/blue glasses (with the red filter placed over the left eye and the blue filter over the right) makes it possible to see the rising smoke against the surface terrain. This technique helps to distinguish features in the atmosphere from those on the surface. In addition to the smoke, several high, thin cirrus clouds (barely visible in the nadir view) are readily observed using the stereo image. The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously and every 9 days views the entire globe between 82 degrees north and 82 degrees south latitude. These data products were generated from a portion of the imagery acquired during Terra orbit 17489. The panels cover an area of about 187 kilometers x 123 kilometers, and use data from blocks 63 to 65 within World Reference System-2 path 168. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
Flatbed-type 3D display systems using integral imaging method
NASA Astrophysics Data System (ADS)
Hirayama, Yuzo; Nagatani, Hiroyuki; Saishu, Tatsuo; Fukushima, Rieko; Taira, Kazuki
2006-10-01
We have developed prototypes of flatbed-type autostereoscopic display systems using a one-dimensional integral imaging method. The integral imaging system reproduces light beams similar to those produced by a real object. Our display architecture is suitable for flatbed configurations because it has a large margin for viewing distance and angle and has continuous motion parallax. We have applied our technology to 15.4-inch displays. We realized a horizontal resolution of 480 with 12 parallaxes due to the adoption of a mosaic pixel arrangement on the display panel. This allows viewers to see high-quality autostereoscopic images. Viewing the display from an angle allows the viewer to experience 3-D images that stand out several centimeters from the surface of the display. Mixed reality of virtual 3-D objects and real objects is also realized on a flatbed display. In seeking reproduction of natural 3-D images on the flatbed display, we developed proprietary software. Fast playback of CG movie contents and real-time interaction are realized with the aid of a graphics card. Ensuring that 3-D images are safe for human viewers is very important. Therefore, we have measured the effects on visual function and evaluated the biological effects. For example, accommodation and convergence were measured at the same time. The various biological effects were also measured before and after the task of watching 3-D images. We have found that our displays show better results than a conventional stereoscopic display. The new technology opens up new areas of application for 3-D displays, including arcade games, e-learning, simulations of buildings and landscapes, and even 3-D menus in restaurants.
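The quoted horizontal 3D resolution follows from dividing the panel's subpixel columns among the parallaxes; a one-line calculation under the assumption of a 1,920-column RGB panel (the paper's exact panel is not stated here):

```python
# Horizontal 3D resolution of a one-dimensional integral-imaging display:
# the panel's horizontal subpixel columns are shared among the parallaxes.
# A 1,920-column RGB panel is an assumption for illustration.
panel_columns = 1920
subpixels_per_pixel = 3          # mosaic RGB arrangement
parallaxes = 12
horizontal_3d_resolution = panel_columns * subpixels_per_pixel // parallaxes
print(horizontal_3d_resolution)   # 480, matching the figure quoted above
```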
Can Effective Synthetic Vision System Displays be Implemented on Limited Size Display Spaces?
NASA Technical Reports Server (NTRS)
Comstock, J. Raymond, Jr.; Glaab, Lou J.; Prinzel, Lance J.; Elliott, Dawn M.
2004-01-01
The Synthetic Vision Systems (SVS) element of the NASA Aviation Safety Program is striving to eliminate poor visibility as a causal factor in aircraft accidents, and to enhance operational capabilities of all types of aircraft. To accomplish these safety and situation awareness improvements, the SVS concepts are designed to provide a clear view of the world ahead through the display of computer-generated imagery derived from an onboard database of terrain, obstacle, and airport information. An important issue for the SVS concept is whether useful and effective SVS displays can be implemented on limited-size display spaces, as would be required to implement this technology on older aircraft with physically smaller instrument spaces. In this study, prototype SVS displays were presented on the following display sizes: (a) size "A" (e.g., 757 EADI), (b) form factor "D" (e.g., 777 PFD), and (c) new size "X" (rectangular flat panel, approximately 20 x 25 cm). Testing was conducted in a high-resolution graphics simulation facility at NASA Langley Research Center. Specific issues under test included the display size as noted above and the field of view (FOV) to be shown on the display; directly related to FOV is the degree of minification of the displayed image or picture. Using simulated approaches with display size and FOV conditions held constant, no significant differences due to these factors were found. Preferred FOV based on performance was determined by using approaches during which pilots could select the FOV. Mean preference ratings for FOV were in the following order: (1) 30 deg., (2) unity, (3) 60 deg., and (4) 90 deg., an ordering that held true for all display sizes tested. Limitations of the present study and future research directions are discussed.
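Minification can be read as the ratio between the FOV drawn on the display and the visual angle the display itself subtends at the pilot's eye, with "unity" corresponding to a ratio of 1; the sketch below uses assumed viewing geometry, not the study's cockpit dimensions.

```python
import math

def minification_factor(displayed_fov_deg, display_width_cm, eye_distance_cm):
    """Ratio of the field of view drawn on the display to the visual angle the
    display itself subtends at the eye.  A factor of 1 is the 'unity' case;
    larger values compress (minify) the outside scene.  Geometry conventions
    here are assumptions for illustration.
    """
    subtended = 2 * math.degrees(math.atan(display_width_cm / (2 * eye_distance_cm)))
    return displayed_fov_deg / subtended

# A 25 cm wide display viewed from 65 cm subtends ~21.8 deg; showing a 60 deg
# geometric FOV on it minifies the scene by roughly a factor of 2.8.
print(round(minification_factor(60, 25, 65), 1))
```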
Synthetic vision systems: the effects of guidance symbology, display size, and field of view.
Alexander, Amy L; Wickens, Christopher D; Hardy, Thomas J
2005-01-01
Two experiments conducted in a high-fidelity flight simulator examined the effects of guidance symbology, display size, and geometric field of view (GFOV) within a synthetic vision system (SVS). In Experiment 1, 18 pilots flew highlighted and low-lighted tunnel-in-the-sky displays, as well as a less cluttered follow-me aircraft (FMA), through a series of curved approaches over rugged terrain. The results revealed that both tunnels supported better flight path tracking and lower workload levels than did the FMA because of the availability of more preview information. Increasing tunnel intensity provided no tracking benefit and, in fact, degraded traffic awareness because of clutter and attentional tunneling. In Experiment 2, 24 pilots flew a low-lighted tunnel configured according to different display sizes (small or large) and GFOVs (30 degrees or 60 degrees). Measures of flight path tracking and terrain awareness generally favored the 60 degrees GFOV; however, there were no effects of display size. Actual or potential applications of this research include understanding the impact of SVS properties on flight path tracking, traffic and terrain awareness, workload, and the allocation of attention.
Data display and analysis with μView
NASA Astrophysics Data System (ADS)
Tucakov, Ivan; Cosman, Jacob; Brewer, Jess H.
2006-03-01
The μView utility is a new Java applet version of the old db program, extended to include direct access to MUD data files, from which it can construct a variety of spectrum types, including complex and RRF-transformed spectra. By using graphics features built into all modern Web browsers, it provides full graphical display capabilities consistently across all platforms. It has the full command-line functionality of db as well as a more intuitive graphical user interface and extensive documentation, and can read and write db, csv and XML format files.
Chang, Yia-Chung; Tang, Li-Chuan; Yin, Chun-Yi
2013-01-01
Both an analytical formula and an efficient numerical method for simulating the accumulated intensity profile of light refracted through a lenticular lens array placed on top of a liquid-crystal display (LCD) are presented. The influence of light refracted through adjacent lenses is examined for two-view and four-view systems. Our simulation results are in good agreement with those obtained with the commercial software ASAP, but our method is much more efficient. The proposed method allows one to adjust design parameters and simulate the performance of a subpixel-matched autostereoscopic LCD more efficiently and easily.
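For readers who want a feel for the kind of intensity profile being simulated, here is a deliberately simplified paraxial Python sketch (not the paper's analytical formula or its ASAP comparison); the lens pitch, radius, refractive index, subpixel layout, and neighbor-lens weighting are all assumed values.

```python
import numpy as np

# Simplified paraxial model: subpixels sit at the focal plane of plano-convex
# cylindrical lenslets, and each subpixel's chief ray leaves its lenslet at
# roughly theta = arctan(-(x_pix - x_lens)/f). Light caught by an adjacent
# lenslet produces the spurious side lobes discussed above.

n_ref  = 1.5                          # refractive index of the lenticular sheet
radius = 0.5                          # mm, lenslet radius of curvature
f      = radius / (n_ref - 1.0)       # paraxial focal length (plano-convex)
pitch  = 0.6                          # mm, lenslet pitch
n_sub  = 4                            # subpixels (views) under one lenslet
sub_x  = (np.arange(n_sub) + 0.5) / n_sub * pitch - pitch / 2  # subpixel centers

angles  = np.linspace(-40.0, 40.0, 801)       # output angle axis (degrees)
profile = np.zeros((n_sub, angles.size))

for offset, weight in ((-pitch, 0.3), (0.0, 1.0), (pitch, 0.3)):  # own lens + neighbors
    for v, x in enumerate(sub_x):
        theta = np.degrees(np.arctan(-(x - offset) / f))
        # finite subpixel size: spread each contribution over ~1.5 degrees
        profile[v] += weight * np.exp(-0.5 * ((angles - theta) / 1.5) ** 2)

for v in range(n_sub):
    print(f"view {v}: main lobe near {angles[np.argmax(profile[v])]:.1f} deg")
```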
[The research in a foot pressure measuring system based on LabVIEW].
Li, Wei; Qiu, Hong; Xu, Jiang; He, Jiping
2011-01-01
This paper presents a foot pressure measuring system based on LabVIEW. The hardware and software designs are described, and LabVIEW is used to build the application interface for displaying plantar pressure. The system performs plantar pressure data acquisition, data storage, waveform display, and waveform playback. Test results were consistent with the pressure changes of a normal gait and with human systems engineering theory, demonstrating the reliability of the system. The system gives vivid, visual results, provides a new method for measuring foot pressure, and offers a reference for the design of insole systems.
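A minimal Python sketch of the acquisition-store-playback loop described above; the original system is built in LabVIEW, and the sensor sites, sampling rate, and CSV format used here are assumptions for illustration only.

```python
import csv, time, random

SENSORS = ["heel", "midfoot", "metatarsal", "toe"]   # hypothetical sensor sites
RATE_HZ = 50                                          # assumed sampling rate

def acquire(n_samples, path="plantar_pressure.csv"):
    """Sample the (here simulated) pressure sensors and store them to CSV."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["t_s"] + SENSORS)
        for i in range(n_samples):
            t = i / RATE_HZ
            row = [round(t, 3)] + [round(random.uniform(0, 300), 1) for _ in SENSORS]
            writer.writerow(row)                      # simulated pressure values (kPa)
            time.sleep(1.0 / RATE_HZ)
    return path

def playback(path):
    """Replay stored waveforms sample by sample (stand-in for a waveform chart)."""
    with open(path) as f:
        for row in csv.DictReader(f):
            print(" | ".join(f"{s}: {row[s]:>6}" for s in SENSORS))

playback(acquire(10))
```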
46. Exterior view at the corner of Seventh Avenue and ...
46. Exterior view at the corner of Seventh Avenue and Olive Way, looking NE. Opening night film, 'The Broadway Melody,' displayed on the canopy marquee. - Fox Theater, Seventh Avenue & Olive Way, Seattle, King County, WA
Venus - 3-D Perspective View of Eastern Edge of Alpha Regio
1996-03-13
A portion of the eastern edge of Alpha Regio is displayed in this three-dimensional perspective view of the surface of Venus from NASA Magellan spacecraft. http://photojournal.jpl.nasa.gov/catalog/PIA00246
General view of the flight deck of the Orbiter Discovery ...
General view of the flight deck of the Orbiter Discovery looking forward along the approximate center line of the orbiter at the center console. The Multifunction Electronic Display System (MEDS) is evident in the mid-ground center of this image; this system was a major upgrade from the previous analog display system. The commander's station is on the port side, or left in this view, and the pilot's station is on the starboard side, or right in this view. Note the grab bar in the upper center of the image, which was primarily used for commander and pilot ingress with the orbiter in a vertical position on the launch pad. Also note that the forward observation windows have protective covers over them. This image was taken at Kennedy Space Center. - Space Transportation System, Orbiter Discovery (OV-103), Lyndon B. Johnson Space Center, 2101 NASA Parkway, Houston, Harris County, TX
NASA Astrophysics Data System (ADS)
Kay, Paul A.; Robb, Richard A.; King, Bernard F.; Myers, R. P.; Camp, Jon J.
1995-04-01
Thousands of radical prostatectomies for prostate cancer are performed each year. Radical prostatectomy is a challenging procedure due to anatomical variability and the adjacency of critical structures, including the external urinary sphincter and neurovascular bundles that subserve erectile function. Because of this, there are significant risks of urinary incontinence and impotence following this procedure. Preoperative interaction with three-dimensional visualization of the important anatomical structures might allow the surgeon to understand important individual anatomical relationships of patients. Such understanding might decrease the rate of morbidities, especially for surgeons in training. Patient specific anatomic data can be obtained from preoperative 3D MRI diagnostic imaging examinations of the prostate gland utilizing endorectal coils and phased array multicoils. The volumes of the important structures can then be segmented using interactive image editing tools and then displayed using 3-D surface rendering algorithms on standard work stations. Anatomic relationships can be visualized using surface displays and 3-D colorwash and transparency to allow internal visualization of hidden structures. Preoperatively a surgeon and radiologist can interactively manipulate the 3-D visualizations. Important anatomical relationships can better be visualized and used to plan the surgery. Postoperatively the 3-D displays can be compared to actual surgical experience and pathologic data. Patients can then be followed to assess the incidence of morbidities. More advanced approaches to visualize these anatomical structures in support of surgical planning will be implemented on virtual reality (VR) display systems. Such realistic displays are `immersive,' and allow surgeons to simultaneously see and manipulate the anatomy, to plan the procedure and to rehearse it in a realistic way. Ultimately the VR systems will be implemented in the operating room (OR) to assist the surgeon in conducting the surgery. Such an implementation will bring to the OR all of the pre-surgical planning data and rehearsal experience in synchrony with the actual patient and operation to optimize the effectiveness and outcome of the procedure.
A multi-directional backlight for a wide-angle, glasses-free three-dimensional display.
Fattal, David; Peng, Zhen; Tran, Tho; Vo, Sonny; Fiorentino, Marco; Brug, Jim; Beausoleil, Raymond G
2013-03-21
Multiview three-dimensional (3D) displays can project the correct perspectives of a 3D image in many spatial directions simultaneously. They provide a 3D stereoscopic experience to many viewers at the same time with full motion parallax and do not require special glasses or eye tracking. None of the leading multiview 3D solutions is particularly well suited to mobile devices (watches, mobile phones or tablets), which require the combination of a thin, portable form factor, a high spatial resolution and a wide full-parallax view zone (for short viewing distance from potentially steep angles). Here we introduce a multi-directional diffractive backlight technology that permits the rendering of high-resolution, full-parallax 3D images in a very wide view zone (up to 180 degrees in principle) at an observation distance of up to a metre. The key to our design is a guided-wave illumination technique based on light-emitting diodes that produces wide-angle multiview images in colour from a thin planar transparent lightguide. Pixels associated with different views or colours are spatially multiplexed and can be independently addressed and modulated at video rate using an external shutter plane. To illustrate the capabilities of this technology, we use simple ink masks or a high-resolution commercial liquid-crystal display unit to demonstrate passive and active (30 frames per second) modulation of a 64-view backlight, producing 3D images with a spatial resolution of 88 pixels per inch and full-motion parallax in an unprecedented view zone of 90 degrees. We also present several transparent hand-held prototypes showing animated sequences of up to six different 200-view images at a resolution of 127 pixels per inch.
Status of display systems in B-52H
NASA Astrophysics Data System (ADS)
Hopper, Darrel G.; Meyer, Frederick M.; Wodke, Kenneth E.
1999-08-01
Display technologies for the B-52, selected some 40 years ago, have become unsupportable. Electromechanical and old cathode ray tube technologies, including an exotic six-gun 13 in. tube, have fallen victim to the vanishing-vendor syndrome. Thus, it is necessary to insert new technologies which will be available for the next 40 years to maintain the capability heretofore provided by those now out of favor with the commercial sector. With this paper we begin a look at the status of displays in the B-52H, which will remain in inventory until 2046 according to current plans. From a component electronics technology perspective, such as displays, the B-52H provides several 10-year life cycle cost (LCC) planning cycles to consider multiple upgrades. Three Productivity, Reliability, Availability, and Maintainability (PRAM) projects are reviewed to replace 1950s CRTs in several sizes: 3, 9, and 13 in. A different display technology has been selected in each case. Additional display upgrades may be anticipated and are discussed.
Upgrading the Space Shuttle Caution and Warning System
NASA Technical Reports Server (NTRS)
McCandless, Jeffrey W.; McCann, Robert S.; Hilty, Bruce T.
2005-01-01
A report describes the history and the continuing evolution of an avionic system aboard the space shuttle, denoted the caution and warning system, that generates visual and auditory displays to alert astronauts to malfunctions. The report focuses mainly on planned human-factors-oriented upgrades of an alphanumeric fault-summary display generated by the system. Such upgrades are needed because the display often becomes cluttered with extraneous messages that contribute to the difficulty of diagnosing malfunctions. In the first of two planned upgrades, the fault-summary display will be rebuilt with a more logical task-oriented graphical layout and multiple text fields for malfunction messages. In the second upgrade, information displayed will be changed, such that text fields will indicate only the sources (that is, root causes) of malfunctions; messages that are not operationally useful will no longer appear on the displays. These and other aspects of the upgrades are based on extensive collaboration among astronauts, engineers, and human-factors scientists. The report describes the human-factors principles applied in the upgrades.
Shen, Liangbo; Carrasco-Zevallos, Oscar; Keller, Brenton; Viehland, Christian; Waterman, Gar; Hahn, Paul S.; Kuo, Anthony N.; Toth, Cynthia A.; Izatt, Joseph A.
2016-01-01
Intra-operative optical coherence tomography (OCT) requires a display technology which allows surgeons to visualize OCT data without disrupting surgery. Previous research and commercial intrasurgical OCT systems have integrated heads-up display (HUD) systems into surgical microscopes to provide monoscopic viewing of OCT data through one microscope ocular. To take full advantage of our previously reported real-time volumetric microscope-integrated OCT (4D MIOCT) system, we describe a stereoscopic HUD which projects a stereo pair of OCT volume renderings into both oculars simultaneously. The stereoscopic HUD uses a novel optical design employing spatial multiplexing to project dual OCT volume renderings utilizing a single micro-display. The optical performance of the surgical microscope with the HUD was quantitatively characterized, and the addition of the HUD was found not to substantially affect the resolution, field of view, or pincushion distortion of the operating microscope. In a pilot depth perception subject study, five ophthalmic surgeons completed a pre-set dexterity task with a 50.0% (SD = 37.3%) higher success rate and in 35.0% (SD = 24.8%) less time on average with stereoscopic OCT vision compared to monoscopic OCT vision. Preliminary experience using the HUD in 40 vitreo-retinal human surgeries by five ophthalmic surgeons is reported, in which all surgeons reported that the HUD did not alter their normal view of surgery and that live surgical maneuvers were readily visible in the displayed stereoscopic OCT volumes. PMID:27231616
Virtual reality: a reality for future military pilotage?
NASA Astrophysics Data System (ADS)
McIntire, John P.; Martinsen, Gary L.; Marasco, Peter L.; Havig, Paul R.
2009-05-01
Virtual reality (VR) systems provide exciting new ways to interact with information and with the world. The visual VR environment can be synthetic (computer generated) or be an indirect view of the real world using sensors and displays. With the potential opportunities of a VR system, the question arises about what benefits or detriments a military pilot might incur by operating in such an environment. Immersive and compelling VR displays could be accomplished with an HMD (e.g., imagery on the visor), large area collimated displays, or by putting the imagery on an opaque canopy. But what issues arise when, instead of viewing the world directly, a pilot views a "virtual" image of the world? Is 20/20 visual acuity in a VR system good enough? To deliver this acuity over the entire visual field would require over 43 megapixels (MP) of display surface for an HMD or about 150 MP for an immersive CAVE system, either of which presents a serious challenge with current technology. Additionally, the same number of sensor pixels would be required to drive the displays to this resolution (and formidable network architectures required to relay this information), or massive computer clusters are necessary to create an entirely computer-generated virtual reality with this resolution. Can we presently implement such a system? What other visual requirements or engineering issues should be considered? With the evolving technology, there are many technological issues and human factors considerations that need to be addressed before a pilot is placed within a virtual cockpit.
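A back-of-the-envelope check of the megapixel figures quoted above, assuming 20/20 acuity corresponds to roughly one arcminute per pixel (about 60 pixels per degree); the fields of view chosen below are assumptions that merely show how totals of this size arise.

```python
# 20/20 acuity ~ 1 arcminute per pixel -> about 60 pixels per degree.
PX_PER_DEG = 60

def megapixels(h_fov_deg, v_fov_deg):
    """Pixels needed to cover a rectangular field of view at 60 px/deg."""
    return (h_fov_deg * PX_PER_DEG) * (v_fov_deg * PX_PER_DEG) / 1e6

# Assumed fields of view (illustrative, not taken from the paper):
print(megapixels(120, 100))   # ~43 MP -- an HMD covering ~120 x 100 degrees
print(megapixels(360, 115))   # ~149 MP -- a CAVE wrapping 360 degrees horizontally
```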
Use of mobile devices for medical imaging.
Hirschorn, David S; Choudhri, Asim F; Shih, George; Kim, Woojin
2014-12-01
Mobile devices have fundamentally changed personal computing, with many people forgoing the desktop and even laptop computer altogether in favor of a smaller, lighter, and cheaper device with a touch screen. Doctors and patients are beginning to expect medical images to be available on these devices for consultative viewing, if not actual diagnosis. However, this raises serious concerns with regard to the ability of existing mobile devices and networks to quickly and securely move these images. Medical images often come in large sets, which can bog down a network if not conveyed in an intelligent manner, and downloaded data on a mobile device are highly vulnerable to a breach of patient confidentiality should that device become lost or stolen. Some degree of regulation is needed to ensure that the software used to view these images allows all relevant medical information to be visible and manipulated in a clinically acceptable manner. There also needs to be a quality control mechanism to ensure that a device's display accurately conveys the image content without loss of contrast detail. Furthermore, not all mobile displays are appropriate for all types of images. The smaller displays of smart phones, for example, are not well suited for viewing entire chest radiographs, no matter how small and numerous the pixels of the display may be. All of these factors should be taken into account when deciding where, when, and how to use mobile devices for the display of medical images. Copyright © 2014 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Presence and preferable viewing conditions when using an ultrahigh-definition large-screen display
NASA Astrophysics Data System (ADS)
Masaoka, Kenichiro; Emoto, Masaki; Sugawara, Masayuki; Okano, Fumio
2005-01-01
We are investigating psychological aspects to obtain guidelines for the design of TVs aimed at future high-presence broadcasting. In this study, we performed subjective assessment tests to examine the psychological effects of different combinations of viewing conditions obtained by varying the viewing distance, screen size, and picture resolution (between 4000 and 1000 scan lines). The evaluation images were presented in the form of two-minute programs comprising a sequence of 10 still images, and the test subjects were asked to complete a questionnaire consisting of 20 items relating to psychological effects such as "presence", "adverse effects", and "preferability". It was found that the test subjects reported a higher feeling of presence for 1000-line images when viewed around a distance of 1.5H (less than the standard viewing distance of 3H, which is recommended as a viewing distance for subjective evaluation of image quality for HDTV), and reported a higher feeling of presence for 4000-line images than for 1000-line images. The adverse effects such as "difficulty of viewing" did not differ significantly with resolution, but were evaluated to be lower as the viewing distance increased and tended to saturate at viewing distances above 2H. The viewing conditions were evaluated as being more preferable as the screen size increased, showing that it is possible to broadcast comfortable high-presence pictures using high-resolution large-screen displays.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-19
... FURTHER INFORMATION CONTACT section to view the hard copy of the docket. You may view the hard copy of the... to illustrate your concerns, and suggest alternatives. g. Explain your views as clearly as possible..., Maintenance Demonstration, Monitoring Network/Verification of Continued Attainment, Contingency Plan, and...
Brief history of electronic stereoscopic displays
NASA Astrophysics Data System (ADS)
Lipton, Lenny
2012-02-01
A brief history of recent developments in electronic stereoscopic displays is given, concentrating on products that have succeeded in the marketplace and hence have had a significant influence on future implementations. The focus is on plano-stereoscopic (two-view) technology because it is now the dominant display modality in the marketplace. Stereoscopic displays were created for the motion picture industry a century ago, and this technology influenced the development of products for science and industry, which in turn influenced product development for entertainment.
Rationale and description of a coordinated cockpit display for aircraft flight management
NASA Technical Reports Server (NTRS)
Baty, D. L.
1976-01-01
The design of aircraft cockpit display systems is discussed in detail. The system consists of a set of three beam-penetration color cathode ray tubes (CRTs). One of three orthogonal projections of the aircraft's state appears on each CRT, so the CRTs display different views of the same information. The color feature is included to obtain visual separation of information elements: red, green, and yellow are used to differentiate control, performance, and navigation information. The displays are coordinated in information and color.
NASA Technical Reports Server (NTRS)
Wenzel, Elizabeth M.; Fisher, Scott S.; Stone, Philip K.; Foster, Scott H.
1991-01-01
The real time acoustic display capabilities are described which were developed for the Virtual Environment Workstation (VIEW) Project at NASA-Ames. The acoustic display is capable of generating localized acoustic cues in real time over headphones. An auditory symbology, a related collection of representational auditory 'objects' or 'icons', can be designed using ACE (Auditory Cue Editor), which links both discrete and continuously varying acoustic parameters with information or events in the display. During a given display scenario, the symbology can be dynamically coordinated in real time with 3-D visual objects, speech, and gestural displays. The types of displays feasible with the system range from simple warnings and alarms to the acoustic representation of multidimensional data or events.
Effect of display size on visual attention.
Chen, I-Ping; Liao, Chia-Ning; Yeh, Shih-Hao
2011-06-01
Attention plays an important role in the design of human-machine interfaces. However, current knowledge about attention is largely based on data obtained when using devices of moderate display size. With advancement in display technology comes the need for understanding attention behavior over a wider range of viewing sizes. The effect of display size on test participants' visual search performance was studied. The participants (N = 12) performed two types of visual search tasks, that is, parallel and serial search, under three display-size conditions (16 degrees, 32 degrees, and 60 degrees). Serial, but not parallel, search was affected by display size. In the serial task, mean reaction time for detecting a target increased with the display size.
Micro-valve pump light valve display
Lee, Yee-Chun
1993-01-19
A flat panel display incorporates a plurality of micro-pump light valves (MLV's) to form pixels for recreating an image. Each MLV consists of a dielectric drop sandwiched between substrates, at least one of which is transparent, a holding electrode for maintaining the drop outside a viewing area, and a switching electrode for accelerating the drop from a location within the holding electrode to a location within the viewing area. The substrates may further define non-wetting surface areas to create potential energy barriers to assist in controlling movement of the drop. The forces acting on the drop are quadratic in nature to provide a nonlinear response for increased image contrast. A crossed electrode structure can be used to activate the pixels whereby a large flat panel display is formed without active driver components at each pixel.
NASA Technical Reports Server (NTRS)
Bogart, Edward H. (Inventor); Pope, Alan T. (Inventor)
2000-01-01
A system for display on a single video display terminal of multiple physiological measurements is provided. A subject is monitored by a plurality of instruments which feed data to a computer programmed to receive data, calculate data products such as the index of engagement and heart rate, and display the data in a graphical format simultaneously on a single video display terminal. In addition, live video showing the subject and the experimental setup may also be integrated into the single data display. The display may be recorded on a standard video tape recorder for retrospective analysis.
41. INTERIOR VIEW, GREEN SWITCH TOWER, COS COB, SHOWING SWITCH ...
41. INTERIOR VIEW, GREEN SWITCH TOWER, COS COB, SHOWING SWITCH LEVER ASSEMBLAGE AND DISPLAY BOARD - New York, New Haven & Hartford Railroad, Automatic Signalization System, Long Island Sound shoreline between Stamford & New Haven, Stamford, Fairfield County, CT
43. OBLIQUE VIEW, GREEN SWITCH TOWER, COS COB, SHOWING SWITCH ...
43. OBLIQUE VIEW, GREEN SWITCH TOWER, COS COB, SHOWING SWITCH LEVER ASSEMBLAGE AND DISPLAY BOARD - New York, New Haven & Hartford Railroad, Automatic Signalization System, Long Island Sound shoreline between Stamford & New Haven, Stamford, Fairfield County, CT
Venus - Three-Dimensional Perspective View of Alpha Region
1996-12-02
A portion of Alpha Regio is displayed in this three-dimensional perspective view of the surface of Venus from NASA Magellan spacecraft. In 1963, Alpha Regio was the first feature on Venus to be identified from Earth-based radar.
Plan Turbines 3 & 4, Side View Turbines ...
Plan - Turbines 3 & 4, Side View - Turbines 3 & 4, Section A-A - American Falls Water, Power & Light Company, Island Power Plant, Snake River, below American Falls Dam, American Falls, Power County, ID
Dezawa, Akira; Sairyo, Koichi
2014-05-01
Organic electroluminescence displays (OELD) use organic materials that self-emit light with the passage of an electric current. OELD provide high contrast, excellent color reproducibility at low brightness, excellent video images, and less restricted viewing angles. OELD are thus promising for medical use. This study compared the utility of an OELD with conventional liquid crystal displays (LCD) for imaging in orthopedic endoscopic surgery. One OELD and two conventional LCD that were indistinguishable in external appearance were used in this study. Images from 18 patients were displayed simultaneously on the three monitors and evaluated by six orthopedic surgeons with extensive surgical experience. Images were shown for 2 min, repeated twice, and viewed from the front and side (diagonally). Surgeons rated both clinical utility (12 parameters) and image quality (11 parameters) for each image on a 5-point scale: 1, very good; 2, good; 3, average; 4, poor; and 5, very poor. For clinical utility in 16 percutaneous endoscopic discectomy cases, mean scores for all 12 parameters were significantly better on the OELD than on the LCD, including organ distinguishability (2.1 vs 3.2, respectively), lesion identification (2.2 vs 3.1), and overall viewing impression (2.1 vs 3.1). For image quality, all 11 parameters were better on the OELD than on the LCD. Significant differences were identified in six parameters, including contrast (1.8 vs 2.9), color reproducibility in dark areas (1.8 vs 2.9), and viewing angle (2.2 vs 2.9). The high contrast and excellent color reproducibility of the OELD reduced the constraints of imaging under endoscopy, in which securing a field of view may be difficult. Distinguishability of organs, including ligaments, dura mater, nerves, and adipose tissue, was good, contributing to good stereoscopic images of the surgical field. These findings suggest the utility of OELD for excellent display of surgical images and for enabling safe and highly accurate endoscopic surgery. © 2014 Japan Society for Endoscopic Surgery, Asia Endosurgery Task Force and Wiley Publishing Asia Pty Ltd.
On the Uses of Full-Scale Schlieren Flow Visualization
NASA Astrophysics Data System (ADS)
Settles, G. S.; Miller, J. D.; Dodson-Dreibelbis, L. J.
2000-11-01
A lens-and-grid-type schlieren system using a very large grid as a light source was described at earlier APS/DFD meetings. With a field-of-view of 2.3x2.9 m (7.5x9.5 feet), it is the largest indoor schlieren system in the world. Still and video examples of several full-scale airflows and heat-transfer problems visualized thus far will be shown. These include: heating and ventilation airflows, flows due to appliances and equipment, the thermal plumes of people, the aerodynamics of an explosive trace detection portal, gas leak detection, shock wave motion associated with aviation security problems, and heat transfer from live crops. Planned future projects include visualizing fume-hood and grocery display freezer airflows and studying the dispersion of insect repellent plumes at full scale.
Effect of Viewing Plane on Perceived Distances in Real and Virtual Environments
ERIC Educational Resources Information Center
Geuss, Michael N.; Stefanucci, Jeanine K.; Creem-Regehr, Sarah H.; Thompson, William B.
2012-01-01
Three experiments examined perceived absolute distance in a head-mounted display virtual environment (HMD-VE) and a matched real-world environment, as a function of the type and orientation of the distance viewed. In Experiment 1, participants turned and walked, without vision, a distance to match the viewed interval for both egocentric…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-28
.... Provide specific examples to illustrate your concerns, and suggest alternatives. 7. Explain your views as... reasonable further progress (RFP) plan, contingency measures, and any other planning State Implementation... did not clearly explain EPA's views on the applicability of CFR 51.1004(c) to the 2006 PM 2.5 NAAQS...
Volumetric depth peeling for medical image display
NASA Astrophysics Data System (ADS)
Borland, David; Clarke, John P.; Fielding, Julia R.; Taylor II, Russell M.
2006-01-01
Volumetric depth peeling (VDP) is an extension to volume rendering that enables display of otherwise occluded features in volume data sets. VDP decouples occlusion calculation from the volume rendering transfer function, enabling independent optimization of settings for rendering and occlusion. The algorithm is flexible enough to handle multiple regions occluding the object of interest, as well as object self-occlusion, and requires no pre-segmentation of the data set. VDP was developed as an improvement for virtual arthroscopy for the diagnosis of shoulder-joint trauma, and has been generalized for use in other simple and complex joints, and to enable non-invasive urology studies. In virtual arthroscopy, the surfaces in the joints often occlude each other, allowing limited viewpoints from which to evaluate these surfaces. In urology studies, the physician would like to position the virtual camera outside the kidney collecting system and see inside it. By rendering invisible all voxels between the observer's point of view and objects of interest, VDP enables viewing from unconstrained positions. In essence, VDP can be viewed as a technique for automatically defining an optimal data- and task-dependent clipping surface. Radiologists using VDP display have been able to perform evaluations of pathologies more easily and more rapidly than with clinical arthroscopy, standard volume rendering, or standard MRI/CT slice viewing.
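The following Python sketch illustrates the core idea of volumetric depth peeling on a single ray: samples are ignored until the first occluding region has been entered and exited, after which ordinary front-to-back compositing resumes. The opacity threshold and sample values are placeholders, not the authors' transfer-function settings.

```python
import numpy as np

def vdp_ray(opacities, colors, occlusion_threshold=0.3):
    """Composite one ray, peeling away the first occluding region.

    opacities: per-sample opacity along the ray, front to back.
    colors:    per-sample grayscale value.
    Samples are skipped until the ray has passed completely through the first
    region whose opacity exceeds `occlusion_threshold`; the remaining samples
    are blended with ordinary front-to-back alpha compositing.
    """
    entered = exited = False
    c_acc, a_acc = 0.0, 0.0
    for a, c in zip(opacities, colors):
        if not exited:
            if a > occlusion_threshold:
                entered = True            # inside the occluding layer
            elif entered:
                exited = True             # just left it -- start rendering here
            else:
                continue                  # still in front of the occluder
            if not exited:
                continue                  # skip the occluder's own samples
        # standard front-to-back compositing of the revealed samples
        c_acc += (1.0 - a_acc) * a * c
        a_acc += (1.0 - a_acc) * a
        if a_acc > 0.99:
            break                         # early ray termination
    return c_acc

# Ray passing through an occluding wall (opacity 0.8) before the feature of interest.
ops  = np.array([0.0, 0.0, 0.8, 0.8, 0.0, 0.1, 0.6, 0.6, 0.0])
cols = np.array([0.0, 0.0, 0.2, 0.2, 0.0, 0.5, 1.0, 1.0, 0.0])
print(round(vdp_ray(ops, cols), 3))
```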
Measuring visual discomfort associated with 3D displays
NASA Astrophysics Data System (ADS)
Lambooij, M.; Fortuin, M.; Ijsselsteijn, W. A.; Heynderickx, I.
2009-02-01
Some people report visual discomfort when watching 3D displays. For both the objective measurement of visual fatigue and the subjective measurement of visual discomfort, we would like to arrive at general indicators that are easy to apply in perception experiments. Previous research yielded contradictory results concerning such indicators. We hypothesize two potential causes for this: 1) not all clinical tests are equally appropriate to evaluate the effect of stereoscopic viewing on visual fatigue, and 2) there is a natural variation in susceptibility to visual fatigue amongst people with normal vision. To verify these hypotheses, we designed an experiment, consisting of two parts. Firstly, an optometric screening was used to differentiate participants in susceptibility to visual fatigue. Secondly, in a 2×2 within-subjects design (2D vs 3D and two-view vs nine-view display), a questionnaire and eight optometric tests (i.e. binocular acuity, fixation disparity with and without fusion lock, heterophoria, convergent and divergent fusion, vergence facility and accommodation response) were administered before and immediately after a reading task. Results revealed that participants found to be more susceptible to visual fatigue during screening showed a clinically meaningful increase in fusion amplitude after having viewed 3D stimuli. Two questionnaire items (i.e., pain and irritation) were significantly affected by the participants' susceptibility, while two other items (i.e., double vision and sharpness) were scored differently between 2D and 3D for all participants. Our results suggest that a combination of fusion range measurements and self-report is appropriate for evaluating visual fatigue related to 3D displays.
Using Optimization to Improve Test Planning
2017-09-01
The report concludes that, if the input were made more user-friendly and the output displayed differently, the test and evaluation schedule optimization model would be a good tool for test and evaluation schedulers. Subject terms: schedule optimization, test planning.
Virtual presence for mission visualization: computer game technology provides a new approach
NASA Astrophysics Data System (ADS)
Hussey, K.
2007-08-01
The concept of virtual presence for mission and planetary science visualization is to allow the public to "see" in space as if they were either riding aboard or standing next to an ESA/NASA spacecraft. Our approach to accomplishing this goal is to utilize and extend the same technology used by the computer gaming industry. With this technology, people would be able to immediately "look" in any direction from their virtual location and "zoom in" at will. Whenever real data for their "view" exists, it would be incorporated into the scene. Where data is missing, a high-fidelity simulation of the view would be generated to fill in the chosen field of view. The observer could also change the time of observation into the past or future. The potential for applying this technology to the development of educational curricula is huge. On the engineering side, all allowable spacecraft and environmental parameters that are being measured and sent to Earth would be immediately viewable, as if looking at the dashboard of a car or the instrument panel of an aircraft. Historical information could also be displayed upon request. This can revolutionize the way the general public and the planetary science community view ESA/NASA missions, and it provides an educational context that is attractive to the younger generation. While conceptually using this technology is quite simple, the cross-discipline technical challenges are very demanding. This technology is currently under development and application at JPL to assist current missions in viewing their data, communicating with the public, and visualizing future mission plans. Real-time demonstrations of the technology described will be shown.
NASA Technical Reports Server (NTRS)
Luo, Victor; Khanampornpan, Teerapat; Boehmer, Rudy A.; Kim, Rachel Y.
2011-01-01
This software graphically displays all pertinent information from a Predicted Events File (PEF) using the Java Swing framework, which allows for multi-platform support. The PEF is hard to weed through when looking for specific information, and the MRO (Mars Reconnaissance Orbiter) Mission Planning & Sequencing Team (MPST) wanted a different way to visualize the data. This tool provides the team with a visual way of reviewing and error-checking the sequence product. The front end of the tool contains much of the aesthetically appealing material for viewing. The time stamp is displayed in the top left corner, and highlighted details are displayed in the bottom left corner. The time bar stretches along the top of the window, and the rest of the space is allotted for blocks and step functions. A preferences window is used to control the layout of the sections along with the ability to choose the color and size of the blocks. Double-clicking on a block shows the information contained within the block, and zooming in to a certain level graphically displays that information as an overlay on the block itself. Other functions include using hotkeys to navigate, an option to jump to a specific time, enabling a vertical line, and double-clicking to zoom in and out. The back end involves a configuration file that allows a more experienced user to pre-define the structure of a block, a single event, or a step function. The individual has to determine what information is important within each block and what actually defines the beginning and end of a block. This gives the user much more flexibility in terms of what the tool searches for. In addition to this configurability, all the settings in the preferences window are saved in the configuration file as well.
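As a rough sketch of the configurable block grouping described above (the actual PEF syntax and configuration format are not given in the abstract, so the event tuples, marker names, and config structure below are hypothetical):

```python
# Hypothetical event tuples: (time_string, event_name). A "block" is defined in a
# user-editable configuration by its start and end event names, mirroring the idea
# that the user decides what opens and closes a block.
BLOCK_CONFIG = {"DOWNLINK": ("DL_START", "DL_END"),
                "MANEUVER": ("MVR_BEGIN", "MVR_DONE")}

events = [("2011-001T00:10:00", "DL_START"),
          ("2011-001T00:42:00", "DL_END"),
          ("2011-001T01:05:00", "MVR_BEGIN"),
          ("2011-001T01:20:00", "MVR_DONE")]

def group_blocks(events, config):
    """Pair start/end events into named blocks for timeline display."""
    open_blocks, blocks = {}, []
    for t, name in events:
        for block_name, (start, end) in config.items():
            if name == start:
                open_blocks[block_name] = t
            elif name == end and block_name in open_blocks:
                blocks.append((block_name, open_blocks.pop(block_name), t))
    return blocks

for name, t0, t1 in group_blocks(events, BLOCK_CONFIG):
    print(f"{name}: {t0} -> {t1}")
```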
Method To Display Data On A Face Mask
NASA Technical Reports Server (NTRS)
Moore, Kevin-Duron
1995-01-01
Proposed electronic instrument displays information on diver's or firefighter's face mask. Includes mask, prism, electronic readouts, transceiver and control electronics. Mounted at periphery of diver's field of view to provide data on elapsed time, depth, pressure, and temperature. Provides greater safety and convenience to user.
ERIC Educational Resources Information Center
Walsh, Janet
1982-01-01
Discusses issues related to possible health hazards associated with viewing video display terminals. Includes some findings of the 1979 NIOSH report on Potential Hazards of Video Display Terminals indicating level of radiation emitted is low and providing recommendations related to glare and back pain/muscular fatigue problems. (JN)
FSC LCD technology for military and avionics applications
NASA Astrophysics Data System (ADS)
Sarma, Kalluri R.; Schmidt, John; Roush, Jerry
2009-05-01
Field sequential color (FSC) liquid crystal displays (LCDs), using a high-speed LCD mode and an R, G, B LED backlight, offer significant potential for lower power consumption, higher resolution, higher brightness, and lower cost compared with conventional R, G, B color-filter-based LCDs, and are thus of interest for various military and avionic display applications. While DLP projection TVs and camcorder LCD viewfinder displays using FSC technology have been introduced in the consumer market, large-area direct-view LCDs based on FSC technology have not yet reached the commercial market. Further, large-area FSC LCDs can present unique operational issues in avionic and military environments, particularly for operation over a broad temperature range and with respect to their susceptibility to the color breakup image artifact. In this paper we review the current status of FSC LCD technology and then discuss the results of our evaluation of this technology for avionic applications.
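To make the basic field-sequential principle concrete (one color field at a time, at three times the source frame rate, with no color filters), here is a minimal numpy sketch; the frame size and rates are illustrative, not taken from the paper.

```python
import numpy as np

def to_field_sequential(rgb_frame):
    """Split one RGB frame into the R, G, B fields shown sequentially.

    Each field is a monochrome image driven while only the matching LED
    color of the backlight is on, so the fields must be shown at 3x the
    source frame rate for the eye to fuse them into full color.
    """
    return [rgb_frame[:, :, ch] for ch in range(3)]   # R, G, B fields

frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # e.g., 60 Hz source frame
fields = to_field_sequential(frame)                               # shown at e.g. 180 Hz
print([f.shape for f in fields])
```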
1980-03-01
[Text and a figure are garbled in the source; the figure showed the camera, film, mirror, monitor, and Fresnel rear-projection screen arrangement used with a full-color terrain model.] Segments of NOE flight were recorded on videotape. One segment was filmed at the Hunter-Liggett, Calif., military reservation, and the other segment was filmed near Fort Rucker, Ala. The tapes were viewed by each participant pilot on the 13-cm and the 26-cm CRT displays, and the pilots could attenuate the display.
High-brightness displays in integrated weapon sight systems
NASA Astrophysics Data System (ADS)
Edwards, Tim; Hogan, Tim
2014-06-01
In the past several years Kopin has demonstrated the ability to provide ultra-high brightness, low power display solutions in VGA, SVGA, SXGA and 2k x 2k display formats. This paper will review various approaches for integrating high brightness overlay displays with existing direct view rifle sights and augmenting their precision aiming and targeting capability. Examples of overlay display systems solutions will be presented and discussed. This paper will review significant capability enhancements that are possible when augmenting the real-world as seen through a rifle sight with other soldier system equipment including laser range finders, ballistic computers and sensor systems.
An evaluation of dynamic lip-tooth characteristics during speech and smile in adolescents.
Ackerman, Marc B; Brensinger, Colleen; Landis, J Richard
2004-02-01
This retrospective study was conducted to measure lip-tooth characteristics of adolescents. Pretreatment video clips of 1242 consecutive patients were screened for Class-I skeletal and dental patterns. After all inclusion criteria were applied, the final sample consisted of 50 patients (27 boys, 23 girls) with a mean age of 12.5 years. The raw digital video stream of each patient was edited to select a single image frame representing the patient saying the syllable "chee" and a second single image representing the patient's posed social smile and saved as part of a 12-frame image sequence. Each animation image was analyzed using a SmileMesh computer application to measure the smile index (the ratio of the intercommissure width divided by the interlabial gap), intercommissure width (mm), interlabial gap (mm), percent incisor below the intercommissure line, and maximum incisor exposure (mm). The data were analyzed using SAS (version 8.1). All recorded differences in linear measures had to be > or = 2 mm. The results suggest that anterior tooth display at speech and smile should be recorded independently but evaluated as part of a dynamic range. Asking patients to say "cheese" and then smile is no longer a valid method to elicit the parameters of anterior tooth display. When planning the vertical positions of incisors during orthodontic treatment, the orthodontist should view the dynamics of anterior tooth display as a continuum delineated by the time points of rest, speech, posed social smile, and a Duchenne smile.
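A tiny worked example of the smile index defined above, i.e., intercommissure width divided by interlabial gap; the measurements used are made-up numbers for illustration.

```python
def smile_index(intercommissure_width_mm, interlabial_gap_mm):
    """Smile index = intercommissure width / interlabial gap."""
    return intercommissure_width_mm / interlabial_gap_mm

# Illustrative values only: a 60 mm commissure-to-commissure width and an
# 8 mm interlabial gap give a smile index of 7.5.
print(smile_index(60.0, 8.0))
```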
NASA Technical Reports Server (NTRS)
Clark, T. E.; Salazar, G. A.; Brainard, G. C.
2016-01-01
The goal of this investigation is to determine design limitations and architectural solutions that limit the impact that light from displays and indicator lamps has on operational-environment task lighting and on lighting-countermeasure spectrum constraints. It is concerning that this innovative architectural lighting system could be compromised by spectra from display systems, architectural materials, and structures that are not considered as part of a full system design implementation. The introduction into the spacecraft volume of many Commercial Off the Shelf (COTS) products that contain LEDs, without consideration of human factors and biological constraints, is another problem. Displays and indicators are a necessary part of the spacecraft, and it is the goal of this research project to determine constraints and solutions that allow these systems to be integrated while minimizing how they modify the lighting environment. Due to the potentially broad scope of this endeavor, the project team developed constraints for the evaluation, which focuses on tasks that require significant time in the same environment and have a large chance of affecting the light spectrum the crew is expected to receive from the architectural lighting system. The team plans to use recent HRP research on "Net Habitable Volume" [1] to provide the boundary conditions for volume size. A Zemax® lighting model was developed of a small enclosure with high-intensity overhead lighting and a standard-intensity display with LED indicator arrays. The computer model demonstrated a work surface illuminated at a high level by the overhead light source compared with displays and indicators whose light is parallel to the work plane. The overhead lighting oversaturated spectral contributions from the display and indicator at the task work surface. Interestingly, when the observer looked at the displays and LEDs within the small enclosure, their spectral contribution was significant but could be reduced by reflecting overhead light from the wall(s) to the observer. Direct observation of displays and LEDs is an issue because the user's viewing area is a display, not an illuminated work surface. Since avionics command centers consume significant crew time, the tasks that seemed at higher risk for unwanted spectral contributions were those performed in an operational volume with a significant quantity of displays and indicators that were either under direct observation by the crew or affecting a volume in which the crew may be required to sleep.
ERIC Educational Resources Information Center
Tetlow, Linda
2009-01-01
Display took a wide variety of forms ranging from students presenting their initial planning and thought processes, to displays of their finished work, and their suggestions for extending the task should they, or others, have time to return to it in the future. A variety of different media were used from traditional posters in many shapes and…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-02
... listed in the FOR FURTHER INFORMATION CONTACT section to view the hard copy of the docket. You may view... to illustrate your concerns, and suggest alternatives. g. Explain your views as clearly as possible... years following designation--i.e., until 2014--and must include contingency measures. In May 20, 2005...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-29
... INFORMATION CONTACT section to view the hard copy of the docket. You may view the hard copy of the docket... your concerns, and suggest alternatives. g. Explain your views as clearly as possible, avoiding the use..., Contingency Plan, and Transportation Conformity Requirements including the Motor Vehicle Emission Budget for...
Preferred viewing distance of liquid crystal high-definition television.
Lee, Der-Song
2012-01-01
This study explored the effect of TV size, illumination, and viewing angle on preferred viewing distance in high-definition liquid crystal display televisions (HDTV). Results showed that the mean preferred viewing distance was 2856 mm. TV size and illumination significantly affected preferred viewing distance. The larger the screen size, the greater the preferred viewing distance, at around 3-4 times the width of the screen (W). The greater the illumination, the greater the preferred viewing distance. Viewing angle also correlated significantly with preferred viewing distance. The more deflected from direct frontal view, the shorter the preferred viewing distance seemed to be. Copyright © 2011 Elsevier Ltd and The Ergonomics Society. All rights reserved.
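A small worked example of the reported 3-4 x screen-width rule of thumb; the TV diagonals and the 16:9 aspect ratio assumed below are illustrative and not taken from the study.

```python
import math

def preferred_distance_range_mm(diagonal_in, aspect=(16, 9), multiples=(3, 4)):
    """Preferred viewing distance expressed as 3-4 times the screen width W."""
    a, b = aspect
    width_mm = diagonal_in * 25.4 * a / math.hypot(a, b)   # screen width from diagonal
    return tuple(round(width_mm * m) for m in multiples)

for size in (42, 55, 65):                       # illustrative HDTV diagonals (inches)
    lo, hi = preferred_distance_range_mm(size)
    print(f'{size}": {lo}-{hi} mm')
```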
NASA Space Science Day Events-Engaging Students in Science
NASA Technical Reports Server (NTRS)
Foxworth, S.; Mosie, A.; Allen, J.; Kent, J.; Green, A.
2015-01-01
The NASA Space Science Day (NSSD) Event follows the same format of planning and execution at all host universities and colleges. These institutions realized the importance of such an event and sought funding to continue hosting NSSD events. In 2014, the NASA Johnson Space Center ARES team supported the following universities and colleges that hosted an NSSD event: the University of Texas at Brownsville, San Jacinto College, Georgia Tech University, and Huston-Tillotson University. Other universities and colleges are continuing to conduct their own NSSD events. NASA Space Science Day Events are supported through continued funding from the NASA Discovery Program. Community Night begins with a NASA speaker and an Astromaterials display. The entire community surrounding the host university or college is invited to the Community Night. This year at the Huston-Tillotson (HTU) NSSD, Dr. Laurie Carrillo, a NASA engineer, spoke to the public and students, answered questions, and shared her experiences and career path. The speaker sets a tone of adventure and discovery for the NSSD event. After the speaker, the public is able to view lunar and meteorite samples and ask questions of the ARES team. Students and teachers from nearby schools attended the NSSD Event the following day. Students are able to see the university or college campus, and the university or college mentors are available for questions. Students rotate through hour-long Science, Technology, Engineering, and Mathematics (STEM) sessions and a display area. These activities are drawn from the Discovery Program activities that tie in directly with K-12 instruction. The sessions highlight the STEM in exploration and discovery. The lunar and meteorite display is again available for students to view and ask questions. In the display area there are also other interactive displays. Angela Green, from San Jacinto College, brought the Starlab so students could watch a planetarium exhibit at the NSSD at Huston-Tillotson University. Many HTU mentors led activities in the display room, such as building a comet, volcano layering, and robotics manipulation. Students were exposed to a variety of STEM career possibilities and information. The students could relate the displays and sessions to what they were learning in school, and the HTU mentors made that connection clear for them. The students ended the event with a mission design presentation: they were able to take what they had learned during the day and create a mission. Students presented their mission designs and gained confidence in STEM. Conclusion: NASA Space Science Day Events provide an out-of-school experiential learning environment for students to enhance their STEM curriculum and let students see a college campus. The experiences students gain from attending an NSSD give them the confidence to see themselves on a college campus, possibly majoring in a STEM degree, and to understand the importance of completing school.
Front and rear projection autostereoscopic 3D displays based on lenticular sheets
NASA Astrophysics Data System (ADS)
Wang, Qiong-Hua; Zang, Shang-Fei; Qi, Lin
2015-03-01
A front projection autostereoscopic display is proposed. The display is composed of eight projectors and a 3D-image-guided screen having a lenticular sheet and a retro-reflective diffusion screen. Based on optical multiplexing and de-multiplexing, the optical functions of the 3D-image-guided screen are parallax image interlacing and view separating, which make it capable of reconstructing 3D images without quality degradation from the front direction. The operating principle, optical design calculation equations, and correction method for the parallax images are given. A prototype of the front projection autostereoscopic display is developed, which enhances brightness and 3D perception and improves space efficiency. The performance of this prototype is evaluated by measuring the luminance and crosstalk distribution along the horizontal direction at the optimum viewing distance. We also propose a rear projection autostereoscopic display consisting of eight projectors, a projection screen, and two lenticular sheets. The operating principle and calculation equations are described in detail, and the parallax images are corrected by means of homography. A prototype of the rear projection autostereoscopic display is developed, and the normalized luminance distributions of the viewing zones from the measurement are given. The results agree well with the designed values, and the prototype presents high-resolution, high-brightness 3D images. The research has potential applications in commercial entertainment and movies requiring realistic 3D perception.
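The abstract states that the parallax images are corrected by means of homography; the OpenCV sketch below shows a generic pre-warp of that kind, with placeholder corner correspondences standing in for the prototype's calibration data.

```python
import cv2
import numpy as np

# Placeholder correspondences: where the four corners of the ideal parallax image
# actually land on the projection screen (e.g., as measured by a camera during
# calibration). Real values would come from the prototype's calibration step.
src = np.float32([[0, 0], [1919, 0], [1919, 1079], [0, 1079]])        # ideal corners
dst = np.float32([[18, 12], [1902, 5], [1890, 1060], [25, 1075]])     # observed corners

H = cv2.getPerspectiveTransform(src, dst)          # 3x3 homography
parallax = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8) # stand-in image

# Pre-warp the parallax image with the inverse homography so that, after the
# projector/screen distortion, it appears geometrically correct to the viewer.
corrected = cv2.warpPerspective(parallax, np.linalg.inv(H), (1920, 1080))
print(corrected.shape)
```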
Atmospheric Science Data Center
2013-04-16
... is a stereoscopic "anaglyph" created from data in MISR's red spectral band, and generated by displaying the 46-degree backward view in red ... the surface. Viewing the anaglyph with red-cyan glasses (red filter over the left eye) gives a perception of height. No separation is ...
A Structural View of American Educational History
ERIC Educational Resources Information Center
Maxcy, Spencer J.
1977-01-01
Displays the components of the structuralist views of Levi-Strauss, Michel Foucault, and Thomas S. Kuhn; constructs a model for doing structuralist studies in educational research; and tests the model on the pragmatic/progressive period in American educational history. (Author/IRT)
Development of an immersive virtual reality head-mounted display with high performance.
Wang, Yunqi; Liu, Weiqi; Meng, Xiangxiang; Fu, Hanyi; Zhang, Daliang; Kang, Yusi; Feng, Rui; Wei, Zhonglun; Zhu, Xiuqing; Jiang, Guohua
2016-09-01
To resolve the contradiction between a large field of view and high resolution in immersive virtual reality (VR) head-mounted displays (HMDs), an HMD monocular optical system with a large field of view and high resolution was designed. The system was fabricated by adopting aspheric technology with CNC grinding and a high-resolution LCD as the image source. With this monocular optical system, an HMD binocular optical system with a wide-range, continuously adjustable interpupillary distance was achieved in the form of partially overlapping fields of view (FOV) combined with a screw adjustment mechanism. A fast image-processor-centered LCD driver circuit and an image preprocessing system were also built to address binocular vision inconsistency in the partially overlapping FOV binocular optical system. The distortions of the HMD optical system with a large field of view were measured, and the optical distortions in the display and the trapezoidal distortions introduced during image processing were corrected by a calibration model for reverse rotations and translations. A high-performance, not-fully-transparent VR HMD device with high resolution (1920×1080) and a large FOV [141.6°(H)×73.08°(V)] was developed. The average angular resolution over the full field of view is 18.6 pixels per degree. With the device, high-quality VR simulations can be completed under various scenarios, and the device can be used for simulated training in aeronautics, astronautics, and other fields with corresponding platforms. The developed device has positive practical significance.
Stage Cylindrical Immersive Display
NASA Technical Reports Server (NTRS)
Abramyan, Lucy; Norris, Jeffrey S.; Powell, Mark W.; Mittman, David S.; Shams, Khawaja S.
2011-01-01
Panoramic images with a wide field of view are intended to provide a better understanding of an environment by placing objects of the environment on one seamless image. However, understanding the sizes and relative positions of the objects in a panorama is not intuitive and is prone to errors, because the field of view is unnatural to human perception. Scientists are often faced with the difficult task of interpreting the sizes and relative positions of objects in an environment when viewing an image of the environment on computer monitors or prints. A panorama can display an object that appears to be to the right of the viewer when it is, in fact, behind the viewer. This misinterpretation can be very costly, especially when the environment is remote and/or only accessible by unmanned vehicles. A 270° cylindrical display has been developed that surrounds the viewer with carefully calibrated panoramic imagery that correctly engages their natural kinesthetic senses and provides a more accurate awareness of the environment. The cylindrical immersive display offers a more natural window to the environment than a standard cubic CAVE (Cave Automatic Virtual Environment), and the geometry allows multiple collocated users to simultaneously view data and share important decision-making tasks. A CAVE is an immersive virtual reality environment that allows one or more users to absorb themselves in a virtual environment. A common CAVE setup is a room-sized cube where the cube sides act as projection planes. By nature, all cubic CAVEs face a problem with edge matching at the edges and corners of the display. Modern immersive displays have found ways to minimize seams by creating very tight edges, and they rely on the user to ignore the seam. One significant deficiency of flat-walled CAVEs is that the sense of orientation and perspective within the scene is broken across adjacent walls. On any single wall, parallel lines properly converge at their vanishing point as they should, and the sense of perspective within the scene contained on only one wall has integrity. Unfortunately, parallel lines that lie on adjacent walls do not necessarily remain parallel. This results in inaccuracies in the scene that can distract the viewer and subtract from the immersive experience of the CAVE.
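To make the orientation point concrete, the short sketch below maps a panorama column to viewer-relative azimuth when the image is wrapped on a 270-degree cylinder; the panorama width and angular coverage are assumed values.

```python
def azimuth_on_cylinder(column, width_px, coverage_deg=270.0):
    """Viewer-relative azimuth (deg) of a panorama column wrapped on a cylinder.

    0 deg is straight ahead; +/-135 deg (for 270-degree coverage) is well behind
    the viewer's shoulders, even though on a flat print the same column simply
    sits at the right or left edge of the image.
    """
    return (column / (width_px - 1) - 0.5) * coverage_deg

WIDTH = 8000                       # assumed panorama width in pixels
for col in (0, 2000, 4000, 6000, 7999):
    print(col, round(azimuth_on_cylinder(col, WIDTH), 1), "deg")
```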
Effect of viewing distance on 3D fatigue caused by viewing mobile 3D content
NASA Astrophysics Data System (ADS)
Mun, Sungchul; Lee, Dong-Su; Park, Min-Chul; Yano, Sumio
2013-05-01
With the advent of autostereoscopic display techniques and the increased demand for smartphones, there has been significant growth in mobile TV markets. The rapid growth in technical, economic, and social aspects has encouraged 3D TV manufacturers to apply 3D rendering technology to mobile devices so that people have more opportunities to come into contact with 3D content anytime and anywhere. Although mobile 3D technology is driving the current market growth, one issue must be addressed for consistent development and growth in the display market: human factors linked to mobile 3D viewing should be taken into consideration before developing mobile 3D technology. Many studies have investigated whether mobile 3D viewing causes undesirable biomedical effects such as motion sickness and visual fatigue, but few have examined the main factors that adversely affect human health. Viewing distance is considered one of the main factors in establishing optimized viewing environments from a viewer's point of view. Thus, in an effort to determine human-friendly viewing environments, this study aims to investigate the effect of viewing distance on the human visual system when exposed to mobile 3D environments. By recording and analyzing brainwaves before and after watching mobile 3D content, we explore how viewing distance affects the viewing experience from physiological and psychological perspectives. The results obtained in this study are expected to provide viewing guidelines for viewers, help protect viewers against undesirable 3D effects, and support gradual progress toward human-friendly mobile 3D viewing.
Advanced freeform optics enabling ultra-compact VR headsets
NASA Astrophysics Data System (ADS)
Benitez, Pablo; Miñano, Juan C.; Zamora, Pablo; Grabovičkić, Dejan; Buljan, Marina; Narasimhan, Bharathwaj; Gorospe, Jorge; López, Jesús; Nikolić, Milena; Sánchez, Eduardo; Lastres, Carmen; Mohedano, Ruben
2017-06-01
We present novel advanced optical designs with a dramatically smaller display-to-eye distance, excellent image quality and a large field of view (FOV). This enables headsets to be much more compact, typically occupying about a fourth of the volume of a conventional headset with the same FOV. The design strategy of these optics is based on a multichannel approach, which reduces the distance from the eye to the display and the display size itself. Unlike conventional microlens arrays, which are also multichannel devices, our designs use freeform optical surfaces to produce excellent imaging quality in the entire field of view, even when operating at very oblique incidences. We present two families of compact solutions that use different types of lenslets: (1) refractive designs, whose lenslets are typically composed of two refractive surfaces each; and (2) light-folding designs that use prism-like three-surface lenslets, in which rays undergo refraction, reflection, total internal reflection and refraction again. The number of lenslets is not fixed, so different configurations may arise, adaptable to flat or curved displays with different aspect ratios. In the refractive designs the distance between the optics and the display decreases with the number of lenslets, allowing a light field to be displayed when the lenslets become significantly smaller than the eye pupil. On the other hand, the correlation between the number of lenslets and the optics-to-display distance is broken in light-folding designs, since their geometry permits achieving a very short display-to-eye distance even with a small number of lenslets.
Head-Mounted and Head-Up Display Glossary
NASA Technical Reports Server (NTRS)
Newman, Richard L.; Allen, J. Edwin W. (Technical Monitor)
1997-01-01
One of the problems in head-up and helmet-mounted display (HMD) literature has been a lack of standardization of words and abbreviations. Several different words have been used for the same concept; for example, flight path angle, flight path marker, velocity vector, and total velocity vector all refer to the same thing. In other cases, the same term has been used with two different meanings, such as binocular field-of-view, which to some means the field-of-view visible to both the left and right eyes, and to others the field-of-view visible to either the left or right eye or both. Many of the terms used in HMD studies have not been well defined. We need a common language to ensure that system descriptions are communicated. As an example, the term 'stabilized' has been widely used with two meanings. 'Roll-stabilized' has been used to mean a symbol which rotates to indicate the roll or bank of the aircraft. 'World-stabilized' and 'head-stabilized' have both been used to indicate symbols which move to remain fixed with respect to external objects. HMDs present unique symbology problems not found in HUDs. Foremost among these is the issue of maintaining spatial orientation of the symbols. All previous flight displays, round-dial instruments, HDDs, and HUDs have been fixed in the cockpit. With the HMD, the flight display can move through a large angle. The coordinate frames used in transforming from the real world to the aircraft to the HMD have not been consistently defined. This glossary contains terms relating to optics and vision, displays, flight information, weapons, and aircraft systems. Some definitions, such as Navigation Display, have been added to clarify the definitions for Primary Flight Display and Primary Flight Reference. A list of HUD/HMD-related abbreviations is also included.
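Because the glossary notes that the coordinate frames used to go from the real world to the aircraft to the HMD have not been consistently defined, the sketch below fixes one common convention (Z-Y-X yaw-pitch-roll rotations) purely for illustration; the function names and angle conventions are assumptions, not definitions drawn from the glossary itself.

import numpy as np

def rot(yaw, pitch, roll):
    """Body-to-world rotation from yaw, pitch, roll in radians
    (Z-Y-X convention, one of several conventions in use)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def world_to_hmd(v_world, aircraft_ypr, head_ypr):
    """Express a world-fixed direction in HMD (head) axes by chaining
    world->aircraft and aircraft->head rotations."""
    to_aircraft = rot(*aircraft_ypr).T     # inverse of aircraft attitude
    to_head = rot(*head_ypr).T             # inverse of head attitude w.r.t. aircraft
    return to_head @ to_aircraft @ np.asarray(v_world, dtype=float)

# A world-stabilized symbol direction seen through the HMD when the aircraft
# is banked 30 deg and the pilot's head is turned 45 deg to the left:
print(world_to_hmd([1.0, 0.0, 0.0],
                   aircraft_ypr=(0.0, 0.0, np.radians(30)),
                   head_ypr=(np.radians(-45), 0.0, 0.0)))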
42 CFR 37.4 - Plans for chest roentgenographic examinations.
Code of Federal Regulations, 2013 CFR
2013-10-01
... covered by the plan; (4) The name and location of the approved X-ray facility or facilities, and the... approval as the originally approved plan. (e) The operator must promptly display in a visible location on... NIOSH. The proposed plan or change in plan must remain posted in a visible location on the bulletin...
NASA Astrophysics Data System (ADS)
Chen, Shu-Hsia; Wu, Shin-Tson
1992-10-01
A broad range of interdisciplinary subjects related to display technologies is addressed, with emphasis on high-definition displays, CRTs, projection displays, materials for display application, flat-panel displays, display modeling, and polymer-dispersed liquid crystals. Particular attention is given to a CRT approach to high-definition television display, a superhigh-resolution electron gun for color display CRT, a review of active-matrix liquid-crystal displays, color design for LCD parameters in projection and direct-view applications, annealing effects on ZnS:TbF3 electroluminescent devices prepared by RF sputtering, polycrystalline silicon thin film transistors with low-temperature gate dielectrics, refractive index dispersions of liquid crystals, a new rapid-response polymer-dispersed liquid-crystal material, and improved liquid crystals for active-matrix displays using high-tilt-orientation layers. (No individual items are abstracted in this volume)
Generalized pipeline for preview and rendering of synthetic holograms
NASA Astrophysics Data System (ADS)
Pappu, Ravikanth; Sparrell, Carlton J.; Underkoffler, John S.; Kropp, Adam B.; Chen, Benjie; Plesniak, Wendy J.
1997-04-01
We describe a general pipeline for the computation and display of either fully-computed holograms or holographic stereograms using the same 3D database. A rendering previewer on a Silicon Graphics Onyx allows a user to specify viewing geometry, database transformations, and scene lighting. The previewer then generates one of two descriptions of the object--a series of perspective views or a polygonal model--which is then used by a fringe rendering engine to compute fringes specific to hologram type. The images are viewed on the second generation MIT Holographic Video System. This allows a viewer to compare holographic stereograms with fully-computed holograms originating from the same database and comes closer to the goal of a single pipeline being able to display the same data in different formats.
Psycho-physiological effects of head-mounted displays in ubiquitous use
NASA Astrophysics Data System (ADS)
Kawai, Takashi; Häkkinen, Jukka; Oshima, Keisuke; Saito, Hiroko; Yamazoe, Takashi; Morikawa, Hiroyuki; Nyman, Göte
2011-02-01
In this study, two experiments were conducted to evaluate the psycho-physiological effects of practical use of a monocular head-mounted display (HMD) in a real-world environment, assuming consumer-level applications such as viewing video content and receiving navigation information while walking. In Experiment 1, workload was examined for different types of stimulus presentation using an HMD (monocular or binocular, see-through or non-see-through). Experiment 2 focused on the relationship between the real-world environment and the visual information presented using a monocular HMD. Workload was compared between a case where participants walked while viewing video content unrelated to the real-world environment, and a case where participants walked while viewing visual information that augments the real-world environment, such as navigation cues.
Combat Vehicle Command and Control System Architecture Overview
1994-10-01
inserted in the software. Interactive interface displays and controls were prepared using rapidly prototyped software and were retained at the MWTB for ... being simulated: controls, sensor displays, and out-the-window displays for the crew; computer image generators (CIGs) for out-the-window and ... black hot viewing modes. The commander may access a number of capabilities of the CITV simulation, described below, from controls located around the
Natural 3D content on glasses-free light-field 3D cinema
NASA Astrophysics Data System (ADS)
Balogh, Tibor; Nagy, Zsolt; Kovács, Péter Tamás.; Adhikarla, Vamsi K.
2013-03-01
This paper presents a complete framework for capturing, processing and displaying free viewpoint video on a large-scale immersive light-field display. We present a combined hardware-software solution to visualize free viewpoint 3D video on a cinema-sized screen. The new glasses-free 3D projection technology can support a larger audience than existing autostereoscopic displays. We introduce and describe our new display system, including optical and mechanical design considerations, the capturing system and render cluster for producing the 3D content, and the various software modules driving the system. The indigenous display is the first of its kind, equipped with front-projection light-field HoloVizio technology controlling up to 63 MP. It has all the advantages of previous light-field displays and, in addition, allows a more flexible arrangement with a larger screen size, matching cinema or meeting-room geometries, yet is simpler to set up. The software system makes it possible to show 3D applications in real time, besides natural content captured from dense camera arrangements as well as from sparse cameras covering a wider baseline. Our software system, running on the GPU-accelerated render cluster, can also visualize pre-recorded Multi-view Video plus Depth (MVD4) videos on this glasses-free light-field cinema system, interpolating and extrapolating missing views.
Visual display aid for orbital maneuvering - Design considerations
NASA Technical Reports Server (NTRS)
Grunwald, Arthur J.; Ellis, Stephen R.
1993-01-01
This paper describes the development of an interactive proximity operations planning system that allows on-site planning of fuel-efficient multiburn maneuvers in a potential multispacecraft environment. Although this display system most directly assists planning by providing visual feedback to aid visualization of the trajectories and constraints, its most significant features include: (1) the use of an 'inverse dynamics' algorithm that removes control nonlinearities facing the operator, and (2) a trajectory planning technique that separates, through a 'geometric spreadsheet', the normally coupled complex problems of planning orbital maneuvers and allows solution by an iterative sequence of simple independent actions. The visual feedback of trajectory shapes and operational constraints, provided by user-transparent and continuously active background computations, allows the operator to make fast, iterative design changes that rapidly converge to fuel-efficient solutions. The planning tool provides an example of operator-assisted optimization of nonlinear cost functions.
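The 'inverse dynamics' algorithm and geometric spreadsheet are not detailed in the abstract, so the sketch below shows only the standard Clohessy-Wiltshire (Hill's) relative-motion propagation on which proximity-operations trajectory previews of this kind conventionally rest; the state, coast time, and mean-motion values are illustrative.

import numpy as np

def cw_propagate(state0, t, n):
    """Propagate a chaser's position relative to a target in a circular
    orbit using the Clohessy-Wiltshire (Hill's) equations.

    state0 : (x0, y0, z0, vx0, vy0, vz0) in the target's LVLH frame,
             x radial, y along-track, z cross-track (meters, m/s)
    t      : elapsed time in seconds
    n      : target mean motion in rad/s
    """
    x0, y0, z0, vx0, vy0, vz0 = state0
    s, c = np.sin(n * t), np.cos(n * t)
    x = (4 - 3 * c) * x0 + (s / n) * vx0 + (2 / n) * (1 - c) * vy0
    y = 6 * (s - n * t) * x0 + y0 + (2 / n) * (c - 1) * vx0 + (4 * s - 3 * n * t) / n * vy0
    z = c * z0 + (s / n) * vz0
    return np.array([x, y, z])

# Coast for 10 minutes from 10 m above and 100 m behind the target in low
# Earth orbit (mean motion ~0.00113 rad/s); values are illustrative only.
print(cw_propagate((10.0, -100.0, 0.0, 0.0, 0.0, 0.0), 600.0, 0.00113))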
IYA Outreach Plans for Appalachian State University's Observatories
NASA Astrophysics Data System (ADS)
Caton, Daniel B.; Pollock, J. T.; Saken, J. M.
2009-01-01
Appalachian State University will provide a variety of observing opportunities for the public during the International Year of Astronomy. These will be focused on both the campus GoTo Telescope Facility used by Introductory Astronomy students and the research facilities at our Dark Sky Observatory. The campus facility is composed of a rooftop deck with a roll-off roof housing fifteen Celestron C11 telescopes. During astronomy lab class meetings these telescopes are used either in situ or remotely by computer control from the adjacent classroom. For the IYA we will host the public for regular observing sessions at these telescopes. The research facility features a 32-inch DFM Engineering telescope with its dome attached to the Cline Visitor Center. The Visitor Center is still under construction and we anticipate its completion for a spring opening during IYA. The CVC will provide areas for educational outreach displays and a view of the telescope control room. Visitors will view celestial objects directly at the eyepiece. We are grateful for the support of the National Science Foundation, through grant number DUE-0536287, which provided instrumentation for the GoTo facility, and to J. Donald Cline for support of the Visitor Center.
3D Displays And User Interface Design For A Radiation Therapy Treatment Planning CAD Tool
NASA Astrophysics Data System (ADS)
Mosher, Charles E.; Sherouse, George W.; Chaney, Edward L.; Rosenman, Julian G.
1988-06-01
The long-term goal of the project described in this paper is to improve local tumor control through the use of computer-aided treatment design methods that can result in the selection of better treatment plans compared with conventional planning methods. To this end, a CAD tool for the design of radiation treatment beams is described. Crucial to the effectiveness of this tool are high-quality 3D display techniques. We have found that 2D and 3D display methods dramatically improve the comprehension of the complex spatial relationships between patient anatomy, radiation beams, and dose distributions. In order to take full advantage of these displays, an intuitive and highly interactive user interface was created. If the system is to be used by physicians unfamiliar with computer systems, it is essential to incorporate a user interface that allows the user to navigate through each step of the design process in a manner similar to what they are used to. Compared with conventional systems, we believe our display and CAD tools will allow the radiotherapist to achieve more accurate beam targeting, leading to a better radiation dose configuration for the tumor volume. This would result in a reduction of the dose to normal tissue.
Use of videotape for off-line viewing of computer-assisted radionuclide cardiology studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thrall, J.H.; Pitt, B.; Marx, R.S.
1978-02-01
Videotape offers an inexpensive method for off-line viewing of dynamic radionuclide cardiac studies. Two approaches to videotaping have been explored and demonstrated to be feasible. In the first, a video camera in conjunction with a cassette-type recorder is used to record from the computer display scope. Alternatively, for computer systems already linked to video display units, the video signal can be routed directly to the recorder. Acceptance and use of tracer cardiology studies will be enhanced by increased availability of the studies for clinical review. Videotape offers an inexpensive flexible means of achieving this.
Multiuser Collaboration with Networked Mobile Devices
NASA Technical Reports Server (NTRS)
Tso, Kam S.; Tai, Ann T.; Deng, Yong M.; Becks, Paul G.
2006-01-01
In this paper we describe a multiuser collaboration infrastructure that enables multiple mission scientists to remotely and collaboratively interact with visualization and planning software, using wireless networked personal digital assistants (PDAs) and other mobile devices. During ground operations of planetary rover and lander missions, scientists need to meet daily to review downlinked data and plan science activities. For example, scientists use the Science Activity Planner (SAP) in the Mars Exploration Rover (MER) mission to visualize downlinked data and plan rover activities during the science meetings [1]. Computer displays are projected onto large screens in the meeting room to enable the scientists to view and discuss downlinked images and data displayed by SAP and other software applications. However, only one person can interact with the software applications because input to the computer is limited to a single mouse and keyboard. As a result, the scientists have to verbally express their intentions, such as selecting a target at a particular location on the Mars terrain image, to that person in order to interact with the applications. This constrains communication and limits the returns of science planning. Furthermore, ground operations for Mars missions are fundamentally constrained by the short turnaround time for science and engineering teams to process and analyze data, plan the next uplink, generate command sequences, and transmit the uplink to the vehicle [2]. Therefore, improving ground operations is crucial to the success of Mars missions. The multiuser collaboration infrastructure enables users to control software applications remotely and collaboratively using mobile devices. The infrastructure includes (1) human-computer interaction techniques to provide natural, fast, and accurate inputs, (2) a communications protocol to ensure reliable and efficient coordination of the input devices and host computers, (3) an application-independent middleware that maintains the states, sessions, and interactions of individual users of the software applications, and (4) an application programming interface to enable tight integration of applications and the middleware. The infrastructure is able to support any software applications running under the Windows or Unix platforms. The resulting technologies are not only applicable to NASA mission operations, but are also useful in other situations such as design reviews, brainstorming sessions, and business meetings, as they can benefit from having the participants concurrently interact with the software applications (e.g., presentation applications and CAD design tools) to illustrate their ideas and provide inputs.
Real-Time Multimission Event Notification System for Mars Relay
NASA Technical Reports Server (NTRS)
Wallick, Michael N.; Allard, Daniel A.; Gladden, Roy E.; Wang, Paul; Hy, Franklin H.
2013-01-01
As the Mars Relay Network is in constant flux (missions and teams going through their daily workflow), it is imperative that users are aware of such state changes. For example, a change by an orbiter team can affect operations on a lander team. This software provides an ambient view of the real-time status of the Mars network. The Mars Relay Operations Service (MaROS) comprises a number of tools to coordinate, plan, and visualize various aspects of the Mars Relay Network. As part of MaROS, a feature set was developed that operates on several levels of the software architecture. These levels include a Web-based user interface, a back-end "ReSTlet" built in Java, and databases that store the data as it is received from the network. The result is a real-time event notification and management system, so mission teams can track and act upon events on a moment-by-moment basis. This software retrieves events from MaROS and displays them to the end user. Updates happen in real time, i.e., messages are pushed to users while they are logged into the system, and queued for later viewing when they are not online. The software does not do away with the email notifications, but augments them with in-line notifications. Further, this software expands the events that can generate a notification, and allows user-generated notifications. Existing software sends a smaller subset of mission-generated notifications via email. A common complaint of users was that the system-generated e-mails often "get lost" among the other e-mail that comes in. This software allows an expanded set of notifications (including user-generated ones) to be displayed in-line in the program. Separating notifications in this way can improve a user's workflow.
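A minimal sketch of the push-or-queue behavior described above (deliver in-line to logged-in users, queue for later viewing otherwise); the class and method names are hypothetical, and this is not the actual MaROS implementation.

from collections import defaultdict, deque

class EventNotifier:
    """Deliver events immediately to users who are logged in and hold
    them in a per-user queue for later viewing otherwise."""

    def __init__(self):
        self.online = {}                       # user -> callback for in-line display
        self.pending = defaultdict(deque)      # user -> queued events

    def login(self, user, display_callback):
        self.online[user] = display_callback
        while self.pending[user]:              # flush anything queued while offline
            display_callback(self.pending[user].popleft())

    def logout(self, user):
        self.online.pop(user, None)

    def publish(self, user, event):
        if user in self.online:
            self.online[user](event)           # real-time, in-line notification
        else:
            self.pending[user].append(event)   # queued for the next session

notifier = EventNotifier()
notifier.publish("lander_team", "Orbiter overflight window changed")
notifier.login("lander_team", lambda e: print("NOTIFY:", e))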
NASA Astrophysics Data System (ADS)
Al-Kuhali, K.; Hussain M., I.; Zain Z., M.; Mullenix, P.
2015-05-01
Aim: This paper contributes to the flat panel display industry in terms of aggregate production planning. Methodology: To minimize the total production cost of LCD manufacturing, linear programming was applied. The decision variables are general production costs, additional costs incurred for overtime production, additional costs incurred for subcontracting, inventory carrying costs, backorder costs, and adjustments for changes in labour levels. The model was developed for a manufacturer having several product types, up to a maximum of N, over a total planning horizon of T periods. Results: An industrial case study based in Malaysia is presented to test and validate the developed linear programming model for aggregate production planning. Conclusion: The developed model is suitable under stable environmental conditions. Overall, the proven linear programming model can be recommended for adoption in production planning for the Malaysian flat panel display industry.
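The paper's full model is not reproduced in the abstract, so the sketch below sets up a deliberately small single-product aggregate plan with scipy.optimize.linprog, using only a subset of the decision variables listed above (regular production, overtime, inventory); every cost, capacity, and demand figure is invented for illustration.

import numpy as np
from scipy.optimize import linprog

# Toy aggregate production plan: one product, three periods.
# Decision variables (in order): regular production P_t, overtime O_t,
# end-of-period inventory I_t for t = 1..3.  All numbers are illustrative.
T = 3
demand = [90, 120, 110]
c = [20.0] * T + [30.0] * T + [2.0] * T          # regular, overtime, holding costs

# Inventory balance: P_t + O_t + I_{t-1} - I_t = demand_t  (with I_0 = 0)
A_eq = np.zeros((T, 3 * T))
for t in range(T):
    A_eq[t, t] = 1.0                  # P_t
    A_eq[t, T + t] = 1.0              # O_t
    A_eq[t, 2 * T + t] = -1.0         # -I_t
    if t > 0:
        A_eq[t, 2 * T + t - 1] = 1.0  # +I_{t-1}

bounds = [(0, 100)] * T + [(0, 30)] * T + [(0, None)] * T   # capacity limits
res = linprog(c, A_eq=A_eq, b_eq=demand, bounds=bounds, method="highs")
print(res.x.round(1), "total cost:", round(res.fun, 1))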
SeaTrack: Ground station orbit prediction and planning software for sea-viewing satellites
NASA Technical Reports Server (NTRS)
Lambert, Kenneth S.; Gregg, Watson W.; Hoisington, Charles M.; Patt, Frederick S.
1993-01-01
An orbit prediction software package (Sea Track) was designed to assist High Resolution Picture Transmission (HRPT) stations in the acquisition of direct broadcast data from sea-viewing spacecraft. Such spacecraft will be common in the near future, with the launch of the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) in 1994, along with the continued Advanced Very High Resolution Radiometer (AVHRR) series on NOAA platforms. The Brouwer-Lyddane model was chosen for orbit prediction because it meets the needs of HRPT tracking accuracies, provided orbital elements can be obtained frequently (within about 1 week). Sea Track requires elements from the U.S. Space Command (NORAD Two-Line Elements) for the satellite's initial position. Updated Two-Line Elements are routinely available from many electronic sources (some are listed in the Appendix). Sea Track is a menu-driven program that allows users to alter input and output formats. The propagation period is entered by a start date and end date with times in either Greenwich Mean Time (GMT) or local time. Antenna pointing information is provided in tabular form and includes azimuth/elevation pointing angles, sub-satellite longitude/latitude, acquisition of signal (AOS), loss of signal (LOS), pass orbit number, and other pertinent pointing information. One version of Sea Track (non-graphical) allows operation under DOS (for IBM-compatible personal computers) and UNIX (for Sun and Silicon Graphics workstations). A second, graphical, version displays orbit tracks and azimuth/elevation for IBM-compatible PCs, but requires a VGA card and Microsoft FORTRAN.
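Sea Track's Brouwer-Lyddane propagation is not shown here, but once a satellite position is available, the azimuth/elevation pointing angles it tabulates follow from the standard ECEF-to-East/North/Up conversion sketched below; the positions in the example are illustrative, not a real SeaWiFS pass.

import numpy as np

def az_el(sat_ecef, site_ecef, site_lat_deg, site_lon_deg):
    """Antenna pointing angles (azimuth, elevation, in degrees) for a
    satellite and ground station given in ECEF coordinates (meters).
    The orbit propagation itself is not shown."""
    lat, lon = np.radians(site_lat_deg), np.radians(site_lon_deg)
    rho = np.asarray(sat_ecef, float) - np.asarray(site_ecef, float)
    # Unit east, north, and up vectors at the station.
    east  = np.array([-np.sin(lon),                np.cos(lon),               0.0])
    north = np.array([-np.sin(lat) * np.cos(lon), -np.sin(lat) * np.sin(lon), np.cos(lat)])
    up    = np.array([ np.cos(lat) * np.cos(lon),  np.cos(lat) * np.sin(lon), np.sin(lat)])
    e, n, u = east @ rho, north @ rho, up @ rho
    azimuth = np.degrees(np.arctan2(e, n)) % 360.0          # measured from north toward east
    elevation = np.degrees(np.arcsin(u / np.linalg.norm(rho)))
    return azimuth, elevation

# A satellite roughly 800 km above a station on the equator at 0 deg longitude
# (positions are illustrative only):
site = np.array([6378137.0, 0.0, 0.0])
sat = np.array([6378137.0 + 800e3, 200e3, 100e3])
print(az_el(sat, site, 0.0, 0.0))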
Future directions in 3-dimensional imaging and neurosurgery: stereoscopy and autostereoscopy.
Christopher, Lauren A; William, Albert; Cohen-Gadol, Aaron A
2013-01-01
Recent advances in 3-dimensional (3-D) stereoscopic imaging have enabled 3-D display technologies in the operating room. We find 2 beneficial applications for the inclusion of 3-D imaging in clinical practice. The first is the real-time 3-D display in the surgical theater, which is useful for the neurosurgeon and observers. In surgery, a 3-D display can include a cutting-edge mixed-mode graphic overlay for image-guided surgery. The second application is to improve the training of residents and observers in neurosurgical techniques. This article documents the requirements of both applications for a 3-D system in the operating room and for clinical neurosurgical training, followed by a discussion of the strengths and weaknesses of the current and emerging 3-D display technologies. An important comparison between a new autostereoscopic display without glasses and a current stereo display with glasses improves our understanding of the best applications for 3-D in neurosurgery. Today's multiview autostereoscopic display has 3 major benefits: It does not require glasses for viewing; it allows multiple views; and it improves the workflow for image-guided surgery registration and overlay tasks because of its depth-rendering format and tools. Two current limitations of the autostereoscopic display are that resolution is reduced and depth can be perceived as too shallow in some cases. Higher-resolution displays will be available soon, and the algorithms for depth inference from stereo can be improved. The stereoscopic and autostereoscopic systems, from microscope cameras to displays, were compared by the use of recorded and live content from surgery. To the best of our knowledge, this is the first report of the application of autostereoscopy in neurosurgery.
17. View to west. Detail, connection point L2 (see plans), ...
17. View to west. Detail, connection point L2 (see plans), from below deck. (135mm lens) - South Fork Trinity River Bridge, State Highway 299 spanning South Fork Trinity River, Salyer, Trinity County, CA
Floor Plan, Axonometric View, Site Location Key, Cesar Chavez Fasting ...
Floor Plan, Axonometric View, Site Location Key, Cesar Chavez Fasting Room Diagram - Forty Acres, Tomasa Zapata Mireles Co-op Building , 30168 Garces Highway (Northwest Corner of Garces Highway and Mettler Avenue), Delano, Kern County, CA
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-10
... possible, you contact the individual listed in the FOR FURTHER INFORMATION CONTACT section to view the hard copy of the docket. You may view the hard copy of the docket Monday through Friday, 8 a.m. to 4 p.m... your concerns, and suggest alternatives. g. Explain your views as clearly as possible, avoiding the use...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-02
... FURTHER INFORMATION CONTACT section to view the hard copy of the docket. You may view the hard copy of the... your views as clearly as possible, avoiding the use of profanity or personal threats. h. Make sure to... Attainment, Contingency Plan, and Conformity Determinations. Below, we describe our evaluation of each of...
Aberration analyses for improving the frontal projection three-dimensional display.
Gao, Xin; Sang, Xinzhu; Yu, Xunbo; Wang, Peng; Cao, Xuemei; Sun, Lei; Yan, Binbin; Yuan, Jinhui; Wang, Kuiru; Yu, Chongxiu; Dou, Wenhua
2014-09-22
Crosstalk severely affects the viewing experience of auto-stereoscopic 3D displays based on a frontal-projection lenticular sheet. Unclear stereo vision and ghosting are observed in marginal viewing zones (MVZs); to suppress them, the aberration of the lenticular sheet combined with the frontal projector is analyzed and the optics are designed accordingly. Theoretical and experimental results show that increasing the radius of curvature (ROC) or decreasing the aperture of the lenticular sheet can suppress the aberration and reduce the crosstalk. A projector array with 20 micro-projectors is used to frontally project 20 parallax images onto one lenticular sheet with an ROC of 10 mm and a size of 1.9 m × 1.2 m. A high-quality 3D image is experimentally demonstrated in both the mid-viewing zone and the MVZs in the optimal viewing plane. A clear 3D depth of 1.2 m can be perceived. To provide an excellent 3D image and enlarge the field of view at the same time, a novel lenticular-sheet structure is presented to reduce aberration, and the crosstalk is well suppressed.
Software for Displaying Data from Planetary Rovers
NASA Technical Reports Server (NTRS)
Powell, Mark; Backers, Paul; Norris, Jeffrey; Vona, Marsette; Steinke, Robert
2003-01-01
Science Activity Planner (SAP) DownlinkBrowser is a computer program that assists in the visualization of processed telemetric data [principally images, image cubes (that is, multispectral images), and spectra] that have been transmitted to Earth from exploratory robotic vehicles (rovers) on remote planets. It is undergoing adaptation to (1) the Field Integrated Design and Operations (FIDO) rover (a prototype Mars-exploration rover operated on Earth as a test bed) and (2) the Mars Exploration Rover (MER) mission. This program has evolved from its predecessor - the Web Interface for Telescience (WITS) software - and surpasses WITS in the processing, organization, and plotting of data. SAP DownlinkBrowser creates Extensible Markup Language (XML) files that organize data files, on the basis of content, into a sortable, searchable product database, without the overhead of a relational database. The data-display components of SAP DownlinkBrowser (descriptively named ImageView, 3DView, OrbitalView, PanoramaView, ImageCubeView, and SpectrumView) are designed to run in a memory footprint of at least 256MB on computers that utilize the Windows, Linux, and Solaris operating systems.
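As an illustration of the XML-based product indexing described above, the sketch below builds and searches a tiny index with Python's xml.etree.ElementTree; the element names, attributes, and file names are hypothetical and do not reflect the actual SAP schema.

import xml.etree.ElementTree as ET

def build_product_index(products):
    """Write a small XML index of downlink products so they can be sorted
    and searched without a relational database."""
    root = ET.Element("productIndex")
    for p in products:
        ET.SubElement(root, "product",
                      name=p["name"], type=p["type"], sol=str(p["sol"]))
    return ET.tostring(root, encoding="unicode")

index_xml = build_product_index([
    {"name": "front_hazcam_001.img", "type": "image", "sol": 42},
    {"name": "pancam_cube_007.cub", "type": "image_cube", "sol": 42},
    {"name": "minites_spectrum_03.spc", "type": "spectrum", "sol": 43},
])
# Search the index, e.g. for every spectrum product:
root = ET.fromstring(index_xml)
print([e.get("name") for e in root.findall("product[@type='spectrum']")])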
38. INTERIOR VIEW, BERK SWITCH TOWER, SOUTH NORWALK, SHOWING COMPLETE ...
38. INTERIOR VIEW, BERK SWITCH TOWER, SOUTH NORWALK, SHOWING COMPLETE SWITCH LEVER ASSEMBLAGE AND DISPLAY BOARD ON FRONT WALL - New York, New Haven & Hartford Railroad, Automatic Signalization System, Long Island Sound shoreline between Stamford & New Haven, Stamford, Fairfield County, CT
Innovative railroad information displays : executive summary
DOT National Transportation Integrated Search
1998-01-01
The objectives of this study were to explore the potential of advanced digital technology, novel concepts of information management, geographic information databases, and display capabilities in order to enhance planning and decision-making process...
Simplifying operations with an uplink/downlink integration toolkit
NASA Technical Reports Server (NTRS)
Murphy, Susan C.; Miller, Kevin J.; Guerrero, Ana Maria; Joe, Chester; Louie, John J.; Aguilera, Christine
1994-01-01
The Operations Engineering Lab (OEL) at JPL has developed a simple, generic toolkit to integrate the uplink/downlink processes, (often called closing the loop), in JPL's Multimission Ground Data System. This toolkit provides capabilities for integrating telemetry verification points with predicted spacecraft commands and ground events in the Mission Sequence Of Events (SOE) document. In the JPL ground data system, the uplink processing functions and the downlink processing functions are separate subsystems that are not well integrated because of the nature of planetary missions with large one-way light times for spacecraft-to-ground communication. Our new closed-loop monitoring tool allows an analyst or mission controller to view and save uplink commands and ground events with their corresponding downlinked telemetry values regardless of the delay in downlink telemetry and without requiring real-time intervention by the user. An SOE document is a time-ordered list of all the planned ground and spacecraft events, including all commands, sequence loads, ground events, significant mission activities, spacecraft status, and resource allocations. The SOE document is generated by expansion and integration of spacecraft sequence files, ground station allocations, navigation files, and other ground event files. This SOE generation process has been automated within the OEL and includes a graphical, object-oriented SOE editor and real-time viewing tool running under X/Motif. The SOE toolkit was used as the framework for the integrated implementation. The SOE is used by flight engineers to coordinate their operations tasks, serving as a predict data set in ground operations and mission control. The closed-loop SOE toolkit allows simple, automated integration of predicted uplink events with correlated telemetry points in a single SOE document for on-screen viewing and archiving. It automatically interfaces with existing real-time or non real-time sources of information, to display actual values from the telemetry data stream. This toolkit was designed to greatly simplify the user's ability to access and view telemetry data, and also provide a means to view this data in the context of the commands and ground events that are used to interpret it. A closed-loop system can prove especially useful in small missions with limited resources requiring automated monitoring tools. This paper will discuss the toolkit implementation, including design trade-offs and future plans for enhancing the automated capabilities.
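A minimal sketch of the closed-loop idea described above: predicted SOE entries carry the telemetry point that verifies them, and downlinked values are attached whenever they arrive, however delayed. The data structure and field names are hypothetical, not the toolkit's actual SOE format.

from dataclasses import dataclass
from typing import Optional

@dataclass
class SoeEvent:
    """One predicted entry in the Sequence Of Events, plus the slot where
    the correlated downlink value is filled in when it arrives."""
    time_utc: str
    description: str
    verify_channel: Optional[str] = None     # telemetry point that confirms the event
    observed_value: Optional[str] = None     # filled in from downlink, possibly much later

def close_the_loop(soe, telemetry_frame):
    """Attach downlinked telemetry values to the predicted events they verify,
    without requiring real-time intervention by the user."""
    for event in soe:
        if event.verify_channel in telemetry_frame:
            event.observed_value = telemetry_frame[event.verify_channel]
    return soe

soe = [SoeEvent("2024-001T02:15:00", "CMD: heater on", verify_channel="HTR_STATE"),
       SoeEvent("2024-001T02:20:00", "DSN station handover")]
close_the_loop(soe, {"HTR_STATE": "ON"})
for e in soe:
    print(e.time_utc, e.description, "->", e.observed_value)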
Advanced interactive display formats for terminal area traffic control
NASA Technical Reports Server (NTRS)
Grunwald, Arthur J.
1995-01-01
The basic design considerations for perspective Air Traffic Control displays are described. A software framework has been developed for manual viewing parameter setting (MVPS) in preparation for continued, ongoing developments on automated viewing parameter setting (AVPS) schemes. The MVPS system is based on indirect manipulation of the viewing parameters. Requests for changes in viewing parameter setting are entered manually by the operator by moving viewing parameter manipulation pointers on the screen. The motion of these pointers, which are an integral part of the 3-D scene, is limited to the boundaries of the screen. This arrangement has been chosen in order to preserve the correspondence between the new and the old viewing parameter setting, a feature which contributes to preventing spatial disorientation of the operator. For all viewing operations, e.g. rotation, translation and ranging, the actual change is executed automatically by the system, through gradual transitions with an exponentially damped, sinusoidal velocity profile, referred to in this work as 'slewing' motions. The slewing functions, which eliminate discontinuities in the viewing parameter changes, are designed primarily to enhance the operator's impression that he or she is dealing with an actually existing physical system, rather than an abstract computer-generated scene. Current ongoing efforts deal with the development of automated viewing parameter setting schemes. These schemes employ an optimization strategy, aimed at identifying the best possible vantage point from which the Air Traffic Control scene can be viewed for a given traffic situation.
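The abstract describes 'slewing' transitions whose velocity profile is an exponentially damped sinusoid; the exact constants are not given, so the sketch below uses the step response of an assumed underdamped second-order system, whose velocity has exactly that form, with illustrative tuning values.

import numpy as np

def slew(old, new, t, omega=3.0, zeta=0.8):
    """Blend a viewing parameter from `old` to `new` along the step response
    of an underdamped second-order system; the corresponding velocity is an
    exponentially damped sinusoid.  omega and zeta are assumed tuning
    constants, not values from the paper.

    t : elapsed time (seconds) since the operator requested the change.
    """
    wd = omega * np.sqrt(1.0 - zeta**2)                    # damped natural frequency
    decay = np.exp(-zeta * omega * t)
    s = 1.0 - decay * (np.cos(wd * t) + (zeta * omega / wd) * np.sin(wd * t))
    return old + (new - old) * s

# Gradually rotate the viewing azimuth from 30 deg to 120 deg:
for t in np.linspace(0.0, 2.0, 5):
    print(round(float(slew(30.0, 120.0, t)), 1))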
Java Mission Evaluation Workstation System
NASA Technical Reports Server (NTRS)
Pettinger, Ross; Watlington, Tim; Ryley, Richard; Harbour, Jeff
2006-01-01
The Java Mission Evaluation Workstation System (JMEWS) is a collection of applications designed to retrieve, display, and analyze both real-time and recorded telemetry data. This software is currently being used by both the Space Shuttle Program (SSP) and the International Space Station (ISS) program. JMEWS was written in the Java programming language to satisfy the requirement of platform independence. An object-oriented design was used to satisfy additional requirements and to make the software easily extendable. By virtue of its platform independence, JMEWS can be used on the UNIX workstations in the Mission Control Center (MCC) and on office computers. JMEWS includes an interactive editor that allows users to easily develop displays that meet their specific needs. The displays can be developed and modified while viewing data. By simply selecting a data source, the user can view real-time, recorded, or test data.
Effects of Retinal Eccentricity on Human Manual Control
NASA Technical Reports Server (NTRS)
Popovici, Alexandru; Zaal, Peter M. T.
2017-01-01
This study investigated the effects of viewing a primary flight display at different retinal eccentricities on human manual control behavior and performance. Ten participants performed a pitch tracking task while looking at a simplified primary flight display at different horizontal and vertical retinal eccentricities, and with two different controlled dynamics. Tracking performance declined at higher eccentricity angles and participants behaved more nonlinearly. The visual error rate gain increased with eccentricity for single-integrator-like controlled dynamics, but decreased for double-integrator-like dynamics. Participants' visual time delay was up to 100 ms higher at the highest horizontal eccentricity compared to foveal viewing. Overall, vertical eccentricity had a larger impact than horizontal eccentricity on most of the human manual control parameters and performance. Results might be useful in the design of displays and procedures for critical flight conditions such as in an aerodynamic stall.
How to reinforce perception of depth in single two-dimensional pictures
NASA Technical Reports Server (NTRS)
Nagata, S.
1989-01-01
The physical conditions under which the display of single 2-D pictures produces realistic images were studied using the characteristics of the intake of information for visual depth perception. Depth sensitivity, defined as the ratio of viewing distance to depth discrimination threshold, was introduced in order to evaluate the availability of various cues for depth perception: binocular parallax, motion parallax, accommodation, convergence, size, texture, brightness, and air-perspective contrast. The effects of binocular parallax in different conditions, the depth sensitivity of which is greatest at distances up to about 10 m, were studied with the new versatile stereoscopic display. From these results, four conditions to reinforce the perception of depth in single pictures were proposed; these conditions are met by older viewing devices and by the new high-definition and wide television displays.
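Taking the definition above (depth sensitivity = viewing distance / depth discrimination threshold), the sketch below evaluates it for binocular parallax using the standard small-angle approximation ΔD ≈ D²·θ/IPD; the interpupillary distance and stereoacuity figures are typical textbook values, not numbers taken from the paper.

import math

def binocular_depth_sensitivity(distance_m, ipd_m=0.065,
                                disparity_threshold_arcsec=20.0):
    """Depth sensitivity (viewing distance / depth discrimination threshold)
    for binocular parallax, using delta_D ~= D**2 * theta / IPD.
    The IPD and stereoacuity values are typical textbook numbers."""
    theta = math.radians(disparity_threshold_arcsec / 3600.0)   # arcsec -> rad
    depth_threshold = distance_m**2 * theta / ipd_m
    return distance_m / depth_threshold

# Sensitivity falls off inversely with distance, which is why the cue is most
# useful at the nearer distances noted above:
for d in (1.0, 5.0, 10.0, 50.0):
    print(d, "m ->", round(binocular_depth_sensitivity(d)))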
Depth assisted compression of full parallax light fields
NASA Astrophysics Data System (ADS)
Graziosi, Danillo B.; Alpaslan, Zahir Y.; El-Ghoroury, Hussein S.
2015-03-01
Full parallax light field displays require high pixel density and huge amounts of data. Compression is a necessary tool used by 3D display systems to cope with the high bandwidth requirements. One of the formats adopted by MPEG for 3D video coding standards is the use of multiple views with associated depth maps. Depth maps enable the coding of a reduced number of views, and are used by compression and synthesis software to reconstruct the light field. However, most of the developed coding and synthesis tools target linearly arranged cameras with small baselines. Here we propose to use the 3D video coding format for full parallax light field coding. We introduce a view selection method inspired by plenoptic sampling, followed by transform-based view coding and view synthesis prediction to code residual views. We determine the minimal requirements for view sub-sampling and present the rate-distortion performance of our proposal. We also compare our method with established video compression techniques, such as H.264/AVC, H.264/MVC, and the new 3D video coding algorithm, 3DV-ATM. Our results show that our method not only has improved rate-distortion performance but also better preserves the structure of the perceived light fields.
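The view-synthesis prediction used in the paper is not specified in the abstract, so the sketch below is only a toy stand-in: a forward depth-based warp of one coded view to a neighboring, skipped view position, with nearest-sample occlusion handling; the disparity model and fill convention are assumptions.

import numpy as np

def synthesize_view(ref_color, ref_depth, baseline_px):
    """Forward-warp one row-parallel reference view to a neighboring view
    position using its depth map.  Disparity is assumed proportional to the
    depth-map value (a normalized inverse depth); occlusions are resolved by
    keeping the nearest sample, and holes keep a fill value of -1."""
    h, w = ref_depth.shape
    out = np.full_like(ref_color, -1)            # -1 marks holes to inpaint later
    best = np.full((h, w), -np.inf)
    for y in range(h):
        for x in range(w):
            d = ref_depth[y, x]
            xt = int(round(x + baseline_px * d)) # horizontal disparity shift
            if 0 <= xt < w and d > best[y, xt]:  # nearer samples win
                best[y, xt] = d
                out[y, xt] = ref_color[y, x]
    return out

color = np.arange(16).reshape(4, 4)
depth = np.ones((4, 4)) * 0.5                    # constant normalized inverse depth
print(synthesize_view(color, depth, baseline_px=2))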
Light Redirective Display Panel And A Method Of Making A Light Redirective Display Panel
Veligdan, James T.
2005-07-26
An optical display panel which provides improved light intensity at a viewing angle by redirecting light emitting from the viewing screen, and a method of making a light redirective display panel, are disclosed. The panel includes an inlet face at one end for receiving light, and an outlet screen at an opposite end for displaying the light. The inlet face is defined at one end of a transparent body, which body may be formed by a plurality of waveguides, and the outlet screen is defined at an opposite end of the body. The screen includes light redirective elements at the outlet screen for re-directing light emitting from the outlet screen. The method includes stacking a plurality of glass sheets, with a layer of adhesive or epoxy between each sheet, curing the adhesive to form a stack, placing the stack against a saw and cutting the stack at two opposite ends to form a wedge-shaped panel having an inlet face and an outlet face, and forming at the outlet face a plurality of light redirective elements which direct light incident on the outlet face into a controlled light cone.
Light redirective display panel and a method of making a light redirective display panel
Veligdan, James T.
2002-01-01
An optical display panel which provides improved light intensity at a viewing angle by redirecting light emitting from the viewing screen, and a method of making a light redirective display panel, are disclosed. The panel includes an inlet face at one end for receiving light, and an outlet screen at an opposite end for displaying the light. The inlet face is defined at one end of a transparent body, which body may be formed by a plurality of waveguides, and the outlet screen is defined at an opposite end of the body. The screen includes light redirective elements at the outlet screen for re-directing light emitting from the outlet screen. The method includes stacking a plurality of glass sheets, with a layer of adhesive or epoxy between each sheet, curing the adhesive to form a stack, placing the stack against a saw and cutting the stack at two opposite ends to form a wedge-shaped panel having an inlet face and an outlet face, and forming at the outlet face a plurality of light redirective elements which direct light incident on the outlet face into a controlled light cone.
A head up display format for application to V/STOL aircraft approach and landing
NASA Technical Reports Server (NTRS)
Merrick, Vernon K.; Farris, Glenn G.; Vanags, Andrejs A.
1990-01-01
A head-up display (HUD) format, developed at NASA Ames Research Center to provide pilots of V/STOL aircraft with complete flight guidance and control information for category-3C terminal-area flight operations, is described in detail. These flight operations cover a large spectrum, from STOL operations on land-based runways to VTOL operations on small ships in high seas. Included in this description is a complete geometrical specification of the HUD elements and their drive laws. The principal features of this display format are the integration of the flightpath and pursuit guidance information into a narrow field of view, easily assimilated by the pilot with a single glance, and the superposition of vertical and horizontal situation information. The display is a derivative of a successful design developed for conventional transport aircraft. The design is the outcome of many piloted simulations conducted over a four-year period. Although the concepts on which the display format rests could not be fully exploited because of field-of-view restrictions, and some reservations remain about the acceptability of superimposing vertical and horizontal situation information, the design successfully fulfilled its intended objectives.
SimGraph: A Flight Simulation Data Visualization Workstation
NASA Technical Reports Server (NTRS)
Kaplan, Joseph A.; Kenney, Patrick S.
1997-01-01
Today's modern flight simulation research produces vast amounts of time-sensitive data, making a qualitative analysis of the data difficult while it remains in a numerical representation. Therefore, a method of merging related data together and presenting it to the user in a more comprehensible format is necessary. Simulation Graphics (SimGraph) is an object-oriented data visualization software package that presents simulation data in animated graphical displays for easy interpretation. Data produced from a flight simulation is presented by SimGraph in several different formats, including: 3-Dimensional Views, Cockpit Control Views, Heads-Up Displays, Strip Charts, and Status Indicators. SimGraph can accommodate the addition of new graphical displays to allow the software to be customized to each user's particular environment. A new display can be developed and added to SimGraph without having to design a new application, allowing the graphics programmer to focus on the development of the graphical display. The SimGraph framework can be reused for a wide variety of visualization tasks. Although it was created for the flight simulation facilities at NASA Langley Research Center, SimGraph can be reconfigured to almost any data visualization environment. This paper describes the capabilities and operations of SimGraph.
Liquid-crystal displays for medical imaging: a discussion of monochrome versus color
NASA Astrophysics Data System (ADS)
Wright, Steven L.; Samei, Ehsan
2004-05-01
A common view is that color displays cannot match the performance of monochrome displays, normally used for diagnostic x-ray imaging. This view is based largely on historical experience with cathode-ray tube (CRT) displays, and does not apply in the same way to liquid-crystal displays (LCDs). Recent advances in color LCD technology have considerably narrowed performance differences with monochrome LCDs for medical applications. The most significant performance advantage of monochrome LCDs is higher luminance, a concern for use under bright ambient conditions. LCD luminance is limited primarily by backlight design, yet to be optimized for color LCDs for medical applications. Monochrome LCDs have inherently higher contrast than color LCDs, but this is not a major advantage under most conditions. There is no practical difference in luminance precision between color and monochrome LCDs, with a slight theoretical advantage for color. Color LCDs can provide visualization and productivity enhancement for medical applications, using digital drive from standard commercial graphics cards. The desktop computer market for color LCDs far exceeds the medical monitor market, with an economy of scale. The performance-to-price ratio for color LCDs is much higher than monochrome, and warrants re-evaluation for medical applications.
Design of platform for removing screws from LCD display shields
NASA Astrophysics Data System (ADS)
Tu, Zimei; Qin, Qin; Dou, Jianfang; Zhu, Dongdong
2017-11-01
Removing the screws on the sides of a shield is a necessary process in disassembling a computer LCD display. To automate this process, a platform has been designed for removing the screws from display shields. This platform uses virtual instrument technology, with LabVIEW as the development environment, and combines a dedicated mechanical structure with motion control, human-computer interaction, and target recognition. The platform mechanically removes the screws from the sides of the shield of an LCD display, thus guaranteeing subsequent separation and recycling.
Planning, Imagined Interaction, and the Nonverbal Display of Anxiety.
ERIC Educational Resources Information Center
Allen, Terre H.; Honeycutt, James M.
1997-01-01
Examines a nonverbal indicator of anxiety--use of object adaptors. Examines effects of planning for an anticipated encounter and level of discrepancy individuals report they have in imagined interactions on use of object adaptors. Discusses findings in terms of spontaneous helplessness, plan efficacy, and accretion of plan strategies in response…
Gabriele, Alex; Marco, Valeria; Gatto, Laura; Paoletti, Giulia; Di Vito, Luca; Castriota, Fausto; Romagnoli, Enrico; Ricciardi, Andrea; Prati, Francesco
2014-10-01
The optical coherence tomography (OCT) evaluation of stent anatomy requires the inspection of sequential cross sections (CS). However, stent coils cannot be appreciated in the conventional format, as the OCT CS simply displays stent struts, which are poorly representative of the stent architecture. The aim of the present study was to validate a new software (Carpet View), which unfolds the stented segment, reconstructing it as an open structure and displaying the stent meshwork. 21 patients were studied with frequency-domain OCT after the deployment of different stents: seven bio-absorbable scaffolds (Dream), seven bare-metal stents (Vision/Multilink8), and seven drug-eluting stents (Cre8). Conventional CS reconstructions were post-processed with the Carpet View software and analyzed twice by the same reader (intra-observer variability) and by two different readers (inter-observer variability). A small average difference in the number of all struts was obtained with the two methods (conventional vs. Carpet View reconstruction). Using the Carpet View, high intra-observer and inter-observer correlations were found for the number of struts obtained in each coil. The Pearson correlation values were 0.98 (p = 0.0001) and 0.96 (p = 0.0001), respectively. The same number of coils was found when analyses were repeated by the same reader or by a different reader, whilst mild differences in the count of stent junctions were reported. Carpet View can be used to assess stent geometry with high reproducibility. This approach enables the matching of the same stent portion across serial time points and promises to improve stent assessment.
Satellite Data Processing System (SDPS) users manual V1.0
NASA Technical Reports Server (NTRS)
Caruso, Michael; Dunn, Chris
1989-01-01
SDPS is a menu-driven interactive program designed to facilitate the display and output of image and line-based data sets common to telemetry, modeling and remote sensing. This program can be used to display up to four separate raster images and overlay line-based data such as coastlines, ship tracks and velocity vectors. The program uses multiple windows to communicate information with the user. At any given time, the program may have up to four image display windows as well as auxiliary windows containing information about each image displayed. SDPS is not a commercial program. It does not contain complete type checking or error diagnostics, which may allow the program to crash. Known anomalies will be mentioned in the appropriate section as notes or cautions. SDPS was designed to be used on Sun Microsystems workstations running SunView1 (Sun Visual/Integrated Environment for Workstations). It was primarily designed to be used on workstations equipped with color monitors, but most of the line-based functions and several of the raster-based functions can be used with monochrome monitors. The program currently runs on Sun 3 series workstations running Sun OS 4.0 and should port easily to Sun 4 and Sun 386 series workstations with SunView1. Users should also be familiar with UNIX, Sun workstations and the SunView window system.