Atmospheric Science Data Center
2013-04-16
... using data from multiple MISR cameras within automated computer processing algorithms. The stereoscopic algorithms used to generate ... NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Science Mission Directorate, Washington, D.C. The Terra spacecraft is managed ...
Stereoscopic Feature Tracking System for Retrieving Velocity of Surface Waters
NASA Astrophysics Data System (ADS)
Zuniga Zamalloa, C. C.; Landry, B. J.
2017-12-01
The present work is concerned with the surface velocity retrieval of flows using a stereoscopic setup and finding the correspondence in the images via feature tracking (FT). The feature tracking provides a key benefit of substantially reducing the level of user input. In contrast to other commonly used methods (e.g., normalized cross-correlation), FT does not require the user to prescribe interrogation window sizes and removes the need for masking when specularities are present. The results of the current FT methodology are comparable to those obtained via Large Scale Particle Image Velocimetry while requiring little to no user input which allowed for rapid, automated processing of imagery.
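As an illustrative aside, not taken from the paper: the method the authors contrast with feature tracking, normalized cross-correlation over a user-prescribed interrogation window, can be sketched in a few lines. The window half-size and search radius below are exactly the kind of user input that feature tracking removes; the values are arbitrary assumptions.

```python
import numpy as np

def ncc_match(frame_a, frame_b, center, win=8, search=4):
    """Locate the window around `center` in frame_a within frame_b
    by maximizing normalized cross-correlation (NCC)."""
    y, x = center
    tpl = frame_a[y - win:y + win, x - win:x + win].astype(float)
    tpl = tpl - tpl.mean()
    best, best_dyx = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = frame_b[y + dy - win:y + dy + win,
                           x + dx - win:x + dx + win].astype(float)
            cand = cand - cand.mean()
            denom = np.sqrt((tpl ** 2).sum() * (cand ** 2).sum())
            score = (tpl * cand).sum() / denom if denom > 0 else -np.inf
            if score > best:
                best, best_dyx = score, (dy, dx)
    return best_dyx  # displacement of the feature between frames
```

Given a frame and a copy shifted by one row and two columns, the sketch recovers the (1, 2) displacement, which is the correspondence information a surface-velocity retrieval needs.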
Semi-autonomous wheelchair system using stereoscopic cameras.
Nguyen, Jordan S; Nguyen, Thanh H; Nguyen, Hung T
2009-01-01
This paper is concerned with the design and development of a semi-autonomous wheelchair system using stereoscopic cameras to assist hands-free control technologies for severely disabled people. The stereoscopic cameras capture an image from both the left and right cameras, which are then processed with a Sum of Absolute Differences (SAD) correlation algorithm to establish correspondence between image features in the different views of the scene. This is used to produce a stereo disparity image containing information about the depth of objects away from the camera in the image. A geometric projection algorithm is then used to generate a 3-Dimensional (3D) point map, placing pixels of the disparity image in 3D space. This is then converted to a 2-Dimensional (2D) depth map allowing objects in the scene to be viewed and a safe travel path for the wheelchair to be planned and followed based on the user's commands. This assistive technology utilising stereoscopic cameras has the purpose of automated obstacle detection, path planning and following, and collision avoidance during navigation. Experimental results obtained in an indoor environment displayed the effectiveness of this assistive technology.
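A minimal sketch of the Sum of Absolute Differences correspondence step described above, assuming a rectified image pair so that matches lie on the same scan line; the window size and disparity range are illustrative choices, not values from the paper.

```python
import numpy as np

def sad_disparity(left, right, y, x, win=3, max_disp=16):
    """Disparity at (y, x) by minimizing the Sum of Absolute
    Differences between a window in the left image and horizontally
    shifted windows in the right image (rectified pair assumed)."""
    tpl = left[y - win:y + win + 1, x - win:x + win + 1].astype(float)
    best, best_d = np.inf, 0
    for d in range(0, max_disp + 1):
        if x - d - win < 0:
            break
        cand = right[y - win:y + win + 1,
                     x - d - win:x - d + win + 1].astype(float)
        sad = np.abs(tpl - cand).sum()
        if sad < best:
            best, best_d = sad, d
    return best_d  # larger disparity => closer object
```

Repeating this per pixel yields the stereo disparity image from which the paper's 3D point map and 2D depth map are derived.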
Automated volumetric evaluation of stereoscopic disc photography
Xu, Juan; Ishikawa, Hiroshi; Wollstein, Gadi; Bilonick, Richard A; Kagemann, Larry; Craig, Jamie E; Mackey, David A; Hewitt, Alex W; Schuman, Joel S
2010-01-01
PURPOSE: To develop a fully automated algorithm (AP) to perform a volumetric measure of the optic disc using conventional stereoscopic optic nerve head (ONH) photographs, and to compare algorithm-produced parameters with manual photogrammetry (MP), scanning laser ophthalmoscope (SLO) and optical coherence tomography (OCT) measurements. METHODS: One hundred twenty-two stereoscopic optic disc photographs (61 subjects) were analyzed. Disc area, rim area, cup area, cup/disc area ratio, vertical cup/disc ratio, rim volume and cup volume were automatically computed by the algorithm. Latent variable measurement error models were used to assess measurement reproducibility for the four techniques. RESULTS: AP had better reproducibility for disc area and cup volume and worse reproducibility for cup/disc area ratio and vertical cup/disc ratio, when the measurements were compared to the MP, SLO and OCT methods. CONCLUSION: AP provides a useful technique for an objective quantitative assessment of 3D ONH structures. PMID:20588996
Three-Dimensional High-Resolution Optical/X-Ray Stereoscopic Tracking Velocimetry
NASA Technical Reports Server (NTRS)
Cha, Soyoung S.; Ramachandran, Narayanan
2004-01-01
Measurement of three-dimensional (3-D) three-component velocity fields is of great importance in a variety of research and industrial applications for understanding materials processing, fluid physics, and strain/displacement measurements. The 3-D experiments in these fields most likely inhibit the use of conventional techniques, which are based only on planar and optically-transparent-field observation. Here, we briefly review the current status of 3-D diagnostics for motion/velocity detection, for both optical and x-ray systems. As an initial step for providing 3-D capabilities, we have developed stereoscopic tracking velocimetry (STV) to measure 3-D flow/deformation through optical observation. The STV is advantageous in system simplicity, for continually observing 3-D phenomena in near real-time. In an effort to enhance the data processing through automation and to avoid the confusion in tracking numerous markers or particles, artificial neural networks are employed to incorporate human intelligence. Our initial optical investigations have proven the STV to be a very viable candidate for reliably measuring 3-D flow motions. With previous activities focused on improving the processing efficiency, overall accuracy, and automation based on the optical system, the current effort is directed to the concurrent expansion to the x-ray system for broader experimental applications.
Three-Dimensional High-Resolution Optical/X-Ray Stereoscopic Tracking Velocimetry
NASA Technical Reports Server (NTRS)
Cha, Soyoung S.; Ramachandran, Narayanan
2005-01-01
Measurement of three-dimensional (3-D) three-component velocity fields is of great importance in a variety of research and industrial applications for understanding materials processing, fluid physics, and strain/displacement measurements. The 3-D experiments in these fields most likely inhibit the use of conventional techniques, which are based only on planar and optically-transparent-field observation. Here, we briefly review the current status of 3-D diagnostics for motion/velocity detection, for both optical and x-ray systems. As an initial step for providing 3-D capabilities, we have developed stereoscopic tracking velocimetry (STV) to measure 3-D flow/deformation through optical observation. The STV is advantageous in system simplicity, for continually observing 3-D phenomena in near real-time. In an effort to enhance the data processing through automation and to avoid the confusion in tracking numerous markers or particles, artificial neural networks are employed to incorporate human intelligence. Our initial optical investigations have proven the STV to be a very viable candidate for reliably measuring 3-D flow motions. With previous activities focused on improving the processing efficiency, overall accuracy, and automation based on the optical system, the current effort is directed to the concurrent expansion to the x-ray system for broader experimental applications.
Effects of Stereoscopic 3D Digital Radar Displays on Air Traffic Controller Performance
2013-03-01
between men and women, but no significant influence was found. Experience in ATC was considered as a potential covariate that would be presumed to have...depicts altitude through the use of stereoscopic disparity, permitting vertical separation to be visually represented as differences in disparity...handling information via different sources (e.g., radar screen with a series of automated visual cues, paper or electronic flight progress strips, radio
NASA Technical Reports Server (NTRS)
Lee, David; Ge, Yi; Cha, Soyoung Stephen; Ramachandran, Narayanan; Rose, M. Franklin (Technical Monitor)
2001-01-01
Measurement of three-dimensional (3-D) three-component velocity fields is of great importance in both ground and space experiments for understanding materials processing and fluid physics. The experiments in these fields most likely inhibit the application of conventional planar probes for observing 3-D phenomena. Here, we present the investigation results of stereoscopic tracking velocimetry (STV) for measuring 3-D velocity fields, which include diagnostic technology development, experimental velocity measurement, and comparison with analytical and numerical computation. STV is advantageous in system simplicity for building compact hardware and in software efficiency for continual near-real-time monitoring. It has great freedom in illuminating and observing volumetric fields from arbitrary directions. STV is based on stereoscopic observation of particles seeded in a flow by CCD sensors. In the approach, part of the individual particle images that provide data points is likely to be lost or cause errors when their images overlap and crisscross each other, especially under a high particle density. In order to maximize the valid recovery of data points, neural networks are implemented for these two important processes. For the step of particle overlap decomposition, the back propagation neural network is utilized because of its ability in pattern recognition with pertinent particle image feature parameters. For the step of particle tracking, the Hopfield neural network is employed to find appropriate particle tracks based on global optimization. Our investigation indicates that the neural networks are very efficient and useful for stereoscopically tracking particles. As an initial assessment of the diagnostic technology performance, laminar water jets with and without pulsation are measured. The jet tip velocity profiles are in good agreement with analytical predictions.
Finally, for testing in material processing applications, a simple directional solidification apparatus is built for experimenting with a metal analog of succinonitrile. Its 3-D velocity field at the liquid phase is then measured to be compared with those from numerical computation. Our theoretical, numerical, and experimental investigations have proven STV to be a viable candidate for reliably measuring 3-D flow velocities. With current activities focused on further improving the processing efficiency, overall accuracy, and automation, the eventual efforts of broad experimental applications and concurrent numerical modeling validation will be vital to many areas in fluid flow and materials processing.
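For illustration only: the abstracts above describe a Hopfield-network global optimization for the particle-tracking step. A much simpler greedy nearest-neighbour matcher, sketched below under the assumption of small inter-frame displacements, conveys the basic assignment problem that the network solves more robustly.

```python
import numpy as np

def greedy_track(pts_t0, pts_t1, max_disp=5.0):
    """Match particles between two frames by greedy nearest-neighbour
    assignment: a much simpler stand-in for the Hopfield-network
    global optimization described in the abstracts."""
    pts_t0 = np.asarray(pts_t0, float)
    pts_t1 = np.asarray(pts_t1, float)
    # pairwise Euclidean distances between all candidate matches
    dists = np.linalg.norm(pts_t0[:, None, :] - pts_t1[None, :, :], axis=2)
    pairs, used = [], set()
    for i in np.argsort(dists.min(axis=1)):  # most confident first
        for j in np.argsort(dists[i]):
            if j not in used and dists[i, j] <= max_disp:
                pairs.append((int(i), int(j)))
                used.add(j)
                break
    return pairs  # list of (index_t0, index_t1) matches
```

Unlike the global optimization, this greedy scheme can mis-assign tracks when particle images crisscross at high seeding density, which is precisely the failure mode the neural-network approach targets.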
NASA Technical Reports Server (NTRS)
Robbins, Woodrow E. (Editor); Fisher, Scott S. (Editor)
1989-01-01
Special attention was given to problems of stereoscopic display devices, such as CAD for enhancement of the design process in visual arts, stereo-TV improvement of remote manipulator performance, a voice-controlled stereographic video camera system, and head-mounted displays and their low-cost design alternatives. Also discussed was a novel approach to chromostereoscopic microscopy, computer-generated barrier-strip autostereography and lenticular stereograms, and parallax barrier three-dimensional TV. Additional topics include processing and user interface issues and visualization applications, including automated analysis and fluid flow topology, optical tomographic measurements of mixing fluids, visualization of complex data, visualization environments, and visualization management systems.
Using Stereo Vision to Support the Automated Analysis of Surveillance Videos
NASA Astrophysics Data System (ADS)
Menze, M.; Muhle, D.
2012-07-01
Video surveillance systems are no longer a collection of independent cameras, manually controlled by human operators. Instead, smart sensor networks are developed, able to fulfil certain tasks on their own and thus supporting security personnel by automated analyses. One well-known task is the derivation of people's positions on a given ground plane from monocular video footage. An improved accuracy for the ground position as well as a more detailed representation of single salient people can be expected from a stereoscopic processing of overlapping views. Related work mostly relies on dedicated stereo devices or camera pairs with a small baseline. While this set-up is helpful for the essential step of image matching, the high accuracy potential of a wide baseline and the according good intersection geometry is not utilised. In this paper we present a stereoscopic approach, working on overlapping views of standard pan-tilt-zoom cameras which can easily be generated for arbitrary points of interest by an appropriate reconfiguration of parts of a sensor network. Experiments are conducted on realistic surveillance footage to show the potential of the suggested approach and to investigate the influence of different baselines on the quality of the derived surface model. Promising estimations of people's position and height are retrieved. Although standard matching approaches show helpful results, future work will incorporate temporal dependencies available from image sequences in order to reduce computational effort and improve the derived level of detail.
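The geometric benefit of a wide baseline that the paper exploits can be made concrete with the standard rectified-stereo relations Z = f*B/d and dZ ~ Z^2 * dd / (f*B). The sketch below is illustrative, not the authors' implementation, and the focal length and baselines in the usage note are hypothetical numbers.

```python
def stereo_depth(f_px, baseline_m, disparity_px):
    """Depth of a point from a rectified stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px

def depth_error(f_px, baseline_m, depth_m, disp_err_px=0.5):
    """First-order depth uncertainty dZ = Z**2 * dd / (f * B):
    widening the baseline B shrinks the error at a given depth."""
    return depth_m ** 2 * disp_err_px / (f_px * baseline_m)
```

With an assumed 1000 px focal length, doubling the baseline from 0.5 m to 1.0 m halves the depth uncertainty at any fixed depth, which is the wide-baseline intersection-geometry advantage noted above, at the cost of a harder image-matching problem.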
Sharma, Ashish; Oakley, Jonathan D.; Schiffman, Joyce C.; Budenz, Donald L.; Anderson, Douglas R.
2010-01-01
OBJECTIVE To evaluate a new automated analysis of optic disc images obtained by spectral domain optical coherence tomography (SD-OCT). Areas of the optic disc, cup, and neural rim in SD-OCT images were compared with these areas from stereoscopic photographs, to represent the current traditional optic nerve evaluation. The repeatability of measurements by each method was determined and compared. DESIGN Evaluation of diagnostic technology. PARTICIPANTS 119 healthy eyes, 23 eyes with glaucoma, and 7 suspect eyes. METHODS Optic disc and cup margins were traced from stereoscopic photographs by three individuals independently. Optic disc margins and rim widths were determined automatically in SD-OCT. A subset of photographs was examined and traced a second time, and duplicate SD-OCT images were also analyzed. MAIN OUTCOME MEASUREMENTS Agreement among photograph readers, between duplicate readings, and between SD-OCT and photographs was quantified by the intraclass correlation coefficient (ICC), by the root mean square (RMS), and by the standard deviation (SD) of the differences. RESULTS Optic disc areas tended to be slightly larger when judged in photographs than by SD-OCT, while cup areas were similar. Cup and optic disc areas showed good correlation (0.8) between average photographic reading and SD-OCT, but only fair correlation of rim areas (0.4). The SD-OCT was highly reproducible (ICC of 0.96 to 0.99). Each reader was also consistent with himself on duplicate readings of 21 photographs (ICC 0.80 to 0.88 for rim area, 0.95 to 0.98 for all other measurements), but reproducibility was not as good as SD-OCT. Measurements derived from SD-OCT did not differ from photographic readings more than the readings of photographs by different readers differed from each other.
CONCLUSIONS Designation of the cup and optic disc boundaries by an automated analysis of SD-OCT was within the range of variable designations by different readers from color stereoscopic photographs, but use of different landmarks typically made the designation of the optic disc size somewhat smaller in the automated analysis. There was better repeatability among measurements from SD-OCT than from among readers of photographs. The repeatability of automated measurement of SD-OCT images is promising for use both in diagnosis and in monitoring of progression. PMID:21397334
A stereoscopic lens for digital cinema cameras
NASA Astrophysics Data System (ADS)
Lipton, Lenny; Rupkalvis, John
2015-03-01
Live-action stereoscopic feature films are, for the most part, produced using a costly post-production process to convert planar cinematography into stereo-pair images and are only occasionally shot stereoscopically using bulky dual-cameras that are adaptations of the Ramsdell rig. The stereoscopic lens design described here might very well encourage more live-action image capture because it uses standard digital cinema cameras and workflow to save time and money.
NASA Astrophysics Data System (ADS)
Lee, Seokhee; Lee, Kiyoung; Kim, Man Bae; Kim, JongWon
2005-11-01
In this paper, we propose a design of multi-view stereoscopic HD video transmission system based on MPEG-21 Digital Item Adaptation (DIA). It focuses on the compatibility and scalability to meet various user preferences and terminal capabilities. There exist a large variety of multi-view 3D HD video types according to the methods for acquisition, display, and processing. By following the MPEG-21 DIA framework, the multi-view stereoscopic HD video is adapted according to user feedback. A user can be served multi-view stereoscopic video which corresponds with his or her preferences and terminal capabilities. In our preliminary prototype, we verify that the proposed design can support two different types of display device (stereoscopic and auto-stereoscopic) and switching between two available viewpoints.
S3D depth-axis interaction for video games: performance and engagement
NASA Astrophysics Data System (ADS)
Zerebecki, Chris; Stanfield, Brodie; Hogue, Andrew; Kapralos, Bill; Collins, Karen
2013-03-01
Game developers have yet to embrace and explore the interactive stereoscopic 3D medium. They typically view stereoscopy as a separate mode that can be disabled throughout the design process and rarely develop game mechanics that take advantage of the stereoscopic 3D medium. What if we designed games to be S3D-specific and treated traditional 2D viewing as a separate mode that can be disabled? The design choices made throughout such a process may yield interesting and compelling results. Furthermore, we believe that interaction within a stereoscopic 3D environment is more important than the visual experience itself; therefore, further exploration is needed to take into account the interactive affordances presented by stereoscopic 3D displays. Stereoscopic 3D displays allow players to perceive objects at different depths, thus we hypothesize that designing a core mechanic to take advantage of this viewing paradigm will create compelling content. In this paper, we describe Z-Fighter, a game that we have developed that requires the player to interact directly along the stereoscopic 3D depth axis. We also outline an experiment conducted to investigate the performance, perception, and enjoyment of this game in stereoscopic 3D vs. traditional 2D viewing.
The relationship between three-dimensional imaging and group decision making: an exploratory study.
Litynski, D M; Grabowski, M; Wallace, W A
1997-07-01
This paper describes an empirical investigation of the effect of three-dimensional (3-D) imaging on group performance in a tactical planning task. The objective of the study is to examine the role that stereoscopic imaging can play in supporting face-to-face group problem solving and decision making, in particular the alternative generation and evaluation processes in teams. It was hypothesized that with the stereoscopic display, group members would better visualize the information concerning the task environment, producing open communication and information exchanges. The experimental setting was a tactical command and control task, and the quality of the decisions and nature of the group decision process were investigated with three treatments: 1) noncomputerized, i.e., topographic maps with depth cues; 2) two-dimensional (2-D) imaging; and 3) stereoscopic imaging. The results were mixed on group performance. However, those groups with the stereoscopic displays generated more alternatives and spent less time on evaluation. In addition, the stereoscopic decision aid did not interfere with the group problem solving and decision-making processes. The paper concludes with a discussion of potential benefits, and the need to resolve demonstrated weaknesses of the technology.
NASA Astrophysics Data System (ADS)
Ye, Peng; Wu, Xiang; Gao, Dingguo; Liang, Haowen; Wang, Jiahui; Deng, Shaozhi; Xu, Ningsheng; She, Juncong; Chen, Jun
2017-02-01
The horizontal binocular disparity is a critical factor for the visual fatigue induced by watching stereoscopic TVs. Stereoscopic images that possess the disparity within the ‘comfort zones’ and remain still in the depth direction are considered comfortable to the viewers as 2D images. However, the difference in brain activities between processing such comfortable stereoscopic images and 2D images is still less studied. The DP3 (differential P3) signal refers to an event-related potential (ERP) component indicating attentional processes, which is typically evoked by odd target stimuli among standard stimuli in an oddball task. The present study found that the DP3 signal elicited by the comfortable 3D images exhibits the delayed peak latency and enhanced peak amplitude over the anterior and central scalp regions compared to the 2D images. The finding suggests that compared to the processing of the 2D images, more attentional resources are involved in the processing of the stereoscopic images even though they are subjectively comfortable.
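A hedged sketch of how a peak latency and amplitude might be read off an averaged event-related potential, the two DP3 quantities compared in the abstract. The search window and sampling rate below are assumptions for illustration, not the study's recording parameters.

```python
import numpy as np

def erp_peak(epochs, t0_ms=250, t1_ms=500, fs_hz=1000):
    """Average event-locked epochs and return (latency_ms, amplitude)
    of the largest positive deflection inside a search window,
    the kind of measurement behind a DP3 peak comparison."""
    avg = np.asarray(epochs, float).mean(axis=0)  # grand average
    i0 = int(t0_ms * fs_hz / 1000)
    i1 = int(t1_ms * fs_hz / 1000)
    k = i0 + int(np.argmax(avg[i0:i1]))          # peak sample index
    return k * 1000.0 / fs_hz, float(avg[k])
```

Comparing these two numbers between the 2D and comfortable-3D conditions is, in simplified form, the delayed-latency and enhanced-amplitude contrast the study reports.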
NASA Astrophysics Data System (ADS)
Świąder, Andrzej
2014-12-01
Digital Terrain Models (DTMs) produced from stereoscopic, submeter-resolution High Resolution Imaging Science Experiment (HiRISE) imagery provide a solid basis for all morphometric analyses of the surface of Mars. In view of the fact that a more effective use of DTMs is hindered by complicated and time-consuming manual handling, the automated process provided by specialists of the Ames Intelligent Robotics Group (NASA), Ames Stereo Pipeline, constitutes a good alternative. Four DTMs, covering the global dichotomy boundary between the southern highlands and northern lowlands along the line of the presumable Arabia shoreline, were produced and analysed. One of them included forms that are likely to be indicative of an oceanic basin that extended across the lowland northern hemisphere of Mars in the geological past. The high resolution DTMs obtained were used in the process of landscape visualisation.
Stereoscopic depth increases intersubject correlations of brain networks.
Gaebler, Michael; Biessmann, Felix; Lamke, Jan-Peter; Müller, Klaus-Robert; Walter, Henrik; Hetzer, Stefan
2014-10-15
Three-dimensional movies presented via stereoscopic displays have become more popular in recent years aiming at a more engaging viewing experience. However, neurocognitive processes associated with the perception of stereoscopic depth in complex and dynamic visual stimuli remain understudied. Here, we investigate the influence of stereoscopic depth on both neurophysiology and subjective experience. Using multivariate statistical learning methods, we compare the brain activity of subjects when freely watching the same movies in 2D and in 3D. Subjective reports indicate that 3D movies are more strongly experienced than 2D movies. On the neural level, we observe significantly higher intersubject correlations of cortical networks when subjects are watching 3D movies relative to the same movies in 2D. We demonstrate that increases in intersubject correlations of brain networks can serve as neurophysiological marker for stereoscopic depth and for the strength of the viewing experience. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
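The basic quantity behind the intersubject correlation (ISC) analyses referred to above can be sketched simply: the mean pairwise Pearson correlation of a region's time course across subjects. This is a minimal illustration, not the authors' multivariate statistical learning pipeline.

```python
import numpy as np

def intersubject_correlation(timecourses):
    """Mean pairwise Pearson correlation of one region's time course
    across subjects (rows = subjects, columns = time points)."""
    x = np.asarray(timecourses, float)
    n = x.shape[0]
    r = np.corrcoef(x)                    # n x n correlation matrix
    iu = np.triu_indices(n, k=1)          # unique subject pairs
    return float(r[iu].mean())
```

Higher ISC in the 3D condition than in the 2D condition, computed region by region, is the neurophysiological marker the study proposes.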
Case study: using a stereoscopic display for mission planning
NASA Astrophysics Data System (ADS)
Kleiber, Michael; Winkelholz, Carsten
2009-02-01
This paper reports on the results of a study investigating the benefits of using an autostereoscopic display in the training targeting process of the German Air Force. The study examined how stereoscopic 3D visualizations can help to improve flight path planning and the preparation of a mission in general. An autostereoscopic display was used because it allows the operator to perceive the stereoscopic images without shutter glasses, which facilitates the integration into a workplace with conventional 2D monitors and arbitrary lighting conditions.
Methodology for stereoscopic motion-picture quality assessment
NASA Astrophysics Data System (ADS)
Voronov, Alexander; Vatolin, Dmitriy; Sumin, Denis; Napadovsky, Vyacheslav; Borisov, Alexey
2013-03-01
Creating and processing stereoscopic video imposes additional quality requirements related to view synchronization. In this work we propose a set of algorithms for detecting typical stereoscopic-video problems, which appear owing to imprecise setup of capture equipment or incorrect postprocessing. We developed a methodology for analyzing the quality of S3D motion pictures and for revealing their most problematic scenes. We then processed 10 modern stereo films, including Avatar, Resident Evil: Afterlife and Hugo, and analyzed changes in S3D-film quality over the years. This work presents real examples of common artifacts (color and sharpness mismatch, vertical disparity and excessive horizontal disparity) in the motion pictures we processed, as well as possible solutions for each problem. Our results enable improved quality assessment during the filming and postproduction stages.
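One of the artifacts the paper's detectors target, vertical disparity between the two views, can be illustrated with a small sketch: given matched point pairs from the left and right views, the residual vertical offsets should be near zero in a well-aligned rig. The statistics and any threshold applied to them are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def vertical_disparity_stats(pts_left, pts_right):
    """Summarize the vertical (y) disparity of matched point pairs
    between the left and right views; large values flag the capture
    misalignment that causes viewer discomfort."""
    dl = np.asarray(pts_left, float)   # (x, y) points, left view
    dr = np.asarray(pts_right, float)  # corresponding right-view points
    dy = dr[:, 1] - dl[:, 1]
    return {"mean": float(dy.mean()),
            "max_abs": float(np.abs(dy).max())}
```

Running such a summary per scene is one way a quality pipeline could rank a film's most problematic shots for correction in postproduction.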
Binocular coordination in response to stereoscopic stimuli
NASA Astrophysics Data System (ADS)
Liversedge, Simon P.; Holliman, Nicolas S.; Blythe, Hazel I.
2009-02-01
Humans actively explore their visual environment by moving their eyes. Precise coordination of the eyes during visual scanning underlies the experience of a unified perceptual representation and is important for the perception of depth. We report data from three psychological experiments investigating human binocular coordination during visual processing of stereoscopic stimuli. In the first experiment participants were required to read sentences that contained a stereoscopically presented target word. Half of the word was presented exclusively to one eye and half exclusively to the other eye. Eye movements were recorded and showed that saccadic targeting was uninfluenced by the stereoscopic presentation, strongly suggesting that complementary retinal stimuli are perceived as a single, unified input prior to saccade initiation. In a second eye movement experiment we presented words stereoscopically to measure Panum's Fusional Area for linguistic stimuli. In the final experiment we compared binocular coordination during saccades between simple dot stimuli under 2D, stereoscopic 3D and real 3D viewing conditions. Results showed that depth appropriate vergence movements were made during saccades and fixations to real 3D stimuli, but only during fixations on stereoscopic 3D stimuli. 2D stimuli did not induce depth vergence movements. Together, these experiments indicate that stereoscopic visual stimuli are fused when they fall within Panum's Fusional Area, and that saccade metrics are computed on the basis of a unified percept. Also, there is sensitivity to non-foveal retinal disparity in real 3D stimuli, but not in stereoscopic 3D stimuli, and the system responsible for binocular coordination responds to this during saccades as well as fixations.
The Role of Amodal Surface Completion in Stereoscopic Transparency
Anderson, Barton L.; Schmid, Alexandra C.
2012-01-01
Previous work has shown that the visual system can decompose stereoscopic textures into percepts of inhomogeneous transparency. We investigate whether this form of layered image decomposition is shaped by constraints on amodal surface completion. We report a series of experiments that demonstrate that stereoscopic depth differences are easier to discriminate when the stereo images generate a coherent percept of surface color, than when images require amodally integrating a series of color changes into a coherent surface. Our results provide further evidence for the intimate link between the segmentation processes that occur in conditions of transparency and occlusion, and the interpolation processes involved in the formation of amodally completed surfaces. PMID:23060829
Original and creative stereoscopic film making
NASA Astrophysics Data System (ADS)
Criado, Enrique
2008-02-01
Stereoscopic cinema has become, once again, a hot topic in film production. For filmmakers to be successful in this field, a technical background in the principles of binocular perception, and in how our brain interprets the incoming data from our eyes, is fundamental. It is also paramount for a stereoscopic production to adhere to certain rules for comfort and safety. There is an immense variety of options in the art of standard "flat" photography, and the possibilities can only multiply with stereo. Stereoscopic imaging has its own unique areas for subjective, original and creative control that allow an incredible range of possible combinations by working inside the standards, and in some cases on the boundaries, of the basic stereo rules. Stereoscopic imaging can be approached in a "flat" manner, like channeling sound through an audio equalizer with all the bands at the same level. It can provide a realistic perception, which in many cases can be sufficient, thanks to the rock-solid viewing inherent to the stereoscopic image, but there are many more possibilities. This document describes some of the basic operating parameters and concepts for stereoscopic imaging, but it also offers ideas for a creative process based on the variation and combination of these basic parameters, which can lead to a truly innovative and original viewing experience.
Investigation of 1 : 1,000 Scale Map Generation by Stereo Plotting Using UAV Images
NASA Astrophysics Data System (ADS)
Rhee, S.; Kim, T.
2017-08-01
Large scale maps and image mosaics are representative geospatial data that can be extracted from UAV images. Map drawing using UAV images can be performed either by creating orthoimages and digitizing them, or by stereo plotting. While maps generated by digitization may serve the need for geospatial data, many institutions and organizations require map drawing using stereoscopic vision on stereo plotting systems. However, there are several aspects to be checked for UAV images to be utilized for stereo plotting. The first aspect is the accuracy of exterior orientation parameters (EOPs) generated through automated bundle adjustment processes. It is well known that GPS and IMU sensors mounted on a UAV are not very accurate. It is necessary to adjust initial EOPs accurately using tie points. For this purpose, we have developed a photogrammetric incremental bundle adjustment procedure. The second aspect is unstable shooting conditions compared to aerial photographing. Unstable image acquisition may bring uneven stereo coverage, which will result in accuracy loss eventually. Oblique stereo pairs will create eye fatigue. The third aspect is small coverage of UAV images. This aspect will raise efficiency issues for stereo plotting of UAV images. More importantly, this aspect will make contour generation from UAV images very difficult. This paper will discuss effects related to these three aspects. In this study, we tried to generate a 1 : 1,000 scale map from the dataset using EOPs generated from software developed in-house. We evaluated Y-disparity of the tie points extracted automatically through the photogrammetric incremental bundle adjustment process. We could confirm that stereoscopic viewing is possible. Stereoscopic plotting work was carried out by a professional photogrammetrist.
In order to analyse the accuracy of the map drawing using stereoscopic vision, we compared the horizontal and vertical position difference between adjacent models after drawing a specific model. The results of analysis showed that the errors were within the specification of a 1 : 1,000 map. Although the Y-parallax can be eliminated, it is still necessary to improve the accuracy of absolute ground position error in order to apply this technique to the actual work. There are a few models in which the difference in height between adjacent models is about 40 cm. We analysed the stability of UAV images by checking angle differences between adjacent images. We also analysed the average area covered by one stereo model and discussed the possible difficulty associated with this narrow coverage. In future work we will consider how to reduce position errors and improve map drawing performance from UAVs.
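The Y-disparity check described above can be illustrated with a small sketch: after orientation, the residual Y-parallax of tie points between a stereo pair should be near zero for comfortable stereoscopic plotting. The RMS summary and the 1-pixel tolerance below are assumed values for illustration, not the paper's acceptance criterion.

```python
import numpy as np

def y_parallax_ok(y_left, y_right, tol_px=1.0):
    """RMS Y-parallax of tie points between an oriented stereo pair,
    and whether it falls under an assumed plotting tolerance."""
    res = np.asarray(y_left, float) - np.asarray(y_right, float)
    rms = float(np.sqrt((res ** 2).mean()))
    return rms, rms <= tol_px
```

A pair passing this check supports fatigue-free stereoscopic viewing; a failing pair points back to the bundle adjustment or to unstable image acquisition.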
Stereoscopic augmented reality for laparoscopic surgery.
Kang, Xin; Azizian, Mahdi; Wilson, Emmanuel; Wu, Kyle; Martin, Aaron D; Kane, Timothy D; Peters, Craig A; Cleary, Kevin; Shekhar, Raj
2014-07-01
Conventional laparoscopes provide a flat representation of the three-dimensional (3D) operating field and are incapable of visualizing internal structures located beneath visible organ surfaces. Computed tomography (CT) and magnetic resonance (MR) images are difficult to fuse in real time with laparoscopic views due to the deformable nature of soft-tissue organs. Utilizing emerging camera technology, we have developed a real-time stereoscopic augmented-reality (AR) system for laparoscopic surgery by merging live laparoscopic ultrasound (LUS) with stereoscopic video. The system creates two new visual cues: (1) perception of true depth with improved understanding of 3D spatial relationships among anatomical structures, and (2) visualization of critical internal structures along with a more comprehensive visualization of the operating field. The stereoscopic AR system has been designed for near-term clinical translation with seamless integration into the existing surgical workflow. It is composed of a stereoscopic vision system, an LUS system, and an optical tracker. Specialized software processes streams of imaging data from the tracked devices and registers them in real time. The resulting two ultrasound-augmented video streams (one for the left and one for the right eye) give a live stereoscopic AR view of the operating field. The team conducted a series of stereoscopic AR interrogations of the liver, gallbladder, biliary tree, and kidneys in two swine. The preclinical studies demonstrated the feasibility of the stereoscopic AR system during in vivo procedures. Major internal structures could be easily identified. The system showed no observable latency and acceptable image-to-video registration accuracy. We presented the first in vivo use of a complete system with stereoscopic AR visualization capability. This new capability introduces new visual cues and enhances visualization of the surgical anatomy.
The system shows promise to improve the precision and expand the capacity of minimally invasive laparoscopic surgeries.
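The real-time registration described above rests on composing tracked poses so that an ultrasound-plane point lands in the camera frame. The toy below shows only that transform chain; the pure-translation poses and frame names are made-up stand-ins for the system's calibrated tracker data:

```python
# Illustrative transform chain for tracked-ultrasound AR: map a point from
# the ultrasound frame into the stereo camera frame via the tracker's
# world frame. 4x4 homogeneous matrices, pure translations for clarity.

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(T, p):
    x, y, z = p
    v = [x, y, z, 1.0]
    return tuple(sum(T[i][k] * v[k] for k in range(4)) for i in range(3))

# Hypothetical tracker poses (values invented for illustration).
T_world_from_us = translation(10.0, 0.0, 0.0)    # ultrasound probe pose
T_cam_from_world = translation(-4.0, 0.0, 0.0)   # inverse of camera pose

T_cam_from_us = matmul(T_cam_from_world, T_world_from_us)
print(apply(T_cam_from_us, (1.0, 2.0, 3.0)))  # -> (7.0, 2.0, 3.0)
```

The real system refreshes these poses per frame and renders the transformed ultrasound plane into both eye views.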
Helmet-Mounted Display Of Clouds Of Harmful Gases
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Barengoltz, Jack B.; Schober, Wayne R.
1995-01-01
Proposed helmet-mounted opto-electronic instrument provides real-time stereoscopic views of clouds of otherwise invisible toxic, explosive, and/or corrosive gas. Display semitransparent: images of clouds superimposed on scene ordinarily visible to wearer. Images give indications on sizes and concentrations of gas clouds and their locations in relation to other objects in scene. Instruments serve as safety devices for astronauts, emergency response crews, fire fighters, people cleaning up chemical spills, or anyone working near invisible hazardous gases. Similar instruments used as sensors in automated emergency response systems that activate safety equipment and emergency procedures. Both helmet-mounted and automated-sensor versions used at industrial sites, chemical plants, or anywhere dangerous and invisible or difficult-to-see gases present. In addition to helmet-mounted and automated-sensor versions, there could be hand-held version. In some industrial applications, desirable to mount instruments and use them similarly to parking-lot surveillance cameras.
Cosmic cookery: making a stereoscopic 3D animated movie
NASA Astrophysics Data System (ADS)
Holliman, Nick; Baugh, Carlton; Frenk, Carlos; Jenkins, Adrian; Froner, Barbara; Hassaine, Djamel; Helly, John; Metcalfe, Nigel; Okamoto, Takashi
2006-02-01
This paper describes our experience making a short stereoscopic movie visualizing the development of structure in the universe during the 13.7 billion years from the Big Bang to the present day. Aimed at a general audience for the Royal Society's 2005 Summer Science Exhibition, the movie illustrates how the latest cosmological theories based on dark matter and dark energy are capable of producing structures as complex as spiral galaxies, and allows the viewer to directly compare observations from the real universe with theoretical results. 3D is an inherent feature of the cosmology data sets, and stereoscopic visualization provides a natural way to present the images to the viewer, in addition to allowing researchers to visualize these vast, complex data sets. The presentation of the movie used passive, linearly polarized projection onto a 2 m wide screen, but it was also required to play back on a Sharp RD3D display and in anaglyph projection at venues without dedicated stereoscopic display equipment. Additionally, lenticular prints were made from key images in the movie. We discuss the following technical challenges in the stereoscopic production process: (1) controlling the depth presentation, (2) editing the stereoscopic sequences, and (3) generating compressed movies in display-specific formats. We conclude that the generation of high-quality stereoscopic movie content using desktop tools and equipment is feasible. This does require careful quality control and manual intervention, but we believe these overheads are worthwhile when presenting inherently 3D data, as the result is significantly increased impact and better understanding of complex 3D scenes.
Interactive 2D to 3D stereoscopic image synthesis
NASA Astrophysics Data System (ADS)
Feldman, Mark H.; Lipton, Lenny
2005-03-01
Advances in stereoscopic display technologies, graphics card devices, and digital imaging algorithms have opened up new possibilities for synthesizing stereoscopic images. The power of today's DirectX/OpenGL-optimized graphics cards, together with the new and creative imaging tools found in software products such as Adobe Photoshop, provides a powerful environment for converting planar drawings and photographs into stereoscopic images. The basis for such a creative process is the focus of this paper. This article presents a novel technique, which uses advanced imaging features and custom Windows-based software built on the DirectX 9 API to provide the user with an interactive stereo image synthesizer. By creating an accurate and interactive world scene with movable and flexible depth-map-altered textured surfaces, and perspective stereoscopic cameras with both visible frustums and zero-parallax planes, a user can precisely model a virtual three-dimensional representation of a real-world scene. Current versions of Adobe Photoshop provide a creative user with a rich assortment of tools needed to highlight elements of a 2D image, simulate hidden areas, and creatively shape them for a 3D scene representation. The technique described has been implemented as a Photoshop plug-in, allowing a seamless transition of these 2D image elements into 3D surfaces, which are subsequently rendered to create stereoscopic views.
Evaluation of a glaucoma patient
Thomas, Ravi; Loibl, Klaus; Parikh, Rajul
2011-01-01
The diagnosis of glaucoma is usually made clinically and requires a comprehensive eye examination, including slit lamp, applanation tonometry, gonioscopy and dilated stereoscopic evaluation of the optic disc and retina. Automated perimetry is obtained if glaucoma is suspected. This establishes the presence of functional damage and provides a baseline for follow-up. Imaging techniques are not essential for the diagnosis but may have a role to play in the follow-up. We recommend a comprehensive eye examination for every clinic patient with the objective of detecting all potentially sight-threatening diseases, including glaucoma. PMID:21150033
Shao, Feng; Lin, Weisi; Gu, Shanbo; Jiang, Gangyi; Srikanthan, Thambipillai
2013-05-01
Perceptual quality assessment is a challenging issue in 3D signal processing research. It is important to study the 3D signal directly instead of simply extending 2D metrics to the 3D case, as in some previous studies. In this paper, we propose a new perceptual full-reference quality assessment metric for stereoscopic images that considers binocular visual characteristics. The major technical contribution of this paper is that binocular perception and combination properties are considered in quality assessment. To be more specific, we first perform left-right consistency checks and compare the matching error between corresponding pixels in binocular disparity calculation, and classify the stereoscopic images into non-corresponding, binocular fusion, and binocular suppression regions. Local phase and local amplitude maps are also extracted from the original and distorted stereoscopic images as features for quality assessment. Each region is then evaluated independently according to its binocular perception property, and all evaluation results are integrated into an overall score. In addition, a binocular just-noticeable-difference model is used to reflect the visual sensitivity of the binocular fusion and suppression regions. Experimental results show that, compared with relevant existing metrics, the proposed metric achieves higher consistency with subjective assessment of stereoscopic images.
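The left-right consistency check underlying that region classification can be sketched in simplified form. This 1-D toy separates only consistent (fusion) pixels from inconsistent ones; the paper's full method works on 2-D disparity maps and additionally uses matching error to split out a binocular-suppression class, so the labels and tolerance here are illustrative:

```python
# Simplified left-right disparity consistency check: for each left-image
# column, look up the disparity of its matched right-image column and
# label the pixel "fusion" if the two estimates agree within a tolerance.

def lr_consistency(disp_left, disp_right, tol=1.0):
    labels = []
    for x, d in enumerate(disp_left):
        xr = x - int(round(d))  # matched column in the right view
        if 0 <= xr < len(disp_right) and abs(d - disp_right[xr]) <= tol:
            labels.append("fusion")
        else:
            labels.append("non-corresponding")
    return labels

disp_l = [1, 1, 1, 1]
disp_r = [1, 1, 3, 1]  # third right-view estimate disagrees
print(lr_consistency(disp_l, disp_r))
# ['non-corresponding', 'fusion', 'fusion', 'non-corresponding']
```

Border pixels fall out of range and are labelled non-corresponding, which roughly mirrors occlusion at image edges.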
NASA Astrophysics Data System (ADS)
Yuter, S. E.; Garrett, T. J.; Fallgatter, C.; Shkurko, K.; Howlett, D.; Dean, J.; Hardin, N.
2012-12-01
We introduce a new instrument, the Fallgatter Technologies Multi-Angle Snowflake Camera (MASC), that provides <30 micron resolution stereoscopic photographic images of individual large falling hydrometeors together with accurate measurements of their fallspeed. Previously, identification of hydrometeor form has required initial collection on a flat surface, a process that is somewhat subjective and remarkably finicky due to the fragile nature of the particles. Other hydrometeor instruments, such as the 2DVD, are automated, leave the particle untouched, and provide fallspeed data; however, they provide only 200 micron resolution silhouettes, which can be insufficient for habit and riming identification and for the requirements of microwave scattering calculations. The MASC is like the 2DVD but uses a sensitive IR motion sensor as a trigger and actually photographs the particle surface from multiple angles. Field measurements from Alta Ski Area near Salt Lake City are providing beautiful images and fallspeed data, suggesting that MASC measurements may help develop improved parameterizations for hydrometeor microwave scattering. Hundreds of thousands of images have been collected, enabling comparisons of hydrometeor development, morphology, and fallspeed with a co-located vertically pointing 24 GHz Micro Rain Radar. Here we show multi-angle images from the MASC, size-fallspeed relationships, and discrete dipole approximation scattering calculations for a range of hydrometeor forms at 24 GHz, 94 GHz, and 183 GHz. The scattering calculations indicate that complex, aggregated snowflake shapes appear to be more strongly forward scattering, at the expense of reduced backscatter, than graupel particles of similar size.
Measuring stereoscopic image quality experience with interpretation based quality methodology
NASA Astrophysics Data System (ADS)
Häkkinen, Jukka; Kawai, Takashi; Takatalo, Jari; Leisti, Tuomas; Radun, Jenni; Hirsaho, Anni; Nyman, Göte
2008-01-01
Stereoscopic technologies have developed significantly in recent years. These advances also require more understanding of the experiential dimensions of stereoscopic content. In this article we describe experiments exploring the experiences viewers have when viewing stereoscopic content. We used eight different contents, shown to the participants in a paired comparison experiment where the task was to compare the same content in stereoscopic and non-stereoscopic form. The participants indicated their preference but were also interviewed about the arguments they used when making the decision. By conducting a qualitative analysis of the interview texts, we categorized the significant experiential factors related to viewing stereoscopic material. Our results indicate that reality-likeness as well as artificiality were often used as arguments in comparing the stereoscopic materials. There were also more emotional terms in the descriptions of the stereoscopic films, which might indicate that the stereoscopic projection technique enhances the emotions conveyed by the film material. Finally, the participants indicated that the three-dimensional material required longer presentation time, as there were more interesting details to see.
Is eye damage caused by stereoscopic displays?
NASA Astrophysics Data System (ADS)
Mayer, Udo; Neumann, Markus D.; Kubbat, Wolfgang; Landau, Kurt
2000-05-01
A normally developing child will achieve emmetropia in youth and maintain it; the cornea, lens, and axial length of the eye grow in a strikingly coordinated way. In recent years, research has shown that this coordinated growth process is a visually controlled closed loop. The mechanism has been studied particularly in animals, where it was found that the growth of the axial length of the eyeball is controlled by image focus information from the retina. It was shown that this visually guided growth control mechanism can become maladjusted, resulting in ametropia; it has thereby been proven that, for example, short-sightedness is not only caused by heredity but is acquired under certain visual conditions. We show that these conditions are similar to those of viewing stereoscopic displays, where the normal accommodation-convergence coupling is broken. We evaluate the potential for eye damage from viewing stereoscopic displays and, in this context, compare different viewing methods for stereoscopic displays. Moreover, guidance is given on how the environment and display conditions should be set, and which users should be chosen, to minimize the risk of eye damage.
A systematized WYSIWYG pipeline for digital stereoscopic 3D filmmaking
NASA Astrophysics Data System (ADS)
Mueller, Robert; Ward, Chris; Hušák, Michal
2008-02-01
Digital tools are transforming stereoscopic 3D content creation and delivery, creating an opportunity for the broad acceptance and success of stereoscopic 3D films. Beginning in late 2005, a series of mostly CGI features has successfully introduced the public to this new generation of highly comfortable, artifact-free digital 3D. While the response has been decidedly favorable, a lack of high-quality live-action films could hinder long-term success. Live-action stereoscopic films have historically been more time-consuming, costly, and creatively limiting than 2D films; thus a need arises for a live-action 3D filmmaking process that minimizes such limitations. A unique 'systematized' what-you-see-is-what-you-get (WYSIWYG) pipeline is described which allows the efficient, intuitive, and accurate capture and integration of 3D and 2D elements from multiple shoots and sources, both live-action and CGI. Throughout this pipeline, digital tools utilize a consistent algorithm to provide meaningful and accurate visual depth references with respect to the viewing audience in the target theater environment. This intuitive, visual approach brings efficiency and creativity to the 3D filmmaking process by eliminating both the need for a 'mathematician mentality' of spreadsheets and calculators and any trial-and-error guesswork, while enabling the most comfortable, 'pixel-perfect', artifact-free 3D product possible.
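The visual depth references such a pipeline computes come down to viewing geometry. A standard similar-triangles relation (not taken from this paper; the viewing distance, eye separation, and parallax values below are illustrative) gives the depth an audience member perceives for a given screen parallax:

```python
# Perceived depth of a point shown with screen parallax p, for eye
# separation e and viewing distance v (all in mm): z = v * p / (e - p).
# Positive parallax -> behind the screen; negative -> in front of it.

def perceived_depth(parallax_mm, eye_sep_mm=65.0, view_dist_mm=5000.0):
    if parallax_mm >= eye_sep_mm:
        raise ValueError("parallax at or above eye separation diverges")
    return view_dist_mm * parallax_mm / (eye_sep_mm - parallax_mm)

# 13 mm of positive parallax in a hypothetical 5 m viewing setup:
print(perceived_depth(13.0))  # -> 1250.0 (mm behind the screen)
```

Evaluating this per shot for the target theater is what lets the tools report depth "with respect to the viewing audience" rather than in raw pixel disparities.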
NASA Astrophysics Data System (ADS)
Lee, Seungwon; Park, Ilkwon; Kim, Manbae; Byun, Hyeran
2006-10-01
As digital broadcasting technologies have rapidly progressed, users' expectations for realistic and interactive broadcasting services have also increased. As one such service, 3D multi-view broadcasting has received much attention recently. In general, all the view sequences acquired at the server are transmitted to the client, and the user can then select some or all of the views according to display capabilities. However, this kind of system requires high processing power at both the server and the client, posing a difficulty for practical applications. To overcome this problem, a relatively simple method is to transmit only the two view sequences requested by the client in order to deliver a stereoscopic video. In such a system, effective communication between the server and the client is an important aspect. In this paper, we propose an efficient multi-view system that transmits two view sequences and their depth maps according to the user's request. The view selection process is integrated into MPEG-21 DIA (Digital Item Adaptation) so that our system is compatible with the MPEG-21 multimedia framework. DIA is generally composed of resource adaptation and descriptor adaptation. One merit is that the SVA (stereoscopic video adaptation) descriptors defined in the DIA standard are used to deliver users' preferences and device capabilities. Furthermore, multi-view descriptions related to the multi-view camera and system are newly introduced. The syntax of the descriptions and their elements is represented in XML (eXtensible Markup Language) schema. If the client requests an adapted descriptor (e.g., view numbers) from the server, the server sends the associated view sequences. Finally, we present a method to reduce the visual discomfort that may occur while viewing stereoscopic video. This phenomenon happens when the view changes, as well as when a stereoscopic image produces excessive disparity caused by a large baseline between the two cameras.
To address the former, IVR (intermediate view reconstruction) is employed for smooth transition between two stereoscopic view sequences; a disparity adjustment scheme is used for the latter. Finally, through the implementation of a testbed and experiments, we show the value and possibilities of our system.
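A disparity adjustment of the kind mentioned for excessive-baseline discomfort can be sketched as a uniform rescaling into a comfort budget. The scheme and the 20-pixel limit below are illustrative assumptions, not the paper's actual algorithm:

```python
# Toy disparity-adjustment step: if the largest absolute screen disparity
# in a stereo pair exceeds a comfort limit, uniformly scale all
# disparities so the worst case lands exactly on the limit.

def adjust_disparities(disparities, comfort_limit=20.0):
    worst = max(abs(d) for d in disparities)
    if worst <= comfort_limit:
        return list(disparities)  # already comfortable; leave untouched
    scale = comfort_limit / worst
    return [d * scale for d in disparities]

print(adjust_disparities([5.0, -10.0, 40.0]))  # -> [2.5, -5.0, 20.0]
```

Uniform scaling preserves relative depth ordering while capping the maximum parallax the viewer's eyes must fuse.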
SEISVIZ3D: Stereoscopic system for the representation of seismic data - Interpretation and Immersion
NASA Astrophysics Data System (ADS)
von Hartmann, Hartwig; Rilling, Stefan; Bogen, Manfred; Thomas, Rüdiger
2015-04-01
The seismic method is a valuable tool for obtaining 3D images of the subsurface. Seismic data acquisition today is not only a topic for oil and gas exploration but is also used for geothermal exploration, inspections of nuclear waste sites, and scientific investigations. The system presented in this contribution may also have an impact on the visualization of 3D data from other geophysical methods. 3D seismic data can be displayed in different ways to give a spatial impression of the subsurface: as a combination of individual vertical cuts, possibly linked to a cubical portion of the data volume, and as a stereoscopic view of the seismic data. These methods should increase the spatial perception of the structures, and thus of the processes, in the subsurface. Stereoscopic techniques are implemented, e.g., in the CAVE and the WALL, both of which require a lot of space and high technical effort. The aim of the interpretation system shown here is stereoscopic visualization of seismic data at the workplace, i.e. at the personal workstation and monitor. The system was developed with the following criteria in mind: • fast rendering of large amounts of data, so that a continuous view of the data is possible when changing the viewing angle and the data section, • defining areas in stereoscopic view to translate the spatial impression directly into an interpretation, • the development of an appropriate user interface, including head-tracking, for handling the increased degrees of freedom, • the possibility of collaboration, i.e. teamwork and idea exchange with simultaneous viewing of a scene at remote locations. The possibilities offered by a stereoscopic system do not replace a conventional interpretation workflow; rather, they have to be implemented into it as an additional step.
The amplitude distribution of the seismic data is a challenge for the stereoscopic display, because the opacity level and the scaling and selection of the data have to fit each other. The data selection may also depend on the visualization task: not only the amplitude data but also different seismic attribute transformations can be used. The development is supplemented by interviews to analyse the efficiency and manageability of the stereoscopic workplace environment. Another point of investigation is immersion, i.e. the increased concentration on the observed scene when passing through the data, triggered by stereoscopic viewing. This effect is reinforced by a user interface that is so intuitive and simple that it does not draw attention away from the scene. For seismic interpretation, the stereoscopic view supports the pattern recognition of geological structures and the detection of their spatial heterogeneity, topics which are relevant for current geothermal exploration in Germany.
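One common way to fit opacity to the amplitude distribution is an opacity transfer function that hides near-zero amplitudes so strong reflectors stand out. The ramp shape and thresholds below are assumptions for illustration, not the system's actual mapping:

```python
# Illustrative opacity transfer function for volume-rendered seismic
# amplitudes: |a| below a threshold is fully transparent, above a second
# threshold fully opaque, with a linear ramp in between.

def amplitude_to_opacity(a, threshold=0.2, full=0.8):
    """Map a normalized amplitude a in [-1, 1] to an opacity in [0, 1]."""
    m = abs(a)
    if m <= threshold:
        return 0.0
    if m >= full:
        return 1.0
    return (m - threshold) / (full - threshold)

print([round(amplitude_to_opacity(a), 2) for a in (-0.9, 0.1, 0.5)])
# -> [1.0, 0.0, 0.5]
```

The same mapping can be applied to attribute volumes instead of raw amplitudes, which is why the scaling and selection of the data have to be tuned together with the opacity.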
Compression of stereoscopic video using MPEG-2
NASA Astrophysics Data System (ADS)
Puri, A.; Kollarits, Richard V.; Haskell, Barry G.
1995-10-01
Many current and emerging applications in entertainment, remote operations, manufacturing, and medicine can benefit from the depth perception offered by stereoscopic video systems, which employ two views of a scene imaged under the constraints imposed by the human visual system. Among the many challenges to be overcome for practical realization and widespread use of 3D/stereoscopic systems are good 3D displays and efficient techniques for digital compression of the enormous amounts of data involved, while maintaining compatibility with normal video decoding and display systems. After a brief introduction to the basics of 3D/stereo, including issues of depth perception, stereoscopic 3D displays, and terminology in stereoscopic imaging and display, we present an overview of the tools in the MPEG-2 video standard that are relevant to compression of stereoscopic video, the main topic of this paper. Next, we outline the various approaches for compression of stereoscopic video and then focus on compatible stereoscopic video coding using MPEG-2 temporal scalability concepts. Compatible coding employing two different types of prediction structures becomes possible: disparity-compensated prediction, and combined disparity- and motion-compensated prediction. To further improve coding performance and display quality, preprocessing to reduce mismatch between the two views forming the stereoscopic video is considered. Results of simulations performed on stereoscopic video of normal TV resolution are then reported, comparing the performance of the two prediction structures with the simulcast solution. Combined disparity- and motion-compensated prediction is found to offer the best performance. The results indicate that compression of both views of stereoscopic video of normal TV resolution appears feasible in a total of 6 to 8 Mbit/s. We then discuss multi-viewpoint video, a generalization of stereoscopic video.
Finally, we describe ongoing efforts within MPEG-2 to define a profile for stereoscopic video coding, as well as the promise of MPEG-4 in addressing coding of multi-viewpoint video.
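The gain from disparity-compensated prediction comes from coding only the residual between one view and a disparity-shifted version of the other. A 1-D toy (block structure, values, and the whole-row disparity are illustrative simplifications of the real block-based scheme):

```python
# Toy disparity-compensated prediction: predict a right-view row from the
# reconstructed left-view row shifted by a disparity, then code only the
# residual. Edge samples are clamped.

def predict_right(left_row, disparity):
    n = len(left_row)
    return [left_row[min(max(x - disparity, 0), n - 1)] for x in range(n)]

left = [10, 10, 20, 30, 30, 40]
right = [10, 10, 10, 20, 30, 30]  # same scene shifted right by 1 px
pred = predict_right(left, 1)
residual = [r - p for r, p in zip(right, pred)]
print(residual)  # -> [0, 0, 0, 0, 0, 0]: nothing left to code
```

Combining this with motion-compensated prediction from earlier frames of the same view is what the paper finds performs best.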
Stereoscopic advantages for vection induced by radial, circular, and spiral optic flows.
Palmisano, Stephen; Summersby, Stephanie; Davies, Rodney G; Kim, Juno
2016-11-01
Although observer motions project different patterns of optic flow to our left and right eyes, there has been surprisingly little research into potential stereoscopic contributions to self-motion perception. This study investigated whether visually induced illusory self-motion (i.e., vection) is influenced by the addition of consistent stereoscopic information to radial, circular, and spiral (i.e., combined radial + circular) patterns of optic flow. Stereoscopic vection advantages were found for radial and spiral (but not circular) flows when monocular motion signals were strong. Under these conditions, stereoscopic benefits were greater for spiral flow than for radial flow. These effects can be explained by differences in the motion aftereffects generated by these displays, which suggest that the circular motion component in spiral flow selectively reduced adaptation to stereoscopic motion-in-depth. Stereoscopic vection advantages were not observed for circular flow when monocular motion signals were strong, but emerged when monocular motion signals were weakened. These findings show that stereoscopic information can contribute to visual self-motion perception in multiple ways.
The compatibility of consumer DLP projectors with time-sequential stereoscopic 3D visualisation
NASA Astrophysics Data System (ADS)
Woods, Andrew J.; Rourke, Tegan
2007-02-01
A range of advertised "Stereo-Ready" DLP projectors are now available on the market which allow high-quality, flicker-free stereoscopic 3D visualization using the time-sequential stereoscopic display method. The ability to use a single projector for stereoscopic viewing offers a range of advantages, including extremely good stereoscopic alignment and, in some cases, portability. It has also recently become known that some consumer DLP projectors can be used for time-sequential stereoscopic visualization; however, it was not well understood which projectors are compatible and which are not, what display modes (frequency and resolution) are compatible, and what stereoscopic display quality attributes are important. We conducted a study testing a wide range of projectors for stereoscopic compatibility. This paper reports on the testing of 45 consumer DLP projectors of widely differing specifications (brand, resolution, brightness, etc.). The projectors were tested for stereoscopic compatibility with various video formats (PAL, NTSC, 480P, 576P, and various VGA resolutions) and video input connections (composite, S-Video, component, and VGA). Fifteen projectors were found to work well at up to 85 Hz stereo in VGA mode. Twenty-three projectors would work at 60 Hz stereo in VGA mode.
Analysis of physiological impact while reading stereoscopic radiographs
NASA Astrophysics Data System (ADS)
Unno, Yasuko Y.; Tajima, Takashi; Kuwabara, Takao; Hasegawa, Akira; Natsui, Nobutaka; Ishikawa, Kazuo; Hatada, Toyohiko
2011-03-01
Stereoscopic viewing technology is expected to improve diagnostic performance in terms of reading efficiency by adding one more dimension to conventional 2D images. Although stereoscopic technology has been applied in many different fields, including TV, movies, and medical applications, there has been concern about physiological fatigue from reading stereoscopic radiographs, although no established physiological fatigue data have been provided. In this study, we measured the α-amylase concentration in saliva, heart rate, and the normalized tissue hemoglobin index (nTHI) in the blood of the frontal area to estimate physiological fatigue from reading both stereoscopic radiographs and conventional 2D radiographs. In addition, subjective assessments were performed. Pupil contraction occurred just after the reading of the stereoscopic images, but the subjective assessments of visual fatigue were nearly identical for reading the conventional 2D and the stereoscopic radiographs. The α-amylase concentration and the nTHI continued to decline while examinees read both 2D and stereoscopic images, consistent with the subjective finding that almost half of the examinees reported feeling sleepy after reading. The subjective assessments of brain fatigue showed little difference between 2D and stereoscopic reading. In summary, this study shows that the physiological fatigue caused by stereoscopic reading is equivalent to that of conventional 2D reading, including ocular fatigue and the burden imposed on the brain.
Efficient stereoscopic contents file format on the basis of ISO base media file format
NASA Astrophysics Data System (ADS)
Kim, Kyuheon; Lee, Jangwon; Suh, Doug Young; Park, Gwang Hoon
2009-02-01
Many 3D contents have been widely used for multimedia services; however, real 3D video contents have been adopted only for limited applications such as specially designed 3D cinemas, because of the difficulty of capturing real 3D video contents and the limitations of the display devices available on the market. Recently, however, diverse types of display devices for stereoscopic video contents have been released. In particular, a mobile phone with a stereoscopic camera has been released, which allows a user, as a consumer, to have more realistic experiences without glasses and, as a content creator, to take stereoscopic images or record stereoscopic video contents. However, a user can only store and display these acquired stereoscopic contents on his/her own devices, due to the absence of a common file format for such contents. This limitation prevents users from sharing their contents with other users, which makes it difficult for the market for stereoscopic contents to expand. Therefore, this paper proposes a common file format, based on the ISO base media file format, for stereoscopic contents, which enables users to store and exchange pure stereoscopic contents. This technology is also currently under development as an international standard of MPEG, called the stereoscopic video application format.
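The ISO base media file format the proposal builds on is a sequence of boxes, each headed by a 4-byte big-endian size and a 4-byte type code. A minimal sketch of walking top-level boxes (extended 64-bit sizes and nested boxes are out of scope for this toy):

```python
# Minimal ISO base media file format box walk: each box starts with a
# 32-bit big-endian size (covering the whole box) and a 4-character type.

import struct

def list_boxes(data):
    boxes, off = [], 0
    while off + 8 <= len(data):
        size, btype = struct.unpack_from(">I4s", data, off)
        boxes.append((btype.decode("ascii"), size))
        if size < 8:
            break  # malformed or 64-bit size; not handled in this sketch
        off += size
    return boxes

# A tiny synthetic file: a 16-byte 'ftyp' box followed by a 12-byte 'moov'.
sample = struct.pack(">I4s", 16, b"ftyp") + b"\x00" * 8 \
       + struct.pack(">I4s", 12, b"moov") + b"\x00" * 4
print(list_boxes(sample))  # -> [('ftyp', 16), ('moov', 12)]
```

A stereoscopic application format adds its own boxes and brands on top of this container, which is what lets existing ISO-BMFF tooling still parse the files.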
21 CFR 886.1880 - Fusion and stereoscopic target.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Fusion and stereoscopic target. 886.1880 Section... (CONTINUED) MEDICAL DEVICES OPHTHALMIC DEVICES Diagnostic Devices § 886.1880 Fusion and stereoscopic target. (a) Identification. A fusion and stereoscopic target is a device intended for use as a viewing object...
21 CFR 886.1880 - Fusion and stereoscopic target.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Fusion and stereoscopic target. 886.1880 Section... (CONTINUED) MEDICAL DEVICES OPHTHALMIC DEVICES Diagnostic Devices § 886.1880 Fusion and stereoscopic target. (a) Identification. A fusion and stereoscopic target is a device intended for use as a viewing object...
Effects of cortical damage on binocular depth perception.
Bridge, Holly
2016-06-19
Stereoscopic depth perception requires considerable neural computation, including the initial correspondence of the two retinal images, comparison across local regions of the visual field, and integration with other cues to depth. The most common cause of loss of stereoscopic vision is amblyopia, in which one eye has failed to form an adequate input to the visual cortex, usually due to strabismus (a deviating eye) or anisometropia. However, the significant cortical processing required to produce the percept of depth means that, even when the retinal input from both eyes is intact, brain damage or dysfunction can interfere with stereoscopic vision. In this review, I examine the evidence for impairment of binocular vision and depth perception that can result from insults to the brain, including discrete damage such as temporal lobectomy and more systemic diseases such as posterior cortical atrophy. This article is part of the themed issue 'Vision in our three-dimensional world'. © 2016 The Authors.
[Effects of stereoscopic cultivation on photosynthetic characteristics and growth of Tulipa edulis].
Sun, Yuan; Guo, Qiao-Sheng; Zhu, Zai-Biao; Lin, Jian-Luo; Zhou, Bo-Ya; Zhao, Min-Jie
2016-06-01
The effect of stereoscopic cultivation on the growth, photosynthetic characteristics, and yield of Tulipa edulis was studied to explore the feasibility of stereoscopic cultivation for efficient cultivation of T. edulis. Total leaf area and photosynthetic parameters of T. edulis under stereoscopic cultivation (upper, middle, and lower layers) and the control were measured using an LI-3100 leaf area meter and an LI-6400XT photosynthesis system during the peak growing period of T. edulis. Plant biomass and biomass allocation were also determined. In addition, the bulb regeneration and yield of T. edulis were measured at harvest time. The results indicated that in the middle layer of stereoscopic cultivation, the leaf biomass proportion was the highest, but total bulb fresh and dry weight and output growth (fresh weight) were the lowest among the treatments, and total bulb fresh weight in the middle layer was significantly reduced, by 22.84%, compared with the control. Light intensity in the lower layer of stereoscopic cultivation was moderate; there, the net photosynthetic rate and water use efficiency of T. edulis were higher than in the other layers of stereoscopic cultivation, and the bulb biomass proportion was the highest of all treatments. No significant difference was detected in total bulb fresh weight, dry weight, or output growth (fresh weight) between the middle layer of stereoscopic cultivation and the control. In general, there was no significant difference in the growth status of T. edulis between stereoscopic cultivation and the control. For the same land area, stereoscopic cultivation increased the yield of T. edulis by 161.66% in fresh weight and 141.35% in dry weight compared with the control. In conclusion, stereoscopic cultivation can improve space utilization, increase production, and achieve high-density cultivation of T. edulis. Copyright© by the Chinese Pharmaceutical Association.
ERIC Educational Resources Information Center
Zacharis, Georgios S.; Mikropoulos, Tassos A.; Priovolou, Chryssi
2013-01-01
Previous studies report the involvement of specific brain activation in stereoscopic vision and the perception of depth information. This work presents the first comparative results of adult women on the effects of stereoscopic perception in three different static environments; a real, a two dimensional (2D) and a stereoscopic three dimensional…
NASA Astrophysics Data System (ADS)
Perez-Bayas, Luis
2001-06-01
In stereoscopic perception of a three-dimensional world, binocular disparity might be thought of as the most important cue to 3D depth perception. In reality, however, many other factors contribute to the 'final' conscious and subconscious stereoscopic percept, such as luminance, contrast, orientation, color, motion, and figure-ground extraction (the pop-out phenomenon). In addition, more complex perceptual factors exist, such as attention and its duration (an equivalent of 'brain zooming') in relation to physiological central vision, as opposed to attention to peripheral vision, and the brain's 'top-down' information in relation to psychological factors such as memory of previous experiences and present emotions. The brain's internal mapping of a purely perceptual world may differ from its internal mapping of visual-motor space, which represents an 'action-directed perceptual world.' Moreover, psychological factors (emotions and fine adjustments) are much more involved in a stereoscopic world than in a flat 2D world, and likewise in a world using peripheral vision (as in VR, which uses a curved perspective representation and displays, as natural vision does) as opposed to one presenting only central vision (bi-macular stereoscopic vision), as in the majority of typical stereoscopic displays. This paper presents the most recent and precise information available about the psycho-neuro-physiological factors involved in the perception of a stereoscopic three-dimensional world, with an attempt to give practical, functional, and pertinent guidelines for building more 'natural' stereoscopic displays.
Many-core computing for space-based stereoscopic imaging
NASA Astrophysics Data System (ADS)
McCall, Paul; Torres, Gildo; LeGrand, Keith; Adjouadi, Malek; Liu, Chen; Darling, Jacob; Pernicka, Henry
The potential benefits of using parallel computing in real-time visual-based satellite proximity operations missions are investigated. Improvements in performance and relative navigation solutions over single-thread systems can be achieved through multi- and many-core computing. Stochastic relative orbit determination methods benefit from the higher measurement frequencies, allowing them to more accurately determine the associated statistical properties of the relative orbital elements. More accurate orbit determination can lead to reduced fuel consumption and extended mission capabilities and duration. Inherent to the process of stereoscopic image processing is the difficulty of loading, managing, parsing, and evaluating large amounts of data efficiently, which may result in delays or highly time-consuming processes for single- (or few-) processor systems or platforms. In this research we utilize the Single-Chip Cloud Computer (SCC), a fully programmable 48-core experimental processor created by Intel Labs as a platform for many-core software research, which provides a high-speed on-chip network for sharing information, advanced power-management technologies, and support for message passing. The results from utilizing the SCC platform for the stereoscopic image processing application are presented in the form of Performance, Power, Energy, and Energy-Delay-Product (EDP) metrics. A comparison between the SCC results and those obtained from executing the same application on a commercial PC is also presented, showing the potential benefits of utilizing the SCC in particular, and many-core platforms in general, for real-time processing of visual-based satellite proximity operations missions.
Stereoscopic 3D graphics generation
NASA Astrophysics Data System (ADS)
Li, Zhi; Liu, Jianping; Zan, Y.
1997-05-01
Stereoscopic display technology is one of the key techniques in areas such as simulation, multimedia, entertainment, and virtual reality. Stereoscopic 3D graphics generation is an important part of a stereoscopic 3D display system. In this paper, we first describe the principle of stereoscopic display and summarize several methods of generating stereoscopic 3D graphics. Second, to overcome the problems of user-defined model methods (such as inconvenience and long modification cycles), we put forward a method based on vector graphics file definitions. This lets us design more directly, modify models simply and easily, and generate graphics more conveniently, while making full use of graphics accelerator cards. Finally, we discuss how to speed up the generation process.
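One common way to generate the left/right views such a system needs is the parallel-axis (shifted-sensor) camera model. The sketch below is a minimal illustration of that model with made-up parameter values; it is not the paper's vector-graphics method.

```python
def project_stereo(point, eye_sep=0.065, focal=1.0, zero_parallax=2.0):
    """Project a 3D point (x, y, z) into left/right image coordinates with
    a parallel-axis stereo camera: the two centers of projection sit at
    x = +-eye_sep/2, and an opposite sensor shift for each eye places the
    zero-parallax plane at z = zero_parallax."""
    x, y, z = point
    half = eye_sep / 2.0
    shift = focal * half / zero_parallax  # per-eye sensor shift
    x_left = focal * (x + half) / z - shift
    x_right = focal * (x - half) / z + shift
    return (x_left, focal * y / z), (x_right, focal * y / z)

# A point on the zero-parallax plane lands at the same place in both views.
left, right = project_stereo((0.0, 0.0, 2.0))
print(abs(left[0] - right[0]))  # 0.0
```

Points beyond the zero-parallax plane acquire uncrossed parallax and appear behind the screen; nearer points acquire crossed parallax and appear in front of it.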
NASA Astrophysics Data System (ADS)
Ilgner, Justus F. R.; Kawai, Takashi; Shibata, Takashi; Yamazoe, Takashi; Westhofen, Martin
2006-02-01
Introduction: An increasing number of surgical procedures are performed in a microsurgical and minimally invasive fashion. However, the performance of such surgery, its possibilities, and its limitations are difficult to teach. Stereoscopic video has evolved from a complex production process and expensive hardware towards rapid editing of video streams at standard and HDTV resolution which can be displayed on portable equipment. This study evaluates the usefulness of stereoscopic video in teaching undergraduate medical students. Material and methods: From an earlier study we chose two clips each of three different microsurgical operations (tympanoplasty type III of the ear, endonasal operation of the paranasal sinuses, and laser chordectomy for carcinoma of the larynx). This material was supplemented with 23 clips of a cochlear implantation, specifically edited for a portable computer with an autostereoscopic display (PC-RD1-3D, SHARP Corp., Japan). The recording and synchronization of the left and right images were performed at the University Hospital Aachen. The footage was edited stereoscopically at Waseda University by means of our original software for non-linear editing of stereoscopic 3-D movies. The material was then converted into the streaming 3-D video format, so that the video clips could be presented in a file type that does not depend on a television signal such as PAL or NTSC. 25 4th-year medical students who participated in the general ENT course at Aachen University Hospital were asked to estimate depth cues within the six video clips plus the cochlear implantation clips. Another 25 4th-year students who were shown the material monoscopically on a conventional laptop served as controls. Results: All participants noted that the additional depth information helped with understanding the relation of anatomical structures, even though none had hands-on experience with Ear, Nose and Throat operations before or during the course.
The monoscopic group generally estimated resection depth at much lower values than in reality. Although this was also the case for some participants in the stereoscopic group, their estimation of depth features reflected the enhanced depth impression provided by stereoscopy. Conclusion: Following this first implementation of stereoscopic video teaching, medical students who are inexperienced with ENT surgical procedures were able to reproduce depth information, and therefore anatomically complex structures, to a greater extent. Besides extending video teaching to junior doctors, the next evaluation step will address its effect on the learning curve during the surgical training program.
An HTML Tool for Production of Interactive Stereoscopic Compositions.
Chistyakov, Alexey; Soto, Maria Teresa; Martí, Enric; Carrabina, Jordi
2016-12-01
The benefits of stereoscopic vision in medical applications have been appreciated and thoroughly studied for more than a century. The usage of stereoscopic displays has a proven positive impact on performance in various medical tasks. At the same time, the market for 3D-enabled technologies is blooming. New high-resolution stereo cameras, TVs, projectors, monitors, and head-mounted displays are becoming available. This equipment, complete with a corresponding application program interface (API), can be integrated into a system relatively easily. Such systems could open new possibilities for medical applications exploiting stereoscopic depth. This work proposes a tool for the production of interactive stereoscopic graphical user interfaces, which could serve as a software layer for web-based medical systems exploiting the stereoscopic effect. The tool's mode of operation and the results of the subjective and objective performance tests conducted are then presented.
NASA Astrophysics Data System (ADS)
Choi, Yang Hyun; Ahn, Jaehong
2010-02-01
Nowadays, stereoscopic technology is receiving attention as a leading technology for the next generation of the film industry in many countries, including Korea. In Korean stereoscopic film production, however, not only the quality but also the quantity of stereoscopic contents still leaves much to be desired, and the know-how and skill of stereoscopic film production have advanced only slowly. This paper presents research on the correlation between stereoscopic cinematography and storytelling. Based on a case study of a documentary film about Ho Quyen, a UNESCO World Heritage site in Vietnam, we derive guidelines for stereoscopic film production and storytelling. For this study, we analyzed the scenes and shots of a documentary film script in the pre-production stage, and the results of this analysis were reflected in a storyboard. A stereographer grasped the storytelling the director intended through the script and storyboard, and then applied suitable stereoscopic cinematography parameters to every shot with a beam-splitter rig. A researcher recorded the major parameters, such as interaxial distance and convergence angle, for every shot. Average parameter values per scene were then calculated from the parameter database, and the relationship between stereoscopic cinematography and storytelling was derived by shot-by-shot analysis.
Stereoscopic 3D video games and their effects on engagement
NASA Astrophysics Data System (ADS)
Hogue, Andrew; Kapralos, Bill; Zerebecki, Chris; Tawadrous, Mina; Stanfield, Brodie; Hogue, Urszula
2012-03-01
With television manufacturers developing low-cost stereoscopic 3D displays, a large number of consumers will undoubtedly have access to 3D-capable televisions at home. The availability of 3D technology places the onus on content creators to develop interesting and engaging content. While the technology of stereoscopic displays and content generation is well understood, many questions surrounding its effects on the viewer are yet to be answered. The effects of stereoscopic display on passive viewers of film are known; however, video games are fundamentally different, since the viewer/player is actively (rather than passively) engaged in the content. Questions of how stereoscopic viewing affects interaction mechanics have previously been studied in the context of player performance, but very few studies have attempted to quantify the player experience to determine whether stereoscopic 3D has a positive or negative influence on overall engagement. In this paper we present a preliminary study of the effects of stereoscopic 3D on player engagement in video games. Participants played a video game in two conditions, traditional 2D and stereoscopic 3D, and their engagement was quantified using a previously validated self-report tool. The results suggest that S3D has a positive effect on immersion, presence, flow, and absorption.
A Review on Stereoscopic 3D: Home Entertainment for the Twenty First Century
NASA Astrophysics Data System (ADS)
Karajeh, Huda; Maqableh, Mahmoud; Masa'deh, Ra'ed
2014-12-01
In the last few years, stereoscopy has developed very rapidly and been employed in many different fields, such as entertainment. Given the importance of the entertainment aspect of stereoscopic 3D (S3D) applications, a review of the current state of S3D development in entertainment technology is conducted. In this paper, a survey of stereoscopic entertainment is presented, discussing the significant development of 3D cinema, the major developments in 3DTV, and the issues related to 3D video content and 3D video games. Moreover, we review some problems that watching stereoscopic content can cause in viewers' visual systems. Some stereoscopic viewers are not satisfied: they are frustrated by wearing glasses, experience visual fatigue, complain about the unavailability of 3D content, and/or report some sickness. We therefore discuss stereoscopic visual discomfort and to what extent viewers experience eye fatigue while watching 3D content or playing 3D games. Solutions suggested in the literature for this problem are discussed.
The ghost of Helioth and his stereoscope: the return of a phantom.
Wade, Nicholas J
2012-01-01
Among the myths surrounding the invention of the stereoscope, that of Helioth stands as a supreme example of shoddy scholarship and its subsequent dissemination. Helioth was said to have made a simple stereoscope before Wheatstone presented his mirror stereoscope to the public in 1838. There is no evidence of Helioth's existence prior to a report in the mid-twentieth century, and despite attempts to dispel his ghost it has recently resurfaced.
NASA Astrophysics Data System (ADS)
Viale, Alberto; Villa, Dario
2011-03-01
Stereoscopy has recently grown greatly in popularity, and various technologies for viewing stereoscopic images and movies are spreading in theaters and homes, becoming affordable even for home users. However, there are some golden rules that users should follow to better enjoy stereoscopic images; above all, the viewing conditions should not differ too much from the ideal ones assumed during the production process. To let the user perceive stereo depth instead of a flat image, two different views of the same scene are shown, one seen only by the left eye and the other only by the right; the visual system does the work of merging the two images into a virtual three-dimensional scene, giving the user the perception of depth. The two images presented to the user were created, whether by image synthesis or by more traditional techniques, following the rules of perspective. These rules require some boundary conditions to be made explicit, such as eye separation, field of view, parallax distance, and viewer position and orientation. In this paper we study how deviation of the viewer's position and orientation from the ideal ones, specified as parameters in the image creation process, affects the correctness of the reconstruction of the three-dimensional virtual scene.
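The dependence on viewer position can be made concrete with the standard geometry of screen parallax: with eye separation e, viewing distance d, and on-screen parallax p (positive = uncrossed), similar triangles place the fused point at distance d*e/(e-p) from the viewer. A minimal sketch under that standard model, with illustrative numbers only:

```python
def fused_distance(parallax, eye_sep=0.065, view_dist=2.0):
    """Distance of the fused 3D point from the viewer, by similar
    triangles. parallax and eye_sep share units (meters here);
    positive parallax is uncrossed (point perceived behind the screen)."""
    return view_dist * eye_sep / (eye_sep - parallax)

# The same on-screen parallax yields different perceived depths
# depending on where the viewer actually sits:
for d in (1.5, 2.0, 3.0):
    print(d, fused_distance(0.02, view_dist=d))
```

This is exactly why viewing conditions matter: content authored for one viewing distance is geometrically distorted at another, since the perceived depth scales with d.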
Stereoscopic visual fatigue assessment and modeling
NASA Astrophysics Data System (ADS)
Wang, Danli; Wang, Tingting; Gong, Yue
2014-03-01
Evaluation of stereoscopic visual fatigue is one of the focuses of user experience research. It is measured by either subjective or objective methods. Objective measures are preferred for their capability to quantify the degree of human visual fatigue without being affected by individual variation. However, little research has been conducted on the integration of objective indicators, or on the sensitivity of each objective indicator in reflecting subjective fatigue. This paper proposes a simple, effective method to evaluate visual fatigue more objectively. The stereoscopic viewing process is divided into a series of sessions, after each of which viewers rate their visual fatigue with subjective scores (SS) on a five-grade scale, followed by tests of the punctum maximum accommodation (PMA) and visual reaction time (VRT). Throughout the entire viewing process, their eye movements are recorded by an infrared camera. The pupil size (PS) and percentage of eyelid closure over the pupil over time (PERCLOS) are extracted from the recorded videos algorithmically. Based on this method, an experiment with 14 subjects was conducted to assess the visual fatigue induced by 3D images on a polarized 3D display. The experiment consisted of 10 sessions (5 min per session), each containing the same 75 images displayed in random order. The results show that PMA, VRT, and PERCLOS are the most efficient indicators of subjective visual fatigue, and a predictive model is finally derived by stepwise multiple regression.
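The final step, regressing subjective scores on the objective indicators, can be illustrated with a single-predictor ordinary least squares fit. The paper uses stepwise multiple regression over PMA, VRT, and PERCLOS; the closed-form one-variable fit and the data below are a simplified, made-up illustration of the idea.

```python
def ols_fit(xs, ys):
    """Ordinary least squares for y ~ a + b*x, in closed form."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

# Hypothetical per-session data: PERCLOS (%) against subjective score (1-5).
perclos = [5.0, 7.5, 9.0, 12.0, 15.5, 18.0]
ss = [1.2, 1.6, 2.0, 2.7, 3.3, 3.9]
a, b = ols_fit(perclos, ss)
print(f"SS ~ {a:.2f} + {b:.2f} * PERCLOS")
```

A stepwise procedure would repeat such fits, adding or dropping predictors (PMA, VRT, PERCLOS) according to their incremental explanatory power.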
Tele-transmission of stereoscopic images of the optic nerve head in glaucoma via Internet.
Bergua, Antonio; Mardin, Christian Y; Horn, Folkert K
2009-06-01
The objective was to describe an inexpensive system for visualizing stereoscopic photographs of the optic nerve head on computer displays and for transmitting such images via the Internet for collaborative research or remote clinical diagnosis in glaucoma. Stereoscopic images of glaucoma patients were digitized and stored in a file format (joint photographic stereo image [jps]) containing all three-dimensional information for both eyes on an Internet Web site (www.trizax.com). The size of the jps files was between 0.4 and 1.4 MB (corresponding to a diagonal stereo image size between 900 and 1400 pixels), suitable for Internet protocols. A conventional personal computer equipped with wireless stereoscopic LCD shutter glasses and a CRT monitor with a high refresh rate (120 Hz) can provide flicker-free stereo visualization of true-color images at high resolution. Modern thin-film-transistor LCD displays combined with inexpensive red-cyan goggles achieve stereoscopic visualization at the same resolution but with reduced color quality and contrast. The primary aim of our study, to transmit stereoscopic images via the Internet, was met. Additionally, we found that with both stereoscopic visualization techniques, cup depth, neuroretinal rim shape, and the slope of the inner wall of the optic nerve head can be perceived and interpreted qualitatively better than with monoscopic images. This study demonstrates high-quality, low-cost Internet transmission of stereoscopic images of the optic nerve head of glaucoma patients. The technique allows the exchange of stereoscopic images and can be applied to tele-diagnosis and glaucoma research.
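A .jps file is essentially an ordinary JPEG whose frame holds the two views side by side, which is what lets it travel over standard Internet protocols. The packing step (not the JPEG encoding itself) can be sketched on raw rasters; note that which eye occupies which half is a convention the viewer software must know, and the cross-view ordering assumed below (right-eye view on the left half) is an assumption for illustration.

```python
def pack_side_by_side(right_view, left_view):
    """Concatenate two equal-height rasters (lists of pixel rows) into one
    side-by-side frame; the right-eye view is placed on the left half
    (cross-view ordering, assumed here)."""
    if len(right_view) != len(left_view):
        raise ValueError("views must have the same height")
    return [r_row + l_row for r_row, l_row in zip(right_view, left_view)]

right = [[1, 2], [3, 4]]
left = [[5, 6], [7, 8]]
print(pack_side_by_side(right, left))  # [[1, 2, 5, 6], [3, 4, 7, 8]]
```

The combined frame is then compressed as a single JPEG, so file size scales with the doubled width, consistent with the 0.4 to 1.4 MB figures reported above.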
View generation for 3D-TV using image reconstruction from irregularly spaced samples
NASA Astrophysics Data System (ADS)
Vázquez, Carlos
2007-02-01
Three-dimensional television (3D-TV) will become the next big step in the development of advanced TV systems. One of the major challenges for the deployment of 3D-TV systems is the diversity of display technologies and the high cost of capturing multi-view content. Depth image-based rendering (DIBR) has been identified as a key technology for the generation of new views for stereoscopic and multi-view displays from a small number of views captured and transmitted. We propose a disparity compensation method for DIBR that does not require spatial interpolation of the disparity map. We use a forward-mapping disparity compensation with real precision. The proposed method deals with the irregularly sampled image resulting from this disparity compensation process by applying a re-sampling algorithm based on a bi-cubic spline function space that produces smooth images. The fact that no approximation is made on the position of the samples implies that geometrical distortions in the final images due to approximations in sample positions are minimized. We also paid attention to the occlusion problem. Our algorithm detects the occluded regions in the newly generated images and uses simple depth-aware inpainting techniques to fill the gaps created by newly exposed areas. We tested the proposed method in the context of generation of views needed for viewing on SynthaGram TM auto-stereoscopic displays. We used as input either a 2D image plus a depth map or a stereoscopic pair with the associated disparity map. Our results show that this technique provides high quality images to be viewed on different display technologies such as stereoscopic viewing with shutter glasses (two views) and lenticular auto-stereoscopic displays (nine views).
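The core of such a DIBR pipeline, forward-mapping each pixel by its disparity and marking newly exposed pixels as holes, can be sketched per scanline. This toy version uses integer disparities and trivial nearest-neighbor hole filling, unlike the paper's real-precision mapping and bi-cubic spline re-sampling; it is meant only to show the mechanics.

```python
def forward_warp_row(row, disparities):
    """Shift each pixel of one scanline by its disparity; where several
    source pixels land on the same target, keep the one with the larger
    disparity (nearer to the camera); then fill remaining holes by
    propagating the nearest valid sample from the left."""
    width = len(row)
    out = [None] * width
    best = [float("-inf")] * width
    for x, (value, d) in enumerate(zip(row, disparities)):
        nx = x + d
        if 0 <= nx < width and d > best[nx]:
            out[nx] = value
            best[nx] = d
    for x in range(width):  # trivial inpainting of disocclusions
        if out[x] is None and x > 0:
            out[x] = out[x - 1]
    return out

print(forward_warp_row([10, 20, 30, 40], [0, 0, 1, 1]))  # [10, 20, 20, 30]
```

The depth-ordered conflict resolution handles occlusions, while the hole-filling pass stands in for the depth-aware inpainting described above.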
An interactive in-game approach to user adjustment of stereoscopic 3D settings
NASA Astrophysics Data System (ADS)
Tawadrous, Mina; Hogue, Andrew; Kapralos, Bill; Collins, Karen
2013-03-01
Given the popularity of 3D film, content developers have been creating customizable stereoscopic 3D experiences for the user to enjoy at home. Stereoscopic 3D game developers often take a 'white box' approach whereby far too many controls and settings are exposed to the average consumer, who may have little knowledge of, or interest in, correctly adjusting them. Improper settings can leave users uncomfortable or unimpressed with their own user-defined stereoscopic 3D experience. We have begun investigating interactive approaches to in-game adjustment of the various stereoscopic 3D parameters to reduce the reliance on the user, thereby creating a more pleasurable stereoscopic 3D experience. In this paper, we describe a preliminary technique for interactively calibrating the various stereoscopic 3D parameters, and we compare this interface with the typical slider-based control interface that game developers use in commercial S3D games. Inspired by the standard testing methodologies experienced at an optometrist's, we have created a split-screen game with the same stereoscopic 3D game running in both screens, but with different interaxial distances. We expect that the interactive nature of the calibration will affect the final game experience, providing us with an indication of whether in-game, interactive S3D parameter calibration is a mechanism that game developers should adopt.
Using a high-definition stereoscopic video system to teach microscopic surgery
NASA Astrophysics Data System (ADS)
Ilgner, Justus; Park, Jonas Jae-Hyun; Labbé, Daniel; Westhofen, Martin
2007-02-01
Introduction: While there is an increasing demand for minimally invasive operative techniques in Ear, Nose and Throat surgery, these operations are difficult to learn for junior doctors and demanding to supervise for experienced surgeons. The motivation for this study was to integrate high-definition (HD) stereoscopic video monitoring into microscopic surgery in order to facilitate teaching interaction between senior and junior surgeons. Material and methods: We attached a 1280x1024 HD stereo camera (TrueVisionSystems TM Inc., Santa Barbara, CA, USA) to an operating microscope (Zeiss ProMagis, Zeiss Co., Oberkochen, Germany), whose images were processed online by a PC workstation built around dual Intel® Xeon® CPUs (Intel Co., Santa Clara, CA). The live image was displayed by two LCD projectors at 1280x768 pixels on a 1.25 m rear-projection screen through polarized filters. While the junior surgeon performed the surgical procedure based on the displayed stereoscopic image, all other participants (senior surgeon, nurse, and medical students) shared the same stereoscopic image from the screen. Results: With the basic setup performed only once on the day before surgery, fine adjustments required about 10 extra minutes during the operation schedule, which fitted into the interval between patients and thus did not prolong operation times. As all relevant features of the operative field were shown on one large screen, four major effects were obtained: A) Stereoscopy facilitated orientation for the junior surgeon as well as for medical students. B) The stereoscopic image served as an unequivocal guide for the senior surgeon to demonstrate the next surgical steps to the junior colleague. C) The theatre nurse shared the same image, anticipating the next instruments needed. D) Medical students instantly shared the information given by all staff and the image, avoiding the need for an extra teaching session.
Conclusion: High definition stereoscopy bears the potential to compress the learning curve for undergraduate as well as postgraduate medical professionals in minimally invasive surgery. Further studies will focus on the long term effect for operative training as well as on post-processing of HD stereoscopy video content for off-line interactive medical education.
Evaluating methods for controlling depth perception in stereoscopic cinematography
NASA Astrophysics Data System (ADS)
Sun, Geng; Holliman, Nick
2009-02-01
Existing stereoscopic imaging algorithms can create static stereoscopic images with perceived depth control function to ensure a compelling 3D viewing experience without visual discomfort. However, current algorithms do not normally support standard Cinematic Storytelling techniques. These techniques, such as object movement, camera motion, and zooming, can result in dynamic scene depth change within and between a series of frames (shots) in stereoscopic cinematography. In this study, we empirically evaluate the following three types of stereoscopic imaging approaches that aim to address this problem. (1) Real-Eye Configuration: set camera separation equal to the nominal human eye interpupillary distance. The perceived depth on the display is identical to the scene depth without any distortion. (2) Mapping Algorithm: map the scene depth to a predefined range on the display to avoid excessive perceived depth. A new method that dynamically adjusts the depth mapping from scene space to display space is presented in addition to an existing fixed depth mapping method. (3) Depth of Field Simulation: apply Depth of Field (DOF) blur effect to stereoscopic images. Only objects that are inside the DOF are viewed in full sharpness. Objects that are far away from the focus plane are blurred. We performed a human-based trial using the ITU-R BT.500-11 Recommendation to compare the depth quality of stereoscopic video sequences generated by the above-mentioned imaging methods. Our results indicate that viewers' practical 3D viewing volumes are different for individual stereoscopic displays and viewers can cope with much larger perceived depth range in viewing stereoscopic cinematography in comparison to static stereoscopic images. Our new dynamic depth mapping method does have an advantage over the fixed depth mapping method in controlling stereo depth perception. 
The DOF blur effect does not provide the expected improvement in perceived depth quality control for 3D cinematography. We anticipate that the results will be of particular interest to 3D filmmaking and real-time computer games.
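The fixed depth-mapping approach in (2) amounts to a linear remap of scene depth into a bounded display-disparity budget, while the dynamic variant recomputes the scene range per shot. A minimal sketch of the fixed mapping, with illustrative parameter values (the paper does not give this exact formula):

```python
def map_disparity(z, scene_near, scene_far, disp_near, disp_far):
    """Linearly map scene depth z in [scene_near, scene_far] to a screen
    disparity in [disp_near, disp_far], clamping out-of-range depths so
    perceived depth never exceeds the chosen comfort budget."""
    z = max(scene_near, min(scene_far, z))
    t = (z - scene_near) / (scene_far - scene_near)
    return disp_near + t * (disp_far - disp_near)

# Fixed mapping: scene depths of 1..50 m into a +-20 px disparity budget.
print(map_disparity(1.0, 1.0, 50.0, -20.0, 20.0))   # -20.0
print(map_disparity(50.0, 1.0, 50.0, -20.0, 20.0))  # 20.0
```

A dynamic mapper would update scene_near and scene_far as objects and the camera move, keeping the on-screen disparity budget constant across shots.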
Stereo depth and the control of locomotive heading
NASA Astrophysics Data System (ADS)
Rushton, Simon K.; Harris, Julie M.
1998-04-01
Does the addition of stereoscopic depth aid steering, the perceptual control of locomotor heading, around an environment? This is a critical question when designing a tele-operation or virtual environment system, with implications for computational resources and visual comfort. We examined the role of stereoscopic depth in the perceptual control of heading by employing an active steering task. Three conditions were tested: stereoscopic depth, incorrect stereoscopic depth, and no stereoscopic depth. Results suggest that stereoscopic depth does not improve performance in a visual control task. A further set of experiments examined the importance of a ground plane. As a ground plane is a common feature of all natural environments and provides a pictorial depth cue, it has been suggested that the visual system may be especially attuned to exploit its presence; thus it would be predicted that a ground plane would aid judgments of locomotor heading. Results suggest that the presence of rich motion information in the lower visual field produces significant performance advantages, and that provision of such information may prove a better target for system resources than stereoscopic depth. These findings have practical consequences for system designers and also challenge previous theoretical and psychophysical perceptual research.
Brief history of electronic stereoscopic displays
NASA Astrophysics Data System (ADS)
Lipton, Lenny
2012-02-01
A brief history of recent developments in electronic stereoscopic displays is given, concentrating on products that have succeeded in the marketplace and hence have had a significant influence on subsequent implementations. The focus is on plano-stereoscopic (two-view) technology because it is now the dominant display modality in the marketplace. Stereoscopic displays were created for the motion picture industry a century ago, and this technology influenced the development of products for science and industry, which in turn influenced product development for entertainment.
Papenmeier, Frank; Schwan, Stephan
2016-02-01
Viewing objects on stereoscopic displays provides additional depth cues through binocular disparity, supporting object recognition. Until now, it was unknown whether this results from the representation of specific stereoscopic information in memory or from a more general representation of an object's depth structure. Therefore, we investigated whether continuous object rotation, acting as a depth cue during encoding, results in a memory representation that can subsequently be accessed by stereoscopic information during retrieval. In Experiment 1, we found such transfer effects from continuous object rotation during encoding to stereoscopic presentations during retrieval. In Experiments 2a and 2b, we found that the continuity of object rotation is important, because only continuous rotation and/or stereoscopic depth, but not multiple static snapshots presented without stereoscopic information, caused the extraction of an object's depth structure into memory. We conclude that an object's depth structure, and not specific depth cues, is represented in memory.
NASA Astrophysics Data System (ADS)
Price, Aaron; Lee, Hee-Sun
2010-02-01
We investigated whether and how student performance on three types of spatial cognition tasks differs when working with two-dimensional versus stereoscopic representations. We recruited nineteen middle school students visiting a planetarium in a large Midwestern American city and analyzed their performance on a series of spatial cognition tasks in terms of response accuracy and task completion time. Results show that response accuracy did not differ between the two types of representations, while task completion time was significantly greater with the stereoscopic representations. The completion time increased as the number of mental manipulations of 3D objects required by the tasks increased. Post-interviews provide evidence that some students continued to think of stereoscopic representations as two-dimensional. Based on cognitive load and cue theories, we interpret that, in the absence of pictorial depth cues, students may need more time to become familiar with stereoscopic representations for optimal performance. In light of these results, we discuss potential uses of stereoscopic representations for science learning.
Two Eyes, 3D: Stereoscopic Design Principles
NASA Astrophysics Data System (ADS)
Price, Aaron; Subbarao, M.; Wyatt, R.
2013-01-01
Two Eyes, 3D is an NSF-funded research project about how people perceive highly spatial objects when shown 2D or stereoscopic ("3D") representations. As part of the project, we produced a short film about SN 2011fe. The high-definition film has been rendered in both 2D and stereoscopic formats. It was developed according to a set of stereoscopic design principles we derived from the literature and from past experience producing and studying stereoscopic films. Study participants take a pre- and post-test that involves a spatial cognition assessment and scientific knowledge questions about Type Ia supernovae. For the evaluation, participants use iPads so that we can record spatial manipulation of the device and look for elements of embodied cognition. We will present early results and also describe the stereoscopic design principles and the rationale behind them. All of our content and software is available under open source licenses. More information is at www.twoeyes3d.org.
Distinguishing Clouds from Ice over the East Siberian Sea, Russia
NASA Technical Reports Server (NTRS)
2002-01-01
As a consequence of its capability to retrieve cloud-top elevations, stereoscopic observations from the Multi-angle Imaging SpectroRadiometer (MISR) can discriminate clouds from snow and ice. The central portion of Russia's East Siberian Sea, including one of the New Siberian Islands, Novaya Sibir, is portrayed in these views from data acquired on May 28, 2002. The left-hand image is a natural color view from MISR's nadir camera. On the right is a height field retrieved using automated computer processing of data from multiple MISR cameras. Although both clouds and ice appear white in the natural color view, the stereoscopic retrievals are able to identify elevated clouds based on the geometric parallax that results when they are observed from different angles. Owing to their elevation above sea level, clouds are mapped as green and yellow areas, whereas land, sea ice, and very low clouds appear blue and purple. Purple, in particular, denotes elevations very close to sea level. The island of Novaya Sibir is located in the lower left of the images. It can be identified in the natural color view as the dark area surrounded by an expanse of fast ice. In the stereo map the island appears as a blue region, indicating its elevation of less than 100 meters above sea level. Areas where the automated stereo processing failed due to lack of sufficient spatial contrast are shown in dark gray. The northern edge of the Siberian mainland can be found at the very bottom of the panels, a little over 250 kilometers south of Novaya Sibir. Pack ice containing numerous fragmented ice floes surrounds the fast ice, and narrow areas of open ocean are visible. The East Siberian Sea is part of the Arctic Ocean and is ice-covered most of the year. The New Siberian Islands are almost always covered by snow and ice, and tundra vegetation is very scant.
Despite continuous sunlight from the end of April until the middle of August, the ice between the island and the mainland typically remains until August or September. The Multi-angle Imaging SpectroRadiometer views almost the entire Earth every 9 days. These images were acquired during Terra orbit 12986 and cover an area of about 380 kilometers x 1117 kilometers. They utilize data from blocks 24 to 32 within World Reference System-2 path 117. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
Kastberger, Gerald; Maurer, Michael; Weihmann, Frank; Ruether, Matthias; Hoetzl, Thomas; Kranner, Ilse; Bischof, Horst
2011-02-08
The detailed interpretation of mass phenomena such as human escape panic or swarm behaviour in birds, fish and insects requires detailed analysis of the 3D movements of individual participants. Here, we describe the adaptation of a 3D stereoscopic imaging method to measure the positional coordinates of individual agents in densely packed clusters. The method was applied to study behavioural aspects of shimmering in Giant honeybees, a collective defence behaviour that deters predatory wasps by visual cues, whereby individual bees flip their abdomen upwards in a split second, producing Mexican wave-like patterns. Stereoscopic imaging provided non-invasive, automated, simultaneous, in-situ 3D measurements of hundreds of bees on the nest surface regarding their thoracic position and orientation of the body length axis. Segmentation was the basis for the stereo matching, which defined correspondences of individual bees in pairs of stereo images. Stereo-matched "agent bees" were re-identified in subsequent frames by the tracking procedure and triangulated into real-world coordinates. These algorithms were required to calculate the three spatial motion components (dx: horizontal, dy: vertical and dz: towards and from the comb) of individual bees over time. The method enables the assessment of the 3D positions of individual Giant honeybees, which is not possible with single-view cameras. The method can be applied to distinguish at the individual bee level active movements of the thoraces produced by abdominal flipping from passive motions generated by the moving bee curtain. The data provide evidence that the z-deflections of thoraces are potential cues for colony-intrinsic communication. The method helps to understand the phenomenon of collective decision-making through mechanoceptive synchronization and to associate shimmering with the principles of wave propagation. 
With further, minor modifications, the method could be used to study aspects of other mass phenomena that involve active and passive movements of individual agents in densely packed clusters.
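The triangulation step described above, which turns a stereo-matched pair into real-world coordinates, can be sketched for an idealized rectified rig. This is a hedged sketch, not the study's calibrated pipeline; the focal length and baseline are illustrative values.

```python
# Hedged sketch of stereo triangulation for a rectified rig: a matched
# point (xl, y) in the left image and (xr, y) in the right image yields a
# 3D position via similar triangles. f (pixels) and baseline (metres) are
# assumed example values, not the paper's calibration.

def triangulate(xl, xr, y, f=1200.0, baseline=0.2):
    """Return (X, Y, Z) in metres for a correspondence (xl, y) <-> (xr, y)."""
    d = xl - xr                # disparity in pixels; must be positive
    if d <= 0:
        raise ValueError("non-positive disparity: no valid intersection")
    Z = f * baseline / d       # depth from similar triangles
    X = xl * Z / f             # back-project the left-image coordinate
    Y = y * Z / f
    return X, Y, Z
```

Tracking the same bee across frames and differencing successive (X, Y, Z) positions would then give the three motion components (dx, dy, dz) analyzed in the study.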
Clinically Normal Stereopsis Does Not Ensure Performance Benefit from Stereoscopic 3D Depth Cues
2014-10-28
Keywords: Stereopsis, Binocular Vision, Optometry, Depth Perception, 3D Vision, 3D Human Factors, Stereoscopic Displays, S3D, Virtual Environment. Distribution A: Approved for public release.
Designing stereoscopic information visualization for 3D-TV: What can we learn from S3D gaming?
NASA Astrophysics Data System (ADS)
Schild, Jonas; Masuch, Maic
2012-03-01
This paper explores the graphical design and spatial alignment of visual information and graphical elements within stereoscopically filmed content, e.g., captions, subtitles, and especially more complex elements in 3D-TV productions. The method used is a descriptive analysis of existing computer and video games that have been adapted for stereoscopic display using semi-automatic rendering techniques (e.g., Nvidia 3D Vision) or that have been specifically designed for stereoscopic vision. Digital games often feature compelling visual interfaces that combine high usability with creative visual design. We explore selected examples of game interfaces in stereoscopic vision regarding their stereoscopic characteristics, how they draw attention, how we judge effect and comfort, and where the interfaces fail. As a result, we propose a list of five aspects that should be considered when designing stereoscopic visual information: explicit information, implicit information, spatial reference, drawing attention, and vertical alignment. We discuss possible consequences, opportunities, and challenges for integrating visual information elements into 3D-TV content. This work should further help to improve current editing systems and identifies a need for future editing systems for 3D-TV, e.g., live editing and real-time alignment of visual information within 3D footage.
Brooks, Kevin R
2017-01-01
The history of the expression of three-dimensional structure in art can be traced from the use of occlusion in Palaeolithic cave paintings, through the use of shadow in classical art, to the development of perspective during the Renaissance. However, the history of the use of stereoscopic techniques is controversial. Although the first undisputed stereoscopic images were presented by Wheatstone in 1838, it has been claimed that two sketches by Jacopo Chimenti da Empoli (c. 1600) can be fused to yield an impression of stereoscopic depth, while others suggest that Leonardo da Vinci's Mona Lisa is the world's first stereogram. Here, we report the first quantitative study of perceived depth in these works, in addition to more recent works by Salvador Dalí. To control for the contribution of monocular depth cues, ratings of the magnitude and coherence of depth were recorded for both stereoscopic and pseudoscopic presentations, with a genuine contribution of stereoscopic cues revealed by a difference between these scores. Although effects were clear for Wheatstone's and Dalí's images, no such effects could be found for works produced earlier. As such, we have no evidence to reject the conventional view that the first producer of stereoscopic imagery was Sir Charles Wheatstone.
Interactive floating windows: a new technique for stereoscopic video games
NASA Astrophysics Data System (ADS)
Zerebecki, Chris; Stanfield, Brodie; Tawadrous, Mina; Buckstein, Daniel; Hogue, Andrew; Kapralos, Bill
2012-03-01
The film industry has a long history of creating compelling experiences in stereoscopic 3D. Recently, the video game as an artistic medium has matured into an effective way to tell engaging and immersive stories. Given the current push to bring stereoscopic 3D technology into the consumer market, there is considerable interest in developing stereoscopic 3D video games. Game developers have largely ignored the need to design their games specifically for stereoscopic 3D and have instead relied on automatic conversion and driver technology. Game developers need to evaluate solutions used in other media, such as film, to correct perceptual problems such as window violations, and to modify or create new solutions that work within an interactive framework. In this paper we extend the dynamic floating window technique into the interactive domain, enabling the player to position a virtual window in space. By interactively changing the position, size, and 3D rotation of the virtual window, objects can be made to 'break the mask', dramatically enhancing the stereoscopic effect. By demonstrating that solutions from the film industry can be extended into the interactive space, we hope to initiate further discussion in the game development community to strengthen story-telling mechanisms in stereoscopic 3D games.
Redundancy of stereoscopic images: Experimental evaluation
NASA Astrophysics Data System (ADS)
Yaroslavsky, L. P.; Campos, J.; Espínola, M.; Ideses, I.
2005-12-01
With the recent advancement of visualization devices, we are seeing a growing market for stereoscopic content. In order to convey 3D content by means of stereoscopic displays, one needs to transmit and display at least two points of view of the video content. This has profound implications for the resources required to transmit the content, as well as demands on the complexity of the visualization system. It is known that stereoscopic images are redundant, which may prove useful for compression and may have a positive effect on the construction of the visualization device. In this paper we describe an experimental evaluation of data redundancy in color stereoscopic images. In experiments with computer-generated and real-life test stereo images, several observers visually tested the stereopsis threshold and the accuracy of parallax measurement in anaglyphs and stereograms as functions of the degree of blur of one of the two stereo images. In addition, we tested the color saturation threshold in one of the two stereo images for which full color 3D perception with no visible color degradation was maintained. The experiments support a theoretical estimate that, to the data required to reproduce one of the two stereoscopic images, one has to add only a few percent of that amount of data in order to achieve stereoscopic perception.
Stereoscopic Integrated Imaging Goggles for Multimodal Intraoperative Image Guidance
Mela, Christopher A.; Patterson, Carrie; Thompson, William K.; Papay, Francis; Liu, Yang
2015-01-01
We have developed novel stereoscopic wearable multimodal intraoperative imaging and display systems, entitled Integrated Imaging Goggles, for guiding surgeries. The prototype systems offer real-time stereoscopic fluorescence imaging and color reflectance imaging, along with in vivo handheld microscopy and ultrasound imaging. With the Integrated Imaging Goggles, both wide-field fluorescence imaging and in vivo microscopy are provided. Real-time ultrasound images can also be presented in the goggle display. Furthermore, real-time goggle-to-goggle stereoscopic video sharing is demonstrated, which can greatly facilitate telemedicine. In this paper, the prototype systems are described, characterized, and tested in surgeries in biological tissues ex vivo. We have found that the system can detect fluorescent targets with indocyanine green concentrations as low as 60 nM and can resolve structures down to 0.25 mm with large-FOV stereoscopic imaging. The system has successfully guided simulated cancer surgeries in chicken. The Integrated Imaging Goggle is novel in four aspects: it is (a) the first wearable stereoscopic wide-field intraoperative fluorescence imaging and display system, (b) the first wearable system offering both large-FOV and microscopic imaging simultaneously, (c) the first wearable system that offers both ultrasound and fluorescence imaging capacities, and (d) the first demonstration of goggle-to-goggle communication to share stereoscopic views for medical guidance. PMID:26529249
Dan, Alex; Reiner, Miriam
2017-12-01
Interacting with 2D displays, such as computer screens, smartphones, and TVs, is currently part of our daily routine; however, our visual system is built for processing 3D worlds. We examined the cognitive load associated with a simple and a complex task of learning paper-folding (origami) by observing 2D or stereoscopic 3D displays. While connected to an electroencephalogram (EEG) system, participants watched a 2D video of an instructor demonstrating the paper-folding tasks, followed by a stereoscopic 3D projection of the same instructor (a digital avatar) illustrating identical tasks. We recorded the power of alpha and theta oscillations and calculated the cognitive load index (CLI) as the ratio of the average power of frontal theta (Fz) to parietal alpha (Pz). The results showed a significantly higher cognitive load index associated with processing the 2D projection as compared to the 3D projection; additionally, changes in the average theta Fz power were larger for the 2D conditions than for the 3D conditions, while average alpha Pz power values were similar for the 2D and 3D conditions in the less complex task and higher in the 3D condition for the more complex task. The cognitive load index was lower for the easier task and higher for the more complex task in both 2D and 3D. In addition, participants with lower spatial abilities benefited more from the 3D than from the 2D display. These findings have implications for understanding the cognitive processing associated with 2D and 3D worlds and for employing stereoscopic 3D technology over 2D displays in designing emerging virtual and augmented reality applications.
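The cognitive load index defined above reduces to a simple ratio of band powers. The sketch below illustrates that computation; the sample power values are invented, and in practice the inputs would be theta- and alpha-band powers extracted from EEG recordings at Fz and Pz.

```python
# Minimal sketch of the cognitive load index (CLI) described above:
# the ratio of average frontal theta power (Fz) to average parietal
# alpha power (Pz). The sample values below are made up.

def cognitive_load_index(theta_fz, alpha_pz):
    """CLI = mean(theta power at Fz) / mean(alpha power at Pz).

    A higher ratio is interpreted as higher cognitive load.
    """
    mean_theta = sum(theta_fz) / len(theta_fz)
    mean_alpha = sum(alpha_pz) / len(alpha_pz)
    return mean_theta / mean_alpha

print(cognitive_load_index([4.0, 6.0], [2.0, 2.0]))  # 2.5
```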
Interlopers 3D: experiences designing a stereoscopic game
NASA Astrophysics Data System (ADS)
Weaver, James; Holliman, Nicolas S.
2014-03-01
Background: In recent years 3D-enabled televisions, VR headsets and computer displays have become more readily available in the home. This presents an opportunity for game designers to explore new stereoscopic game mechanics and techniques that have previously been unavailable in monocular gaming. Aims: To investigate the visual cues that are present in binocular and monocular vision, identifying which are relevant when gaming using a stereoscopic display; and to implement a game whose mechanics are so reliant on binocular cues that the game becomes impossible, or at least very difficult, to play in non-stereoscopic mode. Method: A stereoscopic 3D game was developed whose objective was to shoot down advancing enemies (the Interlopers) before they reached their destination. Scoring highly required players to make accurate depth judgments and target the closest enemies first. A group of twenty participants played both a basic and an advanced version of the game in both monoscopic 2D and stereoscopic 3D. Results: The results show that in both the basic and advanced games, participants achieved higher scores when playing in stereoscopic 3D. The advanced game showed that disrupting the depth-from-motion cue made the game more difficult in monoscopic 2D. Results also show a certain amount of learning taking place, meaning that players were able to score higher and finish the game faster as the experiment progressed. Conclusions: Although the game was not impossible to play in monoscopic 2D, participants' results show that doing so put them at a significant disadvantage compared to playing in stereoscopic 3D.
Using mental rotation to evaluate the benefits of stereoscopic displays
NASA Astrophysics Data System (ADS)
Aitsiselmi, Y.; Holliman, N. S.
2009-02-01
Context: The idea behind stereoscopic displays is to create the illusion of depth, and this concept has many practical applications. A common spatial ability test involves mental rotation; therefore, a mental rotation task should be easier if undertaken on a stereoscopic screen. Aim: The aim of this project was to evaluate stereoscopic displays (3D screens) and to assess whether they are better for performing a certain task than a 2D display. A secondary aim was to perform a similar study replicating the conditions of a stereoscopic mobile phone screen. Method: We devised a spatial ability test involving a mental rotation task that participants were asked to complete on either a 3D or a 2D screen. We also designed a similar task to simulate the experience on a stereoscopic cell phone. The participants' error rates and response times were recorded. Using statistical analysis, we then compared the error rates and response times of the groups to see if there were any significant differences. Results: We found that participants achieved better scores when doing the task on a stereoscopic screen as opposed to a 2D screen. However, there was no statistically significant difference in the time it took them to complete the task. We found similar results for the 3D cell phone display condition. Conclusions: The results show that the extra depth information given by a stereoscopic display makes it easier to mentally rotate a shape, as depth cues are readily available. These results could have useful implications for certain industries.
Visual Attention Modeling for Stereoscopic Video: A Benchmark and Computational Model.
Fang, Yuming; Zhang, Chi; Li, Jing; Lei, Jianjun; Perreira Da Silva, Matthieu; Le Callet, Patrick
2017-10-01
In this paper, we investigate the visual attention modeling for stereoscopic video from the following two aspects. First, we build one large-scale eye tracking database as the benchmark of visual attention modeling for stereoscopic video. The database includes 47 video sequences and their corresponding eye fixation data. Second, we propose a novel computational model of visual attention for stereoscopic video based on Gestalt theory. In the proposed model, we extract the low-level features, including luminance, color, texture, and depth, from discrete cosine transform coefficients, which are used to calculate feature contrast for the spatial saliency computation. The temporal saliency is calculated by the motion contrast from the planar and depth motion features in the stereoscopic video sequences. The final saliency is estimated by fusing the spatial and temporal saliency with uncertainty weighting, which is estimated by the laws of proximity, continuity, and common fate in Gestalt theory. Experimental results show that the proposed method outperforms the state-of-the-art stereoscopic video saliency detection models on our built large-scale eye tracking database and one other database (DML-ITRACK-3D).
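The final fusion step described above, blending spatial and temporal saliency with uncertainty weighting, can be sketched in simplified form. This is an assumed illustration, not the authors' model: the maps are flattened to lists, and the weights are taken as inverse uncertainties rather than being derived from the Gestalt laws.

```python
# Illustrative (assumed, simplified) uncertainty-weighted fusion of a
# spatial and a temporal saliency map: each map is weighted inversely to
# its uncertainty, and the weights are normalized before blending.

def fuse_saliency(spatial, temporal, u_spatial, u_temporal):
    """Blend two saliency maps (flat lists) pixel-wise.

    u_spatial / u_temporal are per-map uncertainty estimates; a less
    certain map contributes less to the final saliency.
    """
    w_s = 1.0 / u_spatial
    w_t = 1.0 / u_temporal
    total = w_s + w_t
    w_s, w_t = w_s / total, w_t / total      # normalize to sum to 1
    return [w_s * s + w_t * t for s, t in zip(spatial, temporal)]

# Equal uncertainty gives a plain average of the two maps.
print(fuse_saliency([1.0, 0.0], [0.0, 1.0], u_spatial=1.0, u_temporal=1.0))
```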
Predicting individual fusional range from optometric data
NASA Astrophysics Data System (ADS)
Endrikhovski, Serguei; Jin, Elaine; Miller, Michael E.; Ford, Robert W.
2005-03-01
A model was developed to predict the range of disparities that can be fused by an individual user from optometric measurements. This model uses parameters such as dissociated phoria and fusional reserves to calculate an individual user's fusional range (i.e., the disparities that can be fused on stereoscopic displays) when the user views a stereoscopic stimulus from various distances. The model is validated by comparing its output with data from a study in which the individual fusional range of a group of users was quantified while they viewed a stereoscopic display from distances of 0.5, 1.0, and 2.0 meters. Overall, the model provides good predictions for the majority of the subjects and can be generalized to other viewing conditions. The model may, therefore, be used within a customized stereoscopic system, which would render stereoscopic information in a way that accounts for individual differences in fusional range. Because the comfort of an individual user also depends on the user's ability to fuse stereo images, such a system may, consequently, improve the comfort level and viewing experience for people with different stereoscopic fusional capabilities.
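To give a flavor of how fusional reserves feed such a prediction, the sketch below applies Percival's zone-of-comfort heuristic, a standard optometric rule of thumb, to the kind of inputs named above. This is not the paper's model; the rule, function names, and example values are all substitutions for illustration.

```python
# Hedged sketch, NOT the paper's model: estimate a comfortable vergence
# demand range from base-in and base-out fusional reserves using
# Percival's zone-of-comfort rule (comfort spans the middle third of the
# total fusional range). Inputs are in prism dioptres.

def comfort_zone(base_in_reserve, base_out_reserve):
    """Return (lo, hi) comfortable vergence demand in prism dioptres.

    The full fusional range runs from -base_in_reserve to
    +base_out_reserve; Percival's rule keeps demand in its middle third.
    """
    total = base_in_reserve + base_out_reserve
    lo = -base_in_reserve + total / 3.0
    hi = lo + total / 3.0
    return lo, hi

# Example: reserves of 9 (base-in) and 21 (base-out) prism dioptres.
print(comfort_zone(9.0, 21.0))  # (1.0, 11.0)
```

A display-side system could then map each viewer's zone into an on-screen disparity budget, which is the customization the abstract envisions.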
Instruments and Methodologies for the Underwater Tridimensional Digitization and Data Musealization
NASA Astrophysics Data System (ADS)
Repola, L.; Memmolo, R.; Signoretti, D.
2015-04-01
In research begun within the SINAPSIS project of the Università degli Studi Suor Orsola Benincasa, an underwater stereoscopic scanning system has been developed, aimed at surveying submerged archaeological sites and integrable with standard systems for geomorphological detection of the coast. The project involves the construction of hardware consisting of an aluminum frame supporting a pair of GoPro Hero Black Edition cameras, and software for the production of point clouds and the initial processing of data. The software has features for stereoscopic vision system calibration, for reduction of the noise and distortion of underwater captured images, for searching for corresponding points of stereoscopic images using stereo-matching algorithms (dense and sparse), and for point cloud generation and filtering. Only after various calibration and survey tests, carried out during the excavations envisaged in the project, was mastery of the methods for efficient data acquisition achieved. The current development of the system has allowed the generation of portions of digital models of real submerged scenes. A semi-automatic procedure for global registration of the partial models is under development as a useful aid for the study and musealization of sites.
Visual discomfort in stereoscopic displays: a review
NASA Astrophysics Data System (ADS)
Lambooij, Marc T. M.; IJsselsteijn, Wijnand A.; Heynderickx, Ingrid
2007-02-01
Visual discomfort has been the subject of considerable research in relation to stereoscopic and autostereoscopic displays, but it remains an ambiguous concept used to denote a variety of subjective symptoms potentially related to different underlying processes. In this paper we clarify the importance of various causes and aspects of visual comfort. Classical causative factors such as excessive binocular parallax and accommodation-convergence conflict appear to be of minor importance when disparity values do not surpass a limit of one degree of visual angle, which still provides sufficient range to allow for satisfactory depth perception in consumer applications such as stereoscopic television. Visual discomfort, however, may still occur within this limit, and we believe the following factors to be the most pertinent contributors: (1) excessive demand on the accommodation-convergence linkage, e.g., by fast motion in depth viewed at short distances; (2) 3D artefacts resulting from insufficient depth information in the incoming data signal, yielding spatial and temporal inconsistencies; and (3) unnatural amounts of blur. In order to adequately characterize and understand visual discomfort, multiple types of measurements, both objective and subjective, are needed.
Figure and Ground in the Visual Cortex: V2 Combines Stereoscopic Cues with Gestalt Rules
Qiu, Fangtu T.; von der Heydt, Rüdiger
2006-01-01
Figure-ground organization is a process by which the visual system identifies some image regions as foreground and others as background, inferring three-dimensional (3D) layout from 2D displays. A recent study reported that edge responses of neurons in area V2 are selective for side-of-figure, suggesting that figure-ground organization is encoded in the contour signals (border-ownership coding). Here we show that area V2 combines two strategies of computation, one that exploits binocular stereoscopic information for the definition of local depth order, and another that exploits the global configuration of contours (gestalt factors). These are combined in single neurons so that the ‘near’ side of the preferred 3D edge generally coincides with the preferred side-of-figure in 2D displays. Thus, area V2 represents the borders of 2D figures as edges of surfaces, as if the figures were objects in 3D space. Even in 3D displays gestalt factors influence the responses and can enhance or null the stereoscopic depth information. PMID:15996555
NASA Astrophysics Data System (ADS)
Hanhart, Philippe; Ebrahimi, Touradj
2014-03-01
Crosstalk and vergence-accommodation rivalry negatively impact the quality of experience (QoE) provided by stereoscopic displays. However, exploiting visual attention and adapting the 3D rendering process on the fly can reduce these drawbacks. In this paper, we propose and evaluate two different approaches that exploit visual attention to improve 3D QoE on stereoscopic displays: an offline system, which uses a saliency map to predict gaze position, and an online system, which uses a remote eye-tracking system to measure gaze position in real time. The gaze points were used in conjunction with the disparity map to extract the disparity of the object-of-interest. Horizontal image translation was performed to bring the fixated object onto the screen plane. User preference between the standard 3D mode and the two proposed systems was evaluated through a subjective evaluation. Results show that exploiting visual attention significantly improves image quality and visual comfort, with a slight advantage for real-time gaze determination. Depth quality is also improved, but the difference is not significant.
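The horizontal image translation step described above can be sketched as follows. The details are assumptions, not the authors' code: in particular, splitting the correction evenly between the two views is one common convention.

```python
# Sketch of horizontal image translation (assumed details, not the authors'
# code): given the disparity of the gazed-at object, shift the two views in
# opposite directions so that its disparity becomes zero, placing the object
# on the screen plane. Rows are plain lists of pixel values.

def shift_row(row, shift, fill=0):
    """Translate one image row horizontally by an integer pixel shift."""
    out = [fill] * len(row)
    for i, v in enumerate(row):
        j = i + shift
        if 0 <= j < len(row):
            out[j] = v
    return out

def rebalance_views(left, right, gaze_disparity):
    """Split the correction evenly: shift left by -d/2 and right by +d/2 pixels."""
    half = int(round(gaze_disparity / 2.0))
    return ([shift_row(r, -half) for r in left],
            [shift_row(r, +half) for r in right])
```

After the shift, the object-of-interest carries zero disparity, so vergence and accommodation agree at the screen plane for that object.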
NASA Astrophysics Data System (ADS)
Garcia Fernandez, J.; Tammi, K.; Joutsiniemi, A.
2017-02-01
Recent advances in Terrestrial Laser Scanner (TLS) technology, in terms of cost and flexibility, have consolidated it as an essential tool for the documentation and digitalization of Cultural Heritage. However, once the TLS data has been used, it basically remains stored and left to waste. How can highly accurate and dense point clouds (of the built heritage) be processed for reuse, especially to engage a broader audience? This paper aims to answer this question through a channel that minimizes the need for expert knowledge while enhancing interactivity with the as-built digital data: Virtual Heritage dissemination through the production of VR content. Driven by the ProDigiOUs project's guidelines on data dissemination (EU funded), this paper advances a production path to transform the point cloud into virtual stereoscopic spherical images, taking into account the different visual features that produce depth perception, and especially those prompting visual fatigue while experiencing the VR content. Finally, we present the results of the Hiedanranta scans transformed into stereoscopic spherical animations.
Figure and ground in the visual cortex: v2 combines stereoscopic cues with gestalt rules.
Qiu, Fangtu T; von der Heydt, Rüdiger
2005-07-07
Figure-ground organization is a process by which the visual system identifies some image regions as foreground and others as background, inferring 3D layout from 2D displays. A recent study reported that edge responses of neurons in area V2 are selective for side-of-figure, suggesting that figure-ground organization is encoded in the contour signals (border ownership coding). Here, we show that area V2 combines two strategies of computation, one that exploits binocular stereoscopic information for the definition of local depth order, and another that exploits the global configuration of contours (Gestalt factors). These are combined in single neurons so that the "near" side of the preferred 3D edge generally coincides with the preferred side-of-figure in 2D displays. Thus, area V2 represents the borders of 2D figures as edges of surfaces, as if the figures were objects in 3D space. Even in 3D displays, Gestalt factors influence the responses and can enhance or null the stereoscopic depth information.
Liu, Xiujuan; Tao, Haiquan; Xiao, Xigang; Guo, Binbin; Xu, Shangcai; Sun, Na; Li, Maotong; Xie, Li; Wu, Changjun
2018-07-01
This study aimed to compare the diagnostic performance of the stereoscopic virtual reality display system with the conventional computed tomography (CT) workstation and three-dimensional rotational angiography (3DRA) for intracranial aneurysm detection and characterization, with a focus on small aneurysms and those near the bone. First, 42 patients with suspected intracranial aneurysms underwent both 256-row CT angiography (CTA) and 3DRA. Volume rendering (VR) images were captured using the conventional CT workstation. Next, VR images were transferred to the stereoscopic virtual reality display system. Two radiologists independently assessed the results that were obtained using the conventional CT workstation and the stereoscopic virtual reality display system. The 3DRA results were considered the ultimate reference standard. Based on 3DRA images, 38 aneurysms were confirmed in 42 patients. Two cases were misdiagnosed and 1 was missed when the traditional CT workstation was used. The sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy of the conventional CT workstation were 94.7%, 85.7%, 97.3%, 75%, and 99.3%, respectively, on a per-aneurysm basis. The stereoscopic virtual reality display system missed a case. The sensitivity, specificity, PPV, NPV, and accuracy of the stereoscopic virtual reality display system were 100%, 85.7%, 97.4%, 100%, and 97.8%, respectively. No difference was observed in the accuracy of the traditional CT workstation, stereoscopic virtual reality display system, and 3DRA in detecting aneurysms. The stereoscopic virtual reality display system has some advantages in detecting small aneurysms and those near the bone. The virtual reality stereoscopic vision obtained through the system was found to be a useful tool in intracranial aneurysm diagnosis and pre-operative 3D imaging. Copyright © 2018 Elsevier B.V. All rights reserved.
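The per-aneurysm figures above are standard confusion-matrix statistics; a minimal sketch of how they are computed:

```python
# Minimal sketch of the per-aneurysm statistics reported above, computed from a
# confusion matrix. Any counts passed in are hypothetical examples, not the
# study's raw data.

def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV, NPV, and accuracy from confusion counts."""
    return {
        "sensitivity": tp / (tp + fn),                 # true positive rate
        "specificity": tn / (tn + fp),                 # true negative rate
        "ppv": tp / (tp + fp),                         # positive predictive value
        "npv": tn / (tn + fn),                         # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }
```

For example, 36 true positives out of 38 confirmed aneurysms (2 false negatives) reproduce the reported 94.7% sensitivity.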
Depth Perception In Remote Stereoscopic Viewing Systems
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Von Sydow, Marika
1989-01-01
Report describes theoretical and experimental studies of perception of depth by human operators through stereoscopic video systems. Purpose of such studies is to optimize dual-camera configurations used to view workspaces of remote manipulators at distances of 1 to 3 m from cameras. According to analysis, static stereoscopic depth distortion is decreased, without decreasing stereoscopic depth resolution, by increasing camera-to-object and intercamera distances and camera focal length. Analysis further predicts that dynamic stereoscopic depth distortion is reduced by rotating cameras around center of circle passing through point of convergence of viewing axes and first nodal points of the two camera lenses.
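The geometric trend the report analyzes can be illustrated with the textbook first-order triangulation-error estimate (a standard approximation, not the report's own model): depth resolution degrades with the square of distance and improves with baseline and focal length.

```python
# Textbook first-order estimate (not the report's own model) of stereoscopic
# depth resolution: the depth increment resolvable from a one-unit disparity
# change grows with the square of distance and shrinks with the intercamera
# baseline and the focal length.

def depth_resolution(z_m, baseline_m, focal_px, disparity_step_px=1.0):
    """Approximate depth error: dZ ~= Z^2 * dd / (f * B)."""
    return (z_m ** 2) * disparity_step_px / (focal_px * baseline_m)
```

Doubling the intercamera distance halves the depth error at a given range, consistent with the configuration recommendations summarized above.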
Polarizing aperture stereoscopic cinema camera
NASA Astrophysics Data System (ADS)
Lipton, Lenny
2012-03-01
The art of stereoscopic cinematography has been held back because of the lack of a convenient way to reduce the stereo camera lenses' interaxial to less than the distance between the eyes. This article describes a unified stereoscopic camera and lens design that allows for varying the interaxial separation to small values using a unique electro-optical polarizing aperture design for imaging left and right perspective views onto a large single digital sensor (the size of the standard 35mm frame) with the means to select left and right image information. Even with the added stereoscopic capability the appearance of existing camera bodies will be unaltered.
Polarizing aperture stereoscopic cinema camera
NASA Astrophysics Data System (ADS)
Lipton, Lenny
2012-07-01
The art of stereoscopic cinematography has been held back because of the lack of a convenient way to reduce the stereo camera lenses' interaxial to less than the distance between the eyes. This article describes a unified stereoscopic camera and lens design that allows for varying the interaxial separation to small values using a unique electro-optical polarizing aperture design for imaging left and right perspective views onto a large single digital sensor, the size of the standard 35 mm frame, with the means to select left and right image information. Even with the added stereoscopic capability, the appearance of existing camera bodies will be unaltered.
Recording stereoscopic 3D neurosurgery with a head-mounted 3D camera system.
Lee, Brian; Chen, Brian R; Chen, Beverly B; Lu, James Y; Giannotta, Steven L
2015-06-01
Stereoscopic three-dimensional (3D) imaging can present more information to the viewer and further enhance the learning experience over traditional two-dimensional (2D) video. Most 3D surgical videos are recorded from the operating microscope and only feature the crux, or the most important part of the surgery, leaving out other crucial parts of surgery including the opening, approach, and closing of the surgical site. In addition, many other surgeries including complex spine, trauma, and intensive care unit procedures are also rarely recorded. We describe and share our experience with a commercially available head-mounted stereoscopic 3D camera system to obtain stereoscopic 3D recordings of these seldom recorded aspects of neurosurgery. The strengths and limitations of using the GoPro® 3D system as a head-mounted stereoscopic 3D camera system in the operating room are reviewed in detail. Over the past several years, we have recorded in stereoscopic 3D over 50 cranial and spinal surgeries and created a library for education purposes. We have found the head-mounted stereoscopic 3D camera system to be a valuable asset to supplement 3D footage from a 3D microscope. We expect that these comprehensive 3D surgical videos will become an important facet of resident education and ultimately lead to improved patient care.
NASA Astrophysics Data System (ADS)
Rousson, Johanna; Haar, Jérémy; Santal, Sarah; Kumcu, Asli; Platiša, Ljiljana; Piepers, Bastian; Kimpe, Tom; Philips, Wilfried
2016-03-01
While three-dimensional (3-D) imaging systems are entering hospitals, no study to date has explored the luminance calibration needs of 3-D stereoscopic diagnostic displays and if they differ from two-dimensional (2-D) displays. Since medical display calibration incorporates the human contrast sensitivity function (CSF), we first assessed the 2-D CSF for benchmarking and then examined the impact of two image parameters on the 3-D stereoscopic CSF: (1) five depth plane (DP) positions (between DP: -171 and DP: 2853 mm), and (2) three 3-D inclinations (0 deg, 45 deg, and 60 deg around the horizontal axis of a DP). Stimuli were stereoscopic images of a vertically oriented 2-D Gabor patch at one of seven frequencies ranging from 0.4 to 10 cycles/deg. CSFs were measured for seven to nine human observers with a staircase procedure. The results indicate that the 2-D CSF model remains valid for a 3-D stereoscopic display regardless of the amount of disparity between the stereo images. We also found that the 3-D CSF at DP≠0 does not differ from the 3-D CSF at DP=0 for DPs and disparities which allow effortless binocular fusion. Therefore, the existing 2-D medical luminance calibration algorithm remains an appropriate tool for calibrating polarized stereoscopic medical displays.
[Dendrobium officinale stereoscopic cultivation method].
Si, Jin-Ping; Dong, Hong-Xiu; Liao, Xin-Yan; Zhu, Yu-Qiu; Li, Hui
2014-12-01
This study aimed to make the most of the available space in Dendrobium officinale cultivation facilities, reveal the variation in yield and functional components of stereoscopically cultivated D. officinale, and improve quality, yield, and efficiency. The agronomic traits and yield variation of stereoscopically cultivated D. officinale were studied in a field experiment. The contents of polysaccharide and extractum were determined using the phenol-sulfuric acid method and the 2010 edition of the "Chinese Pharmacopoeia," Appendix X A. The results showed that land utilization under stereoscopic cultivation increased 2.74 times, and the stems, leaves, and their total fresh and dry weights per unit area of stereoscopically cultivated D. officinale were all heavier than those of ground-cultivated plants. There was no significant difference in polysaccharide content between stereoscopic cultivation and ground cultivation, but the extractum content and the total content of polysaccharide and extractum were significantly higher than those of the ground-cultivated plants. In addition, the polysaccharide content and the total content of polysaccharide and extractum from the top two levels of the stereoscopic culture matrix were significantly higher than those from the other levels and from ground cultivation. Stereoscopic cultivation can effectively improve the utilization of space and the yield, while the total content of polysaccharide and extractum was significantly higher than that of the ground-cultivated plants. The significant difference in Dendrobium polysaccharides among plants from different heights of the stereoscopic culture matrix may be associated with the light factor.
Cho, Hohyun; Kang, Min-Koo; Ahn, Sangtae; Kwon, Moonyoung; Yoon, Kuk-Jin; Kim, Kiwoong; Jun, Sung Chan
2017-01-01
Due to the recent explosion in various forms of 3D content, the evaluation of such content from a neuroscience perspective is quite interesting. However, existing investigations of cortical oscillatory responses in stereoscopic depth perception are quite rare. Therefore, we investigated spatiotemporal and spatio-temporo-spectral features at four different stereoscopic depths within the comfort zone. We adopted a simultaneous EEG/MEG acquisition technique to collect the oscillatory responses of eight participants. We defined subject-specific retinal disparities and designed a single-trial-based stereoscopic viewing experimental paradigm. In the group analysis, we observed that, as the depth increased from Level 1 to Level 3, there was a time-locked increase in the N200 component in MEG and the P300 component in EEG in the occipital and parietal areas, respectively. In addition, initial alpha and beta event-related desynchronizations (ERD) were observed at approximately 500 to 1000 ms, while theta, alpha, and beta event-related synchronizations (ERS) appeared at approximately 1000 to 2000 ms. Interestingly, there was a saturation point in the increase in cognitive responses, including N200, P300, and alpha ERD, even when the depth increased only within the comfort zone. Meanwhile, the magnitude of low beta ERD decreased in the dorsal pathway as depth increased. From these findings, we concluded that cognitive responses are likely to become saturated in the visual comfort zone, while perceptual load may increase with depth.
NASA Astrophysics Data System (ADS)
Bouma, Henri; van der Mark, Wannes; Eendebak, Pieter T.; Landsmeer, Sander H.; van Eekeren, Adam W. M.; ter Haar, Frank B.; Wieringa, F. Pieter; van Basten, Jean-Paul
2012-06-01
Compared to open surgery, minimal invasive surgery offers reduced trauma and faster recovery. However, lack of direct view limits space perception. Stereo-endoscopy improves depth perception, but is still restricted to the direct endoscopic field-of-view. We describe a novel technology that reconstructs 3D-panoramas from endoscopic video streams providing a much wider cumulative overview. The method is compatible with any endoscope. We demonstrate that it is possible to generate photorealistic 3D-environments from mono- and stereoscopic endoscopy. The resulting 3D-reconstructions can be directly applied in simulators and e-learning. Extended to real-time processing, the method looks promising for telesurgery or other remote vision-guided tasks.
NASA Astrophysics Data System (ADS)
Yang, Yanlong; Zhou, Xing; Li, Runze; Van Horn, Mark; Peng, Tong; Lei, Ming; Wu, Di; Chen, Xun; Yao, Baoli; Ye, Tong
2015-03-01
Bessel beams have been used in many applications due to their unique optical property of maintaining their intensity profiles unchanged during propagation. In imaging applications, Bessel beams have been successfully used to provide extended focuses for volumetric imaging and a uniform illumination plane in light-sheet microscopy. Coupled with two-photon excitation, Bessel beams have been successfully used to realize fluorescence-projected volumetric imaging. We previously demonstrated a stereoscopic solution, two-photon fluorescence stereomicroscopy (TPFSM), for recovering the depth information in volumetric imaging with Bessel beams. In TPFSM, tilted Bessel beams were used to generate stereoscopic images on a laser-scanning two-photon fluorescence microscope; with post-acquisition image processing we could provide 3D perception of the acquired volume images through anaglyph 3D glasses. However, the tilted Bessel beams were generated by shifting either an axicon or an objective laterally; the slow imaging speed and severe aberrations made the method hard to use in real-time volume imaging. In this article, we report recent improvements of TPFSM with a newly designed scanner and imaging software, which allow 3D stereoscopic imaging without moving any of the optical components of the setup. These improvements have dramatically increased focusing quality and imaging speed, so that TPFSM can potentially be performed in real time to provide 3D visualization in scattering media without post-processing.
NASA Astrophysics Data System (ADS)
McIntire, John; Geiselman, Eric; Heft, Eric; Havig, Paul
2011-06-01
Designers, researchers, and users of binocular stereoscopic head- or helmet-mounted displays (HMDs) face the tricky issue of what imagery to present in their particular displays, and how to do so effectively. Stereoscopic imagery must often be created in-house with a 3D graphics program or from within a 3D virtual environment, or stereoscopic photos/videos must be carefully captured, perhaps for relaying to an operator in a teleoperative system. In such situations, the question arises as to what camera separation (real or virtual) is appropriate or desirable for end-users and operators. We review some of the relevant literature regarding stereo-pair camera separation using desk-mounted or larger-scale stereoscopic displays, and apply our findings to potential HMD applications, including command and control, teleoperation, information and scientific visualization, and entertainment.
ERIC Educational Resources Information Center
School Science Review, 1983
1983-01-01
Presents chemistry experiments, laboratory procedures, demonstrations, and classroom materials/activities. These include: experiments on colloids, processing of uranium ore, action of heat on carbonates; color test for phenols and aromatic amines; solvent properties of non-electrolytes; stereoscopic applications/methods; a valency balance;…
ERIC Educational Resources Information Center
Lau, Kung Wong; Kan, Chi Wai; Lee, Pui Yuen
2017-01-01
Purpose: The purpose of this paper is to discuss the use of stereoscopic virtual technology in textile and fashion studies in particular to the area of chemical experiment. The development of a designed virtual platform, called Stereoscopic Chemical Laboratory (SCL), is introduced. Design/methodology/approach: To implement the suggested…
Wide-Field-of-View, High-Resolution, Stereoscopic Imager
NASA Technical Reports Server (NTRS)
Prechtl, Eric F.; Sedwick, Raymond J.
2010-01-01
A device combines video feeds from multiple cameras to provide wide-field-of-view, high-resolution, stereoscopic video to the user. The prototype under development consists of two camera assemblies, one for each eye. One of these assemblies incorporates a mounting structure with multiple cameras attached at offset angles. The video signals from the cameras are fed to a central processing platform where each frame is color processed and mapped into a single contiguous wide-field-of-view image. Because the resolution of most display devices is typically smaller than the processed map, a cropped portion of the video feed is output to the display device. The positioning of the cropped window will likely be controlled through the use of a head-tracking device, allowing the user to turn his or her head side-to-side or up and down to view different portions of the captured image. There are multiple options for the display of the stereoscopic image. The use of head-mounted displays is one likely implementation; the use of 3D projection technologies is another option under consideration. The technology can be adapted in a multitude of ways. The computing platform is scalable, such that the number, resolution, and sensitivity of the cameras can be leveraged to improve image resolution and field of view. Miniaturization efforts can be pursued to shrink the package down for better mobility. Power-savings studies can be performed to enable unattended, remote sensing packages. Image compression and transmission technologies can be incorporated to enable an improved telepresence experience.
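The head-tracked cropping described above can be sketched as a simple window selection over the stitched panorama. The parameters and the linear yaw/pitch mapping are assumptions for illustration, not the actual NTRS design.

```python
# Sketch of head-tracked window cropping (assumed parameters, not the actual
# design): map head yaw/pitch, normalized to [0, 1], to the top-left corner of
# a display-sized window inside the wide stitched panorama.

def crop_window(pano_w, pano_h, view_w, view_h, yaw_frac, pitch_frac):
    """Return the clamped top-left (x, y) of the cropped viewing window."""
    x = int(yaw_frac * (pano_w - view_w))
    y = int(pitch_frac * (pano_h - view_h))
    x = max(0, min(x, pano_w - view_w))
    y = max(0, min(y, pano_h - view_h))
    return x, y
```

Clamping keeps the window inside the panorama when the tracker reports a head orientation beyond the captured field of view.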
NASA Astrophysics Data System (ADS)
van Beurden, Maurice H. P. H.; Ijsselsteijn, Wijnand A.; de Kort, Yvonne A. W.
2011-03-01
Stereoscopic displays are known to offer a number of key advantages in visualizing complex 3D structures or datasets. The large majority of studies that focus on evaluating stereoscopic displays for professional applications use completion time and/or the percentage of correct answers to measure potential performance advantages. However, completion time and accuracy may not fully reflect all the benefits of stereoscopic displays. In this paper, we argue that perceived workload is an additional valuable indicator reflecting the extent to which users can benefit from using stereoscopic displays. We performed an experiment in which participants were asked to perform a visual path-tracing task within a convoluted 3D wireframe structure, varying in level of complexity of the visualised structure and level of disparity of the visualisation. The results showed that optimal performance (completion time, accuracy, and workload) depends on both task difficulty and disparity level. Stereoscopic disparity yielded faster and more accurate task performance, and we observed a trend whereby performance on difficult tasks stands to benefit more from higher levels of disparity than performance on easy tasks. Perceived workload (as measured using the NASA-TLX) showed a similar response pattern, providing evidence that perceived workload is sensitive to variations in disparity as well as task difficulty. This suggests that perceived workload could be a useful concept, in addition to standard performance indicators, in characterising and measuring human performance advantages when using stereoscopic displays.
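The NASA-TLX score used above combines six subscale ratings (0 to 100) with weights obtained from 15 pairwise comparisons. A minimal sketch of the standard weighted-score computation follows; the subscale names are the standard TLX dimensions, and the ratings in any example are hypothetical.

```python
# Sketch of the standard weighted NASA-TLX computation: six subscale ratings
# (0-100) are weighted by tallies from 15 pairwise comparisons between the
# dimensions, then averaged. Example ratings are hypothetical.

def nasa_tlx(ratings, weights):
    """Weighted workload score: sum(rating * weight) / 15."""
    if sum(weights.values()) != 15:
        raise ValueError("pairwise-comparison tallies must sum to 15")
    return sum(ratings[k] * weights[k] for k in ratings) / 15.0
```

With all six ratings equal, the weighting has no effect and the score equals the common rating, which is a handy sanity check.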
Visual perception and stereoscopic imaging: an artist's perspective
NASA Astrophysics Data System (ADS)
Mason, Steve
2015-03-01
This paper continues my February 2014 IS&T/SPIE Convention exploration into the relationship of stereoscopic vision and consciousness (90141F-1). It was proposed then that through stereoscopic imaging people may consciously experience, or see, what they are viewing, and thereby become more aware of the way their brains manage and interpret visual information. Environmental imaging was suggested as a way to accomplish this. This paper is the result of further investigation, research, and follow-up imaging. A show of images resulting from this research allows viewers to experience for themselves the effects of stereoscopy on consciousness. Creating dye-infused aluminum prints while employing ChromaDepth® 3D glasses, I hope not only to raise awareness of visual processing but also to explore the differences and similarities between the artist and the scientist―art increases right-brain spatial consciousness, not only empirical thinking, while furthering the viewer's cognizance of the process of seeing. The artist must abandon preconceptions and expectations, despite what the evidence and experience may indicate, in order to see what is happening in his or her work and to allow it to develop in ways he or she could never anticipate. This process is then revealed to the viewer in a show of work. It is in the experiencing, not just in the thinking, where insight is achieved. Directing the viewer's awareness during the experience using stereoscopic imaging allows for further understanding of the brain's function in the visual process. A cognitive transformation occurs, the proverbial "left/right brain shift," in order for viewers to "see" the space. Using what we know from recent brain research, these images will draw from certain parts of the brain when viewed in two dimensions and different ones when viewed stereoscopically, a shift, if one is looking for it, that is quite noticeable.
People who have experienced these images in the context of examining their own visual process have been startled by the effect they have on how they perceive the world around them. For instance, when viewing the mountains on a trip to Montana, one woman exclaimed, "I could no longer see just mountains, but also so many amazing colors and shapes"―she could see beyond her preconceptions of mountains to realize more of the beauty that was really there, not just the objects she "thought" to be there. The awareness gained from experiencing the artist's perspective will help with creative thinking in particular and overall research in general. Perceiving the space in these works, completely removing the picture-plane by use of the 3D glasses, making a conscious connection between the feeling and visual content, and thus gaining a deeper appreciation of the visual process will all contribute to understanding how our thinking, our left-brain domination, gets in the way of our seeing what is right in front of us. We fool ourselves with concept and memory―experiencing these prints may help some come a little closer to reality.
ERIC Educational Resources Information Center
Hirmas, Daniel R.; Slocum, Terry; Halfen, Alan F.; White, Travis; Zautner, Eric; Atchley, Paul; Liu, Huan; Johnson, William C.; Egbert, Stephen; McDermott, Dave
2014-01-01
Recently, the use of stereoscopic three-dimensional (3-D) projection displays has increased in geoscience education. One concern in employing 3-D projection systems in large lecture halls, however, is that the 3-D effect is reported to diminish with increased angle and distance from the stereoscopic display. The goal of this work was to study that…
Stereoscopic Configurations To Minimize Distortions
NASA Technical Reports Server (NTRS)
Diner, Daniel B.
1991-01-01
Proposed television system provides two stereoscopic displays. Two-camera, two-monitor system used in various camera configurations and with stereoscopic images on monitors magnified to various degrees. Designed to satisfy observer's need to perceive spatial relationships accurately throughout workspace or to perceive them at high resolution in small region of workspace. Potential applications include industrial, medical, and entertainment imaging and monitoring and control of telemanipulators, telerobots, and remotely piloted vehicles.
Psychometric Assessment of Stereoscopic Head-Mounted Displays
2016-06-29
Journal article; dates covered: Jan 2015 - Dec 2015. ... to render an immersive three-dimensional constructive environment. The purpose of this effort was to quantify the impact of aircrew vision on ... simulated tasks requiring precise depth discrimination. This work will provide an example validation method for future stereoscopic virtual immersive ...
Film patterned retarder for stereoscopic three-dimensional display using ink-jet printing method.
Lim, Young Jin; Yu, Ji Hoon; Song, Ki Hoon; Lee, Myong-Hoon; Ren, Hongwen; Mun, Byung-June; Lee, Gi-Dong; Lee, Seung Hee
2014-09-22
We propose a film patterned retarder (FPR) for stereoscopic three-dimensional display with polarization glasses using ink-jet printing method. Conventional FPR process requires coating of photo-alignment and then UV exposure using wire-grid mask, which is very expensive and difficult. The proposed novel fabrication method utilizes a plastic substrate made of polyether sulfone and an alignment layer, poly (4, 4' - (9, 9 -fluorenyl) diphenylene cyclobutanyltetracarboximide) (9FDA/CBDA) in which the former and the latter aligns reactive mesogen along and perpendicular to the rubbing direction, respectively. The ink-jet printing of 9FDA/CBDA line by line allows fabricating the cost effective FPR which can be widely applied for 3D display applications.
A Topological Array Trigger for AGIS, the Advanced Gamma ray Imaging System
NASA Astrophysics Data System (ADS)
Krennrich, F.; Anderson, J.; Buckley, J.; Byrum, K.; Dawson, J.; Drake, G.; Haberichter, W.; Imran, A.; Krawczynski, H.; Kreps, A.; Schroedter, M.; Smith, A.
2008-12-01
Next-generation ground-based γ-ray observatories such as AGIS and CTA are expected to cover a 1 km² area with 50-100 imaging atmospheric Cherenkov telescopes. The stereoscopic view of air showers from multiple view points raises the possibility of using a topological array trigger that adds substantial flexibility, new background-suppression capabilities, and a reduced energy threshold. In this paper we report on the concept and technical implementation of a fast topological trigger system that makes use of real-time image processing of individual camera patterns and their combination in a stereoscopic array analysis. A prototype system is currently under construction, and we discuss the design and hardware of this topological array trigger system.
Stereoscopic applications for design visualization
NASA Astrophysics Data System (ADS)
Gilson, Kevin J.
2007-02-01
Advances in display technology and 3D design visualization applications have made real-time stereoscopic visualization of architectural and engineering projects a reality. Parsons Brinckerhoff (PB) is a transportation consulting firm that has used digital visualization tools from their inception and has helped pioneer the application of those tools to large-scale infrastructure projects. PB is one of the first Architecture/Engineering/Construction (AEC) firms to implement a CAVE, an immersive presentation environment that includes stereoscopic rear-projection capability. The firm also employs a portable stereoscopic front-projection system and shutter-glass systems for smaller groups. PB is using commercial real-time 3D applications in combination with traditional 3D modeling programs to visualize and present large AEC projects to planners, clients, and decision makers in stereo. These presentations create more immersive and spatially realistic impressions of the proposed designs. This paper will present the basic display tools and applications, and the 3D modeling techniques PB is using to produce interactive stereoscopic content. The paper will discuss several architectural and engineering design visualizations we have produced.
Analysis of brain activity and response during monoscopic and stereoscopic visualization
NASA Astrophysics Data System (ADS)
Calore, Enrico; Folgieri, Raffaella; Gadia, Davide; Marini, Daniele
2012-03-01
Stereoscopic visualization in cinematography and Virtual Reality (VR) creates an illusion of depth by means of two bidimensional images corresponding to different views of a scene. This perceptual trick is used to enhance the emotional response and the sense of presence and immersion of the observers. An interesting question is whether and how it is possible to measure and analyze the level of emotional involvement and attention of the observers during a stereoscopic visualization of a movie or of a virtual environment. These research aims represent a challenge due to the large number of sensorial, physiological, and cognitive stimuli involved. In this paper we begin this research by analyzing possible differences in the brain activity of subjects during the viewing of monoscopic or stereoscopic contents. To this aim, we have performed some preliminary experiments collecting electroencephalographic (EEG) data from a group of users wearing a Brain-Computer Interface (BCI) during the viewing of stereoscopic and monoscopic short movies in a VR immersive installation.
Experimental investigations of pupil accommodation factors.
Lee, Eui Chul; Lee, Ji Woo; Park, Kang Ryoung
2011-08-17
PURPOSE. The contraction and dilation of the iris muscle that controls the amount of light entering the eye causes pupil accommodation. In this study, experiments were performed and two of the three factors that influence pupil accommodation were analyzed: lighting conditions and depth fixations. The psychological factors were not examined, because they could not be quantified. METHODS. A head-wearable, eyeglasses-based eye-capturing device was designed to measure pupil size. It included a near-infrared (NIR) camera and an NIR light-emitting diode. Twenty-four subjects watched two-dimensional (2D) and three-dimensional (3D) stereoscopic videos of the same content, and the changes in pupil size were measured by using the eye-capturing device and image-processing methods. RESULTS. The pupil size changed with the intensity of the videos and the disparities between the left and right images of a 3D stereoscopic video. There was a correlation between pupil size and average intensity: the pupil diameter was estimated to contract from approximately 5.96 to 4.25 mm as the intensity varied from 0 to 255. Further, from the changes in the depth fixation for the pupil accommodation, it was confirmed that depth fixation also affected the accommodation of pupil size. CONCLUSIONS. It was confirmed that the lighting condition was an even more significant factor in pupil accommodation than depth fixation (significance ratio: approximately 3.2:1) when watching 3D stereoscopic video. Pupil accommodation was more affected by depth fixation in the real world than by binocular convergence in the 3D stereoscopic display.
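The reported intensity dependence can be turned into a tiny estimator. A linear interpolation between the two published endpoints (5.96 mm at intensity 0, 4.25 mm at 255) is an assumption for illustration; the study does not state the functional form of the relationship.

```python
def estimated_pupil_diameter_mm(intensity):
    """Estimate pupil diameter (mm) from average video intensity (0-255),
    linearly interpolating between the study's reported endpoints:
    ~5.96 mm at intensity 0 and ~4.25 mm at intensity 255."""
    if not 0 <= intensity <= 255:
        raise ValueError("intensity must be in [0, 255]")
    return 5.96 + (4.25 - 5.96) * (intensity / 255.0)
```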
Parallax scanning methods for stereoscopic three-dimensional imaging
NASA Astrophysics Data System (ADS)
Mayhew, Christopher A.; Mayhew, Craig M.
2012-03-01
Under certain circumstances, conventional stereoscopic imagery is subject to being misinterpreted. Stereo perception created from two static horizontally separated views can create a "cut out" 2D appearance for objects at various planes of depth. The subject volume looks three-dimensional, but the objects themselves appear flat. This is especially true if the images are captured using small disparities. One potential explanation for this effect is that, although three-dimensional perception comes primarily from binocular vision, a human's gaze (the direction and orientation of a person's eyes with respect to their environment) and head motion also contribute additional sub-process information. The absence of this information may be the reason that certain stereoscopic imagery appears "odd" and unrealistic. Another contributing factor may be the absence of vertical disparity information in a traditional stereoscopic display. Recently, Parallax Scanning technologies have been introduced, which (1) provide a scanning methodology, (2) incorporate vertical disparity, and (3) produce stereo images with substantially smaller disparities than the human interocular distance. To test whether these three features would improve the realism and reduce the cardboard cutout effect of stereo images, we have applied Parallax Scanning (PS) technologies to commercial stereoscopic digital cinema productions and have tested the results with a panel of stereo experts. These informal experiments show that the addition of PS information into the left and right image capture improves the overall perception of three-dimensionality for most viewers. Parallax scanning significantly increases the set of tools available for 3D storytelling while at the same time presenting imagery that is easy and pleasant to view.
NASA Astrophysics Data System (ADS)
Minamoto, Masahiko; Matsunaga, Katsuya
1999-05-01
Operator performance while using a remote-controlled backhoe shovel is described for three different stereoscopic viewing conditions: direct view, fixed stereoscopic cameras connected to a helmet-mounted display (HMD), and a rotating stereo camera slaved to the head orientation of a freely moving stereo HMD. Results showed that the head-slaved system provided the best performance.
Production and evaluation of stereoscopic video presentation in surgical training
NASA Astrophysics Data System (ADS)
Ilgner, Justus; Kawai, Takashi; Westhofen, Martin; Shibata, Takashi
2004-05-01
Stereoscopic video teaching can facilitate understanding of current minimally invasive operative techniques. This project was created to set up a digital stereoscopic teaching environment for the training of ENT residents and medical students. We recorded three ENT operative procedures (tympanoplasty, paranasal sinus operation, and laser chordectomy) at the University Hospital Aachen. The material was edited stereoscopically at Waseda University and converted into a streaming 3-D video format that does not depend on PAL or NTSC signal standards. Video clips were evaluated by 5 ENT specialists and 11 residents in single sessions on an LCD monitor (8 participants) or a CRT monitor (8 participants). Emphasis was placed on depth perception, visual fatigue, and the time needed to achieve a stereoscopic impression. Qualitative results were recorded on a visual analogue scale ranging from 1 (excellent) to 5 (bad). The overall impression was rated 2.06 to 3.13 in the LCD group and 2.0 to 2.62 in the CRT group. The depth impression was rated 1.63 to 2.88 (LCD) and 1.63 to 2.25 (CRT). Stereoscopic video teaching was regarded as useful in ENT training by all participants. Further points for evaluation will be the quantification of depth information as well as the information gain in teaching junior colleagues.
Digital stereoscopic convergence where video games and movies for the home user meet
NASA Astrophysics Data System (ADS)
Schur, Ethan
2009-02-01
Today there is a proliferation of stereoscopic 3D display devices, 3D content, and 3D enabled video games. As we in the S-3D community bring stereoscopic 3D to the home user we have a real opportunity of using stereoscopic 3D to bridge the gap between exciting immersive games and home movies. But to do this, we cannot limit ourselves to current conceptions of gaming and movies. We need, for example, to imagine a movie that is fully rendered using avatars in a stereoscopic game environment. Or perhaps to imagine a pervasive drama where viewers can play too and become an essential part of the drama - whether at home or on the go on a mobile platform. Stereoscopic 3D is the "glue" that will bind these video and movie concepts together. As users feel more immersed, the lines between current media will blur. This means that we have the opportunity to shape the way that we, as humans, view and interact with each other, our surroundings and our most fundamental art forms. The goal of this paper is to stimulate conversation and further development on expanding the current gaming and home theatre infrastructures to support greatly-enhanced experiential entertainment.
Trelease, R B
1996-01-01
Advances in computer visualization and user interface technologies have enabled development of "virtual reality" programs that allow users to perceive and to interact with objects in artificial three-dimensional environments. Such technologies were used to create an image database and program for studying the human skull, a specimen that has become increasingly expensive and scarce. Stereoscopic image pairs of a museum-quality skull were digitized from multiple views. For each view, the stereo pairs were interlaced into a single, field-sequential stereoscopic picture using an image processing program. The resulting interlaced image files are organized in an interactive multimedia program. At run-time, gray-scale 3-D images are displayed on a large-screen computer monitor and observed through liquid-crystal shutter goggles. Users can then control the program and change views with a mouse and cursor by pointing and clicking on screen-level control words ("buttons"). For each view of the skull, an ID control button can be used to overlay pointers and captions for important structures. Pointing and clicking on "hidden buttons" overlying certain structures triggers digitized spoken-word descriptions or mini-lectures.
Matching and correlation computations in stereoscopic depth perception.
Doi, Takahiro; Tanabe, Seiji; Fujita, Ichiro
2011-03-02
A fundamental task of the visual system is to infer depth by using binocular disparity. To encode binocular disparity, the visual cortex performs two distinct computations: one detects matched patterns in paired images (matching computation); the other constructs the cross-correlation between the images (correlation computation). How the two computations are used in stereoscopic perception is unclear. We dissociated their contributions in near/far discrimination by varying the magnitude of the disparity across separate sessions. For a small disparity (0.03°), subjects performed at chance level with a binocularly opposite-contrast (anti-correlated) random-dot stereogram (RDS) but improved their performance with the proportion of contrast-matched (correlated) dots. For a large disparity (0.48°), the direction of perceived depth reversed with an anti-correlated RDS relative to that for a correlated one. Neither reversed nor normal depth was perceived when anti-correlation was applied to half of the dots. We explain the decision process as a weighted average of the two computations, with the relative weight of the correlation computation increasing with the disparity magnitude. We conclude that the matching computation dominates fine depth perception, while both computations contribute to coarser depth perception. Thus, stereoscopic depth perception recruits different computations depending on the disparity magnitude.
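The proposed decision rule, a weighted average of the matching and correlation signals whose correlation weight grows with disparity magnitude, can be sketched as follows. The saturating weight function and its half-weight constant are assumptions chosen for illustration, not the model fitted in the paper.

```python
def depth_decision(matching_signal, correlation_signal, disparity_deg,
                   half_weight_disp=0.2):
    """Weighted average of the two cortical computations.
    The correlation weight w rises from 0 toward 1 as disparity grows
    (saturating form and half_weight_disp are illustrative assumptions)."""
    w = disparity_deg / (disparity_deg + half_weight_disp)
    return (1.0 - w) * matching_signal + w * correlation_signal

# For an anti-correlated RDS: matching gives no signal (~0) while
# correlation gives a sign-reversed signal (~-1).
small = depth_decision(0.0, -1.0, 0.03)  # fine disparity: weak response
large = depth_decision(0.0, -1.0, 0.48)  # coarse disparity: reversed depth
```

Qualitatively this reproduces the report: near-chance performance with anti-correlated dots at 0.03° and clearly reversed depth at 0.48°.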
Perceptual asymmetry reveals neural substrates underlying stereoscopic transparency.
Tsirlin, Inna; Allison, Robert S; Wilcox, Laurie M
2012-02-01
We describe a perceptual asymmetry found in stereoscopic perception of overlaid random-dot surfaces. Specifically, the minimum separation in depth needed to perceptually segregate two overlaid surfaces depended on the distribution of dots across the surfaces. With the total dot density fixed, significantly larger inter-plane disparities were required for perceptual segregation of the surfaces when the front surface had fewer dots than the back surface, compared to when the back surface was the one with fewer dots. We propose that our results reflect an asymmetry in the signal strength of the front and back surfaces due to the assignment of the spaces between the dots to the back surface by disparity interpolation. This hypothesis was supported by the results of two experiments designed to reduce the imbalance in the neuronal response to the two surfaces. We modeled the psychophysical data with a network of inter-neural connections: excitatory within-disparity and inhibitory across-disparity, where the spread of disparity was modulated according to figure-ground assignment. These psychophysical and computational findings suggest that stereoscopic transparency depends on both inter-neural interactions of disparity-tuned cells and higher-level processes governing figure-ground segregation.
Competitive Parallel Processing For Compression Of Data
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Fender, Antony R. H.
1990-01-01
Momentarily-best compression algorithm selected. Proposed competitive-parallel-processing system compresses data for transmission in channel of limited bandwidth. Likely application for compression lies in high-resolution, stereoscopic color-television broadcasting. Data from information-rich source like color-television camera compressed by several processors, each operating with different algorithm. Referee processor selects momentarily-best compressed output.
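The referee idea can be sketched in a few lines: run several compressors on the same data and keep whichever output is momentarily smallest. Standard-library codecs stand in for the purpose-built parallel hardware algorithms of the proposal, and the sequential loop stands in for true parallel execution.

```python
import bz2
import lzma
import zlib

def compress_best(data: bytes):
    """Compress `data` with several algorithms and let a 'referee'
    pick the smallest output. Returns (algorithm_name, compressed)."""
    candidates = {
        "zlib": zlib.compress(data, 9),
        "bz2": bz2.compress(data, 9),
        "lzma": lzma.compress(data),
    }
    name = min(candidates, key=lambda k: len(candidates[k]))
    return name, candidates[name]
```

The receiver would need the winning algorithm's name (or an ID) alongside each block to decompress it.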
Jig For Stereoscopic Photography
NASA Technical Reports Server (NTRS)
Nielsen, David J.
1990-01-01
Separations between views adjusted precisely for best results. Simple jig adjusted to set precisely the distance between right and left positions of camera used to make stereoscopic photographs. Camera slides in slot between extreme positions, where it takes stereoscopic pictures. Distance between extreme positions set reproducibly with micrometer. In view of the trend toward very-large-scale integration of electronic circuits, training method and jig used to make training photographs useful to many companies to reduce cost of training manufacturing personnel.
Acquisition of Stereoscopic Particle Image Velocimetry System for Investigation of Unsteady Flows
2016-04-30
Final Report: Acquisition of Stereoscopic Particle Image Velocimetry (S-PIV) System for Investigation of Unsteady Flows. Reporting period 1 Feb 2015 to 31 Jan 2016; report dated 30 Apr 2016. Sponsored by the U.S. Army Research Office, P.O. Box 12211, Research Triangle Park, NC 27709-2211. Distribution unlimited.
DWT-based stereoscopic image watermarking
NASA Astrophysics Data System (ADS)
Chammem, A.; Mitrea, M.; Prêteux, F.
2011-03-01
Watermarking has already established itself as an effective and reliable solution for conventional multimedia content protection (image/video/audio/3D). By persistently (robustly) and imperceptibly (transparently) inserting some extra data into the original content, the illegitimate use of that content can be detected without imposing any annoying constraint on the legitimate user. The present paper deals with stereoscopic image protection by means of watermarking techniques. That is, we first investigate the peculiarities of visual stereoscopic content from the transparency and robustness points of view. Then, we advance a new watermarking scheme designed to reach the trade-off between transparency and robustness while ensuring a prescribed quantity of inserted information. Finally, this method is evaluated on two stereoscopic image corpora (natural images and medical data).
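To illustrate the general DWT-watermarking idea (not the specific scheme of this paper), the following sketch embeds one bit in a 1-D signal by quantizing a one-level Haar approximation coefficient, a common quantization-index-modulation move. All function names and the quantization step are illustrative assumptions.

```python
def haar_dwt(x):
    """One-level Haar DWT: returns (approximation, detail) coefficients."""
    a = [(x[2 * i] + x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]
    return a, d

def haar_idwt(a, d):
    """Inverse of haar_dwt."""
    x = []
    for ai, di in zip(a, d):
        x.extend([ai + di, ai - di])
    return x

def embed_bit(x, bit, step=8.0):
    """Embed one bit by quantizing the first approximation coefficient:
    bit 0 -> multiple of `step`, bit 1 -> offset by step/2."""
    a, d = haar_dwt(x)
    q = round(a[0] / step) * step
    a[0] = q + (step / 2.0 if bit else 0.0)
    return haar_idwt(a, d)

def extract_bit(x, step=8.0):
    """Recover the bit from the quantization residue of the coefficient."""
    a, _ = haar_dwt(x)
    residue = a[0] % step
    return 1 if abs(residue - step / 2.0) < step / 4.0 else 0
```

A real scheme for stereoscopic images would operate on 2-D subbands of both views and spread many bits for robustness; the sketch only shows the embed/extract round trip.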
Usage of stereoscopic visualization in the learning contents of rotational motion.
Matsuura, Shu
2013-01-01
Rotational motion plays an essential role in physics even at an introductory level. In addition, the stereoscopic display of three-dimensional graphics is advantageous for the presentation of rotational motions, particularly for depth recognition. However, the immersive visualization of rotational motion has been known to lead to dizziness and even nausea for some viewers. Therefore, the purpose of this study is to examine the onset of nausea and visual fatigue when learning rotational motion through the use of a stereoscopic display. The findings show that an instruction method with intermittent exposure to the stereoscopic display and a simplification of its visual components reduced the onset of nausea and visual fatigue for the viewers while maintaining the overall effect of instantaneous spatial recognition.
Stereoscopic display of 3D models for design visualization
NASA Astrophysics Data System (ADS)
Gilson, Kevin J.
2006-02-01
Advances in display technology and 3D design visualization applications have made real-time stereoscopic visualization of architectural and engineering projects a reality. Parsons Brinckerhoff (PB) is a transportation consulting firm that has used digital visualization tools since their inception and has helped pioneer the application of those tools to large-scale infrastructure projects. PB is one of the first Architecture/Engineering/Construction (AEC) firms to implement a CAVE, an immersive presentation environment that includes stereoscopic rear-projection capability. The firm also employs a portable stereoscopic front-projection system and shutter-glass systems for smaller groups. PB is using commercial real-time 3D applications in combination with traditional 3D modeling programs to visualize and present large AEC projects to planners, clients, and decision makers in stereo. These presentations create more immersive and spatially realistic presentations of the proposed designs. This paper will present the basic display tools and applications, and the 3D modeling techniques PB is using to produce interactive stereoscopic content. The paper will discuss several architectural and engineering design visualizations we have produced.
YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters
NASA Astrophysics Data System (ADS)
Schild, Jonas; Seele, Sven; Masuch, Maic
2012-03-01
Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies are becoming attractive for movie theater operators, such as interactive 3D games. In this paper, we present a case study that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features a live 3D HD video stream from a professional stereo camera rig rendered in a real-time game scene. We use this effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.
Subjective experiences of watching stereoscopic Avatar and U2 3D in a cinema
NASA Astrophysics Data System (ADS)
Pölönen, Monika; Salmimaa, Marja; Takatalo, Jari; Häkkinen, Jukka
2012-01-01
A stereoscopic 3-D version of the film Avatar was shown to 85 people who subsequently answered questions related to sickness, visual strain, stereoscopic image quality, and sense of presence. Viewing Avatar for 165 min induced some symptoms of visual strain and sickness, but the symptom levels remained low. A comparison between Avatar and previously published results for the film U2 3D showed that sickness and visual strain levels were similar despite the films' different runtimes. The genre of the film had a significant effect on the viewers' opinions and sense of presence. Avatar, which has been described as a combination of action, adventure, and sci-fi genres, was experienced as more immersive and engaging than the music documentary U2 3D. However, participants in both studies were immersed, focused, and absorbed in watching the stereoscopic 3-D (S3-D) film and were pleased with the film environments. The results also showed that previous stereoscopic 3-D experience significantly reduced the amount of reported eye strain and complaints about the weight of the viewing glasses.
Parallax Player: a stereoscopic format converter
NASA Astrophysics Data System (ADS)
Feldman, Mark H.; Lipton, Lenny
2003-05-01
The Parallax Player is a software application that is, in essence, a stereoscopic format converter: various formats may be input and output. In addition to being able to take any one of a wide variety of formats and play them back on many different kinds of PCs and display screens, the Parallax Player has built into it the capability to produce ersatz stereo from a planar still or movie image. The player handles two basic forms of digital content: still images and movies. It is assumed that all data is digital, either created by means of a photographic film process and later digitized, or directly captured or authored in digital form. In its current implementation, running on a number of Windows operating systems, the Parallax Player reads in a broad selection of contemporary file formats.
Hwang, Alex D.; Peli, Eli
2014-01-01
Watching 3D content using a stereoscopic display may cause various discomforting symptoms, including eye strain, blurred vision, double vision, and motion sickness. Numerous studies have reported motion-sickness-like symptoms during stereoscopic viewing, but no causal linkage between specific aspects of the presentation and the induced discomfort has been explicitly proposed. Here, we describe several causes, in which stereoscopic capture, display, and viewing differ from natural viewing resulting in static and, importantly, dynamic distortions that conflict with the expected stability and rigidity of the real world. This analysis provides a basis for suggested changes to display systems that may alleviate the symptoms, and suggestions for future studies to determine the relative contribution of the various effects to the unpleasant symptoms. PMID:26034562
High-speed switchable lens enables the development of a volumetric stereoscopic display
Love, Gordon D.; Hoffman, David M.; Hands, Philip J.W.; Gao, James; Kirby, Andrew K.; Banks, Martin S.
2011-01-01
Stereoscopic displays present different images to the two eyes and thereby create a compelling three-dimensional (3D) sensation. They are being developed for numerous applications including cinema, television, virtual prototyping, and medical imaging. However, stereoscopic displays cause perceptual distortions, performance decrements, and visual fatigue. These problems occur because some of the presented depth cues (i.e., perspective and binocular disparity) specify the intended 3D scene while focus cues (blur and accommodation) specify the fixed distance of the display itself. We have developed a stereoscopic display that circumvents these problems. It consists of a fast switchable lens synchronized to the display such that focus cues are nearly correct. The system has great potential for both basic vision research and display applications. PMID:19724571
Ahn, Dohyun; Seo, Youngnam; Kim, Minkyung; Kwon, Joung Huem; Jung, Younbo; Ahn, Jungsun
2014-01-01
This study examined the role of display size and mode in increasing users' sense of being together with, and their psychological immersion in, a virtual character. Using a high-resolution three-dimensional virtual character, this study employed a 2 × 2 (stereoscopic mode vs. monoscopic mode × actual human size vs. small size display) factorial design in an experiment with 144 participants randomly assigned to each condition. Findings showed that stereoscopic mode had a significant effect on both users' sense of being together and psychological immersion. However, display size affected only the sense of being together. Furthermore, display size was not found to moderate the effect of stereoscopic mode. PMID:24606057
NASA Technical Reports Server (NTRS)
Marzwell, Neville I.; Chen, Alexander Y. K.
1991-01-01
Dexterous coordination of manipulators based on the use of redundant degrees of freedom, multiple sensors, and built-in robot intelligence represents a critical breakthrough in development of advanced manufacturing technology. A cost-effective approach for achieving this new generation of robotics has been made possible by the unprecedented growth of the latest microcomputer and network systems. The resulting flexible automation offers the opportunity to improve the product quality, increase the reliability of the manufacturing process, and augment the production procedures for optimizing the utilization of the robotic system. Moreover, the Advanced Robotic System (ARS) is modular in design and can be upgraded by closely following technological advancements as they occur in various fields. This approach to manufacturing automation enhances the financial justification and ensures the long-term profitability and most efficient implementation of robotic technology. The new system also addresses a broad spectrum of manufacturing demand and has the potential to address both complex jobs as well as highly labor-intensive tasks. The ARS prototype employs the decomposed optimization technique in spatial planning. This technique is implemented to the framework of the sensor-actuator network to establish the general-purpose geometric reasoning system. The development computer system is a multiple microcomputer network system, which provides the architecture for executing the modular network computing algorithms. The knowledge-based approach used in both the robot vision subsystem and the manipulation control subsystems results in the real-time image processing vision-based capability. The vision-based task environment analysis capability and the responsive motion capability are under the command of the local intelligence centers. An array of ultrasonic, proximity, and optoelectronic sensors is used for path planning. 
The ARS currently has 18 degrees of freedom, provided by two articulated arms, one movable robot head, two charge-coupled device (CCD) cameras for producing the stereoscopic views, an articulated cylindrical-type lower body, and an optional mobile base. A functional prototype is demonstrated.
Bornhoft, J M; Strabala, K W; Wortman, T D; Lehman, A C; Oleynikov, D; Farritor, S M
2011-01-01
The objective of this research is to study the effectiveness of using a stereoscopic visualization system for performing remote surgery. The use of stereoscopic vision has become common with the advent of the da Vinci® system (Intuitive, Sunnyvale, CA). This system creates a virtual environment that consists of a 3-D display for visual feedback and haptic tactile feedback, together providing an intuitive environment for remote surgical applications. This study will use simple in vivo robotic surgical devices and compare the performance of surgeons using the stereoscopic interfacing system to the performance of surgeons using two-dimensional monitors. The stereoscopic viewing system consists of two cameras, two monitors, and four mirrors. The cameras are mounted to a multi-functional miniature in vivo robot and mimic the depth perception of the actual human eyes. This is done by placing the cameras at a calculated angle and distance apart. Live video streams from the left and right cameras are displayed on the left and right monitors, respectively. A system of angled mirrors allows the left and right eyes to see the video stream from the left and right monitor, respectively, creating the illusion of depth. The haptic interface consists of two PHANTOM Omni® (SensAble, Woburn, MA) controllers. These controllers measure the position and orientation of a pen-like end effector with three degrees of freedom. As the surgeon uses this interface, they see a 3-D image and feel force feedback for collisions and workspace limits. The stereoscopic viewing system has been used in several surgical training tests and shows a potential improvement in depth perception and 3-D vision. The haptic system accurately gives force feedback that aids in surgery. Both have been used in non-survival animal surgeries, and have successfully been used in suturing and gallbladder removal. Bench-top experiments using the interfacing system have also been conducted.
A group of participants completed two different surgical training tasks using both a two-dimensional visual system and the stereoscopic visual system. Results suggest that the stereoscopic visual system decreased the amount of time taken to complete the tasks. All participants also reported that the stereoscopic system was easier to use than the two-dimensional system. Haptic controllers combined with stereoscopic vision provide a more intuitive virtual environment. This system provides the surgeon with 3-D vision, depth perception, and the ability to receive feedback through forces applied in the haptic controller while performing surgery. These capabilities potentially enable the performance of more complex surgeries with a higher level of precision.
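The depth perception such calibrated camera pairs exploit rests on the textbook pinhole-stereo relation Z = f·B/d, where f is the focal length in pixels, B the baseline between the cameras, and d the disparity in pixels. A minimal sketch (parameter names and values are illustrative, not the cited system's calibration):

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Depth of a scene point from a rectified stereo pair:
    Z = f * B / d (pinhole model; result is in mm if baseline is in mm)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_mm / disparity_px

# e.g., a 700 px focal length and 60 mm baseline with 35 px of disparity
# place the point 1200 mm from the cameras.
z_mm = depth_from_disparity(700.0, 60.0, 35.0)
```

The inverse dependence on disparity is why depth resolution degrades rapidly for distant objects.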
Stereoscopic Imaging in Hypersonic Boundary Layers using Planar Laser-Induced Fluorescence
NASA Technical Reports Server (NTRS)
Danehy, Paul M.; Bathel, Brett; Inman, Jennifer A.; Alderfer, David W.; Jones, Stephen B.
2008-01-01
Stereoscopic time-resolved visualization of three-dimensional structures in a hypersonic flow has been performed for the first time. Nitric oxide (NO) was seeded into hypersonic boundary layer flows that were designed to transition from laminar to turbulent. A thick laser sheet illuminated and excited the NO, causing spatially varying fluorescence. Two cameras in a stereoscopic configuration were used to image the fluorescence. The images were processed in a computer visualization environment to provide stereoscopic image pairs. Two methods were used to display these image pairs: a cross-eyed viewing method, which can be viewed with the naked eye, and red/blue anaglyphs, which require viewing through red/blue glasses. The images visualized three-dimensional information that would be lost if conventional planar laser-induced fluorescence imaging had been used. Two model configurations were studied in NASA Langley Research Center's 31-Inch Mach 10 Air Wind Tunnel. One model was a 10 degree half-angle wedge containing a small protuberance to force the flow to transition. The other model was a 1/3-scale, truncated Hyper-X forebody model with blowing through a series of holes to force the boundary layer flow to transition to turbulence. In the former case, low flow rates of pure NO seeded and marked the boundary layer fluid. In the latter, a trace concentration of NO was seeded into the injected N2 gas. The three-dimensional visualizations have an effective time resolution of about 500 ns, which is fast enough to freeze this hypersonic flow. The 512x512 resolution of the resulting images is much higher than that of high-speed laser-sheet scanning systems with similar time response, which typically measure 10-20 planes.
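The red/blue anaglyph display method mentioned above amounts to a per-pixel channel merge: red from the left image, green and blue from the right, so that filtered glasses route each view to the correct eye. This is one common convention and may differ from the paper's exact channel assignment; images here are nested lists of (R, G, B) tuples for illustration.

```python
def red_blue_anaglyph(left_rgb, right_rgb):
    """Merge a stereo pair into an anaglyph image: the red channel
    comes from the left view, green and blue from the right view.
    Both inputs are rows of (R, G, B) tuples with identical dimensions."""
    return [
        [(lp[0], rp[1], rp[2]) for lp, rp in zip(lrow, rrow)]
        for lrow, rrow in zip(left_rgb, right_rgb)
    ]
```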
Recent developments in stereoscopic and holographic 3D display technologies
NASA Astrophysics Data System (ADS)
Sarma, Kalluri
2014-06-01
Currently, there is increasing interest in the development of high performance 3D display technologies to support a variety of applications including medical imaging, scientific visualization, gaming, education, entertainment, air traffic control and remote operations in 3D environments. In this paper we will review the attributes of the various 3D display technologies including stereoscopic and holographic 3D, human factors issues of stereoscopic 3D, the challenges in realizing Holographic 3D displays and the recent progress in these technologies.
A 3-D mixed-reality system for stereoscopic visualization of medical dataset.
Ferrari, Vincenzo; Megali, Giuseppe; Troia, Elena; Pietrabissa, Andrea; Mosca, Franco
2009-11-01
We developed a simple, light, and cheap 3-D visualization device based on mixed reality that can be used by physicians to see preoperative radiological exams in a natural way. The system allows the user to see stereoscopic "augmented images," which are created by mixing 3-D virtual models of anatomies, obtained by processing preoperative volumetric radiological images (computed tomography or MRI), with live images of the patient grabbed by means of cameras. The interface of the system consists of a head-mounted display equipped with two high-definition cameras. The cameras are mounted in correspondence with the user's eyes and grab live images of the patient from the same point of view as the user. The system does not use any external tracker to detect movements of the user or the patient. Tracking of the user's head movements and alignment of the virtual patient with the real one are done using machine vision methods applied to pairs of live images. Experimental results, concerning frame rate and alignment precision between the virtual and real patient, demonstrate that the machine vision methods used for localization are appropriate for the specific application and that systems based on stereoscopic mixed reality are feasible and can be proficiently adopted in clinical practice.
Virtual and stereoscopic anatomy: when virtual reality meets medical education.
de Faria, Jose Weber Vieira; Teixeira, Manoel Jacobsen; de Moura Sousa Júnior, Leonardo; Otoch, Jose Pinhata; Figueiredo, Eberval Gadelha
2016-11-01
OBJECTIVE The authors sought to construct, implement, and evaluate an interactive and stereoscopic resource for teaching neuroanatomy, accessible from personal computers. METHODS Forty fresh brains (80 hemispheres) were dissected. Images of areas of interest were captured using a manual turntable and processed and stored in a 5337-image database. Pedagogic evaluation was performed with 84 graduate medical students, divided into 3 groups: 1 (conventional method), 2 (interactive nonstereoscopic), and 3 (interactive and stereoscopic). The method was evaluated through a written theory test and a lab practicum. RESULTS Groups 2 and 3 showed the highest mean scores in pedagogic evaluations and differed significantly from Group 1 (p < 0.05). Group 2 did not differ statistically from Group 3 (p > 0.05). Effect sizes, measured as differences in scores before and after lectures, indicate the effectiveness of the method. ANOVA results showed a significant difference (p < 0.05) between groups, and the Tukey test showed statistical differences between Group 1 and the other 2 groups (p < 0.05). No statistical differences between Groups 2 and 3 were found in the practicum. However, there were significant differences when Groups 2 and 3 were compared with Group 1 (p < 0.05). CONCLUSIONS The authors conclude that this method promoted further improvement in knowledge for students and fostered significantly higher learning when compared with traditional teaching resources.
Enduring stereoscopic motion aftereffects induced by prolonged adaptation.
Bowd, C; Rose, D; Phinney, R E; Patterson, R
1996-11-01
This study investigated the effects of prolonged adaptation on the recovery of the stereoscopic motion aftereffect (adaptation induced by moving binocular disparity information). The adapting and test stimuli were stereoscopic grating patterns created from disparity, embedded in dynamic random-dot stereograms. Motion aftereffects induced by luminance stimuli were included in the study for comparison. Adaptation duration was either 1, 2, 4, 8, 16, 32 or 64 min and the duration of the ensuing aftereffect was the variable of interest. The results showed that aftereffect duration was proportional to the square root of adaptation duration for both stereoscopic and luminance stimuli; on log-log axes, the relation between aftereffect duration and adaptation duration was a power law with the slope near 0.5 in both cases. For both kinds of stimuli, there was no sign of adaptation saturation even at the longest adaptation duration.
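The reported square-root relation can be checked numerically: on log-log axes a power law y = k·x^p becomes a straight line with slope p, so a degree-1 fit to the logged data recovers the exponent. The aftereffect durations below are synthetic, generated from the reported exponent of 0.5 with an invented constant k; they are not the study's data.

```python
import numpy as np

# Adaptation durations (min) used in the study
adapt = np.array([1, 2, 4, 8, 16, 32, 64], dtype=float)

# Illustrative aftereffect durations following the reported square-root law,
# duration = k * adapt**0.5 (k is an arbitrary constant, not from the paper)
k = 3.0
aftereffect = k * np.sqrt(adapt)

# On log-log axes a power law is a straight line whose slope is the exponent
slope, intercept = np.polyfit(np.log(adapt), np.log(aftereffect), 1)
print(round(slope, 3))  # → 0.5
```

With real, noisy data the same fit would yield a slope near 0.5 rather than exactly 0.5, which is what the study reports for both stereoscopic and luminance stimuli.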
Stereoscopic observations from meteorological satellites
NASA Astrophysics Data System (ADS)
Hasler, A. F.; Mack, R.; Negri, A.
The capability of making stereoscopic observations of clouds from meteorological satellites is a new basic analysis tool with a broad spectrum of applications. Stereoscopic observations from satellites were first made using the early vidicon tube weather satellites (e.g., Ondrejka and Conover [1]). However, the only high-quality meteorological stereoscopy from low orbit has been done from Apollo and Skylab (e.g., Shenk et al. [2] and Black [3], [4]). Stereoscopy from geosynchronous satellites was proposed by Shenk [5] and Bristor and Pichel [6] in 1974, which allowed Minzner et al. [7] to demonstrate the first quantitative cloud height analysis. In 1978 Bryson [8] and desJardins [9] independently developed digital processing techniques to remap stereo images, which made possible precision height measurement and spectacular display of stereograms (Hasler et al. [10], and Hasler [11]). In 1980 the Japanese Geosynchronous Satellite (GMS) and the U.S. GOES-West satellite were synchronized to obtain stereo over the central Pacific, as described by Fujita and Dodge [12] and in this paper. Recently the authors have remapped images from a Low Earth Orbiter (LEO) to the coordinate system of a Geosynchronous Earth Orbiter (GEO) and obtained stereoscopic cloud height measurements which promise to have quality comparable to previous all-GEO stereo. It has also been determined that the north-south imaging scan rate of some GEOs can be slowed or reversed. Therefore the feasibility of obtaining stereoscopic observations worldwide from combinations of operational GEO and LEO satellites has been demonstrated. Stereoscopy from satellites has many advantages over infrared techniques for the observation of cloud structure because it depends only on basic geometric relationships.
Digital remapping of GEO and LEO satellite images is imperative for precision stereo height measurement and high-quality displays because of the curvature of the earth and the large angular separation of the two satellites. A general solution for accurate height computation depends on precise navigation of the two satellites. Validation of the geosynchronous satellite stereo using high-altitude mountain lakes and vertically pointing aircraft lidar leads to a height accuracy estimate of +/- 500 m for the typical clouds which have been studied. Applications of the satellite stereo include: 1) cloud top and base height measurements, 2) cloud-wind height assignment, 3) vertical motion estimates for convective clouds (Mack et al. [13], [14]), 4) temperature vs. height measurements when stereo is used together with infrared observations, and 5) cloud emissivity measurements when stereo, infrared, and temperature sounding are used together (see Szejwach et al. [15]). When true satellite stereo image pairs are not available, synthetic stereo may be generated. The combination of multispectral satellite data using computer-produced stereo image pairs is a dramatic example of synthetic stereoscopic display. The classic case uses the combination of infrared and visible data, as first demonstrated by Pichel et al. [16]. Hasler et al. [17], Mosher and Young [18], and Lorenz [19] have expanded this concept to display many channels of data from various radiometers as well as real and simulated data fields. A future system of stereoscopic satellites would comprise both low orbiters (as suggested by Lorenz and Schmidt [20], [19]) and a global system of geosynchronous satellites. The low earth orbiters would provide stereo coverage day and night and include the poles.
An optimum global system of stereoscopic geosynchronous satellites would require international standardization of scan rate, scan direction, and scan times (synchronization), as well as a resolution of at least 1 km in all imaging channels. A stereoscopic satellite system as suggested here would make an extremely important contribution to the understanding and prediction of the atmosphere.
Stereoscopic Viewing Can Induce Changes in the CA/C Ratio.
Neveu, Pascaline; Roumes, Corinne; Philippe, Matthieu; Fuchs, Philippe; Priot, Anne-Emmanuelle
2016-08-01
Stereoscopic displays challenge the neural cross-coupling between accommodation and vergence by inducing a constant accommodative demand and a varying vergence demand. Stereoscopic viewing calls for a decrease in the gain of vergence accommodation, which is the accommodation caused by vergence, quantified by using the convergence-accommodation to convergence (CA/C) ratio. However, its adaptability is still a subject of debate. Cross-coupling (CA/C and AC/A ratios) and tonic components of vergence and accommodation were assessed in 12 participants (27.5 ± 5 years, stereoacuity better than 60 arc seconds, 6/6 acuity with corrected refractive error) before and after a 20-minute exposure to stereoscopic viewing. During stimulation, vergence demand oscillated from 1 to 3 meter angles along a virtual sagittal line in sinusoidal movements, while accommodative demand was fixed at 1.5 diopters. Results showed a decreased CA/C ratio (-10.36%, df = 10, t = 2.835, P = 0.018), with no change in the AC/A ratio (P = 0.090), tonic vergence (P = 0.708), and tonic accommodation (P = 0.493). These findings demonstrated that the CA/C ratio can exhibit adaptive adjustments. The observed nature and amount of the oculomotor modification failed to compensate for the stereoscopic constraint.
Stereoscopy in Astronomical Visualizations to Support Learning at Informal Education Settings
NASA Astrophysics Data System (ADS)
Price, Aaron; Lee, Hee-Sun
2015-08-01
Stereoscopy has been used in science education for 100 years. Recent innovations in low-cost technology as well as trends in the entertainment industry have made stereoscopy popular among educators and audiences alike. However, experimental studies addressing whether stereoscopy actually impacts science learning are limited. Over the last decade, we have conducted a series of quasi-experimental and experimental studies on how children and adult visitors in science museums and planetariums learned about the structure and function of highly spatial scientific objects such as galaxies and supernovae. We present a synthesis of the results from these studies and implications for stereoscopic visualization development. The overall finding is that the impact of stereoscopy on perceptions of scientific objects is limited when presented as static imagery. However, when presented as full-motion films, a significantly positive impact was detected. To conclude, we present a set of stereoscopic design principles that can help design astronomical stereoscopic films that support deep and effective learning. Our studies cover astronomical content such as the engineering of and imagery from the Mars rovers, artistic stereoscopic imagery of nebulae, and a high-resolution stereoscopic film about how astronomers measure and model the structure of our galaxy.
Change Blindness Phenomena for Virtual Reality Display Systems.
Steinicke, Frank; Bruder, Gerd; Hinrichs, Klaus; Willemsen, Pete
2011-09-01
In visual perception, change blindness describes the phenomenon that persons viewing a visual scene may apparently fail to detect significant changes in that scene. These phenomena have been observed in both computer-generated imagery and real-world scenes. Several studies have demonstrated that change blindness effects occur primarily during visual disruptions such as blinks or saccadic eye movements. However, until now the influence of stereoscopic vision on change blindness has not been studied thoroughly in the context of visual perception research. In this paper, we introduce change blindness techniques for stereoscopic virtual reality (VR) systems, providing the ability to substantially modify a virtual scene in a manner that is difficult for observers to perceive. We evaluate techniques for semi-immersive VR systems, i.e., a passive and active stereoscopic projection system, as well as an immersive VR system, i.e., a head-mounted display, and compare the results to those of monoscopic viewing conditions. For stereoscopic viewing conditions, we found that change blindness phenomena occur with the same magnitude as in monoscopic viewing conditions. Furthermore, we have evaluated the potential of the presented techniques for allowing abrupt, and yet significant, changes of a stereoscopically displayed virtual reality environment.
Ghosting in anaglyphic stereoscopic images
NASA Astrophysics Data System (ADS)
Woods, Andrew J.; Rourke, Tegan
2004-05-01
Anaglyphic 3D images are an easy way of displaying stereoscopic 3D images on a wide range of display types, e.g. CRT, LCD, print, etc. While the anaglyphic 3D image method is cheap and accessible, its use requires a compromise in stereoscopic image quality. A common problem with anaglyphic 3D images is ghosting. Ghosting (or crosstalk) is the leaking of an image to one eye, when it is intended exclusively for the other eye. Ghosting degrades the ability of the observer to fuse the stereoscopic image and hence the quality of the 3D image is reduced. Ghosting is present in various levels with most stereoscopic displays, however it is often particularly evident with anaglyphic 3D images. This paper describes a project whose aim was to characterize the presence of ghosting in anaglyphic 3D images due to spectral issues. The spectral response curves of several different display types and several different brands of anaglyph glasses were measured using a spectroradiometer or spectrophotometer. A mathematical model was then developed to predict the amount of crosstalk in anaglyphic 3D images when different combinations of displays and glasses are used, and therefore predict the best type of anaglyph glasses for use with a particular display type.
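The kind of spectral model described, predicting crosstalk from measured display emission and filter transmission curves, can be sketched as a ratio of two spectral overlaps: the light a display channel leaks through the opposite eye's filter versus the light it passes through the intended filter. This is a generic simplification, not necessarily the paper's exact formulation, and the spectra below are invented for illustration.

```python
import numpy as np

def predicted_crosstalk(emission, t_intended, t_opposite):
    """Predict crosstalk for one eye from sampled spectral curves.

    emission   : display channel spectral emission, sampled on a uniform
                 wavelength grid
    t_intended : transmission of the filter meant to pass this channel
    t_opposite : transmission of the other eye's filter
    Returns the leakage through the wrong filter relative to the light
    passed by the intended filter (the grid spacing cancels in the ratio).
    """
    intended = float(np.sum(emission * t_intended))
    leakage = float(np.sum(emission * t_opposite))
    return leakage / intended

# Invented example: a red display channel, red-pass and blue-pass filters
wavelengths = np.linspace(400, 700, 301)                    # nm
red_emission = np.exp(-((wavelengths - 620) / 25) ** 2)     # channel spectrum
red_filter = 1 / (1 + np.exp(-(wavelengths - 570) / 10))    # passes long wavelengths
blue_filter = 1 / (1 + np.exp((wavelengths - 500) / 10))    # passes short wavelengths
print(predicted_crosstalk(red_emission, red_filter, blue_filter))
```

With measured display and glasses spectra substituted for the invented curves, the same ratio lets different display/glasses combinations be ranked by predicted crosstalk, which is the paper's stated goal.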
NASA Astrophysics Data System (ADS)
Starks, Michael R.
1990-09-01
A variety of low-cost devices for capturing, editing, and displaying field-sequential 60-cycle stereoscopic video have recently been marketed by 3D TV Corp. and others. When properly used, they give very high quality images with most consumer and professional equipment. Our stereoscopic multiplexers for creating and editing field-sequential video in NTSC or component formats (S-VHS, Betacam, RGB), and our Home 3D Theater system employing LCD eyeglasses, have made 3D movies and television available to a large audience.
The compressed average image intensity metric for stereoscopic video quality assessment
NASA Astrophysics Data System (ADS)
Wilczewski, Grzegorz
2016-09-01
This article describes the design, creation, and testing of a new metric for 3DTV video quality evaluation. The Compressed Average Image Intensity (CAII) mechanism is based on stereoscopic video content analysis and is intended as a versatile tool for effective 3DTV service quality assessment. As an objective quality metric, it can serve as a reliable source of information about the actual performance of a given 3DTV system under a provider's evaluation. To test and analyze the overall performance of the CAII metric, the paper presents a comprehensive study of results gathered across several testing routines on a selected set of stereoscopic video samples. The method is evaluated across a range of synthetic visual impairments injected into the original video stream.
Analysis on the 3D crosstalk in stereoscopic display
NASA Astrophysics Data System (ADS)
Choi, Hee-Jin
2010-11-01
Nowadays, with rapid progress in flat panel display (FPD) technologies, the three-dimensional (3D) display is becoming the next mainstream of the display market. Among the various 3D display techniques, the stereoscopic 3D display shows different left/right images to each eye of the observer using special glasses and is the most popular 3D technique, with the advantages of low price and high 3D resolution. However, current stereoscopic 3D displays suffer from 3D crosstalk, the interference between the left-eye and right-eye images, which severely degrades the quality of the 3D image. In this paper, the meaning and causes of 3D crosstalk in stereoscopic 3D displays are introduced, and previously proposed methods of 3D crosstalk measurement in vision science are reviewed. Based on these, the threshold of 3D crosstalk for realizing a 3D display with no degradation is analyzed.
López, David; Oehlberg, Lora; Doger, Candemir; Isenberg, Tobias
2016-05-01
We discuss touch-based navigation of 3D visualizations in a combined monoscopic and stereoscopic viewing environment. We identify a set of interaction modes, and a workflow that helps users transition between these modes to improve their interaction experience. In our discussion we analyze, in particular, the control-display space mapping between the different reference frames of the stereoscopic and monoscopic displays. We show how this mapping supports interactive data exploration, but may also lead to conflicts between the stereoscopic and monoscopic views due to users' movement in space; we resolve these problems through synchronization. To support our discussion, we present results from an exploratory observational evaluation with domain experts in fluid mechanics and structural biology. These experts explored domain-specific datasets using variations of a system that embodies the interaction modes and workflows; we report on their interactions and qualitative feedback on the system and its workflow.
NASA Astrophysics Data System (ADS)
Song, Weitao; Weng, Dongdong; Feng, Dan; Li, Yuqian; Liu, Yue; Wang, Yongtian
2015-05-01
As one of the popular immersive Virtual Reality (VR) systems, the stereoscopic cave automatic virtual environment (CAVE) system typically consists of 4 to 6 3m-by-3m sides of a room made of rear-projected screens. While many endeavors have been made to reduce the size of the projection-based CAVE system, the issue of asthenopia caused by lengthy exposure to stereoscopic images in such a CAVE, with its close viewing distance, has seldom been tackled. In this paper, we propose a lightweight approach that utilizes a convex eyepiece to reduce visual discomfort induced by stereoscopic vision. An empirical experiment was conducted to examine the feasibility of the convex eyepiece over a large depth of field (DOF) at close viewing distance, both objectively and subjectively. The results show the positive effect of the convex eyepiece on the relief of eyestrain.
Kutsuna, Kenichiro; Matsuura, Yasuyuki; Fujikake, Kazuhiro; Miyao, Masaru; Takada, Hiroki
2013-01-01
Visually induced motion sickness (VIMS) is caused by sensory conflict, the disagreement between vergence and visual accommodation while observing stereoscopic images. VIMS can be measured by psychological and physiological methods. We propose a mathematical methodology to measure the effect of three-dimensional (3D) images on the equilibrium function. In this study, body sway in the resting state is compared with that during exposure to 3D video clips on a liquid crystal display (LCD) and on a head-mounted display (HMD). In addition, the Simulator Sickness Questionnaire (SSQ) was completed immediately afterward. Based on statistical analysis of the SSQ subscores and each index of the stabilograms, we succeeded in quantifying the VIMS during exposure to the stereoscopic images. Moreover, we discuss the changes in the potential functions that control the standing posture during exposure to stereoscopic video clips.
Effect of the accommodation-vergence conflict on vergence eye movements.
Vienne, Cyril; Sorin, Laurent; Blondé, Laurent; Huynh-Thu, Quan; Mamassian, Pascal
2014-07-01
With the broader use of stereoscopic displays, a flurry of research activity about the accommodation-vergence conflict has emerged to highlight the implications for the human visual system. In stereoscopic displays, the introduction of binocular disparities requires the eyes to make vergence movements. In this study, we examined vergence dynamics with regard to the conflict between the stimulus to accommodation and the stimulus to vergence. In a first experiment, we evaluated the immediate effect of the conflict on vergence responses by presenting stimuli with conflicting disparity and focus on a stereoscopic display (i.e., increasing the stereoscopic demand) or by presenting stimuli with matched disparity and focus using an arrangement of displays and a beam splitter (i.e., focus and disparity specifying the same locations). We found that the dynamics of vergence responses were slower overall in the first case due to the conflict between accommodation and vergence. In a second experiment, we examined the effect of prolonged exposure to the accommodation-vergence conflict on vergence responses, in which participants judged whether an oscillating depth pattern was in front of or behind the fixation plane. An increase in peak velocity was observed, suggesting that the vergence system had adapted to the stereoscopic demand. A slight increase in vergence latency was also observed, indicating a small decline in vergence performance. These findings document how the vergence system behaves with stereoscopic displays. We describe which stimuli in stereo-movies might produce these oculomotor effects and discuss potential applications. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor)
1991-01-01
Stereoscopic viewing systems having converged or parallel cameras may be set up to reduce or eliminate erroneously perceived accelerations and decelerations by proper selection of parameters, such as the image magnification factor, q, and the intercamera distance, 2w. For converged cameras, q is selected so that Ve - qwl = 0, i.e., q = Ve/(wl), where V is the camera distance, e is half the interocular distance of an observer, w is half the intercamera distance, and l is the actual distance from the first nodal point of each camera to the convergence point; for parallel cameras, q is selected to be equal to e/w. While converged cameras cannot be set up to provide fully undistorted three-dimensional views, they can be set up to provide a linear relationship between real and apparent depth and thus minimize erroneously perceived accelerations and decelerations for three sagittal planes, x = -w, x = 0, and x = +w, which are indicated to the observer. Parallel cameras can be set up to provide fully undistorted three-dimensional views by controlling the location of the observer and by magnification and shifting of the left and right images. In addition, the teachings of this disclosure can be used to provide methods of stereoscopic image presentation and stereoscopic camera configurations that produce a nonlinear relation between perceived and real depth, and erroneously produce or enhance perceived accelerations and decelerations in order to provide special effects for entertainment, training, or educational purposes.
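The two parameter selections quoted in this patent abstract can be expressed directly. The sketch below only restates the abstract's conditions; the function names and the example numbers are ours, and the full patent imposes further constraints not captured here.

```python
def q_converged(V, e, w, l):
    """Magnification q for converged cameras, from the condition V*e - q*w*l = 0,
    i.e., q = V*e / (w*l)."""
    return V * e / (w * l)

def q_parallel(e, w):
    """Magnification q for parallel cameras: q = e / w."""
    return e / w

# Illustrative numbers in consistent length units (cm):
# half interocular distance e = 3.25, half intercamera distance w = 6.5,
# camera distance V = 100, distance to convergence point l = 100
print(q_parallel(3.25, 6.5))             # → 0.5
print(q_converged(100, 3.25, 6.5, 100))  # → 0.5
```

Note that with V = l the converged-camera condition reduces to the parallel-camera one, q = e/w, which is why both calls above return the same magnification.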
Parallel-Processing Software for Correlating Stereo Images
NASA Technical Reports Server (NTRS)
Klimeck, Gerhard; Deen, Robert; Mcauley, Michael; DeJong, Eric
2007-01-01
A computer program implements parallel-processing algorithms for correlating images of terrain acquired by stereoscopic pairs of digital stereo cameras on an exploratory robotic vehicle (e.g., a Mars rover). Such correlations are used to create three-dimensional computational models of the terrain for navigation. In this program, the scene viewed by the cameras is segmented into subimages. Each subimage is assigned to one of a number of central processing units (CPUs) operating simultaneously.
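The subimage decomposition described above can be sketched with a thread pool standing in for the multiple CPUs. The SAD-style disparity search here is a toy stand-in for the program's actual correlator, and all names are ours.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def correlate_subimage(left, right, max_disp):
    """Toy disparity search: best horizontal shift by sum of absolute differences."""
    scores = [np.abs(np.roll(left, d, axis=1) - right).mean()
              for d in range(max_disp + 1)]
    return int(np.argmin(scores))

def correlate_scene(left_img, right_img, tile=64, max_disp=8, workers=4):
    """Segment the scene into subimages and correlate them concurrently,
    one worker per subimage, mirroring the one-subimage-per-CPU assignment."""
    h, w = left_img.shape
    tiles = [(left_img[y:y + tile, x:x + tile],
              right_img[y:y + tile, x:x + tile])
             for y in range(0, h, tile) for x in range(0, w, tile)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(correlate_subimage, l, r, max_disp)
                   for l, r in tiles]
        return [f.result() for f in futures]
```

Each subimage's disparity estimate is independent of the others, which is what makes the per-subimage decomposition embarrassingly parallel.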
Use of camera drive in stereoscopic display of learning contents of introductory physics
NASA Astrophysics Data System (ADS)
Matsuura, Shu
2011-03-01
Simple 3D physics simulations with stereoscopic display were created for a part of introductory physics e-Learning. First, the cameras viewing the 3D world were made controllable by the user, enabling observation of the system and the motions of objects from any position in the 3D world. Second, cameras could be attached to one of the moving objects in the simulation so as to observe the relative motion of the other objects. With this option, it was found that users perceive velocity and acceleration more sensibly on a stereoscopic display than on a non-stereoscopic 3D display. Simulations were made using Adobe Flash ActionScript, and the Papervision3D library was used to render the 3D models in the Flash web pages. To display the stereogram, two viewports from virtual cameras were displayed in parallel in the same web page. For observation of the stereogram, the images of the two viewports were superimposed using a 3D stereogram projection box (T&TS CO., LTD.) and projected on an 80-inch screen. The virtual cameras were controlled by keyboard and also by Nintendo Wii remote controller buttons. In conclusion, stereoscopic display offers learners more opportunities to play with the simulated models and to better perceive the characteristics of motion.
Stereoscopic vascular models of the head and neck: A computed tomography angiography visualization.
Cui, Dongmei; Lynch, James C; Smith, Andrew D; Wilson, Timothy D; Lehman, Michael N
2016-01-01
Computer-assisted 3D models are used in some medical and allied health science schools; however, they are often limited to online use and 2D flat-screen-based imaging. Few schools take advantage of 3D stereoscopic learning tools or include clinically relevant anatomical variations when teaching anatomy. A new approach to teaching anatomy includes the use of computed tomography angiography (CTA) images of the head and neck to create clinically relevant 3D stereoscopic virtual models. These high-resolution images of the arteries can be used in unique and innovative ways to create 3D virtual models of the vasculature as a tool for teaching anatomy. Blood vessel 3D models are presented stereoscopically in a virtual reality environment, can be rotated 360° in all axes, and can be magnified according to need. In addition, flexible views of internal structures are possible. Images are displayed in a stereoscopic mode, and students view images in a small theater-like classroom while wearing polarized 3D glasses. Reconstructed 3D models enable students to visualize vascular structures with clinically relevant anatomical variations in the head and neck and to appreciate spatial relationships among the blood vessels, the skull, and the skin. © 2015 American Association of Anatomists.
Programming standards for effective S-3D game development
NASA Astrophysics Data System (ADS)
Schneider, Neil; Matveev, Alexander
2008-02-01
When a video game is in development, more often than not it is being rendered in three dimensions, complete with volumetric depth. It is the PC monitor that takes this three-dimensional information and artificially displays it in a flat, two-dimensional format. Stereoscopic drivers take the three-dimensional information captured from DirectX and OpenGL calls and properly display it with a unique left- and right-side view for each eye so that a proper stereoscopic 3D image can be seen by the gamer. The two-dimensional limitation of how information is displayed on screen has encouraged programming shortcuts and workarounds that stifle this stereoscopic 3D effect, and the purpose of this guide is to outline techniques to get the best of both worlds. While the programming requirements do not significantly add to the game development time, following these guidelines will greatly enhance your customer's stereoscopic 3D experience, increase your likelihood of earning Meant to be Seen certification, and give you instant cost-free access to the industry's most valued consumer base. While this outline is mostly based on NVIDIA's programming guide and iZ3D resources, it is designed to work with all stereoscopic 3D hardware solutions and is not proprietary in any way.
Characterization of crosstalk in stereoscopic display devices.
Zafar, Fahad; Badano, Aldo
2014-12-01
Many different types of stereoscopic display devices are used for commercial and research applications. Stereoscopic displays offer the potential to improve performance in detection tasks for medical imaging diagnostic systems. Due to the variety of stereoscopic display technologies, it remains unclear how these compare with each other for detection and estimation tasks. Different stereo devices have different performance trade-offs due to their display characteristics. Among these characteristics, crosstalk is known to affect observer perception of 3D content and might affect detection performance. We measured and report the detailed luminance output and crosstalk characteristics of three different types of stereoscopic display devices. We also recorded the effects of other factors, such as viewing angle, different eyewear, and screen location, on the measured luminance profiles. Our results show that the crosstalk signature for viewing 3D content can vary considerably when different types of 3D glasses are used with active stereo displays. We also show that significant differences are present in crosstalk signatures when varying the viewing angle from 0 degrees to 20 degrees for a stereo mirror 3D display device. Our detailed characterization can help emulate the effect of crosstalk in computational observer image quality assessments that minimize costly and time-consuming human reader studies.
Liluashvili, Vaja; Kalayci, Selim; Fluder, Eugene; Wilson, Manda; Gabow, Aaron; Gümüs, Zeynep H
2017-08-01
Visualizations of biomolecular networks assist in systems-level data exploration in many cellular processes. Data generated from high-throughput experiments increasingly inform these networks, yet current tools do not adequately scale with concomitant increase in their size and complexity. We present an open source software platform, interactome-CAVE (iCAVE), for visualizing large and complex biomolecular interaction networks in 3D. Users can explore networks (i) in 3D using a desktop, (ii) in stereoscopic 3D using 3D-vision glasses and a desktop, or (iii) in immersive 3D within a CAVE environment. iCAVE introduces 3D extensions of known 2D network layout, clustering, and edge-bundling algorithms, as well as new 3D network layout algorithms. Furthermore, users can simultaneously query several built-in databases within iCAVE for network generation or visualize their own networks (e.g., disease, drug, protein, metabolite). iCAVE has modular structure that allows rapid development by addition of algorithms, datasets, or features without affecting other parts of the code. Overall, iCAVE is the first freely available open source tool that enables 3D (optionally stereoscopic or immersive) visualizations of complex, dense, or multi-layered biomolecular networks. While primarily designed for researchers utilizing biomolecular networks, iCAVE can assist researchers in any field. © The Authors 2017. Published by Oxford University Press.
Introducing a Public Stereoscopic 3D High Dynamic Range (SHDR) Video Database
NASA Astrophysics Data System (ADS)
Banitalebi-Dehkordi, Amin
2017-03-01
High dynamic range (HDR) displays and cameras are paving their way into the consumer market at a rapid growth rate. Thanks to TV and camera manufacturers, HDR systems are now becoming commercially available to end users. This is taking place only a few years after the blooming of 3D video technologies. MPEG/ITU are also actively working towards the standardization of these technologies. However, preliminary research efforts in these video technologies are hampered by the lack of sufficient experimental data. In this paper, we introduce a stereoscopic 3D HDR database of videos that is made publicly available to the research community. We explain the procedure taken to capture, calibrate, and post-process the videos. In addition, we provide insights on potential use cases, challenges, and research opportunities implied by the combination of the higher dynamic range of the HDR aspect and the depth impression of the 3D aspect.
The use of stereoscopic satellite observation in the determination of the emissivity of cirrus
NASA Astrophysics Data System (ADS)
Szejwach, G.; Sletten, T. N.; Hasler, A. F.
The feasibility of determining cirrus "emissivity" from combined stereoscopic and infrared satellite observations in conjunction with radiosounding data is investigated for a particular case study. Simultaneous visible images obtained during SESAME-1979 from two geosynchronous GOES meteorological satellites were processed on the NASA/Goddard interactive system (AOIPS) and were used to determine the stereo cloud-top height ZC as described by Hasler [1]. Iso-contours of radiance were outlined on the corresponding infrared image. Total brightness temperature TB and ground surface brightness temperature TS were inferred from the radiances. The special SESAME network of radiosoundings was used to determine the cloud-top temperature TCLD at the level defined by ZC. The "effective cirrus emissivity" NE (where N is the fractional cirrus cloudiness and E is the emissivity within a GOES infrared picture element of about 10 km × 10 km) is then computed from TB, TS, and TCLD.
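The retrieval described above can be sketched numerically: treating the observed radiance as the mixture NE·B(TCLD) + (1 - NE)·B(TS), the effective emissivity follows from the three brightness temperatures. The Python snippet below is a minimal illustration under that standard assumption; the 11 µm window wavelength and the temperature values are illustrative, not taken from the paper.

```python
import math

# Physical constants (SI units)
H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def planck(lam, T):
    """Planck spectral radiance at wavelength lam (m) and temperature T (K)."""
    return (2.0 * H * C**2 / lam**5) / math.expm1(H * C / (lam * KB * T))

def effective_emissivity(tb, ts, tcld, lam=11e-6):
    """NE from the mixture B(TB) = NE*B(TCLD) + (1 - NE)*B(TS),
    i.e. NE = (B(TB) - B(TS)) / (B(TCLD) - B(TS))."""
    return (planck(lam, tb) - planck(lam, ts)) / (planck(lam, tcld) - planck(lam, ts))

# Illustrative case: warm surface, cold cirrus top, intermediate observed TB.
ne = effective_emissivity(tb=270.0, ts=295.0, tcld=225.0)
```

Sanity checks on the formula: a pixel at the surface temperature gives NE = 0, and one at the cloud-top temperature gives NE = 1.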
[Evaluation of Motion Sickness Induced by 3D Video Clips].
Matsuura, Yasuyuki; Takada, Hiroki
2016-01-01
The use of stereoscopic images has been spreading rapidly. Nowadays, stereoscopic movies are nothing new to people. The study of stereoscopic vision dates back to around 300 B.C., when Euclid first recognized the concept of depth perception by humans. Despite the increase in the production of three-dimensional (3D) display products and many studies on stereoscopic vision, the effects of stereoscopic vision on the human body are insufficiently understood. However, symptoms such as eye fatigue and 3D sickness have been a concern when viewing 3D films for prolonged periods; it is therefore important to consider the safety of viewing virtual 3D contents as a contribution to society. It is generally explained to the public that accommodation and convergence are mismatched during stereoscopic vision and that this is the main reason for visual fatigue and visually induced motion sickness (VIMS) during 3D viewing. We have devised a method to simultaneously measure lens accommodation and convergence, and we used this simultaneous measurement device to characterize 3D vision. Fixation distance was compared between accommodation and convergence during the viewing of 3D films with repeated measurements. The time courses of these fixation distances and their distributions were compared in subjects who viewed 2D and 3D video clips. The results indicated that after 90 s of continuously viewing 3D images, the accommodative power does not correspond to the distance of convergence. In this paper, remarks on methods to measure the severity of motion sickness induced by viewing 3D films are also given. From the epidemiological viewpoint, it is useful to obtain novel knowledge for the reduction and/or prevention of VIMS. We should accumulate empirical data on motion sickness, which may contribute to the development of the relevant fields of science and technology.
Research and Construction Lunar Stereoscopic Visualization System Based on Chang'E Data
NASA Astrophysics Data System (ADS)
Gao, Xingye; Zeng, Xingguo; Zhang, Guihua; Zuo, Wei; Li, ChunLai
2017-04-01
With the lunar exploration activities carried out by the Chang'E-1, Chang'E-2, and Chang'E-3 lunar probes, a large amount of lunar data has been obtained, including topographical and image data covering the whole Moon, as well as panoramic image data of the area close to the Chang'E-3 landing point. In this paper, we constructed an immersive virtual Moon system from the acquired lunar exploration data using advanced stereoscopic visualization technology, which will help scholars carry out research on lunar topography, assist the further exploration of lunar science, and facilitate lunar science outreach to the public. We focus on building a lunar stereoscopic visualization system that combines software and hardware, using binocular stereoscopic display technology, a real-time rendering algorithm for massive terrain data, and panorama-based virtual scene construction to achieve an immersive virtual tour of the whole Moon and of the local moonscape at the Chang'E-3 landing site.
NASA Astrophysics Data System (ADS)
Forbes, Angus; Villegas, Javier; Almryde, Kyle R.; Plante, Elena
2014-03-01
In this paper, we present a novel application, 3D+Time Brain View, for the stereoscopic visualization of functional Magnetic Resonance Imaging (fMRI) data gathered from participants exposed to unfamiliar spoken languages. An analysis technique based on Independent Component Analysis (ICA) is used to identify statistically significant clusters of brain activity and their changes over time during different testing sessions. That is, our system illustrates the temporal evolution of participants' brain activity as they are introduced to a foreign language by displaying these clusters as they change over time. The raw fMRI data are presented as a stereoscopic pair in an immersive environment utilizing passive stereo rendering, and the clusters are presented using a ray-casting technique for volume rendering. Our system incorporates the temporal information and the results of the ICA into the stereoscopic 3D rendering, making it easier for domain experts to explore and analyze the data.
Monoscopic versus stereoscopic photography in screening for clinically significant macular edema.
Welty, Christopher J; Agarwal, Anita; Merin, Lawrence M; Chomsky, Amy
2006-01-01
The purpose of the study was to determine whether monoscopic photography could serve as an accurate tool when used to screen for clinically significant macular edema. In a masked randomized fashion, two readers evaluated monoscopic and stereoscopic retinal photographs of 100 eyes. The photographs were evaluated first individually for probable clinically significant macular edema based on the Early Treatment Diabetic Retinopathy Study criteria and then as stereoscopic pairs. Graders were evaluated for sensitivity and specificity individually and in combination. Individually, reader one had a sensitivity of 0.93 and a specificity of 0.77, and reader two had a sensitivity of 0.88 and a specificity of 0.94. In combination, the readers had a sensitivity of 0.91 and a specificity of 0.86. They correlated on 0.76 of the stereoscopic readings and 0.92 of the monoscopic readings. These results indicate that the use of monoscopic retinal photography may be an accurate screening tool for clinically significant macular edema.
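The sensitivity and specificity figures quoted above follow the standard screening-test definitions, with the stereoscopic reading serving as the reference standard. A minimal Python sketch; the confusion-matrix counts below are invented for illustration, not the study's data:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for one reader grading 100 eyes against the reference.
sens, spec = sensitivity_specificity(tp=38, fn=3, tn=46, fp=13)
```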
Stereoscopic depth perception varies with hues
NASA Astrophysics Data System (ADS)
Chen, Zaiqing; Shi, Junsheng; Tai, Yonghang; Yun, Lijun
2012-09-01
The contribution of color information to stereopsis is controversial, and whether stereoscopic depth perception varies with chromaticity has been ambiguous. This study examined the changes in depth perception caused by hue variations. Based on the fact that a greater disparity range indicates more efficient stereoscopic perception, the effect of hue variations on depth perception was evaluated through the disparity range with random-dot stereogram stimuli. The disparity range was obtained by the method of constant stimuli for eight chromaticity points sampled from the CIE 1931 chromaticity diagram. The eight sample points include four main color hues (red, yellow, green, and blue) at two levels of chroma. The results show that the disparity range for the yellow hue is greater than that for the red hue, the latter is greater than that for the blue hue, and the disparity range for the green hue is smallest. We conclude that the perceived depth is not the same for different hues at a given size of disparity, and we suggest that stereoscopic depth perception can vary with chromaticity.
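The method of constant stimuli mentioned above presents fixed disparity levels many times each and reads the disparity range off the proportion of correct depth judgments per level. The Python sketch below is a deliberate simplification; the 75% criterion, the levels, and the counts are invented for the example, not taken from the study:

```python
def disparity_range(levels, correct, trials, criterion=0.75):
    """Largest disparity level (e.g. in arcmin) whose proportion of
    correct depth judgments still reaches the criterion."""
    resolved = [lv for lv, c, n in zip(levels, correct, trials)
                if c / n >= criterion]
    return max(resolved) if resolved else 0.0

# 20 trials per level; the two largest disparities fall below 75% correct.
rng = disparity_range([5, 10, 20, 40, 80], [20, 19, 18, 14, 9], [20] * 5)
```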
Usability of stereoscopic view in teleoperation
NASA Astrophysics Data System (ADS)
Boonsuk, Wutthigrai
2015-03-01
Recently, there has been tremendous growth in the area of 3D stereoscopic visualization. The 3D stereoscopic visualization technology has been used in a growing number of consumer products such as 3D televisions and 3D glasses for gaming systems. This technology builds on the fact that the human brain develops depth perception by retrieving information from the two eyes: the brain combines the left and right images on the retinas and extracts depth information. Therefore, viewing two video images taken a slight distance apart, as shown in Figure 1, can create an illusion of depth [8]. Proponents of this technology argue that the stereo view of 3D visualization increases user immersion and performance, as more information is gained through 3D vision compared to the 2D view. However, it is still uncertain whether the additional information gained from 3D stereoscopic visualization can actually improve user performance in real-world situations such as teleoperation.
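For a calibrated pair of parallel cameras, the depth extraction alluded to above reduces to the classic pinhole relation Z = f·B/d. A minimal Python sketch; the focal length, baseline, and disparity values are illustrative, not tied to any particular system:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo: depth Z = f * B / d for focal length f (pixels),
    baseline B (meters), and horizontal disparity d (pixels)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# f = 800 px, baseline = 6.5 cm (roughly human interocular), d = 20 px
z = depth_from_disparity(800.0, 0.065, 20.0)  # -> 2.6 m
```

Note the inverse relationship: halving the disparity doubles the estimated depth, which is why distant objects are harder to resolve stereoscopically.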
Perfect 3-D movies and stereoscopic movies on TV and projection screens: an appraisement
NASA Astrophysics Data System (ADS)
Klein, Susanne; Dultz, Wolfgang
1990-09-01
Since the invention of stereoscopy (Wheatstone, 1838), reasons for and against 3-dimensional images have occupied the literature, but there has never been much doubt about the preference for autostereoscopic systems showing a scene that is 3-dimensional and true to life from all sides (the perfect 3-dimensional image, Hesse, 1939), especially since most stereoscopic movies of the past show serious imperfections with respect to image quality and technical operation. Leaving aside that no convincing perfect 3D-TV system is in sight, there are properties of the stereoscopic movie which are advantageous for certain representations on TV and important for the 3-dimensional motion picture. In this paper we investigate the influence of apparent motions of 3-dimensional images and classify the different projection systems with respect to the presence and absence of these spectacular illusions. Apparent motions bring dramatic effects into stereoscopic movies which cannot be created with perfect 3-dimensional systems. In this study we describe their applications and limits for television.
NASA Astrophysics Data System (ADS)
Casadei, Diego; Jeffrey, Natasha L. S.; Kontar, Eduard P.
2017-09-01
Context. During a solar flare, a large percentage of the released magnetic energy goes into the kinetic energy of non-thermal particles, with X-ray observations providing a direct connection to keV flare-accelerated electrons. However, the electron angular distribution, a prime diagnostic tool of the acceleration mechanism and transport, is poorly known. Aims: During the next solar maximum, two upcoming space-borne X-ray missions, STIX on board Solar Orbiter and MiSolFA, will perform stereoscopic X-ray observations of solar flares at two different locations: STIX at 0.28 AU (at perihelion) and at inclinations of up to 25°, and MiSolFA in a low-Earth orbit. The combined observations from these cross-calibrated detectors will allow us to infer the electron anisotropy of individual flares confidently for the first time. Methods: We simulated both instrumental and physical effects for STIX and MiSolFA, including thermal shielding, background, and X-ray Compton backscattering (the albedo effect) in the solar photosphere. We predict the expected number of observable flares available for stereoscopic measurements during the next solar maximum. We also discuss the range of useful spacecraft observation angles for the challenging case of close-to-isotropic flare anisotropy. Results: The simulated results show that STIX and MiSolFA will be capable of detecting low levels of flare anisotropy for M1-class or stronger flares, even with a relatively small spacecraft angular separation of 20-30°. Both instruments will directly measure the flare X-ray anisotropy of about 40 M- and X-class solar flares during the next solar maximum. Conclusions: Near-future stereoscopic observations with Solar Orbiter/STIX and MiSolFA will help distinguish between competing flare-acceleration mechanisms, and will provide essential constraints on the collisional and non-collisional transport processes occurring in the flaring atmosphere for individual solar flares.
Stereoscopic wide field of view imaging system
NASA Technical Reports Server (NTRS)
Prechtl, Eric F. (Inventor); Sedwick, Raymond J. (Inventor); Jonas, Eric M. (Inventor)
2011-01-01
A stereoscopic imaging system incorporates a plurality of imaging devices or cameras to generate a high resolution, wide field of view image database from which images can be combined in real time to provide wide field of view or panoramic or omni-directional still or video images.
The rendering context for stereoscopic 3D web
NASA Astrophysics Data System (ADS)
Chen, Qinshui; Wang, Wenmin; Wang, Ronggang
2014-03-01
3D technologies on the Web have been studied for many years, but they are basically monoscopic 3D. With stereoscopic technology gradually maturing, we are researching how to integrate binocular 3D technology into the Web, creating a stereoscopic 3D browser that will provide users with a brand-new experience of human-computer interaction. In this paper, we propose a novel approach to applying stereoscopy technologies to CSS3 3D Transforms. Under our model, each element can create or participate in a stereoscopic 3D rendering context, in which 3D transforms such as scaling, translation, and rotation can be applied and perceived in a truly 3D space. We first discuss the underlying principles of stereoscopy. After that, we discuss how these principles can be applied to the Web. A stereoscopic 3D browser with backward compatibility was also created for demonstration purposes. We take advantage of the open-source WebKit project, integrating the 3D display ability into the rendering engine of the web browser. For each 3D web page, our 3D browser creates two slightly different images, representing the left-eye and right-eye views, which are combined on the 3D display to generate the illusion of depth. As the results show, elements can be manipulated in a truly 3D space.
NASA Astrophysics Data System (ADS)
Boisson, Guillaume; Chamaret, Christel
2012-03-01
More and more 3D movies are released each year. Thanks to the current spread of 3D-TV displays, these 3D video (3DV) contents are about to enter homes massively. Yet viewing conditions determine the stereoscopic features achievable for 3DV material. Because the conditions at home (screen size and distance to the screen) differ significantly from a theater, 3D cinema movies need to be repurposed before broadcast and replication on 3D Blu-ray Discs to be fully enjoyed at home. In this paper we tackle the particular issue of how to handle the variety of viewing conditions in stereoscopic content delivery. To that end, we first investigate what is basically at stake in granting stereoscopic viewers' comfort, through the well-known (and sometimes criticized) vergence-accommodation conflict. Thereby we define a set of basic rules that can serve as guidelines for 3DV creation. We propose disparity profiles as new requirements for 3DV production and repurposing. Meeting the proposed background and foreground constraints prevents visual fatigue, and occupying the whole available depth budget grants optimal 3D effects. We present an efficient algorithm for automatic disparity-based 3DV retargeting depending on the viewing conditions. Variants are proposed depending on the input format (stereoscopic binocular content or a depth-based format) and the level of complexity achievable.
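The retargeting idea can be illustrated with the simplest possible variant: a global linear remap of screen disparities from the source grading range into the target display's comfort budget. This Python sketch stands in for the paper's disparity-profile method and uses invented pixel budgets:

```python
def retarget_disparity(d, src_min, src_max, dst_min, dst_max):
    """Linearly remap a screen disparity d (pixels) from the source range
    [src_min, src_max] into the target comfort budget [dst_min, dst_max]."""
    t = (d - src_min) / (src_max - src_min)
    return dst_min + t * (dst_max - dst_min)

# A cinema grade spanning [-60, +30] px remapped to a TV budget of [-20, +10] px:
# the extremes map onto the new budget limits, and screen-plane content
# (zero disparity) stays near the screen plane.
d_home = retarget_disparity(0.0, -60.0, 30.0, -20.0, 10.0)
```

Real repurposing pipelines use non-linear, depth-aware mappings; a linear remap merely shows why the available depth budget shrinks as the screen and viewing distance change.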
Immersive 3D exposure-based treatment for spider fear: A randomized controlled trial.
Minns, Sean; Levihn-Coon, Andrew; Carl, Emily; Smits, Jasper A J; Miller, Wayne; Howard, Don; Papini, Santiago; Quiroz, Simon; Lee-Furman, Eunjung; Telch, Michael; Carlbring, Per; Xanthopoulos, Drew; Powers, Mark B
2018-06-04
Stereoscopic 3D gives the viewer the same shape, size, perspective and depth they would experience viewing the real world and could mimic the perceptual threat cues present in real life. This is the first study to investigate whether an immersive stereoscopic 3D video exposure-based treatment would be effective in reducing fear of spiders. Participants with a fear of spiders (N = 77) watched two psychoeducational videos with facts about spiders and phobias. They were then randomized to a treatment condition that watched a single session of a stereoscopic 3D immersive video exposure-based treatment (six 5-min exposures) delivered through a virtual reality headset or a psychoeducation only control condition that watched a 30-min neutral video (2D documentary) presented on a computer monitor. Assessments of spider fear (Fear of Spiders Questionnaire [FSQ], Behavioral Approach Task [BAT], & subjective ratings of fear) were completed pre- and post-treatment. Consistent with prediction, the stereoscopic 3D video condition outperformed the control condition in reducing fear of spiders showing a large between-group effect size on the FSQ (Cohen's d = 0.85) and a medium between-group effect size on the BAT (Cohen's d = 0.47). This provides initial support for stereoscopic 3D video in treating phobias. Copyright © 2018 Elsevier Ltd. All rights reserved.
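The between-group effect sizes reported above are Cohen's d values, i.e. mean differences scaled by the pooled standard deviation. A minimal Python sketch with invented summary statistics (chosen to yield a large effect near 0.85; these are not the study's data):

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Between-group Cohen's d using the pooled standard deviation."""
    pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled

# Hypothetical FSQ reduction scores for treatment vs. control groups.
d = cohens_d(mean1=25.0, mean2=15.2, sd1=11.0, sd2=12.0, n1=39, n2=38)
```

By the usual convention, d around 0.2 is a small effect, 0.5 medium, and 0.8 or more large.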
Heo, Hwan; Lee, Won Oh; Shin, Kwang Yong; Park, Kang Ryoung
2014-05-15
We propose a new method for measuring the degree of eyestrain on 3D stereoscopic displays using a glasses-type of eye tracking device. Our study is novel in the following four ways: first, the circular area where a user's gaze position exists is defined based on the calculated gaze position and gaze estimation error. Within this circular area, the position where edge strength is maximized can be detected, and we determine this position as the gaze position that has a higher probability of being the correct one. Based on this gaze point, the eye foveation model is defined. Second, we quantitatively evaluate the correlation between the degree of eyestrain and the causal factors of visual fatigue, such as the degree of change of stereoscopic disparity (CSD), stereoscopic disparity (SD), frame cancellation effect (FCE), and edge component (EC) of the 3D stereoscopic display using the eye foveation model. Third, by comparing the eyestrain in conventional 3D video and experimental 3D sample video, we analyze the characteristics of eyestrain according to various factors and types of 3D video. Fourth, by comparing the eyestrain with or without the compensation of eye saccades movement in 3D video, we analyze the characteristics of eyestrain according to the types of eye movements in 3D video. Experimental results show that the degree of CSD causes more eyestrain than other factors.
Stereoscopy in Static Scientific Imagery in an Informal Education Setting: Does It Matter?
NASA Astrophysics Data System (ADS)
Price, C. Aaron; Lee, H.-S.; Malatesta, K.
2014-12-01
Stereoscopic technology (3D) is rapidly becoming ubiquitous across research, entertainment and informal educational settings. Children of today may grow up never knowing a time when movies, television and video games were not available stereoscopically. Despite this rapid expansion, the field's understanding of the impact of stereoscopic visualizations on learning is rather limited. Much of the excitement of stereoscopic technology could be due to a novelty effect, which will wear off over time. This study controlled for the novelty factor using a variety of techniques. On the floor of an urban science center, 261 children were shown 12 photographs and visualizations of highly spatial scientific objects and scenes. The images were randomly shown in either traditional (2D) format or in stereoscopic format. The children were asked two questions of each image—one about a spatial property of the image and one about a real-world application of that property. At the end of the test, the child was asked to draw from memory the last image they saw. Results showed no overall significant difference in response to the questions associated with 2D or 3D images. However, children who saw the final slide only in 3D drew more complex representations of the slide than those who did not. Results are discussed through the lenses of cognitive load theory and the effect of novelty on engagement.
The effects of 5.1 sound presentations on the perception of stereoscopic imagery in video games
NASA Astrophysics Data System (ADS)
Cullen, Brian; Galperin, Daniel; Collins, Karen; Hogue, Andrew; Kapralos, Bill
2013-03-01
Stereoscopic 3D (S3D) content in games, film and other audio-visual media has been steadily increasing over the past number of years. However, there are still open, fundamental questions regarding its implementation, particularly as it relates to a multi-modal experience that involves sound and haptics. Research has shown that sound has considerable impact on our perception of 2D phenomena, but very little research has considered how sound may influence stereoscopic 3D. Here we present the results of an experiment that examined the effects of 5.1 surround sound (5.1) and stereo loudspeaker setups on depth perception in relation to S3D imagery within a video game environment. Our aim was to answer the question: "can 5.1 surround sound enhance the participant's perception of depth in the stereoscopic field when compared to traditional stereo sound presentations?" In addition, our study examined how the presence or absence of Doppler frequency shift and frequency fall-off audio effects can also influence depth judgment under these conditions. Results suggest that 5.1 surround sound presentations enhance the apparent depth of stereoscopic imagery when compared to stereo presentations. Results also suggest that the addition of audio effects such as Doppler shift and frequency fall-off filters can influence the apparent depth of S3D objects.
Matte painting in stereoscopic synthetic imagery
NASA Astrophysics Data System (ADS)
Eisenmann, Jonathan; Parent, Rick
2010-02-01
While there have been numerous studies concerning human perception in stereoscopic environments, rules of thumb for cinematography in stereoscopy have not yet been well-established. To that aim, we present experiments and results of subject testing in a stereoscopic environment, similar to that of a theater (i.e. large flat screen without head-tracking). In particular we wish to empirically identify thresholds at which different types of backgrounds, referred to in the computer animation industry as matte paintings, can be used while still maintaining the illusion of seamless perspective and depth for a particular scene and camera shot. In monoscopic synthetic imagery, any type of matte painting that maintains proper perspective lines, depth cues, and coherent lighting and textures saves in production costs while still maintaining the illusion of an alternate cinematic reality. However, in stereoscopic synthetic imagery, a 2D matte painting that worked in monoscopy may fail to provide the intended illusion of depth because the viewer has added depth information provided by stereopsis. We intend to observe two stereoscopic perceptual thresholds in this study which will provide practical guidelines indicating when to use each of three types of matte paintings. We ran subject tests in two virtual testing environments, each with varying conditions. Data were collected showing how the choices of the users matched the correct response, and the resulting perceptual threshold patterns are discussed below.
Teaching Anatomy and Physiology Using Computer-Based, Stereoscopic Images
ERIC Educational Resources Information Center
Perry, Jamie; Kuehn, David; Langlois, Rick
2007-01-01
Learning real three-dimensional (3D) anatomy for the first time can be challenging. Two-dimensional drawings and plastic models tend to over-simplify the complexity of anatomy. The approach described uses stereoscopy to create 3D images of the process of cadaver dissection and to demonstrate the underlying anatomy related to the speech mechanisms.…
NASA Astrophysics Data System (ADS)
McFadden, D.; Tavakkoli, A.; Regenbrecht, J.; Wilson, B.
2017-12-01
Virtual Reality (VR) and Augmented Reality (AR) applications have recently seen impressive growth, thanks to the advent of commercial Head Mounted Displays (HMDs). This new visualization era has opened the possibility of presenting researchers from multiple disciplines with data visualization techniques not possible via traditional 2D screens. In a purely VR environment, researchers are presented with the visual data in a virtual environment, whereas in a purely AR application, a virtual object is projected into the real world with which researchers can interact. There are several limitations to purely VR or AR applications when taken within the context of remote planetary exploration. For example, in a purely VR environment, the contents of the planet surface (e.g. rocks, terrain, or other features) must be created off-line from a multitude of images using image-processing techniques to generate the 3D mesh data that will populate the virtual surface of the planet. This process usually takes a tremendous amount of computational resources and cannot be delivered in real time. As an alternative, video frames may be superimposed on the virtual environment to save processing time. However, such rendered video frames will lack 3D visual information, i.e. depth information. In this paper, we present a technique to utilize a remotely situated robot's stereoscopic cameras to provide a live visual feed from the real world into the virtual environment in which planetary scientists are immersed. Moreover, the proposed technique blends the virtual environment with the real world in such a way as to preserve both the depth and visual information from the real world, while allowing for the sensation of immersion when the entire sequence is viewed via an HMD such as the Oculus Rift. The figure shows the virtual environment with an overlay of the real-world stereoscopic video presented in real time into the virtual environment.
Notice the preservation of the object's shape, shadows, and depth information. The distortions shown in the image are due to the rendering of the stereoscopic data into a 2D image for the purposes of taking screenshots.
Fusion Prevents the Redundant Signals Effect: Evidence from Stereoscopically Presented Stimuli
ERIC Educational Resources Information Center
Schroter, Hannes; Fiedler, Anja; Miller, Jeff; Ulrich, Rolf
2011-01-01
In a simple reaction time (RT) experiment, visual stimuli were stereoscopically presented either to one eye (single stimulation) or to both eyes (redundant stimulation), with brightness matched for single and redundant stimulations. Redundant stimulation resulted in two separate percepts when noncorresponding retinal areas were stimulated, whereas…
Stereoscopic video analysis of Anopheles gambiae behavior in the field: challenges and opportunities
USDA-ARS?s Scientific Manuscript database
Advances in our ability to localize and track individual swarming mosquitoes in the field via stereoscopic image analysis have enabled us to test long standing ideas about individual male behavior and directly observe coupling. These studies further our fundamental understanding of the reproductive ...
Stereoscopic image production: live, CGI, and integration
NASA Astrophysics Data System (ADS)
Criado, Enrique
2006-02-01
This paper briefly describes part of the experience gathered over more than 10 years of stereoscopic movie production, some of the most common problems found, and the solutions, more or less successful, that we applied to those problems. Our work is mainly focused on the entertainment market: theme parks, museums, and other culture-related locations and events. For our movies, we have been forced to develop our own devices to permit correct stereo shooting (stereoscopic rigs) and real-time stereo monitoring, and to solve problems found with conventional film editing, compositing, and postproduction software. Here, we discuss stereo lighting, monitoring, special effects, image integration (using dummies and more), stereo-camera parameters, and other general aspects of 3-D movie production.
ERIC Educational Resources Information Center
Remmele, Martin; Schmidt, Elena; Lingenfelder, Melissa; Martens, Andreas
2018-01-01
Gross anatomy is located in a three-dimensional space. Visualizing aspects of structures in gross anatomy education should aim to provide information that best resembles their original spatial proportions. Stereoscopic three-dimensional imagery might offer possibilities to implement this aim, though some research has revealed potential impairments…
Stereoscopy in Static Scientific Imagery in an Informal Education Setting: Does It Matter?
ERIC Educational Resources Information Center
Price, C. Aaron; Lee, H.-S.; Malatesta, K.
2014-01-01
Stereoscopic technology (3D) is rapidly becoming ubiquitous across research, entertainment and informal educational settings. Children of today may grow up never knowing a time when movies, television and video games were not available stereoscopically. Despite this rapid expansion, the field's understanding of the impact of stereoscopic…
Canessa, Andrea; Gibaldi, Agostino; Chessa, Manuela; Fato, Marco; Solari, Fabio; Sabatini, Silvio P.
2017-01-01
Binocular stereopsis is the ability of a visual system, belonging to a living being or a machine, to interpret the different visual information derived from two eyes/cameras for depth perception. From this perspective, ground-truth information about three-dimensional visual space, which is rarely available, is an ideal tool both for evaluating human performance and for benchmarking machine vision algorithms. In the present work, we implemented a rendering methodology in which the camera pose mimics a realistic eye pose for a fixating observer, thus including convergent eye geometry and cyclotorsion. The virtual environment we developed relies on highly accurate 3D virtual models, and its full controllability allows us to obtain the stereoscopic pairs together with ground-truth depth and camera pose information. We thus created a stereoscopic dataset: GENUA PESTO (GENoa hUman Active fixation database: PEripersonal space STereoscopic images and grOund truth disparity). The dataset aims to provide a unified framework useful for a number of problems relevant to human and computer vision, from scene exploration and eye-movement studies to 3D scene reconstruction. PMID:28350382
Impact of floating windows on the accuracy of depth perception in games
NASA Astrophysics Data System (ADS)
Stanfield, Brodie; Zerebecki, Christopher; Hogue, Andrew; Kapralos, Bill; Collins, Karen
2013-03-01
The floating window technique is commonly employed by stereoscopic 3D filmmakers to reduce the effects of window violations by masking out portions of the screen that contain visual information that does not exist in one of the views. Although widely adopted in the film industry, and despite its potential benefits, the technique has not been adopted by video game developers to the same extent, possibly because of a lack of understanding of how the floating window can be utilized in such an interactive medium. Here, we describe a quantitative study that investigates how the floating window technique affects users' depth perception in a simple game-like environment. Our goal is to determine how various stereoscopic 3D parameters, such as the existence, shape, and size of the floating window, affect the user experience, and to devise a set of guidelines for game developers wishing to develop stereoscopic 3D content. Providing game designers with quantitative knowledge of how these parameters affect user experience is invaluable when designing interactive stereoscopic 3D content.
Blur spot limitations in distal endoscope sensors
NASA Astrophysics Data System (ADS)
Yaron, Avi; Shechterman, Mark; Horesh, Nadav
2006-02-01
In years past, the picture quality of electronic video systems was limited by the image sensor. At present, the resolution of miniature image sensors, as in medical endoscopy, is typically superior to the resolution of the optical system. This "excess resolution" is utilized by Visionsense to create stereoscopic vision. Visionsense has developed a single-chip stereoscopic camera that multiplexes the horizontal dimension of the image sensor into two (left and right) images, compensates for blur, and provides additional depth resolution without sacrificing planar resolution. The camera is based on a dual-pupil imaging objective and an image sensor coated with an array of microlenses (a plenoptic camera). The camera has the advantage of being compact, providing simultaneous acquisition of left and right images, and offering resolution comparable to a dual-chip stereoscopic camera with low- to medium-resolution imaging lenses. A stereoscopic vision system provides an improved 3-dimensional perspective of intra-operative sites that is crucial for advanced minimally invasive surgery and contributes to surgeon performance. An additional advantage of single-chip stereo sensors is improved tolerance to electronic signal noise.
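The horizontal multiplexing the abstract describes can be sketched as splitting one sensor frame into two half-width views. The alternating-column interleaving order below is an assumption for illustration; the actual Visionsense sensor uses a microlens array and dual-pupil optics not modeled here.

```python
# Sketch: demultiplexing a single-chip stereo sensor that interleaves
# the left and right views across alternating pixel columns. The
# interleaving order is an assumption; the real device's microlens /
# dual-pupil optics are not modeled.
import numpy as np

def demultiplex_columns(frame):
    """Split an interleaved frame into (left, right) half-width images."""
    return frame[:, 0::2], frame[:, 1::2]

frame = np.arange(16).reshape(4, 4)   # toy 4x4 interleaved frame
left, right = demultiplex_columns(frame)
assert left.shape == right.shape == (4, 2)
```

The half-width output is the "sacrificed" horizontal resolution the abstract argues is acceptable when the optics, not the sensor, limit image quality.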
Study of blur discrimination for 3D stereo viewing
NASA Astrophysics Data System (ADS)
Subedar, Mahesh; Karam, Lina J.
2014-03-01
Blur is an important attribute in the study and modeling of the human visual system. Blur discrimination has been studied extensively using 2D test patterns. In this study, we present the details of subjective tests performed to measure blur discrimination thresholds using stereoscopic 3D test patterns. Specifically, the effect of disparity on blur discrimination thresholds is studied on a passive stereoscopic 3D display. The blur discrimination thresholds are measured using stereoscopic 3D test patterns with positive, negative, and zero disparity values, at multiple reference blur levels. A disparity value of zero represents the 2D viewing case, where both eyes observe the same image. The subjective test results indicate that the blur discrimination thresholds remain constant as the disparity value is varied. This further indicates that binocular disparity does not affect blur discrimination thresholds, and that models developed for 2D blur discrimination can be extended to stereoscopic 3D. We present a fit of the Weber model to the 3D blur discrimination thresholds measured in the subjective experiments.
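The Weber model mentioned in the closing sentence is commonly written as threshold ≈ w·(reference blur + intrinsic blur b0). A minimal fitting sketch, using made-up placeholder data rather than the paper's measurements:

```python
# Sketch: fitting a Weber-type model to blur discrimination thresholds,
#   threshold = w * reference_blur + c,  with b0 = c / w.
# The data points below are made-up placeholders, not the paper's data.
import numpy as np

ref_blur = np.array([0.5, 1.0, 2.0, 4.0])        # reference blur (arcmin)
thresholds = np.array([0.35, 0.45, 0.85, 1.65])  # measured deltas (arcmin)

w, c = np.polyfit(ref_blur, thresholds, 1)       # degree-1 linear fit
b0 = c / w                                       # intrinsic blur term
# w is the Weber fraction; a positive b0 captures the nonzero
# threshold found even at very small reference blurs
```

Under the paper's finding, the same (w, b0) fitted to 2D data should describe the 3D thresholds at any disparity.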
Shen, Liangbo; Carrasco-Zevallos, Oscar; Keller, Brenton; Viehland, Christian; Waterman, Gar; Hahn, Paul S.; Kuo, Anthony N.; Toth, Cynthia A.; Izatt, Joseph A.
2016-01-01
Intra-operative optical coherence tomography (OCT) requires a display technology which allows surgeons to visualize OCT data without disrupting surgery. Previous research and commercial intrasurgical OCT systems have integrated heads-up display (HUD) systems into surgical microscopes to provide monoscopic viewing of OCT data through one microscope ocular. To take full advantage of our previously reported real-time volumetric microscope-integrated OCT (4D MIOCT) system, we describe a stereoscopic HUD which projects a stereo pair of OCT volume renderings into both oculars simultaneously. The stereoscopic HUD uses a novel optical design employing spatial multiplexing to project dual OCT volume renderings utilizing a single micro-display. The optical performance of the surgical microscope with the HUD was quantitatively characterized, and the addition of the HUD was found not to substantially affect the resolution, field of view, or pincushion distortion of the operating microscope. In a pilot depth perception subject study, five ophthalmic surgeons completed a pre-set dexterity task with a 50.0% (SD = 37.3%) higher success rate and in 35.0% (SD = 24.8%) less time on average with stereoscopic OCT vision compared to monoscopic OCT vision. Preliminary experience using the HUD in 40 vitreo-retinal human surgeries by five ophthalmic surgeons is reported, in which all surgeons reported that the HUD did not alter their normal view of surgery and that live surgical maneuvers were readily visible in displayed stereoscopic OCT volumes. PMID:27231616
Development of 40-in hybrid hologram screen for auto-stereoscopic video display
NASA Astrophysics Data System (ADS)
Song, Hyun Ho; Nakashima, Y.; Momonoi, Y.; Honda, Toshio
2004-06-01
Auto-stereoscopic displays typically face two problems: first, large image display is difficult, and second, the view zone (the zone in which both eyes must be placed for stereoscopic or 3D image observation) is very narrow. We have been developing an auto-stereoscopic large video display system (over 100 inches diagonal) which a few people can view simultaneously [1,2]. Displays over 100 inches diagonal usually use an optical video projection system, and the hologram screen has been proposed as one such auto-stereoscopic display system [3-6]. However, if the hologram screen becomes too large, the view zone (corresponding to the reconstructed diffused object) suffers color dispersion and color aberration [7]. We therefore proposed attaching an additional Fresnel lens to the hologram screen; we call this combination a "hybrid hologram screen" (HHS for short). We made an HHS of 866 mm (H) × 433 mm (V), about 40 inches diagonal [8-11]. By using the lens in the reconstruction step, the angle between object light and reference light can be made small compared to the lensless case, so the spread of the view zone caused by color dispersion and color aberration becomes small. In addition, the virtual image reconstructed from the hologram screen can be transformed into a real image (view zone), so it is not necessary to use a large lens or concave mirror when making a large hologram screen.
Heo, Hwan; Lee, Won Oh; Shin, Kwang Yong; Park, Kang Ryoung
2014-01-01
We propose a new method for measuring the degree of eyestrain on 3D stereoscopic displays using a glasses-type eye tracking device. Our study is novel in the following four ways: first, the circular area where a user's gaze position exists is defined based on the calculated gaze position and gaze estimation error. Within this circular area, the position where edge strength is maximized can be detected, and we take this position as the gaze position with the higher probability of being correct. Based on this gaze point, the eye foveation model is defined. Second, we quantitatively evaluate the correlation between the degree of eyestrain and the causal factors of visual fatigue, such as the degree of change of stereoscopic disparity (CSD), stereoscopic disparity (SD), frame cancellation effect (FCE), and edge component (EC) of the 3D stereoscopic display, using the eye foveation model. Third, by comparing the eyestrain in conventional 3D video and experimental 3D sample video, we analyze the characteristics of eyestrain according to various factors and types of 3D video. Fourth, by comparing the eyestrain with and without compensation for saccadic eye movements in 3D video, we analyze the characteristics of eyestrain according to the types of eye movements in 3D video. Experimental results show that the degree of CSD causes more eyestrain than the other factors. PMID:24834910
Norman, J Farley; Norman, Hideko F; Craft, Amy E; Walton, Crystal L; Bartholomew, Ashley N; Burton, Cory L; Wiesemann, Elizabeth Y; Crabtree, Charles E
2008-10-01
Three experiments investigated whether and to what extent increases in age affect the functionality of stereopsis. The observers' ages ranged from 18 to 83 years. The overall goal was to challenge the older stereoscopic visual system by utilizing high magnitudes of binocular disparity, ambiguous binocular disparity [cf., Julesz, B., & Chang, J. (1976). Interaction between pools of binocular disparity detectors tuned to different disparities. Biological Cybernetics, 22, 107-119], and by making binocular matching more difficult. In particular, Experiment 1 evaluated observers' abilities to discriminate ordinal depth differences away from the horopter using standing disparities of 6.5-46 min arc. Experiment 2 assessed observers' abilities to discriminate stereoscopic shape using line-element stereograms. The direction (crossed vs. uncrossed) and magnitude of the binocular disparity (13.7 and 51.5 min arc) were manipulated. Binocular matching was made more difficult by varying the orientations of corresponding line elements across the two eyes' views. The purpose of Experiment 3 was to determine whether the aging stereoscopic system can resolve ambiguous binocular disparities in a manner similar to that of younger observers. The results of all experiments demonstrated that older observers' stereoscopic vision is functionally comparable to that of younger observers in many respects. For example, both age groups exhibited a similar ability to discriminate depth and surface shape. The results also showed, however, that age-related differences in stereopsis do exist, and they become most noticeable when the older stereoscopic system is challenged by multiple simultaneous factors.
Stereoscopy in cinematographic synthetic imagery
NASA Astrophysics Data System (ADS)
Eisenmann, Jonathan; Parent, Rick
2009-02-01
In this paper we present experiments and results pertaining to the perception of depth in stereoscopic viewing of synthetic imagery. In computer animation, typical synthetic imagery is highly textured and uses stylized illumination of abstracted material models by abstracted light source models. While there have been numerous studies concerning stereoscopic capabilities, conventions for staging and cinematography in stereoscopic movies have not yet been well-established. Our long-term goal is to measure the effectiveness of various cinematography techniques on the human visual system in a theatrical viewing environment. We would like to identify the elements of stereoscopic cinema that are important in terms of enhancing the viewer's understanding of a scene as well as providing guidelines for the cinematographer relating to storytelling. In these experiments we isolated stereoscopic effects by eliminating as many other visual cues as is reasonable. In particular, we aim to empirically determine what types of movement in synthetic imagery affect the perceptual depth sensing capabilities of our viewers. Using synthetic imagery, we created several viewing scenarios in which the viewer is asked to locate a target object's depth in a simple environment. The scenarios were specifically designed to compare the effectiveness of stereo viewing, camera movement, and object motion in aiding depth perception. Data were collected showing the error between the choice of the user and the actual depth value, and patterns were identified that relate the test variables to the viewer's perceptual depth accuracy in our theatrical viewing environment.
Stereoscopically Observing Manipulative Actions
Ferri, S.; Pauwels, K.; Rizzolatti, G.; Orban, G. A.
2016-01-01
The purpose of this study was to investigate the contribution of stereopsis to the processing of observed manipulative actions. To this end, we first combined the factors “stimulus type” (action, static control, and dynamic control), “stereopsis” (present, absent) and “viewpoint” (frontal, lateral) into a single design. Four sites in premotor, retro-insular (2) and parietal cortex operated specifically when actions were viewed stereoscopically and frontally. A second experiment clarified that the stereo-action-specific regions were driven by actions moving out of the frontoparallel plane, an effect amplified by frontal viewing in premotor cortex. Analysis of single voxels and their discriminatory power showed that the representation of action in the stereo-action-specific areas was more accurate when stereopsis was active. Further analyses showed that the 4 stereo-action-specific sites form a closed network converging onto the premotor node, which connects to parietal and occipitotemporal regions outside the network. Several of the specific sites are known to process vestibular signals, suggesting that the network combines observed actions in peripersonal space with gravitational signals. These findings have wider implications for the function of premotor cortex and the role of stereopsis in human behavior. PMID:27252350
Using Visual Odometry to Estimate Position and Attitude
NASA Technical Reports Server (NTRS)
Maimone, Mark; Cheng, Yang; Matthies, Larry; Schoppers, Marcel; Olson, Clark
2007-01-01
A computer program in the guidance system of a mobile robot generates estimates of the position and attitude of the robot, using features of the terrain on which the robot is moving, by processing digitized images acquired by a stereoscopic pair of electronic cameras mounted rigidly on the robot. Developed for use in localizing the Mars Exploration Rover (MER) vehicles on Martian terrain, the program can also be used for similar purposes on terrestrial robots moving in sufficiently visually textured environments: examples include low-flying robotic aircraft and wheeled robots moving on rocky terrain or inside buildings. In simplified terms, the program automatically detects visual features and tracks them across stereoscopic pairs of images acquired by the cameras. The 3D locations of the tracked features are then robustly processed into an estimate of overall vehicle motion. Testing has shown that by use of this software, the error in the estimate of the position of the robot can be limited to no more than 2 percent of the distance traveled, provided that the terrain is sufficiently rich in features. This software has proven extremely useful on the MER vehicles during driving on sandy and highly sloped terrains on Mars.
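The motion-estimation step the brief describes (robustly turning tracked 3D feature locations into an overall vehicle-motion estimate) can be sketched with a least-squares rigid alignment (the Kabsch method). This is a hedged illustration: the actual MER software also performs feature detection, stereo triangulation, and outlier rejection, none of which are shown, and the point data below are synthetic.

```python
# Sketch of the motion-estimation core of stereo visual odometry:
# given 3D feature positions before and after a move (triangulated
# from stereo pairs), recover the rigid rotation R and translation t
# that best align them (Kabsch / orthogonal Procrustes).
import numpy as np

def estimate_motion(pts_prev, pts_curr):
    """Least-squares rigid transform mapping pts_prev onto pts_curr."""
    c_prev, c_curr = pts_prev.mean(axis=0), pts_curr.mean(axis=0)
    H = (pts_prev - c_prev).T @ (pts_curr - c_curr)  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T          # proper rotation (det = +1)
    t = c_curr - R @ c_prev
    return R, t

# synthetic check: recover a known yaw rotation and translation
rng = np.random.default_rng(0)
pts = rng.normal(size=(20, 3))
theta = 0.1
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.3, 0.0, 0.05])
R, t = estimate_motion(pts, pts @ R_true.T + t_true)
assert np.allclose(R, R_true) and np.allclose(t, t_true)
```

In a real pipeline this alignment is wrapped in a robust estimator (e.g. RANSAC over feature matches) so that mistracked features do not corrupt the motion estimate.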
ERIC Educational Resources Information Center
Parikesit, Gea O. F.
2014-01-01
Shadows can be found easily everywhere around us, so that we rarely find it interesting to reflect on how they work. In order to raise curiosity among students on the optics of shadows, we can display the shadows in 3D, particularly using a stereoscopic set-up. In this paper we describe the optics of stereoscopic shadows using simple schematic…
ERIC Educational Resources Information Center
Malin, Brenton J.
2007-01-01
This essay explores a series of discourses surrounding the images of the early twentieth-century stereoscope, focusing on Underwood & Underwood of Ottawa, Kansas, and the Keystone View Company, of Meadville, Pennsylvania. By publishing images of particular geographic areas and historical events, as well as compendium volumes that included…
NASA Technical Reports Server (NTRS)
Hasler, A. F.; Morris, K. R.
1986-01-01
Hurricane cloud and precipitation structure have been studied by means of IR and stereoscopic visual satellite data from synchronized scanning GOES-East and -West, in combination with ground-based radar data for Hurricane Frederico and time-composited airborne radar data for Hurricane Allen. It is noted that stereoscopically measured cloudtop height in these hurricanes is not as closely correlated to radar reflectivity at lower levels as it is in intense thunderstorms over land. This and other results obtained imply that satellite precipitation estimation techniques for tropical cyclones that are based on cloudtop measurements will not be accurate with respect to time and place scales that are less than several hours and a few hundred km, respectively.
Measuring sensitivity to viewpoint change with and without stereoscopic cues.
Bell, Jason; Dickinson, Edwin; Badcock, David R; Kingdom, Frederick A A
2013-12-04
The speed and accuracy of object recognition is compromised by a change in viewpoint, demonstrating that human observers are sensitive to this transformation. Here we discuss a novel method for simulating the appearance of an object that has undergone a rotation-in-depth, and include an exposition of the differences between perspective and orthographic projections. Next we describe a method by which human sensitivity to rotation-in-depth can be measured. Finally we discuss an apparatus for creating a vivid percept of a 3-dimensional rotation-in-depth: the Wheatstone Eight Mirror Stereoscope. By doing so, we reveal a means by which to evaluate the role of stereoscopic cues in the discrimination of viewpoint-rotated shapes and objects.
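The perspective/orthographic distinction the paper expounds can be illustrated with a minimal sketch: the same point, rotated in depth, lands at different image positions under the two projections. The geometry and viewing distance below are assumptions for illustration, not the paper's stimulus parameters.

```python
# Sketch: a rotation-in-depth (about the vertical axis) changes a
# point's image position differently under perspective vs orthographic
# projection. Geometry and viewing distance are illustrative.
import numpy as np

def rotate_y(p, theta):
    """Rotate point p about the vertical (y) axis: rotation in depth."""
    c, s = np.cos(theta), np.sin(theta)
    x, y, z = p
    return np.array([c * x + s * z, y, -s * x + c * z])

def project_perspective(p, viewing_dist=5.0):
    x, y, z = p
    scale = viewing_dist / (viewing_dist + z)   # nearer -> larger
    return np.array([x * scale, y * scale])

def project_orthographic(p):
    return np.array(p[:2])                      # drop depth entirely

p = np.array([1.0, 0.5, 0.0])
q = rotate_y(p, np.deg2rad(30))
# orthographic projection ignores the depth coordinate, so only the
# foreshortened x survives; perspective projection also rescales the
# point as it rotates toward the viewer
```

Under orthographic projection a rotation-in-depth is pure foreshortening, whereas perspective projection adds the distance-dependent scaling that stereoscopic cues can disambiguate.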
NASA Astrophysics Data System (ADS)
Kleiber, Michael; Winkelholz, Carsten
2008-02-01
The aim of the presented research was to quantify the distortion of depth perception when using stereoscopic displays. The visualization parameters of the virtual reality system used, such as perspective, haploscopic separation, and width of stereoscopic separation, were varied. The experiment was designed to measure distortion in depth perception with respect to allocentric frames of reference. The results of the experiments indicate that some of the parameters have antithetic effects, which makes it possible to compensate for the distortion of depth perception over a range of depths. In contrast to earlier research, which reported underestimation of perceived depth, we found that depth was overestimated when using true projection parameters matching the position of the user's eyes and the display geometry.
Balance and coordination after viewing stereoscopic 3D television
Read, Jenny C. A.; Simonotto, Jennifer; Bohr, Iwo; Godfrey, Alan; Galna, Brook; Rochester, Lynn; Smulders, Tom V.
2015-01-01
Manufacturers and the media have raised the possibility that viewing stereoscopic 3D television (S3D TV) may cause temporary disruption to balance and visuomotor coordination. We looked for evidence of such effects in a laboratory-based study. Four hundred and thirty-three people aged 4–82 years old carried out tests of balance and coordination before and after viewing an 80 min movie in either conventional 2D or stereoscopic 3D, while wearing two triaxial accelerometers. Accelerometry produced little evidence of any change in body motion associated with S3D TV. We found no evidence that viewing the movie in S3D causes a detectable impairment in balance or in visuomotor coordination. PMID:26587261
Generating Stereoscopic Television Images With One Camera
NASA Technical Reports Server (NTRS)
Coan, Paul P.
1996-01-01
Straightforward technique for generating stereoscopic television images involves use of single television camera translated laterally between left- and right-eye positions. Camera acquires one of images (left- or right-eye image), and video signal from image delayed while camera translated to position where it acquires other image. Length of delay chosen so both images displayed simultaneously or as nearly simultaneously as necessary to obtain stereoscopic effect. Technique amenable to zooming in on small areas within broad scenes. Potential applications include three-dimensional viewing of geological features and meteorological events from spacecraft and aircraft, inspection of workpieces moving along conveyor belts, and aiding ground and water search-and-rescue operations. Also used to generate and display imagery for public education and general information, and possible for medical purposes.
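The timing at the heart of the technique can be sketched: if the camera translates at speed v across an interocular baseline b, the first-acquired image must be delayed by b/v so that both eyes' images display together. The baseline, speed, and frame rate below are illustrative numbers, not values from the brief.

```python
# Sketch: frame delay needed for the single-camera stereo technique,
# assuming the camera translates at constant speed across the
# interocular baseline. All numbers are illustrative assumptions.
def stereo_delay_frames(baseline_m, speed_m_s, frame_rate_hz=30.0):
    """Seconds and whole frames to delay the first-eye image so both
    images can be displayed (nearly) simultaneously."""
    delay_s = baseline_m / speed_m_s
    return delay_s, round(delay_s * frame_rate_hz)

delay_s, n_frames = stereo_delay_frames(baseline_m=0.065, speed_m_s=0.5)
# 0.13 s of camera travel -> delay by about 4 frames at 30 Hz
```

The sketch also shows the technique's limit: the scene must change little over b/v, which is why it suits slowly evolving subjects like geological features better than fast motion.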
Measurement of the flux of ultra high energy cosmic rays by the stereo technique
NASA Astrophysics Data System (ADS)
High Resolution Fly's Eye Collaboration; Abbasi, R. U.; Abu-Zayyad, T.; Al-Seady, M.; Allen, M.; Amann, J. F.; Archbold, G.; Belov, K.; Belz, J. W.; Bergman, D. R.; Blake, S. A.; Brusova, O. A.; Burt, G. W.; Cannon, C.; Cao, Z.; Deng, W.; Fedorova, Y.; Findlay, J.; Finley, C. B.; Gray, R. C.; Hanlon, W. F.; Hoffman, C. M.; Holzscheiter, M. H.; Hughes, G.; Hüntemeyer, P.; Ivanov, D.; Jones, B. F.; Jui, C. C. H.; Kim, K.; Kirn, M. A.; Loh, E. C.; Maestas, M. M.; Manago, N.; Marek, L. J.; Martens, K.; Matthews, J. A. J.; Matthews, J. N.; Moore, S. A.; O'Neill, A.; Painter, C. A.; Perera, L.; Reil, K.; Riehle, R.; Roberts, M. D.; Rodriguez, D.; Sasaki, M.; Schnetzer, S. R.; Scott, L. M.; Sinnis, G.; Smith, J. D.; Snow, R.; Sokolsky, P.; Springer, R. W.; Stokes, B. T.; Stratton, S. R.; Thomas, J. R.; Thomas, S. B.; Thomson, G. B.; Tupa, D.; Wiencke, L. R.; Zech, A.; Zhang, B. K.; Zhang, X.; Zhang, Y.
2009-08-01
The High Resolution Fly’s Eye (HiRes) experiment has measured the flux of ultrahigh energy cosmic rays using the stereoscopic air fluorescence technique. The HiRes experiment consists of two detectors that observe cosmic ray showers via the fluorescence light they emit. HiRes data can be analyzed in monocular mode, where each detector is treated separately, or in stereoscopic mode where they are considered together. Using the monocular mode the HiRes collaboration measured the cosmic ray spectrum and made the first observation of the Greisen-Zatsepin-Kuzmin cutoff. In this paper we present the cosmic ray spectrum measured by the stereoscopic technique. Good agreement is found with the monocular spectrum in all details.
Casting Light and Shadows on a Saharan Dust Storm
NASA Technical Reports Server (NTRS)
2003-01-01
On March 2, 2003, near-surface winds carried a large amount of Saharan dust aloft and transported the material westward over the Atlantic Ocean. These observations from the Multi-angle Imaging SpectroRadiometer (MISR) aboard NASA's Terra satellite depict an area near the Cape Verde Islands (situated about 700 kilometers off of Africa's western coast) and provide images of the dust plume along with measurements of its height and motion. Tracking the three-dimensional extent and motion of air masses containing dust or other types of aerosols provides data that can be used to verify and improve computer simulations of particulate transport over large distances, with application to enhancing our understanding of the effects of such particles on meteorology, ocean biological productivity, and human health. MISR images the Earth by measuring the spatial patterns of reflected sunlight. In the upper panel of the still image pair, the observations are displayed as a natural-color snapshot from MISR's vertical-viewing (nadir) camera. High-altitude cirrus clouds cast shadows on the underlying ocean and dust layer, which are visible in shades of blue and tan, respectively. In the lower panel, heights derived from automated stereoscopic processing of MISR's multi-angle imagery show the cirrus clouds (yellow areas) to be situated about 12 kilometers above sea level. The distinctive spatial patterns of these clouds provide the necessary contrast to enable automated feature matching between images acquired at different view angles. For most of the dust layer, which is spatially much more homogeneous, the stereoscopic approach was unable to retrieve elevation data.
However, the edges of shadows cast by the cirrus clouds onto the dust (indicated by blue and cyan pixels) provide sufficient spatial contrast for a retrieval of the dust layer's height, and indicate that the top of the layer is only about 2.5 kilometers above sea level. Motion of the dust and clouds is directly observable with the assistance of the multi-angle 'fly-over' animation (below). The frames of the animation consist of data acquired by the 70-degree, 60-degree, 46-degree and 26-degree forward-viewing cameras in sequence, followed by the images from the nadir camera and each of the four backward-viewing cameras, ending with the 70-degree backward image. Much of the south-to-north shift in the position of the clouds is due to geometric parallax between the nine view angles (rather than true motion), whereas the west-to-east motion is due to actual motion of the clouds over the seven minutes during which all nine cameras observed the scene. MISR's automated data processing retrieved a primarily westerly (eastward) motion of these clouds with speeds of 30-40 meters per second. Note that there is much less geometric parallax for the cloud shadows owing to the relatively low altitude of the dust layer upon which the shadows are cast (the amount of parallax is proportional to elevation, and a feature at the surface would have no geometric parallax at all); however, the westerly motion of the shadows matches the actual motion of the clouds. The automated processing was not able to resolve a velocity for the dust plume, but by manually tracking dust features within the plume images that comprise the animation sequence we can derive an easterly (westward) speed of about 16 meters per second. These analyses and visualizations of the MISR data demonstrate that not only are the cirrus clouds and dust separated significantly in elevation, but they exist in completely different wind regimes, with the clouds moving toward the east and the dust moving toward the west.
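The parallax argument above (apparent displacement proportional to elevation, zero at the surface) can be sketched numerically. The heights and the seven-minute overpass interval follow the text; the specific along-track formula and the chosen angle pair are simplifying assumptions.

```python
# Sketch: along-track parallax between two MISR-style view angles is
# proportional to feature elevation (surface features show none), and
# apparent displacement over the overpass interval gives wind speed.
# The geometry is simplified to purely along-track viewing.
import math

def parallax_km(height_km, angle1_deg, angle2_deg):
    """Apparent along-track shift of a feature between two view angles."""
    return height_km * (math.tan(math.radians(angle1_deg)) -
                        math.tan(math.radians(angle2_deg)))

# cirrus near 12 km shows far more parallax than dust near 2.5 km,
# which is why the two layers can be separated stereoscopically
cirrus = parallax_km(12.0, 70.0, 0.0)
dust = parallax_km(2.5, 70.0, 0.0)

def wind_speed_m_s(displacement_km, interval_s=7 * 60):
    """True feature motion observed across the nine-camera sequence."""
    return displacement_km * 1000.0 / interval_s
```

Disentangling the elevation-dependent parallax from the true wind-driven displacement is exactly what MISR's automated stereoscopic processing does when it reports both heights and motion vectors.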
The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously and every 9 days views the entire globe between 82 degrees north and 82 degrees south latitude. These data products were generated from a portion of the imagery acquired during Terra orbit 17040. The panels cover an area of about 312 kilometers x 242 kilometers, and use data from blocks 74 to 77 within World Reference System-2 path 207. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
Tropical Cyclone Monty Strikes Western Australia
NASA Technical Reports Server (NTRS)
2004-01-01
The Multi-angle Imaging SpectroRadiometer (MISR) acquired these natural color images and cloud top height measurements for Monty before and after the storm made landfall over the remote Pilbara region of Western Australia, on February 29 and March 2, 2004 (shown as the left and right-hand image sets, respectively). On February 29, Monty was upgraded to category 4 cyclone status. After traveling inland about 300 kilometers to the south, the cyclonic circulation had decayed considerably, although category 3 force winds were reported on the ground. Some parts of the drought-affected Pilbara region received more than 300 millimeters of rainfall, and serious and extensive flooding has occurred. The natural color images cover much of the same area, although the right-hand panels are offset slightly to the east. Automated stereoscopic processing of data from multiple MISR cameras was utilized to produce the cloud-top height fields. The distinctive spatial patterns of the clouds provide the necessary contrast to enable automated feature matching between images acquired at different view angles. The height retrievals are at this stage uncorrected for the effects of the high winds associated with cyclone rotation. Areas where heights could not be retrieved are shown in dark gray. The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously and every 9 days views the entire globe between 82 degrees north and 82 degrees south latitude. These data products were generated from a portion of the imagery acquired during Terra orbits 22335 and 22364. The panels cover an area of about 380 kilometers x 985 kilometers, and utilize data from blocks 105 to 111 within World Reference System-2 paths 115 and 113. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. 
JPL is a division of the California Institute of Technology.
Training Performance of Laparoscopic Surgery in Two- and Three-Dimensional Displays.
Lin, Chiuhsiang Joe; Cheng, Chih-Feng; Chen, Hung-Jen; Wu, Kuan-Ying
2017-04-01
This research investigated differences in the effects of a state-of-the-art stereoscopic 3-dimensional (3D) display and a traditional 2-dimensional (2D) display in simulated laparoscopic surgery over a longer duration than in previous publications, and studied the learning effects of the 2 display systems on novices. A randomized experiment with 2 factors, image dimensions and image sequence, was conducted to investigate differences in the mean movement time, the mean error frequency, NASA-TLX cognitive workload, and visual fatigue in pegboard and circle-tracing tasks. The stereoscopic 3D display had advantages in mean movement time (P < .001 and P = .002) and mean error frequency (P = .010 and P = .008) in both tasks. There were no significant differences in objective visual fatigue (P = .729 and P = .422) or in NASA-TLX cognitive workload (P = .605 and P = .937) between the 3D and 2D displays on either task. For the learning effect, participants who used the stereoscopic 3D display first had shorter mean movement times in the 2D display environment on both the pegboard (P = .011) and circle-tracing (P = .017) tasks. The results of this research suggest that a stereoscopic system does not produce higher objective visual fatigue or cognitive workload than a 2D system, and it might reduce the performance time and increase the precision of surgical operations. In addition, the learning efficiency of the stereoscopic system for the novices in this study demonstrated its value for training and education in laparoscopic surgery.
Cui, Dongmei; Wilson, Timothy D; Rockhold, Robin W; Lehman, Michael N; Lynch, James C
2017-01-01
The head and neck region is one of the most complex areas featured in the medical gross anatomy curriculum. The effectiveness of using three-dimensional (3D) models to teach anatomy is a topic of much discussion in medical education research. However, the use of 3D stereoscopic models of the head and neck circulation in anatomy education has not been previously studied in detail. This study investigated whether 3D stereoscopic models created from computed tomographic angiography (CTA) data were efficacious teaching tools for head and neck vascular anatomy. The test subjects were first year medical students at the University of Mississippi Medical Center. The assessment tools included anatomy knowledge tests (prelearning and postlearning session knowledge tests), mental rotation tests (spatial ability; presession and postsession MRT), and a satisfaction survey. Results were analyzed using a Wilcoxon rank-sum test and linear regression analysis. A total of 39 first year medical students participated in the study. The results indicated that all students who were exposed to the stereoscopic 3D vascular models in 3D learning sessions increased their ability to correctly identify head and neck vascular anatomy. Most importantly, for students with low spatial ability, 3D learning sessions improved postsession knowledge scores to a level comparable to that demonstrated by students with high spatial ability, indicating that the use of 3D stereoscopic models may be particularly valuable to students with low spatial ability. Anat Sci Educ 10: 34-45. © 2016 American Association of Anatomists.
Keebler, Joseph R; Jentsch, Florian; Schuster, David
2014-12-01
We investigated the effects of active stereoscopic simulation-based training and individual differences in video game experience on multiple indices of combat identification (CID) performance. Fratricide is a major problem in combat operations involving military vehicles. In this research, we aimed to evaluate the effects of training on CID performance in order to reduce fratricide errors. Individuals were trained on 12 combat vehicles in a simulation, which were presented via either a non-stereoscopic or active stereoscopic display using NVIDIA's GeForce shutter glass technology. Self-report was used to assess video game experience, leading to four between-subjects groups: high video game experience with stereoscopy, low video game experience with stereoscopy, high video game experience without stereoscopy, and low video game experience without stereoscopy. We then tested participants on their memory of each vehicle's alliance and name across multiple measures, including photographs and videos. There was a main effect for both video game experience and stereoscopy across many of the dependent measures. Further, we found interactions between video game experience and stereoscopic training, such that those individuals with high video game experience in the non-stereoscopic group had the highest performance outcomes in the sample on multiple dependent measures. This study suggests that individual differences in video game experience may be predictive of enhanced performance in CID tasks. Selection based on video game experience in CID tasks may be a useful strategy for future military training. Future research should investigate the generalizability of these effects, such as identification through unmanned vehicle sensors.
What is 3D good for? A review of human performance on stereoscopic 3D displays
NASA Astrophysics Data System (ADS)
McIntire, John P.; Havig, Paul R.; Geiselman, Eric E.
2012-06-01
This work reviews the human factors-related literature on the task performance implications of stereoscopic 3D displays, in order to point out the specific performance benefits (or lack thereof) one might reasonably expect to observe when utilizing these displays. What exactly is 3D good for? Relative to traditional 2D displays, stereoscopic displays have been shown to enhance performance on a variety of depth-related tasks. These tasks include judging absolute and relative distances, finding and identifying objects (by breaking camouflage and eliciting perceptual "pop-out"), performing spatial manipulations of objects (object positioning, orienting, and tracking), and navigating. More cognitively, stereoscopic displays can improve the spatial understanding of 3D scenes or objects, improve memory/recall of scenes or objects, and improve learning of spatial relationships and environments. However, for tasks that are relatively simple, that do not strictly require depth information for good performance, where other strong cues to depth can be utilized, or for depth tasks that lie outside the effective viewing volume of the display, the purported performance benefits of 3D may be small or altogether absent. Stereoscopic 3D displays come with a host of unique human factors problems, including the simulator-sickness-type symptoms of eyestrain, headache, fatigue, disorientation, nausea, and malaise, which appear to affect large numbers of viewers (perhaps as many as 25% to 50% of the general population). Thus, 3D technology should be wielded delicately and applied carefully, and perhaps used only where necessary to ensure good performance.
Di Marco, Aimee N; Jeyakumar, Jenifa; Pratt, Philip J; Yang, Guang-Zhong; Darzi, Ara W
2016-01-01
To compare surgical performance with transanal endoscopic surgery (TES) using a novel 3-dimensional (3D) stereoscopic viewer against the current modalities of a 3D stereoendoscope, 3D, and 2-dimensional (2D) high-definition monitors. TES is accepted as the primary treatment for selected rectal tumors. Current TES systems offer a 2D monitor, or 3D image, viewed directly via a stereoendoscope, necessitating an uncomfortable operating position. To address this and provide a platform for future image augmentation, a 3D stereoscopic display was created. Forty participants, of mixed experience level, completed a simulated TES task using 4 visual displays (novel stereoscopic viewer and currently utilized stereoendoscope, 3D, and 2D high-definition monitors) in a randomly allocated order. Primary outcome measures were: time taken, path length, and accuracy. Secondary outcomes were: task workload and participant questionnaire results. Median time taken and path length were significantly shorter for the novel viewer versus 2D and 3D, and not significantly different to the traditional stereoendoscope. Significant differences were found in accuracy, task workload, and questionnaire assessment in favor of the novel viewer, as compared to all 3 modalities. This novel 3D stereoscopic viewer allows surgical performance in TES equivalent to that achieved using the current stereoendoscope and superior to standard 2D and 3D displays, but with lower physical and mental demands for the surgeon. Participants expressed a preference for this system, ranking it more highly on a questionnaire. Clinical translation of this work has begun with the novel viewer being used in 5 TES patients.
NASA Astrophysics Data System (ADS)
Xie, Yaoqin; Xing, Lei; Gu, Jia; Liu, Wu
2013-06-01
Real-time knowledge of tumor position during radiation therapy is essential to overcome the adverse effect of intra-fractional organ motion. The goal of this work is to develop a tumor tracking strategy by effectively utilizing the inherent image features of stereoscopic x-ray images acquired during dose delivery. In stereoscopic x-ray image guided radiation delivery, two orthogonal x-ray images are acquired either simultaneously or sequentially. The essence of markerless tumor tracking is the reliable identification of inherent points with distinct tissue features on each projection image and their association between the two images. The identification of feature points on a planar x-ray image is realized by searching for points with high intensity gradient. The feature points are associated by using the scale-invariant feature transform (SIFT) descriptor. The performance of the proposed technique is evaluated using images of a motion phantom and four archived clinical cases acquired using either a CyberKnife equipped with a stereoscopic x-ray imaging system or a LINAC equipped with an onboard kV imager and an electronic portal imaging device. In the phantom study, the results obtained using the proposed method agree with the measurements to within 2 mm in all three directions. In the clinical study, the mean error is 0.48 ± 0.46 mm across the four patient datasets with 144 sequential images. In this work, a tissue feature-based tracking method for stereoscopic x-ray image guided radiation therapy is developed. The technique avoids the invasive procedure of fiducial implantation and may greatly facilitate the clinical workflow.
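The two core steps the abstract describes are selecting points of high intensity gradient and then associating them across the two projections. The selection step alone can be sketched as below; this is a toy stand-in, not the paper's implementation, and the association step would additionally match descriptors (e.g., SIFT).

```python
import numpy as np

def gradient_feature_points(img, k=3):
    """Pick the k pixels with the largest intensity-gradient magnitude,
    a simplified version of gradient-based feature-point selection."""
    gy, gx = np.gradient(img.astype(float))   # gradients along rows, cols
    mag = np.hypot(gx, gy)
    flat = np.argsort(mag, axis=None)[::-1][:k]   # indices of top-k magnitudes
    return [tuple(np.unravel_index(i, img.shape)) for i in flat]

img = np.zeros((8, 8))
img[:, 4:] = 100.0   # a vertical step edge between columns 3 and 4
pts = gradient_feature_points(img, k=3)
print(all(c in (3, 4) for _, c in pts))   # features cluster on the edge
```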
ERIC Educational Resources Information Center
Lau, Kung Wong
2015-01-01
Purpose: This study aims to deepen understanding of the use of stereoscopic 3D technology (stereo3D) in facilitating organizational learning. The emergence of advanced virtual technologies, in particular to the stereo3D virtual reality, has fundamentally changed the ways in which organizations train their employees. However, in academic or…
ERIC Educational Resources Information Center
Price, C. Aaron; Lee, Hee-Sun; Subbarao, Mark; Kasal, Evan; Aguilera, Julieta
2015-01-01
Science centers such as museums and planetariums have used stereoscopic ("three-dimensional") films to draw interest from and educate their visitors for decades. Despite the fact that most adults who are finished with their formal education get their science knowledge from such free-choice learning settings, very little is known about the…
Teaching with Stereoscopic Video: Opportunities and Challenges
NASA Astrophysics Data System (ADS)
Variano, Evan
2017-11-01
I will present my work on creating stereoscopic videos for fluid pedagogy. I discuss a variety of workflows for content creation and a variety of platforms for content delivery. I review the qualitative lessons learned when teaching with this material, and discuss outlook for the future. This work was partially supported by the NSF award ENG-1604026 and the UC Berkeley Student Technology Fund.
Stereoscopic Vascular Models of the Head and Neck: A Computed Tomography Angiography Visualization
ERIC Educational Resources Information Center
Cui, Dongmei; Lynch, James C.; Smith, Andrew D.; Wilson, Timothy D.; Lehman, Michael N.
2016-01-01
Computer-assisted 3D models are used in some medical and allied health science schools; however, they are often limited to online use and 2D flat screen-based imaging. Few schools take advantage of 3D stereoscopic learning tools in anatomy education and clinically relevant anatomical variations when teaching anatomy. A new approach to teaching…
Single-channel stereoscopic ophthalmology microscope based on TRD
NASA Astrophysics Data System (ADS)
Radfar, Edalat; Park, Jihoon; Lee, Sangyeob; Ha, Myungjin; Yu, Sungkon; Jang, Seulki; Jung, Byungjo
2016-03-01
A stereoscopic imaging modality was developed for application in ophthalmology surgical microscopes. A previous study introduced a single-channel stereoscopic video imaging modality based on a transparent rotating deflector (SSVIM-TRD), in which the two different view angles (the image disparity) are generated by imaging through a transparent rotating deflector (TRD) mounted on a stepping motor and placed in a lens system. In this case, the image disparity is a function of the refractive index and the rotation angle of the TRD. The real-time single-channel stereoscopic ophthalmology microscope (SSOM) based on the TRD improves on this design in real-time control and programming, imaging speed, and illumination method. Image quality assessments were performed to investigate image quality and stability during TRD operation. Results showed little significant difference in image quality in terms of the stability of the structural similarity (SSIM) index. A subjective analysis performed with 15 blinded observers to evaluate depth perception showed significant improvement in depth perception capability. Together with these evaluation results, preliminary rabbit-eye imaging indicated that the SSOM could be utilized as an ophthalmic operating microscope to overcome some of the limitations of conventional ones.
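The disparity's dependence on the TRD's refractive index and rotation angle follows the standard geometrical-optics formula for the lateral displacement of a ray passing through a tilted plane-parallel plate. A sketch with illustrative values (not the paper's actual plate thickness or material):

```python
import math

def plate_displacement(t_mm, n, theta_deg):
    """Lateral ray displacement through a tilted plane-parallel plate:
    d = t * sin(theta) * (1 - cos(theta) / sqrt(n^2 - sin^2(theta)))."""
    th = math.radians(theta_deg)
    return t_mm * math.sin(th) * (
        1 - math.cos(th) / math.sqrt(n ** 2 - math.sin(th) ** 2))

# Illustrative only: a 10 mm glass plate (n = 1.5) tilted by 20 degrees.
print(round(plate_displacement(10.0, 1.5, 20.0), 3))
```

Rotating the plate thus sweeps the apparent viewpoint back and forth, which is what lets a single optical channel alternate between two disparate views.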
NASA Astrophysics Data System (ADS)
Naimark, Michael
1997-05-01
Two immersive virtual environments produced as art installations investigate 'sense of place' in different but complementary ways. One is a stereoscopic moviemap, the other a stereoscopic panorama. Moviemaps are interactive systems which allow 'travel' along pre-recorded routes with some control over speed and direction. Panoramas are 360 degree visual representations dating back to the late 18th century but which have recently experienced renewed interest due to 'virtual reality' systems. Moviemaps allow 'moving around' while panoramas allow 'looking around,' but to date there has been little or no attempt to produce either in stereo from camera-based material. 'See Banff!' is a stereoscopic moviemap about landscape, tourism, and growth in the Canadian Rocky Mountains. It was filmed with twin 16 mm cameras and displayed as a single-user experience housed in a cabinet resembling a century-old kinetoscope, with a crank on the side for 'moving through' the material. 'Be Now Here (Welcome to the Neighborhood)' (1995-6) is a stereoscopic panorama filmed in public gathering places around the world, based upon the UNESCO World Heritage 'In Danger' list. It was filmed with twin 35 mm motion picture cameras on a rotating tripod and displayed using a synchronized rotating floor.
Stereoscopic medical imaging collaboration system
NASA Astrophysics Data System (ADS)
Okuyama, Fumio; Hirano, Takenori; Nakabayasi, Yuusuke; Minoura, Hirohito; Tsuruoka, Shinji
2007-02-01
The computerization of clinical records and the adoption of multimedia have improved medical services in medical facilities. It is very important for patients to receive comprehensible informed consent. Therefore, the doctor should plainly explain the purpose and content of diagnoses and treatments to the patient. We propose and design a Telemedicine Imaging Collaboration System which presents three-dimensional medical images, such as X-ray CT and MRI, as stereoscopic images by using a virtual common information space and operating the images from a remote location. This system is composed of two personal computers, two 15-inch parallax-barrier stereoscopic LCD displays (Sharp LL-151D), one 1 Gbps router, and 1000base LAN cables. The software is composed of a DICOM-format data transfer program, an image operation program, a communication program between the two personal computers, and a real-time rendering program. Two identical images of 512 × 768 pixels are displayed on the two stereoscopic LCD displays, and both images can be expanded and reduced by mouse operation. This system can offer a comprehensible three-dimensional image of the diseased part. Therefore, the doctor and the patient can easily understand it, depending on their needs.
Learning Receptive Fields and Quality Lookups for Blind Quality Assessment of Stereoscopic Images.
Shao, Feng; Lin, Weisi; Wang, Shanshan; Jiang, Gangyi; Yu, Mei; Dai, Qionghai
2016-03-01
Blind quality assessment of 3D images encounters more new challenges than its 2D counterparts. In this paper, we propose a blind quality assessment for stereoscopic images by learning the characteristics of receptive fields (RFs) from the perspective of dictionary learning, and constructing quality lookups to replace human opinion scores without performance loss. The important feature of the proposed method is that we do not need a large set of samples of distorted stereoscopic images and the corresponding human opinion scores to learn a regression model. To be more specific, in the training phase, we learn local RFs (LRFs) and global RFs (GRFs) from the reference and distorted stereoscopic images, respectively, and construct their corresponding local quality lookups (LQLs) and global quality lookups (GQLs). In the testing phase, blind quality pooling can be easily achieved by searching optimal GRF and LRF indexes from the learnt LQLs and GQLs, and the quality score is obtained by combining the LRF and GRF indexes together. Experimental results on three publicly available 3D image quality assessment databases demonstrate that, in comparison with the existing methods, the devised algorithm achieves highly consistent alignment with subjective assessment.
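The lookup idea in the testing phase, replacing a trained regression model with a search for the closest learned receptive field and its precomputed quality entry, can be illustrated with a toy example. The atoms and scores below are hypothetical, not values from the paper.

```python
import numpy as np

def lookup_quality(feature, atoms, quality_table):
    """Return the quality entry of the dictionary atom (receptive
    field) nearest to the test feature vector; no regression needed."""
    dists = np.linalg.norm(atoms - feature, axis=1)
    return quality_table[int(np.argmin(dists))]

atoms = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])   # learned RFs
quality = np.array([0.9, 0.5, 0.1])                      # hypothetical scores
print(lookup_quality(np.array([0.9, 1.1]), atoms, quality))
```

In the paper the pooling combines separate local and global lookups; this sketch shows a single lookup only.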
Toward a 3D video format for auto-stereoscopic displays
NASA Astrophysics Data System (ADS)
Vetro, Anthony; Yea, Sehoon; Smolic, Aljoscha
2008-08-01
There has been increased momentum recently in the production of 3D content for cinema applications; for the most part, this has been limited to stereo content. There are also a variety of display technologies on the market that support 3DTV, each offering a different viewing experience and having different input requirements. More specifically, stereoscopic displays support stereo content and require glasses, while auto-stereoscopic displays avoid the need for glasses by rendering view-dependent stereo pairs for a multitude of viewing angles. To realize high quality auto-stereoscopic displays, multiple views of the video must either be provided as input to the display, or these views must be created locally at the display. The former approach has difficulties in that the production environment is typically limited to stereo, and transmission bandwidth for a large number of views is not likely to be available. This paper discusses an emerging 3D data format that enables the latter approach to be realized. A new framework for efficiently representing a 3D scene and enabling the reconstruction of an arbitrarily large number of views prior to rendering is introduced. Several design challenges are also highlighted through experimental results.
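Creating views "locally at the display" typically means depth-image-based rendering: shifting texture pixels horizontally in proportion to their depth to synthesize a virtual viewpoint. A toy single-view sketch under that assumption (not the specific 3D format the paper proposes):

```python
import numpy as np

def render_view(texture, depth, shift_scale):
    """Toy depth-image-based rendering: shift each pixel horizontally
    by an amount proportional to its depth value; pixels shifted out
    of frame are dropped, and uncovered pixels stay zero (holes)."""
    h, w = texture.shape
    out = np.zeros_like(texture)
    for y in range(h):
        for x in range(w):
            nx = x + int(round(shift_scale * depth[y, x]))
            if 0 <= nx < w:
                out[y, nx] = texture[y, x]   # later pixels overwrite on collision
    return out

tex = np.arange(4.0).reshape(1, 4)           # [[0, 1, 2, 3]]
dep = np.array([[0.0, 0.0, 1.0, 1.0]])       # right half is "nearer"
print(render_view(tex, dep, 1).tolist())
```

The zeros left behind are the disocclusion holes that practical view-synthesis schemes must fill, one of the design challenges the paper alludes to.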
NASA Astrophysics Data System (ADS)
Zhang, Wenlan; Luo, Ting; Jiang, Gangyi; Jiang, Qiuping; Ying, Hongwei; Lu, Jing
2016-06-01
Visual comfort assessment (VCA) for stereoscopic images is a particularly significant yet challenging task in 3D quality of experience research field. Although the subjective assessment given by human observers is known as the most reliable way to evaluate the experienced visual discomfort, it is time-consuming and non-systematic. Therefore, it is of great importance to develop objective VCA approaches that can faithfully predict the degree of visual discomfort as human beings do. In this paper, a novel two-stage objective VCA framework is proposed. The main contribution of this study is that the important visual attention mechanism of human visual system is incorporated for visual comfort-aware feature extraction. Specifically, in the first stage, we first construct an adaptive 3D visual saliency detection model to derive saliency map of a stereoscopic image, and then a set of saliency-weighted disparity statistics are computed and combined to form a single feature vector to represent a stereoscopic image in terms of visual comfort. In the second stage, a high dimensional feature vector is fused into a single visual comfort score by performing random forest algorithm. Experimental results on two benchmark databases confirm the superior performance of the proposed approach.
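The first stage's saliency-weighted disparity statistics can be sketched as follows. The paper's feature vector is richer; this shows only a weighted mean and variance, with made-up disparity and saliency maps.

```python
import numpy as np

def comfort_features(disparity, saliency):
    """Saliency-weighted disparity statistics: attended regions
    contribute more to the visual-comfort feature vector."""
    w = saliency / saliency.sum()            # normalize weights to sum to 1
    mean = float((w * disparity).sum())
    var = float((w * (disparity - mean) ** 2).sum())
    return mean, var

disp = np.array([[0.0, 10.0], [0.0, 10.0]])  # disparity map (toy values)
sal = np.array([[1.0, 3.0], [1.0, 3.0]])     # right half is more salient
mean, var = comfort_features(disp, sal)
print(mean)
```

In the second stage such features would be fed to a random forest regressor to produce the final comfort score.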
NASA Technical Reports Server (NTRS)
Parrish, Russell V.; Williams, Steven P.
1993-01-01
To provide stereopsis, binocular helmet-mounted display (HMD) systems must trade some of the total field of view available from their two monocular fields to obtain a partial overlap region. The visual field then provides a mixture of cues, with monocular regions on both peripheries and a binoptic (the same image in both eyes) region or, if lateral disparity is introduced to produce two images, a stereoscopic region in the overlapped center. This paper reports on in-simulator assessment of the trade-offs arising from the mixture of color cueing and monocular, binoptic, and stereoscopic cueing information in peripheral monitoring displays as utilized in HMD systems. The accompanying effect of stereoscopic cueing in the tracking information in the central region of the display is also assessed. The pilot's task for the study was to fly at a prescribed height above an undulating pathway in the sky while monitoring a dynamic bar chart displayed in the periphery of their field of view. Control of the simulated rotorcraft was limited to the longitudinal and vertical degrees of freedom to ensure the lateral separation of the viewing conditions of the concurrent tasks.
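The field-of-view trade in the first sentence is simple arithmetic: the total horizontal field equals the two monocular fields minus their shared overlap. A sketch with illustrative angles (not the values of any particular HMD):

```python
def total_fov(monocular_deg, overlap_deg):
    """Horizontal field of view of a partial-overlap binocular HMD:
    two monocular fields minus the shared (binoptic or stereoscopic)
    overlap region."""
    return 2 * monocular_deg - overlap_deg

# e.g. two 60-degree monocular channels with a 40-degree overlap
print(total_fov(60, 40))   # 80-degree total field, 40 degrees of it binocular
```

Widening the overlap improves the central stereoscopic region at the direct cost of the monocular periphery, which is exactly the mixture of cues the study assesses.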
Sehi, Mitra; Greenfield, David S.
2006-01-01
Purpose To describe a case of progressive glaucomatous optic neuropathy using scanning laser polarimetry with fixed (SLP-FCC) and variable corneal compensation (SLP-VCC) and optical coherence tomography (OCT). Design Observational case report. Methods A 21-year-old male with juvenile primary open-angle glaucoma showed progression because of noncompliance with therapy. The patient underwent dilated stereoscopic examination and photography of the optic disk, standard automated perimetry (SAP), OCT, and SLP imaging with FCC and VCC at the baseline examination and after four years of follow-up. Results Optic disk, retinal nerve fiber layer (RNFL) atrophy, and SAP progression were observed. Reduction in mean RNFL thickness (average, superior, inferior) was 18, 18, and 27 microns (OCT); 22, 40, and 17 microns (SLP-FCC); and 6, 12, and 12 microns (SLP-VCC), respectively. Conclusions This case demonstrates that digital imaging of the peripapillary RNFL is capable of documenting and measuring progressive glaucomatous RNFL atrophy. PMID:17157591
Generation of High Resolution Global DSM from ALOS PRISM
NASA Astrophysics Data System (ADS)
Takaku, J.; Tadono, T.; Tsutsui, K.
2014-04-01
Panchromatic Remote-sensing Instrument for Stereo Mapping (PRISM), one of the onboard sensors carried on the Advanced Land Observing Satellite (ALOS), was designed to generate worldwide topographic data from its optical stereoscopic observations. The sensor consists of three independent panchromatic radiometers viewing forward, nadir, and backward at 2.5 m ground resolution, producing a triplet of stereoscopic images along its track. The sensor observed a huge volume of stereo imagery all over the world during the mission life of the satellite from 2006 through 2011. We have semi-automatically processed Digital Surface Model (DSM) data from the image archives in some limited areas. The height accuracy of the dataset was estimated at less than 5 m (rms) from evaluation with ground control points (GCPs) or reference DSMs derived from Light Detection and Ranging (LiDAR). We then decided to process global DSM datasets from all available archives of PRISM stereo images by the end of March 2016. This paper briefly reports on the latest processing algorithms for the global DSM datasets as well as preliminary results on some test sites. The accuracies and error characteristics of the datasets are analyzed and discussed for various fields by comparison with existing global datasets such as Ice, Cloud, and land Elevation Satellite (ICESat) data and Shuttle Radar Topography Mission (SRTM) data, as well as the GCPs and the reference airborne LiDAR/DSM.
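The quoted height accuracy (< 5 m rms) is a root-mean-square difference between DSM heights and reference heights from GCPs or LiDAR. A minimal sketch of that statistic, with made-up heights:

```python
import math

def rms_error(dsm_heights, gcp_heights):
    """Root-mean-square height error of a DSM against ground control
    points (or any reference elevations) at the same locations."""
    diffs = [d - g for d, g in zip(dsm_heights, gcp_heights)]
    return math.sqrt(sum(e * e for e in diffs) / len(diffs))

print(rms_error([103.0, 98.0, 101.0], [100.0, 100.0, 100.0]))
```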
Viewpoint Dependent Imaging: An Interactive Stereoscopic Display
NASA Astrophysics Data System (ADS)
Fisher, Scott
1983-04-01
Design and implementation of a viewpoint-dependent imaging system is described. The resultant display is an interactive, lifesize, stereoscopic image that becomes a window into a three-dimensional visual environment. As the user physically changes his viewpoint of the represented data in relation to the display surface, the image is continuously updated. The changing viewpoints are retrieved from a comprehensive stereoscopic image array stored on computer-controlled optical videodisc and fluidly presented in coordination with the viewer's movements as detected by a body-tracking device. This imaging system is an attempt to more closely represent an observer's interactive perceptual experience of the visual world by presenting sensory information cues not offered by traditional media technologies: binocular parallax, motion parallax, and motion perspective. Unlike holographic imaging, this display requires relatively low bandwidth.
Accommodation training in foreign workers.
Takada, Masumi; Miyao, Masaru; Matsuura, Yasuyuki; Takada, Hiroki
2013-01-01
By relaxing the contracted focus-adjustment muscles around the eyeball, known as the ciliary and extraocular muscles, the degree of pseudomyopia can be reduced. This understanding has led to accommodation training in which a visual target is presented in stereoscopic video clips. However, it has been pointed out that motion sickness can be induced by viewing stereoscopic video clips. In Measurement 1 of the present study, we verified whether the new 3D technology reduced the severity of motion sickness in accordance with stabilometry. We then evaluated the short-term effects of accommodation training using new stereoscopic video clips on foreign workers (11 females) suffering from eye fatigue in Measurement 2. The foreign workers were trained for three days. As a result, visual acuity was statistically improved by continuous accommodation training, which will help promote ciliary muscle stretching.
An MR-compatible stereoscopic in-room 3D display for MR-guided interventions.
Brunner, Alexander; Groebner, Jens; Umathum, Reiner; Maier, Florian; Semmler, Wolfhard; Bock, Michael
2014-08-01
A commercial three-dimensional (3D) monitor was modified for use inside the scanner room to provide stereoscopic real-time visualization during magnetic resonance (MR)-guided interventions, and tested in a catheter-tracking phantom experiment at 1.5 T. Brightness, uniformity, radio frequency (RF) emissions and MR image interferences were measured. Due to modifications, the center luminance of the 3D monitor was reduced by 14%, and the addition of a Faraday shield further reduced the remaining luminance by 31%. RF emissions could be effectively shielded; only a minor signal-to-noise ratio (SNR) decrease of 4.6% was observed during imaging. During the tracking experiment, the 3D orientation of the catheter and vessel structures in the phantom could be visualized stereoscopically.
3D gaze tracking system for NVidia 3D Vision®.
Wibirama, Sunu; Hamamoto, Kazuhiko
2013-01-01
Inappropriate parallax settings in stereoscopic content generally cause visual fatigue and visual discomfort. To optimize three-dimensional (3D) effects in stereoscopic content while taking health issues into account, understanding how users gaze in 3D directions in virtual space is currently an important research topic. In this paper, we report the development of a novel 3D gaze tracking system for NVIDIA 3D Vision® to be used with desktop stereoscopic displays. We suggest an optimized geometric method to accurately measure the position of a virtual 3D object. Our experimental results show that the proposed system achieved better accuracy than the conventional geometric method, with average errors of 0.83 cm, 0.87 cm, and 1.06 cm in the X, Y, and Z dimensions, respectively.
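A common geometric approach to this problem, and plausibly the starting point the paper's optimized method refines (the paper's own formulation is not reproduced here), is to estimate the 3D gaze point as the midpoint of the shortest segment between the two eyes' gaze rays:

```python
import numpy as np

def gaze_point_3d(p1, d1, p2, d2):
    """Midpoint of the closest approach of two gaze rays p + t*d,
    the classic closest-point-between-two-lines construction."""
    d1, d2 = (v / np.linalg.norm(v)
              for v in (np.asarray(d1, float), np.asarray(d2, float)))
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    # Solve for the ray parameters minimizing |(p1 + t1*d1) - (p2 + t2*d2)|
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = p1 - p2
    denom = a * c - b * b
    t1 = (b * (d2 @ w) - c * (d1 @ w)) / denom
    t2 = (a * (d2 @ w) - b * (d1 @ w)) / denom
    return (p1 + t1 * d1 + p2 + t2 * d2) / 2

# Eyes 6 cm apart, both fixating a target 40 cm straight ahead.
pt = gaze_point_3d([-3, 0, 0], [3, 0, 40], [3, 0, 0], [-3, 0, 40])
print(np.round(pt, 6).tolist())
```

Because measured gaze rays rarely intersect exactly, the midpoint construction (rather than a true intersection) is what makes the estimate well defined.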
Assessing the precision of gaze following using a stereoscopic 3D virtual reality setting.
Atabaki, Artin; Marciniak, Karolina; Dicke, Peter W; Thier, Peter
2015-07-01
Despite the ecological importance of gaze following, little is known about the underlying neuronal processes which allow us to extract gaze direction from the geometric features of the eye and head of a conspecific. In order to understand the neuronal mechanisms underlying this ability, a careful description of the capacity and the limitations of gaze following at the behavioral level is needed. Previous studies of gaze following, which relied on naturalistic settings, have the disadvantage of allowing only very limited control of potentially relevant visual features guiding gaze following, such as the contrast of iris and sclera or the shape of the eyelids, and, in the case of photographs, they lack depth. Hence, in order to get full control of potentially relevant features, we decided to study gaze following of human observers guided by the gaze of a human avatar seen stereoscopically. To this end we established a stereoscopic 3D virtual reality setup in which we tested human subjects' ability to detect which target a human avatar was looking at. Following the gaze of the avatar showed all the features of the gaze following of a natural person, namely a substantial degree of precision associated with a consistent pattern of systematic deviations from the target. Poor stereo vision affected performance surprisingly little (only in certain experimental conditions). Only gaze following guided by targets at larger downward eccentricities exhibited a differential effect of the presence or absence of accompanying movements of the avatar's eyelids and eyebrows. Copyright © 2015 Elsevier Ltd. All rights reserved.
The design and implementation of stereoscopic 3D scalable vector graphics based on WebKit
NASA Astrophysics Data System (ADS)
Liu, Zhongxin; Wang, Wenmin; Wang, Ronggang
2014-03-01
Scalable Vector Graphics (SVG), a language based on the eXtensible Markup Language (XML), is used to describe basic shapes embedded in webpages, such as circles and rectangles. However, it can only depict 2D shapes. As a consequence, web pages using classical SVG can only display 2D shapes on a screen. With the increasing development of stereoscopic 3D (S3D) technology, binocular 3D devices have been widely used. Under this circumstance, we intend to extend the widely used web rendering engine WebKit to support the description and display of S3D webpages. Therefore, extending SVG is necessary. In this paper, we describe how to design and implement SVG shapes with a stereoscopic 3D mode. Two attributes representing depth and thickness are added to support S3D shapes. The elimination of hidden lines and hidden surfaces, which is an important process in this project, is described as well. The modification of WebKit is also discussed, which is made to support the generation of both the left view and the right view at the same time. As shown in the results, in contrast to the 2D shapes generated by the Google Chrome web browser, the shapes obtained from our modified browser are in S3D mode. With the feeling of depth and thickness, the shapes seem to be real 3D objects away from the screen, rather than simple curves and lines as before.
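The depth attribute described above can be illustrated outside WebKit: a per-shape depth maps to equal-and-opposite horizontal offsets in the left- and right-view SVG. The scale factor and markup below are assumptions for illustration, not the paper's exact scheme.

```python
def stereo_svg_circle(cx, cy, r, depth, px_per_depth=4):
    """Turn a hypothetical per-shape 'depth' value into a pair of SVG
    circle elements: shifted right in the left view and left in the
    right view, so the fused shape appears in front of the screen."""
    shift = depth * px_per_depth
    tmpl = '<circle cx="{}" cy="{}" r="{}"/>'
    left = tmpl.format(cx + shift, cy, r)
    right = tmpl.format(cx - shift, cy, r)
    return left, right

left, right = stereo_svg_circle(100, 50, 20, depth=2)
print(left)
```

A real implementation would do this shift inside the rendering engine for every primitive, which is why the paper modifies WebKit to emit both views in one pass.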
3D Displays for Battle Management
1990-04-01
Existing stereoscopic drawing techniques, which have roots in the cinematography industry, were found to be inadequate for providing comfortable stereoscopic out-the-window terrain scenes when viewed from a 19-inch... By calibrating our CRT monitors to known color standards, we are able to produce the measured hues on the display screen.
ERIC Educational Resources Information Center
Gligorovic, Milica; Vucinic, Vesna; Eskirovic, Branka; Jablan, Branka
2011-01-01
This research was conducted in order to examine the influence of manifest strabismus and stereoscopic vision on non-verbal abilities of visually impaired children aged between 7 and 15. The sample included 55 visually impaired children from the 1st to the 6th grade of elementary schools for visually impaired children in Belgrade. RANDOT stereotest…
Robotic Vision-Based Localization in an Urban Environment
NASA Technical Reports Server (NTRS)
Mchenry, Michael; Cheng, Yang; Matthies
2007-01-01
A system of electronic hardware and software, now undergoing development, automatically estimates the location of a robotic land vehicle in an urban environment using a somewhat imprecise map, which has been generated in advance from aerial imagery. This system does not utilize the Global Positioning System and does not include any odometry, inertial measurement units, or any other sensors except a stereoscopic pair of black-and-white digital video cameras mounted on the vehicle. Of course, the system also includes a computer running software that processes the video image data. The software consists mostly of three components corresponding to the three major image-data-processing functions. Visual Odometry: this component automatically tracks point features in the imagery and computes the relative motion of the cameras between sequential image frames. It incorporates a modified version of a visual-odometry algorithm originally published in 1989. The algorithm selects point features, performs multiresolution area-correlation computations to match the features in stereoscopic images, tracks the features through the sequence of images, and uses the tracking results to estimate the six-degree-of-freedom motion of the camera between consecutive stereoscopic pairs of images (see figure). Urban Feature Detection and Ranging: using the same data as those processed by the visual-odometry component, this component strives to determine the three-dimensional (3D) coordinates of vertical and horizontal lines that are likely to be parts of, or close to, the exterior surfaces of buildings. The basic sequence of processes performed by this component is the following: 1. An edge-detection algorithm is applied, yielding a set of linked lists of edge pixels, a horizontal-gradient image, and a vertical-gradient image. 2. Straight-line segments of edges are extracted from the linked lists generated in step 1.
Any straight-line segments longer than an arbitrary threshold (e.g., 30 pixels) are treated as candidates for belonging to buildings or other artificial objects. 3. A gradient-filter algorithm is used to test the straight-line segments longer than the threshold to determine whether they represent edges of natural or artificial objects. In somewhat oversimplified terms, the test is based on the assumption that the gradient of image intensity varies little along a segment that represents the edge of an artificial object.
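The gradient-consistency test in step 3 might be sketched as follows. The use of circular statistics and the deviation threshold are our own illustrative choices, not details taken from the published algorithm:

```python
import math

def gradient_direction_spread(gx, gy, pixels):
    """Spread of gradient direction along an edge segment.

    gx, gy: 2D lists (rows) of horizontal/vertical image gradients.
    pixels: list of (row, col) coordinates along the segment.
    A small spread suggests an artificial (e.g., building) edge;
    natural edges tend to have more variable gradients.
    """
    angles = [math.atan2(gy[r][c], gx[r][c]) for r, c in pixels]
    # circular mean direction, to avoid wrap-around problems at +/- pi
    s = sum(math.sin(a) for a in angles)
    c = sum(math.cos(a) for a in angles)
    mean = math.atan2(s, c)
    # mean absolute angular deviation from the mean direction
    dev = sum(abs(math.atan2(math.sin(a - mean), math.cos(a - mean)))
              for a in angles) / len(angles)
    return dev

def is_artificial(gx, gy, pixels, max_dev=0.15):
    """Classify a segment as artificial if its gradient direction is
    nearly constant along its length (max_dev is a hypothetical
    tuning parameter)."""
    return gradient_direction_spread(gx, gy, pixels) < max_dev
```

A segment whose gradients all point the same way (a crisp building edge) passes the test, while one whose gradients rotate along its length (foliage, shadow boundaries) does not.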
Identification of depth information with stereoscopic mammography using different display methods
NASA Astrophysics Data System (ADS)
Morikawa, Takamitsu; Kodera, Yoshie
2013-03-01
Stereoscopy in radiography was widely used in the late 80's because it could capture complex structures in the human body, proving beneficial for diagnosis and screening. When radiologists observed images stereoscopically, they usually needed to train their eyes in order to perceive the stereoscopic effect. However, with the development of three-dimensional (3D) monitors and their adoption in the medical field, such eye training is no longer required. The question then arises as to whether there is any difference in recognizing depth information between conventional viewing methods and a 3D monitor. We constructed a phantom and evaluated the difference in the capacity to identify depth information between the two methods. The phantom consists of acrylic steps with 3-mm-diameter acrylic pillars on the top and bottom of each step. Seven observers viewed these images stereoscopically using the two display methods and were asked to judge the direction of the pillar that was on top. We compared the judged directions with the direction of the real pillar arranged on top, and calculated the percentage of correct answers (PCA). The results showed that the PCA obtained using the 3D monitor method was about 5% higher than that obtained using the naked-eye method. This indicates that people can view images stereoscopically more precisely using a 3D monitor than with conventional methods, such as crossed or parallel eye viewing. We were thus able to estimate the difference in the capacity to identify depth information between the two display methods.
An optimized web-based approach for collaborative stereoscopic medical visualization
Kaspar, Mathias; Parsad, Nigel M; Silverstein, Jonathan C
2013-01-01
Objective Medical visualization tools have traditionally been constrained to tethered imaging workstations or proprietary client viewers, typically part of hospital radiology systems. To improve accessibility to real-time, remote, interactive, stereoscopic visualization and to enable collaboration among multiple viewing locations, we developed an open source approach requiring only a standard web browser with no added client-side software. Materials and Methods Our collaborative, web-based, stereoscopic, visualization system, CoWebViz, has been used successfully for the past 2 years at the University of Chicago to teach immersive virtual anatomy classes. It is a server application that streams server-side visualization applications to client front-ends, comprised solely of a standard web browser with no added software. Results We describe optimization considerations, usability, and performance results, which make CoWebViz practical for broad clinical use. We clarify technical advances including: enhanced threaded architecture, optimized visualization distribution algorithms, a wide range of supported stereoscopic presentation technologies, and the salient theoretical and empirical network parameters that affect our web-based visualization approach. Discussion The implementations demonstrate usability and performance benefits of a simple web-based approach for complex clinical visualization scenarios. Using this approach overcomes technical challenges that require third-party web browser plug-ins, resulting in the most lightweight client. Conclusions Compared to special software and hardware deployments, unmodified web browsers enhance remote user accessibility to interactive medical visualization. 
Whereas local hardware and software deployments may provide better interactivity than remote applications, our implementation demonstrates that a simplified, stable, client approach using standard web browsers is sufficient for high quality three-dimensional, stereoscopic, collaborative and interactive visualization. PMID:23048008
Accommodation response measurements for integral 3D image
NASA Astrophysics Data System (ADS)
Hiura, H.; Mishina, T.; Arai, J.; Iwadate, Y.
2014-03-01
We measured accommodation responses under integral photography (IP), binocular stereoscopic, and real-object display conditions, under both binocular and monocular viewing. The equipment consisted of an optometric device and a 3D display. We developed the 3D display for IP and binocular stereoscopic images; it comprises a high-resolution liquid crystal display (LCD) and a high-density lens array. The LCD has a resolution of 468 dpi and a diagonal size of 4.8 inches. The high-density lens array comprises 106 x 69 micro lenses, arranged in a honeycomb pattern, each with a focal length of 3 mm and a diameter of 1 mm. The 3D display was positioned 60 cm from the observer under the IP and binocular stereoscopic display conditions. The target was presented at eight depth positions relative to the 3D display: 15, 10, and 5 cm in front of the display, on the display panel, and 5, 10, 15, and 30 cm behind it, under the IP and binocular stereoscopic display conditions. Under the real-object display condition, the target was displayed on the 3D display panel, and the display itself was placed at the eight positions. The results suggest that the IP image induced more natural accommodation responses than the binocular stereoscopic image. The accommodation responses to the IP image were weaker than those to a real object; however, they showed a similar tendency to those of the real object under both viewing conditions. Therefore, IP can induce accommodation to the depth positions of 3D images.
2006-07-27
The goal of this project was to develop analytical and computational tools to make vision a viable sensor for the ... sensors. We have proposed the framework of stereoscopic segmentation, where multiple images of the same objects were jointly processed to extract geometry.
DEEP SPACE: High Resolution VR Platform for Multi-user Interactive Narratives
NASA Astrophysics Data System (ADS)
Kuka, Daniela; Elias, Oliver; Martins, Ronald; Lindinger, Christopher; Pramböck, Andreas; Jalsovec, Andreas; Maresch, Pascal; Hörtner, Horst; Brandl, Peter
DEEP SPACE is a large-scale platform for interactive, stereoscopic, high-resolution content. The spatial and system design of DEEP SPACE address constraints of CAVE™-like systems with respect to multi-user interactive storytelling. To serve as a research platform and as a public exhibition space for many people, DEEP SPACE is capable of processing interactive stereoscopic applications on two projection walls, each 16 by 9 meters in size with a resolution of four times 1080p (4K). The applications processed range from Virtual Reality (VR) environments to 3D movies to computationally intensive 2D productions. In this paper, we describe DEEP SPACE as an experimental VR platform for multi-user interactive storytelling. We focus on the system design relevant to the platform, including the integration of Apple iPod Touch technology as a VR control, and a case study that demonstrates the research efforts in the field of multi-user interactive storytelling. The case study, entitled "Papyrate's Island", provides a prototypical scenario of how physical drawings may influence digital narratives. In this case, DEEP SPACE helps us explore the hypothesis that drawing, a primordial human creative skill, gives us access to entirely new creative possibilities in the domain of interactive storytelling.
Bhadri, Prashant R; Rowley, Adrian P; Khurana, Rahul N; Deboer, Charles M; Kerns, Ralph M; Chong, Lawrence P; Humayun, Mark S
2007-05-01
To evaluate the effectiveness of a prototype stereoscopic camera-based viewing system (Digital Microsurgical Workstation, three-dimensional (3D) Vision Systems, Irvine, California, USA) for anterior and posterior segment ophthalmic surgery. Institutional-based prospective study. Anterior and posterior segment surgeons performed designated standardized tasks on porcine eyes after training on prosthetic plastic eyes. Both anterior and posterior segment surgeons were able to complete tasks requiring minimal or moderate stereoscopic viewing. The results indicate that the system provides improved ergonomics. Improvements in key viewing performance areas would further enhance its value over a conventional operating microscope. The performance of the prototype system is not on par with the planned commercial system. With continued development of this technology, the three-dimensional system may become a novel viewing system in ophthalmic surgery, with improved ergonomics relative to traditional microscopic viewing.
Reducing Visual Discomfort with HMDs Using Dynamic Depth of Field.
Carnegie, Kieran; Rhee, Taehyun
2015-01-01
Although head-mounted displays (HMDs) are ideal devices for personal viewing of immersive stereoscopic content, exposure to VR applications on them results in significant discomfort for the majority of people, with symptoms including eye fatigue, headaches, nausea, and sweating. A conflict between accommodation and vergence depth cues on stereoscopic displays is a significant cause of visual discomfort. This article describes the results of an evaluation used to judge the effectiveness of dynamic depth-of-field (DoF) blur in reducing discomfort caused by exposure to stereoscopic content on HMDs. Using a commercial game engine implementation, study participants reported reduced visual discomfort on a simulator sickness questionnaire when DoF blurring was enabled, with decreased symptom severity following HMD exposure, indicating that dynamic DoF can effectively reduce visual discomfort.
Interactive and Stereoscopic Hybrid 3D Viewer of Radar Data with Gesture Recognition
NASA Astrophysics Data System (ADS)
Goenetxea, Jon; Moreno, Aitor; Unzueta, Luis; Galdós, Andoni; Segura, Álvaro
This work presents an interactive and stereoscopic 3D viewer of weather information coming from a Doppler radar. The hybrid system shows a GIS model of the regional zone where the radar is located, together with the corresponding reconstructed 3D volume of weather data. To enhance the immersiveness of the navigation, stereoscopic visualization has been added to the viewer using a polarized-glasses-based system. The user can interact with the 3D virtual world using a Nintendo Wiimote to navigate through it and a Nintendo Wii Nunchuk to give commands by means of hand gestures. We also present a dynamic gesture recognition procedure that measures the temporal advance of the performed gesture postures. Experimental results show that dynamic gestures are effectively recognized, so that more natural interaction and immersive navigation in the virtual world are achieved.
The Real Time Correction of Stereoscopic Images: From the Serial to a Parallel Treatment
NASA Astrophysics Data System (ADS)
Irki, Zohir; Devy, Michel; Achour, Karim; Azzaz, Mohamed Salah
2008-06-01
The correction of stereoscopic images is a task that consists of replacing the acquired images with images having the same properties but simpler to use in the later stages of stereovision. Pre-calculated tables, built during an offline calibration step, made it possible to carry out offline rectification of stereoscopic images, and an improvement of these tables enabled real-time rectification. In this paper, we describe a further improvement of the real-time correction approach so that it can be implemented on an FPGA component. This improvement takes into account the real-time requirements of the correction and the resources offered by the FPGA, a Stratix 1S40F780C5.
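Assuming the pre-calculated table stores one source coordinate per rectified pixel, the two stages might look like this in outline. The homography-based table construction and bilinear interpolation are generic choices for illustration, not necessarily the exact scheme of the paper:

```python
def build_table(h, w, homography):
    """Offline step: precompute a source coordinate for every rectified
    pixel. homography is a 3x3 nested list mapping rectified -> raw
    image coordinates (illustrative; a real table would come from the
    calibration step described in the paper)."""
    table = []
    for y in range(h):
        row = []
        for x in range(w):
            m = homography
            d = m[2][0] * x + m[2][1] * y + m[2][2]
            sx = (m[0][0] * x + m[0][1] * y + m[0][2]) / d
            sy = (m[1][0] * x + m[1][1] * y + m[1][2]) / d
            row.append((sx, sy))
        table.append(row)
    return table

def remap(image, table):
    """Real-time step: rectify by bilinear lookup in the precomputed
    table; pixels mapping outside the source image are left at 0."""
    h, w = len(table), len(table[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sx, sy = table[y][x]
            x0, y0 = int(sx), int(sy)
            if 0 <= x0 < len(image[0]) - 1 and 0 <= y0 < len(image) - 1:
                fx, fy = sx - x0, sy - y0
                out[y][x] = ((1 - fx) * (1 - fy) * image[y0][x0]
                             + fx * (1 - fy) * image[y0][x0 + 1]
                             + (1 - fx) * fy * image[y0 + 1][x0]
                             + fx * fy * image[y0 + 1][x0 + 1])
    return out
```

The split matters for the FPGA target: all the expensive projective arithmetic happens offline in `build_table`, leaving only table lookups and fixed-point interpolation for the real-time datapath.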
Current status of stereoscopic 3D LCD TV technologies
NASA Astrophysics Data System (ADS)
Choi, Hee-Jin
2011-06-01
The year 2010 may be recorded as the first year of successful commercial 3D products. Among them, 3D LCD TVs are expected to be the major category in terms of sales volume. In this paper, the principles of current stereoscopic 3D LCD TV techniques and the flat panel display (FPD) technologies required for their realization are reviewed.
ERIC Educational Resources Information Center
Cui, Dongmei; Wilson, Timothy D.; Rockhold, Robin W.; Lehman, Michael N.; Lynch, James C.
2017-01-01
The head and neck region is one of the most complex areas featured in the medical gross anatomy curriculum. The effectiveness of using three-dimensional (3D) models to teach anatomy is a topic of much discussion in medical education research. However, the use of 3D stereoscopic models of the head and neck circulation in anatomy education has not…
Case study: the introduction of stereoscopic games on the Sony PlayStation 3
NASA Astrophysics Data System (ADS)
Bickerstaff, Ian
2012-03-01
A free stereoscopic firmware update for Sony Computer Entertainment's PlayStation® 3 console provides the potential to enormously increase the popularity of stereoscopic 3D in the home. For this to succeed, though, a large selection of content has to become available that exploits 3D in the best way possible. In addition to the existing challenges found in creating 3D movies and television programmes, the stereography must compensate for the dynamic and unpredictable environments found in games. The software must automatically map the depth range of the scene into the display's comfort zone while minimising depth compression. This paper presents a range of techniques developed to solve this problem, along with the challenge of creating twice as many images as the 2D version without excessively compromising frame rate or image quality. At the time of writing, over 80 stereoscopic PlayStation 3 games have been released, and notable titles are used as examples to illustrate how the techniques have been adapted for different game genres. Since the firmware's introduction in 2010, the industry has matured, with a large number of developers now producing increasingly sophisticated 3D content. New technologies such as viewer head tracking and head-mounted displays should increase the appeal of 3D in the home still further.
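A minimal sketch of the depth-budget fitting described above, assuming a simple linear scale-and-shift of screen disparities; this linear scheme is our illustration of the stated goal, not Sony's actual algorithm, and shipping titles would use tuned, likely nonlinear mappings:

```python
def fit_disparity_range(scene_near, scene_far, comfort_near, comfort_far):
    """Choose a scale and offset for on-screen disparity (all in pixels).

    If the scene's disparity range already fits inside the comfort
    budget, only shift it (scale 1.0, i.e. no depth compression);
    otherwise compress just enough to fit. A mapped disparity is then
    scale * d + offset.
    """
    scene_span = scene_far - scene_near
    comfort_span = comfort_far - comfort_near
    scale = min(1.0, comfort_span / scene_span)
    # centre the (possibly compressed) range inside the comfort zone
    offset = ((comfort_near + comfort_far) / 2.0
              - scale * (scene_near + scene_far) / 2.0)
    return scale, offset
```

Because game scenes change every frame, such a fit would be recomputed (and smoothed over time) as the camera moves, which is part of what makes game stereography harder than film.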
Advanced autostereoscopic display for G-7 pilot project
NASA Astrophysics Data System (ADS)
Hattori, Tomohiko; Ishigaki, Takeo; Shimamoto, Kazuhiro; Sawaki, Akiko; Ishiguchi, Tsuneo; Kobayashi, Hiromi
1999-05-01
An advanced autostereoscopic display is described that permits the observation of a stereo pair by several persons simultaneously, without special glasses or any head-worn tracking devices. The system is composed of a right-eye system, a left-eye system, and a sophisticated head-tracking system. In each eye system, a transparent color liquid crystal imaging plate is used with a special backlight unit. The backlight unit consists of a monochrome 2D display and a large-format convex lens, and it directs light only to the correct eye of each viewer. The right-eye perspective system is combined with the left-eye perspective system by a half mirror in order to function as a time-parallel stereoscopic system. The viewer's IR image is taken through, and focused by, the large-format convex lens and fed back to the backlight as a modulated binary half-face image. The autostereoscopic display employs this TTL method for accurate head tracking. The system was operated as a stereoscopic TV phone between the Department of Telemedicine at Duke University and the Department of Radiology at Nagoya University School of Medicine, using a high-speed digital line of GIBN. Applications are also described in this paper.
Pollock, Brice; Burton, Melissa; Kelly, Jonathan W; Gilbert, Stephen; Winer, Eliot
2012-04-01
Stereoscopic depth cues improve depth perception and increase immersion within virtual environments (VEs). However, improper display of these cues can distort perceived distances and directions. Consider a multi-user VE, where all users view identical stereoscopic images regardless of physical location. In this scenario, cues are typically customized for one "leader" equipped with a head-tracking device. This user stands at the center of projection (CoP) and all other users ("followers") view the scene from other locations and receive improper depth cues. This paper examines perceived depth distortion when viewing stereoscopic VEs from follower perspectives and the impact of these distortions on collaborative spatial judgments. Pairs of participants made collaborative depth judgments of virtual shapes viewed from the CoP or after displacement forward or backward. Forward and backward displacement caused perceived depth compression and expansion, respectively, with greater compression than expansion. Furthermore, distortion was less than predicted by a ray-intersection model of stereo geometry. Collaboration times were significantly longer when participants stood at different locations compared to the same location, and increased with greater perceived depth discrepancy between the two viewing locations. These findings advance our understanding of spatial distortions in multi-user VEs, and suggest a strategy for reducing distortion.
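The ray-intersection geometry mentioned above can be illustrated with a simple symmetric-viewing model: intersecting the two eye rays through the rendered screen points gives the perceived distance. The formulas below are textbook stereo geometry under assumed parameter values (e.g., a 63 mm eye separation), not the paper's full model, which the authors found over-predicts the measured distortion:

```python
def parallax_for(z, e, v):
    """On-screen parallax needed by a correctly tracked viewer at
    distance v from the screen to perceive a point at distance z
    (eye separation e; all quantities in the same units)."""
    return e * (1.0 - v / z)

def perceived_distance(p, e, v):
    """Distance at which a viewer standing at distance v perceives a
    point rendered with parallax p, found by intersecting the left-
    and right-eye rays through the two screen points."""
    if p >= e:
        return float("inf")  # rays parallel or diverging
    return v * e / (e - p)
```

With imagery rendered for a leader 2 m from the screen, this model reproduces the qualitative finding: a follower who steps forward perceives less depth behind the screen than intended (compression), and one who steps back perceives more (expansion).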
Process development for automated solar cell and module production. Task 4: Automated array assembly
NASA Technical Reports Server (NTRS)
1980-01-01
A process sequence which can be used in conjunction with automated equipment for the mass production of solar cell modules for terrestrial use was developed. The process sequence was then critically analyzed from a technical and economic standpoint to determine the technological readiness of certain process steps for implementation. The steps receiving analysis were: back contact metallization, automated cell array layup/interconnect, and module edge sealing. For automated layup/interconnect, both hard automation and programmable automation (using an industrial robot) were studied. The programmable automation system was then selected for actual hardware development.
Case study: Beauty and the Beast 3D: benefits of 3D viewing for 2D to 3D conversion
NASA Astrophysics Data System (ADS)
Handy Turner, Tara
2010-02-01
From the earliest stages of the Beauty and the Beast 3D conversion project, the advantages of accurate desk-side 3D viewing were evident. While designing and testing the 2D to 3D conversion process, the engineering team at Walt Disney Animation Studios proposed a 3D viewing configuration that not only allowed artists to "compose" stereoscopic 3D but also improved efficiency by allowing artists to instantly detect which image features were essential to the stereoscopic appeal of a shot and which features had minimal or even negative impact. At a time when few commercial 3D monitors were available and few software packages provided 3D desk-side output, the team designed their own prototype devices and collaborated with vendors to create a "3D composing" workstation. This paper outlines the display technologies explored, final choices made for Beauty and the Beast 3D, wish-lists for future development and a few rules of thumb for composing compelling 2D to 3D conversions.
Sugarcane Crop Extraction Using Object-Oriented Method from ZY-3 High Resolution Satellite Tlc Image
NASA Astrophysics Data System (ADS)
Luo, H.; Ling, Z. Y.; Shao, G. Z.; Huang, Y.; He, Y. Q.; Ning, W. Y.; Zhong, Z.
2018-04-01
Sugarcane is one of the most important crops in Guangxi, China. With the development of satellite remote sensing technology, more remotely sensed images can be used for monitoring the sugarcane crop. With its Three Line Camera (TLC) images, wide coverage, and stereoscopic mapping ability, the Chinese ZY-3 high-resolution stereoscopic mapping satellite is useful for attaining more information for sugarcane crop monitoring, such as spectral, shape, and texture differences between the forward, nadir, and backward images. A digital surface model (DSM) derived from ZY-3 TLC images can also provide height information for the sugarcane crop. In this study, we attempt to extract the sugarcane crop from ZY-3 images acquired in the harvest period. Ortho-rectified TLC images, a fused image, and the DSM are processed for the extraction. An object-oriented method is then used for image segmentation, example collection, and feature extraction. The results of our study show that, with the help of ZY-3 TLC imagery, information on the sugarcane crop at harvest time can be automatically extracted, with an overall accuracy of about 85.3%.
Variation and extrema of human interpupillary distance
NASA Astrophysics Data System (ADS)
Dodgson, Neil A.
2004-05-01
Mean interpupillary distance (IPD) is an important and oft-quoted measure in stereoscopic work. However, there is startlingly little agreement on what it should be. Mean IPD has been quoted in the stereoscopic literature as being anything from 58 mm to 70 mm. It is known to vary with respect to age, gender and race. Furthermore, the stereoscopic industry requires information on not just mean IPD, but also its variance and its extrema, because our products need to be able to cope with all possible users, including those with the smallest and largest IPDs. This paper brings together those statistics on IPD which are available. The key results are that mean adult IPD is around 63 mm, the vast majority of adults have IPDs in the range 50-75 mm, the wider range of 45-80 mm is likely to include (almost) all adults, and the minimum IPD for children (down to five years old) is around 40 mm.
NASA Astrophysics Data System (ADS)
Yang, Xinyan; Zhao, Wei; Ye, Long; Zhang, Qin
2017-07-01
This paper proposes a no-reference objective stereoscopic video quality assessment method, motivated by the aim of making objective results match subjective judgments. We believe that image regions with different degrees of visual salience should not receive the same weights when designing an assessment metric. Therefore, we first apply the GBVS algorithm to each frame pair and separate both the left and right viewing images into regions with strong, general, and weak saliency. In addition, local features such as blockiness, zero-crossing, and depth are extracted and combined in a mathematical model to calculate a quality assessment score, with regions of different salient degree assigned different weights in the model. Experimental results demonstrate the superiority of our method compared with existing state-of-the-art no-reference objective stereoscopic video quality assessment methods.
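The final pooling step might be sketched as a weighted sum over the three saliency classes; the weights here are placeholders, since the paper fits its own model to subjective data:

```python
def quality_score(region_scores, weights=(0.6, 0.3, 0.1)):
    """Combine per-region quality scores into one number.

    region_scores: (strong, general, weak) saliency-region scores,
    each already aggregating local features such as blockiness,
    zero-crossing rate and depth consistency. The default weights sum
    to 1 and favour strongly salient regions, reflecting the idea that
    distortions there dominate subjective judgments.
    """
    return sum(w * s for w, s in zip(weights, region_scores))
```

The point of the weighting is that identical distortion hurts the score more when it lands in a strongly salient region than in a weakly salient one.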
Self-calibration performance in stereoscopic PIV acquired in a transonic wind tunnel
Beresh, Steven J.; Wagner, Justin L.; Smith, Barton L.
2016-03-16
Three stereoscopic PIV experiments have been examined to test the effectiveness of self-calibration under varied circumstances. Measurements taken in a streamwise plane yielded a robust self-calibration that returned common results regardless of the specific calibration procedure, but measurements in the crossplane exhibited substantial velocity bias errors whose nature was sensitive to the particulars of the self-calibration approach. Self-calibration is complicated by thick laser sheets and large stereoscopic camera angles, and further exacerbated by small particle image diameters and high particle seeding density. In spite of the different answers obtained by varied self-calibrations, each implementation locked onto an apparently valid solution with small residual disparity and converged adjustment of the calibration plane. Thus, the convergence of self-calibration on a solution with small disparity is not sufficient to indicate negligible velocity error due to the stereo calibration.
The Effect of Stereoscopic ("3D") vs. 2D Presentation on Learning through Video and Film
NASA Astrophysics Data System (ADS)
Price, Aaron; Kasal, E.
2014-01-01
Two Eyes, 3D is an NSF-funded research project on the effects of stereoscopy on the learning of highly spatial concepts. We report final results of one study within the project, which tested the effect of stereoscopic presentation on learning outcomes for two short films about Type Ia supernovae and the morphology of the Milky Way. 986 adults watched either film, randomly assigned to stereoscopic or 2D presentation. They took a pre-test and post-test that included multiple-choice and drawing tasks related to the spatial nature of the topics in the film. Orientation of the answering device was also tracked, and a spatial cognition pre-test was given to control for prior spatial ability. Data collection took place at the Adler Planetarium's Space Visualization Lab, and the project is run through the AAVSO.
Optimizing visual comfort for stereoscopic 3D display based on color-plus-depth signals.
Shao, Feng; Jiang, Qiuping; Fu, Randi; Yu, Mei; Jiang, Gangyi
2016-05-30
Visual comfort is a long-standing problem in stereoscopic 3D (S3D) display. In this paper, targeting the production of S3D content from color-plus-depth signals, we propose a general framework for depth mapping that optimizes visual comfort on S3D displays. The main motivation of this work is to remap the depth range of color-plus-depth signals to a new depth range suitable for comfortable S3D display. Toward this end, we first remap the depth range globally based on an adjusted zero-disparity plane, and then present a two-stage global and local depth optimization solution to solve the visual comfort problem. The remapped depth map is used to generate the S3D output. We demonstrate the power of our approach on perceptually uncomfortable and comfortable stereoscopic images.
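The global stage, remapping the source depth range while preserving an adjusted zero-disparity plane, could be sketched as follows. All ranges and the preserved plane value are illustrative defaults, not values from the paper, and the paper's subsequent local optimization stage is omitted:

```python
def remap_depth(depth, src=(0.0, 255.0), dst=(96.0, 160.0),
                zero_plane=128.0):
    """Globally remap an 8-bit depth map for comfortable S3D viewing.

    depth: 2D list of depth values. src is the input depth range, dst
    the narrower comfortable output range, and zero_plane the adjusted
    zero-disparity depth value, which keeps its depth after remapping.
    """
    lo, hi = src
    out_lo, out_hi = dst
    scale = (out_hi - out_lo) / (hi - lo)
    remapped = [[out_lo + (d - lo) * scale for d in row] for row in depth]
    # shift so the chosen zero-disparity plane keeps its depth value
    mid = out_lo + (zero_plane - lo) * scale
    shift = zero_plane - mid
    return [[d + shift for d in row] for row in remapped]
```

The remapped map would then drive view synthesis (e.g., depth-image-based rendering) to produce the left and right views of the S3D output.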
Holistic processing for bodies and body parts: New evidence from stereoscopic depth manipulations.
Harris, Alison; Vyas, Daivik B; Reed, Catherine L
2016-10-01
Although holistic processing has been documented extensively for upright faces, it is unclear whether it occurs for other visual categories with more extensive substructure, such as body postures. Like faces, body postures have high social relevance, but they differ in having fine-grain organization not only of basic parts (e.g., arm) but also subparts (e.g., elbow, wrist, hand). To compare holistic processing for whole bodies and body parts, we employed a novel stereoscopic depth manipulation that creates either the percept of a whole body occluded by a set of bars, or of segments of a body floating in front of a background. Despite sharing low-level visual properties, only the stimulus perceived as being behind bars should be holistically "filled in" via amodal completion. In two experiments, we tested for better identification of individual body parts within the context of a body versus in isolation. Consistent with previous findings, recognition of body parts was better in the context of a whole body when the body was amodally completed behind occluders. However, when the same bodies were perceived as floating in strips, performance was significantly worse, and not significantly different, from that for amodally completed parts, supporting holistic processing of body postures. Intriguingly, performance was worst for parts in the frontal depth condition, suggesting that these effects may extend from gross body organization to a more local level. These results provide suggestive evidence that holistic representations may not be "all-or-none," but rather also operate on body regions of more limited spatial extent.
Enhanced visualization of inner ear structures
NASA Astrophysics Data System (ADS)
Niemczyk, Kazimierz; Kucharski, Tomasz; Kujawinska, Malgorzata; Bruzgielewicz, Antoni
2004-07-01
Modern surgery requires extensive support from imaging technologies in order to increase the effectiveness and safety of operations. One important task is to enhance the visualization of quasi-phase (transparent) 3D structures. These structures are characterized by very low contrast, which makes differentiating tissues in the field of view very difficult; for that reason the surgeon may be extremely uncertain during the operation. This problem arises in operations on the inner ear, during which the physician has to perform cuts at specific places in quasi-transparent velums. Conventionally, during such operations the physician views the operating field through a stereoscopic microscope. In this paper we propose a 3D visualization system based on a helmet-mounted display. Two CCD cameras placed at the output of the microscope acquire stereo pairs of images. The images are processed in real time with the goal of enhancing the quasi-phase structures. The main task is to create an algorithm that is not sensitive to changes in the intensity distribution; the disadvantage of existing algorithms is their lack of adaptation to reflections and shadows occurring in the field of view. The processed images from the left and right channels are overlaid on the actual images and displayed on the LCDs of the helmet-mounted display. Through the HMD (Helmet Mounted Display), the physician observes a stereoscopic operating scene with the places of special interest indicated. The authors present the hardware, the procedures applied, and initial results of inner ear structure visualization. Several problems connected with the processing of stereo-pair images are discussed.
Remote stereoscopic video play platform for naked eyes based on the Android system
NASA Astrophysics Data System (ADS)
Jia, Changxin; Sang, Xinzhu; Liu, Jing; Cheng, Mingsheng
2014-11-01
As quality of life has improved significantly, traditional 2D video technology can no longer satisfy viewers' desire for better video quality, which has led to the rapid development of 3D video technology. At the same time, people want to watch 3D video on portable devices. To achieve this, we set up a remote stereoscopic video play platform. The platform consists of a server and clients. The server transmits video in different formats, and the client receives the remote video for subsequent decoding and pixel restructuring. We utilize and improve Live555 as the video transmission server; Live555 is a cross-platform open-source project that provides streaming-media solutions such as the RTSP protocol and supports transmission of multiple video formats. At the receiving end, we use our laboratory's own player for Android, which has all the basic functions of an ordinary player and can play normal 2D video; it serves as the basic structure for redevelopment, and RTSP is implemented in this structure for communication. In order to achieve stereoscopic display, pixel rearrangement is performed in the player's decoding part, which is native code called through the JNI interface so that video frames can be extracted more efficiently. The video formats we process are left-right, top-bottom, and nine-grid. The design and development employ a number of key technologies from Android application development, including wireless transmission, pixel restructuring, and JNI calls. After some updates and optimizations, the video player can play remote 3D video well anytime and anywhere and meet users' requirements.
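The pixel-rearrangement step for the left-right format can be illustrated with a minimal sketch (Python for clarity; the actual player does this in native code called via JNI). Row interleaving for a line-interleaved autostereoscopic display is assumed here as the output arrangement:

```python
def interleave_side_by_side(frame):
    """Convert a side-by-side (left-right) stereo frame into a
    row-interleaved frame of the kind many line-interleaved
    autostereoscopic (naked-eye) displays expect.

    `frame` is a list of rows; each row is a list of pixels, with the
    left half of the row holding the left view and the right half the
    right view. Even output rows take the left view, odd rows the right.
    """
    height = len(frame)
    half = len(frame[0]) // 2
    out = []
    for y in range(height):
        left, right = frame[y][:half], frame[y][half:]
        out.append(left if y % 2 == 0 else right)
    return out
```

A real implementation would also rescale each half-resolution view, but the core restructuring, selecting left or right pixels per output line, is as above.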
NASA Astrophysics Data System (ADS)
Zhao, L.; Fu, X.; Dou, X.; Liu, H.; Fang, Z.
2018-04-01
ZY-3 is a civil high-resolution optical stereoscopic mapping satellite independently developed by China. The ZY-3 constellation of twin satellites operates in a sun-synchronous, near-polar, circular 505 km orbit, with a 10:30 AM local time at the descending node and a 29-day revisit period. The panchromatic triplet sensors, pointing forward, nadir, and backward with an angle of 22°, have an excellent base-to-height ratio, which benefits DEM extraction. In order to extract more detailed and high-precision DEMs, the ZY-3 (02) satellite was upgraded from the ZY-3 (01), and the GSD of the stereo camera was improved from 3.5 to 2.5 meters. In this paper, case studies using ZY-3 (01) and (02) satellite data for block adjustment and DEM extraction were carried out in Liaoning Province, China. The results show that the planimetric and altimetric accuracy can reach 3 meters, which meets the mapping requirements of the 1:50,000 national topographic map and the design performance of the satellites. The normalized elevation accuracy index (NEAI) is adopted to evaluate the stereoscopic performance of the twin satellites; the NEAIs of both ZY-3 satellites are good, and the index of ZY-3 (02) is slightly better. A comparison of the overlapping DEMs from the twin ZY-3 satellites and SRTM is analysed; the bias and the standard deviation of all the DEMs are better than 5 meters. In addition, in the process of accuracy comparison, some gross errors in the DEMs can be identified and some elevation changes can also be found, making the differential DEM a new tool and application.
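The geometry behind the base-to-height ratio can be made concrete: with forward and backward views each tilted 22° from nadir, B/H = 2·tan(22°), and to first order the elevation error is the parallax (planimetric) measurement error divided by B/H. A small sketch (illustrative, not taken from the paper):

```python
import math

def base_to_height(tilt_deg):
    """Base-to-height ratio for symmetric forward/backward views
    tilted tilt_deg from nadir."""
    return 2.0 * math.tan(math.radians(tilt_deg))

def elevation_error(parallax_error_m, tilt_deg):
    """First-order elevation error from a given parallax measurement
    error: dh = dp / (B/H)."""
    return parallax_error_m / base_to_height(tilt_deg)
```

For the ZY-3 triplet, B/H = 2·tan(22°) ≈ 0.81, so a parallax error on the order of one 2.5 m GSD maps to roughly 3 m of elevation error, consistent in scale with the accuracies reported above.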
Continuous monitoring of prostate position using stereoscopic and monoscopic kV image guidance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, M. Tynan R.; Parsons, Dave D.; Robar, James L.
2016-05-15
Purpose: To demonstrate continuous kV x-ray monitoring of prostate motion using both stereoscopic and monoscopic localizations, assess the spatial accuracy of these techniques, and evaluate the dose delivered by the added image guidance. Methods: The authors implemented both stereoscopic and monoscopic fiducial localizations using a room-mounted dual oblique x-ray system. Recently developed monoscopic 3D position estimation techniques potentially overcome the issue of treatment head interference with stereoscopic imaging at certain gantry angles. To demonstrate continuous position monitoring, a gold fiducial marker was placed in an anthropomorphic phantom, which was positioned on the Linac couch. The couch was used as a programmable translation stage and was programmed with a series of patient prostate motion trajectories exemplifying five distinct categories: stable prostate, slow drift, persistent excursion, transient excursion, and high-frequency excursions. The phantom and fiducial were imaged using 140 kVp, 0.63 mAs per image at 1 Hz for a 60 s monitoring period. Both stereoscopic and monoscopic 3D localization accuracies were assessed by comparison to the ground truth obtained from the Linac log file. Imaging dose was also assessed, using optically stimulated luminescence dosimeter inserts in the phantom. Results: Stereoscopic localization accuracy varied between 0.13 ± 0.05 and 0.33 ± 0.30 mm, depending on the motion trajectory. Monoscopic localization accuracy varied from 0.2 ± 0.1 to 1.1 ± 0.7 mm. The largest localization errors were typically observed in the left–right direction. There were significant differences in accuracy between the two monoscopic views, but which view was better varied from trajectory to trajectory. The imaging dose was measured to be between 2 and 15 μGy/mAs, depending on location in the phantom. Conclusions: The authors have demonstrated the first use of monoscopic localization for a room-mounted dual x-ray system.
Three-dimensional position estimation from monoscopic imaging permits continuous, uninterrupted intrafraction motion monitoring even in the presence of gantry rotation, which may block kV sources or imagers. This potentially allows for more accurate treatment delivery by ensuring that the prostate does not deviate substantially from the initial setup position.
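Stereoscopic localization of a fiducial amounts to intersecting the back-projected rays from the two kV views; since noisy rays rarely intersect exactly, the midpoint of their common perpendicular is a standard estimate. A minimal sketch (illustrative; not the authors' implementation):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def triangulate(p1, d1, p2, d2):
    """Midpoint triangulation: given two rays p + t*d (origins p1, p2 at
    the x-ray sources, directions d1, d2 toward the detected fiducial),
    return the 3D point midway between the rays' closest points."""
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    w0 = [x - y for x, y in zip(p1, p2)]
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b          # zero only for parallel rays
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = [p + t * u for p, u in zip(p1, d1)]
    q2 = [p + s * u for p, u in zip(p2, d2)]
    return [(x + y) / 2.0 for x, y in zip(q1, q2)]
```

Monoscopic estimation, by contrast, must supply the missing depth along the single ray from a prior motion model, which is why its accuracy above is somewhat lower than the stereoscopic figures.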
Stereoscopic processing of crossed and uncrossed disparities in the human visual cortex.
Li, Yuan; Zhang, Chuncheng; Hou, Chunping; Yao, Li; Zhang, Jiacai; Long, Zhiying
2017-12-21
Binocular disparity provides a powerful cue for depth perception in a stereoscopic environment. Despite increasing knowledge of the cortical areas that process disparity from neuroimaging studies, the neural mechanism underlying disparity-sign processing [crossed disparity (CD)/uncrossed disparity (UD)] is still poorly understood. In the present study, functional magnetic resonance imaging (fMRI) was used to explore the neural features that are relevant to disparity-sign processing. We performed an fMRI experiment on 27 right-handed healthy human volunteers using both general linear model (GLM) and multi-voxel pattern analysis (MVPA) methods. First, GLM was used to determine the cortical areas that displayed different responses to different disparity signs. Second, MVPA was used to determine how the cortical areas discriminate different disparity signs. The GLM analysis indicated that shapes with UD induced significantly stronger activity in the sub-region (LO) of the lateral occipital cortex (LOC) than those with CD. The region-of-interest MVPA indicated that areas V3d and V3A discriminated crossed and uncrossed disparities with higher accuracy than LOC. The searchlight-based MVPA indicated that the dorsal visual cortex showed significantly higher prediction accuracy than the ventral visual cortex, and that the sub-region LO of LOC showed high accuracy in discriminating crossed and uncrossed disparities. These results suggest that the dorsal visual areas are more discriminative of disparity sign than the ventral visual areas, even though their overall responses are not sensitive to it. Moreover, LO in the ventral visual cortex is relevant to the recognition of shapes with different disparity signs and is discriminative of disparity sign.
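The logic of MVPA, predicting the disparity sign from the spatial pattern of voxel responses rather than from mean amplitude, can be sketched with a leave-one-out nearest-centroid decoder (a deliberate simplification; the study's actual classifier is not specified in the abstract):

```python
def loo_nearest_centroid(patterns, labels):
    """Leave-one-out decoding accuracy: each voxel pattern is classified
    by the nearer class centroid computed from the remaining patterns."""
    def centroid(rows):
        return [sum(col) / len(rows) for col in zip(*rows)]

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    classes = sorted(set(labels))
    correct = 0
    for i, (p, lab) in enumerate(zip(patterns, labels)):
        cents = {}
        for c in classes:
            rows = [q for j, (q, l) in enumerate(zip(patterns, labels))
                    if j != i and l == c]
            cents[c] = centroid(rows)
        pred = min(classes, key=lambda c: dist2(p, cents[c]))
        correct += (pred == lab)
    return correct / len(patterns)
```

Above-chance accuracy from such a decoder in an area (e.g., V3d or V3A) indicates that its voxel pattern carries disparity-sign information even when the area's mean GLM response does not differ between CD and UD.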
Selecting automation for the clinical chemistry laboratory.
Melanson, Stacy E F; Lindeman, Neal I; Jarolim, Petr
2007-07-01
Laboratory automation promises to improve the quality and efficiency of laboratory operations, and may provide a solution to the quality demands and staff shortages faced by today's clinical laboratories. Several vendors offer automation systems in the United States, with both subtle and obvious differences. Arriving at a decision to automate, and the ensuing evaluation of available products, can be time-consuming and challenging. Although considerable discussion concerning the decision to automate has been published, relatively little attention has been paid to the process of evaluating and selecting automation systems. Our objective is to outline a process for evaluating and selecting automation systems as a reference for laboratories contemplating laboratory automation. Our Clinical Chemistry Laboratory staff recently evaluated all major laboratory automation systems in the United States, with their respective chemistry and immunochemistry analyzers. Our experience is described and organized according to the selection process, the important considerations in clinical chemistry automation, decisions and implementation, and conclusions pertaining to this experience. The process of selecting chemistry automation, including the formation of a committee, workflow analysis, submission of a request for proposal, site visits, and a final decision, took approximately 14 months. We outline important considerations in automation design, preanalytical processing, analyzer selection, postanalytical storage, and data management. Selecting clinical chemistry laboratory automation is a complex, time-consuming process. Laboratories considering laboratory automation may benefit from the concise overview and narrative and tabular suggestions provided.
NASA Astrophysics Data System (ADS)
Esteghamatian, Mehdi; Sarkar, Kripasindhu; Pautler, Stephen E.; Chen, Elvis C. S.; Peters, Terry M.
2012-02-01
Radical prostatectomy surgery (RP) is the gold standard for treatment of localized prostate cancer (PCa). Recently, the emergence of minimally invasive techniques such as Laparoscopic Radical Prostatectomy (LRP) and Robot-Assisted Laparoscopic Radical Prostatectomy (RARP) has improved the outcomes for prostatectomy. However, it remains difficult for surgeons to make informed decisions regarding resection margins and nerve sparing, since the location of the tumor within the organ is not usually visible in a laparoscopic view. While MRI enables visualization of the salient structures and cancer foci, its efficacy in LRP is reduced unless it is fused into a stereoscopic view such that homologous structures overlap. Registration of the MRI image and the peri-operative ultrasound image using a tracked probe can potentially be exploited to bring the pre-operative information into alignment with the patient coordinate system during the procedure. While doing so, prostate motion needs to be compensated in real time to synchronize the stereoscopic view with the pre-operative MRI during the prostatectomy procedure. In this study, a point-based stereoscopic tracking technique is investigated to compensate for rigid prostate motion so that the same motion can be applied to the pre-operative images. The method relies on stereoscopic tracking of surface markers implanted on the surface of the prostate phantom. The average target registration error using this approach was 3.25 ± 1.43 mm.
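Point-based rigid tracking of the kind described amounts to estimating the rotation and translation that best map the tracked surface markers from one frame to the next, e.g., by the Kabsch/Procrustes method; the same transform can then be applied to the pre-operative MRI. A sketch under that assumption (not the authors' implementation):

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) with dst ~= R @ src + t,
    estimated from N >= 3 corresponding 3D marker positions via the
    Kabsch method (SVD of the cross-covariance matrix)."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)              # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                         # guard against reflection
    t = cd - R @ cs
    return R, t
```

Applying the recovered (R, t) to the pre-operative volume keeps it synchronized with the stereoscopic laparoscopic view as the prostate moves; residual misalignment at anatomical targets is what the target registration error quantifies.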
Recovering stereo vision by squashing virtual bugs in a virtual reality environment.
Vedamurthy, Indu; Knill, David C; Huang, Samuel J; Yung, Amanda; Ding, Jian; Kwon, Oh-Sang; Bavelier, Daphne; Levi, Dennis M
2016-06-19
Stereopsis is the rich impression of three-dimensionality, based on binocular disparity-the differences between the two retinal images of the same world. However, a substantial proportion of the population is stereo-deficient, and relies mostly on monocular cues to judge the relative depth or distance of objects in the environment. Here we trained adults who were stereo blind or stereo-deficient owing to strabismus and/or amblyopia in a natural visuomotor task-a 'bug squashing' game-in a virtual reality environment. The subjects' task was to squash a virtual dichoptic bug on a slanted surface, by hitting it with a physical cylinder they held in their hand. The perceived surface slant was determined by monocular texture and stereoscopic cues, with these cues being either consistent or in conflict, allowing us to track the relative weighting of monocular versus stereoscopic cues as training in the task progressed. Following training most participants showed greater reliance on stereoscopic cues, reduced suppression and improved stereoacuity. Importantly, the training-induced changes in relative stereo weights were significant predictors of the improvements in stereoacuity. We conclude that some adults deprived of normal binocular vision and insensitive to the disparity information can, with appropriate experience, recover access to more reliable stereoscopic information. This article is part of the themed issue 'Vision in our three-dimensional world'. © 2016 The Author(s).
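The "relative weighting of monocular versus stereoscopic cues" follows the standard reliability-weighted linear cue-combination model, in which each cue's weight is proportional to its inverse variance. A minimal sketch with hypothetical slant values (illustrative of the model, not of the study's fitting procedure):

```python
def combine_slant(s_stereo, var_stereo, s_texture, var_texture):
    """Reliability-weighted linear cue combination: each cue is weighted
    by its reliability (inverse variance). Returns the combined slant
    estimate and the stereo weight; training is expected to raise the
    stereo weight as the stereo cue becomes more reliable."""
    r_s = 1.0 / var_stereo
    r_t = 1.0 / var_texture
    w_stereo = r_s / (r_s + r_t)
    return w_stereo * s_stereo + (1.0 - w_stereo) * s_texture, w_stereo
```

Putting the two cues in conflict, as in the bug-squashing task, lets the experimenter read the weight off the subject's hand orientation: a response closer to the stereo-specified slant implies a larger stereo weight.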
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.; Bucher, Urs J.; Statler, Irving C. (Technical Monitor)
1994-01-01
The influence of physically presented background stimuli on the perceived depth of optically overlaid, stereoscopic virtual images has been studied using head-mounted stereoscopic virtual-image displays. These displays allow presentation of physically unrealizable stimulus combinations. Positioning an opaque physical object either at the initial perceived depth of the virtual image, or at a position substantially in front of the virtual image, causes the virtual image to be perceived as moving closer to the observer. When the object is positioned substantially in front of the virtual image, subjects often perceive the opaque object to become transparent. Evidence is presented that the apparent change of position caused by interposition of the physical object is not due to occlusion cues. Accordingly, it may have an alternative cause, such as variation in the binocular vergence position of the eyes caused by introduction of the physical object. This effect may complicate the design of overlaid virtual-image displays for near objects and appears to be related to the relative conspicuousness of the overlaid virtual image and the background. Consequently, it may be related to earlier analyses by John Foley, which modeled open-loop pointing errors to stereoscopically presented points of light in terms of errors in determining a reference point for interpreting observed retinal disparities. Implications for the design of see-through displays for manufacturing are discussed.
Automation and control of off-planet oxygen production processes
NASA Technical Reports Server (NTRS)
Marner, W. J.; Suitor, J. W.; Schooley, L. S.; Cellier, F. E.
1990-01-01
This paper addresses several aspects of the automation and control of off-planet production processes. First, a general approach to process automation and control is discussed from the viewpoint of translating human process control procedures into automated procedures. Second, the control issues for the automation and control of off-planet oxygen processes are discussed. Sensors, instruments, and components are defined and discussed in the context of off-planet applications, and the need for 'smart' components is clearly established.
Stereoscopic game design and evaluation
NASA Astrophysics Data System (ADS)
Rivett, Joe; Holliman, Nicolas
2013-03-01
We report on a new game design in which the stereoscopic depth cue is made sufficiently critical to success that game play should become impossible without a stereoscopic 3D (S3D) display, and at the same time we investigate whether S3D game play is affected by screen size. Before detailing the new game design we review previously unreported results from our stereoscopic game research over the last ten years at the Durham Visualisation Laboratory, which demonstrate that game players can achieve significantly higher scores using S3D displays when depth judgements are an integral part of the game. Method: we design a game in which almost all depth cues, apart from the binocular cue, are removed. The aim of the game is to steer a spaceship through a series of oncoming hoops. The player views the scene from above, with the hoops moving right to left across the screen towards the spaceship; to play the game it is essential to make decisive depth judgements to steer the spaceship through each oncoming hoop. To confound these judgements we alter the remaining depth cues; for example, perspective is reduced as a cue by varying each hoop's depth, radius and cross-sectional size. Results: players were screened for stereoscopic vision, given a short practice session, and then played the game in both 2D and S3D modes on a seventeen-inch desktop display; on average, participants achieved a more than three times higher score in S3D than in 2D. The same experiment was repeated using a four-metre S3D projection screen, with similar results. Conclusions: games that use the binocular depth cue in decisive game judgements can benefit significantly from an S3D display. Based on both our current and previous results we additionally conclude that display size, from cell phone to desktop to projection display, does not adversely affect player performance.
Visualization of planetary subsurface radar sounder data in three dimensions using stereoscopy
NASA Astrophysics Data System (ADS)
Frigeri, A.; Federico, C.; Pauselli, C.; Ercoli, M.; Coradini, A.; Orosei, R.
2010-12-01
Planetary subsurface sounding radar data extend the knowledge of planetary surfaces to a third dimension: depth. Interpreting radar echo delays converted into depth often requires comparative analysis with other data, mainly topography, and radar data from different orbits can be used to investigate the spatial continuity of signals from subsurface geologic features. This scenario requires taking spatially referenced information into account in three dimensions. Three-dimensional objects are generally easier to understand when represented in a three-dimensional space, and this representation can be improved by stereoscopic vision. Since its invention in the first half of the 19th century, stereoscopy has been used in a broad range of applications, including scientific visualization. The rapid improvement of computer graphics and the spread of graphics rendering hardware make it possible to apply the basic principles of stereoscopy in the digital domain, allowing the stereoscopic projection of complex models. Specialized systems for stereoscopic viewing of scientific data have long been available in industry, but such proprietary solutions were affordable only to large research institutions. In the last decade, thanks to the GeoWall Consortium, the basics of stereoscopy have been applied to stereoscopic viewers built from off-the-shelf hardware; GeoWalls have spread and are now used by several geoscience research institutes and universities. We are exploring techniques for visualizing planetary subsurface sounding radar data in three dimensions, and we are developing a hardware system for rendering them in a stereoscopic vision system. Several Free and Open Source Software tools and libraries are being used, as their level of interoperability is typically high and their licensing offers the opportunity to quickly implement new functionality for specific needs as the project progresses.
Visualizing planetary radar data in three dimensions is a challenging task, and the exploration of different strategies will lead to the selection of the most appropriate ones for meaningful extraction of information from the products of these innovative instruments.
NASA Astrophysics Data System (ADS)
Wong, Erwin
2000-03-01
Traditional linear imaging methods limit the viewer to a single fixed-point perspective. By means of a single-lens, multiple-perspective mirror system, a 360-degree representation of the area around the camera is reconstructed; this reconstruction overcomes the limitations of a traditional camera by providing the viewer with many different perspectives. By shaping the mirror into a hemispherical surface with multiple focal lengths at various diameters, and by placing a parabolic mirror overhead, a stereoscopic image can be extracted from the image captured by a high-resolution camera placed beneath the mirror. Image extraction and correction are performed by computer processing of the captured image, which presents up to five distinguishable viewpoints from which a computer can extrapolate pseudo-perspective data. Geometric and depth-of-field information can be extrapolated by comparing and isolating objects within a virtual scene post-processed by the computer. Combining these data with scene-rendering software provides the viewer with the ability to choose a desired viewing position, multiple dynamic perspectives, and virtually constructed perspectives based on minimal existing data. An examination of the workings of the mirror relay system is provided, including possible image extrapolation and correction methods. Generation of virtual interpolated and constructed data is also discussed.
Stereo study as an aid to visual analysis of ERTS and Skylab images
NASA Technical Reports Server (NTRS)
Vangenderen, J. L. (Principal Investigator)
1973-01-01
The author has identified the following significant results. The parallax on ERTS and Skylab images is sufficiently large for exploitation by human photointerpreters. The ability to view the imagery stereoscopically reduces the signal-to-noise ratio. Stereoscopic examination of orbital data can contribute to studies of spatial, spectral, and temporal variations on the imagery. The combination of true stereo parallax plus shadow parallax offers many possibilities to human interpreters for making meaningful analyses of orbital imagery.
Virtual workstation - A multimodal, stereoscopic display environment
NASA Astrophysics Data System (ADS)
Fisher, S. S.; McGreevy, M.; Humphries, J.; Robinett, W.
1987-01-01
A head-mounted, wide-angle, stereoscopic display system controlled by operator position, voice and gesture has been developed for use in a multipurpose interface environment. The system provides a multisensory, interactive display environment in which a user can virtually explore a 360-degree synthesized or remotely sensed environment and can viscerally interact with its components. Primary applications of the system are in telerobotics, management of large-scale integrated information systems, and human factors research. System configuration, application scenarios, and research directions are described.
Digital stereoscopic cinema: the 21st century
NASA Astrophysics Data System (ADS)
Lipton, Lenny
2008-02-01
Over 1000 theaters in more than a dozen countries have been outfitted with digital projectors using the Texas Instruments DLP engine equipped to show field-sequential 3-D movies using the polarized method of image selection. Shuttering eyewear and advanced anaglyph products are also being deployed for image selection. Many studios are in production with stereoscopic films, and some have committed to producing their entire output of animated features in 3-D. This is a time of technology change for the motion picture industry.
Jipp, Meike
2016-02-01
I explored whether different cognitive abilities (information-processing ability, working-memory capacity) are needed for expertise development when different types of automation (information vs. decision automation) are employed. It is well documented that expertise development and the employment of automation lead to improved performance. Here, it is argued that a learner's ability to reason about an activity may be hindered by the employment of information automation. Additional feedback needs to be processed, thus increasing the load on working memory and decelerating expertise development. By contrast, the employment of decision automation may stimulate reasoning, increase the initial load on information-processing ability, and accelerate expertise development. Authors of past research have not investigated the interrelations between automation assistance, individual differences, and expertise development. Sixty-one naive learners controlled simulated air traffic with two types of automation: information automation and decision automation. Their performance was captured across 16 trials. Well-established tests were used to assess information-processing ability and working-memory capacity. As expected, learners' performance benefited from expertise development and decision automation. Furthermore, individual differences moderated the effect of the type of automation on expertise development: The employment of only information automation increased the load on working memory during later expertise development. The employment of decision automation initially increased the need to process information. These findings highlight the importance of considering individual differences and expertise development when investigating human-automation interaction. The results are relevant for selecting automation configurations for expertise development. © 2015, Human Factors and Ergonomics Society.
The Automation of Reserve Processing.
ERIC Educational Resources Information Center
Self, James
1985-01-01
Describes an automated reserve processing system developed locally at Clemons Library, University of Virginia. Discussion covers developments in the reserve operation at Clemons Library, automation of the processing and circulation functions of reserve collections, and changes in reserve operation performance and staffing needs due to automation.…
NASA Astrophysics Data System (ADS)
Cooperstock, Jeremy R.; Wang, Guangyu
2009-02-01
We conducted a comparative study of different stereoscopic display modalities (head-mounted display, polarized projection, and multiview lenticular display) to evaluate their efficacy in supporting manipulation and understanding of 3D content, specifically, in the context of neurosurgical visualization. Our study was intended to quantify the differences in resulting task performance between these choices of display technology. The experimental configuration involved a segmented brain vasculature and a simulated tumor. Subjects were asked to manipulate the vasculature and a pen-like virtual probe in order to define a vessel-free path from cortical surface to the targeted tumor. Because of the anatomical complexity, defining such a path can be a challenging task. To evaluate the system, we quantified performance differences under three different stereoscopic viewing conditions. Our results indicate that, on average, participants achieved best performance using polarized projection, and worst with the multiview lenticular display. These quantitative measurements were further reinforced by the subjects' responses to our post-test questionnaire regarding personal preferences.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sebok, A.; Nystad, E.
This paper describes a study investigating questions of learning effectiveness across different VR technology types. Four VR display technologies were compared in terms of their ability to support procedural learning. The VR systems included two desktop displays (monoscopic and stereoscopic view), a large-screen stereoscopic display, and a monoscopic head-mounted display. Twenty-four participants completed procedural training scenarios on these different display types. Training effectiveness was assessed in terms of objective task performance. Following the training session, participants performed the procedure they had just learned using the same VR display type they used for training. Time to complete the procedure and errors were recorded. Retention and transfer of training were evaluated in a talk-through session 24 hours after the training. In addition, subjective questionnaire data were gathered to investigate perceived workload, Sense of Presence, simulator sickness, perceived usability, and ease of navigation. While no difference was found for short-term learning, the study results indicate that retention and transfer of training were better supported by the large-screen stereoscopic condition. (authors)
NASA Astrophysics Data System (ADS)
Hollander, Ari; Rose, Howard; Kollin, Joel; Moss, William
2011-03-01
Attack! of the S. Mutans is a multi-player game designed to harness the immersion and appeal possible with wide-field-of-view stereoscopic 3D to combat the tooth decay epidemic. Tooth decay is one of the leading causes of school absences and costs more than $100B annually in the U.S. In 2008 the authors received a grant from the National Institutes of Health to build a science museum exhibit that included a suite of serious games involving the behaviors and bacteria that cause cavities. The centerpiece is an adventure game in which five simultaneous players use modified Wii controllers to battle biofilms and bacteria while immersed in environments generated within an 11-foot stereoscopic WUXGA display. The authors describe the system and interface used in this prototype application and some of the ways they attempted to use the power of immersion and the appeal of stereoscopic 3D to change health attitudes and self-care habits.
Stereoscopic 3D entertainment and its effect on viewing comfort: comparison of children and adults.
Pölönen, Monika; Järvenpää, Toni; Bilcu, Beatrice
2013-01-01
Children's and adults' viewing comfort during stereoscopic three-dimensional film viewing and computer game playing was studied. Certain mild changes in visual function, heterophoria and near point of accommodation values, as well as eyestrain and visually induced motion sickness levels, were found when single setups were compared. The viewing system had an influence on viewing comfort, in particular on eyestrain levels, but no clear difference between two- and three-dimensional systems was found. Additionally, certain mild changes in visual functions and visually induced motion sickness levels between adults and children were found. In general, all of the system-task combinations caused mild eyestrain and possible changes in visual functions, but these changes were small in magnitude. Subjective opinions, which further support these measurements, indicated that using a stereoscopic three-dimensional system for up to 2 h was acceptable for most users regardless of age. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Stereoscopic Analysis of 19 May and 31 Aug 2007 Filament Eruptions
NASA Technical Reports Server (NTRS)
Liewer, Paulett; DeJong, E. M.; Hall, J. R.
2008-01-01
The presentation outline includes results from stereoscopic analysis of SECCHI/EUVI data for the 19 May 2007 filament eruption, including the determined 3D trajectory of the erupting filament, strong evidence for reconnection below the erupting filament consistent with the standard model, and a comparison of EUVI and H-alpha images during the eruption; and results from stereoscopic analysis of the 31 August 2007 filament eruption. Slide topics include the standard model of filament eruption; 2007 May 19 STEREO A/SECCHI/EUVI 195 and 304 A: CME signatures and filament eruption, 3D reconstruction of the erupting prominence; the filament's relation to coronal magnetic fields; 3D reconstructions of the filament eruption; a height-time plot of the eruption from 3D reconstructions; a detailed pre-eruption comparison of H-alpha and EUVI 304 at 12:42 UT; comparisons during the eruption; the STEREO prominence and CME of August 31, 2007; reconstructions of the prominence and the leading edges of both the dark cavity and the CME; and 3D reconstructions of the prominence and leading edges.
Agarwal, Nitin; Schmitt, Paul J; Sukul, Vishad; Prestigiacomo, Charles J
2012-08-01
Virtual reality training for complex tasks has been shown to be of benefit in fields involving highly technical and demanding skill sets. The use of a stereoscopic three-dimensional (3D) virtual reality environment to teach a patient-specific analysis of the microsurgical treatment modalities of a complex basilar aneurysm is presented. Three different surgical approaches were evaluated in a virtual environment and then compared to elucidate the best surgical approach. These approaches were assessed with regard to the line-of-sight, skull base anatomy and visualisation of the relevant anatomy at the level of the basilar artery and surrounding structures. Overall, the stereoscopic 3D virtual reality environment with fusion of multimodality imaging affords an excellent teaching tool for residents and medical students to learn surgical approaches to vascular lesions. Future studies will assess the educational benefits of this modality and develop a series of metrics for student assessments.
Electronic Data Interchange in Procurement
1990-04-01
contract management and order processing systems. This conversion of automated information to paper and back to automated form is not only slow and ... automated purchasing computer and the contractor's order processing computer through telephone lines, as illustrated in Figure 1-1. Computer-to-computer ... into the contractor's order processing or contract management system. This approach - converting automated information to paper and back to automated
Experiments on shape perception in stereoscopic displays
NASA Astrophysics Data System (ADS)
Leroy, Laure; Fuchs, Philippe; Paljic, Alexis; Moreau, Guillaume
2009-02-01
Stereoscopic displays are increasingly used for computer-aided design. The aim is to build virtual prototypes instead of real ones, saving time, money and raw materials. But do we really know whether virtual displays render objects realistically to potential users? In this study, we performed several experiments comparing two virtual shapes to their real-world equivalents, each comparison addressing a specific issue: first, perception tests evaluating the importance of head tracking, to determine whether it is better to concentrate our efforts there or on stereoscopic vision; second, the effects of interpupillary distance; third, the effects of the position of the main object relative to the screen. Two different tests were used, the first using a well-known shape (a sphere) and the second using an irregular shape of almost the same colour and dimensions; together these tests allow us to determine whether symmetry is important in shape perception. We show that head tracking has a stronger effect on shape perception than stereoscopic vision, especially on depth perception, because the subject is able to move around the scene. The study also shows that an object between the subject and the screen is perceived better than an object at the screen plane, even though the latter causes less eye strain.
Web GIS in practice VII: stereoscopic 3-D solutions for online maps and virtual globes
Boulos, Maged N Kamel; Robinson, Larry R
2009-01-01
Because our pupils are about 6.5 cm apart, each eye views a scene from a slightly different angle and sends a unique image to the visual cortex, which then merges the images from both eyes into a single picture. The slight difference between the right and left images allows the brain to properly perceive the 'third dimension' or depth in a scene (stereopsis). However, when a person views a conventional 2-D (two-dimensional) image representation of a 3-D (three-dimensional) scene on a conventional computer screen, each eye receives essentially the same information. Depth in such cases can only be approximately inferred from visual cues in the image, such as perspective, as only one image is offered to both eyes. The goal of stereoscopic 3-D displays is to project a slightly different image into each eye to achieve a much truer and more realistic perception of depth, of different scene planes, and of object relief. This paper presents a brief review of a number of stereoscopic 3-D hardware and software solutions for creating and displaying online maps and virtual globes (such as Google Earth) in "true 3D", with costs ranging from almost free to multi-thousand pounds sterling. A practical account is also given of the experience of the USGS BRD UMESC (United States Geological Survey's Biological Resources Division, Upper Midwest Environmental Sciences Center) in setting up a low-cost, full-colour stereoscopic 3-D system. PMID:19849837
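The binocular geometry this abstract describes can be made concrete with the standard pinhole triangulation relation: a point's depth Z follows from the baseline B between the two viewpoints and the horizontal disparity d between its left and right image positions. The function and numbers below are an illustrative sketch, not anything taken from the paper.

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Triangulated depth Z = f * B / d for a rectified stereo pair.

    focal_px: focal length in pixels; baseline_m: eye/camera separation in
    metres; disparity_px: horizontal offset of the point between the views.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# A nearer object produces a larger disparity than a farther one
# (illustrative values: 800 px focal length, 6.5 cm baseline).
near = depth_from_disparity(800.0, 0.065, 40.0)   # 1.3 m
far = depth_from_disparity(800.0, 0.065, 10.0)    # 5.2 m
```

Because Z is inversely proportional to d, nearby objects shift far more between the two views than distant ones, which is exactly the cue stereoscopic 3-D displays reproduce.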
Architecture for high performance stereoscopic game rendering on Android
NASA Astrophysics Data System (ADS)
Flack, Julien; Sanderson, Hugh; Shetty, Sampath
2014-03-01
Stereoscopic gaming is a popular source of content for consumer 3D display systems. There has been a significant shift in the gaming industry towards casual games for mobile devices running on the Android™ Operating System and driven by ARM™ and other low-power processors. Such systems are now being integrated directly into the next generation of 3D TVs, potentially removing the requirement for an external games console. Although native stereo support has been integrated into some high-profile titles on established platforms like Windows PC and PS3, there is a lack of GPU-independent 3D support for the emerging Android platform. We describe a framework for enabling stereoscopic 3D gaming on Android for applications on mobile devices, set-top boxes and TVs. A core component of the architecture is a 3D game driver, which is integrated into the Android OpenGL™ ES graphics stack to convert existing 2D graphics applications into stereoscopic 3D in real time. The architecture includes a method of analyzing 2D games and using rule-based Artificial Intelligence (AI) to position separate objects in 3D space. We describe an innovative stereo 3D rendering technique to separate the views in the depth domain and render directly into the display buffer. The advantages of the stereo renderer are demonstrated by characterizing its performance in comparison to more traditional rendering techniques, including depth-based image rendering, both in terms of frame rates and impact on battery consumption.
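The abstract does not spell out how the driver separates the two views; a common way to derive left- and right-eye projections from a single mono camera is the off-axis (asymmetric-frustum) technique, sketched below with illustrative parameters. It computes the near-plane frustum bounds for each eye so that both views converge on the screen plane.

```python
def stereo_frusta(width, height, near, convergence, eye_sep):
    """Return (left, right, bottom, top) near-plane frustum bounds for the
    left and right eyes, for a virtual screen of width x height placed at
    the convergence distance (off-axis stereo projection)."""
    top = 0.5 * height * near / convergence
    half_w = 0.5 * width * near / convergence
    # Horizontal shear of each eye's frustum so both converge on the screen.
    shift = 0.5 * eye_sep * near / convergence
    left_eye = (-half_w + shift, half_w + shift, -top, top)
    right_eye = (-half_w - shift, half_w - shift, -top, top)
    return left_eye, right_eye

# Illustrative values: 4 x 3 screen, near plane at 1, screen at 2, 6.5 cm eyes.
left_eye, right_eye = stereo_frusta(4.0, 3.0, 1.0, 2.0, 0.065)
```

Each eye keeps a parallel view direction and only the frustum is sheared, which avoids the vertical parallax introduced by the simpler "toe-in" approach.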
Comparative Analysis of InSAR Digital Surface Models for Test Area Bucharest
NASA Astrophysics Data System (ADS)
Dana, Iulia; Poncos, Valentin; Teleaga, Delia
2010-03-01
This paper presents the results of the interferometric processing of ERS Tandem, ENVISAT and TerraSAR- X for digital surface model (DSM) generation. The selected test site is Bucharest (Romania), a built-up area characterized by the usual urban complex pattern: mixture of buildings with different height levels, paved roads, vegetation, and water bodies. First, the DSMs were generated following the standard interferometric processing chain. Then, the accuracy of the DSMs was analyzed against the SPOT HRS model (30 m resolution at the equator). A DSM derived by optical stereoscopic processing of SPOT 5 HRG data and also the SRTM (3 arc seconds resolution at the equator) DSM have been included in the comparative analysis.
Neubert, Sebastian; Göde, Bernd; Gu, Xiangyu; Stoll, Norbert; Thurow, Kerstin
2017-04-01
Modern business process management (BPM) is increasingly interesting for laboratory automation. End-to-end workflow automation and improved top-level systems integration for information technology (IT) and automation systems are especially prominent objectives. With the ISO Standard Business Process Model and Notation (BPMN) 2.X, a system-independent, broadly accepted graphical process control notation is provided, allowing process analysis while also being executable. The transfer of BPM solutions to structured laboratory automation places novel demands, for example, concerning real-time-critical process and systems integration. The article discusses the potential of laboratory execution systems (LESs) for an easier implementation of the business process management system (BPMS) in hierarchical laboratory automation. In particular, complex application scenarios, including long process chains based on, for example, several distributed automation islands and mobile laboratory robots for material transport, are difficult to handle in BPMSs. The presented approach deals with the displacement of workflow control tasks into life-science-specialized LESs, the reduction of the numerous different interfaces between BPMSs and subsystems, and the simplification of complex process models. Thus, the integration effort for complex laboratory workflows can be significantly reduced for strictly structured automation solutions. An example application, consisting of a mixture of manual and automated subprocesses, is demonstrated by the presented BPMS-LES approach.
Advanced automation for in-space vehicle processing
NASA Technical Reports Server (NTRS)
Sklar, Michael; Wegerif, D.
1990-01-01
The primary objective of this 3-year planned study is to assure that the fully evolved Space Station Freedom (SSF) can support automated processing of exploratory mission vehicles. Current study assessments show that the extravehicular activity (EVA) and, to some extent, intravehicular activity (IVA) manpower requirements for required processing tasks far exceed the available manpower. Furthermore, many processing tasks are either hazardous operations or exceed EVA capability. Thus, automation is essential for SSF transportation node functionality. Here, advanced automation represents the replacement of human-performed tasks beyond the planned baseline automated tasks. Both physical tasks, such as manipulation, assembly and actuation, and cognitive tasks, such as visual inspection, monitoring and diagnosis, and task planning, are considered. During this first year of activity, both the Phobos/Gateway Mars Expedition and Lunar Evolution missions proposed by the Office of Exploration have been evaluated. A methodology for choosing optimal tasks to be automated has been developed. Processing tasks for both missions have been ranked on the basis of automation potential. The underlying concept in evaluating and describing processing tasks has been the use of a common set of 'primitive' task descriptions. Primitive, or standard, tasks have been developed both for manual or crew processing and for automated machine processing.
Intubation simulation with a cross-sectional visual guidance.
Rhee, Chi-Hyoung; Kang, Chul Won; Lee, Chang Ha
2013-01-01
We present an intubation simulation with deformable objects and cross-sectional visual guidance using a general haptic device. Our method deforms the tube model when it collides with the human model; a mass-spring model with Euler integration is used for the tube deformation. To give trainees a more effective understanding of the intubation process, we provide a cross-sectional view of the oral cavity and the tube. Our system also applies stereoscopic rendering to improve the depth perception and the realism of the simulation.
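The deformation step named above (a mass-spring model advanced with explicit Euler integration) can be sketched for a single damped spring; the constants are illustrative, and the paper's tube model would link many such springs across the mesh.

```python
def euler_step(x, v, rest, k, c, m, dt):
    """One explicit (forward) Euler update of a damped spring:
    Hooke restoring force plus viscous damping on a point mass."""
    force = -k * (x - rest) - c * v
    v = v + (force / m) * dt
    x = x + v * dt
    return x, v

# Start the spring stretched past its rest length of 1.0 and let it settle.
x, v = 2.0, 0.0
for _ in range(5000):
    x, v = euler_step(x, v, rest=1.0, k=10.0, c=2.0, m=1.0, dt=0.01)
# with damping, the mass converges back toward the rest length
```

Explicit Euler is only conditionally stable, so the time step must stay small relative to the spring stiffness; real-time simulators often trade it for semi-implicit (symplectic) Euler for exactly this reason.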
Visually representing reality: aesthetics and accessibility aspects
NASA Astrophysics Data System (ADS)
van Nes, Floris L.
2009-02-01
This paper gives an overview of the visual representation of reality with three imaging technologies: painting, photography and electronic imaging. The contribution of the important image aspects, called dimensions hereafter, such as color, fine detail and total image size, to the degree of reality and aesthetic value of the rendered image are described for each of these technologies. Whereas quite a few of these dimensions - or approximations, or even only suggestions thereof - were already present in prehistoric paintings, apparent motion and true stereoscopic vision only recently were added - unfortunately also introducing accessibility and image safety issues. Efforts are made to reduce the incidence of undesirable biomedical effects such as photosensitive seizures (PSS), visually induced motion sickness (VIMS), and visual fatigue from stereoscopic images (VFSI) by international standardization of the image parameters to be avoided by image providers and display manufacturers. The history of this type of standardization, from an International Workshop Agreement to a strategy for accomplishing effective international standardization by ISO, is treated at some length. One of the difficulties to be mastered in this process is the reconciliation of the, sometimes opposing, interests of vulnerable persons, thrill-seeking viewers, creative video designers and the game industry.
NASA Astrophysics Data System (ADS)
Lee, Marcus J. C.; Bourke, Paul; Alderson, Jacqueline A.; Lloyd, David G.; Lay, Brendan
2010-02-01
Non-contact anterior cruciate ligament (ACL) injuries are serious and debilitating, often resulting from the performance of evasive side-stepping (Ssg) by team sport athletes. Previous laboratory-based investigations of evasive Ssg have used generic visual stimuli to simulate the realistic time and space constraints that athletes experience in the preparation and execution of the manoeuvre. However, the use of unrealistic visual stimuli to impose these constraints may not accurately identify the relationship between the perceptual demands and ACL loading during Ssg in actual game environments. We propose that stereoscopically filmed footage featuring sport-specific opposing defender/s simulating a tackle on the viewer, when used as visual stimuli, could improve the ecological validity of laboratory-based investigations of evasive Ssg. Because precision, and not just the experience of viewing depth, is needed in these scenarios, a rigorous filming process built on key geometric considerations and equipment development enabling a separation of 6.5 cm between two commodity cameras had to be undertaken. Within safety limits, this could be an invaluable tool in enabling more accurate investigations of the associations between evasive Ssg and ACL injury risk.
Semi-Immersive Virtual Turbine Engine Simulation System
NASA Astrophysics Data System (ADS)
Abidi, Mustufa H.; Al-Ahmari, Abdulrahman M.; Ahmad, Ali; Darmoul, Saber; Ameen, Wadea
2018-05-01
The design and verification of assembly operations is essential for planning product production operations. Recently, virtual prototyping has witnessed tremendous progress, and has reached a stage where current environments enable rich and multi-modal interaction between designers and models through stereoscopic visuals, surround sound, and haptic feedback. The benefits of building and using Virtual Reality (VR) models in assembly process verification are discussed in this paper. We present the virtual assembly (VA) of an aircraft turbine engine. The assembly parts and sequences are explained using a virtual reality design system. The system enables stereoscopic visuals, surround sound, and rich, intuitive interaction with the developed models. A special software architecture is suggested to describe the assembly parts and assembly sequence in VR. A collision detection mechanism is employed that provides visual feedback to check the interference between components. The system is tested for virtual prototyping and assembly sequencing of a turbine engine. We show that the developed system is comprehensive in terms of VR feedback mechanisms, which include visual, auditory, and tactile, as well as force, feedback. The system is shown to be effective and efficient for validating the design of assembly, part design, and operations planning.
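The abstract does not detail its collision detection mechanism; a common broad-phase check in virtual assembly is axis-aligned bounding box (AABB) overlap, sketched below with hypothetical part boxes.

```python
def aabb_overlap(min_a, max_a, min_b, max_b):
    """True if two axis-aligned boxes, given by (x, y, z) min/max corners,
    intersect: they must overlap on every axis simultaneously."""
    return all(min_a[i] <= max_b[i] and min_b[i] <= max_a[i] for i in range(3))

# Two engine parts whose boxes interpenetrate would trigger the visual warning.
hit = aabb_overlap((0, 0, 0), (1, 1, 1), (0.5, 0.5, 0.5), (2, 2, 2))   # overlap
miss = aabb_overlap((0, 0, 0), (1, 1, 1), (2, 2, 2), (3, 3, 3))        # disjoint
```

In practice a broad-phase test like this only prunes candidate pairs; a narrow-phase test against the actual part meshes confirms real interference.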
An innovative virtual reality training tool for orthognathic surgery.
Pulijala, Y; Ma, M; Pears, M; Peebles, D; Ayoub, A
2018-02-01
Virtual reality (VR) surgery using Oculus Rift and Leap Motion devices is a multi-sensory, holistic surgical training experience. A multimedia combination including 360° videos, three-dimensional interaction, and stereoscopic videos in VR has been developed to enable trainees to experience a realistic surgery environment. The innovation allows trainees to interact with the individual components of the maxillofacial anatomy and apply surgical instruments while watching close-up stereoscopic three-dimensional videos of the surgery. In this study, a novel training tool for Le Fort I osteotomy based on immersive virtual reality (iVR) was developed and validated. Seven consultant oral and maxillofacial surgeons evaluated the application for face and content validity. Using a structured assessment process, the surgeons commented on the content of the developed training tool, its realism and usability, and the applicability of VR surgery for orthognathic surgical training. The results confirmed the clinical applicability of VR for delivering training in orthognathic surgery. Modifications were suggested to improve the user experience and interactions with the surgical instruments. This training tool is ready for testing with surgical trainees. Copyright © 2018 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, Wanjing; Schütze, Rainer; Böhler, Martin; Boochs, Frank; Marzani, Franck S.; Voisin, Yvon
2009-06-01
We present an approach that integrates a preprocessing step of region of interest (ROI) localization into 3-D scanners (laser or stereoscopic). The ultimate objective is to make the 3-D scanner intelligent enough to rapidly localize, during the preprocessing phase, the regions of the scene with high surface curvature, so that precise scanning is done only in these regions instead of over the whole scene. In this way, the scanning time can be greatly reduced, and the results contain only pertinent data. To test its feasibility and efficiency, we simulated the preprocessing process on an active stereoscopic system composed of two cameras and a video projector. The ROI localization is done iteratively. First, the video projector projects a regular point pattern onto the scene; the pattern is then modified iteratively according to the local surface curvature at each reconstructed 3-D point. Finally, the last pattern is used to determine the ROI. Our experiments showed that with this approach the system is capable of localizing all types of objects, including small objects with small depth.
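The curvature-driven pattern refinement can be illustrated in one dimension: estimate curvature from the reconstructed samples with a discrete second difference and densify the pattern only where the surface bends. The threshold and test surface below are illustrative assumptions, not the authors' implementation.

```python
def refine_samples(xs, height, threshold):
    """Insert a midpoint sample after every position where the discrete
    second difference of the surface height exceeds the threshold."""
    out = [xs[0]]
    for i in range(1, len(xs) - 1):
        d2 = height(xs[i - 1]) - 2 * height(xs[i]) + height(xs[i + 1])
        out.append(xs[i])
        if abs(d2) > threshold:
            out.append(0.5 * (xs[i] + xs[i + 1]))
    out.append(xs[-1])
    return sorted(out)

def bump(x):
    """Flat scene with one raised object between x = 4 and x = 6."""
    return 1.0 if 4 <= x <= 6 else 0.0

coarse = list(range(11))               # regular projected point pattern
fine = refine_samples(coarse, bump, 0.5)
# extra midpoints appear only at the object's edges: 3.5, 4.5, 6.5, 7.5
```

Iterating this step concentrates the projected pattern on high-curvature regions, which is the behaviour the abstract describes for the full 2-D pattern.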
Adaptive Algorithms for Automated Processing of Document Images
2011-01-01
Adaptive Algorithms for Automated Processing of Document Images. Mudit Agrawal, Doctor of Philosophy, 2011. Dissertation submitted to the Faculty of the Graduate School of the University ...
Automated processing of endoscopic surgical instruments.
Roth, K; Sieber, J P; Schrimm, H; Heeg, P; Buess, G
1994-10-01
This paper deals with the requirements for automated processing of endoscopic surgical instruments. After a brief analysis of the current problems, solutions are discussed. Test procedures have been developed to validate the automated processing, so that the cleaning results are guaranteed and reproducible. A device for testing and cleaning, called TC-MIC, was also designed together with Netzsch Newamatic and PCI to automate processing and reduce manual work.
Proof-of-concept automation of propellant processing
NASA Technical Reports Server (NTRS)
Ramohalli, Kumar; Schallhorn, P. A.
1989-01-01
For space-based propellant production, automation of the process is needed. Currently, all phases of terrestrial production have some form of human interaction. A mixer was acquired to help perform the tasks of automation. A heating system to be used with the mixer was designed, built, and installed. Tests performed on the heating system verify the design criteria. An IBM PS/2 personal computer was acquired for the future automation work. It is hoped that some of the mixing process itself will be automated. This is a concept demonstration task, proving that propellant production can be automated reliably.
Carlson, Jay; Kowalczuk, Jędrzej; Psota, Eric; Pérez, Lance C
2012-01-01
Robotic surgical platforms require vision feedback systems, which often consist of low-resolution, expensive, single-imager analog cameras. These systems are retooled for 3D display by simply doubling the cameras and outboard control units. Here, a fully-integrated digital stereoscopic video camera employing high-definition sensors and a class-compliant USB video interface is presented. This system can be used with low-cost PC hardware and consumer-level 3D displays for tele-medical surgical applications including military medical support, disaster relief, and space exploration.
Automated Chromium Plating Line for Gun Barrels
1979-09-01
consistent pretreatments and bath dwell times. Some of the advantages of automated processing include increased productivity (average of 20%) due to ... when automated processing procedures are used. The current method of applying chromium electrodeposits to gun tubes is a manual, batch operation ... currently practiced with rotary swaged gun tubes would substantially reduce the difficulties in automated processing. RECOMMENDATIONS
Teaching-learning: stereoscopic 3D versus Traditional methods in Mexico City.
Mendoza Oropeza, Laura; Ortiz Sánchez, Ricardo; Ojeda Villagómez, Raúl
2015-01-01
In the UNAM Faculty of Odontology, we use a stereoscopic 3D teaching method that has grown more common in the last year, which makes it important to know whether students can learn better with this strategy. The objective of the study is to determine whether 4th-year students of the bachelor's degree in dentistry learn Orthodontics more effectively with stereoscopic 3D than with the traditional method. First, we selected the course topics to be used for both methods: the traditional method used projection of slides, and the stereoscopic method used videos in digital stereo projection (seen through "passive" polarized 3D glasses). The main topic was supernumerary teeth, including those deviated from their eruption guide. Afterwards we gave the students an exam containing 24 items, validated by expert judgment in Orthodontics teaching. The data were compared between the two educational methods to determine effectiveness, using a before-and-after measurement model with the statistical package SPSS version 20. Results were collected from 9 groups of undergraduates in dentistry, a total of 218 students across the 3D and traditional methods. The traditional method gave a pretest mean of 4.91 (SD 1.48) and a posttest mean of 6.96 (SD 1.27, SE 0.12); the 3D method gave a pretest mean of 5.21 (SD 2.00, SE 0.19) and a posttest mean of 7.82 (SD 0.96, SE 0.09). The analysis of variance between groups gave F = 5.60 (Prob > F = 0.0000), and Bartlett's test for equal variances gave 21.06 (Prob > chi2 = 0.007). These results show that learning with the 3D method improved significantly compared with the traditional teaching method, with a strong association between the two methods. The findings suggest that the stereoscopic 3D method led to improved student learning compared to traditional teaching.
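The before-and-after comparison underlying results like these can be sketched as a small summary routine; the score lists below are invented for illustration and are not the study's data.

```python
import statistics

def summarize(pre, post):
    """Summarize a group's pretest and posttest scores and the mean gain,
    as in a before-and-after measurement design."""
    gain = [b - a for a, b in zip(pre, post)]
    return {
        "pre_mean": statistics.mean(pre),
        "pre_sd": statistics.stdev(pre),
        "post_mean": statistics.mean(post),
        "post_sd": statistics.stdev(post),
        "mean_gain": statistics.mean(gain),
    }

# Hypothetical scores for four students before and after training.
stats = summarize(pre=[5, 4, 6, 5], post=[8, 7, 8, 7])
```

A full analysis would follow this with a paired significance test and a between-groups comparison, as the study does with ANOVA in SPSS.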
Automation in School Library Media Centers.
ERIC Educational Resources Information Center
Driver, Russell W.; Driver, Mary Anne
1982-01-01
Surveys the historical development of automated technical processing in schools and notes the impact of this automation in a number of cases. Speculations about the future involvement of school libraries in automated processing and networking are included. Thirty references are listed. (BBM)
Sédille-Mostafaie, Nazanin; Engler, Hanna; Lutz, Susanne; Korte, Wolfgang
2013-06-01
Laboratories today face increasing pressure to automate operations due to increasing workloads and the need to reduce expenditure. Few studies to date have focused on the laboratory automation of preanalytical coagulation specimen processing. In the present study, we examined whether a clinical chemistry automation protocol meets the preanalytical requirements for coagulation analyses. During the implementation of laboratory automation, we began to operate a pre- and postanalytical automation system. The preanalytical unit processes blood specimens for chemistry, immunology and coagulation by automated specimen processing. As the production of platelet-poor plasma is highly dependent on optimal centrifugation, we examined specimen handling under different centrifugation conditions in order to produce optimal platelet-deficient plasma specimens. To this end, manually processed models centrifuged at 1500 g for 5 and 20 min were compared to an automated centrifugation model at 3000 g for 7 min. For analytical assays that are performed frequently enough to be targets for full automation, Passing-Bablok regression analysis showed close agreement between the different centrifugation methods, with a correlation coefficient between 0.98 and 0.99 and a bias between -5% and +6%. For seldom-performed assays that do not mandate full automation, the Passing-Bablok regression analysis showed acceptable to poor agreement between the different centrifugation methods. A full automation solution is suitable and can be recommended for frequent haemostasis testing.
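Passing-Bablok regression, used above to compare the centrifugation methods, is built on medians of pairwise slopes. The sketch below is a simplified, Theil-Sen-style approximation: the full Passing-Bablok procedure additionally applies an offset correction to the slope ranking and derives confidence intervals, both omitted here.

```python
import statistics
from itertools import combinations

def median_slope_fit(x, y):
    """Median of all pairwise slopes plus a median intercept: a simplified
    sketch of the robust regression idea behind Passing-Bablok."""
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i, j in combinations(range(len(x)), 2)
              if x[j] != x[i]]
    b = statistics.median(slopes)
    a = statistics.median([yi - b * xi for xi, yi in zip(x, y)])
    return a, b

# Two centrifugation methods in near-perfect agreement: slope near 1,
# intercept small (hypothetical paired measurements).
a, b = median_slope_fit([10, 20, 30, 40], [10.2, 19.8, 30.1, 40.0])
```

A slope near 1 and an intercept near 0 indicate the two methods agree, which is how such comparisons are read in method-validation studies.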
Consciousness and stereoscopic environmental imaging
NASA Astrophysics Data System (ADS)
Mason, Steve
2014-02-01
The question of human consciousness has intrigued philosophers and scientists for centuries: its nature, how we perceive our environment, how we think, our very awareness of thought and self. It has been suggested that stereoscopic vision is "a paradigm of how the mind works".1 In depth perception, laws of perspective are known, reasoned, and committed to memory from an early age; stereopsis, on the other hand, is a 3D experience governed by strict laws but actively joined within the brain: one sees it without explanation. How do we, in fact, process two different images into one 3D module within the mind, and does an awareness of this process give us insight into the workings of our own consciousness? To translate this idea to imaging, I employed ChromaDepth™ 3D glasses, which rely on light being refracted in a different direction for each eye, so that colors of differing wavelengths appear at varying distances from the viewer, resulting in a 3D space. This involves neither the calculation nor the manufacture of two images or views. Environmental spatial imaging was developed: a 3D image was generated that literally surrounds the viewer. The image was printed and adhered to a semi-circular mount; the viewer then entered the interior to experience colored shapes suspended in a 3D space with an apparent loss of the surface, or picture plane, upon which the image is rendered. By focusing our awareness through perception-based imaging we are able to gain a deeper understanding of how the brain works and how we see.
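The ChromaDepth™ principle described above (long wavelengths refracted to appear near, short wavelengths far) can be sketched as a simple wavelength-to-depth mapping; the linear relation and the distances used are illustrative assumptions, not the optics' actual response.

```python
def apparent_depth(wavelength_nm, near_m=1.0, far_m=5.0):
    """Map a color's wavelength to an apparent viewing distance: long (red)
    wavelengths render near, short (blue) wavelengths far.  The linear
    mapping is an illustrative assumption only."""
    red, blue = 650.0, 450.0
    t = (red - wavelength_nm) / (red - blue)   # 0 at red, 1 at blue
    return near_m + t * (far_m - near_m)

# Red appears closest, green mid-scene, blue farthest.
d_red, d_green, d_blue = (apparent_depth(w) for w in (650.0, 550.0, 450.0))
```

This is why a ChromaDepth artist can paint depth directly with color, with no need to compute or capture two separate views.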
NASA Astrophysics Data System (ADS)
Tyczka, Dale R.; Wright, Robert; Janiszewski, Brian; Chatten, Martha Jane; Bowen, Thomas A.; Skibba, Brian
2012-06-01
Nearly all explosive ordnance disposal robots in use today employ monoscopic standard-definition video cameras to relay live imagery from the robot to the operator. With this approach, operators must rely on shadows and other monoscopic depth cues in order to judge distances and object depths. Alternatively, they can contact an object with the robot's manipulator to determine its position, but that approach carries with it the risk of detonation from unintentionally disturbing the target or nearby objects. We recently completed a study in which high-definition (HD) and stereoscopic video cameras were used in addition to conventional standard-definition (SD) cameras in order to determine if higher resolutions and/or stereoscopic depth cues improve operators' overall performance of various unmanned ground vehicle (UGV) tasks. We also studied the effect that the different vision modes had on operator comfort. A total of six different head-aimed vision modes were used including normal-separation HD stereo, SD stereo, "micro" (reduced separation) SD stereo, HD mono, and SD mono (two types). In general, the study results support the expectation that higher resolution and stereoscopic vision aid UGV teleoperation, but the degree of improvement was found to depend on the specific task being performed; certain tasks derived notably more benefit from improved depth perception than others. This effort was sponsored by the Joint Ground Robotics Enterprise under Robotics Technology Consortium Agreement #69-200902 T01. Technical management was provided by the U.S. Air Force Research Laboratory's Robotics Research and Development Group at Tyndall AFB, Florida.
Tagaste, Barbara; Riboldi, Marco; Spadea, Maria F; Bellante, Simone; Baroni, Guido; Cambria, Raffaella; Garibaldi, Cristina; Ciocca, Mario; Catalano, Gianpiero; Alterio, Daniela; Orecchia, Roberto
2012-04-01
To compare infrared (IR) optical vs. stereoscopic X-ray technologies for patient setup in image-guided stereotactic radiotherapy. Retrospective data analysis of 233 fractions in 127 patients treated with hypofractionated stereotactic radiotherapy was performed. Patient setup at the linear accelerator was carried out by means of combined IR optical localization and stereoscopic X-ray image fusion in 6 degrees of freedom (6D). Data were analyzed to evaluate the geometric and dosimetric discrepancy between the two patient setup strategies. Differences between IR optical localization and 6D X-ray image fusion parameters were on average within the expected localization accuracy, as limited by CT image resolution (3 mm). A disagreement between the two systems below 1 mm in all directions was measured in patients treated for cranial tumors. In extracranial sites, larger discrepancies and higher variability were observed as a function of the initial patient alignment. The compensation of IR-detected rotational errors resulted in a significantly improved agreement with 6D X-ray image fusion. On the basis of the bony anatomy registrations, the measured differences were found not to be sensitive to patient breathing. The related dosimetric analysis showed that IR-based patient setup caused limited variations in three cases, with 7% maximum dose reduction in the clinical target volume and no dose increase in organs at risk. In conclusion, patient setup driven by IR external surrogates localization in 6D featured comparable accuracy with respect to procedures based on stereoscopic X-ray imaging. Copyright © 2012 Elsevier Inc. All rights reserved.
Demonstration of the feasibility of automated silicon solar cell fabrication
NASA Technical Reports Server (NTRS)
Taylor, W. E.; Schwartz, F. M.
1975-01-01
A study effort was undertaken to determine the process steps and design requirements of an automated silicon solar cell production facility. The key process steps were identified, and a laboratory model was conceptually designed to demonstrate the feasibility of automating the silicon solar cell fabrication process. A detailed laboratory model was designed to demonstrate those functions most critical to the feasibility of automating solar cell fabrication. The study and conceptual design established the technical feasibility of automating the solar cell manufacturing process to produce low-cost solar cells with improved performance. Estimates predict an automated process throughput of 21,973 kilograms of silicon a year on a three-shift, 49-week basis, producing 4,747,000 hexagonal cells (38 mm/side), a total of 3,373 kilowatts, at an estimated manufacturing cost of $0.866 per cell or $1.22 per watt.
Automated Space Processing Payloads Study. Volume 1: Executive Summary
NASA Technical Reports Server (NTRS)
1975-01-01
An investigation is described which examined the extent to which the experiment hardware and operational requirements can be met by automatic control and material handling devices; payload and system concepts are defined which make extensive use of automation technology. Topics covered include experiment requirements and hardware data, capabilities and characteristics of industrial automation equipment and controls, payload grouping, automated payload conceptual design, space processing payload preliminary design, automated space processing payloads for early shuttle missions, and cost and scheduling.
Optical cross-talk and visual comfort of a stereoscopic display used in a real-time application
NASA Astrophysics Data System (ADS)
Pala, S.; Stevens, R.; Surman, P.
2007-02-01
Many 3D systems work by presenting to the observer stereoscopic pairs of images that are combined to give the impression of a 3D image. Discomfort experienced when viewing for extended periods may be due to several factors, including the presence of optical crosstalk between the stereo image channels. In this paper we use two video cameras and two LCD panels viewed via a Helmholtz arrangement of mirrors, to display a stereoscopic image inherently free of crosstalk. Simple depth discrimination tasks are performed whilst viewing the 3D image and controlled amounts of image crosstalk are introduced by electronically mixing the video signals. Error monitoring and skin conductance are used as measures of workload as well as traditional subjective questionnaires. We report qualitative measurements of user workload under a variety of viewing conditions. This pilot study revealed a decrease in task performance and increased workload as crosstalk was increased. The observations will assist in the design of further trials planned to be conducted in a medical environment.
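The controlled crosstalk introduced in the study above amounts to linearly mixing the two video channels. A minimal sketch of such mixing follows (the symmetric linear-leakage model and the parameter c are assumptions for illustration, not the study's exact signal path):

```python
import numpy as np

def mix_crosstalk(left, right, c):
    """Simulate inter-channel crosstalk in a stereo pair by leaking a
    fraction c of each eye's image into the other. c=0 gives crosstalk-free
    channels; c=0.5 fully merges them."""
    left = np.asarray(left, float)
    right = np.asarray(right, float)
    mixed_left = (1 - c) * left + c * right
    mixed_right = (1 - c) * right + c * left
    return mixed_left, mixed_right
```

Sweeping c while measuring task error and skin conductance, as the pilot study does, maps crosstalk level to workload.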
Employing WebGL to develop interactive stereoscopic 3D content for use in biomedical visualization
NASA Astrophysics Data System (ADS)
Johnston, Semay; Renambot, Luc; Sauter, Daniel
2013-03-01
Web Graphics Library (WebGL), the forthcoming web standard for rendering native 3D graphics in a browser, represents an important addition to the biomedical visualization toolset. It is projected to become a mainstream method of delivering 3D online content due to shrinking support for third-party plug-ins. Additionally, it provides a virtual reality (VR) experience to web users accommodated by the growing availability of stereoscopic displays (3D TV, desktop, and mobile). WebGL's value in biomedical visualization has been demonstrated by applications for interactive anatomical models, chemical and molecular visualization, and web-based volume rendering. However, a lack of instructional literature specific to the field prevents many from utilizing this technology. This project defines a WebGL design methodology for a target audience of biomedical artists with a basic understanding of web languages and 3D graphics. The methodology was informed by the development of an interactive web application depicting the anatomy and various pathologies of the human eye. The application supports several modes of stereoscopic displays for a better understanding of 3D anatomical structures.
Calculation of 3D Coordinates of a Point on the Basis of a Stereoscopic System
NASA Astrophysics Data System (ADS)
Mussabayev, R. R.; Kalimoldayev, M. N.; Amirgaliyev, Ye. N.; Tairova, A. T.; Mussabayev, T. R.
2018-05-01
The task of calculating the three-dimensional (3D) coordinates of a material point is considered. Two flat images (a stereopair) corresponding to the left and right viewpoints of a 3D scene are used for this purpose. The stereopair is obtained using two cameras with parallel optical axes. Analytical formulas for calculating the 3D coordinates of a material point in the scene were derived from an analysis of the optical and geometrical schemes of the stereoscopic system. The algorithmic and hardware realization of the method is presented in detail, and a practical procedure is recommended for determining the unknown parameters of the optical system. A series of experiments was conducted to verify the theoretical results. Minor inaccuracies observed during these experiments were caused by spatial distortions in the optical system and by its discreteness. With a high-quality stereoscopic system, the residual calculation error is small enough for the method to be applied to a wide range of practical tasks.
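For the parallel-axis geometry described, the standard pinhole result is that depth is inversely proportional to disparity, Z = f·B/d. A hedged sketch of that triangulation follows (the paper's own derivation and notation may differ; names and units here are illustrative):

```python
def triangulate(xl, yl, xr, f, b):
    """3D point from a rectified stereo pair with parallel optical axes,
    left camera at the origin. f: focal length in pixels; b: baseline in
    metres; (xl, yl): left-image coords; xr: right-image x coord on the
    same row. Returns (X, Y, Z) in metres."""
    d = xl - xr                  # disparity in pixels
    if d <= 0:
        raise ValueError("point at infinity or mismatched correspondence")
    z = f * b / d                # depth along the optical axis
    return (xl * z / f, yl * z / f, z)
```

The "discreteness" error the authors mention corresponds to the ±0.5 pixel quantization of d, which grows quadratically in depth error as Z increases.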
NASA Astrophysics Data System (ADS)
Yeh, Shih-Ching; Rizzo, Albert; Sawchuk, Alexander A.
2007-02-01
We have developed a novel VR task: the Dynamic Reaching Test, that measures human forearm movement in 3D space. In this task, three different stereoscopic displays: autostereoscopic (AS), shutter glasses (SG) and head mounted display (HMD), are used in tests in which subjects must catch a virtual ball thrown at them. Parameters such as percentage of successful catches, movement efficiency (subject path length compared to minimal path length), and reaction time are measured to evaluate differences in 3D perception among the three stereoscopic displays. The SG produces the highest percentage of successful catches, though the difference between the three displays is small, implying that users can perform the VR task with any of the displays. The SG and HMD produced the best movement efficiency, while the AS was slightly less efficient. Finally, the AS and HMD produced similar reaction times that were slightly higher (by 0.1 s) than the SG. We conclude that SG and HMD displays were the most effective, but only slightly better than the AS display.
Understanding human management of automation errors
McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.
2013-01-01
Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042
Report of the panel on the land surface: Process of change, section 5
NASA Technical Reports Server (NTRS)
Adams, John B.; Barron, Eric E.; Bloom, Arthur A.; Breed, Carol; Dohrenwend, J.; Evans, Diane L.; Farr, Thomas T.; Gillespie, Allan R.; Isaks, B. L.; Williams, Richard S.
1991-01-01
The panel defined three main areas of study that are central to the Solid Earth Science (SES) program: climate interactions with the Earth's surface, tectonism as it affects the Earth's surface and climate, and human activities that modify the Earth's surface. Four foci of research are envisioned: process studies with an emphasis on modern processes in transitional areas; integrated studies with an emphasis on long term continental climate change; climate-tectonic interactions; and studies of human activities that modify the Earth's surface, with an emphasis on soil degradation. The panel concluded that there is a clear requirement for global coverage by high resolution stereoscopic images and a pressing need for global topographic data in support of studies of the land surface.
Disparity modification in stereoscopic images for emotional enhancement
NASA Astrophysics Data System (ADS)
Kawai, Takashi; Atsuta, Daiki; Kim, Sanghyun; Häkkinen, Jukka
2015-03-01
This paper describes an experiment that focuses on disparity changes in emotional scenes of stereoscopic (3D) images, in which an examination of the effects on pleasant and arousal was carried out by adding binocular disparity to 2D images that evoke specific emotions, and applying disparity modification based on the disparity analysis of prominent 3D movies. From the results of the experiment, it was found that pleasant and arousal was increased by expanding 3D space to a certain level. In addition, pleasant gradually decreased and arousal gradually increased by expansion of 3D space above a certain level.
Device for diagnosis and treatment of impairments on binocular vision and stereopsis
NASA Astrophysics Data System (ADS)
Bahn, Jieun; Choi, Yong-Jin; Son, Jung-Young; Kodratiev, N. V.; Elkhov, Victor A.; Ovechkis, Yuri N.; Chung, Chan-sup
2001-06-01
Strabismus and amblyopia are two main impairments of our visual system, which are responsible for the loss of stereovision. A device is developed for diagnosis and treatment of strabismus and amblyopia, and for training and developing stereopsis. This device is composed of a liquid crystal glasses (LCG), electronics for driving LCG and synchronizing with an IBM PC, and a special software. The software contains specially designed patterns and graphics for enabling to train and develop stereopsis, and do objective measurement of some stereoscopic vision parameters such as horizontal and vertical phoria, fusion, fixation disparity, and stereoscopic visual threshold.
Stereoscopic display technologies for FHD 3D LCD TV
NASA Astrophysics Data System (ADS)
Kim, Dae-Sik; Ko, Young-Ji; Park, Sang-Moo; Jung, Jong-Hoon; Shestak, Sergey
2010-04-01
Stereoscopic displays have been developed as one of the advanced display technologies, and many TV manufacturers have been pursuing commercialization of 3D TV. We have been developing 3D TV based on LCD with an LED backlight unit (BLU) since Samsung launched the world's first 3D TV based on PDP. However, the panel's data scanning and the liquid crystal's response characteristics cause interference between frames (i.e., crosstalk), which degrades 3D video quality. We propose a method to reduce crosstalk through LCD driving and backlight control in a FHD 3D LCD TV.
Bowman, Wesley A; Robar, James L; Sattarivand, Mike
2017-03-01
Stereoscopic x-ray image guided radiotherapy for lung tumors is often hindered by bone overlap and limited soft-tissue contrast. This study aims to evaluate the feasibility of dual-energy imaging techniques and to optimize parameters of the ExacTrac stereoscopic imaging system to enhance soft-tissue imaging for application to lung stereotactic body radiation therapy. Simulated spectra and a physical lung phantom were used to optimize filter material, thickness, tube potentials, and weighting factors to obtain bone subtracted dual-energy images. Spektr simulations were used to identify material in the atomic number range (3-83) based on a metric defined to separate spectra of high and low-energies. Both energies used the same filter due to time constraints of imaging in the presence of respiratory motion. The lung phantom contained bone, soft tissue, and tumor mimicking materials, and it was imaged with a filter thickness in the range of (0-0.7) mm and a kVp range of (60-80) for low energy and (120,140) for high energy. Optimal dual-energy weighting factors were obtained when the bone to soft-tissue contrast-to-noise ratio (CNR) was minimized. Optimal filter thickness and tube potential were achieved by maximizing tumor-to-background CNR. Using the optimized parameters, dual-energy images of an anthropomorphic Rando phantom with a spherical tumor mimicking material inserted in his lung were acquired and evaluated for bone subtraction and tumor contrast. Imaging dose was measured using the dual-energy technique with and without beam filtration and matched to that of a clinical conventional single energy technique. Tin was the material of choice for beam filtering providing the best energy separation, non-toxicity, and non-reactiveness. The best soft-tissue-weighted image in the lung phantom was obtained using 0.2 mm tin and (140, 60) kVp pair. 
Dual-energy images of the Rando phantom with the tin filter had noticeable improvement in bone elimination, tumor contrast, and noise content when compared to dual-energy imaging with no filtration. The surface dose was 0.52 mGy per each stereoscopic view for both clinical single energy technique and the dual-energy technique in both cases of with and without the tin filter. Dual-energy soft-tissue imaging is feasible without additional imaging dose using the ExacTrac stereoscopic imaging system with optimized acquisition parameters and no beam filtration. Addition of a single tin filter for both the high and low energies has noticeable improvements on dual-energy imaging with optimized parameters. Clinical implementation of a dual-energy technique on ExacTrac stereoscopic imaging could improve lung tumor visibility. © 2017 American Association of Physicists in Medicine.
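Dual-energy bone subtraction of the kind optimized above is commonly implemented as a weighted log subtraction of the high- and low-energy images, with the weight w tuned until bone contrast vanishes (the study does this by minimizing the bone to soft-tissue CNR). A generic sketch, not the ExacTrac implementation:

```python
import numpy as np

def dual_energy_soft_tissue(high_kv, low_kv, w):
    """Weighted log subtraction: ln(I_high) - w * ln(I_low). For the right
    w (the ratio of bone attenuation coefficients at the two energies),
    bone contrast cancels while soft-tissue contrast survives."""
    hi = np.log(np.asarray(high_kv, float))
    lo = np.log(np.asarray(low_kv, float))
    return hi - w * lo
```

With simulated transmissions I = exp(-mu*t), a pixel with and without bone produces identical subtracted values when w equals the bone mu_high/mu_low ratio, which is the cancellation condition the optimization searches for.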
Real-time skin feature identification in a time-sequential video stream
NASA Astrophysics Data System (ADS)
Kramberger, Iztok
2005-04-01
Skin color can be an important feature when tracking skin-colored objects. Particularly this is the case for computer-vision-based human-computer interfaces (HCI). Humans have a highly developed feeling of space and, therefore, it is reasonable to support this within intelligent HCI, where the importance of augmented reality can be foreseen. Joining human-like interaction techniques within multimodal HCI could, or will, become a feature of modern mobile telecommunication devices. On the other hand, real-time processing plays an important role in achieving more natural and physically intuitive ways of human-machine interaction. The main scope of this work is the development of a stereoscopic computer-vision hardware-accelerated framework for real-time skin feature identification in the sense of a single-pass image segmentation process. The hardware-accelerated preprocessing stage is presented with the purpose of color and spatial filtering, where the skin color model within the hue-saturation-value (HSV) color space is given with a polyhedron of threshold values representing the basis of the filter model. An adaptive filter management unit is suggested to achieve better segmentation results. This enables the adoption of filter parameters to the current scene conditions in an adaptive way. Implementation of the suggested hardware structure is given at the level of field-programmable system-level integrated circuit (FPSLIC) devices using an embedded microcontroller as their main feature. A stereoscopic clue is achieved using a time-sequential video stream, but this shows no difference for real-time processing requirements in terms of hardware complexity. The experimental results for the hardware-accelerated preprocessing stage are given by efficiency estimation of the presented hardware structure using a simple motion-detection algorithm based on a binary function.
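The filter described above bounds the skin locus with a polyhedron of thresholds in HSV space. A simplified software sketch using an axis-aligned box instead of a full polyhedron follows (all threshold values are illustrative assumptions, not the paper's calibrated filter model):

```python
import colorsys

def is_skin(r, g, b, h_range=(0.0, 0.14), s_range=(0.2, 0.7), v_min=0.35):
    """Classify an 8-bit RGB pixel as skin via per-channel thresholds in
    HSV space. A box is the crudest polyhedron; a hardware filter would
    test half-plane inequalities per pixel in a single pass."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return (h_range[0] <= h <= h_range[1]
            and s_range[0] <= s <= s_range[1]
            and v >= v_min)
```

Each threshold maps to one comparator in hardware, which is why a polyhedral model suits single-pass FPGA-style segmentation.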
Automation of Space Processing Applications Shuttle payloads
NASA Technical Reports Server (NTRS)
Crosmer, W. E.; Neau, O. T.; Poe, J.
1975-01-01
The Space Processing Applications Program is examining the effect of weightlessness on key industrial materials processes, such as crystal growth, fine-grain casting of metals, and production of unique and ultra-pure glasses. Because of safety and in order to obtain optimum performance, some of these processes lend themselves to automation. Automation can increase the number of potential Space Shuttle flight opportunities and increase the overall productivity of the program. Five automated facility design concepts and overall payload combinations incorporating these facilities are presented.
Development of a plan for automating integrated circuit processing
NASA Technical Reports Server (NTRS)
1971-01-01
The operations analysis and equipment evaluations pertinent to the design of an automated production facility capable of manufacturing beam-lead CMOS integrated circuits are reported. The overall plan shows approximate cost of major equipment, production rate and performance capability, flexibility, and special maintenance requirements. Direct computer control is compared with supervisory-mode operations. The plan is limited to wafer processing operations from the starting wafer to the finished beam-lead die after separation etching. The work already accomplished in implementing various automation schemes, and the type of equipment which can be found for instant automation are described. The plan is general, so that small shops or large production units can perhaps benefit. Examples of major types of automated processing machines are shown to illustrate the general concepts of automated wafer processing.
The Hyperspectral Imager for the Coastal Ocean (HICO): Sensor and Data Processing Overview
2010-01-20
...backscattering coefficients, and others. Several of these software modules will be developed within the Automated Processing System (APS), a data... NRL developed APS, which processes satellite data into ocean color data products. APS is a collection of methods... used for ocean color processing which provide the tools for the automated processing of satellite imagery [1]. These tools are in the process of...
10 CFR 1017.28 - Processing on Automated Information Systems (AIS).
Code of Federal Regulations, 2010 CFR
2010-01-01
10 CFR § 1017.28 (Title 10, Energy, Vol. 4, 2010-01-01), Unclassified Controlled Nuclear Information, Physical Protection Requirements: Processing on Automated Information Systems (AIS). UCNI may be processed or produced on any AIS that complies with the guidance in OMB...
2018-01-01
ARL-TR-8270 ● JAN 2018, US Army Research Laboratory: An Automated Energy Detection Algorithm Based on Morphological Filter Processing with a Modified Watershed Transform, by Kwok F Tom, Sensors and Electron... Reporting period: 1 October 2016–30 September 2017.
Digital image transformation and rectification of spacecraft and radar images
NASA Technical Reports Server (NTRS)
Wu, S. S. C.
1985-01-01
The application of digital processing techniques to spacecraft television pictures and radar images is discussed. The use of digital rectification to produce contour maps from spacecraft pictures is described; images with azimuth and elevation angles are converted into point-perspective frame pictures. The digital correction of the slant angle of radar images to ground scale is examined. The development of orthophoto and stereoscopic shaded relief maps from digital terrain and digital image data is analyzed. Digital image transformations and rectifications are utilized on Viking Orbiter and Lander pictures of Mars.
Correction And Use Of Jitter In Television Images
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Fender, Derek H.; Fender, Antony R. H.
1989-01-01
Proposed system stabilizes jittering television image and/or measures jitter to extract information on motions of objects in image. Alternative version, system controls lateral motion on camera to generate stereoscopic views to measure distances to objects. In another version, motion of camera controlled to keep object in view. Heart of system is digital image-data processor called "jitter-miser", which includes frame buffer and logic circuits to correct for jitter in image. Signals from motion sensors on camera sent to logic circuits and processed into corrections for motion along and across line of sight.
Macro-carriers of plastic deformation of steel surface layers detected by digital image correlation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kopanitsa, D. G., E-mail: kopanitsa@mail.ru; Ustinov, A. M., E-mail: artemustinov@mail.ru; Potekaev, A. I., E-mail: potekaev@spti.tsu.ru
2016-01-15
This paper presents a study of the evolution of deformation fields in the surface layers of medium-carbon low-alloy steel specimens under compression. The experiments were performed on the "Universal Testing Machine 4500" using the Vic-3D digital stereoscopic image processing system. A transition between stages is reflected as deformation redistribution in the near-surface layers. Electron microscopy shows that the structure of the steel is a mixture of pearlite and ferrite grains, in proportions of 40% pearlite and 60% ferrite.
Semi-Automated Processing of Trajectory Simulator Output Files for Model Evaluation
2018-01-01
ARL-TR-8284 ● JAN 2018, US Army Research Laboratory: Semi-Automated Processing of Trajectory Simulator Output Files for Model Evaluation.
An Intelligent Automation Platform for Rapid Bioprocess Design.
Wu, Tianyi; Zhou, Yuhong
2014-08-01
Bioprocess development is very labor intensive, requiring many experiments to characterize each unit operation in the process sequence to achieve product safety and process efficiency. Recent advances in microscale biochemical engineering have led to automated experimentation. A process design workflow is implemented sequentially in which (1) a liquid-handling system performs high-throughput wet lab experiments, (2) standalone analysis devices detect the data, and (3) specific software is used for data analysis and experiment design given the user's inputs. We report an intelligent automation platform that integrates these three activities to enhance the efficiency of such a workflow. A multiagent intelligent architecture has been developed incorporating agent communication to perform the tasks automatically. The key contribution of this work is the automation of data analysis and experiment design and also the ability to generate scripts to run the experiments automatically, allowing the elimination of human involvement. A first-generation prototype has been established and demonstrated through lysozyme precipitation process design. All procedures in the case study have been fully automated through an intelligent automation platform. The realization of automated data analysis and experiment design, and automated script programming for experimental procedures has the potential to increase lab productivity. © 2013 Society for Laboratory Automation and Screening.
Effect of image scaling on stereoscopic movie experience
NASA Astrophysics Data System (ADS)
Häkkinen, Jukka P.; Hakala, Jussi; Hannuksela, Miska; Oittinen, Pirkko
2011-03-01
Camera separation affects the perceived depth in stereoscopic movies. Through control of the separation and thereby the depth magnitudes, the movie can be kept comfortable but interesting. In addition, the viewing context has a significant effect on the perceived depth, as a larger display and longer viewing distances also contribute to an increase in depth. Thus, if the content is to be viewed in multiple viewing contexts, the depth magnitudes should be carefully planned so that the content always looks acceptable. Alternatively, the content can be modified for each viewing situation. To identify the significance of changes due to the viewing context, we studied the effect of stereoscopic camera base distance on the viewer experience in three different situations: 1) small sized video and a viewing distance of 38 cm, 2) television and a viewing distance of 158 cm, and 3) cinema and a viewing distance of 6-19 meters. We examined three different animations with positive parallax. The results showed that the camera distance had a significant effect on the viewing experience in small display/short viewing distance situations, in which the experience ratings increased until the maximum disparity in the scene was 0.34 - 0.45 degrees of visual angle. After 0.45 degrees, increasing the depth magnitude did not affect the experienced quality ratings. Interestingly, changes in the camera distance did not affect the experience ratings in the case of television or cinema if the depth magnitudes were below one degree of visual angle. When the depth was greater than one degree, the experience ratings began to drop significantly. These results indicate that depth magnitudes have a larger effect on the viewing experience with a small display. When a stereoscopic movie is viewed from a larger display, other experiences might override the effect of depth magnitudes.
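The disparity magnitudes above are expressed in degrees of visual angle, which ties on-screen parallax to viewing distance via theta = 2*atan(p / 2D). A small helper illustrating the conversion (parameter names and millimetre units are assumptions for illustration):

```python
import math

def disparity_deg(parallax_mm, viewing_distance_mm):
    """Angular disparity in degrees of visual angle subtended by screen
    parallax p at viewing distance D: theta = 2 * atan(p / (2 * D)).
    The same physical parallax yields a smaller angle at a longer
    viewing distance, which is why depth budgets must be re-planned
    per viewing context."""
    return math.degrees(2 * math.atan(parallax_mm / (2 * viewing_distance_mm)))
```

For example, about 3 mm of parallax at the study's 38 cm handheld distance already reaches the ~0.45 degree level at which the experience ratings plateaued.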
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davies, J. A.; Perry, C. H.; Harrison, R. A.
2013-11-10
The twin-spacecraft STEREO mission has enabled simultaneous white-light imaging of the solar corona and inner heliosphere from multiple vantage points. This has led to the development of numerous stereoscopic techniques to investigate the three-dimensional structure and kinematics of solar wind transients such as coronal mass ejections (CMEs). Two such methods—triangulation and the tangent to a sphere—can be used to determine time profiles of the propagation direction and radial distance (and thereby radial speed) of a solar wind transient as it travels through the inner heliosphere, based on its time-elongation profile viewed by two observers. These techniques are founded on the assumption that the transient can be characterized as a point source (fixed φ, FP, approximation) or a circle attached to Sun-center (harmonic mean, HM, approximation), respectively. These geometries constitute extreme descriptions of solar wind transients, in terms of their cross-sectional extent. Here, we present the stereoscopic expressions necessary to derive propagation direction and radial distance/speed profiles of such transients based on the more generalized self-similar expansion (SSE) geometry, for which the FP and HM geometries form the limiting cases; our implementation of these equations is termed the stereoscopic SSE method. We apply the technique to two Earth-directed CMEs from different phases of the STEREO mission, the well-studied event of 2008 December and a more recent event from 2012 March. The latter CME was fast, with an initial speed exceeding 2000 km s{sup –1}, and highly geoeffective, in stark contrast to the slow and ineffectual 2008 December CME.
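As a concrete illustration of the fixed-phi (FP) limiting case mentioned above: a point source at elongation ε seen by an observer at heliocentric distance d, propagating at angle φ from the observer-Sun line, lies at r = d·sin(ε)/sin(ε + φ), and two observers' measurements can be triangulated by finding the φ at which both distances agree. A hedged numerical sketch (the paper's generalized SSE expressions are not reproduced here; the bisection solver and its argument conventions are assumptions):

```python
import math

def fp_radius(d_obs, elong, phi):
    """Fixed-phi radial distance of a transient at elongation `elong`
    (radians) seen by an observer at heliocentric distance d_obs, for
    propagation angle `phi` from the observer-Sun line."""
    return d_obs * math.sin(elong) / math.sin(elong + phi)

def fp_triangulate(d_a, e_a, d_b, e_b, sep, tol=1e-10):
    """Stereoscopic FP triangulation: bisect for the propagation angle
    phi (measured from observer A's Sun line, toward B) at which both
    observers' FP distances agree. `sep` is the A-B separation angle."""
    f = lambda phi: fp_radius(d_a, e_a, phi) - fp_radius(d_b, e_b, sep - phi)
    lo, hi = 1e-6, sep - 1e-6
    if f(lo) * f(hi) > 0:
        raise ValueError("no crossing in (0, sep)")
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(lo) * f(mid) > 0 else (lo, mid)
    phi = 0.5 * (lo + hi)
    return phi, fp_radius(d_a, e_a, phi)
```

Applying this at each time step of a two-observer time-elongation profile yields the direction and radial-distance time series that the stereoscopic methods produce.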
Volonté, Francesco; Buchs, Nicolas C; Pugin, François; Spaltenstein, Joël; Schiltz, Boris; Jung, Minoa; Hagen, Monika; Ratib, Osman; Morel, Philippe
2013-09-01
Computerized management of medical information and 3D imaging has become the norm in everyday medical practice. Surgeons exploit these emerging technologies and bring information previously confined to the radiology rooms into the operating theatre. The paper reports the authors' experience with integrated stereoscopic 3D-rendered images in the da Vinci surgeon console. Volume-rendered images were obtained from a standard computed tomography dataset using the OsiriX DICOM workstation. A custom OsiriX plugin was created that permitted the 3D-rendered images to be displayed in the da Vinci surgeon console and to appear stereoscopic. These rendered images were displayed in the robotic console using the TilePro multi-input display. The upper part of the screen shows the real endoscopic surgical field and the bottom shows the stereoscopic 3D-rendered images. These are controlled by a 3D joystick installed on the console and are updated in real time. Five patients underwent a robotic augmented reality-enhanced procedure. The surgeon was able to switch between the classical endoscopic view and a combined virtual view during the procedure. Subjectively, the addition of the rendered images was considered an undeniable help during the dissection phase. With the rapid evolution of robotics, computer-aided surgery is receiving increasing interest. This paper details the authors' experience with 3D-rendered images projected inside the surgical console. The use of this intra-operative mixed reality technology is considered very useful by the surgeon, and represents a step toward computer-aided surgery, a field likely to progress very quickly over the next few years. Copyright © 2012 John Wiley & Sons, Ltd.
Automated Student Aid Processing: The Challenge and Opportunity.
ERIC Educational Resources Information Center
St. John, Edward P.
1985-01-01
To utilize automated technology for student aid processing, it is necessary to work with multi-institutional offices (student aid, admissions, registration, and business) and to develop automated interfaces with external processing systems at state and federal agencies and perhaps at need-analysis organizations and lenders. (MLW)
Volume Attenuation and High Frequency Loss as Auditory Depth Cues in Stereoscopic 3D Cinema
NASA Astrophysics Data System (ADS)
Manolas, Christos; Pauletto, Sandra
2014-09-01
Assisted by the technological advances of the past decades, stereoscopic 3D (S3D) cinema is currently in the process of being established as a mainstream form of entertainment. The main focus of this collaborative effort is placed on the creation of immersive S3D visuals. However, with few exceptions, little attention has been given so far to the potential effect of the soundtrack on such environments. The potential of sound both as a means to enhance the impact of the S3D visual information and to expand the S3D cinematic world beyond the boundaries of the visuals is large. This article reports on our research into the possibilities of using auditory depth cues within the soundtrack as a means of affecting the perception of depth within cinematic S3D scenes. We study two main distance-related auditory cues: high-end frequency loss and overall volume attenuation. A series of experiments explored the effectiveness of these auditory cues. Results, although not conclusive, indicate that the studied auditory cues can influence the audience judgement of depth in cinematic 3D scenes, sometimes in unexpected ways. We conclude that 3D filmmaking can benefit from further studies on the effectiveness of specific sound design techniques to enhance S3D cinema.
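The two cues studied can be sketched in code: overall volume attenuation (an inverse-distance gain) and high-end frequency loss (here a one-pole low-pass whose cutoff falls with distance, standing in for air absorption). The distance-to-cutoff mapping and all constants are illustrative assumptions, not values from the article:

```python
import math

def distance_cues(samples, fs, distance_m, ref_m=1.0):
    """Apply two simple auditory distance cues to a mono signal:
    inverse-distance volume attenuation (~ -6 dB per doubling) and a
    one-pole low-pass filter standing in for high-frequency loss.
    The distance-to-cutoff mapping is an assumed, illustrative choice."""
    gain = ref_m / max(distance_m, ref_m)          # volume attenuation cue
    cutoff = 16000.0 / (1.0 + distance_m / 10.0)   # farther -> duller (assumed)
    alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff / fs)
    out, y = [], 0.0
    for x in samples:
        y += alpha * (gain * x - y)                # one-pole IIR low-pass
        out.append(y)
    return out
```

Rendering the same source at two distances and asking listeners which sounds farther is essentially the judgment task the experiments probe.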
NASA Astrophysics Data System (ADS)
Tsao, Thomas R.; Tsao, Doris
1997-04-01
In the 1980s, neurobiologists suggested a simple mechanism in primate visual cortex for maintaining a stable and invariant representation of a moving object. The receptive fields of visual neurons undergo real-time transforms in response to motion, to maintain a stable representation. When the visual stimulus changes due to motion, the geometric transform of the stimulus triggers a dual transform of the receptive field. This dual transform of the receptive fields compensates for geometric variation in the stimulus. This process can be modelled using a Lie group method. The massive array of affine-parameter-sensing circuits functions as a smart sensor tightly coupled to the passive imaging sensor (retina). The neural geometric engine is a neuromorphic computing device simulating our Lie group model of spatial perception in the primate primary visual cortex. We have developed a computer simulation, experimented on realistic and synthetic image data, and performed preliminary research into using analog VLSI technology to implement the neural geometric engine. We have benchmark-tested the engine on DMA terrain data against their results and have built an analog integrated circuit to verify the computational structure of the engine. When fully implemented on an analog VLSI chip, we will be able to accurately reconstruct a 3D terrain surface in real time from stereoscopic imagery.
NASA Technical Reports Server (NTRS)
Anderson, Kinsey A.
1991-01-01
The objective of this grant was to measure the spatial structure and directivity of the hard X-ray and low energy gamma-ray (100 keV-2 MeV) continuum sources in solar flares using stereoscopic observations made with spectrometers aboard the Pioneer Venus Orbiter (PVO) and Third International Sun Earth Explorer (ISEE-3) spacecraft. Since the hard X-ray emission is produced by energetic electrons through the bremsstrahlung process, the observed directivity can be directly related to the 'beaming' of electrons accelerated during the flare as they propagate from the acceleration region in the corona to the chromosphere/transition region. Some models (e.g., the thick-target model) predict that most of the impulsive hard X-ray/low energy gamma-ray source is located in the chromosphere, the effective height of the X-ray source above the photosphere increasing with the decrease in the photon energy. This can be verified by determining the height-dependence of the photon source through stereoscopic observations of those flares which are partially occulted from the view of one of the two spacecraft. Thus predictions about beaming of electrons as well as their spatial distributions could be tested through the analysis proposed under this grant.
Saliency Detection of Stereoscopic 3D Images with Application to Visual Discomfort Prediction
NASA Astrophysics Data System (ADS)
Li, Hong; Luo, Ting; Xu, Haiyong
2017-06-01
Visual saliency detection is potentially useful for a wide range of applications in image processing and computer vision fields. This paper proposes a novel bottom-up saliency detection approach for stereoscopic 3D (S3D) images based on regional covariance matrix. As for S3D saliency detection, besides the traditional 2D low-level visual features, additional 3D depth features should also be considered. However, only limited efforts have been made to investigate how different features (e.g. 2D and 3D features) contribute to the overall saliency of S3D images. The main contribution of this paper is that we introduce a nonlinear feature integration descriptor, i.e., regional covariance matrix, to fuse both 2D and 3D features for S3D saliency detection. The regional covariance matrix is shown to be effective for nonlinear feature integration by modelling the inter-correlation of different feature dimensions. Experimental results demonstrate that the proposed approach outperforms several existing relevant models including 2D extended and pure 3D saliency models. In addition, we also experimentally verified that the proposed S3D saliency map can significantly improve the prediction accuracy of experienced visual discomfort when viewing S3D images.
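The core descriptor is easy to state: for each region, stack one feature vector per pixel and take the covariance of the stack, so the descriptor captures the inter-correlations between feature dimensions. A minimal sketch (the feature set is a placeholder; the paper's specific 2D and 3D features are not reproduced here):

```python
import numpy as np

def region_covariance(features):
    """d x d covariance descriptor of an image region.

    features : (n_pixels, d) array with one d-dimensional feature vector
    per pixel.  The feature set is a placeholder for whatever 2D features
    (e.g. colour, orientation) and 3D features (e.g. depth) are extracted;
    the descriptor models the inter-correlation of the feature dimensions.
    """
    f = np.asarray(features, dtype=float)
    centered = f - f.mean(axis=0)
    return centered.T @ centered / (f.shape[0] - 1)
```

Region descriptors built this way are then compared between a region and its surroundings (covariance matrices are typically compared with an affine-invariant or log-Euclidean metric), so regions whose joint feature statistics stand out score as salient.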
NASA Astrophysics Data System (ADS)
Ogawa, Masahiko; Shidoji, Kazunori
2011-03-01
High-resolution stereoscopic images are effective for use in virtual reality and teleoperation systems. However, the higher the image resolution, the higher the cost of computer processing and communication. To reduce this cost, numerous earlier studies have suggested the use of multi-resolution images, which have high resolution in regions of interest and low resolution elsewhere. However, observers can perceive unpleasant sensations and incorrect depth because low-resolution areas remain visible in their field of vision. In this study, we conducted an experiment to investigate the relationship between the viewing field and the perception of image resolution, and determined thresholds of image-resolution perception for various positions in the viewing field. The results showed that participants could not distinguish between the high-resolution stimulus and a reduced stimulus of 63 ppi at positions more than 8 deg from the gaze point. Moreover, at positions 11 and 13 deg from the gaze point, participants could not distinguish between the high-resolution stimulus and reduced stimuli whose resolution densities were 42 and 25 ppi, respectively. Hence, we propose a composition of multi-resolution images in which observers do not perceive unpleasant sensations or incorrect depth, while still achieving data reduction (compression).
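The reported thresholds suggest a simple eccentricity-to-resolution schedule for composing such multi-resolution images. A sketch directly encoding the three thresholds from the abstract (the assumption that full resolution is required inside 8 deg is mine):

```python
def min_resolution_ppi(eccentricity_deg: float) -> float:
    """Lowest pixel density reported as indistinguishable from the
    high-resolution stimulus at a given angular offset from the gaze
    point, per the thresholds in the abstract.  Inside 8 deg, full
    resolution is assumed to be required (returned as infinity)."""
    if eccentricity_deg >= 13.0:
        return 25.0
    if eccentricity_deg >= 11.0:
        return 42.0
    if eccentricity_deg >= 8.0:
        return 63.0
    return float("inf")   # keep full resolution near the gaze point
```

A gaze-contingent renderer could evaluate this per tile to decide how aggressively each part of the stereo pair may be down-sampled without the viewer noticing.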
NASA Astrophysics Data System (ADS)
Noh, Myoung-Jong; Howat, Ian M.
2018-02-01
The quality and efficiency of automated Digital Elevation Model (DEM) extraction from stereoscopic satellite imagery is critically dependent on the accuracy of the sensor model used for co-locating pixels between stereo-pair images. In the absence of ground control or manual tie point selection, errors in the sensor models must be compensated with increased matching search-spaces, increasing both the computation time and the likelihood of spurious matches. Here we present an algorithm for automatically determining and compensating the relative bias in Rational Polynomial Coefficients (RPCs) between stereo-pairs utilizing hierarchical, sub-pixel image matching in object space. We demonstrate the algorithm using a suite of image stereo-pairs from multiple satellites over a range of stereo-photogrammetrically challenging polar terrains. Besides providing a validation of the effectiveness of the algorithm for improving DEM quality, experiments with prescribed sensor model errors yield insight into the dependence of DEM characteristics and quality on relative sensor model bias. This algorithm is included in the Surface Extraction through TIN-based Search-space Minimization (SETSM) DEM extraction software package, which is the primary software used for the U.S. National Science Foundation ArcticDEM and Reference Elevation Model of Antarctica (REMA) products.
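SETSM's algorithm works hierarchically in object space; purely as a simplified illustration of what "relative RPC bias" means, the zeroth-order image-space version estimates a single constant (line, sample) offset between RPC-projected and matched tie-point coordinates. This generic sketch is not the SETSM algorithm itself:

```python
import numpy as np

def estimate_rpc_bias(predicted, matched):
    """Estimate a constant image-space bias (delta_line, delta_sample)
    between RPC-projected and matched tie-point coordinates -- the
    simplest (zeroth-order) form of relative RPC bias compensation.

    predicted, matched : (n, 2) arrays of (line, sample) coordinates.
    Returns the bias and the RMSE of the residuals after compensation;
    the residual magnitude is what an uncompensated matcher would have
    to absorb with a wider search-space.
    """
    predicted = np.asarray(predicted, dtype=float)
    matched = np.asarray(matched, dtype=float)
    bias = (matched - predicted).mean(axis=0)
    residual = matched - (predicted + bias)
    rmse = float(np.sqrt((residual ** 2).mean()))
    return bias, rmse
```

Shrinking this residual is exactly what lets the matching search-space, and with it both run time and the rate of spurious matches, be reduced.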
NASA Astrophysics Data System (ADS)
Chan-Amaya, Alejandro; Anaya-Pérez, María Elena; Benítez-Baltazar, Víctor Hugo
2017-08-01
Companies are constantly looking for improvements in productivity to increase their competitiveness. The use of automation technologies is a tool that has proven effective in achieving this. Some companies are not familiar with the process of acquiring automation technologies; therefore, they abstain from investing and thereby miss the opportunity to take advantage of it. The present document proposes a methodology for determining the level of automation appropriate for the production process, thereby minimizing unnecessary automation and improving production while taking the ergonomics factor into consideration.
CFD Process Pre- and Post-processing Automation in Support of Space Propulsion
NASA Technical Reports Server (NTRS)
Dorney, Suzanne M.
2003-01-01
The use of Computational Fluid Dynamics (CFD) has become standard practice in the design and analysis of the major components used for space propulsion. In an attempt to standardize and improve the CFD process, a series of automated tools has been developed. Through the use of these automated tools, the application of CFD to the design cycle has been improved and streamlined. This paper presents a series of applications in which deficiencies were identified in the CFD process and corrected through the development of automated tools.
Comparability of automated human induced pluripotent stem cell culture: a pilot study.
Archibald, Peter R T; Chandra, Amit; Thomas, Dave; Chose, Olivier; Massouridès, Emmanuelle; Laâbi, Yacine; Williams, David J
2016-12-01
Consistent and robust manufacturing is essential for the translation of cell therapies, and the utilisation of automation throughout the manufacturing process may allow for improvements in quality control, scalability, reproducibility and economics of the process. The aim of this study was to measure and establish the comparability between alternative process steps for the culture of hiPSCs. Consequently, the effects of manual centrifugation and automated non-centrifugation process steps, performed using TAP Biosystems' CompacT SelecT automated cell culture platform, upon the culture of a human induced pluripotent stem cell (hiPSC) line (VAX001024c07) were compared. This study has demonstrated that comparable morphologies and cell diameters were observed in hiPSCs cultured using either manual or automated process steps. However, non-centrifugation hiPSC populations exhibited greater cell yields, greater aggregate rates, increased pluripotency marker expression, and decreased differentiation marker expression compared to centrifugation hiPSCs. A trend for decreased variability in cell yield was also observed after the utilisation of the automated process step. This study also highlights the detrimental effect of the cryopreservation and thawing processes upon the growth and characteristics of hiPSC cultures, and demonstrates that automated hiPSC manufacturing protocols can be successfully transferred between independent laboratories.
The automated system for technological process of spacecraft's waveguide paths soldering
NASA Astrophysics Data System (ADS)
Tynchenko, V. S.; Murygin, A. V.; Emilova, O. A.; Bocharov, A. N.; Laptenok, V. D.
2016-11-01
The paper addresses the automated process control of the soldering of spacecraft waveguide paths by means of induction heating. The peculiarities of the induction soldering process are analyzed, and the necessity of automating the information-control system is identified. The developed automated system controls the product heating process by varying the power supplied to the inductor on the basis of information about the soldering-zone temperature, stabilizing the temperature in a narrow range above the melting point of the solder but below the melting point of the waveguide. Automating the soldering process in this way improves the quality of the waveguides and eliminates burn-throughs. The article shows a block diagram of the software system, which consists of five modules, and describes the main algorithm of its work. It also describes the operation of the automated waveguide-path soldering system, explaining the basic functions and limitations of the system. The developed software allows configuring the measurement equipment, setting and changing the parameters of the soldering process, and viewing graphs of the temperatures recorded by the system. Results of experimental studies are presented that demonstrate high-quality control of the soldering process and the system's applicability to automation tasks.
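The control objective described (hold the joint above the solder's melting point but below the waveguide's) can be illustrated with a toy control law. All temperatures, power levels, and the taper shape below are invented for the sketch, not taken from the system in the article:

```python
def inductor_power(temp_c, solder_melt_c=650.0, waveguide_melt_c=700.0,
                   p_heat=1.0, p_hold=0.4):
    """Toy band-keeping control law for induction soldering: drive the
    joint temperature above the solder's melting point while staying
    below the waveguide material's.  All numeric values are assumed,
    illustrative placeholders.  Returns the commanded power fraction."""
    if temp_c >= waveguide_melt_c - 10.0:   # hard guard near waveguide damage
        return 0.0
    if temp_c < solder_melt_c:              # full power until the solder melts
        return p_heat
    # proportional taper toward the middle of the safe band
    target = 0.5 * (solder_melt_c + waveguide_melt_c)
    span = waveguide_melt_c - solder_melt_c
    return max(0.0, min(p_heat, p_hold + (target - temp_c) / span))
```

A real controller of this kind would also handle sensor filtering and actuator lag, but the band-keeping logic is the essence of what stabilizes the temperature and prevents burn-throughs.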
[Algorithm for the automated processing of rheosignals].
Odinets, G S
1988-01-01
An algorithm for rheosignal recognition was examined for a microprocessor device with a display apparatus and both automated and manual cursor control. The algorithm permits automated registration and processing of rheosignals while taking their variability into account.
Flexible End2End Workflow Automation of Hit-Discovery Research.
Holzmüller-Laue, Silke; Göde, Bernd; Thurow, Kerstin
2014-08-01
The article considers a new approach to more complex laboratory automation at the workflow layer. The authors propose the automation of end2end workflows. Combining all relevant subprocesses, whether automated or manually performed, and independently of the organizational unit in which they occur, results in end2end processes that include all result dependencies. The end2end approach focuses not only on the classical experiments in synthesis or screening, but also on auxiliary processes such as the production and storage of chemicals, cell culturing, and maintenance, as well as preparatory activities and analyses of experiments. Furthermore, connecting control flow and data flow in the same process model reduces the effort of data transfer between the involved systems, including the necessary data transformations. This end2end laboratory automation can be realized effectively with the modern methods of business process management (BPM). This approach is based on a new standardization of the process-modeling notation, Business Process Model and Notation 2.0. In drug discovery, several scientific disciplines act together with manifold modern methods, technologies, and a wide range of automated instruments for the discovery and design of target-based drugs. The article discusses the novel BPM-based automation concept with an implemented example of a high-throughput screening of previously synthesized compound libraries. © 2014 Society for Laboratory Automation and Screening.
The mosaics of Mars: As seen by the Viking Lander cameras
NASA Technical Reports Server (NTRS)
Levinthal, E. C.; Jones, K. L.
1980-01-01
The mosaics and derivative products produced from many individual high-resolution images acquired by the Viking Lander Camera Systems are described: a morning and afternoon mosaic for both cameras at the Lander 1 Chryse Planitia site, and a morning, noon, and afternoon camera pair at Utopia Planitia, the Lander 2 site. The derived products include special geometric projections of the mosaic data sets: polar stereographic (donut), stereoscopic, and orthographic. Contour maps and vertical profiles of the topography were overlaid on the mosaics from which they were derived. Sets of stereo pairs were extracted and enlarged from stereoscopic projections of the mosaics.
Analysis of Performance of Stereoscopic-Vision Software
NASA Technical Reports Server (NTRS)
Kim, Won; Ansar, Adnan; Steele, Robert; Steinke, Robert
2007-01-01
A team of JPL researchers has analyzed stereoscopic vision software and produced a document describing its performance. This software is of the type used in maneuvering exploratory robotic vehicles on Martian terrain. The software in question utilizes correlations between portions of the images recorded by two electronic cameras to compute stereoscopic disparities, which, in conjunction with camera models, are used in computing distances to terrain points to be included in constructing a three-dimensional model of the terrain. The analysis included effects of correlation-window size, a pyramidal image down-sampling scheme, vertical misalignment, focus, maximum disparity, stereo baseline, and range ripples. Contributions of sub-pixel interpolation, vertical misalignment, and foreshortening to stereo correlation error were examined theoretically and experimentally. It was found that camera-calibration inaccuracy contributes to both down-range and cross-range error, but stereo correlation error affects only the down-range error. Experimental data for quantifying the stereo disparity error were obtained by use of reflective metrological targets taped to corners of bricks placed at known positions relative to the cameras. For the particular 1,024-by-768-pixel cameras of the system analyzed, the standard deviation of the down-range disparity error was found to be 0.32 pixel.
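The reported 0.32-pixel disparity error can be propagated to down-range error with the standard pinhole stereo relations, z = f·B/d and σ_z = z²·σ_d/(f·B). The focal length and baseline below are free parameters for illustration, not the analyzed system's values:

```python
def stereo_range_error(f_px, baseline_m, disparity_px, sigma_d_px=0.32):
    """Down-range distance and its 1-sigma error from stereo disparity.

    Standard pinhole relations: z = f*B/d and sigma_z = z**2 * sigma_d / (f*B).
    The 0.32-pixel default disparity sigma is the value reported in the
    abstract; f_px (focal length in pixels) and baseline_m are free
    parameters, not the analyzed system's values.
    """
    z = f_px * baseline_m / disparity_px
    sigma_z = z ** 2 * sigma_d_px / (f_px * baseline_m)
    return z, sigma_z
```

The quadratic growth of σ_z with range is why a fixed sub-pixel disparity error matters far more for distant terrain points than for nearby ones.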
A foreground object features-based stereoscopic image visual comfort assessment model
NASA Astrophysics Data System (ADS)
Jin, Xin; Jiang, G.; Ying, H.; Yu, M.; Ding, S.; Peng, Z.; Shao, F.
2014-11-01
Since stereoscopic images provide observers with a viewing experience that is both realistic and potentially uncomfortable, it is necessary to investigate the determinants of visual discomfort. Considering that the foreground object draws the most attention when humans observe stereoscopic images, this paper proposes a new foreground-object-based visual comfort assessment (VCA) metric. In the first place, a suitable segmentation method is applied to the disparity map, and the foreground object is identified as the one having the largest average disparity. In the second place, three visual features, namely the average disparity, average width, and spatial complexity of the foreground object, are computed from the perspective of visual attention. Nevertheless, an object's width and complexity do not influence the perception of visual comfort as consistently as disparity does. In accordance with this psychological phenomenon, in the third place we divide the images into four categories on the basis of disparity and width, and apply four different models to predict visual comfort more precisely. Experimental results show that the proposed VCA metric outperforms other existing metrics and can achieve high consistency between objective and subjective visual comfort scores. The Pearson Linear Correlation Coefficient (PLCC) and Spearman Rank Order Correlation Coefficient (SROCC) are over 0.84 and 0.82, respectively.
Simulated disparity and peripheral blur interact during binocular fusion.
Maiello, Guido; Chessa, Manuela; Solari, Fabio; Bex, Peter J
2014-07-17
We have developed a low-cost, practical gaze-contingent display in which natural images are presented to the observer with dioptric blur and stereoscopic disparity that are dependent on the three-dimensional structure of natural scenes. Our system simulates a distribution of retinal blur and depth similar to that experienced in real-world viewing conditions by emmetropic observers. We implemented the system using light-field photographs taken with a plenoptic camera which supports digital refocusing anywhere in the images. We coupled this capability with an eye-tracking system and stereoscopic rendering. With this display, we examine how the time course of binocular fusion depends on depth cues from blur and stereoscopic disparity in naturalistic images. Our results show that disparity and peripheral blur interact to modify eye-movement behavior and facilitate binocular fusion, and the greatest benefit was gained by observers who struggled most to achieve fusion. Even though plenoptic images do not replicate an individual’s aberrations, the results demonstrate that a naturalistic distribution of depth-dependent blur may improve 3-D virtual reality, and that interruptions of this pattern (e.g., with intraocular lenses) which flatten the distribution of retinal blur may adversely affect binocular fusion. © 2014 ARVO.
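The depth-dependent dioptric blur the display simulates follows thin-lens geometry: an object at a distance other than fixation is defocused by the difference of the reciprocal distances (in diopters), and the angular size of the retinal blur circle scales with pupil diameter. A minimal sketch (the 4 mm pupil is an assumed typical value, not from the paper):

```python
import math

def defocus_diopters(fixation_m: float, object_m: float) -> float:
    """Dioptric defocus of an object when the eye is focused at the
    fixation distance: |1/z_fix - 1/z_obj|, distances in metres."""
    return abs(1.0 / fixation_m - 1.0 / object_m)

def blur_circle_deg(defocus_d: float, pupil_mm: float = 4.0) -> float:
    """Approximate angular diameter of the retinal blur circle in
    degrees: pupil aperture (m) times defocus (1/m) is an angle in
    radians.  The 4 mm pupil is an assumed typical value."""
    return math.degrees(pupil_mm / 1000.0 * defocus_d)
```

In a gaze-contingent renderer of the kind described, each scene point's blur would be driven by its depth relative to the currently fixated depth, reproducing the naturalistic blur distribution the study manipulates.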
NASA Astrophysics Data System (ADS)
Tan, S. L. E.
2005-03-01
Stereoscopy was used in medicine as long ago as 1898, but has not gained widespread acceptance except for a peak in the 1930s. It retains a use in orthopaedics in the form of Radiostereogrammetrical Analysis (RSA), though this is now done by computer software without using stereopsis. Combining computer-assisted stereoscopic displays with both conventional plain films and reconstructed volumetric axial data, we are reassessing the use of stereoscopy in orthopaedics. Applications include use in developing nations or rural settings, erect patients where axial imaging cannot be used, and complex deformity and trauma reconstruction. Extension into orthopaedic endoscopic systems and teaching aids (e.g. operative videos) are further possibilities. The benefits of stereoscopic vision, namely increased perceived resolution and depth perception, can help orthopaedic surgeons achieve more accurate diagnosis and better pre-operative planning. Limitations of currently available stereoscopic displays that need to be addressed prior to widespread acceptance are: availability of hardware and software, loss of resolution, use of glasses, and image "ghosting". Journal publication, the traditional mode of information dissemination in orthopaedics, is also viewed as a hindrance to the acceptance of stereoscopy: it does not deliver the full impact of stereoscopy, and "hands-on" demonstrations are needed.
NASA Astrophysics Data System (ADS)
Eichenlaub, Jesse B.
2005-03-01
The difference in accommodation and convergence distance experienced when viewing stereoscopic displays has long been recognized as a source of visual discomfort. It is especially problematic in head-mounted virtual reality and enhanced reality displays, where images must often be displayed across a large depth range or superimposed on real objects. DTI has demonstrated a novel method of creating stereoscopic images in which the focus and fixation distances are closely matched for all parts of the scene, from close distances to infinity. The method is passive in the sense that it does not rely on eye tracking, moving parts, variable-focus optics, vibrating optics, or feedback loops. The method uses a rapidly changing illumination pattern in combination with a high-speed microdisplay to create cones of light that converge at different distances to form the voxels of a high-resolution, space-filling image. A bench-model display was built and a series of visual tests was performed in order to demonstrate the concept and investigate both its capabilities and limitations. Results proved conclusively that real optical images were being formed and that observers had to change their focus to read text or see objects at different distances.
Stereoscopic Movies for Teaching and Learning of Astronomy
NASA Astrophysics Data System (ADS)
Hayashi, Mitsuru; Kato, Tsunehiko N.; Takeda, Takaaaki; Kokubo, Eiichiro; Miura, Hitoshi; Takahei, Toshiyuki; Miyama, Shoken M.; Kaifu, Norio
To attract the interest of the public in astronomy, we visualize, in a virtual reality system, data obtained through simulations using supercomputers and through observations using state-of-the-art facilities, for example the SUBARU Telescope. The system is composed of three soft screens. For each screen we use two PCs, two DLP projectors with circular polarization filters, and one mirror to realize stereoscopic projection. By wearing glasses with circular polarization filters, viewers can experience immersiveness in the system. The six PCs are connected by optical fiber cable (1 Gbps). In particular, we developed software for synchronization and realized stereoscopic movies (15-30 frames per second). In addition to the teaching and learning of astronomy, we also utilize the system for public relations and for science itself at NAO Mitaka. The system can provide scientists with points of view we cannot realize on the Earth. We are planning to make the contents easier for the public to understand and to distribute the contents to museums and educational institutions through networks, for example Super SINET (the Internet backbone connecting institutes at 10 Gbps), in 2003, in addition to the monthly exhibition at NAO Mitaka.
Optoelectronic stereoscopic device for diagnostics, treatment, and developing of binocular vision
NASA Astrophysics Data System (ADS)
Pautova, Larisa; Elkhov, Victor A.; Ovechkis, Yuri N.
2003-08-01
Operation of the device is based on the alternating generation of pictures for the left and right eyes on the monitor screen. A controller sends pulses to the LC glasses so that the shutter for the left or right eye opens synchronously with the pictures. The device provides a switching frequency of more than 100 Hz, which is why flickering is absent. Thus, images are demonstrated separately to the left eye and to the right eye in turn, without the patient being aware of it, creating conditions of binocular perception close to natural ones without any additional separation of the visual fields. Coordinating the LC-cell transfer characteristic with the timing parameters of the monitor screen has made it possible to improve stereo image quality. A complicated problem of computer stereo images with LC glasses is the so-called 'ghosts': noise images that reach the blocked eye. We reduced their influence by adapting the stereo images to the phosphor and LC-cell characteristics. The device is intended for the diagnostics and treatment of strabismus, amblyopia, and other binocular and stereoscopic vision impairments; for cultivating, training, and developing stereoscopic vision; for measurements of horizontal and vertical phoria, fusion reserves, stereovision acuity, and other parameters; and for fixing the borders of central scotoma, as well as the suppression scotoma in strabismus.
Viewing 3D TV over two months produces no discernible effects on balance, coordination or eyesight
Read, Jenny C.A.; Godfrey, Alan; Bohr, Iwo; Simonotto, Jennifer; Galna, Brook; Smulders, Tom V.
2016-01-01
With the rise in stereoscopic 3D media, there has been concern that viewing stereoscopic 3D (S3D) content could have long-term adverse effects, but little data are available. In the first study to address this, 28 households who did not currently own a 3D TV were given a new TV set, either S3D or 2D. The 116 members of these households all underwent tests of balance, coordination and eyesight, both before they received their new TV set, and after they had owned it for 2 months. We did not detect any changes which appeared to be associated with viewing 3D TV. We conclude that viewing 3D TV does not produce detectable effects on balance, coordination or eyesight over the timescale studied. Practitioner Summary: Concern has been expressed over possible long-term effects of stereoscopic 3D (S3D). We looked for any changes in vision, balance and coordination associated with normal home S3D TV viewing in the 2 months after first acquiring a 3D TV. We find no evidence of any changes over this timescale. PMID:26758965
Depth reversals in stereoscopic displays driven by apparent size
NASA Astrophysics Data System (ADS)
Sacher, Gunnar; Hayes, Amy; Thornton, Ian M.; Sereno, Margaret E.; Malony, Allen D.
1998-04-01
In visual scenes, depth information is derived from a variety of monocular and binocular cues. When in conflict, a monocular cue is sometimes able to override the binocular information. We examined the accuracy of relative depth judgments in orthographic, stereoscopic displays and found that perceived relative size can override binocular disparity as a depth cue in a situation where the relative size information is itself generated from disparity information, not from retinal size difference. A size discrimination task confirmed the assumption that disparity information was perceived and used to generate apparent size differences. The tendency for the apparent size cue to override disparity information can be modulated by varying the strength of the apparent size cue. In addition, an analysis of reaction times provides supporting evidence for this novel depth reversal effect. We believe that human perception must be regarded as an important component of stereoscopic applications. Hence, if applications are to be effective and accurate, it is necessary to take into account the richness and complexity of the human visual perceptual system that interacts with them. We discuss implications of this and similar research for human performance in virtual environments, the design of visual presentations for virtual worlds, and the design of visualization tools.
Evaluation of stereoscopic display with visual function and interview
NASA Astrophysics Data System (ADS)
Okuyama, Fumio
1999-05-01
The influence of a binocular stereoscopic (3D) television display on the human eye was compared with that of a 2D display, using visual function testing and interviews. A 40-inch double-lenticular display was used for the 2D/3D comparison experiments. Subjects observed the display for 30 minutes at a distance of 1.0 m, viewing a combination of 2D and 3D material. The participants were twelve young adults. The main visual functions measured optometrically were visual acuity, refraction, phoria, near vision point, and accommodation. The interview consisted of 17 questions. Testing was performed just before watching, just after watching, and forty-five minutes after watching. Changes in visual function were characterized by prolongation of the near vision point, decrease of accommodation, and increase in phoria. Interview results for 3D viewing show much more visual fatigue in comparison with the 2D results. The conclusions are: 1) the change in visual function is larger and visual fatigue is more intense when viewing 3D images; 2) the evaluation method combining visual function tests and interviews proved very satisfactory for analyzing the influence of stereoscopic displays on the human eye.
NASA Astrophysics Data System (ADS)
Yolken, H. T.; Mehrabian, R.
1985-12-01
These are the proceedings of the workshop A National Forum on the Future of Automated Materials Processing in U.S. Industry - The Role of Sensors. This is the first of two workshops sponsored by the Industrial Research Institute and the White House Office of Science and Technology Policy, Committee on Materials Working Group on Automation of Materials Processing. The second workshop will address the other two key components required for automated materials processing: process models and artificial intelligence coupled with computer integration of the system. The objective of these workshops is to identify and assess important issues affecting the competitive position of U.S. industry related to its ability to automate production processes for basic and advanced materials, and to develop approaches for improved capability through cooperative R&D and associated efforts.
Augmented microscopy with near-infrared fluorescence detection
NASA Astrophysics Data System (ADS)
Watson, Jeffrey R.; Martirosyan, Nikolay; Skoch, Jesse; Lemole, G. Michael; Anton, Rein; Romanowski, Marek
2015-03-01
Near-infrared (NIR) fluorescence has become a frequently used intraoperative technique for image-guided surgical interventions. In procedures such as cerebral angiography, surgeons use the optical surgical microscope for the color view of the surgical field, and then switch to an electronic display for the NIR fluorescence images. However, the lack of stereoscopic, real-time, and on-site coregistration adds time and uncertainty to image-guided surgical procedures. To address these limitations, we developed the augmented microscope, whereby the electronically processed NIR fluorescence image is overlaid with the anatomical optical image in real-time within the optical path of the microscope. In vitro, the augmented microscope can detect and display indocyanine green (ICG) concentrations down to 94.5 nM, overlaid with the anatomical color image. We prepared polyacrylamide tissue phantoms with embedded polystyrene beads, yielding scattering properties similar to brain matter. In this model, 194 μM solution of ICG was detectable up to depths of 5 mm. ICG angiography was then performed in anesthetized rats. A dynamic process of ICG distribution in the vascular system overlaid with anatomical color images was observed and recorded. In summary, the augmented microscope demonstrates NIR fluorescence detection with superior real-time coregistration displayed within the ocular of the stereomicroscope. In comparison to other techniques, the augmented microscope retains full stereoscopic vision and optical controls including magnification and focus, camera capture, and multiuser access. Augmented microscopy may find application in surgeries where the use of traditional microscopes can be enhanced by contrast agents and image guided delivery of therapeutics, including oncology, neurosurgery, and ophthalmology.
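The digital portion of such an overlay can be mimicked in a few lines; the actual augmented microscope performs the co-registration optically within the microscope path. Below is a minimal sketch with illustrative names, blending a normalised NIR fluorescence frame into the green channel of a color view:

```python
import numpy as np

def overlay_nir(color, nir, alpha=0.6, channel=1):
    """Blend a NIR fluorescence frame into one channel of a color view.

    color   : HxWx3 float array in [0, 1] (the anatomical view).
    nir     : HxW float array in [0, 1] (normalised fluorescence intensity).
    alpha   : blend weight for the fluorescence signal (illustrative default).
    channel : RGB channel to tint (1 = green, a common choice for ICG overlays).
    """
    out = color.astype(float).copy()
    out[..., channel] = np.clip(
        out[..., channel] * (1.0 - alpha) + nir * alpha, 0.0, 1.0)
    return out
```

A real system would also need spatial registration between the NIR and color sensors before blending; here the two frames are assumed pre-aligned.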
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steinberg, Adam M.; Driscoll, James F.
2009-12-15
The dynamical processes of flame surface straining and wrinkling that occur as turbulence interacts with a premixed flame were measured using cinema-stereoscopic PIV (CS-PIV) and orthogonal-plane cinema-stereoscopic PIV (OPCS-PIV). These diagnostics provided temporally resolved measurements of turbulence-flame interaction at frame rates of up to 3 kHz and spatial resolutions as small as 280 μm. Previous descriptions of flame straining and wrinkling have typically been derived from a canonical interaction between a pair of counter-rotating vortices and a planar flame surface. However, it was found that this configuration did not properly represent real turbulence-flame interaction. Interactions resembling the canonical configuration were observed in less than 10% of the recorded frames. Instead, straining and wrinkling were generally caused by more geometrically complex turbulence, consisting of large groups of structures that could be multiply curved and intertwined. The effect of the interaction was highly dependent on the interaction geometry. Furthermore, even when the turbulence did exist in the canonical geometry, the straining and wrinkling of the flame surface were not well characterized by the vortical structures. A new mechanistic description of the turbulence-flame interaction was therefore identified and confirmed by the measurements. In this description, flame surface straining is caused by coherent structures of fluid-dynamic strain rate (strain-rate structures). The role of vortical structures is to curve existing flame surface, creating wrinkles. By simultaneously considering both forms of turbulent structure, turbulence-flame interactions in both the canonical configuration and more complex geometries could be understood. (author)
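The distinction the study draws between strain-rate structures and vortical structures can be made concrete on gridded velocity data. A minimal sketch, assuming a 2D velocity field on a regular grid (the study itself used cinema-stereoscopic, multi-plane measurements); function names are illustrative:

```python
import numpy as np

def strain_rate_2d(u, v, dx, dy):
    """Components of the 2D rate-of-strain tensor e_ij = 0.5*(du_i/dx_j + du_j/dx_i).

    u, v   : 2D arrays of velocity components on a regular grid (axis 0 = y).
    dx, dy : grid spacings.
    Returns (e_xx, e_yy, e_xy).
    """
    dudy, dudx = np.gradient(u, dy, dx)
    dvdy, dvdx = np.gradient(v, dy, dx)
    return dudx, dvdy, 0.5 * (dudy + dvdx)

def vorticity_2d(u, v, dx, dy):
    """Out-of-plane vorticity omega_z = dv/dx - du/dy."""
    dudy, _ = np.gradient(u, dy, dx)
    _, dvdx = np.gradient(v, dy, dx)
    return dvdx - dudy
```

For a uniform shear flow (u = y, v = 0) these give e_xy = 0.5 and omega_z = -1 everywhere, illustrating how the same velocity gradients decompose into a straining part and a rotational part.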
Nonanalytic Laboratory Automation: A Quarter Century of Progress.
Hawker, Charles D
2017-06-01
Clinical laboratory automation has blossomed since the 1989 AACC meeting, at which Dr. Masahide Sasaki first showed a western audience what his laboratory had implemented. Many diagnostics and other vendors are now offering a variety of automated options for laboratories of all sizes. Replacing manual processing and handling procedures with automation was embraced by the laboratory community because of the obvious benefits of labor savings and improvement in turnaround time and quality. Automation was also embraced by the diagnostics vendors, who saw automation as a means of incorporating the analyzers purchased by their customers into larger systems in which the benefits of automation were integrated with the analyzers. This report reviews the options that are available to laboratory customers. These options include so-called task-targeted automation: modules that range from single-function devices that automate single tasks (e.g., decapping or aliquoting) to multifunction workstations that incorporate several of the functions of a laboratory sample processing department. The options also include total laboratory automation systems that use conveyors to link sample processing functions to analyzers and often include postanalytical features such as refrigerated storage and sample retrieval. Most importantly, this report reviews a recommended process for evaluating the need for new automation and for identifying the specific requirements of a laboratory and developing solutions that can meet those requirements. The report also discusses some of the practical considerations facing a laboratory in a new implementation and reviews the concept of machine vision to replace human inspections. © 2017 American Association for Clinical Chemistry.
Fully automated processing of fMRI data in SPM: from MRI scanner to PACS.
Maldjian, Joseph A; Baer, Aaron H; Kraft, Robert A; Laurienti, Paul J; Burdette, Jonathan H
2009-01-01
Here we describe the Wake Forest University Pipeline, a fully automated method for the processing of fMRI data using SPM. The method includes fully automated data transfer and archiving from the point of acquisition, real-time batch script generation, distributed grid processing, interface to SPM in MATLAB, error recovery and data provenance, DICOM conversion and PACS insertion. It has been used for automated processing of fMRI experiments, as well as for the clinical implementation of fMRI and spin-tag perfusion imaging. The pipeline requires no manual intervention, and can be extended to any studies requiring offline processing.
More steps towards process automation for optical fabrication
NASA Astrophysics Data System (ADS)
Walker, David; Yu, Guoyu; Beaucamp, Anthony; Bibby, Matt; Li, Hongyu; McCluskey, Lee; Petrovic, Sanja; Reynolds, Christina
2017-06-01
In the context of Industrie 4.0, we have previously described the roles of robots in optical processing, and their complementarity with classical CNC machines, providing both processing and automation functions. After having demonstrated robotic moving of parts between a CNC polisher and metrology station, and auto-fringe-acquisition, we have moved on to automate the wash-down operation. This is part of a wider strategy we describe in this paper, leading towards automating the decision-making operations required before and throughout an optical manufacturing cycle.
Automated Subsystem Control for Life Support System (ASCLSS)
NASA Technical Reports Server (NTRS)
Block, Roger F.
1987-01-01
The Automated Subsystem Control for Life Support Systems (ASCLSS) program has successfully developed and demonstrated a generic approach to the automation and control of space station subsystems. The automation system features a hierarchical and distributed real-time control architecture which places maximum control authority at the lowest, or process control, level, enhancing system autonomy. The ASCLSS demonstration system pioneered many automation and control concepts currently being considered in the space station data management system (DMS). Heavy emphasis is placed on controls hardware and software commonality implemented in accepted standards. The approach successfully demonstrates real-time process control and places accountability with the subsystem or process developer. The ASCLSS system completely automates a space station subsystem (the air revitalization group of the ASCLSS), which moves the crew/operator into a role of supervisory control authority. The ASCLSS program developed over 50 lessons learned which will aid future space station developers in the area of automation and controls.
Improvements to the Processing and Characterization of Needled Composite Laminates
2014-01-01
the automated processing equipment are shown and discussed. The modifications allow better spatial control at the penetration sites and the ability to ... semi-automated processing equipment, commercial off-the-shelf (COTS) needles and COTS aramid mat designed for other applications. Needled material
Knowledge Representation Artifacts for Use in Sensemaking Support Systems
2015-03-12
and manual processing must be replaced by automated processing wherever it makes sense and is possible. Clearly, given the data and cognitive ... knowledge-centric view to situation analysis and decision-making as previously discussed, has led to the development of several automated processing components ... for use in sensemaking support systems [6-11]. In turn, automated processing has required the development of appropriate knowledge
Command and Control Common Semantic Core Required to Enable Net-centric Operations
2008-05-20
automated processing capability. A former US Marine Corps component C4 director during Operation Iraqi Freedom identified the problems of 1) uncertainty ... interoperability improvements to warfighter community processes, thanks to ubiquitous automated processing, are likely high and somewhat easier to quantify. A ... synchronized with the actions of other partners / warfare communities. This requires high-quality information, rapid sharing and automated processing - which
ERIC Educational Resources Information Center
Naclerio, Nick
1979-01-01
Clerical personnel may be able to climb career ladders as a result of office automation and expanded job opportunities in the word processing area. Suggests opportunities in an automated office system and lists books and periodicals on word processing for counselors and teachers. (MF)
Extratropical Cyclone in the Southern Ocean
NASA Technical Reports Server (NTRS)
2001-01-01
These images from the Multi-angle Imaging SpectroRadiometer portray an occluded extratropical cyclone situated in the Southern Ocean, about 650 kilometers south of the Eyre Peninsula, South Australia. Parts of the Yorke Peninsula and a portion of the Murray-Darling River basin are visible between the clouds near the top of the left-hand image, a true-color view from MISR's nadir (vertical-viewing) camera. Retrieved cloud-tracked wind velocities are indicated by the superimposed arrows. The image on the right displays cloud-top heights. Areas where cloud heights could not be retrieved are shown in black. Both the wind vectors and the cloud heights were derived using data from multiple MISR cameras within automated computer processing algorithms. The stereoscopic algorithms used to generate these results are still being refined, and future versions of these products may show modest changes. Extratropical cyclones are the dominant weather system at midlatitudes, and the term is used generically for regional low-pressure systems in the mid- to high-latitudes. In the southern hemisphere, cyclonic rotation is clockwise. These storms obtain their energy from temperature differences between air masses on either side of warm and cold fronts, and their characteristic pattern is of warm and cold fronts radiating out from a migrating low pressure center which forms, deepens, and dissipates as the fronts fold and collapse on each other. The center of this cyclone has started to decay, with the band of cloud to the south most likely representing the main front that was originally connected with the cyclonic circulation. These views were acquired on October 11, 2001 during Terra orbit 9650, and represent an area of about 380 kilometers x 1900 kilometers.
Extratropical Cyclone in the Southern Ocean
NASA Technical Reports Server (NTRS)
2002-01-01
These images from the Multi-angle Imaging SpectroRadiometer (MISR) portray an occluded extratropical cyclone situated in the Southern Ocean, about 650 kilometers south of the Eyre Peninsula, South Australia. The left-hand image, a true-color view from MISR's nadir (vertical-viewing) camera, shows clouds just south of the Yorke Peninsula and the Murray-Darling river basin in Australia. Retrieved cloud-tracked wind velocities are indicated by the superimposed arrows. The image on the right displays cloud-top heights. Areas where cloud heights could not be retrieved are shown in black. Both the wind vectors and the cloud heights were derived using data from multiple MISR cameras within automated computer processing algorithms. The stereoscopic algorithms used to generate these results are still being refined, and future versions of these products may show modest changes. Extratropical cyclones are the dominant weather system at midlatitudes, and the term is used generically for regional low-pressure systems in the mid- to high-latitudes. In the southern hemisphere, cyclonic rotation is clockwise. These storms obtain their energy from temperature differences between air masses on either side of warm and cold fronts, and their characteristic pattern is of warm and cold fronts radiating out from a migrating low pressure center which forms, deepens, and dissipates as the fronts fold and collapse on each other. The center of this cyclone has started to decay, with the band of cloud to the south most likely representing the main front that was originally connected with the cyclonic circulation. These views were acquired on October 11, 2001, and the large view represents an area of about 380 kilometers x 1900 kilometers. Image courtesy NASA/GSFC/LaRC/JPL, MISR Team.
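The geometric core of stereoscopic cloud-top height retrieval can be sketched for a single camera pair, although the operational MISR products retrieve cloud motion and height simultaneously from several camera pairs. All names and values below are illustrative:

```python
import math

def height_from_parallax(parallax_m, theta1_deg, theta2_deg=0.0):
    """Feature height above the surface from along-track stereo parallax.

    parallax_m : apparent along-track displacement (metres) of the same cloud
                 feature between two camera views, after any wind-advection
                 correction.
    theta*_deg : view zenith angles of the two cameras (degrees).

    Uses h = p / (tan(theta1) - tan(theta2)).
    """
    denom = (math.tan(math.radians(theta1_deg))
             - math.tan(math.radians(theta2_deg)))
    return parallax_m / denom
```

For example, a feature displaced 1 km along-track between a 45-degree forward view and the nadir view sits roughly 1 km above the surface; uncorrected cloud motion between the two acquisition times biases the result, which is why winds and heights are retrieved jointly in practice.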
The ASTER Global Digital Elevation Model (GDEM) -for societal benefit -
NASA Astrophysics Data System (ADS)
Hato, M.; Tsu, H.; Tachikawa, T.; Abrams, M.; Bailey, B.
2009-12-01
The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Global Digital Elevation Model (GDEM) was developed jointly by the Ministry of Economy, Trade and Industry (METI) of Japan and the United States National Aeronautics and Space Administration (NASA) under an agreement of contribution to GEOSS, and a public release was started on June 29th. The ASTER GDEM can be downloaded by users from the Earth Remote Sensing Data Analysis Center (ERSDAC) of Japan and NASA's Land Processes Distributed Active Archive Center (LP DAAC) free of charge. The ASTER instrument was built by METI and launched onboard NASA's Terra spacecraft in December 1999. It has an along-track stereoscopic capability using its near-infrared spectral band (NIR) and its nadir-viewing and backward-viewing telescopes to acquire stereo image data with a base-to-height ratio of 0.6. The ASTER GDEM was produced by applying a newly developed automated algorithm to more than 1.2 million NIR scenes. The DEMs produced from all scene data were stacked after cloud masking and finally partitioned into 1° x 1° units (called 'tiles') for convenience of distribution and handling by users. Before the start of public distribution, ERSDAC and USGS/NASA, together with many volunteers, performed validation and characterization using a preliminary product of the ASTER GDEM. As a result of this validation, METI and NASA evaluated that Version 1 of the ASTER GDEM has sufficient quality to be used as "experimental" or "research grade" data and consequently decided to release it. The ASTER GDEM, covering almost all land area from 83N to 83S on the earth, represents an important contribution to the global earth observation community. We will present our development of the ASTER GDEM and describe its accuracy and characteristics.
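The scene-stacking and 1° x 1° tiling steps described above can be sketched as follows, assuming the common convention of naming tiles by their south-west corner. The helper names and the per-pixel mean are illustrative, not the actual GDEM production algorithm:

```python
import math
import numpy as np

def tile_name(lat, lon):
    """Name of the 1-degree tile containing (lat, lon), by its SW corner,
    e.g. (35.4, 138.7) -> 'N35E138'."""
    lat0, lon0 = math.floor(lat), math.floor(lon)
    ns = 'N' if lat0 >= 0 else 'S'
    ew = 'E' if lon0 >= 0 else 'W'
    return f"{ns}{abs(lat0):02d}{ew}{abs(lon0):03d}"

def stack_scene_dems(dem_stack, valid_mask):
    """Per-pixel mean over a stack of co-registered scene DEMs, ignoring
    cloud-masked samples.

    dem_stack  : (n_scenes, H, W) float array of elevations.
    valid_mask : same shape, True where the sample is cloud-free.
    Pixels with no valid sample become NaN.
    """
    masked = np.where(valid_mask, dem_stack, np.nan)
    return np.nanmean(masked, axis=0)
```

Note that `math.floor` makes the SW-corner convention work for negative coordinates too: a point at (-0.5, -70.2) falls in tile S01W071.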
An Intelligent Automation Platform for Rapid Bioprocess Design
Wu, Tianyi
2014-01-01
Bioprocess development is very labor intensive, requiring many experiments to characterize each unit operation in the process sequence to achieve product safety and process efficiency. Recent advances in microscale biochemical engineering have led to automated experimentation. A process design workflow is implemented sequentially in which (1) a liquid-handling system performs high-throughput wet lab experiments, (2) standalone analysis devices detect the data, and (3) specific software is used for data analysis and experiment design given the user’s inputs. We report an intelligent automation platform that integrates these three activities to enhance the efficiency of such a workflow. A multiagent intelligent architecture has been developed incorporating agent communication to perform the tasks automatically. The key contribution of this work is the automation of data analysis and experiment design and also the ability to generate scripts to run the experiments automatically, allowing the elimination of human involvement. A first-generation prototype has been established and demonstrated through lysozyme precipitation process design. All procedures in the case study have been fully automated through an intelligent automation platform. The realization of automated data analysis and experiment design, and automated script programming for experimental procedures has the potential to increase lab productivity. PMID:24088579
An Automated Method for High-Definition Transcranial Direct Current Stimulation Modeling*
Huang, Yu; Su, Yuzhuo; Rorden, Christopher; Dmochowski, Jacek; Datta, Abhishek; Parra, Lucas C.
2014-01-01
Targeted transcranial stimulation with electric currents requires accurate models of the current flow from scalp electrodes to the human brain. Idiosyncratic anatomy of individual brains and heads leads to significant variability in such current flows across subjects, thus, necessitating accurate individualized head models. Here we report on an automated processing chain that computes current distributions in the head starting from a structural magnetic resonance image (MRI). The main purpose of automating this process is to reduce the substantial effort currently required for manual segmentation, electrode placement, and solving of finite element models. In doing so, several weeks of manual labor were reduced to no more than 4 hours of computation time and minimal user interaction, while current-flow results for the automated method deviated by less than 27.9% from the manual method. Key facilitating factors are the addition of three tissue types (skull, scalp and air) to a state-of-the-art automated segmentation process, morphological processing to correct small but important segmentation errors, and automated placement of small electrodes based on easily reproducible standard electrode configurations. We anticipate that such an automated processing will become an indispensable tool to individualize transcranial direct current stimulation (tDCS) therapy. PMID:23367144
Hands-on guide for 3D image creation for geological purposes
NASA Astrophysics Data System (ADS)
Frehner, Marcel; Tisato, Nicola
2013-04-01
Geological structures in outcrops or hand specimens are inherently three dimensional (3D), and therefore better understandable if viewed in 3D. While 3D models can easily be created, manipulated, and looked at from all sides on the computer screen (e.g., using photogrammetry or laser scanning data), 3D visualizations for publications or conference posters are much more challenging as they have to live in a 2D-world (i.e., on a sheet of paper). Perspective 2D visualizations of 3D models do not fully transmit the "feeling and depth of the third dimension" to the audience; but this feeling is desirable for a better examination and understanding in 3D of the structure under consideration. One of the very few possibilities to generate real 3D images, which work on a 2D display, is by using so-called stereoscopic images. Stereoscopic images are two images of the same object recorded from two slightly offset viewpoints. Special glasses and techniques have to be used to make sure that one image is seen only by one eye, and the other image is seen by the other eye, which together lead to the "3D effect". Geoscientists are often familiar with such 3D images. For example, geomorphologists traditionally view stereographic orthophotos by employing a mirror-stereoscope. Nowadays, petroleum geoscientists examine high-resolution 3D seismic data sets in special 3D visualization rooms. One of the methods for generating and viewing a stereoscopic image, which does not require a high-tech viewing device, is to create a so-called anaglyph. The principle is to overlay two images saturated in red and cyan, respectively. The two images are then viewed through red-cyan-stereoscopic glasses. This method is simple and cost-effective, but has some drawbacks in preserving colors accurately. A similar method is used in 3D movies, where polarized light or shuttering techniques are used to separate the left from the right image, which allows preserving the original colors.
The advantage of red-cyan anaglyphs is their simplicity and the possibility to print them on normal paper or project them using a conventional projector. Producing 3D stereoscopic images is much easier than commonly thought. Our hands-on poster provides an easy-to-use guide for producing 3D stereoscopic images. A few simple rules of thumb are presented that define how photographs of any scene or object have to be shot to produce good-looking 3D images. We use the free software Stereophotomaker (http://stereo.jpn.org/eng/stphmkr) to produce anaglyphs and provide red-cyan 3D glasses for viewing them. Our hands-on poster is easy to adapt and helps any geologist to present his/her field or hand specimen photographs in a much more fashionable 3D way for future publications or conference posters.
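The red-cyan overlay described above amounts to a single channel swap between the two views. A minimal sketch on NumPy arrays (dedicated tools additionally handle alignment and color adjustment); loading and saving the images is left to an imaging library:

```python
import numpy as np

def make_anaglyph(left, right):
    """Red-cyan anaglyph from a stereo pair.

    left, right : HxWx3 uint8 RGB arrays from the two (pre-aligned) viewpoints.
    The red channel comes from the left image (seen through the red filter),
    while green and blue come from the right image (seen through the cyan filter).
    """
    out = right.copy()
    out[..., 0] = left[..., 0]
    return out
```

Because the red channel of the right image is simply discarded, strongly red or cyan scene colors are not reproduced faithfully, which matches the color-preservation drawback noted in the abstract.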
Development Status: Automation Advanced Development Space Station Freedom Electric Power System
NASA Technical Reports Server (NTRS)
Dolce, James L.; Kish, James A.; Mellor, Pamela A.
1990-01-01
Electric power system automation for Space Station Freedom is intended to operate in a loop. Data from the power system is used for diagnosis and security analysis to generate Operations Management System (OMS) requests, which are sent to an arbiter, which sends a plan to a command generator connected to the electric power system. This viewgraph presentation profiles automation software for diagnosis, scheduling, and constraint interfaces, and simulation to support automation development. The automation development process is diagrammed, and the process of creating Ada and ART versions of the automation software is described.
Containerless automated processing of intermetallic compounds and composites
NASA Technical Reports Server (NTRS)
Johnson, D. R.; Joslin, S. M.; Reviere, R. D.; Oliver, B. F.; Noebe, R. D.
1993-01-01
An automated containerless processing system has been developed to directionally solidify high temperature materials, intermetallic compounds, and intermetallic/metallic composites. The system incorporates a wide range of ultra-high purity chemical processing conditions. The utilization of image processing for automated control negates the need for temperature measurements for process control. The list of recent systems that have been processed includes Cr, Mo, Mn, Nb, Ni, Ti, V, and Zr containing aluminides. Possible uses of the system, process control approaches, and properties and structures of recently processed intermetallics are reviewed.
Information Fusion for Feature Extraction and the Development of Geospatial Information
2004-07-01
of automated processing. 2. Requirements for Geospatial Information. Accurate, timely geospatial information is critical for many military ... this evaluation illustrates some of the difficulties in comparing manual and automated processing results (figure 5). The automated delineation of
Code of Federal Regulations, 2010 CFR
2010-10-01
.... (h) Automated data processing computer systems, including: (1) Planning efforts in the identification, evaluation, and selection of an automated data processing computer system solution meeting the program... existing automated data processing computer system to support Tribal IV-D program operations, and...
Code of Federal Regulations, 2013 CFR
2013-10-01
.... (h) Automated data processing computer systems, including: (1) Planning efforts in the identification, evaluation, and selection of an automated data processing computer system solution meeting the program... existing automated data processing computer system to support Tribal IV-D program operations, and...
Code of Federal Regulations, 2014 CFR
2014-10-01
.... (h) Automated data processing computer systems, including: (1) Planning efforts in the identification, evaluation, and selection of an automated data processing computer system solution meeting the program... existing automated data processing computer system to support Tribal IV-D program operations, and...
Code of Federal Regulations, 2012 CFR
2012-10-01
.... (h) Automated data processing computer systems, including: (1) Planning efforts in the identification, evaluation, and selection of an automated data processing computer system solution meeting the program... existing automated data processing computer system to support Tribal IV-D program operations, and...
Code of Federal Regulations, 2011 CFR
2011-10-01
.... (h) Automated data processing computer systems, including: (1) Planning efforts in the identification, evaluation, and selection of an automated data processing computer system solution meeting the program... existing automated data processing computer system to support Tribal IV-D program operations, and...
Automated workflows for modelling chemical fate, kinetics and toxicity.
Sala Benito, J V; Paini, Alicia; Richarz, Andrea-Nicole; Meinl, Thorsten; Berthold, Michael R; Cronin, Mark T D; Worth, Andrew P
2017-12-01
Automation is universal in today's society, from operating equipment such as machinery, in factory processes, to self-parking automobile systems. While these examples show the efficiency and effectiveness of automated mechanical processes, automated procedures that support the chemical risk assessment process are still in their infancy. Future human safety assessments will rely increasingly on the use of automated models, such as physiologically based kinetic (PBK) and dynamic models and the virtual cell based assay (VCBA). These biologically-based models will be coupled with chemistry-based prediction models that also automate the generation of key input parameters such as physicochemical properties. The development of automated software tools is an important step in harmonising and expediting the chemical safety assessment process. In this study, we illustrate how the KNIME Analytics Platform can be used to provide a user-friendly graphical interface for these biokinetic models, such as PBK models and VCBA, which simulate the fate of chemicals in vivo within the body and in vitro test systems respectively. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Adaptive automation of human-machine system information-processing functions.
Kaber, David B; Wright, Melanie C; Prinzel, Lawrence J; Clamann, Michael P
2005-01-01
The goal of this research was to describe the ability of human operators to interact with adaptive automation (AA) applied to various stages of complex systems information processing, defined in a model of human-automation interaction. Forty participants operated a simulation of an air traffic control task. Automated assistance was adaptively applied to information acquisition, information analysis, decision making, and action implementation aspects of the task based on operator workload states, which were measured using a secondary task. The differential effects of the forms of automation were determined and compared with a manual control condition. Results of two 20-min trials of AA or manual control revealed a significant effect of the type of automation on performance, particularly during manual control periods as part of the adaptive conditions. Humans appear to better adapt to AA applied to sensory and psychomotor information-processing functions (action implementation) than to AA applied to cognitive functions (information analysis and decision making), and AA is superior to completely manual control. Potential applications of this research include the design of automation to support air traffic controller information processing.
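The adaptive scheme described, where automated assistance is applied or withdrawn as secondary-task-measured workload changes, can be sketched as a simple policy update. The thresholds, hysteresis band, and stage ordering below are illustrative, not taken from the study:

```python
# The four information-processing stages named in the model of
# human-automation interaction discussed above.
STAGES = ("information acquisition", "information analysis",
          "decision making", "action implementation")

def adapt_assistance(workload, assisted):
    """Return an updated tuple of automated stages.

    workload : normalised workload estimate in [0, 1], e.g. derived from
               secondary-task performance (higher = more loaded).
    assisted : tuple of stage names currently automated, in STAGES order.

    High workload hands the next stage over to automation; low workload
    returns the most recently automated stage to manual control. The gap
    between the two thresholds avoids rapid mode oscillation.
    """
    if workload > 0.8 and len(assisted) < len(STAGES):
        return assisted + (STAGES[len(assisted)],)   # add assistance
    if workload < 0.4 and assisted:
        return assisted[:-1]                         # return control to operator
    return assisted
```

A policy informed by the study's findings might instead prioritise automating action implementation, since operators adapted better to automation of sensory and psychomotor functions than of cognitive ones.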
DOE Office of Scientific and Technical Information (OSTI.GOV)
Purdie, Thomas G., E-mail: Tom.Purdie@rmp.uhn.on.ca; Department of Radiation Oncology, University of Toronto, Toronto, Ontario; Techna Institute, University Health Network, Toronto, Ontario
Purpose: To demonstrate the large-scale clinical implementation and performance of an automated treatment planning methodology for tangential breast intensity modulated radiation therapy (IMRT). Methods and Materials: Automated planning was used to prospectively plan tangential breast IMRT treatment for 1661 patients between June 2009 and November 2012. The automated planning method emulates the manual steps performed by the user during treatment planning, including anatomical segmentation, beam placement, optimization, dose calculation, and plan documentation. The user specifies clinical requirements of the plan to be generated through a user interface embedded in the planning system. The automated method uses heuristic algorithms to define and simplify the technical aspects of the treatment planning process. Results: Automated planning was used in 1661 of 1708 patients receiving tangential breast IMRT during the time interval studied. Therefore, automated planning was applicable in greater than 97% of cases. The time for treatment planning using the automated process is routinely 5 to 6 minutes on standard commercially available planning hardware. We have shown a consistent reduction in plan rejections from plan reviews through the standard quality control process or weekly quality review multidisciplinary breast rounds as we have automated the planning process for tangential breast IMRT. Clinical plan acceptance increased from 97.3% using our previous semiautomated inverse method to 98.9% using the fully automated method. Conclusions: Automation has become the routine standard method for treatment planning of tangential breast IMRT at our institution and is clinically feasible on a large scale. The method has wide clinical applicability and can add tremendous efficiency, standardization, and quality to the current treatment planning process.
The use of automated methods can allow centers to more rapidly adopt IMRT and enhance access to the documented improvements in care for breast cancer patients, using technologies that are widely available and already in clinical use.
Purdie, Thomas G; Dinniwell, Robert E; Fyles, Anthony; Sharpe, Michael B
2014-11-01
Trust in automation: designing for appropriate reliance.
Lee, John D; See, Katrina A
2004-01-01
Automation is often problematic because people fail to rely upon it appropriately. Because people respond to technology socially, trust influences reliance on automation. In particular, trust guides reliance when complexity and unanticipated situations make a complete understanding of the automation impractical. This review considers trust from the organizational, sociological, interpersonal, psychological, and neurological perspectives. It considers how the context, automation characteristics, and cognitive processes affect the appropriateness of trust. The context in which the automation is used influences automation performance and provides a goal-oriented perspective to assess automation characteristics along a dimension of attributional abstraction. These characteristics can influence trust through analytic, analogical, and affective processes. The challenges of extrapolating the concept of trust in people to trust in automation are discussed. A conceptual model integrates research regarding trust in automation and describes the dynamics of trust, the role of context, and the influence of display characteristics. Actual or potential applications of this research include improved designs of systems that require people to manage imperfect automation.
Automated manufacturing process for DEAP stack-actuators
NASA Astrophysics Data System (ADS)
Tepel, Dominik; Hoffstadt, Thorben; Maas, Jürgen
2014-03-01
Dielectric elastomers (DE) are thin polymer films belonging to the class of electroactive polymers (EAP), coated with compliant and conductive electrodes on each side. Under the influence of an electric field, dielectric elastomers undergo large deformations. In this contribution, a manufacturing process for automatically fabricated stack-actuators based on dielectric electroactive polymers (DEAP) is presented. First, the specific design of the considered stack-actuator is explained; afterwards, the development, construction, and realization of an automated manufacturing process are presented in detail. By applying this automated process, stack-actuators with reproducible and homogeneous properties can be manufactured. Finally, the first DEAP actuator modules fabricated by this process are validated experimentally.
An Automated Energy Detection Algorithm Based on Morphological and Statistical Processing Techniques
2018-01-09
ARL-TR-8272, January 2018, US Army Research Laboratory: An Automated Energy Detection Algorithm Based on Morphological and Statistical Processing Techniques.
Oxygen-controlled automated neural differentiation of mouse embryonic stem cells.
Mondragon-Teran, Paul; Tostoes, Rui; Mason, Chris; Lye, Gary J; Veraitch, Farlan S
2013-03-01
Automation and oxygen tension control are two tools that provide significant improvements to the reproducibility and efficiency of stem cell production processes. The aim of this study was to establish a novel automation platform capable of controlling oxygen tension during both the cell-culture and liquid-handling steps of neural differentiation processes. We built a bespoke automation platform, which enclosed a liquid-handling platform in a sterile, oxygen-controlled environment. An airtight connection was used to transfer cell culture plates to and from an automated oxygen-controlled incubator. Our results demonstrate that our system yielded comparable cell numbers, viabilities, metabolism profiles and differentiation efficiencies when compared with traditional manual processes. Interestingly, eliminating exposure to ambient conditions during the liquid-handling stage resulted in significant improvements in the yield of MAP2-positive neural cells, indicating that this level of control can improve differentiation processes. This article describes, for the first time, an automation platform capable of maintaining oxygen tension control during both the cell-culture and liquid-handling stages of a 2D embryonic stem cell differentiation process.
Augmenting team cognition in human-automation teams performing in complex operational environments.
Cuevas, Haydee M; Fiore, Stephen M; Caldwell, Barrett S; Strater, Laura
2007-05-01
There is a growing reliance on automation (e.g., intelligent agents, semi-autonomous robotic systems) to effectively execute increasingly cognitively complex tasks. Successful team performance for such tasks has become even more dependent on team cognition, addressing both human-human and human-automation teams. Team cognition can be viewed as the binding mechanism that produces coordinated behavior within experienced teams, emerging from the interplay between each team member's individual cognition and team process behaviors (e.g., coordination, communication). In order to better understand team cognition in human-automation teams, team performance models need to address issues surrounding the effect of human-agent and human-robot interaction on critical team processes such as coordination and communication. Toward this end, we present a preliminary theoretical framework illustrating how the design and implementation of automation technology may influence team cognition and team coordination in complex operational environments. Integrating constructs from organizational and cognitive science, our proposed framework outlines how information exchange and updating between humans and automation technology may affect lower-level (e.g., working memory) and higher-level (e.g., sense making) cognitive processes as well as teams' higher-order "metacognitive" processes (e.g., performance monitoring). Issues surrounding human-automation interaction are discussed and implications are presented within the context of designing automation technology to improve task performance in human-automation teams.
Wait, Eric; Winter, Mark; Bjornsson, Chris; Kokovay, Erzsebet; Wang, Yue; Goderie, Susan; Temple, Sally; Cohen, Andrew R
2014-10-03
Neural stem cells are motile and proliferative cells that undergo mitosis, dividing to produce daughter cells and ultimately generating differentiated neurons and glia. Understanding the mechanisms controlling neural stem cell proliferation and differentiation will play a key role in the emerging fields of regenerative medicine and cancer therapeutics. Stem cell studies in vitro from 2-D image data are well established. Visualizing and analyzing large three dimensional images of intact tissue is a challenging task. It becomes more difficult as the dimensionality of the image data increases to include time and additional fluorescence channels. There is a pressing need for 5-D image analysis and visualization tools to study cellular dynamics in the intact niche and to quantify the role that environmental factors play in determining cell fate. We present an application that integrates visualization and quantitative analysis of 5-D (x,y,z,t,channel) and large montage confocal fluorescence microscopy images. The image sequences show stem cells together with blood vessels, enabling quantification of the dynamic behaviors of stem cells in relation to their vascular niche, with applications in developmental and cancer biology. Our application automatically segments, tracks, and lineages the image sequence data and then allows the user to view and edit the results of automated algorithms in a stereoscopic 3-D window while simultaneously viewing the stem cell lineage tree in a 2-D window. Using the GPU to store and render the image sequence data enables a hybrid computational approach. An inference-based approach utilizing user-provided edits to automatically correct related mistakes executes interactively on the system CPU while the GPU handles 3-D visualization tasks. By exploiting commodity computer gaming hardware, we have developed an application that can be run in the laboratory to facilitate rapid iteration through biological experiments. 
We combine unsupervised image analysis algorithms with an interactive visualization of the results. Our validation interface allows for each data set to be corrected to 100% accuracy, ensuring that downstream data analysis is accurate and verifiable. Our tool is the first to combine all of these aspects, leveraging the synergies obtained by utilizing validation information from stereo visualization to improve the low level image processing tasks.
Film/Adhesive Processing Module for Fiber-Placement Processing of Composites
NASA Technical Reports Server (NTRS)
Hulcher, A. Bruce
2007-01-01
An automated apparatus has been designed and constructed that enables the automated lay-up of composite structures incorporating films, foils, and adhesives during the automated fiber-placement process. This apparatus, denoted a film module, could be used to deposit materials in film or thin sheet form either simultaneously when laying down the fiber composite article or in an independent step.
Mobility and orientation aid for blind persons using artificial vision
NASA Astrophysics Data System (ADS)
Costa, Gustavo; Gusberti, Adrián; Graffigna, Juan Pablo; Guzzo, Martín; Nasisi, Oscar
2007-11-01
Blind or vision-impaired persons are limited in their normal life activities. Mobility and orientation of blind persons is an ever-present research subject because no total solution has yet been reached for these activities, which pose certain risks for the affected persons. The current work presents the design and development of a device conceived to capture environment information through stereoscopic vision. The images captured by a pair of video cameras are transferred and processed by configurable and sequential FPGA and DSP devices that issue action signals to a tactile feedback system. Optimal processing algorithms are implemented to perform this feedback in real time. The selected components make the device portable, so that the user can readily become accustomed to wearing it.
Agile based "Semi-"Automated Data ingest process : ORNL DAAC example
NASA Astrophysics Data System (ADS)
Santhana Vannan, S. K.; Beaty, T.; Cook, R. B.; Devarakonda, R.; Hook, L.; Wei, Y.; Wright, D.
2015-12-01
The ORNL DAAC archives and publishes data and information relevant to biogeochemical, ecological, and environmental processes. The data archived at the ORNL DAAC must be well formatted, self-descriptive, and documented, as well as referenced in a peer-reviewed publication. The ORNL DAAC ingest team curates diverse data sets from multiple data providers simultaneously. To streamline the ingest process, the data set submission process at the ORNL DAAC has recently been updated to use an agile process, and a semi-automated workflow system has been developed to provide a consistent data provider experience and to create a uniform data product. The goals of the semi-automated agile ingest process are to: (1) provide the ability to track a data set from acceptance to publication; (2) automate steps that can be automated to improve efficiencies and reduce redundancy; (3) update legacy ingest infrastructure; and (4) provide a centralized system to manage the various aspects of ingest. This talk will cover the agile methodology, workflow, and tools developed through this system.
Automation Bias: Decision Making and Performance in High-Tech Cockpits
NASA Technical Reports Server (NTRS)
Mosier, Kathleen L.; Skitka, Linda J.; Heers, Susan; Burdick, Mark; Rosekind, Mark R. (Technical Monitor)
1997-01-01
Automated aids and decision support tools are rapidly becoming indispensable tools in high-technology cockpits, and are assuming increasing control of "cognitive" flight tasks, such as calculating fuel-efficient routes, navigating, or detecting and diagnosing system malfunctions and abnormalities. This study was designed to investigate "automation bias," a recently documented factor in the use of automated aids and decision support systems. The term refers to omission and commission errors resulting from the use of automated cues as a heuristic replacement for vigilant information seeking and processing. Glass-cockpit pilots flew flight scenarios involving automation "events," or opportunities for automation-related omission and commission errors. Pilots who perceived themselves as "accountable" for their performance and strategies of interaction with the automation were more likely to double-check automated functioning against other cues, and less likely to commit errors. Pilots were also likely to erroneously "remember" the presence of expected cues when describing their decision-making processes.
Code of Federal Regulations, 2010 CFR
2010-10-01
... to conduct automated data processing and recordkeeping activities through Office Automation... IV-D Systems and office automation? 310.5 Section 310.5 Public Welfare Regulations Relating to Public... AUTOMATION Requirements for Computerized Tribal IV-D Systems and Office Automation § 310.5 What options are...
An Automation Survival Guide for Media Centers.
ERIC Educational Resources Information Center
Whaley, Roger E.
1989-01-01
Reviews factors that should affect the decision to automate a school media center and offers suggestions for the automation process. Topics discussed include getting the library collection ready for automation, deciding what automated functions are needed, evaluating software vendors, selecting software, and budgeting. (CLB)
Fleischer, Heidi; Ramani, Kinjal; Blitti, Koffi; Roddelkopf, Thomas; Warkentin, Mareike; Behrend, Detlef; Thurow, Kerstin
2018-02-01
Automation systems are well established in industries and life science laboratories, especially in bioscreening and high-throughput applications. An increasing demand of automation solutions can be seen in the field of analytical measurement in chemical synthesis, quality control, and medical and pharmaceutical fields, as well as research and development. In this study, an automation solution was developed and optimized for the investigation of new biliary endoprostheses (stents), which should reduce clogging after implantation in the human body. The material inside the stents (incrustations) has to be controlled regularly and under identical conditions. The elemental composition is one criterion to be monitored in stent development. The manual procedure was transferred to an automated process including sample preparation, elemental analysis using inductively coupled plasma mass spectrometry (ICP-MS), and data evaluation. Due to safety issues, microwave-assisted acid digestion was executed outside of the automation system. The performance of the automated process was determined and validated. The measurement results and the processing times were compared for both the manual and the automated procedure. Finally, real samples of stent incrustations and pig bile were analyzed using the automation system.
Tests of Spectral Cloud Classification Using DMSP Fine Mode Satellite Data.
1980-06-02
Fourier spectral analysis was identified as the most promising technique to upgrade automated processing of DMSP fine-mode satellite data; the resolution of these measurements on the Earth's surface is 0.3 n mi. Related work includes Pickett, R.M., and Blackman, E.S. (1976) Automated Processing of Satellite Imagery Data at the Air Force Global Weather Central, and a 1977 demonstration of spectral analysis at the same facility.
Stereoscopic, Force-Feedback Trainer For Telerobot Operators
NASA Technical Reports Server (NTRS)
Kim, Won S.; Schenker, Paul S.; Bejczy, Antal K.
1994-01-01
Computer-controlled simulator for training technicians to operate remote robots provides both visual and kinesthetic virtual reality. Used during initial stage of training; saves time and expense, increases operational safety, and prevents damage to robots by inexperienced operators. Computes virtual contact forces and torques of compliant robot in real time, providing operator with feel of forces experienced by manipulator as well as view in any of three modes: single view, two split views, or stereoscopic view. From keyboard, user specifies force-reflection gain and stiffness of manipulator hand for three translational and three rotational axes. System offers two simulated telerobotic tasks: insertion of peg in hole in three dimensions, and removal and insertion of drawer.
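The simulator's kinesthetic feedback rests on computing virtual contact forces for a compliant manipulator and scaling them by the operator-specified force-reflection gain. The abstract does not give the contact model; the linear-spring sketch below, with illustrative parameter names, is one minimal assumption, not the trainer's actual implementation:

```python
def reflected_force(penetration_m, stiffness_n_per_m, reflection_gain):
    """Virtual contact force for a compliant manipulator, modeled as a
    linear spring and scaled by the operator's force-reflection gain.
    The spring model and parameter names are illustrative assumptions."""
    # No force is reflected when the tool is not in contact (no penetration).
    return reflection_gain * stiffness_n_per_m * max(penetration_m, 0.0)
```

Per-axis stiffness and gain, as described in the abstract, would simply apply this relation independently to each translational and rotational axis.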
How much crosstalk can be allowed in a stereoscopic system at various grey levels?
NASA Astrophysics Data System (ADS)
Shestak, Sergey; Kim, Daesik; Kim, Yongie
2012-03-01
We have calculated a perceptual threshold of stereoscopic crosstalk on the basis of a mathematical model of human vision sensitivity. Instead of the linear model of just noticeable difference (JND) known as Weber's law, we applied the nonlinear Barten model. The predicted crosstalk threshold varies with the background luminance. The calculated values of the threshold are in reasonable agreement with known experimental data. We calculated the perceptual threshold of crosstalk for various combinations of applied grey levels. This result can be applied to the assessment of grey-to-grey crosstalk compensation. Further computational analysis of the applied model predicts an increase in displayable image contrast with a reduction of the maximum displayable luminance.
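For contrast with the nonlinear Barten model used in the paper, the simpler Weber-law baseline it replaces can be sketched in a few lines. The Weber fraction and the threshold definition below are illustrative assumptions, not the paper's fitted values:

```python
def weber_jnd(background_luminance, weber_fraction=0.02):
    """Just-noticeable luminance difference under a linear Weber-law model.

    The 2% Weber fraction is an assumed illustrative value."""
    return weber_fraction * background_luminance

def crosstalk_threshold(background_luminance, ghost_luminance):
    """Crosstalk level (fraction of the unintended image's luminance) at
    which the leaked ghost just exceeds the JND against the background."""
    return weber_jnd(background_luminance) / ghost_luminance

# Under Weber's law the tolerable leaked luminance scales linearly with the
# background; the Barten model makes this dependence nonlinear.
```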
Vision in our three-dimensional world
2016-01-01
Many aspects of our perceptual experience are dominated by the fact that our two eyes point forward. Whilst the location of our eyes leaves the environment behind our head inaccessible to vision, co-ordinated use of our two eyes gives us direct access to the three-dimensional structure of the scene in front of us, through the mechanism of stereoscopic vision. Scientific understanding of the different brain regions involved in stereoscopic vision and three-dimensional spatial cognition is changing rapidly, with consequent influences on fields as diverse as clinical practice in ophthalmology and the technology of virtual reality devices. This article is part of the themed issue ‘Vision in our three-dimensional world’. PMID:27269595
NASA Astrophysics Data System (ADS)
Taravella, Brandon; Potts, J. Baker; Stegmeir, Matthew
2014-11-01
The University of New Orleans recently acquired a self-contained stereoscopic particle image velocimetry system for use in their 125 ft long towing tank. This system is being used to study the wake flow behind an anguilliform swimming robot that swims with an ideal motion that is theorized not to produce any trailing vortices. The presentation will describe the particulars of the SPIV system along with details of installation of the SPIV system within the towing tank. The calibration routine will be discussed in detail and results of the free-flow runs will be discussed. Preliminary results from the anguilliform swimming motion will also be presented.
NASA Technical Reports Server (NTRS)
1984-01-01
The electroepitaxial process and the Very Large Scale Integration (VLSI) circuits (chips) facilities were chosen because each requires a very high degree of automation, and therefore involves extensive use of teleoperators, robotics, process mechanization, and artificial intelligence. Both cover a raw materials process and a sophisticated multi-step process and are therefore highly representative of the kinds of difficult operation, maintenance, and repair challenges which can be expected for any type of space manufacturing facility. Generic areas were identified which will require significant further study. The initial design will be based on terrestrial state-of-the-art hard automation. One hundred candidate missions were evaluated on the basis of automation potential and availability of meaningful knowledge. The design requirements and unconstrained design concepts developed for the two missions are presented.
First- and second-order processing in transient stereopsis.
Edwards, M; Pope, D R; Schor, C M
2000-01-01
Large-field stimuli were used to investigate the interaction of first- and second-order pathways in transient-stereo processing. Stimuli consisted of sinewave modulations in either the mean luminance (first-order stimulus) or the contrast (second-order stimulus) of a dynamic-random-dot field. The main results of the present study are that: (1) Depth could be extracted with both the first-order and second-order stimuli; (2) Depth could be extracted from dichoptically mixed first- and second-order stimuli, however, the same stimuli, when presented as a motion sequence, did not result in a motion percept. Based upon these findings we conclude that the transient-stereo system processes both first- and second-order signals, and that these two signals are pooled prior to the extraction of transient depth. This finding of interaction between first- and second-order stereoscopic processing is different from the independence that has been found with the motion system.
Improved compliance by BPM-driven workflow automation.
Holzmüller-Laue, Silke; Göde, Bernd; Fleischer, Heidi; Thurow, Kerstin
2014-12-01
Using methods and technologies of business process management (BPM) for laboratory automation has important benefits (i.e., the agility of high-level automation processes, rapid interdisciplinary prototyping and implementation of laboratory tasks and procedures, and efficient real-time process documentation). A principal goal of model-driven development is the improved transparency of processes and the alignment of process diagrams and technical code. First experiences of using the business process model and notation (BPMN) show that easy-to-read graphical process models can achieve standardization of laboratory workflows. Model-based development allows processes to be changed quickly and adapted easily to changing requirements. The process models are able to host work procedures and their scheduling in compliance with predefined guidelines and policies. Finally, the process-controlled documentation of complex workflow results addresses modern laboratory needs for quality assurance. BPMN 2.0, as an automation language able to control every kind of activity or subprocess, is directed at complete workflows in end-to-end relationships. BPMN is applicable as a system-independent and cross-disciplinary graphical language to document all methods in laboratories (i.e., screening procedures or analytical processes). This means that, with the BPM standard, a method for sharing process knowledge between laboratories is also available. © 2014 Society for Laboratory Automation and Screening.
Surface topography characterization using 3D stereoscopic reconstruction of SEM images
NASA Astrophysics Data System (ADS)
Vedantha Krishna, Amogh; Flys, Olena; Reddy, Vijeth V.; Rosén, B. G.
2018-06-01
A major drawback of the optical microscope is its limited ability to resolve fine details. Many microscopes have been developed to overcome the limits set by the diffraction of visible light. The scanning electron microscope (SEM) is one such alternative: it uses electrons for imaging, which have a much smaller wavelength than photons. As a result, high magnification with superior image resolution can be achieved. However, the SEM generates 2D images, which provide limited data for surface measurements and analysis. Many research areas require knowledge of 3D structures, as they contribute to a comprehensive understanding of microstructure by allowing effective measurements and qualitative visualization of the samples under study. For this reason, the stereo photogrammetry technique is employed to convert SEM images into 3D measurable data. This paper aims to utilize a stereoscopic reconstruction technique as a reliable method for characterization of surface topography. Reconstructed results from SEM images are compared with coherence scanning interferometer (CSI) results obtained by measuring a roughness reference standard sample. This paper presents a method to select the most robust and consistent surface texture parameters, those insensitive to the uncertainties involved in the reconstruction technique itself. Results from the two stereoscopic reconstruction algorithms are also documented in this paper.
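The abstract does not spell out the reconstruction algorithm, but the textbook relation behind SEM stereo photogrammetry, recovering height from the parallax of a eucentric tilt pair, can be sketched as follows (function and symbol names are illustrative, not the paper's):

```python
import math

def height_from_parallax(parallax_um, tilt_total_deg):
    """Height difference from a eucentric SEM tilt pair.

    Textbook relation z = p / (2 * sin(dtheta / 2)), where p is the measured
    parallax between corresponding points in the two images and dtheta is the
    total tilt angle between them. Units of z follow the units of p."""
    half_tilt = math.radians(tilt_total_deg) / 2.0
    return parallax_um / (2.0 * math.sin(half_tilt))
```

In practice a dense set of correspondences, found by matching features between the tilted images, yields a full height map rather than a single point.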
A GeoWall with Physics and Astronomy Applications
NASA Astrophysics Data System (ADS)
Dukes, Phillip; Bruton, Dan
2008-03-01
A GeoWall is a passive stereoscopic projection system that can be used by students, teachers, and researchers for visualization of the structure and dynamics of three-dimensional systems and data. The type of system described here adequately provides 3-D visualization in natural color for large or small groups of viewers. The name "GeoWall" derives from its initial development to visualize data in the geosciences.1 An early GeoWall system was developed by Paul Morin at the electronic visualization laboratory at the University of Minnesota and was applied in an introductory geology course in spring of 2001. Since that time, several stereoscopic media, which are applicable to introductory-level physics and astronomy classes, have been developed and released into the public domain. In addition to the GeoWall's application in the classroom, there is considerable value in its use as part of a general science outreach program. In this paper we briefly describe the theory of operation of stereoscopic projection and the basic necessary components of a GeoWall system. Then we briefly describe how we are using a GeoWall as an instructional tool for the classroom and informal astronomy education and in research. Finally, we list sources for several of the free software media in physics and astronomy available for use with a GeoWall system.
Broadcast-quality-stereoscopic video in a time-critical entertainment and corporate environment
NASA Astrophysics Data System (ADS)
Gay, Jean-Philippe
1995-03-01
"reality present: Peter Gabriel and Cirque du Soleil" is a 12-minute original work directed and produced by Doug Brown, Jean-Philippe Gay, and A. Coogan, which showcases creative content applications of commercial stereoscopic video equipment. For production, a complete equipment package including a Steadicam mount was used in support of the Ikegami LK-33 camera. Remote production units were fielded in the time-critical, on-stage and off-stage environments of two major live concerts: Peter Gabriel's Secret World performance at the San Diego Sports Arena, and Cirque du Soleil's Saltimbanco performance in Chicago. Twin 60 Hz video channels were captured on Beta SP for maximum post-production flexibility. Digital post-production and field-sequential mastering were effected in D-2 format at studio facilities in Los Angeles. The program was world-premiered to a large public at the World of Music, Arts and Dance festivals in Los Angeles and San Francisco in late 1993. It was presented to the artists in Los Angeles, Montreal, and Washington, D.C. Additional presentations have been made using a broad range of commercial and experimental stereoscopic video equipment, including projection systems, LCD and passive eyewear, and digital signal processors. Technical packages for live presentation have been fielded on site and off, through to the present.
Discussion on Height Systems in Stereoscopic Mapping Using the ZY-3 Satellite Images
NASA Astrophysics Data System (ADS)
Zhao, L.; Fu, X.; Zhu, G.; Zhang, J.; Han, C.; Cheng, L.
2018-04-01
The ZY-3 is a civil high-resolution optical stereoscopic mapping satellite independently developed by China. It is mainly used for 1:50,000 scale topographic mapping. One of the distinguishing features of the ZY-3 is that its panchromatic triplet camera can obtain thousands of kilometers of continuous strip stereo data. This working mode is suitable for wide-range stereoscopic mapping, in particular global DEM extraction. The ZY-3 constellation operates in a sun-synchronous orbit at an altitude of 505 km, with a 10:30 AM equator crossing time and a 29-day revisit period. The panchromatic triplet sensors have an excellent base-to-height ratio, which is advantageous for obtaining good mapping accuracy. In this paper the Chinese quasi-geoid, EGM2008, and the height conversion method are discussed. It is pointed out that, according to current surveying and mapping specifications, almost all maps and charts use mean sea level for elevation. Experiments on bundle adjustment and DEM extraction with different height systems have been carried out in Liaoning Province, China. The results show that similar accuracy can be obtained using different elevation systems. Following the principles of geodesy and photogrammetry, it is recommended to use the ellipsoidal height for satellite photogrammetric calculation and the orthometric height in mapping production.
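At a single point, the height conversion discussed above reduces to subtracting the geoid undulation N (e.g., interpolated from EGM2008) from the ellipsoidal height h. A minimal sketch, with illustrative values:

```python
def orthometric_height(ellipsoidal_height_m, geoid_undulation_m):
    """Convert an ellipsoidal height h (as produced by satellite
    photogrammetric calculation) to an orthometric height H above mean sea
    level via H = h - N, where N is the geoid undulation from a model such
    as EGM2008. Values here are illustrative."""
    return ellipsoidal_height_m - geoid_undulation_m
```

A production pipeline would interpolate N from a geoid grid at each ground point rather than use a single constant.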
Combining volumetric edge display and multiview display for expression of natural 3D images
NASA Astrophysics Data System (ADS)
Yasui, Ryota; Matsuda, Isamu; Kakeya, Hideki
2006-02-01
In the present paper the authors present a novel stereoscopic display method combining volumetric edge display technology and multiview display technology to realize the presentation of natural 3D images in which viewers do not suffer from the contradiction between binocular convergence and focal accommodation of the eyes, which causes eyestrain and sickness. We adopt the volumetric display method only for edge drawing, while we adopt a stereoscopic approach for flat areas of the image. Since focal accommodation of our eyes is affected only by the edge part of the image, natural focal accommodation can be induced if the edges of the 3D image are drawn at the proper depth. The conventional stereo-matching technique can give us robust depth values for the pixels which constitute noticeable edges. Occlusion and gloss of the objects can also be roughly expressed with the proposed method, since we use a stereoscopic approach for the flat areas. With this system, many users can simultaneously view natural 3D objects at a consistent position and posture. A simple optometric experiment using a refractometer suggests that the proposed method can give us 3D images without contradiction between binocular convergence and focal accommodation.
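Drawing edges at the proper depth from stereo matching ultimately relies on the standard pinhole relation between disparity and depth; a minimal sketch with hypothetical camera parameters, not the authors' implementation:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo relation Z = f * B / d, used to place a matched edge
    pixel at its proper depth. f is the focal length in pixels, B the
    camera baseline in meters, d the matched disparity in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```

Applying this only to edge pixels, as the paper proposes, keeps the volumetric rendering load low while still driving focal accommodation correctly.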
NASA Astrophysics Data System (ADS)
McIntire, John P.; Wright, Steve T.; Harrington, Lawrence K.; Havig, Paul R.; Watamaniuk, Scott N. J.; Heft, Eric L.
2014-06-01
Twelve participants were tested on a simple virtual object precision placement task while viewing a stereoscopic three-dimensional (S3-D) display. Inclusion criteria included uncorrected or best corrected vision of 20/20 or better in each eye and stereopsis of at least 40 arc sec using the Titmus stereotest. Additionally, binocular function was assessed, including measurements of distant and near phoria (horizontal and vertical) and distant and near horizontal fusion ranges using standard optometric clinical techniques. Before each of six 30 min experimental sessions, measurements of phoria and fusion ranges were repeated using a Keystone View Telebinocular and an S3-D display, respectively. All participants completed experimental sessions in which the task required the precision placement of a virtual object in depth at the same location as a target object. Subjective discomfort was assessed using the simulator sickness questionnaire. Individual placement accuracy in S3-D trials was significantly correlated with several of the binocular screening outcomes: viewers with larger convergent fusion ranges (measured at near distance), larger total fusion ranges (convergent plus divergent ranges, measured at near distance), and/or lower (better) stereoscopic acuity thresholds were more accurate on the placement task. No screening measures were predictive of subjective discomfort, perhaps due to the low levels of discomfort induced.
NASA Astrophysics Data System (ADS)
Zhang, Hua; Zeng, Luan
2017-11-01
Binocular stereoscopic vision can be used for close-range observation of space targets from space-based platforms. To solve the problem that a traditional binocular vision system cannot work normally after being disturbed, an online calibration method for a binocular stereo measuring camera with a self-reference is proposed. The method uses an auxiliary optical imaging device to insert the image of a standard reference object at the edge of the main optical path, so that it is imaged on the same focal plane as the target; this is equivalent to placing a standard reference inside the binocular imaging optical system. When the position of the system or the parameters of the imaging device are disturbed, the image of the standard reference changes accordingly in the imaging plane, while the physical position of the standard reference object does not change. The camera's external parameters can then be re-calibrated from the visual relationship of the standard reference object. The experimental results show that the maximum mean square error for the same object can be reduced from 72.88 mm to 1.65 mm when the right camera is deflected by 0.4° and the left camera is rotated by 0.2° in pitch. This method realizes online calibration of a binocular stereoscopic vision measurement system and can effectively improve its anti-jamming ability.
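The abstract does not detail how the external parameters are recovered from the reference object. One common way to estimate a camera's orientation change from a set of fixed reference points is a least-squares rigid alignment (the Kabsch algorithm); the sketch below illustrates that general technique, not the authors' specific method, and all point coordinates are hypothetical:

```python
import numpy as np

def rigid_align(p_ref, p_obs):
    """Estimate rotation R and translation t such that p_obs ≈ R @ p_ref + t,
    using the Kabsch algorithm (SVD-based least-squares rigid alignment)."""
    c_ref = p_ref.mean(axis=0)
    c_obs = p_obs.mean(axis=0)
    H = (p_ref - c_ref).T @ (p_obs - c_obs)   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_obs - R @ c_ref
    return R, t

# Hypothetical reference-object points (meters) and their observed positions
# after the camera frame was rotated 0.4° about the optical (z) axis.
theta = np.deg2rad(0.4)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
ref = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0],
                [0.0, 0.1, 1.0], [0.1, 0.1, 1.2]])  # non-coplanar set
obs = ref @ R_true.T
R_est, t_est = rigid_align(ref, obs)
```

Because the reference points are physically fixed, the recovered R and t directly give the disturbance of the camera's extrinsic parameters.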
A strategy for mineral and energy resource independence
Carter, W.D.
1983-01-01
Data acquired by Landsats 1, 2, and 3 are beginning to provide the information on which an improved mineral and energy resource exploration strategy can be based. Landsat 4 is expected to augment this capability with its higher resolution (30 m) and additional spectral bands in the Thematic Mapper (TM) designed specifically to discriminate clay minerals associated with mineral alteration. In addition, a new global magnetic anomaly map, derived from the recent Magsat mission, has recently been compiled by the National Aeronautics and Space Administration (NASA), the U.S. Geological Survey (USGS), and others. Preliminary, extremely small-scale renditions of this map indicate that global coverage is nearly complete and that the map will improve upon a previous one derived from Polar Orbiting Geophysical Observatory (POGO) data. Digital processing of the Landsat image data and Magsat geophysical data can be used to create three-dimensional stereoscopic models for which Landsat images provide surface reference to deep structural anomalies. Comparative studies of national Landsat lineament maps, Magsat stereoscopic models, and metallogenic information derived from the Computerized Resources Information Bank (CRIB) inventory of U.S. mineral resources provide a way of identifying and selecting exploration areas that have mineral resource potential. Landsat images and computer-compatible tapes can provide new and better mosaics and also provide the capability for a closer look at promising sites. © 1983.
What is stereoscopic vision good for?
NASA Astrophysics Data System (ADS)
Read, Jenny C. A.
2015-03-01
Stereo vision is a resource-intensive process. Nevertheless, it has evolved in many animals including mammals, birds, amphibians and insects. It must therefore convey significant fitness benefits. It is often assumed that the main benefit is improved accuracy of depth judgments, but camouflage breaking may be as important, particularly in predatory animals. In humans, for the last 150 years, stereo vision has been turned to a new use: helping us reproduce visual reality for artistic purposes. By recreating the different views of a scene seen by the two eyes, stereo achieves unprecedented levels of realism. However, it also has some unexpected effects on viewer experience. The disruption of established mechanisms for interpreting pictures may be one reason why some viewers find stereoscopic content disturbing. Stereo vision also has uses in ophthalmology. Clinical stereoacuity tests are used in the management of conditions such as strabismus and amblyopia as well as vision screening. Stereoacuity can reveal the effectiveness of therapy and even predict long-term outcomes post surgery. Yet current clinical stereo tests fall far short of the accuracy and precision achievable in the lab. At Newcastle University, we are exploiting the recent availability of autostereo 3D tablet computers to design a clinical stereotest app in the form of a game suitable for young children. Our goal is to enable quick, accurate and precise stereoacuity measures which will enable clinicians to obtain better outcomes for children with visual disorders.
Problems of Automation and Management Principles Information Flow in Manufacturing
NASA Astrophysics Data System (ADS)
Grigoryuk, E. N.; Bulkin, V. V.
2017-07-01
Automated process control systems are complex systems characterized by a common overall purpose, the systemic nature of the algorithms they implement for the exchange and processing of information, and a large number of functional subsystems. The article gives examples of automatic control systems and automated process control systems, drawing a parallel between them by identifying their strengths and weaknesses. A non-standard process control system is also proposed.
Public Library Automation Report: 1984.
ERIC Educational Resources Information Center
Gotanda, Masae
Data processing was introduced to public libraries in Hawaii in 1973 with a feasibility study which outlined the candidate areas for automation. Since then, the Office of Library Services has automated the order procedures for one of the largest book processing centers for public libraries in the country; created one of the first COM…
Development of an automated fuzing station for the future armored resupply vehicle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chesser, J.B.; Jansen, J.F.; Lloyd, P.D.
1995-03-01
The US Army is developing the Advanced Field Artillery System (AFAS), a next generation armored howitzer. The Future Armored Resupply Vehicle (FARV) will be its companion ammunition resupply vehicle. The FARV will automate the supply of ammunition and fuel to the AFAS, which will increase capabilities over the current system. One of the functions being considered for automation is ammunition processing. Oak Ridge National Laboratory is developing equipment to demonstrate automated ammunition processing. One of the key operations to be automated is fuzing. The projectiles are initially unfuzed, and a fuze must be inserted and threaded into the projectile as part of the processing. A constraint on the design solution is that the ammunition cannot be modified to simplify automation. The problem was analyzed to determine the alignment requirements. Using the results of the analysis, ORNL designed, built, and tested a test stand to verify the selected design solution.
PLACE: an open-source python package for laboratory automation, control, and experimentation.
Johnson, Jami L; Tom Wörden, Henrik; van Wijk, Kasper
2015-02-01
In modern laboratories, software can drive the full experimental process from data acquisition to storage, processing, and analysis. The automation of laboratory data acquisition is an important consideration for every laboratory. When implementing a laboratory automation scheme, important parameters include its reliability, time to implement, adaptability, and compatibility with software used at other stages of experimentation. In this article, we present an open-source, flexible, and extensible Python package for Laboratory Automation, Control, and Experimentation (PLACE). The package uses modular organization and clear design principles; therefore, it can be easily customized or expanded to meet the needs of diverse laboratories. We discuss the organization of PLACE, data-handling considerations, and then present an example using PLACE for laser-ultrasound experiments. Finally, we demonstrate the seamless transition to post-processing and analysis with Python through the development of an analysis module for data produced by PLACE automation. © 2014 Society for Laboratory Automation and Screening.
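PLACE itself is open source, but the snippet below is not its actual API; it is a minimal sketch of the modular design idea the abstract describes: each instrument is a self-contained module with a common interface, so an experiment becomes an ordered list of modules driven by one generic loop. All class and method names here are hypothetical:

```python
class Instrument:
    """Common interface every (hypothetical) instrument module implements."""
    def config(self, settings):   # prepare hardware before the scan
        raise NotImplementedError
    def update(self, step):       # act once per scan step, return data
        raise NotImplementedError
    def cleanup(self):            # release hardware after the scan
        pass

class DummyStage(Instrument):
    """Stand-in for a motorized stage module."""
    def config(self, settings):
        self.step_size = settings["step_size_mm"]
    def update(self, step):
        return {"position_mm": step * self.step_size}

class DummySensor(Instrument):
    """Stand-in for an acquisition module (e.g. an ultrasound detector)."""
    def config(self, settings):
        self.gain = settings["gain"]
    def update(self, step):
        return {"reading": self.gain * 1.0}  # placeholder acquisition

def run_scan(modules, settings, n_steps):
    """Generic experiment loop: configure, step, collect, clean up."""
    for m in modules:
        m.config(settings)
    records = []
    try:
        for step in range(n_steps):
            row = {}
            for m in modules:
                row.update(m.update(step))
            records.append(row)
    finally:
        for m in modules:
            m.cleanup()
    return records

data = run_scan([DummyStage(), DummySensor()],
                {"step_size_mm": 0.5, "gain": 2.0}, n_steps=3)
```

The payoff of this structure is the one emphasized in the abstract: adding a new instrument means writing one new module, with no change to the experiment loop.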
Biomek 3000: the workhorse in an automated accredited forensic genetic laboratory.
Stangegaard, Michael; Meijer, Per-Johan; Børsting, Claus; Hansen, Anders J; Morling, Niels
2012-10-01
We have implemented and validated automated protocols for a wide range of processes such as sample preparation, PCR setup, and capillary electrophoresis setup using small, simple, and inexpensive automated liquid handlers. The flexibility and ease of programming enable the Biomek 3000 to be used in many parts of the laboratory process in a modern forensic genetics laboratory with low to medium sample throughput. In conclusion, we demonstrated that sample processing for accredited forensic genetic DNA typing can be implemented on small automated liquid handlers, leading to the reduction of manual work as well as increased quality and throughput.
NASA Astrophysics Data System (ADS)
Zhang, Yongyong; Gao, Yang; Yu, Qiang
2017-09-01
Agricultural nitrogen loss has become an increasingly important source of water-quality deterioration and eutrophication, and even threatens water safety for humanity. Nitrogen dynamics are still too complicated to capture well at the watershed scale, owing to nitrogen's multiple forms and instability and to the disturbance introduced by agricultural management practices. Stereoscopic agriculture is a novel agricultural planting pattern that efficiently uses local natural resources (e.g., water, land, sunshine, heat, and fertilizer). It is widely promoted as a high-yield system and can obtain considerable economic benefits, particularly in China. However, its implications for environmental quality are not clear. In our study, an experimental watershed at Qianyanzhou station, which is famous for the stereoscopic agriculture pattern of Southern China, was selected as the study area. Regional characteristics of runoff and nitrogen losses were simulated by an integrated water system model (HEQM) with multi-objective calibration, and multiple agricultural practices were assessed to find an effective approach for reducing diffuse nitrogen losses. Results showed that daily variations of runoff and nitrogen forms were well reproduced throughout the watershed, i.e., satisfactory performance for ammonium and nitrate nitrogen (NH4-N and NO3-N) loads, good performance for runoff and organic nitrogen (ON) load, and very good performance for total nitrogen (TN) load. The average loss coefficient was 62.74 kg/ha for NH4-N, 0.98 kg/ha for NO3-N, 0.0004 kg/ha for ON, and 63.80 kg/ha for TN. The dominant form of nitrogen loss was NH4-N, due to the applied fertilizers, and the most affected zones aggregated in the middle and downstream regions covered by paddy and orange orchard.
In order to control diffuse nitrogen losses, the most effective practices for Qianyanzhou stereoscopic agriculture pattern were to reduce farmland planting scale in the valley by afforestation, particularly for orchard in the downstream regions, followed by fertilizer application optimization.
Takahashi; Nakazawa; Watanabe; Konagaya
1999-01-01
We have developed automated processing algorithms for 2-dimensional (2-D) electrophoretograms of genomic DNA based on the RLGS (Restriction Landmark Genomic Scanning) method, which scans restriction enzyme recognition sites as landmarks and maps them onto a 2-D electrophoresis gel. Our processing algorithms enable automated spot recognition in RLGS electrophoretograms and automated comparison of large numbers of such images. In the final stage of the automated processing, a master spot pattern, onto which all the spots in the RLGS images are mapped at once, can be obtained. Spot pattern variations that appear specific to pathogenic DNA molecular changes can be detected by simply looking over the master spot pattern. When we applied our algorithms to the analysis of 33 RLGS images derived from human colon tissues, we successfully detected several colon-tumor-specific spot pattern changes.
Complacency and bias in human use of automation: an attentional integration.
Parasuraman, Raja; Manzey, Dietrich H
2010-06-01
Our aim was to review empirical studies of complacency and bias in human interaction with automated and decision support systems and provide an integrated theoretical model for their explanation. Automation-related complacency and automation bias have typically been considered separately and independently. Studies on complacency and automation bias were analyzed with respect to the cognitive processes involved. Automation complacency occurs under conditions of multiple-task load, when manual tasks compete with the automated task for the operator's attention. Automation complacency is found in both naive and expert participants and cannot be overcome with simple practice. Automation bias results in making both omission and commission errors when decision aids are imperfect. Automation bias occurs in both naive and expert participants, cannot be prevented by training or instructions, and can affect decision making in individuals as well as in teams. While automation bias has been conceived of as a special case of decision bias, our analysis suggests that it also depends on attentional processes similar to those involved in automation-related complacency. Complacency and automation bias represent different manifestations of overlapping automation-induced phenomena, with attention playing a central role. An integrated model of complacency and automation bias shows that they result from the dynamic interaction of personal, situational, and automation-related characteristics. The integrated model and attentional synthesis provides a heuristic framework for further research on complacency and automation bias and design options for mitigating such effects in automated and decision support systems.
NASA Astrophysics Data System (ADS)
Gleason, J. L.; Hillyer, T. N.; Wilkins, J.
2012-12-01
The CERES Science Team integrates data from 5 CERES instruments onboard the Terra, Aqua and NPP missions. The processing chain fuses CERES observations with data from 19 other unique sources. The addition of CERES Flight Model 5 (FM5) onboard NPP, coupled with ground processing system upgrades further emphasizes the need for an automated job-submission utility to manage multiple processing streams concurrently. The operator-driven, legacy-processing approach relied on manually staging data from magnetic tape to limited spinning disk attached to a shared memory architecture system. The migration of CERES production code to a distributed, cluster computing environment with approximately one petabyte of spinning disk containing all precursor input data products facilitates the development of a CERES-specific, automated workflow manager. In the cluster environment, I/O is the primary system resource in contention across jobs. Therefore, system load can be maximized with a throttling workload manager. This poster discusses a Java and Perl implementation of an automated job management tool tailored for CERES processing.
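The abstract notes only that the tool was implemented in Java and Perl; as an illustration of the throttling idea it describes (sketched here in Python, with hypothetical names and limits), a bounded worker pool submits all jobs but caps how many run concurrently, so the contended resource, disk I/O, is never oversubscribed:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

MAX_CONCURRENT_JOBS = 4  # hypothetical I/O throttle for the cluster

def run_job(job_id):
    """Placeholder for one processing-stream job (data staging + compute)."""
    return f"job-{job_id}: done"

def run_throttled(job_ids, max_workers=MAX_CONCURRENT_JOBS):
    """Submit every job, but let at most max_workers execute at once.

    The pool's bounded worker count is the throttle: queued jobs wait
    until a running job finishes and frees a worker.
    """
    results = []
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(run_job, j) for j in job_ids]
        for f in as_completed(futures):
            results.append(f.result())
    return results

results = run_throttled(range(10))
```

Tuning `max_workers` to the storage system's sustainable throughput is what "system load can be maximized" amounts to in practice.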
Gokce, Sertan Kutal; Guo, Samuel X.; Ghorashian, Navid; Everett, W. Neil; Jarrell, Travis; Kottek, Aubri; Bovik, Alan C.; Ben-Yakar, Adela
2014-01-01
Femtosecond laser nanosurgery has been widely accepted as an axonal injury model, enabling nerve regeneration studies in the small model organism, Caenorhabditis elegans. To overcome the time limitations of manual worm handling techniques, automation and new immobilization technologies must be adopted to improve throughput in these studies. While new microfluidic immobilization techniques have been developed that promise to reduce the time required for axotomies, there is a need for automated procedures to minimize the required amount of human intervention and accelerate the axotomy processes crucial for high-throughput. Here, we report a fully automated microfluidic platform for performing laser axotomies of fluorescently tagged neurons in living Caenorhabditis elegans. The presented automation process reduces the time required to perform axotomies within individual worms to ∼17 s/worm, at least one order of magnitude faster than manual approaches. The full automation is achieved with a unique chip design and an operation sequence that is fully computer controlled and synchronized with efficient and accurate image processing algorithms. The microfluidic device includes a T-shaped architecture and three-dimensional microfluidic interconnects to serially transport, position, and immobilize worms. The image processing algorithms can identify and precisely position axons targeted for ablation. There were no statistically significant differences observed in reconnection probabilities between axotomies carried out with the automated system and those performed manually with anesthetics. The overall success rate of automated axotomies was 67.4±3.2% of the cases (236/350) at an average processing rate of 17.0±2.4 s. This fully automated platform establishes a promising methodology for prospective genome-wide screening of nerve regeneration in C. elegans in a truly high-throughput manner. PMID:25470130
2010-04-01
NRL Stennis Space Center (NRL-SSC) for further processing using the NRL-SSC Automated Processing System (APS). APS was developed for processing...have not previously developed automated processing for hyperspectral ocean color data. The hyperspectral processing branch includes several
Multi-Dimensional Signal Processing Research Program
1981-09-30
applications to real-time image processing and analysis. A specific long-range application is the automated processing of aerial reconnaissance imagery...Non-supervised image segmentation is a potentially important operation in the automated processing of aerial reconnaissance photographs since it
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-12
... Relating to Establishing an Automated Service for the Processing of Transfers, Replacements, and Exchanges... (the "Act"). The proposed rule change allows NSCC to add a new automated service to process... offer a new automated service for the transfer, replacement, or exchange (collectively referred to as a...
Automated Literature Processing Handling and Analysis System--First Generation.
ERIC Educational Resources Information Center
Redstone Scientific Information Center, Redstone Arsenal, AL.
The report presents a summary of the development and the characteristics of the first generation of the Automated Literature Processing, Handling and Analysis (ALPHA-1) system. Descriptions of the computer technology of ALPHA-1 and the use of this automated library technique are presented. Each of the subsystems and modules now in operation are…
2008-09-01
automated processing of images for color correction, segmentation of foreground targets from sediment and classification of targets to taxonomic category...element in the development of HabCam as a tool for habitat characterization is the automated processing of images for color correction, segmentation of
Classification Trees for Quality Control Processes in Automated Constructed Response Scoring.
ERIC Educational Resources Information Center
Williamson, David M.; Hone, Anne S.; Miller, Susan; Bejar, Isaac I.
As the automated scoring of constructed responses reaches operational status, the issue of monitoring the scoring process becomes a primary concern, particularly when the goal is to have automated scoring operate completely unassisted by humans. Using a vignette from the Architectural Registration Examination and data for 326 cases with both human…
Wada, Atsushi; Sakano, Yuichi; Ando, Hiroshi
2016-01-01
Vision is important for estimating self-motion, which is thought to involve optic-flow processing. Here, we investigated the fMRI response profiles in visual area V6, the precuneus motion area (PcM), and the cingulate sulcus visual area (CSv)—three medial brain regions recently shown to be sensitive to optic-flow. We used wide-view stereoscopic stimulation to induce robust self-motion processing. Stimuli included static, randomly moving, and coherently moving dots (simulating forward self-motion). We varied the stimulus size and the presence of stereoscopic information. A combination of univariate and multi-voxel pattern analyses (MVPA) revealed that fMRI responses in the three regions differed from each other. The univariate analysis identified optic-flow selectivity and an effect of stimulus size in V6, PcM, and CSv, among which only CSv showed a significantly lower response to random motion stimuli compared with static conditions. Furthermore, MVPA revealed an optic-flow specific multi-voxel pattern in the PcM and CSv, where the discrimination of coherent motion from both random motion and static conditions showed above-chance prediction accuracy, but that of random motion from static conditions did not. Additionally, while area V6 successfully classified different stimulus sizes regardless of motion pattern, this classification was only partial in PcM and was absent in CSv. This may reflect the known retinotopic representation in V6 and the absence of such clear visuospatial representation in CSv. We also found significant correlations between the strength of subjective self-motion and univariate activation in all examined regions except for primary visual cortex (V1). This neuro-perceptual correlation was significantly higher for V6, PcM, and CSv when compared with V1, and higher for CSv when compared with the visual motion area hMT+. 
Our convergent results suggest the significant involvement of CSv in self-motion processing, which may give rise to its percept. PMID:26973588
Stereoscopic optical viewing system
Tallman, C.S.
1986-05-02
An improved optical system which provides the operator with a stereoscopic viewing field and depth of vision, particularly suitable for use in various machines such as electron or laser beam welding and drilling machines. The system features two separate but independently controlled optical viewing assemblies from the eyepiece to a spot directly above the working surface. Each optical assembly comprises a combination of eyepieces, turning prisms, telephoto lenses for providing magnification, achromatic imaging relay lenses, and final-stage pentagonal turning prisms. Adjustment for variations in distance from the turning prisms to the workpiece, necessitated by varying part sizes and configurations and by the operator's visual acuity, is provided separately for each optical assembly by means of separate manual controls at the operator console or within easy reach of the operator.
Chang, Yia-Chung; Tang, Li-Chuan; Yin, Chun-Yi
2013-01-01
Both an analytical formula and an efficient numerical method for simulating the accumulated intensity profile of light refracted through a lenticular lens array placed on top of a liquid-crystal display (LCD) are presented. The influence of light refracted through adjacent lenses is examined in two-view and four-view systems. Our simulation results are in good agreement with those obtained by commercial software (ASAP), but our method is much more efficient. The proposed method allows one to adjust the design parameters and simulate the performance of a subpixel-matched auto-stereoscopic LCD more efficiently and easily.
Stereoscopic optical viewing system
Tallman, Clifford S.
1987-01-01
An improved optical system which provides the operator with a stereoscopic viewing field and depth of vision, particularly suitable for use in various machines such as electron or laser beam welding and drilling machines. The system features two separate but independently controlled optical viewing assemblies from the eyepiece to a spot directly above the working surface. Each optical assembly comprises a combination of eyepieces, turning prisms, telephoto lenses for providing magnification, achromatic imaging relay lenses, and final-stage pentagonal turning prisms. Adjustment for variations in distance from the turning prisms to the workpiece, necessitated by varying part sizes and configurations and by the operator's visual acuity, is provided separately for each optical assembly by means of separate manual controls at the operator console or within easy reach of the operator.
Abstracts of AF Materials Laboratory Reports
1975-09-01
NO: TITLE: AUTHOR(S): CONTRACT NO; CONTRACTOR: AFML-TR-73-307 200,397 IMPROVED AUTOMATED TAPE LAYING MACHINE M. Poullos, W. J. Murray, D.L...AUTOMATED IMPROVED AUTOMATED TAPE LAYING MACHINE AUTOMATION AUTOMATION OF COATING PROCESSES FOR GAS TURBINE BLADES AND VANES 203222/111 203072...IMPROVED TAPE LAYING MACHINE IMPROVED AUTOMATED TAPE LAYING MACHINE A STUDY OF THE STRESS-STRAIN BEHAVIOR OF GRAPHITE
DOE Program on Seismic Characterization for Regions of Interest to CTBT Monitoring,
1995-08-14
processing of the monitoring network data). While developing and testing the corrections and other parameters needed by the automated processing systems...the secondary network. Parameters tabulated in the knowledge base must be appropriate for routine automated processing of network data, and must also...operation of the PNDC, as well as to results of investigations of "special events" (i.e., those events that fail to locate or discriminate during automated
Automated imaging system for single molecules
Schwartz, David Charles; Runnheim, Rodney; Forrest, Daniel
2012-09-18
There is provided a high throughput automated single molecule image collection and processing system that requires minimal initial user input. The unique features embodied in the present disclosure allow automated collection and initial processing of optical images of single molecules and their assemblies. Correct focus may be automatically maintained while images are collected. Uneven illumination in fluorescence microscopy is accounted for, and an overall robust imaging operation is provided yielding individual images prepared for further processing in external systems. Embodiments described herein are useful in studies of any macromolecules such as DNA, RNA, peptides and proteins. The automated image collection and processing system and method of same may be implemented and deployed over a computer network, and may be ergonomically optimized to facilitate user interaction.
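The disclosure does not specify how uneven illumination is accounted for; a standard approach in fluorescence microscopy is flat-field correction, dividing each dark-subtracted raw frame by a normalized illumination profile. The sketch below shows that general technique, not necessarily the patented method, and the array values are hypothetical:

```python
import numpy as np

def flat_field_correct(raw, flat, dark):
    """Correct uneven illumination in a fluorescence image.

    Subtract the dark (no-light) frame, then divide by the illumination
    profile measured from a uniform 'flat' reference, normalized so the
    average gain is 1.0.
    """
    flat_net = flat.astype(float) - dark
    gain = flat_net / flat_net.mean()          # ~1.0 at average brightness
    return (raw.astype(float) - dark) / gain

# Hypothetical 2x2 frames: illumination twice as bright in the left column.
dark = np.zeros((2, 2))
flat = np.array([[2.0, 1.0],
                 [2.0, 1.0]])
raw = np.array([[200.0, 100.0],
                [200.0, 100.0]])  # a uniform sample seen through that profile
corrected = flat_field_correct(raw, flat, dark)
# corrected is uniform: every pixel is 150.0
```

After this step, spot intensities from single molecules anywhere in the field of view become directly comparable, which is what downstream quantitative processing requires.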
Automated Sequence Processor: Something Old, Something New
NASA Technical Reports Server (NTRS)
Streiffert, Barbara; Schrock, Mitchell; Fisher, Forest; Himes, Terry
2012-01-01
High productivity is required for operations teams to meet schedules, and risk must be minimized. Scripting is used to automate processes, and scripts perform essential operations functions. The Automated Sequence Processor (ASP) was a grass-roots task built to automate the command uplink process; a system engineering task for ASP revitalization was organized. ASP is a set of approximately 200 scripts written in Perl, C Shell, AWK, and other scripting languages. ASP processes, checks, and packages non-interactive commands automatically. Non-interactive commands are guaranteed to be safe and have been checked by hardware or software simulators. ASP checks that commands are non-interactive, processes the commands through a command simulator, and then packages them if there are no errors. ASP must be active 24 hours/day, 7 days/week.
Managing laboratory automation
Saboe, Thomas J.
1995-01-01
This paper discusses the process of managing automated systems through their life cycles within the quality-control (QC) laboratory environment. The focus is on the process of directing and managing the evolving automation of a laboratory; system examples are given. The author shows how both task and data systems have evolved, and how they interrelate. A BIG picture, or continuum view, is presented and some of the reasons for success or failure of the various examples cited are explored. Finally, some comments on future automation need are discussed. PMID:18925018
Managing laboratory automation.
Saboe, T J
1995-01-01
This paper discusses the process of managing automated systems through their life cycles within the quality-control (QC) laboratory environment. The focus is on the process of directing and managing the evolving automation of a laboratory; system examples are given. The author shows how both task and data systems have evolved, and how they interrelate. A BIG picture, or continuum view, is presented and some of the reasons for success or failure of the various examples cited are explored. Finally, some comments on future automation need are discussed.
Computer-assisted photogrammetric mapping systems for geologic studies-A progress report
Pillmore, C.L.; Dueholm, K.S.; Jepsen, H.S.; Schuch, C.H.
1981-01-01
Photogrammetry has played an important role in geologic mapping for many years; however, only recently have attempts been made to automate mapping functions for geology. Computer-assisted photogrammetric mapping systems for geologic studies have been developed and are currently in use in offices of the Geological Survey of Greenland at Copenhagen, Denmark, and the U.S. Geological Survey at Denver, Colorado. Though differing somewhat, the systems are similar in that they integrate Kern PG-2 photogrammetric plotting instruments and small desk-top computers that are programmed to perform special geologic functions and operate flat-bed plotters by means of specially designed hardware and software. A z-drive capability, in which stepping motors control the z-motions of the PG-2 plotters, is an integral part of both systems. This feature enables the computer to automatically position the floating mark on computer-calculated, previously defined geologic planes, such as contacts or the base of coal beds, throughout the stereoscopic model in order to improve the mapping capabilities of the instrument and to aid in correlation and tracing of geologic units. The common goal is to enhance the capabilities of the PG-2 plotter and provide a means by which geologists can make conventional geologic maps more efficiently and explore ways to apply computer technology to geologic studies. © 1981.
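The z-drive described above positions the floating mark on a previously defined geologic plane. Geometrically, once a planar contact z = ax + by + c has been fitted to three or more field points, the computer can evaluate the mark's elevation at any (x, y) in the stereo model. A minimal sketch of that geometry (not the PG-2 control software; the coordinates are hypothetical):

```python
import numpy as np

def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to (x, y, z) points."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coeffs  # (a, b, c)

def plane_z(coeffs, x, y):
    """Elevation of the fitted plane (e.g. base of a coal bed) at (x, y)."""
    a, b, c = coeffs
    return a * x + b * y + c

# Hypothetical contact elevations measured at three points in the model.
coeffs = fit_plane([(0.0, 0.0, 100.0),
                    (100.0, 0.0, 110.0),
                    (0.0, 100.0, 105.0)])
z = plane_z(coeffs, 50.0, 50.0)  # 107.5: where the floating mark is driven
```

Driving the stepping motors to this computed z at each (x, y) is what lets the operator trace a contact even where it is poorly exposed.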
NASA Technical Reports Server (NTRS)
Pokras, V. M.; Yevdokimov, V. P.; Maslov, V. D.
1978-01-01
The structure and potential of the information reference system OZhUR, designed for the automated data processing systems of scientific space vehicles (SV), is considered. The system OZhUR ensures control of the extraction phase of processing with respect to a concrete SV and the exchange of data between phases. The practical application of OZhUR is exemplified in the construction of a data processing system for satellites of the Cosmos series. As a result of automating the operations of exchange and control, the volume of manual data preparation is significantly reduced, and there is no longer any need for individual logs that fix the status of data processing. The system OZhUR is included in the automated data processing system Nauka, which is implemented in the language PL-1 on a BOS OS electronic computer.
ERIC Educational Resources Information Center
Wolf, Walter A., Ed.
1978-01-01
Presents four simple laboratory procedures for: preparation of organometallic compounds, a realistic qualitative organic analysis project, a computer program to plot potentiometric titration curves, and preparation of stereoscopic transparencies. (SL)
Laboratory automation: total and subtotal.
Hawker, Charles D
2007-12-01
Worldwide, perhaps 2000 or more clinical laboratories have implemented some form of laboratory automation, either a modular automation system, such as for front-end processing, or a total laboratory automation system. This article provides descriptions and examples of these various types of automation. It also presents an outline of how a clinical laboratory that is contemplating automation should approach its decision and the steps it should follow to ensure a successful implementation. Finally, the role of standards in automation is reviewed.
Ice and debris in the fretted terrain, Mars
Lucchitta, Baerbel K.
1984-01-01
Viking moderate- and high-resolution images along the northern highland margin were studied monoscopically and stereoscopically to contribute to an understanding of the development of fretted terrain. Results support the hypothesis that the fretting process involved flow facilitated by interstitial ice. The process apparently continued for a long period of time, and debris-apron formation shaped the fretted terrain in the past as well as the present. Interstitial ice in debris aprons is most likely derived from ground ice obtained by sapping or scarp collapse. Debris aprons could have been removed by sublimation if they consisted mostly of ice, or by deflation if they consisted mostly of debris. To remove the debris, wind erosion was either very intense early in martian history, or was intermittent, perhaps owing to climatic cycles.
Precipitate resolution in an electron-irradiated Ni-Si alloy
NASA Astrophysics Data System (ADS)
Watanabe, H.; Muroga, T.; Yoshida, N.; Kitajima, K.
1988-09-01
Precipitate resolution processes in a Ni-12.6 at% Si alloy under electron irradiation have been observed by means of HVEM. Above 400°C, growth and resolution of Ni3Si precipitates were observed simultaneously. Detailed stereoscopic observation showed that precipitates close to the free surfaces grew, while those in the middle of the specimen dissolved. The critical dose at which the precipitates start to shrink increased with depth. This depth dependence of precipitate behavior under irradiation is closely related to the formation of surface precipitates and the growth of the solute-depleted zone beneath them. The temperature and dose dependence of the resolution rate showed that precipitates in the solute-depleted zone dissolved by an interface-controlled process of radiation-enhanced diffusion.
Automated solar cell assembly team process research
NASA Astrophysics Data System (ADS)
Nowlan, M. J.; Hogan, S. J.; Darkazalli, G.; Breen, W. F.; Murach, J. M.; Sutherland, S. F.; Patterson, J. S.
1994-06-01
This report describes work done under the Photovoltaic Manufacturing Technology (PVMaT) project, Phase 3A, which addresses problems that are generic to the photovoltaic (PV) industry. Spire's objective during Phase 3A was to use its light soldering technology and experience to design and fabricate solar cell tabbing and interconnecting equipment to develop new, high-yield, high-throughput, fully automated processes for tabbing and interconnecting thin cells. Areas that were addressed include processing rates, process control, yield, throughput, material utilization efficiency, and increased use of automation. Spire teamed with Solec International, a PV module manufacturer, and the University of Massachusetts at Lowell's Center for Productivity Enhancement (CPE), automation specialists, who are lower-tier subcontractors. A number of other PV manufacturers, including Siemens Solar, Mobil Solar, Solar Web, and Texas Instruments, agreed to evaluate the processes developed under this program.
Industrial applications of automated X-ray inspection
NASA Astrophysics Data System (ADS)
Shashishekhar, N.
2015-03-01
Many industries require that 100% of manufactured parts be X-ray inspected. Factors such as high production rates, focus on inspection quality, operator fatigue and inspection cost reduction translate to an increasing need for automating the inspection process. Automated X-ray inspection involves the use of image processing algorithms and computer software for analysis and interpretation of X-ray images. This paper presents industrial applications and illustrative case studies of automated X-ray inspection in areas such as automotive castings, fuel plates, air-bag inflators and tires. It is usually necessary to employ application-specific automated inspection strategies and techniques, since each application has unique characteristics and interpretation requirements.
NASA Technical Reports Server (NTRS)
Jackson, L. Neal; Crenshaw, John, Sr.; Hambright, R. N.; Nedungadi, A.; Mcfayden, G. M.; Tsuchida, M. S.
1989-01-01
A significant emphasis upon automation within the Space Biology Initiative hardware appears justified in order to conserve crew labor and crew training effort. Two generic forms of automation were identified: automation of data and information handling and decision making, and automation of material handling, transfer, and processing. The use of automatic data acquisition, expert systems, robots, and machine vision will increase the volume of experiments and the quality of results. The automation described may also influence efforts to miniaturize and modularize the large array of SBI hardware identified to date. The cost and benefit model developed appears to be a useful guideline for SBI equipment specifiers and designers. Additional refinements would enhance the validity of the model. Two NASA automation pilot programs, 'The Principal Investigator in a Box' and 'Rack Mounted Robots', were investigated and found to be quite appropriate for adaptation to the SBI program. There are other in-house NASA efforts that provide technology that may be appropriate for the SBI program. Important data are believed to exist in advanced medical labs throughout the U.S., Japan, and Europe. The information and data processing in medical analysis equipment is highly automated, and future trends reveal continued progress in this area. However, automation of material handling and processing has progressed in a limited manner because medical labs are not affected by the power and space constraints that Space Station medical equipment faces. Therefore, NASA's major emphasis in automation will require a lead effort in the automation of material handling to achieve optimal crew utilization.
Ball, Oliver; Robinson, Sarah; Bure, Kim; Brindley, David A; Mccall, David
2018-04-01
Phacilitate held a Special Interest Group workshop event in Edinburgh, UK, in May 2017. The event brought together leading stakeholders in the cell therapy bioprocessing field to identify present and future challenges and propose potential solutions to automation in cell therapy bioprocessing. Here, we review and summarize discussions from the event. Deep biological understanding of a product, its mechanism of action and indication pathogenesis underpin many factors relating to bioprocessing and automation. To fully exploit the opportunities of bioprocess automation, therapeutics developers must closely consider whether an automation strategy is applicable, how to design an 'automatable' bioprocess and how to implement process modifications with minimal disruption. Major decisions around bioprocess automation strategy should involve all relevant stakeholders; communication between technical and business strategy decision-makers is of particular importance. Developers should leverage automation to implement in-process testing, in turn applicable to process optimization, quality assurance (QA)/quality control (QC), batch failure control, adaptive manufacturing and regulatory demands, but a lack of precedent and technical opportunities can complicate such efforts. Sparse standardization across product characterization, hardware components and software platforms is perceived to complicate efforts to implement automation. The use of advanced algorithmic approaches such as machine learning may have application to bioprocess and supply chain optimization. Automation can substantially de-risk the wider supply chain, including tracking and traceability, cryopreservation and thawing and logistics. The regulatory implications of automation are currently unclear because few hardware options exist and novel solutions require case-by-case validation, but automation can present attractive regulatory incentives. Copyright © 2018 International Society for Cellular Therapy.
Published by Elsevier Inc. All rights reserved.
FRAME (Force Review Automation Environment): MATLAB-based AFM data processor.
Partola, Kostyantyn R; Lykotrafitis, George
2016-05-03
Data processing of force-displacement curves generated by atomic force microscopes (AFMs) for elastic moduli and unbinding-event measurements is very time consuming and susceptible to user error or bias. There is an evident need for consistent, dependable, and easy-to-use AFM data processing software. We have developed an open-source software application, the Force Review Automation Environment (FRAME), that provides users with an intuitive graphical user interface, automated data processing, and tools for expediting manual processing. We did not observe a significant difference between manually and automatically processed results from the same data sets. Copyright © 2016 Elsevier Ltd. All rights reserved.
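FRAME itself is MATLAB-based; as a generic illustration (not FRAME's actual API or algorithm), the sketch below shows the kind of per-curve automation such tools perform: fitting the Hertz contact model F = (4/3)·E_eff·√R·δ^(3/2) to the approach segment of a force-indentation curve to estimate an effective elastic modulus. The function name and workflow are hypothetical.

```python
import numpy as np

def fit_hertz_modulus(delta, force, tip_radius):
    """Estimate the effective elastic modulus E_eff (Pa) from an AFM
    force-indentation curve using the Hertz model for a spherical tip:

        F = (4/3) * E_eff * sqrt(R) * delta**1.5

    Linearizing with x = delta**1.5 gives F = k * x, a one-parameter
    least-squares problem solved in closed form, then k is converted
    back to E_eff = (3/4) * k / sqrt(R).
    """
    x = np.asarray(delta, dtype=float) ** 1.5
    f = np.asarray(force, dtype=float)
    k = (x @ f) / (x @ x)          # closed-form slope, no intercept
    return 0.75 * k / np.sqrt(tip_radius)
```

On a noiseless synthetic curve this recovers the modulus exactly; real curves need baseline subtraction and contact-point detection first, which is precisely the tedious per-curve work that automation removes.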
Accelerated design of bioconversion processes using automated microscale processing techniques.
Lye, Gary J; Ayazi-Shamlou, Parviz; Baganz, Frank; Dalby, Paul A; Woodley, John M
2003-01-01
Microscale processing techniques are rapidly emerging as a means to increase the speed of bioprocess design and reduce material requirements. Automation of these techniques can reduce labour intensity and enable a wider range of process variables to be examined. This article examines recent research on various individual microscale unit operations including microbial fermentation, bioconversion and product recovery techniques. It also explores the potential of automated whole process sequences operated in microwell formats. The power of the whole process approach is illustrated by reference to a particular bioconversion, namely the Baeyer-Villiger oxidation of bicyclo[3.2.0]hept-2-en-6-one for the production of optically pure lactones.
Automating Acquisitions: The Planning Process.
ERIC Educational Resources Information Center
Bryant, Bonita
1984-01-01
Account of process followed at large academic library in preparing for automation of acquisition and fund accounting functions highlights planning criteria, local goals, planning process elements (selecting participants, assigning tasks, devising timetable, providing foundations, evaluating systems, determining costs, formulating recommendations).…
EOS Terra: EOS DAM Automation Constellation MOWG
NASA Technical Reports Server (NTRS)
Mantziaras, Dimitrios C.
2017-01-01
Brief summary of the decision factors considered and process improvement steps made, to evolve the ESMO debris avoidance maneuver process to a more automated process. Presentation is in response to an action item/question received at a prior MOWG meeting.
Neural correlates of monocular and binocular depth cues based on natural images: a LORETA analysis.
Fischmeister, Florian Ph S; Bauer, Herbert
2006-10-01
Functional imaging studies investigating perception of depth rely solely on one type of depth cue based on non-natural stimulus material. To overcome these limitations and to provide a more realistic and complete set of depth cues natural stereoscopic images were used in this study. Using slow cortical potentials and source localization we aimed to identify the neural correlates of monocular and binocular depth cues. This study confirms and extends functional imaging studies, showing that natural images provide a good, reliable, and more realistic alternative to artificial stimuli, and demonstrates the possibility to separate the processing of different depth cues.
Stereoscopic construction and practice of optoelectronic technology textbook
NASA Astrophysics Data System (ADS)
Zhou, Zigang; Zhang, Jinlong; Wang, Huili; Yang, Yongjia; Han, Yanling
2017-08-01
It is a professional degree-course textbook for the National-class specialty, Optoelectronic Information Science and Engineering, and it is also an engineering-practice textbook for the cultivation of excellent photoelectric engineers. The book seeks to comprehensively introduce the theoretical and applied basis of optoelectronic technology. Closely linked to the current frontier of the optoelectronic industry, it is made up of the following core contents: the laser source, and the transmission, modulation, detection, imaging, and display of light. At the same time, it also embodies the features of the laser source, waveguide transmission, electronic means, and optical processing methods.
NASA Astrophysics Data System (ADS)
Baillard, C.; Dissard, O.; Jamet, O.; Maître, H.
Above-ground analysis is a key point in the reconstruction of urban scenes, but it is a difficult task because of the diversity of the objects involved. We propose a new method for above-ground extraction from an aerial stereo pair which does not require any assumption about object shape or nature. A Digital Surface Model is first produced by a stereoscopic matching stage that preserves discontinuities, and then processed by a region-based Markovian classification algorithm. The extracted above-ground areas are finally characterized as man-made or natural according to the grey-level information. The quality of the results is assessed and discussed.
The Real And Its Holographic Double
NASA Astrophysics Data System (ADS)
Rabinovitch, Gerard
1980-06-01
The attempt to produce animated, three-dimensional image representations has haunted the Occident since the 4th century; the camera ottica and the zograscope answered the camera obscura as its prolongation and its amplification. With the invention of photography (Niépce, Daguerre) the same phenomenon occurred: the appearance of stereophotography (principles stated by Wheatstone and applied by Brewster, then Duboscq). The same phenomenon happened again with cinematography: stereocinematography (screening, embossing, anaglyphic and polaroid-glasses processes, etc.) haunts researchers. Finally, holography, although depending on a space produced by a major technological leap, follows this quest for the reproduction of the effect of stereoscopic vision.
Bigdata Oriented Multimedia Mobile Health Applications.
Lv, Zhihan; Chirivella, Javier; Gagliardo, Pablo
2016-05-01
In this paper, two mHealth applications are introduced, which can be employed as the terminals of a big-data-based health service to collect information for electronic medical records (EMRs). The first is a hybrid system for improving the user experience in the hyperbaric oxygen chamber through 3D stereoscopic virtual-reality glasses and immersive perception; several HMDs have been tested and compared. The second application is a voice-interactive serious game, a likely solution for providing an assistive rehabilitation tool for therapists. Recordings of patients' voices can be analysed to evaluate long-term rehabilitation results and, further, to predict the rehabilitation process.
A system-level approach to automation research
NASA Technical Reports Server (NTRS)
Harrison, F. W.; Orlando, N. E.
1984-01-01
Automation is the application of self-regulating mechanical and electronic devices to processes that can be accomplished with the human organs of perception, decision, and actuation. The successful application of automation to a system process should reduce man/system interaction and the perceived complexity of the system, or should increase affordability, productivity, quality control, and safety. The expense, time constraints, and risk factors associated with extravehicular activities have led the Automation Technology Branch (ATB), as part of the NASA Automation Research and Technology Program, to investigate the use of robots and teleoperators as automation aids in the context of space operations. The ATB program addresses three major areas: (1) basic research in autonomous operations, (2) human factors research on man-machine interfaces with remote systems, and (3) the integration and analysis of automated systems. This paper reviews the current ATB research in the area of robotics and teleoperators.
First Annual Workshop on Space Operations Automation and Robotics (SOAR 87)
NASA Technical Reports Server (NTRS)
Griffin, Sandy (Editor)
1987-01-01
Several topics relative to automation and robotics technology are discussed. Automation of checkout, ground support, and logistics; automated software development; man-machine interfaces; neural networks; systems engineering and distributed/parallel processing architectures; and artificial intelligence/expert systems are among the topics covered.
Application of automation and information systems to forensic genetic specimen processing.
Leclair, Benoît; Scholl, Tom
2005-03-01
During the last 10 years, the introduction of PCR-based DNA typing technologies in forensic applications has been highly successful. This technology has become pervasive throughout forensic laboratories and it continues to grow in prevalence. For many criminal cases, it provides the most probative evidence. Criminal genotype data banking and victim identification initiatives that follow mass-fatality incidents have benefited the most from the introduction of automation for sample processing and data analysis. Attributes of offender specimens including large numbers, high quality and identical collection and processing are ideal for the application of laboratory automation. The magnitude of kinship analysis required by mass-fatality incidents necessitates the application of computing solutions to automate the task. More recently, the development activities of many forensic laboratories are focused on leveraging experience from these two applications to casework sample processing. The trend toward increased prevalence of forensic genetic analysis will continue to drive additional innovations in high-throughput laboratory automation and information systems.
Clark, Anna D; Guilfoyle, Mathew R; Candy, Nicholas G; Budohoski, Karol P; Hofmann, Riikka; Barone, Damiano G; Santarius, Thomas; Kirollos, Ramez W; Trivedi, Rikin A
2017-12-01
Stereoscopic three-dimensional (3D) imaging is increasingly used in the teaching of neuroanatomy and although this is mainly aimed at undergraduate medical students, it has enormous potential for enhancing the training of neurosurgeons. This study aims to assess whether 3D lecturing is an effective method of enhancing the knowledge and confidence of neurosurgeons and how it compares with traditional two-dimensional (2D) lecturing and cadaveric training. Three separate teaching sessions for neurosurgical trainees were organized: 1) 2D course (2D lecture + cadaveric session), 2) 3D lecture alone, and 3) 3D course (3D lecture + cadaveric session). Before and after each session, delegates were asked to complete questionnaires containing questions relating to surgical experience, anatomic knowledge, confidence in performing procedures, and perceived value of 3D, 2D, and cadaveric teaching. Although both 2D and 3D lectures and courses were similarly effective at improving self-rated knowledge and understanding, the 3D lecture and course were associated with significantly greater gains in confidence reported by the delegates for performing a subfrontal approach and sylvian fissure dissection. Stereoscopic 3D lectures provide neurosurgical trainees with greater confidence for performing standard operative approaches and enhances the benefit of subsequent practical experience in developing technical skills in cadaveric dissection. Copyright © 2017. Published by Elsevier Inc.
A collaborative virtual reality environment for neurosurgical planning and training.
Kockro, Ralf A; Stadie, Axel; Schwandt, Eike; Reisch, Robert; Charalampaki, Cleopatra; Ng, Ivan; Yeo, Tseng Tsai; Hwang, Peter; Serra, Luis; Perneczky, Axel
2007-11-01
We have developed a highly interactive virtual environment that enables collaborative examination of stereoscopic three-dimensional (3-D) medical imaging data for planning, discussing, or teaching neurosurgical approaches and strategies. The system consists of an interactive console with which the user manipulates 3-D data using hand-held and tracked devices within a 3-D virtual workspace and a stereoscopic projection system. The projection system displays the 3-D data on a large screen while the user is working with it. This setup allows users to interact intuitively with complex 3-D data while sharing this information with a larger audience. We have been using this system on a routine clinical basis and during neurosurgical training courses to collaboratively plan and discuss neurosurgical procedures with 3-D reconstructions of patient-specific magnetic resonance and computed tomographic imaging data or with a virtual model of the temporal bone. Working collaboratively with the 3-D information of a large, interactive, stereoscopic projection provides an unambiguous way to analyze and understand the anatomic spatial relationships of different surgical corridors. In our experience, the system creates a unique forum for open and precise discussion of neurosurgical approaches. We believe the system provides a highly effective way to work with 3-D data in a group, and it significantly enhances teaching of neurosurgical anatomy and operative strategies.
Matching methods evaluation framework for stereoscopic breast x-ray images.
Rousson, Johanna; Naudin, Mathieu; Marchessoux, Cédric
2016-01-01
Three-dimensional (3-D) imaging has been intensively studied in the past few decades. Depth information is an important added value of 3-D systems over two-dimensional systems. Special focus was devoted to the development of stereo matching methods for the generation of disparity maps (i.e., depth information within a 3-D scene). Dedicated frameworks were designed to evaluate and rank the performance of different stereo matching methods, but never considering x-ray medical images. Yet, 3-D x-ray acquisition systems and 3-D medical displays have already been introduced into the diagnostic market. To access the depth information within x-ray stereoscopic images, computing accurate disparity maps is essential. We aimed at developing a framework dedicated to x-ray stereoscopic breast images used to evaluate and rank several stereo matching methods. A multiresolution pyramid optimization approach was integrated into the framework to increase the accuracy and the efficiency of the stereo matching techniques. Finally, a metric was designed to score the results of the stereo matching compared with the ground truth. Eight methods were evaluated and four of them [locally scaled sum of absolute differences (LSAD), zero-mean sum of absolute differences, zero-mean sum of squared differences, and locally scaled mean sum of squared differences] appeared to perform equally well, with an average error score of 0.04 (0 is a perfect match). LSAD was selected for generating the disparity maps.
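As an illustrative sketch (not the paper's implementation), locally scaled SAD block matching can be written compactly: each left-image window is compared against disparity-shifted right-image windows after scaling the right window by the ratio of local means, which makes the cost robust to local luminance or gain differences between the two views. The function name and window parameters here are hypothetical.

```python
import numpy as np

def lsad_disparity(left, right, max_disp, win=3):
    """Dense disparity map via block matching with Locally Scaled SAD.

    Cost for a candidate disparity d at pixel (y, x):
        sum |L - (mean(L) / mean(R)) * R|
    over a win x win window, where R is the right-image window shifted
    left by d. The local-mean scaling compensates for brightness/gain
    differences between the two cameras.
    """
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch_l = left[y - half:y + half + 1,
                           x - half:x + half + 1].astype(float)
            best_cost, best_d = np.inf, 0
            # keep the shifted window inside the right image
            for d in range(min(max_disp + 1, x - half + 1)):
                patch_r = right[y - half:y + half + 1,
                                x - d - half:x - d + half + 1].astype(float)
                scale = patch_l.mean() / max(patch_r.mean(), 1e-9)
                cost = np.abs(patch_l - scale * patch_r).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

Because of the local-mean scaling, the correct disparity is recovered even when one view has a uniform gain applied, which plain SAD would mismatch.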
Deep-level stereoscopic multiple traps of acoustic vortices
NASA Astrophysics Data System (ADS)
Li, Yuzhi; Guo, Gepu; Ma, Qingyu; Tu, Juan; Zhang, Dong
2017-04-01
Based on the radiation pattern of a planar piston transducer, the mechanisms underlying the generation of axially controllable deep-level stereoscopic multiple traps of acoustic vortices (AV) using sparse directional sources were proposed with explicit formulae. Numerical simulations of the axial and cross-sectional distributions of acoustic pressure and phase were conducted for various ka (product of the wave number and the radius of the transducer) values at a frequency of 1 MHz. It was demonstrated that, for larger ka, besides the main AV (M-AV) generated by the main lobes of the sources, cone-shaped side AVs (S-AV) produced by the side lobes were closer to the source plane at a relatively lower pressure. Corresponding to the radiation angles of pressure nulls between the main lobe and the side lobes of the sources, vortex valleys with nearly zero pressure could be generated on the central axis to form multiple traps, based on Gor'kov potential theory. The number and locations of vortex valleys could be controlled accurately by adjusting ka. With the established eight-source AV generation system, the existence of the axially controllable multiple traps was verified by the measured M-AV and S-AVs as well as the corresponding vortex valleys. These favorable results demonstrated the feasibility of deep-level stereoscopic control of AVs and suggested the potential application of multiple traps for particle manipulation in biomedical engineering.
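The dependence of the pressure nulls on ka described above follows from the standard far-field directivity of a baffled circular piston, D(θ) = 2J₁(ka·sinθ)/(ka·sinθ), which vanishes where ka·sinθ equals a zero of the Bessel function J₁. A minimal sketch of that textbook relation (generic acoustics, not the authors' code):

```python
import numpy as np

# First zeros of the Bessel function J1 (standard tabulated values).
J1_ZEROS = np.array([3.8317, 7.0156, 10.1735])

def piston_null_angles(ka):
    """Far-field pressure-null angles (degrees) of a baffled circular
    piston of radius a at wave number k, with ka = k * a.

    The directivity D(theta) = 2*J1(ka*sin(theta)) / (ka*sin(theta))
    vanishes where ka*sin(theta) equals a zero of J1. Only zeros not
    exceeding ka map to real angles, so larger ka yields more nulls.
    """
    reachable = J1_ZEROS[J1_ZEROS <= ka]
    return np.degrees(np.arcsin(reachable / ka))
```

Increasing ka (a larger transducer or a higher frequency) pulls additional nulls into the visible region and moves existing ones toward the axis, consistent with the abstract's observation that the number and locations of the vortex valleys can be controlled by adjusting ka.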
Clinically Normal Stereopsis Does Not Ensure a Performance Benefit from Stereoscopic 3D Depth Cues
NASA Astrophysics Data System (ADS)
McIntire, John P.; Havig, Paul R.; Harrington, Lawrence K.; Wright, Steve T.; Watamaniuk, Scott N. J.; Heft, Eric L.
2014-09-01
To investigate the effect of manipulating disparity on task performance and viewing comfort, twelve participants were tested on a virtual object precision placement task while viewing a stereoscopic 3D (S3D) display. All participants had normal or corrected-to-normal visual acuity, passed the Titmus stereovision clinical test, and demonstrated normal binocular function, including phorias and binocular fusion ranges. Each participant completed six experimental sessions with different maximum binocular disparity limits. The results for ten of the twelve participants were generally as expected, demonstrating a large performance advantage when S3D cues were provided. The sessions with the larger disparity limits typically resulted in the best performance, and the sessions with no S3D cues the poorest performance. However, one participant demonstrated poorer performance in sessions with smaller disparity limits but improved performance in sessions with the larger disparity limits. Another participant's performance declined whenever any S3D cues were provided. Follow-up testing suggested that the phenomenon of pseudo-stereoanomaly may account for one viewer's atypical performance, while the phenomenon of stereoanomaly might account for the other. Overall, the results demonstrate that a subset of viewers with clinically normal binocular and stereoscopic vision may have difficulty performing depth-related tasks on S3D displays. The possibility of the vergence-accommodation conflict contributing to individual performance differences is also discussed.
Automation bias: decision making and performance in high-tech cockpits.
Mosier, K L; Skitka, L J; Heers, S; Burdick, M
1997-01-01
Automated aids and decision support tools are rapidly becoming indispensable in high-technology cockpits and are assuming increasing control of "cognitive" flight tasks, such as calculating fuel-efficient routes, navigating, or detecting and diagnosing system malfunctions and abnormalities. This study was designed to investigate automation bias, a recently documented factor in the use of automated aids and decision support systems. The term refers to omission and commission errors resulting from the use of automated cues as a heuristic replacement for vigilant information seeking and processing. Glass-cockpit pilots flew flight scenarios involving automation events or opportunities for automation-related omission and commission errors. Although experimentally manipulated accountability demands did not significantly impact performance, post hoc analyses revealed that those pilots who reported an internalized perception of "accountability" for their performance and strategies of interaction with the automation were significantly more likely to double-check automated functioning against other cues and less likely to commit errors than those who did not share this perception. Pilots were also likely to erroneously "remember" the presence of expected cues when describing their decision-making processes.
Examining single- and multiple-process theories of trust in automation.
Rice, Stephen
2009-07-01
The author examined the effects of human responses to automation alerts and nonalerts. Previous research has shown that automation false alarms and misses have differential effects on human trust (i.e., automation false alarms tend to affect operator compliance, whereas automation misses tend to affect operator reliance). Participants performed a simulated combat task, whereby they examined aerial photographs for the presence of enemy targets. A diagnostic aid provided a recommendation during each trial. The author manipulated the reliability and response bias of the aid to provide appropriate data for state-trace analyses. The analyses provided strong evidence that only a multiple-process theory of operator trust can explain the effects of automation errors on human dependence behaviors. The author discusses the theoretical and practical implications of this finding.
Robotics for Nuclear Material Handling at LANL: Capabilities and Needs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harden, Troy A; Lloyd, Jane A; Turner, Cameron J
Nuclear material processing operations present numerous challenges for effective automation. Confined spaces, hazardous materials and processes, particulate contamination, radiation sources, and corrosive chemical operations are but a few of the significant hazards. However, automated systems represent a significant safety advance when deployed in place of manual tasks performed by human workers. The replacement of manual operations with automated systems has been desirable for nearly 40 years, yet only recently are automated systems becoming increasingly common for nuclear materials handling applications. This paper reviews several automation systems which are deployed or about to be deployed at Los Alamos National Laboratory for nuclear material handling operations. Highlighted are the current social and technological challenges faced in deploying automated systems into hazardous material handling environments and the opportunities for future innovations.
Automation of Cassini Support Imaging Uplink Command Development
NASA Technical Reports Server (NTRS)
Ly-Hollins, Lisa; Breneman, Herbert H.; Brooks, Robert
2010-01-01
"Support imaging" is imagery requested by other Cassini science teams to aid in the interpretation of their data. The generation of the spacecraft command sequences for these images is performed by the Cassini Instrument Operations Team. The process initially established for doing this was very labor-intensive, tedious and prone to human error. Team management recognized this process as one that could easily benefit from automation. Team members were tasked to document the existing manual process, develop a plan and strategy to automate the process, implement the plan and strategy, test and validate the new automated process, and deliver the new software tools and documentation to Flight Operations for use during the Cassini extended mission. In addition to the goals of higher efficiency and lower risk in the processing of support imaging requests, an effort was made to maximize adaptability of the process to accommodate uplink procedure changes and the potential addition of new capabilities outside the scope of the initial effort.
9 CFR 381.307 - Record review and maintenance.
Code of Federal Regulations, 2014 CFR
2014-01-01
... be identified by production date, container code, processing vessel number or other designation and... review. (b) Automated process monitoring and recordkeeping. Automated process monitoring and... applicable requirements of § 381.306. (c) Container closure records. Written records of all container closure...
9 CFR 381.307 - Record review and maintenance.
Code of Federal Regulations, 2013 CFR
2013-01-01
... be identified by production date, container code, processing vessel number or other designation and... review. (b) Automated process monitoring and recordkeeping. Automated process monitoring and... applicable requirements of § 381.306. (c) Container closure records. Written records of all container closure...
9 CFR 381.307 - Record review and maintenance.
Code of Federal Regulations, 2010 CFR
2010-01-01
... be identified by production date, container code, processing vessel number or other designation and... review. (b) Automated process monitoring and recordkeeping. Automated process monitoring and... applicable requirements of § 381.306. (c) Container closure records. Written records of all container closure...
9 CFR 381.307 - Record review and maintenance.
Code of Federal Regulations, 2011 CFR
2011-01-01
... be identified by production date, container code, processing vessel number or other designation and... review. (b) Automated process monitoring and recordkeeping. Automated process monitoring and... applicable requirements of § 381.306. (c) Container closure records. Written records of all container closure...
9 CFR 381.307 - Record review and maintenance.
Code of Federal Regulations, 2012 CFR
2012-01-01
... be identified by production date, container code, processing vessel number or other designation and... review. (b) Automated process monitoring and recordkeeping. Automated process monitoring and... applicable requirements of § 381.306. (c) Container closure records. Written records of all container closure...
Jones, Gillian; Matthews, Roger; Cunningham, Richard; Jenks, Peter
2011-07-01
The sensitivity of automated culture of Staphylococcus aureus from flocked swabs versus that of manual culture of fiber swabs was prospectively compared using nasal swabs from 867 patients. Automated culture from flocked swabs significantly increased the detection rate, by 13.1% for direct culture and 10.2% for enrichment culture.
Perspectives on bioanalytical mass spectrometry and automation in drug discovery.
Janiszewski, John S; Liston, Theodore E; Cole, Mark J
2008-11-01
The use of high speed synthesis technologies has resulted in a steady increase in the number of new chemical entities active in the drug discovery research stream. Large organizations can have thousands of chemical entities in various stages of testing and evaluation across numerous projects on a weekly basis. Qualitative and quantitative measurements made using LC/MS are integrated throughout this process from early stage lead generation through candidate nomination. Nearly all analytical processes and procedures in modern research organizations are automated to some degree. This includes both hardware and software automation. In this review we discuss bioanalytical mass spectrometry and automation as components of the analytical chemistry infrastructure in pharma. Analytical chemists are presented as members of distinct groups with similar skillsets that build automated systems, manage test compounds, assays and reagents, and deliver data to project teams. The ADME-screening process in drug discovery is used as a model to highlight the relationships between analytical tasks in drug discovery. Emerging software and process automation tools are described that can potentially address gaps and link analytical chemistry related tasks. The role of analytical chemists and groups in modern 'industrialized' drug discovery is also discussed.
Exponential error reduction in pretransfusion testing with automation.
South, Susan F; Casina, Tony S; Li, Lily
2012-08-01
Protecting the safety of blood transfusion is the top priority of transfusion service laboratories. Pretransfusion testing is a critical element of the entire transfusion process to enhance vein-to-vein safety. Human error associated with manual pretransfusion testing is a cause of transfusion-related mortality and morbidity and most human errors can be eliminated by automated systems. However, the uptake of automation in transfusion services has been slow and many transfusion service laboratories around the world still use manual blood group and antibody screen (G&S) methods. The goal of this study was to compare error potentials of commonly used manual (e.g., tiles and tubes) versus automated (e.g., ID-GelStation and AutoVue Innova) G&S methods. Routine G&S processes in seven transfusion service laboratories (four with manual and three with automated G&S methods) were analyzed using failure modes and effects analysis to evaluate the corresponding error potentials of each method. Manual methods contained a higher number of process steps ranging from 22 to 39, while automated G&S methods only contained six to eight steps. Corresponding to the number of the process steps that required human interactions, the risk priority number (RPN) of the manual methods ranged from 5304 to 10,976. In contrast, the RPN of the automated methods was between 129 and 436 and also demonstrated a 90% to 98% reduction of the defect opportunities in routine G&S testing. This study provided quantitative evidence on how automation could transform pretransfusion testing processes by dramatically reducing error potentials and thus would improve the safety of blood transfusion. © 2012 American Association of Blood Banks.
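The risk priority numbers quoted above come from failure modes and effects analysis (FMEA), in which each process step's failure mode is scored and the scores are multiplied and summed. A minimal sketch of that arithmetic follows; the step names and 1-10 ratings are hypothetical illustrations, not the study's actual FMEA data.

```python
def rpn(severity, occurrence, detection):
    """Classic FMEA risk priority number for one failure mode.

    Each factor is rated on a 1-10 scale; the RPN is their product,
    so a single mode can score at most 1000.
    """
    return severity * occurrence * detection

# Hypothetical failure modes for a manual group-and-screen step:
# (step, severity, occurrence, detection)
manual_steps = [
    ("label sample tube", 7, 5, 6),
    ("pipette reagent", 8, 4, 7),
    ("read agglutination", 9, 3, 8),
]

# Total RPN for the process is the sum over its failure modes.
total_rpn = sum(rpn(s, o, d) for _, s, o, d in manual_steps)
```

Because the total grows with the number of human-touched steps, collapsing 22-39 manual steps into 6-8 automated ones drives the kind of order-of-magnitude RPN reduction the study reports.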
Effects of automation of information-processing functions on teamwork.
Wright, Melanie C; Kaber, David B
2005-01-01
We investigated the effects of automation as applied to different stages of information processing on team performance in a complex decision-making task. Forty teams of 2 individuals performed a simulated Theater Defense Task. Four automation conditions were simulated with computer assistance applied to realistic combinations of information acquisition, information analysis, and decision selection functions across two levels of task difficulty. Multiple measures of team effectiveness and team coordination were used. Results indicated different forms of automation have different effects on teamwork. Compared with a baseline condition, an increase in automation of information acquisition led to an increase in the ratio of information transferred to information requested; an increase in automation of information analysis resulted in higher team coordination ratings; and automation of decision selection led to better team effectiveness under low levels of task difficulty but at the cost of higher workload. The results support the use of early and intermediate forms of automation related to acquisition and analysis of information in the design of team tasks. Decision-making automation may provide benefits in more limited contexts. Applications of this research include the design and evaluation of automation in team environments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ulsh, M.; Wheeler, D.; Protopappas, P.
The U.S. Department of Energy (DOE) is interested in supporting manufacturing research and development (R&D) for fuel cell systems in the 10-1,000 kilowatt (kW) power range relevant to stationary and distributed combined heat and power applications, with the intent to reduce manufacturing costs and increase production throughput. To assist in future decision-making, DOE requested that the National Renewable Energy Laboratory (NREL) provide a baseline understanding of the current levels of adoption of automation in manufacturing processes and flow, as well as of continuous processes. NREL identified and visited or interviewed key manufacturers, universities, and laboratories relevant to the study using a standard questionnaire. The questionnaire covered the current level of vertical integration, the importance of quality control developments for automation, the current level of automation and source of automation design, critical balance of plant issues, potential for continuous cell manufacturing, key manufacturing steps or processes that would benefit from DOE support for manufacturing R&D, the potential for cell or stack design changes to support automation, and the relationship between production volume and decisions on automation.
Workload Capacity: A Response Time-Based Measure of Automation Dependence.
Yamani, Yusuke; McCarley, Jason S
2016-05-01
An experiment used the workload capacity measure C(t) to quantify the processing efficiency of human-automation teams and identify operators' automation usage strategies in a speeded decision task. Although response accuracy rates and related measures are often used to measure the influence of an automated decision aid on human performance, aids can also influence response speed. Mean response times (RTs), however, conflate the influence of the human operator and the automated aid on team performance and may mask changes in the operator's performance strategy under aided conditions. The present study used a measure of parallel processing efficiency, or workload capacity, derived from empirical RT distributions as a novel gauge of human-automation performance and automation dependence in a speeded task. Participants performed a speeded probabilistic decision task with and without the assistance of an automated aid. RT distributions were used to calculate two variants of a workload capacity measure, COR(t) and CAND(t). Capacity measures gave evidence that a diagnosis from the automated aid speeded human participants' responses, and that participants did not moderate their own decision times in anticipation of diagnoses from the aid. Workload capacity provides a sensitive and informative measure of human-automation performance and operators' automation dependence in speeded tasks. © 2016, Human Factors and Ergonomics Society.
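The OR variant of the capacity coefficient compares the team's empirical cumulative hazard function against the sum of the single-channel hazards, where H(t) = -log S(t) and S(t) is the survivor function of the response-time distribution. A minimal sketch under that standard definition follows; the function names and sample data are illustrative, not from the study.

```python
import math

def cumulative_hazard(rts, t):
    """Empirical cumulative hazard H(t) = -log S(t), where S(t) is the
    proportion of response times exceeding t. Requires t earlier than
    the slowest RT so that S(t) > 0."""
    s = sum(1 for rt in rts if rt > t) / len(rts)
    return -math.log(s)

def c_or(rts_team, rts_human, rts_aid, t):
    """Capacity coefficient C_OR(t): team hazard divided by the sum of
    the single-channel hazards. Values above 1 indicate super-capacity;
    values below 1 indicate limited-capacity processing."""
    return cumulative_hazard(rts_team, t) / (
        cumulative_hazard(rts_human, t) + cumulative_hazard(rts_aid, t)
    )

# Illustrative RT samples (seconds); identical channels give C_OR = 0.5,
# the benchmark for unlimited-capacity parallel processing with two channels.
team = human = aid = [1.0, 2.0, 3.0, 4.0]
value = c_or(team, human, aid, 2.5)
```

In practice each H(t) is estimated over a grid of t values from the full RT distributions, so the coefficient traces how dependence on the aid evolves across fast versus slow responses.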
ARES - A New Airborne Reflective Emissive Spectrometer
2005-10-01
Information and Management System (DIMS), an automated processing environment with robot archive interface as established for the handling of satellite data...consisting of geocoded ground reflectance data. All described processing steps will be integrated in the automated processing environment DIMS to assure a
9 CFR 318.307 - Record review and maintenance.
Code of Federal Regulations, 2013 CFR
2013-01-01
... temperature/time recording devices shall be identified by production date, container code, processing vessel... made available to Program employees for review. (b) Automated process monitoring and recordkeeping. Automated process monitoring and recordkeeping systems shall be designed and operated in a manner that will...
9 CFR 318.307 - Record review and maintenance.
Code of Federal Regulations, 2012 CFR
2012-01-01
... temperature/time recording devices shall be identified by production date, container code, processing vessel... made available to Program employees for review. (b) Automated process monitoring and recordkeeping. Automated process monitoring and recordkeeping systems shall be designed and operated in a manner that will...
9 CFR 318.307 - Record review and maintenance.
Code of Federal Regulations, 2014 CFR
2014-01-01
... temperature/time recording devices shall be identified by production date, container code, processing vessel... made available to Program employees for review. (b) Automated process monitoring and recordkeeping. Automated process monitoring and recordkeeping systems shall be designed and operated in a manner that will...
9 CFR 318.307 - Record review and maintenance.
Code of Federal Regulations, 2011 CFR
2011-01-01
... temperature/time recording devices shall be identified by production date, container code, processing vessel... made available to Program employees for review. (b) Automated process monitoring and recordkeeping. Automated process monitoring and recordkeeping systems shall be designed and operated in a manner that will...
9 CFR 318.307 - Record review and maintenance.
Code of Federal Regulations, 2010 CFR
2010-01-01
... temperature/time recording devices shall be identified by production date, container code, processing vessel... made available to Program employees for review. (b) Automated process monitoring and recordkeeping. Automated process monitoring and recordkeeping systems shall be designed and operated in a manner that will...
Disparity modifications and the emotional effects of stereoscopic images
NASA Astrophysics Data System (ADS)
Kawai, Takashi; Atsuta, Daiki; Tomiyama, Yuya; Kim, Sanghyun; Morikawa, Hiroyuki; Mitsuya, Reiko; Häkkinen, Jukka
2014-03-01
This paper describes a study of disparity changes in emotional scenes of stereoscopic (3D) images, in which the effects on pleasantness and arousal were examined by adding binocular disparity to 2D images that evoke specific emotions and by applying disparity modification based on a disparity analysis of well-known 3D movies. The experimental results showed a significant difference for pleasantness only in the main effect of emotion. For arousal, however, evaluation values tended to increase in the order of the 2D condition, the 3D condition, and the 3D condition with disparity modification for happiness, surprise, and fear. This suggests that binocular disparity and its modification can affect arousal.
You're a What? Automation Technician
ERIC Educational Resources Information Center
Mullins, John
2010-01-01
Many people think of automation as laborsaving technology, but it sure keeps Jim Duffell busy. Defined simply, automation is a technique for making a device run or a process occur with minimal direct human intervention. But the functions and technologies involved in automated manufacturing are complex. Nearly all functions, from orders coming in…
Opportunities for Automation of Student Aid Processing in Postsecondary Institutions.
ERIC Educational Resources Information Center
St. John, Edward P.
1986-01-01
An overview of the options and opportunities postsecondary institutions should consider when developing plans for student aid automation is provided. The role of automation in the financial aid office, interfaces with institutional and external systems, alternative approaches to automation, and the need for an institutional strategy for automation…
Stage Evolution of Office Automation Technological Change and Organizational Learning.
ERIC Educational Resources Information Center
Sumner, Mary
1985-01-01
A study was conducted to identify stage characteristics in terms of technology, applications, the role and responsibilities of the office automation organization, and planning and control strategies; and to describe the respective roles of data processing professionals, office automation analysts, and users in office automation systems development…
Records Management Handbook; Source Data Automation Equipment Guide.
ERIC Educational Resources Information Center
National Archives and Records Service (GSA), Washington, DC. Office of Records Management.
A detailed guide to selecting appropriate source data automation equipment is presented. Source data automation equipment is used to prepare data for electronic data processing or computerized recordkeeping. The guide contains specifications, performance data, cost, and pictures of the major types of machines used in source data automation.…
The Influence of Cultural Factors on Trust in Automation
ERIC Educational Resources Information Center
Chien, Shih-Yi James
2016-01-01
Human interaction with automation is a complex process that requires both skilled operators and complex system designs to effectively enhance overall performance. Although automation has successfully managed complex systems throughout the world for over half a century, inappropriate reliance on automation can still occur, such as the recent…