Science.gov

Sample records for 3-d viewing glasses

  1. Glasses-free 3D viewing systems for medical imaging

    NASA Astrophysics Data System (ADS)

    Magalhães, Daniel S. F.; Serra, Rolando L.; Vannucci, André L.; Moreno, Alfredo B.; Li, Li M.

    2012-04-01

    In this work we present two different glasses-free 3D viewing systems for medical imaging: a stereoscopic system that employs a vertically dispersive holographic screen (VDHS) and a multi-autostereoscopic system, both used to produce 3D MRI/CT images. We describe how to obtain a VDHS in holographic plates optimized for this application, with a field of view of 7 cm for each eye and a focal length of 25 cm, and show images obtained with the system. We also describe the multi-autostereoscopic system, explaining how it can generate 3D medical imagery from viewpoints of an MRI or CT image, and show results for a 3D angioresonance image.

  2. User experience while viewing stereoscopic 3D television

    PubMed Central

    Read, Jenny C.A.; Bohr, Iwo

    2014-01-01

    3D display technologies have been linked to visual discomfort and fatigue. In a lab-based study with a between-subjects design, 433 viewers aged from 4 to 82 years watched the same movie in either 2D or stereo 3D (S3D), and subjectively reported on a range of aspects of their viewing experience. Our results suggest that a minority of viewers, around 14%, experience adverse effects due to viewing S3D, mainly headache and eyestrain. A control experiment where participants viewed 2D content through 3D glasses suggests that around 8% may report adverse effects which are not due directly to viewing S3D, but instead are due to the glasses or to negative preconceptions about S3D (the 'nocebo effect'). Women were slightly more likely than men to report adverse effects with S3D. We could not detect any link between pre-existing eye conditions or low stereoacuity and the likelihood of experiencing adverse effects with S3D. Practitioner Summary: Stereoscopic 3D (S3D) has been linked to visual discomfort and fatigue. Viewers watched the same movie in either 2D or stereo 3D (between-subjects design). Around 14% reported effects such as headache and eyestrain linked to S3D itself, while 8% reported adverse effects attributable to 3D glasses or negative expectations. PMID:24874550

  3. Methods For Electronic 3-D Moving Pictures Without Glasses

    NASA Astrophysics Data System (ADS)

    Collender, Robert B.

    1987-06-01

    This paper describes implementation approaches in image acquisition and playback for 3-D computer graphics, 3-D television and 3-D theatre movies without special glasses. Projection lamps, spatial light modulators, CRTs and dynamic scanning are all eliminated by the application of an active image array, all static components and a semi-specular screen. The resulting picture shows horizontal parallax with a wide horizontal view field (up to 360 degrees), giving a holographic appearance in full color with smooth, continuous viewing without speckle. Static component systems are compared with dynamic component systems using both linear and circular arrays. Implementations of computer graphics systems are shown that allow complex shaded color images to extend from the viewer's eyes to infinity. Large-screen systems visible to hundreds of people are feasible through the use of low f-stops and high-gain screens in projection. Screen geometries and special screen properties are shown. Viewing characteristics impose no restrictions on view position over the entire view field and provide a "look-around" feature for all the categories of computer graphics, television and movies. Standard video cassettes and optical discs can also interface with the system to generate a 3-D window viewable without glasses. A prognosis is given for technology application to 3-D pictures without glasses that replicate the daily viewing experience. Superposition of computer graphics on real-world pictures is shown to be feasible.

  4. Natural 3D content on glasses-free light-field 3D cinema

    NASA Astrophysics Data System (ADS)

    Balogh, Tibor; Nagy, Zsolt; Kovács, Péter Tamás; Adhikarla, Vamsi K.

    2013-03-01

    This paper presents a complete framework for capturing, processing and displaying free-viewpoint video on a large-scale immersive light-field display. We present a combined hardware-software solution to visualize free-viewpoint 3D video on a cinema-sized screen. The new glasses-free 3D projection technology can support a larger audience than existing autostereoscopic displays. We introduce and describe our new display system, including optical and mechanical design considerations, the capturing system and render cluster for producing the 3D content, and the various software modules driving the system. The indigenous display is the first of its kind, equipped with front-projection light-field HoloVizio technology controlling up to 63 MP. It has all the advantages of previous light-field displays and, in addition, allows a more flexible arrangement with a larger screen size, matching cinema or meeting-room geometries, yet is simpler to set up. The software system makes it possible to show 3D applications in real time, in addition to natural content captured from dense camera arrangements as well as from sparse cameras covering a wider baseline. Our software system, running on the GPU-accelerated render cluster, can also visualize pre-recorded Multi-view Video plus Depth (MVD4) videos on this glasses-free light-field cinema system, interpolating and extrapolating missing views.

  5. Effect of viewing distance on 3D fatigue caused by viewing mobile 3D content

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Lee, Dong-Su; Park, Min-Chul; Yano, Sumio

    2013-05-01

    With the advent of autostereoscopic display techniques and the increased demand for smartphones, there has been significant growth in mobile TV markets. The rapid growth in technical, economic, and social aspects has encouraged 3D TV manufacturers to apply 3D rendering technology to mobile devices so that people have more opportunities to come into contact with 3D content anytime and anywhere. Although mobile 3D technology is driving the current market growth, one important issue must be addressed for consistent development and growth of the display market: human factors linked to mobile 3D viewing should be taken into consideration before developing mobile 3D technology. Many studies have investigated whether mobile 3D viewing causes undesirable biomedical effects such as motion sickness and visual fatigue, but few have examined the main factors adversely affecting human health. Viewing distance is considered one of the main factors in establishing an optimized viewing environment from a viewer's point of view. Thus, in an effort to determine human-friendly viewing environments, this study investigates the effect of viewing distance on the human visual system during exposure to mobile 3D environments. By recording and analyzing brainwaves before and after watching mobile 3D content, we explore how viewing distance affects the viewing experience from physiological and psychological perspectives. The results obtained in this study are expected to provide viewing guidelines for viewers, help protect viewers against undesirable 3D effects, and lead to gradual progress towards human-friendly mobile 3D viewing.

  6. Spatial Visualization by Realistic 3D Views

    ERIC Educational Resources Information Center

    Yue, Jianping

    2008-01-01

    In this study, the popular Purdue Spatial Visualization Test-Visualization by Rotations (PSVT-R) in isometric drawings was recreated with CAD software that allows 3D solid modeling and rendering to provide more realistic pictorial views. Both the original and the modified PSVT-R tests were given to students and their scores on the two tests were…

  7. From 3D view to 3D print

    NASA Astrophysics Data System (ADS)

    Dima, M.; Farisato, G.; Bergomi, M.; Viotto, V.; Magrin, D.; Greggio, D.; Farinato, J.; Marafatto, L.; Ragazzoni, R.; Piazza, D.

    2014-08-01

    In the last few years 3D printing has become more and more popular and is used in many fields, from manufacturing to industrial design, architecture, medical support and aerospace. 3D printing is an evolution of bi-dimensional printing which makes it possible to obtain a solid object from a 3D model realized with 3D modelling software. The final product is obtained using an additive process, in which successive layers of material are laid down one over the other. A 3D printer makes it possible to realize, in a simple way, very complex shapes which would be quite difficult to produce with dedicated conventional facilities. Because the 3D print is built up layer by layer, no particular workflow is needed: it is sufficient to draw the model and send it to print. Many different kinds of 3D printers exist, based on the technology and material used for layer deposition. A common printing material is ABS plastic, a light and rigid thermoplastic polymer whose mechanical properties make it widely used in several fields, such as pipe production and car interior manufacturing. I used this technology to create a 1:1 scale model of the telescope which is the hardware core of the small space mission CHEOPS (CHaracterising ExOPlanets Satellite) by ESA, which aims to characterize exoplanets via transit observations. The telescope has a Ritchey-Chrétien configuration with a 30 cm aperture, and the launch is foreseen in 2017. In this paper, I present the different phases of the realization of this model, focusing on the pros and cons of this kind of technology. For example, because of the finite printable volume (10×10×12 inches in the x, y and z directions, respectively), it was necessary to split the largest parts of the instrument into smaller components to be reassembled and post-processed afterwards. A further issue is the resolution of the printed material, which is expressed in terms of layers

  8. EEG-based usability assessment of 3D shutter glasses

    NASA Astrophysics Data System (ADS)

    Wenzel, Markus A.; Schultze-Kraft, Rafael; Meinecke, Frank C.; Cardinaux, Fabien; Kemp, Thomas; Müller, Klaus-Robert; Curio, Gabriel; Blankertz, Benjamin

    2016-02-01

    Objective. Neurotechnology can contribute to the usability assessment of products by providing objective measures of neural workload and can uncover usability impediments that are not consciously perceived by test persons. In this study, the neural processing effort imposed on the viewer of 3D television by shutter glasses was quantified as a function of shutter frequency. In particular, we sought to determine the critical shutter frequency at which the ‘neural flicker’ vanishes, such that visual fatigue due to this additional neural effort can be prevented by increasing the frequency of the system. Approach. Twenty-three participants viewed an image through 3D shutter glasses, while multichannel electroencephalogram (EEG) was recorded. In total ten shutter frequencies were employed, selected individually for each participant to cover the range below, at and above the threshold of flicker perception. The source of the neural flicker correlate was extracted using independent component analysis and the flicker impact on the visual cortex was quantified by decoding the state of the shutter from the EEG. Main Result. Effects of the shutter glasses were traced in the EEG up to around 67 Hz—about 20 Hz over the flicker perception threshold—and vanished at the subsequent frequency level of 77 Hz. Significance. The impact of the shutter glasses on the visual cortex can be detected by neurotechnology even when a flicker is not reported by the participants. Potential impact. Increasing the shutter frequency from the usual 50 Hz or 60 Hz to 77 Hz reduces the risk of visual fatigue and thus improves shutter-glass-based 3D usability.
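
    The decoding step described above is summarized in the following minimal sketch, which assumes band-pass-filtered EEG epochs labeled by the shutter state; the array shapes, variable names and the choice of a linear discriminant classifier are illustrative placeholders, not the authors' pipeline.

    ```python
    # Minimal sketch: decode the shutter state (open/closed) from EEG epochs.
    # `epochs` stands in for preprocessed EEG segments (random placeholders here);
    # with real data, above-chance cross-validated accuracy would indicate that
    # the flicker still leaves a measurable trace at the tested shutter frequency.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_trials, n_features = 200, 64 * 50            # hypothetical trial/feature counts
    epochs = rng.normal(size=(n_trials, n_features))
    shutter_state = rng.integers(0, 2, size=n_trials)

    clf = LinearDiscriminantAnalysis()
    scores = cross_val_score(clf, epochs, shutter_state, cv=5)
    print(f"decoding accuracy: {scores.mean():.2f}")
    ```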

  9. Design of extended viewing zone at autostereoscopic 3D display based on diffusing optical element

    NASA Astrophysics Data System (ADS)

    Kim, Min Chang; Hwang, Yong Seok; Hong, Suk-Pyo; Kim, Eun Soo

    2012-03-01

    In this paper, to realize a glasses-free 3D display as the next step beyond current glasses-type 3D displays, we propose designing the viewing zone of the 3D display using a diffusing optical element (DOE). The viewing zone of the proposed method is larger than that of current parallax barrier or lenticular methods. The proposed method is shown to enable the expansion and adjustment of the viewing zone area according to viewing distance.

  10. First 3D view of solar eruptions

    NASA Astrophysics Data System (ADS)

    2004-07-01

    arrival times and impact angles at the Earth," says Dr Thomas Moran of the Catholic University, Washington, USA. In collaboration with Dr Joseph Davila, of NASA’s Goddard Space Flight Center, Greenbelt, USA, Moran has analysed two-dimensional images from the ESA/NASA Solar and Heliospheric Observatory (SOHO) in a new way to yield 3D images. Their technique is able to reveal the complex and distorted magnetic fields that travel with the CME cloud and sometimes interact with Earth's own magnetic field, pouring tremendous amounts of energy into the space near Earth. "These magnetic fields are invisible," Moran explains, "but since the CME gas is electrified, it spirals around the magnetic fields, tracing out their shapes." Therefore, a 3D view of the CME electrified gas (called a plasma) gives scientists valuable information on the structure and behaviour of the magnetic fields powering the CME. The new analysis technique for SOHO data determines the three-dimensional structure of a CME by taking a sequence of three SOHO Large Angle and Spectrometric Coronagraph (LASCO) images through various polarisers, at different angles. Whilst the light emitted by the Sun is not polarised, once it is scattered off electrons in the CME plasma it takes up some polarisation. This means that the electric fields of some of the scattered light are forced to oscillate in certain directions, whereas the electric field in the light emitted by the Sun is free to oscillate in all directions. Moran and Davila knew that light from CME structures closer to the plane of the Sun (as seen on the LASCO images) had to be more polarised than light from structures farther from that plane. Thus, by computing the ratio of polarised to unpolarised light for each CME structure, they could measure its distance from the plane. This provided the missing third dimension to the LASCO images. With this technique, the team has confirmed that the structure of CMEs directed towards Earth is an expanding arcade of
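
    In outline, the standard Thomson-scattering relation behind this polarization-ratio technique can be written as follows (our notation, added for orientation; not taken from the article):

    ```latex
    % Degree of polarization of Thomson-scattered light at scattering angle \chi:
    \[ p(\chi) = \frac{\sin^{2}\chi}{1+\cos^{2}\chi} \]
    % A CME feature at angle \varepsilon out of the plane of the sky scatters at
    % \chi \approx 90^{\circ}-\varepsilon, so the measured polarized-to-total
    % brightness ratio gives \varepsilon and hence the line-of-sight offset
    % z \approx \rho\tan\varepsilon, with \rho the projected distance in the image.
    ```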

  11. 3-D Perspective View, Kamchatka Peninsula, Russia

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This perspective view shows the western side of the volcanically active Kamchatka Peninsula in eastern Russia. The image was generated using the first data collected during the Shuttle Radar Topography Mission (SRTM). In the foreground is the Sea of Okhotsk. Inland from the coast, vegetated floodplains and low relief hills rise toward snow capped peaks. The topographic effects on snow and vegetation distribution are very clear in this near-horizontal view. Forming the skyline is the Sredinnyy Khrebet, the volcanic mountain range that makes up the spine of the peninsula. High resolution SRTM topographic data will be used by geologists to study how volcanoes form and to understand the hazards posed by future eruptions.

    This image was generated using topographic data from SRTM and an enhanced true-color image from the Landsat 7 satellite. This image contains about 2,400 meters (7,880 feet) of total relief. The topographic expression was enhanced by adding artificial shading as calculated from the SRTM elevation model. The Landsat data was provided by the United States Geological Survey's Earth Resources Observations Systems (EROS) Data Center, Sioux Falls, South Dakota.

    SRTM, launched on February 11, 2000, used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. To collect the 3-D SRTM data, engineers added a 60-meter-long (200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. SRTM collected three-dimensional measurements of nearly 80 percent of the Earth's surface. SRTM is a cooperative project between NASA, the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense, and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Earth Science Enterprise, Washington, D.C.

    Size: 33.3 km (20.6 miles) wide x

  12. Case study: Beauty and the Beast 3D: benefits of 3D viewing for 2D to 3D conversion

    NASA Astrophysics Data System (ADS)

    Handy Turner, Tara

    2010-02-01

    From the earliest stages of the Beauty and the Beast 3D conversion project, the advantages of accurate desk-side 3D viewing were evident. While designing and testing the 2D to 3D conversion process, the engineering team at Walt Disney Animation Studios proposed a 3D viewing configuration that not only allowed artists to "compose" stereoscopic 3D but also improved efficiency by allowing artists to instantly detect which image features were essential to the stereoscopic appeal of a shot and which features had minimal or even negative impact. At a time when few commercial 3D monitors were available and few software packages provided 3D desk-side output, the team designed their own prototype devices and collaborated with vendors to create a "3D composing" workstation. This paper outlines the display technologies explored, final choices made for Beauty and the Beast 3D, wish-lists for future development and a few rules of thumb for composing compelling 2D to 3D conversions.

  13. 3D View of Death Valley, California

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This 3-D perspective view looking north over Death Valley, California, was produced by draping ASTER nighttime thermal infrared data over topographic data from the US Geological Survey. The ASTER data were acquired April 7, 2000 with the multi-spectral thermal infrared channels, and cover an area of 60 by 80 km (37 by 50 miles). Bands 13, 12, and 10 are displayed in red, green and blue respectively. The data have been computer enhanced to exaggerate the color variations that highlight differences in types of surface materials. Salt deposits on the floor of Death Valley appear in shades of yellow, green, purple, and pink, indicating presence of carbonate, sulfate, and chloride minerals. The Panamint Mtns. to the west, and the Black Mtns. to the east, are made up of sedimentary limestones, sandstones, shales, and metamorphic rocks. The bright red areas are dominated by the mineral quartz, such as is found in sandstones; green areas are limestones. In the lower center part of the image is Badwater, the lowest point in North America.

    Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) is one of five Earth-observing instruments launched December 18, 1999, on NASA's Terra satellite. The instrument was built by Japan's Ministry of International Trade and Industry. A joint U.S./Japan science team is responsible for validation and calibration of the instrument and the data products. Dr. Anne Kahle at NASA's Jet Propulsion Laboratory, Pasadena, Calif., is the U.S. Science team leader; Moshe Pniel of JPL is the project manager. ASTER is the only high resolution imaging sensor on Terra. The primary goal of the ASTER mission is to obtain high-resolution image data in 14 channels over the entire land surface, as well as black and white stereo images. With a revisit time of between 4 and 16 days, ASTER will provide the capability for repeat coverage of changing areas on Earth's surface.

    The broad spectral coverage and high spectral resolution of ASTER

  14. 30-view projection 3D display

    NASA Astrophysics Data System (ADS)

    Huang, Junejei; Wang, Yuchang

    2015-03-01

    A 30-view autostereoscopic display using an angle-magnifying screen is proposed. The small incident angle of the lamp scanning from the exit pupil of the projection lens is magnified into a large field of view on the observing side. The lamp scanning is realized by the vibration of a Galvano mirror that is synchronized with the frame rate of the DMD and reflects the laser illumination to the scanning angles. To achieve 15 views, a 3-chip DLP projector with a frame rate of 720 Hz is used. During one vibration cycle of the Galvano mirror, steps 0, 2, 4, 6, 8, 10, 12 and 14 are reflected on the going path and steps 13, 11, 9, 7, 5, 3 and 1 on the returning path. A frame is divided into two halves of odd lines and even lines for two views. For each view, 48 half frames per second are provided. A projection lens with an aperture-relay module is used to double the lens aperture and to separate the frame into the two halves of even and odd lines. After passing through the Philips prism and the three panels, the 15 scanning spots are doubled to 30 spots and emerge from the exit pupil of the projection lens. The 30 light spots exiting the projection lens are projected onto 30 viewing zones by the angle-magnifying screen. A rear-projection cabinet with two folding mirrors is used because a projection lens with a long throw distance is required.
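
    The frame-to-view ordering described above can be reconstructed as a short sketch (illustrative only; it simply encodes the step sequence and the odd/even line split stated in the abstract):

    ```python
    # One Galvano-mirror vibration cycle: steps 0,2,...,14 on the going path and
    # 13,11,...,1 on the returning path give 15 scanning directions per cycle.
    going = list(range(0, 15, 2))        # [0, 2, 4, 6, 8, 10, 12, 14]
    returning = list(range(13, 0, -2))   # [13, 11, 9, 7, 5, 3, 1]
    cycle = going + returning            # 15 scanning steps per cycle

    # Each DMD frame is split into even and odd lines, i.e. two views per
    # scanning spot, for 2 * 15 = 30 views in total.
    views = [(step, half) for step in cycle for half in ("even", "odd")]
    assert len(set(views)) == 30
    ```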

  15. Multi-view and 3D deformable part models.

    PubMed

    Pepik, Bojan; Stark, Michael; Gehler, Peter; Schiele, Bernt

    2015-11-01

    As objects are inherently 3D, they were modeled in 3D in the early days of computer vision. Due to the ambiguities arising from mapping 2D features to 3D models, 3D object representations have since been neglected, and 2D feature-based models are the predominant paradigm in object detection nowadays. While such models have achieved outstanding bounding box detection performance, they are clearly limited in their capability of reasoning about 3D shape or viewpoints. In this work, we bring the worlds of 3D and 2D object representations closer by building an object detector which leverages the expressive power of 3D object representations while at the same time being robustly matched to image evidence. To that end, we gradually extend the successful deformable part model [1] to include viewpoint information and part-level 3D geometry information, resulting in several different models with different levels of expressiveness. We end up with a 3D object model consisting of multiple object parts represented in 3D and a continuous appearance model. We experimentally verify that our models, while providing richer object hypotheses than the 2D object models, provide consistently better joint object localization and viewpoint estimation than the state-of-the-art multi-view and 3D object detectors on various benchmarks (KITTI [2], 3D object classes [3], Pascal3D+ [4], Pascal VOC 2007 [5], EPFL multi-view cars [6]).

  16. True 3-D View of 'Columbia Hills' from an Angle

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This mosaic of images from NASA's Mars Exploration Rover Spirit shows a panorama of the 'Columbia Hills' without any adjustment for rover tilt. When viewed through 3-D glasses, depth is much more dramatic and easier to see, compared with a tilt-adjusted version. This is because stereo views are created by producing two images, one corresponding to the view from the panoramic camera's left-eye camera, the other corresponding to the view from the panoramic camera's right-eye camera. The brain processes the visual input more accurately when the two images do not have any vertical offset. In this view, the vertical alignment is nearly perfect, but the horizon appears to curve because of the rover's tilt (because the rover was parked on a steep slope, it was tilted approximately 22 degrees to the west-northwest). Spirit took the images for this 360-degree panorama while en route to higher ground in the 'Columbia Hills.'

    The highest point visible in the hills is 'Husband Hill,' named for space shuttle Columbia Commander Rick Husband. To the right are the rover's tracks through the soil, where it stopped to perform maintenance on its right front wheel in July. In the distance, below the hills, is the floor of Gusev Crater, where Spirit landed Jan. 3, 2004, before traveling more than 3 kilometers (1.8 miles) to reach this point. This vista comprises 188 images taken by Spirit's panoramic camera from its 213th day, or sol, on Mars to its 223rd sol (Aug. 9 to 19, 2004). Team members at NASA's Jet Propulsion Laboratory and Cornell University spent several weeks processing images and producing geometric maps to stitch all the images together in this mosaic. The 360-degree view is presented in a cylindrical-perspective map projection with geometric seam correction.

  17. 3D View of Mars Particle

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This is a 3D representation of the pits seen in the first Atomic Force Microscope, or AFM, images sent back from NASA's Phoenix Mars Lander. Red represents the highest point and purple represents the lowest point.

    The particle in the upper left corner shown at the highest magnification ever seen from another world is a rounded particle about one micrometer, or one millionth of a meter, across. It is a particle of the dust that cloaks Mars. Such dust particles color the Martian sky pink, feed storms that regularly envelop the planet and produce Mars' distinctive red soil.

    The particle was part of a sample informally called 'Sorceress' delivered to the AFM on the 38th Martian day, or sol, of the mission (July 2, 2008). The AFM is part of Phoenix's microscopic station called MECA, or the Microscopy, Electrochemistry, and Conductivity Analyzer.

    The AFM was developed by a Swiss-led consortium, with Imperial College London producing the silicon substrate that holds sampled particles.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  18. World Wind 3D Earth Viewing

    NASA Technical Reports Server (NTRS)

    Hogan, Patrick; Maxwell, Christopher; Kim, Randolph; Gaskins, Tom

    2007-01-01

    World Wind allows users to zoom from satellite altitude down to any place on Earth, leveraging high-resolution LandSat imagery and SRTM (Shuttle Radar Topography Mission) elevation data to experience Earth in visually rich 3D. In addition to Earth, World Wind can also visualize other planets, and there are already comprehensive data sets for Mars and the Earth's moon, which are as easily accessible as those of Earth. There have been more than 20 million downloads to date, and the software is being used heavily by the Department of Defense due to the code's ability to be extended and the evolution of the code courtesy of NASA and the user community. Primary features include the dynamic access to public domain imagery and its ease of use. All one needs to control World Wind is a two-button mouse. Additional guides and features can be accessed through a simplified menu. A JAVA version will be available soon. Navigation is automated with single clicks of a mouse, or by typing in any location to automatically zoom in to see it. The World Wind install package contains the necessary requirements such as the .NET runtime and managed DirectX library. World Wind can display combinations of data from a variety of sources, including Blue Marble, LandSat 7, SRTM, NASA Scientific Visualization Studio, GLOBE, and much more. A thorough list of features, the user manual, a key chart, and screen shots are available at http://worldwind.arc.nasa.gov.

  19. 3D View of Grand Canyon, Arizona

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The Grand Canyon is one of North America's most spectacular geologic features. Carved primarily by the Colorado River over the past six million years, the canyon sports vertical drops of 5,000 feet and spans a 445-kilometer-long stretch of Arizona desert. The strata along the steep walls of the canyon form a record of geologic time from the Paleozoic Era (250 million years ago) to the Precambrian (1.7 billion years ago).

    The above view was acquired by the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument aboard the Terra spacecraft. Visible and near infrared data were combined to form an image that simulates the natural colors of water and vegetation. Rock colors, however, are not accurate. The image data were combined with elevation data to produce this perspective view, with no vertical exaggeration, looking from above the South Rim up Bright Angel Canyon towards the North Rim. The light lines on the plateau at lower right are the roads around the Canyon View Information Plaza. The Bright Angel Trail, which reaches the Colorado in 11.3 kilometers, can be seen dropping into the canyon over Plateau Point at bottom center. The blue and black areas on the North Rim indicate a forest fire that was smoldering as the data were acquired on May 12, 2000.

    Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) is one of five Earth-observing instruments launched December 18, 1999, on NASA's Terra satellite. The instrument was built by Japan's Ministry of International Trade and Industry. A joint U.S./Japan science team is responsible for validation and calibration of the instrument and the data products. Dr. Anne Kahle at NASA's Jet Propulsion Laboratory, Pasadena, Calif., is the U.S. Science team leader; Moshe Pniel of JPL is the project manager. ASTER is the only high resolution imaging sensor on Terra. The primary goal of the ASTER mission is to obtain high-resolution image data in 14 channels over the entire land

  20. Parallel tempering and 3D spin glass models

    NASA Astrophysics Data System (ADS)

    Papakonstantinou, T.; Malakis, A.

    2014-03-01

    We review parallel tempering schemes and examine their main ingredients for accuracy and efficiency. We discuss two selection methods for the temperatures and some alternatives for the exchange of replicas, including all-pair exchange methods. We measure specific heat errors and round-trip efficiency using the two-dimensional (2D) Ising model, and also test the efficiency of ground-state production in 3D spin glass models. We find that the optimization of the ground-state problem is highly influenced by the choice of the temperature range of the parallel tempering process. Finally, we present numerical evidence concerning the universality aspects of an anisotropic case of the 3D spin-glass model.
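
    As a reminder of the basic ingredient reviewed here, the replica-exchange (swap) move between neighbouring temperatures is sketched below with the standard Metropolis acceptance rule; variable names are illustrative and this is not the authors' code.

    ```python
    # Parallel-tempering swap step: neighbouring replicas exchange with
    # probability min(1, exp((beta_i - beta_j) * (E_i - E_j))).
    import math, random

    def attempt_swaps(energies, betas):
        """energies[i] is the current energy of the replica at inverse
        temperature betas[i]; in a full simulation the spin configurations
        are exchanged together with their energies."""
        for i in range(len(betas) - 1):
            delta = (betas[i] - betas[i + 1]) * (energies[i] - energies[i + 1])
            if delta >= 0 or random.random() < math.exp(delta):
                energies[i], energies[i + 1] = energies[i + 1], energies[i]
        return energies
    ```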

  1. Effect of Illumination on Ocular Status Modifications Induced by Short-Term 3D TV Viewing

    PubMed Central

    Chen, Yuanyuan; Xu, Aiqin; Jiang, Jian

    2017-01-01

    Objectives. This study aimed to compare changes in ocular status after 3D TV viewing under three modes of illumination and thereby identify optimal illumination for 3D TV viewing. Methods. The following measures of ocular status were assessed: the accommodative response, accommodative microfluctuation, accommodative facility, relative accommodation, gradient accommodative convergence/accommodation (AC/A) ratio, phoria, and fusional vergence. The observers watched 3D television for 90 minutes through 3D shutter glasses under three illumination modes: A, complete darkness; B, back illumination (50 lx); and C, front illumination (130 lx). The ocular status of the observers was assessed both before and after the viewing. Results. After 3D TV viewing, the accommodative response and accommodative microfluctuation were significantly changed under illumination Modes A and B. The near positive fusional vergence decreased significantly after the 90-minute 3D viewing session under each illumination mode, and this effect was not significantly different among the three modes. Conclusions. Short-term 3D viewing modified the ocular status of adults. The least amount of such change occurred with front illumination, suggesting that this type of illumination is an appropriate mode for 3D shutter TV viewing. PMID:28348893

  2. Balance and coordination after viewing stereoscopic 3D television

    PubMed Central

    Read, Jenny C. A.; Simonotto, Jennifer; Bohr, Iwo; Godfrey, Alan; Galna, Brook; Rochester, Lynn; Smulders, Tom V.

    2015-01-01

    Manufacturers and the media have raised the possibility that viewing stereoscopic 3D television (S3D TV) may cause temporary disruption to balance and visuomotor coordination. We looked for evidence of such effects in a laboratory-based study. Four hundred and thirty-three people aged 4–82 years old carried out tests of balance and coordination before and after viewing an 80 min movie in either conventional 2D or stereoscopic 3D, while wearing two triaxial accelerometers. Accelerometry produced little evidence of any change in body motion associated with S3D TV. We found no evidence that viewing the movie in S3D causes a detectable impairment in balance or in visuomotor coordination. PMID:26587261

  3. Evaluation of viewing experiences induced by curved 3D display

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Park, Min-Chul; Yano, Sumio

    2015-05-01

    As advanced display technology has developed, much attention has been given to flexible panels. Moreover, with the momentum of the 3D era, stereoscopic 3D techniques have been combined with curved displays. However, despite the increased need for 3D functionality in curved displays, comparisons between curved and flat panel displays with 3D views have rarely been made. Most previous studies have investigated basic ergonomic aspects such as viewing posture and distance with only 2D views. It is generally known that curved displays are more effective in enhancing involvement in specific content stories because the field of view and the distances from the viewers' eyes to both edges of the screen are more natural on curved displays than on flat panels. On flat panel displays, ocular torsion may occur when viewers try to move their eyes from the center to the edges of the screen to continuously capture rapidly moving 3D objects. This is due in part to the difference between the viewing distance from the center of the screen to the viewers' eyes and that from the edges of the screen. Thus, this study compared S3D viewing experiences induced by a curved display with those of a flat panel display by evaluating significant subjective and objective measures.

  4. A closer view of prominent rocks - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Many prominent rocks near the Sagan Memorial Station are featured in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. Shark, Half-Dome, Pumpkin, Flat Top and Frog are at center. Little Flat Top is at right. The horizon in the distance is one to two kilometers away.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.


  5. Multi-view 3D display using waveguides

    NASA Astrophysics Data System (ADS)

    Lee, Byoungho; Lee, Chang-Kun

    2015-07-01

    We propose a multi-projection-based multi-view 3D display system using an optical waveguide. The images from the projection units, incident at an angle satisfying the total internal reflection (TIR) condition, enter the waveguide and experience multiple reflections at the interfaces due to TIR. As a result of the multiple reflections in the waveguide, the projection distance in the horizontal direction is effectively reduced to the thickness of the waveguide, making it possible to implement a compact projection display system. By aligning the projector array at the entrance of the waveguide, a multi-view 3D display system based on multiple projectors with a minimized structure is realized. Viewing zones are generated by combining the waveguide projection system, a vertical diffuser, and a Fresnel lens. In the experimental setup, the feasibility of the proposed method is verified and a ten-view 3D display system with a compact projection space is implemented.

  6. fVisiOn: glasses-free tabletop 3D display to provide virtual 3D media naturally alongside real media

    NASA Astrophysics Data System (ADS)

    Yoshida, Shunsuke

    2012-06-01

    A novel glasses-free tabletop 3D display, named fVisiOn, floats virtual 3D objects on an empty, flat tabletop surface and enables multiple viewers to observe raised 3D images from any angle over 360°. Our glasses-free 3D image reproduction method employs a combination of an optical device and an array of projectors and produces continuous horizontal parallax along a circular path located above the table. The optical device is shaped as a hollow cone and works as an anisotropic diffuser. The circularly arranged projectors cast numerous rays into the optical device. Each ray reproduces a light ray that passes through a corresponding point on a virtual object's surface and travels toward a viewing area around the table. At any viewpoint in the ring-shaped viewing area, both eyes collect fractional images from different projectors, and all the viewers around the table can perceive the scene as 3D from their own perspectives because the images include binocular disparity. The entire mechanism is installed beneath the table, so the tabletop area remains clear and no ordinary tabletop activities are disturbed. Many people can naturally share the 3D images displayed together with real objects on the table. In our latest prototype, we employed a handmade optical device and an array of over 100 tiny projectors. This configuration reproduces static and animated 3D scenes for a 130° viewing area and allows 5-cm-tall virtual characters to play soccer and dance on the table.

  7. Efficient fabrication method of nano-grating for 3D holographic display with full parallax views.

    PubMed

    Wan, Wenqiang; Qiao, Wen; Huang, Wenbin; Zhu, Ming; Fang, Zongbao; Pu, Donglin; Ye, Yan; Liu, Yanhua; Chen, Linsen

    2016-03-21

    Without any special glasses, multiview 3D displays based on diffractive optics can present high-resolution, full-parallax 3D images over an ultra-wide viewing angle. The enabling optical component, namely the phase plate, can produce arbitrarily distributed view zones by carefully designing the orientation and period of each nano-grating pixel. However, such 3D display screens have been restricted to a limited size by the time-consuming process of fabricating the nano-gratings on the phase plate. In this paper, we propose and develop a lithography system that can fabricate the phase plate efficiently. We made two phase plates with full nano-grating pixel coverage at a speed of 20 mm²/min, a 500-fold increase in efficiency compared with E-beam lithography. One 2.5-inch phase plate generated 9-view 3D images with horizontal parallax, while the other 6-inch phase plate produced 64-view 3D images with full parallax. The angular divergence along the horizontal and vertical axes was 1.5 degrees and 1.25 degrees, respectively, slightly larger than the value of 1.2 degrees simulated by the Finite-Difference Time-Domain (FDTD) method. The intensity variation was less than 10% for each viewpoint, consistent with the simulation results. On top of each phase plate, a high-resolution binary masking pattern containing the amplitude information of all viewing zones was precisely aligned. We achieved a resolution of 400 pixels/inch and a viewing angle of 40 degrees for the 9-view 3D images with horizontal parallax. In the other prototype, the resolution of each view was 160 pixels/inch and the viewing angle was 50 degrees for the 64-view 3D images with full parallax. As demonstrated in the experiments, the homemade lithography system provides the key fabrication technology for multiview 3D holographic displays.
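
    The relation that links each nano-grating pixel to its view zone is, in essence, the first-order grating equation; the notation below is ours and is included only for orientation.

    ```latex
    % First-order diffraction from a nano-grating pixel of period \Lambda:
    \[ \sin\theta_{\mathrm{out}} = \sin\theta_{\mathrm{in}} + \frac{\lambda}{\Lambda} \]
    % The in-plane orientation \varphi of the grating lines sets the azimuth of
    % the diffracted beam, so the pair (\Lambda,\varphi) chosen per pixel steers
    % light of wavelength \lambda toward the intended viewing zone.
    ```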

  8. 3D Viewing: Odd Perception - Illusion? reality? or both?

    NASA Astrophysics Data System (ADS)

    Kisimoto, K.; Iizasa, K.

    2008-12-01

    We live in three-dimensional space, don't we? It could be at least four dimensions, but that is another story. Either way, our capability for 3D viewing is constrained by our 2D perception (our intrinsic tools of perception). I carried out a few visual experiments using topographic data to show our intrinsic (or biological) shortcoming in 3D recognition of our world. The results of the experiments suggest: (1) a 3D surface model displayed on a 2D computer screen (or paper) always has two interpretations of the 3D surface geometry; if we choose one of the interpretations (in other words, if we are hooked by one of the two perceptions), we maintain that perception even if the 3D model changes its viewing perspective over time on the screen; (2) more interesting is that a real 3D solid object (e.g., made of clay) also gives the two above-mentioned interpretations of the object's geometry if we observe it with one eye. The most famous example of this viewing illusion comes from the magician Jerry Andrus (who died in 2007), who made a paper-crafted dragon that causes a visual illusion for a one-eyed viewer. In these experiments I confirmed this phenomenon in another perceptually persuasive (deceptive?) way. My conclusion is that this illusion is intrinsic, i.e., reality for humans, because even though we live in 3D space, our perceptual tools (eyes) are composed of 2D sensors whose information is reconstructed into 3D by our experience-based brain. So, (3) when we observe a 3D surface model on a computer screen, we are always one eye short, even if we use both eyes. One last suggestion from my experiments is that recent, highly sophisticated 3D models might include more information than human perception can handle properly, i.e., we might not be understanding the 3D world (geospace) at all, just experiencing an illusion.

  9. 3-D object recognition using 2-D views.

    PubMed

    Li, Wenjing; Bebis, George; Bourbakis, Nikolaos G

    2008-11-01

    We consider the problem of recognizing 3-D objects from 2-D images using geometric models and assuming different viewing angles and positions. Our goal is to recognize and localize instances of specific objects (i.e., model-based) in a scene. This is in contrast to category-based object recognition methods where the goal is to search for instances of objects that belong to a certain visual category (e.g., faces or cars). The key contribution of our work is improving 3-D object recognition by integrating Algebraic Functions of Views (AFoVs), a powerful framework for predicting the geometric appearance of an object due to viewpoint changes, with indexing and learning. During training, we compute the space of views that groups of object features can produce under the assumption of 3-D linear transformations, by combining a small number of reference views that contain the object features using AFoVs. Unrealistic views (e.g., due to the assumption of 3-D linear transformations) are eliminated by imposing a pair of rigidity constraints based on knowledge of the transformation between the reference views of the object. To represent the space of views that an object can produce compactly while allowing efficient hypothesis generation during recognition, we propose combining indexing with learning in two stages. In the first stage, we sample the space of views of an object sparsely and represent information about the samples using indexing. In the second stage, we build probabilistic models of shape appearance by sampling the space of views of the object densely and learning the manifold formed by the samples. Learning employs the Expectation-Maximization (EM) algorithm and takes place in a "universal," lower-dimensional, space computed through Random Projection (RP). During recognition, we extract groups of point features from the scene and we use indexing to retrieve the most feasible model groups that might have produced them (i.e., hypothesis generation). The likelihood
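
    For context, the classical relation that AFoVs build on states that, under orthographic projection and 3-D linear transformations, a feature point's coordinates in a novel view are linear functions of its coordinates in two reference views; the coefficients below are our illustrative notation, not the paper's.

    ```latex
    % Linear combination of views: coordinates (x_1,y_1) and (x_2,y_2) of the same
    % point in two reference views predict its position (x',y') in a novel view.
    \[ x' = a_{1}x_{1} + a_{2}y_{1} + a_{3}x_{2} + a_{4}, \qquad
       y' = b_{1}x_{1} + b_{2}y_{1} + b_{3}x_{2} + b_{4} \]
    ```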

  10. Crosstalk minimization in autostereoscopic multiview 3D display by eye tracking and fusion (overlapping) of viewing zones

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Kyu; Yoon, Seon-Kyu; Yoon, Ki-Hyuk

    2012-06-01

    An autostereoscopic 3D display provides binocular perception without eye glasses, but it induces a reduced 3D effect and dizziness due to the crosstalk effect. Crosstalk-related problems degrade the 3D effect, clearness, and realism of the 3D image. A novel method of reducing the crosstalk is designed and tested; the method is based on the fusion of viewing zones and the real-time eye position. It is shown experimentally that the crosstalk is effectively reduced at any position around the optimal viewing distance.

  11. Autostereoscopic 3D Display with Long Visualization Depth Using Referential Viewing Area-Based Integral Photography.

    PubMed

    Liao, Hongen; Dohi, Takeyoshi; Nomura, Keisuke

    2011-11-01

    We developed an autostereoscopic display for distant viewing of 3D computer graphics (CG) images without the use of special viewing glasses or tracking devices. The images are created by employing referential viewing area-based CG image generation and a pixel distribution algorithm for integral photography (IP) and integral videography (IV) imaging. CG image rendering is used to generate the IP/IV elemental images. The images can be viewed from any viewpoint within a referential viewing area, and the elemental images are reconstructed from the rendered CG images by a pixel redistribution and compensation method. The elemental images are projected onto a screen that is placed at the same referential viewing distance from the lens array as in the image rendering. Photographic film is used to record the elemental images through each lens. The method enables 3D images with a long visualization depth to be viewed from relatively long distances without any apparent influence from deviated or distorted lenses in the array. We succeeded in creating actual autostereoscopic images, with an image depth of several meters in front of and behind the display, that appear three-dimensional even when viewed from a distance.

  12. 5. Headon view of looking glass aircraft. View to southwest. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    5. Head-on view of looking glass aircraft. View to southwest. - Offutt Air Force Base, Looking Glass Airborne Command Post, Looking Glass Aircraft, On Operational Apron covering northeast half of Project Looking Glass Historic District, Bellevue, Sarpy County, NE

  13. 3. General view showing rear of looking glass aircraft. View ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. General view showing rear of looking glass aircraft. View to north. - Offutt Air Force Base, Looking Glass Airborne Command Post, Looking Glass Aircraft, On Operational Apron covering northeast half of Project Looking Glass Historic District, Bellevue, Sarpy County, NE

  14. 4. View showing underside of wing, looking glass aircraft. View ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    4. View showing underside of wing, looking glass aircraft. View to north. - Offutt Air Force Base, Looking Glass Airborne Command Post, Looking Glass Aircraft, On Operational Apron covering northeast half of Project Looking Glass Historic District, Bellevue, Sarpy County, NE

  15. Analysis of Crosstalk in 3D Circularly Polarized LCDs Depending on the Vertical Viewing Location.

    PubMed

    Zeng, Menglin; Nguyen, Truong Q

    2016-03-01

    Crosstalk in circularly polarized (CP) liquid crystal displays (LCDs) with polarized glasses (passive 3D glasses) is mainly caused by two factors: 1) the polarizing system, including the wave retarders, and 2) the vertical misalignment (VM) of light between the LC module and the patterned retarder. We show that the latter, which is highly dependent on the vertical viewing location, is a much more significant source of crosstalk in CP LCDs than the former. This paper makes three contributions. First, a display model for CP LCDs that accurately characterizes VM is proposed. Second, a novel display calibration method for the VM characterization is presented that requires only pictures of the screen taken at four viewing locations. In addition, we prove that the VM-based crosstalk cannot be efficiently reduced either by preprocessing the input images or by optimizing the polarizing system. Furthermore, we derive the analytic solution for the viewing zone in which the entire screen is free of VM-based crosstalk.

  16. Color and 3D views of the Sierra Nevada mountains

    NASA Technical Reports Server (NTRS)

    2002-01-01

    A stereo 'anaglyph' created using the nadir and 45.6-degree forward-viewing cameras provides a three-dimensional view of the scene when viewed with red/blue glasses. The red filter should be placed over your left eye. To facilitate the stereo viewing, the images have been oriented with north toward the left. Some prominent features are Mono Lake, in the center of the image; Walker Lake, to its left; and Lake Tahoe, near the lower left. This view of the Sierra Nevadas includes Yosemite, Kings Canyon, and Sequoia National Parks. Mount Whitney, the highest peak in the contiguous 48 states (elev. 14,495 feet), is visible near the righthand edge. Above it (to the east), the Owens Valley shows up prominently between the Sierra Nevada and Inyo ranges. Precipitation falling as rain or snow on the Sierras feeds numerous rivers flowing southwestward into the San Joaquin Valley. The abundant fields of this productive agricultural area can be seen along the lower right; a large number of reservoirs that supply water for crop irrigation are apparent in the western foothills of the Sierras. Urban areas in the valley appear as gray patches; among the California cities that are visible are Fresno, Merced, and Modesto.
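
    For readers unfamiliar with how such red/blue anaglyphs are assembled, a minimal sketch follows; with the red filter over the left eye, the left view feeds the red channel and the right view feeds the green and blue channels (the function name and array layout are illustrative).

    ```python
    # Compose a simple red/blue anaglyph from a stereo pair.
    import numpy as np

    def make_anaglyph(left_rgb: np.ndarray, right_rgb: np.ndarray) -> np.ndarray:
        """left_rgb, right_rgb: (H, W, 3) arrays of the two camera views."""
        anaglyph = np.empty_like(left_rgb)
        anaglyph[..., 0] = left_rgb[..., 0]       # red channel   <- left view
        anaglyph[..., 1:] = right_rgb[..., 1:]    # green, blue   <- right view
        return anaglyph
    ```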

  17. Measuring heterogeneous stress fields in a 3D colloidal glass

    NASA Astrophysics Data System (ADS)

    Lin, Neil; Bierbaum, Matthew; Bi, Max; Sethna, James; Cohen, Itai

    Glass in our common experience is hard and fragile, but it still bends, yields, and flows slowly under loads. The yielding of glass, a well-documented yet not fully understood flow behavior, is governed by the heterogeneous local stresses in the material. While resolving stresses at the atomic scale is not feasible, measurements of stresses at the single-particle level in colloidal glasses, a widely used model system for atomic glasses, have recently been made possible using Stress Assessment from Local Structural Anisotropy (SALSA). In this work, we use SALSA to visualize the three-dimensional stress network in a hard-sphere glass during start-up shear. By measuring the evolution of this stress network we identify local yielding. We find that these local yielding events often require only minimal structural rearrangement and as such have most likely been ignored in previous analyses. We then relate these micro-scale yielding events to the macro-scale flow behavior observed in bulk measurements.

  18. View generation for 3D-TV using image reconstruction from irregularly spaced samples

    NASA Astrophysics Data System (ADS)

    Vázquez, Carlos

    2007-02-01

    Three-dimensional television (3D-TV) will be the next big step in the development of advanced TV systems. One of the major challenges for the deployment of 3D-TV systems is the diversity of display technologies and the high cost of capturing multi-view content. Depth image-based rendering (DIBR) has been identified as a key technology for generating new views for stereoscopic and multi-view displays from a small number of captured and transmitted views. We propose a disparity compensation method for DIBR that does not require spatial interpolation of the disparity map. We use forward-mapping disparity compensation with real-valued precision. The proposed method handles the irregularly sampled image resulting from this disparity compensation by applying a re-sampling algorithm based on a bi-cubic spline function space that produces smooth images. Because no approximation is made to the sample positions, geometric distortions in the final images due to such approximations are minimized. We also paid attention to the occlusion problem: our algorithm detects the occluded regions in the newly generated images and uses simple depth-aware inpainting techniques to fill the gaps created by newly exposed areas. We tested the proposed method in the context of generating the views needed for SynthaGram(TM) auto-stereoscopic displays, using as input either a 2D image plus a depth map or a stereoscopic pair with an associated disparity map. Our results show that this technique provides high-quality images for viewing on different display technologies, such as stereoscopic viewing with shutter glasses (two views) and lenticular auto-stereoscopic displays (nine views).
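
    The forward-mapping step described above can be sketched as follows, assuming a per-pixel horizontal disparity map; the subsequent spline-based reconstruction from the irregular samples and the inpainting of occlusion holes are only indicated in the comments (names are illustrative).

    ```python
    # Forward-map reference-view pixels to real-valued positions in the new view.
    import numpy as np

    def forward_map(image: np.ndarray, disparity: np.ndarray):
        """Return scattered sample positions (x + d, y) and their colours.
        The new view is then reconstructed from these irregular samples,
        e.g. with a bi-cubic spline fit, and newly exposed areas are inpainted."""
        h, w = disparity.shape
        ys, xs = np.mgrid[0:h, 0:w]
        xs_new = xs + disparity                   # real-valued target x-coordinates
        positions = np.stack([xs_new.ravel(), ys.ravel()], axis=1)
        colours = image.reshape(h * w, -1)
        return positions, colours
    ```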

  19. Femtosecond laser fabricated electrofluidic devices in glass for 3D manipulation of biological samples

    NASA Astrophysics Data System (ADS)

    Xu, Jian; Midorikawa, Katsumi; Sugioka, Koji

    2016-03-01

    Novel electrofluidic microdevices based on the monolithic integration of 3D metal electrodes into 3D glass microchannels have been prepared by femtosecond (fs) laser hybrid microfabrication. 3D microchannels with smooth internal walls are first prepared in photosensitive glass by an fs-laser-assisted chemical wet etching process combined with post-annealing. Then, 3D electrode patterning in the prepared glass channels is carried out by water-assisted fs-laser direct-write ablation using the same laser, followed by electroless metal plating. Laser processing parameters are optimized and the roles of water during the laser irradiation are discussed. The fabricated electrofluidic devices are used to demonstrate 3D electro-orientation of cells in microfluidic environments.

  20. Reproducibility of crosstalk measurements on active glasses 3D LCD displays based on temporal characterization

    NASA Astrophysics Data System (ADS)

    Tourancheau, Sylvain; Wang, Kun; Bułat, Jarosław; Cousseau, Romain; Janowski, Lucjan; Brunnström, Kjell; Barkowsky, Marcus

    2012-03-01

    Crosstalk is one of the main display-related perceptual factors degrading image quality and causing visual discomfort on 3D displays. It causes visual artifacts such as ghosting, blurring, and a lack of color fidelity, which are considerably annoying and can make it difficult to fuse stereoscopic images. On stereoscopic LCDs with shutter glasses, crosstalk is mainly due to dynamic temporal effects: imprecise target luminance (highly dependent on the combination of left-view and right-view pixel color values in disparity regions) and synchronization issues between the shutter glasses and the LCD. These factors largely influence the reproducibility of crosstalk measurements across laboratories and need to be evaluated in several different locations under both similar and differing conditions. In this paper we propose a fast and reproducible crosstalk measurement procedure based on high-frequency temporal measurements of both the display and shutter responses. It permits full characterization of crosstalk for any right/left color combination and at any spatial position on the screen. Such a reliable objective crosstalk measurement method at several spatial positions is considered a mandatory prerequisite for evaluating the perceptual influence of crosstalk in further subjective studies.
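
    As background, crosstalk for one eye is commonly quantified as the fraction of luminance leaking from the unintended view; this is the generic definition, not a formula specific to this paper.

    ```latex
    \[ \mathrm{crosstalk} =
       \frac{L_{\mathrm{unintended}} - L_{\mathrm{black}}}
            {L_{\mathrm{intended}} - L_{\mathrm{black}}} \]
    ```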

  1. Glasses for 3D ultrasound computer tomography: phase compensation

    NASA Astrophysics Data System (ADS)

    Zapf, M.; Hopp, T.; Ruiter, N. V.

    2016-03-01

    Ultrasound Computer Tomography (USCT), developed at KIT, is a promising new imaging system for breast cancer diagnosis and was successfully tested in a pilot study. The 3D USCT II prototype consists of several hundred ultrasound (US) transducers on a semi-ellipsoidal aperture. Spherical waves are sequentially emitted by individual transducers and received in parallel by many transducers. Reflectivity volumes are reconstructed by synthetic aperture focusing (SAFT). However, straightforward SAFT imaging leads to blurred images due to system imperfections. We present an extension of a previously proposed approach to enhance the images, which includes additional a priori information and system characteristics; spatial phase compensation has now been added. The approach was evaluated with simulations and clinical data sets. An increase in image quality was observed and quantified by SNR and other metrics.
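    At its core, SAFT reconstruction is a delay-and-sum over all emitter-receiver pairs. The following sketch shows that basic principle on synthetic data; the function names, geometry, and sampling values are assumptions, and the a-priori system characteristics and spatial phase compensation described in the paper are deliberately omitted.

```python
import numpy as np

def saft_image(a_scans, emitters, receivers, voxels, c, fs):
    """Delay-and-sum (SAFT) reflectivity reconstruction.

    a_scans   : (n_pairs, n_samples) received time signals, one per emitter-receiver pair.
    emitters  : (n_pairs, 3) emitter positions in metres for each pair.
    receivers : (n_pairs, 3) receiver positions in metres for each pair.
    voxels    : (n_vox, 3) positions at which reflectivity is reconstructed.
    c         : speed of sound in m/s; fs : sampling rate in Hz.
    """
    n_pairs, n_samples = a_scans.shape
    image = np.zeros(len(voxels))
    for p in range(n_pairs):
        # Time of flight emitter -> voxel -> receiver, for every voxel at once.
        tof = (np.linalg.norm(voxels - emitters[p], axis=1) +
               np.linalg.norm(voxels - receivers[p], axis=1)) / c
        idx = np.clip(np.round(tof * fs).astype(int), 0, n_samples - 1)
        image += a_scans[p, idx]
    return image / n_pairs

# Tiny synthetic check: one point scatterer, two transducer pairs.
fs, c = 10e6, 1500.0
scatterer = np.array([0.0, 0.0, 0.05])
emitters = np.array([[-0.02, 0, 0], [0.02, 0, 0]])
receivers = np.array([[0.01, 0, 0], [-0.01, 0, 0]])
n_samples = 2048
a_scans = np.zeros((2, n_samples))
for p in range(2):
    tof = (np.linalg.norm(scatterer - emitters[p]) +
           np.linalg.norm(scatterer - receivers[p])) / c
    a_scans[p, int(round(tof * fs))] = 1.0          # idealized echo

voxels = np.stack([np.zeros(50), np.zeros(50), np.linspace(0.03, 0.07, 50)], axis=1)
profile = saft_image(a_scans, emitters, receivers, voxels, c, fs)
print("reconstructed peak at z =", voxels[np.argmax(profile), 2])
```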

  2. Multispectral polarization viewing angle analysis of circular polarized stereoscopic 3D displays

    NASA Astrophysics Data System (ADS)

    Boher, Pierre; Leroux, Thierry; Bignon, Thibault; Collomb-Patton, Véronique

    2010-02-01

    In this paper we propose a method to characterize polarization-based stereoscopic 3D displays using multispectral Fourier-optics viewing-angle measurements. Full polarization analysis of the light emitted by the display over the full viewing cone is made at 31 wavelengths in the visible range. Vertical modulation of the polarization state is observed and explained by the position of the phase-shift filter within the display structure. In addition, a strong spectral dependence of the ellipticity and degree of polarization is observed. These features come from the strong spectral dependence of the phase-shift film and introduce some imperfections (color shifts and reduced contrast). Using the measured transmission properties of the two filters of the glasses, the resulting luminance across each filter is computed for the left- and right-eye views. Monocular contrast for each eye and binocular contrast are computed in the observer space, and Qualified Monocular and Binocular Viewing Spaces (QMVS and QBVS) can be deduced in the same way as for auto-stereoscopic 3D displays, allowing direct comparison of the performances.
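    Given a measured Stokes vector at a viewing direction, the luminance transmitted by each eyeglass filter can be estimated with Mueller calculus, and a monocular 3D contrast then follows as intended over leaked luminance. The snippet below is a simplified, hypothetical illustration using ideal circular analyzers and made-up Stokes vectors; it is not the authors' measurement software.

```python
import numpy as np

# Ideal circular analyzers (quarter-wave plate + linear polarizer) as Mueller matrices.
M_RIGHT = 0.5 * np.array([[1, 0, 0,  1],
                          [0, 0, 0,  0],
                          [0, 0, 0,  0],
                          [1, 0, 0,  1]])
M_LEFT = 0.5 * np.array([[ 1, 0, 0, -1],
                         [ 0, 0, 0,  0],
                         [ 0, 0, 0,  0],
                         [-1, 0, 0,  1]])

def luminance_through(analyzer, stokes):
    """Luminance (first Stokes component) after the light passes the analyzer."""
    return (analyzer @ stokes)[0]

def monocular_3d_contrast(stokes_intended, stokes_other, analyzer):
    """Intended-view over leaked-view luminance for one eye at one viewing angle."""
    return luminance_through(analyzer, stokes_intended) / \
           luminance_through(analyzer, stokes_other)

# Example at one off-axis direction: the retarder departs from a quarter wave,
# so the emitted light is slightly elliptical and some light leaks to the wrong eye.
stokes_right_view = np.array([100.0, 0.0, 10.0,  95.0])   # meant for the right eye
stokes_left_view  = np.array([100.0, 0.0, 10.0, -95.0])   # meant for the left eye

print("right-eye 3D contrast:",
      round(monocular_3d_contrast(stokes_right_view, stokes_left_view, M_RIGHT), 1))
```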

  3. Microbiological safety of glasses dispensed at 3D movie theatres.

    PubMed

    De Giusti, Maria; Marinelli, Lucia; Ursillo, Paolo; Del Cimmuto, Angela; Cottarelli, Alessia; Palazzo, Caterina; Marzuillo, Carolina; Solimini, Angelo Giuseppe; Boccia, Antonio

    2015-02-01

    The recent popularity of three-dimensional movies has raised some concern about the microbiological safety of glasses dispensed in movie theatres. In this study, we analysed the level of microbiological contamination on the glasses before and after use, and between theatres adopting manual and automatic sanitation systems. The manual sanitation system was more effective in reducing total mesophilic count levels than the automatic system (P < 0.05), but no differences were found for coagulase-positive staphylococci levels (P = 0.22). No differences were found for mould and yeast between before- and after-use levels (P = 0.21) or between sanitation systems (P = 0.44). We conclude that more evidence is needed to support microbiological risk evaluation.

  4. Analysis of multiple recording methods for full resolution multi-view autostereoscopic 3D display system incorporating VHOE

    NASA Astrophysics Data System (ADS)

    Hwang, Yong Seok; Cho, Kyu Ha; Kim, Eun Soo

    2014-03-01

    In this paper, we propose a multiple-recording process for photopolymer for a full-color multi-view (including multiple-view) auto-stereoscopic 3D display system based on a volume holographic optical element (VHOE). To overcome problems such as the low resolution and limited viewing zone of conventional glasses-free 3D displays, we designed the multiple recording conditions of the VHOE for multi-view display. It is verified that the VHOE can be made optically by angle-multiplexed recording of pre-designed multiple viewing zones, recorded uniformly through an optimized exposure-time scheduling scheme. A VHOE-based backlight system for a four-view stereoscopic display is implemented, in which the output beams from the light guide plate (LGP), acting as reference beams, are sequentially synchronized with the respective stereo images displayed on the LCD panel.

  5. A 3D glass optrode array for optical neural stimulation

    PubMed Central

    Abaya, T.V.F.; Blair, S.; Tathireddy, P.; Rieth, L.; Solzbacher, F.

    2012-01-01

    This paper presents optical characterization of a first-generation SiO2 optrode array as a set of penetrating waveguides for both optogenetic and infrared (IR) neural stimulation. Fused silica and quartz discs of 3-mm thickness and 50-mm diameter were micromachined to yield 10 × 10 arrays of up to 2-mm long optrodes at a 400-μm pitch; array size, length and spacing may be varied along with the width and tip angle. Light delivery and loss mechanisms through these glass optrodes were characterized. Light in-coupling techniques include optical fibers and collimated beams. Losses involve Fresnel reflection, coupling, scattering and total internal reflection in the tips. Transmission efficiency was constant in the visible and near-IR range, with the highest value measured as 71% using a 50-μm multi-mode in-coupling fiber butt-coupled to the backplane of the device. Transmittance and output beam profiles of optrodes with different geometries were investigated. Length and tip angle do not affect the amount of output power, but optrode width and tip angle influence the beam size and divergence independently. Finally, array insertion in tissue was performed to demonstrate its robustness for optical access in deep tissue. PMID:23243561
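    Of the loss mechanisms listed, the Fresnel reflection at the interfaces can be estimated directly from the refractive indices. The snippet below is a small, hypothetical back-of-the-envelope estimate at normal incidence with assumed index values; it is not part of the published characterization.

```python
def fresnel_reflectance(n1, n2):
    """Power reflectance at normal incidence between media of index n1 and n2."""
    return ((n1 - n2) / (n1 + n2)) ** 2

# Assumed indices: fused silica waveguide, air at the in-coupling face, soft tissue at the tip.
n_silica, n_air, n_tissue = 1.46, 1.00, 1.36

# Light coupled in from air and delivered into tissue crosses two interfaces.
single_pass = (1 - fresnel_reflectance(n_air, n_silica)) * \
              (1 - fresnel_reflectance(n_silica, n_tissue))
print(f"Fresnel-limited transmission: {100 * single_pass:.1f} %")
```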

  6. Automated 3D reconstruction of interiors with multiple scan views

    NASA Astrophysics Data System (ADS)

    Sequeira, Vitor; Ng, Kia C.; Wolfart, Erik; Goncalves, Joao G. M.; Hogg, David C.

    1998-12-01

    This paper presents two integrated solutions for realistic 3D model acquisition and reconstruction: an early prototype, in the form of a push trolley, and a later prototype in the form of an autonomous robot. The systems encompass all hardware and software required, from laser and video data acquisition, processing and output of texture-mapped 3D models in VRML format, to batteries for power supply and wireless network communications. The autonomous version is also equipped with a mobile platform and other sensors for automatic navigation. The applications for such a system range from real estate and tourism (e.g., showing a 3D computer model of a property to a potential buyer or tenant) to content creation (e.g., creating 3D models of heritage buildings or producing broadcast-quality virtual studios). The system can also be used in industrial environments as a reverse engineering tool to update the design of a plant, or as a 3D photo-archive for insurance purposes. The system is Internet compatible: the photo-realistic models can be accessed via the Internet and manipulated interactively in 3D using a common Web browser with a VRML plug-in. Further information and example reconstructed models are available on-line via the RESOLV web page at http://www.scs.leeds.ac.uk/resolv/.

  7. 3-D Perspective View, Miquelon and Saint Pierre Islands

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This image shows Miquelon and Saint Pierre Islands, located south of Newfoundland, Canada. These islands, along with five smaller islands, are a self-governing territory of France. North is in the top right corner of the image. The island of Miquelon, in the background, is divided by a thin barrier beach into Petite Miquelon on the left and Grande Miquelon on the right. Saint Pierre Island is seen in the foreground. The maximum elevation of this land is 240 meters (787 feet). The land mass of the islands is about 242 square kilometers (94 square miles), or 1.5 times the size of Washington, DC.

    This three-dimensional perspective view is one of several still photographs taken from a simulated flyover of the islands. It shows how elevation data collected by the Shuttle Radar Topography Mission (SRTM) can be used to enhance other satellite images. Color and natural shading are provided by a Landsat 7 image taken on September 7, 1999. The Landsat image was draped over the SRTM data. Terrain perspective and shading are from SRTM. The vertical scale has been increased six times to make it easier to see the small features. This also makes the sea cliffs around the edges of the islands look larger. In this view the capital city of Saint Pierre is seen as the bright area in the foreground of the island. The thin bright line seen in the water is a breakwater that offers some walled protection for the coastal city.

    Elevation data used in this image was acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, installed additional C-band and X-band antennas, and

  8. Dynamics of 3D view invariance in monkey inferotemporal cortex.

    PubMed

    Ratan Murty, N Apurva; Arun, Sripati P

    2015-04-01

    Rotations in depth are challenging for object vision because features can appear, disappear, be stretched or compressed. Yet we easily recognize objects across views. Are the underlying representations view invariant or dependent? This question has been intensely debated in human vision, but the neuronal representations remain poorly understood. Here, we show that for naturalistic objects, neurons in the monkey inferotemporal (IT) cortex undergo a dynamic transition in time, whereby they are initially sensitive to viewpoint and later encode view-invariant object identity. This transition depended on two aspects of object structure: it was strongest when objects foreshortened strongly across views and were similar to each other. View invariance in IT neurons was present even when objects were reduced to silhouettes, suggesting that it can arise through similarity between external contours of objects across views. Our results elucidate the viewpoint debate by showing that view invariance arises dynamically in IT neurons out of a representation that is initially view dependent.

  9. Generation of flat viewing zone in DFVZ autostereoscopic multiview 3D display by weighting factor

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Kyu; Yoon, Seon-Kyu; Yoon, Ky-Hyuk

    2013-05-01

    A new method is introduced to reduce crosstalk problems and brightness variation in the 3D image by means of the dynamic fusion of viewing zones (DFVZ) using a weighting factor. The new method effectively generates a flat viewing zone at the center of the viewing zone. The new type of autostereoscopic 3D display gives less brightness variation of the 3D image when the observer moves.

  10. LiveView3D: Real Time Data Visualization for the Aerospace Testing Environment

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; Fleming, Gary A.

    2006-01-01

    This paper addresses LiveView3D, a software package and associated data visualization system for use in the aerospace testing environment. The LiveView3D system allows researchers to graphically view data from numerous wind tunnel instruments in real time in an interactive virtual environment. The graphical nature of the LiveView3D display provides researchers with an intuitive view of the measurement data, making it easier to interpret the aerodynamic phenomenon under investigation. LiveView3D has been developed at the NASA Langley Research Center and has been applied in the Langley Unitary Plan Wind Tunnel (UPWT). This paper discusses the capabilities of the LiveView3D system, provides example results from its application in the UPWT, and outlines features planned for future implementation.

  11. 3D Object Recognition: Symmetry and Virtual Views

    DTIC Science & Technology

    1992-12-01

    MIT Artificial Intelligence Laboratory, 545 Technology Square, Cambridge, MA. A.I. Memo No. 1409; Center for Biological and Computational Learning, C.B.C.L. Paper No. 76, December 1992. The report describes research done within the Center for Biological and Computational Learning in the Department of Brain and Cognitive Sciences, and at the Artificial Intelligence Laboratory.

  12. Spirit 360-Degree View on Sol 409 (3D)

    NASA Technical Reports Server (NTRS)

    2005-01-01

    [figure removed for brevity, see original site] Figure 1

    [figure removed for brevity, see original site] Figure 2

    NASA's Mars Exploration Rover Spirit used its navigation camera to take the images combined into this 360-degree view of the rover's surroundings on Spirit's 409th martian day, or sol (Feb. 26, 2005). Spirit had driven 2 meters (7 feet) on this sol to get in position on 'Cumberland Ridge' for looking into 'Tennessee Valley' to the east. This location is catalogued as Spirit's Site 108. Rover-wheel tracks from climbing the ridge are visible on the right. The summit of 'Husband Hill' is at the center, to the south. This view is presented in a cylindrical-perspective projection with geometric and brightness seam correction.

    Figure 1 is the left-eye view of a stereo pair and Figure 2 is the right-eye view of a stereo pair.

  13. Spirit 360-Degree View, Sol 388 (3D)

    NASA Technical Reports Server (NTRS)

    2005-01-01

    [figure removed for brevity, see original site] Figure 1

    [figure removed for brevity, see original site] Figure 2

    NASA's Mars Exploration Rover Spirit used its navigation camera to take the images combined into this 360-degree view of the rover's surroundings on Spirit's 388th martian day, or sol (Feb. 4, 2005). Spirit had driven about 13 meters (43 feet) uphill toward 'Cumberland Ridge' on this sol. This location is catalogued as Spirit's Site 102, Position 513. The view is presented in a cylindrical-perspective projection with geometric and brightness seam correction.

    Figure 1 is the left-eye view of a stereo pair and Figure 2 is the right-eye view of a stereo pair.

  14. Viewing 3D TV over two months produces no discernible effects on balance, coordination or eyesight

    PubMed Central

    Read, Jenny C.A.; Godfrey, Alan; Bohr, Iwo; Simonotto, Jennifer; Galna, Brook; Smulders, Tom V.

    2016-01-01

    Abstract With the rise in stereoscopic 3D media, there has been concern that viewing stereoscopic 3D (S3D) content could have long-term adverse effects, but little data are available. In the first study to address this, 28 households who did not currently own a 3D TV were given a new TV set, either S3D or 2D. The 116 members of these households all underwent tests of balance, coordination and eyesight, both before they received their new TV set, and after they had owned it for 2 months. We did not detect any changes which appeared to be associated with viewing 3D TV. We conclude that viewing 3D TV does not produce detectable effects on balance, coordination or eyesight over the timescale studied. Practitioner Summary: Concern has been expressed over possible long-term effects of stereoscopic 3D (S3D). We looked for any changes in vision, balance and coordination associated with normal home S3D TV viewing in the 2 months after first acquiring a 3D TV. We find no evidence of any changes over this timescale. PMID:26758965

  15. Viewing 3D TV over two months produces no discernible effects on balance, coordination or eyesight.

    PubMed

    Read, Jenny C A; Godfrey, Alan; Bohr, Iwo; Simonotto, Jennifer; Galna, Brook; Smulders, Tom V

    2016-08-01

    With the rise in stereoscopic 3D media, there has been concern that viewing stereoscopic 3D (S3D) content could have long-term adverse effects, but little data are available. In the first study to address this, 28 households who did not currently own a 3D TV were given a new TV set, either S3D or 2D. The 116 members of these households all underwent tests of balance, coordination and eyesight, both before they received their new TV set, and after they had owned it for 2 months. We did not detect any changes which appeared to be associated with viewing 3D TV. We conclude that viewing 3D TV does not produce detectable effects on balance, coordination or eyesight over the timescale studied. Practitioner Summary: Concern has been expressed over possible long-term effects of stereoscopic 3D (S3D). We looked for any changes in vision, balance and coordination associated with normal home S3D TV viewing in the 2 months after first acquiring a 3D TV. We find no evidence of any changes over this timescale.

  16. The Influence on Humans of Long Hours of Viewing 3D Movies

    NASA Astrophysics Data System (ADS)

    Kawamura, Yuta; Horie, Yusuke; Sano, Keisuke; Kodama, Hiroya; Tsunoda, Naoki; Shibuta, Yuki; Kawachi, Yuki; Yamada, Mitsuho

    Three-dimensional (3D) movies have become very popular in movie theaters and for home viewing. To date, there has been no report on the effects of the continual vergence eye movement that occurs when viewing a 3D movie from beginning to end. First, we analyzed the influence of viewing a 3D movie for several hours on vergence eye movement. At the same time, we investigated the influence of long viewing on the human body using the Simulator Sickness Questionnaire (SSQ) and critical fusion frequency (CFF). The results suggest that the vergence stable time after a saccade when viewing a long movie was influenced by the viewing time and depended on the content of the movie. Differences in SSQ and CFF were also seen between the beginning and the end of the 3D movie.

  17. ISM abundances and history: a 3D, solar neighborhood view

    NASA Astrophysics Data System (ADS)

    Lallement, R.; Vergely, J.-L.; Puspitarini, L.

    For observational reasons, the solar neighborhood is particularly suitable for the study of the multi-phase interstellar (IS) medium and the search for traces of its temporal evolution. On the other hand, in a number of respects it seems to be a peculiar region. We use recent 3D maps of the IS dust based on color excess data, as well as former maps of the gas, to illustrate how such maps can shed additional light on the specificity of the local medium, its history and abundance pattern. The 3D maps reveal a gigantic cavity located in the third quadrant and connected to the Local Bubble, the latter itself running into an elongated cavity toward l ≃ 70°. Most nearby cloud complexes of the so-called Gould belt, but also more distant clouds, seem to border a large fraction of this entire structure. The IS medium within the large cavity appears ionized and dust-poor, as deduced from ionized calcium and neutral sodium to dust ratios. The geometry favors the proposed scenario of Gould belt-Local Arm formation through the braking of a supercloud by interaction with a spiral density wave (Olano 2001). The highly variable D/H ratio in the nearby IS gas may also be spatially related to the global structure. We speculate about potential consequences of the supercloud encounter and dust-gas decoupling during its braking, in particular the formation of strong inhomogeneities in both the dust-to-gas abundance ratio and the dust characteristics: (i) during the ≃ 500 Myr prior to the collision, dust within the supercloud may have been gradually, strongly enriched in D due to an absence of strong stellar formation and preferential adsorption of D (Jura 1982; Draine 2003); (ii) during its interaction with the Plane and the braking, dust-rich and dust-poor regions may have formed due to differential gas drag, the dust being more concentrated in the dense areas; strong radiation pressure from OB associations at the boundary of the left-behind giant cavity may have also helped

  18. Recognition by Humans and Pigeons of Novel Views of 3-D Objects and Their Photographs

    ERIC Educational Resources Information Center

    Friedman, Alinda; Spetch, Marcia L.; Ferrey, Anne

    2005-01-01

    Humans and pigeons were trained to discriminate between 2 views of actual 3-D objects or their photographs. They were tested on novel views that were either within the closest rotational distance between the training views (interpolated) or outside of that range (extrapolated). When training views were 60° apart, pigeons, but not humans,…

  19. Venus - 3D Perspective View of Eistla Regio

    NASA Technical Reports Server (NTRS)

    1991-01-01

    A portion of western Eistla Regio is displayed in this three-dimensional perspective view of the surface of Venus. The viewpoint is located 1,310 kilometers (812 miles) southwest of Gula Mons at an elevation of 0.78 kilometer (0.48 mile). The view is to the northeast with Gula Mons appearing on the horizon. Gula Mons, a 3 kilometer (1.86 mile) high volcano, is located at approximately 22 degrees north latitude, 359 degrees east longitude. The impact crater Cunitz, named for the astronomer and mathematician Maria Cunitz, is visible in the center of the image. The crater is 48.5 kilometers (30 miles) in diameter and is 215 kilometers (133 miles) from the viewer's position. Magellan synthetic aperture radar data is combined with radar altimetry to develop a three-dimensional map of the surface. Rays cast in a computer intersect the surface to create a three-dimensional perspective view. Simulated color and a digital elevation map developed by the U.S. Geological Survey are used to enhance small-scale structure. The simulated hues are based on color images recorded by the Soviet Venera 13 and 14 spacecraft. The image was produced at the JPL Multimission Image Processing Laboratory and is a single frame from a video released at the March 5, 1991, JPL news conference.

  20. Venus - 3D Perspective View of Gula Mons

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Gula Mons is displayed in this computer-simulated view of the surface of Venus. The viewpoint is located 110 kilometers (68 miles) southwest of Gula Mons at the same elevation as the summit, 3 kilometers (1.9 miles) above Eistla Regio. Lava flows extend for hundreds of kilometers across the fractured plains. The view is to the northeast with Gula Mons appearing at the center of the image. Gula Mons, a 3 kilometer (1.9 mile) high volcano, is located at approximately 22 degrees north latitude, 359 degrees east longitude in western Eistla Regio. Magellan synthetic aperture radar data is combined with radar altimetry to produce a three-dimensional map of the surface. Rays cast in a computer intersect the surface to create a three-dimensional perspective view. Simulated color and a digital elevation map developed by the U.S. Geological Survey are used to enhance small-scale structure. The simulated hues are based on color images recorded by the Soviet Venera 13 and 14 spacecraft. The image was produced by the JPL Multimission Image Processing Laboratory and is a single frame from a video released at the March 5, 1991, JPL news conference.

  1. Venus - 3D Perspective View of Maat Mons

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Maat Mons is displayed in this three-dimensional perspective view of the surface of Venus. The viewpoint is located 560 kilometers (347 miles) north of Maat Mons at an elevation of 1.7 kilometers (1 mile) above the terrain. Lava flows extend for hundreds of kilometers across the fractured plains shown in the foreground, to the base of Maat Mons. The view is to the south with Maat Mons appearing at the center of the image on the horizon. Maat Mons, an 8-kilometer (5 mile) high volcano, is located at approximately 0.9 degrees north latitude, 194.5 degrees east longitude. Maat Mons is named for an Egyptian goddess of truth and justice. Magellan synthetic aperture radar data is combined with radar altimetry to develop a three-dimensional map of the surface. The vertical scale in this perspective has been exaggerated 22.5 times. Rays cast in a computer intersect the surface to create a three-dimensional perspective view. Simulated color and a digital elevation map developed by the U.S. Geological Survey are used to enhance small-scale structure. The simulated hues are based on color images recorded by the Soviet Venera 13 and 14 spacecraft. The image was produced at the JPL Multimission Image Processing Laboratory.

  2. Efficient view based 3-D object retrieval using Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Jain, Yogendra Kumar; Singh, Roshan Kumar

    2013-12-01

    Recent research effort has been dedicated to view-based 3-D object retrieval because of the highly discriminative properties of 3-D objects and their multi-view representation. State-of-the-art methods depend heavily on their own camera array settings for capturing views of the 3-D object and use complex Zernike descriptors and HAC for representative view selection, which limits their practical application and makes retrieval inefficient. Therefore, an efficient and effective algorithm is required for 3-D object retrieval. To move toward a general framework for efficient 3-D object retrieval that is independent of the camera array setting and avoids representative view selection, we propose an Efficient View-Based 3-D Object Retrieval (EVBOR) method using a Hidden Markov Model (HMM). In this framework, each object is represented by an independent set of views, meaning views can be captured from any direction without any camera array restriction. The views (including query views) are clustered to generate view clusters, which are then used to build the query model with an HMM. In our proposed method, the HMM is used in two ways: in training (HMM estimation) and in retrieval (HMM decoding). The query model is trained using these view clusters, and retrieval is performed by combining the query model with the HMM. The proposed approach removes the static camera array setting for view capturing and can be applied to any 3-D object database to retrieve 3-D objects efficiently and effectively. Experimental results demonstrate that the proposed scheme performs better than existing methods.
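    A minimal sketch of the train/score idea (HMM estimation for the query model, HMM scoring for retrieval) is given below. It uses the third-party hmmlearn package and synthetic stand-in view descriptors, and it skips the view-clustering stage; all names, feature dimensions, and parameter choices are assumptions rather than the EVBOR implementation.

```python
import numpy as np
from hmmlearn import hmm   # third-party package: pip install hmmlearn

rng = np.random.default_rng(0)

def fake_view_features(center, n_views=20, dim=8):
    """Stand-in for per-view descriptors of one 3-D object (one row per view)."""
    return center + 0.1 * rng.standard_normal((n_views, dim))

# Query object and a small database of candidate objects.
query_views = fake_view_features(np.zeros(8))
database = {
    "chair":  fake_view_features(np.zeros(8)),        # similar to the query
    "teapot": fake_view_features(np.ones(8) * 2.0),   # different object
}

# Training step ("HMM estimate"): fit a query model on the query's view sequence.
query_model = hmm.GaussianHMM(n_components=3, covariance_type="diag",
                              n_iter=50, random_state=0)
query_model.fit(query_views)

# Retrieval step ("HMM decode"): rank database objects by log-likelihood
# of their view sequences under the query model.
ranking = sorted(database.items(),
                 key=lambda kv: query_model.score(kv[1]), reverse=True)
for name, views in ranking:
    print(name, round(query_model.score(views), 1))
```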

  3. Venus - 3D Perspective View of Idem-Kuva

    NASA Technical Reports Server (NTRS)

    1991-01-01

    A corona is displayed in this computer-simulated view of the surface of Venus. The viewpoint is located 150 kilometers (93 miles) north of Gula Mons at a height of 1.6 kilometers (1 mile) above the corona. The corona has a diameter of 97 kilometers (60 miles). The proposed name for the corona is Idem-Kuva, a Finno-Ugric harvest spirit. Lava flows extend for hundreds of kilometers across the fractured plains shown in the background. The viewpoint is to the north with Gula Mons to the south. Magellan synthetic aperture radar data is combined with radar altimetry to produce a three-dimensional map of the surface. Rays cast in a computer intersect the surface to create a three-dimensional perspective view. Simulated color and a digital elevation map developed by the U.S. Geological Survey are used to enhance small-scale structure. The simulated hues are based on color images recorded by the Soviet Venera 13 and 14 spacecraft. The image was produced at the JPL Multimission Image Processing Laboratory and is a single frame from a video released at a March 5, 1991, JPL news conference.

  4. Dual-view integral imaging 3D display using polarizer parallax barriers.

    PubMed

    Wu, Fei; Wang, Qiong-Hua; Luo, Cheng-Gao; Li, Da-Hai; Deng, Huan

    2014-04-01

    We propose a dual-view integral imaging (DVII) 3D display using polarizer parallax barriers (PPBs). The DVII 3D display consists of a display panel, a microlens array, and two PPBs. The elemental images (EIs) displayed on the left and right half of the display panel are captured from two different 3D scenes, respectively. The lights emitted from two kinds of EIs are modulated by the left and right half of the microlens array to present two different 3D images, respectively. A prototype of the DVII 3D display is developed, and the experimental results agree well with the theory.

  5. The Twin Peaks in 3-D, as Viewed by the Mars Pathfinder IMP Camera

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The Twin Peaks are modest-size hills to the southwest of the Mars Pathfinder landing site. They were discovered on the first panoramas taken by the IMP camera on the 4th of July, 1997, and subsequently identified in Viking Orbiter images taken over 20 years ago. The peaks are approximately 30-35 meters (~100 feet) tall. North Twin is approximately 860 meters (2800 feet) from the lander, and South Twin is about a kilometer away (3300 feet). The scene includes bouldery ridges and swales or 'hummocks' of flood debris that range from a few tens of meters away from the lander to the distance of the South Twin Peak. The large rock at the right edge of the scene is nicknamed 'Hippo'. This rock is about a meter (3 feet) across and 25 meters (80 feet) distant.

    This view of the Twin Peaks was produced by combining 4 individual 'Superpan' scenes from the left and right eyes of the IMP camera to cover both peaks. Each eye's mosaic consists of 8 individual frames (left eye) and 7 frames (right eye) taken with different color filters that were enlarged by 500% and then co-added using Adobe Photoshop to produce, in effect, a super-resolution panchromatic frame that is sharper than an individual frame would be.

    The anaglyph view of the Twin Peaks was produced by combining the left and right eye mosaics (above) by assigning the left eye view to the red color plane and the right eye view to the green and blue color planes (cyan), to produce a stereo anaglyph mosaic. This mosaic can be viewed in 3-D on your computer monitor or in color print form by wearing red-blue 3-D glasses.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is a division of the California Institute of Technology (Caltech). The IMP was developed by the University of Arizona Lunar and Planetary

  6. Venus - 3D Perspective View of Maat Mons

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Maat Mons is displayed in this computer-generated three-dimensional perspective of the surface of Venus. The viewpoint is located 634 kilometers (393 miles) north of Maat Mons at an elevation of 3 kilometers (2 miles) above the terrain. Lava flows extend for hundreds of kilometers across the fractured plains shown in the foreground, to the base of Maat Mons. The view is to the south with the volcano Maat Mons appearing at the center of the image on the horizon and rising to almost 5 kilometers (3 miles) above the surrounding terrain. Maat Mons is located at approximately 0.9 degrees north latitude, 194.5 degrees east longitude, with a peak that ascends to 8 kilometers (5 miles) above the mean surface. Maat Mons is named for an Egyptian goddess of truth and justice. Magellan synthetic aperture radar data is combined with radar altimetry to develop a three-dimensional map of the surface. The vertical scale in this perspective has been exaggerated 10 times. Rays cast in a computer intersect the surface to create a three-dimensional perspective view. Simulated color and a digital elevation map developed by the U.S. Geological Survey are used to enhance small-scale structure. The simulated hues are based on color images recorded by the Soviet Venera 13 and 14 spacecraft. The image was produced by the Solar System Visualization project and the Magellan Science team at the JPL Multimission Image Processing Laboratory and is a single frame from a video released at the April 22, 1992 news conference.

  7. Venus - 3D Perspective View of Eistla Regio

    NASA Technical Reports Server (NTRS)

    1991-01-01

    A portion of western Eistla Regio is displayed in this three-dimensional perspective view of the surface of Venus. The viewpoint is located 1,100 kilometers (682 miles) northeast of Gula Mons at an elevation of 7.5 kilometers (4.6 miles). Lava flows extend for hundreds of kilometers across the fractured plains shown in the foreground to the base of Gula Mons. The viewpoint is to the southwest with Gula Mons appearing at the left just below the horizon. Gula Mons, a 3 kilometer (1.8 mile) high volcano, is located at approximately 22 degrees north latitude, 359 degrees east longitude. Sif Mons, a volcano with a diameter of 300 kilometers (180 miles) and a height of 2 kilometers (1.2 miles), appears to the right of Gula Mons. The distance between Sif Mons and Gula Mons is approximately 730 kilometers (453 miles). Magellan synthetic aperture radar data is combined with radar altimetry to develop a three-dimensional map of the surface. Ray tracing (rays as if from a light source are cast in a computer to intersect the surface) simulates a perspective view. Simulated color and a digital elevation map developed by Randy Kirk of the U.S. Geological Survey are used to enhance small-scale structure. The simulated hues are based on color images recorded by the Soviet Venera 13 and 14 spacecraft. The image was produced at the JPL Multimission Image Processing Laboratory by Eric De Jong, Jeff Hall and Myche McAuley, and is a single frame from a video released at a March 5, 1991, JPL news conference.

  8. 3D printed glass: surface finish and bulk properties as a function of the printing process

    NASA Astrophysics Data System (ADS)

    Klein, Susanne; Avery, Michael P.; Richardson, Robert; Bartlett, Paul; Frei, Regina; Simske, Steven

    2015-03-01

    It is impossible to print glass directly from a melt, layer by layer. Glass is not only very sensitive to temperature gradients between different layers but also to the cooling process. To achieve a glass state, the melt has to be cooled rapidly to avoid crystallization of the material and then annealed to remove cooling-induced stress. In 3D printing of glass, the objects are shaped at room temperature and then fired. The material properties of the final objects depend crucially on the frit size of the glass powder used during shaping, the chemical formula of the binder and the firing procedure. For frit sizes below 250 μm, we seem to find a constant volume of pores of less than 5%. Decreasing frit size leads to an increase in the number of pores, which in turn increases opacity. The two different binders, 2-hydroxyethyl cellulose and carboxymethylcellulose sodium salt, generate very different porosities. The porosity of samples with 2-hydroxyethyl cellulose is similar to frit-only samples, whereas carboxymethylcellulose sodium salt creates a glass foam. The surface finish is determined by the material the glass comes into contact with during firing.

  9. Optimal 3D Viewing with Adaptive Stereo Displays for Advanced Telemanipulation

    NASA Technical Reports Server (NTRS)

    Lee, S.; Lakshmanan, S.; Ro, S.; Park, J.; Lee, C.

    1996-01-01

    A method of optimal 3D viewing based on adaptive displays of stereo images is presented for advanced telemanipulation. The method provides the viewer with the capability of accurately observing a virtual 3D object or local scene of his/her choice with minimum distortion.

  10. Glass formation - A contemporary view

    NASA Technical Reports Server (NTRS)

    Uhlmann, D. R.

    1983-01-01

    The process of glass formation is discussed from several perspectives. Particular attention is directed to kinetic treatments of glass formation and to the question of how fast a given liquid must be cooled in order to form a glass. Specific consideration is paid to the calculation of critical cooling rates for glass formation, to the effects of nucleating heterogeneities and transients in nucleation on the critical cooling rates, to crystallization on reheating a glass, to the experimental determination of nucleation rates and barriers to crystal nucleation, and to the characteristics of materials which are most conducive to glass formation.

  11. An update on transesophageal echocardiography views 2016: 2D versus 3D tee views

    PubMed Central

    Kapoor, Poonam Malhotra; Muralidhar, Kanchi; Nanda, Navin C.; Mehta, Yatin; Shastry, Naman; Irpachi, Kalpana; Baloria, Aditya

    2016-01-01

    Transesophageal echocardiography (TEE), first introduced in 1980, has become the standard of practice in most cardiac operating rooms to facilitate surgical decision making. As a diagnostic tool, TEE is now an integral part of intraoperative monitoring practice in cardiac anaesthesiology. Practice guidelines for perioperative transesophageal echocardiography, systematically developed recommendations that assist in the management of surgical patients, were developed by the Indian Association of Cardiac Anaesthesiologists (IACTA). This update relates to the former IACTA practice guidelines published in 2013 and the ASE/EACTA guidelines of 2015. The current authors believe that the basic echocardiographer should be familiar with the technical skills for acquiring 28 cross-sectional imaging planes. These 28 cross sections also provide the format for digital acquisition and storage of a comprehensive TEE examination, and 5 additional views introduced for different clinical scenarios in recent times are added. A comparison of 2D TEE views versus 3D TEE views is attempted for the first time in the literature in this manuscript. Since variability exists in the precise anatomic orientation between the heart and the oesophagus in individual patients, an attempt has been made to provide specific criteria based on identifiable anatomic landmarks to improve the reproducibility and consistency of image acquisition for each of the standard cross sections. PMID:27762249

  12. Advanced Visualization of Experimental Data in Real Time Using LiveView3D

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; Fleming, Gary A.

    2006-01-01

    LiveView3D is a software application that imports and displays a variety of wind tunnel derived data in an interactive virtual environment in real time. LiveView3D combines the use of streaming video fed into a three-dimensional virtual representation of the test configuration with networked communications to the test facility Data Acquisition System (DAS). This unified approach to real time data visualization provides a unique opportunity to comprehend very large sets of diverse forms of data in a real time situation, as well as in post-test analysis. This paper describes how LiveView3D has been implemented to visualize diverse forms of aerodynamic data gathered during wind tunnel experiments, most notably at the NASA Langley Research Center Unitary Plan Wind Tunnel (UPWT). Planned future developments of the LiveView3D system are also addressed.

  13. Fabrication of 3D solenoid microcoils in silica glass by femtosecond laser wet etch and microsolidics

    NASA Astrophysics Data System (ADS)

    Meng, Xiangwei; Yang, Qing; Chen, Feng; Shan, Chao; Liu, Keyin; Li, Yanyang; Bian, Hao; Du, Guangqing; Hou, Xun

    2015-02-01

    This paper reports a flexible fabrication method for 3D solenoid microcoils in silica glass. The method consists of femtosecond laser wet etching (FLWE) and a microsolidics process. A 3D microchannel with high aspect ratio is fabricated by an improved FLWE method. In the microsolidics process, an alloy is chosen as the conductive metal; the microwires are formed by injecting liquid alloy into the microchannel and allowing the alloy to cool and solidify. The high-melting-point alloy microwires overcome working-temperature limitations and improve the electrical properties. The geometry, height and diameter of the microcoils are flexibly controlled by the pre-designed laser writing path, the laser power and the etching time. The 3D microcoils can provide a uniform magnetic field and can be widely integrated in many magnetic microsystems.

  14. Imaging Techniques for Dense 3D reconstruction of Swimming Aquatic Life using Multi-view Stereo

    NASA Astrophysics Data System (ADS)

    Daily, David; Kiser, Jillian; McQueen, Sarah

    2016-11-01

    Understanding the movement characteristics of how various species of fish swim is an important step to uncovering how they propel themselves through the water. Previous methods have focused on profile capture methods or sparse 3D manual feature point tracking. This research uses an array of 30 cameras to automatically track hundreds of points on a fish as they swim in 3D using multi-view stereo. Blacktip sharks, sting rays, puffer fish, turtles and more were imaged in collaboration with the National Aquarium in Baltimore, Maryland using the multi-view stereo technique. The processes for data collection, camera synchronization, feature point extraction, 3D reconstruction, 3D alignment, biological considerations, and lessons learned will be presented. Preliminary results of the 3D reconstructions will be shown and future research into mathematically characterizing various bio-locomotive maneuvers will be discussed.

  15. 3D multi-view system using electro-wetting liquid lenticular lenses

    NASA Astrophysics Data System (ADS)

    Won, Yong Hyub; Kim, Junoh; Kim, Cheoljoong; Shin, Dooseub; Lee, Junsik; Koo, Gyohyun

    2016-06-01

    Lenticular multi-view systems have great potential for three-dimensional image realization. This paper introduces the fabrication of a liquid lenticular lens array and a method for increasing the number of viewpoints at the same resolution. The tunable liquid lens array produces three-dimensional images using the electro-wetting principle, in which surface tension is changed by an applied voltage. The liquid lenticular device consists of a chamber, two different liquids and a sealing plate. To fabricate the chamber, a <100> silicon wafer is wet-etched in KOH solution, yielding a trapezoid-shaped chamber after a certain time. The chamber's slanted walls are advantageous for electro-wetting, achieving high diopter. Electroplating is used to make a nickel mold, and a poly(methyl methacrylate) (PMMA) chamber is fabricated through an embossing process. Indium tin oxide (ITO) is sputtered, and parylene C and Teflon AF1600 are deposited as the dielectric and hydrophobic layers, respectively. Two immiscible liquids are injected, and a glass sealing plate is attached with polycarbonate (PC) gaskets and sealed by UV adhesive. The two immiscible liquids are DI water and a mixture of 1-chloronaphthalene and dodecane. The completed lenticular lens shows 2D and 3D images when certain voltages are applied. The dioptric power and operation speed of the lenticular lens array are measured. A novel idea of increasing the number of viewpoints by an electrode separation process is also proposed: the left and right electrodes of a lenticular lens can be driven by different voltages, resulting in a tilted optical axis. By switching the optical axis quickly, twice as many viewpoints can be achieved at the same pixel resolution.

  16. 3D cell culture to determine in vitro biocompatibility of bioactive glass in association with chitosan.

    PubMed

    Bédouin, Y; Pellen Mussi, P; Tricot-Doleux, S; Chauvel-Lebret, D; Auroy, P; Ravalec, X; Oudadesse, H; Perez, F

    2015-01-01

    This study reports the in vitro biocompatibility of a composite biomaterial composed of 46S6 bioactive glass in association with chitosan (CH) by using 3D osteoblast culture of SaOS2. The 46S6 and CH composite (46S6-CH) forms small hydroxyapatite crystals on its surface after only three days immersion in the simulated body fluid. For 2D osteoblast culture, a significant increase in cell proliferation was observed after three days of contact with 46S6 or 46S6-CH-immersed media. After six days, 46S6-CH led to a significant increase in cell proliferation (128%) compared with pure 46S6 (113%) and pure CH (122%). For 3D osteoblast culture, after six days of culture, there was an increase in gene expression of markers of the early osteoblastic differentiation (RUNX2, ALP, COL1A1). Geometric structures corresponding to small apatite clusters were observed by SEM on the surface of the spheroids cultivated with 46S6 or 46S6-CH-immersed media. We showed different cellular responses depending on the 2D and 3D cell culture model. The induction of osteoblast differentiation in the 3D cell culture explained the differences of cell proliferation in contact with 46S6, CH or 46S6-CH-immersed media. This study confirmed that the 3D cell culture model is a very promising tool for in vitro biological evaluation of bone substitutes' properties.

  17. Femtosecond laser 3D nanofabrication in glass: enabling direct write of integrated micro/nanofluidic chips

    NASA Astrophysics Data System (ADS)

    Cheng, Ya; Liao, Yang; Sugioka, Koji

    2014-03-01

    The creation of complex three-dimensional (3D) fluidic systems composed of hollow micro- and nanostructures embedded in transparent substrates has attracted significant attention from both scientific and applied research communities. However, it is by now still a formidable challenge to build 3D micro- and nanofluidic structures with arbitrary configurations using conventional planar lithographic fabrication methods. As a direct and maskless fabrication technique, femtosecond laser micromachining provides a straightforward approach for high-precision spatial-selective modification inside transparent materials through nonlinear optical absorption. Here, we demonstrate rapid fabrication of high-aspect-ratio micro- and/or nanofluidic structures with various 3D configurations in glass substrates by femtosecond laser direct writing. Based on this approach, we demonstrate several functional micro- and nanofluidic devices including a 3D passive microfluidic mixer, a capillary electrophoresis (CE) analysis chip, and an integrated micro-nanofluidic system for single DNA analysis. This technology offers new opportunities to develop novel 3D micro-nanofluidic systems for a variety of lab-on-a-chip applications.

  18. Dual-view integral imaging 3D display by using orthogonal polarizer array and polarization switcher.

    PubMed

    Wang, Qiong-Hua; Ji, Chao-Chao; Li, Lei; Deng, Huan

    2016-01-11

    In this paper, a dual-view integral imaging three-dimensional (3D) display consisting of a display panel, two orthogonal polarizer arrays, a polarization switcher, and a micro-lens array is proposed. Two elemental image arrays for two different 3D images are presented by the display panel alternately, and the polarization switcher controls the polarization direction of the light rays synchronously. The two elemental image arrays are modulated by their corresponding and neighboring micro-lenses of the micro-lens array, and reconstruct two different 3D images in viewing zones 1 and 2, respectively. A prototype of the dual-view II 3D display is developed, and it shows good performance.

  19. Facile synthesis 3D flexible core-shell graphene/glass fiber via chemical vapor deposition

    PubMed Central

    2014-01-01

    Direct deposition of graphene layers on flexible glass fiber surfaces to form three-dimensional (3D) core-shell structures is demonstrated using a two-heating-reactor chemical vapor deposition system. The two-heating reactor provides sufficient, well-proportioned floating C atoms and offers a facile route to low-temperature deposition. Graphene layers, controlled by changing the growth time, can be grown on the surface of wire-type glass fibers with diameters from 30 nm to 120 μm. A core-shell graphene/glass fiber deposition mechanism is proposed, suggesting that 3D graphene films can be deposited on any suitable wire-type substrate. These results open a facile way for direct and high-efficiency deposition of transfer-free graphene layers on low-temperature dielectric wire-type substrates. PACS 81.05.U-; 81.07.-b; 81.15.Gh PMID:25170331

  20. 3D Modeling from Multi-views Images for Cultural Heritage in Wat-Pho, Thailand

    NASA Astrophysics Data System (ADS)

    Soontranon, N.; Srestasathiern, P.; Lawawirojwong, S.

    2015-08-01

    In Thailand, there are several types of (tangible) cultural heritage. This work focuses on 3D modeling of heritage objects from multi-view images. The images are acquired using a DSLR camera which costs around 1,500 (camera and lens). Compared with a 3D laser scanner, the camera is cheaper and lighter, so it is accessible to public users and convenient for reaching narrow areas. The acquired images cover various sculptures and architectures in Wat-Pho, a Buddhist temple located behind the Grand Palace (Bangkok, Thailand). Wat-Pho is known as the temple of the reclining Buddha and the birthplace of traditional Thai massage. To compute the 3D models, the workflow is separated into the following steps: data acquisition, image matching, image calibration and orientation, dense matching, and point cloud processing. For this initial work, small heritage objects less than 3 meters in height are considered for the experimental results. A set of multi-view images of an object of interest is used as input data for 3D modeling. In our experiments, 3D models are obtained with the MICMAC (open source) software developed by IGN, France. The output 3D models are represented using standard formats for 3D point clouds and triangulated surfaces such as .ply, .off, .obj, etc. To obtain efficient 3D models, post-processing techniques such as noise reduction, surface simplification and reconstruction are required for the final results. The reconstructed 3D models can be made available for public access via websites, DVDs or printed materials. The highly accurate 3D models can also be used as reference data for heritage objects that must be restored following deterioration over time, natural disasters, etc.
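    The image-matching step of such a workflow is typically built on local feature matching. The snippet below is a generic, hypothetical illustration of that step using OpenCV's SIFT detector and Lowe's ratio test on two synthetic views; it is not the MICMAC pipeline used in the paper, and the blur/shift parameters are arbitrary.

```python
import numpy as np
import cv2   # opencv-python

# Two synthetic "views": a smooth textured pattern and a shifted copy,
# standing in for two overlapping photographs of a sculpture.
rng = np.random.default_rng(0)
base = (rng.random((480, 640)) * 255).astype(np.uint8)
base = cv2.GaussianBlur(base, (0, 0), 3)                  # smooth noise gives stable blobs
base = cv2.normalize(base, None, 0, 255, cv2.NORM_MINMAX) # restore full contrast
img1 = base
img2 = np.roll(base, 15, axis=1)                          # simulated camera motion

# Detect and describe local features in both views.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test on brute-force k-nearest-neighbour matches.
matcher = cv2.BFMatcher()
pairs = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in (p for p in pairs if len(p) == 2)
        if m.distance < 0.75 * n.distance]
print(f"{len(kp1)} / {len(kp2)} keypoints, {len(good)} good matches")
```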

  1. Augmented Reality Imaging System: 3D Viewing of a Breast Cancer

    PubMed Central

    Douglas, David B.; Boone, John M.; Petricoin, Emanuel; Liotta, Lance; Wilson, Eugene

    2016-01-01

    Objective To display images of breast cancer from a dedicated breast CT using Depth 3-Dimensional (D3D) augmented reality. Methods A case of breast cancer imaged using contrast-enhanced breast CT (computed tomography) was viewed with the augmented reality imaging system, which uses a head display unit (HDU) and a joystick control interface. Results The augmented reality system provided 3D viewing of the breast mass with head-position tracking, stereoscopic depth perception, focal-point convergence and a 3D cursor; the joystick enabled a fly-through with visualization of the spiculations extending from the breast cancer. Conclusion The augmented reality system provided 3D visualization of the breast cancer with depth perception and visualization of the mass's spiculations. The augmented reality system should be further researched to determine its utility in clinical practice. PMID:27774517

  2. Linear programming approach to optimize 3D data obtained from multiple view angiograms

    NASA Astrophysics Data System (ADS)

    Noël, Peter B.; Xu, Jinhui; Hoffmann, Kenneth R.; Singh, Vikas; Schafer, Sebastian; Walczak, Alan M.

    2007-03-01

    Three-dimensional (3D) vessel data from CTA or MRA are not always available prior to or during endovascular interventional procedures, whereas multiple 2D projection angiograms often are. Unfortunately, patient movement, table movement, and gantry sag during angiographic procedures can lead to large errors in gantry-based imaging geometries and thereby to incorrect 3D data. Therefore, we are developing methods for combining vessel data from multiple 2D angiographic views obtained during interventional procedures to provide 3D vessel data during these procedures. Multiple 2D projection views of carotid vessels are obtained, and the vessel centerlines are indicated. For each pair of views, endpoints of the 3D centerlines are reconstructed using triangulation based on the provided gantry geometry. Previous investigations indicated that translation errors were the primary source of error in the reconstructed 3D data. Therefore, the errors in the translations relating the imaging systems are corrected by minimizing the L1 distance between the reconstructed endpoints, after which the 3D centerlines are reconstructed using epipolar constraints for every pair of views. Evaluations were performed using simulations, phantom data, and clinical cases. In simulation and phantom studies, the RMS error decreased from 6.0 mm obtained with biplane approaches to 0.5 mm with our technique. Centerlines in clinical cases are smoother and more consistent than those calculated from individual biplane pairs. The 3D centerlines are calculated in about 2 seconds. These results indicate that reliable 3D vessel data can be generated for treatment planning or revision during interventional procedures.
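    The per-pair reconstruction step relies on standard two-view triangulation. The sketch below shows a generic linear (DLT) triangulation of one corresponding point under assumed projection matrices; it is only an illustration of that building block and does not include the authors' L1 translation correction or epipolar centerline matching.

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2 : (3, 4) projection matrices of the two views.
    x1, x2 : (2,) pixel coordinates of the corresponding point in each view.
    Returns the 3-D point in world coordinates.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Toy geometry: two views looking down the z axis, separated along x (assumed values).
K = np.array([[1000.0, 0, 256], [0, 1000.0, 256], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-50.0], [0.0], [0.0]])])

X_true = np.array([10.0, -5.0, 800.0, 1.0])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(np.round(triangulate_dlt(P1, P2, x1, x2), 3))   # recovers ~[10, -5, 800]
```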

  3. A stereo matching model observer for stereoscopic viewing of 3D medical images

    NASA Astrophysics Data System (ADS)

    Wen, Gezheng; Markey, Mia K.; Muralidhar, Gautam S.

    2014-03-01

    Stereoscopic viewing of 3D medical imaging data has the potential to increase the detection of abnormalities. We present a new stereo model observer inspired by the characteristics of stereopsis in human vision. Given a stereo pair of images of an object (i.e., left and right images separated by a small displacement), the model observer first finds the corresponding points between the two views and then fuses them to create a 2D cyclopean view. Assuming that the cyclopean view has extracted most of the 3D information present in the stereo pair, a channelized Hotelling observer (CHO) can be utilized to make decisions. We conduct a simulation study that attempts to mimic the detection of breast lesions during stereoscopic viewing of breast tomosynthesis projection images. We render voxel datasets that contain random 3D power-law noise to model normal breast tissue at various breast densities. A 3D Gaussian signal is added to some of the datasets to model the presence of a breast lesion. By changing the separation angle between the two views, multiple stereo pairs of projection images are generated for each voxel dataset. The performance of the model is evaluated in terms of the accuracy of binary decisions on the presence of the simulated lesions.
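    The decision stage can be summarised in a few lines: project each (fused cyclopean) image onto a small set of channels, estimate the channel covariance and the mean signal response, and form the Hotelling template. The sketch below is a generic, hypothetical CHO with simple Gaussian channels on synthetic data; the channel choice, image size, and signal model are assumptions and not the authors' observer.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 32                                    # cyclopean image size (N x N)

def gaussian_channels(n_channels=4):
    """Radially symmetric Gaussian channels of increasing width (a common CHO choice)."""
    y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
    r2 = x ** 2 + y ** 2
    chans = [np.exp(-r2 / (2 * (2.0 * 2 ** k) ** 2)).ravel() for k in range(n_channels)]
    return np.stack(chans, axis=1)        # (N*N, n_channels)

U = gaussian_channels()

def make_images(n, with_signal):
    """Synthetic cyclopean images: white noise, optionally with a faint Gaussian blob."""
    imgs = rng.standard_normal((n, N, N))
    if with_signal:
        y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
        imgs += 0.3 * np.exp(-(x ** 2 + y ** 2) / (2 * 3.0 ** 2))
    return imgs.reshape(n, -1)

signal_present = make_images(200, True)
signal_absent = make_images(200, False)

# Channelized Hotelling observer: template w = S_v^{-1} (mean_signal - mean_noise).
v_s, v_n = signal_present @ U, signal_absent @ U
S_v = 0.5 * (np.cov(v_s, rowvar=False) + np.cov(v_n, rowvar=False))
w = np.linalg.solve(S_v, v_s.mean(0) - v_n.mean(0))

t_s, t_n = v_s @ w, v_n @ w
d_prime = (t_s.mean() - t_n.mean()) / np.sqrt(0.5 * (t_s.var() + t_n.var()))
print(f"CHO detectability d' = {d_prime:.2f}")
```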

  4. Constructing 3-D Models Of A Scene From Planned Multiple Views

    NASA Astrophysics Data System (ADS)

    Xie, Shun-en; Calvert, Thomas W.

    1987-03-01

    Whether in an office, a warehouse or a home, the mobile robot must often work in a cluttered environment; although the basic layout of the environment may be known in advance, the nature and placement of objects within the environment will generally be unknown. Thus the intelligent mobile robot must be able to sense its environment with a vision system and it must be able to analyse multiple views to construct 3-d models of the objects it encounters. Since this analysis results in a heavy computational load, it is important to minimize the number of views and to use a planner to dynamically select a minimal set of vantage viewpoints. This paper discusses an approach to this general problem and describes a prototype system for a mobile intelligent robot which can construct 3-d models from planned sequential views. The principal components of this system are: (1) decomposition of a framed view into its components and the construction of partial 3-d descriptions of the view, (2) matching of the known environment to the partial 3-d descriptions of the view, (3) matching of partial descriptions of bodies derived from the current view with partial models constructed from previous views, (4) identification of new information in the current view and use of the information to update the models, (5) identification of unknown parts of partially constructed body models so that further viewpoints can be planned, (6) construction of a partial map of the scene and updating with each successive view, (7) selection of new viewpoints to maximize the information returned by a planner, (8) use of an expert system to convert the original boundary representations of the bodies to a new Constructive Solid Geometry-Extended Enhanced Spherical Image (CSG-EESI) representation to facilitate the recovery of structural information. Although the complete prototype system has not been implemented, its key components have been implemented and tested.

  5. Integration of a 3D perspective view in the navigation display: featuring pilot's mental model

    NASA Astrophysics Data System (ADS)

    Ebrecht, L.; Schmerwitz, S.

    2015-05-01

    Synthetic vision systems (SVS) are a spreading technology in the avionics domain. Several studies have demonstrated enhanced situational awareness when using synthetic vision. Since the introduction of synthetic vision, the primary flight display (PFD) and the navigation display (ND) have evolved steadily. The main improvements of the ND comprise the colored representation of enhanced ground proximity warning system (EGPWS), weather radar, and TCAS information. Synthetic vision seems to offer high potential to further enhance cockpit display systems. In particular, given the current trend of adding a 3D perspective view to the SVS-PFD while leaving the navigational content and the methods of interaction unchanged, the question arises whether, and how, the gap between the two displays might evolve into a serious problem. This issue becomes important in relation to the transition and combination of strategic and tactical flight guidance. Hence, the pros and cons of 2D and 3D views in general, as well as the gap between the egocentric perspective 3D view of the PFD and the exocentric 2D top and side views of the ND, are discussed. Further, a concept for the integration of a 3D perspective view, i.e., a bird's-eye view, into the synthetic vision ND is presented. The combination of 2D and 3D views in the ND enables a better correlation between the ND and the PFD and supports the building of the pilot's mental model. The authors believe it will improve situational and spatial awareness, and it might further raise the safety margin when operating in mountainous areas.

  6. Focus-tunable multi-view holographic 3D display using a 4k LCD panel

    NASA Astrophysics Data System (ADS)

    Lin, Qiaojuan; Sang, Xinzhu; Chen, Zhidong; Yan, Binbin; Yu, Chongxiu; Wang, Peng; Dou, Wenhua; Xiao, Liquan

    2016-10-01

    A focus-tunable multi-view holographic three-dimensional (3D) display system with a 10.1 inch 4K liquid crystal display (LCD) panel is presented. In the proposed synthesizing method, the computer-generated hologram (CGH) does not require calculations of light diffraction. When multiple rays pass through one point of a 3D image and enter the pupil simultaneously, the eyes can focus on the point according to the depth cue. Benefiting from the holograms, the dense multiple perspective viewpoints of the 3D object are recorded and combined into the CGH in a dense-super-view way, which makes two or more rays emitted from the same point in the reconstructed light field enter the pupil simultaneously. In general, a wavefront converging to a viewpoint carries the amplitude distribution of the multi-view image on the hologram plane and the phase distribution of a spherical wave converging to that viewpoint. Here, the wavefronts are calculated for all the multi-view images and then summed to obtain the object wave on the hologram plane. Moreover, the reference light (converging light) is adopted to converge the central diffraction wave from the LCD into a common area at a short viewing distance. Experimental results show that the proposed holographic display can reproduce 3D objects with focus cues: accommodation and retinal blur.
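    The wavefront-summation step described above can be sketched in a few lines. This is not the authors' implementation: the pixel pitch, wavelength, propagation distance and viewpoint positions are illustrative assumptions, and each view image is simply used as the amplitude of a spherical wave converging to its viewpoint before all views are summed into the object wave.

```python
# Illustrative multi-view CGH synthesis by summing converging spherical wavefronts.
import numpy as np

def multiview_object_wave(view_images, viewpoints, pitch=3.74e-6, wl=532e-9, z=0.3):
    """Sum, over all views, (view image amplitude) x (spherical phase converging
    to that view's viewpoint at distance z)."""
    ny, nx = view_images[0].shape
    x = (np.arange(nx) - nx / 2) * pitch
    y = (np.arange(ny) - ny / 2) * pitch
    X, Y = np.meshgrid(x, y)
    k = 2.0 * np.pi / wl
    obj = np.zeros((ny, nx), dtype=complex)
    for img, (vx, vy) in zip(view_images, viewpoints):
        r = np.sqrt((X - vx) ** 2 + (Y - vy) ** 2 + z ** 2)
        obj += img * np.exp(-1j * k * r)        # converging spherical wavefront
    return obj

# Toy usage: four uniform "view images" aimed at four closely spaced viewpoints.
views = [np.full((256, 256), a) for a in (1.0, 0.8, 0.6, 0.4)]
pts = [(-2e-3, 0.0), (-0.7e-3, 0.0), (0.7e-3, 0.0), (2e-3, 0.0)]
hologram_phase = np.angle(multiview_object_wave(views, pts))
```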

  7. Multi-Modal Clique-Graph Matching for View-Based 3D Model Retrieval.

    PubMed

    Liu, An-An; Nie, Wei-Zhi; Gao, Yue; Su, Yu-Ting

    2016-05-01

    Multi-view matching is an important but challenging task in view-based 3D model retrieval. To address this challenge, we propose an original multi-modal clique graph (MCG) matching method in this paper. We systematically present a method for MCG generation that is composed of cliques, which consist of neighbor nodes in multi-modal feature space, and hyper-edges that link pairwise cliques. Moreover, we propose an image set-based clique/edgewise similarity measure to address the issue of the set-to-set distance measure, which is the core problem in MCG matching. The proposed MCG provides the following benefits: 1) it preserves the local and global attributes of a graph with the designed structure; 2) it eliminates redundant and noisy information by strengthening inliers while suppressing outliers; and 3) it avoids the difficulty of defining high-order attributes and solving hyper-graph matching. We validate MCG-based 3D model retrieval using three popular single-modal data sets and one novel multi-modal data set. Extensive experiments show the superiority of the proposed method through comparisons. Moreover, we contribute a novel real-world 3D object data set, the multi-view RGB-D object data set. To the best of our knowledge, it is the largest real-world 3D object data set containing multi-modal and multi-view information.

  8. Are 3-D coronal mass ejection parameters from single-view observations consistent with multiview ones?

    NASA Astrophysics Data System (ADS)

    Lee, Harim; Moon, Y.-J.; Na, Hyeonock; Jang, Soojeong; Lee, Jae-Ok

    2015-12-01

    To prepare for when only single-view observations are available, we have tested whether the 3-D parameters (radial velocity, angular width, and source location) of halo coronal mass ejections (HCMEs) from single-view observations are consistent with those from multiview observations. For this test, we select 44 HCMEs from December 2010 to June 2011 with the following conditions: partial and full HCMEs observed by SOHO, and limb CMEs observed by the twin STEREO spacecraft when they were approximately in quadrature. In this study, we compare the 3-D parameters of the HCMEs from three different methods: (1) a geometrical triangulation method, the STEREO CAT tool developed by NASA/CCMC, for multiview observations using STEREO/SECCHI and SOHO/LASCO data, (2) the graduated cylindrical shell (GCS) flux rope model for multiview observations using STEREO/SECCHI data, and (3) an ice cream cone model for single-view observations using SOHO/LASCO data. We find that the radial velocities and the source locations of the HCMEs from the three methods are consistent with one another, with high correlation coefficients (≥0.9). However, the angular widths from the ice cream cone model are noticeably underestimated for broad CMEs larger than 100° and for several partial HCMEs. A comparison between the 3-D CME parameters directly measured from the twin STEREO spacecraft and the above 3-D parameters shows that the multiview parameters are more consistent with the STEREO measurements than the single-view ones.

  9. Effect of mental fatigue caused by mobile 3D viewing on selective attention: an ERP study.

    PubMed

    Mun, Sungchul; Kim, Eun-Soo; Park, Min-Chul

    2014-12-01

    This study investigated behavioral responses to and auditory event-related potential (ERP) correlates of mental fatigue caused by mobile three-dimensional (3D) viewing. Twenty-six participants (14 women) performed a selective attention task in which they were asked to respond to sounds presented at the attended side while ignoring sounds at the ignored side, before and after mobile 3D viewing. Considering different individual susceptibilities to 3D, participants' subjective fatigue data were used to categorize them into two groups: fatigued and unfatigued. The amplitudes of d-ERP components were defined as the differences in amplitude between time-locked brain oscillations for the attended and ignored sounds, and these values were used to quantify the degree to which spatial selective attention was impaired by 3D mental fatigue. The fatigued group showed significantly longer response times after mobile 3D viewing compared to before the viewing. However, response accuracy did not change significantly between the two conditions, implying that the participants used a behavioral strategy of increasing their response times to cope with a decrement in performance accuracy. No significant differences were observed for the unfatigued group. Analysis of covariance revealed group differences, with significant decreases, or trends toward significance, in the d-P200 and d-late positive potential (d-LPP) amplitudes at the occipital electrodes of the fatigued and unfatigued groups. Our findings indicate that mentally fatigued participants did not effectively block out distractors in their information processing, supporting the hypothesis that 3D mental fatigue impairs spatial selective attention and is characterized by changes in d-P200 and d-LPP amplitudes.

  10. VIEWNET: a neural architecture for learning to recognize 3D objects from multiple 2D views

    NASA Astrophysics Data System (ADS)

    Grossberg, Stephen; Bradski, Gary

    1994-10-01

    A self-organizing neural network is developed for recognition of 3-D objects from sequences of their 2-D views. Called VIEWNET because it uses view information encoded with networks, the model processes 2-D views of 3-D objects using the CORT-X 2 filter, which discounts the illuminant, regularizes and completes figural boundaries, and removes noise from the images. A log-polar transform is taken with respect to the centroid of the resulting figure and then re-centered to achieve 2-D scale and rotation invariance. The invariant images are coarse-coded to further reduce noise, reduce foreshortening effects, and increase generalization. These compressed codes are input into a supervised learning system based on the Fuzzy ARTMAP algorithm, which learns 2-D view categories. Evidence from sequences of 2-D view categories is stored in a working memory. Voting based on the unordered set of stored categories determines object recognition. Recognition is studied with noisy and clean images using slow and fast learning. VIEWNET is demonstrated on an MIT Lincoln Laboratory database of 2-D views of aircraft with and without additive noise. A recognition rate of up to 90% is achieved with one 2-D view category and up to 98.5% with three 2-D view categories.
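    The centroid-based log-polar step can be illustrated with a short sketch (not the original VIEWNET code): scalings and rotations of the figure about its centroid become translations of the resampled image, which is what makes the subsequent coarse coding approximately scale- and rotation-invariant. The grid sizes are illustrative.

```python
# Illustrative log-polar resampling about the intensity centroid.
import numpy as np
from scipy import ndimage

def log_polar(image, n_r=64, n_theta=64):
    """Resample onto a (log-radius, angle) grid centred on the intensity centroid."""
    cy, cx = ndimage.center_of_mass(image)
    r_max = np.hypot(*image.shape) / 2.0
    log_r = np.linspace(0.0, np.log(r_max), n_r)
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    R, T = np.meshgrid(np.exp(log_r), theta, indexing="ij")
    rows = cy + R * np.sin(T)
    cols = cx + R * np.cos(T)
    return ndimage.map_coordinates(image, [rows, cols], order=1, mode="constant")

# A figure scaled by s maps to a shift of log(s) along the log-radius axis,
# and a rotation by phi maps to a circular shift along the angle axis.
```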

  11. JAtlasView: a Java atlas-viewer for browsing biomedical 3D images and atlases

    PubMed Central

    Feng, Guangjie; Burton, Nick; Hill, Bill; Davidson, Duncan; Kerwin, Janet; Scott, Mark; Lindsay, Susan; Baldock, Richard

    2005-01-01

    Background Many three-dimensional (3D) images are routinely collected in biomedical research and a number of digital atlases with associated anatomical and other information have been published. A number of tools are available for viewing this data, ranging from commercial visualization packages to freely available, typically system-architecture-dependent, solutions. Here we discuss an atlas viewer implemented to run on any workstation using the architecture-neutral Java programming language. Results We report the development of a freely available Java-based viewer for 3D image data, describe the structure and functionality of the viewer and how automated tools can be developed to manage the Java Native Interface code. The viewer allows arbitrary re-sectioning of the data and interactive browsing through the volume. With appropriately formatted data, for example as provided for the Electronic Atlas of the Developing Human Brain, a 3D surface view and anatomical browsing is available. The interface is developed in Java with Java3D providing the 3D rendering. For efficiency the image data is manipulated using the Woolz image-processing library provided as a dynamically linked module for each machine architecture. Conclusion We conclude that Java provides an appropriate environment for efficient development of these tools, and techniques exist to allow computationally efficient image-processing libraries to be integrated relatively easily. PMID:15757508

  12. A View to the Future: A Novel Approach for 3D-3D Superimposition and Quantification of Differences for Identification from Next-Generation Video Surveillance Systems.

    PubMed

    Gibelli, Daniele; De Angelis, Danilo; Poppa, Pasquale; Sforza, Chiarella; Cattaneo, Cristina

    2017-03-01

    Techniques of 2D-3D superimposition are widely used in cases of personal identification from video surveillance systems. However, the progressive improvement of 3D image acquisition technology will enable operators to perform 3D-3D facial superimposition as well. This study analyzes the possible applications of 3D-3D superimposition to personal identification, although from a theoretical point of view. Twenty subjects underwent a facial 3D scan by stereophotogrammetry twice, at different time periods. Scans were superimposed two by two according to nine landmarks, and the root-mean-square (RMS) value of point-to-point distances was calculated. When the two superimposed models belonged to the same individual, the RMS value was 2.10 mm, while it was 4.47 mm in mismatches, a statistically significant difference (p < 0.0001). This experiment shows the potential of 3D-3D superimposition: further studies are needed to ascertain the technical limits that may occur in practice and to improve methods useful in forensic practice.
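    A minimal sketch of the comparison described above (not the authors' software): a least-squares rigid alignment on the nine landmarks followed by the RMS of point-to-point distances. The landmark coordinates below are random placeholders.

```python
# Landmark-based rigid superimposition (Kabsch) + RMS point-to-point distance.
import numpy as np

def rigid_align(src, dst):
    """Rotation R and translation t that best map src landmarks onto dst."""
    sc, dc = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, dc - R @ sc

def rms_distance(a, b):
    """Root-mean-square of distances between corresponding points."""
    return np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1)))

# Placeholder usage: align scan A onto scan B and report the RMS over the landmarks.
landmarks_a = np.random.rand(9, 3) * 100.0            # mm, placeholder
landmarks_b = landmarks_a + np.random.normal(0.0, 1.5, (9, 3))
R, t = rigid_align(landmarks_a, landmarks_b)
print(rms_distance(landmarks_a @ R.T + t, landmarks_b))
```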

  13. Determination of the optimum viewing distance for a multi-view auto-stereoscopic 3D display.

    PubMed

    Yoon, Ki-Hyuk; Ju, Heongkyu; Park, Inkyu; Kim, Sung-Kyu

    2014-09-22

    We present methodologies for determining the optimum viewing distance (OVD) for a multi-view auto-stereoscopic 3D display system with a parallax barrier. The OVD can be efficiently determined as the viewing distance where statistical deviation of centers of quasi-linear distributions of illuminance at central viewing zones is minimized using local areas of a display panel. This method can offer reduced computation time because it does not use the entire area of the display panel during a simulation, but still secures considerable accuracy. The method is verified in experiments, showing its applicability for efficient optical characterization.
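    The selection criterion described above can be sketched as follows. This is an illustrative reading of the method, not the authors' code: for each candidate viewing distance, the centroid of the central-viewing-zone illuminance distribution is computed for several local areas of the panel, and the distance with the smallest spread of those centroids is taken as the OVD.

```python
# Illustrative OVD selection from per-distance, per-local-area illuminance profiles.
import numpy as np

def optimum_viewing_distance(distances, illum_profiles, positions):
    """distances[d] is a candidate viewing distance; illum_profiles[d][a] is the
    illuminance vs. lateral position for local panel area a at that distance;
    positions is the common lateral sampling grid."""
    spreads = []
    for profiles in illum_profiles:
        centers = [np.sum(positions * p) / np.sum(p) for p in profiles]  # centroids
        spreads.append(np.std(centers))                                   # deviation
    return distances[int(np.argmin(spreads))]
```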

  14. See-through multi-view 3D display with parallax barrier

    NASA Astrophysics Data System (ADS)

    Hong, Jong-Young; Lee, Chang-Kun; Park, Soon-gi; Kim, Jonghyun; Cha, Kyung-Hoon; Kang, Ki Hyung; Lee, Byoungho

    2016-03-01

    In this paper, we propose a see-through parallax-barrier-type multi-view display with a transparent liquid crystal display (LCD). The transparency of the LCD is realized by detaching the backlight unit. The number of views in the proposed system is minimized to enlarge the aperture size of the parallax barrier, which determines the transparency. To compensate for the small number of viewpoints, an eye-tracking method is applied to provide a large number of views and vertical parallax. Through experiments, a prototype see-through autostereoscopic 3D display with a parallax barrier is implemented, and the system parameters of transmittance, crosstalk, and barrier structure perception are analyzed.

  15. Developing a protocol for creating microfluidic devices with a 3D printer, PDMS, and glass

    NASA Astrophysics Data System (ADS)

    Collette, Robyn; Novak, Eric; Shirk, Kathryn

    2015-03-01

    Microfluidics research requires the design and fabrication of devices that have the ability to manipulate small volumes of fluid, typically ranging from microliters to picoliters. These devices are used for a wide range of applications including the assembly of materials and testing of biological samples. Many methods have been previously developed to create microfluidic devices, including traditional nanolithography techniques. However, these traditional techniques are cost-prohibitive for many small-scale laboratories. This research explores a relatively low-cost technique using a 3D printed master, which is used as a template for the fabrication of polydimethylsiloxane (PDMS) microfluidic devices. The masters are designed using computer aided design (CAD) software and can be printed and modified relatively quickly. We have developed a protocol for creating simple microfluidic devices using a 3D printer and PDMS adhered to glass. This relatively simple and lower-cost technique can now be scaled to more complicated device designs and applications. Funding provided by the Undergraduate Research Grant Program at Shippensburg University and the Student/Faculty Research Engagement Grants from the College of Arts and Sciences at Shippensburg University.

  16. Viewing zone duplication of multi-projection 3D display system using uniaxial crystal.

    PubMed

    Lee, Chang-Kun; Park, Soon-Gi; Moon, Seokil; Lee, Byoungho

    2016-04-18

    We propose a novel multiplexing technique for increasing the viewing zone of a multi-view based multi-projection 3D display system by employing double refraction in a uniaxial crystal. When linearly polarized images from the projector pass through the uniaxial crystal, two optical paths are possible according to the polarization state of the image. Therefore, the optical path of the image can be changed, and the viewing zone shifted in a lateral direction. Polarization modulation of the image from a single projection unit enables us to generate two viewing zones at different positions. To realize full-color images at each viewing zone, a polarization-based temporal multiplexing technique is adopted with a conventional liquid crystal (LC) polarization-switching device. Through experiments, a prototype ten-view multi-projection 3D display system presenting full-color view images is implemented by combining five laser scanning projectors, an optically clear calcite (CaCO3) crystal, and an LC polarization rotator. For each time sequence of the temporal multiplexing, the luminance distribution of the proposed system is measured and analyzed.

  17. Surface functionalization of 3D glass-ceramic porous scaffolds for enhanced mineralization in vitro

    NASA Astrophysics Data System (ADS)

    Ferraris, Sara; Vitale-Brovarone, Chiara; Bretcanu, Oana; Cassinelli, Clara; Vernè, Enrica

    2013-04-01

    Bone reconstruction after tissue loss due to traumatic, pathological or surgical causes is in increasing demand. 3D scaffolds are a widely studied solution for supporting new bone growth. Bioactive glass-ceramic porous materials can offer a three-dimensional structure that is able to chemically bond to bone. The ability to surface-modify these devices by grafting biologically active molecules represents a challenge, with the aim of stimulating physiological bone regeneration with both inorganic and organic signals. In this research work, glass-ceramic scaffolds with very high mechanical strength and moderate bioactivity have been functionalized with the enzyme alkaline phosphatase (ALP). The material surface was activated in order to expose hydroxyl groups. The activated surface was further grafted with ALP, both via silanization and via direct grafting to the surface-active hydroxyl groups. The enzymatic activity of the grafted samples was measured by means of UV-vis spectroscopy before and after ultrasonic washing in TRIS-HCl buffer solution. In vitro inorganic bioactivity was investigated by soaking the scaffolds, after the different steps of functionalization, in a simulated body fluid (SBF). SEM observations allowed the monitoring of the scaffold morphology and surface chemical composition after soaking in SBF. The presence of ALP enhanced the in vitro inorganic bioactivity of the tested material.

  18. 3D/2D image registration: the impact of X-ray views and their number.

    PubMed

    Tomazevic, Dejan; Likar, Bostjan; Pernus, Franjo

    2007-01-01

    An important part of image-guided radiation therapy or surgery is registration of a three-dimensional (3D) preoperative image to two-dimensional (2D) images of the patient. It is expected that the accuracy and robustness of a 3D/2D image registration method do not depend solely on the registration method itself but also on the number and projections (views) of the intraoperative images. In this study, we systematically investigate these factors by using registered image data, comprising CT and X-ray images of a cadaveric lumbar spine phantom, and the recently proposed 3D/2D registration method. The results indicate that the proportion of successful registrations (robustness) significantly increases when more X-ray images are used for registration.

  19. 3D-2D registration for surgical guidance: effect of projection view angles on registration accuracy.

    PubMed

    Uneri, A; Otake, Y; Wang, A S; Kleinszig, G; Vogt, S; Khanna, A J; Siewerdsen, J H

    2014-01-20

    An algorithm for intensity-based 3D-2D registration of CT and x-ray projections is evaluated, specifically using single- or dual-projection views to provide 3D localization. The registration framework employs the gradient information similarity metric and covariance matrix adaptation evolution strategy to solve for the patient pose in six degrees of freedom. Registration performance was evaluated in an anthropomorphic phantom and cadaver, using C-arm projection views acquired at angular separation, Δθ, ranging from ∼0°-180° at variable C-arm magnification. Registration accuracy was assessed in terms of 2D projection distance error and 3D target registration error (TRE) and compared to that of an electromagnetic (EM) tracker. The results indicate that angular separation as small as Δθ ∼10°-20° achieved TRE <2 mm with 95% confidence, comparable or superior to that of the EM tracker. The method allows direct registration of preoperative CT and planning data to intraoperative fluoroscopy, providing 3D localization free from conventional limitations associated with external fiducial markers, stereotactic frames, trackers and manual registration.

  20. 3D-2D registration for surgical guidance: effect of projection view angles on registration accuracy

    NASA Astrophysics Data System (ADS)

    Uneri, A.; Otake, Y.; Wang, A. S.; Kleinszig, G.; Vogt, S.; Khanna, A. J.; Siewerdsen, J. H.

    2014-01-01

    An algorithm for intensity-based 3D-2D registration of CT and x-ray projections is evaluated, specifically using single- or dual-projection views to provide 3D localization. The registration framework employs the gradient information similarity metric and covariance matrix adaptation evolution strategy to solve for the patient pose in six degrees of freedom. Registration performance was evaluated in an anthropomorphic phantom and cadaver, using C-arm projection views acquired at angular separation, Δθ, ranging from ˜0°-180° at variable C-arm magnification. Registration accuracy was assessed in terms of 2D projection distance error and 3D target registration error (TRE) and compared to that of an electromagnetic (EM) tracker. The results indicate that angular separation as small as Δθ ˜10°-20° achieved TRE <2 mm with 95% confidence, comparable or superior to that of the EM tracker. The method allows direct registration of preoperative CT and planning data to intraoperative fluoroscopy, providing 3D localization free from conventional limitations associated with external fiducial markers, stereotactic frames, trackers and manual registration.
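    A very compressed sketch of this kind of intensity-based 3D-2D registration is given below. It is not the authors' implementation: render_drr is a hypothetical function standing in for the forward projection of the CT at a candidate pose, and the similarity term is a simplified gradient-correlation stand-in for the gradient information metric. The CMA-ES step uses the third-party cma package.

```python
# Simplified 3D-2D registration sketch: 6-DOF pose solved with CMA-ES.
import numpy as np
import cma   # pip install cma

def gradient_similarity(fixed, moving):
    """Negative normalized correlation of image gradients (to be minimized)."""
    gx_f, gy_f = np.gradient(fixed)
    gx_m, gy_m = np.gradient(moving)
    num = np.sum(gx_f * gx_m + gy_f * gy_m)
    den = np.sqrt(np.sum(gx_f**2 + gy_f**2) * np.sum(gx_m**2 + gy_m**2)) + 1e-12
    return -num / den

def register(ct_volume, xray_views, render_drr):
    """xray_views: list of (projection image, acquisition geometry) pairs.
    render_drr(ct, pose, geometry) is a hypothetical DRR renderer."""
    def cost(pose):          # pose = 3 rotations + 3 translations
        return sum(gradient_similarity(img, render_drr(ct_volume, pose, geom))
                   for img, geom in xray_views)
    xbest = cma.fmin(cost, np.zeros(6), sigma0=2.0, options={"maxfevals": 2000})[0]
    return xbest
```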

  1. Modeling of multi-view 3D freehand radio frequency ultrasound.

    PubMed

    Klein, T; Hansson, M; Navab, Nassir

    2012-01-01

    Nowadays, ultrasound (US) examinations are typically performed with conventional machines providing two-dimensional imagery. However, there are many applications in which doctors could benefit from three-dimensional ultrasound, whose extended spatial view supports better judgment. 3D freehand US allows acquisition of images by means of a tracking device attached to the ultrasound transducer. Unfortunately, view dependency makes the 3D representation of ultrasound a non-trivial task. To address this, we model speckle statistics in envelope-detected radio frequency (RF) data using a finite mixture model (FMM), assuming a parametric representation of the data in which the multiple views are treated as components of the FMM. The proposed model is showcased with registration, using an ultrasound-specific distribution-based pseudo-distance, and with reconstruction, both performed on the manifold of Gamma model parameters. An example field of application is neurology using transcranial US, as this domain requires high accuracy and the data systematically feature low SNR, making intensity-based registration difficult. In particular, 3D US can be used to improve the differential diagnosis of Parkinson's disease (PD) compared to conventional approaches and is therefore of high relevance for future application.
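    The Gamma-mixture idea can be illustrated with a small expectation-maximization sketch. This is not the paper's estimator: the M-step here uses weighted moment matching for the Gamma shape and scale (an approximation to the exact digamma-based update), and the data are synthetic amplitudes standing in for envelope-detected RF samples from two views.

```python
# Illustrative EM for a Gamma mixture on envelope-detected amplitudes.
import numpy as np
from scipy.stats import gamma

def gamma_mixture_em(x, k=2, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    w = np.full(k, 1.0 / k)
    shape = rng.uniform(1.0, 3.0, k)
    scale = np.full(k, x.mean() / shape.mean())
    for _ in range(n_iter):
        # E-step: component responsibilities for every sample
        dens = np.stack([w[j] * gamma.pdf(x, shape[j], scale=scale[j]) for j in range(k)])
        resp = dens / (dens.sum(axis=0, keepdims=True) + 1e-300)
        # M-step: weights, then moment-matched Gamma parameters per component
        nk = resp.sum(axis=1)
        w = nk / len(x)
        for j in range(k):
            m = np.sum(resp[j] * x) / nk[j]
            v = np.sum(resp[j] * (x - m) ** 2) / nk[j]
            shape[j], scale[j] = m * m / v, v / m
    return w, shape, scale

# Synthetic two-view speckle amplitudes.
rng = np.random.default_rng(1)
x = np.concatenate([rng.gamma(2.0, 1.0, 5000), rng.gamma(5.0, 0.6, 5000)])
print(gamma_mixture_em(x, k=2))
```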

  2. Learning the 3-D structure of objects from 2-D views depends on shape, not format

    PubMed Central

    Tian, Moqian; Yamins, Daniel; Grill-Spector, Kalanit

    2016-01-01

    Humans can learn to recognize new objects just from observing example views. However, it is unknown what structural information enables this learning. To address this question, we manipulated the amount of structural information given to subjects during unsupervised learning by varying the format of the trained views. We then tested how format affected participants' ability to discriminate similar objects across views that were rotated 90° apart. We found that, after training, participants' performance increased and generalized to new views in the same format. Surprisingly, the improvement was similar across line drawings, shape from shading, and shape from shading + stereo even though the latter two formats provide richer depth information compared to line drawings. In contrast, participants' improvement was significantly lower when training used silhouettes, suggesting that silhouettes do not have enough information to generate a robust 3-D structure. To test whether the learned object representations were format-specific or format-invariant, we examined if learning novel objects from example views transfers across formats. We found that learning objects from example line drawings transferred to shape from shading and vice versa. These results have important implications for theories of object recognition because they suggest that (a) learning the 3-D structure of objects does not require rich structural cues during training as long as shape information of internal and external features is provided and (b) learning generates shape-based object representations independent of the training format. PMID:27153196

  3. Full optical characterization of autostereoscopic 3D displays using local viewing angle and imaging measurements

    NASA Astrophysics Data System (ADS)

    Boher, Pierre; Leroux, Thierry; Bignon, Thibault; Collomb-Patton, Véronique

    2012-03-01

    Two commercial auto-stereoscopic 3D displays are characterized using a Fourier-optics viewing-angle system and an imaging video luminance meter. One display has a fixed emissive configuration and the other adapts its emission to the observer position using head tracking. For a fixed emissive condition, three viewing-angle measurements are performed at three positions (center, right and left). Qualified monocular and binocular viewing spaces in front of the display are deduced, as well as the best working distance. The imaging system is then positioned at this working distance and the crosstalk homogeneity over the entire surface of the display is measured. We show that the crosstalk is generally not optimized over all of the display surface. Display aspect simulation using the viewing-angle measurements allows a better understanding of the origin of these crosstalk variations. Local imperfections like scratches and marks generally increase the crosstalk drastically, demonstrating that cleanliness requirements for this type of display are quite critical.

  4. A flexible new method for 3D measurement based on multi-view image sequences

    NASA Astrophysics Data System (ADS)

    Cui, Haihua; Zhao, Zhimin; Cheng, Xiaosheng; Guo, Changye; Jia, Huayu

    2016-11-01

    Three-dimensional measurement is fundamental to reverse engineering. This paper develops a new flexible and fast optical measurement method based on multi-view geometry. First, feature points are detected and matched with an improved SIFT algorithm. The Hellinger kernel is used to estimate the histogram distance instead of the traditional Euclidean distance, which makes matching robust to weakly textured images; then a new three-principle filter for the essential-matrix calculation is designed, and the essential matrix is computed using an improved a contrario RANSAC filtering method. A single-view point cloud is constructed accurately from two view images; after this, the overlapping features are used to eliminate the accumulated errors caused by newly added view images, which improves the precision of the camera positions. Finally, the method is verified in a dental restoration CAD/CAM application; experimental results show that the proposed method is fast, accurate and flexible for 3D tooth measurement.
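    The Hellinger-kernel matching test mentioned above amounts to comparing L1-normalised descriptor histograms through their element-wise square roots (the same observation behind the well-known RootSIFT trick). A minimal sketch, with the normalisation constant as the only implementation choice:

```python
# Hellinger distance between descriptor histograms (replaces the Euclidean test).
import numpy as np

def hellinger_distance(h1, h2):
    p = h1 / (np.sum(np.abs(h1)) + 1e-12)       # L1 normalisation
    q = h2 / (np.sum(np.abs(h2)) + 1e-12)
    return np.linalg.norm(np.sqrt(p) - np.sqrt(q)) / np.sqrt(2.0)

# Equivalently, square-root the normalised descriptors once and keep using the
# ordinary Euclidean nearest-neighbour machinery on the transformed vectors.
```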

  5. 3D integration of microcomponents in a single glass chip by femtosecond laser direct writing for biochemical analysis

    NASA Astrophysics Data System (ADS)

    Sugioka, Koji; Hanada, Yasutaka; Midorikawa, Katsumi

    2007-05-01

    3D integration of microcomponents in a single glass chip by femtosecond laser direct writing, followed by post-annealing and successive wet etching, is described for application to biochemical analysis. Integration of microfluidics and microoptics yields functional microdevices such as a microfluidic dye laser and a biosensor. As a practical application, we demonstrate inspection of living microorganisms using a microchip with 3D microfluidic structures fabricated by the present technique.

  6. Mesoporous bioactive glass nanolayer-functionalized 3D-printed scaffolds for accelerating osteogenesis and angiogenesis

    NASA Astrophysics Data System (ADS)

    Zhang, Yali; Xia, Lunguo; Zhai, Dong; Shi, Mengchao; Luo, Yongxiang; Feng, Chun; Fang, Bing; Yin, Jingbo; Chang, Jiang; Wu, Chengtie

    2015-11-01

    The hierarchical microstructure, surface and interface of biomaterials are important factors influencing their bioactivity. Porous bioceramic scaffolds have been widely used for bone tissue engineering by optimizing their chemical composition and large-pore structure. However, the surface and interface of struts in bioceramic scaffolds are often ignored. The aim of this study is to incorporate hierarchical pores and bioactive components into the bioceramic scaffolds by constructing nanopores and bioactive elements on the struts of scaffolds and further improve their bone-forming activity. Mesoporous bioactive glass (MBG) modified β-tricalcium phosphate (MBG-β-TCP) scaffolds with a hierarchical pore structure and a functional strut surface (~100 nm of MBG nanolayer) were successfully prepared via 3D printing and spin coating. The compressive strength and apatite-mineralization ability of MBG-β-TCP scaffolds were significantly enhanced as compared to β-TCP scaffolds without the MBG nanolayer. The attachment, viability, alkaline phosphatase (ALP) activity, osteogenic gene expression (Runx2, BMP2, OPN and Col I) and protein expression (OPN, Col I, VEGF, HIF-1α) of rabbit bone marrow stromal cells (rBMSCs) as well as the attachment, viability and angiogenic gene expression (VEGF and HIF-1α) of human umbilical vein endothelial cells (HUVECs) in MBG-β-TCP scaffolds were significantly upregulated compared with conventional bioactive glass (BG)-modified β-TCP (BG-β-TCP) and pure β-TCP scaffolds. Furthermore, MBG-β-TCP scaffolds significantly enhanced the formation of new bone in vivo as compared to BG-β-TCP and β-TCP scaffolds. The results suggest that application of the MBG nanolayer to modify 3D-printed bioceramic scaffolds offers a new strategy to construct hierarchically porous scaffolds with significantly improved physicochemical and biological properties, such as mechanical properties, osteogenesis, angiogenesis and protein expression for bone tissue

  7. Single DMD time-multiplexed 64-views autostereoscopic 3D display

    NASA Astrophysics Data System (ADS)

    Loreti, Luigi

    2013-03-01

    Based on a previous prototype of a real-time 3D holographic display developed last year, we developed a new concept for a wide-angle (90°), full-color, auto-stereoscopic multi-view (64 views) 3D display. The display is based on an RGB laser light source illuminating a DMD (Discovery 4100, 0.7") at 24,000 fps, and an image deflection system made with an AOD (acousto-optic deflector) driven by a piezoelectric transducer that generates a variable standing acoustic wave on the crystal, which acts as a phase grating. The DMD projects 64 points of view of the image in fast sequence onto the crystal cube. Depending on the frequency of the standing wave, the input picture sent by the DMD is deflected at a different angle of view. A holographic screen at a proper distance diffuses the rays in the vertical direction (60°) and horizontally selects (1°) only the rays directed to the observer. A telescopic optical system enlarges the image to the required size. VHDL firmware that renders, in real time (16 ms), 64 views (16-bit 4:2:2) of a CAD model (obj, dxf or 3ds) and depth-map-encoded video images was developed for the resident Virtex-5 FPGA of the Discovery 4100 SDK, thus eliminating the need for image transfer and high-speed links.

  8. 1. General view of looking glass aircraft in the project ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. General view of looking glass aircraft in the project looking glass historic district. View to southeast. - Offutt Air Force Base, Looking Glass Airborne Command Post, Looking Glass Avenue between Comstat Drive & Nightwatch Avenue, Offutt Air Force Base, Bellevue, Sarpy County, NE

  9. 4. General view of looking glass aircraft in the project ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    4. General view of looking glass aircraft in the project looking glass historic district. View to west. - Offutt Air Force Base, Looking Glass Airborne Command Post, Looking Glass Avenue between Comstat Drive & Nightwatch Avenue, Offutt Air Force Base, Bellevue, Sarpy County, NE

  10. 2. General view of looking glass aircraft in the project ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. General view of looking glass aircraft in the project looking glass historic district. View to south. - Offutt Air Force Base, Looking Glass Airborne Command Post, Looking Glass Avenue between Comstat Drive & Nightwatch Avenue, Offutt Air Force Base, Bellevue, Sarpy County, NE

  11. 5. General view of looking glass aircraft in the project ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    5. General view of looking glass aircraft in the project looking glass historic district. View to north. - Offutt Air Force Base, Looking Glass Airborne Command Post, Looking Glass Avenue between Comstat Drive & Nightwatch Avenue, Offutt Air Force Base, Bellevue, Sarpy County, NE

  12. 3. General view of looking glass aircraft in the project ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. General view of looking glass aircraft in the project looking glass historic district. View to west. - Offutt Air Force Base, Looking Glass Airborne Command Post, Looking Glass Avenue between Comstat Drive & Nightwatch Avenue, Offutt Air Force Base, Bellevue, Sarpy County, NE

  13. Multi-view indoor human behavior recognition based on 3D skeleton

    NASA Astrophysics Data System (ADS)

    Peng, Ling; Lu, Tongwei; Min, Feng

    2015-12-01

    To address the problems caused by viewpoint changes in activity recognition, a multi-view indoor human behavior recognition method based on the 3D skeleton is presented. First, Microsoft's Kinect device is used to capture body motion video from the frontal, oblique and side perspectives. Second, skeletal joints are extracted, and global body features and local features of the arms and legs are obtained simultaneously to form a 3D skeletal feature set. Third, online dictionary learning on the feature set is used to reduce the feature dimensionality. Finally, a linear support vector machine (LSVM) is used to obtain the behavior recognition results. The experimental results show that this method achieves a better recognition rate.
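    The last two stages of the pipeline (dictionary-based dimensionality reduction followed by a linear SVM) can be sketched with scikit-learn. This is not the authors' code: the feature matrix and labels below are random placeholders, and the number of dictionary atoms is an illustrative choice.

```python
# Illustrative skeleton-feature pipeline: sparse coding + linear SVM.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# X: one row per clip, concatenating global skeleton features with local arm/leg
# features from the three viewpoints; y: behaviour labels (placeholders here).
rng = np.random.default_rng(0)
X = rng.random((120, 60))
y = rng.integers(0, 5, 120)

clf = make_pipeline(
    MiniBatchDictionaryLearning(n_components=32, random_state=0),  # sparse codes
    LinearSVC(C=1.0))                                              # linear SVM
clf.fit(X, y)
print(clf.score(X, y))
```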

  14. In vivo bioluminescence tomography based on multi-view projection and 3D surface reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Shuang; Wang, Kun; Leng, Chengcai; Deng, Kexin; Hu, Yifang; Tian, Jie

    2015-03-01

    Bioluminescence tomography (BLT) is a powerful optical molecular imaging modality, which enables non-invasive, real-time in vivo imaging as well as 3D quantitative analysis in preclinical studies. In order to solve the inverse problem and reconstruct inner light sources accurately, prior structural information is commonly necessary and is obtained from computed tomography or magnetic resonance imaging. This strategy requires an expensive hybrid imaging system, a complicated operation protocol and possibly the involvement of ionizing radiation. The overall robustness highly depends on the fusion accuracy between the optical and structural information. In this study we present a purely optical bioluminescence tomographic system (POBTS) and a novel BLT method based on multi-view projection acquisition and 3D surface reconstruction. The POBTS acquired a sparse set of white-light surface images and bioluminescent images of a mouse. The white-light images were then applied to an approximate surface model to generate a high-quality textured 3D surface reconstruction of the mouse. After that, we integrated the multi-view luminescent images based on the previous reconstruction, and applied an algorithm to calibrate and quantify the surface luminescent flux in 3D. Finally, the internal bioluminescence source reconstruction was achieved with this prior information. A BALB/c mouse model with a breast tumor of 4T1-fLuc cells was used to evaluate the performance of the new system and technique. Compared with the conventional hybrid optical-CT approach using the same inverse reconstruction method, the reconstruction accuracy of this technique was improved: the distance error between the actual and reconstructed internal source was decreased by 0.184 mm.

  15. Bird's Eye View - A 3-D Situational Awareness Tool for the Space Station

    NASA Technical Reports Server (NTRS)

    Dershowitz, Adam; Chamitoff, Gregory

    2002-01-01

    Even as space-qualified computer hardware lags well behind the latest home computers, the possibility of using high-fidelity interactive 3-D graphics for displaying important onboard information has finally arrived, and is being used on board the International Space Station (ISS). With the quantity and complexity of space-flight telemetry, 3-D displays can greatly enhance the ability of users, both onboard and on the ground, to interpret data quickly and accurately. This is particularly true for data related to vehicle attitude, position, configuration, and relation to other objects on the ground or in orbit. Bird's Eye View (BEV) is a 3-D real-time application that provides a high degree of situational awareness for the crew. Its purpose is to instantly convey important motion-related parameters to the crew and mission controllers by presenting 3-D simulated camera views of the ISS in its actual environment. Driven by actual telemetry, and running on board as well as on the ground, it lets the user visualize the Space Station relative to the Earth, Sun, stars, various reference frames, and selected targets, such as ground sites or communication satellites. Since the actual ISS configuration (geometry) is also modeled accurately, everything from the alignment of the solar panels to the expected view from a selected window can be visualized accurately. A virtual representation of the Space Station in real time has many useful applications. By selecting different cameras, the crew or mission control can monitor the station's orientation in space, position over the Earth, transition from day to night, direction to the Sun, the view from a particular window, or the motion of the robotic arm. By viewing the vehicle attitude and solar panel orientations relative to the Sun, the power status of the ISS can be easily visualized and understood. Similarly, the thermal impacts of vehicle attitude can be analyzed and visually confirmed. Communication

  16. Estimating 3D positions and velocities of projectiles from monocular views.

    PubMed

    Ribnick, Evan; Atev, Stefan; Papanikolopoulos, Nikolaos P

    2009-05-01

    In this paper, we consider the problem of localizing a projectile in 3D based on its apparent motion in a stationary monocular view. A thorough theoretical analysis is developed, from which we establish the minimum conditions for the existence of a unique solution. The theoretical results obtained have important implications for applications involving projectile motion. A robust, nonlinear optimization-based formulation is proposed, and the use of a local optimization method is justified by detailed examination of the local convexity structure of the cost function. The potential of this approach is validated by experimental results.
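    A compact way to see why a single view suffices is that gravity fixes the metric scale of the trajectory. The sketch below (not the authors' formulation) parameterizes the projectile by its initial position and velocity, projects the resulting ballistic path through an assumed pinhole camera, and recovers the parameters by nonlinear least squares; the focal length, gravity direction and initial guess are illustrative assumptions.

```python
# Illustrative monocular projectile localization via nonlinear least squares.
import numpy as np
from scipy.optimize import least_squares

G = np.array([0.0, -9.81, 0.0])    # gravity in camera coordinates (assumption)
F = 800.0                          # focal length in pixels (assumption)

def project(points):
    """Pinhole projection, camera at the origin looking along +Z."""
    return F * points[:, :2] / points[:, 2:3]

def trajectory(params, t):
    p0, v0 = params[:3], params[3:]
    return p0 + np.outer(t, v0) + 0.5 * np.outer(t ** 2, G)

def localize(observed_uv, t):
    residuals = lambda params: (project(trajectory(params, t)) - observed_uv).ravel()
    guess = np.array([0.0, 0.0, 10.0, 0.0, 0.0, 0.0])   # start in front of the camera
    return least_squares(residuals, guess).x

# Toy usage: simulate a throw, project it, and recover position and velocity.
t = np.linspace(0.0, 1.0, 30)
true_params = np.array([-1.0, 1.5, 8.0, 3.0, 2.0, 1.0])
print(localize(project(trajectory(true_params, t)), t))
```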

  17. Optical stability of 3d transition metal ions doped-cadmium borate glasses towards γ-rays interaction

    NASA Astrophysics Data System (ADS)

    Marzouk, M.; ElBatal, H.; Eisa, W.

    2016-07-01

    This work reports the preparation of binary cadmium borate glasses with the basic composition (45 CdO-55 B2O3, mol%) and of samples of the same composition containing 0.2 wt% dopants of 3d transition metal (TM) oxides (TiO2 → CuO). The glasses have been investigated by combined optical and Fourier transform infrared spectroscopic measurements before and after being subjected to gamma irradiation with a dose of 8 Mrad (8 × 10^4 Gy). Optical absorption of the undoped glass before irradiation reveals strong charge-transfer UV absorption, which is related to the presence of unavoidable trace iron impurities (mainly Fe3+) within the raw materials used for the preparation of the base cadmium borate glass. The optical spectra of the 3d TM ions exhibit characteristic bands which are related to the stable oxidation state of the 3d TM ions within the host glass. Gamma irradiation produces only limited variations in the optical spectra, owing to the stability of the host glass, which contains a high percentage (45 mol%) of heavy metal oxide (CdO) and thus provides some shielding against irradiation. From the absorption edge data, the values of the optical band gap Eopt and Urbach energy (∆E) have been calculated; the optical energy gap is found to depend on the glass composition. Infrared absorption spectral measurements reveal characteristic absorption bands due to both triangular and tetrahedral borate groups, with the BO3 unit vibrations more intense than those of the BO4 units owing to the known limiting value for the conversion of BO3 to BO4 groups. The introduction of 3d TM ions at the doping level of 0.2 wt% causes no changes in the number or position of the IR bands because the TM ions occupy modifying sites in the glass network. It is observed that gamma irradiation causes only limited changes in the FT-IR spectral bands, owing to the stability of the heavy cadmium borate host glass.

  18. Full parallax viewing-angle enhanced computer-generated holographic 3D display system using integral lens array.

    PubMed

    Choi, Kyongsik; Kim, Joohwan; Lim, Yongjun; Lee, Byoungho

    2005-12-26

    A novel full-parallax, viewing-angle enhanced computer-generated holographic (CGH) three-dimensional (3D) display system is proposed and implemented by combining an integral lens array and colorized synthetic phase holograms displayed on a phase-type spatial light modulator. For analyzing the viewing-angle limitations of our CGH 3D display system, we provide some theoretical background and introduce a simple ray-tracing method for 3D image reconstruction. With our method we obtain continuously varying full-parallax 3D images with a viewing angle of about ±6°. To design the colorized phase holograms, we used a modified iterative Fourier transform algorithm, and our simulation results show a high diffraction efficiency (~92.5%) and a large signal-to-noise ratio (~11 dB). Finally, we show experimental results that verify our concept and demonstrate the full-parallax, viewing-angle enhanced color CGH display system.
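    The phase-hologram design step can be illustrated with a plain Gerchberg-Saxton-style iterative Fourier transform loop; the paper uses a modified variant, so the sketch below only conveys the general idea, with the target pattern and iteration count as illustrative choices.

```python
# Illustrative iterative Fourier transform algorithm for a phase-only hologram.
import numpy as np

def ifta_phase_hologram(target_amplitude, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, target_amplitude.shape)
    for _ in range(n_iter):
        far = np.fft.fft2(np.exp(1j * phase))                  # hologram -> far field
        far = target_amplitude * np.exp(1j * np.angle(far))    # impose target amplitude
        phase = np.angle(np.fft.ifft2(far))                    # keep phase only
    return phase

# Diffraction efficiency and SNR can then be estimated by comparing
# |fft2(exp(1j * phase))| against the target amplitude.
```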

  19. View showing rear of looking glass aircraft on operational apron ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    View showing rear of looking glass aircraft on operational apron with nose dock hangar in background. View to northeast - Offutt Air Force Base, Looking Glass Airborne Command Post, Operational & Hangar Access Aprons, Spanning length of northeast half of Project Looking Glass Historic District, Bellevue, Sarpy County, NE

  20. The RCSB protein data bank: integrative view of protein, gene and 3D structural information.

    PubMed

    Rose, Peter W; Prlić, Andreas; Altunkaya, Ali; Bi, Chunxiao; Bradley, Anthony R; Christie, Cole H; Costanzo, Luigi Di; Duarte, Jose M; Dutta, Shuchismita; Feng, Zukang; Green, Rachel Kramer; Goodsell, David S; Hudson, Brian; Kalro, Tara; Lowe, Robert; Peisach, Ezra; Randle, Christopher; Rose, Alexander S; Shao, Chenghua; Tao, Yi-Ping; Valasatava, Yana; Voigt, Maria; Westbrook, John D; Woo, Jesse; Yang, Huangwang; Young, Jasmine Y; Zardecki, Christine; Berman, Helen M; Burley, Stephen K

    2017-01-04

    The Research Collaboratory for Structural Bioinformatics Protein Data Bank (RCSB PDB, http://rcsb.org), the US data center for the global PDB archive, makes PDB data freely available to all users, from structural biologists to computational biologists and beyond. New tools and resources have been added to the RCSB PDB web portal in support of a 'Structural View of Biology.' Recent developments have improved the user experience, including the high-speed NGL Viewer that provides 3D molecular visualization in any web browser, improved support for data file download and enhanced organization of website pages for query, reporting and individual structure exploration. Structure validation information is now visible for all archival entries. PDB data have been integrated with external biological resources, including chromosomal position within the human genome; protein modifications; and metabolic pathways. PDB-101 educational materials have been reorganized into a searchable website and expanded to include new features such as the Geis Digital Archive.

  1. From ATLASGAL to SEDIGISM: Towards a Complete 3D View of the Dense Galactic Interstellar Medium

    NASA Astrophysics Data System (ADS)

    Schuller, F.; Urquhart, J.; Bronfman, L.; Csengeri, T.; Bontemps, S.; Duarte-Cabral, A.; Giannetti, A.; Ginsburg, A.; Henning, T.; Immer, K.; Leurini, S.; Mattern, M.; Menten, K.; Molinari, S.; Muller, E.; Sánchez-Monge, A.; Schisano, E.; Suri, S.; Testi, L.; Wang, K.; Wyrowski, F.; Zavagno, A.

    2016-09-01

    The ATLASGAL survey has provided the first unbiased view of the inner Galactic Plane at sub-millimetre wavelengths. This is the largest ground-based survey of its kind to date, covering 420 square degrees at a wavelength of 870 µm. The reduced data, consisting of images and a catalogue of more than 10^4 compact sources, are available from the ESO Science Archive Facility through the Phase 3 infrastructure. The extremely rich statistics of this survey initiated several follow-up projects, including spectroscopic observations to explore molecular complexity and high angular resolution imaging with the Atacama Large Millimeter/submillimeter Array (ALMA), aimed at resolving individual protostars. The most extensive follow-up project is SEDIGISM, a 3D mapping of the dense interstellar medium over a large fraction of the inner Galaxy. Some notable results of these surveys are highlighted.

  2. Assessment of next-best-view algorithms performance with various 3D scanners and manipulator

    NASA Astrophysics Data System (ADS)

    Karaszewski, M.; Adamczyk, M.; Sitnik, R.

    2016-09-01

    The problem of calculating three dimensional (3D) sensor position (and orientation) during the digitization of real-world objects (called next best view planning or NBV) has been an active topic of research for over 20 years. While many solutions have been developed, it is hard to compare their quality based only on the exemplary results presented in papers. We implemented 13 of the most popular NBV algorithms and evaluated their performance by digitizing five objects of various properties, using three measurement heads with different working volumes mounted on a 6-axis robot with a rotating table for placing objects. The results obtained for the 13 algorithms were then compared based on four criteria: the number of directional measurements, digitization time, total positioning distance, and surface coverage required to digitize test objects with available measurement heads.

  3. Single-view 3D reconstruction of correlated gamma-neutron sources

    DOE PAGES

    Monterial, Mateusz; Marleau, Peter; Pozzi, Sara A.

    2017-01-05

    We describe a new method of 3D image reconstruction of neutron sources that emit correlated gammas (e.g., Cf-252, Am-Be). This category includes the vast majority of neutron sources important in nuclear threat search, safeguards and non-proliferation. Rather than requiring multiple views of the source, this technique relies on the source's intrinsic property of coincident gamma and neutron emission. As a result, only a single-view measurement of the source is required to perform the 3D reconstruction. In principle, any scatter camera sensitive to gammas and neutrons with adequate timing and interaction-location resolution can perform this reconstruction. Using a neutron double-scatter technique, we can calculate a conical surface of possible source locations. By including the time to a correlated gamma we further constrain the source location in three dimensions by solving for the source-to-detector distance along the surface of said cone. As a proof of concept we applied these reconstruction techniques to measurements taken with the Mobile Imager of Neutrons for Emergency Responders (MINER). Two Cf-252 sources measured at 50 and 60 cm from the center of the detector were resolved in their varying depth with an average radial-distance relative resolution of 26%. To demonstrate the technique's potential with an optimized system, we simulated the measurement in MCNPX-PoliMi assuming a timing resolution of 200 ps (from 2 ns in the current system) and a source interaction-location resolution of 5 mm (from 3 cm). These simulated improvements in scatter camera performance resulted in the radial-distance relative resolution decreasing to an average of 11%.
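    The timing constraint that fixes the source-to-detector distance can be illustrated with a toy calculation (a simplification of the paper's cone-surface solution): if the correlated gamma and the neutron leave the source together and travel roughly the same path length d, the arrival-time difference is dt = d/v_n - d/c, so d = dt / (1/v_n - 1/c). The neutron energy, and hence its speed, would in practice come from the double-scatter kinematics; the numbers below are illustrative.

```python
# Toy source-distance estimate from the gamma-neutron arrival-time difference.
import numpy as np

C = 2.998e8        # speed of light, m/s
M_N = 939.565e6    # neutron rest mass, eV/c^2

def neutron_speed(kinetic_energy_ev):
    """Non-relativistic neutron speed (adequate for MeV-scale fission neutrons)."""
    return C * np.sqrt(2.0 * kinetic_energy_ev / M_N)

def source_distance(dt, neutron_energy_ev):
    """dt = d/v_n - d/c  =>  d = dt / (1/v_n - 1/c), assuming equal path lengths."""
    v_n = neutron_speed(neutron_energy_ev)
    return dt / (1.0 / v_n - 1.0 / C)

# Example: a 2 MeV neutron arriving ~26 ns after its correlated gamma corresponds
# to a source roughly half a metre from the detector.
print(source_distance(26e-9, 2.0e6))
```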

  4. 3D integration of microfluidics and microoptics inside photosensitive glass by femtosecond laser direct writing for photonic biosensing

    NASA Astrophysics Data System (ADS)

    Sugioka, Koji; Wang, Zhongke; Midorikawa, Katsumi

    2008-02-01

    Optical waveguides with a propagation loss of around 0.5 dB/cm are written inside photosensitive Foturan glass by internal modification of the refractive index using a femtosecond (fs) laser. Integration of the optical waveguides with a micromirror enables us to bend the guided laser beam at an angle of 90° with a bending loss of less than 0.3 dB. Meanwhile, a plano-convex microlens is completely embedded inside the Foturan glass chip via formation of a three-dimensional (3D) hollow microstructure using fs laser direct writing followed by heat treatment and successive wet etching. This technique can also be used to fabricate microfluidic devices and therefore realizes 3D integration of microoptical and microfluidic components in one continuous procedure. Subsequently, microoptical waveguides are further integrated into the single glass chip. Demonstration of optical measurements using the integrated microchip reveals that photonic biosensing can be performed with an efficiency increased by a factor of 8 for fluorescence detection and by a factor of 3 for absorption detection.

  5. Membrane-mirror-based display for viewing 2D and 3D images

    NASA Astrophysics Data System (ADS)

    McKay, Stuart; Mason, Steven; Mair, Leslie S.; Waddell, Peter; Fraser, Simon M.

    1999-05-01

    Stretchable Membrane Mirrors (SMMs) have been developed at the University of Strathclyde as a cheap, lightweight and variable-focal-length alternative to conventional fixed-curvature glass-based optics. An SMM uses a thin sheet of aluminized polyester film which is stretched over a specially shaped frame, forming an airtight cavity behind the membrane. Removal of air from that cavity causes the resulting air pressure difference to force the membrane back into a concave shape. Controlling the pressure difference acting over the membrane thus controls the curvature, or f/No., of the mirror. Mirrors from 0.15 m to 1.2 m in diameter have been constructed at the University of Strathclyde. The use of lenses and mirrors to project real images in space is perhaps one of the simplest forms of 3D display. When using conventional optics, however, there are severe financial restrictions on what size of image-forming element may be used, hence the appeal of an SMM. The mirrors have been used both as image-forming elements and as directional screens in volumetric, stereoscopic and large-format simulator displays. It was found that the use of these specular reflecting surfaces greatly enhances the perceived image quality of the resulting magnified display.

  6. Fabrication and characterization of strontium incorporated 3-D bioactive glass scaffolds for bone tissue from biosilica.

    PubMed

    Özarslan, Ali Can; Yücel, Sevil

    2016-11-01

    Bioactive glass scaffolds that contain silica are highly viable biomaterials as bone supports for bone tissue engineering, owing to their bioactive behaviour in simulated body fluid (SBF). In the human body, these materials support inorganic bone structure formation owing to the particular ratio of elements such as silicon (Si), calcium (Ca), sodium (Na) and phosphorus (P), and doping strontium (Sr) into the scaffold structure increases their bioactive behaviour. In this study, bioactive glass scaffolds were produced using rice hull ash (RHA) silica and commercial silica-based bioactive glasses. The structural properties of the scaffolds, such as pore size and porosity, as well as their bioactive behaviour, were investigated. The results showed that undoped and Sr-doped RHA silica-based bioactive glass scaffolds have better bioactivity than commercial silica-based bioactive glass scaffolds. Moreover, undoped and Sr-doped RHA silica-based bioactive glass scaffolds will be able to be used instead of their commercial silica-based counterparts for bone regeneration applications. Scaffolds produced from undoped or Sr-doped RHA silica have high potential to form new bone at bone defects in tissue engineering.

  7. A 3D Freehand Ultrasound System for Multi-view Reconstructions from Sparse 2D Scanning Planes

    PubMed Central

    2011-01-01

    Background A significant limitation of existing 3D ultrasound systems comes from the fact that the majority of them work with fixed acquisition geometries. As a result, the users have very limited control over the geometry of the 2D scanning planes. Methods We present a low-cost and flexible ultrasound imaging system that integrates several image processing components to allow for 3D reconstructions from limited numbers of 2D image planes and multiple acoustic views. Our approach is based on a 3D freehand ultrasound system that allows users to control the 2D acquisition imaging using conventional 2D probes. For reliable performance, we develop new methods for image segmentation and robust multi-view registration. We first present a new hybrid geometric level-set approach that provides reliable segmentation performance with relatively simple initializations and minimum edge leakage. Optimization of the segmentation model parameters and its effect on performance is carefully discussed. Second, using the segmented images, a new coarse to fine automatic multi-view registration method is introduced. The approach uses a 3D Hotelling transform to initialize an optimization search. Then, the fine scale feature-based registration is performed using a robust, non-linear least squares algorithm. The robustness of the multi-view registration system allows for accurate 3D reconstructions from sparse 2D image planes. Results Volume measurements from multi-view 3D reconstructions are found to be consistently and significantly more accurate than measurements from single view reconstructions. The volume error of multi-view reconstruction is measured to be less than 5% of the true volume. We show that volume reconstruction accuracy is a function of the total number of 2D image planes and the number of views for calibrated phantom. In clinical in-vivo cardiac experiments, we show that volume estimates of the left ventricle from multi-view reconstructions are found to be in better

  8. Beat the diffraction limit in 3D direct laser writing in photosensitive glass.

    PubMed

    Bellec, Matthieu; Royon, Arnaud; Bousquet, Bruno; Bourhis, Kevin; Treguer, Mona; Cardinal, Thierry; Richardson, Martin; Canioni, Lionel

    2009-06-08

    Three-dimensional (3D) femtosecond laser direct structuring in transparent materials is widely used for photonic applications. However, the structure size is limited by optical diffraction. Here we report on a direct laser writing technique that produces subwavelength nanostructures independently of the experimental limiting factors. We demonstrate 3D nanostructures of arbitrary patterns with feature sizes down to 80 nm, less than one tenth of the laser processing wavelength. Its ease of implementation and accompanying high precision will open new opportunities for the fabrication of nanostructures for plasmonic and photonic devices and for applications in metamaterials.

  9. Effect of 3d-transition metal doping on the shielding behavior of barium borate glasses: a spectroscopic study.

    PubMed

    ElBatal, H A; Abdelghany, A M; Ghoneim, N A; ElBatal, F H

    2014-12-10

    UV-visible and FT infrared spectra were measured for the prepared samples before and after gamma irradiation. The base undoped barium borate glass of composition 40BaO-60B2O3 (mol%) reveals strong charge-transfer UV absorption bands related to unavoidable trace iron impurities (Fe(3+)) in the chemical raw materials. 3d transition metal (TM)-doped glasses exhibit extra characteristic absorption bands due to each TM in its specific valence or coordination state. The optical spectra show that TM ions generally favor the higher valence or tetrahedral coordination state in the barium borate host glass. Infrared absorption bands of all prepared glasses reveal the appearance of both triangular BO3 units and tetrahedral BO4 units through their characteristic vibrational modes; the TM ions cause only minor effects because of the low doping level introduced (0.2%). Gamma irradiation of the undoped barium borate glass increases the intensity of the UV absorption together with the generation of an induced broad visible band at about 580 nm. These changes are correlated with suggested photochemical reactions of trace iron impurities together with the generation of positive hole centers (BHC or OHC) in the visible region through electrons and positive holes generated during the irradiation process.

  10. Construction of Extended 3D Field of Views of the Internal Bladder Wall Surface: A Proof of Concept

    NASA Astrophysics Data System (ADS)

    Ben-Hamadou, Achraf; Daul, Christian; Soussen, Charles

    2016-09-01

    Extended 3D fields of view (FOVs) of the internal bladder wall facilitate lesion diagnosis, patient follow-up and treatment traceability. In this paper, we propose a 3D image mosaicing algorithm guided by 2D cystoscopic video-image registration for obtaining textured FOV mosaics. In this feasibility study, the registration makes use of data from a 3D cystoscope prototype providing, in addition to each small-FOV image, some 3D points located on the surface. This proof of concept shows that textured surfaces can be constructed with minimally modified cystoscopes. The potential of the method is demonstrated on numerical and real phantoms reproducing various surface shapes. Pig and human bladder textures are superimposed on phantoms with known shape and dimensions. These data allow for quantitative assessment of the 3D mosaicing algorithm based on the registration of images simulating bladder textures.

  11. 3-D view of erosional scars on U. S. Mid-Atlantic continental margin

    SciTech Connect

    Farre, J.A.; Ryan, W.B.

    1985-06-01

    Deep-towed side-scan and bathymetric data have been merged to present a 3-D view of the lower continental slope and upper continental rise offshore Atlantic City, New Jersey. Carteret Canyon narrows and becomes nearly stranded on the lower slope where it leads into one of two steep-walled, flat-floored erosional chutes. The floors of the chutes, cut into semilithified middle Eocene siliceous limestones, are marked by downslope-trending grooves. The grooves are interpreted to be gouge marks formed during rock and sediment slides. On the uppermost rise, beneath the chutes, is a 40-m deep depression. The origin of the depression is believed to be related to material moving downslope and encountering the change in gradient at the slope/rise boundary. Downslope of the depression are channels, trails, and allochthonous blocks. The lack of significant post-early Miocene deposits implies that the lower slope offshore New Jersey has yet to reach a configuration conducive to sediment accumulation. The age of erosion on the lower slope apparently ranges from late Eocene-early Miocene to the recent geologic past.

  12. The RCSB protein data bank: integrative view of protein, gene and 3D structural information

    PubMed Central

    Rose, Peter W.; Prlić, Andreas; Altunkaya, Ali; Bi, Chunxiao; Bradley, Anthony R.; Christie, Cole H.; Costanzo, Luigi Di; Duarte, Jose M.; Dutta, Shuchismita; Feng, Zukang; Green, Rachel Kramer; Goodsell, David S.; Hudson, Brian; Kalro, Tara; Lowe, Robert; Peisach, Ezra; Randle, Christopher; Rose, Alexander S.; Shao, Chenghua; Tao, Yi-Ping; Valasatava, Yana; Voigt, Maria; Westbrook, John D.; Woo, Jesse; Yang, Huangwang; Young, Jasmine Y.; Zardecki, Christine; Berman, Helen M.; Burley, Stephen K.

    2017-01-01

    The Research Collaboratory for Structural Bioinformatics Protein Data Bank (RCSB PDB, http://rcsb.org), the US data center for the global PDB archive, makes PDB data freely available to all users, from structural biologists to computational biologists and beyond. New tools and resources have been added to the RCSB PDB web portal in support of a ‘Structural View of Biology.’ Recent developments have improved the User experience, including the high-speed NGL Viewer that provides 3D molecular visualization in any web browser, improved support for data file download and enhanced organization of website pages for query, reporting and individual structure exploration. Structure validation information is now visible for all archival entries. PDB data have been integrated with external biological resources, including chromosomal position within the human genome; protein modifications; and metabolic pathways. PDB-101 educational materials have been reorganized into a searchable website and expanded to include new features such as the Geis Digital Archive. PMID:27794042

  13. Analytical 3D views and virtual globes — scientific results in a familiar spatial context

    NASA Astrophysics Data System (ADS)

    Tiede, Dirk; Lang, Stefan

    In this paper we introduce analytical three-dimensional (3D) views as a means for effective and comprehensible information delivery, using virtual globes and the third dimension as an additional information carrier. Four case studies are presented, in which information extraction results from very high spatial resolution (VHSR) satellite images were conditioned and aggregated or disaggregated to regular spatial units. The case studies were embedded in the context of: (1) urban life quality assessment (Salzburg/Austria); (2) post-disaster assessment (Harare/Zimbabwe); (3) emergency response (Lukole/Tanzania); and (4) contingency planning (mock crisis scenario/Germany). The results are made available in different virtual globe environments, using the implemented contextual data (such as satellite imagery, aerial photographs, and auxiliary geodata) as valuable additional context information. This tailored information product addresses both day-to-day users and high-level decision makers. The degree of abstraction required for understanding complex analytical content is balanced with the ease and appeal by which the context is conveyed.

  14. Non-intubated subxiphoid uniportal video-assisted thoracoscopic thymectomy using glasses-free 3D vision

    PubMed Central

    Jiang, Long; Liu, Jun; Shao, Wenlong; Li, Jingpei

    2016-01-01

    Trans-sternal thymectomy has long been accepted as the standard surgical procedure for thymic masses. Recently, minimally invasive methods, such as video-assisted thoracoscopic surgery (VATS) and, even more recently, non-intubated anesthesia, have emerged. These methods provide advantages including reduced surgical trauma and postoperative pain and, in the case of VATS, certain cosmetic benefits. Considering these advantages, we herein present a case of subxiphoid uniportal VATS for a thymic mass using a glasses-free 3D thoracoscopic display system. PMID:28149591

  15. 2D-3D registration for brain radiation therapy using a 3D CBCT and a single limited field-of-view 2D kV radiograph

    NASA Astrophysics Data System (ADS)

    Munbodh, R.; Moseley, D. J.

    2014-03-01

    We report results of an intensity-based 2D-3D rigid registration framework for patient positioning and monitoring during brain radiotherapy. We evaluated two intensity-based similarity measures, the Pearson Correlation Coefficient (ICC) and Maximum Likelihood with Gaussian noise (MLG) derived from the statistics of transmission images. A useful image frequency band was identified from the bone-to-no-bone ratio. Validation was performed on gold-standard data consisting of 3D kV CBCT scans and 2D kV radiographs of an anthropomorphic head phantom acquired at 23 different poses with parameter variations along six degrees of freedom. At each pose, a single limited field-of-view kV radiograph was registered to the reference CBCT. The ground truth was determined from markers affixed to the phantom and visible in the CBCT images. The mean (and standard deviation) of the absolute errors in recovering each of the six transformation parameters along the x, y and z axes for ICC were φx: 0.08(0.04)°, φy: 0.10(0.09)°, φz: 0.03(0.03)°, tx: 0.13(0.11) mm, ty: 0.08(0.06) mm and tz: 0.44(0.23) mm. For MLG, the corresponding results were φx: 0.10(0.04)°, φy: 0.10(0.09)°, φz: 0.05(0.07)°, tx: 0.11(0.13) mm, ty: 0.05(0.05) mm and tz: 0.44(0.31) mm. It is feasible to accurately estimate all six transformation parameters from a 3D CBCT of the head and a single 2D kV radiograph within an intensity-based registration framework that incorporates the physics of transmission images.
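
    As a rough illustration of the intensity-based similarity at the core of such a framework, the Pearson correlation between a digitally reconstructed radiograph (DRR) rendered from the CBCT at a candidate pose and the acquired kV radiograph can be computed as sketched below. This is only a generic sketch, not the authors' implementation, and the band-pass filtering implied by the useful frequency band is omitted.

```python
import numpy as np

def pearson_similarity(drr, radiograph):
    """Pearson correlation coefficient between two images of equal shape."""
    a = drr.ravel().astype(float)
    b = radiograph.ravel().astype(float)
    a -= a.mean()
    b -= b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# A registration loop would maximise this score over the six rigid-body
# parameters (three rotations, three translations) used to render the DRR.
```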

  16. Comparison of simultaneous and sequential two-view registration for 3D/2D registration of vascular images.

    PubMed

    Pathak, Chetna; Van Horn, Mark; Weeks, Susan; Bullitt, Elizabeth

    2005-01-01

    Accurate 3D/2D vessel registration is complicated by issues of image quality, occlusion, and other problems. This study performs a quantitative comparison of 3D/2D vessel registration in which vessels segmented from preoperative CT or MR are registered with biplane x-ray angiograms by either a) simultaneous two-view registration with advance calculation of the relative pose of the two views, or b) sequential registration with each view. We conclude on the basis of phantom studies that, even in the absence of image errors, simultaneous two-view registration is more accurate than sequential registration. In more complex settings, including clinical conditions, the relative accuracy of simultaneous two-view registration is even greater.

  17. Direct laser-writing of ferroelectric single-crystal waveguide architectures in glass for 3D integrated optics

    PubMed Central

    Stone, Adam; Jain, Himanshu; Dierolf, Volkmar; Sakakura, Masaaki; Shimotsuma, Yasuhiko; Miura, Kiyotaka; Hirao, Kazuyuki; Lapointe, Jerome; Kashyap, Raman

    2015-01-01

    Direct three-dimensional laser writing of amorphous waveguides inside glass has been studied intensely as an attractive route for fabricating photonic integrated circuits. However, achieving essential nonlinear-optic functionality in such devices will also require the ability to create high-quality single-crystal waveguides. Femtosecond laser irradiation is capable of crystallizing glass in 3D, but producing optical-quality single-crystal structures suitable for waveguiding poses unique challenges that are unprecedented in the field of crystal growth. In this work, we use a high angular-resolution electron diffraction method to obtain the first conclusive confirmation that uniform single crystals can be grown inside glass by femtosecond laser writing under optimized conditions. We confirm waveguiding capability and present the first quantitative measurement of power transmission through a laser-written crystal-in-glass waveguide, yielding loss of 2.64 dB/cm at 1530 nm. We demonstrate uniformity of the crystal cross-section down the length of the waveguide and quantify its birefringence. Finally, as a proof-of-concept for patterning more complex device geometries, we demonstrate the use of dynamic phase modulation to grow symmetric crystal junctions with single-pass writing. PMID:25988599

  18. Diffraction effects incorporated design of a parallax barrier for a high-density multi-view autostereoscopic 3D display.

    PubMed

    Yoon, Ki-Hyuk; Ju, Heongkyu; Kwon, Hyunkyung; Park, Inkyu; Kim, Sung-Kyu

    2016-02-22

    We present the optical characteristics of view images provided by a high-density multi-view autostereoscopic 3D display (HD-MVA3D) with a parallax barrier (PB). Diffraction effects, which become very important in a display system that uses a PB, are considered in a one-dimensional model of the 3D display, in which light propagation from the display panel pixels through the PB slits to the viewing zone is simulated numerically. The simulation results are then compared to the corresponding experimental measurements and discussed. We demonstrate that the Fresnel number, as a main parameter for view-image quality evaluation, can be used to determine the PB slit aperture for the best performance of the display system. It is revealed that a set of display parameters giving a Fresnel number of ~0.7 maximizes the brightness of the view images, while parameters corresponding to a Fresnel number of 0.4-0.5 minimize image crosstalk. The compromise between brightness and crosstalk enables optimization of their relative magnitude and leads to the choice of a display parameter set for the HD-MVA3D with a PB that satisfies the condition where the Fresnel number lies between 0.4 and 0.7.
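
    For orientation, the Fresnel number relates the slit aperture, the wavelength and the propagation distance. The helper below uses the common convention N_F = a^2 / (λ L) with a the slit half-aperture; whether the cited work uses the half-width or full width, and which distance, is an assumption here, so the numbers are purely illustrative.

```python
def fresnel_number(aperture_half_width_m, wavelength_m, distance_m):
    """Fresnel number N_F = a^2 / (lambda * L) for a slit of half-width a."""
    return aperture_half_width_m ** 2 / (wavelength_m * distance_m)

# Example: a 50 micrometre half-aperture, green light, 4 mm slit-to-panel gap
print(fresnel_number(50e-6, 550e-9, 4e-3))   # ~1.14
```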

  19. Human guidance of mobile robots in complex 3D environments using smart glasses

    NASA Astrophysics Data System (ADS)

    Kopinsky, Ryan; Sharma, Aneesh; Gupta, Nikhil; Ordonez, Camilo; Collins, Emmanuel; Barber, Daniel

    2016-05-01

    In order for humans to safely work alongside robots in the field, the human-robot (HR) interface, which enables bi-directional communication between human and robot, should be able to quickly and concisely express the robot's intentions and needs. While the robot operates mostly in autonomous mode, the human should be able to intervene to effectively guide the robot in complex, risky and/or highly uncertain scenarios. Using smart glasses such as Google Glass, we seek to develop an HR interface that reduces interaction time and distractions during interaction with the robot.

  20. A theoretical view of the C3d:CR2 binding controversy.

    PubMed

    Mohan, Rohith R; Gorham, Ronald D; Morikis, Dimitrios

    2015-03-01

    The C3d:CR2(SCR1-2) interaction plays an important role in bridging innate and adaptive immunity, leading to enhanced antibody production at sites of complement activation. Over the past decade, there has been much debate over the binding mode of this interaction. An initial cocrystal structure (PDB: 1GHQ) was published in 2001, in which the only interactions observed were between the SCR2 domain of CR2 and a side-face of C3d whereas a cocrystal structure (PDB: 3OED) published in 2011 showed both the SCR1 and SCR2 domains of CR2 interacting with an acidic patch on the concave surface of C3d. The initial 1GHQ structure is at odds with the majority of existing biochemical data and the publication of the 3OED structure renewed uncertainty regarding the physiological relevance of 1GHQ, suggesting that crystallization may have been influenced by the presence of zinc acetate in the crystallization process. In our study, we used a variety of computational approaches to gain insight into the binding mode between C3d and CR2 and demonstrate that the binding site at the acidic patch (3OED) is electrostatically more favorable, exhibits better structural and dissociative stability, specifically at the SCR1 domain, and has higher binding affinity than the 1GHQ binding mode. We also observe that nonphysiological zinc ions enhance the formation of the C3d:CR2 complex at the side face of C3d (1GHQ) through increases in electrostatic favorability, intermolecular interactions, dissociative character and overall energetic favorability. These results provide a theoretical basis for the association of C3d:CR2 at the acidic cavity of C3d and provide an explanation for binding of CR2 at the side face of C3d in the presence of nonphysiological zinc ions.

  1. Development of a Top-View Numeric Coding Teaching-Learning Trajectory within an Elementary Grades 3-D Visualization Design Research Project

    ERIC Educational Resources Information Center

    Sack, Jacqueline J.

    2013-01-01

    This article explicates the development of top-view numeric coding of 3-D cube structures within a design research project focused on 3-D visualization skills for elementary grades children. It describes children's conceptual development of 3-D cube structures using concrete models, conventional 2-D pictures and abstract top-view numeric…

  2. Extended field-of-view and increased-signal 3D holographic illumination with time-division multiplexing.

    PubMed

    Yang, Samuel J; Allen, William E; Kauvar, Isaac; Andalman, Aaron S; Young, Noah P; Kim, Christina K; Marshel, James H; Wetzstein, Gordon; Deisseroth, Karl

    2015-12-14

    Phase spatial light modulators (SLMs) are widely used for generating multifocal three-dimensional (3D) illumination patterns, but these are limited to a field of view constrained by the pixel count or size of the SLM. Further, with two-photon SLM-based excitation, increasing the number of focal spots penalizes the total signal linearly--requiring more laser power than is available or can be tolerated by the sample. Here we analyze and demonstrate a method of using galvanometer mirrors to time-sequentially reposition multiple 3D holograms, both extending the field of view and increasing the total time-averaged two-photon signal. We apply our approach to 3D two-photon in vivo neuronal calcium imaging.

  3. Extended field-of-view and increased-signal 3D holographic illumination with time-division multiplexing

    PubMed Central

    Yang, Samuel J.; Allen, William E.; Kauvar, Isaac; Andalman, Aaron S.; Young, Noah P.; Kim, Christina K.; Marshel, James H.; Wetzstein, Gordon; Deisseroth, Karl

    2016-01-01

    Phase spatial light modulators (SLMs) are widely used for generating multifocal three-dimensional (3D) illumination patterns, but these are limited to a field of view constrained by the pixel count or size of the SLM. Further, with two-photon SLM-based excitation, increasing the number of focal spots penalizes the total signal linearly—requiring more laser power than is available or can be tolerated by the sample. Here we analyze and demonstrate a method of using galvanometer mirrors to time-sequentially reposition multiple 3D holograms, both extending the field of view and increasing the total time-averaged two-photon signal. We apply our approach to 3D two-photon in vivo neuronal calcium imaging. PMID:26699047

  4. Effectiveness of Applying 2D Static Depictions and 3D Animations to Orthographic Views Learning in Graphical Course

    ERIC Educational Resources Information Center

    Wu, Chih-Fu; Chiang, Ming-Chin

    2013-01-01

    This study provides experimental results as an educational reference for instructors, helping students learn orthographic views in a graphical course more effectively. A visual experiment was conducted to explore the comprehension differences between 2D static and 3D animated object features; the goal was to reduce the possible misunderstanding…

  5. From pixel to voxel: a deeper view of biological tissue by 3D mass spectral imaging

    PubMed Central

    Ye, Hui; Greer, Tyler; Li, Lingjun

    2011-01-01

    Three-dimensional mass spectral imaging (3D MSI) is an exciting field that grants the ability to study a broad mass range of molecular species, ranging from small molecules to large proteins, by creating lateral and vertical distribution maps of select compounds. Although the general premise behind 3D MSI is simple, factors such as the choice of ionization method, sample handling, software considerations and many others must be taken into account for the successful design of a 3D MSI experiment. This review provides a brief overview of ionization methods, sample preparation, software types and technological advancements driving 3D MSI research of a wide range of low- to high-mass analytes. Future perspectives in this field are also provided, concluding that this powerful analytical tool promises ever-growing applications in the biomedical field as its development continues. PMID:21320052

  6. A full-parallax 3D display with restricted viewing zone tracking viewer's eye

    NASA Astrophysics Data System (ADS)

    Beppu, Naoto; Yendo, Tomohiro

    2015-03-01

    Three-dimensional (3D) vision has become a widely familiar imaging technique. 3D displays have been put into practical use in various fields, such as entertainment and medicine, and the development of 3D display technology will play an important role in a wide range of fields. There are various methods for displaying 3D images; we focus on one that displays 3D images by ray reproduction. This method needs many viewpoint images to achieve full parallax, because it displays a different viewpoint image depending on the viewing position. We proposed to reduce wasted rays by limiting the projector's rays to the vicinity of the viewer using a spinning mirror, thereby increasing the effectiveness of the display device and achieving a full-parallax 3D display. The proposed method uses viewer eye tracking, a high-speed projector, a rotating mirror that tracks the viewer (a spinning mirror), a concave mirror array with different vertical slopes arranged circumferentially, and a cylindrical mirror. In simulation, we confirmed the scanning range and the horizontal locus of ray movement, as well as the switching of viewpoints and the convergence performance of rays in the vertical direction. We therefore confirmed that full parallax can be realized.

  7. Hubble and ESO's VLT provide unique 3D views of remote galaxies

    NASA Astrophysics Data System (ADS)

    2009-03-01

    Astronomers have obtained exceptional 3D views of distant galaxies, seen when the Universe was half its current age, by combining the twin strengths of the NASA/ESA Hubble Space Telescope's acute eye, and the capacity of ESO's Very Large Telescope to probe the motions of gas in tiny objects. By looking at this unique "history book" of our Universe, at an epoch when the Sun and the Earth did not yet exist, scientists hope to solve the puzzle of how galaxies formed in the remote past. [Image captions: ESO PR Photo 10a/09 - A 3D view of remote galaxies; ESO PR Photo 10b/09 - Measuring motions in 3 distant galaxies; ESO PR Video 10a/09 - Galaxies in collision.] For decades, distant galaxies that emitted their light six billion years ago were no more than small specks of light on the sky. With the launch of the Hubble Space Telescope in the early 1990s, astronomers were able to scrutinise the structure of distant galaxies in some detail for the first time. Under the superb skies of Paranal, the VLT's FLAMES/GIRAFFE spectrograph (ESO 13/02) -- which obtains simultaneous spectra from small areas of extended objects -- can now also resolve the motions of the gas in these distant galaxies (ESO 10/06). "This unique combination of Hubble and the VLT allows us to model distant galaxies almost as nicely as we can close ones," says François Hammer, who led the team. "In effect, FLAMES/GIRAFFE now allows us to measure the velocity of the gas at various locations in these objects. This means that we can see how the gas is moving, which provides us with a three-dimensional view of galaxies halfway across the Universe." The team has undertaken the Herculean task of reconstituting the history of about one hundred remote galaxies that have been observed with both Hubble and GIRAFFE on the VLT. The first results are coming in and have already provided useful insights for three galaxies. In one galaxy, GIRAFFE revealed a region full of ionised gas, that is, hot gas composed of atoms that have been stripped of

  8. 3D bioprint me: a socioethical view of bioprinting human organs and tissues.

    PubMed

    Vermeulen, Niki; Haddow, Gill; Seymour, Tirion; Faulkner-Jones, Alan; Shu, Wenmiao

    2017-03-20

    In this article, we review the extant social science and ethical literature on three-dimensional (3D) bioprinting. 3D bioprinting has the potential to be a 'game-changer', printing human organs on demand and no longer necessitating living or deceased human donation or animal transplantation. Although the technology is not yet at the level required to bioprint an entire organ, 3D bioprinting may have a variety of other mid-term and short-term benefits that also have positive ethical consequences, for example, creating alternatives to animal testing, filling a therapeutic need for minors and avoiding species boundary crossing. Despite a lack of current socioethical engagement with the consequences of the technology, we outline what we see as some preliminary practical, ethical and regulatory issues that need tackling. These relate to managing public expectations and the continuing reliance on technoscientific solutions to diseases that affect high-income countries. While avoiding prescribing a course of action for research agendas, we briefly outline one possible ethical framework, 'Responsible Research Innovation', as an oversight model should the promises of 3D bioprinting ever be realised. 3D bioprinting has a lot to offer in the course of time should it move beyond a conceptual therapy, but it is an area that requires ethical oversight, regulation and debate in the here and now. The purpose of this article is to begin that discussion.

  9. 3D analysis of thermal and stress evolution during laser cladding of bioactive glass coatings.

    PubMed

    Krzyzanowski, Michal; Bajda, Szymon; Liu, Yijun; Triantaphyllou, Andrew; Mark Rainforth, W; Glendenning, Malcolm

    2016-06-01

    Thermal and strain-stress transient fields during laser cladding of bioactive glass coatings on a Ti6Al4V alloy substrate were numerically calculated and analysed. Conditions leading to micro-cracking susceptibility of the coating were investigated using finite element based modelling supported by experimental results from microscopic investigation of the sample coatings. Consecutive temperature and stress peaks develop within the cladded material as the laser beam moves along its complex trajectory, which can lead to micro-cracking. Preheating the base plate to 500°C allowed the laser power to be decreased and lowered the cooling rate between consecutive temperature peaks, thereby contributing to lower cracking susceptibility. The cooling rate during cladding of the second and third layers was lower than during cladding of the first, contributing to improved cracking resistance of the subsequent layers due to the progressive accumulation of heat over the process.

  10. Optical rotation compensation for a holographic 3D display with a 360 degree horizontal viewing zone.

    PubMed

    Sando, Yusuke; Barada, Daisuke; Yatagai, Toyohiko

    2016-10-20

    A method for continuous optical rotation compensation in a time-division-based holographic three-dimensional (3D) display with a rotating mirror is presented. Since the coordinate system of the wavefronts after mirror reflection rotates about the optical axis along with the rotation angle, compensation or cancellation is necessary to keep the reconstructed 3D object fixed. In this study, we address this problem by introducing an optical image rotator based on a right-angle prism that rotates synchronously with the rotating mirror. The optical and continuous compensation reduces the occurrence of duplicate images, which improves the quality of the reconstructed images. The effect of the optical rotation compensation is experimentally verified and a demonstration of a holographic 3D display with optical rotation compensation is presented.

  11. Multi-scale Characterisation of the 3D Microstructure of a Thermally-Shocked Bulk Metallic Glass Matrix Composite

    PubMed Central

    Zhang, Wei; Bodey, Andrew J.; Sui, Tan; Kockelmann, Winfried; Rau, Christoph; Korsunsky, Alexander M.; Mi, Jiawei

    2016-01-01

    Bulk metallic glass matrix composites (BMGMCs) are a new class of metal alloys with significantly increased ductility and impact toughness, resulting from ductile crystalline phases distributed uniformly within the amorphous matrix. However, the 3D structures and morphologies of such composites at the nano- and micrometre scales have not been reported before. We used high-density electric currents to thermally shock a Zr-Ti based BMGMC to different temperatures, and used X-ray microtomography, FIB-SEM nanotomography and neutron diffraction to reveal the morphologies, compositions, volume fractions and thermal stabilities of the nano- and microstructures. Understanding these is essential for optimizing the design of BMGMCs and developing viable manufacturing methods. PMID:26725519

  12. Multi-scale Characterisation of the 3D Microstructure of a Thermally-Shocked Bulk Metallic Glass Matrix Composite

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Bodey, Andrew J.; Sui, Tan; Kockelmann, Winfried; Rau, Christoph; Korsunsky, Alexander M.; Mi, Jiawei

    2016-01-01

    Bulk metallic glass matrix composites (BMGMCs) are a new class of metal alloys with significantly increased ductility and impact toughness, resulting from ductile crystalline phases distributed uniformly within the amorphous matrix. However, the 3D structures and morphologies of such composites at the nano- and micrometre scales have not been reported before. We used high-density electric currents to thermally shock a Zr-Ti based BMGMC to different temperatures, and used X-ray microtomography, FIB-SEM nanotomography and neutron diffraction to reveal the morphologies, compositions, volume fractions and thermal stabilities of the nano- and microstructures. Understanding these is essential for optimizing the design of BMGMCs and developing viable manufacturing methods.

  13. 3D reconstruction of the coronary tree from two X-ray angiographic views

    NASA Astrophysics Data System (ADS)

    Sang, Nong; Peng, Weixue; Li, Heng; Zhang, Zhen; Zhang, Tianxu

    2006-03-01

    In this paper, we develop a method for the reconstruction of 3D coronary arteries from two perspective projections acquired on a standard single-plane angiographic system during the same systole. Our reconstruction is based on the model of generalized cylinders, which are generated by sweeping a two-dimensional cross section along an axis in three-dimensional space. We restrict the cross section to be circular and always perpendicular to the tangent of the axis. First, the vascular centerlines in the X-ray angiography images of both projections are semiautomatically extracted by multiscale vessel tracking using Gabor filters, and the radii of the coronary vessels are acquired simultaneously. Second, the relative geometry of the two projections is determined from the gantry information, and 2D matching is realized through epipolar geometry and the consistency of the vessels. Third, we determine the three-dimensional (3D) coordinates of the identified object points from the image coordinates of the matched points and the calculated imaging system geometry. Finally, we link the consecutive cross sections, processed according to the radius and direction information, to obtain the 3D structure of the artery. The proposed 3D reconstruction method is validated on real data and is shown to perform robustly and accurately in the presence of noise.
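
    The third step, recovering 3D coordinates from matched image points and known projection geometry, is commonly done by linear triangulation. Below is a minimal sketch of the standard direct linear transform (DLT) formulation, assuming 3x4 projection matrices P1 and P2 derived from the gantry information; it is not necessarily the exact solver used in the paper.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Triangulate a 3D point from matched pixels x1, x2 (each (u, v))
    and 3x4 projection matrices P1, P2 using the linear DLT method."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]          # dehomogenise to (x, y, z)
```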

  14. INTERIOR VIEW SHOWING FURNACE KEEPER OBSERVING FURNACE THROUGH BLUE GLASS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    INTERIOR VIEW SHOWING FURNACE KEEPER OBSERVING FURNACE THROUGH BLUE GLASS EVERY TWENTY MINUTES TO DETERMINE SIZE AND TEXTURE OF BATCH AND OTHER VARIABLES. FAN IN FRONT COOLS WORKERS AS THEY CONDUCT REPAIRS. FURNACE TEMPERATURE AT 1572 DEGREES FAHRENHEIT. - Chambers-McKee Window Glass Company, Furnace No. 2, Clay Avenue Extension, Jeannette, Westmoreland County, PA

  15. VIEW OF THREE NORTH FACING STAINED GLASS WINDOWS. THESE WINDOWS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    VIEW OF THREE NORTH FACING STAINED GLASS WINDOWS. THESE WINDOWS ARE LOCATED JUST BELOW THE CHOIR LOFT. - U.S. Naval Base, Pearl Harbor, Chapel, Corner of Oakley & Nimitz Street, Pearl City, Honolulu County, HI

  16. VIEW OF THREE SOUTH FACING STAINED GLASS WINDOWS. THESE WINDOWS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    VIEW OF THREE SOUTH FACING STAINED GLASS WINDOWS. THESE WINDOWS ARE LOCATED ADJACENT TO THE ALTAR. - U.S. Naval Base, Pearl Harbor, Chapel, Corner of Oakley & Nimitz Street, Pearl City, Honolulu County, HI

  17. VIEW OF THREE NORTH FACING STAINED GLASS WINDOWS. THESE WINDOWS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    VIEW OF THREE NORTH FACING STAINED GLASS WINDOWS. THESE WINDOWS ARE LOCATED ADJACENT TO THE ALTAR. - U.S. Naval Base, Pearl Harbor, Chapel, Corner of Oakley & Nimitz Street, Pearl City, Honolulu County, HI

  18. 18. INTERIOR DETAIL VIEW OF STAINED GLASS WINDOW LOCATED AT ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    18. INTERIOR DETAIL VIEW OF STAINED GLASS WINDOW LOCATED AT SOUTH SIDE OF ALTAR, NOTE INSCRIPTION DEDICATED IN THE MEMORY OF FATHER DAMIEN - St. Francis Catholic Church, Moloka'i Island, Kalaupapa, Kalawao County, HI

  19. Micro-electrical discharge machining of 3D micro-molds from Pd40Cu30P20Ni10 metallic glass by using laminated 3D micro-electrodes

    NASA Astrophysics Data System (ADS)

    Xu, Bin; Wu, Xiao-yu; Ma, Jiang; Liang, Xiong; Lei, Jian-guo; Wu, Bo; Ruan, Shuang-chen; Wang, Zhen-long

    2016-03-01

    To obtain 3D micro-molds with better surface quality (slight ridges) and mechanical properties, in this paper 3D micro-electrodes were fabricated and applied in micro-electrical discharge machining (micro-EDM) to process Pd40Cu30P20Ni10 metallic glass. First, 100 μm-thick Cu foil was cut to obtain multilayer 2D micro-structures, which were then assembled to form 3D micro-electrodes (with feature sizes of less than 1 mm). Second, under a voltage of 80 V, pulse frequency of 0.2 MHz, pulse width of 800 ns and pulse interval of 4200 ns, the 3D micro-electrodes were applied in micro-EDM to process Pd40Cu30P20Ni10 metallic glass, and 3D micro-molds with feature sizes within 1 mm were obtained. Third, scanning electron microscopy, energy dispersive spectroscopy and X-ray diffraction analyses were carried out on the processed results. The analysis results indicate that with increasing micro-EDM depth, carbon on the processed surface gradually increased from 0.5% to 5.8%, and the processed surface contained new phases (Ni12P5 and Cu3P).

  20. Reconstruction of quadratic curves in 3D using two or more perspective views: simulation studies

    NASA Astrophysics Data System (ADS)

    Kumar, Sanjeev; Sukavanam, N.; Balasubramanian, R.

    2006-01-01

    The shapes of many natural and man-made objects have planar and curvilinear surfaces. The images of such curves usually do not have sufficient distinctive features to apply conventional feature-based reconstruction algorithms. In this paper, we describe a method for reconstructing a quadratic curve in 3-D space as the intersection of two cones containing the respective projected curve images; the correspondence between this pair of projections of the curve is assumed to be established. Using least-squares curve fitting, the parameters of the curve in 2-D space are found, and from these the 3-D quadratic curve is reconstructed. Relevant mathematical formulations and analytical solutions for obtaining the equation of the reconstructed curve are given. The described reconstruction methodology is evaluated through simulation studies. It is applicable to LBW decisions in cricket, missile path estimation, robotic vision, path planning, etc.
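
    The 2-D least-squares step can be illustrated with a generic conic fit, a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0, solved as the null-space vector of a design matrix. This is a standard formulation given for illustration only, not the paper's exact fitting procedure.

```python
import numpy as np

def fit_conic(points):
    """Least-squares fit of a conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0
    to 2D points; returns the coefficient vector (a, b, c, d, e, f)."""
    x, y = points[:, 0], points[:, 1]
    design = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    # The coefficients are the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(design)
    return vt[-1]

# Example: noisy samples of the ellipse x^2/4 + y^2 = 1
t = np.linspace(0, 2 * np.pi, 100)
pts = np.column_stack([2 * np.cos(t), np.sin(t)]) + 0.01 * np.random.randn(100, 2)
print(fit_conic(pts))
```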

  1. View planetary differentiation process through high-resolution 3D imaging

    NASA Astrophysics Data System (ADS)

    Fei, Y.

    2011-12-01

    Core-mantle separation is one of the most important processes in planetary evolution, defining the structure and chemical distribution in the planets. Iron-dominated core materials could migrate through the silicate mantle to the core by efficient liquid-liquid separation and/or by percolation of liquid metal through the solid silicate matrix. We can experimentally simulate these processes to examine the efficiency and timing of core formation and its geochemical signatures. The quantitative measure of the efficiency of percolation is usually the dihedral angle, related to the interfacial energies of the liquid and solid phases. To determine the true dihedral angle at high pressures and temperatures, it is necessary to measure the relative frequency distributions of apparent dihedral angles between the quenched liquid metal and silicate grains for each experiment. Here I present a new imaging technique to visualize the distribution of liquid metal in a silicate matrix in 3D by a combination of focused ion beam (FIB) milling and high-resolution SEM imaging. The 3D volume rendering provides precise determination of the dihedral angle and quantitative measures of volume fraction and connectivity. I have conducted a series of experiments using mixtures of San Carlos olivine and Fe-S (10 wt% S) metal with different metal-silicate ratios, up to 25 GPa and at temperatures above 1800°C. High-quality 3D volume renderings were reconstructed from FIB serial sectioning and imaging with 10-nm slice thickness and 14-nm image resolution for each quenched sample. The unprecedented spatial resolution at the nano scale allows detailed examination of textural features and precise determination of the dihedral angle as a function of pressure, temperature and composition. The 3D reconstruction also allows direct assessment of connectivity in a multi-phase matrix, providing a new way to investigate the efficiency of metal percolation in a real silicate mantle.
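
    The equilibrium dihedral angle mentioned above is conventionally tied to the solid-solid and solid-liquid interfacial energies through gamma_ss = 2 * gamma_sl * cos(theta / 2). The small helper below applies that textbook relation; the numerical values are purely illustrative and are not data from the abstract.

```python
import math

def dihedral_angle_deg(gamma_ss, gamma_sl):
    """Equilibrium dihedral angle (degrees) from the interfacial-energy balance
    gamma_ss = 2 * gamma_sl * cos(theta / 2)."""
    ratio = gamma_ss / (2.0 * gamma_sl)
    if ratio >= 1.0:
        return 0.0                     # liquid fully wets the grain boundaries
    return 2.0 * math.degrees(math.acos(ratio))

# A ratio above ~0.87 gives theta < 60 degrees, the usual criterion for an
# interconnected melt network at low melt fractions.
print(dihedral_angle_deg(1.0, 0.65))   # ~79 degrees: percolation is inefficient
```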

  2. Twin Peaks - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The two hills in the distance, approximately one to two kilometers away, have been dubbed the 'Twin Peaks' and are of great interest to Pathfinder scientists as objects of future study. 3D glasses are necessary to identify surface detail. The white areas on the left hill, called the 'Ski Run' by scientists, may have been formed by hydrologic processes.

    The IMP is a stereo imaging system with color capability provided by 24 selectable filters -- twelve filters per 'eye'.


  3. Single-View 3D Scene Reconstruction and Parsing by Attribute Grammar.

    PubMed

    Liu, Xiaobai; Zhao, Yibiao; Zhu, Song-Chun

    2017-03-29

    In this paper, we present an attribute grammar for solving two coupled tasks: i) parsing a 2D image into semantic regions; and ii) recovering the 3D scene structures of all regions. The proposed grammar consists of a set of production rules, each describing a kind of spatial relation between planar surfaces in 3D scenes. These production rules are used to decompose an input image into a hierarchical parse graph representation in which each graph node indicates a planar surface or a composite surface. Unlike other stochastic image grammars, the proposed grammar augments each graph node with a set of attribute variables to depict scene-level global geometry, e.g. camera focal length, or local geometry, e.g. surface normals and contact lines between surfaces. These geometric attributes impose constraints between a node and its offspring in the parse graph. Under a probabilistic framework, we develop a Markov Chain Monte Carlo method to construct a parse graph that optimizes the 2D image recognition and 3D scene reconstruction objectives simultaneously. We evaluated our method on both public benchmarks and newly collected datasets. Experiments demonstrate that the proposed method is capable of achieving state-of-the-art scene reconstruction from a single image.

  4. Wide-viewing-angle 3D/2D convertible display system using two display devices and a lens array.

    PubMed

    Choi, Heejin; Park, Jae-Hyeung; Kim, Joohwan; Cho, Seong-Woo; Lee, Byoungho

    2005-10-17

    A wide-viewing-angle 3D/2D convertible display system with a thin structure is proposed that can display both three-dimensional and two-dimensional images. With the use of a transparent display device in front of a conventional integral imaging system, it is possible to display planar images using the conventional system as a backlight source. The proposed method is verified experimentally and compared with the conventional one.

  5. Repercussion of geometric and dynamic constraints on the 3D rendering quality in structurally adaptive multi-view shooting systems

    NASA Astrophysics Data System (ADS)

    Ali-Bey, Mohamed; Moughamir, Saïd; Manamanni, Noureddine

    2011-12-01

    In this paper, a simulator of a multi-view shooting system with parallel optical axes and a structurally variable configuration is proposed. The considered system is dedicated to the production of 3D content for auto-stereoscopic visualization. The global shooting/viewing geometrical process, which is the kernel of this shooting system, is detailed, and the different viewing, transformation and capture parameters are defined. An appropriate perspective projection model is then derived to build the simulator, which is first used to validate the global geometrical process for a static configuration. Next, the simulator is used to show the limitations of a static configuration of this type of shooting system for dynamic scenes, and a dynamic scheme is developed to allow correct capture of such scenes. After that, the effect of the different geometrical capture parameters on the 3D rendering quality, and whether they need to be adapted, is studied. Finally, some dynamic effects and their repercussions on the 3D rendering quality of dynamic scenes are analyzed using error images and image quantization tools. Simulation and experimental results are presented throughout the paper to illustrate the different points studied, and some conclusions and perspectives end the paper.
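
    A minimal sketch of the parallel-optical-axis pinhole projection underlying such a shooting/viewing simulator is shown below. The camera spacing, focal length and point coordinates are illustrative assumptions, not the paper's parameterisation.

```python
import numpy as np

def project_point(point_3d, cam_x_offset, focal_length):
    """Project a 3D point (x, y, z), z > 0, onto the image plane of a pinhole
    camera translated by cam_x_offset along x, with its optical axis along z
    (parallel-axis configuration: cameras are translated, never rotated)."""
    x, y, z = point_3d
    u = focal_length * (x - cam_x_offset) / z
    v = focal_length * y / z
    return u, v

# Eight views of one scene point from a linear rig of cameras spaced 65 mm apart;
# the horizontal disparity between neighbouring views is constant for a fixed depth.
point = np.array([0.1, 0.0, 2.0])
views = [project_point(point, i * 0.065, 0.05) for i in range(8)]
print(views)
```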

  6. 3D reconstruction of scintillation light emission from proton pencil beams using limited viewing angles – a simulation study

    PubMed Central

    Hui, CheukKai; Robertson, Daniel; Beddar, Sam

    2015-01-01

    An accurate and high-resolution quality assurance (QA) method for proton radiotherapy beams is necessary to ensure correct dose delivery to the target. Detectors based on a large volume of liquid scintillator have shown great promise in providing fast and high-resolution measurements of proton treatment fields. However, previous work with these detectors has been limited to two-dimensional measurements, and the quantitative measurement of dose distributions was lacking. The purpose of the current study is to assess the feasibility of reconstructing three-dimensional (3D) scintillation light distributions of spot scanning proton beams using a scintillation system. The proposed system consists of a tank of liquid scintillator imaged by charge-coupled device cameras at three orthogonal viewing angles. Because of the limited number of viewing angles, we developed a profile-based technique to obtain an initial estimate that can improve the quality of the 3D reconstruction. We found that our proposed scintillator system and profile-based technique can reconstruct a single energy proton beam in 3D with a gamma passing rate (3%/3 mm local) of 100.0%. For a single energy layer of an intensity modulated proton therapy prostate treatment plan, the proposed method can reconstruct the 3D light distribution with a gamma pass rate (3%/3 mm local) of 99.7%. In addition, we also found that the proposed method is effective in detecting errors in the treatment plan, indicating that it can be a very useful tool for 3D proton beam QA. PMID:25054735

  7. 3D reconstruction of scintillation light emission from proton pencil beams using limited viewing angles—a simulation study

    NASA Astrophysics Data System (ADS)

    Hui, CheukKai; Robertson, Daniel; Beddar, Sam

    2014-08-01

    An accurate and high-resolution quality assurance (QA) method for proton radiotherapy beams is necessary to ensure correct dose delivery to the target. Detectors based on a large volume of liquid scintillator have shown great promise in providing fast and high-resolution measurements of proton treatment fields. However, previous work with these detectors has been limited to two-dimensional measurements, and the quantitative measurement of dose distributions was lacking. The purpose of the current study is to assess the feasibility of reconstructing three-dimensional (3D) scintillation light distributions of spot scanning proton beams using a scintillation system. The proposed system consists of a tank of liquid scintillator imaged by charge-coupled device cameras at three orthogonal viewing angles. Because of the limited number of viewing angles, we developed a profile-based technique to obtain an initial estimate that can improve the quality of the 3D reconstruction. We found that our proposed scintillator system and profile-based technique can reconstruct a single energy proton beam in 3D with a gamma passing rate (3%/3 mm local) of 100.0%. For a single energy layer of an intensity modulated proton therapy prostate treatment plan, the proposed method can reconstruct the 3D light distribution with a gamma pass rate (3%/3 mm local) of 99.7%. In addition, we also found that the proposed method is effective in detecting errors in the treatment plan, indicating that it can be a very useful tool for 3D proton beam QA.
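
    For context, the gamma passing rate quoted above (3%/3 mm, local normalisation) can be illustrated with a brute-force one-dimensional gamma computation. The sketch below is a generic implementation of the gamma criterion, assuming 1D dose profiles on a common grid; it is not the evaluation code used in the study (which would also apply a low-dose threshold and work in 3D).

```python
import numpy as np

def gamma_pass_rate(ref_dose, eval_dose, positions_mm, dose_tol=0.03, dta_mm=3.0):
    """Fraction of reference points with gamma <= 1 for a 1D dose profile,
    using local dose normalisation (dose_tol * local reference dose)."""
    passed = 0
    for r_pos, r_dose in zip(positions_mm, ref_dose):
        if r_dose <= 0:
            continue
        dist2 = ((positions_mm - r_pos) / dta_mm) ** 2
        dose2 = ((eval_dose - r_dose) / (dose_tol * r_dose)) ** 2
        gamma = np.sqrt(np.min(dist2 + dose2))
        passed += gamma <= 1.0
    return passed / np.count_nonzero(ref_dose > 0)

# Example: a Gaussian pencil-beam profile compared with a slightly shifted copy
x = np.linspace(-20, 20, 401)
ref = np.exp(-x**2 / (2 * 5.0**2))
eva = np.exp(-(x - 0.5)**2 / (2 * 5.0**2))
print(gamma_pass_rate(ref, eva, x))
```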

  8. Spectral analysis of views interpolated by chroma subpixel downsampling for 3D autosteroscopic displays

    NASA Astrophysics Data System (ADS)

    Marson, Avishai; Stern, Adrian

    2015-05-01

    One of the main limitations of horizontal-parallax autostereoscopic displays is the horizontal resolution loss due to the need to repartition the pixels of the display panel among the multiple views. Recently we have shown that this problem can be alleviated by applying a color sub-pixel rendering technique [1]. Interpolated views are generated by down-sampling the panel pixels at the sub-pixel level, thus increasing the number of views. The method takes advantage of the human eye's lower acuity for chromatic resolution. Here we supply further support for the technique by analyzing the spectra of the subsampled images.

  9. Quantitative analysis of 3D stent reconstruction from a limited number of views in cardiac rotational angiography

    NASA Astrophysics Data System (ADS)

    Perrenot, Béatrice; Vaillant, Régis; Prost, Rémy; Finet, Gérard; Douek, Philippe; Peyrin, Françoise

    2007-03-01

    Percutaneous coronary angioplasty consists of guiding a guidewire carrying a balloon and a stent through the lesion and deploying the stent by balloon inflation. A stent is a small, complex 3D mesh that is barely visible in X-ray images: controlling stent deployment is therefore difficult, although it is important for avoiding post-intervention complications. In previous work, we proposed a method to reconstruct 3D stent images from a set of 2D cone-beam projections acquired in rotational acquisition mode. The process involves a motion compensation procedure based on the positions of two markers located on the guidewire in the 2D radiographic sequence. Under the hypothesis that the stent and marker motions are identical, the method was shown to generate a negligible error. If this hypothesis is not fulfilled, a solution could be to use only the images where motion is weakest, at the cost of a limited number of views. In this paper, we propose a simulation-based study of the impact of a limited number of views in our context. The imaging chain involved in the acquisition of X-ray sequences is first modeled to simulate realistic noisy projections of a stent animated by a motion close to cardiac motion. The 3D stent images are then reconstructed from gated projections using the proposed motion compensation method. Two gating strategies for selecting projections in the sequences are examined. A quantitative analysis is carried out to assess reconstruction quality as a function of noise and acquisition strategy.

  10. Potential hazards of viewing 3-D stereoscopic television, cinema and computer games: a review.

    PubMed

    Howarth, Peter A

    2011-03-01

    The visual stimulus provided by a 3-D stereoscopic display differs from that of the real world because the image provided to each eye is produced on a flat surface. The distance from the screen to the eye remains fixed, providing a single focal distance, but the introduction of disparity between the images allows objects to be located geometrically in front of, or behind, the screen. Unlike in the real world, the stimulus to accommodation and the stimulus to convergence do not match. Although this mismatch is used positively in some forms of Orthoptic treatment, a number of authors have suggested that it could negatively lead to the development of asthenopic symptoms. From knowledge of the zone of clear, comfortable, single binocular vision one can predict that, for people with normal binocular vision, adverse symptoms will not be present if the discrepancy is small, but are likely if it is large, and that what constitutes 'large' and 'small' are idiosyncratic to the individual. The accommodation-convergence mismatch is not, however, the only difference between the natural and the artificial stimuli. In the former case, an object located in front of, or behind, a fixated object will not only be perceived as double if the images fall outside Panum's fusional areas, but it will also be defocused and blurred. In the latter case, however, it is usual for the producers of cinema, TV or computer game content to provide an image that is in focus over the whole of the display, and as a consequence diplopic images will be sharply in focus. The size of Panum's fusional area is spatial frequency-dependent, and because of this the high spatial frequencies present in the diplopic 3-D image will provide a different stimulus to the fusion system from that found naturally.

  11. Highly optimized simulations on single- and multi-GPU systems of the 3D Ising spin glass model

    NASA Astrophysics Data System (ADS)

    Lulli, M.; Bernaschi, M.; Parisi, G.

    2015-11-01

    We present a highly optimized implementation of a Monte Carlo (MC) simulator for the three-dimensional Ising spin-glass model with bimodal disorder, i.e., the 3D Edwards-Anderson model, running on CUDA-enabled GPUs. Multi-GPU systems exchange data by means of the Message Passing Interface (MPI). The chosen MC dynamics is the classic Metropolis one, which is purely dissipative, since the aim was the study of the critical off-equilibrium relaxation of the system. We focused on the following issues: (i) the implementation of efficient memory access patterns for nearest neighbours in a cubic stencil and for lagged-Fibonacci-like pseudo-random number generators (PRNGs); (ii) a novel implementation of the asynchronous multispin-coding Metropolis MC step allowing one spin to be stored per bit; and (iii) a multi-GPU version based on a combination of MPI and CUDA streams. Cubic stencils and PRNGs are two subjects of very general interest because of their widespread use in many simulation codes.
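
    A serial, unoptimised reference version of the Metropolis dynamics described above (single spin flips on a cubic lattice with bimodal +/-1 couplings and periodic boundaries) can be written in a few lines. The GPU code in the paper differs substantially (multispin coding, tuned memory access), so this is only a functional sketch of the same update rule.

```python
import numpy as np

def metropolis_sweep(spins, couplings, beta, rng):
    """One sweep of single-spin-flip Metropolis updates for the 3D Edwards-Anderson
    model. spins: LxLxL array of +-1; couplings[d] holds the +-1 bond between each
    site and its neighbour in the +d direction (d = 0, 1, 2), periodic boundaries."""
    L = spins.shape[0]
    for x in range(L):
        for y in range(L):
            for z in range(L):
                s = spins[x, y, z]
                # Local field from the six nearest neighbours
                h = (couplings[0][x, y, z] * spins[(x + 1) % L, y, z]
                     + couplings[0][(x - 1) % L, y, z] * spins[(x - 1) % L, y, z]
                     + couplings[1][x, y, z] * spins[x, (y + 1) % L, z]
                     + couplings[1][x, (y - 1) % L, z] * spins[x, (y - 1) % L, z]
                     + couplings[2][x, y, z] * spins[x, y, (z + 1) % L]
                     + couplings[2][x, y, (z - 1) % L] * spins[x, y, (z - 1) % L])
                delta_e = 2.0 * s * h          # energy change of flipping this spin
                if delta_e <= 0 or rng.random() < np.exp(-beta * delta_e):
                    spins[x, y, z] = -s

rng = np.random.default_rng(1)
L = 8
spins = rng.choice([-1, 1], size=(L, L, L))
couplings = rng.choice([-1, 1], size=(3, L, L, L))   # quenched bimodal disorder
metropolis_sweep(spins, couplings, beta=0.8, rng=rng)
```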

  12. Automated bone segmentation from large field of view 3D MR images of the hip joint

    NASA Astrophysics Data System (ADS)

    Xia, Ying; Fripp, Jurgen; Chandra, Shekhar S.; Schwarz, Raphael; Engstrom, Craig; Crozier, Stuart

    2013-10-01

    Accurate bone segmentation in the hip joint region from magnetic resonance (MR) images can provide quantitative data for examining pathoanatomical conditions such as femoroacetabular impingement through to varying stages of osteoarthritis to monitor bone and associated cartilage morphometry. We evaluate two state-of-the-art methods (multi-atlas and active shape model (ASM) approaches) on bilateral MR images for automatic 3D bone segmentation in the hip region (proximal femur and innominate bone). Bilateral MR images of the hip joints were acquired at 3T from 30 volunteers. Image sequences included water-excitation dual echo steady state (FOV 38.6 × 24.1 cm, matrix 576 × 360, thickness 0.61 mm) in all subjects and multi-echo data image combination (FOV 37.6 × 23.5 cm, matrix 576 × 360, thickness 0.70 mm) for a subset of eight subjects. Following manual segmentation of femoral (head-neck, proximal-shaft) and innominate (ilium+ischium+pubis) bone, automated bone segmentation proceeded via two approaches: (1) multi-atlas segmentation incorporating non-rigid registration and (2) an advanced ASM-based scheme. Mean inter- and intra-rater reliability Dice's similarity coefficients (DSC) for manual segmentation of femoral and innominate bone were (0.970, 0.963) and (0.971, 0.965). Compared with manual data, mean DSC values for femoral and innominate bone volumes using the automated multi-atlas and ASM-based methods were (0.950, 0.922) and (0.946, 0.917), respectively. Both approaches delivered accurate (high DSC values) segmentation results; notably, ASM data were generated in substantially less computational time (12 min versus 10 h). Both automated algorithms provided accurate 3D bone volumetric descriptions for MR-based measures in the hip region. The highly computationally efficient ASM-based approach is more likely suitable for future clinical applications such as extracting bone-cartilage interfaces for potential cartilage segmentation.
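
    The Dice similarity coefficient (DSC) used throughout the evaluation is simply twice the overlap divided by the sum of the two segmentation volumes; a minimal sketch for binary label volumes is shown below.

```python
import numpy as np

def dice(seg_a, seg_b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a = seg_a.astype(bool)
    b = seg_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0          # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```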

  13. Interior detail view, surviving stained glass panel in an east ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Interior detail view, surviving stained glass panel in an east aisle window. Most of the stained glass has been removed from the building and relocated to other area churches. (Similar to HABS No. PA-6694-25). - Acts of the Apostles Church in Jesus Christ, 1400-28 North Twenty-eighth Street, northwest corner of North Twenty-eighth & Master Streets, Philadelphia, Philadelphia County, PA

  14. Supervised and unsupervised MRF based 3D scene classification in multiple view airborne oblique images

    NASA Astrophysics Data System (ADS)

    Gerke, M.; Xiao, J.

    2013-10-01

    In this paper we develop and compare two methods for scene classification in 3D object space; that is, instead of single image pixels, we classify voxels that carry geometric, textural and color information collected from the airborne oblique images and derived products such as point clouds from dense image matching. One method is supervised, i.e. it relies on training data provided by an operator; we use Random Trees for the actual training and prediction tasks. The second method is unsupervised and thus does not require any user interaction; we formulate this classification task as a Markov Random Field problem and employ graph cuts for the actual optimization procedure. Two test areas are used to test and evaluate both techniques. In the Haiti dataset we are confronted with largely destroyed built-up areas, since the images were taken after the earthquake in January 2010, while in the second case we use images taken over Enschede, a typical Central European city. For the Haiti case it is difficult to provide clear class definitions, and this is also reflected in the overall classification accuracy: it is 73% for the supervised and only 59% for the unsupervised method. If classes are defined more unambiguously, as in the Enschede area, results are much better (85% vs. 78%). In conclusion, the results are acceptable, especially taking into account that the point cloud used for geometric features is not of good quality and no infrared channel is available to support vegetation classification.
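
    For the supervised branch, per-voxel classification with Random Trees can be sketched with a standard random-forest implementation. The feature set below (height, colour, texture) and the class labels are only indicative of the kind of attributes described in the abstract, not the exact features or labels used; scikit-learn's RandomForestClassifier stands in for the Random Trees implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row is one voxel: [height_above_ground, mean_R, mean_G, mean_B, texture_score]
rng = np.random.default_rng(0)
train_features = rng.random((1000, 5))
train_labels = rng.integers(0, 4, size=1000)     # e.g. roof, facade, ground, vegetation

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(train_features, train_labels)

test_features = rng.random((200, 5))
predicted_labels = clf.predict(test_features)    # per-voxel class predictions
```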

  15. Creating S0s with Major Mergers: A 3D View

    NASA Astrophysics Data System (ADS)

    Querejeta, Miguel; Eliche-Moral, M.; Tapia, Trinidad; Borlaff, Alejandro; van de Ven, Glenn; Lyubenova, Mariya; Martig, Marie; Falcón-Barroso, Jesús; Méndez-Abreu, Jairo; Zamorano, Jaime; Gallego, Jesús

    2015-12-01

    A number of simulation studies have argued that major mergers can sometimes preserve discs (e.g. Springel & Hernquist 2005), but the possibility that they could explain the emergence of lenticular galaxies (S0s) has been generally neglected. In fact, observations of S0s reveal a strong structural coupling between their bulges and discs, which seems difficult to reconcile with the idea that they come from major mergers. However, in Querejeta et al. (2015a) we have used N-body simulations of binary mergers to show that, under favourable conditions, discs are first destroyed but soon regrow out of the leftover debris, matching observational photometric scaling relations (e.g. Laurikainen et al. 2010). Additionally, in Querejeta et al. (2015b) we have shown how the merger scenario agrees with the recent discovery that S0s and most spirals are not compatible in an angular momentum-concentration plane. This important result from CALIFA constitutes a serious objection to the idea that spirals transform into S0s mainly by fading (e.g. via ram-pressure stripping), as that would not explain the observed simultaneous change in $\lambda_\mathrm{Re}$ and concentration, but our simulations of major mergers do explain that mismatch. From such a 3D comparison we conclude that mergers must be a relevant process in the build-up of the current population of S0s.

  16. A pathway-centric view of spatial proximity in the 3D nucleome across cell lines

    PubMed Central

    Karathia, Hiren; Kingsford, Carl; Girvan, Michelle; Hannenhalli, Sridhar

    2016-01-01

    In various contexts, spatially proximal genes have been shown to be functionally related. However, the extent to which spatial proximity of genes in a pathway contributes to the pathway's context-specific activity is not known. Leveraging Hi-C data in six human cell lines, we show that spatial proximity of genes in a pathway is highly correlated with the pathway's context-specific expression and function. Furthermore, spatial proximity of pathway genes correlates with interactions of their protein products, and the specific pathway genes that are proximal to one another tend to occupy higher levels in the regulatory hierarchy. In addition to intra-pathway proximity, related pathways are spatially proximal to one another, and housekeeping genes tend to be proximal to several other pathways, suggesting a coordinating role. Substantially extending previous works, our study reveals a pathway-centric organization of the 3D nucleome, whereby functionally related interacting driver genes tend to be in spatial proximity in a context-specific manner. PMID:27976707

  17. Petal, terrain & airbags - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Portions of the lander's deflated airbags and a petal are at the lower area of this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. The metallic object at lower right is part of the lander's low-gain antenna. This image is part of a 3D 'monster' panorama of the area surrounding the landing site.

  18. 3D numerical model for a focal plane view in case of mosaic grating compressor for high energy CPA chain.

    PubMed

    Montant, S; Marre, G; Blanchot, N; Rouyer, C; Videau, L; Sauteret, C

    2006-12-11

    Mosaic grating compressors are an important issue for recompressing pulses in multi-petawatt, high-energy laser systems. Alignment of the mosaic elements is crucial to control the focal spot and thus the intensity on target. No existing theoretical approach analyses the influence of compressor misalignment on spatial and temporal profiles in the focal plane. We describe a simple 3D numerical model giving access to the focal plane view after a compressor. This model is computationally inexpensive since it needs only 1D Fourier transforms to access the temporal profile. We present simulations of monolithic and mosaic grating compressors.
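
    The key computational shortcut mentioned above (only 1D Fourier transforms are needed to reach the temporal profile) can be sketched as follows, assuming a hypothetical complex spectral field already propagated to the focal plane and sampled on a regular frequency grid:

      import numpy as np

      # Hypothetical complex spectral field in the focal plane: E[x, y, omega]
      nx, ny, nw = 64, 64, 256
      E_spectral = np.random.rand(nx, ny, nw) + 1j * np.random.rand(nx, ny, nw)

      # Temporal profile at each focal-plane point via a single 1D inverse FFT
      # along the frequency axis.
      E_temporal = np.fft.ifft(E_spectral, axis=-1)
      intensity_t = np.abs(E_temporal) ** 2   # temporal intensity I(x, y, t)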

  19. Venus - 3D Perspective View of Latona Corona and Dali Chasma

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This computer-generated perspective view of Latona Corona and Dali Chasma on Venus shows Magellan radar data superimposed on topography. The view is from the northeast and the vertical exaggeration is 10 times. Exaggeration of relief is a common tool scientists use to detect relationships between structure (i.e. faults and fractures) and topography. Latona Corona, a circular feature approximately 1,000 kilometers (620 miles) in diameter whose eastern half is shown at the left of the image, has a relatively smooth, radar-bright raised rim. Bright lines or fractures within the corona appear to radiate away from its center toward the rim. The rest of the bright fractures in the area are associated with the relatively deep (approximately 3 kilometers or 1.9 miles) troughs of Dali Chasma. The Dali and Diana Chasma system consists of deep troughs that extend for 7,400 kilometers (4,588 miles) and are very distinct features on Venus. These chasmata connect the Ovda and Thetis highlands with the large volcanoes at Atla Regio and thus are considered to be the 'Scorpion Tail' of Aphrodite Terra. The broad, curving scarp resembles some of Earth's subduction zones where crustal plates are pushed over each other. The radar-bright surface at the highest elevation along the scarp is similar to surfaces in other elevated regions where some metallic mineral such as pyrite (fool's gold) may occur on the surface.

  20. Automatic thermographic scanning with the creation of 3D panoramic views of buildings

    NASA Astrophysics Data System (ADS)

    Ferrarini, G.; Cadelano, G.; Bortolin, A.

    2016-05-01

    Infrared thermography is widely applied to the inspection of buildings, enabling the identification of thermal anomalies due to the presence of hidden structures, air leakages, and moisture. One of the main advantages of this technique is the possibility of rapidly acquiring a temperature map of a surface. However, due to the relatively low resolution of thermal cameras and the necessity of scanning surfaces with different orientations, multiple images must be taken during a building survey. In this work a device based on quantitative infrared thermography, called aIRview, has been applied during building surveys to automatically acquire thermograms with a camera mounted on a robotized pan-tilt unit. The goal is to perform a first rapid survey of the building that could give useful information for subsequent quantitative thermal investigations. For each data acquisition, the instrument covers a rotational field of view of 360° around the vertical axis and up to 180° around the horizontal one. The obtained images have been processed in order to create a full equirectangular projection of the environment. The images have then been integrated into a web visualization tool, working with web panorama viewers such as Google Street View, creating a webpage where it is possible to make a three-dimensional virtual visit of the building. The thermographic data are embedded with the visual imaging and with other sensor data, facilitating the understanding of the physical phenomena underlying the temperature distribution.
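
    The equirectangular assembly step described above amounts to mapping each pan/tilt viewing direction of the pan-tilt unit to pixel coordinates of a 360° x 180° panorama; a minimal sketch, assuming the simple linear mapping below (canvas size and angles are illustrative):

      def equirectangular_pixel(pan_deg, tilt_deg, width, height):
          # Map a pan/tilt direction (pan: 0..360 deg, tilt: -90..+90 deg)
          # to pixel coordinates in an equirectangular panorama.
          x = (pan_deg % 360.0) / 360.0 * width
          y = (90.0 - tilt_deg) / 180.0 * height
          return int(round(x)) % width, min(int(round(y)), height - 1)

      # Example: place a thermogram acquired at pan = 135 deg, tilt = 20 deg
      # into a 4096 x 2048 equirectangular canvas.
      col, row = equirectangular_pixel(135.0, 20.0, 4096, 2048)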

  1. Integration of real-time 3D image acquisition and multiview 3D display

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Li, Wei; Wang, Jingyi; Liu, Yongchun

    2014-03-01

    Seamless integration of 3D acquisition and 3D display systems offers an enhanced experience in 3D visualization of real-world objects or scenes. The vivid representation of captured 3D objects displayed on a glasses-free 3D display screen could bring a realistic viewing experience to viewers, as if they were viewing a real-world scene. Although the technologies in 3D acquisition and 3D display have advanced rapidly in recent years, effort is lacking in studying the seamless integration of these two different aspects of 3D technologies. In this paper, we describe our recent progress on integrating a light-field 3D acquisition system and an autostereoscopic multiview 3D display for real-time light field capture and display. This paper focuses on both the architecture design and the implementation of the hardware and the software of this integrated 3D system. A prototype of the integrated 3D system is built to demonstrate the real-time 3D acquisition and 3D display capability of our proposed system.

  2. Janocchio--a Java applet for viewing 3D structures and calculating NMR couplings and NOEs.

    PubMed

    Evans, David A; Bodkin, Michael J; Baker, S Richard; Sharman, Gary J

    2007-07-01

    We present a Java applet, based on the open source Jmol program, which allows the calculation of coupling constants and NOEs from a three-dimensional structure. The program has all the viewing features of Jmol, but adds the capability to calculate both H-H and H-C 3-bond coupling constants. In the case of H-H couplings, the Altona equation is used to perform this. The program also calculates NOEs using the full relaxation matrix approach. All these calculations are driven from a simple point-and-click interface. The program can calculate values for multi-structure files, and can produce input files for the conformational fitting program NAMFIS.
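
    The Altona equation referenced above is a substituent-corrected, Karplus-type relation between the H-C-C-H dihedral angle and the 3-bond coupling constant; a minimal sketch of the underlying Karplus form, with illustrative coefficients and without the electronegativity corrections of the full Altona parametrization:

      import math

      def karplus_coupling(phi_deg, A=7.76, B=-1.1, C=1.4):
          # Three-bond H-H coupling constant (Hz) from the dihedral angle phi;
          # the coefficients here are illustrative, not the Altona parametrization.
          phi = math.radians(phi_deg)
          return A * math.cos(phi) ** 2 + B * math.cos(phi) + C

      # Example: coupling for a 60-degree H-C-C-H dihedral
      j_hh = karplus_coupling(60.0)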

  3. Effects of field-of-view restriction on manoeuvring in a 3-D environment.

    PubMed

    Toet, A; Jansen, S E M; Delleman, N J

    2008-03-01

    Field-of-view (FOV) restrictions are known to affect human behaviour and to degrade performance for a range of different tasks. However, the relationship between human locomotion performance in complex environments and FOV size is currently not fully known. This paper examined the effects of FOV restrictions on the performance of participants manoeuvring through an obstacle course with horizontal and vertical barriers. All FOV restrictions tested (the horizontal FOV was either 30°, 75° or 120°, while the vertical FOV was always 48°) significantly reduced performance compared to the unrestricted condition. Both the time and the number of footsteps needed to traverse the entire obstacle course increased with decreasing FOV size. The relationship between FOV restriction and manoeuvring performance that was determined can be used to formulate requirements for FOV-restricting devices that are deployed to perform time-limited human locomotion tasks in complex structured environments, such as night-vision goggles and head-mounted displays used in training and entertainment systems.

  4. A comparison of multi-view 3D reconstruction of a rock wall using several cameras and a laser scanner

    NASA Astrophysics Data System (ADS)

    Thoeni, K.; Giacomini, A.; Murtagh, R.; Kniest, E.

    2014-06-01

    This work presents a comparative study of multi-view 3D reconstruction using various digital cameras and a terrestrial laser scanner (TLS). Five different digital cameras were used in order to estimate the limits related to the camera type and to establish the minimum camera requirements to obtain results comparable to those of the TLS. The cameras used for this study range from commercial grade to professional grade and included a GoPro Hero 1080 (5 Mp), iPhone 4S (8 Mp), Panasonic Lumix LX5 (9.5 Mp), Panasonic Lumix ZS20 (14.1 Mp) and Canon EOS 7D (18 Mp). The TLS used for this work was a FARO Focus 3D laser scanner with a range accuracy of ±2 mm. The study area is a small rock wall of about 6 m height and 20 m length. The wall is partly smooth with some evident geological features, such as non-persistent joints and sharp edges. Eight control points were placed on the wall and their coordinates were measured by using a total station. These coordinates were then used to georeference all models. A similar number of images was acquired from distances of approximately 5 to 10 m, depending on the field of view of each camera. The commercial software package PhotoScan was used to process the images, georeference and scale the models, and to generate the dense point clouds. Finally, the open-source package CloudCompare was used to assess the accuracy of the multi-view results. Each point cloud obtained from a specific camera was compared to the point cloud obtained with the TLS, the latter taken as ground truth. The result is a coloured point cloud for each camera showing the deviation in relation to the TLS data. The main goal of this study is to quantify the quality of the multi-view 3D reconstruction results obtained with various cameras as objectively as possible and to evaluate its applicability to geotechnical problems.
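
    A hedged sketch of the cloud-to-cloud comparison step described above (nearest-neighbour distance of each camera-derived point to the TLS reference cloud), using SciPy's cKDTree; both point arrays are placeholders for the georeferenced clouds:

      import numpy as np
      from scipy.spatial import cKDTree

      # Placeholder point clouds: N x 3 arrays of georeferenced XYZ coordinates.
      tls_points = np.random.rand(100000, 3) * 20.0   # TLS reference ("ground truth")
      cam_points = np.random.rand(80000, 3) * 20.0    # multi-view reconstruction

      tree = cKDTree(tls_points)
      distances, _ = tree.query(cam_points, k=1)      # closest TLS point per camera point

      print("mean deviation [m]:", distances.mean())
      print("std of deviation [m]:", distances.std())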

  5. View forward from stern showing skylight with rippled glass over ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    View forward from stern showing skylight with rippled glass over compartment c-110, officer's quarters; note manually operated capstan at center, and simulated eight inch guns in sheet metal mock-up turret; also note five inch guns in sponsons port and starboard. (p37) - USS Olympia, Penn's Landing, 211 South Columbus Boulevard, Philadelphia, Philadelphia County, PA

  6. EAST TOWER, DETAIL VIEW OF CARVED IONIC PILASTER CAPITAL. Glass ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    EAST TOWER, DETAIL VIEW OF CARVED IONIC PILASTER CAPITAL. Glass plate stereopair number PA-1430-139 LC-HABS-GS05-ET-D-2 157.4684. Left (printed) - Independence Hall Complex, Independence Hall, 500 Chestnut Street, Philadelphia, Philadelphia County, PA

  7. EAST TOWER, DETAIL VIEW OF CARVED MASK. Glass plate stereopair ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    EAST TOWER, DETAIL VIEW OF CARVED MASK. Glass plate stereopair number PA-1430-139 LC-HABS-GS05-ET-D-1 157.4683. Right (not printed) - Independence Hall Complex, Independence Hall, 500 Chestnut Street, Philadelphia, Philadelphia County, PA

  8. EAST TOWER, DETAIL VIEW OF CARVED IONIC PILASTER CAPITAL. Glass ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    EAST TOWER, DETAIL VIEW OF CARVED IONIC PILASTER CAPITAL. Glass plate stereopair number PA-1430-139 LC-HABS-GS05-ET-D-2 157.4684. Right (not printed) - Independence Hall Complex, Independence Hall, 500 Chestnut Street, Philadelphia, Philadelphia County, PA

  9. EAST TOWER, DETAIL VIEW OF CARVED MASK. Glass plate stereopair ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    EAST TOWER, DETAIL VIEW OF CARVED MASK. Glass plate stereopair number PA-1430-139 LC-HABS-GS05-ET-D-1 157.4683. Left (printed) - Independence Hall Complex, Independence Hall, 500 Chestnut Street, Philadelphia, Philadelphia County, PA

  10. 19. Photocopy of photograph. VIEW OF WORKER MANIPULATING SMALL GLASS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    19. Photocopy of photograph. VIEW OF WORKER MANIPULATING SMALL GLASS OBJECTS IN THE HOT BAY WITH MANIPULATOR ARMS AT WORK STATION E-2. Photographer unknown, ca. 1969, original photograph and negative on file at the Remote Sensing Laboratory, Department of Energy, Nevada Operations Office. - Nevada Test Site, Engine Maintenance Assembly & Disassembly Facility, Area 25, Jackass Flats, Mercury, Nye County, NV

  11. 11. GENERAL VIEW IN SENATE CHAMBER, FROM WEST; PAINTED GLASS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    11. GENERAL VIEW IN SENATE CHAMBER, FROM WEST; PAINTED GLASS WINDOW BEHIND COLUMNS DEPICTS 'THE LANDING OF DE SOTO;' MURAL TO LEFT SHOWS 'THOMAS HART BENTON'S SPEECH AT ST. LOUIS 1849;' MURAL TO RIGHT SHOWS 'PRESIDENT JEFFERSON GREETING LEWIS AND CLARK' - Missouri State Capitol, High Street between Broadway & Jefferson Streets, Jefferson City, Cole County, MO

  12. fVisiOn: 360-degree viewable glasses-free tabletop 3D display composed of conical screen and modular projector arrays.

    PubMed

    Yoshida, Shunsuke

    2016-06-13

    A novel glasses-free tabletop 3D display to float virtual objects on a flat tabletop surface is proposed. This method employs circularly arranged projectors and a conical rear-projection screen that serves as an anisotropic diffuser. Its practical implementation installs them beneath a round table and produces horizontal parallax in a circumferential direction without the use of high speed or a moving apparatus. Our prototype can display full-color, 5-cm-tall 3D characters on the table. Multiple viewers can share and enjoy its real-time animation from any angle of 360 degrees with appropriate perspectives as if the animated figures were present.

  13. LatticeLibrary and BccFccRaycaster: Software for processing and viewing 3D data on optimal sampling lattices

    NASA Astrophysics Data System (ADS)

    Linnér, Elisabeth Schold; Morén, Max; Smed, Karl-Oskar; Nysjö, Johan; Strand, Robin

    In this paper, we present LatticeLibrary, a C++ library for general processing of 2D and 3D images sampled on arbitrary lattices. The current implementation supports the Cartesian Cubic (CC), Body-Centered Cubic (BCC) and Face-Centered Cubic (FCC) lattices, and is designed to facilitate addition of other sampling lattices. We also introduce BccFccRaycaster, a plugin for the existing volume renderer Voreen, making it possible to view CC, BCC and FCC data, using different interpolation methods, with the same application. The plugin supports nearest neighbor and trilinear interpolation at interactive frame rates. These tools will enable further studies of the possible advantages of non-Cartesian lattices in a wide range of research areas.
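
    As a hedged illustration of the trilinear interpolation mode mentioned above, written here for a Cartesian cubic grid (the BCC/FCC variants in BccFccRaycaster use lattice-specific neighbourhoods not reproduced in this sketch):

      import numpy as np

      def trilinear(volume, x, y, z):
          # Trilinearly interpolate a value at continuous coordinates (x, y, z)
          # inside a 3D array sampled on a Cartesian cubic lattice.
          x0, y0, z0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
          x1, y1, z1 = x0 + 1, y0 + 1, z0 + 1
          fx, fy, fz = x - x0, y - y0, z - z0
          c00 = volume[x0, y0, z0] * (1 - fx) + volume[x1, y0, z0] * fx
          c10 = volume[x0, y1, z0] * (1 - fx) + volume[x1, y1, z0] * fx
          c01 = volume[x0, y0, z1] * (1 - fx) + volume[x1, y0, z1] * fx
          c11 = volume[x0, y1, z1] * (1 - fx) + volume[x1, y1, z1] * fx
          c0 = c00 * (1 - fy) + c10 * fy
          c1 = c01 * (1 - fy) + c11 * fy
          return c0 * (1 - fz) + c1 * fz

      # Example: vol = np.random.rand(32, 32, 32); v = trilinear(vol, 10.3, 5.7, 20.1)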

  14. A 3-D view of field-scale fault-zone cementation from geologically ground-truthed electrical resistivity

    NASA Astrophysics Data System (ADS)

    Barnes, H.; Spinelli, G. A.; Mozley, P.

    2015-12-01

    Fault-zones are an important control on fluid flow, affecting groundwater supply, hydrocarbon/contaminant migration, and waste/carbon storage. However, current models of fault seal are inadequate, primarily focusing on juxtaposition and entrainment effects, despite the recognition that fault-zone cementation is common and can dramatically reduce permeability. We map the 3D cementation patterns of the variably cemented Loma Blanca fault from the land surface to ~40 m depth, using electrical resistivity and induced polarization (IP). The carbonate-cemented fault zone is a region of anomalously low normalized chargeability, relative to the surrounding host material. Zones of low-normalized chargeability immediately under the exposed cement provide the first ground-truth that a cemented fault yields an observable IP anomaly. Low-normalized chargeability extends down from the surface exposure, surrounded by zones of high-normalized chargeability, at an orientation consistent with normal faults in the region; this likely indicates cementation of the fault zone at depth, which could be confirmed by drilling and coring. Our observations are consistent with: 1) the expectation that carbonate cement in a sandstone should lower normalized chargeability by reducing pore-surface area and bridging gaps in the pore space, and 2) laboratory experiments confirming that calcite precipitation within a column of glass beads decreases polarization magnitude. The ability to characterize spatial variations in the degree of fault-zone cementation with resistivity and IP has exciting implications for improving predictive models of the hydrogeologic impacts of cementation within faults.

  15. High-speed 3-D measurement with a large field of view based on direct-view confocal microscope with an electrically tunable lens.

    PubMed

    Jeong, Hyeong-jun; Yoo, Hongki; Gweon, DaeGab

    2016-02-22

    We propose a new structure of confocal imaging system based on a direct-view confocal microscope (DVCM) with an electrically tunable lens (ETL). Since it has no mechanical moving parts to scan both the lateral (x-y) and axial (z) directions, the DVCM with an ETL allows for high-speed 3-dimensional (3-D) imaging. Axial response and signal intensity of the DVCM were analyzed theoretically according to the pinhole characteristics. The system was designed to have an isotropic spatial resolution of 20 µm in both lateral and axial direction with a large field of view (FOV) of 10 × 10 mm. The FOV was maintained according to the various focal shifts as a result of an integrated design of an objective lens with the ETL. The developed system was calibrated to have linear focal shift over a range of 9 mm with an applied current to the ETL. The system performance of 3-D volume imaging was demonstrated using standard height specimens and a dental plaster.

  16. Neural network system for 3-D object recognition and pose estimation from a single arbitrary 2-D view

    NASA Astrophysics Data System (ADS)

    Khotanzad, Alireza R.; Liou, James H.

    1992-09-01

    In this paper, a robust and fast system for recognition as well as pose estimation of a 3-D object from a single 2-D perspective of it, taken from an arbitrary viewpoint, is developed. The approach is invariant to location, orientation, and scale of the object in the perspective. The silhouette of the object in the 2-D perspective is first normalized with respect to location and scale. A set of rotation-invariant features derived from complex and orthogonal pseudo-Zernike moments of the image are then extracted. The next stage includes a bank of multilayer feed-forward neural networks (NN), each of which classifies the extracted features. The training set for these nets consists of perspective views of each object taken from several different viewing angles. The NNs in the bank differ in the number of their hidden layer nodes as well as their initial conditions, but receive the same input. The classification decisions of all the nets are combined through a majority voting scheme. It is shown that this collective decision making yields better results compared to a single NN operating alone. After the object is classified, two of its pose parameters, namely elevation and aspect angles, are estimated by another module of NNs in a two-stage process. The first stage identifies the likely region of the space that the object is being viewed from. In the second stage, an NN estimator for the identified region is used to compute the pose angles. Extensive experimental studies involving clean and noisy images of seven military ground vehicles are carried out. The performance is compared to two other traditional methods, namely a nearest neighbor rule and a binary decision tree classifier, and it is shown that our approach has major advantages over them.
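
    A minimal sketch of the majority-voting step over the bank of networks described above; each trained classifier is assumed to expose a predict method returning a class index for the same feature vector (the classifier objects and feature vector are placeholders):

      from collections import Counter

      def majority_vote(classifiers, features):
          # Combine the class decisions of several trained classifiers by majority vote.
          votes = [clf.predict(features) for clf in classifiers]
          winner, _ = Counter(votes).most_common(1)[0]
          return winner

      # Example (hypothetical): final_class = majority_vote(nn_bank, zernike_features)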

  17. Single-view volumetric PIV via high-resolution scanning, isotropic voxel restructuring and 3D least-squares matching (3D-LSM)

    NASA Astrophysics Data System (ADS)

    Brücker, C.; Hess, D.; Kitzhofer, J.

    2013-02-01

    Scanning PIV as introduced by Brücker (1995 Exp. Fluids 19 255-63, 1996a Appl. Sci. Res. 56 157-79) has been successfully applied in the last 20 years to different flow problems where the frame rate was sufficient to ensure a ‘frozen’ field condition. The limited number of parallel planes however leads typically to an under-sampling in the scan direction in depth; therefore, the spatial resolution in depth is typically considerably lower than the spatial resolution in the plane of the laser sheet (depth resolution = scan shift Δz ≫ pixel unit in object space). In addition, a partial volume averaging effect due to the thickness of the light sheet must be taken into account. Herein, the method is further developed using a high-resolution scanning in combination with a Gaussian regression technique to achieve an isotropic representation of the tracer particles in a voxel-based volume reconstruction with cuboidal voxels. This eliminates the partial volume averaging effect due to light sheet thickness and leads to comparable spatial resolution of the particle field reconstructions in x-, y- and z-axes. In addition, advantage of voxel-based processing with estimations of translation, rotation and shear/strain is taken by using a 3D least-squares matching method, well suited for reconstruction of grey-level pattern fields. The method is discussed in this paper and used to investigate the ring vortex instability at Re = 2500 within a measurement volume of roughly 75 × 75 × 50 mm3 with a spatial resolution of 100 µm/voxel (750 × 750 × 500 voxel elements). The volume has been scanned with a number of 100 light sheets and scan rates of 10 kHz. The results show the growth of the Tsai-Widnall azimuthal instabilities accompanied with a precession of the axis of the vortex ring. Prior to breakdown, secondary instabilities evolve along the core with streamwise oriented striations. The front stagnation point's streamwise distance to the core starts to decrease while

  18. Martian terrain - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    An area of rocky terrain near the landing site of the Sagan Memorial Station can be seen in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. This image is part of a 3D 'monster' panorama of the area surrounding the landing site.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

  19. 3D FEA of cemented glass fiber and cast posts with various dental cements in a maxillary central incisor.

    PubMed

    Madfa, Ahmed A; Al-Hamzi, Mohsen A; Al-Sanabani, Fadhel A; Al-Qudaimi, Nasr H; Yue, Xiao-Guang

    2015-01-01

    This study aimed to analyse and compare the stability of two dental posts cemented with four different luting agents by examining their shear stress transfer through the finite element method (FEM). Eight three-dimensional finite element models were built of a maxillary central incisor restored with glass fiber and Ni-Cr alloy cast dental posts. Each dental post was luted with zinc phosphate, Panavia resin, Super-Bond C&B resin or glass ionomer cement. Finite element models were constructed and an oblique loading of 100 N was applied. The distribution of shear stress was investigated at the posts and at the cement/dentine interfaces using ABAQUS/CAE software. The peak shear stress for the glass fiber post models was approximately three to four times lower than that for the Ni-Cr alloy cast post models. There was negligible difference in peak shear stress when the various cements were compared, irrespective of post material. The shear stress showed the same trend for all cement materials. This study found that the glass fiber dental post reduced the shear stress concentration at the post and cement/dentine interfaces compared to the Ni-Cr alloy cast dental post.

  20. Radionuclide Incorporation in Secondary Crystalline Minerals Resulting from Chemical Weathering of Selected Waste Glasses: Progress Report for Subtask 3d

    SciTech Connect

    SV Mattigod; DI Kaplan; VL LeGore; RD Orr; HT Schaef; JS Young

    1998-10-23

    Experiments were conducted in fiscal year 1998 by Pacific Northwest National Laboratory to evaluate the potential incorporation of radionuclides in secondary mineral phases that form from weathering of vitrified nuclear waste glasses. These experiments were conducted as part of the Immobilized Low-Activity Waste-Performance Assessment (ILAW-PA) to generate data on radionuclide mobilization and transport in a near-field environment of disposed vitrified wastes. An initial experiment was conducted to identify the types of secondary minerals that form from two glass samples of differing compositions, LD6 and SRL202. Chemical weathering of LD6 glass at 90°C in contact with an aliquot of uncontaminated Hanford Site groundwater resulted in the formation of a crystalline zeolitic mineral, phillipsite. In contrast, similar chemical weathering of SRL202 glass at 90°C resulted in the formation of a microcrystalline smectitic mineral, nontronite. A second experiment was conducted at 90°C to assess the degree to which key radionuclides would be sequestered in the structure of the secondary crystalline minerals, namely phillipsite and nontronite. Chemical weathering of LD6 in contact with radionuclide-spiked Hanford Site groundwater indicated that substantial fractions of the total activities were retained in the phillipsite structure. Similar chemical weathering of SRL202 at 90°C, also in contact with radionuclide-spiked Hanford Site groundwater, showed that significant fractions of the total activities were retained in the nontronite structure. These results have important implications regarding the radionuclide mobilization aspects of the ILAW-PA. Additional studies are required to confirm the results and to develop an improved understanding of the mechanisms of sequestration and attenuated release of radionuclides to help refine certain aspects of their mobilization.

  1. Electrical manipulation of biological samples in glass-based electrofluidics fabricated by 3D femtosecond laser processing

    NASA Astrophysics Data System (ADS)

    Xu, Jian; Midorikawa, Katsumi; Sugioka, Koji

    2014-03-01

    Electrical manipulation of biological samples using glass-based electrofluidics fabricated by a femtosecond laser, in which the microfluidic structures are integrated with microelectric components, is presented. Electro-orientation of motile living cells with asymmetric shapes, such as the aquatic microorganism Euglena gracilis, in microfluidic channels is demonstrated using the fabricated electrofluidics. By integrating properly designed microelectrodes into the microfluidic channels, the orientation direction of the Euglena cells can be well controlled.

  2. Fabrication of a three dimensional particle focusing microfluidic device using a 3D printer, PDMS, and glass

    NASA Astrophysics Data System (ADS)

    Collette, Robyn; Rosen, Daniel; Shirk, Kathryn

    Microfluidic devices are highly important in fields such as bioanalysis because they can manipulate volumes of fluid in the range of microliters to picoliters. Small samples can be quickly and easily tested using complex microfluidic devices. Typically, these devices are created through lithography techniques, which can be costly and time consuming. It has been shown that inexpensive microfluidic devices can be produced quickly using a 3D printer and PDMS. However, a size limitation prohibits the fabrication of precisely controlled microchannels. By using shrinking materials in combination with 3D printing of flow-focusing geometries, this limitation can be overcome. This research seeks to employ these techniques to quickly fabricate an inexpensive, working device with three-dimensional particle-focusing capabilities. By modifying the channel geometry, colloidal particles in a solution will be focused into a single beam when passed through this device. The ability to focus particles is necessary for a variety of biological applications which require precise detection and characterization of particles in a sample. We would like to thank the Shippensburg University Undergraduate Research Grant Program for their generous funding.

  3. High speed large viewing angle shutters for triple-flash active glasses

    NASA Astrophysics Data System (ADS)

    Caillaud, B.; Bellini, B.; de Bougrenet de la Tocnaye, J.-L.

    2009-02-01

    We present a new generation of liquid crystal shutters for active glasses, well suited to current trends in 3-D cinema involving triple-flash regimes. Our technology uses a composite smectic C* liquid crystal mixture. In this paper we focus on the electro-optical characterization of composite smectic-based shutters, and compare their performance with nematic ones, demonstrating their advantages for the new generation of 3-D cinema and, more generally, 3-D HDTV.

  4. Directionally controlled 3D ferroelectric single crystal growth in LaBGeO5 glass by femtosecond laser irradiation.

    PubMed

    Stone, Adam; Sakakura, Masaaki; Shimotsuma, Yasuhiko; Stone, Greg; Gupta, Pradyumna; Miura, Kiyotaka; Hirao, Kazuyuki; Dierolf, Volkmar; Jain, Himanshu

    2009-12-07

    Laser-fabrication of complex, highly oriented three-dimensional ferroelectric single crystal architecture with straight lines and bends is demonstrated in lanthanum borogermanate model glass using a high repetition rate femtosecond laser. Scanning micro-Raman microscopy shows that the c-axis of the ferroelectric crystal is aligned with the writing direction even after bending. A gradual rather than an abrupt transition is observed for the changing lattice orientation through bends up to approximately 14 degrees. Thus the single crystal character of the line is preserved along the bend through lattice straining rather than formation of a grain boundary.

  5. Mechanical and in vitro performance of apatite-wollastonite glass ceramic reinforced hydroxyapatite composite fabricated by 3D-printing.

    PubMed

    Suwanprateeb, J; Sanngam, R; Suvannapruk, W; Panyathanmaporn, T

    2009-06-01

    An in situ hydroxyapatite/apatite-wollastonite glass ceramic composite was fabricated by a three-dimensional printing (3DP) technique and characterized. It was found that the as-fabricated mean green strength of the composite was 1.27 MPa, which was sufficient for general handling. After varying sintering temperatures (1050-1300 degrees C) and times (1-10 h), it was found that sintering at 1300 degrees C for 3 h gave the greatest flexural modulus and strength, 34.10 GPa and 76.82 MPa respectively. This was associated with a decrease in porosity and an increase in the densification ability of the composite resulting from liquid phase sintering. Bioactivity tested by soaking in simulated body fluid (SBF) and in vitro toxicity studies showed that the 3DP hydroxyapatite/A-W glass ceramic composite was non-toxic and bioactive. A new calcium phosphate layer was observed on the surface of the composite after soaking in SBF for only 1 day, while osteoblast cells were able to attach and attain normal morphology on the surface of the composite.

  6. SummitView 1.0: a code to automatically generate 3D solid models of surface micro-machining based MEMS designs.

    SciTech Connect

    McBride, Cory L. (Elemental Technologies, American Fort, UT); Yarberry, Victor R.; Schmidt, Rodney Cannon; Meyers, Ray J.

    2006-11-01

    This report describes the SummitView 1.0 computer code developed at Sandia National Laboratories. SummitView is designed to generate a 3D solid model, amenable to visualization and meshing, that represents the end state of a microsystem fabrication process such as the SUMMiT (Sandia Ultra-Planar Multilevel MEMS Technology) V process. Functionally, SummitView performs essentially the same computational task as an earlier code called the 3D Geometry modeler [1]. However, because SummitView is based on 2D instead of 3D data structures and operations, it has significant speed and robustness advantages. As input it requires a definition of both the process itself and the collection of individual 2D masks created by the designer and associated with each of the process steps. The definition of the process is contained in a special process definition file [2] and the 2D masks are contained in MEM format files [3]. The code is written in C++ and consists of a set of classes and routines. The classes represent the geometric data and the SUMMiT V process steps. Classes are provided for the following process steps: Planar Deposition, Planar Etch, Conformal Deposition, Dry Etch, Wet Etch and Release Etch. SummitView is built upon the 2D Boolean library GBL-2D [4], and thus contains all of that library's functionality.
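
    The class-per-process-step architecture described above can be pictured with a simplified, hypothetical sketch (this is not the SummitView C++ API; it only illustrates the idea of a process definition being an ordered list of step objects applied to 2D mask data):

      class PlanarDeposition:
          def __init__(self, thickness):
              self.thickness = thickness

          def apply(self, layer_stack, mask=None):
              # Deposit a uniform layer of the given thickness over the whole wafer.
              layer_stack.append({"thickness": self.thickness, "mask": None})
              return layer_stack

      class DryEtch:
          def __init__(self, depth):
              self.depth = depth

          def apply(self, layer_stack, mask):
              # Record an etch of the given depth through the openings of the 2D mask.
              layer_stack.append({"thickness": -self.depth, "mask": mask})
              return layer_stack

      # A process definition is then an ordered list of (step, mask) pairs:
      # state = []
      # for step, mask in process_definition:
      #     state = step.apply(state, mask)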

  7. Characterization by combined optical and FT infrared spectra of 3d-transition metal ions doped-bismuth silicate glasses and effects of gamma irradiation.

    PubMed

    ElBatal, F H; Abdelghany, A M; ElBatal, H A

    2014-03-25

    Optical and infrared absorption spectral measurements were carried out for a binary bismuth silicate glass and other derived samples prepared with the same composition and containing an additional 0.2% of one of the 3d transition metal oxides. The same combined spectroscopic properties were also measured after subjecting the prepared glasses to a gamma dose of 8 Mrad. The experimental optical spectra reveal strong UV-near visible absorption bands in the base glass that extend to all the TM-doped samples; these strong UV-near visible absorption bands are related to the contributions of absorption from both trace iron (Fe(3+)) ions present as contaminating impurities within the raw materials and from absorption of the main constituent trivalent bismuth (Bi(3+)) ions. The strong UV-near visible absorption bands are observed to suppress any further UV bands from the TM ions. The studied glasses show obvious resistance to gamma irradiation, and only small changes are observed upon gamma irradiation. This observed shielding behavior is related to the presence of heavy Bi(3+) ions in high concentration, causing the observed stability of the optical absorption. Infrared absorption spectra of the studied glasses reveal characteristic vibrational bands due to modes from both the silicate network and the sharing of Bi-O linkages, and the presence of TMs at the doping level (0.2%) causes no distinct changes in the number or position of the vibrational modes. The presence of a high Bi2O3 content (70 mol%) appears to cause stability of the structural building units towards gamma irradiation, as revealed by the FTIR measurements.

  8. Upgrades and application of FIT3D NBI-plasma interaction code in view of LHD deuterium campaigns

    NASA Astrophysics Data System (ADS)

    Vincenzi, P.; Bolzonella, T.; Murakami, S.; Osakabe, M.; Seki, R.; Yokoyama, M.

    2016-12-01

    This work presents an upgrade of the FIT3D neutral beam-plasma interaction code, part of TASK3D, a transport suite of codes, and its application to LHD experiments in the framework of the preparation for the first deuterium experiments in the LHD. The neutral beam injector (NBI) system will be upgraded to D injection, and efforts have been recently made to extend LHD modelling capabilities to D operations. The implemented upgrades for FIT3D to enable D NBI modelling in D plasmas are presented, with a discussion and benchmark of the models used. In particular, the beam ionization module has been modified and a routine for neutron production estimation has been implemented. The upgraded code is then used to evaluate the NBI power deposition in experiments with different plasma compositions. In the recent LHD campaign, in fact, He experiments have been run to help the prediction of main effects which may be relevant in future LHD D plasmas. Identical H/He experiments showed similar electron density and temperature profiles, while a higher ion temperature with an He majority has been observed. From first applications of the upgraded FIT3D code it turns out that, although more NB power appears to be coupled with the He plasma, the NBI power deposition is unaffected, suggesting that heat deposition does not play a key role in the increased ion temperature with He plasma.

  9. TransCAIP: A Live 3D TV system using a camera array and an integral photography display with interactive control of viewing parameters.

    PubMed

    Taguchi, Yuichi; Koike, Takafumi; Takahashi, Keita; Naemura, Takeshi

    2009-01-01

    The system described in this paper provides a real-time 3D visual experience by using an array of 64 video cameras and an integral photography display with 60 viewing directions. The live 3D scene in front of the camera array is reproduced by the full-color, full-parallax autostereoscopic display with interactive control of viewing parameters. The main technical challenge is fast and flexible conversion of the data from the 64 multicamera images to the integral photography format. Based on image-based rendering techniques, our conversion method first renders 60 novel images corresponding to the viewing directions of the display, and then arranges the rendered pixels to produce an integral photography image. For real-time processing on a single PC, all the conversion processes are implemented on a GPU with GPGPU techniques. The conversion method also allows a user to interactively control viewing parameters of the displayed image for reproducing the dynamic 3D scene with desirable parameters. This control is performed as a software process, without reconfiguring the hardware system, by changing the rendering parameters such as the convergence point of the rendering cameras and the interval between the viewpoints of the rendering cameras.
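
    A hedged sketch of the final pixel-arrangement step described above: the rendered directional views are interleaved so that each lens element of the integral photography display receives one pixel from every viewing direction. The square per-lens layout and array shapes below are illustrative assumptions, not the actual display geometry with its 60 directions:

      import numpy as np

      def interleave_views(views):
          # views: (D, H, W, 3) array of D rendered directional images.
          # Returns an integral photography image of shape (H*s, W*s, 3),
          # assuming D = s*s directions laid out as an s x s block per lens.
          d, h, w, c = views.shape
          s = int(round(d ** 0.5))
          assert s * s == d, "this sketch assumes a square directional layout"
          ip = np.zeros((h * s, w * s, c), dtype=views.dtype)
          for k in range(d):
              dy, dx = divmod(k, s)
              ip[dy::s, dx::s, :] = views[k]
          return ip

      # Example: ip_image = interleave_views(rendered_views)  # rendered_views shaped (s*s, H, W, 3)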

  10. The Best of Both Worlds: 3D X-ray Microscopy with Ultra-high Resolution and a Large Field of View

    NASA Astrophysics Data System (ADS)

    Li, W.; Gelb, J.; Yang, Y.; Guan, Y.; Wu, W.; Chen, J.; Tian, Y.

    2011-09-01

    3D visualizations of complex structures within various samples have been achieved with high spatial resolution by X-ray computed nanotomography (nano-CT). However, high spatial resolution generally comes at the expense of field of view (FOV). Here we propose an approach that stitches several 3D volumes together into a single large volume to significantly increase the size of the FOV while preserving resolution. Combining this with nano-CT, an 18-μm FOV with sub-60-nm resolution has been achieved for non-destructive 3D visualization of clustered yeasts that were too large for a single scan. The approach shows high promise for imaging other large samples in the future.

  11. Prominent rocks - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Many prominent rocks near the Sagan Memorial Station are featured in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. Wedge is at lower left; Shark, Half-Dome, and Pumpkin are at center. Flat Top, about four inches high, is at lower right. The horizon in the distance is one to two kilometers away.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

  12. Development of a color 3D display visible to plural viewers at the same time without special glasses by using a ray-regenerating method

    NASA Astrophysics Data System (ADS)

    Hamagishi, Goro; Ando, Takahisa; Higashino, Masahiro; Yamashita, Atsuhiro; Mashitani, Ken; Inoue, Masutaka; Kishimoto, Shun-Ichi; Kobayashi, Tetsuro

    2002-05-01

    We have newly developed several kinds of new auto-stereoscopic 3D displays adopting a ray-regenerating method. The method was originally invented at Osaka University in 1997. We adopted this method with an LCD. The display has a very simple construction: it consists of an LC panel with a very large number of pixels and many small light sources positioned behind the LC panel. We have examined the following new technologies: 1) optimum design of the optical system; 2) a suitable construction in order to realize a very large number of pixels; 3) a highly bright back-light system with an optical fiber array to compensate for the low lighting efficiency. 3D displays having a wide viewing area and visible to plural viewers were realized, but the cross-talk images appeared more than we expected. By changing the construction of this system to reduce the diffusing factors of the generated rays, the cross-talk images were reduced dramatically. Within the limitation of the pixel count of the LCD, it is desirable to increase the number of pinholes to realize realistic 3D images. This research formed a link in the chain of the national project by NEDO (New Energy and Industrial Technology Development Organization) in Japan.

  13. A new type of stereoscopic 3D mini-projector

    NASA Astrophysics Data System (ADS)

    Su, Ping; Ma, Jianshe; Li, Yan; Chen, Dingru; Li, Yi

    2011-11-01

    Mini-projectors based on LED illumination have already become a hotspot of the projector industry, and time-sequential stereoscopic mini-projectors have been developed. However, viewers of this type of 3D projector usually suffer from dark and flickering images caused by false synchronization of images and glasses, and by the low transmittance of LC glasses. We propose a new type of polarized mini 3D projector, which employs double LED illumination engines and double LCoS panels. The optical model of the optical engine for the mini 3D projector is built based on the measured optical qualities of the key optical elements. According to the simulation analysis, the large divergence angle is the main factor affecting disparity. The first version of the prototype has been developed; it has low disparity (<5%) and provides a comfortable viewing experience. This new type of 3D mini-projector has the advantages of both conventional 3D projection technologies.

  14. An automatic registration system of multi-view 3D measurement data using two-axis turntables

    NASA Astrophysics Data System (ADS)

    He, Dong; Liu, Xiaoli; Cai, Zewei; Chen, Hailong; Peng, Xiang

    2016-09-01

    Automatic registration is a key research issue in the 3D measurement field. In this work, we developed an automatic registration system composed of a structured-light stereo system and a two-axis turntable. To realize fully automatic 3D point registration, a novel method is proposed for simultaneously calibrating the stereo system and the direction vectors of the two turntable axes. A planar calibration rig with marked points was placed on the turntable and captured by the left and right cameras of the stereo system at different rotation angles of the two-axis turntable. From the captured images, the stereo system was calibrated (intrinsically and extrinsically) with a classical camera model, and the 3D coordinates of the marked points were reconstructed at the different angles of the two turntables. Each marked point, observed at different angles, traces a specific circle, and the normal of that circle is aligned with the turntable axis direction vector. For each turntable, different points yield different circles and normals, and the turntable axis direction vector is calculated by averaging these normals. The results show that the proposed registration system can precisely register point clouds acquired under different scanning angles. In addition, no ICP iterative procedures are needed, which makes the method applicable to the registration of point clouds without obvious features such as spheres, cylinders, cones and other rotationally symmetric shapes.
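
    The axis-calibration idea described above (each marked point traces a circle whose normal is aligned with the turntable axis) can be sketched with a plane fit; a minimal illustration, assuming hypothetical reconstructed marker tracks:

      import numpy as np

      def axis_direction_from_marker_track(points):
          # points: (N, 3) positions of one marked point reconstructed at N turntable
          # angles. The points lie on a circle; the normal of the best-fit plane
          # approximates the turntable axis direction.
          centered = points - points.mean(axis=0)
          _, _, vt = np.linalg.svd(centered)   # smallest singular vector = plane normal
          normal = vt[-1]
          return normal / np.linalg.norm(normal)

      # Average the normals from several markers (after aligning their signs):
      # normals = np.array([axis_direction_from_marker_track(t) for t in marker_tracks])
      # normals *= np.sign(normals @ normals[0])[:, None]
      # axis = normals.mean(axis=0); axis /= np.linalg.norm(axis)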

  15. Investigation of 3D silvernanodendrite@glass as surface-enhanced Raman scattering substrate for the detection of Sildenafil and GSH

    NASA Astrophysics Data System (ADS)

    Lv, Meng; Gu, Huaimin; Yuan, Xiaojuan; Gao, Junxiang; Cai, Tiantian

    2012-12-01

    A solid-phase dendritic Ag nanostructure was synthesized in the presence of silk fibroin biomacromolecule and planted on glass to form a three-dimensional (3D) silvernanodendrite@glass film. When NO3-, Cl- and SO42- were added in the synthesis process of the film to study their influence on the Raman activity of this substrate, using MB as the probe molecule, it was found that the substrate prepared with Cl- gives the most intense enhancement, and two mechanisms were proposed to explain this phenomenon. Its superiority in practical applications of surface-enhanced Raman scattering (SERS) was verified by analyzing the characteristic Raman spectrum of Sildenafil between 1150 cm-1 and 1699 cm-1. In addition, the adsorption mechanism of GSH on the film via the peptide bond was analyzed. GSH interacts strongly with the silver surface via the ν(C-S) stretching mode in two different conformers. The carboxyl and the amide groups are also involved in the adsorption process. In this experiment, we synthesized, studied and applied this as-grown substrate and obtained information about its interaction with different molecular bonds and functional groups of the peptide.

  16. 3D Visualizations of Abstract DataSets

    DTIC Science & Technology

    2010-08-01

    [Software package comparison table (pricing and ease-of-use ratings) removed for brevity, see original report.] The display system provides a full, high-definition (HD) stereoscopic 3D view. It works by synching LCD wireless active shutter glasses, through an IR emitter and advanced software, to a Samsung SyncMaster 2233RZ, 120 Hz LCD display that provides the full HD stereoscopic 3D. Subjects will view the display, seated at the

  17. FACET: a radiation view factor computer code for axisymmetric, 2D planar, and 3D geometries with shadowing

    SciTech Connect

    Shapiro, A.B.

    1983-08-01

    The computer code FACET calculates the radiation geometric view factor (alternatively called shape factor, angle factor, or configuration factor) between surfaces for axisymmetric, two-dimensional planar and three-dimensional geometries with interposed third surface obstructions. FACET was developed to calculate view factors for input to finite-element heat-transfer analysis codes. The first section of this report is a brief review of previous radiation-view-factor computer codes. The second section presents the defining integral equation for the geometric view factor between two surfaces and the assumptions made in its derivation. Also in this section are the numerical algorithms used to integrate this equation for the various geometries. The third section presents the algorithms used to detect self-shadowing and third-surface shadowing between the two surfaces for which a view factor is being calculated. The fourth section provides a user's input guide followed by several example problems.
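
    For reference, the defining integral that FACET evaluates numerically is the standard geometric view factor between two surfaces of areas A_1 and A_2 (in LaTeX notation; r is the distance between the differential areas and theta_1, theta_2 are the angles between r and the surface normals; shadowing by an interposed third surface multiplies the integrand by a 0/1 visibility term):

      F_{1 \to 2} = \frac{1}{A_1} \int_{A_1} \int_{A_2} \frac{\cos\theta_1 \, \cos\theta_2}{\pi r^2} \, \mathrm{d}A_2 \, \mathrm{d}A_1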

  18. Radiometric and geometric evaluation of GeoEye-1, WorldView-2 and Pléiades-1A stereo images for 3D information extraction

    NASA Astrophysics Data System (ADS)

    Poli, D.; Remondino, F.; Angiuli, E.; Agugiaro, G.

    2015-02-01

    Today the use of spaceborne Very High Resolution (VHR) optical sensors for automatic 3D information extraction is increasing in the scientific and civil communities. The 3D Optical Metrology (3DOM) unit of the Bruno Kessler Foundation (FBK) in Trento (Italy) has collected VHR satellite imagery, as well as aerial and terrestrial data, over Trento for creating a complete testfield for investigations on image radiometry, geometric accuracy, automatic digital surface model (DSM) generation, 2D/3D feature extraction, city modelling and data fusion. This paper addresses the radiometric and geometric aspects of the VHR spaceborne imagery included in the Trento testfield and their potential for 3D information extraction. The dataset consists of two stereo-pairs acquired by WorldView-2 and by GeoEye-1 in panchromatic and multispectral mode, and a triplet from Pléiades-1A. For reference and validation, a DSM from an airborne LiDAR acquisition is used. The paper gives details on the project, dataset characteristics and achieved results.

  19. Gypsies in the palace: Experimentalist's view on the use of 3-D physics-based simulation of hillslope hydrological response

    USGS Publications Warehouse

    James, A.L.; McDonnell, Jeffery J.; Tromp-Van Meerveld, I.; Peters, N.E.

    2010-01-01

    As a fundamental unit of the landscape, hillslopes are studied for their retention and release of water and nutrients across a wide range of ecosystems. The understanding of these near-surface processes is relevant to issues of runoff generation, groundwater-surface water interactions, catchment export of nutrients, dissolved organic carbon, contaminants (e.g. mercury) and ultimately surface water health. We develop a 3-D physics-based representation of the Panola Mountain Research Watershed experimental hillslope using the TOUGH2 sub-surface flow and transport simulator. A recent investigation of sub-surface flow within this experimental hillslope has generated important knowledge of threshold rainfall-runoff response and its relation to patterns of transient water table development. This work has identified components of the 3-D sub-surface, such as bedrock topography, that contribute to changing connectivity in saturated zones and the generation of sub-surface stormflow. Here, we test the ability of a 3-D hillslope model (both calibrated and uncalibrated) to simulate forested hillslope rainfall-runoff response and internal transient sub-surface stormflow dynamics. We also provide a transparent illustration of physics-based model development, issues of parameterization, examples of model rejection and usefulness of data types (e.g. runoff, mean soil moisture and transient water table depth) to the model enterprise. Our simulations show the inability of an uncalibrated model based on laboratory and field characterization of soil properties and topography to successfully simulate the integrated hydrological response or the distributed water table within the soil profile. Although not an uncommon result, the failure of the field-based characterized model to represent system behaviour is an important challenge that continues to vex scientists at many scales. We focus our attention particularly on examining the influence of bedrock permeability, soil anisotropy and

  20. On the Use of Uavs in Mining and Archaeology - Geo-Accurate 3d Reconstructions Using Various Platforms and Terrestrial Views

    NASA Astrophysics Data System (ADS)

    Tscharf, A.; Rumpler, M.; Fraundorfer, F.; Mayer, G.; Bischof, H.

    2015-08-01

    During the last decades photogrammetric computer vision systems have been well established in scientific and commercial applications. Especially the increasing affordability of unmanned aerial vehicles (UAVs) in conjunction with automated multi-view processing pipelines have resulted in an easy way of acquiring spatial data and creating realistic and accurate 3D models. With the use of multicopter UAVs, it is possible to record highly overlapping images from almost terrestrial camera positions to oblique and nadir aerial images due to the ability to navigate slowly, hover and capture images at nearly any possible position. Multi-copter UAVs thus are bridging the gap between terrestrial and traditional aerial image acquisition and are therefore ideally suited to enable easy and safe data collection and inspection tasks in complex or hazardous environments. In this paper we present a fully automated processing pipeline for precise, metric and geo-accurate 3D reconstructions of complex geometries using various imaging platforms. Our workflow allows for georeferencing of UAV imagery based on GPS-measurements of camera stations from an on-board GPS receiver as well as tie and control point information. Ground control points (GCPs) are integrated directly in the bundle adjustment to refine the georegistration and correct for systematic distortions of the image block. We discuss our approach based on three different case studies for applications in mining and archaeology and present several accuracy related analyses investigating georegistration, camera network configuration and ground sampling distance. Our approach is furthermore suited for seamlessly matching and integrating images from different view points and cameras (aerial and terrestrial as well as inside views) into one single reconstruction. Together with aerial images from a UAV, we are able to enrich 3D models by combining terrestrial images as well inside views of an object by joint image processing to

  1. Adipose- and bone marrow-derived mesenchymal stem cells display different osteogenic differentiation patterns in 3D bioactive glass-based scaffolds.

    PubMed

    Rath, Subha N; Nooeaid, Patcharakamon; Arkudas, Andreas; Beier, Justus P; Strobel, Leonie A; Brandl, Andreas; Roether, Judith A; Horch, Raymund E; Boccaccini, Aldo R; Kneser, Ulrich

    2016-10-01

    Mesenchymal stem cells can be isolated from a variety of different sources, each having their own peculiar merits and drawbacks. Although a number of studies have been conducted comparing these stem cells for their osteo-differentiation ability, these are mostly done in culture plastics. We have selected stem cells from either adipose tissue (ADSCs) or bone marrow (BMSCs) and studied their differentiation ability in highly porous three-dimensional (3D) 45S5 Bioglass®-based scaffolds. Equal numbers of cells were seeded onto 5 × 5 × 4 mm³ scaffolds and cultured in vitro, with or without osteo-induction medium. After 2 and 4 weeks, the cell-scaffold constructs were analysed for cell number, cell spreading, viability, alkaline phosphatase activity and osteogenic gene expression. The scaffolds with ADSCs displayed osteo-differentiation even without osteo-induction medium; however, with osteo-induction medium osteogenic differentiation was further increased. In contrast, the scaffolds with BMSCs showed no osteo-differentiation without osteo-induction medium; after application of osteo-induction medium, osteo-differentiation was confirmed, although lower than in scaffolds with ADSCs. In general, stem cells in 3D bioactive glass scaffolds differentiated better than cells in culture plastics with respect to their ALP content and osteogenic gene expression. In summary, 45S5 Bioglass-based scaffolds seeded with ADSCs are well-suited for possible bone tissue-engineering applications. Induction of osteogenic differentiation appears unnecessary prior to implantation in this specific setting. Copyright © 2013 John Wiley & Sons, Ltd.

  2. A View through Medicare's Looking Glass: Notices You Might Have Missed.

    PubMed

    Schaum, Kathleen D

    2012-10-01

    CMS has made great strides in supplying current coding, payment, and coverage information. Wound care providers must pick up the looking glass to view this important information that is released on a regular basis. If you have not subscribed to this valuable information, now is the time to act. The looking glass is there if you will only reach out for it.

  3. Martian terrain & airbags - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Portions of the lander's deflated airbags and a petal are at lower left in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. This image is part of a 3D 'monster' panorama of the area surrounding the landing site.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

    Click below to see the left and right views individually. [figure removed for brevity, see original site] Left [figure removed for brevity, see original site] Right

  4. Martian terrain & airbags - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Portions of the lander's deflated airbags and a petal are at the lower area of this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. This image is part of a 3D 'monster' panorama of the area surrounding the landing site.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

    Click below to see the left and right views individually. [figure removed for brevity, see original site] Left [figure removed for brevity, see original site] Right

  5. Wavelet-Based 3D Reconstruction of Microcalcification Clusters from Two Mammographic Views: New Evidence That Fractal Tumors Are Malignant and Euclidean Tumors Are Benign

    PubMed Central

    Batchelder, Kendra A.; Tanenbaum, Aaron B.; Albert, Seth; Guimond, Lyne; Kestener, Pierre; Arneodo, Alain; Khalil, Andre

    2014-01-01

    The 2D Wavelet-Transform Modulus Maxima (WTMM) method was used to detect microcalcifications (MC) in human breast tissue seen in mammograms and to characterize the fractal geometry of benign and malignant MC clusters. This was done in the context of a preliminary analysis of a small dataset, via a novel way to partition the wavelet-transform space-scale skeleton. For the first time, the estimated 3D fractal structure of a breast lesion was inferred by pairing the information from two separate 2D projected mammographic views of the same breast, i.e. the cranial-caudal (CC) and mediolateral-oblique (MLO) views. As a novelty, we define the “CC-MLO fractal dimension plot”, where a “fractal zone” and “Euclidean zones” (non-fractal) are defined. 118 images (59 cases, 25 malignant and 34 benign) obtained from a digital databank of mammograms with known radiologist diagnostics were analyzed to determine which cases would be plotted in the fractal zone and which cases would fall in the Euclidean zones. 92% of malignant breast lesions studied (23 out of 25 cases) were in the fractal zone while 88% of the benign lesions were in the Euclidean zones (30 out of 34 cases). Furthermore, a Bayesian statistical analysis shows that, with 95% credibility, the probability that fractal breast lesions are malignant is between 74% and 98%. Alternatively, with 95% credibility, the probability that Euclidean breast lesions are benign is between 76% and 96%. These results support the notion that the fractal structure of malignant tumors is more likely to be associated with an invasive behavior into the surrounding tissue compared to the less invasive, Euclidean structure of benign tumors. Finally, based on indirect 3D reconstructions from the 2D views, we conjecture that all breast tumors considered in this study, benign and malignant, fractal or Euclidean, restrict their growth to 2-dimensional manifolds within the breast tissue. PMID:25222610
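
    The abstract reports 95% credible intervals for the probability that a fractal-zone lesion is malignant (74-98%) and that a Euclidean-zone lesion is benign (76-96%). The paper's exact Bayesian model is not given here; the sketch below shows how such intervals can be obtained from a Beta posterior under a uniform prior, using the case counts quoted in the abstract. It is an approximate reconstruction and will not exactly reproduce the published numbers if the authors used a different prior or model.

```python
from scipy.stats import beta

def credible_interval(successes, failures, level=0.95):
    """Equal-tailed credible interval for a Bernoulli probability,
    assuming a Beta(1, 1) (uniform) prior -> Beta(successes + 1, failures + 1) posterior."""
    return beta.interval(level, successes + 1, failures + 1)

# Counts taken from the abstract: the fractal zone held 23 malignant and 4 benign lesions,
# the Euclidean zones held 30 benign and 2 malignant lesions.
lo, hi = credible_interval(successes=23, failures=4)
print(f"P(malignant | fractal zone): {lo:.2f} - {hi:.2f}")

lo, hi = credible_interval(successes=30, failures=2)
print(f"P(benign | Euclidean zone): {lo:.2f} - {hi:.2f}")
```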

  6. Wavelet-based 3D reconstruction of microcalcification clusters from two mammographic views: new evidence that fractal tumors are malignant and Euclidean tumors are benign.

    PubMed

    Batchelder, Kendra A; Tanenbaum, Aaron B; Albert, Seth; Guimond, Lyne; Kestener, Pierre; Arneodo, Alain; Khalil, Andre

    2014-01-01

    The 2D Wavelet-Transform Modulus Maxima (WTMM) method was used to detect microcalcifications (MC) in human breast tissue seen in mammograms and to characterize the fractal geometry of benign and malignant MC clusters. This was done in the context of a preliminary analysis of a small dataset, via a novel way to partition the wavelet-transform space-scale skeleton. For the first time, the estimated 3D fractal structure of a breast lesion was inferred by pairing the information from two separate 2D projected mammographic views of the same breast, i.e. the cranial-caudal (CC) and mediolateral-oblique (MLO) views. As a novelty, we define the "CC-MLO fractal dimension plot", where a "fractal zone" and "Euclidean zones" (non-fractal) are defined. 118 images (59 cases, 25 malignant and 34 benign) obtained from a digital databank of mammograms with known radiologist diagnostics were analyzed to determine which cases would be plotted in the fractal zone and which cases would fall in the Euclidean zones. 92% of malignant breast lesions studied (23 out of 25 cases) were in the fractal zone while 88% of the benign lesions were in the Euclidean zones (30 out of 34 cases). Furthermore, a Bayesian statistical analysis shows that, with 95% credibility, the probability that fractal breast lesions are malignant is between 74% and 98%. Alternatively, with 95% credibility, the probability that Euclidean breast lesions are benign is between 76% and 96%. These results support the notion that the fractal structure of malignant tumors is more likely to be associated with an invasive behavior into the surrounding tissue compared to the less invasive, Euclidean structure of benign tumors. Finally, based on indirect 3D reconstructions from the 2D views, we conjecture that all breast tumors considered in this study, benign and malignant, fractal or Euclidean, restrict their growth to 2-dimensional manifolds within the breast tissue.

  7. The New Realm of 3-D Vision

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Dimension Technologies Inc. developed a line of 2-D/3-D Liquid Crystal Display (LCD) screens, including a 15-inch model priced at consumer levels. DTI's family of flat panel LCD displays, called the Virtual Window(TM), provide real-time 3-D images without the use of glasses, head trackers, helmets, or other viewing aids. Most of the company's initial 3-D display research was funded through NASA's Small Business Innovation Research (SBIR) program. The images on DTI's displays appear to leap off the screen and hang in space. The display accepts input from computers or stereo video sources, and can be switched from 3-D to full-resolution 2-D viewing with the push of a button. The Virtual Window displays have applications in data visualization, medicine, architecture, business, real estate, entertainment, and other research, design, military, and consumer applications. Displays are currently used for computer games, protein analysis, and surgical imaging. The technology greatly benefits the medical field, as surgical simulators are helping to increase the skills of surgical residents. Virtual Window(TM) is a trademark of Dimension Technologies Inc.

  8. Sojourner near Barnacle Bill - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    At right, Sojourner has traveled off the lander's rear ramp and onto the surface of Mars. 3D glasses are necessary to identify surface detail. The rock Barnacle Bill and the rear ramp are to the left of Sojourner.

    The image was taken by the Imager for Mars Pathfinder (IMP) on Sol 3. The IMP is a stereo imaging system with color capability provided by 24 selectable filters -- twelve filters per 'eye'.

    Click below to see the left and right views individually. [figure removed for brevity, see original site] Left [figure removed for brevity, see original site] Right

  9. 6. Building E9; view of glass lines for dilute liquor ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    6. Building E-9; view of glass lines for dilute liquor and spent acid; second floor, looking ESE. Bottom of wash tank is at the top of the view. (Ryan and Harms) - Holston Army Ammunition Plant, RDX-and-Composition-B Manufacturing Line 9, Kingsport, Sullivan County, TN

  20. The effect of activity outside the field-of-view on image signal-to-noise ratio for 3D PET with ¹⁵O

    NASA Astrophysics Data System (ADS)

    Ibaraki, Masanobu; Sugawara, Shigeki; Nakamura, Kazuhiro; Kinoshita, Fumiko; Kinoshita, Toshibumi

    2011-05-01

    Activity outside the field-of-view (FOV) degrades the count rate performance of 3D PET and consequently reduces signal-to-noise ratios (SNRs) of reconstructed images. The aim of this study was to evaluate a neck-shield installed in a 3D PET scanner for reducing the effect of the outside FOV activity. Specifically, we compared brain PET scans (¹⁵O₂ and H₂¹⁵O) with and without the use of the neck-shield. Image SNRs were directly estimated by a sinogram bootstrap method. The bootstrap analysis showed that the use of the neck-shield improved the SNR by 8% and 19% for H₂¹⁵O and ¹⁵O₂, respectively. The SNR improvements were predominantly due to the reduction of the random count rates. Noise equivalent count rate (NECR) analysis provided SNR estimates that were very similar to the bootstrap-based results for H₂¹⁵O, but not for ¹⁵O₂. This discrepancy may be due to the fundamental difference between the two methods: the bootstrap method directly calculates the local SNR of reconstructed images, whereas the NECR calculation is based on the whole-gantry count rates, indicating a limitation of the conventional NECR-based method as a tool for assessing the image SNR. Although quantitative parameters, e.g. cerebral blood flow, did not differ when examined with and without the neck-shield, the use of the shield for brain ¹⁵O studies is recommended in terms of the image SNR.
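
    The abstract contrasts bootstrap-based SNR estimates with the conventional NECR figure. As a back-of-envelope illustration (not the authors' bootstrap method), the snippet below applies the standard noise-equivalent count rate formula to hypothetical true, scattered and random coincidence rates, and shows how a reduction in the randoms rate raises the NECR-implied SNR, which is commonly taken to scale as the square root of NEC. All count rates are invented.

```python
import math

def necr(trues, scatter, randoms, k=1.0):
    """Noise-equivalent count rate, T^2 / (T + S + k*R).
    k = 1 assumes a noiseless (smoothed) randoms estimate; k = 2 for direct subtraction."""
    return trues ** 2 / (trues + scatter + k * randoms)

# Hypothetical count rates (kcps); a neck-shield mainly lowers the randoms rate.
without_shield = necr(trues=80.0, scatter=40.0, randoms=120.0)
with_shield = necr(trues=80.0, scatter=40.0, randoms=70.0)

# Image SNR is often approximated as proportional to sqrt(NEC) for a fixed scan time.
improvement = math.sqrt(with_shield / without_shield) - 1.0
print(f"NECR without / with shield: {without_shield:.1f} / {with_shield:.1f} kcps")
print(f"Predicted SNR improvement: {improvement * 100:.0f}%")
```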

  11. 3D radiative transfer effects in multi-angle/multispectral radio-polarimetric signals from a mixture of clouds and aerosols viewed by a non-imaging sensor

    NASA Astrophysics Data System (ADS)

    Davis, Anthony B.; Garay, Michael J.; Xu, Feng; Qu, Zheng; Emde, Claudia

    2013-09-01

    When observing a spatially complex mix of aerosols and clouds in a single relatively large field-of-view, nature entangles their signals non-linearly through polarized radiation transport processes that unfold in the 3D position and direction spaces. In contrast, any practical forward model in a retrieval algorithm will use only 1D vector radiative transfer (vRT) in a linear mixing technique. We assess the difference between the observed and predicted signals using synthetic data from a high-fidelity 3D vRT model with clouds generated using a Large Eddy Simulation model and an aerosol climatology. We find that this difference is signal—not noise—for the Aerosol Polarimetry Sensor (APS), an instrument developed by NASA. Moreover, the worst case scenario is also the most interesting case, namely, when the aerosol burden is large, hence has the most impact on the cloud microphysics and dynamics. Based on our findings, we formulate a mitigation strategy for these unresolved cloud adjacency effects assuming that some spatial information is available about the structure of the clouds at higher resolution from "context" cameras, as was planned for NASA's ill-fated Glory mission that was to carry the APS but failed to reach orbit. Application to POLDER (POLarization and Directionality of Earth Reflectances) data from the period when PARASOL (Polarization and Anisotropy of Reflectances for Atmospheric Sciences coupled with Observations from a Lidar) was in the A-train is briefly discussed.
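
    The "linear mixing" forward model mentioned above combines separate 1D vRT results for clear and cloudy columns. The sketch below illustrates only that weighting step for a single Stokes vector in one viewing direction; the 1D radiances are placeholders, since in practice they would come from a vector radiative transfer code, and the cloud fraction is an arbitrary example value.

```python
import numpy as np

def linear_mixing(stokes_clear, stokes_cloudy, cloud_fraction):
    """Independent-pixel linear mixing of clear-sky and cloudy 1D-RT Stokes vectors.
    This deliberately ignores the 3D adjacency effects the 3D vRT simulations quantify."""
    f = float(cloud_fraction)
    return (1.0 - f) * np.asarray(stokes_clear) + f * np.asarray(stokes_cloudy)

# Placeholder Stokes vectors (I, Q, U) for one view angle and wavelength.
clear = np.array([0.12, 0.015, 0.002])
cloudy = np.array([0.45, 0.004, 0.000])
print(linear_mixing(clear, cloudy, cloud_fraction=0.3))
```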

  12. Color Flat Panel Displays: 3D Autostereoscopic Brassboard and Field Sequential Illumination Technology.

    DTIC Science & Technology

    1997-06-01

    DTI has advanced autostereoscopic and field sequential color (FSC) illumination technologies for flat panel displays. Using a patented backlight...technology, DTI has developed a prototype 3D flat panel color display that provides stereoscopic viewing without the need for special glasses or other... autostereoscopic viewing. Discussions of system architecture, critical component specifications and resultant display characteristics are provided. Also

  13. Increasing the effective aperture of a detector and enlarging the receiving field of view in a 3D imaging lidar system through hexagonal prism beam splitting.

    PubMed

    Lee, Xiaobao; Wang, Xiaoyi; Cui, Tianxiang; Wang, Chunhui; Li, Yunxi; Li, Hailong; Wang, Qi

    2016-07-11

    The detector in a highly accurate and high-definition scanning 3D imaging lidar system requires high frequency bandwidth and sufficient photosensitive area. To solve the problem of the small photosensitive area of an existing indium gallium arsenide detector with a certain frequency bandwidth, this study proposes a method for increasing the receiving field of view (FOV) and enlarging the effective photosensitive aperture of such a detector through hexagonal prism beam splitting. The principle and construction of hexagonal prism beam splitting are also discussed in this research. Accordingly, a receiving optical system with two hexagonal prisms is provided and the beam-splitting effect of the simulation experiment is analyzed. Using this novel method, the receiving optical system's FOV can be improved effectively up to ±5°, and the effective photosensitive aperture of the detector is increased from 0.5 mm to 1.5 mm.

  14. TU-C-BRE-04: 3D Gel Dosimetry Using ViewRay On-Board MR Scanner: A Feasibility Study

    SciTech Connect

    Zhang, L; Du, D; Green, O; Rodriguez, V; Wooten, H; Xiao, Z; Yang, D; Hu, Y; Li, H

    2014-06-15

    Purpose: MR-based 3D gel has been proposed for radiation therapy dosimetry. However, access to an MR scanner has been one of the limiting factors for its wide acceptance. Recent commercialization of an on-board MR-IGRT device (ViewRay) may render the availability issue less of a concern. This work reports our attempts to simulate MR-based dose measurement accuracy on ViewRay using three different gels. Methods: A spherical BANG gel dosimeter was purchased from MGS Research. Cylindrical MAGIC gel and Fricke gel were fabricated in-house according to published recipes. After irradiation, BANG and MAGIC were imaged using a dual-echo spin echo sequence for T2 measurement on a Philips 1.5 T MR scanner, while Fricke gel was imaged using multiple spin echo sequences. The difference between MR-measured and TPS-calculated dose was defined as noise. The noise power spectrum was calculated and then simulated for the 0.35 T magnetic field associated with ViewRay. The estimated noise was then added to TG-119 test cases to simulate measured dose distributions. Simulated measurements were evaluated against TPS-calculated doses using gamma analysis. Results: Given the same gel, sequence and coil setup, with a FOV of 180×90×90 mm³, resolution of 3×3×3 mm³, and scanning time of 30 minutes, the simulated measured dose distribution using BANG would have a gamma passing rate greater than 90% (3%/3 mm and absolute). With a FOV of 180×90×90 mm³, resolution of 4×4×5 mm³, and scanning time of 45 minutes, the simulated measured dose distribution would have a gamma passing rate greater than 97%. MAGIC exhibited similar performance while Fricke gel was inferior due to much higher noise. Conclusions: The simulation results demonstrated that it may be feasible to use MAGIC and BANG gels for 3D dose verification using the ViewRay low-field on-board MRI scanner.
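
    Agreement between measured and calculated dose is summarized above as a gamma passing rate with a 3%/3 mm criterion. A minimal 1D version of the gamma-index computation (the standard Low et al. formulation, written here for illustration rather than as the authors' analysis code, with a toy dose profile) is sketched below.

```python
import numpy as np

def gamma_1d(positions_mm, dose_ref, dose_eval, dose_tol=0.03, dta_mm=3.0):
    """1D global gamma index: for each reference point, minimise the combined
    dose-difference / distance-to-agreement metric over all evaluated points."""
    positions_mm = np.asarray(positions_mm, float)
    dose_ref = np.asarray(dose_ref, float)
    dose_eval = np.asarray(dose_eval, float)
    d_max = dose_ref.max()                       # global normalisation
    gamma = np.empty_like(dose_ref)
    for i, (x_r, d_r) in enumerate(zip(positions_mm, dose_ref)):
        dist2 = ((positions_mm - x_r) / dta_mm) ** 2
        dose2 = ((dose_eval - d_r) / (dose_tol * d_max)) ** 2
        gamma[i] = np.sqrt(np.min(dist2 + dose2))
    return gamma

x = np.arange(0, 50, 1.0)                        # positions in mm
ref = np.exp(-((x - 25) / 10) ** 2)              # toy reference dose profile
meas = 1.02 * np.exp(-((x - 25.5) / 10) ** 2)    # slightly shifted / scaled "measurement"
g = gamma_1d(x, ref, meas)
print(f"gamma passing rate: {100 * np.mean(g <= 1.0):.1f}%")
```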

  15. 3D photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Carson, Jeffrey J. L.; Roumeliotis, Michael; Chaudhary, Govind; Stodilka, Robert Z.; Anastasio, Mark A.

    2010-06-01

    Our group has concentrated on development of a 3D photoacoustic imaging system for biomedical imaging research. The technology employs a sparse parallel detection scheme and specialized reconstruction software to obtain 3D optical images using a single laser pulse. With the technology we have been able to capture 3D movies of translating point targets and rotating line targets. The current limitation of our 3D photoacoustic imaging approach is its inability to reconstruct complex objects in the field of view. This is primarily due to the relatively small number of projections used to reconstruct objects. However, in many photoacoustic imaging situations, only a few objects may be present in the field of view and these objects may have very high contrast compared to background. That is, the objects have sparse properties. Therefore, our work had two objectives: (i) to utilize mathematical tools to evaluate 3D photoacoustic imaging performance, and (ii) to test image reconstruction algorithms that prefer sparseness in the reconstructed images. Our approach was to utilize singular value decomposition techniques to study the imaging operator of the system and evaluate the complexity of objects that could potentially be reconstructed. We also compared the performance of two image reconstruction algorithms (algebraic reconstruction and l1-norm techniques) at reconstructing objects of increasing sparseness. We observed that for a 15-element detection scheme, the number of measurable singular vectors representative of the imaging operator was consistent with the demonstrated ability to reconstruct point and line targets in the field of view. We also observed that the l1-norm reconstruction technique, which is known to prefer sparseness in reconstructed images, was superior to the algebraic reconstruction technique. Based on these findings, we concluded (i) that singular value decomposition of the imaging operator provides valuable insight into the capabilities of
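
    The evaluation described above rests on a singular value decomposition of the system's imaging operator and a count of the measurable singular vectors. A toy version of that analysis, with a random matrix standing in for the real photoacoustic imaging operator and an arbitrary noise-floor threshold, might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in imaging operator: 15 detector elements x 64 time samples (rows)
# mapping a coarse 10x10x10 voxel grid (columns). A real operator would come
# from the system's acoustic forward model, not from random numbers.
H = rng.standard_normal((15 * 64, 10 * 10 * 10))

s = np.linalg.svd(H, compute_uv=False)
noise_floor = 1e-2 * s[0]                 # illustrative threshold, not a measured one
measurable = int(np.sum(s > noise_floor))
print(f"singular values above threshold: {measurable} of {s.size}")
```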

  16. Microscopic View of Accelerated Dynamics in Deformed Polymer Glasses

    NASA Astrophysics Data System (ADS)

    Warren, Mya; Rottler, Jörg

    2010-05-01

    A molecular level analysis of segmental trajectories obtained from molecular dynamics simulations is used to obtain the full relaxation time spectrum in aging polymer glasses subject to three different deformation protocols. As in experiments, dynamics can be accelerated by several orders of magnitude, and a narrowing of the distribution of relaxation times during creep is directly observed. Additionally, the acceleration factor describing the transformation of the relaxation time distributions is computed and found to obey a universal dependence on the strain, independent of age and deformation protocol.

  17. VPython: Python plus Animations in Stereo 3D

    NASA Astrophysics Data System (ADS)

    Sherwood, Bruce

    2004-03-01

    Python is a modern object-oriented programming language. VPython (http://vpython.org) is a combination of Python (http://python.org), the Numeric module from LLNL (http://www.pfdubois.com/numpy), and the Visual module created by David Scherer, all of which have been under continuous development as open source projects. VPython makes it easy to write programs that generate real-time, navigable 3D animations. The Visual module includes a set of 3D objects (sphere, cylinder, arrow, etc.), tools for creating other shapes, and support for vector algebra. The 3D renderer runs in a parallel thread, and animations are produced as a side effect of computations, freeing the programmer to concentrate on the physics. Applications include educational and research visualization. In the Fall of 2003 Hugh Fisher at the Australian National University, John Zelle at Wartburg College, and I contributed to a new stereo capability of VPython. By adding a single statement to an existing VPython program, animations can be viewed in true stereo 3D. One can choose several modes: active shutter glasses, passive polarized glasses, or colored glasses (e.g. red-cyan). The talk will demonstrate the new stereo capability and discuss the pros and cons of various schemes for display of stereo 3D for a large audience. Supported in part by NSF grant DUE-0237132.
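
    In classic VPython the stereo mode is indeed controlled by a single attribute on the display, as the abstract describes. The sketch below is based on the classic (pre-GlowScript) Visual module; the exact option strings and import style may vary between versions, so treat it as an illustration rather than a definitive recipe.

```python
# Classic VPython (the 'visual' module); GlowScript / VPython 7 uses a different API.
from visual import sphere, vector, rate, scene

scene.stereo = 'redcyan'   # the single added statement: anaglyph stereo for colored glasses
# other modes (version dependent): 'crosseyed', 'passive', 'active', 'nostereo'

ball = sphere(pos=vector(-5, 0, 0), radius=0.5, color=(1, 1, 1))
velocity = vector(2, 0, 0)
dt = 0.01
while ball.pos.x < 5:
    rate(100)                         # limit the animation to ~100 updates per second
    ball.pos = ball.pos + velocity * dt
```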

  18. 3d visualization of atomistic simulations on every desktop

    NASA Astrophysics Data System (ADS)

    Peled, Dan; Silverman, Amihai; Adler, Joan

    2013-08-01

    Once upon a time, after making simulations, one had to go to a visualization center with fancy SGI machines to run a GL visualization and make a movie. More recently, OpenGL and its mesa clone have let us create 3D on simple desktops (or laptops), whether or not a Z-buffer card is present. Today, 3D a la Avatar is a commodity technique, presented in cinemas and sold for home TV. However, only a few special research centers have systems large enough for entire classes to view 3D, or special immersive facilities like visualization CAVEs or walls, and not everyone finds 3D immersion easy to view. For maximum physics with minimum effort a 3D system must come to each researcher and student. So how do we create 3D visualization cheaply on every desktop for atomistic simulations? After several months of attempts to select commodity equipment for a whole room system, we selected an approach that goes back a long time, even predating GL. The old concept of anaglyphic stereo relies on two images, slightly displaced, and viewed through colored glasses, or two squares of cellophane from a regular screen/projector or poster. We have added this capability to our AViz atomistic visualization code in its new, 6.1 version, which is RedHat, CentOS and Ubuntu compatible. Examples using data from our own research and that of other groups will be given.
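
    The anaglyph principle described above, two slightly displaced images viewed through colored filters, can be reproduced with a few lines of NumPy independently of AViz itself. Assuming a left/right stereo pair of equal size is available as image files (the file names below are placeholders), a red-cyan composite could be built as follows:

```python
import numpy as np
from PIL import Image

def redcyan_anaglyph(left_path, right_path):
    """Take the red channel from the left view and green/blue from the right view.
    Assumes both frames have identical dimensions."""
    left = np.asarray(Image.open(left_path).convert("RGB"))
    right = np.asarray(Image.open(right_path).convert("RGB"))
    out = right.copy()
    out[..., 0] = left[..., 0]          # replace the red channel with the left eye's
    return Image.fromarray(out)

# Placeholder file names; any slightly displaced stereo pair will do.
redcyan_anaglyph("frame_left.png", "frame_right.png").save("frame_anaglyph.png")
```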

  19. Progresses in 3D integral imaging with optical processing

    NASA Astrophysics Data System (ADS)

    Martínez-Corral, Manuel; Martínez-Cuenca, Raúl; Saavedra, Genaro; Navarro, Héctor; Pons, Amparo; Javidi, Bahram

    2008-11-01

    Integral imaging is a promising technique for the acquisition and auto-stereoscopic display of 3D scenes with full parallax and without the need for any additional devices like special glasses. First suggested by Lippmann at the beginning of the 20th century, integral imaging is based on the intersection of ray cones emitted by a collection of 2D elemental images which store the 3D information of the scene. This paper is devoted to the study, from the ray-optics point of view, of the optical effects and interaction with the observer of integral imaging systems.

  20. `We put on the glasses and Moon comes closer!' Urban Second Graders Exploring the Earth, the Sun and Moon Through 3D Technologies in a Science and Literacy Unit

    NASA Astrophysics Data System (ADS)

    Isik-Ercan, Zeynep; Zeynep Inan, Hatice; Nowak, Jeffrey A.; Kim, Beomjin

    2014-01-01

    This qualitative case study describes (a) the ways 3D visualization, coupled with other science and literacy experiences, supported young children's first exploration of the Earth-Sun-Moon system and (b) the perspectives of classroom teachers and children on using 3D visualization. We created three interactive 3D software modules that simulate day and night, Moon phases and seasons. These modules were used in a science and literacy unit for 35 second graders at an urban elementary school in Midwestern USA. Data included pre- and post-interviews, audio-taped lessons and classroom observations. Post-interviews demonstrated that children's knowledge of the shapes and the movements of the Earth and Moon, alternation of day and night, the occurrence of the seasons, and Moon's changing appearance increased. Second graders reported that they enjoyed expanding their knowledge through hands-on experiences; through its reality effect, 3D visualization enabled them to observe the space objects that move in the virtual space. The teachers noted that 3D visualization stimulated children's interest in space and that using 3D visualization in combination with other teaching methods-literacy experiences, videos and photos, simulations, discussions, and presentations-supported student learning. The teachers and the students still experienced challenges using 3D visualization due to technical problems with 3D vision and time constraints. We conclude that 3D visualization offers hands-on experiences for challenging science concepts and may support young children's ability to view phenomena that would typically be observed through direct, long-term observations in outer space. Results imply a reconsideration of assumed capabilities of young children to understand astronomical phenomena.

  1. The World of 3-D.

    ERIC Educational Resources Information Center

    Mayshark, Robin K.

    1991-01-01

    Students explore three-dimensional properties by creating red and green wall decorations related to Christmas. Students examine why images seem to vibrate when red and green pieces are small and close together. Instructions to conduct the activity and construct 3-D glasses are given. (MDH)

  2. Teaching Geography with 3-D Visualization Technology

    ERIC Educational Resources Information Center

    Anthamatten, Peter; Ziegler, Susy S.

    2006-01-01

    Technology that helps students view images in three dimensions (3-D) can support a broad range of learning styles. "Geo-Wall systems" are visualization tools that allow scientists, teachers, and students to project stereographic images and view them in 3-D. We developed and presented 3-D visualization exercises in several undergraduate courses.…

  3. Volumetric 3D Display System with Static Screen

    NASA Technical Reports Server (NTRS)

    Geng, Jason

    2011-01-01

    Current display technology has relied on flat, 2D screens that cannot truly convey the third dimension of visual information: depth. In contrast to conventional visualization that is primarily based on 2D flat screens, the volumetric 3D display possesses a true 3D display volume, and physically places each 3D voxel of the displayed 3D image at its true 3D (x,y,z) spatial position. Each voxel, analogous to a pixel in a 2D image, emits light from that position to form a real 3D image in the eyes of the viewers. Such true volumetric 3D display technology provides both physiological (accommodation, convergence, binocular disparity, and motion parallax) and psychological (image size, linear perspective, shading, brightness, etc.) depth cues to human visual systems to help in the perception of 3D objects. In a volumetric 3D display, viewers can watch the displayed 3D images from a complete 360° view without using any special eyewear. The volumetric 3D display techniques may lead to a quantum leap in information display technology and can dramatically change the ways humans interact with computers, which can lead to significant improvements in the efficiency of learning and knowledge management processes. Within a block of glass, a large number of tiny voxel dots is created by using a recently available machining technique called laser subsurface engraving (LSE). The LSE is able to produce tiny physical crack points (as small as 0.05 mm in diameter) at any (x,y,z) location within the cube of transparent material. The crack dots, when illuminated by a light source, scatter the light around and form visible voxels within the 3D volume. The locations of these tiny voxels are strategically determined such that each can be illuminated by a light ray from a high-resolution digital mirror device (DMD) light engine. The distribution of these voxels occupies the full display volume within the static 3D glass screen. This design eliminates any moving screen seen in previous

  4. A comparison of two Stokes ice sheet models applied to the Marine Ice Sheet Model Intercomparison Project for plan view models (MISMIP3d)

    NASA Astrophysics Data System (ADS)

    Zhang, Tong; Price, Stephen; Ju, Lili; Leng, Wei; Brondex, Julien; Durand, Gaël; Gagliardini, Olivier

    2017-01-01

    We present a comparison of the numerics and simulation results for two "full" Stokes ice sheet models, FELIX-S (Leng et al. 2012) and Elmer/Ice (Gagliardini et al. 2013). The models are applied to the Marine Ice Sheet Model Intercomparison Project for plan view models (MISMIP3d). For the diagnostic experiment (P75D) the two models give similar results ( < 2 % difference with respect to along-flow velocities) when using identical geometries and computational meshes, which we interpret as an indication of inherent consistencies and similarities between the two models. For the standard (Stnd), P75S, and P75R prognostic experiments, we find that FELIX-S (Elmer/Ice) grounding lines are relatively more retreated (advanced), results that are consistent with minor differences observed in the diagnostic experiment results and that we show to be due to different choices in the implementation of basal boundary conditions in the two models. While we are not able to argue for the relative favorability of either implementation, we do show that these differences decrease with increasing horizontal (i.e., both along- and across-flow) grid resolution and that grounding-line positions for FELIX-S and Elmer/Ice converge to within the estimated truncation error for Elmer/Ice. Stokes model solutions are often treated as an accuracy metric in model intercomparison experiments, but computational cost may not always allow for the use of model resolution within the regime of asymptotic convergence. In this case, we propose that an alternative estimate for the uncertainty in the grounding-line position is the span of grounding-line positions predicted by multiple Stokes models.

  5. Integrated 3D view of postmating responses by the Drosophila melanogaster female reproductive tract, obtained by micro-computed tomography scanning.

    PubMed

    Mattei, Alexandra L; Riccio, Mark L; Avila, Frank W; Wolfner, Mariana F

    2015-07-07

    Physiological changes in females during and after mating are triggered by seminal fluid components in conjunction with female-derived molecules. In insects, these changes include increased egg production, storage of sperm, and changes in muscle contraction within the reproductive tract (RT). Such postmating changes have been studied in dissected RT tissues, but understanding their coordination in vivo requires a holistic view of the tissues and their interrelationships. Here, we used high-resolution, multiscale micro-computed tomography (CT) scans to visualize and measure postmating changes in situ in the Drosophila female RT before, during, and after mating. These studies reveal previously unidentified dynamic changes in the conformation of the female RT that occur after mating. Our results also reveal how the reproductive organs temporally shift in concert within the confines of the abdomen. For example, we observed chiral loops in the uterus and in the upper common oviduct that relax and constrict throughout sperm storage and egg movement. We found that specific seminal fluid proteins or female secretions mediate some of the postmating changes in morphology. The morphological movements, in turn, can cause further changes due to the connections among organs. In addition, we observed apparent copulatory damage to the female intima, suggesting a mechanism for entry of seminal proteins, or other exogenous components, into the female's circulatory system. The 3D reconstructions provided by high-resolution micro-CT scans reveal how male and female molecules and anatomy interface to carry out and coordinate mating-dependent changes in the female's reproductive physiology.

  6. 3D laser-written silica glass step-index high-contrast waveguides for the 3.5 μm mid-infrared range.

    PubMed

    Martínez, Javier; Ródenas, Airán; Fernandez, Toney; Vázquez de Aldana, Javier R; Thomson, Robert R; Aguiló, Magdalena; Kar, Ajoy K; Solis, Javier; Díaz, Francesc

    2015-12-15

    We report on the direct laser fabrication of step-index waveguides in fused silica substrates for operation in the 3.5 μm mid-infrared wavelength range. We demonstrate core-cladding index contrasts of 0.7% at 3.39 μm and propagation losses of 1.3 (6.5) dB/cm at 3.39 (3.68) μm, close to the intrinsic losses of the glass. We also report on the existence of three different laser modified SiO₂ glass volumes, their different micro-Raman spectra, and their different temperature-dependent populations of color centers, tentatively clarifying the SiO₂ lattice changes that are related to the large index changes.

  7. The effect of activity outside the field of view on image quality for a 3D LSO-based whole body PET/CT scanner.

    PubMed

    Matheoud, R; Secco, C; Della Monica, P; Leva, L; Sacchetti, G; Inglese, E; Brambilla, M

    2009-10-07

    The purpose of this study was to quantify the influence of outside field of view (FOV) activity concentration (A_c,out) on the noise equivalent count rate (NECR), scatter fraction (SF) and image quality of a 3D LSO whole-body PET/CT scanner. The contrast-to-noise ratio (CNR) was the figure of merit used to characterize the image quality of PET scans. A modified International Electrotechnical Commission (IEC) phantom was used to obtain SF and counting rates similar to those found in average patients. A scatter phantom was positioned at the end of the modified IEC phantom to simulate an activity that extends beyond the scanner. The modified IEC phantom was filled with ¹⁸F (11 kBq mL⁻¹) and the spherical targets, with internal diameter (ID) ranging from 10 to 37 mm, had a target-to-background ratio of 10. PET images were acquired with background activity concentrations in the FOV (A_c,bkg) of about 11, 9.2, 6.6, 5.2 and 3.5 kBq mL⁻¹. The emission scan duration (ESD) was set to 1, 2, 3 and 4 min. The tube inside the scatter phantom was filled with activities to provide A_c,out in the whole scatter phantom of zero, half, unity, twofold and fourfold that of the modified IEC phantom. Plots of CNR versus the various parameters are provided. Multiple linear regression was employed to study the effects of A_c,out on CNR, adjusted for the presence of variables (sphere ID, A_c,bkg and ESD) related to CNR. The presence of outside FOV activity at the same concentration as the one inside the FOV reduces peak NECR by 30%. The increase in SF is marginal (1.2%). CNR diminishes significantly with increasing outside FOV activity, in the range explored. ESD and A_c,out have a similar weight in accounting for CNR variance. Thus, an experimental law that adjusts the scan duration to the outside FOV activity can be devised. Recovery of CNR loss due to elevated A_c,out seems feasible by modulating the ESD in individual bed positions according to A_c,out.
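
    CNR is the figure of merit above, but its exact definition is not reproduced in the abstract. The snippet below therefore uses a common contrast-to-noise formulation (sphere-minus-background mean over background standard deviation) purely to illustrate how such a metric is extracted from ROI statistics; the ROI values are synthetic.

```python
import numpy as np

def contrast_to_noise(sphere_voxels, background_voxels):
    """Common CNR definition: (mean_sphere - mean_bkg) / std_bkg.
    The study may use a different normalisation; this is illustrative only."""
    sphere = np.asarray(sphere_voxels, float)
    bkg = np.asarray(background_voxels, float)
    return (sphere.mean() - bkg.mean()) / bkg.std(ddof=1)

# Hypothetical ROI values (kBq/mL) for one sphere and its surrounding background.
rng = np.random.default_rng(1)
sphere_roi = rng.normal(loc=9.0, scale=1.2, size=200)
background_roi = rng.normal(loc=3.5, scale=1.0, size=2000)
print(f"CNR = {contrast_to_noise(sphere_roi, background_roi):.1f}")
```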

  8. Lunar and Planetary Science XXXV: Viewing the Lunar Interior Through Titanium-Colored Glasses

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The session"Viewing the Lunar Interior Through Titanium-Colored Glasses" included the following reports:Consequences of High Crystallinity for the Evolution of the Lunar Magma Ocean: Trapped Plagioclase; Low Abundances of Highly Siderophile Elements in the Lunar Mantle: Evidence for Prolonged Late Accretion; Fast Anorthite Dissolution Rates in Lunar Picritic Melts: Petrologic Implications; Searching the Moon for Aluminous Mare Basalts Using Compositional Remote-Sensing Constraints II: Detailed analysis of ROIs; Origin of Lunar High Titanium Ultramafic Glasses: A Hybridized Source?; Ilmenite Solubility in Lunar Basalts as a Function of Temperature and Pressure: Implications for Petrogenesis; Garnet in the Lunar Mantle: Further Evidence from Volcanic Glasses; Preliminary High Pressure Phase Relations of Apollo 15 Green C Glass: Assessment of the Role of Garnet; Oxygen Fugacity of Mare Basalts and the Lunar Mantle. Application of a New Microscale Oxybarometer Based on the Valence State of Vanadium; A Model for the Origin of the Dark Ring at Orientale Basin; Petrology and Geochemistry of LAP 02 205: A New Low-Ti Mare-Basalt Meteorite; Thorium and Samarium in Lunar Pyroclastic Glasses: Insights into the Composition of the Lunar Mantle and Basaltic Magmatism on the Moon; and Eu2+ and REE3+ Diffusion in Enstatite, Diopside, Anorthite, and a Silicate Melt: A Database for Understanding Kinetic Fractionation of REE in the Lunar Mantle and Crust.

  9. The effects of 3D bioactive glass scaffolds and BMP-2 on bone formation in rat femoral critical size defects and adjacent bones.

    PubMed

    Liu, Wai-Ching; Robu, Irina S; Patel, Rikin; Leu, Ming C; Velez, Mariano; Chu, Tien-Min Gabriel

    2014-08-01

    Reconstruction of critical size defects in the load-bearing area has long been a challenge in orthopaedics. In the past, we have demonstrated the feasibility of using a biodegradable load-sharing scaffold fabricated from poly(propylene fumarate)/tricalcium phosphate (PPF/TCP) loaded with bone morphogenetic protein-2 (BMP-2) to successfully induce healing in those defects. However, there is limited osteoconduction observed with the PPF/TCP scaffold itself. For this reason, 13-93 bioactive glass scaffolds with local BMP-2 delivery were investigated in this study for inducing segmental defect repairs in a load-bearing region. Furthermore, a recent review on BMP-2 revealed greater risks of radiculitis, ectopic bone formation, osteolysis and poor global outcome in association with the use of BMP-2 for spinal fusion. We also evaluated the potential side effects of locally delivered BMP-2 on the structures of adjacent bones. Therefore, cylindrical 13-93 glass scaffolds were fabricated by indirect selective laser sintering with side holes on the cylinder filled with dicalcium phosphate dihydrate as a BMP-2 carrier. The scaffolds were implanted into critical size defects created in rat femurs with and without 10 μg of BMP-2. The X-ray and micro-CT results showed that a bridging callus was found as early as three weeks and progressed gradually in the BMP group, while minimal bone formation was observed in the control group. Degradation of the scaffolds was noted in both groups. Stiffness, peak load and energy to break of the BMP group were all higher than those of the control group. There was no statistical difference in bone mineral density, bone area and bone mineral content in the tibiae and contralateral femurs of the control and BMP groups. In conclusion, a 13-93 bioactive glass scaffold with local BMP-2 delivery has been demonstrated for its potential application in treating large bone defects.

  10. Toward mobile 3D visualization for structural biologists.

    PubMed

    Tanramluk, Duangrudee; Akavipat, Ruj; Charoensawan, Varodom

    2013-12-01

    Technological advances in crystallography have led to the ever-rapidly increasing number of biomolecular structures deposited in public repositories. This undoubtedly shifts the bottleneck of structural biology research from obtaining high-quality structures to data analysis and interpretation. The recently available glasses-free autostereoscopic laptop offers an unprecedented opportunity to visualize and study 3D structures using a much more affordable, and for the first time, portable device. Together with a gamepad re-programmed for 3D structure control, we describe how the gaming technologies can deliver the output 3D images for high-quality viewing, comparable to that of a passive stereoscopic system, and can give the user more control and flexibility than the conventional controlling setup using only a mouse and a keyboard.

  11. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into
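
    The abstract lists the quantities stored per grid point in a PLOT3D solution file: density, the three momentum components and stagnation energy, from which derived functions such as pressure are computed. A hedged sketch of that derivation for an ideal gas (assuming gamma = 1.4 and ignoring PLOT3D's own nondimensionalisation conventions and file formats) is:

```python
import numpy as np

def pressure_from_q(rho, rho_u, rho_v, rho_w, rho_e0, gamma=1.4):
    """Static pressure from conserved variables:
    p = (gamma - 1) * (rho*e0 - 0.5 * rho * |u|^2).
    gamma = 1.4 is an assumption (air); actual PLOT3D scaling is not reproduced here."""
    kinetic = 0.5 * (rho_u ** 2 + rho_v ** 2 + rho_w ** 2) / rho
    return (gamma - 1.0) * (rho_e0 - kinetic)

# One made-up grid point (nondimensional values, illustrative only).
print(pressure_from_q(rho=1.0, rho_u=0.5, rho_v=0.0, rho_w=0.0, rho_e0=2.0))
```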

  12. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  13. A cross-platform solution for light field based 3D telemedicine.

    PubMed

    Wang, Gengkun; Xiang, Wei; Pickering, Mark

    2016-03-01

    Current telehealth services are dominated by conventional 2D video conferencing systems, which are limited in their capabilities in providing a satisfactory communication experience due to the lack of realism. The "immersiveness" provided by 3D technologies has the potential to promote telehealth services to a wider range of applications. However, conventional stereoscopic 3D technologies are deficient in many aspects, including low resolution and the requirement for complicated multi-camera setup and calibration, and special glasses. The advent of light field (LF) photography enables us to record light rays in a single shot and provide glasses-free 3D display with continuous motion parallax in a wide viewing zone, which is ideally suited for 3D telehealth applications. As far as our literature review suggests, there have been no reports of 3D telemedicine systems using LF technology. In this paper, we propose a cross-platform solution for a LF-based 3D telemedicine system. Firstly, a novel system architecture based on LF technology is established, which is able to capture the LF of a patient, and provide an immersive 3D display at the doctor site. For 3D modeling, we further propose an algorithm which is able to convert the captured LF to a 3D model with a high level of detail. For the software implementation on different platforms (i.e., desktop, web-based and mobile phone platforms), a cross-platform solution is proposed. Demo applications have been developed for 2D/3D video conferencing, 3D model display and edit, blood pressure and heart rate monitoring, and patient data viewing functions. The demo software can be extended to multi-discipline telehealth applications, such as tele-dentistry, tele-wound and tele-psychiatry. The proposed 3D telemedicine solution has the potential to revolutionize next-generation telemedicine technologies by providing a high quality immersive tele-consultation experience.

  14. 3D recovery of human gaze in natural environments

    NASA Astrophysics Data System (ADS)

    Paletta, Lucas; Santner, Katrin; Fritz, Gerald; Mayer, Heinz

    2013-01-01

    The estimation of human attention has recently been addressed in the context of human-robot interaction. Today, joint work spaces already exist and challenge cooperating systems to jointly focus on common objects, scenes and work niches. With the advent of Google Glass and increasingly affordable wearable eye-tracking, monitoring of human attention will soon become ubiquitous. The presented work describes for the first time a method for the estimation of human fixations in 3D environments that does not require any artificial landmarks in the field of view and enables attention mapping in 3D models. It enables full 3D recovery of the human view frustum and the gaze pointer in a previously acquired 3D model of the environment in real time. The study on the precision of this method reports a mean projection error ≈1.1 cm and a mean angle error ≈0.6° within the chosen 3D model; the precision cannot go below that of the technical instrument (≈1°). This innovative methodology will open new opportunities for joint attention studies as well as for bringing new potential into automated processing for human factors technologies.

  15. Euro3D Science Conference

    NASA Astrophysics Data System (ADS)

    Walsh, J. R.

    2004-02-01

    The Euro3D RTN is an EU funded Research Training Network to foster the exploitation of 3D spectroscopy in Europe. 3D spectroscopy is a general term for spectroscopy of an area of the sky and derives its name from its two spatial + one spectral dimensions. There are an increasing number of instruments which use integral field devices to achieve spectroscopy of an area of the sky, either using lens arrays, optical fibres or image slicers, to pack spectra of multiple pixels on the sky ("spaxels") onto a 2D detector. On account of the large volume of data and the special methods required to reduce and analyse 3D data, there are only a few centres of expertise and these are mostly involved with instrument developments. There is a perceived lack of expertise in 3D spectroscopy spread though the astronomical community and its use in the armoury of the observational astronomer is viewed as being highly specialised. For precisely this reason the Euro3D RTN was proposed to train young researchers in this area and develop user tools to widen the experience with this particular type of data in Europe. The Euro3D RTN is coordinated by Martin M. Roth (Astrophysikalisches Institut Potsdam) and has been running since July 2002. The first Euro3D science conference was held in Cambridge, UK from 22 to 23 May 2003. The main emphasis of the conference was, in keeping with the RTN, to expose the work of the young post-docs who are funded by the RTN. In addition the team members from the eleven European institutes involved in Euro3D also presented instrumental and observational developments. The conference was organized by Andy Bunker and held at the Institute of Astronomy. There were over thirty participants and 26 talks covered the whole range of application of 3D techniques. The science ranged from Galactic planetary nebulae and globular clusters to kinematics of nearby galaxies out to objects at high redshift. Several talks were devoted to reporting recent observations with newly

  16. Analyzing the 3D Structure of Human Carbonic Anhydrase II and Its Mutants Using Deep View and the Protein Data Bank

    ERIC Educational Resources Information Center

    Ship, Noam J.; Zamble, Deborah B.

    2005-01-01

    The self-directed study of a 3D image of a biomolecule stresses the complex nature of the intra- and intermolecular interactions that come together to define its structure. This is made up of a series of in vitro experiments with wild-type and mutant forms of human carbonic anhydrase II (hCAII) that examine the structure-function relationship…

  17. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

    The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is realising this for the moment in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We have currently a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrate 3D objects in physical or virtual museums. Nevertheless, Europeana has a tremendous potential as a multi-facetted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video, sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  18. Laser Based 3D Volumetric Display System

    DTIC Science & Technology

    1993-03-01

    Laser-generated 3D volumetric images are formed on a rotating double helix; the 3D displays are computer controlled for group viewing with the naked eye. (Authors: P. Soltan, J. Trias, W. Robinson, W. Dahlke. Related reference: "A Real Time Autostereoscopic Multiplanar 3D Display System", Rodney Don Williams and Felix Garcia, Jr.)

  19. MTF characterization in 2D and 3D for a high resolution, large field of view flat panel imager for cone beam CT

    NASA Astrophysics Data System (ADS)

    Shah, Jainil; Mann, Steve D.; Tornai, Martin P.; Richmond, Michelle; Zentai, George

    2014-03-01

    The 2D and 3D modulation transfer functions (MTFs) of a custom-made, large 40 × 30 cm² area, 600-micron CsI-TFT-based flat panel imager having 127-micron pixellation, along with the micro-fiber scintillator structure, were characterized in detail using various techniques. The larger area detector yields a reconstructed FOV of 25 cm diameter with an 80 cm SID in CT mode. The MTFs were determined with 1 × 1 (intrinsic) binning. The 2D MTFs were determined using a 50.8-micron tungsten wire and a solid lead edge, and the 3D MTF was measured using a custom-made phantom consisting of three nearly orthogonal 50.8-micron tungsten wires suspended in an acrylic cubic frame. The 2D projection data was reconstructed using an iterative OSC algorithm using 16 subsets and 5 iterations. As additional verification of the resolution, along with scatter, the Catphan® phantom was also imaged and reconstructed with identical parameters. The measured 2D MTF was ~4% using the wire technique and ~1% using the edge technique at the 3.94 lp/mm Nyquist cut-off frequency. The average 3D MTF measured along the wires was ~8% at the Nyquist frequency. At 50% MTF, the resolutions were 1.2 and 2.1 lp/mm in 2D and 3D, respectively. In the Catphan® phantom, the 1.7 lp/mm bars were easily observed. Lastly, the 3D MTF measured on the three wires has an observed 5.9% RMSD, indicating that the resolution of the imaging system is uniform and spatially independent. This high-performance detector is integrated into a dedicated breast SPECT-CT imaging system.
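
    The 2D MTF above was measured with wire and edge techniques. The edge-based route (edge spread function, then line spread function, then Fourier transform) is standard; a compact, illustrative version for a synthetic 1D edge profile sampled at the detector's 127 µm pitch is shown below, and should not be read as the authors' exact processing chain.

```python
import numpy as np

def mtf_from_edge(edge_profile, pixel_pitch_mm=0.127):
    """Edge spread function -> line spread function (derivative) -> |FFT|, normalised at DC."""
    esf = np.asarray(edge_profile, float)
    lsf = np.gradient(esf)
    lsf = lsf * np.hanning(lsf.size)             # mild window to tame noise at the tails
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)   # cycles/mm, i.e. lp/mm
    return freqs, mtf

# Synthetic, slightly blurred edge across 128 pixels (for demonstration only).
x = np.arange(128)
esf = 1.0 / (1.0 + np.exp(-(x - 64) / 1.5))
freqs, mtf = mtf_from_edge(esf)
nyquist = 1.0 / (2 * 0.127)                      # ~3.94 lp/mm
print(f"MTF at Nyquist ({nyquist:.2f} lp/mm): {np.interp(nyquist, freqs, mtf):.3f}")
```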

  20. Stereo 3-D Vision in Teaching Physics

    ERIC Educational Resources Information Center

    Zabunov, Svetoslav

    2012-01-01

    Stereo 3-D vision is a technology used to present images on a flat surface (screen, paper, etc.) and at the same time to create the notion of three-dimensional spatial perception of the viewed scene. A great number of physical processes are much better understood when viewed in stereo 3-D vision compared to standard flat 2-D presentation. The…

  1. Monitoring buried remains with a transparent 3D half bird's eye view of ground penetrating radar data in the Zeynel Bey tomb in the ancient city of Hasankeyf, Turkey

    NASA Astrophysics Data System (ADS)

    Kadioglu, Selma; Kagan Kadioglu, Yusuf; Akin Akyol, Ali

    2011-09-01

    The aim of this paper is to present a new monitoring approach for ground penetrating radar (GPR) data. The method was used to define buried archaeological remains inside and outside the Zeynel Bey tomb in Hasankeyf, an ancient city in south-eastern Turkey. The study examined whether the proposed GPR method could yield useful results at this highly restricted site, which has a maximum diameter inside the tomb of 4 m. A transparent three-dimensional (3D) half bird's eye view was constructed from a processed parallel-aligned two-dimensional GPR profile data set by using an opaque approximation instead of linear opacity. Interactive visualizations of transparent 3D sub-data volumes were conducted. The amplitude-colour scale was balanced by the amplitude range of the buried remains within a depth range, and a different opacity value was assigned to this range in order to distinguish the buried remains from one another. Therefore, the maximum amplitude values of the amplitude-colour scale were rearranged with the same colour range. This process clearly revealed buried remains in depth slices and transparent 3D data volumes. However, the transparent 3D half bird's eye views of the GPR data revealed the remains better than the depth slices of the same data. In addition, the results showed that the half bird's eye perspective was important in order to image the buried remains. Two rectangular walls were defined in the basement structure of the Zeynel Bey tomb, one within and the other perpendicular to it, and a cemetery aligned in the east-west direction was identified at the north side of the tomb. The transparent 3D half bird's eye view of the GPR data set also revealed the buried walls outside the tomb. The findings of the excavation works at the Zeynel Bey tomb successfully overlapped with the new visualization results.

  2. 3D visualization of polymer nanostructure

    SciTech Connect

    Werner, James H

    2009-01-01

    Soft materials and structured polymers are extremely useful nanotechnology building blocks. Block copolymers, in particular, have served as 2D masks for nanolithography and 3D scaffolds for photonic crystals, nanoparticle fabrication, and solar cells. For many of these applications, the precise three-dimensional structure and the number and type of defects in the polymer is important for ultimate function. However, directly visualizing the 3D structure of a soft material from the nanometer to millimeter length scales is a significant technical challenge. Here, we propose to develop the instrumentation needed for direct 3D structure determination at near-nanometer resolution throughout a nearly millimeter-cubed volume of a soft, potentially heterogeneous, material. This new capability will be a valuable research tool for LANL missions in chemistry, materials science, and nanoscience. Our approach to soft materials visualization builds upon exciting developments in super-resolution optical microscopy that have occurred over the past two years. To date, these new, truly revolutionary, imaging methods have been developed and almost exclusively used for biological applications. However, in addition to biological cells, these super-resolution imaging techniques hold extreme promise for direct visualization of many important nanostructured polymers and other heterogeneous chemical systems. Los Alamos has a unique opportunity to lead the development of these super-resolution imaging methods for problems of chemical rather than biological significance. While these optical methods are limited to systems transparent to visible wavelengths, we stress that many important functional chemicals such as polymers, glasses, sol-gels, aerogels, or colloidal assemblies meet this requirement, with specific examples including materials designed for optical communication, manipulation, or light-harvesting. Our research goals are: (1) develop the instrumentation necessary for imaging materials

  3. Multi-photon lithography of 3D micro-structures in As2S3 and Ge5(As2Se3)95 chalcogenide glasses

    NASA Astrophysics Data System (ADS)

    Schwarz, Casey M.; Labh, Shreya; Barker, Jayk E.; Sapia, Ryan J.; Richardson, Gerald D.; Rivero-Baleine, Clara; Gleason, Benn; Richardson, Kathleen A.; Pogrebnyakov, Alexej; Mayer, Theresa S.; Kuebler, Stephen M.

    2016-03-01

    This work reports a detailed study of the processing and photo-patterning of two chalcogenide glasses (ChGs) - arsenic trisulfide (As2S3) and a new composition of germanium-doped arsenic triselenide Ge5(As2Se3)95 - as well as their use for creating functional optical structures. ChGs are materials with excellent infrared (IR) transparency, large index of refraction, low coefficient of thermal expansion, and low change in refractive index with temperature. These features make them well suited for a wide range of commercial and industrial applications including detectors, sensors, photonics, and acousto-optics. Photo-patternable films of As2S3 and Ge5(As2Se3)95 were prepared by thermally depositing the ChGs onto silicon substrates. For some As2S3 samples, an anti-reflection layer of arsenic triselenide (As2Se3) was first added to mitigate the effects of standing-wave interference during laser patterning. The ChG films were photo-patterned by multi-photon lithography (MPL) and then chemically etched to remove the unexposed material, leaving free-standing structures that were negative-tone replicas of the photo-pattern in networked-solid ChG. The chemical composition and refractive index of the unexposed and photo-exposed materials were examined using Raman spectroscopy and near-IR ellipsometry. Nano-structured arrays were photo-patterned and the resulting nano-structure morphology and chemical composition were characterized and correlated with the film compositions, conditions of thermal deposition, patterned irradiation, and etch processing. Photo-patterned Ge5(As2Se3)95 was found to be more resistant than As2S3 toward degradation by formation of surface oxides.

  4. Accurate registration of random radiographic projections based on three spherical references for the purpose of few-view 3D reconstruction

    SciTech Connect

    Schulze, Ralf; Heil, Ulrich; Weinheimer, Oliver; Gross, Daniel; Bruellmann, Dan; Thomas, Eric; Schwanecke, Ulrich; Schoemer, Elmar

    2008-02-15

    Precise registration of radiographic projection images acquired in almost arbitrary geometries for the purpose of three-dimensional (3D) reconstruction is beset with difficulties. We modify and enhance a registration method [R. Schulze, D. D. Bruellmann, F. Roeder, and B. d'Hoedt, Med. Phys. 31, 2849-2854 (2004)] based on coupling a minimum of three reference spheres in arbitrary positions to a rigid object under study for precise a posteriori pose estimation. Two consecutive optimization procedures (a, initial guess; b, iterative coordinate refinement) are applied to completely exploit the reference's shadow information for precise registration of the projections. The modification has been extensive, i.e., only the idea of using the sphere shadows to locate each sphere in three dimensions from each projection was retained whereas the approach to extract the shadow information has been changed completely and extended. The registration information is used for subsequent algebraic reconstruction of the 3D information inherent in the projections. We present a detailed mathematical theory of the registration process as well as simulated data investigating its performance in the presence of error. Simulation of the initial guess revealed a mean relative error in the critical depth coordinate ranging between 2.1% and 4.4%, and an evident error reduction by the subsequent iterative coordinate refinement. To prove the applicability of the method for real-world data, algebraic 3D reconstructions from few (≤9) projection radiographs of a human skull, a human mandible and a teeth-containing mandible segment are presented. The method facilitates extraction of 3D information from only few projections obtained from off-the-shelf radiographic projection units without the need for costly hardware. Technical requirements as well as radiation dose are low.

  5. Speaking Volumes About 3-D

    NASA Technical Reports Server (NTRS)

    2002-01-01

    In 1999, Genex submitted a proposal to Stennis Space Center for a volumetric 3-D display technique that would provide multiple users with a 360-degree perspective to simultaneously view and analyze 3-D data. The futuristic capabilities of the VolumeViewer(R) have offered tremendous benefits to commercial users in the fields of medicine and surgery, air traffic control, pilot training and education, computer-aided design/computer-aided manufacturing, and military/battlefield management. The technology has also helped NASA to better analyze and assess the various data collected by its satellite and spacecraft sensors. Genex capitalized on its success with Stennis by introducing two separate products to the commercial market that incorporate key elements of the 3-D display technology designed under an SBIR contract. The company's Rainbow 3D(R) imaging camera is a novel, three-dimensional surface profile measurement system that can obtain a full-frame 3-D image in less than 1 second. The other product is the 360-degree OmniEye(R) video system. Ideal for intrusion detection, surveillance, and situation management, this unique camera system offers a continuous, panoramic view of a scene in real time.

  6. 3d-3d correspondence revisited

    DOE PAGES

    Chung, Hee -Joong; Dimofte, Tudor; Gukov, Sergei; ...

    2016-04-21

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d N = 2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. As a result, we also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  7. Odyssey over Martian Sunrise, 3-D

    NASA Technical Reports Server (NTRS)

    2003-01-01

    NASA's Mars Odyssey spacecraft passes above a portion of the planet that is rotating into the sunlight in this artist's concept illustration. This red-blue anaglyph artwork can be viewed in 3-D on your computer monitor or in color print form by wearing red-blue (cyan) 3-D glasses.

    The spacecraft has been orbiting Mars since October 24, 2001.

    NASA's Jet Propulsion Laboratory manages the Mars Odyssey mission for the NASA Office of Space Science, Washington, D.C. Investigators at Arizona State University in Tempe, the University of Arizona in Tucson, and NASA's Johnson Space Center, Houston, operate the science instruments. The gamma-ray spectrometer was provided by the University of Arizona in collaboration with the Russian Aviation and Space Agency and Institute for Space Research, which provided the high-energy neutron detector, and the Los Alamos National Laboratories, New Mexico, which provided the neutron spectrometer. Lockheed Martin Space Systems, Denver, is the prime contractor for the project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.

  8. Terrain and rock 'Yogi' - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The left portion of this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3, shows the large rock nicknamed 'Yogi.' 3D glasses are necessary to identify surface detail. Portions of a petal and deflated airbag are in the foreground. Yogi has been an object of study for rover Sojourner's Alpha Proton X-Ray Spectrometer (APXS) instrument. The APXS will help Pathfinder scientists learn more about the chemical composition of that rock. This image is part of a 3D 'monster' panorama of the area surrounding the landing site.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

    Click below to see the left and right views individually. [figure removed for brevity, see original site] Left [figure removed for brevity, see original site] Right

  9. Real-time auto-stereoscopic visualization of 3D medical images

    NASA Astrophysics Data System (ADS)

    Portoni, Luisa; Patak, Alexandre; Noirard, Pierre; Grossetie, Jean-Claude; van Berkel, Cees

    2000-04-01

    The work described here regards multi-viewer auto-stereoscopic visualization of 3D models of anatomical structures and organs of the human body. High-quality 3D models of more than 1600 anatomical structures have been reconstructed using the Visualization Toolkit, a freely available C++ class library for 3D graphics and visualization. The 2D images used for 3D reconstruction come from the Visible Human Data Set. Auto-stereoscopic 3D image visualization is obtained using a prototype monitor developed at Philips Research Labs, UK. This special multiview 3D-LCD screen has been connected directly to an SGI workstation, where 3D reconstruction and medical imaging applications are executed. Dedicated software has been developed to implement the multiview capability. A number of static or animated simultaneous views of the same object can be seen on the 3D-LCD screen by several observers at once, giving a real 3D perception of the visualized scene without the use of extra media such as dedicated glasses or head-mounted displays. The developed software applications allow real-time interaction with the visualized 3D models; didactical animations and movies have been realized as well.
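
    As a loose illustration of the kind of workflow the abstract describes, the sketch below (Python bindings of the Visualization Toolkit) reconstructs an iso-surface from a stack of 2D slices with marching cubes and renders it interactively. The file name, iso-value, and overall pipeline are assumptions for illustration, not the authors' actual code.

        import vtk

        # read a volume assembled from 2D slices (hypothetical file name)
        reader = vtk.vtkMetaImageReader()
        reader.SetFileName("visible_human_subset.mhd")

        # extract an iso-surface; the iso-value is chosen arbitrarily here
        surface = vtk.vtkMarchingCubes()
        surface.SetInputConnection(reader.GetOutputPort())
        surface.SetValue(0, 500)

        mapper = vtk.vtkPolyDataMapper()
        mapper.SetInputConnection(surface.GetOutputPort())
        mapper.ScalarVisibilityOff()

        actor = vtk.vtkActor()
        actor.SetMapper(mapper)

        # standard VTK render window with mouse interaction
        renderer = vtk.vtkRenderer()
        renderer.AddActor(actor)
        window = vtk.vtkRenderWindow()
        window.AddRenderer(renderer)
        interactor = vtk.vtkRenderWindowInteractor()
        interactor.SetRenderWindow(window)
        window.Render()
        interactor.Start()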

  10. Active Fault Geometry and Crustal Deformation Along the San Andreas Fault System Through San Gorgonio Pass, California: The View in 3D From Seismicity

    NASA Astrophysics Data System (ADS)

    Nicholson, C.; Hauksson, E.; Plesch, A.

    2012-12-01

    Understanding the 3D geometry and deformation style of the San Andreas fault (SAF) is critical to accurate dynamic rupture and ground motion prediction models. We use 3D alignments of hypocenter and focal mechanism nodal planes within a relocated earthquake catalog (1981-2011) [Hauksson et al., 2012] to develop improved 3D fault models for active strands of the SAF and adjacent secondary structures. Through San Gorgonio Pass (SGP), earthquakes define a mechanically layered crust with predominantly high-angle strike-slip faults in the upper ~10 km, while at greater depth, intersecting sets of strike-slip, oblique slip and low-angle thrust faults define a wedge-shaped volume deformation of the lower crust. In some places, this interface between upper and lower crustal deformation may be an active detachment fault, and may have controlled the down-dip extent of recent fault rupture. Alignments of hypocenters and nodal planes define multiple principal slip surfaces through SGP, including a through-going steeply-dipping predominantly strike-slip Banning fault strand at depth that upward truncates a more moderately dipping (40°-50°) blind, oblique North Palm Springs fault. The North Palm Springs fault may be the active down-dip extension of the San Gorgonio Pass thrust offset at depth by the principal, through-going Banning strand. In the northern Coachella Valley, seismicity indicates that the Garnet Hill and Banning fault strands are most likely sub-parallel and steeply dipping (~70°NE) to depths of 8-10 km, where they intersect and merge with a stack of moderately dipping to low-angle oblique thrust faults. Gravity and water well data confirm that these faults are sub-parallel and near vertical in the upper 2-3 km. Although the dense wedge of deep seismicity below SGP and largely south of the SAF contains multiple secondary fault sets of different orientations, the predominant fault set appears to be a series of en echelon NW-striking oblique strike-slip faults

  11. Autostereoscopic 3D display system on the properties of both the expanded depth directional viewing zone and the removed structural crosstalk

    NASA Astrophysics Data System (ADS)

    Lee, Kwang-Hoon; Park, Anjin; Lee, Dong-Kil; Kim, Yang-Gyu; Jang, Wongun; Park, Youngsik

    2014-06-01

    To expand the suitable stereoscopic viewing zone in the depth direction and to remove the crosstalk induced by the structure of the existing slanted lenticular lens sheet, a Segmented Lenticular lens with Varying Optical Power (SL-VOP) is proposed.

  12. User benefits of visualization with 3-D stereoscopic displays

    NASA Astrophysics Data System (ADS)

    Wichansky, Anna M.

    1991-08-01

    The power of today's supercomputers promises tremendous benefits to users in terms of productivity, creativity, and excitement in computing. A study of a stereoscopic display system for computer workstations was conducted with 20 users and third-party software developers, to determine whether 3-D stereo displays were perceived as better than flat, 2-1/2D displays. Users perceived more benefits of 3-D stereo in applications such as molecular modeling and cell biology, which involved viewing of complex, abstract, amorphous objects. Users typically mentioned clearer visualization and better understanding of data, easier recognition of form and pattern, and more fun and excitement at work as the chief benefits of stereo displays. Human factors issues affecting the usefulness of stereo included use of 3-D glasses over regular eyeglasses, difficulties in group viewing, lack of portability, and need for better input devices. The future marketability of 3-D stereo displays would be improved by eliminating the need for users to wear equipment, reducing cost, and identifying markets where the abstract display value can be maximized.

  13. Filling gaps in cultural heritage documentation by 3D photography

    NASA Astrophysics Data System (ADS)

    Schuhr, W.; Lee, J. D.

    2015-08-01

    geometry" and to multistage concepts of 3D photographs in Cultural Heritage just started. Furthermore a revised list of the 3D visualization principles, claiming completeness, has been carried out. Beside others in an outlook *It is highly recommended, to list every historical and current stereo view with relevance to Cultural Heritage in a global Monument Information System (MIS), like in google earth. *3D photographs seem to be very suited, to complete and/or at least partly to replace manual archaeological sketches. In this concern the still underestimated 3D effect will be demonstrated, which even allows, e.g., the spatial perception of extremely small scratches etc... *A consequent dealing with 3D Technology even seems to indicate, currently we experience the beginning of a new age of "real 3DPC- screens", which at least could add or even partly replace the conventional 2D screens. Here the spatial visualization is verified without glasses in an all-around vitreous body. In this respect nowadays widespread lasered crystals showing monuments are identified as "Early Bird" 3D products, which, due to low resolution and contrast and due to lack of color, currently might even remember to the status of the invention of photography by Niepce (1827), but seem to promise a great future also in 3D Cultural Heritage documentation. *Last not least 3D printers more and more seem to conquer the IT-market, obviously showing an international competition.

  14. Investigation of geological structures with a view to HLRW disposal, as revealed through 3D inversion of aeromagnetic and gravity data and the results of CSAMT exploration

    NASA Astrophysics Data System (ADS)

    An, Zhiguo; Di, Qingyun

    2016-12-01

    The Alxa area in Inner Mongolia has been selected as a possible site for geological disposal of high-level radioactive waste (HLRW). Based on the results of a previous study on crustal stability, the Tamusu rock mass has been chosen as the target. To determine the geological structure of this rock mass, aeromagnetic and gravity data were collected and inverted. Three-dimensional (3D) inversion horizontal slices show that the internal density of the rock mass and the distribution of magnetic properties are not uniform, with fractures and fragmentation being present. To confirm this result, the controlled source audio-frequency magnetotelluric method (CSAMT) was applied to explore the geological structures, the typical CSAMT sounding curve was analyzed, and the response characteristics of the geological structure and surrounding rock were distinguished. The original data were processed and interpreted in combination with data from surface geology and drilling and logging data. It is found that the CSAMT results were consistent with those from the 3D inversion of the gravity and magnetic data, confirming the existence of fractures and fragmentation in the exploration area.

  15. Momentum-resolved view of mixed 2D and nonbulklike 3D electronic structure of the surface state on SrTiO3 (001)

    NASA Astrophysics Data System (ADS)

    Plumb, N. C.; Salluzzo, M.; Razzoli, E.; Mansson, M.; Krempasky, J.; Matt, C. E.; Schmitt, T.; Shi, M.; Mesot, J.; Patthey, L.; Radovic, M.

    2014-03-01

    The recent discovery of a metallic surface state on SrTiO3 may open a route to simplified low-dimensional oxide-based conductors, as well as give new insights into interfacial phenomena in heterostructures such as LaAlO3/SrTiO3. Our recent angle-resolved photoemission spectroscopy (ARPES) study demonstrates that not only quasi-2D but also non-bulklike 3D Fermi surface components make up the surface state. Like their more 2D counterparts, the size and character of the 3D components are fixed with respect to a broad range of sample preparations. As seen in previous studies, the surface state can be "prepared" by photon irradiation under UHV conditions. An extremely high fraction of the surface valence states are affected by this process, especially in relation to the stability of oxygen core level intensity during the same exposure, which points to a key role of electronic/structural changes that spread over the surface as the metal emerges.

  16. 3D and Education

    NASA Astrophysics Data System (ADS)

    Meulien Ohlmann, Odile

    2013-02-01

    Today the industry offers a chain of 3D products. Learning to "read" and to "create in 3D" becomes an educational issue of primary importance. Drawing on 25 years of professional experience in France, the United States and Germany, Odile Meulien has developed a personal method of initiation to 3D creation that entails the spatial/temporal experience of the holographic visual. She will present some of the different tools and techniques used for this learning, their advantages and disadvantages, programs and issues of educational policies, and constraints and expectations related to the development of new techniques for 3D imaging. Although the creation of display holograms is much reduced compared to the 1990s, the holographic concept is spreading in all scientific, social, and artistic activities of our present time. She will also raise many questions: What does 3D mean? Is it communication? Is it perception? How do seeing and not seeing interfere? What else has to be taken into consideration to communicate in 3D? How do we handle the non-visible relations of moving objects with subjects? Does this transform our model of exchange with others? What kind of interaction does this have with our everyday life? Then come more practical questions: How does one learn to create 3D visualization, to learn 3D grammar, 3D language, 3D thinking? What for? At what level? In which subject matter? For whom?

  17. Toward a 3D video format for auto-stereoscopic displays

    NASA Astrophysics Data System (ADS)

    Vetro, Anthony; Yea, Sehoon; Smolic, Aljoscha

    2008-08-01

    There has been increased momentum recently in the production of 3D content for cinema applications; for the most part, this has been limited to stereo content. There are also a variety of display technologies on the market that support 3DTV, each offering a different viewing experience and having different input requirements. More specifically, stereoscopic displays support stereo content and require glasses, while auto-stereoscopic displays avoid the need for glasses by rendering view-dependent stereo pairs for a multitude of viewing angles. To realize high quality auto-stereoscopic displays, multiple views of the video must either be provided as input to the display, or these views must be created locally at the display. The former approach has difficulties in that the production environment is typically limited to stereo, and transmission bandwidth for a large number of views is not likely to be available. This paper discusses an emerging 3D data format that enables the latter approach to be realized. A new framework for efficiently representing a 3D scene and enabling the reconstruction of an arbitrarily large number of views prior to rendering is introduced. Several design challenges are also highlighted through experimental results.

  18. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3-D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3-D imaging for libraries and museums. (LRW)

  19. 3D carotid plaque MR Imaging

    PubMed Central

    Parker, Dennis L.

    2015-01-01

    SYNOPSIS There has been significant progress made in 3D carotid plaque magnetic resonance imaging techniques in recent years. 3D plaque imaging clearly represents the future in clinical use. With effective flow suppression techniques, choices of different contrast weighting acquisitions, and time-efficient imaging approaches, 3D plaque imaging offers flexible imaging plane and view angle analysis, large coverage, multi-vascular bed capability, and can even be used for fast screening. PMID:26610656

  20. Super stereoscopy technique for comfortable and realistic 3D displays.

    PubMed

    Akşit, Kaan; Niaki, Amir Hossein Ghanbari; Ulusoy, Erdem; Urey, Hakan

    2014-12-15

    Two well-known problems of stereoscopic displays are the accommodation-convergence conflict and the lack of natural blur for defocused objects. We present a new technique that we name Super Stereoscopy (SS3D) to provide a convenient solution to these problems. Regular stereoscopic glasses are replaced by SS3D glasses which deliver at least two parallax images per eye through pinholes equipped with light selective filters. The pinholes generate blur-free retinal images so as to enable correct accommodation, while the delivery of multiple parallax images per eye creates an approximate blur effect for defocused objects. Experiments performed with cameras and human viewers indicate that the technique works as desired. When two pinholes equipped with color filters are used per eye, the technique can be applied on a regular stereoscopic display simply by uploading new content, without requiring any change in display hardware, driver, or frame rate. Apart from some tolerable loss in display brightness and a decrease in the natural spatial resolution limit of the eye due to the pinholes, the technique is quite promising for comfortable and realistic 3D vision, especially enabling the display of close objects that are not possible to display and comfortably view on regular 3DTV and cinema.

  1. Mini 3D for shallow gas reconnaissance

    SciTech Connect

    Vallieres, T. des; Enns, D.; Kuehn, H.; Parron, D.; Lafet, Y.; Van Hulle, D.

    1996-12-31

    The Mini 3D project was undertaken by TOTAL and ELF with the support of CEPM (Comite d'Etudes Petrolieres et Marines) to define an economical method of obtaining 3D seismic HR data for shallow gas assessment. An experimental 3D survey was carried out with classical site survey techniques in the North Sea. From these data, 19 simulations were produced to compare different acquisition geometries ranging from dual, 600 m long cables to a single receiver. Results show that short offset, low fold and very simple streamer positioning are sufficient to give a reliable 3D image of gas charged bodies. The 3D data allow a much more accurate risk delineation than 2D HR data. Moreover, on financial grounds, Mini 3D is comparable in cost to a classical HR 2D survey. In view of these results, such HR 3D should now be the standard for shallow gas surveying.

  2. Development of a 2D Image Reconstruction and Viewing System for Histological Images from Multiple Tissue Blocks: Towards High-Resolution Whole-Organ 3D Histological Images.

    PubMed

    Hashimoto, Noriaki; Bautista, Pinky A; Haneishi, Hideaki; Snuderl, Matija; Yagi, Yukako

    2016-01-01

    High-resolution 3D histology image reconstruction of the whole brain organ starts from reconstructing the high-resolution 2D histology images of a brain slice. In this paper, we introduced a method to automatically align the histology images of thin tissue sections cut from the multiple paraffin-embedded tissue blocks of a brain slice. For this method, we employed template matching and incorporated an optimization technique to further improve the accuracy of the 2D reconstructed image. In the template matching, we used the gross image of the brain slice as a reference to the reconstructed 2D histology image of the slice, while in the optimization procedure, we utilized the Jaccard index as the metric of the reconstruction accuracy. The results of our experiment on the initial 3 different whole-brain tissue slices showed that while the method works, it is also constrained by tissue deformations introduced during the tissue processing and slicing. The size of the reconstructed high-resolution 2D histology image of a brain slice is huge, and designing an image viewer that makes particularly efficient use of the computing power of a standard computer used in our laboratories is of interest. We also present the initial implementation of our 2D image viewer system in this paper.
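
    The alignment and accuracy-metric steps described above can be approximated in a few lines. The sketch below uses OpenCV template matching to locate a section within the gross slice image and the Jaccard index (intersection over union) of binary tissue masks as the accuracy measure; all names and the simple single-modality matching are illustrative assumptions, not the paper's actual pipeline.

        import cv2
        import numpy as np

        def register_section_to_gross(gross_gray, section_gray):
            # normalised cross-correlation template matching; the section image
            # must be no larger than the gross image
            result = cv2.matchTemplate(gross_gray, section_gray, cv2.TM_CCOEFF_NORMED)
            _, max_val, _, max_loc = cv2.minMaxLoc(result)
            return max_loc, max_val   # top-left corner of best match and its score

        def jaccard_index(mask_a, mask_b):
            # intersection over union of two binary tissue masks
            a, b = mask_a.astype(bool), mask_b.astype(bool)
            union = np.logical_or(a, b).sum()
            return np.logical_and(a, b).sum() / union if union else 0.0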

  3. Hands-on guide for 3D image creation for geological purposes

    NASA Astrophysics Data System (ADS)

    Frehner, Marcel; Tisato, Nicola

    2013-04-01

    Geological structures in outcrops or hand specimens are inherently three dimensional (3D), and therefore better understandable if viewed in 3D. While 3D models can easily be created, manipulated, and looked at from all sides on the computer screen (e.g., using photogrammetry or laser scanning data), 3D visualizations for publications or conference posters are much more challenging as they have to live in a 2D world (i.e., on a sheet of paper). Perspective 2D visualizations of 3D models do not fully transmit the "feeling and depth of the third dimension" to the audience; but this feeling is desirable for a better examination and understanding of the structure under consideration in 3D. One of the very few possibilities to generate real 3D images, which work on a 2D display, is by using so-called stereoscopic images. Stereoscopic images are two images of the same object recorded from two slightly offset viewpoints. Special glasses and techniques have to be used to make sure that one image is seen only by one eye, and the other image is seen by the other eye, which together lead to the "3D effect". Geoscientists are often familiar with such 3D images. For example, geomorphologists traditionally view stereographic orthophotos by employing a mirror-stereoscope. Nowadays, petroleum geoscientists examine high-resolution 3D seismic data sets in special 3D visualization rooms. One of the methods for generating and viewing a stereoscopic image, which does not require a high-tech viewing device, is to create a so-called anaglyph. The principle is to overlay two images saturated in red and cyan, respectively. The two images are then viewed through red-cyan stereoscopic glasses. This method is simple and cost-effective, but has some drawbacks in preserving colors accurately. A similar method is used in 3D movies, where polarized light or shuttering techniques are used to separate the left from the right image, which allows preserving the original colors. The advantage of red
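
    The red-cyan overlay described above is straightforward to reproduce: take the red channel from the left image and the green and blue channels from the right image. The Python sketch below assumes a rectified, colour-balanced stereo pair loaded as RGB arrays; the file names are placeholders.

        import numpy as np
        import imageio

        def make_anaglyph(left_rgb, right_rgb):
            # red channel from the left view, green and blue from the right view
            anaglyph = np.zeros_like(left_rgb)
            anaglyph[..., 0] = left_rgb[..., 0]
            anaglyph[..., 1] = right_rgb[..., 1]
            anaglyph[..., 2] = right_rgb[..., 2]
            return anaglyph

        # left = imageio.imread("outcrop_left.jpg")
        # right = imageio.imread("outcrop_right.jpg")
        # imageio.imwrite("outcrop_anaglyph.jpg", make_anaglyph(left, right))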

  4. Web Based Interactive Anaglyph Stereo Visualization of 3D Model of Geoscience Data

    NASA Astrophysics Data System (ADS)

    Han, J.

    2014-12-01

    The objectives of this study were to create an interactive online tool for generating and viewing anaglyph 3D stereo images on a Web browser via the Internet. To achieve this, we designed and developed a prototype system. Three-dimensional visualization is well known and has become popular in recent years as a way to understand the target object and the related physical phenomena. Geoscience data have a complex data model, which combines large extents with rich small-scale visual details, so real-time visualization of a 3D geoscience data model on the Internet is a challenging task. In this paper, we show the result of creating anaglyph 3D stereo images of geoscience data that can be viewed in any web browser which supports WebGL. We developed an anaglyph image viewing prototype system, and some representative results are displayed as anaglyph 3D stereo images generated in red-cyan colour from pairs of air-photo/digital elevation model and geological map/digital elevation model, respectively. The best viewing is achieved by using suitable 3D red-cyan glasses, although red-blue or red-green spectacles can also be used. The middle mouse wheel can be used to zoom in/out of the anaglyph image in a Web browser. The anaglyph 3D stereo image is a very important and easy way to understand an underground geologic system and active tectonic geomorphology. The integrated strata with fine three-dimensional topography and geologic map data can help to characterise mineral potential areas and anomalous active tectonic characteristics. To conclude, it can be stated that anaglyph 3D stereo images provide a simple and feasible method to improve the relief effect of geoscience data such as geomorphology and geology. We believe that with further development, the anaglyph 3D stereo imaging system could, as a complement to 3D geologic modeling, constitute a useful tool for better understanding of the underground geology and the active tectonic

  5. Using the Technology: Introducing Point of View Video Glasses Into the Simulated Clinical Learning Environment.

    PubMed

    Metcalfe, Helene; Jonas-Dwyer, Diana; Saunders, Rosemary; Dugmore, Helen

    2015-10-01

    The introduction of learning technologies into educational settings continues to grow alongside the emergence of innovative technologies in the healthcare arena. The challenge for health professionals such as medical, nursing, and allied health practitioners is to develop an improved understanding of these technologies and how they may influence practice and contribute to healthcare. For nurse educators to remain contemporary, there is a need not only to embrace current technologies in teaching and learning but also to ensure that students are able to adapt to this changing pedagogy. One recent technological innovation is the use of wearable computing technology, consisting of video recording with the capability of playback analysis. The authors of this article discuss the introduction of wearable Point of View video glasses to a cohort of nursing students in a simulated clinical learning laboratory. Of particular interest was the ease of use of the glasses, also termed the usability of this technology, which is central to its success. Students' reflections were analyzed together with suggestions for future use.

  6. FPGA implementation of glass-free stereo vision

    NASA Astrophysics Data System (ADS)

    Tang, Weidong; Yan, Xiaolin

    2016-04-01

    This paper presents a real-time, efficient glass-free 3D system based on FPGA. The system converts a two-view input stream of 60 frames per second (fps) at 1080p into a multi-view video at 30 fps and 4K resolution. In order to provide a smooth and comfortable viewing experience, glass-free 3D systems must display multi-view videos. Generating a multi-view video from a two-view input involves three steps: the first is to compute disparity maps from the two input views; the second is to synthesize a number of new views based on the computed disparity maps and the input views; the last is to produce the output video from the new views according to the specifications of the lens installed on the TV set.
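
    A rough software analogue of the first two steps of that pipeline (disparity estimation followed by view synthesis) is sketched below in Python with OpenCV. The block-matching parameters, the simple backward warp by a fraction of the disparity, and the function name are illustrative assumptions; the actual system is a fixed-point FPGA implementation, and the final lens-dependent interleaving step is omitted.

        import cv2
        import numpy as np

        def synthesize_intermediate_view(left_bgr, right_bgr, alpha=0.5, num_disp=64):
            # step 1: disparity map from the rectified stereo pair
            gray_l = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
            gray_r = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)
            matcher = cv2.StereoSGBM_create(minDisparity=0,
                                            numDisparities=num_disp,
                                            blockSize=9)
            disparity = matcher.compute(gray_l, gray_r).astype(np.float32) / 16.0

            # step 2: warp the left view by a fraction alpha of the disparity
            h, w = gray_l.shape
            xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))
            map_x = xs + alpha * disparity
            return cv2.remap(left_bgr, map_x, ys, cv2.INTER_LINEAR)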

  7. AE3D

    SciTech Connect

    Spong, Donald A

    2016-06-20

    AE3D solves for the shear Alfven eigenmodes and eigenfrequencies in a toroidal magnetic fusion confinement device. The configuration can be either 2D (e.g. tokamak, reversed field pinch) or 3D (e.g. stellarator, helical reversed field pinch, tokamak with ripple). The equations solved are based on a reduced MHD model and sound wave coupling effects are not currently included.

  8. 3D change detection - Approaches and applications

    NASA Astrophysics Data System (ADS)

    Qin, Rongjun; Tian, Jiaojiao; Reinartz, Peter

    2016-12-01

    Due to the unprecedented technology development of sensors, platforms and algorithms for 3D data acquisition and generation, 3D spaceborne, airborne and close-range data, in the form of image-based or Light Detection and Ranging (LiDAR) based point clouds, Digital Elevation Models (DEM) and 3D city models, have become more accessible than ever before. Change detection (CD) or time-series data analysis in 3D has gained great attention due to its capability of providing volumetric dynamics to facilitate more applications and provide more accurate results. The state-of-the-art CD reviews aim to provide a comprehensive synthesis and to simplify the taxonomy of the traditional remote sensing CD techniques, which mainly sit within the boundary of 2D image/spectrum analysis, largely ignoring the particularities of the 3D aspects of the data. The inclusion of 3D data for change detection (termed 3D CD) not only provides a source with a different modality for analysis, but also transcends the border of traditional top-view 2D pixel/object-based analysis to highly detailed, oblique-view or voxel-based geometric analysis. This paper reviews the recent developments and applications of 3D CD using remote sensing and close-range data, in support of both academic and industry researchers who seek solutions for detecting and analyzing 3D dynamics of various objects of interest. We first describe the general considerations of 3D CD problems in different processing stages and identify CD types based on the information used, namely geometric comparison and geometric-spectral analysis. We then summarize relevant works and practices in urban, environmental, ecological and civil applications, etc. Given the broad spectrum of applications and different types of 3D data, we discuss important issues in 3D CD methods. Finally, we present concluding remarks on the algorithmic aspects of 3D CD.
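
    The "geometric comparison" category mentioned above reduces, in its simplest form, to differencing two co-registered elevation models and thresholding the result. The sketch below illustrates that idea; the threshold value and the assumption of perfectly co-registered, equally sampled DEMs are simplifications for illustration only.

        import numpy as np

        def dem_change_map(dem_t0, dem_t1, threshold_m=0.5):
            # per-cell elevation difference between two epochs
            diff = dem_t1 - dem_t0
            # flag cells whose absolute elevation change exceeds the threshold
            changed = np.abs(diff) > threshold_m
            return diff, changed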

  9. Exploring Technology-Enhanced Learning Using Google Glass to Offer Students a Unique Instructor's Point of View Live Laboratory Demonstration

    ERIC Educational Resources Information Center

    Man, Fung Fun

    2016-01-01

    Technology-enhanced learning (TEL) is fast gaining momentum among educational institutions all over the world. The usual way in which laboratory instructional videos are filmed takes the third-person view. However, such videos are not as realistic and sensorial. With the advent of Google Glass and GoPro cameras, a more personal and effective way…

  10. Long-range and wide field of view optical coherence tomography for in vivo 3D imaging of large volume object based on akinetic programmable swept source

    PubMed Central

    Song, Shaozhen; Xu, Jingjiang; Wang, Ruikang K.

    2016-01-01

    Current optical coherence tomography (OCT) imaging suffers from short ranging distance and narrow imaging field of view (FOV). There is growing interest in searching for solutions to these limitations in order to further expand in vivo OCT applications. This paper describes a solution where we utilize an akinetic swept source for OCT implementation to enable ~10 cm ranging distance, together with the use of a wide-angle camera lens in the sample arm to provide a FOV of ~20 x 20 cm2. The akinetic swept source operates at 1300 nm central wavelength with a bandwidth of 100 nm. We propose an adaptive calibration procedure for the programmable akinetic light source so that the sensitivity of the OCT system over the ~10 cm ranging distance is substantially improved for imaging of large-volume samples. We demonstrate the proposed swept source OCT system for in vivo imaging of entire human hands and faces with an unprecedented FOV (up to 400 cm2). The capability of large-volume OCT imaging with ultra-long ranging and ultra-wide FOV is expected to bring new opportunities for in vivo biomedical applications. PMID:27896012

  11. Automatic view synthesis by image-domain-warping.

    PubMed

    Stefanoski, Nikolce; Wang, Oliver; Lang, Manuel; Greisen, Pierre; Heinzle, Simon; Smolic, Aljosa

    2013-09-01

    Today, stereoscopic 3D (S3D) cinema is already mainstream, and almost all new display devices for the home support S3D content. S3D distribution infrastructure to the home is already established in part in the form of 3D Blu-ray discs, video on demand services, or television channels. The necessity to wear glasses is, however, often considered an obstacle, which hinders broader acceptance of this technology in the home. Multiview autostereoscopic displays enable a glasses-free perception of S3D content for several observers simultaneously, and support head motion parallax in a limited range. To support multiview autostereoscopic displays in an already established S3D distribution infrastructure, a synthesis of new views from S3D video is needed. In this paper, a view synthesis method based on image-domain-warping (IDW) is presented that synthesizes new views directly from S3D video and functions completely automatically. IDW relies on an automatic and robust estimation of sparse disparities and image saliency information, and enforces target disparities in synthesized images using an image warping framework. Two configurations of the view synthesizer in the scope of a transmission and view synthesis framework are analyzed and evaluated. A transmission and view synthesis system that uses IDW was recently submitted to MPEG's call for proposals on 3D video technology, where it was ranked among the four best performing proposals.

  12. "We Put on the Glasses and Moon Comes Closer!" Urban Second Graders Exploring the Earth, the Sun and Moon through 3D Technologies in a Science and Literacy Unit

    ERIC Educational Resources Information Center

    Isik-Ercan, Zeynep; Zeynep Inan, Hatice; Nowak, Jeffrey A.; Kim, Beomjin

    2014-01-01

    This qualitative case study describes (a) the ways 3D visualization, coupled with other science and literacy experiences, supported young children's first exploration of the Earth-Sun-Moon system and (b) the perspectives of classroom teachers and children on using 3D visualization. We created three interactive 3D software modules that simulate day…

  13. TACO3D. 3-D Finite Element Heat Transfer Code

    SciTech Connect

    Mason, W.E.

    1992-03-04

    TACO3D is a three-dimensional, finite-element program for heat transfer analysis. An extension of the two-dimensional TACO program, it can perform linear and nonlinear analyses and can be used to solve either transient or steady-state problems. The program accepts time-dependent or temperature-dependent material properties, and materials may be isotropic or orthotropic. A variety of time-dependent and temperature-dependent boundary conditions and loadings are available including temperature, flux, convection, and radiation boundary conditions and internal heat generation. Additional specialized features treat enclosure radiation, bulk nodes, and master/slave internal surface conditions (e.g., contact resistance). Data input via a free-field format is provided. A user subprogram feature allows for any type of functional representation of any independent variable. A profile (bandwidth) minimization option is available. The code is limited to implicit time integration for transient solutions. TACO3D has no general mesh generation capability. Rows of evenly-spaced nodes and rows of sequential elements may be generated, but the program relies on separate mesh generators for complex zoning. TACO3D does not have the ability to calculate view factors internally. Graphical representation of data in the form of time history and spatial plots is provided through links to the POSTACO and GRAPE postprocessor codes.

  14. 3-D Seismic Interpretation

    NASA Astrophysics Data System (ADS)

    Moore, Gregory F.

    2009-05-01

    This volume is a brief introduction aimed at those who wish to gain a basic and relatively quick understanding of the interpretation of three-dimensional (3-D) seismic reflection data. The book is well written, clearly illustrated, and easy to follow. Enough elementary mathematics is presented for a basic understanding of seismic methods, but more complex mathematical derivations are avoided. References are listed for readers interested in more advanced explanations. After a brief introduction, the book logically begins with a succinct chapter on modern 3-D seismic data acquisition and processing. Standard 3-D acquisition methods are presented, and an appendix expands on more recent acquisition techniques, such as multiple-azimuth and wide-azimuth acquisition. Although this chapter covers the basics of standard time processing quite well, there is only a single sentence about prestack depth imaging, and anisotropic processing is not mentioned at all, even though both techniques are now becoming standard.

  15. Radiochromic 3D Detectors

    NASA Astrophysics Data System (ADS)

    Oldham, Mark

    2015-01-01

    Radiochromic materials exhibit a colour change when exposed to ionising radiation. Radiochromic film has been used for clinical dosimetry for many years and increasingly so recently, as films of higher sensitivities have become available. The two principal advantages of radiochromic dosimetry include greater tissue equivalence (radiologically) and the lack of requirement for development of the colour change. In a radiochromic material, the colour change arises directly from ionising interactions affecting dye molecules, without requiring any latent chemical, optical or thermal development, with important implications for increased accuracy and convenience. It is only relatively recently, however, that 3D radiochromic dosimetry has become possible. In this article we review recent developments and the current state-of-the-art of 3D radiochromic dosimetry, and the potential for a more comprehensive solution for the verification of complex radiation therapy treatments, and 3D dose measurement in general.

  16. Interactive 3D Mars Visualization

    NASA Technical Reports Server (NTRS)

    Powell, Mark W.

    2012-01-01

    The Interactive 3D Mars Visualization system provides high-performance, immersive visualization of satellite and surface vehicle imagery of Mars. The software can be used in mission operations to provide the most accurate position information for the Mars rovers to date. When integrated into the mission data pipeline, this system allows mission planners to view the location of the rover on Mars to 0.01-meter accuracy with respect to satellite imagery, with dynamic updates to incorporate the latest position information. Given this information so early in the planning process, rover drivers are able to plan more accurate drive activities for the rover than ever before, increasing the execution of science activities significantly. Scientifically, this 3D mapping information puts all of the science analyses to date into geologic context on a daily basis instead of weeks or months, as was the norm prior to this contribution. This allows the science planners to judge the efficacy of their previously executed science observations much more efficiently, and achieve greater science return as a result. The Interactive 3D Mars surface view is a Mars terrain browsing software interface that encompasses the entire region of exploration for a Mars surface exploration mission. The view is interactive, allowing the user to pan in any direction by clicking and dragging, or to zoom in or out by scrolling the mouse or touchpad. The current tool set includes a tool for selecting a point of interest, and a ruler tool for displaying the distance between and positions of two points of interest. The mapping information can be harvested and shared through ubiquitous online mapping tools like Google Mars, NASA WorldWind, and Worldwide Telescope.

  17. Lander petal & Twin Peaks - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Many prominent rocks near the Sagan Memorial Station are featured in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. The two hills in the distance, approximately one to two kilometers away, have been dubbed the 'Twin Peaks' and are of great interest to Pathfinder scientists as objects of future study. The white areas on the left hill, called the 'Ski Run' by scientists, may have been formed by hydrologic processes. A lander petal, airbag, and the rear ramp are at the lower area of the image.

    The image was taken by the Imager for Mars Pathfinder (IMP) after its deployment on Sol 3. Mars Pathfinder was developed and managed by the Jet Propulsion Laboratory (JPL) for the National Aeronautics and Space Administration. The IMP was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

    Click below to see the left and right views individually. [figure removed for brevity, see original site] Left [figure removed for brevity, see original site] Right

  18. Forward ramp and Twin Peaks - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    A lander petal and the forward ramp are featured in this image, taken by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. There are several prominent rocks, including Wedge at left; Shark, Half-Dome, and Pumpkin in the background; and Flat Top and Little Flat Top at center.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

    Click below to see the left and right views individually. [figure removed for brevity, see original site] Left [figure removed for brevity, see original site] Right

  19. Sojourner's favorite rocks - in 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Many prominent rocks near the Sagan Memorial Station are featured in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. Wedge is at lower left; Shark, Half-Dome, and Pumpkin are at center. Flat Top, about four inches high, and Little Flat Top are at center. The 'Twin Peaks' in the distance are one to two kilometers away. Curvature in the image is due to parallax.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

    Click below to see the left and right views individually. [figure removed for brevity, see original site] Left [figure removed for brevity, see original site] Right

  20. Bootstrapping 3D fermions

    DOE PAGES

    Iliesiu, Luca; Kos, Filip; Poland, David; ...

    2016-03-17

    We study the conformal bootstrap for a 4-point function of fermions <ψψψψ> in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ × ψ OPE, and also on the central charge CT. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. Finally, we also speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.

  1. Bootstrapping 3D fermions

    SciTech Connect

    Iliesiu, Luca; Kos, Filip; Poland, David; Pufu, Silviu S.; Simmons-Duffin, David; Yacoby, Ran

    2016-03-17

    We study the conformal bootstrap for a 4-point function of fermions <ψψψψ> in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ × ψ OPE, and also on the central charge CT. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. Finally, we also speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.

  2. What Lies Ahead (3-D)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D cylindrical-perspective mosaic taken by the navigation camera on the Mars Exploration Rover Spirit on sol 82 shows the view south of the large crater dubbed 'Bonneville.' The rover will travel toward the Columbia Hills, seen here at the upper left. The rock dubbed 'Mazatzal' and the hole the rover drilled into it can be seen at the lower left. The rover's position is referred to as 'Site 22, Position 32.' This image was geometrically corrected to make the horizon appear flat.

  3. Quasi 3D dispersion experiment

    NASA Astrophysics Data System (ADS)

    Bakucz, P.

    2003-04-01

    This paper studies the problem of tracer dispersion in a coloured fluid flowing through a two-phase 3D rough channel-system in a 40 cm x 40 cm plexi-container filled with homogeneous glass fractions and a colourless fluid. The unstable interface between the driving coloured fluid and the colourless fluid develops viscous fingers with a fractal structure at high capillary number. Five two-dimensional fractal fronts have been observed at the same time using four cameras along the vertical side-walls and one camera located above the plexi-container. From these five fronts, the spatial concentration contours are determined using statistical models. The concentration contours are self-affine fractal curves with a fractal dimension D=2.19. This result is valid for dispersion at high Péclet numbers.

  4. Venus in 3D

    NASA Technical Reports Server (NTRS)

    Plaut, Jeffrey J.

    1993-01-01

    Stereographic images of the surface of Venus which enable geologists to reconstruct the details of the planet's evolution are discussed. The 120-meter resolution of these 3D images makes it possible to construct digital topographic maps from which precise measurements can be made of the heights, depths, slopes, and volumes of geologic structures.

  5. 3D Printable Graphene Composite

    PubMed Central

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-01-01

    In human history, both the Iron Age and the Silicon Age thrived after a matured massive processing technology was developed. Graphene is the most recent superior material which could potentially initialize another new material Age. However, while being exploited to its full extent, conventional processing methods fail to provide a link to today’s personalization tide. New technology should be ushered in. Three-dimensional (3D) printing fills the missing linkage between graphene materials and the digital mainstream. Their alliance could generate an additional stream to push the graphene revolution into a new phase. Here we demonstrate for the first time that a graphene composite, with a graphene loading up to 5.6 wt%, can be 3D printed into computer-designed models. The composite’s linear thermal coefficient is below 75 ppm·°C−1 from room temperature to its glass transition temperature (Tg), which is crucial to building up only minute thermal stress during the printing process. PMID:26153673

  6. 3D Radiative Transfer Effects in Multi-Angle/Multi-Spectral Radio-Polarimetric Signals from a Mixture of Clouds and Aerosols Viewed by a Non-Imaging Sensor

    NASA Technical Reports Server (NTRS)

    Davis, Anthony B.; Garay, Michael J.; Xu, Feng; Qu, Zheng; Emde, Claudia

    2013-01-01

    When observing a spatially complex mix of aerosols and clouds in a single relatively large field-of-view, nature entangles their signals non-linearly through polarized radiation transport processes that unfold in the 3D position and direction spaces. In contrast, any practical forward model in a retrieval algorithm will use only 1D vector radiative transfer (vRT) in a linear mixing technique. We assess the difference between the observed and predicted signals using synthetic data from a high-fidelity 3D vRT model with clouds generated using a Large Eddy Simulation model and an aerosol climatology. We find that this difference is signal--not noise--for the Aerosol Polarimetry Sensor (APS), an instrument developed by NASA. Moreover, the worst case scenario is also the most interesting case, namely, when the aerosol burden is large, hence has the most impact on the cloud microphysics and dynamics. Based on our findings, we formulate a mitigation strategy for these unresolved cloud adjacency effects assuming that some spatial information is available about the structure of the clouds at higher resolution from "context" cameras, as was planned for NASA's ill-fated Glory mission that was to carry the APS but failed to reach orbit. Application to POLDER (POLarization and Directionality of Earth Reflectances) data from the period when PARASOL (Polarization and Anisotropy of Reflectances for Atmospheric Sciences coupled with Observations from a Lidar) was in the A-train is briefly discussed.

  7. Subjective experiences of watching stereoscopic Avatar and U2 3D in a cinema

    NASA Astrophysics Data System (ADS)

    Pölönen, Monika; Salmimaa, Marja; Takatalo, Jari; Häkkinen, Jukka

    2012-01-01

    A stereoscopic 3-D version of the film Avatar was shown to 85 people who subsequently answered questions related to sickness, visual strain, stereoscopic image quality, and sense of presence. Viewing Avatar for 165 min induced some symptoms of visual strain and sickness, but the symptom levels remained low. A comparison between Avatar and previously published results for the film U2 3D showed that sickness and visual strain levels were similar despite the films' different runtimes. The genre of the film had a significant effect on the viewers' opinions and sense of presence. Avatar, which has been described as a combination of action, adventure, and sci-fi genres, was experienced as more immersive and engaging than the music documentary U2 3D. However, participants in both studies were immersed, focused, and absorbed in watching the stereoscopic 3-D (S3-D) film and were pleased with the film environments. The results also showed that previous stereoscopic 3-D experience significantly reduced the amount of reported eye strain and complaints about the weight of the viewing glasses.

  8. Optically rewritable 3D liquid crystal displays.

    PubMed

    Sun, J; Srivastava, A K; Zhang, W; Wang, L; Chigrinov, V G; Kwok, H S

    2014-11-01

    Optically rewritable liquid crystal display (ORWLCD) is a concept based on the optically addressed bi-stable display that does not need any power to hold the image after being uploaded. Recently, the demand for the 3D image display has increased enormously. Several attempts have been made to achieve a 3D image on the ORWLCD, but all of them involve high complexity for image processing on both hardware and software levels. In this Letter, we disclose a concept for the 3D-ORWLCD by dividing the given image into three parts with different optic axes. A quarter-wave plate is placed on top of the ORWLCD to modify the emerging light from different domains of the image in a different manner. Thereafter, Polaroid glasses can be used to visualize the 3D image. The 3D image can be refreshed, on the 3D-ORWLCD, in one step with a proper ORWLCD printer and image processing, and therefore, with easy image refreshing and good image quality, such displays can be applied to many applications, viz. 3D bi-stable displays, security elements, etc.

  9. NASA's 3D View of Celestial Lightsabers

    NASA Video Gallery

    This movie envisions a three-dimensional perspective on the Hubble Space Telescope's striking image of the Herbig-Haro object known as HH 24. The central star is hidden by gas and dust, but its pro...

  10. 3-D Visualizations At (Almost) No Expense

    NASA Astrophysics Data System (ADS)

    Sedlock, R. L.

    2003-12-01

    Like most teaching-oriented public universities, San José State University (part of the California State University system) currently faces severe budgetary constraints. These circumstances prohibit the construction of one or more Geo-Walls on-campus. Nevertheless, the Department of Geology has pursued alternatives that enable our students to benefit from 3-D visualizations such as those used with the Geo-Wall. This experience - a sort of virtual virtuality - depends only on the availability of a computer lab and an optional plotter. Starting in June 2003, we have used the methods described here with two diverse groups of participants: middle- and high-school teachers taking professional development workshops through grants funded by NSF and NASA, and regular university students enrolled in introductory earth science and geology laboratory courses. We use two types of three-dimensional images with our students: visualizations from the on-line Gallery of Virtual Topography (Steve Reynolds), and USGS digital topographic quadrangles that have been transformed into anaglyph files for viewing with 3-D glasses. The procedure for transforming DEMs into these anaglyph files, developed by Paul Morin, is available at http://geosun.sjsu.edu/~sedlock/anaglyph.html. The resulting images can be used with students in one of two ways. First, maps can be printed on a suitable plotter, laminated (optional but preferable), and used repeatedly with different classes. Second, the images can be viewed in school computer labs or by students on their own computers. Chief advantages of the plotter option are (1) full-size maps (single or tiled) viewable in their entirety, and (2) dependability (independent of Internet connections and electrical power). Chief advantages of the computer option are (1) minimal preparation time and no other needed resources, assuming a computer lab with Internet access, and (2) students can work with the images outside of regularly scheduled courses. Both
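
    As a rough illustration of the DEM-to-anaglyph step described above (the actual procedure is the one referenced at the URL), the following Python sketch shifts a normalized elevation grid horizontally per pixel and packs the two synthetic views into the red and cyan channels. The array names and shift scale are illustrative assumptions, not the published workflow.

```python
import numpy as np
import matplotlib.pyplot as plt

def dem_to_anaglyph(dem, shift_scale=0.05):
    """Build a red-cyan anaglyph from a DEM (2-D numpy array of elevations).

    Each row is shifted horizontally in proportion to elevation to mimic
    left/right-eye parallax; the published workflow referenced above is
    more careful about shading and resampling.
    """
    norm = (dem - dem.min()) / (np.ptp(dem) + 1e-9)     # elevation scaled to 0..1
    rows, cols = dem.shape
    left = np.zeros_like(norm)
    right = np.zeros_like(norm)
    col_idx = np.arange(cols)
    for r in range(rows):
        dx = (norm[r] * shift_scale * cols).astype(int)  # per-pixel parallax in pixels
        left[r] = norm[r, np.clip(col_idx - dx, 0, cols - 1)]
        right[r] = norm[r, np.clip(col_idx + dx, 0, cols - 1)]
    # red channel carries the left view, green/blue (cyan) carry the right view
    return np.dstack([left, right, right])

if __name__ == "__main__":
    demo_dem = np.random.rand(200, 300).cumsum(axis=1)   # stand-in for a real DEM
    plt.imsave("anaglyph_demo.png", dem_to_anaglyph(demo_dem))
```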

  11. 2D/3D switchable displays

    NASA Astrophysics Data System (ADS)

    Dekker, T.; de Zwart, S. T.; Willemsen, O. H.; Hiddink, M. G. H.; IJzerman, W. L.

    2006-02-01

    A prerequisite for a wide market acceptance of 3D displays is the ability to switch between 3D and full resolution 2D. In this paper we present a robust and cost effective concept for an auto-stereoscopic switchable 2D/3D display. The display is based on an LCD panel, equipped with switchable LC-filled lenticular lenses. We will discuss 3D image quality, with the focus on display uniformity. We show that slanting the lenticulars in combination with a good lens design can minimize non-uniformities in our 20" 2D/3D monitors. Furthermore, we introduce fractional viewing systems as a very robust concept to further improve uniformity in the case slanting the lenticulars and optimizing the lens design are not sufficient. We will discuss measurements and numerical simulations of the key optical characteristics of this display. Finally, we discuss 2D image quality, the switching characteristics and the residual lens effect.

  12. Expanding the degree of freedom of observation on depth-direction by the triple-separated slanted parallax barrier in autostereoscopic 3D display

    NASA Astrophysics Data System (ADS)

    Lee, Kwang-Hoon; Choe, Yeong-Seon; Lee, Dong-Kil; Kim, Yang-Gyu; Park, Youngsik; Park, Min-Chul

    2013-05-01

    An autostereoscopic multi-view 3D display system has a narrower degree of freedom in the observational directions, such as the horizontal direction and the direction perpendicular to the display plane, than the glasses-on type. In this paper, we propose a method that expands the width of the formed viewing zone in the depth direction while keeping the number of views in the horizontal direction, by using a triple segmented-slanted parallax barrier (TS-SPB) in a glasses-off 3D display. The validity of the proposal is verified by optical simulation based on an environment similar to an actual case. As a benefit, the maximum number of views displayed in the horizontal direction becomes 2n, and the width of the viewing zone in the depth direction is increased up to 3.36 times compared with the existing one-layered parallax barrier system.

  13. Magmatic Systems in 3-D

    NASA Astrophysics Data System (ADS)

    Kent, G. M.; Harding, A. J.; Babcock, J. M.; Orcutt, J. A.; Bazin, S.; Singh, S.; Detrick, R. S.; Canales, J. P.; Carbotte, S. M.; Diebold, J.

    2002-12-01

    , even if one data object lies behind another. Stereoscopic viewing is another powerful tool to investigate 3-D relationships between objects. This form of immersion is constructed through viewing two separate images that are interleaved--typically 48 frames per second, per eye--and synced through an emitter and a set of specialized polarizing eyeglasses. The polarizing lenses flicker at an equivalent rate, blanking the eye for which a particular image was not drawn, producing the desired stereo effect. Volumetric visualization of the ARAD 3-D seismic dataset will be presented. The effective use of transparency reveals detailed structure of the melt-lens beneath the 9°03'N overlapping spreading center (OSC) along the East Pacific Rise, including melt-filled fractures within the propagating rift-tip. In addition, range-gated images of seismic reflectivity will be co-registered to investigate the physical properties (melt versus mush) of the magma chamber at this locale. Surface visualization of a dense, 2-D grid of MCS seismic data beneath Axial seamount (Juan de Fuca Ridge) will also be highlighted, including relationships between the summit caldera and rift zones, and the underlying (and humongous) magma chamber. A selection of Quicktime movies will be shown. Popcorn will be served, really!

  14. Pathways for Learning from 3D Technology

    PubMed Central

    Carrier, L. Mark; Rab, Saira S.; Rosen, Larry D.; Vasquez, Ludivina; Cheever, Nancy A.

    2016-01-01

    The purpose of this study was to find out if 3D stereoscopic presentation of information in a movie format changes a viewer's experience of the movie content. Four possible pathways from 3D presentation to memory and learning were considered: a direct connection based on cognitive neuroscience research; a connection through "immersion" in that 3D presentations could provide additional sensorial cues (e.g., depth cues) that lead to a higher sense of being surrounded by the stimulus; a connection through general interest such that 3D presentation increases a viewer’s interest that leads to greater attention paid to the stimulus (e.g., "involvement"); and a connection through discomfort, with the 3D goggles causing discomfort that interferes with involvement and thus with memory. The memories of 396 participants who viewed two-dimensional (2D) or 3D movies at movie theaters in Southern California were tested. Within three days of viewing a movie, participants filled out an online anonymous questionnaire that queried them about their movie content memories, subjective movie-going experiences (including emotional reactions and "presence") and demographic backgrounds. The responses to the questionnaire were subjected to path analyses in which several different links between 3D presentation to memory (and other variables) were explored. The results showed there were no effects of 3D presentation, either directly or indirectly, upon memory. However, the largest effects of 3D presentation were on emotions and immersion, with 3D presentation leading to reduced positive emotions, increased negative emotions and lowered immersion, compared to 2D presentations. PMID:28078331

  15. The psychology of the 3D experience

    NASA Astrophysics Data System (ADS)

    Janicke, Sophie H.; Ellis, Andrew

    2013-03-01

    With 3D televisions expected to reach 50% home saturation as early as 2016, understanding the psychological mechanisms underlying the user response to 3D technology is critical for content providers, educators and academics. Unfortunately, research examining the effects of 3D technology has not kept pace with the technology's rapid adoption, resulting in large-scale use of a technology about which very little is actually known. Recognizing this need for new research, we conducted a series of studies measuring and comparing many of the variables and processes underlying both 2D and 3D media experiences. In our first study, we found narratives within primetime dramas had the power to shift viewer attitudes in both 2D and 3D settings. However, we found no difference in persuasive power between 2D and 3D content. We contend this lack of effect was the result of poor conversion quality and the unique demands of 3D production. In our second study, we found 3D technology significantly increased enjoyment when viewing sports content, yet offered no added enjoyment when viewing a movie trailer. The enhanced enjoyment of the sports content was shown to be the result of heightened emotional arousal and attention in the 3D condition. We believe the lack of effect found for the movie trailer may be genre-related. In our final study, we found 3D technology significantly enhanced enjoyment of two video games from different genres. The added enjoyment was found to be the result of an increased sense of presence.

  16. Multi-user 3D display using a head tracker and RGB laser illumination source

    NASA Astrophysics Data System (ADS)

    Surman, Phil; Sexton, Ian; Hopf, Klaus; Bates, Richard; Lee, Wing Kai; Buckley, Edward

    2007-05-01

    A glasses-free (auto-stereoscopic) 3D display that will serve several viewers who have freedom of movement over a large viewing region is described. This operates on the principle of employing head position tracking to provide regions referred to as exit pupils that follow the positions of the viewers' eyes in order for appropriate left and right images to be seen. A non-intrusive multi-user head tracker controls the light sources of a specially designed backlight that illuminates a direct-view LCD.

  17. 3D Immersive Visualization with Astrophysical Data

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2017-01-01

    We present the refinement of a new 3D immersion technique for astrophysical data visualization. Methodology to create 360 degree spherical panoramas is reviewed. The 3D software package Blender coupled with Python and the Google Spatial Media module are used together to create the final data products. Data can be viewed interactively with a mobile phone or tablet or in a web browser. The technique can apply to different kinds of astronomical data including 3D stellar and galaxy catalogs, images, and planetary maps.

  18. Auto convergence for stereoscopic 3D cameras

    NASA Astrophysics Data System (ADS)

    Zhang, Buyue; Kothandaraman, Sreenivas; Batur, Aziz Umit

    2012-03-01

    Viewing comfort is an important concern for 3-D capable consumer electronics such as 3-D cameras and TVs. Consumer generated content is typically viewed at a close distance which makes the vergence-accommodation conflict particularly pronounced, causing discomfort and eye fatigue. In this paper, we present a Stereo Auto Convergence (SAC) algorithm for consumer 3-D cameras that reduces the vergence-accommodation conflict on the 3-D display by adjusting the depth of the scene automatically. Our algorithm processes stereo video in real time and shifts each stereo frame horizontally by an appropriate amount to converge on the chosen object in that frame. The algorithm starts by estimating disparities between the left and right image pairs using correlations of the vertical projections of the image data. The estimated disparities are then analyzed by the algorithm to select a point of convergence. The current and target disparities of the chosen convergence point determine how much horizontal shift is needed. A disparity safety check is then performed to determine whether or not the maximum and minimum disparity limits would be exceeded after auto convergence. If the limits would be exceeded, further adjustments are made to satisfy the safety limits. Finally, desired convergence is achieved by shifting the left and the right frames accordingly. Our algorithm runs in real time at 30 fps on a TI OMAP4 processor. It is tested using an OMAP4 embedded prototype stereo 3-D camera. It significantly improves 3-D viewing comfort.
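
    A minimal sketch of the disparity-estimation idea described above — correlating vertical projections of the left and right frames to find a single convergence shift — is given below in Python. It is an illustrative reconstruction under stated assumptions, not the TI implementation; function and variable names are hypothetical.

```python
import numpy as np

def estimate_global_disparity(left, right, max_shift=64):
    """Estimate one horizontal disparity between two grayscale stereo frames.

    Follows the idea in the abstract above: collapse each frame to a 1-D
    vertical projection (column sums), then search for the shift that best
    correlates the two projections.  A real implementation adds per-region
    analysis and the min/max disparity safety checks described above.
    """
    pl = left.astype(np.float64).sum(axis=0)
    pr = right.astype(np.float64).sum(axis=0)
    pl -= pl.mean()
    pr -= pr.mean()
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            a, b = pl[s:], pr[:len(pr) - s]
        else:
            a, b = pl[:s], pr[-s:]
        score = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        if score > best_score:
            best_score, best_shift = score, s
    return best_shift

# usage (hypothetical frames): shift the two frames towards each other by
# half the returned disparity each to converge on the dominant object.
```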

  19. The influence of autostereoscopic 3D displays on subsequent task performance

    NASA Astrophysics Data System (ADS)

    Barkowsky, Marcus; Le Callet, Patrick

    2010-02-01

    Viewing 3D content on an autostereoscopic display is an exciting experience. This is partly due to the fact that the 3D effect is seen without glasses. Nevertheless, it is an unnatural condition for the eyes as the depth effect is created by the disparity of the left and the right view on a flat screen instead of having a real object at the corresponding location. Thus, it may be more tiring to watch 3D than 2D. This question is investigated in this contribution by a subjective experiment. A search task experiment is conducted and the behavior of the participants is recorded with an eyetracker. Several indicators both for low level perception as well as for the task performance itself are evaluated. In addition two optometric tests are performed. A verification session with conventional 2D viewing is included. The results are discussed in detail and it can be concluded that the 3D viewing does not have a negative impact on the task performance used in the experiment.

  20. Innovations in 3D printing: a 3D overview from optics to organs.

    PubMed

    Schubert, Carl; van Langeveld, Mark C; Donoso, Larry A

    2014-02-01

    3D printing is a method of manufacturing in which materials, such as plastic or metal, are deposited onto one another in layers to produce a three dimensional object, such as a pair of eye glasses or other 3D objects. This process contrasts with traditional ink-based printers which produce a two dimensional object (ink on paper). To date, 3D printing has primarily been used in engineering to create engineering prototypes. However, recent advances in printing materials have now enabled 3D printers to make objects that are comparable with traditionally manufactured items. In contrast with conventional printers, 3D printing has the potential to enable mass customisation of goods on a large scale and has relevance in medicine including ophthalmology. 3D printing has already been proved viable in several medical applications including the manufacture of eyeglasses, custom prosthetic devices and dental implants. In this review, we discuss the potential for 3D printing to revolutionise manufacturing in the same way as the printing press revolutionised conventional printing. The applications and limitations of 3D printing are discussed; the production process is demonstrated by producing a set of eyeglass frames from 3D blueprints.

  1. Sand Dunes of Nili Patera in 3-D

    NASA Technical Reports Server (NTRS)

    2001-01-01

    The most exciting new aspect of the Mars Global Surveyor (MGS) Extended Mission is the opportunity to turn the spacecraft and point the Mars Orbiter Camera (MOC) at specific features of interest. Opportunities to point the spacecraft come about ten times a week. Throughout the Primary Mission (March 1999 - January 2001), nearly all MGS operations were conducted with the spacecraft pointing 'nadir'--that is, straight down. A search for the missing Mars Polar Lander in late 1999 and early 2000 demonstrated that pointing the spacecraft could allow opportunities for MOC to see things that simply had not entered its field of view during typical nadir-looking operations, and to target areas previously seen in a nadir view so that stereo ('3-D') pictures could be derived.

    One of the very first places photographed by the MOC at the start of the Mapping Mission in March 1999 was a field of dunes located in Nili Patera, a volcanic depression in central Syrtis Major. A portion of this dune field was shown in a media release on March 11, 1999, 'Sand Dunes of Nili Patera, Syrtis Major'. Subsequently, the image was archived with the NASA Planetary Data System, as shown in the Malin Space Science Systems MOC Gallery. On April 24, 2001, an opportunity arose in which the MGS could be pointed off-nadir to take a new picture of the same dune field. By combining the nadir view from March 1999 and the off-nadir view from April 2001, a stereoscopic image was created. The anaglyph shown here must be viewed with red (left-eye) and blue (right-eye) '3-D' glasses. The dunes and the local topography of the volcanic crater's floor stand out in sharp relief. The images, taken more than one Mars year apart, show no change in the shape or location of the dunes--that is, they do not seem to have moved at all since March 1999.

  2. 3D Printed Microscope for Mobile Devices that Cost Pennies

    ScienceCinema

    Erikson, Rebecca; Baird, Cheryl; Hutchinson, Janine

    2016-07-12

    Scientists at PNNL have designed a 3D-printable microscope for mobile devices using pennies worth of plastic and glass materials. The microscope has a wide range of uses, from education to in-the-field science.

  3. 3D Printed Microscope for Mobile Devices that Cost Pennies

    SciTech Connect

    Erikson, Rebecca; Baird, Cheryl; Hutchinson, Janine

    2014-09-15

    Scientists at PNNL have designed a 3D-printable microscope for mobile devices using pennies worth of plastic and glass materials. The microscope has a wide range of uses, from education to in-the-field science.

  4. 3D and beyond

    NASA Astrophysics Data System (ADS)

    Fung, Y. C.

    1995-05-01

    This conference on physiology and function covers a wide range of subjects, including the vasculature and blood flow, the flow of gas, water, and blood in the lung, the neurological structure and function, the modeling, and the motion and mechanics of organs. Many technologies are discussed. I believe that the list would include a robotic photographer, to hold the optical equipment in a precisely controlled way to obtain the images for the user. Why are 3D images needed? They are to achieve certain objectives through measurements of some objects. For example, in order to improve performance in sports or beauty of a person, we measure the form, dimensions, appearance, and movements.

  5. 3D Audio System

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Ames Research Center research into virtual reality led to the development of the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion. The perceived location of the sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research and virtual reality development.

  6. Photogrammetric measurement of 3D freeform millimetre-sized objects with micro features: an experimental validation of the close-range camera calibration model for narrow angles of view

    NASA Astrophysics Data System (ADS)

    Percoco, Gianluca; Sánchez Salmerón, Antonio J.

    2015-09-01

    The measurement of millimetre and micro-scale features is performed by high-cost systems based on technologies with narrow working ranges to accurately control the position of the sensors. Photogrammetry would lower the costs of 3D inspection of micro-features and would be applicable to the inspection of non-removable micro parts of large objects too. Unfortunately, the behaviour of photogrammetry is not known when photogrammetry is applied to micro-features. In this paper, the authors address these issues towards the application of digital close-range photogrammetry (DCRP) to the micro-scale, taking into account that in literature there are research papers stating that an angle of view (AOV) around 10° is the lower limit to the application of the traditional pinhole close-range calibration model (CRCM), which is the basis of DCRP. At first a general calibration procedure is introduced, with the aid of an open-source software library, to calibrate narrow AOV cameras with the CRCM. Subsequently the procedure is validated using a reflex camera with a 60 mm macro lens, equipped with extension tubes (20 and 32 mm) achieving magnification of up to 2 times approximately, to verify literature findings with experimental photogrammetric 3D measurements of millimetre-sized objects with micro-features. The limitation experienced by the laser printing technology, used to produce the bi-dimensional pattern on common paper, has been overcome using an accurate pattern manufactured with a photolithographic process. The results of the experimental activity prove that the CRCM is valid for AOVs down to 3.4° and that DCRP results are comparable with the results of existing and more expensive commercial techniques.
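
    For readers who want to reproduce a basic close-range pinhole calibration of the kind discussed above, OpenCV's standard routines can be used. The sketch below assumes a 9x6 chessboard target and an image folder name, both hypothetical; the paper itself used a photolithographically manufactured pattern and an open-source library of its own choosing.

```python
import glob
import cv2
import numpy as np

# Calibrate a narrow-AOV camera from several views of a planar target.
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)  # target-plane coordinates

obj_points, img_points, image_size = [], [], None
for fname in glob.glob("calib_images/*.png"):
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        image_size = gray.shape[::-1]                 # (width, height)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("reprojection RMS (px):", rms)
print("camera matrix:\n", K)
```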

  7. 3D Surgical Simulation

    PubMed Central

    Cevidanes, Lucia; Tucker, Scott; Styner, Martin; Kim, Hyungmin; Chapuis, Jonas; Reyes, Mauricio; Proffit, William; Turvey, Timothy; Jaskolka, Michael

    2009-01-01

    This paper discusses the development of methods for computer-aided jaw surgery. Computer-aided jaw surgery allows us to incorporate the high level of precision necessary for transferring virtual plans into the operating room. We also present a complete computer-aided surgery (CAS) system developed in close collaboration with surgeons. Surgery planning and simulation include construction of 3D surface models from Cone-beam CT (CBCT), dynamic cephalometry, semi-automatic mirroring, interactive cutting of bone and bony segment repositioning. A virtual setup can be used to manufacture positioning splints for intra-operative guidance. The system provides further intra-operative assistance with the help of a computer display showing jaw positions and 3D positioning guides updated in real-time during the surgical procedure. The CAS system aids in dealing with complex cases with benefits for the patient, with surgical practice, and for orthodontic finishing. Advanced software tools for diagnosis and treatment planning allow preparation of detailed operative plans, osteotomy repositioning, bone reconstructions, surgical resident training and assessing the difficulties of the surgical procedures prior to the surgery. CAS has the potential to make the elaboration of the surgical plan a more flexible process, increase the level of detail and accuracy of the plan, yield higher operative precision and control, and enhance documentation of cases. Supported by NIDCR DE017727, and DE018962 PMID:20816308

  8. 3D technology of Sony Bloggie has no advantage in decision-making of tennis serve direction: A randomized placebo-controlled study.

    PubMed

    Liu, Sicong; Ritchie, Jason; Sáenz-Moncaleano, Camilo; Ward, Savanna K; Paulsen, Cody; Klein, Tyler; Gutierrez, Oscar; Tenenbaum, Gershon

    2017-06-01

    This study aimed at exploring whether 3D technology enhances tennis decision-making under the conceptual framework of the human performance model. A 3 (skill-level: varsity, club, recreational) × 3 (experimental condition: placebo, weak 3D [W3D], strong 3D [S3D]) between-participant design was used. Allocated to experimental conditions by a skill-level stratified randomization, 105 tennis players judged tennis serve direction from video scenarios and rated their perceptions of enjoyment, flow, and presence during task performance. Results showed that varsity players made more accurate decisions than less skilled ones. Additionally, applying 3D technology to typical video displays reduced tennis players' decision-making accuracy, although wearing the 3D glasses led to a placebo effect that shortened the decision-making reaction time. The unexpected negative effect of 3D technology on decision-making was possibly due to participants being more familiar with W3D than with S3D, and relatedly, a suboptimal task-technology match. Future directions for advancing this area of research are offered. Highlights: 3D technology adds binocular depth cues to traditional video displays, and thus results in the attainment of a more authentic visual representation. This process enhances task fidelity in researching perceptual-cognitive skills in sports. The paper clarified both conceptual and methodological difficulties in testing 3D technology in sports settings. Namely, the nomenclature of video footage (with/without 3D technology) and the possible placebo effect (arising from wearing glasses of 3D technology) merit researchers' attention. Participants varying in level of domain-specific expertise were randomized into viewing conditions using a placebo-controlled design. Measurement consisted of both participants' subjective experience (i.e., presence, flow, and enjoyment) and objective performance (i.e., accuracy and reaction time) in a decision-making task. Findings revealed that

  9. ShowMe3D

    SciTech Connect

    Sinclair, Michael B

    2012-01-05

    ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral images obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image. The user can select any pixel or region from the displayed image and display the fluorescence spectrum associated with that pixel or region. The user can define up to 3 spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as intensity average and variance from selected regions.
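
    The core interactions the record above describes (band selection, region spectra, region statistics) can be mimicked on a plain numpy array; the sketch below is illustrative only, with made-up dimensions, and is not the actual program's code.

```python
import numpy as np

# Hypothetical hyperspectral stack ordered as (bands, rows, cols).
stack = np.random.rand(512, 256, 256)

def band_image(stack, band):
    """Return the 2-D image for one spectral band (the 'slider' view)."""
    return stack[band]

def region_spectrum(stack, row_slice, col_slice):
    """Mean fluorescence spectrum over a rectangular region of interest."""
    return stack[:, row_slice, col_slice].mean(axis=(1, 2))

def region_stats(stack, band, row_slice, col_slice):
    """Intensity average and variance of a region in a single band."""
    region = stack[band, row_slice, col_slice]
    return region.mean(), region.var()

spectrum = region_spectrum(stack, slice(100, 120), slice(40, 60))
```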

  10. 3D field harmonics

    SciTech Connect

    Caspi, S.; Helm, M.; Laslett, L.J.

    1991-03-30

    We have developed a harmonic representation for the three dimensional field components within the windings of accelerator magnets. The form by which the field is presented is suitable for interfacing with other codes that make use of the 3D field components (particle tracking and stability). The field components can be calculated with high precision and reduced CPU time at any location (r,{theta},z) inside the magnet bore. The same conductor geometry which is used to simulate line currents is also used in CAD with modifications more readily available. It is our hope that the format used here for magnetic fields can be used not only as a means of delivering fields but also as a way by which beam dynamics can suggest corrections to the conductor geometry. 5 refs., 70 figs.
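
    The paper's 3D (r, θ, z) expansion is not reproduced in the record above; as a reminder of the general idea, the sketch below evaluates the familiar 2D transverse multipole form B_y + iB_x = Σ C_n (x + iy)^(n−1), the standard starting point for accelerator-magnet field harmonics. It is illustrative only and is not the authors' 3D representation.

```python
def transverse_field(x, y, coeffs):
    """Evaluate B_y + i*B_x = sum_n C_n * (x + i*y)**(n-1).

    coeffs = [C_1, C_2, ...] are the dipole, quadrupole, ... coefficients.
    This is the standard 2-D multipole expansion for accelerator magnets;
    the paper above extends the idea to full 3-D (r, theta, z) components
    inside the windings, which is not reproduced here.
    """
    z = x + 1j * y
    field = sum(c * z ** n for n, c in enumerate(coeffs))  # n = 0 term is the dipole
    return field.imag, field.real                          # (B_x, B_y) in tesla

# example: a pure quadrupole with gradient 10 T/m sampled at x = 1 cm
bx, by = transverse_field(0.01, 0.0, [0.0, 10.0])
print(bx, by)   # -> 0.0 0.1
```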

  11. Focus-distance-controlled 3D TV

    NASA Astrophysics Data System (ADS)

    Yanagisawa, Nobuaki; Kim, Kyung-tae; Son, Jung-Young; Murata, Tatsuya; Orima, Takatoshi

    1996-09-01

    There is a phenomenon that a 3D image appears in proportion to a focus distance when something is watched through a convex lens. An adjustable focus lens which can control the focus distance of the convex lens is contrived and applied to 3D TV. We can watch 3D TV without eyeglasses. The 3D TV image meets the NTSC standard. A parallax data and a focus data about the image can be accommodated at the same time. A continuous image method realizes much wider views. An anti 3D image effect can be avoided by using this method. At present, an analysis of the prototype lens and an experiment are being carried out. As a result, a phantom effect and a viewing area can be improved. It is possible to watch the 3D TV at any distance. Distance data are triangulated by two cameras. A plan of an AVI prototype using ten thousand lenses is discussed. This method is compared with four major conventional methods. As a result, it is revealed that this method can make the efficient use of Integral Photography and Varifocal type method. In the case of Integral Photography, a miniaturization of this system is possible. But it is difficult to get actual focus. In the case of varifocal type method, there is no problem with focusing, but the miniaturization is impossible. The theory investigated in this paper makes it possible to solve these problems.

  12. Focus-distance-controlled 3D TV

    NASA Astrophysics Data System (ADS)

    Yanagisawa, Nobuaki; Kim, Kyung-tae; Son, Jung-Young; Murata, Tatsuya; Orima, Takatoshi

    1997-05-01

    There is a phenomenon that a 3D image appears in proportion to a focus distance when something is watched through a convex lens. An adjustable focus lens which can control the focus distance of the convex lens is contrived and applied to 3D TV. We can watch 3D TV without eyeglasses. The 3D TV image meets the NTSC standard. A parallax data and a focus data about the image can be accommodated at the same time. A continuous image method realizes much wider views. An anti 3D image effect can be avoided by using this method. At present, an analysis of the prototype lens and an experiment are being carried out. As a result, a phantom effect and a viewing area can be improved. It is possible to watch the 3D TV at any distance. Distance data are triangulated by two cameras. A plan of an AVI prototype using ten thousand lenses is discussed. This method is compared with four major conventional methods. As a result, it is revealed that this method can make the efficient use of integral photography and varifocal type method. In the case of integral photography, a miniaturization of this system is possible. But it is difficult to get actual focus. In the case of varifocal type method, there is no problem with focusing, but the miniaturization is impossible. The theory investigated in this paper makes it possible to solve these problems.
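
    For a rectified, parallel two-camera arrangement, the distance triangulation mentioned in both records reduces to Z = f·B/d (focal length times baseline over disparity); the numbers in the sketch below are made up for illustration.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Distance Z = f * B / d for a rectified, parallel two-camera pair.

    focal_px     -- focal length expressed in pixels
    baseline_m   -- horizontal separation of the two cameras in metres
    disparity_px -- horizontal offset of the same point between the two images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite distance")
    return focal_px * baseline_m / disparity_px

# example: f = 1200 px, baseline = 6.5 cm, disparity = 26 px  ->  3.0 m
print(depth_from_disparity(1200, 0.065, 26))
```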

  13. Development of a 3D pixel module for an ultralarge screen 3D display

    NASA Astrophysics Data System (ADS)

    Hashiba, Toshihiko; Takaki, Yasuhiro

    2004-10-01

    A large screen 2D display used at stadiums and theaters consists of a number of pixel modules. The pixel module usually consists of 8x8 or 16x16 LED pixels. In this study we develop a 3D pixel module in order to construct a large screen 3D display which is glass-free and has the motion parallax. This configuration for a large screen 3D display dramatically reduces the complexity of wiring 3D pixels. The 3D pixel module consists of several LCD panels, several cylindrical lenses, and one small PC. The LCD panels are slanted in order to differentiate the distances from same color pixels to the axis of the cylindrical lens so that the rays from the same color pixels are refracted into the different horizontal directions by the cylindrical lens. We constructed a prototype 3D pixel module, which consists of 8x4 3D pixels. The prototype module is designed to display 300 different patterns into different horizontal directions with the horizontal display angle pitch of 0.099 degree. The LCD panels are controlled by a small PC and the 3D image data is transmitted through the Gigabit Ethernet.

  14. Aging kinetics of levoglucosan orientational glass as a rate dispersion process and consequences for the heterogeneous dynamics view

    NASA Astrophysics Data System (ADS)

    Righetti, Maria Cristina; Tombari, Elpidio; Johari, G. P.

    2016-08-01

    Aging kinetics of a glass is currently modeled in terms of slowing of its α-relaxation dynamics, whose features are interpreted in terms of dynamic heterogeneity, i.e., formation and decay of spatially and temporally distinct nm-size regions. To test the merits of this view, we studied the calorimetric effects of aging an orientational glass of levoglucosan crystal in which such regions would not form in the same way as they form in liquids, and persist in structural glasses, because there is no liquid-like molecular diffusion in the crystal. By measuring the heat capacity, Cp, we determined the change in the enthalpy, H, and the entropy, S, during two aging-protocols: (a) keeping the samples isothermally at temperature, Ta, and measuring the changes after different aging times, ta, and (b) keeping the samples at different Tas and measuring the changes after the same ta. A model-free analysis of the data shows that as ta is increased (procedure (a)), H and S decrease according to a dispersive rate kinetics, and as Ta is increased (procedure (b)), H and S first increase, reach a local maximum at a certain Ta, and then decrease. Even though there is no translational diffusion to produce (liquid-like) free volume, and no translational-rotational decoupling, the aging features are indistinguishable from those of structural glasses. We also find that the Kohlrausch parameter, originally fitted to the glass-aging data, decreases with decrease in Ta, which is incompatible with the current use of the aging data for estimating the α-relaxation time. We argue that the vibrational state of a glass is naturally incompatible with its configurational state, and both change on aging until they are compatible, in the equilibrium liquid. So, dipolar fluctuations seen as the α-relaxation would not be the same motions that cause aging. We suggest that aging kinetics is intrinsically dispersive with its own characteristic rate constant and it does not yield the α-relaxation rate
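
    The Kohlrausch parameter mentioned above refers to the stretched-exponential relaxation function φ(t) = exp[−(t/τ)^β]; the sketch below fits it to synthetic enthalpy-recovery data with scipy, purely to illustrate the functional form (the values are not the paper's data).

```python
import numpy as np
from scipy.optimize import curve_fit

def kohlrausch(t, tau, beta):
    """Stretched-exponential relaxation: phi(t) = exp(-(t / tau)**beta)."""
    return np.exp(-(t / tau) ** beta)

# Synthetic normalized enthalpy-recovery curve during isothermal aging
# (illustrative numbers only, not the paper's measurements).
t_a = np.logspace(1, 5, 40)                                    # aging times in seconds
phi_obs = kohlrausch(t_a, 5e3, 0.45) + 0.01 * np.random.randn(t_a.size)

popt, _ = curve_fit(kohlrausch, t_a, phi_obs, p0=(1e3, 0.5))
tau_fit, beta_fit = popt
print(f"tau = {tau_fit:.3g} s, beta = {beta_fit:.3g}")
```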

  15. 360-degree panorama in 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    This 360-degree panorama was taken in stereo by the deployed Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses (red left lens, blue right lens) are necessary to help identify surface detail. All three petals, the perimeter of the deflated airbags, deployed rover Sojourner, forward and backward ramps and prominent surface features are visible, including the double Twin Peaks at the horizon. Sojourner would later investigate the rock Barnacle Bill just to its left in this image, and the larger rock Yogi at its forward right.

    The IMP is a stereo imaging system with color capability provided by 24 selectable filters -- twelve filters per 'eye.' It stands 1.8 meters above the Martian surface, and has a resolution of two millimeters at a range of two meters. Stereoscopic imaging brings exceptional clarity and depth to many of the features in this image, particularly the ridge beyond the far left petal and the large rock Yogi. The curvature and misalignment of several sections are due to image parallax.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is a division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.


  16. The rendering context for stereoscopic 3D web

    NASA Astrophysics Data System (ADS)

    Chen, Qinshui; Wang, Wenmin; Wang, Ronggang

    2014-03-01

    3D technologies on the Web have been studied for many years, but they are basically monoscopic 3D. With the stereoscopic technology gradually maturing, we are researching how to integrate binocular 3D technology into the Web, creating a stereoscopic 3D browser that will provide users with a brand new experience of human-computer interaction. In this paper, we propose a novel approach to apply stereoscopy technologies to the CSS3 3D Transforms. Under our model, each element can create or participate in a stereoscopic 3D rendering context, in which 3D Transforms such as scaling, translation and rotation, can be applied and be perceived in a truly 3D space. We first discuss the underlying principles of stereoscopy. After that we discuss how these principles can be applied to the Web. A stereoscopic 3D browser with backward compatibility is also created for demonstration purposes. We take advantage of the open-source WebKit project, integrating the 3D display ability into the rendering engine of the web browser. For each 3D web page, our 3D browser will create two slightly different images, each representing the left-eye view and right-eye view, both to be combined on the 3D display to generate the illusion of depth. And as the result turns out, elements can be manipulated in a truly 3D space.

  17. Modeling cellular processes in 3D.

    PubMed

    Mogilner, Alex; Odde, David

    2011-12-01

    Recent advances in photonic imaging and fluorescent protein technology offer unprecedented views of molecular space-time dynamics in living cells. At the same time, advances in computing hardware and software enable modeling of ever more complex systems, from global climate to cell division. As modeling and experiment become more closely integrated we must address the issue of modeling cellular processes in 3D. Here, we highlight recent advances related to 3D modeling in cell biology. While some processes require full 3D analysis, we suggest that others are more naturally described in 2D or 1D. Keeping the dimensionality as low as possible reduces computational time and makes models more intuitively comprehensible; however, the ability to test full 3D models will build greater confidence in models generally and remains an important emerging area of cell biological modeling.

  18. Eyes on the Earth 3D

    NASA Technical Reports Server (NTRS)

    Kulikov, anton I.; Doronila, Paul R.; Nguyen, Viet T.; Jackson, Randal K.; Greene, William M.; Hussey, Kevin J.; Garcia, Christopher M.; Lopez, Christian A.

    2013-01-01

    Eyes on the Earth 3D software gives scientists, and the general public, a realtime, 3D interactive means of accurately viewing the real-time locations, speed, and values of recently collected data from several of NASA's Earth Observing Satellites using a standard Web browser (climate.nasa.gov/eyes). Anyone with Web access can use this software to see where the NASA fleet of these satellites is now, or where they will be up to a year in the future. The software also displays several Earth Science Data sets that have been collected on a daily basis. This application uses a third-party, 3D, realtime, interactive game engine called Unity 3D to visualize the satellites and is accessible from a Web browser.

  19. Nonlaser-based 3D surface imaging

    SciTech Connect

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J.

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of objects of a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that make alternative methods sometimes more attractive are: (1) real-time data capturing, (2) eye-safety, (3) portability, and (4) work distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.
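
    A minimal stereo-vision example of the kind of non-laser depth recovery described above can be put together with OpenCV's block matcher; the file names below are placeholders and the parameters are illustrative, not the authors' system.

```python
import cv2

# Rectified left/right views from two CCD cameras (placeholder file names).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching: numDisparities must be a multiple of 16, blockSize odd.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype("float32") / 16.0  # fixed point -> pixels

# With a calibrated rig (focal length f in pixels, baseline B in metres),
# depth at each pixel is f * B / disparity wherever disparity > 0.
vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", vis)
```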

  20. 3-D movies using microprocessor-controlled optoelectronic spectacles

    NASA Astrophysics Data System (ADS)

    Jacobs, Ken; Karpf, Ron

    2012-02-01

    Despite rapid advances in technology, 3-D movies are impractical for general movie viewing. A new approach that opens all content for casual 3-D viewing is needed. 3Deeps--advanced microprocessor controlled optoelectronic spectacles--provides such a new approach to 3-D. 3Deeps works on a different principle than other methods for 3-D. 3-D movies typically use the asymmetry of dual images to produce stereopsis, necessitating costly dual-image content, complex formatting and transmission standards, and viewing via a corresponding selection device. In contrast, all 3Deeps requires to view movies in realistic depth is an illumination asymmetry--a controlled difference in optical density between the lenses. When a 2-D movie has been projected for viewing, 3Deeps converts every scene containing lateral motion into realistic 3-D. Put on 3Deeps spectacles for 3-D viewing, or remove them for viewing in 2-D. 3Deeps works for all analogue and digital 2-D content, by any mode of transmission, and for projection screens, digital or analogue monitors. An example using aerial photography is presented. A movie consisting of successive monoscopic aerial photographs appears in realistic 3-D when viewed through 3Deeps spectacles.

  1. Intraoral 3D scanner

    NASA Astrophysics Data System (ADS)

    Kühmstedt, Peter; Bräuer-Burchardt, Christian; Munkelt, Christoph; Heinze, Matthias; Palme, Martin; Schmidt, Ingo; Hintersehr, Josef; Notni, Gunther

    2007-09-01

    Here a new set-up of a 3D-scanning system for CAD/CAM in dental industry is proposed. The system is designed for direct scanning of the dental preparations within the mouth. The measuring process is based on phase correlation technique in combination with fast fringe projection in a stereo arrangement. The novelty in the approach is characterized by the following features: A phase correlation between the phase values of the images of two cameras is used for the co-ordinate calculation. This works contrary to the usage of only phase values (phasogrammetry) or classical triangulation (phase values and camera image co-ordinate values) for the determination of the co-ordinates. The main advantage of the method is that the absolute value of the phase at each point does not directly determine the coordinate. Thus errors in the determination of the co-ordinates are prevented. Furthermore, using the epipolar geometry of the stereo-like arrangement the phase unwrapping problem of fringe analysis can be solved. The endoscope like measurement system contains one projection and two camera channels for illumination and observation of the object, respectively. The new system has a measurement field of nearly 25mm × 15mm. The user can measure two or three teeth at one time. So the system can be used for scanning anything from a single tooth up to bridge preparations. In the paper the first realization of the intraoral scanner is described.

  2. 'Diamond' in 3-D

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D, microscopic imager mosaic of a target area on a rock called 'Diamond Jenness' was taken after NASA's Mars Exploration Rover Opportunity ground into the surface with its rock abrasion tool for a second time.

    Opportunity has bored nearly a dozen holes into the inner walls of 'Endurance Crater.' On sols 177 and 178 (July 23 and July 24, 2004), the rover worked double-duty on Diamond Jenness. Surface debris and the bumpy shape of the rock resulted in a shallow and irregular hole, only about 2 millimeters (0.08 inch) deep. The final depth was not enough to remove all the bumps and leave a neat hole with a smooth floor. This extremely shallow depression was then examined by the rover's alpha particle X-ray spectrometer.

    On Sol 178, Opportunity's 'robotic rodent' dined on Diamond Jenness once again, grinding almost an additional 5 millimeters (about 0.2 inch). The rover then applied its Moessbauer spectrometer to the deepened hole. This double dose of Diamond Jenness enabled the science team to examine the rock at varying layers. Results from those grindings are currently being analyzed.

    The image mosaic is about 6 centimeters (2.4 inches) across.

  3. Development of 3D video and 3D data services for T-DMB

    NASA Astrophysics Data System (ADS)

    Yun, Kugjin; Lee, Hyun; Hur, Namho; Kim, Jinwoong

    2008-02-01

    In this paper, we present motivation, system concept, and implementation details of stereoscopic 3D visual services on T-DMB. We have developed two types of 3D visual service: one is '3D video service', which provides 3D depth feeling for a video program by sending left and right view video streams, and the other is '3D data service', which provides presentation of 3D objects overlaid on top of a 2D video program. We have developed several highly efficient and sophisticated transmission schemes for the delivery of 3D visual data in order to meet the system requirements such as (1) minimization of bitrate overhead to comply with the strict constraint of T-DMB channel bandwidth; (2) backward and forward compatibility with existing T-DMB; (3) maximization of the eye-catching effect of the 3D visual representation while reducing eye fatigue. We found that, in contrast to the conventional way of providing a stereo version of a program as a whole, the proposed scheme can lead to a variety of efficient and effective 3D visual services which can be adapted to many business models.

  4. 3-D transient analysis of pebble-bed HTGR by TORT-TD/ATTICA3D

    SciTech Connect

    Seubert, A.; Sureda, A.; Lapins, J.; Buck, M.; Bader, J.; Laurien, E.

    2012-07-01

    As most of the acceptance criteria are local core parameters, application of transient 3-D fine mesh neutron transport and thermal hydraulics coupled codes is mandatory for best estimate evaluations of safety margins. This also applies to high-temperature gas cooled reactors (HTGR). Application of 3-D fine-mesh transient transport codes using few energy groups coupled with 3-D thermal hydraulics codes becomes feasible in view of increasing computing power. This paper describes the discrete ordinates based coupled code system TORT-TD/ATTICA3D that has recently been extended by a fine-mesh diffusion solver. Based on transient analyses for the PBMR-400 design, the transport/diffusion capabilities are demonstrated and 3-D local flux and power redistribution effects during a partial control rod withdrawal are shown. (authors)

  5. 3D Display Using Conjugated Multiband Bandpass Filters

    NASA Technical Reports Server (NTRS)

    Bae, Youngsam; White, Victor E.; Shcheglov, Kirill

    2012-01-01

    Stereoscopic display techniques are based on the principle of displaying two views, with a slightly different perspective, in such a way that the left view is seen only by the left eye, and the right view only by the right eye. However, one of the major challenges in optical devices is crosstalk between the two channels. Crosstalk is due to the optical devices not completely blocking the wrong-side image, so the left eye sees a little bit of the right image and the right eye sees a little bit of the left image. This results in eyestrain and headaches. A pair of interference filters worn as an optical device can solve the problem. The device consists of a pair of multiband bandpass filters that are conjugated. The term "conjugated" describes the passband regions of one filter not overlapping with those of the other, but the regions are interdigitated. Along with the glasses, a 3D display produces colors composed of primary colors (basis for producing colors) having the spectral bands the same as the passbands of the filters. More specifically, the primary colors producing one viewpoint will be made up of the passbands of one filter, and those of the other viewpoint will be made up of the passbands of the conjugated filter. Thus, the primary colors of one filter would be seen by the eye that has the matching multiband filter. The inherent characteristic of the interference filter will allow little or no transmission of the wrong side of the stereoscopic images.
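
    The "conjugated" condition — interdigitated, non-overlapping passbands — can be checked directly; the band lists in the sketch below are hypothetical wavelengths chosen only to illustrate the idea, not the filters described in the record.

```python
def conjugated(filter_a, filter_b):
    """Return True if two multiband filters are conjugated.

    Each filter is a list of (low_nm, high_nm) passbands.  Conjugated means
    no passband of one filter overlaps any passband of the other, so each
    eye's primaries are blocked by the other eye's filter (low crosstalk).
    """
    for lo_a, hi_a in filter_a:
        for lo_b, hi_b in filter_b:
            if lo_a < hi_b and lo_b < hi_a:   # interval overlap test
                return False
    return True

# Hypothetical left/right-eye passbands (nm), interleaved across blue, green and red.
left_eye = [(420, 440), (500, 520), (600, 620)]
right_eye = [(450, 470), (530, 550), (630, 650)]
print(conjugated(left_eye, right_eye))   # True
```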

  6. A comparison of two Stokes ice sheet models applied to the Marine Ice Sheet Model Intercomparison Project for plan view models (MISMIP3d)

    DOE PAGES

    Zhang, Tong; Price, Stephen F.; Ju, Lili; ...

    2017-01-25

    Here, we present a comparison of the numerics and simulation results for two "full" Stokes ice sheet models, FELIX-S (Leng et al. 2012) and Elmer/Ice. The models are applied to the Marine Ice Sheet Model Intercomparison Project for plan view models (MISMIP3d). For the diagnostic experiment (P75D) the two models give similar results (< 2 % difference with respect to along-flow velocities) when using identical geometries and computational meshes, which we interpret as an indication of inherent consistencies and similarities between the two models. For the standard (Stnd), P75S, and P75R prognostic experiments, we find that FELIX-S (Elmer/Ice) grounding lines are relatively more retreated (advanced), results that are consistent with minor differences observed in the diagnostic experiment results and that we show to be due to different choices in the implementation of basal boundary conditions in the two models. While we are not able to argue for the relative favorability of either implementation, we do show that these differences decrease with increasing horizontal (i.e., both along- and across-flow) grid resolution and that grounding-line positions for FELIX-S and Elmer/Ice converge to within the estimated truncation error for Elmer/Ice. Stokes model solutions are often treated as an accuracy metric in model intercomparison experiments, but computational cost may not always allow for the use of model resolution within the regime of asymptotic convergence. In this case, we propose that an alternative estimate for the uncertainty in the grounding-line position is the span of grounding-line positions predicted by multiple Stokes models.

  7. The hype cycle in 3D displays: inherent limits of autostereoscopy

    NASA Astrophysics Data System (ADS)

    Grasnick, Armin

    2013-06-01

    For the past couple of years, a renaissance of 3-dimensional cinema has been observable. Even though stereoscopy has been quite popular over the last 150 years, 3D cinema has disappeared and re-established itself several times. The first boom in the late 19th century stagnated and vanished after a few years of success; the same happened again in the 50's and 80's of the 20th century. With the commercial success of the 3D blockbuster "Avatar" in 2009, at the latest, it is obvious that 3D cinema is having a comeback. How long will it last this time? There are already some signs of a declining interest in 3D movies, as the discrepancy between expectations and the results delivered becomes more evident. From the former hypes it is known that after an initial phase of curiosity (high expectations and excessive fault tolerance), a phase of frustration and saturation (critical analysis and subsequent disappointment) will follow. This phenomenon is known as the "Hype Cycle". The everyday experience of evolving technology has conditioned consumers. The expectation that "any technical improvement will preserve all previous properties" cannot be fulfilled with present 3D technologies. This is an inherent problem of stereoscopy and autostereoscopy: the presentation of an additional dimension causes concessions in relevant characteristics (i.e. resolution, brightness, frequency, viewing area) or leads to undesirable physical side effects (i.e. subjective discomfort, eye strain, spatial disorientation, feeling of nausea). It will be shown that the 3D apparatus (3D glasses or 3D display) is also the source of these restrictions and a reason for decreasing fascination. The limitations of present autostereoscopic technologies will be explained.

  8. CASTLE3D - A Computer Aided System for Labelling Archaeological Excavations in 3D

    NASA Astrophysics Data System (ADS)

    Houshiar, H.; Borrmann, D.; Elseberg, J.; Nüchter, A.; Näth, F.; Winkler, S.

    2015-08-01

    Documentation of archaeological excavation sites with conventional methods and tools such as hand drawings, measuring tape and archaeological notes is time consuming. This process is prone to human errors and the quality of the documentation depends on the qualification of the archaeologist on site. Use of modern technology and methods in 3D surveying and 3D robotics facilitate and improve this process. Computer-aided systems and databases improve the documentation quality and increase the speed of data acquisition. 3D laser scanning is the state of the art in modelling archaeological excavation sites, historical sites and even entire cities or landscapes. Modern laser scanners are capable of data acquisition of up to 1 million points per second. This provides a very detailed 3D point cloud of the environment. 3D point clouds and 3D models of an excavation site provide a better representation of the environment for the archaeologist and for documentation. The point cloud can be used both for further studies on the excavation and for the presentation of results. This paper introduces a Computer aided system for labelling archaeological excavations in 3D (CASTLE3D). Consisting of a set of tools for recording and georeferencing the 3D data from an excavation site, CASTLE3D is a novel documentation approach in industrial archaeology. It provides a 2D and 3D visualisation of the data and an easy-to-use interface that enables the archaeologist to select regions of interest and to interact with the data in both representations. The 2D visualisation and a 3D orthogonal view of the data provide cuts of the environment that resemble the traditional hand drawings. The 3D perspective view gives a realistic view of the environment. CASTLE3D is designed as an easy-to-use on-site semantic mapping tool for archaeologists. Each project contains a predefined set of semantic information that can be used to label findings in the data. Multiple regions of interest can be joined under
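    As a rough illustration of the 2D cuts described above (the orthogonal views that resemble traditional hand drawings), the following sketch selects the points of a scan that lie within a thin slab around a chosen cut plane and projects them onto that plane. The function and array names are hypothetical; CASTLE3D's own implementation is not described at this level of detail in the record.

```python
import numpy as np

def orthogonal_cut(points, axis=1, position=0.0, thickness=0.05):
    """Return a 2D projection of all points lying within a thin slab.

    points    : (N, 3) array of x, y, z coordinates from the scan
    axis      : index of the axis perpendicular to the cut plane (0=x, 1=y, 2=z)
    position  : location of the cut plane along that axis (same units as points)
    thickness : slab thickness; points within +/- thickness/2 are kept
    """
    mask = np.abs(points[:, axis] - position) <= thickness / 2.0
    keep_axes = [a for a in range(3) if a != axis]
    return points[mask][:, keep_axes]   # (M, 2) coordinates in the cut plane

# Example with a random cloud standing in for real scan data.
cloud = np.random.rand(100000, 3) * 10.0
profile = orthogonal_cut(cloud, axis=2, position=5.0, thickness=0.02)
print(profile.shape)
```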

  9. SOLIDFELIX: a transportable 3D static volume display

    NASA Astrophysics Data System (ADS)

    Langhans, Knut; Kreft, Alexander; Wörden, Henrik Tom

    2009-02-01

    Flat 2D screens cannot display complex 3D structures without the usage of different slices of the 3D model. Volumetric displays like the "FELIX 3D-Displays" can solve the problem. They provide space-filling images and are characterized by "multi-viewer" and "all-round view" capabilities without requiring cumbersome goggles. In the past many scientists tried to develop similar 3D displays. Our paper includes an overview from 1912 up to today. During several years of investigations on swept volume displays within the "FELIX 3D-Projekt" we learned about some significant disadvantages of rotating screens, for example hidden zones. For this reason the FELIX-Team started investigations also in the area of static volume displays. Within three years of research on our 3D static volume display at a normal high school in Germany we were able to achieve considerable results despite minor funding resources within this non-commercial group. Core element of our setup is the display volume which consists of a cubic transparent material (crystal, glass, or polymers doped with special ions, mainly from the rare earth group or other fluorescent materials). We focused our investigations on one frequency, two step upconversion (OFTS-UC) and two frequency, two step upconversion (TFTSUC) with IR-Lasers as excitation source. Our main interest was both to find an appropriate material and an appropriate doping for the display volume. Early experiments were carried out with CaF2 and YLiF4 crystals doped with 0.5 mol% Er3+-ions which were excited in order to create a volumetric pixel (voxel). In addition to that the crystals are limited to a very small size which is the reason why we later investigated on heavy metal fluoride glasses which are easier to produce in large sizes. Currently we are using a ZBLAN glass belonging to the mentioned group and making it possible to increase both the display volume and the brightness of the images significantly. Although, our display is currently

  10. R3D-2-MSA: the RNA 3D structure-to-multiple sequence alignment server

    PubMed Central

    Cannone, Jamie J.; Sweeney, Blake A.; Petrov, Anton I.; Gutell, Robin R.; Zirbel, Craig L.; Leontis, Neocles

    2015-01-01

    The RNA 3D Structure-to-Multiple Sequence Alignment Server (R3D-2-MSA) is a new web service that seamlessly links RNA three-dimensional (3D) structures to high-quality RNA multiple sequence alignments (MSAs) from diverse biological sources. In this first release, R3D-2-MSA provides manual and programmatic access to curated, representative ribosomal RNA sequence alignments from bacterial, archaeal, eukaryal and organellar ribosomes, using nucleotide numbers from representative atomic-resolution 3D structures. A web-based front end is available for manual entry and an Application Program Interface for programmatic access. Users can specify up to five ranges of nucleotides and 50 nucleotide positions per range. The R3D-2-MSA server maps these ranges to the appropriate columns of the corresponding MSA and returns the contents of the columns, either for display in a web browser or in JSON format for subsequent programmatic use. The browser output page provides a 3D interactive display of the query, a full list of sequence variants with taxonomic information and a statistical summary of distinct sequence variants found. The output can be filtered and sorted in the browser. Previous user queries can be viewed at any time by resubmitting the output URL, which encodes the search and re-generates the results. The service is freely available with no login requirement at http://rna.bgsu.edu/r3d-2-msa. PMID:26048960
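    The record states that R3D-2-MSA offers programmatic access and can return JSON for up to five nucleotide ranges per query. Only the base URL is given above; the endpoint path and parameter names in the sketch below are placeholders, not the documented API, so the server's documentation should be consulted before use.

```python
import json
import urllib.request

# Base URL is from the record; the endpoint path and query parameters below are
# illustrative placeholders only.
BASE_URL = "http://rna.bgsu.edu/r3d-2-msa"

def query_r3d2msa(pdb_id, chain, ranges):
    """Request alignment columns for up to five nucleotide ranges (hypothetical API shape)."""
    params = {
        "structure": pdb_id,                     # e.g. a representative rRNA 3D structure
        "chain": chain,
        "ranges": ",".join(f"{a}-{b}" for a, b in ranges),
        "format": "json",                        # JSON output for programmatic use
    }
    query = "&".join(f"{k}={v}" for k, v in params.items())
    with urllib.request.urlopen(f"{BASE_URL}/api?{query}") as resp:  # hypothetical path
        return json.loads(resp.read().decode())

# Example (only valid if the real API matches this assumed shape):
# result = query_r3d2msa("4YBB", "AA", [(120, 130), (515, 525)])
```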

  11. R3D-2-MSA: the RNA 3D structure-to-multiple sequence alignment server.

    PubMed

    Cannone, Jamie J; Sweeney, Blake A; Petrov, Anton I; Gutell, Robin R; Zirbel, Craig L; Leontis, Neocles

    2015-07-01

    The RNA 3D Structure-to-Multiple Sequence Alignment Server (R3D-2-MSA) is a new web service that seamlessly links RNA three-dimensional (3D) structures to high-quality RNA multiple sequence alignments (MSAs) from diverse biological sources. In this first release, R3D-2-MSA provides manual and programmatic access to curated, representative ribosomal RNA sequence alignments from bacterial, archaeal, eukaryal and organellar ribosomes, using nucleotide numbers from representative atomic-resolution 3D structures. A web-based front end is available for manual entry and an Application Program Interface for programmatic access. Users can specify up to five ranges of nucleotides and 50 nucleotide positions per range. The R3D-2-MSA server maps these ranges to the appropriate columns of the corresponding MSA and returns the contents of the columns, either for display in a web browser or in JSON format for subsequent programmatic use. The browser output page provides a 3D interactive display of the query, a full list of sequence variants with taxonomic information and a statistical summary of distinct sequence variants found. The output can be filtered and sorted in the browser. Previous user queries can be viewed at any time by resubmitting the output URL, which encodes the search and re-generates the results. The service is freely available with no login requirement at http://rna.bgsu.edu/r3d-2-msa.

  12. 3-D capaciflector

    NASA Technical Reports Server (NTRS)

    Vranish, John M. (Inventor)

    1998-01-01

    A capacitive type proximity sensor having improved range and sensitivity between a surface of arbitrary shape and an intruding object in the vicinity of the surface having one or more outer conductors on the surface which serve as capacitive sensing elements shaped to conform to the underlying surface of a machine. Each sensing element is backed by a reflector driven at the same voltage and in phase with the corresponding capacitive sensing element. Each reflector, in turn, serves to reflect the electric field lines of the capacitive sensing element away from the surface of the machine on which the sensor is mounted so as to enhance the component constituted by the capacitance between the sensing element and an intruding object as a fraction of the total capacitance between the sensing element and ground. Each sensing element and corresponding reflecting element are electrically driven in phase, and the capacitance between the sensing elements individually and the sensed object is determined using circuitry known to the art. The reflector may be shaped to shield the sensor and to shape its field of view, in effect providing an electrostatic lensing effect. Sensors and reflectors may be fabricated using a variety of known techniques such as vapor deposition, sputtering, painting, plating, or deformation of flexible films, to provide conformal coverage of surfaces of arbitrary shape.

  13. Super long viewing distance light homogeneous emitting three-dimensional display

    PubMed Central

    Liao, Hongen

    2015-01-01

    Three-dimensional (3D) display technology has continuously been attracting public attention with the progress in today's 3D television and mature display technologies. The primary characteristics of conventional glasses-free autostereoscopic displays, such as spatial resolution, image depths, and viewing angle, are often limited due to the use of optical lenses or optical gratings. We present a 3D display using MEMS-scanning-mechanism-based light homogeneous emitting (LHE) approach and demonstrate that the display can directly generate an autostereoscopic 3D image without the need for optical lenses or gratings. The generated 3D image has the advantages of non-aberration and a high-definition spatial resolution, making it the first to exhibit animated 3D images with image depth of six meters. Our LHE 3D display approach can be used to generate a natural flat-panel 3D display with super long viewing distance and alternative real-time image update. PMID:25828029

  14. Super long viewing distance light homogeneous emitting three-dimensional display

    NASA Astrophysics Data System (ADS)

    Liao, Hongen

    2015-04-01

    Three-dimensional (3D) display technology has continuously been attracting public attention with the progress in today's 3D television and mature display technologies. The primary characteristics of conventional glasses-free autostereoscopic displays, such as spatial resolution, image depths, and viewing angle, are often limited due to the use of optical lenses or optical gratings. We present a 3D display using MEMS-scanning-mechanism-based light homogeneous emitting (LHE) approach and demonstrate that the display can directly generate an autostereoscopic 3D image without the need for optical lenses or gratings. The generated 3D image has the advantages of non-aberration and a high-definition spatial resolution, making it the first to exhibit animated 3D images with image depth of six meters. Our LHE 3D display approach can be used to generate a natural flat-panel 3D display with super long viewing distance and alternative real-time image update.

  15. Super long viewing distance light homogeneous emitting three-dimensional display.

    PubMed

    Liao, Hongen

    2015-04-01

    Three-dimensional (3D) display technology has continuously been attracting public attention with the progress in today's 3D television and mature display technologies. The primary characteristics of conventional glasses-free autostereoscopic displays, such as spatial resolution, image depths, and viewing angle, are often limited due to the use of optical lenses or optical gratings. We present a 3D display using MEMS-scanning-mechanism-based light homogeneous emitting (LHE) approach and demonstrate that the display can directly generate an autostereoscopic 3D image without the need for optical lenses or gratings. The generated 3D image has the advantages of non-aberration and a high-definition spatial resolution, making it the first to exhibit animated 3D images with image depth of six meters. Our LHE 3D display approach can be used to generate a natural flat-panel 3D display with super long viewing distance and alternative real-time image update.

  16. Crowdsourcing Based 3d Modeling

    NASA Astrophysics Data System (ADS)

    Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T.

    2016-06-01

    Web-based photo albums that support organizing and viewing the users' images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them e.g. in location based applications on social networks. Our paper discusses a procedure that collects open access images from a site frequently visited by tourists. Geotagged pictures showing the image of a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For the particular investigation we selected three attractions in Budapest. To assess the geometrical accuracy, we used laser scanner and DSLR as well as smart phone photography to derive reference values to enable verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models could be derived applying photogrammetric processing software, simply by using images of the community, without visiting the site.
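    A small sketch of the photo-selection step described above: keeping only crowdsourced images whose geotags fall within a given radius of the attraction, using the haversine distance. The coordinates and radius are illustrative; the paper does not specify its exact selection criterion.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def select_photos(photos, attraction, radius_m=150.0):
    """Keep geotagged photos taken within radius_m of the attraction's coordinates."""
    return [p for p in photos
            if haversine_m(p["lat"], p["lon"], attraction[0], attraction[1]) <= radius_m]

# Hypothetical example: two photos checked against an attraction in Budapest.
photos = [{"id": 1, "lat": 47.5022, "lon": 19.0344}, {"id": 2, "lat": 47.4979, "lon": 19.0402}]
print(select_photos(photos, (47.5022, 19.0345)))
```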

  17. Recent development of 3D display technology for new market

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Sik

    2003-11-01

    A multi-view 3D video processor was designed and implemented with several FPGAs for real-time applications, and a projection-type 3D display was introduced for low-cost commercialization. One high-resolution projection panel and only one projection lens are capable of displaying multiview autostereoscopic images. The system can cope with various arrangements of 3D camera systems (or pixel arrays) and resolutions of 3D displays. It shows high 3-D image quality in terms of resolution, brightness, and contrast, so it is well suited for commercialization in the game and advertisement markets.

  18. 3D integral imaging with optical processing

    NASA Astrophysics Data System (ADS)

    Martínez-Corral, Manuel; Martínez-Cuenca, Raúl; Saavedra, Genaro; Javidi, Bahram

    2008-04-01

    Integral imaging (InI) systems are imaging devices that provide auto-stereoscopic images of 3D intensity objects. Since the birth of this new technology, InI systems have satisfactorily overcome many of their initial drawbacks. Basically, two kinds of procedures have been used: digital and optical. The "3D Imaging and Display Group" at the University of Valencia, with the essential collaboration of Prof. Javidi, has centered its efforts on 3D InI with optical processing. Among other achievements, our Group has proposed annular amplitude modulation for enlargement of the depth of field, dynamic focusing for reduction of the facet-braiding effect, and the TRES and MATRES devices to enlarge the viewing angle.

  19. [Current status of 3D/4D volume ultrasound of the breast].

    PubMed

    Weismann, C; Hergan, K

    2007-06-01

    3D/4D volume ultrasound is an established method that offers various options for analyzing and presenting ultrasound volume data. The following imaging techniques are based on automatically acquired ultrasound volumes. The multiplanar view is the typical mode of 3D ultrasound data presentation. The niche mode view is a cut open view of the volume data set. The surface mode is a rendering technique that represents the data within a volume of interest (VOI) with different slice thicknesses (typically 1-4 mm) with a contrast-enhanced surface algorithm. Related to the diagnostic target, the transparency mode helps to present echopoor or echorich structures and their spatial relationships within the ultrasound volume. Glass body rendering is a special type of transparency mode that makes the grayscale data transparent and shows the color flow data in a surface render mode. The inversion mode offers a three-dimensional surface presentation of echopoor lesions. Volume Contrast Imaging (VCI) works with static 3D volume data and is able to be used with 4D for dynamic scanning. Volume calculation of a lesion and virtual computer-assisted organ analysis of the same lesion is performed with VoCal software. Tomographic Ultrasound Imaging (TUI) is the perfect tool to document static 3D ultrasound volumes. 3D/4D volume ultrasound of the breast provides diagnostic information of the coronal plane. In this plane benign lesions show the compression pattern sign, while malignant lesions show the retraction pattern or star pattern sign. The indeterminate pattern of a lesion combines signs of compression and retraction or star pattern in the coronal plane. Glass body rendering in combination with Power-Doppler, Color-Doppler or High-Definition Flow Imaging presents the intra- and peritumoral three-dimensional vascular architecture. 3D targeting shows correct or incorrect needle placement in all three planes after 2D or 4D needle guidance. In conclusion, it is safe to say that 3D/4D

  20. 3D Spectroscopy in Astronomy

    NASA Astrophysics Data System (ADS)

    Mediavilla, Evencio; Arribas, Santiago; Roth, Martin; Cepa-Nogué, Jordi; Sánchez, Francisco

    2011-09-01

    Preface; Acknowledgements; 1. Introductory review and technical approaches Martin M. Roth; 2. Observational procedures and data reduction James E. H. Turner; 3. 3D Spectroscopy instrumentation M. A. Bershady; 4. Analysis of 3D data Pierre Ferruit; 5. Science motivation for IFS and galactic studies F. Eisenhauer; 6. Extragalactic studies and future IFS science Luis Colina; 7. Tutorials: how to handle 3D spectroscopy data Sebastian F. Sánchez, Begona García-Lorenzo and Arlette Pécontal-Rousset.

  1. Spherical 3D isotropic wavelets

    NASA Astrophysics Data System (ADS)

    Lanusse, F.; Rassat, A.; Starck, J.-L.

    2012-04-01

    Context. Future cosmological surveys will provide 3D large scale structure maps with large sky coverage, for which a 3D spherical Fourier-Bessel (SFB) analysis in spherical coordinates is natural. Wavelets are particularly well-suited to the analysis and denoising of cosmological data, but a spherical 3D isotropic wavelet transform does not currently exist to analyse spherical 3D data. Aims: The aim of this paper is to present a new formalism for a spherical 3D isotropic wavelet, i.e. one based on the SFB decomposition of a 3D field and accompany the formalism with a public code to perform wavelet transforms. Methods: We describe a new 3D isotropic spherical wavelet decomposition based on the undecimated wavelet transform (UWT) described in Starck et al. (2006). We also present a new fast discrete spherical Fourier-Bessel transform (DSFBT) based on both a discrete Bessel transform and the HEALPIX angular pixelisation scheme. We test the 3D wavelet transform and as a toy-application, apply a denoising algorithm in wavelet space to the Virgo large box cosmological simulations and find we can successfully remove noise without much loss to the large scale structure. Results: We have described a new spherical 3D isotropic wavelet transform, ideally suited to analyse and denoise future 3D spherical cosmological surveys, which uses a novel DSFBT. We illustrate its potential use for denoising using a toy model. All the algorithms presented in this paper are available for download as a public code called MRS3D at http://jstarck.free.fr/mrs3d.html
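    For readers unfamiliar with undecimated isotropic wavelet denoising, the sketch below shows the general idea on an ordinary 3D Cartesian array: an "à trous" (starlet) decomposition followed by hard thresholding of the wavelet coefficients. The paper's actual transform operates in spherical Fourier-Bessel space and is implemented in the MRS3D code linked above; this Cartesian toy version is only meant to convey the mechanics.

```python
import numpy as np
from scipy.ndimage import convolve1d

# B3-spline kernel used in the classic starlet (a trous) transform.
H = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0

def atrous_smooth(data, scale):
    """Separably convolve a 3D array with the B3-spline kernel dilated by 2**scale."""
    kernel = np.zeros(4 * 2**scale + 1)
    kernel[:: 2**scale] = H          # insert 2**scale - 1 zeros between taps ("holes")
    out = data
    for axis in range(3):
        out = convolve1d(out, kernel, axis=axis, mode="reflect")
    return out

def starlet_denoise(data, n_scales=3, k_sigma=3.0):
    """Hard-threshold the wavelet coefficients at each scale, keep the smooth residual."""
    smooth, result = data, np.zeros_like(data)
    for j in range(n_scales):
        smoother = atrous_smooth(smooth, j)
        w = smooth - smoother                      # wavelet coefficients at scale j
        sigma = np.median(np.abs(w)) / 0.6745      # robust noise estimate (MAD)
        w[np.abs(w) < k_sigma * sigma] = 0.0       # hard thresholding
        result += w
        smooth = smoother
    return result + smooth                         # add back the coarse approximation

cube = np.random.randn(64, 64, 64)                 # stand-in for a simulated density field
print(starlet_denoise(cube).shape)
```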

  2. 3D Elevation Program—Virtual USA in 3D

    USGS Publications Warehouse

    Lukas, Vicki; Stoker, J.M.

    2016-04-14

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called ‘lidar’ (light detection and ranging) to create a virtual reality map of the Nation that is very accurate. 3D maps have many uses with new uses being discovered all the time.  

  3. Embedding objects during 3D printing to add new functionalities.

    PubMed

    Yuen, Po Ki

    2016-07-01

    A novel method for integrating and embedding objects to add new functionalities during 3D printing based on fused deposition modeling (FDM) (also known as fused filament fabrication or molten polymer deposition) is presented. Unlike typical 3D printing, FDM-based 3D printing could allow objects to be integrated and embedded during 3D printing and the FDM-based 3D printed devices do not typically require any post-processing and finishing. Thus, various fluidic devices with integrated glass cover slips or polystyrene films with and without an embedded porous membrane, and optical devices with embedded Corning(®) Fibrance™ Light-Diffusing Fiber were 3D printed to demonstrate the versatility of the FDM-based 3D printing and embedding method. Fluid perfusion flow experiments with a blue colored food dye solution were used to visually confirm fluid flow and/or fluid perfusion through the embedded porous membrane in the 3D printed fluidic devices. Similar to typical 3D printed devices, FDM-based 3D printed devices are translucent at best unless post-polishing is performed and optical transparency is highly desirable in any fluidic devices; integrated glass cover slips or polystyrene films would provide a perfect optical transparent window for observation and visualization. In addition, they also provide a compatible flat smooth surface for biological or biomolecular applications. The 3D printed fluidic devices with an embedded porous membrane are applicable to biological or chemical applications such as continuous perfusion cell culture or biocatalytic synthesis but without the need for any post-device assembly and finishing. The 3D printed devices with embedded Corning(®) Fibrance™ Light-Diffusing Fiber would have applications in display, illumination, or optical applications. Furthermore, the FDM-based 3D printing and embedding method could also be utilized to print casting molds with an integrated glass bottom for polydimethylsiloxane (PDMS) device replication

  4. A Joint Approach to the Study of S-Type and P-Type Habitable Zones in Binary Systems: New Results in the View of 3-D Planetary Climate Models

    NASA Astrophysics Data System (ADS)

    Cuntz, Manfred

    2015-01-01

    In two previous papers, given by Cuntz (2014a,b) [ApJ 780, A14 (19 pages); arXiv:1409.3796], a comprehensive approach has been provided for the study of S-type and P-type habitable zones in stellar binary systems. P-type orbits occur when the planet orbits both binary components, whereas in the case of S-type orbits, the planet orbits only one of the binary components with the second component considered a perturbator. The selected approach considers a variety of aspects, including (1) the consideration of a joint constraint including orbital stability and a habitable region for a possible system planet through the stellar radiative energy fluxes; (2) the treatment of conservative (CHZ), general (GHZ) and extended zones of habitability (EHZ) [see Paper I for definitions] for the systems as previously defined for the Solar System; (3) the provision of a combined formalism for the assessment of both S-type and P-type habitability; in particular, mathematical criteria are devised for which kind of system S-type and P-type habitability is realized; and (4) the applications of the theoretical approach to systems with the stars in different kinds of orbits, including elliptical orbits (the most expected case). Particularly, an algebraic formalism for the assessment of both S-type and P-type habitability is given based on a higher-order polynomial expression. Thus, an a priori specification for the presence or absence of S-type or P-type radiative habitable zones is - from a mathematical point of view - neither necessary nor possible, as those are determined by the adopted formalism. Previously, numerous applications of the method have been given encompassing theoretical star-planet systems and observations. Most recently, this method has been upgraded to include recent studies of 3-D planetary climate models. Originally, this type of work affects the extent and position of habitable zones around single stars; however, it also has profound consequences for the habitable

  5. Visualizing realistic 3D urban environments

    NASA Astrophysics Data System (ADS)

    Lee, Aaron; Chen, Tuolin; Brunig, Michael; Schmidt, Hauke

    2003-05-01

    Visualizing complex urban environments has been an active research topic due to its wide variety of applications in city planning: road construction, emergency facilities planning, and optimal placement of wireless carrier base stations. Traditional 2D visualizations have been around for a long time but they only provide a schematic line-drawing bird's eye view and are sometimes confusing to understand due to the lack of depth information. Early versions of 3D systems have been developed for very expensive graphics workstations which seriously limited the availability. In this paper we describe a 3D visualization system for a desktop PC which integrates multiple resolutions of data and provides a realistic view of the urban environment.

  6. [Evaluation of Motion Sickness Induced by 3D Video Clips].

    PubMed

    Matsuura, Yasuyuki; Takada, Hiroki

    2016-01-01

    The use of stereoscopic images has been spreading rapidly. Nowadays, stereoscopic movies are nothing new to people. Stereoscopic systems date back to 280 B.C., when Euclid first recognized the concept of depth perception by humans. Despite the increase in the production of three-dimensional (3D) display products and many studies on stereoscopic vision, the effect of stereoscopic vision on the human body has been insufficiently understood. However, symptoms such as eye fatigue and 3D sickness have been concerns when viewing 3D films for a prolonged period of time; therefore, it is important to consider the safety of viewing virtual 3D contents as a contribution to society. It is generally explained to the public that accommodation and convergence are mismatched during stereoscopic vision and that this is the main reason for the visual fatigue and visually induced motion sickness (VIMS) during 3D viewing. We have devised a method to simultaneously measure lens accommodation and convergence. We used this simultaneous measurement device to characterize 3D vision. Fixation distance was compared between accommodation and convergence during the viewing of 3D films with repeated measurements. Time courses of these fixation distances and their distributions were compared in subjects who viewed 2D and 3D video clips. The results indicated that after 90 s of continuously viewing 3D images, the accommodative power does not correspond to the distance of convergence. In this paper, remarks on methods to measure the severity of motion sickness induced by viewing 3D films are also given. From the epidemiological viewpoint, it is useful to obtain novel knowledge for reduction and/or prevention of VIMS. We should accumulate empirical data on motion sickness, which may contribute to the development of relevant fields in science and technology.

  7. 3D display based on parallax barrier with multiview zones.

    PubMed

    Lv, Guo-Jiao; Wang, Qiong-Hua; Zhao, Wu-Xiang; Wang, Jun

    2014-03-01

    A 3D display based on a parallax barrier with multiview zones is proposed. This display consists of a 2D display panel and a parallax barrier. The basic element of the parallax barrier has three narrow slits. They can show three columns of subpixels on the 2D display panel and form 3D pixels. The parallax barrier can provide multiview zones. In these multiview zones, the proposed 3D display can use a small number of views to achieve a high density of views. Therefore, the distance between views is the same as the conventional ones with more views. Considering the proposed display has fewer views, which bring more 3D pixels in the 3D images, the resolution and brightness will be higher than the conventional ones. A 12-view prototype of the proposed 3D display is developed, and it provides the same density of views as a conventional one with 28 views. Experimental results show the proposed display has higher resolution and brightness than the conventional one. The cross talk is also limited at a low level.
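    The abstract does not give design equations, but the textbook parallax-barrier geometry conveys how slit pitch and barrier-to-panel gap are tied to the subpixel pitch, number of views, and design viewing distance. The sketch below uses those standard similar-triangle relations with illustrative numbers; it is not taken from the cited paper.

```python
def parallax_barrier_design(n_views, subpixel_pitch_mm, view_spacing_mm=65.0,
                            viewing_distance_mm=600.0):
    """Textbook parallax-barrier geometry (not taken from the cited paper).

    n_views             : number of views interleaved behind each barrier element
    subpixel_pitch_mm   : horizontal subpixel pitch of the 2D panel
    view_spacing_mm     : desired spacing of viewing zones (~ interocular distance)
    viewing_distance_mm : design viewing distance measured from the barrier
    """
    # Barrier-to-panel gap from similar triangles: adjacent subpixels seen through
    # one slit must separate into adjacent viewing zones at the design distance.
    gap = subpixel_pitch_mm * viewing_distance_mm / view_spacing_mm
    # Slit pitch is slightly smaller than n_views * subpixel pitch so that the
    # view zones produced by all slits converge at the viewing distance.
    barrier_pitch = (n_views * subpixel_pitch_mm *
                     viewing_distance_mm / (viewing_distance_mm + gap))
    return gap, barrier_pitch

# Example: a 3-slit element showing 3 subpixel columns, 0.1 mm subpixel pitch.
gap, pitch = parallax_barrier_design(n_views=3, subpixel_pitch_mm=0.1)
print(f"gap = {gap:.3f} mm, barrier pitch = {pitch:.4f} mm")
```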

  8. Watermarking 3D Objects for Verification

    DTIC Science & Technology

    1999-01-01

    signal (audio/image/video) processing and steganography fields, and even newer to the computer graphics community. Inherently, digital watermarking of... Many view digital watermarking as a potential solution for copyright protection of valuable digital materials like CD-quality audio, publication... watermark. The object can be an image, an audio clip, a video clip, or a 3D model. Some papers discuss watermarking other forms of multimedia data

  9. An Interactive 3D Virtual Anatomy Puzzle for Learning and Simulation - Initial Demonstration and Evaluation.

    PubMed

    Messier, Erik; Wilcox, Jascha; Dawson-Elli, Alexander; Diaz, Gabriel; Linte, Cristian A

    2016-01-01

    To inspire young students (grades 6-12) to become medical practitioners and biomedical engineers, it is necessary to expose them to key concepts of the field in a way that is both exciting and informative. Recent advances in medical image acquisition, manipulation, processing, visualization, and display have revolutionized the approach in which the human body and internal anatomy can be seen and studied. It is now possible to collect 3D, 4D, and 5D medical images of patient-specific data, and display that data to the end user using consumer-level 3D stereoscopic display technology. Despite such advancements, traditional 2D modes of content presentation such as textbooks and slides are still the standard didactic equipment used to teach young students anatomy. More sophisticated methods of display can help to elucidate the complex 3D relationships between structures that are so often missed when viewing only 2D media, and can instill in students an appreciation for the interconnection between medicine and technology. Here we describe the design, implementation, and preliminary evaluation of a 3D virtual anatomy puzzle dedicated to helping users learn the anatomy of various organs and systems by manipulating 3D virtual data. The puzzle currently comprises several components of the human anatomy and can be easily extended to include additional organs and systems. The 3D virtual anatomy puzzle game was implemented and piloted using three display paradigms - a traditional 2D monitor, a 3D TV with active shutter glasses, and the DK2 version of the Oculus Rift, as well as two different user interaction devices - a space mouse and traditional keyboard controls.

  10. 3D World Building System

    ScienceCinema

    None

    2016-07-12

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  11. 3D Buckligami: Digital Matter

    NASA Astrophysics Data System (ADS)

    van Hecke, Martin; de Reus, Koen; Florijn, Bastiaan; Coulais, Corentin

    2014-03-01

    We present a class of elastic structures which exhibit collective buckling in 3D, and create these by a 3D printing/moulding technique. Our structures consist of a cubic lattice of anisotropic unit cells, and we show that their mechanical properties are programmable via the orientation of these unit cells.

  12. 3D World Building System

    SciTech Connect

    2013-10-30

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  13. LLNL-Earth3D

    SciTech Connect

    2013-10-01

    Earth3D is a computer code designed to allow fast calculation of seismic rays and travel times through a 3D model of the Earth. LLNL is using this for earthquake location and global tomography efforts and such codes are of great interest to the Earth Science community.

  14. Market study: 3-D eyetracker

    NASA Technical Reports Server (NTRS)

    1977-01-01

    A market study of a proposed version of a 3-D eyetracker for initial use at NASA's Ames Research Center was made. The commercialization potential of a simplified, less expensive 3-D eyetracker was ascertained. Primary focus on present and potential users of eyetrackers, as well as present and potential manufacturers has provided an effective means of analyzing the prospects for commercialization.

  15. 6. Looking glass aircraft in the project looking glass historic ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    6. Looking glass aircraft in the project looking glass historic district. View to north. - Offutt Air Force Base, Looking Glass Airborne Command Post, Looking Glass Avenue between Comstat Drive & Nightwatch Avenue, Offutt Air Force Base, Bellevue, Sarpy County, NE

  16. 3D vision system assessment

    NASA Astrophysics Data System (ADS)

    Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Bryan; Chenault, David B.; Kingston, David; Geulen, Vanilynmae; Newell, Scott; Pettijohn, Brad

    2009-02-01

    In this paper, we report on the development of a 3D vision system consisting of a flat panel stereoscopic display and auto-converging stereo camera and an assessment of the system's use for robotic driving, manipulation, and surveillance operations. The 3D vision system was integrated onto a Talon Robot and Operator Control Unit (OCU) such that direct comparisons of the performance of a number of test subjects using 2D and 3D vision systems were possible. A number of representative scenarios were developed to determine which tasks benefited most from the added depth perception and to understand when the 3D vision system hindered understanding of the scene. Two tests were conducted at Fort Leonard Wood, MO with noncommissioned officers ranked Staff Sergeant and Sergeant First Class. The scenarios; the test planning, approach and protocols; the data analysis; and the resulting performance assessment of the 3D vision system are reported.

  17. 3D printing in dentistry.

    PubMed

    Dawood, A; Marti Marti, B; Sauret-Jackson, V; Darwood, A

    2015-12-01

    3D printing has been hailed as a disruptive technology which will change manufacturing. Used in aerospace, defence, art and design, 3D printing is becoming a subject of great interest in surgery. The technology has a particular resonance with dentistry, and with advances in 3D imaging and modelling technologies such as cone beam computed tomography and intraoral scanning, and with the relatively long history of the use of CAD CAM technologies in dentistry, it will become of increasing importance. Uses of 3D printing include the production of drill guides for dental implants, the production of physical models for prosthodontics, orthodontics and surgery, the manufacture of dental, craniomaxillofacial and orthopaedic implants, and the fabrication of copings and frameworks for implant and dental restorations. This paper reviews the types of 3D printing technologies available and their various applications in dentistry and in maxillofacial surgery.

  18. PLOT3D user's manual

    NASA Technical Reports Server (NTRS)

    Walatka, Pamela P.; Buning, Pieter G.; Pierce, Larry; Elson, Patricia A.

    1990-01-01

    PLOT3D is a computer graphics program designed to visualize the grids and solutions of computational fluid dynamics. Seventy-four functions are available. Versions are available for many systems. PLOT3D can handle multiple grids with a million or more grid points, and can produce varieties of model renderings, such as wireframe or flat shaded. Output from PLOT3D can be used in animation programs. The first part of this manual is a tutorial that takes the reader, keystroke by keystroke, through a PLOT3D session. The second part of the manual contains reference chapters, including the helpfile, data file formats, advice on changing PLOT3D, and sample command files.

  19. A heads-up display for diabetic limb salvage surgery: a view through the google looking glass.

    PubMed

    Armstrong, David G; Rankin, Timothy M; Giovinco, Nicholas A; Mills, Joseph L; Matsuoka, Yoky

    2014-09-01

    Although the use of augmented reality has been well described over the past several years, available devices suffer from high cost, an uncomfortable form factor, suboptimal battery life, and lack an app-based developer ecosystem. This article describes the potential use of a novel, consumer-based, wearable device to assist surgeons in real time during limb preservation surgery and clinical consultation. Using routine intraoperative, clinical, and educational case examples, we describe the use of a wearable augmented reality device (Google Glass; Google, Mountain View, CA). The device facilitated hands-free, rapid communication, documentation, and consultation. An eyeglass-mounted screen form factor has the potential to improve communication, safety, and efficiency of intraoperative and clinical care. We believe this represents a natural progression toward union of medical devices with consumer technology.

  20. Cosmic origins: experiences making a stereoscopic 3D movie

    NASA Astrophysics Data System (ADS)

    Holliman, Nick

    2010-02-01

    Context: Stereoscopic 3D movies are gaining rapid acceptance commercially. In addition, our previous experience with the short 3D movie "Cosmic Cookery" showed that there is great public interest in the presentation of cosmology research using this medium. Objective: The objective of the work reported in this paper was to create a three-dimensional stereoscopic movie describing the life of the Milky Way galaxy. This was a technical and artistic exercise to take observed and simulated data from leading scientists and produce a short (six minute) movie that describes how the Milky Way was created and what happens in its future. The initial target audience was the visitors to the Royal Society's 2009 Summer Science Exhibition in central London, UK. The movie is also intended to become a presentation tool for scientists and educators following the exhibition. Apparatus: The presentation and playback systems used consisted of off-the-shelf devices and software. The display platform for the Royal Society presentation was a RealD LP Pro switch used with a DLP projector to rear-project a 4 metre diagonal image. The LP Pro enables the use of cheap disposable linearly polarising glasses so that the high turnover rate of the audience (every ten minutes at peak times) could be sustained without needing delays to clean the glasses. The playback system was a high-speed PC with an external 8 TB RAID driving the projectors at 30 Hz per eye; the Lightspeed DepthQ software was used to decode and generate the video stream. Results: A wide range of tools were used to render the image sequences, ranging from commercial to custom software. Each tool was able to produce a stream of 1080p images in stereo at 30 fps. None of the rendering tools used allowed precise calibration of the stereo effect at render time and therefore all sequences were tuned extensively in a trial-and-error process until the stereo effect was acceptable and supported a comfortable viewing experience. Conclusion: We

  1. Mars Odyssey Seen by Mars Global Surveyor (3-D)

    NASA Technical Reports Server (NTRS)

    2005-01-01

    This stereoscopic picture of NASA's Mars Odyssey spacecraft was created from two views of that spacecraft taken by the Mars Orbiter Camera on NASA's Mars Global Surveyor. The camera's successful imaging of Odyssey and of the European Space Agency's Mars Express in April 2005 produced the first pictures of any spacecraft orbiting Mars taken by another spacecraft orbiting Mars.

    Mars Global Surveyor acquired this image of Mars Odyssey on April 21, 2005. The stereoscopic picture combines one view captured while the two orbiters were 90 kilometers (56 miles) apart with a second view captured from a slightly different angle when the two orbiters were 135 kilometers (84 miles) apart. For proper viewing, the user needs '3-D' glasses with red over the left eye and blue over the right eye.

    The Mars Orbiter Camera can resolve features on the surface of Mars as small as a few meters or yards across from Mars Global Surveyor's orbital altitude of 350 to 405 kilometers (217 to 252 miles). From a distance of 100 kilometers (62 miles), the camera would be able to resolve features substantially smaller than 1 meter or yard across.

    Mars Odyssey was launched on April 7, 2001, and reached Mars on Oct. 24, 2001. Mars Global Surveyor left Earth on Nov. 7, 1996, and arrived in Mars orbit on Sept. 12, 1997. Both orbiters are in an extended mission phase, both have relayed data from the Mars Exploration Rovers, and both are continuing to return exciting new results from Mars. JPL, a division of the California Institute of Technology, Pasadena, manages both missions for NASA's Science Mission Directorate, Washington, D.C.

  2. PLOT3D/AMES, DEC VAX VMS VERSION USING DISSPLA (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into
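    The description above mentions the two files PLOT3D consumes: a grid file containing up to 50 grids and a solution ("q") file holding density, x-, y- and z-momentum, and stagnation energy at each grid point. As a sketch of how such multi-block data is laid out, the reader below parses a formatted (ASCII, whitespace-separated) 3D multi-block grid file; binary/unformatted variants and IBLANK arrays, which real PLOT3D files often use, are not handled here, and the file name is hypothetical.

```python
import numpy as np

def read_plot3d_grid(path):
    """Read a formatted (ASCII) multi-block 3D PLOT3D grid file.

    Layout assumed: number of grids; imax jmax kmax per grid; then, per grid,
    all x values, all y values, all z values (i varying fastest).
    Returns a list of (X, Y, Z) arrays, each shaped (imax, jmax, kmax).
    """
    with open(path) as f:
        tokens = f.read().split()
    pos = 0

    def take(n):
        nonlocal pos
        chunk = tokens[pos:pos + n]
        pos += n
        return chunk

    ngrids = int(take(1)[0])
    dims = [tuple(int(v) for v in take(3)) for _ in range(ngrids)]
    grids = []
    for imax, jmax, kmax in dims:
        npts = imax * jmax * kmax
        coords = [np.array(take(npts), dtype=float).reshape((imax, jmax, kmax), order="F")
                  for _ in range(3)]               # x, y, z blocks in turn
        grids.append(tuple(coords))
    return grids

# Example (hypothetical file name): grids = read_plot3d_grid("wing.x")
```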

  3. PLOT3D/AMES, DEC VAX VMS VERSION USING DISSPLA (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P. G.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  4. 3D Printed Micro Free-Flow Electrophoresis Device.

    PubMed

    Anciaux, Sarah K; Geiger, Matthew; Bowser, Michael T

    2016-08-02

    The cost, time, and restrictions on creative flexibility associated with current fabrication methods present significant challenges in the development and application of microfluidic devices. Additive manufacturing, also referred to as three-dimensional (3D) printing, provides many advantages over existing methods. With 3D printing, devices can be made in a cost-effective manner with the ability to rapidly prototype new designs. We have fabricated a micro free-flow electrophoresis (μFFE) device using a low-cost, consumer-grade 3D printer. Test prints were performed to determine the minimum feature sizes that could be reproducibly produced using 3D printing fabrication. Microfluidic ridges could be fabricated with dimensions as small as 20 μm high × 640 μm wide. Minimum valley dimensions were 30 μm wide × 130 μm wide. An acetone vapor bath was used to smooth acrylonitrile-butadiene-styrene (ABS) surfaces and facilitate bonding of fully enclosed channels. The surfaces of the 3D-printed features were profiled and compared to a similar device fabricated in a glass substrate. Stable stream profiles were obtained in a 3D-printed μFFE device. Separations of fluorescent dyes in the 3D-printed device and its glass counterpart were comparable. A μFFE separation of myoglobin and cytochrome c was also demonstrated on a 3D-printed device. Limits of detection for rhodamine 110 were determined to be 2 and 0.3 nM for the 3D-printed and glass devices, respectively.

  5. Unassisted 3D camera calibration

    NASA Astrophysics Data System (ADS)

    Atanassov, Kalin; Ramachandra, Vikas; Nash, James; Goma, Sergio R.

    2012-03-01

    With the rapid growth of 3D technology, 3D image capture has become a critical part of the 3D feature set on mobile phones. 3D image quality is affected by the scene geometry as well as on-the-device processing. An automatic 3D system usually assumes known camera poses accomplished by factory calibration using a special chart. In real-life settings, pose parameters estimated by factory calibration can be negatively impacted by movements of the lens barrel due to shaking, focusing, or camera drop. If any of these factors displaces the optical axes of either or both cameras, vertical disparity might exceed the maximum tolerable margin and the 3D user may experience eye strain or headaches. To make 3D capture more practical, one needs to consider unassisted (on arbitrary scenes) calibration. In this paper, we propose an algorithm that relies on detection and matching of keypoints between left and right images. Frames containing erroneous matches, along with frames with insufficiently rich keypoint constellations, are detected and discarded. Roll, pitch, yaw, and scale differences between left and right frames are then estimated. The algorithm performance is evaluated in terms of the remaining vertical disparity as compared to the maximum tolerable vertical disparity.
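    The following OpenCV sketch illustrates the general flavour of keypoint-based self-calibration on an arbitrary stereo pair: detect and match features, reject frames with erroneous or too-few matches, fit an in-plane similarity (roll, scale, shift), and report the residual vertical disparity. It is a generic illustration, not the authors' pipeline; in particular, the pitch and yaw estimation the paper also performs is not covered by a 2D similarity fit.

```python
import cv2
import numpy as np

def vertical_disparity_check(left_gray, right_gray):
    """Generic keypoint-based check of a stereo pair (illustrative only)."""
    orb = cv2.ORB_create(2000)
    kL, dL = orb.detectAndCompute(left_gray, None)
    kR, dR = orb.detectAndCompute(right_gray, None)
    if dL is None or dR is None:
        return None                                  # keypoint constellation too poor; discard frame
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(dL, dR)
    if len(matches) < 8:
        return None                                  # not enough matches for a robust fit
    ptsL = np.float32([kL[m.queryIdx].pt for m in matches])
    ptsR = np.float32([kR[m.trainIdx].pt for m in matches])
    # Reject erroneous matches with a RANSAC fundamental-matrix fit.
    _, inliers = cv2.findFundamentalMat(ptsL, ptsR, cv2.FM_RANSAC, 1.0, 0.99)
    if inliers is None or inliers.sum() < 20:
        return None                                  # unreliable frame; discard
    ptsL, ptsR = ptsL[inliers.ravel() == 1], ptsR[inliers.ravel() == 1]
    # In-plane roll, scale, and translation of right relative to left (2D similarity).
    M, _ = cv2.estimateAffinePartial2D(ptsR, ptsL)
    if M is None:
        return None
    corrected = ptsR @ M[:, :2].T + M[:, 2]
    return {"vertical_disparity_px": float(np.median(np.abs(ptsL[:, 1] - ptsR[:, 1]))),
            "residual_after_correction_px": float(np.median(np.abs(ptsL[:, 1] - corrected[:, 1])))}

# Usage: pass two grayscale frames, e.g. loaded with cv2.imread(..., cv2.IMREAD_GRAYSCALE).
```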

  6. PLOT3D/AMES, GENERIC UNIX VERSION USING DISSPLA (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  7. PLOT3D/AMES, GENERIC UNIX VERSION USING DISSPLA (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  8. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  9. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  10. A NASA 3-D Flyby of Hurricane Seymour

    NASA Video Gallery

    This 3-D flyby animation, created from data gathered by the GPM core observatory satellite, shows the satellite's view of Hurricane Seymour on Oct. 25 at 7:46 am PDT (1646 UTC). GPM showed rain falling at the extreme ...

  11. 3D Surface Reconstruction and Automatic Camera Calibration

    NASA Technical Reports Server (NTRS)

    Jalobeanu, Andre

    2004-01-01

    This view-graph presentation illustrates a Bayesian approach to 3D surface reconstruction and camera calibration. Existing methods, surface analysis and modeling, preliminary surface reconstruction results, and potential applications are addressed.

  12. 3D Scan Systems Integration

    DTIC Science & Technology

    2007-11-02

    Final report for the US Defense Logistics Agency on DDFG-T2/P3: 3D Scan Systems Integration. Report date: 5 Feb 98. Contract number SPO100-95-D-1014; contractor: Ohio University; Delivery Order #0001, "3D Scan Systems Integration".

  13. Spacecraft 3D Augmented Reality Mobile App

    NASA Technical Reports Server (NTRS)

    Hussey, Kevin J.; Doronila, Paul R.; Kumanchik, Brian E.; Chan, Evan G.; Ellison, Douglas J.; Boeck, Andrea; Moore, Justin M.

    2013-01-01

    The Spacecraft 3D application allows users to learn about and interact with iconic NASA missions in a new and immersive way using common mobile devices. Using Augmented Reality (AR) techniques to project 3D renditions of the mission spacecraft into real-world surroundings, users can interact with and learn about Curiosity, GRAIL, Cassini, and Voyager. Additional updates on future missions, animations, and information will be ongoing. Using a printed AR Target and camera on a mobile device, users can get up close with these robotic explorers, see how some move, and learn about these engineering feats, which are used to expand knowledge and understanding about space. The software receives input from the mobile device's camera to recognize the presence of an AR marker in the camera's field of view. It then displays a 3D rendition of the selected spacecraft in the user's physical surroundings, on the mobile device's screen, while it tracks the device's movement in relation to the physical position of the spacecraft's 3D image on the AR marker.

  14. Quantitative comparison of interaction with shutter glasses and autostereoscopic displays

    NASA Astrophysics Data System (ADS)

    Alpaslan, Zahir Y.; Yeh, Shih-Ching; Rizzo, Albert A.; Sawchuk, Alexander A.

    2005-03-01

    In this paper we describe experimental measurements and comparison of human interaction with three different types of stereo computer displays. We compare traditional shutter glasses-based viewing with three-dimensional (3D) autostereoscopic viewing on displays such as the Sharp LL-151-3D display and StereoGraphics SG 202 display. The method of interaction is a sphere-shaped "cyberprop" containing an Ascension Flock-of-Birds tracker that allows a user to manipulate objects by imparting the motion of the sphere to the virtual object. The tracking data is processed with OpenGL to manipulate objects in virtual 3D space, from which we synthesize two or more images as seen by virtual cameras observing them. We concentrate on the quantitative measurement and analysis of human performance for interactive object selection and manipulation tasks using standardized and scalable configurations of 3D block objects. The experiments use a series of progressively more complex block configurations that are rendered in stereo on various 3D displays. In general, performing the tasks using shutter glasses required less time as compared to using the autostereoscopic displays. While both male and female subjects performed almost equally fast with shutter glasses, male subjects performed better with the LL-151-3D display, while female subjects performed better with the SG202 display. Interestingly, users generally had a slightly higher efficiency in completing a task set using the two autostereoscopic displays as compared to the shutter glasses, although the differences for all users among the displays were relatively small. There was a preference for shutter glasses compared to autostereoscopic displays in the ease of performing tasks, and glasses were slightly preferred for overall image quality and stereo image quality. However, there was little difference in display preference in physical comfort and overall preference. We present some possible explanations of these results and point

  15. On Alternative Approaches to 3D Image Perception: Monoscopic 3D Techniques

    NASA Astrophysics Data System (ADS)

    Blundell, Barry G.

    2015-06-01

    In the eighteenth century, techniques that enabled a strong sense of 3D perception to be experienced without recourse to binocular disparities (arising from the spatial separation of the eyes) underpinned the first significant commercial sales of 3D viewing devices and associated content. However following the advent of stereoscopic techniques in the nineteenth century, 3D image depiction has become inextricably linked to binocular parallax and outside the vision science and arts communities relatively little attention has been directed towards earlier approaches. Here we introduce relevant concepts and terminology and consider a number of techniques and optical devices that enable 3D perception to be experienced on the basis of planar images rendered from a single vantage point. Subsequently we allude to possible mechanisms for non-binocular parallax based 3D perception. Particular attention is given to reviewing areas likely to be thought-provoking to those involved in 3D display development, spatial visualization, HCI, and other related areas of interdisciplinary research.

  16. What is 3D good for? A review of human performance on stereoscopic 3D displays

    NASA Astrophysics Data System (ADS)

    McIntire, John P.; Havig, Paul R.; Geiselman, Eric E.

    2012-06-01

    This work reviews the human factors-related literature on the task performance implications of stereoscopic 3D displays, in order to point out the specific performance benefits (or lack thereof) one might reasonably expect to observe when utilizing these displays. What exactly is 3D good for? Relative to traditional 2D displays, stereoscopic displays have been shown to enhance performance on a variety of depth-related tasks. These tasks include judging absolute and relative distances, finding and identifying objects (by breaking camouflage and eliciting perceptual "pop-out"), performing spatial manipulations of objects (object positioning, orienting, and tracking), and navigating. More cognitively, stereoscopic displays can improve the spatial understanding of 3D scenes or objects, improve memory/recall of scenes or objects, and improve learning of spatial relationships and environments. However, for tasks that are relatively simple, that do not strictly require depth information for good performance, where other strong cues to depth can be utilized, or for depth tasks that lie outside the effective viewing volume of the display, the purported performance benefits of 3D may be small or altogether absent. Stereoscopic 3D displays come with a host of unique human factors problems including the simulator-sickness-type symptoms of eyestrain, headache, fatigue, disorientation, nausea, and malaise, which appear to affect large numbers of viewers (perhaps as many as 25% to 50% of the general population). Thus, 3D technology should be wielded delicately and applied carefully; and perhaps used only as is necessary to ensure good performance.

  17. 3D polymer scaffold arrays.

    PubMed

    Simon, Carl G; Yang, Yanyin; Dorsey, Shauna M; Ramalingam, Murugan; Chatterjee, Kaushik

    2011-01-01

    We have developed a combinatorial platform for fabricating tissue scaffold arrays that can be used for screening cell-material interactions. Traditional research involves preparing samples one at a time for characterization and testing. Combinatorial and high-throughput (CHT) methods lower the cost of research by reducing the amount of time and material required for experiments by combining many samples into miniaturized specimens. In order to help accelerate biomaterials research, many new CHT methods have been developed for screening cell-material interactions where materials are presented to cells as a 2D film or surface. However, biomaterials are frequently used to fabricate 3D scaffolds, cells exist in vivo in a 3D environment and cells cultured in a 3D environment in vitro typically behave more physiologically than those cultured on a 2D surface. Thus, we have developed a platform for fabricating tissue scaffold libraries where biomaterials can be presented to cells in a 3D format.

  18. Autofocus for 3D imaging

    NASA Astrophysics Data System (ADS)

    Lee-Elkin, Forest

    2008-04-01

    Three dimensional (3D) autofocus remains a significant challenge for the development of practical 3D multipass radar imaging. The current 2D radar autofocus methods are not readily extendable across sensor passes. We propose a general framework that allows a class of data adaptive solutions for 3D auto-focus across passes with minimal constraints on the scene contents. The key enabling assumption is that portions of the scene are sparse in elevation which reduces the number of free variables and results in a system that is simultaneously solved for scatterer heights and autofocus parameters. The proposed method extends 2-pass interferometric synthetic aperture radar (IFSAR) methods to an arbitrary number of passes allowing the consideration of scattering from multiple height locations. A specific case from the proposed autofocus framework is solved and demonstrates autofocus and coherent multipass 3D estimation across the 8 passes of the "Gotcha Volumetric SAR Data Set" X-Band radar data.

  19. Combinatorial 3D Mechanical Metamaterials

    NASA Astrophysics Data System (ADS)

    Coulais, Corentin; Teomy, Eial; de Reus, Koen; Shokef, Yair; van Hecke, Martin

    2015-03-01

    We present a class of elastic structures which exhibit 3D-folding motion. Our structures consist of cubic lattices of anisotropic unit cells that can be tiled in a complex combinatorial fashion. We design and 3D-print this complex ordered mechanism, in which we combine elastic hinges and defects to tailor the mechanics of the material. Finally, we use this large design space to encode smart functionalities such as surface patterning and multistability.

  20. PLOT3D/AMES, SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  1. PLOT3D/AMES, SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  2. 3-D Haiku: A New Way To Teach a Traditional Form.

    ERIC Educational Resources Information Center

    Tweedie, Sanford; Kolitsky, Michael A.

    2002-01-01

    Describes a three dimensional poetry genre--a way of rewriting two dimensional haiku in a three dimensional cube that can only be viewed in cyberspace. Discusses traditional versus 3-D haiku, introducing 3-D haiku into the classroom, reasons to teach 3-D haiku, and creating 3-D haiku. (RS)

  3. 3-D Perspective Pasadena, California

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This perspective view shows the western part of the city of Pasadena, California, looking north towards the San Gabriel Mountains. Portions of the cities of Altadena and La Cañada Flintridge are also shown. The image was created from three datasets: the Shuttle Radar Topography Mission (SRTM) supplied the elevation data; Landsat data from November 11, 1986 provided the land surface color (not the sky); and U.S. Geological Survey digital aerial photography provided the image detail. The Rose Bowl, surrounded by a golf course, is the circular feature at the bottom center of the image. The Jet Propulsion Laboratory is the cluster of large buildings north of the Rose Bowl at the base of the mountains. A large landfill, Scholl Canyon, is the smooth area in the lower left corner of the scene. This image shows the power of combining data from different sources to create planning tools to study problems that affect large urban areas. In addition to the well-known earthquake hazards, Southern California is affected by a natural cycle of fire and mudflows. Wildfires strip the mountains of vegetation, increasing the hazards from flooding and mudflows for several years afterwards. Data such as those shown in this image can be used to predict both how wildfires will spread over the terrain and also how mudflows will be channeled down the canyons. The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission was designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, an additional C-band imaging antenna and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency

  4. 3-D visualization of geologic structures and processes

    NASA Astrophysics Data System (ADS)

    Pflug, R.; Klein, H.; Ramshorn, Ch.; Genter, M.; Stärk, A.

    Interactive 3-D computer graphics techniques are used to visualize geologic structures and simulated geologic processes. Geometric models that serve as input to 3-D viewing programs are generated from contour maps, from serial sections, or directly from simulation program output. Choice of viewing parameters strongly affects the perception of irregular surfaces. An interactive 3-D rendering program and its graphical user interface provide visualization tools for structural geology, seismic interpretation, and visual post-processing of simulations. Dynamic display of transient ground-water simulations and sedimentary process simulations can visualize processes developing through time.

  5. Total body irradiation with a compensator fabricated using a 3D optical scanner and a 3D printer.

    PubMed

    Park, So-Yeon; Kim, Jung-In; Joo, Yoon Ha; Lee, Jung Chan; Park, Jong Min

    2017-05-07

    We propose bilateral total body irradiation (TBI) utilizing a 3D printer and a 3D optical scanner. We acquired surface information of an anthropomorphic phantom with the 3D scanner and fabricated the 3D compensator with the 3D printer, which could continuously compensate for the lateral missing tissue of an entire body from the beam's eye view. To test the system's performance, we measured doses with optically stimulated luminescent dosimeters (OSLDs) as well as EBT3 films with the anthropomorphic phantom during TBI without a compensator, conventional bilateral TBI, and TBI with the 3D compensator (3D TBI). The 3D TBI showed the most uniform dose delivery to the phantom. From the OSLD measurements of the 3D TBI, the deviations between the measured doses and the prescription dose ranged from  -6.7% to 2.4% inside the phantom and from  -2.3% to 0.6% on the phantom's surface. From the EBT3 film measurements, the prescription dose could be delivered to the entire body of the phantom within  ±10% accuracy, except for the chest region, where tissue heterogeneity is extreme. The 3D TBI doses were much more uniform than those of the other irradiation techniques, especially in the anterior-to-posterior direction. The 3D TBI was advantageous, owing to its uniform dose delivery as well as its efficient treatment procedure.

  6. 3D plasma camera for planetary missions

    NASA Astrophysics Data System (ADS)

    Berthomier, Matthieu; Morel, Xavier; Techer, Jean-Denis

    2014-05-01

    A new 3D field-of-view toroidal space plasma analyzer based on an innovative optical concept allows the coverage of a 4π sr solid angle with only two sensor heads. It fits the need for all-sky thermal plasma measurements on three-axis stabilized spacecraft, which are the most commonly used platforms for planetary missions. The 3D plasma analyzer also takes advantage of the new possibilities offered by the development of an ultra low-power multi-channel charge sensitive amplifier used for the imaging detector of the instrument. We present the design and measured performances of a prototype model that will fly on a test rocket in 2014.

  7. Illustrative visualization of 3D city models

    NASA Astrophysics Data System (ADS)

    Doellner, Juergen; Buchholz, Henrik; Nienhaus, Marc; Kirsch, Florian

    2005-03-01

    This paper presents an illustrative visualization technique that provides expressive representations of large-scale 3D city models, inspired by the tradition of artistic and cartographic visualizations typically found in bird's-eye view and panoramic maps. We define a collection of city model components and a real-time multi-pass rendering algorithm that achieves comprehensible, abstract 3D city model depictions based on edge enhancement, color-based and shadow-based depth cues, and procedural facade texturing. Illustrative visualization provides an effective visual interface to urban spatial information and associated thematic information complementing visual interfaces based on the Virtual Reality paradigm, offering a huge potential for graphics design. Primary application areas include city and landscape planning, cartoon worlds in computer games, and tourist information systems.

  8. The view from the looking glass: how are narcissistic individuals perceived by others?

    PubMed

    Malkin, Mallory L; Zeigler-Hill, Virgil; Barry, Christopher T; Southard, Ashton C

    2013-02-01

    Previous studies have found that narcissistic individuals are often viewed negatively by those who know them well. The present study sought to extend these previous findings by examining whether normal and pathological aspects of narcissism were associated with perceiver ratings of narcissistic characteristics and aggression. This was accomplished by having each of our undergraduate participants (288 targets) recruit friends or family members to complete ratings of the target who recruited them (1,296 perceivers). Results revealed that perceived entitlement was strongly associated with perceived aggression. Further, self-reported levels of pathological narcissism moderated these results such that vulnerable narcissism exacerbated the association between perceived entitlement and aggression, whereas grandiose narcissism mitigated the association. The discussion will focus on the implications of these results for understanding the various features of narcissism.

  9. Real time 3D scanner: investigations and results

    NASA Astrophysics Data System (ADS)

    Nouri, Taoufik; Pflug, Leopold

    1993-12-01

    This article presents a concept for the reconstruction of 3-D objects using non-invasive, non-contact techniques. The principle of this method is to project parallel interference optical fringes onto an object and then to record the object from two angles of view. With appropriate processing, the 3-D object is reconstructed even when it has no plane of symmetry. The 3-D surface data is available immediately in digital form for computer visualization and for analysis software tools. The optical set-up for recording the 3-D object, the 3-D data extraction and processing, as well as the reconstruction of the 3-D object are reported and commented on. This application is intended for reconstructive/cosmetic surgery, CAD, animation and research purposes.

  10. YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters

    NASA Astrophysics Data System (ADS)

    Schild, Jonas; Seele, Sven; Masuch, Maic

    2012-03-01

    Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies become attractive for movie theater operators, i.e. interactive 3D games. In this paper, we present a case that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features live 3D HD video stream of a professional stereo camera rig rendered in a real-time game scene. We use the effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide for a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.

  11. 3D Face modeling using the multi-deformable method.

    PubMed

    Hwang, Jinkyu; Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun

    2012-09-25

    In this paper, we focus on the problem of the accuracy performance of 3D face modeling techniques using corresponding features in multiple views, which is quite sensitive to feature extraction errors. To solve the problem, we adopt a statistical model-based 3D face modeling approach in a mirror system consisting of two mirrors and a camera. The overall procedure of our 3D facial modeling method has two primary steps: 3D facial shape estimation using a multiple 3D face deformable model and texture mapping using seamless cloning, a type of gradient-domain blending. To evaluate our method's performance, we generate 3D faces of 30 individuals and then carry out two tests: an accuracy test and a robustness test. Our method shows not only highly accurate 3D face shape results when compared with the ground truth, but also robustness to feature extraction errors. Moreover, 3D face rendering results intuitively show that our method is more robust to feature extraction errors than other 3D face modeling methods. An additional contribution of our method is that a wide range of face textures can be acquired by the mirror system. By using this texture map, we generate realistic 3D faces for individuals at the end of the paper.

  12. Usability of stereoscopic view in teleoperation

    NASA Astrophysics Data System (ADS)

    Boonsuk, Wutthigrai

    2015-03-01

    Recently, there has been tremendous growth in the area of 3D stereoscopic visualization. 3D stereoscopic visualization technology has been used in a growing number of consumer products such as 3D televisions and 3D glasses for gaming systems. This technology relies on the idea that the human brain develops depth perception by retrieving information from the two eyes. Our brain combines the left and right images on the retinas and extracts depth information. Therefore, viewing two video images taken from positions slightly apart, as shown in Figure 1, can create an illusion of depth [8]. Proponents of this technology argue that the stereo view of 3D visualization increases user immersion and performance, as more information is gained through 3D vision compared to a 2D view. However, it is still uncertain whether the additional information gained from 3D stereoscopic visualization can actually improve user performance in real-world situations such as teleoperation.

  13. Macrophage podosomes go 3D.

    PubMed

    Van Goethem, Emeline; Guiet, Romain; Balor, Stéphanie; Charrière, Guillaume M; Poincloux, Renaud; Labrousse, Arnaud; Maridonneau-Parini, Isabelle; Le Cabec, Véronique

    2011-01-01

    Macrophage tissue infiltration is a critical step in the immune response against microorganisms and is also associated with disease progression in chronic inflammation and cancer. Macrophages are constitutively equipped with specialized structures called podosomes dedicated to extracellular matrix (ECM) degradation. We recently reported that these structures play a critical role in the trans-matrix mesenchymal migration mode, a protease-dependent mechanism. Podosome molecular components and their ECM-degrading activity have been extensively studied in two dimensions (2D), yet very little is known about their fate in three-dimensional (3D) environments. Therefore, localization of podosome markers and proteolytic activity were carefully examined in human macrophages performing mesenchymal migration. Using our gelled collagen I 3D matrix model to obligate human macrophages to perform mesenchymal migration, classical podosome markers including talin, paxillin, vinculin, gelsolin, and cortactin were found to accumulate at the tip of F-actin-rich cell protrusions together with β1 integrin and CD44 but not β2 integrin. Macrophage proteolytic activity was observed at podosome-like protrusion sites using confocal fluorescence microscopy and electron microscopy. The formation of migration tunnels by macrophages inside the matrix was accomplished by degradation, engulfment and mechanical compaction of the matrix. In addition, videomicroscopy revealed that 3D F-actin-rich protrusions of migrating macrophages were as dynamic as their 2D counterparts. Overall, the specifications of 3D podosomes resembled those of 2D podosome rosettes rather than those of individual podosomes. This observation was further supported by the aspect of 3D podosomes in fibroblasts expressing Hck, a master regulator of podosome rosettes in macrophages. In conclusion, human macrophage podosomes go 3D and take the shape of spherical podosome rosettes when the cells perform mesenchymal migration. This work

  14. 3D Printed Bionic Nanodevices.

    PubMed

    Kong, Yong Lin; Gupta, Maneesh K; Johnson, Blake N; McAlpine, Michael C

    2016-06-01

    The ability to three-dimensionally interweave biological and functional materials could enable the creation of bionic devices possessing unique and compelling geometries, properties, and functionalities. Indeed, interfacing high performance active devices with biology could impact a variety of fields, including regenerative bioelectronic medicines, smart prosthetics, medical robotics, and human-machine interfaces. Biology, from the molecular scale of DNA and proteins, to the macroscopic scale of tissues and organs, is three-dimensional, often soft and stretchable, and temperature sensitive. This renders most biological platforms incompatible with the fabrication and materials processing methods that have been developed and optimized for functional electronics, which are typically planar, rigid and brittle. A number of strategies have been developed to overcome these dichotomies. One particularly novel approach is the use of extrusion-based multi-material 3D printing, which is an additive manufacturing technology that offers a freeform fabrication strategy. This approach addresses the dichotomies presented above by (1) using 3D printing and imaging for customized, hierarchical, and interwoven device architectures; (2) employing nanotechnology as an enabling route for introducing high performance materials, with the potential for exhibiting properties not found in the bulk; and (3) 3D printing a range of soft and nanoscale materials to enable the integration of a diverse palette of high quality functional nanomaterials with biology. Further, 3D printing is a multi-scale platform, allowing for the incorporation of functional nanoscale inks, the printing of microscale features, and ultimately the creation of macroscale devices. This blending of 3D printing, novel nanomaterial properties, and 'living' platforms may enable next-generation bionic systems. In this review, we highlight this synergistic integration of the unique properties of nanomaterials with the

  15. 3D Computations and Experiments

    SciTech Connect

    Couch, R; Faux, D; Goto, D; Nikkel, D

    2004-04-05

    This project consists of two activities. Task A, Simulations and Measurements, combines all the material model development and associated numerical work with the materials-oriented experimental activities. The goal of this effort is to provide an improved understanding of dynamic material properties and to provide accurate numerical representations of those properties for use in analysis codes. Task B, ALE3D Development, involves general development activities in the ALE3D code with the focus of improving simulation capabilities for problems of mutual interest to DoD and DOE. Emphasis is on problems involving multi-phase flow, blast loading of structures and system safety/vulnerability studies.

  16. Subsampling models and anti-alias filters for 3-D automultiscopic displays.

    PubMed

    Konrad, Janusz; Agniel, Philippe

    2006-01-01

    A new type of three-dimensional (3-D) display recently introduced on the market holds great promise for the future of 3-D visualization, communication, and entertainment. This so-called automultiscopic display can deliver multiple views without glasses, thus allowing a limited "look-around" (correct motion-parallax). Central to this technology is the process of multiplexing several views into a single viewable image. This multiplexing is a complex process involving irregular subsampling of the original views. If not preceded by low-pass filtering, it results in aliasing that leads to texture as well as depth distortions. In order to eliminate this aliasing, we propose to model the multiplexing process with lattices, find their parameters and then design optimal anti-alias filters. To this effect, we use multidimensional sampling theory and basic optimization tools. We derive optimal anti-alias filters for a specific automultiscopic monitor using three models: the orthogonal lattice, the nonorthogonal lattice, and the union of shifted lattices. In the first case, the resulting separable low-pass filter offers significant aliasing reduction that is further improved by hexagonal-passband low-pass filter for the nonorthogonal lattice model. A more accurate model is obtained using union of shifted lattices, but due to the complex nature of repeated spectra, practical filters designed in this case offer no additional improvement. We also describe a practical method to design finite-precision, low-complexity filters that can be implemented using modern graphics cards.
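
    For the simplest orthogonal-lattice model described above, the anti-alias step amounts to a separable low-pass filter whose cutoffs match the horizontal and vertical subsampling factors of one view. The Python sketch below illustrates that case only; the factors, tap count, and filter design choice (windowed FIR via SciPy) are illustrative assumptions rather than parameters taken from the paper, and the nonorthogonal and union-of-lattices models would require the non-separable designs the authors describe.

```python
import numpy as np
from scipy.signal import firwin
from scipy.ndimage import convolve1d

def antialias_then_subsample(view, nx=8, ny=3, ntaps=31):
    """Pre-filter one view for an orthogonal-lattice multiplexing model,
    then keep only the pixels the display shows for this view.

    nx, ny: horizontal/vertical subsampling factors implied by the optics
    (illustrative values only).
    """
    hx = firwin(ntaps, 1.0 / nx)   # low-pass FIR, cutoff as a fraction of Nyquist
    hy = firwin(ntaps, 1.0 / ny)
    filtered = convolve1d(view.astype(float), hx, axis=1, mode="reflect")
    filtered = convolve1d(filtered, hy, axis=0, mode="reflect")
    return filtered[::ny, ::nx]
```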

  17. 3D Printing: Exploring Capabilities

    ERIC Educational Resources Information Center

    Samuels, Kyle; Flowers, Jim

    2015-01-01

    As 3D printers become more affordable, schools are using them in increasing numbers. They fit well with the emphasis on product design in technology and engineering education, allowing students to create high-fidelity physical models to see and test different iterations in their product designs. They may also help students to "think in three…

  18. SNL3dFace

    SciTech Connect

    Russ, Trina; Koch, Mark; Koudelka, Melissa; Peters, Ralph; Little, Charles; Boehnen, Chris; Peters, Tanya

    2007-07-20

    This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at a 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regulation and rate of deformation. A variety of options are available to control the information that is encoded by the PCA. Such options include the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection this software supports feature matching to obtain similarity matrices for performance analysis. In addition, this software supports visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.
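
    As a rough, simplified illustration of the PCA stage described above (not the distributed MATLAB/C++ implementation), the sketch below projects ICP-aligned faces onto a PCA subspace; it assumes every aligned face has been resampled to a common vertex ordering, and the component count is arbitrary. The deformation, FLDA, and matching stages are omitted.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_face_features(aligned_faces, n_components=20):
    """Project ICP-aligned 3D faces onto a PCA subspace.

    aligned_faces: array of shape (n_faces, n_vertices, 3) with a common
    vertex ordering (an assumption of this sketch). n_components is an
    illustrative choice and must not exceed the number of faces.
    """
    n_faces = aligned_faces.shape[0]
    X = aligned_faces.reshape(n_faces, -1)   # flatten the XYZ coordinates per face
    pca = PCA(n_components=n_components)
    features = pca.fit_transform(X)          # one low-dimensional feature vector per face
    return features, pca
```

    Similarity matrices for performance analysis could then be built by comparing these feature vectors, for example with cosine similarity.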

  19. Making Inexpensive 3-D Models

    ERIC Educational Resources Information Center

    Manos, Harry

    2016-01-01

    Visual aids are important to student learning, and they help make the teacher's job easier. Keeping with the "TPT" theme of "The Art, Craft, and Science of Physics Teaching," the purpose of this article is to show how teachers, lacking equipment and funds, can construct a durable 3-D model reference frame and a model gravity…

  20. Accommodation response measurements for integral 3D image

    NASA Astrophysics Data System (ADS)

    Hiura, H.; Mishina, T.; Arai, J.; Iwadate, Y.

    2014-03-01

    We measured accommodation responses under integral photography (IP), binocular stereoscopic, and real object display conditions, under both binocular and monocular viewing. The equipment we used was an optometric device and a 3D display. We developed the 3D display for IP and binocular stereoscopic images that comprises a high-resolution liquid crystal display (LCD) and a high-density lens array. The LCD has a resolution of 468 dpi and a diagonal size of 4.8 inches. The high-density lens array comprises 106 x 69 micro lenses that have a focal length of 3 mm and diameter of 1 mm. The lenses are arranged in a honeycomb pattern. The 3D display was positioned 60 cm from an observer under IP and binocular stereoscopic display conditions. The target was presented at eight depth positions relative to the 3D display: 15, 10, and 5 cm in front of the 3D display, on the 3D display panel, and 5, 10, 15 and 30 cm behind the 3D display under the IP and binocular stereoscopic display conditions. Under the real object display condition, the target was displayed on the 3D display panel, and the 3D display was placed at the eight positions. The results suggest that the IP image induced more natural accommodation responses compared to the binocular stereoscopic image. The accommodation responses of the IP image were weaker than those of a real object; however, they showed a similar tendency to those of the real object under the two viewing conditions. Therefore, IP can induce accommodation to the depth positions of 3D images.

  1. 3D FaceCam: a fast and accurate 3D facial imaging device for biometrics applications

    NASA Astrophysics Data System (ADS)

    Geng, Jason; Zhuang, Ping; May, Patrick; Yi, Steven; Tunnell, David

    2004-08-01

    Human faces are fundamentally three-dimensional (3D) objects, and each face has its unique 3D geometric profile. The 3D geometric features of a human face can be used, together with its 2D texture, for rapid and accurate face recognition purposes. Due to the lack of low-cost and robust 3D sensors and effective 3D facial recognition (FR) algorithms, almost all existing FR systems use 2D face images. Genex has developed 3D solutions that overcome the inherent problems in 2D while also addressing limitations in other 3D alternatives. One important aspect of our solution is a unique 3D camera (the 3D FaceCam) that combines multiple imaging sensors within a single compact device to provide instantaneous, ear-to-ear coverage of a human face. This 3D camera uses three high-resolution CCD sensors and a color encoded pattern projection system. The RGB color information from each pixel is used to compute the range data and generate an accurate 3D surface map. The imaging system uses no moving parts and combines multiple 3D views to provide detailed and complete 3D coverage of the entire face. Images are captured within a fraction of a second and full-frame 3D data is produced within a few seconds. This described method provides much better data coverage and accuracy in feature areas with sharp features or details (such as the nose and eyes). Using this 3D data, we have been able to demonstrate that a 3D approach can significantly improve the performance of facial recognition. We have conducted tests in which we have varied the lighting conditions and angle of image acquisition in the "field." These tests have shown that the matching results are significantly improved when enrolling a 3D image rather than a single 2D image. With its 3D solutions, Genex is working toward unlocking the promise of powerful 3D FR and transferring FR from a lab technology into a real-world biometric solution.

  2. Insect stereopsis demonstrated using a 3D insect cinema.

    PubMed

    Nityananda, Vivek; Tarawneh, Ghaith; Rosner, Ronny; Nicolas, Judith; Crichton, Stuart; Read, Jenny

    2016-01-07

    Stereopsis - 3D vision - has become widely used as a model of perception. However, all our knowledge of possible underlying mechanisms comes almost exclusively from vertebrates. While stereopsis has been demonstrated for one invertebrate, the praying mantis, a lack of techniques to probe invertebrate stereopsis has prevented any further progress for three decades. We therefore developed a stereoscopic display system for insects, using miniature 3D glasses to present separate images to each eye, and tested our ability to deliver stereoscopic illusions to praying mantises. We find that while filtering by circular polarization failed due to excessive crosstalk, "anaglyph" filtering by spectral content clearly succeeded in giving the mantis the illusion of 3D depth. We thus definitively demonstrate stereopsis in mantises and also demonstrate that the anaglyph technique can be effectively used to deliver virtual 3D stimuli to insects. This method opens up broad avenues of research into the parallel evolution of stereoscopic computations and possible new algorithms for depth perception.

  3. Fish body surface data measurement based on 3D digital image correlation

    NASA Astrophysics Data System (ADS)

    Jiang, Ming; Qian, Chen; Yang, Wenkai

    2016-01-01

    When filming a moving fish in a glass tank, light is bent at the air-glass and glass-water interfaces. Based on binocular stereo vision and the refraction principle, we establish a mathematical model of 3D image correlation to reconstruct the 3D coordinates of samples in the water. By marking speckles on the fish surface, a series of real-time speckle images of the swimming fish is obtained by two high-speed cameras, and the instantaneous 3D shape, strain, displacement, etc. of the fish are reconstructed.
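
    The refraction at each interface can be handled with the vector form of Snell's law, one building block of the air-glass-water camera model mentioned above. The sketch below bends a ray direction at a planar interface; the refractive indices and the example geometry are illustrative assumptions, not values from the paper.

```python
import numpy as np

def refract(d, n, n1, n2):
    """Refract ray direction d at a planar interface with normal n.

    n points back toward the incident medium (against the ray); n1 and n2 are
    the refractive indices of the incident and transmitting media.
    Returns None on total internal reflection.
    """
    d = d / np.linalg.norm(d)
    n = n / np.linalg.norm(n)
    eta = n1 / n2
    cos_i = -np.dot(d, n)
    sin2_t = eta**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None                      # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n

# Example: a ray crossing from air (n=1.00) into glass (n=1.52) through a vertical wall.
ray_in_glass = refract(np.array([0.8, 0.0, -0.6]), np.array([-1.0, 0.0, 0.0]), 1.00, 1.52)
```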

  4. Design of a single projector multiview 3D display system

    NASA Astrophysics Data System (ADS)

    Geng, Jason

    2014-03-01

    Multiview three-dimensional (3D) display is able to provide horizontal parallax to viewers with high-resolution and full-color images being presented to each view. Most multiview 3D display systems are designed and implemented using multiple projectors, each generating images for one view. Although this multi-projector design strategy is conceptually straightforward, implementation of such a multi-projector design often leads to a very expensive system and complicated calibration procedures. Even for a multiview system with a moderate number of projectors (e.g., 32 or 64 projectors), the cost of a multi-projector 3D display system may become prohibitive due to the cost and complexity of integrating multiple projectors. In this article, we describe an optical design technique for a class of multiview 3D display systems that use only a single projector. In this single projector multiview (SPM) system design, multiple views for the 3D display are generated in a time-multiplexed fashion by the single high-speed projector with specially designed optical components, a scanning mirror, and a reflective mirror array. Images of all views are generated sequentially and projected via the specially designed optical system from different viewing directions towards a 3D display screen. Therefore, the single projector is able to generate an equivalent number of multiview images from multiple viewing directions, thus fulfilling the tasks of multiple projectors. An obvious advantage of the proposed SPM technique is the significant reduction of cost, size, and complexity, especially when the number of views is high. The SPM strategy also alleviates the time-consuming procedures for multi-projector calibration. The design method is flexible and scalable and can accommodate systems with different numbers of views.

  5. 3D Imaging with Holographic Tomography

    NASA Astrophysics Data System (ADS)

    Sheppard, Colin J. R.; Kou, Shan Shan

    2010-04-01

    There are two main types of tomography that enable the 3D internal structures of objects to be reconstructed from scattered data. The commonly known computerized tomography (CT) gives good results in the x-ray wavelength range where the filtered back-projection theorem and Radon transform can be used. These techniques rely on the Fourier projection-slice theorem where rays are considered to propagate straight through the object. Another type of tomography, called 'diffraction tomography', applies in optics and acoustics where diffraction and scattering effects must be taken into account. The latter proves to be a more difficult problem, as light no longer travels straight through the sample. Holographic tomography is a popular way of performing diffraction tomography and there has recently been active experimental research on reconstructing complex refractive index data using this approach. However, there are two distinct ways of doing tomography: either by rotation of the object or by rotation of the illumination while fixing the detector. The difference between these two setups is intuitive but needs to be quantified. From a Fourier optics and information transformation point of view, we use 3D transfer function analysis to quantitatively describe how spatial frequencies of the object are mapped to the Fourier domain. We first employ a paraxial treatment by calculating the Fourier transform of the defocused OTF. The shape of the calculated 3D CTF for tomography, by scanning the illumination in one direction only, takes on a form that we might call a 'peanut,' compared to the case of object rotation, where a diablo is formed, the peanut exhibiting significant differences and non-isotropy. In particular, there is a line singularity along one transverse direction. Under high numerical aperture conditions, the paraxial treatment is not accurate, and so we make use of 3D analytical geometry to calculate the behaviour in the non-paraxial case. This time, we
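
    For reference, the straight-ray (CT) case rests on the Fourier projection-slice theorem, which in two dimensions can be written as below; the notation here is ours, not the authors'. Diffraction tomography replaces the flat central slice with a curved (Ewald-sphere) surface, which is why the transfer-function analysis above is needed.

```latex
% Projection of f(x,y) along the direction perpendicular to angle \theta:
P_\theta(t) = \int_{-\infty}^{\infty}
  f(t\cos\theta - s\sin\theta,\; t\sin\theta + s\cos\theta)\, \mathrm{d}s
% Projection-slice theorem: the 1D Fourier transform of the projection is a
% central slice of the 2D Fourier transform of the object.
\hat{P}_\theta(\omega) = \hat{f}(\omega\cos\theta,\; \omega\sin\theta)
```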

  6. Rear-cross-lenticular 3D display without eyeglasses

    NASA Astrophysics Data System (ADS)

    Morishima, Hideki; Nose, Hiroyasu; Taniguchi, Naosato; Inoguchi, Kazutaka; Matsumura, Susumu

    1998-04-01

    We have developed a prototype 3D display system without any eyeglasses, which we call the `Rear Cross Lenticular 3D Display' (RCL3D); it is very compact and produces a high-quality 3D image. The RCL3D consists of an LCD panel, two lenticular lens sheets which run perpendicular to each other, a Checkered Pattern Mask and a backlight panel. On the LCD panel, a composite image, which consists of alternately arranged horizontally striped images for the right eye and left eye, is displayed. This composite image form is compatible with the field-sequential stereoscopic image data format. The light from the backlight panel goes through the apertures of the Checkered Pattern Mask, illuminates the horizontal lines of the images for the right eye and left eye on the LCD, and goes to the right-eye position and left-eye position separately by the function of the two lenticular lens sheets. With this principle, the RCL3D shows a 3D image to an observer without any eyeglasses. We applied a simulation of the viewing zone, using random ray tracing, to the RCL3D and found that the illuminated areas for the right eye and left eye are clearly separated as series of alternating vertical stripes. We will present the prototype of the RCL3D (14.5 inch, XGA) and simulation results.
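
    The composite image described above is simply a row-interleaving of the left-eye and right-eye images. A minimal sketch follows; the assignment of even rows to the right eye is an assumption for illustration and may be the opposite of the prototype's convention.

```python
import numpy as np

def interleave_rows(left, right, right_on_even=True):
    """Build the row-interleaved composite image for a line-multiplexed 3D display."""
    assert left.shape == right.shape, "the two views must have identical dimensions"
    composite = np.empty_like(left)
    if right_on_even:
        composite[0::2] = right[0::2]   # even rows come from the right-eye image
        composite[1::2] = left[1::2]    # odd rows come from the left-eye image
    else:
        composite[0::2] = left[0::2]
        composite[1::2] = right[1::2]
    return composite
```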

  7. Optical 3D imaging and visualization of concealed objects

    NASA Astrophysics Data System (ADS)

    Berginc, G.; Bellet, J.-B.; Berechet, I.; Berechet, S.

    2016-09-01

    This paper gives new insights on optical 3D imagery. We explore the advantages of laser imagery for forming a three-dimensional image of the scene. 3D laser imaging can be used for three-dimensional medical imaging and surveillance because of its ability to identify tumors or concealed objects. We consider the problem of 3D reconstruction based upon 2D angle-dependent laser images. The objective of this new 3D laser imaging is to provide users a complete 3D reconstruction of objects from available 2D data limited in number. The 2D laser data used in this paper come from simulations that are based on the calculation of the laser interactions with the different meshed objects of the scene of interest or from experimental 2D laser images. We show that combining the Radon transform on 2D laser images with the Maximum Intensity Projection can generate 3D views of the considered scene from which we can extract the 3D concealed object in real time. With different original numerical or experimental examples, we investigate the effects of the input contrasts. We show the robustness and the stability of the method. We have developed a new patented method of 3D laser imaging based on three-dimensional reflective tomographic reconstruction algorithms and an associated visualization method. In this paper we present the global 3D reconstruction and visualization procedures.
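
    The Maximum Intensity Projection step mentioned above is straightforward to express on a reconstructed intensity volume: for each line of sight, keep only the brightest voxel. The sketch below assumes the volume is already available as a NumPy array indexed (z, y, x); it does not reproduce the tomographic reconstruction itself.

```python
import numpy as np

def max_intensity_projection(volume, axis=0):
    """Collapse a 3D intensity volume to a 2D image by keeping, along `axis`,
    only the brightest voxel for each line of sight."""
    return volume.max(axis=axis)

# Example with a synthetic 32 x 64 x 64 volume, projected along z.
vol = np.random.rand(32, 64, 64)
mip_xy = max_intensity_projection(vol, axis=0)   # result has shape (64, 64)
```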

  8. Multi-resolution optical 3D sensor

    NASA Astrophysics Data System (ADS)

    Kühmstedt, Peter; Heinze, Matthias; Schmidt, Ingo; Breitbarth, Martin; Notni, Gunther

    2007-06-01

    A new multi-resolution, self-calibrating optical 3D measurement system using the fringe projection technique, named "kolibri FLEX multi", will be presented. It can be utilised to acquire the all-around shape of small to medium objects simultaneously. The basic measurement principle is the phasogrammetric approach /1,2,3/ in combination with the method of virtual landmarks for the merging of the 3D single views. The system consists of a minimum of two fringe projection sensors. The sensors are mounted on a rotation stage illuminating the object from different directions. The measurement fields of the sensors can be chosen differently; here, as an example, 40 mm and 180 mm in diameter. In the measurement, the object can be scanned with these two resolutions at the same time. Using the method of virtual landmarks, both point clouds are calculated within the same world coordinate system, resulting in a common 3D point cloud. The final point cloud includes an overview of the object with low point density (wide field) and a region with high point density (focussed view) at the same time. The advantage of the new method is the possibility to measure with different resolutions at the same object region without any mechanical changes in the system or data post-processing. Typical parameters of the system are a measurement time of 2 min for 12 images and a measurement accuracy ranging from below 3 μm up to 10 μm. The flexibility makes the measurement system useful for a wide range of applications such as quality control, rapid prototyping, design and CAD/CAM, which will be shown in the paper.

  9. The design of red-blue 3D video fusion system based on DM642

    NASA Astrophysics Data System (ADS)

    Fu, Rongguo; Luo, Hao; Lv, Jin; Feng, Shu; Wei, Yifang; Zhang, Hao

    2016-10-01

    To address the uncertainty in traditional 3D video capture, including camera focal lengths and the distance and angle between the two cameras, a red-blue 3D video fusion system with parallel optical axes is designed on the DM642 hardware processing platform. To counter the brightness reduction typical of traditional 3D video, a brightness enhancement algorithm based on human visual characteristics and a luminance component processing method based on the YCbCr color space are proposed. The BIOS real-time operating system is used to improve the real-time performance. The DM642-based video processing circuit enhances the brightness of the images, converts the video signals from YCbCr to RGB, extracts the R component from one camera and the G and B components from the other synchronously, and finally outputs the fused 3D images. Real-time adjustments such as translation and scaling of the two color components are realized through serial communication between the VC software and the BIOS. By adding the red and blue components, the system reduces the loss of the chrominance components and keeps the picture's color saturation at more than 95% of the original. An optimized enhancement algorithm reduces the amount of data processed during fusion, shortening the fusion time and improving the viewing experience. Experimental results show that the system can capture images at near distance, output red-blue 3D video, and provide a pleasing experience to audiences wearing red-blue glasses.
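
    The core fusion step, taking the R component from one camera and the G and B components from the other, can be sketched as below. The YCbCr-domain brightness enhancement performed on the DM642 is omitted, and the particular channel assignment (red from the left camera) is an assumption of this sketch rather than a detail taken from the paper.

```python
import numpy as np

def fuse_red_blue(left_rgb, right_rgb):
    """Fuse two RGB frames into a red-blue (red-cyan) anaglyph frame.

    left_rgb, right_rgb: arrays of shape (H, W, 3) from the two parallel cameras.
    """
    assert left_rgb.shape == right_rgb.shape, "frames must have the same size"
    anaglyph = np.empty_like(left_rgb)
    anaglyph[..., 0] = left_rgb[..., 0]      # R channel from the left camera
    anaglyph[..., 1:] = right_rgb[..., 1:]   # G and B channels from the right camera
    return anaglyph
```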

  10. Volumetric image display for complex 3D data visualization

    NASA Astrophysics Data System (ADS)

    Tsao, Che-Chih; Chen, Jyh Shing

    2000-05-01

    A volumetric image display (VID) is a new display technology capable of displaying computer-generated 3D images in a volumetric space. Many viewers can walk around the display and see the image from all directions simultaneously without wearing any glasses. The image is real and possesses all major elements of both physiological and psychological depth cues. Due to the volumetric nature of its image, the VID can provide the most natural human-machine interface in operations involving 3D data manipulation and 3D target monitoring. The technology creates volumetric 3D images by projecting a series of profiling images distributed in space, which form a volumetric image because of the after-image effect of human eyes. Exemplary applications in biomedical image visualization were tested on a prototype display, using different methods to display a data set from CT scans. The features of this display technology make it most suitable for applications that require quick understanding of 3D relations, need frequent spatial interactions with the 3D images, or involve time-varying 3D data. It can also be useful for group discussion and decision making.

  11. Streamlined, Inexpensive 3D Printing of the Brain and Skull

    PubMed Central

    Cash, Sydney S.

    2015-01-01

    Neuroimaging technologies such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) collect three-dimensional data (3D) that is typically viewed on two-dimensional (2D) screens. Actual 3D models, however, allow interaction with real objects such as implantable electrode grids, potentially improving patient specific neurosurgical planning and personalized clinical education. Desktop 3D printers can now produce relatively inexpensive, good quality prints. We describe our process for reliably generating life-sized 3D brain prints from MRIs and 3D skull prints from CTs. We have integrated a standardized, primarily open-source process for 3D printing brains and skulls. We describe how to convert clinical neuroimaging Digital Imaging and Communications in Medicine (DICOM) images to stereolithography (STL) files, a common 3D object file format that can be sent to 3D printing services. We additionally share how to convert these STL files to machine instruction gcode files, for reliable in-house printing on desktop, open-source 3D printers. We have successfully printed over 19 patient brain hemispheres from 7 patients on two different open-source desktop 3D printers. Each brain hemisphere costs approximately $3–4 in consumable plastic filament as described, and the total process takes 14–17 hours, almost all of which is unsupervised (preprocessing = 4–6 hr; printing = 9–11 hr, post-processing = <30 min). Printing a matching portion of a skull costs $1–5 in consumable plastic filament and takes less than 14 hr, in total. We have developed a streamlined, cost-effective process for 3D printing brain and skull models. We surveyed healthcare providers and patients who confirmed that rapid-prototype patient specific 3D models may help interdisciplinary surgical planning and patient education. The methods we describe can be applied for other clinical, research, and educational purposes. PMID:26295459

  12. Streamlined, Inexpensive 3D Printing of the Brain and Skull.

    PubMed

    Naftulin, Jason S; Kimchi, Eyal Y; Cash, Sydney S

    2015-01-01

    Neuroimaging technologies such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) collect three-dimensional data (3D) that is typically viewed on two-dimensional (2D) screens. Actual 3D models, however, allow interaction with real objects such as implantable electrode grids, potentially improving patient specific neurosurgical planning and personalized clinical education. Desktop 3D printers can now produce relatively inexpensive, good quality prints. We describe our process for reliably generating life-sized 3D brain prints from MRIs and 3D skull prints from CTs. We have integrated a standardized, primarily open-source process for 3D printing brains and skulls. We describe how to convert clinical neuroimaging Digital Imaging and Communications in Medicine (DICOM) images to stereolithography (STL) files, a common 3D object file format that can be sent to 3D printing services. We additionally share how to convert these STL files to machine instruction gcode files, for reliable in-house printing on desktop, open-source 3D printers. We have successfully printed over 19 patient brain hemispheres from 7 patients on two different open-source desktop 3D printers. Each brain hemisphere costs approximately $3-4 in consumable plastic filament as described, and the total process takes 14-17 hours, almost all of which is unsupervised (preprocessing = 4-6 hr; printing = 9-11 hr, post-processing = <30 min). Printing a matching portion of a skull costs $1-5 in consumable plastic filament and takes less than 14 hr, in total. We have developed a streamlined, cost-effective process for 3D printing brain and skull models. We surveyed healthcare providers and patients who confirmed that rapid-prototype patient specific 3D models may help interdisciplinary surgical planning and patient education. The methods we describe can be applied for other clinical, research, and educational purposes.

  13. Forensic 3D scene reconstruction

    NASA Astrophysics Data System (ADS)

    Little, Charles Q.; Small, Daniel E.; Peters, Ralph R.; Rigdon, J. B.

    2000-05-01

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a fieldable prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.

  14. 3D Printed Robotic Hand

    NASA Technical Reports Server (NTRS)

    Pizarro, Yaritzmar Rosario; Schuler, Jason M.; Lippitt, Thomas C.

    2013-01-01

    Dexterous robotic hands are changing the way robots and humans interact and use common tools. Unfortunately, the complexity of the joints and actuations drives up the manufacturing cost. Some cutting-edge and commercially available rapid prototyping machines now have the ability to print multiple materials and even combine these materials in the same job. A 3D model of a robotic hand was designed using Creo Parametric 2.0. Combining "hard" and "soft" materials, the model was printed on the Objet Connex350 3D printer with the purpose of resembling the appearance and mobility of a real human hand as closely as possible while needing no assembly. After printing the prototype, strings were installed as actuators to test mobility. Based on printing materials, the manufacturing cost of the hand, excluding actuators, was $167, significantly lower than that of other robotic hands, which require more complex assembly processes.

  15. Comparing swimsuits in 3D.

    PubMed

    van Geer, Erik; Molenbroek, Johan; Schreven, Sander; deVoogd-Claessen, Lenneke; Toussaint, Huib

    2012-01-01

    In competitive swimming, suits have become more important. These suits influence friction, pressure and wave drag. Friction drag is related to the surface properties, whereas both pressure and wave drag are greatly influenced by body shape. To find a relationship between body shape and drag, the anthropometry of several world-class female swimmers wearing different suits was accurately measured using a 3D scanner and traditional measuring methods. The 3D scans delivered more detailed information about the body shape. On the same day the swimmers did performance tests in the water with the tested suits. Afterwards, the results of the performance tests and the differences found in body shape were analyzed to determine the deformation caused by a swimsuit and its effect on swimming performance. Although the amount of data is limited because of the few test subjects, there is an indication that the deformation of the body influences swimming performance.

  16. Forensic 3D Scene Reconstruction

    SciTech Connect

    LITTLE,CHARLES Q.; PETERS,RALPH R.; RIGDON,J. BRIAN; SMALL,DANIEL E.

    1999-10-12

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a feasible prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.

  17. 3D-graphite structure

    SciTech Connect

    Belenkov, E. A.; Ali-Pasha, V. A.

    2011-01-15

    The structure of clusters of some new carbon 3D-graphite phases has been calculated using molecular-mechanics methods. It is established that the 3D-graphite polytypes α1,1, α1,3, α1,5, α2,1, α2,3, α3,1, β1,2, β1,4, β1,6, β2,1, and β3,2 consist of sp²-hybridized atoms, have hexagonal unit cells, and differ in the structure of their layers and the order of their alternation. A possible way to experimentally synthesize the new carbon phases is proposed: the polymerization and carbonization of hydrocarbon molecules.

  18. Proposed traceable structural resolution protocols for 3D imaging systems

    NASA Astrophysics Data System (ADS)

    MacKinnon, David; Beraldin, J.-Angelo; Cournoyer, Luc; Carrier, Benjamin; Blais, François

    2009-08-01

    A protocol for determining structural resolution using a potentially-traceable reference material is proposed. Where possible, terminology was selected to conform to those published in ISO JCGM 200:2008 (VIM) and ASTM E 2544-08 documents. The concepts of resolvability and edge width are introduced to more completely describe the ability of an optical non-contact 3D imaging system to resolve small features. A distinction is made between 3D range cameras, that obtain spatial data from the total field of view at once, and 3D range scanners, that accumulate spatial data for the total field of view over time. The protocol is presented through the evaluation of a 3D laser line range scanner.

  19. Interactive 3D display simulator for autostereoscopic smart pad

    NASA Astrophysics Data System (ADS)

    Choe, Yeong-Seon; Lee, Ho-Dong; Park, Min-Chul; Son, Jung-Young; Park, Gwi-Tae

    2012-06-01

    There is growing interest in displaying 3D images on smart pads for entertainment and information services. Designing and realizing various types of 3D displays on a smart pad is not easy within the available cost and time. Software simulation can be an alternative method to save cost and shorten development. In this paper, we propose a 3D display simulator for autostereoscopic smart pads. It simulates the light intensity of each view and the crosstalk for smart pad display panels. Designers of 3D displays for smart pads can interactively simulate many kinds of autostereoscopic displays by changing the parameters required for panel design. Crosstalk, the leakage of one eye's image into the image of the other eye, and light intensity, used for computing the visual comfort zone, are important factors in designing an autostereoscopic display for a smart pad. Interaction enables intuitive designs. This paper describes an interactive 3D display simulator for autostereoscopic smart pads.
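
    A hedged toy model of the per-view intensity and crosstalk quantities such a simulator manipulates is sketched below: each eye is assumed to receive its intended view plus a fixed fraction of the other view. The leakage coefficient and the metric are illustrative and are not taken from the paper.

        import numpy as np

        def simulate_crosstalk(left, right, leak=0.03):
            # Each eye sees its intended view plus a fraction `leak` of the other view;
            # the 3% leakage value is purely illustrative.
            seen_left = (1.0 - leak) * left + leak * right
            seen_right = (1.0 - leak) * right + leak * left
            ratio = leak / (1.0 - leak)  # unintended / intended luminance
            return seen_left, seen_right, ratio

        # Usage with synthetic uniform views:
        # L = np.full((480, 640), 0.8); R = np.full((480, 640), 0.2)
        # seen_L, seen_R, x = simulate_crosstalk(L, R)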

  20. Autonomic nervous system responses can reveal visual fatigue induced by 3D displays.

    PubMed

    Kim, Chi Jung; Park, Sangin; Won, Myeung Ju; Whang, Mincheol; Lee, Eui Chul

    2013-09-26

    Previous research has indicated that viewing 3D displays may induce greater visual fatigue than viewing 2D displays. Whether viewing 3D displays can evoke measurable emotional responses, however, is uncertain. In the present study, we examined autonomic nervous system responses in subjects viewing 2D or 3D displays. Autonomic responses were quantified in each subject by heart rate, galvanic skin response, and skin temperature. Viewers of both 2D and 3D displays showed strong positive correlations with heart rate, which indicated little difference between groups. In contrast, galvanic skin response and skin temperature showed weak positive correlations, with an average difference between viewing 2D and 3D. We suggest that galvanic skin response and skin temperature can be used to measure and compare autonomic nervous responses in subjects viewing 2D and 3D displays.

  1. [Real time 3D echocardiography

    NASA Technical Reports Server (NTRS)

    Bauer, F.; Shiota, T.; Thomas, J. D.

    2001-01-01

    Three-dimensional representation of the heart is an old concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections, whose spatial localisation and orientation require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matricial emitter-receiver probe complex with parallel data processing provides instantaneous acquisition of a pyramidal 64 degrees x 64 degrees volume. The image is restituted in real time and is composed of 3 planes (planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this system of acquisition allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real-time 3D echocardiography is ready for clinical use, some improvements are still necessary to make it more user-friendly. Real-time 3D echocardiography could then become an essential tool for the understanding, diagnosis and management of patients.

  2. Met.3D - a new open-source tool for interactive 3D visualization of ensemble weather forecasts

    NASA Astrophysics Data System (ADS)

    Rautenhaus, Marc; Kern, Michael; Schäfler, Andreas; Westermann, Rüdiger

    2015-04-01

    We introduce Met.3D, a new open-source tool for the interactive 3D visualization of numerical ensemble weather predictions. The tool has been developed to support weather forecasting during aircraft-based atmospheric field campaigns; however, it is also applicable to other forecasting, research and teaching activities. Our work approaches challenging topics related to the visual analysis of numerical atmospheric model output: 3D visualization, ensemble visualization, and how both can be used in a meaningful way suited to weather forecasting. Met.3D builds a bridge from proven 2D visualization methods commonly used in meteorology to 3D visualization by combining both visualization types in a 3D context. It implements methods that address the issue of spatial perception in the 3D view as well as approaches to using the ensemble in order to assess forecast uncertainty. Interactivity is key to the Met.3D approach. The tool uses modern graphics hardware technology to achieve interactive visualization of present-day numerical weather prediction datasets on standard consumer hardware. Met.3D supports forecast data from the European Centre for Medium-Range Weather Forecasts and operates directly on ECMWF hybrid sigma-pressure level grids. In this presentation, we provide an overview of the software, illustrated with short video examples, and give information on its availability.

  3. Auto-stereoscopic 3D displays with reduced crosstalk.

    PubMed

    Lee, Chulhee; Seo, Guiwon; Lee, Jonghwa; Han, Tae-hwan; Park, Jong Geun

    2011-11-21

    In this paper, we propose new auto-stereoscopic 3D displays that substantially reduce crosstalk. In general, it is difficult to eliminate crosstalk in auto-stereoscopic 3D displays. Ideally, the parallax barrier can eliminate crosstalk for a single viewer at the ideal position. However, due to variations in the viewing distance and the interpupillary distance, crosstalk is a problem in parallax barrier displays. In this paper, we propose 3-dimensional barriers, which can significantly reduce crosstalk.

  4. GPU-Accelerated Denoising in 3D (GD3D)

    SciTech Connect

    2013-10-01

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. This software addresses two facets of this promising application: what tuning is necessary to achieve optimal performance on a modern GPU, and what parameters yield the best denoising results in practice? To answer the first question, the software performs an autotuning step to empirically determine optimal memory blocking on the GPU. To answer the second, it performs a sweep of algorithm parameters to determine the combination that best reduces the mean squared error relative to a noiseless reference image.
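
    The parameter sweep described in the last sentence can be pictured with the small CPU-side sketch below, which grid-searches bilateral-filter parameters with scikit-image and keeps the combination with the lowest mean squared error against a noiseless reference. It only illustrates the idea on a 2D image; the package's CUDA kernels, actual parameter ranges and autotuned memory blocking are not reproduced.

        import itertools
        import numpy as np
        from skimage.restoration import denoise_bilateral

        def sweep_bilateral(noisy, reference, sigma_colors, sigma_spatials):
            # Keep the (sigma_color, sigma_spatial) pair minimizing MSE vs. the reference.
            best_params, best_mse = None, np.inf
            for sc, ss in itertools.product(sigma_colors, sigma_spatials):
                out = denoise_bilateral(noisy, sigma_color=sc, sigma_spatial=ss)
                mse = np.mean((out - reference) ** 2)
                if mse < best_mse:
                    best_params, best_mse = (sc, ss), mse
            return best_params, best_mse

        # Usage (illustrative ranges, float images in [0, 1]):
        # params, err = sweep_bilateral(noisy, ref, [0.05, 0.1, 0.2], [1, 2, 4])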

  5. Modeling, Prediction, and Reduction of 3D Crosstalk in Circular Polarized Stereoscopic LCDs.

    PubMed

    Zeng, Menglin; Robinson, Alan E; Nguyen, Truong Q

    2015-12-01

    Crosstalk, the incomplete separation between the left and right views in 3D displays, induces ghosting and makes it difficult for the eyes to fuse the stereo image for depth perception. The circularly polarized (CP) liquid crystal display (LCD) is one of the mainstream consumer 3D displays amid the growing popularity of 3D movies and gaming. The polarizing system, including the patterned retarder, is one of the major causes of crosstalk in CP LCDs. The contributions of this paper are a model of the polarizing system of the CP LCD and a crosstalk reduction method that efficiently cancels crosstalk while preserving image contrast. For the modeling, the practical orientation of the polarized glasses (PG) is considered. In addition, this paper calculates the rotation of the light-propagation coordinate frame for the Stokes vector as light propagates from the LCD to the PG, a calculation missing in previous works applying Mueller calculus. The proposed crosstalk reduction method is formulated as a linear programming problem, which can be easily solved. In addition, we propose excluding highly textured areas of the input images to further preserve image contrast during crosstalk reduction.
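
    The paper formulates crosstalk reduction as a constrained linear program; the sketch below shows only the underlying idea in its simplest form, inverting a symmetric per-pixel mixing model with a 2x2 solve and clipping to the displayable range. The crosstalk coefficient is assumed known, and the textured-area exclusion is omitted, so this is a simplification rather than the authors' method.

        import numpy as np

        def precompensate(left, right, c=0.05):
            # Invert the mixing model
            #   seen_L = (1-c)*drive_L + c*drive_R
            #   seen_R = (1-c)*drive_R + c*drive_L
            # so the perceived views approximate the intended ones (values in [0, 1]).
            A = np.array([[1.0 - c, c],
                          [c, 1.0 - c]])
            Ainv = np.linalg.inv(A)
            stacked = np.stack([left, right], axis=-1)  # (..., 2)
            driven = stacked @ Ainv.T                   # apply the inverse per pixel
            return np.clip(driven[..., 0], 0, 1), np.clip(driven[..., 1], 0, 1)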

  6. Diagnostic algorithm: how to make use of new 2D, 3D and 4D ultrasound technologies in breast imaging.

    PubMed

    Weismann, C F; Datz, L

    2007-11-01

    The aim of this publication is to present a time saving diagnostic algorithm consisting of two-dimensional (2D), three-dimensional (3D) and four-dimensional (4D) ultrasound (US) technologies. This algorithm of eight steps combines different imaging modalities and render modes which allow a step by step analysis of 2D, 3D and 4D diagnostic criteria. Advanced breast US systems with broadband high frequency linear transducers, full digital data management and high resolution are the actual basis for two-dimensional breast US studies in order to detect early breast cancer (step 1). The continuous developments of 2D US technologies including contrast resolution imaging (CRI) and speckle reduction imaging (SRI) have a direct influence on the high quality of three-dimensional and four-dimensional presentation of anatomical breast structures and pathological details. The diagnostic options provided by static 3D volume datasets according to US BI-RADS analogue assessment, concerning lesion shape, orientation, margin, echogenic rim sign, lesion echogenicity, acoustic transmission, associated calcifications, 3D criteria of the coronal plane, surrounding tissue composition (step 2) and lesion vascularity (step 6) are discussed. Static 3D datasets offer the combination of long axes distance measurements and volume calculations, which are the basis for an accurate follow-up in BI-RADS II and BI-RADS III lesions (step 3). Real time 4D volume contrast imaging (VCI) is able to demonstrate tissue elasticity (step 5). Glass body rendering is a static 3D tool which presents greyscale and colour information to study the vascularity and the vascular architecture of a lesion (step 6). Tomographic ultrasound imaging (TUI) is used for a slice by slice documentation in different investigation planes (A-,B- or C-plane) (steps 4 and 7). The final step 8 uses the panoramic view technique (XTD-View) to document the localisation within the breast and to make the position of a lesion simply

  7. 11. Interior view of communications compartment. View toward rear of ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    11. Interior view of communications compartment. View toward rear of aircraft. - Offutt Air Force Base, Looking Glass Airborne Command Post, Looking Glass Aircraft, On Operational Apron covering northeast half of Project Looking Glass Historic District, Bellevue, Sarpy County, NE

  8. 10. Interior view of communications compartment. View toward front of ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    10. Interior view of communications compartment. View toward front of aircraft. - Offutt Air Force Base, Looking Glass Airborne Command Post, Looking Glass Aircraft, On Operational Apron covering northeast half of Project Looking Glass Historic District, Bellevue, Sarpy County, NE

  9. 9. Interior view of electronics compartment. View toward rear of ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    9. Interior view of electronics compartment. View toward rear of aircraft. - Offutt Air Force Base, Looking Glass Airborne Command Post, Looking Glass Aircraft, On Operational Apron covering northeast half of Project Looking Glass Historic District, Bellevue, Sarpy County, NE

  10. 3D-model building of the jaw impression

    NASA Astrophysics Data System (ADS)

    Ahmed, Moumen T.; Yamany, Sameh M.; Hemayed, Elsayed E.; Farag, Aly A.

    1997-03-01

    A novel approach is proposed to obtain a record of the patient's occlusion using computer vision. Data are acquired using intra-oral video cameras. The technique utilizes shape from shading to extract 3D information from 2D views of the jaw, and a novel technique for 3D data registration using genetic algorithms. The resulting 3D model can be used for diagnosis, treatment planning, and implant purposes. The overall purpose of this research is to develop a model-based vision system for orthodontics to replace traditional approaches. This system will be flexible, accurate, and will reduce the cost of orthodontic treatment.

  11. 3D cinema to 3DTV content adaptation

    NASA Astrophysics Data System (ADS)

    Yasakethu, L.; Blondé, L.; Doyen, D.; Huynh-Thu, Q.

    2012-03-01

    3D cinema and 3DTV have grown in popularity in recent years. Filmmakers have a significant opportunity in front of them given the recent success of 3D films. In this paper we investigate whether this opportunity could be extended to the home in a meaningful way. The "3D" effect perceived when viewing stereoscopic content depends on the viewing geometry. This implies that stereoscopic 3D content should be captured for a specific viewing geometry in order to provide a satisfactory 3D experience. However, although it would be possible, it is clearly not viable to produce and transmit multiple streams of the same content for different screen sizes. To solve this problem, we analyze in this study the performance of six different disparity-based transformation techniques, which could be used for cinema-to-3DTV content conversion. Subjective tests are performed to evaluate the effectiveness of the algorithms in terms of depth effect, visual comfort and overall 3D quality. The resulting 3DTV experience is also compared to that of cinema. We show that by applying the proper transformation technique to content originally captured for cinema, it is possible to enhance the 3DTV experience. The selection of the appropriate transformation is highly dependent on the content characteristics.

  12. Practical pseudo-3D registration for large tomographic images

    NASA Astrophysics Data System (ADS)

    Liu, Xuan; Laperre, Kjell; Sasov, Alexander

    2014-09-01

    Image registration is a powerful tool in various tomographic applications. Our main focus is on microCT applications in which samples/animals can be scanned multiple times under different conditions or at different time points. For this purpose, a registration tool capable of handling fairly large volumes has been developed, using a novel pseudo-3D method to achieve fast and interactive registration with simultaneous 3D visualization. To reduce computation complexity in 3D registration, we decompose it into several 2D registrations, which are applied to the orthogonal views (transaxial, sagittal and coronal) sequentially and iteratively. After registration in each view, the next view is retrieved with the new transformation matrix for registration. This reduces the computation complexity significantly. For rigid transform, we only need to search for 3 parameters (2 shifts, 1 rotation) in each of the 3 orthogonal views instead of 6 (3 shifts, 3 rotations) for full 3D volume. In addition, the amount of voxels involved is also significantly reduced. For the proposed pseudo-3D method, image-based registration is employed, with Sum of Square Difference (SSD) as the similarity measure. The searching engine is Powell's conjugate direction method. In this paper, only rigid transform is used. However, it can be extended to affine transform by adding scaling and possibly shearing to the transform model. We have noticed that more information can be used in the 2D registration if Maximum Intensity Projections (MIP) or Parallel Projections (PP) is used instead of the orthogonal views. Also, other similarity measures, such as covariance or mutual information, can be easily incorporated. The initial evaluation on microCT data shows very promising results. Two application examples are shown: dental samples before and after treatment and structural changes in materials before and after compression. Evaluation on registration accuracy between pseudo-3D method and true 3D method has
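
    One ingredient of the scheme, rigid 2D registration of a single orthogonal view (two shifts plus one rotation) with an SSD cost and Powell's conjugate-direction search, can be sketched with SciPy as below. The looping over transaxial, sagittal and coronal views, the transform bookkeeping and the MIP variant mentioned in the abstract are not shown, and the names are illustrative.

        import numpy as np
        from scipy import ndimage, optimize

        def ssd(params, fixed, moving):
            # Sum of squared differences after applying (dy, dx, angle in degrees).
            dy, dx, angle = params
            warped = ndimage.rotate(moving, angle, reshape=False, order=1)
            warped = ndimage.shift(warped, (dy, dx), order=1)
            return np.sum((fixed - warped) ** 2)

        def register_2d(fixed, moving):
            # Rigid 2D registration of one view with Powell's method.
            res = optimize.minimize(ssd, x0=np.zeros(3), args=(fixed, moving),
                                    method="Powell")
            return res.x  # (dy, dx, angle)

        # In the full pseudo-3D loop, register_2d would be run on the transaxial,
        # sagittal and coronal views in turn, updating the 3D transform each time.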

  13. A 3-D Look at Wind-Sculpted Ridges in Aeolis

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Layers of bedrock etched by wind to form sharp, elongated ridges known to geomorphologists as yardangs are commonplace in the southern Elysium Planitia/southern Amazonis region of Mars. The ridges shown in this 3-D composite of two overlapping Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) images occur in the eastern Aeolis region of southern Elysium Planitia near 2.3°S, 206.8°W. To view the picture in stereo, you need red-blue 3-D glasses (red filter over the left eye, blue over the right). For wind to erode bedrock into the patterns seen here, the rock usually must consist of something that is fine-grained and of nearly uniform grain size, such as sand. It must also be relatively easy to erode. For decades, most Mars researchers have interpreted these materials to be eroded deposits of volcanic ash. Nothing in the new picture shown here can support or refute this earlier speculation. The entire area is mantled by light-toned dust. Small landslides within this thin dust layer form dark streaks on some of the steeper slopes in this picture (for more examples and explanations for these streaks, see previous web pages listed below).

    The stereo (3-D) picture was compiled using an off-nadir view taken by the MOC during the Aerobrake-1 subphase of the mission in January 1998 with a nadir (straight-down-looking) view acquired in October 2000. The total area shown is about 6.7 kilometers (4.2 miles) wide by 2.5 kilometers (1.5 miles) high and is illuminated by sunlight from the upper right. The relief in the stereo image is quite exaggerated: the ridges are between about 50 and 100 meters (about 165-330 feet) high. North is toward the lower right.

  14. A clearer view of the insect brain—combining bleaching with standard whole-mount immunocytochemistry allows confocal imaging of pigment-covered brain areas for 3D reconstruction

    PubMed Central

    Stöckl, Anna L.; Heinze, Stanley

    2015-01-01

    In the study of insect neuroanatomy, three-dimensional (3D) reconstructions of neurons and neuropils have become a standard technique. As images have to be obtained from whole-mount brain preparations, pigmentation on the brain surface poses a serious challenge to imaging. In insects, this is a major problem in the first visual neuropil of the optic lobe, the lamina, which is obstructed by the pigment of the retina as well as by the pigmented fenestration layer. This has prevented inclusion of this major processing center of the insect visual system in most neuroanatomical brain atlases and hinders imaging of neurons within the lamina by confocal microscopy. It has recently been shown that hydrogen peroxide bleaching is compatible with immunohistochemical labeling in insect brains, and we therefore developed a simple technique for removal of pigments on the surface of insect brains by chemical bleaching. We show that our technique enables imaging of the pigment-obstructed regions of insect brains when combined with standard protocols for both anti-synapsin-labeled as well as neurobiotin-injected samples. This method can be combined with different fixation procedures, as well as different fluorophore excitation wavelengths without negative effects on staining quality. It can therefore serve as an effective addition to most standard histology protocols used in insect neuroanatomy. PMID:26441552

  15. Chimp as Viewed by Rover

    NASA Technical Reports Server (NTRS)

    1998-01-01

    This anaglyph view of Chimp, south southwest of the lander, was produced by combining two right eye frames taken from different viewing angles by Sojourner Rover. One of the right eye frames was distorted using Photoshop to approximate the projection of the left eye view (without this, the stereo pair is painful to view). The left view is assigned to the red color plane and the right view to the green and blue color planes (cyan), to produce a stereo anaglyph mosaic. This mosaic can be viewed in 3-D on your computer monitor or in color print form by wearing red-blue 3-D glasses.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is a division of the California Institute of Technology (Caltech).

  16. From Surface Data to 3D Geologic Maps

    NASA Astrophysics Data System (ADS)

    Dhont, D.; Luxey, P.; Longuesserre, V.; Monod, B.; Guillaume, B.

    2008-12-01

    New trends in earth sciences are mostly related to technologies allowing graphical representations of the geology in 3D. However, the concept of a 3D geologic map is commonly misused. For instance, displays of geologic maps draped onto DEM in rotating perspective views have been misleadingly called 3D geologic maps, but this still cannot provide any volumetric underground information as a true 3D geologic map should. Here, we present a way to produce mathematically and geometrically correct 3D geologic maps constituted by the volume and shape of all geologic features of a given area. The originality of the method is that it is based on the integration of surface data only, consisting of (1) geologic maps, (2) satellite images, (3) DEM and (4) bedding dips and strikes. To generate 3D geologic maps, we used a 3D geologic modeler that combines and extrapolates the surface information into a coherent 3D data set. The significance of geometrically correct 3D geologic maps is demonstrated for various geologic settings and applications. 3D models are of primary importance for educational purposes because they reveal features that standard 2D geologic maps by themselves could not show. The 3D visualization helps in understanding the geometrical relationship between the different geologic features and, in turn, in quantifying the geology at the regional scale. Furthermore, given the logistical challenges associated with modern oil and mineral exploration in remote and rugged terrain, these volume-based models can provide geological and commercial insight prior to seismic evaluation.

  17. Towards a Normalised 3D Geovisualisation: The Viewpoint Management

    NASA Astrophysics Data System (ADS)

    Neuville, R.; Poux, F.; Hallot, P.; Billen, R.

    2016-10-01

    This paper deals with viewpoint management in 3D environments, considering an allocentric environment. Recent advances in computer science and the growing number of affordable remote sensors have led to impressive improvements in 3D visualisation. Despite some research on the analysis of visual variables used in 3D environments, we note that a real standardisation of 3D representation rules is lacking. In this paper we study the "viewpoint" as the first parameter considered for a normalised visualisation of 3D data. Unlike in a 2D environment, the viewing direction in 3D is not fixed in a top-down direction. A non-optimal camera location means a poor 3D representation in terms of relayed information. Based on this observation, we propose a model, based on the analysis of the computational display pixels, that determines a viewpoint maximising the relayed information for one kind of query. We developed an OpenGL prototype working on screen pixels that determines the optimal camera location using a screen-pixel colour algorithm. Viewpoint management constitutes a first step towards a normalised 3D geovisualisation.
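
    A much-simplified sketch of the screen-pixel idea follows: render the scene from a set of candidate camera positions (the rendering callback is assumed to exist) and score each frame by how many distinct object colours and how many non-background pixels it shows, keeping the best-scoring viewpoint. The prototype's actual OpenGL colour algorithm is not reproduced.

        import numpy as np

        def score_viewpoint(frame, background=(0, 0, 0)):
            # Count visible (non-background) pixels and distinct object colours,
            # assuming each object is rendered with a unique flat colour.
            pixels = frame.reshape(-1, 3)
            visible = pixels[~np.all(pixels == background, axis=1)]
            n_colours = len(np.unique(visible, axis=0))
            return n_colours, len(visible)

        def best_viewpoint(render, candidate_cameras):
            # `render(cam)` is an assumed callback returning an HxWx3 uint8 frame.
            return max(candidate_cameras, key=lambda cam: score_viewpoint(render(cam)))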

  18. 3D annotation and manipulation of medical anatomical structures

    NASA Astrophysics Data System (ADS)

    Vitanovski, Dime; Schaller, Christian; Hahn, Dieter; Daum, Volker; Hornegger, Joachim

    2009-02-01

    Although medical scanners are rapidly moving towards a three-dimensional paradigm, the manipulation and annotation/labeling of the acquired data is still performed in a standard 2D environment. Editing and annotation of three-dimensional medical structures is currently a complex and rather time-consuming task, as it is carried out in 2D projections of the original object. A major problem in 2D annotation is depth ambiguity, which requires 3D landmarks to be identified and localized in at least two of the cutting planes. Operating directly in three-dimensional space enables the implicit consideration of the full 3D local context, which significantly increases accuracy and speed. A three-dimensional environment is also more natural, optimizing the user's comfort and acceptance. The 3D annotation environment requires a three-dimensional manipulation device and display. By means of two novel and advanced technologies, the Nintendo Wii controller and the Philips 3D WoWvx display, we define an appropriate 3D annotation tool and a suitable 3D visualization monitor. We define a non-coplanar arrangement of four infrared LEDs with known, exact positions, which are tracked by the Wii and from which we compute the pose of the device by applying a standard pose estimation algorithm. The novel 3D renderer developed by Philips uses either the Z-value of a 3D volume, or the depth information computed from a 2D image, to provide a real 3D experience without the need for special glasses. In this paper we present a new framework for the manipulation and annotation of medical landmarks directly in a three-dimensional volume.
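
    The pose-estimation step described (four non-coplanar infrared LEDs with known geometry, tracked as image points by the Wii remote's IR camera) is a standard perspective-n-point problem; a hedged OpenCV sketch is given below. The LED coordinates and camera intrinsics are illustrative placeholders, not the values used in the paper.

        import numpy as np
        import cv2

        # Known 3D LED positions on the pointer, in mm (illustrative, non-coplanar).
        LED_MODEL = np.array([[0, 0, 0],
                              [60, 0, 0],
                              [0, 60, 0],
                              [30, 30, 25]], dtype=np.float64)

        # Rough intrinsics for a 1024x768 IR camera (illustrative values).
        K = np.array([[1300, 0, 512],
                      [0, 1300, 384],
                      [0, 0, 1]], dtype=np.float64)

        def estimate_pose(image_points):
            # image_points: (4, 2) pixel coordinates of the tracked LEDs.
            ok, rvec, tvec = cv2.solvePnP(LED_MODEL, image_points.astype(np.float64), K, None)
            if not ok:
                raise RuntimeError("PnP failed")
            R, _ = cv2.Rodrigues(rvec)  # rotation matrix and translation of the device
            return R, tvec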

  19. 3D Medical Collaboration Technology to Enhance Emergency Healthcare

    PubMed Central

    Welch, Greg; Sonnenwald, Diane H; Fuchs, Henry; Cairns, Bruce; Mayer-Patel, Ketan; Söderholm, Hanna M.; Yang, Ruigang; State, Andrei; Towles, Herman; Ilie, Adrian; Ampalam, Manoj; Krishnan, Srinivas; Noel, Vincent; Noland, Michael; Manning, James E.

    2009-01-01

    Two-dimensional (2D) videoconferencing has been explored widely in the past 15–20 years to support collaboration in healthcare. Two issues that arise in most evaluations of 2D videoconferencing in telemedicine are the difficulty obtaining optimal camera views and poor depth perception. To address these problems, we are exploring the use of a small array of cameras to reconstruct dynamic three-dimensional (3D) views of a remote environment and of events taking place within. The 3D views could be sent across wired or wireless networks to remote healthcare professionals equipped with fixed displays or with mobile devices such as personal digital assistants (PDAs). The remote professionals’ viewpoints could be specified manually or automatically (continuously) via user head or PDA tracking, giving the remote viewers head-slaved or hand-slaved virtual cameras for monoscopic or stereoscopic viewing of the dynamic reconstructions. We call this idea remote 3D medical collaboration. In this article we motivate and explain the vision for 3D medical collaboration technology; we describe the relevant computer vision, computer graphics, display, and networking research; we present a proof-of-concept prototype system; and we present evaluation results supporting the general hypothesis that 3D remote medical collaboration technology could offer benefits over conventional 2D videoconferencing in emergency healthcare. PMID:19521951

  20. 3D Nanostructuring of Semiconductors

    NASA Astrophysics Data System (ADS)

    Blick, Robert

    2000-03-01

    Modern semiconductor technology allows devices to be machined on the nanometer scale. I will discuss the current limits of the fabrication processes, which enable the definition of single-electron transistors with dimensions down to 8 nm. In addition to conventional 2D patterning and structuring of semiconductors, I will demonstrate how to apply 3D nanostructuring techniques to build freely suspended single-crystal beams with lateral dimensions down to 20 nm. In transport measurements in the temperature range from 30 mK up to 100 K these nano-crystals are characterized regarding their electronic as well as their mechanical properties. Moreover, I will present possible applications of these devices.

  1. Making Inexpensive 3-D Models

    NASA Astrophysics Data System (ADS)

    Manos, Harry

    2016-03-01

    Visual aids are important to student learning, and they help make the teacher's job easier. Keeping with the TPT theme of "The Art, Craft, and Science of Physics Teaching," the purpose of this article is to show how teachers, lacking equipment and funds, can construct a durable 3-D model reference frame and a model gravity well tailored to specific class lessons. Most of the supplies are readily available in the home or at school: rubbing alcohol, a rag, two colors of spray paint, art brushes, and masking tape. The cost of these supplies, if you don't have them, is less than $20.

  2. A Clean Adirondack (3-D)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This is a 3-D anaglyph showing a microscopic image taken of an area measuring 3 centimeters (1.2 inches) across on the rock called Adirondack. The image was taken at Gusev Crater on the 33rd day of the Mars Exploration Rover Spirit's journey (Feb. 5, 2004), after the rover used its rock abrasion tool brush to clean the surface of the rock. Dust, which was pushed off to the side during cleaning, can still be seen to the left and in low areas of the rock.

  3. 3D Printed Shelby Cobra

    SciTech Connect

    Love, Lonnie

    2015-01-09

    ORNL's newly printed 3D Shelby Cobra was showcased at the 2015 NAIAS in Detroit. This "laboratory on wheels" uses the Shelby Cobra design, celebrating the 50th anniversary of this model and honoring the first vehicle to be voted a national monument. The Shelby was printed at the Department of Energy’s Manufacturing Demonstration Facility at ORNL using the BAAM (Big Area Additive Manufacturing) machine and is intended as a “plug-n-play” laboratory on wheels. The Shelby will allow research and development of integrated components to be tested and enhanced in real time, improving the use of sustainable, digital manufacturing solutions in the automotive industry.

  4. Positional Awareness Map 3D (PAM3D)

    NASA Technical Reports Server (NTRS)

    Hoffman, Monica; Allen, Earl L.; Yount, John W.; Norcross, April Louise

    2012-01-01

    The Western Aeronautical Test Range of the National Aeronautics and Space Administration's Dryden Flight Research Center needed to address the aging software and hardware of its current situational awareness display application, the Global Real-Time Interactive Map (GRIM). GRIM was initially developed in the late 1980s and executes on older PC architectures using a Linux operating system that is no longer supported. Additionally, the software is difficult to maintain due to its complexity and loss of developer knowledge. It was decided that a replacement application must be developed or acquired in the near future. The replacement must provide the functionality of the original system, the ability to monitor test flight vehicles in real-time, and add improvements such as high resolution imagery and true 3-dimensional capability. This paper will discuss the process of determining the best approach to replace GRIM, and the functionality and capabilities of the first release of the Positional Awareness Map 3D.

  5. 3D printed bionic ears.

    PubMed

    Mannoor, Manu S; Jiang, Ziwen; James, Teena; Kong, Yong Lin; Malatesta, Karen A; Soboyejo, Winston O; Verma, Naveen; Gracias, David H; McAlpine, Michael C

    2013-06-12

    The ability to three-dimensionally interweave biological tissue with functional electronics could enable the creation of bionic organs possessing enhanced functionalities over their human counterparts. Conventional electronic devices are inherently two-dimensional, preventing seamless multidimensional integration with synthetic biology, as the processes and materials are very different. Here, we present a novel strategy for overcoming these difficulties via additive manufacturing of biological cells with structural and nanoparticle derived electronic elements. As a proof of concept, we generated a bionic ear via 3D printing of a cell-seeded hydrogel matrix in the anatomic geometry of a human ear, along with an intertwined conducting polymer consisting of infused silver nanoparticles. This allowed for in vitro culturing of cartilage tissue around an inductive coil antenna in the ear, which subsequently enables readout of inductively-coupled signals from cochlea-shaped electrodes. The printed ear exhibits enhanced auditory sensing for radio frequency reception, and complementary left and right ears can listen to stereo audio music. Overall, our approach suggests a means to intricately merge biologic and nanoelectronic functionalities via 3D printing.

  6. 3D Printed Bionic Ears

    PubMed Central

    Mannoor, Manu S.; Jiang, Ziwen; James, Teena; Kong, Yong Lin; Malatesta, Karen A.; Soboyejo, Winston O.; Verma, Naveen; Gracias, David H.; McAlpine, Michael C.

    2013-01-01

    The ability to three-dimensionally interweave biological tissue with functional electronics could enable the creation of bionic organs possessing enhanced functionalities over their human counterparts. Conventional electronic devices are inherently two-dimensional, preventing seamless multidimensional integration with synthetic biology, as the processes and materials are very different. Here, we present a novel strategy for overcoming these difficulties via additive manufacturing of biological cells with structural and nanoparticle derived electronic elements. As a proof of concept, we generated a bionic ear via 3D printing of a cell-seeded hydrogel matrix in the precise anatomic geometry of a human ear, along with an intertwined conducting polymer consisting of infused silver nanoparticles. This allowed for in vitro culturing of cartilage tissue around an inductive coil antenna in the ear, which subsequently enables readout of inductively-coupled signals from cochlea-shaped electrodes. The printed ear exhibits enhanced auditory sensing for radio frequency reception, and complementary left and right ears can listen to stereo audio music. Overall, our approach suggests a means to intricately merge biologic and nanoelectronic functionalities via 3D printing. PMID:23635097

  7. 3D structured illumination microscopy

    NASA Astrophysics Data System (ADS)

    Dougherty, William M.; Goodwin, Paul C.

    2011-03-01

    Three-dimensional structured illumination microscopy achieves double the lateral and axial resolution of wide-field microscopy, using conventional fluorescent dyes, proteins and sample preparation techniques. A three-dimensional interference-fringe pattern excites the fluorescence, filling in the "missing cone" of the wide field optical transfer function, thereby enabling axial (z) discrimination. The pattern acts as a spatial carrier frequency that mixes with the higher spatial frequency components of the image, which usually succumb to the diffraction limit. The fluorescence image encodes the high frequency content as a down-mixed, moiré-like pattern. A series of images is required, wherein the 3D pattern is shifted and rotated, providing down-mixed data for a system of linear equations. Super-resolution is obtained by solving these equations. The speed with which the image series can be obtained can be a problem for the microscopy of living cells. Challenges include pattern-switching speeds, optical efficiency, wavefront quality and fringe contrast, fringe pitch optimization, and polarization issues. We will review some recent developments in 3D-SIM hardware with the goal of super-resolved z-stacks of motile cells.
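
    To make the "spatial carrier frequency" idea concrete, a generic frequency-domain sketch (not the paper's exact notation) for a sinusoidal illumination of frequency k_0 recorded at pattern phase phi is

        \tilde{D}_{\varphi}(\mathbf{k}) \;=\; \sum_{m} c_m \, e^{i m \varphi} \, \tilde{S}(\mathbf{k} - m\,\mathbf{k}_0) \, \mathrm{OTF}(\mathbf{k}),

    where \tilde{S} is the Fourier transform of the fluorophore distribution, c_m are the pattern's harmonic strengths, and m runs over the pattern orders (m = -1, 0, +1 for a pure sinusoid; 3D-SIM illumination carries additional orders with axial components, which is what fills the missing cone). Acquiring images at several phases and pattern rotations yields the linear system in the shifted components \tilde{S}(\mathbf{k} - m\,\mathbf{k}_0) that the abstract refers to as "solving these equations".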

  8. Optical characterization of different types of 3D displays

    NASA Astrophysics Data System (ADS)

    Boher, Pierre; Leroux, Thierry; Bignon, Thibault; Collomb-Patton, Véronique

    2012-03-01

    All 3D displays rely on the same intrinsic method to induce depth perception: they provide different images to the left and right eyes of the observer to obtain the stereoscopic effect. The three most common solutions already available on the market are active-glasses, passive-glasses and auto-stereoscopic 3D displays. The three types of displays are based on different physical principles (polarization, time selection or spatial emission) and consequently require different measurement instruments and techniques. In this paper, we present some of these solutions and the technical characteristics that can be obtained to compare the displays. We show in particular that local and global measurements can be made in all three cases to access different characteristics. We also discuss the new technologies currently under development and their needs in terms of optical characterization.

  9. Virtual VMASC: A 3D Game Environment

    NASA Technical Reports Server (NTRS)

    Manepalli, Suchitra; Shen, Yuzhong; Garcia, Hector M.; Lawsure, Kaleen

    2010-01-01

    The advantages of creating interactive 3D simulations that allow viewing, exploring, and interacting with land improvements, such as buildings, in digital form are manifold and range from allowing individuals from anywhere in the world to explore those virtual land improvements online, to training military personnel in dealing with war-time environments, and to making those land improvements available in virtual worlds such as Second Life. While we haven't fully explored the true potential of such simulations, we have identified a requirement within our organization to use simulations like those to replace our front-desk personnel and allow visitors to query, navigate, and communicate virtually with various entities within the building. We implemented the Virtual VMASC 3D simulation of the Virginia Modeling Analysis and Simulation Center (VMASC) office building to not only meet our front-desk requirement but also to evaluate the effort required in designing such a simulation and, thereby, leverage the experience we gained in future projects of this kind. This paper describes the goals we set for our implementation, the software approach taken, the modeling contribution made, and the technologies used such as XNA Game Studio, .NET framework, Autodesk software packages, and, finally, the applicability of our implementation on a variety of architectures including Xbox 360 and PC. This paper also summarizes the result of our evaluation and the lessons learned from our effort.

  10. Looking Back at 'Eagle Crater'(3-D)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This is a 3-D version of the first 360-degree view from the Mars Exploration Rover Opportunity's new position outside 'Eagle Crater,' the small crater where the rover landed about two months ago. Scientists are busy analyzing Opportunity's new view of the plains of Meridiani Planum. The plentiful ripples are a clear indication that wind is the primary geologic process currently in effect on the plains. The rover's tracks can be seen leading away from Eagle Crater. At the far left are two depressions, each about a meter (about 3.3 feet) across, that feature bright spots in their centers. One possibility is that the bright material is similar in composition to the rocks in Eagle Crater's outcrop and the surrounding darker material is what's referred to as 'lag deposit,' or erosional remnants, which are much harder and more difficult to wear away. These twin dimples might be revealing pieces of a larger outcrop that lies beneath. The depression closest to Opportunity is whimsically referred to as 'Homeplate' and the one behind it as 'First Base.' The rover's panoramic camera is set to take detailed images of the depressions today, on Opportunity's 58th sol. The backshell and parachute that helped protect the rover and deliver it safely to the surface of Mars are also visible near the horizon, at the left of the image. This image was taken by the rover's navigation camera.

  11. 3D Printing of Graphene Aerogels.

    PubMed

    Zhang, Qiangqiang; Zhang, Feng; Medarametla, Sai Pradeep; Li, Hui; Zhou, Chi; Lin, Dong

    2016-04-06

    3D printing of a graphene aerogel with true 3D overhang structures is highlighted. The aerogel is fabricated by combining drop-on-demand 3D printing and freeze casting. The water-based GO ink is ejected and freeze-cast into designed 3D structures. The lightweight (<10 mg cm⁻³) 3D-printed graphene aerogel is superelastic and highly electrically conductive.

  12. 3D Printed Shelby Cobra

    ScienceCinema

    Love, Lonnie

    2016-11-02

    ORNL's newly printed 3D Shelby Cobra was showcased at the 2015 NAIAS in Detroit. This "laboratory on wheels" uses the Shelby Cobra design, celebrating the 50th anniversary of this model and honoring the first vehicle to be voted a national monument. The Shelby was printed at the Department of Energy’s Manufacturing Demonstration Facility at ORNL using the BAAM (Big Area Additive Manufacturing) machine and is intended as a “plug-n-play” laboratory on wheels. The Shelby will allow research and development of integrated components to be tested and enhanced in real time, improving the use of sustainable, digital manufacturing solutions in the automotive industry.

  13. Air-structured optical fiber drawn from a 3D-printed preform.

    PubMed

    Cook, Kevin; Canning, John; Leon-Saval, Sergio; Reid, Zane; Hossain, Md Arafat; Comatti, Jade-Edouard; Luo, Yanhua; Peng, Gang-Ding

    2015-09-01

    A structured optical fiber is drawn from a 3D-printed structured preform. Preforms containing a single ring of holes around the core are fabricated using filament made from a modified butadiene polymer. More broadly, 3D printers capable of processing soft glasses, silica, and other materials are likely to come on line in the not-so-distant future. 3D printing of optical preforms signals a new milestone in optical fiber manufacture.

  14. Supernova Remnant in 3-D

    NASA Technical Reports Server (NTRS)

    2009-01-01

    wavelengths. Since the amount of the wavelength shift is related to the speed of motion, one can determine how fast the debris are moving in either direction. Because Cas A is the result of an explosion, the stellar debris is expanding radially outwards from the explosion center. Using simple geometry, the scientists were able to construct a 3-D model using all of this information. A program called 3-D Slicer, modified for astronomical use by the Astronomical Medicine Project at Harvard University in Cambridge, Mass., was used to display and manipulate the 3-D model. Commercial software was then used to create the 3-D fly-through.

    The blue filaments defining the blast wave were not mapped using the Doppler effect because they emit a different kind of light, synchrotron radiation, which is not emitted at discrete wavelengths but rather in a broad continuum. The blue filaments are only a representation of the actual filaments observed at the blast wave.

    This visualization shows that there are two main components to this supernova remnant: a spherical component in the outer parts of the remnant and a flattened (disk-like) component in the inner region. The spherical component consists of the outer layer of the star that exploded, probably made of helium and carbon. These layers drove a spherical blast wave into the diffuse gas surrounding the star. The flattened component, which astronomers were unable to map into 3-D prior to these Spitzer observations, consists of the inner layers of the star. It is made from various heavier elements, not all shown in the visualization, such as oxygen, neon, silicon, sulphur, argon and iron.

    High-velocity plumes, or jets, of this material are shooting out from the explosion in the plane of the disk-like component mentioned above. Plumes of silicon appear in the northeast and southwest, while those of iron are seen in the southeast and north. These jets were already known and Doppler velocity measurements have been made for these

  15. Implementation of active-type Lamina 3D display system.

    PubMed

    Yoon, Sangcheol; Baek, Hogil; Min, Sung-Wook; Park, Soon-Gi; Park, Min-Kyu; Yoo, Seong-Hyeon; Kim, Hak-Rin; Lee, Byoungho

    2015-06-15

    The Lamina 3D display is a new type of multi-layer 3D display, which utilizes the polarization state as a new dimension for depth information. The Lamina 3D display system has attractive properties: it reduces the amount of data needed to represent a 3D image, it can easily be built from conventional projectors, and it has the potential to be applied in many applications. However, the system can have limitations in depth range and viewing angle due to the properties of the volume components used for display. In this paper, we propose a volume made of layers of switchable diffusers to implement an active-type Lamina 3D display system. Because the diffusing rate of the layers is independent of the polarization state, a polarizer wheel is added to the proposed system so that each sectioned image is synchronized with the diffusing layer at the designated location. The imaging volume of the proposed system consists of five layers of polymer-dispersed liquid crystal, and the total size of the implemented volume is 24 x 18 x 12 mm³. The proposed system achieves improvements in viewing quality, such as enhanced depth expression and a widened viewing angle.

  16. Creating 3D realistic head: from two orthogonal photos to multiview face contents

    NASA Astrophysics Data System (ADS)

    Lin, Yuan; Lin, Qian; Tang, Feng; Tang, Liang; Lim, Sukhwan; Wang, Shengjin

    2011-03-01

    3D head models have many applications, such as virtual conferencing and 3D web games. The several existing web-based face modeling solutions that can create a 3D face model from one or two user-uploaded face images are limited to generating a 3D model of the face region only. The accuracy of such reconstruction is very limited for side views as well as hair regions. The goal of our research is to develop a framework for reconstructing a realistic 3D human head based on two approximately orthogonal views. Our framework takes two images and goes through segmentation, feature point detection, 3D bald head reconstruction, 3D hair reconstruction and texture mapping to create a 3D head model. The main contribution of the paper is that the processing steps are applied to both the face region and the hair region.

  17. 3D Building Reconstruction Using Dense Photogrammetric Point Cloud

    NASA Astrophysics Data System (ADS)

    Malihi, S.; Valadan Zoej, M. J.; Hahn, M.; Mokhtarzade, M.; Arefi, H.

    2016-06-01

    Three-dimensional models of urban areas play an important role in city planning, disaster management, city navigation and other applications. Reconstruction of 3D building models is still a challenging issue in 3D city modelling. Point clouds generated from multi-view UAV images are a novel source of spatial data, which is used in this research for building reconstruction. The process starts with the segmentation of the point clouds of roofs and walls into planar groups. By generating the related surfaces and using geometrical constraints, together with symmetry considerations, a 3D model of the building is reconstructed. In a refinement step, dormers are extracted and their models are reconstructed. The reconstructed model reaches LoD3 detail, with eaves, roof fractions and dormers modelled.
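
    The roof/wall planar segmentation mentioned above is commonly approached with a RANSAC-style plane fit; a basic hedged sketch in NumPy is given below. The iteration count and distance threshold are illustrative, and the paper's actual segmentation procedure may differ.

        import numpy as np

        def ransac_plane(points, n_iter=500, threshold=0.05, rng=None):
            # Fit one dominant plane: sample 3 points, build the plane normal,
            # count inliers within `threshold` (metres). Repeat on the remaining
            # points to extract further roof/wall planes.
            rng = rng if rng is not None else np.random.default_rng()
            best_inliers = np.zeros(len(points), dtype=bool)
            for _ in range(n_iter):
                p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
                n = np.cross(p1 - p0, p2 - p0)
                norm = np.linalg.norm(n)
                if norm < 1e-9:
                    continue  # degenerate (collinear) sample
                dist = np.abs((points - p0) @ (n / norm))
                inliers = dist < threshold
                if inliers.sum() > best_inliers.sum():
                    best_inliers = inliers
            return best_inliers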

  18. Stereoscopic 3D video games and their effects on engagement

    NASA Astrophysics Data System (ADS)

    Hogue, Andrew; Kapralos, Bill; Zerebecki, Chris; Tawadrous, Mina; Stanfield, Brodie; Hogue, Urszula

    2012-03-01

    With television manufacturers developing low-cost stereoscopic 3D displays, a large number of consumers will undoubtedly have access to 3D-capable televisions at home. The availability of 3D technology places the onus on content creators to develop interesting and engaging content. While the technology of stereoscopic displays and content generation are well understood, there are many questions yet to be answered surrounding its effects on the viewer. Effects of stereoscopic display on passive viewers for film are known, however video games are fundamentally different since the viewer/player is actively (rather than passively) engaged in the content. Questions of how stereoscopic viewing affects interaction mechanics have previously been studied in the context of player performance but very few have attempted to quantify the player experience to determine whether stereoscopic 3D has a positive or negative influence on their overall engagement. In this paper we present a preliminary study of the effects stereoscopic 3D have on player engagement in video games. Participants played a video game in two conditions, traditional 2D and stereoscopic 3D and their engagement was quantified using a previously validated self-reporting tool. The results suggest that S3D has a positive effect on immersion, presence, flow, and absorption.

  19. 3D volumetric radar using 94-GHz millimeter waves

    NASA Astrophysics Data System (ADS)

    Takács, Barnabás

    2006-05-01

    This article describes a novel approach to the real-time visualization of 3D imagery obtained from a 3D millimeter-wave scanning radar. The MMW radar system employs a spinning antenna to generate a fan-shaped scanning pattern over the entire scene. The beams formed this way provide all-weather 3D distance measurements (range/azimuth display) of objects as they appear on the ground. The beam width of the antenna and its side lobes are optimized to produce the best possible resolution even at distances of up to 15 km. To create a full 3D data set, the fan pattern is tilted up and down with the help of a controlled stepper motor. For our experiments we collected data at 0.1-degree increments, using both bi-static and mono-static antenna arrangements. The collected data form a stack of range-azimuth images in the shape of a cone. This information is displayed using our high-end 3D visualization engine, which is capable of rendering high-resolution volumetric models at 30 frames per second. The resulting 3D scenes can then be viewed from any angle and subsequently processed to integrate, fuse, or match them against real-life sensor imagery or 3D model data stored in a synthetic database.
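
    A minimal sketch of how range-azimuth-elevation samples from such a tilted fan scan can be mapped into Cartesian coordinates for volumetric display; the angle conventions below are assumptions, not those of the described system.

    ```python
    # Convert radar samples (range, azimuth, elevation) to Cartesian (x, y, z).
    import numpy as np

    def polar_to_cartesian(rng_m, az_deg, el_deg):
        """rng_m: range in metres; az_deg: azimuth (antenna spin angle from north);
        el_deg: tilt of the fan pattern. Returns (x, y, z) arrays."""
        az = np.radians(az_deg)
        el = np.radians(el_deg)
        x = rng_m * np.cos(el) * np.sin(az)   # east
        y = rng_m * np.cos(el) * np.cos(az)   # north
        z = rng_m * np.sin(el)                # height
        return x, y, z

    # One fan sweep: full 360-degree azimuth at a single 0.1-degree tilt step.
    az = np.arange(0.0, 360.0, 1.0)
    x, y, z = polar_to_cartesian(np.full_like(az, 5000.0), az, 0.1)
    print(x[:3], y[:3], z[:3])
    ```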

  20. Slope instability in complex 3D topography promoted by convergent 3D groundwater flow

    NASA Astrophysics Data System (ADS)

    Reid, M. E.; Brien, D. L.

    2012-12-01

    Slope instability in complex topography is generally controlled by the interaction between gravitationally induced stresses, 3D strengths, and 3D pore-fluid pressure fields produced by flowing groundwater. As an example of this complexity, coastal bluffs sculpted by landsliding commonly exhibit a progression of undulating headlands and re-entrants. In this landscape, stresses differ between headlands and re-entrants, and 3D groundwater flow varies from vertical rainfall infiltration to lateral groundwater flow on lower-permeability layers with subsequent discharge at the curved bluff faces. In plan view, groundwater flow converges in the re-entrant regions. To investigate relative slope instability induced by undulating topography, we couple the USGS 3D limit-equilibrium slope-stability model, SCOOPS, with the USGS 3D groundwater flow model, MODFLOW. By rapidly analyzing the stability of millions of potential failures, the SCOOPS model can determine relative slope stability throughout the 3D domain underlying a digital elevation model (DEM), and it can utilize both fully 3D distributions of pore-water pressure and material strength. The two models are linked by first computing a groundwater-flow field in MODFLOW, and then computing stability in SCOOPS using the pore-pressure field derived from groundwater flow. Using these two models, our analyses of 60 m high coastal bluffs in Seattle, Washington, showed augmented instability in topographic re-entrants given recharge from a rainy season. Here, increased recharge led to elevated perched water tables, with enhanced effects in the re-entrants owing to convergence of groundwater flow. Stability in these areas was reduced by about 80% compared to equivalent dry conditions. To further isolate these effects, we examined groundwater flow and stability in hypothetical landscapes composed of uniform and equally spaced, oscillating headlands and re-entrants with differing amplitudes. The landscapes had a constant slope for both
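
    A textbook infinite-slope limit-equilibrium sketch (not the SCOOPS model itself) showing the basic mechanism the abstract describes: pore-water pressure from groundwater flow lowers the factor of safety. All material parameters below are hypothetical.

    ```python
    # Factor of safety for an infinite slope with a pore pressure u at the slip plane:
    # FS = (c' + (gamma*z*cos^2(beta) - u) * tan(phi')) / (gamma*z*sin(beta)*cos(beta))
    import math

    def factor_of_safety(c_kpa, phi_deg, gamma_knpm3, depth_m, slope_deg, pore_kpa):
        """c: effective cohesion, phi: friction angle, gamma: soil unit weight,
        depth: slip-plane depth, slope: slope angle, pore: pore pressure at that depth."""
        beta = math.radians(slope_deg)
        phi = math.radians(phi_deg)
        normal_stress = gamma_knpm3 * depth_m * math.cos(beta) ** 2
        shear_stress = gamma_knpm3 * depth_m * math.sin(beta) * math.cos(beta)
        resisting = c_kpa + (normal_stress - pore_kpa) * math.tan(phi)
        return resisting / shear_stress

    # Hypothetical bluff material: dry vs. a perched water table raising pore pressure.
    print(factor_of_safety(5.0, 33.0, 19.0, 4.0, 35.0, pore_kpa=0.0))   # ~1.07 (stable-ish)
    print(factor_of_safety(5.0, 33.0, 19.0, 4.0, 35.0, pore_kpa=25.0))  # ~0.61 (unstable)
    ```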

  1. Measuring the Visual Salience of 3D Printed Objects.

    PubMed

    Wang, Xi; Lindlbauer, David; Lessig, Christian; Maertens, Marianne; Alexa, Marc

    2016-01-01

    To investigate human viewing behavior on physical realizations of 3D objects, the authors use an eye tracker with a scene camera and fiducial markers on 3D objects to gather fixations on the presented stimuli. They use these data to validate assumptions regarding visual saliency that so far have been analyzed experimentally only for flat stimuli. They provide a way to compare fixation sequences from different subjects and develop a model for generating test sequences of fixations unrelated to the stimuli. Their results suggest that human observers agree in their fixations for the same object under similar viewing conditions. They also developed a simple procedure to validate computational models for visual saliency of 3D objects and found that popular models of mesh saliency based on center-surround patterns fail to predict fixations.
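
    An illustrative (not the authors') way one might score agreement between two observers' fixation sets on the same 3D object: the symmetrized mean distance from each fixation in one set to its nearest fixation in the other. All names and values below are assumptions for demonstration.

    ```python
    # Symmetrized nearest-fixation distance between two observers' fixation points.
    import numpy as np

    def fixation_agreement(fix_a: np.ndarray, fix_b: np.ndarray) -> float:
        """fix_a: (N, 3) and fix_b: (M, 3) fixation points on the object surface.
        Lower values mean the observers looked at similar locations."""
        d = np.linalg.norm(fix_a[:, None, :] - fix_b[None, :, :], axis=-1)  # pairwise distances
        return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

    # Toy example: two observers fixating near the same two regions of an object.
    obs1 = np.array([[0.0, 0.0, 1.0], [0.5, 0.2, 0.9], [2.0, 1.0, 0.0]])
    obs2 = np.array([[0.1, 0.0, 1.0], [2.1, 0.9, 0.1]])
    print(fixation_agreement(obs1, obs2))
    ```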