Science.gov

Sample records for 3-d viewing glasses

  1. Wide-viewing-angle floating 3D display system with no 3D glasses

    NASA Astrophysics Data System (ADS)

    Dolgoff, Eugene

    1998-04-01

    Previously, the author has described a new 3D imaging technology entitled 'real depth' with several different configurations and methods of implementation. Included were several methods to 'float' images in free space. Viewers can pass their hands through the image or appear to hold it in their hands. Most implementations provide an angle of view of approximately 45 degrees. The technology produces images at different depths from any display, such as CRT and LCD, for television, computer, projection, and other formats. Unlike stereoscopic 3D imaging, no glasses, headgear or other viewing aids are used. In addition to providing traditional depth cues, such as perspective and background image occlusion, the technology also provides both horizontal and vertical binocular parallax, producing visual accommodation and convergence which coincide. Consequently, viewing these images does not produce headaches, fatigue, or eyestrain, regardless of how long they are viewed. A method was also proposed to provide a floating image display system with a wide angle of view. Implementation of this design proved problematic, producing various image distortions. In this paper the author discloses new methods to produce aerial images with a wide angle of view and improved image quality.

  2. User experience while viewing stereoscopic 3D television

    PubMed Central

    Read, Jenny C.A.; Bohr, Iwo

    2014-01-01

    3D display technologies have been linked to visual discomfort and fatigue. In a lab-based study with a between-subjects design, 433 viewers aged from 4 to 82 years watched the same movie in either 2D or stereo 3D (S3D), and subjectively reported on a range of aspects of their viewing experience. Our results suggest that a minority of viewers, around 14%, experience adverse effects due to viewing S3D, mainly headache and eyestrain. A control experiment where participants viewed 2D content through 3D glasses suggests that around 8% may report adverse effects which are not due directly to viewing S3D, but instead are due to the glasses or to negative preconceptions about S3D (the 'nocebo effect'). Women were slightly more likely than men to report adverse effects with S3D. We could not detect any link between pre-existing eye conditions or low stereoacuity and the likelihood of experiencing adverse effects with S3D. Practitioner Summary: Stereoscopic 3D (S3D) has been linked to visual discomfort and fatigue. Viewers watched the same movie in either 2D or stereo 3D (between-subjects design). Around 14% reported effects such as headache and eyestrain linked to S3D itself, while 8% report adverse effects attributable to 3D glasses or negative expectations. PMID:24874550

  3. Real-Depth imaging: a new 3D imaging technology with inexpensive direct-view (no glasses) video and other applications

    NASA Astrophysics Data System (ADS)

    Dolgoff, Eugene

    1997-05-01

    Floating Images, Inc. has developed the software and hardware for a new, patent-pending, 'floating 3-D, off-the-screen-experience' display technology. This technology has the potential to become the next standard for home and arcade video games, computers, corporate presentations, Internet/Intranet viewing, and television. Current '3-D graphics' technologies are actually flat on screen. Floating Images™ technology actually produces images at different depths from any display, such as CRT and LCD, for television, computer, projection, and other formats. In addition, unlike stereoscopic 3-D imaging, no glasses, headgear, or other viewing aids are used. And, unlike current autostereoscopic imaging technologies, there is virtually no restriction on where viewers can sit to view the images, with no 'bad' or 'dead' zones, flipping, or pseudoscopy. In addition to providing traditional depth cues such as perspective and background image occlusion, the new technology also provides both horizontal and vertical binocular parallax (the ability to look around foreground objects to see previously hidden background objects, with each eye seeing a different view at all times) and accommodation (the need to re-focus one's eyes when shifting attention from a near object to a distant object) which coincides with convergence (the need to re-aim one's eyes when shifting attention from a near object to a distant object). Since accommodation coincides with convergence, viewing these images doesn't produce headaches, fatigue, or eye-strain, regardless of how long they are viewed (unlike stereoscopic and autostereoscopic displays). The imagery (video or computer generated) must either be formatted for the Floating Images™ platform when written, or existing software can be re-formatted without much difficulty.

  4. Methods For Electronic 3-D Moving Pictures Without Glasses

    NASA Astrophysics Data System (ADS)

    Collender, Robert B.

    1987-06-01

    This paper describes implementation approaches in image acquisition and playback for 3-D computer graphics, 3-D television and 3-D theatre movies without special glasses. Projection lamps, spatial light modulators, CRTs and dynamic scanning are all eliminated by the application of an active image array, all static components and a semi-specular screen. The resulting picture shows horizontal parallax with a wide horizontal view field (up to 360 degrees), giving a holographic appearance in full color with smooth continuous viewing without speckle. Static component systems are compared with dynamic component systems using both linear and circular arrays. Implementations of computer graphic systems are shown that allow complex shaded color images to extend from the viewer's eyes to infinity. Large screen systems visible by hundreds of people are feasible by the use of low f-stops and high gain screens in projection. Screen geometries and special screen properties are shown. Viewing characteristics offer no restrictions in view position over the entire view field and have a "look-around" feature for all the categories of computer graphics, television and movies. Standard video cassettes and optical discs can also interface with the system to generate a 3-D window viewable without glasses. A prognosis is given for technology application to 3-D pictures without glasses that replicate the daily viewing experience. Superposition of computer graphics on real-world pictures is shown to be feasible.

  5. User experience while viewing stereoscopic 3D television.

    PubMed

    Read, Jenny C A; Bohr, Iwo

    2014-01-01

    3D display technologies have been linked to visual discomfort and fatigue. In a lab-based study with a between-subjects design, 433 viewers aged from 4 to 82 years watched the same movie in either 2D or stereo 3D (S3D), and subjectively reported on a range of aspects of their viewing experience. Our results suggest that a minority of viewers, around 14%, experience adverse effects due to viewing S3D, mainly headache and eyestrain. A control experiment where participants viewed 2D content through 3D glasses suggests that around 8% may report adverse effects which are not due directly to viewing S3D, but instead are due to the glasses or to negative preconceptions about S3D (the 'nocebo effect'). Women were slightly more likely than men to report adverse effects with S3D. We could not detect any link between pre-existing eye conditions or low stereoacuity and the likelihood of experiencing adverse effects with S3D. PMID:24874550

  6. Effect of viewing distance on 3D fatigue caused by viewing mobile 3D content

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Lee, Dong-Su; Park, Min-Chul; Yano, Sumio

    2013-05-01

    With the advent of autostereoscopic display techniques and the increased demand for smart phones, there has been significant growth in mobile TV markets. This rapid growth in technical, economic, and social aspects has encouraged 3D TV manufacturers to apply 3D rendering technology to mobile devices, so that people have more opportunities to come into contact with 3D content anytime and anywhere. Even though mobile 3D technology is driving the current market growth, there is an important consideration for consistent development and growth in the display market: human factors linked to mobile 3D viewing should be taken into account before developing mobile 3D technology further. Many studies have investigated whether mobile 3D viewing causes undesirable biomedical effects such as motion sickness and visual fatigue, but few have examined the main factors adversely affecting human health. Viewing distance is considered one of the main factors in establishing optimized viewing environments from a viewer's point of view. Thus, in an effort to determine human-friendly viewing environments, this study investigates the effect of viewing distance on the human visual system during exposure to mobile 3D environments. By recording and analyzing brainwaves before and after watching mobile 3D content, we explore how viewing distance affects the viewing experience from physiological and psychological perspectives. The results obtained in this study are expected to provide viewing guidelines for viewers, help protect viewers against undesirable 3D effects, and support gradual progress towards human-friendly mobile 3D viewing.

  7. Spatial Visualization by Realistic 3D Views

    ERIC Educational Resources Information Center

    Yue, Jianping

    2008-01-01

    In this study, the popular Purdue Spatial Visualization Test-Visualization by Rotations (PSVT-R) in isometric drawings was recreated with CAD software that allows 3D solid modeling and rendering to provide more realistic pictorial views. Both the original and the modified PSVT-R tests were given to students and their scores on the two tests were…

  8. 3D object retrieval using salient views.

    PubMed

    Atmosukarto, Indriyati; Shapiro, Linda G

    2013-06-01

    This paper presents a method for selecting salient 2D views to describe 3D objects for the purpose of retrieval. The views are obtained by first identifying salient points via a learning approach that uses shape characteristics of the 3D points (Atmosukarto and Shapiro in International workshop on structural, syntactic, and statistical pattern recognition, 2008; Atmosukarto and Shapiro in ACM multimedia information retrieval, 2008). The salient views are selected by choosing views with multiple salient points on the silhouette of the object. Silhouette-based similarity measures from Chen et al. (Comput Graph Forum 22(3):223-232, 2003) are then used to calculate the similarity between two 3D objects. Retrieval experiments were performed on three datasets: the Heads dataset, the SHREC2008 dataset, and the Princeton dataset. Experimental results show that the retrieval results using the salient views are comparable to the existing light field descriptor method (Chen et al. in Comput Graph Forum 22(3):223-232, 2003), and our method achieves a 15-fold speedup in the feature extraction computation time. PMID:23833704
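
    As an illustrative aside (not the authors' code), the core idea of ranking candidate views by how much saliency lands on the object's apparent contour can be sketched in a few lines of Python. The sketch below assumes a point cloud with per-point normals and saliency scores, approximates "on the silhouette" by normals nearly perpendicular to the view direction, and ignores occlusion; the published method instead scores rendered silhouettes using learned salient points.

    ```python
    import numpy as np

    def select_salient_views(normals, saliency, view_dirs, top_k=5, edge_tol=0.15):
        """Rank candidate viewing directions by how much saliency falls on the
        apparent contour. 'On the contour' is approximated by surface normals
        nearly perpendicular to the view direction; occlusion is ignored for
        brevity (the published method scores rendered silhouettes instead).
        """
        scores = []
        for v in view_dirs:
            v = v / np.linalg.norm(v)
            on_contour = np.abs(normals @ v) < edge_tol   # near-silhouette points
            scores.append(saliency[on_contour].sum())     # accumulate their saliency
        scores = np.asarray(scores)
        return np.argsort(scores)[::-1][:top_k], scores

    # Toy usage: a unit sphere (normals equal positions) with random saliency.
    rng = np.random.default_rng(0)
    pts = rng.normal(size=(500, 3))
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)
    best, scores = select_salient_views(pts, rng.random(500), rng.normal(size=(20, 3)))
    print("best candidate views:", best)
    ```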

  9. 3D object retrieval using salient views

    PubMed Central

    Shapiro, Linda G.

    2013-01-01

    This paper presents a method for selecting salient 2D views to describe 3D objects for the purpose of retrieval. The views are obtained by first identifying salient points via a learning approach that uses shape characteristics of the 3D points (Atmosukarto and Shapiro in International workshop on structural, syntactic, and statistical pattern recognition, 2008; Atmosukarto and Shapiro in ACM multimedia information retrieval, 2008). The salient views are selected by choosing views with multiple salient points on the silhouette of the object. Silhouette-based similarity measures from Chen et al. (Comput Graph Forum 22(3):223–232, 2003) are then used to calculate the similarity between two 3D objects. Retrieval experiments were performed on three datasets: the Heads dataset, the SHREC2008 dataset, and the Princeton dataset. Experimental results show that the retrieval results using the salient views are comparable to the existing light field descriptor method (Chen et al. in Comput Graph Forum 22(3):223–232, 2003), and our method achieves a 15-fold speedup in the feature extraction computation time. PMID:23833704

  10. From 3D view to 3D print

    NASA Astrophysics Data System (ADS)

    Dima, M.; Farisato, G.; Bergomi, M.; Viotto, V.; Magrin, D.; Greggio, D.; Farinato, J.; Marafatto, L.; Ragazzoni, R.; Piazza, D.

    2014-08-01

    In the last few years 3D printing has become more and more popular and is used in many fields, from manufacturing to industrial design, architecture, medical support and aerospace. 3D printing is an evolution of bi-dimensional printing which makes it possible to obtain a solid object from a 3D model realized with 3D modelling software. The final product is obtained using an additive process, in which successive layers of material are laid down one over the other. A 3D printer makes it possible to realize, in a simple way, very complex shapes which would be quite difficult to produce with dedicated conventional facilities. Because the object is built up by superposing one layer on the others, no particular workflow is needed; it is sufficient to draw the model and send it to print. Many different kinds of 3D printers exist, based on the technology and material used for layer deposition. A common deposition material is ABS plastic, a light and rigid thermoplastic polymer whose mechanical properties make it widely used in several fields, such as pipe production and car interior manufacturing. I used this technology to create a 1:1 scale model of the telescope which is the hardware core of the small space mission CHEOPS (CHaracterising ExOPlanets Satellite) by ESA, which aims to characterize exoplanets via transit observations. The telescope has a Ritchey-Chrétien configuration with a 30 cm aperture and the launch is foreseen in 2017. In this paper, I present the different phases of the realization of such a model, focusing on the pros and cons of this kind of technology. For example, because of the finite printable volume (10×10×12 inches in the x, y and z directions respectively), it has been necessary to split the largest parts of the instrument into smaller components to be then reassembled and post-processed. A further issue is the resolution of the printed material, which is expressed in terms of layers

  11. EEG-based usability assessment of 3D shutter glasses

    NASA Astrophysics Data System (ADS)

    Wenzel, Markus A.; Schultze-Kraft, Rafael; Meinecke, Frank C.; Cardinaux, Fabien; Kemp, Thomas; Müller, Klaus-Robert; Curio, Gabriel; Blankertz, Benjamin

    2016-02-01

    Objective. Neurotechnology can contribute to the usability assessment of products by providing objective measures of neural workload and can uncover usability impediments that are not consciously perceived by test persons. In this study, the neural processing effort imposed on the viewer of 3D television by shutter glasses was quantified as a function of shutter frequency. In particular, we sought to determine the critical shutter frequency at which the ‘neural flicker’ vanishes, such that visual fatigue due to this additional neural effort can be prevented by increasing the frequency of the system. Approach. Twenty-three participants viewed an image through 3D shutter glasses, while multichannel electroencephalogram (EEG) was recorded. In total ten shutter frequencies were employed, selected individually for each participant to cover the range below, at and above the threshold of flicker perception. The source of the neural flicker correlate was extracted using independent component analysis and the flicker impact on the visual cortex was quantified by decoding the state of the shutter from the EEG. Main Result. Effects of the shutter glasses were traced in the EEG up to around 67 Hz—about 20 Hz over the flicker perception threshold—and vanished at the subsequent frequency level of 77 Hz. Significance. The impact of the shutter glasses on the visual cortex can be detected by neurotechnology even when a flicker is not reported by the participants. Potential impact. Increasing the shutter frequency from the usual 50 Hz or 60 Hz to 77 Hz reduces the risk of visual fatigue and thus improves shutter-glass-based 3D usability.
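
    As a loose illustration of how a flicker correlate can be quantified in the EEG spectrum (the study itself used independent component analysis and shutter-state decoding, which this sketch does not reproduce), one can compare spectral power at the shutter frequency with the power in neighbouring bands:

    ```python
    import numpy as np

    def flicker_snr(eeg, fs, shutter_hz, bw=1.0):
        """Power at the shutter frequency divided by mean power in nearby bands.
        A crude spectral indicator of a flicker correlate in one EEG channel;
        a simplified stand-in, not the paper's ICA + decoding pipeline.
        """
        freqs = np.fft.rfftfreq(len(eeg), 1.0 / fs)
        psd = np.abs(np.fft.rfft(eeg * np.hanning(len(eeg)))) ** 2
        at_target = psd[np.abs(freqs - shutter_hz) <= bw].mean()
        neighbours = psd[(np.abs(freqs - shutter_hz) > bw)
                         & (np.abs(freqs - shutter_hz) <= 4 * bw)].mean()
        return at_target / neighbours

    # Toy usage: synthetic occipital channel with a weak 60 Hz flicker component.
    fs = 500.0
    t = np.arange(0, 10, 1 / fs)
    eeg = np.random.randn(t.size) + 0.3 * np.sin(2 * np.pi * 60 * t)
    print(f"SNR at 60 Hz: {flicker_snr(eeg, fs, 60):.2f}")
    ```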

  12. Mobile glasses-free 3D using compact waveguide hologram

    NASA Astrophysics Data System (ADS)

    Pyun, K.; Choi, C.; Morozov, A.; Putilin, A.; Bovsunovskiy, I.; Kim, S.; Ahn, J.; Lee, H.-S.; Lee, S.

    2013-02-01

    The explosion of mobile communication devices makes 3D data available anywhere, anytime. However, to record and reconstruct 3D, a huge number of optical components is often required, which makes the overall device bulky and degrades image quality because of error-prone tuning. In addition, if extra glasses are required, the 3D user experience is tiring and unpleasant. Holography is the ultimate 3D technology, with which users experience natural 3D in every direction. For a mobile glasses-free 3D experience, it is critical to make the holographic device as compact and integrated as possible. For reliable and economical mass production, integrated optics is needed, just as integrated circuits are in the semiconductor industry. Thus, we propose mobile glasses-free 3D using a compact waveguide hologram, considering overall device size, the quantity of elements and the combined functionality of each element. The main advantages of the proposed solution are as follows. First, the solution utilizes various integral optical elements, each of which is a unified, non-adjustable optical element replacing separate, adjustable optical elements of various forms and configurations. Second, the geometrical form of the integral elements keeps the whole device small. Third, the geometrical form of the integral elements allows a flat device to be created. Finally, the absence of adjustable elements provides rigidity for the whole device. The use of integrated optics based on waveguide holographic elements allows a new type of compact and highly functional device for mobile glasses-free 3D applications, such as mobile medical 3D data visualization.

  13. First 3D view of solar eruptions

    NASA Astrophysics Data System (ADS)

    2004-07-01

    arrival times and impact angles at the Earth," says Dr Thomas Moran of the Catholic University, Washington, USA. In collaboration with Dr Joseph Davila, of NASA’s Goddard Space Flight Center, Greenbelt, USA, Moran has analysed two-dimensional images from the ESA/NASA Solar and Heliospheric Observatory (SOHO) in a new way to yield 3D images. Their technique is able to reveal the complex and distorted magnetic fields that travel with the CME cloud and sometimes interact with Earth's own magnetic field, pouring tremendous amounts of energy into the space near Earth. "These magnetic fields are invisible," Moran explains, "but since the CME gas is electrified, it spirals around the magnetic fields, tracing out their shapes." Therefore, a 3D view of the CME electrified gas (called a plasma) gives scientists valuable information on the structure and behaviour of the magnetic fields powering the CME. The new analysis technique for SOHO data determines the three-dimensional structure of a CME by taking a sequence of three SOHO Large Angle and Spectrometric Coronagraph (LASCO) images through various polarisers, at different angles. Whilst the light emitted by the Sun is not polarised, once it is scattered off electrons in the CME plasma it takes up some polarisation. This means that the electric fields of some of the scattered light are forced to oscillate in certain directions, whereas the electric field in the light emitted by the Sun is free to oscillate in all directions. Moran and Davila knew that light from CME structures closer to the plane of the Sun (as seen on the LASCO images) had to be more polarised than light from structures farther from that plane. Thus, by computing the ratio of polarised to unpolarised light for each CME structure, they could measure its distance from the plane. This provided the missing third dimension to the LASCO images. With this technique, the team has confirmed that the structure of CMEs directed towards Earth is an expanding arcade of
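
    The geometric heart of the ratio technique can be illustrated with a simplified, single-electron Thomson-scattering model (a sketch only; the published analysis accounts for the extended Sun and line-of-sight integration). Under that point-source approximation the degree of polarization depends only on the scattering angle, which in turn fixes how far a feature lies out of the plane of the sky:

    ```python
    import numpy as np

    def los_offset_from_polarization(p, rho):
        """Line-of-sight offset z of a scattering feature from the plane of the
        sky, given its degree of polarization p and its projected distance rho
        from Sun centre, under the idealized single-electron, point-source
        Thomson relation p = sin^2(chi) / (1 + cos^2(chi)).  The sign of z
        (in front of or behind the plane) is not determined by this alone.
        """
        p = np.asarray(p, dtype=float)
        sin2_chi = 2.0 * p / (1.0 + p)                     # invert the relation
        cos_chi = np.sqrt(np.clip(1.0 - sin2_chi, 0.0, 1.0))
        sin_chi = np.sqrt(sin2_chi)
        return rho * cos_chi / sin_chi                     # |z| = rho / tan(chi)

    # Toy usage: a feature 5 solar radii from Sun centre in the plane of the sky,
    # observed to be 70% polarized, sits roughly 2.3 solar radii out of that plane.
    print(los_offset_from_polarization(0.7, rho=5.0))
    ```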

  14. 3-D Perspective View, Kamchatka Peninsula, Russia

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This perspective view shows the western side of the volcanically active Kamchatka Peninsula in eastern Russia. The image was generated using the first data collected during the Shuttle Radar Topography Mission (SRTM). In the foreground is the Sea of Okhotsk. Inland from the coast, vegetated floodplains and low relief hills rise toward snow capped peaks. The topographic effects on snow and vegetation distribution are very clear in this near-horizontal view. Forming the skyline is the Sredinnyy Khrebet, the volcanic mountain range that makes up the spine of the peninsula. High resolution SRTM topographic data will be used by geologists to study how volcanoes form and to understand the hazards posed by future eruptions.

    This image was generated using topographic data from SRTM and an enhanced true-color image from the Landsat 7 satellite. This image contains about 2,400 meters (7,880 feet) of total relief. The topographic expression was enhanced by adding artificial shading as calculated from the SRTM elevation model. The Landsat data was provided by the United States Geological Survey's Earth Resources Observations Systems (EROS) Data Center, Sioux Falls, South Dakota.

    SRTM, launched on February 11, 2000, used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar(SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. To collect the 3-D SRTM data, engineers added a 60-meter-long (200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. SRTM collected three-dimensional measurements of nearly 80 percent of the Earth's surface. SRTM is a cooperative project between NASA, the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense, and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Earth Science Enterprise, Washington, D.C.

    Size: 33.3 km (20.6 miles) wide x

  16. Case study: Beauty and the Beast 3D: benefits of 3D viewing for 2D to 3D conversion

    NASA Astrophysics Data System (ADS)

    Handy Turner, Tara

    2010-02-01

    From the earliest stages of the Beauty and the Beast 3D conversion project, the advantages of accurate desk-side 3D viewing were evident. While designing and testing the 2D to 3D conversion process, the engineering team at Walt Disney Animation Studios proposed a 3D viewing configuration that not only allowed artists to "compose" stereoscopic 3D but also improved efficiency by allowing artists to instantly detect which image features were essential to the stereoscopic appeal of a shot and which features had minimal or even negative impact. At a time when few commercial 3D monitors were available and few software packages provided 3D desk-side output, the team designed their own prototype devices and collaborated with vendors to create a "3D composing" workstation. This paper outlines the display technologies explored, final choices made for Beauty and the Beast 3D, wish-lists for future development and a few rules of thumb for composing compelling 2D to 3D conversions.

  17. 3D View of Death Valley, California

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This 3-D perspective view looking north over Death Valley, California, was produced by draping ASTER nighttime thermal infrared data over topographic data from the US Geological Survey. The ASTER data were acquired April 7, 2000 with the multi-spectral thermal infrared channels, and cover an area of 60 by 80 km (37 by 50 miles). Bands 13, 12, and 10 are displayed in red, green and blue respectively. The data have been computer enhanced to exaggerate the color variations that highlight differences in types of surface materials. Salt deposits on the floor of Death Valley appear in shades of yellow, green, purple, and pink, indicating presence of carbonate, sulfate, and chloride minerals. The Panamint Mtns. to the west, and the Black Mtns. to the east, are made up of sedimentary limestones, sandstones, shales, and metamorphic rocks. The bright red areas are dominated by the mineral quartz, such as is found in sandstones; green areas are limestones. In the lower center part of the image is Badwater, the lowest point in North America.

    Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) is one of five Earth-observing instruments launched December 18, 1999, on NASA's Terra satellite. The instrument was built by Japan's Ministry of International Trade and Industry. A joint U.S./Japan science team is responsible for validation and calibration of the instrument and the data products. Dr. Anne Kahle at NASA's Jet Propulsion Laboratory, Pasadena, Calif., is the U.S. Science team leader; Moshe Pniel of JPL is the project manager. ASTER is the only high resolution imaging sensor on Terra. The primary goal of the ASTER mission is to obtain high-resolution image data in 14 channels over the entire land surface, as well as black and white stereo images. With revisit time of between 4 and 16 days, ASTER will provide the capability for repeat coverage of changing areas on Earth's surface.

    The broad spectral coverage and high spectral resolution of ASTER

  18. View synthesis techniques for 3D video

    NASA Astrophysics Data System (ADS)

    Tian, Dong; Lai, Po-Lin; Lopez, Patrick; Gomila, Cristina

    2009-08-01

    To facilitate new video applications such as three-dimensional video (3DV) and free-viewpoint video (FVV), the multiview-plus-depth format (MVD), which consists of both video views and the corresponding per-pixel depth images, is being investigated. Virtual views can be generated using depth-image-based rendering (DIBR), which takes video and the corresponding depth images as input. This paper discusses view synthesis techniques based on DIBR, which include forward warping, blending and hole filling. In particular, we emphasize the techniques contributed to the MPEG view synthesis reference software (VSRS). Unlike in the field of computer graphics, ground-truth depth images for natural content are very difficult to obtain. The estimated depth images used for view synthesis typically contain different types of noise. Some robust synthesis modes to combat depth errors are also presented in this paper. In addition, we briefly discuss how to use these synthesis techniques, with minor modifications, to generate the occlusion-layer information for layered depth video (LDV) data, which is another potential format for 3DV applications.
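
    For readers unfamiliar with DIBR, the forward-warping stage mentioned above amounts to back-projecting each reference pixel to 3D using its depth, applying a rigid transform into the virtual camera, and re-projecting. The snippet below is a minimal illustration assuming pinhole cameras with known intrinsics and relative pose; it is not the MPEG VSRS code, and it omits blending and hole filling.

    ```python
    import numpy as np

    def forward_warp(depth, K_ref, K_virt, R, t):
        """Forward-warp the pixel grid of a reference view into a virtual view.

        depth          : (H, W) per-pixel depth of the reference view
        K_ref, K_virt  : 3x3 intrinsic matrices
        R, t           : rotation and translation from reference to virtual camera
        Returns (H, W, 2) target pixel coordinates and a validity mask.
        """
        H, W = depth.shape
        u, v = np.meshgrid(np.arange(W), np.arange(H))
        pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x HW
        rays = np.linalg.inv(K_ref) @ pix                   # back-project to rays
        X = rays * depth.reshape(1, -1)                     # 3D points, reference frame
        Xv = R @ X + t.reshape(3, 1)                        # move to virtual frame
        proj = K_virt @ Xv                                  # re-project
        valid = proj[2] > 1e-6
        uv = (proj[:2] / np.where(valid, proj[2], 1.0)).T.reshape(H, W, 2)
        return uv, valid.reshape(H, W)

    # Toy usage: constant-depth 4x4 image and a small horizontal baseline.
    K = np.array([[100.0, 0.0, 2.0], [0.0, 100.0, 2.0], [0.0, 0.0, 1.0]])
    uv, ok = forward_warp(np.full((4, 4), 5.0), K, K, np.eye(3), np.array([0.1, 0.0, 0.0]))
    print(uv[0, 0], ok.all())
    ```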

  19. Multi-view and 3D deformable part models.

    PubMed

    Pepik, Bojan; Stark, Michael; Gehler, Peter; Schiele, Bernt

    2015-11-01

    As objects are inherently 3D, they have been modeled in 3D in the early days of computer vision. Due to the ambiguities arising from mapping 2D features to 3D models, 3D object representations have been neglected and 2D feature-based models are the predominant paradigm in object detection nowadays. While such models have achieved outstanding bounding box detection performance, they come with limited expressiveness, as they are clearly limited in their capability of reasoning about 3D shape or viewpoints. In this work, we bring the worlds of 3D and 2D object representations closer, by building an object detector which leverages the expressive power of 3D object representations while at the same time can be robustly matched to image evidence. To that end, we gradually extend the successful deformable part model [1] to include viewpoint information and part-level 3D geometry information, resulting in several different models with different level of expressiveness. We end up with a 3D object model, consisting of multiple object parts represented in 3D and a continuous appearance model. We experimentally verify that our models, while providing richer object hypotheses than the 2D object models, provide consistently better joint object localization and viewpoint estimation than the state-of-the-art multi-view and 3D object detectors on various benchmarks (KITTI [2] , 3D object classes [3] , Pascal3D+ [4] , Pascal VOC 2007 [5] , EPFL multi-view cars[6] ). PMID:26440264

  20. World Wind 3D Earth Viewing

    NASA Technical Reports Server (NTRS)

    Hogan, Patrick; Maxwell, Christopher; Kim, Randolph; Gaskins, Tom

    2007-01-01

    World Wind allows users to zoom from satellite altitude down to any place on Earth, leveraging high-resolution LandSat imagery and SRTM (Shuttle Radar Topography Mission) elevation data to experience Earth in visually rich 3D. In addition to Earth, World Wind can also visualize other planets, and there are already comprehensive data sets for Mars and the Earth's moon, which are as easily accessible as those of Earth. There have been more than 20 million downloads to date, and the software is being used heavily by the Department of Defense due to the code's ability to be extended and the evolution of the code courtesy of NASA and the user community. Primary features include the dynamic access to public domain imagery and its ease of use. All one needs to control World Wind is a two-button mouse. Additional guides and features can be accessed through a simplified menu. A JAVA version will be available soon. Navigation is automated with single clicks of a mouse, or by typing in any location to automatically zoom in to see it. The World Wind install package contains the necessary requirements such as the .NET runtime and managed DirectX library. World Wind can display combinations of data from a variety of sources, including Blue Marble, LandSat 7, SRTM, NASA Scientific Visualization Studio, GLOBE, and much more. A thorough list of features, the user manual, a key chart, and screen shots are available at http://worldwind.arc.nasa.gov.

  1. 3D View of Mars Particle

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This is a 3D representation of the pits seen in the first Atomic Force Microscope, or AFM, images sent back from NASA's Phoenix Mars Lander. Red represents the highest point and purple represents the lowest point.

    The particle in the upper left corner shown at the highest magnification ever seen from another world is a rounded particle about one micrometer, or one millionth of a meter, across. It is a particle of the dust that cloaks Mars. Such dust particles color the Martian sky pink, feed storms that regularly envelop the planet and produce Mars' distinctive red soil.

    The particle was part of a sample informally called 'Sorceress' delivered to the AFM on the 38th Martian day, or sol, of the mission (July 2, 2008). The AFM is part of Phoenix's microscopic station called MECA, or the Microscopy, Electrochemistry, and Conductivity Analyzer.

    The AFM was developed by a Swiss-led consortium, with Imperial College London producing the silicon substrate that holds sampled particles.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  2. 3D View of Grand Canyon, Arizona

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The Grand Canyon is one of North America's most spectacular geologic features. Carved primarily by the Colorado River over the past six million years, the canyon sports vertical drops of 5,000 feet and spans a 445-kilometer-long stretch of Arizona desert. The strata along the steep walls of the canyon form a record of geologic time from the Paleozoic Era (250 million years ago) to the Precambrian (1.7 billion years ago).

    The above view was acquired by the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument aboard the Terra spacecraft. Visible and near infrared data were combined to form an image that simulates the natural colors of water and vegetation. Rock colors, however, are not accurate. The image data were combined with elevation data to produce this perspective view, with no vertical exaggeration, looking from above the South Rim up Bright Angel Canyon towards the North Rim. The light lines on the plateau at lower right are the roads around the Canyon View Information Plaza. The Bright Angel Trail, which reaches the Colorado in 11.3 kilometers, can be seen dropping into the canyon over Plateau Point at bottom center. The blue and black areas on the North Rim indicate a forest fire that was smoldering as the data were acquired on May 12, 2000.

    Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) is one of five Earth-observing instruments launched December 18, 1999, on NASA's Terra satellite. The instrument was built by Japan's Ministry of International Trade and Industry. A joint U.S./Japan science team is responsible for validation and calibration of the instrument and the data products. Dr. Anne Kahle at NASA's Jet Propulsion Laboratory, Pasadena, Calif., is the U.S. Science team leader; Moshe Pniel of JPL is the project manager. ASTER is the only high resolution imaging sensor on Terra. The primary goal of the ASTER mission is to obtain high-resolution image data in 14 channels over the entire land

  3. Balance and coordination after viewing stereoscopic 3D television.

    PubMed

    Read, Jenny C A; Simonotto, Jennifer; Bohr, Iwo; Godfrey, Alan; Galna, Brook; Rochester, Lynn; Smulders, Tom V

    2015-07-01

    Manufacturers and the media have raised the possibility that viewing stereoscopic 3D television (S3D TV) may cause temporary disruption to balance and visuomotor coordination. We looked for evidence of such effects in a laboratory-based study. Four hundred and thirty-three people aged 4-82 years old carried out tests of balance and coordination before and after viewing an 80 min movie in either conventional 2D or stereoscopic 3D, while wearing two triaxial accelerometers. Accelerometry produced little evidence of any change in body motion associated with S3D TV. We found no evidence that viewing the movie in S3D causes a detectable impairment in balance or in visuomotor coordination. PMID:26587261

  4. Balance and coordination after viewing stereoscopic 3D television

    PubMed Central

    Read, Jenny C. A.; Simonotto, Jennifer; Bohr, Iwo; Godfrey, Alan; Galna, Brook; Rochester, Lynn; Smulders, Tom V.

    2015-01-01

    Manufacturers and the media have raised the possibility that viewing stereoscopic 3D television (S3D TV) may cause temporary disruption to balance and visuomotor coordination. We looked for evidence of such effects in a laboratory-based study. Four hundred and thirty-three people aged 4–82 years old carried out tests of balance and coordination before and after viewing an 80 min movie in either conventional 2D or stereoscopic 3D, while wearing two triaxial accelerometers. Accelerometry produced little evidence of any change in body motion associated with S3D TV. We found no evidence that viewing the movie in S3D causes a detectable impairment in balance or in visuomotor coordination. PMID:26587261

  5. Evaluation of viewing experiences induced by curved 3D display

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Park, Min-Chul; Yano, Sumio

    2015-05-01

    As advanced display technology has developed, much attention has been given to flexible panels. Moreover, with the momentum of the 3D era, stereoscopic 3D techniques have been combined with curved displays. However, despite the increased need for 3D functionality in curved displays, comparisons between curved and flat panel displays with 3D views have rarely been made. Most previous studies have investigated basic ergonomic aspects such as viewing posture and distance with only 2D views. It is generally thought that curved displays are more effective in enhancing involvement in specific content stories because the field of view and the distance from the viewer's eyes to both edges of the screen are more natural on curved displays than on flat panels. For flat panel displays, ocular torsion may occur when viewers move their eyes from the center to the edges of the screen to continuously capture rapidly moving 3D objects. This is due in part to differences between the viewing distance from the center of the screen to the viewer's eyes and that from the edges of the screen to the eyes. Thus, this study compared S3D viewing experiences induced by a curved display with those of a flat panel display by evaluating significant subjective and objective measures.

  6. 3D scene modeling from multiple range views

    NASA Astrophysics Data System (ADS)

    Sequeira, Vitor; Goncalves, Joao G. M.; Ribeiro, M. Isabel

    1995-09-01

    This paper presents a new 3D scene analysis system that automatically reconstructs the 3D geometric model of real-world scenes from multiple range images acquired by a laser range finder on board a mobile robot. The reconstruction is achieved through an integrated procedure including range data acquisition, geometrical feature extraction, registration, and integration of multiple views. Different descriptions of the final 3D scene model are obtained: a polygonal triangular mesh, a surface description in terms of planar and biquadratic surfaces, and a 3D boundary representation. Relevant experimental results from the complete 3D scene modeling are presented. Direct applications of this technique include 3D reconstruction and/or update of architectural or industrial plans into a CAD model, design verification of buildings, navigation of autonomous robots, and input to virtual reality systems.

  7. Virtual view adaptation for 3D multiview video streaming

    NASA Astrophysics Data System (ADS)

    Petrovic, Goran; Do, Luat; Zinger, Sveta; de With, Peter H. N.

    2010-02-01

    Virtual views in 3D-TV and multi-view video systems are reconstructed images of the scene generated synthetically from the original views. In this paper, we analyze the performance of streaming virtual views over IP networks with limited and time-varying available bandwidth. We show that the average video quality perceived by the user can be improved with an adaptive streaming strategy that aims at maximizing the average video quality. Our adaptive 3D multi-view streaming can provide a quality improvement of 2 dB on average over non-adaptive streaming. We demonstrate that an optimized virtual view adaptation algorithm needs to be view-dependent, and it achieves an improvement of up to 0.7 dB. We analyze our adaptation strategies under dynamically varying available bandwidth in the network.

  8. A closer view of prominent rocks - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Many prominent rocks near the Sagan Memorial Station are featured in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. Shark, Half-Dome, Pumpkin, Flat Top and Frog are at center. Little Flat Top is at right. The horizon in the distance is one to two kilometers away.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

  9. 3-D Television Without Glasses: On Standard Bandwidth

    NASA Astrophysics Data System (ADS)

    Collender, Robert B.

    1983-10-01

    This system for stereoscopic television uses relative camera-to-scene translating motion and does not require optical aids at the observer's eyes, presents a horizontal-parallax (hologram-like) 3-D full-motion scene to a wide audience, has no dead zones or pseudo 3-D zones over the entire horizontal viewing field, and operates on standard telecast signals, requiring no changes to the television studio equipment or the home television antenna. The only change required at the receiving end is a special television projector. The system is compatible with pre-recorded standard color television signals. The cathode ray tube is eliminated by substituting an array of solid-state charge-coupled-device liquid crystal light valves, which have the property of receiving television fields in parallel from memory and which are arrayed in an arc for scanning purposes. The array contains a scrolled sequence of successive television frames which serve as the basis for 3-D horizontal viewing parallax. These light valves reflect polarized light with the degree of polarization made a function of the scene brightness. The array is optically scanned and the sequence rapidly projected onto a cylindrical concave semi-specular screen that returns all of the light to a rapidly translating vertical "aerial" exit slit of light through which the audience views the reconstructed 3-D scene.

  10. Efficient fabrication method of nano-grating for 3D holographic display with full parallax views.

    PubMed

    Wan, Wenqiang; Qiao, Wen; Huang, Wenbin; Zhu, Ming; Fang, Zongbao; Pu, Donglin; Ye, Yan; Liu, Yanhua; Chen, Linsen

    2016-03-21

    Without any special glasses, multiview 3D displays based on diffractive optics can present high-resolution, full-parallax 3D images over an ultra-wide viewing angle. The enabling optical component, namely the phase plate, can produce arbitrarily distributed view zones by carefully designing the orientation and the period of each nano-grating pixel. However, such 3D display screens have been restricted to a limited size because of the time-consuming process of fabricating nano-gratings on the phase plate. In this paper, we proposed and developed a lithography system that can fabricate the phase plate efficiently. Here we made two phase plates with full nano-grating pixel coverage at a speed of 20 mm²/min, a 500-fold increase in efficiency compared with E-beam lithography. One 2.5-inch phase plate generated 9-view 3D images with horizontal parallax, while the other 6-inch phase plate produced 64-view 3D images with full parallax. The angular divergence in the horizontal and vertical axes was 1.5 degrees and 1.25 degrees, respectively, slightly larger than the value of 1.2 degrees simulated by the Finite Difference Time Domain (FDTD) method. The intensity variation was less than 10% for each viewpoint, in agreement with the simulation results. On top of each phase plate, a high-resolution binary masking pattern containing the amplitude information of all viewing zones was precisely aligned. We achieved a resolution of 400 pixels/inch and a viewing angle of 40 degrees for 9-view 3D images with horizontal parallax. In another prototype, the resolution of each view was 160 pixels/inch and the viewing angle was 50 degrees for 64-view 3D images with full parallax. As demonstrated in the experiments, the homemade lithography system provides the key fabrication technology for multiview 3D holographic display. PMID:27136814
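
    The statement that each nano-grating pixel is defined by an orientation and a period can be illustrated with the scalar grating equation. The sketch below is a back-of-the-envelope calculation under assumed conditions (normal incidence, first diffraction order), not the authors' design procedure for the phase plate:

    ```python
    import numpy as np

    def grating_parameters(wavelength_nm, view_azimuth_deg, view_polar_deg):
        """First-order period and in-plane orientation for one nano-grating pixel
        so that normally incident light of the given wavelength is steered toward
        a view zone at (azimuth, polar angle).  Uses the scalar grating equation
        sin(theta) = wavelength / period for order m = 1.
        """
        theta = np.radians(view_polar_deg)
        period_nm = wavelength_nm / np.sin(theta)       # grating equation, m = 1
        orientation_deg = view_azimuth_deg              # grating vector aims at the zone
        return period_nm, orientation_deg

    # Toy usage: steer 532 nm light to a view zone 20 degrees off-axis,
    # 45 degrees around the screen normal (period comes out near 1.56 micrometres).
    print(grating_parameters(532.0, view_azimuth_deg=45.0, view_polar_deg=20.0))
    ```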

  11. fVisiOn: glasses-free tabletop 3D display to provide virtual 3D media naturally alongside real media

    NASA Astrophysics Data System (ADS)

    Yoshida, Shunsuke

    2012-06-01

    A novel glasses-free tabletop 3D display, named fVisiOn, floats virtual 3D objects on an empty, flat tabletop surface and enables multiple viewers to observe raised 3D images from any angle around 360°. Our glasses-free 3D image reproduction method employs a combination of an optical device and an array of projectors and produces continuous horizontal parallax in the direction of a circular path located above the table. The optical device is shaped as a hollow cone and works as an anisotropic diffuser. The circularly arranged projectors cast numerous rays into the optical device. Each ray represents a particular ray that passes through a corresponding point on a virtual object's surface and is oriented toward a viewing area around the table. At any viewpoint on the ring-shaped viewing area, both eyes collect fractional images from different projectors, and all the viewers around the table can perceive the scene as 3D from their own perspectives because the images include binocular disparity. The entire mechanism is installed beneath the table, so the tabletop area remains clear and no ordinary tabletop activities are disturbed. Many people can naturally share the 3D images displayed together with real objects on the table. In our latest prototype, we employed a handmade optical device and an array of over 100 tiny projectors. This configuration reproduces static and animated 3D scenes for a 130° viewing area and allows 5-cm-tall virtual characters to play soccer and dance on the table.

  12. System crosstalk measurement of a time-sequential 3D display using ideal shutter glasses

    NASA Astrophysics Data System (ADS)

    Chen, Fu-Hao; Huang, Kuo-Chung; Lin, Lang-Chin; Chou, Yi-Heng; Lee, Kuen

    2011-03-01

    The market for stereoscopic 3D TV has grown quickly in recent years; however, for 3D TV to really take off, the interoperability of shutter glasses (SG) across different TV sets must be solved, so we developed a measurement method with ideal shutter glasses (ISG) to separate the contributions of time-sequential stereoscopic displays and SG. To measure the crosstalk of a time-sequential stereoscopic 3D display itself, the influence of the SG must be eliminated. The advantages are that the sources of crosstalk are distinguished and the interoperability of SG is broadened. Hence, this paper proposes ideal shutter glasses, whose non-ideal properties are eliminated, as a platform to evaluate the crosstalk contributed purely by the display. In the ISG method, the illuminance of the display is measured in the time domain to analyze the system crosstalk (SCT) of the display. In this experiment, the ISG method was used to measure SCT with a high-speed-response illuminance meter. From the time-resolved illuminance signals, the slow time response of the liquid crystal that leads to SCT is visualized and quantified. Furthermore, an intriguing phenomenon was observed: SCT measured through SG increases as the viewing distance shortens, which may arise from LC leakage of the display and shutter leakage at large viewing angles. Thus, we measured how LC and shutter leakage depend on viewing angle and verified our argument. In addition, we used the ISG method to evaluate two displays.
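
    System crosstalk is commonly defined as black-corrected leakage from the unintended view divided by the black-corrected intended signal, with both quantities gathered through the eye's shutter window. The sketch below gates a time-resolved illuminance trace with an ideal (perfectly transmitting, perfectly timed) shutter window and applies that definition; the variable names, placeholder traces and the 120 Hz field rate are illustrative assumptions, not the paper's exact protocol.

    ```python
    import numpy as np

    def system_crosstalk(i_unintended, i_intended, i_black):
        """Display-side (system) crosstalk for one eye: black-corrected leakage
        from the unintended view divided by the black-corrected intended signal,
        both averaged over the same ideal shutter window.
        """
        return (i_unintended - i_black) / (i_intended - i_black)

    # Toy usage: gate a time-resolved illuminance trace with an ideal left-eye
    # window at a 120 Hz field rate (both numbers are illustrative assumptions).
    fs, field_hz = 100_000, 120
    t = np.arange(0.0, 1.0, 1.0 / fs)
    left_open = (np.floor(t * field_hz) % 2) == 0          # ideal left shutter window
    gate = lambda trace: trace[left_open].mean()           # mean illuminance in window

    # The traces below stand in for measurements from the high-speed illuminance
    # meter under three display states: left black / right white, left white /
    # right black, and both black.
    illum_right_white = 1.0 + 0.05 * np.random.rand(t.size)
    illum_left_white = 10.0 + 0.05 * np.random.rand(t.size)
    illum_both_black = 0.02 * np.ones(t.size)
    print(system_crosstalk(gate(illum_right_white),
                           gate(illum_left_white),
                           gate(illum_both_black)))
    ```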

  13. 3D Viewing: Odd Perception - Illusion? reality? or both?

    NASA Astrophysics Data System (ADS)

    Kisimoto, K.; Iizasa, K.

    2008-12-01

    We live in three-dimensional space, don't we? It could be at least four dimensions, but that is another story. Either way, our capability for 3D viewing is constrained by our intrinsic tools of 2D perception. I carried out a few visual experiments using topographic data to show our intrinsic (or biological) shortcoming in 3D recognition of our world. The results of the experiments suggest: (1) a 3D surface model displayed on a 2D computer screen (or paper) always has two interpretations of the 3D surface geometry; if we choose one of the interpretations (in other words, if we are hooked by one perception of the two), we maintain that perception even as the 3D model changes its viewing perspective over time on the screen; (2) more interestingly, a real 3D solid object (e.g., made of clay) also gives the two interpretations of its geometry mentioned above, if we observe the object with one eye. The most famous example of this viewing illusion comes from the magician Jerry Andrus (who died in 2007), who made a paper-crafted dragon that produces the illusion for a one-eyed viewer. Through my experiments, I confirmed this phenomenon in another perceptually persuasive (deceptive?) way. My conclusion is that this illusion is intrinsic, i.e., reality for humans, because even though we live in 3D space, our perceptual tools (eyes) are composed of 2D sensors whose information is reconstructed into 3D by our experience-based brain. So, (3) when we observe a 3D surface model on the computer screen, we are always one eye short even if we use both eyes. One last suggestion from my experiments is that recent, highly sophisticated 3D models might include more information than human perception can handle properly, i.e., we might not be understanding the 3D world (geospace) at all, just an illusion of it.

  14. 3-D object recognition using 2-D views.

    PubMed

    Li, Wenjing; Bebis, George; Bourbakis, Nikolaos G

    2008-11-01

    We consider the problem of recognizing 3-D objects from 2-D images using geometric models and assuming different viewing angles and positions. Our goal is to recognize and localize instances of specific objects (i.e., model-based) in a scene. This is in contrast to category-based object recognition methods where the goal is to search for instances of objects that belong to a certain visual category (e.g., faces or cars). The key contribution of our work is improving 3-D object recognition by integrating Algebraic Functions of Views (AFoVs), a powerful framework for predicting the geometric appearance of an object due to viewpoint changes, with indexing and learning. During training, we compute the space of views that groups of object features can produce under the assumption of 3-D linear transformations, by combining a small number of reference views that contain the object features using AFoVs. Unrealistic views (e.g., due to the assumption of 3-D linear transformations) are eliminated by imposing a pair of rigidity constraints based on knowledge of the transformation between the reference views of the object. To represent the space of views that an object can produce compactly while allowing efficient hypothesis generation during recognition, we propose combining indexing with learning in two stages. In the first stage, we sample the space of views of an object sparsely and represent information about the samples using indexing. In the second stage, we build probabilistic models of shape appearance by sampling the space of views of the object densely and learning the manifold formed by the samples. Learning employs the Expectation-Maximization (EM) algorithm and takes place in a "universal," lower-dimensional, space computed through Random Projection (RP). During recognition, we extract groups of point features from the scene and we use indexing to retrieve the most feasible model groups that might have produced them (i.e., hypothesis generation). The likelihood
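
    The "algebraic functions of views" idea underlying the method can be illustrated with a tiny least-squares fit: under orthographic projection and 3D linear transformations, a point's coordinates in a novel view are linear functions of its coordinates in two reference views. The sketch below fits such a map from a handful of correspondences and predicts the rest; it is only a toy illustration and leaves out the rigidity constraints, indexing and EM-based appearance learning described in the abstract.

    ```python
    import numpy as np

    def fit_afov(ref1, ref2, novel):
        """Fit linear coefficients that map a point's coordinates in two reference
        views, (x1, y1) and (x2, y2), to its coordinates in a novel view.  Valid
        under orthographic projection and 3D linear transformations; fitted here
        by least squares from a few correspondences.
        """
        A = np.hstack([ref1, ref2, np.ones((len(ref1), 1))])   # N x 5 design matrix
        coeffs, *_ = np.linalg.lstsq(A, novel, rcond=None)     # 5 x 2 coefficients
        return coeffs

    def predict_afov(ref1, ref2, coeffs):
        A = np.hstack([ref1, ref2, np.ones((len(ref1), 1))])
        return A @ coeffs

    # Toy usage: 30 random 3D points seen by three random orthographic cameras.
    rng = np.random.default_rng(1)
    P = rng.normal(size=(30, 3))
    views = [(P @ np.linalg.qr(rng.normal(size=(3, 3)))[0].T)[:, :2] for _ in range(3)]
    v1, v2, v3 = views
    coeffs = fit_afov(v1[:6], v2[:6], v3[:6])                  # fit from 6 matches
    print(np.abs(predict_afov(v1, v2, coeffs) - v3).max())     # ~0 in this toy model
    ```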

  15. View-dependent streamlines for 3D vector fields.

    PubMed

    Marchesin, Stéphane; Chen, Cheng-Kai; Ho, Chris; Ma, Kwan-Liu

    2010-01-01

    This paper introduces a new streamline placement and selection algorithm for 3D vector fields. Instead of considering the problem as a simple feature search in data space, we base our work on the observation that most streamline fields generate a lot of self-occlusion which prevents proper visualization. In order to avoid this issue, we approach the problem in a view-dependent fashion and dynamically determine a set of streamlines which contributes to data understanding without cluttering the view. Since our technique couples flow characteristic criteria and view-dependent streamline selection, we are able to achieve the best of both worlds: relevant flow description and intelligible, uncluttered pictures. We detail an efficient GPU implementation of our algorithm, show comprehensive visual results on multiple datasets and compare our method with existing flow depiction techniques. Our results show that our technique greatly improves the readability of streamline visualizations on different datasets without requiring user intervention. PMID:20975200

  16. A method of multi-view intraoral 3D measurement

    NASA Astrophysics Data System (ADS)

    Zhao, Huijie; Wang, Zhen; Jiang, Hongzhi; Xu, Yang; Lv, Peijun; Sun, Yunchun

    2015-02-01

    In dental restoration, it is important to achieve a high-accuracy digital impression. Most existing intraoral measurement systems can only measure a tooth from a single view. Therefore, to acquire the complete data for a tooth, scans from multiple directions and data stitching based on surface features are needed, which increases the measurement duration and affects the measurement accuracy. In this paper, we introduce a fringe-projection-based multi-view intraoral measurement system. It can acquire 3D data of the occlusal, buccal and lingual surfaces of a tooth synchronously, using a sensor with three mirrors that are aimed at the three surfaces respectively and thus expand the measuring area. The fixed geometric relationship of the three mirrors is calibrated before measurement and helps stitch the point clouds acquired through the different mirrors accurately. Therefore the system can obtain the 3D data of a tooth without having to measure it from different directions many times. Experiments demonstrated the feasibility and reliability of this miniaturized measurement system.
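
    Because the mirror geometry is fixed and calibrated in advance, stitching reduces to applying known rigid transforms rather than feature-based registration. The sketch below is illustrative only (the transform values and frame conventions are assumptions); it merges per-mirror point clouds into a common frame with pre-calibrated homogeneous matrices:

    ```python
    import numpy as np

    def stitch_mirror_views(clouds, transforms):
        """Merge point clouds captured through different mirrors into a common
        sensor frame using pre-calibrated rigid transforms (4x4 homogeneous
        matrices), so no feature-based registration is needed.
        """
        merged = []
        for pts, T in zip(clouds, transforms):
            homo = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous coords
            merged.append((homo @ T.T)[:, :3])                # apply rigid transform
        return np.vstack(merged)

    # Toy usage: the same surface points seen in two frames related by a known
    # (hypothetical) 2 mm offset along z; after stitching they coincide.
    pts = np.random.rand(10, 3)
    T_a = np.eye(4)
    T_b = np.eye(4); T_b[:3, 3] = [0.0, 0.0, -2.0]
    cloud_b = pts + np.array([0.0, 0.0, 2.0])
    merged = stitch_mirror_views([pts, cloud_b], [T_a, T_b])
    print(np.allclose(merged[:10], merged[10:]))              # True
    ```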

  17. Color and 3D views of the Sierra Nevada mountains

    NASA Technical Reports Server (NTRS)

    2002-01-01

    A stereo 'anaglyph' created using the nadir and 45.6-degree forward-viewing cameras provides a three-dimensional view of the scene when viewed with red/blue glasses. The red filter should be placed over your left eye. To facilitate the stereo viewing, the images have been oriented with north toward the left. Some prominent features are Mono Lake, in the center of the image; Walker Lake, to its left; and Lake Tahoe, near the lower left. This view of the Sierra Nevadas includes Yosemite, Kings Canyon, and Sequoia National Parks. Mount Whitney, the highest peak in the contiguous 48 states (elev. 14,495 feet), is visible near the right-hand edge. Above it (to the east), the Owens Valley shows up prominently between the Sierra Nevada and Inyo ranges. Precipitation falling as rain or snow on the Sierras feeds numerous rivers flowing southwestward into the San Joaquin Valley. The abundant fields of this productive agricultural area can be seen along the lower right; a large number of reservoirs that supply water for crop irrigation are apparent in the western foothills of the Sierras. Urban areas in the valley appear as gray patches; among the California cities that are visible are Fresno, Merced, and Modesto.

  18. 4. View showing underside of wing, looking glass aircraft. View ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    4. View showing underside of wing, looking glass aircraft. View to north. - Offutt Air Force Base, Looking Glass Airborne Command Post, Looking Glass Aircraft, On Operational Apron covering northeast half of Project Looking Glass Historic District, Bellevue, Sarpy County, NE

  19. 3. General view showing rear of looking glass aircraft. View ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. General view showing rear of looking glass aircraft. View to north. - Offutt Air Force Base, Looking Glass Airborne Command Post, Looking Glass Aircraft, On Operational Apron covering northeast half of Project Looking Glass Historic District, Bellevue, Sarpy County, NE

  20. 5. Head-on view of looking glass aircraft. View to southwest. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    5. Head-on view of looking glass aircraft. View to southwest. - Offutt Air Force Base, Looking Glass Airborne Command Post, Looking Glass Aircraft, On Operational Apron covering northeast half of Project Looking Glass Historic District, Bellevue, Sarpy County, NE

  1. A 3D view of the SN 1987A Ejecta

    NASA Astrophysics Data System (ADS)

    Fransson, Claes

    2013-10-01

    SN 1987A represents the most important source of information about the explosion physics of any SN. For this, the morphology of the ejecta, together with the radioactive isotopes, is the best diagnostic. From HST imaging in H-alpha and NIR AO imaging in Si/Fe at 1.64 μm one finds completely different morphologies, with the 1.64 μm image dominated by the processed core and H-alpha by the surrounding H envelope. Besides Cas A (Type IIb), this is the only core collapse SN where we have this information. We propose to use STIS to map the debris in SN 1987A in 3D with the best possible angular resolution. There has been no such STIS map since 2004, while the physics of the emission has undergone some profound changes. From being powered by radioactivity, the energy input is now dominated by X-rays from the collision with the circumstellar ring. Compared to 2004, the 3D structure can be determined with a factor of 3 better spatial resolution and also better spectral resolution. The 3D structure in H-alpha can also give independent clues to where the large mass of dust detected by Herschel is located, as well as its properties. It also gives a complementary view of the ejecta to the future ALMA imaging in CO, which will have similar spatial resolution. Besides the debris, we will be able to probe the 10,000 km/s reverse shock close to the ring in H-alpha. By observing this also in Ly-alpha one may test different emission processes which have been proposed, as well as probing the region producing the synchrotron emission observed by ALMA. The opportunity to observe the SN in this stage will never come back.

  2. Design and fabrication of concave-convex lens for head mounted virtual reality 3D glasses

    NASA Astrophysics Data System (ADS)

    Deng, Zhaoyang; Cheng, Dewen; Hu, Yuan; Huang, Yifan; Wang, Yongtian

    2015-08-01

    As a kind of lightweight and convenient tool to achieve stereoscopic vision, virtual reality glasses are gaining more popularity nowadays. For these glasses, molded plastic lenses are often adopted to handle both the imaging property and the cost of mass production. However, the as-built performance of the glasses depends on both the optical design and the injection molding process, and maintaining the profile of the lens during the injection molding process presents particular challenges. In this paper, optical design is combined with processing simulation analysis to obtain a design result suitable for injection molding. Based on the design and analysis results, different experiments are done using high-quality equipment to optimize the process parameters of injection molding. Finally, a single concave-convex lens is designed with a field of view of 90° for the virtual reality 3D glasses. The as-built profile error of the lens is controlled within 5 μm, which indicates that the designed shape of the lens is fairly realized and the designed optical performance can thus be achieved.

  3. Glasses-free 3D display based on micro-nano-approach and system

    NASA Astrophysics Data System (ADS)

    Lou, Yimin; Ye, Yan; Shen, Su; Pu, Donglin; Chen, Linsen

    2014-11-01

    Micro-nano optics and the digital dot matrix hologram (DDMH) technique have been combined to encode and fabricate glasses-free 3D images. Two kinds of true-color 3D DDMH have been designed. One of the designs reduces the fabrication complexity and the other enlarges the view angle of the 3D DDMH. Chromatic aberration has been corrected using the rainbow hologram technique. A holographic printing system combining interference and projection lithography techniques has been demonstrated. Fresnel lenses and large-view-angle 3D DDMH have been produced, and excellent color performance of the 3D images has been realized.

  4. Glasses for 3D ultrasound computer tomography: phase compensation

    NASA Astrophysics Data System (ADS)

    Zapf, M.; Hopp, T.; Ruiter, N. V.

    2016-03-01

    Ultrasound Computer Tomography (USCT), developed at KIT, is a promising new imaging system for breast cancer diagnosis, and was successfully tested in a pilot study. The 3D USCT II prototype consists of several hundred ultrasound (US) transducers on a semi-ellipsoidal aperture. Spherical waves are sequentially emitted by individual transducers and received in parallel by many transducers. Reflectivity volumes are reconstructed by synthetic aperture focusing (SAFT). However, straightforward SAFT imaging leads to blurred images due to system imperfections. We present an extension of a previously proposed approach to enhance the images. This approach includes additional a priori information and system characteristics. Now spatial phase compensation has been included. The approach was evaluated with a simulation and clinical data sets. An increase in image quality was observed and quantitatively measured by SNR and other metrics.
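
    For readers unfamiliar with SAFT, the reconstruction named above boils down to delay-and-sum over all emitter-receiver pairs; the sketch below (Python, illustrative parameter names, phase compensation omitted) shows the basic operation on which the paper's corrections build.

    import numpy as np

    def saft(voxels, emitters, receivers, ascans, fs, c=1500.0):
        # voxels: (V, 3) reconstruction points [m]; emitters/receivers: (E, 3)/(R, 3) positions [m]
        # ascans: (E, R, T) recorded A-scans; fs: sampling rate [Hz]; c: speed of sound [m/s]
        image = np.zeros(len(voxels))
        for e, epos in enumerate(emitters):
            d_e = np.linalg.norm(voxels - epos, axis=1)           # emitter-to-voxel distance
            for r, rpos in enumerate(receivers):
                d_r = np.linalg.norm(voxels - rpos, axis=1)       # voxel-to-receiver distance
                idx = np.round((d_e + d_r) / c * fs).astype(int)  # round-trip travel time in samples
                idx = np.clip(idx, 0, ascans.shape[2] - 1)
                image += ascans[e, r, idx]                        # delay-and-sum
        return image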

  5. 3D laser gated viewing from a moving submarine platform

    NASA Astrophysics Data System (ADS)

    Christnacher, F.; Laurenzis, M.; Monnin, D.; Schmitt, G.; Metzger, Nicolas; Schertzer, Stéphane; Scholtz, T.

    2014-10-01

    Range-gated active imaging is a prominent technique for night vision, remote sensing or vision through obstacles (fog, smoke, camouflage netting…). Furthermore, range-gated imaging provides not only the scene reflectance but also the range for each pixel. In this paper, we discuss 3D imaging methods for underwater imaging applications. In this situation, it is particularly difficult to stabilize the imaging platform, and 3D reconstruction algorithms suffer from the motion between the different images in the recorded sequence. To overcome this drawback, we investigated a new method based on a combination of image registration by homography and 3D scene reconstruction through tomography or a two-image technique. After stabilisation, the 3D reconstruction is achieved by using the two above-mentioned techniques. In the different experimental examples given in this paper, a centimetric resolution could be achieved.
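
    The stabilisation step mentioned above can be pictured with standard tools; the sketch below (OpenCV, illustrative only, reconstruction not shown) registers each gated frame to the first frame with a RANSAC-estimated homography before any tomographic or two-image 3D processing.

    import cv2
    import numpy as np

    def stabilise(frames):
        orb = cv2.ORB_create(2000)
        bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        ref_kp, ref_des = orb.detectAndCompute(frames[0], None)
        out = [frames[0]]
        for f in frames[1:]:
            kp, des = orb.detectAndCompute(f, None)
            matches = sorted(bf.match(des, ref_des), key=lambda m: m.distance)[:200]
            src = np.float32([kp[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([ref_kp[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)   # robust to outlier matches
            h, w = frames[0].shape[:2]
            out.append(cv2.warpPerspective(f, H, (w, h)))          # warp into the reference view
        return out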

  6. Automated 3D reconstruction of interiors with multiple scan views

    NASA Astrophysics Data System (ADS)

    Sequeira, Vitor; Ng, Kia C.; Wolfart, Erik; Goncalves, Joao G. M.; Hogg, David C.

    1998-12-01

    This paper presents two integrated solutions for realistic 3D model acquisition and reconstruction; an early prototype, in the form of a push trolley, and a later prototype in the form of an autonomous robot. The systems encompass all hardware and software required, from laser and video data acquisition, processing and output of texture-mapped 3D models in VRML format, to batteries for power supply and wireless network communications. The autonomous version is also equipped with a mobile platform and other sensors for the purpose of automatic navigation. The applications for such a system range from real estate and tourism (e.g., showing a 3D computer model of a property to a potential buyer or tenant) to content creation (e.g., creating 3D models of heritage buildings or producing broadcast-quality virtual studios). The system can also be used in industrial environments as a reverse engineering tool to update the design of a plant, or as a 3D photo-archive for insurance purposes. The system is Internet compatible: the photo-realistic models can be accessed via the Internet and manipulated interactively in 3D using a common Web browser with a VRML plug-in. Further information and example reconstructed models are available online via the RESOLV web page at http://www.scs.leeds.ac.uk/resolv/.

  7. 3-D Perspective View, Miquelon and Saint Pierre Islands

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This image shows Miquelon and Saint Pierre Islands, located south of Newfoundland, Canada. These islands, along with five smaller islands, are a self-governing territory of France. North is in the top right corner of the image. The island of Miquelon, in the background, is divided by a thin barrier beach into Petite Miquelon on the left and Grande Miquelon on the right. Saint Pierre Island is seen in the foreground. The maximum elevation of this land is 240 meters (787 feet). The land mass of the islands is about 242 square kilometers (94 square miles), or 1.5 times the size of Washington, DC.

    This three-dimensional perspective view is one of several still photographs taken from a simulated flyover of the islands. It shows how elevation data collected by the Shuttle Radar Topography Mission (SRTM) can be used to enhance other satellite images. Color and natural shading are provided by a Landsat 7 image taken on September 7, 1999. The Landsat image was draped over the SRTM data. Terrain perspective and shading are from SRTM. The vertical scale has been increased six times to make it easier to see the small features. This also makes the sea cliffs around the edges of the islands look larger. In this view the capital city of Saint Pierre is seen as the bright area in the foreground of the island. The thin bright line seen in the water is a breakwater that offers some walled protection for the coastal city.

    Elevation data used in this image was acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, installed additional C-band and X-band antennas, and

  8. A 3D glass optrode array for optical neural stimulation

    PubMed Central

    Abaya, T.V.F.; Blair, S.; Tathireddy, P.; Rieth, L.; Solzbacher, F.

    2012-01-01

    This paper presents optical characterization of a first-generation SiO2 optrode array as a set of penetrating waveguides for both optogenetic and infrared (IR) neural stimulation. Fused silica and quartz discs of 3-mm thickness and 50-mm diameter were micromachined to yield 10 × 10 arrays of up to 2-mm long optrodes at a 400-μm pitch; array size, length and spacing may be varied along with the width and tip angle. Light delivery and loss mechanisms through these glass optrodes were characterized. Light in-coupling techniques include using optical fibers and collimated beams. Losses involve Fresnel reflection, coupling, scattering and total internal reflection in the tips. Transmission efficiency was constant in the visible and near-IR range, with the highest value measured as 71% using a 50-μm multi-mode in-coupling fiber butt-coupled to the backplane of the device. Transmittance and output beam profiles of optrodes with different geometries were investigated. Length and tip angle do not affect the amount of output power, but optrode width and tip angle influence the beam size and divergence independently. Finally, array insertion in tissue was performed to demonstrate its robustness for optical access in deep tissue. PMID:23243561

  9. LiveView3D: Real Time Data Visualization for the Aerospace Testing Environment

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; Fleming, Gary A.

    2006-01-01

    This paper addresses LiveView3D, a software package and associated data visualization system for use in the aerospace testing environment. The LiveView3D system allows researchers to graphically view data from numerous wind tunnel instruments in real time in an interactive virtual environment. The graphical nature of the LiveView3D display provides researchers with an intuitive view of the measurement data, making it easier to interpret the aerodynamic phenomenon under investigation. LiveView3D has been developed at the NASA Langley Research Center and has been applied in the Langley Unitary Plan Wind Tunnel (UPWT). This paper discusses the capabilities of the LiveView3D system, provides example results from its application in the UPWT, and outlines features planned for future implementation.

  10. Multi sky-view 3D aerosol distribution recovery.

    PubMed

    Aides, Amit; Schechner, Yoav Y; Holodovsky, Vadim; Garay, Michael J; Davis, Anthony B

    2013-11-01

    Aerosols affect climate, health and aviation. Currently, their retrieval assumes a plane-parallel atmosphere and solely vertical radiative transfer. We propose a principle to estimate the aerosol distribution as it really is: a three dimensional (3D) volume. The principle is a type of tomography. The process involves wide angle integral imaging of the sky on a very large scale. The imaging can use an array of cameras in visible light. We formulate an image formation model based on 3D radiative transfer. Model inversion is done using optimization methods, exploiting a closed-form gradient which we derive for the model-fit cost function. The tomography model is distinct, as the radiation source is unidirectional and uncontrolled, while off-axis scattering dominates the images. PMID:24216808
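
    The inversion step can be illustrated with a heavily simplified stand-in: if the radiative-transfer forward model is linearised to y = A beta (single scattering), the model-fit cost 0.5*||y_meas - A beta||^2 has the closed-form gradient -A^T (y_meas - A beta), which a projected gradient descent can use. The matrix A below is only an assumed surrogate for the paper's full 3D model.

    import numpy as np

    def recover_extinction(A, y_meas, n_iter=500, lr=1e-3):
        # beta: voxel extinction coefficients of the aerosol volume
        beta = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = -A.T @ (y_meas - A @ beta)         # closed-form gradient of the data-fit cost
            beta = np.maximum(beta - lr * grad, 0.0)  # keep aerosol density non-negative
        return beta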

  11. Computer-generated hologram for 3D scene from multi-view images

    NASA Astrophysics Data System (ADS)

    Chang, Eun-Young; Kang, Yun-Suk; Moon, KyungAe; Ho, Yo-Sung; Kim, Jinwoong

    2013-05-01

    Recently, the computer generated hologram (CGH) calculated from real existing objects is more actively investigated to support holographic video and TV applications. In this paper, we propose a method of generating a hologram of the natural 3-D scene from multi-view images in order to provide motion parallax viewing with a suitable navigation range. After a unified 3-D point source set describing the captured 3-D scene is obtained from multi-view images, a hologram pattern supporting motion-parallax is calculated from the set using a point-based CGH method. We confirmed that 3-D scenes are faithfully reconstructed using numerical reconstruction.
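
    The point-based CGH calculation referred to above superposes a spherical wave from every 3-D point on the hologram plane; the sketch below is a minimal illustration with assumed pixel pitch, wavelength and off-axis reference angle, not the parameters used in the paper.

    import numpy as np

    def point_cgh(points, amps, nx=1024, ny=1024, pitch=8e-6, wavelen=532e-9, ref_angle=0.02):
        # points: (N, 3) coordinates [m], z = distance from the hologram plane
        k = 2.0 * np.pi / wavelen
        x = (np.arange(nx) - nx / 2) * pitch
        y = (np.arange(ny) - ny / 2) * pitch
        X, Y = np.meshgrid(x, y)
        field = np.zeros((ny, nx), dtype=complex)
        for (px, py, pz), a in zip(points, amps):
            r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
            field += a / r * np.exp(1j * k * r)                # spherical wave from one 3-D point
        reference = np.exp(1j * k * X * np.sin(ref_angle))     # tilted plane reference wave
        return np.real(field * np.conj(reference))             # bipolar interference pattern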

  12. Spirit 360-Degree View, Sol 388 (3D)

    NASA Technical Reports Server (NTRS)

    2005-01-01

    [figure removed for brevity, see original site] Figure 1

    [figure removed for brevity, see original site] Figure 2

    NASA's Mars Exploration Rover Spirit used its navigation camera to take the images combined into this 360-degree view of the rover's surroundings on Spirit's 388th martian day, or sol (Feb. 4, 2005). Spirit had driven about 13 meters (43 feet) uphill toward 'Cumberland Ridge' on this sol. This location is catalogued as Spirit's Site 102, Position 513. The view is presented in a cylindrical-perspective projection with geometric and brightness seam correction.

    Figure 1 is the left-eye view of a stereo pair and Figure 2 is the right-eye view of a stereo pair.

  13. Spirit 360-Degree View on Sol 409 (3D)

    NASA Technical Reports Server (NTRS)

    2005-01-01

    [figure removed for brevity, see original site] Figure 1

    [figure removed for brevity, see original site] Figure 2

    NASA's Mars Exploration Rover Spirit used its navigation camera to take the images combined into this 360-degree view of the rover's surroundings on Spirit's 409th martian day, or sol (Feb. 26, 2005). Spirit had driven 2 meters (7 feet) on this sol to get in position on 'Cumberland Ridge' for looking into 'Tennessee Valley' to the east. This location is catalogued as Spirit's Site 108. Rover-wheel tracks from climbing the ridge are visible on the right. The summit of 'Husband Hill' is at the center, to the south. This view is presented in a cylindrical-perspective projection with geometric and brightness seam correction.

    Figure 1 is the left-eye view of a stereo pair and Figure 2 is the right-eye view of a stereo pair.

  14. Real-Depth imaging: a new (no glasses) 3D imaging technology with video/data projection applications

    NASA Astrophysics Data System (ADS)

    Dolgoff, Eugene

    1997-05-01

    Floating Images, Inc. has developed the software and hardware for a new, patent pending, 'floating 3D, off-the-screen-experience' display technology. This technology has the potential to become the next standard for home and arcade video games, computers, corporate presentations, Internet/Intranet viewing, and television. Current '3D Graphics' technologies are actually flat on screen. Floating Images technology actually produces images at different depths from any display, such as CRT and LCD, for television, computer, projection, and other formats. In addition, unlike stereoscopic 3D imaging, no glasses, headgear, or other viewing aids are used. And, unlike current autostereoscopic imaging technologies, there is virtually no restriction on where viewers can sit to view the images, with no 'bad' or 'dead' zones, flipping, or pseudoscopy. In addition to providing traditional depth cues such as perspective and background image occlusion, the new technology also provides both horizontal and vertical binocular parallax and accommodation which coincides with convergence. Since accommodation coincides with convergence, viewing these images doesn't produce headaches, fatigue, or eye-strain, regardless of how long they are viewed. The imagery must either be formatted for the Floating Images platform when written, or existing software can be reformatted without much difficulty. The optical hardware system can be made to accommodate virtually any projection system to produce Floating Images for the Boardroom, video arcade, stage shows, or the classroom.

  15. Recognition by Humans and Pigeons of Novel Views of 3-D Objects and Their Photographs

    ERIC Educational Resources Information Center

    Friedman, Alinda; Spetch, Marcia L.; Ferrey, Anne

    2005-01-01

    Humans and pigeons were trained to discriminate between 2 views of actual 3-D objects or their photographs. They were tested on novel views that were either within the closest rotational distance between the training views (interpolated) or outside of that range (extrapolated). When training views were 60° apart, pigeons, but not humans,…

  16. ISM abundances and history: a 3D, solar neighborhood view

    NASA Astrophysics Data System (ADS)

    Lallement, R.; Vergely, J.-L.; Puspitarini, L.

    For observational reasons, the solar neighborhood is particularly suitable for the study of the multi-phase interstellar (IS) medium and the search for traces of its temporal evolution. On the other hand, by a number of aspects it seems to be a peculiar region. We use recent 3D maps of the IS dust based on color excess data as well as former maps of the gas to illustrate how such maps can be used to shed additional light on the specificity of the local medium, its history and abundance pattern. 3D maps reveal a gigantic cavity located in the third quadrant and connected to the Local Bubble, the latter itself running into an elongated cavity toward l≃ 70°. Most nearby cloud complexes of the so-called Gould belt but also more distant clouds seem to border a large fraction of this entire structure. The IS medium within the large cavity appears ionized and dust-poor, as deduced from ionized calcium and neutral sodium to dust ratios. The geometry favors the proposed scenario of Gould belt-Local Arm formation through the braking of a supercloud by interaction with a spiral density wave (Olano 2001). The highly variable D/H ratio in the nearby IS gas may also be spatially related to the global structure. We speculate about potential consequences of the supercloud encounter and dust-gas decoupling during its braking, in particular the formation of strong inhomogeneities in both the dust to gas abundance ratio and the dust characteristics: (i) during the ≃ 500 Myr prior to the collision, dust within the supercloud may have been gradually, strongly enriched in D due to an absence of strong stellar formation and preferential adsorption of D (Jura 1982; Draine 2003); (ii) during its interaction with the Plane and the braking, dust-rich and dust-poor regions may have formed due to differential gas drag, the dust being more concentrated in the dense areas; strong radiation pressure from OB associations at the boundary of the left-behind giant cavity may have also helped

  17. Venus - 3D Perspective View of Maat Mons

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Maat Mons is displayed in this three-dimensional perspective view of the surface of Venus. The viewpoint is located 560 kilometers (347 miles) north of Maat Mons at an elevation of 1.7 kilometers (1 mile) above the terrain. Lava flows extend for hundreds of kilometers across the fractured plains shown in the foreground, to the base of Maat Mons. The view is to the south with Maat Mons appearing at the center of the image on the horizon. Maat Mons, an 8-kilometer (5 mile) high volcano, is located at approximately 0.9 degrees north latitude, 194.5 degrees east longitude. Maat Mons is named for an Egyptian goddess of truth and justice. Magellan synthetic aperture radar data is combined with radar altimetry to develop a three-dimensional map of the surface. The vertical scale in this perspective has been exaggerated 22.5 times. Rays cast in a computer intersect the surface to create a three-dimensional perspective view. Simulated color and a digital elevation map developed by the U.S. Geological Survey, are used to enhance small-scale structure. The simulated hues are based on color images recorded by the Soviet Venera 13 and 14 spacecraft. The image was produced at the JPL Multimission Image Processing Laboratory.

  18. Fabrication of 3D microfluidic structures inside glass by femtosecond laser micromachining

    NASA Astrophysics Data System (ADS)

    Sugioka, Koji; Cheng, Ya

    2014-01-01

    Femtosecond lasers have opened up new avenues in materials processing due to their unique characteristics of ultrashort pulse widths and extremely high peak intensities. One of the most important features of femtosecond laser processing is that a femtosecond laser beam can induce strong absorption in even transparent materials due to nonlinear multiphoton absorption. This makes it possible to directly create three-dimensional (3D) microfluidic structures in glass that are of great use for fabrication of biochips. For fabrication of the 3D microfluidic structures, two technical approaches are being attempted. One of them employs femtosecond laser-induced internal modification of glass followed by wet chemical etching using an acid solution (Femtosecond laser-assisted wet chemical etching), while the other one performs femtosecond laser 3D ablation of the glass in distilled water (liquid-assisted femtosecond laser drilling). This paper provides a review on these two techniques for fabrication of 3D micro and nanofluidic structures in glass based on our development and experimental results.

  19. Registration of multi-view apical 3D echocardiography images

    NASA Astrophysics Data System (ADS)

    Mulder, H. W.; van Stralen, M.; van der Zwaan, H. B.; Leung, K. Y. E.; Bosch, J. G.; Pluim, J. P. W.

    2011-03-01

    Real-time three-dimensional echocardiography (RT3DE) is a non-invasive method to visualize the heart. Disadvantageously, it suffers from non-uniform image quality and a limited field of view. Image quality can be improved by fusion of multiple echocardiography images. Successful registration of the images is essential for effective fusion. Therefore, this study examines the performance of different methods for intrasubject registration of multi-view apical RT3DE images. A total of 14 data sets were annotated by two observers who indicated the position of the apex and four points on the mitral valve ring. These annotations were used to evaluate registration. Multi-view end-diastolic (ED) as well as end-systolic (ES) images were rigidly registered in a multi-resolution strategy. The performance of single-frame and multi-frame registration was examined. Multi-frame registration optimizes the metric for several time frames simultaneously. Furthermore, the suitability of mutual information (MI) as similarity measure was compared to normalized cross-correlation (NCC). For initialization of the registration, a transformation that describes the probe movement was obtained by manually registering five representative data sets. It was found that multi-frame registration can improve registration results with respect to single-frame registration. Additionally, NCC outperformed MI as similarity measure. If NCC was optimized in a multi-frame registration strategy including ED and ES time frames, the performance of the automatic method was comparable to that of manual registration. In conclusion, automatic registration of RT3DE images performs as well as manual registration. As registration precedes image fusion, this method can contribute to improved quality of echocardiography images.
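
    The similarity measure that performed best above can be written compactly; the sketch below (Python, with an assumed resample helper) sums normalised cross-correlation over the time frames that share one rigid transform, which is the essence of the multi-frame strategy.

    import numpy as np

    def ncc(a, b):
        # normalised cross-correlation of two equally sized volumes
        a = a.ravel() - a.mean()
        b = b.ravel() - b.mean()
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def multi_frame_ncc(fixed_frames, moving_frames, transform, resample):
        # resample(volume, transform) -> moving volume warped onto the fixed grid (assumed helper)
        return sum(ncc(f, resample(m, transform)) for f, m in zip(fixed_frames, moving_frames))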

  20. Color and 3D views of the Sierra Nevada mountains

    NASA Technical Reports Server (NTRS)

    2002-01-01

    These true-color images covering north-central New Mexico capture the bluish-white smoke plume of the Los Alamos fire, just west of the Rio Grande river. The middle image is a downward-looking (nadir) view, taken by the Multi-angle Imaging SpectroRadiometer (MISR) instrument on NASA's Terra satellite. As MISR flew from north to south, it viewed the scene from nine different angles. The top image was taken by the MISR camera looking 60 degrees forward along the orbit, whereas the bottom image looks 60 degrees aft. The plume stands out more dramatically in the steep-angle views. Its color and brightness also change with angle. By comparison, a thin, white, water cloud appears in the upper right portion of the scene, and is most easily detected in the top image. MISR uses these angle-to-angle differences to monitor particulate pollution and to identify different types of haze. Such observations allow scientists to study how airborne particles interact with sunlight, a measure of their impact on Earth's climate system. The images are about 400 km (250 miles) wide. The spatial resolution of the nadir image is 275 meters (300 yards); it is 1.1 kilometers (1,200 yards) for the off-nadir images. North is toward the top. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology. For more information, see the MISR web site. Image courtesy NASA/GSFC/JPL, MISR Science Team.

  1. Large viewing angle projection type electro-holography using new type mist 3D screen

    NASA Astrophysics Data System (ADS)

    Sato, Koki; Zhao, Hongming; Takano, Kunihiko

    2008-02-01

    Recently, many types of 3-D displays are being developed. We want to see 3-D moving images comfortably and with more expanded depth. Holography is different from other 3-D display techniques because a natural stereoscopic image can be obtained. We had previously developed an electro-holographic display using a virtual image, but the viewing area was small because the pixel size of the LCD is not small enough. This time we developed a projection-type electro-holographic display system. In the case of projection-type holography [1], a 3-D screen is needed in order to project the reconstructed image clearly and to widen the viewing angle. We developed an electro-holographic display system using a mist 3-D screen. However, the image reconstructed on the mist 3-D screen flickered because of gravity and the flow of air. We then considered how to reduce the flicker of the image and found that it could be reduced using a flow-controlled nozzle. Hence, we first considered the most suitable shape of the 3-D screen and then constructed an array of flow-controlled mist 3-D screens. The experimental results show that we could obtain a considerably high-contrast 3-D moving image and a viewing area of more than 30° with this flow-controlled nozzle attached to the new type of mist 3-D screen, confirming the effectiveness of this method.

  2. Facile synthesis 3D flexible core-shell graphene/glass fiber via chemical vapor deposition

    NASA Astrophysics Data System (ADS)

    Yang, Cheng; Xu, Yuanyuan; Zhang, Chao; Sun, Zhencui; Chen, Chuansong; Li, Xiuhua; Jiang, Shouzhen; Man, Baoyuan

    2014-08-01

    Direct deposition of graphene layers on the flexible glass fiber surface to form three-dimensional (3D) core-shell structures is demonstrated using a two-heating-reactor chemical vapor deposition system. The two-heating reactor is utilized to offer sufficient, well-proportioned floating C atoms and provide a facile route to low-temperature deposition. Graphene layers, which are controlled by changing the growth time, can be grown on the surface of wire-type glass fibers with diameters from 30 nm to 120 μm. The core-shell graphene/glass fiber deposition mechanism is proposed, suggesting that the 3D graphene films can be deposited on any suitable wire-type substrate. These results open a facile way for direct and high-efficiency deposition of transfer-free graphene layers on low-temperature dielectric wire-type substrates.

  3. Space Radar Image of Long Valley, California - 3D view

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is a three-dimensional perspective view of Long Valley, California, by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar on board the space shuttle Endeavour. This view was constructed by overlaying a color composite SIR-C image on a digital elevation map. The digital elevation map was produced using radar interferometry, a process by which radar data are acquired on different passes of the space shuttle and which are then compared to obtain elevation information. The data were acquired on April 13, 1994 and on October 3, 1994, during the first and second flights of the SIR-C/X-SAR radar instrument. The color composite radar image was produced by assigning red to the C-band (horizontally transmitted and vertically received) polarization; green to the C-band (vertically transmitted and received) polarization; and blue to the ratio of the two data sets. Blue areas in the image are smooth and yellow areas are rock outcrops with varying amounts of snow and vegetation. The view is looking north along the northeastern edge of the Long Valley caldera, a volcanic collapse feature created 750,000 years ago and the site of continued subsurface activity. Crowley Lake is off the image to the left. Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory.

  4. Venus - 3D Perspective View of Maat Mons

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Maat Mons is displayed in this computer-generated three-dimensional perspective of the surface of Venus. The viewpoint is located 634 kilometers (393 miles) north of Maat Mons at an elevation of 3 kilometers (2 miles) above the terrain. Lava flows extend for hundreds of kilometers across the fractured plains shown in the foreground, to the base of Maat Mons. The view is to the south with the volcano Maat Mons appearing at the center of the image on the horizon and rising to almost 5 kilometers (3 miles) above the surrounding terrain. Maat Mons is located at approximately 0.9 degrees north latitude, 194.5 degrees east longitude with a peak that ascends to 8 kilometers (5 miles) above the mean surface. Maat Mons is named for an Egyptian goddess of truth and justice. Magellan synthetic aperture radar data is combined with radar altimetry to develop a three-dimensional map of the surface. The vertical scale in this perspective has been exaggerated 10 times. Rays cast in a computer intersect the surface to create a three-dimensional perspective view. Simulated color and a digital elevation map developed by the U.S. Geological Survey are used to enhance small-scale structure. The simulated hues are based on color images recorded by the Soviet Venera 13 and 14 spacecraft. The image was produced by the Solar System Visualization project and the Magellan Science team at the JPL Multimission Image Processing Laboratory and is a single frame from a video released at the April 22, 1992 news conference.

  5. Advanced Visualization of Experimental Data in Real Time Using LiveView3D

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; Fleming, Gary A.

    2006-01-01

    LiveView3D is a software application that imports and displays a variety of wind tunnel derived data in an interactive virtual environment in real time. LiveView3D combines the use of streaming video fed into a three-dimensional virtual representation of the test configuration with networked communications to the test facility Data Acquisition System (DAS). This unified approach to real time data visualization provides a unique opportunity to comprehend very large sets of diverse forms of data in a real time situation, as well as in post-test analysis. This paper describes how LiveView3D has been implemented to visualize diverse forms of aerodynamic data gathered during wind tunnel experiments, most notably at the NASA Langley Research Center Unitary Plan Wind Tunnel (UPWT). Planned future developments of the LiveView3D system are also addressed.

  6. 3D printed glass: surface finish and bulk properties as a function of the printing process

    NASA Astrophysics Data System (ADS)

    Klein, Susanne; Avery, Michael P.; Richardson, Robert; Bartlett, Paul; Frei, Regina; Simske, Steven

    2015-03-01

    It is impossible to print glass directly from a melt, layer by layer. Glass is not only very sensitive to temperature gradients between different layers but also to the cooling process. To achieve a glass state, the melt has to be cooled rapidly to avoid crystallization of the material and then annealed to remove cooling-induced stress. In 3D printing of glass the objects are shaped at room temperature and then fired. The material properties of the final objects are crucially dependent on the frit size of the glass powder used during shaping, the chemical formula of the binder and the firing procedure. For frit sizes below 250 μm, we seem to find a constant volume of pores of less than 5%. Decreasing frit size leads to an increase in the number of pores, which then leads to an increase of opacity. The two different binders, 2-hydroxyethyl cellulose and carboxymethylcellulose sodium salt, generate very different porosities. The porosity of samples with 2-hydroxyethyl cellulose is similar to frit-only samples, whereas carboxymethylcellulose sodium salt creates a glass foam. The surface finish is determined by the material the glass comes into contact with during firing.

  7. Fabrication of 3D solenoid microcoils in silica glass by femtosecond laser wet etch and microsolidics

    NASA Astrophysics Data System (ADS)

    Meng, Xiangwei; Yang, Qing; Chen, Feng; Shan, Chao; Liu, Keyin; Li, Yanyang; Bian, Hao; Du, Guangqing; Hou, Xun

    2015-02-01

    This paper reports a flexible fabrication method for 3D solenoid microcoils in silica glass. The method consists of femtosecond laser wet etching (FLWE) and a microsolidics process. The 3D microchannel with high aspect ratio is fabricated by an improved FLWE method. In the microsolidics process, an alloy was chosen as the conductive metal. The microwires are obtained by injecting liquid alloy into the microchannel and allowing the alloy to cool and solidify. The alloy microwires with high melting point can overcome the limitation of working temperature and improve the electrical properties. The geometry, height and diameter of the microcoils were flexibly controlled by the pre-designed laser writing path, the laser power and the etching time. The 3D microcoils can provide a uniform magnetic field and be widely integrated in many magnetic microsystems.

  8. 3D multi-view system using electro-wetting liquid lenticular lenses

    NASA Astrophysics Data System (ADS)

    Won, Yong Hyub; Kim, Junoh; Kim, Cheoljoong; Shin, Dooseub; Lee, Junsik; Koo, Gyohyun

    2016-06-01

    Lenticular multi-view systems have great potential for three-dimensional image realization. This paper introduces the fabrication of a liquid lenticular lens array and an idea for increasing the number of view points at the same resolution. The tunable liquid lens array can produce three-dimensional images by using the electro-wetting principle, which changes surface tensions by applying voltage. The liquid lenticular device consists of a chamber, two different liquids and a sealing plate. To fabricate the chamber, a <100> silicon wafer is wet-etched in KOH solution and a trapezoid-shaped chamber forms after a certain time. The chamber having slanted walls is advantageous for electro-wetting, achieving high diopter. Electroplating is done to make a nickel mold, and a poly(methyl methacrylate) (PMMA) chamber is fabricated through an embossing process. Indium tin oxide (ITO) is sputtered, and parylene C and Teflon AF1600 are deposited as dielectric and hydrophobic layers, respectively. Two immiscible liquids are injected and a glass plate as a sealing plate is covered with polycarbonate (PC) gaskets and sealed by UV adhesive. The two immiscible liquids are D.I. water and a mixture of 1-chloronaphthalene and dodecane. The completed lenticular lens shows 2D and 3D images by applying certain voltages. The dioptric power and operation speed of the lenticular lens array are measured. A novel idea of increasing the number of viewpoints by an electrode separation process is also proposed. The left and right electrodes of the lenticular lens can be driven by different voltages, resulting in a tilted optical axis. By switching the optical axis quickly, twice the number of view points can be achieved with the same pixel resolution.

  9. The use of Interferometric Microscopy to assess 3D modifications of deteriorated medieval glass.

    NASA Astrophysics Data System (ADS)

    Gentaz, L.; Lombardo, T.; Chabas, A.

    2012-04-01

    Due to low durability, Northern European medieval glass undergoes the action of the atmospheric environment, leading in some cases to a state of dramatic deterioration. Modification features vary from a simple loss of transparency to severe material loss. In order to understand the underlying mechanisms and preserve this heritage, fundamental research is necessary. In this context, field exposure of analogues and original stained glass was carried out to study the early stages of glass weathering. Model glass and original stained glass (after removal of deterioration products) were exposed in real conditions at an urban site (Paris) for 48 months. A regular withdrawal of samples allowed a follow-up of short-term glass evolution. Morphological modifications of the exposed samples were investigated through conventional and non-destructive microscopy, using respectively a Scanning Electron Microscope (SEM) and an Interferometric Microscope (IM). The latter allows a 3D quantification of the object with no sample preparation. For all glasses, both surface recession and build-up of deposits were observed as a consequence of a leaching process (interdiffusion of protons and glass cations). The build-up of a deposit comes from the reaction between the extracted glass cations and atmospheric gases. Surface recession is due mainly to the formation of a brittle layer of altered glass at the sub-surface, where a fracture network can appear, leading to the scaling of parts of this modified glass. Finally, dissolution of the glass takes place, inducing the formation of pits and craters. The arithmetic roughness (Ra) was used as an indicator of weathering, in order to evaluate the deterioration state. For instance, the Ra grew from a few tens of nm for pristine glass to thousands of nm for scaled areas. This technique also allowed a precise quantification of dimensions (height, depth and width) of deposits and pits, and the estimation of their overall
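
    The arithmetic roughness used as the deterioration indicator above has a simple definition: the mean absolute deviation of the measured heights from their mean level (after form/tilt removal, which is omitted in this small illustration).

    import numpy as np

    def arithmetic_roughness(height_map_nm):
        # height_map_nm: 2-D interferometric height map in nanometres
        z = np.asarray(height_map_nm, dtype=float)
        return float(np.mean(np.abs(z - z.mean())))   # Ra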

  10. Sweeping View of the 'Columbia Hills' and Gusev Crater (3-D)

    NASA Technical Reports Server (NTRS)

    2005-01-01

    Mars.

    It took seven days, from sols 591 to 597 (Sept. 1 to Sept. 7, 2005) of its exploration of Mars, for Spirit's panoramic camera to acquire all the images combined into this mosaic. This panorama covers a field of view just under 180 degrees from left to right. This stereo view is presented in a cylindrical-perspective projection with geometric seam correction. The stereo image may be viewed with standard blue and red 3-D glasses.

  11. 3D Modeling from Multi-views Images for Cultural Heritage in Wat-Pho, Thailand

    NASA Astrophysics Data System (ADS)

    Soontranon, N.; Srestasathiern, P.; Lawawirojwong, S.

    2015-08-01

    In Thailand, there are several types of (tangible) cultural heritage. This work focuses on 3D modeling of heritage objects from multi-view images. The images are acquired by using a DSLR camera which costs around 1,500 (camera and lens). Compared with a 3D laser scanner, the camera is cheaper and lighter. Hence, the camera is available to public users and convenient for accessing narrow areas. The acquired images consist of various sculptures and architectures in Wat-Pho, which is a Buddhist temple located behind the Grand Palace (Bangkok, Thailand). Wat-Pho is known as the temple of the reclining Buddha and the birthplace of traditional Thai massage. To compute the 3D models, the workflow is separated into the following steps: data acquisition, image matching, image calibration and orientation, dense matching and point cloud processing. For the initial work, small heritage objects less than 3 meters in height are considered for the experimental results. A set of multi-view images of an object of interest is used as input data for 3D modeling. In our experiments, 3D models are obtained from the MICMAC (open source) software developed by IGN, France. The output 3D models are represented using standard formats for 3D point clouds and triangulated surfaces such as .ply, .off, .obj, etc. To obtain efficient 3D models, post-processing techniques are required for the final results, e.g. noise reduction, surface simplification and reconstruction. The reconstructed 3D models can be provided for public access via websites, DVDs, or printed materials. The highly accurate 3D models can also be used as reference data for heritage objects that must be restored due to deterioration over a lifetime, natural disasters, etc.

  12. Femtosecond laser 3D nanofabrication in glass: enabling direct write of integrated micro/nanofluidic chips

    NASA Astrophysics Data System (ADS)

    Cheng, Ya; Liao, Yang; Sugioka, Koji

    2014-03-01

    The creation of complex three-dimensional (3D) fluidic systems composed of hollow micro- and nanostructures embedded in transparent substrates has attracted significant attention from both scientific and applied research communities. However, it is by now still a formidable challenge to build 3D micro- and nanofluidic structures with arbitrary configurations using conventional planar lithographic fabrication methods. As a direct and maskless fabrication technique, femtosecond laser micromachining provides a straightforward approach for high-precision spatial-selective modification inside transparent materials through nonlinear optical absorption. Here, we demonstrate rapid fabrication of high-aspect-ratio micro- and/or nanofluidic structures with various 3D configurations in glass substrates by femtosecond laser direct writing. Based on this approach, we demonstrate several functional micro- and nanofluidic devices including a 3D passive microfluidic mixer, a capillary electrophoresis (CE) analysis chip, and an integrated micro-nanofluidic system for single DNA analysis. This technology offers new opportunities to develop novel 3D micro-nanofluidic systems for a variety of lab-on-a-chip applications.

  13. Calibration and 3D reconstruction of underwater objects with non-single-view projection model by structured light stereo imaging.

    PubMed

    Wang, Yexin; Negahdaripour, Shahriar; Aykin, Murat D

    2016-08-20

    Establishing the projection model of imaging systems is critical in 3D reconstruction of object shapes from multiple 2D views. When deployed underwater, these are enclosed in waterproof housings with transparent glass ports that generate nonlinear refractions of optical rays at interfaces, leading to invalidation of the commonly assumed single-viewpoint (SVP) model. In this paper, we propose a non-SVP ray tracing model for the calibration of a projector-camera system, employed for 3D reconstruction based on the structured light paradigm. The projector utilizes dot patterns, having established that the contrast loss is less severe than for traditional stripe patterns in highly turbid waters. Experimental results are presented to assess the achieved calibrating accuracy. PMID:27556973
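
    The reason the single-viewpoint model breaks down can be seen from the double refraction each camera ray undergoes at the flat port; the sketch below applies the vector form of Snell's law twice (air-glass, glass-water) with assumed refractive indices, and is only an ingredient of, not a substitute for, the paper's calibration.

    import numpy as np

    def refract(d, n, n1, n2):
        # d: unit ray direction; n: unit interface normal pointing towards the incoming ray
        eta = n1 / n2
        cos_i = -np.dot(d, n)
        sin2_t = eta ** 2 * (1.0 - cos_i ** 2)
        if sin2_t > 1.0:
            return None                                   # total internal reflection
        cos_t = np.sqrt(1.0 - sin2_t)
        return eta * d + (eta * cos_i - cos_t) * n        # refracted unit direction

    def trace_through_port(d_air, normal, n_air=1.0, n_glass=1.5, n_water=1.33):
        d_glass = refract(d_air, normal, n_air, n_glass)  # air -> glass
        return refract(d_glass, normal, n_glass, n_water) # glass -> water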

  14. A stereo matching model observer for stereoscopic viewing of 3D medical images

    NASA Astrophysics Data System (ADS)

    Wen, Gezheng; Markey, Mia K.; Muralidlhar, Gautam S.

    2014-03-01

    Stereoscopic viewing of 3D medical imaging data has the potential to increase the detection of abnormalities. We present a new stereo model observer inspired by the characteristics of stereopsis in human vision. Given a stereo pair of images of an object (i.e., left and right images separated by a small displacement), the model observer first finds the corresponding points between the two views, and then fuses them together to create a 2D cyclopean view. Assuming that the cyclopean view has extracted most of the 3D information presented in the stereo pair, a channelized Hotelling observer (CHO) can be utilized to make decisions. We conduct a simulation study that attempts to mimic the detection of breast lesions on stereoscopic viewing of breast tomosynthesis projection images. We render voxel datasets that contain random 3D power-law noise to model normal breast tissues with various breast densities. A 3D Gaussian signal is added to some of the datasets to model the presence of a breast lesion. By changing the separation angle between the two views, multiple stereo pairs of projection images are generated for each voxel dataset. The performance of the model is evaluated in terms of the accuracy of binary decisions on the presence of the simulated lesions.
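
    The decision stage named above, a channelized Hotelling observer, reduces to a linear template in channel space; the sketch below assumes the channel matrix U (e.g., Laguerre-Gauss channels) and the fused cyclopean images are already available, so the stereo matching step itself is not shown.

    import numpy as np

    def cho_statistics(signal_imgs, noise_imgs, test_imgs, U):
        # *_imgs: (N, P) flattened cyclopean images; U: (P, C) channel matrix
        v_s = signal_imgs @ U                                        # channel outputs, signal present
        v_n = noise_imgs @ U                                         # channel outputs, signal absent
        S = 0.5 * (np.cov(v_s, rowvar=False) + np.cov(v_n, rowvar=False))
        w = np.linalg.solve(S, v_s.mean(axis=0) - v_n.mean(axis=0))  # Hotelling template
        return (test_imgs @ U) @ w                                   # one decision variable per test image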

  15. A procedure for generating quantitative 3-D camera views of tokamak divertors

    SciTech Connect

    Edmonds, P.H.; Medley, S.S.

    1996-05-01

    A procedure is described for precision modeling of the views for imaging diagnostics monitoring tokamak internal components, particularly high-heat-flux divertor components. These models are required to enable predictions of resolution and viewing angle for the available viewing locations. Because of the oblique views expected for slot divertors, fully 3-D perspective imaging is required. A suite of matched 3-D CAD, graphics and animation applications is used to provide a fast and flexible technique for reproducing these views. An analytic calculation of the resolution and viewing incidence angle is developed to validate the results of the modeling procedures. The calculation is applicable to any viewed surface describable with a coordinate array. The Tokamak Physics Experiment (TPX) diagnostics for infrared viewing are used as an example to demonstrate the implementation of the tools. For the TPX experiment the available locations are severely constrained by access limitations, and the resulting images are marginal in both resolution and viewing incidence angle. Full coverage of the divertor is possible if an array of cameras is installed at 45 degree toroidal intervals. Two poloidal locations are required in order to view both the upper and lower divertors. The procedures described here provide a complete design tool for in-vessel viewing, both for camera location and for identification of viewed surfaces. Additionally, these same tools can be used for the interpretation of the actual images obtained by the diagnostic.
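
    The analytic check described above can be approximated for any surface given as a coordinate array; the sketch below estimates the per-point viewing incidence angle from precomputed surface normals and a camera position (resolution would then follow from range, incidence angle and the pixel field of view). It is a hedged re-creation, not the paper's actual calculation.

    import numpy as np

    def incidence_angles(surface_pts, normals, camera_pos):
        # surface_pts, normals: (N, 3); camera_pos: (3,); returns angles in degrees (0 = normal viewing)
        rays = camera_pos - surface_pts
        rays /= np.linalg.norm(rays, axis=1, keepdims=True)
        n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
        cos_ang = np.clip(np.sum(rays * n, axis=1), -1.0, 1.0)
        return np.degrees(np.arccos(np.abs(cos_ang)))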

  16. The numerical integration and 3-D finite element formulation of a viscoelastic model of glass

    SciTech Connect

    Chambers, R.S.

    1994-08-01

    The use of glasses is widespread in making hermetic, insulating seals for many electronic components. Flat panel displays and fiber optic connectors are other products utilizing glass as a structural element. When glass is cooled from sealing temperatures, residual stresses are generated due to mismatches in thermal shrinkage created by the dissimilar material properties of the adjoining materials. Because glass is such a brittle material at room temperature, tensile residual stresses must be kept small to ensure durability and avoid cracking. Although production designs and the required manufacturing process development can be deduced empirically, this is an expensive and time consuming process that does not necessarily lead to an optimal design. Agile manufacturing demands that analyses be used to reduce development costs and schedules by providing insight and guiding the design process through the development cycle. To make these gains, however, viscoelastic models of glass must be available along with the right tool to use them. A viscoelastic model of glass can be used to simulate the stress and volume relaxation that occurs at elevated temperatures as the molecular structure of the glass seeks to equilibrate to the state of the supercooled liquid. The substance of the numerical treatment needed to support the implementation of the model in a 3-D finite element program is presented herein. An accurate second-order, central difference integrator is proposed for the constitutive equations, and numerical solutions are compared to those obtained with other integrators. Inherent convergence problems are reviewed and fixes are described. The resulting algorithms are generally applicable to the broad class of viscoelastic material models. First-order error estimates are used as a basis for developing a scheme for automatic time step controls, and several demonstration problems are presented to illustrate the performance of the methodology.
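
    As a generic illustration of the kind of integrator discussed above (not the paper's formulation for glass), a single Maxwell element d(sigma)/dt + sigma/tau = G d(eps)/dt can be advanced with a second-order central-difference (midpoint) update; a realistic glass model would sum many such elements with temperature- and structure-dependent relaxation times.

    def maxwell_step(sigma, d_eps, dt, G, tau):
        # midpoint (central-difference) update of one Maxwell element, second-order accurate
        a = dt / (2.0 * tau)
        return ((1.0 - a) * sigma + G * d_eps) / (1.0 + a)

    def stress_history(eps_history, dt, G, tau):
        # integrate the stress response to a prescribed strain history
        sigma, out = 0.0, []
        for i in range(1, len(eps_history)):
            sigma = maxwell_step(sigma, eps_history[i] - eps_history[i - 1], dt, G, tau)
            out.append(sigma)
        return out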

  17. Complementary cellophane optic gate and its use for a 3D iPad without glasses

    NASA Astrophysics Data System (ADS)

    Iizuka, K.

    2012-04-01

    A complementary cellophane optic gate was fabricated using a birefringent cellophane sheet. Previous versions of the optic gate required the retardance of the cellophane to be as close to 180° as possible throughout the entire visible wavelength range, which meant it was often difficult to find a cellophane sheet with the right thickness and dispersion characteristics to meet this requirement. The complementary optic gate reported in this paper has no restriction on the thickness, composition, or wavelength range of the cellophane sheet except that the cellophane must have some birefringence. Even with an arbitrary retardance, an extinction ratio of 5 × 10⁻³ was achieved at λ = 0.63 μm. The optic gate was used to convert an iPad into a 3D display without the need for the observer to wear glasses. The high extinction ratio of the optic gate resulted in a 3D display of supreme quality.

  18. Constructing 3-D Models Of A Scene From Planned Multiple Views

    NASA Astrophysics Data System (ADS)

    Xie, Shun-en; Calvert, Thomas W.

    1987-03-01

    Whether in an office, a warehouse or a home, the mobile robot must often work in a cluttered environment; although the basic layout of the environment may be known in advance, the nature and placement of objects within the environment will generally be unknown. Thus the intelligent mobile robot must be able to sense its environment with a vision system and it must be able to analyse multiple views to construct 3-d models of the objects it encounters. Since this analysis results in a heavy computational load, it is important to minimize the number of views and to use a planner to dynamically select a minimal set of vantage viewpoints. This paper discusses an approach to this general problem and describes a prototype system for a mobile intelligent robot which can construct 3-d models from planned sequential views. The principal components of this system are: (1) decomposition of a framed view into its components and the construction of partial 3-d descriptions of the view, (2) matching of the known environment to the partial 3-d descriptions of the view, (3) matching of partial descriptions of bodies derived from the current view with partial models constructed from previous views, (4) identification of new information in the current view and use of the information to update the models, (5) identification of unknown parts of partially constructed body models so that further viewpoints can be planned, (6) construction of a partial map of the scene and updating with each successive view, (7) selection of new viewpoints to maximize the information returned by a planner, (8) use of an expert system to convert the original boundary representations of the bodies to a new Constructive Solid Geometry-Extended Enhanced Spherical Image (CSG-EESI) representation to facilitate the recovery of structural information. Although the complete prototype system has not been implemented, its key components have been implemented and tested.

  19. Integration of a 3D perspective view in the navigation display: featuring pilot's mental model

    NASA Astrophysics Data System (ADS)

    Ebrecht, L.; Schmerwitz, S.

    2015-05-01

    Synthetic vision systems (SVS) are a spreading technology in the avionics domain. Several studies prove enhanced situational awareness when using synthetic vision. Since the introduction of synthetic vision, a steady change and evolution has started concerning the primary flight display (PFD) and the navigation display (ND). The main improvements of the ND comprise the representation of colored enhanced ground proximity warning system (EGPWS), weather radar, and TCAS information. Synthetic vision seems to offer high potential to further enhance cockpit display systems. In particular, given the current trend of having a 3D perspective view in the SVS-PFD while leaving the navigational content as well as the methods of interaction unchanged, the question arises whether and how the gap between both displays might evolve into a serious problem. This issue becomes important in relation to the transition and combination of strategic and tactical flight guidance. Hence, the pros and cons of 2D and 3D views generally, as well as the gap between the egocentric perspective 3D view of the PFD and the exocentric 2D top and side views of the ND, will be discussed. Further, a concept for the integration of a 3D perspective view, i.e., a bird's eye view, in a synthetic vision ND will be presented. The combination of 2D and 3D views in the ND enables a better correlation of the ND and the PFD. Additionally, this supports the building of the pilot's mental model. The authors believe it will improve situational and spatial awareness. It might prove to further raise the safety margin when operating in mountainous areas.

  20. Four-view stereoscopic imaging and display system for web-based 3D image communication

    NASA Astrophysics Data System (ADS)

    Kim, Seung-Cheol; Park, Young-Gyoo; Kim, Eun-Soo

    2004-10-01

    In this paper, a new software-oriented autostereoscopic 4-view imaging and display system for web-based 3D image communication is implemented using four digital cameras, an Intel Xeon server computer, a graphics card with four outputs, a projection-type 4-view 3D display system and Microsoft's DirectShow programming library. Its performance is analyzed in terms of image-grabbing frame rate, displayed image resolution, possible color depth and number of views. Experimental results show that the proposed system can display 4-view VGA images with 16-bit full color at a frame rate of 15 fps in real time. The image resolution, color depth, frame rate and number of views are mutually interrelated and can be easily controlled in the proposed system through the developed software, so considerable flexibility in the design and implementation of the proposed multiview 3D imaging and display system is expected in practical web-based 3D image communication.

  1. A 3D photographic capsule endoscope system with full field of view

    NASA Astrophysics Data System (ADS)

    Ou-Yang, Mang; Jeng, Wei-De; Lai, Chien-Cheng; Kung, Yi-Chinn; Tao, Kuan-Heng

    2013-09-01

    Current capsule endoscopes use one camera to capture images of the intestinal surface. They can locate an abnormal point but cannot provide detailed information about it. Using two cameras can generate 3D images, but the visual plane changes as the capsule endoscope rotates, so two cameras cannot capture the image information completely. To solve this problem, this research presents a new kind of capsule endoscope for capturing 3D images, the '3D photographic capsule endoscope system'. The system uses three cameras to capture images in real time. The advantage is an increase in viewing range of up to 2.99 times with respect to the two-camera system. Combined with a 3D monitor, the system provides exact information about symptomatic points, helping doctors diagnose disease.

  2. Are 3-D coronal mass ejection parameters from single-view observations consistent with multiview ones?

    NASA Astrophysics Data System (ADS)

    Lee, Harim; Moon, Y.-J.; Na, Hyeonock; Jang, Soojeong; Lee, Jae-Ok

    2015-12-01

    To prepare for when only single-view observations are available, we have tested whether the 3-D parameters (radial velocity, angular width, and source location) of halo coronal mass ejections (HCMEs) from single-view observations are consistent with those from multiview observations. For this test, we selected 44 HCMEs from December 2010 to June 2011 under the following conditions: partial and full HCMEs observed by SOHO and limb CMEs observed by the twin STEREO spacecraft while they were approximately in quadrature. In this study, we compare the 3-D parameters of the HCMEs from three different methods: (1) a geometrical triangulation method, the STEREO CAT tool developed by NASA/CCMC, for multiview observations using STEREO/SECCHI and SOHO/LASCO data; (2) the graduated cylindrical shell (GCS) flux rope model for multiview observations using STEREO/SECCHI data; and (3) an ice cream cone model for single-view observations using SOHO/LASCO data. We find that the radial velocities and source locations of the HCMEs from the three methods are consistent with one another, with high correlation coefficients (≥0.9). However, the angular widths from the ice cream cone model are noticeably underestimated for broad CMEs larger than 100° and for several partial HCMEs. A comparison between the 3-D CME parameters directly measured from the twin STEREO spacecraft and the above 3-D parameters shows that the parameters from multiview observations are more consistent with the STEREO measurements than those from single view.

  3. Adaptive image warping for hole prevention in 3D view synthesis.

    PubMed

    Plath, Nils; Knorr, Sebastian; Goldmann, Lutz; Sikora, Thomas

    2013-09-01

    The increasing popularity of 3D video calls for new methods to ease the conversion of existing monocular video to stereoscopic or multi-view video. A popular way to convert video is given by depth image-based rendering methods, in which a depth map associated with an image frame is used to generate a virtual view. Because of the lack of knowledge about the 3D structure of a scene and its corresponding texture, however, the conversion of 2D video inevitably leads to holes in the resulting 3D image where newly exposed areas appear. The conversion process can be altered such that no holes become visible in the resulting 3D view by superimposing a regular grid over the depth map and deforming it. In this paper, an adaptive image warping approach is proposed as an improvement over the regular-grid approach. The new algorithm exploits the smoothness of a typical depth map to reduce the complexity of the underlying optimization problem that must be solved to find the deformation required to prevent holes. This is achieved by splitting the depth map into blocks of homogeneous depth using quadtrees and running the optimization on the resulting adaptive grid. The results show that this approach leads to a considerable reduction of the computational complexity while maintaining the visual quality of the synthesized views. PMID:23782807
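
    As a rough illustration of the adaptive-grid idea (not the authors' implementation), the sketch below splits a depth map into blocks of approximately homogeneous depth with a quadtree; the depth-range threshold and minimum block size are assumed values.

    ```python
    # Quadtree split of a depth map into depth-homogeneous blocks, so that a
    # warping optimization can run on the resulting adaptive grid.
    import numpy as np

    def quadtree_blocks(depth, x0, y0, w, h, max_range=2.0, min_size=8):
        """Return a list of (x, y, w, h) blocks whose depth range is small enough."""
        block = depth[y0:y0 + h, x0:x0 + w]
        if (block.max() - block.min() <= max_range) or (w <= min_size or h <= min_size):
            return [(x0, y0, w, h)]
        hw, hh = w // 2, h // 2
        blocks = []
        for dx, dy, bw, bh in [(0, 0, hw, hh), (hw, 0, w - hw, hh),
                               (0, hh, hw, h - hh), (hw, hh, w - hw, h - hh)]:
            blocks += quadtree_blocks(depth, x0 + dx, y0 + dy, bw, bh, max_range, min_size)
        return blocks

    # Example: grid = quadtree_blocks(np.random.rand(256, 256) * 255, 0, 0, 256, 256)
    ```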

  4. Multi-Modal Clique-Graph Matching for View-Based 3D Model Retrieval.

    PubMed

    Liu, An-An; Nie, Wei-Zhi; Gao, Yue; Su, Yu-Ting

    2016-05-01

    Multi-view matching is an important but challenging task in view-based 3D model retrieval. To address this challenge, we propose an original multi-modal clique graph (MCG) matching method in this paper. We systematically present a method for MCG generation, where the MCG is composed of cliques, which consist of neighboring nodes in the multi-modal feature space, and hyper-edges that link pairwise cliques. Moreover, we propose an image set-based clique/edgewise similarity measure to address the set-to-set distance measure, which is the core problem in MCG matching. The proposed MCG provides the following benefits: 1) it preserves the local and global attributes of a graph with the designed structure; 2) it eliminates redundant and noisy information by strengthening inliers while suppressing outliers; and 3) it avoids the difficulty of defining high-order attributes and solving hyper-graph matching. We validate MCG-based 3D model retrieval using three popular single-modal data sets and one novel multi-modal data set. Extensive experiments show the superiority of the proposed method through comparisons. Moreover, we contribute a novel real-world 3D object data set, the multi-view RGB-D object data set. To the best of our knowledge, it is the largest real-world 3D object data set containing multi-modal and multi-view information. PMID:26978821

  5. Effect of mental fatigue caused by mobile 3D viewing on selective attention: an ERP study.

    PubMed

    Mun, Sungchul; Kim, Eun-Soo; Park, Min-Chul

    2014-12-01

    This study investigated behavioral responses to and auditory event-related potential (ERP) correlates of mental fatigue caused by mobile three-dimensional (3D) viewing. Twenty-six participants (14 women) performed a selective attention task in which they were asked to respond to sounds presented at the attended side while ignoring sounds at the ignored side, before and after mobile 3D viewing. Considering different individual susceptibilities to 3D, participants' subjective fatigue data were used to categorize them into two groups: fatigued and unfatigued. The amplitudes of the d-ERP components were defined as the differences between the amplitudes of the time-locked brain oscillations for attended and ignored sounds, and these values were used to quantify the degree to which spatial selective attention was impaired by 3D mental fatigue. The fatigued group showed significantly longer response times after mobile 3D viewing compared to before the viewing. However, response accuracy did not change significantly between the two conditions, implying that the participants used a behavioral strategy of increasing their response times to cope with a potential decrement in accuracy. No significant differences were observed for the unfatigued group. Analysis of covariance revealed group differences, with significant decreases and trends toward significant decreases in the d-P200 and d-late positive potential (LPP) amplitudes at the occipital electrodes of the fatigued and unfatigued groups. Our findings indicate that mentally fatigued participants did not effectively block out distractors in their information processing, providing support for the hypothesis that 3D mental fatigue impairs spatial selective attention and is characterized by changes in d-P200 and d-LPP amplitudes. PMID:25194505

  6. Web-based intermediate view reconstruction for multiview stereoscopic 3D display

    NASA Astrophysics Data System (ADS)

    Kim, Dong-Kyu; Lee, Won-Kyung; Ko, Jung-Hwan; Bae, Kyung-hoon; Kim, Eun-Soo

    2005-08-01

    In this paper, web-based intermediate view reconstruction for a multiview stereoscopic 3D display system is proposed, using stereo cameras and disparity maps, an Intel Xeon server computer and Microsoft's DirectShow programming library; its performance is analyzed in terms of image-grabbing frame rate and number of views. In the proposed system, stereo images are initially captured with stereo digital cameras and then processed on the Intel Xeon server. The captured two-view image data are compressed by extracting the disparity data between them and transmitted to a client system over the network, where the received stereo data are displayed on the 16-view stereoscopic 3D display system using intermediate view reconstruction. The program controlling the overall system is developed with the Microsoft DirectShow SDK. Experimental results show that the proposed system can display 16-view 3D images with 8-bit grayscale at a frame rate of 15 fps in real time.
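
    A minimal sketch of disparity-based intermediate view reconstruction, assuming a rectified stereo pair and a per-pixel disparity map; hole filling and occlusion handling, which a practical system needs, are omitted.

    ```python
    # Illustrative (not the authors' code): warp the left image by a fraction of the
    # disparity to synthesize a virtual view between the two cameras.
    import numpy as np

    def intermediate_view(left, disparity, alpha=0.5):
        """Warp `left` toward the right camera by alpha * disparity (0 <= alpha <= 1)."""
        h, w = disparity.shape
        virtual = np.zeros_like(left)
        for y in range(h):
            for x in range(w):
                xs = int(round(x - alpha * disparity[y, x]))  # shifted column
                if 0 <= xs < w:
                    virtual[y, xs] = left[y, x]
        return virtual
    ```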

  7. JAtlasView: a Java atlas-viewer for browsing biomedical 3D images and atlases

    PubMed Central

    Feng, Guangjie; Burton, Nick; Hill, Bill; Davidson, Duncan; Kerwin, Janet; Scott, Mark; Lindsay, Susan; Baldock, Richard

    2005-01-01

    Background Many three-dimensional (3D) images are routinely collected in biomedical research and a number of digital atlases with associated anatomical and other information have been published. A number of tools are available for viewing this data ranging from commercial visualization packages to freely available, typically system architecture dependent, solutions. Here we discuss an atlas viewer implemented to run on any workstation using the architecture neutral Java programming language. Results We report the development of a freely available Java based viewer for 3D image data, describe the structure and functionality of the viewer and how automated tools can be developed to manage the Java Native Interface code. The viewer allows arbitrary re-sectioning of the data and interactive browsing through the volume. With appropriately formatted data, for example as provided for the Electronic Atlas of the Developing Human Brain, a 3D surface view and anatomical browsing is available. The interface is developed in Java with Java3D providing the 3D rendering. For efficiency the image data is manipulated using the Woolz image-processing library provided as a dynamically linked module for each machine architecture. Conclusion We conclude that Java provides an appropriate environment for efficient development of these tools and techniques exist to allow computationally efficient image-processing libraries to be integrated relatively easily. PMID:15757508

  8. Determination of the optimum viewing distance for a multi-view auto-stereoscopic 3D display.

    PubMed

    Yoon, Ki-Hyuk; Ju, Heongkyu; Park, Inkyu; Kim, Sung-Kyu

    2014-09-22

    We present methodologies for determining the optimum viewing distance (OVD) for a multi-view auto-stereoscopic 3D display system with a parallax barrier. The OVD can be efficiently determined as the viewing distance at which the statistical deviation of the centers of the quasi-linear illuminance distributions at the central viewing zones is minimized, using only local areas of the display panel. This method offers reduced computation time because it does not use the entire area of the display panel during the simulation, yet still secures considerable accuracy. The method is verified in experiments, showing its applicability for efficient optical characterization. PMID:25321731
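
    The selection criterion described above can be sketched as follows, under the assumption of a hypothetical `illuminance_profile(zone, distance)` simulation routine that returns lateral positions and illuminance values for one central viewing zone; this is not the authors' code.

    ```python
    # Pick the viewing distance that minimizes the spread of the illuminance-weighted
    # centers of the central viewing zones.
    import numpy as np

    def distribution_center(positions, illuminance):
        """Illuminance-weighted centroid of one viewing zone's lateral profile."""
        return np.average(positions, weights=illuminance)

    def optimum_viewing_distance(candidate_distances, zones, illuminance_profile):
        def spread(d):
            centers = [distribution_center(*illuminance_profile(zone, d)) for zone in zones]
            return np.std(centers)
        return min(candidate_distances, key=spread)
    ```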

  9. Evaluation of 3D nano-macro porous bioactive glass scaffold for hard tissue engineering.

    PubMed

    Wang, S; Falk, M M; Rashad, A; Saad, M M; Marques, A C; Almeida, R M; Marei, M K; Jain, H

    2011-05-01

    Recently, nano-macro dual-porous, three-dimensional (3D) glass structures were developed for use as bioscaffolds for hard tissue regeneration, but there have been concerns regarding the interconnectivity and homogeneity of nanopores in the scaffolds, as well as the cytotoxicity of the environment deep inside due to limited fluid access. Therefore, mercury porosimetry, nitrogen absorption, and TEM have been used to characterize nanopore network of the scaffolds. In parallel, viability of MG 63 human osteosarcoma cells seeded on scaffold surface was investigated by fluorescence, confocal and electron microscopy methods. The results show that cells attach, migrate and penetrate inside the glass scaffold with high proliferation and viability rate. Additionally, scaffolds were implanted under the skin of a male New Zealand rabbit for in vivo animal test. Initial observations show the formation of new tissue with blood vessels and collagen fibers deep inside the implanted scaffolds with no obvious inflammatory reaction. Thus, the new nano-macro dual-porous glass structure could be a promising bioscaffold for use in regenerative medicine and tissue engineering for bone regeneration. PMID:21445655

  10. Viewing zone duplication of multi-projection 3D display system using uniaxial crystal.

    PubMed

    Lee, Chang-Kun; Park, Soon-Gi; Moon, Seokil; Lee, Byoungho

    2016-04-18

    We propose a novel multiplexing technique for increasing the viewing zone of a multi-view based multi-projection 3D display system by employing double refraction in a uniaxial crystal. When linearly polarized images from the projector pass through the uniaxial crystal, two optical paths are possible depending on the polarization state of the image. Therefore, the optical path of the image can be changed and the viewing zone shifted laterally. Polarization modulation of the image from a single projection unit enables us to generate two viewing zones at different positions. For realizing full-color images at each viewing zone, a polarization-based temporal multiplexing technique is adopted with a conventional liquid crystal (LC) polarization switching device. Through experiments, a prototype ten-view multi-projection 3D display system presenting full-color view images is implemented by combining five laser scanning projectors, an optically clear calcite (CaCO3) crystal, and an LC polarization rotator. For each time sequence of the temporal multiplexing, the luminance distribution of the proposed system is measured and analyzed. PMID:27137284

  11. Reconstructing White Walls: Multi-View Multi-Shot 3d Reconstruction of Textureless Surfaces

    NASA Astrophysics Data System (ADS)

    Ley, Andreas; Hänsch, Ronny; Hellwich, Olaf

    2016-06-01

    The reconstruction of the 3D geometry of a scene based on image sequences has been a very active field of research for decades. Nevertheless, challenges remain, in particular for homogeneous parts of objects. This paper proposes a solution to enhance the 3D reconstruction of weakly-textured surfaces using standard cameras as well as a standard multi-view stereo pipeline. The underlying idea of the proposed method is to improve the signal-to-noise ratio in weakly-textured regions while adaptively amplifying the local contrast to make better use of the limited numerical range of 8-bit images. Based on this premise, multiple shots per viewpoint are used to suppress statistically uncorrelated noise and enhance low-contrast texture. By only changing the image acquisition and adding a preprocessing step, a tremendous increase of up to 300% in the completeness of the 3D reconstruction is achieved.
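
    A hedged sketch of the acquisition/preprocessing idea: average several shots per viewpoint to suppress uncorrelated noise, then amplify local contrast. OpenCV's CLAHE is used here as a stand-in for the paper's adaptive amplification and is an assumption, not the authors' method.

    ```python
    # Multi-shot averaging followed by local contrast enhancement (CLAHE).
    import cv2
    import numpy as np

    def enhance_viewpoint(shots):
        """shots: list of 8-bit grayscale images taken from one viewpoint."""
        stacked = np.mean(np.stack([s.astype(np.float32) for s in shots]), axis=0)
        clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
        return clahe.apply(np.clip(stacked, 0, 255).astype(np.uint8))
    ```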

  12. Detection and 3D reconstruction of traffic signs from multiple view color images

    NASA Astrophysics Data System (ADS)

    Soheilian, Bahman; Paparoditis, Nicolas; Vallet, Bruno

    2013-03-01

    3D reconstruction of traffic signs is of great interest in many applications such as image-based localization and navigation. In order to reflect reality, the reconstruction process should be both accurate and precise. In order to reach such a valid reconstruction from calibrated multi-view images, accurate and precise extraction of the signs in every individual view is a must. This paper first presents an automatic pipeline for identifying and extracting the silhouette of signs in every individual image. Then, a multi-view constrained 3D reconstruction algorithm provides an optimal 3D silhouette for the detected signs. The first step, called detection, applies a color-based segmentation to generate ROIs (regions of interest) in the image. The shape of every ROI is estimated by fitting an ellipse, a quadrilateral or a triangle to its edge points. A ROI is rejected if none of the three shapes can be fitted sufficiently precisely. Thanks to the estimated shape, the remaining candidate ROIs are rectified to remove the perspective distortion and then matched against a set of reference signs using textural information. Poor matches are rejected and the types of the remaining ones are identified. The output of the detection algorithm is a set of identified road signs whose silhouettes in the image plane are represented by an ellipse, a quadrilateral or a triangle. The 3D reconstruction process is based on hypothesis generation and verification. Hypotheses are generated by a stereo matching approach taking into account the epipolar geometry as well as the similarity of the categories. The hypotheses that plausibly correspond to the same 3D road sign are identified and grouped during this process. Finally, all the hypotheses of the same group are merged to generate a unique 3D road sign by a multi-view algorithm integrating a priori knowledge about the 3D shape of road signs as constraints. The algorithm is assessed on real and synthetic images and reached an average accuracy of 3.5 cm for
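
    The shape-validation step for elliptical signs might look like the following sketch, which fits an ellipse to the ROI edge points with OpenCV and rejects the ROI when a crude fit error exceeds an assumed threshold; the paper also fits quadrilaterals and triangles, which are omitted here.

    ```python
    # Ellipse fit to ROI edge points with a rough pixel-scale residual check.
    import cv2
    import numpy as np

    def fit_ellipse_or_reject(edge_points, max_mean_error=2.0):
        """edge_points: (N, 2) array of (x, y) edge coordinates, N >= 5."""
        pts = np.asarray(edge_points, dtype=np.float32)
        (cx, cy), (MA, ma), angle = cv2.fitEllipse(pts.reshape(-1, 1, 2))
        # Crude fit error: evaluate the normalized implicit ellipse equation in the
        # ellipse's own frame and rescale to an approximate pixel distance.
        theta = np.deg2rad(angle)
        R = np.array([[np.cos(theta), np.sin(theta)],
                      [-np.sin(theta), np.cos(theta)]])
        local = (pts - [cx, cy]) @ R.T
        residual = (local[:, 0] / (MA / 2)) ** 2 + (local[:, 1] / (ma / 2)) ** 2 - 1.0
        mean_error = np.mean(np.abs(residual)) * min(MA, ma) / 2
        return ((cx, cy), (MA, ma), angle) if mean_error <= max_mean_error else None
    ```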

  13. 3D-2D registration for surgical guidance: effect of projection view angles on registration accuracy

    NASA Astrophysics Data System (ADS)

    Uneri, A.; Otake, Y.; Wang, A. S.; Kleinszig, G.; Vogt, S.; Khanna, A. J.; Siewerdsen, J. H.

    2014-01-01

    An algorithm for intensity-based 3D-2D registration of CT and x-ray projections is evaluated, specifically using single- or dual-projection views to provide 3D localization. The registration framework employs the gradient information similarity metric and covariance matrix adaptation evolution strategy to solve for the patient pose in six degrees of freedom. Registration performance was evaluated in an anthropomorphic phantom and cadaver, using C-arm projection views acquired at angular separation, Δθ, ranging from ~0° to 180° at variable C-arm magnification. Registration accuracy was assessed in terms of 2D projection distance error and 3D target registration error (TRE) and compared to that of an electromagnetic (EM) tracker. The results indicate that angular separation as small as Δθ ~10°-20° achieved TRE <2 mm with 95% confidence, comparable or superior to that of the EM tracker. The method allows direct registration of preoperative CT and planning data to intraoperative fluoroscopy, providing 3D localization free from conventional limitations associated with external fiducial markers, stereotactic frames, trackers and manual registration.
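
    A schematic of such a registration loop (not the authors' implementation) using the `cma` package: CMA-ES searches for the 6-DOF pose that maximizes a gradient-information similarity between digitally reconstructed radiographs of the CT and the measured projections. `render_drr` and `gradient_information` are placeholder callables supplied by the caller.

    ```python
    # 6-DOF 3D-2D registration sketch driven by CMA-ES (pip install cma).
    import cma
    import numpy as np

    def register_3d2d(ct_volume, projections, geometry, render_drr, gradient_information,
                      x0=None, sigma0=5.0):
        """Estimate the pose (tx, ty, tz, rx, ry, rz) that best aligns CT to the projections."""
        x0 = np.zeros(6) if x0 is None else x0

        def cost(pose):
            sim = sum(gradient_information(render_drr(ct_volume, pose, g), p)
                      for p, g in zip(projections, geometry))
            return -sim  # CMA-ES minimizes, so negate the similarity

        es = cma.CMAEvolutionStrategy(x0, sigma0)
        while not es.stop():
            candidates = es.ask()
            es.tell(candidates, [cost(c) for c in candidates])
        return es.result.xbest
    ```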

  14. Integration of multiple view plus depth data for free viewpoint 3D display

    NASA Astrophysics Data System (ADS)

    Suzuki, Kazuyoshi; Yoshida, Yuko; Kawamoto, Tetsuya; Fujii, Toshiaki; Mase, Kenji

    2014-03-01

    This paper proposes a method for constructing a reasonably scaled end-to-end free-viewpoint video system that captures multiple view plus depth data, reconstructs three-dimensional polygon models of objects, and displays them in virtual 3D CG spaces. The system consists of a desktop PC and four Kinect sensors. First, multiple view plus depth data at four viewpoints are captured by the Kinect sensors simultaneously. Then, the captured data are integrated into point cloud data using the camera parameters. The obtained point cloud data are sampled into volume data consisting of voxels. Since the volume data generated from the point cloud data are sparse, they are densified using a global optimization algorithm. The final step is to reconstruct surfaces on the dense volume data with the discrete marching cubes method. Since the accuracy of the depth maps affects the quality of the 3D polygon model, a simple inpainting method for improving the depth maps is also presented.
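
    The first integration step, back-projecting Kinect depth pixels into a common world frame, can be sketched as below; the pinhole intrinsics (fx, fy, cx, cy) and the 4x4 extrinsic matrix are assumed to come from calibration and are not values from the paper.

    ```python
    # Back-project a depth image to a world-frame point cloud.
    import numpy as np

    def depth_to_world(depth_m, fx, fy, cx, cy, extrinsic_4x4):
        """depth_m: (H, W) depth image in meters; returns (N, 3) world-frame points."""
        h, w = depth_m.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth_m.ravel()
        valid = z > 0
        x = (u.ravel() - cx) * z / fx
        y = (v.ravel() - cy) * z / fy
        cam = np.stack([x, y, z, np.ones_like(z)], axis=1)[valid]  # homogeneous camera coords
        return (cam @ extrinsic_4x4.T)[:, :3]
    ```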

  15. Modeling of multi-view 3D freehand radio frequency ultrasound.

    PubMed

    Klein, T; Hansson, M; Navab, Nassir

    2012-01-01

    Nowadays ultrasound (US) examinations are typically performed with conventional machines providing two-dimensional imagery. However, there exist a multitude of applications where doctors could benefit from three-dimensional ultrasound, which provides better judgment due to the extended spatial view. 3D freehand US allows acquisition of images by means of a tracking device attached to the ultrasound transducer. Unfortunately, view dependency makes the 3D representation of ultrasound a non-trivial task. To address this, we model speckle statistics in envelope-detected radio frequency (RF) data using a finite mixture model (FMM), assuming a parametric representation of the data in which the multiple views are treated as components of the FMM. The proposed model is showcased with registration, using an ultrasound-specific distribution-based pseudo-distance, and with reconstruction tasks performed on the manifold of Gamma model parameters. An example field of application is neurology using transcranial US, as this domain requires high accuracy and the data systematically feature low SNR, making intensity-based registration difficult. In particular, 3D US can be used to improve the differential diagnosis of Parkinson's disease (PD) compared to conventional approaches and is therefore of high relevance for future application. PMID:23285579

  16. Learning the 3-D structure of objects from 2-D views depends on shape, not format

    PubMed Central

    Tian, Moqian; Yamins, Daniel; Grill-Spector, Kalanit

    2016-01-01

    Humans can learn to recognize new objects just from observing example views. However, it is unknown what structural information enables this learning. To address this question, we manipulated the amount of structural information given to subjects during unsupervised learning by varying the format of the trained views. We then tested how format affected participants' ability to discriminate similar objects across views that were rotated 90° apart. We found that, after training, participants' performance increased and generalized to new views in the same format. Surprisingly, the improvement was similar across line drawings, shape from shading, and shape from shading + stereo even though the latter two formats provide richer depth information compared to line drawings. In contrast, participants' improvement was significantly lower when training used silhouettes, suggesting that silhouettes do not have enough information to generate a robust 3-D structure. To test whether the learned object representations were format-specific or format-invariant, we examined if learning novel objects from example views transfers across formats. We found that learning objects from example line drawings transferred to shape from shading and vice versa. These results have important implications for theories of object recognition because they suggest that (a) learning the 3-D structure of objects does not require rich structural cues during training as long as shape information of internal and external features is provided and (b) learning generates shape-based object representations independent of the training format. PMID:27153196

  17. Learning the 3-D structure of objects from 2-D views depends on shape, not format.

    PubMed

    Tian, Moqian; Yamins, Daniel; Grill-Spector, Kalanit

    2016-05-01

    Humans can learn to recognize new objects just from observing example views. However, it is unknown what structural information enables this learning. To address this question, we manipulated the amount of structural information given to subjects during unsupervised learning by varying the format of the trained views. We then tested how format affected participants' ability to discriminate similar objects across views that were rotated 90° apart. We found that, after training, participants' performance increased and generalized to new views in the same format. Surprisingly, the improvement was similar across line drawings, shape from shading, and shape from shading + stereo even though the latter two formats provide richer depth information compared to line drawings. In contrast, participants' improvement was significantly lower when training used silhouettes, suggesting that silhouettes do not have enough information to generate a robust 3-D structure. To test whether the learned object representations were format-specific or format-invariant, we examined if learning novel objects from example views transfers across formats. We found that learning objects from example line drawings transferred to shape from shading and vice versa. These results have important implications for theories of object recognition because they suggest that (a) learning the 3-D structure of objects does not require rich structural cues during training as long as shape information of internal and external features is provided and (b) learning generates shape-based object representations independent of the training format. PMID:27153196

  18. Image selection in photogrammetric multi-view stereo methods for metric and complete 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Hosseininaveh Ahmadabadian, Ali; Robson, Stuart; Boehm, Jan; Shortis, Mark

    2013-04-01

    Multi-View Stereo (MVS), as a low-cost technique for precise 3D reconstruction, can rival laser scanners if the scale of the model is resolved. A fusion of stereo imaging equipment with photogrammetric bundle adjustment and MVS methods, known as photogrammetric MVS, can generate correctly scaled 3D models without using any known object distances. Although a huge number of stereo images captured of an object (e.g. 200 high-resolution images of a small object) contain redundant data that allow detailed and accurate 3D reconstruction, the capture and processing time increases when a vast number of high-resolution images are employed. Moreover, some parts of the object are often missing due to incomplete coverage. These problems demand a logical selection of the most suitable stereo camera views from the large image dataset. This paper presents a method for clustering and choosing optimal stereo, or optionally single, images from a large image dataset. The approach focusses on the two key steps of image clustering and iterative image selection. The method is developed within a software application called Imaging Network Designer (IND) and tested by the 3D recording of a gearbox and three metric reference objects. A comparison is made between IND and CMVS, which is a free package for selecting vantage images. The final 3D models obtained from the IND and CMVS approaches are compared with datasets generated with an MMDx Nikon laser scanner. Results demonstrate that IND can provide a better image selection for MVS than CMVS in terms of surface coordinate uncertainty and completeness.

  19. View-independent Contour Culling of 3D Density Maps for Far-field Viewing of Iso-surfaces

    PubMed Central

    Feng, Powei; Ju, Tao; Warren, Joe

    2011-01-01

    In many applications, iso-surface is the primary method for visualizing the structure of 3D density maps. We consider a common scenario where the user views the iso-surfaces from a distance and varies the level associated with the iso-surface as well as the view direction to gain a sense of the general 3D structure of the density map. For many types of density data, the iso-surfaces associated with a particular threshold may be nested and never visible during this type of viewing. In this paper, we discuss a simple, conservative culling method that avoids the generation of interior portions of iso-surfaces at the contouring stage. Unlike existing methods that perform culling based on the current view direction, our culling is performed once for all views and requires no additional computation as the view changes. By pre-computing a single visibility map, culling is done at any iso-value with little overhead in contouring. We demonstrate the effectiveness of the algorithm on a range of bio-medical data and discuss a practical application in online visualization. PMID:21673830

  20. Single DMD time-multiplexed 64-views autostereoscopic 3D display

    NASA Astrophysics Data System (ADS)

    Loreti, Luigi

    2013-03-01

    Based on a previous prototype of a real-time 3D holographic display developed last year, we developed a new concept for an auto-stereoscopic, wide-angle (90°), full-color 3D multiview display (64 views). The display is based on an RGB laser light source illuminating a DMD (Discovery 4100, 0.7") at 24,000 fps and an image deflection system made with an AOD (acousto-optic deflector) driven by a piezo-electric transducer that generates a variable standing acoustic wave in the crystal, which acts as a phase grating. The DMD projects 64 points of view of the image in fast sequence onto the crystal cube. Depending on the frequency of the standing wave, the input picture sent by the DMD is deflected at a different angle of view. A holographic screen at a proper distance diffuses the rays vertically (60°) and horizontally selects (1°) only the rays directed to the observer. A telescope optical system enlarges the image to the required size. VHDL firmware to render 64 views (16-bit 4:2:2) of a CAD model (obj, dxf or 3ds) and depth-map-encoded video images in real time (16 ms) was developed in the resident Virtex5 FPGA of the Discovery 4100 SDK, thus eliminating the need for image transfer and high-speed links.

  1. Determining canonical views of 3D object using minimum description length criterion and compressive sensing method

    NASA Astrophysics Data System (ADS)

    Chen, Ping-Feng; Krim, Hamid

    2008-02-01

    In this paper, we propose two methods to determine the canonical views of 3D objects: the minimum description length (MDL) criterion and a compressive sensing method. The MDL criterion searches for the description length that achieves a balance between model accuracy and parsimony. It takes the form of the sum of a likelihood term and a penalizing term, where the likelihood favors model accuracy, such that more views assist the description of an object, while the second term penalizes lengthy descriptions to prevent overfitting of the model. In order to devise the likelihood term, we propose a model that represents a 3D object as the weighted sum of multiple range images; this model is also used in the second method to determine the canonical views. In the compressive sensing method, an intelligent way of parsimoniously sampling an object is presented. We draw directly on Donoho's and Candès' work and adapt it to our model. Each range image is viewed as a projection, or a sample, of a 3D model, and by using compressive sensing theory we are able to reconstruct the object with overwhelming probability while sensing the object only sparsely, in a random manner. Compressive sensing differs from traditional compression methods in that the former compresses at the sampling stage, while the latter first collects a large number of samples and applies compression afterwards. The compressive sensing scheme is particularly useful when the number of sensors is limited or the sampling machinery costs considerable resources or time.
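
    For reference, the generic two-part form of an MDL criterion is shown below (the paper's exact likelihood and penalty terms may differ); k is the number of free parameters, here tied to the number of selected views, and n is the data size.

    ```latex
    % Generic two-part MDL criterion: data fit plus a complexity penalty.
    \mathrm{DL}(\mathcal{M}, D) \;=\; -\log p\!\left(D \mid \mathcal{M}\right) \;+\; \frac{k}{2}\,\log n
    ```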

  2. Mesoporous bioactive glass nanolayer-functionalized 3D-printed scaffolds for accelerating osteogenesis and angiogenesis.

    PubMed

    Zhang, Yali; Xia, Lunguo; Zhai, Dong; Shi, Mengchao; Luo, Yongxiang; Feng, Chun; Fang, Bing; Yin, Jingbo; Chang, Jiang; Wu, Chengtie

    2015-12-01

    The hierarchical microstructure, surface and interface of biomaterials are important factors influencing their bioactivity. Porous bioceramic scaffolds have been widely used for bone tissue engineering by optimizing their chemical composition and large-pore structure. However, the surface and interface of struts in bioceramic scaffolds are often ignored. The aim of this study is to incorporate hierarchical pores and bioactive components into the bioceramic scaffolds by constructing nanopores and bioactive elements on the struts of scaffolds and further improve their bone-forming activity. Mesoporous bioactive glass (MBG) modified β-tricalcium phosphate (MBG-β-TCP) scaffolds with a hierarchical pore structure and a functional strut surface (∼100 nm of MBG nanolayer) were successfully prepared via 3D printing and spin coating. The compressive strength and apatite-mineralization ability of MBG-β-TCP scaffolds were significantly enhanced as compared to β-TCP scaffolds without the MBG nanolayer. The attachment, viability, alkaline phosphatase (ALP) activity, osteogenic gene expression (Runx2, BMP2, OPN and Col I) and protein expression (OPN, Col I, VEGF, HIF-1α) of rabbit bone marrow stromal cells (rBMSCs) as well as the attachment, viability and angiogenic gene expression (VEGF and HIF-1α) of human umbilical vein endothelial cells (HUVECs) in MBG-β-TCP scaffolds were significantly upregulated compared with conventional bioactive glass (BG)-modified β-TCP (BG-β-TCP) and pure β-TCP scaffolds. Furthermore, MBG-β-TCP scaffolds significantly enhanced the formation of new bone in vivo as compared to BG-β-TCP and β-TCP scaffolds. The results suggest that application of the MBG nanolayer to modify 3D-printed bioceramic scaffolds offers a new strategy to construct hierarchically porous scaffolds with significantly improved physicochemical and biological properties, such as mechanical properties, osteogenesis, angiogenesis and protein expression for bone tissue

  3. Mesoporous bioactive glass nanolayer-functionalized 3D-printed scaffolds for accelerating osteogenesis and angiogenesis

    NASA Astrophysics Data System (ADS)

    Zhang, Yali; Xia, Lunguo; Zhai, Dong; Shi, Mengchao; Luo, Yongxiang; Feng, Chun; Fang, Bing; Yin, Jingbo; Chang, Jiang; Wu, Chengtie

    2015-11-01

    The hierarchical microstructure, surface and interface of biomaterials are important factors influencing their bioactivity. Porous bioceramic scaffolds have been widely used for bone tissue engineering by optimizing their chemical composition and large-pore structure. However, the surface and interface of struts in bioceramic scaffolds are often ignored. The aim of this study is to incorporate hierarchical pores and bioactive components into the bioceramic scaffolds by constructing nanopores and bioactive elements on the struts of scaffolds and further improve their bone-forming activity. Mesoporous bioactive glass (MBG) modified β-tricalcium phosphate (MBG-β-TCP) scaffolds with a hierarchical pore structure and a functional strut surface (~100 nm of MBG nanolayer) were successfully prepared via 3D printing and spin coating. The compressive strength and apatite-mineralization ability of MBG-β-TCP scaffolds were significantly enhanced as compared to β-TCP scaffolds without the MBG nanolayer. The attachment, viability, alkaline phosphatase (ALP) activity, osteogenic gene expression (Runx2, BMP2, OPN and Col I) and protein expression (OPN, Col I, VEGF, HIF-1α) of rabbit bone marrow stromal cells (rBMSCs) as well as the attachment, viability and angiogenic gene expression (VEGF and HIF-1α) of human umbilical vein endothelial cells (HUVECs) in MBG-β-TCP scaffolds were significantly upregulated compared with conventional bioactive glass (BG)-modified β-TCP (BG-β-TCP) and pure β-TCP scaffolds. Furthermore, MBG-β-TCP scaffolds significantly enhanced the formation of new bone in vivo as compared to BG-β-TCP and β-TCP scaffolds. The results suggest that application of the MBG nanolayer to modify 3D-printed bioceramic scaffolds offers a new strategy to construct hierarchically porous scaffolds with significantly improved physicochemical and biological properties, such as mechanical properties, osteogenesis, angiogenesis and protein expression for bone tissue

  4. Multi-view indoor human behavior recognition based on 3D skeleton

    NASA Astrophysics Data System (ADS)

    Peng, Ling; Lu, Tongwei; Min, Feng

    2015-12-01

    To address the problems caused by viewpoint changes in activity recognition, a multi-view indoor human behavior recognition method based on the 3D skeleton is presented. First, Microsoft's Kinect device is used to obtain body motion video from the frontal, oblique and side viewpoints. Second, skeletal joints are extracted, and global body features and local features of the arms and legs are obtained simultaneously to form a 3D skeletal feature set. Third, online dictionary learning on the feature set is used to reduce the feature dimensionality. Finally, a linear support vector machine (LSVM) is used to perform behavior recognition. The experimental results show that this method achieves a better recognition rate.
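
    An illustrative pipeline for the last two steps, assuming fixed-length 3D skeletal feature vectors (which the abstract does not fully specify): scikit-learn's online dictionary learning for dimensionality reduction followed by a linear SVM. This is a sketch, not the authors' implementation.

    ```python
    # Dictionary learning + linear SVM behavior classifier.
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.svm import LinearSVC

    def train_behavior_classifier(X_train, y_train, n_atoms=64):
        dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0, random_state=0)
        codes = dico.fit_transform(X_train)      # sparse codes as reduced features
        clf = LinearSVC().fit(codes, y_train)
        return dico, clf

    def predict_behavior(dico, clf, X_test):
        return clf.predict(dico.transform(X_test))
    ```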

  5. In vivo bioluminescence tomography based on multi-view projection and 3D surface reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Shuang; Wang, Kun; Leng, Chengcai; Deng, Kexin; Hu, Yifang; Tian, Jie

    2015-03-01

    Bioluminescence tomography (BLT) is a powerful optical molecular imaging modality that enables non-invasive, real-time in vivo imaging as well as 3D quantitative analysis in preclinical studies. In order to solve the inverse problem and reconstruct inner light sources accurately, prior structural information is commonly necessary and is obtained from computed tomography or magnetic resonance imaging. This strategy requires an expensive hybrid imaging system, a complicated operation protocol and the possible involvement of ionizing radiation. The overall robustness highly depends on the fusion accuracy between the optical and structural information. In this study we present a pure optical bioluminescence tomographic system (POBTS) and a novel BLT method based on multi-view projection acquisition and 3D surface reconstruction. The POBTS acquired a sparse set of white-light surface images and bioluminescent images of a mouse. The white-light images were applied to an approximate surface model to generate a high-quality textured 3D surface reconstruction of the mouse. After that, we integrated the multi-view luminescent images based on the previous reconstruction and applied an algorithm to calibrate and quantify the surface luminescent flux in 3D. Finally, the internal bioluminescence source reconstruction was achieved with this prior information. A BALB/c mouse model bearing a breast tumor of 4T1-fLuc cells was used to evaluate the performance of the new system and technique. Compared with the conventional hybrid optical-CT approach using the same inverse reconstruction method, the reconstruction accuracy of this technique was improved: the distance error between the actual and reconstructed internal source was decreased by 0.184 mm.

  6. Bird's Eye View - A 3-D Situational Awareness Tool for the Space Station

    NASA Technical Reports Server (NTRS)

    Dershowitz, Adam; Chamitoff, Gregory

    2002-01-01

    Even as space-qualified computer hardware lags well behind the latest home computers, the possibility of using high-fidelity interactive 3-D graphics for displaying important onboard information has finally arrived and is being used on board the International Space Station (ISS). With the quantity and complexity of space-flight telemetry, 3-D displays can greatly enhance the ability of users, both onboard and on the ground, to interpret data quickly and accurately. This is particularly true for data related to vehicle attitude, position, configuration, and relation to other objects on the ground or in orbit. Bird's Eye View (BEV) is a 3-D real-time application that provides a high degree of situational awareness for the crew. Its purpose is to instantly convey important motion-related parameters to the crew and mission controllers by presenting 3-D simulated camera views of the International Space Station (ISS) in its actual environment. Driven by actual telemetry, and running on board as well as on the ground, it lets the user visualize the Space Station relative to the Earth, Sun, stars, various reference frames, and selected targets, such as ground sites or communication satellites. Since the actual ISS configuration (geometry) is also modeled accurately, everything from the alignment of the solar panels to the expected view from a selected window can be visualized accurately. A virtual representation of the Space Station in real time has many useful applications. By selecting different cameras, the crew or mission control can monitor the station's orientation in space, position over the Earth, transition from day to night, direction to the Sun, the view from a particular window, or the motion of the robotic arm. By viewing the vehicle attitude and solar panel orientations relative to the Sun, the power status of the ISS can be easily visualized and understood. Similarly, the thermal impacts of vehicle attitude can be analyzed and visually confirmed. Communication

  7. Controlled Experimental Study Depicting Moving Objects in View-Shared Time-Resolved 3D MRA

    PubMed Central

    Mostardi, Petrice M.; Haider, Clifton R.; Rossman, Phillip J.; Borisch, Eric A.; Riederer, Stephen J.

    2010-01-01

    Various methods have been used for time-resolved contrast-enhanced MRA (CE-MRA), many involving view sharing. However, the extent to which the resultant image time series represents the actual dynamic behavior of the contrast bolus is not always clear. Although numerical simulations can be used to estimate performance, an experimental study can allow more realistic characterization. The purpose of this work was to use a computer-controlled motion phantom for study of the temporal fidelity of 3D time-resolved sequences in depicting a contrast bolus. It is hypothesized that the view order of the acquisition and the selection of views in the reconstruction can affect the positional accuracy and sharpness of the leading edge of the bolus and artifactual signal preceding the edge. Phantom studies were performed using dilute gadolinium-filled vials that were moved along tabletop tracks by a computer-controlled motor. Several view orders were tested, which use view-sharing and Cartesian sampling. Compactness of measuring the k-space center, consistency of view ordering within each reconstruction frame, and sampling the k-space center near the end of the temporal footprint were shown to be important in accurate portrayal of the leading edge of the bolus. A number of findings were confirmed in an in vivo CE-MRA study. PMID:19319897

  8. Controlled experimental study depicting moving objects in view-shared time-resolved 3D MRA.

    PubMed

    Mostardi, Petrice M; Haider, Clifton R; Rossman, Phillip J; Borisch, Eric A; Riederer, Stephen J

    2009-07-01

    Various methods have been used for time-resolved contrast-enhanced magnetic resonance angiography (CE-MRA), many involving view sharing. However, the extent to which the resultant image time series represents the actual dynamic behavior of the contrast bolus is not always clear. Although numerical simulations can be used to estimate performance, an experimental study can allow more realistic characterization. The purpose of this work was to use a computer-controlled motion phantom for study of the temporal fidelity of three-dimensional (3D) time-resolved sequences in depicting a contrast bolus. It is hypothesized that the view order of the acquisition and the selection of views in the reconstruction can affect the positional accuracy and sharpness of the leading edge of the bolus and artifactual signal preceding the edge. Phantom studies were performed using dilute gadolinium-filled vials that were moved along tabletop tracks by a computer-controlled motor. Several view orders were tested using view-sharing and Cartesian sampling. Compactness of measuring the k-space center, consistency of view ordering within each reconstruction frame, and sampling the k-space center near the end of the temporal footprint were shown to be important in accurate portrayal of the leading edge of the bolus. A number of findings were confirmed in an in vivo CE-MRA study. PMID:19319897

  9. 3D Cryo-Imaging: A Very High-Resolution View of the Whole Mouse

    PubMed Central

    Roy, Debashish; Steyer, Grant J.; Gargesha, Madhusudhana; Stone, Meredith E.; Wilson, David L.

    2009-01-01

    We developed the Case Cryo-imaging system that provides information rich, very high-resolution, color brightfield, and molecular fluorescence images of a whole mouse using a section-and-image block-face imaging technology. The system consists of a mouse-sized, motorized cryo-microtome with special features for imaging, a modified brightfield/fluorescence microscope, and a robotic xyz imaging system positioner, all of which is fully automated by a control system. Using the robotic system, we acquired microscopic tiled images at a pixel size of 15.6 µm over the block face of a whole mouse sectioned at 40 µm, with a total data volume of 55 GB. Viewing 2D images at multiple resolutions, we identified small structures such as cardiac vessels, muscle layers, villi of the small intestine, the optic nerve, and layers of the eye. Cryo-imaging was also suitable for imaging embryo mutants in 3D. A mouse, in which enhanced green fluorescent protein was expressed under gamma actin promoter in smooth muscle cells, gave clear 3D views of smooth muscle in the urogenital and gastrointestinal tracts. With cryo-imaging, we could obtain 3D vasculature down to 10 µm, over very large regions of mouse brain. Software is fully automated with fully programmable imaging/sectioning protocols, email notifications, and automatic volume visualization. With a unique combination of field-of-view, depth of field, contrast, and resolution, the Case Cryo-imaging system fills the gap between whole animal in vivo imaging and histology. PMID:19248166

  10. An Image-Based Technique for 3d Building Reconstruction Using Multi-View Uav Images

    NASA Astrophysics Data System (ADS)

    Alidoost, F.; Arefi, H.

    2015-12-01

    Nowadays, with the development of urban areas, the automatic reconstruction of buildings, as important objects among complex city structures, has become a challenging topic in computer vision and photogrammetric research. In this paper, the capability of multi-view Unmanned Aerial Vehicle (UAV) images is examined to provide a 3D model of complex building façades using an efficient image-based modelling workflow. The main steps of this work include pose estimation, point cloud generation, and 3D modelling. After improving the initial values of the interior and exterior parameters in the first step, an efficient image matching technique such as Semi-Global Matching (SGM) is applied to the UAV images and a dense point cloud is generated. Then, a mesh model of the points is calculated using Delaunay 2.5D triangulation and refined to obtain an accurate model of the building. Finally, a texture is assigned to the mesh in order to create a realistic 3D model. The resulting model provides sufficient detail of the building based on visual assessment.
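
    The 2.5D meshing step can be sketched as follows: triangulate the points in the horizontal plane and keep their original heights; the refinement and texturing stages described above are omitted, and this is not the authors' code.

    ```python
    # Minimal Delaunay 2.5D triangulation of a dense point cloud.
    import numpy as np
    from scipy.spatial import Delaunay

    def mesh_25d(points_xyz):
        """points_xyz: (N, 3) point cloud; returns (vertices, triangle indices)."""
        tri = Delaunay(points_xyz[:, :2])   # triangulate on x, y only
        return points_xyz, tri.simplices    # each simplex row indexes 3 vertices
    ```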

  11. 4. General view of looking glass aircraft in the project ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    4. General view of looking glass aircraft in the project looking glass historic district. View to west. - Offutt Air Force Base, Looking Glass Airborne Command Post, Looking Glass Avenue between Comstat Drive & Nightwatch Avenue, Offutt Air Force Base, Bellevue, Sarpy County, NE

  12. 3. General view of looking glass aircraft in the project ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. General view of looking glass aircraft in the project looking glass historic district. View to west. - Offutt Air Force Base, Looking Glass Airborne Command Post, Looking Glass Avenue between Comstat Drive & Nightwatch Avenue, Offutt Air Force Base, Bellevue, Sarpy County, NE

  13. 5. General view of looking glass aircraft in the project ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    5. General view of looking glass aircraft in the project looking glass historic district. View to north. - Offutt Air Force Base, Looking Glass Airborne Command Post, Looking Glass Avenue between Comstat Drive & Nightwatch Avenue, Offutt Air Force Base, Bellevue, Sarpy County, NE

  14. 2. General view of looking glass aircraft in the project ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. General view of looking glass aircraft in the project looking glass historic district. View to south. - Offutt Air Force Base, Looking Glass Airborne Command Post, Looking Glass Avenue between Comstat Drive & Nightwatch Avenue, Offutt Air Force Base, Bellevue, Sarpy County, NE

  15. 1. General view of looking glass aircraft in the project ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. General view of looking glass aircraft in the project looking glass historic district. View to southeast. - Offutt Air Force Base, Looking Glass Airborne Command Post, Looking Glass Avenue between Comstat Drive & Nightwatch Avenue, Offutt Air Force Base, Bellevue, Sarpy County, NE

  16. Non-destructive readout of 2D and 3D dose distributions using a disk-type radiophotoluminescent glass plate

    NASA Astrophysics Data System (ADS)

    Kurobori, T.; Maruyama, Y.; Miyamoto, Y.; Sasaki, T.; Nanto, H.

    2015-04-01

    A novel disk-type detector for X-ray two- and three-dimensional (2D, 3D) dose distributions has been developed using atomic-scale defects, such as radiation-induced silver (Ag)-related species in a Ag-activated phosphate glass, as the minimum luminescent units. This luminescent detector is based on the radiophotoluminescence (RPL) phenomenon. Accurate accumulated dose distributions with a high spatial resolution on the order of microns over large areas, a wide dynamic range covering three orders of magnitude and non-destructive readout were successfully demonstrated for the first time using a disk-type glass plate with a 100-mm diameter and a 1-mm thickness. In addition, the combination of a confocal optical detection system with a transparent glass detector enables 3D reconstruction by piling up dose images at different depths within the material.

  17. 3D view weighted cone-beam backprojection reconstruction for digital tomosynthesis

    NASA Astrophysics Data System (ADS)

    Li, Baojun; Avinash, Gopal; Claus, Bernhard; Metz, Stephen

    2007-03-01

    Cone-beam filtered backprojection (CB-FBP) is one of the major reconstruction algorithms for digital tomosynthesis. In conventional FBP, the photon fluxes in the projections are evenly distributed along the X-ray beam. Due to the limited view angles and finite detector dimensions, this uniform weighting causes non-uniformity in the reconstructed images and leads to cone-beam artifacts. In this paper, we propose a 3-D view weighting technique in combination with FBP to combat this artifact. An anthropomorphic chest phantom was placed in the supine position to enable imaging of the chest PA view. During a linear sweep of the X-ray source, 41 X-ray images at different projection angles were acquired with the following protocol: 120 kVp, 160 mA, and 0.64 mAs/exposure. To create the worst-case scenario for testing, we chose 60 degrees as the sweep angle in this exam. The data set was reconstructed with conventional CB-FBP and with the proposed algorithm under the same parameters: FOV = 40x40 cm^2 and slice thickness = 4 mm. Three reconstructed slices were randomly selected for review, with slice heights of 10.5, 14.5 and 17.5 cm. Results were assessed qualitatively by human observers and quantitatively through ROI measurements. In each slice, three pre-defined ROIs (50x50 pixels) are extracted and measured: ROIs A and B lie in areas where the artifact is more pronounced, and ROI C lies in a relatively artifact-free area. The non-uniformity error was defined as the ratio MEAN(AVG(C-A), AVG(C-B)) / AVG(C). The average non-uniformity error over the three test images was 0.428 without view weighting and only 0.041 with view weighting.
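
    The non-uniformity metric quoted above can be written as a small helper, assuming the three 50x50-pixel ROI arrays have already been extracted from a reconstructed slice.

    ```python
    # Non-uniformity error: MEAN(AVG(C-A), AVG(C-B)) / AVG(C).
    import numpy as np

    def non_uniformity_error(roi_a, roi_b, roi_c):
        avg_a, avg_b, avg_c = roi_a.mean(), roi_b.mean(), roi_c.mean()
        return np.mean([avg_c - avg_a, avg_c - avg_b]) / avg_c
    ```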

  18. Assessment of next-best-view algorithms performance with various 3D scanners and manipulator

    NASA Astrophysics Data System (ADS)

    Karaszewski, M.; Adamczyk, M.; Sitnik, R.

    2016-09-01

    The problem of calculating three dimensional (3D) sensor position (and orientation) during the digitization of real-world objects (called next best view planning or NBV) has been an active topic of research for over 20 years. While many solutions have been developed, it is hard to compare their quality based only on the exemplary results presented in papers. We implemented 13 of the most popular NBV algorithms and evaluated their performance by digitizing five objects of various properties, using three measurement heads with different working volumes mounted on a 6-axis robot with a rotating table for placing objects. The results obtained for the 13 algorithms were then compared based on four criteria: the number of directional measurements, digitization time, total positioning distance, and surface coverage required to digitize test objects with available measurement heads.

  19. View showing rear of looking glass aircraft on operational apron ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    View showing rear of looking glass aircraft on operational apron with nose dock hangar in background. View to northeast - Offutt Air Force Base, Looking Glass Airborne Command Post, Operational & Hangar Access Aprons, Spanning length of northeast half of Project Looking Glass Historic District, Bellevue, Sarpy County, NE

  20. A 3D view of the outflow in the Orion Molecular Cloud 1 (OMC-1)

    NASA Astrophysics Data System (ADS)

    Nissen, H. D.; Cunningham, N. J.; Gustafsson, M.; Bally, J.; Lemaire, J.-L.; Favre, C.; Field, D.

    2012-04-01

    Context. Stars whose mass is an order of magnitude greater than the Sun play a prominent role in the evolution of galaxies, exploding as supernovae, triggering bursts of star formation and spreading heavy elements about their host galaxies. A fundamental aspect of star formation is the creation of an outflow. The fast outflow emerging from a region associated with massive star formation in the Orion Molecular Cloud 1 (OMC-1), located behind the Orion Nebula, appears to have been set in motion by an explosive event. Aims: We study the structure and dynamics of outflows in OMC-1. We combine radial velocity and proper motion data for near-IR emission of molecular hydrogen to obtain the first 3-dimensional (3D) structure of the OMC-1 outflow. Our work illustrates a new diagnostic tool for studies of star formation that will be exploited in the near future with the advent of high spatial resolution spectro-imaging in particular with data from the Atacama Large Millimeter Array (ALMA). Methods: We used published radial and proper motion velocities obtained from the shock-excited vibrational emission in the H2 v = 1-0 S(1) line at 2.122 μm obtained with the GriF instrument on the Canada-France-Hawaii Telescope, the Apache Point Observatory, the Anglo-Australian Observatory, and the Subaru Telescope. Results: These data give the 3D velocity of ejecta yielding a 3D reconstruction of the outflows. This allows one to view the material from different vantage points in space giving considerable insight into the geometry. Our analysis indicates that the ejection occurred ≲720 years ago from a distorted ring-like structure of ~15″ (6000 AU) in diameter centered on the proposed point of close encounter of the stars BN, source I and maybe also source n. We propose a simple model involving curvature of shock trajectories in magnetic fields through which the origin of the explosion and the center defined by extrapolated proper motions of BN, I and n may be brought into spatial

  1. Fabrication and characterization of strontium incorporated 3-D bioactive glass scaffolds for bone tissue from biosilica.

    PubMed

    Özarslan, Ali Can; Yücel, Sevil

    2016-11-01

    Bioactive glass scaffolds that contain silica are highly viable biomaterials as bone supports for bone tissue engineering due to their bioactive behaviour in simulated body fluid (SBF). In the human body, these materials promote inorganic bone structure formation owing to the particular ratio of elements such as silicon (Si), calcium (Ca), sodium (Na) and phosphorus (P), and doping strontium (Sr) into the scaffold structure further increases their bioactive behaviour. In this study, bioactive glass scaffolds were produced using rice hull ash (RHA) silica and commercial silica-based bioactive glasses. The structural properties of the scaffolds, such as pore size and porosity, as well as their bioactive behaviour, were investigated. The results showed that undoped and Sr-doped RHA silica-based bioactive glass scaffolds have better bioactivity than commercial silica-based bioactive glass scaffolds. Moreover, undoped and Sr-doped RHA silica-based bioactive glass scaffolds could be used instead of their undoped and Sr-doped commercial silica-based counterparts for bone regeneration applications. Scaffolds produced from undoped or Sr-doped RHA silica have high potential to form new bone at bone defect sites in tissue engineering. PMID:27524030

  2. Construction of Extended 3D Field of Views of the Internal Bladder Wall Surface: A Proof of Concept

    NASA Astrophysics Data System (ADS)

    Ben-Hamadou, Achraf; Daul, Christian; Soussen, Charles

    2016-09-01

    3D extended field of views (FOVs) of the internal bladder wall facilitate lesion diagnosis, patient follow-up and treatment traceability. In this paper, we propose a 3D image mosaicing algorithm guided by 2D cystoscopic video-image registration for obtaining textured FOV mosaics. In this feasibility study, the registration makes use of data from a 3D cystoscope prototype providing, in addition to each small FOV image, some 3D points located on the surface. This proof of concept shows that textured surfaces can be constructed with minimally modified cystoscopes. The potential of the method is demonstrated on numerical and real phantoms reproducing various surface shapes. Pig and human bladder textures are superimposed on phantoms with known shape and dimensions. These data allow for quantitative assessment of the 3D mosaicing algorithm based on the registration of images simulating bladder textures.

  3. Automatic alignment of standard views in 3D echocardiograms using real-time tracking

    NASA Astrophysics Data System (ADS)

    Orderud, Fredrik; Torp, Hans; Rabben, Stein Inge

    2009-02-01

    In this paper, we present an automatic approach for aligning standard apical and short-axis slices and correcting them for out-of-plane motion in 3D echocardiography. This is enabled by using real-time Kalman tracking to perform automatic left ventricle segmentation using a coupled deformable model, consisting of a left ventricle model as well as structures for the right ventricle and the left ventricle outflow tract. Landmark points from the segmented model are then used to generate standard apical and short-axis slices. The slices are automatically updated after tracking in each frame to correct for out-of-plane motion caused by longitudinal shortening of the left ventricle. Results from a dataset of 35 recordings demonstrate the potential for automating apical slice initialization and dynamic short-axis slices. Apical 4-chamber, 2-chamber and long-axis slices are generated based on an assumption of a fixed angle between the slices, and short-axis slices are generated so that they follow the same myocardial tissue over the entire cardiac cycle. The error compared to manual annotation was 8.4 +/- 3.5 mm for the apex, 3.6 +/- 1.8 mm for the mitral valve and 8.4 +/- 7.4 for the apical 4-chamber view. The high computational efficiency and automatic behavior of the method enable it to operate in real time, potentially during image acquisition.
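
    To make the fixed-angle slice construction concrete, the following minimal sketch (an illustration, not the authors' implementation) builds apical slice-plane normals that all contain the left-ventricular long axis defined by the apex and mitral-valve landmarks, spaced at fixed angles around that axis; the 60-degree spacing and the reference direction are assumed conventions.

        import numpy as np

        def apical_slice_normals(apex, mitral_center, reference_dir,
                                 angles_deg=(0.0, 60.0, 120.0)):
            """Unit normals of apical slice planes that contain the LV long axis
            (apex -> mitral valve center) and are separated by fixed angles."""
            axis = mitral_center - apex
            axis = axis / np.linalg.norm(axis)
            # in-plane reference direction orthogonal to the long axis
            # (reference_dir must not be parallel to the axis)
            u = reference_dir - np.dot(reference_dir, axis) * axis
            u = u / np.linalg.norm(u)
            v = np.cross(axis, u)
            normals = [np.cos(a) * u + np.sin(a) * v for a in np.radians(angles_deg)]
            return np.vstack(normals)   # each plane passes through the apex point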

  4. 3-D view of erosional scars on U. S. Mid-Atlantic continental margin

    SciTech Connect

    Farre, J.A.; Ryan, W.B.

    1985-06-01

    Deep-towed side-scan and bathymetric data have been merged to present a 3-D view of the lower continental slope and upper continental rise offshore Atlantic City, New Jersey. Carteret Canyon narrows and becomes nearly stranded on the lower slope where it leads into one of two steep-walled, flat-floored erosional chutes. The floors of the chutes, cut into semilithified middle Eocene siliceous limestones, are marked by downslope-trending grooves. The grooves are interpreted to be gouge marks formed during rock and sediment slides. On the uppermost rise, beneath the chutes, is a 40-m deep depression. The origin of the depression is believed to be related to material moving downslope and encountering the change in gradient at the slope/rise boundary. Downslope of the depression are channels, trails, and allochthonous blocks. The lack of significant post-early Miocene deposits implies that the lower slope offshore New Jersey has yet to reach a configuration conducive to sediment accumulation. The age of erosion on the lower slope apparently ranges from late Eocene-early Miocene to the recent geologic past.

  5. Beat the diffraction limit in 3D direct laser writing in photosensitive glass.

    PubMed

    Bellec, Matthieu; Royon, Arnaud; Bousquet, Bruno; Bourhis, Kevin; Treguer, Mona; Cardinal, Thierry; Richardson, Martin; Canioni, Lionel

    2009-06-01

    Three-dimensional (3D) femtosecond laser direct structuring in transparent materials is widely used for photonic applications. However, the structure size is limited by optical diffraction. Here we report on a direct laser writing technique that produces subwavelength nanostructures independently of the experimental limiting factors. We demonstrate 3D nanostructures of arbitrary patterns with feature sizes down to 80 nm, less than one tenth of the laser processing wavelength. Its ease of implementation for novel nanostructuring, together with its accompanying high precision, will open new opportunities for the fabrication of nanostructures for plasmonic and photonic devices and for applications in metamaterials. PMID:19506684

  6. Incremental Multi-view 3D Reconstruction Starting from Two Images Taken by a Stereo Pair of Cameras

    NASA Astrophysics Data System (ADS)

    El hazzat, Soulaiman; Saaidi, Abderrahim; Karam, Antoine; Satori, Khalid

    2015-03-01

    In this paper, we present a new method for multi-view 3D reconstruction based on the use of a binocular stereo vision system, constituted of two unattached cameras, to initialize the reconstruction process. Afterwards, the second camera of the stereo vision system (characterized by varying parameters) moves to capture more images at different times, which are used to obtain an almost complete 3D reconstruction. The first two projection matrices are estimated by using a 3D pattern with known properties. After that, 3D scene points are recovered by triangulation of the matched interest points between these two images. The proposed approach is incremental. At each insertion of a new image, the camera projection matrix is estimated using the 3D information already calculated, and new 3D points are recovered by triangulation from the result of matching interest points between the inserted image and the previous one. For the refinement of the new projection matrix and the new 3D points, a local bundle adjustment is performed. At first, all projection matrices are estimated, the matches between consecutive images are detected and a Euclidean sparse 3D reconstruction is obtained. Then, to increase the number of matches and obtain a denser reconstruction, the match propagation algorithm, which is well suited to this type of camera movement, is applied to the pairs of consecutive images. The experimental results show the power and robustness of the proposed approach.
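
    The triangulation step mentioned above can be illustrated with a standard linear (DLT) triangulation of a single matched point from two projection matrices; this is a generic textbook sketch, not the authors' code.

        import numpy as np

        def triangulate_point(P1, P2, x1, x2):
            """Linear (DLT) triangulation of one 3D point from two views.
            P1, P2: 3x4 projection matrices; x1, x2: matched (u, v) pixel coordinates."""
            A = np.vstack([
                x1[0] * P1[2] - P1[0],
                x1[1] * P1[2] - P1[1],
                x2[0] * P2[2] - P2[0],
                x2[1] * P2[2] - P2[1],
            ])
            _, _, Vt = np.linalg.svd(A)
            X = Vt[-1]
            return X[:3] / X[3]   # inhomogeneous 3D point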

  7. Single view-based 3D face reconstruction robust to self-occlusion

    NASA Astrophysics Data System (ADS)

    Lee, Youn Joo; Lee, Sung Joo; Park, Kang Ryoung; Jo, Jaeik; Kim, Jaihie

    2012-12-01

    The state-of-the-art 3D morphable model (3DMM) is widely used for 3D face reconstruction from a single image. However, this method has a high computational cost, and hence a simplified 3D morphable model (S3DMM) was proposed as an alternative. Unlike the original 3DMM, S3DMM uses only a sparse 3D facial shape and therefore incurs a lower computational cost. However, this method is vulnerable to self-occlusion due to head rotation. Therefore, we propose a solution to the self-occlusion problem in S3DMM-based 3D face reconstruction. This research is novel compared with previous works in the following three respects. First, self-occlusion of the input face is detected automatically by estimating the head pose using a cylindrical head model. Second, a 3D model fitting scheme is designed based on selected visible facial feature points, which facilitates 3D face reconstruction without any effect from self-occlusion. Third, the reconstruction performance is enhanced by using the estimated pose as the initial pose parameter during the 3D model fitting process. The experimental results showed that the self-occlusion detection had high accuracy and that our proposed method delivered a noticeable improvement in 3D face reconstruction performance compared with previous methods.

  8. Effect of 3d-transition metal doping on the shielding behavior of barium borate glasses: a spectroscopic study.

    PubMed

    ElBatal, H A; Abdelghany, A M; Ghoneim, N A; ElBatal, F H

    2014-12-10

    UV-visible and FT infrared spectra were measured for the prepared samples before and after gamma irradiation. The undoped base barium borate glass of composition (40 mol.% BaO-60 mol.% B2O3) reveals strong charge-transfer UV absorption bands, which are related to unavoidable trace iron impurities (Fe(3+)) within the chemical raw materials. 3d transition metal (TM)-doped glasses exhibit extra characteristic absorption bands due to each TM in its specific valence or coordination state. The optical spectra show that the TM ions generally favor the higher valence or tetrahedral coordination state in the barium borate host glass. Infrared absorption spectra of all prepared glasses reveal the appearance of both triangular BO3 units and tetrahedral BO4 units through their characteristic vibrational modes, and the TM ions cause only minor effects because of the low doping level introduced (0.2%). Gamma irradiation of the undoped barium borate glass increases the intensity of the UV absorption together with the generation of an induced broad visible band at about 580 nm. These changes are correlated with suggested photochemical reactions of trace iron impurities together with the generation of positive hole centers (BHC or OHC) within the visible region through electrons and positive holes generated during the irradiation process. PMID:24983922

  9. Diffraction effects incorporated design of a parallax barrier for a high-density multi-view autostereoscopic 3D display.

    PubMed

    Yoon, Ki-Hyuk; Ju, Heongkyu; Kwon, Hyunkyung; Park, Inkyu; Kim, Sung-Kyu

    2016-02-22

    We present the optical characteristics of view images provided by a high-density multi-view autostereoscopic 3D display (HD-MVA3D) with a parallax barrier (PB). Diffraction effects, which become of great importance in such a display system using a PB, are considered in a one-dimensional model of the 3D display, in which light from the display panel pixels is numerically propagated through the PB slits to the viewing zone. The simulation results are then compared with the corresponding experimental measurements and discussed. We demonstrate that the Fresnel number can be used as the main parameter for view-image quality evaluation to determine the PB slit aperture giving the best performance of the display system. It is revealed that a set of display parameters giving a Fresnel number of ∼ 0.7 offers maximized brightness of the view images, while a Fresnel number of 0.4 ∼ 0.5 offers minimized image crosstalk. The compromise between brightness and crosstalk enables optimization of their relative magnitudes and leads to the choice of a display parameter set for the HD-MVA3D with a PB that satisfies the condition where the Fresnel number lies between 0.4 and 0.7. PMID:26907057
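
    For reference, under one common convention the Fresnel number is N = a^2 / (lambda L), where a is the slit half-aperture, lambda the wavelength, and L the relevant propagation distance in the one-dimensional model (for a PB display, roughly the barrier-to-panel gap). The sketch below inverts this relation to pick a slit aperture for a target N in the 0.4-0.7 range quoted above; the convention and the numerical example are assumptions, not values from the paper.

        import math

        def fresnel_number(half_aperture_m, wavelength_m, distance_m):
            """Fresnel number N = a^2 / (lambda * L) for a slit of half-width a."""
            return half_aperture_m ** 2 / (wavelength_m * distance_m)

        def half_aperture_for(n_target, wavelength_m, distance_m):
            """Slit half-aperture that yields a target Fresnel number."""
            return math.sqrt(n_target * wavelength_m * distance_m)

        # Illustrative numbers only: green light (550 nm), a 5 mm gap, N = 0.55
        a = half_aperture_for(0.55, 550e-9, 5e-3)   # about 39 micrometres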

  10. Stereo 3-D Imagery Uses for Definition of Geologic Structures and Geomorphic Features (Anaglyph colored glasses employed)

    NASA Astrophysics Data System (ADS)

    Hicks, B. G.; Fuente, J. D.

    2008-12-01

    Recently completed projects incorporating TopoMorpher* digital images as adjuncts to commonly employed tools have emphasized the distinct advantage gained with STEREO 3-D DIGITAL IMAGERY. Manipulating scale, relief (four types of digital shading), sun angle, direction of viewing, tilt of scene, etc. -- to produce differing views of the same terrain -- aids in identifying, tracing, and interpreting ground-surface anomalies. *TopoMorpher is a digital software product of Eighteen Software (18 software.com). The advantage of stereo 3-D views combined with digital removal of the vegetation that blocks interpretation (commonly called 'bare earth/naked' views) cannot be over-emphasized. The TopoMorpher program creates scenes transferable to disk for printing at any size; scenes can also be shown with a computer projector, which allows large displays and eases discussion for groups. The examples include (1) fault systems for targeting water well locations in bedrock and (2) delineation of debris slide and avalanche terrain. Combining geologic mapping and spring locations with stereo 3-D TopoMorpher tracing of fault lineaments has allowed targeting of water-well drilling sites. Selection of geophysical study areas for well siting has been simplified. Stereo 3-D TopoMorpher has a specific "relief/terrain setting" to define potential failure sites by producing detailed colored slope maps keyed to field-data-derived parameters. Posters display individual project images and large-scale overviews for identifying unusual major terrain features. Images at scales using 10 and 30 meter digital data as well as Lidar (< 1 meter) will be shown.

  11. Motion-parallax smoothness of short-, medium-, and long-distance 3D image presentation using multi-view displays.

    PubMed

    Takaki, Yasuhiro; Urano, Yohei; Nishio, Hiroyuki

    2012-11-19

    The discontinuity of motion parallax offered by multi-view displays was assessed by subjective evaluation. A super multi-view head-up display, which provides dense viewing points and has short-, medium-, and long-distance display ranges, was used. The results showed that discontinuity perception depended on the ratio of the image shift between adjacent parallax images to the pixel pitch of the three-dimensional (3D) images, and on the crosstalk between viewing points. When the ratio was less than 0.2 and the crosstalk was small, the discontinuity was not perceived. When the ratio was greater than 1 and the crosstalk was small, the discontinuity was perceived and the resolution of the 3D images decreased by a factor of two. When the crosstalk was large, the discontinuity was not perceived even when the ratio was 1 or 2; however, the resolution decreased by a factor of two or more. PMID:23187574
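
    The reported criterion can be expressed directly as a small helper; the thresholds come from the abstract, while the function itself is only an illustration.

        def parallax_smoothness(image_shift, pixel_pitch, crosstalk_is_small):
            """Classify perceived motion-parallax smoothness from the ratio of the
            inter-view image shift to the 3D-image pixel pitch (same length units)."""
            ratio = image_shift / pixel_pitch
            if crosstalk_is_small:
                if ratio < 0.2:
                    return "smooth: discontinuity not perceived"
                if ratio > 1.0:
                    return "discontinuity perceived; 3D resolution roughly halved"
                return "intermediate regime"
            # large crosstalk masks the discontinuity but costs resolution
            return "no perceived discontinuity, but resolution reduced two-fold or more"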

  12. Multi-view alignment with database of features for an improved usage of high-end 3D scanners

    NASA Astrophysics Data System (ADS)

    Bonarrigo, Francesco; Signoroni, Alberto; Leonardi, Riccardo

    2012-12-01

    The usability of high-precision and high-resolution 3D scanners is of crucial importance due to the increasing demand for 3D data in both professional and general-purpose applications. Simplified, intuitive and rapid object modeling requires effective and automated alignment pipelines capable of tracing back each independently acquired range image of the scanned object into a common reference system. To this end, we propose a reliable and fast feature-based multiple-view alignment pipeline that allows interactive registration of multiple views according to an unchained acquisition procedure. A robust alignment of each new view is estimated with respect to the previously aligned data through fast extraction, representation and matching of feature points detected in overlapping areas from different views. The proposed pipeline guarantees a highly reliable alignment of dense range image datasets on a variety of objects in a few seconds per million points.

  13. Development of a Top-View Numeric Coding Teaching-Learning Trajectory within an Elementary Grades 3-D Visualization Design Research Project

    ERIC Educational Resources Information Center

    Sack, Jacqueline J.

    2013-01-01

    This article explicates the development of top-view numeric coding of 3-D cube structures within a design research project focused on 3-D visualization skills for elementary grades children. It describes children's conceptual development of 3-D cube structures using concrete models, conventional 2-D pictures and abstract top-view numeric…

  14. Extended field-of-view and increased-signal 3D holographic illumination with time-division multiplexing

    PubMed Central

    Yang, Samuel J.; Allen, William E.; Kauvar, Isaac; Andalman, Aaron S.; Young, Noah P.; Kim, Christina K.; Marshel, James H.; Wetzstein, Gordon; Deisseroth, Karl

    2016-01-01

    Phase spatial light modulators (SLMs) are widely used for generating multifocal three-dimensional (3D) illumination patterns, but these are limited to a field of view constrained by the pixel count or size of the SLM. Further, with two-photon SLM-based excitation, increasing the number of focal spots penalizes the total signal linearly—requiring more laser power than is available or can be tolerated by the sample. Here we analyze and demonstrate a method of using galvanometer mirrors to time-sequentially reposition multiple 3D holograms, both extending the field of view and increasing the total time-averaged two-photon signal. We apply our approach to 3D two-photon in vivo neuronal calcium imaging. PMID:26699047

  15. Extended field-of-view and increased-signal 3D holographic illumination with time-division multiplexing.

    PubMed

    Yang, Samuel J; Allen, William E; Kauvar, Isaac; Andalman, Aaron S; Young, Noah P; Kim, Christina K; Marshel, James H; Wetzstein, Gordon; Deisseroth, Karl

    2015-12-14

    Phase spatial light modulators (SLMs) are widely used for generating multifocal three-dimensional (3D) illumination patterns, but these are limited to a field of view constrained by the pixel count or size of the SLM. Further, with two-photon SLM-based excitation, increasing the number of focal spots penalizes the total signal linearly--requiring more laser power than is available or can be tolerated by the sample. Here we analyze and demonstrate a method of using galvanometer mirrors to time-sequentially reposition multiple 3D holograms, both extending the field of view and increasing the total time-averaged two-photon signal. We apply our approach to 3D two-photon in vivo neuronal calcium imaging. PMID:26699047

  16. Direct laser-writing of ferroelectric single-crystal waveguide architectures in glass for 3D integrated optics.

    PubMed

    Stone, Adam; Jain, Himanshu; Dierolf, Volkmar; Sakakura, Masaaki; Shimotsuma, Yasuhiko; Miura, Kiyotaka; Hirao, Kazuyuki; Lapointe, Jerome; Kashyap, Raman

    2015-01-01

    Direct three-dimensional laser writing of amorphous waveguides inside glass has been studied intensely as an attractive route for fabricating photonic integrated circuits. However, achieving essential nonlinear-optic functionality in such devices will also require the ability to create high-quality single-crystal waveguides. Femtosecond laser irradiation is capable of crystallizing glass in 3D, but producing optical-quality single-crystal structures suitable for waveguiding poses unique challenges that are unprecedented in the field of crystal growth. In this work, we use a high angular-resolution electron diffraction method to obtain the first conclusive confirmation that uniform single crystals can be grown inside glass by femtosecond laser writing under optimized conditions. We confirm waveguiding capability and present the first quantitative measurement of power transmission through a laser-written crystal-in-glass waveguide, yielding loss of 2.64 dB/cm at 1530 nm. We demonstrate uniformity of the crystal cross-section down the length of the waveguide and quantify its birefringence. Finally, as a proof-of-concept for patterning more complex device geometries, we demonstrate the use of dynamic phase modulation to grow symmetric crystal junctions with single-pass writing. PMID:25988599

  17. Direct laser-writing of ferroelectric single-crystal waveguide architectures in glass for 3D integrated optics

    PubMed Central

    Stone, Adam; Jain, Himanshu; Dierolf, Volkmar; Sakakura, Masaaki; Shimotsuma, Yasuhiko; Miura, Kiyotaka; Hirao, Kazuyuki; Lapointe, Jerome; Kashyap, Raman

    2015-01-01

    Direct three-dimensional laser writing of amorphous waveguides inside glass has been studied intensely as an attractive route for fabricating photonic integrated circuits. However, achieving essential nonlinear-optic functionality in such devices will also require the ability to create high-quality single-crystal waveguides. Femtosecond laser irradiation is capable of crystallizing glass in 3D, but producing optical-quality single-crystal structures suitable for waveguiding poses unique challenges that are unprecedented in the field of crystal growth. In this work, we use a high angular-resolution electron diffraction method to obtain the first conclusive confirmation that uniform single crystals can be grown inside glass by femtosecond laser writing under optimized conditions. We confirm waveguiding capability and present the first quantitative measurement of power transmission through a laser-written crystal-in-glass waveguide, yielding loss of 2.64 dB/cm at 1530 nm. We demonstrate uniformity of the crystal cross-section down the length of the waveguide and quantify its birefringence. Finally, as a proof-of-concept for patterning more complex device geometries, we demonstrate the use of dynamic phase modulation to grow symmetric crystal junctions with single-pass writing. PMID:25988599

  18. Optimization of composition, structure and mechanical strength of bioactive 3-D glass-ceramic scaffolds for bone substitution.

    PubMed

    Baino, Francesco; Ferraris, Monica; Bretcanu, Oana; Verné, Enrica; Vitale-Brovarone, Chiara

    2013-03-01

    Fabrication of 3-D highly porous, bioactive, and mechanically competent scaffolds represents a significant challenge of bone tissue engineering. In this work, Bioglass®-derived glass-ceramic scaffolds actually fulfilling this complex set of requirements were successfully produced through the sponge replication method. Scaffold processing parameters and sintering treatment were carefully designed in order to obtain final porous bodies with pore content (porosity above 70 vol.%), trabecular architecture and mechanical properties (compressive strength up to 3 MPa) analogous to those of cancellous bone. The influence of the Bioglass® particle size on the structural and mechanical features of the sintered scaffolds was considered and discussed. The relationship between porosity and mechanical strength was investigated and modeled. The three-dimensional architecture, porosity, mechanical strength and in vitro bioactivity of the optimized Bioglass®-derived scaffolds were also compared to those of CEL2-based glass-ceramic scaffolds (CEL2 is an experimental bioactive glass originally developed by the authors at Politecnico di Torino) fabricated by the same processing technique, in an attempt at understanding the role of different bioactive glass compositions on the major features of scaffolds prepared by the same method. PMID:22207602

  19. Guided Evolution of Bulk Metallic Glass Nanostructures: A Platform for Designing 3D Electrocatalytic Surfaces.

    PubMed

    Doubek, Gustavo; Sekol, Ryan C; Li, Jinyang; Ryu, Won-Hee; Gittleson, Forrest S; Nejati, Siamak; Moy, Eric; Reid, Candy; Carmo, Marcelo; Linardi, Marcelo; Bordeenithikasem, Punnathat; Kinser, Emily; Liu, Yanhui; Tong, Xiao; Osuji, Chinedum O; Schroers, Jan; Mukherjee, Sundeep; Taylor, André D

    2016-03-01

    Electrochemical devices such as fuel cells, electrolyzers, lithium-air batteries, and pseudocapacitors are expected to play a major role in energy conversion/storage in the near future. Here, it is demonstrated how desirable bulk metallic glass compositions can be obtained using a combinatorial approach and it is shown that these alloys can serve as a platform technology for a wide variety of electrochemical applications through several surface modification techniques. PMID:26689722

  20. Shape measurement by a multi-view methodology based on the remote tracking of a 3D optical scanner

    NASA Astrophysics Data System (ADS)

    Barone, Sandro; Paoli, Alessandro; Viviano Razionale, Armando

    2012-03-01

    Full-field optical techniques can be reliably used for 3D measurements of complex shapes by multi-view processes, which require the computation of transformation parameters relating different views to a common reference system. Although several multi-view approaches have been proposed, the alignment process is still the crucial step of a shape reconstruction. In this paper, a methodology to automatically align 3D views has been developed by integrating a stereo vision system and a full-field optical scanner. In particular, the stereo vision system is used to remotely track the optical scanner within a working volume. The tracking system uses stereo images to detect the 3D coordinates of retro-reflective infrared markers rigidly connected to the scanner. Stereo correspondences are established by a robust methodology based on combining the epipolar geometry with an image spatial transformation constraint. The proposed methodology has been validated by experimental tests regarding both the evaluation of the measurement accuracy and the 3D reconstruction of an industrial shape.

  1. Effectiveness of Applying 2D Static Depictions and 3D Animations to Orthographic Views Learning in Graphical Course

    ERIC Educational Resources Information Center

    Wu, Chih-Fu; Chiang, Ming-Chin

    2013-01-01

    This study provides experimental results as an educational reference for instructors, to help students find a better way to learn orthographic views in a graphical course. A visual experiment was held to explore the comprehensive differences between 2D static and 3D animation object features; the goal was to reduce the possible misunderstanding…

  2. Fibroblasts Lead the Way: A Unified View of 3D Cell Motility.

    PubMed

    Petrie, Ryan J; Yamada, Kenneth M

    2015-11-01

    Primary human fibroblasts are remarkably adaptable, able to migrate in differing types of physiological 3D tissue and on rigid 2D tissue culture surfaces. The crawling behavior of these and other vertebrate cells has been studied intensively, which has helped generate the concept of the cell motility cycle as a comprehensive model of 2D cell migration. However, this model fails to explain how cells force their large nuclei through the confines of a 3D matrix environment and why primary fibroblasts can use more than one mechanism to move in 3D. Recent work shows that the intracellular localization of myosin II activity is governed by cell-matrix interactions to both force the nucleus through the extracellular matrix (ECM) and dictate the type of protrusions used to migrate in 3D. PMID:26437597

  3. From pixel to voxel: a deeper view of biological tissue by 3D mass spectral imaging

    PubMed Central

    Ye, Hui; Greer, Tyler; Li, Lingjun

    2011-01-01

    Three-dimensional mass spectral imaging (3D MSI) is an exciting field that grants the ability to study a broad mass range of molecular species, ranging from small molecules to large proteins, by creating lateral and vertical distribution maps of select compounds. Although the general premise behind 3D MSI is simple, factors such as the choice of ionization method, sample handling, software considerations and many others must be taken into account for the successful design of a 3D MSI experiment. This review provides a brief overview of ionization methods, sample preparation, software types and technological advancements driving 3D MSI research of a wide range of low- to high-mass analytes. Future perspectives in this field are also provided, concluding with the promising and ever-growing applications of this powerful analytical tool in the biomedical field as it continues to develop. PMID:21320052

  4. A full-parallax 3D display with restricted viewing zone tracking viewer's eye

    NASA Astrophysics Data System (ADS)

    Beppu, Naoto; Yendo, Tomohiro

    2015-03-01

    Three-dimensional (3D) vision has now become widely known as a familiar imaging technique. 3D displays have been put into practical use in various fields, such as entertainment and medicine, and the development of 3D display technology will play an important role in a wide range of fields. There are various methods for displaying 3D images; we focused on one that reproduces light rays. This method needs many viewpoint images to achieve full parallax, because it displays a different viewpoint image depending on the viewing position. We proposed to reduce wasteful rays by limiting the projector's rays to the area around the viewer only, using a spinning mirror, and to increase the effectiveness of the display device in order to achieve a full-parallax 3D display. The proposed method uses viewer eye tracking, a high-speed projector, a rotating mirror that tracks the viewer (a spinning mirror), a concave mirror array with different vertical slopes arranged circumferentially, and a cylindrical mirror. In simulation, we confirmed the scanning range and the locus of the movement of the rays in the horizontal direction. In addition, we confirmed the switching of viewpoints and the convergence performance of rays in the vertical direction. We therefore confirmed that it is possible to realize a full-parallax display.

  5. Hubble and ESO's VLT provide unique 3D views of remote galaxies

    NASA Astrophysics Data System (ADS)

    2009-03-01

    Astronomers have obtained exceptional 3D views of distant galaxies, seen when the Universe was half its current age, by combining the twin strengths of the NASA/ESA Hubble Space Telescope's acute eye and the capacity of ESO's Very Large Telescope to probe the motions of gas in tiny objects. By looking at this unique "history book" of our Universe, at an epoch when the Sun and the Earth did not yet exist, scientists hope to solve the puzzle of how galaxies formed in the remote past. (Associated media: ESO PR Photo 10a/09, "A 3D view of remote galaxies"; ESO PR Photo 10b/09, "Measuring motions in 3 distant galaxies"; ESO PR Video 10a/09, "Galaxies in collision".) For decades, distant galaxies that emitted their light six billion years ago were no more than small specks of light on the sky. With the launch of the Hubble Space Telescope in the early 1990s, astronomers were able to scrutinise the structure of distant galaxies in some detail for the first time. Under the superb skies of Paranal, the VLT's FLAMES/GIRAFFE spectrograph (ESO 13/02) -- which obtains simultaneous spectra from small areas of extended objects -- can now also resolve the motions of the gas in these distant galaxies (ESO 10/06). "This unique combination of Hubble and the VLT allows us to model distant galaxies almost as nicely as we can close ones," says François Hammer, who led the team. "In effect, FLAMES/GIRAFFE now allows us to measure the velocity of the gas at various locations in these objects. This means that we can see how the gas is moving, which provides us with a three-dimensional view of galaxies halfway across the Universe." The team has undertaken the Herculean task of reconstituting the history of about one hundred remote galaxies that have been observed with both Hubble and GIRAFFE on the VLT. The first results are coming in and have already provided useful insights for three galaxies. In one galaxy, GIRAFFE revealed a region full of ionised gas, that is, hot gas composed of atoms that have been stripped of their electrons.

  6. Real-time 3-D shape measurement with composite phase-shifting fringes and multi-view system.

    PubMed

    Tao, Tianyang; Chen, Qian; Da, Jian; Feng, Shijie; Hu, Yan; Zuo, Chao

    2016-09-01

    In recent years, fringe projection has become an established and essential method for dynamic three-dimensional (3-D) shape measurement in different fields such as online inspection and real-time quality control. Numerous high-speed 3-D shape measurement methods have been developed by employing high-speed hardware, minimizing the number of pattern projections, or both. However, dynamic 3-D shape measurement of arbitrarily shaped objects with full sensor resolution, without the necessity of additional pattern projections, is still a big challenge. In this work, we introduce a high-speed 3-D shape measurement technique based on composite phase-shifting fringes and a multi-view system. A geometry constraint is adopted to search for corresponding points independently, without additional images. Meanwhile, by analysing the 3-D position and the main wrapped phase of each corresponding point, pairs with an incorrect 3-D position or a considerable phase difference are effectively rejected. All of the qualified corresponding points are then corrected, and the unique one, as well as the related period order, is selected through the embedded triangular wave. Finally, considering that some points can only be captured by one of the cameras due to occlusions and may therefore have different fringe orders in the two views, a left-right consistency check is employed to eliminate erroneous period orders in this case. Several experiments on both static and dynamic scenes are performed, verifying that our method can achieve a speed of 120 frames per second (fps) with 25-period fringe patterns for fast, dense, and accurate 3-D measurement. PMID:27607632

  7. 3D analysis of thermal and stress evolution during laser cladding of bioactive glass coatings.

    PubMed

    Krzyzanowski, Michal; Bajda, Szymon; Liu, Yijun; Triantaphyllou, Andrew; Mark Rainforth, W; Glendenning, Malcolm

    2016-06-01

    Thermal and strain-stress transient fields during laser cladding of bioactive glass coatings on a Ti6Al4V alloy base were numerically calculated and analysed. The conditions leading to micro-cracking susceptibility of the coating were investigated using finite-element-based modelling supported by experimental results from microscopic investigation of the sample coatings. Consecutive temperature and stress peaks develop within the cladded material as the laser beam moves along its complex trajectory, which can lead to micro-cracking. Preheating the base plate to 500°C allowed the laser power to be decreased and the cooling speed between consecutive temperature peaks to be lowered, thereby contributing to lower cracking susceptibility. The cooling rate during cladding of the second and third layers was lower than during cladding of the first one, thus contributing to improved cracking resistance of the subsequent layers due to progressive accumulation of heat over the process. PMID:26953962

  8. Multi-scale Characterisation of the 3D Microstructure of a Thermally-Shocked Bulk Metallic Glass Matrix Composite.

    PubMed

    Zhang, Wei; Bodey, Andrew J; Sui, Tan; Kockelmann, Winfried; Rau, Christoph; Korsunsky, Alexander M; Mi, Jiawei

    2016-01-01

    Bulk metallic glass matrix composites (BMGMCs) are a new class of metal alloys with significantly increased ductility and impact toughness, resulting from ductile crystalline phases distributed uniformly within the amorphous matrix. However, the 3D structures and morphologies of such composites at the nano- and micrometre scale have never been reported before. We have used high-density electric currents to thermally shock a Zr-Ti based BMGMC to different temperatures, and used X-ray microtomography, FIB-SEM nanotomography and neutron diffraction to reveal the morphologies, compositions, volume fractions and thermal stabilities of the nano- and microstructures. Understanding these is essential for optimizing the design of BMGMCs and developing viable manufacturing methods. PMID:26725519

  9. Multi-scale Characterisation of the 3D Microstructure of a Thermally-Shocked Bulk Metallic Glass Matrix Composite

    PubMed Central

    Zhang, Wei; Bodey, Andrew J.; Sui, Tan; Kockelmann, Winfried; Rau, Christoph; Korsunsky, Alexander M.; Mi, Jiawei

    2016-01-01

    Bulk metallic glass matrix composites (BMGMCs) are a new class of metal alloys with significantly increased ductility and impact toughness, resulting from ductile crystalline phases distributed uniformly within the amorphous matrix. However, the 3D structures and morphologies of such composites at the nano- and micrometre scale have never been reported before. We have used high-density electric currents to thermally shock a Zr-Ti based BMGMC to different temperatures, and used X-ray microtomography, FIB-SEM nanotomography and neutron diffraction to reveal the morphologies, compositions, volume fractions and thermal stabilities of the nano- and microstructures. Understanding these is essential for optimizing the design of BMGMCs and developing viable manufacturing methods. PMID:26725519

  10. Reconstruction of quadratic curves in 3D using two or more perspective views: simulation studies

    NASA Astrophysics Data System (ADS)

    Kumar, Sanjeev; Sukavanam, N.; Balasubramanian, R.

    2006-01-01

    The shapes of many natural and man-made objects have planar and curvilinear surfaces. The images of such curves usually do not have sufficient distinctive features to apply conventional feature-based reconstruction algorithms. In this paper, we describe a method for reconstructing a quadratic curve in 3-D space as the intersection of two cones containing the respective projected curve images. The correspondence between this pair of projections of the curve is assumed to be established in this work. Using least-squares curve fitting, the parameters of the curve in 2-D space are found, and from these the 3-D quadratic curve is reconstructed. Relevant mathematical formulations and analytical solutions for obtaining the equation of the reconstructed curve are given. The results of the described reconstruction methodology are evaluated through simulation studies. This reconstruction methodology is applicable to LBW decisions in cricket, missile path estimation, robotic vision, path planning, etc.
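
    The least-squares conic fit and the back-projection cones described above can be sketched as follows, using homogeneous conic and quadric matrices; this is a generic illustration of the underlying geometry, not the authors' code.

        import numpy as np

        def fit_conic(points_2d):
            """Least-squares fit of a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 to an
            (n x 2) array of image points; returns the symmetric 3x3 conic matrix C
            (defined up to scale) from the smallest singular vector of the design matrix."""
            x, y = points_2d[:, 0], points_2d[:, 1]
            D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
            a, b, c, d, e, f = np.linalg.svd(D)[2][-1]
            return np.array([[a, b / 2, d / 2],
                             [b / 2, c, e / 2],
                             [d / 2, e / 2, f]])

        def backprojection_cone(P, C):
            """Quadric cone of camera rays through the image conic C for a 3x4
            projection matrix P: Q = P^T C P. The 3D quadratic curve is recovered
            as the intersection of the two cones obtained from the two views."""
            return P.T @ C @ P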

  11. View planetary differentiation process through high-resolution 3D imaging

    NASA Astrophysics Data System (ADS)

    Fei, Y.

    2011-12-01

    Core-mantle separation is one of the most important processes in planetary evolution, defining the structure and chemical distribution in the planets. Iron-dominated core materials could migrate through the silicate mantle to the core by efficient liquid-liquid separation and/or by percolation of liquid metal through a solid silicate matrix. We can experimentally simulate these processes to examine the efficiency and timing of core formation and its geochemical signatures. The quantitative measure of the efficiency of percolation is usually the dihedral angle, related to the interfacial energies of the liquid and solid phases. To determine the true dihedral angle at high pressures and temperatures, it is necessary to measure the relative frequency distributions of apparent dihedral angles between the quenched liquid metal and silicate grains for each experiment. Here I present a new imaging technique to visualize the distribution of liquid metal in a silicate matrix in 3D by a combination of focused ion beam (FIB) milling and high-resolution SEM imaging. The 3D volume rendering provides precise determination of the dihedral angle and quantitative measures of volume fraction and connectivity. I have conducted a series of experiments using mixtures of San Carlos olivine and Fe-S (10 wt% S) metal with different metal-silicate ratios, up to 25 GPa and at temperatures above 1800 °C. High-quality 3D volume renderings were reconstructed from FIB serial sectioning and imaging with 10-nm slice thickness and 14-nm image resolution for each quenched sample. The unprecedented spatial resolution at the nano scale allows detailed examination of textural features and precise determination of the dihedral angle as a function of pressure, temperature and composition. The 3D reconstruction also allows direct assessment of connectivity in a multi-phase matrix, providing a new way to investigate the efficiency of metal percolation in a real silicate mantle.
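
    For context, the equilibrium dihedral angle follows from the balance of interfacial energies, gamma_ss = 2 gamma_sl cos(theta/2), and a melt network is commonly taken to remain interconnected at small melt fractions when theta < 60 degrees; the helper below simply evaluates this relation and is illustrative only.

        import numpy as np

        def dihedral_angle_deg(gamma_ss, gamma_sl):
            """Equilibrium dihedral angle (degrees) from the solid-solid and
            solid-liquid interfacial energies: gamma_ss = 2*gamma_sl*cos(theta/2)."""
            ratio = np.clip(gamma_ss / (2.0 * gamma_sl), -1.0, 1.0)
            return np.degrees(2.0 * np.arccos(ratio))

        # theta < 60 degrees implies the liquid metal wets grain edges and can
        # percolate through the solid silicate matrix even at small melt fractions.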

  12. Image-Based Rendering of LOD1 3D City Models for traffic-augmented Immersive Street-view Navigation

    NASA Astrophysics Data System (ADS)

    Brédif, M.

    2013-10-01

    It may be argued that urban areas can now be modeled with sufficient detail for realistic fly-throughs over cities at a reasonable price point. Modeling cities at street level for immersive street-view navigation is, however, still a very expensive (or even impossible) operation if one tries to match the level of detail acquired by street-view mobile-mapping imagery. This paper proposes to leverage the richness of these street-view images with the common availability of nation-wide LOD1 3D city models, using an image-based rendering technique: projective multi-texturing. Such a coarse 3D city model may be used as a lightweight scene proxy of approximate coarse geometry. The images neighboring the interpolated viewpoint are projected onto this scene proxy using their estimated poses and calibrations and blended together according to their relative distance. This enables an immersive navigation within the image dataset that is perfectly equal to - and thus as rich as - the original images when viewed from their viewpoint location, and which degrades gracefully in between viewpoint locations. Beyond proving the applicability of this preprocessing-free computer graphics technique to mobile-mapping images and LOD1 3D city models, our contributions are three-fold. Firstly, image distortion is corrected online on the GPU, preventing an extra image resampling step. Secondly, externally computed binary masks may be used to discard pixels corresponding to moving objects. Thirdly, we propose a shadowmap-inspired technique that prevents, at marginal cost, the projective texturing of surfaces beyond the first, as seen from the projected image viewpoint location. Finally, an augmented visualization application is introduced to showcase the proposed immersive navigation: images are unpopulated of vehicles using externally computed binary masks and repopulated using a 3D visualization of a 2D traffic simulation.

  13. Micro-electrical discharge machining of 3D micro-molds from Pd40Cu30P20Ni10 metallic glass by using laminated 3D micro-electrodes

    NASA Astrophysics Data System (ADS)

    Xu, Bin; Wu, Xiao-yu; Ma, Jiang; Liang, Xiong; Lei, Jian-guo; Wu, Bo; Ruan, Shuang-chen; Wang, Zhen-long

    2016-03-01

    To obtain 3D micro-molds with better surface quality (slight ridges) and mechanical properties, in this paper 3D micro-electrodes were fabricated and applied in micro-electrical discharge machining (micro-EDM) to process Pd40Cu30P20Ni10 metallic glass. First, 100 μm-thick Cu foil was cut to obtain multilayer 2D micro-structures, which were then connected to form 3D micro-electrodes (with feature sizes of less than 1 mm). Second, at a voltage of 80 V, a pulse frequency of 0.2 MHz, a pulse width of 800 ns and a pulse interval of 4200 ns, the 3D micro-electrodes were applied in micro-EDM to process Pd40Cu30P20Ni10 metallic glass, and 3D micro-molds with feature sizes within 1 mm were obtained. Third, scanning electron microscopy, energy dispersive spectroscopy and x-ray diffraction analysis were carried out on the processed results. The analysis indicates that with increasing micro-EDM depth, carbon on the processed surface gradually increased from 0.5% to 5.8%, and the processed surface contained new phases (Ni12P5 and Cu3P).

  14. 3D high-efficiency video coding for multi-view video and depth data.

    PubMed

    Muller, Karsten; Schwarz, Heiko; Marpe, Detlev; Bartnik, Christian; Bosse, Sebastian; Brust, Heribert; Hinz, Tobias; Lakshman, Haricharan; Merkle, Philipp; Rhee, Franz Hunn; Tech, Gerhard; Winken, Martin; Wiegand, Thomas

    2013-09-01

    This paper describes an extension of the high efficiency video coding (HEVC) standard for coding of multi-view video and depth data. In addition to the known concept of disparity-compensated prediction, inter-view motion parameter prediction and inter-view residual prediction for coding of the dependent video views are developed and integrated. Furthermore, for depth coding, new intra coding modes, a modified motion compensation and motion vector coding, as well as the concept of motion parameter inheritance, are part of the HEVC extension. A novel encoder control uses view synthesis optimization, which guarantees that high-quality intermediate views can be generated based on the decoded data. The bitstream format supports the extraction of partial bitstreams, so that conventional 2D video, stereo video, and the full multi-view video plus depth format can be decoded from a single bitstream. Objective and subjective results are presented, demonstrating that the proposed approach provides 50% bit-rate savings in comparison with HEVC simulcast and 20% in comparison with a straightforward multi-view extension of HEVC without the newly developed coding tools. PMID:23715605

  15. 3D reconstruction of scintillation light emission from proton pencil beams using limited viewing angles-a simulation study.

    PubMed

    Hui, CheukKai; Robertson, Daniel; Beddar, Sam

    2014-08-21

    An accurate and high-resolution quality assurance (QA) method for proton radiotherapy beams is necessary to ensure correct dose delivery to the target. Detectors based on a large volume of liquid scintillator have shown great promise in providing fast and high-resolution measurements of proton treatment fields. However, previous work with these detectors has been limited to two-dimensional measurements, and the quantitative measurement of dose distributions was lacking. The purpose of the current study is to assess the feasibility of reconstructing three-dimensional (3D) scintillation light distributions of spot scanning proton beams using a scintillation system. The proposed system consists of a tank of liquid scintillator imaged by charge-coupled device cameras at three orthogonal viewing angles. Because of the limited number of viewing angles, we developed a profile-based technique to obtain an initial estimate that can improve the quality of the 3D reconstruction. We found that our proposed scintillator system and profile-based technique can reconstruct a single energy proton beam in 3D with a gamma passing rate (3%/3 mm local) of 100.0%. For a single energy layer of an intensity modulated proton therapy prostate treatment plan, the proposed method can reconstruct the 3D light distribution with a gamma pass rate (3%/3 mm local) of 99.7%. In addition, we also found that the proposed method is effective in detecting errors in the treatment plan, indicating that it can be a very useful tool for 3D proton beam QA. PMID:25054735

  16. 3D reconstruction of scintillation light emission from proton pencil beams using limited viewing angles—a simulation study

    NASA Astrophysics Data System (ADS)

    Hui, CheukKai; Robertson, Daniel; Beddar, Sam

    2014-08-01

    An accurate and high-resolution quality assurance (QA) method for proton radiotherapy beams is necessary to ensure correct dose delivery to the target. Detectors based on a large volume of liquid scintillator have shown great promise in providing fast and high-resolution measurements of proton treatment fields. However, previous work with these detectors has been limited to two-dimensional measurements, and the quantitative measurement of dose distributions was lacking. The purpose of the current study is to assess the feasibility of reconstructing three-dimensional (3D) scintillation light distributions of spot scanning proton beams using a scintillation system. The proposed system consists of a tank of liquid scintillator imaged by charge-coupled device cameras at three orthogonal viewing angles. Because of the limited number of viewing angles, we developed a profile-based technique to obtain an initial estimate that can improve the quality of the 3D reconstruction. We found that our proposed scintillator system and profile-based technique can reconstruct a single energy proton beam in 3D with a gamma passing rate (3%/3 mm local) of 100.0%. For a single energy layer of an intensity modulated proton therapy prostate treatment plan, the proposed method can reconstruct the 3D light distribution with a gamma pass rate (3%/3 mm local) of 99.7%. In addition, we also found that the proposed method is effective in detecting errors in the treatment plan, indicating that it can be a very useful tool for 3D proton beam QA.
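
    To illustrate how a gamma passing rate such as the 3%/3 mm (local) figures quoted above is computed, here is a brute-force, voxel-based sketch of a local-dose gamma analysis for two distributions sampled on the same regular grid; it is a deliberate simplification for illustration and not the evaluation code used in the study.

        import numpy as np
        from itertools import product

        def gamma_pass_rate(ref, ev, spacing_mm, dose_tol=0.03, dta_mm=3.0,
                            low_dose_cutoff=0.10, search_mm=6.0):
            """Fraction of reference voxels (above a low-dose cutoff) with gamma <= 1,
            using a local dose-difference criterion (dose_tol of the local reference
            value) and a distance-to-agreement criterion dta_mm. Candidate points are
            restricted to grid voxels within search_mm, which slightly overestimates
            gamma compared with a sub-voxel search."""
            ref, ev = np.asarray(ref, float), np.asarray(ev, float)
            spacing = np.asarray(spacing_mm, float)
            radius = np.ceil(search_mm / spacing).astype(int)
            offsets = list(product(*[range(-r, r + 1) for r in radius]))
            dist2 = np.array([sum((o * s) ** 2 for o, s in zip(off, spacing))
                              for off in offsets])        # squared distances in mm^2
            mask = ref > low_dose_cutoff * ref.max()
            passed = total = 0
            for idx in map(tuple, np.argwhere(mask)):
                best = np.inf
                for off, d2 in zip(offsets, dist2):
                    j = tuple(i + o for i, o in zip(idx, off))
                    if any(k < 0 or k >= s for k, s in zip(j, ref.shape)):
                        continue
                    dd = ev[j] - ref[idx]
                    g2 = (dd / (dose_tol * ref[idx])) ** 2 + d2 / dta_mm ** 2
                    best = min(best, g2)
                total += 1
                passed += best <= 1.0
            return passed / total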

  17. 18. INTERIOR DETAIL VIEW OF STAINED GLASS WINDOW LOCATED AT ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    18. INTERIOR DETAIL VIEW OF STAINED GLASS WINDOW LOCATED AT SOUTH SIDE OF ALTAR, NOTE INSCRIPTION DEDICATED IN THE MEMORY OF FATHER DAMIEN - St. Francis Catholic Church, Moloka'i Island, Kalaupapa, Kalawao County, HI

  18. VIEW OF THREE SOUTH FACING STAINED GLASS WINDOWS. THESE WINDOWS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    VIEW OF THREE SOUTH FACING STAINED GLASS WINDOWS. THESE WINDOWS ARE LOCATED ADJACENT TO THE ALTAR. - U.S. Naval Base, Pearl Harbor, Chapel, Corner of Oakley & Nimitz Street, Pearl City, Honolulu County, HI

  19. VIEW OF THREE NORTH FACING STAINED GLASS WINDOWS. THESE WINDOWS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    VIEW OF THREE NORTH FACING STAINED GLASS WINDOWS. THESE WINDOWS ARE LOCATED ADJACENT TO THE ALTAR. - U.S. Naval Base, Pearl Harbor, Chapel, Corner of Oakley & Nimitz Street, Pearl City, Honolulu County, HI

  20. VIEW OF THREE NORTH FACING STAINED GLASS WINDOWS. THESE WINDOWS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    VIEW OF THREE NORTH FACING STAINED GLASS WINDOWS. THESE WINDOWS ARE LOCATED JUST BELOW THE CHOIR LOFT. - U.S. Naval Base, Pearl Harbor, Chapel, Corner of Oakley & Nimitz Street, Pearl City, Honolulu County, HI

  1. INTERIOR VIEW SHOWING FURNACE KEEPER OBSERVING FURNACE THROUGH BLUE GLASS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    INTERIOR VIEW SHOWING FURNACE KEEPER OBSERVING FURNACE THROUGH BLUE GLASS EVERY TWENTY MINUTES TO DETERMINE SIZE AND TEXTURE OF BATCH AND OTHER VARIABLES. FAN IN FRONT COOLS WORKERS AS THEY CONDUCT REPAIRS. FURNACE TEMPERATURE AT 1572 DEGREES FAHRENHEIT. - Chambers-McKee Window Glass Company, Furnace No. 2, Clay Avenue Extension, Jeannette, Westmoreland County, PA

  2. The Relationship Between Glass Formability and the Properties of the Bcc Phase in TITANIUM-3D Metal Alloys

    NASA Astrophysics Data System (ADS)

    Sinkler, Wharton

    The present study concerns glass formation and the beta (bcc) phase in Ti-3d metal systems. Beta phase stability is related to amorphization, because the formability and stability of metallic glasses depend on the relative thermodynamic instability of chemically disordered crystalline solid-solution phases (Johnson 1986). Correlations are found in this series of alloys which support a connection between electronic characteristics of the bcc phase and the tendency for glass formation. Electron irradiation-induced amorphization in Ti-3d metal systems is investigated as a function of temperature and DeltaN, the group-number difference between Ti and the solute. DeltaN is made continuous by using a series of pseudobinary Laves compounds Ti(M1_x M2_(1-x))_2. For DeltaN <= 2.2 (between TiCr_2 and TiMn_2), low-temperature irradiation damage induces oriented precipitation of the beta (bcc) solid-solution phase from the damaged compound. For DeltaN > 2.2, amorphization occurs. Beta-phase precipitation under irradiation suggests that beta phase stability is continuously enhanced as DeltaN decreases. Diffuse omega scattering in the quenched Ti-Cr beta phase is investigated using electron diffraction and low-temperature electron irradiation. A new model of the short-range-ordered atomic displacements causing the diffuse scattering is developed. Based on this model, it is proposed that the structure reflects chemical short-range order. This is supported by irradiation results on the beta phase. A correlation is found between the diffuse scattering and the valence electron concentration. The explanation proposed for this correlation is that the chemical ordering in the beta phase is driven by Fermi surface nesting. Results of annealing of quenched beta Ti-Cr are presented and are compared with reports of annealing-induced amorphization of this phase (Blatter et al. 1988; Yan et al. 1993). Amorphization is not reproduced. A metastable compound phase beta'' precipitates

  3. Stereoscopic 3D entertainment and its effect on viewing comfort: comparison of children and adults.

    PubMed

    Pölönen, Monika; Järvenpää, Toni; Bilcu, Beatrice

    2013-01-01

    Children's and adults' viewing comfort during stereoscopic three-dimensional film viewing and computer game playing was studied. Certain mild changes in visual function (heterophoria and near point of accommodation values), as well as in eyestrain and visually induced motion sickness levels, were found when single setups were compared. The viewing system had an influence on viewing comfort, in particular on eyestrain levels, but no clear difference between two- and three-dimensional systems was found. Additionally, certain mild differences in visual functions and visually induced motion sickness levels between adults and children were found. In general, all of the system-task combinations caused mild eyestrain and possible changes in visual functions, but these changes were small in magnitude. According to subjective opinions, which further support these measurements, using a stereoscopic three-dimensional system for up to 2 h was acceptable for most of the users regardless of their age. PMID:22818394

  4. A wavelet-based image quality metric for the assessment of 3D synthesized views

    NASA Astrophysics Data System (ADS)

    Bosc, Emilie; Battisti, Federica; Carli, Marco; Le Callet, Patrick

    2013-03-01

    In this paper we present a novel image quality assessment technique for evaluating virtual synthesized views in the context of multi-view video. In particular, free-viewpoint videos are generated from uncompressed color views and their compressed associated depth maps by means of the View Synthesis Reference Software provided by MPEG. Prior to the synthesis step, the original depth maps are encoded with different coding algorithms, thus leading to the creation of additional artifacts in the synthesized views. The core of the proposed wavelet-based metric lies in the registration procedure performed to align the synthesized view with the original one, and in the skin detection applied because the same distortion is more annoying when visible on human subjects than on other parts of the scene. The effectiveness of the metric is evaluated by analyzing the correlation of the scores obtained with the proposed metric with Mean Opinion Scores collected by means of subjective tests. The achieved results are also compared against those of well-known objective quality metrics. The experimental results confirm the effectiveness of the proposed metric.

  5. An Automatic 3d Reconstruction Method Based on Multi-View Stereo Vision for the Mogao Grottoes

    NASA Astrophysics Data System (ADS)

    Xiong, J.; Zhong, S.; Zheng, L.

    2015-05-01

    This paper presents an automatic three-dimensional reconstruction method based on multi-view stereo vision for the Mogao Grottoes. 3D digitization techniques have been used in cultural heritage conservation and replication over the past decade, especially methods based on binocular stereo vision. However, mismatched points are inevitable in traditional binocular stereo matching due to repeatable or similar features in binocular images. In order to greatly reduce the probability of mismatching and to improve measurement precision, a portable four-camera photographic measurement system is used for 3D modelling of a scene. The four cameras of the measurement system form six binocular systems with baselines of different lengths to add extra matching constraints and offer multiple measurements. A matching error based on the epipolar constraint is introduced to remove mismatched points. Finally, an accurate point cloud can be generated by multi-image matching and sub-pixel interpolation. Delaunay triangulation and texture mapping are performed to obtain the 3D model of a scene. The method has been tested on 3D reconstruction of several scenes of the Mogao Grottoes, and good results verify its effectiveness.
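
    The epipolar-constraint filtering mentioned above can be outlined as follows; the fundamental matrix F, the matched point arrays and the pixel threshold are assumed inputs, and this is an illustrative sketch rather than the authors' implementation.

      # Hedged sketch: reject candidate matches whose distance to the epipolar
      # line exceeds a tolerance. F, pts1, pts2 are assumed to be available.
      import numpy as np

      def epipolar_errors(F, pts1, pts2):
          """Distance (pixels) of each pts2 point from the epipolar line of its pts1 match."""
          ones = np.ones((len(pts1), 1))
          x1 = np.hstack([pts1, ones])          # homogeneous coordinates in image 1
          x2 = np.hstack([pts2, ones])          # homogeneous coordinates in image 2
          lines = x1 @ F.T                      # epipolar lines l' = F x1, one per row
          num = np.abs(np.sum(lines * x2, axis=1))
          den = np.hypot(lines[:, 0], lines[:, 1])
          return num / den

      # keep = epipolar_errors(F, pts1, pts2) < 1.5   # e.g. a 1.5-pixel tolerance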

  6. A molecular view of vapor deposited glasses

    SciTech Connect

    Singh, Sadanand; Pablo, Juan J. de

    2011-05-21

    Recently, novel organic glassy materials that exhibit remarkable stability have been prepared by vapor deposition. The thermophysical properties of these new "stable" glasses are equivalent to those that common glasses would exhibit after aging over periods lasting thousands of years. The origin of such enhanced stability has been elusive; in the absence of detailed models, past studies have discussed the formation of new polyamorphs or that of nanocrystals to explain the observed behavior. In this work, an atomistic molecular model of trehalose, a disaccharide of glucose, is used to examine the properties of vapor-deposited stable glasses. Consistent with experiment, the model predicts the formation of stable glasses having a higher density, a lower enthalpy, and higher onset temperatures than those of the corresponding "ordinary" glass formed by quenching the bulk liquid. Simulations reveal that newly formed layers of the growing vapor-deposited film exhibit greater mobility than the remainder of the material, thereby enabling a reorganization of the film as it is grown. They also reveal that "stable" glasses exhibit a distinct layered structure in the direction normal to the substrate that is responsible for their unusual properties.

  7. Quantitative analysis of 3D stent reconstruction from a limited number of views in cardiac rotational angiography

    NASA Astrophysics Data System (ADS)

    Perrenot, Béatrice; Vaillant, Régis; Prost, Rémy; Finet, Gérard; Douek, Philippe; Peyrin, Françoise

    2007-03-01

    Percutaneous coronary angioplasty consists in conducting a guidewire carrying a balloon and a stent through the lesion and deploying the stent by balloon inflation. A stent is a small, complex 3D mesh that is hardly visible in X-ray images: the control of stent deployment is difficult, although it is important to avoid post-intervention complications. In a previous work, we proposed a method to reconstruct 3D stent images from a set of 2D cone-beam projections acquired in rotational acquisition mode. The process involves a motion compensation procedure based on the position of two markers located on the guidewire in the 2D radiographic sequence. Under the hypothesis that the stent and marker motions are identical, the method was shown to generate a negligible error. If this hypothesis is not fulfilled, a solution could be to use only the images where motion is weakest, at the cost of having a limited number of views. In this paper, we propose a simulation-based study of the impact of a limited number of views in our context. The imaging chain involved in the acquisition of X-ray sequences is first modeled to simulate realistic noisy projections of a stent animated by a motion close to cardiac motion. Then, the 3D stent images are reconstructed using the proposed motion compensation method from gated projections. Two gating strategies are examined to select projections in the sequences. A quantitative analysis is carried out to assess reconstruction quality as a function of noise and acquisition strategy.

  8. 3D Segmentation of the Left Ventricle Combining Long- and Shortaxis Views

    NASA Astrophysics Data System (ADS)

    Relan, Jatin; Säring, Dennis; Groth, Michael; Müllerleile, Kai; Handels, Heinz

    Segmentation of the left ventricle (LV) is required to quantify LV remodeling after myocardial infarction. Therefore, spatiotemporal Cine MR sequences including longaxis and shortaxis images are acquired. In this paper a new method for fast and robust segmentation of the left ventricle is presented. The new approach considers the position of the mitral valve and the apex as well as the longaxis contours to generate a 3D LV surface model. The segmentation result can be checked and adjusted in the shortaxis images. Finally, quantitative parameters are extracted. For evaluation, the LV was segmented in eight datasets of the same subject by two medical experts using a contour drawing tool and the new segmentation tool. The results of both methods were compared concerning interaction time and intra- and interobserver variance. The presented segmentation method proved to be fast. The intra- and interobserver variance is decreased for all extracted parameters.

  9. A 3D view of the Hydra I galaxy cluster core - I. Kinematic substructures

    NASA Astrophysics Data System (ADS)

    Hilker, Michael; Barbosa, Carlos Eduardo; Richtler, Tom; Coccato, Lodovico; Arnaboldi, Magda; Mendes de Oliveira, Claudia

    2015-02-01

    We used FORS2 in MXU mode to mimic a coarse `IFU' in order to measure the 3D large-scale kinematics around the central Hydra I cluster galaxy NGC 3311. Our data show that the velocity dispersion field varies as a function of radius and azimuthal angle and violates point symmetry. The velocity field shows a similar dependence; hence the stellar halo of NGC 3311 is a dynamically young structure. The kinematic irregularities coincide in position with a displaced diffuse halo north-east of NGC 3311 and with tidal features of a group of disrupting dwarf galaxies. This suggests that the superposition of different velocity components is responsible for the kinematic substructure in the Hydra I cluster core.

  10. Potential hazards of viewing 3-D stereoscopic television, cinema and computer games: a review.

    PubMed

    Howarth, Peter A

    2011-03-01

    The visual stimulus provided by a 3-D stereoscopic display differs from that of the real world because the image provided to each eye is produced on a flat surface. The distance from the screen to the eye remains fixed, providing a single focal distance, but the introduction of disparity between the images allows objects to be located geometrically in front of, or behind, the screen. Unlike in the real world, the stimulus to accommodation and the stimulus to convergence do not match. Although this mismatch is used positively in some forms of orthoptic treatment, a number of authors have suggested that it could lead to the development of asthenopic symptoms. From knowledge of the zone of clear, comfortable, single binocular vision, one can predict that, for people with normal binocular vision, adverse symptoms will not be present if the discrepancy is small, but are likely if it is large, and that what constitutes 'large' and 'small' is idiosyncratic to the individual. The accommodation-convergence mismatch is not, however, the only difference between the natural and the artificial stimuli. In the former case, an object located in front of, or behind, a fixated object will not only be perceived as double if the images fall outside Panum's fusional areas, but it will also be defocused and blurred. In the latter case, however, it is usual for the producers of cinema, TV or computer game content to provide an image that is in focus over the whole of the display, and as a consequence diplopic images will be sharply in focus. The size of Panum's fusional area is spatial-frequency-dependent, and because of this the high spatial frequencies present in the diplopic 3-D image will provide a different stimulus to the fusion system from that found naturally. PMID:21309798

  11. Automated bone segmentation from large field of view 3D MR images of the hip joint.

    PubMed

    Xia, Ying; Fripp, Jurgen; Chandra, Shekhar S; Schwarz, Raphael; Engstrom, Craig; Crozier, Stuart

    2013-10-21

    Accurate bone segmentation in the hip joint region from magnetic resonance (MR) images can provide quantitative data for examining pathoanatomical conditions such as femoroacetabular impingement through to varying stages of osteoarthritis, in order to monitor bone and associated cartilage morphometry. We evaluate two state-of-the-art methods (multi-atlas and active shape model (ASM) approaches) on bilateral MR images for automatic 3D bone segmentation in the hip region (proximal femur and innominate bone). Bilateral MR images of the hip joints were acquired at 3T from 30 volunteers. Image sequences included water-excitation dual-echo steady state (FOV 38.6 × 24.1 cm, matrix 576 × 360, thickness 0.61 mm) in all subjects and multi-echo data image combination (FOV 37.6 × 23.5 cm, matrix 576 × 360, thickness 0.70 mm) for a subset of eight subjects. Following manual segmentation of femoral (head-neck, proximal-shaft) and innominate (ilium+ischium+pubis) bone, automated bone segmentation proceeded via two approaches: (1) multi-atlas segmentation incorporating non-rigid registration and (2) an advanced ASM-based scheme. Mean inter- and intra-rater reliability Dice's similarity coefficients (DSC) for manual segmentation of femoral and innominate bone were (0.970, 0.963) and (0.971, 0.965). Compared with manual data, mean DSC values for femoral and innominate bone volumes using the automated multi-atlas and ASM-based methods were (0.950, 0.922) and (0.946, 0.917), respectively. Both approaches delivered accurate (high DSC values) segmentation results; notably, ASM data were generated in substantially less computational time (12 min versus 10 h). Both automated algorithms provided accurate 3D bone volumetric descriptions for MR-based measures in the hip region. The highly computationally efficient ASM-based approach is more likely to be suitable for future clinical applications such as extracting bone-cartilage interfaces for potential cartilage segmentation. PMID:24077264
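
    The overlap measure reported above, Dice's similarity coefficient, can be computed as in the short sketch below; the toy volumes are invented placeholders standing in for an automated and a manual bone mask.

      # Hedged sketch: Dice similarity coefficient between two binary masks.
      # The two toy volumes are placeholders, not data from the study.
      import numpy as np

      def dice(a, b):
          """DSC = 2|A intersect B| / (|A| + |B|) for boolean volumes a and b."""
          a, b = a.astype(bool), b.astype(bool)
          denom = a.sum() + b.sum()
          return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

      auto = np.zeros((64, 64, 64), dtype=bool)
      auto[10:40, 10:40, 10:40] = True            # toy automated segmentation
      manual = np.zeros((64, 64, 64), dtype=bool)
      manual[12:42, 10:40, 10:40] = True          # toy manual segmentation
      print(f"DSC = {dice(auto, manual):.3f}")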

  12. Automated bone segmentation from large field of view 3D MR images of the hip joint

    NASA Astrophysics Data System (ADS)

    Xia, Ying; Fripp, Jurgen; Chandra, Shekhar S.; Schwarz, Raphael; Engstrom, Craig; Crozier, Stuart

    2013-10-01

    Accurate bone segmentation in the hip joint region from magnetic resonance (MR) images can provide quantitative data for examining pathoanatomical conditions such as femoroacetabular impingement through to varying stages of osteoarthritis to monitor bone and associated cartilage morphometry. We evaluate two state-of-the-art methods (multi-atlas and active shape model (ASM) approaches) on bilateral MR images for automatic 3D bone segmentation in the hip region (proximal femur and innominate bone). Bilateral MR images of the hip joints were acquired at 3T from 30 volunteers. Image sequences included water-excitation dual echo stead state (FOV 38.6 × 24.1 cm, matrix 576 × 360, thickness 0.61 mm) in all subjects and multi-echo data image combination (FOV 37.6 × 23.5 cm, matrix 576 × 360, thickness 0.70 mm) for a subset of eight subjects. Following manual segmentation of femoral (head-neck, proximal-shaft) and innominate (ilium+ischium+pubis) bone, automated bone segmentation proceeded via two approaches: (1) multi-atlas segmentation incorporating non-rigid registration and (2) an advanced ASM-based scheme. Mean inter- and intra-rater reliability Dice's similarity coefficients (DSC) for manual segmentation of femoral and innominate bone were (0.970, 0.963) and (0.971, 0.965). Compared with manual data, mean DSC values for femoral and innominate bone volumes using automated multi-atlas and ASM-based methods were (0.950, 0.922) and (0.946, 0.917), respectively. Both approaches delivered accurate (high DSC values) segmentation results; notably, ASM data were generated in substantially less computational time (12 min versus 10 h). Both automated algorithms provided accurate 3D bone volumetric descriptions for MR-based measures in the hip region. The highly computational efficient ASM-based approach is more likely suitable for future clinical applications such as extracting bone-cartilage interfaces for potential cartilage segmentation.

  13. 31. Interior detail view of arched, steelframed, stained glass windows ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    31. Interior detail view of arched, steel-framed, stained glass windows at the landing of the south stairs in main lobby, view looking south from second floor lobby - University of Oregon Museum of Art, 1470 Johnson Lane, Eugene, Lane County, OR

  14. Supervised and unsupervised MRF based 3D scene classification in multiple view airborne oblique images

    NASA Astrophysics Data System (ADS)

    Gerke, M.; Xiao, J.

    2013-10-01

    In this paper we develop and compare two methods for scene classification in 3D object space; that is, instead of classifying single image pixels, we classify voxels which carry geometric, textural and color information collected from the airborne oblique images and derived products such as point clouds from dense image matching. One method is supervised, i.e. it relies on training data provided by an operator; we use Random Trees for the actual training and prediction tasks. The second method is unsupervised and thus does not require any user interaction; we formulate this classification task as a Markov-Random-Field problem and employ graph cuts for the actual optimization procedure. Two test areas are used to test and evaluate both techniques. In the Haiti dataset we are confronted with largely destroyed built-up areas, since the images were taken after the earthquake in January 2010, while in the second case we use images taken over Enschede, a typical Central European city. For the Haiti case it is difficult to provide clear class definitions, and this is also reflected in the overall classification accuracy: it is 73% for the supervised and only 59% for the unsupervised method. If classes are defined less ambiguously, as in the Enschede area, results are much better (85% vs. 78%). In conclusion the results are acceptable, also taking into account that the point cloud used for geometric features is not of good quality and no infrared channel is available to support vegetation classification.
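
    The supervised step can be illustrated with a small sketch: a random-forest classifier (standing in for the Random Trees used above) trained on per-voxel feature vectors. The feature matrix, class labels and their dimensions below are synthetic placeholders.

      # Hedged sketch: supervised per-voxel classification with a random forest.
      # Features and labels are randomly generated stand-ins for real voxel data.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(0)
      X_train = rng.normal(size=(500, 8))      # 8 hypothetical geometric/textural/color features
      y_train = rng.integers(0, 4, size=500)   # 4 hypothetical classes (e.g. roof, facade, ground, vegetation)

      clf = RandomForestClassifier(n_estimators=100, random_state=0)
      clf.fit(X_train, y_train)

      X_new = rng.normal(size=(10, 8))         # features of unseen voxels
      print(clf.predict(X_new))                # predicted class label per voxel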

  15. 3D numerical model for a focal plane view in case of mosaic grating compressor for high energy CPA chain.

    PubMed

    Montant, S; Marre, G; Blanchot, N; Rouyer, C; Videau, L; Sauteret, C

    2006-12-11

    An important issue, the mosaic grating compressor, is studied for recompressing pulses in multi-petawatt, high-energy laser systems. Alignment of the mosaic elements is crucial to control the focal spot and thus the intensity on target. No theoretical approach has analysed the influence of compressor misalignment on spatial and temporal profiles in the focal plane. We describe a simple 3D numerical model giving access to the focal plane view after a compressor. This model is computationally inexpensive since it needs only 1D Fourier transforms to access the temporal profile. We present simulations of monolithic and mosaic grating compressors. PMID:19529688
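
    As a rough illustration of why only 1D Fourier transforms are needed, the sketch below recovers a temporal intensity profile from a spectral field carrying a residual phase term; the Gaussian spectrum and the 200 fs delay are arbitrary assumptions, not values from the paper.

      # Hedged sketch: temporal profile from a 1D inverse FFT of the spectral field.
      # The spectrum shape and residual spectral phase are invented for illustration.
      import numpy as np

      nu = np.linspace(-20e12, 20e12, 4096)        # optical frequency offset (Hz)
      spectrum = np.exp(-(nu / 5e12) ** 2)         # hypothetical Gaussian spectral amplitude
      phase = 2 * np.pi * nu * 200e-15             # hypothetical residual group delay of 200 fs
      field_nu = spectrum * np.exp(1j * phase)

      field_t = np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(field_nu)))
      intensity_t = np.abs(field_t) ** 2           # temporal intensity at a focal-plane point
      print(f"peak index: {int(np.argmax(intensity_t))} of {intensity_t.size}")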

  16. Highly optimized simulations on single- and multi-GPU systems of the 3D Ising spin glass model

    NASA Astrophysics Data System (ADS)

    Lulli, M.; Bernaschi, M.; Parisi, G.

    2015-11-01

    We present a highly optimized implementation of a Monte Carlo (MC) simulator for the three-dimensional Ising spin-glass model with bimodal disorder, i.e., the 3D Edwards-Anderson model, running on CUDA-enabled GPUs. Multi-GPU systems exchange data by means of the Message Passing Interface (MPI). The chosen MC dynamics is the classic Metropolis one, which is purely dissipative, since the aim was the study of the critical off-equilibrium relaxation of the system. We focused on the following issues: (i) the implementation of efficient memory access patterns for nearest neighbours in a cubic stencil and for lagged-Fibonacci-like pseudo-random number generators (PRNGs); (ii) a novel implementation of the asynchronous multispin-coding Metropolis MC step, which allows one spin to be stored per bit; and (iii) a multi-GPU version based on a combination of MPI and CUDA streams. Cubic stencils and PRNGs are two subjects of very general interest because of their widespread use in many simulation codes.
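
    For orientation, the sketch below is a plain, unoptimized Metropolis sweep for the 3D Edwards-Anderson model with bimodal couplings; it is not the bit-packed, multispin-coded CUDA kernel described above, and the lattice size and temperature are arbitrary choices.

      # Hedged sketch: one single-spin-flip Metropolis sweep of the 3D Edwards-Anderson
      # model with +/-1 couplings on a periodic L^3 lattice (plain NumPy, not CUDA).
      import numpy as np

      L, T = 8, 1.0
      rng = np.random.default_rng(1)
      spins = rng.choice([-1, 1], size=(L, L, L))
      # J[ax][i, j, k] is the quenched bond between site (i, j, k) and its +1 neighbour along axis ax
      J = {ax: rng.choice([-1, 1], size=(L, L, L)) for ax in range(3)}

      def metropolis_sweep(spins, J, T, rng):
          for _ in range(spins.size):
              i, j, k = rng.integers(0, L, size=3)
              # local field from the six neighbours, each weighted by its coupling
              h = (J[0][i, j, k] * spins[(i + 1) % L, j, k]
                   + J[0][(i - 1) % L, j, k] * spins[(i - 1) % L, j, k]
                   + J[1][i, j, k] * spins[i, (j + 1) % L, k]
                   + J[1][i, (j - 1) % L, k] * spins[i, (j - 1) % L, k]
                   + J[2][i, j, k] * spins[i, j, (k + 1) % L]
                   + J[2][i, j, (k - 1) % L] * spins[i, j, (k - 1) % L])
              dE = 2 * spins[i, j, k] * h          # energy change of flipping this spin
              if dE <= 0 or rng.random() < np.exp(-dE / T):
                  spins[i, j, k] *= -1

      metropolis_sweep(spins, J, T, rng)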

  17. Venus - 3D Perspective View of Latona Corona and Dali Chasma

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This computer-generated perspective view of Latona Corona and Dali Chasma on Venus shows Magellan radar data superimposed on topography. The view is from the northeast and the vertical exaggeration is 10 times. Exaggeration of relief is a common tool scientists use to detect relationships between structure (i.e. faults and fractures) and topography. Latona Corona, a circular feature approximately 1,000 kilometers (620 miles) in diameter whose eastern half is shown at the left of the image, has a relatively smooth, radar-bright raised rim. Bright lines or fractures within the corona appear to radiate away from its center toward the rim. The rest of the bright fractures in the area are associated with the relatively deep (approximately 3 kilometers or 1.9 miles) troughs of Dali Chasma. The Dali and Diana Chasma system consists of deep troughs that extend for 7,400 kilometers (4,588 miles) and are very distinct features on Venus. These chasmata connect the Ovda and Thetis highlands with the large volcanoes at Atla Regio and thus are considered to be the 'Scorpion Tail' of Aphrodite Terra. The broad, curving scarp resembles some of Earth's subduction zones where crustal plates are pushed over each other. The radar-bright surface at the highest elevation along the scarp is similar to surfaces in other elevated regions where some metallic mineral such as pyrite (fool's gold) may occur on the surface.

  18. Automatic thermographic scanning with the creation of 3D panoramic views of buildings

    NASA Astrophysics Data System (ADS)

    Ferrarini, G.; Cadelano, G.; Bortolin, A.

    2016-05-01

    Infrared thermography is widely applied to the inspection of buildings, enabling the identification of thermal anomalies due to the presence of hidden structures, air leakages, and moisture. One of the main advantages of this technique is the possibility of rapidly acquiring a temperature map of a surface. However, due to the low resolution of current thermal cameras and the necessity of scanning surfaces with different orientations, it is necessary to take multiple images during a building survey. In this work a device based on quantitative infrared thermography, called aIRview, has been applied during building surveys to automatically acquire thermograms with a camera mounted on a robotized pan-tilt unit. The goal is to perform a first rapid survey of the building that can give useful information for subsequent quantitative thermal investigations. For each data acquisition, the instrument covers a rotational field of view of 360° around the vertical axis and up to 180° around the horizontal one. The acquired images have been processed in order to create a full equirectangular projection of the environment. The images have then been integrated into a web visualization tool, working with web panorama viewers such as Google Street View, creating a webpage where it is possible to take a three-dimensional virtual tour of the building. The thermographic data are embedded with the visual imaging and with other sensor data, facilitating the understanding of the physical phenomena underlying the temperature distribution.
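
    The panorama-building step can be outlined with the small sketch below, which maps the pan and tilt angles of a robotized unit to pixel coordinates in an equirectangular image; the panorama size is an arbitrary assumption, and a full implementation would also warp and blend each thermogram rather than place single points.

      # Hedged sketch: pan/tilt pointing angles to equirectangular panorama coordinates.
      # The panorama resolution is an invented example value.
      import numpy as np

      pano_w, pano_h = 2048, 1024                  # 360 deg x 180 deg panorama, hypothetical size

      def equirect_pixel(pan_deg, tilt_deg):
          """Map pan in [0, 360) and tilt in [-90, +90] degrees to (column, row)."""
          u = (pan_deg % 360.0) / 360.0 * pano_w   # longitude -> column
          v = (90.0 - tilt_deg) / 180.0 * pano_h   # tilt of +90 deg (up) -> top row
          return int(u), int(v)

      print(equirect_pixel(45.0, 10.0))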

  19. Towards An Understanding of Mobile Touch Navigation in a Stereoscopic Viewing Environment for 3D Data Exploration.

    PubMed

    Lopez, David; Oehlberg, Lora; Doger, Candemir; Isenberg, Tobias

    2016-05-01

    We discuss touch-based navigation of 3D visualizations in a combined monoscopic and stereoscopic viewing environment. We identify a set of interaction modes, and a workflow that helps users transition between these modes to improve their interaction experience. In our discussion we analyze, in particular, the control-display space mapping between the different reference frames of the stereoscopic and monoscopic displays. We show how this mapping supports interactive data exploration, but may also lead to conflicts between the stereoscopic and monoscopic views due to users' movement in space; we resolve these problems through synchronization. To support our discussion, we present results from an exploratory observational evaluation with domain experts in fluid mechanics and structural biology. These experts explored domain-specific datasets using variations of a system that embodies the interaction modes and workflows; we report on their interactions and qualitative feedback on the system and its workflow. PMID:27045916

  20. Fast, large field-of-view, telecentric optical-CT scanning system for 3D radiochromic dosimetry

    PubMed Central

    Thomas, A; Oldham, M

    2010-01-01

    We describe initial experiences with an in-house, fast, large field-of-view, telecentric optical-CT scanner (the Duke Large field-of-view Optical-CT Scanner, DLOS). The DLOS system is designed to enable telecentric optical-CT imaging of dosimeters up to 24 cm in diameter with a spatial resolution of 1 mm^3, in approximately 10 minutes. These capabilities render the DLOS system a unique device at present. The system is a scaled-up version of early prototypes in our lab. This scaling introduces several challenges, including the accurate measurement of a greatly increased range of light attenuation within the dosimeter, and the need to reduce even minor reflections and scattered light within the imaging chain. We present several corrections and techniques that enable accurate, low-noise 3D dosimetry with the DLOS system. PMID:21218169

  1. Effects Of Long-Term Viewing Of VISIDEP tm 3-D Television

    NASA Astrophysics Data System (ADS)

    McLaurin, A. P.; Jones, Edwin R.

    1988-06-01

    A comparison was made between viewing normal television and VISIDEPTM television, which produces three-dimensional images by the method of alternating images. Two separate groups of fifteen university students each underwent fifty minutes of unrelieved exposure to television; one group watched standard television and the other watched VISIDEP. Both groups were surveyed regarding questions of eye strain, fatigue, headache, or other discomforts, as well as questions of apparent depth and image quality. One week later the participants were all shown the VISIDEP television and surveyed in the same manner as before. In addition, they were given a chance to make a direct side-by-side comparison and evaluate the images. Analysis of the viewer responses shows that, in relation to viewer comfort, VISIDEP television is as acceptable to viewers as normal television, for it introduces no additional problems. However, the VISIDEP images were clearly superior in their ability to evoke an enhanced perception of depth.

  2. A user-friendly nano-CT image alignment and 3D reconstruction platform based on LabVIEW

    NASA Astrophysics Data System (ADS)

    Wang, Sheng-Hao; Zhang, Kai; Wang, Zhi-Li; Gao, Kun; Wu, Zhao; Zhu, Pei-Ping; Wu, Zi-Yu

    2015-01-01

    X-ray computed tomography at the nanometer scale (nano-CT) offers a wide range of applications in scientific and industrial areas. Here we describe a reliable, user-friendly, and fast software package based on LabVIEW that allows us to perform all procedures after the acquisition of raw projection images in order to obtain the inner structure of the investigated sample. A suitable image alignment process to address misalignment problems among image series, caused by mechanical manufacturing errors, thermal expansion, and other external factors, has been implemented, together with a novel fast parallel-beam 3D reconstruction procedure that was developed ad hoc to perform the tomographic reconstruction. We have obtained remarkably improved reconstruction results at the Beijing Synchrotron Radiation Facility after image calibration, confirming the fundamental role of this image alignment procedure, which minimizes the unwanted blur and additional streaking artifacts that are otherwise present in reconstructed slices. Moreover, this nano-CT image alignment and its associated 3D reconstruction procedure are fully based on LabVIEW routines, significantly reducing the data post-processing cycle and thus making the work of the users faster and easier during experimental runs.
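
    A rough outline of the two processing stages, written in Python rather than LabVIEW and not taken from the package above, is sketched below: frame-to-frame jitter is removed with phase cross-correlation, and a slice is then reconstructed with parallel-beam filtered back projection. The projection array, the chosen slice row and the alignment strategy (registering each projection to the previous one) are all assumptions.

      # Hedged sketch: simple projection alignment followed by parallel-beam FBP.
      # `projs` is an assumed (n_angles, rows, cols) array of raw projections.
      import numpy as np
      from scipy.ndimage import shift as nd_shift
      from skimage.registration import phase_cross_correlation
      from skimage.transform import iradon

      def align_projections(projs):
          """Remove frame-to-frame drift by registering each projection to the previous one."""
          out = projs.astype(float)
          for i in range(1, len(out)):
              shift_vec, _, _ = phase_cross_correlation(out[i - 1], out[i], upsample_factor=10)
              out[i] = nd_shift(out[i], shift_vec)   # apply the measured shift
          return out

      # aligned = align_projections(projs)
      # sino = aligned[:, row_of_interest, :].T      # sinogram of one slice: (detector, angles)
      # theta = np.linspace(0.0, 180.0, aligned.shape[0], endpoint=False)
      # slice_img = iradon(sino, theta=theta)        # filtered back projection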

  3. Interior detail view, surviving stained glass panel in an east ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Interior detail view, surviving stained glass panel in an east aisle window. Most of the stained glass has been removed from the building and relocated to other area churches. (Similar to HABS No. PA-6694-25). - Acts of the Apostles Church in Jesus Christ, 1400-28 North Twenty-eighth Street, northwest corner of North Twenty-eighth & Master Streets, Philadelphia, Philadelphia County, PA

  4. The MUSE 3D view of the Hubble Deep Field South

    NASA Astrophysics Data System (ADS)

    Bacon, R.; Brinchmann, J.; Richard, J.; Contini, T.; Drake, A.; Franx, M.; Tacchella, S.; Vernet, J.; Wisotzki, L.; Blaizot, J.; Bouché, N.; Bouwens, R.; Cantalupo, S.; Carollo, C. M.; Carton, D.; Caruana, J.; Clément, B.; Dreizler, S.; Epinat, B.; Guiderdoni, B.; Herenz, C.; Husser, T.-O.; Kamann, S.; Kerutt, J.; Kollatschny, W.; Krajnovic, D.; Lilly, S.; Martinsson, T.; Michel-Dansac, L.; Patricio, V.; Schaye, J.; Shirazi, M.; Soto, K.; Soucail, G.; Steinmetz, M.; Urrutia, T.; Weilbacher, P.; de Zeeuw, T.

    2015-03-01

    We observed the Hubble Deep Field South with the new panoramic integral-field spectrograph MUSE that we built and have just commissioned at the VLT. The data cube resulting from 27 h of integration covers a one arcmin^2 field of view at an unprecedented depth, with a 1σ emission-line surface brightness limit of 1 × 10^-19 erg s^-1 cm^-2 arcsec^-2, and contains ~90 000 spectra. We present the combined and calibrated data cube, and we performed a first-pass analysis of the sources detected in the Hubble Deep Field South imaging. We measured the redshifts of 189 sources up to a magnitude I_814 = 29.5, increasing the number of known spectroscopic redshifts in this field by more than an order of magnitude. We also discovered 26 Lyα emitting galaxies that are not detected in the HST WFPC2 deep broad-band images. The intermediate spectral resolution of 2.3 Å allows us to separate resolved asymmetric Lyα emitters, [O II] 3727 emitters, and C III] 1908 emitters, and the broad instantaneous wavelength range of 4500 Å helps to identify single emission lines, such as [O III] 5007, Hβ, and Hα, over a very wide redshift range. We also show how the three-dimensional information of MUSE helps to resolve sources that are confused at ground-based image quality. Overall, secure identifications are provided for 83% of the 227 emission line sources detected in the MUSE data cube and for 32% of the 586 sources identified in the HST catalogue. The overall redshift distribution is fairly flat to z = 6.3, with a reduction between z = 1.5 and 2.9, in the well-known redshift desert. The field of view of MUSE also allowed us to detect 17 groups within the field. We checked that the number counts of [O II] 3727 and Lyα emitters are roughly consistent with predictions from the literature. Using two examples, we demonstrate that MUSE is able to provide exquisite spatially resolved spectroscopic information on the intermediate-redshift galaxies present in the field. This unique data set can be used for a

  5. 3D pulse EPR imaging from sparse-view projections via constrained, total variation minimization

    PubMed Central

    Qiao, Zhiwei; Redler, Gage; Epel, Boris; Halpern, Howard

    2016-01-01

    Tumors and tumor portions with low oxygen concentrations (pO2) have been shown to be resistant to radiation therapy. As such, radiation therapy efficacy may be enhanced if delivered radiation dose is tailored based on the spatial distribution of pO2 within the tumor. A technique for accurate imaging of tumor oxygenation is critically important to guide radiation treatment that accounts for the effects of local pO2. Electron paramagnetic resonance imaging (EPRI) has been considered one of the leading methods for quantitatively imaging pO2 within tumors in vivo. However, current EPRI techniques require relatively long imaging times. Reducing the number of projections can considerably reduce the imaging time. Conventional image reconstruction algorithms, such as filtered back projection (FBP), may produce severe artifacts in images reconstructed from sparse-view projections. This can lower the utility of these reconstructed images. In this work, an optimization based image reconstruction algorithm using constrained, total variation (TV) minimization, subject to data consistency, is developed and evaluated. The algorithm was evaluated using simulated phantom, physical phantom and pre-clinical EPRI data. The TV algorithm is compared with FBP using subjective and objective metrics. The results demonstrate the merits of the proposed reconstruction algorithm. PMID:26225440
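
    As a toy illustration of combining data consistency with total-variation minimization, the sketch below runs subgradient descent on a least-squares data term plus an anisotropic TV penalty; the system matrix, image size, weights and step size are all invented, and the authors' constrained algorithm is more sophisticated than this.

      # Hedged sketch: subgradient descent on ||Ax - b||^2 / 2 + lam * TV(x)
      # with an anisotropic TV term. All sizes and parameters are toy values.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 16
      x_true = np.zeros((n, n)); x_true[4:12, 5:11] = 1.0       # piecewise-constant toy object
      A = rng.normal(size=(80, n * n))                          # hypothetical sparse-view system matrix
      b = A @ x_true.ravel()                                    # simulated projection data

      def tv_subgrad(img):
          """Subgradient of the anisotropic total variation of img."""
          gx = np.sign(np.diff(img, axis=0, append=img[-1:, :]))
          gy = np.sign(np.diff(img, axis=1, append=img[:, -1:]))
          sub = -gx - gy
          sub[1:, :] += gx[:-1, :]
          sub[:, 1:] += gy[:, :-1]
          return sub

      x, lam, step = np.zeros((n, n)), 0.5, 5e-4
      for _ in range(300):
          resid = A @ x.ravel() - b
          grad = (A.T @ resid).reshape(n, n) + lam * tv_subgrad(x)
          x -= step * grad
      print(f"final data misfit: {np.linalg.norm(A @ x.ravel() - b):.3f}")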

  6. 3D pulse EPR imaging from sparse-view projections via constrained, total variation minimization

    NASA Astrophysics Data System (ADS)

    Qiao, Zhiwei; Redler, Gage; Epel, Boris; Qian, Yuhua; Halpern, Howard

    2015-09-01

    Tumors and tumor portions with low oxygen concentrations (pO2) have been shown to be resistant to radiation therapy. As such, radiation therapy efficacy may be enhanced if the delivered radiation dose is tailored based on the spatial distribution of pO2 within the tumor. A technique for accurate imaging of tumor oxygenation is critically important to guide radiation treatment that accounts for the effects of local pO2. Electron paramagnetic resonance imaging (EPRI) has been considered one of the leading methods for quantitatively imaging pO2 within tumors in vivo. However, current EPRI techniques require relatively long imaging times. Reducing the number of projections can considerably reduce the imaging time. Conventional image reconstruction algorithms, such as filtered back projection (FBP), may produce severe artifacts in images reconstructed from sparse-view projections. This can lower the utility of these reconstructed images. In this work, an optimization-based image reconstruction algorithm using constrained, total variation (TV) minimization, subject to data consistency, is developed and evaluated. The algorithm was evaluated using simulated phantom, physical phantom and pre-clinical EPRI data. The TV algorithm is compared with FBP using subjective and objective metrics. The results demonstrate the merits of the proposed reconstruction algorithm.

  7. A comparison of multi-view 3D reconstruction of a rock wall using several cameras and a laser scanner

    NASA Astrophysics Data System (ADS)

    Thoeni, K.; Giacomini, A.; Murtagh, R.; Kniest, E.

    2014-06-01

    This work presents a comparative study between multi-view 3D reconstruction using various digital cameras and a terrestrial laser scanner (TLS). Five different digital cameras were used in order to estimate the limits related to the camera type and to establish the minimum camera requirements needed to obtain results comparable to those of the TLS. The cameras used for this study ranged from commercial grade to professional grade and included a GoPro Hero 1080 (5 Mp), iPhone 4S (8 Mp), Panasonic Lumix LX5 (9.5 Mp), Panasonic Lumix ZS20 (14.1 Mp) and Canon EOS 7D (18 Mp). The TLS used for this work was a FARO Focus 3D laser scanner with a range accuracy of ±2 mm. The study area is a small rock wall of about 6 m height and 20 m length. The wall is partly smooth with some evident geological features, such as non-persistent joints and sharp edges. Eight control points were placed on the wall and their coordinates were measured using a total station. These coordinates were then used to georeference all models. A similar number of images was acquired from distances of approximately 5 to 10 m, depending on the field of view of each camera. The commercial software package PhotoScan was used to process the images, georeference and scale the models, and generate the dense point clouds. Finally, the open-source package CloudCompare was used to assess the accuracy of the multi-view results. Each point cloud obtained from a specific camera was compared to the point cloud obtained with the TLS, which is taken as ground truth. The result is a coloured point cloud for each camera showing the deviation in relation to the TLS data. The main goal of this study is to quantify, as objectively as possible, the quality of the multi-view 3D reconstruction results obtained with various cameras and to evaluate their applicability to geotechnical problems.
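
    The comparison step (computing, for every photogrammetric point, its deviation from the laser-scanned surface) can be sketched as a nearest-neighbour distance query; the two point clouds below are random placeholders and the printed statistics are only an example of the kind of summary reported.

      # Hedged sketch: cloud-to-cloud deviation of a camera-derived cloud from a TLS cloud.
      # Both clouds are randomly generated placeholders, not survey data.
      import numpy as np
      from scipy.spatial import cKDTree

      rng = np.random.default_rng(0)
      tls_cloud = rng.uniform(0.0, 1.0, size=(5000, 3))                      # stand-in for the TLS ground truth
      cam_cloud = tls_cloud[:2000] + rng.normal(0.0, 0.005, size=(2000, 3))  # noisy photogrammetric cloud

      dists, _ = cKDTree(tls_cloud).query(cam_cloud)   # nearest TLS point for every camera point
      print(f"mean deviation {dists.mean() * 1000:.1f} mm, "
            f"95th percentile {np.percentile(dists, 95) * 1000:.1f} mm")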

  8. Feasibility of low-dose single-view 3D fiducial tracking concurrent with external beam delivery

    SciTech Connect

    Speidel, Michael A.; Wilfley, Brian P.; Hsu, Annie; Hristov, Dimitre

    2012-04-15

    Purpose: In external-beam radiation therapy, existing on-board x-ray imaging chains orthogonal to the delivery beam cannot recover 3D target trajectories from a single view in real-time. This limits their utility for real-time motion management concurrent with beam delivery. To address this limitation, the authors propose a novel concept for on-board imaging based on the inverse-geometry Scanning-Beam Digital X-ray (SBDX) system and evaluate its feasibility for single-view 3D intradelivery fiducial tracking. Methods: A chest phantom comprising a posterior wall, a central lung volume, and an anterior wall was constructed. Two fiducials were placed along the mediastinal ridge between the lung cavities: a 1.5 mm diameter steel sphere superiorly and a gold cylinder (2.6 mm length x 0.9 mm diameter) inferiorly. The phantom was placed on a linear motion stage that moved sinusoidally. Fiducial motion was along the source-detector (z) axis of the SBDX system with ±10 mm amplitude and a programmed period of either 3.5 s or 5 s. The SBDX system was operated at 15 frames per second, 100 kVp, providing good apparent conspicuity of the fiducials. With the stage moving, detector data were acquired and subsequently reconstructed into 15 planes with a 12 mm plane-to-plane spacing using digital tomosynthesis. A tracking algorithm was applied to the image planes for each temporal frame to determine the position of each fiducial in (x,y,z)-space versus time. A 3D time-sinusoidal motion model was fit to the measured 3D coordinates and root mean square (RMS) deviations about the fitted trajectory were calculated. Results: Tracked motion was sinusoidal and primarily along the source-detector (z) axis. The RMS deviation of the tracked z-coordinate ranged from 0.53 to 0.71 mm. The motion amplitude derived from the model fit agreed with the programmed amplitude to within 0.28 mm for the steel sphere and within -0.77 mm for the gold seed. The model fit periods agreed with the programmed
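
    The model-fitting step reported above can be illustrated with the sketch below, which fits a time-sinusoidal model to a tracked coordinate and reports the RMS deviation about the fit; the tracked trace is synthetic and the noise level is an arbitrary assumption.

      # Hedged sketch: fit a sinusoidal motion model to a tracked coordinate and
      # compute the RMS deviation about the fit. The data are synthetic.
      import numpy as np
      from scipy.optimize import curve_fit

      t = np.arange(0.0, 20.0, 1 / 15.0)                    # 15 frames per second
      rng = np.random.default_rng(0)
      z_meas = 10.0 * np.sin(2 * np.pi * t / 3.5) + rng.normal(0.0, 0.6, t.size)

      def model(t, amp, period, phase, offset):
          return amp * np.sin(2 * np.pi * t / period + phase) + offset

      popt, _ = curve_fit(model, t, z_meas, p0=[8.0, 3.0, 0.0, 0.0])
      rms = np.sqrt(np.mean((z_meas - model(t, *popt)) ** 2))
      print(f"amplitude {popt[0]:.2f} mm, period {popt[1]:.2f} s, RMS deviation {rms:.2f} mm")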

  9. An Affordable, Indigenous Polarizer-Analyser System with Inbuilt Retardation Plate Function to Detect Birefringence using 3D Glasses: An Experience

    PubMed Central

    Sudhir, Dange Prasad; Saksena, Annapurna; Khurana, Nita

    2016-01-01

    Introduction: A polarizing microscope plays a vital role in a few but unique situations. A pair of crossed polarizers is used to confirm the presence of birefringent substances, and a red retardation plate is needed to evaluate the sign of birefringence. However, a polarizing microscope, especially one with a retardation plate, is very expensive. Thus, an affordable yet effective substitute using the 3D Polaroid glasses used for '3D movies' would enable widespread use of the polarizing system. Aim: To study the use of 3D Polaroid glasses procured from cinema halls in detecting birefringent substances and to study the red retardation plate function in them. Materials and Methods: Passive 3D Polaroid glasses were procured from cinema halls. They were arranged in a specific manner to obtain polarized light. The red retardation plate function can be obtained by changing the arrangement of the glasses. These glasses were used with various available models from different light microscope manufacturers. Various specimens observed included amyloid deposits, woven and lamellar bone, skeletal muscle striations, urate crystals, cholesterol crystals, suture material and glove powder. The comparison was based on subjective interpretation of the intensity and quality of birefringence. The sign of birefringence was also determined whenever relevant. Results: The birefringence observed by our system was comparable to that of the commercially available polarizing system with respect to intensity and quality. Also, there were no false positive/negative results when compared with the commercial polarizing microscope. Moreover, the system had an inbuilt red retardation plate to determine the sign of birefringence. Conclusion: The system is efficient, cheap, easily accessible, portable and compatible with all models of light microscopes. PMID:26894072

  10. 19. Photocopy of photograph. VIEW OF WORKER MANIPULATING SMALL GLASS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    19. Photocopy of photograph. VIEW OF WORKER MANIPULATING SMALL GLASS OBJECTS IN THE HOT BAY WITH MANIPULATOR ARMS AT WORK STATION E-2. Photographer unknown, ca. 1969, original photograph and negative on file at the Remote Sensing Laboratory, Department of Energy, Nevada Operations Office. - Nevada Test Site, Engine Maintenance Assembly & Disassembly Facility, Area 25, Jackass Flats, Mercury, Nye County, NV

  11. 11. GENERAL VIEW IN SENATE CHAMBER, FROM WEST; PAINTED GLASS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    11. GENERAL VIEW IN SENATE CHAMBER, FROM WEST; PAINTED GLASS WINDOW BEHIND COLUMNS DEPICTS 'THE LANDING OF DE SOTO;' MURAL TO LEFT SHOWS 'THOMAS HART BENTON'S SPEECH AT ST. LOUIS 1849;' MURAL TO RIGHT SHOWS 'PRESIDENT JEFFERSON GREETING LEWIS AND CLARK' - Missouri State Capitol, High Street between Broadway & Jefferson Streets, Jefferson City, Cole County, MO

  12. View forward from stern showing skylight with rippled glass over ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    View forward from stern showing skylight with rippled glass over compartment c-110, officer's quarters; note manually operated capstan at center, and simulated eight inch guns in sheet metal mock-up turret; also note five inch guns in sponsons port and starboard. (p37) - USS Olympia, Penn's Landing, 211 South Columbus Boulevard, Philadelphia, Philadelphia County, PA

  13. SOUTH PORCH REFLECTED PLAN; DETAIL VIEW, SOUTHWEST SIDE. Glass plate ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    SOUTH PORCH REFLECTED PLAN; DETAIL VIEW, SOUTHWEST SIDE. Glass plate stereopair number PA-1430-139 LC-HABS-GS05-ST-D-5 157.5007. Right (not printed) - Independence Hall Complex, Independence Hall, 500 Chestnut Street, Philadelphia, Philadelphia County, PA

  14. SOUTH PORCH REFLECTED PLAN; DETAIL VIEW, SOUTHWEST SIDE. Glass plate ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    SOUTH PORCH REFLECTED PLAN; DETAIL VIEW, SOUTHWEST SIDE. Glass plate stereopair number PA-1430-139 LC-HABS-GS05-ST-D-5 157.5007. Left (printed) - Independence Hall Complex, Independence Hall, 500 Chestnut Street, Philadelphia, Philadelphia County, PA

  15. fVisiOn: 360-degree viewable glasses-free tabletop 3D display composed of conical screen and modular projector arrays.

    PubMed

    Yoshida, Shunsuke

    2016-06-13

    A novel glasses-free tabletop 3D display that floats virtual objects on a flat tabletop surface is proposed. This method employs circularly arranged projectors and a conical rear-projection screen that serves as an anisotropic diffuser. Its practical implementation installs them beneath a round table and produces horizontal parallax in the circumferential direction without the use of high-speed or moving apparatus. Our prototype can display full-color, 5-cm-tall 3D characters on the table. Multiple viewers can share and enjoy its real-time animation from any angle of 360 degrees with appropriate perspectives, as if the animated figures were present. PMID:27410336

  16. A 3-D view of field-scale fault-zone cementation from geologically ground-truthed electrical resistivity

    NASA Astrophysics Data System (ADS)

    Barnes, H.; Spinelli, G. A.; Mozley, P.

    2015-12-01

    Fault-zones are an important control on fluid flow, affecting groundwater supply, hydrocarbon/contaminant migration, and waste/carbon storage. However, current models of fault seal are inadequate, primarily focusing on juxtaposition and entrainment effects, despite the recognition that fault-zone cementation is common and can dramatically reduce permeability. We map the 3D cementation patterns of the variably cemented Loma Blanca fault from the land surface to ~40 m depth, using electrical resistivity and induced polarization (IP). The carbonate-cemented fault zone is a region of anomalously low normalized chargeability, relative to the surrounding host material. Zones of low-normalized chargeability immediately under the exposed cement provide the first ground-truth that a cemented fault yields an observable IP anomaly. Low-normalized chargeability extends down from the surface exposure, surrounded by zones of high-normalized chargeability, at an orientation consistent with normal faults in the region; this likely indicates cementation of the fault zone at depth, which could be confirmed by drilling and coring. Our observations are consistent with: 1) the expectation that carbonate cement in a sandstone should lower normalized chargeability by reducing pore-surface area and bridging gaps in the pore space, and 2) laboratory experiments confirming that calcite precipitation within a column of glass beads decreases polarization magnitude. The ability to characterize spatial variations in the degree of fault-zone cementation with resistivity and IP has exciting implications for improving predictive models of the hydrogeologic impacts of cementation within faults.

  17. Optical and infrared absorption spectra of 3d transition metal ions-doped sodium borophosphate glasses and effect of gamma irradiation

    NASA Astrophysics Data System (ADS)

    Abdelghany, A. M.; ElBatal, F. H.; Azooz, M. A.; Ouis, M. A.; ElBatal, H. A.

    2012-12-01

    Undoped and transition-metal (3d TM) doped sodium borophosphate glasses were prepared. UV-visible absorption spectra were measured in the region 200-900 nm before and after gamma irradiation. The experimental optical data indicate that, before irradiation, the undoped sodium borophosphate glass shows strong and broad UV absorption and no visible bands could be identified. Such UV absorption is related to the presence of unavoidable trace iron impurities within the raw materials used for the preparation of this base borophosphate glass. The TM-doped glasses show absorption bands within the UV and/or visible regions which are characteristic of each respective TM ion, in addition to the UV absorption observed from the host base glass. Infrared absorption spectra of the undoped and TM-doped glasses reveal complex FTIR spectra consisting of extended characteristic vibrational bands which are specific to phosphate groups as the main constituent, but with contributions from some vibrations due to the borate groups. This assignment was investigated and confirmed using the deconvolution analysis technique (DAT). The effects of the different TM ions on the FTIR spectra are very limited due to the low doping level (0.2%) introduced in the glass composition. Gamma irradiation causes only minor effects on the FTIR spectra, specifically a decrease in the intensities of some bands. Such behavior is related to changes in the bond angles and/or bond lengths of some structural building units upon gamma irradiation.

  18. High-speed 3-D measurement with a large field of view based on direct-view confocal microscope with an electrically tunable lens.

    PubMed

    Jeong, Hyeong-jun; Yoo, Hongki; Gweon, DaeGab

    2016-02-22

    We propose a new structure of confocal imaging system based on a direct-view confocal microscope (DVCM) with an electrically tunable lens (ETL). Since it has no mechanical moving parts to scan either the lateral (x-y) or the axial (z) direction, the DVCM with an ETL allows for high-speed 3-dimensional (3-D) imaging. The axial response and signal intensity of the DVCM were analyzed theoretically according to the pinhole characteristics. The system was designed to have an isotropic spatial resolution of 20 µm in both the lateral and axial directions with a large field of view (FOV) of 10 × 10 mm. The FOV was maintained over the various focal shifts as a result of an integrated design of the objective lens with the ETL. The developed system was calibrated to have a linear focal shift over a range of 9 mm as a function of the current applied to the ETL. The system performance for 3-D volume imaging was demonstrated using standard height specimens and a dental plaster. PMID:26907034

  19. NavOScan: hassle-free handheld 3D scanning with automatic multi-view registration based on combined optical and inertial pose estimation

    NASA Astrophysics Data System (ADS)

    Munkelt, C.; Kleiner, B.; Thorhallsson, T.; Mendoza, C.; Bräuer-Burchardt, C.; Kühmstedt, P.; Notni, G.

    2013-05-01

    Portable 3D scanners with low measurement uncertainty are ideally suited for capturing the 3D shape of objects right in their natural environment. However, elaborate manual post-processing was usually necessary to build a complete 3D model from several overlapping scans (multiple views), or expensive and complex additional hardware (such as trackers) was needed. In contrast, the NavOScan project [1] aims at fully automatic multi-view 3D scan assembly through a Navigation Unit attached to the scanner. This lightweight device combines an optical tracking system with an inertial measurement unit (IMU) for robust relative scanner position estimation. The IMU provides robustness against swift scanner movements during view changes, while the wide-angle, high dynamic range (HDR) optical tracker, focused on the measurement object and its background, ensures accurate sensor position estimates. The underlying software framework, partly implemented in hardware (FPGA) for performance reasons, fuses both data streams in real time and estimates the navigation unit's current pose. Using this pose to calculate the starting solution of the Iterative Closest Point registration allows for automatic registration of multiple 3D scans. After finishing the individual scans required to fully acquire the object in question, the operator is readily presented with its finalized complete 3D model. The paper presents an overview of the NavOScan architecture, highlights key aspects of the registration and navigation pipeline and shows several measurement examples obtained with the Navigation Unit attached to a hand-held structured-light 3D scanner.
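
    The registration idea (seeding Iterative Closest Point with a coarse pose estimate) can be sketched as below; this is a plain point-to-point ICP in NumPy, not the NavOScan code, and the initial rotation and translation would in practice come from the navigation unit.

      # Hedged sketch: point-to-point ICP refined from an initial pose estimate.
      # `src` and `dst` are assumed (N, 3) and (M, 3) point arrays from two scans.
      import numpy as np
      from scipy.spatial import cKDTree

      def icp(src, dst, R0=np.eye(3), t0=np.zeros(3), n_iter=20):
          """Refine the rigid transform mapping src onto dst, starting from (R0, t0)."""
          R, t = R0.copy(), t0.copy()
          tree = cKDTree(dst)
          for _ in range(n_iter):
              moved = src @ R.T + t
              _, idx = tree.query(moved)               # nearest-neighbour correspondences
              p, q = src, dst[idx]
              pc, qc = p - p.mean(axis=0), q - q.mean(axis=0)
              U, _, Vt = np.linalg.svd(pc.T @ qc)      # Kabsch rigid fit
              D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
              R = Vt.T @ D @ U.T
              t = q.mean(axis=0) - p.mean(axis=0) @ R.T
          return R, t

      # R_init, t_init from the navigation unit would seed icp(scan_a, scan_b, R_init, t_init)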

  20. New 3-D view of a middle-shelf grounding-zone wedge in Eastern Basin Ross Sea, Antarctica

    NASA Astrophysics Data System (ADS)

    Bart, P. J.; Tomkin, J.

    2008-12-01

    A new large-area multibeam survey of a previously identified grounding-zone wedge on the central Ross Sea middle continental shelf was acquired during NBP0802 and NBP0803 in February 2008. Within a regional framework, this wedge corresponds to the third grounding event since the WAIS began its post-LGM retreat. The survey reveals a detailed 3-D view of a lineated topset with iceberg gouges, a smooth multi-lobed foreset and a distinct downdip pinchout of the grounding-zone wedge. Beyond the down-dip pinchout, older subglacial lineations, oblique to the younger lineations, are evident. The multibeam survey along with sub-bottom profiler records permitted us to precisely position piston cores for each of these morphologic sectors. The combined data may serve as a proxy for evaluating some aspects of the WAIS's modern grounding-zone system. For example, sediment cores at the wedge's thin landward and basinward limits recovered homogeneous gray mud below a thin olive-green pelagic drape. The absence of a similar pelagic drape embedded in the homogeneous gray muds suggests that grounded ice did not retreat past this location before the WAIS occupied the middle-shelf grounding position. In other words, the pause in WAIS retreat was not associated with any significant re-advance.

  1. 3D micro- and nano-machining of hydrogenated amorphous silicon films on SiO2/Si and glass substrates

    NASA Astrophysics Data System (ADS)

    Soleimani-Amiri, S.; Zanganeh, S.; Ramzani, R.; Talei, R.; Mohajerzadeh, S.; Azimi, S.; Sanaee, Z.

    2015-07-01

    We report on the hydrogen-assisted deep reactive ion etching of hydrogenated amorphous silicon (a-Si:H) films deposited using radio-frequency plasma enhanced chemical vapor deposition (RF-PECVD). High aspect-ratio vertical and 3D amorphous silicon features, with the desired control over the shaping of the sidewalls, in micro and nano scales, were fabricated in ordered arrays. The suitable adhesion of amorphous Si film to the underlayer allows one to apply deep micro- and nano-machining to these layers. By means of a second deposition of amorphous silicon on highly curved 3D structures and subsequent etching, the fabrication of amorphous silicon rings is feasible. In addition to photolithography, nanosphere colloidal lithography and electron beam lithography were exploited to realize ultra-small features of amorphous silicon. We have also investigated the optical properties of fabricated hexagonally patterned a-Si nanowire arrays on glass substrates and demonstrated their high potential as active layers for solar cells. This etching process presents an inexpensive method for the formation of highly featured arrays of vertical and 3D amorphous silicon rods on both glass and silicon substrates, suitable for large-area applications.

  2. 3D FEA of cemented glass fiber and cast posts with various dental cements in a maxillary central incisor.

    PubMed

    Madfa, Ahmed A; Al-Hamzi, Mohsen A; Al-Sanabani, Fadhel A; Al-Qudaimi, Nasr H; Yue, Xiao-Guang

    2015-01-01

    This study aimed to analyse and compare the stability of two dental posts cemented with four different luting agents by examining their shear stress transfer through the finite element method (FEM). Eight three-dimensional finite element models were built of a maxillary central incisor restored with glass fiber or Ni-Cr alloy cast dental posts. Each dental post was luted with zinc phosphate, Panavia resin, Super-Bond C&B resin or glass ionomer materials. The finite element models were constructed and an oblique loading of 100 N was applied. The distribution of shear stress was investigated at the posts and at the cement/dentine interfaces using ABAQUS/CAE software. The peak shear stress for the glass fiber post models was approximately three to four times lower than that for the Ni-Cr alloy cast post models. There was a negligible difference in peak shear stress when the various cements were compared, irrespective of post material. The shear stress showed the same trend for all cement materials. This study found that the glass fiber dental post reduced the shear stress concentration at the post and cement/dentine interfaces compared to the Ni-Cr alloy cast dental post. PMID:26543733

  3. Radionuclide Incorporation in Secondary Crystalline Minerals Resulting from Chemical Weathering of Selected Waste Glasses: Progress Report for Subtask 3d

    SciTech Connect

    SV Mattigod; DI Kaplan; VL LeGore; RD Orr; HT Schaef; JS Young

    1998-10-23

    Experiments were conducted in fiscal year 1998 by Pacific Northwest National Laboratory to evaluate the potential incorporation of radionuclides in secondary mineral phases that form from the weathering of vitrified nuclear waste glasses. These experiments were conducted as part of the Immobilized Low-Activity Waste Performance Assessment (ILAW-PA) to generate data on radionuclide mobilization and transport in the near-field environment of disposed vitrified wastes. An initial experiment was conducted to identify the types of secondary minerals that form from two glass samples of differing compositions, LD6 and SRL202. Chemical weathering of LD6 glass at 90 °C in contact with an aliquot of uncontaminated Hanford Site groundwater resulted in the formation of a crystalline zeolitic mineral, phillipsite. In contrast, similar chemical weathering of SRL202 glass at 90 °C resulted in the formation of a microcrystalline smectitic mineral, nontronite. A second experiment was conducted at 90 °C to assess the degree to which key radionuclides would be sequestered in the structure of these secondary crystalline minerals, namely phillipsite and nontronite. Chemical weathering of LD6 in contact with radionuclide-spiked Hanford Site groundwater indicated that substantial fractions of the total activities were retained in the phillipsite structure. Similar chemical weathering of SRL202 at 90 °C, also in contact with radionuclide-spiked Hanford Site groundwater, showed that significant fractions of the total activities were retained in the nontronite structure. These results have important implications regarding the radionuclide mobilization aspects of the ILAW-PA. Additional studies are required to confirm the results and to develop an improved understanding of the mechanisms of sequestration and attenuated release of radionuclides, to help refine certain aspects of their mobilization.

  4. Electrical manipulation of biological samples in glass-based electrofluidics fabricated by 3D femtosecond laser processing

    NASA Astrophysics Data System (ADS)

    Xu, Jian; Midorikawa, Katsumi; Sugioka, Koji

    2014-03-01

    Electrical manipulation of biological samples using glass-based electrofluidics fabricated by femtosecond laser processing, in which microfluidic structures are integrated with microelectric components, is presented. Electro-orientation of living cells with asymmetric shapes, such as the aquatic microorganism Euglena gracilis, moving in microfluidic channels is demonstrated using the fabricated electrofluidic devices. By integrating properly designed microelectrodes into the microfluidic channels, the orientation direction of Euglena cells can be well controlled.

  5. Fabrication of a three dimensional particle focusing microfluidic device using a 3D printer, PDMS, and glass

    NASA Astrophysics Data System (ADS)

    Collette, Robyn; Rosen, Daniel; Shirk, Kathryn

    Microfluidic devices are highly important in fields such as bioanalysis because they can manipulate volumes of fluid in the range of microliters to picoliters. Small samples can be quickly and easily tested using complex microfluidic devices. Typically, these devices are created through lithography techniques, which can be costly and time consuming. It has been shown that inexpensive microfluidic devices can be produced quickly using a 3D printer and PDMS. However, a size limitation prohibits the fabrication of precisely controlled microchannels. By using shrinking materials in combination with 3D printing of flow-focusing geometries, this limitation can be overcome. This research seeks to employ these techniques to quickly fabricate an inexpensive, working device with three-dimensional particle-focusing capabilities. By modifying the channel geometry, colloidal particles in a solution will be focused into a single beam when passed through this device. The ability to focus particles is necessary for a variety of biological applications which require precise detection and characterization of particles in a sample. We would like to thank the Shippensburg University Undergraduate Research Grant Program for their generous funding.

  6. High speed large viewing angle shutters for triple-flash active glasses

    NASA Astrophysics Data System (ADS)

    Caillaud, B.; Bellini, B.; de Bougrenet de la Tocnaye, J.-L.

    2009-02-01

    We present a new generation of liquid crystal shutters for active glasses, well suited to current trends in 3-D cinema involving triple-flash regimes. Our technology uses a composite smectic C* liquid crystal mixture. In this paper we focus on the electro-optical characterization of composite smectic-based shutters and compare their performance with nematic ones, demonstrating their advantages for the new generation of 3-D cinema and, more generally, 3-D HDTV.

  7. SummitView 1.0: a code to automatically generate 3D solid models of surface micro-machining based MEMS designs.

    SciTech Connect

    McBride, Cory L. (Elemental Technologies, American Fort, UT); Yarberry, Victor R.; Schmidt, Rodney Cannon; Meyers, Ray J.

    2006-11-01

    This report describes the SummitView 1.0 computer code developed at Sandia National Laboratories. SummitView is designed to generate a 3D solid model, amenable to visualization and meshing, that represents the end state of a microsystem fabrication process such as the SUMMiT (Sandia Ultra-Planar Multilevel MEMS Technology) V process. Functionally, SummitView performs essentially the same computational task as an earlier code called the 3D Geometry modeler [1]. However, because SummitView is based on 2D instead of 3D data structures and operations, it has significant speed and robustness advantages. As input it requires a definition of both the process itself and the collection of individual 2D masks created by the designer and associated with each of the process steps. The definition of the process is contained in a special process definition file [2] and the 2D masks are contained in MEM format files [3]. The code is written in C++ and consists of a set of classes and routines. The classes represent the geometric data and the SUMMiT V process steps. Classes are provided for the following process steps: Planar Deposition, Planar Etch, Conformal Deposition, Dry Etch, Wet Etch and Release Etch. SummitView is built upon the 2D Boolean library GBL-2D [4], and thus contains all of that library's functionality.
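
    The process-step classes described above operate, in essence, by applying 2D boolean operations between deposited material and the designer's mask polygons. The following sketch illustrates only that idea; it is written in Python with the shapely library rather than SummitView's C++/GBL-2D code, and the class and method names (Layer, Model2D, planar_deposition, planar_etch) are hypothetical stand-ins, not the code's actual API.

        # Illustrative sketch only -- not SummitView's API. It mimics the idea of
        # building up a layered model by applying 2D boolean operations between
        # deposited material and a designer-supplied etch mask. All names here
        # are hypothetical.
        from dataclasses import dataclass, field
        from shapely.geometry import Polygon

        @dataclass
        class Layer:
            name: str
            thickness_um: float
            footprint: Polygon          # 2D extent of the material in this layer

        @dataclass
        class Model2D:
            layers: list = field(default_factory=list)

            def planar_deposition(self, name, thickness_um, wafer_outline):
                # Blanket deposition covers the whole wafer outline.
                self.layers.append(Layer(name, thickness_um, wafer_outline))

            def planar_etch(self, layer_name, mask):
                # Remove material wherever the etch mask is open.
                for layer in self.layers:
                    if layer.name == layer_name:
                        layer.footprint = layer.footprint.difference(mask)

        wafer = Polygon([(0, 0), (100, 0), (100, 100), (0, 100)])
        mask = Polygon([(40, 40), (60, 40), (60, 60), (40, 60)])

        model = Model2D()
        model.planar_deposition("poly1", thickness_um=2.5, wafer_outline=wafer)
        model.planar_etch("poly1", mask)
        print(model.layers[0].footprint.area)   # 10000 - 400 = 9600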

  8. Laser gated viewing at ISL for vision through smoke, active polarimetry, and 3D imaging in NIR and SWIR wavelength bands

    NASA Astrophysics Data System (ADS)

    Laurenzis, Martin; Christnacher, Frank

    2013-12-01

    In this article, we review the application of laser gated viewing for improving vision through diffusing obstacles (smoke, turbid media, …), for capturing 3D scene information, and for studying material properties by polarimetric analysis at near-infrared (NIR) and shortwave-infrared (SWIR) wavelengths. Laser gated viewing has been studied since the 1960s as an active night vision method. Owing to enormous improvements in the development of compact and highly efficient laser sources and of modern sensor technologies, the maturity of demonstrator systems has risen during the past decades. Further, it has been demonstrated that laser gated viewing has versatile sensing capabilities, with applications in long-range observation under certain degraded weather conditions, vision through obstacles and fog, active polarimetry, and 3D imaging.

  9. The influence of 3d-impurities on magnetic and transport properties of CoSiB metallic glasses

    NASA Astrophysics Data System (ADS)

    Zakharenko, M.; Babich, M.; Yeremenko, G.; Semen'ko, M.

    2006-09-01

    Temperature dependencies of the magnetic susceptibility and electrical resistivity of Co-based metallic glasses (MGs) of the general composition CoMex(Si,B)28 (Me = Fe, Cr; Si:B = 18:10) have been studied up to 950 K. The studied MGs were found to be ferromagnetic at room temperature, and their Curie point TC ranges from 260 to 560 K depending on the dopant content. At temperatures above TC, a wide paramagnetic region exists. The variation of the magnetic moment upon Cr doping evidences the formation of antiferromagnetic clusters, which determine the anomalous behavior of the resistivity.

  10. Accuracy of x-ray image-based 3D localization from two C-arm views: a comparison between an ideal system and a real device

    NASA Astrophysics Data System (ADS)

    Brost, Alexander; Strobel, Norbert; Yatziv, Liron; Gilson, Wesley; Meyer, Bernhard; Hornegger, Joachim; Lewin, Jonathan; Wacker, Frank

    2009-02-01

    C-arm X-ray imaging devices are commonly used for minimally invasive cardiovascular or other interventional procedures. Calibrated state-of-the-art systems can, however, not only be used for 2D imaging but also for three-dimensional reconstruction, either using tomographic techniques or even stereotactic approaches. To evaluate the accuracy of X-ray object localization from two views, a simulation study assuming an ideal imaging geometry was carried out first. This was backed up with a phantom experiment involving a real C-arm angiography system. Both studies were based on a phantom comprising five point objects. These point objects were projected onto a flat-panel detector under different C-arm view positions. The resulting 2D positions were perturbed by adding Gaussian noise to simulate 2D point localization errors. In the next step, 3D point positions were triangulated from two views. A 3D error was computed by taking differences between the 3D positions reconstructed from the perturbed 2D positions and the initial 3D positions of the five points. This experiment was repeated for various C-arm angulations involving angular differences ranging from 15° to 165°. The smallest 3D reconstruction error was achieved, as expected, by views that were 90° apart. In this case, the simulation study yielded a 3D error of 0.82 mm +/- 0.24 mm (mean +/- standard deviation) for 2D noise with a standard deviation of 1.232 mm (4 detector pixels). The experimental result for this view configuration obtained on an AXIOM Artis C-arm (Siemens AG, Healthcare Sector, Forchheim, Germany) system was 0.98 mm +/- 0.29 mm. These results show that state-of-the-art C-arm systems can localize instruments with millimeter accuracy, and that they can accomplish this almost as well as an idealized theoretical counterpart. High stereotactic localization accuracy, good patient access, and CT-like 3D imaging capabilities render state-of-the-art C-arm systems ideal devices for X
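
    The two-view simulation described above can be reproduced in outline with a few lines of code: project a 3D point into two pinhole views, perturb the detector coordinates with Gaussian noise, and triangulate by linear least squares. The sketch below assumes an idealized geometry (made-up source-to-isocenter and source-to-detector distances), not the calibration of a real C-arm, and uses the paper's 1.232 mm noise level only as an illustrative input.

        # Minimal two-view localization simulation under an idealized pinhole
        # geometry; distances and the test point are made-up values.
        import numpy as np

        def camera_matrix(angle_deg, src_iso=800.0, src_det=1200.0):
            """Pinhole camera looking at the isocenter from a given gantry angle."""
            a = np.deg2rad(angle_deg)
            C = src_iso * np.array([np.cos(a), np.sin(a), 0.0])   # source position
            z = -C / np.linalg.norm(C)                            # optical axis toward origin
            x = np.cross([0.0, 0.0, 1.0], z)
            y = np.cross(z, x)
            R = np.vstack([x, y, z])                              # world-to-camera rotation
            t = -R @ C
            K = np.diag([src_det, src_det, 1.0])                  # focal length in mm
            return K @ np.hstack([R, t[:, None]])

        def project(P, X):
            x = P @ np.append(X, 1.0)
            return x[:2] / x[2]

        def triangulate(P1, u1, P2, u2):
            """Linear (DLT) triangulation from two perturbed detector measurements."""
            A = np.vstack([u1[0] * P1[2] - P1[0], u1[1] * P1[2] - P1[1],
                           u2[0] * P2[2] - P2[0], u2[1] * P2[2] - P2[1]])
            X = np.linalg.svd(A)[2][-1]
            return X[:3] / X[3]

        rng = np.random.default_rng(0)
        P1, P2 = camera_matrix(0.0), camera_matrix(90.0)       # views 90 degrees apart
        X_true = np.array([10.0, -5.0, 20.0])                  # a point near the isocenter
        errors = []
        for _ in range(1000):
            u1 = project(P1, X_true) + rng.normal(0.0, 1.232, 2)   # 2D noise (mm)
            u2 = project(P2, X_true) + rng.normal(0.0, 1.232, 2)
            errors.append(np.linalg.norm(triangulate(P1, u1, P2, u2) - X_true))
        print(f"mean 3D error: {np.mean(errors):.2f} +/- {np.std(errors):.2f} mm")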

  11. Spin glass and semiconducting behavior in one-dimensional BaFe2-dSe3 (d~2) crystals

    SciTech Connect

    Saparov, Bayrammurad I; Calder, Stuart A; Sipos, Balazs; Cao, Huibo; Chi, Songxue; Singh, David J; Christianson, Andrew D; Lumsden, Mark D; Sefat, A. S.

    2011-01-01

    We investigate the physical properties and electronic structure of BaFe1.79(2)Se3 crystals, which were grown out of tellurium flux. The crystal structure of the compound, an iron-deficient derivative of the ThCr2Si2-type, is built upon edge-shared FeSe4 tetrahedra fused into double chains. The semiconducting BaFe1.79(2)Se3 (ρ295K = 0.18 Ω·cm and Eg = 0.30 eV) does not order magnetically; however, there is evidence for short-range magnetic correlations of spin-glass type (Tf ≈ 50 K) in magnetization, heat capacity, and neutron diffraction results. A one-third substitution of selenium with sulfur leads to a slightly higher electrical conductivity (ρ295K = 0.11 Ω·cm and Eg = 0.22 eV) and a lower spin-glass freezing temperature (Tf ≈ 15 K), in line with the higher electrical conductivity reported for BaFe2S3. According to the electronic structure calculations, BaFe2Se3 can be considered a one-dimensional ladder structure with weak interchain coupling.

  12. Mechanical and in vitro performance of apatite-wollastonite glass ceramic reinforced hydroxyapatite composite fabricated by 3D-printing.

    PubMed

    Suwanprateeb, J; Sanngam, R; Suvannapruk, W; Panyathanmaporn, T

    2009-06-01

    An in situ hydroxyapatite/apatite-wollastonite glass ceramic composite was fabricated by a three-dimensional printing (3DP) technique and characterized. It was found that the as-fabricated mean green strength of the composite was 1.27 MPa, which was sufficient for general handling. After varying sintering temperatures (1050-1300 °C) and times (1-10 h), it was found that sintering at 1300 °C for 3 h gave the greatest flexural modulus and strength, 34.10 GPa and 76.82 MPa respectively. This was associated with a decrease in porosity and an increase in the densification of the composite resulting from liquid phase sintering. Bioactivity testing by soaking in simulated body fluid (SBF) and in vitro toxicity studies showed that the 3DP hydroxyapatite/A-W glass ceramic composite was non-toxic and bioactive. A new calcium phosphate layer was observed on the surface of the composite after soaking in SBF for only 1 day, while osteoblast cells were able to attach and attain normal morphology on the surface of the composite. PMID:19225870

  13. The Best of Both Worlds: 3D X-ray Microscopy with Ultra-high Resolution and a Large Field of View

    NASA Astrophysics Data System (ADS)

    Li, W.; Gelb, J.; Yang, Y.; Guan, Y.; Wu, W.; Chen, J.; Tian, Y.

    2011-09-01

    3D visualizations of complex structures within various samples have been achieved with high spatial resolution by X-ray computed nanotomography (nano-CT). However, high spatial resolution generally comes at the expense of field of view (FOV). Here we propose an approach that stitches several 3D volumes together into a single large volume, significantly increasing the FOV while preserving resolution. Combining this approach with nano-CT, an 18-μm FOV with sub-60-nm resolution was achieved for non-destructive 3D visualization of clustered yeasts that were too large for a single scan. The approach shows high promise for imaging other large samples in the future.
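
    The core stitching operation, placing several overlapping sub-volumes into one large volume once their mutual offsets are known, can be sketched in a few lines. The snippet below is only a schematic illustration with synthetic data and a crude averaging of overlaps; it assumes the registration offsets are already available and is not the authors' reconstruction pipeline.

        import numpy as np

        def stitch_volumes(volumes, offsets):
            """Place overlapping 3D sub-volumes into one large array.

            offsets are the (z, y, x) positions of each sub-volume in the
            stitched frame, assumed known from a prior registration step.
            Overlaps are resolved by simple averaging, a crude stand-in for
            the weighted blending a real pipeline would use.
            """
            offsets = np.asarray(offsets, dtype=int)
            shape = tuple((offsets + [v.shape for v in volumes]).max(axis=0))
            acc = np.zeros(shape)
            weight = np.zeros(shape)
            for vol, off in zip(volumes, offsets):
                sl = tuple(slice(o, o + s) for o, s in zip(off, vol.shape))
                acc[sl] += vol
                weight[sl] += 1.0
            return acc / np.maximum(weight, 1.0)

        # Two synthetic 64^3 sub-volumes overlapping by 16 voxels along the first axis.
        a = np.random.rand(64, 64, 64)
        b = np.random.rand(64, 64, 64)
        stitched = stitch_volumes([a, b], offsets=[(0, 0, 0), (48, 0, 0)])
        print(stitched.shape)   # (112, 64, 64)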

  14. Prominent rocks - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Many prominent rocks near the Sagan Memorial Station are featured in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. Wedge is at lower left; Shark, Half-Dome, and Pumpkin are at center. Flat Top, about four inches high, is at lower right. The horizon in the distance is one to two kilometers away.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.


  15. Characterization by combined optical and FT infrared spectra of 3d-transition metal ions doped-bismuth silicate glasses and effects of gamma irradiation

    NASA Astrophysics Data System (ADS)

    ElBatal, F. H.; Abdelghany, A. M.; ElBatal, H. A.

    2014-03-01

    Optical and infrared absorption spectra were measured for a binary bismuth silicate glass and for derived samples of the same composition containing an additional 0.2% of one of the 3d transition metal oxides. The same combined spectroscopic properties were also measured after subjecting the prepared glasses to a gamma dose of 8 Mrad. The optical spectra reveal strong UV-near-visible absorption bands in the base glass that extend to all TM-doped samples; these bands are attributed to absorption both from trace iron (Fe3+) ions present as contaminating impurities in the raw materials and from the main constituent trivalent bismuth (Bi3+) ions. The strong UV-near-visible absorption bands are observed to suppress any further UV bands from the TM ions. The studied glasses show obvious resistance to gamma irradiation, with only small changes observed upon irradiation. This shielding behavior is related to the presence of heavy Bi3+ ions in high concentration, which causes the observed stability of the optical absorption. Infrared absorption spectra of the studied glasses reveal characteristic vibrational bands due both to modes of the silicate network and to the sharing of Bi-O linkages; the presence of TMs at the doping level (0.2%) causes no distinct changes in the number or position of the vibrational modes. The high Bi2O3 content (70 mol%) appears to stabilize the structural building units towards gamma irradiation, as revealed by the FTIR measurements.

  16. FACET: a radiation view factor computer code for axisymmetric, 2D planar, and 3D geometries with shadowing

    SciTech Connect

    Shapiro, A.B.

    1983-08-01

    The computer code FACET calculates the radiation geometric view factor (alternatively called shape factor, angle factor, or configuration factor) between surfaces for axisymmetric, two-dimensional planar and three-dimensional geometries with interposed third surface obstructions. FACET was developed to calculate view factors for input to finite-element heat-transfer analysis codes. The first section of this report is a brief review of previous radiation-view-factor computer codes. The second section presents the defining integral equation for the geometric view factor between two surfaces and the assumptions made in its derivation. Also in this section are the numerical algorithms used to integrate this equation for the various geometries. The third section presents the algorithms used to detect self-shadowing and third-surface shadowing between the two surfaces for which a view factor is being calculated. The fourth section provides a user's input guide followed by several example problems.
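
    The defining double integral mentioned above, F(1->2) = (1/A1) ∫A1 ∫A2 cosθ1 cosθ2 / (π r²) dA2 dA1, can be checked numerically for simple unobstructed geometries. The sketch below is a plain Monte Carlo estimate for two parallel, coaxial squares with no shadowing; it is not FACET's integration algorithm, only an illustration of the quantity the code computes.

        import numpy as np

        def view_factor_parallel_squares(side=1.0, gap=1.0, n=500_000, seed=0):
            """Monte Carlo estimate of F(1->2) for two parallel, coaxial squares.

            Samples the double integral of cos(t1)*cos(t2)/(pi r^2) with no
            third-surface shadowing; for unit squares separated by a unit gap
            the tabulated view factor is roughly 0.20.
            """
            rng = np.random.default_rng(seed)
            p1 = rng.uniform(0.0, side, size=(n, 2))          # points on surface 1 (z = 0)
            p2 = rng.uniform(0.0, side, size=(n, 2))          # points on surface 2 (z = gap)
            r_sq = np.sum((p1 - p2) ** 2, axis=1) + gap ** 2  # squared separation
            integrand = gap ** 2 / (np.pi * r_sq ** 2)        # cos(t1)*cos(t2)/(pi r^2)
            return side ** 2 * integrand.mean()               # multiply by A2

        print(f"F(1->2) ~ {view_factor_parallel_squares():.4f}")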

  17. Compatibility of glass-guided recording microelectrodes in the brain stem of squirrel monkeys with high-resolution 3D MRI.

    PubMed

    Tammer, R; Ehrenreich, L; Boretius, S; Watanabe, T; Frahm, J; Michaelis, T

    2006-06-15

    Knowledge of the precise position of recording microelectrodes within the brain of a non-human primate is essential for a reliable exploration of very small anatomic structures. This work demonstrates the compatibility of a newly developed glass-guided microelectrode design and microfeed equipment with high-resolution 3D magnetic resonance imaging (MRI). T1- and T2-weighted images allow for the non-invasive visualization of chronically implanted microelectrodes within the brain stem of squirrel monkeys in vivo. Neural extracellular multi-unit recordings proved the functionality of the microelectrode before and after the use of 3D MRI, suggesting the preservation of normal brain tissue at the tip of the electrode. Because histology confirmed the absence of lesions attributable to MRI, the approach offers interactive monitoring during the course of neuroethological experiments. Consequently, MRI may become an in vivo alternative to common histological post mortem verifications of electrode tracks and hence may avoid the early sacrificing of primates after only a small number of experiments. PMID:16343640

  18. Femtosecond laser processing of evanescence field coupled waveguides in single mode glass fibers for optical 3D shape sensing and navigation

    NASA Astrophysics Data System (ADS)

    Waltermann, Christian; Baumann, Anna Lena; Bethmann, Konrad; Doering, Alexander; Koch, Jan; Angelmahr, Martin; Schade, Wolfgang

    2015-05-01

    Fiber Bragg grating based optical shape sensing is a new and promising approach to gather position and path information in environments where classical imaging systems fail. In particular, real-time in-vivo navigation of a medical catheter or endoscope without additional requirements (such as continuous exposure to X-rays) could provide a huge advantage in countless areas of medicine. Multicore fibers or bundles of glass fibers have been suggested for realizing such shape sensors, but to date all suffer from severe disadvantages. We present the realization of a third approach. With femtosecond laser pulses, local waveguides are inscribed into the cladding of a standard single-mode glass fiber. The evanescent field of the main fiber core couples to two S-shaped waveguides, which carry the light to highly reflective fiber Bragg gratings located approx. 30 μm away from the centered fiber core in an orthogonal configuration. Part of the reflected light is coupled back to the fiber core and can be read out by a fiber Bragg grating interrogator. A typical spectrum is presented, as well as the sensor signal for bending in all directions and with different radii. The entire sensor plane has an elongation of less than 4 mm and therefore enables even complicated and localized navigation applications such as medical catheters. Finally, a complete 3D shape sensor in a single-mode fiber is presented together with an exemplary application for motion capturing.

  19. Radiometric and geometric evaluation of GeoEye-1, WorldView-2 and Pléiades-1A stereo images for 3D information extraction

    NASA Astrophysics Data System (ADS)

    Poli, D.; Remondino, F.; Angiuli, E.; Agugiaro, G.

    2015-02-01

    Today the use of spaceborne Very High Resolution (VHR) optical sensors for automatic 3D information extraction is increasing in the scientific and civil communities. The 3D Optical Metrology (3DOM) unit of the Bruno Kessler Foundation (FBK) in Trento (Italy) has collected VHR satellite imagery, as well as aerial and terrestrial data, over Trento to create a complete testfield for investigations on image radiometry, geometric accuracy, automatic digital surface model (DSM) generation, 2D/3D feature extraction, city modelling and data fusion. This paper addresses the radiometric and geometric aspects of the VHR spaceborne imagery included in the Trento testfield and their potential for 3D information extraction. The dataset consists of two stereo-pairs acquired by WorldView-2 and by GeoEye-1 in panchromatic and multispectral mode, and a triplet from Pléiades-1A. For reference and validation, a DSM from an airborne LiDAR acquisition is used. The paper gives details on the project, the dataset characteristics and the achieved results.

  20. 3D crustal architecture of the Alps-Apennines join — a new view on seismic data

    NASA Astrophysics Data System (ADS)

    Schumacher, M. E.; Laubscher, H. P.

    1996-08-01

    Seismic data from the Alps-Apennines join have usually been interpreted in the form of 2D cross-sections, passing either through the Western Alps or the Ligurian Alps-Monferrato Apennines. However, the oblique SE-NW convergence of Adria and Europa and superimposed rotations imply a distinct 3D kinematic development around the Adriatic Indenter (AI), the westernmost spur of Adria. In order to develop kinematic models, data on motion at the different margins of AI must be coordinated. Along the northern margin, the dextrally transpressive Insubric line (IL) was active between 25 and 16 Ma (Insubric-Helvetic phase of Alpine orogeny). Contemporaneously, along the southern margin (Paleo-Apenninic phase), a complementary sinistral motion took place along the Villalvernia-Varzi line (VVL). It emplaced the Monferrato Apennines westward to the north of the Ligurian Alps by carrying them westward on top of AI. Between 14 and 6 Ma (Jura-Lombardic phase of Alpine orogeny) the Lombardic thrust belt developed on the northern margin of AI, now largely hidden under the Po plain. Its continuation to the southwest is impeded by older thrust masses along the Western Alps that consist largely of basement, their sediments having been eroded, as noted on the deep reflection line CROP ALPI-1 by earlier investigators. This line, moreover, contains a deep reflection band originating in the autochthonous Mesozoic of the Apenninic foredeep. In order to better visualize this origin and the relation of further elements identified on reflection lines around the northwestern end of the Monferrato Apennines, a 3D fence diagram was constructed. It helps in establishing a 3D structural-kinematic model of the Alps-Apennines join based on the kinematics of AI. This model features an underthrust of AI under the western Alps in the Paleo-Apenninic phase. In the course of this underthrust, the Paleo-Apenninic elements of the Monferrato moved under the marginal thrusts of the western Alps. Subsequent Neo

  1. Gypsies in the palace: Experimentalist's view on the use of 3-D physics-based simulation of hillslope hydrological response

    USGS Publications Warehouse

    James, A.L.; McDonnell, Jeffery J.; Tromp-Van Meerveld, I.; Peters, N.E.

    2010-01-01

    As a fundamental unit of the landscape, hillslopes are studied for their retention and release of water and nutrients across a wide range of ecosystems. The understanding of these near-surface processes is relevant to issues of runoff generation, groundwater-surface water interactions, catchment export of nutrients, dissolved organic carbon, contaminants (e.g. mercury) and ultimately surface water health. We develop a 3-D physics-based representation of the Panola Mountain Research Watershed experimental hillslope using the TOUGH2 sub-surface flow and transport simulator. A recent investigation of sub-surface flow within this experimental hillslope has generated important knowledge of threshold rainfall-runoff response and its relation to patterns of transient water table development. This work has identified components of the 3-D sub-surface, such as bedrock topography, that contribute to changing connectivity in saturated zones and the generation of sub-surface stormflow. Here, we test the ability of a 3-D hillslope model (both calibrated and uncalibrated) to simulate forested hillslope rainfall-runoff response and internal transient sub-surface stormflow dynamics. We also provide a transparent illustration of physics-based model development, issues of parameterization, examples of model rejection and usefulness of data types (e.g. runoff, mean soil moisture and transient water table depth) to the model enterprise. Our simulations show the inability of an uncalibrated model based on laboratory and field characterization of soil properties and topography to successfully simulate the integrated hydrological response or the distributed water table within the soil profile. Although not an uncommon result, the failure of the field-based characterized model to represent system behaviour is an important challenge that continues to vex scientists at many scales. We focus our attention particularly on examining the influence of bedrock permeability, soil anisotropy and

  2. On the Use of Uavs in Mining and Archaeology - Geo-Accurate 3d Reconstructions Using Various Platforms and Terrestrial Views

    NASA Astrophysics Data System (ADS)

    Tscharf, A.; Rumpler, M.; Fraundorfer, F.; Mayer, G.; Bischof, H.

    2015-08-01

    During the last decades photogrammetric computer vision systems have become well established in scientific and commercial applications. Especially the increasing affordability of unmanned aerial vehicles (UAVs), in conjunction with automated multi-view processing pipelines, has resulted in an easy way of acquiring spatial data and creating realistic and accurate 3D models. With the use of multicopter UAVs, it is possible to record highly overlapping images from almost terrestrial camera positions to oblique and nadir aerial images, owing to the ability to navigate slowly, hover and capture images at nearly any possible position. Multicopter UAVs thus bridge the gap between terrestrial and traditional aerial image acquisition and are therefore ideally suited to enable easy and safe data collection and inspection tasks in complex or hazardous environments. In this paper we present a fully automated processing pipeline for precise, metric and geo-accurate 3D reconstructions of complex geometries using various imaging platforms. Our workflow allows for georeferencing of UAV imagery based on GPS measurements of camera stations from an on-board GPS receiver as well as tie and control point information. Ground control points (GCPs) are integrated directly in the bundle adjustment to refine the georegistration and correct for systematic distortions of the image block. We discuss our approach based on three different case studies for applications in mining and archaeology and present several accuracy-related analyses investigating georegistration, camera network configuration and ground sampling distance. Our approach is furthermore suited for seamlessly matching and integrating images from different viewpoints and cameras (aerial and terrestrial as well as inside views) into one single reconstruction. Together with aerial images from a UAV, we are able to enrich 3D models by combining terrestrial images as well as inside views of an object by joint image processing to

  3. Creating 3D visualizations of MRI data: A brief guide

    PubMed Central

    Madan, Christopher R.

    2015-01-01

    While magnetic resonance imaging (MRI) data is itself 3D, it is often difficult to adequately present the results in papers and slides in 3D. As a result, findings of MRI studies are often presented in 2D instead. A solution is to create figures that include perspective and can convey 3D information; such figures can sometimes be produced by standard functional magnetic resonance imaging (fMRI) analysis packages and related specialty programs. However, many options cannot provide functionality such as visualizing activation clusters that are both cortical and subcortical (i.e., a 3D glass brain), the production of several statistical maps with an identical perspective in the 3D rendering, or animated renderings. Here I detail an approach for creating 3D visualizations of MRI data that satisfies all of these criteria. Though a 3D ‘glass brain’ rendering can sometimes be difficult to interpret, it is useful in showing a more overall representation of the results, whereas the traditional slices show a more local view. Combined, presenting both 2D and 3D representations of MR images can provide a more comprehensive view of the study’s findings. PMID:26594340
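
    For readers who want a quick, scriptable route to such figures, the snippet below shows one possible open-source toolchain (nilearn) for producing a maximum-intensity ‘glass brain’ projection alongside conventional slices. It is offered as an alternative illustration, not as the specific programs discussed in the guide, and the file name is a placeholder for your own thresholded statistical map.

        # One possible open-source route to 'glass brain' and slice figures;
        # not necessarily the toolchain described in the guide above.
        from nilearn import plotting

        stat_map = "stat_map.nii.gz"   # placeholder: your own NIfTI statistical map

        # Maximum-intensity 'glass brain' style projection.
        plotting.plot_glass_brain(stat_map, threshold=3.0, display_mode="ortho",
                                  colorbar=True, output_file="glass_brain.png")

        # Conventional axial slices for comparison with the 3D-style projection.
        plotting.plot_stat_map(stat_map, threshold=3.0, display_mode="z",
                               cut_coords=5, output_file="slices.png")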

  4. 3D microscope

    NASA Astrophysics Data System (ADS)

    Iizuka, Keigo

    2008-02-01

    In order to circumvent the fact that only one observer can view the image from a stereoscopic microscope, an attachment was devised for displaying the 3D microscopic image on a large LCD monitor for viewing by multiple observers in real time. The principle of operation, design, fabrication, and performance are presented, along with tolerance measurements relating to the properties of the cellophane half-wave plate used in the design.

  5. Split view Time-resolved PIV with a CW laser for 3-D measurements of planar velocity field

    NASA Astrophysics Data System (ADS)

    Elzawawy, Amir; Andreopoulos, Yiannis

    2011-11-01

    The demand to increase the temporal resolution of Stereo-PIV systems used in the measurement of highly unsteady flow fields is constrained by the low repetition rate of pulsed lasers and cameras. The availability of high-frame-rate digital cameras and CW lasers opens new possibilities in the development of continuous PIV systems with increased temporal resolution. The present setup consists of a single high-frame-rate camera that can accommodate two simultaneous stereo-view images of the deforming fluid on its CMOS sensor, obtained by using four appropriately positioned planar mirrors. This approach offers several advantages over traditional systems with two different cameras. First, it provides identical system parameters for the two views, which minimizes their differences and thus facilitates robust stereo matching. Second, it eliminates any need for synchronization between the cameras and the laser. And third, its cost is substantially lower than the cost of a system with two cameras. The development of the technique will be described and the results of qualification tests in several wind tunnel flows will be presented and discussed. Sponsored by NSF Grant #1033117.

  6. Wavelet-Based 3D Reconstruction of Microcalcification Clusters from Two Mammographic Views: New Evidence That Fractal Tumors Are Malignant and Euclidean Tumors Are Benign

    PubMed Central

    Batchelder, Kendra A.; Tanenbaum, Aaron B.; Albert, Seth; Guimond, Lyne; Kestener, Pierre; Arneodo, Alain; Khalil, Andre

    2014-01-01

    The 2D Wavelet-Transform Modulus Maxima (WTMM) method was used to detect microcalcifications (MC) in human breast tissue seen in mammograms and to characterize the fractal geometry of benign and malignant MC clusters. This was done in the context of a preliminary analysis of a small dataset, via a novel way to partition the wavelet-transform space-scale skeleton. For the first time, the estimated 3D fractal structure of a breast lesion was inferred by pairing the information from two separate 2D projected mammographic views of the same breast, i.e. the cranial-caudal (CC) and mediolateral-oblique (MLO) views. As a novelty, we define the “CC-MLO fractal dimension plot”, where a “fractal zone” and “Euclidean zones” (non-fractal) are defined. 118 images (59 cases, 25 malignant and 34 benign) obtained from a digital databank of mammograms with known radiologist diagnostics were analyzed to determine which cases would be plotted in the fractal zone and which cases would fall in the Euclidean zones. 92% of malignant breast lesions studied (23 out of 25 cases) were in the fractal zone while 88% of the benign lesions were in the Euclidean zones (30 out of 34 cases). Furthermore, a Bayesian statistical analysis shows that, with 95% credibility, the probability that fractal breast lesions are malignant is between 74% and 98%. Alternatively, with 95% credibility, the probability that Euclidean breast lesions are benign is between 76% and 96%. These results support the notion that the fractal structure of malignant tumors is more likely to be associated with an invasive behavior into the surrounding tissue compared to the less invasive, Euclidean structure of benign tumors. Finally, based on indirect 3D reconstructions from the 2D views, we conjecture that all breast tumors considered in this study, benign and malignant, fractal or Euclidean, restrict their growth to 2-dimensional manifolds within the breast tissue. PMID:25222610
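
    The quoted credibility intervals are the kind of result a simple Beta-Binomial analysis produces. The sketch below assumes a uniform Beta(1, 1) prior, which is not necessarily the prior used in the paper, and applies it to the counts quoted above (23 of 25 fractal-zone lesions malignant, 30 of 34 Euclidean-zone lesions benign); the numbers are therefore illustrative rather than a reproduction of the published analysis.

        # Beta-Binomial credible intervals; a uniform prior is assumed for illustration.
        from scipy import stats

        def credible_interval(successes, total, level=0.95):
            posterior = stats.beta(1 + successes, 1 + (total - successes))
            lo, hi = posterior.ppf([(1 - level) / 2, (1 + level) / 2])
            return lo, hi

        lo, hi = credible_interval(23, 25)   # malignant lesions in the fractal zone
        print(f"P(malignant | fractal): {lo:.2f} - {hi:.2f}")

        lo, hi = credible_interval(30, 34)   # benign lesions in the Euclidean zones
        print(f"P(benign | Euclidean): {lo:.2f} - {hi:.2f}")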

  7. The New Realm of 3-D Vision

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Dimension Technologies Inc. developed a line of 2-D/3-D Liquid Crystal Display (LCD) screens, including a 15-inch model priced at consumer levels. DTI's family of flat panel LCD displays, called the Virtual Window(TM), provides real-time 3-D images without the use of glasses, head trackers, helmets, or other viewing aids. Most of the company's initial 3-D display research was funded through NASA's Small Business Innovation Research (SBIR) program. The images on DTI's displays appear to leap off the screen and hang in space. The display accepts input from computers or stereo video sources, and can be switched from 3-D to full-resolution 2-D viewing with the push of a button. The Virtual Window displays have applications in data visualization, medicine, architecture, business, real estate, entertainment, and other research, design, military, and consumer applications. Displays are currently used for computer games, protein analysis, and surgical imaging. The technology greatly benefits the medical field, as surgical simulators are helping to increase the skills of surgical residents. Virtual Window(TM) is a trademark of Dimension Technologies Inc.

  8. Sojourner near Barnacle Bill - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    At right, Sojourner has traveled off the lander's rear ramp and onto the surface of Mars. 3D glasses are necessary to identify surface detail. The rock Barnacle Bill and the rear ramp are to the left of Sojourner.

    The image was taken by the Imager for Mars Pathfinder (IMP) on Sol 3. The IMP is a stereo imaging system with color capability provided by 24 selectable filters -- twelve filters per 'eye'.


  9. Increasing the effective aperture of a detector and enlarging the receiving field of view in a 3D imaging lidar system through hexagonal prism beam splitting.

    PubMed

    Lee, Xiaobao; Wang, Xiaoyi; Cui, Tianxiang; Wang, Chunhui; Li, Yunxi; Li, Hailong; Wang, Qi

    2016-07-11

    The detector in a highly accurate, high-definition scanning 3D imaging lidar system requires both high frequency bandwidth and a sufficient photosensitive area. To address the small photosensitive area of an existing indium gallium arsenide detector with a given frequency bandwidth, this study proposes a method for increasing the receiving field of view (FOV) and enlarging the effective photosensitive aperture of the detector through hexagonal prism beam splitting. The principle and construction of hexagonal prism beam splitting are also discussed. Accordingly, a receiving optical system with two hexagonal prisms is presented, and the beam-splitting effect is analyzed in a simulation experiment. Using this novel method, the FOV of the receiving optical system can be effectively extended to ±5°, and the effective photosensitive aperture of the detector is increased from 0.5 mm to 1.5 mm. PMID:27410800

  10. TU-C-BRE-04: 3D Gel Dosimetry Using ViewRay On-Board MR Scanner: A Feasibility Study

    SciTech Connect

    Zhang, L; Du, D; Green, O; Rodriguez, V; Wooten, H; Xiao, Z; Yang, D; Hu, Y; Li, H

    2014-06-15

    Purpose: MR-based 3D gel has been proposed for radiation therapy dosimetry. However, access to an MR scanner has been one of the limiting factors for its wide acceptance. The recent commercialization of an on-board MR-IGRT device (ViewRay) may render the availability issue less of a concern. This work reports our attempts to simulate MR-based dose measurement accuracy on ViewRay using three different gels. Methods: A spherical BANG gel dosimeter was purchased from MGS Research. Cylindrical MAGIC gel and Fricke gel were fabricated in-house according to published recipes. After irradiation, BANG and MAGIC were imaged using a dual-echo spin echo sequence for T2 measurement on a Philips 1.5T MR scanner, while Fricke gel was imaged using multiple spin echo sequences. The difference between the MR measured and TPS calculated dose was defined as noise. The noise power spectrum was calculated and then simulated for the 0.35 T magnetic field associated with ViewRay. The estimated noise was then added to TG-119 test cases to simulate measured dose distributions. Simulated measurements were evaluated against TPS calculated doses using gamma analysis. Results: Given the same gel, sequence and coil setup, with a FOV of 180×90×90 mm3, resolution of 3×3×3 mm3, and scanning time of 30 minutes, the simulated measured dose distribution using BANG would have a gamma passing rate greater than 90% (3%/3mm and absolute). With a FOV of 180×90×90 mm3, resolution of 4×4×5 mm3, and scanning time of 45 minutes, the simulated measured dose distribution would have a gamma passing rate greater than 97%. MAGIC exhibited similar performance, while Fricke gel was inferior due to much higher noise. Conclusions: The simulation results demonstrate that it may be feasible to use MAGIC and BANG gels for 3D dose verification using the ViewRay low-field on-board MRI scanner.
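
    The gamma passing rates reported above come from comparing a (simulated) measured dose grid against the TPS dose with a 3%/3 mm criterion. The brute-force sketch below computes that metric on synthetic 3D grids; it is not the ViewRay or TPS workflow, uses a global dose criterion without interpolation, and wraps at the volume edges, so it is only a schematic of the evaluation step.

        import numpy as np

        def gamma_pass_rate(ref, ev, spacing_mm=3.0, dose_tol=0.03, dta_mm=3.0,
                            search_vox=2, low_dose_cut=0.1):
            """Brute-force global 3%/3 mm gamma pass rate on two 3D dose grids."""
            dd = dose_tol * ref.max()                      # global dose criterion
            mask = ref > low_dose_cut * ref.max()          # ignore very low doses
            offs = range(-search_vox, search_vox + 1)
            gamma_sq = np.full(ref.shape, np.inf)
            for i in offs:
                for j in offs:
                    for k in offs:
                        dist_sq = (i * i + j * j + k * k) * spacing_mm ** 2 / dta_mm ** 2
                        shifted = np.roll(ev, shift=(i, j, k), axis=(0, 1, 2))  # wraps at edges
                        dose_sq = (shifted - ref) ** 2 / dd ** 2
                        gamma_sq = np.minimum(gamma_sq, dist_sq + dose_sq)
            return np.mean(gamma_sq[mask] <= 1.0)

        rng = np.random.default_rng(1)
        reference = rng.random((30, 30, 30)) * 2.0 + 1.0                 # synthetic "TPS" dose
        measured = reference + rng.normal(0.0, 0.05, reference.shape)    # add measurement noise
        print(f"gamma pass rate: {100 * gamma_pass_rate(reference, measured):.1f}%")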

  11. Development of an automultiscopic true 3D display (Invited Paper)

    NASA Astrophysics Data System (ADS)

    Kurtz, Russell M.; Pradhan, Ranjit D.; Aye, Tin M.; Yu, Kevin H.; Okorogu, Albert O.; Chua, Kang-Bin; Tun, Nay; Win, Tin; Schindler, Axel

    2005-05-01

    True 3D displays, whether generated by volume holography, merged stereopsis (requiring glasses), or autostereoscopic methods (stereopsis without the need for special glasses), are useful in a great number of applications, ranging from training through product visualization to computer gaming. Holography provides an excellent 3D image but cannot yet be produced in real time, merged stereopsis results in accommodation-convergence conflict (where distance cues generated by the 3D appearance of the image conflict with those obtained from the angular position of the eyes) and lacks parallax cues, and autostereoscopy produces a 3D image visible only from a small region of space. Physical Optics Corporation is developing the next step in real-time 3D displays, the automultiscopic system, which eliminates accommodation-convergence conflict, produces 3D imagery from any position around the display, and includes true image parallax. Theory of automultiscopic display systems is presented, together with results from our prototype display, which produces 3D video imagery with full parallax cues from any viewing direction.

  12. 6. Building E9; view of glass lines for dilute liquor ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    6. Building E-9; view of glass lines for dilute liquor and spent acid; second floor, looking ESE. Bottom of wash tank is at the top of the view. (Ryan and Harms) - Holston Army Ammunition Plant, RDX-and-Composition-B Manufacturing Line 9, Kingsport, Sullivan County, TN

  13. 3d visualization of atomistic simulations on every desktop

    NASA Astrophysics Data System (ADS)

    Peled, Dan; Silverman, Amihai; Adler, Joan

    2013-08-01

    Once upon a time, after making simulations, one had to go to a visualization center with fancy SGI machines to run a GL visualization and make a movie. More recently, OpenGL and its mesa clone have let us create 3D on simple desktops (or laptops), whether or not a Z-buffer card is present. Today, 3D a la Avatar is a commodity technique, presented in cinemas and sold for home TV. However, only a few special research centers have systems large enough for entire classes to view 3D, or special immersive facilities like visualization CAVEs or walls, and not everyone finds 3D immersion easy to view. For maximum physics with minimum effort a 3D system must come to each researcher and student. So how do we create 3D visualization cheaply on every desktop for atomistic simulations? After several months of attempts to select commodity equipment for a whole room system, we selected an approach that goes back a long time, even predating GL. The old concept of anaglyphic stereo relies on two images, slightly displaced, and viewed through colored glasses, or two squares of cellophane from a regular screen/projector or poster. We have added this capability to our AViz atomistic visualization code in its new, 6.1 version, which is RedHat, CentOS and Ubuntu compatible. Examples using data from our own research and that of other groups will be given.
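
    The anaglyphic recipe described above is just a per-channel combination of two slightly displaced renders: the red channel comes from the left-eye image and the green and blue channels from the right-eye image. AViz itself is C++/Qt; the snippet below is a minimal Python illustration of the color-channel mixing, with placeholder file names for the two renders.

        import numpy as np
        from PIL import Image

        # Left- and right-eye renders of the same scene, slightly displaced
        # horizontally; the file names are placeholders.
        left = np.asarray(Image.open("left_eye.png").convert("RGB"))
        right = np.asarray(Image.open("right_eye.png").convert("RGB"))

        # Red-cyan anaglyph: red from the left view, green and blue from the right.
        # Through red (left) / cyan (right) glasses each eye sees mostly its own
        # image, which produces the depth impression.
        anaglyph = np.empty_like(left)
        anaglyph[..., 0] = left[..., 0]        # R    <- left eye
        anaglyph[..., 1:] = right[..., 1:]     # G, B <- right eye
        Image.fromarray(anaglyph).save("anaglyph.png")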

  14. VPython: Python plus Animations in Stereo 3D

    NASA Astrophysics Data System (ADS)

    Sherwood, Bruce

    2004-03-01

    Python is a modern object-oriented programming language. VPython (http://vpython.org) is a combination of Python (http://python.org), the Numeric module from LLNL (http://www.pfdubois.com/numpy), and the Visual module created by David Scherer, all of which have been under continuous development as open source projects. VPython makes it easy to write programs that generate real-time, navigable 3D animations. The Visual module includes a set of 3D objects (sphere, cylinder, arrow, etc.), tools for creating other shapes, and support for vector algebra. The 3D renderer runs in a parallel thread, and animations are produced as a side effect of computations, freeing the programmer to concentrate on the physics. Applications include educational and research visualization. In the Fall of 2003 Hugh Fisher at the Australian National University, John Zelle at Wartburg College, and I contributed to a new stereo capability of VPython. By adding a single statement to an existing VPython program, animations can be viewed in true stereo 3D. One can choose several modes: active shutter glasses, passive polarized glasses, or colored glasses (e.g. red-cyan). The talk will demonstrate the new stereo capability and discuss the pros and cons of various schemes for display of stereo 3D for a large audience. Supported in part by NSF grant DUE-0237132.
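
    In classic VPython (the 'visual' module), the single statement referred to above was, as documented at the time, an assignment to scene.stereo; attribute support varies across VPython versions, so the toy animation below should be read as a sketch rather than a guaranteed example for current releases.

        # Sketch for classic VPython (the 'visual' module); scene.stereo support
        # varies between VPython versions.
        from visual import *

        scene.stereo = 'redcyan'        # e.g. 'redcyan', 'crosseyed', 'passive', 'active'

        ball = sphere(pos=vector(-5, 0, 0), radius=0.5, color=color.yellow)
        for _ in range(2000):
            rate(60)                    # limit the animation to 60 frames per second
            ball.pos = ball.pos + vector(0.02, 0, 0)
            if ball.pos.x > 5:
                ball.pos = vector(-5, 0, 0)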

  15. Image quality improvement for a 3D structure exhibiting multiple 2D patterns and its implementation.

    PubMed

    Hirayama, Ryuji; Nakayama, Hirotaka; Shiraki, Atsushi; Kakue, Takashi; Shimobaba, Tomoyoshi; Ito, Tomoyoshi

    2016-04-01

    A three-dimensional (3D) structure designed by our proposed algorithm can simultaneously exhibit multiple two-dimensional patterns. The 3D structure provides multiple patterns with directional characteristics by distributing the effects of the artefacts. In this study, we proposed an iterative algorithm to improve the image quality of the exhibited patterns and verified its effectiveness using numerical simulations. Moreover, we fabricated different 3D glass structures (an octagonal prism, a cube and a sphere) using the proposed algorithm. All 3D structures exhibit four patterns, and different patterns can be observed depending on the viewing direction. PMID:27137021

  16. Teaching Geography with 3-D Visualization Technology

    ERIC Educational Resources Information Center

    Anthamatten, Peter; Ziegler, Susy S.

    2006-01-01

    Technology that helps students view images in three dimensions (3-D) can support a broad range of learning styles. "Geo-Wall systems" are visualization tools that allow scientists, teachers, and students to project stereographic images and view them in 3-D. We developed and presented 3-D visualization exercises in several undergraduate courses.…

  17. `We put on the glasses and Moon comes closer!' Urban Second Graders Exploring the Earth, the Sun and Moon Through 3D Technologies in a Science and Literacy Unit

    NASA Astrophysics Data System (ADS)

    Isik-Ercan, Zeynep; Zeynep Inan, Hatice; Nowak, Jeffrey A.; Kim, Beomjin

    2014-01-01

    This qualitative case study describes (a) the ways 3D visualization, coupled with other science and literacy experiences, supported young children's first exploration of the Earth-Sun-Moon system and (b) the perspectives of classroom teachers and children on using 3D visualization. We created three interactive 3D software modules that simulate day and night, Moon phases and seasons. These modules were used in a science and literacy unit for 35 second graders at an urban elementary school in Midwestern USA. Data included pre- and post-interviews, audio-taped lessons and classroom observations. Post-interviews demonstrated that children's knowledge of the shapes and the movements of the Earth and Moon, alternation of day and night, the occurrence of the seasons, and Moon's changing appearance increased. Second graders reported that they enjoyed expanding their knowledge through hands-on experiences; through its reality effect, 3D visualization enabled them to observe the space objects that move in the virtual space. The teachers noted that 3D visualization stimulated children's interest in space and that using 3D visualization in combination with other teaching methods-literacy experiences, videos and photos, simulations, discussions, and presentations-supported student learning. The teachers and the students still experienced challenges using 3D visualization due to technical problems with 3D vision and time constraints. We conclude that 3D visualization offers hands-on experiences for challenging science concepts and may support young children's ability to view phenomena that would typically be observed through direct, long-term observations in outer space. Results imply a reconsideration of assumed capabilities of young children to understand astronomical phenomena.

  18. [3-D ultrasound in gastroenterology].

    PubMed

    Zoller, W G; Liess, H

    1994-06-01

    Three-dimensional (3D) sonography represents a development of noninvasive diagnostic imaging beyond real-time two-dimensional (2D) sonography. The use of transparent rotating scans, comparable to a block of glass, generates a 3D effect. The objective of the present study was to optimize the 3D presentation of abdominal findings. Additional investigations were made with a new volumetric program to determine the volume of selected findings of the liver. The results were compared with the volumes estimated by 2D sonography and 2D computed tomography (CT). For the processing of 3D images, typical parameter constellations were found for the different findings, which facilitated processing of the 3D images. In more than 75% of the cases examined we found an optimal 3D presentation of sonographic findings with respect to the evaluation criteria we developed for the 3D imaging of processed data. Considerable differences were found between the volumes of the liver findings estimated by the three different techniques. 3D ultrasound represents a valuable method for judging morphological appearance in abdominal findings. The possibility of volumetric measurements enlarges its potential diagnostic significance. Further clinical investigations are necessary to find out whether definite differentiation between benign and malignant findings is possible. PMID:7919882

  19. Multiviewer 3D monitor

    NASA Astrophysics Data System (ADS)

    Kostrzewski, Andrew A.; Aye, Tin M.; Kim, Dai Hyun; Esterkin, Vladimir; Savant, Gajendra D.

    1998-09-01

    Physical Optics Corporation has developed an advanced 3-D virtual reality system for use with simulation tools for training technical and military personnel. This system avoids such drawbacks of other virtual reality (VR) systems as eye fatigue, headaches, and alignment for each viewer, all of which are due to the need to wear special VR goggles. The new system is based on direct viewing of an interactive environment. This innovative holographic multiplexed screen technology makes it unnecessary for the viewer to wear special goggles.

  20. Yogi the rock - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Yogi, a rock taller than rover Sojourner, is the subject of this image, taken in stereo by the deployed Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. The soil in the foreground has been the location of multiple soil mechanics experiments performed by Sojourner's cleated wheels. Pathfinder scientists were able to control the force inflicted on the soil beneath the rover's wheels, giving them insight into the soil's mechanical properties. The soil mechanics experiments were conducted after this image was taken.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.


  1. Electrocatalysts: Guided Evolution of Bulk Metallic Glass Nanostructures: A Platform for Designing 3D Electrocatalytic Surfaces (Adv. Mater. 10/2016).

    PubMed

    Doubek, Gustavo; Sekol, Ryan C; Li, Jinyang; Ryu, Won-Hee; Gittleson, Forrest S; Nejati, Siamak; Moy, Eric; Reid, Candy; Carmo, Marcelo; Linardi, Marcelo; Bordeenithikasem, Punnathat; Kinser, Emily; Liu, Yanhui; Tong, Xiao; Osuji, Chinedum O; Schroers, Jan; Mukherjee, Sundeep; Taylor, André D

    2016-03-01

    On page 1940, A. D. Taylor and co-workers demonstrate nanoporous bicontinuous structures using controlled structural evolution of metallic glass. By using techniques such as dealloying, galvanic replacement, and under-potential deposition, bulk-metallic-glass alloys can be pushed beyond their compositional limitations and tuned for a wide variety of interfacial and electrochemical reactions. Examples are illustrated for hydrogen and methanol oxidation, as well as oxygen reduction reactions. PMID:26947938

  2. Volumetric 3D Display System with Static Screen

    NASA Technical Reports Server (NTRS)

    Geng, Jason

    2011-01-01

    Current display technology has relied on flat, 2D screens that cannot truly convey the third dimension of visual information: depth. In contrast to conventional visualization that is primarily based on 2D flat screens, a volumetric 3D display possesses a true 3D display volume and physically places each 3D voxel of the displayed 3D image at its true 3D (x,y,z) spatial position. Each voxel, analogous to a pixel in a 2D image, emits light from that position to form a real 3D image in the eyes of the viewers. Such true volumetric 3D display technology provides both physiological (accommodation, convergence, binocular disparity, and motion parallax) and psychological (image size, linear perspective, shading, brightness, etc.) depth cues to the human visual system to help in the perception of 3D objects. In a volumetric 3D display, viewers can watch the displayed 3D images from a complete 360° range of viewpoints without using any special eyewear. Volumetric 3D display techniques may lead to a quantum leap in information display technology and can dramatically change the way humans interact with computers, which can lead to significant improvements in the efficiency of learning and knowledge management processes. Within a block of glass, a large number of tiny voxel dots are created using a recently available machining technique called laser subsurface engraving (LSE). LSE is able to produce tiny physical crack points (as small as 0.05 mm in diameter) at any (x,y,z) location within the cube of transparent material. The crack dots, when illuminated by a light source, scatter the light and form visible voxels within the 3D volume. The locations of these tiny voxels are strategically determined such that each can be illuminated by a light ray from a high-resolution digital mirror device (DMD) light engine. The distribution of these voxels occupies the full display volume within the static 3D glass screen. This design eliminates any moving screen seen in previous
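
    The engraving step consumes, in effect, a list of (x, y, z) crack-point coordinates inside the glass block. The sketch below generates such a point list for a simple spherical shell on a regular grid; the block size, grid pitch and output format are made-up illustration values, not parameters of the display described above.

        import numpy as np

        def sphere_shell_voxels(block_mm=(60.0, 60.0, 60.0), pitch_mm=0.3,
                                radius_mm=20.0, shell_mm=0.3):
            """(x, y, z) points approximating a spherical shell inside a glass block.

            Only an illustration of the kind of point list a laser subsurface
            engraving machine would consume; all dimensions are made up.
            """
            axes = [np.arange(-b / 2, b / 2 + pitch_mm, pitch_mm) for b in block_mm]
            x, y, z = np.meshgrid(*axes, indexing="ij")
            r = np.sqrt(x ** 2 + y ** 2 + z ** 2)
            keep = np.abs(r - radius_mm) < shell_mm / 2
            return np.column_stack([x[keep], y[keep], z[keep]])

        points = sphere_shell_voxels()
        print(points.shape)                                    # (number of voxels, 3)
        np.savetxt("engraving_points.csv", points, delimiter=",", fmt="%.3f")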

  3. Timescales of quartz crystallization estimated from glass inclusion faceting using 3D propagation phase-contrast x-ray tomography: examples from the Bishop (California, USA) and Oruanui (Taupo Volcanic Zone, New Zealand) Tuffs

    NASA Astrophysics Data System (ADS)

    Pamukcu, A.; Gualda, G. A.; Anderson, A. T.

    2012-12-01

    Compositions of glass inclusions have long been studied for the information they provide on the evolution of magma bodies. Textures - sizes, shapes, positions - of glass inclusions have received less attention, but they can also provide important insight into magmatic processes, including the timescales over which magma bodies develop and erupt. At magmatic temperatures, initially round glass inclusions will become faceted (attain a negative crystal shape) through the process of dissolution and re-precipitation, such that the extent to which glass inclusions are faceted can be used to estimate timescales. The size and position of the inclusion within a crystal will influence how much faceting occurs: a larger inclusion will facet more slowly; an inclusion closer to the rim will have less time to facet. As a result, it is critical to properly document the size, shape, and position of glass inclusions to assess faceting timescales. Quartz is an ideal mineral to study glass inclusion faceting, as Si is the only diffusing species of concern, and Si diffusion rates are relatively well-constrained. Faceting time calculations to date (Gualda et al., 2012) relied on optical microscopy to document glass inclusions. Here we use 3D propagation phase-contrast x-ray tomography to image glass inclusions in quartz. This technique enhances inclusion edges such that images can be processed more successfully than with conventional tomography. We have developed a set of image processing tools to isolate inclusions and more accurately obtain information on the size, shape, and position of glass inclusions than with optical microscopy. We are studying glass inclusions from two giant tuffs. The Bishop Tuff is ~1000 km3 of high-silica rhyolite ash fall, ignimbrite, and intracaldera deposits erupted ~760 ka in eastern California (USA). Glass inclusions in early-erupted Bishop Tuff range from non-faceted to faceted, and faceting times determined using both optical microscopy and x

  4. Integrated 3D view of postmating responses by the Drosophila melanogaster female reproductive tract, obtained by micro-computed tomography scanning.

    PubMed

    Mattei, Alexandra L; Riccio, Mark L; Avila, Frank W; Wolfner, Mariana F

    2015-07-01

    Physiological changes in females during and after mating are triggered by seminal fluid components in conjunction with female-derived molecules. In insects, these changes include increased egg production, storage of sperm, and changes in muscle contraction within the reproductive tract (RT). Such postmating changes have been studied in dissected RT tissues, but understanding their coordination in vivo requires a holistic view of the tissues and their interrelationships. Here, we used high-resolution, multiscale micro-computed tomography (CT) scans to visualize and measure postmating changes in situ in the Drosophila female RT before, during, and after mating. These studies reveal previously unidentified dynamic changes in the conformation of the female RT that occur after mating. Our results also reveal how the reproductive organs temporally shift in concert within the confines of the abdomen. For example, we observed chiral loops in the uterus and in the upper common oviduct that relax and constrict throughout sperm storage and egg movement. We found that specific seminal fluid proteins or female secretions mediate some of the postmating changes in morphology. The morphological movements, in turn, can cause further changes due to the connections among organs. In addition, we observed apparent copulatory damage to the female intima, suggesting a mechanism for entry of seminal proteins, or other exogenous components, into the female's circulatory system. The 3D reconstructions provided by high-resolution micro-CT scans reveal how male and female molecules and anatomy interface to carry out and coordinate mating-dependent changes in the female's reproductive physiology. PMID:26041806

  5. Integrated 3D view of postmating responses by the Drosophila melanogaster female reproductive tract, obtained by micro-computed tomography scanning

    PubMed Central

    Mattei, Alexandra L.; Riccio, Mark L.; Avila, Frank W.; Wolfner, Mariana F.

    2015-01-01

    Physiological changes in females during and after mating are triggered by seminal fluid components in conjunction with female-derived molecules. In insects, these changes include increased egg production, storage of sperm, and changes in muscle contraction within the reproductive tract (RT). Such postmating changes have been studied in dissected RT tissues, but understanding their coordination in vivo requires a holistic view of the tissues and their interrelationships. Here, we used high-resolution, multiscale micro-computed tomography (CT) scans to visualize and measure postmating changes in situ in the Drosophila female RT before, during, and after mating. These studies reveal previously unidentified dynamic changes in the conformation of the female RT that occur after mating. Our results also reveal how the reproductive organs temporally shift in concert within the confines of the abdomen. For example, we observed chiral loops in the uterus and in the upper common oviduct that relax and constrict throughout sperm storage and egg movement. We found that specific seminal fluid proteins or female secretions mediate some of the postmating changes in morphology. The morphological movements, in turn, can cause further changes due to the connections among organs. In addition, we observed apparent copulatory damage to the female intima, suggesting a mechanism for entry of seminal proteins, or other exogenous components, into the female’s circulatory system. The 3D reconstructions provided by high-resolution micro-CT scans reveal how male and female molecules and anatomy interface to carry out and coordinate mating-dependent changes in the female’s reproductive physiology. PMID:26041806

  6. TOWER, 1750'S STAIRWAY; ANGLE VIEW LOOKING NORTHEAST. Glass plate stereopair ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    TOWER, 1750'S STAIRWAY; ANGLE VIEW LOOKING NORTHEAST. Glass plate stereopair number PA-1430-139 LC-HABS-GS05-TR-D-3 157.4895. Right (not printed) - Independence Hall Complex, Independence Hall, 500 Chestnut Street, Philadelphia, Philadelphia County, PA

  7. TOWER, 1750'S STAIRWAY; ANGLE VIEW LOOKING NORTHEAST. Glass plate stereopair ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    TOWER, 1750'S STAIRWAY; ANGLE VIEW LOOKING NORTHEAST. Glass plate stereopair number PA-1430-139 LC-HABS-GS05-TR-D-3 157.4895. Left (printed) - Independence Hall Complex, Independence Hall, 500 Chestnut Street, Philadelphia, Philadelphia County, PA

  8. Remote 3D Medical Consultation

    NASA Astrophysics Data System (ADS)

    Welch, Greg; Sonnenwald, Diane H.; Fuchs, Henry; Cairns, Bruce; Mayer-Patel, Ketan; Yang, Ruigang; State, Andrei; Towles, Herman; Ilie, Adrian; Krishnan, Srinivas; Söderholm, Hanna M.

    Two-dimensional (2D) video-based telemedical consultation has been explored widely in the past 15-20 years. Two issues that seem to arise in most relevant case studies are the difficulty associated with obtaining the desired 2D camera views, and poor depth perception. To address these problems we are exploring the use of a small array of cameras to synthesize a spatially continuous range of dynamic three-dimensional (3D) views of a remote environment and events. The 3D views can be sent across wired or wireless networks to remote viewers with fixed displays or mobile devices such as a personal digital assistant (PDA). The viewpoints could be specified manually or automatically via user head or PDA tracking, giving the remote viewer virtual head- or hand-slaved (PDA-based) remote cameras for mono or stereo viewing. We call this idea remote 3D medical consultation (3DMC). In this article we motivate and explain the vision for 3D medical consultation; we describe the relevant computer vision/graphics, display, and networking research; we present a proof-of-concept prototype system; and we present some early experimental results supporting the general hypothesis that 3D remote medical consultation could offer benefits over conventional 2D televideo.

  9. The effect of activity outside the field of view on image quality for a 3D LSO-based whole body PET/CT scanner.

    PubMed

    Matheoud, R; Secco, C; Della Monica, P; Leva, L; Sacchetti, G; Inglese, E; Brambilla, M

    2009-10-01

    The purpose of this study was to quantify the influence of the outside-field-of-view (FOV) activity concentration (A_c,out) on the noise equivalent count rate (NECR), scatter fraction (SF) and image quality of a 3D LSO whole-body PET/CT scanner. The contrast-to-noise ratio (CNR) was the figure of merit used to characterize the image quality of PET scans. A modified International Electrotechnical Commission (IEC) phantom was used to obtain SF and counting rates similar to those found in average patients. A scatter phantom was positioned at the end of the modified IEC phantom to simulate activity that extends beyond the scanner. The modified IEC phantom was filled with 18F (11 kBq/mL), and the spherical targets, with internal diameters (ID) ranging from 10 to 37 mm, had a target-to-background ratio of 10. PET images were acquired with background activity concentrations in the FOV (A_c,bkg) of about 11, 9.2, 6.6, 5.2 and 3.5 kBq/mL. The emission scan duration (ESD) was set to 1, 2, 3 and 4 min. The tube inside the scatter phantom was filled with activities providing an A_c,out in the whole scatter phantom of zero, half, one, two and four times that of the modified IEC phantom. Plots of CNR versus the various parameters are provided. Multiple linear regression was employed to study the effects of A_c,out on CNR, adjusted for the variables (sphere ID, A_c,bkg and ESD) related to CNR. The presence of outside-FOV activity at the same concentration as that inside the FOV reduces peak NECR by 30%. The increase in SF is marginal (1.2%). CNR diminishes significantly with increasing outside-FOV activity over the range explored. ESD and A_c,out have a similar weight in accounting for CNR variance. Thus, an experimental law that adjusts the scan duration to the outside-FOV activity can be devised. Recovery of the CNR loss due to an elevated A_c,out seems feasible by modulating the ESD in individual bed positions according to A
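
    A minimal sketch of the kind of multiple linear regression described above (CNR regressed on sphere ID, background activity, emission scan duration and outside-FOV activity) is given below. This is not the authors' analysis code; the library choice (NumPy ordinary least squares) and all numerical values are illustrative placeholders, not study data.

      # Hedged sketch: ordinary least squares fit of CNR against the four
      # predictors named in the abstract. All numbers are hypothetical.
      import numpy as np

      # Columns: sphere ID (mm), A_c,bkg (kBq/mL), ESD (min), A_c,out (kBq/mL)
      X = np.array([
          [10, 11.0, 1,  0.0],
          [17,  9.2, 2,  5.5],
          [22,  6.6, 3, 11.0],
          [28,  5.2, 4, 22.0],
          [37,  3.5, 2, 44.0],
          [13, 11.0, 3, 11.0],
      ])
      cnr = np.array([2.1, 4.0, 6.2, 8.5, 11.0, 3.9])   # hypothetical CNR values

      A = np.column_stack([np.ones(len(X)), X])          # add intercept column
      coef, *_ = np.linalg.lstsq(A, cnr, rcond=None)     # least-squares solution

      for name, c in zip(["intercept", "sphere ID", "A_c,bkg", "ESD", "A_c,out"], coef):
          print(f"{name:>10s}: {c:+.3f}")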

  10. 3D laser-written silica glass step-index high-contrast waveguides for the 3.5 μm mid-infrared range.

    PubMed

    Martínez, Javier; Ródenas, Airán; Fernandez, Toney; Vázquez de Aldana, Javier R; Thomson, Robert R; Aguiló, Magdalena; Kar, Ajoy K; Solis, Javier; Díaz, Francesc

    2015-12-15

    We report on the direct laser fabrication of step-index waveguides in fused silica substrates for operation in the 3.5 μm mid-infrared wavelength range. We demonstrate core-cladding index contrasts of 0.7% at 3.39 μm and propagation losses of 1.3 (6.5) dB/cm at 3.39 (3.68) μm, close to the intrinsic losses of the glass. We also report on the existence of three different laser modified SiO₂ glass volumes, their different micro-Raman spectra, and their different temperature-dependent populations of color centers, tentatively clarifying the SiO₂ lattice changes that are related to the large index changes. PMID:26670520

  11. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  12. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  13. A cross-platform solution for light field based 3D telemedicine.

    PubMed

    Wang, Gengkun; Xiang, Wei; Pickering, Mark

    2016-03-01

    Current telehealth services are dominated by conventional 2D video conferencing systems, which are limited in their capabilities in providing a satisfactory communication experience due to the lack of realism. The "immersiveness" provided by 3D technologies has the potential to promote telehealth services to a wider range of applications. However, conventional stereoscopic 3D technologies are deficient in many aspects, including low resolution and the requirement for complicated multi-camera setup and calibration, and special glasses. The advent of light field (LF) photography enables us to record light rays in a single shot and provide glasses-free 3D display with continuous motion parallax in a wide viewing zone, which is ideally suited for 3D telehealth applications. As far as our literature review suggests, there have been no reports of 3D telemedicine systems using LF technology. In this paper, we propose a cross-platform solution for a LF-based 3D telemedicine system. Firstly, a novel system architecture based on LF technology is established, which is able to capture the LF of a patient, and provide an immersive 3D display at the doctor site. For 3D modeling, we further propose an algorithm which is able to convert the captured LF to a 3D model with a high level of detail. For the software implementation on different platforms (i.e., desktop, web-based and mobile phone platforms), a cross-platform solution is proposed. Demo applications have been developed for 2D/3D video conferencing, 3D model display and edit, blood pressure and heart rate monitoring, and patient data viewing functions. The demo software can be extended to multi-discipline telehealth applications, such as tele-dentistry, tele-wound and tele-psychiatry. The proposed 3D telemedicine solution has the potential to revolutionize next-generation telemedicine technologies by providing a high quality immersive tele-consultation experience. PMID:26689324

  14. 'Bonneville' in 3-D!

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The Mars Exploration Rover Spirit took this 3-D navigation camera mosaic of the crater called 'Bonneville' after driving approximately 13 meters (42.7 feet) to get a better vantage point. Spirit's current position is close enough to the edge to see the interior of the crater, but high enough and far enough back to get a view of all of the walls. Because scientists and rover controllers are so pleased with this location, they will stay here for at least two more martian days, or sols, to take high resolution panoramic camera images of 'Bonneville' in its entirety. Just above the far crater rim, on the left side, is the rover's heatshield, which is visible as a tiny reflective speck.

  15. 3D recovery of human gaze in natural environments

    NASA Astrophysics Data System (ADS)

    Paletta, Lucas; Santner, Katrin; Fritz, Gerald; Mayer, Heinz

    2013-01-01

    The estimation of human attention has recently been addressed in the context of human-robot interaction. Today, joint work spaces already exist and challenge cooperating systems to jointly focus on common objects, scenes and work niches. With the advent of Google Glass and increasingly affordable wearable eye-tracking, monitoring of human attention will soon become ubiquitous. The presented work describes for the first time a method for the estimation of human fixations in 3D environments that does not require any artificial landmarks in the field of view and enables attention mapping in 3D models. It enables full 3D recovery of the human view frustum and the gaze pointer in a previously acquired 3D model of the environment in real time. The study on the precision of this method reports a mean projection error of ≈1.1 cm and a mean angle error of ≈0.6° within the chosen 3D model - the precision is essentially bounded by that of the eye-tracking instrument itself (≈1°). This innovative methodology will open new opportunities for joint attention studies as well as for bringing new potential into automated processing for human factors technologies.

  16. Euro3D Science Conference

    NASA Astrophysics Data System (ADS)

    Walsh, J. R.

    2004-02-01

    The Euro3D RTN is an EU-funded Research Training Network to foster the exploitation of 3D spectroscopy in Europe. 3D spectroscopy is a general term for spectroscopy of an area of the sky and derives its name from its two spatial + one spectral dimensions. There are an increasing number of instruments which use integral field devices to achieve spectroscopy of an area of the sky, either using lens arrays, optical fibres or image slicers, to pack spectra of multiple pixels on the sky ("spaxels") onto a 2D detector. On account of the large volume of data and the special methods required to reduce and analyse 3D data, there are only a few centres of expertise and these are mostly involved with instrument developments. There is a perceived lack of expertise in 3D spectroscopy spread through the astronomical community and its use in the armoury of the observational astronomer is viewed as being highly specialised. For precisely this reason the Euro3D RTN was proposed to train young researchers in this area and develop user tools to widen the experience with this particular type of data in Europe. The Euro3D RTN is coordinated by Martin M. Roth (Astrophysikalisches Institut Potsdam) and has been running since July 2002. The first Euro3D science conference was held in Cambridge, UK from 22 to 23 May 2003. The main emphasis of the conference was, in keeping with the RTN, to expose the work of the young post-docs who are funded by the RTN. In addition the team members from the eleven European institutes involved in Euro3D also presented instrumental and observational developments. The conference was organized by Andy Bunker and held at the Institute of Astronomy. There were over thirty participants and 26 talks covered the whole range of applications of 3D techniques. The science ranged from Galactic planetary nebulae and globular clusters to kinematics of nearby galaxies out to objects at high redshift. Several talks were devoted to reporting recent observations with newly

  17. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

    The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is realising this for the moment in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We have currently a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrate 3D objects in physical or virtual museums. Nevertheless, Europeana has a tremendous potential as a multi-facetted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video, sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  18. Analyzing the 3D Structure of Human Carbonic Anhydrase II and Its Mutants Using Deep View and the Protein Data Bank

    ERIC Educational Resources Information Center

    Ship, Noam J.; Zamble, Deborah B.

    2005-01-01

    The self-directed study of a 3D image of a biomolecule stresses the complex nature of the intra- and intermolecular interactions that come together to define its structure. This is combined with a series of in vitro experiments on wild-type and mutant forms of human carbonic anhydrase II (hCAII) that examine the structure-function relationship…

  19. Lunar and Planetary Science XXXV: Viewing the Lunar Interior Through Titanium-Colored Glasses

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The session"Viewing the Lunar Interior Through Titanium-Colored Glasses" included the following reports:Consequences of High Crystallinity for the Evolution of the Lunar Magma Ocean: Trapped Plagioclase; Low Abundances of Highly Siderophile Elements in the Lunar Mantle: Evidence for Prolonged Late Accretion; Fast Anorthite Dissolution Rates in Lunar Picritic Melts: Petrologic Implications; Searching the Moon for Aluminous Mare Basalts Using Compositional Remote-Sensing Constraints II: Detailed analysis of ROIs; Origin of Lunar High Titanium Ultramafic Glasses: A Hybridized Source?; Ilmenite Solubility in Lunar Basalts as a Function of Temperature and Pressure: Implications for Petrogenesis; Garnet in the Lunar Mantle: Further Evidence from Volcanic Glasses; Preliminary High Pressure Phase Relations of Apollo 15 Green C Glass: Assessment of the Role of Garnet; Oxygen Fugacity of Mare Basalts and the Lunar Mantle. Application of a New Microscale Oxybarometer Based on the Valence State of Vanadium; A Model for the Origin of the Dark Ring at Orientale Basin; Petrology and Geochemistry of LAP 02 205: A New Low-Ti Mare-Basalt Meteorite; Thorium and Samarium in Lunar Pyroclastic Glasses: Insights into the Composition of the Lunar Mantle and Basaltic Magmatism on the Moon; and Eu2+ and REE3+ Diffusion in Enstatite, Diopside, Anorthite, and a Silicate Melt: A Database for Understanding Kinetic Fractionation of REE in the Lunar Mantle and Crust.

  20. MTF characterization in 2D and 3D for a high resolution, large field of view flat panel imager for cone beam CT

    NASA Astrophysics Data System (ADS)

    Shah, Jainil; Mann, Steve D.; Tornai, Martin P.; Richmond, Michelle; Zentai, George

    2014-03-01

    The 2D and 3D modulation transfer functions (MTFs) of a custom-made, large-area (40 x 30 cm2), 600-micron CsI-TFT-based flat panel imager with 127-micron pixelation, along with the micro-fiber scintillator structure, were characterized in detail using various techniques. The larger-area detector yields a reconstructed FOV of 25 cm diameter with an 80 cm SID in CT mode. The MTFs were determined with 1x1 (intrinsic) binning. The 2D MTFs were determined using a 50.8-micron tungsten wire and a solid lead edge, and the 3D MTF was measured using a custom-made phantom consisting of three nearly orthogonal 50.8-micron tungsten wires suspended in an acrylic cubic frame. The 2D projection data were reconstructed with an iterative OSC algorithm using 16 subsets and 5 iterations. As additional verification of the resolution, along with scatter, the Catphan® phantom was also imaged and reconstructed with identical parameters. The measured 2D MTF was ~4% using the wire technique and ~1% using the edge technique at the 3.94 lp/mm Nyquist cut-off frequency. The average 3D MTF measured along the wires was ~8% at the Nyquist frequency. At 50% MTF, the resolutions were 1.2 and 2.1 lp/mm in 2D and 3D, respectively. In the Catphan® phantom, the 1.7 lp/mm bars were easily observed. Lastly, the 3D MTF measured on the three wires has an observed 5.9% RMSD, indicating that the resolution of the imaging system is uniform and spatially independent. This high-performance detector is integrated into a dedicated breast SPECT-CT imaging system.
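
    As an illustration of the edge-based MTF measurement mentioned above, the sketch below derives an MTF from an edge spread function by differentiation and Fourier transform. It is not the authors' processing chain; only the 127-micron pixel pitch is taken from the abstract, and the edge profile here is synthetic.

      # Hedged sketch: ESF -> LSF (derivative) -> MTF (Fourier magnitude).
      import numpy as np

      pixel_pitch_mm = 0.127                             # detector pitch from the abstract
      x = np.arange(256) * pixel_pitch_mm
      esf = 0.5 * (1 + np.tanh((x - x.mean()) / 0.15))   # synthetic blurred edge profile

      lsf = np.gradient(esf, pixel_pitch_mm)             # line spread function
      lsf /= lsf.sum()                                   # normalize so MTF(0) = 1

      mtf = np.abs(np.fft.rfft(lsf))
      freqs = np.fft.rfftfreq(len(lsf), d=pixel_pitch_mm)   # spatial frequency in lp/mm

      nyquist = 1.0 / (2 * pixel_pitch_mm)               # ~3.94 lp/mm, as in the abstract
      print(f"Nyquist: {nyquist:.2f} lp/mm, MTF at Nyquist: {np.interp(nyquist, freqs, mtf):.3f}")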

  1. Stereo 3-D Vision in Teaching Physics

    ERIC Educational Resources Information Center

    Zabunov, Svetoslav

    2012-01-01

    Stereo 3-D vision is a technology used to present images on a flat surface (screen, paper, etc.) and at the same time to create the notion of three-dimensional spatial perception of the viewed scene. A great number of physical processes are much better understood when viewed in stereo 3-D vision compared to standard flat 2-D presentation. The…

  2. Arena3D: visualization of biological networks in 3D

    PubMed Central

    Pavlopoulos, Georgios A; O'Donoghue, Seán I; Satagopam, Venkata P; Soldatos, Theodoros G; Pafilis, Evangelos; Schneider, Reinhard

    2008-01-01

    Background Complexity is a key problem when visualizing biological networks; as the number of entities increases, most graphical views become incomprehensible. Our goal is to enable many thousands of entities to be visualized meaningfully and with high performance. Results We present a new visualization tool, Arena3D, which introduces a new concept of staggered layers in 3D space. Related data – such as proteins, chemicals, or pathways – can be grouped onto separate layers and arranged via layout algorithms, such as Fruchterman-Reingold, distance geometry, and a novel hierarchical layout. Data on a layer can be clustered via k-means, affinity propagation, Markov clustering, neighbor joining, tree clustering, or UPGMA ('unweighted pair-group method with arithmetic mean'). A simple input format defines the name and URL for each node, and defines connections or similarity scores between pairs of nodes. The use of Arena3D is illustrated with datasets related to Huntington's disease. Conclusion Arena3D is a user-friendly visualization tool that is able to visualize biological or any other network in 3D space. It is free for academic use and runs on any platform. It can be downloaded or launched directly from . The Java3D library and Java 1.5 need to be pre-installed for the software to run. PMID:19040715
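
    The staggered-layer idea can be sketched outside Arena3D itself with generic tools: lay each group of related nodes out with a Fruchterman-Reingold (spring) layout, stack the layers along z, and cluster one layer with k-means. The sketch below uses networkx and scikit-learn on a hypothetical toy graph; it illustrates the concept and is not Arena3D's implementation.

      # Hedged sketch: per-layer spring layouts stacked in 3D, plus k-means on one layer.
      import networkx as nx
      import numpy as np
      from sklearn.cluster import KMeans

      layers = {
          "proteins": nx.erdos_renyi_graph(20, 0.15, seed=1),   # toy stand-ins for real data
          "pathways": nx.erdos_renyi_graph(10, 0.25, seed=2),
      }

      positions_3d = {}
      for z, (name, g) in enumerate(layers.items()):
          pos2d = nx.spring_layout(g, seed=42)                  # Fruchterman-Reingold layout
          for node, (px, py) in pos2d.items():
              positions_3d[(name, node)] = (px, py, float(z))   # stagger layers along z

      xy = np.array([p[:2] for (layer, _), p in positions_3d.items() if layer == "proteins"])
      labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(xy)
      print("protein-layer cluster labels:", labels)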

  3. Development of 3D mobile receiver for stereoscopic video and data service in T-DMB

    NASA Astrophysics Data System (ADS)

    Lee, Gwangsoon; Lee, Hyun; Yun, Kugjin; Hur, Namho; Lee, Soo In

    2011-02-01

    In this paper, we present the development of a 3D T-DMB (three-dimensional digital multimedia broadcasting) receiver for providing 3D video and data services. First, for the 3D video service, the developed receiver is capable of decoding and playing 3D AV content that is encoded by a simulcast encoding method and transmitted over the T-DMB network. Second, the developed receiver can render stereoscopic multimedia objects delivered using the MPEG-4 BIFS technology that is also employed in T-DMB. Specifically, this paper introduces the hardware and software architecture of the 3D T-DMB receiver and its implementation. The developed 3D T-DMB receiver is capable of generating stereoscopic views on a glasses-free 3D mobile display; we therefore propose parameters for designing the 3D display and evaluate the viewing angle and distance through both computer simulation and actual measurement. Finally, the availability of the 3D video and data service is verified using an experimental system including the implemented receiver and a variety of service examples.

  4. 3D visualization of polymer nanostructure

    SciTech Connect

    Werner, James H

    2009-01-01

    Soft materials and structured polymers are extremely useful nanotechnology building blocks. Block copolymers, in particular, have served as 2D masks for nanolithography and 3D scaffolds for photonic crystals, nanoparticle fabrication, and solar cells. For many of these applications, the precise 3-dimensional structure and the number and type of defects in the polymer is important for ultimate function. However, directly visualizing the 3D structure of a soft material from the nanometer to millimeter length scales is a significant technical challenge. Here, we propose to develop the instrumentation needed for direct 3D structure determination at near-nanometer resolution throughout a nearly millimeter-cubed volume of a soft, potentially heterogeneous, material. This new capability will be a valuable research tool for LANL missions in chemistry, materials science, and nanoscience. Our approach to soft materials visualization builds upon exciting developments in super-resolution optical microscopy that have occurred over the past two years. To date, these new, truly revolutionary, imaging methods have been developed and almost exclusively used for biological applications. However, in addition to biological cells, these super-resolution imaging techniques hold extreme promise for direct visualization of many important nanostructured polymers and other heterogeneous chemical systems. Los Alamos has a unique opportunity to lead the development of these super-resolution imaging methods for problems of chemical rather than biological significance. While these optical methods are limited to systems transparent to visible wavelengths, we stress that many important functional chemicals such as polymers, glasses, sol-gels, aerogels, or colloidal assemblies meet this requirement, with specific examples including materials designed for optical communication, manipulation, or light-harvesting. Our research goals are: (1) Develop the instrumentation necessary for imaging materials

  5. Speaking Volumes About 3-D

    NASA Technical Reports Server (NTRS)

    2002-01-01

    In 1999, Genex submitted a proposal to Stennis Space Center for a volumetric 3-D display technique that would provide multiple users with a 360-degree perspective to simultaneously view and analyze 3-D data. The futuristic capabilities of the VolumeViewer(R) have offered tremendous benefits to commercial users in the fields of medicine and surgery, air traffic control, pilot training and education, computer-aided design/computer-aided manufacturing, and military/battlefield management. The technology has also helped NASA to better analyze and assess the various data collected by its satellite and spacecraft sensors. Genex capitalized on its success with Stennis by introducing two separate products to the commercial market that incorporate key elements of the 3-D display technology designed under an SBIR contract. The company's Rainbow 3D(R) imaging camera is a novel, three-dimensional surface profile measurement system that can obtain a full-frame 3-D image in less than 1 second. The third product is the 360-degree OmniEye(R) video system. Ideal for intrusion detection, surveillance, and situation management, this unique camera system offers a continuous, panoramic view of a scene in real time.

  6. Accurate registration of random radiographic projections based on three spherical references for the purpose of few-view 3D reconstruction

    SciTech Connect

    Schulze, Ralf; Heil, Ulrich; Weinheimer, Oliver; Gross, Daniel; Bruellmann, Dan; Thomas, Eric; Schwanecke, Ulrich; Schoemer, Elmar

    2008-02-15

    Precise registration of radiographic projection images acquired in almost arbitrary geometries for the purpose of three-dimensional (3D) reconstruction is beset with difficulties. We modify and enhance a registration method [R. Schulze, D. D. Bruellmann, F. Roeder, and B. d'Hoedt, Med. Phys. 31, 2849-2854 (2004)] based on coupling a minimum amount of three reference spheres in arbitrary positions to a rigid object under study for precise a posteriori pose estimation. Two consecutive optimization procedures (a, initial guess; b, iterative coordinate refinement) are applied to completely exploit the reference's shadow information for precise registration of the projections. The modification has been extensive, i.e., only the idea of using the sphere shadows to locate each sphere in three dimensions from each projection was retained whereas the approach to extract the shadow information has been changed completely and extended. The registration information is used for subsequent algebraic reconstruction of the 3D information inherent in the projections. We present a detailed mathematical theory of the registration process as well as simulated data investigating its performance in the presence of error. Simulation of the initial guess revealed a mean relative error in the critical depth coordinate ranging between 2.1% and 4.4%, and an evident error reduction by the subsequent iterative coordinate refinement. To prove the applicability of the method for real-world data, algebraic 3D reconstructions from few (≤9) projection radiographs of a human skull, a human mandible and a teeth-containing mandible segment are presented. The method facilitates extraction of 3D information from only few projections obtained from off-the-shelf radiographic projection units without the need for costly hardware. Technical requirements as well as radiation dose are low.

  7. 3d-3d correspondence revisited

    NASA Astrophysics Data System (ADS)

    Chung, Hee-Joong; Dimofte, Tudor; Gukov, Sergei; Sułkowski, Piotr

    2016-04-01

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d N=2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. We also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  8. Real-time auto-stereoscopic visualization of 3D medical images

    NASA Astrophysics Data System (ADS)

    Portoni, Luisa; Patak, Alexandre; Noirard, Pierre; Grossetie, Jean-Claude; van Berkel, Cees

    2000-04-01

    The work described here regards multi-viewer auto-stereoscopic visualization of 3D models of anatomical structures and organs of the human body. High-quality 3D models of more than 1600 anatomical structures have been reconstructed using the Visualization Toolkit, a freely available C++ class library for 3D graphics and visualization. The 2D images used for 3D reconstruction come from the Visible Human Data Set. Auto-stereoscopic 3D image visualization is obtained using a prototype monitor developed at Philips Research Labs, UK. This special multiview 3D-LCD screen has been connected directly to an SGI workstation, where the 3D reconstruction and medical imaging applications are executed. Dedicated software has been developed to implement the multiview capability. A number of static or animated contemporary views of the same object can simultaneously be seen on the 3D-LCD screen by several observers, who have a real 3D perception of the visualized scene without the use of extra media such as dedicated glasses or head-mounted displays. The developed software applications allow real-time interaction with the visualized 3D models; didactic animations and movies have been realized as well.
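
    A minimal sketch of the rendering side is given below: it uses VTK (the toolkit named in the abstract) to render a model off-screen from several slightly different horizontal angles, the kind of view set a multiview autostereoscopic screen consumes. A sphere source stands in for a reconstructed anatomical surface and the file names are illustrative; this is not the authors' system.

      # Hedged sketch: render N horizontally offset views of a placeholder model with VTK.
      import vtk

      source = vtk.vtkSphereSource()            # placeholder for an anatomical mesh
      source.SetThetaResolution(64)
      source.SetPhiResolution(64)

      mapper = vtk.vtkPolyDataMapper()
      mapper.SetInputConnection(source.GetOutputPort())
      actor = vtk.vtkActor()
      actor.SetMapper(mapper)

      renderer = vtk.vtkRenderer()
      renderer.AddActor(actor)
      window = vtk.vtkRenderWindow()
      window.SetOffScreenRendering(1)           # render to memory rather than a screen
      window.AddRenderer(renderer)
      window.SetSize(640, 480)

      n_views, step_deg = 7, 1.5                # number of views and angular spacing
      renderer.GetActiveCamera().Azimuth(-step_deg * (n_views - 1) / 2)
      for i in range(n_views):
          if i:
              renderer.GetActiveCamera().Azimuth(step_deg)
          window.Render()
          grab = vtk.vtkWindowToImageFilter()
          grab.SetInput(window)
          grab.Update()
          writer = vtk.vtkPNGWriter()
          writer.SetFileName(f"view_{i}.png")   # illustrative output name
          writer.SetInputConnection(grab.GetOutputPort())
          writer.Write()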

  9. Odyssey over Martian Sunrise, 3-D

    NASA Technical Reports Server (NTRS)

    2003-01-01

    NASA's Mars Odyssey spacecraft passes above a portion of the planet that is rotating into the sunlight in this artist's concept illustration. This red-blue anaglyph artwork can be viewed in 3-D on your computer monitor or in color print form by wearing red-blue (cyan) 3-D glasses.

    The spacecraft has been orbiting Mars since October 24, 2001.

    NASA's Jet Propulsion Laboratory manages the Mars Odyssey mission for the NASA Office of Space Science, Washington, D.C. Investigators at Arizona State University in Tempe, the University of Arizona in Tucson, and NASA's Johnson Space Center, Houston, operate the science instruments. The gamma-ray spectrometer was provided by the University of Arizona in collaboration with the Russian Aviation and Space Agency and Institute for Space Research, which provided the high-energy neutron detector, and the Los Alamos National Laboratories, New Mexico, which provided the neutron spectrometer. Lockheed Martin Space Systems, Denver, is the prime contractor for the project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.

  10. Filling gaps in cultural heritage documentation by 3D photography

    NASA Astrophysics Data System (ADS)

    Schuhr, W.; Lee, J. D.

    2015-08-01

    geometry" and to multistage concepts of 3D photographs in Cultural Heritage just started. Furthermore a revised list of the 3D visualization principles, claiming completeness, has been carried out. Beside others in an outlook *It is highly recommended, to list every historical and current stereo view with relevance to Cultural Heritage in a global Monument Information System (MIS), like in google earth. *3D photographs seem to be very suited, to complete and/or at least partly to replace manual archaeological sketches. In this concern the still underestimated 3D effect will be demonstrated, which even allows, e.g., the spatial perception of extremely small scratches etc... *A consequent dealing with 3D Technology even seems to indicate, currently we experience the beginning of a new age of "real 3DPC- screens", which at least could add or even partly replace the conventional 2D screens. Here the spatial visualization is verified without glasses in an all-around vitreous body. In this respect nowadays widespread lasered crystals showing monuments are identified as "Early Bird" 3D products, which, due to low resolution and contrast and due to lack of color, currently might even remember to the status of the invention of photography by Niepce (1827), but seem to promise a great future also in 3D Cultural Heritage documentation. *Last not least 3D printers more and more seem to conquer the IT-market, obviously showing an international competition.

  11. Stereo 3-D Vision in Teaching Physics

    NASA Astrophysics Data System (ADS)

    Zabunov, Svetoslav

    2012-03-01

    Stereo 3-D vision is a technology used to present images on a flat surface (screen, paper, etc.) and at the same time to create the notion of three-dimensional spatial perception of the viewed scene. A great number of physical processes are much better understood when viewed in stereo 3-D vision compared to standard flat 2-D presentation. The current paper describes the modern stereo 3-D technologies that are applicable to various tasks in teaching physics in schools, colleges, and universities. Examples of stereo 3-D simulations developed by the author can be observed online.

  12. 3D visualization techniques for the STEREO-mission

    NASA Astrophysics Data System (ADS)

    Wiegelmann, T.; Podlipnik, B.; Inhester, B.; Feng, L.; Ruan, P.

    The forthcoming STEREO mission will observe the Sun from two different viewpoints. We expect about 2 GB of data per day, which calls for suitable data presentation techniques. A key feature of STEREO is that it will provide, for the first time, a 3D view of the Sun and the solar corona. In our normal environment we see objects three-dimensionally because the light from real 3D objects needs different travel times to reach our left and right eyes. As a consequence we see slightly different images with our two eyes, which gives us information about the depth of objects and a corresponding 3D impression. Techniques for the 3D visualization of scientific and other data on paper, TV, computer screen, cinema, etc. are well known, e.g. the two-colour anaglyph technique, shutter glasses, polarization filters and head-mounted displays. We discuss advantages and disadvantages of these techniques and how they can be applied to STEREO data. The 3D visualization techniques are not limited to visual images but can also be used to show the reconstructed coronal magnetic field and the energy and helicity distributions. In advance of STEREO we test the method with data from SOHO, which provides us with different viewpoints through solar rotation. This restricts the analysis to structures which remain stationary for several days. Real STEREO data will not be affected by these limitations, however.

  13. 3D and Education

    NASA Astrophysics Data System (ADS)

    Meulien Ohlmann, Odile

    2013-02-01

    Today the industry offers a chain of 3D products. Learning to "read" and to "create in 3D" is becoming an educational issue of primary importance. Drawing on 25 years of professional experience in France, the United States and Germany, Odile Meulien has set up a personal method of initiation to 3D creation that entails the spatial/temporal experience of the holographic visual. She will present some of the different tools and techniques used for this learning, their advantages and disadvantages, programs and issues of educational policy, and constraints and expectations related to the development of new techniques for 3D imaging. Although the creation of display holograms is much reduced compared to that of the 1990s, the holographic concept is spreading into all scientific, social, and artistic activities of our present time. She will also raise many questions: What does 3D mean? Is it communication? Is it perception? How do seeing and not seeing interfere? What else has to be taken into consideration to communicate in 3D? How do we handle the non-visible relations of moving objects with subjects? Does this transform our model of exchange with others? What kind of interaction does this have with our everyday life? Then come more practical questions: How does one learn to create 3D visualizations, to learn 3D grammar, 3D language, 3D thinking? What for? At what level? In which subjects? For whom?

  14. 3D Printable Graphene Composite.

    PubMed

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-01-01

    In human history, both the Iron Age and the Silicon Age thrived after a mature mass-processing technology was developed. Graphene is the most recent superior material, one which could potentially initiate another new material age. However, conventional processing methods, even when exploited to their full extent, fail to provide a link to today's personalization tide; a new technology needs to be ushered in. Three-dimensional (3D) printing fills the missing link between graphene materials and the digital mainstream. Their alliance could generate additional momentum to push the graphene revolution into a new phase. Here we demonstrate, for the first time, that a graphene composite with a graphene loading of up to 5.6 wt% can be 3D printed into computer-designed models. The composite's linear thermal coefficient is below 75 ppm/°C from room temperature to its glass transition temperature (Tg), which is crucial for building up only minute thermal stress during the printing process. PMID:26153673

  15. Facile aqueous synthesis and electromagnetic properties of novel 3D urchin-like glass/Ni-Ni3P/Co2P2O7 core/shell/shell composite hollow structures.

    PubMed

    An, Zhenguo; Zhang, Jingjie; Pan, Shunlong

    2010-04-14

    Novel 3D urchin-like glass/Ni-Ni3P/Co2P2O7 core/shell/shell composite hollow structures are fabricated for the first time by controlled stepwise assembly of granular Ni-Ni3P alloy and ribbon-like Co2P2O7 nanocrystals on hollow glass spheres in aqueous solutions under mild conditions. It is found that the shell structure and the overall morphology of the products can be tailored by properly tuning the annealing temperature. The as-obtained composite core/shell/shell products possess low density (ca. 1.18 g/cm3) and shape-dependent magnetic and microwave-absorbing properties, and thus may have promising applications in the fields of low-density magnetic materials, microwave absorbers, etc. Based on a series of contrast experiments, the probable formation mechanism of the core/shell/shell hierarchical structures is proposed. This work provides an additional strategy to prepare core/shell composite spheres with tailored shell morphology and electromagnetic properties. PMID:20379530

  16. Rossby-wave driven stirring of the UTLS - a detailed view on the intricately layered structure by the 3-D imaging limb-sounder GLORIA

    NASA Astrophysics Data System (ADS)

    Ungermann, J.; Friedl-Vallon, F.; Hoepfner, M.; Oelhaf, H.; Preusse, P.; Riese, M.

    2014-12-01

    The Gimballed Limb Radiance Imager of the Atmosphere (GLORIA) is a new instrument that combines a classical Fourier transform spectrometer (FTS) with a 2-D detector array. Imaging allows the spatial sampling to be improved by up to an order of magnitude when compared to a limb scanning instrument. GLORIA is designed to operate on various high altitude research platforms. The instrument is a joint development of the German Helmholtz Large Research Facilities Karlsruhe Institute of Technology (KIT) and Research Centre Juelich (FZJ). GLORIA builds upon the heritage of KIT and FZJ in developing and operating IR limb sounders (MIPAS, CRISTA). In Summer 2012, GLORIA was an integral part of the first large missions for the German research aircraft HALO dedicated to atmospheric research, TACTS and ESMVAL. The data span latitudes from 80°N to 65°S and include several tomographic flight patterns that allow the 3-D reconstruction of observed air masses. We provide an overview of the heterogeneous structure of the upper troposphere/lower stratosphere (UTLS) as observed over Europe. Retrieved water vapor and ozone are used to identify the tropospheric or stratospheric character of air masses and can thus be used to visualize the multi-species 2-D (and partly 3-D) chemical structure of the UTLS. A highly intricate structure is found consisting often of fine-scale layers extending only several hundred meters in the vertical. These horizontally large-scale structures are thus below the typical vertical resolution of current chemistry climate models. Trajectory studies reveal the origin of the filaments to be Rossby wave-breaking events over the Pacific and Atlantic that cause tropical air stemming from the general area of the Asian monsoon to be mixed across the jet-stream into the subtropical lowermost stratosphere. These results demonstrate a rich spatial structure of the UTLS region at the subtropical jet, where the tropopause break is perturbed by breaking Rossby waves. The

  17. Toward a 3D video format for auto-stereoscopic displays

    NASA Astrophysics Data System (ADS)

    Vetro, Anthony; Yea, Sehoon; Smolic, Aljoscha

    2008-08-01

    There has been increased momentum recently in the production of 3D content for cinema applications; for the most part, this has been limited to stereo content. There are also a variety of display technologies on the market that support 3DTV, each offering a different viewing experience and having different input requirements. More specifically, stereoscopic displays support stereo content and require glasses, while auto-stereoscopic displays avoid the need for glasses by rendering view-dependent stereo pairs for a multitude of viewing angles. To realize high quality auto-stereoscopic displays, multiple views of the video must either be provided as input to the display, or these views must be created locally at the display. The former approach has difficulties in that the production environment is typically limited to stereo, and transmission bandwidth for a large number of views is not likely to be available. This paper discusses an emerging 3D data format that enables the latter approach to be realized. A new framework for efficiently representing a 3D scene and enabling the reconstruction of an arbitrarily large number of views prior to rendering is introduced. Several design challenges are also highlighted through experimental results.
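
    The "create views locally at the display" approach rests on depth-based view synthesis: each transmitted view comes with per-pixel depth, and intermediate views are obtained by shifting pixels horizontally in proportion to their disparity. The sketch below illustrates that core warping step only (no hole filling or occlusion handling) on synthetic data; it is a simplified illustration, not the format proposed in the paper.

      # Hedged sketch: forward-warp one frame by a depth-derived disparity to make virtual views.
      import numpy as np

      h, w = 120, 160
      frame = np.random.rand(h, w, 3)                      # stand-in for a decoded view
      depth = np.tile(np.linspace(0.2, 1.0, w), (h, 1))    # normalized depth map (near = 1)

      def synthesize_view(frame, depth, baseline_frac, max_disp_px=12):
          """Shift pixels by disparity = baseline_frac * max_disp_px * depth (nearest-pixel warp)."""
          out = np.zeros_like(frame)
          disparity = (baseline_frac * max_disp_px * depth).astype(int)
          cols = np.arange(frame.shape[1])
          for y in range(frame.shape[0]):
              target = np.clip(cols + disparity[y], 0, frame.shape[1] - 1)
              out[y, target] = frame[y, cols]              # simplistic: later writes win
          return out

      # Hypothetical 9-view set spanning the stereo baseline for a lenticular panel.
      views = [synthesize_view(frame, depth, k / 8.0) for k in range(9)]
      print(len(views), views[0].shape)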

  18. 3D laptop for defense applications

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Chenault, David

    2012-06-01

    Polaris Sensor Technologies has developed numerous 3D display systems using a US Army patented approach. These displays have been developed as prototypes for handheld controllers for robotic systems and closed hatch driving, and as part of a TALON robot upgrade for 3D vision, providing depth perception for the operator for improved manipulation and hazard avoidance. In this paper we discuss the prototype rugged 3D laptop computer and its applications to defense missions. The prototype 3D laptop combines full temporal and spatial resolution display with the rugged Amrel laptop computer. The display is viewed through protective passive polarized eyewear, and allows combined 2D and 3D content. Uses include robot tele-operation with live 3D video or synthetically rendered scenery, mission planning and rehearsal, enhanced 3D data interpretation, and simulation.

  19. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3-D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3-D imaging for libraries and museums. (LRW)

  20. Super stereoscopy technique for comfortable and realistic 3D displays.

    PubMed

    Akşit, Kaan; Niaki, Amir Hossein Ghanbari; Ulusoy, Erdem; Urey, Hakan

    2014-12-15

    Two well-known problems of stereoscopic displays are the accommodation-convergence conflict and the lack of natural blur for defocused objects. We present a new technique that we name Super Stereoscopy (SS3D) to provide a convenient solution to these problems. Regular stereoscopic glasses are replaced by SS3D glasses which deliver at least two parallax images per eye through pinholes equipped with light-selective filters. The pinholes generate blur-free retinal images so as to enable correct accommodation, while the delivery of multiple parallax images per eye creates an approximate blur effect for defocused objects. Experiments performed with cameras and human viewers indicate that the technique works as desired. In the case where two pinholes equipped with color filters are used per eye, the technique can be used on a regular stereoscopic display simply by uploading new content, without requiring any change in display hardware, drivers, or frame rate. Apart from some tolerable loss in display brightness and a decrease in the natural spatial resolution limit of the eye because of the pinholes, the technique is quite promising for comfortable and realistic 3D vision, especially enabling the display of close objects that are not possible to display and comfortably view on regular 3DTV and cinema screens. PMID:25503026

  1. Hands-on guide for 3D image creation for geological purposes

    NASA Astrophysics Data System (ADS)

    Frehner, Marcel; Tisato, Nicola

    2013-04-01

    Geological structures in outcrops or hand specimens are inherently three dimensional (3D), and therefore better understandable if viewed in 3D. While 3D models can easily be created, manipulated, and looked at from all sides on the computer screen (e.g., using photogrammetry or laser scanning data), 3D visualizations for publications or conference posters are much more challenging as they have to live in a 2D-world (i.e., on a sheet of paper). Perspective 2D visualizations of 3D models do not fully transmit the "feeling and depth of the third dimension" to the audience; but this feeling is desirable for a better examination and understanding in 3D of the structure under consideration. One of the very few possibilities to generate real 3D images, which work on a 2D display, is by using so-called stereoscopic images. Stereoscopic images are two images of the same object recorded from two slightly offset viewpoints. Special glasses and techniques have to be used to make sure that one image is seen only by one eye, and the other image is seen by the other eye, which together lead to the "3D effect". Geoscientists are often familiar with such 3D images. For example, geomorphologists traditionally view stereographic orthophotos by employing a mirror-stereoscope. Nowadays, petroleum-geoscientists examine high-resolution 3D seismic data sets in special 3D visualization rooms. One of the methods for generating and viewing a stereoscopic image, which does not require a high-tech viewing device, is to create a so-called anaglyph. The principle is to overlay two images saturated in red and cyan, respectively. The two images are then viewed through red-cyan stereoscopic glasses. This method is simple and cost-effective, but has some drawbacks in preserving colors accurately. A similar method is used in 3D movies, where polarized light or shuttering techniques are used to separate the left from the right image, which allows preserving the original colors. The advantage of red
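
    The red-cyan overlay described above is straightforward to reproduce: take the red channel of the left-eye photograph and the green and blue channels of the right-eye photograph, so that red-cyan glasses route one image to each eye. A small sketch using Pillow and NumPy follows; the file names are placeholders for an actual stereo pair taken from two slightly offset viewpoints.

      # Hedged sketch: build a red-cyan anaglyph from a left/right stereo pair.
      import numpy as np
      from PIL import Image

      left = np.asarray(Image.open("left.jpg").convert("RGB"))    # placeholder file names
      right = np.asarray(Image.open("right.jpg").convert("RGB"))

      anaglyph = np.empty_like(left)
      anaglyph[..., 0] = left[..., 0]       # red channel from the left-eye image
      anaglyph[..., 1:] = right[..., 1:]    # green and blue (cyan) from the right-eye image

      Image.fromarray(anaglyph).save("anaglyph_red_cyan.jpg")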

  2. Web Based Interactive Anaglyph Stereo Visualization of 3D Model of Geoscience Data

    NASA Astrophysics Data System (ADS)

    Han, J.

    2014-12-01

    The objective of this study was to create an interactive online tool for generating and viewing anaglyph 3D stereo images in a Web browser via the Internet. To achieve this, we designed and developed a prototype system. Three-dimensional visualization is well known and has become popular in recent years for understanding a target object and the related physical phenomena. Geoscience data have a complex data model, which combines large extents with rich small-scale visual detail, so real-time visualization of a 3D geoscience data model on the Internet is a challenging task. In this paper, we show the results of creating anaglyph 3D stereo images of geoscience data that can be viewed in any web browser which supports WebGL. We developed an anaglyph image viewing prototype system, and some representative results are displayed as anaglyph 3D stereo images generated in red-cyan colour from pairs of air-photo/digital elevation model and geological map/digital elevation model data, respectively. The best viewing is achieved by using suitable 3D red-cyan glasses, although red-blue or red-green spectacles can also be used. The middle mouse wheel can be used to zoom in/out of the anaglyph image in a Web browser. Application of anaglyph 3D stereo images is a very important and easy way to understand the underground geologic system and active tectonic geomorphology. The integrated strata with fine three-dimensional topography and geologic map data can help to characterise areas of mineral potential and anomalous active-tectonic features. To conclude, it can be stated that anaglyph 3D stereo images provide a simple and feasible method to improve the relief effect of geoscience data such as geomorphology and geology. We believe that with further development, the anaglyph 3D stereo imaging system could, as a complement to 3D geologic modeling, constitute a useful tool for better understanding of the underground geology and the active tectonic

  3. TRACE 3-D documentation

    SciTech Connect

    Crandall, K.R.

    1987-08-01

    TRACE 3-D is an interactive beam-dynamics program that calculates the envelopes of a bunched beam, including linear space-charge forces, through a user-defined transport system. TRACE 3-D provides an immediate graphics display of the envelopes and the phase-space ellipses and allows nine types of beam-matching options. This report describes the beam-dynamics calculations and gives detailed instruction for using the code. Several examples are described in detail.

  4. Northern Terra Meridiani Rocks and Cliffs in 3-D

    NASA Technical Reports Server (NTRS)

    2001-01-01

    Extended Mission operations for the Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) include opportunities that come up about 10 times a week to turn and point the MGS spacecraft so that MOC can photograph a feature of high scientific interest. Many of these images are targeted to the site of a previous MOC image, so that a stereoscopic (3-D) view can be obtained.

    The stereo view, which requires red (left-eye) and blue (right-eye) 3-D glasses to be seen, covers an area approximately 2.3 km (1.4 mi) wide by 6.2 km (3.9 mi) long. The full-resolution view is seen at nearly 1.5 meters (5 ft) per pixel, a scale at which objects the size of airplanes and school buses might be seen.

    The landscape revealed by the 3-D view is a rugged terrain with steep cliffs and no fresh impact craters. This terrain seems most un-Mars-like compared to the typical cratered and dusty views MOC has provided since it began taking data in September 1997. In fact, one of the MOC science team members remarked, 'If I'd seen this landscape used in a movie about Mars five years ago, I'd have said the director had no clue what Mars is supposed to look like.' An irregular depression with a flat, mottled, light-toned floor dominates the scene. Small dark ridges on the depression floor near the top center of the image are dunes or drifts formed by wind transport of sandy sediment. The sharp buttes, mesas, and steep cliffs are all indicators that this terrain consists of a broad exposure of martian bedrock. North is up and sunlight illuminates each picture from the left/upper left.

  5. 3D Dynamic Echocardiography with a Digitizer

    NASA Astrophysics Data System (ADS)

    Oshiro, Osamu; Matani, Ayumu; Chihara, Kunihiro

    1998-05-01

    In this paper, a three-dimensional (3D) dynamic ultrasound (US) imaging system, where a US brightness-mode (B-mode) image triggered with an R-wave of the electrocardiogram (ECG) was obtained with an ultrasound diagnostic device and the location and orientation of the US probe were simultaneously measured with a 3D digitizer, is described. The obtained B-mode image was then projected onto a virtual 3D space with the proposed interpolation algorithm using a Gaussian operator. Furthermore, a 3D image was presented on a cathode ray tube (CRT) and stored in virtual reality modeling language (VRML). We performed an experiment to reconstruct a 3D heart image in systole using this system. The experimental results indicate that the system enables the visualization of the 3D and internal structure of a heart viewed from any angle and has potential for use in dynamic imaging, intraoperative ultrasonography and tele-medicine.
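
    The Gaussian-operator interpolation can be pictured as a splatting step: every tracked B-mode sample, placed in the volume by the digitizer's pose measurement, adds its echo intensity to nearby voxels with a Gaussian weight, and the volume is finally normalized by the accumulated weights. The sketch below illustrates that idea on synthetic points; the grid size, voxel size and Gaussian width are assumptions, and this is not the authors' implementation.

      # Hedged sketch: Gaussian-weighted compounding of tracked samples into a 3D volume.
      import numpy as np

      grid_shape = (64, 64, 64)
      voxel_size = 1.0                      # mm, hypothetical
      sigma = 1.2                           # mm, assumed width of the Gaussian operator

      accum = np.zeros(grid_shape)
      weight = np.zeros(grid_shape)

      # Synthetic stand-ins for B-mode sample positions (already transformed by the
      # probe pose from the digitizer) and their echo intensities.
      points = np.random.rand(2000, 3) * (np.array(grid_shape) - 1) * voxel_size
      values = np.random.rand(2000)

      radius = int(np.ceil(3 * sigma / voxel_size))
      for p, v in zip(points, values):
          c = np.round(p / voxel_size).astype(int)
          lo = np.maximum(c - radius, 0)
          hi = np.minimum(c + radius + 1, grid_shape)
          zi, yi, xi = np.mgrid[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
          d2 = ((zi * voxel_size - p[0]) ** 2 + (yi * voxel_size - p[1]) ** 2
                + (xi * voxel_size - p[2]) ** 2)
          w = np.exp(-d2 / (2 * sigma ** 2))
          accum[zi, yi, xi] += w * v        # Gaussian-weighted intensity
          weight[zi, yi, xi] += w           # accumulated weights for normalization

      volume = np.divide(accum, weight, out=np.zeros_like(accum), where=weight > 0)
      print(volume.shape, float(volume.max()))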

  6. Grounding-line migration in plan-view marine ice-sheet models: results of the ice2sea MISMIP3d intercomparison

    NASA Astrophysics Data System (ADS)

    Pattyn, Frank; Perichon, Laura; Durand, Gaël; Gagliardini, Olivier; Favier, Lionel; Hindmarsh, Richard; Zwinger, Thomas; Participants, Mismip3d

    2013-04-01

    Predictions of marine ice-sheet behaviour require models able to simulate grounding line migration. We present results of an intercomparison experiment for plan-view marine ice-sheet models. Verification is effected by comparison with approximate analytical solutions for flux across the grounding line using simplified geometrical configurations (no lateral variations, no buttressing effects from lateral drag). A unique steady state grounding line position exists for ice sheets on a downward sloping bed under those simplified conditions. Perturbation experiments specifying spatial (lateral) variation in basal sliding parameters permitted the evolution of curved grounding lines, generating buttressing effects. The experiments showed regions of compression and extensional flow across the grounding line, thereby invalidating the boundary layer theory. Models based on the shallow ice approximation, which neither resolve membrane stresses nor reproduce the approximate analytical results unless appropriate parameterizations for ice flux are imposed at the grounding line, are invalid. Steady-state grounding line positions were found to be dependent on the level of physical model approximation. Models that only include membrane stresses result in ice sheets with a larger span than those that also incorporate vertical shearing at the grounding line, such as higher-order and full-Stokes models. From a numerical perspective, resolving grounding lines requires a sufficiently small grid size (

  7. FPGA implementation of glass-free stereo vision

    NASA Astrophysics Data System (ADS)

    Tang, Weidong; Yan, Xiaolin

    2016-04-01

    This paper presents a real-time, efficient, glass-free 3D system based on an FPGA. The system converts a two-view input, a 60 frames per second (fps) 1080p stream, into a multi-view video at 30 fps and 4K resolution. To provide a smooth and comfortable viewing experience, glass-free 3D systems must display multi-view videos. Generating a multi-view video from a two-view input involves three steps: first, computing disparity maps from the two input views; second, synthesizing several new views based on the computed disparity maps and the input views; and finally, producing the output video from the new views according to the specifications of the lens installed on the TV set.
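    The FPGA datapath itself is not described in the abstract; as a loose software analogue (hypothetical, in Python rather than RTL), the sketch below synthesizes an intermediate view by shifting pixels of one input view according to a scaled disparity map, which is the basic idea behind the second step.

```python
import numpy as np

def synthesize_view(left_img, disparity, alpha):
    """Warp the left view toward the right by a fraction alpha of the disparity.

    left_img  : (H, W, 3) uint8 image (left input view)
    disparity : (H, W) float map, horizontal shift in pixels between the two inputs
    alpha     : 0.0 returns the left view, 1.0 approximates the right view,
                intermediate values give the extra multi-view images.
    Returns a forward-warped view with crude hole filling.
    """
    H, W = disparity.shape
    out = np.zeros_like(left_img)
    filled = np.zeros((H, W), dtype=bool)
    xs = np.arange(W)
    for y in range(H):
        # target columns after shifting by a fraction of the disparity
        xt = np.clip(np.round(xs - alpha * disparity[y]).astype(int), 0, W - 1)
        out[y, xt] = left_img[y, xs]
        filled[y, xt] = True
        # simple hole filling: propagate the last filled pixel across disocclusions
        for x in range(1, W):
            if not filled[y, x]:
                out[y, x] = out[y, x - 1]
    return out

# e.g. eight views for a lenticular panel:
# views = [synthesize_view(left, disp, a) for a in np.linspace(0.0, 1.0, 8)]
```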

  8. TACO3D. 3-D Finite Element Heat Transfer Code

    SciTech Connect

    Mason, W.E.

    1992-03-04

    TACO3D is a three-dimensional, finite-element program for heat transfer analysis. An extension of the two-dimensional TACO program, it can perform linear and nonlinear analyses and can be used to solve either transient or steady-state problems. The program accepts time-dependent or temperature-dependent material properties, and materials may be isotropic or orthotropic. A variety of time-dependent and temperature-dependent boundary conditions and loadings are available including temperature, flux, convection, and radiation boundary conditions and internal heat generation. Additional specialized features treat enclosure radiation, bulk nodes, and master/slave internal surface conditions (e.g., contact resistance). Data input via a free-field format is provided. A user subprogram feature allows for any type of functional representation of any independent variable. A profile (bandwidth) minimization option is available. The code is limited to implicit time integration for transient solutions. TACO3D has no general mesh generation capability. Rows of evenly-spaced nodes and rows of sequential elements may be generated, but the program relies on separate mesh generators for complex zoning. TACO3D does not have the ability to calculate view factors internally. Graphical representation of data in the form of time history and spatial plots is provided through links to the POSTACO and GRAPE postprocessor codes.

  9. Interactive 3D Mars Visualization

    NASA Technical Reports Server (NTRS)

    Powell, Mark W.

    2012-01-01

    The Interactive 3D Mars Visualization system provides high-performance, immersive visualization of satellite and surface vehicle imagery of Mars. The software can be used in mission operations to provide the most accurate position information for the Mars rovers to date. When integrated into the mission data pipeline, this system allows mission planners to view the location of the rover on Mars to 0.01-meter accuracy with respect to satellite imagery, with dynamic updates to incorporate the latest position information. Given this information so early in the planning process, rover drivers are able to plan more accurate drive activities for the rover than ever before, significantly increasing the execution of science activities. Scientifically, this 3D mapping information puts all of the science analyses to date into geologic context on a daily basis instead of weeks or months, as was the norm prior to this contribution. This allows the science planners to judge the efficacy of their previously executed science observations much more efficiently, and achieve greater science return as a result. The Interactive 3D Mars surface view is a Mars terrain browsing software interface that encompasses the entire region of exploration for a Mars surface exploration mission. The view is interactive, allowing the user to pan in any direction by clicking and dragging, or to zoom in or out by scrolling the mouse or touchpad. The toolset currently includes a tool for selecting a point of interest and a ruler tool for displaying the positions of, and the distance between, two points of interest. The mapping information can be harvested and shared through ubiquitous online mapping tools like Google Mars, NASA WorldWind, and Worldwide Telescope.

  10. Radiochromic 3D Detectors

    NASA Astrophysics Data System (ADS)

    Oldham, Mark

    2015-01-01

    Radiochromic materials exhibit a colour change when exposed to ionising radiation. Radiochromic film has been used for clinical dosimetry for many years, and increasingly so recently, as films of higher sensitivity have become available. The two principal advantages of radiochromic dosimetry are greater tissue equivalence (radiologically) and the absence of any requirement to develop the colour change. In a radiochromic material, the colour change arises directly from ionising interactions affecting dye molecules, without requiring any latent chemical, optical or thermal development, with important implications for increased accuracy and convenience. It is only relatively recently, however, that 3D radiochromic dosimetry has become possible. In this article we review recent developments and the current state of the art of 3D radiochromic dosimetry, its potential as a more comprehensive solution for the verification of complex radiation therapy treatments, and 3D dose measurement in general.

  11. Bootstrapping 3D fermions

    NASA Astrophysics Data System (ADS)

    Iliesiu, Luca; Kos, Filip; Poland, David; Pufu, Silviu S.; Simmons-Duffin, David; Yacoby, Ran

    2016-03-01

    We study the conformal bootstrap for a 4-point function of fermions ⟨ψψψψ⟩ in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ × ψ OPE, and also on the central charge C_T. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. We also speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.

  12. What Lies Ahead (3-D)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D cylindrical-perspective mosaic taken by the navigation camera on the Mars Exploration Rover Spirit on sol 82 shows the view south of the large crater dubbed 'Bonneville.' The rover will travel toward the Columbia Hills, seen here at the upper left. The rock dubbed 'Mazatzal' and the hole the rover drilled into it can be seen at the lower left. The rover's position is referred to as 'Site 22, Position 32.' This image was geometrically corrected to make the horizon appear flat.

  13. Sojourner's favorite rocks - in 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Many prominent rocks near the Sagan Memorial Station are featured in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. Wedge is at lower left; Shark, Half-Dome, and Pumpkin are at center. Flat Top, about four inches high, and Little Flat Top are at center. The 'Twin Peaks' in the distance are one to two kilometers away. Curvature in the image is due to parallax.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.


  14. Forward ramp and Twin Peaks - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    A lander petal and the forward ramp are featured in this image, taken by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. There are several prominent rocks, including Wedge at left; Shark, Half-Dome, and Pumpkin in the background; and Flat Top and Little Flat Top at center.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.


  15. "We Put on the Glasses and Moon Comes Closer!" Urban Second Graders Exploring the Earth, the Sun and Moon through 3D Technologies in a Science and Literacy Unit

    ERIC Educational Resources Information Center

    Isik-Ercan, Zeynep; Zeynep Inan, Hatice; Nowak, Jeffrey A.; Kim, Beomjin

    2014-01-01

    This qualitative case study describes (a) the ways 3D visualization, coupled with other science and literacy experiences, supported young children's first exploration of the Earth-Sun-Moon system and (b) the perspectives of classroom teachers and children on using 3D visualization. We created three interactive 3D software modules that…

  16. "We Put on the Glasses and Moon Comes Closer!" Urban Second Graders Exploring the Earth, the Sun and Moon through 3D Technologies in a Science and Literacy Unit

    ERIC Educational Resources Information Center

    Isik-Ercan, Zeynep; Zeynep Inan, Hatice; Nowak, Jeffrey A.; Kim, Beomjin

    2014-01-01

    This qualitative case study describes (a) the ways 3D visualization, coupled with other science and literacy experiences, supported young children's first exploration of the Earth-Sun-Moon system and (b) the perspectives of classroom teachers and children on using 3D visualization. We created three interactive 3D software modules that simulate day…

  17. 3D Radiative Transfer Effects in Multi-Angle/Multi-Spectral Radio-Polarimetric Signals from a Mixture of Clouds and Aerosols Viewed by a Non-Imaging Sensor

    NASA Technical Reports Server (NTRS)

    Davis, Anthony B.; Garay, Michael J.; Xu, Feng; Qu, Zheng; Emde, Claudia

    2013-01-01

    When observing a spatially complex mix of aerosols and clouds in a single relatively large field-of-view, nature entangles their signals non-linearly through polarized radiation transport processes that unfold in the 3D position and direction spaces. In contrast, any practical forward model in a retrieval algorithm will use only 1D vector radiative transfer (vRT) in a linear mixing technique. We assess the difference between the observed and predicted signals using synthetic data from a high-fidelity 3D vRT model with clouds generated using a Large Eddy Simulation model and an aerosol climatology. We find that this difference is signal--not noise--for the Aerosol Polarimetry Sensor (APS), an instrument developed by NASA. Moreover, the worst case scenario is also the most interesting case, namely, when the aerosol burden is large, hence has the most impact on the cloud microphysics and dynamics. Based on our findings, we formulate a mitigation strategy for these unresolved cloud adjacency effects assuming that some spatial information is available about the structure of the clouds at higher resolution from "context" cameras, as was planned for NASA's ill-fated Glory mission that was to carry the APS but failed to reach orbit. Application to POLDER (POLarization and Directionality of Earth Reflectances) data from the period when PARASOL (Polarization and Anisotropy of Reflectances for Atmospheric Sciences coupled with Observations from a Lidar) was in the A-train is briefly discussed.

  18. 3D Printable Graphene Composite

    NASA Astrophysics Data System (ADS)

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-07-01

    In human history, both the Iron Age and the Silicon Age thrived after a mature mass-processing technology was developed. Graphene is the most recent superior material which could potentially initialize another new material age. However, while being exploited to its full extent, conventional processing methods fail to provide a link to today's personalization tide. New technology should be ushered in. Three-dimensional (3D) printing fills the missing linkage between graphene materials and the digital mainstream. Their alliance could generate an additional stream to push the graphene revolution into a new phase. Here we demonstrate for the first time that a graphene composite, with a graphene loading up to 5.6 wt%, can be 3D printed into computer-designed models. The composite's linear thermal coefficient is below 75 ppm·°C⁻¹ from room temperature to its glass transition temperature (Tg), which is crucial to keeping thermal stress minute during the printing process.

  19. 3D Printable Graphene Composite

    PubMed Central

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-01-01

    In human history, both the Iron Age and the Silicon Age thrived after a mature mass-processing technology was developed. Graphene is the most recent superior material which could potentially initialize another new material age. However, while being exploited to its full extent, conventional processing methods fail to provide a link to today's personalization tide. New technology should be ushered in. Three-dimensional (3D) printing fills the missing linkage between graphene materials and the digital mainstream. Their alliance could generate an additional stream to push the graphene revolution into a new phase. Here we demonstrate for the first time that a graphene composite, with a graphene loading up to 5.6 wt%, can be 3D printed into computer-designed models. The composite's linear thermal coefficient is below 75 ppm·°C−1 from room temperature to its glass transition temperature (Tg), which is crucial to keeping thermal stress minute during the printing process. PMID:26153673

  20. Subjective experiences of watching stereoscopic Avatar and U2 3D in a cinema

    NASA Astrophysics Data System (ADS)

    Pölönen, Monika; Salmimaa, Marja; Takatalo, Jari; Häkkinen, Jukka

    2012-01-01

    A stereoscopic 3-D version of the film Avatar was shown to 85 people who subsequently answered questions related to sickness, visual strain, stereoscopic image quality, and sense of presence. Viewing Avatar for 165 min induced some symptoms of visual strain and sickness, but the symptom levels remained low. A comparison between Avatar and previously published results for the film U2 3D showed that sickness and visual strain levels were similar despite the difference in the films' runtimes. The genre of the film had a significant effect on the viewers' opinions and sense of presence. Avatar, which has been described as a combination of the action, adventure, and sci-fi genres, was experienced as more immersive and engaging than the music documentary U2 3D. However, participants in both studies were immersed, focused, and absorbed in watching the stereoscopic 3-D (S3-D) film and were pleased with the film environments. The results also showed that previous stereoscopic 3-D experience significantly reduced the amount of reported eye strain and complaints about the weight of the viewing glasses.

  1. Rapid 360 degree imaging and stitching of 3D objects using multiple precision 3D cameras

    NASA Astrophysics Data System (ADS)

    Lu, Thomas; Yin, Stuart; Zhang, Jianzhong; Li, Jiangan; Wu, Frank

    2008-02-01

    In this paper, we present the system architecture of a 360-degree-view 3D imaging system. The system consists of multiple 3D sensors synchronized to take 3D images around the object. Each 3D camera employs a single high-resolution digital camera and a color-coded light projector. The cameras are synchronized to rapidly capture the 3D and color information of a static object or a live person. The color-coded structured lighting ensures precise reconstruction of the depth of the object. A 3D imaging system architecture is presented in which the displacement between the camera and the projector is used to triangulate the depth information. The 3D camera system has achieved a high depth resolution, down to 0.1 mm on a human-head-sized object, and 360-degree imaging capability.
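    The abstract only states that depth is triangulated from the camera-projector displacement; the sketch below illustrates the standard triangulation geometry under simplifying assumptions (parallel optical axes, baseline along x, and a decoded stripe angle from the color code), not the paper's actual implementation.

```python
import numpy as np

def triangulate_depth(x_cam_px, focal_px, stripe_angle_rad, baseline_m):
    """Depth from one camera pixel and the decoded projector stripe angle.

    Assumes the camera at the origin looking along +z and the projector offset
    by baseline_m along +x, with both optical axes parallel (a simplification).
    x_cam_px         : horizontal pixel coordinate relative to the principal point
    focal_px         : camera focal length in pixels
    stripe_angle_rad : angle of the decoded color-coded stripe, measured from the
                       projector's optical axis toward the camera (positive)
    """
    tan_cam = x_cam_px / focal_px           # camera ray: x = z * tan_cam
    tan_prj = np.tan(stripe_angle_rad)      # light plane: x = baseline - z * tan_prj
    z = baseline_m / (tan_cam + tan_prj)    # intersect the ray with the light plane
    x = z * tan_cam
    return np.array([x, 0.0, z])            # y omitted in this planar sketch

# point = triangulate_depth(x_cam_px=120.0, focal_px=2400.0,
#                           stripe_angle_rad=np.radians(12.0), baseline_m=0.25)
```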

  2. 3D PDF - a means of public access to geological 3D - objects, using the example of GTA3D

    NASA Astrophysics Data System (ADS)

    Slaby, Mark-Fabian; Reimann, Rüdiger

    2013-04-01

    In geology, 3D modeling has become very important. In the past, two-dimensional data such as isolines, drilling profiles, or cross-sections based on those were used to illustrate the subsurface geology, whereas now we can create complex digital 3D models. These models are produced with special software, such as GOCAD®. The models can be viewed only through the software used to create them or through freely available viewers. The platform-independent PDF (Portable Document Format), established by Adobe, has found wide distribution. This format has constantly evolved over time. Meanwhile, it is possible to display CAD data in an Adobe 3D PDF file with the free Adobe Reader (version 7). In a 3D PDF, a 3D model is freely rotatable and can be assembled from a plurality of objects, which can thus be viewed from all directions on their own. In addition, it is possible to create moveable cross-sections (profiles) and to assign transparency to the objects. Based on industry-standard CAD software, 3D PDFs can be generated from a large number of formats, or even be exported directly from this software. In geoinformatics, different approaches to creating 3D PDFs exist. The intent of the Authority for Mining, Energy and Geology to allow free access to the models of the Geotectonic Atlas (GTA3D) could not be realized with standard software solutions. A specially designed code converts the 3D objects to VRML (Virtual Reality Modeling Language). VRML is one of the few formats that allow using image files (maps) as textures and representing colors and shapes correctly. The files were merged in Acrobat X Pro, and a 3D PDF was generated subsequently. A topographic map, a display of geographic directions, and horizontal and vertical scales help to facilitate its use.

  3. Crosstalk in automultiscopic 3-D displays: blessing in disguise?

    NASA Astrophysics Data System (ADS)

    Jain, Ashish; Konrad, Janusz

    2007-02-01

    Most 3-D displays suffer from interocular crosstalk, i.e., the perception of an unintended view in addition to the intended one. The resulting "ghosting" at high-contrast object boundaries is objectionable and interferes with depth perception. In automultiscopic (no glasses, multiview) displays using microlenses or a parallax barrier, the effect is compounded since several unintended views may be perceived at once. However, we recently discovered that crosstalk in automultiscopic displays can also be beneficial. Since spatial multiplexing of views in order to prepare a composite image for automultiscopic viewing involves sub-sampling, prior anti-alias filtering is required. To date, anti-alias filter design has ignored the presence of crosstalk in automultiscopic displays. In this paper, we propose a simple multiplexing model that takes crosstalk into account. Using this model we derive a mathematical expression for the spectrum of a single view with crosstalk, and we show that it leads to reduced spectral aliasing compared to the crosstalk-free case. We then propose a new criterion for the characterization of the ideal anti-alias pre-filter. In the experimental part, we describe a simple method to measure optical crosstalk between views using a digital camera. We use the measured crosstalk parameters to find the ideal frequency response of the anti-alias filter and we design practical digital filters approximating this response. Having applied the designed filters to a number of multiview images prior to multiplexing, we conclude that, due to their increased bandwidth, the filters lead to visibly sharper 3-D images without increasing aliasing artifacts.
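    As a hedged illustration of the kind of multiplexing model described (not the authors' exact formulation), the sketch below mixes each displayed view with its neighbours using crosstalk coefficients; the spectrum of a perceived view is then the correspondingly weighted sum of the individual view spectra.

```python
import numpy as np

def perceived_views(views, crosstalk):
    """Linear crosstalk model for an automultiscopic display.

    views     : (K, H, W) array of the K multiplexed (already anti-alias filtered) views
    crosstalk : (K, K) matrix; crosstalk[i, j] is the fraction of view j leaking
                into viewing zone i (rows ideally close to one-hot).
    Returns the (K, H, W) images actually seen from each viewing zone.
    """
    K, H, W = views.shape
    return (crosstalk @ views.reshape(K, -1)).reshape(K, H, W)

# Example: an 8-view display where each zone leaks 15% of each adjacent view.
K = 8
C = np.eye(K) * 0.7
for i in range(K - 1):
    C[i, i + 1] = C[i + 1, i] = 0.15
C /= C.sum(axis=1, keepdims=True)          # normalise the light reaching each zone
seen = perceived_views(np.random.rand(K, 480, 640), C)
```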

  4. 3D medical thermography device

    NASA Astrophysics Data System (ADS)

    Moghadam, Peyman

    2015-05-01

    In this paper, a novel handheld 3D medical thermography system is introduced. The proposed system consists of a thermal-infrared camera, a color camera and a depth camera rigidly attached in close proximity and mounted on an ergonomic handle. As a practitioner holding the device smoothly moves it around the human body parts, the proposed system generates and builds up a precise 3D thermogram model by incorporating information from each new measurement in real time. The data are acquired in motion, thus providing multiple points of view. When processed, these multiple points of view are adaptively combined by taking into account the reliability of each individual measurement, which can vary due to a variety of factors such as the angle of incidence, the distance between the device and the subject, environmental sensor data, or other factors influencing the confidence in the thermal-infrared data when captured. Finally, several case studies are presented to support the usability and performance of the proposed system.
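    The reliability weighting is described only qualitatively above; a minimal sketch of confidence-weighted fusion of repeated temperature measurements per surface point might look like the following (all names and the weighting terms are assumptions, not the authors' method).

```python
import numpy as np

def fuse_measurement(temp_map, conf_map, new_temp, angle_of_incidence_rad, distance_m,
                     max_distance_m=2.0):
    """Running confidence-weighted average of per-vertex surface temperatures.

    temp_map, conf_map : (N,) fused temperatures so far and their accumulated confidence
    new_temp           : (N,) temperatures observed in a new frame (NaN where unseen)
    The new frame's weight decreases for oblique viewing angles and larger
    subject distances, two of the reliability factors mentioned above.
    """
    seen = ~np.isnan(new_temp)
    w = np.cos(angle_of_incidence_rad) * np.clip(1.0 - distance_m / max_distance_m, 0.0, 1.0)
    w_new = w * seen                               # zero weight where the vertex was unseen
    fused_sum = temp_map * conf_map + np.nan_to_num(new_temp) * w_new
    conf_map = conf_map + w_new
    temp_map = np.where(conf_map > 0, fused_sum / np.maximum(conf_map, 1e-9), temp_map)
    return temp_map, conf_map
```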

  5. 3D Visualization of Recent Sumatra Earthquake

    NASA Astrophysics Data System (ADS)

    Nayak, Atul; Kilb, Debi

    2005-04-01

    Scientists and visualization experts at the Scripps Institution of Oceanography have created an interactive three-dimensional visualization of the 28 March 2005 magnitude 8.7 earthquake in Sumatra. The visualization shows the earthquake's hypocenter and aftershocks recorded until 29 March 2005, and compares it with the location of the 26 December 2004 magnitude 9 event and the consequent seismicity in that region. The 3D visualization was created using the Fledermaus software developed by Interactive Visualization Systems (http://www.ivs.unb.ca/) and stored as a ``scene'' file. To view this visualization, viewers need to download and install the free viewer program iView3D (http://www.ivs3d.com/products/iview3d).

  6. Optically rewritable 3D liquid crystal displays.

    PubMed

    Sun, J; Srivastava, A K; Zhang, W; Wang, L; Chigrinov, V G; Kwok, H S

    2014-11-01

    The optically rewritable liquid crystal display (ORWLCD) is a concept based on the optically addressed bi-stable display, which does not need any power to hold an image after it has been uploaded. Recently, the demand for 3D image display has increased enormously. Several attempts have been made to achieve 3D images on the ORWLCD, but all of them involve high complexity for image processing at both the hardware and software levels. In this Letter, we disclose a concept for the 3D-ORWLCD in which the given image is divided into three parts with different optic axes. A quarter-wave plate is placed on top of the ORWLCD to modify the light emerging from different domains of the image in different ways. Thereafter, Polaroid glasses can be used to visualize the 3D image. The 3D image can be refreshed on the 3D-ORWLCD in one step with a proper ORWLCD printer and image processing; therefore, with easy image refreshing and good image quality, such displays can be applied in many areas, viz. 3D bi-stable displays, security elements, etc. PMID:25361316

  7. NASA's 3D View of Celestial Lightsabers

    NASA Video Gallery

    This movie envisions a three-dimensional perspective on the Hubble Space Telescope's striking image of the Herbig-Haro object known as HH 24. The central star is hidden by gas and dust, but its pro...

  8. 3-D Visualizations At (Almost) No Expense

    NASA Astrophysics Data System (ADS)

    Sedlock, R. L.

    2003-12-01

    Like most teaching-oriented public universities, San José State University (part of the California State University system) currently faces severe budgetary constraints. These circumstances prohibit the construction of one or more Geo-Walls on-campus. Nevertheless, the Department of Geology has pursued alternatives that enable our students to benefit from 3-D visualizations such as those used with the Geo-Wall. This experience - a sort of virtual virtuality - depends only on the availability of a computer lab and an optional plotter. Starting in June 2003, we have used the methods described here with two diverse groups of participants: middle- and high-school teachers taking professional development workshops through grants funded by NSF and NASA, and regular university students enrolled in introductory earth science and geology laboratory courses. We use two types of three-dimensional images with our students: visualizations from the on-line Gallery of Virtual Topography (Steve Reynolds), and USGS digital topographic quadrangles that have been transformed into anaglyph files for viewing with 3-D glasses. The procedure for transforming DEMs into these anaglyph files, developed by Paul Morin, is available at http://geosun.sjsu.edu/~sedlock/anaglyph.html. The resulting images can be used with students in one of two ways. First, maps can be printed on a suitable plotter, laminated (optional but preferable), and used repeatedly with different classes. Second, the images can be viewed in school computer labs or by students on their own computers. Chief advantages of the plotter option are (1) full-size maps (single or tiled) viewable in their entirety, and (2) dependability (independent of Internet connections and electrical power). Chief advantages of the computer option are (1) minimal preparation time and no other needed resources, assuming a computer lab with Internet access, and (2) students can work with the images outside of regularly scheduled courses. Both
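    The anaglyph files mentioned above can be produced, in essence, by taking the red channel from the left-eye image and the green and blue channels from the right-eye image. The following minimal sketch illustrates that step only (it is not Morin's actual DEM-to-anaglyph procedure, and the file names are placeholders).

```python
import numpy as np
from PIL import Image

def make_anaglyph(left_path, right_path, out_path):
    """Combine a left/right stereo pair into a red-cyan anaglyph.

    The red channel comes from the left view and the green and blue channels
    from the right view, so red-left / cyan-right glasses route each view
    to the correct eye.
    """
    left = np.asarray(Image.open(left_path).convert("RGB"), dtype=np.uint8)
    right = np.asarray(Image.open(right_path).convert("RGB"), dtype=np.uint8)
    if left.shape != right.shape:
        raise ValueError("stereo pair must have identical dimensions")
    anaglyph = right.copy()
    anaglyph[..., 0] = left[..., 0]       # red from the left eye's view
    Image.fromarray(anaglyph).save(out_path)

# make_anaglyph("dem_left.png", "dem_right.png", "dem_anaglyph.png")
```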

  9. The psychology of the 3D experience

    NASA Astrophysics Data System (ADS)

    Janicke, Sophie H.; Ellis, Andrew

    2013-03-01

    With 3D televisions expected to reach 50% home saturation as early as 2016, understanding the psychological mechanisms underlying the user response to 3D technology is critical for content providers, educators and academics. Unfortunately, research examining the effects of 3D technology has not kept pace with the technology's rapid adoption, resulting in large-scale use of a technology about which very little is actually known. Recognizing this need for new research, we conducted a series of studies measuring and comparing many of the variables and processes underlying both 2D and 3D media experiences. In our first study, we found narratives within primetime dramas had the power to shift viewer attitudes in both 2D and 3D settings. However, we found no difference in persuasive power between 2D and 3D content. We contend this lack of effect was the result of poor conversion quality and the unique demands of 3D production. In our second study, we found 3D technology significantly increased enjoyment when viewing sports content, yet offered no added enjoyment when viewing a movie trailer. The enhanced enjoyment of the sports content was shown to be the result of heightened emotional arousal and attention in the 3D condition. We believe the lack of effect found for the movie trailer may be genre-related. In our final study, we found 3D technology significantly enhanced enjoyment of two video games from different genres. The added enjoyment was found to be the result of an increased sense of presence.

  10. Magmatic Systems in 3-D

    NASA Astrophysics Data System (ADS)

    Kent, G. M.; Harding, A. J.; Babcock, J. M.; Orcutt, J. A.; Bazin, S.; Singh, S.; Detrick, R. S.; Canales, J. P.; Carbotte, S. M.; Diebold, J.

    2002-12-01

    , even if one data object lies behind another. Stereoscopic viewing is another powerful tool to investigate 3-D relationships between objects. This form of immersion is constructed through viewing two separate images that are interleaved--typically 48 frames per second, per eye--and synced through an emitter and a set of specialized polarizing eyeglasses. The polarizing lenses flicker at an equivalent rate, blanking the eye for which a particular image was not drawn, producing the desired stereo effect. Volumetric visualization of the ARAD 3-D seismic dataset will be presented. The effective use of transparency reveals detailed structure of the melt-lens beneath the 9°03'N overlapping spreading center (OSC) along the East Pacific Rise, including melt-filled fractures within the propagating rift-tip. In addition, range-gated images of seismic reflectivity will be co-registered to investigate the physical properties (melt versus mush) of the magma chamber at this locale. Surface visualization of a dense, 2-D grid of MCS seismic data beneath Axial seamount (Juan de Fuca Ridge) will also be highlighted, including relationships between the summit caldera and rift zones, and the underlying (and humongous) magma chamber. A selection of Quicktime movies will be shown. Popcorn will be served, really!

  11. Characterizing and reducing crosstalk in printed anaglyph stereoscopic 3D images

    NASA Astrophysics Data System (ADS)

    Woods, Andrew J.; Harris, Chris R.; Leggo, Dean B.; Rourke, Tegan M.

    2013-04-01

    The anaglyph three-dimensional (3D) method is a widely used technique for presenting stereoscopic 3D images. Its primary advantages are that it will work on any full-color display and only requires that the user view the anaglyph image using a pair of anaglyph 3D glasses, usually with one lens tinted red and the other lens tinted cyan. A common image quality problem of anaglyph 3D images is high levels of crosstalk: the incomplete isolation of the left and right image channels such that each eye sees a "ghost" of the opposite perspective view. In printed anaglyph images, the crosstalk levels are often very high, much higher than when anaglyph images are presented on emissive displays. The sources of crosstalk in printed anaglyph images are described and a simulation model is developed that allows the amount of printed anaglyph crosstalk to be estimated based on the spectral characteristics of the light source, paper, ink set, and anaglyph glasses. The model is validated using a visual crosstalk ranking test, which indicates good agreement. The model is then used to consider scenarios for the reduction of crosstalk in printed anaglyph systems and finds a number of options that are likely to reduce crosstalk considerably.
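    The published model's exact form is not reproduced here; a simplified spectral estimate in the same spirit integrates the light leaking through the "wrong" lens against the light reaching the intended eye, as in the hedged sketch below (argument names and the channel decomposition are assumptions).

```python
import numpy as np

def crosstalk_estimate(wavelength_nm, illuminant, paper_reflectance,
                       ink_transmittance, lens_intended, lens_opposite):
    """Rough crosstalk estimate for one channel of a printed anaglyph.

    All arguments are 1-D spectra sampled at the same wavelengths (e.g. 380-730 nm).
    ink_transmittance : effective spectral transmittance of the ink printed for the
                        other eye's image (the light that should be blocked)
    lens_intended / lens_opposite : spectral transmittance of the two lens filters
    Crosstalk is the leakage seen through the opposite lens divided by the signal
    seen through the intended lens (a simplification of the full model).
    """
    reflected = illuminant * paper_reflectance * ink_transmittance
    leakage = np.trapz(reflected * lens_opposite, wavelength_nm)
    signal = np.trapz(illuminant * paper_reflectance * lens_intended, wavelength_nm)
    return 100.0 * leakage / signal        # percent crosstalk

# wl = np.arange(380, 731, 5)   # supply measured spectra sampled at these wavelengths
```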

  12. Innovations in 3D printing: a 3D overview from optics to organs.

    PubMed

    Schubert, Carl; van Langeveld, Mark C; Donoso, Larry A

    2014-02-01

    3D printing is a method of manufacturing in which materials, such as plastic or metal, are deposited onto one another in layers to produce a three-dimensional object, such as a pair of eyeglasses or other 3D objects. This process contrasts with traditional ink-based printers, which produce a two-dimensional object (ink on paper). To date, 3D printing has primarily been used in engineering to create engineering prototypes. However, recent advances in printing materials have now enabled 3D printers to make objects that are comparable with traditionally manufactured items. In contrast with conventional printers, 3D printing has the potential to enable mass customisation of goods on a large scale and has relevance in medicine, including ophthalmology. 3D printing has already been proved viable in several medical applications including the manufacture of eyeglasses, custom prosthetic devices and dental implants. In this review, we discuss the potential for 3D printing to revolutionise manufacturing in the same way as the printing press revolutionised conventional printing. The applications and limitations of 3D printing are discussed; the production process is demonstrated by producing a set of eyeglass frames from 3D blueprints. PMID:24288392

  13. The influence of autostereoscopic 3D displays on subsequent task performance

    NASA Astrophysics Data System (ADS)

    Barkowsky, Marcus; Le Callet, Patrick

    2010-02-01

    Viewing 3D content on an autostereoscopic display is an exciting experience. This is partly due to the fact that the 3D effect is seen without glasses. Nevertheless, it is an unnatural condition for the eyes, as the depth effect is created by the disparity of the left and the right view on a flat screen instead of having a real object at the corresponding location. Thus, it may be more tiring to watch 3D than 2D. This question is investigated in this contribution by a subjective experiment. A search-task experiment is conducted and the behavior of the participants is recorded with an eye tracker. Several indicators, both for low-level perception and for the task performance itself, are evaluated. In addition, two optometric tests are performed. A verification session with conventional 2D viewing is included. The results are discussed in detail and it can be concluded that 3D viewing does not have a negative impact on performance of the task used in the experiment.

  14. 3D dynamic roadmapping for abdominal catheterizations.

    PubMed

    Bender, Frederik; Groher, Martin; Khamene, Ali; Wein, Wolfgang; Heibel, Tim Hauke; Navab, Nassir

    2008-01-01

    Despite rapid advances in interventional imaging, the navigation of a guide wire through abdominal vasculature remains a difficult task, and not only for novice radiologists. Since this navigation is mostly based on 2D fluoroscopic image sequences from one view, the process is slowed down significantly due to missing depth information and patient motion. We propose a novel approach for 3D dynamic roadmapping in deformable regions by predicting the location of the guide wire tip in a 3D vessel model from the tip's 2D location, respiratory motion analysis, and view geometry. In a first step, the method compensates for the apparent respiratory motion in 2D space before backprojecting the 2D guide wire tip into three-dimensional space using a given projection matrix. To counter the error connected to the projection parameters and the motion compensation, as well as the ambiguity caused by vessel deformation, we establish a statistical framework which computes a reliable estimate of the guide wire tip location within the 3D vessel model. With this 2D-to-3D transfer, the navigation can be performed from arbitrary viewing angles, disconnected from the static perspective view of the fluoroscopic sequence. Tests on a realistic breathing phantom and on synthetic data with a known ground truth clearly reveal the superiority of our approach compared to naive methods for 3D roadmapping. The concepts and information presented in this paper are based on research and are not commercially available. PMID:18982662
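    The statistical framework is not spelled out in the abstract; as a bare-bones illustration of the 2D-to-3D transfer only, the sketch below back-projects the (already motion-compensated) 2D tip through a 3x4 projection matrix and snaps the resulting ray to the nearest point of a 3D vessel centreline.

```python
import numpy as np

def backproject_tip(P, tip_2d, centerline_pts):
    """Estimate the 3D guide-wire tip as the centreline point closest to the viewing ray.

    P              : (3, 4) fluoroscope projection matrix
    tip_2d         : (2,) motion-compensated tip location in the image
    centerline_pts : (N, 3) points sampled along the 3D vessel model's centreline
    """
    # The camera centre C is where the projection has no finite image; for a
    # finite camera P = [M | p4], C = -M^{-1} p4 and the ray direction is M^{-1} x.
    M, p4 = P[:, :3], P[:, 3]
    C = -np.linalg.inv(M) @ p4
    x_h = np.array([tip_2d[0], tip_2d[1], 1.0])
    d = np.linalg.inv(M) @ x_h
    d /= np.linalg.norm(d)

    # Distance of every centreline point to the ray; pick the minimum.
    v = centerline_pts - C
    t = v @ d                                  # projection onto the ray
    closest_on_ray = C + np.outer(t, d)
    dist = np.linalg.norm(centerline_pts - closest_on_ray, axis=1)
    return centerline_pts[np.argmin(dist)]
```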

  15. Recent developments in DFD (depth-fused 3D) display and arc 3D display

    NASA Astrophysics Data System (ADS)

    Suyama, Shiro; Yamamoto, Hirotsugu

    2015-05-01

    We report our recent developments in the DFD (depth-fused 3D) display and the arc 3D display, both of which have smooth movement parallax. Firstly, the fatigueless DFD display, composed of only two layered displays with a gap, provides continuous perceived depth by changing the luminance ratio between the two images. Two new methods, called the "edge-based DFD display" and the "deep DFD display", have been proposed in order to overcome the two severe limitations of viewing angle and perceived depth. The edge-based DFD display, layered from an original 2D image and its edge part with a gap, can expand the DFD viewing-angle limitation in both 2D and 3D perception. The deep DFD display can enlarge the DFD image depth by modulating the spatial frequencies of the front and rear images. Secondly, the arc 3D display can provide floating 3D images behind or in front of the display by illuminating many arc-shaped directional scattering sources, for example, arc-shaped scratches on a flat board. The curved arc 3D display, composed of many directional scattering sources on a curved surface, can provide a peculiar 3D image, for example, a floating image in a cylindrical bottle. A new active device has been proposed for switching arc 3D images by using the tips of dual-frequency liquid-crystal prisms as directional scattering sources. Directional scattering can be switched on and off by changing the liquid-crystal refractive index, resulting in switching of the arc 3D image.
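    The core DFD idea of trading luminance between the two layers can be sketched as follows; this is an illustrative decomposition only, not the authors' edge-based or deep variants.

```python
import numpy as np

def split_for_dfd(image, depth, depth_front=0.0, depth_rear=1.0):
    """Split a 2D image into front and rear layers for a depth-fused 3D display.

    image : (H, W) luminance image
    depth : (H, W) desired perceived depth, 0.0 = at the front panel, 1.0 = at the rear
    The perceived depth of each pixel is set by the luminance ratio between the
    two superimposed panels; the total luminance is preserved.
    """
    w_rear = np.clip((depth - depth_front) / (depth_rear - depth_front), 0.0, 1.0)
    front = image * (1.0 - w_rear)
    rear = image * w_rear
    return front, rear
```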

  16. 3D Audio System

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Ames Research Center research into virtual reality led to the development of the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion. The perceived location of the sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research and virtual reality development.

  17. 3D Printed Microscope for Mobile Devices that Cost Pennies

    ScienceCinema

    Erikson, Rebecca; Baird, Cheryl; Hutchinson, Janine

    2015-06-23

    Scientists at PNNL have designed a 3D-printable microscope for mobile devices using pennies worth of plastic and glass materials. The microscope has a wide range of uses, from education to in-the-field science.

  18. 3D Printed Microscope for Mobile Devices that Cost Pennies

    SciTech Connect

    Erikson, Rebecca; Baird, Cheryl; Hutchinson, Janine

    2014-09-15

    Scientists at PNNL have designed a 3D-printable microscope for mobile devices using pennies worth of plastic and glass materials. The microscope has a wide range of uses, from education to in-the-field science.

  19. Photogrammetric measurement of 3D freeform millimetre-sized objects with micro features: an experimental validation of the close-range camera calibration model for narrow angles of view

    NASA Astrophysics Data System (ADS)

    Percoco, Gianluca; Sánchez Salmerón, Antonio J.

    2015-09-01

    The measurement of millimetre- and micro-scale features is performed by high-cost systems based on technologies with narrow working ranges that accurately control the position of the sensors. Photogrammetry would lower the costs of 3D inspection of micro-features and would also be applicable to the inspection of non-removable micro parts of large objects. Unfortunately, the behaviour of photogrammetry is not known when it is applied to micro-features. In this paper, the authors address these issues towards the application of digital close-range photogrammetry (DCRP) at the micro-scale, taking into account that there are research papers in the literature stating that an angle of view (AOV) of around 10° is the lower limit for the application of the traditional pinhole close-range calibration model (CRCM), which is the basis of DCRP. First, a general calibration procedure is introduced, with the aid of an open-source software library, to calibrate narrow-AOV cameras with the CRCM. Subsequently, the procedure is validated using a reflex camera with a 60 mm macro lens equipped with extension tubes (20 and 32 mm), achieving magnifications of up to approximately 2x, to verify the literature findings with experimental photogrammetric 3D measurements of millimetre-sized objects with micro-features. The limitation experienced with the laser printing technology used to produce the bi-dimensional pattern on common paper has been overcome using an accurate pattern manufactured with a photolithographic process. The results of the experimental activity prove that the CRCM is valid for AOVs down to 3.4° and that DCRP results are comparable with those of existing and more expensive commercial techniques.
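    The CRCM referred to above is, in essence, the pinhole model with lens distortion estimated from views of a known planar pattern. The sketch below shows a generic checkerboard calibration of that kind using OpenCV; OpenCV is chosen purely for illustration and is not necessarily the open-source library the authors used, and the pattern size and square pitch are placeholder assumptions.

```python
import glob
import cv2
import numpy as np

def calibrate_from_checkerboard(image_glob, pattern=(9, 6), square_mm=0.2):
    """Estimate intrinsics and distortion of a narrow-AOV camera from pattern photos."""
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_mm
    obj_points, img_points, size = [], [], None
    for path in glob.glob(image_glob):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, pattern, None)
        if found:
            corners = cv2.cornerSubPix(
                gray, corners, (11, 11), (-1, -1),
                (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
            obj_points.append(objp)
            img_points.append(corners)
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, size, None, None)
    return rms, K, dist   # reprojection error, camera matrix, distortion coefficients
```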

  20. 'Endurance' Untouched (3-D)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Figure 1 and Figure 2 [figures removed for brevity, see original site]

    This navigation camera mosaic, created from images taken by NASA's Mars Exploration Rover Opportunity on sols 115 and 116 (May 21 and 22, 2004) provides a dramatic view of 'Endurance Crater.' The rover engineering team carefully plotted the safest path into the football field-sized crater, eventually easing the rover down the slopes around sol 130 (June 12, 2004). To the upper left of the crater sits the rover's protective heatshield, which sheltered Opportunity as it passed through the martian atmosphere. The 360-degree, stereo view is presented in a cylindrical-perspective projection, with geometric and radiometric seam correction.

    Figure 1 is the left-eye view of a stereo pair and Figure 2 is the right-eye view of a stereo pair.

  1. 3D Surgical Simulation

    PubMed Central

    Cevidanes, Lucia; Tucker, Scott; Styner, Martin; Kim, Hyungmin; Chapuis, Jonas; Reyes, Mauricio; Proffit, William; Turvey, Timothy; Jaskolka, Michael

    2009-01-01

    This paper discusses the development of methods for computer-aided jaw surgery. Computer-aided jaw surgery allows us to incorporate the high level of precision necessary for transferring virtual plans into the operating room. We also present a complete computer-aided surgery (CAS) system developed in close collaboration with surgeons. Surgery planning and simulation include construction of 3D surface models from cone-beam CT (CBCT), dynamic cephalometry, semi-automatic mirroring, interactive cutting of bone, and bony segment repositioning. A virtual setup can be used to manufacture positioning splints for intra-operative guidance. The system provides further intra-operative assistance with the help of a computer display showing jaw positions and 3D positioning guides updated in real time during the surgical procedure. The CAS system aids in dealing with complex cases, with benefits for the patient, for surgical practice, and for orthodontic finishing. Advanced software tools for diagnosis and treatment planning allow preparation of detailed operative plans, osteotomy repositioning, bone reconstructions, surgical resident training, and assessment of the difficulties of the surgical procedures prior to surgery. CAS has the potential to make the elaboration of the surgical plan a more flexible process, increase the level of detail and accuracy of the plan, yield higher operative precision and control, and enhance documentation of cases. Supported by NIDCR DE017727 and DE018962. PMID:20816308

  2. ShowMe3D

    2012-01-05

    ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral image obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image. The user can select any pixel or region from the displayed image and display the fluorescence spectrum associated with that pixel or region. The user can define up to 3 spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as intensity average and variance from selected regions.

  3. ShowMe3D

    SciTech Connect

    Sinclair, Michael B

    2012-01-05

    ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral image obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image. The user can select any pixel or region from the displayed image and display the fluorescence spectrum associated with that pixel or region. The user can define up to 3 spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as intensity average and variance from selected regions.
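    Applying user-defined spectral filters to a hyperspectral stack, as described, amounts to a weighted sum over the spectral axis. The sketch below illustrates that operation and the region statistics; it is not the actual ShowMe3D code, and the array layout is an assumption.

```python
import numpy as np

def apply_spectral_filters(cube, filters):
    """Simulate a filter-based confocal view of a hyperspectral image.

    cube    : (H, W, L) hyperspectral image, L spectral samples per pixel
    filters : (L, C) transmission curves for up to C display channels (e.g. C = 3)
    Returns an (H, W, C) image: each output channel is every pixel's spectrum
    weighted by the corresponding filter curve and summed.
    """
    return np.einsum('hwl,lc->hwc', cube, filters)

def region_statistics(cube, mask):
    """Mean and variance of intensity over a selected region (boolean mask)."""
    region = cube[mask]                   # (Npix, L)
    return region.mean(), region.var()
```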

  4. Analysis of optical characteristics of photopolymer-based VHOE for multiview autostereoscopic 3D display system

    NASA Astrophysics Data System (ADS)

    Cho, Byung-Chul; Gu, Jung-Sik; Kim, Eun-Soo

    2002-06-01

    Generally, an autostereoscopic display presents a 3D image to a viewer without the need for glasses or other encumbering viewing aids. In this paper, we propose a new autostereoscopic 3D video display system which allows viewers to observe 3D images within the same range of viewing angles. In this system, a photopolymer-based VHOE is made from volume holographic recording materials and is used to project multiview images in spatially different directions sequentially in time. Since this technique is based on a VHOE made from a photorefractive photopolymer instead of the conventional parallax barrier or lenticular sheet, the resolution and number of parallax views of the proposed VHOE-based 3D display system are limited by the photopolymer's physical and optical properties. For the photopolymer to be applicable to a multiview autostereoscopic 3D display system, it must achieve properties such as low distortion of the diffracted light beam, high diffraction efficiency, and uniform intensities of the diffracted light reconstructed from the fully recorded diffraction gratings. In this paper, the optical and physical characteristics of the DuPont HRF photopolymer-based VHOE, such as distortion of the displayed image, uniformity of the diffracted light intensity, photosensitivity, and diffraction efficiency, are measured and discussed.

  5. The rendering context for stereoscopic 3D web

    NASA Astrophysics Data System (ADS)

    Chen, Qinshui; Wang, Wenmin; Wang, Ronggang

    2014-03-01

    3D technologies on the Web have been studied for many years, but they are basically monoscopic 3D. With stereoscopic technology gradually maturing, we are researching how to integrate binocular 3D technology into the Web, creating a stereoscopic 3D browser that will provide users with a brand new experience of human-computer interaction. In this paper, we propose a novel approach to apply stereoscopy technologies to the CSS3 3D Transforms. Under our model, each element can create or participate in a stereoscopic 3D rendering context, in which 3D transforms such as scaling, translation and rotation can be applied and perceived in a truly 3D space. We first discuss the underlying principles of stereoscopy. After that we discuss how these principles can be applied to the Web. A stereoscopic 3D browser with backward compatibility was also created for demonstration purposes. We take advantage of the open-source WebKit project, integrating 3D display ability into the rendering engine of the web browser. For each 3D web page, our 3D browser creates two slightly different images, representing the left-eye and right-eye views, to be combined on the 3D display to generate the illusion of depth. As the results show, elements can be manipulated in a truly 3D space.

  6. 3D polarimetric purity

    NASA Astrophysics Data System (ADS)

    Gil, José J.; San José, Ignacio

    2010-11-01

    From our previous definition of the indices of polarimetric purity for 3D light beams [J.J. Gil, J.M. Correas, P.A. Melero and C. Ferreira, Monogr. Semin. Mat. G. de Galdeano 31, 161 (2004)], an analysis of their geometric and physical interpretation is presented. It is found that, in agreement with previous results, the first parameter is a measure of the degree of polarization, whereas the second parameter (called the degree of directionality) is a measure of the mean angular aperture of the direction of propagation of the corresponding light beam. This pair of invariant, non-dimensional, indices of polarimetric purity contains complete information about the polarimetric purity of a light beam. The overall degree of polarimetric purity is obtained as a weighted quadratic average of the degree of polarization and the degree of directionality.
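    For readers unfamiliar with the indices, the definitions commonly used in the literature (stated here as an assumption about the notation of the cited work, not quoted from it) are, for the ordered, trace-normalized eigenvalues of the 3x3 coherency matrix,

```latex
% Indices of polarimetric purity for a 3D coherency matrix R with
% ordered, trace-normalized eigenvalues \hat\lambda_1 \ge \hat\lambda_2 \ge \hat\lambda_3:
P_1 = \hat\lambda_1 - \hat\lambda_2, \qquad
P_2 = \hat\lambda_1 + \hat\lambda_2 - 2\hat\lambda_3, \qquad
P_\Delta = \sqrt{\tfrac{3}{4}\,P_1^{2} + \tfrac{1}{4}\,P_2^{2}},
```

    so that the overall degree of polarimetric purity is indeed a weighted quadratic average of the first index (degree of polarization) and the second index (degree of directionality), consistent with the statement above.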

  7. 3D field harmonics

    SciTech Connect

    Caspi, S.; Helm, M.; Laslett, L.J.

    1991-03-30

    We have developed a harmonic representation for the three-dimensional field components within the windings of accelerator magnets. The form in which the field is presented is suitable for interfacing with other codes that make use of the 3D field components (particle tracking and stability). The field components can be calculated with high precision and reduced CPU time at any location (r, θ, z) inside the magnet bore. The same conductor geometry which is used to simulate line currents is also used in CAD, with modifications more readily available. It is our hope that the format used here for magnetic fields can be used not only as a means of delivering fields but also as a way by which beam dynamics can suggest corrections to the conductor geometry. 5 refs., 70 figs.
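    For context only, the sketch below evaluates the standard 2D transverse multipole expansion that such 3D harmonic representations extend; it is not the code's own 3D formulation, and the coefficient values in the comment are illustrative.

```python
import numpy as np

def transverse_field(x, y, normal, skew, r_ref):
    """Evaluate B_y + i*B_x from standard 2D multipole coefficients.

    normal, skew : arrays of B_n and A_n (n = 1 is the dipole term), in tesla at r_ref
    r_ref        : reference radius in the same units as x, y
    Returns (Bx, By) at the point (x, y) inside the bore, using
    B_y + i*B_x = sum_n (B_n + i*A_n) * ((x + i*y)/r_ref)**(n-1).
    """
    z = (x + 1j * y) / r_ref
    n = np.arange(1, len(normal) + 1)
    field = np.sum((np.asarray(normal) + 1j * np.asarray(skew)) * z ** (n - 1))
    return field.imag, field.real          # Bx = Im, By = Re

# Bx, By = transverse_field(0.01, 0.005, normal=[0.0, 2.0], skew=[0.0, 0.0], r_ref=0.017)
```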

  8. Nonlaser-based 3D surface imaging

    SciTech Connect

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J.

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of objects in a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that make alternative methods sometimes more attractive are: (1) real-time data capture, (2) eye safety, (3) portability, and (4) working distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.
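    As a pocket illustration of the stereo-vision route (not the presenters' system), depth for a rectified camera pair follows from disparity via Z = f*B/d, as in this minimal sketch.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Convert a disparity map from a rectified camera pair into metric depth.

    disparity_px : (H, W) horizontal disparities in pixels (0 where matching failed)
    focal_px     : focal length in pixels (same for both rectified cameras)
    baseline_m   : distance between the two camera centres in metres
    """
    with np.errstate(divide="ignore"):
        depth = focal_px * baseline_m / disparity_px
    depth[~np.isfinite(depth)] = 0.0        # mark unmatched pixels
    return depth
```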

  9. Modeling Cellular Processes in 3-D

    PubMed Central

    Mogilner, Alex; Odde, David

    2011-01-01

    Summary Recent advances in photonic imaging and fluorescent protein technology offer unprecedented views of molecular space-time dynamics in living cells. At the same time, advances in computing hardware and software enable modeling of ever more complex systems, from global climate to cell division. As modeling and experiment become more closely integrated, we must address the issue of modeling cellular processes in 3-D. Here, we highlight recent advances related to 3-D modeling in cell biology. While some processes require full 3-D analysis, we suggest that others are more naturally described in 2-D or 1-D. Keeping the dimensionality as low as possible reduces computational time and makes models more intuitively comprehensible; however, the ability to test full 3-D models will build greater confidence in models generally and remains an important emerging area of cell biological modeling. PMID:22036197

  10. Eyes on the Earth 3D

    NASA Technical Reports Server (NTRS)

    Kulikov, anton I.; Doronila, Paul R.; Nguyen, Viet T.; Jackson, Randal K.; Greene, William M.; Hussey, Kevin J.; Garcia, Christopher M.; Lopez, Christian A.

    2013-01-01

    Eyes on the Earth 3D software gives scientists, and the general public, a realtime, 3D interactive means of accurately viewing the real-time locations, speed, and values of recently collected data from several of NASA's Earth Observing Satellites using a standard Web browser (climate.nasa.gov/eyes). Anyone with Web access can use this software to see where the NASA fleet of these satellites is now, or where they will be up to a year in the future. The software also displays several Earth Science Data sets that have been collected on a daily basis. This application uses a third-party, 3D, realtime, interactive game engine called Unity 3D to visualize the satellites and is accessible from a Web browser.

  11. 360-degree panorama in 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    This 360-degree panorama was taken in stereo by the deployed Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses (red left lens, blue right lens) are necessary to help identify surface detail. All three petals, the perimeter of the deflated airbags, deployed rover Sojourner, forward and backward ramps and prominent surface features are visible, including the double Twin Peaks at the horizon. Sojourner would later investigate the rock Barnacle Bill just to its left in this image, and the larger rock Yogi at its forward right.

    The IMP is a stereo imaging system with color capability provided by 24 selectable filters -- twelve filters per 'eye.' It stands 1.8 meters above the Martian surface, and has a resolution of two millimeters at a range of two meters. Stereoscopic imaging brings exceptional clarity and depth to many of the features in this image, particularly the ridge beyond the far left petal and the large rock Yogi. The curvature and misalignment of several sections are due to image parallax.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is a division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.


  12. Inflammation in 3D.

    PubMed

    Kobayashi, Scott D; DeLeo, Frank R

    2012-06-14

    Our view of the response to infection is limited by current methodologies, which provide minimal spatial information on the systemic inflammatory response. In this issue, Attia et al. (2012) describe a cutting-edge approach to image the inflammatory response to infection, which includes identification of host proteins in three dimensions. PMID:22704615

  13. [3D virtual endoscopy of heart].

    PubMed

    Du, Aan; Yang, Xin; Xue, Haihong; Yao, Liping; Sun, Kun

    2012-10-01

    In this paper, we present a virtual endoscopy (VE) system for the diagnosis of heart diseases that is efficient, affordable, and easy to popularize for viewing the interior of the heart. Dual-source CT (DSCT) data were used as the primary data in our system. The 3D structure of the virtual heart was reconstructed with GPU-based 3D texture-mapping technology and could be displayed dynamically in real time. During real-time display, we could not only observe the inside of the heart chambers but also examine them from new angles of view using 3D data clipped according to the doctor's requirements. For observation, we used both an interactive mode and an automatic mode. In the automatic mode, we used Dijkstra's algorithm, with the 3D Euclidean distance as the weighting factor, to find the view path quickly, and the view path was then used to calculate the four-chamber plane. PMID:23198444
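    The path search described can be sketched as a plain Dijkstra run over a graph of candidate viewpoints with Euclidean edge weights; the graph construction and node selection below are illustrative assumptions, not the clinical system's implementation.

```python
import heapq
import numpy as np

def dijkstra_view_path(points, edges, start, goal):
    """Shortest fly-through path over a graph of 3D viewpoints inside the chambers.

    points : (N, 3) array of candidate viewpoint coordinates
    edges  : dict mapping a node index to an iterable of neighbour indices
    start, goal : node indices; edge weights are 3D Euclidean distances.
    Assumes goal is reachable from start.
    """
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, np.inf):
            continue                        # stale heap entry
        for v in edges[u]:
            nd = d + float(np.linalg.norm(points[u] - points[v]))
            if nd < dist.get(v, np.inf):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # reconstruct the view path from goal back to start
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```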

  14. Stereoscopic-3D display design: a new paradigm with Intel Adaptive Stable Image Technology [IA-SIT

    NASA Astrophysics Data System (ADS)

    Jain, Sunil

    2012-03-01

    Stereoscopic-3D (S3D) proliferation on personal computers (PC) is mired in several technical and business challenges: a) viewing discomfort due to cross-talk amongst stereo images; b) high system cost; and c) restricted content availability. Users expect S3D visual quality to be better than, or at least equal to, what they are used to enjoying in 2D in terms of resolution, pixel density, color, and interactivity. Intel Adaptive Stable Image Technology (IA-SIT) is a foundational technology, successfully developed to resolve S3D system design challenges and deliver high-quality 3D visualization at PC price points. Optimizations in the display driver, panel timing firmware, backlight hardware, eyewear optical stack, and synch mechanism combined can help accomplish this goal. Agnostic to refresh rate, IA-SIT will scale with the shrinking of display transistors and improvements in liquid crystal and LED materials. Industry could profusely benefit from the following calls to action: 1) Adopt 'IA-SIT S3D Mode' in panel specs (via VESA) to help panel makers monetize S3D; 2) Adopt 'IA-SIT Eyewear Universal Optical Stack' and algorithm (via CEA) to help PC peripheral makers develop stylish glasses; 3) Adopt 'IA-SIT Real Time Profile' for sub-100 us latency control (via BT SIG) to extend BT into S3D; and 4) Adopt 'IA-SIT Architecture' for monitors and TVs to monetize via PC attach.

  15. 'Diamond' in 3-D

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D, microscopic imager mosaic of a target area on a rock called 'Diamond Jenness' was taken after NASA's Mars Exploration Rover Opportunity ground into the surface with its rock abrasion tool for a second time.

    Opportunity has bored nearly a dozen holes into the inner walls of 'Endurance Crater.' On sols 177 and 178 (July 23 and July 24, 2004), the rover worked double-duty on Diamond Jenness. Surface debris and the bumpy shape of the rock resulted in a shallow and irregular hole, only about 2 millimeters (0.08 inch) deep. The final depth was not enough to remove all the bumps and leave a neat hole with a smooth floor. This extremely shallow depression was then examined by the rover's alpha particle X-ray spectrometer.

    On Sol 178, Opportunity's 'robotic rodent' dined on Diamond Jenness once again, grinding almost an additional 5 millimeters (about 0.2 inch). The rover then applied its Moessbauer spectrometer to the deepened hole. This double dose of Diamond Jenness enabled the science team to examine the rock at varying layers. Results from those grindings are currently being analyzed.

    The image mosaic is about 6 centimeters (2.4 inches) across.

  16. Miniaturized 3D microscope imaging system

    NASA Astrophysics Data System (ADS)

    Lan, Yung-Sung; Chang, Chir-Weei; Sung, Hsin-Yueh; Wang, Yen-Chang; Chang, Cheng-Yi

    2015-05-01

    We designed and assembled a portable 3-D miniature microscopic imaging system with a size of 35 x 35 x 105 mm3. By integrating a microlens array (MLA) into the optical train of a handheld microscope, the image of a biological specimen can be captured in a single shot for ease of use. With the raw light-field data and software, the focal plane can be changed digitally and the 3-D image can be reconstructed after the image has been taken. To localize an object in a 3-D volume, an automated data-analysis algorithm that precisely determines depth position is needed. The ability to create focal stacks from a single image allows moving specimens to be recorded. Applying a light-field microscope algorithm to these focal stacks produces a set of cross sections, which can be visualized using 3-D rendering. Furthermore, we have developed a series of design rules to enhance pixel-use efficiency and reduce the crosstalk between microlenses in order to obtain good image quality. In this paper, we demonstrate a handheld light field microscope (HLFM) that distinguishes two fluorescent particles of different colors separated by a cover glass over a 600 um range, and show its focal stacks and 3-D positions.
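
    For readers unfamiliar with light-field refocusing, the sketch below shows the standard shift-and-add scheme that underlies generating a focal stack from a single microlens-array exposure. It is a generic illustration under assumed array and parameter names, not the authors' algorithm.

        # Generic shift-and-add refocusing of a 4D light field L[u, v, y, x]
        # captured behind a microlens array. The slope 'alpha' selects the
        # synthetic focal plane; sweeping alpha yields a focal stack.
        import numpy as np

        def refocus(lightfield, alpha):
            """lightfield: array of shape (U, V, Y, X); returns a 2D image."""
            U, V, Y, X = lightfield.shape
            out = np.zeros((Y, X))
            for u in range(U):
                for v in range(V):
                    # Shift each sub-aperture view in proportion to its offset
                    # from the optical axis, then accumulate.
                    dy = int(round(alpha * (u - U // 2)))
                    dx = int(round(alpha * (v - V // 2)))
                    out += np.roll(lightfield[u, v], (dy, dx), axis=(0, 1))
            return out / (U * V)

        # A focal stack is just the same data refocused at several depths:
        # stack = [refocus(L, a) for a in np.linspace(-2.0, 2.0, 21)]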

  17. Gravity and spatial orientation in virtual 3D-mazes.

    PubMed

    Vidal, Manuel; Lipshits, Mark; McIntyre, Joseph; Berthoz, Alain

    2003-01-01

    In order to bring new insights into the processing of 3D spatial information, we conducted experiments on the capacity of human subjects to memorize 3D-structured environments, such as buildings with several floors or the potentially complex 3D structure of an orbital space station. We had subjects move passively in one of two different exploration modes, through a visual virtual environment that consisted of a series of connected tunnels. In upright displacement, self-rotation when going around corners in the tunnels was limited to yaw rotations. For horizontal translations, subjects faced forward in the direction of motion. When moving up or down through vertical segments of the 3D tunnels, however, subjects faced the tunnel wall, remaining upright as if moving up and down in a glass elevator. In the unconstrained displacement mode, subjects would appear to climb or dive face-forward when moving vertically; thus, in this mode subjects could experience visual flow consistent with rotations about any of the 3 canonical axes. In a previous experiment, subjects were asked to determine whether a static, outside view of a test tunnel corresponded or not to the tunnel through which they had just passed. Results showed that performance was better on this task for the upright than for the unconstrained displacement mode; i.e. when subjects remained "upright" with respect to the virtual environment as defined by the subject's posture in the first segment. This effect suggests that gravity may provide a key reference frame used in the shift between egocentric and allocentric representations of the 3D virtual world. To check whether it is the polarizing effects of gravity that lead to the favoring of the upright displacement mode, the experimental paradigm was adapted for orbital flight and performed by cosmonauts onboard the International Space Station. For these flight experiments the previous recognition task was replaced by a computerized reconstruction task, which proved

  18. 3-D transient analysis of pebble-bed HTGR by TORT-TD/ATTICA3D

    SciTech Connect

    Seubert, A.; Sureda, A.; Lapins, J.; Buck, M.; Bader, J.; Laurien, E.

    2012-07-01

    As most of the acceptance criteria are local core parameters, application of transient 3-D fine mesh neutron transport and thermal hydraulics coupled codes is mandatory for best estimate evaluations of safety margins. This also applies to high-temperature gas cooled reactors (HTGR). Application of 3-D fine-mesh transient transport codes using few energy groups coupled with 3-D thermal hydraulics codes becomes feasible in view of increasing computing power. This paper describes the discrete ordinates based coupled code system TORT-TD/ATTICA3D that has recently been extended by a fine-mesh diffusion solver. Based on transient analyses for the PBMR-400 design, the transport/diffusion capabilities are demonstrated and 3-D local flux and power redistribution effects during a partial control rod withdrawal are shown. (authors)

  19. Aging kinetics of levoglucosan orientational glass as a rate dispersion process and consequences for the heterogeneous dynamics view.

    PubMed

    Righetti, Maria Cristina; Tombari, Elpidio; Johari, G P

    2016-08-01

    Aging kinetics of a glass is currently modeled in terms of slowing of its α-relaxation dynamics, whose features are interpreted in terms of dynamic heterogeneity, i.e., formation and decay of spatially and temporally distinct nm-size regions. To test the merits of this view, we studied the calorimetric effects of aging an orientational glass of levoglucosan crystal in which such regions would not form in the same way as they form in liquids, and persist in structural glasses, because there is no liquid-like molecular diffusion in the crystal. By measuring the heat capacity, Cp, we determined the change in the enthalpy, H, and the entropy, S, during two aging-protocols: (a) keeping the samples isothermally at temperature, Ta, and measuring the changes after different aging times, ta, and (b) keeping the samples at different Tas and measuring the changes after the same ta. A model-free analysis of the data shows that as ta is increased (procedure (a)), H and S decrease according to a dispersive rate kinetics, and as Ta is increased (procedure (b)), H and S first increase, reach a local maximum at a certain Ta, and then decrease. Even though there is no translational diffusion to produce (liquid-like) free volume, and no translational-rotational decoupling, the aging features are indistinguishable from those of structural glasses. We also find that the Kohlrausch parameter, originally fitted to the glass-aging data, decreases with decrease in Ta, which is incompatible with the current use of the aging data for estimating the α-relaxation time. We argue that the vibrational state of a glass is naturally incompatible with its configurational state, and both change on aging until they are compatible, in the equilibrium liquid. So, dipolar fluctuations seen as the α-relaxation would not be the same motions that cause aging. We suggest that aging kinetics is intrinsically dispersive with its own characteristic rate constant and it does not yield the α-relaxation rate
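
    For reference, the "Kohlrausch parameter" mentioned above is the stretching exponent of the Kohlrausch-Williams-Watts (stretched-exponential) function, the form conventionally fitted to such relaxation and aging data; the expression below is that generic form, not necessarily the authors' exact fitting equation.

        % Kohlrausch-Williams-Watts (stretched-exponential) relaxation function,
        % with characteristic time \tau and Kohlrausch parameter \beta_K:
        \phi(t) = \exp\!\left[-\left(\frac{t}{\tau}\right)^{\beta_K}\right],
        \qquad 0 < \beta_K \le 1 .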

  20. Aging kinetics of levoglucosan orientational glass as a rate dispersion process and consequences for the heterogeneous dynamics view

    NASA Astrophysics Data System (ADS)

    Righetti, Maria Cristina; Tombari, Elpidio; Johari, G. P.

    2016-08-01

    Aging kinetics of a glass is currently modeled in terms of slowing of its α-relaxation dynamics, whose features are interpreted in terms of dynamic heterogeneity, i.e., formation and decay of spatially and temporally distinct nm-size regions. To test the merits of this view, we studied the calorimetric effects of aging an orientational glass of levoglucosan crystal in which such regions would not form in the same way as they form in liquids, and persist in structural glasses, because there is no liquid-like molecular diffusion in the crystal. By measuring the heat capacity, Cp, we determined the change in the enthalpy, H, and the entropy, S, during two aging-protocols: (a) keeping the samples isothermally at temperature, Ta, and measuring the changes after different aging times, ta, and (b) keeping the samples at different Tas and measuring the changes after the same ta. A model-free analysis of the data shows that as ta is increased (procedure (a)), H and S decrease according to a dispersive rate kinetics, and as Ta is increased (procedure (b)), H and S first increase, reach a local maximum at a certain Ta, and then decrease. Even though there is no translational diffusion to produce (liquid-like) free volume, and no translational-rotational decoupling, the aging features are indistinguishable from those of structural glasses. We also find that the Kohlrausch parameter, originally fitted to the glass-aging data, decreases with decrease in Ta, which is incompatible with the current use of the aging data for estimating the α-relaxation time. We argue that the vibrational state of a glass is naturally incompatible with its configurational state, and both change on aging until they are compatible, in the equilibrium liquid. So, dipolar fluctuations seen as the α-relaxation would not be the same motions that cause aging. We suggest that aging kinetics is intrinsically dispersive with its own characteristic rate constant and it does not yield the α-relaxation rate

  1. CASTLE3D - A Computer Aided System for Labelling Archaeological Excavations in 3D

    NASA Astrophysics Data System (ADS)

    Houshiar, H.; Borrmann, D.; Elseberg, J.; Nüchter, A.; Näth, F.; Winkler, S.

    2015-08-01

    Documentation of archaeological excavation sites with conventional methods and tools such as hand drawings, measuring tape and archaeological notes is time-consuming. This process is prone to human errors and the quality of the documentation depends on the qualification of the archaeologist on site. Use of modern technology and methods in 3D surveying and 3D robotics facilitates and improves this process. Computer-aided systems and databases improve the documentation quality and increase the speed of data acquisition. 3D laser scanning is the state of the art in modelling archaeological excavation sites, historical sites and even entire cities or landscapes. Modern laser scanners are capable of data acquisition of up to 1 million points per second. This provides a very detailed 3D point cloud of the environment. 3D point clouds and 3D models of an excavation site provide a better representation of the environment for the archaeologist and for documentation. The point cloud can be used both for further studies on the excavation and for the presentation of results. This paper introduces a computer-aided system for labelling archaeological excavations in 3D (CASTLE3D). Consisting of a set of tools for recording and georeferencing the 3D data from an excavation site, CASTLE3D is a novel documentation approach in industrial archaeology. It provides a 2D and 3D visualisation of the data and an easy-to-use interface that enables the archaeologist to select regions of interest and to interact with the data in both representations. The 2D visualisation and a 3D orthogonal view of the data provide cuts of the environment that resemble the traditional hand drawings. The 3D perspective view gives a realistic view of the environment. CASTLE3D is designed as an easy-to-use on-site semantic mapping tool for archaeologists. Each project contains a predefined set of semantic information that can be used to label findings in the data. Multiple regions of interest can be joined under

  2. The hype cycle in 3D displays: inherent limits of autostereoscopy

    NASA Astrophysics Data System (ADS)

    Grasnick, Armin

    2013-06-01

    For several years now, a renaissance of three-dimensional cinema has been observable. Even though stereoscopy has been quite popular over the last 150 years, 3D cinema has disappeared and re-established itself several times. The first boom in the late 19th century stagnated and vanished after a few years of success, and the same happened again in the 1950s and 1980s of the 20th century. With the commercial success of the 3D blockbuster "Avatar" in 2009, at the latest, it is obvious that 3D cinema is having a comeback. How long will it last this time? There are already some signs of declining interest in 3D movies, as the discrepancy between expectations and the results delivered becomes more evident. From the former hypes it is known that, after an initial phase of curiosity (high expectations and excessive fault tolerance), a phase of frustration and saturation (critical analysis and subsequent disappointment) will follow. This phenomenon is known as the "hype cycle". The everyday experience of technological evolution has conditioned consumers: the expectation that "any technical improvement will preserve all previous properties" cannot be fulfilled with present 3D technologies. This is an inherent problem of stereoscopy and autostereoscopy: presenting an additional dimension forces concessions in relevant characteristics (i.e., resolution, brightness, frequency, viewing area) or leads to undesirable physical side effects (i.e., subjective discomfort, eye strain, spatial disorientation, feeling of nausea). It will be shown that the 3D apparatus (3D glasses or 3D display) is also the source of these restrictions and a reason for decreasing fascination. The limitations of present autostereoscopic technologies will be explained.

  3. 3D Display Using Conjugated Multiband Bandpass Filters

    NASA Technical Reports Server (NTRS)

    Bae, Youngsam; White, Victor E.; Shcheglov, Kirill

    2012-01-01

    Stereoscopic display techniques are based on the principle of displaying two views, with a slightly different perspective, in such a way that the left view is seen only by the left eye and the right view only by the right eye. However, one of the major challenges in optical devices is crosstalk between the two channels. Crosstalk is due to the optical devices not completely blocking the wrong-side image, so the left eye sees a little bit of the right image and the right eye sees a little bit of the left image. This results in eyestrain and headaches. A pair of interference filters worn as an optical device can solve the problem. The device consists of a pair of multiband bandpass filters that are conjugated. The term "conjugated" describes the passband regions of one filter not overlapping with those of the other; instead, the regions are interdigitated. Along with the glasses, a 3D display produces colors composed of primary colors (the basis for producing colors) having spectral bands the same as the passbands of the filters. More specifically, the primary colors producing one viewpoint are made up of the passbands of one filter, and those of the other viewpoint are made up of the passbands of the conjugated filter. Thus, the primary colors of one filter are seen only by the eye that has the matching multiband filter. The inherent characteristic of the interference filters allows little or no transmission of the wrong-side stereoscopic image.
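
    The "conjugated" condition described above (interdigitated, non-overlapping passbands) is easy to express programmatically. The Python sketch below uses made-up band edges purely for illustration; the actual filter designs are not given in the abstract.

        # Two "conjugated" multiband bandpass filters: their passbands (in nm)
        # interleave without overlapping. Band edges here are invented values.
        LEFT_BANDS = [(430, 450), (510, 530), (600, 620)]   # left-eye filter
        RIGHT_BANDS = [(460, 480), (545, 565), (635, 655)]  # right-eye filter

        def overlaps(a, b):
            return a[0] < b[1] and b[0] < a[1]

        def conjugated(bands_a, bands_b):
            """True if no passband of one filter overlaps any passband of the other."""
            return not any(overlaps(a, b) for a in bands_a for b in bands_b)

        def transmitted(wavelength_nm, bands):
            """True if the wavelength falls inside one of the filter's passbands."""
            return any(lo <= wavelength_nm <= hi for lo, hi in bands)

        assert conjugated(LEFT_BANDS, RIGHT_BANDS)
        # A left-channel primary at 520 nm reaches the left eye only:
        assert transmitted(520, LEFT_BANDS) and not transmitted(520, RIGHT_BANDS)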

  4. Odyssey over Mars' South Pole in 3-D

    NASA Technical Reports Server (NTRS)

    2003-01-01

    NASA's Mars Odyssey spacecraft passes above Mars' south pole in this artist's concept illustration. This red-blue anaglyph artwork can be viewed in 3-D on your computer monitor or in color print form by wearing red-blue (cyan) 3-D glasses.

    The spacecraft has been orbiting Mars since October 24, 2001.

    NASA's Jet Propulsion Laboratory manages the Mars Odyssey mission for the NASA Office of Space Science, Washington, D.C. Investigators at Arizona State University in Tempe, the University of Arizona in Tucson, and NASA's Johnson Space Center, Houston, operate the science instruments. The gamma-ray spectrometer was provided by the University of Arizona in collaboration with the Russian Aviation and Space Agency and Institute for Space Research, which provided the high-energy neutron detector, and the Los Alamos National Laboratories, New Mexico, which provided the neutron spectrometer. Lockheed Martin Space Systems, Denver, is the prime contractor for the project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.

  5. R3D-2-MSA: the RNA 3D structure-to-multiple sequence alignment server.

    PubMed

    Cannone, Jamie J; Sweeney, Blake A; Petrov, Anton I; Gutell, Robin R; Zirbel, Craig L; Leontis, Neocles

    2015-07-01

    The RNA 3D Structure-to-Multiple Sequence Alignment Server (R3D-2-MSA) is a new web service that seamlessly links RNA three-dimensional (3D) structures to high-quality RNA multiple sequence alignments (MSAs) from diverse biological sources. In this first release, R3D-2-MSA provides manual and programmatic access to curated, representative ribosomal RNA sequence alignments from bacterial, archaeal, eukaryal and organellar ribosomes, using nucleotide numbers from representative atomic-resolution 3D structures. A web-based front end is available for manual entry and an Application Program Interface for programmatic access. Users can specify up to five ranges of nucleotides and 50 nucleotide positions per range. The R3D-2-MSA server maps these ranges to the appropriate columns of the corresponding MSA and returns the contents of the columns, either for display in a web browser or in JSON format for subsequent programmatic use. The browser output page provides a 3D interactive display of the query, a full list of sequence variants with taxonomic information and a statistical summary of distinct sequence variants found. The output can be filtered and sorted in the browser. Previous user queries can be viewed at any time by resubmitting the output URL, which encodes the search and re-generates the results. The service is freely available with no login requirement at http://rna.bgsu.edu/r3d-2-msa. PMID:26048960
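
    As a rough illustration of the programmatic access described above, the sketch below issues a JSON query against the service URL quoted in the abstract. The endpoint layout, parameter names and range syntax used here are placeholders only, not the documented R3D-2-MSA API.

        # Hypothetical sketch: submit up to five nucleotide ranges, get JSON back.
        # The "units" parameter name and the range encoding are assumptions.
        import requests

        BASE_URL = "http://rna.bgsu.edu/r3d-2-msa"

        def query_ranges(pdb_id, chain, ranges):
            """ranges: list of (start, end) nucleotide-number pairs (max 5)."""
            loci = ",".join(f"{pdb_id}|{chain}|{a}:{b}" for a, b in ranges)
            resp = requests.get(BASE_URL, params={"units": loci, "format": "json"})
            resp.raise_for_status()
            return resp.json()  # column contents for downstream programmatic use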

  6. R3D-2-MSA: the RNA 3D structure-to-multiple sequence alignment server

    PubMed Central

    Cannone, Jamie J.; Sweeney, Blake A.; Petrov, Anton I.; Gutell, Robin R.; Zirbel, Craig L.; Leontis, Neocles

    2015-01-01

    The RNA 3D Structure-to-Multiple Sequence Alignment Server (R3D-2-MSA) is a new web service that seamlessly links RNA three-dimensional (3D) structures to high-quality RNA multiple sequence alignments (MSAs) from diverse biological sources. In this first release, R3D-2-MSA provides manual and programmatic access to curated, representative ribosomal RNA sequence alignments from bacterial, archaeal, eukaryal and organellar ribosomes, using nucleotide numbers from representative atomic-resolution 3D structures. A web-based front end is available for manual entry and an Application Program Interface for programmatic access. Users can specify up to five ranges of nucleotides and 50 nucleotide positions per range. The R3D-2-MSA server maps these ranges to the appropriate columns of the corresponding MSA and returns the contents of the columns, either for display in a web browser or in JSON format for subsequent programmatic use. The browser output page provides a 3D interactive display of the query, a full list of sequence variants with taxonomic information and a statistical summary of distinct sequence variants found. The output can be filtered and sorted in the browser. Previous user queries can be viewed at any time by resubmitting the output URL, which encodes the search and re-generates the results. The service is freely available with no login requirement at http://rna.bgsu.edu/r3d-2-msa. PMID:26048960

  7. Super long viewing distance light homogeneous emitting three-dimensional display

    NASA Astrophysics Data System (ADS)

    Liao, Hongen

    2015-04-01

    Three-dimensional (3D) display technology has continuously been attracting public attention with the progress in today's 3D television and mature display technologies. The primary characteristics of conventional glasses-free autostereoscopic displays, such as spatial resolution, image depth, and viewing angle, are often limited due to the use of optical lenses or optical gratings. We present a 3D display using a MEMS-scanning-mechanism-based light homogeneous emitting (LHE) approach and demonstrate that the display can directly generate an autostereoscopic 3D image without the need for optical lenses or gratings. The generated 3D image has the advantages of non-aberration and a high-definition spatial resolution, making it the first to exhibit animated 3D images with an image depth of six meters. Our LHE 3D display approach can be used to generate a natural flat-panel 3D display with super long viewing distance and alternative real-time image update.

  8. Super long viewing distance light homogeneous emitting three-dimensional display.

    PubMed

    Liao, Hongen

    2015-01-01

    Three-dimensional (3D) display technology has continuously been attracting public attention with the progress in today's 3D television and mature display technologies. The primary characteristics of conventional glasses-free autostereoscopic displays, such as spatial resolution, image depth, and viewing angle, are often limited due to the use of optical lenses or optical gratings. We present a 3D display using a MEMS-scanning-mechanism-based light homogeneous emitting (LHE) approach and demonstrate that the display can directly generate an autostereoscopic 3D image without the need for optical lenses or gratings. The generated 3D image has the advantages of non-aberration and a high-definition spatial resolution, making it the first to exhibit animated 3D images with an image depth of six meters. Our LHE 3D display approach can be used to generate a natural flat-panel 3D display with super long viewing distance and alternative real-time image update. PMID:25828029

  9. SOLIDFELIX: a transportable 3D static volume display

    NASA Astrophysics Data System (ADS)

    Langhans, Knut; Kreft, Alexander; Wörden, Henrik Tom

    2009-02-01

    Flat 2D screens cannot display complex 3D structures without using different slices of the 3D model. Volumetric displays like the "FELIX 3D-Displays" can solve the problem. They provide space-filling images and are characterized by "multi-viewer" and "all-round view" capabilities without requiring cumbersome goggles. In the past, many scientists have tried to develop similar 3D displays; our paper includes an overview from 1912 up to today. During several years of investigations on swept-volume displays within the "FELIX 3D-Projekt", we learned about some significant disadvantages of rotating screens, for example hidden zones. For this reason the FELIX team also started investigations in the area of static volume displays. Within three years of research on our 3D static volume display at a regular high school in Germany, we were able to achieve considerable results despite the minor funding resources of this non-commercial group. The core element of our setup is the display volume, which consists of a cubic transparent material (crystal, glass, or polymers doped with special ions, mainly from the rare-earth group, or other fluorescent materials). We focused our investigations on one-frequency, two-step upconversion (OFTS-UC) and two-frequency, two-step upconversion (TFTS-UC) with IR lasers as the excitation source. Our main interest was to find both an appropriate material and an appropriate doping for the display volume. Early experiments were carried out with CaF2 and YLiF4 crystals doped with 0.5 mol% Er3+ ions, which were excited in order to create a volumetric pixel (voxel). In addition, the crystals are limited to a very small size, which is why we later investigated heavy-metal fluoride glasses, which are easier to produce in large sizes. Currently we are using a ZBLAN glass belonging to this group, making it possible to increase both the display volume and the brightness of the images significantly. Although our display is currently

  10. Recent development of 3D display technology for new market

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Sik

    2003-11-01

    A multi-view 3D video processor was designed and implemented with several FPGAs for real-time applications, and a projection-type 3D display was introduced for low-cost commercialization. One high-resolution projection panel and only one projection lens are capable of displaying multi-view autostereoscopic images. The system can cope with various arrangements of 3D camera systems (or pixel arrays) and various 3D display resolutions. It shows high 3-D image quality in terms of resolution, brightness, and contrast, so it is well suited for commercialization in the game and advertisement markets.

  11. Wax-bonding 3D microfluidic chips.

    PubMed

    Gong, Xiuqing; Yi, Xin; Xiao, Kang; Li, Shunbo; Kodzius, Rimantas; Qin, Jianhua; Wen, Weijia

    2010-10-01

    We report a simple, low-cost and detachable microfluidic chip incorporating easily accessible paper, glass slides or other polymer films as the chip materials along with adhesive wax as the recycling bonding material. We use a laser to cut through the paper or film to form patterns and then sandwich the paper and film between glass sheets or polymer membranes. The hot-melt adhesive wax can realize bridge bonding between various materials, for example, paper, polymethylmethacrylate (PMMA) film, glass sheets, or metal plates. The bonding process is reversible and the wax is reusable through a melting and cooling process. With this process, a three-dimensional (3D) microfluidic chip is achievable by evacuating and venting the chip in a hot-water bath. To study the biocompatibility and applicability of the wax-based microfluidic chip, we tested the PCR compatibility with the chip materials first. Then we applied the wax-paper based microfluidic chip to HeLa cell electroporation (EP). Subsequently, a prototype of a 5-layer 3D chip was fabricated by multilayer wax bonding. To check the sealing ability and the durability of the chip, green fluorescent protein (GFP) recombinant Escherichia coli (E. coli) bacteria were cultured, with which the chemotaxis of E. coli was studied in order to determine the influence of antibiotic ciprofloxacin concentration on E. coli migration. PMID:20689865

  12. Crowdsourcing Based 3d Modeling

    NASA Astrophysics Data System (ADS)

    Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T.

    2016-06-01

    Web-based photo albums that support organizing and viewing users' images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, users attach geotags to the images to enable their use, e.g., in location-based applications on social networks. Our paper discusses a procedure that collects open-access images from a site frequently visited by tourists. Geotagged pictures showing a sight or tourist attraction are selected and processed in photogrammetric processing software that produces a 3D model of the captured object. For this particular investigation we selected three attractions in Budapest. To assess the geometrical accuracy, we used laser scanning, DSLR and smartphone photography to derive reference values for verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models can be derived by applying photogrammetric processing software, simply by using images from the community, without visiting the site.

  13. GestAction3D: A Platform for Studying Displacements and Deformations of 3D Objects Using Hands

    NASA Astrophysics Data System (ADS)

    Lingrand, Diane; Renevier, Philippe; Pinna-Déry, Anne-Marie; Cremaschi, Xavier; Lion, Stevens; Rouel, Jean-Guilhem; Jeanne, David; Cuisinaud, Philippe; Soula*, Julien

    We present a low-cost hand-based device coupled with a 3D motion recovery engine and 3D visualization. This platform aims at studying ergonomic 3D interactions in order to manipulate and deform 3D models by interacting with the hands on 3D meshes. Deformations are done using different modes of interaction that we will detail in the paper. Finger extremities are attached to vertices, edges or facets. Switching from one mode to another or changing the point of view is done using gestures. The determination of the most adequate gestures is part of the work

  14. 3D Spectroscopy in Astronomy

    NASA Astrophysics Data System (ADS)

    Mediavilla, Evencio; Arribas, Santiago; Roth, Martin; Cepa-Nogué, Jordi; Sánchez, Francisco

    2011-09-01

    Preface; Acknowledgements; 1. Introductory review and technical approaches Martin M. Roth; 2. Observational procedures and data reduction James E. H. Turner; 3. 3D Spectroscopy instrumentation M. A. Bershady; 4. Analysis of 3D data Pierre Ferruit; 5. Science motivation for IFS and galactic studies F. Eisenhauer; 6. Extragalactic studies and future IFS science Luis Colina; 7. Tutorials: how to handle 3D spectroscopy data Sebastian F. Sánchez, Begona García-Lorenzo and Arlette Pécontal-Rousset.

  15. Holography of incoherently illuminated 3D scenes

    NASA Astrophysics Data System (ADS)

    Shaked, Natan T.; Rosen, Joseph

    2008-04-01

    We review several methods of generating holograms of 3D realistic objects illuminated by incoherent white light. Using these methods, it is possible to obtain holograms with a simple digital camera, operating in regular light conditions. Thus, most disadvantages characterizing conventional holography, namely the need for a powerful, highly coherent laser and meticulous stability of the optical system are avoided. These holograms can be reconstructed optically by illuminating them with a coherent plane wave, or alternatively by using a digital reconstruction technique. In order to generate the proposed hologram, the 3D scene is captured from multiple points of view by a simple digital camera. Then, the acquired projections are digitally processed to yield the final hologram of the 3D scene. Based on this principle, we can generate Fourier, Fresnel, image or other types of holograms. To obtain certain advantages over the regular holograms, we also propose new digital holograms, such as modified Fresnel holograms and protected correlation holograms. Instead of shifting the camera mechanically to acquire a different projection of the 3D scene each time, it is possible to use a microlens array for acquiring the entire projections in a single camera shot. Alternatively, only the extreme projections can be acquired experimentally, while the middle projections are predicted digitally by using the view synthesis algorithm. The prospective goal of these methods is to facilitate the design of a simple, portable digital holographic camera which can be useful for a variety of practical applications.

  16. 3D Elevation Program—Virtual USA in 3D

    USGS Publications Warehouse

    Lukas, Vicki; Stoker, J.M.

    2016-01-01

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called ‘lidar’ (light detection and ranging) to create a virtual reality map of the Nation that is very accurate. 3D maps have many uses with new uses being discovered all the time.  

  17. Integral 3D display using multiple LCDs

    NASA Astrophysics Data System (ADS)

    Okaichi, Naoto; Miura, Masato; Arai, Jun; Mishina, Tomoyuki

    2015-03-01

    The quality of the integral 3D images created by a 3D imaging system was improved by combining multiple LCDs to utilize a greater number of pixels than that possible with one LCD. A prototype of the display device was constructed by using four HD LCDs. An integral photography (IP) image displayed by the prototype is four times larger than that reconstructed by a single display. The pixel pitch of the HD display used is 55.5 μm, and the number of elemental lenses is 212 horizontally and 119 vertically. The 3D image pixel count is 25,228, and the viewing angle is 28°. Since this method is extensible, it is possible to display an integral 3D image of higher quality by increasing the number of LCDs. Using this integral 3D display structure makes it possible to make the whole device thinner than a projector-based display system. It is therefore expected to be applied to the home television in the future.
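
    A quick consistency check of the figures quoted above; the 2 x 2 tiling of the four HD panels assumed here is not stated explicitly in the abstract.

        # 212 x 119 elemental lenses give exactly the quoted 3D pixel count.
        lenses_h, lenses_v = 212, 119
        print(lenses_h * lenses_v)                     # 25228

        # Assuming a 2 x 2 tiling of 1920 x 1080 panels, roughly 18 x 18
        # display pixels sit behind each elemental lens.
        tiled_h, tiled_v = 2 * 1920, 2 * 1080
        print(tiled_h / lenses_h, tiled_v / lenses_v)  # ~18.1, ~18.2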

  18. Embedding objects during 3D printing to add new functionalities.

    PubMed

    Yuen, Po Ki

    2016-07-01

    A novel method for integrating and embedding objects to add new functionalities during 3D printing based on fused deposition modeling (FDM) (also known as fused filament fabrication or molten polymer deposition) is presented. Unlike typical 3D printing, FDM-based 3D printing could allow objects to be integrated and embedded during 3D printing and the FDM-based 3D printed devices do not typically require any post-processing and finishing. Thus, various fluidic devices with integrated glass cover slips or polystyrene films with and without an embedded porous membrane, and optical devices with embedded Corning® Fibrance™ Light-Diffusing Fiber were 3D printed to demonstrate the versatility of the FDM-based 3D printing and embedding method. Fluid perfusion flow experiments with a blue-colored food dye solution were used to visually confirm fluid flow and/or fluid perfusion through the embedded porous membrane in the 3D printed fluidic devices. Similar to typical 3D printed devices, FDM-based 3D printed devices are translucent at best unless post-polishing is performed, and optical transparency is highly desirable in any fluidic device; integrated glass cover slips or polystyrene films would provide a perfectly transparent optical window for observation and visualization. In addition, they also provide a compatible flat smooth surface for biological or biomolecular applications. The 3D printed fluidic devices with an embedded porous membrane are applicable to biological or chemical applications such as continuous perfusion cell culture or biocatalytic synthesis but without the need for any post-device assembly and finishing. The 3D printed devices with embedded Corning® Fibrance™ Light-Diffusing Fiber would have uses in display, illumination, or other optical applications. Furthermore, the FDM-based 3D printing and embedding method could also be utilized to print casting molds with an integrated glass bottom for polydimethylsiloxane (PDMS) device replication

  19. [Evaluation of Motion Sickness Induced by 3D Video Clips].

    PubMed

    Matsuura, Yasuyuki; Takada, Hiroki

    2016-01-01

    The use of stereoscopic images has been spreading rapidly. Nowadays, stereoscopic movies are nothing new to people. Stereoscopic principles date back to around 300 B.C., when Euclid first recognized the concept of depth perception by humans. Despite the increase in the production of three-dimensional (3D) display products and many studies on stereoscopic vision, the effect of stereoscopic vision on the human body has been insufficiently understood. However, symptoms such as eye fatigue and 3D sickness have been concerns when viewing 3D films for a prolonged period of time; therefore, it is important to consider the safety of viewing virtual 3D contents as a contribution to society. It is generally explained to the public that accommodation and convergence are mismatched during stereoscopic vision and that this is the main reason for the visual fatigue and visually induced motion sickness (VIMS) during 3D viewing. We have devised a method to simultaneously measure lens accommodation and convergence. We used this simultaneous measurement device to characterize 3D vision. Fixation distance was compared between accommodation and convergence during the viewing of 3D films with repeated measurements. Time courses of these fixation distances and their distributions were compared in subjects who viewed 2D and 3D video clips. The results indicated that after 90 s of continuously viewing 3D images, the accommodative power does not correspond to the distance of convergence. In this paper, remarks on methods to measure the severity of motion sickness induced by viewing 3D films are also given. From the epidemiological viewpoint, it is useful to obtain novel knowledge for reduction and/or prevention of VIMS. We should accumulate empirical data on motion sickness, which may contribute to the development of relevant fields in science and technology. PMID:26832611

  20. A Joint Approach to the Study of S-Type and P-Type Habitable Zones in Binary Systems: New Results in the View of 3-D Planetary Climate Models

    NASA Astrophysics Data System (ADS)

    Cuntz, Manfred

    2015-01-01

    In two previous papers, given by Cuntz (2014a,b) [ApJ 780, A14 (19 pages); arXiv:1409.3796], a comprehensive approach has been provided for the study of S-type and P-type habitable zones in stellar binary systems. P-type orbits occur when the planet orbits both binary components, whereas in the case of S-type orbits, the planet orbits only one of the binary components, with the second component considered a perturber. The selected approach considers a variety of aspects, including (1) the consideration of a joint constraint including orbital stability and a habitable region for a possible system planet through the stellar radiative energy fluxes; (2) the treatment of conservative (CHZ), general (GHZ) and extended zones of habitability (EHZ) [see Paper I for definitions] for the systems as previously defined for the Solar System; (3) the provision of a combined formalism for the assessment of both S-type and P-type habitability; in particular, mathematical criteria are devised for which kind of system S-type and P-type habitability is realized; and (4) the applications of the theoretical approach to systems with the stars in different kinds of orbits, including elliptical orbits (the most expected case). Particularly, an algebraic formalism for the assessment of both S-type and P-type habitability is given based on a higher-order polynomial expression. Thus, an a priori specification for the presence or absence of S-type or P-type radiative habitable zones is, from a mathematical point of view, neither necessary nor possible, as those are determined by the adopted formalism. Previously, numerous applications of the method have been given, encompassing theoretical star-planet systems and observations. Most recently, this method has been upgraded to include recent studies of 3-D planetary climate models. Originally, this type of work affects the extent and position of habitable zones around single stars; however, it also has profound consequences for the habitable

  1. Concurrent 3-D motion segmentation and 3-D interpretation of temporal sequences of monocular images.

    PubMed

    Sekkati, Hicham; Mitiche, Amar

    2006-03-01

    The purpose of this study is to investigate a variational method for joint multiregion three-dimensional (3-D) motion segmentation and 3-D interpretation of temporal sequences of monocular images. Interpretation consists of dense recovery of 3-D structure and motion from the image sequence spatiotemporal variations due to short-range image motion. The method is direct insomuch as it does not require prior computation of image motion. It allows movement of both viewing system and multiple independently moving objects. The problem is formulated following a variational statement with a functional containing three terms. One term measures the conformity of the interpretation within each region of 3-D motion segmentation to the image sequence spatiotemporal variations. The second term is of regularization of depth. The assumption that environmental objects are rigid accounts automatically for the regularity of 3-D motion within each region of segmentation. The third and last term is for the regularity of segmentation boundaries. Minimization of the functional follows the corresponding Euler-Lagrange equations. This results in iterated concurrent computation of 3-D motion segmentation by curve evolution, depth by gradient descent, and 3-D motion by least squares within each region of segmentation. Curve evolution is implemented via level sets for topology independence and numerical stability. This algorithm and its implementation are verified on synthetic and real image sequences. Viewers presented with anaglyphs of stereoscopic images constructed from the algorithm's output reported a strong perception of depth. PMID:16519351
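
    Schematically, a functional of the kind described, with one data term per motion region, a depth-regularization term, and a boundary-length term, can be written as below. The notation is illustrative rather than the authors' exact formulation: Z is the depth map, (omega_k, t_k) the rigid motion of region R_k with boundary gamma_k, v the induced image motion, and psi_k a penalty on deviations from the brightness-constancy constraint.

        E\bigl(\{R_k\}, Z, \{\omega_k, t_k\}\bigr) =
            \sum_k \int_{R_k} \psi_k\!\bigl(I_t + \nabla I \cdot v(Z, \omega_k, t_k)\bigr)\,dx
            \;+\; \lambda \int_{\Omega} \lVert \nabla Z \rVert \, dx
            \;+\; \mu \sum_k \oint_{\gamma_k} ds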

  2. bioWeb3D: an online webGL 3D data visualisation tool

    PubMed Central

    2013-01-01

    Background Data visualization is critical for interpreting biological data. However, in practice it can prove to be a bottleneck for untrained researchers; this is especially true for three-dimensional (3D) data representation. Whilst existing software can provide all necessary functionalities to represent and manipulate biological 3D datasets, very few are easily accessible (browser-based), cross-platform and accessible to non-expert users. Results An online HTML5/WebGL-based 3D visualisation tool has been developed to allow biologists to quickly and easily view interactive and customizable three-dimensional representations of their data along with multiple layers of information. Using the WebGL library Three.js, written in JavaScript, bioWeb3D allows the simultaneous visualisation of multiple large datasets input via a simple JSON, XML or CSV file, which can be read and analysed locally thanks to HTML5 capabilities. Conclusions Using basic 3D representation techniques in a technologically innovative context, we provide a program that is not intended to compete with professional 3D representation software, but that instead enables a quick and intuitive representation of reasonably large 3D datasets. PMID:23758781

  3. An Interactive 3D Virtual Anatomy Puzzle for Learning and Simulation - Initial Demonstration and Evaluation.

    PubMed

    Messier, Erik; Wilcox, Jascha; Dawson-Elli, Alexander; Diaz, Gabriel; Linte, Cristian A

    2016-01-01

    To inspire young students (grades 6-12) to become medical practitioners and biomedical engineers, it is necessary to expose them to key concepts of the field in a way that is both exciting and informative. Recent advances in medical image acquisition, manipulation, processing, visualization, and display have revolutionized the approach in which the human body and internal anatomy can be seen and studied. It is now possible to collect 3D, 4D, and 5D medical images of patient-specific data, and display that data to the end user using consumer-level 3D stereoscopic display technology. Despite such advancements, traditional 2D modes of content presentation such as textbooks and slides are still the standard didactic equipment used to teach young students anatomy. More sophisticated methods of display can help to elucidate the complex 3D relationships between structures that are so often missed when viewing only 2D media, and can instill in students an appreciation for the interconnection between medicine and technology. Here we describe the design, implementation, and preliminary evaluation of a 3D virtual anatomy puzzle dedicated to helping users learn the anatomy of various organs and systems by manipulating 3D virtual data. The puzzle currently comprises several components of the human anatomy and can be easily extended to include additional organs and systems. The 3D virtual anatomy puzzle game was implemented and piloted using three display paradigms (a traditional 2D monitor, a 3D TV with active shutter glasses, and the DK2 version of the Oculus Rift) as well as two different user interaction devices (a space mouse and traditional keyboard controls). PMID:27046584

  4. MRI Volume Fusion Based on 3D Shearlet Decompositions.

    PubMed

    Duan, Chang; Wang, Shuai; Wang, Xue Gang; Huang, Qi Hong

    2014-01-01

    Nowadays, many MRI scans can give 3D volume data with different contrasts, but observers may want to view various contrasts in the same 3D volume. Conventional 2D medical fusion methods can only fuse the 3D volume data layer by layer, which may lead to the loss of inter-frame correlative information. In this paper, a novel 3D medical volume fusion method based on the 3D band-limited shearlet transform (3D BLST) is proposed. This method is evaluated on MRI T2* and quantitative susceptibility mapping data of 4 human brains. Both the visual impression and the quality indices indicate that the proposed method performs better than fusion methods based on the conventional 2D wavelet, 2D DT CWT, 3D wavelet and 3D DT CWT. PMID:24817880
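
    Transform-domain fusion of this kind follows a common pattern: decompose both co-registered volumes, merge sub-band coefficients, and invert. The sketch below substitutes a single-level 3D wavelet transform (one of the paper's comparison baselines) for the 3D band-limited shearlet transform, using the usual maximum-absolute-coefficient rule; it is a stand-in, not the proposed 3D BLST method.

        # 3D wavelet-domain fusion of two co-registered MRI volumes of equal shape.
        import numpy as np
        import pywt

        def fuse_volumes(vol_a, vol_b, wavelet="db2"):
            ca = pywt.dwtn(vol_a, wavelet)   # 3D sub-bands: 'aaa', 'aad', ..., 'ddd'
            cb = pywt.dwtn(vol_b, wavelet)
            fused = {}
            for key in ca:
                if key == "aaa":             # approximation band: average the inputs
                    fused[key] = 0.5 * (ca[key] + cb[key])
                else:                        # detail bands: keep the stronger response
                    fused[key] = np.where(np.abs(ca[key]) >= np.abs(cb[key]),
                                          ca[key], cb[key])
            return pywt.idwtn(fused, wavelet)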

  5. MRI Volume Fusion Based on 3D Shearlet Decompositions

    PubMed Central

    Duan, Chang; Wang, Shuai; Wang, Xue Gang; Huang, Qi Hong

    2014-01-01

    Nowadays, many MRI scans can give 3D volume data with different contrasts, but observers may want to view various contrasts in the same 3D volume. Conventional 2D medical fusion methods can only fuse the 3D volume data layer by layer, which may lead to the loss of inter-frame correlative information. In this paper, a novel 3D medical volume fusion method based on the 3D band-limited shearlet transform (3D BLST) is proposed. This method is evaluated on MRI T2* and quantitative susceptibility mapping data of 4 human brains. Both the visual impression and the quality indices indicate that the proposed method performs better than fusion methods based on the conventional 2D wavelet, 2D DT CWT, 3D wavelet and 3D DT CWT. PMID:24817880

  6. Modular 3-D Transport model

    EPA Science Inventory

    MT3D was first developed by Chunmiao Zheng in 1990 at S.S. Papadopulos & Associates, Inc. with partial support from the U.S. Environmental Protection Agency (USEPA). Starting in 1990, MT3D was released as a public domain code from the USEPA. Commercial versions with enhanced capab...

  7. Market study: 3-D eyetracker

    NASA Technical Reports Server (NTRS)

    1977-01-01

    A market study of a proposed version of a 3-D eyetracker for initial use at NASA's Ames Research Center was made. The commercialization potential of a simplified, less expensive 3-D eyetracker was ascertained. Primary focus on present and potential users of eyetrackers, as well as present and potential manufacturers has provided an effective means of analyzing the prospects for commercialization.

  8. LLNL-Earth3D

    2013-10-01

    Earth3D is a computer code designed to allow fast calculation of seismic rays and travel times through a 3D model of the Earth. LLNL is using this for earthquake location and global tomography efforts and such codes are of great interest to the Earth Science community.

  9. 3D World Building System

    SciTech Connect

    2013-10-30

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  10. 3D World Building System

    ScienceCinema

    None

    2014-02-26

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  11. PLOT3D user's manual

    NASA Technical Reports Server (NTRS)

    Walatka, Pamela P.; Buning, Pieter G.; Pierce, Larry; Elson, Patricia A.

    1990-01-01

    PLOT3D is a computer graphics program designed to visualize the grids and solutions of computational fluid dynamics. Seventy-four functions are available. Versions are available for many systems. PLOT3D can handle multiple grids with a million or more grid points, and can produce varieties of model renderings, such as wireframe or flat shaded. Output from PLOT3D can be used in animation programs. The first part of this manual is a tutorial that takes the reader, keystroke by keystroke, through a PLOT3D session. The second part of the manual contains reference chapters, including the helpfile, data file formats, advice on changing PLOT3D, and sample command files.

  12. 3D printing in dentistry.

    PubMed

    Dawood, A; Marti Marti, B; Sauret-Jackson, V; Darwood, A

    2015-12-01

    3D printing has been hailed as a disruptive technology which will change manufacturing. Used in aerospace, defence, art and design, 3D printing is becoming a subject of great interest in surgery. The technology has a particular resonance with dentistry, and with advances in 3D imaging and modelling technologies such as cone beam computed tomography and intraoral scanning, and with the relatively long history of the use of CAD CAM technologies in dentistry, it will become of increasing importance. Uses of 3D printing include the production of drill guides for dental implants, the production of physical models for prosthodontics, orthodontics and surgery, the manufacture of dental, craniomaxillofacial and orthopaedic implants, and the fabrication of copings and frameworks for implant and dental restorations. This paper reviews the types of 3D printing technologies available and their various applications in dentistry and in maxillofacial surgery. PMID:26657435

  13. PLOT3D/AMES, DEC VAX VMS VERSION USING DISSPLA (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  14. PLOT3D/AMES, DEC VAX VMS VERSION USING DISSPLA (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P. G.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  15. On 3D instability of wake behind a cylinder

    NASA Astrophysics Data System (ADS)

    Uruba, Václav

    2016-06-01

    The canonical case of cross-flow behind a prismatic circular cylinder is analyzed from the point of view of the appearance of 3D instabilities. Various flow conditions, defined by various values of the Reynolds number, are considered. All cases in question exhibit significant 3D features in the close wake, playing a significant role in the physical mechanisms of force generation.

  16. Cosmic origins: experiences making a stereoscopic 3D movie

    NASA Astrophysics Data System (ADS)

    Holliman, Nick

    2010-02-01

    Context: Stereoscopic 3D movies are gaining rapid acceptance commercially. In addition, our previous experience with the short 3D movie "Cosmic Cookery" showed that there is great public interest in the presentation of cosmology research using this medium. Objective: The objective of the work reported in this paper was to create a three-dimensional stereoscopic movie describing the life of the Milky Way galaxy. This was a technical and artistic exercise to take observed and simulated data from leading scientists and produce a short (six-minute) movie that describes how the Milky Way was created and what happens in its future. The initial target audience was the visitors to the Royal Society's 2009 Summer Science Exhibition in central London, UK. The movie is also intended to become a presentation tool for scientists and educators following the exhibition. Apparatus: The presentation and playback systems used consisted of off-the-shelf devices and software. The display platform for the Royal Society presentation was a RealD LP Pro switch used with a DLP projector to rear-project a 4-metre-diagonal image. The LP Pro enables the use of cheap disposable linearly polarising glasses so that the high turnover rate of the audience (every ten minutes at peak times) could be sustained without needing delays to clean the glasses. The playback system was a high-speed PC with an external 8 TB RAID driving the projectors at 30 Hz per eye; the Lightspeed DepthQ software was used to decode and generate the video stream. Results: A wide range of tools were used to render the image sequences, ranging from commercial to custom software. Each tool was able to produce a stream of 1080p images in stereo at 30 fps. None of the rendering tools used allowed precise calibration of the stereo effect at render time, and therefore all sequences were tuned extensively in a trial-and-error process until the stereo effect was acceptable and supported a comfortable viewing experience. Conclusion: We

  17. PLOT3D/AMES, GENERIC UNIX VERSION USING DISSPLA (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into
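
    The solution file variables listed above (density, the three momentum components, and stagnation energy at each grid point) are enough to derive many of the scalar functions PLOT3D plots. As a minimal illustrative sketch, not PLOT3D's own code, the snippet below recovers static pressure from those five conserved quantities for an ideal gas; the array names and the ratio of specific heats (gamma = 1.4) are assumptions made for the example.

        import numpy as np

        def pressure_from_conserved(rho, rho_u, rho_v, rho_w, e_total, gamma=1.4):
            """Static pressure from PLOT3D-style conserved variables (ideal gas)."""
            kinetic = 0.5 * (rho_u**2 + rho_v**2 + rho_w**2) / rho
            return (gamma - 1.0) * (e_total - kinetic)

        # Tiny usage example on a 2x2x2 block of grid points (made-up values).
        shape = (2, 2, 2)
        rho = np.full(shape, 1.0)
        rho_u = np.full(shape, 0.5)
        rho_v = np.zeros(shape)
        rho_w = np.zeros(shape)
        e_total = np.full(shape, 2.5)
        print(pressure_from_conserved(rho, rho_u, rho_v, rho_w, e_total))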

  18. PLOT3D/AMES, GENERIC UNIX VERSION USING DISSPLA (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  19. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  20. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  1. 3D Surface Reconstruction and Automatic Camera Calibration

    NASA Technical Reports Server (NTRS)

    Jalobeanu, Andre

    2004-01-01

    Illustrations in this view-graph presentation are presented on a Bayesian approach to 3D surface reconstruction and camera calibration.Existing methods, surface analysis and modeling,preliminary surface reconstruction results, and potential applications are addressed.

  2. Bioprinting of 3D hydrogels.

    PubMed

    Stanton, M M; Samitier, J; Sánchez, S

    2015-08-01

    Three-dimensional (3D) bioprinting has recently emerged as an extension of 3D material printing, by using biocompatible or cellular components to build structures in an additive, layer-by-layer methodology for encapsulation and culture of cells. These 3D systems allow for cell culture in a suspension for formation of highly organized tissue or controlled spatial orientation of cell environments. The in vitro 3D cellular environments simulate the complexity of an in vivo environment and natural extracellular matrices (ECM). This paper will focus on bioprinting utilizing hydrogels as 3D scaffolds. Hydrogels are advantageous for cell culture as they are highly permeable to cell culture media, nutrients, and waste products generated during metabolic cell processes. They have the ability to be fabricated in customized shapes with various material properties with dimensions at the micron scale. 3D hydrogels are a reliable method for biocompatible 3D printing and have applications in tissue engineering, drug screening, and organ on a chip models. PMID:26066320

  3. Unassisted 3D camera calibration

    NASA Astrophysics Data System (ADS)

    Atanassov, Kalin; Ramachandra, Vikas; Nash, James; Goma, Sergio R.

    2012-03-01

    With the rapid growth of 3D technology, 3D image capture has become a critical part of the 3D feature set on mobile phones. 3D image quality is affected by the scene geometry as well as on-the-device processing. An automatic 3D system usually assumes known camera poses accomplished by factory calibration using a special chart. In real-life settings, pose parameters estimated by factory calibration can be negatively impacted by movements of the lens barrel due to shaking, focusing, or camera drop. If any of these factors displaces the optical axes of either or both cameras, vertical disparity might exceed the maximum tolerable margin and the 3D user may experience eye strain or headaches. To make 3D capture more practical, one needs to consider unassisted (on arbitrary scenes) calibration. In this paper, we propose an algorithm that relies on detection and matching of keypoints between left and right images. Frames containing erroneous matches, along with frames with insufficiently rich keypoint constellations, are detected and discarded. Roll, pitch, yaw, and scale differences between left and right frames are then estimated. The algorithm performance is evaluated in terms of the remaining vertical disparity as compared to the maximum tolerable vertical disparity.
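
    The matching-and-screening front end described above can be sketched with standard tools. The snippet below is a rough sketch assuming OpenCV, with ORB features standing in for whatever detector the authors used; the match-count cutoff and the vertical-disparity summary statistics are illustrative choices, not parameters from the paper.

        import cv2
        import numpy as np

        def vertical_disparity_stats(left_gray, right_gray, max_matches=200):
            """Match keypoints between left/right frames and summarize vertical disparity."""
            orb = cv2.ORB_create(nfeatures=1000)
            kp1, des1 = orb.detectAndCompute(left_gray, None)
            kp2, des2 = orb.detectAndCompute(right_gray, None)
            if des1 is None or des2 is None:
                return None  # keypoint constellation too poor: discard this frame pair
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:max_matches]
            if len(matches) < 20:
                return None  # too few matches: discard this frame pair
            dy = np.array([kp2[m.trainIdx].pt[1] - kp1[m.queryIdx].pt[1] for m in matches])
            # The median resists the erroneous matches that the paper also screens out.
            return {"median_dy": float(np.median(dy)),
                    "p95_abs_dy": float(np.percentile(np.abs(dy), 95))}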

  4. 3D Printed Micro Free-Flow Electrophoresis Device.

    PubMed

    Anciaux, Sarah K; Geiger, Matthew; Bowser, Michael T

    2016-08-01

    The cost, time, and restrictions on creative flexibility associated with current fabrication methods present significant challenges in the development and application of microfluidic devices. Additive manufacturing, also referred to as three-dimensional (3D) printing, provides many advantages over existing methods. With 3D printing, devices can be made in a cost-effective manner with the ability to rapidly prototype new designs. We have fabricated a micro free-flow electrophoresis (μFFE) device using a low-cost, consumer-grade 3D printer. Test prints were performed to determine the minimum feature sizes that could be reproducibly produced using 3D printing fabrication. Microfluidic ridges could be fabricated with dimensions as small as 20 μm high × 640 μm wide. Minimum valley dimensions were 30 μm deep × 130 μm wide. An acetone vapor bath was used to smooth acrylonitrile-butadiene-styrene (ABS) surfaces and facilitate bonding of fully enclosed channels. The surfaces of the 3D-printed features were profiled and compared to a similar device fabricated in a glass substrate. Stable stream profiles were obtained in a 3D-printed μFFE device. Separations of fluorescent dyes in the 3D-printed device and its glass counterpart were comparable. A μFFE separation of myoglobin and cytochrome c was also demonstrated on a 3D-printed device. Limits of detection for rhodamine 110 were determined to be 2 and 0.3 nM for the 3D-printed and glass devices, respectively. PMID:27377354

  5. Fdf in US3D

    NASA Astrophysics Data System (ADS)

    Otis, Collin; Ferrero, Pietro; Candler, Graham; Givi, Peyman

    2013-11-01

    The scalar filtered mass density function (SFMDF) methodology is implemented into the computer code US3D. This is an unstructured Eulerian finite volume hydrodynamic solver and has proven very effective for simulation of compressible turbulent flows. The resulting SFMDF-US3D code is employed for large eddy simulation (LES) on unstructured meshes. Simulations are conducted of subsonic and supersonic flows under non-reacting and reacting conditions. The consistency and the accuracy of the simulated results are assessed along with appraisal of the overall performance of the methodology. The SFMDF-US3D is now capable of simulating high speed flows in complex configurations.

  6. 3D-printed microfluidic devices.

    PubMed

    Amin, Reza; Knowlton, Stephanie; Hart, Alexander; Yenilmez, Bekir; Ghaderinezhad, Fariba; Katebifar, Sara; Messina, Michael; Khademhosseini, Ali; Tasoglu, Savas

    2016-06-01

    Microfluidics is a flourishing field, enabling a wide range of biochemical and clinical applications such as cancer screening, micro-physiological system engineering, high-throughput drug testing, and point-of-care diagnostics. However, fabrication of microfluidic devices is often complicated, time consuming, and requires expensive equipment and sophisticated cleanroom facilities. Three-dimensional (3D) printing presents a promising alternative to traditional techniques such as lithography and PDMS-glass bonding, not only by enabling rapid design iterations in the development stage, but also by reducing the costs associated with institutional infrastructure, equipment installation, maintenance, and physical space. With the recent advancements in 3D printing technologies, highly complex microfluidic devices can be fabricated via single-step, rapid, and cost-effective protocols, making microfluidics more accessible to users. In this review, we discuss a broad range of approaches for the application of 3D printing technology to fabrication of micro-scale lab-on-a-chip devices. PMID:27321137

  7. On Alternative Approaches to 3D Image Perception: Monoscopic 3D Techniques

    NASA Astrophysics Data System (ADS)

    Blundell, Barry G.

    2015-06-01

    In the eighteenth century, techniques that enabled a strong sense of 3D perception to be experienced without recourse to binocular disparities (arising from the spatial separation of the eyes) underpinned the first significant commercial sales of 3D viewing devices and associated content. However following the advent of stereoscopic techniques in the nineteenth century, 3D image depiction has become inextricably linked to binocular parallax and outside the vision science and arts communities relatively little attention has been directed towards earlier approaches. Here we introduce relevant concepts and terminology and consider a number of techniques and optical devices that enable 3D perception to be experienced on the basis of planar images rendered from a single vantage point. Subsequently we allude to possible mechanisms for non-binocular parallax based 3D perception. Particular attention is given to reviewing areas likely to be thought-provoking to those involved in 3D display development, spatial visualization, HCI, and other related areas of interdisciplinary research.

  8. What is 3D good for? A review of human performance on stereoscopic 3D displays

    NASA Astrophysics Data System (ADS)

    McIntire, John P.; Havig, Paul R.; Geiselman, Eric E.

    2012-06-01

    This work reviews the human factors-related literature on the task performance implications of stereoscopic 3D displays, in order to point out the specific performance benefits (or lack thereof) one might reasonably expect to observe when utilizing these displays. What exactly is 3D good for? Relative to traditional 2D displays, stereoscopic displays have been shown to enhance performance on a variety of depth-related tasks. These tasks include judging absolute and relative distances, finding and identifying objects (by breaking camouflage and eliciting perceptual "pop-out"), performing spatial manipulations of objects (object positioning, orienting, and tracking), and navigating. More cognitively, stereoscopic displays can improve the spatial understanding of 3D scenes or objects, improve memory/recall of scenes or objects, and improve learning of spatial relationships and environments. However, for tasks that are relatively simple, that do not strictly require depth information for good performance, where other strong cues to depth can be utilized, or for depth tasks that lie outside the effective viewing volume of the display, the purported performance benefits of 3D may be small or altogether absent. Stereoscopic 3D displays come with a host of unique human factors problems, including the simulator-sickness-type symptoms of eyestrain, headache, fatigue, disorientation, nausea, and malaise, which appear to affect large numbers of viewers (perhaps as many as 25% to 50% of the general population). Thus, 3D technology should be wielded delicately and applied carefully, and perhaps used only as is necessary to ensure good performance.

  9. Spacecraft 3D Augmented Reality Mobile App

    NASA Technical Reports Server (NTRS)

    Hussey, Kevin J.; Doronila, Paul R.; Kumanchik, Brian E.; Chan, Evan G.; Ellison, Douglas J.; Boeck, Andrea; Moore, Justin M.

    2013-01-01

    The Spacecraft 3D application allows users to learn about and interact with iconic NASA missions in a new and immersive way using common mobile devices. Using Augmented Reality (AR) techniques to project 3D renditions of the mission spacecraft into real-world surroundings, users can interact with and learn about Curiosity, GRAIL, Cassini, and Voyager. Additional updates on future missions, animations, and information will be ongoing. Using a printed AR Target and camera on a mobile device, users can get up close with these robotic explorers, see how some move, and learn about these engineering feats, which are used to expand knowledge and understanding about space. The software receives input from the mobile device's camera to recognize the presence of an AR marker in the camera's field of view. It then displays a 3D rendition of the selected spacecraft in the user's physical surroundings, on the mobile device's screen, while it tracks the device's movement in relation to the physical position of the spacecraft's 3D image on the AR marker.

  10. 3D puzzle reconstruction for archeological fragments

    NASA Astrophysics Data System (ADS)

    Jampy, F.; Hostein, A.; Fauvet, E.; Laligant, O.; Truchetet, F.

    2015-03-01

    The reconstruction of broken artifacts is a common task in the archeology domain; it can now be supported by 3D data acquisition devices and computer processing. Many works have been dedicated in the past to reconstructing 2D puzzles, but very few propose a true 3D approach. We present here a complete solution including a dedicated transportable 3D acquisition set-up and a virtual tool with a graphic interface allowing archeologists to manipulate the fragments and to interactively reconstruct the puzzle. The whole lateral part is acquired by rotating the fragment around an axis chosen within a light sheet, thanks to a step motor synchronized with the camera frame clock. Another camera provides a top view of the fragment under scanning. A scanning accuracy of 100 μm is attained. The iterative automatic processing algorithm is based on segmentation of the lateral part of the fragments into facets, followed by 3D matching that provides the user with a ranked short list of possible assemblies. The device has been applied to the reconstruction of a set of 1200 fragments from broken tablets bearing a Latin inscription dating from the first century AD.
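
    Since the fragment is rotated through a light sheet by a step motor synchronized with the camera, each frame yields a 2D profile at a known angle, and assembling the lateral surface is a cylindrical-to-Cartesian mapping. The sketch below illustrates only that geometry (the array layout and units are assumptions); it is not the authors' reconstruction code.

        import numpy as np

        def profiles_to_point_cloud(profiles, angles_deg):
            """Convert light-sheet profiles taken at known rotation angles into 3D points.

            profiles   : list of (N_i, 2) arrays, each row = (radius r, height z)
                         of a profile point measured in the light-sheet plane
            angles_deg : rotation angle of the fragment for each profile, in degrees
            """
            points = []
            for prof, ang in zip(profiles, np.deg2rad(angles_deg)):
                r, z = prof[:, 0], prof[:, 1]
                points.append(np.column_stack((r * np.cos(ang), r * np.sin(ang), z)))
            return np.vstack(points)

        # Example: two synthetic profiles taken 90 degrees apart.
        p0 = np.array([[10.0, 0.0], [10.0, 1.0]])
        p1 = np.array([[12.0, 0.0], [12.0, 1.0]])
        print(profiles_to_point_cloud([p0, p1], [0.0, 90.0]).shape)  # (4, 3)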

  11. Wavefront construction in 3-D

    SciTech Connect

    Chilcoat, S.R. Hildebrand, S.T.

    1995-12-31

    Travel time computation in inhomogeneous media is essential for pre-stack Kirchhoff imaging in areas such as the sub-salt province in the Gulf of Mexico. The 2D algorithm published by Vinje et al. has been extended to 3D to compute wavefronts in complicated inhomogeneous media. The 3D wavefront construction algorithm provides many advantages over conventional ray tracing and other methods of computing travel times in 3D. The algorithm dynamically maintains a reasonably consistent ray density without making a priori guesses at the number of rays to shoot. The determination of caustics in 3D is a straightforward geometric procedure. The wavefront algorithm also enables the computation of multi-valued travel time surfaces.

  12. Heterodyne 3D ghost imaging

    NASA Astrophysics Data System (ADS)

    Yang, Xu; Zhang, Yong; Yang, Chenghua; Xu, Lu; Wang, Qiang; Zhao, Yuan

    2016-06-01

    Conventional three-dimensional (3D) ghost imaging measures the range of a target based on a pulse flight-time measurement method. Due to the limited sampling rate of the data acquisition system, the range resolution of conventional 3D ghost imaging is usually low. In order to remove the effect of the sampling rate on the range resolution of 3D ghost imaging, a heterodyne 3D ghost imaging (HGI) system is presented in this study. The source of HGI is a continuous wave laser instead of a pulsed laser. Temporal correlation and spatial correlation of light are both utilized to obtain the range image of the target. Through theoretical analysis and numerical simulations, it is demonstrated that HGI can obtain high range resolution images with a low sampling rate.

  13. Combinatorial 3D Mechanical Metamaterials

    NASA Astrophysics Data System (ADS)

    Coulais, Corentin; Teomy, Eial; de Reus, Koen; Shokef, Yair; van Hecke, Martin

    2015-03-01

    We present a class of elastic structures which exhibit 3D-folding motion. Our structures consist of cubic lattices of anisotropic unit cells that can be tiled in a complex combinatorial fashion. We design and 3D-print this complex ordered mechanism, in which we combine elastic hinges and defects to tailor the mechanics of the material. Finally, we use this large design space to encode smart functionalities such as surface patterning and multistability.

  14. 3-D Haiku: A New Way To Teach a Traditional Form.

    ERIC Educational Resources Information Center

    Tweedie, Sanford; Kolitsky, Michael A.

    2002-01-01

    Describes a three dimensional poetry genre--a way of rewriting two dimensional haiku in a three dimensional cube that can only be viewed in cyberspace. Discusses traditional versus 3-D haiku, introducing 3-D haiku into the classroom, reasons to teach 3-D haiku, and creating 3-D haiku. (RS)

  15. PLOT3D/AMES, SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  16. PLOT3D/AMES, SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  17. A heads-up display for diabetic limb salvage surgery: a view through the Google looking glass.

    PubMed

    Armstrong, David G; Rankin, Timothy M; Giovinco, Nicholas A; Mills, Joseph L; Matsuoka, Yoky

    2014-09-01

    Although the use of augmented reality has been well described over the past several years, available devices suffer from high cost, an uncomfortable form factor, suboptimal battery life, and lack an app-based developer ecosystem. This article describes the potential use of a novel, consumer-based, wearable device to assist surgeons in real time during limb preservation surgery and clinical consultation. Using routine intraoperative, clinical, and educational case examples, we describe the use of a wearable augmented reality device (Google Glass; Google, Mountain View, CA). The device facilitated hands-free, rapid communication, documentation, and consultation. An eyeglass-mounted screen form factor has the potential to improve communication, safety, and efficiency of intraoperative and clinical care. We believe this represents a natural progression toward union of medical devices with consumer technology. PMID:24876445

  18. A 2D driven 3D vessel segmentation algorithm for 3D digital subtraction angiography data.

    PubMed

    Spiegel, M; Redel, T; Struffert, T; Hornegger, J; Doerfler, A

    2011-10-01

    Cerebrovascular disease is among the leading causes of death in western industrial nations. 3D rotational angiography delivers indispensable information on vessel morphology and pathology. Physicians make use of this to analyze vessel geometry in detail, i.e. vessel diameters, location and size of aneurysms, to come up with a clinical decision. 3D segmentation is a crucial step in this pipeline. Although a lot of different methods are available nowadays, all of them lack a method to validate the results for the individual patient. Therefore, we propose a novel 2D digital subtraction angiography (DSA)-driven 3D vessel segmentation and validation framework. 2D DSA projections are clinically considered as gold standard when it comes to measurements of vessel diameter or the neck size of aneurysms. An ellipsoid vessel model is applied to deliver the initial 3D segmentation. To assess the accuracy of the 3D vessel segmentation, its forward projections are iteratively overlaid with the corresponding 2D DSA projections. Local vessel discrepancies are modeled by a global 2D/3D optimization function to adjust the 3D vessel segmentation toward the 2D vessel contours. Our framework has been evaluated on phantom data as well as on ten patient datasets. Three 2D DSA projections from varying viewing angles have been used for each dataset. The novel 2D driven 3D vessel segmentation approach shows superior results against state-of-the-art segmentations like region growing, i.e. an improvement of 7.2% points in precision and 5.8% points for the Dice coefficient. This method opens up future clinical applications requiring the greatest vessel accuracy, e.g. computational fluid dynamic modeling. PMID:21908904
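
    The reported gains are expressed in precision and the Dice coefficient. For reference, both metrics can be computed from binary segmentation masks as below; this is a generic sketch of the standard definitions, not the authors' evaluation code, and it assumes non-empty masks.

        import numpy as np

        def dice_and_precision(pred, truth):
            """Dice coefficient and precision for binary segmentation masks (0/1 arrays)."""
            pred = pred.astype(bool)
            truth = truth.astype(bool)
            tp = np.logical_and(pred, truth).sum()
            dice = 2.0 * tp / (pred.sum() + truth.sum())
            precision = tp / pred.sum()
            return dice, precision

        pred = np.array([[1, 1, 0], [0, 1, 0]])
        truth = np.array([[1, 0, 0], [0, 1, 1]])
        print(dice_and_precision(pred, truth))  # both about 0.667 for this toy example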

  19. A 2D driven 3D vessel segmentation algorithm for 3D digital subtraction angiography data

    NASA Astrophysics Data System (ADS)

    Spiegel, M.; Redel, T.; Struffert, T.; Hornegger, J.; Doerfler, A.

    2011-10-01

    Cerebrovascular disease is among the leading causes of death in western industrial nations. 3D rotational angiography delivers indispensable information on vessel morphology and pathology. Physicians make use of this to analyze vessel geometry in detail, i.e. vessel diameters, location and size of aneurysms, to come up with a clinical decision. 3D segmentation is a crucial step in this pipeline. Although a lot of different methods are available nowadays, all of them lack a method to validate the results for the individual patient. Therefore, we propose a novel 2D digital subtraction angiography (DSA)-driven 3D vessel segmentation and validation framework. 2D DSA projections are clinically considered as gold standard when it comes to measurements of vessel diameter or the neck size of aneurysms. An ellipsoid vessel model is applied to deliver the initial 3D segmentation. To assess the accuracy of the 3D vessel segmentation, its forward projections are iteratively overlaid with the corresponding 2D DSA projections. Local vessel discrepancies are modeled by a global 2D/3D optimization function to adjust the 3D vessel segmentation toward the 2D vessel contours. Our framework has been evaluated on phantom data as well as on ten patient datasets. Three 2D DSA projections from varying viewing angles have been used for each dataset. The novel 2D driven 3D vessel segmentation approach shows superior results against state-of-the-art segmentations like region growing, i.e. an improvement of 7.2% points in precision and 5.8% points for the Dice coefficient. This method opens up future clinical applications requiring the greatest vessel accuracy, e.g. computational fluid dynamic modeling.

  20. 3-D target-based distributed smart camera network localization.

    PubMed

    Kassebaum, John; Bulusu, Nirupama; Feng, Wu-Chi

    2010-10-01

    For distributed smart camera networks to perform vision-based tasks such as subject recognition and tracking, every camera's position and orientation relative to a single 3-D coordinate frame must be accurately determined. In this paper, we present a new camera network localization solution that requires successively showing a 3-D feature point-rich target to all cameras, then using the known geometry of a 3-D target, cameras estimate and decompose projection matrices to compute their position and orientation relative to the coordinatization of the 3-D target's feature points. As each 3-D target position establishes a distinct coordinate frame, cameras that view more than one 3-D target position compute translations and rotations relating different positions' coordinate frames and share the transform data with neighbors to facilitate realignment of all cameras to a single coordinate frame. Compared to other localization solutions that use opportunistically found visual data, our solution is more suitable to battery-powered, processing-constrained camera networks because it requires communication only to determine simultaneous target viewings and for passing transform data. Additionally, our solution requires only pairwise view overlaps of sufficient size to see the 3-D target and detect its feature points, while also giving camera positions in meaningful units. We evaluate our algorithm in both real and simulated smart camera networks. In the real network, position error is less than 1 inch when the 3-D target's feature points fill only 2.9% of the frame area. PMID:20679031
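
    The core step, recovering each camera's position and orientation from the known 3-D coordinates of the target's feature points and their detected image locations, can be sketched with a standard perspective-n-point solve. The snippet below uses OpenCV's solvePnP as a stand-in for the projection-matrix estimation and decomposition described in the paper, and it assumes the intrinsic matrix is already known.

        import cv2
        import numpy as np

        def camera_pose_from_target(target_pts_3d, image_pts_2d, camera_matrix, dist_coeffs=None):
            """Camera rotation and position expressed in the 3-D target's coordinate frame.

            target_pts_3d : (N, 3) known feature-point coordinates on the target
            image_pts_2d  : (N, 2) detected pixel locations of those points
            camera_matrix : 3x3 intrinsic matrix (assumed known here)
            """
            if dist_coeffs is None:
                dist_coeffs = np.zeros(5)
            ok, rvec, tvec = cv2.solvePnP(target_pts_3d.astype(np.float64),
                                          image_pts_2d.astype(np.float64),
                                          camera_matrix, dist_coeffs)
            if not ok:
                return None
            R, _ = cv2.Rodrigues(rvec)        # rotation: target frame -> camera frame
            position = (-R.T @ tvec).ravel()  # camera center in the target's frame
            return R, position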

  1. Optical characterization and measurements of autostereoscopic 3D displays

    NASA Astrophysics Data System (ADS)

    Salmimaa, Marja; Järvenpää, Toni

    2008-04-01

    3D or autostereoscopic display technologies offer attractive solutions for enriching the multimedia experience. However, both characterization and comparison of 3D displays have been challenging because consistent measurement methods have been lacking, and displays with similar specifications may appear quite different. Earlier, we investigated how the optical properties of autostereoscopic (3D) displays can be objectively measured and which main characteristics define the perceived image quality. In this paper the discussion is extended to cover viewing freedom (VF), and the definition of the optimum viewing distance (OVD) is elaborated. VF is the volume within which the eyes have to be to see an acceptable 3D image. The characteristics limiting the VF space are proposed to be 3D crosstalk, luminance difference, and color difference. Since 3D crosstalk can be presumed to dominate the quality of the end-user experience, and in our approach forms the basis for the calculations of the other optical parameters, the reliability of the 3D crosstalk measurements is investigated. Furthermore, its effect on the derived VF definition is evaluated. We have performed comparative 3D crosstalk measurements with different measurement device apertures, and the effect of different measurement geometries on the results for actual 3D displays is reported.
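
    The other optical parameters are built on 3D crosstalk, so its definition matters. A commonly used formulation, assumed here for illustration and not necessarily the authors' exact formula, compares the luminance leaking from the unintended view with the luminance of the intended view after subtracting the display's black level.

        def crosstalk_percent(lum_leak, lum_signal, lum_black):
            """3D crosstalk (%) from luminance readings taken at one eye position.

            lum_leak   : luminance with the unintended view white and the intended view black
            lum_signal : luminance with the intended view white and the unintended view black
            lum_black  : luminance with both views black
            """
            return 100.0 * (lum_leak - lum_black) / (lum_signal - lum_black)

        print(crosstalk_percent(6.0, 180.0, 1.0))  # about 2.8 %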

  2. Simulation of 3D infrared scenes using random fields model

    NASA Astrophysics Data System (ADS)

    Shao, Xiaopeng; Zhang, Jianqi

    2001-09-01

    Analysis and simulation of smart munitions requires imagery for the munition's sensor to view. Traditional infrared background simulations have generally been limited to planar scenes. A new method is described to synthesize images with a 3D view and various terrain textures. We develop a random fields model and temperature fields to simulate 3D infrared scenes. In this work, the generalized long-correlation (GLC) model, one of the random field models, generates both the 3D terrain skeleton data and the terrain texture. To build the terrain mesh from the random fields, digital elevation models (DEM) are introduced in the paper, and texture mapping technology performs the task of pasting the texture onto the concavo-convex surfaces of the 3D scene. Simulation using the random fields model is an effective method for producing 3D infrared scenes with a high degree of randomness and realism.
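
    As a stand-in for the GLC terrain skeleton (the exact random-field model is not reproduced here), a heightfield with adjustable long-range correlation can be synthesized spectrally: filter white noise with a power-law spectrum and take the inverse FFT. The function name and the spectral exponent are assumptions for this sketch.

        import numpy as np

        def random_field_terrain(n=256, beta=3.0, seed=0):
            """Synthesize an n x n correlated heightfield via 1/f**beta spectral filtering."""
            rng = np.random.default_rng(seed)
            noise = rng.standard_normal((n, n))
            fx = np.fft.fftfreq(n)[:, None]
            fy = np.fft.fftfreq(n)[None, :]
            f = np.sqrt(fx**2 + fy**2)
            f[0, 0] = 1.0                       # avoid division by zero at the DC term
            spectrum = np.fft.fft2(noise) / f**(beta / 2.0)
            spectrum[0, 0] = 0.0                # zero-mean terrain
            height = np.real(np.fft.ifft2(spectrum))
            return (height - height.min()) / (height.max() - height.min())

        dem = random_field_terrain()
        print(dem.shape, float(dem.min()), float(dem.max()))  # (256, 256) 0.0 1.0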

  3. Color and brightness uniformity compensation of a multi-projection 3D display

    NASA Astrophysics Data System (ADS)

    Lee, Jin-Ho; Park, Juyong; Nam, Dongkyung; Park, Du-Sik

    2015-09-01

    Light-field displays are good candidates in the field of glasses-free 3D display for showing real 3D images without decreasing the image resolution. Light-field displays can create light rays using a large number of projectors in order to express natural 3D images. However, in light-field displays using multiple projectors, compensation is very critical due to the different characteristics and arrangement positions of the projectors. In this paper, we present an enhanced 55-inch, 100-Mpixel multi-projection 3D display consisting of 96 micro projectors for immersive natural 3D viewing in medical and educational applications. To achieve enhanced image quality, color and brightness uniformity compensation methods are utilized along with an improved projector configuration design and a real-time calibration process of projector alignment. For color uniformity compensation, projected images from each projector are captured by a camera arranged in front of the screen, the number of pixels at each RGB color intensity in each captured image is analyzed, and the distributions of RGB color intensities are adjusted by using the respective maximum values of RGB color intensities. For brightness uniformity compensation, each light-field ray emitted from a screen pixel is modeled by a radial basis function, and compensating weights of each screen pixel are calculated and transferred to the projection images by the mapping relationship between the screen and projector coordinates. Finally, brightness-compensated images are rendered for each projector. Consequently, the display shows improved color and brightness uniformity, and consistent, exceptional 3D image quality.
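
    The color-uniformity step described above, rescaling each projector's RGB intensity distributions toward common maxima, amounts to per-channel gain correction. The sketch below shows only that idea under the assumption of one captured RGB image per projector; the brightness step with radial basis functions and the screen-to-projector mapping is omitted.

        import numpy as np

        def per_channel_gains(captured_images):
            """RGB gains that pull each projector's channel maxima to a common target.

            captured_images : list of (H, W, 3) float arrays, one camera capture per projector
            """
            # Use the 99th percentile per channel instead of the raw maximum to resist hot pixels.
            maxima = np.array([[np.percentile(img[..., c], 99) for c in range(3)]
                               for img in captured_images])
            target = maxima.min(axis=0)   # the dimmest projector per channel sets the target
            return target / maxima        # one (R, G, B) gain triple per projector

        def apply_gains(frame, gains):
            """Scale a projector's rendered frame by its RGB gains before display."""
            return np.clip(frame * gains, 0.0, 1.0)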

  4. Examination of 3D visual attention in stereoscopic video content

    NASA Astrophysics Data System (ADS)

    Huynh-Thu, Quan; Schiatti, Luca

    2011-03-01

    Recent advances in video technology and digital cinema have made it possible to produce entertaining 3D stereoscopic content that can be viewed for an extended duration without necessarily causing extreme fatigue, visual strain and discomfort. Viewers naturally focus their attention on specific areas of interest in their visual field. Visual attention is an important aspect of perception, and its understanding is therefore important for the creation of 3D stereoscopic content. Most studies of visual attention have focused on the case of still images or 2D video. Only a very few studies have investigated eye movement patterns in 3D stereoscopic moving sequences and how these may differ from viewing 2D video content. In this paper, we present and discuss the results of a subjective experiment that we conducted using an eye-tracking apparatus to record observers' gaze patterns. Participants were asked to watch the same set of video clips in a free-viewing task. Each clip was shown in a 3D stereoscopic version and a 2D version. Our results indicate that the extent of areas of interest is not necessarily wider in 3D. We found a very strong content dependency in the difference in density and locations of fixations between 2D and 3D stereoscopic content. However, we found that saccades were overall faster and that fixation durations were overall lower when observers viewed the 3D stereoscopic version.
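
    Saccade and fixation statistics of the kind reported here are often derived with a simple velocity-threshold (I-VT) classifier. The sketch below assumes gaze samples expressed in degrees of visual angle at a fixed sampling rate; the 30 deg/s threshold is a conventional choice, not a parameter taken from this study.

        import numpy as np

        def classify_saccades(gaze_deg, sample_rate_hz, velocity_threshold=30.0):
            """Label each gaze sample as saccade (True) or fixation (False) using I-VT.

            gaze_deg       : (N, 2) gaze positions in degrees of visual angle
            sample_rate_hz : eye-tracker sampling rate in Hz
            """
            velocity = np.linalg.norm(np.diff(gaze_deg, axis=0), axis=1) * sample_rate_hz
            is_saccade = np.concatenate([[False], velocity > velocity_threshold])
            return is_saccade, velocity

        # Example on synthetic 60 Hz data: fraction of saccade samples and total fixation time.
        gaze = np.cumsum(np.random.default_rng(1).normal(0, 0.05, size=(600, 2)), axis=0)
        labels, _ = classify_saccades(gaze, sample_rate_hz=60)
        print(labels.mean(), (~labels).sum() / 60.0)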

  5. 3-D Perspective Pasadena, California

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This perspective view shows the western part of the city of Pasadena, California, looking north towards the San Gabriel Mountains. Portions of the cities of Altadena and La Canada Flintridge are also shown. The image was created from three datasets: the Shuttle Radar Topography Mission (SRTM) supplied the elevation data; Landsat data from November 11, 1986 provided the land surface color (not the sky); and U.S. Geological Survey digital aerial photography provided the image detail. The Rose Bowl, surrounded by a golf course, is the circular feature at the bottom center of the image. The Jet Propulsion Laboratory is the cluster of large buildings north of the Rose Bowl at the base of the mountains. A large landfill, Scholl Canyon, is the smooth area in the lower left corner of the scene. This image shows the power of combining data from different sources to create planning tools to study problems that affect large urban areas. In addition to the well-known earthquake hazards, Southern California is affected by a natural cycle of fire and mudflows. Wildfires strip the mountains of vegetation, increasing the hazards from flooding and mudflows for several years afterwards. Data such as shown on this image can be used to predict both how wildfires will spread over the terrain and also how mudflows will be channeled down the canyons. The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission was designed to collect three dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, an additional C-band imaging antenna and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency

  6. 3D measurement using circular gratings

    NASA Astrophysics Data System (ADS)

    Harding, Kevin

    2013-09-01

    3D measurement using methods of structured light is well known in the industry. Most such systems use some variation of straight lines, either as simple lines or with some form of encoding. This geometry assumes the lines will be projected from one side and viewed from another to generate the profile information. But what about applications where a wide triangulation angle may not be practical, particularly at longer standoff distances? This paper explores the use of circular grating patterns projected from a center point to achieve 3D information. Originally suggested by John Caulfield around 1990, the method had some interesting potential, particularly if combined with alternative means of measurement beyond traditional triangulation, including depth-from-focus methods. A central reference point in the projected pattern may offer capabilities not as easily attained with a linear grating pattern. This paper will explore the pros and cons of the method and present some examples of possible applications.

  7. Faint object 3D spectroscopy with PMAS

    NASA Astrophysics Data System (ADS)

    Roth, Martin M.; Becker, Thomas; Kelz, Andreas; Bohm, Petra

    2004-09-01

    PMAS is a fiber-coupled lens array type of integral field spectrograph, which was commissioned at the Calar Alto 3.5m Telescope in May 2001. The optical layout of the instrument was chosen such as to provide a large wavelength coverage, and good transmission from 0.35 to 1 μm. One of the major objectives of the PMAS development has been to perform 3D spectrophotometry, taking advantage of the contiguous array of spatial elements over the 2-dimensional field-of-view of the integral field unit. With science results obtained during the first two years of operation, we illustrate that 3D spectroscopy is an ideal tool for faint object spectrophotometry.

  8. Illustrative visualization of 3D city models

    NASA Astrophysics Data System (ADS)

    Doellner, Juergen; Buchholz, Henrik; Nienhaus, Marc; Kirsch, Florian

    2005-03-01

    This paper presents an illustrative visualization technique that provides expressive representations of large-scale 3D city models, inspired by the tradition of artistic and cartographic visualizations typically found in bird's-eye view and panoramic maps. We define a collection of city model components and a real-time multi-pass rendering algorithm that achieves comprehensible, abstract 3D city model depictions based on edge enhancement, color-based and shadow-based depth cues, and procedural facade texturing. Illustrative visualization provides an effective visual interface to urban spatial information and associated thematic information complementing visual interfaces based on the Virtual Reality paradigm, offering a huge potential for graphics design. Primary application areas include city and landscape planning, cartoon worlds in computer games, and tourist information systems.

  9. 6. Looking glass aircraft in the project looking glass historic ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    6. Looking glass aircraft in the project looking glass historic district. View to north. - Offutt Air Force Base, Looking Glass Airborne Command Post, Looking Glass Avenue between Comstat Drive & Nightwatch Avenue, Offutt Air Force Base, Bellevue, Sarpy County, NE

  10. Real time 3D scanner: investigations and results

    NASA Astrophysics Data System (ADS)

    Nouri, Taoufik; Pflug, Leopold

    1993-12-01

    This article presents a concept for the reconstruction of 3-D objects using non-invasive, touchless techniques. The principle of the method is to project parallel interference optical fringes onto an object and then to record the object from two angles of view. With appropriate processing, the 3-D object is reconstructed even when it has no plane of symmetry. The 3-D surface data are available immediately in digital form for computer visualization and for analysis software tools. The optical set-up for recording the 3-D object, the 3-D data extraction and treatment, as well as the reconstruction of the 3-D object, are reported and commented on. This application is intended for reconstructive/cosmetic surgery, CAD, animation, and research purposes.

  11. 3D scene reconstruction from multi-aperture images

    NASA Astrophysics Data System (ADS)

    Mao, Miao; Qin, Kaihuai

    2014-04-01

    With the development of virtual reality, there is a growing demand for 3D modeling of real scenes. This paper proposes a novel 3D scene reconstruction framework based on multi-aperture images. Our framework consists of four parts. First, images with different apertures are captured via a programmable aperture. Second, we use the SIFT method for feature point matching. Then we exploit binocular stereo vision to calculate camera parameters and the 3D positions of matching points, forming a sparse 3D scene model. Finally, we apply patch-based multi-view stereo to obtain a dense 3D scene model. Experimental results show that our method is practical and effective for reconstructing dense 3D scene models.
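
    The sparse stage of this pipeline (SIFT matching, two-view geometry, triangulation) can be sketched with OpenCV as below. The intrinsic matrix K is assumed known, the images are assumed grayscale, and the dense patch-based multi-view stereo stage is not shown; this is an illustrative sketch rather than the authors' implementation.

        import cv2
        import numpy as np

        def sparse_two_view_reconstruction(img1_gray, img2_gray, K):
            """SIFT matching, essential-matrix pose recovery, and triangulation."""
            sift = cv2.SIFT_create()
            kp1, des1 = sift.detectAndCompute(img1_gray, None)
            kp2, des2 = sift.detectAndCompute(img2_gray, None)
            matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
            pts1 = np.array([kp1[m.queryIdx].pt for m in matches], dtype=np.float64)
            pts2 = np.array([kp2[m.trainIdx].pt for m in matches], dtype=np.float64)

            E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
            _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)

            P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
            P2 = K @ np.hstack([R, t])
            pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
            return (pts4d[:3] / pts4d[3]).T   # sparse 3D points, up to a global scale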

  12. Recognition methods for 3D textured surfaces

    NASA Astrophysics Data System (ADS)

    Cula, Oana G.; Dana, Kristin J.

    2001-06-01

    Texture as a surface representation is the subject of a wide body of computer vision and computer graphics literature. While texture is always associated with a form of repetition in the image, the repeating quantity may vary. The texture may be a color or albedo variation as in a checkerboard, a paisley print or zebra stripes. Very often in real-world scenes, texture is instead due to a surface height variation, e.g. pebbles, gravel, foliage and any rough surface. Such surfaces are referred to here as 3D textured surfaces. Standard texture recognition algorithms are not appropriate for 3D textured surfaces because the appearance of these surfaces changes in a complex manner with viewing direction and illumination direction. Recent methods have been developed for recognition of 3D textured surfaces using a database of surfaces observed under varied imaging parameters. One of these methods is based on 3D textons obtained using K-means clustering of multiscale feature vectors. Another method uses eigen-analysis originally developed for appearance-based object recognition. In this work we develop a hybrid approach that employs both feature grouping and dimensionality reduction. The method is tested using the Columbia-Utrecht texture database and provides excellent recognition rates. The method is compared with existing recognition methods for 3D textured surfaces. A direct comparison is facilitated by empirical recognition rates from the same texture data set. The current method has key advantages over existing methods including requiring less prior information on both the training and novel images.
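
    The texton step (K-means clustering of multiscale feature vectors) can be sketched generically: compute per-pixel responses to a small multiscale filter bank and cluster them, then describe a novel image by its histogram of texton labels. The filter choices, scales, and K below are assumptions for illustration, not the filter bank of the cited work.

        import numpy as np
        from scipy import ndimage
        from sklearn.cluster import KMeans

        def _filter_responses(img, sigmas):
            """Per-pixel multiscale responses: Gaussian and Laplacian-of-Gaussian filters."""
            responses = [ndimage.gaussian_filter(img, s) for s in sigmas]
            responses += [ndimage.gaussian_laplace(img, s) for s in sigmas]
            return np.stack(responses, axis=-1).reshape(-1, 2 * len(sigmas))

        def texton_dictionary(train_images, n_textons=32, sigmas=(1, 2, 4)):
            """Cluster pooled filter responses from training images into textons."""
            feats = np.vstack([_filter_responses(img, sigmas) for img in train_images])
            return KMeans(n_clusters=n_textons, n_init=10, random_state=0).fit(feats)

        def texton_histogram(img, kmeans, sigmas=(1, 2, 4)):
            """Represent a novel image as a normalized histogram of texton labels."""
            labels = kmeans.predict(_filter_responses(img, sigmas))
            return np.bincount(labels, minlength=kmeans.n_clusters) / labels.size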

  13. YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters

    NASA Astrophysics Data System (ADS)

    Schild, Jonas; Seele, Sven; Masuch, Maic

    2012-03-01

    Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies become attractive for movie theater operators, i.e. interactive 3D games. In this paper, we present a case that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features live 3D HD video stream of a professional stereo camera rig rendered in a real-time game scene. We use the effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide for a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.

  14. 3D Face Modeling Using the Multi-Deformable Method

    PubMed Central

    Hwang, Jinkyu; Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun

    2012-01-01

    In this paper, we focus on the problem of the accuracy performance of 3D face modeling techniques using corresponding features in multiple views, which is quite sensitive to feature extraction errors. To solve the problem, we adopt a statistical model-based 3D face modeling approach in a mirror system consisting of two mirrors and a camera. The overall procedure of our 3D facial modeling method has two primary steps: 3D facial shape estimation using a multiple 3D face deformable model and texture mapping using seamless cloning that is a type of gradient-domain blending. To evaluate our method's performance, we generate 3D faces of 30 individuals and then carry out two tests: accuracy test and robustness test. Our method shows not only highly accurate 3D face shape results when compared with the ground truth, but also robustness to feature extraction errors. Moreover, 3D face rendering results intuitively show that our method is more robust to feature extraction errors than other 3D face modeling methods. An additional contribution of our method is that a wide range of face textures can be acquired by the mirror system. By using this texture map, we generate realistic 3D face for individuals at the end of the paper. PMID:23201976

  15. Usability of stereoscopic view in teleoperation

    NASA Astrophysics Data System (ADS)

    Boonsuk, Wutthigrai

    2015-03-01

    Recently, there has been tremendous growth in the area of 3D stereoscopic visualization. The 3D stereoscopic visualization technology has been used in a growing number of consumer products such as 3D televisions and 3D glasses for gaming systems. This technology builds on the fact that the human brain develops depth perception by retrieving information from the two eyes. Our brain combines the left and right images on the retinas and extracts depth information. Therefore, viewing two video images taken from positions slightly apart, as shown in Figure 1, can create an illusion of depth [8]. Proponents of this technology argue that the stereo view of 3D visualization increases user immersion and performance, as more information is gained through 3D vision compared to the 2D view. However, it is still uncertain whether the additional information gained from 3D stereoscopic visualization can actually improve user performance in real-world situations such as teleoperation.

  16. 3D-Printed Microfluidics.

    PubMed

    Au, Anthony K; Huynh, Wilson; Horowitz, Lisa F; Folch, Albert

    2016-03-14

    The advent of soft lithography allowed for an unprecedented expansion in the field of microfluidics. However, the vast majority of PDMS microfluidic devices are still made with extensive manual labor, are tethered to bulky control systems, and have cumbersome user interfaces, which all render commercialization difficult. On the other hand, 3D printing has begun to embrace the range of sizes and materials that appeal to the developers of microfluidic devices. Prior to fabrication, a design is digitally built as a detailed 3D CAD file. The design can be assembled in modules by remotely collaborating teams, and its mechanical and fluidic behavior can be simulated using finite-element modeling. As structures are created by adding materials without the need for etching or dissolution, processing is environmentally friendly and economically efficient. We predict that in the next few years, 3D printing will replace most PDMS and plastic molding techniques in academia. PMID:26854878

  17. 3D Computations and Experiments

    SciTech Connect

    Couch, R; Faux, D; Goto, D; Nikkel, D

    2004-04-05

    This project consists of two activities. Task A, Simulations and Measurements, combines all the material model development and associated numerical work with the materials-oriented experimental activities. The goal of this effort is to provide an improved understanding of dynamic material properties and to provide accurate numerical representations of those properties for use in analysis codes. Task B, ALE3D Development, involves general development activities in the ALE3D code with the focus of improving simulation capabilities for problems of mutual interest to DoD and DOE. Emphasis is on problems involving multi-phase flow, blast loading of structures and system safety/vulnerability studies.

  18. 3D Computations and Experiments

    SciTech Connect

    Couch, R; Faux, D; Goto, D; Nikkel, D

    2003-05-12

    This project is in its first full year after the combining of two previously funded projects: "3D Code Development" and "Dynamic Material Properties". The motivation behind this move was to emphasize and strengthen the ties between the experimental work and the computational model development in the materials area. The next year's activities will reflect the merging of the two efforts. The current activity is structured in two tasks. Task A, "Simulations and Measurements", combines all the material model development and associated numerical work with the materials-oriented experimental activities. Task B, "ALE3D Development", is a continuation of the non-materials related activities from the previous project.

  19. Hybrid 3D laser sensor based on a high-performance long-range wide-field-of-view laser scanner and a calibrated high-resolution digital camera

    NASA Astrophysics Data System (ADS)

    Ullrich, Andreas; Studnicka, Nikolaus; Riegl, Johannes

    2004-09-01

    We present a hybrid sensor consisting of a high-performance 3D imaging laser sensor and a high-resolution digital camera. The laser sensor uses the time-of-flight principle based on near-infrared pulses. We demonstrate the performance capabilities of the system by presenting example data and we describe the software package used for data acquisition, data merging and visualization. The advantages of using both near-range photogrammetry and laser scanning for data registration and data extraction are discussed.

  20. 3D ladar ATR based on recognition by parts

    NASA Astrophysics Data System (ADS)

    Sobel, Erik; Douglas, Joel; Ettinger, Gil

    2003-09-01

    LADAR imaging is unique in its potential to accurately measure the 3D surface geometry of targets. We exploit this 3D geometry to perform automatic target recognition on targets in the domain of military and civilian ground vehicles. Here we present a robust model based 3D LADAR ATR system which efficiently searches through target hypothesis space by reasoning hierarchically from vehicle parts up to identification of a whole vehicle with specific pose and articulation state. The LADAR data consists of one or more 3D point clouds generated by laser returns from ground vehicles viewed from multiple sensor locations. The key to this approach is an automated 3D registration process to precisely align and match multiple data views to model based predictions of observed LADAR data. We accomplish this registration using robust 3D surface alignment techniques which we have also used successfully in 3D medical image analysis applications. The registration routine seeks to minimize a robust 3D surface distance metric to recover the best six-degree-of-freedom pose and fit. We process the observed LADAR data by first extracting salient parts, matching these parts to model based predictions and hierarchically constructing and testing increasingly detailed hypotheses about the identity of the observed target. This cycle of prediction, extraction, and matching efficiently partitions the target hypothesis space based on the distinctive anatomy of the target models and achieves effective recognition by progressing logically from a target's constituent parts up to its complete pose and articulation state.
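    The registration described above recovers a six-degree-of-freedom pose by aligning observed point clouds to model predictions. The abstract does not give the algorithm's internals, so the sketch below shows only a generic ICP-style iteration (closest-point matching plus the closed-form SVD solution for the best rigid transform) as an illustration of that class of technique, not the authors' robust-metric implementation.

```python
# Hedged sketch of one ICP-style alignment step: closest-point matching plus the
# closed-form (SVD/Kabsch) rigid transform. The ATR system described above uses a
# robust surface metric; this plain least-squares version is only illustrative.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (N x 3 arrays)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp_step(observed, model):
    """Match each observed point to its nearest model point, then align."""
    tree = cKDTree(model)
    _, idx = tree.query(observed)
    R, t = best_rigid_transform(observed, model[idx])
    return observed @ R.T + t, R, t
```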

  1. Accommodation response measurements for integral 3D image

    NASA Astrophysics Data System (ADS)

    Hiura, H.; Mishina, T.; Arai, J.; Iwadate, Y.

    2014-03-01

    We measured accommodation responses under integral photography (IP), binocular stereoscopic, and real object display conditions, under both binocular and monocular viewing conditions. The equipment we used was an optometric device and a 3D display. We developed the 3D display for IP and binocular stereoscopic images that comprises a high-resolution liquid crystal display (LCD) and a high-density lens array. The LCD has a resolution of 468 dpi and a diagonal size of 4.8 inches. The high-density lens array comprises 106 x 69 micro lenses that have a focal length of 3 mm and diameter of 1 mm. The lenses are arranged in a honeycomb pattern. The 3D display was positioned 60 cm from an observer under IP and binocular stereoscopic display conditions. The target was presented at eight depth positions relative to the 3D display: 15, 10, and 5 cm in front of the 3D display, on the 3D display panel, and 5, 10, 15 and 30 cm behind the 3D display under the IP and binocular stereoscopic display conditions. Under the real object display condition, the target was displayed on the 3D display panel, and the 3D display was placed at the eight positions. The results suggest that the IP image induced more natural accommodation responses compared to the binocular stereoscopic image. The accommodation responses of the IP image were weaker than those of a real object; however, they showed a similar tendency to those of the real object under the two viewing conditions. Therefore, IP can induce accommodation to the depth positions of 3D images.

  2. 3D MR imaging in real time

    NASA Astrophysics Data System (ADS)

    Guttman, Michael A.; McVeigh, Elliot R.

    2001-05-01

    A system has been developed to produce live 3D volume renderings from an MR scanner. Whereas real-time 2D MR imaging has been demonstrated by several groups, 3D volumes are currently rendered off-line to gain greater understanding of anatomical structures. For example, surgical planning is sometimes performed by viewing 2D images or 3D renderings from previously acquired image data. A disadvantage of this approach is misregistration which could occur if the anatomy changes due to normal muscle contractions or surgical manipulation. The ability to produce volume renderings in real-time and present them in the magnet room could eliminate this problem, and enable or benefit other types of interventional procedures. The system uses the data stream generated by a fast 2D multi-slice pulse sequence to update a volume rendering immediately after a new slice is available. We demonstrate some basic types of user interaction with the rendering during imaging at a rate of up to 20 frames per second.

  3. SNL3dFace

    2007-07-20

    This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters, the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regulation and rate of deformation. A variety of options are available to control the information that is encoded by the PCA. Such options include the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection this software supports feature matching to obtain similarity matrices for performance analysis. In addition, this software supports visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.

  4. Making Inexpensive 3-D Models

    ERIC Educational Resources Information Center

    Manos, Harry

    2016-01-01

    Visual aids are important to student learning, and they help make the teacher's job easier. Keeping with the "TPT" theme of "The Art, Craft, and Science of Physics Teaching," the purpose of this article is to show how teachers, lacking equipment and funds, can construct a durable 3-D model reference frame and a model gravity…

  5. SNL3dFace

    SciTech Connect

    Russ, Trina; Koch, Mark; Koudelka, Melissa; Peters, Ralph; Little, Charles; Boehnen, Chris; Peters, Tanya

    2007-07-20

    This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters, the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regulation and rate of deformation. A variety of options are available to control the information that is encoded by the PCA. Such options include the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection this software supports feature matching to obtain similarity matrices for performance analysis. In addition, this software supports visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.
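    The PCA feature projection and similarity-matrix computation mentioned in the record can be illustrated with a minimal, hypothetical numpy sketch. It does not reproduce the SNL3dFace code: FLDA, the ICP-based normalization and the surface deformation step are omitted, and the face vectors used below are random stand-ins.

```python
# Minimal, hypothetical sketch of PCA feature projection and a cosine-similarity
# matrix for verification scoring. It does not reproduce the SNL3dFace code; FLDA,
# the ICP normalization step, and the deformation model are omitted.
import numpy as np

def pca_project(face_vectors, n_components):
    """face_vectors: (n_faces, n_features) matrix of flattened, normalized faces."""
    mean = face_vectors.mean(axis=0)
    centered = face_vectors - mean
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    basis = Vt[:n_components]                 # principal directions
    return centered @ basis.T, mean, basis

def cosine_similarity_matrix(features):
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    return normed @ normed.T                  # entry (i, j) scores face i vs face j

rng = np.random.default_rng(0)
faces = rng.normal(size=(10, 300))            # stand-in for normalized 3D face data
feats, _, _ = pca_project(faces, n_components=5)
print(cosine_similarity_matrix(feats).shape)  # (10, 10)
```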

  6. 3D Printing: Exploring Capabilities

    ERIC Educational Resources Information Center

    Samuels, Kyle; Flowers, Jim

    2015-01-01

    As 3D printers become more affordable, schools are using them in increasing numbers. They fit well with the emphasis on product design in technology and engineering education, allowing students to create high-fidelity physical models to see and test different iterations in their product designs. They may also help students to "think in three…

  7. 3D Imaging with Holographic Tomography

    NASA Astrophysics Data System (ADS)

    Sheppard, Colin J. R.; Kou, Shan Shan

    2010-04-01

    There are two main types of tomography that enable the 3D internal structures of objects to be reconstructed from scattered data. The commonly known computerized tomography (CT) gives good results in the x-ray wavelength range where the filtered back-projection theorem and Radon transform can be used. These techniques rely on the Fourier projection-slice theorem where rays are considered to propagate straight through the object. Another type of tomography called 'diffraction tomography' applies in optics and acoustics where diffraction and scattering effects must be taken into account. The latter proves to be a more difficult problem, as light no longer travels straight through the sample. Holographic tomography is a popular way of performing diffraction tomography and there has been active experimental research on reconstructing complex refractive index data using this approach recently. However, there are two distinct ways of doing tomography: either by rotation of the object or by rotation of the illumination while fixing the detector. The difference between these two setups is intuitive but needs to be quantified. From a Fourier optics and information transformation point of view, we use 3D transfer function analysis to quantitatively describe how spatial frequencies of the object are mapped to the Fourier domain. We first employ a paraxial treatment by calculating the Fourier transform of the defocused OTF. The shape of the calculated 3D CTF for tomography, by scanning the illumination in one direction only, takes on a form that we might call a 'peanut,' compared to the case of object rotation, where a diablo is formed, the peanut exhibiting significant differences and non-isotropy. In particular, there is a line singularity along one transverse direction. Under high numerical aperture conditions, the paraxial treatment is not accurate, and so we make use of 3D analytical geometry to calculate the behaviour in the non-paraxial case. This time, we
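    The Fourier projection-slice theorem invoked above (the 1D Fourier transform of a straight-ray projection equals a central slice of the object's 2D transform) can be checked numerically on a synthetic object, as in the following sketch; the test object is arbitrary.

```python
# Numerical check of the Fourier projection-slice theorem on a synthetic 2D object:
# the 1D FFT of a projection along y equals the k_y = 0 row of the 2D FFT.
import numpy as np

n = 128
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
obj = np.exp(-((x - 0.2)**2 + (y + 0.1)**2) / 0.05)   # arbitrary test object

projection = obj.sum(axis=0)                           # integrate along y (a "ray" direction)
slice_1d = np.fft.fft(projection)
central_row = np.fft.fft2(obj)[0, :]                   # k_y = 0 slice of the 2D spectrum

print(np.allclose(slice_1d, central_row))              # True
```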

  8. Fish body surface data measurement based on 3D digital image correlation

    NASA Astrophysics Data System (ADS)

    Jiang, Ming; Qian, Chen; Yang, Wenkai

    2016-01-01

    To film a moving fish in a glass tank, one must account for light bending at the air-glass and glass-water interfaces. Based on binocular stereo vision and the refraction principle, we establish a mathematical model of 3D image correlation to reconstruct the 3D coordinates of samples in the water. By marking speckle patterns on the fish surface, a series of real-time speckle images of the swimming fish is obtained by two high-speed cameras, and the instantaneous 3D shape, strain, displacement, etc. of the fish are reconstructed.
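    The refraction model underlying this measurement can be built from vector-form Snell refraction applied at each interface. The sketch below shows that basic building block; the refractive indices, surface normals and ray directions are illustrative assumptions, not the authors' calibration.

```python
# Hedged sketch of vector-form Snell refraction, the basic building block for modelling
# rays bending at the air-glass and glass-water interfaces of the tank. Indices and
# directions below are illustrative only.
import numpy as np

def refract(incident, normal, n1, n2):
    """Refract a unit direction vector at an interface whose normal points toward the
    incident medium. Returns None on total internal reflection."""
    i = incident / np.linalg.norm(incident)
    n = normal / np.linalg.norm(normal)
    eta = n1 / n2
    cos_i = -np.dot(n, i)
    k = 1.0 - eta**2 * (1.0 - cos_i**2)
    if k < 0.0:
        return None                      # total internal reflection
    return eta * i + (eta * cos_i - np.sqrt(k)) * n

ray = np.array([0.3, 0.0, -1.0])         # ray heading down into the tank
ray_in_glass = refract(ray, np.array([0.0, 0.0, 1.0]), 1.0, 1.52)            # air -> glass
ray_in_water = refract(ray_in_glass, np.array([0.0, 0.0, 1.0]), 1.52, 1.33)  # glass -> water
print(ray_in_water)
```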

  9. 3D geometry applied to atmospheric layers

    NASA Astrophysics Data System (ADS)

    Nadjib Kouahla, Mohamed; Moreels, Guy; Faivre, Michael

    Epipolar geometry is an efficient method for generating 3D representations of objects. Here we present an original application of this method to the case of atmospheric layers. Two synchronized simultaneous images of the same scene are taken at two sites separated by a distance D. The 36° x 36° fields of view are oriented face to face along the same line of sight, but in opposite directions. The elevation angle of the optical axis above the horizon is 17°. The observed objects are airglow emissions, cirrus clouds, or aircraft trails. In the case of clouds, the shape of the objects is diffuse. To obtain a superposition of the zone observed in common, it is necessary to calculate a normalized cross-correlation coefficient (NCC) to identify pairs of matching points in both images. The perspective effect in the rectangular images is inverted to produce a satellite-type view of the atmospheric layer as it could be seen from an overlying satellite. We developed a triangulation algorithm to retrieve the 3D surface of the observed layer. The stereoscopic method was used to retrieve the wavy structure of the OH emissive layer at the altitude of 87 km. The distance between the observing sites was 600 km. Results obtained in Peru from the sites of Cerro Cosmos and Cerro Verde will be presented. We are currently extending the stereoscopic procedure to the study of tropospheric cirrus clouds, of natural origin or induced by aircraft engines. In this case, the distance between observation sites is D ≈ 60 km.
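    The matching step relies on a normalized cross-correlation coefficient (NCC) between image patches around candidate points. A minimal sketch of that coefficient is given below; the patch size and the synthetic patches are assumptions for illustration.

```python
# Minimal sketch of the normalized cross-correlation (NCC) coefficient used to score
# candidate matches between patches of the two site images. Patch size is an assumption.
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation of two equally sized image patches, in [-1, 1]."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

rng = np.random.default_rng(1)
a = rng.normal(size=(15, 15))
print(ncc(a, a))                           # 1.0 for identical patches
print(ncc(a, rng.normal(size=(15, 15))))   # near 0 for unrelated patches
```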

  10. Streamlined, Inexpensive 3D Printing of the Brain and Skull.

    PubMed

    Naftulin, Jason S; Kimchi, Eyal Y; Cash, Sydney S

    2015-01-01

    Neuroimaging technologies such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) collect three-dimensional data (3D) that is typically viewed on two-dimensional (2D) screens. Actual 3D models, however, allow interaction with real objects such as implantable electrode grids, potentially improving patient specific neurosurgical planning and personalized clinical education. Desktop 3D printers can now produce relatively inexpensive, good quality prints. We describe our process for reliably generating life-sized 3D brain prints from MRIs and 3D skull prints from CTs. We have integrated a standardized, primarily open-source process for 3D printing brains and skulls. We describe how to convert clinical neuroimaging Digital Imaging and Communications in Medicine (DICOM) images to stereolithography (STL) files, a common 3D object file format that can be sent to 3D printing services. We additionally share how to convert these STL files to machine instruction gcode files, for reliable in-house printing on desktop, open-source 3D printers. We have successfully printed over 19 patient brain hemispheres from 7 patients on two different open-source desktop 3D printers. Each brain hemisphere costs approximately $3-4 in consumable plastic filament as described, and the total process takes 14-17 hours, almost all of which is unsupervised (preprocessing = 4-6 hr; printing = 9-11 hr, post-processing = <30 min). Printing a matching portion of a skull costs $1-5 in consumable plastic filament and takes less than 14 hr, in total. We have developed a streamlined, cost-effective process for 3D printing brain and skull models. We surveyed healthcare providers and patients who confirmed that rapid-prototype patient specific 3D models may help interdisciplinary surgical planning and patient education. The methods we describe can be applied for other clinical, research, and educational purposes. PMID:26295459
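    The central volume-to-STL step of the pipeline (extracting an isosurface and writing a mesh file) can be sketched with marching cubes and a hand-written ASCII STL writer, as below. Loading and segmenting the DICOM series, the choice of iso-level, and the gcode generation are omitted; the synthetic sphere only stands in for a segmented brain mask.

```python
# Hedged sketch of the core volume -> STL step described above: extract an isosurface
# with marching cubes and write an ASCII STL. Loading/segmenting the DICOM series and
# the choice of iso-level are omitted and would follow the authors' full protocol.
import numpy as np
from skimage import measure

def volume_to_ascii_stl(volume, level, path, name="brain"):
    verts, faces, normals, _ = measure.marching_cubes(volume, level=level)
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for tri in faces:
            v0, v1, v2 = verts[tri]
            n = np.cross(v1 - v0, v2 - v0)
            n = n / (np.linalg.norm(n) + 1e-12)
            f.write(f"  facet normal {n[0]} {n[1]} {n[2]}\n    outer loop\n")
            for v in (v0, v1, v2):
                f.write(f"      vertex {v[0]} {v[1]} {v[2]}\n")
            f.write("    endloop\n  endfacet\n")
        f.write(f"endsolid {name}\n")

# Example with a synthetic sphere standing in for a segmented brain mask
z, y, x = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
volume_to_ascii_stl((x**2 + y**2 + z**2 < 0.5).astype(float), 0.5, "sphere.stl")
```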

  11. Streamlined, Inexpensive 3D Printing of the Brain and Skull

    PubMed Central

    Cash, Sydney S.

    2015-01-01

    Neuroimaging technologies such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) collect three-dimensional data (3D) that is typically viewed on two-dimensional (2D) screens. Actual 3D models, however, allow interaction with real objects such as implantable electrode grids, potentially improving patient specific neurosurgical planning and personalized clinical education. Desktop 3D printers can now produce relatively inexpensive, good quality prints. We describe our process for reliably generating life-sized 3D brain prints from MRIs and 3D skull prints from CTs. We have integrated a standardized, primarily open-source process for 3D printing brains and skulls. We describe how to convert clinical neuroimaging Digital Imaging and Communications in Medicine (DICOM) images to stereolithography (STL) files, a common 3D object file format that can be sent to 3D printing services. We additionally share how to convert these STL files to machine instruction gcode files, for reliable in-house printing on desktop, open-source 3D printers. We have successfully printed over 19 patient brain hemispheres from 7 patients on two different open-source desktop 3D printers. Each brain hemisphere costs approximately $3–4 in consumable plastic filament as described, and the total process takes 14–17 hours, almost all of which is unsupervised (preprocessing = 4–6 hr; printing = 9–11 hr, post-processing = <30 min). Printing a matching portion of a skull costs $1–5 in consumable plastic filament and takes less than 14 hr, in total. We have developed a streamlined, cost-effective process for 3D printing brain and skull models. We surveyed healthcare providers and patients who confirmed that rapid-prototype patient specific 3D models may help interdisciplinary surgical planning and patient education. The methods we describe can be applied for other clinical, research, and educational purposes. PMID:26295459

  12. 3D Tissue Culturing: Tissue in Cube: In Vitro 3D Culturing Platform with Hybrid Gel Cubes for Multidirectional Observations (Adv. Healthcare Mater. 13/2016).

    PubMed

    Hagiwara, Masaya; Kawahara, Tomohiro; Nobata, Rina

    2016-07-01

    An in vitro 3D culturing platform enabling multidirectional observations of 3D biosamples is presented by M. Hagiwara and co-workers on page 1566. 3D recognition of a sample structure can be achieved by facilitating multi-directional views using a standard microscope without a laser system. The cubic platform has the potential to promote 3D culture studies, offering easy handling and compatibility with commercial culture plates with a low price tag. PMID:27384934

  13. Met.3D - a new open-source tool for interactive 3D visualization of ensemble weather forecasts

    NASA Astrophysics Data System (ADS)

    Rautenhaus, Marc; Kern, Michael; Schäfler, Andreas; Westermann, Rüdiger

    2015-04-01

    We introduce Met.3D, a new open-source tool for the interactive 3D visualization of numerical ensemble weather predictions. The tool has been developed to support weather forecasting during aircraft-based atmospheric field campaigns; however, it is also applicable to other forecasting, research and teaching activities. Our work approaches challenging topics related to the visual analysis of numerical atmospheric model output: 3D visualisation, ensemble visualization, and how both can be used in a meaningful way suited to weather forecasting. Met.3D builds a bridge from proven 2D visualization methods commonly used in meteorology to 3D visualization by combining both visualization types in a 3D context. It implements methods that address the issue of spatial perception in the 3D view as well as approaches to using the ensemble in order to assess forecast uncertainty. Interactivity is key to the Met.3D approach. The tool uses modern graphics hardware technology to achieve interactive visualization of present-day numerical weather prediction datasets on standard consumer hardware. Met.3D supports forecast data from the European Centre for Medium Range Weather Forecasts and operates directly on ECMWF hybrid sigma-pressure level grids. In this presentation, we provide an overview of the software, illustrated with short video examples, and give information on its availability.

  14. 3-D visualization of ensemble weather forecasts - Part 1: The visualization tool Met.3D (version 1.0)

    NASA Astrophysics Data System (ADS)

    Rautenhaus, M.; Kern, M.; Schäfler, A.; Westermann, R.

    2015-02-01

    We present Met.3D, a new open-source tool for the interactive 3-D visualization of numerical ensemble weather predictions. The tool has been developed to support weather forecasting during aircraft-based atmospheric field campaigns; however, it is also applicable to other forecasting, research and teaching activities. Our work approaches challenging topics related to the visual analysis of numerical atmospheric model output: 3-D visualization, ensemble visualization, and how both can be used in a meaningful way suited to weather forecasting. Met.3D builds a bridge from proven 2-D visualization methods commonly used in meteorology to 3-D visualization by combining both visualization types in a 3-D context. We address the issue of spatial perception in the 3-D view and present approaches to using the ensemble to allow the user to assess forecast uncertainty. Interactivity is key to our approach. Met.3D uses modern graphics technology to achieve interactive visualization on standard consumer hardware. The tool supports forecast data from the European Centre for Medium Range Weather Forecasts and can operate directly on ECMWF hybrid sigma-pressure level grids. We describe the employed visualization algorithms, and analyse the impact of the ECMWF grid topology on computing 3-D ensemble statistical quantities. Our techniques are demonstrated with examples from the T-NAWDEX-Falcon 2012 campaign.
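    The ECMWF hybrid sigma-pressure level grids mentioned above use the standard level definition in which pressure at a half level is a linear function of surface pressure, p = A + B * p_s. The sketch below illustrates that relation; the A/B coefficients shown are made-up placeholders rather than an actual ECMWF level set.

```python
# Sketch of the ECMWF hybrid sigma-pressure relation for grids like those Met.3D
# operates on: half-level pressure p = A + B * surface_pressure, with full-level
# pressures taken here as the mean of adjacent half levels. The A/B coefficients
# below are made-up placeholders, not an actual ECMWF level definition.
import numpy as np

def model_level_pressures(a_half, b_half, surface_pressure):
    """a_half, b_half: hybrid coefficients on half levels (Pa and dimensionless)."""
    p_half = a_half + b_half * surface_pressure          # half-level pressures (Pa)
    return 0.5 * (p_half[:-1] + p_half[1:])              # full-level pressures (Pa)

a = np.array([0.0, 2000.0, 6000.0, 0.0])                 # toy coefficients (Pa)
b = np.array([0.0, 0.1, 0.5, 1.0])                       # toy coefficients (-)
print(model_level_pressures(a, b, surface_pressure=101325.0))
```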

  15. Volumetric image display for complex 3D data visualization

    NASA Astrophysics Data System (ADS)

    Tsao, Che-Chih; Chen, Jyh Shing

    2000-05-01

    A volumetric image display (VID) is a new display technology capable of displaying computer-generated 3D images in a volumetric space. Many viewers can walk around the display and see the image from all directions simultaneously without wearing any glasses. The image is real and possesses all major elements of both physiological and psychological depth cues. Due to the volumetric nature of its image, the VID can provide the most natural human-machine interface in operations involving 3D data manipulation and 3D target monitoring. The technology creates volumetric 3D images by projecting a series of profiling images distributed in space, which form a volumetric image because of the after-image effect of the human eye. Exemplary applications in biomedical image visualization were tested on a prototype display, using different methods to display a data set from CT scans. The features of this display technology make it most suitable for applications that require quick understanding of 3D relations, need frequent spatial interactions with the 3D images, or involve time-varying 3D data. It can also be useful for group discussion and decision making.

  16. Optoplasmonics: hybridization in 3D

    NASA Astrophysics Data System (ADS)

    Rosa, L.; Gervinskas, G.; Žukauskas, A.; Malinauskas, M.; Brasselet, E.; Juodkazis, S.

    2013-12-01

    Femtosecond laser fabrication has been used to make hybrid refractive and diffractive micro-optical elements in the photo-polymer SZ2080. For applications in microfluidics, axicon lenses were fabricated (both single and arrays) for generation of light intensity patterns extending through the entire depth of a typically tens-of-micrometers deep channel. Further hybridisation of an axicon with a plasmonic slot is fabricated and demonstrated numerically. Spiralling chiral grooves were inscribed into a 100-nm-thick gold coating sputtered over polymerized micro-axicon lenses, using a focused ion beam. This demonstrates the possibility of hybridisation between optical and plasmonic 3D micro-optical elements. Numerical modelling of optical performance by the 3D-FDTD method is presented.

  17. 3-D Relativistic MHD Simulations

    NASA Astrophysics Data System (ADS)

    Nishikawa, K.-I.; Frank, J.; Koide, S.; Sakai, J.-I.; Christodoulou, D. M.; Sol, H.; Mutel, R. L.

    1998-12-01

    We present 3-D numerical simulations of moderately hot, supersonic jets propagating initially along or obliquely to the field lines of a denser magnetized background medium with Lorentz factors of W = 4.56 and evolving in a four-dimensional spacetime. The new results are understood as follows: Relativistic simulations have consistently shown that these jets are effectively heavy and so they do not suffer substantial momentum losses and are not decelerated as efficiently as their nonrelativistic counterparts. In addition, the ambient magnetic field, however strong, can be pushed aside with relative ease by the beam, provided that the degrees of freedom associated with all three spatial dimensions are followed self-consistently in the simulations. This effect is analogous to pushing Japanese ``noren'' or vertical Venetian blinds out of the way while the slats are allowed to bend in 3-D space rather than as a 2-D slab structure.

  18. Forensic 3D Scene Reconstruction

    SciTech Connect

    Little, Charles Q.; Peters, Ralph R.; Rigdon, J. Brian; Small, Daniel E.

    1999-10-12

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a feasible prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.

  19. Forensic 3D scene reconstruction

    NASA Astrophysics Data System (ADS)

    Little, Charles Q.; Small, Daniel E.; Peters, Ralph R.; Rigdon, J. B.

    2000-05-01

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a fieldable prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.

  20. 360-degree 3D profilometry

    NASA Astrophysics Data System (ADS)

    Song, Yuanhe; Zhao, Hong; Chen, Wenyi; Tan, Yushan

    1997-12-01

    A new method of 360-degree turning 3D shape measurement, in which light sectioning and phase shifting techniques are both used, is presented in this paper. A sinusoidal light field is applied in the projected light stripe, and the phase shifting technique is used to calculate the phases of the light slit. Thereafter, the wrapped phase distribution of the slit is formed and the unwrapping process is performed by means of the height information based on the light sectioning method. Therefore, phase measuring results with better precision can be obtained. Finally, the target 3D shape data can be produced according to geometric relationships between phases and the object heights. The principles of this method are discussed in detail and experimental results are shown in this paper.
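    The phase-shifting step described above can be illustrated with the familiar four-step relation, in which four fringe images taken at 90-degree phase offsets yield the wrapped phase at each pixel. The sketch below is illustrative and assumes exactly four equal phase steps; it does not reproduce the authors' implementation or their light-sectioning-based unwrapping.

```python
# Sketch of the standard four-step phase-shifting calculation: four images of the
# sinusoidal stripe with 90-degree phase offsets give the wrapped phase per pixel.
# The unwrapping via light-sectioning height data described above is not reproduced.
import numpy as np

def wrapped_phase(i0, i1, i2, i3):
    """Wrapped phase (radians, in (-pi, pi]) from intensities at shifts 0, 90, 180, 270 deg."""
    return np.arctan2(i3 - i1, i0 - i2)

# Synthetic check: build four shifted fringe images from a known phase map
true_phase = np.linspace(-np.pi, np.pi, 256)[None, :] * np.ones((64, 1))
frames = [0.5 + 0.4 * np.cos(true_phase + k * np.pi / 2) for k in range(4)]
print(np.allclose(wrapped_phase(*frames), true_phase, atol=1e-6))   # True
```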

  1. 3D Printed Robotic Hand

    NASA Technical Reports Server (NTRS)

    Pizarro, Yaritzmar Rosario; Schuler, Jason M.; Lippitt, Thomas C.

    2013-01-01

    Dexterous robotic hands are changing the way robots and humans interact and use common tools. Unfortunately, the complexity of the joints and actuations drives up the manufacturing cost. Some cutting edge and commercially available rapid prototyping machines now have the ability to print multiple materials and even combine these materials in the same job. A 3D model of a robotic hand was designed using Creo Parametric 2.0. Combining "hard" and "soft" materials, the model was printed on the Object Connex350 3D printer with the purpose of resembling as much as possible the human appearance and mobility of a real hand while needing no assembly. After printing the prototype, strings were installed as actuators to test mobility. Based on printing materials, the manufacturing cost of the hand was $167, significantly lower than other robotic hands without the actuators since they have more complex assembly processes.

  2. 3D light scanning macrography.

    PubMed

    Huber, D; Keller, M; Robert, D

    2001-08-01

    The technique of 3D light scanning macrography permits the non-invasive surface scanning of small specimens at magnifications up to 200x. Obviating both the problem of limited depth of field inherent to conventional close-up macrophotography and the metallic coating required by scanning electron microscopy, 3D light scanning macrography provides three-dimensional digital images of intact specimens without the loss of colour, texture and transparency information. This newly developed technique offers a versatile, portable and cost-efficient method for the non-invasive digital and photographic documentation of small objects. Computer controlled device operation and digital image acquisition facilitate fast and accurate quantitative morphometric investigations, and the technique offers a broad field of research and educational applications in biological, medical and materials sciences. PMID:11489078

  3. 3D-graphite structure

    SciTech Connect

    Belenkov, E. A. Ali-Pasha, V. A.

    2011-01-15

    The structure of clusters of some new carbon 3D-graphite phases has been calculated using molecular-mechanics methods. It is established that the 3D-graphite polytypes α₁,₁, α₁,₃, α₁,₅, α₂,₁, α₂,₃, α₃,₁, β₁,₂, β₁,₄, β₁,₆, β₂,₁, and β₃,₂ consist of sp²-hybridized atoms, have hexagonal unit cells, and differ in regards to the structure of layers and the order of their alternation. A possible way to experimentally synthesize the new carbon phases is proposed: the polymerization and carbonization of hydrocarbon molecules.

  4. [Real time 3D echocardiography].

    PubMed

    Bauer, F; Shiota, T; Thomas, J D

    2001-07-01

    Three-dimensional representation of the heart is an old concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections, the spatial localisation and orientation of which require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matricial emitter-receiver probe complex with parallel data processing provides instantaneous acquisition of a pyramidal 64 degrees x 64 degrees volume. The image is restituted in real time and is composed of 3 planes (planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this system of acquisition allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real time 3D echocardiography is ready for clinical usage, some improvements are still necessary to improve its user-friendliness. Real time 3D echocardiography could then become the essential tool for understanding, diagnosis and management of patients. PMID:11494630

  5. [Real time 3D echocardiography

    NASA Technical Reports Server (NTRS)

    Bauer, F.; Shiota, T.; Thomas, J. D.

    2001-01-01

    Three-dimensional representation of the heart is an old concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections, the spatial localisation and orientation of which require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matricial emitter-receiver probe complex with parallel data processing provides instantaneous acquisition of a pyramidal 64 degrees x 64 degrees volume. The image is restituted in real time and is composed of 3 planes (planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this system of acquisition allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real time 3D echocardiography is ready for clinical usage, some improvements are still necessary to improve its user-friendliness. Real time 3D echocardiography could then become the essential tool for understanding, diagnosis and management of patients.

  6. 3D-model building of the jaw impression

    NASA Astrophysics Data System (ADS)

    Ahmed, Moumen T.; Yamany, Sameh M.; Hemayed, Elsayed E.; Farag, Aly A.

    1997-03-01

    A novel approach is proposed to obtain a record of the patient's occlusion using computer vision. Data acquisition is obtained using intra-oral video cameras. The technique utilizes shape from shading to extract 3D information from 2D views of the jaw, and a novel technique for 3D data registration using genetic algorithms. The resulting 3D model can be used for diagnosis, treatment planning, and implant purposes. The overall purpose of this research is to develop a model-based vision system for orthodontics to replace traditional approaches. This system will be flexible, accurate, and will reduce the cost of orthodontic treatments.

  7. Preference for motion and depth in 3D film

    NASA Astrophysics Data System (ADS)

    Hartle, Brittney; Lugtigheid, Arthur; Kazimi, Ali; Allison, Robert S.; Wilcox, Laurie M.

    2015-03-01

    While heuristics have evolved over decades for the capture and display of conventional 2D film, it is not clear that these always apply well to stereoscopic 3D (S3D) film. Further, while there has been considerable recent research on viewer comfort in S3D media, little attention has been paid to audience preferences for filming parameters in S3D. Here we evaluate viewers' preferences for moving S3D film content in a theatre setting. Specifically we examine preferences for combinations of camera motion (speed and direction) and stereoscopic depth (interaxial separation, IA). The amount of IA had no impact on clip preferences regardless of the direction or speed of camera movement. However, preferences were influenced by camera speed, but only in the in-depth condition where viewers preferred faster motion. Given that previous research shows that slower speeds are more comfortable for viewing S3D content, our results show that viewing preferences cannot be predicted simply from measures of comfort. Instead, it is clear that viewer response to S3D film is complex and that film parameters selected to enhance comfort may in some instances produce less appealing content.

  8. GPU-Accelerated Denoising in 3D (GD3D)

    2013-10-01

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. This software addresses two facets of this promising application: what tuning is necessary to achieve optimal performance on a modern GPU? And what parameters yield the best denoising results in practice? To answer the first question, the software performs an autotuning step to empirically determine optimal memory blocking on the GPU. To answer the second, it performs a sweep of algorithm parameters to determine the combination that best reduces the mean squared error relative to a noiseless reference image.
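    The parameter-sweep idea described in the record can be sketched on the CPU as follows: filter the noisy volume with a range of settings and keep the setting with the lowest mean squared error against the noiseless reference. A Gaussian filter stands in here for the GPU bilateral/diffusion/non-local-means kernels, an assumption made only for brevity.

```python
# CPU-side sketch of the parameter-sweep idea: try filter settings and keep the one
# with the lowest mean squared error against a noiseless reference. A Gaussian filter
# stands in for the GPU bilateral/anisotropic-diffusion/non-local-means kernels.
import numpy as np
from scipy.ndimage import gaussian_filter

def best_sigma(noisy, reference, sigmas):
    scores = {s: np.mean((gaussian_filter(noisy, sigma=s) - reference) ** 2)
              for s in sigmas}
    return min(scores, key=scores.get), scores

rng = np.random.default_rng(0)
reference = gaussian_filter(rng.normal(size=(32, 32, 32)), 2.0)   # smooth "clean" volume
noisy = reference + 0.05 * rng.normal(size=reference.shape)
sigma, scores = best_sigma(noisy, reference, sigmas=[0.5, 1.0, 1.5, 2.0])
print(sigma, scores[sigma])
```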

  9. Modeling, Prediction, and Reduction of 3D Crosstalk in Circular Polarized Stereoscopic LCDs.

    PubMed

    Zeng, Menglin; Robinson, Alan E; Nguyen, Truong Q

    2015-12-01

    Crosstalk, which is the incomplete separation between the left and right views in 3D displays, induces ghosting and makes it difficult for the eyes to fuse the stereo image for depth perception. The circularly polarized (CP) liquid crystal display (LCD) is one of the mainstream consumer 3D displays, with the prospering of 3D movies and gaming. The polarizing system including the patterned retarder is one of the major causes of crosstalk in CP LCD. The contributions of this paper are the modeling of the polarizing system of CP LCD, and a crosstalk reduction method that efficiently cancels crosstalk and preserves image contrast. For the modeling, the practical orientation of the polarized glasses (PG) is considered. In addition, this paper calculates the rotation of the light-propagation coordinate for the Stokes vector as light propagates from LCD to PG, a calculation that is missing in previous works applying Mueller calculus. The proposed crosstalk reduction method is formulated as a linear programming problem, which can be easily solved. In addition, we propose excluding the highly textured areas in the input images to further preserve image contrast in crosstalk reduction. PMID:26259220
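    The paper formulates crosstalk reduction as a linear program; its model parameters are not given in the abstract, so the sketch below instead shows a simpler, commonly used crosstalk-subtraction idea: model each observed view as the intended image plus a leakage fraction of the other view, invert the 2 x 2 mixing per pixel, and clip to the displayable range. This is explicitly not the authors' method, only an illustration of the underlying linear model.

```python
# Illustrative crosstalk-subtraction sketch: model observed views as a 2x2 linear mix of
# the drive images and invert it per pixel, clipping to the displayable range. The paper
# above instead solves a linear program and models the polarizer optics explicitly.
import numpy as np

def precompensate(left, right, leak):
    """Solve [[1, leak], [leak, 1]] @ [dl, dr] = [left, right] per pixel, then clip."""
    det = 1.0 - leak**2
    drive_l = (left - leak * right) / det
    drive_r = (right - leak * left) / det
    return np.clip(drive_l, 0.0, 1.0), np.clip(drive_r, 0.0, 1.0)

rng = np.random.default_rng(0)
L, R = rng.random((2, 4, 4))                      # toy left/right images in [0, 1]
dl, dr = precompensate(L, R, leak=0.05)
# Wherever no clipping was triggered, the leaked combination reproduces L exactly.
print(np.max(np.abs((dl + 0.05 * dr) - L)))
```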

  10. 3D colour visualization of label images using volume rendering techniques.

    PubMed

    Vandenhouten, R; Kottenhoff, R; Grebe, R

    1995-01-01

    Volume rendering methods for the visualization of 3D image data sets have been developed and collected in a C library. The core algorithm consists of a perspective ray casting technique for a natural and realistic view of the 3D scene. New edge operator shading methods are employed for a fast and information preserving representation of surfaces. Control parameters of the algorithm can be tuned to have either smoothed surfaces or a very detailed rendering of the geometrical structure. Different objects can be distinguished by different colours. Shadow ray tracing has been implemented to improve the realistic impression of the 3D image. For a simultaneous representation of objects in different depths, hiding each other, two types of transparency mode are used (wireframe and glass transparency). Single objects or groups of objects can be excluded from the rendering (peeling). Three orthogonal cutting planes or one arbitrarily placed cutting plane can be applied to the rendered objects in order to get additional information about inner structures, contours, and relative positions. PMID:8569308

  11. A Primitive-Based 3D Object Recognition System

    NASA Astrophysics Data System (ADS)

    Dhawan, Atam P.

    1988-08-01

    A knowledge-based 3D object recognition system has been developed. The system uses hierarchical structural, geometrical and relational knowledge in matching the 3D object models to the image data through pre-defined primitives. The primitives we have selected to begin with are 3D boxes, cylinders, and spheres. These primitives, as viewed from different angles covering the complete 3D rotation range, are stored in a "Primitive-Viewing Knowledge-Base" in the form of hierarchical structural and relational graphs. The knowledge-based system then hypothesizes about the viewing angle and decomposes the segmented image data into valid primitives. A rough 3D structural and relational description is made on the basis of recognized 3D primitives. This description is then used in the detailed high-level frame-based structural and relational matching. The system has several expert and knowledge-based systems working in both stand-alone and cooperative modes to provide multi-level processing. This multi-level processing utilizes both bottom-up (data-driven) and top-down (model-driven) approaches in order to acquire sufficient knowledge to accept or reject any hypothesis for matching or recognizing the objects in the given image.

  12. Practical pseudo-3D registration for large tomographic images

    NASA Astrophysics Data System (ADS)

    Liu, Xuan; Laperre, Kjell; Sasov, Alexander

    2014-09-01

    Image registration is a powerful tool in various tomographic applications. Our main focus is on microCT applications in which samples/animals can be scanned multiple times under different conditions or at different time points. For this purpose, a registration tool capable of handling fairly large volumes has been developed, using a novel pseudo-3D method to achieve fast and interactive registration with simultaneous 3D visualization. To reduce computation complexity in 3D registration, we decompose it into several 2D registrations, which are applied to the orthogonal views (transaxial, sagittal and coronal) sequentially and iteratively. After registration in each view, the next view is retrieved with the new transformation matrix for registration. This reduces the computation complexity significantly. For rigid transform, we only need to search for 3 parameters (2 shifts, 1 rotation) in each of the 3 orthogonal views instead of 6 (3 shifts, 3 rotations) for full 3D volume. In addition, the amount of voxels involved is also significantly reduced. For the proposed pseudo-3D method, image-based registration is employed, with Sum of Square Difference (SSD) as the similarity measure. The searching engine is Powell's conjugate direction method. In this paper, only rigid transform is used. However, it can be extended to affine transform by adding scaling and possibly shearing to the transform model. We have noticed that more information can be used in the 2D registration if Maximum Intensity Projections (MIP) or Parallel Projections (PP) is used instead of the orthogonal views. Also, other similarity measures, such as covariance or mutual information, can be easily incorporated. The initial evaluation on microCT data shows very promising results. Two application examples are shown: dental samples before and after treatment and structural changes in materials before and after compression. Evaluation on registration accuracy between pseudo-3D method and true 3D method has
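    One of the per-view 2D registration sub-problems described above (SSD similarity, Powell search over two shifts and one rotation) can be sketched as follows. This is a hedged illustration using scipy rather than the authors' implementation; the image sizes, interpolation order and optimizer settings are assumptions.

```python
# Hedged sketch of one per-view 2D rigid registration step as described above: search
# over (shift_y, shift_x, rotation) with Powell's method, scoring by sum of squared
# differences (SSD). This is illustrative, not the authors' implementation.
import numpy as np
from scipy import ndimage, optimize

def ssd(params, moving, fixed):
    dy, dx, angle_deg = params
    warped = ndimage.rotate(moving, angle_deg, reshape=False, order=1)
    warped = ndimage.shift(warped, (dy, dx), order=1)
    return float(np.sum((warped - fixed) ** 2))

def register_2d(moving, fixed):
    result = optimize.minimize(ssd, x0=np.zeros(3), args=(moving, fixed),
                               method="Powell")
    return result.x                      # (dy, dx, rotation in degrees)

rng = np.random.default_rng(0)
fixed = ndimage.gaussian_filter(rng.random((64, 64)), 3)
moving = ndimage.shift(ndimage.rotate(fixed, -2.0, reshape=False, order=1),
                       (-1.5, 2.0), order=1)
print(register_2d(moving, fixed))        # should be near (1.5, -2.0, 2.0)
```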

  13. Microtomography with 3-D visualization

    SciTech Connect

    Peskin, A.; Andrews, B.; Dowd, B.; Jones, K.; Siddons, P.

    1996-11-01

    The facility has been developed for producing high quality tomographs of order one micrometer resolution. Three dimensional volumes derived from groups of adjacent tomographic slices are then viewed and navigated in a stereographic viewing facility. This facility is being applied to problems in geological evaluation of oil reservoir rock, medical imaging, protein chemistry, and CADCAM.

  14. A 3-D Look at Wind-Sculpted Ridges in Aeolis

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Layers of bedrock etched by wind to form sharp, elongated ridges known to geomorphologists as yardangs are commonplace in the southern Elysium Planitia/southern Amazonis region of Mars. The ridges shown in this 3-D composite of two overlapping Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) images occur in the eastern Aeolis region of southern Elysium Planitia near 2.3°S, 206.8°W. To view the picture in stereo, you need red-blue 3-D glasses (red filter over the left eye, blue over the right). For wind to erode bedrock into the patterns seen here, the rock usually must consist of something that is fine-grained and of nearly uniform grain size, such as sand. It must also be relatively easy to erode. For decades, most Mars researchers have interpreted these materials to be eroded deposits of volcanic ash. Nothing in the new picture shown here can either support or refute this earlier speculation. The entire area is mantled by light-toned dust. Small landslides within this thin dust layer form dark streaks on some of the steeper slopes in this picture (for more examples and explanations for these streaks, see previous web pages listed below).

    The stereo (3-D) picture was compiled using an off-nadir view taken by the MOC during the Aerobrake-1 subphase of the mission in January 1998 with a nadir (straight-down-looking) view acquired in October 2000. The total area shown is about 6.7 kilometers (4.2 miles) wide by 2.5 kilometers (1.5 miles) high and is illuminated by sunlight from the upper right. The relief in the stereo image is quite exaggerated: the ridges are between about 50 and 100 meters (about 165-330 feet) high. North is toward the lower right.

  15. Real-time, 3-D ultrasound with multiple transducer arrays.

    PubMed

    Fronheiser, Matthew P; Light, Edward D; Idriss, Salim F; Wolf, Patrick D; Smith, Stephen W

    2006-01-01

    Modifications were made to a commercial real-time, three-dimensional (3-D) ultrasound system for near simultaneous 3-D scanning with two matrix array transducers. As a first illustration, a transducer cable assembly was modified to incorporate two independent, 3-D intra-cardiac echo catheters, a 7 Fr (2.3 mm O.D.) side scanning catheter and a 14 Fr (4.7 mm O.D) forward viewing catheter with accessory port, each catheter using 85 channels operating at 5 MHz. For applications in treatment of atrial fibrillation, the goal is to place the sideviewing catheter within the coronary sinus to view the whole left atrium, including a pulmonary vein. Meanwhile, the forward-viewing catheter inserted within the left atrium is directed toward the ostium of a pulmonary vein for therapy using the integrated accessory port. Using preloaded, phasing data, the scanner switches between catheters automatically, at the push of a button, with a delay of about 1 second, so that the clinician can view the therapy catheter with the coronary sinus catheter and vice versa. Preliminary imaging studies in a tissue phantom and in vivo show that our system successfully guided the forward-viewing catheter toward a target while being imaged with the sideviewing catheter. The forward-viewing catheter then was activated to monitor the target while we mimicked therapy delivery. In the future, the system will switch between 3-D probes on a line-by-line basis and display both volumes simultaneously. PMID:16471436

  16. Projection type transparent 3D display using active screen

    NASA Astrophysics Data System (ADS)

    Kamoshita, Hiroki; Yendo, Tomohiro

    2015-05-01

    Much equipment for enjoying 3D images, such as movie theaters and televisions, has been developed, so 3D video is now widely known as a familiar imaging technology. Displays that present 3D images include eyewear-based, naked-eye, and HMD types, which have been used for different applications and locations. However, transparent 3D displays have not been widely studied. If a large transparent 3D display were realized, it would be useful for displaying 3D images overlaid on real scenes in applications such as road signs, shop windows, and screens in conference rooms. A previous study proposed producing a transparent 3D display by using a special transparent screen and a number of projectors; however, for smooth motion parallax, many projectors are required. In this paper, we propose a display that has transparency and a large display area by time-multiplexing projected images from one or a small number of projectors onto an active screen. The active screen is composed of a number of vertically long small rotating mirrors. Stereoscopic viewing is realized by changing the projected image in synchronism with the scanning of the beam as the light is scanned across viewing directions. The display also has transparency, because the viewer can see through it when the mirrors are perpendicular to the viewer. We confirmed the validity of the proposed method by simulation.

  17. A clearer view of the insect brain—combining bleaching with standard whole-mount immunocytochemistry allows confocal imaging of pigment-covered brain areas for 3D reconstruction

    PubMed Central

    Stöckl, Anna L.; Heinze, Stanley

    2015-01-01

    In the study of insect neuroanatomy, three-dimensional (3D) reconstructions of neurons and neuropils have become a standard technique. As images have to be obtained from whole-mount brain preparations, pigmentation on the brain surface poses a serious challenge to imaging. In insects, this is a major problem in the first visual neuropil of the optic lobe, the lamina, which is obstructed by the pigment of the retina as well as by the pigmented fenestration layer. This has prevented inclusion of this major processing center of the insect visual system into most neuroanatomical brain atlases and hinders imaging of neurons within the lamina by confocal microscopy. It has recently been shown that hydrogen peroxide bleaching is compatible with immunohistochemical labeling in insect brains, and we therefore developed a simple technique for removal of pigments on the surface of insect brains by chemical bleaching. We show that our technique enables imaging of the pigment-obstructed regions of insect brains when combined with standard protocols for both anti-synapsin-labeled as well as neurobiotin-injected samples. This method can be combined with different fixation procedures, as well as different fluorophore excitation wavelengths without negative effects on staining quality. It can therefore serve as an effective addition to most standard histology protocols used in insect neuroanatomy. PMID:26441552

  18. A clearer view of the insect brain-combining bleaching with standard whole-mount immunocytochemistry allows confocal imaging of pigment-covered brain areas for 3D reconstruction.

    PubMed

    Stöckl, Anna L; Heinze, Stanley

    2015-01-01

    In the study of insect neuroanatomy, three-dimensional (3D) reconstructions of neurons and neuropils have become a standard technique. As images have to be obtained from whole-mount brain preparations, pigmentation on the brain surface poses a serious challenge to imaging. In insects, this is a major problem in the first visual neuropil of the optic lobe, the lamina, which is obstructed by the pigment of the retina as well as by the pigmented fenestration layer. This has prevented inclusion of this major processing center of the insect visual system into most neuroanatomical brain atlases and hinders imaging of neurons within the lamina by confocal microscopy. It has recently been shown that hydrogen peroxide bleaching is compatible with immunohistochemical labeling in insect brains, and we therefore developed a simple technique for removal of pigments on the surface of insect brains by chemical bleaching. We show that our technique enables imaging of the pigment-obstructed regions of insect brains when combined with standard protocols for both anti-synapsin-labeled as well as neurobiotin-injected samples. This method can be combined with different fixation procedures, as well as different fluorophore excitation wavelengths without negative effects on staining quality. It can therefore serve as an effective addition to most standard histology protocols used in insect neuroanatomy. PMID:26441552

  19. 3D micromanipulation at low numerical aperture with a single light beam: the focused-Bessel trap.

    PubMed

    Ayala, Yareni A; Arzola, Alejandro V; Volke-Sepúlveda, Karen

    2016-02-01

    Full three-dimensional (3D) manipulation of individual glass beads with radii in the range of 2-8 μm is experimentally demonstrated by using a single Bessel light beam focused through a low-numerical-aperture lens (NA=0.40). Although we have a weight-assisted trap with the beam propagating upward, we obtain a stable equilibrium position well away from the walls of the sample cell, and we are able to move the particle across the entire cell in three dimensions. A theoretical analysis for the optical field and trapping forces along the lateral and axial directions is presented for the focused-Bessel trap. This trap offers advantages for 3D manipulation, such as an extended working distance, a large field of view, and reduced aberrations. PMID:26907437

  20. 3D Medical Collaboration Technology to Enhance Emergency Healthcare

    PubMed Central

    Welch, Greg; Sonnenwald, Diane H; Fuchs, Henry; Cairns, Bruce; Mayer-Patel, Ketan; Söderholm, Hanna M.; Yang, Ruigang; State, Andrei; Towles, Herman; Ilie, Adrian; Ampalam, Manoj; Krishnan, Srinivas; Noel, Vincent; Noland, Michael; Manning, James E.

    2009-01-01

    Two-dimensional (2D) videoconferencing has been explored widely in the past 15–20 years to support collaboration in healthcare. Two issues that arise in most evaluations of 2D videoconferencing in telemedicine are the difficulty obtaining optimal camera views and poor depth perception. To address these problems, we are exploring the use of a small array of cameras to reconstruct dynamic three-dimensional (3D) views of a remote environment and of events taking place within. The 3D views could be sent across wired or wireless networks to remote healthcare professionals equipped with fixed displays or with mobile devices such as personal digital assistants (PDAs). The remote professionals’ viewpoints could be specified manually or automatically (continuously) via user head or PDA tracking, giving the remote viewers head-slaved or hand-slaved virtual cameras for monoscopic or stereoscopic viewing of the dynamic reconstructions. We call this idea remote 3D medical collaboration. In this article we motivate and explain the vision for 3D medical collaboration technology; we describe the relevant computer vision, computer graphics, display, and networking research; we present a proof-of-concept prototype system; and we present evaluation results supporting the general hypothesis that 3D remote medical collaboration technology could offer benefits over conventional 2D videoconferencing in emergency healthcare. PMID:19521951

  1. 3D annotation and manipulation of medical anatomical structures

    NASA Astrophysics Data System (ADS)

    Vitanovski, Dime; Schaller, Christian; Hahn, Dieter; Daum, Volker; Hornegger, Joachim

    2009-02-01

    Although medical scanners are rapidly moving towards a three-dimensional paradigm, the manipulation and annotation/labeling of the acquired data is still performed in a standard 2D environment. Editing and annotation of three-dimensional medical structures is currently a complex task and rather time-consuming, as it is carried out in 2D projections of the original object. A major problem in 2D annotation is the depth ambiguity, which requires 3D landmarks to be identified and localized in at least two of the cutting planes. Operating directly in a three-dimensional space enables the implicit consideration of the full 3D local context, which significantly increases accuracy and speed. A three-dimensional environment is also more natural, optimizing the user's comfort and acceptance. The 3D annotation environment requires a three-dimensional manipulation device and display. By means of two novel and advanced technologies, the Nintendo Wii Controller and the Philips 3D WoWvx display, we define an appropriate 3D annotation tool and a suitable 3D visualization monitor. We define a non-coplanar arrangement of four infrared LEDs with known and exact positions, which are tracked by the Wii and from which we compute the pose of the device by applying a standard pose estimation algorithm. The novel 3D renderer developed by Philips uses either the Z-value of a 3D volume, or it computes the depth information out of a 2D image, to provide a real 3D experience without the need for special glasses. Within this paper we present a new framework for manipulation and annotation of medical landmarks directly in a three-dimensional volume.
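    The pose-from-four-LEDs step can be illustrated with a standard perspective-n-point solver. The sketch below uses OpenCV's solvePnP; the LED layout, camera intrinsics and the known test pose are made-up placeholders rather than values from the paper.

```python
# Hedged sketch of recovering device pose from four non-coplanar IR LEDs with known
# geometry, using OpenCV's standard solvePnP (EPnP flag, which accepts four points).
# LED layout, camera intrinsics, and the test pose are made-up placeholders.
import numpy as np
import cv2

led_model = np.array([[0.00, 0.00, 0.00],     # LED positions on the device, in metres
                      [0.08, 0.00, 0.00],
                      [0.00, 0.06, 0.00],
                      [0.04, 0.03, 0.05]], dtype=np.float64)

camera_matrix = np.array([[1300.0, 0.0, 512.0],   # assumed IR camera intrinsics
                          [0.0, 1300.0, 384.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

# Make a self-consistent example: project the LEDs with a known pose, then recover it.
true_rvec = np.array([0.1, -0.2, 0.05])
true_tvec = np.array([0.02, -0.01, 0.60])
image_points, _ = cv2.projectPoints(led_model, true_rvec, true_tvec,
                                    camera_matrix, dist_coeffs)

ok, rvec, tvec = cv2.solvePnP(led_model, image_points, camera_matrix, dist_coeffs,
                              flags=cv2.SOLVEPNP_EPNP)
rotation, _ = cv2.Rodrigues(rvec)                 # device orientation as a 3x3 matrix
print(ok, tvec.ravel())                           # close to true_tvec
```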

  2. A Clean Adirondack (3-D)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This is a 3-D anaglyph showing a microscopic image taken of an area measuring 3 centimeters (1.2 inches) across on the rock called Adirondack. The image was taken at Gusev Crater on the 33rd day of the Mars Exploration Rover Spirit's journey (Feb. 5, 2004), after the rover used its rock abrasion tool brush to clean the surface of the rock. Dust, which was pushed off to the side during cleaning, can still be seen to the left and in low areas of the rock.
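
    For readers unfamiliar with anaglyphs, the sketch below shows the usual way a red-cyan anaglyph is composed from a left/right image pair: the red channel is taken from the left view and the green and blue channels from the right view. The file names are hypothetical; this is not the mission's actual image pipeline.

```python
import numpy as np
from PIL import Image

def make_anaglyph(left_path, right_path, out_path):
    """Compose a simple red-cyan anaglyph: red channel from the left image,
    green and blue channels from the right image."""
    left = np.asarray(Image.open(left_path).convert("RGB"))
    right = np.asarray(Image.open(right_path).convert("RGB"))
    anaglyph = right.copy()
    anaglyph[..., 0] = left[..., 0]   # red comes from the left eye's view
    Image.fromarray(anaglyph).save(out_path)

# Hypothetical left/right microscopic-imager frames of the abraded surface.
make_anaglyph("mi_left.png", "mi_right.png", "adirondack_anaglyph.png")
```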

  3. Making Inexpensive 3-D Models

    NASA Astrophysics Data System (ADS)

    Manos, Harry

    2016-03-01

    Visual aids are important to student learning, and they help make the teacher's job easier. Keeping with the TPT theme of "The Art, Craft, and Science of Physics Teaching," the purpose of this article is to show how teachers, lacking equipment and funds, can construct a durable 3-D model reference frame and a model gravity well tailored to specific class lessons. Most of the supplies are readily available in the home or at school: rubbing alcohol, a rag, two colors of spray paint, art brushes, and masking tape. The cost of these supplies, if you don't have them, is less than $20.

  4. Vacant Lander in 3-D

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D image captured by the Mars Exploration Rover Opportunity's rear hazard-identification camera shows the now-empty lander that carried the rover 283 million miles to Meridiani Planum, Mars. Engineers received confirmation that Opportunity's six wheels successfully rolled off the lander and onto martian soil at 3:01 a.m. PST, January 31, 2004, on the seventh martian day, or sol, of the mission. The rover is approximately 1 meter (3 feet) in front of the lander, facing north.

  5. 10. Interior view of communications compartment. View toward front of ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    10. Interior view of communications compartment. View toward front of aircraft. - Offutt Air Force Base, Looking Glass Airborne Command Post, Looking Glass Aircraft, On Operational Apron covering northeast half of Project Looking Glass Historic District, Bellevue, Sarpy County, NE

  6. 11. Interior view of communications compartment. View toward rear of ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    11. Interior view of communications compartment. View toward rear of aircraft. - Offutt Air Force Base, Looking Glass Airborne Command Post, Looking Glass Aircraft, On Operational Apron covering northeast half of Project Looking Glass Historic District, Bellevue, Sarpy County, NE

  7. 9. Interior view of electronics compartment. View toward rear of ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    9. Interior view of electronics compartment. View toward rear of aircraft. - Offutt Air Force Base, Looking Glass Airborne Command Post, Looking Glass Aircraft, On Operational Apron covering northeast half of Project Looking Glass Historic District, Bellevue, Sarpy County, NE

  8. Positional Awareness Map 3D (PAM3D)

    NASA Technical Reports Server (NTRS)

    Hoffman, Monica; Allen, Earl L.; Yount, John W.; Norcross, April Louise

    2012-01-01

    The Western Aeronautical Test Range of the National Aeronautics and Space Administration's Dryden Flight Research Center needed to address the aging software and hardware of its current situational awareness display application, the Global Real-Time Interactive Map (GRIM). GRIM was initially developed in the late 1980s and executes on older PC architectures using a Linux operating system that is no longer supported. Additionally, the software is difficult to maintain due to its complexity and loss of developer knowledge. It was decided that a replacement application must be developed or acquired in the near future. The replacement must provide the functionality of the original system, the ability to monitor test flight vehicles in real time, and add improvements such as high-resolution imagery and true 3-dimensional capability. This paper will discuss the process of determining the best approach to replace GRIM, and the functionality and capabilities of the first release of the Positional Awareness Map 3D.

  9. 3D acoustic atmospheric tomography

    NASA Astrophysics Data System (ADS)

    Rogers, Kevin; Finn, Anthony

    2014-10-01

    This paper presents a method for tomographically reconstructing spatially varying 3D atmospheric temperature profiles and wind velocity fields. Measurements of the acoustic signature recorded onboard a small Unmanned Aerial Vehicle (UAV) are compared to ground-based observations of the same signals. The frequency-shifted signal variations are then used to estimate the acoustic propagation delays between the UAV and the ground microphones, which are affected by the atmospheric temperature and wind speed vectors along each sound ray path. The wind and temperature profiles are modelled as the weighted sum of Radial Basis Functions (RBFs), which also allow local meteorological measurements made at the UAV and ground receivers to supplement any acoustic observations. Tomography is used to provide a full 3D reconstruction/visualisation of the observed atmosphere. The technique offers observational mobility under direct user control and the capacity to monitor hazardous atmospheric environments otherwise not justifiable on the basis of cost or risk. This paper summarises the tomographic technique and reports on the results of simulations and initial field trials. The technique has practical applications for atmospheric research, sound propagation studies, boundary layer meteorology, air pollution measurements, analysis of wind shear, and wind farm surveys.
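
    A minimal sketch of the inversion idea, under strong simplifying assumptions (straight rays, Gaussian RBFs, synthetic delays): each propagation-delay perturbation is modelled as the path integral of an RBF-weighted slowness perturbation, and the RBF weights are recovered by least squares. All geometry and values below are placeholders, not the authors' implementation; a practical solve would also add regularization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Gaussian RBF centres covering a 500 m x 500 m x 300 m volume.
centres = rng.uniform(0, 500, size=(20, 3))
width = 150.0

def rbf(points, centre):
    """Gaussian radial basis function evaluated at an array of sample points."""
    return np.exp(-np.sum((points - centre) ** 2, axis=-1) / width**2)

def ray_row(src, rx, n=50):
    """Integrate every RBF along a straight ray from source to receiver,
    producing one row of the linear delay system (trapezoidal rule)."""
    pts = np.linspace(src, rx, n)                    # (n, 3) samples along the ray
    seg = np.linalg.norm(rx - src) / (n - 1)
    return np.array([np.trapz(rbf(pts, c), dx=seg) for c in centres])

# Hypothetical UAV source positions (aloft) and ground microphone positions.
sources = rng.uniform(0, 500, size=(30, 3)); sources[:, 2] = 300.0
receivers = rng.uniform(0, 500, size=(8, 3)); receivers[:, 2] = 0.0

A = np.array([ray_row(s, r) for s in sources for r in receivers])
true_w = rng.normal(0, 1e-6, size=len(centres))      # synthetic slowness-perturbation weights
d = A @ true_w + rng.normal(0, 1e-9, size=len(A))    # synthetic delay data plus noise

# Least-squares estimate of the RBF weights (a damped solve would be used in practice).
w_est, *_ = np.linalg.lstsq(A, d, rcond=None)
```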

  10. 3D Printed Bionic Ears

    PubMed Central

    Mannoor, Manu S.; Jiang, Ziwen; James, Teena; Kong, Yong Lin; Malatesta, Karen A.; Soboyejo, Winston O.; Verma, Naveen; Gracias, David H.; McAlpine, Michael C.

    2013-01-01

    The ability to three-dimensionally interweave biological tissue with functional electronics could enable the creation of bionic organs possessing enhanced functionalities over their human counterparts. Conventional electronic devices are inherently two-dimensional, preventing seamless multidimensional integration with synthetic biology, as the processes and materials are very different. Here, we present a novel strategy for overcoming these difficulties via additive manufacturing of biological cells with structural and nanoparticle derived electronic elements. As a proof of concept, we generated a bionic ear via 3D printing of a cell-seeded hydrogel matrix in the precise anatomic geometry of a human ear, along with an intertwined conducting polymer consisting of infused silver nanoparticles. This allowed for in vitro culturing of cartilage tissue around an inductive coil antenna in the ear, which subsequently enables readout of inductively-coupled signals from cochlea-shaped electrodes. The printed ear exhibits enhanced auditory sensing for radio frequency reception, and complementary left and right ears can listen to stereo audio music. Overall, our approach suggests a means to intricately merge biologic and nanoelectronic functionalities via 3D printing. PMID:23635097

  11. 3-D Relativistic MHD Simulations

    NASA Astrophysics Data System (ADS)

    Nishikawa, K.-I.; Frank, J.; Christodoulou, D. M.; Koide, S.; Sakai, J.-I.; Sol, H.; Mutel, R. L.

    1998-12-01

    We present 3-D numerical simulations of moderately hot, supersonic jets propagating initially along or obliquely to the field lines of a denser magnetized background medium with Lorentz factors of W=4.56 and evolving in a four-dimensional spacetime. The new results are understood as follows: relativistic simulations have consistently shown that these jets are effectively heavy, so they do not suffer substantial momentum losses and are not decelerated as efficiently as their nonrelativistic counterparts. In addition, the ambient magnetic field, however strong, can be pushed aside with relative ease by the beam, provided that the degrees of freedom associated with all three spatial dimensions are followed self-consistently in the simulations. This effect is analogous to pushing Japanese ``noren'' or vertical Venetian blinds out of the way while the slats are allowed to bend in 3-D space rather than as a 2-D slab structure. We also simulate jets with more realistic injection conditions, including a helical magnetic field and perturbed density, velocity, and internal energy, which are expected to arise in the process of jet generation. Three possible explanations for the observed variability are (i) tidal disruption of a star falling into the black hole, (ii) instabilities in the relativistic accretion disk, and (iii) jet-related processes. New results will be reported at the meeting.

  12. 3D printed bionic ears.

    PubMed

    Mannoor, Manu S; Jiang, Ziwen; James, Teena; Kong, Yong Lin; Malatesta, Karen A; Soboyejo, Winston O; Verma, Naveen; Gracias, David H; McAlpine, Michael C

    2013-06-12

    The ability to three-dimensionally interweave biological tissue with functional electronics could enable the creation of bionic organs possessing enhanced functionalities over their human counterparts. Conventional electronic devices are inherently two-dimensional, preventing seamless multidimensional integration with synthetic biology, as the processes and materials are very different. Here, we present a novel strategy for overcoming these difficulties via additive manufacturing of biological cells with structural and nanoparticle derived electronic elements. As a proof of concept, we generated a bionic ear via 3D printing of a cell-seeded hydrogel matrix in the anatomic geometry of a human ear, along with an intertwined conducting polymer consisting of infused silver nanoparticles. This allowed for in vitro culturing of cartilage tissue around an inductive coil antenna in the ear, which subsequently enables readout of inductively-coupled signals from cochlea-shaped electrodes. The printed ear exhibits enhanced auditory sensing for radio frequency reception, and complementary left and right ears can listen to stereo audio music. Overall, our approach suggests a means to intricately merge biologic and nanoelectronic functionalities via 3D printing. PMID:23635097

  13. 3D Ion Temperature Reconstruction

    NASA Astrophysics Data System (ADS)

    Tanabe, Hiroshi; You, Setthivoine; Balandin, Alexander; Inomoto, Michiaki; Ono, Yasushi

    2009-11-01

    The TS-4 experiment at the University of Tokyo collides two spheromaks to form a single high-beta compact toroid. Magnetic reconnection during the merging process heats and accelerates the plasma in the toroidal and poloidal directions. The reconnection region has a complex 3D topology determined by the pitch of the spheromak magnetic fields at the merging plane. A pair of multichord passive spectroscopic diagnostics has been established to measure the ion temperature and velocity in the reconnection volume. One setup measures spectral lines across a poloidal plane, retrieving velocity and temperature from Abel inversion. The other, novel setup records spectral lines across another section of the plasma and reconstructs velocity and temperature from 3D vector and 2D scalar tomography techniques. The magnetic field linking both measurement planes is determined from in situ magnetic probe arrays. The ion temperature is then estimated within the volume between the two measurement planes and at the reconnection region. The measurement is repeated over several reproducible discharges to follow the heating and acceleration process during the merging reconnection.
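
    The Abel-inversion step mentioned above can be sketched numerically as follows, assuming an axisymmetric emission profile sampled on evenly spaced chords; this is a generic textbook implementation, not the TS-4 analysis code.

```python
import numpy as np

def abel_invert(F, y):
    """Numerically invert the Abel transform for an axisymmetric profile:
    f(r) = -(1/pi) * integral_r^R  F'(y) / sqrt(y^2 - r^2)  dy.
    F holds chord-integrated values sampled at impact parameters y (ascending)."""
    dFdy = np.gradient(F, y)
    f = np.zeros_like(F)
    for i, r in enumerate(y):
        yy, dd = y[i + 1:], dFdy[i + 1:]   # integrate strictly outside r to skip the singular point
        if len(yy) == 0:
            continue
        integrand = dd / np.sqrt(yy**2 - r**2)
        f[i] = -np.trapz(integrand, yy) / np.pi
    return f

# Synthetic test: a Gaussian emissivity whose forward Abel transform is known analytically.
y = np.linspace(0, 1, 200)
sigma = 0.3
F = np.sqrt(2 * np.pi) * sigma * np.exp(-y**2 / (2 * sigma**2))  # Abel transform of exp(-r^2/(2 sigma^2))
f_rec = abel_invert(F, y)   # should approximate exp(-r^2 / (2 sigma^2))
```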

  14. Tissue in Cube: In Vitro 3D Culturing Platform with Hybrid Gel Cubes for Multidirectional Observations.

    PubMed

    Hagiwara, Masaya; Kawahara, Tomohiro; Nobata, Rina

    2016-07-01

    An in vitro 3D culturing platform enabling multidirectional observations of 3D biosamples is presented. The 3D structure of biosamples can be recognized without fluorescence. The cubic platform employs two types of hydrogels that are compatible with conventional culture dishes or well plates, facilitating growth in culture, ease of handling, and viewing at multiple angles. PMID:27128576

  15. Video coding and transmission standards for 3D television — a survey

    NASA Astrophysics Data System (ADS)

    Buchowicz, A.

    2013-03-01

    Emerging 3D television systems require effective techniques for the transmission and storage of data representing a 3D scene. Scene representations based on multiple video sequences, or on multiple views plus depth maps, are especially important since they can be processed with existing video technologies. A review of the relevant video coding and transmission techniques is presented in this paper.
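
    One reason the "multiple views plus depth maps" representation is attractive is that intermediate views can be synthesized by depth-image-based rendering. The sketch below shows a very simplified forward warp of one texture/depth pair to a nearby viewpoint; the depth coding, focal length, and baseline are assumed values, and real systems add hole filling and blending.

```python
import numpy as np

def synthesize_view(color, depth, focal_px, baseline_m, znear=0.5, zfar=50.0):
    """Render a nearby virtual view from one texture + 8-bit depth map by
    shifting each pixel horizontally by its disparity (simple forward warp;
    occluded/unfilled pixels are left black)."""
    h, w, _ = color.shape
    z = znear + (depth.astype(np.float32) / 255.0) * (zfar - znear)   # assumed linear depth coding
    disparity = (focal_px * baseline_m / z).astype(int)               # horizontal shift in pixels
    out = np.zeros_like(color)
    cols = np.arange(w)
    for v in range(h):
        u_new = cols + disparity[v]
        valid = (u_new >= 0) & (u_new < w)
        out[v, u_new[valid]] = color[v, cols[valid]]
    return out
```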

  16. LOTT RANCH 3D PROJECT

    SciTech Connect

    Larry Lawrence; Bruce Miller

    2004-09-01

    The Lott Ranch 3D seismic prospect located in Garza County, Texas is a project initiated in September of 1991 by the J.M. Huber Corp., a petroleum exploration and production company. By today's standards the 126-square-mile project does not seem monumental; however, at the time it was conceived it was the most intensive land 3D project ever attempted. Acquisition began in September of 1991 utilizing GEO-SEISMIC, INC., a seismic data contractor. The field parameters were selected by J.M. Huber and were of a radical design. The recording instruments used were GeoCor IV amplifiers designed by Geosystems Inc., which record the data in signed-bit format. It would not have been practical, if not impossible, to process the entire raw volume with the tools available at that time. The end result was a dataset that was thought to have little utility due to difficulties in processing the field data. In 1997, Yates Energy Corp., located in Roswell, New Mexico, formed a partnership to further develop the project. Through discussions and meetings with Pinnacle Seismic, it was determined that the original Lott Ranch 3D volume could be vastly improved by reprocessing. Pinnacle Seismic had shown the viability of improving field-summed signed-bit data on smaller 2D and 3D projects. Yates contracted Pinnacle Seismic Ltd. to perform the reprocessing. This project was initiated with high resolution being a priority. Much of the potential resolution was lost through the initial summing of the field data. Modern computers that are now being utilized have tremendous speed and storage capacities that were cost prohibitive when this data was initially processed. Software updates and capabilities offer a variety of quality-control and statics-resolution options, which are pertinent to the Lott Ranch project. The reprocessing effort was very successful. The resulting processed dataset was then interpreted using modern PC-based interpretation and mapping software. Production data, log data

  17. 3D Integration for Superconducting Qubits

    NASA Astrophysics Data System (ADS)

    Rosenberg, Danna; Yost, Donna-Ruth; Das, Rabindra; Hover, David; Racz, Livia; Weber, Steven; Yoder, Jonilyn; Kerman, Andrew; Oliver, William

    As the field of superconducting quantum computing advances from the few-qubit stage to large-scale fault-tolerant devices, scalability requirements will necessitate the use of standard 3D packaging and integration processes. While the field of 3D integration is well-developed, relatively little work has been performed to determine the compatibility of the associated processes with superconducting qubits. Qubit coherence time could potentially be affected by required process steps or by the proximity of an interposer that could introduce extra sources of charge or flux noise. As a first step towards a large-scale quantum information processor, we have used a flip-chip process to bond a chip with flux qubits to an interposer containing structures for qubit readout and control. We will present data on the effect of the presence of the interposer on qubit coherence time for various qubit-chip-interposer spacings and discuss the implications for integrated multi-qubit devices. This research was funded by the ODNI and IARPA under Air Force Contract No. FA8721-05-C-0002. The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies or endorsements, either expressed or implied, of ODNI, IARPA, or the US Government.

  18. 3D structure and nuclear targets

    NASA Astrophysics Data System (ADS)

    Dupré, Raphaël; Scopetta, Sergio

    2016-06-01

    Recent experimental and theoretical ideas are laying the ground for a new era in the knowledge of the parton structure of nuclei. We report on two promising directions beyond inclusive deep inelastic scattering experiments, aimed at, among other goals, unveiling the three-dimensional structure of the bound nucleon. The 3D structure in coordinate space can be accessed through deep exclusive processes, whose non-perturbative content is parametrized in terms of generalized parton distributions. In this way the distribution of partons in the transverse plane will be obtained, providing a pictorial view of the realization of the European Muon Collaboration effect. In particular, we show how, through the generalized parton distribution framework, non-nucleonic degrees of freedom in nuclei can be unveiled. Analogously, the momentum space 3D structure can be accessed by studying transverse-momentum-dependent parton distributions in semi-inclusive deep inelastic scattering processes. The status of measurements is also summarized, in particular novel coincidence measurements at high-luminosity facilities, such as Jefferson Laboratory. Finally the prospects for the next years at future facilities, such as the 12 GeV Jefferson Laboratory and the Electron Ion Collider, are presented.

  19. Virtual VMASC: A 3D Game Environment

    NASA Technical Reports Server (NTRS)

    Manepalli, Suchitra; Shen, Yuzhong; Garcia, Hector M.; Lawsure, Kaleen

    2010-01-01

    The advantages of creating interactive 3D simulations that allow viewing, exploring, and interacting with land improvements, such as buildings, in digital form are manifold and range from allowing individuals from anywhere in the world to explore those virtual land improvements online, to training military personnel in dealing with war-time environments, and to making those land improvements available in virtual worlds such as Second Life. While we haven't fully explored the true potential of such simulations, we have identified a requirement within our organization to use simulations like those to replace our front-desk personnel and allow visitors to query, navigate, and communicate virtually with various entities within the building. We implemented the Virtual VMASC 3D simulation of the Virginia Modeling Analysis and Simulation Center (VMASC) office building to not only meet our front-desk requirement but also to evaluate the effort required in designing such a simulation and, thereby, leverage the experience we gained in future projects of this kind. This paper describes the goals we set for our implementation, the software approach taken, the modeling contribution made, and the technologies used such as XNA Game Studio, .NET framework, Autodesk software packages, and, finally, the applicability of our implementation on a variety of architectures including Xbox 360 and PC. This paper also summarizes the result of our evaluation and the lessons learned from our effort.

  20. Tomographic system for 3D temperature reconstruction

    NASA Astrophysics Data System (ADS)

    Antos, Martin; Malina, Radomir

    2003-11-01

    The novel laboratory system for optical tomography is used to obtain the three-dimensional temperature field around a heated element. Mach-Zehnder holographic interferometers with diffusive illumination of the phase object provide the possibility to scan multidirectional holographic interferograms over a range of viewing angles from 0 deg to 108 deg. These interferograms form the input data for computer tomography of the 3D distribution of the refractive-index variation, which characterizes the physical state of the studied medium. The configuration of the system allows automatic projection scanning of the studied phase object. The computer calculates the wavefront deformation for each projection, making use of different Fourier-transform and phase-sampling evaluation methods. The experimental set-up is presented together with experimental results.
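
    The Fourier-transform evaluation mentioned above can be sketched in one dimension along the lines of the classic carrier-fringe method: isolate the carrier sideband in the spectrum, shift it to baseband, and take the phase of the resulting analytic signal. The carrier frequency and window width below are assumptions for a synthetic interferogram, not the laboratory system's parameters.

```python
import numpy as np

def fringe_phase(intensity, carrier_freq, dx=1.0):
    """Extract the wrapped phase from a 1D fringe scan by the Fourier-transform
    method: keep only the +carrier sideband, inverse FFT, remove the carrier,
    and take the argument of the analytic signal."""
    n = len(intensity)
    spectrum = np.fft.fft(intensity - intensity.mean())
    freqs = np.fft.fftfreq(n, d=dx)
    window = np.abs(freqs - carrier_freq) < carrier_freq / 2   # band-pass around the carrier
    analytic = np.fft.ifft(spectrum * window)
    return np.angle(analytic * np.exp(-2j * np.pi * carrier_freq * dx * np.arange(n)))

# Synthetic interferogram: carrier fringes plus a smooth phase term to recover.
x = np.arange(512)
true_phase = 2.0 * np.sin(2 * np.pi * x / 400)
carrier = 0.1                                   # fringes per sample (assumed)
I = 1.0 + 0.8 * np.cos(2 * np.pi * carrier * x + true_phase)
recovered = np.unwrap(fringe_phase(I, carrier))  # follows true_phase up to a constant offset
```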

  1. Glasses-free large size high-resolution three-dimensional display based on the projector array

    NASA Astrophysics Data System (ADS)

    Sang, Xinzhu; Wang, Peng; Yu, Xunbo; Zhao, Tianqi; Gao, Xing; Xing, Shujun; Yu, Chongxiu; Xu, Daxiong

    2014-11-01

    Providing smooth motion parallax for a natural three-dimensional (3D) display similar to real life normally requires a huge number of views and therefore a huge amount of spatial information. To reduce the requirements on display devices and processing time, however, only the minimum 3D information needed by the eyes should be used. For a 3D display with smooth motion parallax similar to a holographic stereogram, the size of the virtual viewing slit should be smaller than the pupil of the eye at the largest viewing distance. To increase the resolution, two glasses-free 3D display systems, one rear-projection and one front-projection, are presented based on space multiplexing with a micro-projector array and specially designed 3D diffuse screens with sizes above 1.8 m × 1.2 m. The displayed clear depth is larger than 1.5 m. The flexibility of digitized recording and reconstruction based on the 3D diffuse screen relieves the limitations of conventional 3D display technologies and can realize a fully continuous, natural 3D display. In the display system, aberrations are well suppressed and low crosstalk is achieved.
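
    A back-of-the-envelope consequence of the viewing-slit criterion (not a calculation from the paper) is a lower bound on the number of views: if a viewing zone of assumed width is divided evenly among the views at the farthest viewing distance, each slit must still be narrower than the eye pupil.

```python
import math

# Assumed numbers, for illustration only.
zone_width_m = 1.5      # width of the viewing zone at the farthest viewing distance
pupil_m = 0.005         # ~5 mm eye pupil diameter
min_views = math.ceil(zone_width_m / pupil_m)   # each slit = zone_width / N < pupil
print(f"at least {min_views} views needed")     # 300 views for these assumed numbers
```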

  2. 3D Printing of Graphene Aerogels.

    PubMed

    Zhang, Qiangqiang; Zhang, Feng; Medarametla, Sai Pradeep; Li, Hui; Zhou, Chi; Lin, Dong

    2016-04-01

    3D printing of a graphene aerogel with true 3D overhang structures is highlighted. The aerogel is fabricated by combining drop-on-demand 3D printing and freeze casting. The water-based GO ink is ejected and freeze-cast into designed 3D structures. The lightweight (<10 mg cm^-3) 3D printed graphene aerogel exhibits superelasticity and high electrical conductivity. PMID:26861680

  3. Creating 3D realistic head: from two orthogonal photos to multiview face contents

    NASA Astrophysics Data System (ADS)

    Lin, Yuan; Lin, Qian; Tang, Feng; Tang, Liang; Lim, Sukhwan; Wang, Shengjin

    2011-03-01

    3D head models have many applications, such as virtual conferencing and 3D web games. Several existing web-based face-modeling solutions can create a 3D face model from one or two user-uploaded face images, but they are limited to generating a model of the face region only, and the accuracy of such reconstructions is very limited for side views as well as for hair regions. The goal of our research is to develop a framework for reconstructing a realistic 3D human head from two approximately orthogonal views. Our framework takes the two images and goes through segmentation, feature point detection, 3D bald-head reconstruction, 3D hair reconstruction, and texture mapping to create a 3D head model. The main contribution of the paper is that the processing steps are applied to both the face region and the hair region.

  4. 3D Elastic Wavefield Tomography

    NASA Astrophysics Data System (ADS)

    Guasch, L.; Warner, M.; Stekl, I.; Umpleby, A.; Shah, N.

    2010-12-01

    Wavefield tomography, or waveform inversion, aims to extract the maximum information from seismic data by matching, trace by trace, the response of the solid earth to seismic waves using numerical modelling tools. Its first formulation dates from the early 1980s, when Albert Tarantola developed a solid theoretical basis that is still used today with little change. Due to computational limitations, the application of the method to 3D problems was unaffordable until a few years ago, and then only under the acoustic approximation. Although acoustic wavefield tomography is widely used, a complete solution of the seismic inversion problem requires that we account properly for the physics of wave propagation, and so must include elastic effects. We have developed a 3D tomographic wavefield inversion code that incorporates the full elastic wave equation. The bottleneck of the different implementations is the forward modelling algorithm that generates the synthetic data to be compared with the field seismograms, as well as the backpropagation of the residuals needed to form the update direction for the model parameters. Furthermore, one or two extra modelling runs are needed in order to calculate the step length. Our approach uses an explicit time-stepping finite-difference scheme that is 4th order in space and 2nd order in time, a 3D version of the one developed by Jean Virieux in 1986. We chose the time domain because an explicit time scheme is much less demanding in terms of memory than its frequency-domain analogue, although the discussion of which domain is more efficient remains open. We calculate the parameter gradients for Vp and Vs by correlating the normal and shear stress wavefields respectively. A straightforward application would lead to the storage of the wavefield at all grid points at each time step. We tackled this problem using two different approaches. The first one makes better use of resources for small models of dimension equal
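
    As a simplified, hypothetical stand-in for the scheme described above, the sketch below advances the 1D acoustic wave equation with an explicit 2nd-order-in-time, 4th-order-in-space finite-difference stencil; the actual code is 3D, elastic, and staggered-grid.

```python
import numpy as np

# 1D acoustic wave propagation, 2nd-order time / 4th-order space finite differences.
nx, dx, dt, nt = 1000, 5.0, 1e-3, 1500
c = np.full(nx, 2000.0)                 # assumed constant velocity model (m/s); CFL ~ 0.4 here

u_prev = np.zeros(nx)
u_curr = np.zeros(nx)
u_curr[nx // 2] = 1.0                   # crude initial impulse source

for _ in range(nt):
    d2u = np.zeros(nx)
    # 4th-order central approximation of the second spatial derivative.
    d2u[2:-2] = (-u_curr[:-4] + 16 * u_curr[1:-3] - 30 * u_curr[2:-2]
                 + 16 * u_curr[3:-1] - u_curr[4:]) / (12 * dx**2)
    u_next = 2 * u_curr - u_prev + (c * dt) ** 2 * d2u   # 2nd-order explicit time update
    u_prev, u_curr = u_curr, u_next
```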

  5. Supernova Remnant in 3-D

    NASA Technical Reports Server (NTRS)

    2009-01-01

    wavelengths. Since the amount of the wavelength shift is related to the speed of motion, one can determine how fast the debris are moving in either direction. Because Cas A is the result of an explosion, the stellar debris is expanding radially outwards from the explosion center. Using simple geometry, the scientists were able to construct a 3-D model using all of this information. A program called 3-D Slicer modified for astronomical use by the Astronomical Medicine Project at Harvard University in Cambridge, Mass. was used to display and manipulate the 3-D model. Commercial software was then used to create the 3-D fly-through.

    The blue filaments defining the blast wave were not mapped using the Doppler effect because they emit a different kind of light, synchrotron radiation, which is emitted not at discrete wavelengths but in a broad continuum. The blue filaments are only a representation of the actual filaments observed at the blast wave.

    This visualization shows that there are two main components to this supernova remnant: a spherical component in the outer parts of the remnant and a flattened (disk-like) component in the inner region. The spherical component consists of the outer layer of the star that exploded, probably made of helium and carbon. These layers drove a spherical blast wave into the diffuse gas surrounding the star. The flattened component that astronomers were unable to map into 3-D prior to these Spitzer observations consists of the inner layers of the star. It is made from various heavier elements, not all shown in the visualization, such as oxygen, neon, silicon, sulphur, argon and iron.

    High-velocity plumes, or jets, of this material are shooting out from the explosion in the plane of the disk-like component mentioned above. Plumes of silicon appear in the northeast and southwest, while those of iron are seen in the southeast and north. These jets were already known and Doppler velocity measurements have been made for these

  6. Stereoscopic 3D video games and their effects on engagement

    NASA Astrophysics Data System (ADS)

    Hogue, Andrew; Kapralos, Bill; Zerebecki, Chris; Tawadrous, Mina; Stanfield, Brodie; Hogue, Urszula

    2012-03-01

    With television manufacturers developing low-cost stereoscopic 3D displays, a large number of consumers will undoubtedly have access to 3D-capable televisions at home. The availability of 3D technology places the onus on content creators to develop interesting and engaging content. While the technology of stereoscopic displays and content generation is well understood, there are many questions yet to be answered surrounding its effects on the viewer. The effects of stereoscopic display on passive viewers of film are known; however, video games are fundamentally different, since the viewer/player is actively (rather than passively) engaged with the content. Questions of how stereoscopic viewing affects interaction mechanics have previously been studied in the context of player performance, but very few studies have attempted to quantify the player experience to determine whether stereoscopic 3D has a positive or negative influence on overall engagement. In this paper we present a preliminary study of the effects stereoscopic 3D has on player engagement in video games. Participants played a video game in two conditions, traditional 2D and stereoscopic 3D, and their engagement was quantified using a previously validated self-reporting tool. The results suggest that S3D has a positive effect on immersion, presence, flow, and absorption.

  7. Characterizing targets and backgrounds for 3D laser radars

    NASA Astrophysics Data System (ADS)

    Steinvall, Ove K.; Larsson, Hakan; Gustafsson, Frank; Chevalier, Tomas R.; Persson, Asa; Klasen, Lena M.

    2004-12-01

    Exciting development is taking place in 3D sensing laser radars. Scanning systems are well established for mapping from airborne and ground sensors. 3D sensing focal plane arrays (FPAs) enable a full range and intensity image to be captured in one laser shot. Gated viewing systems also produce 3D target information. Many applications for 3D laser radars are found in robotics, rapid terrain visualization, augmented vision, reconnaissance and target recognition, weapon guidance including aim-point selection, and others. Network-centric warfare will demand high-resolution geo-data for a common description of the environment. At FOI we have a measurement program to collect data relevant for 3D laser radars using airborne and tripod-mounted equipment. Data collection spans from single-pixel waveform collection (1D), through 2D range-gated imaging, to full 3D imaging using scanning systems. This paper will describe 3D laser data from different campaigns, with emphasis on range distributions and reflection properties of targets and backgrounds under different seasonal conditions. Examples of the use of the data for system modeling, performance prediction and algorithm development will be given. Different metrics to characterize the data set will also be discussed.

  8. Automatic Reconstruction of Spacecraft 3D Shape from Imagery

    NASA Astrophysics Data System (ADS)

    Poelman, C.; Radtke, R.; Voorhees, H.

    We describe a system that computes the three-dimensional (3D) shape of a spacecraft from a sequence of uncalibrated, two-dimensional images. While the mathematics of multi-view geometry is well understood, building a system that accurately recovers 3D shape from real imagery remains an art. A novel aspect of our approach is the combination of algorithms from computer vision, photogrammetry, and computer graphics. We demonstrate our system by computing spacecraft models from imagery taken by the Air Force Research Laboratory's XSS-10 satellite and DARPA's Orbital Express satellite. Using feature tie points (each identified in two or more images), we compute the relative motion of each frame and the 3D location of each feature using iterative linear factorization followed by non-linear bundle adjustment. The "point cloud" that results from this traditional shape-from-motion approach is typically too sparse to generate a detailed 3D model. Therefore, we use the computed motion solution as input to a volumetric silhouette-carving algorithm, which constructs a solid 3D model based on viewpoint consistency with the image frames. The resulting voxel model is then converted to a facet-based surface representation and is texture-mapped, yielding realistic images from arbitrary viewpoints. We also illustrate other applications of the algorithm, including 3D mensuration and stereoscopic 3D movie generation.
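
    A minimal sketch of the silhouette-carving step, assuming the projection matrices from the motion solution and a boolean silhouette mask per view are already available: a voxel survives only if it projects inside the silhouette in every image. This is a generic illustration of the technique, not the system's implementation.

```python
import numpy as np

def carve(grid_pts, projections, silhouettes):
    """Keep only the voxels whose projections fall inside every silhouette.
    grid_pts:    (N, 3) voxel centres in world coordinates
    projections: list of 3x4 camera projection matrices (from the motion solution)
    silhouettes: list of boolean masks, one per view (True = inside the object)"""
    keep = np.ones(len(grid_pts), dtype=bool)
    homog = np.hstack([grid_pts, np.ones((len(grid_pts), 1))])
    for P, sil in zip(projections, silhouettes):
        uvw = homog @ P.T
        u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)   # pixel column of each voxel
        v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)   # pixel row of each voxel
        h, w = sil.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(grid_pts), dtype=bool)
        hit[inside] = sil[v[inside], u[inside]]
        keep &= hit                       # a voxel survives only if every view agrees
    return grid_pts[keep]
```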

  9. A systematized WYSIWYG pipeline for digital stereoscopic 3D filmmaking

    NASA Astrophysics Data System (ADS)

    Mueller, Robert; Ward, Chris; Hušák, Michal

    2008-02-01

    Digital tools are transforming stereoscopic 3D content creation and delivery, creating an opportunity for the broad acceptance and success of stereoscopic 3D films. Beginning in late 2005, a series of mostly CGI features has successfully introduced the public to this new generation of highly comfortable, artifact-free digital 3D. While the response has been decidedly favorable, a lack of high-quality live-action films could hinder long-term success. Live-action stereoscopic films have historically been more time-consuming, costly, and creatively limiting than 2D films - thus a need arises for a live-action 3D filmmaking process which minimizes such limitations. A unique 'systematized' what-you-see-is-what-you-get (WYSIWYG) pipeline is described which allows the efficient, intuitive and accurate capture and integration of 3D and 2D elements from multiple shoots and sources - both live-action and CGI. Throughout this pipeline, digital tools utilize a consistent algorithm to provide meaningful and accurate visual depth references with respect to the viewing audience in the target theater environment. This intuitive, visual approach introduces efficiency and creativity to the 3D filmmaking process by eliminating both the need for a 'mathematician mentality' of spreadsheets and calculators, as well as any trial and error guesswork, while enabling the most comfortable, 'pixel-perfect', artifact-free 3