Science.gov

Sample records for 3-d viewing glasses

  1. User experience while viewing stereoscopic 3D television

    PubMed Central

    Read, Jenny C.A.; Bohr, Iwo

    2014-01-01

    3D display technologies have been linked to visual discomfort and fatigue. In a lab-based study with a between-subjects design, 433 viewers aged from 4 to 82 years watched the same movie in either 2D or stereo 3D (S3D), and subjectively reported on a range of aspects of their viewing experience. Our results suggest that a minority of viewers, around 14%, experience adverse effects due to viewing S3D, mainly headache and eyestrain. A control experiment where participants viewed 2D content through 3D glasses suggests that around 8% may report adverse effects which are not due directly to viewing S3D, but instead are due to the glasses or to negative preconceptions about S3D (the 'nocebo effect'). Women were slightly more likely than men to report adverse effects with S3D. We could not detect any link between pre-existing eye conditions or low stereoacuity and the likelihood of experiencing adverse effects with S3D. Practitioner Summary: Stereoscopic 3D (S3D) has been linked to visual discomfort and fatigue. Viewers watched the same movie in either 2D or stereo 3D (between-subjects design). Around 14% reported effects such as headache and eyestrain linked to S3D itself, while 8% reported adverse effects attributable to 3D glasses or negative expectations. PMID:24874550

  2. User experience while viewing stereoscopic 3D television.

    PubMed

    Read, Jenny C A; Bohr, Iwo

    2014-01-01

    3D display technologies have been linked to visual discomfort and fatigue. In a lab-based study with a between-subjects design, 433 viewers aged from 4 to 82 years watched the same movie in either 2D or stereo 3D (S3D), and subjectively reported on a range of aspects of their viewing experience. Our results suggest that a minority of viewers, around 14%, experience adverse effects due to viewing S3D, mainly headache and eyestrain. A control experiment where participants viewed 2D content through 3D glasses suggests that around 8% may report adverse effects which are not due directly to viewing S3D, but instead are due to the glasses or to negative preconceptions about S3D (the 'nocebo effect'). Women were slightly more likely than men to report adverse effects with S3D. We could not detect any link between pre-existing eye conditions or low stereoacuity and the likelihood of experiencing adverse effects with S3D. PMID:24874550

  4. Effect of viewing distance on 3D fatigue caused by viewing mobile 3D content

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Lee, Dong-Su; Park, Min-Chul; Yano, Sumio

    2013-05-01

    With the advent of autostereoscopic display techniques and the increasing demand for smartphones, mobile TV markets have grown significantly. This rapid technical, economic, and social growth has encouraged 3D TV manufacturers to apply 3D rendering technology to mobile devices, giving people more opportunities to encounter 3D content anytime and anywhere. Even as mobile 3D technology drives the current market growth, one important consideration remains for consistent development and growth in the display market: human factors linked to mobile 3D viewing should be taken into account before mobile 3D technology is developed further. Many studies have investigated whether mobile 3D viewing causes undesirable biomedical effects such as motion sickness and visual fatigue, but few have examined the main factors that adversely affect human health. Viewing distance is considered one of the main factors in establishing optimized viewing environments from a viewer's point of view. Thus, in an effort to determine human-friendly viewing environments, this study investigates the effect of viewing distance on the human visual system during exposure to mobile 3D environments. By recording and analyzing brainwaves before and after watching mobile 3D content, we explore how viewing distance affects the viewing experience from physiological and psychological perspectives. The results obtained in this study are expected to provide viewing guidelines for viewers, help safeguard viewers against undesirable 3D effects, and support gradual progress toward human-friendly mobile 3D viewing.

  5. Spatial Visualization by Realistic 3D Views

    ERIC Educational Resources Information Center

    Yue, Jianping

    2008-01-01

    In this study, the popular Purdue Spatial Visualization Test-Visualization by Rotations (PSVT-R) in isometric drawings was recreated with CAD software that allows 3D solid modeling and rendering to provide more realistic pictorial views. Both the original and the modified PSVT-R tests were given to students and their scores on the two tests were…

  6. 3D View of Los Angeles

    NASA Technical Reports Server (NTRS)

    2002-01-01

    California's topography poses challenges for road builders. Northwest of Los Angeles, deformation of Earth's crust along the Pacific-North American crustal plate boundary has made transportation difficult. Direct connection between metropolitan Los Angeles (image lower left) and California's Central Valley (image top center) through the rugged terrain seen on the left side of this image was long avoided in favor of longer but easier paths. However, over the last century, three generations of roads have traversed this terrain. The first was 'The Ridge Route', a two-lane road, built in 1915, which followed long winding ridge lines that included 697 curves. The second, built in 1933, was to become four-lane U.S. Highway 99. It generally followed widened canyon bottoms. The third is the current eight lane Interstate 5 freeway, built in the 1960s, which is generally notched into hillsides, but also includes a stretch of several miles where the two directions of travel are widely separated and driving is 'on the left', a rarity in the United States. Such an unusual highway configuration was necessary in order to optimize the road grades for uphill and downhill traffic in this topographically challenging setting. This anaglyph was generated by first draping a Landsat satellite image over a preliminary topographic map from the Shuttle Radar Topography Mission (SRTM), then generating two differing perspectives, one for each eye. When viewed through special glasses, the result is a vertically exaggerated view of the Earth's surface in its full three dimensions. Anaglyph glasses cover the left eye with a red filter and cover the right eye with a blue filter. Landsat has been providing visible and infrared views of the Earth since 1972. SRTM elevation data matches the 30 meter resolution of most Landsat images and will substantially help in analyses of the large and growing Landsat image archive. The elevation data used in this image was acquired by the Shuttle Radar
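
    The anaglyph construction described here (left-eye view seen through the red filter, right-eye view seen through the blue filter) amounts to packing the two rendered perspectives into separate color channels. A minimal sketch, assuming NumPy and two grayscale renderings; the `make_anaglyph` helper is hypothetical and is not part of the SRTM processing chain:

```python
import numpy as np

def make_anaglyph(left, right):
    """Combine left- and right-eye grayscale views (H x W arrays)
    into an RGB red-cyan anaglyph: the left-eye view goes into the
    red channel, the right-eye view into the green and blue channels,
    so red/blue glasses route each view to the intended eye."""
    h, w = left.shape
    rgb = np.zeros((h, w, 3), dtype=left.dtype)
    rgb[..., 0] = left    # red filter passes the left-eye view
    rgb[..., 1] = right   # cyan (green + blue) passes the right-eye view
    rgb[..., 2] = right
    return rgb
```

In practice the two inputs would be the two differing perspectives rendered from the draped Landsat/SRTM scene.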

  7. 3D object retrieval using salient views

    PubMed Central

    Shapiro, Linda G.

    2013-01-01

    This paper presents a method for selecting salient 2D views to describe 3D objects for the purpose of retrieval. The views are obtained by first identifying salient points via a learning approach that uses shape characteristics of the 3D points (Atmosukarto and Shapiro in International workshop on structural, syntactic, and statistical pattern recognition, 2008; Atmosukarto and Shapiro in ACM multimedia information retrieval, 2008). The salient views are selected by choosing views with multiple salient points on the silhouette of the object. Silhouette-based similarity measures from Chen et al. (Comput Graph Forum 22(3):223–232, 2003) are then used to calculate the similarity between two 3D objects. Retrieval experiments were performed on three datasets: the Heads dataset, the SHREC2008 dataset, and the Princeton dataset. Experimental results show that the retrieval results using the salient views are comparable to the existing light field descriptor method (Chen et al. in Comput Graph Forum 22(3):223–232, 2003), and our method achieves a 15-fold speedup in the feature extraction computation time. PMID:23833704

  8. Spirit's View of 'Columbia Hills' (3-D)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site] Figure 1

    [figure removed for brevity, see original site] Figure 2

    NASA's Mars Exploration Rover Spirit looked up at the 'Columbia Hills' from its location on the 265th martian day, or sol, of its mission (Sept. 30, 2004) and captured this 3-D view. This cropped mosaic image, presented here in a cylindrical-perspective projection with geometric seam correction, was taken by the rover's navigation camera.

    Figure 1 is the left-eye view of a stereo pair and Figure 2 is the right-eye view of a stereo pair.

  9. From 3D view to 3D print

    NASA Astrophysics Data System (ADS)

    Dima, M.; Farisato, G.; Bergomi, M.; Viotto, V.; Magrin, D.; Greggio, D.; Farinato, J.; Marafatto, L.; Ragazzoni, R.; Piazza, D.

    2014-08-01

    In the last few years 3D printing has become more and more popular, with uses in many fields ranging from manufacturing to industrial design, architecture, medical support and aerospace. 3D printing is an evolution of two-dimensional printing that produces a solid object from a 3D model created with 3D modelling software. The final product is built by an additive process in which successive layers of material are laid down one over the other. A 3D printer makes it simple to realize very complex shapes that would be quite difficult to produce with dedicated conventional facilities. Because the print is built up layer by layer, no particular workflow is needed: it is sufficient to draw the model and send it to print. Many different kinds of 3D printers exist, differing in the technology and material used for layer deposition. A common printing material is ABS plastic, a light and rigid thermoplastic polymer whose mechanical properties make it widely used in several fields, such as pipe production and car interior manufacturing. I used this technology to create a 1:1 scale model of the telescope that is the hardware core of the ESA small space mission CHEOPS (CHaracterising ExOPlanets Satellite), which aims to characterize exoplanets via transit observations. The telescope has a Ritchey-Chrétien configuration with a 30 cm aperture and the launch is foreseen in 2017. In this paper, I present the different phases of the realization of such a model, focusing on the pros and cons of this kind of technology. For example, because of the finite printable volume (10×10×12 inches in the x, y and z directions respectively), it was necessary to split the largest parts of the instrument into smaller components to be reassembled and post-processed afterwards. A further issue is the resolution of the printed material, which is expressed in terms of layers

  10. Viewing 3D MRI data in perspective

    NASA Astrophysics Data System (ADS)

    Liu, Haiying; Chin, Chialei

    2000-10-01

    In medical imaging applications, 3D morphological data sets are often presented in 2D format without considering visual perspective. Without perspective, the resulting image can be counterintuitive to natural human visual perception, especially in the setting of an MR-guided neurosurgical procedure where depth perception is crucial. To address this problem we have developed a new projection scheme that incorporates a linear perspective transformation in various image reconstructions, including MR angiographic projection. In the scheme, an imaginary picture plane (PP) can be placed within or immediately in front of a 3D object, and the stand point (SP) of an observer is fixed at a normal viewing distance of 25 cm in front of the picture plane. A clinical 3D angiography data set (TR/TE/Flip = 30/5.4/15) was obtained from a patient head on a 1.5T MR scanner in 4 min 10 sec (87.5% rectangular, 52% scan). The length, width and height of the image volume were 200 mm, 200 mm and 72.4 mm respectively, corresponding to an effective matrix size of 236x512x44 in transverse orientation (512x512x88 after interpolation). A maximum intensity projection (MaxIP) algorithm was applied along the viewing rays of the perspective projection rather than the parallel projection. Thirty-six consecutive views were obtained at 10-degree azimuthal intervals. When displayed in cine mode, the new MaxIP images appeared realistic, with improved depth perception.
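
    The perspective MaxIP idea can be illustrated with a toy ray-marcher: rays fan out from the stand point through each picture-plane pixel instead of running in parallel, and the maximum voxel intensity sampled along each ray is kept. A hedged sketch under simplifying assumptions (hypothetical `perspective_mip` helper, nearest-neighbour sampling, axis-aligned geometry; not the authors' implementation):

```python
import numpy as np

def perspective_mip(volume, eye_dist, n_steps=64):
    """Maximum-intensity projection along diverging (perspective) rays.
    `volume` is a (Z, Y, X) array; the picture plane sits at z = 0 and
    the observer at z = -eye_dist, so each ray fans out from the stand
    point through one picture-plane pixel."""
    nz, ny, nx = volume.shape
    out = np.zeros((ny, nx), dtype=volume.dtype)
    for j in range(ny):
        for i in range(nx):
            best = volume[0, j, i]
            for k in range(1, n_steps):
                z = k * (nz - 1) / (n_steps - 1)
                # Perspective divergence: samples spread out with depth.
                scale = (z + eye_dist) / eye_dist
                y = (j - ny / 2) * scale + ny / 2
                x = (i - nx / 2) * scale + nx / 2
                yi, xi = int(round(y)), int(round(x))
                if 0 <= yi < ny and 0 <= xi < nx:
                    best = max(best, volume[int(round(z)), yi, xi])
            out[j, i] = best
    return out
```

With `scale` fixed at 1 this reduces to an ordinary parallel-ray MaxIP, which is exactly the distinction the record draws.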

  11. EEG-based usability assessment of 3D shutter glasses

    NASA Astrophysics Data System (ADS)

    Wenzel, Markus A.; Schultze-Kraft, Rafael; Meinecke, Frank C.; Cardinaux, Fabien; Kemp, Thomas; Müller, Klaus-Robert; Curio, Gabriel; Blankertz, Benjamin

    2016-02-01

    Objective. Neurotechnology can contribute to the usability assessment of products by providing objective measures of neural workload and can uncover usability impediments that are not consciously perceived by test persons. In this study, the neural processing effort imposed on the viewer of 3D television by shutter glasses was quantified as a function of shutter frequency. In particular, we sought to determine the critical shutter frequency at which the ‘neural flicker’ vanishes, such that visual fatigue due to this additional neural effort can be prevented by increasing the frequency of the system. Approach. Twenty-three participants viewed an image through 3D shutter glasses, while multichannel electroencephalogram (EEG) was recorded. In total ten shutter frequencies were employed, selected individually for each participant to cover the range below, at and above the threshold of flicker perception. The source of the neural flicker correlate was extracted using independent component analysis and the flicker impact on the visual cortex was quantified by decoding the state of the shutter from the EEG. Main Result. Effects of the shutter glasses were traced in the EEG up to around 67 Hz—about 20 Hz over the flicker perception threshold—and vanished at the subsequent frequency level of 77 Hz. Significance. The impact of the shutter glasses on the visual cortex can be detected by neurotechnology even when a flicker is not reported by the participants. Potential impact. Increasing the shutter frequency from the usual 50 Hz or 60 Hz to 77 Hz reduces the risk of visual fatigue and thus improves shutter-glass-based 3D usability.
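
    The study traced the shutter-induced 'neural flicker' with independent component analysis and EEG state decoding; a far simpler spectral marker conveys the underlying idea, namely comparing EEG power at the shutter frequency against neighbouring frequency bins. A rough sketch (the `flicker_snr` helper is hypothetical and illustrative only, not the paper's method):

```python
import numpy as np

def flicker_snr(eeg, fs, shutter_hz):
    """Crude marker of a shutter-frequency flicker in one EEG channel:
    spectral power at the shutter frequency divided by the mean power
    in the neighbouring bins. Values near 1 mean no detectable flicker."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), 1.0 / fs)
    target = int(np.argmin(np.abs(freqs - shutter_hz)))
    # Neighbouring bins on either side of the target frequency.
    neighbours = np.r_[spectrum[max(target - 5, 1):target],
                       spectrum[target + 1:target + 6]]
    return spectrum[target] / neighbours.mean()
```

Sweeping `shutter_hz` over the frequencies used in the study would show the marker shrinking toward 1 as the shutter rate passes the point where the cortical response vanishes.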

  12. First 3D view of solar eruptions

    NASA Astrophysics Data System (ADS)

    2004-07-01

    arrival times and impact angles at the Earth," says Dr Thomas Moran of the Catholic University, Washington, USA. In collaboration with Dr Joseph Davila, of NASA’s Goddard Space Flight Center, Greenbelt, USA, Moran has analysed two-dimensional images from the ESA/NASA Solar and Heliospheric Observatory (SOHO) in a new way to yield 3D images. Their technique is able to reveal the complex and distorted magnetic fields that travel with the CME cloud and sometimes interact with Earth's own magnetic field, pouring tremendous amounts of energy into the space near Earth. "These magnetic fields are invisible," Moran explains, "but since the CME gas is electrified, it spirals around the magnetic fields, tracing out their shapes." Therefore, a 3D view of the CME electrified gas (called a plasma) gives scientists valuable information on the structure and behaviour of the magnetic fields powering the CME. The new analysis technique for SOHO data determines the three-dimensional structure of a CME by taking a sequence of three SOHO Large Angle and Spectrometric Coronagraph (LASCO) images through various polarisers, at different angles. Whilst the light emitted by the Sun is not polarised, once it is scattered off electrons in the CME plasma it takes up some polarisation. This means that the electric fields of some of the scattered light are forced to oscillate in certain directions, whereas the electric field in the light emitted by the Sun is free to oscillate in all directions. Moran and Davila knew that light from CME structures closer to the plane of the Sun (as seen on the LASCO images) had to be more polarised than light from structures farther from that plane. Thus, by computing the ratio of polarised to unpolarised light for each CME structure, they could measure its distance from the plane. This provided the missing third dimension to the LASCO images. 
With this technique, the team has confirmed that the structure of CMEs directed towards Earth is an expanding arcade of
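
    The geometry behind the polarization-ratio technique can be made concrete with the idealized single-electron Thomson-scattering relation, in which the polarization fraction p of scattered light falls off as a structure moves out of the plane of the sky. A hedged sketch (hypothetical helpers; this toy relation, p = cos²ξ / (1 + sin²ξ), stands in for the full LASCO calibration the authors used):

```python
import math

def out_of_plane_angle(p):
    """Angle xi (radians) of a scattering structure out of the plane of
    the sky, from the measured polarization fraction p. The idealized
    Thomson-scattering relation p = cos^2(xi) / (1 + sin^2(xi))
    inverts to sin^2(xi) = (1 - p) / (1 + p)."""
    return math.asin(math.sqrt((1.0 - p) / (1.0 + p)))

def distance_from_plane(plane_of_sky_dist, p):
    """Line-of-sight distance of the structure from the plane of the
    sky, given its projected distance and polarization fraction: fully
    polarized light (p = 1) implies the structure lies in the plane."""
    return plane_of_sky_dist * math.tan(out_of_plane_angle(p))
```

This is the missing third dimension the record describes: the more depolarized a CME structure appears, the farther it sits from the plane of the sky.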

  13. 3-D Perspective View, Kamchatka Peninsula, Russia

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This perspective view shows the western side of the volcanically active Kamchatka Peninsula in eastern Russia. The image was generated using the first data collected during the Shuttle Radar Topography Mission (SRTM). In the foreground is the Sea of Okhotsk. Inland from the coast, vegetated floodplains and low relief hills rise toward snow capped peaks. The topographic effects on snow and vegetation distribution are very clear in this near-horizontal view. Forming the skyline is the Sredinnyy Khrebet, the volcanic mountain range that makes up the spine of the peninsula. High resolution SRTM topographic data will be used by geologists to study how volcanoes form and to understand the hazards posed by future eruptions.

    This image was generated using topographic data from SRTM and an enhanced true-color image from the Landsat 7 satellite. This image contains about 2,400 meters (7,880 feet) of total relief. The topographic expression was enhanced by adding artificial shading as calculated from the SRTM elevation model. The Landsat data was provided by the United States Geological Survey's Earth Resources Observations Systems (EROS) Data Center, Sioux Falls, South Dakota.

    SRTM, launched on February 11, 2000, used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar(SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. To collect the 3-D SRTM data, engineers added a 60-meter-long (200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. SRTM collected three-dimensional measurements of nearly 80 percent of the Earth's surface. SRTM is a cooperative project between NASA, the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense, and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Earth Science Enterprise, Washington, D.C.

    Size: 33.3 km (20.6 miles) wide x

  15. Case study: Beauty and the Beast 3D: benefits of 3D viewing for 2D to 3D conversion

    NASA Astrophysics Data System (ADS)

    Handy Turner, Tara

    2010-02-01

    From the earliest stages of the Beauty and the Beast 3D conversion project, the advantages of accurate desk-side 3D viewing was evident. While designing and testing the 2D to 3D conversion process, the engineering team at Walt Disney Animation Studios proposed a 3D viewing configuration that not only allowed artists to "compose" stereoscopic 3D but also improved efficiency by allowing artists to instantly detect which image features were essential to the stereoscopic appeal of a shot and which features had minimal or even negative impact. At a time when few commercial 3D monitors were available and few software packages provided 3D desk-side output, the team designed their own prototype devices and collaborated with vendors to create a "3D composing" workstation. This paper outlines the display technologies explored, final choices made for Beauty and the Beast 3D, wish-lists for future development and a few rules of thumb for composing compelling 2D to 3D conversions.

  16. Fabrication of 3-D Submicron Glass Structures by FIB

    NASA Astrophysics Data System (ADS)

    Chao, C. H.; Shen, S. C.; Wu, J. R.

    2009-10-01

    The fabrication characteristics of focused ion beam (FIB) machining of Pyrex glass were investigated. FIB offers several advantages, such as high resolution, high material removal rates, low forward scattering, and direct fabrication of selected areas without any etching mask. In this study, FIB etching of Pyrex glass was used for fast fabrication of 3-D submicron structures. A glass structure 0.39 μm in width was fabricated. The experimental results in terms of limiting beam size, ion dose (ions/cm2), and beam current are discussed. The influence of XeF2 gas on FIB glass fabrication was also investigated.

  17. 3D View of Death Valley, California

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This 3-D perspective view looking north over Death Valley, California, was produced by draping ASTER nighttime thermal infrared data over topographic data from the US Geological Survey. The ASTER data were acquired April 7, 2000 with the multi-spectral thermal infrared channels, and cover an area of 60 by 80 km (37 by 50 miles). Bands 13, 12, and 10 are displayed in red, green and blue respectively. The data have been computer enhanced to exaggerate the color variations that highlight differences in types of surface materials. Salt deposits on the floor of Death Valley appear in shades of yellow, green, purple, and pink, indicating presence of carbonate, sulfate, and chloride minerals. The Panamint Mtns. to the west, and the Black Mtns. to the east, are made up of sedimentary limestones, sandstones, shales, and metamorphic rocks. The bright red areas are dominated by the mineral quartz, such as is found in sandstones; green areas are limestones. In the lower center part of the image is Badwater, the lowest point in North America.

    Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) is one of five Earth-observing instruments launched December 18, 1999, on NASA's Terra satellite. The instrument was built by Japan's Ministry of International Trade and Industry. A joint U.S./Japan science team is responsible for validation and calibration of the instrument and the data products. Dr. Anne Kahle at NASA's Jet Propulsion Laboratory, Pasadena, Calif., is the U.S. Science team leader; Moshe Pniel of JPL is the project manager. ASTER is the only high resolution imaging sensor on Terra. The primary goal of the ASTER mission is to obtain high-resolution image data in 14 channels over the entire land surface, as well as black and white stereo images. With revisit time of between 4 and 16 days, ASTER will provide the capability for repeat coverage of changing areas on Earth's surface.

    The broad spectral coverage and high spectral resolution of ASTER

  18. Spirit's View on Sol 390 (3D)

    NASA Technical Reports Server (NTRS)

    2005-01-01

    [figure removed for brevity, see original site] Figure 1

    [figure removed for brevity, see original site] Figure 2

    NASA's Mars Exploration Rover Spirit used its navigation camera to capture this view during the rover's 390th martian day, or sol, (Feb. 6, 2005). The rover advanced about 13 meters (43 feet) driving backwards uphill on that sol. The view is uphill toward 'Cumberland Ridge' on 'Husband Hill.' It is presented in a cylindrical projection with geometric seam correction.

    Figure 1 is the left-eye view of a stereo pair and Figure 2 is the right-eye view of a stereo pair.

  19. Opportunity View on Sol 398 (3D)

    NASA Technical Reports Server (NTRS)

    2005-01-01

    [figure removed for brevity, see original site] Figure 1

    [figure removed for brevity, see original site] Figure 2

    NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this 360-degree view of the rover's surroundings on the 398th martian day, or sol, of its surface mission (March 7, 2005). Opportunity drove 95 meters (312 feet) toward 'Vostok Crater' that sol before taking the images. The drive was done in four steps: three 'blind-drive' segments followed by a segment using the rover's autonomous navigation. This location is catalogued as Opportunity's site 49. This three-dimensional view is presented as a cylindrical-perspective projection with geometric and brightness seam correction. Figure 1 is the left-eye view of a stereo pair and Figure 2 is the right-eye view of a stereo pair.

  20. Opportunity View on Sol 397 (3D)

    NASA Technical Reports Server (NTRS)

    2005-01-01

    [figure removed for brevity, see original site] Figure 1

    [figure removed for brevity, see original site] Figure 2

    NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this 360-degree view of the rover's surroundings on the 397th martian day, or sol, of its surface mission (March 6, 2005). Opportunity had completed a drive of 124 meters (407 feet) across the rippled flatland of the Meridiani Planum region on the previous sol, but did not drive on this sol. This location is catalogued as Opportunity's site 48. This three-dimensional view is presented as a cylindrical-perspective projection with geometric and brightness seam correction.

    Figure 1 is the left-eye view of a stereo pair and Figure 2 is the right-eye view of a stereo pair.

  1. Spirit's View on Sol 399 (3D)

    NASA Technical Reports Server (NTRS)

    2005-01-01

    [figure removed for brevity, see original site] Figure 1

    [figure removed for brevity, see original site] Figure 2

    NASA's Mars Exploration Rover Spirit used its navigation camera to capture this view during the rover's 399th martian day, or sol, (Feb. 15, 2005). An attempted drive on that sol did not gain any ground toward nearby 'Larry's Lookout' because of slippage that churned the soil on the slope. Spirit used its alpha particle X-ray spectrometer to examine the churned soil. This view is presented in a cylindrical-perspective projection with geometric seam correction.

    Figure 1 is the left-eye view of a stereo pair and Figure 2 is the right-eye view of a stereo pair.

  2. Multi-view and 3D deformable part models.

    PubMed

    Pepik, Bojan; Stark, Michael; Gehler, Peter; Schiele, Bernt

    2015-11-01

    As objects are inherently 3D, they were modeled in 3D in the early days of computer vision. Due to the ambiguities arising from mapping 2D features to 3D models, 3D object representations were subsequently neglected, and 2D feature-based models are the predominant paradigm in object detection nowadays. While such models have achieved outstanding bounding-box detection performance, they come with limited expressiveness, as they are clearly limited in their capability to reason about 3D shape or viewpoints. In this work, we bring the worlds of 3D and 2D object representations closer by building an object detector which leverages the expressive power of 3D object representations while at the same time being robustly matched to image evidence. To that end, we gradually extend the successful deformable part model [1] to include viewpoint information and part-level 3D geometry information, resulting in several different models with different levels of expressiveness. We end up with a 3D object model consisting of multiple object parts represented in 3D and a continuous appearance model. We experimentally verify that our models, while providing richer object hypotheses than 2D object models, provide consistently better joint object localization and viewpoint estimation than the state-of-the-art multi-view and 3D object detectors on various benchmarks (KITTI [2], 3D object classes [3], Pascal3D+ [4], Pascal VOC 2007 [5], EPFL multi-view cars [6]). PMID:26440264

  3. Opportunity's View, Sol 381 (3D)

    NASA Technical Reports Server (NTRS)

    2005-01-01

    [figure removed for brevity, see original site] Figure 1

    [figure removed for brevity, see original site] Figure 2

    NASA's Mars Exploration Rover Opportunity used its navigation camera on the rover's 381st and 382nd martian days, or sols, (Feb. 18 and 19, 2005) to take the images combined into this 360-degree panorama. Opportunity had driven 64 meters (209 feet) on sol 381 to arrive at this location close to a small crater dubbed 'Alvin.' The location is catalogued as Opportunity's Site 43. This view is presented in a cylindrical-perspective projection with geometric seam correction.

    Figure 1 is the left-eye view of a stereo pair and Figure 2 is the right-eye view of a stereo pair.

  4. True 3-D View of 'Columbia Hills' from an Angle

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This mosaic of images from NASA's Mars Exploration Rover Spirit shows a panorama of the 'Columbia Hills' without any adjustment for rover tilt. When viewed through 3-D glasses, depth is much more dramatic and easier to see, compared with a tilt-adjusted version. This is because stereo views are created by producing two images, one corresponding to the view from the panoramic camera's left-eye camera, the other corresponding to the view from the panoramic camera's right-eye camera. The brain processes the visual input more accurately when the two images do not have any vertical offset. In this view, the vertical alignment is nearly perfect, but the horizon appears to curve because of the rover's tilt (because the rover was parked on a steep slope, it was tilted approximately 22 degrees to the west-northwest). Spirit took the images for this 360-degree panorama while en route to higher ground in the 'Columbia Hills.'

    The highest point visible in the hills is 'Husband Hill,' named for space shuttle Columbia Commander Rick Husband. To the right are the rover's tracks through the soil, where it stopped to perform maintenance on its right front wheel in July. In the distance, below the hills, is the floor of Gusev Crater, where Spirit landed Jan. 3, 2004, before traveling more than 3 kilometers (1.8 miles) to reach this point. This vista comprises 188 images taken by Spirit's panoramic camera from its 213th day, or sol, on Mars to its 223rd sol (Aug. 9 to 19, 2004). Team members at NASA's Jet Propulsion Laboratory and Cornell University spent several weeks processing images and producing geometric maps to stitch all the images together in this mosaic. The 360-degree view is presented in a cylindrical-perspective map projection with geometric seam correction.

  5. World Wind 3D Earth Viewing

    NASA Technical Reports Server (NTRS)

    Hogan, Patrick; Maxwell, Christopher; Kim, Randolph; Gaskins, Tom

    2007-01-01

    World Wind allows users to zoom from satellite altitude down to any place on Earth, leveraging high-resolution LandSat imagery and SRTM (Shuttle Radar Topography Mission) elevation data to experience Earth in visually rich 3D. In addition to Earth, World Wind can also visualize other planets, and there are already comprehensive data sets for Mars and the Earth's moon, which are as easily accessible as those of Earth. There have been more than 20 million downloads to date, and the software is being used heavily by the Department of Defense due to the code's extensibility and the evolution of the code courtesy of NASA and the user community. Primary features include dynamic access to public-domain imagery and ease of use. All one needs to control World Wind is a two-button mouse. Additional guides and features can be accessed through a simplified menu. A Java version will be available soon. Navigation is automated with single clicks of a mouse, or by typing in any location to automatically zoom in to see it. The World Wind install package contains the necessary requirements such as the .NET runtime and managed DirectX library. World Wind can display combinations of data from a variety of sources, including Blue Marble, LandSat 7, SRTM, NASA Scientific Visualization Studio, GLOBE, and much more. A thorough list of features, the user manual, a key chart, and screen shots are available at http://worldwind.arc.nasa.gov.

  6. 3D View of Mars Particle

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This is a 3D representation of the pits seen in the first Atomic Force Microscope, or AFM, images sent back from NASA's Phoenix Mars Lander. Red represents the highest point and purple represents the lowest point.

    The particle in the upper left corner shown at the highest magnification ever seen from another world is a rounded particle about one micrometer, or one millionth of a meter, across. It is a particle of the dust that cloaks Mars. Such dust particles color the Martian sky pink, feed storms that regularly envelop the planet and produce Mars' distinctive red soil.

    The particle was part of a sample informally called 'Sorceress' delivered to the AFM on the 38th Martian day, or sol, of the mission (July 2, 2008). The AFM is part of Phoenix's microscopic station called MECA, or the Microscopy, Electrochemistry, and Conductivity Analyzer.

    The AFM was developed by a Swiss-led consortium, with Imperial College London producing the silicon substrate that holds sampled particles.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  7. Dual-view 3D displays based on integral imaging

    NASA Astrophysics Data System (ADS)

    Wang, Qiong-Hua; Deng, Huan; Wu, Fei

    2016-03-01

    We propose three dual-view integral imaging (DVII) three-dimensional (3D) displays. In the spatial-multiplexed DVII 3D display, each elemental image (EI) is cut into left and right sub-EIs, which are refracted to the left and right viewing zones by the corresponding micro-lens array (MLA). Different 3D images are reconstructed in the left and right viewing zones, and the viewing angle is decreased. In the DVII 3D display using polarizer parallax barriers, a polarizer parallax barrier is placed in front of both the display panel and the MLA. The polarizer parallax barrier consists of two parts with perpendicular polarization directions. The elemental image array (EIA) is cut into left and right parts. The lights emitted from the left part are modulated by the left MLA and reconstruct a 3D image in the right viewing zone, whereas the lights emitted from the right part reconstruct another 3D image in the left viewing zone. The 3D resolution is decreased. In the time-multiplexed DVII 3D display, an orthogonal polarizer array is attached to both the display panel and the MLA. The orthogonal polarizer array consists of horizontal and vertical polarizer units, and the polarization directions of adjacent units are orthogonal. In State 1, each EI is reconstructed by its corresponding micro-lens, whereas in State 2, each EI is reconstructed by its adjacent micro-lens. 3D images 1 and 2 are reconstructed alternately with a refresh rate up to 120 Hz. The viewing angle and 3D resolution are the same as those of the conventional II 3D display.

  8. 3D View of Grand Canyon, Arizona

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The Grand Canyon is one of North America's most spectacular geologic features. Carved primarily by the Colorado River over the past six million years, the canyon sports vertical drops of 5,000 feet and spans a 445-kilometer-long stretch of Arizona desert. The strata along the steep walls of the canyon form a record of geologic time from the Paleozoic Era (250 million years ago) to the Precambrian (1.7 billion years ago).

    The above view was acquired by the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument aboard the Terra spacecraft. Visible and near infrared data were combined to form an image that simulates the natural colors of water and vegetation. Rock colors, however, are not accurate. The image data were combined with elevation data to produce this perspective view, with no vertical exaggeration, looking from above the South Rim up Bright Angel Canyon towards the North Rim. The light lines on the plateau at lower right are the roads around the Canyon View Information Plaza. The Bright Angel Trail, which reaches the Colorado in 11.3 kilometers, can be seen dropping into the canyon over Plateau Point at bottom center. The blue and black areas on the North Rim indicate a forest fire that was smoldering as the data were acquired on May 12, 2000.

    Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) is one of five Earth-observing instruments launched December 18, 1999, on NASA's Terra satellite. The instrument was built by Japan's Ministry of International Trade and Industry. A joint U.S./Japan science team is responsible for validation and calibration of the instrument and the data products. Dr. Anne Kahle at NASA's Jet Propulsion Laboratory, Pasadena, Calif., is the U.S. Science team leader; Moshe Pniel of JPL is the project manager. ASTER is the only high resolution imaging sensor on Terra. The primary goal of the ASTER mission is to obtain high-resolution image data in 14 channels over the entire land

  9. Complete 3D model reconstruction from multiple views

    NASA Astrophysics Data System (ADS)

    Lin, Huei-Yung; Subbarao, Murali; Park, Soon-Yong

    2002-02-01

    New algorithms are presented for automatically acquiring the complete 3D model of single and multiple objects using rotational stereo. The object is placed on a rotation stage. Stereo images for several viewing directions are taken by rotating the object by known angles. Partial 3D shapes and the corresponding texture maps are obtained using rotational stereo and shape from focus. First, for each view, shape from focus is used to obtain a rough 3D shape and the corresponding focused image. Then, the rough 3D shape and focused images are used in rotational stereo to obtain a more accurate measurement of 3D shape. The rotation axis is calibrated using three fixed points on a planar object and refined during surface integration. The complete 3D model is reconstructed by integrating partial 3D shapes and the corresponding texture maps of the object from multiple views. New algorithms for range image registration, surface integration and texture mapping are presented. Our method can generate 3D models very fast and preserve the texture of objects. A new prototype vision system named Stonybrook VIsion System 2 (SVIS-2) has been built and used in the experiments. In the experiments, 4 viewing directions at 90-degree intervals are used. SVIS-2 can acquire the 3D model of objects within a 250 mm x 250 mm x 250 mm cubic workspace placed about 750 mm from the camera. Both computational algorithms and experimental results on several objects are presented.
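    The integration step described above, expressing each partial shape in a common frame using the known turntable angles, can be sketched as follows (a simplified model assuming an already-calibrated vertical rotation axis; `merge_views` is a hypothetical name):

```python
import numpy as np

def merge_views(partial_clouds, angles_deg, axis_point=np.zeros(3)):
    """Register partial 3-D shapes from a turntable into one model.

    Each cloud (Nx3) was captured after rotating the object by a known
    angle about a vertical (y) axis through axis_point. Rotating each
    cloud by its angle expresses all views in the first view's frame,
    after which they can simply be concatenated.
    """
    merged = []
    for cloud, a in zip(partial_clouds, np.radians(angles_deg)):
        c, s = np.cos(a), np.sin(a)
        # Rotation about the y (vertical) axis.
        R = np.array([[c, 0.0, s],
                      [0.0, 1.0, 0.0],
                      [-s, 0.0, c]])
        merged.append((cloud - axis_point) @ R.T + axis_point)
    return np.vstack(merged)
```

In practice the registration would then be refined (e.g. ICP) before surface integration and texture mapping, as the paper describes.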

  10. Evaluation of usefulness of 3D views for clinical photography.

    PubMed

    Jinnin, Masatoshi; Fukushima, Satoshi; Masuguchi, Shinichi; Tanaka, Hiroki; Kawashita, Yoshio; Ishihara, Tsuyoshi; Ihn, Hironobu

    2011-01-01

    This is the first report investigating the usefulness of a 3D viewing technique (parallel viewing and cross-eyed viewing) for presenting clinical photography. Using the technique, we can grasp the 3D structure of various lesions (e.g. tumors, wounds) or surgical procedures (e.g. lymph node dissection, flaps) much more easily than with 2D photos, without any cost or optical aids. 3D cameras have recently become commercially available, but they may not be useful for presentation in scientific papers or poster sessions. To create a stereogram, two different pictures were taken from the right- and left-eye views using a digital camera, and the two pictures were placed next to one another. Using 9 stereograms, we performed a questionnaire-based survey. Our survey revealed that 57.7% of the doctors/students had already acquired the 3D viewing technique, and an additional 15.4% could learn parallel viewing with 10 minutes of training. Among the subjects capable of 3D viewing, 73.7% used the parallel-view technique whereas only 26.3% chose the cross-eyed view. There was no significant difference between parallel-view and cross-eyed users in the questionnaire results on the efficiency and usefulness of 3D views. Almost all subjects (94.7%) answered that the technique is useful. Lesions with multiple undulations are a good application. 3D views, especially parallel viewing, are likely common and easy enough for doctors/students to consider for practical use. Wide use of the technique may revolutionize the presentation of clinical pictures in meetings, educational lectures, and manuscripts. PMID:22101377
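    The stereogram construction the authors describe, two photographs placed side by side, is straightforward to reproduce. A hedged sketch (the function name and the white separator strip are illustrative choices, not part of the paper's protocol):

```python
import numpy as np

def stereogram(left, right, gap=16, cross_eyed=False):
    """Place two views side by side for free (glasses-free) viewing.

    For parallel viewing the left-eye image goes on the left; for
    cross-eyed viewing the two images are swapped. `gap` is the width
    in pixels of a white separator strip between them.
    """
    if cross_eyed:
        left, right = right, left
    h, _, c = left.shape
    sep = np.full((h, gap, c), 255, dtype=left.dtype)
    return np.concatenate([left, sep, right], axis=1)
```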

  11. Balance and coordination after viewing stereoscopic 3D television

    PubMed Central

    Read, Jenny C. A.; Simonotto, Jennifer; Bohr, Iwo; Godfrey, Alan; Galna, Brook; Rochester, Lynn; Smulders, Tom V.

    2015-01-01

    Manufacturers and the media have raised the possibility that viewing stereoscopic 3D television (S3D TV) may cause temporary disruption to balance and visuomotor coordination. We looked for evidence of such effects in a laboratory-based study. Four hundred and thirty-three people aged 4–82 years old carried out tests of balance and coordination before and after viewing an 80 min movie in either conventional 2D or stereoscopic 3D, while wearing two triaxial accelerometers. Accelerometry produced little evidence of any change in body motion associated with S3D TV. We found no evidence that viewing the movie in S3D causes a detectable impairment in balance or in visuomotor coordination. PMID:26587261

  12. Evaluation of viewing experiences induced by curved 3D display

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Park, Min-Chul; Yano, Sumio

    2015-05-01

    As advanced display technology has been developed, much attention has been given to flexible panels. On top of that, with the momentum of the 3D era, the stereoscopic 3D technique has been combined with curved displays. However, despite the increased need for 3D function in curved displays, comparisons between curved and flat panel displays with 3D views have rarely been performed. Most of the previous studies have investigated their basic ergonomic aspects, such as viewing posture and distance, with only 2D views. It has generally been known that curved displays are more effective in enhancing involvement in specific content stories because the field of view and the distances from the eyes of viewers to both edges of the screen are more natural in curved displays than in flat panel ones. For flat panel displays, ocular torsion may occur when viewers try to move their eyes from the center to the edges of the screen to continuously capture rapidly moving 3D objects. This is due in part to differences between the viewing distance from the center of the screen to the eyes and that from the edges of the screen to the eyes. Thus, this study compared S3D viewing experiences induced by a curved display with those of a flat panel display by evaluating significant subjective and objective measures.

  13. 3-D Television Without Glasses: On Standard Bandwidth

    NASA Astrophysics Data System (ADS)

    Collender, Robert B.

    1983-10-01

    This system for stereoscopic television uses relative camera-to-scene translating motion and does not require optical aids at the observer's eyes. It presents a horizontal-parallax (hologram-like) 3-D full-motion scene to a wide audience, has no dead zones or pseudo-3-D zones over the entire horizontal viewing field, and operates on standard telecast signals, requiring no changes to the television studio equipment or the home television antenna. The only change required at the receiving end is a special television projector. The system is compatible with pre-recorded standard color television signals. The cathode-ray tube is eliminated by substituting an array of solid-state charge-coupled-device liquid-crystal light valves, which can receive television fields in parallel from memory and are arrayed in an arc for scanning purposes. The array contains a scrolled sequence of successive television frames that serves as the basis for 3-D horizontal viewing parallax. These light valves reflect polarized light, with the degree of polarization made a function of the scene brightness. The array is optically scanned and the sequence rapidly projected onto a cylindrical, concave, semi-specular screen that returns all of the light to a rapidly translating vertical "aerial" exit slit through which the audience views the reconstructed 3-D scene.

  14. A closer view of prominent rocks - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Many prominent rocks near the Sagan Memorial Station are featured in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. Shark, Half-Dome, Pumpkin, Flat Top and Frog are at center. Little Flat Top is at right. The horizon in the distance is one to two kilometers away.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

    The left-eye and right-eye views are available individually at the original site. [figures removed for brevity: Left, Right]

  15. fVisiOn: glasses-free tabletop 3D display to provide virtual 3D media naturally alongside real media

    NASA Astrophysics Data System (ADS)

    Yoshida, Shunsuke

    2012-06-01

    A novel glasses-free tabletop 3D display, named fVisiOn, floats virtual 3D objects on an empty, flat tabletop surface and enables multiple viewers to observe raised 3D images from any angle, at 360°. Our glasses-free 3D image reproduction method employs a combination of an optical device and an array of projectors and produces continuous horizontal parallax in the direction of a circular path located above the table. The optical device is shaped as a hollow cone and works as an anisotropic diffuser. The circularly arranged projectors cast numerous rays into the optical device. Each ray reproduces one that passes through a corresponding point on a virtual object's surface and is oriented toward a viewing area around the table. At any viewpoint on the ring-shaped viewing area, both eyes collect fractional images from different projectors, and all the viewers around the table can perceive the scene as 3D from their perspectives because the images include binocular disparity. The entire mechanism is installed beneath the table, so the tabletop area remains clear and no ordinary tabletop activities are disturbed. Many people can naturally share the 3D images displayed together with real objects on the table. In our latest prototype, we employed a handmade optical device and an array of over 100 tiny projectors. This configuration reproduces static and animated 3D scenes for a 130° viewing area and allows 5-cm-tall virtual characters to play soccer and dance on the table.

  16. System crosstalk measurement of a time-sequential 3D display using ideal shutter glasses

    NASA Astrophysics Data System (ADS)

    Chen, Fu-Hao; Huang, Kuo-Chung; Lin, Lang-Chin; Chou, Yi-Heng; Lee, Kuen

    2011-03-01

    The market for stereoscopic 3D TV has grown fast recently; however, for 3D TV to really take off, the interoperability of shutter glasses (SG) across different TV sets must be solved, so we developed a measurement method with ideal shutter glasses (ISG) to separate time-sequential stereoscopic displays and SG. To measure the crosstalk of time-sequential stereoscopic 3D displays, the influences of the SG must be eliminated. The advantages are that the sources of crosstalk are distinguished, and the interoperability of SG is broadened. Hence, this paper proposes ideal shutter glasses, whose non-ideal properties are eliminated, as a platform to evaluate the crosstalk arising purely from the display. In the ISG method, the illuminance of the display was measured in the time domain to analyze the system crosstalk SCT of the display. In this experiment, the ISG method was used to measure SCT with a high-speed-response illuminance meter. From the time-resolved illuminance signals, the slow time response of the liquid crystal leading to SCT is visualized and quantified. Furthermore, an intriguing phenomenon was observed in which SCT measured through SG increases with shortening viewing distance; it may arise from LC leakage of the display and shutter leakage at large viewing angles. Thus, we measured how LC and shutter leakage depend on viewing angle and verified our argument. We also used the ISG method to evaluate two displays.
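    Once the time-resolved illuminance has been measured, system crosstalk itself is typically a simple ratio of leakage to intended luminance. A minimal sketch using one common black-level-corrected definition (the exact formula used in the paper is not stated here, so this is an assumption):

```python
def system_crosstalk(lum_unintended, lum_intended, lum_black=0.0):
    """System crosstalk (%) from averaged illuminance readings.

    A common definition (assumed here): luminance leaking from the
    unintended view divided by the intended view's signal, with the
    display's black level subtracted from both.
    """
    return 100.0 * (lum_unintended - lum_black) / (lum_intended - lum_black)
```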

  17. Efficient fabrication method of nano-grating for 3D holographic display with full parallax views.

    PubMed

    Wan, Wenqiang; Qiao, Wen; Huang, Wenbin; Zhu, Ming; Fang, Zongbao; Pu, Donglin; Ye, Yan; Liu, Yanhua; Chen, Linsen

    2016-03-21

    Without any special glasses, multiview 3D displays based on diffractive optics can present high-resolution, full-parallax 3D images over an ultra-wide viewing angle. The enabling optical component, namely the phase plate, can produce arbitrarily distributed view zones by carefully designing the orientation and the period of each nano-grating pixel. However, such 3D display screens have been restricted to a limited size due to the time-consuming process of fabricating the nano-gratings on the phase plate. In this paper, we propose and develop a lithography system that can fabricate the phase plate efficiently. Here we made two phase plates with full nano-grating pixel coverage at a speed of 20 mm²/min, a 500-fold increase in efficiency compared to E-beam lithography. One 2.5-inch phase plate generated 9-view 3D images with horizontal parallax, while the other 6-inch phase plate produced 64-view 3D images with full parallax. The angular divergence on the horizontal and vertical axes was 1.5 degrees and 1.25 degrees, respectively, slightly larger than the value of 1.2 degrees simulated by the Finite Difference Time Domain (FDTD) method. The intensity variation was less than 10% for each viewpoint, consistent with the simulation results. On top of each phase plate, a high-resolution binary masking pattern containing the amplitude information of all viewing zones was well aligned. We achieved a resolution of 400 pixels/inch and a viewing angle of 40 degrees for the 9-view 3D images with horizontal parallax. In the other prototype, the resolution of each view was 160 pixels/inch and the viewing angle was 50 degrees for the 64-view 3D images with full parallax. As demonstrated in the experiments, the homemade lithography system provides the key fabrication technology for multiview 3D holographic displays.

  18. 3D Viewing: Odd Perception - Illusion? reality? or both?

    NASA Astrophysics Data System (ADS)

    Kisimoto, K.; Iizasa, K.

    2008-12-01

    We live in three-dimensional space, don't we? It could be at least four dimensions, but that is another story. Either way, our capability for 3D viewing is constrained by our 2D perception (our intrinsic tools of perception). I carried out a few visual experiments using topographic data to show our intrinsic (or biological) shortcoming in 3D recognition of our world. Results of the experiments suggest: (1) a 3D surface model displayed on a 2D computer screen (or paper) always has two interpretations of the 3D surface geometry; if we choose one of the interpretations (in other words, if we are hooked by one perception of the two), we maintain that perception even if the 3D model changes its viewing perspective over time on the screen; (2) more interesting, a real 3D solid object (e.g., made of clay) also admits the two interpretations of its geometry mentioned above, if we observe the object with one eye. The most famous example of this viewing illusion comes from the magician Jerry Andrus, who died in 2007 and made a super-cool paper-crafted dragon that causes a visual illusion for a one-eyed viewer. Through my experiments, I confirmed this phenomenon in another perceptually persuasive (deceptive?) way. My conclusion is that this illusion is intrinsic, i.e. reality for humans, because even though we live in 3D space, our perceptual tool (the eye) is composed of 2D sensors whose information is reconstructed or processed into 3D by our experience-based brain. So, (3) when we observe a 3D surface model on a computer screen, we are always one eye short even if we use both eyes. One last suggestion from my experiments is that recent highly sophisticated 3D models might include more information than human perception can handle properly, i.e. we might not be understanding the 3D world (geospace) at all, just under an illusion.

  19. Thermal 3D modeling system based on 3-view geometry

    NASA Astrophysics Data System (ADS)

    Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun

    2012-11-01

    In this paper, we propose a novel thermal three-dimensional (3D) modeling system that includes 3D shape, visual, and thermal infrared information and solves the registration problem among these three types of information. The proposed system consists of a projector, a visual camera, and a thermal camera (PVT). To generate 3D shape information, we use a structured-light technique, which involves the visual camera and the projector. The thermal camera is added to the structured-light system in order to provide thermal information. To solve the correspondence problem between the three sensors, we use three-view geometry. Finally, we obtain registered PVT data, which includes visual, thermal, and 3D shape information. Among various potential applications such as industrial measurement, biological experiments, and military usage, we have adapted the proposed method to biometrics, particularly face recognition. With the proposed method, we obtain multi-modal 3D face data that includes not only textural information but also head pose, 3D shape, and thermal information. Experimental results show that the performance of the proposed face recognition system is not limited by head pose variation, which is a serious problem in face recognition.

  20. Video retargeting for stereoscopic content under 3D viewing constraints

    NASA Astrophysics Data System (ADS)

    Chamaret, C.; Boisson, G.; Chevance, C.

    2012-03-01

    The imminent deployment of new devices such as TVs, tablets, and smart phones supporting stereoscopic display creates a need for retargeting the content. New devices bring their own aspect ratios and potentially small screen sizes. Aspect-ratio conversion becomes mandatory, and an automatic solution would be of high value, especially if it maximizes visual comfort. Some issues inherent to the 3D domain are considered in this paper: no vertical disparity, and no object with negative disparity (outward perception) at the border of the cropping window. A visual attention model is applied on each view and provides saliency maps of the most attractive pixels. Dedicated 3D retargeting correlates the 2D attention maps of the two views, as well as additional computed information, to determine the best cropping window. Specific constraints induced by the 3D experience influence the retargeted window through the computation of a map of objects that should not be cropped. Compared with original 2.35:1 content, whose black stripes provide a limited 3D experience on a TV screen, the automatic cropping and exploitation of the full screen yield a more immersive experience. The proposed system is fully automatic and ensures good final quality without missing parts fundamental to the global understanding of the scene. Eye-tracking data recorded on stereoscopic content were compared against the retargeted window to ensure that the most attractive areas lie inside the final video.
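    The core search, picking the fixed-size cropping window that retains the most saliency, can be sketched with a summed-area table (an illustrative simplification of the paper's method, which adds the 3D-specific constraints described above; `best_crop` is a hypothetical name):

```python
import numpy as np

def best_crop(saliency, win_h, win_w):
    """Find the win_h x win_w window with maximal total saliency.

    saliency: 2D array (e.g. a per-view attention map). A summed-area
    table makes each candidate window an O(1) lookup. Returns the
    (top, left) corner of the best window.
    """
    H, W = saliency.shape
    ii = np.zeros((H + 1, W + 1))
    ii[1:, 1:] = saliency.cumsum(0).cumsum(1)  # integral image
    best, best_pos = -np.inf, (0, 0)
    for top in range(H - win_h + 1):
        for left in range(W - win_w + 1):
            s = (ii[top + win_h, left + win_w] - ii[top, left + win_w]
                 - ii[top + win_h, left] + ii[top, left])
            if s > best:
                best, best_pos = s, (top, left)
    return best_pos
```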

  1. 3-D object recognition using 2-D views.

    PubMed

    Li, Wenjing; Bebis, George; Bourbakis, Nikolaos G

    2008-11-01

    We consider the problem of recognizing 3-D objects from 2-D images using geometric models and assuming different viewing angles and positions. Our goal is to recognize and localize instances of specific objects (i.e., model-based) in a scene. This is in contrast to category-based object recognition methods where the goal is to search for instances of objects that belong to a certain visual category (e.g., faces or cars). The key contribution of our work is improving 3-D object recognition by integrating Algebraic Functions of Views (AFoVs), a powerful framework for predicting the geometric appearance of an object due to viewpoint changes, with indexing and learning. During training, we compute the space of views that groups of object features can produce under the assumption of 3-D linear transformations, by combining a small number of reference views that contain the object features using AFoVs. Unrealistic views (e.g., due to the assumption of 3-D linear transformations) are eliminated by imposing a pair of rigidity constraints based on knowledge of the transformation between the reference views of the object. To represent the space of views that an object can produce compactly while allowing efficient hypothesis generation during recognition, we propose combining indexing with learning in two stages. In the first stage, we sample the space of views of an object sparsely and represent information about the samples using indexing. In the second stage, we build probabilistic models of shape appearance by sampling the space of views of the object densely and learning the manifold formed by the samples. Learning employs the Expectation-Maximization (EM) algorithm and takes place in a "universal," lower-dimensional, space computed through Random Projection (RP). During recognition, we extract groups of point features from the scene and we use indexing to retrieve the most feasible model groups that might have produced them (i.e., hypothesis generation). 
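    The AFoVs prediction step described above can be illustrated in its simplest affine form: under orthographic projection and 3-D affine transformations, each coordinate in a novel view is a linear combination of corresponding coordinates in two reference views plus a constant. A sketch under those assumptions (`fit_afov` and `predict_view` are hypothetical names, and this omits the paper's rigidity constraints, indexing, and learning stages):

```python
import numpy as np

def fit_afov(ref1, ref2, novel):
    """Least-squares fit of affine algebraic functions of views.

    ref1, ref2, novel: Nx2 arrays of corresponding image points in
    two reference views and one novel view. Each novel coordinate is
    modeled as a linear combination of [x1, y1, x2, 1]. Returns the
    (4, 2) coefficient matrix mapping [x1, y1, x2, 1] -> (x3, y3).
    """
    A = np.column_stack([ref1[:, 0], ref1[:, 1], ref2[:, 0],
                         np.ones(len(ref1))])
    coeffs, *_ = np.linalg.lstsq(A, novel, rcond=None)
    return coeffs

def predict_view(coeffs, ref1, ref2):
    """Predict novel-view point positions from two reference views."""
    A = np.column_stack([ref1[:, 0], ref1[:, 1], ref2[:, 0],
                         np.ones(len(ref1))])
    return A @ coeffs
```

With synthetic affine cameras the fit is exact, which is the property AFoVs exploit to predict an object's appearance under viewpoint change from a small number of reference views.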

  2. Crosstalk minimization in autostereoscopic multiview 3D display by eye tracking and fusion (overlapping) of viewing zones

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Kyu; Yoon, Seon-Kyu; Yoon, Ki-Hyuk

    2012-06-01

    An autostereoscopic 3D display provides binocular perception without eye glasses, but the crosstalk effect induces a weakened 3D effect and dizziness. Crosstalk-related problems degrade the 3D effect, clearness, and realism of the 3D image. A novel method of reducing the crosstalk is designed and tested; the method is based on the fusion of viewing zones and the real-time eye position. It is shown experimentally that the crosstalk is effectively reduced at any position around the optimal viewing distance.

  3. [3D reconstruction of multiple views based on trifocal tensor].

    PubMed

    Chen, Chunxiao; Zhang, Juan

    2012-08-01

    Reconstruction of the 3D structure of an object from 2D views plays an important role in plastic surgery and orthopedics. This method does not require the camera to perform specific movements, such as independent translation or rotation. It only needs a hand-held camera to take a few pictures from arbitrary positions, and it applies the geometric relationship among the three views to obtain a projective reconstruction of the object. Cheirality constraints are then introduced in the stratified reconstruction to determine the search area for the plane at infinity, after which the camera intrinsic parameters are calibrated and the metric reconstruction is completed. The reconstructed model can also be observed from different angles via mouse and keyboard input. Experiments with both object pictures and face pictures show that the proposed method is robust and accurate. PMID:23016433

  4. A method of multi-view intraoral 3D measurement

    NASA Astrophysics Data System (ADS)

    Zhao, Huijie; Wang, Zhen; Jiang, Hongzhi; Xu, Yang; Lv, Peijun; Sun, Yunchun

    2015-02-01

    In dental restoration, it is important to achieve a high-accuracy digital impression. Most existing intraoral measurement systems can only measure a tooth from a single view. Therefore, to acquire the complete data of a tooth, scans from multiple directions and data stitching based on surface features are needed, which increases the measurement duration and influences the measurement accuracy. In this paper, we introduce a fringe-projection-based multi-view intraoral measurement system. It can acquire 3D data of the occlusal, buccal, and lingual surfaces of a tooth synchronously, using a sensor with three mirrors that aim at the three surfaces respectively and thus expand the measuring area. The fixed geometric relationship of the three mirrors is calibrated before measurement and helps stitch the data clouds acquired through the different mirrors accurately. The system can therefore obtain the 3D data of a tooth without measuring it from different directions multiple times. Experiments proved the availability and reliability of this miniaturized measurement system.

  5. 3. General view showing rear of looking glass aircraft. View ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. General view showing rear of looking glass aircraft. View to north. - Offutt Air Force Base, Looking Glass Airborne Command Post, Looking Glass Aircraft, On Operational Apron covering northeast half of Project Looking Glass Historic District, Bellevue, Sarpy County, NE

  6. 5. Headon view of looking glass aircraft. View to southwest. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    5. Head-on view of looking glass aircraft. View to southwest. - Offutt Air Force Base, Looking Glass Airborne Command Post, Looking Glass Aircraft, On Operational Apron covering northeast half of Project Looking Glass Historic District, Bellevue, Sarpy County, NE

  7. 4. View showing underside of wing, looking glass aircraft. View ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    4. View showing underside of wing, looking glass aircraft. View to north. - Offutt Air Force Base, Looking Glass Airborne Command Post, Looking Glass Aircraft, On Operational Apron covering northeast half of Project Looking Glass Historic District, Bellevue, Sarpy County, NE

  8. Depth-fused 3D (DFD) display with multiple viewing zones

    NASA Astrophysics Data System (ADS)

    Date, Munekazu; Sugimoto, Satoshi; Takada, Hideaki; Nakazawa, Kenji

    2007-09-01

    A new depth-fused 3-D (DFD) display for multiple users is presented. A DFD display, which consists of a stack of layered screens, is expected to be a visually comfortable 3-D display because it can satisfy not only binocular disparity, convergence and accommodation, but also motion parallax for small observer displacements. However, the display cannot be observed from an oblique angle due to image doubling caused by the layered screen structure, so it has been applicable only to single-observer use. In this paper, we present a multi-viewing-zone DFD display using a stack of a see-through screen and a multi-viewing-zone 2-D display. We used a polarization-selective scattering film as the front screen and an anisotropic scattering film as the rear screen. The front screen was illuminated by one projector and displayed an image at all viewing angles. The rear screen was illuminated by multiple projectors from different directions. The displayed images on the rear screen were arranged to overlap well for each viewing direction, creating multiple viewing zones without image doubling. This design is promising for a large-area 3-D display that does not require special glasses, because it uses projection and has a simple structure.
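The depth-fused principle divides each pixel's luminance between the front and rear screens so that the fused percept appears at an intermediate depth. A hedged sketch of that luminance split, assuming a linear ratio-to-depth mapping (a common DFD model, not necessarily the authors' exact implementation):

```python
import numpy as np

def dfd_split(image, depth):
    """Split a luminance image between the front and rear layers of a
    depth-fused display. depth is per-pixel and normalized:
    0.0 = front plane, 1.0 = rear plane."""
    depth = np.clip(depth, 0.0, 1.0)
    front = image * (1.0 - depth)   # brighter front -> percept nearer
    rear = image * depth            # brighter rear  -> percept farther
    return front, rear

# A flat gray patch placed a quarter of the way toward the rear plane.
front, rear = dfd_split(np.full((2, 2), 100.0), np.full((2, 2), 0.25))
```

The split conserves total luminance, so the fused image looks like the original 2-D image while the luminance ratio carries the depth cue.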

  9. A 3D view of the SN 1987A Ejecta

    NASA Astrophysics Data System (ADS)

    Fransson, Claes

    2013-10-01

    SN 1987A represents the most important source of information about the explosion physics of any supernova. For this purpose, the morphology of the ejecta, together with the radioactive isotopes, is the best diagnostic. HST imaging in H-alpha and NIR AO imaging in Si/Fe at 1.64 microns reveal completely different morphologies, with the 1.64-micron image dominated by the processed core and H-alpha by the surrounding hydrogen envelope. Besides Cas A (a Type IIb), this is the only core-collapse SN for which we have this information. We propose to use STIS to map the debris of SN 1987A in 3D with the best possible angular resolution. There has been no such STIS map since 2004, while the physics of the emission has undergone profound changes: from being powered by radioactivity, the energy input is now dominated by X-rays from the collision with the circumstellar ring. Compared to 2004, the 3D structure can be determined with a factor of 3 better spatial resolution and also better spectral resolution. The 3D structure in H-alpha can also give independent clues to where the large mass of dust detected by Herschel is located, as well as to its properties. It also gives a view of the ejecta complementary to future ALMA imaging in CO, which will have similar spatial resolution. Besides the debris, we will be able to probe the 10,000 km/s reverse shock close to the ring in H-alpha. By observing this also in Ly-alpha, one may test the different emission processes that have been proposed, as well as probe the region producing the synchrotron emission observed by ALMA. The opportunity to observe the SN at this stage will never come back.

  10. FIT3D toolbox: multiple view geometry and 3D reconstruction for Matlab

    NASA Astrophysics Data System (ADS)

    Esteban, Isaac; Dijk, Judith; Groen, Frans

    2010-10-01

    FIT3D is a Toolbox built for Matlab that aims at unifying and distributing a set of tools that will allow the researcher to obtain a complete 3D model from a set of calibrated images. In this paper we motivate and present the structure of the toolbox in a tutorial and example based approach. Given its flexibility and scope we believe that FIT3D represents an exciting opportunity for researchers that want to apply one particular method with real data without the need for extensive additional programming.

  11. Measuring heterogeneous stress fields in a 3D colloidal glass

    NASA Astrophysics Data System (ADS)

    Lin, Neil; Bierbaum, Matthew; Bi, Max; Sethna, James; Cohen, Itai

    Glass in our common experience is hard and fragile, but it still bends, yields, and flows slowly under loads. The yielding of glass, a well-documented yet not fully understood flow behavior, is governed by the heterogeneous local stresses in the material. While resolving stresses at the atomic scale is not feasible, measurements of stresses at the single-particle level in colloidal glasses, a widely used model system for atomic glasses, have recently been made possible using Stress Assessment from Local Structural Anisotropy (SALSA). In this work, we use SALSA to visualize the three-dimensional stress network in a hard-sphere glass during start-up shear. By measuring the evolution of this stress network we identify local yielding events. We find that these events often require only minimal structural rearrangement and as such have most likely been overlooked in previous analyses. We then relate these micro-scale yielding events to the macro-scale flow behavior observed in bulk measurements.

  12. View generation for 3D-TV using image reconstruction from irregularly spaced samples

    NASA Astrophysics Data System (ADS)

    Vázquez, Carlos

    2007-02-01

    Three-dimensional television (3D-TV) will become the next big step in the development of advanced TV systems. One of the major challenges for the deployment of 3D-TV systems is the diversity of display technologies and the high cost of capturing multi-view content. Depth image-based rendering (DIBR) has been identified as a key technology for generating new views for stereoscopic and multi-view displays from a small number of captured and transmitted views. We propose a disparity compensation method for DIBR that does not require spatial interpolation of the disparity map. We use forward-mapping disparity compensation with real precision. The proposed method deals with the irregularly sampled image resulting from this disparity compensation process by applying a re-sampling algorithm based on a bi-cubic spline function space that produces smooth images. Because no approximation is made on the position of the samples, geometrical distortions in the final images due to approximations in sample positions are minimized. We also paid attention to the occlusion problem: our algorithm detects the occluded regions in the newly generated images and uses simple depth-aware inpainting techniques to fill the gaps created by newly exposed areas. We tested the proposed method in the context of generating the views needed for SynthaGram(TM) auto-stereoscopic displays. We used as input either a 2D image plus a depth map or a stereoscopic pair with the associated disparity map. Our results show that this technique provides high-quality images for different display technologies such as stereoscopic viewing with shutter glasses (two views) and lenticular auto-stereoscopic displays (nine views).
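For intuition, here is a deliberately simplified forward-mapping warp in Python/NumPy. Unlike the paper's method, it rounds target positions to integer pixels and resolves conflicts with a z-buffer; the paper instead keeps real-valued positions and re-samples the resulting irregular grid with bi-cubic splines. All names and the toy data are illustrative:

```python
import numpy as np

def forward_warp(image, disparity, depth):
    """Toy forward-mapping disparity compensation: shift each source pixel
    horizontally by its disparity, keep the closest sample on conflicts
    (larger depth value = closer), and leave holes as NaN for inpainting."""
    h, w = image.shape
    out = np.full((h, w), np.nan)
    zbuf = np.full((h, w), -np.inf)
    for y in range(h):
        for x in range(w):
            xt = int(round(x + disparity[y, x]))  # paper keeps real precision here
            if 0 <= xt < w and depth[y, x] > zbuf[y, xt]:
                zbuf[y, xt] = depth[y, x]
                out[y, xt] = image[y, x]
    return out

# Uniform disparity of +1 pixel: content shifts right, column 0 becomes a hole.
img = np.arange(9, dtype=float).reshape(3, 3)
warped = forward_warp(img, np.ones((3, 3)), np.ones((3, 3)))
```

The NaN holes correspond to the "newly exposed areas" the abstract mentions, which the authors fill with depth-aware inpainting.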

  13. Glasses for 3D ultrasound computer tomography: phase compensation

    NASA Astrophysics Data System (ADS)

    Zapf, M.; Hopp, T.; Ruiter, N. V.

    2016-03-01

    Ultrasound Computer Tomography (USCT), developed at KIT, is a promising new imaging system for breast cancer diagnosis, and was successfully tested in a pilot study. The 3D USCT II prototype consists of several hundreds of ultrasound (US) transducers on a semi-ellipsoidal aperture. Spherical waves are sequentially emitted by individual transducers and received in parallel by many transducers. Reflectivity volumes are reconstructed by synthetic aperture focusing (SAFT). However, straight forward SAFT imaging leads to blurred images due to system imperfections. We present an extension of a previously proposed approach to enhance the images. This approach includes additional a priori information and system characteristics. Now spatial phase compensation was included. The approach was evaluated with a simulation and clinical data sets. An increase in the image quality was observed and quantitatively measured by SNR and other metrics.
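The SAFT step can be summarized as delay-and-sum focusing: for each voxel, the A-scans of all emitter/receiver pairs are summed at the sample corresponding to that voxel's time of flight. A minimal illustrative sketch (not the KIT implementation, which adds phase compensation and a priori system characteristics; all names are assumptions):

```python
import numpy as np

def saft(signals, emitters, receivers, voxels, c, fs):
    """Delay-and-sum SAFT: for every voxel, sum each A-scan at the sample
    matching the emitter->voxel->receiver time of flight.
    voxels: Mx3 points (m); c: sound speed (m/s); fs: sampling rate (Hz)."""
    image = np.zeros(len(voxels))
    for sig, e, r in zip(signals, emitters, receivers):
        tof = (np.linalg.norm(voxels - e, axis=1) +
               np.linalg.norm(voxels - r, axis=1)) / c
        idx = np.round(tof * fs).astype(int)
        valid = idx < len(sig)            # ignore voxels beyond the recording
        image[valid] += sig[idx[valid]]
    return image

# One co-located emitter/receiver and a scatterer 1 cm away: the echo
# arrives after a ~13.3 us round trip, i.e. around sample 13 at 1 MHz.
c, fs = 1500.0, 1e6
sig = np.zeros(100)
sig[13] = 1.0
voxels = np.array([[0.0, 0.0, 0.01], [0.0, 0.0, 0.02]])
image = saft([sig], [np.zeros(3)], [np.zeros(3)], voxels, c, fs)
```

Focusing emerges because only voxels whose geometric time of flight matches the actual echo delay accumulate energy coherently across many transducer pairs.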

  14. Microbiological safety of glasses dispensed at 3D movie theatres.

    PubMed

    De Giusti, Maria; Marinelli, Lucia; Ursillo, Paolo; Del Cimmuto, Angela; Cottarelli, Alessia; Palazzo, Caterina; Marzuillo, Carolina; Solimini, Angelo Giuseppe; Boccia, Antonio

    2015-02-01

    The recent popularity of three-dimensional movies has raised some concern about the microbiological safety of the glasses dispensed in movie theatres. In this study, we analysed the level of microbiological contamination on them before and after use, and between theatres adopting manual and automatic sanitation systems. The manual sanitation system was more effective in reducing total mesophilic count levels compared with the automatic system (P < 0.05), but no differences were found for coagulase-positive staphylococci levels (P = 0.22). No differences were found for moulds and yeasts between before- and after-use levels (P = 0.21) or between sanitation systems (P = 0.44). We conclude that more evidence is needed to support microbiological risk evaluation.

  15. 3D laser gated viewing from a moving submarine platform

    NASA Astrophysics Data System (ADS)

    Christnacher, F.; Laurenzis, M.; Monnin, D.; Schmitt, G.; Metzger, Nicolas; Schertzer, Stéphane; Scholtz, T.

    2014-10-01

    Range-gated active imaging is a prominent technique for night vision, remote sensing and vision through obstacles (fog, smoke, camouflage netting…). Furthermore, range-gated imaging provides not only the scene reflectance but also the range for each pixel. In this paper, we discuss 3D imaging methods for underwater imaging applications. In this situation, it is particularly difficult to stabilize the imaging platform, and 3D reconstruction algorithms suffer from the motion between the different images in the recorded sequence. To overcome this drawback, we investigated a new method that combines image registration by homography with 3D scene reconstruction through tomography or a two-image technique. After stabilisation, the 3D reconstruction is achieved using the two above-mentioned techniques. In the experimental examples given in this paper, centimetric resolution could be achieved.
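The basic tomographic idea behind range-gated 3D imaging is that scanning the gate in range turns the image stack into a per-pixel range profile. A toy sketch, assuming each pixel's range is simply taken as the gate position where it is brightest (the paper's reconstruction, with motion stabilisation, is more elaborate):

```python
import numpy as np

def depth_from_gate_scan(slices, gate_ranges):
    """Assign each pixel the range of the gate in which it is brightest.
    slices: (K, H, W) stack of gated images; gate_ranges: length-K ranges (m)."""
    best = np.argmax(slices, axis=0)        # (H, W) winning gate index
    return np.asarray(gate_ranges)[best]

# Toy scan: three gates, a 2x2 image, each lit pixel returns in one gate.
slices = np.zeros((3, 2, 2))
slices[0, 0, 0] = 1.0
slices[1, 0, 1] = 1.0
slices[2, 1, 1] = 1.0
ranges = depth_from_gate_scan(slices, [10.0, 20.0, 30.0])
```

This is why platform motion between slices is so damaging: the argmax is taken per pixel across the stack, so the slices must first be registered to a common frame, which is the role of the homography step in the paper.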

  16. Automated 3D reconstruction of interiors with multiple scan views

    NASA Astrophysics Data System (ADS)

    Sequeira, Vitor; Ng, Kia C.; Wolfart, Erik; Goncalves, Joao G. M.; Hogg, David C.

    1998-12-01

    This paper presents two integrated solutions for realistic 3D model acquisition and reconstruction: an early prototype, in the form of a push trolley, and a later prototype in the form of an autonomous robot. The systems encompass all hardware and software required, from laser and video data acquisition, processing and output of texture-mapped 3D models in VRML format, to batteries for power supply and wireless network communications. The autonomous version is also equipped with a mobile platform and other sensors for automatic navigation. The applications for such a system range from real estate and tourism (e.g., showing a 3D computer model of a property to a potential buyer or tenant) to content creation (e.g., creating 3D models of heritage buildings or producing broadcast-quality virtual studios). The system can also be used in industrial environments as a reverse engineering tool to update the design of a plant, or as a 3D photo-archive for insurance purposes. The system is Internet compatible: the photo-realistic models can be accessed via the Internet and manipulated interactively in 3D using a common Web browser with a VRML plug-in. Further information and example reconstructed models are available online via the RESOLV web page at http://www.scs.leeds.ac.uk/resolv/.

  17. 3-D Perspective View, Miquelon and Saint Pierre Islands

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This image shows Miquelon and Saint Pierre Islands, located south of Newfoundland, Canada. These islands, along with five smaller islands, are a self-governing territory of France. North is in the top right corner of the image. The island of Miquelon, in the background, is divided by a thin barrier beach into Petite Miquelon on the left, and Grande Miquelon on the right. Saint Pierre Island is seen in the foreground. The maximum elevation of this land is 240 meters (787 feet). The land mass of the islands is about 242 square kilometers (94 square miles), or 1.5 times the size of Washington, DC.

    This three-dimensional perspective view is one of several still photographs taken from a simulated flyover of the islands. It shows how elevation data collected by the Shuttle Radar Topography Mission (SRTM) can be used to enhance other satellite images. Color and natural shading are provided by a Landsat 7 image taken on September 7, 1999. The Landsat image was draped over the SRTM data. Terrain perspective and shading are from SRTM. The vertical scale has been increased six times to make it easier to see the small features. This also makes the sea cliffs around the edges of the islands look larger. In this view the capital city of Saint Pierre is seen as the bright area in the foreground of the island. The thin bright line seen in the water is a breakwater that offers some walled protection for the coastal city.

    Elevation data used in this image was acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, installed additional C-band and X-band antennas, and

  18. Dynamics of 3D view invariance in monkey inferotemporal cortex.

    PubMed

    Ratan Murty, N Apurva; Arun, Sripati P

    2015-04-01

    Rotations in depth are challenging for object vision because features can appear, disappear, be stretched or compressed. Yet we easily recognize objects across views. Are the underlying representations view invariant or dependent? This question has been intensely debated in human vision, but the neuronal representations remain poorly understood. Here, we show that for naturalistic objects, neurons in the monkey inferotemporal (IT) cortex undergo a dynamic transition in time, whereby they are initially sensitive to viewpoint and later encode view-invariant object identity. This transition depended on two aspects of object structure: it was strongest when objects foreshortened strongly across views and were similar to each other. View invariance in IT neurons was present even when objects were reduced to silhouettes, suggesting that it can arise through similarity between external contours of objects across views. Our results elucidate the viewpoint debate by showing that view invariance arises dynamically in IT neurons out of a representation that is initially view dependent.

  19. LiveView3D: Real Time Data Visualization for the Aerospace Testing Environment

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; Fleming, Gary A.

    2006-01-01

    This paper addresses LiveView3D, a software package and associated data visualization system for use in the aerospace testing environment. The LiveView3D system allows researchers to graphically view data from numerous wind tunnel instruments in real time in an interactive virtual environment. The graphical nature of the LiveView3D display provides researchers with an intuitive view of the measurement data, making it easier to interpret the aerodynamic phenomenon under investigation. LiveView3D has been developed at the NASA Langley Research Center and has been applied in the Langley Unitary Plan Wind Tunnel (UPWT). This paper discusses the capabilities of the LiveView3D system, provides example results from its application in the UPWT, and outlines features planned for future implementation.

  20. Real-Depth imaging: a new (no glasses) 3D imaging technology with video/data projection applications

    NASA Astrophysics Data System (ADS)

    Dolgoff, Eugene

    1997-05-01

    Floating Images, Inc. has developed the software and hardware for a new, patent-pending 'floating 3D, off-the-screen-experience' display technology. This technology has the potential to become the next standard for home and arcade video games, computers, corporate presentations, Internet/Intranet viewing, and television. Current '3D graphics' technologies are actually flat on screen. Floating Images technology actually produces images at different depths from any display, such as CRT and LCD, for television, computer, projection, and other formats. In addition, unlike stereoscopic 3D imaging, no glasses, headgear, or other viewing aids are used. And, unlike current autostereoscopic imaging technologies, there is virtually no restriction on where viewers can sit to view the images, with no 'bad' or 'dead' zones, flipping, or pseudoscopy. In addition to providing traditional depth cues such as perspective and background image occlusion, the new technology also provides both horizontal and vertical binocular parallax, and accommodation that coincides with convergence. Since accommodation coincides with convergence, viewing these images doesn't produce headaches, fatigue, or eye-strain, regardless of how long they are viewed. The imagery must either be formatted for the Floating Images platform when written, or existing software can be reformatted without much difficulty. The optical hardware system can accommodate virtually any projection system to produce Floating Images for the boardroom, video arcade, stage shows, or the classroom.

  1. Spirit 360-Degree View, Sol 388 (3D)

    NASA Technical Reports Server (NTRS)

    2005-01-01

    [figure removed for brevity, see original site] Figure 1

    [figure removed for brevity, see original site] Figure 2

    NASA's Mars Exploration Rover Spirit used its navigation camera to take the images combined into this 360-degree view of the rover's surroundings on Spirit's 388th martian day, or sol (Feb. 4, 2005). Spirit had driven about 13 meters (43 feet) uphill toward 'Cumberland Ridge' on this sol. This location is catalogued as Spirit's Site 102, Position 513. The view is presented in a cylindrical-perspective projection with geometric and brightness seam correction.

    Figure 1 is the left-eye view of a stereo pair and Figure 2 is the right-eye view of a stereo pair.

  2. Spirit 360-Degree View on Sol 409 (3D)

    NASA Technical Reports Server (NTRS)

    2005-01-01

    [figure removed for brevity, see original site] Figure 1

    [figure removed for brevity, see original site] Figure 2

    NASA's Mars Exploration Rover Spirit used its navigation camera to take the images combined into this 360-degree view of the rover's surroundings on Spirit's 409th martian day, or sol (Feb. 26, 2005). Spirit had driven 2 meters (7 feet) on this sol to get in position on 'Cumberland Ridge' for looking into 'Tennessee Valley' to the east. This location is catalogued as Spirit's Site 108. Rover-wheel tracks from climbing the ridge are visible on the right. The summit of 'Husband Hill' is at the center, to the south. This view is presented in a cylindrical-perspective projection with geometric and brightness seam correction.

    Figure 1 is the left-eye view of a stereo pair and Figure 2 is the right-eye view of a stereo pair.

  3. Full-Circle View from Near 'Tetl' (3D)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site] Figure 1

    [figure removed for brevity, see original site] Figure 2

    This 360-degree view combines frames taken by the navigation camera on NASA's Mars Exploration Rover Spirit during the rover's 271st martian day, or sol, on Oct. 7, 2004. The rover had just driven into position for using the tools on its robotic arm (not in the picture) to examine a layered rock called 'Tetl' in the 'Columbia Hills.' Spirit's total driving distance from its landing to this point was 3,641 meters (2.26 miles), more than six times the distance set as a criterion for mission success. The three-dimensional view is presented here in a cylindrical projection with geometric seam correction.

    Figure 1 is the left-eye view of a stereo pair and Figure 2 is the right-eye view of a stereo pair.

  4. Computer-generated hologram for 3D scene from multi-view images

    NASA Astrophysics Data System (ADS)

    Chang, Eun-Young; Kang, Yun-Suk; Moon, KyungAe; Ho, Yo-Sung; Kim, Jinwoong

    2013-05-01

    Recently, computer-generated holograms (CGHs) calculated from real existing objects have been more actively investigated to support holographic video and TV applications. In this paper, we propose a method of generating a hologram of a natural 3-D scene from multi-view images in order to provide motion-parallax viewing within a suitable navigation range. After a unified 3-D point-source set describing the captured 3-D scene is obtained from the multi-view images, a hologram pattern supporting motion parallax is calculated from the set using a point-based CGH method. We confirmed that the 3-D scenes are faithfully reconstructed using numerical reconstruction.
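Point-based CGH superposes a spherical wave from every 3-D point source on the hologram plane. A hedged sketch of that principle in Python/NumPy (illustrative parameters and function names, not the authors' exact formulation, which also handles the reference wave and encoding for the display):

```python
import numpy as np

def point_cgh(points, amplitudes, holo_x, holo_y, wavelength):
    """Superpose spherical waves from 3-D point sources on the hologram
    plane z = 0 and keep the real part (an amplitude hologram)."""
    k = 2.0 * np.pi / wavelength
    X, Y = np.meshgrid(holo_x, holo_y)
    field = np.zeros(X.shape, dtype=complex)
    for (px, py, pz), a in zip(points, amplitudes):
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        field += a / r * np.exp(1j * k * r)    # spherical wave contribution
    return field.real

# One on-axis point 10 cm behind a 2 mm hologram sampled on a 64x64 grid.
xs = np.linspace(-1e-3, 1e-3, 64)
H = point_cgh([(0.0, 0.0, 0.1)], [1.0], xs, xs, 633e-9)
```

For an on-axis point the pattern is the familiar Fresnel zone structure, symmetric about the optical axis; real scenes simply add one such term per point in the unified point-source set.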

  5. Viewing 3D TV over two months produces no discernible effects on balance, coordination or eyesight.

    PubMed

    Read, Jenny C A; Godfrey, Alan; Bohr, Iwo; Simonotto, Jennifer; Galna, Brook; Smulders, Tom V

    2016-08-01

    With the rise in stereoscopic 3D media, there has been concern that viewing stereoscopic 3D (S3D) content could have long-term adverse effects, but little data are available. In the first study to address this, 28 households who did not currently own a 3D TV were given a new TV set, either S3D or 2D. The 116 members of these households all underwent tests of balance, coordination and eyesight, both before they received their new TV set, and after they had owned it for 2 months. We did not detect any changes which appeared to be associated with viewing 3D TV. We conclude that viewing 3D TV does not produce detectable effects on balance, coordination or eyesight over the timescale studied. Practitioner Summary: Concern has been expressed over possible long-term effects of stereoscopic 3D (S3D). We looked for any changes in vision, balance and coordination associated with normal home S3D TV viewing in the 2 months after first acquiring a 3D TV. We find no evidence of any changes over this timescale.

  6. Viewing 3D TV over two months produces no discernible effects on balance, coordination or eyesight

    PubMed Central

    Read, Jenny C.A.; Godfrey, Alan; Bohr, Iwo; Simonotto, Jennifer; Galna, Brook; Smulders, Tom V.

    2016-01-01

    With the rise in stereoscopic 3D media, there has been concern that viewing stereoscopic 3D (S3D) content could have long-term adverse effects, but little data are available. In the first study to address this, 28 households who did not currently own a 3D TV were given a new TV set, either S3D or 2D. The 116 members of these households all underwent tests of balance, coordination and eyesight, both before they received their new TV set, and after they had owned it for 2 months. We did not detect any changes which appeared to be associated with viewing 3D TV. We conclude that viewing 3D TV does not produce detectable effects on balance, coordination or eyesight over the timescale studied. Practitioner Summary: Concern has been expressed over possible long-term effects of stereoscopic 3D (S3D). We looked for any changes in vision, balance and coordination associated with normal home S3D TV viewing in the 2 months after first acquiring a 3D TV. We find no evidence of any changes over this timescale. PMID:26758965

  7. Fabrication of 3D microfluidic structures inside glass by femtosecond laser micromachining

    NASA Astrophysics Data System (ADS)

    Sugioka, Koji; Cheng, Ya

    2014-01-01

    Femtosecond lasers have opened up new avenues in materials processing due to their unique characteristics of ultrashort pulse widths and extremely high peak intensities. One of the most important features of femtosecond laser processing is that a femtosecond laser beam can induce strong absorption in even transparent materials due to nonlinear multiphoton absorption. This makes it possible to directly create three-dimensional (3D) microfluidic structures in glass that are of great use for fabrication of biochips. For fabrication of the 3D microfluidic structures, two technical approaches are being attempted. One of them employs femtosecond laser-induced internal modification of glass followed by wet chemical etching using an acid solution (Femtosecond laser-assisted wet chemical etching), while the other one performs femtosecond laser 3D ablation of the glass in distilled water (liquid-assisted femtosecond laser drilling). This paper provides a review on these two techniques for fabrication of 3D micro and nanofluidic structures in glass based on our development and experimental results.

  8. Opportunity's View After Sol 321 Drive (3-D)

    NASA Technical Reports Server (NTRS)

    2005-01-01

    [figure removed for brevity, see original site] Figure 1

    [figure removed for brevity, see original site] Figure 2

    NASA's Mars Exploration Rover Opportunity was on its way from 'Endurance Crater' toward the spacecraft's jettisoned heat shield when the navigation camera took the images combined into this 360-degree panorama. Opportunity drove 60 meters (197 feet) on its 321st martian day, or sol (Dec. 18, 2004). These images were taken later that sol and on the following sol. The rover had spent 181 sols inside the crater. This view is presented in a cylindrical-perspective projection without seam correction.

    Figure 1 is the left-eye view of a stereo pair and Figure 2 is the right-eye view of a stereo pair.

  9. A shortcut to align 3D images captured from multiple views

    NASA Astrophysics Data System (ADS)

    Heng, Wei; Wang, Hao

    2008-11-01

    In order to get the whole shape of an object, many partial 3D images need to be captured from multiple views and aligned into a common 3D coordinate system. That usually involves both complex software processing and an expensive hardware system. In this paper, a shortcut approach is proposed to align 3D images captured from multiple views. Employing only a calibrated turntable, a single-view 3D camera can capture a sequence of 3D images of an object from different view angles one by one, then align them quickly and automatically. The alignment doesn't need any help from the operator, and it achieves good performance: high accuracy, robustness, rapid capture and low cost. The turntable calibration can easily be implemented by the single-view 3D camera itself: fixed relative to the turntable, the camera calibrates the revolving axis of the turntable just by measuring the positions of a small calibration ball revolving with the turntable at several angles. The system then obtains the coordinate transformation formula between views at different revolving angles by an LMS algorithm. The formulae for calibration and alignment are given with a precision analysis. Experiments were performed and showed effective results in recovering 3D objects.
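The core alignment step is a rotation of each captured cloud about the calibrated revolving axis. A minimal sketch using Rodrigues' rotation formula (the function name and the axis-point/axis-direction parameters are illustrative assumptions, not the paper's notation):

```python
import numpy as np

def rotate_about_axis(points, axis_point, axis_dir, angle):
    """Map a cloud captured at turntable angle `angle` back to the reference
    view by rotating it about the calibrated revolving axis (Rodrigues)."""
    u = axis_dir / np.linalg.norm(axis_dir)
    K = np.array([[0.0, -u[2], u[1]],
                  [u[2], 0.0, -u[0]],
                  [-u[1], u[0], 0.0]])            # cross-product matrix of u
    R = np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)
    return (points - axis_point) @ R.T + axis_point

# A point on the x-axis rotated 90 degrees about the z-axis through the origin.
p = rotate_about_axis(np.array([[1.0, 0.0, 0.0]]),
                      np.zeros(3), np.array([0.0, 0.0, 1.0]), np.pi / 2)
```

Once the axis point and direction are known from the calibration-ball measurements, every view's alignment reduces to one such closed-form rotation, which is why no operator interaction is needed.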

  10. Recognition by Humans and Pigeons of Novel Views of 3-D Objects and Their Photographs

    ERIC Educational Resources Information Center

    Friedman, Alinda; Spetch, Marcia L.; Ferrey, Anne

    2005-01-01

    Humans and pigeons were trained to discriminate between 2 views of actual 3-D objects or their photographs. They were tested on novel views that were either within the closest rotational distance between the training views (interpolated) or outside of that range (extrapolated). When training views were 60° apart, pigeons, but not humans,…

  11. Venus - 3D Perspective View of Gula Mons

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Gula Mons is displayed in this computer-simulated view of the surface of Venus. The viewpoint is located 110 kilometers (68 miles) southwest of Gula Mons at the same elevation as the summit, 3 kilometers (1.9 miles) above Eistla Regio. Lava flows extend for hundreds of kilometers across the fractured plains. The view is to the northeast with Gula Mons appearing at the center of the image. Gula Mons, a 3 kilometer (1.9 mile) high volcano, is located at approximately 22 degrees north latitude, 359 degrees east longitude in western Eistla Regio. Magellan synthetic aperture radar data is combined with radar altimetry to produce a three-dimensional map of the surface. Rays cast in a computer intersect the surface to create a three-dimensional perspective view. Simulated color and a digital elevation map developed by the U.S. Geological Survey are used to enhance small-scale structure. The simulated hues are based on color images recorded by the Soviet Venera 13 and 14 spacecraft. The image was produced by the JPL Multimission Image Processing Laboratory and is a single frame from a video released at the March 5, 1991, JPL news conference.

  12. Venus - 3D Perspective View of Sif Mons

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Sif Mons is displayed in this computer-simulated view of the surface of Venus. The viewpoint is located 360 kilometers (223 miles) north of Sif Mons at a height of 7.5 kilometers (4.7 miles) above the lava flows. Lava flows extend for hundreds of kilometers across the fractured plains shown in the foreground to the base of Sif Mons. The view is to the south. Sif Mons, a volcano with a diameter of 300 kilometers (186 miles) and a height of 2 kilometers (1.2 miles), appears in the upper half of the image. Magellan synthetic aperture radar data is combined with radar altimetry to produce a three-dimensional map of the surface. Rays, cast in a computer, intersect the surface to create a three-dimensional perspective view. Simulated color and a digital elevation map developed by the U.S. Geological Survey are used to enhance small-scale structure. The simulated hues are based on color images recorded by the Soviet Venera 13 and 14 spacecraft. The image was produced at the JPL Multimission Image Processing Laboratory and is a single frame from a video released at the March 5, 1991, JPL news conference.

  13. Venus - 3D Perspective View of Eistla Regio

    NASA Technical Reports Server (NTRS)

    1991-01-01

    A portion of western Eistla Regio is displayed in this three-dimensional perspective view of the surface of Venus. The viewpoint is located 1,310 kilometers (812 miles) southwest of Gula Mons at an elevation of 0.78 kilometer (0.48 mile). The view is to the northeast with Gula Mons appearing on the horizon. Gula Mons, a 3 kilometer (1.86 mile) high volcano, is located at approximately 22 degrees north latitude, 359 degrees east longitude. The impact crater Cunitz, named for the astronomer and mathematician Maria Cunitz, is visible in the center of the image. The crater is 48.5 kilometers (30 miles) in diameter and is 215 kilometers (133 miles) from the viewer's position. Magellan synthetic aperture radar data is combined with radar altimetry to develop a three-dimensional map of the surface. Rays cast in a computer intersect the surface to create a three-dimensional perspective view. Simulated color and a digital elevation map developed by the U.S. Geological Survey are used to enhance small-scale structure. The simulated hues are based on color images recorded by the Soviet Venera 13 and 14 spacecraft. The image was produced at the JPL Multimission Image Processing Laboratory and is a single frame from a video released at the March 5, 1991, JPL news conference.

  14. Venus - 3D Perspective View of Maat Mons

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Maat Mons is displayed in this three-dimensional perspective view of the surface of Venus. The viewpoint is located 560 kilometers (347 miles) north of Maat Mons at an elevation of 1.7 kilometers (1 mile) above the terrain. Lava flows extend for hundreds of kilometers across the fractured plains shown in the foreground, to the base of Maat Mons. The view is to the south with Maat Mons appearing at the center of the image on the horizon. Maat Mons, an 8-kilometer (5 mile) high volcano, is located at approximately 0.9 degrees north latitude, 194.5 degrees east longitude. Maat Mons is named for an Egyptian goddess of truth and justice. Magellan synthetic aperture radar data is combined with radar altimetry to develop a three-dimensional map of the surface. The vertical scale in this perspective has been exaggerated 22.5 times. Rays cast in a computer intersect the surface to create a three-dimensional perspective view. Simulated color and a digital elevation map developed by the U.S. Geological Survey are used to enhance small-scale structure. The simulated hues are based on color images recorded by the Soviet Venera 13 and 14 spacecraft. The image was produced at the JPL Multimission Image Processing Laboratory.

  15. Dual-view integral imaging 3D display using polarizer parallax barriers.

    PubMed

    Wu, Fei; Wang, Qiong-Hua; Luo, Cheng-Gao; Li, Da-Hai; Deng, Huan

    2014-04-01

    We propose a dual-view integral imaging (DVII) 3D display using polarizer parallax barriers (PPBs). The DVII 3D display consists of a display panel, a microlens array, and two PPBs. The elemental images (EIs) displayed on the left and right half of the display panel are captured from two different 3D scenes, respectively. The lights emitted from two kinds of EIs are modulated by the left and right half of the microlens array to present two different 3D images, respectively. A prototype of the DVII 3D display is developed, and the experimental results agree well with the theory.

  16. Venus - 3D Perspective View of Sapas Mons

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Sapas Mons is displayed in the center of this computer-generated three-dimensional perspective view of the surface of Venus. The viewpoint is located 527 kilometers (327 miles) northwest of Sapas Mons at an elevation of 4 kilometers (2.5 miles) above the terrain. Lava flows extend for hundreds of kilometers across the fractured plains shown in the foreground to the base of Sapas Mons. The view is to the southeast with Sapas Mons appearing at the center with Maat Mons located in the background on the horizon. Sapas Mons, a volcano 400 kilometers (248 miles) across and 1.5 kilometers (0.9 mile) high, is located at approximately 8 degrees north latitude, 188 degrees east longitude, on the western edge of Atla Regio. Its peak sits at an elevation of 4.5 kilometers (2.8 miles) above the planet's mean elevation. Sapas Mons is named for a Phoenician goddess. The vertical scale in this perspective has been exaggerated 10 times. Rays cast in a computer intersect the surface to create a three-dimensional perspective view. Simulated color and a digital elevation map developed by the U.S. Geological Survey are used to enhance small-scale structure. The simulated hues are based on color images recorded by the Soviet Venera 13 and 14 spacecraft. The image was produced by the Solar System Visualization project and the Magellan Science team at the JPL Multimission Image Processing Laboratory and is a single frame from a video released at the April 22, 1992 news conference.

  17. Color and 3D views of the Sierra Nevada mountains

    NASA Technical Reports Server (NTRS)

    2002-01-01

    These true-color images covering north-central New Mexico capture the bluish-white smoke plume of the Los Alamos fire, just west of the Rio Grande river. The middle image is a downward-looking (nadir) view, taken by the Multi-angle Imaging SpectroRadiometer (MISR) instrument on NASA's Terra satellite. As MISR flew from north to south, it viewed the scene from nine different angles. The top image was taken by the MISR camera looking 60 degrees forward along the orbit, whereas the bottom image looks 60 degrees aft. The plume stands out more dramatically in the steep-angle views. Its color and brightness also change with angle. By comparison, a thin, white, water cloud appears in the upper right portion of the scene, and is most easily detected in the top image. MISR uses these angle-to-angle differences to monitor particulate pollution and to identify different types of haze. Such observations allow scientists to study how airborne particles interact with sunlight, a measure of their impact on Earth's climate system. The images are about 400 km (250 miles) wide. The spatial resolution of the nadir image is 275 meters (300 yards); it is 1.1 kilometers (1,200 yards) for the off-nadir images. North is toward the top. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology. For more information, see the MISR web site. Image courtesy NASA/GSFC/JPL, MISR Science Team.

  18. Space Radar Image of Long Valley, California - 3D view

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is a three-dimensional perspective view of Long Valley, California by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar on board the space shuttle Endeavour. This view was constructed by overlaying a color composite SIR-C image on a digital elevation map. The digital elevation map was produced using radar interferometry, a process by which radar data are acquired on different passes of the space shuttle and are then compared to obtain elevation information. The data were acquired on April 13, 1994 and on October 3, 1994, during the first and second flights of the SIR-C/X-SAR radar instrument. The color composite radar image was produced by assigning red to the C-band (horizontally transmitted and vertically received) polarization; green to the C-band (vertically transmitted and received) polarization; and blue to the ratio of the two data sets. Blue areas in the image are smooth and yellow areas are rock outcrops with varying amounts of snow and vegetation. The view is looking north along the northeastern edge of the Long Valley caldera, a volcanic collapse feature created 750,000 years ago and the site of continued subsurface activity. Crowley Lake is off the image to the left. Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity.
SIR-C was developed by NASA's Jet Propulsion Laboratory.

  19. The Twin Peaks in 3-D, as Viewed by the Mars Pathfinder IMP Camera

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The Twin Peaks are modest-size hills to the southwest of the Mars Pathfinder landing site. They were discovered on the first panoramas taken by the IMP camera on the 4th of July, 1997, and subsequently identified in Viking Orbiter images taken over 20 years ago. The peaks are approximately 30-35 meters (~100 feet) tall. North Twin is approximately 860 meters (2800 feet) from the lander, and South Twin is about a kilometer away (3300 feet). The scene includes bouldery ridges and swales or 'hummocks' of flood debris that range from a few tens of meters away from the lander to the distance of the South Twin Peak. The large rock at the right edge of the scene is nicknamed 'Hippo'. This rock is about a meter (3 feet) across and 25 meters (80 feet) distant.

    This view of the Twin Peaks was produced by combining 4 individual 'Superpan' scenes from the left and right eyes of the IMP camera to cover both peaks. Each scene consists of 8 individual frames (left eye) and 7 frames (right eye) taken with different color filters that were enlarged by 500% and then co-added using Adobe Photoshop to produce, in effect, a super-resolution panchromatic frame that is sharper than an individual frame would be.

    The anaglyph view of the Twin Peaks was produced by combining the left and right eye mosaics (above) by assigning the left eye view to the red color plane and the right eye view to the green and blue color planes (cyan), to produce a stereo anaglyph mosaic. This mosaic can be viewed in 3-D on your computer monitor or in color print form by wearing red-blue 3-D glasses.
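
The channel assignment described above (left eye to red, right eye to green and blue) can be sketched in a few lines of NumPy; the array names are illustrative, not part of the original processing:

```python
import numpy as np

def make_anaglyph(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Fuse two grayscale views (H x W, uint8) into a red/cyan anaglyph:
    left-eye view -> red plane; right-eye view -> green and blue (cyan)."""
    anaglyph = np.empty(left.shape + (3,), dtype=np.uint8)
    anaglyph[..., 0] = left   # red   <- left eye
    anaglyph[..., 1] = right  # green <- right eye
    anaglyph[..., 2] = right  # blue  <- right eye
    return anaglyph
```

Viewed through red-blue glasses, each eye then receives only its own view, producing the stereo effect.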

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is a division of the California Institute of Technology (Caltech). The IMP was developed by the University of Arizona Lunar and Planetary

  20. Venus - 3D Perspective View of Maat Mons

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Maat Mons is displayed in this computer-generated three-dimensional perspective of the surface of Venus. The viewpoint is located 634 kilometers (393 miles) north of Maat Mons at an elevation of 3 kilometers (2 miles) above the terrain. Lava flows extend for hundreds of kilometers across the fractured plains shown in the foreground, to the base of Maat Mons. The view is to the south with the volcano Maat Mons appearing at the center of the image on the horizon and rising to almost 5 kilometers (3 miles) above the surrounding terrain. Maat Mons is located at approximately 0.9 degrees north latitude, 194.5 degrees east longitude with a peak that ascends to 8 kilometers (5 miles) above the mean surface. Maat Mons is named for an Egyptian goddess of truth and justice. Magellan synthetic aperture radar data is combined with radar altimetry to develop a three-dimensional map of the surface. The vertical scale in this perspective has been exaggerated 10 times. Rays cast in a computer intersect the surface to create a three-dimensional perspective view. Simulated color and a digital elevation map developed by the U.S. Geological Survey are used to enhance small-scale structure. The simulated hues are based on color images recorded by the Soviet Venera 13 and 14 spacecraft. The image was produced by the Solar System Visualization project and the Magellan Science team at the JPL Multimission Image Processing Laboratory and is a single frame from a video released at the April 22, 1992 news conference.

  1. Venus - 3D Perspective View of Eistla Regio

    NASA Technical Reports Server (NTRS)

    1991-01-01

    A portion of western Eistla Regio is displayed in this three-dimensional perspective view of the surface of Venus. The viewpoint is located 1,100 kilometers (682 miles) northeast of Gula Mons at an elevation of 7.5 kilometers (4.6 miles). Lava flows extend for hundreds of kilometers across the fractured plains shown in the foreground to the base of Gula Mons. The view is to the southwest with Gula Mons appearing at the left just below the horizon. Gula Mons, a 3 kilometer (1.8 mile) high volcano, is located at approximately 22 degrees north latitude, 359 degrees east longitude. Sif Mons, a volcano with a diameter of 300 kilometers (180 miles) and a height of 2 kilometers (1.2 miles), appears to the right of Gula Mons. The distance between Sif Mons and Gula Mons is approximately 730 kilometers (453 miles). Magellan synthetic aperture radar data is combined with radar altimetry to develop a three-dimensional map of the surface. Ray tracing (rays as if from a light source are cast in a computer to intersect the surface) simulates a perspective view. Simulated color and a digital elevation map developed by Randy Kirk of the U.S. Geological Survey are used to enhance small-scale structure. The simulated hues are based on color images recorded by the Soviet Venera 13 and 14 spacecraft. The image was produced at the JPL Multimission Image Processing Laboratory by Eric De Jong, Jeff Hall and Myche McAuley, and is a single frame from a video released at a March 5, 1991, JPL news conference.

  2. Advanced Visualization of Experimental Data in Real Time Using LiveView3D

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; Fleming, Gary A.

    2006-01-01

    LiveView3D is a software application that imports and displays a variety of wind tunnel derived data in an interactive virtual environment in real time. LiveView3D combines the use of streaming video fed into a three-dimensional virtual representation of the test configuration with networked communications to the test facility Data Acquisition System (DAS). This unified approach to real time data visualization provides a unique opportunity to comprehend very large sets of diverse forms of data in a real time situation, as well as in post-test analysis. This paper describes how LiveView3D has been implemented to visualize diverse forms of aerodynamic data gathered during wind tunnel experiments, most notably at the NASA Langley Research Center Unitary Plan Wind Tunnel (UPWT). Planned future developments of the LiveView3D system are also addressed.

  3. The use of Interferometric Microscopy to assess 3D modifications of deteriorated medieval glass.

    NASA Astrophysics Data System (ADS)

    Gentaz, L.; Lombardo, T.; Chabas, A.

    2012-04-01

    Due to its low durability, Northern European medieval glass undergoes the action of the atmospheric environment, leading in some cases to a state of dramatic deterioration. Modification features vary from a simple loss of transparency to severe material loss. In order to understand the underlying mechanisms and preserve this heritage, fundamental research is necessary. With this in mind, field exposure of analogues and original stained glass was carried out to study the early stages of glass weathering. Model glass and original stained glass (after removal of deterioration products) were exposed in real conditions at an urban site (Paris) for 48 months. Regular withdrawal of samples allowed a follow-up of short-term glass evolution. Morphological modifications of the exposed samples were investigated through conventional and non-destructive microscopy, using respectively a Scanning Electron Microscope (SEM) and an Interferometric Microscope (IM). The latter allows a 3D quantification of the object with no sample preparation. For all glasses, both surface recession and build-up of deposits were observed as a consequence of a leaching process (interdiffusion of protons and glass cations). The build-up of a deposit comes from the reaction between the extracted glass cations and atmospheric gases. Surface recession, instead, is due mainly to the formation of a brittle layer of altered glass at the sub-surface, where a fracture network can appear, leading to the scaling off of parts of this modified glass. Finally, dissolution of the glass takes place, inducing the formation of pits and craters. The arithmetic roughness (Ra) was used as an indicator of increasing weathering, in order to evaluate the deterioration state. For instance, Ra grew from a few tens of nm for pristine glass to thousands of nm for scaled areas. This technique also allowed a precise quantification of the dimensions (height, depth and width) of deposits and pits, and the estimation of their overall
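
The arithmetic roughness Ra used here as a weathering indicator is simply the mean absolute deviation of surface heights from their mean line. A minimal sketch (the sample values are illustrative, not measured data from the paper):

```python
import numpy as np

def arithmetic_roughness(profile_nm: np.ndarray) -> float:
    """Arithmetic roughness Ra: mean absolute deviation of surface
    heights from their mean line (the standard ISO 4287 definition)."""
    profile_nm = np.asarray(profile_nm, dtype=float)
    return float(np.mean(np.abs(profile_nm - profile_nm.mean())))

# Illustrative height profiles, in nm:
pristine = np.array([10.0, -12.0, 8.0, -6.0])        # a few tens of nm
scaled = np.array([1500.0, -2200.0, 1800.0, -1100.0])  # thousands of nm
```

In an interferometric-microscope dataset the same formula is applied per measured height map, so Ra rising by two to three orders of magnitude flags the scaled areas described above.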

  4. 3D reconstruction based on multiple views for close-range objects

    NASA Astrophysics Data System (ADS)

    Ji, Zheng; Zhang, Jianqing

    2007-06-01

    It is difficult for traditional photogrammetry techniques to reconstruct 3D models of close-range objects. To overcome this restriction and realize 3D reconstruction of complex objects, we present a practical approach based on multi-baseline stereo vision. This incorporates image matching based on short-baseline multiple views, 3D measurement based on multi-ray intersection, and 3D reconstruction of the object based on a TIN or a parametric geometric model. Different complex objects were reconstructed in this way. The results demonstrate the feasibility and effectiveness of the method.

  5. 3D multi-view system using electro-wetting liquid lenticular lenses

    NASA Astrophysics Data System (ADS)

    Won, Yong Hyub; Kim, Junoh; Kim, Cheoljoong; Shin, Dooseub; Lee, Junsik; Koo, Gyohyun

    2016-06-01

    Lenticular multi-view systems have great potential for three-dimensional image realization. This paper introduces the fabrication of a liquid lenticular lens array and an idea for increasing the number of view points at the same resolution. A tunable liquid lens array can produce three-dimensional images using the electro-wetting principle, in which applied voltage changes the balance of surface tensions. The liquid lenticular device consists of a chamber, two different liquids and a sealing plate. To fabricate the chamber, a <100> silicon wafer is wet-etched in KOH solution, and after a certain time a trapezoid-shaped chamber is formed. The chamber's slanted walls are advantageous for electro-wetting, achieving high dioptric power. Electroplating is used to make a nickel mold, and a poly(methyl methacrylate) (PMMA) chamber is fabricated through an embossing process. Indium tin oxide (ITO) is sputtered, and parylene C and Teflon AF1600 are deposited as the dielectric and hydrophobic layers, respectively. Two immiscible liquids, D.I. water and a mixture of 1-chloronaphthalene and dodecane, are injected, and a glass sealing plate is attached with polycarbonate (PC) gaskets and sealed by UV adhesive. The completed lenticular lens shows 2D and 3D images when certain voltages are applied. The dioptric power and operation speed of the lenticular lens array are measured. A novel idea, increasing the number of viewpoints by an electrode-separation process, is also proposed: the left and right electrodes of a lenticular lens can be driven at different voltages, resulting in a tilted optical axis. By switching the optical axis quickly, twice the number of view points can be achieved at the same pixel resolution.
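
The electro-wetting principle invoked above is conventionally modeled by the Young-Lippmann relation, which gives the contact angle as a function of applied voltage. A sketch with illustrative parameter values (none taken from this paper):

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def contact_angle_deg(V, theta0_deg, eps_r, d, gamma):
    """Young-Lippmann relation for electro-wetting on dielectric:
    cos(theta) = cos(theta0) + eps0*eps_r*V^2 / (2*gamma*d),
    where theta0 is the zero-voltage contact angle, eps_r and d the
    dielectric's permittivity and thickness, gamma the interfacial tension."""
    c = math.cos(math.radians(theta0_deg)) + EPS0 * eps_r * V**2 / (2.0 * gamma * d)
    c = max(-1.0, min(1.0, c))  # clamp: real devices show contact-angle saturation
    return math.degrees(math.acos(c))
```

As the voltage rises the contact angle falls, steepening the liquid-liquid meniscus and hence raising the dioptric power of each lenticular element.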

  6. Dual-view integral imaging 3D display by using orthogonal polarizer array and polarization switcher.

    PubMed

    Wang, Qiong-Hua; Ji, Chao-Chao; Li, Lei; Deng, Huan

    2016-01-11

    In this paper, a dual-view integral imaging three-dimensional (3D) display consisting of a display panel, two orthogonal polarizer arrays, a polarization switcher, and a micro-lens array is proposed. Two elemental image arrays for two different 3D images are presented by the display panel alternately, and the polarization switcher controls the polarization direction of the light rays synchronously. The two elemental image arrays are modulated by their corresponding and neighboring micro-lenses of the micro-lens array, and reconstruct two different 3D images in viewing zones 1 and 2, respectively. A prototype of the dual-view integral imaging 3D display is developed, and it shows good performance.

  7. Sweeping View of the 'Columbia Hills' and Gusev Crater (3-D)

    NASA Technical Reports Server (NTRS)

    2005-01-01

    Mars.

    It took seven days, from sols 591 to 597 (Sept. 1 to Sept. 7, 2005) of its exploration of Mars, for Spirit's panoramic camera to acquire all the images combined into this mosaic. This panorama covers a field of view just under 180 degrees from left to right. This stereo view is presented in a cylindrical-perspective projection with geometric seam correction. The stereo image may be viewed with standard blue and red 3-D glasses.

  8. Facile synthesis 3D flexible core-shell graphene/glass fiber via chemical vapor deposition

    PubMed Central

    2014-01-01

    Direct deposition of graphene layers on the flexible glass fiber surface to form three-dimensional (3D) core-shell structures is demonstrated using a two-heating-reactor chemical vapor deposition system. The two-heating reactor offers sufficient, well-proportioned floating C atoms and provides a facile route to low-temperature deposition. Graphene layers, controlled by changing the growth time, can be grown on the surface of wire-type glass fibers with diameters from 30 nm to 120 μm. A core-shell graphene/glass fiber deposition mechanism is proposed, suggesting that 3D graphene films can be deposited on any suitable wire-type substrate. These results open a facile way for direct and high-efficiency deposition of transfer-free graphene layers on low-temperature dielectric wire-type substrates. PACS 81.05.U-; 81.07.-b; 81.15.Gh PMID:25170331

  9. 3D Modeling from Multi-views Images for Cultural Heritage in Wat-Pho, Thailand

    NASA Astrophysics Data System (ADS)

    Soontranon, N.; Srestasathiern, P.; Lawawirojwong, S.

    2015-08-01

    In Thailand, there are several types of (tangible) cultural heritage. This work focuses on 3D modeling of heritage objects from multi-view images. The images are acquired using a DSLR camera which costs around 1,500 (camera and lens). Compared with a 3D laser scanner, the camera is cheaper and lighter, so it is accessible to public users and convenient for reaching narrow areas. The acquired images cover various sculptures and architectures in Wat-Pho, a Buddhist temple located behind the Grand Palace (Bangkok, Thailand). Wat-Pho is known as the temple of the reclining Buddha and the birthplace of traditional Thai massage. To compute the 3D models, the workflow is separated into the following steps: data acquisition, image matching, image calibration and orientation, dense matching, and point cloud processing. For this initial work, small heritage objects less than 3 meters in height are considered for the experimental results. A set of multi-view images of an object of interest is used as input data for 3D modeling. In our experiments, 3D models are obtained with the MICMAC (open source) software developed by IGN, France. The output 3D models are represented using standard formats for 3D point clouds and triangulated surfaces such as .ply, .off, .obj, etc. To obtain efficient 3D models, post-processing techniques such as noise reduction, surface simplification and reconstruction are required for the final results. The reconstructed 3D models can be provided for public access via websites, DVDs, and printed materials. The high-accuracy 3D models can also serve as reference data for heritage objects that must be restored due to deterioration over a lifetime, natural disasters, etc.

  10. A 3D measurement method based on multi-view fringe projection by using a turntable

    NASA Astrophysics Data System (ADS)

    Song, Li-mei; Gao, Yan-yan; Zhu, Xin-jun; Guo, Qing-hua; Xi, Jiang-tao

    2016-09-01

    In order to obtain complete data in optical measurement, a multi-view three-dimensional (3D) measurement method based on a turntable is proposed. In this method, a turntable rotates the object to obtain multi-view point cloud data, and the multi-view point clouds are then registered and integrated into a single 3D model. The measurement results are compared with those of the sticking-marked-point method. Experimental results show that the measurement process of the proposed method is simpler, and the scanning speed and accuracy are improved.
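
With a calibrated turntable the registration step is purely geometric: each view's point cloud is rotated back by the known table angle about the rotation axis and the clouds are concatenated. A minimal sketch (assuming the rotation axis is the z axis of the measurement frame; not the authors' code):

```python
import numpy as np

def rotate_about_z(points, angle_deg):
    """Rotate an (N, 3) point cloud about the turntable (z) axis."""
    a = np.radians(angle_deg)
    R = np.array([[np.cos(a), -np.sin(a), 0.0],
                  [np.sin(a),  np.cos(a), 0.0],
                  [0.0,        0.0,       1.0]])
    return points @ R.T

def register_views(views_with_angles):
    """Undo each view's known turntable rotation and merge the clouds.

    views_with_angles: iterable of (points, table_angle_deg) pairs."""
    return np.vstack([rotate_about_z(p, -ang) for p, ang in views_with_angles])
```

In practice the axis position and direction come from turntable calibration, and a fine registration (e.g. ICP) may follow; the sketch shows only the coarse, angle-driven step.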

  11. The numerical integration and 3-D finite element formulation of a viscoelastic model of glass

    SciTech Connect

    Chambers, R.S.

    1994-08-01

    The use of glasses is widespread in making hermetic, insulating seals for many electronic components. Flat panel displays and fiber optic connectors are other products utilizing glass as a structural element. When glass is cooled from sealing temperatures, residual stresses are generated due to mismatches in thermal shrinkage created by the dissimilar material properties of the adjoining materials. Because glass is such a brittle material at room temperature, tensile residual stresses must be kept small to ensure durability and avoid cracking. Although production designs and the required manufacturing process development can be deduced empirically, this is an expensive and time consuming process that does not necessarily lead to an optimal design. Agile manufacturing demands that analyses be used to reduce development costs and schedules by providing insight and guiding the design process through the development cycle. To make these gains, however, viscoelastic models of glass must be available along with the right tool to use them. A viscoelastic model of glass can be used to simulate the stress and volume relaxation that occurs at elevated temperatures as the molecular structure of the glass seeks to equilibrate to the state of the supercooled liquid. The substance of the numerical treatment needed to support the implementation of the model in a 3-D finite element program is presented herein. An accurate second-order, central difference integrator is proposed for the constitutive equations, and numerical solutions are compared to those obtained with other integrators. Inherent convergence problems are reviewed and fixes are described. The resulting algorithms are generally applicable to the broad class of viscoelastic material models. First-order error estimates are used as a basis for developing a scheme for automatic time step controls, and several demonstration problems are presented to illustrate the performance of the methodology.
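
The flavor of a second-order central-difference (trapezoidal) integrator for viscoelastic constitutive equations can be shown on a single 1D Maxwell element, a drastic simplification of the paper's full 3-D glass model (illustrative only):

```python
def maxwell_step(sigma_n, d_eps, dt, E, tau):
    """One central-difference step for a 1D Maxwell element,
        d(sigma)/dt = E * d(eps)/dt - sigma / tau.
    Evaluating the relaxation term at the midpoint,
        sigma_{n+1/2} ~ (sigma_n + sigma_{n+1}) / 2,
    and solving for sigma_{n+1} gives a second-order accurate,
    unconditionally stable update."""
    a = dt / (2.0 * tau)
    return (sigma_n * (1.0 - a) + E * d_eps) / (1.0 + a)
```

Iterating `maxwell_step` at constant strain (`d_eps = 0`) reproduces the exponential stress relaxation `exp(-t/tau)` to second order in the time step, which is the behavior an automatic step-size controller built on first-order error estimates would monitor.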

  12. Complementary cellophane optic gate and its use for a 3D iPad without glasses

    NASA Astrophysics Data System (ADS)

    Iizuka, K.

    2012-04-01

    A complementary cellophane optic gate was fabricated using a birefringent cellophane sheet. Previous versions of the optic gate required the retardance of the cellophane to be as close to 180° as possible throughout the entire visible wavelength range, which meant it was often difficult to find a cellophane sheet with the right thickness and dispersion characteristics to meet this requirement. The complementary optic gate reported in this paper has no restriction on the thickness, composition, or wavelength range of the cellophane sheet except that the cellophane must have some birefringence. Even with an arbitrary retardance, an extinction ratio of 5 × 10⁻³ was achieved at λ = 0.63 μm. The optic gate was used to convert an iPad into a 3D display without the need for the observer to wear glasses. The high extinction ratio of the optic gate resulted in a 3D display of supreme quality.

  13. Augmented Reality Imaging System: 3D Viewing of a Breast Cancer

    PubMed Central

    Douglas, David B.; Boone, John M.; Petricoin, Emanuel; Liotta, Lance; Wilson, Eugene

    2016-01-01

    Objective To display images of breast cancer from a dedicated breast CT using Depth 3-Dimensional (D3D) augmented reality. Methods A case of breast cancer imaged using contrast-enhanced breast CT (Computed Tomography) was viewed with the augmented reality imaging system, which uses a head display unit (HDU) and joystick control interface. Results The augmented reality system demonstrated 3D viewing of the breast mass with head position tracking, stereoscopic depth perception, focal point convergence, and a 3D cursor and joystick-enabled fly-through with visualization of the spiculations extending from the breast cancer. Conclusion The augmented reality system provided 3D visualization of the breast cancer with depth perception and visualization of the mass's spiculations. The augmented reality system should be further researched to determine its utility in clinical practice. PMID:27774517

  14. Calibration and 3D reconstruction of underwater objects with non-single-view projection model by structured light stereo imaging.

    PubMed

    Wang, Yexin; Negahdaripour, Shahriar; Aykin, Murat D

    2016-08-20

    Establishing the projection model of imaging systems is critical in 3D reconstruction of object shapes from multiple 2D views. When deployed underwater, these are enclosed in waterproof housings with transparent glass ports that generate nonlinear refractions of optical rays at interfaces, leading to invalidation of the commonly assumed single-viewpoint (SVP) model. In this paper, we propose a non-SVP ray tracing model for the calibration of a projector-camera system, employed for 3D reconstruction based on the structured light paradigm. The projector utilizes dot patterns, having established that the contrast loss is less severe than for traditional stripe patterns in highly turbid waters. Experimental results are presented to assess the achieved calibrating accuracy. PMID:27556973

  15. Linear programming approach to optimize 3D data obtained from multiple view angiograms

    NASA Astrophysics Data System (ADS)

    Noël, Peter B.; Xu, Jinhui; Hoffmann, Kenneth R.; Singh, Vikas; Schafer, Sebastian; Walczak, Alan M.

    2007-03-01

    Three-dimensional (3D) vessel data from CTA or MRA are not always available prior to or during endovascular interventional procedures, whereas multiple 2D projection angiograms often are. Unfortunately, patient movement, table movement, and gantry sag during angiographic procedures can lead to large errors in gantry-based imaging geometries and thereby incorrect 3D. Therefore, we are developing methods for combining vessel data from multiple 2D angiographic views obtained during interventional procedures to provide 3D vessel data during these procedures. Multiple 2D projection views of carotid vessels are obtained, and the vessel centerlines are indicated. For each pair of views, endpoints of the 3D centerlines are reconstructed using triangulation based on the provided gantry geometry. Previous investigations indicated that translation errors were the primary source of error in the reconstructed 3D. Therefore, the errors in the translations relating the imaging systems are corrected by minimizing the L1 distance between the reconstructed endpoints, after which the 3D centerlines are reconstructed using epipolar constraints for every pair of views. Evaluations were performed using simulations, phantom data, and clinical cases. In simulation and phantom studies, the RMS error decreased from 6.0 mm obtained with biplane approaches to 0.5 mm with our technique. Centerlines in clinical cases are smoother and more consistent than those calculated from individual biplane pairs. The 3D centerlines are calculated in about 2 seconds. These results indicate that reliable 3D vessel data can be generated for treatment planning or revision during interventional procedures.
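
For a pure translation error, the L1 minimization over reconstructed endpoint sets separates per axis and has a closed form: the component-wise median of the endpoint differences, which is robust to a few badly reconstructed points. A hypothetical sketch of that one step (not the authors' full geometry-correction pipeline):

```python
import numpy as np

def l1_translation(points_a, points_b):
    """Translation t minimizing sum_i ||(a_i + t) - b_i||_1.
    The L1 objective separates per coordinate, and each 1D problem
    is minimized by the median of the residuals along that axis."""
    return np.median(np.asarray(points_b) - np.asarray(points_a), axis=0)
```

After shifting one imaging system by `l1_translation`, the 3D centerlines would be re-triangulated pairwise under epipolar constraints, as described above.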

  16. A stereo matching model observer for stereoscopic viewing of 3D medical images

    NASA Astrophysics Data System (ADS)

    Wen, Gezheng; Markey, Mia K.; Muralidhar, Gautam S.

    2014-03-01

    Stereoscopic viewing of 3D medical imaging data has the potential to increase the detection of abnormalities. We present a new stereo model observer inspired by the characteristics of stereopsis in human vision. Given a stereo pair of images of an object (i.e., left and right images separated by a small displacement), the model observer first finds the corresponding points between the two views, and then fuses them together to create a 2D cyclopean view. Assuming that the cyclopean view has extracted most of the 3D information presented in the stereo pair, a channelized Hotelling observer (CHO) can be utilized to make decisions. We conduct a simulation study that attempts to mimic the detection of breast lesions on stereoscopic viewing of breast tomosynthesis projection images. We render voxel datasets that contain random 3D power-law noise to model normal breast tissues with various breast densities. A 3D Gaussian signal is added to some of the datasets to model the presence of a breast lesion. By changing the separation angle between the two views, multiple stereo pairs of projection images are generated for each voxel dataset. The performance of the model is evaluated in terms of the accuracy of binary decisions on the presence of the simulated lesions.
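    Power-law background texture of the kind used in such simulation studies can be generated by shaping white noise in the Fourier domain. A 2D sketch (the study uses 3D volumes; the exponent and size here are illustrative):

    ```python
    import numpy as np

    def power_law_noise(shape, beta=3.0, seed=0):
        """Random field whose power spectrum falls off as ~1/f^beta (2D sketch)."""
        rng = np.random.default_rng(seed)
        white = rng.standard_normal(shape)
        fy = np.fft.fftfreq(shape[0])[:, None]
        fx = np.fft.fftfreq(shape[1])[None, :]
        f = np.hypot(fy, fx)
        f[0, 0] = np.inf                 # kill the DC term
        amplitude = f ** (-beta / 2.0)   # 1/f^(beta/2) amplitude filter
        field = np.fft.ifft2(np.fft.fft2(white) * amplitude).real
        return (field - field.mean()) / field.std()  # zero mean, unit variance

    bg = power_law_noise((64, 64), beta=3.0)
    ```

    Filtering white noise by f^(-beta/2) in amplitude yields the desired 1/f^beta power spectrum; the normalized field can then serve as a synthetic tissue background before adding a Gaussian "lesion" signal.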

  17. Micro-CT studies on 3-D bioactive glass-ceramic scaffolds for bone regeneration.

    PubMed

    Renghini, Chiara; Komlev, Vladimir; Fiori, Fabrizio; Verné, Enrica; Baino, Francesco; Vitale-Brovarone, Chiara

    2009-05-01

    The aim of this study was the preparation and characterization of bioactive glass-ceramic scaffolds for bone tissue engineering. For this purpose, a glass belonging to the system SiO2-P2O5-CaO-MgO-Na2O-K2O (CEL2) was used. The sponge-replication method was adopted to prepare the scaffolds; specifically, a polymeric skeleton was impregnated with a slurry containing CEL2 powder, polyvinyl alcohol (PVA) as a binding agent and distilled water. The impregnated sponge was then thermally treated to remove the polymeric phase and to sinter the inorganic one. The obtained scaffolds possessed an open and interconnected porosity, analogous to cancellous bone texture, and a mechanical strength above 2 MPa. Moreover, the scaffolds underwent partial bioresorption due to ion-leaching phenomena. This feature was investigated by X-ray computed microtomography (micro-CT). Micro-CT is a three-dimensional (3-D) radiographic imaging technique able to achieve a spatial resolution close to 1 μm³. The use of synchrotron radiation allows the selected photon energy to be tuned to optimize the contrast among the different phases in the investigated samples. The 3-D scaffolds were soaked in a simulated body fluid (SBF) to study the formation of hydroxyapatite microcrystals on the scaffold struts and on the internal pore walls. The 3-D scaffolds were also soaked in a buffer solution (Tris-HCl) for different times to assess the scaffold bioresorption according to the ISO standard. A gradual resorption of the pore walls was observed during soaking both in SBF and in Tris-HCl.

  18. Integration of a 3D perspective view in the navigation display: featuring pilot's mental model

    NASA Astrophysics Data System (ADS)

    Ebrecht, L.; Schmerwitz, S.

    2015-05-01

    Synthetic vision systems (SVS) are a spreading technology in the avionics domain. Several studies demonstrate enhanced situational awareness when synthetic vision is used. Since the introduction of synthetic vision, the primary flight display (PFD) and the navigation display (ND) have undergone steady change and evolution. The main improvements of the ND comprise the representation of colored enhanced ground proximity warning system (EGPWS), weather radar, and TCAS information. Synthetic vision seems to offer high potential to further enhance cockpit display systems. In particular, given the current trend of presenting a 3D perspective view in an SVS-PFD while leaving the navigational content and methods of interaction unchanged, the question arises whether and how the gap between the two displays might evolve into a serious problem. This issue becomes important in relation to the transition between, and combination of, strategic and tactical flight guidance. Hence, the pros and cons of 2D and 3D views in general, as well as the gap between the egocentric perspective 3D view of the PFD and the exocentric 2D top and side views of the ND, will be discussed. Further, a concept for the integration of a 3D perspective view, i.e., a bird's-eye view, into a synthetic vision ND will be presented. The combination of 2D and 3D views in the ND enables better correlation between the ND and the PFD. Additionally, this supports the building of the pilot's mental model. The authors believe it will improve situational and spatial awareness and might further raise the safety margin when operating in mountainous areas.

  19. Effects of microalloying with 3d transition metals on glass formation in AlYFe alloys

    SciTech Connect

    Bondi, K.S.; Gangopadhyay, A.K.; Marine, Z.; Kim, T.H.; Mukhopadhyay, Anindita; Goldman, A.I.; Buhro, William E.; Kelton, K.F.

    2008-05-20

    The effects of microalloying on glass formation and stability were systematically investigated by substituting 0.5 at.% of all 3d transition metals for Al in Al88Y7Fe5 alloys. X-ray diffraction and isothermal differential scanning calorimetry studies indicate that samples containing microadditions of Ti, V, Cr, Mn, Fe and Co were amorphous, while those alloyed with Ni and Cu were not. The onset temperatures for crystallization (devitrification) of the amorphous alloys were increased with microalloying, and some showed a supercooled liquid region (ΔTx = Tx − Tg) of up to 40 °C. In addition, microalloying changes the glass structure and the devitrification sequence, as determined by differential scanning calorimetry (DSC), X-ray diffraction (XRD), transmission electron microscopy (TEM), differential thermal analysis (DTA) and high-energy X-ray diffraction. The results presented here suggest that the order induced in the alloy by the transition metal microaddition decreases the atomic mobility in the glass and raises the barrier for the nucleation of α-Al, the primary devitrifying phase in most cases. New intermetallic phases also appear with microalloying and vary for different transition metal additions.

  20. Evaluation of 3D nano-macro porous bioactive glass scaffold for hard tissue engineering.

    PubMed

    Wang, S; Falk, M M; Rashad, A; Saad, M M; Marques, A C; Almeida, R M; Marei, M K; Jain, H

    2011-05-01

    Recently, nano-macro dual-porous, three-dimensional (3D) glass structures were developed for use as bioscaffolds for hard tissue regeneration, but there have been concerns regarding the interconnectivity and homogeneity of nanopores in the scaffolds, as well as the cytotoxicity of the environment deep inside due to limited fluid access. Therefore, mercury porosimetry, nitrogen adsorption, and TEM have been used to characterize the nanopore network of the scaffolds. In parallel, the viability of MG 63 human osteosarcoma cells seeded on the scaffold surface was investigated by fluorescence, confocal and electron microscopy methods. The results show that cells attach, migrate and penetrate inside the glass scaffold with high proliferation and viability rates. Additionally, scaffolds were implanted under the skin of a male New Zealand rabbit for in vivo animal testing. Initial observations show the formation of new tissue with blood vessels and collagen fibers deep inside the implanted scaffolds with no obvious inflammatory reaction. Thus, the new nano-macro dual-porous glass structure could be a promising bioscaffold for use in regenerative medicine and tissue engineering for bone regeneration. PMID:21445655

  1. Enabling ITK-based processing and 3D Slicer MRML scene management in ParaView.

    PubMed

    Enquobahrie, Andinet; Bowers, Michael; Ibanez, Luis; Finet, Julien; Audette, Michel; Kolasny, Anthony

    2012-02-28

    This paper documents ongoing work to facilitate ITK-based processing and 3D Slicer scene management in ParaView. We believe this will broaden the use of ParaView for high-performance computing and visualization in the medical imaging research community. The effort is focused on developing ParaView plug-ins for managing VTK structures from 3D Slicer MRML scenes and encapsulating ITK filters for deployment in ParaView. In this paper, we present KWScene, an open-source cross-platform library that is being developed to support implementation of these types of plug-ins. We describe the overall design of the library, provide implementation details, and conclude by presenting a concrete example that demonstrates the use of the KWScene library in computational anatomy research at the Johns Hopkins Center for Imaging Science.

  2. Effect of mental fatigue caused by mobile 3D viewing on selective attention: an ERP study.

    PubMed

    Mun, Sungchul; Kim, Eun-Soo; Park, Min-Chul

    2014-12-01

    This study investigated behavioral responses to and auditory event-related potential (ERP) correlates of mental fatigue caused by mobile three-dimensional (3D) viewing. Twenty-six participants (14 women) performed a selective attention task in which they were asked to respond to sounds presented at the attended side while ignoring sounds at the ignored side, before and after mobile 3D viewing. Considering different individual susceptibilities to 3D, participants' subjective fatigue data were used to categorize them into two groups: fatigued and unfatigued. The amplitudes of d-ERP components were defined as differences in amplitudes between time-locked brain oscillations of the attended and ignored sounds, and these values were used to calculate the degree to which spatial selective attention was impaired by 3D mental fatigue. The fatigued group showed significantly longer response times after mobile 3D viewing compared to before the viewing. However, response accuracy did not significantly change between the two conditions, implying that the participants coped with a potential accuracy decrement by increasing their response times. No significant differences were observed for the unfatigued group. Analysis of covariance revealed group differences, with significant decreases and trends toward significant decreases in the d-P200 and d-late positive potential (d-LPP) amplitudes at the occipital electrodes of the fatigued and unfatigued groups, respectively. Our findings indicate that mentally fatigued participants did not effectively block out distractors in their information processing, providing support for the hypothesis that 3D mental fatigue impairs spatial selective attention and is characterized by changes in d-P200 and d-LPP amplitudes. PMID:25194505

  3. Multi-Modal Clique-Graph Matching for View-Based 3D Model Retrieval.

    PubMed

    Liu, An-An; Nie, Wei-Zhi; Gao, Yue; Su, Yu-Ting

    2016-05-01

    Multi-view matching is an important but challenging task in view-based 3D model retrieval. To address this challenge, we propose an original multi-modal clique graph (MCG) matching method in this paper. We systematically present a method for MCG generation, in which the graph is composed of cliques consisting of neighbor nodes in multi-modal feature space and hyper-edges that link pairwise cliques. Moreover, we propose an image set-based clique/edgewise similarity measure to address the issue of the set-to-set distance measure, which is the core problem in MCG matching. The proposed MCG provides the following benefits: 1) it preserves the local and global attributes of a graph with the designed structure; 2) it eliminates redundant and noisy information by strengthening inliers while suppressing outliers; and 3) it avoids the difficulty of defining high-order attributes and solving hyper-graph matching. We validate MCG-based 3D model retrieval using three popular single-modal data sets and one novel multi-modal data set. Extensive experiments show the superiority of the proposed method through comparisons. Moreover, we contribute a novel real-world 3D object data set, the multi-view RGB-D object data set. To the best of our knowledge, it is the largest real-world 3D object data set containing multi-modal and multi-view information. PMID:26978821
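    The paper's clique/edgewise similarity is more elaborate, but the underlying set-to-set distance problem it addresses can be illustrated with a common baseline: the symmetrized average nearest-neighbor distance between two feature sets (all names and values here are illustrative):

    ```python
    import numpy as np

    def set_to_set_distance(A, B):
        """Symmetrized average nearest-neighbor distance between feature sets.

        A: (m, d) and B: (n, d) arrays of d-dimensional view features.
        """
        d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # (m, n) pairwise
        return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

    A = np.array([[0.0, 0.0], [1.0, 0.0]])
    B = np.array([[0.0, 0.0], [4.0, 4.0]])
    dist = set_to_set_distance(A, B)
    ```

    Such baselines treat all pairings equally; the MCG measure instead weights matches through clique structure to strengthen inliers and suppress outliers.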

  4. Are 3-D coronal mass ejection parameters from single-view observations consistent with multiview ones?

    NASA Astrophysics Data System (ADS)

    Lee, Harim; Moon, Y.-J.; Na, Hyeonock; Jang, Soojeong; Lee, Jae-Ok

    2015-12-01

    To prepare for when only single-view observations are available, we have tested whether the 3-D parameters (radial velocity, angular width, and source location) of halo coronal mass ejections (HCMEs) from single-view observations are consistent with those from multiview observations. For this test, we selected 44 HCMEs from December 2010 to June 2011 with the following conditions: partial and full HCMEs by SOHO and limb CMEs by the twin STEREO spacecraft when they were approximately in quadrature. In this study, we compare the 3-D parameters of the HCMEs from three different methods: (1) a geometrical triangulation method, the STEREO CAT tool developed by NASA/CCMC, for multiview observations using STEREO/SECCHI and SOHO/LASCO data, (2) the graduated cylindrical shell (GCS) flux rope model for multiview observations using STEREO/SECCHI data, and (3) an ice cream cone model for single-view observations using SOHO/LASCO data. We find that the radial velocities and the source locations of the HCMEs from the three methods are consistent with one another, with high correlation coefficients (≥0.9). However, the angular widths by the ice cream cone model are noticeably underestimated for broad CMEs larger than 100° and several partial HCMEs. A comparison between the 3-D CME parameters directly measured from the twin STEREO spacecraft and the above 3-D parameters shows that the parameters from multiview observations are more consistent with the STEREO measurements than those from a single view.

  5. Adaptive image warping for hole prevention in 3D view synthesis.

    PubMed

    Plath, Nils; Knorr, Sebastian; Goldmann, Lutz; Sikora, Thomas

    2013-09-01

    The increasing popularity of 3D videos calls for new methods to ease the conversion of existing monocular video to stereoscopic or multi-view video. A popular way to convert video is given by depth image-based rendering methods, in which a depth map that is associated with an image frame is used to generate a virtual view. Because of the lack of knowledge about the 3D structure of a scene and its corresponding texture, however, the conversion of 2D video inevitably leads to holes in the resulting 3D image as newly exposed areas appear. The conversion process can be altered such that no holes become visible in the resulting 3D view by superimposing a regular grid over the depth map and deforming it. In this paper, an adaptive image warping approach is proposed as an improvement to the regular approach. The new algorithm exploits the smoothness of a typical depth map to reduce the complexity of the underlying optimization problem that must be solved to find the deformation required to prevent holes. This is achieved by splitting the depth map into blocks of homogeneous depth using quadtrees and running the optimization on the resulting adaptive grid. The results show that this approach leads to a considerable reduction of the computational complexity while maintaining the visual quality of the synthesized views. PMID:23782807
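    The quadtree split into depth-homogeneous blocks can be sketched in a few lines. This is a minimal illustration (square power-of-two maps, made-up threshold and minimum block size; the paper's subsequent grid optimization is omitted):

    ```python
    import numpy as np

    def quadtree_blocks(depth, x=0, y=0, size=None, tol=0.01, min_size=4):
        """Recursively split a square depth map into blocks of near-homogeneous depth.

        Returns a list of (x, y, size) leaf blocks. Assumes a square map whose
        side is a power of two, purely for simplicity of the sketch.
        """
        if size is None:
            size = depth.shape[0]
        block = depth[y:y + size, x:x + size]
        # Stop when the block is small enough or its depth range is homogeneous.
        if size <= min_size or block.max() - block.min() <= tol:
            return [(x, y, size)]
        half = size // 2
        blocks = []
        for dy in (0, half):
            for dx in (0, half):
                blocks += quadtree_blocks(depth, x + dx, y + dy, half, tol, min_size)
        return blocks

    flat = np.zeros((16, 16))          # homogeneous map -> a single block
    bumpy = np.zeros((16, 16))
    bumpy[0, 0] = 1.0                  # one outlier forces local refinement
    ```

    Running the warping optimization only on the resulting leaf corners, instead of a dense regular grid, is what yields the complexity reduction the abstract reports.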

  6. JD3 - 3D Views of the Cycling Sun in Stellar Context: Overview

    NASA Astrophysics Data System (ADS)

    van Driel-Gesztelyi, Lidia; Schrijver, Carolus J.

    2015-03-01

    We summarise the motivations and main results of the joint discussion ``3D Views of the Cycling Sun in Stellar Context'' and give credit to the contributed talks and poster presentations, since, owing to the limited number of pages, these proceedings could include only contributions from the keynote speakers.

  7. Superpixel-based 3D warping using view plus depth data from multiple viewpoints

    NASA Astrophysics Data System (ADS)

    Tezuka, Tomoyuki; Takahashi, Keita; Fujii, Toshiaki

    2014-03-01

    This paper presents a method of virtual view synthesis using view plus depth data from multiple viewpoints. Intuitively, virtual view generation from those data can be easily achieved by simple 3D warping. However, 3D points reconstructed from those data are isolated, i.e., not connected with each other. Consequently, the images generated by existing methods have many holes, which are very annoying, due to occlusions and the limited sampling density. To tackle this problem, we propose a two-step algorithm. In the first step, view plus depth data from each viewpoint are 3D warped to the virtual viewpoint. In this process, we determine which neighboring pixels should be connected or kept isolated. For this determination, we use depth differences among neighboring pixels, and SLIC-based superpixel segmentation that considers both color and depth information. The pixel pairs that have small depth differences or reside in the same superpixel are connected, and the polygons enclosed by the connected pixels are inpainted, which greatly reduces the holes. This warping process is performed individually for each viewpoint from which view plus depth data are provided, resulting in several images at the virtual viewpoint that are warped from different viewpoints. In the second step, we merge those warped images to obtain the final result. Thanks to the data provided from different viewpoints, the final result has fewer noise artifacts and holes compared to the result from single-viewpoint information. Experimental results using publicly available view plus depth data are reported to validate our method.
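    The connect-or-isolate decision described above combines a depth-difference test with a superpixel-membership test. A minimal sketch for horizontal neighbors (threshold and arrays are illustrative; vertical pairs would be handled symmetrically):

    ```python
    import numpy as np

    def connect_horizontal(depth, labels, max_gap=0.05):
        """Decide which horizontally neighboring pixels to connect when warping.

        A pair is connected if its depth difference is small or if both pixels
        fall in the same superpixel; disconnected pairs stay isolated points.
        Returns a boolean map of shape (H, W-1).
        """
        small_gap = np.abs(np.diff(depth, axis=1)) <= max_gap
        same_superpixel = labels[:, :-1] == labels[:, 1:]
        return small_gap | same_superpixel

    depth = np.array([[1.00, 1.01, 2.00]])   # a depth discontinuity after pixel 1
    labels = np.array([[0, 0, 1]])           # superpixel labels from segmentation
    mask = connect_horizontal(depth, labels)
    ```

    Polygons spanned by connected pairs can then be rasterized (inpainted), while disconnected pairs leave a genuine occlusion boundary.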

  8. Developing a protocol for creating microfluidic devices with a 3D printer, PDMS, and glass

    NASA Astrophysics Data System (ADS)

    Collette, Robyn; Novak, Eric; Shirk, Kathryn

    2015-03-01

    Microfluidics research requires the design and fabrication of devices that have the ability to manipulate small volumes of fluid, typically ranging from microliters to picoliters. These devices are used for a wide range of applications including the assembly of materials and testing of biological samples. Many methods have been previously developed to create microfluidic devices, including traditional nanolithography techniques. However, these traditional techniques are cost-prohibitive for many small-scale laboratories. This research explores a relatively low-cost technique using a 3D printed master, which is used as a template for the fabrication of polydimethylsiloxane (PDMS) microfluidic devices. The masters are designed using computer aided design (CAD) software and can be printed and modified relatively quickly. We have developed a protocol for creating simple microfluidic devices using a 3D printer and PDMS adhered to glass. This relatively simple and lower-cost technique can now be scaled to more complicated device designs and applications. Funding provided by the Undergraduate Research Grant Program at Shippensburg University and the Student/Faculty Research Engagement Grants from the College of Arts and Sciences at Shippensburg University.

  9. Surface functionalization of 3D glass-ceramic porous scaffolds for enhanced mineralization in vitro

    NASA Astrophysics Data System (ADS)

    Ferraris, Sara; Vitale-Brovarone, Chiara; Bretcanu, Oana; Cassinelli, Clara; Vernè, Enrica

    2013-04-01

    Bone reconstruction after tissue loosening due to traumatic, pathological or surgical causes is in increasing demand. 3D scaffolds are a widely studied solution for supporting new bone growth. Bioactive glass-ceramic porous materials can offer a three-dimensional structure that is able to chemically bond to bone. The ability to surface-modify these devices by grafting biologically active molecules represents a challenge, with the aim of stimulating physiological bone regeneration with both inorganic and organic signals. In this research work, glass-ceramic scaffolds with high mechanical strength and moderate bioactivity have been functionalized with the enzyme alkaline phosphatase (ALP). The material surface was activated in order to expose hydroxyl groups. The activated surface was further grafted with ALP, both via silanization and via direct grafting to the active hydroxyl groups on the surface. The enzymatic activity of grafted samples was measured by means of UV-vis spectroscopy before and after ultrasonic washing in TRIS-HCl buffer solution. In vitro inorganic bioactivity was investigated by soaking the scaffolds, after the different steps of functionalization, in a simulated body fluid (SBF). SEM observations allowed the monitoring of the scaffold morphology and surface chemical composition after soaking in SBF. The presence of ALP enhanced the in vitro inorganic bioactivity of the tested material.

  10. Viewing zone duplication of multi-projection 3D display system using uniaxial crystal.

    PubMed

    Lee, Chang-Kun; Park, Soon-Gi; Moon, Seokil; Lee, Byoungho

    2016-04-18

    We propose a novel multiplexing technique for increasing the viewing zone of a multi-view based multi-projection 3D display system by employing double refraction in a uniaxial crystal. When linearly polarized images from the projector pass through the uniaxial crystal, two possible optical paths exist according to the polarization state of the image. Therefore, the optical path of the image can be changed, shifting the viewing zone in a lateral direction. Polarization modulation of the image from a single projection unit enables us to generate two viewing zones at different positions. For realizing full-color images at each viewing zone, a polarization-based temporal multiplexing technique is adopted with a conventional polarization switching device of a liquid crystal (LC) display. Through experiments, a prototype of a ten-view multi-projection 3D display system presenting full-colored view images is implemented by combining five laser scanning projectors, an optically clear calcite (CaCO3) crystal, and an LC polarization rotator. For each time sequence of the temporal multiplexing, the luminance distribution of the proposed system is measured and analyzed.
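    The lateral displacement between the two optical paths comes from extraordinary-ray walk-off, which in a uniaxial crystal follows the standard relation tan(θ + ρ) = (n_o/n_e)² tan θ. A sketch using textbook calcite indices for visible light (values approximate; the paper's actual crystal geometry is not specified here):

    ```python
    import math

    def walkoff_angle_deg(n_o, n_e, theta_deg):
        """Walk-off angle rho between ordinary and extraordinary rays in a
        uniaxial crystal, for propagation at theta to the optic axis,
        from tan(theta + rho) = (n_o / n_e)**2 * tan(theta)."""
        theta = math.radians(theta_deg)
        rho = math.atan((n_o / n_e) ** 2 * math.tan(theta)) - theta
        return math.degrees(rho)

    # Calcite (n_o ~ 1.658, n_e ~ 1.486) at 45 deg to the optic axis gives the
    # well-known ~6 degree walk-off that laterally displaces the e-ray image.
    rho = walkoff_angle_deg(1.658, 1.486, 45.0)
    ```

    Multiplying tan(ρ) by the crystal thickness gives the lateral image shift, i.e. the offset between the two duplicated viewing zones.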

  11. Reconstructing White Walls: Multi-View Multi-Shot 3d Reconstruction of Textureless Surfaces

    NASA Astrophysics Data System (ADS)

    Ley, Andreas; Hänsch, Ronny; Hellwich, Olaf

    2016-06-01

    The reconstruction of the 3D geometry of a scene based on image sequences has been a very active field of research for decades. Nevertheless, challenges remain, in particular for homogeneous parts of objects. This paper proposes a solution to enhance the 3D reconstruction of weakly-textured surfaces by using standard cameras as well as a standard multi-view stereo pipeline. The underlying idea of the proposed method is to improve the signal-to-noise ratio in weakly-textured regions while adaptively amplifying the local contrast to make better use of the limited numerical range of 8-bit images. Based on this premise, multiple shots per viewpoint are used to suppress statistically uncorrelated noise and enhance low-contrast texture. By only changing the image acquisition and adding a preprocessing step, a tremendous increase of up to 300% in the completeness of the 3D reconstruction is achieved.
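    The √N noise suppression from averaging multiple shots per viewpoint, followed by contrast stretching, can be illustrated numerically (all sizes, noise levels, and the stretching scheme here are made up for the sketch):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    truth = rng.uniform(0.45, 0.55, size=(32, 32))            # weak, low-contrast texture
    shots = truth + 0.05 * rng.standard_normal((16, 32, 32))  # 16 noisy shots, same view

    avg = shots.mean(axis=0)
    noise_single = (shots[0] - truth).std()
    noise_avg = (avg - truth).std()
    # Averaging N uncorrelated shots reduces the noise std by about sqrt(N) (= 4 here),
    # so the local contrast can then be stretched to the full numeric range
    # without amplifying noise as strongly as in a single exposure.
    stretched = (avg - avg.min()) / (avg.max() - avg.min())
    ```

    In the paper this amplified, denoised texture is what lets an otherwise unchanged multi-view stereo pipeline match points on near-homogeneous surfaces.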

  12. Detection and 3D reconstruction of traffic signs from multiple view color images

    NASA Astrophysics Data System (ADS)

    Soheilian, Bahman; Paparoditis, Nicolas; Vallet, Bruno

    2013-03-01

    3D reconstruction of traffic signs is of great interest in many applications such as image-based localization and navigation. In order to reflect reality, the reconstruction process should be both accurate and precise. In order to reach such a valid reconstruction from calibrated multi-view images, accurate and precise extraction of signs in every individual view is a must. This paper first presents an automatic pipeline for identifying and extracting the silhouette of signs in every individual image. Then, a multi-view constrained 3D reconstruction algorithm provides an optimum 3D silhouette for the detected signs. The first step, called detection, applies color-based segmentation to generate ROIs (regions of interest) in the image. The shape of every ROI is estimated by fitting an ellipse, a quadrilateral or a triangle to edge points. A ROI is rejected if none of the three shapes can be fitted sufficiently precisely. Thanks to the estimated shape, the remaining candidate ROIs are rectified to remove the perspective distortion and then matched with a set of reference signs using textural information. Poor matches are rejected and the types of the remaining ones are identified. The output of the detection algorithm is a set of identified road signs whose silhouette in the image plane is represented by an ellipse, a quadrilateral or a triangle. The 3D reconstruction process is based on hypothesis generation and verification. Hypotheses are generated by a stereo matching approach taking into account epipolar geometry and also the similarity of the categories. The hypotheses that plausibly correspond to the same 3D road sign are identified and grouped during this process. Finally, all the hypotheses of the same group are merged to generate a unique 3D road sign by a multi-view algorithm integrating a priori knowledge about the 3D shape of road signs as constraints. 
The algorithm is assessed on real and synthetic images and reached an average accuracy of 3.5 cm for
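    The ellipse fitting used in the detection step is commonly done as an algebraic least-squares conic fit to edge points. A minimal sketch (synthetic points, no noise; a real pipeline would additionally check that the fitted conic is an ellipse and bound the residual):

    ```python
    import numpy as np

    def fit_conic(pts):
        """Least-squares algebraic fit of a conic a x^2 + b xy + c y^2 + d x + e y + f = 0."""
        x, y = pts[:, 0], pts[:, 1]
        D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
        _, _, Vt = np.linalg.svd(D)
        return Vt[-1]  # right singular vector of the smallest singular value

    # Synthetic edge points on the ellipse (x/2)^2 + y^2 = 1.
    t = np.linspace(0.0, 2.0 * np.pi, 50, endpoint=False)
    pts = np.column_stack([2.0 * np.cos(t), np.sin(t)])
    coef = fit_conic(pts)

    # The fitted conic should (nearly) vanish at a held-out point on the ellipse.
    px, py = 2.0 * np.cos(0.5), np.sin(0.5)
    residual = coef @ np.array([px * px, px * py, py * py, px, py, 1.0])
    ```

    Rejecting ROIs whose best conic (or quadrilateral/triangle) fit leaves a large residual is what filters non-sign regions before the matching stage.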

  13. Single-Frame 3D Human Pose Recovery from Multiple Views

    NASA Astrophysics Data System (ADS)

    Hofmann, Michael; Gavrila, Dariu M.

    We present a system for the estimation of unconstrained 3D human upper body pose from multi-camera single-frame views. Pose recovery starts with a shape detection stage where candidate poses are generated based on hierarchical exemplar matching in the individual camera views. The hierarchy used in this stage is created using a hybrid clustering approach in order to efficiently deal with the large number of represented poses. In the following multi-view verification stage, poses are re-projected to the other camera views and ranked according to a multi-view matching score. A subsequent gradient-based local pose optimization stage bridges the gap between the used discrete pose exemplars and the underlying continuous parameter space. We demonstrate that the proposed clustering approach greatly outperforms state-of-the-art bottom-up clustering in parameter space and present a detailed experimental evaluation of the complete system on a large data set.

  14. 3D-2D registration for surgical guidance: effect of projection view angles on registration accuracy

    NASA Astrophysics Data System (ADS)

    Uneri, A.; Otake, Y.; Wang, A. S.; Kleinszig, G.; Vogt, S.; Khanna, A. J.; Siewerdsen, J. H.

    2014-01-01

    An algorithm for intensity-based 3D-2D registration of CT and x-ray projections is evaluated, specifically using single- or dual-projection views to provide 3D localization. The registration framework employs the gradient information similarity metric and covariance matrix adaptation evolution strategy to solve for the patient pose in six degrees of freedom. Registration performance was evaluated in an anthropomorphic phantom and cadaver, using C-arm projection views acquired at angular separation, Δθ, ranging from ˜0°-180° at variable C-arm magnification. Registration accuracy was assessed in terms of 2D projection distance error and 3D target registration error (TRE) and compared to that of an electromagnetic (EM) tracker. The results indicate that angular separation as small as Δθ ˜10°-20° achieved TRE <2 mm with 95% confidence, comparable or superior to that of the EM tracker. The method allows direct registration of preoperative CT and planning data to intraoperative fluoroscopy, providing 3D localization free from conventional limitations associated with external fiducial markers, stereotactic frames, trackers and manual registration.
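    Target registration error as reported above measures how far the estimated pose displaces anatomical target points relative to the true pose. A minimal sketch (transforms and target points are illustrative, in millimeters):

    ```python
    import numpy as np

    def target_registration_error(T_est, T_true, targets):
        """RMS 3D distance between targets mapped by the estimated and true
        rigid transforms (both 4x4 homogeneous matrices)."""
        pts = np.hstack([targets, np.ones((len(targets), 1))])
        diff = (pts @ T_est.T - pts @ T_true.T)[:, :3]
        return np.sqrt((diff ** 2).sum(axis=1).mean())

    T_true = np.eye(4)
    T_est = np.eye(4)
    T_est[0, 3] = 2.0                      # 2 mm translation error along x
    targets = np.array([[0.0, 0.0, 0.0], [10.0, 20.0, 30.0], [-5.0, 4.0, 12.0]])
    tre = target_registration_error(T_est, T_true, targets)
    ```

    Note that for rotational errors the TRE grows with the distance of the targets from the rotation center, which is why it is evaluated at clinically relevant points rather than at the fiducials used for registration.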

  15. Improved Segmentation of Multiple Cavities of the Heart in Wide-View 3-D Transesophageal Echocardiograms.

    PubMed

    Haak, Alexander; Ren, Ben; Mulder, Harriët W; Vegas-Sánchez-Ferrero, Gonzalo; van Burken, Gerard; van der Steen, Antonius F W; van Stralen, Marijn; Pluim, Josien P W; van Walsum, Theo; Bosch, Johannes G

    2015-07-01

    Minimally invasive interventions in the heart such as in electrophysiology are becoming more and more important in clinical practice. Currently, preoperative computed tomography angiography (CTA) is used to provide anatomic information during electrophysiology interventions, but this does not provide real-time feedback and burdens the patient with additional radiation and side effects of the contrast agent. Three-dimensional transesophageal echocardiography (TEE) is an excellent modality for visualization of anatomic structures and instruments in real time, but some cavities, especially the left atrium, suffer from the limited coverage of the 3-D TEE volumes. This leads to difficulty in segmenting the left atrium. We propose replacing or complementing pre-operative CTA imaging with wide-view TEE. We tested this proposal on 20 patients for which TEE image volumes covering the left atrium and CTA images were acquired. The TEE images were manually registered, and wide-view volumes were generated. Five heart cavities in single-view and wide-view TEE were segmented and compared with atlas-based segmentations derived from the CTA images. We found that the segmentation accuracy (Dice coefficients) improved relative to segmentation of single-view images by 5, 15 and 9 percentage points for the left atrium, right atrium and aorta, respectively. Average anatomic coverage was improved by 2, 29, 62 and 49 percentage points for the right ventricle, left atrium, right atrium and aorta, respectively. This finding confirms that wide-view 3-D TEE can be useful in supporting electrophysiology interventions.
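    The Dice coefficients reported above follow the standard overlap definition, 2|A∩B| / (|A|+|B|). A minimal sketch on toy boolean masks (shapes illustrative; real use is on 3-D segmentation volumes):

    ```python
    import numpy as np

    def dice(a, b):
        """Dice overlap between two boolean segmentation masks (1.0 = identical)."""
        a, b = a.astype(bool), b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    a = np.zeros((4, 4), dtype=bool)
    a[:2, :] = True    # 8 voxels
    b = np.zeros((4, 4), dtype=bool)
    b[1:3, :] = True   # 8 voxels, 4 of them shared with a
    ```

    With 8 + 8 voxels and 4 shared, the score is 2·4/16 = 0.5; a "percentage point" improvement in the abstract is a shift of this value expressed on a 0-100 scale.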

  16. Integration of multiple view plus depth data for free viewpoint 3D display

    NASA Astrophysics Data System (ADS)

    Suzuki, Kazuyoshi; Yoshida, Yuko; Kawamoto, Tetsuya; Fujii, Toshiaki; Mase, Kenji

    2014-03-01

    This paper proposes a method for constructing a reasonable-scale, end-to-end free-viewpoint video system that captures multiple view and depth data, reconstructs three-dimensional polygon models of objects, and displays them in virtual 3D CG spaces. This system consists of a desktop PC and four Kinect sensors. First, multiple view plus depth data at four viewpoints are captured by the Kinect sensors simultaneously. Then, the captured data are integrated into point cloud data by using camera parameters. The obtained point cloud data are sampled into volume data that consist of voxels. Since volume data generated from point cloud data are sparse, those data are made dense by using a global optimization algorithm. The final step is to reconstruct surfaces on the dense volume data by the discrete marching cubes method. Since the accuracy of the depth maps affects the quality of the 3D polygon model, a simple inpainting method for improving depth maps is also presented.
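    The sampling of an integrated point cloud into voxel occupancy can be sketched in a few lines (voxel size and points are illustrative; the paper's densification and discrete marching cubes steps are omitted):

    ```python
    import numpy as np

    def voxelize(points, voxel_size):
        """Map an (N, 3) point cloud to the set of occupied voxel indices."""
        idx = np.floor(points / voxel_size).astype(int)
        return np.unique(idx, axis=0)   # one row per occupied voxel

    cloud = np.array([[0.01, 0.02, 0.03],   # these two points share a voxel
                      [0.04, 0.05, 0.06],
                      [0.35, 0.00, 0.00]])  # this one occupies another
    occupied = voxelize(cloud, voxel_size=0.1)
    ```

    Because several points collapse into one voxel while most voxels stay empty, the resulting occupancy grid is sparse, which is what motivates the densification step before surface extraction.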

  17. Modeling of multi-view 3D freehand radio frequency ultrasound.

    PubMed

    Klein, T; Hansson, M; Navab, Nassir

    2012-01-01

    Nowadays ultrasound (US) examinations are typically performed with conventional machines providing two-dimensional imagery. However, there exist a multitude of applications where doctors could benefit from three-dimensional ultrasound providing better judgment, due to the extended spatial view. 3D freehand US allows acquisition of images by means of a tracking device attached to the ultrasound transducer. Unfortunately, view dependency makes the 3D representation of ultrasound a non-trivial task. To address this, we model speckle statistics in envelope-detected radio frequency (RF) data using a finite mixture model (FMM), assuming a parametric representation of the data, in which the multiple views are treated as components of the FMM. The proposed model is showcased with registration, using an ultrasound-specific distribution-based pseudo-distance, and reconstruction tasks, performed on the manifold of Gamma model parameters. An example field of application is neurology using transcranial US, as this domain requires high accuracy and the data systematically feature low SNR, making intensity-based registration difficult. In particular, 3D US can be specifically used to improve differential diagnosis of Parkinson's disease (PD) compared to conventional approaches and is therefore of high relevance for future application. PMID:23285579

  18. Learning the 3-D structure of objects from 2-D views depends on shape, not format

    PubMed Central

    Tian, Moqian; Yamins, Daniel; Grill-Spector, Kalanit

    2016-01-01

    Humans can learn to recognize new objects just from observing example views. However, it is unknown what structural information enables this learning. To address this question, we manipulated the amount of structural information given to subjects during unsupervised learning by varying the format of the trained views. We then tested how format affected participants' ability to discriminate similar objects across views that were rotated 90° apart. We found that, after training, participants' performance increased and generalized to new views in the same format. Surprisingly, the improvement was similar across line drawings, shape from shading, and shape from shading + stereo even though the latter two formats provide richer depth information compared to line drawings. In contrast, participants' improvement was significantly lower when training used silhouettes, suggesting that silhouettes do not have enough information to generate a robust 3-D structure. To test whether the learned object representations were format-specific or format-invariant, we examined if learning novel objects from example views transfers across formats. We found that learning objects from example line drawings transferred to shape from shading and vice versa. These results have important implications for theories of object recognition because they suggest that (a) learning the 3-D structure of objects does not require rich structural cues during training as long as shape information of internal and external features is provided and (b) learning generates shape-based object representations independent of the training format. PMID:27153196

  19. Mesoporous bioactive glass nanolayer-functionalized 3D-printed scaffolds for accelerating osteogenesis and angiogenesis.

    PubMed

    Zhang, Yali; Xia, Lunguo; Zhai, Dong; Shi, Mengchao; Luo, Yongxiang; Feng, Chun; Fang, Bing; Yin, Jingbo; Chang, Jiang; Wu, Chengtie

    2015-12-01

    The hierarchical microstructure, surface and interface of biomaterials are important factors influencing their bioactivity. Porous bioceramic scaffolds have been widely used for bone tissue engineering by optimizing their chemical composition and large-pore structure. However, the surface and interface of struts in bioceramic scaffolds are often ignored. The aim of this study is to incorporate hierarchical pores and bioactive components into the bioceramic scaffolds by constructing nanopores and bioactive elements on the struts of scaffolds and further improve their bone-forming activity. Mesoporous bioactive glass (MBG) modified β-tricalcium phosphate (MBG-β-TCP) scaffolds with a hierarchical pore structure and a functional strut surface (∼100 nm of MBG nanolayer) were successfully prepared via 3D printing and spin coating. The compressive strength and apatite-mineralization ability of MBG-β-TCP scaffolds were significantly enhanced as compared to β-TCP scaffolds without the MBG nanolayer. The attachment, viability, alkaline phosphatase (ALP) activity, osteogenic gene expression (Runx2, BMP2, OPN and Col I) and protein expression (OPN, Col I, VEGF, HIF-1α) of rabbit bone marrow stromal cells (rBMSCs) as well as the attachment, viability and angiogenic gene expression (VEGF and HIF-1α) of human umbilical vein endothelial cells (HUVECs) in MBG-β-TCP scaffolds were significantly upregulated compared with conventional bioactive glass (BG)-modified β-TCP (BG-β-TCP) and pure β-TCP scaffolds. Furthermore, MBG-β-TCP scaffolds significantly enhanced the formation of new bone in vivo as compared to BG-β-TCP and β-TCP scaffolds. The results suggest that application of the MBG nanolayer to modify 3D-printed bioceramic scaffolds offers a new strategy to construct hierarchically porous scaffolds with significantly improved physicochemical and biological properties, such as mechanical properties, osteogenesis, angiogenesis and protein expression for bone tissue

  20. Mesoporous bioactive glass nanolayer-functionalized 3D-printed scaffolds for accelerating osteogenesis and angiogenesis

    NASA Astrophysics Data System (ADS)

    Zhang, Yali; Xia, Lunguo; Zhai, Dong; Shi, Mengchao; Luo, Yongxiang; Feng, Chun; Fang, Bing; Yin, Jingbo; Chang, Jiang; Wu, Chengtie

    2015-11-01

    The hierarchical microstructure, surface and interface of biomaterials are important factors influencing their bioactivity. Porous bioceramic scaffolds have been widely used for bone tissue engineering by optimizing their chemical composition and large-pore structure. However, the surface and interface of struts in bioceramic scaffolds are often ignored. The aim of this study is to incorporate hierarchical pores and bioactive components into the bioceramic scaffolds by constructing nanopores and bioactive elements on the struts of scaffolds and further improve their bone-forming activity. Mesoporous bioactive glass (MBG) modified β-tricalcium phosphate (MBG-β-TCP) scaffolds with a hierarchical pore structure and a functional strut surface (~100 nm of MBG nanolayer) were successfully prepared via 3D printing and spin coating. The compressive strength and apatite-mineralization ability of MBG-β-TCP scaffolds were significantly enhanced as compared to β-TCP scaffolds without the MBG nanolayer. The attachment, viability, alkaline phosphatase (ALP) activity, osteogenic gene expression (Runx2, BMP2, OPN and Col I) and protein expression (OPN, Col I, VEGF, HIF-1α) of rabbit bone marrow stromal cells (rBMSCs) as well as the attachment, viability and angiogenic gene expression (VEGF and HIF-1α) of human umbilical vein endothelial cells (HUVECs) in MBG-β-TCP scaffolds were significantly upregulated compared with conventional bioactive glass (BG)-modified β-TCP (BG-β-TCP) and pure β-TCP scaffolds. Furthermore, MBG-β-TCP scaffolds significantly enhanced the formation of new bone in vivo as compared to BG-β-TCP and β-TCP scaffolds. The results suggest that application of the MBG nanolayer to modify 3D-printed bioceramic scaffolds offers a new strategy to construct hierarchically porous scaffolds with significantly improved physicochemical and biological properties, such as mechanical properties, osteogenesis, angiogenesis and protein expression for bone tissue

  2. Full optical characterization of autostereoscopic 3D displays using local viewing angle and imaging measurements

    NASA Astrophysics Data System (ADS)

    Boher, Pierre; Leroux, Thierry; Bignon, Thibault; Collomb-Patton, Véronique

    2012-03-01

    Two commercial auto-stereoscopic 3D displays are characterized using a Fourier-optics viewing angle system and an imaging video-luminance-meter. One display has a fixed emissive configuration and the other adapts its emission to the observer position using head tracking. For a fixed emissive condition, three viewing angle measurements are performed at three positions (center, right and left). Qualified monocular and binocular viewing spaces in front of the display are deduced, as well as the best working distance. The imaging system is then positioned at this working distance and the crosstalk homogeneity over the entire surface of the display is measured. We show that the crosstalk is generally not optimized over the whole surface of the display. Display aspect simulation using the viewing angle measurements allows a better understanding of the origin of these crosstalk variations. Local imperfections like scratches and marks generally increase the crosstalk drastically, demonstrating that cleanliness requirements for this type of display are quite critical.
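    For reference, stereoscopic crosstalk is commonly computed from three luminance measurements; a sketch of the usual black-level-corrected definition (a common convention — the paper may use a different one):

    ```python
    def crosstalk(l_leak, l_signal, l_black):
        """Fraction of the unintended view's luminance leaking into the
        measured eye position, corrected for the display's black level.

        l_leak   : luminance when only the *other* view shows white
        l_signal : luminance when only the intended view shows white
        l_black  : luminance when both views show black
        """
        return (l_leak - l_black) / (l_signal - l_black)

    # Example: 6 cd/m2 of leakage against a 200 cd/m2 signal, 1 cd/m2 black level
    print(round(crosstalk(6.0, 200.0, 1.0) * 100, 2))  # 2.51 (percent)
    ```

    Mapping this number across the display surface, as done with the imaging luminance meter, is what reveals the homogeneity problems described above.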

  3. Image selection in photogrammetric multi-view stereo methods for metric and complete 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Hosseininaveh Ahmadabadian, Ali; Robson, Stuart; Boehm, Jan; Shortis, Mark

    2013-04-01

    Multi-View Stereo (MVS), as a low-cost technique for precise 3D reconstruction, can rival laser scanners if the scale of the model is resolved. A fusion of stereo imaging equipment with photogrammetric bundle adjustment and MVS methods, known as photogrammetric MVS, can generate correctly scaled 3D models without using any known object distances. Although a huge number of stereo images captured of the object (e.g. 200 high-resolution images of a small object) contains redundant data that allows detailed and accurate 3D reconstruction, the capture and processing time increases when a vast number of high-resolution images are employed. Moreover, some parts of the object are often missing due to the lack of coverage of all areas. These problems demand a logical selection of the most suitable stereo camera views from the large image dataset. This paper presents a method for clustering and choosing optimal stereo, or optionally single, images from a large image dataset. The approach focuses on the two key steps of image clustering and iterative image selection. The method is developed within a software application called Imaging Network Designer (IND) and tested by the 3D recording of a gearbox and three metric reference objects. A comparison is made between IND and CMVS, which is a free package for selecting vantage images. The final 3D models obtained from the IND and CMVS approaches are compared with datasets generated with an MMDx Nikon laser scanner. Results demonstrate that IND can provide a better image selection for MVS than CMVS in terms of surface coordinate uncertainty and completeness.

  4. Single DMD time-multiplexed 64-views autostereoscopic 3D display

    NASA Astrophysics Data System (ADS)

    Loreti, Luigi

    2013-03-01

    Based on a previous prototype of the real-time 3D holographic display developed last year, we developed a new concept for an auto-stereoscopic multiview display: a 64-view, wide-angle (90°), 3D full-color display. The display is based on an RGB laser light source illuminating a DMD (Discovery 4100, 0.7") at 24,000 fps and an image deflection system made with an AOD (Acousto-Optic Deflector) driven by a piezo-electric transducer, which generates a variable standing acoustic wave in the crystal that acts as a phase grating. The DMD projects 64 points of view of the image in fast sequence onto the crystal cube. Depending on the frequency of the standing wave, the input picture sent by the DMD is deflected into a different angle of view. A holographic screen at a proper distance diffuses the rays vertically (60°) and horizontally selects (1°) only the rays directed to the observer. A telescope optical system enlarges the image to the right dimension. A VHDL firmware to render in real time (16 ms) 64 views (16-bit 4:2:2) of a CAD model (obj, dxf or 3ds) and depth-map-encoded video images was developed in the resident Virtex-5 FPGA of the Discovery 4100 SDK, thus eliminating the need for image transfer and high-speed links.

  5. Determining canonical views of 3D object using minimum description length criterion and compressive sensing method

    NASA Astrophysics Data System (ADS)

    Chen, Ping-Feng; Krim, Hamid

    2008-02-01

    In this paper, we propose two methods for determining the canonical views of 3D objects: the minimum description length (MDL) criterion and a compressive sensing method. The MDL criterion searches for the description length that balances model accuracy and parsimony. It takes the form of the sum of a likelihood term and a penalizing term, where the likelihood favors model accuracy, such that more views assist the description of an object, while the second term penalizes lengthy descriptions to prevent overfitting of the model. In order to devise the likelihood term, we propose a model that represents a 3D object as the weighted sum of multiple range images, which is also used in the second method to determine the canonical views. The compressive sensing method presents an intelligent way of parsimoniously sampling an object. We make direct inference from Donoho's1 and Candes'2 work and adapt it to our model. Each range image is viewed as a projection, or a sample, of a 3D model, and by using compressive sensing theory we are able to reconstruct the object with overwhelming probability by sensing the object sparsely in a random manner. Compressive sensing differs from traditional compression methods in that the former compresses data at the sampling stage, while the latter collects a large number of samples and carries out compression thereafter. A compressive sensing scheme is particularly useful when the number of sensors is limited or when the sampling machinery costs significant resources or time.
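    The MDL trade-off described above (a likelihood term plus a penalty on description length) can be illustrated with a BIC-style two-part code; the polynomial-fitting example and the (k/2) log n penalty are assumptions for illustration, not the paper's exact criterion:

    ```python
    import numpy as np

    def description_length(neg_log_likelihood, k, n):
        """Two-part MDL score: data cost plus a (k/2) log n model cost
        for k parameters and n observations (BIC-style penalty)."""
        return neg_log_likelihood + 0.5 * k * np.log(n)

    # Toy example: fit a noisy quadratic with polynomials of degree 1..6
    rng = np.random.default_rng(0)
    x = np.linspace(-1, 1, 200)
    y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0, 0.1, x.size)

    scores = {}
    for deg in range(1, 7):
        coeffs = np.polyfit(x, y, deg)
        resid = y - np.polyval(coeffs, x)
        sigma2 = resid.var()
        # Gaussian negative log-likelihood at the MLE noise variance
        nll = 0.5 * x.size * np.log(2 * np.pi * sigma2) + x.size / 2
        scores[deg] = description_length(nll, deg + 1, x.size)

    best = min(scores, key=scores.get)
    print(best)  # degree minimizing the description length (typically 2)
    ```

    Underfitting (degree 1) pays heavily in the likelihood term, while overfitting (high degrees) pays in the penalty term; the balance point is the MDL choice, just as "enough views, but no more" is the balance point for canonical views.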

  6. Multi-view indoor human behavior recognition based on 3D skeleton

    NASA Astrophysics Data System (ADS)

    Peng, Ling; Lu, Tongwei; Min, Feng

    2015-12-01

    To address the problems caused by viewpoint changes in activity recognition, a multi-view indoor human behavior recognition method based on a 3D skeleton is presented. First, Microsoft's Kinect device is used to obtain body motion video from the frontal perspective, an oblique angle and the side perspective. Second, the method extracts skeletal joints and obtains global human features together with local features of the arms and legs to form a 3D skeletal feature set. Third, online dictionary learning on the feature set is used to reduce the feature dimension. Finally, a linear support vector machine (LSVM) is used to obtain the behavior recognition results. The experimental results show that this method achieves a better recognition rate.
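    The feature-construction step (global joint features plus local arm and leg features) might look like the following sketch; the joint count and index groups are hypothetical, not Kinect's actual skeleton layout:

    ```python
    import numpy as np

    # Hypothetical skeleton: 20 joints, each an (x, y, z) position
    N_JOINTS = 20
    ARM_JOINTS = [4, 5, 6, 7, 8, 9, 10, 11]        # assumed indices for both arms
    LEG_JOINTS = [12, 13, 14, 15, 16, 17, 18, 19]  # assumed indices for both legs

    def skeleton_features(joints):
        """Concatenate global features (all joints, centered on a root joint)
        with local arm and leg features into one descriptor vector."""
        root = joints[0]             # treat joint 0 as the hip/root
        centered = joints - root     # positions relative to the root are more view-robust
        global_feat = centered.ravel()
        arm_feat = centered[ARM_JOINTS].ravel()
        leg_feat = centered[LEG_JOINTS].ravel()
        return np.concatenate([global_feat, arm_feat, leg_feat])

    frame = np.random.default_rng(1).normal(size=(N_JOINTS, 3))
    feat = skeleton_features(frame)
    print(feat.shape)  # (108,) = 20*3 global + 8*3 arms + 8*3 legs
    ```

    Vectors like this, collected over frames, are what the dictionary-learning stage would then compress before classification.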

  7. Consistent Multi-View Texturing of Detailed 3d Surface Models

    NASA Astrophysics Data System (ADS)

    Davydova, K.; Kuschk, G.; Hoegner, L.; Reinartz, P.; Stilla, U.

    2015-03-01

    Texture mapping techniques are used to achieve a high degree of realism for computer-generated large-scale and detailed 3D surface models by extracting the texture information from photographic images and applying it to the object surfaces. Because a single image cannot capture all parts of the scene, a number of images must be taken. However, texturing the object surfaces from several images can lead to lighting variations between neighboring texture fragments. In this paper we describe the creation of a textured 3D scene from overlapping aerial images using a Markov Random Field energy minimization framework. We aim to maximize the quality of the generated texture mosaic, preserving the resolution of the original images, while at the same time minimizing the visibility of seams between adjacent fragments. As input data we use a triangulated mesh of the city center of Munich and multiple camera views of the scene from different directions.

  8. Inferring 3D kinematics of carpal bones from single view fluoroscopic sequences.

    PubMed

    Chen, Xin; Graham, Jim; Hutchinson, Charles; Muir, Lindsay

    2011-01-01

    We present a novel framework for inferring 3D carpal bone kinematics and bone shapes from a single view fluoroscopic sequence. A hybrid statistical model representing both the kinematics and shape variation of the carpal bones is built, based on a number of 3D CT data sets obtained from different subjects at different poses. Given a fluoroscopic sequence, the wrist pose, carpal bone kinematics and bone shapes are estimated iteratively by matching the statistical model with the 2D images. A specially designed cost function enables smoothed parameter estimation across frames. We have evaluated the proposed method on both simulated data and real fluoroscopic sequences. It was found that the relative positions between carpal bones can be accurately estimated, which is potentially useful for detection of conditions such as scapholunate dissociation.

  9. In vivo bioluminescence tomography based on multi-view projection and 3D surface reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Shuang; Wang, Kun; Leng, Chengcai; Deng, Kexin; Hu, Yifang; Tian, Jie

    2015-03-01

    Bioluminescence tomography (BLT) is a powerful optical molecular imaging modality that enables non-invasive, real-time in vivo imaging as well as 3D quantitative analysis in preclinical studies. In order to solve the inverse problem and reconstruct inner light sources accurately, prior structural information is commonly necessary and is obtained from computed tomography or magnetic resonance imaging. This strategy requires an expensive hybrid imaging system, a complicated operation protocol and the possible involvement of ionizing radiation. The overall robustness depends highly on the fusion accuracy between the optical and structural information. In this study we present a pure optical bioluminescence tomographic system (POBTS) and a novel BLT method based on multi-view projection acquisition and 3D surface reconstruction. The POBTS acquired a sparse set of white-light surface images and bioluminescent images of a mouse. The white-light images were then applied to an approximate surface model to generate a high-quality textured 3D surface reconstruction of the mouse. After that, we integrated multi-view luminescent images based on this reconstruction and applied an algorithm to calibrate and quantify the surface luminescent flux in 3D. Finally, the internal bioluminescence source reconstruction was achieved with this prior information. A BALB/c mouse model with a breast tumor of 4T1-fLuc cells was used to evaluate the performance of the new system and technique. Compared with the conventional hybrid optical-CT approach using the same inverse reconstruction method, the reconstruction accuracy of this technique was improved: the distance error between the actual and reconstructed internal source decreased by 0.184 mm.

  10. Bird's Eye View - A 3-D Situational Awareness Tool for the Space Station

    NASA Technical Reports Server (NTRS)

    Dershowitz, Adam; Chamitoff, Gregory

    2002-01-01

    Even as space-qualified computer hardware lags well behind the latest home computers, the possibility of using high-fidelity interactive 3-D graphics to display important on-board information has finally arrived and is being used on board the International Space Station (ISS). With the quantity and complexity of space-flight telemetry, 3-D displays can greatly enhance the ability of users, both onboard and on the ground, to interpret data quickly and accurately. This is particularly true for data related to vehicle attitude, position, configuration, and relation to other objects on the ground or in orbit. Bird's Eye View (BEV) is a 3-D real-time application that provides a high degree of situational awareness for the crew. Its purpose is to instantly convey important motion-related parameters to the crew and mission controllers by presenting 3-D simulated camera views of the ISS in its actual environment. Driven by actual telemetry and running on board as well as on the ground, it lets the user visualize the Space Station relative to the Earth, Sun, stars, various reference frames, and selected targets such as ground sites or communication satellites. Since the actual ISS configuration (geometry) is also modeled accurately, everything from the alignment of the solar panels to the expected view from a selected window can be visualized accurately. A virtual representation of the Space Station in real time has many useful applications. By selecting different cameras, the crew or mission control can monitor the station's orientation in space, position over the Earth, transition from day to night, direction to the Sun, the view from a particular window, or the motion of the robotic arm. By viewing the vehicle attitude and solar panel orientations relative to the Sun, the power status of the ISS can be easily visualized and understood. Similarly, the thermal impacts of vehicle attitude can be analyzed and visually confirmed.

  11. 3D Cryo-Imaging: A Very High-Resolution View of the Whole Mouse

    PubMed Central

    Roy, Debashish; Steyer, Grant J.; Gargesha, Madhusudhana; Stone, Meredith E.; Wilson, David L.

    2009-01-01

    We developed the Case Cryo-imaging system, which provides information-rich, very high-resolution, color brightfield and molecular fluorescence images of a whole mouse using section-and-image block-face imaging technology. The system consists of a mouse-sized, motorized cryo-microtome with special features for imaging, a modified brightfield/fluorescence microscope, and a robotic xyz imaging system positioner, all fully automated by a control system. Using the robotic system, we acquired microscopic tiled images at a pixel size of 15.6 µm over the block face of a whole mouse sectioned at 40 µm, with a total data volume of 55 GB. Viewing 2D images at multiple resolutions, we identified small structures such as cardiac vessels, muscle layers, villi of the small intestine, the optic nerve, and layers of the eye. Cryo-imaging was also suitable for imaging embryo mutants in 3D. A mouse in which enhanced green fluorescent protein was expressed under the gamma-actin promoter in smooth muscle cells gave clear 3D views of smooth muscle in the urogenital and gastrointestinal tracts. With cryo-imaging, we could obtain 3D vasculature down to 10 µm over very large regions of mouse brain. The software is fully automated, with programmable imaging/sectioning protocols, email notifications, and automatic volume visualization. With a unique combination of field-of-view, depth of field, contrast, and resolution, the Case Cryo-imaging system fills the gap between whole-animal in vivo imaging and histology. PMID:19248166

  12. 3D reconstruction of an IVUS transducer trajectory with a single view in cineangiography

    NASA Astrophysics Data System (ADS)

    Jourdain, Melissa; Meunier, Jean; Mongrain, Rosaire; Sherknies, Denis; Weng, Ji Yao; Tardif, Jean-Claude

    2005-04-01

    During an Intravascular Ultrasound (IVUS) intervention, a catheter with an ultrasound transducer is introduced into the body through a blood vessel and then pulled back to image a sequence of vessel cross-sections. Unfortunately, there is no 3D information about the position and orientation of these cross-section planes. To position the IVUS images in space, some researchers have proposed complex stereoscopic procedures relying on biplane angiography to obtain two X-ray image sequences of the IVUS transducer trajectory along the catheter. We have elaborated a much simpler algorithm to recover the transducer's 3D trajectory from only a single-view X-ray image sequence. The known pullback distance of the transducer during the IVUS intervention is used as a prior to perform this task. Considering that biplane systems are difficult to operate, rather expensive and uncommon in hospitals, this simple pose estimation algorithm could lead to an affordable and useful tool to better assess the 3D shape of vessels investigated with IVUS.
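    The role of the known pullback distance as a prior can be illustrated under a simplifying orthographic-projection assumption (not the paper's full model): the 3D step length between frames is known from the constant pullback speed, so the out-of-plane component follows from the observed in-plane motion, up to a genuine sign ambiguity:

    ```python
    import numpy as np

    def recover_depth(xy, step):
        """Given 2D projected transducer positions per frame (xy, shape (N, 2))
        and the known 3D arc length `step` travelled between frames, recover
        relative depths under an orthographic projection, assuming the catheter
        always advances in the same out-of-plane direction (a single view cannot
        distinguish +dz from -dz without such an assumption)."""
        d_xy = np.diff(xy, axis=0)
        planar2 = (d_xy ** 2).sum(axis=1)
        dz2 = np.clip(step ** 2 - planar2, 0.0, None)  # clip guards against measurement noise
        return np.concatenate([[0.0], np.cumsum(np.sqrt(dz2))])

    # Toy check: a straight catheter at 45 degrees to the image plane
    t = np.linspace(0, 1, 11)
    true = np.stack([t, np.zeros_like(t), t], axis=1)        # x = z, y = 0
    step = np.linalg.norm(np.diff(true, axis=0), axis=1)[0]  # constant 3D step length
    z = recover_depth(true[:, :2], step)
    print(np.allclose(z, true[:, 2]))  # True: depth recovered from 2D motion + arc length
    ```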

  13. 5. General view of looking glass aircraft in the project ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    5. General view of looking glass aircraft in the project looking glass historic district. View to north. - Offutt Air Force Base, Looking Glass Airborne Command Post, Looking Glass Avenue between Comstat Drive & Nightwatch Avenue, Offutt Air Force Base, Bellevue, Sarpy County, NE

  14. 2. General view of looking glass aircraft in the project ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. General view of looking glass aircraft in the project looking glass historic district. View to south. - Offutt Air Force Base, Looking Glass Airborne Command Post, Looking Glass Avenue between Comstat Drive & Nightwatch Avenue, Offutt Air Force Base, Bellevue, Sarpy County, NE

  15. 1. General view of looking glass aircraft in the project ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. General view of looking glass aircraft in the project looking glass historic district. View to southeast. - Offutt Air Force Base, Looking Glass Airborne Command Post, Looking Glass Avenue between Comstat Drive & Nightwatch Avenue, Offutt Air Force Base, Bellevue, Sarpy County, NE

  16. 3. General view of looking glass aircraft in the project ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. General view of looking glass aircraft in the project looking glass historic district. View to west. - Offutt Air Force Base, Looking Glass Airborne Command Post, Looking Glass Avenue between Comstat Drive & Nightwatch Avenue, Offutt Air Force Base, Bellevue, Sarpy County, NE

  17. 4. General view of looking glass aircraft in the project ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    4. General view of looking glass aircraft in the project looking glass historic district. View to west. - Offutt Air Force Base, Looking Glass Airborne Command Post, Looking Glass Avenue between Comstat Drive & Nightwatch Avenue, Offutt Air Force Base, Bellevue, Sarpy County, NE

  18. Assessment of next-best-view algorithms performance with various 3D scanners and manipulator

    NASA Astrophysics Data System (ADS)

    Karaszewski, M.; Adamczyk, M.; Sitnik, R.

    2016-09-01

    The problem of calculating the three-dimensional (3D) sensor position (and orientation) during the digitization of real-world objects, called next-best-view (NBV) planning, has been an active topic of research for over 20 years. While many solutions have been developed, it is hard to compare their quality based only on the exemplary results presented in papers. We implemented 13 of the most popular NBV algorithms and evaluated their performance by digitizing five objects of various properties, using three measurement heads with different working volumes mounted on a 6-axis robot with a rotating table for placing objects. The results obtained for the 13 algorithms were then compared on four criteria: the number of directional measurements, the digitization time, the total positioning distance, and the surface coverage achieved when digitizing the test objects with the available measurement heads.
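    Many NBV planners share a greedy core: repeatedly pick the view that exposes the most not-yet-digitized surface. A toy set-cover sketch of that idea, in which the candidate views and their visibility sets are invented for illustration:

    ```python
    def greedy_nbv(visibility, target, max_views=10):
        """Pick views greedily until the target surface patches are covered.
        `visibility` maps each candidate view to the set of patch ids it sees."""
        seen, order = set(), []
        while seen < target and len(order) < max_views:
            # next best view = the candidate adding the most unseen patches
            view = max(visibility, key=lambda v: len(visibility[v] - seen))
            gain = visibility[view] - seen
            if not gain:
                break  # no candidate adds coverage; stop early
            order.append(view)
            seen |= gain
        return order, seen

    views = {"front": {1, 2, 3}, "back": {4, 5, 6}, "top": {3, 4}, "left": {6, 7}}
    order, seen = greedy_nbv(views, target={1, 2, 3, 4, 5, 6, 7})
    print(order)  # ['front', 'back', 'left']
    ```

    The comparison criteria in the study map directly onto this loop: the number of directional measurements is `len(order)`, surface coverage is `len(seen) / len(target)`, and real planners additionally weigh positioning distance and time between successive views.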

  19. Flight tests of a hybrid-centered integrated 3D perspective-view primary flight display

    NASA Astrophysics Data System (ADS)

    He, Gang; Feyereisen, Thea; Wilson, Blake; Wyatt, Sandy; Engels, Jary

    2006-05-01

    This paper describes flight tests of a Honeywell Synthetic Vision System (SVS) prototype operating in a hybrid-centered mode on a Primus Epic™ large-format display. This novel hybrid mode effectively resolves some cognitive and perceptual human-factors issues associated with traditional heading-up or track-up display modes. By integrating a synthetic 3D perspective view with advanced Head-Up Display (HUD) symbology in this mode, the test results demonstrate that the hybrid display mode provides clear indications of current track and crab conditions and is effective in overcoming flight-guidance symbology collision and the resulting ambiguity. The hybrid-centering SVS display concept is shown to be effective in all phases of flight and is particularly valuable during landing operations with a strong cross-wind. The recorded flight test data from Honeywell's prototype SVS concept at Reno, Nevada, on board a Honeywell Citation V aircraft are discussed.

  20. From ATLASGAL to SEDIGISM: Towards a Complete 3D View of the Dense Galactic Interstellar Medium

    NASA Astrophysics Data System (ADS)

    Schuller, F.; Urquhart, J.; Bronfman, L.; Csengeri, T.; Bontemps, S.; Duarte-Cabral, A.; Giannetti, A.; Ginsburg, A.; Henning, T.; Immer, K.; Leurini, S.; Mattern, M.; Menten, K.; Molinari, S.; Muller, E.; Sánchez-Monge, A.; Schisano, E.; Suri, S.; Testi, L.; Wang, K.; Wyrowski, F.; Zavagno, A.

    2016-09-01

    The ATLASGAL survey has provided the first unbiased view of the inner Galactic Plane at sub-millimetre wavelengths. This is the largest ground-based survey of its kind to date, covering 420 square degrees at a wavelength of 870 µm. The reduced data, consisting of images and a catalogue of > 10⁴ compact sources, are available from the ESO Science Archive Facility through the Phase 3 infrastructure. The extremely rich statistics of this survey initiated several follow-up projects, including spectroscopic observations to explore molecular complexity and high angular resolution imaging with the Atacama Large Millimeter/submillimeter Array (ALMA), aimed at resolving individual protostars. The most extensive follow-up project is SEDIGISM, a 3D mapping of the dense interstellar medium over a large fraction of the inner Galaxy. Some notable results of these surveys are highlighted.

  1. Femtosecond laser 3D micromachining: a powerful tool for the fabrication of microfluidic, optofluidic, and electrofluidic devices based on glass.

    PubMed

    Sugioka, Koji; Xu, Jian; Wu, Dong; Hanada, Yasutaka; Wang, Zhongke; Cheng, Ya; Midorikawa, Katsumi

    2014-09-21

    Femtosecond lasers have the unique characteristics of ultrashort pulse width and extremely high peak intensity. One of the most important features of femtosecond laser processing is that strong absorption can be induced only at the focus position inside transparent materials, due to nonlinear multiphoton absorption. This exclusive feature makes it possible to directly fabricate three-dimensional (3D) microfluidic devices in glass microchips by two methods: 3D internal modification using direct femtosecond laser writing followed by chemical wet etching (femtosecond laser-assisted etching, FLAE), and direct ablation of glass in water (water-assisted femtosecond laser drilling, WAFLD). Direct femtosecond laser writing also enables the integration of micromechanical, microelectronic, and micro-optical components into the 3D microfluidic devices without stacking or bonding substrates. This paper gives a comprehensive review of state-of-the-art femtosecond laser 3D micromachining for the fabrication of microfluidic, optofluidic, and electrofluidic devices. A new strategy (hybrid femtosecond laser processing) is also presented, in which FLAE is combined with femtosecond laser two-photon polymerization to realize a new type of biochip termed the ship-in-a-bottle biochip. PMID:25012238

  2. Fabrication and characterization of strontium incorporated 3-D bioactive glass scaffolds for bone tissue from biosilica.

    PubMed

    Özarslan, Ali Can; Yücel, Sevil

    2016-11-01

    Bioactive glass scaffolds that contain silica are highly viable biomaterials as bone supporters for bone tissue engineering due to their bioactive behaviour in simulated body fluid (SBF). In the human body, these materials support inorganic bone structure formation through a particular ratio of elements such as silicon (Si), calcium (Ca), sodium (Na) and phosphorus (P), and doping strontium (Sr) into the scaffold structure increases their bioactive behaviour. In this study, bioactive glass scaffolds were produced using rice hull ash (RHA) silica and commercial silica-based bioactive glasses. The structural properties of the scaffolds, such as pore size and porosity, as well as their bioactive behaviour, were investigated. The results showed that undoped and Sr-doped RHA silica-based bioactive glass scaffolds have better bioactivity than commercial silica-based bioactive glass scaffolds, and will be able to be used in their place for bone regeneration applications. Scaffolds produced from undoped or Sr-doped RHA silica have high potential to form new bone at bone defects in tissue engineering. PMID:27524030

  4. View showing rear of looking glass aircraft on operational apron ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    View showing rear of looking glass aircraft on operational apron with nose dock hangar in background. View to northeast - Offutt Air Force Base, Looking Glass Airborne Command Post, Operational & Hangar Access Aprons, Spanning length of northeast half of Project Looking Glass Historic District, Bellevue, Sarpy County, NE

  5. A 3D Freehand Ultrasound System for Multi-view Reconstructions from Sparse 2D Scanning Planes

    PubMed Central

    2011-01-01

    Background A significant limitation of existing 3D ultrasound systems comes from the fact that the majority of them work with fixed acquisition geometries. As a result, the users have very limited control over the geometry of the 2D scanning planes. Methods We present a low-cost and flexible ultrasound imaging system that integrates several image processing components to allow for 3D reconstructions from limited numbers of 2D image planes and multiple acoustic views. Our approach is based on a 3D freehand ultrasound system that allows users to control the 2D acquisition imaging using conventional 2D probes. For reliable performance, we develop new methods for image segmentation and robust multi-view registration. We first present a new hybrid geometric level-set approach that provides reliable segmentation performance with relatively simple initializations and minimum edge leakage. Optimization of the segmentation model parameters and its effect on performance is carefully discussed. Second, using the segmented images, a new coarse-to-fine automatic multi-view registration method is introduced. The approach uses a 3D Hotelling transform to initialize an optimization search. Then, the fine-scale feature-based registration is performed using a robust, non-linear least squares algorithm. The robustness of the multi-view registration system allows for accurate 3D reconstructions from sparse 2D image planes. Results Volume measurements from multi-view 3D reconstructions are found to be consistently and significantly more accurate than measurements from single-view reconstructions. The volume error of multi-view reconstruction is measured to be less than 5% of the true volume. We show that volume reconstruction accuracy is a function of the total number of 2D image planes and the number of views for a calibrated phantom. In clinical in-vivo cardiac experiments, we show that volume estimates of the left ventricle from multi-view reconstructions are found to be in better
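    The coarse stage described above initializes the registration search with a 3D Hotelling transform, i.e. by aligning the centroids and principal axes of the two point sets. A minimal NumPy sketch of that idea (an illustration of the general technique, not the authors' implementation; eigenvector sign ambiguities are only partially handled here):

```python
import numpy as np

def principal_axes(points):
    """Centroid and principal axes (3D Hotelling transform) of an (N, 3) point set."""
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)          # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]            # sort axes by decreasing variance
    return centroid, eigvecs[:, order]

def coarse_align(moving, fixed):
    """Rigidly map `moving` onto `fixed` by matching centroids and principal axes."""
    c_m, A_m = principal_axes(moving)
    c_f, A_f = principal_axes(fixed)
    R = A_f @ A_m.T                              # rotation taking moving axes to fixed axes
    if np.linalg.det(R) < 0:                     # avoid a reflection
        A_m[:, -1] *= -1.0
        R = A_f @ A_m.T
    return (moving - c_m) @ R.T + c_f
```

Such an alignment is only a starting point; the fine feature-based registration then refines it, which is why the remaining axis-flip ambiguity is tolerable in practice.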

  6. A 3D view of the outflow in the Orion Molecular Cloud 1 (OMC-1)

    NASA Astrophysics Data System (ADS)

    Nissen, H. D.; Cunningham, N. J.; Gustafsson, M.; Bally, J.; Lemaire, J.-L.; Favre, C.; Field, D.

    2012-04-01

    Context. Stars whose mass is an order of magnitude greater than the Sun play a prominent role in the evolution of galaxies, exploding as supernovae, triggering bursts of star formation and spreading heavy elements about their host galaxies. A fundamental aspect of star formation is the creation of an outflow. The fast outflow emerging from a region associated with massive star formation in the Orion Molecular Cloud 1 (OMC-1), located behind the Orion Nebula, appears to have been set in motion by an explosive event. Aims: We study the structure and dynamics of outflows in OMC-1. We combine radial velocity and proper motion data for near-IR emission of molecular hydrogen to obtain the first 3-dimensional (3D) structure of the OMC-1 outflow. Our work illustrates a new diagnostic tool for studies of star formation that will be exploited in the near future with the advent of high spatial resolution spectro-imaging in particular with data from the Atacama Large Millimeter Array (ALMA). Methods: We used published radial and proper motion velocities obtained from the shock-excited vibrational emission in the H2 v = 1-0 S(1) line at 2.122 μm obtained with the GriF instrument on the Canada-France-Hawaii Telescope, the Apache Point Observatory, the Anglo-Australian Observatory, and the Subaru Telescope. Results: These data give the 3D velocity of ejecta yielding a 3D reconstruction of the outflows. This allows one to view the material from different vantage points in space giving considerable insight into the geometry. Our analysis indicates that the ejection occurred ≲720 years ago from a distorted ring-like structure of ~15″ (6000 AU) in diameter centered on the proposed point of close encounter of the stars BN, source I and maybe also source n. We propose a simple model involving curvature of shock trajectories in magnetic fields through which the origin of the explosion and the center defined by extrapolated proper motions of BN, I and n may be brought into spatial
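    Combining proper motions with radial velocities, as in the reconstruction above, amounts to converting an angular rate at a known distance into a transverse velocity and appending the line-of-sight component. A sketch of that conversion (the 414 pc distance and the knot's motion values are illustrative assumptions, not measurements from the paper):

```python
import numpy as np

# Conversion factor: 1 mas/yr at 1 kpc corresponds to ~4.74 km/s of transverse speed
KAPPA = 4.74047

def space_velocity(mu_ra_masyr, mu_dec_masyr, v_radial_kms, distance_pc):
    """Combine proper motion and radial velocity into a 3D velocity vector (km/s).

    Axes: x = RA direction, y = Dec direction, z = line of sight.
    """
    d_kpc = distance_pc / 1000.0
    vx = KAPPA * mu_ra_masyr * d_kpc             # transverse velocity, RA component
    vy = KAPPA * mu_dec_masyr * d_kpc            # transverse velocity, Dec component
    return np.array([vx, vy, v_radial_kms])

# Example: a knot moving 20 mas/yr on the sky at an assumed 414 pc,
# receding at 30 km/s along the line of sight
v = space_velocity(20.0, 0.0, 30.0, 414.0)
speed = np.linalg.norm(v)                        # total space speed in km/s
```

With all three velocity components per knot, the ejecta can be traced back in time, which is how a common ejection epoch and origin are estimated.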

  7. PREFACE: Viewing the World through Spin Glasses

    NASA Astrophysics Data System (ADS)

    Coolen, Ton; Nishimori, Hidetoshi; Sourlas, Nicolas; Wong, Michael

    2008-08-01

    This special issue of Journal of Physics A: Mathematical and Theoretical collects papers by speakers and participants of the conference `Viewing the World through Spin Glasses', held in Oxford (UK) on 31 August and 1 September 2007 in honour of Professor David Sherrington. It also includes contributions by many other active researchers in the field of spin glasses and related problems. The theory of spin glasses has a history of more than 30 years and continues to develop within itself as well as into an unexpectedly vast range of interdisciplinary subjects, including neural networks, error-correcting codes, optimization problems and social problems. Most of these amazing developments have their formal basis in the ground-breaking work of David Sherrington with Scott Kirkpatrick, centred on the SK model and the techniques devised to analyse it via the replica method. In this 'classic-of-classics' paper, a theoretical paradigm was suddenly established which became the common tool of analysis for thousands of papers in the following decades. It also led to deep developments in probability theory, through the efforts to understand the enigmatic Parisi solution of the SK model. The work of Professor Sherrington will continue to be an infinite source of our inspiration in many years to come. The purpose of the conference `Viewing the World through Spin Glasses' was to provide an overview of the present status of the fields which Professor Sherrington initiated, on the occasion of his 65th birthday, organized by John Cardy, Juan P Garrahan and the present Guest Editors. The first contribution in this special issue, by Professor Paul Goldbart, reflects his salute delivered at the conference dinner, and conveys its atmosphere very well. The papers that follow, ordered by the date of acceptance, represent the current activities of leading researchers in spin glasses and related fields, and we expect these to serve as milestones for future developments. We thank all the

  8. GENERAL VIEW OF BATCH PLANT, CONVEYOR AND GLASS FURNACE STACK ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    GENERAL VIEW OF BATCH PLANT, CONVEYOR AND GLASS FURNACE STACK LOOKING NORTHEAST FROM DREY STREET - Chambers Window Glass Company, Batch Plant, North of Drey (Nineteenth) Street, West of Constitution Boulevard, Arnold, Westmoreland County, PA

  9. Effect of 3d-transition metal doping on the shielding behavior of barium borate glasses: a spectroscopic study.

    PubMed

    ElBatal, H A; Abdelghany, A M; Ghoneim, N A; ElBatal, F H

    2014-12-10

    UV-visible and FT-infrared spectra were measured for the prepared samples before and after gamma irradiation. The base undoped barium borate glass of the basic composition (40 mol% BaO-60 mol% B2O3) reveals strong charge-transfer UV absorption bands related to unavoidable trace iron impurities (Fe(3+)) within the chemical raw materials. 3d transition metal (TM)-doped glasses exhibit extra characteristic absorption bands due to each TM in its specific valence or coordination state. The optical spectra show that TM ions generally favor the high valence or tetrahedral coordination state in the barium borate host glass. Infrared absorption bands of all prepared glasses reveal the appearance of both triangular BO3 units and tetrahedral BO4 units within their characteristic vibrational modes, and the TM ions cause only minor effects because of the low doping level introduced (0.2%). Gamma irradiation of the undoped barium borate glass increases the intensity of the UV absorption together with the generation of an induced broad visible band at about 580 nm. These changes are correlated with suggested photochemical reactions of trace iron impurities together with the generation of positive hole centers (BHC or OHC) within the visible region through electrons and positive holes generated during the irradiation process.

  10. Construction of Extended 3D Field of Views of the Internal Bladder Wall Surface: A Proof of Concept

    NASA Astrophysics Data System (ADS)

    Ben-Hamadou, Achraf; Daul, Christian; Soussen, Charles

    2016-09-01

    3D extended field of views (FOVs) of the internal bladder wall facilitate lesion diagnosis, patient follow-up and treatment traceability. In this paper, we propose a 3D image mosaicing algorithm guided by 2D cystoscopic video-image registration for obtaining textured FOV mosaics. In this feasibility study, the registration makes use of data from a 3D cystoscope prototype providing, in addition to each small FOV image, some 3D points located on the surface. This proof of concept shows that textured surfaces can be constructed with minimally modified cystoscopes. The potential of the method is demonstrated on numerical and real phantoms reproducing various surface shapes. Pig and human bladder textures are superimposed on phantoms with known shape and dimensions. These data allow for quantitative assessment of the 3D mosaicing algorithm based on the registration of images simulating bladder textures.
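    Mosaicing pipelines like the one above are driven by 2D registration of overlapping video frames, and the translational part of that registration is often bootstrapped with phase correlation. A generic sketch of that standard technique (not the authors' specific registration method):

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (dy, dx) shift such that b ≈ np.roll(a, (dy, dx), axis=(0, 1))."""
    cross = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    cross /= np.abs(cross) + 1e-12               # keep only the phase of the cross-power spectrum
    corr = np.fft.ifft2(cross).real              # a sharp peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > a.shape[0] // 2:                     # wrap to signed shifts
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)
```

For cystoscopic video the full registration must also handle rotation, scale and perspective, but a translation estimate like this is a common, cheap initialization.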

  11. 3-D view of erosional scars on U. S. Mid-Atlantic continental margin

    SciTech Connect

    Farre, J.A.; Ryan, W.B.

    1985-06-01

    Deep-towed side-scan and bathymetric data have been merged to present a 3-D view of the lower continental slope and upper continental rise offshore Atlantic City, New Jersey. Carteret Canyon narrows and becomes nearly stranded on the lower slope where it leads into one of two steep-walled, flat-floored erosional chutes. The floors of the chutes, cut into semilithified middle Eocene siliceous limestones, are marked by downslope-trending grooves. The grooves are interpreted to be gouge marks formed during rock and sediment slides. On the uppermost rise, beneath the chutes, is a 40-m deep depression. The origin of the depression is believed to be related to material moving downslope and encountering the change in gradient at the slope/rise boundary. Downslope of the depression are channels, trails, and allochthonous blocks. The lack of significant post-early Miocene deposits implies that the lower slope offshore New Jersey has yet to reach a configuration conducive to sediment accumulation. The age of erosion on the lower slope apparently ranges from late Eocene-early Miocene to the recent geologic past.

  12. Automatic alignment of standard views in 3D echocardiograms using real-time tracking

    NASA Astrophysics Data System (ADS)

    Orderud, Fredrik; Torp, Hans; Rabben, Stein Inge

    2009-02-01

    In this paper, we present an automatic approach for alignment of standard apical and short-axis slices, and correcting them for out-of-plane motion in 3D echocardiography. This is enabled by using real-time Kalman tracking to perform automatic left ventricle segmentation using a coupled deformable model, consisting of a left ventricle model, as well as structures for the right ventricle and left ventricle outflow tract. Landmark points from the segmented model are then used to generate standard apical and short-axis slices. The slices are automatically updated after tracking in each frame to correct for out-of-plane motion caused by longitudinal shortening of the left ventricle. Results from a dataset of 35 recordings demonstrate the potential for automating apical slice initialization and dynamic short-axis slices. Apical 4-chamber, 2-chamber and long-axis slices are generated based on an assumption of fixed angle between the slices, and short-axis slices are generated so that they follow the same myocardial tissue over the entire cardiac cycle. The error compared to manual annotation was 8.4 ± 3.5 mm for apex, 3.6 ± 1.8 mm for mitral valve and 8.4 ± 7.4 for apical 4-chamber view. The high computational efficiency and automatic behavior of the method enables it to operate in real-time, potentially during image acquisition.
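    Real-time Kalman tracking of the kind used above maintains a predicted state for each landmark and blends the prediction with each new measurement. A minimal constant-velocity filter for a single landmark coordinate, as a generic illustration (the paper's coupled deformable model tracks a full state vector, not one scalar):

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=0.25):
    """Minimal constant-velocity Kalman filter for one landmark coordinate (e.g. in mm)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])        # state transition: [position, velocity]
    H = np.array([[1.0, 0.0]])                   # we measure position only
    Q = q * np.eye(2)                            # process noise covariance
    R = np.array([[r]])                          # measurement noise covariance
    x = np.array([measurements[0], 0.0])         # initial state estimate
    P = np.eye(2)                                # initial state covariance
    estimates = []
    for z in measurements:
        x = F @ x                                # predict state forward one frame
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                      # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
        x = x + K @ (np.array([z]) - H @ x)      # correct with the new measurement
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x[0])
    return np.array(estimates)
```

The predict/correct structure is what makes the approach real-time: each frame costs a handful of small matrix products, independent of sequence length.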

  13. Incremental Multi-view 3D Reconstruction Starting from Two Images Taken by a Stereo Pair of Cameras

    NASA Astrophysics Data System (ADS)

    El hazzat, Soulaiman; Saaidi, Abderrahim; Karam, Antoine; Satori, Khalid

    2015-03-01

    In this paper, we present a new method for multi-view 3D reconstruction based on the use of a binocular stereo vision system constituted of two unattached cameras to initialize the reconstruction process. Afterwards, the second camera of the stereo vision system (characterized by varying parameters) moves to capture more images at different times, which are used to obtain an almost complete 3D reconstruction. The first two projection matrices are estimated by using a 3D pattern with known properties. After that, 3D scene points are recovered by triangulation of the matched interest points between these two images. The proposed approach is incremental. At each insertion of a new image, the camera projection matrix is estimated using the 3D information already calculated, and new 3D points are recovered by triangulation from the result of the matching of interest points between the inserted image and the previous image. For the refinement of the new projection matrix and the new 3D points, a local bundle adjustment is performed. First, all projection matrices are estimated, the matches between consecutive images are detected and a Euclidean sparse 3D reconstruction is obtained. Then, to increase the number of matches and obtain a denser reconstruction, the match propagation algorithm, well suited to this kind of camera movement, was applied to the pairs of consecutive images. The experimental results show the power and robustness of the proposed approach.
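    The triangulation step described above can be written linearly: each view contributes two equations in the homogeneous 3D point, and the stacked system is solved with an SVD. A standard direct linear transform (DLT) sketch, with a synthetic two-camera rig in the usage example (the rig is illustrative, not the paper's calibration):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices; x1, x2: matched (u, v) image points.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)                  # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]                          # de-homogenise

# Synthetic rig: identity intrinsics, second camera translated along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.0, 0.0, 5.0])
x1 = (P1 @ np.append(X_true, 1.0)); x1 = x1[:2] / x1[2]
x2 = (P2 @ np.append(X_true, 1.0)); x2 = x2[:2] / x2[2]
X_est = triangulate(P1, P2, x1, x2)
```

In the incremental pipeline, points triangulated this way seed the estimation of each newly inserted camera's projection matrix, and local bundle adjustment then refines both.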

  14. 2D-3D registration for brain radiation therapy using a 3D CBCT and a single limited field-of-view 2D kV radiograph

    NASA Astrophysics Data System (ADS)

    Munbodh, R.; Moseley, D. J.

    2014-03-01

    We report results of an intensity-based 2D-3D rigid registration framework for patient positioning and monitoring during brain radiotherapy. We evaluated two intensity-based similarity measures, the Pearson Correlation Coefficient (ICC) and Maximum Likelihood with Gaussian noise (MLG) derived from the statistics of transmission images. A useful image frequency band was identified from the bone-to-no-bone ratio. Validation was performed on gold-standard data consisting of 3D kV CBCT scans and 2D kV radiographs of an anthropomorphic head phantom acquired at 23 different poses with parameter variations along six degrees of freedom. At each pose, a single limited field-of-view kV radiograph was registered to the reference CBCT. The ground truth was determined from markers affixed to the phantom and visible in the CBCT images. The mean (and standard deviation) of the absolute errors in recovering each of the six transformation parameters along the x, y and z axes for ICC were φx: 0.08(0.04)°, φy: 0.10(0.09)°, φz: 0.03(0.03)°, tx: 0.13(0.11) mm, ty: 0.08(0.06) mm and tz: 0.44(0.23) mm. For MLG, the corresponding results were φx: 0.10(0.04)°, φy: 0.10(0.09)°, φz: 0.05(0.07)°, tx: 0.11(0.13) mm, ty: 0.05(0.05) mm and tz: 0.44(0.31) mm. It is feasible to accurately estimate all six transformation parameters from a 3D CBCT of the head and a single 2D kV radiograph within an intensity-based registration framework that incorporates the physics of transmission images.
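    Intensity-based 2D-3D registration scores a simulated radiograph rendered from the CBCT at a candidate pose against the measured radiograph, and a Pearson-correlation measure reduces to the normalized covariance of pixel intensities. A minimal sketch of the similarity term alone (the pose search, rendering, and frequency-band filtering from the paper are omitted):

```python
import numpy as np

def pearson_similarity(fixed, moving):
    """Pearson correlation between two equally sized images; higher means better aligned."""
    f = fixed.ravel().astype(float)
    m = moving.ravel().astype(float)
    f -= f.mean()                                # remove the mean of each image
    m -= m.mean()
    denom = np.sqrt((f @ f) * (m @ m))
    return (f @ m) / denom if denom > 0 else 0.0
```

Because the measure is invariant to affine intensity changes, it tolerates the brightness and contrast differences between a rendered and a measured radiograph; the optimizer then maximizes it over the six rigid-pose parameters.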

  15. Direct laser-writing of ferroelectric single-crystal waveguide architectures in glass for 3D integrated optics.

    PubMed

    Stone, Adam; Jain, Himanshu; Dierolf, Volkmar; Sakakura, Masaaki; Shimotsuma, Yasuhiko; Miura, Kiyotaka; Hirao, Kazuyuki; Lapointe, Jerome; Kashyap, Raman

    2015-01-01

    Direct three-dimensional laser writing of amorphous waveguides inside glass has been studied intensely as an attractive route for fabricating photonic integrated circuits. However, achieving essential nonlinear-optic functionality in such devices will also require the ability to create high-quality single-crystal waveguides. Femtosecond laser irradiation is capable of crystallizing glass in 3D, but producing optical-quality single-crystal structures suitable for waveguiding poses unique challenges that are unprecedented in the field of crystal growth. In this work, we use a high angular-resolution electron diffraction method to obtain the first conclusive confirmation that uniform single crystals can be grown inside glass by femtosecond laser writing under optimized conditions. We confirm waveguiding capability and present the first quantitative measurement of power transmission through a laser-written crystal-in-glass waveguide, yielding loss of 2.64 dB/cm at 1530 nm. We demonstrate uniformity of the crystal cross-section down the length of the waveguide and quantify its birefringence. Finally, as a proof-of-concept for patterning more complex device geometries, we demonstrate the use of dynamic phase modulation to grow symmetric crystal junctions with single-pass writing. PMID:25988599

  16. Optimization of composition, structure and mechanical strength of bioactive 3-D glass-ceramic scaffolds for bone substitution.

    PubMed

    Baino, Francesco; Ferraris, Monica; Bretcanu, Oana; Verné, Enrica; Vitale-Brovarone, Chiara

    2013-03-01

    Fabrication of 3-D highly porous, bioactive, and mechanically competent scaffolds represents a significant challenge of bone tissue engineering. In this work, Bioglass®-derived glass-ceramic scaffolds actually fulfilling this complex set of requirements were successfully produced through the sponge replication method. Scaffold processing parameters and sintering treatment were carefully designed in order to obtain final porous bodies with pore content (porosity above 70 vol.%), trabecular architecture and mechanical properties (compressive strength up to 3 MPa) analogous to those of cancellous bone. The influence of the Bioglass® particle size on the structural and mechanical features of the sintered scaffolds was considered and discussed. The relationship between porosity and mechanical strength was investigated and modeled. The three-dimensional architecture, porosity, mechanical strength and in vitro bioactivity of the optimized Bioglass®-derived scaffolds were also compared to those of CEL2-based glass-ceramic scaffolds (CEL2 is an experimental bioactive glass originally developed by the authors at Politecnico di Torino) fabricated by the same processing technique, in an attempt to understand the role of different bioactive glass compositions in the major features of scaffolds prepared by the same method.
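    The porosity-strength relationship in foam-like scaffolds is commonly modeled with the Gibson-Ashby scaling law, in which crushing strength falls off as relative density to the 3/2 power. The abstract does not state which model the authors fitted, so the form and constants below are an illustrative assumption:

```python
def gibson_ashby_strength(porosity, sigma_solid_mpa, c=0.3):
    """Gibson-Ashby estimate of crushing strength for an open-cell foam (MPa).

    sigma = c * sigma_solid * (relative density)**1.5, with relative density = 1 - porosity.
    The prefactor c ~ 0.3 is the commonly quoted value for open-cell foams.
    """
    rel_density = 1.0 - porosity
    return c * sigma_solid_mpa * rel_density ** 1.5

# Example: 70% porosity with an assumed strut-material strength of 100 MPa
estimate = gibson_ashby_strength(0.70, 100.0)
```

The steep 3/2 exponent is why pushing porosity above 70 vol.% while keeping compressive strength in the MPa range, as reported above, requires careful control of sintering.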

  17. Direct laser-writing of ferroelectric single-crystal waveguide architectures in glass for 3D integrated optics

    PubMed Central

    Stone, Adam; Jain, Himanshu; Dierolf, Volkmar; Sakakura, Masaaki; Shimotsuma, Yasuhiko; Miura, Kiyotaka; Hirao, Kazuyuki; Lapointe, Jerome; Kashyap, Raman

    2015-01-01

    Direct three-dimensional laser writing of amorphous waveguides inside glass has been studied intensely as an attractive route for fabricating photonic integrated circuits. However, achieving essential nonlinear-optic functionality in such devices will also require the ability to create high-quality single-crystal waveguides. Femtosecond laser irradiation is capable of crystallizing glass in 3D, but producing optical-quality single-crystal structures suitable for waveguiding poses unique challenges that are unprecedented in the field of crystal growth. In this work, we use a high angular-resolution electron diffraction method to obtain the first conclusive confirmation that uniform single crystals can be grown inside glass by femtosecond laser writing under optimized conditions. We confirm waveguiding capability and present the first quantitative measurement of power transmission through a laser-written crystal-in-glass waveguide, yielding loss of 2.64 dB/cm at 1530 nm. We demonstrate uniformity of the crystal cross-section down the length of the waveguide and quantify its birefringence. Finally, as a proof-of-concept for patterning more complex device geometries, we demonstrate the use of dynamic phase modulation to grow symmetric crystal junctions with single-pass writing. PMID:25988599

  18. Guided Evolution of Bulk Metallic Glass Nanostructures: A Platform for Designing 3D Electrocatalytic Surfaces.

    PubMed

    Doubek, Gustavo; Sekol, Ryan C; Li, Jinyang; Ryu, Won-Hee; Gittleson, Forrest S; Nejati, Siamak; Moy, Eric; Reid, Candy; Carmo, Marcelo; Linardi, Marcelo; Bordeenithikasem, Punnathat; Kinser, Emily; Liu, Yanhui; Tong, Xiao; Osuji, Chinedum O; Schroers, Jan; Mukherjee, Sundeep; Taylor, André D

    2016-03-01

    Electrochemical devices such as fuel cells, electrolyzers, lithium-air batteries, and pseudocapacitors are expected to play a major role in energy conversion and storage in the near future. Here, it is demonstrated how desirable bulk metallic glass compositions can be obtained using a combinatorial approach, and it is shown that these alloys can serve as a platform technology for a wide variety of electrochemical applications through several surface modification techniques. PMID:26689722

  19. Human guidance of mobile robots in complex 3D environments using smart glasses

    NASA Astrophysics Data System (ADS)

    Kopinsky, Ryan; Sharma, Aneesh; Gupta, Nikhil; Ordonez, Camilo; Collins, Emmanuel; Barber, Daniel

    2016-05-01

    In order for humans to safely work alongside robots in the field, the human-robot (HR) interface, which enables bi-directional communication between human and robot, should be able to quickly and concisely express the robot's intentions and needs. While the robot operates mostly in autonomous mode, the human should be able to intervene to effectively guide the robot in complex, risky and/or highly uncertain scenarios. Using smart glasses such as Google Glass, we seek to develop an HR interface that reduces interaction time and distractions while the user guides the robot.

  1. Development of a Top-View Numeric Coding Teaching-Learning Trajectory within an Elementary Grades 3-D Visualization Design Research Project

    ERIC Educational Resources Information Center

    Sack, Jacqueline J.

    2013-01-01

    This article explicates the development of top-view numeric coding of 3-D cube structures within a design research project focused on 3-D visualization skills for elementary grades children. It describes children's conceptual development of 3-D cube structures using concrete models, conventional 2-D pictures and abstract top-view numeric…

  2. Extended field-of-view and increased-signal 3D holographic illumination with time-division multiplexing.

    PubMed

    Yang, Samuel J; Allen, William E; Kauvar, Isaac; Andalman, Aaron S; Young, Noah P; Kim, Christina K; Marshel, James H; Wetzstein, Gordon; Deisseroth, Karl

    2015-12-14

    Phase spatial light modulators (SLMs) are widely used for generating multifocal three-dimensional (3D) illumination patterns, but these are limited to a field of view constrained by the pixel count or size of the SLM. Further, with two-photon SLM-based excitation, increasing the number of focal spots penalizes the total signal linearly, requiring more laser power than is available or can be tolerated by the sample. Here we analyze and demonstrate a method of using galvanometer mirrors to time-sequentially reposition multiple 3D holograms, both extending the field of view and increasing the total time-averaged two-photon signal. We apply our approach to 3D two-photon in vivo neuronal calcium imaging. PMID:26699047

  4. Extended field-of-view and increased-signal 3D holographic illumination with time-division multiplexing

    PubMed Central

    Yang, Samuel J.; Allen, William E.; Kauvar, Isaac; Andalman, Aaron S.; Young, Noah P.; Kim, Christina K.; Marshel, James H.; Wetzstein, Gordon; Deisseroth, Karl

    2016-01-01

    Phase spatial light modulators (SLMs) are widely used for generating multifocal three-dimensional (3D) illumination patterns, but these are limited to a field of view constrained by the pixel count or size of the SLM. Further, with two-photon SLM-based excitation, increasing the number of focal spots penalizes the total signal linearly—requiring more laser power than is available or can be tolerated by the sample. Here we analyze and demonstrate a method of using galvanometer mirrors to time-sequentially reposition multiple 3D holograms, both extending the field of view and increasing the total time-averaged two-photon signal. We apply our approach to 3D two-photon in vivo neuronal calcium imaging. PMID:26699047
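    Multifocal SLM phase patterns like those discussed above are typically computed with an iterative Fourier-transform algorithm such as Gerchberg-Saxton, alternating between the SLM plane (where only phase can be controlled) and the focal plane (where the target spot amplitudes are imposed). A minimal 2D sketch under the standard Fourier-lens assumption (a generic illustration, not the authors' hologram-generation code):

```python
import numpy as np

def gs_hologram(target_amplitude, n_iter=60, seed=0):
    """Gerchberg-Saxton: find an SLM phase whose far field matches a target amplitude."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, target_amplitude.shape)
    for _ in range(n_iter):
        far = np.fft.fft2(np.exp(1j * phase))                 # propagate SLM -> focal plane
        far = target_amplitude * np.exp(1j * np.angle(far))   # impose target spot amplitudes
        near = np.fft.ifft2(far)                              # propagate back to the SLM
        phase = np.angle(near)                                # keep phase only (SLM constraint)
    return phase

# Target: three focal spots on a 64x64 grid
target = np.zeros((64, 64))
target[10, 12] = target[40, 8] = target[25, 50] = 1.0
phase_pattern = gs_hologram(target)
```

The galvanometer scheme in the paper then repositions a hologram like this sequentially, so the SLM's limited pixel count no longer bounds the usable field of view.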

  5. From pixel to voxel: a deeper view of biological tissue by 3D mass spectral imaging

    PubMed Central

    Ye, Hui; Greer, Tyler; Li, Lingjun

    2011-01-01

    Three-dimensional mass spectral imaging (3D MSI) is an exciting field that grants the ability to study a broad mass range of molecular species, ranging from small molecules to large proteins, by creating lateral and vertical distribution maps of select compounds. Although the general premise behind 3D MSI is simple, factors such as the choice of ionization method, sample handling, and software considerations, among many others, must be taken into account for the successful design of a 3D MSI experiment. This review provides a brief overview of ionization methods, sample preparation, software types and technological advancements driving 3D MSI research of a wide range of low- to high-mass analytes. Future perspectives in this field are also provided, concluding that this powerful analytical tool promises ever-growing applications in the biomedical field as its development continues. PMID:21320052
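    The "pixel to voxel" step is, at its core, a stack: registered 2D ion-intensity maps from serial sections become a 3D array whose voxels are anisotropic (section thickness along z versus lateral pixel size in x and y). A trivial sketch of that assembly (registration of the slices is assumed to have been done already):

```python
import numpy as np

def stack_slices(slices, z_spacing_um, xy_spacing_um):
    """Stack registered 2D ion-intensity maps into a voxel volume.

    slices: sequence of equally shaped 2D arrays, one per serial section.
    Returns the (z, y, x) volume and the voxel size in micrometres per axis.
    """
    volume = np.stack(slices, axis=0)            # axis 0 runs through the sections
    voxel_size_um = (z_spacing_um, xy_spacing_um, xy_spacing_um)
    return volume, voxel_size_um
```

Recording the voxel size alongside the array matters because section thickness is usually much larger than the lateral pixel size, and downstream rendering must account for that anisotropy.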

  6. Effectiveness of Applying 2D Static Depictions and 3D Animations to Orthographic Views Learning in Graphical Course

    ERIC Educational Resources Information Center

    Wu, Chih-Fu; Chiang, Ming-Chin

    2013-01-01

    This study provides experiment results as an educational reference for instructors to help student obtain a better way to learn orthographic views in graphical course. A visual experiment was held to explore the comprehensive differences between 2D static and 3D animation object features; the goal was to reduce the possible misunderstanding…

  7. Hubble and ESO's VLT provide unique 3D views of remote galaxies

    NASA Astrophysics Data System (ADS)

    2009-03-01

    Astronomers have obtained exceptional 3D views of distant galaxies, seen when the Universe was half its current age, by combining the twin strengths of the NASA/ESA Hubble Space Telescope's acute eye and the capacity of ESO's Very Large Telescope to probe the motions of gas in tiny objects. By looking at this unique "history book" of our Universe, at an epoch when the Sun and the Earth did not yet exist, scientists hope to solve the puzzle of how galaxies formed in the remote past. [ESO PR Photo 10a/09: A 3D view of remote galaxies. ESO PR Photo 10b/09: Measuring motions in 3 distant galaxies. ESO PR Video 10a/09: Galaxies in collision.] For decades, distant galaxies that emitted their light six billion years ago were no more than small specks of light on the sky. With the launch of the Hubble Space Telescope in the early 1990s, astronomers were able to scrutinise the structure of distant galaxies in some detail for the first time. Under the superb skies of Paranal, the VLT's FLAMES/GIRAFFE spectrograph (ESO 13/02) -- which obtains simultaneous spectra from small areas of extended objects -- can now also resolve the motions of the gas in these distant galaxies (ESO 10/06). "This unique combination of Hubble and the VLT allows us to model distant galaxies almost as nicely as we can close ones," says François Hammer, who led the team. "In effect, FLAMES/GIRAFFE now allows us to measure the velocity of the gas at various locations in these objects. This means that we can see how the gas is moving, which provides us with a three-dimensional view of galaxies halfway across the Universe." The team has undertaken the Herculean task of reconstituting the history of about one hundred remote galaxies that have been observed with both Hubble and GIRAFFE on the VLT. The first results are coming in and have already provided useful insights for three galaxies. In one galaxy, GIRAFFE revealed a region full of ionised gas, that is, hot gas composed of atoms that have been stripped of

  8. Real-time 3-D shape measurement with composite phase-shifting fringes and multi-view system.

    PubMed

    Tao, Tianyang; Chen, Qian; Da, Jian; Feng, Shijie; Hu, Yan; Zuo, Chao

    2016-09-01

    In recent years, fringe projection has become an established and essential method for dynamic three-dimensional (3-D) shape measurement in fields such as online inspection and real-time quality control. Numerous high-speed 3-D shape measurement methods have been developed by employing high-speed hardware, minimizing the number of pattern projections, or both. However, dynamic 3-D shape measurement of arbitrarily-shaped objects at full sensor resolution, without the need for additional pattern projections, remains a major challenge. In this work, we introduce a high-speed 3-D shape measurement technique based on composite phase-shifting fringes and a multi-view system. A geometry constraint is adopted to search for corresponding points independently, without additional images. Meanwhile, by analysing the 3-D position and the main wrapped phase of each corresponding point, pairs with an incorrect 3-D position or a considerable phase difference are effectively rejected. All of the qualified corresponding points are then corrected, and the unique one, as well as the related period order, is selected through the embedded triangular wave. Finally, considering that some points can be captured by only one of the cameras due to occlusions and may therefore have different fringe orders in the two views, a left-right consistency check is employed to eliminate erroneous period orders in this case. Several experiments on both static and dynamic scenes verify that our method can achieve a speed of 120 frames per second (fps) with 25-period fringe patterns for fast, dense, and accurate 3-D measurement. PMID:27607632
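
The left-right consistency idea described above can be sketched in a much-simplified form. Here `order_left`/`order_right` are hypothetical per-pixel fringe-period-order maps and `disp_left` a per-pixel disparity map; none of these names or the pixel-wise loop come from the paper itself, which operates on phase and 3-D position as well.

```python
import numpy as np

def lr_consistency_mask(order_left, order_right, disp_left):
    """Keep only pixels whose fringe period order agrees between the views.

    Points visible to only one camera (occlusions) typically map outside
    the image or to a conflicting order in the other view, and are rejected,
    mirroring the consistency-check step described above.
    """
    h, w = order_left.shape
    valid = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            xr = x - disp_left[y, x]  # corresponding column in the right view
            if 0 <= xr < w and order_left[y, x] == order_right[y, xr]:
                valid[y, x] = True
    return valid
```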

  10. 3D analysis of thermal and stress evolution during laser cladding of bioactive glass coatings.

    PubMed

    Krzyzanowski, Michal; Bajda, Szymon; Liu, Yijun; Triantaphyllou, Andrew; Mark Rainforth, W; Glendenning, Malcolm

    2016-06-01

    Thermal and strain-stress transient fields during laser cladding of bioactive glass coatings on a Ti6Al4V alloy substrate were numerically calculated and analysed. Conditions leading to micro-cracking susceptibility of the coating were investigated using finite element based modelling supported by experimental results from microscopic investigation of the sample coatings. Consecutive temperature and stress peaks develop within the cladded material as the laser beam moves along its complex trajectory, which can lead to micro-cracking. Preheating the base plate to 500°C allowed the laser power to be decreased and the cooling speed between consecutive temperature peaks to be lowered, thereby reducing cracking susceptibility. The cooling rate during cladding of the second and third layers was lower than during cladding of the first, contributing to the improved cracking resistance of the subsequent layers through progressive accumulation of heat over the process.

  11. Multi-scale Characterisation of the 3D Microstructure of a Thermally-Shocked Bulk Metallic Glass Matrix Composite.

    PubMed

    Zhang, Wei; Bodey, Andrew J; Sui, Tan; Kockelmann, Winfried; Rau, Christoph; Korsunsky, Alexander M; Mi, Jiawei

    2016-01-01

    Bulk metallic glass matrix composites (BMGMCs) are a new class of metal alloys with significantly increased ductility and impact toughness, resulting from ductile crystalline phases distributed uniformly within the amorphous matrix. However, the 3D structures and morphologies of such composites at the nano- and micrometre scales have never been reported before. We used high-density electric currents to thermally shock a Zr-Ti based BMGMC to different temperatures, and used X-ray microtomography, FIB-SEM nanotomography and neutron diffraction to reveal the morphologies, compositions, volume fractions and thermal stabilities of the nano- and microstructures. Understanding these is essential for optimizing the design of BMGMCs and developing viable manufacturing methods. PMID:26725519

  12. Multi-scale Characterisation of the 3D Microstructure of a Thermally-Shocked Bulk Metallic Glass Matrix Composite

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Bodey, Andrew J.; Sui, Tan; Kockelmann, Winfried; Rau, Christoph; Korsunsky, Alexander M.; Mi, Jiawei

    2016-01-01

    Bulk metallic glass matrix composites (BMGMCs) are a new class of metal alloys with significantly increased ductility and impact toughness, resulting from ductile crystalline phases distributed uniformly within the amorphous matrix. However, the 3D structures and morphologies of such composites at the nano- and micrometre scales have never been reported before. We used high-density electric currents to thermally shock a Zr-Ti based BMGMC to different temperatures, and used X-ray microtomography, FIB-SEM nanotomography and neutron diffraction to reveal the morphologies, compositions, volume fractions and thermal stabilities of the nano- and microstructures. Understanding these is essential for optimizing the design of BMGMCs and developing viable manufacturing methods.

  13. Multi-scale Characterisation of the 3D Microstructure of a Thermally-Shocked Bulk Metallic Glass Matrix Composite

    PubMed Central

    Zhang, Wei; Bodey, Andrew J.; Sui, Tan; Kockelmann, Winfried; Rau, Christoph; Korsunsky, Alexander M.; Mi, Jiawei

    2016-01-01

    Bulk metallic glass matrix composites (BMGMCs) are a new class of metal alloys with significantly increased ductility and impact toughness, resulting from ductile crystalline phases distributed uniformly within the amorphous matrix. However, the 3D structures and morphologies of such composites at the nano- and micrometre scales have never been reported before. We used high-density electric currents to thermally shock a Zr-Ti based BMGMC to different temperatures, and used X-ray microtomography, FIB-SEM nanotomography and neutron diffraction to reveal the morphologies, compositions, volume fractions and thermal stabilities of the nano- and microstructures. Understanding these is essential for optimizing the design of BMGMCs and developing viable manufacturing methods. PMID:26725519

  14. Robotic deposition and in vitro characterization of 3D gelatin-bioactive glass hybrid scaffolds for biomedical applications.

    PubMed

    Gao, Chunxia; Rahaman, Mohamed N; Gao, Qiang; Teramoto, Akira; Abe, Koji

    2013-07-01

    The development of inorganic-organic hybrid scaffolds with controllable degradation and bioactive properties is receiving considerable interest for bone and tissue regeneration. The objective of this study was to create hybrid scaffolds of gelatin and bioactive glass (BG) with a controlled, three-dimensional (3D) architecture by a combined sol-gel and robotic deposition (robocasting) method, and to evaluate their mechanical response, bioactivity, and response to cells in vitro. Inks for robotic deposition of the scaffolds were prepared by dissolving gelatin in a sol-gel precursor solution of the bioactive glass (70SiO2-25CaO-5P2O5; mol%) and aging the solution to form a gel with the requisite viscosity. After drying and crosslinking, the gelatin-BG scaffolds, with a grid-like architecture (filament diameter ∼350 µm; pore width ∼550 µm), showed an elasto-plastic response, with a compressive strength of 5.1 ± 0.6 MPa, in the range of values for human trabecular bone (2-12 MPa). When immersed in phosphate-buffered saline, the crosslinked scaffolds rapidly absorbed water (∼440% of their dry weight after 2 h) and showed an elastic response at deformations up to ∼60%. Immersion of the scaffolds in a simulated body fluid resulted in the formation of a hydroxyapatite-like surface layer within 5 days, indicating their bioactivity in vitro. The scaffolds supported the proliferation, alkaline phosphatase activity, and mineralization of osteogenic MC3T3-E1 cells in vitro, showing their biocompatibility. Altogether, the results indicate that these gelatin-BG hybrid scaffolds with a controlled 3D architecture of interconnected pores have potential for use as implants for bone regeneration.

  15. Micro-electrical discharge machining of 3D micro-molds from Pd40Cu30P20Ni10 metallic glass by using laminated 3D micro-electrodes

    NASA Astrophysics Data System (ADS)

    Xu, Bin; Wu, Xiao-yu; Ma, Jiang; Liang, Xiong; Lei, Jian-guo; Wu, Bo; Ruan, Shuang-chen; Wang, Zhen-long

    2016-03-01

    To obtain 3D micro-molds with better surface quality (slight ridges) and mechanical properties, in this paper 3D micro-electrodes were fabricated and applied in micro-electrical discharge machining (micro-EDM) to process Pd40Cu30P20Ni10 metallic glass. First, 100 μm-thick Cu foil was cut to obtain multilayer 2D micro-structures, which were then laminated to form 3D micro-electrodes (with feature sizes of less than 1 mm). Second, at a voltage of 80 V, a pulse frequency of 0.2 MHz, a pulse width of 800 ns and a pulse interval of 4200 ns, the 3D micro-electrodes were applied in micro-EDM to process the Pd40Cu30P20Ni10 metallic glass, yielding 3D micro-molds with feature sizes within 1 mm. Third, scanning electron microscopy, energy dispersive spectroscopy and X-ray diffraction analyses were carried out on the processed results. The analysis results indicate that, with increasing micro-EDM depth, carbon on the processed surface gradually increased from 0.5% to 5.8%, and the processed surface contained new phases (Ni12P5 and Cu3P).
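
The stated pulse parameters are mutually consistent: an 800 ns pulse width plus a 4200 ns interval gives a 5000 ns period, i.e. the quoted 0.2 MHz repetition rate, at a 16% duty cycle. A quick check of this arithmetic:

```python
def pulse_timing(width_ns, interval_ns):
    """Repetition frequency (MHz) and duty cycle from pulse width and interval."""
    period_ns = width_ns + interval_ns
    freq_mhz = 1e3 / period_ns  # 1/ns = GHz, so 1e3 / (period in ns) gives MHz
    duty = width_ns / period_ns
    return freq_mhz, duty

freq, duty = pulse_timing(800, 4200)  # → (0.2, 0.16)
```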

  16. The Relationship Between Glass Formability and the Properties of the Bcc Phase in TITANIUM-3D Metal Alloys

    NASA Astrophysics Data System (ADS)

    Sinkler, Wharton

    The present study concerns glass formation and the β (bcc) phase in Ti-3d metal systems. β-phase stability is related to amorphization, because the formability and stability of metallic glasses depend on the relative thermodynamic instability of chemically disordered crystalline solid solution phases (Johnson 1986). Correlations are found in this series of alloys which support a connection between electronic characteristics of the bcc phase and the tendency for glass formation. Electron irradiation-induced amorphization in Ti-3d metal systems is investigated as a function of temperature and ΔN, the group number difference between Ti and the solute. ΔN is made continuous by using a series of pseudobinary Laves compounds Ti(M1_x M2_(1-x))_2. For ΔN ≤ 2.2 (between TiCr2 and TiMn2), low temperature irradiation damage induces oriented precipitation of the β (bcc) solid solution phase from the damaged compound. For ΔN > 2.2, amorphization occurs. β-phase precipitation under irradiation suggests that β-phase stability is continuously enhanced as ΔN decreases. Diffuse ω scattering in the quenched Ti-Cr β phase is investigated using electron diffraction and low temperature electron irradiation. A new model of the short range ordered atomic displacements causing the diffuse scattering is developed. Based on this model, it is proposed that the structure reflects chemical short range order. This is supported by irradiation results on the β phase. A correlation is found between the diffuse scattering and the valence electron concentration. The explanation proposed for this correlation is that the chemical ordering in the β phase is driven by Fermi surface nesting. Results of annealing of quenched β Ti-Cr are presented, and are compared with reports of annealing-induced amorphization of this phase (Blatter et al. 1988; Yan et al. 1993). Amorphization is not reproduced. A metastable compound phase β'' precipitates

  17. 3D Shape and Pose Estimation of Deformable Tapes from Multiple Views

    NASA Astrophysics Data System (ADS)

    Kubota, Hitoshi; Ono, Masakazu; Takeshi, Masami; Saito, Hideo

    In this paper, we propose a method to estimate the 3D shape of deformable plastic tapes from multiple camera images. In this method, the tape is modeled as a serial connection of multiple rectangular plates, where the size of each plate is known in advance and the node angles between plates represent the shape of the object. The node angles of the object are estimated from the 2D silhouette shapes observed in the multiple images. The estimation is performed by minimizing the difference between the silhouette shapes in the input images and in synthesized images of the model shape. To demonstrate the proposed method, the 3D shape of a tape is estimated from two camera images. The accuracy of the estimation is sufficient for the assembly robot in our plant to handle the tape. The computation time is also sufficiently short to apply the proposed algorithm in the assembly plant.

  18. View planetary differentiation process through high-resolution 3D imaging

    NASA Astrophysics Data System (ADS)

    Fei, Y.

    2011-12-01

    Core-mantle separation is one of the most important processes in planetary evolution, defining the structure and chemical distribution in the planets. Iron-dominated core materials could migrate through the silicate mantle to the core by efficient liquid-liquid separation and/or by percolation of liquid metal through a solid silicate matrix. We can experimentally simulate these processes to examine the efficiency and timing of core formation and its geochemical signatures. The quantitative measure of the efficiency of percolation is usually the dihedral angle, related to the interfacial energies of the liquid and solid phases. To determine the true dihedral angle at high pressures and temperatures, it is necessary to measure the relative frequency distributions of apparent dihedral angles between the quenched liquid metal and silicate grains for each experiment. Here I present a new imaging technique to visualize the distribution of liquid metal in a silicate matrix in 3D by a combination of focused ion beam (FIB) milling and high-resolution SEM imaging. The 3D volume rendering provides precise determination of the dihedral angle and quantitative measures of volume fraction and connectivity. I have conducted a series of experiments using mixtures of San Carlos olivine and Fe-S (10wt%S) metal with different metal-silicate ratios, up to 25 GPa and at temperatures above 1800°C. High-quality 3D volume renderings were reconstructed from FIB serial sectioning and imaging with 10-nm slice thickness and 14-nm image resolution for each quenched sample. The unprecedented spatial resolution at the nano scale allows detailed examination of textural features and precise determination of the dihedral angle as a function of pressure, temperature and composition. The 3D reconstruction also allows direct assessment of connectivity in a multi-phase matrix, providing a new way to investigate the efficiency of metal percolation in a real silicate mantle.
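
The link between the dihedral angle and the interfacial energies mentioned above is the standard force-balance relation cos(θ/2) = γ_ss / (2 γ_sl), where γ_ss is the solid-solid grain-boundary energy and γ_sl the solid-liquid interfacial energy; θ < 60° is the usual criterion for an interconnected melt network. A sketch of this textbook relation (the energy values below are purely illustrative, not measurements from this work):

```python
import math

def dihedral_angle_deg(gamma_ss, gamma_sl):
    """Equilibrium dihedral angle from the interfacial energy balance
    cos(theta / 2) = gamma_ss / (2 * gamma_sl)."""
    ratio = gamma_ss / (2.0 * gamma_sl)
    if not -1.0 <= ratio <= 1.0:
        raise ValueError("no equilibrium dihedral angle for this energy ratio")
    return 2.0 * math.degrees(math.acos(ratio))

# theta reaches the 60-degree percolation threshold when
# gamma_ss = sqrt(3) * gamma_sl:
theta = dihedral_angle_deg(math.sqrt(3.0), 1.0)  # → 60.0
```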

  19. 3D high-efficiency video coding for multi-view video and depth data.

    PubMed

    Muller, Karsten; Schwarz, Heiko; Marpe, Detlev; Bartnik, Christian; Bosse, Sebastian; Brust, Heribert; Hinz, Tobias; Lakshman, Haricharan; Merkle, Philipp; Rhee, Franz Hunn; Tech, Gerhard; Winken, Martin; Wiegand, Thomas

    2013-09-01

    This paper describes an extension of the high efficiency video coding (HEVC) standard for coding of multi-view video and depth data. In addition to the known concept of disparity-compensated prediction, inter-view motion parameter prediction and inter-view residual prediction are developed and integrated for coding of the dependent video views. Furthermore, for depth coding, new intra coding modes, a modified motion compensation and motion vector coding, as well as the concept of motion parameter inheritance, are part of the HEVC extension. A novel encoder control uses view synthesis optimization, which guarantees that high-quality intermediate views can be generated based on the decoded data. The bitstream format supports the extraction of partial bitstreams, so that conventional 2D video, stereo video, and the full multi-view video plus depth format can be decoded from a single bitstream. Objective and subjective results are presented, demonstrating that the proposed approach provides 50% bit rate savings in comparison with HEVC simulcast and 20% in comparison with a straightforward multi-view extension of HEVC without the newly developed coding tools. PMID:23715605

  1. INTERIOR VIEW SHOWING FURNACE KEEPER OBSERVING FURNACE THROUGH BLUE GLASS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    INTERIOR VIEW SHOWING FURNACE KEEPER OBSERVING FURNACE THROUGH BLUE GLASS EVERY TWENTY MINUTES TO DETERMINE SIZE AND TEXTURE OF BATCH AND OTHER VARIABLES. FAN IN FRONT COOLS WORKERS AS THEY CONDUCT REPAIRS. FURNACE TEMPERATURE AT 1572 DEGREES FAHRENHEIT. - Chambers-McKee Window Glass Company, Furnace No. 2, Clay Avenue Extension, Jeannette, Westmoreland County, PA

  2. VIEW OF THREE NORTH FACING STAINED GLASS WINDOWS. THESE WINDOWS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    VIEW OF THREE NORTH FACING STAINED GLASS WINDOWS. THESE WINDOWS ARE LOCATED JUST BELOW THE CHOIR LOFT. - U.S. Naval Base, Pearl Harbor, Chapel, Corner of Oakley & Nimitz Street, Pearl City, Honolulu County, HI

  3. VIEW OF THREE NORTH FACING STAINED GLASS WINDOWS. THESE WINDOWS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    VIEW OF THREE NORTH FACING STAINED GLASS WINDOWS. THESE WINDOWS ARE LOCATED ADJACENT TO THE ALTAR. - U.S. Naval Base, Pearl Harbor, Chapel, Corner of Oakley & Nimitz Street, Pearl City, Honolulu County, HI

  4. VIEW OF THREE SOUTH FACING STAINED GLASS WINDOWS. THESE WINDOWS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    VIEW OF THREE SOUTH FACING STAINED GLASS WINDOWS. THESE WINDOWS ARE LOCATED ADJACENT TO THE ALTAR. - U.S. Naval Base, Pearl Harbor, Chapel, Corner of Oakley & Nimitz Street, Pearl City, Honolulu County, HI

  5. 18. INTERIOR DETAIL VIEW OF STAINED GLASS WINDOW LOCATED AT ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    18. INTERIOR DETAIL VIEW OF STAINED GLASS WINDOW LOCATED AT SOUTH SIDE OF ALTAR, NOTE INSCRIPTION DEDICATED IN THE MEMORY OF FATHER DAMIEN - St. Francis Catholic Church, Moloka'i Island, Kalaupapa, Kalawao County, HI

  6. 3D reconstruction of scintillation light emission from proton pencil beams using limited viewing angles—a simulation study

    NASA Astrophysics Data System (ADS)

    Hui, CheukKai; Robertson, Daniel; Beddar, Sam

    2014-08-01

    An accurate and high-resolution quality assurance (QA) method for proton radiotherapy beams is necessary to ensure correct dose delivery to the target. Detectors based on a large volume of liquid scintillator have shown great promise in providing fast and high-resolution measurements of proton treatment fields. However, previous work with these detectors has been limited to two-dimensional measurements, and the quantitative measurement of dose distributions was lacking. The purpose of the current study is to assess the feasibility of reconstructing three-dimensional (3D) scintillation light distributions of spot scanning proton beams using a scintillation system. The proposed system consists of a tank of liquid scintillator imaged by charge-coupled device cameras at three orthogonal viewing angles. Because of the limited number of viewing angles, we developed a profile-based technique to obtain an initial estimate that can improve the quality of the 3D reconstruction. We found that our proposed scintillator system and profile-based technique can reconstruct a single energy proton beam in 3D with a gamma passing rate (3%/3 mm local) of 100.0%. For a single energy layer of an intensity modulated proton therapy prostate treatment plan, the proposed method can reconstruct the 3D light distribution with a gamma pass rate (3%/3 mm local) of 99.7%. In addition, we also found that the proposed method is effective in detecting errors in the treatment plan, indicating that it can be a very useful tool for 3D proton beam QA.
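
The gamma passing rates quoted above combine a local dose tolerance (3%) with a distance-to-agreement tolerance (3 mm). A simplified 1D illustration of the local-dose gamma criterion follows; the published analyses are 3D, so this is only a sketch of the metric, with hypothetical dose arrays:

```python
import numpy as np

def gamma_pass_rate_1d(xs_mm, ref_dose, meas_dose, dose_tol=0.03, dta_mm=3.0):
    """Fraction of reference points with gamma <= 1 (local dose normalisation).

    For each reference point, gamma is the minimum over all measured points
    of sqrt(dose_term^2 + dist_term^2), with the 3% tolerance applied to the
    local reference dose and the 3 mm tolerance to the position offset.
    """
    gammas = []
    for x, d in zip(xs_mm, ref_dose):
        dose_term = (meas_dose - d) / (dose_tol * d)  # local 3% dose criterion
        dist_term = (xs_mm - x) / dta_mm              # 3 mm distance-to-agreement
        gammas.append(np.sqrt(dose_term ** 2 + dist_term ** 2).min())
    return float(np.mean(np.asarray(gammas) <= 1.0))
```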

  7. 3D reconstruction of scintillation light emission from proton pencil beams using limited viewing angles-a simulation study.

    PubMed

    Hui, CheukKai; Robertson, Daniel; Beddar, Sam

    2014-08-21

    An accurate and high-resolution quality assurance (QA) method for proton radiotherapy beams is necessary to ensure correct dose delivery to the target. Detectors based on a large volume of liquid scintillator have shown great promise in providing fast and high-resolution measurements of proton treatment fields. However, previous work with these detectors has been limited to two-dimensional measurements, and the quantitative measurement of dose distributions was lacking. The purpose of the current study is to assess the feasibility of reconstructing three-dimensional (3D) scintillation light distributions of spot scanning proton beams using a scintillation system. The proposed system consists of a tank of liquid scintillator imaged by charge-coupled device cameras at three orthogonal viewing angles. Because of the limited number of viewing angles, we developed a profile-based technique to obtain an initial estimate that can improve the quality of the 3D reconstruction. We found that our proposed scintillator system and profile-based technique can reconstruct a single energy proton beam in 3D with a gamma passing rate (3%/3 mm local) of 100.0%. For a single energy layer of an intensity modulated proton therapy prostate treatment plan, the proposed method can reconstruct the 3D light distribution with a gamma pass rate (3%/3 mm local) of 99.7%. In addition, we also found that the proposed method is effective in detecting errors in the treatment plan, indicating that it can be a very useful tool for 3D proton beam QA. PMID:25054735

  8. Spectral analysis of views interpolated by chroma subpixel downsampling for 3D autostereoscopic displays

    NASA Astrophysics Data System (ADS)

    Marson, Avishai; Stern, Adrian

    2015-05-01

    One of the main limitations of horizontal-parallax autostereoscopic displays is the horizontal resolution loss due to the need to repartition the pixels of the display panel among the multiple views. Recently we showed that this problem can be alleviated by applying a color sub-pixel rendering technique [1]. Interpolated views are generated by down-sampling the panel pixels at the sub-pixel level, thus increasing the number of views. The method takes advantage of the human eye's lower acuity for chromatic resolution. Here we provide further support for the technique by analyzing the spectra of the subsampled images.
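
The sub-pixel repartitioning idea can be illustrated with a toy mapping in which each R, G and B sub-pixel of a panel column is routed to a different view, so each interpolated view samples the panel at one-third-pixel pitch. The mapping below is a hypothetical illustration of the general principle, not the authors' actual rendering scheme:

```python
def subpixel_to_view(col, channel, n_views):
    """Assign the sub-pixel (column, channel) of one panel row to a view.

    channel: 0 = R, 1 = G, 2 = B.  Consecutive sub-pixels cycle through
    the views, tripling the horizontal sampling available per view
    compared with whole-pixel repartitioning.
    """
    return (3 * col + channel) % n_views

# Panel column 0 feeds views 0, 1 and 2 through its R, G and B sub-pixels:
views = [subpixel_to_view(0, c, n_views=9) for c in range(3)]  # → [0, 1, 2]
```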

  9. A molecular view of vapor deposited glasses

    SciTech Connect

    Singh, Sadanand; Pablo, Juan J. de

    2011-05-21

    Recently, novel organic glassy materials that exhibit remarkable stability have been prepared by vapor deposition. The thermophysical properties of these new "stable" glasses are equivalent to those that common glasses would exhibit after aging over periods lasting thousands of years. The origin of such enhanced stability has been elusive; in the absence of detailed models, past studies have discussed the formation of new polyamorphs or that of nanocrystals to explain the observed behavior. In this work, an atomistic molecular model of trehalose, a disaccharide of glucose, is used to examine the properties of vapor-deposited stable glasses. Consistent with experiment, the model predicts the formation of stable glasses having a higher density, a lower enthalpy, and higher onset temperatures than those of the corresponding "ordinary" glass formed by quenching the bulk liquid. Simulations reveal that newly formed layers of the growing vapor-deposited film exhibit greater mobility than the remainder of the material, thereby enabling a reorganization of the film as it is grown. They also reveal that "stable" glasses exhibit a distinct layered structure in the direction normal to the substrate that is responsible for their unusual properties.

  10. Stereoscopic 3D entertainment and its effect on viewing comfort: comparison of children and adults.

    PubMed

    Pölönen, Monika; Järvenpää, Toni; Bilcu, Beatrice

    2013-01-01

    Children's and adults' viewing comfort during stereoscopic three-dimensional film viewing and computer game playing was studied. Certain mild changes in visual function, heterophoria and near point of accommodation values, as well as in eyestrain and visually induced motion sickness levels, were found when single setups were compared. The viewing system had an influence on viewing comfort, in particular on eyestrain levels, but no clear difference between two- and three-dimensional systems was found. Additionally, certain mild differences in visual functions and visually induced motion sickness levels between adults and children were found. In general, all of the system-task combinations caused mild eyestrain and possible changes in visual functions, but these changes were small in magnitude. Subjective opinions, which further support these measurements, indicate that using a stereoscopic three-dimensional system for up to 2 h was acceptable to most users regardless of age.

  11. A method for automatic 3D reconstruction based on multiple views from a free-mobile camera

    NASA Astrophysics Data System (ADS)

    Yu, Qingbing; Zhang, Zhijiang

    2004-09-01

    Automatic 3D reconstruction of an object from an image sequence is described. The reconstruction is based on multiple views from a freely moving camera, with the object placed on a novel calibration pattern consisting of two concentric circles connected by radial line segments. Compared to other methods of 3D reconstruction, the approach reduces restrictions on the measurement environment and increases the flexibility of the user. In the first step, the images of each view are calibrated individually to obtain camera information. The calibration pattern is separated from the input image with an erosion-dilation algorithm, and the calibration points can be extracted accurately from the pattern image after estimation of the two ellipses and the lines. Tsai's two-stage technique is used in the calibration process. In the second step, the 3D reconstruction of the real object is subdivided into two parts: shape reconstruction and texture mapping. Following the principle of "shape from silhouettes" (SFS), a bounding cone is constructed from each image using the calibration information and silhouette. The intersection of all bounding cones defines an approximate geometric representation. Experiments performed with a real object show a reconstruction error of less than 1%, which validates the method's efficiency and feasibility.
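
The shape-from-silhouettes step (intersecting the bounding cones from each calibrated view) is commonly implemented as voxel carving: a candidate voxel survives only if it projects inside the silhouette in every view. A minimal sketch under that interpretation, with hypothetical `project` callables standing in for the calibrated camera models:

```python
import numpy as np

def carve_voxels(voxels, views):
    """Approximate visual hull: keep voxels inside every silhouette.

    voxels: (N, 3) array of candidate 3D points.
    views:  list of (project, silhouette) pairs, where project maps a 3D
            point to integer pixel coordinates (u, v) and silhouette is a
            2D boolean mask (True = inside the object's silhouette).
    """
    keep = np.ones(len(voxels), dtype=bool)
    for project, sil in views:
        h, w = sil.shape
        for i, point in enumerate(voxels):
            if not keep[i]:
                continue  # already carved away by an earlier view
            u, v = project(point)
            inside = 0 <= u < w and 0 <= v < h and bool(sil[v, u])
            keep[i] = inside
    return voxels[keep]
```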

  12. Quantitative analysis of 3D stent reconstruction from a limited number of views in cardiac rotational angiography

    NASA Astrophysics Data System (ADS)

    Perrenot, Béatrice; Vaillant, Régis; Prost, Rémy; Finet, Gérard; Douek, Philippe; Peyrin, Françoise

    2007-03-01

    Percutaneous coronary angioplasty consists in conducting a guidewire carrying a balloon and a stent through the lesion and deploying the stent by balloon inflation. A stent is a small, complex 3D mesh that is hardly visible in X-ray images: the control of stent deployment is therefore difficult, although it is important to avoid post-intervention complications. In a previous work, we proposed a method to reconstruct 3D stent images from a set of 2D cone-beam projections acquired in rotational acquisition mode. The process involves a motion compensation procedure based on the position of two markers located on the guidewire in the 2D radiographic sequence. Under the hypothesis that the stent and marker motions are identical, the method was shown to generate a negligible error. If this hypothesis is not fulfilled, a solution could be to use only the images where motion is weakest, at the cost of a limited number of views. In this paper, we propose a simulation-based study of the impact of a limited number of views in our context. The imaging chain involved in the acquisition of X-ray sequences is first modeled to simulate realistic noisy projections of a stent animated by a motion close to cardiac motion. Then, the 3D stent images are reconstructed from gated projections using the proposed motion compensation method. Two gating strategies are examined to select projections from the sequences. A quantitative analysis is carried out to assess reconstruction quality as a function of noise and acquisition strategy.
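
The gating idea above keeps only projections where motion is weakest. A hypothetical ranking of frames by tracked-marker displacement (the marker-tracking step itself is assumed already done; names are illustrative, not the paper's implementation):

```python
import numpy as np

def select_quiet_projections(marker_xy, k=10):
    """Illustrative gating: rank projections by frame-to-frame marker
    displacement and keep the k frames with the least motion.
    `marker_xy` is an (n_frames, 2) array of 2D marker positions."""
    disp = np.linalg.norm(np.diff(marker_xy, axis=0), axis=1)
    # Motion score of frame i: mean of displacements to its neighbours.
    score = np.empty(len(marker_xy))
    score[0], score[-1] = disp[0], disp[-1]
    score[1:-1] = 0.5 * (disp[:-1] + disp[1:])
    return np.sort(np.argsort(score)[:k])
```

A cardiac-phase (ECG) gate would be the clinical alternative; this purely image-based score stands in for either strategy.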

  13. Potential hazards of viewing 3-D stereoscopic television, cinema and computer games: a review.

    PubMed

    Howarth, Peter A

    2011-03-01

    The visual stimulus provided by a 3-D stereoscopic display differs from that of the real world because the image provided to each eye is produced on a flat surface. The distance from the screen to the eye remains fixed, providing a single focal distance, but the introduction of disparity between the images allows objects to be located geometrically in front of, or behind, the screen. Unlike in the real world, the stimulus to accommodation and the stimulus to convergence do not match. Although this mismatch is used positively in some forms of orthoptic treatment, a number of authors have suggested that it could also lead to the development of asthenopic symptoms. From knowledge of the zone of clear, comfortable, single binocular vision one can predict that, for people with normal binocular vision, adverse symptoms will not be present if the discrepancy is small, but are likely if it is large, and that what constitutes 'large' and 'small' are idiosyncratic to the individual. The accommodation-convergence mismatch is not, however, the only difference between the natural and the artificial stimuli. In the former case, an object located in front of, or behind, a fixated object will not only be perceived as double if the images fall outside Panum's fusional areas, but it will also be defocused and blurred. In the latter case, however, it is usual for the producers of cinema, TV or computer game content to provide an image that is in focus over the whole of the display, and as a consequence diplopic images will be sharply in focus. The size of Panum's fusional area is spatial frequency-dependent, and because of this the high spatial frequencies present in the diplopic 3-D image will provide a different stimulus to the fusion system from that found naturally.
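
The accommodation-convergence mismatch discussed above can be quantified geometrically: accommodation stays at the screen while vergence follows the disparity-specified distance. A small sketch (the 63 mm interocular distance and the similar-triangles model are typical assumptions, not figures from this review):

```python
def va_conflict_diopters(screen_dist_m, disparity_m, interocular_m=0.063):
    """Geometric sketch of the vergence-accommodation conflict.
    Accommodation demand is 1/screen_dist (diopters); vergence follows the
    distance specified by on-screen disparity. Positive `disparity_m` is
    crossed disparity, i.e. the object appears in front of the screen."""
    # Similar triangles: the two eye rays cross at this distance.
    virtual_dist = screen_dist_m * interocular_m / (interocular_m + disparity_m)
    return abs(1.0 / screen_dist_m - 1.0 / virtual_dist)
```

For example, a crossed disparity equal to the interocular separation halves the apparent distance, so content authored for a 2 m screen would impose a 0.5 D conflict.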

  14. Subjective and Objective Video Quality Assessment of 3D Synthesized Views With Texture/Depth Compression Distortion.

    PubMed

    Liu, Xiangkai; Zhang, Yun; Hu, Sudeng; Kwong, Sam; Kuo, C-C Jay; Peng, Qiang

    2015-12-01

    The quality assessment of synthesized video with texture/depth compression distortion is important for the design, optimization, and evaluation of multi-view video plus depth (MVD)-based 3D video systems. In this paper, both subjective and objective studies of synthesized view assessment are conducted. First, a synthesized video quality database with texture/depth compression distortion is presented, with subjective scores given by 56 subjects. The 140 videos are synthesized from ten MVD sequences with different texture/depth quantization combinations. Second, a full-reference objective video quality assessment (VQA) method is proposed that addresses the annoying temporal flicker distortion and the change of spatio-temporal activity in the synthesized video. The proposed VQA algorithm performs well when evaluated on the entire synthesized video quality database, and is particularly effective on the subsets that have significant temporal flicker distortion induced by depth compression and the view synthesis process. PMID:26292342

  15. Highly optimized simulations on single- and multi-GPU systems of the 3D Ising spin glass model

    NASA Astrophysics Data System (ADS)

    Lulli, M.; Bernaschi, M.; Parisi, G.

    2015-11-01

    We present a highly optimized implementation of a Monte Carlo (MC) simulator for the three-dimensional Ising spin-glass model with bimodal disorder, i.e., the 3D Edwards-Anderson model, running on CUDA-enabled GPUs. Multi-GPU systems exchange data by means of the Message Passing Interface (MPI). The chosen MC dynamics is the classic Metropolis one, which is purely dissipative, since the aim was the study of the critical off-equilibrium relaxation of the system. We focused on the following issues: (i) the implementation of efficient memory access patterns for nearest neighbours in a cubic stencil and for lagged-Fibonacci-like pseudo-random number generators (PRNGs); (ii) a novel implementation of the asynchronous multispin-coding Metropolis MC step that allows one spin to be stored per bit; and (iii) a multi-GPU version based on a combination of MPI and CUDA streams. Cubic stencils and PRNGs are two subjects of very general interest because of their widespread use in many simulation codes.
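
For orientation, a plain single-spin Metropolis sweep of the 3D Edwards-Anderson model with ±1 couplings is sketched below. This is only a reference implementation of the dynamics; the paper's bit-packed multispin-coding GPU kernels and MPI decomposition are far more elaborate and are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_sweep(spins, J, beta):
    """One Metropolis sweep (N attempted flips) of the 3D Edwards-Anderson
    model. `spins` is an (L,L,L) array of +/-1; J[d] holds the +/-1 bimodal
    coupling to the neighbour in direction d, with periodic boundaries."""
    L = spins.shape[0]
    for _ in range(spins.size):
        i, j, k = rng.integers(0, L, size=3)
        s = spins[i, j, k]
        # Local field from the six nearest neighbours in the cubic stencil.
        h = (J[0, i, j, k] * spins[(i + 1) % L, j, k]
             + J[0, (i - 1) % L, j, k] * spins[(i - 1) % L, j, k]
             + J[1, i, j, k] * spins[i, (j + 1) % L, k]
             + J[1, i, (j - 1) % L, k] * spins[i, (j - 1) % L, k]
             + J[2, i, j, k] * spins[i, j, (k + 1) % L]
             + J[2, i, j, (k - 1) % L] * spins[i, j, (k - 1) % L])
        dE = 2.0 * s * h  # energy change of flipping s (E = -sum J s s')
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j, k] = -s
    return spins
```

The multispin-coded version of this step packs 32 or 64 replicas' spins into one machine word and updates them with bitwise logic, which is where the GPU speedup comes from.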

  16. Automated bone segmentation from large field of view 3D MR images of the hip joint

    NASA Astrophysics Data System (ADS)

    Xia, Ying; Fripp, Jurgen; Chandra, Shekhar S.; Schwarz, Raphael; Engstrom, Craig; Crozier, Stuart

    2013-10-01

    Accurate bone segmentation in the hip joint region from magnetic resonance (MR) images can provide quantitative data for examining pathoanatomical conditions such as femoroacetabular impingement through to varying stages of osteoarthritis to monitor bone and associated cartilage morphometry. We evaluate two state-of-the-art methods (multi-atlas and active shape model (ASM) approaches) on bilateral MR images for automatic 3D bone segmentation in the hip region (proximal femur and innominate bone). Bilateral MR images of the hip joints were acquired at 3T from 30 volunteers. Image sequences included water-excitation dual-echo steady state (FOV 38.6 × 24.1 cm, matrix 576 × 360, thickness 0.61 mm) in all subjects and multi-echo data image combination (FOV 37.6 × 23.5 cm, matrix 576 × 360, thickness 0.70 mm) for a subset of eight subjects. Following manual segmentation of femoral (head-neck, proximal-shaft) and innominate (ilium+ischium+pubis) bone, automated bone segmentation proceeded via two approaches: (1) multi-atlas segmentation incorporating non-rigid registration and (2) an advanced ASM-based scheme. Mean inter- and intra-rater reliability Dice's similarity coefficients (DSC) for manual segmentation of femoral and innominate bone were (0.970, 0.963) and (0.971, 0.965). Compared with manual data, mean DSC values for femoral and innominate bone volumes using automated multi-atlas and ASM-based methods were (0.950, 0.922) and (0.946, 0.917), respectively. Both approaches delivered accurate (high DSC values) segmentation results; notably, ASM data were generated in substantially less computational time (12 min versus 10 h). Both automated algorithms provided accurate 3D bone volumetric descriptions for MR-based measures in the hip region. The highly computationally efficient ASM-based approach is likely more suitable for future clinical applications such as extracting bone-cartilage interfaces for potential cartilage segmentation.
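
The DSC values quoted above measure voxel overlap between two segmentations; for reference, the metric is simply 2|A∩B| / (|A|+|B|):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks.
    Returns 1.0 for two empty masks by convention."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

A DSC of 0.95, as reported for the automated femoral segmentations, means the automated and manual masks share 95% of their combined voxel mass.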

  17. Automated bone segmentation from large field of view 3D MR images of the hip joint.

    PubMed

    Xia, Ying; Fripp, Jurgen; Chandra, Shekhar S; Schwarz, Raphael; Engstrom, Craig; Crozier, Stuart

    2013-10-21

    Accurate bone segmentation in the hip joint region from magnetic resonance (MR) images can provide quantitative data for examining pathoanatomical conditions such as femoroacetabular impingement through to varying stages of osteoarthritis to monitor bone and associated cartilage morphometry. We evaluate two state-of-the-art methods (multi-atlas and active shape model (ASM) approaches) on bilateral MR images for automatic 3D bone segmentation in the hip region (proximal femur and innominate bone). Bilateral MR images of the hip joints were acquired at 3T from 30 volunteers. Image sequences included water-excitation dual-echo steady state (FOV 38.6 × 24.1 cm, matrix 576 × 360, thickness 0.61 mm) in all subjects and multi-echo data image combination (FOV 37.6 × 23.5 cm, matrix 576 × 360, thickness 0.70 mm) for a subset of eight subjects. Following manual segmentation of femoral (head-neck, proximal-shaft) and innominate (ilium+ischium+pubis) bone, automated bone segmentation proceeded via two approaches: (1) multi-atlas segmentation incorporating non-rigid registration and (2) an advanced ASM-based scheme. Mean inter- and intra-rater reliability Dice's similarity coefficients (DSC) for manual segmentation of femoral and innominate bone were (0.970, 0.963) and (0.971, 0.965). Compared with manual data, mean DSC values for femoral and innominate bone volumes using automated multi-atlas and ASM-based methods were (0.950, 0.922) and (0.946, 0.917), respectively. Both approaches delivered accurate (high DSC values) segmentation results; notably, ASM data were generated in substantially less computational time (12 min versus 10 h). Both automated algorithms provided accurate 3D bone volumetric descriptions for MR-based measures in the hip region. The highly computationally efficient ASM-based approach is likely more suitable for future clinical applications such as extracting bone-cartilage interfaces for potential cartilage segmentation.

  18. Wide-field-of-view image pickup system for multiview volumetric 3D displays using multiple RGB-D cameras

    NASA Astrophysics Data System (ADS)

    Luo, Yuan; Kakeya, Hideki

    2014-03-01

    A real-time, wide-field-of-view image pickup system for coarse integral volumetric imaging (CIVI) is realized. The system applies the CIVI display to live-action video generated by real-time 3D reconstruction. By using multiple RGB-D cameras viewing from different directions, a complete object surface and a wide field of view can be shown on our CIVI displays. A prototype system was constructed and works as follows. First, image features and depth data are used for fast and accurate calibration. Second, 3D point cloud data are obtained by each RGB-D camera and converted into a common coordinate system. Third, multiview images are constructed by perspective transformation from different viewpoints. Finally, the image for each viewpoint is divided according to the depth of each pixel to produce a volumetric view. Experiments show better results than using only one RGB-D camera, and the whole system works in real time.
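
The second step above, converting each camera's point cloud into a common coordinate system, amounts to applying each camera's calibrated extrinsics. A minimal sketch (the extrinsics (R, t) are assumed to come from the calibration step; names are illustrative):

```python
import numpy as np

def to_world(points, R, t):
    """Map an (N, 3) point cloud from one RGB-D camera's frame into a
    shared world frame using rotation R (3x3) and translation t (3,)."""
    return points @ R.T + t

def merge_clouds(clouds, extrinsics):
    """Concatenate clouds from several cameras, transforming each with
    its own (R, t) pair so they share one coordinate system."""
    return np.vstack([to_world(p, R, t) for p, (R, t) in zip(clouds, extrinsics)])
```

The merged cloud is then re-rendered by perspective transformation for each display viewpoint, as the record describes.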

  19. Supervised and unsupervised MRF based 3D scene classification in multiple view airborne oblique images

    NASA Astrophysics Data System (ADS)

    Gerke, M.; Xiao, J.

    2013-10-01

    In this paper we develop and compare two methods for scene classification in 3D object space; that is, instead of classifying single image pixels, we classify voxels that carry geometric, textural and color information collected from the airborne oblique images and derived products such as point clouds from dense image matching. One method is supervised, i.e. it relies on training data provided by an operator. We use Random Trees for the actual training and prediction tasks. The second method is unsupervised and thus does not require any user interaction. We formulate this classification task as a Markov Random Field problem and employ graph cuts for the actual optimization procedure. Two test areas are used to test and evaluate both techniques. In the Haiti dataset we are confronted with largely destroyed built-up areas, since the images were taken after the earthquake in January 2010, while in the second case we use images taken over Enschede, a typical Central European city. For the Haiti case it is difficult to provide clear class definitions, and this is reflected in the overall classification accuracy: it is 73% for the supervised and only 59% for the unsupervised method. If classes are defined less ambiguously, as in the Enschede area, results are much better (85% vs. 78%). In conclusion, the results are acceptable, especially considering that the point cloud used for geometric features is not of good quality and no infrared channel is available to support vegetation classification.

  20. Accurate multiple view 3D reconstruction using patch-based stereo for large-scale scenes.

    PubMed

    Shen, Shuhan

    2013-05-01

    In this paper, we propose a depth-map merging based multiple view stereo method for large-scale scenes which takes both accuracy and efficiency into account. In the proposed method, an efficient patch-based stereo matching process is used to generate a depth map for each image with acceptable errors, followed by a depth-map refinement process that enforces consistency over neighboring views. Compared to state-of-the-art methods, the proposed method can reconstruct quite accurate and dense point clouds with high computational efficiency. Moreover, the proposed method can easily be parallelized at the image level, i.e., each depth map is computed individually, which makes it suitable for large-scale scene reconstruction with high-resolution images. The accuracy and efficiency of the proposed method are evaluated quantitatively on benchmark data and qualitatively on large data sets.
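
The refinement step above enforces consistency over neighboring views. A toy version of that filter, assuming the neighboring depth maps have already been warped into the reference view (the warping and patch matching themselves are not shown; thresholds are illustrative):

```python
import numpy as np

def filter_depth_consistency(depths, tol=0.01, min_views=2):
    """Keep a reference-view depth only where enough neighbouring views
    agree with it to within a relative tolerance. `depths` is a (V, H, W)
    stack of per-view depth maps already warped into the reference view;
    disagreeing pixels are zeroed out as unreliable."""
    ref = depths[0]
    agree = np.abs(depths - ref) <= tol * ref
    support = agree.sum(axis=0)      # number of agreeing views per pixel
    return np.where(support >= min_views, ref, 0.0)
```

Because each depth map is filtered independently against its neighbors, this style of check parallelizes at the image level, which is the property the record highlights.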

  1. A method of 3D reconstruction from multiple views based on graph theoretic segmentation

    NASA Astrophysics Data System (ADS)

    Li, Yi; Hong, Hanyu; Zhang, Xiuhua; Bai, Haoyu

    2013-10-01

    In three-dimensional vision inspection of products, the target objects usually sit motionless against a complex background, and the desired three-dimensional reconstruction results cannot be obtained because the targets are difficult to extract from images with complicated and diverse backgrounds. To address this problem, a method of three-dimensional reconstruction based on graph-theoretic segmentation and multiple views is proposed in this paper. First, the target objects are segmented from the acquired multi-view images by the graph-theoretic segmentation method, and the parameters of the cameras, which are arranged in a linear array, are obtained by Zhengyou Zhang's calibration method. Then, combining Harris corner detection with the Difference-of-Gaussians detection algorithm, the feature points of the images are detected. Finally, after matching the feature points by the triangle method, the object surface is reconstructed by Poisson surface reconstruction. The experimental results show that the proposed algorithm segments target objects in complex scenes accurately and stably. Moreover, the graph-theoretic segmentation solves the problem of object extraction in complex scenes, and the static object surface is reconstructed precisely. The proposed algorithm also provides key technology for three-dimensional vision inspection and other practical applications.

  2. Venus - 3D Perspective View of Latona Corona and Dali Chasma

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This computer-generated perspective view of Latona Corona and Dali Chasma on Venus shows Magellan radar data superimposed on topography. The view is from the northeast and vertical exaggeration is 10 times. Exaggeration of relief is a common tool scientists use to detect relationships between structure (i.e. faults and fractures) and topography. Latona Corona, a circular feature approximately 1,000 kilometers (620 miles) in diameter whose eastern half is shown at the left of the image, has a relatively smooth, radar-bright raised rim. Bright lines or fractures within the corona appear to radiate away from its center toward the rim. The rest of the bright fractures in the area are associated with the relatively deep (approximately 3 kilometers or 1.9 miles) troughs of Dali Chasma. The Dali and Diana Chasma system consists of deep troughs that extend for 7,400 kilometers (4,588 miles) and are very distinct features on Venus. These chasmata connect the Ovda and Thetis highlands with the large volcanoes at Atla Regio and thus are considered to be the 'Scorpion Tail' of Aphrodite Terra. The broad, curving scarp resembles some of Earth's subduction zones where crustal plates are pushed over each other. The radar-bright surface at the highest elevation along the scarp is similar to surfaces in other elevated regions where some metallic mineral such as pyrite (fool's gold) may occur on the surface.

  3. Automatic thermographic scanning with the creation of 3D panoramic views of buildings

    NASA Astrophysics Data System (ADS)

    Ferrarini, G.; Cadelano, G.; Bortolin, A.

    2016-05-01

    Infrared thermography is widely applied to the inspection of buildings, enabling the identification of thermal anomalies due to the presence of hidden structures, air leakages, and moisture. One of the main advantages of this technique is the possibility of rapidly acquiring a temperature map of a surface. However, due to the low resolution of current thermal cameras and the necessity of scanning surfaces with different orientations, it is necessary to take multiple images during a building survey. In this work a device based on quantitative infrared thermography, called aIRview, has been applied during building surveys to automatically acquire thermograms with a camera mounted on a robotized pan-tilt unit. The goal is to perform a first rapid survey of the building that could give useful information for subsequent quantitative thermal investigations. For each data acquisition, the instrument covers a rotational field of view of 360° around the vertical axis and up to 180° around the horizontal one. The obtained images have been processed in order to create a full equirectangular projection of the environment. The images have then been integrated into a web visualization tool, working with web panorama viewers such as Google Street View, creating a webpage where it is possible to take a three-dimensional virtual visit of the building. The thermographic data are embedded with the visual imaging and with other sensor data, facilitating the understanding of the physical phenomena underlying the temperature distribution.
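
The equirectangular projection mentioned above maps the pan-tilt unit's 360° x 180° sweep onto a rectangular panorama. A minimal sketch of the angle-to-pixel mapping (the stitching and blending of individual thermograms are omitted; conventions here are assumptions, not the aIRview implementation):

```python
import numpy as np

def equirect_pixel(pan_deg, tilt_deg, width, height):
    """Place a measurement taken at (pan, tilt) into a full equirectangular
    panorama: pan spans 360 deg across the width, tilt spans 180 deg
    (+90 straight up, -90 straight down) across the height."""
    u = (pan_deg % 360.0) / 360.0 * width
    v = (90.0 - tilt_deg) / 180.0 * height
    return int(u) % width, min(int(v), height - 1)
```

Web panorama viewers consume exactly this 2:1 equirectangular layout, which is why the stitched thermal panoramas can be dropped into Street-View-style tools.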

  4. Towards An Understanding of Mobile Touch Navigation in a Stereoscopic Viewing Environment for 3D Data Exploration.

    PubMed

    López, David; Oehlberg, Lora; Doger, Candemir; Isenberg, Tobias

    2016-05-01

    We discuss touch-based navigation of 3D visualizations in a combined monoscopic and stereoscopic viewing environment. We identify a set of interaction modes, and a workflow that helps users transition between these modes to improve their interaction experience. In our discussion we analyze, in particular, the control-display space mapping between the different reference frames of the stereoscopic and monoscopic displays. We show how this mapping supports interactive data exploration, but may also lead to conflicts between the stereoscopic and monoscopic views due to users' movement in space; we resolve these problems through synchronization. To support our discussion, we present results from an exploratory observational evaluation with domain experts in fluid mechanics and structural biology. These experts explored domain-specific datasets using variations of a system that embodies the interaction modes and workflows; we report on their interactions and qualitative feedback on the system and its workflow.

  5. Towards An Understanding of Mobile Touch Navigation in a Stereoscopic Viewing Environment for 3D Data Exploration.

    PubMed

    López, David; Oehlberg, Lora; Doger, Candemir; Isenberg, Tobias

    2016-05-01

    We discuss touch-based navigation of 3D visualizations in a combined monoscopic and stereoscopic viewing environment. We identify a set of interaction modes, and a workflow that helps users transition between these modes to improve their interaction experience. In our discussion we analyze, in particular, the control-display space mapping between the different reference frames of the stereoscopic and monoscopic displays. We show how this mapping supports interactive data exploration, but may also lead to conflicts between the stereoscopic and monoscopic views due to users' movement in space; we resolve these problems through synchronization. To support our discussion, we present results from an exploratory observational evaluation with domain experts in fluid mechanics and structural biology. These experts explored domain-specific datasets using variations of a system that embodies the interaction modes and workflows; we report on their interactions and qualitative feedback on the system and its workflow. PMID:27045916

  6. Effects Of Long-Term Viewing Of VISIDEP tm 3-D Television

    NASA Astrophysics Data System (ADS)

    McLaurin, A. P.; Jones, Edwin R.

    1988-06-01

    A comparison was made between viewing normal television and VISIDEP™ television, which produces three-dimensional images by the method of alternating images. Two separate groups of fifteen university students each received fifty minutes of unrelieved exposure to television; one group watched standard television and the other watched VISIDEP. Both groups were surveyed regarding questions of eye strain, fatigue, headache, or other discomforts, as well as questions of apparent depth and image quality. One week later the participants were all shown the VISIDEP television and surveyed in the same manner as before. In addition, they were given a chance to make a direct side-by-side comparison and evaluate the images. Analysis of the viewer responses shows that in relation to viewer comfort, VISIDEP television is as acceptable to viewers as normal television, for it introduces no additional problems. However, the VISIDEP images were clearly superior in their ability to evoke an enhanced perception of depth.

  7. Interior detail view, surviving stained glass panel in an east ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Interior detail view, surviving stained glass panel in an east aisle window. Most of the stained glass has been removed from the building and relocated to other area churches. (Similar to HABS No. PA-6694-25). - Acts of the Apostles Church in Jesus Christ, 1400-28 North Twenty-eighth Street, northwest corner of North Twenty-eighth & Master Streets, Philadelphia, Philadelphia County, PA

  8. A user-friendly nano-CT image alignment and 3D reconstruction platform based on LabVIEW

    NASA Astrophysics Data System (ADS)

    Wang, Sheng-Hao; Zhang, Kai; Wang, Zhi-Li; Gao, Kun; Wu, Zhao; Zhu, Pei-Ping; Wu, Zi-Yu

    2015-01-01

    X-ray computed tomography at the nanometer scale (nano-CT) offers a wide range of applications in scientific and industrial areas. Here we describe a reliable, user-friendly, and fast software package based on LabVIEW that allows us to perform all procedures after the acquisition of raw projection images in order to obtain the inner structure of the investigated sample. A suitable image alignment process has been implemented to address misalignment problems among image series due to mechanical manufacturing errors, thermal expansion, and other external factors, together with a novel fast parallel-beam 3D reconstruction procedure developed ad hoc to perform the tomographic reconstruction. We have obtained remarkably improved reconstruction results at the Beijing Synchrotron Radiation Facility after the image calibration, confirming the fundamental role of this image alignment procedure, which minimizes the unwanted blur and additional streaking artifacts that are otherwise present in reconstructed slices. Moreover, this nano-CT image alignment and the associated 3D reconstruction procedure are fully based on LabVIEW routines, significantly reducing the data post-processing cycle and thus making the work of the users faster and easier during experimental runs.
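
One common way to estimate the translational misalignment between projection images (an illustrative stand-in here, not necessarily the paper's exact recipe) is phase correlation: the peak of the normalized cross-power spectrum gives the shift.

```python
import numpy as np

def phase_correlation_shift(ref, img):
    """Return the (dy, dx) shift to apply to `img` (e.g. via np.roll)
    to align it with `ref`, recovered from the peak of the normalized
    cross-power spectrum. Assumes a purely translational misalignment."""
    F = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    F /= np.abs(F) + 1e-12               # keep phase only
    corr = np.fft.ifft2(F).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around indices to signed shifts.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx
```

Aligning every projection to a common reference this way is what suppresses the blur and streaks in the reconstructed slices that the record describes.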

  9. The MUSE 3D view of the Hubble Deep Field South

    NASA Astrophysics Data System (ADS)

    Bacon, R.; Brinchmann, J.; Richard, J.; Contini, T.; Drake, A.; Franx, M.; Tacchella, S.; Vernet, J.; Wisotzki, L.; Blaizot, J.; Bouché, N.; Bouwens, R.; Cantalupo, S.; Carollo, C. M.; Carton, D.; Caruana, J.; Clément, B.; Dreizler, S.; Epinat, B.; Guiderdoni, B.; Herenz, C.; Husser, T.-O.; Kamann, S.; Kerutt, J.; Kollatschny, W.; Krajnovic, D.; Lilly, S.; Martinsson, T.; Michel-Dansac, L.; Patricio, V.; Schaye, J.; Shirazi, M.; Soto, K.; Soucail, G.; Steinmetz, M.; Urrutia, T.; Weilbacher, P.; de Zeeuw, T.

    2015-03-01

    We observed Hubble Deep Field South with the new panoramic integral-field spectrograph MUSE that we built and have just commissioned at the VLT. The data cube resulting from 27 h of integration covers one arcmin2 field of view at an unprecedented depth with a 1σ emission-line surface brightness limit of 1 × 10-19 erg s-1 cm-2 arcsec-2, and contains ~90 000 spectra. We present the combined and calibrated data cube, and we performed a first-pass analysis of the sources detected in the Hubble Deep Field South imaging. We measured the redshifts of 189 sources up to a magnitude I814 = 29.5, increasing the number of known spectroscopic redshifts in this field by more than an order of magnitude. We also discovered 26 Lyα emitting galaxies that are not detected in the HST WFPC2 deep broad-band images. The intermediate spectral resolution of 2.3 Å allows us to separate resolved asymmetric Lyα emitters, [O ii]3727 emitters, and C iii]1908 emitters, and the broad instantaneous wavelength range of 4500 Å helps to identify single emission lines, such as [O iii]5007, Hβ, and Hα, over a very wide redshift range. We also show how the three-dimensional information of MUSE helps to resolve sources that are confused at ground-based image quality. Overall, secure identifications are provided for 83% of the 227 emission line sources detected in the MUSE data cube and for 32% of the 586 sources identified in the HST catalogue. The overall redshift distribution is fairly flat to z = 6.3, with a reduction between z = 1.5 and 2.9, in the well-known redshift desert. The field of view of MUSE also allowed us to detect 17 groups within the field. We checked that the number counts of [O ii]3727 and Lyα emitters are roughly consistent with predictions from the literature. Using two examples, we demonstrate that MUSE is able to provide exquisite spatially resolved spectroscopic information on the intermediate-redshift galaxies present in the field. This unique data set can be used for a

  10. 3D pulse EPR imaging from sparse-view projections via constrained, total variation minimization

    NASA Astrophysics Data System (ADS)

    Qiao, Zhiwei; Redler, Gage; Epel, Boris; Qian, Yuhua; Halpern, Howard

    2015-09-01

    Tumors and tumor portions with low oxygen concentrations (pO2) have been shown to be resistant to radiation therapy. As such, radiation therapy efficacy may be enhanced if the delivered radiation dose is tailored based on the spatial distribution of pO2 within the tumor. A technique for accurate imaging of tumor oxygenation is critically important to guide radiation treatment that accounts for the effects of local pO2. Electron paramagnetic resonance imaging (EPRI) has been considered one of the leading methods for quantitatively imaging pO2 within tumors in vivo. However, current EPRI techniques require relatively long imaging times. Reducing the number of projection scans can considerably reduce the imaging time. Conventional image reconstruction algorithms, such as filtered back projection (FBP), may produce severe artifacts in images reconstructed from sparse-view projections, which can lower the utility of these reconstructed images. In this work, an optimization-based image reconstruction algorithm using constrained total variation (TV) minimization, subject to data consistency, is developed and evaluated. The algorithm was evaluated using simulated phantom, physical phantom, and pre-clinical EPRI data. The TV algorithm is compared with FBP using subjective and objective metrics. The results demonstrate the merits of the proposed reconstruction algorithm.
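
To give a flavor of TV-regularized reconstruction, here is an unconstrained gradient-descent analogue (an illustrative sketch only; the paper's algorithm enforces data consistency as a hard constraint, which this simplified objective does not):

```python
import numpy as np

def tv_reconstruct(y, A, At, shape, lam=0.1, step=1e-3, iters=200, eps=1e-8):
    """Gradient descent on ||A x - y||^2 + lam * TV(x) with a smoothed
    isotropic TV term. `A`/`At` are the forward projector and its adjoint,
    passed as callables; for EPRI they would be the (gated) projection
    operator and back-projection."""
    x = np.zeros(shape)
    for _ in range(iters):
        gx = np.diff(x, axis=0, append=x[-1:])       # forward differences
        gy = np.diff(x, axis=1, append=x[:, -1:])
        mag = np.sqrt(gx**2 + gy**2 + eps)           # smoothed gradient norm
        # Divergence of the normalized gradient = (sub)gradient of TV.
        div = (np.diff(gx / mag, axis=0, prepend=np.zeros((1, shape[1])))
               + np.diff(gy / mag, axis=1, prepend=np.zeros((shape[0], 1))))
        grad = 2.0 * At(A(x) - y) - lam * div
        x -= step * grad
    return x
```

With few projections the data term alone is underdetermined; the TV term fills in the missing information by favoring piecewise-constant images, which is why TV methods outperform FBP in the sparse-view regime studied here.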

  11. A comparison of multi-view 3D reconstruction of a rock wall using several cameras and a laser scanner

    NASA Astrophysics Data System (ADS)

    Thoeni, K.; Giacomini, A.; Murtagh, R.; Kniest, E.

    2014-06-01

    This work presents a comparative study between multi-view 3D reconstruction using various digital cameras and a terrestrial laser scanner (TLS). Five different digital cameras were used in order to estimate the limits related to the camera type and to establish the minimum camera requirements to obtain results comparable to those of the TLS. The cameras used for this study range from commercial grade to professional grade and included a GoPro Hero 1080 (5 Mp), iPhone 4S (8 Mp), Panasonic Lumix LX5 (9.5 Mp), Panasonic Lumix ZS20 (14.1 Mp) and Canon EOS 7D (18 Mp). The TLS used for this work was a FARO Focus 3D laser scanner with a range accuracy of ±2 mm. The study area is a small rock wall of about 6 m height and 20 m length. The wall is partly smooth with some evident geological features, such as non-persistent joints and sharp edges. Eight control points were placed on the wall and their coordinates were measured by using a total station. These coordinates were then used to georeference all models. A similar number of images was acquired from distances of approximately 5 to 10 m, depending on the field of view of each camera. The commercial software package PhotoScan was used to process the images, georeference and scale the models, and to generate the dense point clouds. Finally, the open-source package CloudCompare was used to assess the accuracy of the multi-view results. Each point cloud obtained from a specific camera was compared to the point cloud obtained with the TLS. The latter is taken as ground truth. The result is a coloured point cloud for each camera showing the deviation in relation to the TLS data. The main goal of this study is to quantify the quality of the multi-view 3D reconstruction results obtained with various cameras as objectively as possible and to evaluate its applicability to geotechnical problems.
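
The per-camera deviation maps described above are cloud-to-cloud distances against the TLS ground truth. A brute-force sketch of that metric (fine for small clouds; CloudCompare uses spatially indexed search for large ones):

```python
import numpy as np

def cloud_deviation(cloud, reference):
    """Per-point deviation of a photogrammetric cloud from a reference
    (TLS) cloud: distance from each point to its nearest reference point.
    Brute force, O(N*M) memory, so suitable only for small clouds."""
    diff = cloud[:, None, :] - reference[None, :, :]   # (N, M, 3)
    return np.sqrt((diff**2).sum(-1)).min(axis=1)
```

Coloring each photogrammetric point by this value reproduces the deviation visualization used to compare the five cameras against the scanner.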

  12. fVisiOn: 360-degree viewable glasses-free tabletop 3D display composed of conical screen and modular projector arrays.

    PubMed

    Yoshida, Shunsuke

    2016-06-13

    A novel glasses-free tabletop 3D display to float virtual objects on a flat tabletop surface is proposed. This method employs circularly arranged projectors and a conical rear-projection screen that serves as an anisotropic diffuser. Its practical implementation installs them beneath a round table and produces horizontal parallax in a circumferential direction without the use of high speed or a moving apparatus. Our prototype can display full-color, 5-cm-tall 3D characters on the table. Multiple viewers can share and enjoy its real-time animation from any angle of 360 degrees with appropriate perspectives as if the animated figures were present.

  13. fVisiOn: 360-degree viewable glasses-free tabletop 3D display composed of conical screen and modular projector arrays.

    PubMed

    Yoshida, Shunsuke

    2016-06-13

    A novel glasses-free tabletop 3D display that floats virtual objects on a flat tabletop surface is proposed. This method employs circularly arranged projectors and a conical rear-projection screen that serves as an anisotropic diffuser. Our practical implementation installs them beneath a round table and produces horizontal parallax in the circumferential direction without the use of high-speed or moving apparatus. Our prototype can display full-color, 5-cm-tall 3D characters on the table. Multiple viewers can share and enjoy its real-time animation from any angle around the full 360 degrees, with appropriate perspectives, as if the animated figures were present. PMID:27410336

  14. 2D/3D registration using only single-view fluoroscopy to guide cardiac ablation procedures: a feasibility study

    NASA Astrophysics Data System (ADS)

    Fallavollita, Pascal

    2010-02-01

    The CARTO XP is an electroanatomical cardiac mapping system that provides 3D color-coded maps of the electrical activity of the heart; however, it is expensive and can only use a single costly magnetic catheter for each patient intervention. Aim: To develop an affordable fluoroscopic navigation system that could shorten the duration of RF ablation procedures and increase their efficacy. Methodology: A four-step filtering technique was implemented to isolate the tip electrode of an ablation catheter visible in single-view C-arm images and calculate its width. The width is directly proportional to the depth of the catheter. Results: For phantom experimentation, when displacing a 7-French catheter at 1 cm intervals away from an X-ray source, the recovered depth error using a single image was 2.05 +/- 1.47 mm, whereas depth errors improved to 1.55 +/- 1.30 mm when using an 8-French catheter. In clinical experimentation, twenty posterior and left lateral images of a catheter inside the left ventricle of a mongrel dog were acquired. The standard error of estimate for the recovered depth of the tip electrode of the mapping catheter was 13.1 mm and 10.1 mm for the posterior and lateral views, respectively. Conclusions: A filtering implementation using single-view C-arm images showed that it was possible to recover depth in the phantom study and proved adequate in clinical experimentation based on isochronal map fusion results.
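The width-to-depth relation exploited here follows from divergent-beam magnification: the closer the catheter is to the X-ray source, the wider its projection on the detector. A minimal sketch, assuming a simple pinhole geometry with illustrative dimensions (the paper's actual calibration is not given):

```python
def depth_from_width(w_measured_mm, w_true_mm, sdd_mm):
    """Invert the divergent-beam magnification model:
    w_measured = w_true * SDD / depth  =>  depth = w_true * SDD / w_measured,
    where depth is the source-to-catheter distance and SDD the
    source-to-detector distance. All dimensions here are assumed values."""
    return w_true_mm * sdd_mm / w_measured_mm

# A 7-French catheter is ~2.33 mm wide; assume a 1000 mm source-detector distance.
depth = depth_from_width(w_measured_mm=3.5, w_true_mm=2.33, sdd_mm=1000.0)
print(round(depth, 1))  # estimated source-to-catheter distance in mm
```

The sensitivity of this inversion to the measured width explains why depth errors shrink with the wider 8-French catheter: the same pixel-level width error is a smaller relative error.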

  15. 19. Photocopy of photograph. VIEW OF WORKER MANIPULATING SMALL GLASS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    19. Photocopy of photograph. VIEW OF WORKER MANIPULATING SMALL GLASS OBJECTS IN THE HOT BAY WITH MANIPULATOR ARMS AT WORK STATION E-2. Photographer unknown, ca. 1969, original photograph and negative on file at the Remote Sensing Laboratory, Department of Energy, Nevada Operations Office. - Nevada Test Site, Engine Maintenance Assembly & Disassembly Facility, Area 25, Jackass Flats, Mercury, Nye County, NV

  16. SOUTH PORCH REFLECTED PLAN; DETAIL VIEW, SOUTHWEST SIDE. Glass plate ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    SOUTH PORCH REFLECTED PLAN; DETAIL VIEW, SOUTHWEST SIDE. Glass plate stereopair number PA-1430-139 LC-HABS-GS05-ST-D-5 157.5007. Right (not printed) - Independence Hall Complex, Independence Hall, 500 Chestnut Street, Philadelphia, Philadelphia County, PA

  17. SOUTH PORCH REFLECTED PLAN; DETAIL VIEW, SOUTHWEST SIDE. Glass plate ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    SOUTH PORCH REFLECTED PLAN; DETAIL VIEW, SOUTHWEST SIDE. Glass plate stereopair number PA-1430-139 LC-HABS-GS05-ST-D-5 157.5007. Left (printed) - Independence Hall Complex, Independence Hall, 500 Chestnut Street, Philadelphia, Philadelphia County, PA

  18. 11. GENERAL VIEW IN SENATE CHAMBER, FROM WEST; PAINTED GLASS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    11. GENERAL VIEW IN SENATE CHAMBER, FROM WEST; PAINTED GLASS WINDOW BEHIND COLUMNS DEPICTS 'THE LANDING OF DE SOTO;' MURAL TO LEFT SHOWS 'THOMAS HART BENTON'S SPEECH AT ST. LOUIS 1849;' MURAL TO RIGHT SHOWS 'PRESIDENT JEFFERSON GREETING LEWIS AND CLARK' - Missouri State Capitol, High Street between Broadway & Jefferson Streets, Jefferson City, Cole County, MO

  19. View forward from stern showing skylight with rippled glass over ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    View forward from stern showing skylight with rippled glass over compartment c-110, officer's quarters; note manually operated capstan at center, and simulated eight inch guns in sheet metal mock-up turret; also note five inch guns in sponsons port and starboard. (p37) - USS Olympia, Penn's Landing, 211 South Columbus Boulevard, Philadelphia, Philadelphia County, PA

  20. EAST TOWER, DETAIL VIEW OF CARVED MASK. Glass plate stereopair ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    EAST TOWER, DETAIL VIEW OF CARVED MASK. Glass plate stereopair number PA-1430-139 LC-HABS-GS05-ET-D-1 157.4683. Left (printed) - Independence Hall Complex, Independence Hall, 500 Chestnut Street, Philadelphia, Philadelphia County, PA

  1. EAST TOWER, DETAIL VIEW OF CARVED IONIC PILASTER CAPITAL. Glass ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    EAST TOWER, DETAIL VIEW OF CARVED IONIC PILASTER CAPITAL. Glass plate stereopair number PA-1430-139 LC-HABS-GS05-ET-D-2 157.4684. Left (printed) - Independence Hall Complex, Independence Hall, 500 Chestnut Street, Philadelphia, Philadelphia County, PA

  2. EAST TOWER, DETAIL VIEW OF CARVED IONIC PILASTER CAPITAL. Glass ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    EAST TOWER, DETAIL VIEW OF CARVED IONIC PILASTER CAPITAL. Glass plate stereopair number PA-1430-139 LC-HABS-GS05-ET-D-2 157.4684. Right (not printed) - Independence Hall Complex, Independence Hall, 500 Chestnut Street, Philadelphia, Philadelphia County, PA

  3. EAST TOWER, DETAIL VIEW OF CARVED MASK. Glass plate stereopair ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    EAST TOWER, DETAIL VIEW OF CARVED MASK. Glass plate stereopair number PA-1430-139 LC-HABS-GS05-ET-D-1 157.4683. Right (not printed) - Independence Hall Complex, Independence Hall, 500 Chestnut Street, Philadelphia, Philadelphia County, PA

  4. Optical and infrared absorption spectra of 3d transition metal ions-doped sodium borophosphate glasses and effect of gamma irradiation.

    PubMed

    Abdelghany, A M; ElBatal, F H; Azooz, M A; Ouis, M A; ElBatal, H A

    2012-12-01

    Undoped and transition metal (3d TM)-doped sodium borophosphate glasses were prepared. UV-visible absorption spectra were measured in the region 200-900 nm before and after gamma irradiation. Experimental optical data indicate that, before irradiation, the undoped sodium borophosphate glass reveals strong and broad UV absorption, and no visible bands could be identified. This UV absorption is related to the presence of unavoidable trace iron impurities within the raw materials used to prepare the base borophosphate glass. The TM-doped glasses show absorption bands within the UV and/or visible regions that are characteristic of each respective TM ion, in addition to the UV absorption observed from the host base glass. Infrared absorption spectra of the undoped and TM-doped glasses reveal complex spectra consisting of extended characteristic vibrational bands that are specific to phosphate groups as the main constituent, but with the sharing of some vibrations due to the borate groups. This assignment was investigated and confirmed using a deconvolution analysis technique (DAT). The effects of the different TM ions on the FTIR spectra are very limited owing to the low doping level (0.2%) introduced in the glass composition. Gamma irradiation causes only minor effects on the FTIR spectra, specifically a decrease in the intensities of some bands. This behavior is related to changes in the bond angles and/or bond lengths of some structural building units upon gamma irradiation.

  5. A 3-D view of field-scale fault-zone cementation from geologically ground-truthed electrical resistivity

    NASA Astrophysics Data System (ADS)

    Barnes, H.; Spinelli, G. A.; Mozley, P.

    2015-12-01

    Fault zones are an important control on fluid flow, affecting groundwater supply, hydrocarbon/contaminant migration, and waste/carbon storage. However, current models of fault seal are inadequate, primarily focusing on juxtaposition and entrainment effects, despite the recognition that fault-zone cementation is common and can dramatically reduce permeability. We map the 3D cementation patterns of the variably cemented Loma Blanca fault from the land surface to ~40 m depth, using electrical resistivity and induced polarization (IP). The carbonate-cemented fault zone is a region of anomalously low normalized chargeability relative to the surrounding host material. Zones of low normalized chargeability immediately under the exposed cement provide the first ground-truth that a cemented fault yields an observable IP anomaly. Low normalized chargeability extends down from the surface exposure, surrounded by zones of high normalized chargeability, at an orientation consistent with normal faults in the region; this likely indicates cementation of the fault zone at depth, which could be confirmed by drilling and coring. Our observations are consistent with: 1) the expectation that carbonate cement in a sandstone should lower normalized chargeability by reducing pore-surface area and bridging gaps in the pore space, and 2) laboratory experiments confirming that calcite precipitation within a column of glass beads decreases polarization magnitude. The ability to characterize spatial variations in the degree of fault-zone cementation with resistivity and IP has exciting implications for improving predictive models of the hydrogeologic impacts of cementation within faults.

  6. 3D micro- and nano-machining of hydrogenated amorphous silicon films on SiO2/Si and glass substrates

    NASA Astrophysics Data System (ADS)

    Soleimani-Amiri, S.; Zanganeh, S.; Ramzani, R.; Talei, R.; Mohajerzadeh, S.; Azimi, S.; Sanaee, Z.

    2015-07-01

    We report on the hydrogen-assisted deep reactive ion etching of hydrogenated amorphous silicon (a-Si:H) films deposited using radio-frequency plasma enhanced chemical vapor deposition (RF-PECVD). High aspect-ratio vertical and 3D amorphous silicon features, with the desired control over the shaping of the sidewalls, in micro and nano scales, were fabricated in ordered arrays. The suitable adhesion of amorphous Si film to the underlayer allows one to apply deep micro- and nano-machining to these layers. By means of a second deposition of amorphous silicon on highly curved 3D structures and subsequent etching, the fabrication of amorphous silicon rings is feasible. In addition to photolithography, nanosphere colloidal lithography and electron beam lithography were exploited to realize ultra-small features of amorphous silicon. We have also investigated the optical properties of fabricated hexagonally patterned a-Si nanowire arrays on glass substrates and demonstrated their high potential as active layers for solar cells. This etching process presents an inexpensive method for the formation of highly featured arrays of vertical and 3D amorphous silicon rods on both glass and silicon substrates, suitable for large-area applications.

  7. Forward-viewing resonant fiber-optic scanning endoscope of appropriate scanning speed for 3D OCT imaging

    PubMed Central

    Huo, Li; Xi, Jiefeng; Wu, Yicong; Li, Xingde

    2010-01-01

    A forward-viewing resonant fiber-optic endoscope with a scanning speed appropriate for a high-speed Fourier-domain optical coherence tomography (FD-OCT) system was developed to enable real-time, three-dimensional endoscopic OCT imaging. A new method was explored to conveniently tune the scanning frequency of a resonant fiber-optic scanner, by properly selecting the fiber-optic cantilever length, partially changing the mechanical property of the cantilever, and adding a weight to the cantilever tip. Systematic analyses indicated the resonant scanning frequency can be tuned over two orders of magnitude spanning from ~10 Hz to ~kHz. Such a flexible scanning frequency range makes it possible to set an appropriate scanning speed of the endoscope to match the different A-scan rates of a variety of FD-OCT systems. A 2.4-mm diameter, 62.5-Hz scanning endoscope appropriate to work with a 40-kHz swept-source OCT (SS-OCT) system was developed and demonstrated for 3D OCT imaging of biological tissues. PMID:20639922
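The strong dependence of the scanner's resonant frequency on cantilever length can be illustrated with the standard first-flexural-mode formula for a cylindrical cantilever. The fiber properties below (fused silica, 125 μm diameter, E ≈ 72 GPa, ρ ≈ 2200 kg/m³) are assumed textbook values, not taken from the paper:

```python
import math

def cantilever_f1(length_m, diam_m=125e-6, E=72e9, rho=2200.0):
    """First flexural resonance of a cylindrical fiber cantilever:
    f1 = (beta1^2 / (2*pi*L^2)) * sqrt(E*I / (rho*A)),
    with beta1 ~ 1.875 for the first clamped-free mode."""
    beta1_sq = 1.875 ** 2
    I = math.pi * diam_m ** 4 / 64      # area moment of inertia
    A = math.pi * diam_m ** 2 / 4       # cross-sectional area
    return beta1_sq / (2 * math.pi * length_m ** 2) * math.sqrt(E * I / (rho * A))

# Frequency scales as 1/L^2, so modest length changes sweep ~10 Hz to ~kHz.
for L_mm in (5, 10, 30):
    print(L_mm, "mm ->", round(cantilever_f1(L_mm * 1e-3)), "Hz")
```

The 1/L² scaling is what makes the two-orders-of-magnitude tuning range plausible; adding a tip weight or altering stiffness, as in the paper, shifts the same resonance further.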

  8. 3D FEA of cemented glass fiber and cast posts with various dental cements in a maxillary central incisor.

    PubMed

    Madfa, Ahmed A; Al-Hamzi, Mohsen A; Al-Sanabani, Fadhel A; Al-Qudaimi, Nasr H; Yue, Xiao-Guang

    2015-01-01

    This study aimed to analyse and compare the stability of two dental posts cemented with four different luting agents by examining their shear stress transfer through the finite element method (FEM). Eight three-dimensional finite element models of a maxillary central incisor restored with glass fiber and Ni-Cr alloy cast dental posts were created. Each dental post was luted with zinc phosphate, Panavia resin, Super-Bond C&B resin or glass ionomer materials. Finite element models were constructed and an oblique loading of 100 N was applied. The distribution of shear stress was investigated at the posts and the cement/dentine interfaces using ABAQUS/CAE software. The peak shear stress for the glass fiber post models was approximately three to four times lower than that for the Ni-Cr alloy cast post models. There was negligible difference in peak shear stress when the various cements were compared, irrespective of post material. The shear stress showed the same trend for all cement materials. This study found that the glass fiber dental post reduced the shear stress concentration at the post and cement/dentine interfaces compared to the Ni-Cr alloy cast dental post.

  9. 3D FEA of cemented glass fiber and cast posts with various dental cements in a maxillary central incisor.

    PubMed

    Madfa, Ahmed A; Al-Hamzi, Mohsen A; Al-Sanabani, Fadhel A; Al-Qudaimi, Nasr H; Yue, Xiao-Guang

    2015-01-01

    This study aimed to analyse and compare the stability of two dental posts cemented with four different luting agents by examining their shear stress transfer through the finite element method (FEM). Eight three-dimensional finite element models of a maxillary central incisor restored with glass fiber and Ni-Cr alloy cast dental posts were created. Each dental post was luted with zinc phosphate, Panavia resin, Super-Bond C&B resin or glass ionomer materials. Finite element models were constructed and an oblique loading of 100 N was applied. The distribution of shear stress was investigated at the posts and the cement/dentine interfaces using ABAQUS/CAE software. The peak shear stress for the glass fiber post models was approximately three to four times lower than that for the Ni-Cr alloy cast post models. There was negligible difference in peak shear stress when the various cements were compared, irrespective of post material. The shear stress showed the same trend for all cement materials. This study found that the glass fiber dental post reduced the shear stress concentration at the post and cement/dentine interfaces compared to the Ni-Cr alloy cast dental post. PMID:26543733

  10. The Influence of Monocular Spatial Cues on Vergence Eye Movements in Monocular and Binocular Viewing of 3-D and 2-D Stimuli.

    PubMed

    Batvinionak, Anton A; Gracheva, Maria A; Bolshakov, Andrey S; Rozhkova, Galina I

    2015-01-01

    The influence of monocular spatial cues on vergence eye movements was studied in two series of experiments: (I) the subjects viewed a 3-D video and also its 2-D version, binocularly and monocularly; and (II) in binocular and monocular viewing conditions, the subjects were presented with stationary 2-D stimuli containing or not containing some monocular indications of spatial arrangement. The results of series (I) showed that, in binocular viewing conditions, vergence eye movements were present only for the 3-D video and not the 2-D video, while in the course of monocular viewing of the 2-D video some regular vergence eye movements could be revealed, suggesting that the occluded eye's position could be influenced by the spatial organization of the scene reconstructed on the basis of the monocular depth information provided by the viewing eye. The data obtained in series (II), in general, seem to support this hypothesis. PMID:26562921

  11. Exploring interaction with 3D volumetric displays

    NASA Astrophysics Data System (ADS)

    Grossman, Tovi; Wigdor, Daniel; Balakrishnan, Ravin

    2005-03-01

    Volumetric displays generate true volumetric 3D images by actually illuminating points in 3D space. As a result, viewing their contents is similar to viewing physical objects in the real world. These displays provide a 360 degree field of view, and do not require the user to wear hardware such as shutter glasses or head-trackers. These properties make them a promising alternative to traditional display systems for viewing imagery in 3D. Because these displays have only recently been made available commercially (e.g., www.actuality-systems.com), their current use tends to be limited to non-interactive output-only display devices. To take full advantage of the unique features of these displays, however, it would be desirable if the 3D data being displayed could be directly interacted with and manipulated. We investigate interaction techniques for volumetric display interfaces, through the development of an interactive 3D geometric model building application. While this application area itself presents many interesting challenges, our focus is on the interaction techniques that are likely generalizable to interactive applications for other domains. We explore a very direct style of interaction where the user interacts with the virtual data using direct finger manipulations on and around the enclosure surrounding the displayed 3D volumetric image.

  12. Radionuclide Incorporation in Secondary Crystalline Minerals Resulting from Chemical Weathering of Selected Waste Glasses: Progress Report for Subtask 3d

    SciTech Connect

    SV Mattigod; DI Kaplan; VL LeGore; RD Orr; HT Schaef; JS Young

    1998-10-23

    Experiments were conducted in fiscal year 1998 by Pacific Northwest National Laboratory to evaluate the potential incorporation of radionuclides in secondary mineral phases that form from the weathering of vitrified nuclear waste glasses. These experiments were conducted as part of the Immobilized Low-Activity Waste Performance Assessment (ILAW-PA) to generate data on radionuclide mobilization and transport in a near-field environment of disposed vitrified wastes. An initial experiment was conducted to identify the types of secondary minerals that form from two glass samples of differing compositions, LD6 and SRL202. Chemical weathering of LD6 glass at 90°C in contact with an aliquot of uncontaminated Hanford Site groundwater resulted in the formation of a crystalline zeolitic mineral, phillipsite. In contrast, similar chemical weathering of SRL202 glass at 90°C resulted in the formation of a microcrystalline smectitic mineral, nontronite. A second experiment was conducted at 90°C to assess the degree to which key radionuclides would be sequestered in the structure of the secondary crystalline minerals, namely phillipsite and nontronite. Chemical weathering of LD6 in contact with radionuclide-spiked Hanford Site groundwater indicated that substantial fractions of the total activities were retained in the phillipsite structure. Similar chemical weathering of SRL202 at 90°C, also in contact with radionuclide-spiked Hanford Site groundwater, showed that significant fractions of the total activities were retained in the nontronite structure. These results have important implications for the radionuclide mobilization aspects of the ILAW-PA. Additional studies are required to confirm the results and to develop an improved understanding of the mechanisms of sequestration and attenuated release of radionuclides, to help refine certain aspects of their mobilization.

  13. Quantum 3D spin-glass system on the scales of space-time periods of external electromagnetic fields

    SciTech Connect

    Gevorkyan, A. S.

    2012-10-15

    A dielectric medium consisting of rigidly polarized molecules is treated as a quantum 3D disordered spin system. It is shown that, using Birkhoff's ergodic hypothesis, the initial 3D disordered spin problem on the scales of the space-time periods of the external field reduces to two conditionally separable 1D problems. The first problem describes a 1D disordered N-particle quantum system with relaxation in a random environment, while the second describes the statistical properties of an ensemble of disordered 1D steric spin chains of a certain length. Based on the constructions developed for both problems, the coefficient of polarizability related to collective orientational effects under the influence of the external field was calculated. On the basis of these investigations, the Clausius-Mossotti (CM) equation has been generalized, as has the equation for the permittivity. It is shown that under the influence of weak standing electromagnetic fields a catastrophe can arise in the CM equation, which can substantially change the behavior of the permittivity in the X-ray region on the macroscopic scale of space.

  14. Fabrication of a three dimensional particle focusing microfluidic device using a 3D printer, PDMS, and glass

    NASA Astrophysics Data System (ADS)

    Collette, Robyn; Rosen, Daniel; Shirk, Kathryn

    Microfluidic devices are of high importance in fields such as bioanalysis because they can manipulate volumes of fluid in the range of microliters to picoliters. Small samples can be quickly and easily tested using complex microfluidic devices. Typically, these devices are created through lithography techniques, which can be costly and time consuming. It has been shown that inexpensive microfluidic devices can be produced quickly using a 3D printer and PDMS. However, a size limitation prohibits the fabrication of precisely controlled microchannels. By using shrinking materials in combination with 3D printing of flow-focusing geometries, this limitation can be overcome. This research seeks to employ these techniques to quickly fabricate an inexpensive, working device with three-dimensional particle focusing capabilities. By modifying the channel geometry, colloidal particles in a solution will be focused into a single beam when passed through the device. The ability to focus particles is necessary for a variety of biological applications which require precise detection and characterization of particles in a sample. We would like to thank the Shippensburg University Undergraduate Research Grant Program for their generous funding.

  15. [Initial research of one-beam pumping up-conversion 3D volumetric display based on Er:ZBLAN glass].

    PubMed

    Chen, Xiao-bo; Li, Mei-xian; Wen, Ou; Zhang, Fu-chu; Song, Zeng-fu

    2003-06-01

    This paper investigates a one-beam pumping up-conversion three-dimensional volumetric display based on an Er:ZBLAN fluoride glass. The light-length of the facula of the one-beam up-conversion luminescence was studied using a 966 nm semiconductor laser. The up-conversion luminescence spectrum was also obtained. It was found that the performance of the one-beam pumping three-dimensional volumetric display can be improved significantly by 1.52 μm LD laser multi-photon up-conversion; this finding has not been previously reported.

  16. Viewing effects of 3-D images synthesized from a series of 2-D tomograms by VAP and HAP approaches

    NASA Astrophysics Data System (ADS)

    Zhai, H. C.; Wang, M. W.; Liu, F. M.; Hsu, Ken Y.

    We report, for the first time, experimental results and analysis of the synthesis of a series of simulated 2-D tomograms into a 3-D monochromatic image. Our results clearly show the advantage in monochromaticity of a vertical area-partition (VAP) approach over a horizontal area-partition (HAP) approach during the final white-light reconstruction. This monochromaticity ensures 3-D image synthesis without distortion in gray level or positional recovery.

  17. High speed large viewing angle shutters for triple-flash active glasses

    NASA Astrophysics Data System (ADS)

    Caillaud, B.; Bellini, B.; de Bougrenet de la Tocnaye, J.-L.

    2009-02-01

    We present a new generation of liquid crystal shutters for active glasses, well suited to current trends in 3-D cinema involving triple-flash regimes. Our technology uses a composite smectic C* liquid crystal mixture. In this paper we focus on the electro-optical characterization of composite smectic-based shutters and compare their performance with nematic ones, demonstrating their advantages for the new generation of 3-D cinema and, more generally, 3-D HDTV.

  18. SummitView 1.0: a code to automatically generate 3D solid models of surface micro-machining based MEMS designs.

    SciTech Connect

    McBride, Cory L. (Elemental Technologies, American Fort, UT); Yarberry, Victor R.; Schmidt, Rodney Cannon; Meyers, Ray J.

    2006-11-01

    This report describes the SummitView 1.0 computer code developed at Sandia National Laboratories. SummitView is designed to generate a 3D solid model, amenable to visualization and meshing, that represents the end state of a microsystem fabrication process such as the SUMMiT (Sandia Ultra-Planar Multilevel MEMS Technology) V process. Functionally, SummitView performs essentially the same computational task as an earlier code called the 3D Geometry modeler [1]. However, because SummitView is based on 2D instead of 3D data structures and operations, it has significant speed and robustness advantages. As input it requires a definition of both the process itself and the collection of individual 2D masks created by the designer and associated with each of the process steps. The definition of the process is contained in a special process definition file [2] and the 2D masks are contained in MEM format files [3]. The code is written in C++ and consists of a set of classes and routines. The classes represent the geometric data and the SUMMiT V process steps. Classes are provided for the following process steps: Planar Deposition, Planar Etch, Conformal Deposition, Dry Etch, Wet Etch and Release Etch. SummitView is built upon the 2D Boolean library GBL-2D [4], and thus contains all of that library's functionality.
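The idea that each process step reduces to a 2D Boolean operation on mask regions can be sketched with ordinary Python sets standing in for the GBL-2D geometry library. The step names echo the report; the set-of-grid-cells representation and the `wafer` dictionary are purely illustrative, not the code's actual data structures:

```python
# Layer footprints modelled as sets of (x, y) grid cells; GBL-2D uses
# true 2D polygon Booleans, but the algebra is the same.
def planar_deposition(wafer, layer_name, mask):
    """Deposit new material over the masked region (Boolean union)."""
    wafer[layer_name] = wafer.get(layer_name, set()) | mask

def dry_etch(wafer, layer_name, mask):
    """Remove material inside the etch-mask opening (Boolean difference)."""
    wafer[layer_name] = wafer.get(layer_name, set()) - mask

wafer = {}
poly0 = {(x, y) for x in range(4) for y in range(4)}   # 4x4 blanket layer
hole = {(1, 1), (1, 2), (2, 1), (2, 2)}                # etch-mask opening
planar_deposition(wafer, "poly0", poly0)
dry_etch(wafer, "poly0", hole)
print(sorted(wafer["poly0"]))  # 12 cells remain around the etched hole
```

Working in 2D footprints like this, rather than on 3D solids, is what gives SummitView its speed and robustness advantage over the earlier 3D Geometry modeler.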

  19. Accuracy of x-ray image-based 3D localization from two C-arm views: a comparison between an ideal system and a real device

    NASA Astrophysics Data System (ADS)

    Brost, Alexander; Strobel, Norbert; Yatziv, Liron; Gilson, Wesley; Meyer, Bernhard; Hornegger, Joachim; Lewin, Jonathan; Wacker, Frank

    2009-02-01

    C-arm X-ray imaging devices are commonly used for minimally invasive cardiovascular or other interventional procedures. Calibrated state-of-the-art systems can, however, not only be used for 2D imaging but also for three-dimensional reconstruction, using either tomographic techniques or even stereotactic approaches. To evaluate the accuracy of X-ray object localization from two views, a simulation study assuming an ideal imaging geometry was carried out first. This was backed up with a phantom experiment involving a real C-arm angiography system. Both studies were based on a phantom comprising five point objects. These point objects were projected onto a flat-panel detector under different C-arm view positions. The resulting 2D positions were perturbed by adding Gaussian noise to simulate 2D point localization errors. In the next step, 3D point positions were triangulated from two views. A 3D error was computed by taking differences between the 3D positions reconstructed from the perturbed 2D positions and the initial 3D positions of the five points. This experiment was repeated for various C-arm angulations involving angular differences ranging from 15° to 165°. The smallest 3D reconstruction error was achieved, as expected, by views that were 90° apart. In this case, the simulation study yielded a 3D error of 0.82 mm +/- 0.24 mm (mean +/- standard deviation) for 2D noise with a standard deviation of 1.232 mm (4 detector pixels). The experimental result for this view configuration obtained on an AXIOM Artis C-arm (Siemens AG, Healthcare Sector, Forchheim, Germany) system was 0.98 mm +/- 0.29 mm, respectively. These results show that state-of-the-art C-arm systems can localize instruments with millimeter accuracy, and that they can accomplish this almost as well as an idealized theoretical counterpart. High stereotactic localization accuracy, good patient access, and CT-like 3D imaging capabilities render state-of-the-art C-arm systems ideal devices for X
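The two-view triangulation step can be sketched with the classic midpoint method: back-project a ray from each calibrated view through the detected 2D point, then take the point midway between the two rays at their closest approach. This is a generic stand-in for the paper's solver, not necessarily the authors' exact formulation:

```python
def triangulate(o1, d1, o2, d2):
    """Midpoint triangulation: the 3D point closest to two back-projected
    rays (camera centre o_i, direction d_i). Uses the standard
    closest-points-between-lines formulas."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    sub = lambda u, v: tuple(a - b for a, b in zip(u, v))
    w0 = sub(o1, o2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b                  # zero only for parallel rays
    s, t = (b * e - c * d) / denom, (a * e - b * d) / denom
    p1 = tuple(o + s * k for o, k in zip(o1, d1))
    p2 = tuple(o + t * k for o, k in zip(o2, d2))
    return tuple((u + v) / 2 for u, v in zip(p1, p2))

# Two synthetic views whose rays meet exactly at (1, 2, 3):
p = triangulate((0, 0, 0), (1, 2, 3), (10, 0, 0), (-9, 2, 3))
print(p)  # → (1.0, 2.0, 3.0)
```

With noisy 2D detections the rays no longer intersect, and the midpoint-to-ray distance behaves like the 3D error studied in the paper, which is why nearly orthogonal views (rays crossing at ~90°) give the best-conditioned result.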

  20. Characterization by combined optical and FT infrared spectra of 3d-transition metal ions doped-bismuth silicate glasses and effects of gamma irradiation.

    PubMed

    ElBatal, F H; Abdelghany, A M; ElBatal, H A

    2014-03-25

    Optical and infrared absorption spectral measurements were carried out for a binary bismuth silicate glass and other derived samples of the same composition containing an additional 0.2% of one of the 3d transition metal oxides. The same combined spectroscopic properties were also measured after subjecting the prepared glasses to a gamma dose of 8 Mrad. The experimental optical spectra reveal strong UV-near visible absorption bands from the base glass extending to all the TM-doped samples; these strong, extended UV-near visible absorption bands are related to contributions from both trace iron (Fe(3+)) ions present as contaminating impurities within the raw materials and absorption by the main-constituent trivalent bismuth (Bi(3+)) ions. The strong UV-near visible absorption bands are observed to suppress any further UV bands from the TM ions. The studied glasses show obvious resistance to gamma irradiation, and only small changes are observed upon gamma irradiation. This shielding behavior is related to the presence of heavy Bi(3+) ions, which cause the observed stability of the optical absorption. Infrared absorption spectra of the studied glasses reveal characteristic vibrational bands due to modes from both the silicate network and the sharing of Bi-O linkages, and the presence of TMs at the doping level (0.2%) causes no distinct changes in the number or position of the vibrational modes. The presence of a high Bi2O3 content (70 mol%) appears to stabilize the structural building units towards gamma irradiation, as revealed by the FTIR measurements.

  1. The Best of Both Worlds: 3D X-ray Microscopy with Ultra-high Resolution and a Large Field of View

    NASA Astrophysics Data System (ADS)

    Li, W.; Gelb, J.; Yang, Y.; Guan, Y.; Wu, W.; Chen, J.; Tian, Y.

    2011-09-01

    3D visualizations of complex structures within various samples have been achieved with high spatial resolution by X-ray computed nanotomography (nano-CT). However, high spatial resolution generally comes at the expense of field of view (FOV). Here we propose an approach that stitches several 3D volumes together into a single large volume to significantly increase the size of the FOV while preserving resolution. Combining this with nano-CT, an 18-μm FOV with sub-60-nm resolution has been achieved for the non-destructive 3D visualization of clustered yeasts that were too large for a single scan. The approach shows high promise for imaging other large samples in the future.
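The volume-stitching idea can be sketched as pasting reconstructed sub-volumes into one larger array at known integer offsets (e.g. from stage positions). Overlap handling below is a naive overwrite; a real pipeline would register the tiles and blend the seams:

```python
def stitch(tiles, shape):
    """tiles: list of ((oz, oy, ox), subvolume) pairs, where each
    subvolume is a nested z/y/x list and the offset is its position
    in the output volume. Overlapping voxels are simply overwritten."""
    z, y, x = shape
    vol = [[[0] * x for _ in range(y)] for _ in range(z)]
    for (oz, oy, ox), sub in tiles:
        for dz, plane in enumerate(sub):
            for dy, row in enumerate(plane):
                for dx, val in enumerate(row):
                    vol[oz + dz][oy + dy][ox + dx] = val
    return vol

# Two 1x1x2 tiles stitched side by side into a 1x1x4 volume:
t1 = [[[1, 2]]]
t2 = [[[3, 4]]]
vol = stitch([((0, 0, 0), t1), ((0, 0, 2), t2)], (1, 1, 4))
print(vol)  # → [[[1, 2, 3, 4]]]
```

The payoff is exactly the trade-off the abstract names: each tile keeps the detector's full resolution, while the stitched volume's FOV grows with the number of tiles.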

  2. TransCAIP: A Live 3D TV system using a camera array and an integral photography display with interactive control of viewing parameters.

    PubMed

    Taguchi, Yuichi; Koike, Takafumi; Takahashi, Keita; Naemura, Takeshi

    2009-01-01

    The system described in this paper provides a real-time 3D visual experience by using an array of 64 video cameras and an integral photography display with 60 viewing directions. The live 3D scene in front of the camera array is reproduced by the full-color, full-parallax autostereoscopic display with interactive control of viewing parameters. The main technical challenge is fast and flexible conversion of the data from the 64 multicamera images to the integral photography format. Based on image-based rendering techniques, our conversion method first renders 60 novel images corresponding to the viewing directions of the display, and then arranges the rendered pixels to produce an integral photography image. For real-time processing on a single PC, all the conversion processes are implemented on a GPU with GPGPU techniques. The conversion method also allows a user to interactively control viewing parameters of the displayed image for reproducing the dynamic 3D scene with desirable parameters. This control is performed as a software process, without reconfiguring the hardware system, by changing the rendering parameters such as the convergence point of the rendering cameras and the interval between the viewpoints of the rendering cameras.
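
The final step the abstract describes, arranging pixels from the 60 rendered views into an integral photography image, can be sketched for the simpler 1D (horizontal-parallax) case. The layout below, one pixel from every view under each lenslet, is a generic illustration and not the paper's exact GPU mapping:

```python
import numpy as np

def interleave_views(views):
    """Interleave D rendered views of shape (D, H, W) into one (H, W*D)
    image, placing one pixel from every view under each (1D) lenslet."""
    d, h, w = views.shape
    out = np.zeros((h, w * d), dtype=views.dtype)
    for k in range(d):
        out[:, k::d] = views[k]  # view k occupies every d-th column
    return out
```

A lenslet sheet laid over the interleaved image then directs each column bundle toward its own viewing direction; the paper performs the analogous 2D rearrangement for its 60-direction display on the GPU.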

  3. Prominent rocks - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Many prominent rocks near the Sagan Memorial Station are featured in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. Wedge is at lower left; Shark, Half-Dome, and Pumpkin are at center. Flat Top, about four inches high, is at lower right. The horizon in the distance is one to two kilometers away.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

    Click below to see the left and right views individually. [figure removed for brevity, see original site] Left [figure removed for brevity, see original site] Right

  4. Martian terrain - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    This image of terrain near the Sagan Memorial Station was taken on Sol 3 by the Imager for Mars Pathfinder (IMP). 3D glasses are necessary to identify surface detail.

    The IMP is a stereo imaging system with color capability provided by 24 selectable filters -- twelve filters per 'eye.' It stands 1.8 meters above the Martian surface, and has a resolution of two millimeters at a range of two meters.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

    Click below to see the left and right views individually. [figure removed for brevity, see original site] Left [figure removed for brevity, see original site] Right

  5. Investigation of 3D silvernanodendrite@glass as surface-enhanced Raman scattering substrate for the detection of Sildenafil and GSH

    NASA Astrophysics Data System (ADS)

    Lv, Meng; Gu, Huaimin; Yuan, Xiaojuan; Gao, Junxiang; Cai, Tiantian

    2012-12-01

    A solid-phase dendritic Ag nanostructure was synthesized in the presence of silk fibroin biomacromolecule and planted on glass to form a three-dimensional (3D) silvernanodendrite@glass film. When NO3-, Cl- and SO42- were added during synthesis of the film to study their influence on the Raman activity of this substrate, using MB as the probe molecule, it was found that the substrate prepared with Cl- gives the most intense enhancement, and two mechanisms were proposed to explain this phenomenon. The substrate's superiority in practical surface-enhanced Raman scattering (SERS) applications was verified by analyzing the characteristic Raman spectrum of Sildenafil between 1150 cm-1 and 1699 cm-1. In addition, the adsorption mechanism of GSH on the film through the role of the peptide bond was analyzed. GSH interacts strongly with the silver surface via the ν(C-S) mode in two different conformers; the carboxyl and amide groups are also involved in the adsorption process. In this experiment, we synthesized, studied and applied this as-grown substrate and obtained information about its interaction with different molecular bonds and functional groups of the peptide.

  6. FACET: a radiation view factor computer code for axisymmetric, 2D planar, and 3D geometries with shadowing

    SciTech Connect

    Shapiro, A.B.

    1983-08-01

    The computer code FACET calculates the radiation geometric view factor (alternatively called shape factor, angle factor, or configuration factor) between surfaces for axisymmetric, two-dimensional planar and three-dimensional geometries with interposed third surface obstructions. FACET was developed to calculate view factors for input to finite-element heat-transfer analysis codes. The first section of this report is a brief review of previous radiation-view-factor computer codes. The second section presents the defining integral equation for the geometric view factor between two surfaces and the assumptions made in its derivation. Also in this section are the numerical algorithms used to integrate this equation for the various geometries. The third section presents the algorithms used to detect self-shadowing and third-surface shadowing between the two surfaces for which a view factor is being calculated. The fourth section provides a user's input guide followed by several example problems.
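
The defining integral FACET evaluates, the geometric view factor between two surfaces, can be estimated numerically with a short Monte Carlo sketch. The setup below (two coaxial, parallel unit squares, no shadowing) is a hedged illustration of what the integral means, not FACET's actual deterministic integration algorithm:

```python
import numpy as np

def view_factor_mc(h=1.0, n=100_000, seed=0):
    """Monte Carlo estimate of the view factor F12 between two coaxial,
    parallel unit squares separated by distance h (no obstruction)."""
    rng = np.random.default_rng(seed)
    p1 = rng.random((n, 2))  # uniform sample points on surface 1
    p2 = rng.random((n, 2))  # uniform sample points on surface 2
    r2 = ((p1 - p2) ** 2).sum(axis=1) + h * h  # squared point separation
    # For parallel faces cos(theta1) = cos(theta2) = h/r, so the kernel
    # cos(theta1)*cos(theta2) / (pi r^2) reduces to h^2 / (pi r^4).
    return (h * h / (np.pi * r2 ** 2)).mean()  # A2 = 1, so F12 = E[kernel]
```

For h = 1 the analytic value for parallel unit squares is about 0.1998, and the estimate converges toward it as n grows. FACET integrates the same kernel with deterministic quadrature and additionally tests each surface pair against third-surface obstructions.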

  7. 3D scene reconstruction based on multi-view distributed video coding in the Zernike domain for mobile applications

    NASA Astrophysics Data System (ADS)

    Palma, V.; Carli, M.; Neri, A.

    2011-02-01

    In this paper a Multi-view Distributed Video Coding scheme for mobile applications is presented. Specifically, a new fusion technique between temporal and spatial side information in the Zernike moments domain is proposed. Distributed video coding introduces a flexible architecture that enables the design of very low-complexity video encoders compared to their traditional counterparts. The main goal of our work is to generate at the decoder the side information that optimally blends temporal and inter-view data. Multi-view distributed coding performance strongly depends on the quality of the side information built at the decoder. To improve this quality, spatial view compensation/prediction in the Zernike moments domain is applied. Spatial and temporal motion activity have been fused together to obtain the overall side information. The proposed method has been evaluated by rate-distortion performance for different inter-view and temporal estimation quality conditions.

  8. Creating 3D visualizations of MRI data: A brief guide

    PubMed Central

    Madan, Christopher R.

    2015-01-01

    While magnetic resonance imaging (MRI) data is itself 3D, it is often difficult to adequately present the results in papers and slides in 3D. As a result, findings of MRI studies are often presented in 2D instead. A solution is to create figures that include perspective and can convey 3D information; such figures can sometimes be produced by standard functional magnetic resonance imaging (fMRI) analysis packages and related specialty programs. However, many options cannot provide functionality such as visualizing activation clusters that are both cortical and subcortical (i.e., a 3D glass brain), the production of several statistical maps with an identical perspective in the 3D rendering, or animated renderings. Here I detail an approach for creating 3D visualizations of MRI data that satisfies all of these criteria. Though a 3D ‘glass brain’ rendering can sometimes be difficult to interpret, it is useful in showing a more overall representation of the results, whereas the traditional slices show a more local view. Combined, presenting both 2D and 3D representations of MR images can provide a more comprehensive view of the study’s findings. PMID:26594340

  9. Gypsies in the palace: Experimentalist's view on the use of 3-D physics-based simulation of hillslope hydrological response

    USGS Publications Warehouse

    James, A.L.; McDonnell, Jeffery J.; Tromp-Van Meerveld, I.; Peters, N.E.

    2010-01-01

    As a fundamental unit of the landscape, hillslopes are studied for their retention and release of water and nutrients across a wide range of ecosystems. The understanding of these near-surface processes is relevant to issues of runoff generation, groundwater-surface water interactions, catchment export of nutrients, dissolved organic carbon, contaminants (e.g. mercury) and ultimately surface water health. We develop a 3-D physics-based representation of the Panola Mountain Research Watershed experimental hillslope using the TOUGH2 sub-surface flow and transport simulator. A recent investigation of sub-surface flow within this experimental hillslope has generated important knowledge of threshold rainfall-runoff response and its relation to patterns of transient water table development. This work has identified components of the 3-D sub-surface, such as bedrock topography, that contribute to changing connectivity in saturated zones and the generation of sub-surface stormflow. Here, we test the ability of a 3-D hillslope model (both calibrated and uncalibrated) to simulate forested hillslope rainfall-runoff response and internal transient sub-surface stormflow dynamics. We also provide a transparent illustration of physics-based model development, issues of parameterization, examples of model rejection and usefulness of data types (e.g. runoff, mean soil moisture and transient water table depth) to the model enterprise. Our simulations show the inability of an uncalibrated model based on laboratory and field characterization of soil properties and topography to successfully simulate the integrated hydrological response or the distributed water table within the soil profile. Although not an uncommon result, the failure of the field-based characterized model to represent system behaviour is an important challenge that continues to vex scientists at many scales. We focus our attention particularly on examining the influence of bedrock permeability, soil anisotropy and

  10. 3D crustal architecture of the Alps-Apennines join — a new view on seismic data

    NASA Astrophysics Data System (ADS)

    Schumacher, M. E.; Laubscher, H. P.

    1996-08-01

    Seismic data from the Alps-Apennines join have usually been interpreted in the form of 2D cross-sections, passing either through the Western Alps or the Ligurian Alps-Monferrato Apennines. However, the oblique SE-NW convergence of Adria and Europa and superimposed rotations imply a distinct 3D kinematic development around the Adriatic Indenter (AI), the westernmost spur of Adria. In order to develop kinematic models, data on motion at the different margins of AI must be coordinated. Along the northern margin, the dextrally transpressive Insubric line (IL) was active between 25 and 16 Ma (Insubric-Helvetic phase of Alpine orogeny). Contemporaneously, along the southern margin (Paleo-Apenninic phase), a complementary sinistral motion took place along the Villalvernia-Varzi line (VVL). It emplaced the Monferrato Apennines westward to the north of the Ligurian Alps by carrying them westward on top of AI. Between 14 and 6 Ma (Jura-Lombardic phase of Alpine orogeny) the Lombardic thrust belt developed on the northern margin of AI, now largely hidden under the Po plain. Its continuation to the southwest is impeded by older thrust masses along the Western Alps that consist largely of basement, their sediments having been eroded, as noted on the deep reflection line CROP ALPI-1 by earlier investigators. This line, moreover, contains a deep reflection band originating in the autochthonous Mesozoic of the Apenninic foredeep. In order to better visualize this origin and the relation of further elements identified on reflection lines around the northwestern end of the Monferrato Apennines, a 3D fence diagram was constructed. It helps in establishing a 3D structural-kinematic model of the Alps-Apennines join based on the kinematics of AI. This model features an underthrust of AI under the western Alps in the Paleo-Apenninic phase. In the course of this underthrust, the Paleo-Apenninic elements of the Monferrato moved under the marginal thrusts of the western Alps. 
Subsequent Neo

  11. On the Use of Uavs in Mining and Archaeology - Geo-Accurate 3d Reconstructions Using Various Platforms and Terrestrial Views

    NASA Astrophysics Data System (ADS)

    Tscharf, A.; Rumpler, M.; Fraundorfer, F.; Mayer, G.; Bischof, H.

    2015-08-01

    During the last decades photogrammetric computer vision systems have been well established in scientific and commercial applications. Especially the increasing affordability of unmanned aerial vehicles (UAVs) in conjunction with automated multi-view processing pipelines has resulted in an easy way of acquiring spatial data and creating realistic and accurate 3D models. With the use of multicopter UAVs, it is possible to record highly overlapping images from almost terrestrial camera positions to oblique and nadir aerial images due to the ability to navigate slowly, hover and capture images at nearly any possible position. Multicopter UAVs thus bridge the gap between terrestrial and traditional aerial image acquisition and are therefore ideally suited to enable easy and safe data collection and inspection tasks in complex or hazardous environments. In this paper we present a fully automated processing pipeline for precise, metric and geo-accurate 3D reconstructions of complex geometries using various imaging platforms. Our workflow allows for georeferencing of UAV imagery based on GPS-measurements of camera stations from an on-board GPS receiver as well as tie and control point information. Ground control points (GCPs) are integrated directly in the bundle adjustment to refine the georegistration and correct for systematic distortions of the image block. We discuss our approach based on three different case studies for applications in mining and archaeology and present several accuracy related analyses investigating georegistration, camera network configuration and ground sampling distance. Our approach is furthermore suited for seamlessly matching and integrating images from different viewpoints and cameras (aerial and terrestrial as well as inside views) into one single reconstruction. Together with aerial images from a UAV, we are able to enrich 3D models by combining terrestrial images as well as inside views of an object by joint image processing to

  12. Cylindrical 3D video display observable from all directions

    NASA Astrophysics Data System (ADS)

    Endo, Tomohiro; Kajiki, Yoshihiro; Honda, Toshio; Sato, Makoto

    2000-05-01

    We propose a 3D video display technique with which multiple viewers can observe 3D images from 360 degrees of arc horizontally without 3D glasses. This technique uses a cylindrical parallax barrier and a 1D light source array. We have developed an experimental display using this technique and have demonstrated 3D images observable from 360 degrees of arc horizontally without 3D glasses. Since the technique is based on the parallax panoramagram, the parallax number and resolution are limited by diffraction at the parallax barrier. To avoid these limits, we improved the technique by revolving the parallax barrier, and have been developing a new experimental display based on this improvement. The display is capable of displaying cylindrical 3D video images within a diameter of 100 mm and a height of 128 mm. Images are rendered at a resolution of 1254 pixels circumferentially and 128 pixels vertically, and refreshed at 30 Hz. Each pixel has a viewing angle of 60 degrees that is divided into 70 views, so the angular parallax interval of each pixel is less than 1 degree; at such fine intervals, observers barely perceive the parallax as discrete. The pixels are arranged on a cylindrical surface, so the produced 3D images can be observed from all directions.

  13. Wavelet-Based 3D Reconstruction of Microcalcification Clusters from Two Mammographic Views: New Evidence That Fractal Tumors Are Malignant and Euclidean Tumors Are Benign

    PubMed Central

    Batchelder, Kendra A.; Tanenbaum, Aaron B.; Albert, Seth; Guimond, Lyne; Kestener, Pierre; Arneodo, Alain; Khalil, Andre

    2014-01-01

    The 2D Wavelet-Transform Modulus Maxima (WTMM) method was used to detect microcalcifications (MC) in human breast tissue seen in mammograms and to characterize the fractal geometry of benign and malignant MC clusters. This was done in the context of a preliminary analysis of a small dataset, via a novel way to partition the wavelet-transform space-scale skeleton. For the first time, the estimated 3D fractal structure of a breast lesion was inferred by pairing the information from two separate 2D projected mammographic views of the same breast, i.e. the cranial-caudal (CC) and mediolateral-oblique (MLO) views. As a novelty, we define the “CC-MLO fractal dimension plot”, where a “fractal zone” and “Euclidean zones” (non-fractal) are defined. 118 images (59 cases, 25 malignant and 34 benign) obtained from a digital databank of mammograms with known radiologist diagnostics were analyzed to determine which cases would be plotted in the fractal zone and which cases would fall in the Euclidean zones. 92% of malignant breast lesions studied (23 out of 25 cases) were in the fractal zone while 88% of the benign lesions were in the Euclidean zones (30 out of 34 cases). Furthermore, a Bayesian statistical analysis shows that, with 95% credibility, the probability that fractal breast lesions are malignant is between 74% and 98%. Alternatively, with 95% credibility, the probability that Euclidean breast lesions are benign is between 76% and 96%. These results support the notion that the fractal structure of malignant tumors is more likely to be associated with an invasive behavior into the surrounding tissue compared to the less invasive, Euclidean structure of benign tumors. Finally, based on indirect 3D reconstructions from the 2D views, we conjecture that all breast tumors considered in this study, benign and malignant, fractal or Euclidean, restrict their growth to 2-dimensional manifolds within the breast tissue. PMID:25222610
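
The fractal-versus-Euclidean distinction this record relies on comes down to estimating a fractal dimension from an image. A box-counting sketch conveys the idea; the WTMM method used in the paper is a more sophisticated wavelet-based estimator, so this function is an illustrative stand-in, not the authors' procedure:

```python
import numpy as np

def box_count_dim(img, sizes=(2, 4, 8, 16)):
    """Estimate the fractal dimension of a binary image by box counting:
    count occupied boxes N(s) at several box sizes s, then fit the slope
    of log N(s) against log(1/s)."""
    counts = []
    for s in sizes:
        c = 0
        for i in range(0, img.shape[0], s):
            for j in range(0, img.shape[1], s):
                if img[i:i + s, j:j + s].any():  # box contains the set?
                    c += 1
        counts.append(c)
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope
```

A filled square yields a dimension of 2 (a Euclidean area), a straight line yields 1, and a ragged cluster falls in between, the kind of non-integer value that places a lesion in the paper's "fractal zone".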

  14. The New Realm of 3-D Vision

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Dimension Technologies Inc. developed a line of 2-D/3-D Liquid Crystal Display (LCD) screens, including a 15-inch model priced at consumer levels. DTI's family of flat panel LCD displays, called the Virtual Window(TM), provides real-time 3-D images without the use of glasses, head trackers, helmets, or other viewing aids. Most of the company's initial 3-D display research was funded through NASA's Small Business Innovation Research (SBIR) program. The images on DTI's displays appear to leap off the screen and hang in space. The display accepts input from computers or stereo video sources, and can be switched from 3-D to full-resolution 2-D viewing with the push of a button. The Virtual Window displays have applications in data visualization, medicine, architecture, business, real estate, entertainment, and other research, design, military, and consumer applications. Displays are currently used for computer games, protein analysis, and surgical imaging. The technology greatly benefits the medical field, as surgical simulators are helping to increase the skills of surgical residents. Virtual Window(TM) is a trademark of Dimension Technologies Inc.

  15. Sojourner near Barnacle Bill - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    At right, Sojourner has traveled off the lander's rear ramp and onto the surface of Mars. 3D glasses are necessary to identify surface detail. The rock Barnacle Bill and the rear ramp are to the left of Sojourner.

    The image was taken by the Imager for Mars Pathfinder (IMP) on Sol 3. The IMP is a stereo imaging system with color capability provided by 24 selectable filters -- twelve filters per 'eye.'

    Click below to see the left and right views individually. [figure removed for brevity, see original site] Left [figure removed for brevity, see original site] Right

  16. Increasing the effective aperture of a detector and enlarging the receiving field of view in a 3D imaging lidar system through hexagonal prism beam splitting.

    PubMed

    Lee, Xiaobao; Wang, Xiaoyi; Cui, Tianxiang; Wang, Chunhui; Li, Yunxi; Li, Hailong; Wang, Qi

    2016-07-11

    The detector in a highly accurate and high-definition scanning 3D imaging lidar system requires high frequency bandwidth and sufficient photosensitive area. To solve the problem of the small photosensitive area of an existing indium gallium arsenide detector with a certain frequency bandwidth, this study proposes a method for increasing the receiving field of view (FOV) and enlarging the effective photosensitive aperture of such a detector through hexagonal prism beam splitting. The principle and construction of hexagonal prism beam splitting are also discussed in this research. Accordingly, a receiving optical system with two hexagonal prisms is provided and the beam-splitting effect of the simulation experiment is analyzed. Using this novel method, the receiving optical system's FOV can be effectively improved up to ±5°, and the effective photosensitive aperture of the detector is increased from 0.5 mm to 1.5 mm. PMID:27410800

  17. TU-C-BRE-04: 3D Gel Dosimetry Using ViewRay On-Board MR Scanner: A Feasibility Study

    SciTech Connect

    Zhang, L; Du, D; Green, O; Rodriguez, V; Wooten, H; Xiao, Z; Yang, D; Hu, Y; Li, H

    2014-06-15

    Purpose: MR based 3D gel has been proposed for radiation therapy dosimetry. However, access to an MR scanner has been one of the limiting factors for its wide acceptance. Recent commercialization of an on-board MR-IGRT device (ViewRay) may render the availability issue less of a concern. This work reports our attempts to simulate MR based dose measurement accuracy on ViewRay using three different gels. Methods: A spherical BANG gel dosimeter was purchased from MGS Research. Cylindrical MAGIC gel and Fricke gel were fabricated in-house according to published recipes. After irradiation, BANG and MAGIC were imaged using a dual-echo spin echo sequence for T2 measurement on a Philips 1.5T MR scanner, while Fricke gel was imaged using multiple spin echo sequences. The difference between MR measured and TPS calculated dose was defined as noise. The noise power spectrum was calculated and then simulated for the 0.35 T magnetic field associated with ViewRay. The estimated noise was then added to TG-119 test cases to simulate measured dose distributions. Simulated measurements were evaluated against TPS calculated doses using gamma analysis. Results: Given the same gel, sequence and coil setup, with a FOV of 180×90×90 mm3, resolution of 3×3×3 mm3, and scanning time of 30 minutes, the simulated measured dose distribution using BANG would have a gamma passing rate greater than 90% (3%/3mm and absolute). With a FOV of 180×90×90 mm3, resolution of 4×4×5 mm3, and scanning time of 45 minutes, the simulated measured dose distribution would have a gamma passing rate greater than 97%. MAGIC exhibited similar performance while Fricke gel was inferior due to much higher noise. Conclusions: The simulation results demonstrated that it may be feasible to use MAGIC and BANG gels for 3D dose verification using the ViewRay low-field on-board MRI scanner.
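
The pass/fail criterion quoted in this record is the standard gamma index (here 3% dose difference, 3 mm distance-to-agreement). A minimal 1D global-gamma sketch shows the computation; the function name and its brute-force search are illustrative, not the ViewRay or TG-119 implementation:

```python
import numpy as np

def gamma_pass_rate(ref, meas, spacing=1.0, dd=0.03, dta=3.0):
    """1D global gamma analysis: the fraction of reference points whose
    minimum combined dose-difference/distance discrepancy is <= 1."""
    ref = np.asarray(ref, float)
    meas = np.asarray(meas, float)
    x = np.arange(ref.size) * spacing  # sample positions in mm
    tol = dd * ref.max()               # global dose tolerance (3% of max)
    passed = 0
    for xi, di in zip(x, ref):
        # gamma at this point: minimum over all measured points of the
        # quadrature sum of normalized dose and distance differences
        g = np.sqrt(((meas - di) / tol) ** 2 + ((x - xi) / dta) ** 2)
        if g.min() <= 1.0:
            passed += 1
    return 100.0 * passed / ref.size
```

A measurement identical to the plan passes 100%; the noise added by the simulated 0.35 T acquisition pushes individual points above gamma = 1 and lowers the rate, which is what the study quantifies.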

  18. 6. Building E9; view of glass lines for dilute liquor ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    6. Building E-9; view of glass lines for dilute liquor and spent acid; second floor, looking ESE. Bottom of wash tank is at the top of the view. (Ryan and Harms) - Holston Army Ammunition Plant, RDX-and-Composition-B Manufacturing Line 9, Kingsport, Sullivan County, TN

  19. 3d visualization of atomistic simulations on every desktop

    NASA Astrophysics Data System (ADS)

    Peled, Dan; Silverman, Amihai; Adler, Joan

    2013-08-01

    Once upon a time, after making simulations, one had to go to a visualization center with fancy SGI machines to run a GL visualization and make a movie. More recently, OpenGL and its mesa clone have let us create 3D on simple desktops (or laptops), whether or not a Z-buffer card is present. Today, 3D a la Avatar is a commodity technique, presented in cinemas and sold for home TV. However, only a few special research centers have systems large enough for entire classes to view 3D, or special immersive facilities like visualization CAVEs or walls, and not everyone finds 3D immersion easy to view. For maximum physics with minimum effort a 3D system must come to each researcher and student. So how do we create 3D visualization cheaply on every desktop for atomistic simulations? After several months of attempts to select commodity equipment for a whole room system, we selected an approach that goes back a long time, even predating GL. The old concept of anaglyphic stereo relies on two images, slightly displaced, and viewed through colored glasses, or two squares of cellophane from a regular screen/projector or poster. We have added this capability to our AViz atomistic visualization code in its new, 6.1 version, which is RedHat, CentOS and Ubuntu compatible. Examples using data from our own research and that of other groups will be given.
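
Anaglyphic stereo, as this record describes, encodes the left-eye and right-eye views into complementary color channels of a single image. Below is a minimal sketch of the common red/cyan convention; the function name is illustrative and this is not AViz's implementation:

```python
import numpy as np

def make_anaglyph(left, right):
    """Combine two slightly displaced RGB views into a red/cyan anaglyph:
    red channel from the left-eye view, green and blue from the right."""
    out = np.array(right, dtype=float, copy=True)
    out[..., 0] = np.asarray(left, dtype=float)[..., 0]
    return out
```

Viewed through red/cyan glasses, each eye sees only its own image, and the displacement between the two views is perceived as depth, which is the principle that lets anaglyphs work on any ordinary screen, projector, or poster.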

  20. `We put on the glasses and Moon comes closer!' Urban Second Graders Exploring the Earth, the Sun and Moon Through 3D Technologies in a Science and Literacy Unit

    NASA Astrophysics Data System (ADS)

    Isik-Ercan, Zeynep; Zeynep Inan, Hatice; Nowak, Jeffrey A.; Kim, Beomjin

    2014-01-01

    This qualitative case study describes (a) the ways 3D visualization, coupled with other science and literacy experiences, supported young children's first exploration of the Earth-Sun-Moon system and (b) the perspectives of classroom teachers and children on using 3D visualization. We created three interactive 3D software modules that simulate day and night, Moon phases and seasons. These modules were used in a science and literacy unit for 35 second graders at an urban elementary school in Midwestern USA. Data included pre- and post-interviews, audio-taped lessons and classroom observations. Post-interviews demonstrated that children's knowledge of the shapes and the movements of the Earth and Moon, alternation of day and night, the occurrence of the seasons, and Moon's changing appearance increased. Second graders reported that they enjoyed expanding their knowledge through hands-on experiences; through its reality effect, 3D visualization enabled them to observe the space objects that move in the virtual space. The teachers noted that 3D visualization stimulated children's interest in space and that using 3D visualization in combination with other teaching methods-literacy experiences, videos and photos, simulations, discussions, and presentations-supported student learning. The teachers and the students still experienced challenges using 3D visualization due to technical problems with 3D vision and time constraints. We conclude that 3D visualization offers hands-on experiences for challenging science concepts and may support young children's ability to view phenomena that would typically be observed through direct, long-term observations in outer space. Results imply a reconsideration of assumed capabilities of young children to understand astronomical phenomena.

  1. Teaching Geography with 3-D Visualization Technology

    ERIC Educational Resources Information Center

    Anthamatten, Peter; Ziegler, Susy S.

    2006-01-01

    Technology that helps students view images in three dimensions (3-D) can support a broad range of learning styles. "Geo-Wall systems" are visualization tools that allow scientists, teachers, and students to project stereographic images and view them in 3-D. We developed and presented 3-D visualization exercises in several undergraduate courses.…

  2. [3-D ultrasound in gastroenterology].

    PubMed

    Zoller, W G; Liess, H

    1994-06-01

    Three-dimensional (3D) sonography represents a development of noninvasive diagnostic imaging beyond real-time two-dimensional (2D) sonography. The use of transparent rotating scans, comparable to a block of glass, generates a 3D effect. The objective of the present study was to optimize the 3D presentation of abdominal findings. Additional investigations were made with a new volumetric program to determine the volume of selected findings of the liver. The results were compared with the volumes estimated by 2D sonography and 2D computed tomography (CT). For the processing of 3D images, typical parameter constellations were found for the different findings, which facilitated processing of the 3D images. In more than 75% of the cases examined we found an optimal 3D presentation of sonographic findings with respect to the evaluation criteria we developed for the 3D imaging of processed data. Great differences were found among the three techniques in the estimated volumes of the liver findings. 3D ultrasound represents a valuable method for judging the morphological appearance of abdominal findings. The possibility of volumetric measurement enlarges its potential diagnostic significance. Further clinical investigations are necessary to determine whether definite differentiation between benign and malignant findings is possible.
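
Volumetry from swept-ultrasound or tomographic data typically reduces to summing traced cross-sectional areas over the slice spacing. The trapezoidal sketch below is a generic illustration of that principle, not the volumetric program evaluated in this study:

```python
import numpy as np

def volume_from_slices(areas, spacing):
    """Estimate a volume from per-slice cross-sectional areas (e.g. traced
    lesion outlines) at a uniform slice spacing, using the trapezoidal
    rule along the sweep direction."""
    areas = np.asarray(areas, dtype=float)
    # trapezoidal rule: average adjacent slice areas, multiply by spacing
    return float(spacing * (areas[:-1] + areas[1:]).sum() / 2.0)
```

With areas in cm² and spacing in cm the result is in cm³; the study's comparison is between such 3D-derived volumes and the cruder estimates obtainable from single 2D images.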

  3. Timescales of quartz crystallization estimated from glass inclusion faceting using 3D propagation phase-contrast x-ray tomography: examples from the Bishop (California, USA) and Oruanui (Taupo Volcanic Zone, New Zealand) Tuffs

    NASA Astrophysics Data System (ADS)

    Pamukcu, A.; Gualda, G. A.; Anderson, A. T.

    2012-12-01

    Compositions of glass inclusions have long been studied for the information they provide on the evolution of magma bodies. Textures - sizes, shapes, positions - of glass inclusions have received less attention, but they can also provide important insight into magmatic processes, including the timescales over which magma bodies develop and erupt. At magmatic temperatures, initially round glass inclusions will become faceted (attain a negative crystal shape) through the process of dissolution and re-precipitation, such that the extent to which glass inclusions are faceted can be used to estimate timescales. The size and position of the inclusion within a crystal will influence how much faceting occurs: a larger inclusion will facet more slowly; an inclusion closer to the rim will have less time to facet. As a result, it is critical to properly document the size, shape, and position of glass inclusions to assess faceting timescales. Quartz is an ideal mineral to study glass inclusion faceting, as Si is the only diffusing species of concern, and Si diffusion rates are relatively well-constrained. Faceting time calculations to date (Gualda et al., 2012) relied on optical microscopy to document glass inclusions. Here we use 3D propagation phase-contrast x-ray tomography to image glass inclusions in quartz. This technique enhances inclusion edges such that images can be processed more successfully than with conventional tomography. We have developed a set of image processing tools to isolate inclusions and more accurately obtain information on the size, shape, and position of glass inclusions than with optical microscopy. We are studying glass inclusions from two giant tuffs. The Bishop Tuff is ~1000 km3 of high-silica rhyolite ash fall, ignimbrite, and intracaldera deposits erupted ~760 ka in eastern California (USA). Glass inclusions in early-erupted Bishop Tuff range from non-faceted to faceted, and faceting times determined using both optical microscopy and x

  4. Volumetric 3D Display System with Static Screen

    NASA Technical Reports Server (NTRS)

    Geng, Jason

    2011-01-01

Current display technology has relied on flat, 2D screens that cannot truly convey the third dimension of visual information: depth. In contrast to conventional visualization that is primarily based on 2D flat screens, the volumetric 3D display possesses a true 3D display volume and physically places each 3D voxel of the displayed 3D images at its true 3D (x,y,z) spatial position. Each voxel, analogous to a pixel in a 2D image, emits light from that position to form a real 3D image in the eyes of the viewers. Such true volumetric 3D display technology provides both physiological (accommodation, convergence, binocular disparity, and motion parallax) and psychological (image size, linear perspective, shading, brightness, etc.) depth cues to the human visual system to help in the perception of 3D objects. In a volumetric 3D display, viewers can watch the displayed 3D images from a complete 360° range of views without using any special eyewear. Volumetric 3D display techniques may lead to a quantum leap in information display technology and can dramatically change the ways humans interact with computers, leading to significant improvements in the efficiency of learning and knowledge management processes. Within a block of glass, a large number of tiny voxel dots is created using a recently available machining technique called laser subsurface engraving (LSE). LSE can produce tiny physical crack points (as small as 0.05 mm in diameter) at any (x,y,z) location within the cube of transparent material. The crack dots, when illuminated by a light source, scatter the light and form visible voxels within the 3D volume. The locations of these tiny voxels are strategically determined so that each can be illuminated by a light ray from a high-resolution digital mirror device (DMD) light engine. The distribution of these voxels occupies the full display volume within the static 3D glass screen. This design eliminates any moving screen seen in previous
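A quick feasibility check on such a static glass screen is the voxel budget: how many engraved dots a cube can hold at a given pitch. The cube size and dot pitch below are hypothetical; the abstract quotes only the ~0.05 mm minimum crack-dot diameter.

```python
# Back-of-envelope voxel budget for a static LSE glass screen.
# Cube side length and grid pitch are hypothetical illustration values.

def voxel_capacity(cube_mm: float, pitch_mm: float) -> int:
    """Number of engraved voxels a cube of side `cube_mm` can hold
    on a uniform grid with spacing `pitch_mm`."""
    per_axis = int(cube_mm // pitch_mm)
    return per_axis ** 3

# A 100 mm cube at 0.5 mm pitch holds 200^3 = 8,000,000 voxels.
print(voxel_capacity(100, 0.5))
```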

  5. The effect of activity outside the direct field of view in a 3D-only whole-body positron tomograph

    NASA Astrophysics Data System (ADS)

    Spinks, T. J.; Miller, M. P.; Bailey, D. L.; Bloomfield, P. M.; Livieratos, L.; Jones, T.

    1998-04-01

The ECAT EXACT3D (CTI/Siemens 966) 3D-only PET tomograph has unprecedented sensitivity due to its large BGO (bismuth germanate) detector volume. However, the consequences of a large (23.4 cm) axial field of view (FOV) and the need for a patient port large enough to accommodate body scanning make the device more sensitive to photons arising from activity outside the direct (coincidence) FOV. This leads to relatively higher deadtime and an increased registration of random and scattered (relative to true) coincidences. The purpose of this study is to determine the influence of activity outside the FOV on (i) noise-equivalent counts (NEC) and (ii) the performance of a `model-based' scatter correction algorithm, and to investigate the effect of side shielding additional to that supplied with the tomograph. Annular shielding designed for brain scanning increased the NEC for blood flow ((15)O) measurement (integrated over 120 s) by up to 25%. For tracer studies, the increase is less than 5% over 120 min. Purpose-built additional body shielding, made to conform to the shape of a volunteer, reduced the randoms count rate in a heart blood flow measurement ((15)O) by about 30%. After scatter correction, ROI count ratios for compartments within the 20 cm diameter `Utah' phantom differed by less than 5% from true (sampled) activity concentration ratios. This was so with or without activity outside the FOV and with or without additional side shielding. Count rate performance is thus improved by extra shielding, but more improvement is seen in head than in body scanning. Measurement of heart blood flow using bolus injections of (15)O would benefit from the use of detectors
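The noise-equivalent count rate used above has a standard definition, NEC = T²/(T + S + kR) with k = 2 when randoms are estimated from a delayed window (k = 1 for a noiseless randoms estimate). A minimal sketch, with illustrative rather than measured rates:

```python
# Noise-equivalent count rate, the figure of merit discussed in the study.
# NEC = T^2 / (T + S + k*R); k = 2 for delayed-window randoms estimation.

def nec(trues: float, scatter: float, randoms: float, k: float = 2.0) -> float:
    return trues ** 2 / (trues + scatter + k * randoms)

# Illustrative (not measured) rates in kcps: side shielding that cuts
# randoms by ~30% raises NEC even with trues and scatter unchanged.
print(nec(100, 40, 60))          # without extra shielding
print(nec(100, 40, 60 * 0.7))    # randoms reduced by 30%
```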

  6. Glass for parenteral products: a surface view using the scanning electron microscope.

    PubMed

    Roseman, T J; Brown, J A; Scothorn, W W

    1976-01-01

    The scanning electron microscope was utilized to explore the internal surface of glass ampuls and vials used in parenteral products. The surface topography of USP Type I borosilicate glass containers was viewed after exposure to "sulfur," ammonium bifluoride, and sulfuric acid treatments. The scanning electron micrographs showed startling differences in the appearance of the surface regions. "Sulfur treatment" of ampuls was associated with a pitting of the surface and the presence of sodium sulfate crystals. The sulfur treatment of vials altered the glass surface in a characteristically different manner. The dissimilarity between the surface appearances was attributed to the method of sulfur treatment. Ampuls exposed to sulfuric acid solutions at room temperature did not show the pitting associated with the sulfur treatment. Scanning electron micrographs of ammonium bifluoride-treated ampuls showed a relief effect, suggesting that the glass was affected by the bifluoride solution but that sufficient stripping of the surface layer did not occur to remove the pits associated with the sulfur treatment. Flakes emanating from the glass were identified with the aid of the electron microprobe. Scanning electron micrographs showed that these vitreous flakes resulted from a delamination of a thin layer of the glass surface. It is concluded that the scanning electron microscope, in conjunction with other analytical techniques, is a valuable tool in assessing the quality of glass used for parenteral products. The techniques studied should be of particular importance to the pharmaceutical industry where efforts are being made to reduce the levels of particulate matter in parenteral dosage forms.

  7. TOWER, 1750'S STAIRWAY; ANGLE VIEW LOOKING NORTHEAST. Glass plate stereopair ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    TOWER, 1750'S STAIRWAY; ANGLE VIEW LOOKING NORTHEAST. Glass plate stereopair number PA-1430-139 LC-HABS-GS05-TR-D-3 157.4895. Right (not printed) - Independence Hall Complex, Independence Hall, 500 Chestnut Street, Philadelphia, Philadelphia County, PA

  8. TOWER, 1750'S STAIRWAY; ANGLE VIEW LOOKING NORTHEAST. Glass plate stereopair ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    TOWER, 1750'S STAIRWAY; ANGLE VIEW LOOKING NORTHEAST. Glass plate stereopair number PA-1430-139 LC-HABS-GS05-TR-D-3 157.4895. Left (printed) - Independence Hall Complex, Independence Hall, 500 Chestnut Street, Philadelphia, Philadelphia County, PA

  9. Integrated 3D view of postmating responses by the Drosophila melanogaster female reproductive tract, obtained by micro-computed tomography scanning.

    PubMed

    Mattei, Alexandra L; Riccio, Mark L; Avila, Frank W; Wolfner, Mariana F

    2015-07-01

    Physiological changes in females during and after mating are triggered by seminal fluid components in conjunction with female-derived molecules. In insects, these changes include increased egg production, storage of sperm, and changes in muscle contraction within the reproductive tract (RT). Such postmating changes have been studied in dissected RT tissues, but understanding their coordination in vivo requires a holistic view of the tissues and their interrelationships. Here, we used high-resolution, multiscale micro-computed tomography (CT) scans to visualize and measure postmating changes in situ in the Drosophila female RT before, during, and after mating. These studies reveal previously unidentified dynamic changes in the conformation of the female RT that occur after mating. Our results also reveal how the reproductive organs temporally shift in concert within the confines of the abdomen. For example, we observed chiral loops in the uterus and in the upper common oviduct that relax and constrict throughout sperm storage and egg movement. We found that specific seminal fluid proteins or female secretions mediate some of the postmating changes in morphology. The morphological movements, in turn, can cause further changes due to the connections among organs. In addition, we observed apparent copulatory damage to the female intima, suggesting a mechanism for entry of seminal proteins, or other exogenous components, into the female's circulatory system. The 3D reconstructions provided by high-resolution micro-CT scans reveal how male and female molecules and anatomy interface to carry out and coordinate mating-dependent changes in the female's reproductive physiology.

  10. Integrated 3D view of postmating responses by the Drosophila melanogaster female reproductive tract, obtained by micro-computed tomography scanning

    PubMed Central

    Mattei, Alexandra L.; Riccio, Mark L.; Avila, Frank W.; Wolfner, Mariana F.

    2015-01-01

    Physiological changes in females during and after mating are triggered by seminal fluid components in conjunction with female-derived molecules. In insects, these changes include increased egg production, storage of sperm, and changes in muscle contraction within the reproductive tract (RT). Such postmating changes have been studied in dissected RT tissues, but understanding their coordination in vivo requires a holistic view of the tissues and their interrelationships. Here, we used high-resolution, multiscale micro-computed tomography (CT) scans to visualize and measure postmating changes in situ in the Drosophila female RT before, during, and after mating. These studies reveal previously unidentified dynamic changes in the conformation of the female RT that occur after mating. Our results also reveal how the reproductive organs temporally shift in concert within the confines of the abdomen. For example, we observed chiral loops in the uterus and in the upper common oviduct that relax and constrict throughout sperm storage and egg movement. We found that specific seminal fluid proteins or female secretions mediate some of the postmating changes in morphology. The morphological movements, in turn, can cause further changes due to the connections among organs. In addition, we observed apparent copulatory damage to the female intima, suggesting a mechanism for entry of seminal proteins, or other exogenous components, into the female’s circulatory system. The 3D reconstructions provided by high-resolution micro-CT scans reveal how male and female molecules and anatomy interface to carry out and coordinate mating-dependent changes in the female’s reproductive physiology. PMID:26041806

  11. Remote 3D Medical Consultation

    NASA Astrophysics Data System (ADS)

    Welch, Greg; Sonnenwald, Diane H.; Fuchs, Henry; Cairns, Bruce; Mayer-Patel, Ketan; Yang, Ruigang; State, Andrei; Towles, Herman; Ilie, Adrian; Krishnan, Srinivas; Söderholm, Hanna M.

    Two-dimensional (2D) video-based telemedical consultation has been explored widely in the past 15-20 years. Two issues that seem to arise in most relevant case studies are the difficulty associated with obtaining the desired 2D camera views, and poor depth perception. To address these problems we are exploring the use of a small array of cameras to synthesize a spatially continuous range of dynamic three-dimensional (3D) views of a remote environment and events. The 3D views can be sent across wired or wireless networks to remote viewers with fixed displays or mobile devices such as a personal digital assistant (PDA). The viewpoints could be specified manually or automatically via user head or PDA tracking, giving the remote viewer virtual head- or hand-slaved (PDA-based) remote cameras for mono or stereo viewing. We call this idea remote 3D medical consultation (3DMC). In this article we motivate and explain the vision for 3D medical consultation; we describe the relevant computer vision/graphics, display, and networking research; we present a proof-of-concept prototype system; and we present some early experimental results supporting the general hypothesis that 3D remote medical consultation could offer benefits over conventional 2D televideo.
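The core idea above is synthesizing a spatially continuous range of views from a small camera array. Real 3DMC-style view synthesis recovers scene geometry with computer vision; the sketch below only illustrates the "continuous viewpoint parameter" notion with a hypothetical two-camera linear blend.

```python
import numpy as np

# Toy illustration of a continuous viewpoint parameter between two
# cameras. This is NOT the paper's geometry-based view synthesis,
# just a linear blend under that simplifying assumption.

def blend_views(left: np.ndarray, right: np.ndarray, t: float) -> np.ndarray:
    """t = 0 returns the left camera image, t = 1 the right."""
    return (1.0 - t) * left + t * right

left = np.zeros((4, 4, 3))
right = np.ones((4, 4, 3))
mid = blend_views(left, right, 0.5)   # a viewpoint halfway between cameras
print(mid[0, 0])
```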

  12. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into
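The solution ("flow-solver") file described above stores density, x-, y- and z-momentum, and stagnation energy per grid point, and PLOT3D derives scalar functions such as pressure from them. A minimal sketch of one such derivation for an ideal gas (γ = 1.4); the array shapes and values here are hypothetical, and this is not PLOT3D's own code.

```python
import numpy as np

# Deriving static pressure from the conserved variables stored in a
# PLOT3D-style solution file, assuming an ideal gas with gamma = 1.4.

def pressure(rho, mx, my, mz, e, gamma=1.4):
    """p = (gamma - 1) * (e - 0.5 * rho * |u|^2), with momenta m = rho*u."""
    ke = 0.5 * (mx**2 + my**2 + mz**2) / rho   # kinetic energy density
    return (gamma - 1.0) * (e - ke)

rho = np.full((2, 2, 2), 1.0)                  # hypothetical uniform field
mx = my = mz = np.zeros((2, 2, 2))
e = np.full((2, 2, 2), 2.5)                    # stagnation energy
print(pressure(rho, mx, my, mz, e))
```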

  13. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  14. The effect of activity outside the field of view on image quality for a 3D LSO-based whole body PET/CT scanner.

    PubMed

    Matheoud, R; Secco, C; Della Monica, P; Leva, L; Sacchetti, G; Inglese, E; Brambilla, M

    2009-10-01

The purpose of this study was to quantify the influence of outside field of view (FOV) activity concentration (A(c,out)) on the noise equivalent count rate (NECR), scatter fraction (SF) and image quality of a 3D LSO whole-body PET/CT scanner. The contrast-to-noise ratio (CNR) was the figure of merit used to characterize the image quality of PET scans. A modified International Electrotechnical Commission (IEC) phantom was used to obtain SF and counting rates similar to those found in average patients. A scatter phantom was positioned at the end of the modified IEC phantom to simulate activity that extends beyond the scanner. The modified IEC phantom was filled with (18)F (11 kBq mL(-1)) and the spherical targets, with internal diameter (ID) ranging from 10 to 37 mm, had a target-to-background ratio of 10. PET images were acquired with background activity concentrations in the FOV (A(c,bkg)) of about 11, 9.2, 6.6, 5.2 and 3.5 kBq mL(-1). The emission scan duration (ESD) was set to 1, 2, 3 and 4 min. The tube inside the scatter phantom was filled with activities providing an A(c,out) in the whole scatter phantom of zero, half, unity, twofold and fourfold that of the modified IEC phantom. Plots of CNR versus the various parameters are provided. Multiple linear regression was employed to study the effects of A(c,out) on CNR, adjusted for the presence of variables (sphere ID, A(c,bkg) and ESD) related to CNR. The presence of outside FOV activity at the same concentration as that inside the FOV reduces the peak NECR by 30%. The increase in SF is marginal (1.2%). CNR diminishes significantly with increasing outside FOV activity in the range explored. ESD and A(c,out) have a similar weight in accounting for CNR variance. Thus, an experimental law that adjusts the scan duration to the outside FOV activity can be devised. Recovery of the CNR loss due to elevated A(c,out) seems feasible by modulating the ESD in individual bed positions according to A(c,out).
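The suggested duration adjustment can be sketched under the common Poisson-statistics assumption CNR ∝ √ESD: if outside-FOV activity multiplies CNR by a loss factor f < 1 at fixed duration, the duration must grow by 1/f² to recover the original CNR. The loss factor below is hypothetical, not a value fitted by the authors.

```python
# Sketch of adjusting emission scan duration (ESD) to outside-FOV
# activity, assuming CNR ~ sqrt(ESD). The 20% loss factor is a
# hypothetical example, not a fitted result from the paper.

def compensated_esd(base_esd_min: float, cnr_loss_factor: float) -> float:
    """ESD needed to restore CNR after it is scaled by cnr_loss_factor."""
    return base_esd_min / cnr_loss_factor ** 2

# A 20% CNR loss (f = 0.8) needs ~56% more time per bed position.
print(compensated_esd(2.0, 0.8))
```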

  15. A cross-platform solution for light field based 3D telemedicine.

    PubMed

    Wang, Gengkun; Xiang, Wei; Pickering, Mark

    2016-03-01

    Current telehealth services are dominated by conventional 2D video conferencing systems, which are limited in their capabilities in providing a satisfactory communication experience due to the lack of realism. The "immersiveness" provided by 3D technologies has the potential to promote telehealth services to a wider range of applications. However, conventional stereoscopic 3D technologies are deficient in many aspects, including low resolution and the requirement for complicated multi-camera setup and calibration, and special glasses. The advent of light field (LF) photography enables us to record light rays in a single shot and provide glasses-free 3D display with continuous motion parallax in a wide viewing zone, which is ideally suited for 3D telehealth applications. As far as our literature review suggests, there have been no reports of 3D telemedicine systems using LF technology. In this paper, we propose a cross-platform solution for a LF-based 3D telemedicine system. Firstly, a novel system architecture based on LF technology is established, which is able to capture the LF of a patient, and provide an immersive 3D display at the doctor site. For 3D modeling, we further propose an algorithm which is able to convert the captured LF to a 3D model with a high level of detail. For the software implementation on different platforms (i.e., desktop, web-based and mobile phone platforms), a cross-platform solution is proposed. Demo applications have been developed for 2D/3D video conferencing, 3D model display and edit, blood pressure and heart rate monitoring, and patient data viewing functions. The demo software can be extended to multi-discipline telehealth applications, such as tele-dentistry, tele-wound and tele-psychiatry. The proposed 3D telemedicine solution has the potential to revolutionize next-generation telemedicine technologies by providing a high quality immersive tele-consultation experience. PMID:26689324
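A light field is conventionally parameterized as a 4D function over angular position (u, v) and spatial position (s, t); extracting one sub-aperture (viewpoint) image from it is the basic operation behind the glasses-free multi-view display described above. The array layout below is a hypothetical in-memory representation, not the paper's capture format.

```python
import numpy as np

# Pulling one sub-aperture view out of a 4D light field stored as a
# hypothetical array indexed (u, v, s, t): angular sample (u, v),
# spatial pixel (s, t).

def subaperture_view(lf: np.ndarray, u: int, v: int) -> np.ndarray:
    """Return the 2D image seen from angular sample (u, v)."""
    return lf[u, v]

lf = np.random.rand(5, 5, 64, 48)    # 5x5 viewpoints, 64x48 pixels each
view = subaperture_view(lf, 2, 2)    # central viewpoint
print(view.shape)
```

Sweeping (u, v) across the grid is what produces the continuous motion parallax in a wide viewing zone.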

  16. A cross-platform solution for light field based 3D telemedicine.

    PubMed

    Wang, Gengkun; Xiang, Wei; Pickering, Mark

    2016-03-01

    Current telehealth services are dominated by conventional 2D video conferencing systems, which are limited in their capabilities in providing a satisfactory communication experience due to the lack of realism. The "immersiveness" provided by 3D technologies has the potential to promote telehealth services to a wider range of applications. However, conventional stereoscopic 3D technologies are deficient in many aspects, including low resolution and the requirement for complicated multi-camera setup and calibration, and special glasses. The advent of light field (LF) photography enables us to record light rays in a single shot and provide glasses-free 3D display with continuous motion parallax in a wide viewing zone, which is ideally suited for 3D telehealth applications. As far as our literature review suggests, there have been no reports of 3D telemedicine systems using LF technology. In this paper, we propose a cross-platform solution for a LF-based 3D telemedicine system. Firstly, a novel system architecture based on LF technology is established, which is able to capture the LF of a patient, and provide an immersive 3D display at the doctor site. For 3D modeling, we further propose an algorithm which is able to convert the captured LF to a 3D model with a high level of detail. For the software implementation on different platforms (i.e., desktop, web-based and mobile phone platforms), a cross-platform solution is proposed. Demo applications have been developed for 2D/3D video conferencing, 3D model display and edit, blood pressure and heart rate monitoring, and patient data viewing functions. The demo software can be extended to multi-discipline telehealth applications, such as tele-dentistry, tele-wound and tele-psychiatry. The proposed 3D telemedicine solution has the potential to revolutionize next-generation telemedicine technologies by providing a high quality immersive tele-consultation experience.

  17. Euro3D Science Conference

    NASA Astrophysics Data System (ADS)

    Walsh, J. R.

    2004-02-01

The Euro3D RTN is an EU funded Research Training Network to foster the exploitation of 3D spectroscopy in Europe. 3D spectroscopy is a general term for spectroscopy of an area of the sky and derives its name from its two spatial + one spectral dimensions. There are an increasing number of instruments which use integral field devices to achieve spectroscopy of an area of the sky, either using lens arrays, optical fibres or image slicers, to pack spectra of multiple pixels on the sky (``spaxels'') onto a 2D detector. On account of the large volume of data and the special methods required to reduce and analyse 3D data, there are only a few centres of expertise and these are mostly involved with instrument developments. There is a perceived lack of expertise in 3D spectroscopy spread through the astronomical community and its use in the armoury of the observational astronomer is viewed as being highly specialised. For precisely this reason the Euro3D RTN was proposed to train young researchers in this area and develop user tools to widen the experience with this particular type of data in Europe. The Euro3D RTN is coordinated by Martin M. Roth (Astrophysikalisches Institut Potsdam) and has been running since July 2002. The first Euro3D science conference was held in Cambridge, UK from 22 to 23 May 2003. The main emphasis of the conference was, in keeping with the RTN, to expose the work of the young post-docs who are funded by the RTN. In addition the team members from the eleven European institutes involved in Euro3D also presented instrumental and observational developments. The conference was organized by Andy Bunker and held at the Institute of Astronomy. There were over thirty participants and 26 talks covered the whole range of application of 3D techniques. The science ranged from Galactic planetary nebulae and globular clusters to kinematics of nearby galaxies out to objects at high redshift.
Several talks were devoted to reporting recent observations with newly

  18. Forward ramp in 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Mars Pathfinder's forward rover ramp can be seen successfully unfurled in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. This ramp was not used for the deployment of the microrover Sojourner, which occurred at the end of Sol 2. When this image was taken, Sojourner was still latched to one of the lander's petals, waiting for the command sequence that would execute its descent off of the lander's petal.

    The image helped Pathfinder scientists determine whether to deploy the rover using the forward or backward ramps and the nature of the first rover traverse. The metallic object at the lower left of the image is the lander's low-gain antenna. The square at the end of the ramp is one of the spacecraft's magnetic targets. Dust that accumulates on the magnetic targets will later be examined by Sojourner's Alpha Proton X-Ray Spectrometer instrument for chemical analysis. At right, a lander petal is visible.

    The IMP is a stereo imaging system with color capability provided by 24 selectable filters -- twelve filters per 'eye.' It stands 1.8 meters above the Martian surface, and has a resolution of two millimeters at a range of two meters.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.


  19. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

    The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is realising this for the moment in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We have currently a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrate 3D objects in physical or virtual museums. Nevertheless, Europeana has a tremendous potential as a multi-facetted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video, sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  20. Lunar and Planetary Science XXXV: Viewing the Lunar Interior Through Titanium-Colored Glasses

    NASA Technical Reports Server (NTRS)

    2004-01-01

The session "Viewing the Lunar Interior Through Titanium-Colored Glasses" included the following reports: Consequences of High Crystallinity for the Evolution of the Lunar Magma Ocean: Trapped Plagioclase; Low Abundances of Highly Siderophile Elements in the Lunar Mantle: Evidence for Prolonged Late Accretion; Fast Anorthite Dissolution Rates in Lunar Picritic Melts: Petrologic Implications; Searching the Moon for Aluminous Mare Basalts Using Compositional Remote-Sensing Constraints II: Detailed Analysis of ROIs; Origin of Lunar High Titanium Ultramafic Glasses: A Hybridized Source?; Ilmenite Solubility in Lunar Basalts as a Function of Temperature and Pressure: Implications for Petrogenesis; Garnet in the Lunar Mantle: Further Evidence from Volcanic Glasses; Preliminary High Pressure Phase Relations of Apollo 15 Green C Glass: Assessment of the Role of Garnet; Oxygen Fugacity of Mare Basalts and the Lunar Mantle: Application of a New Microscale Oxybarometer Based on the Valence State of Vanadium; A Model for the Origin of the Dark Ring at Orientale Basin; Petrology and Geochemistry of LAP 02 205: A New Low-Ti Mare-Basalt Meteorite; Thorium and Samarium in Lunar Pyroclastic Glasses: Insights into the Composition of the Lunar Mantle and Basaltic Magmatism on the Moon; and Eu2+ and REE3+ Diffusion in Enstatite, Diopside, Anorthite, and a Silicate Melt: A Database for Understanding Kinetic Fractionation of REE in the Lunar Mantle and Crust.

  1. Stereo 3-D Vision in Teaching Physics

    ERIC Educational Resources Information Center

    Zabunov, Svetoslav

    2012-01-01

    Stereo 3-D vision is a technology used to present images on a flat surface (screen, paper, etc.) and at the same time to create the notion of three-dimensional spatial perception of the viewed scene. A great number of physical processes are much better understood when viewed in stereo 3-D vision compared to standard flat 2-D presentation. The…
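The simplest way to realize stereo 3-D on a flat surface, and a common choice in teaching settings, is a red-cyan anaglyph: the left view is routed to the red channel and the right view to green and blue, then viewed through matching colored glasses. A minimal sketch with synthetic input images (the abstract does not specify which stereo technique the paper uses):

```python
import numpy as np

# Red-cyan anaglyph: one of several ways to present stereo 3-D on a
# flat screen or printout. Inputs are synthetic grayscale views.

def anaglyph(left_gray: np.ndarray, right_gray: np.ndarray) -> np.ndarray:
    """Combine two grayscale views into one RGB anaglyph image."""
    out = np.zeros(left_gray.shape + (3,))
    out[..., 0] = left_gray            # red   <- left eye
    out[..., 1] = right_gray           # green <- right eye
    out[..., 2] = right_gray           # blue  <- right eye
    return out

left = np.zeros((8, 8))
right = np.ones((8, 8))
img = anaglyph(left, right)
print(img[0, 0])   # pure cyan where only the right view has signal
```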

  2. Analyzing the 3D Structure of Human Carbonic Anhydrase II and Its Mutants Using Deep View and the Protein Data Bank

    ERIC Educational Resources Information Center

    Ship, Noam J.; Zamble, Deborah B.

    2005-01-01

The self-directed study of a 3D image of a biomolecule stresses the complex nature of the intra- and intermolecular interactions that come together to define its structure. The exercise comprises a series of in vitro experiments with wild-type and mutant forms of human carbonic anhydrase II (hCAII) that examine the structure-function relationship…

  3. MTF characterization in 2D and 3D for a high resolution, large field of view flat panel imager for cone beam CT

    NASA Astrophysics Data System (ADS)

    Shah, Jainil; Mann, Steve D.; Tornai, Martin P.; Richmond, Michelle; Zentai, George

    2014-03-01

The 2D and 3D modulation transfer functions (MTFs) of a custom made, large 40x30cm2 area, 600-micron CsI-TFT based flat panel imager having 127-micron pixellation, along with the micro-fiber scintillator structure, were characterized in detail using various techniques. The larger area detector yields a reconstructed FOV of 25cm diameter with an 80cm SID in CT mode. The MTFs were determined with 1x1 (intrinsic) binning. The 2D MTFs were determined using a 50.8 micron tungsten wire and a solid lead edge, and the 3D MTF was measured using a custom made phantom consisting of three nearly orthogonal 50.8 micron tungsten wires suspended in an acrylic cubic frame. The 2D projection data were reconstructed using an iterative OSC algorithm with 16 subsets and 5 iterations. As additional verification of the resolution, along with scatter, the Catphan® phantom was also imaged and reconstructed with identical parameters. The measured 2D MTF was ~4% using the wire technique and ~1% using the edge technique at the 3.94 lp/mm Nyquist cut-off frequency. The average 3D MTF measured along the wires was ~8% at the Nyquist. At 50% MTF, the resolutions were 1.2 and 2.1 lp/mm in 2D and 3D, respectively. In the Catphan® phantom, the 1.7 lp/mm bars were easily observed. Lastly, the 3D MTF measured on the three wires has an observed 5.9% RMSD, indicating that the resolution of the imaging system is uniform and spatially independent. This high performance detector is integrated into a dedicated breast SPECT-CT imaging system.
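The edge technique mentioned above follows a standard recipe: differentiate the edge spread function (ESF) to get the line spread function (LSF), then take the normalized Fourier magnitude. A minimal sketch with a synthetic smooth edge standing in for real detector data:

```python
import numpy as np

# Edge-technique MTF: ESF -> LSF (derivative) -> normalized |FFT|.
# The logistic edge below is a synthetic stand-in for measured data.

def mtf_from_esf(esf: np.ndarray) -> np.ndarray:
    lsf = np.gradient(esf)               # line spread function
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]                  # normalize to 1 at zero frequency

x = np.linspace(-5, 5, 256)
esf = 1.0 / (1.0 + np.exp(-x / 0.3))     # smooth synthetic edge profile
mtf = mtf_from_esf(esf)
print(mtf[0], mtf[-1])                   # 1.0 at DC, falling with frequency
```

In practice an oversampled (slanted-edge) ESF is used so the MTF is not limited by the pixel sampling, but the pipeline is the same.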

  4. 3D visualization of polymer nanostructure

    SciTech Connect

    Werner, James H

    2009-01-01

    Soft materials and structured polymers are extremely useful nanotechnology building blocks. Block copolymers, in particular, have served as 2D masks for nanolithography and 3D scaffolds for photonic crystals, nanoparticle fabrication, and solar cells. For many of these applications, the precise three-dimensional structure and the number and type of defects in the polymer are important for ultimate function. However, directly visualizing the 3D structure of a soft material from the nanometer to millimeter length scales is a significant technical challenge. Here, we propose to develop the instrumentation needed for direct 3D structure determination at near-nanometer resolution throughout a nearly millimeter-cubed volume of a soft, potentially heterogeneous, material. This new capability will be a valuable research tool for LANL missions in chemistry, materials science, and nanoscience. Our approach to soft materials visualization builds upon exciting developments in super-resolution optical microscopy that have occurred over the past two years. To date, these new, truly revolutionary imaging methods have been developed and almost exclusively used for biological applications. However, in addition to biological cells, these super-resolution imaging techniques hold extreme promise for direct visualization of many important nanostructured polymers and other heterogeneous chemical systems. Los Alamos has a unique opportunity to lead the development of these super-resolution imaging methods for problems of chemical rather than biological significance.
While these optical methods are limited to systems transparent to visible wavelengths, we stress that many important functional chemicals such as polymers, glasses, sol-gels, aerogels, or colloidal assemblies meet this requirement, with specific examples including materials designed for optical communication, manipulation, or light-harvesting. Our research goals are: (1) Develop the instrumentation necessary for imaging materials

  5. Speaking Volumes About 3-D

    NASA Technical Reports Server (NTRS)

    2002-01-01

    In 1999, Genex submitted a proposal to Stennis Space Center for a volumetric 3-D display technique that would provide multiple users with a 360-degree perspective to simultaneously view and analyze 3-D data. The futuristic capabilities of the VolumeViewer(R) have offered tremendous benefits to commercial users in the fields of medicine and surgery, air traffic control, pilot training and education, computer-aided design/computer-aided manufacturing, and military/battlefield management. The technology has also helped NASA to better analyze and assess the various data collected by its satellite and spacecraft sensors. Genex capitalized on its success with Stennis by introducing two separate products to the commercial market that incorporate key elements of the 3-D display technology designed under an SBIR contract. The company's Rainbow 3D(R) imaging camera is a novel, three-dimensional surface profile measurement system that can obtain a full-frame 3-D image in less than 1 second. The second product is the 360-degree OmniEye(R) video system. Ideal for intrusion detection, surveillance, and situation management, this unique camera system offers a continuous, panoramic view of a scene in real time.

  6. 3d-3d correspondence revisited

    NASA Astrophysics Data System (ADS)

    Chung, Hee-Joong; Dimofte, Tudor; Gukov, Sergei; Sułkowski, Piotr

    2016-04-01

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d {N}=2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. We also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  7. 3d-3d correspondence revisited

    DOE PAGES

    Chung, Hee -Joong; Dimofte, Tudor; Gukov, Sergei; Sułkowski, Piotr

    2016-04-21

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d N = 2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. We also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  8. Multi-photon lithography of 3D micro-structures in As2S3 and Ge5(As2Se3)95 chalcogenide glasses

    NASA Astrophysics Data System (ADS)

    Schwarz, Casey M.; Labh, Shreya; Barker, Jayk E.; Sapia, Ryan J.; Richardson, Gerald D.; Rivero-Baleine, Clara; Gleason, Benn; Richardson, Kathleen A.; Pogrebnyakov, Alexej; Mayer, Theresa S.; Kuebler, Stephen M.

    2016-03-01

    This work reports a detailed study of the processing and photo-patterning of two chalcogenide glasses (ChGs) - arsenic trisulfide (As2S3) and a new composition of germanium-doped arsenic triselenide Ge5(As2Se3)95 - as well as their use for creating functional optical structures. ChGs are materials with excellent infrared (IR) transparency, large index of refraction, low coefficient of thermal expansion, and low change in refractive index with temperature. These features make them well suited for a wide range of commercial and industrial applications including detectors, sensors, photonics, and acousto-optics. Photo-patternable films of As2S3 and Ge5(As2Se3)95 were prepared by thermally depositing the ChGs onto silicon substrates. For some As2S3 samples, an anti-reflection layer of arsenic triselenide (As2Se3) was first added to mitigate the effects of standing-wave interference during laser patterning. The ChG films were photo-patterned by multi-photon lithography (MPL) and then chemically etched to remove the unexposed material, leaving free-standing structures that were negative-tone replicas of the photo-pattern in networked-solid ChG. The chemical composition and refractive index of the unexposed and photo-exposed materials were examined using Raman spectroscopy and near-IR ellipsometry. Nano-structured arrays were photo-patterned and the resulting nano-structure morphology and chemical composition were characterized and correlated with the film compositions, conditions of thermal deposition, patterned irradiation, and etch processing. Photo-patterned Ge5(As2Se3)95 was found to be more resistant than As2S3 toward degradation by formation of surface oxides.

  9. Accurate registration of random radiographic projections based on three spherical references for the purpose of few-view 3D reconstruction

    SciTech Connect

    Schulze, Ralf; Heil, Ulrich; Weinheimer, Oliver; Gross, Daniel; Bruellmann, Dan; Thomas, Eric; Schwanecke, Ulrich; Schoemer, Elmar

    2008-02-15

    Precise registration of radiographic projection images acquired in almost arbitrary geometries for the purpose of three-dimensional (3D) reconstruction is beset with difficulties. We modify and enhance a registration method [R. Schulze, D. D. Bruellmann, F. Roeder, and B. d'Hoedt, Med. Phys. 31, 2849-2854 (2004)] based on coupling a minimum of three reference spheres in arbitrary positions to a rigid object under study for precise a posteriori pose estimation. Two consecutive optimization procedures (a, initial guess; b, iterative coordinate refinement) are applied to completely exploit the references' shadow information for precise registration of the projections. The modification has been extensive: only the idea of using the sphere shadows to locate each sphere in three dimensions from each projection was retained, whereas the approach to extract the shadow information has been changed completely and extended. The registration information is used for subsequent algebraic reconstruction of the 3D information inherent in the projections. We present a detailed mathematical theory of the registration process as well as simulated data investigating its performance in the presence of error. Simulation of the initial guess revealed a mean relative error in the critical depth coordinate ranging between 2.1% and 4.4%, and an evident error reduction by the subsequent iterative coordinate refinement. To prove the applicability of the method to real-world data, algebraic 3D reconstructions from few (≤9) projection radiographs of a human skull, a human mandible, and a teeth-containing mandible segment are presented. The method facilitates extraction of 3D information from only a few projections obtained from off-the-shelf radiographic projection units without the need for costly hardware. Technical requirements as well as radiation dose are low.
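Algebraic reconstruction of the kind referred to above solves the projection equations iteratively, row by row. A minimal, generic Kaczmarz/ART sketch on a hypothetical 2x2 linear system (a stand-in, not the paper's algorithm or data) illustrates the update:

```python
import numpy as np

def kaczmarz(A, b, sweeps=50):
    """Kaczmarz / ART iteration for a consistent linear system Ax = b:
    repeatedly project the current estimate onto each row's hyperplane."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for ai, bi in zip(A, b):
            x += (bi - ai @ x) / (ai @ ai) * ai
    return x

# Toy "projection" system with a known solution (hypothetical numbers).
A = np.array([[1.0, 0.0], [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
x_rec = kaczmarz(A, A @ x_true)
print(np.round(x_rec, 3))  # → [2. 3.]
```

In a real few-view setup, each row of `A` would encode one ray through the volume and `b` the corresponding measured attenuation.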

  10. Real-time auto-stereoscopic visualization of 3D medical images

    NASA Astrophysics Data System (ADS)

    Portoni, Luisa; Patak, Alexandre; Noirard, Pierre; Grossetie, Jean-Claude; van Berkel, Cees

    2000-04-01

    The work described here regards multi-viewer auto-stereoscopic visualization of 3D models of anatomical structures and organs of the human body. High-quality 3D models of more than 1600 anatomical structures have been reconstructed using the Visualization Toolkit, a freely available C++ class library for 3D graphics and visualization. The 2D images used for 3D reconstruction come from the Visible Human Data Set. Auto-stereoscopic 3D image visualization is obtained using a prototype monitor developed at Philips Research Labs, UK. This special multiview 3D-LCD screen has been connected directly to an SGI workstation, where 3D reconstruction and medical imaging applications are executed. Dedicated software has been developed to implement the multiview capability. A number of static or animated views of the same object can be seen simultaneously on the 3D-LCD screen by several observers, who have a real 3D perception of the visualized scene without the use of extra media such as dedicated glasses or head-mounted displays. The developed software applications allow real-time interaction with the visualized 3D models; didactic animations and movies have been realized as well.

  11. Filling gaps in cultural heritage documentation by 3D photography

    NASA Astrophysics Data System (ADS)

    Schuhr, W.; Lee, J. D.

    2015-08-01

    geometry" and to multistage concepts of 3D photographs in Cultural Heritage has just started. Furthermore, a revised list of the 3D visualization principles, claiming completeness, has been carried out. Among other points of an outlook: *It is highly recommended to list every historical and current stereo view with relevance to Cultural Heritage in a global Monument Information System (MIS), like in Google Earth. *3D photographs seem very well suited to complete, or at least partly replace, manual archaeological sketches. In this regard the still-underestimated 3D effect will be demonstrated, which even allows, e.g., the spatial perception of extremely small scratches. *Consistent engagement with 3D technology even seems to indicate that we are currently experiencing the beginning of a new age of "real 3D PC screens", which could at least complement or even partly replace conventional 2D screens. Here the spatial visualization is achieved without glasses in an all-around vitreous body. In this respect, the nowadays widespread lasered crystals showing monuments can be identified as "early bird" 3D products which, due to low resolution and contrast and a lack of color, currently recall the status of the invention of photography by Niepce (1827), but seem to promise a great future in 3D Cultural Heritage documentation as well. *Last but not least, 3D printers seem more and more to be conquering the IT market, with evident international competition.

  12. 3D and Education

    NASA Astrophysics Data System (ADS)

    Meulien Ohlmann, Odile

    2013-02-01

    Today the industry offers a chain of 3D products. Learning to "read" and to "create in 3D" is becoming an educational issue of primary importance. Drawing on 25 years of professional experience in France, the United States, and Germany, Odile Meulien has set up a personal method of initiation to 3D creation that entails the spatial/temporal experience of the holographic visual. She will present some of the different tools and techniques used for this learning, their advantages and disadvantages, programs and issues of educational policy, and constraints and expectations related to the development of new techniques for 3D imaging. Although the creation of display holograms is much reduced compared to that of the 1990s, the holographic concept is spreading into all scientific, social, and artistic activities of our time. She will also raise many questions: What does 3D mean? Is it communication? Is it perception? How do seeing and not seeing interfere? What else has to be taken into consideration to communicate in 3D? How do we handle the non-visible relations of moving objects with subjects? Does this transform our model of exchange with others? What kind of interaction does this have with our everyday life? Then come more practical questions: How does one learn to create 3D visualizations, to learn 3D grammar, 3D language, 3D thinking? What for? At what level? In which subject? For whom?

  13. Facile aqueous synthesis and electromagnetic properties of novel 3D urchin-like glass/Ni-Ni(3)P/Co(2)P(2)O(7) core/shell/shell composite hollow structures.

    PubMed

    An, Zhenguo; Zhang, Jingjie; Pan, Shunlong

    2010-04-14

    Novel 3D urchin-like glass/Ni-Ni(3)P/Co(2)P(2)O(7) core/shell/shell composite hollow structures are fabricated for the first time by controlled stepwise assembly of granular Ni-Ni(3)P alloy and ribbon-like Co(2)P(2)O(7) nanocrystals on hollow glass spheres in aqueous solutions at mild conditions. It is found that the shell structure and the overall morphology of the products can be tailored by properly tuning the annealing temperature. The as-obtained composite core/shell/shell products possess low density (ca. 1.18 g cm(-3)) and shape-dependent magnetic and microwave absorbing properties, and thus may have some promising applications in the fields of low-density magnetic materials, microwave absorbers, etc. Based on a series of contrast experiments, the probable formation mechanism of the core/shell/shell hierarchical structures is proposed. This work provides an additional strategy to prepare core/shell composite spheres with tailored shell morphology and electromagnetic properties. PMID:20379530

  14. 3D visualization techniques for the STEREO-mission

    NASA Astrophysics Data System (ADS)

    Wiegelmann, T.; Podlipnik, B.; Inhester, B.; Feng, L.; Ruan, P.

    The forthcoming STEREO mission will observe the Sun from two different viewpoints. We expect about 2 GB of data per day, which calls for suitable data presentation techniques. A key feature of STEREO is that it will provide for the first time a 3D view of the Sun and the solar corona. In our normal environment we see objects three-dimensionally because the light from real 3D objects reaches our left and right eyes along slightly different paths. As a consequence we see slightly different images with our two eyes, which gives us information about the depth of objects and a corresponding 3D impression. Techniques for the 3D visualization of scientific and other data on paper, TV, computer screens, in the cinema, etc. are well known, e.g., the two-colour anaglyph technique, shutter glasses, polarization filters, and head-mounted displays. We discuss the advantages and disadvantages of these techniques and how they can be applied to STEREO data. The 3D visualization techniques are not limited to visual images but can also be used to show the reconstructed coronal magnetic field and the energy and helicity distribution. In the advent of STEREO we test the method with data from SOHO, which provides us different viewpoints through solar rotation. This restricts the analysis to structures which remain stationary for several days; real STEREO data will not be affected by these limitations, however.

  15. Stereo 3-D Vision in Teaching Physics

    NASA Astrophysics Data System (ADS)

    Zabunov, Svetoslav

    2012-03-01

    Stereo 3-D vision is a technology used to present images on a flat surface (screen, paper, etc.) while at the same time creating the impression of three-dimensional spatial perception of the viewed scene. A great number of physical processes are much better understood when viewed in stereo 3-D compared to a standard flat 2-D presentation. The current paper describes modern stereo 3-D technologies that are applicable to various tasks in teaching physics in schools, colleges, and universities. Examples of stereo 3-D simulations developed by the author can be observed online.

  16. 3D Printable Graphene Composite.

    PubMed

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-07-08

    In human history, both the Iron Age and the Silicon Age thrived after a mature mass-processing technology was developed. Graphene is the most recent superior material, one which could potentially initiate another new material age. However, while being exploited to its full extent, conventional processing methods fail to provide a link to today's personalization tide. New technology should be ushered in. Three-dimensional (3D) printing fills the missing linkage between graphene materials and the digital mainstream. Their alliance could generate additional momentum to push the graphene revolution into a new phase. Here we demonstrate for the first time that a graphene composite, with a graphene loading up to 5.6 wt%, can be 3D printed into computer-designed models. The composite's linear thermal coefficient is below 75 ppm·°C(-1) from room temperature to its glass transition temperature (Tg), which is crucial for building up only minute thermal stress during the printing process.
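As a rough worked example of what a linear thermal coefficient below 75 ppm·°C(-1) implies for printing stress, consider the dimensional change of a printed feature; the 40 mm length and 30 °C temperature span below are assumptions for illustration, not values from the paper:

```python
# Dimensional change under linear thermal expansion: dL = alpha * L0 * dT.
alpha = 75e-6          # 1/°C, the paper's upper bound on the coefficient
L0_mm = 40.0           # hypothetical printed feature length
dT = 30.0              # hypothetical heating span, °C
dL_mm = alpha * L0_mm * dT
print(f"expansion: {dL_mm * 1000:.0f} micron")  # → expansion: 90 micron
```

A 90-micron change over 40 mm is small relative to typical extrusion-printing feature sizes, which is the point of keeping the coefficient low.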

  17. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3-D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3-D imaging for libraries and museums. (LRW)

  18. 3D laptop for defense applications

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Chenault, David

    2012-06-01

    Polaris Sensor Technologies has developed numerous 3D display systems using a US Army patented approach. These displays have been developed as prototypes for handheld controllers for robotic systems and closed hatch driving, and as part of a TALON robot upgrade for 3D vision, providing depth perception for the operator for improved manipulation and hazard avoidance. In this paper we discuss the prototype rugged 3D laptop computer and its applications to defense missions. The prototype 3D laptop combines full temporal and spatial resolution display with the rugged Amrel laptop computer. The display is viewed through protective passive polarized eyewear, and allows combined 2D and 3D content. Uses include robot tele-operation with live 3D video or synthetically rendered scenery, mission planning and rehearsal, enhanced 3D data interpretation, and simulation.

  19. Super stereoscopy technique for comfortable and realistic 3D displays.

    PubMed

    Akşit, Kaan; Niaki, Amir Hossein Ghanbari; Ulusoy, Erdem; Urey, Hakan

    2014-12-15

    Two well-known problems of stereoscopic displays are the accommodation-convergence conflict and the lack of natural blur for defocused objects. We present a new technique that we name Super Stereoscopy (SS3D) to provide a convenient solution to these problems. Regular stereoscopic glasses are replaced by SS3D glasses which deliver at least two parallax images per eye through pinholes equipped with light-selective filters. The pinholes generate blur-free retinal images so as to enable correct accommodation, while the delivery of multiple parallax images per eye creates an approximate blur effect for defocused objects. Experiments performed with cameras and human viewers indicate that the technique works as desired. In the case where two pinholes equipped with color filters are used per eye, the technique can be used on a regular stereoscopic display by only uploading new content, without requiring any change in display hardware, driver, or frame rate. Apart from some tolerable loss in display brightness and a decrease in the eye's natural spatial resolution limit because of the pinholes, the technique is quite promising for comfortable and realistic 3D vision, especially enabling the display of close objects that are not possible to display and comfortably view on regular 3DTV and cinema. PMID:25503026

  20. Hands-on guide for 3D image creation for geological purposes

    NASA Astrophysics Data System (ADS)

    Frehner, Marcel; Tisato, Nicola

    2013-04-01

    Geological structures in outcrops or hand specimens are inherently three-dimensional (3D), and therefore better understood if viewed in 3D. While 3D models can easily be created, manipulated, and looked at from all sides on the computer screen (e.g., using photogrammetry or laser scanning data), 3D visualizations for publications or conference posters are much more challenging as they have to live in a 2D world (i.e., on a sheet of paper). Perspective 2D visualizations of 3D models do not fully transmit the "feeling and depth of the third dimension" to the audience; but this feeling is desirable for a better examination and understanding in 3D of the structure under consideration. One of the very few possibilities to generate real 3D images, which work on a 2D display, is by using so-called stereoscopic images. Stereoscopic images are two images of the same object recorded from two slightly offset viewpoints. Special glasses and techniques have to be used to make sure that one image is seen only by one eye, and the other image is seen by the other eye, which together lead to the "3D effect". Geoscientists are often familiar with such 3D images. For example, geomorphologists traditionally view stereographic orthophotos by employing a mirror-stereoscope. Nowadays, petroleum geoscientists examine high-resolution 3D seismic data sets in special 3D visualization rooms. One of the methods for generating and viewing a stereoscopic image, which does not require a high-tech viewing device, is to create a so-called anaglyph. The principle is to overlay two images saturated in red and cyan, respectively. The two images are then viewed through red-cyan stereoscopic glasses. This method is simple and cost-effective, but has some drawbacks in preserving colors accurately. A similar method is used in 3D movies, where polarized light or shuttering techniques are used to separate the left from the right image, which allows preserving the original colors. The advantage of red
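The red-cyan overlay described above can be sketched in a few lines of code; the tiny 4x4 test images are hypothetical stand-ins for a real stereo pair:

```python
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """Red-cyan anaglyph: take the red channel from the left image and the
    green and blue channels from the right image (uint8 H x W x 3 arrays)."""
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]   # red channel carries the left eye's view
    return out

# Toy stereo pair: left image all-red, right image all-cyan.
left = np.zeros((4, 4, 3), dtype=np.uint8);  left[..., 0] = 255
right = np.zeros((4, 4, 3), dtype=np.uint8); right[..., 1:] = 255
ana = make_anaglyph(left, right)
print(ana[0, 0])  # → [255 255 255]
```

Viewed through red-cyan glasses, the red channel reaches one eye and the green/blue channels the other, producing the stereo effect at the cost of accurate colour.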

  1. Web Based Interactive Anaglyph Stereo Visualization of 3D Model of Geoscience Data

    NASA Astrophysics Data System (ADS)

    Han, J.

    2014-12-01

    The objective of this study was to create an interactive online tool for generating and viewing anaglyph 3D stereo images in a Web browser via the Internet. To achieve this, we designed and developed a prototype system. Three-dimensional visualization is well known and has become popular in recent years as a way to understand a target object and the related physical phenomena. Geoscience data have a complex data model, which combines large extents with rich small-scale visual details, so real-time visualization of a 3D geoscience data model on the Internet is a challenging task. In this paper, we show results of creating anaglyph 3D images of geoscience data that can be viewed in any web browser which supports WebGL. We developed an anaglyph image viewing prototype system, and some representative results are displayed as anaglyph 3D stereo images generated in red-cyan colour from pairs of air photo/digital elevation model and geological map/digital elevation model, respectively. The best viewing is achieved by using suitable 3D red-cyan glasses, although red-blue or red-green spectacles can also be used. The middle mouse wheel can be used to zoom in/out of the anaglyph image in a Web browser. The anaglyph 3D stereo image is a very important and easy way to understand underground geologic systems and active tectonic geomorphology. Integrated strata with fine three-dimensional topography and geologic map data can help to characterise mineral potential areas and active tectonic anomalies. To conclude, the anaglyph 3D stereo image provides a simple and feasible method to improve the relief effect of geoscience data such as geomorphology and geology. We believe that with further development, the anaglyph 3D stereo imaging system could, as a complement to 3D geologic modeling, constitute a useful tool for better understanding of the underground geology and the active tectonic

  2. Evaluation of the monocular depth cue in 3D displays.

    PubMed

    Kim, Sung-Kyu; Kim, Dong-Wook; Kwon, Yong Moo; Son, Jung-Young

    2008-12-22

    Binocular disparity and monocular depth information are the principal functions of ideal 3D displays. 3D display systems such as stereoscopic or multi-view, super multi-view (SMV), and multi-focus (MF) displays were considered for the testing of the satisfaction level with the monocular accommodation of three different depths of 3D object points. The numerical simulation and experimental results show that the MF 3D display gives a monocular depth cue. In addition, the experimental results of the monocular MF 3D display show clear monocular focus on four different depths. Therefore, we can apply the MF 3D display to monocular 3D displays.

  3. Using the Technology: Introducing Point of View Video Glasses Into the Simulated Clinical Learning Environment.

    PubMed

    Metcalfe, Helene; Jonas-Dwyer, Diana; Saunders, Rosemary; Dugmore, Helen

    2015-10-01

    The introduction of learning technologies into educational settings continues to grow alongside the emergence of innovative technologies into the healthcare arena. The challenge for health professionals such as medical, nursing, and allied health practitioners is to develop an improved understanding of these technologies and how they may influence practice and contribute to healthcare. For nurse educators to remain contemporary, there is a need to not only embrace current technologies in teaching and learning but to also ensure that students are able to adapt to this changing pedagogy. One recent technological innovation is the use of wearable computing technology, consisting of video recording with the capability of playback analysis. The authors of this article discuss the introduction of the use of wearable Point of View video glasses by a cohort of nursing students in a simulated clinical learning laboratory. Of particular interest was the ease of use of the glasses, also termed the usability of this technology, which is central to its success. Students' reflections were analyzed together with suggestions for future use.

  4. Automatic 3D video format detection

    NASA Astrophysics Data System (ADS)

    Zhang, Tao; Wang, Zhe; Zhai, Jiefu; Doyen, Didier

    2011-03-01

    Many 3D formats exist and will probably co-exist for a long time even if 3D standards are today under definition. The support for multiple 3D formats will be important for bringing 3D into home. In this paper, we propose a novel and effective method to detect whether a video is a 3D video or not, and to further identify the exact 3D format. First, we present how to detect those 3D formats that encode a pair of stereo images into a single image. The proposed method detects features and establishes correspondences between features in the left and right view images, and applies the statistics from the distribution of the positional differences between corresponding features to detect the existence of a 3D format and to identify the format. Second, we present how to detect the frame sequential 3D format. In the frame sequential 3D format, the feature points are oscillating from frame to frame. Similarly, the proposed method tracks feature points over consecutive frames, computes the positional differences between features, and makes a detection decision based on whether the features are oscillating. Experiments show the effectiveness of our method.
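A much-simplified stand-in for the side-by-side check can be sketched as follows. The paper matches sparse features between the two halves and examines the statistics of their positional differences; the whole-half correlation used here is an assumption for illustration, not the authors' method:

```python
import numpy as np

def looks_side_by_side(frame, thresh=0.9):
    """Crude side-by-side 3D check: split a grayscale frame into left and
    right halves and compute their normalized correlation. In a genuine
    stereo pair the halves are nearly identical up to small horizontal
    disparities, so correlation is high; a typical 2D frame is not."""
    h, w = frame.shape
    left, right = frame[:, : w // 2], frame[:, w // 2 :]
    l = left - left.mean()
    r = right - right.mean()
    corr = (l * r).sum() / np.sqrt((l ** 2).sum() * (r ** 2).sum() + 1e-12)
    return corr > thresh

# Synthetic frames: a "stereo" frame whose right half copies the left,
# and a "mono" frame of unrelated noise.
rng = np.random.default_rng(0)
half = rng.random((64, 64))
stereo = np.hstack([half, half])
mono = rng.random((64, 128))
print(looks_side_by_side(stereo), looks_side_by_side(mono))  # → True False
```

A production detector would also tolerate small disparities (searching over shifts) and apply the analogous test vertically for the top-and-bottom format.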

  5. FPGA implementation of glass-free stereo vision

    NASA Astrophysics Data System (ADS)

    Tang, Weidong; Yan, Xiaolin

    2016-04-01

    This paper presents a real-time, efficient glass-free 3D system based on an FPGA. The system converts a two-view 1080p input stream at 60 frames per second (fps) into a multi-view video at 30 fps and 4K resolution. In order to provide a smooth and comfortable viewing experience, glass-free 3D systems must display multi-view videos. Generating a multi-view video from a two-view input involves three steps: the first is to compute disparity maps from the two input views; the second is to synthesize a number of new views based on the computed disparity maps and input views; the last is to produce the output video from the new views according to the specifications of the lens installed on the TV set.
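The second step, synthesizing a new view from a disparity map, can be sketched as naive forward warping; the uniform disparity field and the lack of hole filling below are idealizations for illustration, not the FPGA pipeline:

```python
import numpy as np

def synthesize_view(image, disparity, alpha):
    """Naive depth-image-based rendering: shift each pixel horizontally by
    alpha * disparity to synthesize an intermediate view (forward warping,
    no occlusion handling or hole filling)."""
    h, w = image.shape
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            xs = x + int(round(alpha * disparity[y, x]))
            if 0 <= xs < w:
                out[y, xs] = image[y, x]
    return out

# Uniform disparity of 4 px; the half-way view (alpha=0.5) shifts by 2 px.
img = np.arange(16.0).reshape(1, 16)
disp = np.full((1, 16), 4.0)
mid = synthesize_view(img, disp, 0.5)
print(mid[0, 2])  # pixel 0 lands at column 2
```

Varying `alpha` between 0 and 1 yields the intermediate viewpoints a lenticular display needs; real pipelines add occlusion handling and inpainting for the holes this sketch leaves.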

  6. Radiochromic 3D Detectors

    NASA Astrophysics Data System (ADS)

    Oldham, Mark

    2015-01-01

    Radiochromic materials exhibit a colour change when exposed to ionising radiation. Radiochromic film has been used for clinical dosimetry for many years, and increasingly so recently, as films of higher sensitivities have become available. The two principal advantages of radiochromic dosimetry are greater tissue equivalence (radiologically) and the lack of any requirement for development of the colour change. In a radiochromic material, the colour change arises directly from ionising interactions affecting dye molecules, without requiring any latent chemical, optical, or thermal development, with important implications for increased accuracy and convenience. It is only relatively recently, however, that 3D radiochromic dosimetry has become possible. In this article we review recent developments and the current state of the art of 3D radiochromic dosimetry, its potential as a more comprehensive solution for the verification of complex radiation therapy treatments, and 3D dose measurement in general.

  7. 3-D Seismic Interpretation

    NASA Astrophysics Data System (ADS)

    Moore, Gregory F.

    2009-05-01

    This volume is a brief introduction aimed at those who wish to gain a basic and relatively quick understanding of the interpretation of three-dimensional (3-D) seismic reflection data. The book is well written, clearly illustrated, and easy to follow. Enough elementary mathematics are presented for a basic understanding of seismic methods, but more complex mathematical derivations are avoided. References are listed for readers interested in more advanced explanations. After a brief introduction, the book logically begins with a succinct chapter on modern 3-D seismic data acquisition and processing. Standard 3-D acquisition methods are presented, and an appendix expands on more recent acquisition techniques, such as multiple-azimuth and wide-azimuth acquisition. Although this chapter covers the basics of standard time processing quite well, there is only a single sentence about prestack depth imaging, and anisotropic processing is not mentioned at all, even though both techniques are now becoming standard.

  8. Grounding-line migration in plan-view marine ice-sheet models: results of the ice2sea MISMIP3d intercomparison

    NASA Astrophysics Data System (ADS)

    Pattyn, Frank; Perichon, Laura; Durand, Gaël; Gagliardini, Olivier; Favier, Lionel; Hindmarsh, Richard; Zwinger, Thomas; Participants, Mismip3d

    2013-04-01

    Predictions of marine ice-sheet behaviour require models able to simulate grounding line migration. We present results of an intercomparison experiment for plan-view marine ice-sheet models. Verification is effected by comparison with approximate analytical solutions for flux across the grounding line using simplified geometrical configurations (no lateral variations, no buttressing effects from lateral drag). A unique steady state grounding line position exists for ice sheets on a downward sloping bed under those simplified conditions. Perturbation experiments specifying spatial (lateral) variation in basal sliding parameters permitted the evolution of curved grounding lines, generating buttressing effects. The experiments showed regions of compression and extensional flow across the grounding line, thereby invalidating the boundary layer theory. Models based on the shallow ice approximation, which do not resolve membrane stresses, cannot reproduce the approximate analytical results unless appropriate parameterizations for ice flux are imposed at the grounding line. Steady-state grounding line positions were found to depend on the level of physical model approximation. Models that only include membrane stresses result in ice sheets with a larger span than those that also incorporate vertical shearing at the grounding line, such as higher-order and full-Stokes models. From a numerical perspective, resolving grounding lines requires a sufficiently small grid size.

  9. TACO3D. 3-D Finite Element Heat Transfer Code

    SciTech Connect

    Mason, W.E.

    1992-03-04

    TACO3D is a three-dimensional, finite-element program for heat transfer analysis. An extension of the two-dimensional TACO program, it can perform linear and nonlinear analyses and can be used to solve either transient or steady-state problems. The program accepts time-dependent or temperature-dependent material properties, and materials may be isotropic or orthotropic. A variety of time-dependent and temperature-dependent boundary conditions and loadings are available including temperature, flux, convection, and radiation boundary conditions and internal heat generation. Additional specialized features treat enclosure radiation, bulk nodes, and master/slave internal surface conditions (e.g., contact resistance). Data input via a free-field format is provided. A user subprogram feature allows for any type of functional representation of any independent variable. A profile (bandwidth) minimization option is available. The code is limited to implicit time integration for transient solutions. TACO3D has no general mesh generation capability. Rows of evenly-spaced nodes and rows of sequential elements may be generated, but the program relies on separate mesh generators for complex zoning. TACO3D does not have the ability to calculate view factors internally. Graphical representation of data in the form of time history and spatial plots is provided through links to the POSTACO and GRAPE postprocessor codes.
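    TACO3D itself is a large production finite-element code, but the implicit time integration it is limited to can be illustrated on a 1-D conduction problem. The sketch below is a hypothetical illustration, not TACO3D code: one backward-Euler step of the heat equation with fixed-temperature (Dirichlet) ends.

```python
import numpy as np

# Hypothetical sketch: one implicit (backward Euler) step for the 1-D heat
# equation dT/dt = alpha * d2T/dx2 with fixed-temperature ends. Implicit
# integration requires solving a linear system each step, but is
# unconditionally stable regardless of the time step dt.
def implicit_heat_step(T, alpha, dx, dt):
    n = len(T)
    r = alpha * dt / dx**2
    A = np.eye(n) * (1.0 + 2.0 * r)
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = -r
    # Overwrite first and last rows to pin the boundary temperatures.
    A[0, :] = 0.0; A[0, 0] = 1.0
    A[-1, :] = 0.0; A[-1, -1] = 1.0
    return np.linalg.solve(A, np.asarray(T, dtype=float))
```

    A uniform field with matching boundary temperatures stays uniform, while an interior hot spot between cold fixed ends decays, as expected for conduction.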

  10. "We Put on the Glasses and Moon Comes Closer!" Urban Second Graders Exploring the Earth, the Sun and Moon through 3D Technologies in a Science and Literacy Unit

    ERIC Educational Resources Information Center

    Isik-Ercan, Zeynep; Zeynep Inan, Hatice; Nowak, Jeffrey A.; Kim, Beomjin

    2014-01-01

    This qualitative case study describes (a) the ways 3D visualization, coupled with other science and literacy experiences, supported young children's first exploration of the Earth-Sun-Moon system and (b) the perspectives of classroom teachers and children on using 3D visualization. We created three interactive 3D software modules that simulate day…

  11. Bootstrapping 3D fermions

    DOE PAGES

    Iliesiu, Luca; Kos, Filip; Poland, David; Pufu, Silviu S.; Simmons-Duffin, David; Yacoby, Ran

    2016-03-17

    We study the conformal bootstrap for a 4-point function of fermions <ψψψψ> in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ × ψ OPE, and also on the central charge CT. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. Finally, we speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.

  12. Interactive 3D Mars Visualization

    NASA Technical Reports Server (NTRS)

    Powell, Mark W.

    2012-01-01

    The Interactive 3D Mars Visualization system provides high-performance, immersive visualization of satellite and surface vehicle imagery of Mars. The software can be used in mission operations to provide the most accurate position information for the Mars rovers to date. When integrated into the mission data pipeline, this system allows mission planners to view the location of the rover on Mars to 0.01-meter accuracy with respect to satellite imagery, with dynamic updates to incorporate the latest position information. Given this information so early in the planning process, rover drivers are able to plan more accurate drive activities for the rover than ever before, significantly increasing the execution of science activities. Scientifically, this 3D mapping information puts all of the science analyses to date into geologic context on a daily basis instead of weeks or months, as was the norm prior to this contribution. This allows the science planners to judge the efficacy of their previously executed science observations much more efficiently, and achieve greater science return as a result. The Interactive 3D Mars surface view is a Mars terrain browsing software interface that encompasses the entire region of exploration for a Mars surface exploration mission. The view is interactive, allowing the user to pan in any direction by clicking and dragging, or to zoom in or out by scrolling the mouse or touchpad. The current tool set includes a point-of-interest selector and a ruler tool for displaying the distance between, and positions of, two points of interest. The mapping information can be harvested and shared through ubiquitous online mapping tools like Google Mars, NASA WorldWind, and Worldwide Telescope.

  13. What Lies Ahead (3-D)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D cylindrical-perspective mosaic taken by the navigation camera on the Mars Exploration Rover Spirit on sol 82 shows the view south of the large crater dubbed 'Bonneville.' The rover will travel toward the Columbia Hills, seen here at the upper left. The rock dubbed 'Mazatzal' and the hole the rover drilled into it can be seen at the lower left. The rover's position is referred to as 'Site 22, Position 32.' This image was geometrically corrected to make the horizon appear flat.

  14. Forward ramp and Twin Peaks - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    A lander petal and the forward ramp are featured in this image, taken by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. There are several prominent rocks, including Wedge at left; Shark, Half-Dome, and Pumpkin in the background; and Flat Top and Little Flat Top at center.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

    Click below to see the left and right views individually. [figure removed for brevity, see original site] Left [figure removed for brevity, see original site] Right

  15. Sojourner's favorite rocks - in 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Many prominent rocks near the Sagan Memorial Station are featured in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. Wedge is at lower left; Shark, Half-Dome, and Pumpkin are at center. Flat Top, about four inches high, and Little Flat Top are at center. The 'Twin Peaks' in the distance are one to two kilometers away. Curvature in the image is due to parallax.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

    Click below to see the left and right views individually. [figure removed for brevity, see original site] Left [figure removed for brevity, see original site] Right

  16. Automatic view synthesis by image-domain-warping.

    PubMed

    Stefanoski, Nikolce; Wang, Oliver; Lang, Manuel; Greisen, Pierre; Heinzle, Simon; Smolic, Aljosa

    2013-09-01

    Today, stereoscopic 3D (S3D) cinema is already mainstream, and almost all new display devices for the home support S3D content. S3D distribution infrastructure to the home is already partly established in the form of 3D Blu-ray discs, video-on-demand services, or television channels. The necessity to wear glasses is, however, often considered an obstacle that hinders broader acceptance of this technology in the home. Multiview autostereoscopic displays enable glasses-free perception of S3D content for several observers simultaneously, and support head motion parallax within a limited range. To support multiview autostereoscopic displays in an already established S3D distribution infrastructure, a synthesis of new views from S3D video is needed. In this paper, a view synthesis method based on image-domain-warping (IDW) is presented that synthesizes new views directly from S3D video and operates fully automatically. IDW relies on an automatic and robust estimation of sparse disparities and image saliency information, and enforces target disparities in synthesized images using an image warping framework. Two configurations of the view synthesizer in the scope of a transmission and view synthesis framework are analyzed and evaluated. A transmission and view synthesis system that uses IDW was recently submitted in response to MPEG's call for proposals on 3D video technology, where it was ranked among the four best performing proposals.

  17. Automatic view synthesis by image-domain-warping.

    PubMed

    Stefanoski, Nikolce; Wang, Oliver; Lang, Manuel; Greisen, Pierre; Heinzle, Simon; Smolic, Aljosa

    2013-09-01

    Today, stereoscopic 3D (S3D) cinema is already mainstream, and almost all new display devices for the home support S3D content. S3D distribution infrastructure to the home is already partly established in the form of 3D Blu-ray discs, video-on-demand services, or television channels. The necessity to wear glasses is, however, often considered an obstacle that hinders broader acceptance of this technology in the home. Multiview autostereoscopic displays enable glasses-free perception of S3D content for several observers simultaneously, and support head motion parallax within a limited range. To support multiview autostereoscopic displays in an already established S3D distribution infrastructure, a synthesis of new views from S3D video is needed. In this paper, a view synthesis method based on image-domain-warping (IDW) is presented that synthesizes new views directly from S3D video and operates fully automatically. IDW relies on an automatic and robust estimation of sparse disparities and image saliency information, and enforces target disparities in synthesized images using an image warping framework. Two configurations of the view synthesizer in the scope of a transmission and view synthesis framework are analyzed and evaluated. A transmission and view synthesis system that uses IDW was recently submitted in response to MPEG's call for proposals on 3D video technology, where it was ranked among the four best performing proposals. PMID:23715602
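    The core warping idea is simple to sketch. The toy function below is a hypothetical 1-D illustration, not the authors' IDW implementation (which deforms the whole image domain smoothly under saliency-weighted disparity constraints): it shifts each pixel of a scanline by a per-pixel disparity to form a new view.

```python
import numpy as np

# Toy forward warp of one scanline: each pixel moves horizontally by its
# disparity. Later writes win, which crudely mimics occlusion; real IDW
# instead warps the image domain continuously, guided by sparse disparities
# and saliency, so no holes or hard overwrites appear.
def warp_row(row, disparity):
    out = np.zeros_like(row)
    for x, d in enumerate(disparity):
        tx = x + int(d)
        if 0 <= tx < len(row):
            out[tx] = row[x]
    return out
```

    Note how pixel 3, with zero disparity, overwrites the shifted pixel 2 and leaves a hole at position 0; handling exactly these artifacts is what the full IDW framework is for.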

  18. Venus in 3D

    NASA Astrophysics Data System (ADS)

    Plaut, J. J.

    1993-08-01

    Stereographic images of the surface of Venus which enable geologists to reconstruct the details of the planet's evolution are discussed. The 120-meter resolution of these 3D images makes it possible to construct digital topographic maps from which precise measurements can be made of the heights, depths, slopes, and volumes of geologic structures.

  19. 3D reservoir visualization

    SciTech Connect

    Van, B.T.; Pajon, J.L.; Joseph, P. )

    1991-11-01

    This paper shows how some simple 3D computer graphics tools can be combined to provide efficient software for visualizing and analyzing data obtained from reservoir simulators and geological simulations. The animation and interactive capabilities of the software quickly provide a deep understanding of the fluid-flow behavior and an accurate idea of the internal architecture of a reservoir.

  20. 3D Printable Graphene Composite

    PubMed Central

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-01-01

    In human history, both the Iron Age and the Silicon Age thrived after mature mass-processing technologies were developed. Graphene is the most recent superior material, one that could potentially initiate another new material age. However, even when exploited to its full extent, conventional processing methods fail to connect graphene to today's trend towards personalization; new technology is needed. Three-dimensional (3D) printing provides the missing link between graphene materials and the digital mainstream. Their alliance could push the graphene revolution into a new phase. Here we demonstrate for the first time that a graphene composite, with a graphene loading of up to 5.6 wt%, can be 3D printed into computer-designed models. The composite’s linear thermal coefficient is below 75 ppm·°C−1 from room temperature to its glass transition temperature (Tg), which is crucial to keeping thermal stress low during the printing process. PMID:26153673
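    The quoted thermal coefficient translates directly into dimensional change during printing. A back-of-the-envelope sketch (the 75 ppm·°C−1 bound is from the abstract; the part length and temperature drop are hypothetical numbers for illustration):

```python
# Linear thermal expansion/contraction: dL = alpha * L * dT,
# with alpha given in ppm/°C as in the abstract.
def thermal_expansion_mm(length_mm, alpha_ppm_per_c, delta_t_c):
    return length_mm * alpha_ppm_per_c * 1e-6 * delta_t_c
```

    At the stated upper bound of 75 ppm·°C−1, a 100 mm part cooling by 40 °C contracts by only 0.3 mm, which is why the low coefficient helps keep residual thermal stress small during printing.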

  1. 3D Printable Graphene Composite

    NASA Astrophysics Data System (ADS)

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-07-01

    In human history, both the Iron Age and the Silicon Age thrived after mature mass-processing technologies were developed. Graphene is the most recent superior material, one that could potentially initiate another new material age. However, even when exploited to its full extent, conventional processing methods fail to connect graphene to today's trend towards personalization; new technology is needed. Three-dimensional (3D) printing provides the missing link between graphene materials and the digital mainstream. Their alliance could push the graphene revolution into a new phase. Here we demonstrate for the first time that a graphene composite, with a graphene loading of up to 5.6 wt%, can be 3D printed into computer-designed models. The composite’s linear thermal coefficient is below 75 ppm·°C-1 from room temperature to its glass transition temperature (Tg), which is crucial to keeping thermal stress low during the printing process.

  2. Subjective experiences of watching stereoscopic Avatar and U2 3D in a cinema

    NASA Astrophysics Data System (ADS)

    Pölönen, Monika; Salmimaa, Marja; Takatalo, Jari; Häkkinen, Jukka

    2012-01-01

    A stereoscopic 3-D version of the film Avatar was shown to 85 people who subsequently answered questions related to sickness, visual strain, stereoscopic image quality, and sense of presence. Viewing Avatar for 165 min induced some symptoms of visual strain and sickness, but the symptom levels remained low. A comparison between Avatar and previously published results for the film U2 3D showed that sickness and visual strain levels were similar despite the films' differing runtimes. The genre of the film had a significant effect on the viewers' opinions and sense of presence. Avatar, which has been described as a combination of action, adventure, and sci-fi genres, was experienced as more immersive and engaging than the music documentary U2 3D. However, participants in both studies were immersed, focused, and absorbed in watching the stereoscopic 3-D (S3-D) film and were pleased with the film environments. The results also showed that previous stereoscopic 3-D experience significantly reduced the amount of reported eye strain and complaints about the weight of the viewing glasses.

  3. 3D PDF - a means of public access to geological 3D - objects, using the example of GTA3D

    NASA Astrophysics Data System (ADS)

    Slaby, Mark-Fabian; Reimann, Rüdiger

    2013-04-01

    In geology, 3D modeling has become very important. In the past, two-dimensional data such as isolines, drilling profiles, or cross-sections based on those, were used to illustrate the subsurface geology, whereas now, we can create complex digital 3D models. These models are produced with special software, such as GOCAD ®. The models can be viewed only through the software used to create them, or through freely available viewers. The platform-independent PDF (Portable Document Format), introduced by Adobe, has found wide distribution. This format has constantly evolved over time. Meanwhile, it is possible to display CAD data in an Adobe 3D PDF file with the free Adobe Reader (since version 7). In a 3D PDF, a 3D model is freely rotatable and can be assembled from a plurality of objects, which can thus be viewed from all directions on their own. In addition, it is possible to create moveable cross-sections (profiles), and to assign transparency to the objects. Based on industry-standard CAD software, 3D PDFs can be generated from a large number of formats, or even be exported directly from this software. In geoinformatics, different approaches to creating 3D PDFs exist. The intent of the Authority for Mining, Energy and Geology to allow free access to the models of the Geotectonic Atlas (GTA3D) could not be realized with standard software solutions. A specially designed code converts the 3D objects to VRML (Virtual Reality Modeling Language). VRML is one of the few formats that allow using image files (maps) as textures, and representing colors and shapes correctly. The files were merged in Acrobat X Pro, and a 3D PDF was generated subsequently. A topographic map, a display of geographic directions, and horizontal and vertical scales help facilitate use.

  4. 3D medical thermography device

    NASA Astrophysics Data System (ADS)

    Moghadam, Peyman

    2015-05-01

    In this paper, a novel handheld 3D medical thermography system is introduced. The proposed system consists of a thermal-infrared camera, a color camera and a depth camera rigidly attached in close proximity and mounted on an ergonomic handle. As a practitioner holding the device smoothly moves it around the human body parts, the proposed system generates and builds up a precise 3D thermogram model by incorporating information from each new measurement in real-time. The data is acquired in motion, thus providing multiple points of view. When processed, these multiple points of view are adaptively combined by taking into account the reliability of each individual measurement, which can vary due to a variety of factors such as the angle of incidence, the distance between the device and the subject, and other environmental factors influencing the confidence in the thermal-infrared data when captured. Finally, several case studies are presented to support the usability and performance of the proposed system.
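    In its simplest form, the reliability-aware combination described above reduces to a confidence-weighted average. The sketch below is an illustrative assumption, not the paper's actual weighting scheme:

```python
# Confidence-weighted fusion of repeated measurements of one surface point.
# Each measurement contributes in proportion to its confidence, so a reading
# taken at a grazing angle or from far away (low confidence) pulls the
# estimate less than a reliable close-range, near-normal reading.
def fuse(measurements, confidences):
    total = sum(confidences)
    return sum(m * c for m, c in zip(measurements, confidences)) / total
```

    For example, fusing a low-confidence 30 °C reading (weight 1) with a high-confidence 40 °C reading (weight 3) yields 37.5 °C, close to the trusted measurement.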

  5. 3D Radiative Transfer Effects in Multi-Angle/Multi-Spectral Radio-Polarimetric Signals from a Mixture of Clouds and Aerosols Viewed by a Non-Imaging Sensor

    NASA Technical Reports Server (NTRS)

    Davis, Anthony B.; Garay, Michael J.; Xu, Feng; Qu, Zheng; Emde, Claudia

    2013-01-01

    When observing a spatially complex mix of aerosols and clouds in a single relatively large field-of-view, nature entangles their signals non-linearly through polarized radiation transport processes that unfold in the 3D position and direction spaces. In contrast, any practical forward model in a retrieval algorithm will use only 1D vector radiative transfer (vRT) in a linear mixing technique. We assess the difference between the observed and predicted signals using synthetic data from a high-fidelity 3D vRT model with clouds generated using a Large Eddy Simulation model and an aerosol climatology. We find that this difference is signal--not noise--for the Aerosol Polarimetry Sensor (APS), an instrument developed by NASA. Moreover, the worst case scenario is also the most interesting case, namely, when the aerosol burden is large, hence has the most impact on the cloud microphysics and dynamics. Based on our findings, we formulate a mitigation strategy for these unresolved cloud adjacency effects assuming that some spatial information is available about the structure of the clouds at higher resolution from "context" cameras, as was planned for NASA's ill-fated Glory mission that was to carry the APS but failed to reach orbit. Application to POLDER (POLarization and Directionality of Earth Reflectances) data from the period when PARASOL (Polarization and Anisotropy of Reflectances for Atmospheric Sciences coupled with Observations from a Lidar) was in the A-train is briefly discussed.

  6. Crosstalk in automultiscopic 3-D displays: blessing in disguise?

    NASA Astrophysics Data System (ADS)

    Jain, Ashish; Konrad, Janusz

    2007-02-01

    Most 3-D displays suffer from interocular crosstalk, i.e., the perception of an unintended view in addition to the intended one. The resulting "ghosting" at high-contrast object boundaries is objectionable and interferes with depth perception. In automultiscopic (glasses-free, multiview) displays using microlenses or a parallax barrier, the effect is compounded since several unintended views may be perceived at once. However, we recently discovered that crosstalk in automultiscopic displays can also be beneficial. Since spatial multiplexing of views to prepare a composite image for automultiscopic viewing involves sub-sampling, prior anti-alias filtering is required. To date, anti-alias filter design has ignored the presence of crosstalk in automultiscopic displays. In this paper, we propose a simple multiplexing model that takes crosstalk into account. Using this model we derive a mathematical expression for the spectrum of a single view with crosstalk, and we show that it leads to reduced spectral aliasing compared to the crosstalk-free case. We then propose a new criterion for the characterization of the ideal anti-alias pre-filter. In the experimental part, we describe a simple method to measure optical crosstalk between views using a digital camera. We use the measured crosstalk parameters to find the ideal frequency response of the anti-alias filter, and we design practical digital filters approximating this response. Having applied the designed filters to a number of multiview images prior to multiplexing, we conclude that, due to their increased bandwidth, the filters lead to visibly sharper 3-D images without increasing aliasing artifacts.
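    A minimal version of such a multiplexing model can be sketched as follows. This is an illustrative assumption (symmetric leakage of a fraction c from each neighbouring view), not the paper's exact formulation:

```python
import numpy as np

# Perceived view k modeled as a mixture of the intended view and its two
# neighbours, with a crosstalk fraction c leaking in from each side. Across
# the view index this mixing acts like an extra low-pass term, which is the
# intuition for why moderate crosstalk can reduce aliasing after the
# sub-sampling involved in spatial view multiplexing.
def perceived_view(views, k, c):
    left = views[(k - 1) % len(views)]
    right = views[(k + 1) % len(views)]
    return (1.0 - 2.0 * c) * views[k] + c * (left + right)
```

    With c = 0 the intended view is reproduced exactly; as c grows, neighbouring views blend in, smoothing differences between adjacent views.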

  7. 3D rapid mapping

    NASA Astrophysics Data System (ADS)

    Isaksson, Folke; Borg, Johan; Haglund, Leif

    2008-04-01

    In this paper the performance of passive range measurement imaging using stereo techniques in real-time applications is described. Stereo vision uses multiple images to obtain depth resolution in a similar way as Synthetic Aperture Radar (SAR) uses multiple measurements to obtain better spatial resolution. This technique has been used in photogrammetry for a long time, but it will be shown that it is now possible to do the calculations, with carefully designed image processing algorithms, on e.g. a PC in real time. In order to get high-resolution, quantitative data in the stereo estimation, a mathematical camera model is used. The parameters of the camera model are determined in a calibration rig, or, in the case of a moving camera, the scene itself can be used to calibrate most of the parameters. After calibration an ordinary TV camera has an angular resolution comparable to a theodolite, but at a much lower price. The paper will present results from high-resolution 3D imagery from air to ground. The 3D results from stereo calculation of image pairs are stitched together into a large database to form a 3D model of the area covered.
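    The depth recovery underlying such stereo estimation reduces, for a rectified pinhole camera pair, to a one-line triangulation. This is the textbook relation, not the authors' full calibrated camera model:

```python
# Rectified pinhole stereo triangulation: Z = f * B / d, where f is the
# focal length in pixels, B the camera baseline in metres, and d the
# disparity in pixels. Smaller disparity means the point is farther away.
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    return focal_px * baseline_m / disparity_px
```

    For example, a 1000 px focal length and a 0.5 m baseline give 5 m depth for a 100 px disparity; halving the disparity doubles the depth, so depth resolution degrades with range, which is why calibration accuracy matters so much for air-to-ground mapping.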

  8. Shutter glasses stereo LCD with a dynamic backlight

    NASA Astrophysics Data System (ADS)

    Liou, Jian-Chiun; Lee, Kuen; Tseng, Fan-Gang; Huang, Jui-Feng; Yen, Wei-Ting; Hsu, Wei-Liang

    2009-02-01

    Although a naked-eye 3D display is more convenient for the viewer, for now and in the near future the image quality of a stereo display watched with special glasses remains much better, e.g. in viewing angle, crosstalk and resolution. Focusing on glasses-type stereo displays, the image performance of a time-multiplexed shutter-glasses 3D display should be better than that of a spatially multiplexed polarization-encoded 3D display. Shutter-glasses 3D displays were implemented many years ago on CRTs. However, when CRTs were superseded by LCDs, the shutter-glasses solution could not work for several years as a result of the long response time of LCDs. Thanks to the development of overdrive technology, LCD response times are getting faster, and a 100-120Hz panel refresh rate is possible. Therefore, 3D game fans once again have a very good opportunity to watch full-resolution, large-viewing-angle and low-crosstalk stereo LCDs. In this paper, a 120Hz LCD and an LED dynamic backlight, used to overcome the hold-type characteristic of an LCD, are combined to implement a time-multiplexed 3D display. A synchronization circuit is developed to connect the timing of the vertical sync signal from the display card, the scanning backlight and the shutter glasses. The crosstalk under different scanning conditions is measured.

  9. The psychology of the 3D experience

    NASA Astrophysics Data System (ADS)

    Janicke, Sophie H.; Ellis, Andrew

    2013-03-01

    With 3D televisions expected to reach 50% home saturation as early as 2016, understanding the psychological mechanisms underlying the user response to 3D technology is critical for content providers, educators and academics. Unfortunately, research examining the effects of 3D technology has not kept pace with the technology's rapid adoption, resulting in large-scale use of a technology about which very little is actually known. Recognizing this need for new research, we conducted a series of studies measuring and comparing many of the variables and processes underlying both 2D and 3D media experiences. In our first study, we found narratives within primetime dramas had the power to shift viewer attitudes in both 2D and 3D settings. However, we found no difference in persuasive power between 2D and 3D content. We contend this lack of effect was the result of poor conversion quality and the unique demands of 3D production. In our second study, we found 3D technology significantly increased enjoyment when viewing sports content, yet offered no added enjoyment when viewing a movie trailer. The enhanced enjoyment of the sports content was shown to be the result of heightened emotional arousal and attention in the 3D condition. We believe the lack of effect found for the movie trailer may be genre-related. In our final study, we found 3D technology significantly enhanced enjoyment of two video games from different genres. The added enjoyment was found to be the result of an increased sense of presence.

  10. Low-cost 3D rangefinder system

    NASA Astrophysics Data System (ADS)

    Chen, Bor-Tow; Lou, Wen-Shiou; Chen, Chia-Chen; Lin, Hsien-Chang

    1998-06-01

    Nowadays, 3D data are commonly handled on computers, and 3D browsers manipulate 3D models in virtual worlds. Yet, until now, 3D digitizers have remained high-cost products rather than familiar equipment. To meet the demands of the 3D world, this paper proposes the concept of a low-cost 3D digitizer system to capture 3D range data from objects. The specialized optical design of the 3D extraction keeps the size down, and the system's processing software is PC-compatible, which makes it portable. Both features yield a low-cost system in a PC environment, in contrast to a large system bundled with an expensive workstation platform. In the 3D extraction structure, a laser beam and a CCD camera are used to construct the 3D sensor. Instead of two CCD cameras capturing the laser lines twice, as before, a 2-in-1 system is proposed that merges two images on one CCD while retaining the information of two fields of view to suppress occlusion problems. In addition, the optical paths of the two camera views are folded by mirrors so that the volume of the system can be reduced with only one rotary axis, making a portable system more feasible. Combined with processing software executable under PC Windows, the proposed system saves both hardware cost and software processing time. The system achieves 0.05 mm accuracy, showing that a low-cost system can indeed be high-performance.

  11. Magmatic Systems in 3-D

    NASA Astrophysics Data System (ADS)

    Kent, G. M.; Harding, A. J.; Babcock, J. M.; Orcutt, J. A.; Bazin, S.; Singh, S.; Detrick, R. S.; Canales, J. P.; Carbotte, S. M.; Diebold, J.

    2002-12-01

    , even if one data object lies behind another. Stereoscopic viewing is another powerful tool to investigate 3-D relationships between objects. This form of immersion is constructed through viewing two separate images that are interleaved--typically 48 frames per second, per eye--and synced through an emitter and a set of specialized polarizing eyeglasses. The polarizing lenses flicker at an equivalent rate, blanking the eye for which a particular image was not drawn, producing the desired stereo effect. Volumetric visualization of the ARAD 3-D seismic dataset will be presented. The effective use of transparency reveals detailed structure of the melt-lens beneath the 9°03'N overlapping spreading center (OSC) along the East Pacific Rise, including melt-filled fractures within the propagating rift-tip. In addition, range-gated images of seismic reflectivity will be co-registered to investigate the physical properties (melt versus mush) of the magma chamber at this locale. Surface visualization of a dense, 2-D grid of MCS seismic data beneath Axial seamount (Juan de Fuca Ridge) will also be highlighted, including relationships between the summit caldera and rift zones, and the underlying (and humongous) magma chamber. A selection of Quicktime movies will be shown. Popcorn will be served, really!

  12. Perception of detail in 3D images

    NASA Astrophysics Data System (ADS)

    Heynderickx, Ingrid; Kaptein, Ronald

    2009-01-01

    A lot of current 3D displays suffer from a spatial resolution that is lower than that of their 2D counterparts. One reason for this is that the multiple views needed to generate 3D are often spatially multiplexed. Besides this, imperfect separation of the left- and right-eye views leads to blurring or ghosting, and therefore to a decrease in perceived sharpness. However, people watching stereoscopic videos have reported that the 3D scene contained more detail than the 2D scene with identical spatial resolution. This is an interesting notion that has never been tested in a systematic and quantitative way. To investigate this effect, we had people compare the amount of detail ("detailedness") in pairs of 2D and 3D images. A blur filter was applied to one of the two images, and the blur level was varied using an adaptive staircase procedure. In this way, the blur threshold at which the 2D and 3D images contained perceptually the same amount of detail could be found. Our results show that the 3D image needed to be blurred more than the 2D image. This confirms the earlier qualitative findings that 3D images contain perceptually more detail than 2D images with the same spatial resolution.
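    The adaptive staircase procedure mentioned above can be sketched as follows. This is a minimal 1-up/1-down illustration with invented step sizes and stopping rule; the paper does not specify its exact staircase parameters:

    ```python
    # Minimal 1-up/1-down adaptive staircase sketch (illustrative only).
    def staircase(respond, start=5.0, step=0.5, n_reversals=6):
        """Track the blur level at which two images look equally detailed.

        `respond(level)` returns True if the observer judged the blurred
        image as containing MORE detail than the reference (so blur should
        increase), False otherwise.
        """
        level, direction, reversals = start, -1, []
        while len(reversals) < n_reversals:
            new_dir = +1 if respond(level) else -1
            if new_dir != direction:
                reversals.append(level)   # record a reversal point
                direction = new_dir
            level = max(0.0, level + new_dir * step)
        # threshold estimate: mean of the reversal levels
        return sum(reversals) / len(reversals)
    ```

    With a simulated observer whose true threshold lies between two step levels, the estimate converges to the midpoint of the oscillation.
    
    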

  13. Innovations in 3D printing: a 3D overview from optics to organs.

    PubMed

    Schubert, Carl; van Langeveld, Mark C; Donoso, Larry A

    2014-02-01

    3D printing is a method of manufacturing in which materials, such as plastic or metal, are deposited onto one another in layers to produce a three dimensional object, such as a pair of eye glasses or other 3D objects. This process contrasts with traditional ink-based printers which produce a two dimensional object (ink on paper). To date, 3D printing has primarily been used in engineering to create engineering prototypes. However, recent advances in printing materials have now enabled 3D printers to make objects that are comparable with traditionally manufactured items. In contrast with conventional printers, 3D printing has the potential to enable mass customisation of goods on a large scale and has relevance in medicine including ophthalmology. 3D printing has already been proved viable in several medical applications including the manufacture of eyeglasses, custom prosthetic devices and dental implants. In this review, we discuss the potential for 3D printing to revolutionise manufacturing in the same way as the printing press revolutionised conventional printing. The applications and limitations of 3D printing are discussed; the production process is demonstrated by producing a set of eyeglass frames from 3D blueprints. PMID:24288392

  14. Innovations in 3D printing: a 3D overview from optics to organs.

    PubMed

    Schubert, Carl; van Langeveld, Mark C; Donoso, Larry A

    2014-02-01

    3D printing is a method of manufacturing in which materials, such as plastic or metal, are deposited onto one another in layers to produce a three dimensional object, such as a pair of eye glasses or other 3D objects. This process contrasts with traditional ink-based printers which produce a two dimensional object (ink on paper). To date, 3D printing has primarily been used in engineering to create engineering prototypes. However, recent advances in printing materials have now enabled 3D printers to make objects that are comparable with traditionally manufactured items. In contrast with conventional printers, 3D printing has the potential to enable mass customisation of goods on a large scale and has relevance in medicine including ophthalmology. 3D printing has already been proved viable in several medical applications including the manufacture of eyeglasses, custom prosthetic devices and dental implants. In this review, we discuss the potential for 3D printing to revolutionise manufacturing in the same way as the printing press revolutionised conventional printing. The applications and limitations of 3D printing are discussed; the production process is demonstrated by producing a set of eyeglass frames from 3D blueprints.

  15. NASA's 3D View of Celestial Lightsabers

    NASA Video Gallery

    This movie envisions a three-dimensional perspective on the Hubble Space Telescope's striking image of the Herbig-Haro object known as HH 24. The central star is hidden by gas and dust, but its pro...

  16. Taming supersymmetric defects in 3d-3d correspondence

    NASA Astrophysics Data System (ADS)

    Gang, Dongmin; Kim, Nakwoo; Romo, Mauricio; Yamazaki, Masahito

    2016-07-01

    We study knots in 3d Chern-Simons theory with complex gauge group SL(N,C), in the context of its relation with 3d N=2 theory (the so-called 3d-3d correspondence). The defect has either co-dimension 2 or co-dimension 4 inside the 6d (2,0) theory, which is compactified on a 3-manifold M̂. We identify such defects in various corners of the 3d-3d correspondence, namely in 3d SL(N,C) CS theory, in 3d N=2 theory, in 5d N=2 super Yang-Mills theory, and in the M-theory holographic dual. We can make quantitative checks of the 3d-3d correspondence by computing partition functions in each of these theories. This Letter is a companion to a longer paper [1], which contains more details and more results.

  17. The influence of autostereoscopic 3D displays on subsequent task performance

    NASA Astrophysics Data System (ADS)

    Barkowsky, Marcus; Le Callet, Patrick

    2010-02-01

    Viewing 3D content on an autostereoscopic display is an exciting experience. This is partly because the 3D effect is seen without glasses. Nevertheless, it is an unnatural condition for the eyes, as the depth effect is created by the disparity of the left and the right view on a flat screen instead of by a real object at the corresponding location. Thus, it may be more tiring to watch 3D than 2D. This question is investigated here with a subjective experiment. A search task experiment is conducted and the behavior of the participants is recorded with an eye tracker. Several indicators, both for low-level perception and for the task performance itself, are evaluated. In addition, two optometric tests are performed. A verification session with conventional 2D viewing is included. The results are discussed in detail, and it can be concluded that 3D viewing does not have a negative impact on the task performance used in the experiment.

  18. 3D Audio System

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Ames Research Center research into virtual reality led to the development of the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion. The perceived location of the sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research and virtual reality development.

  19. 3D Printed Microscope for Mobile Devices that Cost Pennies

    ScienceCinema

    Erikson, Rebecca; Baird, Cheryl; Hutchinson, Janine

    2016-07-12

    Scientists at PNNL have designed a 3D-printable microscope for mobile devices using pennies' worth of plastic and glass materials. The microscope has a wide range of uses, from education to in-the-field science.

  20. 3D Printed Microscope for Mobile Devices that Cost Pennies

    SciTech Connect

    Erikson, Rebecca; Baird, Cheryl; Hutchinson, Janine

    2014-09-15

    Scientists at PNNL have designed a 3D-printable microscope for mobile devices using pennies' worth of plastic and glass materials. The microscope has a wide range of uses, from education to in-the-field science.

  1. 3D dynamic roadmapping for abdominal catheterizations.

    PubMed

    Bender, Frederik; Groher, Martin; Khamene, Ali; Wein, Wolfgang; Heibel, Tim Hauke; Navab, Nassir

    2008-01-01

    Despite rapid advances in interventional imaging, the navigation of a guide wire through abdominal vasculature remains, not only for novice radiologists, a difficult task. Since this navigation is mostly based on 2D fluoroscopic image sequences from one view, the process is slowed down significantly due to missing depth information and patient motion. We propose a novel approach for 3D dynamic roadmapping in deformable regions by predicting the location of the guide wire tip in a 3D vessel model from the tip's 2D location, respiratory motion analysis, and view geometry. In a first step, the method compensates for the apparent respiratory motion in 2D space before backprojecting the 2D guide wire tip into three dimensional space, using a given projection matrix. To countervail the error connected to the projection parameters and the motion compensation, as well as the ambiguity caused by vessel deformation, we establish a statistical framework, which computes a reliable estimate of the guide wire tip location within the 3D vessel model. With this 2D-to-3D transfer, the navigation can be performed from arbitrary viewing angles, disconnected from the static perspective view of the fluoroscopic sequence. Tests on a realistic breathing phantom and on synthetic data with a known ground truth clearly reveal the superiority of our approach compared to naive methods for 3D roadmapping. The concepts and information presented in this paper are based on research and are not commercially available. PMID:18982662
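    The 2D-to-3D transfer step described above can be sketched in a simplified form: backproject the 2D tip through a 3x4 projection matrix into a 3D ray, then pick the closest point on a vessel centerline. The statistical framework and motion compensation of the paper are not reproduced; names and shapes here are illustrative:

    ```python
    import numpy as np

    def backproject_ray(P, uv):
        """Return (origin, direction) of the 3D ray projecting to pixel uv.

        P is a 3x4 projection matrix; the camera center is -M^-1 p4
        with P = [M | p4].
        """
        M, p4 = P[:, :3], P[:, 3]
        origin = -np.linalg.solve(M, p4)
        direction = np.linalg.solve(M, np.array([uv[0], uv[1], 1.0]))
        return origin, direction / np.linalg.norm(direction)

    def snap_to_centerline(origin, direction, centerline):
        """Closest centerline sample (N x 3 array) to the backprojected ray."""
        diffs = centerline - origin
        along = diffs @ direction                      # scalar projection onto the ray
        closest = origin + np.outer(along, direction)  # foot points on the ray
        dists = np.linalg.norm(centerline - closest, axis=1)
        return centerline[np.argmin(dists)]
    ```

    In the paper, this nearest-point choice is replaced by a statistical estimate that accounts for projection error and vessel deformation.
    
    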

  2. Recent developments in DFD (depth-fused 3D) display and arc 3D display

    NASA Astrophysics Data System (ADS)

    Suyama, Shiro; Yamamoto, Hirotsugu

    2015-05-01

    We report our recent developments in DFD (depth-fused 3D) display and arc 3D display, both of which have smooth motion parallax. First, the fatigue-free DFD display, composed of only two layered displays with a gap, provides continuous perceived depth by changing the luminance ratio between the two images. Two new methods, called "edge-based DFD display" and "deep DFD display", have been proposed to solve two severe problems: the viewing-angle and perceived-depth limitations. The edge-based DFD display, layering an original 2D image and its edge part with a gap, can expand the DFD viewing-angle limitation in both 2D and 3D perception. The deep DFD display can enlarge the DFD image depth by modulating the spatial frequencies of the front and rear images. Second, the arc 3D display can provide floating 3D images behind or in front of the display by illuminating many arc-shaped directional scattering sources, for example, arc-shaped scratches on a flat board. A curved arc 3D display, composed of many directional scattering sources on a curved surface, can provide a peculiar 3D image, for example, an image floating in a cylindrical bottle. A new active device has been proposed for switching arc 3D images by using the tips of dual-frequency liquid-crystal prisms as directional scattering sources. The directional scattering can be switched on and off by changing the liquid-crystal refractive index, resulting in switching of the arc 3D image.

  3. 'Endurance' Untouched (3-D)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site] Figure 1 [figure removed for brevity, see original site] Figure 2

    This navigation camera mosaic, created from images taken by NASA's Mars Exploration Rover Opportunity on sols 115 and 116 (May 21 and 22, 2004) provides a dramatic view of 'Endurance Crater.' The rover engineering team carefully plotted the safest path into the football field-sized crater, eventually easing the rover down the slopes around sol 130 (June 12, 2004). To the upper left of the crater sits the rover's protective heatshield, which sheltered Opportunity as it passed through the martian atmosphere. The 360-degree, stereo view is presented in a cylindrical-perspective projection, with geometric and radiometric seam correction.

    Figure 1 is the left-eye view of a stereo pair and Figure 2 is the right-eye view of a stereo pair.

  4. Photogrammetric measurement of 3D freeform millimetre-sized objects with micro features: an experimental validation of the close-range camera calibration model for narrow angles of view

    NASA Astrophysics Data System (ADS)

    Percoco, Gianluca; Sánchez Salmerón, Antonio J.

    2015-09-01

    The measurement of millimetre- and micro-scale features is performed by high-cost systems based on technologies with narrow working ranges that accurately control the position of the sensors. Photogrammetry would lower the cost of 3D inspection of micro-features and would also be applicable to the inspection of non-removable micro parts of large objects. Unfortunately, the behaviour of photogrammetry when applied to micro-features is not known. In this paper, the authors address these issues with a view to applying digital close-range photogrammetry (DCRP) at the micro-scale, taking into account research papers stating that an angle of view (AOV) of around 10° is the lower limit for the traditional pinhole close-range calibration model (CRCM), which is the basis of DCRP. First, a general calibration procedure is introduced, with the aid of an open-source software library, to calibrate narrow-AOV cameras with the CRCM. Subsequently, the procedure is validated using a reflex camera with a 60 mm macro lens, equipped with extension tubes (20 and 32 mm) achieving magnification of up to approximately 2x, to verify the literature findings with experimental photogrammetric 3D measurements of millimetre-sized objects with micro-features. The limitation of the laser printing technology used to produce the two-dimensional pattern on common paper has been overcome using an accurate pattern manufactured with a photolithographic process. The results of the experimental activity prove that the CRCM is valid for AOVs down to 3.4° and that DCRP results are comparable with those of existing, more expensive commercial techniques.

  5. ShowMe3D

    SciTech Connect

    Sinclair, Michael B

    2012-01-05

    ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral image obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image. The user can select any pixel or region from the displayed image and display the fluorescence spectrum associated with that pixel or region. The user can define up to 3 spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as intensity average and variance from selected regions.
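    The spectral-filter and region-statistics features described above might look like the following on a hyperspectral cube; the array shapes and function names are invented for illustration and are not the ShowMe3D API:

    ```python
    import numpy as np

    def apply_spectral_filter(cube, transmission):
        """Weight each band of an H x W x bands cube by a filter
        transmission curve and sum over bands, emulating the image a
        filter-based confocal microscope would record."""
        return np.tensordot(cube, transmission, axes=([2], [0]))

    def region_stats(cube, mask):
        """Mean and variance of summed intensity over a boolean region mask."""
        intensities = cube[mask].sum(axis=1)   # per-pixel summed spectrum
        return intensities.mean(), intensities.var()
    ```

    Up to three such transmission curves could be applied in turn to produce the three pseudo-channel views the description mentions.
    
    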

  6. ShowMe3D

    2012-01-05

    ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral image obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image. The user can select any pixel or region from the displayed image and display the fluorescence spectrum associated with that pixel or region. The user can define up to 3 spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as intensity average and variance from selected regions.

  7. 3D-printed microfluidic devices.

    PubMed

    Amin, Reza; Knowlton, Stephanie; Hart, Alexander; Yenilmez, Bekir; Ghaderinezhad, Fariba; Katebifar, Sara; Messina, Michael; Khademhosseini, Ali; Tasoglu, Savas

    2016-06-20

    Microfluidics is a flourishing field, enabling a wide range of biochemical and clinical applications such as cancer screening, micro-physiological system engineering, high-throughput drug testing, and point-of-care diagnostics. However, fabrication of microfluidic devices is often complicated, time consuming, and requires expensive equipment and sophisticated cleanroom facilities. Three-dimensional (3D) printing presents a promising alternative to traditional techniques such as lithography and PDMS-glass bonding, not only by enabling rapid design iterations in the development stage, but also by reducing the costs associated with institutional infrastructure, equipment installation, maintenance, and physical space. With the recent advancements in 3D printing technologies, highly complex microfluidic devices can be fabricated via single-step, rapid, and cost-effective protocols, making microfluidics more accessible to users. In this review, we discuss a broad range of approaches for the application of 3D printing technology to fabrication of micro-scale lab-on-a-chip devices.

  8. The rendering context for stereoscopic 3D web

    NASA Astrophysics Data System (ADS)

    Chen, Qinshui; Wang, Wenmin; Wang, Ronggang

    2014-03-01

    3D technologies on the Web have been studied for many years, but they are basically monoscopic 3D. With stereoscopic technology gradually maturing, we are researching how to integrate binocular 3D technology into the Web, creating a stereoscopic 3D browser that will provide users with a brand new experience of human-computer interaction. In this paper, we propose a novel approach to applying stereoscopic technology to CSS3 3D Transforms. Under our model, each element can create or participate in a stereoscopic 3D rendering context, in which 3D Transforms such as scaling, translation and rotation can be applied and perceived in a truly 3D space. We first discuss the underlying principles of stereoscopy. After that we discuss how these principles can be applied to the Web. A stereoscopic 3D browser with backward compatibility has also been created for demonstration purposes. We take advantage of the open-source WebKit project, integrating 3D display ability into the rendering engine of the web browser. For each 3D web page, our 3D browser creates two slightly different images, representing the left-eye and right-eye views, which are combined on the 3D display to generate the illusion of depth. As the results show, elements can be manipulated in a truly 3D space.
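    The core of generating the two eye views can be sketched as a perspective projection from two horizontally offset camera positions. This is a geometric illustration only, not WebKit rendering-engine code; the interocular distance and focal length are assumed values:

    ```python
    def project(point, eye_x, focal=1.0):
        """Perspective-project a 3D point (x, y, z), z > 0, for a camera
        shifted horizontally to eye_x."""
        x, y, z = point
        return (focal * (x - eye_x) / z, focal * y / z)

    def stereo_pair(point, interocular=0.06):
        """Left-eye and right-eye screen positions of one 3D point."""
        left = project(point, -interocular / 2)
        right = project(point, +interocular / 2)
        return left, right
    ```

    A nearer point yields a larger left/right screen offset (disparity), which is exactly the depth cue the stereoscopic display reproduces.
    
    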

  9. Digital relief generation from 3D models

    NASA Astrophysics Data System (ADS)

    Wang, Meili; Sun, Yu; Zhang, Hongming; Qian, Kun; Chang, Jian; He, Dongjian

    2016-09-01

    It is difficult to extend image-based relief generation to high-relief generation, as the images contain insufficient height information. To generate reliefs from three-dimensional (3D) models, it is necessary to extract the height fields from the model, but this can only generate bas-reliefs. To overcome this problem, an efficient method is proposed to generate bas-reliefs and high-reliefs directly from 3D meshes. To produce relief features that are visually appropriate, the 3D meshes are first scaled. 3D unsharp masking is used to enhance the visual features in the 3D mesh, and average smoothing and Laplacian smoothing are implemented to achieve better smoothing results. A nonlinear variable scaling scheme is then employed to generate the final bas-reliefs and high-reliefs. Using the proposed method, relief models can be generated from arbitrary viewing positions with different gestures and combinations of multiple 3D models. The generated relief models can be printed by 3D printers. The proposed method provides a means of generating both high-reliefs and bas-reliefs in an efficient and effective way under the appropriate scaling factors.
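    Two steps of the pipeline above, 3D unsharp masking and nonlinear variable scaling, can be sketched on a height field as a simplification (the paper operates on full 3D meshes; the smoothing kernel and exponent here are assumptions):

    ```python
    import numpy as np

    def unsharp_mask(height, amount=0.5):
        """Boost fine surface features by adding back the difference
        between the height field and a smoothed copy of it."""
        # simple 4-neighbour average (wrapping) as the smoothing step
        smooth = (np.roll(height, 1, 0) + np.roll(height, -1, 0)
                  + np.roll(height, 1, 1) + np.roll(height, -1, 1)) / 4.0
        return height + amount * (height - smooth)

    def nonlinear_scale(height, gamma=0.5):
        """Compress large heights more than small ones, the kind of
        variable scaling used to flatten a model toward a bas-relief."""
        h = height - height.min()
        peak = h.max()
        return peak * (h / peak) ** gamma if peak > 0 else h
    ```

    Choosing gamma near 1 preserves depth (high-relief), while small gamma flattens the range (bas-relief).
    
    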

  10. Eyes on the Earth 3D

    NASA Technical Reports Server (NTRS)

    Kulikov, Anton I.; Doronila, Paul R.; Nguyen, Viet T.; Jackson, Randal K.; Greene, William M.; Hussey, Kevin J.; Garcia, Christopher M.; Lopez, Christian A.

    2013-01-01

    Eyes on the Earth 3D software gives scientists and the general public a real-time, interactive 3D means of accurately viewing the locations, speed, and values of recently collected data from several of NASA's Earth Observing satellites using a standard Web browser (climate.nasa.gov/eyes). Anyone with Web access can use this software to see where NASA's fleet of these satellites is now, or where it will be up to a year in the future. The software also displays several Earth science data sets that have been collected on a daily basis. The application uses a third-party, real-time, interactive 3D game engine called Unity 3D to visualize the satellites and is accessible from a Web browser.

  11. Nonlaser-based 3D surface imaging

    SciTech Connect

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J.

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of objects in a scene under view. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that make alternative methods sometimes more attractive are: (1) real-time data capture, (2) eye safety, (3) portability, and (4) working distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.
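    For the stereo-vision route mentioned above, depth recovery for rectified cameras reduces to triangulation: Z = f * B / d, with focal length f (pixels), baseline B, and disparity d (pixels). A minimal sketch with illustrative values:

    ```python
    def depth_from_disparity(focal_px, baseline_m, disparity_px):
        """Depth of a point from its disparity between two rectified views."""
        if disparity_px <= 0:
            raise ValueError("disparity must be positive")
        return focal_px * baseline_m / disparity_px

    # e.g. f = 800 px, B = 0.1 m, d = 16 px gives Z = 5 m
    ```

    The inverse relation between disparity and depth is why stereo accuracy degrades for distant surfaces, one of the trade-offs against laser-based scanners.
    
    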

  12. Modeling Cellular Processes in 3-D

    PubMed Central

    Mogilner, Alex; Odde, David

    2011-01-01

    Summary Recent advances in photonic imaging and fluorescent protein technology offer unprecedented views of molecular space-time dynamics in living cells. At the same time, advances in computing hardware and software enable modeling of ever more complex systems, from global climate to cell division. As modeling and experiment become more closely integrated, we must address the issue of modeling cellular processes in 3-D. Here, we highlight recent advances related to 3-D modeling in cell biology. While some processes require full 3-D analysis, we suggest that others are more naturally described in 2-D or 1-D. Keeping the dimensionality as low as possible reduces computational time and makes models more intuitively comprehensible; however, the ability to test full 3-D models will build greater confidence in models generally and remains an important emerging area of cell biological modeling. PMID:22036197

  13. 360-degree panorama in 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    This 360-degree panorama was taken in stereo by the deployed Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses (red left lens, blue right lens) are necessary to help identify surface detail. All three petals, the perimeter of the deflated airbags, deployed rover Sojourner, forward and backward ramps and prominent surface features are visible, including the double Twin Peaks at the horizon. Sojourner would later investigate the rock Barnacle Bill just to its left in this image, and the larger rock Yogi at its forward right.
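    The red/blue anaglyph encoding implied by the caption (red filter over the left eye, blue over the right) can be sketched by taking the red channel from the left image and the blue channel from the right; this is a generic illustration, not the IMP processing pipeline:

    ```python
    import numpy as np

    def anaglyph(left_rgb, right_rgb):
        """Combine left/right RGB views (H x W x 3 arrays in [0, 1])
        into a red-left / blue-right anaglyph."""
        out = np.zeros_like(left_rgb)
        out[..., 0] = left_rgb[..., 0]    # red channel from the left view
        out[..., 2] = right_rgb[..., 2]   # blue channel from the right view
        return out
    ```

    Viewed through the matching glasses, each eye receives mostly its own view, which the brain fuses into depth.
    
    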

    The IMP is a stereo imaging system with color capability provided by 24 selectable filters -- twelve filters per 'eye.' It stands 1.8 meters above the Martian surface, and has a resolution of two millimeters at a range of two meters. Stereoscopic imaging brings exceptional clarity and depth to many of the features in this image, particularly the ridge beyond the far left petal and the large rock Yogi. The curvature and misalignment of several sections are due to image parallax.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is a division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

    Click below to see the left and right views individually. [figure removed for brevity, see original site] Left [figure removed for brevity, see original site] Right

  14. Inflammation in 3D.

    PubMed

    Kobayashi, Scott D; DeLeo, Frank R

    2012-06-14

    Our view of the response to infection is limited by current methodologies, which provide minimal spatial information on the systemic inflammatory response. In this issue, Attia et al. (2012) describe a cutting-edge approach to image the inflammatory response to infection, which includes identification of host proteins in three dimensions. PMID:22704615

  15. [3D virtual endoscopy of heart].

    PubMed

    Du, Aan; Yang, Xin; Xue, Haihong; Yao, Liping; Sun, Kun

    2012-10-01

    In this paper, we present a virtual endoscopy (VE) system for the diagnosis of heart diseases that is efficient, affordable, and easy to popularize for viewing the interior of the heart. Dual-source CT (DSCT) data were used as the primary data in our system. The 3D structure of the virtual heart was reconstructed with 3D texture mapping on the graphics processing unit (GPU) and can be displayed dynamically in real time. During real-time display, we can observe not only the inside of the heart chambers but also views from new angles, using 3D data clipped according to the doctor's wishes. For observation we provide both an interactive mode and an automatic mode. In the automatic mode, we use Dijkstra's algorithm, with the 3D Euclidean distance as the weighting factor, to find the view path quickly, and the view path is then used to compute the four-chamber plane. PMID:23198444
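    The view-path search described above can be sketched as Dijkstra's algorithm over a graph of 3D points, taking the straight-line Euclidean distance as the edge weight (our reading of the abstract's distance weighting). The graph below is a toy example, not heart geometry:

    ```python
    import heapq, math

    def dijkstra_3d(points, edges, start, goal):
        """Shortest path between 3D points; `points` maps node -> (x, y, z),
        `edges` maps node -> iterable of neighbour nodes."""
        dist = lambda a, b: math.dist(points[a], points[b])
        best, prev = {start: 0.0}, {}
        heap = [(0.0, start)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == goal:
                break
            if d > best.get(u, math.inf):
                continue                      # stale heap entry
            for v in edges.get(u, ()):
                nd = d + dist(u, v)
                if nd < best.get(v, math.inf):
                    best[v], prev[v] = nd, u
                    heapq.heappush(heap, (nd, v))
        path, node = [goal], goal             # walk predecessors back to start
        while node != start:
            node = prev[node]
            path.append(node)
        return path[::-1], best[goal]
    ```

    A direct edge of length 2 beats a two-hop detour of length 2*sqrt(2), so the search prefers geometrically short fly-through paths.
    
    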

  16. 'Diamond' in 3-D

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D, microscopic imager mosaic of a target area on a rock called 'Diamond Jenness' was taken after NASA's Mars Exploration Rover Opportunity ground into the surface with its rock abrasion tool for a second time.

    Opportunity has bored nearly a dozen holes into the inner walls of 'Endurance Crater.' On sols 177 and 178 (July 23 and July 24, 2004), the rover worked double-duty on Diamond Jenness. Surface debris and the bumpy shape of the rock resulted in a shallow and irregular hole, only about 2 millimeters (0.08 inch) deep. The final depth was not enough to remove all the bumps and leave a neat hole with a smooth floor. This extremely shallow depression was then examined by the rover's alpha particle X-ray spectrometer.

    On Sol 178, Opportunity's 'robotic rodent' dined on Diamond Jenness once again, grinding almost an additional 5 millimeters (about 0.2 inch). The rover then applied its Moessbauer spectrometer to the deepened hole. This double dose of Diamond Jenness enabled the science team to examine the rock at varying layers. Results from those grindings are currently being analyzed.

    The image mosaic is about 6 centimeters (2.4 inches) across.

  17. 3-D movies using microprocessor-controlled optoelectronic spectacles

    NASA Astrophysics Data System (ADS)

    Jacobs, Ken; Karpf, Ron

    2012-02-01

    Despite rapid advances in technology, 3-D movies are impractical for general movie viewing. A new approach that opens all content for casual 3-D viewing is needed. 3Deeps--advanced microprocessor controlled optoelectronic spectacles--provides such a new approach to 3-D. 3Deeps works on a different principle than other methods for 3-D. 3-D movies typically use the asymmetry of dual images to produce stereopsis, necessitating costly dual-image content, complex formatting and transmission standards, and viewing via a corresponding selection device. In contrast, all 3Deeps requires to view movies in realistic depth is an illumination asymmetry--a controlled difference in optical density between the lenses. When a 2-D movie has been projected for viewing, 3Deeps converts every scene containing lateral motion into realistic 3-D. Put on 3Deeps spectacles for 3-D viewing, or remove them for viewing in 2-D. 3Deeps works for all analogue and digital 2-D content, by any mode of transmission, and for projection screens, digital or analogue monitors. An example using aerial photography is presented. A movie consisting of successive monoscopic aerial photographs appears in realistic 3-D when viewed through 3Deeps spectacles.
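    The usual geometric account of illumination-asymmetry 3D (the Pulfrich effect) is that the darker lens delays that eye's visual signal, so an object moving laterally is seen by the two eyes at slightly different positions, which the brain fuses as disparity. Whether 3Deeps relies on exactly this mechanism is our assumption; the sketch below only illustrates the geometry:

    ```python
    def apparent_disparity_deg(angular_speed_deg_s, delay_ms):
        """Apparent disparity (degrees) from an interocular signal delay:
        an object sweeping at w deg/s with a dt delay is seen w*dt apart."""
        return angular_speed_deg_s * delay_ms / 1000.0

    # e.g. 20 deg/s lateral motion with a 15 ms delay: 0.3 deg of disparity
    ```

    This is why only scenes with lateral motion acquire depth, matching the text's statement that 3Deeps converts "every scene containing lateral motion" into 3D.
    
    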

  18. Locomotive wheel 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Guan, Xin; Luo, Zhisheng; Gao, Xiaorong; Wu, Jianle

    2010-08-01

    In this article, a system used to reconstruct locomotive wheels is described, helping workers assess the condition of a wheel through a direct view. The system consists of a line laser, a 2D camera, and a computer. The 2D camera captures the laser line reflected by the object, a wheel, and the coordinates of the structured light are then computed. Finally, using the Matlab programming language, we transform the point coordinates into a smooth surface and render a 3D view of the wheel. The article also describes the system structure, processing steps and methods, and an experimental platform set up to verify the design. We verified the feasibility of the whole process and analyzed the results against standard data. The test results show that the system works well and reconstructs with high accuracy. Because no such application is yet in use in the railway industry, it has practical value for railway inspection systems.
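    The laser-line/camera combination above works by triangulation: the laser plane and the camera ray to an imaged laser pixel intersect at the surface point. A minimal 2D cross-section sketch, with all parameter values illustrative rather than taken from the paper:

    ```python
    import math

    def laser_depth(baseline_m, laser_angle_rad, pixel_angle_rad):
        """Depth of the surface point where a laser ray (angle from the
        baseline at the laser) meets the camera ray (angle from the
        baseline at the camera), the two devices being baseline_m apart.
        Intersecting z = x*tan(a) with z = (b - x)*tan(c) gives
        z = b * tan(a) * tan(c) / (tan(a) + tan(c))."""
        ta, tc = math.tan(laser_angle_rad), math.tan(pixel_angle_rad)
        return baseline_m * ta * tc / (ta + tc)
    ```

    Sweeping the wheel (or the sensor) past the line and repeating this per pixel yields the point cloud that the surface is fitted to.
    
    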

  19. Miniaturized 3D microscope imaging system

    NASA Astrophysics Data System (ADS)

    Lan, Yung-Sung; Chang, Chir-Weei; Sung, Hsin-Yueh; Wang, Yen-Chang; Chang, Cheng-Yi

    2015-05-01

    We designed and assembled a portable 3-D miniature microscope imaging system with a size of 35x35x105 mm3. By integrating a microlens array (MLA) into the optical train of a handheld microscope, an image of the biological specimen can be captured in a single shot for ease of use. With the light-field raw data and software, the focal plane can be changed digitally and a 3-D image can be reconstructed after the image is taken. To localize an object in a 3-D volume, an automated data-analysis algorithm that precisely distinguishes depth position is needed. The ability to create focal stacks from a single image allows moving specimens to be recorded. Applying a light-field microscope algorithm to these focal stacks produces a set of cross sections, which can be visualized using 3-D rendering. Furthermore, we have developed a series of design rules to improve pixel-usage efficiency and reduce crosstalk between microlenses in order to obtain good image quality. In this paper, we demonstrate a handheld light field microscope (HLFM) that distinguishes two different-color fluorescent particles separated by a cover glass over a 600 um range, show its focal stacks, and determine 3-D position.
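    Digital refocusing of light-field data is commonly done by shift-and-add: each sub-aperture view is translated in proportion to its lens offset and a focus parameter, then averaged. The sketch below illustrates that generic technique, not the paper's specific reconstruction algorithm; offsets and the focus parameter are assumptions:

    ```python
    import numpy as np

    def refocus(views, offsets, alpha):
        """Shift-and-add refocus: `views` is a list of equal-shape 2D
        arrays, `offsets` the (du, dv) lens offset of each view, and
        `alpha` selects the synthetic focal plane."""
        acc = np.zeros(views[0].shape, dtype=float)
        for img, (du, dv) in zip(views, offsets):
            shift = (int(round(alpha * dv)), int(round(alpha * du)))
            acc += np.roll(img, shift, axis=(0, 1))
        return acc / len(views)
    ```

    Features at the depth matching alpha align across the shifted views and stay sharp, while features at other depths blur out, producing the focal stack.
    
    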

  20. Image quality of up-converted 2D video from frame-compatible 3D video

    NASA Astrophysics Data System (ADS)

    Speranza, Filippo; Tam, Wa James; Vázquez, Carlos; Renaud, Ronald; Blanchfield, Phil

    2011-03-01

    In the stereoscopic frame-compatible format, the separate high-definition left and high-definition right views are reduced in resolution and packed to fit within the same video frame as a conventional two-dimensional high-definition signal. This format has been suggested for 3DTV since it does not require additional transmission bandwidth and entails only small changes to the existing broadcasting infrastructure. In some instances, the frame-compatible format might be used to deliver both 2D and 3D services, e.g., for over-the-air television services. In those cases, the video quality of the 2D service is bound to decrease since the 2D signal will have to be generated by up-converting one of the two views. In this study, we investigated such loss by measuring the perceptual image quality of 1080i and 720p up-converted video as compared to that of full resolution original 2D video. The video was encoded with either a MPEG-2 or a H.264/AVC codec at different bit rates and presented for viewing with either no polarized glasses (2D viewing mode) or with polarized glasses (3D viewing mode). The results confirmed a loss of video quality of the 2D video up-converted material. The loss due to the sampling processes inherent to the frame-compatible format was rather small for both 1080i and 720p video formats; the loss became more substantial with encoding, particularly for MPEG-2 encoding. The 3D viewing mode provided higher quality ratings, possibly because the visibility of the degradations was reduced.
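    The side-by-side packing and 2D up-conversion discussed above can be sketched as follows: each view is horizontally subsampled by 2 and packed into one frame, and a 2D service recovers one view by repeating columns. Real broadcast chains use filtered subsampling and interpolation rather than this nearest-neighbour illustration:

    ```python
    import numpy as np

    def pack_side_by_side(left, right):
        """Drop every other column of each view and pack them into one
        frame of the original width (side-by-side frame-compatible)."""
        return np.hstack([left[:, ::2], right[:, ::2]])

    def upconvert_2d(packed):
        """Recover a full-width 2D signal from one half of the packed
        frame by repeating columns (nearest-neighbour up-conversion)."""
        half = packed[:, : packed.shape[1] // 2]   # take the left view
        return np.repeat(half, 2, axis=1)          # restore full width
    ```

    The horizontal detail discarded by the subsampling step is exactly the resolution loss the study measured in the up-converted 2D service.
    
    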

  1. Gravity and spatial orientation in virtual 3D-mazes.

    PubMed

    Vidal, Manuel; Lipshits, Mark; McIntyre, Joseph; Berthoz, Alain

    2003-01-01

    In order to bring new insights into the processing of 3D spatial information, we conducted experiments on the capacity of human subjects to memorize 3D-structured environments, such as buildings with several floors or the potentially complex 3D structure of an orbital space station. Subjects moved passively, in one of two different exploration modes, through a visual virtual environment consisting of a series of connected tunnels. In upright displacement, self-rotation when going around corners in the tunnels was limited to yaw rotations. For horizontal translations, subjects faced forward in the direction of motion. When moving up or down through vertical segments of the 3D tunnels, however, subjects faced the tunnel wall, remaining upright as if moving up and down in a glass elevator. In the unconstrained displacement mode, subjects would appear to climb or dive face-forward when moving vertically; thus, in this mode subjects could experience visual flow consistent with rotations about any of the three canonical axes. In a previous experiment, subjects were asked to determine whether a static, outside view of a test tunnel corresponded to the tunnel through which they had just passed. Results showed that performance on this task was better for the upright than for the unconstrained displacement mode, i.e. when subjects remained "upright" with respect to the virtual environment as defined by the subject's posture in the first segment. This effect suggests that gravity may provide a key reference frame used in the shift between egocentric and allocentric representations of the 3D virtual world. To check whether it is the polarizing effect of gravity that leads to the favoring of the upright displacement mode, the experimental paradigm was adapted for orbital flight and performed by cosmonauts onboard the International Space Station. For these flight experiments the previous recognition task was replaced by a computerized reconstruction task, which proved

  2. 3-D transient analysis of pebble-bed HTGR by TORT-TD/ATTICA3D

    SciTech Connect

    Seubert, A.; Sureda, A.; Lapins, J.; Buck, M.; Bader, J.; Laurien, E.

    2012-07-01

    As most of the acceptance criteria are local core parameters, application of transient 3-D fine mesh neutron transport and thermal hydraulics coupled codes is mandatory for best estimate evaluations of safety margins. This also applies to high-temperature gas cooled reactors (HTGR). Application of 3-D fine-mesh transient transport codes using few energy groups coupled with 3-D thermal hydraulics codes becomes feasible in view of increasing computing power. This paper describes the discrete ordinates based coupled code system TORT-TD/ATTICA3D that has recently been extended by a fine-mesh diffusion solver. Based on transient analyses for the PBMR-400 design, the transport/diffusion capabilities are demonstrated and 3-D local flux and power redistribution effects during a partial control rod withdrawal are shown. (authors)

  3. Aging kinetics of levoglucosan orientational glass as a rate dispersion process and consequences for the heterogeneous dynamics view.

    PubMed

    Righetti, Maria Cristina; Tombari, Elpidio; Johari, G P

    2016-08-01

    Aging kinetics of a glass is currently modeled in terms of slowing of its α-relaxation dynamics, whose features are interpreted in terms of dynamic heterogeneity, i.e., formation and decay of spatially and temporally distinct nm-size regions. To test the merits of this view, we studied the calorimetric effects of aging an orientational glass of levoglucosan crystal in which such regions would not form in the same way as they form in liquids, and persist in structural glasses, because there is no liquid-like molecular diffusion in the crystal. By measuring the heat capacity, Cp, we determined the change in the enthalpy, H, and the entropy, S, during two aging-protocols: (a) keeping the samples isothermally at temperature, Ta, and measuring the changes after different aging times, ta, and (b) keeping the samples at different Tas and measuring the changes after the same ta. A model-free analysis of the data shows that as ta is increased (procedure (a)), H and S decrease according to a dispersive rate kinetics, and as Ta is increased (procedure (b)), H and S first increase, reach a local maximum at a certain Ta, and then decrease. Even though there is no translational diffusion to produce (liquid-like) free volume, and no translational-rotational decoupling, the aging features are indistinguishable from those of structural glasses. We also find that the Kohlrausch parameter, originally fitted to the glass-aging data, decreases with decrease in Ta, which is incompatible with the current use of the aging data for estimating the α-relaxation time. We argue that the vibrational state of a glass is naturally incompatible with its configurational state, and both change on aging until they are compatible, in the equilibrium liquid. So, dipolar fluctuations seen as the α-relaxation would not be the same motions that cause aging. We suggest that aging kinetics is intrinsically dispersive with its own characteristic rate constant and it does not yield the α-relaxation rate
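    For reference, the Kohlrausch stretched-exponential function conventionally fitted to such relaxation and aging data, with β the Kohlrausch parameter discussed above and τ a characteristic relaxation time, is:

    ```latex
    \phi(t) = \exp\!\left[-\left(\frac{t}{\tau}\right)^{\beta}\right],
    \qquad 0 < \beta \le 1,
    ```

    where β = 1 recovers simple exponential (Debye) relaxation and smaller β corresponds to a broader dispersion of rates.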

  4. Aging kinetics of levoglucosan orientational glass as a rate dispersion process and consequences for the heterogeneous dynamics view

    NASA Astrophysics Data System (ADS)

    Righetti, Maria Cristina; Tombari, Elpidio; Johari, G. P.

    2016-08-01

    Aging kinetics of a glass is currently modeled in terms of slowing of its α-relaxation dynamics, whose features are interpreted in terms of dynamic heterogeneity, i.e., formation and decay of spatially and temporally distinct nm-size regions. To test the merits of this view, we studied the calorimetric effects of aging an orientational glass of levoglucosan crystal in which such regions would not form in the same way as they form in liquids, and persist in structural glasses, because there is no liquid-like molecular diffusion in the crystal. By measuring the heat capacity, Cp, we determined the change in the enthalpy, H, and the entropy, S, during two aging-protocols: (a) keeping the samples isothermally at temperature, Ta, and measuring the changes after different aging times, ta, and (b) keeping the samples at different Tas and measuring the changes after the same ta. A model-free analysis of the data shows that as ta is increased (procedure (a)), H and S decrease according to a dispersive rate kinetics, and as Ta is increased (procedure (b)), H and S first increase, reach a local maximum at a certain Ta, and then decrease. Even though there is no translational diffusion to produce (liquid-like) free volume, and no translational-rotational decoupling, the aging features are indistinguishable from those of structural glasses. We also find that the Kohlrausch parameter, originally fitted to the glass-aging data, decreases with decrease in Ta, which is incompatible with the current use of the aging data for estimating the α-relaxation time. We argue that the vibrational state of a glass is naturally incompatible with its configurational state, and both change on aging until they are compatible, in the equilibrium liquid. So, dipolar fluctuations seen as the α-relaxation would not be the same motions that cause aging. We suggest that aging kinetics is intrinsically dispersive with its own characteristic rate constant and it does not yield the α-relaxation rate

  6. CASTLE3D - A Computer Aided System for Labelling Archaeological Excavations in 3D

    NASA Astrophysics Data System (ADS)

    Houshiar, H.; Borrmann, D.; Elseberg, J.; Nüchter, A.; Näth, F.; Winkler, S.

    2015-08-01

    Documentation of archaeological excavation sites with conventional methods and tools such as hand drawings, measuring tape and archaeological notes is time consuming. This process is prone to human errors, and the quality of the documentation depends on the qualification of the archaeologist on site. Use of modern technology and methods in 3D surveying and 3D robotics facilitates and improves this process. Computer-aided systems and databases improve the documentation quality and increase the speed of data acquisition. 3D laser scanning is the state of the art in modelling archaeological excavation sites, historical sites and even entire cities or landscapes. Modern laser scanners are capable of acquiring up to 1 million points per second, providing a very detailed 3D point cloud of the environment. 3D point clouds and 3D models of an excavation site give the archaeologist a better representation of the environment for study and documentation. The point cloud can be used both for further studies on the excavation and for the presentation of results. This paper introduces a computer-aided system for labelling archaeological excavations in 3D (CASTLE3D). Consisting of a set of tools for recording and georeferencing the 3D data from an excavation site, CASTLE3D represents a novel documentation approach in industrial archaeology. It provides 2D and 3D visualisations of the data and an easy-to-use interface that enables the archaeologist to select regions of interest and to interact with the data in both representations. The 2D visualisation and a 3D orthogonal view of the data provide cuts of the environment that resemble traditional hand drawings, while the 3D perspective view gives a realistic view of the environment. CASTLE3D is designed as an easy-to-use on-site semantic mapping tool for archaeologists. Each project contains a predefined set of semantic information that can be used to label findings in the data. Multiple regions of interest can be joined under

  7. The hype cycle in 3D displays: inherent limits of autostereoscopy

    NASA Astrophysics Data System (ADS)

    Grasnick, Armin

    2013-06-01

    For the past several years, a renaissance of three-dimensional cinema has been observable. Even though stereoscopy has been quite popular over the last 150 years, 3D cinema has disappeared and re-established itself several times. The first boom, in the late 19th century, stagnated and vanished after a few years of success; the same happened again in the 1950s and 1980s of the 20th century. With the commercial success of the 3D blockbuster "Avatar" in 2009, at the latest, it is obvious that 3D cinema is having a comeback. How long will it last this time? There are already some signs of declining interest in 3D movies, as the discrepancy between expectations and the results delivered becomes more evident. From the former hypes it is known that after an initial phase of curiosity (high expectations and excessive fault tolerance), a phase of frustration and saturation (critical analysis and subsequent disappointment) will follow. This phenomenon is known as the "hype cycle". The everyday experience of evolving technology has conditioned consumers. The expectation that "any technical improvement will preserve all previous properties" cannot be fulfilled with present 3D technologies. This is an inherent problem of stereoscopy and autostereoscopy: the presentation of an additional dimension forces concessions in relevant characteristics (e.g. resolution, brightness, frequency, viewing area) or leads to undesirable physical side effects (e.g. subjective discomfort, eye strain, spatial disorientation, feeling of nausea). It will be shown that the 3D apparatus (3D glasses or 3D display) is itself the source of these restrictions and a reason for decreasing fascination. The limitations of present autostereoscopic technologies will be explained.

  8. 3D Display Using Conjugated Multiband Bandpass Filters

    NASA Technical Reports Server (NTRS)

    Bae, Youngsam; White, Victor E.; Shcheglov, Kirill

    2012-01-01

    Stereoscopic display techniques are based on the principle of displaying two views, with slightly different perspectives, in such a way that the left view is seen only by the left eye and the right view only by the right eye. However, one of the major challenges in optical devices is crosstalk between the two channels. Crosstalk occurs when the optical device does not completely block the wrong-side image, so the left eye sees a little of the right image and the right eye sees a little of the left image. This results in eyestrain and headaches. A pair of interference filters worn as eyewear can solve the problem. The device consists of a pair of multiband bandpass filters that are conjugated. The term "conjugated" means that the passband regions of one filter do not overlap with those of the other; instead, the regions are interdigitated. Along with the glasses, a 3D display produces colors composed of primary colors (the basis for producing colors) whose spectral bands match the passbands of the filters. More specifically, the primary colors producing one viewpoint are made up of the passbands of one filter, and those of the other viewpoint are made up of the passbands of the conjugated filter. Thus, the primary colors of one filter are seen only by the eye that has the matching multiband filter. The inherent characteristic of the interference filter allows little or no transmission of the wrong-side stereoscopic image.
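    The "conjugated" condition on the two filters can be stated as a small check: no passband of one filter may overlap any passband of the other, and the bands must alternate along the wavelength axis. The wavelength values in the test below are hypothetical, not the actual filter bands of this design.

    ```python
    def conjugated(bands_a, bands_b):
        """Return True if two sets of passbands are 'conjugated':
        non-overlapping and interdigitated along the wavelength axis.
        Bands are (low_nm, high_nm) tuples."""
        merged = sorted([(lo, hi, "A") for lo, hi in bands_a] +
                        [(lo, hi, "B") for lo, hi in bands_b])
        for (lo1, hi1, f1), (lo2, hi2, f2) in zip(merged, merged[1:]):
            if lo2 < hi1:      # overlapping passbands -> crosstalk
                return False
            if f1 == f2:       # two adjacent bands from the same filter
                return False   # -> not interdigitated
        return True
    ```

    A display driven with primaries drawn from one set of bands is then visible only through the matching filter, which is the crosstalk-suppression property the abstract describes.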

  9. R3D-2-MSA: the RNA 3D structure-to-multiple sequence alignment server

    PubMed Central

    Cannone, Jamie J.; Sweeney, Blake A.; Petrov, Anton I.; Gutell, Robin R.; Zirbel, Craig L.; Leontis, Neocles

    2015-01-01

    The RNA 3D Structure-to-Multiple Sequence Alignment Server (R3D-2-MSA) is a new web service that seamlessly links RNA three-dimensional (3D) structures to high-quality RNA multiple sequence alignments (MSAs) from diverse biological sources. In this first release, R3D-2-MSA provides manual and programmatic access to curated, representative ribosomal RNA sequence alignments from bacterial, archaeal, eukaryal and organellar ribosomes, using nucleotide numbers from representative atomic-resolution 3D structures. A web-based front end is available for manual entry and an Application Program Interface for programmatic access. Users can specify up to five ranges of nucleotides and 50 nucleotide positions per range. The R3D-2-MSA server maps these ranges to the appropriate columns of the corresponding MSA and returns the contents of the columns, either for display in a web browser or in JSON format for subsequent programmatic use. The browser output page provides a 3D interactive display of the query, a full list of sequence variants with taxonomic information and a statistical summary of distinct sequence variants found. The output can be filtered and sorted in the browser. Previous user queries can be viewed at any time by resubmitting the output URL, which encodes the search and re-generates the results. The service is freely available with no login requirement at http://rna.bgsu.edu/r3d-2-msa. PMID:26048960
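    A client of the programmatic interface described above might assemble its query as follows. The URL scheme and parameter encoding here are assumptions for illustration only; consult http://rna.bgsu.edu/r3d-2-msa for the actual API. The stated limits (five ranges, 50 positions per range) are from the abstract.

    ```python
    def build_r3d2msa_query(pdb_id, chain, ranges):
        """Sketch of a client-side query builder for the R3D-2-MSA service.
        `pdb_id`, `chain`, and the parameter name `units` are hypothetical
        placeholders; `ranges` is a list of (start, end) nucleotide numbers."""
        if len(ranges) > 5:
            raise ValueError("the server accepts at most five nucleotide ranges")
        for lo, hi in ranges:
            if hi - lo + 1 > 50:
                raise ValueError("at most 50 nucleotide positions per range")
        units = ",".join(f"{pdb_id}|{chain}|{lo}:{hi}" for lo, hi in ranges)
        return f"http://rna.bgsu.edu/r3d-2-msa?units={units}"
    ```

    Because the output URL encodes the search, resubmitting it re-generates the results, as noted in the abstract.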

  10. SOLIDFELIX: a transportable 3D static volume display

    NASA Astrophysics Data System (ADS)

    Langhans, Knut; Kreft, Alexander; Wörden, Henrik Tom

    2009-02-01

    Flat 2D screens cannot display complex 3D structures without resorting to different slices of the 3D model. Volumetric displays like the "FELIX 3D-Displays" can solve this problem. They provide space-filling images and are characterized by "multi-viewer" and "all-round view" capabilities without requiring cumbersome goggles. In the past, many scientists tried to develop similar 3D displays; our paper includes an overview from 1912 up to today. During several years of investigations on swept-volume displays within the "FELIX 3D-Projekt", we learned about some significant disadvantages of rotating screens, for example hidden zones. For this reason the FELIX team also started investigations in the area of static volume displays. Within three years of research on our 3D static volume display at a normal high school in Germany, we were able to achieve considerable results despite minor funding resources within this non-commercial group. The core element of our setup is the display volume, which consists of a cubic transparent material (crystal, glass, or polymers doped with special ions, mainly from the rare earth group, or other fluorescent materials). We focused our investigations on one-frequency, two-step upconversion (OFTS-UC) and two-frequency, two-step upconversion (TFTS-UC) with IR lasers as the excitation source. Our main interest was to find both an appropriate material and an appropriate doping for the display volume. Early experiments were carried out with CaF2 and YLiF4 crystals doped with 0.5 mol% Er3+ ions, which were excited in order to create a volumetric pixel (voxel). Such crystals are, however, limited to very small sizes, which is why we later investigated heavy-metal fluoride glasses, which are easier to produce in large sizes. Currently we are using a ZBLAN glass belonging to this group, making it possible to increase both the display volume and the brightness of the images significantly. Although our display is currently

  11. Wax-bonding 3D microfluidic chips.

    PubMed

    Gong, Xiuqing; Yi, Xin; Xiao, Kang; Li, Shunbo; Kodzius, Rimantas; Qin, Jianhua; Wen, Weijia

    2010-10-01

    We report a simple, low-cost and detachable microfluidic chip incorporating easily accessible paper, glass slides or other polymer films as the chip materials, along with adhesive wax as a recyclable bonding material. We use a laser to cut through the paper or film to form patterns and then sandwich the paper and film between glass sheets or polymer membranes. The hot-melt adhesive wax can realize bridge bonding between various materials, for example, paper, polymethylmethacrylate (PMMA) film, glass sheets, or metal plate. The bonding process is reversible and the wax is reusable through a melting and cooling process. With this process, a three-dimensional (3D) microfluidic chip is achievable by evacuating and venting the chip in a hot-water bath. To study the biocompatibility and applicability of the wax-based microfluidic chip, we first tested the PCR compatibility of the chip materials. Then we applied the wax-paper based microfluidic chip to HeLa cell electroporation (EP). Subsequently, a prototype of a 5-layer 3D chip was fabricated by multilayer wax bonding. To check the sealing ability and durability of the chip, green fluorescent protein (GFP) recombinant Escherichia coli (E. coli) bacteria were cultured, with which the chemotaxis of E. coli was studied in order to determine the influence of the antibiotic ciprofloxacin's concentration on E. coli migration.

  12. 3-D capaciflector

    NASA Technical Reports Server (NTRS)

    Vranish, John M. (Inventor)

    1998-01-01

    A capacitive-type proximity sensor having improved range and sensitivity between a surface of arbitrary shape and an intruding object in the vicinity of that surface. One or more outer conductors on the surface serve as capacitive sensing elements, shaped to conform to the underlying surface of a machine. Each sensing element is backed by a reflector driven at the same voltage and in phase with the corresponding capacitive sensing element. Each reflector, in turn, serves to reflect the electric field lines of the capacitive sensing element away from the surface of the machine on which the sensor is mounted, so as to enhance the component constituted by the capacitance between the sensing element and an intruding object as a fraction of the total capacitance between the sensing element and ground. Each sensing element and its corresponding reflecting element are electrically driven in phase, and the capacitance between each sensing element and the sensed object is determined using circuitry known to the art. The reflector may be shaped to shield the sensor and to shape its field of view, in effect providing an electrostatic lensing effect. Sensors and reflectors may be fabricated using a variety of known techniques such as vapor deposition, sputtering, painting, plating, or deformation of flexible films, to provide conformal coverage of surfaces of arbitrary shape.

  13. Recent development of 3D display technology for new market

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Sik

    2003-11-01

    A multi-view 3D video processor was designed and implemented with several FPGAs for real-time applications, and a projection-type 3D display was introduced for low-cost commercialization. One high-resolution projection panel and only one projection lens are capable of displaying multiview autostereoscopic images. The system can cope with various arrangements of 3D camera systems (or pixel arrays) and resolutions of 3D displays. It shows high 3-D image quality in terms of resolution, brightness, and contrast, so it is well suited for commercialization in the game and advertisement markets.

  14. Crowdsourcing Based 3d Modeling

    NASA Astrophysics Data System (ADS)

    Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T.

    2016-06-01

    Web-based photo albums that support organizing and viewing users' images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, users attach geotags to the images in order to enable their use, e.g., in location-based applications on social networks. Our paper discusses a procedure that collects open-access images from a site frequently visited by tourists. Geotagged pictures showing a sight or tourist attraction are selected and processed in photogrammetric processing software that produces a 3D model of the captured object. For this particular investigation we selected three attractions in Budapest. To assess the geometrical accuracy, we used a laser scanner and DSLR as well as smartphone photography to derive reference values for verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models can be derived by applying photogrammetric processing software simply to images of the community, without visiting the site.

  15. Simple, portable, 3-D projection routine

    SciTech Connect

    Wagner, J.S.

    1987-04-01

    A 3-D projection routine is presented for use in computer graphics applications. The routine is simple enough to be considered portable, and easily modified for special problems. There is often the need to draw three-dimensional objects on a two-dimensional plotting surface. For the object to appear realistic, perspective effects must be included that allow near objects to appear larger than distant objects. Several 3-D projection routines are commercially available, but they are proprietary, not portable, and not easily changed by the user. Most are restricted to surfaces that are functions of two variables. This makes them unsuitable for viewing physical objects such as accelerator prototypes or propagating beams. This report develops a very simple algorithm for 3-D projections; the core routine is only 39 FORTRAN lines long. It can be easily modified for special problems. Software dependent calls are confined to simple drivers that can be exchanged when different plotting software packages are used.
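    The kind of core routine the report describes can be sketched in a few lines. This is not the report's 39-line FORTRAN routine, only a minimal perspective projection under the stated assumption that the viewer sits on the +z axis at a fixed eye distance.

    ```python
    import numpy as np

    def project(points, eye_distance):
        """Minimal perspective projection of 3-D points onto the z=0 plane,
        with the viewer on the +z axis at `eye_distance` from the origin."""
        pts = np.asarray(points, dtype=float)
        # scale x and y by the ratio of eye distance to remaining depth, so
        # that nearer points (larger z) appear larger than distant ones
        scale = eye_distance / (eye_distance - pts[:, 2])
        return pts[:, :2] * scale[:, None]
    ```

    Because the projection is a standalone function, software-dependent plotting calls can be confined to simple drivers, as the report recommends.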

  16. Computer-aided 3D display system and its application in 3D vision test

    NASA Astrophysics Data System (ADS)

    Shen, XiaoYun; Ma, Lan; Hou, Chunping; Wang, Jiening; Tang, Da; Li, Chang

    1998-08-01

    The computer-aided 3D display system, a flicker-free field-sequential stereoscopic image display system, has been newly developed. This system is composed of a personal computer, a liquid crystal glasses driving card, stereoscopic display software and liquid crystal glasses. It can display field-sequential stereoscopic images at refresh rates of 70 Hz to 120 Hz. A typical application of this system, a 3D vision test system, is the main topic of this paper. This stereoscopic vision test system can quantitatively test stereoscopic acuity, crossed disparity, uncrossed disparity and dynamic stereoscopic vision. We used random-dot stereograms as stereoscopic vision test charts. Through a practical test comparing Anaglyph Stereoscopic Vision Test Charts with this stereoscopic vision test system, statistical figures and test results are presented.
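    A random-dot stereogram test chart of the kind mentioned above can be generated by shifting a hidden region between the two eyes' images. This is a generic Julesz-style sketch, not the authors' chart generator; the sizes and disparity below are illustrative parameters.

    ```python
    import numpy as np

    def random_dot_stereogram(size=128, square=32, disparity=4, seed=0):
        """Generate a left/right random-dot pair in which a central square
        carries binocular disparity and so appears at a different depth."""
        rng = np.random.default_rng(seed)
        left = rng.integers(0, 2, (size, size))
        right = left.copy()
        lo, hi = (size - square) // 2, (size + square) // 2
        # shift the central square horizontally in the right image to
        # create the disparity ...
        right[lo:hi, lo - disparity:hi - disparity] = left[lo:hi, lo:hi]
        # ... and fill the uncovered strip with fresh random dots
        right[lo:hi, hi - disparity:hi] = rng.integers(0, 2, (square, disparity))
        return left, right
    ```

    Presenting the two images field-sequentially through shuttered liquid crystal glasses lets only observers with functional stereopsis see the square float in depth, which is why such charts isolate stereoacuity from monocular cues.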

  17. Super long viewing distance light homogeneous emitting three-dimensional display

    NASA Astrophysics Data System (ADS)

    Liao, Hongen

    2015-04-01

    Three-dimensional (3D) display technology has continuously attracted public attention with the progress of today's 3D television and mature display technologies. The primary characteristics of conventional glasses-free autostereoscopic displays, such as spatial resolution, image depth, and viewing angle, are often limited by the use of optical lenses or gratings. We present a 3D display using a MEMS-scanning-mechanism-based light homogeneous emitting (LHE) approach and demonstrate that the display can directly generate an autostereoscopic 3D image without the need for optical lenses or gratings. The generated 3D image has the advantages of being aberration-free and of high-definition spatial resolution, making it the first to exhibit animated 3D images with an image depth of six meters. Our LHE 3D display approach can be used to produce a natural flat-panel 3D display with a super-long viewing distance and real-time image updates.

  18. 3D Elevation Program—Virtual USA in 3D

    USGS Publications Warehouse

    Lukas, Vicki; Stoker, J.M.

    2016-01-01

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called ‘lidar’ (light detection and ranging) to create a virtual reality map of the Nation that is very accurate. 3D maps have many uses with new uses being discovered all the time.  

  19. 3D Elevation Program—Virtual USA in 3D

    USGS Publications Warehouse

    Lukas, Vicki; Stoker, J.M.

    2016-04-14

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called ‘lidar’ (light detection and ranging) to create a virtual reality map of the Nation that is very accurate. 3D maps have many uses with new uses being discovered all the time.  

  20. Embedding objects during 3D printing to add new functionalities.

    PubMed

    Yuen, Po Ki

    2016-07-01

    A novel method for integrating and embedding objects to add new functionalities during 3D printing based on fused deposition modeling (FDM) (also known as fused filament fabrication or molten polymer deposition) is presented. Unlike typical 3D printing, FDM-based 3D printing allows objects to be integrated and embedded during printing, and FDM-based 3D printed devices do not typically require any post-processing or finishing. Thus, various fluidic devices with integrated glass cover slips or polystyrene films, with and without an embedded porous membrane, and optical devices with embedded Corning(®) Fibrance™ Light-Diffusing Fiber were 3D printed to demonstrate the versatility of the FDM-based 3D printing and embedding method. Fluid perfusion flow experiments with a blue food dye solution were used to visually confirm fluid flow and/or fluid perfusion through the embedded porous membrane in the 3D printed fluidic devices. Like typical 3D printed devices, FDM-based 3D printed devices are translucent at best unless post-polishing is performed, yet optical transparency is highly desirable in fluidic devices; integrated glass cover slips or polystyrene films provide a perfect optically transparent window for observation and visualization. In addition, they also provide a compatible flat, smooth surface for biological or biomolecular applications. The 3D printed fluidic devices with an embedded porous membrane are applicable to biological or chemical applications such as continuous perfusion cell culture or biocatalytic synthesis, without the need for any post-device assembly or finishing. The 3D printed devices with embedded Corning(®) Fibrance™ Light-Diffusing Fiber would have applications in display, illumination, or optical applications. Furthermore, the FDM-based 3D printing and embedding method could also be utilized to print casting molds with an integrated glass bottom for polydimethylsiloxane (PDMS) device replication

  2. [Evaluation of Motion Sickness Induced by 3D Video Clips].

    PubMed

    Matsuura, Yasuyuki; Takada, Hiroki

    2016-01-01

    The use of stereoscopic images has been spreading rapidly, and stereoscopic movies are nothing new to the public. Stereoscopic systems date back to about 280 B.C., when Euclid first recognized the concept of depth perception by humans. Despite the increase in the production of three-dimensional (3D) display products and many studies on stereoscopic vision, the effects of stereoscopic vision on the human body remain insufficiently understood. Symptoms such as eye fatigue and 3D sickness are concerns when 3D films are viewed for a prolonged period; it is therefore important to consider the safety of viewing virtual 3D content. It is generally explained to the public that accommodation and convergence are mismatched during stereoscopic vision and that this mismatch is the main cause of the visual fatigue and visually induced motion sickness (VIMS) experienced during 3D viewing. We have devised a method to measure lens accommodation and convergence simultaneously, and we used this simultaneous measurement device to characterize 3D vision. Fixation distance was compared between accommodation and convergence during the viewing of 3D films with repeated measurements, and the time courses and distributions of these fixation distances were compared in subjects who viewed 2D and 3D video clips. The results indicate that after 90 s of continuously viewing 3D images, accommodative power no longer corresponds to the distance of convergence. Remarks on methods to measure the severity of motion sickness induced by viewing 3D films are also given. From the epidemiological viewpoint, such knowledge is useful for the reduction and/or prevention of VIMS; empirical data on motion sickness should be accumulated, as they may contribute to the development of relevant fields of science and technology.

  3. bioWeb3D: an online webGL 3D data visualisation tool

    PubMed Central

    2013-01-01

    Background Data visualisation is critical for interpreting biological data. However, in practice it can prove to be a bottleneck for untrained researchers; this is especially true for three-dimensional (3D) data representation. Whilst existing software can provide all necessary functionalities to represent and manipulate biological 3D datasets, very few are easily accessible (browser-based), cross-platform and usable by non-expert users. Results An online HTML5/WebGL-based 3D visualisation tool has been developed to allow biologists to quickly and easily view interactive and customisable three-dimensional representations of their data along with multiple layers of information. Using the WebGL library Three.js, written in JavaScript, bioWeb3D allows the simultaneous visualisation of multiple large datasets input via a simple JSON, XML or CSV file, which can be read and analysed locally thanks to HTML5 capabilities. Conclusions Using basic 3D representation techniques in a technologically innovative context, we provide a program that is not intended to compete with professional 3D representation software, but that instead enables a quick and intuitive representation of reasonably large 3D datasets. PMID:23758781
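
    The abstract above mentions that datasets are supplied as a simple JSON, XML or CSV file. As a rough illustration of that idea only — the field names below are hypothetical, not the actual bioWeb3D schema — a 3D dataset with per-point class labels might be shipped and sanity-checked like this:

```python
import json

# Hypothetical payload: 3D coordinates plus one class label per point
# (labels would drive point colouring in the viewer). The real bioWeb3D
# file layout may differ; this only sketches the general approach.
dataset = {
    "name": "example_cells",
    "points": [[0.0, 0.0, 0.0], [1.0, 0.5, 2.0], [0.3, 1.2, 0.8]],
    "classes": [0, 1, 0],
}

def load_points(text):
    """Parse the JSON payload and validate it before rendering."""
    data = json.loads(text)
    pts, cls = data["points"], data["classes"]
    if len(pts) != len(cls):
        raise ValueError("each point needs exactly one class label")
    return pts, cls

points, classes = load_points(json.dumps(dataset))
```

    Because the file is parsed client-side (the HTML5 File API), no upload step is needed; the same validation would run in JavaScript in the browser.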

  4. An Interactive 3D Virtual Anatomy Puzzle for Learning and Simulation - Initial Demonstration and Evaluation.

    PubMed

    Messier, Erik; Wilcox, Jascha; Dawson-Elli, Alexander; Diaz, Gabriel; Linte, Cristian A

    2016-01-01

    To inspire young students (grades 6-12) to become medical practitioners and biomedical engineers, it is necessary to expose them to key concepts of the field in a way that is both exciting and informative. Recent advances in medical image acquisition, manipulation, processing, visualization, and display have revolutionized the way the human body and internal anatomy can be seen and studied. It is now possible to collect 3D, 4D, and 5D medical images of patient-specific data, and to display that data to the end user using consumer-level 3D stereoscopic display technology. Despite such advancements, traditional 2D modes of content presentation, such as textbooks and slides, are still the standard didactic equipment used to teach young students anatomy. More sophisticated methods of display can help to elucidate the complex 3D relationships between structures that are so often missed when viewing only 2D media, and can instill in students an appreciation for the interconnection between medicine and technology. Here we describe the design, implementation, and preliminary evaluation of a 3D virtual anatomy puzzle dedicated to helping users learn the anatomy of various organs and systems by manipulating 3D virtual data. The puzzle currently comprises several components of the human anatomy and can be easily extended to include additional organs and systems. The 3D virtual anatomy puzzle game was implemented and piloted using three display paradigms - a traditional 2D monitor, a 3D TV with active shutter glasses, and the Oculus Rift DK2 - as well as two different user interaction devices - a space mouse and traditional keyboard controls. PMID:27046584

  5. 3D display based on parallax barrier with multiview zones.

    PubMed

    Lv, Guo-Jiao; Wang, Qiong-Hua; Zhao, Wu-Xiang; Wang, Jun

    2014-03-01

    A 3D display based on a parallax barrier with multiview zones is proposed. The display consists of a 2D display panel and a parallax barrier. The basic element of the parallax barrier has three narrow slits, which show three columns of subpixels on the 2D display panel and form 3D pixels. The parallax barrier provides multiview zones in which the proposed 3D display can use a small number of views to achieve a high density of views, so the distance between views is the same as in conventional displays with more views. Because the proposed display has fewer views, its 3D images contain more 3D pixels, and the resolution and brightness are therefore higher than in conventional displays. A 12-view prototype of the proposed 3D display is developed; it provides the same density of views as a conventional display with 28 views. Experimental results show that the proposed display has higher resolution and brightness than the conventional one, while cross talk remains at a low level.
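
    The geometry that trades panel subpixels for views can be sketched with the standard similar-triangle relations for a slit parallax barrier. These are the generic textbook design equations, not the specific multiview-zone parameters of the display described above; the numbers in the example are illustrative.

```python
def barrier_design(subpixel_pitch_mm, n_views, view_distance_mm, view_spacing_mm):
    """Generic slit parallax-barrier design via similar triangles.

    Returns the panel-to-barrier gap and the barrier pitch. The pitch
    comes out slightly smaller than n_views * subpixel_pitch so that
    the view windows converge at the design viewing distance.
    """
    p, n, d, e = subpixel_pitch_mm, n_views, view_distance_mm, view_spacing_mm
    gap = d * p / e                      # panel-to-barrier gap
    pitch = n * p * d / (d + gap)        # barrier pitch
    return gap, pitch

# 0.1 mm subpixels, 3 views per slit, 600 mm viewing distance, 65 mm eye spacing
gap, pitch = barrier_design(0.1, 3, 600.0, 65.0)
```

    With three slits (views) per barrier element instead of, say, 28, each 3D pixel consumes fewer subpixel columns, which is the source of the resolution and brightness gain claimed in the abstract.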

  6. Market study: 3-D eyetracker

    NASA Technical Reports Server (NTRS)

    1977-01-01

    A market study of a proposed version of a 3-D eyetracker for initial use at NASA's Ames Research Center was made. The commercialization potential of a simplified, less expensive 3-D eyetracker was ascertained. Primary focus on present and potential users of eyetrackers, as well as present and potential manufacturers has provided an effective means of analyzing the prospects for commercialization.

  7. 3D World Building System

    ScienceCinema

    None

    2016-07-12

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture-rich 3-D model of their environment in minutes using a laptop and a color and depth camera.

  8. 3D World Building System

    SciTech Connect

    2013-10-30

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture-rich 3-D model of their environment in minutes using a laptop and a color and depth camera.

  9. LLNL-Earth3D

    SciTech Connect

    2013-10-01

    Earth3D is a computer code designed to allow fast calculation of seismic rays and travel times through a 3D model of the Earth. LLNL is using it for earthquake location and global tomography efforts, and such codes are of great interest to the Earth science community.

  10. A Joint Approach to the Study of S-Type and P-Type Habitable Zones in Binary Systems: New Results in the View of 3-D Planetary Climate Models

    NASA Astrophysics Data System (ADS)

    Cuntz, Manfred

    2015-01-01

    In two previous papers (Cuntz 2014a,b) [ApJ 780, A14 (19 pages); arXiv:1409.3796], a comprehensive approach has been provided for the study of S-type and P-type habitable zones in stellar binary systems. P-type orbits occur when the planet orbits both binary components, whereas in the case of S-type orbits, the planet orbits only one of the binary components, with the second component considered a perturber. The selected approach considers a variety of aspects, including (1) a joint constraint combining orbital stability and a habitable region for a possible system planet through the stellar radiative energy fluxes; (2) the treatment of conservative (CHZ), general (GHZ) and extended (EHZ) zones of habitability [see Paper I for definitions] for the systems as previously defined for the Solar System; (3) a combined formalism for the assessment of both S-type and P-type habitability; in particular, mathematical criteria are devised to determine for which kind of system S-type or P-type habitability is realized; and (4) applications of the theoretical approach to systems with the stars in different kinds of orbits, including elliptical orbits (the most expected case). In particular, an algebraic formalism for the assessment of both S-type and P-type habitability is given based on a higher-order polynomial expression. Thus, an a priori specification of the presence or absence of S-type or P-type radiative habitable zones is - from a mathematical point of view - neither necessary nor possible, as those are determined by the adopted formalism. Previously, numerous applications of the method have been given, encompassing theoretical star-planet systems and observations. Most recently, this method has been upgraded to include recent studies of 3-D planetary climate models. Primarily, this type of work affects the extent and position of habitable zones around single stars; however, it also has profound consequences for the habitable
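
    The core of the radiative constraint is the inverse-square summation of flux from both binary components at the planet's position, compared against habitable-zone flux limits. The sketch below illustrates only that basic criterion; the flux limits used here are placeholders, not the CHZ/GHZ/EHZ boundaries or the polynomial formalism derived in the papers.

```python
# Illustrative flux-based habitability check for a planet in a binary.
# Luminosities are in solar units, distances in AU, so flux comes out in
# units of the solar constant. The limits below are placeholders only.
S_INNER, S_OUTER = 1.1, 0.35

def combined_flux(l1, d1, l2, d2):
    """Total stellar flux at the planet from both components
    (inverse-square law, fluxes simply add)."""
    return l1 / d1**2 + l2 / d2**2

def radiatively_habitable(l1, d1, l2, d2):
    s = combined_flux(l1, d1, l2, d2)
    return S_OUTER <= s <= S_INNER

# S-type example: planet 1 AU from a Sun-like primary,
# with a 0.5 L_sun secondary 20 AU away.
ok = radiatively_habitable(1.0, 1.0, 0.5, 20.0)
```

    In the full treatment this radiative criterion is combined with an orbital-stability constraint, and the zone boundaries themselves follow from the higher-order polynomial expression mentioned in the abstract.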

  11. 3D printing in dentistry.

    PubMed

    Dawood, A; Marti Marti, B; Sauret-Jackson, V; Darwood, A

    2015-12-01

    3D printing has been hailed as a disruptive technology which will change manufacturing. Used in aerospace, defence, art and design, 3D printing is becoming a subject of great interest in surgery. The technology has a particular resonance with dentistry, and with advances in 3D imaging and modelling technologies such as cone beam computed tomography and intraoral scanning, and with the relatively long history of the use of CAD CAM technologies in dentistry, it will become of increasing importance. Uses of 3D printing include the production of drill guides for dental implants, the production of physical models for prosthodontics, orthodontics and surgery, the manufacture of dental, craniomaxillofacial and orthopaedic implants, and the fabrication of copings and frameworks for implant and dental restorations. This paper reviews the types of 3D printing technologies available and their various applications in dentistry and in maxillofacial surgery. PMID:26657435

  12. PLOT3D user's manual

    NASA Technical Reports Server (NTRS)

    Walatka, Pamela P.; Buning, Pieter G.; Pierce, Larry; Elson, Patricia A.

    1990-01-01

    PLOT3D is a computer graphics program designed to visualize the grids and solutions of computational fluid dynamics. Seventy-four functions are available. Versions are available for many systems. PLOT3D can handle multiple grids with a million or more grid points, and can produce varieties of model renderings, such as wireframe or flat shaded. Output from PLOT3D can be used in animation programs. The first part of this manual is a tutorial that takes the reader, keystroke by keystroke, through a PLOT3D session. The second part of the manual contains reference chapters, including the helpfile, data file formats, advice on changing PLOT3D, and sample command files.
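
    The manual's reference section covers PLOT3D's data file formats. As a minimal sketch of the simplest variant only — a single-block, whole-format ASCII grid file (real PLOT3D files are often multi-block and/or Fortran unformatted) — a reader looks like this:

```python
def read_plot3d_ascii(text):
    """Read a single-block, whole-format ASCII PLOT3D grid:
    first record IMAX JMAX KMAX, then all x, all y, all z coordinates.
    Multi-block and binary variants are not handled by this sketch."""
    tokens = text.split()
    imax, jmax, kmax = (int(t) for t in tokens[:3])
    npts = imax * jmax * kmax
    coords = [float(t) for t in tokens[3:]]
    if len(coords) != 3 * npts:
        raise ValueError("expected %d coordinates, got %d"
                         % (3 * npts, len(coords)))
    x, y, z = coords[:npts], coords[npts:2 * npts], coords[2 * npts:]
    return (imax, jmax, kmax), x, y, z

# A 2x2x1 unit-square grid lying in the z=0 plane
sample = """2 2 1
0 1 0 1
0 0 1 1
0 0 0 0"""
dims, x, y, z = read_plot3d_ascii(sample)
```

    A matching solution ("q") file carries density, the three momentum components, and stagnation energy per grid point in the same point ordering.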

  14. MRI Volume Fusion Based on 3D Shearlet Decompositions

    PubMed Central

    Duan, Chang; Wang, Shuai; Wang, Xue Gang; Huang, Qi Hong

    2014-01-01

    Many MRI scans now give 3D volume data with different contrasts, and observers may want to view the various contrasts in the same 3D volume. Conventional 2D medical fusion methods can only fuse the 3D volume data layer by layer, which may lead to the loss of interframe correlative information. In this paper, a novel 3D medical volume fusion method based on the 3D band-limited shearlet transform (3D BLST) is proposed, and the method is evaluated on MRI T2* and quantitative susceptibility mapping data of four human brains. Both the visual impression and the quality indices indicate that the proposed method performs better than fusion methods based on conventional 2D wavelet and DT CWT transforms as well as their 3D counterparts. PMID:24817880
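
    The usual transform-domain fusion recipe is to keep, at each position, the coefficient with the larger magnitude. The sketch below applies that generic max-absolute rule voxelwise to two co-registered 3D arrays; the paper applies the analogous rule to 3D shearlet coefficients rather than raw volumes, so this is an illustration of the fusion rule, not of 3D BLST itself.

```python
import numpy as np

def max_abs_fuse(vol_a, vol_b):
    """Voxelwise max-absolute fusion of two co-registered 3D arrays.
    Operating on full 3D arrays (rather than slice by slice) is what
    preserves the interframe structure the abstract refers to."""
    a = np.asarray(vol_a, dtype=float)
    b = np.asarray(vol_b, dtype=float)
    if a.shape != b.shape:
        raise ValueError("volumes must be co-registered to the same shape")
    return np.where(np.abs(a) >= np.abs(b), a, b)

# Toy example: the second volume dominates everywhere in magnitude.
fused = max_abs_fuse(np.ones((2, 2, 2)), -3 * np.ones((2, 2, 2)))
```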

  15. PLOT3D/AMES, DEC VAX VMS VERSION USING DISSPLA (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  16. PLOT3D/AMES, DEC VAX VMS VERSION USING DISSPLA (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P. G.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  17. Cosmic origins: experiences making a stereoscopic 3D movie

    NASA Astrophysics Data System (ADS)

    Holliman, Nick

    2010-02-01

    Context: Stereoscopic 3D movies are gaining rapid commercial acceptance. In addition, our previous experience with the short 3D movie "Cosmic Cookery" showed that there is great public interest in the presentation of cosmology research using this medium. Objective: The objective of the work reported in this paper was to create a three-dimensional stereoscopic movie describing the life of the Milky Way galaxy. This was a technical and artistic exercise to take observed and simulated data from leading scientists and produce a short (six-minute) movie that describes how the Milky Way was created and what happens in its future. The initial target audience was the visitors to the Royal Society's 2009 Summer Science Exhibition in central London, UK. The movie is also intended to become a presentation tool for scientists and educators following the exhibition. Apparatus: The presentation and playback systems consisted of off-the-shelf devices and software. The display platform for the Royal Society presentation was a RealD LP Pro switch used with a DLP projector to rear-project a 4-metre-diagonal image. The LP Pro enables the use of cheap disposable linearly polarising glasses, so the high turnover rate of the audience (every ten minutes at peak times) could be sustained without delays to clean the glasses. The playback system was a high-speed PC with an external 8 TB RAID driving the projectors at 30 Hz per eye; the Lightspeed DepthQ software was used to decode and generate the video stream. Results: A wide range of tools, from commercial to custom software, were used to render the image sequences. Each tool was able to produce a stream of 1080p images in stereo at 30 fps. None of the rendering tools allowed precise calibration of the stereo effect at render time, and therefore all sequences were tuned extensively by trial and error until the stereo effect was acceptable and supported a comfortable viewing experience. Conclusion: We

  18. PLOT3D/AMES, GENERIC UNIX VERSION USING DISSPLA (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  19. PLOT3D/AMES, GENERIC UNIX VERSION USING DISSPLA (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  20. 3D Printed Micro Free-Flow Electrophoresis Device.

    PubMed

    Anciaux, Sarah K; Geiger, Matthew; Bowser, Michael T

    2016-08-01

    The cost, time, and restrictions on creative flexibility associated with current fabrication methods present significant challenges in the development and application of microfluidic devices. Additive manufacturing, also referred to as three-dimensional (3D) printing, provides many advantages over existing methods. With 3D printing, devices can be made in a cost-effective manner with the ability to rapidly prototype new designs. We have fabricated a micro free-flow electrophoresis (μFFE) device using a low-cost, consumer-grade 3D printer. Test prints were performed to determine the minimum feature sizes that could be reproducibly produced using 3D printing fabrication. Microfluidic ridges could be fabricated with dimensions as small as 20 μm high × 640 μm wide. Minimum valley dimensions were 30 μm wide × 130 μm wide. An acetone vapor bath was used to smooth acrylonitrile-butadiene-styrene (ABS) surfaces and facilitate bonding of fully enclosed channels. The surfaces of the 3D-printed features were profiled and compared to a similar device fabricated in a glass substrate. Stable stream profiles were obtained in a 3D-printed μFFE device. Separations of fluorescent dyes in the 3D-printed device and its glass counterpart were comparable. A μFFE separation of myoglobin and cytochrome c was also demonstrated on a 3D-printed device. Limits of detection for rhodamine 110 were determined to be 2 and 0.3 nM for the 3D-printed and glass devices, respectively.

  2. Unassisted 3D camera calibration

    NASA Astrophysics Data System (ADS)

    Atanassov, Kalin; Ramachandra, Vikas; Nash, James; Goma, Sergio R.

    2012-03-01

    With the rapid growth of 3D technology, 3D image capture has become a critical part of the 3D feature set on mobile phones. 3D image quality is affected by the scene geometry as well as by on-device processing. An automatic 3D system usually assumes known camera poses accomplished by factory calibration using a special chart. In real-life settings, pose parameters estimated by factory calibration can be negatively impacted by movements of the lens barrel due to shaking, focusing, or a camera drop. If any of these factors displaces the optical axes of either or both cameras, vertical disparity might exceed the maximum tolerable margin and the 3D user may experience eye strain or headaches. To make 3D capture more practical, one needs to consider unassisted calibration on arbitrary scenes. In this paper, we propose an algorithm that relies on detection and matching of keypoints between left and right images. Frames containing erroneous matches, along with frames with insufficiently rich keypoint constellations, are detected and discarded. Roll, pitch, yaw, and scale differences between left and right frames are then estimated. The algorithm's performance is evaluated in terms of the remaining vertical disparity compared to the maximum tolerable vertical disparity.
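
    The frame-screening and vertical-disparity step described above can be sketched as follows. Given matched keypoints from the left and right images, a frame with too few matches is discarded, and the median vertical offset between matches is compared against a tolerance; the thresholds here are illustrative, not the paper's values.

```python
MIN_MATCHES = 8        # illustrative: discard frames with sparse constellations
MAX_VDISP_PX = 2.0     # illustrative tolerable vertical disparity, in pixels

def vertical_disparity(left_pts, right_pts):
    """Median y-offset between matched (x, y) keypoints, or None when the
    frame should be discarded for having too few matches."""
    if len(left_pts) < MIN_MATCHES or len(left_pts) != len(right_pts):
        return None
    diffs = sorted(l[1] - r[1] for l, r in zip(left_pts, right_pts))
    mid = len(diffs) // 2
    if len(diffs) % 2:
        return diffs[mid]
    return 0.5 * (diffs[mid - 1] + diffs[mid])

# Synthetic matches: a constant 1 px vertical offset between the views.
left = [(i * 10.0, 5.0) for i in range(10)]
right = [(i * 10.0 - 3.0, 4.0) for i in range(10)]
vd = vertical_disparity(left, right)
needs_recalibration = vd is not None and abs(vd) > MAX_VDISP_PX
```

    Using the median rather than the mean makes the estimate robust to the occasional erroneous match that survives screening.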

  3. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  4. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  5. Mars Odyssey Seen by Mars Global Surveyor (3-D)

    NASA Technical Reports Server (NTRS)

    2005-01-01

    This stereoscopic picture of NASA's Mars Odyssey spacecraft was created from two views of that spacecraft taken by the Mars Orbiter Camera on NASA's Mars Global Surveyor. The camera's successful imaging of Odyssey and of the European Space Agency's Mars Express in April 2005 produced the first pictures of any spacecraft orbiting Mars taken by another spacecraft orbiting Mars.

    Mars Global Surveyor acquired this image of Mars Odyssey on April 21, 2005. The stereoscopic picture combines one view captured while the two orbiters were 90 kilometers (56 miles) apart with a second view captured from a slightly different angle when the two orbiters were 135 kilometers (84 miles) apart. For proper viewing, the user needs '3-D' glasses with red over the left eye and blue over the right eye.
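    The red-left/blue-right viewing scheme described above is a classic anaglyph encoding: each eye's view is carried in a different colour channel and separated by the tinted lenses. A minimal sketch of composing such an image from two aligned greyscale views (function and variable names are illustrative, not from the NASA release); here the right view fills both green and blue, the common red-cyan variant, while strict red/blue glasses would call for a zeroed green channel:

```python
import numpy as np

def make_anaglyph(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Compose an anaglyph: the left view feeds the red channel and the
    right view feeds the green and blue channels, so red-over-left /
    cyan-over-right glasses route each view to the intended eye."""
    if left.shape != right.shape:
        raise ValueError("the two views must have identical shapes")
    return np.stack([left, right, right], axis=-1)  # (H, W, 3) RGB image
```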

    The Mars Orbiter Camera can resolve features on the surface of Mars as small as a few meters or yards across from Mars Global Surveyor's orbital altitude of 350 to 405 kilometers (217 to 252 miles). From a distance of 100 kilometers (62 miles), the camera would be able to resolve features substantially smaller than 1 meter or yard across.

    Mars Odyssey was launched on April 7, 2001, and reached Mars on Oct. 24, 2001. Mars Global Surveyor left Earth on Nov. 7, 1996, and arrived in Mars orbit on Sept. 12, 1997. Both orbiters are in an extended mission phase, both have relayed data from the Mars Exploration Rovers, and both are continuing to return exciting new results from Mars. JPL, a division of the California Institute of Technology, Pasadena, manages both missions for NASA's Science Mission Directorate, Washington, D.C.

  6. 3D-printed microfluidic devices.

    PubMed

    Amin, Reza; Knowlton, Stephanie; Hart, Alexander; Yenilmez, Bekir; Ghaderinezhad, Fariba; Katebifar, Sara; Messina, Michael; Khademhosseini, Ali; Tasoglu, Savas

    2016-06-01

    Microfluidics is a flourishing field, enabling a wide range of biochemical and clinical applications such as cancer screening, micro-physiological system engineering, high-throughput drug testing, and point-of-care diagnostics. However, fabrication of microfluidic devices is often complicated, time consuming, and requires expensive equipment and sophisticated cleanroom facilities. Three-dimensional (3D) printing presents a promising alternative to traditional techniques such as lithography and PDMS-glass bonding, not only by enabling rapid design iterations in the development stage, but also by reducing the costs associated with institutional infrastructure, equipment installation, maintenance, and physical space. With the recent advancements in 3D printing technologies, highly complex microfluidic devices can be fabricated via single-step, rapid, and cost-effective protocols, making microfluidics more accessible to users. In this review, we discuss a broad range of approaches for the application of 3D printing technology to fabrication of micro-scale lab-on-a-chip devices. PMID:27321137

  7. 3-D video techniques in endoscopic surgery.

    PubMed

    Becker, H; Melzer, A; Schurr, M O; Buess, G

    1993-02-01

    Three-dimensional visualisation of the operative field is an important requisite for precise and fast handling of open surgical operations. Up to now it has only been possible to display a two-dimensional image on the monitor during endoscopic procedures. The increasing complexity of minimally invasive interventions requires endoscopic suturing and ligature of larger vessels, which are difficult to perform without an impression of depth. Three-dimensional vision therefore may decrease the operative risk, accelerate interventions and widen the operative spectrum. In April 1992 a 3-D video system developed at the Nuclear Research Center Karlsruhe, Germany (IAI Institute) was applied in various animal experimental procedures and clinically in laparoscopic cholecystectomy. The system works with a single monitor and active high-speed shutter glasses. Our first trials with this new 3-D imaging system clearly showed a facilitation of complex surgical manoeuvres like mobilisation of organs, preparation in the deep space and suture techniques. The 3-D system introduced in this article will enter the market in 1993 (Opticon Co., Karlsruhe, Germany). PMID:8050009

  8. A NASA 3-D Flyby of Hurricane Seymour

    NASA Video Gallery

    This 3-D Flyby animation, created from data gathered by the GPM core observatory satellite, shows Hurricane Seymour on Oct. 25 at 7:46 am PDT (1646 UTC). GPM showed rain falling at the extreme ...

  9. 3D Surface Reconstruction and Automatic Camera Calibration

    NASA Technical Reports Server (NTRS)

    Jalobeanu, Andre

    2004-01-01

    Illustrations in this view-graph presentation describe a Bayesian approach to 3D surface reconstruction and camera calibration. Existing methods, surface analysis and modeling, preliminary surface reconstruction results, and potential applications are addressed.

  10. Collaborative annotation of 3D crystallographic models.

    PubMed

    Hunter, J; Henderson, M; Khan, I

    2007-01-01

    This paper describes the AnnoCryst system, a tool that was designed to enable authenticated collaborators to share online discussions about 3D crystallographic structures through the asynchronous attachment, storage, and retrieval of annotations. Annotations are personal comments, interpretations, questions, assessments, or references that can be attached to files, data, digital objects, or Web pages. The AnnoCryst system enables annotations to be attached to 3D crystallographic models retrieved from either private local repositories (e.g., Fedora) or public online databases (e.g., Protein Data Bank or Inorganic Crystal Structure Database) via a Web browser. The system uses the Jmol plugin for viewing and manipulating the 3D crystal structures but extends Jmol by providing an additional interface through which annotations can be created, attached, stored, searched, browsed, and retrieved. The annotations are stored on a standardized Web annotation server (Annotea), which has been extended to support 3D macromolecular structures. Finally, the system is embedded within a security framework that is capable of authenticating users and restricting access only to trusted colleagues.

  11. Spacecraft 3D Augmented Reality Mobile App

    NASA Technical Reports Server (NTRS)

    Hussey, Kevin J.; Doronila, Paul R.; Kumanchik, Brian E.; Chan, Evan G.; Ellison, Douglas J.; Boeck, Andrea; Moore, Justin M.

    2013-01-01

    The Spacecraft 3D application allows users to learn about and interact with iconic NASA missions in a new and immersive way using common mobile devices. Using Augmented Reality (AR) techniques to project 3D renditions of the mission spacecraft into real-world surroundings, users can interact with and learn about Curiosity, GRAIL, Cassini, and Voyager. Additional updates on future missions, animations, and information will be ongoing. Using a printed AR Target and camera on a mobile device, users can get up close with these robotic explorers, see how some move, and learn about these engineering feats, which are used to expand knowledge and understanding about space. The software receives input from the mobile device's camera to recognize the presence of an AR marker in the camera's field of view. It then displays a 3D rendition of the selected spacecraft in the user's physical surroundings, on the mobile device's screen, while it tracks the device's movement in relation to the physical position of the spacecraft's 3D image on the AR marker.

  12. On Alternative Approaches to 3D Image Perception: Monoscopic 3D Techniques

    NASA Astrophysics Data System (ADS)

    Blundell, Barry G.

    2015-06-01

    In the eighteenth century, techniques that enabled a strong sense of 3D perception to be experienced without recourse to binocular disparities (arising from the spatial separation of the eyes) underpinned the first significant commercial sales of 3D viewing devices and associated content. However, following the advent of stereoscopic techniques in the nineteenth century, 3D image depiction has become inextricably linked to binocular parallax, and outside the vision science and arts communities relatively little attention has been directed towards earlier approaches. Here we introduce relevant concepts and terminology and consider a number of techniques and optical devices that enable 3D perception to be experienced on the basis of planar images rendered from a single vantage point. Subsequently we allude to possible mechanisms for 3D perception that are not based on binocular parallax. Particular attention is given to reviewing areas likely to be thought-provoking to those involved in 3D display development, spatial visualization, HCI, and other related areas of interdisciplinary research.

  13. What is 3D good for? A review of human performance on stereoscopic 3D displays

    NASA Astrophysics Data System (ADS)

    McIntire, John P.; Havig, Paul R.; Geiselman, Eric E.

    2012-06-01

    This work reviews the human factors-related literature on the task performance implications of stereoscopic 3D displays, in order to point out the specific performance benefits (or lack thereof) one might reasonably expect to observe when utilizing these displays. What exactly is 3D good for? Relative to traditional 2D displays, stereoscopic displays have been shown to enhance performance on a variety of depth-related tasks. These tasks include judging absolute and relative distances, finding and identifying objects (by breaking camouflage and eliciting perceptual "pop-out"), performing spatial manipulations of objects (object positioning, orienting, and tracking), and navigating. More cognitively, stereoscopic displays can improve the spatial understanding of 3D scenes or objects, improve memory/recall of scenes or objects, and improve learning of spatial relationships and environments. However, for tasks that are relatively simple, that do not strictly require depth information for good performance, where other strong cues to depth can be utilized, or for depth tasks that lie outside the effective viewing volume of the display, the purported performance benefits of 3D may be small or altogether absent. Stereoscopic 3D displays come with a host of unique human factors problems, including the simulator-sickness-type symptoms of eyestrain, headache, fatigue, disorientation, nausea, and malaise, which appear to affect large numbers of viewers (perhaps as many as 25% to 50% of the general population). Thus, 3D technology should be wielded delicately and applied carefully, and perhaps used only where necessary to ensure good performance.

  14. Spatially resolved 3D noise

    NASA Astrophysics Data System (ADS)

    Haefner, David P.; Preece, Bradley L.; Doe, Joshua M.; Burks, Stephen D.

    2016-05-01

    When evaluated with a spatially uniform irradiance, an imaging sensor exhibits both spatial and temporal variations, which can be described as a three-dimensional (3D) random process considered as noise. In the 1990s, NVESD engineers developed an approximation to the 3D power spectral density (PSD) for noise in imaging systems, known as 3D noise. In this correspondence, we describe how the confidence intervals for the 3D noise measurement allow for determination of the sampling necessary to reach a desired precision. We then apply that knowledge to create a smaller cube that can be evaluated spatially across the 2D image, giving the noise as a function of position. The method presented here allows for defective pixel identification and also implements the finite sampling correction matrix. In support of the reproducible research effort, the Matlab functions associated with this work can be found on the Mathworks file exchange [1].
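    The NVESD 3D noise model separates a measured data cube into directional components by averaging along subsets of the (temporal, vertical, horizontal) axes. A reduced sketch of that decomposition, covering only three of the model's seven components; the function and key names are illustrative, not from the paper's Matlab code:

```python
import numpy as np

def noise_components(cube: np.ndarray) -> dict:
    """Split a (frames, rows, cols) cube into a reduced set of 3D-noise
    components via directional averaging: per-frame temporal offsets (t),
    the fixed spatial pattern (vh), and the random residual (tvh)."""
    s = cube.mean()                       # global mean level
    n_t = cube.mean(axis=(1, 2)) - s      # frame-to-frame (temporal) offsets
    n_vh = cube.mean(axis=0) - s          # fixed spatial pattern
    resid = cube - s - n_t[:, None, None] - n_vh[None, :, :]
    return {"sigma_t": n_t.std(), "sigma_vh": n_vh.std(),
            "sigma_tvh": resid.std()}
```

Applying the same decomposition to a small sliding sub-cube, as the abstract describes, would give each component as a function of image position.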

  15. Autofocus for 3D imaging

    NASA Astrophysics Data System (ADS)

    Lee-Elkin, Forest

    2008-04-01

    Three dimensional (3D) autofocus remains a significant challenge for the development of practical 3D multipass radar imaging. The current 2D radar autofocus methods are not readily extendable across sensor passes. We propose a general framework that allows a class of data adaptive solutions for 3D auto-focus across passes with minimal constraints on the scene contents. The key enabling assumption is that portions of the scene are sparse in elevation which reduces the number of free variables and results in a system that is simultaneously solved for scatterer heights and autofocus parameters. The proposed method extends 2-pass interferometric synthetic aperture radar (IFSAR) methods to an arbitrary number of passes allowing the consideration of scattering from multiple height locations. A specific case from the proposed autofocus framework is solved and demonstrates autofocus and coherent multipass 3D estimation across the 8 passes of the "Gotcha Volumetric SAR Data Set" X-Band radar data.

  16. Accepting the T3D

    SciTech Connect

    Rich, D.O.; Pope, S.C.; DeLapp, J.G.

    1994-10-01

    In April, a 128 PE Cray T3D was installed at Los Alamos National Laboratory's Advanced Computing Laboratory as part of the DOE's High-Performance Parallel Processor Program (H4P). In conjunction with CRI, the authors implemented a 30 day acceptance test. The test was constructed in part to help them understand the strengths and weaknesses of the T3D. In this paper, they briefly describe the H4P and its goals. They discuss the design and implementation of the T3D acceptance test and detail issues that arose during the test. They conclude with a set of system requirements that must be addressed as the T3D system evolves.

  17. Combinatorial 3D Mechanical Metamaterials

    NASA Astrophysics Data System (ADS)

    Coulais, Corentin; Teomy, Eial; de Reus, Koen; Shokef, Yair; van Hecke, Martin

    2015-03-01

    We present a class of elastic structures which exhibit 3D-folding motion. Our structures consist of cubic lattices of anisotropic unit cells that can be tiled in a complex combinatorial fashion. We design and 3d-print this complex ordered mechanism, in which we combine elastic hinges and defects to tailor the mechanics of the material. Finally, we use this large design space to encode smart functionalities such as surface patterning and multistability.

  18. 6. Looking glass aircraft in the project looking glass historic ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    6. Looking glass aircraft in the project looking glass historic district. View to north. - Offutt Air Force Base, Looking Glass Airborne Command Post, Looking Glass Avenue between Comstat Drive & Nightwatch Avenue, Offutt Air Force Base, Bellevue, Sarpy County, NE

  19. PLOT3D/AMES, SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into
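    The abstract lists the five conserved variables a PLOT3D solution file stores per grid point; derived quantities such as surface pressure are computed from them. A sketch of one such derivation, assuming an ideal gas with gamma = 1.4 (an illustration of the underlying formula, not PLOT3D's own code):

```python
import numpy as np

GAMMA = 1.4  # ratio of specific heats for air (assumed)

def pressure(q: np.ndarray) -> np.ndarray:
    """Static pressure from the conserved variables stored per grid point:
    density, x/y/z-momentum, and stagnation energy. q has shape (5, ...).
    p = (gamma - 1) * (E - 0.5 * |m|^2 / rho)."""
    rho, mx, my, mz, e = q
    kinetic = 0.5 * (mx**2 + my**2 + mz**2) / rho
    return (GAMMA - 1.0) * (e - kinetic)
```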

  20. PLOT3D/AMES, SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  1. A 2D driven 3D vessel segmentation algorithm for 3D digital subtraction angiography data

    NASA Astrophysics Data System (ADS)

    Spiegel, M.; Redel, T.; Struffert, T.; Hornegger, J.; Doerfler, A.

    2011-10-01

    Cerebrovascular disease is among the leading causes of death in western industrial nations. 3D rotational angiography delivers indispensable information on vessel morphology and pathology. Physicians make use of this to analyze vessel geometry in detail, i.e. vessel diameters, location and size of aneurysms, to reach a clinical decision. 3D segmentation is a crucial step in this pipeline. Although many different methods are available nowadays, all of them lack a way to validate the results for the individual patient. Therefore, we propose a novel 2D digital subtraction angiography (DSA)-driven 3D vessel segmentation and validation framework. 2D DSA projections are clinically considered the gold standard when it comes to measurements of vessel diameter or the neck size of aneurysms. An ellipsoid vessel model is applied to deliver the initial 3D segmentation. To assess the accuracy of the 3D vessel segmentation, its forward projections are iteratively overlaid with the corresponding 2D DSA projections. Local vessel discrepancies are modeled by a global 2D/3D optimization function to adjust the 3D vessel segmentation toward the 2D vessel contours. Our framework has been evaluated on phantom data as well as on ten patient datasets. Three 2D DSA projections from varying viewing angles were used for each dataset. The novel 2D-driven 3D vessel segmentation approach shows superior results against state-of-the-art segmentations such as region growing, i.e. an improvement of 7.2 percentage points in precision and 5.8 percentage points in the Dice coefficient. This method opens up future clinical applications requiring the greatest vessel accuracy, e.g. computational fluid dynamic modeling.
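    The precision and Dice figures quoted above compare a predicted segmentation against a reference mask voxel by voxel. A minimal sketch of both metrics (illustrative, not the authors' evaluation code):

```python
import numpy as np

def precision_and_dice(pred: np.ndarray, truth: np.ndarray):
    """Voxel-wise precision and Dice coefficient between a predicted
    segmentation and a reference mask (any array interpretable as boolean)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()      # true-positive voxels
    precision = tp / pred.sum() if pred.sum() else 0.0
    dice = 2.0 * tp / (pred.sum() + truth.sum())
    return precision, dice
```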

  2. 3D genome tuner: compare multiple circular genomes in a 3D context.

    PubMed

    Wang, Qi; Liang, Qun; Zhang, Xiuqing

    2009-09-01

    Circular genomes, being the largest proportion of sequenced genomes, play an important role in genome analysis. However, a traditional 2D circular map only provides an overview and annotations of a genome and does not offer feature-based comparison. To remedy these shortcomings, we developed 3D Genome Tuner, a hybrid of circular-map and comparative-map tools. Its capability of viewing comparisons between multiple circular maps in a 3D space offers great benefits to the study of comparative genomics. The program is freely available (under an LGPL licence) at http://sourceforge.net/projects/dgenometuner.

  3. Microscopic view of glass transition dynamics: A quasielastic neutron scattering study on trans-1,4-polychloroprene

    NASA Astrophysics Data System (ADS)

    Kanaya, T.; Kawaguchi, T.; Kaji, K.

    1996-09-01

    We have studied the glass transition dynamics of trans-1,4-polychloroprene from a microscopic viewpoint using a quasielastic neutron scattering technique in a time range of ~4×10^-13 to ~4×10^-10 s. It was found that the so-called fast process of picosecond order appears at around the Vogel-Fulcher temperature T0, similarly to cis-1,4-polybutadiene, which has no large side groups [J. Chem. Phys. 98, 8262 (1993)]. It is considered that the onset temperature at around T0 is characteristic of polymers having no large side groups or no large internal degrees of freedom. In addition to the fast process, the slow process of subnanosecond order sets in at around the glass transition temperature Tg, and the activation energy of the relaxation time was found to be ~2.5 kcal/mol. The nature of the slow process is discussed in terms of conformational transitions near Tg.
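    An activation energy quoted in kcal/mol fixes how strongly the relaxation time depends on temperature. Assuming a simple Arrhenius form tau(T) ∝ exp(Ea/RT) (the dynamics near Tg may in fact follow the Vogel-Fulcher form mentioned above), the change in tau between two temperatures can be estimated as follows; the function name and example temperatures are hypothetical:

```python
import numpy as np

R_KCAL = 1.987e-3  # gas constant in kcal/(mol K)

def relaxation_ratio(ea_kcal: float, t1: float, t2: float) -> float:
    """Arrhenius estimate of tau(t2)/tau(t1) for temperatures in kelvin,
    given an activation energy Ea in kcal/mol: tau(T) ~ exp(Ea / (R T))."""
    return float(np.exp(ea_kcal / R_KCAL * (1.0 / t2 - 1.0 / t1)))
```

With Ea ≈ 2.5 kcal/mol, heating the sample shortens the relaxation time (ratio below one), consistent with a thermally activated process.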

  4. 3-D Perspective Pasadena, California

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This perspective view shows the western part of the city of Pasadena, California, looking north towards the San Gabriel Mountains. Portions of the cities of Altadena and La Cañada Flintridge are also shown. The image was created from three datasets: the Shuttle Radar Topography Mission (SRTM) supplied the elevation data; Landsat data from November 11, 1986 provided the land surface color (not the sky); and U.S. Geological Survey digital aerial photography provided the image detail. The Rose Bowl, surrounded by a golf course, is the circular feature at the bottom center of the image. The Jet Propulsion Laboratory is the cluster of large buildings north of the Rose Bowl at the base of the mountains. A large landfill, Scholl Canyon, is the smooth area in the lower left corner of the scene. This image shows the power of combining data from different sources to create planning tools to study problems that affect large urban areas. In addition to the well-known earthquake hazards, Southern California is affected by a natural cycle of fire and mudflows. Wildfires strip the mountains of vegetation, increasing the hazards from flooding and mudflows for several years afterwards. Data such as shown on this image can be used to predict both how wildfires will spread over the terrain and also how mudflows will be channeled down the canyons. The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission was designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, an additional C-band imaging antenna and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency

  5. Optical characterization and measurements of autostereoscopic 3D displays

    NASA Astrophysics Data System (ADS)

    Salmimaa, Marja; Järvenpää, Toni

    2008-04-01

    3D or autostereoscopic display technologies offer attractive solutions for enriching the multimedia experience. However, both characterization and comparison of 3D displays have been challenging while definitions of consistent measurement methods have been lacking, and displays with similar specifications may appear quite different. Earlier we have investigated how the optical properties of autostereoscopic (3D) displays can be objectively measured and what main characteristics define the perceived image quality. In this paper the discussion is extended to cover the viewing freedom (VF), and the definition of the optimum viewing distance (OVD) is elaborated. VF is the volume inside which the eyes have to be to see an acceptable 3D image. Characteristics limiting the VF space are proposed to be 3D crosstalk, luminance difference and color difference. Since the 3D crosstalk can be presumed to dominate the quality of the end-user experience and in our approach forms the basis for the calculations of the other optical parameters, the reliability of the 3D crosstalk measurements is investigated. Furthermore, the effect on the derived VF definition is evaluated. We have performed comparison 3D crosstalk measurements with different measurement device apertures, and the effect of different measurement geometry on the results on actual 3D displays is reported.

  6. LASTRAC.3d: Transition Prediction in 3D Boundary Layers

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan

    2004-01-01

    Langley Stability and Transition Analysis Code (LASTRAC) is a general-purpose, physics-based transition prediction code released by NASA for laminar flow control studies and transition research. This paper describes the LASTRAC extension to general three-dimensional (3D) boundary layers such as finite swept wings, cones, or bodies at an angle of attack. The stability problem is formulated by using a body-fitted nonorthogonal curvilinear coordinate system constructed on the body surface. The nonorthogonal coordinate system offers a variety of marching paths and spanwise waveforms. In the extreme case of an infinite swept wing boundary layer, marching with a nonorthogonal coordinate produces identical solutions to those obtained with an orthogonal coordinate system using the earlier release of LASTRAC. Several methods to formulate the 3D parabolized stability equations (PSE) are discussed. A surface-marching procedure akin to that for 3D boundary layer equations may be used to solve the 3D parabolized disturbance equations. On the other hand, the local line-marching PSE method, formulated as an easy extension from its 2D counterpart and capable of handling the spanwise mean flow and disturbance variation, offers an alternative. A linear stability theory or parabolized stability equations based N-factor analysis carried out along the streamline direction with a fixed wavelength and downstream-varying spanwise direction constitutes an efficient engineering approach to study instability wave evolution in a 3D boundary layer. The surface-marching PSE method enables a consistent treatment of the disturbance evolution along both streamwise and spanwise directions but requires more stringent initial conditions. Both PSE methods and the traditional LST approach are implemented in the LASTRAC.3d code. Several test cases for tapered or finite swept wings and cones at an angle of attack are discussed.

  7. Color and brightness uniformity compensation of a multi-projection 3D display

    NASA Astrophysics Data System (ADS)

    Lee, Jin-Ho; Park, Juyong; Nam, Dongkyung; Park, Du-Sik

    2015-09-01

    Light-field displays are good candidates in the field of glasses-free 3D display for showing real 3D images without decreasing the image resolution. Light-field displays can create light rays using a large number of projectors in order to express the natural 3D images. However, in light-field displays using multi-projectors, the compensation is very critical due to different characteristics and arrangement positions of each projector. In this paper, we present an enhanced 55-inch, 100-Mpixel multi-projection 3D display consisting of 96 micro projectors for immersive natural 3D viewing in medical and educational applications. To achieve enhanced image quality, color and brightness uniformity compensation methods are utilized along with an improved projector configuration design and a real-time calibration process of projector alignment. For color uniformity compensation, projected images from each projector are captured by a camera arranged in front of the screen, the number of pixels based on RGB color intensities of each captured image is analyzed, and the distributions of RGB color intensities are adjusted by using the respective maximum values of RGB color intensities. For brightness uniformity compensation, each light-field ray emitted from a screen pixel is modeled by a radial basis function, and compensating weights of each screen pixel are calculated and transferred to the projection images by the mapping relationship between the screen and projector coordinates. Finally, brightness compensated images are rendered for each projector. Consequently, the display shows improved color and brightness uniformity, and consistent, exceptional 3D image quality.
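    At its simplest, brightness uniformity compensation of the kind described reduces to a per-pixel multiplicative gain map that pulls the measured luminance down to a common target. A much-simplified sketch (the paper's actual method fits radial basis functions per light-field ray; the names below are illustrative):

```python
import numpy as np

def gain_map(measured: np.ndarray, target=None) -> np.ndarray:
    """Per-pixel multiplicative weights that flatten a measured screen
    luminance map toward a uniform target. The dimmest pixel is the
    default target, since gains above 1 would clip projector output."""
    if target is None:
        target = measured.min()
    return target / measured

def compensate(image: np.ndarray, gains: np.ndarray) -> np.ndarray:
    """Apply the gain map to a normalized [0, 1] projection image."""
    return np.clip(image * gains, 0.0, 1.0)
```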

  8. 3D measurement using circular gratings

    NASA Astrophysics Data System (ADS)

    Harding, Kevin

    2013-09-01

    3D measurement using methods of structured light is well known in the industry. Most such systems use some variation of straight lines, either as simple lines or with some form of encoding. This geometry assumes the lines will be projected from one side and viewed from another to generate the profile information. But what about applications where a wide triangulation angle may not be practical, particularly at longer standoff distances? This paper explores the use of circular grating patterns projected from a center point to achieve 3D information. Originally suggested by John Caulfield around 1990, the method had some interesting potential, particularly if combined with alternate means of measurement beyond traditional triangulation, including depth-from-focus methods. The possible advantages of a central reference point in the projected pattern may offer some different capabilities not as easily attained with a linear grating pattern. This paper will explore the pros and cons of the method and present some examples of possible applications.

  9. Examination of 3D visual attention in stereoscopic video content

    NASA Astrophysics Data System (ADS)

    Huynh-Thu, Quan; Schiatti, Luca

    2011-03-01

    Recent advances in video technology and digital cinema have made it possible to produce entertaining 3D stereoscopic content that can be viewed for an extended duration without necessarily causing extreme fatigue, visual strain and discomfort. Viewers focus naturally their attention on specific areas of interest in their visual field. Visual attention is an important aspect of perception and its understanding is therefore an important aspect for the creation of 3D stereoscopic content. Most of the studies on visual attention have focused on the case of still images or 2D video. Only a very few studies have investigated eye movement patterns in 3D stereoscopic moving sequences, and how these may differ from viewing 2D video content. In this paper, we present and discuss the results of a subjective experiment that we conducted using an eye-tracking apparatus to record observers' gaze patterns. Participants were asked to watch the same set of video clips in a free-viewing task. Each clip was shown in a 3D stereoscopic version and 2D version. Our results indicate that the extent of areas of interests is not necessarily wider in 3D. We found a very strong content dependency in the difference of density and locations of fixations between 2D and 3D stereoscopic content. However, we found that saccades were overall faster and that fixation durations were overall lower when observers viewed the 3D stereoscopic version.

  10. Real time 3D scanner: investigations and results

    NASA Astrophysics Data System (ADS)

    Nouri, Taoufik; Pflug, Leopold

    1993-12-01

    This article presents a concept for reconstructing 3-D objects using non-invasive, touchless techniques. The principle of the method is to project parallel optical interference fringes onto an object and then to record the object from two angles of view. With appropriate processing, the 3-D object is reconstructed even when it has no plane of symmetry. The 3-D surface data are available immediately in digital form for computer visualization and for analysis software tools. The optical set-up for recording the 3-D object, the 3-D data extraction and treatment, as well as the reconstruction of the 3-D object are reported and commented on. This application is intended for reconstructive/cosmetic surgery, CAD, animation and research purposes.

  11. YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters

    NASA Astrophysics Data System (ADS)

    Schild, Jonas; Seele, Sven; Masuch, Maic

    2012-03-01

    Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies, such as interactive 3D games, become attractive for movie theater operators. In this paper, we present a case study that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features a live 3D HD video stream from a professional stereo camera rig rendered into a real-time game scene. We use this effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.

  12. 3D Face modeling using the multi-deformable method.

    PubMed

    Hwang, Jinkyu; Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun

    2012-01-01

    In this paper, we focus on the accuracy of 3D face modeling techniques that use corresponding features in multiple views, which is quite sensitive to feature extraction errors. To solve the problem, we adopt a statistical model-based 3D face modeling approach in a mirror system consisting of two mirrors and a camera. The overall procedure of our 3D facial modeling method has two primary steps: 3D facial shape estimation using a multiple 3D face deformable model and texture mapping using seamless cloning, a type of gradient-domain blending. To evaluate our method's performance, we generate 3D faces of 30 individuals and then carry out two tests: an accuracy test and a robustness test. Our method shows not only highly accurate 3D face shape results when compared with the ground truth, but also robustness to feature extraction errors. Moreover, 3D face rendering results intuitively show that our method is more robust to feature extraction errors than other 3D face modeling methods. An additional contribution of our method is that a wide range of face textures can be acquired by the mirror system. Using this texture map, we generate realistic 3D faces for individuals at the end of the paper. PMID:23201976
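    Face alignment steps like the one in this pipeline are often built on the Iterative Closest Point idea, which can be sketched in miniature. This toy version is translation-only with hypothetical data (full ICP also solves for rotation, typically via an SVD of the cross-covariance); it shows the match-then-shift loop:

```python
import numpy as np

def icp_translate(src, dst, iters=10):
    """Toy ICP restricted to translation (illustration only).

    Repeatedly matches each source point to its nearest destination
    point and shifts the whole source cloud by the mean residual.
    Returns the aligned cloud and the accumulated translation.
    """
    src = np.asarray(src, float).copy()
    dst = np.asarray(dst, float)
    total = np.zeros(src.shape[1])
    for _ in range(iters):
        # nearest destination point for every source point
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        shift = (matched - src).mean(axis=0)
        src += shift
        total += shift
    return src, total
```

    With a small offset relative to the point spacing, the nearest-neighbor matches are correct from the first iteration and the loop converges immediately; larger offsets are why real systems seed ICP with a coarse initial alignment.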

  13. 3D Face Modeling Using the Multi-Deformable Method

    PubMed Central

    Hwang, Jinkyu; Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun

    2012-01-01

    In this paper, we focus on the accuracy of 3D face modeling techniques that use corresponding features in multiple views, which is quite sensitive to feature extraction errors. To solve the problem, we adopt a statistical model-based 3D face modeling approach in a mirror system consisting of two mirrors and a camera. The overall procedure of our 3D facial modeling method has two primary steps: 3D facial shape estimation using a multiple 3D face deformable model and texture mapping using seamless cloning, a type of gradient-domain blending. To evaluate our method's performance, we generate 3D faces of 30 individuals and then carry out two tests: an accuracy test and a robustness test. Our method shows not only highly accurate 3D face shape results when compared with the ground truth, but also robustness to feature extraction errors. Moreover, 3D face rendering results intuitively show that our method is more robust to feature extraction errors than other 3D face modeling methods. An additional contribution of our method is that a wide range of face textures can be acquired by the mirror system. Using this texture map, we generate realistic 3D faces for individuals at the end of the paper. PMID:23201976

  14. 3D visualization for medical volume segmentation validation

    NASA Astrophysics Data System (ADS)

    Eldeib, Ayman M.

    2002-05-01

    This paper presents a 3-D visualization tool that lets the user manipulate and enhance the segmented targets and other organs. The tool creates a precise and realistic 3-D model from a CT/MR data set for manipulation in 3-D, permitting the physician or planner to look through, around, and inside the various structures. It is designed to assist and evaluate the segmentation process. It can control the transparency of each 3-D object and displays in one view a 2-D slice (axial, coronal, and/or sagittal) within a 3-D model of the segmented tumor or structures. This helps the radiotherapist or operator evaluate the adequacy of the generated target compared to the original 2-D slices. The graphical interface enables the operator to easily select a specific 2-D slice of the 3-D volume data set. The operator can manually override and adjust the automated segmentation results, then view the 3-D model again, going back and forth until a satisfactory segmentation is obtained. The novelty of this work is in using state-of-the-art image processing and 3-D visualization techniques to facilitate the validation of medical volume segmentation and assure the accuracy of the volume measurement of the structure of interest.

  15. 3D-Printed Microfluidics.

    PubMed

    Au, Anthony K; Huynh, Wilson; Horowitz, Lisa F; Folch, Albert

    2016-03-14

    The advent of soft lithography allowed for an unprecedented expansion in the field of microfluidics. However, the vast majority of PDMS microfluidic devices are still made with extensive manual labor, are tethered to bulky control systems, and have cumbersome user interfaces, which all render commercialization difficult. On the other hand, 3D printing has begun to embrace the range of sizes and materials that appeal to the developers of microfluidic devices. Prior to fabrication, a design is digitally built as a detailed 3D CAD file. The design can be assembled in modules by remotely collaborating teams, and its mechanical and fluidic behavior can be simulated using finite-element modeling. As structures are created by adding materials without the need for etching or dissolution, processing is environmentally friendly and economically efficient. We predict that in the next few years, 3D printing will replace most PDMS and plastic molding techniques in academia.

  16. 3D Computations and Experiments

    SciTech Connect

    Couch, R; Faux, D; Goto, D; Nikkel, D

    2004-04-05

    This project consists of two activities. Task A, Simulations and Measurements, combines all the material model development and associated numerical work with the materials-oriented experimental activities. The goal of this effort is to provide an improved understanding of dynamic material properties and to provide accurate numerical representations of those properties for use in analysis codes. Task B, ALE3D Development, involves general development activities in the ALE3D code with the focus of improving simulation capabilities for problems of mutual interest to DoD and DOE. Emphasis is on problems involving multi-phase flow, blast loading of structures and system safety/vulnerability studies.

  17. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into
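    The solution-file variables listed above (density, momentum, stagnation energy) are enough to derive many of the scalar functions PLOT3D displays. A hedged sketch in nondimensional form, assuming a perfect gas with gamma = 1.4 (the function and variable names are ours, not PLOT3D's):

```python
import numpy as np

GAMMA = 1.4  # ratio of specific heats for air (assumption)

def derived_quantities(rho, mom, e0):
    """Derive velocity, pressure and Mach number from the conservative
    variables stored per grid point: density rho (shape (n,)), momentum
    vector mom (shape (n, 3)), stagnation energy e0 (shape (n,))."""
    vel = mom / rho[..., None]                  # velocity = momentum / density
    ke = 0.5 * rho * (vel ** 2).sum(-1)         # kinetic energy density
    p = (GAMMA - 1.0) * (e0 - ke)               # perfect-gas pressure
    a = np.sqrt(GAMMA * p / rho)                # speed of sound
    mach = np.sqrt((vel ** 2).sum(-1)) / a
    return vel, p, mach
```

    Surface pressure, Mach number, and similar functions are all post-processed this way from the per-point solution variables.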

  18. PLOT3D/AMES, SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  19. PLOT3D/AMES, SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  20. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  1. Making Inexpensive 3-D Models

    ERIC Educational Resources Information Center

    Manos, Harry

    2016-01-01

    Visual aids are important to student learning, and they help make the teacher's job easier. Keeping with the "TPT" theme of "The Art, Craft, and Science of Physics Teaching," the purpose of this article is to show how teachers, lacking equipment and funds, can construct a durable 3-D model reference frame and a model gravity…

  2. SNL3dFace

    2007-07-20

    This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters, the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at a 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regularization and rate of deformation. A variety of options are available to control the information that is encoded by the PCA. Such options include the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection, this software supports feature matching to obtain similarity matrices for performance analysis. It also supports visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.

  3. SNL3dFace

    SciTech Connect

    Russ, Trina; Koch, Mark; Koudelka, Melissa; Peters, Ralph; Little, Charles; Boehnen, Chris; Peters, Tanya

    2007-07-20

    This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters, the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at a 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regularization and rate of deformation. A variety of options are available to control the information that is encoded by the PCA. Such options include the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection, this software supports feature matching to obtain similarity matrices for performance analysis. It also supports visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.
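    The PCA feature-encoding step described above can be illustrated with a small sketch (hypothetical data; the actual system's choices of XYZ, depth, and texture channels are not reproduced here): center the flattened face vectors and project them onto the top-k principal axes.

```python
import numpy as np

def pca_features(faces, k):
    """Project flattened face vectors onto the top-k principal axes.

    faces -- (n_samples, n_dims) array, one flattened face per row
    Returns (features, basis, mean); a face is approximately
    features @ basis + mean.
    """
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centered data: rows of vt are the principal directions
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]
    return centered @ basis.T, basis, mean
```

    Verification then compares faces in this low-dimensional feature space (optionally after a further FLDA projection) rather than on raw coordinates.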

  4. 3D Printing: Exploring Capabilities

    ERIC Educational Resources Information Center

    Samuels, Kyle; Flowers, Jim

    2015-01-01

    As 3D printers become more affordable, schools are using them in increasing numbers. They fit well with the emphasis on product design in technology and engineering education, allowing students to create high-fidelity physical models to see and test different iterations in their product designs. They may also help students to "think in three…

  5. Accommodation response measurements for integral 3D image

    NASA Astrophysics Data System (ADS)

    Hiura, H.; Mishina, T.; Arai, J.; Iwadate, Y.

    2014-03-01

    We measured accommodation responses under integral photography (IP), binocular stereoscopic, and real object display conditions, in both binocular and monocular viewing. The equipment we used was an optometric device and a 3D display. We developed the 3D display for IP and binocular stereoscopic images; it comprises a high-resolution liquid crystal display (LCD) and a high-density lens array. The LCD has a resolution of 468 dpi and a diagonal size of 4.8 inches. The high-density lens array comprises 106 x 69 micro lenses that have a focal length of 3 mm and a diameter of 1 mm. The lenses are arranged in a honeycomb pattern. The 3D display was positioned 60 cm from the observer under the IP and binocular stereoscopic display conditions. The target was presented at eight depth positions relative to the 3D display: 15, 10, and 5 cm in front of the 3D display, on the 3D display panel, and 5, 10, 15 and 30 cm behind the 3D display under the IP and binocular stereoscopic display conditions. Under the real object display condition, the target was displayed on the 3D display panel, and the 3D display was placed at the eight positions. The results suggest that the IP image induced more natural accommodation responses than the binocular stereoscopic image. The accommodation responses to the IP image were weaker than those to a real object; however, they showed a similar tendency to those of the real object under the two viewing conditions. Therefore, IP can induce accommodation to the depth positions of 3D images.

  6. [Computer-assisted 3D phonetography].

    PubMed

    Neuschaefer-Rube, C; Klajman, S

    1996-10-01

    In clinical practice, profiles of fundamental frequency, sound pressure level, and voice duration are measured separately. The aim of the present study was to combine the two examinations in order to estimate the relationship between pitch, sound pressure level and voice duration, and to develop a new computer-assisted graph. A three-dimensional (3D) wireframe phonogram was constructed based on SPL profiles to obtain a general view of the parameters recorded. We have termed this "phonetography". Further projections were selected for the analysis of different aspects of the parametric relationships. The results in 21 healthy volunteers and 4 patients with hyperfunctional dysphonia demonstrated three typical shapes of 3D phonogram, depending on the relationship between voice duration in soft ("piano") and loud ("forte") phonation. In one-third of the healthy volunteers, the values of piano voice duration were greater than those of forte for almost all pitches examined. In two-thirds of the healthy subjects, the forte voice duration values were only partly greater than the piano values. All of the patients showed voice duration values greater for forte than for piano. The results of the study demonstrate that the 3D phonogram is a useful tool for obtaining new insights into various relationships of voice parameters.

  7. 3D MR imaging in real time

    NASA Astrophysics Data System (ADS)

    Guttman, Michael A.; McVeigh, Elliot R.

    2001-05-01

    A system has been developed to produce live 3D volume renderings from an MR scanner. Whereas real-time 2D MR imaging has been demonstrated by several groups, 3D volumes are currently rendered off-line to gain greater understanding of anatomical structures. For example, surgical planning is sometimes performed by viewing 2D images or 3D renderings from previously acquired image data. A disadvantage of this approach is misregistration, which can occur if the anatomy changes due to normal muscle contractions or surgical manipulation. The ability to produce volume renderings in real time and present them in the magnet room could eliminate this problem, and enable or benefit other types of interventional procedures. The system uses the data stream generated by a fast 2D multi-slice pulse sequence to update a volume rendering immediately after a new slice is available. We demonstrate some basic types of user interaction with the rendering during imaging at a rate of up to 20 frames per second.
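    The slice-by-slice update scheme can be sketched with a minimal stand-in (names hypothetical, no rendering shown): hold a 3D array, overwrite one slice position as each new 2D acquisition arrives, and hand the refreshed volume to the renderer.

```python
import numpy as np

class LiveVolume:
    """Keep a 3D volume that is refreshed one 2D slice at a time as the
    scanner delivers data, so a renderer can redraw after each update."""

    def __init__(self, n_slices, rows, cols):
        self.vol = np.zeros((n_slices, rows, cols), np.float32)
        self.next = 0  # slice position the next acquisition overwrites

    def push_slice(self, img):
        self.vol[self.next] = img                      # replace oldest slice
        self.next = (self.next + 1) % self.vol.shape[0]
        return self.vol                                # hand back to renderer
```

    Cycling through slice positions this way means the rendered volume is never more than one multi-slice sweep out of date.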

  8. Usability of stereoscopic view in teleoperation

    NASA Astrophysics Data System (ADS)

    Boonsuk, Wutthigrai

    2015-03-01

    Recently, there has been tremendous growth in the area of 3D stereoscopic visualization. The 3D stereoscopic visualization technology has been used in a growing number of consumer products such as 3D televisions and 3D glasses for gaming systems. This technology builds on the fact that the human brain develops depth perception by retrieving information from the two eyes: our brain combines the left and right images on the retinas and extracts depth information. Therefore, viewing two video images taken from slightly different positions, as shown in Figure 1, can create an illusion of depth [8]. Proponents of this technology argue that the stereo view of 3D visualization increases user immersion and performance, as more information is gained through 3D vision compared to a 2D view. However, it is still uncertain whether the additional information gained from 3D stereoscopic visualization can actually improve user performance in real-world situations such as teleoperation.
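    The depth-from-two-views geometry described here reduces, for an idealized pinhole stereo rig, to a single formula. A minimal sketch with illustrative numbers (not from the paper):

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Pinhole stereo: depth Z = f * B / d.  A nearer object produces a
    larger left/right image shift (disparity d), hence a smaller Z."""
    if disparity_px <= 0:
        raise ValueError("zero disparity corresponds to infinite depth")
    return focal_px * baseline_m / disparity_px
```

    This is the same relationship the visual system exploits, with the roughly 6.5 cm interocular separation playing the role of the baseline B.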

  9. Fish body surface data measurement based on 3D digital image correlation

    NASA Astrophysics Data System (ADS)

    Jiang, Ming; Qian, Chen; Yang, Wenkai

    2016-01-01

    To film a moving fish in a glass tank, light is bent at the interfaces between air and glass, and between glass and water. Based on binocular stereo vision and the refraction principle, we establish a mathematical model of 3D image correlation to reconstruct the 3D coordinates of samples in water. By marking speckle patterns on the fish surface, a series of real-time speckle images of the swimming fish is obtained by two high-speed cameras, and the instantaneous 3D shape, strain, displacement, etc. of the fish are reconstructed.
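    The bending at each interface follows Snell's law, and for a flat tank wall with parallel faces the glass layer only offsets the ray laterally, so the entry/exit angles obey the air-water relation directly. A small sketch with textbook refractive indices (assumed values, not the paper's calibration):

```python
import math

def refract_angle(theta_in_deg, n_in, n_out):
    """Snell's law: n_in * sin(theta_in) = n_out * sin(theta_out)."""
    s = n_in * math.sin(math.radians(theta_in_deg)) / n_out
    return math.degrees(math.asin(s))

def through_tank(theta_air_deg, n_air=1.0, n_glass=1.5, n_water=1.33):
    """Chain the refraction through the wall: air -> glass -> water."""
    theta_glass = refract_angle(theta_air_deg, n_air, n_glass)
    return refract_angle(theta_glass, n_glass, n_water)
```

    Because the interfaces are parallel, chaining through the glass gives the same exit angle as applying Snell's law between air and water alone; the reconstruction model applies this correction along each camera ray.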

  10. Design of a single projector multiview 3D display system

    NASA Astrophysics Data System (ADS)

    Geng, Jason

    2014-03-01

    Multiview three-dimensional (3D) displays provide horizontal parallax to viewers, with high-resolution, full-color images presented to each view. Most multiview 3D display systems are designed and implemented using multiple projectors, each generating images for one view. Although this multi-projector strategy is conceptually straightforward, it often leads to a very expensive system and complicated calibration procedures. Even for a multiview system with a moderate number of projectors (e.g., 32 or 64), the cost of a multi-projector 3D display system may become prohibitive due to the expense and complexity of integrating multiple projectors. In this article, we describe an optical design technique for a class of multiview 3D display systems that use only a single projector. In this single projector multiview (SPM) design, the multiple views are generated in a time-multiplexed fashion by a single high-speed projector with specially designed optical components, a scanning mirror, and a reflective mirror array. Images of all views are generated sequentially and projected via the specially designed optical system from different viewing directions towards a 3D display screen. The single projector is thus able to generate the equivalent number of multiview images from multiple viewing directions, fulfilling the tasks of multiple projectors. An obvious advantage of the proposed SPM technique is the significant reduction of cost, size, and complexity, especially when the number of views is high. The SPM strategy also alleviates the time-consuming procedures for multi-projector calibration. The design method is flexible and scalable and can accommodate systems with different numbers of views.
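    The time-multiplexing trade-off at the heart of the SPM design is easy to state in code: the single projector's frames are dealt round-robin to the views, so each view refreshes at the projector rate divided by the view count (numbers illustrative, not from the paper):

```python
def view_schedule(n_views, n_frames):
    """Round-robin assignment of projector frames to views."""
    return [frame % n_views for frame in range(n_frames)]

def per_view_rate(projector_fps, n_views):
    """Effective refresh rate each view receives under time multiplexing."""
    return projector_fps / n_views
```

    This is why the approach needs a high-speed projector: the per-view rate shrinks linearly as views are added.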

  11. Streamlined, Inexpensive 3D Printing of the Brain and Skull

    PubMed Central

    Cash, Sydney S.

    2015-01-01

    Neuroimaging technologies such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) collect three-dimensional data (3D) that is typically viewed on two-dimensional (2D) screens. Actual 3D models, however, allow interaction with real objects such as implantable electrode grids, potentially improving patient specific neurosurgical planning and personalized clinical education. Desktop 3D printers can now produce relatively inexpensive, good quality prints. We describe our process for reliably generating life-sized 3D brain prints from MRIs and 3D skull prints from CTs. We have integrated a standardized, primarily open-source process for 3D printing brains and skulls. We describe how to convert clinical neuroimaging Digital Imaging and Communications in Medicine (DICOM) images to stereolithography (STL) files, a common 3D object file format that can be sent to 3D printing services. We additionally share how to convert these STL files to machine instruction gcode files, for reliable in-house printing on desktop, open-source 3D printers. We have successfully printed over 19 patient brain hemispheres from 7 patients on two different open-source desktop 3D printers. Each brain hemisphere costs approximately $3–4 in consumable plastic filament as described, and the total process takes 14–17 hours, almost all of which is unsupervised (preprocessing = 4–6 hr; printing = 9–11 hr, post-processing = <30 min). Printing a matching portion of a skull costs $1–5 in consumable plastic filament and takes less than 14 hr, in total. We have developed a streamlined, cost-effective process for 3D printing brain and skull models. We surveyed healthcare providers and patients who confirmed that rapid-prototype patient specific 3D models may help interdisciplinary surgical planning and patient education. The methods we describe can be applied for other clinical, research, and educational purposes. PMID:26295459
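    The binary STL target of the conversion pipeline is a simple public format: an 80-byte header, a uint32 triangle count, then 50 bytes per triangle (normal, three vertices, uint16 attribute count). A minimal writer as an illustration (not the authors' toolchain):

```python
import struct

def write_binary_stl(triangles, fh):
    """Write triangles -- each a (v0, v1, v2) of (x, y, z) tuples -- to a
    binary-mode file object in binary STL layout."""
    fh.write(b"\0" * 80)                              # 80-byte header
    fh.write(struct.pack("<I", len(triangles)))       # uint32 triangle count
    for v0, v1, v2 in triangles:
        fh.write(struct.pack("<3f", 0.0, 0.0, 0.0))   # normal (many slicers recompute it)
        for v in (v0, v1, v2):
            fh.write(struct.pack("<3f", *v))          # vertex coordinates
        fh.write(struct.pack("<H", 0))                # attribute byte count
```

    In the pipeline above, the triangles come from a surface extracted from the segmented DICOM volume; the resulting STL is what the slicer turns into gcode.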

  12. Streamlined, Inexpensive 3D Printing of the Brain and Skull.

    PubMed

    Naftulin, Jason S; Kimchi, Eyal Y; Cash, Sydney S

    2015-01-01

    Neuroimaging technologies such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) collect three-dimensional data (3D) that is typically viewed on two-dimensional (2D) screens. Actual 3D models, however, allow interaction with real objects such as implantable electrode grids, potentially improving patient specific neurosurgical planning and personalized clinical education. Desktop 3D printers can now produce relatively inexpensive, good quality prints. We describe our process for reliably generating life-sized 3D brain prints from MRIs and 3D skull prints from CTs. We have integrated a standardized, primarily open-source process for 3D printing brains and skulls. We describe how to convert clinical neuroimaging Digital Imaging and Communications in Medicine (DICOM) images to stereolithography (STL) files, a common 3D object file format that can be sent to 3D printing services. We additionally share how to convert these STL files to machine instruction gcode files, for reliable in-house printing on desktop, open-source 3D printers. We have successfully printed over 19 patient brain hemispheres from 7 patients on two different open-source desktop 3D printers. Each brain hemisphere costs approximately $3-4 in consumable plastic filament as described, and the total process takes 14-17 hours, almost all of which is unsupervised (preprocessing = 4-6 hr; printing = 9-11 hr, post-processing = <30 min). Printing a matching portion of a skull costs $1-5 in consumable plastic filament and takes less than 14 hr, in total. We have developed a streamlined, cost-effective process for 3D printing brain and skull models. We surveyed healthcare providers and patients who confirmed that rapid-prototype patient specific 3D models may help interdisciplinary surgical planning and patient education. The methods we describe can be applied for other clinical, research, and educational purposes.

  13. Streamlined, Inexpensive 3D Printing of the Brain and Skull.

    PubMed

    Naftulin, Jason S; Kimchi, Eyal Y; Cash, Sydney S

    2015-01-01

    Neuroimaging technologies such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) collect three-dimensional data (3D) that is typically viewed on two-dimensional (2D) screens. Actual 3D models, however, allow interaction with real objects such as implantable electrode grids, potentially improving patient specific neurosurgical planning and personalized clinical education. Desktop 3D printers can now produce relatively inexpensive, good quality prints. We describe our process for reliably generating life-sized 3D brain prints from MRIs and 3D skull prints from CTs. We have integrated a standardized, primarily open-source process for 3D printing brains and skulls. We describe how to convert clinical neuroimaging Digital Imaging and Communications in Medicine (DICOM) images to stereolithography (STL) files, a common 3D object file format that can be sent to 3D printing services. We additionally share how to convert these STL files to machine instruction gcode files, for reliable in-house printing on desktop, open-source 3D printers. We have successfully printed over 19 patient brain hemispheres from 7 patients on two different open-source desktop 3D printers. Each brain hemisphere costs approximately $3-4 in consumable plastic filament as described, and the total process takes 14-17 hours, almost all of which is unsupervised (preprocessing = 4-6 hr; printing = 9-11 hr, post-processing = <30 min). Printing a matching portion of a skull costs $1-5 in consumable plastic filament and takes less than 14 hr, in total. We have developed a streamlined, cost-effective process for 3D printing brain and skull models. We surveyed healthcare providers and patients who confirmed that rapid-prototype patient specific 3D models may help interdisciplinary surgical planning and patient education. The methods we describe can be applied for other clinical, research, and educational purposes. PMID:26295459
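
    The DICOM-to-STL conversion described above ultimately produces a binary STL file with a fixed, simple layout (80-byte header, triangle count, 50 bytes per facet). As an illustration only, and not the authors' pipeline, the sketch below writes a minimal binary STL using Python's standard library; the file name and the single facet are hypothetical.

```python
import struct

def write_binary_stl(path, triangles):
    """Write triangles to a binary STL file.

    `triangles` is a list of (normal, v1, v2, v3) tuples, each a 3-float tuple.
    Binary STL layout: 80-byte header, uint32 triangle count, 50 bytes/triangle.
    """
    with open(path, "wb") as f:
        f.write(b"\x00" * 80)                       # 80-byte header (unused)
        f.write(struct.pack("<I", len(triangles)))  # little-endian facet count
        for normal, v1, v2, v3 in triangles:
            for vec in (normal, v1, v2, v3):
                f.write(struct.pack("<3f", *vec))   # 12 floats per facet
            f.write(struct.pack("<H", 0))           # attribute byte count

# A single hypothetical facet: one triangle in the z=0 plane, normal up.
tri = ((0.0, 0.0, 1.0), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
write_binary_stl("facet.stl", [tri])
```

    A mesh extracted from segmented DICOM data would simply supply a longer triangle list; slicer software then turns the STL into G-code for the printer.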

  14. Forensic 3D scene reconstruction

    NASA Astrophysics Data System (ADS)

    Little, Charles Q.; Small, Daniel E.; Peters, Ralph R.; Rigdon, J. B.

    2000-05-01

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a fieldable prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.

  15. Forensic 3D Scene Reconstruction

    SciTech Connect

    LITTLE,CHARLES Q.; PETERS,RALPH R.; RIGDON,J. BRIAN; SMALL,DANIEL E.

    1999-10-12

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a fieldable prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.

  16. 3D Printed Robotic Hand

    NASA Technical Reports Server (NTRS)

    Pizarro, Yaritzmar Rosario; Schuler, Jason M.; Lippitt, Thomas C.

    2013-01-01

    Dexterous robotic hands are changing the way robots and humans interact and use common tools. Unfortunately, the complexity of the joints and actuations drives up the manufacturing cost. Some cutting-edge and commercially available rapid prototyping machines now have the ability to print multiple materials and even combine these materials in the same job. A 3D model of a robotic hand was designed using Creo Parametric 2.0. Combining "hard" and "soft" materials, the model was printed on the Objet Connex350 3D printer with the purpose of resembling as much as possible the human appearance and mobility of a real hand while needing no assembly. After printing the prototype, strings were installed as actuators to test mobility. Based on printing materials, the manufacturing cost of the hand (excluding actuators) was $167, significantly lower than that of other robotic hands, which require more complex assembly processes.

  17. 3D light scanning macrography.

    PubMed

    Huber, D; Keller, M; Robert, D

    2001-08-01

    The technique of 3D light scanning macrography permits the non-invasive surface scanning of small specimens at magnifications up to 200x. Obviating both the problem of limited depth of field inherent to conventional close-up macrophotography and the metallic coating required by scanning electron microscopy, 3D light scanning macrography provides three-dimensional digital images of intact specimens without the loss of colour, texture and transparency information. This newly developed technique offers a versatile, portable and cost-efficient method for the non-invasive digital and photographic documentation of small objects. Computer controlled device operation and digital image acquisition facilitate fast and accurate quantitative morphometric investigations, and the technique offers a broad field of research and educational applications in biological, medical and materials sciences. PMID:11489078

  18. 3-D visualization of ensemble weather forecasts - Part 1: The visualization tool Met.3D (version 1.0)

    NASA Astrophysics Data System (ADS)

    Rautenhaus, M.; Kern, M.; Schäfler, A.; Westermann, R.

    2015-02-01

    We present Met.3D, a new open-source tool for the interactive 3-D visualization of numerical ensemble weather predictions. The tool has been developed to support weather forecasting during aircraft-based atmospheric field campaigns; however, it is also applicable to other forecasting, research and teaching activities. Our work approaches challenging topics related to the visual analysis of numerical atmospheric model output: 3-D visualization, ensemble visualization, and how both can be used in a meaningful way suited to weather forecasting. Met.3D builds a bridge from proven 2-D visualization methods commonly used in meteorology to 3-D visualization by combining both visualization types in a 3-D context. We address the issue of spatial perception in the 3-D view and present approaches to using the ensemble to allow the user to assess forecast uncertainty. Interactivity is key to our approach. Met.3D uses modern graphics technology to achieve interactive visualization on standard consumer hardware. The tool supports forecast data from the European Centre for Medium-Range Weather Forecasts and can operate directly on ECMWF hybrid sigma-pressure level grids. We describe the employed visualization algorithms, and analyse the impact of the ECMWF grid topology on computing 3-D ensemble statistical quantities. Our techniques are demonstrated with examples from the T-NAWDEX-Falcon 2012 campaign.

  19. Met.3D - a new open-source tool for interactive 3D visualization of ensemble weather forecasts

    NASA Astrophysics Data System (ADS)

    Rautenhaus, Marc; Kern, Michael; Schäfler, Andreas; Westermann, Rüdiger

    2015-04-01

    We introduce Met.3D, a new open-source tool for the interactive 3D visualization of numerical ensemble weather predictions. The tool has been developed to support weather forecasting during aircraft-based atmospheric field campaigns; however, it is also applicable to other forecasting, research and teaching activities. Our work approaches challenging topics related to the visual analysis of numerical atmospheric model output: 3D visualization, ensemble visualization, and how both can be used in a meaningful way suited to weather forecasting. Met.3D builds a bridge from proven 2D visualization methods commonly used in meteorology to 3D visualization by combining both visualization types in a 3D context. It implements methods that address the issue of spatial perception in the 3D view as well as approaches to using the ensemble in order to assess forecast uncertainty. Interactivity is key to the Met.3D approach. The tool uses modern graphics hardware technology to achieve interactive visualization of present-day numerical weather prediction datasets on standard consumer hardware. Met.3D supports forecast data from the European Centre for Medium-Range Weather Forecasts and operates directly on ECMWF hybrid sigma-pressure level grids. In this presentation, we provide an overview of the software, illustrated with short video examples, and give information on its availability.

  20. 3D Tissue Culturing: Tissue in Cube: In Vitro 3D Culturing Platform with Hybrid Gel Cubes for Multidirectional Observations (Adv. Healthcare Mater. 13/2016).

    PubMed

    Hagiwara, Masaya; Kawahara, Tomohiro; Nobata, Rina

    2016-07-01

    An in vitro 3D culturing platform enabling multidirectional observations of 3D biosamples is presented by M. Hagiwara and co-workers on page 1566. 3D recognition of a sample structure can be achieved by facilitating multi-directional views using a standard microscope without a laser system. The cubic platform has the potential to promote 3D culture studies, offering easy handling and compatibility with commercial culture plates at a low price tag. PMID:27384934

  1. 3D Tissue Culturing: Tissue in Cube: In Vitro 3D Culturing Platform with Hybrid Gel Cubes for Multidirectional Observations (Adv. Healthcare Mater. 13/2016).

    PubMed

    Hagiwara, Masaya; Kawahara, Tomohiro; Nobata, Rina

    2016-07-01

    An in vitro 3D culturing platform enabling multidirectional observations of 3D biosamples is presented by M. Hagiwara and co-workers on page 1566. 3D recognition of a sample structure can be achieved by facilitating multi-directional views using a standard microscope without a laser system. The cubic platform has the potential to promote 3D culture studies, offering easy handling and compatibility with commercial culture plates at a low price tag.

  2. [Real time 3D echocardiography]

    NASA Technical Reports Server (NTRS)

    Bauer, F.; Shiota, T.; Thomas, J. D.

    2001-01-01

    Three-dimensional representation of the heart is an old concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections whose spatial localisation and orientation require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matricial emitter-receiver probe with parallel data processing provides instantaneous acquisition of a pyramidal 64 degrees x 64 degrees volume. The image is rendered in real time and is composed of 3 planes (planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this acquisition system allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real time 3D echocardiography is ready for clinical use, some improvements are still necessary to increase its user-friendliness. Real time 3D echocardiography could then become the essential tool for the understanding, diagnosis and management of patients.

  3. [Real time 3D echocardiography].

    PubMed

    Bauer, F; Shiota, T; Thomas, J D

    2001-07-01

    Three-dimensional representation of the heart is an old concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections whose spatial localisation and orientation require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matricial emitter-receiver probe with parallel data processing provides instantaneous acquisition of a pyramidal 64 degrees x 64 degrees volume. The image is rendered in real time and is composed of 3 planes (planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this acquisition system allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real time 3D echocardiography is ready for clinical use, some improvements are still necessary to increase its user-friendliness. Real time 3D echocardiography could then become the essential tool for the understanding, diagnosis and management of patients. PMID:11494630

  4. DYNA3D. Explicit 3-d Hydrodynamic FEM Program

    SciTech Connect

    Whirley, R.G.; Engelmann, B.E.

    1993-11-30

    DYNA3D is an explicit, three-dimensional, finite element program for analyzing the large deformation dynamic response of inelastic solids and structures. DYNA3D contains 30 material models and 10 equations of state (EOS) to cover a wide range of material behavior. The material models implemented are: elastic, orthotropic elastic, kinematic/isotropic plasticity, thermoelastoplastic, soil and crushable foam, linear viscoelastic, Blatz-Ko rubber, high explosive burn, hydrodynamic without deviatoric stresses, elastoplastic hydrodynamic, temperature-dependent elastoplastic, isotropic elastoplastic, isotropic elastoplastic with failure, soil and crushable foam with failure, Johnson/Cook plasticity model, pseudo TENSOR geological model, elastoplastic with fracture, power law isotropic plasticity, strain rate dependent plasticity, rigid, thermal orthotropic, composite damage model, thermal orthotropic with 12 curves, piecewise linear isotropic plasticity, inviscid two invariant geologic cap, orthotropic crushable model, Mooney-Rivlin rubber, resultant plasticity, closed form update shell plasticity, and Frazer-Nash rubber model. The hydrodynamic material models determine only the deviatoric stresses. Pressure is determined by one of 10 equations of state including linear polynomial, JWL high explosive, Sack Tuesday high explosive, Gruneisen, ratio of polynomials, linear polynomial with energy deposition, ignition and growth of reaction in HE, tabulated compaction, tabulated, and TENSOR pore collapse. DYNA3D generates three binary output databases. One contains information for complete states at infrequent intervals; 50 to 100 states is typical. The second contains information for a subset of nodes and elements at frequent intervals; 1,000 to 10,000 states is typical. The last contains interface data for contact surfaces.

  5. GPU-Accelerated Denoising in 3D (GD3D)

    2013-10-01

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. This software addresses two facets of this promising application: what tuning is necessary to achieve optimal performance on a modern GPU, and what parameters yield the best denoising results in practice? To answer the first question, the software performs an autotuning step to empirically determine optimal memory blocking on the GPU. To answer the second, it performs a sweep of algorithm parameters to determine the combination that best reduces the mean squared error relative to a noiseless reference image.
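
    The parameter sweep described above can be illustrated with a toy example. The sketch below is an assumption-laden stand-in, not the GD3D code: it uses a simple 1-D box filter on the CPU rather than bilateral filtering or non-local means, but the selection logic (minimise MSE against a noiseless reference) is the same.

```python
import numpy as np

def mean_filter(img, k):
    # Box filter of width k via cumulative sums (1-D for brevity).
    pad = np.pad(img, k // 2, mode="edge")
    c = np.cumsum(np.concatenate([[0.0], pad]))
    return (c[k:] - c[:-k]) / k

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 200))     # noiseless reference
noisy = clean + rng.normal(0, 0.3, clean.shape)    # additive Gaussian noise

# Sweep filter widths, keep the one with lowest MSE vs the reference.
best_k = min([3, 5, 7, 9, 11],
             key=lambda k: np.mean((mean_filter(noisy, k) - clean) ** 2))
best_mse = np.mean((mean_filter(noisy, best_k) - clean) ** 2)
noisy_mse = np.mean((noisy - clean) ** 2)
```

    A real sweep would cross several parameters (e.g. search window and patch size for non-local means) and run each candidate on the GPU.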

  6. Acquiring 3-D Spatial Data Of A Real Object

    NASA Astrophysics Data System (ADS)

    Wu, C. K.; Wang, D. Q.; Bajcsy, R. K.

    1983-10-01

    A method of acquiring spatial data of a real object via a stereometric system is presented. Three-dimensional (3-D) data of an object are acquired by: (1) camera calibration; (2) stereo matching; (3) multiple stereo views covering the whole object; (4) geometrical computations to determine the 3-D coordinates for each sample point of the object. The analysis and the experimental results indicate the method implemented is capable of measuring the spatial data of a real object with satisfactory accuracy.
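
    Step (4), computing 3-D coordinates from matched image points in calibrated views, is classically done by linear (DLT) triangulation. The following sketch, using a hypothetical two-camera rig, illustrates the idea; it is not the authors' implementation.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two calibrated views.

    P1, P2: 3x4 projection matrices; x1, x2: (u, v) image points.
    Solves A X = 0 for the homogeneous 3-D point X via SVD.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    X = np.linalg.svd(A)[2][-1]       # right singular vector of smallest s.v.
    return X[:3] / X[3]

# Hypothetical rig: identity camera and one translated 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
point = np.array([0.5, 0.2, 4.0])                 # ground-truth 3-D point
x1 = point[:2] / point[2]                         # projection in view 1
x2 = (point - [1, 0, 0])[:2] / point[2]           # projection in view 2
recovered = triangulate(P1, P2, x1, x2)
```

    With noise-free matches the recovered point is exact; with real stereo matches the SVD gives a least-squares solution.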

  7. 3D-model building of the jaw impression

    NASA Astrophysics Data System (ADS)

    Ahmed, Moumen T.; Yamany, Sameh M.; Hemayed, Elsayed E.; Farag, Aly A.

    1997-03-01

    A novel approach is proposed to obtain a record of the patient's occlusion using computer vision. Data acquisition is obtained using intra-oral video cameras. The technique utilizes shape from shading to extract 3D information from 2D views of the jaw, and a novel technique for 3D data registration using genetic algorithms. The resulting 3D model can be used for diagnosis, treatment planning, and implant purposes. The overall purpose of this research is to develop a model-based vision system for orthodontics to replace traditional approaches. This system will be flexible, accurate, and will reduce the cost of orthodontic treatments.

  8. Preference for motion and depth in 3D film

    NASA Astrophysics Data System (ADS)

    Hartle, Brittney; Lugtigheid, Arthur; Kazimi, Ali; Allison, Robert S.; Wilcox, Laurie M.

    2015-03-01

    While heuristics have evolved over decades for the capture and display of conventional 2D film, it is not clear these always apply well to stereoscopic 3D (S3D) film. Further, while there has been considerable recent research on viewer comfort in S3D media, little attention has been paid to audience preferences for filming parameters in S3D. Here we evaluate viewers' preferences for moving S3D film content in a theatre setting. Specifically we examine preferences for combinations of camera motion (speed and direction) and stereoscopic depth (IA). The amount of IA had no impact on clip preferences regardless of the direction or speed of camera movement. However, preferences were influenced by camera speed, but only in the in-depth condition where viewers preferred faster motion. Given that previous research shows that slower speeds are more comfortable for viewing S3D content, our results show that viewing preferences cannot be predicted simply from measures of comfort. Instead, it is clear that viewer response to S3D film is complex and that film parameters selected to enhance comfort may in some instances produce less appealing content.

  9. Slanted channel microfluidic chip for 3D fluorescence imaging of cells in flow.

    PubMed

    Jagannadh, Veerendra Kalyan; Mackenzie, Mark D; Pal, Parama; Kar, Ajoy K; Gorthi, Sai Siva

    2016-09-19

    Three-dimensional cellular imaging techniques have become indispensable tools in biological research and medical diagnostics. Conventional 3D imaging approaches employ focal stack collection to image different planes of the cell. In this work, we present the design and fabrication of a slanted channel microfluidic chip for 3D fluorescence imaging of cells in flow. The approach employs slanted microfluidic channels fabricated in glass using ultrafast laser inscription. The slanted nature of the microfluidic channels ensures that samples come into and go out of focus as they pass through the microscope imaging field of view. This novel approach enables the collection of focal stacks in a straightforward and automated manner, even with off-the-shelf microscopes that are not equipped with any motorized translation/rotation sample stages. The presented approach not only simplifies conventional focal stack collection, but also enhances the capabilities of a regular widefield fluorescence microscope to match the features of a sophisticated confocal microscope. We demonstrate the retrieval of sectioned slices of microspheres and cells, with the use of computational algorithms to enhance the signal-to-noise ratio (SNR) in the collected raw images. The retrieved sectioned images have been used to visualize fluorescent microspheres and bovine sperm cell nucleus in 3D while using a regular widefield fluorescence microscope. We have been able to achieve sectioning of approximately 200 slices per cell, which corresponds to a spatial translation of ∼ 15 nm per slice along the optical axis of the microscope. PMID:27661949
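
    The abstract mentions computational enhancement of the SNR in the collected raw images. One common generic approach, shown here purely as an illustration and not necessarily the authors' algorithm, is averaging repeated noisy exposures, which improves SNR by roughly the square root of the number of frames.

```python
import numpy as np

rng = np.random.default_rng(3)
signal = np.ones(500)                              # idealised fluorescence signal
frames = signal + rng.normal(0, 1.0, (16, 500))    # 16 noisy exposures

def snr(x):
    # Signal level over the standard deviation of the residual noise.
    return signal.mean() / np.std(x - signal)

# Averaging N independent frames improves SNR by roughly sqrt(N).
averaged = frames.mean(axis=0)
```

    With 16 frames the expected SNR gain is about 4x; fixed-pattern noise, in contrast, is not reduced by averaging.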

  10. Modeling, Prediction, and Reduction of 3D Crosstalk in Circular Polarized Stereoscopic LCDs.

    PubMed

    Zeng, Menglin; Robinson, Alan E; Nguyen, Truong Q

    2015-12-01

    Crosstalk, the incomplete separation between the left and right views in 3D displays, induces ghosting and makes it difficult for the eyes to fuse the stereo image for depth perception. The circularly polarized (CP) liquid crystal display (LCD) is one of the mainstream consumer 3D displays amid the growing popularity of 3D movies and gaming. The polarizing system, including the patterned retarder, is one of the major causes of crosstalk in CP LCDs. The contributions of this paper are a model of the polarizing system of CP LCDs and a crosstalk reduction method that efficiently cancels crosstalk while preserving image contrast. The model accounts for the practical orientation of the polarized glasses (PG). In addition, this paper calculates the rotation of the light-propagation coordinate frame for the Stokes vector as light propagates from the LCD to the PG, a calculation missing from previous works applying Mueller calculus. The proposed crosstalk reduction method is formulated as a linear programming problem, which can be easily solved. In addition, we propose excluding highly textured areas in the input images to further preserve image contrast during crosstalk reduction. PMID:26259220
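
    The core of crosstalk cancellation can be sketched independently of the paper's full Mueller-calculus model. Assuming a single symmetric leakage coefficient c (a strong simplification of the paper's display model), pre-compensation amounts to inverting a 2x2 mixing matrix; the clip to the displayable range is what motivates a constrained, linear-programming formulation.

```python
import numpy as np

c = 0.05                                   # assumed crosstalk coefficient
M = np.array([[1.0, c], [c, 1.0]])         # symmetric left/right mixing matrix
Minv = np.linalg.inv(M)

rng = np.random.default_rng(1)
intended = rng.random((2, 1000))           # intended left/right intensities

# Pre-compensated drive values; the clip to the displayable range is the
# reason a naive matrix inverse cannot cancel crosstalk everywhere.
drive = np.clip(Minv @ intended, 0.0, 1.0)
perceived = M @ drive                      # what each eye actually receives
uncorrected = M @ intended                 # ghosting without compensation
```

    Residual error remains only where clipping occurred (very dark pixels next to bright ones in the other view), which is exactly where contrast-preserving methods must trade off.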

  11. 3D colour visualization of label images using volume rendering techniques.

    PubMed

    Vandenhouten, R; Kottenhoff, R; Grebe, R

    1995-01-01

    Volume rendering methods for the visualization of 3D image data sets have been developed and collected in a C library. The core algorithm consists of a perspective ray casting technique for a natural and realistic view of the 3D scene. New edge-operator shading methods are employed for a fast and information-preserving representation of surfaces. Control parameters of the algorithm can be tuned to produce either smoothed surfaces or a very detailed rendering of the geometrical structure. Different objects can be distinguished by different colours. Shadow ray tracing has been implemented to improve the realistic impression of the 3D image. For the simultaneous representation of objects at different depths that occlude each other, two types of transparency mode are used (wireframe and glass transparency). Single objects or groups of objects can be excluded from the rendering (peeling). Three orthogonal cutting planes, or one arbitrarily placed cutting plane, can be applied to the rendered objects to reveal additional information about inner structures, contours, and relative positions.
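
    The ray casting at the heart of such renderers can be reduced to front-to-back alpha compositing along each ray. The orthographic, axis-aligned sketch below is a simplification (the library uses perspective rays, shading, and shadows) that shows only the compositing rule on a toy labelled volume.

```python
import numpy as np

def composite(volume, alpha):
    """Front-to-back alpha compositing of parallel rays along axis 0.

    `volume` holds per-voxel intensity, `alpha` per-voxel opacity in [0, 1].
    One orthographic ray per (y, x) pixel stands in for perspective casting.
    """
    color = np.zeros(volume.shape[1:])
    transmittance = np.ones(volume.shape[1:])
    for z in range(volume.shape[0]):
        color += transmittance * alpha[z] * volume[z]
        transmittance *= 1.0 - alpha[z]     # light blocked by this slice
    return color

# Two-slab toy volume: a fully opaque bright slab hides a dark one behind it.
vol = np.zeros((4, 2, 2)); vol[1] = 1.0; vol[3] = 0.2
opa = np.zeros((4, 2, 2)); opa[1] = 1.0; opa[3] = 1.0
image = composite(vol, opa)
```

    Lowering the front slab's opacity would let the rear slab show through, which is the "glass transparency" effect described in the abstract.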

  12. Slanted channel microfluidic chip for 3D fluorescence imaging of cells in flow.

    PubMed

    Jagannadh, Veerendra Kalyan; Mackenzie, Mark D; Pal, Parama; Kar, Ajoy K; Gorthi, Sai Siva

    2016-09-19

    Three-dimensional cellular imaging techniques have become indispensable tools in biological research and medical diagnostics. Conventional 3D imaging approaches employ focal stack collection to image different planes of the cell. In this work, we present the design and fabrication of a slanted channel microfluidic chip for 3D fluorescence imaging of cells in flow. The approach employs slanted microfluidic channels fabricated in glass using ultrafast laser inscription. The slanted nature of the microfluidic channels ensures that samples come into and go out of focus as they pass through the microscope imaging field of view. This novel approach enables the collection of focal stacks in a straightforward and automated manner, even with off-the-shelf microscopes that are not equipped with any motorized translation/rotation sample stages. The presented approach not only simplifies conventional focal stack collection, but also enhances the capabilities of a regular widefield fluorescence microscope to match the features of a sophisticated confocal microscope. We demonstrate the retrieval of sectioned slices of microspheres and cells, with the use of computational algorithms to enhance the signal-to-noise ratio (SNR) in the collected raw images. The retrieved sectioned images have been used to visualize fluorescent microspheres and bovine sperm cell nucleus in 3D while using a regular widefield fluorescence microscope. We have been able to achieve sectioning of approximately 200 slices per cell, which corresponds to a spatial translation of ∼ 15 nm per slice along the optical axis of the microscope.

  13. Practical pseudo-3D registration for large tomographic images

    NASA Astrophysics Data System (ADS)

    Liu, Xuan; Laperre, Kjell; Sasov, Alexander

    2014-09-01

    Image registration is a powerful tool in various tomographic applications. Our main focus is on microCT applications, in which samples or animals can be scanned multiple times under different conditions or at different time points. For this purpose, a registration tool capable of handling fairly large volumes has been developed, using a novel pseudo-3D method to achieve fast and interactive registration with simultaneous 3D visualization. To reduce the computational complexity of 3D registration, we decompose it into several 2D registrations, which are applied to the orthogonal views (transaxial, sagittal and coronal) sequentially and iteratively. After registration in each view, the next view is retrieved with the new transformation matrix for registration. This reduces the computational complexity significantly: for a rigid transform, we only need to search for 3 parameters (2 shifts, 1 rotation) in each of the 3 orthogonal views instead of 6 (3 shifts, 3 rotations) for the full 3D volume. In addition, the number of voxels involved is also significantly reduced. The proposed pseudo-3D method employs image-based registration with the Sum of Squared Differences (SSD) as the similarity measure, and Powell's conjugate direction method as the search engine. In this paper, only rigid transforms are used; however, the approach can be extended to affine transforms by adding scaling, and possibly shearing, to the transform model. We have noticed that more information can be used in the 2D registrations if Maximum Intensity Projections (MIP) or Parallel Projections (PP) are used instead of the orthogonal views. Other similarity measures, such as covariance or mutual information, can also easily be incorporated. An initial evaluation on microCT data shows very promising results. Two application examples are shown: dental samples before and after treatment, and structural changes in materials before and after compression. An evaluation of registration accuracy comparing the pseudo-3D method with a true 3D method has also been carried out.
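
    The 2D registrations at the core of the pseudo-3D method can be illustrated with a brute-force version. The sketch below is a deliberate simplification: integer shifts only, no rotation, and exhaustive search instead of Powell's conjugate direction method, but it minimises the same SSD similarity measure over translations of one slice.

```python
import numpy as np

def register_shift_ssd(fixed, moving, max_shift=5):
    """Exhaustive integer-shift registration minimising the SSD measure."""
    best, best_ssd = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            ssd = np.sum((fixed - shifted) ** 2)   # similarity measure
            if ssd < best_ssd:
                best, best_ssd = (dy, dx), ssd
    return best

rng = np.random.default_rng(2)
fixed = rng.random((32, 32))
moving = np.roll(np.roll(fixed, -3, axis=0), 2, axis=1)  # known misalignment
shift = register_shift_ssd(fixed, moving)
```

    The pseudo-3D scheme would run such a 2D search on each orthogonal view in turn, feeding the updated transform into the next view.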

  14. Towards a Normalised 3D Geovisualisation: The Viewpoint Management

    NASA Astrophysics Data System (ADS)

    Neuville, R.; Poux, F.; Hallot, P.; Billen, R.

    2016-10-01

    This paper deals with viewpoint management in allocentric 3D environments. Recent advances in computer science and the growing number of affordable remote sensors have led to impressive improvements in 3D visualisation. Despite some research into the visual variables used in 3D environments, we note that a real standardisation of 3D representation rules is still lacking. In this paper we study the "viewpoint" as the first parameter to consider for a normalised visualisation of 3D data. Unlike in a 2D environment, the viewing direction in 3D is not fixed in a top-down direction. A non-optimal camera location means a poor 3D representation in terms of relayed information. Based on this observation, we propose a model, based on the analysis of the computational display pixels, that determines a viewpoint maximising the relayed information for a given kind of query. We developed an OpenGL prototype that determines the optimal camera location using a screen-pixel colour algorithm. This viewpoint management constitutes a first step towards a normalised 3D geovisualisation.
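
    The screen-pixel idea can be illustrated with a toy scoring function. The sketch below is a hypothetical stand-in for the authors' OpenGL algorithm: it scores a rendered frame by the entropy of its object-colour histogram, so viewpoints that show more objects, more evenly, score higher.

```python
import numpy as np

def viewpoint_score(frame, background=0):
    """Score a rendered frame by the entropy of its object-label histogram.

    Higher scores mean more objects visible with more balanced pixel coverage.
    Assumes at least one non-background pixel is present.
    """
    labels, counts = np.unique(frame[frame != background], return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log(p)).sum()

# Two hypothetical 4x4 renderings: view B shows two objects, view A only one.
view_a = np.zeros((4, 4), dtype=int); view_a[:2] = 1
view_b = np.zeros((4, 4), dtype=int); view_b[:2] = 1; view_b[2:] = 2
best = max([("A", view_a), ("B", view_b)], key=lambda v: viewpoint_score(v[1]))[0]
```

    A real implementation would render the scene from many candidate camera locations and keep the highest-scoring one; the scoring function would also depend on the kind of query.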

  15. Projection type transparent 3D display using active screen

    NASA Astrophysics Data System (ADS)

    Kamoshita, Hiroki; Yendo, Tomohiro

    2015-05-01

    Many devices for enjoying 3D images, such as movie theatres and televisions, have been developed, and 3D video is now a familiar imaging technology. Displays that present 3D images include eyewear-based, naked-eye, and HMD types, each used in different applications and settings. Transparent 3D displays, however, have not been widely studied. If a large transparent 3D display were realized, it would be useful for overlaying 3D images on real scenes in applications such as road signs, shop windows, and conference-room screens. A previous study proposed producing a transparent 3D display using a special transparent screen and a number of projectors; for smooth motion parallax, however, many projectors are required. In this paper, we propose a display with transparency and a large display area that time-multiplexes projected images from one or a small number of projectors onto an active screen. The active screen is composed of many vertically elongated small rotating mirrors. Stereoscopic viewing is achieved by changing the projected image in synchronism with the scanning of the beam across the mirrors. The display is also transparent, because the viewer can see through it when the mirrors are perpendicular to the line of sight. We confirmed the validity of the proposed method by simulation.

  16. 3D annotation and manipulation of medical anatomical structures

    NASA Astrophysics Data System (ADS)

    Vitanovski, Dime; Schaller, Christian; Hahn, Dieter; Daum, Volker; Hornegger, Joachim

    2009-02-01

    Although medical scanners are rapidly moving towards a three-dimensional paradigm, the manipulation and annotation/labeling of the acquired data is still performed in a standard 2D environment. Editing and annotation of three-dimensional medical structures is currently a complex and rather time-consuming task, as it is carried out in 2D projections of the original object. A major problem in 2D annotation is depth ambiguity, which requires 3D landmarks to be identified and localized in at least two of the cutting planes. Operating directly in three-dimensional space enables the implicit consideration of the full 3D local context, which significantly increases accuracy and speed. A three-dimensional environment is also more natural, optimizing the user's comfort and acceptance. A 3D annotation environment requires a three-dimensional manipulation device and display. By means of two novel technologies, the Nintendo Wii controller and the Philips 3D WoWvx display, we define an appropriate 3D annotation tool and a suitable 3D visualization monitor. We use a non-coplanar arrangement of four infrared LEDs with known, exact positions, which are tracked by the Wii controller and from which we compute the pose of the device by applying a standard pose estimation algorithm. The novel 3D renderer developed by Philips uses either the Z-value of a 3D volume, or depth information computed from a 2D image, to provide a real 3D experience without the need for special glasses. Within this paper we present a new framework for the manipulation and annotation of medical landmarks directly in a three-dimensional volume.

  17. A 3-D Look at Wind-Sculpted Ridges in Aeolis

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Layers of bedrock etched by wind to form sharp, elongated ridges known to geomorphologists as yardangs are commonplace in the southern Elysium Planitia/southern Amazonis region of Mars. The ridges shown in this 3-D composite of two overlapping Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) images occur in the eastern Aeolis region of southern Elysium Planitia near 2.3°S, 206.8°W. To view the picture in stereo, you need red-blue 3-D glasses (red filter over the left eye, blue over the right). For wind to erode bedrock into the patterns seen here, the rock usually must consist of something that is fine-grained and of nearly uniform grain size, such as sand. It must also be relatively easy to erode. For decades, most Mars researchers have interpreted these materials to be eroded deposits of volcanic ash. Nothing in the new picture shown here can either support or refute this earlier speculation. The entire area is mantled by light-toned dust. Small landslides within this thin dust layer form dark streaks on some of the steeper slopes in this picture (for more examples and explanations of these streaks, see previous web pages listed below).

    The stereo (3-D) picture was compiled by combining an off-nadir view taken by the MOC during the Aerobrake-1 subphase of the mission in January 1998 with a nadir (straight-down-looking) view acquired in October 2000. The total area shown is about 6.7 kilometers (4.2 miles) wide by 2.5 kilometers (1.5 miles) high and is illuminated by sunlight from the upper right. The relief in the stereo image is quite exaggerated: the ridges are between about 50 and 100 meters (about 165-330 feet) high. North is toward the lower right.
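    The red-blue glasses described in the caption work by anaglyph compositing: the left-eye image goes into the red channel and the right-eye image into the blue (and usually green) channels, so each filter passes only its eye's view. A minimal numpy sketch using a synthetic stereo pair (the MOC image data itself is not reproduced here):

```python
import numpy as np

def make_anaglyph(left, right):
    """Combine two grayscale images (H x W, values 0-255) into a red-cyan
    anaglyph: left eye -> red channel, right eye -> green and blue channels."""
    assert left.shape == right.shape
    h, w = left.shape
    out = np.zeros((h, w, 3), dtype=np.uint8)
    out[..., 0] = left    # red: passed by the red filter (left eye)
    out[..., 1] = right   # green
    out[..., 2] = right   # blue: passed by the blue filter (right eye)
    return out

# Synthetic stereo pair: the right view is the left view shifted by a
# small horizontal parallax, as ridge relief would produce.
left = np.tile(np.linspace(0, 255, 64, dtype=np.uint8), (48, 1))
right = np.roll(left, 3, axis=1)
anaglyph = make_anaglyph(left, right)
print(anaglyph.shape)  # (48, 64, 3)
```

    The apparent height of a feature in the fused view is proportional to its horizontal offset between the two images, which is why the caption warns that the relief is exaggerated.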

  18. Making Inexpensive 3-D Models

    NASA Astrophysics Data System (ADS)

    Manos, Harry

    2016-03-01

    Visual aids are important to student learning, and they help make the teacher's job easier. Keeping with the TPT theme of "The Art, Craft, and Science of Physics Teaching," the purpose of this article is to show how teachers, lacking equipment and funds, can construct a durable 3-D model reference frame and a model gravity well tailored to specific class lessons. Most of the supplies are readily available in the home or at school: rubbing alcohol, a rag, two colors of spray paint, art brushes, and masking tape. The cost of these supplies, if you don't have them, is less than $20.

  19. 3D Printed Shelby Cobra

    SciTech Connect

    Love, Lonnie

    2015-01-09

    ORNL's newly printed 3D Shelby Cobra was showcased at the 2015 NAIAS in Detroit. This "laboratory on wheels" uses the Shelby Cobra design, celebrating the 50th anniversary of this model and honoring the first vehicle to be voted a national monument. The Shelby was printed at the Department of Energy’s Manufacturing Demonstration Facility at ORNL using the BAAM (Big Area Additive Manufacturing) machine and is intended as a “plug-n-play” laboratory on wheels. The Shelby will allow research and development of integrated components to be tested and enhanced in real time, improving the use of sustainable, digital manufacturing solutions in the automotive industry.

  20. 3D Medical Collaboration Technology to Enhance Emergency Healthcare

    PubMed Central

    Welch, Greg; Sonnenwald, Diane H; Fuchs, Henry; Cairns, Bruce; Mayer-Patel, Ketan; Söderholm, Hanna M.; Yang, Ruigang; State, Andrei; Towles, Herman; Ilie, Adrian; Ampalam, Manoj; Krishnan, Srinivas; Noel, Vincent; Noland, Michael; Manning, James E.

    2009-01-01

    Two-dimensional (2D) videoconferencing has been explored widely in the past 15–20 years to support collaboration in healthcare. Two issues that arise in most evaluations of 2D videoconferencing in telemedicine are the difficulty obtaining optimal camera views and poor depth perception. To address these problems, we are exploring the use of a small array of cameras to reconstruct dynamic three-dimensional (3D) views of a remote environment and of events taking place within. The 3D views could be sent across wired or wireless networks to remote healthcare professionals equipped with fixed displays or with mobile devices such as personal digital assistants (PDAs). The remote professionals’ viewpoints could be specified manually or automatically (continuously) via user head or PDA tracking, giving the remote viewers head-slaved or hand-slaved virtual cameras for monoscopic or stereoscopic viewing of the dynamic reconstructions. We call this idea remote 3D medical collaboration. In this article we motivate and explain the vision for 3D medical collaboration technology; we describe the relevant computer vision, computer graphics, display, and networking research; we present a proof-of-concept prototype system; and we present evaluation results supporting the general hypothesis that 3D remote medical collaboration technology could offer benefits over conventional 2D videoconferencing in emergency healthcare. PMID:19521951
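    The head-slaved virtual camera described above amounts to a look-at transform recomputed every frame from the tracked head position, so the reconstructed 3D scene is rendered from wherever the remote viewer's head (or PDA) currently is. A minimal numpy sketch; the frame convention and sample coordinates are illustrative assumptions, not the prototype's actual code:

```python
import numpy as np

def look_at(eye, target, up=np.array([0.0, 1.0, 0.0])):
    """Build a 4x4 world-to-camera (view) matrix for a camera at `eye`
    looking toward `target`; right-handed, -Z forward (OpenGL style)."""
    f = target - eye
    f = f / np.linalg.norm(f)                      # forward
    s = np.cross(f, up); s = s / np.linalg.norm(s) # right
    u = np.cross(s, f)                             # true up
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -f
    view[:3, 3] = view[:3, :3] @ -eye
    return view

# The tracked head position drives the virtual camera each frame.
head = np.array([0.2, 1.6, 2.0])      # metres, from the head tracker
patient = np.array([0.0, 1.0, 0.0])   # centre of the reconstructed scene
V = look_at(head, patient)

# The scene centre should land on the camera's -Z (viewing) axis.
p = V @ np.append(patient, 1.0)
print(np.round(p, 3))
```

    For stereoscopic viewing, the same transform is built twice per frame, once per eye, with `eye` offset by half the interocular distance; hand-slaved PDA viewing substitutes the device pose for the head pose.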

  1. 3D medical collaboration technology to enhance emergency healthcare.

    PubMed

    Welch, Gregory F; Sonnenwald, Diane H; Fuchs, Henry; Cairns, Bruce; Mayer-Patel, Ketan; Söderholm, Hanna M; Yang, Ruigang; State, Andrei; Towles, Herman; Ilie, Adrian; Ampalam, Manoj K; Krishnan, Srinivas; Noel, Vincent; Noland, Michael; Manning, James E

    2009-04-19

    Two-dimensional (2D) videoconferencing has been explored widely in the past 15-20 years to support collaboration in healthcare. Two issues that arise in most evaluations of 2D videoconferencing in telemedicine are the difficulty obtaining optimal camera views and poor depth perception. To address these problems, we are exploring the use of a small array of cameras to reconstruct dynamic three-dimensional (3D) views of a remote environment and of events taking place within. The 3D views could be sent across wired or wireless networks to remote healthcare professionals equipped with fixed displays or with mobile devices such as personal digital assistants (PDAs). The remote professionals' viewpoints could be specified manually or automatically (continuously) via user head or PDA tracking, giving the remote viewers head-slaved or hand-slaved virtual cameras for monoscopic or stereoscopic viewing of the dynamic reconstructions. We call this idea remote 3D medical collaboration. In this article we motivate and explain the vision for 3D medical collaboration technology; we describe the relevant computer vision, computer graphics, display, and networking research; we present a proof-of-concept prototype system; and we present evaluation results supporting the general hypothesis that 3D remote medical collaboration technology could offer benefits over conventional 2D videoconferencing in emergency healthcare.