Science.gov

Sample records for 3d graphics hardware

  1. Internet-based hardware/software co-design framework for embedded 3D graphics applications

    NASA Astrophysics Data System (ADS)

    Yeh, Chi-Tsai; Wang, Chun-Hao; Huang, Ing-Jer; Wong, Weng-Fai

    2011-12-01

    Advances in technology are making it possible to run three-dimensional (3D) graphics applications on embedded and handheld devices. In this article, we propose a hardware/software co-design environment for 3D graphics application development that includes the 3D graphics software, OpenGL ES application programming interface (API), device driver, and 3D graphics hardware simulators. We developed a 3D graphics system-on-a-chip (SoC) accelerator using transaction-level modeling (TLM). This gives software designers early access to the hardware even before it is ready. On the other hand, hardware designers also stand to gain from the more complex test benches made available in the software for verification. A unique aspect of our framework is that it allows hardware and software designers from geographically dispersed areas to cooperate and work on the same framework. Designs can be entered and executed from anywhere in the world without full access to the entire framework, which may include proprietary components. This results in controlled and secure transparency and reproducibility, granting leveled access to users of various roles.

  2. Techniques for efficient, real-time, 3D visualization of multi-modality cardiac data using consumer graphics hardware.

    PubMed

    Levin, David; Aladl, Usaf; Germano, Guido; Slomka, Piotr

    2005-09-01

    We exploit consumer graphics hardware to perform real-time processing and visualization of high-resolution, 4D cardiac data. We have implemented real-time, realistic volume rendering, interactive 4D motion segmentation of cardiac data, visualization of multi-modality cardiac data and 3D display of multiple series cardiac MRI. We show that an ATI Radeon 9700 Pro can render a 512x512x128 cardiac Computed Tomography (CT) study at 0.9 to 60 frames per second (fps) depending on rendering parameters and that 4D motion based segmentation can be performed in real-time. We conclude that real-time rendering and processing of cardiac data can be implemented on consumer graphics cards.

  3. Introduction to 3D Graphics through Excel

    ERIC Educational Resources Information Center

    Benacka, Jan

    2013-01-01

The article presents a method of explaining the principles of 3D graphics through building a revolvable and sizable orthographic parallel projection of a cuboid in Excel. No programming is used. The method was tried in fourteen 90-minute lessons with 181 participants, who were Informatics teachers, undergraduates of Applied Informatics and gymnasium…
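The projection the article builds in Excel can be sketched in a few lines of Python; the rotation angle and cuboid dimensions below are illustrative assumptions, not taken from the article.

```python
# Revolvable orthographic parallel projection of a cuboid, as in the article,
# but in Python instead of Excel spreadsheet formulas.
import math

def rotate_y(p, a):
    """Rotate point p = (x, y, z) about the y axis by angle a (radians)."""
    x, y, z = p
    return (x * math.cos(a) + z * math.sin(a), y,
            -x * math.sin(a) + z * math.cos(a))

def project_ortho(p):
    """Orthographic parallel projection: simply drop the z coordinate."""
    return (p[0], p[1])

# A 2 x 1 x 1 cuboid centred at the origin: 8 vertices.
verts = [(sx, sy, sz) for sx in (-1, 1) for sy in (-0.5, 0.5) for sz in (-0.5, 0.5)]
screen = [project_ortho(rotate_y(v, math.radians(30))) for v in verts]
```

Redrawing `screen` for a sequence of angles gives the "revolvable" effect the article achieves with a spreadsheet scroll bar.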

  4. Fast DRR splat rendering using common consumer graphics hardware.

    PubMed

    Spoerk, Jakob; Bergmann, Helmar; Wanschitz, Felix; Dong, Shuo; Birkfellner, Wolfgang

    2007-11-01

Digitally rendered radiographs (DRR) are a vital part of various medical image processing applications such as 2D/3D registration for patient pose determination in image-guided radiotherapy procedures. This paper presents a technique to accelerate DRR creation by using conventional graphics hardware for the rendering process. DRR computation itself is done by an efficient volume rendering method named wobbled splatting. For programming the graphics hardware, NVIDIA's C for Graphics (Cg) is used. We describe an algorithm for rendering DRRs on graphics hardware, together with a benchmark comparing this technique to a CPU-based wobbled splatting program. Results show a reduction of rendering time by about 70%-90% depending on the amount of data. For instance, rendering a volume of 2 × 10^6 voxels is feasible at an update rate of 38 Hz, compared to 6 Hz on a common Intel-based PC using the graphics processing unit (GPU) of a conventional graphics adapter. In addition, wobbled splatting using graphics hardware for DRR computation provides higher-resolution DRRs of comparable image quality due to the special processing characteristics of the GPU. We conclude that DRR generation on common graphics hardware using the freely available Cg environment is a major step toward 2D/3D registration in clinical routine.
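The paper's wobbled splatting runs on the GPU via Cg; as a rough CPU reference point, a parallel-beam DRR is simply the attenuation sum of the CT volume along the viewing axis. A minimal sketch (the volume contents are an assumption for illustration):

```python
# Minimal CPU digitally rendered radiograph (DRR) for parallel-beam geometry:
# the pixel value is the line integral (here a plain sum) of attenuation.
import numpy as np

def drr_parallel(volume, axis=0):
    """Project a 3-D attenuation volume to a 2-D radiograph along one axis."""
    return volume.sum(axis=axis)

vol = np.zeros((4, 4, 4))
vol[1:3, 1:3, 1:3] = 1.0           # a small dense cube inside air
image = drr_parallel(vol, axis=0)  # 4 x 4 projection image
```

The GPU methods in the paper compute essentially this quantity, but splat voxel contributions into the image in parallel rather than marching rays.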

  5. Fast DRR splat rendering using common consumer graphics hardware

    SciTech Connect

    Spoerk, Jakob; Bergmann, Helmar; Wanschitz, Felix; Dong, Shuo; Birkfellner, Wolfgang

    2007-11-15

Digitally rendered radiographs (DRR) are a vital part of various medical image processing applications such as 2D/3D registration for patient pose determination in image-guided radiotherapy procedures. This paper presents a technique to accelerate DRR creation by using conventional graphics hardware for the rendering process. DRR computation itself is done by an efficient volume rendering method named wobbled splatting. For programming the graphics hardware, NVIDIA's C for Graphics (Cg) is used. We describe an algorithm for rendering DRRs on graphics hardware, together with a benchmark comparing this technique to a CPU-based wobbled splatting program. Results show a reduction of rendering time by about 70%-90% depending on the amount of data. For instance, rendering a volume of 2 × 10^6 voxels is feasible at an update rate of 38 Hz, compared to 6 Hz on a common Intel-based PC using the graphics processing unit (GPU) of a conventional graphics adapter. In addition, wobbled splatting using graphics hardware for DRR computation provides higher-resolution DRRs of comparable image quality due to the special processing characteristics of the GPU. We conclude that DRR generation on common graphics hardware using the freely available Cg environment is a major step toward 2D/3D registration in clinical routine.

  6. 2D to 3D conversion implemented in different hardware

    NASA Astrophysics Data System (ADS)

    Ramos-Diaz, Eduardo; Gonzalez-Huitron, Victor; Ponomaryov, Volodymyr I.; Hernandez-Fragoso, Araceli

    2015-02-01

Conversion of available 2D data for release as 3D content is a hot topic for providers and, in general, for the success of 3D applications. It relies entirely on synthesizing a virtual second view from the original 2D video. Disparity map (DM) estimation is the central task in 3D generation, but rendering novel images precisely remains a very difficult problem. Different approaches to DM reconstruction exist; among them, manual and semiautomatic methods can produce high-quality DMs, but they are time consuming and computationally expensive. In this paper, several hardware implementations of frameworks for automatic 3D color video generation from real 2D video sequences are proposed. The novel framework processes stereo pairs using the following blocks: CIE L*a*b* color space conversion, color segmentation by k-means on the a*b* color plane, DM estimation using pyramidal stereo matching between left and right images (or neighboring frames in a video), adaptive post-filtering, and finally anaglyph 3D scene generation. The technique has been implemented on a DSP TMS320DM648, in Matlab's Simulink on a PC running Windows 7, and on a graphics card (NVIDIA Quadro K2000), demonstrating that the proposed approach can be applied in real-time processing mode. The processing times and the mean Structural Similarity Index Measure (SSIM) and Bad Matching Pixels (B) values for the different hardware implementations (GPU, single CPU, and DSP) are reported in this paper.
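The final anaglyph stage of the framework can be illustrated independently of the disparity pipeline. A common red-cyan scheme takes the red channel from the left view and the green/blue channels from the right view; the toy images below are assumptions.

```python
# Red-cyan anaglyph generation from a stereo pair, the last block of the
# 2D-to-3D framework described above.
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """R from the left image, G and B from the right image."""
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]
    return out

left = np.full((2, 2, 3), [200, 10, 10], dtype=np.uint8)
right = np.full((2, 2, 3), [10, 150, 150], dtype=np.uint8)
ana = make_anaglyph(left, right)
```

Viewed through red-cyan glasses, each eye then sees only its own view, which is what produces the depth impression.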

  7. Hardware Trust Implications of 3-D Integration

    DTIC Science & Technology

    2010-12-01

enhancing a commodity processor with a variety of security functions. This paper examines the 3-D design approach and provides an analysis concluding...of key components. The question addressed by this paper is, "Can a 3-D control plane provide useful secure services when it is conjoined with an...untrustworthy computation plane?" Design-level investigation of this question yields a definite yes. This paper explores 3-D applications and their

  8. The Digital Space Shuttle, 3D Graphics, and Knowledge Management

    NASA Technical Reports Server (NTRS)

    Gomez, Julian E.; Keller, Paul J.

    2003-01-01

    The Digital Shuttle is a knowledge management project that seeks to define symbiotic relationships between 3D graphics and formal knowledge representations (ontologies). 3D graphics provides geometric and visual content, in 2D and 3D CAD forms, and the capability to display systems knowledge. Because the data is so heterogeneous, and the interrelated data structures are complex, 3D graphics combined with ontologies provides mechanisms for navigating the data and visualizing relationships.

  9. Real-time 3D video conference on generic hardware

    NASA Astrophysics Data System (ADS)

    Desurmont, X.; Bruyelle, J. L.; Ruiz, D.; Meessen, J.; Macq, B.

    2007-02-01

Nowadays, video-conferencing is increasingly advantageous given the economic and ecological cost of transport. Several platforms exist. The goal of the TIFANIS immersive platform is to let users interact as if they were physically together. Unlike previous teleimmersion systems, TIFANIS uses generic hardware to achieve an economically realistic implementation. The basic functions of the system are to capture the scene, transmit it through digital networks to other partners, and then render it according to each partner's viewing characteristics. The image processing part should run in real-time. We analyze the whole system, which can be split into different services such as central processing unit (CPU) load, graphical rendering, direct memory access (DMA), and communication through the network. Most of the processing is done by the CPU; it comprises the 3D reconstruction and the detection and tracking of faces in the video stream. However, the processing needs to be parallelized into several threads with as few dependencies as possible. In this paper, we present these issues and the way we deal with them.

  10. Optimization Techniques for 3D Graphics Deployment on Mobile Devices

    NASA Astrophysics Data System (ADS)

    Koskela, Timo; Vatjus-Anttila, Jarkko

    2015-03-01

    3D Internet technologies are becoming essential enablers in many application areas including games, education, collaboration, navigation and social networking. The use of 3D Internet applications with mobile devices provides location-independent access and richer use context, but also performance issues. Therefore, one of the important challenges facing 3D Internet applications is the deployment of 3D graphics on mobile devices. In this article, we present an extensive survey on optimization techniques for 3D graphics deployment on mobile devices and qualitatively analyze the applicability of each technique from the standpoints of visual quality, performance and energy consumption. The analysis focuses on optimization techniques related to data-driven 3D graphics deployment, because it supports off-line use, multi-user interaction, user-created 3D graphics and creation of arbitrary 3D graphics. The outcome of the analysis facilitates the development and deployment of 3D Internet applications on mobile devices and provides guidelines for future research.

  11. 3D Graphics Through the Internet: A "Shoot-Out"

    NASA Technical Reports Server (NTRS)

    Watson, Val; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    3D graphics through the Internet needs to move beyond the current lowest common denominator of pre-computed movies, which consume bandwidth and are non-interactive. Panelists will demonstrate and compare 3D graphical tools for accessing, analyzing, and collaborating on information through the Internet and World-wide web. The "shoot-out" will illustrate which tools are likely to be the best for the various types of information, including dynamic scientific data, 3-D objects, and virtual environments. The goal of the panel is to encourage more effective use of the Internet by encouraging suppliers and users of information to adopt the next generation of graphical tools.

  12. DspaceOgre 3D Graphics Visualization Tool

    NASA Technical Reports Server (NTRS)

    Jain, Abhinandan; Myin, Steven; Pomerantz, Marc I.

    2011-01-01

    This general-purpose 3D graphics visualization C++ tool is designed for visualization of simulation and analysis data for articulated mechanisms. Examples of such systems are vehicles, robotic arms, biomechanics models, and biomolecular structures. DspaceOgre builds upon the open-source Ogre3D graphics visualization library. It provides additional classes to support the management of complex scenes involving multiple viewpoints and different scene groups, and can be used as a remote graphics server. This software provides improved support for adding programs at the graphics processing unit (GPU) level for improved performance. It also improves upon the messaging interface it exposes for use as a visualization server.

  13. Expanding the Interaction Lexicon for 3D Graphics

    DTIC Science & Technology

    2001-11-01

Graphics We shape our tools, and thereafter our tools shape us. Marshall McLuhan It is not reason that is the guide of life, but custom. David...Interaction Lexicon for 3D Graphics We don't know who discovered water, but we are pretty sure it wasn't a fish. Marshall McLuhan Successful innovation in a

  14. Postprocessing of compressed 3D graphic data by using subdivision

    NASA Astrophysics Data System (ADS)

    Cheang, Ka Man; Li, Jiankun; Kuo, C.-C. Jay

    1998-10-01

    In this work, we present a postprocessing technique applied to a 3D graphic model of a lower resolution to obtain a visually more pleasant representation. Our method is an improved version of the Butterfly subdivision scheme developed by Zorin et al. Our main contribution is to exploit the flatness information of local areas of a 3D graphic model for adaptive refinement. Consequently, we can avoid unnecessary subdivision in regions which are relatively flat. The proposed new algorithm not only reduces the computational complexity but also saves the storage space. With the hierarchical mesh compression method developed by Li and Kuo as the baseline coding method, we show that the postprocessing technique can greatly improve the visual quality of the decoded 3D graphic model.
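The paper's central idea, skipping subdivision where the surface is locally flat, can be shown on a 1-D analogue. The actual method is the Butterfly scheme on triangle meshes; the flatness threshold and the polyline setting below are assumptions for illustration.

```python
# Adaptive refinement of a polyline: insert midpoints only near vertices
# whose turn angle exceeds a flatness threshold, mirroring the paper's
# idea of avoiding subdivision in flat regions of a 3D mesh.
import math

def turn_angle(a, b, c):
    """Angle at b between segments a-b and b-c (radians)."""
    v1 = (b[0] - a[0], b[1] - a[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))

def refine_adaptive(points, flat_eps=0.1):
    """Insert a midpoint before each vertex that is not locally flat."""
    out = [points[0]]
    for i in range(1, len(points) - 1):
        if turn_angle(points[i - 1], points[i], points[i + 1]) > flat_eps:
            prev = out[-1]
            out.append(((prev[0] + points[i][0]) / 2,
                        (prev[1] + points[i][1]) / 2))
        out.append(points[i])
    out.append(points[-1])
    return out
```

Flat runs pass through unchanged, so both computation and storage scale with the curved portion of the shape, which is the saving the paper reports for meshes.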

  15. Spidergl: a Graphics Library for 3d Web Applications

    NASA Astrophysics Data System (ADS)

    Di Benedetto, M.; Corsini, M.; Scopigno, R.

    2011-09-01

The recent introduction of the WebGL API for leveraging the power of 3D graphics accelerators within Web browsers opens the possibility to develop advanced graphics applications without the need for an ad-hoc plug-in. There are several contexts in which this new technology can be exploited to enhance user experience and data fruition, like e-commerce applications, games and, in particular, Cultural Heritage. In fact, it is now possible to use the Web platform to present virtual reconstruction hypotheses of the ancient past, to show detailed 3D models of artefacts of interest to a wide public, and to create virtual museums. We introduce SpiderGL, a JavaScript library for developing 3D graphics Web applications. SpiderGL provides data structures and algorithms to ease the use of WebGL, to define and manipulate shapes, to import 3D models in various formats, and to handle asynchronous data loading. We show the potential of this novel library with a number of demo applications and give details about its future uses in the context of Cultural Heritage applications.

  16. MAP3D: a media processor approach for high-end 3D graphics

    NASA Astrophysics Data System (ADS)

    Darsa, Lucia; Stadnicki, Steven; Basoglu, Chris

    1999-12-01

Equator Technologies, Inc. has used a software-first approach to produce several programmable and advanced VLIW processor architectures that have the flexibility to run both traditional systems tasks and an array of media-rich applications. For example, Equator's MAP1000A is the world's fastest single-chip programmable signal and image processor targeted for digital consumer and office automation markets. The Equator MAP3D is a proposal for the architecture of the next generation of the Equator MAP family. The MAP3D is designed to achieve high-end 3D performance and a variety of customizable special effects by combining special graphics features with a high-performance floating-point and media processor architecture. As a programmable media processor, it offers the advantages of a completely configurable 3D pipeline, allowing developers to experiment with different algorithms and to tailor their pipeline to achieve the highest performance for a particular application. With the support of Equator's advanced C compiler and toolkit, MAP3D programs can be written in a high-level language. This allows the compiler to find and exploit any parallelism in a programmer's code, thus decreasing the time to market of a given application. The ability to run an operating system makes it possible to run concurrent applications on the MAP3D chip, such as video decoding while executing the 3D pipelines, so that integration of applications is easily achieved--using real-time decoded imagery for texturing 3D objects, for instance. This novel architecture enables an affordable, integrated solution for high performance 3D graphics.

  17. Fast Sparse Level Sets on Graphics Hardware.

    PubMed

    Jalba, Andrei C; van der Laan, Wladimir J; Roerdink, Jos B T M

    2013-01-01

    The level-set method is one of the most popular techniques for capturing and tracking deformable interfaces. Although level sets have demonstrated great potential in visualization and computer graphics applications, such as surface editing and physically based modeling, their use for interactive simulations has been limited due to the high computational demands involved. In this paper, we address this computational challenge by leveraging the increased computing power of graphics processors, to achieve fast simulations based on level sets. Our efficient, sparse GPU level-set method is substantially faster than other state-of-the-art, parallel approaches on both CPU and GPU hardware. We further investigate its performance through a method for surface reconstruction, based on GPU level sets. Our novel multiresolution method for surface reconstruction from unorganized point clouds compares favorably with recent, existing techniques and other parallel implementations. Finally, we point out that both level-set computations and rendering of level-set surfaces can be performed at interactive rates, even on large volumetric grids. Therefore, many applications based on level sets can benefit from our sparse level-set method.
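The narrow-band restriction behind sparse level sets can be sketched on the CPU: update the level-set function only near its zero crossing. The paper's GPU data structures are not reproduced here; the grid size, speed, and band width are illustrative assumptions.

```python
# One explicit step of outward level-set motion, restricted to a narrow
# band around the interface -- the "sparse" idea the paper implements on
# the GPU.
import numpy as np

def evolve_narrow_band(phi, speed=1.0, dt=0.5, band=2.0):
    """phi -= dt * speed * |grad phi|, only where |phi| < band."""
    gy, gx = np.gradient(phi)              # derivatives along axis 0 and 1
    grad = np.sqrt(gx**2 + gy**2)
    mask = np.abs(phi) < band              # the narrow band
    out = phi.copy()
    out[mask] -= dt * speed * grad[mask]
    return out

# Signed distance to a circle of radius 3 on a 16x16 grid.
y, x = np.mgrid[0:16, 0:16]
phi = np.sqrt((x - 8.0)**2 + (y - 8.0)**2) - 3.0
phi1 = evolve_narrow_band(phi)             # interface expands outward
```

Because only band cells are touched, per-step cost scales with the interface length rather than the full grid, which is what makes interactive rates possible on large volumes.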

  18. Design Application Translates 2-D Graphics to 3-D Surfaces

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Fabric Images Inc., specializing in the printing and manufacturing of fabric tension architecture for the retail, museum, and exhibit/tradeshow communities, designed software to translate 2-D graphics for 3-D surfaces prior to print production. Fabric Images' fabric-flattening design process models a 3-D surface based on computer-aided design (CAD) specifications. The surface geometry of the model is used to form a 2-D template, similar to a flattening process developed by NASA's Glenn Research Center. This template or pattern is then applied in the development of a 2-D graphic layout. Benefits of this process include 11.5 percent time savings per project, less material wasted, and the ability to improve upon graphic techniques and offer new design services. Partners include Exhibitgroup/Giltspur (end-user client: TAC Air, a division of Truman Arnold Companies Inc.), Jack Morton Worldwide (end-user client: Nickelodeon), as well as 3D Exhibits Inc., and MG Design Associates Corp.

  19. Evaluation of 3-D graphics software: A case study

    NASA Technical Reports Server (NTRS)

    Lores, M. E.; Chasen, S. H.; Garner, J. M.

    1984-01-01

    An efficient 3-D geometry graphics software package which is suitable for advanced design studies was developed. The advanced design system is called GRADE--Graphics for Advanced Design. Efficiency and ease of use are gained by sacrificing flexibility in surface representation. The immediate options were either to continue development of GRADE or to acquire a commercially available system which would replace or complement GRADE. Test cases which would reveal the ability of each system to satisfy the requirements were developed. A scoring method which adequately captured the relative capabilities of the three systems was presented. While more complex multi-attribute decision methods could be used, the selected method provides all the needed information without being so complex that it is difficult to understand. If the value factors are modestly perturbed, system Z is a clear winner based on its overall capabilities. System Z is superior in two vital areas: surfacing and ease of interface with application programs.
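The scoring method the study presents is a weighted sum over capability attributes. The attribute names, weights, and scores below are invented for illustration; the abstract names "system Z" but reports no numbers.

```python
# Multi-attribute weighted scoring of candidate graphics systems, in the
# spirit of the evaluation described above (all values hypothetical).
def weighted_score(scores, weights):
    """Total score: sum over attributes of score times weight."""
    return sum(scores[k] * weights[k] for k in weights)

weights  = {"surfacing": 0.4, "interface": 0.3, "speed": 0.3}
system_z = {"surfacing": 9, "interface": 8, "speed": 6}
system_g = {"surfacing": 5, "interface": 6, "speed": 9}
```

Perturbing the weights and re-ranking, as the study does with its value factors, shows whether the winner is robust to the choice of weights.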

  20. Implementation Of True 3D Cursors In Computer Graphics

    NASA Astrophysics Data System (ADS)

    Butts, David R.; McAllister, David F.

    1988-06-01

    The advances in stereoscopic image display techniques have shown an increased need for real-time interaction with the three-dimensional image. We have developed a prototype real-time stereoscopic cursor to investigate this interaction. The results have pointed out areas where hardware speeds are a limiting factor, as well as areas where various methodologies cause perceptual difficulties. This paper addresses the psychological and perceptual anomalies involved in stereo image techniques, cursor generation and motion, and the use of the device as a 3D drawing and depth measuring tool.
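The depth cue a stereoscopic cursor manipulates is horizontal parallax between the two eye views. A minimal projection model makes this concrete; the eye separation and screen distance below are assumptions.

```python
# Project a 3-D point for two horizontally separated eyes onto a screen
# plane; the horizontal offset between the two images (parallax) encodes
# depth, which is what a true 3-D cursor must control.
def stereo_project(point, eye_sep=6.0, screen_z=60.0):
    """point = (x, y, z), z >= 0 measured behind the screen plane.
    Returns ((xl, y), (xr, y)) screen positions for left and right eye."""
    x, y, z = point
    d = screen_z + z                              # distance from the eyes
    xl = (x + eye_sep / 2) * screen_z / d - eye_sep / 2
    xr = (x - eye_sep / 2) * screen_z / d + eye_sep / 2
    ys = y * screen_z / d
    return (xl, ys), (xr, ys)
```

A point on the screen plane has zero parallax; moving it behind the screen produces uncrossed (positive) parallax, which is how the cursor can act as a depth-measuring probe.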

  1. Simulation of imaging radar using graphics hardware acceleration

    NASA Astrophysics Data System (ADS)

    Peinecke, Niklas; Döhler, Hans-Ullrich; Korn, Bernd R.

    2008-04-01

Extending previous work by Doehler and Bollmeyer, we describe a new implementation of an imaging radar simulator. Our approach is based on modern computer graphics hardware and makes heavy use of recent technologies like vertex and fragment shaders. Furthermore, to achieve a nearly realistic image, we generate radar shadows by implementing shadow-map techniques in the programmable graphics hardware. The particular implementation is tailored to imitate millimeter-wave (MMW) radar but could easily be extended to other types of radar systems.
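The radar-shadow computation parallels shadow mapping: a cell is shadowed when an earlier sample along the sensor's ray already subtends a higher elevation angle. A 1-D CPU sketch (the height profile and sensor height are illustrative assumptions):

```python
# Radar shadowing over a 1-D terrain profile: sweep outward from the
# sensor and mark cells hidden behind an earlier peak, the same occlusion
# test a shadow map performs per pixel on the GPU.
def shadowed_cells(heights, sensor_height):
    """heights[i] is terrain height at range i+1 from a sensor above the
    origin; returns a parallel list of booleans (True = in radar shadow)."""
    shadow = []
    max_tan = float("-inf")
    for rng, h in enumerate(heights, start=1):
        tan = (h - sensor_height) / rng   # tangent of elevation angle
        shadow.append(tan < max_tan)      # hidden behind an earlier cell
        max_tan = max(max_tan, tan)
    return shadow
```

The GPU version renders a depth map from the sensor's viewpoint and performs this comparison per fragment, which is what makes the simulation real-time.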

  2. Education System Using Interactive 3D Computer Graphics (3D-CG) Animation and Scenario Language for Teaching Materials

    ERIC Educational Resources Information Center

    Matsuda, Hiroshi; Shindo, Yoshiaki

    2006-01-01

3D computer graphics (3D-CG) animation with a speaking virtual actor is very effective as an educational medium. But it takes a long time to produce a 3D-CG animation. To reduce the cost of producing 3D-CG educational content and improve the capability of the education system, we have developed a new education system using a Virtual Actor.…

  3. 2D neural hardware versus 3D biological ones

    SciTech Connect

    Beiu, V.

    1998-12-31

This paper will present important limitations of hardware neural nets as opposed to biological neural nets (i.e., the real ones). The author starts by discussing neural structures and their biological inspirations, while mentioning the simplifications leading to artificial neural nets. Going further, the focus will be on hardware constraints. The author will present recent results for three different alternatives for implementing neural networks: digital, threshold gate, and analog, while relating area and delay to the neurons' fan-in and the weights' precision. Based on all of these, it will be shown why hardware implementations cannot match their biological inspiration in computational power: the mapping onto silicon lacks the third dimension of biological nets. This translates into reduced fan-in, and leads to reduced precision. The main conclusion is that one is faced with the following alternatives: (1) try to cope with the limitations imposed by silicon, by speeding up the computation of the elementary silicon neurons; (2) investigate solutions which would allow one to use the third dimension, e.g. using optical interconnections.

  4. Real-time hardware for a new 3D display

    NASA Astrophysics Data System (ADS)

    Kaufmann, B.; Akil, M.

    2006-02-01

We describe in this article a new multi-view auto-stereoscopic display system with a real-time architecture that generates images of n different points of view of a 3D scene. This architecture generates all the different points of view in a single generation process: the pictures are not produced independently but all at the same time. The architecture fills a frame buffer that contains all the voxels with their three dimensions and regenerates the different pictures on demand from this frame buffer. The memory requirement is reduced because there is no redundant information in the buffer.

  5. The three-dimensional Event-Driven Graphics Environment (3D-EDGE)

    NASA Technical Reports Server (NTRS)

    Freedman, Jeffrey; Hahn, Roger; Schwartz, David M.

    1993-01-01

    Stanford Telecom developed the Three-Dimensional Event-Driven Graphics Environment (3D-EDGE) for NASA GSFC's (GSFC) Communications Link Analysis and Simulation System (CLASS). 3D-EDGE consists of a library of object-oriented subroutines which allow engineers with little or no computer graphics experience to programmatically manipulate, render, animate, and access complex three-dimensional objects.

  6. Whole versus Part Presentations of the Interactive 3D Graphics Learning Objects

    ERIC Educational Resources Information Center

    Azmy, Nabil Gad; Ismaeel, Dina Ahmed

    2010-01-01

    The purpose of this study is to present an analysis of how the structure and design of the Interactive 3D Graphics Learning Objects can be effective and efficient in terms of Performance, Time on task, and Learning Efficiency. The study explored two treatments, namely whole versus Part Presentations of the Interactive 3D Graphics Learning Objects,…

  7. Motion compensation in digital subtraction angiography using graphics hardware.

    PubMed

    Deuerling-Zheng, Yu; Lell, Michael; Galant, Adam; Hornegger, Joachim

    2006-07-01

An inherent disadvantage of digital subtraction angiography (DSA) is its sensitivity to patient motion, which causes artifacts in the subtraction images. These artifacts can often reduce the diagnostic value of this technique. Automated, fast and accurate motion compensation is therefore required. To cope with this requirement, we first examine a method explicitly designed to detect local motion in DSA. Then, we implement a motion compensation algorithm by means of block matching on modern graphics hardware. Both methods search for maximal local similarity by evaluating a histogram-based measure. In this context, we are the first to map an optimizing search strategy onto graphics hardware while parallelizing block matching. Moreover, we provide an innovative method for creating histograms on graphics hardware with vertex texturing and frame buffer blending. It turns out that both methods can effectively correct the artifacts in most cases, while the hardware implementation of block matching performs much faster: the displacements of two 1024 x 1024 images can be calculated at 3 frames/s with integer precision or 2 frames/s with sub-pixel precision. Preliminary clinical evaluation indicates that computation with integer precision may already be sufficient.
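The block-matching search itself is easy to state. The paper evaluates a histogram-based similarity measure on the GPU; this CPU sketch substitutes the simpler sum of absolute differences (SAD), and the frame contents are assumptions.

```python
# Integer-precision block matching: exhaustive search for the displacement
# that best aligns a block with a reference frame, the core operation the
# paper parallelizes on graphics hardware.
import numpy as np

def best_displacement(block, frame, top, left, radius=2):
    """Return the (dy, dx) in [-radius, radius]^2 minimising the SAD between
    `block` and the same-sized window of `frame` at (top+dy, left+dx)."""
    h, w = block.shape
    best, best_err = (0, 0), float("inf")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > frame.shape[0] or x + w > frame.shape[1]:
                continue                      # window falls outside the frame
            err = np.abs(frame[y:y + h, x:x + w] - block).sum()
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

# A frame whose bright patch is the block shifted by (1, 2).
frame = np.zeros((8, 8))
frame[3:5, 4:6] = 1.0
block = np.ones((2, 2))
shift = best_displacement(block, frame, top=2, left=2)
```

In DSA, the recovered per-block shifts warp the mask image before subtraction, removing the motion artifacts.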

  8. Extensible 3D (X3D) Graphics Clouds for Geographic Information Systems

    DTIC Science & Technology

    2008-03-01

browser such as Microsoft Internet Explorer or Netscape using an X3D or VRML supporting plug-in. The benefits of diverse support can cause...typing model output with a particular method of 3D cloud production. Data-driven adaptation and production of cloud models for web-based delivery is an achievable capability given continued research and development.

  9. Standard Features and Their Impact on 3D Engineering Graphics

    ERIC Educational Resources Information Center

    Waldenmeyer, K. M.; Hartman, N. W.

    2009-01-01

    The prevalence of feature-based 3D modeling in industry has necessitated the accumulation and maintenance of standard feature libraries. Currently, firms who use standard features to design parts are storing and utilizing these libraries through their existing product data management (PDM) systems. Standard features have enabled companies to…

  10. Fast image interpolation for motion estimation using graphics hardware

    NASA Astrophysics Data System (ADS)

    Kelly, Francis; Kokaram, Anil

    2004-05-01

Motion estimation and compensation is the key to high quality video coding. Block matching motion estimation is used in most video codecs, including MPEG-2, MPEG-4, H.263 and H.26L. Motion estimation is also a key component in the digital restoration of archived video and for post-production and special effects in the movie industry. Sub-pixel accurate motion vectors can improve the quality of the vector field and lead to more efficient video coding. However sub-pixel accuracy requires interpolation of the image data. Image interpolation is a key requirement of many image processing algorithms. Often interpolation can be a bottleneck in these applications, especially in motion estimation due to the large number of pixels involved. In this paper we propose using commodity computer graphics hardware for fast image interpolation. We use the full search block matching algorithm to illustrate the problems and limitations of using graphics hardware in this way.
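The interpolation being accelerated is, at its simplest, bilinear sampling at fractional pixel coordinates; a plain CPU version follows (the test image is an assumption).

```python
# Bilinear interpolation at fractional coordinates -- the per-sample work
# needed for sub-pixel motion vectors, which GPUs perform in their texture
# units.
import numpy as np

def bilinear(img, y, x):
    """Sample img at fractional (y, x) by blending the four neighbours."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    fy, fx = y - y0, x - x0
    return ((1 - fy) * (1 - fx) * img[y0, x0] +
            (1 - fy) * fx       * img[y0, x0 + 1] +
            fy       * (1 - fx) * img[y0 + 1, x0] +
            fy       * fx       * img[y0 + 1, x0 + 1])

img = np.array([[0.0, 1.0],
                [2.0, 3.0]])
```

Because every candidate block position at sub-pixel accuracy needs such samples, interpolation dominates the cost of sub-pixel full-search matching, which is why the paper moves it to graphics hardware.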

  11. Accelerating reconstruction of reference digital tomosynthesis using graphics hardware.

    PubMed

    Yan, Hui; Ren, Lei; Godfrey, Devon J; Yin, Fang-Fang

    2007-10-01

The successful implementation of digital tomosynthesis (DTS) for on-board image guided radiation therapy (IGRT) requires fast DTS image reconstruction. Both target and reference DTS image sets are required to support an image registration application for IGRT. Target images are usually DTS image sets reconstructed from on-board projections, which can be accomplished quickly using the conventional filtered backprojection algorithm. Reference images are DTS image sets reconstructed from digitally reconstructed radiographs (DRRs) previously generated from conventional planning CT data. Generating a set of DRRs from planning CT is relatively slow using the conventional ray-casting algorithm. In order to facilitate DTS reconstruction within a clinically acceptable period of time, we implemented a high performance DRR reconstruction algorithm on the graphics processing unit of commercial PC graphics hardware. The performance of this new algorithm was evaluated and compared with that achieved by the conventional software-based ray-casting algorithm. DTS images were reconstructed from DRRs previously generated by both hardware and software algorithms. On average, the DRR reconstruction efficiency using the hardware method is improved by a factor of 67 over the software method. The image quality of the DRRs was comparable to those generated using the software-based ray-casting algorithm. Accelerated DRR reconstruction significantly reduces the overall time required to produce a set of reference DTS images from planning CT and makes this technique clinically practical for target localization for radiation therapy.

  12. Creating Realistic 3D Graphics with Excel at High School--Vector Algebra in Practice

    ERIC Educational Resources Information Center

    Benacka, Jan

    2015-01-01

    The article presents the results of an experiment in which Excel applications that depict rotatable and sizable orthographic projection of simple 3D figures with face overlapping were developed with thirty gymnasium (high school) students of age 17-19 as an introduction to 3D computer graphics. A questionnaire survey was conducted to find out…

  13. Resolution-independent surface rendering using programmable graphics hardware

    DOEpatents

    Loop, Charles T.; Blinn, James Frederick

    2008-12-16

    Surfaces defined by a Bezier tetrahedron, and in particular quadric surfaces, are rendered on programmable graphics hardware. Pixels are rendered through triangular sides of the tetrahedra and locations on the shapes, as well as surface normals for lighting evaluations, are computed using pixel shader computations. Additionally, vertex shaders are used to aid interpolation over a small number of values as input to the pixel shaders. Through this, rendering of the surfaces is performed independently of viewing resolution, allowing for advanced level-of-detail management. By individually rendering tetrahedrally-defined surfaces which together form complex shapes, the complex shapes can be rendered in their entirety.

  14. Accelerating epistasis analysis in human genetics with consumer graphics hardware

    PubMed Central

    2009-01-01

    Background: Human geneticists are now capable of measuring more than one million DNA sequence variations from across the human genome. The new challenge is to develop computationally feasible methods capable of analyzing these data for associations with common human disease, particularly in the context of epistasis. Epistasis describes the situation where multiple genes interact in a complex non-linear manner to determine an individual's disease risk and is thought to be ubiquitous for common diseases. Multifactor Dimensionality Reduction (MDR) is an algorithm capable of detecting epistasis. An exhaustive analysis with MDR is often computationally expensive, particularly for high-order interactions. This challenge has previously been met with parallel computation and expensive hardware. The option we examine here exploits commodity hardware designed for computer graphics. In modern computers, Graphics Processing Units (GPUs) have more memory bandwidth and computational capability than Central Processing Units (CPUs) and are well suited to this problem. Advances in the video game industry have led to an economy of scale creating a situation where these powerful components are readily available at very low cost. Here we implement and evaluate the performance of the MDR algorithm on GPUs. Of primary interest are the time required for an epistasis analysis and the price-to-performance ratio of available solutions. Findings: We found that using MDR on GPUs consistently increased performance per machine over both a feature-rich Java software package and a C++ cluster implementation. The performance of a GPU workstation running a GPU implementation reduces computation time by a factor of 160 compared to an 8-core workstation running the Java implementation on CPUs. This GPU workstation performs similarly to 150 cores running an optimized C++ implementation on a Beowulf cluster. Furthermore this GPU system provides extremely cost effective performance while leaving the CPU
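    The core MDR step the abstract summarizes can be sketched for a single SNP pair as follows. This is an illustrative reduction (no cross-validation, balanced-data risk threshold of 1), not the paper's GPU implementation:

```python
import numpy as np

def mdr_accuracy(g1, g2, status):
    """One MDR step for a single SNP pair: pool the 3x3 genotype
    combinations into high-risk cells (cases >= controls in that cell,
    the balanced-data threshold) and score how well this one-dimensional
    label classifies disease status. A full MDR run repeats this over
    all pairs (or higher-order tuples) with cross-validation."""
    cases = np.zeros((3, 3))
    controls = np.zeros((3, 3))
    for a, b, s in zip(g1, g2, status):
        (cases if s else controls)[a, b] += 1
    high_risk = cases >= controls        # per-cell case:control ratio >= 1
    pred = high_risk[g1, g2]             # classify each subject by its cell
    return (pred == status.astype(bool)).mean()

# Toy epistatic model: disease only when both loci are heterozygous.
rng = np.random.default_rng(2)
g1 = rng.integers(0, 3, 400)
g2 = rng.integers(0, 3, 400)
status = ((g1 == 1) & (g2 == 1)).astype(int)
print(mdr_accuracy(g1, g2, status))
```

    The exhaustive pair scan is what makes MDR expensive, and each pair's table construction is independent, which is why it maps well to GPU threads.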

  15. FFT and cone-beam CT reconstruction on graphics hardware

    NASA Astrophysics Data System (ADS)

    Després, Philippe; Sun, Mingshan; Hasegawa, Bruce H.; Prevrhal, Sven

    2007-03-01

    Graphics processing units (GPUs) are increasingly used for general purpose calculations. Their pipelined architecture can be exploited to accelerate various parallelizable algorithms. Medical imaging applications are inherently well suited to benefit from the development of GPU-based computational platforms. We evaluate in this work the potential of GPUs to improve the execution speed of two common medical imaging tasks, namely Fourier transforms and tomographic reconstructions. A two-dimensional fast Fourier transform (FFT) algorithm was GPU-implemented and compared, in terms of execution speed, to two popular CPU-based FFT routines. Similarly, the Feldkamp, Davis and Kress (FDK) algorithm for cone-beam tomographic reconstruction was implemented on the GPU and its performance compared to a CPU version. Different reconstruction strategies were employed to assess the performance of various GPU memory layouts. For the specific hardware used, GPU implementations of the FFT were up to 20 times faster than their CPU counterparts, but slower than highly optimized CPU versions of the algorithm. Tomographic reconstructions were faster on the GPU by a factor of up to 30, allowing 256³-voxel reconstructions from 256 projections in about 20 seconds. Overall, GPUs are an attractive alternative to other imaging-dedicated computing hardware like application-specific integrated circuits (ASICs) and field programmable gate arrays (FPGAs) in terms of cost, simplicity and versatility. With the development of simpler language extensions and programming interfaces, GPUs are likely to become essential tools in medical imaging.
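    Comparisons like the one above need a trusted reference. A common way to validate any fast FFT (CPU or GPU) is to check it against a direct DFT on a small input; the sketch below uses NumPy's FFT as the fast implementation:

```python
import numpy as np

def dft2_naive(a):
    """Direct 2-D DFT via explicit DFT matrices: an O(N^4)-work ground
    truth against which a fast (e.g. GPU) FFT can be validated."""
    n0, n1 = a.shape
    k0 = np.arange(n0).reshape(-1, 1)
    k1 = np.arange(n1).reshape(-1, 1)
    w0 = np.exp(-2j * np.pi * (k0 * k0.T) / n0)  # n0 x n0 DFT matrix
    w1 = np.exp(-2j * np.pi * (k1 * k1.T) / n1)  # n1 x n1 DFT matrix
    return w0 @ a @ w1

rng = np.random.default_rng(1)
a = rng.standard_normal((8, 8))
err = np.abs(dft2_naive(a) - np.fft.fft2(a)).max()
print(err)  # round-off-level disagreement only
```

    The same ground-truth comparison applies to the limited-precision GPU pipelines the era's hardware offered, where the residual would be much larger than round-off.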

  16. Implementing the lattice Boltzmann model on commodity graphics hardware

    NASA Astrophysics Data System (ADS)

    Kaufman, Arie; Fan, Zhe; Petkov, Kaloian

    2009-06-01

    Modern graphics processing units (GPUs) can perform general-purpose computations in addition to the native specialized graphics operations. Due to the highly parallel nature of graphics processing, the GPU has evolved into a many-core coprocessor that supports high data parallelism. Its performance has been growing at a rate of squared Moore's law, and its peak floating point performance exceeds that of the CPU by an order of magnitude. Therefore, it is a viable platform for time-sensitive and computationally intensive applications. The lattice Boltzmann model (LBM) computations are carried out via linear operations at discrete lattice sites, which can be implemented efficiently using a GPU-based architecture. Our simulations produce results comparable to the CPU version while improving performance by an order of magnitude. We have demonstrated that the GPU is well suited for interactive simulations in many applications, including simulating fire, smoke, lightweight objects in wind, jellyfish swimming in water, and heat shimmering and mirage (using the hybrid thermal LBM). We further advocate the use of a GPU cluster for large scale LBM simulations and for high performance computing. The Stony Brook Visual Computing Cluster has been the platform for several applications, including simulations of real-time plume dispersion in complex urban environments and thermal fluid dynamics in a pressurized water reactor. Major GPU vendors have been targeting the high performance computing market with GPU hardware implementations. Software toolkits such as NVIDIA CUDA provide a convenient development platform that abstracts the GPU and allows access to its underlying stream computing architecture. However, software programming for a GPU cluster remains a challenging task. We have therefore developed the Zippy framework to simplify GPU cluster programming. Zippy is based on global arrays combined with the stream programming model and it hides the low-level details of the
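    A minimal CPU sketch of the lattice Boltzmann update the abstract describes, on the standard D2Q9 lattice with BGK (single-relaxation-time) collision. Function names and parameters are illustrative, not from the paper:

```python
import numpy as np

# D2Q9 lattice: 9 discrete velocities and their quadrature weights.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, ux, uy):
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f, tau=0.6):
    """One lattice Boltzmann update: stream each population along its
    lattice velocity, then relax toward local equilibrium (BGK collision).
    Every lattice site updates independently, hence the GPU fit."""
    f = np.stack([np.roll(np.roll(f[i], c[i, 0], axis=1), c[i, 1], axis=0)
                  for i in range(9)])
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    return f + (equilibrium(rho, ux, uy) - f) / tau

# Start at rest with a small density bump; mass must be conserved exactly.
rho0 = np.ones((16, 16))
rho0[8, 8] = 1.1
f = equilibrium(rho0, np.zeros_like(rho0), np.zeros_like(rho0))
for _ in range(20):
    f = lbm_step(f)
print(f.sum())  # still 16*16 + 0.1 = 256.1 up to round-off
```

    The linear, local structure of the stream and collide steps is exactly what lets each lattice site map to an independent GPU thread, as the abstract notes.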

  17. Using 3D Computer Graphics Multimedia to Motivate Preservice Teachers' Learning of Geometry and Pedagogy

    ERIC Educational Resources Information Center

    Goodson-Espy, Tracy; Lynch-Davis, Kathleen; Schram, Pamela; Quickenton, Art

    2010-01-01

    This paper describes the genesis and purpose of our geometry methods course, focusing on a geometry-teaching technology we created using the NVIDIA[R] Chameleon demonstration. This article presents examples from a sequence of lessons centered on a 3D computer graphics demonstration of the chameleon and its geometry. In addition, we present data…

  18. Cp-curve, a Novel 3-D Graphical Representation of Proteins

    NASA Astrophysics Data System (ADS)

    Bai, Haihua; Li, Chun; Agula, Hasi; Jirimutu, Jirimutu; Wang, Jun; Xing, Lili

    2007-12-01

    Based on a five-letter model of the 20 amino acids, we propose a novel 3-D graphical representation of proteins. The method is illustrated on the mutant exon 1 of EDA gene of a Mongolian family with X-linked congenital anodontia/wavy hair.

  19. Tensor3D: A computer graphics program to simulate 3D real-time deformation and visualization of geometric bodies

    NASA Astrophysics Data System (ADS)

    Pallozzi Lavorante, Luca; Dirk Ebert, Hans

    2008-07-01

    Tensor3D is a geometric modeling program that simulates and visualizes in real time the deformation, specified through a tensor matrix, of triangulated models representing geological bodies. 3D visualization allows the study of deformational processes that are traditionally conducted in 2D, such as simple and pure shear. Besides the geometric objects that are immediately available in the program window, the program can read other models from disk, and can thus import objects created with different open-source or proprietary programs. A strain ellipsoid and a bounding box are shown alongside the main object and instantly deformed with it. The principal axes of strain are visualized as well, to provide graphical information about the orientation of the tensor's normal components. The deformed models can also be saved, retrieved later and deformed again, in order to study different steps of progressive strain, or to make the data available to other programs. The shape of the stress ellipsoid and the corresponding Mohr circles defined by any stress tensor can also be represented. The application was written using the Visualization ToolKit, a powerful scientific visualization library in the public domain. This development choice, together with the use of the Tcl/Tk programming language, which is independent of the host computational platform, makes the program a useful tool for studying geometric deformations directly in three dimensions in both teaching and research.
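    The kind of deformation Tensor3D applies can be illustrated by multiplying a deformation-gradient tensor into the vertex list of a triangulated model. The simple-shear matrix and shear magnitude below are just an example, not values from the program:

```python
import numpy as np

# A simple-shear deformation gradient F; the deformed position of every
# vertex is F @ x, which is how a tensor matrix deforms a triangulated
# model. gamma is an arbitrary shear magnitude for illustration.
gamma = 0.5
F = np.array([[1.0, gamma, 0.0],
              [0.0, 1.0,   0.0],
              [0.0, 0.0,   1.0]])

vertices = np.array([[0.0, 0.0, 0.0],
                     [1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])  # one triangle, rows are (x, y, z)
deformed = vertices @ F.T               # apply F to every row vector
print(deformed)          # only the vertex with y != 0 is displaced in x
print(np.linalg.det(F))  # 1.0: simple shear preserves volume
```

    Applying the same F to the unit sphere's points yields the strain ellipsoid the program draws alongside the model.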

  20. The design and implementation of stereoscopic 3D scalable vector graphics based on WebKit

    NASA Astrophysics Data System (ADS)

    Liu, Zhongxin; Wang, Wenmin; Wang, Ronggang

    2014-03-01

    Scalable Vector Graphics (SVG), a language based on the eXtensible Markup Language (XML), is used to describe basic shapes embedded in webpages, such as circles and rectangles. However, it can only depict 2D shapes; as a consequence, web pages using classical SVG can only display 2D shapes on a screen. With the rapid development of stereoscopic 3D (S3D) technology, binocular 3D devices have come into wide use. Under these circumstances, we intend to extend the widely used web rendering engine WebKit to support the description and display of S3D webpages, for which an extension of SVG is necessary. In this paper, we describe how to design and implement SVG shapes in stereoscopic 3D mode. Two attributes representing depth and thickness are added to support S3D shapes. The elimination of hidden lines and hidden surfaces, an important process in this project, is described as well. The modification of WebKit, made to support the simultaneous generation of the left and right views, is also discussed. As the results show, in contrast to the 2D shapes generated by the Google Chrome web browser, the shapes obtained from our modified browser are in S3D mode. With a sense of depth and thickness, the shapes appear to be real 3D objects standing out from the screen, rather than simple curves and lines as before.

  1. The effects of 3D interactive animated graphics on student learning and attitudes in computer-based instruction

    NASA Astrophysics Data System (ADS)

    Moon, Hye Sun

    Visuals are extensively used as instructional tools in education to present spatially based information. Recent computer technology allows the generation of 3D animated visuals to extend the presentation in computer-based instruction. Animated visuals in 3D representation not only possess motivational value that promotes positive attitudes toward instruction but also facilitate learning when the subject matter requires dynamic motion and 3D visual cues. In this study, three questions are explored: (1) how 3D graphics affect student learning and attitude, in comparison with 2D graphics; (2) how animated graphics affect student learning and attitude, in comparison with static graphics; and (3) whether the use of 3D graphics, when supported by interactive animation, is the most effective visual cue for improving learning and developing positive attitudes. A total of 145 eighth-grade students participated in a 2 x 2 factorial design study. The subjects were randomly assigned to one of four computer-based instructions: 2D static; 2D animated; 3D static; and 3D animated. The results indicated that: (1) Students in the 3D graphic condition exhibited more positive attitudes toward instruction than those in the 2D graphic condition. No group differences were found between the posttest scores of the 3D and 2D graphic conditions. However, students in the 3D graphic condition took less time for information retrieval on the posttest than those in the 2D graphic condition. (2) Students in the animated graphic condition exhibited slightly more positive attitudes toward instruction than those in the static graphic condition. No group differences were found between the posttest scores of the animated and static graphic conditions. However, students in the animated graphic condition took less time for information retrieval on the posttest than those in the static graphic condition. (3) Students in the 3D animated graphic condition

  2. Virtual reality hardware and graphic display options for brain-machine interfaces.

    PubMed

    Marathe, Amar R; Carey, Holle L; Taylor, Dawn M

    2008-01-15

    Virtual reality hardware and graphic displays are reviewed here as a development environment for brain-machine interfaces (BMIs). Two desktop stereoscopic monitors and one 2D monitor were compared in a visual depth discrimination task and in a 3D target-matching task where able-bodied individuals used actual hand movements to match a virtual hand to different target hands. Three graphic representations of the hand were compared: a plain sphere, a sphere attached to the fingertip of a realistic hand and arm, and a stylized pacman-like hand. Several subjects had great difficulty using either stereo monitor for depth perception when perspective size cues were removed. A mismatch in stereo and size cues generated inappropriate depth illusions. This phenomenon has implications for choosing target and virtual hand sizes in BMI experiments. Target-matching accuracy was about as good with the 2D monitor as with either 3D monitor. However, users achieved this accuracy by exploring the boundaries of the hand in the target with carefully controlled movements. This method of determining relative depth may not be possible in BMI experiments if movement control is more limited. Intuitive depth cues, such as including a virtual arm, can significantly improve depth perception accuracy with or without stereo viewing.

  3. Learning from graphically integrated 2D and 3D representations improves retention of neuroanatomy

    NASA Astrophysics Data System (ADS)

    Naaz, Farah

    Visualizations in the form of computer-based learning environments are highly encouraged in science education, especially for teaching spatial material. Some spatial material, such as sectional neuroanatomy, is very challenging to learn. It involves learning the two dimensional (2D) representations that are sampled from the three dimensional (3D) object. In this study, a computer-based learning environment was used to explore the hypothesis that learning sectional neuroanatomy from a graphically integrated 2D and 3D representation will lead to better learning outcomes than learning from a sequential presentation. The integrated representation explicitly demonstrates the 2D-3D transformation and should lead to effective learning. This study was conducted using a computer graphical model of the human brain. There were two learning groups: Whole then Sections, and Integrated 2D3D. Both groups learned whole anatomy (3D neuroanatomy) before learning sectional anatomy (2D neuroanatomy). The Whole then Sections group then learned sectional anatomy using 2D representations only. The Integrated 2D3D group learned sectional anatomy from a graphically integrated 3D and 2D model. A set of tests for generalization of knowledge to interpreting biomedical images was conducted immediately after learning was completed. The order of presentation of the tests of generalization of knowledge was counterbalanced across participants to explore a secondary hypothesis of the study: preparation for future learning. If the computer-based instruction programs used in this study are effective tools for teaching anatomy, the participants should continue learning neuroanatomy with exposure to new representations. A test of long-term retention of sectional anatomy was conducted 4-8 weeks after learning was completed. The Integrated 2D3D group was better than the Whole then Sections

  4. A Microscopic Optically Tracking Navigation System That Uses High-resolution 3D Computer Graphics.

    PubMed

    Yoshino, Masanori; Saito, Toki; Kin, Taichi; Nakagawa, Daichi; Nakatomi, Hirofumi; Oyama, Hiroshi; Saito, Nobuhito

    2015-01-01

    Three-dimensional (3D) computer graphics (CG) are useful for preoperative planning of neurosurgical operations. However, application of 3D CG to intraoperative navigation is not widespread, because existing commercial navigation systems do not show 3D CG in sufficient detail. We have developed a microscopic optically tracked navigation system that uses high-resolution 3D CG. This article presents the technical details of the system, which consists of three components: the operative microscope, registration, and the image display system. An optical tracker was attached to the microscope to monitor its position and attitude in real time; point-pair registration was used to align the operating room coordinate system with the image coordinate system; and the image display system showed the 3D CG image in the field of view of the microscope. Ten neurosurgeons (seven males, two females; mean age 32.9 years) participated in an experiment to assess the accuracy of this system using a phantom model, and its accuracy was compared with that of a commercial system. The 3D CG provided by the navigation system coincided well with the operative scene under the microscope. The target registration error for our system was 2.9 ± 1.9 mm. Our navigation system provides a clear image of the operation position and the surrounding structures. Systems like this may reduce intraoperative complications.

  5. Medical workstation design: enhancing graphical interface with 3D anatomical atlas

    NASA Astrophysics Data System (ADS)

    Hoo, Kent S., Jr.; Wong, Stephen T. C.; Grant, Ellen

    1997-05-01

    The huge data archive of the UCSF Hospital Integrated Picture Archiving and Communication System gives healthcare providers access to diverse kinds of images and text for diagnosis and patient management. Given the mass of information accessible, however, the conventional graphical user interface (GUI) approach overwhelms the user with forms, menus, fields, lists, and other widgets, causing 'information overload.' This article describes a new approach that complements the conventional GUI with 3D anatomical atlases and presents the usefulness of this approach with a clinical neuroimaging application.

  6. Development and New Directions for the RELAP5-3D Graphical Users Interface

    SciTech Connect

    Mesina, George Lee

    2001-09-01

    The development plan for the RELAP5 Graphical User Interface (RGUI) has been extended. In addition to existing plans for displaying all aspects of RELAP5 calculations, it now covers displaying the calculations of a variety of codes, including SCDAP, RETRAN and FLUENT. Recent work has included such extensions along with the previously planned and user-requested improvements and extensions. Visualization of heat structures has been added. Adaptations were made for another computer program, SCDAP-3D, including plant core views. An input model builder for generating RELAP5-3D input files was partially implemented. All of these are reported. Plans for future work are also summarized. These include an input processor that transfers steady-state conditions into an input file.

  7. 3D animation of facial plastic surgery based on computer graphics

    NASA Astrophysics Data System (ADS)

    Zhang, Zonghua; Zhao, Yan

    2013-12-01

    More and more people, especially women, desire to be more beautiful than ever. To some extent this has become possible: facial plastic surgery has been practiced since the early 20th century and even earlier, when doctors dealt mainly with facial war injuries. However, the outcome of an operation is not always satisfying, since the patient cannot see any animation of it beforehand. In this paper, by combining facial plastic surgery and computer graphics, a novel method of simulating the post-operative appearance is given to demonstrate the modified face from different viewpoints. The 3D human face data are obtained using 3D fringe pattern imaging systems and CT imaging systems and then converted into the STL (STereo Lithography) file format. An STL file is made up of small 3D triangular primitives. The triangular mesh can be reconstructed using a hash function. The frontmost triangles in depth are selected using a ray-casting technique. During simulation, mesh deformation operates on this front triangular mesh, deforming the region of interest rather than control points. Experiments on a face model show that the proposed 3D animation of facial plastic surgery can effectively demonstrate the simulated post-operative appearance.

  8. Interactive 3-D graphics workstations in stereotaxy: clinical requirements, algorithms, and solutions

    NASA Astrophysics Data System (ADS)

    Ehricke, Hans-Heino; Daiber, Gerhard; Sonntag, Ralf; Strasser, Wolfgang; Lochner, Mathias; Rudi, Lothar S.; Lorenz, Walter J.

    1992-09-01

    In stereotactic treatment planning the spatial relationships between a variety of objects have to be taken into account in order to avoid destruction of vital brain structures and rupture of vasculature. The visualization of these highly complex relations may be supported by 3-D computer graphics methods. In this context the three-dimensional display of the intracranial vascular tree and additional objects, such as neuroanatomy, pathology, stereotactic devices, or isodose surfaces, is of high clinical value. We report an advanced rendering method for a depth-enhanced maximum intensity projection from magnetic resonance angiography (MRA) and a walk-through approach to the analysis of MRA volume data. Furthermore, various methods for multiple-object 3-D rendering in stereotaxy are discussed. The development of advanced applications in medical imaging can hardly be successful if image acquisition problems are disregarded. We put particular emphasis on the use of conventional MRI and MRA for stereotactic guidance. The problem of MR distortion is discussed and a novel three-dimensional approach to the quantification and correction of the distortion patterns is presented. Our results suggest that the sole use of MR for stereotactic guidance is highly practical. The true three-dimensionality of the acquired datasets opens up new perspectives for stereotactic treatment planning. For the first time it is now possible to integrate all the necessary information into 3-D scenes, thus enabling interactive 3-D planning.

  9. Evaluation of accelerated iterative x-ray CT image reconstruction using floating point graphics hardware

    NASA Astrophysics Data System (ADS)

    Kole, J. S.; Beekman, F. J.

    2006-02-01

    Statistical reconstruction methods offer possibilities to improve image quality as compared with analytical methods, but current reconstruction times prohibit routine application in clinical and micro-CT. In particular, for cone-beam x-ray CT, the use of graphics hardware has been proposed to accelerate the forward and back-projection operations, in order to reduce reconstruction times. In the past, wide application of this texture hardware mapping approach was hampered owing to limited intrinsic accuracy. Recently, however, floating point precision has become available in the latest generation commodity graphics cards. In this paper, we utilize this feature to construct a graphics hardware accelerated version of the ordered subset convex reconstruction algorithm. The aims of this paper are (i) to study the impact of using graphics hardware acceleration for statistical reconstruction on the reconstructed image accuracy and (ii) to measure the speed increase one can obtain by using graphics hardware acceleration. We compare the unaccelerated algorithm with the graphics hardware accelerated version, and for the latter we consider two different interpolation techniques. A simulation study of a micro-CT scanner with a mathematical phantom shows that at almost preserved reconstructed image accuracy, speed-ups of a factor 40 to 222 can be achieved, compared with the unaccelerated algorithm, and depending on the phantom and detector sizes. Reconstruction from physical phantom data reconfirms the usability of the accelerated algorithm for practical cases.

  10. Evaluation of accelerated iterative x-ray CT image reconstruction using floating point graphics hardware.

    PubMed

    Kole, J S; Beekman, F J

    2006-02-21

    Statistical reconstruction methods offer possibilities to improve image quality as compared with analytical methods, but current reconstruction times prohibit routine application in clinical and micro-CT. In particular, for cone-beam x-ray CT, the use of graphics hardware has been proposed to accelerate the forward and back-projection operations, in order to reduce reconstruction times. In the past, wide application of this texture hardware mapping approach was hampered owing to limited intrinsic accuracy. Recently, however, floating point precision has become available in the latest generation commodity graphics cards. In this paper, we utilize this feature to construct a graphics hardware accelerated version of the ordered subset convex reconstruction algorithm. The aims of this paper are (i) to study the impact of using graphics hardware acceleration for statistical reconstruction on the reconstructed image accuracy and (ii) to measure the speed increase one can obtain by using graphics hardware acceleration. We compare the unaccelerated algorithm with the graphics hardware accelerated version, and for the latter we consider two different interpolation techniques. A simulation study of a micro-CT scanner with a mathematical phantom shows that at almost preserved reconstructed image accuracy, speed-ups of a factor 40 to 222 can be achieved, compared with the unaccelerated algorithm, and depending on the phantom and detector sizes. Reconstruction from physical phantom data reconfirms the usability of the accelerated algorithm for practical cases.
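    The ordered-subsets idea behind the accelerated algorithm above (cycling image updates over subsets of the projections) can be sketched with the classic emission-tomography OSEM update. This is a sketch of the subset structure only: the paper's ordered subset convex algorithm for transmission CT uses a different per-subset update formula.

```python
import numpy as np

def osem(y, A, n_subsets=2, n_iter=200):
    """Ordered-subsets ML-EM sketch: each sub-iteration updates the image
    using only one subset of the projection rows, which is where
    ordered-subsets methods get their speed over plain EM."""
    x = np.ones(A.shape[1])
    subsets = [np.arange(s, A.shape[0], n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for rows in subsets:
            As, ys = A[rows], y[rows]
            ratio = ys / (As @ x)          # measured / predicted projections
            x *= (As.T @ ratio) / As.sum(axis=0)
    return x

# Tiny noiseless system: 4 "rays" through 2 "pixels".
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
x_true = np.array([3.0, 5.0])
y = A @ x_true
x = osem(y, A)
print(x)  # converges to [3, 5]
```

    The forward projection `As @ x` and backprojection `As.T @ ratio` are the two operations the paper maps onto texture hardware, which is why floating-point precision in the graphics pipeline matters so much here.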

  11. A few modeling and rendering techniques for computer graphics and their implementation on ultra hardware

    NASA Technical Reports Server (NTRS)

    Bidasaria, Hari

    1989-01-01

    The Ultra Network is recently installed very-high-speed graphics hardware at NASA Langley Research Center. Interfaced to Voyager through its HSX channel, the Ultra Network can transmit up to 800 million bits of information per second. It can display fifteen to twenty frames per second of precomputed 1024 x 2368 images with 24 bits of color information per pixel. Modeling and rendering techniques are being developed in computer graphics and implemented on Ultra hardware. A ray tracer is being developed for use at the Flight Software and Graphics branch. Changes were made to make the ray tracer compatible with Voyager.

  12. Assessment of 3D Viewers for the Display of Interactive Documents in the Learning of Graphic Engineering

    ERIC Educational Resources Information Center

    Barbero, Basilio Ramos; Pedrosa, Carlos Melgosa; Mate, Esteban Garcia

    2012-01-01

    The purpose of this study is to determine which 3D viewers should be used for the display of interactive graphic engineering documents, so that the visualization and manipulation of 3D models provide useful support to students of industrial engineering (mechanical, organizational, electronic engineering, etc). The technical features of 26 3D…

  13. Real time 3D structural and Doppler OCT imaging on graphics processing units

    NASA Astrophysics Data System (ADS)

    Sylwestrzak, Marcin; Szlag, Daniel; Szkulmowski, Maciej; Gorczyńska, Iwona; Bukowska, Danuta; Wojtkowski, Maciej; Targowski, Piotr

    2013-03-01

    In this report the application of graphics processing unit (GPU) programming for real-time 3D Fourier domain Optical Coherence Tomography (FdOCT) imaging, with implementation of Doppler algorithms for visualization of flow in capillary vessels, is presented. Generally, the time needed to process FdOCT data on the computer's main processor (CPU) is the main limitation for real-time imaging. Employing additional algorithms, such as Doppler OCT analysis, makes this processing even more time consuming. Recently developed GPUs, which offer very high computational power, provide a solution to this problem. Taking advantage of them for massively parallel data processing allows real-time imaging in FdOCT. The presented software for structural and Doppler OCT performs the whole processing and visualization of 2D data consisting of 2000 A-scans generated from 2048-pixel spectra at a frame rate of about 120 fps. 3D imaging in the same mode, for volume data built of 220 × 100 A-scans, is performed at a rate of about 8 frames per second. In this paper the software architecture, the organization of the threads and the applied optimizations are shown. For illustration, screen shots recorded during real-time imaging of a phantom (a homogeneous water solution of Intralipid in a glass capillary) and of the human eye in vivo are presented.

  14. High-performance image reconstruction in fluorescence tomography on desktop computers and graphics hardware.

    PubMed

    Freiberger, Manuel; Egger, Herbert; Liebmann, Manfred; Scharfetter, Hermann

    2011-11-01

    Image reconstruction in fluorescence optical tomography is a three-dimensional nonlinear ill-posed problem governed by a system of partial differential equations. In this paper we demonstrate that a combination of state-of-the-art numerical algorithms and a careful hardware-optimized implementation makes it possible to solve this large-scale inverse problem in a few seconds on standard desktop PCs with modern graphics hardware. In particular, we present methods to solve not only the forward but also the non-linear inverse problem by massively parallel programming on graphics processors. A comparison of optimized CPU and GPU implementations shows that the reconstruction can be accelerated by factors of about 15 through the use of the graphics hardware without compromising the accuracy of the reconstructed images.

  15. Graphics hardware accelerated panorama builder for mobile phones

    NASA Astrophysics Data System (ADS)

    Bordallo López, Miguel; Hannuksela, Jari; Silvén, Olli; Vehviläinen, Markku

    2009-02-01

    Modern mobile communication devices frequently contain built-in cameras that allow users to capture high-resolution still images, but at the same time the imaging applications face both usability and throughput bottlenecks. The difficulties in taking ad hoc pictures of printed paper documents with a multi-megapixel cellular phone camera, a common business use case, illustrate these problems for anyone. The result can be examined only after several seconds and is often blurry, so a new picture is needed even though the viewfinder image had looked good. The process can be frustrating, with waits and no way for the user to predict the quality beforehand. The problems can be traced to the mismatch between processor speed and camera resolution, and to the application's interactivity demands. In this context we analyze building mosaic images of printed documents from frames selected from VGA-resolution (640x480 pixel) video. High interactivity is achieved by providing real-time feedback on quality while simultaneously guiding the user's actions. The graphics processing unit of the mobile device can be used to speed up the reconstruction computations. To demonstrate the viability of the concept, we present an interactive document scanning application implemented on a Nokia N95 mobile phone.
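
    One way to realize the real-time frame-quality feedback described above is a per-frame focus measure. The sketch below uses the standard variance-of-Laplacian measure; this specific metric is an assumption, since the paper does not state which measure it uses:

```python
import numpy as np

def sharpness(gray):
    """Variance-of-Laplacian focus measure: higher means sharper.

    gray : 2-D float array (grayscale frame).
    """
    # 4-neighbor discrete Laplacian on the interior pixels
    lap = (-4 * gray[1:-1, 1:-1] + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return lap.var()

def select_frames(frames, thresh):
    """Indices of frames sharp enough to add to the mosaic, giving the
    user immediate accept/reject feedback per frame."""
    return [i for i, f in enumerate(frames) if sharpness(f) >= thresh]
```

    Rejecting blurred frames before registration keeps the mosaic sharp and avoids wasting the phone's limited compute on unusable input.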

  16. Parallel Implementation of MAFFT on CUDA-Enabled Graphics Hardware.

    PubMed

    Zhu, Xiangyuan; Li, Kenli; Salah, Ahmad; Shi, Lin; Li, Keqin

    2015-01-01

    Multiple sequence alignment (MSA) constitutes an extremely powerful tool for many biological applications including phylogenetic tree estimation, secondary structure prediction, and critical residue identification. However, aligning large biological sequences with popular tools such as MAFFT requires long runtimes on sequential architectures. Due to the ever-increasing sizes of sequence databases, there is increasing demand to accelerate this task. In this paper, we demonstrate how graphics processing units (GPUs), powered by the compute unified device architecture (CUDA), can be used as an efficient computational platform to accelerate the MAFFT algorithm. To fully exploit the GPU's capabilities for accelerating MAFFT, we have optimized the sequence data organization to eliminate the bandwidth bottleneck of memory access, designed a memory allocation and reuse strategy to make full use of the limited memory of GPUs, proposed a new modified run-length encoding (MRLE) scheme to reduce memory consumption, and used high-performance shared memory to speed up I/O operations. Our implementation, tested on three NVIDIA GPUs, achieves speedups of up to 11.28 on a Tesla K20m GPU compared with the sequential MAFFT 7.015.
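
    The MRLE scheme itself is specific to the paper; plain run-length encoding, sketched below, illustrates the underlying idea of compressing runs (such as long gap stretches in alignment columns) to reduce memory consumption:

```python
def rle_encode(seq):
    """Plain run-length encoding: a list of (symbol, count) pairs.

    The paper's MRLE is a modified variant tuned to alignment data;
    this sketch only shows how runs compress."""
    out = []
    for ch in seq:
        if out and out[-1][0] == ch:
            out[-1][1] += 1          # extend the current run
        else:
            out.append([ch, 1])      # start a new run
    return [(c, n) for c, n in out]

def rle_decode(pairs):
    """Inverse of rle_encode."""
    return "".join(c * n for c, n in pairs)
```

    On gap-heavy alignment rows such as `AAAC---GG`, the encoded form stores one pair per run instead of one symbol per position.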

  17. Compressed sensing reconstruction for whole-heart imaging with 3D radial trajectories: a graphics processing unit implementation.

    PubMed

    Nam, Seunghoon; Akçakaya, Mehmet; Basha, Tamer; Stehning, Christian; Manning, Warren J; Tarokh, Vahid; Nezafat, Reza

    2013-01-01

    A disadvantage of three-dimensional (3D) isotropic acquisition in whole-heart coronary MRI is the prolonged data acquisition time. Isotropic 3D radial trajectories allow undersampling of k-space data in all three spatial dimensions, enabling accelerated acquisition of the volumetric data. Compressed sensing (CS) reconstruction can provide further acceleration in the acquisition by removing the incoherent artifacts due to undersampling and improving the image quality. However, the heavy computational overhead of the CS reconstruction has been a limiting factor for its application. In this article, a parallelized implementation of an iterative CS reconstruction method for 3D radial acquisitions using a commercial graphics processing unit (GPU) is presented. The execution time of the GPU-implemented CS reconstruction was compared with that of the C++ implementation, and the efficacy of the undersampled 3D radial acquisition with CS reconstruction was investigated in both phantom and whole-heart coronary data sets. Subsequently, the efficacy of CS in suppressing streaking artifacts in 3D whole-heart coronary MRI with 3D radial imaging, and its convergence properties, were studied. The CS reconstruction provides improved image quality (in terms of vessel sharpness and suppression of noise-like artifacts) compared with the conventional 3D gridding algorithm, and the GPU implementation greatly reduces the execution time of CS reconstruction, yielding a 34- to 54-fold speed-up compared with the C++ implementation.
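
    Iterative CS solvers of this kind typically alternate a gradient step on the data-fidelity term with a sparsity-promoting shrinkage. The sketch below shows plain ISTA on a generic matrix operator; the actual reconstruction uses a non-Cartesian 3D radial measurement operator and is not reproduced here:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1.

    A : (m, n) measurement matrix standing in for the radial operator
    y : (m,) undersampled data; lam : sparsity weight.
    """
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # gradient step on the data term, then shrinkage
        x = soft(x - (A.T @ (A @ x - y)) / L, lam / L)
    return x
```

    Each iteration is dominated by the forward and adjoint operator applications, which is precisely the work the GPU implementation parallelizes.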

  18. A graphical user interface for calculation of 3D dose distribution using Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Chow, J. C. L.; Leung, M. K. K.

    2008-02-01

    A software graphical user interface (GUI) for calculation of 3D dose distributions using Monte Carlo (MC) simulation was developed in MATLAB. This GUI (DOSCTP) provides a user-friendly platform for DICOM CT-based dose calculation using the EGSnrcMP-based DOSXYZnrc code. It offers numerous features not found in DOSXYZnrc, such as the ability to use multiple beams from different phase-space files, and has built-in dose analysis and visualization tools. DOSCTP is written completely in MATLAB, with integrated access to DOSXYZnrc and CTCREATE. The program's functions may be divided into four subgroups, namely beam placement, MC simulation with DOSXYZnrc, dose visualization, and export, each controlled by separate routines. The verification of DOSCTP was carried out by comparing plans with different beam arrangements (multi-beam/photon arc), on an inhomogeneous phantom as well as a patient CT, between the GUI and Pinnacle3. DOSCTP was developed and verified with the following features: (1) a built-in voxel editor to modify CT-based DOSXYZnrc phantoms for research purposes; (2) multi-beam placement, which cannot be achieved using the current DOSXYZnrc code; (3) export of the treatment plan, including the dose distributions, contours, and image set, to a commercial treatment planning system such as Pinnacle3 or to CERR in RTOG format for plan evaluation and comparison; (4) a built-in RTOG-compatible dose reviewer for dose visualization and analysis, such as finding the volume of hot/cold spots in the 3D dose distributions based on a user threshold. DOSCTP greatly simplifies the use of DOSXYZnrc and CTCREATE, and offers numerous features not found in the original user code. Moreover, since phase-space beams can be defined and generated by the user, it is a particularly useful tool for carrying out plans using specifically designed irradiators/accelerators that cannot be found in the linac library of commercial treatment planning systems.
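
    Feature (4), finding the volume of hot or cold spots by thresholding, reduces to counting voxels. A minimal sketch (the function name is hypothetical, not DOSCTP's actual code, which is written in MATLAB):

```python
import numpy as np

def hot_spot_volume(dose, voxel_volume_cm3, threshold):
    """Volume of the region receiving at least `threshold` dose.

    dose             : 3-D array of dose values, one per voxel
    voxel_volume_cm3 : volume of a single voxel in cm^3
    """
    # count voxels at or above threshold, then scale by voxel volume
    return np.count_nonzero(dose >= threshold) * voxel_volume_cm3
```

    A cold-spot volume is the same computation with the comparison reversed against a lower threshold.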

  19. Dynamic 3-D computer graphics for designing a diagnostic tool for patients with schizophrenia.

    PubMed

    Farkas, Attila; Papathomas, Thomas V; Silverstein, Steven M; Kourtev, Hristiyan; Papayanopoulos, John F

    2016-11-01

    We introduce a novel procedure that uses dynamic 3-D computer graphics as a diagnostic tool for assessing disease severity in schizophrenia patients, based on their reduced influence of top-down cognitive processes in interpreting bottom-up sensory input. Our procedure uses the hollow-mask illusion, in which the concave side of a mask is misperceived as convex, because familiarity with convex faces dominates the sensory cues signaling a concave mask. It is known that schizophrenia patients resist this illusion and that their resistance increases with illness severity. Our method uses virtual masks rendered with two competing textures: (a) realistic features that enhance the illusion; (b) random-dot visual noise that reduces the illusion. We control the relative weights of the two textures to obtain psychometric functions for controls and patients and to assess illness severity. The primary novelty is the use of a rotating mask that is easy to implement on a wide variety of portable devices and avoids the elaborate stereoscopic devices that have been used in the past. Thus our method, which can also be used to assess the efficacy of treatments, gives clinicians the ability to bring the test to the patient's own environment, instead of having to bring patients to the clinic.

  20. Graphical Methods: A Review of Current Methods and Computer Hardware and Software. Technical Report No. 27.

    ERIC Educational Resources Information Center

    Bessey, Barbara L.; And Others

    Graphical methods for displaying data, as well as available computer software and hardware, are reviewed. The authors have emphasized the types of graphs which are most relevant to the needs of the National Center for Education Statistics (NCES) and its readers. The following types of graphs are described: tabulations, stem-and-leaf displays,…

  1. Towards a More Effective Use of 3D-Graphics in Mathematics Education--Utilisation of KETpic to Insert Figures into LATEX Documents

    ERIC Educational Resources Information Center

    Kitahara, Kiyoshi; Abe, Takayuki; Kaneko, Masataka; Yamashita, Satoshi; Takato, Setsuo

    2010-01-01

    Computer Algebra Systems (CAS) are equipped with rich facilities to show graphics, so the use of CAS to show 3D-graphics on screen is a popular tool for mathematics education. However, showing 3D-graphics in mass printed materials is a different story, since the clarity and preciseness of figures tend to be lost. To fill this gap, we developed…

  2. 3D Imaging for hand gesture recognition: Exploring the software-hardware interaction of current technologies

    NASA Astrophysics Data System (ADS)

    Periverzov, Frol; Ilieş, Horea T.

    2012-09-01

    Interaction with 3D information is one of the fundamental and most familiar tasks in virtually all areas of engineering and science. Several recent technological advances pave the way for developing hand gesture recognition capabilities available to all, which will lead to more intuitive and efficient 3D user interfaces (3DUI). These developments can unlock new levels of expression and productivity in all activities concerned with the creation and manipulation of virtual 3D shapes and, specifically, in engineering design. Building fully automated systems for tracking and interpreting hand gestures requires robust and efficient 3D imaging techniques as well as potent shape classifiers. We survey and explore current and emerging 3D imaging technologies, and focus, in particular, on those that can be used to build interfaces between the users' hands and the machine. The purpose of this paper is to categorize and highlight the relevant differences between these existing 3D imaging approaches in terms of the nature of the information provided, output data format, as well as the specific conditions under which these approaches yield reliable data. Furthermore we explore the impact of each of these approaches on the computational cost and reliability of the required image processing algorithms. Finally we highlight the main challenges and opportunities in developing natural user interfaces based on hand gestures, and conclude with some promising directions for future research.

  3. Effectiveness of Applying 2D Static Depictions and 3D Animations to Orthographic Views Learning in Graphical Course

    ERIC Educational Resources Information Center

    Wu, Chih-Fu; Chiang, Ming-Chin

    2013-01-01

    This study provides experiment results as an educational reference for instructors to help student obtain a better way to learn orthographic views in graphical course. A visual experiment was held to explore the comprehensive differences between 2D static and 3D animation object features; the goal was to reduce the possible misunderstanding…

  4. Graphics to H.264 video encoding for 3D scene representation and interaction on mobile devices using region of interest

    NASA Astrophysics Data System (ADS)

    Le, Minh Tuan; Nguyen, Congdu; Yoon, Dae-Il; Jung, Eun Ku; Jia, Jie; Kim, Hae-Kwang

    2007-12-01

    In this paper, we propose a method of 3D-graphics-to-video encoding and streaming, embedded in a remote interactive 3D visualization system, for rapidly representing a 3D scene on mobile devices without having to download it from the server. In particular, a 3D-graphics-to-video framework is presented that increases the visual quality of regions of interest (ROI) of the video by allocating more bits to the ROI during H.264 video encoding. The ROI are identified by projecting 3D objects onto a 2D plane during rasterization. The system allows users to navigate the 3D scene and interact with objects of interest to query their descriptions. We developed an adaptive media streaming server that can provide an adaptive video stream, in terms of object-based quality, to the client according to the user's preferences and the variation of network bandwidth. Results show that with ROI mode selection, the PSNR of the test sample changes only slightly while the visual quality of the objects increases evidently.
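
    The ROI bit-allocation idea can be sketched as a per-macroblock quantization-parameter (QP) map: a lower QP (finer quantization, more bits) inside the ROI and a higher QP outside. The function below is an illustrative simplification, not the paper's encoder:

```python
def roi_qp_map(roi_mask, base_qp, delta=6):
    """Per-macroblock QP values for ROI-weighted bit allocation.

    roi_mask : 2-D list of booleans, True where the macroblock covers
               a projected object of interest
    base_qp  : nominal QP; delta : QP offset applied in/out of the ROI
    """
    lo = max(0, base_qp - delta)           # finer quantization in ROI
    hi = min(51, base_qp + delta)          # H.264 QP range is 0..51
    return [[lo if in_roi else hi for in_roi in row] for row in roi_mask]
```

    Feeding such a map to the encoder's rate control shifts bits toward the objects the user is interacting with, which is the effect the PSNR results above describe.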

  5. A Quasi-3D, Purcell-Filtered Hardware Module for Quantum Information

    NASA Astrophysics Data System (ADS)

    Axline, C.; Reagor, M.; Shain, K.; Reinhold, P.; Brecht, T.; Holland, E.; Wang, C.; Heeres, R.; Frunzio, L.; Schoelkopf, R. J.

    2015-03-01

    The advent of 3D circuit quantum electrodynamics has provided an ultra-low-loss environment for superconducting qubits, boosting qubit coherences above 100 microseconds and linear resonator lifetimes above 10 milliseconds. Planar devices, however, allow lithographic control of parameters and suggest greater scalability. We have developed a single-chip, seamless-cavity architecture that answers the call for a modular computational element, comprising a 3D transmon, fast Purcell-filtered readout, and a long-lived storage cavity. This design incorporates advantages of both 2D and 3D architectures. It also serves as a novel testbed for qubit loss mechanisms, as resonator and qubit modes have similar material participations. Initial results, with T1 and T2 comparable to the best 3D transmons, shift blame away from the metal-substrate interfaces widely considered to be the limiting loss channel in current-generation transmons, and further experiments using this system will probe these losses more carefully. We propose several modifications and extensions to these modules, both to miniaturize the design and to build more sophisticated quantum systems. Work supported by: IARPA, ARO, ONR, and NSF.

  6. Hub-based simulation and graphics hardware accelerated visualization for nanotechnology applications.

    PubMed

    Qiao, Wei; McLennan, Michael; Kennell, Rick; Ebert, David S; Klimeck, Gerhard

    2006-01-01

    The Network for Computational Nanotechnology (NCN) has developed a science gateway at nanoHUB.org for nanotechnology education and research. Remote users can browse through online seminars and courses, and launch sophisticated nanotechnology simulation tools, all within their web browser. Simulations are supported by a middleware that can route complex jobs to grid supercomputing resources. But what is truly unique about the middleware is the way that it uses hardware accelerated graphics to support both problem setup and result visualization. This paper describes the design and integration of a remote visualization framework into the nanoHUB for interactive visual analytics of nanotechnology simulations. Our services flexibly handle a variety of nanoscience simulations, render them utilizing graphics hardware acceleration in a scalable manner, and deliver them seamlessly through the middleware to the user. Rendering is done only on-demand, as needed, so each graphics hardware unit can simultaneously support many user sessions. Additionally, a novel node distribution scheme further improves our system's scalability. Our approach is not only efficient but also cost-effective. Only a half-dozen render nodes are anticipated to support hundreds of active tool sessions on the nanoHUB. Moreover, this architecture and visual analytics environment provides capabilities that can serve many areas of scientific simulation and analysis beyond nanotechnology with its ability to interactively analyze and visualize multivariate scalar and vector fields.

  7. Hacking for astronomy: can 3D printers and open-hardware enable low-cost sub-/millimeter instrumentation?

    NASA Astrophysics Data System (ADS)

    Ferkinhoff, Carl

    2014-07-01

    There have been several exciting developments in the technologies commonly used in the hardware-hacking community. Advances in low-cost additive manufacturing processes (i.e. 3D printers) and the development of open-hardware projects, which have produced inexpensive and easily programmable micro-controllers and micro-computers (i.e. Arduino and Raspberry Pi), have opened a new door for individuals seeking to make their own devices. Here we describe the potential of these technologies to reduce the costs of constructing and developing submillimeter/millimeter astronomical instrumentation. Specifically, we have begun a program to measure the optical properties of the custom plastics used in 3D printers, as well as the printer accuracy and resolution, to assess the feasibility of directly printing sub-/millimeter transmissive optics. We will also discuss low-cost designs for cryogenic temperature measurement and control utilizing the Arduino and Raspberry Pi.

  8. Use of a graphics processing unit (GPU) to facilitate real-time 3D graphic presentation of the patient skin-dose distribution during fluoroscopic interventional procedures

    NASA Astrophysics Data System (ADS)

    Rana, Vijay; Rudin, Stephen; Bednarek, Daniel R.

    2012-03-01

    We have developed a dose-tracking system (DTS) that calculates the radiation dose to the patient's skin in real-time by acquiring exposure parameters and imaging-system-geometry from the digital bus on a Toshiba Infinix C-arm unit. The cumulative dose values are then displayed as a color map on an OpenGL-based 3D graphic of the patient for immediate feedback to the interventionalist. Determination of those elements on the surface of the patient 3D-graphic that intersect the beam and calculation of the dose for these elements in real time demands fast computation. Reducing the size of the elements results in more computation load on the computer processor and therefore a tradeoff occurs between the resolution of the patient graphic and the real-time performance of the DTS. The speed of the DTS for calculating dose to the skin is limited by the central processing unit (CPU) and can be improved by using the parallel processing power of a graphics processing unit (GPU). Here, we compare the performance speed of GPU-based DTS software to that of the current CPU-based software as a function of the resolution of the patient graphics. Results show a tremendous improvement in speed using the GPU. While an increase in the spatial resolution of the patient graphics resulted in slowing down the computational speed of the DTS on the CPU, the speed of the GPU-based DTS was hardly affected. This GPU-based DTS can be a powerful tool for providing accurate, real-time feedback about patient skin-dose to physicians while performing interventional procedures.
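
    The per-vertex work the GPU parallelizes, testing each element of the patient graphic against the beam and accumulating dose, can be sketched as follows. This is a simple cone-beam, inverse-square model with hypothetical names; the real DTS applies the measured exposure parameters and geometry from the C-arm bus:

```python
import numpy as np

def accumulate_dose(points, dose, source, axis, half_angle, exposure):
    """Add one exposure's dose to skin-graphic vertices inside the beam.

    points     : (N, 3) vertex positions of the patient graphic
    dose       : (N,) running skin-dose array, updated in place
    source     : (3,) focal-spot position; axis : (3,) unit beam direction
    half_angle : beam cone half-angle in radians
    exposure   : dose delivered at unit distance from the source
    """
    v = points - source
    r = np.linalg.norm(v, axis=1)
    cosang = (v @ axis) / np.maximum(r, 1e-12)
    inside = cosang >= np.cos(half_angle)       # vertex within the beam cone?
    dose[inside] += exposure / r[inside] ** 2   # inverse-square falloff
    return dose
```

    On a GPU, one thread handles one vertex, so refining the patient graphic barely changes the runtime, which matches the scaling behavior reported above.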

  9. Use of a graphics processing unit (GPU) to facilitate real-time 3D graphic presentation of the patient skin-dose distribution during fluoroscopic interventional procedures.

    PubMed

    Rana, Vijay; Rudin, Stephen; Bednarek, Daniel R

    2012-02-23

    We have developed a dose-tracking system (DTS) that calculates the radiation dose to the patient's skin in real-time by acquiring exposure parameters and imaging-system-geometry from the digital bus on a Toshiba Infinix C-arm unit. The cumulative dose values are then displayed as a color map on an OpenGL-based 3D graphic of the patient for immediate feedback to the interventionalist. Determination of those elements on the surface of the patient 3D-graphic that intersect the beam and calculation of the dose for these elements in real time demands fast computation. Reducing the size of the elements results in more computation load on the computer processor and therefore a tradeoff occurs between the resolution of the patient graphic and the real-time performance of the DTS. The speed of the DTS for calculating dose to the skin is limited by the central processing unit (CPU) and can be improved by using the parallel processing power of a graphics processing unit (GPU). Here, we compare the performance speed of GPU-based DTS software to that of the current CPU-based software as a function of the resolution of the patient graphics. Results show a tremendous improvement in speed using the GPU. While an increase in the spatial resolution of the patient graphics resulted in slowing down the computational speed of the DTS on the CPU, the speed of the GPU-based DTS was hardly affected. This GPU-based DTS can be a powerful tool for providing accurate, real-time feedback about patient skin-dose to physicians while performing interventional procedures.

  10. 3D graphics, virtual reality, and motion-onset visual evoked potentials in neurogaming.

    PubMed

    Beveridge, R; Wilson, S; Coyle, D

    2016-01-01

    A brain-computer interface (BCI) offers movement-free control of a computer application and is achieved by reading and translating the cortical activity of the brain into semantic control signals. Motion-onset visual evoked potentials (mVEP) are neural potentials employed in BCIs and occur when motion-related stimuli are attended visually. mVEP dynamics are correlated with the position and timing of the moving stimuli. To investigate the feasibility of utilizing the mVEP paradigm with video games of various graphical complexities, including those of commercial quality, we conducted three studies over four separate sessions comparing the performance of classifying five mVEP responses under variations in graphical complexity and style, in-game distractions, and display parameters surrounding the mVEP stimuli. To investigate the feasibility of utilizing contemporary presentation modalities in neurogaming, one of the studies compared mVEP classification performance when stimuli were presented using the Oculus Rift virtual reality headset. Results from 31 independent subjects were analyzed offline. The results show classification performances of up to 90%, with variations in graphical complexity having limited effect on mVEP performance, thus demonstrating the feasibility of using the mVEP paradigm within BCI-based neurogaming.

  11. RGUI 1.0, New Graphical User Interface for RELAP5-3D

    SciTech Connect

    Mesina, George Lee; Galbraith, James Andrew

    1999-04-01

    With the advent of three-dimensional modeling in nuclear safety analysis codes, the need has arisen for a new display methodology. Currently, analysts either sort through voluminous numerical displays of data at points in a region, or view color coded interpretations of the data on a two-dimensional rendition of the plant. RGUI 1.0 provides 3D capability for displaying data. The 3D isometric hydrodynamic image is built automatically from the input deck without additional input from the user. Standard view change features allow the user to focus on only the important data. Familiar features that are standard to the nuclear industry, such as run, interact, and monitor, are included. RGUI 1.0 reduces the difficulty of analyzing complex three-dimensional plants.

  12. RGUI 1.0, New Graphical User Interface for RELAP5-3D

    SciTech Connect

    G. L. Mesina; J. Galbraith

    1999-04-01

    With the advent of three-dimensional modeling in nuclear safety analysis codes, the need has arisen for a new display methodology. Currently, analysts either sort through voluminous numerical displays of data at points in a region, or view color coded interpretations of the data on a two-dimensional rendition of the plant. RGUI 1.0 provides 3D capability for displaying data. The 3D isometric hydrodynamic image is built automatically from the input deck without additional input from the user. Standard view change features allow the user to focus on only the important data. Familiar features that are standard to the nuclear industry, such as run, interact, and monitor, are included. RGUI 1.0 reduces the difficulty of analyzing complex three-dimensional plants.

  13. Real World Issues in Developing a Malaysian Forest Battlefield Environment for Small Unit Tactics Using 3D Graphics

    NASA Astrophysics Data System (ADS)

    Alsagoff, Syed Nasir

    In the military, training is essential as preparation for war. Small unit training involves training platoon- and section-sized units. The soldiers must train to maneuver, shoot, and communicate. For the training to be successful, it must be as realistic as possible; realistic training allows the soldiers to be mentally and physically prepared for the battlefield. Unfortunately, there is a wide gap between training and the resources required to properly conduct it [5]. Resources consist of suitable training locations and material support such as ammunition, rations, and fuel. Limitations on these resources mean that training cannot be as realistic as intended. To ensure effective use of the limited training resources, training should be conducted in a simulated environment before migrating to a live environment. This paper discusses the real-world issues in developing a Malaysian Forest Battlefield Environment 3D simulation for small unit tactics using 3D graphics.

  14. Isoparametric 3-D Finite Element Mesh Generation Using Interactive Computer Graphics

    NASA Technical Reports Server (NTRS)

    Kayrak, C.; Ozsoy, T.

    1985-01-01

    An isoparametric 3-D finite element mesh generator was developed with a direct interface to an interactive geometric modeler program called POLYGON. POLYGON defines the model geometry in terms of boundaries and mesh regions for the mesh generator. The mesh generator controls the mesh flow through the 2-dimensional spans of regions by using the topological data and defines the connectivity between regions. The program is menu driven; the user has control of element density and biasing through the spans, and can also apply boundary conditions and loads interactively.
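
    Isoparametric mesh generation rests on shape-function mappings from a reference square to each physical mesh region. A minimal sketch for the 4-node bilinear case (illustrative only, not the generator's code, which predates and does not use Python):

```python
import numpy as np

def bilinear_map(corners, xi, eta):
    """Map reference coordinates (xi, eta) in [-1, 1]^2 to the physical
    coordinates of a quadrilateral region.

    corners : (4, 2) region corner coordinates in counter-clockwise order.
    """
    # 4-node isoparametric shape functions evaluated at (xi, eta)
    N = np.array([(1 - xi) * (1 - eta), (1 + xi) * (1 - eta),
                  (1 + xi) * (1 + eta), (1 - xi) * (1 + eta)]) / 4.0
    return N @ np.asarray(corners, float)
```

    Placing nodes at a grid of (xi, eta) values, with non-uniform spacing for biasing, and mapping them through this function is how element density is graded across a span.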

  15. A Parallel Implementation of a Smoothed Particle Hydrodynamics Method on Graphics Hardware Using the Compute Unified Device Architecture

    SciTech Connect

    Wong Unhong; Wong Honcheng; Tang Zesheng

    2010-05-21

    The smoothed particle hydrodynamics (SPH) method, a class of meshfree particle methods (MPMs), has a wide range of applications from micro scale to macro scale as well as from discrete systems to continuum systems. Graphics hardware, originally designed for computer graphics, now provides unprecedented computational power for scientific computation. Particle systems require a huge amount of computation in physical simulation. In this paper, an efficient parallel implementation of an SPH method on graphics hardware using the Compute Unified Device Architecture (CUDA) is developed for fluid simulation. Compared with the corresponding CPU implementation, our experimental results show that the new approach achieves significant speedups in fluid simulation by handling the huge amount of computation in parallel on graphics hardware.
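
    The basic SPH step that such a GPU port parallelizes is the per-particle density summation over a smoothing kernel. Below is a minimal NumPy sketch using the common poly6 kernel (an assumption; the paper does not specify its kernel), with the O(N^2) pair interaction that one CUDA thread per particle would evaluate; production codes prune pairs with spatial hashing:

```python
import numpy as np

def poly6(r2, h):
    """Poly6 smoothing kernel W(r, h) for 3D, as a function of r^2."""
    k = 315.0 / (64.0 * np.pi * h ** 9)
    return k * np.where(r2 < h * h, (h * h - r2) ** 3, 0.0)

def sph_density(positions, mass, h):
    """Per-particle density: rho_i = sum_j m * W(|x_i - x_j|, h).

    positions : (N, 3) particle positions; mass : particle mass
    h         : smoothing length.
    """
    diff = positions[:, None, :] - positions[None, :, :]
    r2 = np.einsum('ijk,ijk->ij', diff, diff)   # all pairwise |x_i - x_j|^2
    return mass * poly6(r2, h).sum(axis=1)      # one row per particle
```

    Each row of the pair matrix is independent, which is why mapping one thread to one particle gives the speedups the paper reports.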

  16. Cross-Platform Graphical User Interface with fast 3-D Rendering for Particle-in-Cell Simulations

    NASA Astrophysics Data System (ADS)

    Bruhwiler, David; Luetkemeyer, Kelly; Cary, John

    1999-11-01

    The Graphical User Interface (GUI) for XOOPIC (X11-based Object-Oriented Particle-in-Cell) is being ported to Qt, a cross-platform C++ windowing toolkit, thus permitting the code to run on PCs running Windows 95/98/NT and Linux, as well as on all commercial Unix platforms. All 3-D graphics will be handled through OpenGL, the cross-platform standard for fast 3-D rendering. The use of object-oriented design (OOD) techniques keeps the GUI/physics interface clean, and minimizes the impact of GUI development on the physics code. OOD also improves the maintainability and extensibility of large scientific simulation codes, while allowing for cross-platform portability and ready interchange of individual algorithms or entire physics kernels. Planned new GUI features include interactive modification of the simulation parameters, including generation of a slowly-varying mesh and automatic updating of a corresponding input file. Improved modeling of high-power microwave tubes is one of the primary applications being targeted by this project.

  17. 3D Computer graphics simulation to obtain optimal surgical exposure during microvascular decompression of the glossopharyngeal nerve.

    PubMed

    Hiraishi, Tetsuya; Matsushima, Toshio; Kawashima, Masatou; Nakahara, Yukiko; Takahashi, Yuichi; Ito, Hiroshi; Oishi, Makoto; Fujii, Yukihiko

    2013-10-01

    The affected artery in glossopharyngeal neuralgia (GPN) is most often the posterior inferior cerebellar artery (PICA), from the caudal side, or the anterior inferior cerebellar artery (AICA), from the rostral side. This technical report describes two representative cases of GPN, one with PICA as the affected artery and the other with AICA, and demonstrates the optimal approach for each affected artery. We used 3D computer graphics (3D CG) simulation to consider the ideal transposition of the affected artery in any position and approach. Subsequently, we performed microvascular decompression (MVD) surgery based on this simulation. For PICA, we used the transcondylar fossa approach in the lateral recumbent position, very close to the prone position, with the patient's head tilted anteriorly for caudal transposition of PICA. In contrast, for AICA, we adopted a lateral suboccipital approach with opening of the lateral cerebellomedullary fissure, to better visualize the root entry zone of the glossopharyngeal nerve and to obtain a wide working space in the cerebellomedullary cistern, for rostral transposition of AICA. Both procedures were performed successfully. The best surgical approach for MVD in patients with GPN is contingent on the affected artery, whether PICA or AICA. 3D CG simulation provides a tailored approach for MVD of the glossopharyngeal nerve, thereby ensuring optimal surgical exposure.

  18. High-Performance Active Liquid Crystalline Shutters for Stereo Computer Graphics and Other 3-D Technologies

    NASA Astrophysics Data System (ADS)

    Sergan, Tatiana; Sergan, Vassili; MacNaughton, Boyd

    2007-03-01

    Stereoscopic computer displays create a 3-D image by alternating two separate images for each of the viewer's eyes. Field-sequential viewing systems supply each eye with the appropriate image by blocking the wrong image for the wrong eye. In our work, we have developed a new mode of operation of a liquid crystal shutter that provides for highly effective blockage of undesired images when the screen is viewed in all viewing directions and eliminates color shifts associated with long turn-off times. The goal was achieved by using a π-cell filled with low-rotational-viscosity and high-birefringence fluid and additional negative birefringence films with splay optic axis distribution. The shutter demonstrates a contrast ratio higher than 800:1 for head-on viewing and 10:1 in the viewing cone of about 45°. The relaxation time of the shutter does not exceed 2 ms and is the same for all three primary colors.

  19. Microvision system (MVS): a 3D computer graphic-based microrobot telemanipulation and position feedback by vision

    NASA Astrophysics Data System (ADS)

    Sulzmann, Armin; Breguet, Jean-Marc; Jacot, Jacques

    1995-12-01

    The aim of our project is to control the position of a microrobot in 3D space with submicron accuracy and to manipulate microsystems with the aid of real-time 3D computer graphics (virtual reality). As microsystems and microstructures become smaller, it is necessary to build a microrobot ((mu)-robot) capable of manipulating these systems and structures with a precision of 1 micrometer or better. These movements have to be controlled and guided. The first part of our project was to develop a real-time 3D computer graphics (virtual reality) man-machine interface to guide the newly developed robot, similar to the environments we have built for macroscopic robotics. Secondly, we wanted to evaluate measurement techniques to verify its position in the region of interest (workspace). A new type of microrobot has been developed for this purpose. Its simple and compact design is believed to be of promise in the microrobotics field. Stepping motion allows speeds up to 4 mm/s, and a resolution smaller than 10 nm is achievable. We also focus on the vision system and on the virtual reality interface of the complete system. Basically, the user interacts with the virtual 3D microscope and sees the (mu)-robot as if looking through a real microscope. He is able to simulate the assembly of the missing parts, e.g. parts of a micromotor, beforehand in order to verify the assembly manipulation steps such as measuring, moving the table to the right position, or performing the manipulation. Micromanipulation, a form of teleoperation, is then performed by the robot unit, and the position is controlled by vision. First results have shown that guided manipulations with submicron absolute accuracy can be achieved. The key idea of this approach is to use the intuitiveness of immersed vision to perform robotics tasks in an environment to which the human has only visual access.

  20. Graphical interface for the physics-based generation of inputs to 3D MEEC SGEMP and SREMP simulations

    SciTech Connect

    Bland, M; Wondra, J; Nunan, S; Walters, D

    1998-12-01

    A graphical user interface (GUI) is under development for the MEEC family of SGEMP and SREMP simulation codes. These codes are workhorse legacy codes that have been in use for nearly two decades, with modifications and enhanced physics models added throughout the years. The MEEC codes are currently being evaluated for use by the DOE in the Dual Revalidation program and experiments at NIF. The new GUI makes the codes more accessible and less prone to input errors by automatically generating the parameters and grids that previously had to be designed by hand. Physics-based algorithms define the simulation volume with expanding meshes. Users are able to specify objects, materials, and emission surfaces through dialogs and input boxes. 3D and orthographic views are available to view objects in the volume. Zone slice views are available for stepping through the overlay of objects on the mesh in planes aligned with the primary axes.

  1. Graphical interface for the physics-based generation of inputs to 3D MEEC SGEMP and SREMP simulations

    SciTech Connect

    Bland, M; Walters, D; Wondra, J

    1999-06-01

    A graphical user interface (GUI) is under development for the MEEC family of SGEMP and SREMP simulation codes [1,2]. These codes are ''workhorse'' legacy codes that have been in use for nearly two decades, with modifications and enhanced physics models added throughout the years. The MEEC codes are currently being evaluated for use by the DOE in the Dual Revalidation Program and experiments at NIF. The new GUI makes the codes more accessible and less prone to input errors by automatically generating the parameters and grids that previously had to be designed ''by hand''. Physics-based algorithms define the simulation volume with expanding meshes. Users are able to specify objects, materials, and emission surfaces through dialogs and input boxes. 3D and orthographic views are available to view objects in the volume. Zone slice views are available for stepping through the overlay of objects on the mesh in planes aligned with the primary axes.

  2. Hardware

    NASA Technical Reports Server (NTRS)

    1999-01-01

    The full complement of EDOMP investigations called for a broad spectrum of flight hardware ranging from commercial items, modified for spaceflight, to custom designed hardware made to meet the unique requirements of testing in the space environment. In addition, baseline data collection before and after spaceflight required numerous items of ground-based hardware. Two basic categories of ground-based hardware were used in EDOMP testing before and after flight: (1) hardware used for medical baseline testing and analysis, and (2) flight-like hardware used both for astronaut training and medical testing. To ensure post-landing data collection, hardware was required at both the Kennedy Space Center (KSC) and the Dryden Flight Research Center (DFRC) landing sites. Items that were very large or sensitive to the rigors of shipping were housed permanently at the landing site test facilities. Therefore, multiple sets of hardware were required to adequately support the prime and backup landing sites plus the Johnson Space Center (JSC) laboratories. Development of flight hardware was a major element of the EDOMP. The challenges included obtaining or developing equipment that met the following criteria: (1) compact (small size and light weight), (2) battery-operated or requiring minimal spacecraft power, (3) sturdy enough to survive the rigors of spaceflight, (4) quiet enough to pass acoustics limitations, (5) shielded and filtered adequately to assure electromagnetic compatibility with spacecraft systems, (6) user-friendly in a microgravity environment, and (7) accurate and efficient operation to meet medical investigative requirements.

  3. Simplification of 3D Graphics for Mobile Devices: Exploring the Trade-off Between Energy Savings and User Perceptions of Visual Quality

    NASA Astrophysics Data System (ADS)

    Vatjus-Anttila, Jarkko; Koskela, Timo; Lappalainen, Tuomas; Häkkilä, Jonna

    2017-03-01

    3D graphics have quickly become a popular form of media that can also be accessed with today's mobile devices. However, the use of 3D applications on mobile devices is typically a very energy-consuming task due to the processing complexity and large file size of 3D graphics. As a result, their use may lead to rapid depletion of the limited battery life. In this paper, we investigate how much energy can be saved in the transmission and rendering of 3D graphics by simplifying geometry data. In this connection, we also examine users' perceptions of the visual quality of the simplified 3D models. The results of this paper provide new knowledge on the energy savings that can be gained through geometry simplification, as well as on how much the geometry can be simplified before the visual quality of 3D models becomes unacceptable to mobile users. Based on the results, it can be concluded that geometry simplification can provide significant energy savings for mobile devices without disturbing the users. When geometry simplification was combined with distance-based adjustment of detail, up to 52% energy savings were gained in our experiments compared to using only a single high-quality 3D model.
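    The distance-based adjustment of detail mentioned in this abstract can be illustrated with a minimal sketch. The thresholds and vertex counts below are invented for illustration; they are not the models or distances used in the study.

```python
# Illustrative sketch of distance-based level-of-detail (LOD) selection:
# render a simplified mesh when the object is far from the viewer, saving
# the energy that transmitting and rendering the full mesh would cost.

def select_lod(distance, lods):
    """Pick the simplest model whose distance threshold covers the viewer.

    lods: list of (max_distance, vertex_count) pairs sorted by max_distance.
    Returns the vertex count of the chosen model.
    """
    for max_dist, vertex_count in lods:
        if distance <= max_dist:
            return vertex_count
    return lods[-1][1]  # beyond all thresholds: use the coarsest model

# Hypothetical LOD chain: full model near the camera, simplified far away.
LODS = [(10.0, 50000), (30.0, 12000), (100.0, 2000)]

assert select_lod(5.0, LODS) == 50000   # close: full detail
assert select_lod(50.0, LODS) == 2000   # far: heavily simplified
```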

  4. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    This implementation utilizes Apollo's 3-dimensional graphics hardware, but does not take advantage of the shading and hidden line/surface removal capabilities of the Apollo DN10000. Although this implementation does not offer a capability for putting text on plots, it does support the use of a mouse to translate, rotate, or zoom in on views. The version 3.6b+ Apollo implementations of PLOT3D (ARC-12789) and PLOT3D/TURB3D (ARC-12785) were developed for use on Apollo computers running UNIX System V with BSD 4.3 extensions and the graphics library GMR3D Version 2.0. The standard distribution media for each of these programs is a 9-track, 6250 bpi magnetic tape in TAR format. Customers purchasing one implementation version of PLOT3D or PLOT3D/TURB3D will be given a $200 discount on each additional implementation version ordered at the same time. Version 3.6b+ of PLOT3D and PLOT3D/TURB3D are also supported for the following computers and graphics libraries: 1) generic UNIX Supercomputer and IRIS, suitable for CRAY 2/UNICOS, CONVEX, and Alliant with remote IRIS 2xxx/3xxx or IRIS 4D (ARC-12779, ARC-12784); 2) VAX computers running VMS Version 5.0 and DISSPLA Version 11.0 (ARC-12777, ARC-12781); 3) generic UNIX and DISSPLA Version 11.0 (ARC-12788, ARC-12778); and 4) Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstations (ARC-12783, ARC-12782). Silicon Graphics Iris, IRIS 4D, and IRIS 2xxx/3xxx are trademarks of Silicon Graphics Incorporated. VAX and VMS are trademarks of Digital Equipment Corporation. DISSPLA is a trademark of Computer Associates. CRAY 2 and UNICOS are trademarks of CRAY Research, Incorporated. CONVEX is a trademark of Convex Computer Corporation. Alliant is a trademark of Alliant. Apollo and GMR3D are trademarks of Hewlett-Packard, Incorporated. UNIX is a registered trademark of AT&T.

  5. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    This implementation utilizes Apollo's 3-dimensional graphics hardware, but does not take advantage of the shading and hidden line/surface removal capabilities of the Apollo DN10000. Although this implementation does not offer a capability for putting text on plots, it does support the use of a mouse to translate, rotate, or zoom in on views. The version 3.6b+ Apollo implementations of PLOT3D (ARC-12789) and PLOT3D/TURB3D (ARC-12785) were developed for use on Apollo computers running UNIX System V with BSD 4.3 extensions and the graphics library GMR3D Version 2.0. The standard distribution media for each of these programs is a 9-track, 6250 bpi magnetic tape in TAR format. Customers purchasing one implementation version of PLOT3D or PLOT3D/TURB3D will be given a $200 discount on each additional implementation version ordered at the same time. Version 3.6b+ of PLOT3D and PLOT3D/TURB3D are also supported for the following computers and graphics libraries: 1) generic UNIX Supercomputer and IRIS, suitable for CRAY 2/UNICOS, CONVEX, and Alliant with remote IRIS 2xxx/3xxx or IRIS 4D (ARC-12779, ARC-12784); 2) VAX computers running VMS Version 5.0 and DISSPLA Version 11.0 (ARC-12777, ARC-12781); 3) generic UNIX and DISSPLA Version 11.0 (ARC-12788, ARC-12778); and 4) Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstations (ARC-12783, ARC-12782). Silicon Graphics Iris, IRIS 4D, and IRIS 2xxx/3xxx are trademarks of Silicon Graphics Incorporated. VAX and VMS are trademarks of Digital Equipment Corporation. DISSPLA is a trademark of Computer Associates. CRAY 2 and UNICOS are trademarks of CRAY Research, Incorporated. CONVEX is a trademark of Convex Computer Corporation. Alliant is a trademark of Alliant. Apollo and GMR3D are trademarks of Hewlett-Packard, Incorporated. UNIX is a registered trademark of AT&T.

  6. NATURAL graphics

    NASA Technical Reports Server (NTRS)

    Jones, R. H.

    1984-01-01

    The hardware and software developments in computer graphics are discussed. Major topics include: system capabilities, hardware design, system compatibility, and software interface with the data base management system.

  7. User's Guide for Subroutine PLOT3D. Physical Processes in Terrestrial and Aquatic Ecosystems, Computer Programs and Graphics Capabilities.

    ERIC Educational Resources Information Center

    Gales, Larry

    This module is part of a series designed to be used by life science students for instruction in the application of physical theory to ecosystem operation. Most modules contain computer programs which are built around a particular application of a physical process. PLOT3D is a subroutine package which generates a variety of three dimensional hidden…

  8. Programmer's Guide for Subroutine PLOT3D. Physical Processes in Terrestrial and Aquatic Ecosystems, Computer Programs and Graphics Capabilities.

    ERIC Educational Resources Information Center

    Gales, Larry

    This module is part of a series designed to be used by life science students for instruction in the application of physical theory to ecosystem operation. Most modules contain computer programs which are built around a particular application of a physical process. PLOT3D is a subroutine package which generates a variety of three-dimensional hidden…

  9. User's Guide for Subroutine PRNT3D. Physical Processes in Terrestrial and Aquatic Ecosystems, Computer Programs and Graphics Capabilities.

    ERIC Educational Resources Information Center

    Gales, Larry

    These materials were designed to be used by life science students for instruction in the application of physical theory to ecosystem operation. Most modules contain computer programs which are built around a particular application of a physical process. PRNT3D is a subroutine package which generates a variety of printer plot displays. The displays…

  10. Programmer's Guide for Subroutine PRNT3D. Physical Processes in Terrestrial and Aquatic Ecosystems, Computer Programs and Graphics Capabilities.

    ERIC Educational Resources Information Center

    Gales, Larry

    These materials were designed to be used by life science students for instruction in the application of physical theory to ecosystem operation. Most modules contain computer programs which are built around a particular application of a physical process. PRNT3D is a subroutine package which generates a variety of printed plot displays. The displays…

  11. Natural frequencies and mode shapes of an automotive tire with interpretation and classification using 3-D computer graphics

    NASA Astrophysics Data System (ADS)

    Kung, L. E.; Soedel, W.; Yang, T. Y.; Charek, L. T.

    1985-10-01

    Natural frequencies and mode shapes of a radial tire have been obtained by using an efficient, 12 degree of freedom, doubly curved thin shell finite element of revolution with smeared-out properties of laminate composite materials. The finite element formulation includes the geometrical non-linearities so that the prestressed state of the tire due to inflation is taken into account. While the basic formulation follows that of earlier work done at Purdue University, a general and efficient computational procedure and program have been developed, with a main feature being integration with computer graphics. Thus the complex tire geometry can be modeled more accurately and the free vibration mode shapes can be displayed graphically. This allows an interpretation and classification of mode shapes beyond the classical mode shapes of tires that have been presented in the literature. It allows further insight into the relationship between transverse and tangential motions beyond what has been conceived at the present state of the art of experimentation. Theoretical results are compared with experimental results obtained from modal analysis and good agreement is shown.

  12. The Use of 3D Graphic Modelling in Geoarchaeological Investigations (Bykowszczyzna Archaeological Site near Kock, E Poland)

    NASA Astrophysics Data System (ADS)

    Łojek, Jacek

    2012-01-01

    The objective of this paper was to use the ArcView 3.2 application for spatial modelling of the exploration forms (pits) at the Bykowszczyzna 8 archaeological site. 3D digital documentation at a specific scale makes possible easy archiving, presentation, and simple spatial analyses of the examined objects. The ArcView 3.2 programme and its extensions (Spatial Analyst and 3D Analyst), commonly used as analytical tools in geomorphology, were inventively used for inventory-making at the archaeological site. Traditional field sketches now serve only as a base for entering data into the programme, rather than as documentation material in themselves, as they used to be. The method of data visualization proposed by the author gives new possibilities for using GIS platform software. The site was included in a rescue archaeology programme connected with the construction of the Kock bypass on national road no. 19 (Siemiatycze-Lublin-Nisko). The main stage of the archaeological work at Bykowszczyzna 8 involved recovering and inventorying the artefact material filling the forms. Once this material is removed, characteristic utility pits remain in the site area, constituting a negative image of the fill. The shape of the pits is documented with sketches and photographs; this documentation is the starting point of the digitization process (source material). The paper describes the preparation of digital documentation containing site plans at several levels of detail (for the strip, the field, and individual forms) and the generation of 3D models. Such documentation enables easy archiving and clear presentation of selected objects, and spatial analyses are also possible. The functions of the ArcView 3.2 programme and

  13. Extracellular vesicles of calcifying turkey leg tendon characterized by immunocytochemistry and high voltage electron microscopic tomography and 3-D graphic image reconstruction

    NASA Technical Reports Server (NTRS)

    Landis, W. J.; Hodgens, K. J.; McKee, M. D.; Nanci, A.; Song, M. J.; Kiyonaga, S.; Arena, J.; McEwen, B.

    1992-01-01

    To gain insight into the structure and possible function of extracellular vesicles in certain calcifying vertebrate tissues, normally mineralizing leg tendons from the domestic turkey, Meleagris gallopavo, have been studied in two separate investigations, one concerning the electron microscopic immunolocalization of the 66 kDa phosphoprotein, osteopontin, and the other detailing the organization and distribution of mineral crystals associated with the vesicles as determined by high voltage microscopic tomography and 3-D graphic image reconstruction. Immunolabeling shows that osteopontin is related to extracellular vesicles of the tendon in the sense that its initial presence appears coincident with the development of mineral associated with the vesicle loci. By high voltage electron microscopy and 3-D imaging techniques, mineral crystals are found to consist of small irregularly shaped particles somewhat randomly oriented throughout individual vesicles sites. Their appearance is different from that found for the mineral observed within calcifying tendon collagen, and their 3-D disposition is not regularly ordered. Possible spatial and temporal relationships of vesicles, osteopontin, mineral, and collagen are being examined further by these approaches.

  14. Graphics

    ERIC Educational Resources Information Center

    Post, Susan

    1975-01-01

    An art teacher described an elective course in graphics which was designed to enlarge a student's knowledge of value, color, shape within a shape, transparency, line and texture. This course utilized the technique of working a multi-colored print from a single block that was first introduced by Picasso. (Author/RK)

  15. ProteinVista: a fast molecular visualization system using Microsoft Direct3D.

    PubMed

    Park, Chan-Yong; Park, Sung-Hee; Park, Soo-Jun; Park, Sun-Hee; Hwang, Chi-Jung

    2008-09-01

    Many tools have been developed to visualize protein and molecular structures. Most high-quality protein visualization tools use the OpenGL graphics library as their 3D graphics system. The performance of 3D graphics hardware has recently improved rapidly, and recent high-performance 3D graphics hardware supports the Microsoft Direct3D graphics library better than OpenGL; Direct3D has become very popular on personal computers (PCs). In this paper, a molecular visualization system termed ProteinVista is proposed. ProteinVista is a well-designed visualization system built on the Microsoft Direct3D graphics library. It provides various visualization styles, such as the wireframe, stick, ball-and-stick, space-fill, ribbon, and surface model styles, in addition to display options for 3D visualization. Because ProteinVista is optimized for recent 3D graphics hardware platforms and uses a geometry instancing technique, its rendering speed is 2.7 times faster than that of other visualization tools.
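    The geometry instancing technique credited here for ProteinVista's speedup can be sketched in a language-neutral way: upload one shared sphere mesh once, pack a compact per-atom array of positions and radii, and issue a single instanced draw call instead of one draw call per atom. The function and data layout below are a hypothetical illustration of that idea, not the actual ProteinVista or Direct3D API.

```python
# Sketch of preparing a per-instance buffer for instanced rendering of a
# space-fill model: one sphere mesh is reused for every atom, and only a
# small (x, y, z, radius) record is stored per instance.

def build_instance_buffer(atoms, radii):
    """Pack per-atom data into the flat array a GPU instance buffer expects.

    atoms: list of (element, x, y, z) tuples.
    radii: mapping from element symbol to sphere radius (van der Waals).
    """
    buffer = []
    for element, x, y, z in atoms:
        # Unknown elements fall back to a default radius of 1.5 (assumed).
        buffer.append((x, y, z, radii.get(element, 1.5)))
    return buffer

# Toy two-atom "molecule"; a single instanced draw call would then render
# len(instances) spheres from one shared mesh.
atoms = [("C", 0.0, 0.0, 0.0), ("O", 1.2, 0.0, 0.0)]
radii = {"C": 1.7, "O": 1.52}
instances = build_instance_buffer(atoms, radii)
assert len(instances) == 2 and instances[1][3] == 1.52
```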

  16. A computer-controlled near-field electrospinning setup and its graphic user interface for precision patterning of functional nanofibers on 2D and 3D substrates.

    PubMed

    Bisht, Gobind; Nesterenko, Sergiy; Kulinsky, Lawrence; Madou, Marc

    2012-08-01

    Electrospinning is a versatile technique for the production of nanofibers. However, it lacks the precision and control necessary for the fabrication of nanofiber-based devices. The positional control of nanofiber placement can be dramatically improved using low-voltage near-field electrospinning (LV-NFES), which allows nanofibers to be patterned on 2D and 3D substrates. However, NFES requires a low working distance between the electrospinning nozzle and the substrate, manual jet initiation, and precise substrate movement to control fiber deposition. Environmental factors such as humidity also need to be controlled. We developed a computer-controlled automation strategy for LV-NFES to improve performance and reliability. With this setup, the user is able to control the relevant sensor and actuator parameters through a custom graphic user interface application programmed on the C#.NET platform. The stage movement can be programmed so as to achieve any desired nanofiber pattern and thickness. The nanofiber generation step is initiated through a software-controlled linear actuator. Parameter-setting files can be saved to an Excel sheet and reused in subsequent experiments. Each experiment is automatically video recorded and stamped with the pertinent real-time parameters. Humidity is controlled with ±3% accuracy through a feedback loop. Further improvements, such as real-time droplet size control for feed rate regulation, are in progress.
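    The ±3% humidity feedback loop mentioned in this abstract can be sketched as a simple on/off controller with a deadband; the authors do not specify their actual control scheme, so the logic and thresholds below are an assumed illustration.

```python
# Minimal sketch of a deadband (bang-bang) humidity controller: turn the
# humidifier on below the band, off above it, and leave it unchanged
# inside the +/-3% relative-humidity window.

def humidifier_command(measured_rh, target_rh, band=3.0):
    """Return 'on', 'off', or 'hold' for the humidifier actuator."""
    if measured_rh < target_rh - band:
        return "on"    # too dry: add moisture
    if measured_rh > target_rh + band:
        return "off"   # too humid: stop adding moisture
    return "hold"      # within the band: avoid actuator chatter

assert humidifier_command(40.0, 45.0) == "on"
assert humidifier_command(49.0, 45.0) == "off"
assert humidifier_command(45.5, 45.0) == "hold"
```

The deadband is what keeps the actuator from switching on every sensor reading; a tighter accuracy target would shrink `band` at the cost of more frequent switching.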

  17. Robot graphic simulation testbed

    NASA Technical Reports Server (NTRS)

    Cook, George E.; Sztipanovits, Janos; Biegl, Csaba; Karsai, Gabor; Springfield, James F.

    1991-01-01

    The objective of this research was twofold. First, the basic capabilities of ROBOSIM (graphical simulation system) were improved and extended by taking advantage of advanced graphic workstation technology and artificial intelligence programming techniques. Second, the scope of the graphic simulation testbed was extended to include general problems of Space Station automation. Hardware support for 3-D graphics and high processing performance make high resolution solid modeling, collision detection, and simulation of structural dynamics computationally feasible. The Space Station is a complex system with many interacting subsystems. Design and testing of automation concepts demand modeling of the affected processes, their interactions, and that of the proposed control systems. The automation testbed was designed to facilitate studies in Space Station automation concepts.

  18. Techniques for interactive 3-D scientific visualization

    SciTech Connect

    Glinert, E. P.; Blattner, M. M.; Becker, B. G.

    1990-09-24

    Interest in interactive 3-D graphics has exploded of late, fueled by (a) the allure of using scientific visualization to "go where no-one has gone before" and (b) the development of new input devices which overcome some of the limitations imposed in the past by technology, yet which may be ill-suited to the kinds of interaction required by researchers active in scientific visualization. To resolve this tension, we propose a "flat 5-D" environment in which 2-D graphics are augmented by exploiting multiple human sensory modalities using cheap, conventional hardware readily available with personal computers and workstations. We discuss how interactions basic to 3-D scientific visualization, like searching a solution space and comparing two such spaces, are effectively carried out in our environment. Finally, we describe 3DMOVE, an experimental microworld we have implemented to test out some of our ideas. 40 refs., 4 figs.

  19. Scalable large format 3D displays

    NASA Astrophysics Data System (ADS)

    Chang, Nelson L.; Damera-Venkata, Niranjan

    2010-02-01

    We present a general framework for the modeling and optimization of scalable large format 3-D displays using multiple projectors. Based on this framework, we derive algorithms that can robustly optimize the visual quality of an arbitrary combination of projectors (e.g. tiled, superimposed, combinations of the two) without manual adjustment. The framework creates for the first time a new unified paradigm that is agnostic to a particular configuration of projectors yet robustly optimizes for the brightness, contrast, and resolution of that configuration. In addition, we demonstrate that our algorithms support high resolution stereoscopic video at real-time interactive frame rates achieved on commodity graphics hardware. Through complementary polarization, the framework creates high quality multi-projector 3-D displays at low hardware and operational cost for a variety of applications including digital cinema, visualization, and command-and-control walls.

  20. A Linux Workstation for High Performance Graphics

    NASA Technical Reports Server (NTRS)

    Geist, Robert; Westall, James

    2000-01-01

    The primary goal of this effort was to provide a low-cost method of obtaining high-performance 3-D graphics using an industry-standard library (OpenGL) on PC-class computers. Previously, users interested in doing substantial visualization or graphical manipulation were constrained to using specialized, custom hardware most often found in computers from Silicon Graphics (SGI). We provided an alternative to expensive SGI hardware by taking advantage of third-party 3-D graphics accelerators that have now become available at very affordable prices. To make use of this hardware, our goal was to provide a free, redistributable, and fully compatible OpenGL work-alike library so that existing bodies of code could simply be recompiled for PC-class machines running a free version of Unix. This should allow substantial cost savings while greatly expanding the population of people with access to a serious graphics development and viewing environment. This should offer a means for NASA to provide a spectrum of graphics performance to its scientists, supplying high-end specialized SGI hardware for high-performance visualization while fulfilling the requirements of medium- and lower-performance applications with generic, off-the-shelf components, and still maintaining compatibility between the two.

  1. Met.3D - a new open-source tool for interactive 3D visualization of ensemble weather forecasts

    NASA Astrophysics Data System (ADS)

    Rautenhaus, Marc; Kern, Michael; Schäfler, Andreas; Westermann, Rüdiger

    2015-04-01

    We introduce Met.3D, a new open-source tool for the interactive 3D visualization of numerical ensemble weather predictions. The tool has been developed to support weather forecasting during aircraft-based atmospheric field campaigns, but it is also applicable to other forecasting, research, and teaching activities. Our work approaches challenging topics related to the visual analysis of numerical atmospheric model output: 3D visualization, ensemble visualization, and how both can be used in a meaningful way suited to weather forecasting. Met.3D builds a bridge from proven 2D visualization methods commonly used in meteorology to 3D visualization by combining both visualization types in a 3D context. It implements methods that address the issue of spatial perception in the 3D view, as well as approaches to using the ensemble to assess forecast uncertainty. Interactivity is key to the Met.3D approach. The tool uses modern graphics hardware technology to achieve interactive visualization of present-day numerical weather prediction datasets on standard consumer hardware. Met.3D supports forecast data from the European Centre for Medium-Range Weather Forecasts (ECMWF) and operates directly on ECMWF hybrid sigma-pressure level grids. In this presentation, we provide an overview of the software, illustrated with short video examples, and give information on its availability.
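    The hybrid sigma-pressure coordinate mentioned in this abstract defines the pressure of model level k from fixed coefficients a_k, b_k and the surface pressure p_s as p_k = a_k + b_k * p_s, so levels follow the terrain near the surface and become pure pressure levels aloft. The three-level coefficient set below is a toy illustration, not the actual ECMWF level tables.

```python
# Sketch of evaluating pressure on hybrid sigma-pressure model levels:
# p_k = a_k + b_k * p_s, with a in Pa and b dimensionless.

def hybrid_level_pressure(a, b, surface_pressure):
    """Pressure (Pa) at each hybrid level for a given surface pressure."""
    return [a_k + b_k * surface_pressure for a_k, b_k in zip(a, b)]

# Toy 3-level example: pure-pressure top (b=0), sigma-like bottom (a=0).
a = [5000.0, 2000.0, 0.0]   # Pa
b = [0.0, 0.5, 1.0]         # dimensionless

p = hybrid_level_pressure(a, b, surface_pressure=100000.0)
assert p == [5000.0, 52000.0, 100000.0]
```

The bottom level tracks the surface pressure exactly (b=1), which is why tools operating on these grids, like Met.3D, must recompute level pressures per grid column rather than assuming fixed pressure surfaces.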

  2. 3D Visualization Development of SIUE Campus

    NASA Astrophysics Data System (ADS)

    Nellutla, Shravya

    Geographic Information Systems (GIS) have progressed from traditional map-making to a modern technology in which information can be created, edited, managed, and analyzed. Like any other model, a map is a simplified representation of the real world, so visualization plays an essential role in GIS applications. The use of sophisticated visualization tools and methods, especially three-dimensional (3D) modeling, has been rising considerably due to the advancement of technology. There are currently many off-the-shelf technologies available in the market to build 3D GIS models. One of the objectives of this research was to examine the available ArcGIS suite and its extensions for 3D modeling and visualization and to use them to depict a real-world scenario. Furthermore, with the advent of the web, a platform for accessing and sharing spatial information on the Internet, it is possible to generate interactive online maps. Integrating Internet capacity with GIS functionality redefines the process of sharing and processing spatial information. Enabling a 3D map online requires off-the-shelf GIS software, 3D model builders, web servers, web applications, and client-server technologies. Such environments are either complicated or expensive because of the amount of hardware and software involved. Therefore, the second objective of this research was to investigate and develop a simpler yet cost-effective 3D modeling approach that uses available ArcGIS suite products and free 3D computer graphics software for designing 3D world scenes. Both ArcGIS Explorer and ArcGIS Online are used to demonstrate ways of sharing and distributing 3D geographic information on the Internet. A case study of the development of a 3D campus for Southern Illinois University Edwardsville is presented.

  3. 3D Ta/TaO x /TiO2/Ti synaptic array and linearity tuning of weight update for hardware neural network applications

    NASA Astrophysics Data System (ADS)

    Wang, I.-Ting; Chang, Chih-Cheng; Chiu, Li-Wen; Chou, Teyuh; Hou, Tuo-Hung

    2016-09-01

    The implementation of highly anticipated hardware neural networks (HNNs) hinges largely on the successful development of a low-power, high-density, and reliable analog electronic synaptic array. In this study, we demonstrate a two-layer Ta/TaO x /TiO2/Ti cross-point synaptic array that emulates the high-density three-dimensional network architecture of human brains. Excellent uniformity and reproducibility among intralayer and interlayer cells were realized. Moreover, at least 50 analog synaptic weight states could be precisely controlled with minimal drifting during a cycling endurance test of 5000 training pulses at an operating voltage of 3 V. We also propose a new state-independent bipolar-pulse-training scheme to improve the linearity of weight updates. The improved linearity considerably enhances the fault tolerance of HNNs, thus improving the training accuracy.
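    The linearity issue that motivates the state-independent training scheme in this abstract can be sketched with a generic device model (not the authors' equations): in many analog synapses the conductance change per pulse shrinks as the device approaches its maximum weight, so the weight ramp saturates instead of staying linear. The parameters below are illustrative.

```python
# Generic sketch of analog synaptic weight updates under repeated
# potentiation pulses. nonlinearity=0 models the ideal state-independent
# (linear) update the paper aims for; nonlinearity>0 models a device whose
# per-pulse change depends on its current state.

def apply_pulses(w, n_pulses, w_max=1.0, nonlinearity=0.0, step=0.02):
    """Apply potentiation pulses and return the final weight."""
    for _ in range(n_pulses):
        # State-dependent factor: the update shrinks as w approaches w_max.
        w += step * (1.0 - nonlinearity * (w / w_max))
        w = min(w, w_max)
    return w

linear = apply_pulses(0.0, 25)                         # ideal linear ramp
saturating = apply_pulses(0.0, 25, nonlinearity=0.9)   # realistic device
assert abs(linear - 0.5) < 1e-9
assert saturating < linear  # nonlinear device lags the ideal ramp
```

A training algorithm that assumes a fixed weight change per pulse mis-programs the saturating device more and more as weights grow, which is why improved update linearity translates into higher HNN training accuracy.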

  4. Computer graphics and the graphic artist

    NASA Technical Reports Server (NTRS)

    Taylor, N. L.; Fedors, E. G.; Pinelli, T. E.

    1985-01-01

    A centralized computer graphics system is being developed at the NASA Langley Research Center. This system was required to satisfy multiuser needs, ranging from presentation quality graphics prepared by a graphic artist to 16-mm movie simulations generated by engineers and scientists. While the major thrust of the central graphics system was directed toward engineering and scientific applications, hardware and software capabilities to support the graphic artists were integrated into the design. This paper briefly discusses the importance of computer graphics in research; the central graphics system in terms of systems, software, and hardware requirements; the application of computer graphics to graphic arts, discussed in terms of the requirements for a graphic arts workstation; and the problems encountered in applying computer graphics to the graphic arts. The paper concludes by presenting the status of the central graphics system.

  5. Application of innovative rendering techniques for the hardware-in-the-loop (HIL) scene generation

    NASA Astrophysics Data System (ADS)

    Bergin, Thomas P.

    2003-09-01

    A revolution is underway within commercial PC video graphics, driven mainly by the 3-D gaming community and its demands for customizable lighting effects and realistic, visually appealing, 3-D rendering. This revolution is bringing about a configurable transformation and lighting (T&L) engine within modern PC video graphics hardware. The results of these technological advancements will profoundly impact the way computer-based rendering is done. Although PC graphics hardware continues to change rapidly, it has evolved to the point where it can be made to address most of the Hardware-In-the-Loop (HWIL) scene generation demands which historically could be accomplished only on costly graphics workstations. With the ability to control how operations are performed within the hardware rendering process, it is possible to implement customized per-pixel spatial and lighting effects. To illustrate how these capabilities can be applied to solve certain HWIL scene generation problems, a graphics hardware approach will be implemented to demonstrate a method of achieving increased monochrome intensity resolution and a user-defined spatial distortion. There is great potential in modern graphics hardware. The limits are becoming less a function of the hardware capabilities and more a function of the ability of engineers and scientists to exploit the functionality of this rapidly advancing hardware rendering technology.

  6. Evaluating the Effectiveness of Waterside Security Alternatives for Force Protection of Navy Ships and Installations Using X3D Graphics and Agent-Based Simulation

    DTIC Science & Technology

    2006-09-01

    Flux Studio 2.0 (formerly VizX3D) screen captures illustrate the X3D scenario models, including a close-up of a female terrorist. The USS Cole attack in Aden Harbor, Yemen on October 12, 2000 (CRS 2001) was a primary motivation for Harney's work.

  7. Unique digital imagery interface between a silicon graphics computer and the kinetic kill vehicle hardware-in-the-loop simulator (KHILS) wideband infrared scene projector (WISP)

    NASA Astrophysics Data System (ADS)

    Erickson, Ricky A.; Moren, Stephen E.; Skalka, Marion S.

    1998-07-01

    Providing a flexible and reliable source of IR target imagery is absolutely essential for operation of an IR Scene Projector in a hardware-in-the-loop simulation environment. The Kinetic Kill Vehicle Hardware-in-the-Loop Simulator (KHILS) at Eglin AFB provides the capability, and requisite interfaces, to supply target IR imagery to its Wideband IR Scene Projector (WISP) from three separate sources at frame rates ranging from 30 - 120 Hz. Video can be input from a VCR source at the conventional 30 Hz frame rate. Pre-canned digital imagery and test patterns can be downloaded into stored memory from the host processor and played back as individual still frames or movie sequences up to a 120 Hz frame rate. Dynamic real-time imagery to the KHILS WISP projector system, at a 120 Hz frame rate, can be provided from a Silicon Graphics Onyx computer system normally used for generation of digital IR imagery through a custom CSA-built interface which is available for either the SGI/DVP or SGI/DD02 interface port. The primary focus of this paper is to describe our technical approach and experience in the development of this unique SGI computer and WISP projector interface.

  8. Open-GL-based stereo system for 3D measurements

    NASA Astrophysics Data System (ADS)

    Boochs, Frank; Gehrhoff, Anja; Neifer, Markus

    2000-05-01

    A stereo system designed and used for the measurement of 3D coordinates within metric stereo image pairs will be presented. First, the motivation for the development is shown: the need to evaluate stereo images. As the use and availability of digital metric images rapidly increases, corresponding equipment for the measuring process is needed. Systems developed up to now are either very specialized ones, built on high-end graphics workstations with correspondingly high prices, or simple ones with restricted measuring functionality. A new conception will be shown, avoiding special high-end graphics hardware but providing the measuring functionality required. The presented stereo system is based on PC hardware equipped with a graphics board and uses an object-oriented programming technique. The specific needs of a measuring system are shown, along with the corresponding requirements the system has to meet. The key role of OpenGL is described: it supplies some elementary graphics functions that are directly supported by graphics boards and thus provides the performance needed. Further important aspects, such as modularity and hardware independence, and their value for the solution are shown. Finally, some sample functions concerned with image display and handling are presented in more detail.

  9. [3D emulation of epicardium dynamic mapping].

    PubMed

    Lu, Jun; Yang, Cui-Wei; Fang, Zu-Xiang

    2005-03-01

    In order to realize epicardium dynamic mapping of the whole atria, 3-D graphics are drawn with OpenGL. Some source code is presented in the paper to explain how to produce, read, and manipulate 3-D model data.
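
    The data handling the paper describes can be sketched outside OpenGL itself. Below is a hypothetical Python/NumPy sketch (not the authors' code) that produces a simple tubular stand-in for an atrial surface and maps per-vertex activation times to a blue-to-red color ramp, i.e. the vertex and color arrays one would then hand to OpenGL for rendering.

```python
import numpy as np

def make_ring_mesh(n_theta=16, n_z=8, radius=1.0, height=2.0):
    """Build a simple open cylinder as a stand-in for an atrial surface:
    vertices on an (n_z x n_theta) grid, quads split into triangles."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    z = np.linspace(0.0, height, n_z)
    tt, zz = np.meshgrid(theta, z)                    # shape (n_z, n_theta)
    verts = np.column_stack([radius * np.cos(tt).ravel(),
                             radius * np.sin(tt).ravel(),
                             zz.ravel()])             # (n_z*n_theta, 3)
    tris = []
    for i in range(n_z - 1):
        for j in range(n_theta):
            a = i * n_theta + j                       # quad corners
            b = i * n_theta + (j + 1) % n_theta
            c = a + n_theta
            d = b + n_theta
            tris.append((a, b, c))
            tris.append((b, d, c))
    return verts, np.array(tris)

def activation_to_rgb(t, t_min, t_max):
    """Map activation times to a blue->red ramp, one RGB triple per
    vertex, ready to upload as an OpenGL per-vertex color array."""
    s = np.clip((t - t_min) / (t_max - t_min), 0.0, 1.0)
    return np.column_stack([s, np.zeros_like(s), 1.0 - s])

verts, tris = make_ring_mesh()
times = verts[:, 2]                  # pretend activation sweeps along z
colors = activation_to_rgb(times, times.min(), times.max())
```

    In a real mapping display, `times` would come from measured electrograms rather than geometry, and the arrays would be re-uploaded each animation frame.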

  10. PLOT3D user's manual

    NASA Technical Reports Server (NTRS)

    Walatka, Pamela P.; Buning, Pieter G.; Pierce, Larry; Elson, Patricia A.

    1990-01-01

    PLOT3D is a computer graphics program designed to visualize the grids and solutions of computational fluid dynamics. Seventy-four functions are available. Versions are available for many systems. PLOT3D can handle multiple grids with a million or more grid points, and can produce a variety of model renderings, such as wireframe or flat shaded. Output from PLOT3D can be used in animation programs. The first part of this manual is a tutorial that takes the reader, keystroke by keystroke, through a PLOT3D session. The second part of the manual contains reference chapters, including the helpfile, data file formats, advice on changing PLOT3D, and sample command files.
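
    For illustration, here is a hedged Python sketch of reading the simplest PLOT3D grid layout: a single grid in whole format, ASCII, with one record holding ni nj nk followed by all x, then all y, then all z values. Real PLOT3D files also come in multi-grid and unformatted/binary variants, which this sketch ignores.

```python
import io
import numpy as np

def read_plot3d_ascii(stream):
    """Read a single-grid, whole-format ASCII PLOT3D grid:
    first record ni nj nk, then all x, all y, all z values."""
    tokens = stream.read().split()
    ni, nj, nk = (int(t) for t in tokens[:3])
    npts = ni * nj * nk
    data = np.array([float(t) for t in tokens[3:3 + 3 * npts]])
    # PLOT3D stores coordinates with i varying fastest (Fortran order)
    x = data[0 * npts:1 * npts].reshape((nk, nj, ni))
    y = data[1 * npts:2 * npts].reshape((nk, nj, ni))
    z = data[2 * npts:3 * npts].reshape((nk, nj, ni))
    return x, y, z

# A tiny 2x2x1 grid written inline for illustration
sample = "2 2 1\n" \
         "0 1 0 1\n" \
         "0 0 1 1\n" \
         "0 0 0 0\n"
x, y, z = read_plot3d_ascii(io.StringIO(sample))
```

    A production reader would also handle the leading grid count of multi-grid files and the record markers of Fortran unformatted output.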

  11. The Diagnostic Radiological Utilization Of 3-D Display Images

    NASA Astrophysics Data System (ADS)

    Cook, Larry T.; Dwyer, Samuel J.; Preston, David F.; Batnitzky, Solomon; Lee, Kyo R.

    1984-10-01

    In the practice of radiology, computer graphics systems have become an integral part of the use of computed tomography (CT), nuclear medicine (NM), magnetic resonance imaging (MRI), digital subtraction angiography (DSA) and ultrasound. Gray scale computerized display systems are used to display, manipulate, and record scans in all of these modalities. As the use of these imaging systems has spread, various applications involving digital image manipulation have also been widely accepted in the radiological community. We discuss one of the more esoteric of such applications, namely, the reconstruction of 3-D structures from plane section data, such as CT scans. Our technique is based on the acquisition of contour data from successive sections, the definition of the implicit surface defined by such contours, and the application of the appropriate computer graphics hardware and software to present reasonably pleasing pictures.
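
    The contour-stitching step can be illustrated with a small Python sketch, assuming two successive contours sampled with the same number of points: each quad between corresponding points on adjacent slices is split into two triangles. Real CT contours additionally require point correspondence and branching handling, which are omitted here.

```python
import numpy as np

def ring(z, radius, n=12):
    """A closed contour from one CT slice, sampled at n points."""
    a = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return np.column_stack([radius * np.cos(a), radius * np.sin(a),
                            np.full(n, float(z))])

def stitch(c0, c1):
    """Tile the band between two contours with equal point counts:
    each quad (j, j+1) spanning both rings becomes two triangles."""
    n = len(c0)
    tris = []
    for j in range(n):
        jn = (j + 1) % n
        tris.append((c0[j], c0[jn], c1[j]))
        tris.append((c0[jn], c1[jn], c1[j]))
    return tris

slice0 = ring(z=0.0, radius=10.0)
slice1 = ring(z=1.5, radius=9.0)      # next section, slightly narrower
triangles = stitch(slice0, slice1)
```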

  12. Virtual environment display for a 3D audio room simulation

    NASA Technical Reports Server (NTRS)

    Chapin, William L.; Foster, Scott H.

    1992-01-01

    The development of a virtual environment simulation system integrating a 3D acoustic audio model with an immersive 3D visual scene is discussed. The system complements the acoustic model and is specified to: allow the listener to freely move about the space, a room of manipulable size, shape, and audio character, while interactively relocating the sound sources; reinforce the listener's feeling of telepresence in the acoustical environment with visual and proprioceptive sensations; enhance the audio with the graphic and interactive components, rather than overwhelm or reduce it; and serve as a research testbed and technology transfer demonstration. The hardware/software design of two demonstration systems, one installed and one portable, are discussed through the development of four iterative configurations.

  13. CHARGE Interactive Graphics System Terminal: Theory of Operation. Technical Report 74-26.

    ERIC Educational Resources Information Center

    Swallow, Ronald J.

    The CHARGE computer terminal can provide graphics display for many applications in color, gray-level, 3-D, perspectives, and rapid updating. Perspective views can be generated from a three-dimensional coordinate system which changes to match actual physical descriptions. Image encoding and hardware design are described from a theoretical and…

  14. A 3D Geostatistical Mapping Tool

    SciTech Connect

    Weiss, W. W.; Stevenson, Graig; Patel, Ketan; Wang, Jun

    1999-02-09

    This software provides accurate 3D reservoir modeling tools and high quality 3D graphics for PC platforms enabling engineers and geologists to better comprehend reservoirs and consequently improve their decisions. The mapping algorithms are fractals, kriging, sequential Gaussian simulation, and three nearest-neighbor methods.
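
    As a flavor of the simpler end of those algorithms, here is a hedged Python sketch of inverse-distance-weighted k-nearest-neighbor estimation; the well positions and porosity values below are made up, and the package's actual nearest-neighbor variants may differ.

```python
import numpy as np

def knn_interpolate(xy_known, values, xy_query, k=3, eps=1e-12):
    """Estimate values at query points from the k nearest known points,
    weighting each neighbor by inverse distance."""
    xy_known = np.asarray(xy_known, float)
    values = np.asarray(values, float)
    out = np.empty(len(xy_query))
    for i, q in enumerate(np.asarray(xy_query, float)):
        d = np.linalg.norm(xy_known - q, axis=1)  # distances to all wells
        idx = np.argsort(d)[:k]                   # k nearest neighbors
        w = 1.0 / (d[idx] + eps)                  # inverse-distance weights
        out[i] = np.dot(w, values[idx]) / w.sum()
    return out

# Four wells at the corners of a unit square, query at the center
wells = [(0, 0), (1, 0), (0, 1), (1, 1)]
porosity = [0.10, 0.20, 0.20, 0.30]
est = knn_interpolate(wells, porosity, [(0.5, 0.5)], k=4)
```

    At the center all four wells are equidistant, so the estimate reduces to the plain mean of the four values.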

  15. The National Shipbuilding Research Program, Proceedings of the REAPS Technical Symposium Paper No. 25: Computer Graphics Hardware and Application in Shipbuilding

    DTIC Science & Technology

    1976-06-01

    Autofit may be split into two categories (see fig. 1): 1. Direct output in connection with application programs. 2. Editing and presentation of with the...draftsman. The idea is, shall be used with Autokon and Autofit, both as a freestanding system and in connection with direct output from application...database. In this version of the will be implemented. 3. The third project with a graphics approach, will be the Autofit subsystem for preparation of

  16. Computational challenges of emerging novel true 3D holographic displays

    NASA Astrophysics Data System (ADS)

    Cameron, Colin D.; Pain, Douglas A.; Stanley, Maurice; Slinger, Christopher W.

    2000-11-01

    A hologram can produce all the 3D depth cues that the human visual system uses to interpret and perceive real 3D objects. As such it is arguably the ultimate display technology. Computer generated holography, in which a computer calculates a hologram that is then displayed using a highly complex modulator, combines the ultimate qualities of a traditional hologram with the dynamic capabilities of a computer display producing a true 3D real image floating in space. This technology is set to emerge over the next decade, potentially revolutionizing application areas such as virtual prototyping (CAD-CAM, CAID etc.), tactical information displays, data visualization and simulation. In this paper we focus on the computational challenges of this technology. We consider different classes of computational algorithms from true computer-generated holograms (CGH) to holographic stereograms. Each has different characteristics in terms of image qualities, computational resources required, total CGH information content, and system performance. Possible trade-offs will be discussed including reducing the parallax. The software and hardware architectures used to implement the CGH algorithms have many possible forms. Different schemes, from high performance computing architectures to graphics-based cluster architectures will be discussed and compared. Assessment will be made of current and future trends looking forward to a practical dynamic CGH based 3D display.
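
    The brute-force end of the CGH algorithm spectrum can be sketched in a few lines of Python/NumPy: each object point contributes a spherical wave to the hologram plane, and interference with a reference wave yields the recordable fringe pattern. The wavelength, pixel pitch, and object points below are illustrative assumptions; practical CGH computation uses far larger planes and the smarter algorithms the paper compares.

```python
import numpy as np

wavelength = 633e-9                      # HeNe red, metres (assumed)
k = 2.0 * np.pi / wavelength

# Hologram plane: small N x N patch of 10-micron pixels at z = 0
N, pitch = 64, 10e-6
xs = (np.arange(N) - N / 2) * pitch
X, Y = np.meshgrid(xs, xs)

# A few object points floating in front of the plane (x, y, z, amplitude)
points = [(0.0, 0.0, 5e-3, 1.0),
          (2e-4, -1e-4, 6e-3, 0.7)]

field = np.zeros((N, N), dtype=complex)
for (px, py, pz, amp) in points:
    r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
    field += amp * np.exp(1j * k * r) / r    # spherical wave from the point

reference = 1.0                              # on-axis plane reference wave
intensity = np.abs(field + reference) ** 2   # recordable fringe pattern
```

    The cost scales with (object points) x (hologram pixels), which is exactly why the paper weighs full CGH against cheaper stereogram approximations.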

  17. Graphics mini manual

    NASA Technical Reports Server (NTRS)

    Taylor, Nancy L.; Randall, Donald P.; Bowen, John T.; Johnson, Mary M.; Roland, Vincent R.; Matthews, Christine G.; Gates, Raymond L.; Skeens, Kristi M.; Nolf, Scott R.; Hammond, Dana P.

    1990-01-01

    The computer graphics capabilities available at the Center are introduced and their use is explained. More specifically, the manual identifies and describes the various graphics software and hardware components, details the interfaces between these components, and provides information concerning the use of these components at LaRC.

  18. An interactive multiview 3D display system

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoxing; Geng, Zheng; Zhang, Mei; Dong, Hui

    2013-03-01

    Progress in 3D display systems and user interaction technologies enables more effective visualization of 3D information, yielding a realistic representation of 3D objects and simplifying our understanding of their complexity and of the spatial relationships among them. In this paper, we describe an autostereoscopic multiview 3D display system with real-time user interaction capability. The design principle of this autostereoscopic multiview 3D display system is presented, together with the details of its hardware/software architecture. A prototype is built and tested based upon multiple projectors and a horizontal optically anisotropic display structure. Experimental results illustrate the effectiveness of this novel 3D display and user interaction system.

  19. Parameterized hardware description as object oriented hardware model implementation

    NASA Astrophysics Data System (ADS)

    Drabik, Pawel K.

    2010-09-01

    The paper introduces a novel model for the design, visualization, and management of complex, highly adaptive hardware systems. The model establishes a component-oriented environment for both hardware modules and software applications, and builds on parameterized hardware description research. The establishment of a stable link between hardware and software, the purpose of the designed and realized work, is presented, along with a novel programming framework model for the environment, named Graphic-Functional-Components. The purpose of the paper is to present object-oriented hardware modeling with the mentioned features. A possible model implementation in FPGA chips and its management by object-oriented software in Java is described.

  20. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

    The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is realising this for the moment in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We currently have a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrating 3D objects in physical or virtual museums. Nevertheless, Europeana has a tremendous potential as a multi-faceted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video, sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  1. Accelerating a 3D finite-difference wave propagation code by a factor of 50 and a spectral-element code by a factor of 25 using a cluster of GPU graphics cards

    NASA Astrophysics Data System (ADS)

    Komatitsch, Dimitri; Michéa, David; Erlebacher, Gordon; Göddeke, Dominik

    2010-05-01

    We first accelerate a three-dimensional finite-difference in the time domain (FDTD) wave propagation code by a factor of about 50 using Graphics Processing Unit (GPU) computing on a cheap NVIDIA graphics card with the CUDA programming language. We implement the code in CUDA in the case of the fully heterogeneous elastic wave equation. We also implement Convolution Perfectly Matched Layers (CPMLs) on the graphics card to efficiently absorb outgoing waves on the fictitious edges of the grid. We show that the code that runs on the graphics card gives the expected results by comparing our results to those obtained by running the same simulation on a classical processor core. The methodology that we present can be used for Maxwell's equations as well because their form is similar to that of the seismic wave equation written in velocity vector and stress tensor. We then implement a high-order finite-element (spectral-element) application, which performs the numerical simulation of seismic wave propagation resulting for instance from earthquakes at the scale of a continent or from active seismic acquisition experiments in the oil industry, on a cluster of NVIDIA Tesla graphics cards using the CUDA programming language and non-blocking message passing based on MPI. We compare it to the implementation in C language and MPI on a classical cluster of CPU nodes. We use mesh coloring to efficiently handle summation operations over degrees of freedom on an unstructured mesh, and we exchange information between nodes using non-blocking MPI messages. Using non-blocking communications allows us to overlap the communications across the network and the data transfer between the GPU card and the CPU node on which it is installed with calculations on that GPU card. We perform a number of numerical tests to validate the single-precision CUDA and MPI implementation and assess its accuracy. We then analyze performance measurements, and on average we obtain a speedup of 20x to 25x.
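
    As a reference for the kind of stencil kernel being ported, here is a hedged 1-D velocity-stress FDTD sketch in Python/NumPy; the real code is 3-D, heterogeneous, and includes CPML absorbing layers, and the material values and source below are illustrative. Each of the two array updates corresponds to one CUDA kernel launch per time step.

```python
import numpy as np

def fdtd_1d(nx=200, nt=300, dx=5.0, dt=5e-4, rho=2000.0, mu=1.8e10):
    """1-D velocity-stress elastic FDTD on a staggered grid: the same
    kind of stencil the paper maps onto CUDA threads. With these values
    the CFL number is sqrt(mu/rho)*dt/dx = 0.3, so the scheme is stable."""
    v = np.zeros(nx)            # particle velocity at integer nodes
    s = np.zeros(nx - 1)        # stress on the staggered half-nodes
    src = nx // 2
    for it in range(nt):
        # update velocity from the stress gradient (one CUDA kernel)
        v[1:-1] += (dt / (rho * dx)) * (s[1:] - s[:-1])
        v[src] += np.exp(-((it - 50) / 10.0) ** 2)   # Gaussian source
        # update stress from the velocity gradient (a second kernel)
        s += (dt * mu / dx) * (v[1:] - v[:-1])
    return v, s

v, s = fdtd_1d()
```

    On a GPU, each array element maps to one thread, and halo exchange of the boundary values is what the non-blocking MPI messages overlap with computation.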

  2. On 3D Dimension: Study cases for Archaeological sites

    NASA Astrophysics Data System (ADS)

    D'Urso, M. G.; Marino, C. L.; Rotondi, A.

    2014-04-01

    For more than a century three-dimensional vision has been of interest to scientists and users in several fields of application. The mathematical bases have remained substantially unchanged, but new technologies have made the vision truly impressive. Photography opens new frontiers, enriched by notions from physics, mathematics, chemistry, informatics, and topography that make the images so real that the observer is fully immersed in the represented scene. By means of active goggles, the 3D digital technique commonly used for video games makes animations possible without limitations on the dimension of the images, thanks to the improved performance of graphics processing units and related hardware components. In this paper we illustrate an experience made by the students of the MSc degree course in Topography, active at the University of Cassino and Southern Lazio, in which photography has been applied as an innovative technique for the surveying of cultural heritage. The tests foresee the use of traditional survey techniques together with 3D digital images and GPS sensors. The ultimate objective of our experience is insertion in the web, allowing the visualization of the 3D images together with all their data. In conclusion, these new survey methods allow the fusion of extremely different techniques, in such an impressive way as to make them inseparable, justifying the origin of the neologism "Geomatics", coined at Laval University (Canada) during the eighties.

  3. FastScript3D - A Companion to Java 3D

    NASA Technical Reports Server (NTRS)

    Koenig, Patti

    2005-01-01

    FastScript3D is a computer program, written in the Java 3D(TM) programming language, that establishes an alternative language that helps users who lack expertise in Java 3D to use Java 3D for constructing three-dimensional (3D)-appearing graphics. The FastScript3D language provides a set of simple, intuitive, one-line text-string commands for creating, controlling, and animating 3D models. The first word in a string is the name of a command; the rest of the string contains the data arguments for the command. The commands can also be used as an aid to learning Java 3D. Developers can extend the language by adding custom text-string commands. The commands can define new 3D objects or load representations of 3D objects from files in formats compatible with such other software systems as X3D. The text strings can be easily integrated into other languages. FastScript3D facilitates communication between scripting languages [which enable programming of hyper-text markup language (HTML) documents to interact with users] and Java 3D. The FastScript3D language can be extended and customized on both the scripting side and the Java 3D side.
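
    The dispatch pattern the abstract describes (first word names the command, the rest of the string carries the data arguments) can be sketched in Python. The command names and handlers below are illustrative inventions, not the actual FastScript3D vocabulary.

```python
class MiniScript3D:
    """Toy dispatcher in the FastScript3D style: one-line text commands,
    first word names the command, the rest are data arguments.
    Command names here are illustrative, not the real FastScript3D set."""

    def __init__(self):
        self.objects = {}
        self.handlers = {"sphere": self._sphere, "move": self._move}

    def execute(self, line):
        cmd, *args = line.split()
        if cmd not in self.handlers:
            raise ValueError(f"unknown command: {cmd}")
        self.handlers[cmd](args)

    def _sphere(self, args):            # sphere <name> <radius>
        name, radius = args[0], float(args[1])
        self.objects[name] = {"radius": radius, "pos": [0.0, 0.0, 0.0]}

    def _move(self, args):              # move <name> <x> <y> <z>
        name, coords = args[0], [float(a) for a in args[1:4]]
        self.objects[name]["pos"] = coords

engine = MiniScript3D()
engine.execute("sphere ball 2.5")
engine.execute("move ball 1 0 -3")
```

    Because each command is a plain text string, a scripting language in an HTML document can assemble and send these lines without knowing anything about the underlying 3D scene graph, which is the bridge the abstract describes.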

  4. RT3D tutorials for GMS users

    SciTech Connect

    Clement, T.P.; Jones, N.L.

    1998-02-01

    RT3D (Reactive Transport in 3-Dimensions) is a computer code that solves coupled partial differential equations that describe reactive-flow and transport of multiple mobile and/or immobile species in a three dimensional saturated porous media. RT3D was developed from the single-species transport code, MT3D (DoD-1.5, 1997 version). As with MT3D, RT3D also uses the USGS groundwater flow model MODFLOW for computing spatial and temporal variations in groundwater head distribution. This report presents a set of tutorial problems that are designed to illustrate how RT3D simulations can be performed within the Department of Defense Groundwater Modeling System (GMS). GMS serves as a pre- and post-processing interface for RT3D. GMS can be used to define all the input files needed by RT3D code, and later the code can be launched from within GMS and run as a separate application. Once the RT3D simulation is completed, the solution can be imported to GMS for graphical post-processing. RT3D v1.0 supports several reaction packages that can be used for simulating different types of reactive contaminants. Each of the tutorials, described below, provides training on a different RT3D reaction package. Each reaction package has different input requirements, and the tutorials are designed to describe these differences. Furthermore, the tutorials illustrate the various options available in GMS for graphical post-processing of RT3D results. Users are strongly encouraged to complete the tutorials before attempting to use RT3D and GMS on a routine basis.

  5. Implementation of Headtracking and 3D Stereo with Unity and VRPN for Computer Simulations

    NASA Technical Reports Server (NTRS)

    Noyes, Matthew A.

    2013-01-01

    This paper explores low-cost hardware and software methods to provide depth cues traditionally absent in monocular displays. The use of a VRPN server in conjunction with a Microsoft Kinect and/or Nintendo Wiimote to provide head tracking information to a Unity application, and NVIDIA 3D Vision for retinal disparity support, is discussed. Methods are suggested to implement this technology with NASA's EDGE simulation graphics package, along with potential caveats. Finally, future applications of this technology to astronaut crew training, particularly when combined with an omnidirectional treadmill for virtual locomotion and NASA's ARGOS system for reduced gravity simulation, are discussed.

  6. Integration of real-time 3D image acquisition and multiview 3D display

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Li, Wei; Wang, Jingyi; Liu, Yongchun

    2014-03-01

    Seamless integration of 3D acquisition and 3D display systems offers enhanced experience in 3D visualization of real-world objects or scenes. The vivid representation of captured 3D objects displayed on a glasses-free 3D display screen could bring a realistic viewing experience to viewers, as if they were viewing a real-world scene. Although the technologies in 3D acquisition and 3D display have advanced rapidly in recent years, effort is lacking in studying the seamless integration of these two different aspects of 3D technologies. In this paper, we describe our recent progress on integrating a light-field 3D acquisition system and an autostereoscopic multiview 3D display for real-time light field capture and display. This paper focuses on both the architecture design and the implementation of the hardware and the software of this integrated 3D system. A prototype of the integrated 3D system is built to demonstrate the real-time 3D acquisition and 3D display capability of our proposed system.

  7. 3d-3d correspondence revisited

    DOE PAGES

    Chung, Hee -Joong; Dimofte, Tudor; Gukov, Sergei; ...

    2016-04-21

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d N = 2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. As a result, we also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  8. EarthServer - 3D Visualization on the Web

    NASA Astrophysics Data System (ADS)

    Wagner, Sebastian; Herzig, Pasquale; Bockholt, Ulrich; Jung, Yvonne; Behr, Johannes

    2013-04-01

    EarthServer (www.earthserver.eu), funded by the European Commission under its Seventh Framework Program, is a project to enable the management, access and exploration of massive, multi-dimensional datasets using Open GeoSpatial Consortium (OGC) query and processing language standards like WCS 2.0 and WCPS. To this end, a server/client architecture designed to handle Petabyte/Exabyte volumes of multi-dimensional data is being developed and deployed. As an important part of the EarthServer project, six Lighthouse Applications, major scientific data exploitation initiatives, are being established to make cross-domain, Earth Sciences related data repositories available in an open and unified manner, as service endpoints based on solutions and infrastructure developed within the project. Client technology developed and deployed in EarthServer ranges from mobile and web clients to immersive virtual reality systems, all designed to interact with a physically and logically distributed server infrastructure using exclusively OGC standards. In this contribution, we would like to present our work on a web-based 3D visualization and interaction client for Earth Sciences data using only technology found in standard web browsers without requiring the user to install plugins or addons. Additionally, we are able to run the earth data visualization client on a wide range of different platforms with very different software and hardware requirements such as smart phones (e.g. iOS, Android), different desktop systems etc. High-quality, hardware-accelerated visualization of 3D and 4D content in standard web browsers can be realized now and we believe it will become more and more common to use this fast, lightweight and ubiquitous platform to provide insights into big datasets without requiring the user to set up a specialized client first. With that in mind, we will also point out some of the limitations we encountered using current web technologies.

  9. From 3D view to 3D print

    NASA Astrophysics Data System (ADS)

    Dima, M.; Farisato, G.; Bergomi, M.; Viotto, V.; Magrin, D.; Greggio, D.; Farinato, J.; Marafatto, L.; Ragazzoni, R.; Piazza, D.

    2014-08-01

    In the last few years 3D printing has become more and more popular and is used in many fields, from manufacturing to industrial design, architecture, medical support, and aerospace. 3D printing is an evolution of bi-dimensional printing that makes it possible to obtain a solid object from a 3D model realized with 3D modelling software. The final product is obtained by an additive process in which successive layers of material are laid down one over the other. A 3D printer makes it simple to realize very complex shapes that would be quite difficult to produce with dedicated conventional facilities. Because the 3D print is obtained by superposing one layer on the others, no particular workflow is needed: it is sufficient to draw the model and send it to print. Many different kinds of 3D printers exist, based on the technology and material used for layer deposition. A common material is ABS plastic, a light and rigid thermoplastic polymer whose mechanical properties make it widely used in several fields, such as pipe production and car interior manufacturing. I used this technology to create a 1:1 scale model of the telescope that is the hardware core of the small ESA space mission CHEOPS (CHaracterising ExOPlanets Satellite), which aims to characterize exoplanets via transit observations. The telescope has a Ritchey-Chrétien configuration with a 30 cm aperture, and the launch is foreseen in 2017. In this paper, I present the different phases of the realization of such a model, focusing on the pros and cons of this kind of technology. For example, because of the finite printable volume (10×10×12 inches in the x, y and z directions respectively), it has been necessary to split the largest parts of the instrument into smaller components to be then reassembled and post-processed. A further issue is the resolution of the printed material, which is expressed in terms of layers

  10. SlicerAstro: Astronomy (HI) extension for 3D Slicer

    NASA Astrophysics Data System (ADS)

    Punzo, Davide; van der Hulst, Thijs; Roerdink, Jos; Fillion-Robin, Jean-Christophe

    2016-11-01

    SlicerAstro extends 3D Slicer, a multi-platform package for visualization and medical image processing, to provide a 3-D interactive viewer with 3-D human-machine interaction features, based on traditional 2-D input/output hardware, and analysis capabilities.

  11. 3D and Education

    NASA Astrophysics Data System (ADS)

    Meulien Ohlmann, Odile

    2013-02-01

    Today the industry offers a chain of 3D products. Learning to "read" and to "create in 3D" becomes an educational issue of primary importance. Drawing on 25 years of professional experience in France, the United States, and Germany, Odile Meulien has set up a personal method of initiation to 3D creation that entails the spatial/temporal experience of the holographic visual. She will present some of the different tools and techniques used for this learning, their advantages and disadvantages, programs and issues of educational policy, and constraints and expectations related to the development of new techniques for 3D imaging. Although the creation of display holograms is much reduced compared with the creation of the nineties, the holographic concept is spreading in all scientific, social, and artistic activities of our time. She will also raise many questions: What does 3D mean? Is it communication? Is it perception? How do seeing and not seeing interfere? What else has to be taken into consideration to communicate in 3D? How do we handle the non-visible relations of moving objects with subjects? Does this transform our model of exchange with others? What kind of interaction does this have with our everyday life? Then come more practical questions: How does one learn to create 3D visualizations, to learn 3D grammar, 3D language, 3D thinking? What for? At what level? In which subject matter? For whom?

  12. TACO3D. 3-D Finite Element Heat Transfer Code

    SciTech Connect

    Mason, W.E.

    1992-03-04

    TACO3D is a three-dimensional, finite-element program for heat transfer analysis. An extension of the two-dimensional TACO program, it can perform linear and nonlinear analyses and can be used to solve either transient or steady-state problems. The program accepts time-dependent or temperature-dependent material properties, and materials may be isotropic or orthotropic. A variety of time-dependent and temperature-dependent boundary conditions and loadings are available including temperature, flux, convection, and radiation boundary conditions and internal heat generation. Additional specialized features treat enclosure radiation, bulk nodes, and master/slave internal surface conditions (e.g., contact resistance). Data input via a free-field format is provided. A user subprogram feature allows for any type of functional representation of any independent variable. A profile (bandwidth) minimization option is available. The code is limited to implicit time integration for transient solutions. TACO3D has no general mesh generation capability. Rows of evenly-spaced nodes and rows of sequential elements may be generated, but the program relies on separate mesh generators for complex zoning. TACO3D does not have the ability to calculate view factors internally. Graphical representation of data in the form of time history and spatial plots is provided through links to the POSTACO and GRAPE postprocessor codes.
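    The abstract notes that TACO3D's transient solutions use implicit time integration. As a rough illustration of why implicit stepping is attractive for heat transfer (it stays stable even for large time steps), here is a minimal backward-Euler sketch for 1-D conduction; this is illustrative only, not TACO3D code, and all names are hypothetical:

    ```python
    import numpy as np

    def implicit_heat_step(u, alpha, dx, dt):
        """One backward-Euler step of 1-D heat conduction u_t = alpha*u_xx
        with fixed-temperature (Dirichlet) ends, as in a transient run."""
        n = len(u)
        r = alpha * dt / dx**2
        A = np.zeros((n, n))
        A[0, 0] = A[-1, -1] = 1.0          # Dirichlet boundary rows
        for i in range(1, n - 1):
            A[i, i - 1] = -r
            A[i, i] = 1.0 + 2.0 * r
            A[i, i + 1] = -r
        return np.linalg.solve(A, u)       # stable regardless of dt

    # Rod initially hot in the middle, ends held at 0
    u = np.zeros(11)
    u[5] = 100.0
    for _ in range(50):
        u = implicit_heat_step(u, alpha=1.0, dx=0.1, dt=0.01)
    ```

    Here r = 1.0, well beyond the explicit-scheme stability limit of 0.5, yet the solution decays smoothly; an explicit scheme would have to take much smaller steps.
    
    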

  13. Animation graphic interface for the space shuttle onboard computer

    NASA Technical Reports Server (NTRS)

    Wike, Jeffrey; Griffith, Paul

    1989-01-01

    Graphics interfaces designed to operate on space qualified hardware challenge software designers to display complex information under processing power and physical size constraints. Under contract to Johnson Space Center, MICROEXPERT Systems is currently constructing an intelligent interface for the LASER DOCKING SENSOR (LDS) flight experiment. Part of this interface is a graphic animation display for Rendezvous and Proximity Operations. The displays have been designed in consultation with Shuttle astronauts. The displays show multiple views of a satellite relative to the shuttle, coupled with numeric attitude information. The graphics are generated using position data received by the Shuttle Payload and General Support Computer (PGSC) from the Laser Docking Sensor. Some of the design considerations include crew member preferences in graphic data representation, single versus multiple window displays, mission tailoring of graphic displays, realistic 3D images versus generic icon representations of real objects, the physical relationship of the observers to the graphic display, how numeric or textual information should interface with graphic data, in what frame of reference objects should be portrayed, recognizing conditions of display information-overload, and screen format and placement consistency.

  14. A modern approach to storing of 3D geometry of objects in machine engineering industry

    NASA Astrophysics Data System (ADS)

    Sokolova, E. A.; Aslanov, G. A.; Sokolov, A. A.

    2017-02-01

    3D graphics is a branch of computer graphics that has absorbed much from vector and raster computer graphics. It is used in interior design projects, architectural projects, advertising, educational computer programs, movies, visual images of parts and products in engineering, and more. 3D computer graphics allows one to create 3D scenes along with simulated lighting conditions and configurable viewpoints.

  15. PLOT3D/AMES, DEC VAX VMS VERSION USING DISSPLA (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into
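    The solution ("q") file described above stores density, the three momentum components, and stagnation energy at each grid point; functions such as surface pressure are derived from these. A minimal sketch of that derivation for a perfect gas (illustrative only; PLOT3D itself works with nondimensional variables, and the function name here is hypothetical):

    ```python
    GAMMA = 1.4  # ratio of specific heats for air

    def pressure_from_q(rho, mx, my, mz, e):
        """Derive static pressure from the five conserved quantities a
        PLOT3D solution file stores per grid point: density, the three
        momentum components, and stagnation energy per unit volume."""
        kinetic = 0.5 * (mx**2 + my**2 + mz**2) / rho
        return (GAMMA - 1.0) * (e - kinetic)

    # A single grid point of still air at unit density and unit pressure:
    # at rest, e = p / (gamma - 1), so we should recover p = 1.
    p = pressure_from_q(rho=1.0, mx=0.0, my=0.0, mz=0.0,
                        e=1.0 / (GAMMA - 1.0))
    ```

    Applied over a whole grid (e.g. with NumPy arrays in place of scalars), the same formula yields a pressure field ready for contouring.
    
    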

  16. PLOT3D/AMES, DEC VAX VMS VERSION USING DISSPLA (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P. G.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  17. Description of graphics translation software between Intergraph and Tektronix systems

    NASA Technical Reports Server (NTRS)

    Rieckhoff, Tom; Hixson, Jeff; Covan, Mark

    1988-01-01

    Marshall Space Flight Center's Photo Analysis group needed to use existing 3-D Intergraph graphics files on an existing Tektronix 4129 3-D graphics workstation, and no off-the-shelf Intergraph-to-Tektronix translator was available, so such a translator had to be developed. Using the output of Intergraph's standard interchange format converter, the 3-D graphics information in Intergraph's files is reformatted and compressed. The 3-D image is then reconstructed using Tektronix's software terminal interface graphics library (STI).

  18. Optical 3D surface digitizing in forensic medicine: 3D documentation of skin and bone injuries.

    PubMed

    Thali, Michael J; Braun, Marcel; Dirnhofer, Richard

    2003-11-26

    The photographic process reduces a three-dimensional (3D) wound to two dimensions. If a high-resolution 3D dataset of an object is needed, the object must be scanned three-dimensionally. Non-contact optical 3D surface scanners can be used as a powerful tool for the analysis of wounds and injury-causing instruments in trauma cases. The documentation of a 3D skin wound and a bone injury using the optical scanner Advanced TOpometric Sensor (ATOS II, GOM International, Switzerland) is demonstrated in two illustrative cases. Using this optical 3D digitizing method, the wounds (the virtual 3D computer models of the skin and bone injuries) and the virtual 3D model of the injury-causing tool are documented graphically in 3D at real-life size and shape and can be rotated on screen in a CAD program. In addition, the virtual 3D models of the bone injuries and of the tool can be compared against one another in virtual space within a 3D CAD program, to see whether there are matching areas. Further steps in forensic medicine will be a full 3D surface documentation of the human body and of all forensically relevant injuries using optical 3D scanners.

  19. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  20. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  1. 3-D Mesh Generation Nonlinear Systems

    SciTech Connect

    Christon, M. A.; Dovey, D.; Stillman, D. W.; Hallquist, J. O.; Rainsberger, R. B

    1994-04-07

    INGRID is a general-purpose, three-dimensional mesh generator developed for use with finite element, nonlinear, structural dynamics codes. INGRID generates the large and complex input data files for DYNA3D, NIKE3D, FACET, and TOPAZ3D. One of the greatest advantages of INGRID is that virtually any shape can be described without resorting to wedge elements, tetrahedrons, triangular elements or highly distorted quadrilateral or hexahedral elements. Other capabilities available are in the areas of geometry and graphics. Exact surface equations and surface intersections considerably improve the ability to deal with accurate models, and a hidden line graphics algorithm is included which is efficient on the most complicated meshes. The primary new capability is associated with the boundary conditions, loads, and material properties required by nonlinear mechanics programs. Commands have been designed for each case to minimize user effort. This is particularly important since special processing is almost always required for each load or boundary condition.

  2. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3D imaging for libraries and museums. (LRW)

  3. PLOT3D/AMES, GENERIC UNIX VERSION USING DISSPLA (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  4. PLOT3D/AMES, GENERIC UNIX VERSION USING DISSPLA (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  5. Positional Awareness Map 3D (PAM3D)

    NASA Technical Reports Server (NTRS)

    Hoffman, Monica; Allen, Earl L.; Yount, John W.; Norcross, April Louise

    2012-01-01

    The Western Aeronautical Test Range of the National Aeronautics and Space Administration's Dryden Flight Research Center needed to address the aging software and hardware of its current situational awareness display application, the Global Real-Time Interactive Map (GRIM). GRIM was initially developed in the late 1980s and executes on older PC architectures using a Linux operating system that is no longer supported. Additionally, the software is difficult to maintain due to its complexity and loss of developer knowledge. It was decided that a replacement application must be developed or acquired in the near future. The replacement must provide the functionality of the original system, the ability to monitor test flight vehicles in real time, and improvements such as high-resolution imagery and true 3-dimensional capability. This paper will discuss the process of determining the best approach to replace GRIM, and the functionality and capabilities of the first release of the Positional Awareness Map 3D.

  6. PLOT3D/AMES, SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  7. PLOT3D/AMES, SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  8. Current and future graphics requirements for LaRC and proposed future graphics system

    NASA Technical Reports Server (NTRS)

    Taylor, N. L.; Bowen, J. T.; Randall, D. P.; Gates, R. L.

    1984-01-01

    The findings of an investigation to assess the current and future graphics requirements of the LaRC researchers with respect to both hardware and software are presented. A graphics system designed to meet these requirements is proposed.

  9. GPU-accelerated 3D mipmap for real-time visualization of ultrasound volume data.

    PubMed

    Kwon, Koojoo; Lee, Eun-Seok; Shin, Byeong-Seok

    2013-10-01

    Ultrasound volume rendering is an efficient method for visualizing the shape of fetuses in obstetrics and gynecology. However, in order to obtain high-quality ultrasound volume rendering, noise removal and coordinate conversion are essential prerequisites. Ultrasound data needs to undergo a noise filtering process; otherwise, artifacts and speckle noise cause quality degradation in the final images. Several two-dimensional (2D) noise filtering methods have been used to reduce this noise. However, these 2D filtering methods ignore relevant information between adjacent 2D-scanned images. Three-dimensional (3D) noise filtering methods exist, but they require more processing time than 2D-based methods. In addition, the sampling position in the ultrasound volume rendering process has to be transformed between conical ultrasound coordinates and Cartesian coordinates. We propose a 3D-mipmap-based noise reduction method that uses graphics hardware, since a typical 3D mipmap requires little generation time and storage capacity. In our method, we compare the density values of corresponding points on consecutive mipmap levels and locate noisy areas using the difference in density values. We also provide a noise detector that adaptively selects the mipmap level using the difference between two mipmap levels. Our method can visualize 3D ultrasound data in real time with 3D noise filtering.
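    The level-difference test described above can be sketched in a few lines of NumPy: build the next mipmap level by 2×2×2 averaging, upsample it back, and flag voxels that differ strongly from their coarse-level value. This is a simplified CPU illustration of the idea, not the authors' GPU implementation, and the threshold value is arbitrary:

    ```python
    import numpy as np

    def downsample(vol):
        """One mipmap level: average each 2x2x2 block of the volume."""
        return vol.reshape(vol.shape[0] // 2, 2, vol.shape[1] // 2, 2,
                           vol.shape[2] // 2, 2).mean(axis=(1, 3, 5))

    def noise_mask(vol, threshold):
        """Flag voxels whose density differs strongly from the next
        mipmap level -- the level-difference test of the abstract."""
        coarse = downsample(vol)
        # nearest-neighbour upsample back to full resolution
        up = coarse.repeat(2, 0).repeat(2, 1).repeat(2, 2)
        return np.abs(vol - up) > threshold

    vol = np.zeros((8, 8, 8))
    vol[3, 3, 3] = 1.0                  # a single speckle-like outlier
    mask = noise_mask(vol, threshold=0.5)
    ```

    The isolated bright voxel deviates from its 2×2×2 block average by 0.875 and is flagged, while its smooth neighbours deviate by only 0.125 and pass.
    
    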

  10. ShowMe3D

    SciTech Connect

    Sinclair, Michael B

    2012-01-05

    ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral images obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three-dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image. The user can select any pixel or region from the displayed image and display the fluorescence spectrum associated with that pixel or region. The user can define up to 3 spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as the average intensity and variance of selected regions.
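    Applying a spectral filter to a hyperspectral stack, as ShowMe3D's filter feature does, amounts to a transmission-weighted sum over the spectral axis. A minimal NumPy sketch of that operation (hypothetical data and function name; not ShowMe3D code):

    ```python
    import numpy as np

    def apply_spectral_filter(cube, transmission):
        """Simulate a filter-based confocal view of a hyperspectral stack:
        weight each wavelength plane by the filter's transmission and sum.
        cube: (rows, cols, n_channels); transmission: (n_channels,)."""
        return np.tensordot(cube, transmission, axes=([2], [0]))

    # Tiny synthetic cube: 4x4 pixels, 8 spectral channels
    rng = np.random.default_rng(0)
    cube = rng.random((4, 4, 8))
    bandpass = np.zeros(8)
    bandpass[2:5] = 1.0                 # ideal filter passing channels 2-4
    img = apply_spectral_filter(cube, bandpass)
    ```

    With an ideal bandpass the result is simply the sum of the passed channels; a real filter curve would use fractional transmission values instead of 0/1 weights.
    
    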

  11. AE3D

    SciTech Connect

    Spong, Donald A

    2016-06-20

    AE3D solves for the shear Alfven eigenmodes and eigenfrequencies in a toroidal magnetic fusion confinement device. The configuration can be either 2D (e.g. tokamak, reversed field pinch) or 3D (e.g. stellarator, helical reversed field pinch, tokamak with ripple). The equations solved are based on a reduced MHD model, and sound wave coupling effects are not currently included.

  12. Modeling cellular processes in 3D.

    PubMed

    Mogilner, Alex; Odde, David

    2011-12-01

    Recent advances in photonic imaging and fluorescent protein technology offer unprecedented views of molecular space-time dynamics in living cells. At the same time, advances in computing hardware and software enable modeling of ever more complex systems, from global climate to cell division. As modeling and experiment become more closely integrated we must address the issue of modeling cellular processes in 3D. Here, we highlight recent advances related to 3D modeling in cell biology. While some processes require full 3D analysis, we suggest that others are more naturally described in 2D or 1D. Keeping the dimensionality as low as possible reduces computational time and makes models more intuitively comprehensible; however, the ability to test full 3D models will build greater confidence in models generally and remains an important emerging area of cell biological modeling.

  13. An aerial 3D printing test mission

    NASA Astrophysics Data System (ADS)

    Hirsch, Michael; McGuire, Thomas; Parsons, Michael; Leake, Skye; Straub, Jeremy

    2016-05-01

    This paper provides an overview of an aerial 3D printing technology, its development, and its testing. This technology is potentially useful in its own right. In addition, this work advances the development of a related in-space 3D printing technology. A series of aerial 3D printing test missions, used to test the aerial printing technology, are discussed. Through completing these test missions, the design for an in-space 3D printer may be advanced. The current design for the in-space 3D printer involves focusing thermal energy to heat an extrusion head and allow for the extrusion of molten print material. Plastics can be used, as well as composites including metal, allowing for the extrusion of conductive material. A variety of experiments will be used to test this initial 3D printer design. The effects of microgravity on 3D printing will be tested with high-altitude balloons as well as parabolic flights. Zero-pressure balloons can be used to test the effect of long 3D printing missions subjected to low temperatures. Vacuum chambers will be used to test 3D printing in a vacuum environment. The results will be used to adapt a current prototype of an in-space 3D printer. Then, a small-scale prototype can be sent into low-Earth orbit as a 3U CubeSat. With the ability to 3D print in space demonstrated, future missions can launch production hardware, greatly improving the sustainability and durability of structures in space.

  14. 3D structured illumination microscopy

    NASA Astrophysics Data System (ADS)

    Dougherty, William M.; Goodwin, Paul C.

    2011-03-01

    Three-dimensional structured illumination microscopy achieves double the lateral and axial resolution of wide-field microscopy, using conventional fluorescent dyes, proteins and sample preparation techniques. A three-dimensional interference-fringe pattern excites the fluorescence, filling in the "missing cone" of the wide field optical transfer function, thereby enabling axial (z) discrimination. The pattern acts as a spatial carrier frequency that mixes with the higher spatial frequency components of the image, which usually succumb to the diffraction limit. The fluorescence image encodes the high frequency content as a down-mixed, moiré-like pattern. A series of images is required, wherein the 3D pattern is shifted and rotated, providing down-mixed data for a system of linear equations. Super-resolution is obtained by solving these equations. The speed with which the image series can be obtained can be a problem for the microscopy of living cells. Challenges include pattern-switching speeds, optical efficiency, wavefront quality and fringe contrast, fringe pitch optimization, and polarization issues. We will review some recent developments in 3D-SIM hardware with the goal of super-resolved z-stacks of motile cells.
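The per-pixel linear solve at the heart of structured-illumination reconstruction can be illustrated with a 1D toy model (a sketch, not the paper's method: there is no blur here, so the solve recovers the density exactly; the pattern frequency and phases are arbitrary choices):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 256, endpoint=False)
f = np.exp(-((x - 0.5) ** 2) / 0.01)            # underlying fluorophore density
p = 2 * np.pi * 10                              # illumination pattern frequency
phases = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])  # three pattern shifts

# Raw images: density modulated by the shifted sinusoidal illumination
raw = np.array([f * (1 + np.cos(p * x + ph)) for ph in phases])

# Each raw image mixes three components:
#   c0 = f,  c1 = (f/2) e^{+ipx},  c2 = (f/2) e^{-ipx}
# with per-image weights [1, e^{i phi_k}, e^{-i phi_k}]; three phases give
# a solvable 3x3 system at every pixel.
M = np.array([[1.0, np.exp(1j * ph), np.exp(-1j * ph)] for ph in phases])
components = np.linalg.solve(M, raw.astype(complex))

f_rec = components[0].real                      # un-mixed density estimate
```

In the real 3D-SIM case the same system is solved in the frequency domain, where the shifted components carry the super-resolution information back inside the optical transfer function's support.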

  15. The 3D visualization technology research of submarine pipeline based Horde3D GameEngine

    NASA Astrophysics Data System (ADS)

    Yao, Guanghui; Ma, Xiushui; Chen, Genlang; Ye, Lingjian

    2013-10-01

    With the development of 3D display and virtual reality technology, their applications are becoming more and more widespread. This paper applies 3D display technology to the monitoring of submarine pipelines. Using the Horde3D graphics rendering engine and the foundation database "submarine pipeline and relative landforms landscape synthesis database", we reconstruct the submarine pipeline and its surrounding submarine terrain in the computer, so as to display a virtual-reality scene of the pipeline together with the relevant data collected from its monitoring.

  16. 3-D Seismic Interpretation

    NASA Astrophysics Data System (ADS)

    Moore, Gregory F.

    2009-05-01

    This volume is a brief introduction aimed at those who wish to gain a basic and relatively quick understanding of the interpretation of three-dimensional (3-D) seismic reflection data. The book is well written, clearly illustrated, and easy to follow. Enough elementary mathematics is presented for a basic understanding of seismic methods, but more complex mathematical derivations are avoided. References are listed for readers interested in more advanced explanations. After a brief introduction, the book logically begins with a succinct chapter on modern 3-D seismic data acquisition and processing. Standard 3-D acquisition methods are presented, and an appendix expands on more recent acquisition techniques, such as multiple-azimuth and wide-azimuth acquisition. Although this chapter covers the basics of standard time processing quite well, there is only a single sentence about prestack depth imaging, and anisotropic processing is not mentioned at all, even though both techniques are now becoming standard.

  17. Radiochromic 3D Detectors

    NASA Astrophysics Data System (ADS)

    Oldham, Mark

    2015-01-01

    Radiochromic materials exhibit a colour change when exposed to ionising radiation. Radiochromic film has been used for clinical dosimetry for many years and increasingly so recently, as films of higher sensitivities have become available. The two principal advantages of radiochromic dosimetry are greater tissue equivalence (radiologically) and the lack of any requirement for development of the colour change. In a radiochromic material, the colour change arises directly from ionising interactions affecting dye molecules, without requiring any latent chemical, optical or thermal development, with important implications for increased accuracy and convenience. It is only relatively recently, however, that 3D radiochromic dosimetry has become possible. In this article we review recent developments and the current state-of-the-art of 3D radiochromic dosimetry, and its potential to offer a more comprehensive solution for the verification of complex radiation therapy treatments, and for 3D dose measurement in general.

  18. [Graphic reconstruction of anatomic surfaces].

    PubMed

    Ciobanu, O

    2004-01-01

    The paper deals with the graphic reconstruction of anatomic surfaces in a virtual 3D setting. Scanning technologies and software provide great flexibility in the digitization of surfaces, together with high resolution and accuracy. An inexpensive alternative method for the reconstruction of 3D anatomic surfaces is presented in connection with studies and international projects developed by the Medical Design research team.

  19. Optically rewritable 3D liquid crystal displays.

    PubMed

    Sun, J; Srivastava, A K; Zhang, W; Wang, L; Chigrinov, V G; Kwok, H S

    2014-11-01

    Optically rewritable liquid crystal display (ORWLCD) is a concept based on the optically addressed bi-stable display, which does not need any power to hold an image after it has been uploaded. Recently, demand for 3D image displays has increased enormously. Several attempts have been made to achieve 3D images on the ORWLCD, but all of them involve high complexity in image processing at both the hardware and software levels. In this Letter, we disclose a concept for the 3D-ORWLCD in which the given image is divided into three parts with different optic axes. A quarter-wave plate is placed on top of the ORWLCD to modify the light emerging from the different domains of the image in different ways. Thereafter, Polaroid glasses can be used to visualize the 3D image. The 3D image can be refreshed on the 3D-ORWLCD in one step with a proper ORWLCD printer and image processing; therefore, with easy image refreshing and good image quality, such displays can serve many applications, viz. 3D bi-stable displays, security elements, etc.

  20. Bootstrapping 3D fermions

    DOE PAGES

    Iliesiu, Luca; Kos, Filip; Poland, David; ...

    2016-03-17

    We study the conformal bootstrap for a 4-point function of fermions ⟨ψψψψ⟩ in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ × ψ OPE, and also on the central charge CT. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. Finally, we also speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.

  1. Bootstrapping 3D fermions

    SciTech Connect

    Iliesiu, Luca; Kos, Filip; Poland, David; Pufu, Silviu S.; Simmons-Duffin, David; Yacoby, Ran

    2016-03-17

    We study the conformal bootstrap for a 4-point function of fermions ⟨ψψψψ⟩ in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ × ψ OPE, and also on the central charge CT. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. Finally, we also speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.

  2. Debris Dispersion Model Using Java 3D

    NASA Technical Reports Server (NTRS)

    Thirumalainambi, Rajkumar; Bardina, Jorge

    2004-01-01

    This paper describes a web-based simulation of Shuttle launch operations and debris dispersion. Java 3D graphics provides the geometric and visual content, with suitable mathematical models and behaviors of the Shuttle launch. Because the model is so heterogeneous and interrelated with various factors, 3D graphics combined with physical models provides mechanisms to understand the complexity of launch and range operations. The main focus of the modeling and simulation covers orbital dynamics and range safety. Range safety areas include destruct limit lines, telemetry and tracking, and population risk near the range. Debris dispersion in the event of a Shuttle explosion during launch is also modeled. The Shuttle launch and range operations discussed in this paper are based on operations at Kennedy Space Center, Florida, USA.
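The ballistic ingredient of such a debris-dispersion model can be sketched with a drag-free toy calculation (all numbers below are invented, and real models add drag, winds, and fragment properties):

```python
import numpy as np

# Toy debris footprint: fragments released at a breakup altitude with
# random velocities fall under gravity alone (no drag, flat Earth).
rng = np.random.default_rng(0)
g = 9.81                           # m/s^2
h0 = 10000.0                       # assumed breakup altitude, m
n = 1000                           # number of fragments
vx = rng.normal(0.0, 100.0, n)     # horizontal velocity spread, m/s
vz = rng.normal(0.0, 50.0, n)      # vertical velocity spread, m/s

# Time to ground from h0 + vz*t - g*t^2/2 = 0 (positive root)
t = (vz + np.sqrt(vz**2 + 2.0 * g * h0)) / g
ground_x = vx * t                  # downrange impact points, m
```

The spread of `ground_x` is the 1D footprint; a risk contour against population maps would be built from many such Monte Carlo draws.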

  3. Reviews Book: Visible Learning Book: Getting to Grips with Graphs Book: A Teacher's Guide to Classroom Research Book: Relativity: A Graphic Guide Book: The Last Man Who Knew Everything Game: Planet Quest Equipment: Minoru 3D Web Camera Equipment: Throwies Equipment: Go Science Optics Kit Web Watch

    NASA Astrophysics Data System (ADS)

    2009-05-01

    WE RECOMMEND Visible Learning A compilation of more than 800 meta-analyses of achievement A Teacher's Guide to Classroom Research A useful aid for teachers who want to improve standards in class The Last Man Who Knew Everything This biography of Thomas Young is a 'lucid account' of his life Novo Minoru 3D Web Camera Welcome a mini alien to your classroom for fun 3D lessons WORTH A LOOK Getting to Grips with Graphs A useful collection of worksheets for teaching about graphs Relativity: A Graphic Guide This book works best as a supplementary text on relativity Planet Quest A space board game that will engage younger children Throwies Make a torch and liven up lessons on conductors and insulators Go Science Optics Kit Do-it-yourself optics kit should be priced a little lower WEB WATCH This month we take a look at NASA's technology and education web pages, which offer a great selection of space-related topics and activities for young scientists

  4. A 3D Geometry Model Search Engine to Support Learning

    ERIC Educational Resources Information Center

    Tam, Gary K. L.; Lau, Rynson W. H.; Zhao, Jianmin

    2009-01-01

    Due to the popularity of 3D graphics in animation and games, usage of 3D geometry deformable models increases dramatically. Despite their growing importance, these models are difficult and time consuming to build. A distance learning system for the construction of these models could greatly facilitate students to learn and practice at different…

  5. Venus in 3D

    NASA Technical Reports Server (NTRS)

    Plaut, Jeffrey J.

    1993-01-01

    Stereographic images of the surface of Venus which enable geologists to reconstruct the details of the planet's evolution are discussed. The 120-meter resolution of these 3D images make it possible to construct digital topographic maps from which precise measurements can be made of the heights, depths, slopes, and volumes of geologic structures.

  6. PLOT3D Export Tool for Tecplot

    NASA Technical Reports Server (NTRS)

    Alter, Stephen

    2010-01-01

    The PLOT3D export tool for Tecplot solves the problem of modified data being impossible to output for use by another computational science solver. The PLOT3D Exporter add-on enables the use of the most commonly available visualization tools to engineers for output of a standard format. The exportation of PLOT3D data from Tecplot has far reaching effects because it allows for grid and solution manipulation within a graphical user interface (GUI) that is easily customized with macro language-based and user-developed GUIs. The add-on also enables the use of Tecplot as an interpolation tool for solution conversion between different grids of different types. This one add-on enhances the functionality of Tecplot so significantly, it offers the ability to incorporate Tecplot into a general suite of tools for computational science applications as a 3D graphics engine for visualization of all data. Within the PLOT3D Export Add-on are several functions that enhance the operations and effectiveness of the add-on. Unlike Tecplot output functions, the PLOT3D Export Add-on enables the use of the zone selection dialog in Tecplot to choose which zones are to be written by offering three distinct options - output of active, inactive, or all zones (grid blocks). As the user modifies the zones to output with the zone selection dialog, the zones to be written are similarly updated. This enables the use of Tecplot to create multiple configurations of a geometry being analyzed. For example, if an aircraft is loaded with multiple deflections of flaps, by activating and deactivating different zones for a specific flap setting, new specific configurations of that aircraft can be easily generated by only writing out specific zones. Thus, if ten flap settings are loaded into Tecplot, the PLOT3D Export software can output ten different configurations, one for each flap setting.
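For reference, a minimal sketch of the multi-block ASCII PLOT3D grid layout such an exporter emits (this is our reading of the common ASCII variant, shown as an assumption; real exports may use binary or record-based files): block count, per-block dimensions, then each block's x, y, and z coordinates.

```python
import io
import numpy as np

def write_plot3d(f, blocks):
    """Write blocks (each an (ni, nj, nk, 3) array of xyz coordinates)
    as an ASCII multi-block PLOT3D grid: block count, then dimensions
    for every block, then per block all x, all y, all z values in
    Fortran (i-fastest) order."""
    f.write(f"{len(blocks)}\n")
    for b in blocks:
        ni, nj, nk = b.shape[:3]
        f.write(f"{ni} {nj} {nk}\n")
    for b in blocks:
        for comp in range(3):                      # x, then y, then z
            vals = b[..., comp].flatten(order="F")
            f.write(" ".join(f"{v:.6e}" for v in vals) + "\n")

# Single-block 2x2x2 unit-cube grid
i, j, k = np.meshgrid(np.arange(2.0), np.arange(2.0), np.arange(2.0),
                      indexing="ij")
grid = np.stack([i, j, k], axis=-1)

buf = io.StringIO()
write_plot3d(buf, [grid])
header = buf.getvalue().splitlines()[:2]
```

Writing only a chosen subset of blocks — as the add-on does with active/inactive zones — amounts to passing a filtered `blocks` list.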

  7. 3D photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Carson, Jeffrey J. L.; Roumeliotis, Michael; Chaudhary, Govind; Stodilka, Robert Z.; Anastasio, Mark A.

    2010-06-01

    Our group has concentrated on development of a 3D photoacoustic imaging system for biomedical imaging research. The technology employs a sparse parallel detection scheme and specialized reconstruction software to obtain 3D optical images using a single laser pulse. With the technology we have been able to capture 3D movies of translating point targets and rotating line targets. The current limitation of our 3D photoacoustic imaging approach is its inability to reconstruct complex objects in the field of view. This is primarily due to the relatively small number of projections used to reconstruct objects. However, in many photoacoustic imaging situations, only a few objects may be present in the field of view and these objects may have very high contrast compared to the background. That is, the objects have sparse properties. Therefore, our work had two objectives: (i) to utilize mathematical tools to evaluate 3D photoacoustic imaging performance, and (ii) to test image reconstruction algorithms that prefer sparseness in the reconstructed images. Our approach was to utilize singular value decomposition techniques to study the imaging operator of the system and evaluate the complexity of objects that could potentially be reconstructed. We also compared the performance of two image reconstruction algorithms (algebraic reconstruction and l1-norm techniques) at reconstructing objects of increasing sparseness. We observed that for a 15-element detection scheme, the number of measurable singular vectors representative of the imaging operator was consistent with the demonstrated ability to reconstruct point and line targets in the field of view. We also observed that the l1-norm reconstruction technique, which is known to prefer sparseness in reconstructed images, was superior to the algebraic reconstruction technique.
Based on these findings, we concluded (i) that singular value decomposition of the imaging operator provides valuable insight into the capabilities of
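A small numerical analogue of both analyses can be sketched with an invented 15 x 64 random imaging operator (not the paper's actual system model): the matrix rank bounds the number of measurable components, and an l1-penalised ISTA solve recovers a sparse object from far fewer measurements than unknowns.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((15, 64))      # hypothetical operator: 15 detectors, 64 voxels
x_true = np.zeros(64)
x_true[[5, 40]] = [1.0, -0.7]          # two sparse "absorbers"
y = A @ x_true                         # noiseless measurements

# (i) SVD view: at most 15 independent components are measurable
rank = np.linalg.matrix_rank(A)

# (ii) l1-penalised reconstruction by ISTA:
#      x <- soft_threshold(x + t * A^T (y - A x), t * lam)
t = 1.0 / np.linalg.norm(A, 2) ** 2    # step size from the spectral norm
lam = 0.05
x = np.zeros(64)
for _ in range(2000):
    z = x + t * A.T @ (y - A @ x)
    x = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)
```

A plain minimum-norm least-squares solve of the same underdetermined system would smear energy over all 64 voxels, which mirrors the paper's observation that sparsity-preferring reconstruction outperforms the algebraic technique for sparse scenes.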

  8. Watermarking 3D Objects for Verification

    DTIC Science & Technology

    1999-01-01

    signal (audio/image/video) processing and steganography fields, and even newer to the computer graphics community. Inherently, digital watermarking of... Many view digital watermarking as a potential solution for copyright protection of valuable digital materials like CD-quality audio, publication... watermark. The object can be an image, an audio clip, a video clip, or a 3D model. Some papers discuss watermarking other forms of multimedia data

  9. A 3D Data Transformation Processor

    DTIC Science & Technology

    2012-10-01

    forensic purposes. Our work differs from XTRec in that we are proposing a specialized 3DIC approach, and we argue that our proposed system would fa... on Emerging Technologies and Factory Automation (ETFA), Patras, Greece, September 2007. [11] J. Kim, C. Nicopoulos, D. Park, R. Das, Y. Xie, N... R. Kastner, T. Huffmire, C. Irvine, and T. Levin. Hardware assistance for trustworthy systems through 3-D integration. In Proceedings of the Annual

  10. Scalable 3D GIS environment managed by 3D-XML-based modeling

    NASA Astrophysics Data System (ADS)

    Shi, Beiqi; Rui, Jianxun; Chen, Neng

    2008-10-01

    Nowadays, 3D GIS technologies have become a key factor in establishing and maintaining large-scale 3D geoinformation services. However, with the rapidly increasing size and complexity of the 3D models being acquired, a pressing need for suitable data management solutions has become apparent. This paper outlines that the storage and exchange of geospatial data between databases and different front ends, such as 3D models, GIS, or internet browsers, require a standardized format which is capable of representing instances of 3D GIS models, minimizing loss of information during data transfer, and reducing interface development efforts. After a review of previous methods for spatial 3D data management, a universal lightweight XML-based format for quick and easy sharing of 3D GIS data is presented. 3D data management based on XML is a solution meeting the requirements as stated, which can provide an efficient means of opening a new standard way to create an arbitrary data structure and share it over the Internet. To manage reality-based 3D models, this paper uses 3DXML, produced by Dassault Systemes. 3DXML uses open XML schemas to communicate product geometry, structure, and graphical display properties. It can be read, written, and enriched by standard tools, and it allows users to add extensions based on their own specific requirements. The paper concludes with the presentation of projects from application areas which will benefit from the functionality presented above.

  11. Spatial 3D infrastructure: display-independent software framework, high-speed rendering electronics, and several new displays

    NASA Astrophysics Data System (ADS)

    Chun, Won-Suk; Napoli, Joshua; Cossairt, Oliver S.; Dorval, Rick K.; Hall, Deirdre M.; Purtell, Thomas J., II; Schooler, James F.; Banker, Yigal; Favalora, Gregg E.

    2005-03-01

    We present a software and hardware foundation to enable the rapid adoption of 3-D displays. Different 3-D displays - such as multiplanar, multiview, and electroholographic displays - naturally require different rendering methods. The adoption of these displays in the marketplace will be accelerated by a common software framework. The authors designed the SpatialGL API, a new rendering framework that unifies these display methods under one interface. SpatialGL enables complementary visualization assets to coexist through a uniform infrastructure. Also, SpatialGL supports legacy interfaces such as the OpenGL API. The authors' first implementation of SpatialGL uses multiview and multislice rendering algorithms to exploit the performance of modern graphics processing units (GPUs) to enable real-time visualization of 3-D graphics from medical imaging, oil & gas exploration, and homeland security. At the time of writing, SpatialGL runs on COTS workstations (both Windows and Linux) and on Actuality's high-performance embedded computational engine that couples an NVIDIA GeForce 6800 Ultra GPU, an AMD Athlon 64 processor, and a proprietary, high-speed, programmable volumetric frame buffer that interfaces to a 1024 x 768 x 3 digital projector. Progress is illustrated using an off-the-shelf multiview display, Actuality's multiplanar Perspecta Spatial 3D System, and an experimental multiview display. The experimental display is a quasi-holographic view-sequential system that generates aerial imagery measuring 30 mm x 25 mm x 25 mm, providing 198 horizontal views.

  12. gEMfitter: a highly parallel FFT-based 3D density fitting tool with GPU texture memory acceleration.

    PubMed

    Hoang, Thai V; Cavin, Xavier; Ritchie, David W

    2013-11-01

    Fitting high resolution protein structures into low resolution cryo-electron microscopy (cryo-EM) density maps is an important technique for modeling the atomic structures of very large macromolecular assemblies. This article presents "gEMfitter", a highly parallel fast Fourier transform (FFT) EM density fitting program which can exploit the special hardware properties of modern graphics processor units (GPUs) to accelerate both the translational and rotational parts of the correlation search. In particular, by using the GPU's special texture memory hardware to rotate 3D voxel grids, the cost of rotating large 3D density maps is almost completely eliminated. Compared to performing 3D correlations on one core of a contemporary central processor unit (CPU), running gEMfitter on a modern GPU gives up to 26-fold speed-up. Furthermore, using our parallel processing framework, this speed-up increases linearly with the number of CPUs or GPUs used. Thus, it is now possible to routinely use more robust but more expensive 3D correlation techniques. When tested on low resolution experimental cryo-EM data for the GroEL-GroES complex, we demonstrate the satisfactory fitting results that may be achieved by using a locally normalised cross-correlation with a Laplacian pre-filter, while still being up to three orders of magnitude faster than the well-known COLORES program.
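The FFT-based translational part of such a correlation search can be sketched in a few lines; the toy binary maps below stand in for real density grids, and this shows only the principle, not gEMfitter's normalised, pre-filtered scoring:

```python
import numpy as np

def fft_correlation(target, probe):
    """Circular cross-correlation via FFT: one pass scores every
    integer shift of the probe against the target map."""
    F = np.fft.fftn(target)
    G = np.fft.fftn(probe, s=target.shape)
    return np.fft.ifftn(F * np.conj(G)).real

target = np.zeros((16, 16, 16))
target[5:8, 6:9, 7:10] = 1.0           # "density" blob in the target map
probe = np.zeros_like(target)
probe[0:3, 0:3, 0:3] = 1.0             # the same blob placed at the origin

corr = fft_correlation(target, probe)
shift = np.unravel_index(np.argmax(corr), corr.shape)   # best translation
```

The rotational part of the search repeats this for each sampled orientation of the probe, which is exactly the step the GPU texture-memory rotation accelerates.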

  13. 3-D visualization of geologic structures and processes

    NASA Astrophysics Data System (ADS)

    Pflug, R.; Klein, H.; Ramshorn, Ch.; Genter, M.; Stärk, A.

    Interactive 3-D computer graphics techniques are used to visualize geologic structures and simulated geologic processes. Geometric models that serve as input to 3-D viewing programs are generated from contour maps, from serial sections, or directly from simulation program output. Choice of viewing parameters strongly affects the perception of irregular surfaces. An interactive 3-D rendering program and its graphical user interface provide visualization tools for structural geology, seismic interpretation, and visual post-processing of simulations. Dynamic display of transient ground-water simulations and sedimentary process simulations can visualize processes developing through time.

  14. Hardly Hardware

    ERIC Educational Resources Information Center

    Lott, Debra

    2007-01-01

    In a never-ending search for new and inspirational still-life objects, the author discovered that home improvement retailers make great resources for art teachers. Hardware and building materials are inexpensive and have interesting and variable shapes. She especially liked the dryer-vent coils and the electrical conduit. These items can be…

  15. 3D Simulation: Microgravity Environments and Applications

    NASA Technical Reports Server (NTRS)

    Hunter, Steve L.; Dischinger, Charles; Estes, Samantha; Parker, Nelson C. (Technical Monitor)

    2001-01-01

    Most, if not all, 3-D and Virtual Reality (VR) software programs are designed for one-G gravity applications. Space environment simulations require gravity effects of one one-thousandth to one one-millionth of that at the Earth's surface (10(exp -3) - 10(exp -6) G), so one must be able to generate simulations that replicate those microgravity effects upon simulated astronauts. Unfortunately, the software programs utilized by the National Aeronautics and Space Administration do not have the ability to readily neutralize the one-G gravity effect. This pre-programmed situation causes the engineer or analyst difficulty during microgravity simulations. Therefore, microgravity simulations require special techniques or additional code in order to apply the power of 3D graphic simulation to space-related applications. This paper discusses the problem and possible solutions to allow microgravity 3-D/VR simulations to be completed successfully without program code modifications.

  16. Impedance mammograph 3D phantom studies.

    PubMed

    Wtorek, J; Stelter, J; Nowakowski, A

    1999-04-20

    The results of a 3D phantom study using the Technical University of Gdansk Electroimpedance Mammograph (TUGEM) are presented. The TUGEM system is briefly described. The hardware contains the measurement head and DSP-based identification modules controlled by a PC. A specially developed reconstruction algorithm, the Regulated Correction Frequency Algebraic Reconstruction Technique (RCFART), is used to obtain 3D images. To visualize the results, the Advance Visualization System (AVS) is used, which allows powerful image processing on a fast workstation or a high-performance computer. Results for the three types of 3D conductivity perturbation used in the study (aluminum, Plexiglas, and cucumber) are shown. Perturbations with relative volumes of less than 2% of the measurement chamber are easily detected.
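RCFART itself is not spelled out in the abstract; the sketch below shows only the classical algebraic reconstruction technique (ART, the Kaczmarz iteration) that such reconstructions build on, applied to a tiny made-up linear system in place of a real sensitivity matrix:

```python
import numpy as np

def art(A, b, n_sweeps=200, relax=1.0):
    """ART / Kaczmarz: cycle over the rows of A x = b, projecting the
    current estimate onto each row's hyperplane in turn."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            x += relax * (b[i] - a @ x) / (a @ a) * a
    return x

# Toy consistent system standing in for measurements vs. conductivity
A = np.array([[2.0, 1.0], [1.0, 3.0], [1.0, -1.0]])
x_true = np.array([1.0, 2.0])
b = A @ x_true

x = art(A, b)
```

In an EIT setting, each row of `A` would couple one boundary measurement to the voxel conductivities, and variants like RCFART add regulated corrections on top of this basic sweep.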

  17. Urbanisation and 3d Spatial - a Geometric Approach

    NASA Astrophysics Data System (ADS)

    Duncan, E. E.; Rahman, A. Abdul

    2013-09-01

    Urbanisation creates immense competition for space; this may be attributed to an increase in population owing to domestic and external tourism. Most cities are constantly exploring all avenues for maximising their limited space. Hence, urban or city authorities need to plan, expand, and use the three-dimensional (3D) space above, on, and below the city surface. Thus, difficulties in property ownership and the geometric representation of the 3D city space are a major challenge. This research investigates the concept of representing a geometric-topological 3D spatial model capable of representing 3D volume parcels for man-made constructions above and below the 3D surface volume parcel. A review of spatial data models suggests that the 3D TIN (TEN) model is significant and can be used as a unified model. The concepts, logical and physical models of 3D TIN for 3D volumes using tetrahedrons as the base geometry are presented and implemented to show man-made constructions above and below the surface parcel within a user-friendly graphical interface. Concepts for 3D topology and 3D analysis are discussed. Simulations of this model for 3D cadastre are implemented. This model can be adopted by most countries to enhance and streamline geometric 3D property ownership for urban centres. The 3D TIN concept for spatial modelling can be adopted for the LA_Spatial part of the Land Administration Domain Model (LADM) (ISO/TC211, 2012), as it satisfies the concept of 3D volumes.
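One reason tetrahedrons work well as the base geometry of a TEN model is that per-parcel quantities such as volume reduce to simple sums over tetrahedra. A minimal sketch with hypothetical vertex data:

```python
import numpy as np

def tet_volume(a, b, c, d):
    """Volume of a tetrahedron from its four vertices via the
    signed-determinant (triple product) formula, |det| / 6."""
    return abs(np.linalg.det(np.array([b - a, c - a, d - a]))) / 6.0

# Unit-corner tetrahedron: vertices at the origin and the three axis
# unit points; its volume is exactly 1/6.
a = np.zeros(3)
b, c, d = np.eye(3)[0], np.eye(3)[1], np.eye(3)[2]
v = tet_volume(a, b, c, d)
```

The volume of a 3D cadastral parcel tessellated into tetrahedra is then just the sum of `tet_volume` over its cells.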

  18. Computer Graphics.

    ERIC Educational Resources Information Center

    Halpern, Jeanne W.

    1970-01-01

    Computer graphics have been called the most exciting development in computer technology. At the University of Michigan, three kinds of graphics output equipment are now being used: symbolic printers, line plotters or drafting devices, and cathode-ray tubes (CRT). Six examples are given that demonstrate the range of graphics use at the University.…

  19. A novel visual hardware behavioral language

    NASA Technical Reports Server (NTRS)

    Li, Xueqin; Cheng, H. D.

    1992-01-01

    Most hardware behavioral languages use only text to describe the behavior of the desired hardware design. This is inconvenient for VLSI designers who prefer the schematic approach. The proposed visual hardware behavioral language can graphically express design information using visual parallel models (blocks), visual sequential models (processes), and visual data flow graphs (which consist of primitive operational icons, control icons, and Data and Synchro links). Thus, the proposed visual hardware behavioral language can not only specify hardware concurrent and sequential functionality, but can also visually expose parallelism, sequentiality, and disjointness (mutually exclusive operations) to the hardware designers, enabling them to capture design ideas easily and explicitly.

  20. Spatioangular Prefiltering for Multiview 3D Displays.

    PubMed

    Ramachandra, Vikas; Hirakawa, Keigo; Zwicker, Matthias; Nguyen, Truong

    2011-05-01

    In this paper, we analyze the reproduction of light fields on multiview 3D displays. A three-way interaction between the input light field signal (which is often aliased), the joint spatioangular sampling grids of multiview 3D displays, and the interview light leakage in modern multiview 3D displays is characterized in the joint spatioangular frequency domain. Reconstruction of light fields by all physical 3D displays is prone to light leakage, which means that the reconstruction low-pass filter implemented by the display is too broad in the angular domain. As a result, 3D displays excessively attenuate angular frequencies. Our analysis shows that this reduces sharpness of the images shown in the 3D displays. In this paper, stereoscopic image recovery is recast as a problem of joint spatioangular signal reconstruction. The combination of the 3D display point spread function and human visual system provides the narrow-band low-pass filter which removes spectral replicas in the reconstructed light field on the multiview display. The nonideality of this filter is corrected with the proposed prefiltering. The proposed light field reconstruction method performs light field antialiasing as well as angular sharpening to compensate for the nonideal response of the 3D display. The union of cosets approach which has been used earlier by others is employed here to model the nonrectangular spatioangular sampling grids on a multiview display in a generic fashion. We confirm the effectiveness of our approach in simulation and in physical hardware, and demonstrate improvement over existing techniques.
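A 1D toy version of the angular part of this argument (all numbers invented): angular detail beyond what the display's few views can carry must be low-pass prefiltered before sampling, or it aliases into a spurious low-frequency pattern.

```python
import numpy as np

n_dense, n_views = 256, 8          # dense angular samples vs. display views
theta = np.arange(n_dense)
# Angular frequency 41 is far beyond what 8 views can represent
signal = np.sin(2 * np.pi * 41 * theta / n_dense)

# Naive sampling onto the 8 views: the detail aliases into a visible
# low-frequency pattern across the views.
aliased = signal[:: n_dense // n_views]

# Prefilter: band-limit the angular spectrum to what 8 views can carry,
# then sample. The unrepresentable detail is removed instead of aliased.
spec = np.fft.fft(signal)
cutoff = n_views // 2
spec[cutoff + 1 : n_dense - cutoff] = 0.0       # ideal low-pass prefilter
filtered = np.fft.ifft(spec).real
views = filtered[:: n_dense // n_views]
```

The paper's prefilter additionally sharpens frequencies the display attenuates through light leakage, so it is band-limiting plus compensation rather than the ideal brick wall used here.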

  1. Twin Peaks - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The two hills in the distance, approximately one to two kilometers away, have been dubbed the 'Twin Peaks' and are of great interest to Pathfinder scientists as objects of future study. 3D glasses are necessary to identify surface detail. The white areas on the left hill, called the 'Ski Run' by scientists, may have been formed by hydrologic processes.

    The IMP is a stereo imaging system with color capability provided by 24 selectable filters -- twelve filters per 'eye'.


  2. 3D and beyond

    NASA Astrophysics Data System (ADS)

    Fung, Y. C.

    1995-05-01

    This conference on physiology and function covers a wide range of subjects, including the vasculature and blood flow, the flow of gas, water, and blood in the lung, the neurological structure and function, the modeling, and the motion and mechanics of organs. Many technologies are discussed. I believe that the list would include a robotic photographer, to hold the optical equipment in a precisely controlled way to obtain the images for the user. Why are 3D images needed? They are to achieve certain objectives through measurements of some objects. For example, in order to improve performance in sports or beauty of a person, we measure the form, dimensions, appearance, and movements.

  3. 3D Audio System

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Ames Research Center research into virtual reality led to the development of the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion. The perceived location of the sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research and virtual reality development.
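The core operation — convolving a mono source with per-ear impulse responses — can be sketched as follows. The impulse responses here are fabricated delays and gains, not measured HRTFs, and the Convolvotron additionally interpolates time-varying filters as the listener moves:

```python
import numpy as np

fs = 44100
t = np.arange(fs // 10) / fs
source = np.sin(2 * np.pi * 440 * t)             # mono 440 Hz tone

# Fake head-related impulse responses: a source on the left arrives
# earlier and louder at the left ear than at the right ear.
hrir_left = np.zeros(64)
hrir_left[2] = 1.0
hrir_right = np.zeros(64)
hrir_right[20] = 0.5                             # ~0.4 ms interaural delay

left = np.convolve(source, hrir_left)
right = np.convolve(source, hrir_right)
stereo = np.stack([left, right], axis=1)         # headphone feed
```

Played over headphones, the interaural delay and level difference encoded by the two filters are what make the tone appear to come from the left.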

  4. Simulation of automatic rotorcraft nap-of-the-earth flight in graphics workstation environment

    NASA Technical Reports Server (NTRS)

    Lam, T.; Cheng, Victor H. L.

    1992-01-01

    This paper describes a three-dimensional (3D) helicopter flight simulation system. The simulation is designed to be a readily available tool for concept verification and tuning of automatic obstacle-avoidance guidance algorithms. The system has been implemented on networked workstations capable of interactive 3D graphics simulation. The simulation uses realistic terrain and obstacle models. The dynamics of the rotorcraft and the functional capabilities of the range sensors are simulated to provide all the components required to evaluate the guidance function. Standard graphics hardware available on the workstation is utilized to accelerate the range-data calculations for sensor simulation at the guidance rate. An example is given to demonstrate the performance of the obstacle-avoidance capability.

  5. Forward and adjoint spectral-element simulations of seismic wave propagation using hardware accelerators

    NASA Astrophysics Data System (ADS)

    Peter, Daniel; Videau, Brice; Pouget, Kevin; Komatitsch, Dimitri

    2015-04-01

    Improving the resolution of tomographic images is crucial to answer important questions on the nature of Earth's subsurface structure and internal processes. Seismic tomography is the most prominent approach where seismic signals from ground-motion records are used to infer physical properties of internal structures such as compressional- and shear-wave speeds, anisotropy and attenuation. Recent advances in regional- and global-scale seismic inversions move towards full-waveform inversions which require accurate simulations of seismic wave propagation in complex 3D media, providing access to the full 3D seismic wavefields. However, these numerical simulations are computationally very expensive and need high-performance computing (HPC) facilities for further improving the current state of knowledge. During recent years, many-core architectures such as graphics processing units (GPUs) have been added to available large HPC systems. Such GPU-accelerated computing together with advances in multi-core central processing units (CPUs) can greatly accelerate scientific applications. There are mainly two possible choices of language support for GPU cards, the CUDA programming environment and OpenCL language standard. CUDA software development targets NVIDIA graphic cards while OpenCL was adopted mainly by AMD graphic cards. In order to employ such hardware accelerators for seismic wave propagation simulations, we incorporated a code generation tool BOAST into an existing spectral-element code package SPECFEM3D_GLOBE. This allows us to use meta-programming of computational kernels and generate optimized source code for both CUDA and OpenCL languages, running simulations on either CUDA or OpenCL hardware accelerators. We show here applications of forward and adjoint seismic wave propagation on CUDA/OpenCL GPUs, validating results and comparing performances for different simulations and hardware usages.
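
    The meta-programming approach described above can be illustrated with a toy sketch (this is not BOAST's actual API): one abstract kernel description is emitted as source text for either backend, with only the qualifiers and index expressions differing between CUDA and OpenCL:

```python
# Illustrative sketch (not BOAST's real API) of source-to-source kernel
# generation: one abstract vector-add kernel is emitted as either CUDA
# or OpenCL source text from a shared template.

TEMPLATES = {
    "cuda": (
        "__global__ void {name}(const float *a, const float *b, float *c, int n) {{\n"
        "  int i = blockIdx.x * blockDim.x + threadIdx.x;\n"
        "  if (i < n) c[i] = a[i] + b[i];\n"
        "}}\n"
    ),
    "opencl": (
        "__kernel void {name}(__global const float *a, __global const float *b,\n"
        "                     __global float *c, int n) {{\n"
        "  int i = get_global_id(0);\n"
        "  if (i < n) c[i] = a[i] + b[i];\n"
        "}}\n"
    ),
}

def generate_kernel(name, backend):
    """Return kernel source for the requested backend ('cuda' or 'opencl')."""
    return TEMPLATES[backend].format(name=name)

if __name__ == "__main__":
    print(generate_kernel("vec_add", "cuda"))
    print(generate_kernel("vec_add", "opencl"))
```

    BOAST works at a higher level (an embedded DSL rather than string templates), but the payoff is the same: one kernel description, multiple optimized backends.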

  6. KAGLVis - On-line 3D Visualisation of Earth-observing-satellite Data

    NASA Astrophysics Data System (ADS)

    Szuba, Marek; Ameri, Parinaz; Grabowski, Udo; Maatouki, Ahmad; Meyer, Jörg

    2015-04-01

    One of the goals of the Large-Scale Data Management and Analysis project is to provide a high-performance framework facilitating management of data acquired by Earth-observing satellites such as Envisat. On the client-facing side of this framework, we strive to provide a visualisation and basic-analysis tool which could be used by scientists with minimal to no knowledge of the underlying infrastructure. Our tool, KAGLVis, is a JavaScript client-server Web application which leverages modern Web technologies to provide three-dimensional visualisation of satellite observables on a wide range of client systems. It takes advantage of the WebGL API to employ locally available GPU power for 3D rendering; this approach has been demonstrated to perform well even on relatively weak hardware such as integrated graphics chipsets found in modern laptop computers, and with some user-interface tuning could even be usable on embedded devices such as smartphones or tablets. Data is fetched from the database back-end using a ReST API and cached locally, both in memory and using HTML5 Web Storage, to minimise network use. Computations, calculation of cloud altitude from cloud-index measurements for instance, can, depending on configuration, be performed on either the client or the server side. Keywords: satellite data, Envisat, visualisation, 3D graphics, Web application, WebGL, MEAN stack.
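
    The two-tier caching pattern described above (fast in-memory cache backed by persistent Web Storage, with the network as last resort) can be sketched in Python; the class and method names are illustrative, with plain dicts standing in for Web Storage and a lambda standing in for the ReST fetch:

```python
# Sketch of a two-tier client cache: memory first, persistent store
# second, network fetch only on a full miss. A dict stands in for
# HTML5 Web Storage; all names here are illustrative.

class TieredCache:
    def __init__(self, fetch):
        self.fetch = fetch          # callable hitting the back-end API
        self.memory = {}            # fast, lost on page reload
        self.persistent = {}        # survives reloads (Web Storage stand-in)
        self.network_calls = 0

    def get(self, key):
        if key in self.memory:                      # tier 1: memory
            return self.memory[key]
        if key in self.persistent:                  # tier 2: persistent store
            self.memory[key] = self.persistent[key]
            return self.memory[key]
        value = self.fetch(key)                     # last resort: network
        self.network_calls += 1
        self.memory[key] = self.persistent[key] = value
        return value

if __name__ == "__main__":
    cache = TieredCache(fetch=lambda key: f"payload for {key}")
    for _ in range(3):
        cache.get("/orbits/42")      # only the first call hits the network
    print(cache.network_calls)
```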

  7. 3D Surgical Simulation

    PubMed Central

    Cevidanes, Lucia; Tucker, Scott; Styner, Martin; Kim, Hyungmin; Chapuis, Jonas; Reyes, Mauricio; Proffit, William; Turvey, Timothy; Jaskolka, Michael

    2009-01-01

    This paper discusses the development of methods for computer-aided jaw surgery. Computer-aided jaw surgery allows us to incorporate the high level of precision necessary for transferring virtual plans into the operating room. We also present a complete computer-aided surgery (CAS) system developed in close collaboration with surgeons. Surgery planning and simulation include construction of 3D surface models from Cone-beam CT (CBCT), dynamic cephalometry, semi-automatic mirroring, interactive cutting of bone, and bony segment repositioning. A virtual setup can be used to manufacture positioning splints for intra-operative guidance. The system provides further intra-operative assistance with the help of a computer display showing jaw positions and 3D positioning guides updated in real-time during the surgical procedure. The CAS system aids in dealing with complex cases, with benefits for the patient, for surgical practice, and for orthodontic finishing. Advanced software tools for diagnosis and treatment planning allow preparation of detailed operative plans, osteotomy repositioning, bone reconstructions, surgical resident training, and assessing the difficulties of the surgical procedures prior to the surgery. CAS has the potential to make the elaboration of the surgical plan a more flexible process, increase the level of detail and accuracy of the plan, yield higher operative precision and control, and enhance documentation of cases. Supported by NIDCR DE017727 and DE018962. PMID:20816308

  8. Martian terrain - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    An area of rocky terrain near the landing site of the Sagan Memorial Station can be seen in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. This image is part of a 3D 'monster' panorama of the area surrounding the landing site.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

    Click below to see the left and right views individually. [figure removed for brevity, see original site] Left [figure removed for brevity, see original site] Right

  9. Spectral-element Seismic Wave Propagation on CUDA/OpenCL Hardware Accelerators

    NASA Astrophysics Data System (ADS)

    Peter, D. B.; Videau, B.; Pouget, K.; Komatitsch, D.

    2015-12-01

    Seismic wave propagation codes are essential tools to investigate a variety of wave phenomena in the Earth. Furthermore, they can now be used for seismic full-waveform inversions in regional- and global-scale adjoint tomography. Although these seismic wave propagation solvers are crucial ingredients to improve the resolution of tomographic images to answer important questions about the nature of Earth's internal processes and subsurface structure, their practical application is often limited due to high computational costs. They thus need high-performance computing (HPC) facilities to improve the current state of knowledge. At present, numerous large HPC systems embed many-core architectures such as graphics processing units (GPUs) to enhance numerical performance. Such hardware accelerators can be programmed using either the CUDA programming environment or the OpenCL language standard. CUDA software development targets NVIDIA graphic cards, while OpenCL was adopted by additional hardware accelerators, such as AMD graphic cards, ARM-based processors, and Intel Xeon Phi coprocessors. For seismic wave propagation simulations using the open-source spectral-element code package SPECFEM3D_GLOBE, we incorporated an automatic source-to-source code generation tool (BOAST) which allows us to use meta-programming of all computational kernels for forward and adjoint runs. Using our BOAST kernels, we generate optimized source code for both CUDA and OpenCL languages within the source code package. Thus, seismic wave simulations are now able to fully utilize CUDA and OpenCL hardware accelerators. We show benchmarks of forward seismic wave propagation simulations using SPECFEM3D_GLOBE on CUDA/OpenCL GPUs, validating results and comparing performances for different simulations and hardware usages.

  10. Hardware-in-the-loop testing for the LOCAAS laser radar antimateriel munition

    NASA Astrophysics Data System (ADS)

    Byrd, Lawrence Y., III; Thompson, Rhoe A.

    1996-05-01

    The KHILS facility in the Wright Laboratory Armament Directorate at Eglin AFB has developed a hardware-in-the-loop (HWIL) simulation for the Low Cost Autonomous Attack System. Unique techniques have been developed for real-time closed-loop signal injection testing of this Laser Radar (LADAR) guided munition concept. The overall HWIL layout will be described, including discussion of interfaces, real-time 3D LADAR scene generation, flight motion simulation, and real-time graphical visualization. In addition, the practical application of a new simulation Verification, Validation and Accreditation procedure will be described in relation to this HWIL simulation.

  11. Graphic engine resource management

    NASA Astrophysics Data System (ADS)

    Bautin, Mikhail; Dwarakinath, Ashok; Chiueh, Tzi-cker

    2008-01-01

    Modern consumer-grade 3D graphic cards boast a computation/memory resource that can easily rival or even exceed that of standard desktop PCs. Although these cards are mainly designed for 3D gaming applications, their enormous computational power has attracted developers to port an increasing number of scientific computation programs to these cards, including matrix computation, collision detection, cryptography, database sorting, etc. As more and more applications run on 3D graphic cards, there is a need to allocate the computation/memory resource on these cards among the sharing applications more fairly and efficiently. In this paper, we describe the design, implementation and evaluation of a Graphic Processing Unit (GPU) scheduler based on Deficit Round Robin scheduling that successfully allocates to every process an equal share of the GPU time regardless of its demand. This scheduler, called GERM, estimates the execution time of each GPU command group based on dynamically collected statistics, and controls each process's GPU command production rate through its CPU scheduling priority. Measurements on the first GERM prototype show that this approach can keep the maximal GPU time consumption difference among concurrent GPU processes consistently below 5% for a variety of application mixes.
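
    The Deficit Round Robin discipline that GERM builds on can be sketched as follows; the per-round quantum and command-group costs here are invented for the example, whereas the real scheduler estimates costs from runtime statistics rather than knowing them up front:

```python
# Sketch of Deficit Round Robin (DRR): each process accumulates a fixed
# quantum per round and may only submit a queued GPU command group once
# its deficit covers that group's (estimated) execution cost.
from collections import deque

def drr_schedule(queues, quantum, rounds):
    """queues: {pid: deque of command-group costs}. Returns dispatch order."""
    deficit = {pid: 0 for pid in queues}
    order = []
    for _ in range(rounds):
        for pid, q in queues.items():
            deficit[pid] += quantum
            while q and q[0] <= deficit[pid]:   # spend deficit on queued groups
                cost = q.popleft()
                deficit[pid] -= cost
                order.append((pid, cost))
        if not any(queues.values()):            # all queues drained
            break
    return order

if __name__ == "__main__":
    queues = {"render": deque([4, 4, 4]), "compute": deque([10, 2])}
    print(drr_schedule(queues, quantum=6, rounds=5))
```

    The key property is that a process with expensive command groups simply waits more rounds before dispatching, so long-run GPU time per process converges to the quantum ratio regardless of per-group cost.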

  12. 3D printing: making things at the library.

    PubMed

    Hoy, Matthew B

    2013-01-01

    3D printers are a new technology that creates physical objects from digital files. Uses for these printers include printing models, parts, and toys. 3D printers are also being developed for medical applications, including printed bone, skin, and even complete organs. Although medical printing lags behind other uses for 3D printing, it has the potential to radically change the practice of medicine over the next decade. Falling costs for hardware have made 3D printers an inexpensive technology that libraries can offer their patrons. Medical librarians will want to be familiar with this technology, as it is sure to have wide-reaching effects on the practice of medicine.

  13. 3D field harmonics

    SciTech Connect

    Caspi, S.; Helm, M.; Laslett, L.J.

    1991-03-30

    We have developed a harmonic representation for the three-dimensional field components within the windings of accelerator magnets. The form in which the field is presented is suitable for interfacing with other codes that make use of the 3D field components (particle tracking and stability). The field components can be calculated with high precision and reduced CPU time at any location (r,{theta},z) inside the magnet bore. The same conductor geometry that is used to simulate line currents is also used in CAD, making modifications more readily available. It is our hope that the format used here for magnetic fields can be used not only as a means of delivering fields but also as a way by which beam dynamics can suggest corrections to the conductor geometry. 5 refs., 70 figs.
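
    As a hedged illustration of what delivering fields as harmonic coefficients looks like in practice: in the 2D transverse limit, such a representation reduces to the standard multipole expansion B_y + iB_x = sum_n (B_n + iA_n) z^(n-1) with z = (x + iy)/R_ref. The coefficients below are invented for the example:

```python
# Hedged sketch: evaluating a transverse magnetic field from multipole
# coefficients (2D limit of a harmonic field representation), using the
# convention  B_y + i*B_x = sum_n (B_n + i*A_n) * z**(n-1),
# z = (x + i*y) / r_ref.  Coefficient values are illustrative only.

def multipole_field(x, y, normal, skew, r_ref=1.0):
    """Return (Bx, By) from normal (B_n) and skew (A_n) coefficients, n = 1..N."""
    z = complex(x, y) / r_ref
    field = sum((b + 1j * a) * z ** (n - 1)
                for n, (b, a) in enumerate(zip(normal, skew), start=1))
    return field.imag, field.real    # field = By + i*Bx

if __name__ == "__main__":
    # Pure normal quadrupole (n = 2): By = B2*x/r_ref, Bx = B2*y/r_ref.
    bx, by = multipole_field(0.01, 0.02, normal=[0.0, 10.0], skew=[0.0, 0.0])
    print(bx, by)
```

    The full 3D representation in the paper additionally carries the z-dependence of the coefficients, but the interface idea is the same: a compact set of harmonics from which tracking codes evaluate fields at any (r, theta, z).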

  14. Scoops3D: software to analyze 3D slope stability throughout a digital landscape

    USGS Publications Warehouse

    Reid, Mark E.; Christian, Sarah B.; Brien, Dianne L.; Henderson, Scott T.

    2015-01-01

    The computer program, Scoops3D, evaluates slope stability throughout a digital landscape represented by a digital elevation model (DEM). The program uses a three-dimensional (3D) method of columns approach to assess the stability of many (typically millions) potential landslides within a user-defined size range. For each potential landslide (or failure), Scoops3D assesses the stability of a rotational, spherical slip surface encompassing many DEM cells using a 3D version of either Bishop’s simplified method or the Ordinary (Fellenius) method of limit-equilibrium analysis. Scoops3D has several options for the user to systematically and efficiently search throughout an entire DEM, thereby incorporating the effects of complex surface topography. In a thorough search, each DEM cell is included in multiple potential failures, and Scoops3D records the lowest stability (factor of safety) for each DEM cell, as well as the size (volume or area) associated with each of these potential landslides. It also determines the least-stable potential failure for the entire DEM. The user has a variety of options for building a 3D domain, including layers or full 3D distributions of strength and pore-water pressures, simplistic earthquake loading, and unsaturated suction conditions. Results from Scoops3D can be readily incorporated into a geographic information system (GIS) or other visualization software. This manual includes information on the theoretical basis for the slope-stability analysis, requirements for constructing and searching a 3D domain, a detailed operational guide (including step-by-step instructions for using the graphical user interface [GUI] software, Scoops3D-i) and input/output file specifications, practical considerations for conducting an analysis, results of verification tests, and multiple examples illustrating the capabilities of Scoops3D. 
Easy-to-use software installation packages are available for the Windows or Macintosh operating systems; these packages
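
    For intuition about the limit-equilibrium analysis, the classical 2D Ordinary (Fellenius) method that Scoops3D generalises to 3D columns can be sketched as follows; the slice data and strength parameters are invented for the example:

```python
# Hedged sketch of the 2D Ordinary (Fellenius) method of slices:
#   FS = sum(c*l + W*cos(a)*tan(phi)) / sum(W*sin(a))
# where, per slice, W is weight, a the base inclination, and l the
# slip-surface length under the slice. Scoops3D applies the 3D
# method-of-columns analogue over a DEM; inputs here are illustrative.
import math

def fellenius_fs(slices, cohesion, phi_deg):
    """slices: list of (weight_kN, base_angle_deg, base_length_m)."""
    tan_phi = math.tan(math.radians(phi_deg))
    resisting = sum(cohesion * l + w * math.cos(math.radians(a)) * tan_phi
                    for w, a, l in slices)
    driving = sum(w * math.sin(math.radians(a)) for w, a, l in slices)
    return resisting / driving

if __name__ == "__main__":
    slices = [(120.0, 10.0, 2.1), (180.0, 25.0, 2.3), (150.0, 40.0, 2.6)]
    fs = fellenius_fs(slices, cohesion=15.0, phi_deg=30.0)
    print(round(fs, 2))   # FS > 1 suggests stability under these assumptions
```

    A thorough Scoops3D search repeats such an evaluation for millions of trial spherical slip surfaces and records the minimum factor of safety touching each DEM cell.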

  15. DNA Assembly in 3D Printed Fluidics

    PubMed Central

    Patrick, William G.; Nielsen, Alec A. K.; Keating, Steven J.; Levy, Taylor J.; Wang, Che-Wei; Rivera, Jaime J.; Mondragón-Palomino, Octavio; Carr, Peter A.; Voigt, Christopher A.; Oxman, Neri; Kong, David S.

    2015-01-01

    The process of connecting genetic parts—DNA assembly—is a foundational technology for synthetic biology. Microfluidics present an attractive solution for minimizing use of costly reagents, enabling multiplexed reactions, and automating protocols by integrating multiple protocol steps. However, microfluidics fabrication and operation can be expensive and requires expertise, limiting access to the technology. With advances in commodity digital fabrication tools, it is now possible to directly print fluidic devices and supporting hardware. 3D printed micro- and millifluidic devices are inexpensive, easy to make and quick to produce. We demonstrate Golden Gate DNA assembly in 3D-printed fluidics with reaction volumes as small as 490 nL, channel widths as fine as 220 microns, and per unit part costs ranging from $0.61 to $5.71. A 3D-printed syringe pump with an accompanying programmable software interface was designed and fabricated to operate the devices. Quick turnaround and inexpensive materials allowed for rapid exploration of device parameters, demonstrating a manufacturing paradigm for designing and fabricating hardware for synthetic biology. PMID:26716448

  16. Restoring Fort Frontenac in 3D: Effective Usage of 3D Technology for Heritage Visualization

    NASA Astrophysics Data System (ADS)

    Yabe, M.; Goins, E.; Jackson, C.; Halbstein, D.; Foster, S.; Bazely, S.

    2015-02-01

    This paper is composed of three elements: 3D modeling, web design, and heritage visualization. The aim is to use computer graphics design to inform and create an interest in historical visualization by rebuilding Fort Frontenac using 3D modeling and interactive design. The final model will be integrated into an interactive website to learn more about the fort's historic importance. It is apparent that using computer graphics can save time and money when it comes to historical visualization. Visitors do not have to travel to the actual archaeological buildings. They can simply use the Web in their own home to learn about this information virtually. Meticulously following historical records to create a sophisticated restoration of archaeological buildings will draw viewers into visualizations, such as the historical world of Fort Frontenac. As a result, it allows the viewers to effectively understand the fort's social system, habits, and historical events.

  17. Scalable Multi-Platform Distribution of Spatial 3d Contents

    NASA Astrophysics Data System (ADS)

    Klimke, J.; Hagedorn, B.; Döllner, J.

    2013-09-01

    Virtual 3D city models provide powerful user interfaces for communication of 2D and 3D geoinformation. Providing high quality visualization of massive 3D geoinformation in a scalable, fast, and cost efficient manner is still a challenging task. Especially for mobile and web-based system environments, software and hardware configurations of target systems differ significantly. This makes it hard to provide fast, visually appealing renderings of 3D data throughout a variety of platforms and devices. Current mobile or web-based solutions for 3D visualization usually require raw 3D scene data such as triangle meshes together with textures delivered from server to client, which severely limits the size and complexity of the models they can handle. In this paper, we introduce a new approach for provisioning of massive, virtual 3D city models on different platforms, namely web browsers, smartphones, or tablets, by means of an interactive map assembled from artificial oblique image tiles. The key concept is to synthesize such images of a virtual 3D city model by a 3D rendering service in a preprocessing step. This service encapsulates model handling and 3D rendering techniques for high quality visualization of massive 3D models. By generating image tiles using this service, the 3D rendering process is shifted from the client side, which provides major advantages: (a) the complexity of the 3D city model data is decoupled from data transfer complexity; (b) the implementation of client applications is simplified significantly, as 3D rendering is encapsulated on the server side; (c) 3D city models can be easily deployed for and used by a large number of concurrent users, leading to a high degree of scalability of the overall approach. All core 3D rendering techniques are performed on a dedicated 3D rendering server, and thin-client applications can be compactly implemented for various devices and platforms.

  18. A Framework for 3D Model-Based Visual Tracking Using a GPU-Accelerated Particle Filter.

    PubMed

    Brown, J A; Capson, D W

    2012-01-01

    A novel framework for acceleration of particle filtering approaches to 3D model-based, markerless visual tracking in monocular video is described. Specifically, we present a methodology for partitioning and mapping the computationally expensive weight-update stage of a particle filter to a graphics processing unit (GPU) to achieve particle- and pixel-level parallelism. Nvidia CUDA and Direct3D are employed to harness the massively parallel computational power of modern GPUs for simulation (3D model rendering) and evaluation (segmentation, feature extraction, and weight calculation) of hundreds of particles at high speeds. The proposed framework addresses the computational intensity that is intrinsic to all particle filter approaches, including those that have been modified to minimize the number of particles required for a particular task. Performance and tracking quality results for rigid object and articulated hand tracking experiments demonstrate markerless, model-based visual tracking on consumer-grade graphics hardware with pixel-level accuracy up to 95 percent at 60+ frames per second. The framework accelerates particle evaluation up to 49 times over a comparable CPU-only implementation, providing an increased particle count while maintaining real-time frame rates.
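
    The weight-update stage that the framework parallelises can be sketched in simplified form; here a 1D Gaussian likelihood stands in for the render-and-compare evaluation of each particle, the point being that every particle's update is independent and therefore maps cleanly onto GPU threads:

```python
# Sketch of a particle-filter weight update: multiply each particle's
# weight by the likelihood of the observation given that particle,
# then renormalise. In the GPU framework above, the likelihood step is
# a 3D render-plus-segmentation per particle; here a 1D Gaussian
# stands in for it, with illustrative values throughout.
import math

def update_weights(particles, weights, observation, sigma=1.0):
    """One weight update: w_i <- w_i * p(obs | particle_i), renormalised."""
    new = [w * math.exp(-0.5 * ((observation - p) / sigma) ** 2)
           for p, w in zip(particles, weights)]
    total = sum(new)
    return [w / total for w in new]

if __name__ == "__main__":
    particles = [0.0, 1.0, 2.0, 3.0]
    weights = [0.25] * 4                                   # uniform prior
    weights = update_weights(particles, weights, observation=2.1)
    print(weights)                                         # mass concentrates near 2.0
```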

  19. Some Recent Advances in Computer Graphics.

    ERIC Educational Resources Information Center

    Whitted, Turner

    1982-01-01

    General principles of computer graphics are reviewed, including discussions of display hardware, geometric modeling, algorithms, and applications in science, computer-aided design, flight training, communications, business, art, and entertainment. (JN)

  20. Intraoral 3D scanner

    NASA Astrophysics Data System (ADS)

    Kühmstedt, Peter; Bräuer-Burchardt, Christian; Munkelt, Christoph; Heinze, Matthias; Palme, Martin; Schmidt, Ingo; Hintersehr, Josef; Notni, Gunther

    2007-09-01

    Here a new set-up of a 3D-scanning system for CAD/CAM in dental industry is proposed. The system is designed for direct scanning of the dental preparations within the mouth. The measuring process is based on phase correlation technique in combination with fast fringe projection in a stereo arrangement. The novelty in the approach is characterized by the following features: a phase correlation between the phase values of the images of two cameras is used for the co-ordinate calculation. This works contrary to the usage of only phase values (phasogrammetry) or classical triangulation (phase values and camera image co-ordinate values) for the determination of the co-ordinates. The main advantage of the method is that the absolute value of the phase at each point does not directly determine the coordinate. Thus errors in the determination of the co-ordinates are prevented. Furthermore, using the epipolar geometry of the stereo-like arrangement, the phase unwrapping problem of fringe analysis can be solved. The endoscope-like measurement system contains one projection and two camera channels for illumination and observation of the object, respectively. The new system has a measurement field of nearly 25mm × 15mm. The user can measure two or three teeth at one time, so the system can be used for scanning anything from a single tooth up to bridge preparations. In the paper the first realization of the intraoral scanner is described.
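
    The phase-correlation matching can be illustrated with a toy 1D sketch: along an epipolar line, each pixel in one camera is paired with the pixel in the other camera whose unwrapped phase value is closest, so the correspondence, not the absolute phase, determines the coordinate. The synthetic phase ramps below are invented for the example:

```python
# Sketch of phase-based stereo correspondence: along an epipolar line,
# match each left-camera pixel to the right-camera pixel with the
# nearest (unwrapped) phase, then read off the disparity. The linear
# phase profiles are synthetic, for illustration only.

def match_by_phase(left_row, right_row):
    """For each left pixel, index of the right pixel with nearest phase."""
    matches = []
    for phase in left_row:
        best = min(range(len(right_row)), key=lambda j: abs(right_row[j] - phase))
        matches.append(best)
    return matches

if __name__ == "__main__":
    left = [0.2 * i for i in range(10)]            # phase ramp, left camera
    right = [0.2 * i + 0.6 for i in range(10)]     # same ramp shifted by 3 px
    disparity = [i - j for i, j in enumerate(match_by_phase(left, right))]
    print(disparity)
```

    In the real system the match is a correlation over phase maps and triangulation of the matched rays yields the 3D coordinate; the sketch only shows why absolute phase errors cancel out of the correspondence.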

  1. 'Diamond' in 3-D

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D, microscopic imager mosaic of a target area on a rock called 'Diamond Jenness' was taken after NASA's Mars Exploration Rover Opportunity ground into the surface with its rock abrasion tool for a second time.

    Opportunity has bored nearly a dozen holes into the inner walls of 'Endurance Crater.' On sols 177 and 178 (July 23 and July 24, 2004), the rover worked double-duty on Diamond Jenness. Surface debris and the bumpy shape of the rock resulted in a shallow and irregular hole, only about 2 millimeters (0.08 inch) deep. The final depth was not enough to remove all the bumps and leave a neat hole with a smooth floor. This extremely shallow depression was then examined by the rover's alpha particle X-ray spectrometer.

    On Sol 178, Opportunity's 'robotic rodent' dined on Diamond Jenness once again, grinding almost an additional 5 millimeters (about 0.2 inch). The rover then applied its Moessbauer spectrometer to the deepened hole. This double dose of Diamond Jenness enabled the science team to examine the rock at varying layers. Results from those grindings are currently being analyzed.

    The image mosaic is about 6 centimeters (2.4 inches) across.

  2. Prominent rocks - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Many prominent rocks near the Sagan Memorial Station are featured in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. Wedge is at lower left; Shark, Half-Dome, and Pumpkin are at center. Flat Top, about four inches high, is at lower right. The horizon in the distance is one to two kilometers away.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

    Click below to see the left and right views individually. [figure removed for brevity, see original site] Left [figure removed for brevity, see original site] Right

  3. Business Graphics

    NASA Technical Reports Server (NTRS)

    1987-01-01

    Genigraphics Corporation's Masterpiece 8770 FilmRecorder is an advanced high resolution system designed to improve and expand a company's in-house graphics production. The GRAFTIME software package was designed to allow office personnel with minimal training to produce professional level graphics for business communications and presentations. Products are no longer being manufactured.

  4. Graphic Storytelling

    ERIC Educational Resources Information Center

    Thompson, John

    2009-01-01

    Graphic storytelling is a medium that allows students to make and share stories, while developing their art communication skills. American comics today are more varied in genre, approach, and audience than ever before. When considering the impact of Japanese manga on the youth, graphic storytelling emerges as a powerful player in pop culture. In…

  5. Fallon FORGE 3D Geologic Model

    SciTech Connect

    Doug Blankenship

    2016-03-01

    An x,y,z scattered data file for the 3D geologic model of the Fallon FORGE site. Model created in Earthvision by Dynamic Graphic Inc. The model was constructed with a grid spacing of 100 m. Geologic surfaces were extrapolated from the input data using a minimum tension gridding algorithm. The data file is tabular data in a text file, with lithology data associated with X,Y,Z grid points. All the relevant information is in the file header (the spatial reference, the projection etc.) In addition all the fields in the data file are identified in the header.

  6. Faster Aerodynamic Simulation With Cart3D

    NASA Technical Reports Server (NTRS)

    2003-01-01

    A NASA-developed aerodynamic simulation tool is ensuring the safety of future space operations while providing designers and engineers with an automated, highly accurate computer simulation suite. Cart3D, co-winner of NASA's 2002 Software of the Year award, is the result of over 10 years of research and software development conducted by Michael Aftosmis and Dr. John Melton of Ames Research Center and Professor Marsha Berger of the Courant Institute at New York University. Cart3D offers a revolutionary approach to computational fluid dynamics (CFD), the computer simulation of how fluids and gases flow around an object of a particular design. By fusing technological advancements in diverse fields such as mineralogy, computer graphics, computational geometry, and fluid dynamics, the software provides a new industrial geometry processing and fluid analysis capability with unsurpassed automation and efficiency.

  7. Visualizing realistic 3D urban environments

    NASA Astrophysics Data System (ADS)

    Lee, Aaron; Chen, Tuolin; Brunig, Michael; Schmidt, Hauke

    2003-05-01

    Visualizing complex urban environments has been an active research topic due to its wide variety of applications in city planning: road construction, emergency facilities planning, and optimal placement of wireless carrier base stations. Traditional 2D visualizations have been around for a long time, but they only provide a schematic line-drawing bird's eye view and are sometimes confusing to understand due to the lack of depth information. Early versions of 3D systems were developed for very expensive graphics workstations, which seriously limited their availability. In this paper we describe a 3D visualization system for a desktop PC which integrates multiple resolutions of data and provides a realistic view of the urban environment.

  8. Illustrative visualization of 3D city models

    NASA Astrophysics Data System (ADS)

    Doellner, Juergen; Buchholz, Henrik; Nienhaus, Marc; Kirsch, Florian

    2005-03-01

    This paper presents an illustrative visualization technique that provides expressive representations of large-scale 3D city models, inspired by the tradition of artistic and cartographic visualizations typically found in bird's-eye view and panoramic maps. We define a collection of city model components and a real-time multi-pass rendering algorithm that achieves comprehensible, abstract 3D city model depictions based on edge enhancement, color-based and shadow-based depth cues, and procedural facade texturing. Illustrative visualization provides an effective visual interface to urban spatial information and associated thematic information complementing visual interfaces based on the Virtual Reality paradigm, offering a huge potential for graphics design. Primary application areas include city and landscape planning, cartoon worlds in computer games, and tourist information systems.

  9. Methods For Electronic 3-D Moving Pictures Without Glasses

    NASA Astrophysics Data System (ADS)

    Collender, Robert B.

    1987-06-01

    This paper describes implementation approaches in image acquisition and playback for 3-D computer graphics, 3-D television and 3-D theatre movies without special glasses. Projection lamps, spatial light modulators, CRT's and dynamic scanning are all eliminated by the application of an active image array, all static components and a semi-specular screen. The resulting picture shows horizontal parallax with a wide horizontal view field (up to 360 degrees) giving a holographic appearance in full color with smooth continuous viewing without speckle. Static component systems are compared with dynamic component systems using both linear and circular arrays. Implementations of computer graphics systems are shown that allow complex shaded color images to extend from the viewer's eyes to infinity. Large screen systems visible by hundreds of people are feasible by the use of low f-stops and high gain screens in projection. Screen geometries and special screen properties are shown. Viewing characteristics offer no restrictions in view-position over the entire view-field and have a "look-around" feature for all the categories of computer graphics, television and movies. Standard video cassettes and optical discs can also interface the system to generate a 3-D window viewable without glasses. A prognosis is given for technology application to 3-D pictures without glasses that replicate the daily viewing experience. Superposition of computer graphics on real-world pictures is shown feasible.

  10. 3D-CDTI User Manual v2.1

    NASA Technical Reports Server (NTRS)

    Johnson, Walter; Battiste, Vernol

    2016-01-01

    The 3D-Cockpit Display of Traffic Information (3D-CDTI) is a flight deck tool that presents aircrew with: proximal traffic aircraft location, their current status and flight plan data; strategic conflict detection and alerting; automated conflict resolution strategies; the facility to graphically plan manual route changes; time-based, in-trail spacing on approach. The CDTI is manipulated via a touchpad on the flight deck, and by mouse when presented as part of a desktop flight simulator.

  11. GPU accelerated generation of digitally reconstructed radiographs for 2-D/3-D image registration.

    PubMed

    Dorgham, Osama M; Laycock, Stephen D; Fisher, Mark H

    2012-09-01

    Recent advances in programming languages for graphics processing units (GPUs) provide developers with a convenient way of implementing applications which can be executed on the CPU and GPU interchangeably. GPUs are becoming relatively cheap, powerful, and widely available hardware components, which can be used to perform intensive calculations. The last decade of hardware performance developments shows that GPU-based computation is progressing significantly faster than CPU-based computation, particularly if one considers the execution of highly parallelisable algorithms. Future predictions illustrate that this trend is likely to continue. In this paper, we introduce a way of accelerating 2-D/3-D image registration by developing a hybrid system which executes on the CPU and utilizes the GPU for parallelizing the generation of digitally reconstructed radiographs (DRRs). Based on the advancements of the GPU over the CPU, it is timely to exploit the benefits of many-core GPU technology by developing algorithms for DRR generation. Although some previous work has investigated the rendering of DRRs using the GPU, this paper investigates approximations which reduce the computational overhead while still maintaining a quality consistent with that needed for 2-D/3-D registration, with sufficient accuracy to be clinically acceptable in certain applications of radiation oncology. Furthermore, by comparing implementations of 2-D/3-D registration on the CPU and GPU, we investigate current performance and propose an optimal framework for PC implementations addressing the rigid registration problem. Using this framework, we are able to render DRR images from a 256×256×133 CT volume in ~24 ms using an NVidia GeForce 8800 GTX and in ~2 ms using an NVidia GeForce GTX 580. In addition to applications requiring fast automatic patient setup, these levels of performance suggest image-guided radiation therapy at video frame rates is technically feasible using relatively low cost PC hardware.
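    As a rough illustration of what a DRR is, the sketch below integrates attenuation through a toy CT volume along parallel rays. This is a crude CPU stand-in for the perspective GPU ray casting benchmarked in the paper; the function name `drr_parallel` and the toy volume are illustrative assumptions.

```python
import numpy as np

def drr_parallel(volume, axis=0):
    """Digitally reconstructed radiograph by parallel projection.

    Sums attenuation values along one axis of the CT volume and maps
    the accumulated attenuation to a [0, 1) intensity, Beer-Lambert
    style: thicker / denser material gives a brighter pixel.
    """
    accum = volume.sum(axis=axis)
    return 1.0 - np.exp(-accum)

rng = np.random.default_rng(0)
ct = rng.random((32, 64, 64)) * 0.05   # toy CT volume of attenuation coefficients
drr = drr_parallel(ct)                  # 64x64 radiograph
```

    A GPU implementation instead casts one ray per output pixel through the volume in a shader or CUDA kernel, which is what makes the millisecond render times in the paper possible.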

  12. A simultaneous 2D/3D autostereo workstation

    NASA Astrophysics Data System (ADS)

    Chau, Dennis; McGinnis, Bradley; Talandis, Jonas; Leigh, Jason; Peterka, Tom; Knoll, Aaron; Sumer, Aslihan; Papka, Michael; Jellinek, Julius

    2012-03-01

    We present a novel immersive workstation environment that scientists can use for 3D data exploration and as their everyday 2D computer monitor. Our implementation is based on an autostereoscopic dynamic parallax barrier 2D/3D display, interactive input devices, and a software infrastructure that allows client/server software modules to couple the workstation to scientists' visualization applications. This paper describes the hardware construction and calibration, software components, and a demonstration of our system in nanoscale materials science exploration.

  13. Graphic pathogeographies.

    PubMed

    Donovan, Courtney

    2014-09-01

    This paper focuses on the graphic pathogeographies in David B.'s Epileptic and David Small's Stitches: A Memoir to highlight the significance of geographic concepts in graphic novels of health and disease. Despite its importance in such works, few scholars have examined the role of geography in their narrative and structure. I examine the role of place in Epileptic and Stitches to extend the academic discussion on graphic novels of health and disease and identify how such works bring attention to the role of geography in the individual's engagement with health, disease, and related settings.

  14. Graphical programming of telerobotic tasks

    SciTech Connect

    Small, D.E.; McDonald, M.J.

    1996-11-01

    With a goal of producing faster, safer, and cheaper technologies for nuclear waste cleanup, Sandia is actively developing and extending intelligent systems technologies through the US Department of Energy Office of Technology Development (DOE OTD) Robotic Technology Development Program (RTDP). Graphical programming is a key technology for robotic waste cleanup that Sandia is developing for this goal. Graphical programming uses simulation such as TELEGRIP 'on-line' to program and control robots. Characterized by its model-based control architecture, integrated simulation, 'point-and-click' graphical user interfaces, task and path planning software, and network communications, Sandia's Graphical Programming systems allow operators to focus on high-level robotic tasks rather than the low-level details. Use of scripted tasks, rather than customized programs, minimizes the necessity of recompiling supervisory control systems and enhances flexibility. Rapid world-modelling technologies allow Graphical Programming to be used in dynamic and unpredictable environments, including digging and pipe-cutting. This paper describes Sancho, Sandia's most advanced graphical programming supervisory software. Sancho, now operational on several robot systems, incorporates all of Sandia's recent advances in supervisory control. Graphical programming uses 3-D graphics models as intuitive operator interfaces to program and control complex robotic systems. The goal of the paper is to help the reader understand how Sandia implements graphical programming systems and which key features in Sancho have proven to be most effective.

  15. Volumetric 3D display using a DLP projection engine

    NASA Astrophysics Data System (ADS)

    Geng, Jason

    2012-03-01

    In this article, we describe a volumetric 3D display system based on the high-speed DLP™ (Digital Light Processing) projection engine. Existing two-dimensional (2D) flat screen displays often lead to ambiguity and confusion in high-dimensional data/graphics presentation due to the lack of true depth cues. Even with the help of powerful 3D rendering software, three-dimensional (3D) objects displayed on a 2D flat screen may still fail to provide spatial relationships or depth information correctly and effectively. Essentially, 2D displays have to rely upon the capability of the human brain to piece together a 3D representation from 2D images. Despite the impressive mental capability of the human visual system, its visual perception is not reliable if certain depth cues are missing. In contrast, the volumetric 3D display technologies discussed in this article are capable of displaying volumetric images in true 3D space. Each "voxel" in a 3D image (analogous to a pixel in a 2D image) is physically located at the spatial position where it is supposed to be, and emits light from that position in all directions to form a real 3D image in 3D space. Such a volumetric 3D display provides both physiological and psychological depth cues to the human visual system, which can then truthfully perceive 3D objects. It yields a realistic spatial representation of 3D objects and simplifies our understanding of complex 3D objects and the spatial relationships among them.

  16. Color postprocessing for 3-dimensional finite element mesh quality evaluation and evolving graphical workstation

    NASA Technical Reports Server (NTRS)

    Panthaki, Malcolm J.

    1987-01-01

    Three general tasks in general-purpose, interactive color graphics postprocessing for three-dimensional computational mechanics were accomplished. First, the existing program (POSTPRO3D) was ported to a high-resolution device. In the course of this transfer, numerous enhancements were implemented in the program. The performance of the hardware was evaluated from the point of view of engineering postprocessing, and the characteristics of future hardware were discussed. Second, interactive graphical tools were implemented to facilitate qualitative mesh evaluation from a single analysis. The literature was surveyed and a bibliography compiled. Qualitative mesh sensors were examined, and the use of two-dimensional plots of unaveraged responses on the surface of three-dimensional continua was emphasized in an interactive color raster graphics environment. Finally, a postprocessing environment was designed for state-of-the-art workstation technology. Modularity, personalization of the environment, integration of the engineering design processes, and the development and use of high-level graphics tools are some of the features of the intended environment.

  17. Bird's Eye View - A 3-D Situational Awareness Tool for the Space Station

    NASA Technical Reports Server (NTRS)

    Dershowitz, Adam; Chamitoff, Gregory

    2002-01-01

    Even as space-qualified computer hardware lags well behind the latest home computers, the possibility of using high-fidelity interactive 3-D graphics for displaying important on-board information has finally arrived, and is being used on board the International Space Station (ISS). With the quantity and complexity of space-flight telemetry, 3-D displays can greatly enhance the ability of users, both onboard and on the ground, to interpret data quickly and accurately. This is particularly true for data related to vehicle attitude, position, configuration, and relation to other objects on the ground or in orbit. Bird's Eye View (BEV) is a 3-D real-time application that provides a high degree of Situational Awareness for the crew. Its purpose is to instantly convey important motion-related parameters to the crew and mission controllers by presenting 3-D simulated camera views of the International Space Station (ISS) in its actual environment. Driven by actual telemetry, and running on board as well as on the ground, the application lets the user visualize the Space Station relative to the Earth, Sun, stars, various reference frames, and selected targets, such as ground sites or communication satellites. Since the actual ISS configuration (geometry) is also modeled accurately, everything from the alignment of the solar panels to the expected view from a selected window can be visualized accurately. A virtual representation of the Space Station in real time has many useful applications. By selecting different cameras, the crew or mission control can monitor the station's orientation in space, position over the Earth, transition from day to night, direction to the Sun, the view from a particular window, or the motion of the robotic arm. By viewing the vehicle attitude and solar panel orientations relative to the Sun, the power status of the ISS can be easily visualized and understood. Similarly, the thermal impacts of vehicle attitude can be analyzed and visually confirmed. 
Communication

  18. Teaching Molecular 3-D Literacy

    ERIC Educational Resources Information Center

    Richardson, David C.; Richardson, Jane S.

    2002-01-01

    This article describes how the use of interactive molecular graphics makes a unique and important contribution to student learning of biochemistry and molecular biology at any level. These authors developed the concept of the kinemage (from "kinetic image"), a different way of organizing computer graphics that is aimed explicitly at the…

  19. Glnemo2: Interactive Visualization 3D Program

    NASA Astrophysics Data System (ADS)

    Lambert, Jean-Charles

    2011-10-01

    Glnemo2 is an interactive 3D visualization program developed in C++ using the OpenGL library and the Nokia Qt 4.x API. It displays in 3D the particle positions of the different components of an N-body snapshot. It quickly gives a lot of information about the data (shape, density areas, formation of structures such as spirals, bars, or peanuts). It allows for in/out zooms, rotations, changes of scale, translations, selection of different groups of particles, and plots in different blending colors. It can color particles according to their density or temperature, play with the density threshold, trace orbits, display different time steps, take automatic screenshots to make movies, select particles using the mouse, and fly over a simulation using a given camera path. All these features are accessible from a very intuitive graphical user interface. Glnemo2 supports a wide range of input file formats (Nemo, Gadget 1 and 2, phiGrape, Ramses, list of files, real-time gyrfalcON simulation) which are automatically detected at loading time without user intervention. Glnemo2 uses a plugin mechanism to load the data, so it is easy to add a new file reader. It is powered by a 3D engine which uses the latest OpenGL technology, such as shaders (GLSL), vertex buffer objects, and frame buffer objects, and takes into account the power of the graphics card used in order to accelerate the rendering. With a fast GPU, millions of particles can be rendered in real time. Glnemo2 runs on Linux, Windows (using the MinGW compiler), and Mac OS X, thanks to the Qt4 API.

  20. 3D Spectroscopy in Astronomy

    NASA Astrophysics Data System (ADS)

    Mediavilla, Evencio; Arribas, Santiago; Roth, Martin; Cepa-Nogué, Jordi; Sánchez, Francisco

    2011-09-01

    Preface; Acknowledgements; 1. Introductory review and technical approaches Martin M. Roth; 2. Observational procedures and data reduction James E. H. Turner; 3. 3D Spectroscopy instrumentation M. A. Bershady; 4. Analysis of 3D data Pierre Ferruit; 5. Science motivation for IFS and galactic studies F. Eisenhauer; 6. Extragalactic studies and future IFS science Luis Colina; 7. Tutorials: how to handle 3D spectroscopy data Sebastian F. Sánchez, Begona García-Lorenzo and Arlette Pécontal-Rousset.

  1. Spherical 3D isotropic wavelets

    NASA Astrophysics Data System (ADS)

    Lanusse, F.; Rassat, A.; Starck, J.-L.

    2012-04-01

    Context. Future cosmological surveys will provide 3D large scale structure maps with large sky coverage, for which a 3D spherical Fourier-Bessel (SFB) analysis in spherical coordinates is natural. Wavelets are particularly well-suited to the analysis and denoising of cosmological data, but a spherical 3D isotropic wavelet transform does not currently exist to analyse spherical 3D data. Aims: The aim of this paper is to present a new formalism for a spherical 3D isotropic wavelet, i.e. one based on the SFB decomposition of a 3D field and accompany the formalism with a public code to perform wavelet transforms. Methods: We describe a new 3D isotropic spherical wavelet decomposition based on the undecimated wavelet transform (UWT) described in Starck et al. (2006). We also present a new fast discrete spherical Fourier-Bessel transform (DSFBT) based on both a discrete Bessel transform and the HEALPIX angular pixelisation scheme. We test the 3D wavelet transform and as a toy-application, apply a denoising algorithm in wavelet space to the Virgo large box cosmological simulations and find we can successfully remove noise without much loss to the large scale structure. Results: We have described a new spherical 3D isotropic wavelet transform, ideally suited to analyse and denoise future 3D spherical cosmological surveys, which uses a novel DSFBT. We illustrate its potential use for denoising using a toy model. All the algorithms presented in this paper are available for download as a public code called MRS3D at http://jstarck.free.fr/mrs3d.html
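    The undecimated wavelet transform (UWT) underlying the spherical construction in this record is easiest to see in 1D: each wavelet band is the difference between two successive smoothings, so the bands plus the final smooth plane sum back to the original signal exactly. The sketch below is an illustrative "à trous" decomposition with the usual B3-spline kernel, not the paper's MRS3D code, and the function name `atrous_1d` is an assumption.

```python
import numpy as np

def atrous_1d(signal, scales=3):
    """Isotropic undecimated (a trous) wavelet decomposition in 1D.

    Returns the wavelet bands and the final coarse (smooth) plane.
    Summing all bands plus the coarse plane reconstructs the signal
    exactly, the property that makes the transform convenient for
    denoising: threshold the bands, then sum back.
    """
    h = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0  # B3-spline kernel
    c = signal.astype(float)
    bands = []
    for j in range(scales):
        # Insert 2**j - 1 zeros between kernel taps (the "holes").
        kernel = np.zeros(4 * 2**j + 1)
        kernel[:: 2**j] = h
        pad = len(kernel) // 2
        smooth = np.convolve(np.pad(c, pad, mode="wrap"), kernel, mode="valid")
        bands.append(c - smooth)   # wavelet band at scale j
        c = smooth
    return bands, c

x = np.sin(np.linspace(0, 4 * np.pi, 64)) + 0.1
bands, coarse = atrous_1d(x)
recon = sum(bands) + coarse        # exact reconstruction
```

    The paper's contribution is performing this same smoothing-difference scheme on the spherical Fourier-Bessel coefficients of a 3D field rather than on a flat grid.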

  2. True 3D displays for avionics and mission crewstations

    NASA Astrophysics Data System (ADS)

    Sholler, Elizabeth A.; Meyer, Frederick M.; Lucente, Mark E.; Hopper, Darrel G.

    1997-07-01

    3D threat projection has been shown to decrease human recognition time for events, especially for a jet fighter pilot or C4I sensor operator, for whom early realization that a hostile threat condition exists is the basis of survival. Decreased threat recognition time improves the survival rate and results from more effective presentation techniques, including the visual cue of true 3D (T3D) display. The concept of 'font' describes the approach adopted here, but whereas a 2D font comprises pixel bitmaps, a T3D font herein comprises a set of hologram bitmaps. The T3D font bitmaps are pre-computed, stored, and retrieved as needed to build images comprising symbols and/or characters. Human performance improvement, hologram generation for a T3D symbol font, projection requirements, and potential hardware implementation schemes are described. The goal is to employ computer-generated holography to create T3D depictions of dynamic threat environments using fieldable hardware.

  3. 3D Elevation Program—Virtual USA in 3D

    USGS Publications Warehouse

    Lukas, Vicki; Stoker, J.M.

    2016-04-14

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called ‘lidar’ (light detection and ranging) to create a virtual reality map of the Nation that is very accurate. 3D maps have many uses with new uses being discovered all the time.  

  4. Performance and Cognitive Assessment in 3-D Modeling

    ERIC Educational Resources Information Center

    Fahrer, Nolan E.; Ernst, Jeremy V.; Branoff, Theodore J.; Clark, Aaron C.

    2011-01-01

    The purpose of this study was to investigate identifiable differences between performance and cognitive assessment scores in a 3-D modeling unit of an engineering drafting course curriculum. The study aimed to provide further investigation of the need of skill-based assessments in engineering/technical graphics courses to potentially increase…

  5. 3D scientific visualization of reservoir simulation post-processing

    SciTech Connect

    Sousa, M.C.; Miranda-Filho, D.N.

    1994-12-31

    This paper describes a 3D visualization software package designed at PETROBRAS and TecGraf/PUC-RJ in Brazil for the analysis of reservoir engineering post-processing data. It offers an advanced functional environment on graphical workstations with an intuitive and ergonomic interface. Applications to real reservoir models show the enriching features of the software.

  6. Postprocessing techniques for 3D non-linear structures

    NASA Technical Reports Server (NTRS)

    Gallagher, Richard S.

    1987-01-01

    This paper reviews how graphics postprocessing techniques are currently used to examine the results of 3-D nonlinear analyses, presents some new techniques that take advantage of recent technology, and discusses how these results relate to both the finite element model and its geometric parent.

  7. Modern hardware architectures accelerate porous media flow computations

    NASA Astrophysics Data System (ADS)

    Kulczewski, Michal; Kurowski, Krzysztof; Kierzynka, Michal; Dohnalik, Marek; Kaczmarczyk, Jan; Borujeni, Ali Takbiri

    2012-05-01

    Investigation of rock properties, porosity and permeability particularly, which determine the transport characteristics of the medium, is crucial to reservoir engineering. Nowadays, micro-tomography (micro-CT) methods allow one to obtain a wealth of petrophysical properties. The micro-CT method facilitates visualization of pore structures and acquisition of the total porosity factor, determined by sticking together 2D slices of scanned rock and applying a proper absorption cut-off point. Proper segmentation of the pore representation in 3D is important for solving the permeability of porous media. This factor has recently been determined by means of Computational Fluid Dynamics (CFD), a popular method to analyze problems related to fluid flows, taking advantage of numerical methods and constantly growing computing power. The recent advent of novel multi-core, many-core and graphics processing unit (GPU) hardware architectures allows scientists to benefit even more from parallel processing and built-in new features. The high level of parallel scalability offers both decreased time-to-solution and greater accuracy, top factors in reservoir engineering. This paper aims to present research results related to fluid flow simulations, particularly solving the total porosity and permeability of porous media, taking advantage of modern hardware architectures. In our approach, total porosity is calculated by means of general-purpose computing on multiple GPUs. This application sticks together 2D slices of scanned rock and, by means of a marching tetrahedra algorithm, creates a 3D representation of the pores and calculates the total porosity. Experimental results are compared with data obtained via other popular methods, including Nuclear Magnetic Resonance (NMR), helium porosity and nitrogen permeability tests. Then CFD simulations are performed on a large-scale high performance hardware architecture to solve the flow and permeability of porous media. In our experiments we used the Lattice Boltzmann method.
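    The segmentation step behind total porosity reduces to a simple idea: classify each voxel as pore or rock by an absorption cut-off, then take the pore volume fraction. The sketch below shows that step on a toy volume (the paper additionally builds an explicit 3D pore surface with marching tetrahedra on GPUs; the function name `total_porosity` and the toy numbers are illustrative assumptions).

```python
import numpy as np

def total_porosity(volume, cutoff):
    """Total porosity of a micro-CT volume.

    Voxels whose absorption value falls below the cut-off point are
    classified as pore space; porosity is their volume fraction.
    """
    pores = volume < cutoff
    return pores.sum() / volume.size

# Toy stack of "scanned slices": solid rock (value 1.0) with one pore channel (0.1).
rock = np.ones((16, 16, 16))
rock[:, 7:9, 7:9] = 0.1
phi = total_porosity(rock, cutoff=0.5)   # 64 pore voxels / 4096 total
```

    The choice of cut-off point dominates the result, which is why the paper validates it against NMR and helium porosity measurements.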

  8. Filming Underwater in 3d Respecting Stereographic Rules

    NASA Astrophysics Data System (ADS)

    Rinaldi, R.; Hordosch, H.

    2015-04-01

    After an experimental phase of many years, 3D filming is now effective and successful. Improvements are still possible, but the film industry has achieved memorable successes at the 3D movie box office due to the overall quality of its products. Special environments such as space ("Gravity") and the underwater realm seem perfect for reproduction in 3D. "Filming in space" was possible in "Gravity" using special effects and computer graphics. The underwater realm is still difficult to handle. Until not long ago, underwater filming in 3D was not as easy and effective as filming in 2D. After almost 3 years of research, a French, Austrian and Italian team realized a perfect tool to film underwater, in 3D, without any constraints. This allows filmmakers to bring the audience deep inside an environment where they most probably will never have the chance to be.

  9. A 3-d modular gripper design tool

    SciTech Connect

    Brown, R.G.; Brost, R.C.

    1997-02-01

    Modular fixturing kits are sets of components used for flexible, rapid construction of fixtures. A modular vise is a parallel-jaw vise, each jaw of which is a modular fixture plate with a regular grid of precisely positioned holes. To fixture a part, one places pins in some of the holes so that when the vise is closed, the part is reliably located and completely constrained. The modular vise concept can be adapted easily to the design of modular parallel-jaw grippers for robots. By attaching a grid-plate to each jaw of a parallel-jaw gripper, one gains the ability to easily construct high-quality grasps for a wide variety of parts from a standard set of hardware. Wallack and Canny developed an algorithm for planning planar grasp configurations for the modular vise. In this paper, the authors expand this work to produce a 3-d fixture/gripper design tool. They describe several analyses they have added to the planar algorithm, including a 3-d grasp quality metric based on force information, 3-d geometric loading analysis, and inter-gripper interference analysis. Finally, the authors describe two applications of their code. One of these is an internal application at Sandia, while the other shows a potential use of the code for designing part of an agile assembly line.

  10. VPython: Writing Real-time 3D Physics Programs

    NASA Astrophysics Data System (ADS)

    Chabay, Ruth

    2001-06-01

    VPython (http://cil.andrew.cmu.edu/projects/visual) combines the Python programming language with an innovative 3D graphics module called Visual, developed by David Scherer. Designed to make 3D physics simulations accessible to novice programmers, VPython allows the programmer to write a purely computational program without any graphics code, and produces an interactive real-time 3D graphical display. In a program, 3D objects are created and their positions are modified by computational algorithms. Running in a separate thread, the Visual module monitors the positions of these objects and renders them many times per second. Using the mouse, one can zoom and rotate to navigate through the scene. After one hour of instruction, students in an introductory physics course at Carnegie Mellon University, including those who have never programmed before, write programs in VPython to model the behavior of physical systems and to visualize fields in 3D. The Numeric array processing module allows the construction of more sophisticated simulations and models as well. VPython is free and open source. The Visual module is based on OpenGL, and runs on Windows, Linux, and Macintosh.
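    The computational core of a typical student VPython program is just a numerical integration loop. The sketch below keeps only that physics (a mass on a spring integrated with Euler-Cromer) so it runs without a display; in an actual VPython program the position update would drive the `pos` attribute of a sphere object, and the Visual module would render it many times per second. The parameter values here are illustrative.

```python
# Euler-Cromer integration of a mass on a spring: update velocity from the
# force first, then position from the new velocity. This ordering keeps the
# oscillation's amplitude stable over many periods, unlike plain Euler.
k, m, dt = 4.0, 1.0, 0.001   # spring constant, mass, time step (toy values)
x, v = 1.0, 0.0              # start displaced 1 unit, at rest
trajectory = []
for step in range(int(2.0 / dt)):   # simulate 2 seconds
    v += (-k / m * x) * dt          # velocity update from spring force
    x += v * dt                     # then position update (Euler-Cromer)
    trajectory.append(x)
```

    With these values the angular frequency is ω = √(k/m) = 2 rad/s, so the mass swings from +1 through roughly −1 within the 2-second run.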

  11. 3D visualization of the human cerebral vasculature

    NASA Astrophysics Data System (ADS)

    Zrimec, Tatjana; Mander, Tom; Lambert, Timothy; Parker, Geoffrey

    1995-04-01

    Computer assisted 3D visualization of the human cerebro-vascular system can help to locate blood vessels during diagnosis and to approach them during treatment. Our aim is to reconstruct the human cerebro-vascular system from the partial information collected from a variety of medical imaging instruments and to generate a 3D graphical representation. This paper describes a tool developed for 3D visualization of cerebro-vascular structures. It also describes a symbolic approach to modeling vascular anatomy. The tool, called Ispline, is used to display the graphical information stored in a symbolic model of the vasculature. The vascular model was developed to assist image processing and image fusion. The model consists of a structural symbolic representation using frames and a geometrical representation of vessel shapes and vessel topology. Ispline has proved to be useful for visualizing both the synthetically constructed vessels of the symbolic model and the vessels extracted from a patient's MR angiograms.

  12. The EISCAT_3D Science Case

    NASA Astrophysics Data System (ADS)

    Tjulin, A.; Mann, I.; McCrea, I.; Aikio, A. T.

    2013-05-01

    projection in the high-latitude ionosphere. EISCAT_3D can also be used to study solar system properties. Thanks to the high power and great accuracy, mapping of objects like the Moon and asteroids is possible. With the high power and large antenna aperture, incoherent scatter radars can be extraordinarily good monitors of extraterrestrial dust and its interaction with the atmosphere. Although incoherent scatter radars, such as EISCAT_3D, are few in number, the power and versatility of their measurement technique mean that they can measure parameters which are not obtainable otherwise, and thus also be a cornerstone in the international efforts to measure and predict space weather effects. Finally, over the years the EISCAT radars have served as a testbed for new ideas in radar coding and data analysis. EISCAT_3D will be the first of a new generation of "software radars" whose advanced capabilities will be realised not by its hardware but by the flexibility and adaptability of the scheduling, beam-forming, signal processing and analysis software used to control the radar and process its data. Thus, new techniques will be developed into standard observing applications for implementation in the next generation of software radars.

  13. 3D World Building System

    ScienceCinema

    None

    2016-07-12

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  14. 3D Buckligami: Digital Matter

    NASA Astrophysics Data System (ADS)

    van Hecke, Martin; de Reus, Koen; Florijn, Bastiaan; Coulais, Corentin

    2014-03-01

    We present a class of elastic structures which exhibit collective buckling in 3D, and create these by a 3D printing/moulding technique. Our structures consist of a cubic lattice of anisotropic unit cells, and we show that their mechanical properties are programmable via the orientation of these unit cells.

  15. 3D World Building System

    SciTech Connect

    2013-10-30

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  16. LLNL-Earth3D

    SciTech Connect

    2013-10-01

    Earth3D is a computer code designed to allow fast calculation of seismic rays and travel times through a 3D model of the Earth. LLNL is using this for earthquake location and global tomography efforts and such codes are of great interest to the Earth Science community.

  17. Market study: 3-D eyetracker

    NASA Technical Reports Server (NTRS)

    1977-01-01

    A market study of a proposed version of a 3-D eyetracker for initial use at NASA's Ames Research Center was made. The commercialization potential of a simplified, less expensive 3-D eyetracker was ascertained. Primary focus on present and potential users of eyetrackers, as well as present and potential manufacturers has provided an effective means of analyzing the prospects for commercialization.

  18. 3D-CANVENT: An interactive mine ventilation simulator

    SciTech Connect

    Hardcastle, S.G.

    1995-12-31

    3D-CANVENT is a software package that integrates advanced computer aided design (ACAD) true 3D graphics with a mine ventilation simulator. The package runs as a Windows™ application to access its printer drivers environment and does not need third party CAD software. It is composed of two primary modules: DMVENT and MINEDESIGNER. DMVENT is a traditional Fortran-coded Hardy-Cross iterative ventilation network solver written in 1980 with thermodynamic capabilities. This module is relatively unchanged, with the traditional data input options for branch type, specified or calculated resistances, fixed flows, and fixed or variable pressure fans. MINEDESIGNER is the graphics engine that optimizes the ventilation design process. It performs the front-end transformation of input data entered in the graphical interface into the correct format for the solver. At the back-end it reconverts the historically standard tabular data output from the solver into an easily viewed graphical format. ACAD features of MINEDESIGNER are used to generate a 3D wire-frame node and branch network of the mine's ventilation system. The network can be displayed in up to 4 views oriented to XYZ planes or a 3D view. All the views have zoom, pan, slice and rotate options. The graphical interface efficiently permits data entry and editing via a mouse with pick-and-point item selection. Branches can be found or added with "search" and "join" options. Visual interpretation is enhanced by the 16 colour options for branches and numerous graphical attributes. Network locations are readily identified by alpha-numeric names for branches, junctions and fans, and also the logical numbering of junctions. The program is also readily expandable for pollutant simulation and control/monitoring applications.
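    The Hardy-Cross method used by the DMVENT solver balances mesh flows iteratively: each branch obeys the square law p = R·Q·|Q|, and each loop receives the correction ΔQ = −Σ(R·Q·|Q|) / (2·Σ(R·|Q|)) until pressure drops around every loop cancel. The sketch below is a minimal illustration on a two-airway parallel network, not DMVENT itself; the function name `hardy_cross` and the toy resistances are assumptions.

```python
def hardy_cross(resistances, flows, loops, tol=1e-9, max_iter=100):
    """Hardy-Cross iterative flow balancing for a ventilation network.

    Each loop is a list of (branch_index, direction) pairs; the mesh
    correction dQ = -sum(R*Q*|Q|) / (2 * sum(R*|Q|)) is applied with
    the loop's sign convention until all corrections fall below tol.
    """
    for _ in range(max_iter):
        worst = 0.0
        for loop in loops:
            num = sum(s * resistances[i] * flows[i] * abs(flows[i]) for i, s in loop)
            den = sum(2 * resistances[i] * abs(flows[i]) for i, s in loop)
            dq = -num / den
            for i, s in loop:
                flows[i] += s * dq        # continuity is preserved by construction
            worst = max(worst, abs(dq))
        if worst < tol:
            break
    return flows

# Two airways in parallel sharing 100 m^3/s of air; R in N*s^2/m^8 (toy values).
R = [0.5, 2.0]
Q = [50.0, 50.0]                          # initial guess satisfying continuity
Q = hardy_cross(R, Q, loops=[[(0, +1), (1, -1)]])
```

    At equilibrium R1·Q1² = R2·Q2² with Q1 + Q2 = 100, so the lower-resistance airway carries twice the flow of the other (about 66.7 vs 33.3 m³/s).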

  19. Euro3D Science Conference

    NASA Astrophysics Data System (ADS)

    Walsh, J. R.

    2004-02-01

    The Euro3D RTN is an EU funded Research Training Network to foster the exploitation of 3D spectroscopy in Europe. 3D spectroscopy is a general term for spectroscopy of an area of the sky and derives its name from its two spatial + one spectral dimensions. There are an increasing number of instruments which use integral field devices to achieve spectroscopy of an area of the sky, either using lens arrays, optical fibres or image slicers, to pack spectra of multiple pixels on the sky ("spaxels") onto a 2D detector. On account of the large volume of data and the special methods required to reduce and analyse 3D data, there are only a few centres of expertise and these are mostly involved with instrument developments. There is a perceived lack of expertise in 3D spectroscopy spread through the astronomical community and its use in the armoury of the observational astronomer is viewed as being highly specialised. For precisely this reason the Euro3D RTN was proposed to train young researchers in this area and develop user tools to widen the experience with this particular type of data in Europe. The Euro3D RTN is coordinated by Martin M. Roth (Astrophysikalisches Institut Potsdam) and has been running since July 2002. The first Euro3D science conference was held in Cambridge, UK from 22 to 23 May 2003. The main emphasis of the conference was, in keeping with the RTN, to expose the work of the young post-docs who are funded by the RTN. In addition the team members from the eleven European institutes involved in Euro3D also presented instrumental and observational developments. The conference was organized by Andy Bunker and held at the Institute of Astronomy. There were over thirty participants and 26 talks covered the whole range of application of 3D techniques. The science ranged from Galactic planetary nebulae and globular clusters to kinematics of nearby galaxies out to objects at high redshift. 
Several talks were devoted to reporting recent observations with newly

  20. 3D vision system assessment

    NASA Astrophysics Data System (ADS)

    Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Bryan; Chenault, David B.; Kingston, David; Geulen, Vanilynmae; Newell, Scott; Pettijohn, Brad

    2009-02-01

    In this paper, we report on the development of a 3D vision system consisting of a flat panel stereoscopic display and auto-converging stereo camera and an assessment of the system's use for robotic driving, manipulation, and surveillance operations. The 3D vision system was integrated onto a Talon Robot and Operator Control Unit (OCU) such that direct comparisons of the performance of a number of test subjects using 2D and 3D vision systems were possible. A number of representative scenarios were developed to determine which tasks benefited most from the added depth perception and to understand when the 3D vision system hindered understanding of the scene. Two tests were conducted at Fort Leonard Wood, MO with noncommissioned officers ranked Staff Sergeant and Sergeant First Class. The scenarios; the test planning, approach and protocols; the data analysis; and the resulting performance assessment of the 3D vision system are reported.

  1. 3D printing in dentistry.

    PubMed

    Dawood, A; Marti Marti, B; Sauret-Jackson, V; Darwood, A

    2015-12-01

    3D printing has been hailed as a disruptive technology which will change manufacturing. Used in aerospace, defence, art and design, 3D printing is becoming a subject of great interest in surgery. The technology has a particular resonance with dentistry, and with advances in 3D imaging and modelling technologies such as cone beam computed tomography and intraoral scanning, and with the relatively long history of the use of CAD CAM technologies in dentistry, it will become of increasing importance. Uses of 3D printing include the production of drill guides for dental implants, the production of physical models for prosthodontics, orthodontics and surgery, the manufacture of dental, craniomaxillofacial and orthopaedic implants, and the fabrication of copings and frameworks for implant and dental restorations. This paper reviews the types of 3D printing technologies available and their various applications in dentistry and in maxillofacial surgery.

  2. Computer Series, 3: Computer Graphics for Chemical Education.

    ERIC Educational Resources Information Center

    Soltzberg, Leonard J.

    1979-01-01

    Surveys the current scene in computer graphics from the point of view of a chemistry educator. Discusses the scope of current applications of computer graphics in chemical education, and provides information about hardware and software systems to promote communication with vendors of computer graphics equipment. (HM)

  3. Applying a Genetic Algorithm to Reconfigurable Hardware

    NASA Technical Reports Server (NTRS)

    Wells, B. Earl; Weir, John; Trevino, Luis; Patrick, Clint; Steincamp, Jim

    2004-01-01

    This paper investigates the feasibility of applying genetic algorithms to solve optimization problems that are implemented entirely in reconfigurable hardware. The paper highlights the performance/design-space trade-offs that must be understood to effectively implement a standard genetic algorithm within a modern Field Programmable Gate Array (FPGA) reconfigurable hardware environment and presents a case study where this stochastic search technique is applied to standard test-case problems taken from the technical literature. In this research, the targeted FPGA-based platform and high-level design environment was the Starbridge Hypercomputing platform, which incorporates multiple Xilinx Virtex II FPGAs, and the Viva graphical hardware description language.
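    The standard genetic algorithm that the paper maps onto FPGA fabric can be sketched in software. The following minimal Python version is an illustration only, not the authors' hardware design: the one-max fitness function and all parameter values are assumptions standing in for the paper's test-case problems, and the selection/crossover/mutation loop is what a hardware implementation would parallelize.

```python
import random

def one_max(bits):
    """Illustrative fitness: count of 1 bits (a common GA benchmark)."""
    return sum(bits)

def tournament(pop, fits, k=3):
    """Return the fittest of k randomly sampled individuals."""
    picks = random.sample(range(len(pop)), k)
    return pop[max(picks, key=lambda i: fits[i])]

def crossover(a, b):
    """Single-point crossover of two bit strings."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.01):
    """Flip each bit independently with a small probability."""
    return [b ^ 1 if random.random() < rate else b for b in bits]

def genetic_algorithm(n_bits=32, pop_size=40, generations=60):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=one_max)
    for _ in range(generations):
        fits = [one_max(ind) for ind in pop]
        pop = [mutate(crossover(tournament(pop, fits), tournament(pop, fits)))
               for _ in range(pop_size)]
        gen_best = max(pop, key=one_max)
        if one_max(gen_best) > one_max(best):
            best = gen_best
    return best

random.seed(1)
solution = genetic_algorithm()
print(one_max(solution))  # close to 32 on this easy benchmark
```

The fitness evaluation of the whole population is embarrassingly parallel, which is exactly the step an FPGA implementation can pipeline.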

  4. Novel 3D/VR interactive environment for MD simulations, visualization and analysis.

    PubMed

    Doblack, Benjamin N; Allis, Tim; Dávila, Lilian P

    2014-12-18

    The increasing development of computing (hardware and software) in the last decades has impacted scientific research in many fields including materials science, biology, chemistry and physics among many others. A new computational system for the accurate and fast simulation and 3D/VR visualization of nanostructures is presented here, using the open-source molecular dynamics (MD) computer program LAMMPS. This alternative computational method uses modern graphics processors, NVIDIA CUDA technology and specialized scientific codes to overcome processing speed barriers common to traditional computing methods. In conjunction with a virtual reality system used to model materials, this enhancement allows the addition of accelerated MD simulation capability. The motivation is to provide a novel research environment which simultaneously allows visualization, simulation, modeling and analysis. The research goal is to investigate the structure and properties of inorganic nanostructures (e.g., silica glass nanosprings) under different conditions using this innovative computational system. The work presented outlines a description of the 3D/VR Visualization System and basic components, an overview of important considerations such as the physical environment, details on the setup and use of the novel system, a general procedure for the accelerated MD enhancement, technical information, and relevant remarks. The impact of this work is the creation of a unique computational system combining nanoscale materials simulation, visualization and interactivity in a virtual environment, which is both a research and teaching instrument at UC Merced.

  6. 3D Integration for Wireless Multimedia

    NASA Astrophysics Data System (ADS)

    Kimmich, Georg

    The convergence of mobile phone, internet, mapping, gaming and office automation tools with high quality video and still imaging capture capability is becoming a strong market trend for portable devices. High-density video encode and decode, 3D graphics for gaming, increased application-software complexity and ultra-high-bandwidth 4G modem technologies are driving the CPU performance and memory bandwidth requirements close to the PC segment. These portable multimedia devices are battery operated, which requires the deployment of new low-power-optimized silicon process technologies and ultra-low-power design techniques at system, architecture and device level. Mobile devices also need to comply with stringent silicon-area and package-volume constraints. As for all consumer devices, low production cost and fast time-to-volume production is key for success. This chapter shows how 3D architectures can bring a possible breakthrough to meet the conflicting power, performance and area constraints. Multiple 3D die-stacking partitioning strategies are described and analyzed for their potential to improve the overall system power, performance and cost for specific application scenarios. Requirements and maturity of the basic process-technology bricks including through-silicon via (TSV) and die-to-die attachment techniques are reviewed. Finally, we highlight new challenges that will arise with 3D stacking and give an outlook on how they may be addressed: higher power density will require thermal design considerations, new EDA tools will need to be developed to cope with the integration of heterogeneous technologies and to guarantee signal and power integrity across the die stack, and silicon/wafer test strategies will have to be adapted to handle high-density IO arrays and ultra-thin wafers and to provide built-in self-test of attached memories. New standards and business models have to be developed to allow cost-efficient assembly and testing of devices from different silicon and technology

  7. Interactive 3d Landscapes on Line

    NASA Astrophysics Data System (ADS)

    Fanini, B.; Calori, L.; Ferdani, D.; Pescarin, S.

    2011-09-01

    The paper describes challenges identified while developing browser-embedded 3D landscape rendering applications, our current approach and workflow, and how recent developments in browser technologies could affect them. Even after processing by optimization and decimation tools, the data result in very large databases that require paging, streaming and level-of-detail (LOD) techniques to allow real-time remote use over the web. Our approach has been to select an open source scene-graph based visual simulation library with sufficient performance and flexibility and adapt it to the web by providing a browser plug-in. Within the current Montegrotto VR Project, content produced with new pipelines has been integrated. The whole Montegrotto Town has been generated procedurally by CityEngine. We used this procedural approach, based on algorithms and procedures, because it is particularly well suited to creating extensive and credible urban reconstructions. To create the archaeological sites we used optimized meshes acquired with laser scanning and photogrammetry techniques, whereas to realize the 3D reconstructions of the main historical buildings we adopted computer-graphics software such as Blender and 3ds Max. At the final stage, semi-automatic tools have been developed and used to prepare and cluster 3D models and scene-graph routes for web publishing. Vegetation generators have also been used to populate the virtual scene and enhance the realism perceived by the user during navigation. After describing the 3D modelling and optimization techniques, the paper discusses its results and expectations.

  8. Graphic Arts.

    ERIC Educational Resources Information Center

    Towler, Alan L.

    This guide to teaching graphic arts, one in a series of instructional materials for junior high industrial arts education, is designed to assist teachers as they plan and implement new courses of study and as they make revisions and improvements in existing courses in order to integrate classroom learning with real-life experiences. This graphic…

  9. Efficient 3D rendering for web-based medical imaging software: a proof of concept

    NASA Astrophysics Data System (ADS)

    Cantor-Rivera, Diego; Bartha, Robert; Peters, Terry

    2011-03-01

    Medical Imaging Software (MIS) found in research and in clinical practice, such as in Picture Archiving and Communication Systems (PACS) and Radiology Information Systems (RIS), has not been able to take full advantage of the Internet as a deployment platform. MIS is usually tightly coupled to algorithms that have substantial hardware and software requirements. Consequently, MIS is deployed on thick clients, which usually leads project managers to allocate more resources during the deployment phase of the application than would be needed if the application were deployed through a web interface. To minimize the costs associated with this scenario, many software providers use or develop plug-ins to provide the delivery platform (internet browser) with the features to load, interact with and analyze medical images. Nevertheless, no standard means of achieving this goal has succeeded so far. This paper presents a study of WebGL as an alternative to plug-in development for efficient rendering of 3D medical models and DICOM images. WebGL is a technology that enables the internet browser to access the local graphics hardware in a native fashion. Because it is based on OpenGL, a widely accepted graphics industry standard, WebGL is being implemented in most of the major commercial browsers. After a discussion of the details of the technology, a series of experiments is presented to determine the operational boundaries within which WebGL is adequate for MIS. A comparison with current alternatives is also addressed. Finally, conclusions and future work are discussed.

  10. Unassisted 3D camera calibration

    NASA Astrophysics Data System (ADS)

    Atanassov, Kalin; Ramachandra, Vikas; Nash, James; Goma, Sergio R.

    2012-03-01

    With the rapid growth of 3D technology, 3D image capture has become a critical part of the 3D feature set on mobile phones. 3D image quality is affected by the scene geometry as well as on-the-device processing. An automatic 3D system usually assumes known camera poses accomplished by factory calibration using a special chart. In real life settings, pose parameters estimated by factory calibration can be negatively impacted by movements of the lens barrel due to shaking, focusing, or camera drop. If any of these factors displaces the optical axes of either or both cameras, vertical disparity might exceed the maximum tolerable margin and the 3D user may experience eye strain or headaches. To make 3D capture more practical, one needs to consider unassisted (on arbitrary scenes) calibration. In this paper, we propose an algorithm that relies on detection and matching of keypoints between left and right images. Frames containing erroneous matches, along with frames with insufficiently rich keypoint constellations, are detected and discarded. Roll, pitch, yaw, and scale differences between left and right frames are then estimated. The algorithm performance is evaluated in terms of the remaining vertical disparity as compared to the maximum tolerable vertical disparity.
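    The core of such unassisted calibration can be illustrated in two steps: rejecting frames with too few matches or excessive vertical disparity, and estimating scale and vertical-offset differences from matched keypoint rows. This is a simplified sketch, not the authors' algorithm; the 1-D least-squares model and all thresholds are assumptions.

```python
def fit_scale_offset(rows_left, rows_right):
    """Least-squares fit rows_right ≈ s * rows_left + t (scale + vertical shift)."""
    n = len(rows_left)
    ml = sum(rows_left) / n
    mr = sum(rows_right) / n
    var = sum((y - ml) ** 2 for y in rows_left)
    cov = sum((a - ml) * (b - mr) for a, b in zip(rows_left, rows_right))
    s = cov / var
    return s, mr - s * ml

def keep_frame(matches, min_matches=8, max_mean_vdisp=1.0):
    """Reject frames with sparse keypoint constellations or large vertical disparity.

    `matches` is a list of ((xl, yl), (xr, yr)) matched keypoint pairs.
    """
    if len(matches) < min_matches:
        return False
    vdisp = [abs(yr - yl) for (_, yl), (_, yr) in matches]
    return sum(vdisp) / len(vdisp) <= max_mean_vdisp

rows_l = [0.0, 100.0, 200.0, 300.0]
rows_r = [1.02 * y + 5.0 for y in rows_l]
print(fit_scale_offset(rows_l, rows_r))  # recovers scale 1.02 and offset 5.0
```

In a full system the same residual-based reasoning would be extended to roll and pitch using the horizontal keypoint coordinates as well.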

  11. 3D multifocus astigmatism and compressed sensing (3D MACS) based superresolution reconstruction

    PubMed Central

    Huang, Jiaqing; Sun, Mingzhai; Gumpper, Kristyn; Chi, Yuejie; Ma, Jianjie

    2015-01-01

    Single molecule based superresolution techniques (STORM/PALM) achieve nanometer spatial resolution by integrating the temporal information of the switching dynamics of fluorophores (emitters). When emitter density is low in each frame, emitters can be located with nanometer resolution. However, when the emitter density rises, causing significant overlapping, it becomes increasingly difficult to accurately locate individual emitters. This is particularly apparent in three dimensional (3D) localization because of the large effective volume of the 3D point spread function (PSF). The inability to precisely locate emitters at high density causes poor temporal resolution of localization-based superresolution techniques and significantly limits their application in 3D live cell imaging. To address this problem, we developed a 3D high-density superresolution imaging platform that allows us to precisely locate the positions of emitters even when they significantly overlap in three-dimensional space. Our platform combines a multi-focus system with astigmatic optics and an ℓ1-Homotopy optimization procedure. To reduce the intrinsic bias introduced by the discrete formulation of compressed sensing, we introduced a debiasing step followed by a 3D weighted centroid procedure, which not only increases the localization accuracy but also increases the computation speed of image reconstruction. We implemented our algorithms on a graphics processing unit (GPU), which speeds up processing 10 times compared with a central processing unit (CPU) implementation. We tested our method with both simulated data and experimental data of fluorescently labeled microtubules and were able to reconstruct a 3D microtubule image from 1000 frames (512×512) acquired within 20 seconds. PMID:25798314
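    The 3D weighted-centroid step can be illustrated in isolation. The sketch below is a plain-Python illustration on a hypothetical voxel grid, not the authors' GPU code: it refines an emitter position by taking the intensity-weighted centroid over a small 3D neighborhood.

```python
def weighted_centroid_3d(volume):
    """Intensity-weighted centroid of a small 3D stack indexed [z][y][x]."""
    total = cz = cy = cx = 0.0
    for z, plane in enumerate(volume):
        for y, row in enumerate(plane):
            for x, v in enumerate(row):
                total += v
                cz += v * z
                cy += v * y
                cx += v * x
    return (cz / total, cy / total, cx / total)

# A symmetric 3x3x3 blob centered on voxel (1, 1, 1):
vol = [[[0.0] * 3 for _ in range(3)] for _ in range(3)]
vol[1][1][1] = 4.0
for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)):
    vol[1 + dz][1 + dy][1 + dx] = 1.0
print(weighted_centroid_3d(vol))  # (1.0, 1.0, 1.0)
```

Because the centroid is a weighted average, it yields sub-voxel coordinates on asymmetric intensity distributions, which is the basis of the debiasing refinement described above.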

  12. Programming Language Software For Graphics Applications

    NASA Technical Reports Server (NTRS)

    Beckman, Brian C.

    1993-01-01

    New approach reduces repetitive development of features common to different applications. High-level programming language and interactive environment with access to graphical hardware and software created by adding graphical commands and other constructs to standardized, general-purpose programming language, "Scheme". Designed for use in developing other software incorporating interactive computer-graphics capabilities into application programs. Provides alternative to programming entire applications in C or FORTRAN, specifically ameliorating design and implementation of complex control and data structures typifying applications with interactive graphics. Enables experimental programming and rapid development of prototype software, and yields high-level programs serving as executable versions of software-design documentation.

  13. 3D Scan Systems Integration

    DTIC Science & Technology

    2007-11-02

    Final report (report date: 5 Feb 98) for the US Defense Logistics Agency on DDFG-T2/P3: 3D Scan Systems Integration. Contract number SPO100-95-D-1014; contractor: Ohio University; Delivery Order #0001, "3D Scan Systems Integration".

  14. Natural 3D content on glasses-free light-field 3D cinema

    NASA Astrophysics Data System (ADS)

    Balogh, Tibor; Nagy, Zsolt; Kovács, Péter Tamás.; Adhikarla, Vamsi K.

    2013-03-01

    This paper presents a complete framework for capturing, processing and displaying free viewpoint video on a large scale immersive light-field display. We present a combined hardware-software solution to visualize free viewpoint 3D video on a cinema-sized screen. The new glasses-free 3D projection technology can support a larger audience than existing autostereoscopic displays. We introduce and describe our new display system, including optical and mechanical design considerations, the capturing system and render cluster for producing the 3D content, and the various software modules driving the system. The indigenous display is the first of its kind, equipped with front-projection light-field HoloVizio technology, controlling up to 63 MP. It has all the advantages of previous light-field displays and, in addition, allows a more flexible arrangement with a larger screen size, matching cinema or meeting room geometries, yet is simpler to set up. The software system makes it possible to show 3D applications in real time, besides the natural content captured from dense camera arrangements as well as from sparse cameras covering a wider baseline. Our software system on the GPU-accelerated render cluster can also visualize pre-recorded Multi-view Video plus Depth (MVD4) videos on this glasses-free light-field cinema system, interpolating and extrapolating missing views.

  15. 3-D QSAutogrid/R: an alternative procedure to build 3-D QSAR models. Methodologies and applications.

    PubMed

    Ballante, Flavio; Ragno, Rino

    2012-06-25

    Since it first appeared in 1988, 3-D QSAR has proved its potential in the field of drug design and activity prediction. Although thousands of citations now exist in 3-D QSAR, its development was rather slow, with the majority of new 3-D QSAR applications just extensions of CoMFA. An alternative way to build 3-D QSAR models, named 3-D QSAutogrid/R, has been developed to use only software freely available to academics. 3-D QSAutogrid/R covers all the main features of CoMFA and GRID/GOLPE, augmented by multiprobe/multiregion variable selection (MPGRS), which simplifies the interpretation of the 3-D QSAR map. The methodology is based on the integration of the molecular interaction fields as calculated by AutoGrid with the R statistical environment, which can be easily coupled with many free graphical molecular interfaces such as UCSF Chimera, AutoDock Tools, JMol, and others. The description of each R package is reported in detail, and, to assess its validity, 3-D QSAutogrid/R has been applied to three molecular data sets for which either CoMFA or GRID/GOLPE models had been reported, in order to compare the results. 3-D QSAutogrid/R has been used as the core engine to prepare more than 240 3-D QSAR models forming the very first 3-D QSAR server ( www.3d-qsar.com ) with its code freely available through R-Cran distribution.

  16. 3D polymer scaffold arrays.

    PubMed

    Simon, Carl G; Yang, Yanyin; Dorsey, Shauna M; Ramalingam, Murugan; Chatterjee, Kaushik

    2011-01-01

    We have developed a combinatorial platform for fabricating tissue scaffold arrays that can be used for screening cell-material interactions. Traditional research involves preparing samples one at a time for characterization and testing. Combinatorial and high-throughput (CHT) methods lower the cost of research by reducing the amount of time and material required for experiments by combining many samples into miniaturized specimens. In order to help accelerate biomaterials research, many new CHT methods have been developed for screening cell-material interactions where materials are presented to cells as a 2D film or surface. However, biomaterials are frequently used to fabricate 3D scaffolds, cells exist in vivo in a 3D environment and cells cultured in a 3D environment in vitro typically behave more physiologically than those cultured on a 2D surface. Thus, we have developed a platform for fabricating tissue scaffold libraries where biomaterials can be presented to cells in a 3D format.

  17. Autofocus for 3D imaging

    NASA Astrophysics Data System (ADS)

    Lee-Elkin, Forest

    2008-04-01

    Three dimensional (3D) autofocus remains a significant challenge for the development of practical 3D multipass radar imaging. The current 2D radar autofocus methods are not readily extendable across sensor passes. We propose a general framework that allows a class of data adaptive solutions for 3D auto-focus across passes with minimal constraints on the scene contents. The key enabling assumption is that portions of the scene are sparse in elevation which reduces the number of free variables and results in a system that is simultaneously solved for scatterer heights and autofocus parameters. The proposed method extends 2-pass interferometric synthetic aperture radar (IFSAR) methods to an arbitrary number of passes allowing the consideration of scattering from multiple height locations. A specific case from the proposed autofocus framework is solved and demonstrates autofocus and coherent multipass 3D estimation across the 8 passes of the "Gotcha Volumetric SAR Data Set" X-Band radar data.

  18. Combinatorial 3D Mechanical Metamaterials

    NASA Astrophysics Data System (ADS)

    Coulais, Corentin; Teomy, Eial; de Reus, Koen; Shokef, Yair; van Hecke, Martin

    2015-03-01

    We present a class of elastic structures which exhibit 3D folding motion. Our structures consist of cubic lattices of anisotropic unit cells that can be tiled in a complex combinatorial fashion. We design and 3D-print this complex ordered mechanism, in which we combine elastic hinges and defects to tailor the mechanics of the material. Finally, we use this large design space to encode smart functionalities such as surface patterning and multistability.

  19. Engineering Graphics Educational Outcomes for the Global Engineer: An Update

    ERIC Educational Resources Information Center

    Barr, R. E.

    2012-01-01

    This paper discusses the formulation of educational outcomes for engineering graphics that span the global enterprise. Results of two repeated faculty surveys indicate that new computer graphics tools and techniques are now the preferred mode of engineering graphical communication. Specifically, 3-D computer modeling, assembly modeling, and model…

  20. 3-D adaptive nonlinear complex-diffusion despeckling filter.

    PubMed

    Rodrigues, Pedro; Bernardes, Rui

    2012-12-01

    This work aims to improve the process of speckle noise reduction while preserving edges and other relevant features through filter expansion from 2-D to 3-D. Despeckling is very important for visual inspection of data and as a preprocessing step for other algorithms, as they are usually notably influenced by speckle noise. To that intent, a 3-D approach is proposed for the adaptive complex-diffusion filter. This 3-D iterative filter was applied to spectral-domain optical coherence tomography medical imaging volumes of the human retina, and a quantitative evaluation of the results was performed to demonstrate the better performance of the 3-D over the 2-D filtering and to choose the best total diffusion time. In addition, we propose a fast graphics processing unit (GPU) parallel implementation so that the filter can be used in a clinical setting.
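    The nonlinear complex-diffusion update underlying such a filter can be sketched as follows. This is a minimal plain-Python illustration of the generic complex-diffusion scheme, not the authors' adaptive OCT implementation; all parameter values are illustrative. The imaginary part of the evolving image approximates a smoothed second derivative (an edge detector), so the diffusion coefficient shrinks near edges and smoothing concentrates in homogeneous regions.

```python
import cmath

def despeckle_3d(vol, steps=20, dt=0.1, k=2.0, theta=cmath.pi / 30):
    """Nonlinear complex-diffusion filtering of a 3D volume given as [z][y][x] lists."""
    Z, Y, X = len(vol), len(vol[0]), len(vol[0][0])
    I = [[[complex(v) for v in row] for row in plane] for plane in vol]
    for _ in range(steps):
        out = [[[0j] * X for _ in range(Y)] for _ in range(Z)]
        for z in range(Z):
            for y in range(Y):
                for x in range(X):
                    c = I[z][y][x]
                    lap = -6 * c  # 6-neighbor discrete Laplacian
                    for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                       (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                        nz = min(max(z + dz, 0), Z - 1)  # clamped (Neumann) borders
                        ny = min(max(y + dy, 0), Y - 1)
                        nx = min(max(x + dx, 0), X - 1)
                        lap += I[nz][ny][nx]
                    # Edge-stopping complex diffusion coefficient.
                    d = cmath.exp(1j * theta) / (1 + (c.imag / (k * theta)) ** 2)
                    out[z][y][x] = c + dt * d * lap
        I = out
    return [[[v.real for v in row] for row in plane] for plane in I]
```

Running it on a small volume of alternating high/low values reduces the speckle-like variance while the mean intensity stays close to its original level.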

  1. From Surface Data to 3D Geologic Maps

    NASA Astrophysics Data System (ADS)

    Dhont, D.; Luxey, P.; Longuesserre, V.; Monod, B.; Guillaume, B.

    2008-12-01

    New trends in earth sciences are mostly related to technologies allowing graphical representations of the geology in 3D. However, the concept of a 3D geologic map is commonly misused. For instance, displays of geologic maps draped onto a DEM in rotating perspective views have been misleadingly called 3D geologic maps, but these still cannot provide any volumetric underground information as a true 3D geologic map should. Here, we present a way to produce mathematically and geometrically correct 3D geologic maps constituted by the volume and shape of all geologic features of a given area. The originality of the method is that it is based on the integration of surface data only, consisting of (1) geologic maps, (2) satellite images, (3) DEM and (4) bedding dips and strikes. To generate 3D geologic maps, we used a 3D geologic modeler that combines and extrapolates the surface information into a coherent 3D data set. The significance of geometrically correct 3D geologic maps is demonstrated for various geologic settings and applications. 3D models are of primary importance for educational purposes because they reveal features that standard 2D geologic maps by themselves cannot show. The 3D visualization helps in the understanding of the geometrical relationships between the different geologic features and, in turn, in the quantification of the geology at the regional scale. Furthermore, given the logistical challenges associated with modern oil and mineral exploration in remote and rugged terrain, these volume-based models can provide geological and commercial insight prior to seismic evaluation.

  2. Development of visual 3D virtual environment for control software

    NASA Technical Reports Server (NTRS)

    Hirose, Michitaka; Myoi, Takeshi; Amari, Haruo; Inamura, Kohei; Stark, Lawrence

    1991-01-01

    Virtual environments for software visualization may enable complex programs to be created and maintained. A typical application might be control of regional electric power systems. As these encompass broader computer networks than ever, construction of such systems becomes very difficult. Conventional text-oriented environments are useful in programming individual processors. However, they are obviously insufficient for programming a large and complicated system that includes large numbers of computers connected to each other; such programming is called 'programming in the large.' As a solution to this problem, the authors are developing a graphic programming environment wherein one can visualize complicated software in a virtual 3D world. One of the major features of the environment is the 3D representation of concurrent processes. 3D representation is used to supply both network-wide interprocess programming capability (capability for 'programming in the large') and real-time programming capability. The authors' idea is to fuse both the block diagram (which is useful for checking relationships among large numbers of processes or processors) and the time chart (which is useful for checking precise timing for synchronization) into a single 3D space. The 3D representation gives a capability for direct and intuitive planning or understanding of complicated relationships among many concurrent processes. To realize the 3D representation, a technology enabling easy handling of virtual 3D objects is a definite necessity. Using a stereo display system and a gesture input device (VPL DataGlove), a prototype of the virtual workstation has been implemented. The workstation can supply the 'sensation' of the virtual 3D space to a programmer. Software for the 3D programming environment is implemented on the workstation. According to preliminary assessments, a 50 percent reduction of programming effort is achieved by using the virtual 3D environment. The authors expect that the 3D

  3. YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters

    NASA Astrophysics Data System (ADS)

    Schild, Jonas; Seele, Sven; Masuch, Maic

    2012-03-01

    Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies become attractive for movie theater operators, i.e. interactive 3D games. In this paper, we present a case that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features live 3D HD video stream of a professional stereo camera rig rendered in a real-time game scene. We use the effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide for a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.

  4. Speaking Volumes About 3-D

    NASA Technical Reports Server (NTRS)

    2002-01-01

    In 1999, Genex submitted a proposal to Stennis Space Center for a volumetric 3-D display technique that would provide multiple users with a 360-degree perspective to simultaneously view and analyze 3-D data. The futuristic capabilities of the VolumeViewer(R) have offered tremendous benefits to commercial users in the fields of medicine and surgery, air traffic control, pilot training and education, computer-aided design/computer-aided manufacturing, and military/battlefield management. The technology has also helped NASA to better analyze and assess the various data collected by its satellite and spacecraft sensors. Genex capitalized on its success with Stennis by introducing to the commercial market separate products that incorporate key elements of the 3-D display technology designed under an SBIR contract. The company's Rainbow 3D(R) imaging camera is a novel, three-dimensional surface profile measurement system that can obtain a full-frame 3-D image in less than 1 second. A third product is the 360-degree OmniEye(R) video system. Ideal for intrusion detection, surveillance, and situation management, this unique camera system offers a continuous, panoramic view of a scene in real time.

  5. Efficient 3D nonlinear warping of computed tomography: two high-performance implementations using OpenGL

    NASA Astrophysics Data System (ADS)

    Levin, David; Dey, Damini; Slomka, Piotr

    2005-04-01

    We have implemented two hardware accelerated Thin Plate Spline (TPS) warping algorithms. The first algorithm is a hardware-software approach (HW-TPS) that uses OpenGL Vertex Shaders to perform a grid warp. The second is a Graphics Processor based approach (GPU-TPS) that uses the OpenGL Shading Language to perform all warping calculations on the GPU. Comparison with a software TPS algorithm was used to gauge the speed and quality of both hardware algorithms. Quality was analyzed visually and using the Sum of Absolute Difference (SAD) similarity metric. Warping was performed using 92 user-defined displacement vectors for 512x512x173 serial lung CT studies, matching normal-breathing and deep-inspiration scans. On a Xeon 2.2 Ghz machine with an ATI Radeon 9800XT GPU the GPU-TPS required 26.1 seconds to perform a per-voxel warp compared to 148.2 seconds for the software algorithm. The HW-TPS needed 1.63 seconds to warp the same study while the GPU-TPS required 1.94 seconds and the software grid transform required 22.8 seconds. The SAD values calculated between the outputs of each algorithm and the target CT volume were 15.2%, 15.4% and 15.5% for the HW-TPS, GPU-TPS and both software algorithms respectively. The computing power of ubiquitous 3D graphics cards can be exploited in medical image processing to provide order of magnitude acceleration of nonlinear warping algorithms without sacrificing output quality.
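    The Sum of Absolute Difference figures quoted above are straightforward to compute; the following Python sketch shows the metric on flattened voxel lists. The exact normalization used in the paper is an assumption here, and the toy volumes are hypothetical stand-ins for the 512x512x173 CT studies:

```python
def sad_percent(warped, target):
    """Sum of Absolute Differences between two intensity volumes
    (flattened voxel lists), expressed as a percentage of the target's
    total intensity. The normalization is an assumption, not the
    paper's documented formula."""
    num = sum(abs(w - t) for w, t in zip(warped, target))
    den = sum(target)
    return 100.0 * num / den

# toy stand-ins for a warped CT study and its target
target = [1.0] * 64
warped = [0.9] * 64
print(round(sad_percent(warped, target), 1))  # -> 10.0
```

A lower SAD percentage indicates a closer match between the warped output and the target volume, which is how the 15.2-15.5% figures above rank the three algorithms.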

  6. Shape: A 3D Modeling Tool for Astrophysics.

    PubMed

    Steffen, Wolfgang; Koning, Nicholas; Wenger, Stephan; Morisset, Christophe; Magnor, Marcus

    2011-04-01

    We present a flexible interactive 3D morpho-kinematical modeling application for astrophysics. Compared to other systems, our application reduces the restrictions on the physical assumptions and on the type and amount of data required to reconstruct an object's morphology. It is one of the first publicly available tools to apply interactive graphics to astrophysical modeling. The tool allows astrophysicists to provide a priori knowledge about the object by interactively defining 3D structural elements. By direct comparison of model predictions with observational data, model parameters can then be automatically optimized to fit the observation. The tool has already been successfully used in a number of astrophysical research projects.

  7. Macrophage podosomes go 3D.

    PubMed

    Van Goethem, Emeline; Guiet, Romain; Balor, Stéphanie; Charrière, Guillaume M; Poincloux, Renaud; Labrousse, Arnaud; Maridonneau-Parini, Isabelle; Le Cabec, Véronique

    2011-01-01

    Macrophage tissue infiltration is a critical step in the immune response against microorganisms and is also associated with disease progression in chronic inflammation and cancer. Macrophages are constitutively equipped with specialized structures called podosomes that are dedicated to extracellular matrix (ECM) degradation. We recently reported that these structures play a critical role in the trans-matrix mesenchymal migration mode, a protease-dependent mechanism. Podosome molecular components and their ECM-degrading activity have been extensively studied in two dimensions (2D), yet very little is known about their fate in three-dimensional (3D) environments. Therefore, the localization of podosome markers and proteolytic activity was carefully examined in human macrophages performing mesenchymal migration. Using our gelled collagen I 3D matrix model to oblige human macrophages to perform mesenchymal migration, classical podosome markers including talin, paxillin, vinculin, gelsolin, and cortactin were found to accumulate at the tips of F-actin-rich cell protrusions, together with β1 integrin and CD44 but not β2 integrin. Macrophage proteolytic activity was observed at podosome-like protrusion sites using confocal fluorescence microscopy and electron microscopy. The formation of migration tunnels by macrophages inside the matrix was accomplished by degradation, engulfment, and mechanical compaction of the matrix. In addition, videomicroscopy revealed that the 3D F-actin-rich protrusions of migrating macrophages were as dynamic as their 2D counterparts. Overall, the characteristics of 3D podosomes resembled those of 2D podosome rosettes rather than those of individual podosomes. This observation was further supported by the appearance of 3D podosomes in fibroblasts expressing Hck, a master regulator of podosome rosettes in macrophages. In conclusion, human macrophage podosomes go 3D and take the shape of spherical podosome rosettes when the cells perform mesenchymal migration. This work

  8. 3D Printed Bionic Nanodevices.

    PubMed

    Kong, Yong Lin; Gupta, Maneesh K; Johnson, Blake N; McAlpine, Michael C

    2016-06-01

    The ability to three-dimensionally interweave biological and functional materials could enable the creation of bionic devices possessing unique and compelling geometries, properties, and functionalities. Indeed, interfacing high performance active devices with biology could impact a variety of fields, including regenerative bioelectronic medicines, smart prosthetics, medical robotics, and human-machine interfaces. Biology, from the molecular scale of DNA and proteins, to the macroscopic scale of tissues and organs, is three-dimensional, often soft and stretchable, and temperature sensitive. This renders most biological platforms incompatible with the fabrication and materials processing methods that have been developed and optimized for functional electronics, which are typically planar, rigid and brittle. A number of strategies have been developed to overcome these dichotomies. One particularly novel approach is the use of extrusion-based multi-material 3D printing, which is an additive manufacturing technology that offers a freeform fabrication strategy. This approach addresses the dichotomies presented above by (1) using 3D printing and imaging for customized, hierarchical, and interwoven device architectures; (2) employing nanotechnology as an enabling route for introducing high performance materials, with the potential for exhibiting properties not found in the bulk; and (3) 3D printing a range of soft and nanoscale materials to enable the integration of a diverse palette of high quality functional nanomaterials with biology. Further, 3D printing is a multi-scale platform, allowing for the incorporation of functional nanoscale inks, the printing of microscale features, and ultimately the creation of macroscale devices. This blending of 3D printing, novel nanomaterial properties, and 'living' platforms may enable next-generation bionic systems. In this review, we highlight this synergistic integration of the unique properties of nanomaterials with the

  9. Hardware Review: What Hardware Should We Buy?

    ERIC Educational Resources Information Center

    Tinker, Robert

    1984-01-01

    Discusses trends and changes in hardware production. For example, Sinclair/Timex has stopped mass-marketing its computers, while other machines (such as the IBM PCjr) have finally made their appearance. Strongly advises schools to re-evaluate their hardware purchasing programs in light of these and other changes. (JN)

  10. Petal, terrain & airbags - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Portions of the lander's deflated airbags and a petal are at the lower area of this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. The metallic object at lower right is part of the lander's low-gain antenna. This image is part of a 3D 'monster


  11. 3D Computations and Experiments

    SciTech Connect

    Couch, R; Faux, D; Goto, D; Nikkel, D

    2004-04-05

    This project consists of two activities. Task A, Simulations and Measurements, combines all the material model development and associated numerical work with the materials-oriented experimental activities. The goal of this effort is to provide an improved understanding of dynamic material properties and to provide accurate numerical representations of those properties for use in analysis codes. Task B, ALE3D Development, involves general development activities in the ALE3D code with the focus of improving simulation capabilities for problems of mutual interest to DoD and DOE. Emphasis is on problems involving multi-phase flow, blast loading of structures and system safety/vulnerability studies.

  12. Realtime multi-plot graphics system

    NASA Technical Reports Server (NTRS)

    Shipkowski, Michael S.

    1990-01-01

    The increased complexity of test operations and customer requirements at Langley Research Center's National Transonic Facility (NTF) surpassed the capabilities of the initial realtime graphics system. The analysis of existing hardware and software and the enhancements made to develop a new realtime graphics system are described. The result of this effort is a cost-effective system, based on hardware already in place, that supports high-speed, high-resolution generation and display of multiple realtime plots. The enhanced graphics system (EGS) meets the current and foreseeable future realtime graphics requirements of the NTF. While this system was developed to support wind tunnel operations, the overall design and capability of the system is applicable to other realtime data acquisition systems that have realtime plot requirements.

  13. The World of 3-D.

    ERIC Educational Resources Information Center

    Mayshark, Robin K.

    1991-01-01

    Students explore three-dimensional properties by creating red and green wall decorations related to Christmas. Students examine why images seem to vibrate when red and green pieces are small and close together. Instructions to conduct the activity and construct 3-D glasses are given. (MDH)

  14. 3D Printing: Exploring Capabilities

    ERIC Educational Resources Information Center

    Samuels, Kyle; Flowers, Jim

    2015-01-01

    As 3D printers become more affordable, schools are using them in increasing numbers. They fit well with the emphasis on product design in technology and engineering education, allowing students to create high-fidelity physical models to see and test different iterations in their product designs. They may also help students to "think in three…

  15. SNL3dFace

    SciTech Connect

    Russ, Trina; Koch, Mark; Koudelka, Melissa; Peters, Ralph; Little, Charles; Boehnen, Chris; Peters, Tanya

    2007-07-20

    This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through proper configuration of the available user interface. Using specific algorithm parameters, the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at a 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regularization and rate of deformation. A variety of options are available to control the information that is encoded by the PCA. Such options include the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection, this software supports feature matching to obtain similarity matrices for performance analysis. It also supports visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.
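    The PCA feature-encoding step described above can be sketched generically in Python with NumPy. The toy face vectors and the choice of two components below are hypothetical; this is not the distribution's actual MATLAB/C++ code:

```python
import numpy as np

# Toy stand-ins for encoded 3D-normalized faces (rows = faces, columns =
# encoded values such as XYZ or Z coordinates); real inputs are far larger.
X = np.array([[2.0, 0.0, 1.0],
              [0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0],
              [3.0, 1.0, 2.0]])

mean = X.mean(axis=0)
Xc = X - mean                         # center the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2                                 # keep the top-k principal components
features = Xc @ Vt[:k].T              # projected face features

# matching: distance between projected feature vectors feeds a
# similarity matrix for performance analysis
dist = np.linalg.norm(features[0] - features[1])
print(features.shape)                 # -> (4, 2)
```

In the described system an FLDA stage would follow the PCA projection to sharpen between-subject separation; that stage is omitted here for brevity.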

  16. Making Inexpensive 3-D Models

    ERIC Educational Resources Information Center

    Manos, Harry

    2016-01-01

    Visual aids are important to student learning, and they help make the teacher's job easier. Keeping with the "TPT" theme of "The Art, Craft, and Science of Physics Teaching," the purpose of this article is to show how teachers, lacking equipment and funds, can construct a durable 3-D model reference frame and a model gravity…

  17. Voice and gesture-based 3D multimedia presentation tool

    NASA Astrophysics Data System (ADS)

    Fukutake, Hiromichi; Akazawa, Yoshiaki; Okada, Yoshihiro

    2007-09-01

    This paper proposes a 3D multimedia presentation tool that the user can manipulate intuitively through voice and gesture input alone, without a standard keyboard or mouse device. The authors developed this system as a presentation tool for use in a presentation room equipped with a large screen, such as an exhibition room in a museum, because in such an environment voice commands and gesture-based pointing are preferable to a keyboard or mouse. The system was developed using IntelligentBox, a component-based 3D graphics software development system. IntelligentBox already provides various types of 3D visible, reactive functional components called boxes, e.g., a voice input component and various multimedia handling components. IntelligentBox also provides a dynamic data linkage mechanism called slot-connection that allows the user to develop 3D graphics applications by combining existing boxes through direct manipulations on a computer screen. Using IntelligentBox, the 3D multimedia presentation tool proposed in this paper was likewise developed by combining components through direct manipulations on a computer screen. The authors have already proposed a 3D multimedia presentation tool using a stage metaphor and its voice input interface; here, we extend the system to accept gesture input in addition to voice commands. This paper explains the details of the proposed tool and especially describes its component-based voice and gesture input interfaces.
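    As a rough illustration of the slot-connection idea, the Python sketch below links a parent box's slot to a child box through a transform function. The class and method names are invented for illustration only and do not reflect IntelligentBox's actual API:

```python
class Box:
    """Toy analogy of a slot-connection: writing a parent's slot
    propagates the value, optionally transformed, to connected
    child boxes. Names here are hypothetical."""

    def __init__(self, name):
        self.name = name
        self.slots = {"value": None}
        self._links = []  # (child_box, transform) pairs

    def connect(self, child, transform=lambda v: v):
        """Establish a slot-connection from this box to a child box."""
        self._links.append((child, transform))

    def set_slot(self, key, value):
        """Set a slot and push the value through all connections."""
        self.slots[key] = value
        for child, transform in self._links:
            child.set_slot("value", transform(value))

# a "voice input" box driving a "presentation stage" box
voice = Box("voice-input")
stage = Box("presentation-stage")
voice.connect(stage, transform=str.upper)
voice.set_slot("value", "next slide")
print(stage.slots["value"])  # -> NEXT SLIDE
```

The point of the sketch is the dataflow: components are composed by wiring slots rather than by writing glue code, which is what lets IntelligentBox applications be assembled through direct manipulation.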

  18. IGES Interface for Medical 3-D Volume Data.

    PubMed

    Chen, Gong; Yi, Hong; Ni, Zhonghua

    2005-01-01

    Although there are many medical image processing and virtual surgery systems that provide rather consummate 3D visualization and data manipulation techniques, few of them can export the volume data for engineering analysis. This paper presents an interface implementing IGES (Initial Graphics Exchange Specification). Volume data such as bones, skin, and other tissues can be exported as IGES files for direct use in engineering analysis.

  19. Research and Teaching: Methods for Creating and Evaluating 3D Tactile Images to Teach STEM Courses to the Visually Impaired

    ERIC Educational Resources Information Center

    Hasper, Eric; Windhorst, Rogier; Hedgpeth, Terri; Van Tuyl, Leanne; Gonzales, Ashleigh; Martinez, Britta; Yu, Hongyu; Farkas, Zolton; Baluch, Debra P.

    2015-01-01

    Project 3D IMAGINE or 3D Image Arrays to Graphically Implement New Education is a pilot study that researches the effectiveness of incorporating 3D tactile images, which are critical for learning science, technology, engineering, and mathematics, into entry-level lab courses. The focus of this project is to increase the participation and…

  20. Optical Sensors and Methods for Underwater 3D Reconstruction

    PubMed Central

    Massot-Campos, Miquel; Oliver-Codina, Gabriel

    2015-01-01

    This paper presents a survey on optical sensors and methods for 3D reconstruction in underwater environments. The techniques to obtain range data have been listed and explained, together with the different sensor hardware that makes them possible. The literature has been reviewed, and a classification has been proposed for the existing solutions. New developments, commercial solutions and previous reviews in this topic have also been gathered and considered. PMID:26694389

  1. Flight Avionics Hardware Roadmap

    NASA Technical Reports Server (NTRS)

    Some, Raphael; Goforth, Monte; Chen, Yuan; Powell, Wes; Paulick, Paul; Vitalpur, Sharada; Buscher, Deborah; Wade, Ray; West, John; Redifer, Matt; Partridge, Harry; Sherman, Aaron; McCabe, Mary

    2014-01-01

    The Avionics Technology Roadmap takes an "80% approach" to technology investment in spacecraft avionics: it delineates a suite of technologies at the foundational, component, and subsystem levels which directly support 80% of future NASA space mission needs. The roadmap eschews high-cost, limited-utility technologies in favor of lower-cost, broadly applicable technologies with a high return on investment. The roadmap is also phased to support future NASA mission needs and desires, with a view toward creating an optimized investment portfolio that matures specific, high-impact technologies on a schedule that matches the optimum insertion points of these technologies into NASA missions. The roadmap looks out over 15+ years and covers some 114 technologies, 58 of which are targeted for TRL 6 within 5 years, with 23 additional technologies to reach TRL 6 by 2020. Of that number, only a few are recommended for near-term investment:

    1. Rad-hard high-performance computing
    2. Extreme-temperature-capable electronics and packaging
    3. RFID/SAW-based spacecraft sensors and instruments
    4. Lightweight, low-power 2D displays suitable for crewed missions
    5. Radiation-tolerant graphics processing units to drive crew displays
    6. Distributed/reconfigurable, extreme-temperature- and radiation-tolerant spacecraft sensor controllers and sensor modules
    7. Spacecraft-to-spacecraft long-link data communication protocols
    8. High-performance, extreme-temperature-capable C&DH subsystems

    In addition, the roadmap team recommends several other activities that it believes are necessary to advance avionics technology across NASA:

    - Engage the OCT roadmap teams to coordinate avionics technology advances and infusion into these roadmaps and their mission set
    - Charter a team to develop a set of use cases for future avionics capabilities in order to decouple this roadmap from specific missions
    - Partner with the Software Steering Committee to coordinate computing hardware

  2. Forensic 3D scene reconstruction

    NASA Astrophysics Data System (ADS)

    Little, Charles Q.; Small, Daniel E.; Peters, Ralph R.; Rigdon, J. B.

    2000-05-01

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a fieldable prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.

  3. 3D Printed Robotic Hand

    NASA Technical Reports Server (NTRS)

    Pizarro, Yaritzmar Rosario; Schuler, Jason M.; Lippitt, Thomas C.

    2013-01-01

    Dexterous robotic hands are changing the way robots and humans interact and use common tools. Unfortunately, the complexity of the joints and actuations drives up the manufacturing cost. Some cutting-edge and commercially available rapid prototyping machines now have the ability to print multiple materials and even combine these materials in the same job. A 3D model of a robotic hand was designed using Creo Parametric 2.0. Combining "hard" and "soft" materials, the model was printed on the Objet Connex350 3D printer with the purpose of resembling the appearance and mobility of a real human hand as closely as possible while needing no assembly. After printing the prototype, strings were installed as actuators to test mobility. Based on printing materials, the manufacturing cost of the hand was $167 (excluding actuators), significantly lower than that of other robotic hands, which require more complex assembly processes.

  4. Comparing swimsuits in 3D.

    PubMed

    van Geer, Erik; Molenbroek, Johan; Schreven, Sander; deVoogd-Claessen, Lenneke; Toussaint, Huib

    2012-01-01

    In competitive swimming, suits have become more important. These suits influence friction, pressure, and wave drag. Friction drag is related to the surface properties, whereas both pressure and wave drag are greatly influenced by body shape. To find a relationship between body shape and drag, the anthropometry of several world-class female swimmers wearing different suits was accurately defined using a 3D scanner and traditional measuring methods. The 3D scans delivered more detailed information about the body shape. On the same day, the swimmers did performance tests in the water with the tested suits. Afterwards, the results of the performance tests and the differences found in body shape were analyzed to determine the deformation caused by a swimsuit and its effect on swimming performance. Although the amount of data is limited because of the small number of test subjects, there is an indication that the deformation of the body influences swimming performance.

  5. Forensic 3D Scene Reconstruction

    SciTech Connect

    LITTLE,CHARLES Q.; PETERS,RALPH R.; RIGDON,J. BRIAN; SMALL,DANIEL E.

    1999-10-12

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a feasible prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.

  6. 3D-graphite structure

    SciTech Connect

    Belenkov, E. A.; Ali-Pasha, V. A.

    2011-01-15

    The structure of clusters of some new carbon 3D-graphite phases has been calculated using molecular-mechanics methods. It is established that the 3D-graphite polytypes α1,1, α1,3, α1,5, α2,1, α2,3, α3,1, β1,2, β1,4, β1,6, β2,1, and β3,2 consist of sp²-hybridized atoms, have hexagonal unit cells, and differ in the structure of their layers and the order of their alternation. A possible way to experimentally synthesize the new carbon phases is proposed: the polymerization and carbonization of hydrocarbon molecules.

  7. 3D-Fun: predicting enzyme function from structure.

    PubMed

    von Grotthuss, Marcin; Plewczynski, Dariusz; Vriend, Gert; Rychlewski, Leszek

    2008-07-01

    The 'omics' revolution is causing a flurry of data that all needs to be annotated for it to become useful. Sequences of proteins of unknown function can be annotated with a putative function by comparing them with proteins of known function. This form of annotation is typically performed with BLAST or similar software. Structural genomics is nowadays also bringing us three-dimensional structures of proteins with unknown function. We present here software that can be used when sequence comparisons fail to determine the function of a protein with known structure but unknown function. The software, called 3D-Fun, is implemented as a server that runs at several European institutes and is freely available to everybody at all these sites. The 3D-Fun servers accept protein coordinates in the standard PDB format and compare them with all known protein structures by 3D structural superposition using the 3D-Hit software. If structural hits are found to proteins of known function, these are listed together with their function and some vital comparison statistics. This is conceptually very similar in 3D to what BLAST does in 1D. Additionally, the superposition results are displayed using interactive graphics facilities. Currently, the 3D-Fun system only predicts enzyme function, but an expanded version with Gene Ontology predictions will be available soon. The server can be accessed at http://3dfun.bioinfo.pl/ or at http://3dfun.cmbi.ru.nl/.
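    3D structural superposition of the kind 3D-Hit performs is conventionally built on optimal rigid-body alignment. The NumPy sketch below implements the standard Kabsch algorithm and reports the RMSD after superposition; 3D-Hit's own scoring and search procedure are not reproduced here:

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD of point set P onto Q after optimal rigid superposition
    (standard Kabsch algorithm). P and Q are Nx3 arrays of matched
    coordinates, e.g. C-alpha positions of equivalenced residues."""
    P = P - P.mean(axis=0)                 # remove translation
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                            # covariance matrix
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # avoid improper rotation
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T                     # optimal rotation
    P_rot = P @ R.T
    return float(np.sqrt(((P_rot - Q) ** 2).sum() / len(P)))

# a rigidly rotated copy of a point set superposes back with ~0 RMSD
P = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0], [1.0, 1.0, 1.0]])
theta = 0.5
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1.0]])
Q = P @ Rz.T
print(round(kabsch_rmsd(P, Q), 6))  # -> 0.0
```

Tools like 3D-Fun then report such comparison statistics alongside the function annotations of the structural hits.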

  8. Automated 3D reconstruction of interiors with multiple scan views

    NASA Astrophysics Data System (ADS)

    Sequeira, Vitor; Ng, Kia C.; Wolfart, Erik; Goncalves, Joao G. M.; Hogg, David C.

    1998-12-01

    This paper presents two integrated solutions for realistic 3D model acquisition and reconstruction; an early prototype, in the form of a push trolley, and a later prototype in the form of an autonomous robot. The systems encompass all hardware and software required, from laser and video data acquisition, processing and output of texture-mapped 3D models in VRML format, to batteries for power supply and wireless network communications. The autonomous version is also equipped with a mobile platform and other sensors for the purpose of automatic navigation. The applications for such a system range from real estate and tourism (e.g., showing a 3D computer model of a property to a potential buyer or tenant) to content creation (e.g., creating 3D models of heritage buildings or producing broadcast-quality virtual studios). The system can also be used in industrial environments as a reverse engineering tool to update the design of a plant, or as a 3D photo-archive for insurance purposes. The system is Internet compatible: the photo-realistic models can be accessed via the Internet and manipulated interactively in 3D using a common Web browser with a VRML plug-in. Further information and example reconstructed models are available on-line via the RESOLV web-page at http://www.scs.leeds.ac.uk/resolv/.

  9. 3-D Printed Ultem 9085 Testing and Analysis

    NASA Technical Reports Server (NTRS)

    Aguilar, Daniel; Christensen, Sean; Fox, Emmet J.

    2015-01-01

    The purpose of this document is to analyze the mechanical properties of 3-D printed Ultem 9085. This document will focus on the capabilities, limitations, and complexities of 3D printing in general, and explain the methods by which this material is tested. Because 3-D printing is a relatively new process that offers an innovative means to produce hardware, it is important that the aerospace community understands its current advantages and limitations, so that future endeavors involving 3-D printing can proceed safely. This document encompasses three main sections: a Slosh damage assessment, a destructive test of 3-D printed Ultem 9085 samples, and a test to verify simulation for the 3-D printed SDP (SPHERES Docking Port). Described below, 'Slosh' and 'SDP' refer to two experiments that are built using Ultem 9085 for use with the SPHERES (Synchronized Position Hold, Engage, Reorient, Experimental Satellites) program onboard the International Space Station (ISS) [16]. The SPHERES Facility is managed out of the National Aeronautics and Space Administration (NASA) Ames Research Center in California.

  10. [Real-time 3D echocardiography]

    NASA Technical Reports Server (NTRS)

    Bauer, F.; Shiota, T.; Thomas, J. D.

    2001-01-01

    Three-dimensional representation of the heart is a long-standing concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections whose spatial localisation and orientation require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matricial emitter-receiver probe complex with parallel data processing provides instantaneous acquisition of a pyramidal 64 degrees x 64 degrees volume. The image is rendered in real time and is composed of 3 planes (planes B and C), which can be displaced in any spatial direction at any time during acquisition. The flexibility of this acquisition system allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real-time 3D echocardiography is ready for clinical use, some improvements are still necessary to make it more user-friendly. Real-time 3D echocardiography could then become an essential tool for the understanding, diagnosis, and management of patients.

  11. Graphical programming at Sandia National Laboratories

    SciTech Connect

    McDonald, M.J.; Palmquist, R.D.; Desjarlais, L.

    1993-09-01

    Sandia has developed an advanced operational control system approach, called Graphical Programming, to design, program, and operate robotic systems. The Graphical Programming approach produces robot systems that are faster to develop and use, safer in operation, and cheaper overall than alternative teleoperation or autonomous robot control systems. Graphical Programming also provides an efficient and easy-to-use interface to traditional robot systems for use in setup and programming tasks. This paper provides an overview of the Graphical Programming approach and lists key features of Graphical Programming systems. Graphical Programming uses 3-D visualization and simulation software with intuitive operator interfaces for the programming and control of complex robotic systems. Graphical Programming Supervisor software modules allow an operator to command and simulate complex tasks in a graphic preview mode and, when acceptable, command the actual robots and monitor their motions with the graphic system. Graphical Programming Supervisors maintain registration with the real world and allow the robot to perform tasks that cannot be accurately represented with models alone by using a combination of model and sensor-based control.

  12. Structured Light-Based 3D Reconstruction System for Plants.

    PubMed

    Nguyen, Thuy Tuong; Slaughter, David C; Max, Nelson; Maloof, Julin N; Sinha, Neelima

    2015-07-29

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance.
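    The recall (0.97) and precision (0.89) reported for leaf detection follow the standard definitions in terms of true positives, false positives, and false negatives. A minimal Python sketch, with hypothetical detection counts chosen only to land near the reported values:

```python
def precision_recall(tp, fp, fn):
    """Standard detection metrics:
    precision = TP / (TP + FP)  -- fraction of detections that are real
    recall    = TP / (TP + FN)  -- fraction of real leaves detected"""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# hypothetical counts for illustration, not the paper's data
p, r = precision_recall(tp=89, fp=11, fn=3)
print(round(p, 2), round(r, 2))  # -> 0.89 0.97
```

A precision below recall, as reported here, means the detector rarely misses a leaf but occasionally labels non-leaf structure as one.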

  13. Structured Light-Based 3D Reconstruction System for Plants

    PubMed Central

    Nguyen, Thuy Tuong; Slaughter, David C.; Max, Nelson; Maloof, Julin N.; Sinha, Neelima

    2015-01-01

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is still a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance. PMID:26230701

  14. 3D widgets for exploratory scientific visualization

    NASA Technical Reports Server (NTRS)

    Herndon, Kenneth P.; Meyer, Tom

    1995-01-01

    Computational fluid dynamics (CFD) techniques are used to simulate flows of fluids like air or water around such objects as airplanes and automobiles. These techniques usually generate very large amounts of numerical data which are difficult to understand without using graphical scientific visualization techniques. There are a number of commercial scientific visualization applications available today which allow scientists to control visualization tools via textual and/or 2D user interfaces. However, these user interfaces are often difficult to use. We believe that 3D direct-manipulation techniques for interactively controlling visualization tools will provide opportunities for powerful and useful interfaces with which scientists can more effectively explore their datasets. A few systems have been developed which use these techniques. In this paper, we will present a variety of 3D interaction techniques for manipulating parameters of visualization tools used to explore CFD datasets, and discuss in detail various techniques for positioning tools in a 3D scene.

  15. Algorithms for Haptic Rendering of 3D Objects

    NASA Technical Reports Server (NTRS)

    Basdogan, Cagatay; Ho, Chih-Hao; Srinivasan, Mandayam

    2003-01-01

    Algorithms have been developed to provide haptic rendering of three-dimensional (3D) objects in virtual (that is, computationally simulated) environments. The goal of haptic rendering is to generate tactual displays of the shapes, hardnesses, surface textures, and frictional properties of 3D objects in real time. Haptic rendering is a major element of the emerging field of computer haptics, which invites comparison with computer graphics. We have already seen various applications of computer haptics in the areas of medicine (surgical simulation, telemedicine, haptic user interfaces for blind people, and rehabilitation of patients with neurological disorders), entertainment (3D painting, character animation, morphing, and sculpting), mechanical design (path planning and assembly sequencing), and scientific visualization (geophysical data analysis and molecular manipulation).
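    A common building block in haptic rendering (one standard technique, not necessarily the exact method of this work) is penalty-based force computation: when the haptic probe penetrates a virtual surface, a spring-like restoring force proportional to the penetration depth is sent to the device. A minimal sketch for a sphere, with hypothetical stiffness and geometry:

    ```python
    import numpy as np

    def penalty_force(probe_pos, center, radius, stiffness=500.0):
        """Spring-like force pushing the haptic probe out of a sphere.

        Returns the zero vector when the probe is outside the surface.
        F = k * d * n, with d the penetration depth and n the outward
        surface normal at the contact point.
        """
        offset = np.asarray(probe_pos, float) - np.asarray(center, float)
        dist = np.linalg.norm(offset)
        depth = radius - dist              # penetration depth (>0 inside)
        if depth <= 0.0 or dist == 0.0:
            return np.zeros(3)
        normal = offset / dist             # outward surface normal
        return stiffness * depth * normal

    # Probe 1 mm inside a 5 cm sphere -> force along +x
    f = penalty_force([0.049, 0.0, 0.0], [0.0, 0.0, 0.0], 0.05)
    ```

    In a real haptic loop this computation runs at roughly 1 kHz, since tactile perception is far more sensitive to update latency than vision is.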

  16. DspaceOgreTerrain 3D Terrain Visualization Tool

    NASA Technical Reports Server (NTRS)

    Myint, Steven; Jain, Abhinandan; Pomerantz, Marc I.

    2012-01-01

    DspaceOgreTerrain is an extension to the DspaceOgre 3D visualization tool that supports real-time visualization of various terrain types, including digital elevation maps, planets, and meshes. DspaceOgreTerrain supports creating 3D representations of terrains and placing them in a scene graph. The 3D representations allow for a continuous level of detail, GPU-based rendering, and overlaying graphics like wheel tracks and shadows. It supports reading data from the SimScape terrain-modeling library. DspaceOgreTerrain solves the problem of displaying the results of simulations that involve very large terrains. In the past, it has been used to visualize simulations of vehicle traverses on Lunar and Martian terrains. These terrains were made up of billions of vertices and would not have been renderable in real-time without using a continuous level of detail rendering technique.

  17. A 3D surface imaging system for assessing human obesity

    NASA Astrophysics Data System (ADS)

    Xu, B.; Yu, W.; Yao, M.; Yao, X.; Li, Q.; Pepper, M. R.; Freeland-Graves, J. H.

    2009-08-01

    The increasing prevalence of obesity suggests a need to develop a convenient, reliable and economical tool for assessment of this condition. Three-dimensional (3D) body surface imaging has emerged as an exciting technology for estimation of body composition. This paper presents a new 3D body imaging system, which was designed for enhanced portability, affordability, and functionality. In this system, stereo vision technology was used to satisfy the requirements for a simple hardware setup and fast image acquisitions. The portability of the system was created via a two-stand configuration, and the accuracy of body volume measurements was improved by customizing stereo matching and surface reconstruction algorithms that target specific problems in 3D body imaging. Body measurement functions dedicated to body composition assessment also were developed. The overall performance of the system was evaluated in human subjects by comparison to other conventional anthropometric methods, as well as air displacement plethysmography, for body fat assessment.
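    The stereo vision approach mentioned above recovers depth from the disparity between matched pixels in the two camera views: for a calibrated, rectified pair, depth is Z = f·B/d, where f is the focal length in pixels, B the baseline between cameras, and d the disparity. A minimal sketch with hypothetical rig parameters:

    ```python
    def depth_from_disparity(focal_px, baseline_m, disparity_px):
        """Depth of a matched point from a rectified stereo pair: Z = f * B / d."""
        if disparity_px <= 0:
            raise ValueError("disparity must be positive")
        return focal_px * baseline_m / disparity_px

    # Hypothetical rig: 800 px focal length, 0.2 m baseline,
    # a 40 px disparity corresponds to a point 4 m away.
    z = depth_from_disparity(800.0, 0.2, 40.0)
    ```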

  18. GPU-Accelerated Denoising in 3D (GD3D)

    SciTech Connect

    2013-10-01

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. This software addresses two facets of this promising application: what tuning is necessary to achieve optimal performance on a modern GPU, and what parameters yield the best denoising results in practice? To answer the first question, the software performs an autotuning step to empirically determine optimal memory blocking on the GPU. To answer the second, it performs a sweep of algorithm parameters to determine the combination that best reduces the mean squared error relative to a noiseless reference image.
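    The core ideas named above, edge-preserving bilateral filtering plus a parameter sweep scored by mean squared error against a noiseless reference, can be sketched on the CPU in a few lines. This is an illustrative 1D version, not the GPU implementation described in the record; the signal and parameter values are hypothetical:

    ```python
    import numpy as np

    def bilateral_1d(signal, sigma_s=2.0, sigma_r=0.5, radius=4):
        """Edge-preserving bilateral filter: each weight combines spatial
        distance (sigma_s) and intensity difference (sigma_r)."""
        out = np.empty_like(signal, dtype=float)
        offsets = np.arange(-radius, radius + 1)
        spatial = np.exp(-offsets**2 / (2 * sigma_s**2))
        n = len(signal)
        for i in range(n):
            idx = np.clip(i + offsets, 0, n - 1)
            neigh = signal[idx]
            range_w = np.exp(-(neigh - signal[i])**2 / (2 * sigma_r**2))
            w = spatial * range_w
            out[i] = np.sum(w * neigh) / np.sum(w)
        return out

    # Parameter sweep scored by MSE against a noiseless reference:
    rng = np.random.default_rng(0)
    clean = np.concatenate([np.zeros(50), np.ones(50)])
    noisy = clean + 0.1 * rng.standard_normal(100)
    best = min((float(np.mean((bilateral_1d(noisy, sigma_r=s) - clean)**2)), s)
               for s in (0.1, 0.3, 0.5, 1.0))
    ```

    The GPU version additionally tunes memory blocking, but the scoring loop is the same idea: run the filter across a parameter grid and keep the setting with the lowest error.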

  19. Magmatic Systems in 3-D

    NASA Astrophysics Data System (ADS)

    Kent, G. M.; Harding, A. J.; Babcock, J. M.; Orcutt, J. A.; Bazin, S.; Singh, S.; Detrick, R. S.; Canales, J. P.; Carbotte, S. M.; Diebold, J.

    2002-12-01

    Multichannel seismic (MCS) images of crustal magma chambers are ideal targets for advanced visualization techniques. In the mid-ocean ridge environment, reflections originating at the melt-lens are well separated from other reflection boundaries, such as the seafloor, layer 2A and Moho, which enables the effective use of transparency filters. 3-D visualization of seismic reflectivity falls into two broad categories: volume and surface rendering. Volumetric-based visualization is an extremely powerful approach for the rapid exploration of very dense 3-D datasets. These 3-D datasets are divided into volume elements or voxels, which are individually color coded depending on the assigned datum value; the user can define an opacity filter to reject plotting certain voxels. This transparency allows the user to peer into the data volume, enabling an easy identification of patterns or relationships that might have geologic merit. Multiple image volumes can be co-registered to look at correlations between two different data types (e.g., amplitude variation with offset studies), in a manner analogous to draping attributes onto a surface. In contrast, surface visualization of seismic reflectivity usually involves producing "fence" diagrams of 2-D seismic profiles that are complemented with seafloor topography, along with point class data, draped lines and vectors (e.g. fault scarps, earthquake locations and plate motions). The overlying seafloor can be made partially transparent or see-through, enabling 3-D correlations between seafloor structure and seismic reflectivity. Exploration of 3-D datasets requires additional thought when constructing and manipulating these complex objects. As numbers of visual objects grow in a particular scene, there is a tendency to mask overlapping objects; this clutter can be managed through the effective use of total or partial transparency (i.e., alpha-channel). In this way, the co-variation between different datasets can be investigated
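    The voxel opacity filtering described above is typically realized by front-to-back alpha compositing along each viewing ray: each voxel's contribution is scaled by its opacity and by the transmittance accumulated so far, and fully transparent voxels are skipped. A minimal sketch for a single ray (the sample values and opacity mapping are hypothetical):

    ```python
    def composite_ray(samples, opacity):
        """Front-to-back alpha compositing along one viewing ray.

        samples: voxel values ordered front to back.
        opacity: callable mapping a voxel value to alpha in [0, 1]
                 (the user-defined transparency filter).
        """
        color, transmittance = 0.0, 1.0
        for v in samples:
            a = opacity(v)
            if a == 0.0:              # voxel rejected by the opacity filter
                continue
            color += transmittance * a * v
            transmittance *= (1.0 - a)
            if transmittance < 1e-3:  # early ray termination: ray is opaque
                break
        return color

    # Low-amplitude voxels made fully transparent, strong reflectors opaque,
    # so the ray "peers through" weak reflectivity to the melt-lens signal:
    c = composite_ray([0.1, 0.9, 0.4], lambda v: 0.0 if v < 0.3 else 0.8)
    ```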

  20. Design Graphics

    NASA Technical Reports Server (NTRS)

    1990-01-01

    A mathematician, David R. Hedgley, Jr., developed a computer program that considers whether a line in a graphic model of a three-dimensional object should or should not be visible. Known as the Hidden Line Computer Code, the program automatically removes superfluous lines and displays an object from a specific viewpoint, just as the human eye would see it. An example of how one company uses the program is the experience of Birdair, which specializes in production of fabric skylights and stadium covers. The fabric, called SHEERFILL, is a Teflon-coated fiberglass material developed in cooperation with DuPont Company. SHEERFILL glazed structures are either tension structures or air-supported tension structures. Both are formed by patterned fabric sheets supported by a steel or aluminum frame or cable network. Birdair uses the Hidden Line Computer Code to illustrate a prospective structure to an architect or owner. The program generates a three-dimensional perspective with the hidden lines removed. This program is still used by Birdair and continues to be commercially available to the public.
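    Hedgley's full algorithm handles general hidden-line removal; its cheapest ingredient, and a useful intuition for what "should not be visible" means, is the backface test: a polygon whose outward normal points away from the viewpoint cannot be seen, so its interior edges can be discarded immediately. A minimal sketch of that simpler test (not the Hidden Line Computer Code itself):

    ```python
    import numpy as np

    def is_back_facing(triangle, viewpoint):
        """True if the triangle faces away from the viewpoint.

        The normal comes from the cross product of two edges; the sign
        of its dot product with the vector toward the viewer decides
        visibility. This is only the first, cheapest step of full
        hidden-line removal.
        """
        a, b, c = (np.asarray(p, float) for p in triangle)
        normal = np.cross(b - a, c - a)
        to_view = np.asarray(viewpoint, float) - a
        return np.dot(normal, to_view) < 0.0

    tri = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]            # normal along +z
    seen_from_front = is_back_facing(tri, (0, 0, 5))   # False
    seen_from_behind = is_back_facing(tri, (0, 0, -5)) # True
    ```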

  1. Improving Semantic Updating Method on 3d City Models Using Hybrid Semantic-Geometric 3d Segmentation Technique

    NASA Astrophysics Data System (ADS)

    Sharkawi, K.-H.; Abdul-Rahman, A.

    2013-09-01

    CityGML defines five Levels of Detail (LoD), ranging from LoD0 to LoD4. The accuracy and structural complexity of the 3D objects increase with the LoD level, where LoD0 is the simplest LoD (2.5D; Digital Terrain Model (DTM) + building or roof print) while LoD4 is the most complex LoD (architectural details with interior structures). Semantic information is one of the main components in CityGML and 3D city models, and provides important information for any analyses. However, more often than not, the semantic information is not available for the 3D city model due to the unstandardized modelling process. One example is where a building is normally generated as one object (without specific feature layers such as Roof, Ground floor, Level 1, Level 2, Block A, Block B, etc.). This research attempts to develop a method to improve the semantic data updating process by segmenting the 3D building into simpler parts, which makes it easier for users to select and update the semantic information. The methodology is implemented for 3D buildings in LoD2, where the buildings are generated without architectural details but with distinct roof structures. This paper also introduces a hybrid semantic-geometric 3D segmentation method that deals with hierarchical segmentation of a 3D building based on its semantic value and surface characteristics, fitted by one of the predefined primitives. For future work, the segmentation method will be implemented as part of a change detection module that can detect any changes on the 3D buildings, store and retrieve semantic information of the changed structure, automatically update the 3D models, and visualize the results in a user-friendly graphical user interface (GUI).

  2. Designing stereoscopic information visualization for 3D-TV: What can we learn from S3D gaming?

    NASA Astrophysics Data System (ADS)

    Schild, Jonas; Masuch, Maic

    2012-03-01

    This paper explores graphical design and spatial alignment of visual information and graphical elements into stereoscopically filmed content, e.g. captions, subtitles, and especially more complex elements in 3D-TV productions. The method used is a descriptive analysis of existing computer- and video games that have been adapted for stereoscopic display using semi-automatic rendering techniques (e.g. Nvidia 3D Vision) or games which have been specifically designed for stereoscopic vision. Digital games often feature compelling visual interfaces that combine high usability with creative visual design. We explore selected examples of game interfaces in stereoscopic vision regarding their stereoscopic characteristics, how they draw attention, how we judge effect and comfort and where the interfaces fail. As a result, we propose a list of five aspects which should be considered when designing stereoscopic visual information: explicit information, implicit information, spatial reference, drawing attention, and vertical alignment. We discuss possible consequences, opportunities and challenges for integrating visual information elements into 3D-TV content. This work should further help improve current editing systems and identifies a need for future editing systems for 3D-TV, e.g., live editing and real-time alignment of visual information into 3D footage.

  3. Tomographic 3D-PIV and Applications

    NASA Astrophysics Data System (ADS)

    Elsinga, Gerrit E.; Wieneke, Bernhard; Scarano, Fulvio; Schröder, Andreas

    Tomographic particle image velocimetry is a 3D PIV technique based on the illumination, recording, reconstruction and analysis of tracer-particle motion within a three-dimensional measurement volume. The recently developed technique makes use of several simultaneous views of the illuminated particles, typically 4, and their three-dimensional reconstruction as a light-intensity distribution by means of optical tomography. The reconstruction is performed with the MART algorithm (multiplicative algebraic reconstruction technique), yielding a 3D distribution of light intensity discretized over an array of voxels. The reconstructed tomogram pair is then analyzed by means of 3D cross-correlation with an iterative multigrid volume-deformation technique, returning the three-component velocity vector distribution over the measurement volume. The implementation of the tomographic technique in time-resolved mode by means of high repetition rate PIV hardware has the capability to yield 4D velocity information. The first part of the chapter describes the operation principles and gives a detailed assessment of the tomographic reconstruction algorithm performance based upon a computer-simulated experiment. The second part of the chapter presents four applications on two flow cases: 1. the transitional wake behind a circular cylinder; 2. the turbulent boundary layer developing over a flat plate. For the first case, experiments in air at ReD = 2700 are described together with the experimental assessment of the tomographic reconstruction accuracy. In this experiment a direct comparison is made between the results obtained by tomographic PIV and stereo-PIV. Experiments conducted in a water facility on the cylinder wake show the extension of the technique to time-resolved measurements in water at ReD = 540 by means of a low repetition rate PIV system. A high data yield is obtained using high-resolution cameras (2k × 2k pixels) returning 650k vectors per volume. Measurements of the
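    The MART reconstruction named above updates each voxel multiplicatively so that the reprojection of the volume matches each recorded pixel; the multiplicative form keeps voxel intensities non-negative, as light intensity must be. A deliberately tiny sketch of the update rule, with a hypothetical 2-voxel, 2-ray system (real tomographic PIV volumes involve millions of voxels and sparse weight matrices):

    ```python
    import numpy as np

    def mart(W, g, n_iter=20, mu=1.0):
        """Multiplicative Algebraic Reconstruction Technique (MART).

        W: (n_rays, n_voxels) weight matrix mapping voxel intensities
           to recorded pixel intensities; g: measured pixel values.
        For each ray i, every voxel j is scaled by
        (g_i / proj_i) ** (mu * W_ij), pulling the reprojection
        toward the measurement while staying non-negative.
        """
        f = np.ones(W.shape[1])
        for _ in range(n_iter):
            for i in range(W.shape[0]):
                proj = W[i] @ f
                if proj > 0 and g[i] > 0:
                    f *= (g[i] / proj) ** (mu * W[i])
        return f

    # Tiny demo: each ray sees exactly one voxel, so the reconstruction
    # recovers the pixel values directly.
    W = np.array([[1.0, 0.0], [0.0, 1.0]])
    g = np.array([2.0, 3.0])
    f = mart(W, g)
    ```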

  4. Engineering graphics data entry for space station data base

    NASA Technical Reports Server (NTRS)

    Lacovara, R. C.

    1986-01-01

    The entry of graphical engineering data into the Space Station Data Base was examined. Discussed were: representation of graphics objects; representation of connectivity data; graphics capture hardware; graphics display hardware; site-wide distribution of graphics, and consolidation of tools and hardware. A fundamental assumption was that existing equipment such as IBM-based graphics capture software and VAX networked facilities would be exploited. Defensible conclusions reached after study and simulations of use of these systems at the engineering level are: (1) existing IBM-based graphics capture software is an adequate and economical means of entry of schematic and block diagram data for present and anticipated electronic systems for Space Station; (2) connectivity data from the aforementioned system may be incorporated into the envisioned Space Station Data Base with modest effort; (3) graphics and connectivity data captured on the IBM-based system may be exported to the VAX network in a simple and direct fashion; (4) graphics data may be displayed site-wide on VT-125 terminals and lookalikes; (5) graphics hard-copy may be produced site-wide on various dot-matrix printers; and (6) the system may provide integrated engineering services at both the engineering and engineering management level.

  5. Laserprinter applications in a medical graphics department.

    PubMed

    Lynch, P J

    1987-01-01

    Our experience with the Apple Macintosh and LaserWriter equipment has convinced us that lasergraphics holds much current and future promise in the creation of line graphics and typography for the biomedical community. Although we continue to use other computer graphics equipment to produce color slides and an occasional pen-plotter graphic, the most rapidly growing segment of our graphics workload is in material well-suited to production on the Macintosh/LaserWriter system. At present our goal is to integrate all of our computer graphics production (color slides, video paint graphics and monochrome print graphics) into a single Macintosh-based system within the next two years. The software and hardware currently available are capable of producing a wide range of science graphics very quickly and inexpensively. The cost-effectiveness, versatility and relatively low initial investment required to install this equipment make it an attractive alternative for cost-recovery departments just entering the field of computer graphics.

  6. Interactive 3D Mars Visualization

    NASA Technical Reports Server (NTRS)

    Powell, Mark W.

    2012-01-01

    The Interactive 3D Mars Visualization system provides high-performance, immersive visualization of satellite and surface vehicle imagery of Mars. The software can be used in mission operations to provide the most accurate position information for the Mars rovers to date. When integrated into the mission data pipeline, this system allows mission planners to view the location of the rover on Mars to 0.01-meter accuracy with respect to satellite imagery, with dynamic updates to incorporate the latest position information. Given this information so early in the planning process, rover drivers are able to plan more accurate drive activities for the rover than ever before, increasing the execution of science activities significantly. Scientifically, this 3D mapping information puts all of the science analyses to date into geologic context on a daily basis instead of weeks or months, as was the norm prior to this contribution. This allows the science planners to judge the efficacy of their previously executed science observations much more efficiently, and achieve greater science return as a result. The Interactive 3D Mars surface view is a Mars terrain browsing software interface that encompasses the entire region of exploration for a Mars surface exploration mission. The view is interactive, allowing the user to pan in any direction by clicking and dragging, or to zoom in or out by scrolling the mouse or touchpad. The tool set currently includes a tool for selecting a point of interest, and a ruler tool for displaying the positions of, and distance between, two points of interest. The mapping information can be harvested and shared through ubiquitous online mapping tools like Google Mars, NASA WorldWind, and Worldwide Telescope.

  7. Real-Time 3D Visualization

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Butler Hine, former director of the Intelligent Mechanism Group (IMG) at Ames Research Center, and five others partnered to start Fourth Planet, Inc., a visualization company that specializes in the intuitive visual representation of dynamic, real-time data over the Internet and Intranet. Over a five-year period, the then NASA researchers performed ten robotic field missions in harsh climes to mimic the end-to-end operations of automated vehicles trekking across another world under control from Earth. The core software technology for these missions was the Virtual Environment Vehicle Interface (VEVI). Fourth Planet has released VEVI4, the fourth generation of the VEVI software, and NetVision. VEVI4 is a cutting-edge computer graphics simulation and remote control applications tool. The NetVision package allows large companies to view and analyze in virtual 3D space such things as the health or performance of their computer network or locate a trouble spot on an electric power grid. Other products are forthcoming. Fourth Planet is currently part of the NASA/Ames Technology Commercialization Center, a business incubator for start-up companies.

  8. 3D Nanostructuring of Semiconductors

    NASA Astrophysics Data System (ADS)

    Blick, Robert

    2000-03-01

    Modern semiconductor technology makes it possible to machine devices on the nanometer scale. I will discuss the current limits of the fabrication processes, which enable the definition of single-electron transistors with dimensions down to 8 nm. In addition to conventional 2D patterning and structuring of semiconductors, I will demonstrate how to apply 3D nanostructuring techniques to build freely suspended single-crystal beams with lateral dimensions down to 20 nm. In transport measurements in the temperature range from 30 mK up to 100 K, these nano-crystals are characterized with respect to their electronic as well as their mechanical properties. Moreover, I will present possible applications of these devices.

  9. What Lies Ahead (3-D)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D cylindrical-perspective mosaic taken by the navigation camera on the Mars Exploration Rover Spirit on sol 82 shows the view south of the large crater dubbed 'Bonneville.' The rover will travel toward the Columbia Hills, seen here at the upper left. The rock dubbed 'Mazatzal' and the hole the rover drilled in to it can be seen at the lower left. The rover's position is referred to as 'Site 22, Position 32.' This image was geometrically corrected to make the horizon appear flat.

  10. Making Inexpensive 3-D Models

    NASA Astrophysics Data System (ADS)

    Manos, Harry

    2016-03-01

    Visual aids are important to student learning, and they help make the teacher's job easier. In keeping with the TPT theme of "The Art, Craft, and Science of Physics Teaching," the purpose of this article is to show how teachers, lacking equipment and funds, can construct a durable 3-D model reference frame and a model gravity well tailored to specific class lessons. Most of the supplies are readily available in the home or at school: rubbing alcohol, a rag, two colors of spray paint, art brushes, and masking tape. The cost of these supplies, if you don't have them, is less than $20.

  11. A Clean Adirondack (3-D)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This is a 3-D anaglyph showing a microscopic image taken of an area measuring 3 centimeters (1.2 inches) across on the rock called Adirondack. The image was taken at Gusev Crater on the 33rd day of the Mars Exploration Rover Spirit's journey (Feb. 5, 2004), after the rover used its rock abrasion tool brush to clean the surface of the rock. Dust, which was pushed off to the side during cleaning, can still be seen to the left and in low areas of the rock.

  12. 3D Printed Shelby Cobra

    SciTech Connect

    Love, Lonnie

    2015-01-09

    ORNL's newly printed 3D Shelby Cobra was showcased at the 2015 NAIAS in Detroit. This "laboratory on wheels" uses the Shelby Cobra design, celebrating the 50th anniversary of this model and honoring the first vehicle to be voted a national monument. The Shelby was printed at the Department of Energy’s Manufacturing Demonstration Facility at ORNL using the BAAM (Big Area Additive Manufacturing) machine and is intended as a “plug-n-play” laboratory on wheels. The Shelby will allow research and development of integrated components to be tested and enhanced in real time, improving the use of sustainable, digital manufacturing solutions in the automotive industry.

  13. LiveView3D: Real Time Data Visualization for the Aerospace Testing Environment

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; Fleming, Gary A.

    2006-01-01

    This paper addresses LiveView3D, a software package and associated data visualization system for use in the aerospace testing environment. The LiveView3D system allows researchers to graphically view data from numerous wind tunnel instruments in real time in an interactive virtual environment. The graphical nature of the LiveView3D display provides researchers with an intuitive view of the measurement data, making it easier to interpret the aerodynamic phenomenon under investigation. LiveView3D has been developed at the NASA Langley Research Center and has been applied in the Langley Unitary Plan Wind Tunnel (UPWT). This paper discusses the capabilities of the LiveView3D system, provides example results from its application in the UPWT, and outlines features planned for future implementation.

  14. Reconstruction and 3D visualisation based on objective real 3D based documentation.

    PubMed

    Bolliger, Michael J; Buck, Ursula; Thali, Michael J; Bolliger, Stephan A

    2012-09-01

    Reconstructions based directly upon forensic evidence alone are called primary information. Historically this consists of documentation of findings by verbal protocols, photographs and other visual means. Currently modern imaging techniques such as 3D surface scanning and radiological methods (computed tomography, magnetic resonance imaging) are also applied. Secondary interpretation is based on facts and the examiner's experience. Usually such reconstructive expert reports are given in written form, and are often enhanced by sketches. However, narrative interpretations can, especially in complex courses of action, be difficult to present and can be misunderstood. In this report we demonstrate the use of graphic reconstruction of secondary interpretation with supporting pictorial evidence, applying digital visualisation (using 'Poser') or scientific animation (using '3D Studio Max', 'Maya') and present methods of clearly distinguishing between factual documentation and examiners' interpretation based on three cases. The first case involved a pedestrian who was initially struck by a car on a motorway and was then run over by a second car. The second case involved a suicidal gunshot to the head with a rifle, in which the trigger was pushed with a rod. The third case dealt with a collision between two motorcycles. Pictorial reconstruction of the secondary interpretation of these cases has several advantages. The images enable an immediate overview, give rise to enhanced clarity, and compel the examiner to look at all details if he or she is to create a complete image.

  15. 3D printed bionic ears.

    PubMed

    Mannoor, Manu S; Jiang, Ziwen; James, Teena; Kong, Yong Lin; Malatesta, Karen A; Soboyejo, Winston O; Verma, Naveen; Gracias, David H; McAlpine, Michael C

    2013-06-12

    The ability to three-dimensionally interweave biological tissue with functional electronics could enable the creation of bionic organs possessing enhanced functionalities over their human counterparts. Conventional electronic devices are inherently two-dimensional, preventing seamless multidimensional integration with synthetic biology, as the processes and materials are very different. Here, we present a novel strategy for overcoming these difficulties via additive manufacturing of biological cells with structural and nanoparticle derived electronic elements. As a proof of concept, we generated a bionic ear via 3D printing of a cell-seeded hydrogel matrix in the anatomic geometry of a human ear, along with an intertwined conducting polymer consisting of infused silver nanoparticles. This allowed for in vitro culturing of cartilage tissue around an inductive coil antenna in the ear, which subsequently enables readout of inductively-coupled signals from cochlea-shaped electrodes. The printed ear exhibits enhanced auditory sensing for radio frequency reception, and complementary left and right ears can listen to stereo audio music. Overall, our approach suggests a means to intricately merge biologic and nanoelectronic functionalities via 3D printing.

  16. 3D Printable Graphene Composite

    PubMed Central

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-01-01

    In human history, both the Iron Age and the Silicon Age thrived only after mature mass-processing technologies were developed. Graphene is the most recent superior material, one which could potentially initiate another new material age. However, while being exploited to its full extent, conventional processing methods fail to provide a link to today's personalization tide. New technology should be ushered in. Three-dimensional (3D) printing fills the missing linkage between graphene materials and the digital mainstream. Their alliance could generate additional momentum to push the graphene revolution into a new phase. Here we demonstrate for the first time that a graphene composite, with a graphene loading up to 5.6 wt%, can be 3D printed into computer-designed models. The composite's linear thermal coefficient is below 75 ppm·°C−1 from room temperature to its glass transition temperature (Tg), which is crucial to keeping thermal stress low during the printing process. PMID:26153673

  17. 3D Printed Bionic Ears

    PubMed Central

    Mannoor, Manu S.; Jiang, Ziwen; James, Teena; Kong, Yong Lin; Malatesta, Karen A.; Soboyejo, Winston O.; Verma, Naveen; Gracias, David H.; McAlpine, Michael C.

    2013-01-01

    The ability to three-dimensionally interweave biological tissue with functional electronics could enable the creation of bionic organs possessing enhanced functionalities over their human counterparts. Conventional electronic devices are inherently two-dimensional, preventing seamless multidimensional integration with synthetic biology, as the processes and materials are very different. Here, we present a novel strategy for overcoming these difficulties via additive manufacturing of biological cells with structural and nanoparticle derived electronic elements. As a proof of concept, we generated a bionic ear via 3D printing of a cell-seeded hydrogel matrix in the precise anatomic geometry of a human ear, along with an intertwined conducting polymer consisting of infused silver nanoparticles. This allowed for in vitro culturing of cartilage tissue around an inductive coil antenna in the ear, which subsequently enables readout of inductively-coupled signals from cochlea-shaped electrodes. The printed ear exhibits enhanced auditory sensing for radio frequency reception, and complementary left and right ears can listen to stereo audio music. Overall, our approach suggests a means to intricately merge biologic and nanoelectronic functionalities via 3D printing. PMID:23635097

  18. Martian terrain & airbags - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Portions of the lander's deflated airbags and a petal are at lower left in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. This image is part of a 3D 'monster' panorama of the area surrounding the landing site.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

    Click below to see the left and right views individually. [figure removed for brevity, see original site] Left [figure removed for brevity, see original site] Right

  19. Martian terrain & airbags - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Portions of the lander's deflated airbags and a petal are at the lower area of this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. This image is part of a 3D 'monster' panorama of the area surrounding the landing site.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

    Click below to see the left and right views individually. [figure removed for brevity, see original site] Left [figure removed for brevity, see original site] Right

  20. Fast DRR generation for 2D to 3D registration on GPUs

    SciTech Connect

    Tornai, Gabor Janos; Cserey, Gyoergy

    2012-08-15

    Purpose: The generation of digitally reconstructed radiographs (DRRs) is the most time consuming step on the CPU in intensity based two-dimensional x-ray to three-dimensional (CT or 3D rotational x-ray) medical image registration, which has application in several image guided interventions. This work presents optimized DRR rendering on graphical processor units (GPUs) and compares performance achievable on four commercially available devices. Methods: A ray-cast based DRR rendering was implemented for a 512 × 512 × 72 CT volume. The block size parameter was optimized for four different GPUs for a region of interest (ROI) of 400 × 225 pixels with different sampling ratios (1.1%-9.1% and 100%). Performance was statistically evaluated and compared for the four GPUs. The method and the block size dependence were validated on the latest GPU for several parameter settings with a public gold standard dataset (512 × 512 × 825 CT) for registration purposes. Results: Depending on the GPU, the full ROI is rendered in 2.7-5.2 ms. If a sampling ratio of 1.1%-9.1% is applied, execution time is in the range of 0.3-7.3 ms. On all GPUs, the mean of the execution time increased linearly with respect to the number of pixels if sampling was used. Conclusions: The presented results outperform other results from the literature. This indicates that automatic 2D to 3D registration, which typically requires a couple of hundred DRR renderings to converge, can be performed quasi on-line, in less than a second or, depending on the application and hardware, in less than a couple of seconds. Accordingly, a whole new field of applications is opened for image guided interventions, where the registration is continuously performed to match the real-time x-ray.
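    The ray-cast DRR in this abstract reduces, in its simplest orthographic form, to integrating attenuation along rays through the CT volume. A minimal sketch on a synthetic volume (parallel rays only; the GPU version casts one perspective ray per detector pixel, with trilinear sampling along each ray):

```python
import numpy as np

def render_drr(volume, spacing, axis=0):
    """Orthographic DRR: line integral of attenuation along parallel rays.

    volume  : 3D array of attenuation coefficients (per mm)
    spacing : voxel size along the integration axis (mm)
    Returns a 2D array of line integrals (the simulated radiograph).
    """
    return volume.sum(axis=axis) * spacing

# Tiny synthetic CT volume: a dense cube surrounded by air
ct = np.zeros((72, 128, 128), dtype=np.float32)
ct[20:50, 40:80, 40:80] = 0.02        # attenuation per mm
drr = render_drr(ct, spacing=2.0, axis=0)   # simulated radiograph, (128, 128)
```

A full perspective ray caster replaces the axis sum with per-ray sampling, which is what maps naturally to one GPU thread per detector pixel.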

  1. Integration of real-time 3D capture, reconstruction, and light-field display

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Pei, Renjing; Liu, Yongchun; Zhang, Xiao

    2015-03-01

    Effective integration of 3D acquisition, reconstruction (modeling) and display technologies into a seamless system provides an augmented experience of visualizing and analyzing real objects and scenes with realistic 3D sensation. Applications can be found in medical imaging, gaming, virtual or augmented reality and hybrid simulations. Although 3D acquisition, reconstruction, and display technologies have gained significant momentum in recent years, there seems to be a lack of attention on synergistically combining these components into an "end-to-end" 3D visualization system. We designed, built and tested an integrated 3D visualization system that is able to capture 3D light-field images in real time, perform 3D reconstruction to build a 3D model of the objects, and display the 3D model on a large autostereoscopic screen. In this article, we present our system architecture and component designs, hardware/software implementations, and experimental results. We elaborate on our recent progress on sparse camera array light-field 3D acquisition, real-time dense 3D reconstruction, and autostereoscopic multi-view 3D display. A prototype is finally presented with test results to illustrate the effectiveness of our proposed integrated 3D visualization system.

  2. INCREASING OIL RECOVERY THROUGH ADVANCED REPROCESSING OF 3D SEISMIC, GRANT CANYON AND BACON FLAT FIELDS, NYE COUNTY, NEVADA

    SciTech Connect

    Eric H. Johnson; Don E. French

    2001-06-01

    A number of improvements were made in the processing of the survey compared to the original work. Pre-stack migration was employed, and some errors in muting in the original processing were found and corrected. In addition, improvements in computer hardware allowed interactive monitoring of the processing steps, so that parameters could be adjusted before completion of each step. The reprocessed survey was then loaded into SeisX, v. 3.5, for interpretation work. Interpretation was done on two 21-inch monitors connected to the workstation. SeisX was prone to crashing, but little work was lost because of this. The program was developed for use under the Unix operating system, and some aspects of the design of the user interface betray that heritage. For example, printing is a 2-stage operation that involves creation of a graphic file using SeisX and printing the file with printer utility software. Because of problems inherent in using graphics files with different software, a significant amount of trial and error is involved in getting printed output. Most of the interpretation work was done using vertical profiles. The interpretation tools used with time slices are limited and hard to use, but a number of tools and techniques are available for use with vertical profiles. Although this project encountered a number of delays and difficulties, some unavoidable and some self-inflicted, the result is an improved 3D survey and greater confidence in the interpretation. The experiences described in this report will be useful to those who are embarking on a 3D seismic interpretation project.

  3. 3D head model classification using optimized EGI

    NASA Astrophysics Data System (ADS)

    Tong, Xin; Wong, Hau-san; Ma, Bo

    2006-02-01

    With the general availability of 3D digitizers and scanners, 3D graphical models have been used widely in a variety of applications. This has led to the development of search engines for 3D models. In particular, 3D head model classification and retrieval have received more and more attention in view of their many potential applications in criminal identification, computer animation, the movie industry and the medical industry. This paper addresses the 3D head model classification problem using 2D subspace analysis methods such as 2D principal component analysis (2DPCA[3]) and 2D Fisher discriminant analysis (2DLDA[5]). It takes advantage of the fact that the histogram is a 2D image, from which the most useful information can be extracted to obtain good classification results. There are two main advantages: first, less computation is needed to reach the same classification rate; second, the dimensionality can be reduced further than with PCA, yielding higher efficiency.
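    The 2DPCA referenced above ([3]) operates on 2D images directly, forming a small image covariance matrix instead of vectorizing each image as classical PCA does. A minimal sketch on random stand-in data (the array shapes and names are illustrative, not from the paper):

```python
import numpy as np

def two_d_pca(images, n_components):
    """2DPCA: project each (h, w) image onto the top eigenvectors of the
    (w, w) image covariance matrix G = mean of (A - Abar)^T (A - Abar)."""
    centered = images - images.mean(axis=0)
    # G[w, k] = sum over samples m and rows h of centered[m,h,w]*centered[m,h,k]
    G = np.einsum('mhw,mhk->wk', centered, centered) / len(images)
    eigvals, eigvecs = np.linalg.eigh(G)          # ascending order
    return eigvecs[:, ::-1][:, :n_components]     # top components

# Hypothetical stand-ins for 2D EGI histograms
rng = np.random.default_rng(0)
imgs = rng.normal(size=(40, 16, 12))              # 40 histograms of 16 x 12
X = two_d_pca(imgs, n_components=3)
features = imgs @ X                               # each image -> (16, 3) features
```

The feature matrices (here 16 × 3 instead of 16 × 12) would then feed a nearest-neighbour or discriminant classifier.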

  4. 3D Medical Collaboration Technology to Enhance Emergency Healthcare

    PubMed Central

    Welch, Greg; Sonnenwald, Diane H; Fuchs, Henry; Cairns, Bruce; Mayer-Patel, Ketan; Söderholm, Hanna M.; Yang, Ruigang; State, Andrei; Towles, Herman; Ilie, Adrian; Ampalam, Manoj; Krishnan, Srinivas; Noel, Vincent; Noland, Michael; Manning, James E.

    2009-01-01

    Two-dimensional (2D) videoconferencing has been explored widely in the past 15–20 years to support collaboration in healthcare. Two issues that arise in most evaluations of 2D videoconferencing in telemedicine are the difficulty obtaining optimal camera views and poor depth perception. To address these problems, we are exploring the use of a small array of cameras to reconstruct dynamic three-dimensional (3D) views of a remote environment and of events taking place within. The 3D views could be sent across wired or wireless networks to remote healthcare professionals equipped with fixed displays or with mobile devices such as personal digital assistants (PDAs). The remote professionals’ viewpoints could be specified manually or automatically (continuously) via user head or PDA tracking, giving the remote viewers head-slaved or hand-slaved virtual cameras for monoscopic or stereoscopic viewing of the dynamic reconstructions. We call this idea remote 3D medical collaboration. In this article we motivate and explain the vision for 3D medical collaboration technology; we describe the relevant computer vision, computer graphics, display, and networking research; we present a proof-of-concept prototype system; and we present evaluation results supporting the general hypothesis that 3D remote medical collaboration technology could offer benefits over conventional 2D videoconferencing in emergency healthcare. PMID:19521951

  5. 3D Left Ventricular Strain from Unwrapped Harmonic Phase Measurements

    PubMed Central

    Venkatesh, Bharath Ambale; Gupta, Himanshu; Lloyd, Steven G.; Dell’Italia, Louis; Denney, Thomas S.

    2010-01-01

    Purpose To validate a method for measuring 3D left ventricular (LV) strain from phase-unwrapped harmonic phase (HARP) images derived from tagged cardiac magnetic resonance imaging (MRI). Materials and Methods A set of 40 human subjects were imaged with tagged MRI. In each study HARP phase was computed and unwrapped in each short-axis and long-axis image. Inconsistencies in unwrapped phase were resolved using branch cuts manually placed with a graphical user interface. 3D strain maps were computed for all imaged timeframes in each study. The strain from unwrapped phase (SUP) and displacements were compared to those estimated by a feature-based (FB) technique and a HARP technique. Results 3D strain was computed in each timeframe through systole and mid diastole in approximately 30 minutes per study. The standard deviation of the difference between strains measured by the FB and the SUP methods was less than 5% of the average of the strains from the two methods. The correlation between peak circumferential strain measured using the SUP and HARP techniques was over 83%. Conclusion The SUP technique can reconstruct full 3-D strain maps from tagged MR images through the cardiac cycle in a reasonable amount of time and user interaction compared to other 3D analysis methods. PMID:20373429
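    The strain computation itself can be illustrated with the standard Green-Lagrange definition. This sketch assumes a deformation gradient F has already been estimated from the displacement field recovered from the unwrapped tag phase; the numbers are hypothetical:

```python
import numpy as np

def green_lagrange_strain(F):
    """Green-Lagrange strain tensor E = 0.5 * (F^T F - I).

    Circumferential strain is one diagonal component after rotating E
    into the local heart coordinate frame."""
    return 0.5 * (F.T @ F - np.eye(3))

# Hypothetical uniform 10% stretch along the first axis
F = np.diag([1.1, 1.0, 1.0])
E = green_lagrange_strain(F)   # E[0, 0] = 0.5 * (1.1**2 - 1) = 0.105
```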

  6. 3D thermography imaging standardization technique for inflammation diagnosis

    NASA Astrophysics Data System (ADS)

    Ju, Xiangyang; Nebel, Jean-Christophe; Siebert, J. Paul

    2005-01-01

    We develop a 3D thermography imaging standardization technique to allow quantitative data analysis. Medical digital infrared thermal imaging is a very sensitive and reliable means of graphically mapping and displaying skin surface temperature. It allows doctors to visualise in colour and quantify temperature changes in the skin surface. The spectrum of colours indicates both hot and cold responses, which may co-exist if the pain associated with an inflammatory focus excites an increase in sympathetic activity. However, because thermography provides only qualitative diagnostic information, it has not gained acceptance in the medical and veterinary communities as a necessary or effective tool in inflammation and tumor detection. Our technique is based on the combination of visual 3D imaging and thermal imaging, mapping 2D thermography images onto a 3D anatomical model. We then rectify the 3D thermogram into a view-independent thermogram and conform it to a standard shape template. The combination of these imaging facilities allows the generation of combined 3D and thermal data from which thermal signatures can be quantified.

  7. Programming standards for effective S-3D game development

    NASA Astrophysics Data System (ADS)

    Schneider, Neil; Matveev, Alexander

    2008-02-01

    When a video game is in development, more often than not it is being rendered in three dimensions - complete with volumetric depth. It is the PC monitor that takes this three-dimensional information and artificially displays it in a flat, two-dimensional format. Stereoscopic drivers take the three-dimensional information captured from DirectX and OpenGL calls and properly display it with a unique left- and right-sided view for each eye so that a proper stereoscopic 3D image can be seen by the gamer. The two-dimensional limitation of how information is displayed on screen has encouraged programming short-cuts and work-arounds that stifle this stereoscopic 3D effect, and the purpose of this guide is to outline techniques to get the best of both worlds. While the programming requirements do not significantly add to the game development time, following these guidelines will greatly enhance your customers' stereoscopic 3D experience, increase your likelihood of earning Meant to be Seen certification, and give you instant cost-free access to the industry's most valued consumer base. While this outline is mostly based on NVIDIA's programming guide and iZ3D resources, it is designed to work with all stereoscopic 3D hardware solutions and is not proprietary in any way.

  8. 3D measurement system based on computer-generated gratings

    NASA Astrophysics Data System (ADS)

    Zhu, Yongjian; Pan, Weiqing; Luo, Yanliang

    2010-08-01

    A new kind of 3D measurement system has been developed to obtain the 3D profile of complex objects. The measurement principle is triangulation with digital fringe projection, with the fringes generated entirely by computer. The computer-generated four fringes thus form the data source for phase-shifting 3D profilometry. The hardware of the system includes the computer, video camera, projector, image grabber, and a VGA board with two ports (one linked to the screen, the other to the projector). The software of the system consists of a grating projection module, an image grabbing module, a phase reconstruction module and a 3D display module. A software-based method for synchronizing grating projection and image capture is proposed. The nonlinear error of the captured fringes is compensated by a pixel-to-pixel gray-level correction. In addition, least-squares phase unwrapping is used for phase reconstruction, using the combination of Log Modulation Amplitude and Phase Derivative Variance (LMAPDV) as the weight. The system adopts an algorithm from the Matlab Tool Box for camera calibration. The 3D measurement system has an accuracy of 0.05 mm. The execution time of the system is 3-5 s for a single measurement.
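    The "computer-generated four fringes" correspond to the standard four-step phase-shifting algorithm. Its wrapped-phase formula can be sketched as follows on synthetic data (the LMAPDV-weighted least-squares unwrapping described in the abstract is a separate, later step):

```python
import numpy as np

def wrapped_phase(i1, i2, i3, i4):
    """Four-step phase shifting with shifts 0, pi/2, pi, 3*pi/2:
    I_k = A + B*cos(phi + k*pi/2), so tan(phi) = (I4 - I2) / (I1 - I3).
    Returns the wrapped phase in (-pi, pi]."""
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic fringes over a one-dimensional "surface"
x = np.linspace(0.0, 4.0 * np.pi, 200)
phi_true = x                                    # true (unwrapped) phase ramp
shots = [1.0 + 0.5 * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi = wrapped_phase(*shots)                     # phi_true wrapped into (-pi, pi]
```

Note the background A and modulation B cancel in the ratio, which is why four shifted fringes suffice per measurement.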

  9. Interactive 3D visualisation of ECMWF ensemble weather forecasts

    NASA Astrophysics Data System (ADS)

    Rautenhaus, Marc; Grams, Christian M.; Schäfler, Andreas; Westermann, Rüdiger

    2013-04-01

    We investigate the feasibility of interactive 3D visualisation of ensemble weather predictions in a way suited for weather forecasting during aircraft-based atmospheric field campaigns. The study builds upon our previous work on web-based, 2D visualisation of numerical weather prediction data for the purpose of research flight planning (Rautenhaus et al., Geosci. Model Dev., 5, 55-71, 2012). Now we explore how interactive 3D visualisation of ensemble forecasts can be used to quickly identify atmospheric features relevant to a flight and to assess their uncertainty. We use data from the European Centre for Medium-Range Weather Forecasts (ECMWF) Ensemble Prediction System (EPS) and present techniques to interactively visualise the forecasts on a commodity desktop PC with a state-of-the-art graphics card. Major objectives of this study are: (1) help the user transition from the "familiar" 2D views (horizontal maps and vertical cross-sections) to 3D visualisation by putting interactive 2D views into a 3D context and enriching them with 3D elements, while at the same time (2) maintaining a high degree of quantitativeness in the visualisation to facilitate easy interpretation; (3) exploitation of the Graphics Processing Unit (GPU) for maximum interactivity; (4) investigation of how visualisation can be performed directly from datasets on ECMWF hybrid model levels; (5) development of a basic forecasting tool that provides synchronized navigation through forecast base and lead times, as well as through the ensemble dimension; and (6) interactive computation and visualisation of ensemble-based quantities. A prototype of our tool was used for weather forecasting during the aircraft-based T-NAWDEX-Falcon field campaign, which took place in October 2012 at the German Aerospace Centre's (DLR) Oberpfaffenhofen base. We reconstruct the forecast of a warm conveyor belt situation that occurred during the campaign and discuss challenges and opportunities posed by employing three

  10. 3D Printing of Graphene Aerogels.

    PubMed

    Zhang, Qiangqiang; Zhang, Feng; Medarametla, Sai Pradeep; Li, Hui; Zhou, Chi; Lin, Dong

    2016-04-06

    3D printing of a graphene aerogel with true 3D overhang structures is highlighted. The aerogel is fabricated by combining drop-on-demand 3D printing and freeze casting. The water-based GO ink is ejected and freeze-cast into designed 3D structures. The lightweight (<10 mg cm−3) 3D-printed graphene aerogel exhibits superelasticity and high electrical conductivity.

  11. Evolution of 3D surface imaging systems in facial plastic surgery.

    PubMed

    Tzou, Chieh-Han John; Frey, Manfred

    2011-11-01

    Recent advancements in computer technologies have propelled the development of 3D imaging systems. 3D surface-imaging is taking surgeons to a new level of communication with patients; moreover, it provides quick and standardized image documentation. This article recounts the chronologic evolution of 3D surface imaging, and summarizes the current status of today's facial surface capturing technology. This article also discusses current 3D surface imaging hardware and software, and their different techniques, technologies, and scientific validation, which provides surgeons with the background information necessary for evaluating the systems and knowledge about the systems they might incorporate into their own practice.

  12. DIY 3D printing of custom orthopaedic implants: a proof of concept study.

    PubMed

    Frame, Mark; Leach, William

    2014-03-01

    3D printing is an emerging technology that is primarily used for aiding the design and prototyping of implants. As this technology has evolved it has now become possible to produce functional and definitive implants manufactured using a 3D printing process. This process, however, previously required a large financial investment in complex machinery and professionals skilled in 3D product design. Our pilot study's aim was to design and create a 3D printed custom orthopaedic implant using only freely available consumer hardware and software.

  13. Shuttle Systems 3-D Applications: Application of 3-D Graphics in Engineering Training for Shuttle Ground Processing

    NASA Technical Reports Server (NTRS)

    Godfrey, Gary S.

    2003-01-01

    This project illustrates an animation of the orbiter mate to the external tank, an animation of the OMS POD installation to the orbiter, and a simulation of the landing gear mechanism at the Kennedy Space Center. A detailed storyboard was created for each animation or simulation. Solid models were collected and translated into Pro/Engineer's prt and asm formats. These solid models included computer files of the orbiter, external tank, solid rocket booster, mobile launch platform, transporter, vehicle assembly building, OMS POD fixture, and landing gear. A depository of the above solid models was established, and the models were translated into several formats: stl for stereolithography, stp for neutral-file work, shrinkwrap for compression, tiff for Photoshop work, jpeg for Internet use, and prt and asm for Pro/Engineer use. Solid models were created of the material handling sling, bay 3 platforms, and orbiter contact points. Animations were developed using mechanisms to reflect each storyboard. Every effort was made to build all models technically correct for engineering use. The result was an animated routine that could be used by NASA for training material handlers and uncovering engineering safety issues.

  14. Quasi 3D dispersion experiment

    NASA Astrophysics Data System (ADS)

    Bakucz, P.

    2003-04-01

    This paper studies the problem of tracer dispersion in a coloured fluid flowing through a two-phase 3D rough channel system in a 40 cm × 40 cm plexiglass container filled with homogeneous glass fractions and a colourless fluid. The unstable interface between the driving coloured fluid and the colourless fluid develops viscous fingers with a fractal structure at high capillary number. Five two-dimensional fractal fronts were observed simultaneously using four cameras along the vertical side walls and one camera located above the container. From these five fronts the spatial concentration contours are determined using statistical models. The concentration contours are self-affine fractal curves with a fractal dimension D = 2.19. This result is valid for dispersion at high Péclet numbers.
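    Fractal dimensions such as the D = 2.19 quoted above are commonly estimated by box counting: count occupied boxes N(s) at several box sizes s and fit log N(s) ≈ −D log s. A minimal 2D sketch, sanity-checked on a densely sampled straight line (true dimension 1):

```python
import numpy as np

def box_count_dimension(points, sizes):
    """Box-counting dimension estimate for a point-sampled set.

    points: (n, d) array of coordinates; sizes: iterable of box sizes.
    Fits log N(s) ~ -D log s and returns the estimated dimension D."""
    counts = [len(np.unique((points // s).astype(int), axis=0)) for s in sizes]
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Sanity check: a straight line segment has dimension 1
t = np.linspace(0.0, 1.0, 20000)
line = np.column_stack([t, 0.5 * t])
d = box_count_dimension(line, sizes=[0.01, 0.02, 0.04, 0.08])   # d close to 1
```

For a self-affine front like the one in the abstract, the sampled curve points would replace `line`, and the range of `sizes` must sit inside the scaling regime of the data.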

  15. 3D Printed Shelby Cobra

    ScienceCinema

    Love, Lonnie

    2016-11-02

    ORNL's newly printed 3D Shelby Cobra was showcased at the 2015 NAIAS in Detroit. This "laboratory on wheels" uses the Shelby Cobra design, celebrating the 50th anniversary of this model and honoring the first vehicle to be voted a national monument. The Shelby was printed at the Department of Energy’s Manufacturing Demonstration Facility at ORNL using the BAAM (Big Area Additive Manufacturing) machine and is intended as a “plug-n-play” laboratory on wheels. The Shelby will allow research and development of integrated components to be tested and enhanced in real time, improving the use of sustainable, digital manufacturing solutions in the automotive industry.

  16. Supernova Remnant in 3-D

    NASA Technical Reports Server (NTRS)

    2009-01-01

    wavelengths. Since the amount of the wavelength shift is related to the speed of motion, one can determine how fast the debris is moving in either direction. Because Cas A is the result of an explosion, the stellar debris is expanding radially outwards from the explosion center. Using simple geometry, the scientists were able to construct a 3-D model from all of this information. A program called 3-D Slicer, modified for astronomical use by the Astronomical Medicine Project at Harvard University in Cambridge, Mass., was used to display and manipulate the 3-D model. Commercial software was then used to create the 3-D fly-through.
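    The shift-to-speed step rests on the classical Doppler relation v ≈ c·Δλ/λ. A tiny sketch with hypothetical numbers ([Ar II] near 6.99 µm is one of the argon lines observable in the infrared; the specific shift here is invented for illustration):

```python
# Classical (non-relativistic) Doppler relation: an observed emission
# line's wavelength shift gives its line-of-sight velocity.
C_KM_S = 299_792.458   # speed of light, km/s

def radial_velocity(lambda_obs, lambda_rest):
    """Positive result = receding (redshifted), negative = approaching."""
    return C_KM_S * (lambda_obs - lambda_rest) / lambda_rest

# Hypothetical: a 6.990-um line observed at 6.997 um (+0.1% shift)
v = radial_velocity(6.997, 6.990)   # roughly +300 km/s along the line of sight
```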

    The blue filaments defining the blast wave were not mapped using the Doppler effect because they emit a different kind of light, synchrotron radiation, which is emitted not at discrete wavelengths but in a broad continuum. The blue filaments are only a representation of the actual filaments observed at the blast wave.

    This visualization shows that there are two main components to this supernova remnant: a spherical component in the outer parts of the remnant and a flattened (disk-like) component in the inner region. The spherical component consists of the outer layer of the star that exploded, probably made of helium and carbon. These layers drove a spherical blast wave into the diffuse gas surrounding the star. The flattened component that astronomers were unable to map into 3-D prior to these Spitzer observations consists of the inner layers of the star. It is made from various heavier elements, not all shown in the visualization, such as oxygen, neon, silicon, sulphur, argon and iron.

    High-velocity plumes, or jets, of this material are shooting out from the explosion in the plane of the disk-like component mentioned above. Plumes of silicon appear in the northeast and southwest, while those of iron are seen in the southeast and north. These jets were already known and Doppler velocity measurements have been made for these

  17. Integrating Rapid Prototyping into Graphic Communications

    ERIC Educational Resources Information Center

    Xu, Renmei; Flowers, Jim

    2015-01-01

    Integrating different science, technology, engineering, and mathematics (STEM) areas can help students learn and leverage both the equipment and expertise at a single school. In comparing graphic communications classes with classes that involve rapid prototyping (RP) technologies like 3D printing, there are sufficient similarities between goals,…

  18. Discrete Method of Images for 3D Radio Propagation Modeling

    NASA Astrophysics Data System (ADS)

    Novak, Roman

    2016-09-01

    Discretization by rasterization is introduced into the method of images (MI) in the context of 3D deterministic radio propagation modeling as a way to exploit spatial coherence of electromagnetic propagation for fine-grained parallelism. Traditional algebraic treatment of bounding regions and surfaces is replaced by computer graphics rendering of 3D reflections and double refractions while building the image tree. The visibility of reception points and surfaces is also resolved by shader programs. The proposed rasterization is shown to be of comparable run time to that of the fundamentally parallel shooting and bouncing rays. The rasterization does not affect the signal evaluation backtracking step, thus preserving its advantage over the brute force ray-tracing methods in terms of accuracy. Moreover, the rendering resolution may be scaled back for a given level of scenario detail with only marginal impact on the image tree size. This allows selection of scene optimized execution parameters for faster execution, giving the method a competitive edge. The proposed variant of MI can be run on any GPU that supports real-time 3D graphics.
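    The core geometric operation when building the image tree in the method of images is mirroring the transmitter across each reflecting surface; the reflected path length then equals the straight-line distance from the image source to the receiver. A minimal sketch (names illustrative):

```python
import numpy as np

def image_source(src, plane_point, plane_normal):
    """Mirror a source position across a reflecting plane.

    src: (3,) source position; the plane is given by a point on it and
    its normal. Returns the first-order image-source position."""
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)
    d = np.dot(np.asarray(src, dtype=float) - plane_point, n)  # signed distance
    return src - 2.0 * d * n

# Transmitter 3 m from a wall in the plane x = 0
img = image_source(np.array([3.0, 1.0, 2.0]),
                   np.array([0.0, 0.0, 0.0]),
                   np.array([1.0, 0.0, 0.0]))   # image at (-3, 1, 2)
```

Higher-order images are produced by applying this mirroring recursively across each visible surface, which is exactly the tree the rasterization step in the abstract accelerates.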

  19. Effective 3-D surface modeling for geographic information systems

    NASA Astrophysics Data System (ADS)

    Yüksek, K.; Alparslan, M.; Mendi, E.

    2016-01-01

    In this work, we propose a dynamic, flexible and interactive urban digital terrain platform with the spatial data and query processing capabilities of geographic information systems, multimedia database functionality and a graphical modeling infrastructure. A new data element, called Geo-Node, which stores images, spatial data and 3-D CAD objects, is developed using an efficient data structure. The system effectively handles the transfer of Geo-Nodes between main memory and secondary storage with a buffer management scheme based on an optimized directional replacement policy (DRP). Polyhedron structures are used in digital surface modeling, and smoothing is performed by interpolation. The experimental results show that our framework achieves high performance and works effectively with urban scenes, independent of the amount of spatial data and image size. The proposed platform may contribute to the development of various applications such as Web GIS systems based on 3-D graphics standards (e.g., X3-D and VRML) and services which integrate multi-dimensional spatial information and satellite/aerial imagery.

  20. 3D fingerprint imaging system based on full-field fringe projection profilometry

    NASA Astrophysics Data System (ADS)

    Huang, Shujun; Zhang, Zonghua; Zhao, Yan; Dai, Jie; Chen, Chao; Xu, Yongjia; Zhang, E.; Xie, Lili

    2014-01-01

    As a unique, unchangeable and easily acquired biometric, the fingerprint has been widely studied in academia and applied in many fields over the years. Traditional fingerprint recognition methods are based on 2D features of the captured fingerprint. However, the fingerprint is a 3D biological characteristic. The mapping from 3D to 2D loses 1D of information and causes nonlinear distortion of the captured fingerprint. Therefore, it is becoming more and more important to obtain 3D fingerprint information for recognition. In this paper, a novel 3D fingerprint imaging system is presented based on the fringe projection technique to obtain 3D features and the corresponding color texture information. A series of color sinusoidal fringe patterns with optimum three-fringe numbers are projected onto a finger surface. Viewed from another angle, the fringe patterns are deformed by the finger surface and captured by a CCD camera. 3D shape data of the finger can be obtained from the captured fringe pattern images. This paper studies the prototype of the 3D fingerprint imaging system, including the principle of 3D fingerprint acquisition, hardware design of the 3D imaging system, 3D calibration of the system, and software development. Experiments are carried out by acquiring several 3D fingerprint data sets. The experimental results demonstrate the feasibility of the proposed 3D fingerprint imaging system.

  1. 3D Stratigraphic Modeling of Central Aachen

    NASA Astrophysics Data System (ADS)

    Dong, M.; Neukum, C.; Azzam, R.; Hu, H.

    2010-05-01

    Since the 1980s, advances in computer hardware and software, together with multidisciplinary research, have made it possible to develop advanced three-dimensional (3D) simulation software for geoscience applications. Some countries, such as the USA1) and Canada2) 3), have built regional 3D geological models based on archival geological data. Such models have played major roles in engineering geology2), hydrogeology2) 3), the geothermal industry1), and so on. In cooperation with the Municipality of Aachen, the Department of Engineering Geology of RWTH Aachen University has built a computer-based 3D stratigraphic model to a depth of 50 m for the center of Aachen, a geologically complex area of 5 km by 7 km. Uncorrelated data from multiple sources and the discontinuous nature and unconformable connections of the units are the main challenges for geological modeling in this area. The reliability of 3D geological models largely depends on the quality and quantity of data. Existing 1D and 2D geological data were collected, including 1) approximately 6970 borehole records of different depths compiled in Microsoft Access and MapInfo databases; 2) a Digital Elevation Model (DEM); 3) geological cross sections; and 4) stratigraphic maps at 1 m, 2 m, and 5 m depth. Since the acquired data are of variable origin, they were processed step by step. The main steps are described below: 1) typing errors in the borehole data were identified, and the corrected data were exported to Variowin2.2 to detect duplicate points; 2) the surface elevation of each borehole was compared to the DEM, and records differing by more than 3 m were eliminated; moreover, where elevation data were missing, values were read from the DEM; 3) considerable data came from municipal construction projects, such as residential buildings, factories, and roads.
Therefore, many boreholes are spatially clustered, and only one or two representative points were picked out in such areas. After the above procedures, 5839 boreholes
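    Step 2 of the data-management workflow above (comparing borehole surface elevations against the DEM, filling gaps, and rejecting mismatches beyond 3 m) can be sketched in a few lines. The arrays and values here are hypothetical, purely to illustrate the filtering logic.

    ```python
    import numpy as np

    # Hypothetical borehole records: (x, y, surface_elevation); NaN marks a
    # missing elevation that should be read from the DEM instead.
    boreholes = np.array([
        [10.0, 20.0, 151.0],
        [11.0, 21.0, 158.0],   # differs from the DEM by > 3 m -> eliminated
        [12.0, 22.0, np.nan],  # missing -> filled from the DEM
    ])
    dem_elevation = np.array([150.5, 150.0, 149.0])  # DEM value at each borehole

    z = boreholes[:, 2]
    missing = np.isnan(z)
    z[missing] = dem_elevation[missing]           # fill gaps from the DEM
    keep = np.abs(z - dem_elevation) <= 3.0       # reject > 3 m mismatches
    cleaned = boreholes[keep]
    ```

    Two of the three records survive: the consistent one and the gap-filled one; the 8 m mismatch is dropped.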

  2. Colossal Tooling Design: 3D Simulation for Ergonomic Analysis

    NASA Technical Reports Server (NTRS)

    Hunter, Steve L.; Dischinger, Charles; Thomas, Robert E.; Babai, Majid

    2003-01-01

    High-level 3D simulation software was applied during the design phase of colossal mandrel tooling for composite aerospace fuel tanks to discover and resolve safety and human engineering problems. The analyses were conducted to determine the safety, ergonomic, and human engineering aspects of the disassembly process of the fuel tank composite shell mandrel. High-level 3D graphics software incorporating various ergonomic analysis algorithms was used to determine whether the process was within safety and health boundaries for the workers carrying out these tasks. In addition, the graphical software was extremely helpful in identifying material handling equipment and devices for the mandrel tooling assembly/disassembly process.

  3. Interactive 3D visualization speeds well, reservoir planning

    SciTech Connect

    Petzet, G.A.

    1997-11-24

    Texaco Exploration and Production has begun making expeditious analyses and drilling decisions that result from interactive, large-screen visualization of seismic and other three-dimensional data. A pumpkin-shaped room, or pod, inside a 3,500 sq ft, state-of-the-art facility in southwest Houston houses a supercomputer and projection equipment that Texaco said will help its people sharply reduce 3D seismic project cycle time, boost production from existing fields, and find more reserves. Oil and gas related applications of the visualization center include reservoir engineering, plant walkthrough simulation for facilities/piping design, and new field exploration. The center houses a Silicon Graphics Onyx2 InfiniteReality supercomputer configured with 8 processors, 3 graphics pipelines, and 6 gigabytes of main memory.

  4. 3D Kitaev spin liquids

    NASA Astrophysics Data System (ADS)

    Hermanns, Maria

    The Kitaev honeycomb model has become one of the archetypal spin models exhibiting topological phases of matter, where the magnetic moments fractionalize into Majorana fermions interacting with a Z2 gauge field. In this talk, we discuss generalizations of this model to three-dimensional lattice structures. Our main focus is the metallic state that the emergent Majorana fermions form. In particular, we discuss the relation of the nature of this Majorana metal to the details of the underlying lattice structure. Besides (almost) conventional metals with a Majorana Fermi surface, one also finds various realizations of Dirac semi-metals, where the gapless modes form Fermi lines or even Weyl nodes. We introduce a general classification of these gapless quantum spin liquids using projective symmetry analysis. Furthermore, we briefly outline why these Majorana metals in 3D Kitaev systems provide an even richer variety of Dirac and Weyl phases than possible for electronic matter and comment on possible experimental signatures. Work done in collaboration with Kevin O'Brien and Simon Trebst.

  5. 3D multiplexed immunoplasmonics microscopy

    NASA Astrophysics Data System (ADS)

    Bergeron, Éric; Patskovsky, Sergiy; Rioux, David; Meunier, Michel

    2016-07-01

    Selective labelling, identification and spatial distribution of cell surface biomarkers can provide important clinical information, such as distinction between healthy and diseased cells, evolution of a disease and selection of the optimal patient-specific treatment. Immunofluorescence is the gold standard for efficient detection of biomarkers expressed by cells. However, antibodies (Abs) conjugated to fluorescent dyes remain limited by their photobleaching, high sensitivity to the environment, low light intensity, and wide absorption and emission spectra. Immunoplasmonics is a novel microscopy method based on the visualization of Abs-functionalized plasmonic nanoparticles (fNPs) targeting cell surface biomarkers. Tunable fNPs should provide higher multiplexing capacity than immunofluorescence since NPs are photostable over time, strongly scatter light at their plasmon peak wavelengths and can be easily functionalized. In this article, we experimentally demonstrate accurate multiplexed detection based on the immunoplasmonics approach. First, we achieve the selective labelling of three targeted cell surface biomarkers (cluster of differentiation 44 (CD44), epidermal growth factor receptor (EGFR) and voltage-gated K+ channel subunit KV1.1) on human cancer CD44+ EGFR+ KV1.1+ MDA-MB-231 cells and reference CD44- EGFR- KV1.1+ 661W cells. The labelling efficiency with three stable specific immunoplasmonics labels (functionalized silver nanospheres (CD44-AgNSs), gold (Au) NSs (EGFR-AuNSs) and Au nanorods (KV1.1-AuNRs)) detected by reflected light microscopy (RLM) is similar to the one with immunofluorescence. Second, we introduce an improved method for 3D localization and spectral identification of fNPs based on fast z-scanning by RLM with three spectral filters corresponding to the plasmon peak wavelengths of the immunoplasmonics labels in the cellular environment (500 nm for 80 nm AgNSs, 580 nm for 100 nm AuNSs and 700 nm for 40 nm × 92 nm AuNRs). 
Third, the developed

  6. Crowdsourcing Based 3d Modeling

    NASA Astrophysics Data System (ADS)

    Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T.

    2016-06-01

    Web-based photo albums that support organizing and viewing the users' images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, users attach geotags to the images in order to enable their use, e.g., in location-based applications on social networks. Our paper discusses a procedure that collects open-access images from a site frequently visited by tourists. Geotagged pictures showing a sight or tourist attraction are selected and processed in photogrammetric processing software that produces a 3D model of the captured object. For this investigation we selected three attractions in Budapest. To assess the geometric accuracy, we used laser scanning and DSLR as well as smartphone photography to derive reference values for verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate a model can be derived with photogrammetric processing software, simply by using the community's images, without visiting the site.

  7. A 3-d modular gripper design tool

    SciTech Connect

    Brown, R.G.; Brost, R.C.

    1997-01-01

    Modular fixturing kits are precisely machined sets of components used for flexible, short-turnaround construction of fixtures for a variety of manufacturing purposes. A modular vise is a parallel-jaw vise in which each jaw is a modular fixture plate with a regular grid of precisely positioned holes. A modular vise can be used to locate and hold parts for machining, assembly, and inspection tasks. To fixture a part, one places pins in some of the holes so that when the vise is closed, the part is reliably located and completely constrained. The modular vise concept can be adapted easily to the design of modular parallel-jaw grippers for robots. By attaching a grid plate to each jaw of a parallel-jaw gripper, the authors gain the ability to easily construct high-quality grasps for a wide variety of parts from a standard set of hardware. Wallack and Canny previously developed an algorithm for planning planar grasp configurations for the modular vise. In this paper, the authors expand this work into a 3-d fixture/gripper design tool. They describe several analyses added to the planar algorithm to improve its utility, including a three-dimensional grasp quality metric based on geometric and force information, three-dimensional geometric loading analysis, and inter-gripper interference analysis to determine the compatibility of multiple grasps for handing the part from one gripper to another. Finally, the authors describe two applications that combine the utility of modular vise-style grasping with inter-gripper interference analysis: the first is the design of a flexible part-handling subsystem for a part-cleaning workcell under development at Sandia National Laboratories; the second is the automatic design of grippers that support the assembly of multiple products on a single assembly line.

  8. Teaching Geometry through Dynamic Modeling in Introductory Engineering Graphics.

    ERIC Educational Resources Information Center

    Wiebe, Eric N.; Branoff, Ted J.; Hartman, Nathan W.

    2003-01-01

    Examines how constraint-based 3D modeling can be used as a vehicle for rethinking instructional approaches to engineering design graphics. Focuses on moving from a mode of instruction based on the crafting by students and assessment by instructors of static 2D drawings and 3D models. Suggests that the new approach is better aligned with…

  9. RELAP5-3D Developer Guidelines and Programming Practices

    SciTech Connect

    Dr. George L Mesina

    2014-03-01

    Our ultimate goal is to create and maintain RELAP5-3D as the best software tool available to analyze nuclear power plants. This begins with writing excellent code and requires thorough testing. This document covers development of RELAP5-3D software, the behavior of the RELAP5-3D program that must be maintained, and code testing. RELAP5-3D must perform in a manner consistent with previous code versions, with backward compatibility for the sake of the users. Thus file operations, code termination, input, and output must remain consistent in form and content while appropriate new files, input, and output are added as new features are developed. As computer hardware, operating systems, and other software change, RELAP5-3D must adapt and maintain performance. The code must be thoroughly tested to ensure that it continues to perform robustly on the supported platforms. The code must be written in a consistent manner that makes the program easy to read, to reduce the time and cost of development, maintenance, and error resolution. The programming guidelines presented here are intended to institutionalize a consistent way of writing FORTRAN code for the RELAP5-3D computer program that will minimize errors and rework. A common format and organization of program units creates a unifying look and feel to the code. This in turn increases readability and reduces the time required for maintenance, development, and debugging. It also aids new programmers in reading and understanding the program. Therefore, when undertaking development of the RELAP5-3D computer program, the programmer must write computer code that follows these guidelines. This set of programming guidelines creates a framework of good programming practices, such as initialization, structured programming, and vector-friendly coding. It sets out formatting rules for lines of code, such as indentation, capitalization, spacing, etc. It creates limits on program units, such as subprograms, functions, and modules. It

  10. West Flank Coso, CA FORGE 3D geologic model

    SciTech Connect

    Doug Blankenship

    2016-03-01

    This is an x,y,z file of the West Flank FORGE 3D geologic model. The model was created in EarthVision by Dynamic Graphics, Inc., with a grid spacing of 100 m. Geologic surfaces were extrapolated from the input data using a minimum-tension gridding algorithm. The data file is tabular data in a text file, with lithology data associated with x,y,z grid points. All the relevant information (the spatial reference, the projection, etc.) is in the file header, and all the fields in the data file are identified in the header.

  11. 3D simulation of coaxial carbon nanotube field effect transistor

    NASA Astrophysics Data System (ADS)

    Hien, Dinh Sy; Thi Luong, Nguyen; Tuan, Thi Tran Anh; Viet Nga, Dinh

    2009-09-01

    We provide a model of coaxial CNTFET geometry. Coaxial devices are of special interest because their geometry allows for better electrostatics. We explore the possibilities of using the non-equilibrium Green's function method to obtain I-V characteristics for CNTFETs. The simulator also includes a Matlab graphical user interface (GUI). We review the capabilities of the simulator and give examples of typical 3D CNTFET simulations (current-voltage characteristics as a function of parameters such as the length of the CNTFET, gate thickness, and temperature). The obtained I-V characteristics of the CNTFET are also represented by analytical equations.

  12. Large Terrain Continuous Level of Detail 3D Visualization Tool

    NASA Technical Reports Server (NTRS)

    Myint, Steven; Jain, Abhinandan

    2012-01-01

    This software solved the problem of displaying terrains that are usually too large to be displayed on standard workstations in real time. The software can visualize terrain data sets composed of billions of vertices, and can display these data sets at greater than 30 frames per second. The Large Terrain Continuous Level of Detail 3D Visualization Tool allows large terrains, which can be composed of billions of vertices, to be visualized in real time. It utilizes a continuous level of detail technique called clipmapping to support this. It offloads much of the work involved in breaking up the terrain into levels of details onto the GPU (graphics processing unit) for faster processing.
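    The clipmapping technique mentioned above keeps a small stack of nested grids centered on the viewer: the finest level covers a limited extent, and each coarser level covers twice the extent at half the detail. A minimal sketch of how a level might be chosen for a terrain cell by distance follows; the function name, extents, and level count are illustrative assumptions, not the tool's actual parameters.

    ```python
    def clip_level(distance, finest_extent=512.0, num_levels=8):
        """Pick a clipmap level for a terrain cell at `distance` from the viewer.

        Level 0 is the finest grid, covering `finest_extent` units around the
        viewer; each coarser level doubles the covered extent (halving detail).
        """
        level = 0
        extent = finest_extent
        while distance > extent / 2.0 and level < num_levels - 1:
            extent *= 2.0
            level += 1
        return level

    # Nearby cells land in fine levels, distant cells in coarse ones.
    levels = [clip_level(d) for d in (100.0, 300.0, 5000.0)]
    ```

    Because the per-cell work is this cheap, the expensive part (re-filling the clip rings as the viewer moves) is what gets offloaded to the GPU in practice.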

  13. Beam Optics Analysis - An Advanced 3D Trajectory Code

    SciTech Connect

    Ives, R. Lawrence; Bui, Thuc; Vogler, William; Neilson, Jeff; Read, Mike; Shephard, Mark; Bauer, Andrew; Datta, Dibyendu; Beal, Mark

    2006-01-03

    Calabazas Creek Research, Inc. has completed initial development of an advanced 3D program for modeling electron trajectories in electromagnetic fields. The code is being used to design complex guns and collectors. Beam Optics Analysis (BOA) is a fully relativistic, charged-particle code using adaptive, finite element meshing. Geometrical input is imported from CAD programs generating ACIS-formatted files. Parametric data is entered using an intuitive graphical user interface (GUI), which also provides control of convergence, accuracy, and post-processing. The program includes a magnetic field solver, and magnetic information can be imported from Maxwell 2D/3D and other programs. The program supports thermionic emission and injected beams. Secondary electron emission is also supported, including multiple generations. Work on field emission is in progress, as is implementation of computer optimization of both the geometry and operating parameters. The principal features of the program and its capabilities are presented.

  14. Architectural Advancements in RELAP5-3D

    SciTech Connect

    Dr. George L. Mesina

    2005-11-01

    As both the computer industry and the field of nuclear science and engineering move forward, the computing tools used in the nuclear industry must be improved to keep pace with these changes. By increasing the capability of the codes, the growing modeling needs of nuclear plant analysis will be met, and advantage can be taken of more powerful computer languages and architectures. In the past eighteen months, improvements have been made to RELAP5-3D [1] for these reasons. These architectural advances include code restructuring, conversion to Fortran 90, high-performance computing upgrades, and rewriting of the RELAP5 Graphical User Interface (RGUI) [2] and XMGR5 [3] in Java. These architectural changes will extend the lifetime of RELAP5-3D, reduce the costs of development and maintenance, and improve its speed and reliability.

  15. Open-source 3D-printable optics equipment.

    PubMed

    Zhang, Chenlong; Anzalone, Nicholas C; Faria, Rodrigo P; Pearce, Joshua M

    2013-01-01

    Just as the power of the open-source design paradigm has driven down the cost of software to the point that it is accessible to most people, the rise of open-source hardware is poised to drive down the cost of doing experimental science and expand access to everyone. To assist in this aim, this paper introduces a library of open-source 3-D-printable optics components. This library operates as a flexible, low-cost, public-domain tool set for developing both research and teaching optics hardware. First, the use of parametric open-source designs in an open-source computer-aided design package is described to customize the optics hardware for any application. Second, details are provided on the use of open-source 3-D printers (additive layer manufacturing) to fabricate the primary mechanical components, which are then combined to construct complex optics-related devices. Third, the use of an open-source electronics prototyping platform is illustrated for the control of optical experimental apparatuses. This study demonstrates an open-source optical library, which significantly reduces the costs associated with much optical equipment, while also enabling easily adapted, customizable designs. The cost reductions in general are over 97%, with some components representing only 1% of the current commercial investment for optical products of similar function. The results of this study make it clear that this method of scientific hardware development enables a much broader audience to participate in optical experimentation, both as research and teaching platforms, than previous proprietary methods.

  16. Open-Source 3D-Printable Optics Equipment

    PubMed Central

    Zhang, Chenlong; Anzalone, Nicholas C.; Faria, Rodrigo P.; Pearce, Joshua M.

    2013-01-01

    Just as the power of the open-source design paradigm has driven down the cost of software to the point that it is accessible to most people, the rise of open-source hardware is poised to drive down the cost of doing experimental science and expand access to everyone. To assist in this aim, this paper introduces a library of open-source 3-D-printable optics components. This library operates as a flexible, low-cost, public-domain tool set for developing both research and teaching optics hardware. First, the use of parametric open-source designs in an open-source computer-aided design package is described to customize the optics hardware for any application. Second, details are provided on the use of open-source 3-D printers (additive layer manufacturing) to fabricate the primary mechanical components, which are then combined to construct complex optics-related devices. Third, the use of an open-source electronics prototyping platform is illustrated for the control of optical experimental apparatuses. This study demonstrates an open-source optical library, which significantly reduces the costs associated with much optical equipment, while also enabling easily adapted, customizable designs. The cost reductions in general are over 97%, with some components representing only 1% of the current commercial investment for optical products of similar function. The results of this study make it clear that this method of scientific hardware development enables a much broader audience to participate in optical experimentation, both as research and teaching platforms, than previous proprietary methods. PMID:23544104

  17. A full field, 3-D velocimeter for microgravity crystallization experiments

    NASA Technical Reports Server (NTRS)

    Brodkey, Robert S.; Russ, Keith M.

    1991-01-01

    The programming and algorithms needed to implement a full-field, 3-D velocimeter for laminar flow systems, and the hardware required to fully implement this ultimate system, are discussed. It appears that imaging with a synched pair of video cameras and digitizer boards, with synched rails for camera motion, will provide a viable solution to the laminar tracking problem. The algorithms given here are simple, which should speed processing. On a heavily loaded VAXstation 3100, particle identification can take 15 to 30 seconds, with tracking taking less than one second. It seems reasonable to assume that four image pairs can thus be acquired and analyzed in under one minute.
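    The particle-tracking step described above can be sketched as nearest-neighbor matching between the 3-D particle positions identified in consecutive frames. This is a generic sketch of that idea, not the authors' exact algorithm; the function name, displacement threshold, and sample coordinates are hypothetical.

    ```python
    import numpy as np

    def match_particles(prev, curr, max_disp=0.5):
        """Match each particle in `prev` to its nearest neighbor in `curr`.

        Returns (i, j) index pairs; candidate matches whose displacement
        exceeds `max_disp` are dropped as implausible between frames.
        """
        pairs = []
        for i, p in enumerate(prev):
            d = np.linalg.norm(curr - p, axis=1)   # distances to all candidates
            j = int(np.argmin(d))
            if d[j] <= max_disp:
                pairs.append((i, j))
        return pairs

    prev = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
    curr = np.array([[1.1, 1.0, 1.0], [0.1, 0.0, 0.0], [5.0, 5.0, 5.0]])
    pairs = match_particles(prev, curr)
    ```

    The simplicity of this per-particle search is consistent with the sub-second tracking time quoted in the abstract: the expensive step is identifying particles in the images, not linking them.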

  18. Stereo and motion in the display of 3-D scattergrams

    SciTech Connect

    Littlefield, R.J.

    1982-04-01

    A display technique is described that is useful for detecting structure in a 3-dimensional distribution of points. The technique uses a high resolution color raster display to produce a 3-D scattergram. Depth cueing is provided by motion parallax using a capture-replay mechanism. Stereo vision depth cues can also be provided. The paper discusses some general aspects of stereo scattergrams and describes their implementation as red/green anaglyphs. These techniques have been used with data sets containing over 20,000 data points. They can be implemented on relatively inexpensive hardware. (A film of the display was shown at the conference.)
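    The red/green anaglyph idea above can be sketched by rendering the point cloud twice with a small horizontal parallax and writing each view into a separate color channel. This is a minimal toy sketch with an assumed depth-proportional parallax model and illustrative names, not the paper's implementation.

    ```python
    import numpy as np

    def project(points, eye_offset, depth=5.0):
        """Toy projection: shift x by a parallax proportional to depth z."""
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        return np.column_stack([x + eye_offset * z / depth, y])

    def anaglyph(points, eye_sep=0.1, size=64):
        """Rasterize the left view into the red channel, the right into green."""
        img = np.zeros((size, size, 3))
        for offset, channel in ((-eye_sep, 0), (+eye_sep, 1)):  # red, green
            xy = project(points, offset)
            cols = np.clip(((xy[:, 0] + 1) / 2 * (size - 1)).astype(int), 0, size - 1)
            rows = np.clip(((1 - (xy[:, 1] + 1) / 2) * (size - 1)).astype(int), 0, size - 1)
            img[rows, cols, channel] = 1.0
        return img

    pts = np.random.default_rng(0).uniform(-0.8, 0.8, size=(500, 3))
    img = anaglyph(pts)
    ```

    Viewed through red/green glasses, each eye sees only its own projection, so the horizontal offset between the two channels is perceived as depth.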

  19. Irregular Grid Generation and Rapid 3D Color Display Algorithm

    SciTech Connect

    Wilson D. Chin, Ph.D.

    2000-05-10

    Computationally efficient and fast methods for irregular grid generation are developed to accurately characterize wellbore and fracture boundaries, and far-field reservoir boundaries, in oil and gas petroleum fields. Advanced reservoir simulation techniques are developed for oilfields described by such ''boundary conforming'' mesh systems. Very rapid three-dimensional color display algorithms are also developed that allow users to ''interrogate'' 3D earth cubes using ''slice, rotate, and zoom'' functions. Based on expert system ideas, the new methods operate much faster than existing display methodologies and do not require sophisticated computer hardware or software. They are designed to operate with PC-based applications.

  20. A 3D Printed Toolbox for Opto-Mechanical Components

    PubMed Central

    Torres, Juan P.; Valencia, Alejandra

    2017-01-01

    In this article we present the development of a set of opto-mechanical components (a kinematic mount, a translation stage and an integrating sphere) that can be easily built using a 3D printer based on Fused Filament Fabrication (FFF) and parts that can be found in any hardware store. We provide a brief description of the 3D models used and some details of the fabrication process. Moreover, with the help of three simple experimental setups, we evaluate the performance of the opto-mechanical components through a quantitative comparison with their commercial counterparts. Our results indicate that the fabricated components are highly customizable, low-cost, quick to fabricate and, surprisingly, offer performance that compares favorably with low-end commercial alternatives. PMID:28099494

  1. Real-time structured light intraoral 3D measurement pipeline

    NASA Astrophysics Data System (ADS)

    Gheorghe, Radu; Tchouprakov, Andrei; Sokolov, Roman

    2013-02-01

    Computer-aided design and manufacturing (CAD/CAM) is increasingly becoming a standard feature and service provided to patients in dentist offices and denture manufacturing laboratories. Although the quality of the tools and data has slowly improved in recent years, practical, accurate, in-vivo, real-time, high-quality 3D data acquisition and processing still needs improving because of various surface measurement challenges. Advances in GPU computational power have made near real-time 3D intraoral in-vivo scanning of a patient's teeth achievable. In this paper we explore, from a real-time perspective, a hardware-software-GPU solution that addresses all of the requirements mentioned above. Moreover, we exemplify and quantify the hard and soft deadlines required by such a system and illustrate how they are supported in our implementation.

  2. A 3D Hydrodynamic Model for Heterogeneous Biofilms with Antimicrobial Persistence

    DTIC Science & Technology

    2014-01-01

    EPS production [9], which leads to gradients in osmotic pressure and contributes to mushroom- or tower-shaped pattern formation. Figure 5 depicts two ... implemented on graphics processing units (GPUs) for high-performance computing, in 3-D space and time. Antimicrobial treatment in an infinitely long quiescent ... A numerical scheme is devised to solve the model, consisting of partial differential equations, which is implemented on graphics processing units (GPUs) for high

  3. Enhanced visualization of angiograms using 3D models

    NASA Astrophysics Data System (ADS)

    Marovic, Branko S.; Duckwiler, Gary R.; Villablanca, Pablo; Valentino, Daniel J.

    1999-05-01

    The 3D visualization of intracranial vasculature can facilitate the planning of endovascular therapy and the evaluation of interventional results. To create 3D visualizations, volumetric datasets from x-ray computed tomography angiography (CTA) and magnetic resonance angiography (MRA) are commonly rendered using maximum intensity projection (MIP), volume rendering, or surface rendering techniques. However, small aneurysms and mild stenoses are very difficult to detect using these methods. Furthermore, the instruments used during endovascular embolization or surgical treatment produce artifacts that typically make post-intervention CTA inapplicable, and the presence of magnetic material prohibits the use of MRA. Therefore, standard digital angiography is typically used. In order to address these problems, we developed a visualization and modeling system that displays 2D and 3D angiographic images using a simple Web-based interface. Polygonal models of vasculature were generated from CT and MR data using 3D segmentation of bones and vessels and polygonal surface extraction and simplification. A Web-based 3D environment was developed for interactive examination of reconstructed surface models, creation of oblique cross-sections and maximum intensity projections, and distance measurements and annotations. This environment uses a multi-tier client/server approach employing VRML and Java. The 3D surface model and angiographic images can be aligned and displayed simultaneously to permit better perception of complex vasculature and to determine optimal viewing positions and angles before starting an angiographic session. Polygonal surface reconstruction allows interactive display of complex spatial structures on inexpensive platforms such as personal computers as well as graphics workstations. The aneurysm assessment procedure demonstrated the utility of Web-based technology for clinical visualization. The resulting system facilitated the treatment of serious vascular

  4. Stereoscopic contents authoring system for 3D DMB data service

    NASA Astrophysics Data System (ADS)

    Lee, BongHo; Yun, Kugjin; Hur, Namho; Kim, Jinwoong; Lee, SooIn

    2009-02-01

    This paper presents a stereoscopic contents authoring system that covers the creation and editing of stereoscopic multimedia contents for 3D DMB (Digital Multimedia Broadcasting) data services. The main concept of the 3D DMB data service is that, instead of full 3D video, partial stereoscopic objects (stereoscopic JPEG, PNG and MNG) are stereoscopically displayed on the 2D background video plane. In order to provide stereoscopic objects, we design and implement a 3D DMB content authoring system that provides convenient and straightforward content creation and editing functionalities. For the creation of stereoscopic contents, we mainly focused on two methods: CG (Computer Graphics) based creation and real-image based creation. In the CG-based scenario, CG data generated in a conventional tool such as MAYA or 3DS MAX is rendered into stereoscopic images by applying suitable disparity and camera parameters; we use the X-file for direct conversion to stereoscopic objects, so-called 3D DMB objects. In the case of real-image based creation, the chroma-key method is applied to real video sequences to acquire alpha-mapped images, which are in turn directly converted to stereoscopic objects. The stereoscopic content editing module includes a timeline editor for both stereoscopic video and stereoscopic objects. For the verification of created stereoscopic contents, we implemented a content verification module to verify and modify the contents by adjusting the disparity. The proposed system will leverage the power of stereoscopic contents creation for mobile 3D data services, especially targeted at T-DMB, with the capabilities of CG and real-image based content creation, timeline editing and content verification.

  5. Virtual reality and 3D animation in forensic visualization.

    PubMed

    Ma, Minhua; Zheng, Huiru; Lallie, Harjinder

    2010-09-01

    Computer-generated three-dimensional (3D) animation is an ideal medium for accurately visualizing crime or accident scenes to viewers and in courtrooms. Based upon factual data, forensic animations can reproduce the scene and demonstrate the activity at various points in time. The use of computer animation techniques to reconstruct crime scenes is beginning to replace traditional illustrations, photographs, and verbal descriptions, and is becoming popular in today's forensics. This article integrates work in the areas of 3D graphics, computer vision, motion tracking, natural language processing, and forensic computing to investigate the state of the art in forensic visualization. It identifies and reviews areas where new applications of 3D digital technologies and artificial intelligence could be used to enhance particular phases of forensic visualization, to create 3D models and animations automatically and quickly. Having discussed the relationships between major crime types and the level of detail in corresponding forensic animations, we recognized that high level-of-detail animation involving human characters, which is appropriate for many major crime types but has had limited use in courtrooms, could be useful for crime investigation.

  6. Laser Based 3D Volumetric Display System

    DTIC Science & Technology

    1993-03-01

    Literature, Costa Mesa, CA, July 1983. 3. "A Real Time Autostereoscopic Multiplanar 3D Display System", Rodney Don Williams, Felix Garcia, Jr., Texas ... Authors: P. Soltan, J. Trias, W. Robinson, W. Dahlke ... laser-generated 3D volumetric images on a rotating double helix, where the 3D displays are computer controlled for group viewing with the naked eye

  7. True 3d Images and Their Applications

    NASA Astrophysics Data System (ADS)

    Wang, Zheng

    2012-07-01

    A true 3D image is a geo-referenced image. Besides its radiometric information, it also has true 3D ground coordinates XYZ for every pixel. A true 3D image, especially a true 3D oblique image, has true 3D coordinates not only for building roofs and/or open ground, but also for all other visible objects on the ground, such as visible building walls/windows and even trees. The true 3D image breaks the 2D barrier of traditional orthophotos by introducing the third dimension (elevation) into the image. From a true 3D image, for example, people will be able to read not only a building's location (XY), but also its height (Z). True 3D images will fundamentally change, if not revolutionize, the way people display, view, extract, use, and represent geospatial information from imagery. In many areas, true 3D images can make profound impacts on how geospatial information is represented, how true 3D ground modeling is performed, and how real-world scenes are presented. This paper first gives a definition and description of a true 3D image, followed by a brief review of the key advancements in geospatial technologies that have made the creation of true 3D images possible. Next, the paper introduces what a true 3D image is made of. Then, the paper discusses some possible contributions and impacts that true 3D images can make on geospatial information fields. At the end, the paper presents the benefits of having and using true 3D images and their applications in a couple of 3D city modeling projects.
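    The "XYZ for every pixel" idea can be made concrete with a toy data layout: a radiometric array and a parallel per-pixel coordinate array, so reading a building's height is a simple lookup. The array shapes, values, and function name here are illustrative assumptions, not a format defined by the record.

    ```python
    import numpy as np

    h, w = 4, 4
    rgb = np.zeros((h, w, 3), dtype=np.uint8)   # radiometric information
    xyz = np.zeros((h, w, 3))                   # true 3D ground coordinates per pixel
    xyz[..., 0], xyz[..., 1] = np.meshgrid(np.arange(w), np.arange(h))
    xyz[2, 3, 2] = 25.0                         # a roof pixel with elevation 25 m

    def query(row, col):
        """Read the ground location (X, Y) and height Z at any pixel."""
        x, y, z = xyz[row, col]
        return (x, y), z

    loc, height = query(2, 3)
    ```

    This is exactly the break from a traditional orthophoto: the same pixel index yields both the radiometric value (`rgb`) and the third dimension (`xyz[..., 2]`).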

  8. Characterizing Properties and Performance of 3D Printed Plastic Scintillators

    NASA Astrophysics Data System (ADS)

    McCormick, Jacob

    2015-10-01

    We are determining various characteristics of the performance of 3D printed scintillators. A scintillator luminesces when an energetic particle raises electrons to an excited state by depositing some of its energy in the atom. When these excited electrons fall back to their stable states, they emit the excess energy as light. We have characterized the transmission spectrum, emission spectrum, and relative intensity of light produced by 3D printed scintillators. We are also determining mechanical properties such as tensile strength and compressibility, as well as the refractive index. The emission and transmission spectra were measured using a monochromator. By observing the transmission spectrum, we can see which optical wavelengths are absorbed by the scintillator. This is then used to correct the emission spectrum, since this absorption is also present in the measured emission spectrum. Using photomultiplier tubes in conjunction with integration hardware (QDC) to measure the intensity of light emitted by 3D printed scintillators, we compare them with commercial plastic scintillators. We are using these characterizations to determine whether 3D printed scintillators are a viable alternative to commercial scintillators for use at Jefferson Lab in nuclear and accelerator physics detectors. I would like to thank Wouter Deconinck, as well as the Parity group at the College of William and Mary, for all advice and assistance with my research.

  9. A new visualization method for 3D head MRA data

    NASA Astrophysics Data System (ADS)

    Ohashi, Satoshi; Hatanaka, Masahiko

    2008-03-01

    In this paper, we propose a new visualization method for head MRA data which helps the user easily determine the positioning of MPR and/or MIP images based on the blood vessel network structure (the anatomic location of blood vessels). This visualization method has the following features: (a) the blood vessel (cerebral artery) network structure in 3D head MRA data is portrayed as a 3D line structure; (b) the MPR or MIP images are combined with the blood vessel network structure and displayed in a 3D visualization space; (c) the positioning of MPR or MIP images is decided based on the anatomic location of blood vessels; (d) the image processing and drawing can be performed in real time without a special hardware accelerator. As a result, we believe that our method is useful for positioning MPR or MIP images relative to the blood vessel network structure. Moreover, we think that users of this method can obtain the 3D information (position, angle, direction) of both these images and the blood vessel network structure.

  10. Augmented Reality vs Virtual Reality for 3D Object Manipulation.

    PubMed

    Krichenbauer, Max; Yamamoto, Goshiro; Taketomi, Takafumi; Sandor, Christian; Kato, Hirokazu

    2017-01-25

    Virtual Reality (VR) Head-Mounted Displays (HMDs) are on the verge of becoming commodity hardware available to the average user and feasible to use as a tool for 3D work. Some HMDs include front-facing cameras, enabling Augmented Reality (AR) functionality. Apart from avoiding collisions with the environment, interaction with virtual objects may also be affected by seeing the real environment. However, whether these effects are positive or negative has not yet been studied extensively. For most tasks it is unknown whether AR has any advantage over VR. In this work we present the results of a user study in which we compared user performance, measured in task completion time, on a 9-degrees-of-freedom object selection and transformation task performed either in AR or VR, both with a 3D input device and a mouse. Our results show faster task completion time in AR over VR. When using a 3D input device, a purely VR environment increased task completion time by 22.5% on average compared to AR (p < 0.024). Surprisingly, a similar effect occurred when using a mouse: users were about 17.3% slower in VR than in AR (p < 0.04). Mouse and 3D input device produced similar task completion times in each condition (AR or VR), respectively. We further found no differences in reported comfort.

  11. 3D Printing and Its Urologic Applications

    PubMed Central

    Soliman, Youssef; Feibus, Allison H; Baum, Neil

    2015-01-01

    3D printing is the development of 3D objects via an additive process in which successive layers of material are applied under computer control. This article discusses 3D printing, with an emphasis on its historical context and its potential use in the field of urology. PMID:26028997

  12. Teaching Geography with 3-D Visualization Technology

    ERIC Educational Resources Information Center

    Anthamatten, Peter; Ziegler, Susy S.

    2006-01-01

    Technology that helps students view images in three dimensions (3-D) can support a broad range of learning styles. "Geo-Wall systems" are visualization tools that allow scientists, teachers, and students to project stereographic images and view them in 3-D. We developed and presented 3-D visualization exercises in several undergraduate courses.…

  13. Expanding Geometry Understanding with 3D Printing

    ERIC Educational Resources Information Center

    Cochran, Jill A.; Cochran, Zane; Laney, Kendra; Dean, Mandi

    2016-01-01

    With the rise of personal desktop 3D printing, a wide spectrum of educational opportunities has become available for educators to leverage this technology in their classrooms. Until recently, the ability to create physical 3D models was well beyond the scope, skill, and budget of many schools. However, since desktop 3D printers have become readily…

  14. Beowulf 3D: a case study

    NASA Astrophysics Data System (ADS)

    Engle, Rob

    2008-02-01

    This paper discusses the creative and technical challenges encountered during the production of "Beowulf 3D," director Robert Zemeckis' adaptation of the Old English epic poem and the first film to be simultaneously released in IMAX 3D and digital 3D formats.

  15. 3D Flow Visualization Using Texture Advection

    NASA Technical Reports Server (NTRS)

    Kao, David; Zhang, Bing; Kim, Kwansik; Pang, Alex; Moran, Pat (Technical Monitor)

    2001-01-01

    Texture advection is an effective tool for animating and investigating 2D flows. In this paper, we discuss how this technique can be extended to 3D flows. In particular, we examine the use of 3D and 4D textures on 3D synthetic and computational fluid dynamics flow fields.
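    The core operation of texture advection can be illustrated with a short CPU sketch. This is not the authors' implementation; it shows one common formulation, backward tracing, in which every output pixel samples the texture at the position the flow carried it from.

```python
import numpy as np

# 2D texture advection by backward tracing: each pixel looks upstream along
# the flow field and samples the texture there (nearest-neighbor for brevity).
h, w = 64, 64
tex = np.random.rand(h, w)          # noise texture to be advected

# A uniform rightward flow field (vx, vy), in pixels per step.
vx = np.ones((h, w))
vy = np.zeros((h, w))

def advect(tex, vx, vy, dt=1.0):
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # Backward trace: find the upstream source position for every pixel.
    src_x = np.clip(np.round(xs - vx * dt).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys - vy * dt).astype(int), 0, h - 1)
    return tex[src_y, src_x]

step1 = advect(tex, vx, vy)
# With a uniform unit flow to the right, the texture shifts one pixel right.
assert np.allclose(step1[:, 1:], tex[:, :-1])
```

    Extending this to 3D flows, as the paper discusses, means adding a third coordinate and sampling 3D (or time-varying 4D) textures instead.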

  16. Tools for 3D scientific visualization in computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Bancroft, Gordon; Plessel, Todd; Merritt, Fergus; Watson, Val

    1989-01-01

    The purpose is to describe the tools and techniques in use at the NASA Ames Research Center for performing visualization of computational aerodynamics, for example visualization of flow fields from computer simulations of fluid dynamics about vehicles such as the Space Shuttle. The hardware used for visualization is a high-performance graphics workstation connected to a supercomputer with a high-speed channel. At present, the workstation is a Silicon Graphics IRIS 3130, the supercomputer is a CRAY2, and the high-speed channel is a hyperchannel. The three techniques used for visualization are post-processing, tracking, and steering. Post-processing analysis is done after the simulation. Tracking analysis is done during a simulation but is not interactive, whereas steering analysis involves modifying the simulation interactively during the simulation. Using post-processing methods, a flow simulation is executed on a supercomputer and, after the simulation is complete, the results are processed for viewing. The software in use and under development at NASA Ames Research Center for performing these types of tasks in computational aerodynamics is described. Workstation performance issues, benchmarking, and high-performance networks for this purpose are also discussed, as are descriptions of other hardware for digital video and film recording.

  17. Hardware description languages

    NASA Technical Reports Server (NTRS)

    Tucker, Jerry H.

    1994-01-01

    Hardware description languages are special-purpose programming languages. They are primarily used to specify the behavior of digital systems and are rapidly replacing traditional digital system design techniques. This is because they allow the designer to concentrate on how the system should operate rather than on implementation details. Hardware description languages allow a digital system to be described at a wide range of abstraction levels, and they support top-down design techniques. A key feature of any hardware description language environment is its ability to simulate the modeled system. The two most important hardware description languages are Verilog and VHDL. Verilog has been the dominant language for the design of application-specific integrated circuits (ASICs). However, VHDL is rapidly gaining in popularity.

  18. Hardware removal - extremity

    MedlinePlus

    Surgeons use hardware such as pins, plates, or screws to help fix a broken bone ... SW, Hotchkiss RN, Pederson WC, Kozin SH, Cohen MS, eds. Green's Operative Hand Surgery. 7th ed. Philadelphia, ...

  19. Initial Hardware Development Schedule

    NASA Technical Reports Server (NTRS)

    Culpepper, William X.

    1991-01-01

    The hardware development schedule for the Common Lunar Lander's (CLLs) tracking system is presented. Among the topics covered are the following: historical perspective, solution options, industry contacts, and the rationale for selection.

  20. 3-D Perspective Pasadena, California

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This perspective view shows the western part of the city of Pasadena, California, looking north towards the San Gabriel Mountains. Portions of the cities of Altadena and La Cañada Flintridge are also shown. The image was created from three datasets: the Shuttle Radar Topography Mission (SRTM) supplied the elevation data; Landsat data from November 11, 1986 provided the land surface color (not the sky); and U.S. Geological Survey digital aerial photography provided the image detail. The Rose Bowl, surrounded by a golf course, is the circular feature at the bottom center of the image. The Jet Propulsion Laboratory is the cluster of large buildings north of the Rose Bowl at the base of the mountains. A large landfill, Scholl Canyon, is the smooth area in the lower left corner of the scene. This image shows the power of combining data from different sources to create planning tools to study problems that affect large urban areas. In addition to the well-known earthquake hazards, Southern California is affected by a natural cycle of fire and mudflows. Wildfires strip the mountains of vegetation, increasing the hazards from flooding and mudflows for several years afterwards. Data such as shown in this image can be used to predict both how wildfires will spread over the terrain and how mudflows will be channeled down the canyons. The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission was designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, an additional C-band imaging antenna and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency

  1. Exascale Hardware Architectures Working Group

    SciTech Connect

    Hemmert, S; Ang, J; Chiang, P; Carnes, B; Doerfler, D; Leininger, M; Dosanjh, S; Fields, P; Koch, K; Laros, J; Noe, J; Quinn, T; Torrellas, J; Vetter, J; Wampler, C; White, A

    2011-03-15

    The ASC Exascale Hardware Architecture working group is challenged to provide input on the following areas impacting the future use and usability of potential exascale computer systems: processor, memory, and interconnect architectures, as well as the power and resilience of these systems. Going forward, there are many challenging issues that will need to be addressed. First, power constraints in processor technologies will lead to steady increases in parallelism within a socket. Additionally, all cores may not be fully independent nor fully general purpose. Second, there is a clear trend toward less balanced machines, in terms of compute capability compared to memory and interconnect performance. In order to mitigate the memory issues, memory technologies will introduce 3D stacking, eventually moving on-socket and likely on-die, providing greatly increased bandwidth but unfortunately also likely providing smaller memory capacity per core. Off-socket memory, possibly in the form of non-volatile memory, will create a complex memory hierarchy. Third, communication energy will dominate the energy required to compute, such that interconnect power and bandwidth will have a significant impact. All of the above changes are driven by the need for greatly increased energy efficiency, as current technology will prove unsuitable for exascale, due to unsustainable power requirements of such a system. These changes will have the most significant impact on programming models and algorithms, but they will be felt across all layers of the machine. There is clear need to engage all ASC working groups in planning for how to deal with technological changes of this magnitude. The primary function of the Hardware Architecture Working Group is to facilitate codesign with hardware vendors to ensure future exascale platforms are capable of efficiently supporting the ASC applications, which in turn need to meet the mission needs of the NNSA Stockpile Stewardship Program. This issue is

  2. Electrophysiological evidence of separate pathways for the perception of depth and 3D objects.

    PubMed

    Gao, Feng; Cao, Bihua; Cao, Yunfei; Li, Fuhong; Li, Hong

    2015-05-01

    Previous studies have investigated the neural mechanism of 3D perception, but the neural distinction between 3D-objects and depth processing remains unclear. In the present study, participants viewed three types of graphics (planar graphics, perspective drawings, and 3D objects) while event-related potentials (ERP) were recorded. The ERP results revealed the following: (1) 3D objects elicited a larger and delayed N1 component than the other two types of stimuli; (2) during the P2 time window, significant differences between 3D objects and the perspective drawings were found mainly over a group of electrode sites in the left lateral occipital region; and (3) during the N2 complex, differences between planar graphics and perspective drawings were found over a group of electrode sites in the right hemisphere, whereas differences between perspective drawings and 3D objects were observed at another group of electrode sites in the left hemisphere. These findings support the claim that depth processing and object identification might be processed by separate pathways and at different latencies.

  3. Case study: Beauty and the Beast 3D: benefits of 3D viewing for 2D to 3D conversion

    NASA Astrophysics Data System (ADS)

    Handy Turner, Tara

    2010-02-01

    From the earliest stages of the Beauty and the Beast 3D conversion project, the advantages of accurate desk-side 3D viewing were evident. While designing and testing the 2D to 3D conversion process, the engineering team at Walt Disney Animation Studios proposed a 3D viewing configuration that not only allowed artists to "compose" stereoscopic 3D but also improved efficiency by allowing artists to instantly detect which image features were essential to the stereoscopic appeal of a shot and which features had minimal or even negative impact. At a time when few commercial 3D monitors were available and few software packages provided 3D desk-side output, the team designed their own prototype devices and collaborated with vendors to create a "3D composing" workstation. This paper outlines the display technologies explored, the final choices made for Beauty and the Beast 3D, wish-lists for future development and a few rules of thumb for composing compelling 2D to 3D conversions.

  4. MOM3D/EM-ANIMATE - MOM3D WITH ANIMATION CODE

    NASA Technical Reports Server (NTRS)

    Shaeffer, J. F.

    1994-01-01

    compare surface-current distribution due to various initial excitation directions or electric field orientations. The program can accept up to 50 planes of field data consisting of a grid of 100 by 100 field points. These planes of data are user selectable and can be viewed individually or concurrently. With these preset limits, the program requires 55 megabytes of core memory to run. These limits can be changed in the header files to accommodate the available core memory of an individual workstation. An estimate of memory required can be made as follows: approximate memory in bytes equals (number of nodes times number of surfaces times 14 variables times bytes per word, typically 4 bytes per floating point) plus (number of field planes times number of nodes per plane times 21 variables times bytes per word). This gives the approximate memory size required to store the field and surface-current data. The total memory size is approximately 400,000 bytes plus the data memory size. The animation calculations are performed in real time at any user set time step. For Silicon Graphics Workstations that have multiple processors, this program has been optimized to perform these calculations on multiple processors to increase animation rates. The optimized program uses the SGI PFA (Power FORTRAN Accelerator) library. On single processor machines, the parallelization directives are seen as comments to the program and will have no effect on compilation or execution. MOM3D and EM-ANIMATE are written in FORTRAN 77 for interactive or batch execution on SGI series computers running IRIX 3.0 or later. The RAM requirements for these programs vary with the size of the problem being solved. A minimum of 30Mb of RAM is required for execution of EM-ANIMATE; however, the code may be modified to accommodate the available memory of an individual workstation. For EM-ANIMATE, twenty-four bit, double-buffered color capability is suggested, but not required. 
Sample executables and sample input and
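    The memory estimate quoted in the record above is simple enough to express directly. The sketch below encodes the stated rule of thumb; the function name and the example node counts are illustrative, not from the original program documentation.

```python
# Memory estimate from the record above:
#   data memory ~ (nodes * surfaces * 14 variables * bytes/word)
#               + (field planes * nodes per plane * 21 variables * bytes/word),
# plus roughly 400,000 bytes of fixed program memory (4 bytes/word typical).

def em_animate_memory_bytes(nodes, surfaces, planes, nodes_per_plane,
                            bytes_per_word=4):
    surface_data = nodes * surfaces * 14 * bytes_per_word
    field_data = planes * nodes_per_plane * 21 * bytes_per_word
    return 400_000 + surface_data + field_data

# Using the stated preset limits of 50 planes of 100 x 100 field points,
# with an illustrative 10,000-node single-surface model:
print(em_animate_memory_bytes(nodes=10_000, surfaces=1,
                              planes=50, nodes_per_plane=100 * 100))
# -> 42960000
```

    At roughly 43 MB of data memory, this example stays within the 55 MB core figure cited for the preset limits.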

  5. The OpenEarth Framework (OEF) for the 3D Visualization of Integrated Earth Science Data

    NASA Astrophysics Data System (ADS)

    Nadeau, David; Moreland, John; Baru, Chaitan; Crosby, Chris

    2010-05-01

    seismic tomography may be sliced by multiple oriented cutting planes and isosurfaced to create 3D skins that trace feature boundaries within the data. Topography may be overlaid with satellite imagery, maps, and data such as gravity and magnetics measurements. Multiple data sets may be visualized simultaneously using overlapping layers within a common 3D coordinate space. Data management within the OEF handles and hides the inevitable quirks of differing file formats, web protocols, storage structures, coordinate spaces, and metadata representations. Heuristics are used to extract necessary metadata used to guide data and visual operations. Derived data representations are computed to better support fluid interaction and visualization while the original data is left unchanged in its original form. Data is cached for better memory and network efficiency, and all visualization makes use of 3D graphics hardware support found on today's computers. The OpenEarth Framework project is currently prototyping the software for use in the visualization and integration of continental-scale geophysical data being produced by EarthScope-related research in the Western US. The OEF is providing researchers with new ways to display and interrogate their data and is anticipated to be a valuable tool for future EarthScope-related research.

  6. Mini 3D for shallow gas reconnaissance

    SciTech Connect

    Vallieres, T. des; Enns, D.; Kuehn, H.; Parron, D.; Lafet, Y.; Van Hulle, D.

    1996-12-31

    The Mini 3D project was undertaken by TOTAL and ELF with the support of CEPM (Comite d'Etudes Petrolieres et Marines) to define an economical method of obtaining 3D seismic HR data for shallow gas assessment. An experimental 3D survey was carried out with classical site survey techniques in the North Sea. From these data, 19 simulations were produced to compare different acquisition geometries ranging from dual, 600 m long cables to a single receiver. Results show that short offsets, low fold and very simple streamer positioning are sufficient to give a reliable 3D image of gas-charged bodies. The 3D data allow a much more accurate risk delineation than 2D HR data. Moreover, on financial grounds, Mini 3D is comparable in cost to a classical HR 2D survey. In view of these results, such HR 3D surveys should now be the standard for shallow gas surveying.

  7. 3D Printing in Zero-G ISS Technology Demonstration

    NASA Technical Reports Server (NTRS)

    Johnston, Mallory M.; Werkheiser, Mary J.; Cooper, Kenneth G.; Snyder, Michael P.; Edmunson, Jennifer E.

    2014-01-01

    The National Aeronautics and Space Administration (NASA) has a long-term strategy to fabricate components and equipment on demand for manned missions to the Moon, Mars, and beyond. To support this strategy, NASA and Made in Space, Inc. are developing the 3D Printing In Zero-G payload as a Technology Demonstration for the International Space Station. The 3D Printing In Zero-G experiment will be the first machine to perform 3D printing in space. The greater the distance from Earth and the longer the mission duration, the more difficult resupply becomes; this requires a change from the current spares, maintenance, repair, and hardware design model that has been used on the International Space Station up until now. Given the extension of the ISS Program, which will inevitably result in replacement parts being required, the ISS is an ideal platform to begin changing the current model for resupply and repair to one that is more suitable for all exploration missions. 3D printing, more formally known as additive manufacturing, is the method of building parts/objects/tools layer by layer. The 3D Print experiment will use extrusion-based additive manufacturing, which involves building an object out of plastic deposited by a wire feed via an extruder head. Parts can be printed from data files loaded on the device at launch, as well as additional files uplinked to the device while on orbit. The plastic extrusion additive manufacturing process is a low-energy, low-mass solution to many common needs on board the ISS. The 3D Print payload will serve as the ideal first step to proving that process in space. It is unreasonable to expect NASA to launch large blocks of material from which parts or tools can be traditionally machined, and even more unreasonable to fly up specialized manufacturing hardware to perform the entire range of functions traditional machining requires. The technology to produce parts on demand, in space, offers unique design options that are not possible

  8. NeuroTerrain – a client-server system for browsing 3D biomedical image data sets

    PubMed Central

    Gustafson, Carl; Bug, William J; Nissanov, Jonathan

    2007-01-01

    -interactive applications. The server implementation takes full advantage of the data center's high-performance hardware, where it can be co-localized with centrally-located 3D dataset repositories, extending access to the researcher community throughout the Internet. Conclusion The combination of an optimized server and a modular, platform-independent client provides an ideal environment for viewing complex 3D biomedical datasets, taking full advantage of high-performance servers to prepare images and subsets of associated meta-data for viewing, as well as the graphical capabilities in Java to actually display the data. PMID:17280615

  9. Interactive 2D to 3D stereoscopic image synthesis

    NASA Astrophysics Data System (ADS)

    Feldman, Mark H.; Lipton, Lenny

    2005-03-01

    Advances in stereoscopic display technologies, graphics card devices, and digital imaging algorithms have opened up new possibilities for synthesizing stereoscopic images. The power of today's DirectX/OpenGL optimized graphics cards, together with new and creative imaging tools found in software products such as Adobe Photoshop, provides a powerful environment for converting planar drawings and photographs into stereoscopic images. The basis for such a creative process is the focus of this paper. This article presents a novel technique which uses advanced imaging features and custom Windows-based software that utilizes the DirectX 9 API to provide the user with an interactive stereo image synthesizer. By creating an accurate and interactive world scene with movable and flexible depth-map-altered textured surfaces, and perspective stereoscopic cameras with both visible frustums and zero-parallax planes, a user can precisely model a virtual three-dimensional representation of a real-world scene. Current versions of Adobe Photoshop provide a creative user with a rich assortment of tools needed to highlight elements of a 2D image, simulate hidden areas, and creatively shape them for a 3D scene representation. The technique described has been implemented as a Photoshop plug-in and thus allows for a seamless transition of these 2D image elements into 3D surfaces, which are subsequently rendered to create stereoscopic views.
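    The underlying idea of synthesizing a stereo pair from a 2D image plus depth information can be sketched as follows. This is a minimal, illustrative depth-image-based rendering pass, not the paper's DirectX pipeline: pixels are shifted horizontally in proportion to their distance from the zero-parallax plane, leaving disocclusion holes where nearer objects move aside.

```python
import numpy as np

# Toy scene: a gradient image with a "near" object occupying columns 4..7.
h, w = 8, 16
image = np.tile(np.arange(w, dtype=float), (h, 1))
depth = np.full((h, w), 0.5)    # normalized depth; 0 = near, 1 = far
depth[:, 4:8] = 0.0             # the near object

def synthesize_view(image, depth, eye, max_disparity=2, zero_plane=0.5):
    """Shift each pixel by eye * disparity; nearer pixels shift more."""
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            d = int(round(eye * max_disparity * (zero_plane - depth[y, x])))
            tx = x + d
            if 0 <= tx < w:
                out[y, tx] = image[y, x]    # holes remain where nothing lands
    return out

left = synthesize_view(image, depth, eye=-1)
right = synthesize_view(image, depth, eye=+1)
```

    Pixels at the zero-parallax depth stay put in both views, while the near object shifts in opposite directions, which is exactly the effect an artist composes when placing elements in depth.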

  10. Real-time visualization of large volume datasets on standard PC hardware.

    PubMed

    Xie, Kai; Yang, Jie; Zhu, Y M

    2008-05-01

    In the medical area, interactive three-dimensional volume visualization of large volume datasets is a challenging task. One of the major challenges in graphics processing unit (GPU)-based volume rendering algorithms is the limited size of texture memory imposed by current GPU architecture. We attempt to overcome this limitation by rendering only visible parts of large CT datasets. In this paper, we present an efficient, high-quality volume rendering algorithm using GPUs for rendering large CT datasets at interactive frame rates on standard PC hardware. We subdivide the volume dataset into uniformly sized blocks and take advantage of combinations of early ray termination, empty-space skipping and visibility culling to accelerate the whole rendering process and render only the visible parts of the volume data. We have implemented our volume rendering algorithm for a large volume dataset of 512 x 304 x 1878 dimensions (visible female), and achieved real-time performance (i.e., 3-4 frames per second) on a Pentium 4 2.4 GHz PC equipped with an NVIDIA GeForce 6600 graphics card (256 MB video memory). This method can be used as a 3D visualization tool of large CT datasets for doctors or radiologists.
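    Two of the accelerations named in this record, early ray termination and empty-space skipping, can be shown in a tiny CPU sketch. This is illustrative, not the paper's GPU code; a single 1D ray stands in for the full volume, and block granularity stands in for the paper's uniform subdivision.

```python
import numpy as np

N = 64
volume = np.zeros(N)        # one ray's worth of opacity samples
volume[40:50] = 0.9         # an opaque slab partway along the ray

BLOCK = 8
# Precomputed per-block emptiness, as block subdivision makes possible.
block_empty = [not volume[b:b + BLOCK].any() for b in range(0, N, BLOCK)]

def march(volume):
    color, alpha, samples = 0.0, 0.0, 0
    i = 0
    while i < N:
        if block_empty[i // BLOCK]:
            i = (i // BLOCK + 1) * BLOCK   # empty-space skip: jump the block
            continue
        a = volume[i]
        color += (1.0 - alpha) * a         # front-to-back compositing
        alpha += (1.0 - alpha) * a
        samples += 1
        if alpha > 0.95:                   # early ray termination
            break
        i += 1
    return color, alpha, samples

color, alpha, samples = march(volume)
# Only 2 samples are taken instead of all 64: the empty blocks are skipped
# and the ray terminates once the slab saturates the accumulated opacity.
```

    On a GPU the same two tests prune fragment work per ray; the logic is identical.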

  11. Repellency Awareness Graphic

    EPA Pesticide Factsheets

    Companies can apply to use the voluntary new graphic on product labels of skin-applied insect repellents. This graphic is intended to help consumers easily identify the protection time for mosquitoes and ticks and select appropriately.

  12. Real-time auto-stereoscopic visualization of 3D medical images

    NASA Astrophysics Data System (ADS)

    Portoni, Luisa; Patak, Alexandre; Noirard, Pierre; Grossetie, Jean-Claude; van Berkel, Cees

    2000-04-01

    The work described here concerns multi-viewer auto-stereoscopic visualization of 3D models of anatomical structures and organs of the human body. High-quality 3D models of more than 1600 anatomical structures have been reconstructed using the Visualization Toolkit, a freely available C++ class library for 3D graphics and visualization. The 2D images used for 3D reconstruction come from the Visible Human Data Set. Auto-stereoscopic 3D image visualization is obtained using a prototype monitor developed at Philips Research Labs, UK. This special multiview 3D-LCD screen has been connected directly to an SGI workstation, where the 3D reconstruction and medical imaging applications are executed. Dedicated software has been developed to implement the multiview capability. A number of static or animated views of the same object can be seen simultaneously on the 3D-LCD screen by several observers, who have a real 3D perception of the visualized scene without the use of extra aids such as dedicated glasses or head-mounted displays. The developed software applications allow real-time interaction with the visualized 3D models; didactical animations and movies have been realized as well.

  13. Impact of the 3-D model strategy on science learning of the solar system

    NASA Astrophysics Data System (ADS)

    Alharbi, Mohammed

    The purpose of this mixed-method study, quantitative and descriptive, was to determine whether first-middle grade (seventh grade) students at Saudi schools are able to learn and use the Autodesk Maya software to interact with and create their own 3-D models and animations, and whether their use of the software influences their study habits and their understanding of the school subject matter. The study revealed that there is value to science students in using 3-D software to create 3-D models to complete science assignments. This study also aimed to address middle-school students' ability to learn 3-D software in art class and then ultimately use it in their science class. The success of this study may open the way to considering the impact of 3-D modeling on other school subjects, such as mathematics, art, and geography. When students start using graphic design, including 3-D software, at a young age, they tend to develop personal creativity and skills. The success of this study, if applied in schools, will provide the community with skillful young designers and increase awareness of graphic design and the new 3-D technology. An experimental method was used to answer the quantitative research question: are there significant differences among learning methods using 3-D models (no 3-D, premade 3-D, and create 3-D) in a science class being taught about the solar system, in terms of their impact on students' science achievement scores? A descriptive method was used to answer the qualitative research questions, which concern the difficulty of learning and using the Autodesk Maya software, the time students take to use the basic levels of the Polygon and Animation parts of the Autodesk Maya software, and the quality level of students' work.

  14. Photogrammetry for rapid prototyping: development of noncontact 3D reconstruction technologies

    NASA Astrophysics Data System (ADS)

    Knyaz, Vladimir A.

    2002-04-01

    An important stage of rapid prototyping technology is generating a computer 3D model of the object to be reproduced. A wide variety of techniques for 3D model generation exists, ranging from manual 3D model generation to fully automated reverse engineering systems. Progress in CCD sensors and computers provides the background for integrating photogrammetry, as an accurate source of 3D data, with CAD/CAM. The paper presents the results of developing photogrammetric methods for non-contact spatial coordinate measurement and generation of computer 3D models of real objects. The technology is based on processing convergent images of the object to calculate its 3D coordinates and reconstruct its surface. The hardware used for spatial coordinate measurement is based on a PC as the central processing unit and a video camera as the image acquisition device. The original software for Windows 9X implements the complete technology of 3D reconstruction for rapid input of geometry data into CAD/CAM systems. Technical characteristics of the developed systems are given, along with the results of applying them to various 3D reconstruction tasks. The paper describes the techniques used for non-contact measurement and the methods providing the metric characteristics of the reconstructed 3D model. The results of applying the system to 3D reconstruction of complex industrial objects are also presented.
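    The heart of recovering 3D coordinates from convergent images is ray triangulation. The sketch below is a hypothetical, self-contained least-squares formulation, not the paper's software: given two camera centers and viewing directions, it solves for the point closest to both rays.

```python
import numpy as np

def triangulate(c1, d1, c2, d2):
    """Least-squares intersection of two rays x = c + t*d."""
    A, b = [], []
    for c, d in ((c1, d1), (c2, d2)):
        d = d / np.linalg.norm(d)
        # (I - d d^T) projects out the along-ray component, so
        # (I - d d^T) x = (I - d d^T) c constrains x to lie on the ray.
        P = np.eye(3) - np.outer(d, d)
        A.append(P)
        b.append(P @ c)
    x, *_ = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)
    return x

# Two cameras whose rays both pass through the point (1, 2, 5).
p = triangulate(np.array([0.0, 0.0, 0.0]), np.array([1.0, 2.0, 5.0]),
                np.array([4.0, 0.0, 0.0]), np.array([-3.0, 2.0, 5.0]))
print(p)   # -> [1. 2. 5.]
```

    With noisy image measurements the rays become skew and the same least-squares solve returns the midpoint-like best estimate, which is why the formulation degrades gracefully.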

  15. 3-D Technology Approaches for Biological Ecologies

    NASA Astrophysics Data System (ADS)

    Liu, Liyu; Austin, Robert; U. S-China Physical-Oncology Sciences Alliance (PS-OA) Team

    Constructing three-dimensional (3-D) landscapes is an unavoidable issue in the deep study of biological ecologies because, at every scale in nature, ecosystems are composed of complex 3-D environments and biological behaviors. If a 3-D technology could let complex ecosystems be built easily, mimicking the in vivo microenvironment realistically with flexible environmental controls, it would be a powerful thrust to assist researchers in their explorations. For years, we have been utilizing and developing different technologies for constructing 3-D micro landscapes for in vitro biophysics studies. Here, I will review our past efforts, including probing cancer cell invasiveness with 3-D silicon-based Tepuis, constructing 3-D microenvironments for cell invasion and metastasis through polydimethylsiloxane (PDMS) soft lithography, and exploring optimized stenting positions for coronary bifurcation disease with 3-D wax printing and our latest home-designed 3-D bio-printer. Although 3-D technologies are not yet mature enough to produce arbitrary 3-D micro-ecological models with easy design and fabrication, I hope the audience will come away with a sense of their significance and of the breakthroughs to be expected in the near future. This work was supported by the State Key Development Program for Basic Research of China (Grant No. 2013CB837200), the National Natural Science Foundation of China (Grant No. 11474345) and the Beijing Natural Science Foundation (Grant No. 7154221).

  16. 3D change detection - Approaches and applications

    NASA Astrophysics Data System (ADS)

    Qin, Rongjun; Tian, Jiaojiao; Reinartz, Peter

    2016-12-01

    Due to the unprecedented development of sensors, platforms and algorithms for 3D data acquisition and generation, 3D spaceborne, airborne and close-range data, in the form of image-based and Light Detection and Ranging (LiDAR) based point clouds, Digital Elevation Models (DEM) and 3D city models, have become more accessible than ever before. Change detection (CD) and time-series data analysis in 3D have gained great attention due to their capability of providing volumetric dynamics that facilitate more applications and more accurate results. State-of-the-art CD reviews aim to provide a comprehensive synthesis and to simplify the taxonomy of traditional remote sensing CD techniques, which mainly sit within the boundary of 2D image/spectrum analysis, largely ignoring the particularities of the 3D aspects of the data. The inclusion of 3D data for change detection (termed 3D CD) not only provides a source with a different modality for analysis, but also transcends the border of traditional top-view 2D pixel/object-based analysis toward highly detailed, oblique-view or voxel-based geometric analysis. This paper reviews the recent developments and applications of 3D CD using remote sensing and close-range data, in support of both academic and industry researchers who seek solutions for detecting and analyzing the 3D dynamics of various objects of interest. We first describe the general considerations of 3D CD problems at different processing stages and identify CD types based on the information used: geometric comparison and geometric-spectral analysis. We then summarize relevant works and practices in urban, environmental, ecological, civil and other applications. Given the broad spectrum of applications and the different types of 3D data, we discuss important issues in 3D CD methods. Finally, we present concluding remarks on the algorithmic aspects of 3D CD.
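    The simplest geometric-comparison case the review covers, DEM differencing between two epochs, can be sketched as follows (a minimal illustration with toy arrays and a hypothetical 2-unit change threshold, not code from the paper):

    ```python
    import numpy as np

    def dem_change_mask(dem_t0, dem_t1, threshold=2.0):
        """Flag cells whose elevation changed by more than `threshold`
        (in the DEMs' vertical units)."""
        diff = dem_t1 - dem_t0
        return diff, np.abs(diff) > threshold

    # Two toy 4x4 DEMs: one cell gains 5 units of elevation (e.g. new construction).
    t0 = np.zeros((4, 4))
    t1 = t0.copy()
    t1[1, 2] += 5.0
    diff, changed = dem_change_mask(t0, t1)
    print(int(changed.sum()))   # -> 1
    ```

    Real 3D CD pipelines must first co-register the epochs and account for vertical uncertainty; the threshold here stands in for that analysis.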

  17. Output-sensitive 3D line integral convolution.

    PubMed

    Falk, Martin; Weiskopf, Daniel

    2008-01-01

    We propose an output-sensitive visualization method for 3D line integral convolution (LIC) whose rendering speed is largely independent of the data set size and mostly governed by the complexity of the output on the image plane. Our approach of view-dependent visualization tightly links the LIC generation with the volume rendering of the LIC result in order to avoid the computation of unnecessary LIC points: early-ray termination and empty-space leaping techniques are used to skip the computation of the LIC integral in a lazy-evaluation approach; both ray casting and texture slicing can be used as volume-rendering techniques. The input noise is modeled in object space to allow for temporal coherence under object and camera motion. Different noise models are discussed, covering dense representations based on filtered white noise all the way to sparse representations similar to oriented LIC. Aliasing artifacts are avoided by frequency control over the 3D noise and by employing a 3D variant of MIPmapping. A range of illumination models is applied to the LIC streamlines: different codimension-2 lighting models and a novel gradient-based illumination model that relies on precomputed gradients and does not require any direct calculation of gradients after the LIC integral is evaluated. We discuss the issue of proper sampling of the LIC and volume-rendering integrals by employing a frequency-space analysis of the noise model and the precomputed gradients. Finally, we demonstrate that our visualization approach lends itself to a fast graphics processing unit (GPU) implementation that supports both steady and unsteady flow. Therefore, this 3D LIC method allows users to interactively explore 3D flow by means of high-quality, view-dependent, and adaptive LIC volume visualization. 
Applications to flow visualization in combination with feature extraction and focus-and-context visualization are described, a comparison to previous methods is provided, and a detailed performance evaluation is given.
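    The early-ray-termination idea the abstract relies on can be illustrated with a minimal front-to-back compositing loop (a generic volume-rendering sketch, not the authors' GPU implementation; the 0.95 opacity cutoff is an assumed value):

    ```python
    import numpy as np

    def composite_ray(samples_rgba, opacity_cutoff=0.95):
        """Front-to-back compositing along one ray with early-ray termination:
        once accumulated opacity passes the cutoff, the remaining samples (and,
        in the paper's setting, their expensive LIC integrals) are skipped."""
        color, alpha, evaluated = np.zeros(3), 0.0, 0
        for r, g, b, a in samples_rgba:
            color += (1.0 - alpha) * a * np.array([r, g, b])
            alpha += (1.0 - alpha) * a
            evaluated += 1
            if alpha >= opacity_cutoff:   # early-ray termination
                break
        return color, alpha, evaluated

    # 10 identical semi-opaque samples: the ray saturates after only 4 of them.
    _, alpha, n = composite_ray([(1.0, 1.0, 1.0, 0.6)] * 10)
    print(n)   # -> 4
    ```

    Output sensitivity comes from exactly this effect: work scales with what contributes to the image, not with the data set size.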

  18. NASA HUNCH Hardware

    NASA Technical Reports Server (NTRS)

    Hall, Nancy R.; Wagner, James; Phelps, Amanda

    2014-01-01

    What is NASA HUNCH? High School Students United with NASA to Create Hardware (HUNCH) is an instructional partnership between NASA and educational institutions. This partnership benefits both NASA and students: NASA receives cost-effective hardware and soft goods, while students receive real-world, hands-on experience. The 2014-2015 school year was the 12th year of the HUNCH Program. NASA Glenn Research Center joined a program that already included NASA Johnson Space Center, Marshall Space Flight Center, Langley Research Center and Goddard Space Flight Center. The program included 76 schools in 24 states, and NASA Glenn worked with the following five schools in the HUNCH Build to Print Hardware Program: Medina Career Center, Medina, OH; Cattaraugus Allegheny-BOCES, Olean, NY; Orleans Niagara-BOCES, Medina, NY; Apollo Career Center, Lima, OH; Romeo Engineering and Tech Center, Washington, MI. The schools built various parts of an International Space Station (ISS) middeck stowage locker and learned about the manufacturing process and how best to build these components to NASA specifications. For the 2015-2016 school year, the schools will be part of a larger group building flight hardware consisting of 20 ISS middeck stowage lockers for the ISS Program. The HUNCH Program consists of: Build to Print Hardware; Build to Print Soft Goods; Design and Prototyping; Culinary Challenge; Implementation: Web Page and Video Production.

  19. The 1986/87 Classroom Computer Learning Hardware Buyers' Guide.

    ERIC Educational Resources Information Center

    Classroom Computer Learning, 1986

    1986-01-01

    Provides information on selected computer peripherals which seem most appropriate for education in terms of availability, price, and application. Hardware includes modems, local area networks, printers, graphics tablets (as well as touch tablets and alternate keyboards), and joysticks. Each item listed includes company, computer capability, price,…

  20. Skylab biomedical hardware development

    NASA Technical Reports Server (NTRS)

    Huffstetler, W. J., Jr.; Lem, J. D.

    1974-01-01

    The development of hardware to support biomedical experimentation and operations in the Skylab vehicle presented unique technical problems. Designs were required to enable the accurate measurement of many varied physiological parameters and to compensate for zero g such that uninhibited equipment operation would be possible. Because of problems that occurred during the orbital workshop launch, special tests were run and new equipment was designed and built for use by the first Skylab crew. Design concepts used in the development of hardware to support cardiovascular, pulmonary, vestibular, body, and specimen mass measuring experiments are discussed. Additionally, major problem areas and the corresponding design solutions, as well as knowledge gained that will be pertinent for future life sciences hardware development, are presented.

  1. Fast 3-d tomographic microwave imaging for breast cancer detection.

    PubMed

    Grzegorczyk, Tomasz M; Meaney, Paul M; Kaufman, Peter A; diFlorio-Alexander, Roberta M; Paulsen, Keith D

    2012-08-01

    Microwave breast imaging (using electromagnetic waves of frequencies around 1 GHz) has mostly remained at the research level for the past decade, gaining little clinical acceptance. The major hurdles limiting patient use are both at the hardware level (challenges in collecting accurate and noncorrupted data) and software level (often plagued by unrealistic reconstruction times in the tens of hours). In this paper we report improvements that address both issues. First, the hardware is able to measure signals down to levels compatible with sub-centimeter image resolution while keeping an exam time under 2 min. Second, the software overcomes the enormous time burden and produces similarly accurate images in less than 20 min. The combination of the new hardware and software allows us to produce and report here the first clinical 3-D microwave tomographic images of the breast. Two clinical examples are selected out of 400+ exams conducted at the Dartmouth Hitchcock Medical Center (Lebanon, NH). The first example demonstrates the potential usefulness of our system for breast cancer screening while the second example focuses on therapy monitoring.

  2. 3D measurement for rapid prototyping

    NASA Astrophysics Data System (ADS)

    Albrecht, Peter; Lilienblum, Tilo; Sommerkorn, Gerd; Michaelis, Bernd

    1996-08-01

    Optical 3-D measurement is an interesting approach for rapid prototyping: it is necessary both to obtain the 3-D data of an object and to check the manufactured object (quality checking), and optical 3-D measurement can do both. Classical 3-D measurement procedures based on photogrammetry cause systematic errors at strongly curved surfaces or at steps in surfaces. One possibility for reducing these errors is to calculate the 3-D coordinates from several successively taken images; this yields higher spatial resolution and reduces the systematic errors at 'problem surfaces.' Another possibility is to process the measurement values with neural networks: a modified associative memory smoothes and corrects the calculated 3-D coordinates using a-priori knowledge about the measurement object.

  3. Engineering computer graphics in gas turbine engine design, analysis and manufacture

    NASA Technical Reports Server (NTRS)

    Lopatka, R. S.

    1975-01-01

    A time-sharing and computer graphics facility designed to provide effective interactive tools to a large number of engineering users with varied requirements is described. The application of computer graphics displays at several levels of hardware complexity and capability is discussed, with examples of graphics systems tracing gas turbine product development from preliminary design through manufacture. Highlights of an operating system tailored for interactive engineering graphics are described.

  4. IMPROMPTU: a system for automatic 3D medical image-analysis.

    PubMed

    Sundaramoorthy, G; Hoford, J D; Hoffman, E A; Higgins, W E

    1995-01-01

    The utility of three-dimensional (3D) medical imaging is hampered by difficulties in extracting anatomical regions and making measurements in 3D images. Presently, a user is generally forced to use time-consuming, subjective, manual methods, such as slice tracing and region painting, to define regions of interest. Automatic image-analysis methods can ameliorate the difficulties of manual methods. This paper describes a graphical user interface (GUI) system for constructing automatic image-analysis processes for 3D medical-imaging applications. The system, referred to as IMPROMPTU, provides a user-friendly environment for prototyping, testing and executing complex image-analysis processes. IMPROMPTU can stand alone or it can interact with an existing graphics-based 3D medical image-analysis package (VIDA), giving a strong environment for 3D image-analysis, consisting of tools for visualization, manual interaction, and automatic processing. IMPROMPTU links to a large library of 1D, 2D, and 3D image-processing functions, referred to as VIPLIB, but a user can easily link in custom-made functions. 3D applications of the system are given for left-ventricular chamber, myocardial, and upper-airway extractions.

  5. The capture and recreation of 3D auditory scenes

    NASA Astrophysics Data System (ADS)

    Li, Zhiyun

    The main goal of this research is to develop the theory and implement practical tools (in both software and hardware) for the capture and recreation of 3D auditory scenes. Our research is expected to have applications in virtual reality, telepresence, film, music, video games, auditory user interfaces, and sound-based surveillance. The first part of our research concerns sound capture via a spherical microphone array. The advantage of this array is that it can be steered digitally into any 3D direction with the same beampattern. We develop design methodologies to achieve flexible microphone layouts, optimal beampattern approximation, and robustness constraints. We also design novel hemispherical and circular microphone array layouts for more spatially constrained auditory scenes. Using the captured audio, we then propose a unified and simple approach for recreating the scenes by exploiting the reciprocity principle that holds between the two processes. This approach makes the system practical and easy to build. Using it, we can capture the 3D sound field with a spherical microphone array, recreate it using a spherical loudspeaker array, and ensure that the recreated sound field matches the recorded field up to a high order of spherical harmonics. For some regular or semi-regular microphone layouts, we design an efficient parallel implementation of the multi-directional spherical beamformer by using the rotational symmetries of the beampattern and of the spherical microphone array. This can be implemented in either software or hardware and easily adapted to other regular or semi-regular microphone layouts. In addition, we extend the approach to headphone-based systems. Design examples and simulation results are presented to verify our algorithms. Prototypes are built and tested in real-world auditory scenes.
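    As a rough illustration of the steering idea (digital delays rather than the dissertation's spherical-harmonic beamformer), a minimal time-domain delay-and-sum for an arbitrary microphone layout might look like this; the layout, sampling rate, and sign convention below are assumptions for the example:

    ```python
    import numpy as np

    SPEED_OF_SOUND = 343.0   # m/s

    def delay_and_sum(mic_positions, arrival_dir, signals, fs):
        """Align per-microphone arrival delays for a far-field plane wave coming
        from unit vector `arrival_dir`, then average. `signals` is
        (n_mics, n_samples); mics with larger position . arrival_dir hear the
        wave earlier, so their streams are trimmed less."""
        delays = mic_positions @ arrival_dir / SPEED_OF_SOUND
        shifts = np.round((delays.max() - delays) * fs).astype(int)
        n = signals.shape[1] - shifts.max()
        aligned = [s[d:d + n] for s, d in zip(signals, shifts)]
        return np.mean(aligned, axis=0)

    # Two mics 0.343 m apart on x; an impulse arriving from +x reaches mic 1
    # one sample earlier at fs = 1000 Hz.
    mics = np.array([[0.0, 0.0, 0.0], [0.343, 0.0, 0.0]])
    sigs = np.array([[0.0, 1.0, 0.0, 0.0],    # mic 0 hears it at sample 1
                     [1.0, 0.0, 0.0, 0.0]])   # mic 1 hears it at sample 0
    out = delay_and_sum(mics, np.array([1.0, 0.0, 0.0]), sigs, fs=1000)
    print(out)   # -> [1. 0. 0.]
    ```

    Steering "digitally into any 3D direction" amounts to re-running this alignment with a different `arrival_dir`, without moving any hardware.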

  6. Photorefractive Polymers for Updateable 3D Displays

    DTIC Science & Technology

    2010-02-24

    Final Performance Report. Dates covered: 01-01-2007 to 11-30-2009. Title: Photorefractive Polymers for Updateable 3D … Abstract: During the tenure of this project a large area updateable 3D color display has been developed for the first time using a new co-polymer … photorefractive polymers have been demonstrated. Moreover, a 6 inch × 6 inch sample was fabricated demonstrating the feasibility of making large area 3D …

  7. 3D Microperfusion Model of ADPKD

    DTIC Science & Technology

    2015-10-01

    Award number: W81XWH-14-1-0304. Title: 3D Microperfusion Model of ADPKD. Principal Investigator: David L. Kaplan. Report date: October 2015; report type: Annual Report; dates covered: 15 Sep 2014 - 14 Sep 2015. Abstract (fragment): … Stratasys 3D printer. PDMS was cast in the negative molds in order to create permanent biocompatible plastic masters (SmoothCast 310). All goals of task …

  8. 3D carotid plaque MR Imaging

    PubMed Central

    Parker, Dennis L.

    2015-01-01

    SYNOPSIS Significant progress has been made in 3D carotid plaque magnetic resonance imaging techniques in recent years. 3D plaque imaging clearly represents the future of clinical use. With effective flow-suppression techniques, a choice of different contrast-weighted acquisitions, and time-efficient imaging approaches, 3D plaque imaging offers flexible imaging-plane and view-angle analysis, large coverage, multi-vascular-bed capability, and can even be used for fast screening. PMID:26610656

  9. 3-D Extensions for Trustworthy Systems

    DTIC Science & Technology

    2011-01-01

    3-D Extensions for Trustworthy Systems (Invited Paper). Ted Huffmire, Timothy Levin, Cynthia Irvine, Ryan Kastner and Timothy Sherwood. … To address these problems, we propose an approach to trustworthy system development based on 3-D integration, an emerging chip fabrication technique in which two or more integrated circuit dies are fabricated individually and then combined into a single stack using vertical conductive posts. With 3-D …

  10. Digital holography and 3-D imaging.

    PubMed

    Banerjee, Partha; Barbastathis, George; Kim, Myung; Kukhtarev, Nickolai

    2011-03-01

    This feature issue on Digital Holography and 3-D Imaging comprises 15 papers on digital holographic techniques and applications, computer-generated holography and encryption techniques, and 3-D display. It is hoped that future work in the area leads to innovative applications of digital holography and 3-D imaging to biology and sensing, and to the development of novel nonlinear dynamic digital holographic techniques.

  11. MetaTracker: integration and abstraction of 3D motion tracking data from multiple hardware systems

    NASA Astrophysics Data System (ADS)

    Kopecky, Ken; Winer, Eliot

    2014-06-01

    Motion tracking has long been one of the primary challenges in mixed reality (MR), augmented reality (AR), and virtual reality (VR). Military and defense training can provide particularly difficult challenges for motion tracking, such as in the case of Military Operations in Urban Terrain (MOUT) and other dismounted, close quarters simulations. These simulations can take place across multiple rooms, with many fast-moving objects that need to be tracked with a high degree of accuracy and low latency. Many tracking technologies exist, such as optical, inertial, ultrasonic, and magnetic. Some tracking systems even combine these technologies to complement each other. However, there are no systems that provide a high-resolution, flexible, wide-area solution that is resistant to occlusion. While frameworks exist that simplify the use of tracking systems and other input devices, none allow data from multiple tracking systems to be combined, as if from a single system. In this paper, we introduce a method for compensating for the weaknesses of individual tracking systems by combining data from multiple sources and presenting it as a single tracking system. Individual tracked objects are identified by name, and their data is provided to simulation applications through a server program. This allows tracked objects to transition seamlessly from the area of one tracking system to another. Furthermore, it abstracts away the individual drivers, APIs, and data formats for each system, providing a simplified API that can be used to receive data from any of the available tracking systems. Finally, when single-piece tracking systems are used, those systems can themselves be tracked, allowing for real-time adjustment of the trackable area. This allows simulation operators to leverage limited resources in more effective ways, improving the quality of training.
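    The core merging behavior described above, one named object fed by several tracking systems with seamless handover between them, can be sketched as follows (a toy stand-in for the paper's server program; the class and method names are invented for illustration):

    ```python
    class TrackerHub:
        """Merge pose reports from several tracking systems into one namespace:
        each named object keeps whichever system's report is freshest, so readers
        never see per-system drivers, APIs, or data formats."""

        def __init__(self):
            self._latest = {}   # name -> (timestamp, system, pose)

        def update(self, system, name, pose, timestamp):
            prev = self._latest.get(name)
            if prev is None or timestamp >= prev[0]:
                self._latest[name] = (timestamp, system, pose)

        def pose(self, name):
            """Current pose of `name`, whichever system reported it last."""
            return self._latest[name][2]

    hub = TrackerHub()
    hub.update("optical", "helmet", (0.0, 0.0, 0.0), timestamp=1.0)
    # Object leaves the optical volume; an inertial system takes over seamlessly.
    hub.update("inertial", "helmet", (0.2, 0.0, 1.5), timestamp=2.0)
    print(hub.pose("helmet"))   # -> (0.2, 0.0, 1.5)
    ```

    A production system would also transform each source's poses into a shared coordinate frame before merging; that step is omitted here.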

  12. Development of a low cost, 3-DOF desktop laser cutter using 3D printer hardware

    NASA Astrophysics Data System (ADS)

    Jivraj, Jamil; Huang, Yize; Wong, Ronnie; Lu, Yi; Vuong, Barry; Ramjist, Joel; Gu, Xijia; Yang, Victor X. D.

    2015-03-01

    This paper presents the development of a compact, desktop laser-cutting system capable of cutting materials such as wood, metal and plastic. A re-commissioned, beheaded MakerBot® Replicator 2X is turned into a 3-DOF laser cutter by integrating an 800 W (peak power) fiber laser. Special attention is paid to the tear-down, modification, and integration of the objective lens in place of the print head. Example cuts in wood and metal are presented, as well as the design of an exhaust system.

  13. Computer hardware fault administration

    DOEpatents

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-09-14

    Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.
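    The routing idea in the claim, identify a defective link on the first network and route around it via the second, independent network, can be sketched in a few lines (a hypothetical illustration with invented link names, not the patented implementation):

    ```python
    def choose_path(src, dst, primary_routes, secondary_routes, defective_links):
        """Route on the primary network unless its path crosses a link flagged
        defective, in which case fall back to the second, independent network."""
        path = primary_routes[(src, dst)]
        if any(link in defective_links for link in path):
            path = secondary_routes[(src, dst)]
        return path

    # Hypothetical routing tables for compute nodes 0 -> 3 on two networks.
    primary = {(0, 3): ["net1:0-1", "net1:1-3"]}
    secondary = {(0, 3): ["net2:0-2", "net2:2-3"]}
    print(choose_path(0, 3, primary, secondary, defective_links={"net1:1-3"}))
    # -> ['net2:0-2', 'net2:2-3']
    ```

    With no faults flagged, the same call returns the primary path unchanged.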

  14. 3D Visualization of Radar Backscattering Diagrams Based on OpenGL

    NASA Astrophysics Data System (ADS)

    Zhulina, Yulia V.

    2004-12-01

    A digital method of calculating the radar backscattering diagrams is presented. The method uses a digital model of an arbitrary scattering object in the 3D graphics package "OpenGL" and calculates the backscattered signal in the physical optics approximation. The backscattering diagram is constructed by means of rotating the object model around the radar-target line.
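    A minimal physical-optics-style facet summation conveys the idea; here a simple facing-direction test stands in for the hidden-surface removal that the OpenGL pipeline provides in the paper's method, and all geometry values are hypothetical:

    ```python
    import numpy as np

    def po_backscatter(centers, normals, areas, view_dir, wavelength):
        """Coherently sum facet contributions: facets facing away from the radar
        contribute nothing, and each visible facet carries a two-way phase along
        the line of sight. Returns a relative (unnormalized) RCS value."""
        v = view_dir / np.linalg.norm(view_dir)
        k = 2.0 * np.pi / wavelength
        cos_inc = normals @ v
        visible = cos_inc > 0.0                       # stand-in for depth-buffer culling
        phase = np.exp(-2j * k * (centers @ v))       # two-way path phase
        field = np.sum(areas[visible] * cos_inc[visible] * phase[visible])
        return np.abs(field) ** 2

    # One unit facet staring straight back at the radar:
    rcs = po_backscatter(np.array([[0.0, 0.0, 0.0]]),
                         np.array([[0.0, 0.0, 1.0]]),
                         np.array([1.0]),
                         np.array([0.0, 0.0, 1.0]),
                         wavelength=0.03)
    print(rcs)   # -> 1.0
    ```

    A backscattering diagram then comes from repeating this sum while rotating the model about the radar-target line, as the abstract describes.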

  15. 3D Printing of Protein Models in an Undergraduate Laboratory: Leucine Zippers

    ERIC Educational Resources Information Center

    Meyer, Scott C.

    2015-01-01

    An upper-division undergraduate laboratory experiment is described that explores the structure/function relationship of protein domains, namely leucine zippers, through a molecular graphics computer program and physical models fabricated by 3D printing. By generating solvent accessible surfaces and color-coding hydrophobic, basic, and acidic amino…

  16. Dimensional accuracy of 3D printed vertebra

    NASA Astrophysics Data System (ADS)

    Ogden, Kent; Ordway, Nathaniel; Diallo, Dalanda; Tillapaugh-Fay, Gwen; Aslan, Can

    2014-03-01

    3D printer applications in the biomedical sciences and medical imaging are expanding and will have an increasing impact on the practice of medicine. Orthopedic and reconstructive surgery has been an obvious area for development of 3D printer applications as the segmentation of bony anatomy to generate printable models is relatively straightforward. There are important issues that should be addressed when using 3D printed models for applications that may affect patient care; in particular the dimensional accuracy of the printed parts needs to be high to avoid poor decisions being made prior to surgery or therapeutic procedures. In this work, the dimensional accuracy of 3D printed vertebral bodies derived from CT data for a cadaver spine is compared with direct measurements on the ex-vivo vertebra and with measurements made on the 3D rendered vertebra using commercial 3D image processing software. The vertebra was printed on a consumer grade 3D printer using an additive print process using PLA (polylactic acid) filament. Measurements were made for 15 different anatomic features of the vertebral body, including vertebral body height, endplate width and depth, pedicle height and width, and spinal canal width and depth, among others. It is shown that for the segmentation and printing process used, the results of measurements made on the 3D printed vertebral body are substantially the same as those produced by direct measurement on the vertebra and measurements made on the 3D rendered vertebra.
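    The comparison performed in the study, measurements on the printed part against reference measurements feature by feature, reduces to a signed error table plus a mean absolute error. A sketch with invented values (the paper's 15 anatomic features and actual numbers are not reproduced here):

    ```python
    from statistics import mean

    def accuracy_report(printed_mm, reference_mm):
        """Signed per-feature errors of printed-part measurements against
        reference (e.g. caliper) measurements, plus the mean absolute error."""
        errors = {k: printed_mm[k] - reference_mm[k] for k in reference_mm}
        mae = mean(abs(e) for e in errors.values())
        return errors, mae

    # Hypothetical feature measurements in millimetres.
    printed = {"body_height": 25.2, "canal_width": 14.9}
    reference = {"body_height": 25.0, "canal_width": 15.0}
    errors, mae = accuracy_report(printed, reference)
    print(round(mae, 3))   # -> 0.15
    ```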

  17. 3D Printing In Zero-G ISS Technology Demonstration

    NASA Technical Reports Server (NTRS)

    Werkheiser, Niki; Cooper, Kenneth; Edmunson, Jennifer; Dunn, Jason; Snyder, Michael

    2014-01-01

    The National Aeronautics and Space Administration (NASA) has a long term strategy to fabricate components and equipment on-demand for manned missions to the Moon, Mars, and beyond. To support this strategy, NASA and Made in Space, Inc. are developing the 3D Printing In Zero-G payload as a Technology Demonstration for the International Space Station (ISS). The 3D Printing In Zero-G experiment ('3D Print') will be the first machine to perform 3D printing in space. The greater the distance from Earth and the longer the mission duration, the more difficult resupply becomes; this requires a change from the current spares, maintenance, repair, and hardware design model that has been used on the International Space Station (ISS) up until now. Given the extension of the ISS Program, which will inevitably result in replacement parts being required, the ISS is an ideal platform to begin changing the current model for resupply and repair to one that is more suitable for all exploration missions. 3D Printing, more formally known as Additive Manufacturing, is the method of building parts/objects/tools layer-by-layer. The 3D Print experiment will use extrusion-based additive manufacturing, which involves building an object out of plastic deposited by a wire-feed via an extruder head. Parts can be printed from data files loaded on the device at launch, as well as additional files uplinked to the device while on-orbit. The plastic extrusion additive manufacturing process is a low-energy, low-mass solution to many common needs on board the ISS. The 3D Print payload will serve as the ideal first step to proving that process in space. It is unreasonable to expect NASA to launch large blocks of material from which parts or tools can be traditionally machined, and even more unreasonable to fly up multiple drill bits that would be required to machine parts from aerospace-grade materials such as titanium 6-4 alloy and Inconel. 
The technology to produce parts on demand, in space, offers the first step toward that new model.

  18. CityGML - Interoperable semantic 3D city models

    NASA Astrophysics Data System (ADS)

    Gröger, Gerhard; Plümer, Lutz

    2012-07-01

    CityGML is the international standard of the Open Geospatial Consortium (OGC) for the representation and exchange of 3D city models. It defines the three-dimensional geometry, topology, semantics and appearance of the most relevant topographic objects in urban or regional contexts. These definitions are provided in different, well-defined Levels-of-Detail (multiresolution model). The focus of CityGML is on the semantical aspects of 3D city models, its structures, taxonomies and aggregations, allowing users to employ virtual 3D city models for advanced analysis and visualization tasks in a variety of application domains such as urban planning, indoor/outdoor pedestrian navigation, environmental simulations, cultural heritage, or facility management. This is in contrast to purely geometrical/graphical models such as KML, VRML, or X3D, which do not provide sufficient semantics. CityGML is based on the Geography Markup Language (GML), which provides a standardized geometry model. Due to this model and its well-defined semantics and structures, CityGML facilitates interoperable data exchange in the context of geo web services and spatial data infrastructures. Since its standardization in 2008, CityGML has become used on a worldwide scale: tools from notable companies in the geospatial field provide CityGML interfaces. Many applications and projects use this standard. CityGML is also having a strong impact on science: numerous approaches use CityGML, particularly its semantics, for disaster management, emergency responses, or energy-related applications as well as for visualizations, or they contribute to CityGML, improving its consistency and validity, or use CityGML, particularly its different Levels-of-Detail, as a source or target for generalizations. This paper gives an overview of CityGML, its underlying concepts, its Levels-of-Detail, how to extend it, its applications, its likely future development, and the role it plays in scientific research. 

  19. Interaction Design and Usability of Learning Spaces in 3D Multi-user Virtual Worlds

    NASA Astrophysics Data System (ADS)

    Minocha, Shailey; Reeves, Ahmad John

    Three-dimensional virtual worlds are multimedia, simulated environments, often managed over the Web, which users can 'inhabit' and interact with via their own graphical self-representations known as 'avatars'. 3D virtual worlds are being used in many applications: education/training, gaming, social networking, marketing and commerce. Second Life is the most widely used 3D virtual world in education. However, problems associated with usability, navigation and wayfinding in 3D virtual worlds may impact student learning and engagement. Based on empirical investigations of learning spaces in Second Life, this paper presents design guidelines to improve the usability and ease of navigation in 3D spaces. Methods of data collection include semi-structured interviews with Second Life students, educators and designers. The findings reveal that design principles from the fields of urban planning, Human-Computer Interaction, Web usability, geography and psychology can influence the design of spaces in 3D multi-user virtual environments.

  20. 3D Mesh Segmentation Based on Markov Random Fields and Graph Cuts

    NASA Astrophysics Data System (ADS)

    Shi, Zhenfeng; Le, Dan; Yu, Liyang; Niu, Xiamu

    3D mesh segmentation has become an important research field in computer graphics during the past few decades. Many geometry-based and semantics-oriented approaches for 3D mesh segmentation have been presented, but only a few algorithms based on Markov Random Fields (MRF) have been presented for 3D object segmentation. In this letter, we formulate mesh segmentation as a labeling problem. Inspired by the capability of MRFs to combine the geometric and topological information of a 3D mesh, we propose a novel 3D mesh segmentation model based on MRFs and Graph Cuts. Experimental results show that our MRF-based scheme achieves an effective segmentation.
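    The labeling formulation can be made concrete through the energy such a model assigns to a candidate segmentation: per-face data costs plus a smoothness penalty across mesh edges (a generic MRF/Potts sketch, not the authors' exact energy terms):

    ```python
    def labeling_energy(data_cost, edges, labels, smoothness=1.0):
        """MRF energy of a face labeling: per-face data terms plus a Potts
        penalty for every mesh edge whose two adjacent faces disagree.
        Graph cuts minimizes energies of exactly this form."""
        unary = sum(data_cost[f][labels[f]] for f in range(len(labels)))
        pairwise = sum(smoothness for f, g in edges if labels[f] != labels[g])
        return unary + pairwise

    # Three faces in a strip; face 2 prefers label 1, the others prefer label 0.
    data_cost = [[0.0, 2.0], [0.0, 2.0], [2.0, 0.0]]
    edges = [(0, 1), (1, 2)]
    print(labeling_energy(data_cost, edges, [0, 0, 1]))   # -> 1.0
    ```

    Here the labeling [0, 0, 1] pays one smoothness penalty but no data cost, beating the uniform labeling [0, 0, 0], which pays 2.0 in data cost; the geometry of the mesh enters through `data_cost` and `edges`.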

  1. Life Sciences Division Spaceflight Hardware

    NASA Technical Reports Server (NTRS)

    Yost, B.

    1999-01-01

    The Ames Research Center (ARC) is responsible for the development, integration, and operation of non-human life sciences payloads in support of NASA's Gravitational Biology and Ecology (GB&E) program. To help stimulate discussion and interest in the development and application of novel technologies for incorporation within non-human life sciences experiment systems, three hardware system models will be displayed with associated graphics/text explanations. First, an Animal Enclosure Model (AEM) will be shown to communicate the nature and types of constraints physiological researchers must deal with during manned space flight experiments using rodent specimens. Second, a model of the Modular Cultivation System (MCS) under development by ESA will be presented to highlight technologies that may benefit cell-based research, including advanced imaging technologies. Finally, subsystems of the Cell Culture Unit (CCU) in development by ARC will also be shown. A discussion will be provided on candidate technology requirements in the areas of specimen environmental control, biotelemetry, telescience and telerobotics, and in situ analytical techniques and imaging. In addition, an overview of the Center for Gravitational Biology Research facilities will be provided.

  2. Using 3D visualization and seismic attributes to improve structural and stratigraphic resolution of reservoirs

    SciTech Connect

    Kerr, J.; Jones, G.L.

    1996-01-01

    Recent advances in hardware and software have given the interpreter and engineer new ways to view 3D seismic data and well bore information. Recent papers have also highlighted the use of various statistics and seismic attributes. By combining new 3D rendering technologies with recent trends in seismic analysis, the interpreter can improve the structural and stratigraphic resolution of hydrocarbon reservoirs. This paper gives several examples using 3D visualization to better define both the structural and stratigraphic aspects of several different structural types from around the world. Statistics, 3D visualization techniques and rapid animation are used to show complex faulting and detailed channel systems. These systems would be difficult to map using either 2D or 3D data with conventional interpretation techniques.

  4. High-throughput imaging: Focusing in on drug discovery in 3D.

    PubMed

    Li, Linfeng; Zhou, Qiong; Voss, Ty C; Quick, Kevin L; LaBarbera, Daniel V

    2016-03-01

    3D organotypic culture models such as organoids and multicellular tumor spheroids (MCTS) are becoming more widely used for drug discovery and toxicology screening. As a result, 3D culture technologies adapted for high-throughput screening formats are prevalent. While a multitude of assays have been reported and validated for high-throughput imaging (HTI) and high-content screening (HCS) for novel drug discovery and toxicology, limited HTI/HCS with large compound libraries have been reported. Nonetheless, 3D HTI instrumentation technology is advancing and this technology is now on the verge of allowing for 3D HCS of thousands of samples. This review focuses on the state-of-the-art high-throughput imaging systems, including hardware and software, and recent literature examples of 3D organotypic culture models employing this technology for drug discovery and toxicology screening.

  5. Rotation invariance principles in 2D/3D registration

    NASA Astrophysics Data System (ADS)

    Birkfellner, Wolfgang; Wirth, Joachim; Burgstaller, Wolfgang; Baumann, Bernard; Staedele, Harald; Hammer, Beat; Gellrich, Niels C.; Jacob, Augustinus L.; Regazzoni, Pietro; Messmer, Peter

    2003-05-01

    2D/3D patient-to-computed tomography (CT) registration is a method to determine a transformation that maps two coordinate systems by comparing a projection image rendered from CT to a real projection image. Applications include exact patient positioning in radiation therapy, calibration of surgical robots, and pose estimation in computer-aided surgery. One of the problems associated with 2D/3D registration is the fact that finding a registration involves solving a minimization problem in six degrees of freedom of motion. This results in considerable time expense, since for each iteration step at least one volume rendering has to be computed. We show that by choosing an appropriate world coordinate system and by applying a 2D/2D registration method in each iteration step, the number of iterations can be greatly reduced from n^6 to n^5. Here, n is the number of discrete variations around a given coordinate. Depending on the configuration of the optimization algorithm, this reduces the total number of iterations necessary to at least 1/3 of its original value. The method was implemented and extensively tested on simulated x-ray images of a pelvis. We conclude that this hardware-independent optimization of 2D/3D registration is a step towards increasing the acceptance of this promising method for a wide range of clinical applications.
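The iteration-count argument can be checked with simple arithmetic; the value of n below is our own example, not from the paper:

```python
# Back-of-envelope check of the search-space reduction: an exhaustive
# variation over 6 degrees of freedom vs. 5 after one in-plane DOF is
# handled by a 2D/2D registration step.
def iterations(n, dof):
    """Number of discrete pose hypotheses for n variations per coordinate."""
    return n ** dof

n = 3                        # hypothetical: 3 discrete variations per coordinate
full = iterations(n, 6)      # naive 6-DOF search
reduced = iterations(n, 5)   # one DOF absorbed by 2D/2D registration
print(full, reduced)         # → 729 243
```

Removing one degree of freedom divides the count by n, which for n = 3 gives exactly the 1/3 factor quoted in the abstract.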

  6. Super stereoscopy technique for comfortable and realistic 3D displays.

    PubMed

    Akşit, Kaan; Niaki, Amir Hossein Ghanbari; Ulusoy, Erdem; Urey, Hakan

    2014-12-15

    Two well-known problems of stereoscopic displays are the accommodation-convergence conflict and the lack of natural blur for defocused objects. We present a new technique that we name Super Stereoscopy (SS3D) to provide a convenient solution to these problems. Regular stereoscopic glasses are replaced by SS3D glasses, which deliver at least two parallax images per eye through pinholes equipped with light-selective filters. The pinholes generate blur-free retinal images so as to enable correct accommodation, while the delivery of multiple parallax images per eye creates an approximate blur effect for defocused objects. Experiments performed with cameras and human viewers indicate that the technique works as desired. When two pinholes equipped with color filters are used per eye, the technique can be applied on a regular stereoscopic display simply by uploading new content, without requiring any change in display hardware, driver, or frame rate. Apart from some tolerable loss in display brightness and a decrease in the natural spatial resolution limit of the eye caused by the pinholes, the technique is quite promising for comfortable and realistic 3D vision, especially enabling the display of close objects that cannot be displayed and comfortably viewed on regular 3D TV and cinema screens.

  7. Neuromorphic Event-Based 3D Pose Estimation

    PubMed Central

    Reverter Valeiras, David; Orchard, Garrick; Ieng, Sio-Hoi; Benosman, Ryad B.

    2016-01-01

    Pose estimation is a fundamental step in many artificial vision tasks. It consists of estimating the 3D pose of an object with respect to a camera from the object's 2D projection. Current state-of-the-art implementations operate on images. These implementations are computationally expensive, especially for real-time applications. Scenes with fast dynamics exceeding 30–60 Hz can rarely be processed in real time using conventional hardware. This paper presents a new method for event-based 3D object pose estimation, making full use of the high temporal resolution (1 μs) of asynchronous visual events output from a single neuromorphic camera. Given an initial estimate of the pose, each incoming event is used to update the pose by combining both 3D and 2D criteria. We show that the asynchronous high temporal resolution of the neuromorphic camera allows us to solve the problem in an incremental manner, achieving real-time performance at an update rate of several hundred kHz on a conventional laptop. We show that the high temporal resolution of neuromorphic cameras is a key feature for performing accurate pose estimation. Experiments are provided showing the performance of the algorithm on real data, including fast moving objects, occlusions, and cases where the neuromorphic camera and the object are both in motion. PMID:26834547

  8. Panoramic, large-screen, 3-D flight display system design

    NASA Technical Reports Server (NTRS)

    Franklin, Henry; Larson, Brent; Johnson, Michael; Droessler, Justin; Reinhart, William F.

    1995-01-01

    The report documents and summarizes the results of the required evaluations specified in the SOW and the design specifications for the selected display system hardware. Also included are the proposed development plan and schedule as well as the estimated rough order of magnitude (ROM) cost to design, fabricate, and demonstrate a flyable prototype research flight display system. The thrust of the effort was development of a complete understanding of the user/system requirements for a panoramic, collimated, 3-D flyable avionic display system and the translation of the requirements into an acceptable system design for fabrication and demonstration of a prototype display in the early 1997 time frame. Eleven display system design concepts were presented to NASA LaRC during the program, one of which was down-selected to a preferred display system concept. A set of preliminary display requirements was formulated. The state of the art in image source technology, 3-D methods, collimation methods, and interaction methods for a panoramic, 3-D flight display system were reviewed in depth and evaluated. Display technology improvements and risk reductions associated with maturity of the technologies for the preferred display system design concept were identified.

  9. Employing WebGL to develop interactive stereoscopic 3D content for use in biomedical visualization

    NASA Astrophysics Data System (ADS)

    Johnston, Semay; Renambot, Luc; Sauter, Daniel

    2013-03-01

    Web Graphics Library (WebGL), the forthcoming web standard for rendering native 3D graphics in a browser, represents an important addition to the biomedical visualization toolset. It is projected to become a mainstream method of delivering 3D online content due to shrinking support for third-party plug-ins. Additionally, it provides a virtual reality (VR) experience to web users accommodated by the growing availability of stereoscopic displays (3D TV, desktop, and mobile). WebGL's value in biomedical visualization has been demonstrated by applications for interactive anatomical models, chemical and molecular visualization, and web-based volume rendering. However, a lack of instructional literature specific to the field prevents many from utilizing this technology. This project defines a WebGL design methodology for a target audience of biomedical artists with a basic understanding of web languages and 3D graphics. The methodology was informed by the development of an interactive web application depicting the anatomy and various pathologies of the human eye. The application supports several modes of stereoscopic displays for a better understanding of 3D anatomical structures.

  10. 3D ultrafast ultrasound imaging in vivo.

    PubMed

    Provost, Jean; Papadacci, Clement; Arango, Juan Esteban; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-10-07

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability.

  12. Removal of broken hardware.

    PubMed

    Hak, David J; McElvany, Matthew

    2008-02-01

    Despite advances in metallurgy, fatigue failure of hardware is common when a fracture fails to heal. Revision procedures can be difficult, usually requiring removal of intact or broken hardware. Several different methods may need to be attempted to successfully remove intact or broken hardware. Broken intramedullary nail cross-locking screws may be advanced out by impacting with a Steinmann pin. Broken open-section (Küntscher type) intramedullary nails may be removed using a hook. Closed-section cannulated intramedullary nails require additional techniques, such as the use of guidewires or commercially available extraction tools. Removal of broken solid nails requires use of a commercial ratchet grip extractor or a bone window to directly impact the broken segment. Screw extractors, trephines, and extraction bolts are useful for removing stripped or broken screws. Cold-welded screws and plates can complicate removal of locked implants and require the use of carbide drills or high-speed metal cutting tools. Hardware removal can be a time-consuming process, and no single technique is uniformly successful.

  13. Brandenburg 3D - a comprehensive 3D Subsurface Model, Conception of an Infrastructure Node and a Web Application

    NASA Astrophysics Data System (ADS)

    Kerschke, Dorit; Schilling, Maik; Simon, Andreas; Wächter, Joachim

    2014-05-01

    The Energiewende and the increasing scarcity of raw materials will lead to an intensified utilization of the subsurface in Germany. Within this context, geological 3D modeling is a fundamental approach for integrated decision and planning processes. Initiated by the development of the European Geospatial Infrastructure INSPIRE, the German State Geological Offices started digitizing their predominantly analog archive inventory. Until now, a comprehensive 3D subsurface model of Brandenburg did not exist. Therefore the project B3D strived to develop a new 3D model as well as a subsequent infrastructure node to integrate all geological and spatial data within the Geodaten-Infrastruktur Brandenburg (Geospatial Infrastructure, GDI-BB) and provide it to the public through an interactive 2D/3D web application. The functionality of the web application is based on a client-server architecture. Server-sided, all available spatial data is published through GeoServer. GeoServer is designed for interoperability and acts as the reference implementation of the Open Geospatial Consortium (OGC) Web Feature Service (WFS) standard that provides the interface that allows requests for geographical features. In addition, GeoServer implements, among others, the high performance certified compliant Web Map Service (WMS) that serves geo-referenced map images. For publishing 3D data, the OGC Web 3D Service (W3DS), a portrayal service for three-dimensional geo-data, is used. The W3DS displays elements representing the geometry, appearance, and behavior of geographic objects. On the client side, the web application is solely based on Free and Open Source Software and leans on the JavaScript API WebGL that allows the interactive rendering of 2D and 3D graphics by means of GPU accelerated usage of physics and image processing as part of the web page canvas without the use of plug-ins. WebGL is supported by most web browsers (e.g., Google Chrome, Mozilla Firefox, Safari, and Opera). The web
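As an illustration of the kind of OGC request the described services answer, here is a minimal sketch that assembles a standard WMS GetMap URL; the host and layer names are hypothetical placeholders, not taken from the project:

```python
# Sketch of a client-side OGC WMS 1.1.1 GetMap request, of the kind GeoServer
# serves for the infrastructure node described above. Endpoint and layer name
# are invented for illustration.
from urllib.parse import urlencode

def wms_getmap_url(base, layer, bbox, size=(800, 600), srs="EPSG:4326"):
    """Build a GetMap URL from the standard required WMS parameters."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "SRS": srs,
        "BBOX": ",".join(str(v) for v in bbox),  # minx,miny,maxx,maxy
        "WIDTH": size[0],
        "HEIGHT": size[1],
        "FORMAT": "image/png",
    }
    return base + "?" + urlencode(params)

# Hypothetical endpoint and layer; the bounding box roughly covers Brandenburg.
url = wms_getmap_url("https://example.org/geoserver/wms",
                     "b3d:subsurface_horizons",
                     (11.2, 51.3, 14.8, 53.6))
print(url)
```

The WFS and W3DS interfaces follow the same key-value request pattern, which is what makes the services interoperable across clients such as the WebGL-based web application.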

  14. Software Development: 3D Animations and Creating User Interfaces for Realistic Simulations

    NASA Technical Reports Server (NTRS)

    Gordillo, Orlando Enrique

    2015-01-01

    My fall 2015 semester was spent at the Lyndon B. Johnson Space Center working in the Integrated Graphics, Operations, and Analysis Laboratory (IGOAL). My first project was to create a video animation that could tell the story of OMICS. OMICS is a term being used in the field of biomedical science to describe the collective technologies that study biological systems, such as what makes up a cell and how it functions with other systems. In the IGOAL, I used a large 23-inch Wacom monitor to draw storyboards, graphics, and line-art animations. I used Blender as the 3D environment to sculpt, shape, cut, or modify the several scenes and models for the video. A challenge in creating this video was to take a term used in biomedical science and describe it in such a way that an 8th-grade student can understand it. I used a line-art style because it would visually set the tone for what we thought was an educational style. In order to get a handle on the perspective and overall feel of the animation without overloading my workspace, I split the 2-minute animation into several scenes. I used Blender's Python scripting capabilities, which allowed for the addition of plugins to add or modify tools. The scripts can also directly interact with the objects to create naturalistic patterns or movements. After collecting the rendered scenes, I used Blender's built-in video-editing workspace to output the animation. My second project was to write software that emulates a physical system's interface. The interface was to simulate a boat, ROV, and winch system. Simulations are a time- and cost-effective way to test complicated data and provide training for operators without having to use expensive hardware. We created the virtual controls with 3-D Blender models and 2-D graphics, and then added functionality in C# using the Unity game engine. The Unity engine provides several essential behaviors of a simulator, such as the start and update functions. A framework for Unity, which was developed in

  15. Holovideo: Real-time 3D range video encoding and decoding on GPU

    NASA Astrophysics Data System (ADS)

    Karpinsky, Nikolaus; Zhang, Song

    2012-02-01

    We present a 3D video-encoding technique called Holovideo that is capable of encoding high-resolution 3D videos into standard 2D videos, and then decoding the 2D videos back into 3D rapidly without significant loss of quality. Due to the nature of the algorithm, 2D video compression such as JPEG encoding with QuickTime Run Length Encoding (QTRLE) can be applied with little quality loss, resulting in an effective way to store 3D video at very small file sizes. We found that under a compression ratio of 134:1, Holovideo to OBJ file format, the 3D geometry quality drops at a negligible level. Several sets of 3D videos were captured using a structured light scanner, compressed using the Holovideo codec, and then uncompressed and displayed to demonstrate the effectiveness of the codec. With the use of OpenGL Shading Language (GLSL) shaders, the 3D video codec can encode and decode in real time. We demonstrated that for a video size of 512×512, the decoding speed is 28 frames per second (FPS) with a laptop computer using an embedded NVIDIA GeForce 9400M graphics processing unit (GPU). Encoding can be done with this same setup at 18 FPS, making this technology suitable for applications such as interactive 3D video games and 3D video conferencing.
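As a rough back-of-envelope illustration of what the reported 134:1 ratio means in practice (only the ratio comes from the abstract; the input size below is a toy number of ours):

```python
# Toy arithmetic for the reported Holovideo-to-OBJ compression ratio.
def compressed_size(obj_bytes, ratio=134):
    """Approximate compressed size for a given uncompressed OBJ size."""
    return obj_bytes / ratio

# A hypothetical 50 MB OBJ sequence shrinks to well under 1 MB:
print(round(compressed_size(50e6) / 1e6, 2), "MB")  # → 0.37 MB
```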

  16. 3D printing meets computational astrophysics: deciphering the structure of η Carinae's inner colliding winds

    NASA Astrophysics Data System (ADS)

    Madura, T. I.; Clementel, N.; Gull, T. R.; Kruip, C. J. H.; Paardekooper, J.-P.

    2015-06-01

    We present the first 3D prints of output from a supercomputer simulation of a complex astrophysical system, the colliding stellar winds in the massive (≳120 M⊙), highly eccentric (e ˜ 0.9) binary star system η Carinae. We demonstrate the methodology used to incorporate 3D interactive figures into a PDF (Portable Document Format) journal publication and the benefits of using 3D visualization and 3D printing as tools to analyse data from multidimensional numerical simulations. Using a consumer-grade 3D printer (MakerBot Replicator 2X), we successfully printed 3D smoothed particle hydrodynamics simulations of η Carinae's inner (r ˜ 110 au) wind-wind collision interface at multiple orbital phases. The 3D prints and visualizations reveal important, previously unknown `finger-like' structures at orbital phases shortly after periastron (φ ˜ 1.045) that protrude radially outwards from the spiral wind-wind collision region. We speculate that these fingers are related to instabilities (e.g. thin-shell, Rayleigh-Taylor) that arise at the interface between the radiatively cooled layer of dense post-shock primary-star wind and the fast (3000 km s-1), adiabatic post-shock companion-star wind. The success of our work and easy identification of previously unrecognized physical features highlight the important role 3D printing and interactive graphics can play in the visualization and understanding of complex 3D time-dependent numerical simulations of astrophysical phenomena.

  17. Nerves of Steel: a Low-Cost Method for 3D Printing the Cranial Nerves.

    PubMed

    Javan, Ramin; Davidson, Duncan; Javan, Afshin

    2017-02-21

    Steady-state free precession (SSFP) magnetic resonance imaging (MRI) can demonstrate details down to the cranial nerve (CN) level. High-resolution three-dimensional (3D) visualization can now quickly be performed at the workstation. However, we are still limited by visualization on flat screens. The emerging technologies in rapid prototyping or 3D printing overcome this limitation. It comprises a variety of automated manufacturing techniques, which use virtual 3D data sets to fabricate solid forms in a layer-by-layer technique. The complex neuroanatomy of the CNs may be better understood and depicted by the use of highly customizable advanced 3D printed models. In this technical note, after manually perfecting the segmentation of each CN and brain stem on each SSFP-MRI image, initial 3D reconstruction was performed. The bony skull base was also reconstructed from computed tomography (CT) data. Autodesk 3D Studio Max, available through freeware student/educator license, was used to three-dimensionally trace the 3D reconstructed CNs in order to create smooth graphically designed CNs and to assure proper fitting of the CNs into their respective neural foramina and fissures. This model was then 3D printed with polyamide through a commercial online service. Two different methods are discussed for the key segmentation and 3D reconstruction steps, by either using professional commercial software, i.e., Materialise Mimics, or utilizing a combination of the widely available software Adobe Photoshop, as well as a freeware software, OsiriX Lite.

  18. Immersive 3D Geovisualization in Higher Education

    ERIC Educational Resources Information Center

    Philips, Andrea; Walz, Ariane; Bergner, Andreas; Graeff, Thomas; Heistermann, Maik; Kienzler, Sarah; Korup, Oliver; Lipp, Torsten; Schwanghart, Wolfgang; Zeilinger, Gerold

    2015-01-01

    In this study, we investigate how immersive 3D geovisualization can be used in higher education. Based on MacEachren and Kraak's geovisualization cube, we examine the usage of immersive 3D geovisualization and its usefulness in a research-based learning module on flood risk, called GEOSimulator. Results of a survey among participating students…

  19. 3D Printing. What's the Harm?

    ERIC Educational Resources Information Center

    Love, Tyler S.; Roy, Ken

    2016-01-01

    Health concerns from 3D printing were first documented by Stephens, Azimi, Orch, and Ramos (2013), who found that commercially available 3D printers were producing hazardous levels of ultrafine particles (UFPs) and volatile organic compounds (VOCs) when plastic materials were melted through the extruder. UFPs are particles less than 100 nanometers…

  20. Topology dictionary for 3D video understanding.

    PubMed

    Tung, Tony; Matsuyama, Takashi

    2012-08-01

    This paper presents a novel approach that achieves 3D video understanding. 3D video consists of a stream of 3D models of subjects in motion. The acquisition of long sequences requires large storage space (2 GB for 1 min). Moreover, it is tedious to browse data sets and extract meaningful information. We propose the topology dictionary to encode and describe 3D video content. The model consists of a topology-based shape descriptor dictionary which can be generated from either extracted patterns or training sequences. The model relies on 1) topology description and classification using Reeb graphs, and 2) a Markov motion graph to represent topology change states. We show that the use of Reeb graphs as the high-level topology descriptor is relevant. It allows the dictionary to automatically model complex sequences, whereas other strategies would require prior knowledge on the shape and topology of the captured subjects. Our approach serves to encode 3D video sequences, and can be applied for content-based description and summarization of 3D video sequences. Furthermore, topology class labeling during a learning process enables the system to perform content-based event recognition. Experiments were carried out on various 3D videos. We showcase an application for 3D video progressive summarization using the topology dictionary.
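The Markov motion graph idea can be sketched, under our own simplifying assumptions, as transition counting over a stream of per-frame topology-class labels (the labels and values below are toy data, not from the paper):

```python
# Sketch of a Markov motion graph over topology-class labels: estimate
# P(next_class | class) from the observed sequence of per-frame classes.
from collections import Counter

def motion_graph(labels):
    """Return transition probabilities keyed by (class, next_class)."""
    pairs = Counter(zip(labels, labels[1:]))
    totals = Counter(labels[:-1])
    return {(a, b): c / totals[a] for (a, b), c in pairs.items()}

# Toy 3D-video stream: 'S' (standing) and 'W' (walking) topology classes.
frames = ["S", "S", "W", "W", "W", "S", "S", "W"]
P = motion_graph(frames)
print(P[("S", "W")])  # → 0.5
```

In the paper the states are Reeb-graph-based topology classes from the dictionary; the same counting gives the topology-change statistics used for summarization and event recognition.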

  1. 3D elastic control for mobile devices.

    PubMed

    Hachet, Martin; Pouderoux, Joachim; Guitton, Pascal

    2008-01-01

    To increase the input space of mobile devices, the authors developed a proof-of-concept 3D elastic controller that easily adapts to mobile devices. This embedded device improves the completion of high-level interaction tasks such as visualization of large documents and navigation in 3D environments. It also opens new directions for tomorrow's mobile applications.

  2. 3D Printing of Molecular Models

    ERIC Educational Resources Information Center

    Gardner, Adam; Olson, Arthur

    2016-01-01

    Physical molecular models have played a valuable role in our understanding of the invisible nano-scale world. We discuss 3D printing and its use in producing models of the molecules of life. Complex biomolecular models, produced from 3D printed parts, can demonstrate characteristics of molecular structure and function, such as viral self-assembly,…

  3. 3D Printed Block Copolymer Nanostructures

    ERIC Educational Resources Information Center

    Scalfani, Vincent F.; Turner, C. Heath; Rupar, Paul A.; Jenkins, Alexander H.; Bara, Jason E.

    2015-01-01

    The emergence of 3D printing has dramatically advanced the availability of tangible molecular and extended solid models. Interestingly, there are few nanostructure models available both commercially and through other do-it-yourself approaches such as 3D printing. This is unfortunate given the importance of nanotechnology in science today. In this…

  4. Infrastructure for 3D Imaging Test Bed

    DTIC Science & Technology

    2007-05-11

    analysis. (c.) Real-time detection & analysis of human gait: using a video camera we capture the walking human silhouette for pattern modeling and gait analysis. Fig. 5 shows the scanning result that is fed into a Geomagic software tool for 3D meshing.

  5. Wow! 3D Content Awakens the Classroom

    ERIC Educational Resources Information Center

    Gordon, Dan

    2010-01-01

    From her first encounter with stereoscopic 3D technology designed for classroom instruction, Megan Timme, principal at Hamilton Park Pacesetter Magnet School in Dallas, sensed it could be transformative. Last spring, when she began pilot-testing 3D content in her third-, fourth- and fifth-grade classrooms, Timme wasn't disappointed. Students…

  6. Stereo 3-D Vision in Teaching Physics

    ERIC Educational Resources Information Center

    Zabunov, Svetoslav

    2012-01-01

    Stereo 3-D vision is a technology used to present images on a flat surface (screen, paper, etc.) and at the same time to create the notion of three-dimensional spatial perception of the viewed scene. A great number of physical processes are much better understood when viewed in stereo 3-D vision compared to standard flat 2-D presentation. The…

  7. Pathways for Learning from 3D Technology

    ERIC Educational Resources Information Center

    Carrier, L. Mark; Rab, Saira S.; Rosen, Larry D.; Vasquez, Ludivina; Cheever, Nancy A.

    2012-01-01

    The purpose of this study was to find out if 3D stereoscopic presentation of information in a movie format changes a viewer's experience of the movie content. Four possible pathways from 3D presentation to memory and learning were considered: a direct connection based on cognitive neuroscience research; a connection through "immersion"…

  8. 3D, or Not to Be?

    ERIC Educational Resources Information Center

    Norbury, Keith

    2012-01-01

    It may be too soon for students to be showing up for class with popcorn and gummy bears, but technology similar to that behind the 3D blockbuster movie "Avatar" is slowly finding its way into college classrooms. 3D classroom projectors are taking students on fantastic voyages inside the human body, to the ruins of ancient Greece--even to faraway…

  9. Static & Dynamic Response of 3D Solids

    SciTech Connect

    Lin, Jerry

    1996-07-15

    NIKE3D is a large-deformation 3D finite element code used to obtain the displacements and stresses resulting from multi-body static and dynamic structural thermo-mechanical problems with sliding interfaces. Many nonlinear and temperature-dependent constitutive models are available.

  10. Massive parallelization of a 3D finite difference electromagnetic forward solution using domain decomposition methods on multiple CUDA enabled GPUs

    NASA Astrophysics Data System (ADS)

    Schultz, A.

    2010-12-01

    3D forward solvers lie at the core of inverse formulations used to image the variation of electrical conductivity within the Earth's interior. This property is associated with variations in temperature, composition, phase, presence of volatiles, and in specific settings, the presence of groundwater, geothermal resources, oil/gas or minerals. The high cost of 3D solutions has been a stumbling block to wider adoption of 3D methods. Parallel algorithms for modeling frequency domain 3D EM problems have not achieved wide scale adoption, with emphasis on fairly coarse grained parallelism using MPI and similar approaches. The communications bandwidth as well as the latency required to send and receive network communication packets is a limiting factor in implementing fine grained parallel strategies, inhibiting wide adoption of these algorithms. Leading Graphics Processor Unit (GPU) companies now produce GPUs with hundreds of GPU processor cores per die. The footprint, in silicon, of the GPU's restricted instruction set is much smaller than the general purpose instruction set required of a CPU. Consequently, the density of processor cores on a GPU can be much greater than on a CPU. GPUs also have local memory, registers and high speed communication with host CPUs, usually through PCIe type interconnects. The extremely low cost and high computational power of GPUs provides the EM geophysics community with an opportunity to achieve fine grained (i.e. massive) parallelization of codes on low cost hardware. The current generation of GPUs (e.g. NVidia Fermi) provides 3 billion transistors per chip die, with nearly 500 processor cores and up to 6 GB of fast (DDR5) GPU memory. This latest generation of GPU supports fast hardware double precision (64 bit) floating point operations of the type required for frequency domain EM forward solutions. Each Fermi GPU board can sustain nearly 1 TFLOP in double precision, and multiple boards can be installed in the host computer system. 
We…
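    The record above describes fine-grained, multi-GPU parallelization via domain decomposition but gives no implementation details. As a rough illustration of the decomposition pattern only, the sketch below splits a 3D finite-difference stencil along one axis into subdomains with one-plane halos and reassembles the result — the same partition-plus-halo-exchange structure a CUDA multi-GPU solver would use. This is a NumPy stand-in, not code from the paper; the function names and grid sizes are assumptions.

```python
import numpy as np

def laplacian(u):
    # 7-point finite-difference Laplacian on the interior of a 3D grid.
    return (u[2:, 1:-1, 1:-1] + u[:-2, 1:-1, 1:-1] +
            u[1:-1, 2:, 1:-1] + u[1:-1, :-2, 1:-1] +
            u[1:-1, 1:-1, 2:] + u[1:-1, 1:-1, :-2] -
            6.0 * u[1:-1, 1:-1, 1:-1])

def decomposed_laplacian(u, nparts):
    # Split the grid along axis 0 into `nparts` subdomains (one per
    # hypothetical device), extend each with a one-plane halo, apply the
    # stencil locally, and concatenate -- mimicking per-GPU work plus
    # halo exchange between neighboring subdomains.
    bounds = np.linspace(0, u.shape[0], nparts + 1).astype(int)
    pieces = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        halo_lo, halo_hi = max(lo - 1, 0), min(hi + 1, u.shape[0])
        pieces.append(laplacian(u[halo_lo:halo_hi]))
    return np.concatenate(pieces, axis=0)
```

    The decomposed result matches the monolithic stencil exactly; only the halo planes must be communicated each iteration, which is what makes the scheme bandwidth- rather than latency-bound.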

  11. Intracellular nanomanipulation by a photonic-force microscope with real-time acquisition of a 3D stiffness matrix

    NASA Astrophysics Data System (ADS)

    Bertseva, E.; Singh, A. S. G.; Lekki, J.; Thévenaz, P.; Lekka, M.; Jeney, S.; Gremaud, G.; Puttini, S.; Nowak, W.; Dietler, G.; Forró, L.; Unser, M.; Kulik, A. J.

    2009-07-01

    A traditional photonic-force microscope (PFM) results in huge sets of data, which require tedious numerical analysis. In this paper, we propose instead an analog signal processor to attain real-time capabilities while retaining the richness of the traditional PFM data. Our system is devoted to intracellular measurements and is fully interactive through the use of a haptic joystick. Using our specialized analog hardware along with a dedicated algorithm, we can extract the full 3D stiffness matrix of the optical trap in real time, including the off-diagonal cross-terms. Our system is also capable of simultaneously recording data for subsequent offline analysis. This allows us to check that a good correlation exists between the classical analysis of stiffness and our real-time measurements. We monitor the PFM beads using an optical microscope. The force-feedback mechanism of the haptic joystick helps us interactively guide the bead inside living cells and collect information from its (possibly anisotropic) environment. The instantaneous stiffness measurements are also displayed in real time on a graphical user interface. The whole system has been built and is operational; here we present early results that confirm the consistency of the real-time measurements with offline computations.
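    The record does not give the underlying formulas. A common offline route to the 3D stiffness matrix of an optical trap — plausibly the "classical analysis" the authors cross-check against — is the equipartition relation K = kT · C⁻¹, where C is the covariance matrix of the bead's position fluctuations. The sketch below illustrates that relation on synthetic data; the function name, stiffness values, and sample count are hypothetical.

```python
import numpy as np

kB_T = 4.11e-21  # thermal energy at ~298 K, in joules (assumed)

def stiffness_from_positions(positions, kT=kB_T):
    """Equipartition estimate of the 3x3 trap stiffness matrix:
    K = kT * C^{-1}, with C the covariance of bead positions (in m).
    Off-diagonal terms capture anisotropic coupling of the trap."""
    C = np.cov(positions, rowvar=False)
    return kT * np.linalg.inv(C)

# Synthetic check: sample bead positions from the Boltzmann distribution
# of a harmonic trap with a known, hypothetical anisotropic stiffness.
K_true = np.diag([1e-6, 2e-6, 0.5e-6])  # N/m (illustrative values)
rng = np.random.default_rng(0)
pos = rng.multivariate_normal(
    np.zeros(3), kB_T * np.linalg.inv(K_true), size=200_000)
K_est = stiffness_from_positions(pos)  # recovers K_true to within noise
```

    The real-time analog system in the paper extracts the same matrix continuously from the detector signal rather than from a recorded batch of positions.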

  12. MOM3D/EM-ANIMATE - MOM3D WITH ANIMATION CODE

    NASA Technical Reports Server (NTRS)

    Shaeffer, J. F.

    1994-01-01

    MOM3D (LAR-15074) is a FORTRAN method-of-moments electromagnetic analysis algorithm for open or closed 3-D perfectly conducting or resistive surfaces. Radar cross section with plane wave illumination is the prime analysis emphasis; however, provision is also included for local port excitation for computing antenna gain patterns and input impedances. The Electric Field Integral Equation form of Maxwell's equations is solved using local triangle couple basis and testing functions with a resultant system impedance matrix. The analysis emphasis is not only for routine RCS pattern predictions, but also for phenomenological diagnostics: bistatic imaging, currents, and near scattered/total electric fields. The images, currents, and near fields are output in a form suitable for animation. MOM3D computes the full backscatter and bistatic radar cross section polarization scattering matrix (amplitude and phase), body currents and near scattered and total fields for plane wave illumination. MOM3D also incorporates a new bistatic k space imaging algorithm for computing down range and down/cross range diagnostic images using only one matrix inversion. MOM3D has been made memory- and CPU-time efficient by using symmetric matrices, symmetric geometry, and partitioned fixed and variable geometries suitable for design iteration studies. MOM3D may be run interactively or in batch mode on 486 IBM PCs and compatibles, UNIX workstations or larger computers. A 486 PC with 16 megabytes of memory has the potential to solve a 30 square wavelength (containing 3000 unknowns) symmetric configuration. Geometries are described using a triangular mesh input in the form of a list of spatial vertex points and a triangle join connection list. The EM-ANIMATE (LAR-15075) program is a specialized visualization program that displays and animates the near-field and surface-current solutions obtained from an electromagnetics program, in particular, that from MOM3D.
The EM-ANIMATE program is Windows-based and…

  13. BEAMS3D Neutral Beam Injection Model

    SciTech Connect

    Lazerson, Samuel

    2014-04-14

    With the advent of applied 3D fields in Tokamaks and modern high performance stellarators, a need has arisen to address non-axisymmetric effects on neutral beam heating and fueling. We report on the development of a fully 3D neutral beam injection (NBI) model, BEAMS3D, which addresses this need by coupling 3D equilibria to a guiding center code capable of modeling neutral and charged particle trajectories across the separatrix and into the plasma core. Ionization, neutralization, charge-exchange, viscous velocity reduction, and pitch angle scattering are modeled with the ADAS atomic physics database [1]. Benchmark calculations are presented to validate the collisionless particle orbits, neutral beam injection model, frictional drag, and pitch angle scattering effects. A calculation of neutral beam heating in the NCSX device is performed, highlighting the capability of the code to handle 3D magnetic fields.

  14. Fabrication of 3D Silicon Sensors

    SciTech Connect

    Kok, A.; Hansen, T.E.; Hansen, T.A.; Lietaer, N.; Summanwar, A.; Kenney, C.; Hasi, J.; Da Via, C.; Parker, S.I.

    2012-06-06

    Silicon sensors with a three-dimensional (3-D) architecture, in which the n and p electrodes penetrate through the entire substrate, have many advantages over planar silicon sensors including radiation hardness, fast time response, active edge and dual readout capabilities. The fabrication of 3D sensors is, however, rather complex. In recent years, there have been worldwide activities on 3D fabrication. SINTEF in collaboration with Stanford Nanofabrication Facility has successfully fabricated the original (single sided double column type) 3D detectors in two prototype runs and the third run is now on-going. This paper reports the status of this fabrication work and the resulting yield. The work of other groups such as the development of double sided 3D detectors is also briefly reported.

  15. 2D/3D switchable displays

    NASA Astrophysics Data System (ADS)

    Dekker, T.; de Zwart, S. T.; Willemsen, O. H.; Hiddink, M. G. H.; IJzerman, W. L.

    2006-02-01

    A prerequisite for a wide market acceptance of 3D displays is the ability to switch between 3D and full resolution 2D. In this paper we present a robust and cost effective concept for an auto-stereoscopic switchable 2D/3D display. The display is based on an LCD panel, equipped with switchable LC-filled lenticular lenses. We will discuss 3D image quality, with the focus on display uniformity. We show that slanting the lenticulars in combination with a good lens design can minimize non-uniformities in our 20" 2D/3D monitors. Furthermore, we introduce fractional viewing systems as a very robust concept to further improve uniformity in cases where slanting the lenticulars and optimizing the lens design are not sufficient. We will discuss measurements and numerical simulations of the key optical characteristics of this display. Finally, we discuss 2D image quality, the switching characteristics and the residual lens effect.

  16. 6D Interpretation of 3D Gravity

    NASA Astrophysics Data System (ADS)

    Herfray, Yannick; Krasnov, Kirill; Scarinci, Carlos

    2017-02-01

    We show that 3D gravity, in its pure connection formulation, admits a natural 6D interpretation. The 3D field equations for the connection are equivalent to 6D Hitchin equations for the Chern–Simons 3-form in the total space of the principal bundle over the 3-dimensional base. Turning this construction around one gets an explanation of why the pure connection formulation of 3D gravity exists. More generally, we interpret 3D gravity as the dimensional reduction of the 6D Hitchin theory. To this end, we show that any SU(2)-invariant closed 3-form in the total space of the principal SU(2) bundle can be parametrised by a connection together with a 2-form field on the base. The dimensional reduction of the 6D Hitchin theory then gives rise to 3D gravity coupled to a topological 2-form field.

  17. Biocompatible 3D Matrix with Antimicrobial Properties.

    PubMed

    Ion, Alberto; Andronescu, Ecaterina; Rădulescu, Dragoș; Rădulescu, Marius; Iordache, Florin; Vasile, Bogdan Ștefan; Surdu, Adrian Vasile; Albu, Madalina Georgiana; Maniu, Horia; Chifiriuc, Mariana Carmen; Grumezescu, Alexandru Mihai; Holban, Alina Maria

    2016-01-20

    The aim of this study was to develop, characterize and assess the biological activity of a new regenerative 3D matrix with antimicrobial properties, based on collagen (COLL), hydroxyapatite (HAp), β-cyclodextrin (β-CD) and usnic acid (UA). The prepared 3D matrix was characterized by Scanning Electron Microscopy (SEM), Fourier Transform Infrared Microscopy (FT-IRM), Transmission Electron Microscopy (TEM), and X-ray Diffraction (XRD). In vitro qualitative and quantitative analyses performed on cultured diploid cells demonstrated that the 3D matrix is biocompatible, allowing the normal development and growth of MG-63 osteoblast-like cells, and exhibits an antimicrobial effect, especially on the Staphylococcus aureus strain, explained by the higher inhibitory activity of usnic acid (UA) against Gram-positive bacterial strains. Our data strongly support the use of the obtained 3D matrix as a three-dimensional (3D) anti-infective regeneration matrix for bone tissue engineering.

  18. Quon 3D language for quantum information

    PubMed Central

    Liu, Zhengwei; Wozniakowski, Alex; Jaffe, Arthur M.

    2017-01-01

    We present a 3D topological picture-language for quantum information. Our approach combines charged excitations carried by strings, with topological properties that arise from embedding the strings in the interior of a 3D manifold with boundary. A quon is a composite that acts as a particle. Specifically, a quon is a hemisphere containing a neutral pair of open strings with opposite charge. We interpret multiquons and their transformations in a natural way. We obtain a type of relation, a string–genus “joint relation,” involving both a string and the 3D manifold. We use the joint relation to obtain a topological interpretation of the C∗-Hopf algebra relations, which are widely used in tensor networks. We obtain a 3D representation of the controlled NOT (CNOT) gate that is considerably simpler than earlier work, and a 3D topological protocol for teleportation. PMID:28167790

  19. 3D Ultrafast Ultrasound Imaging In Vivo

    PubMed Central

    Provost, Jean; Papadacci, Clement; Arango, Juan Esteban; Imbault, Marion; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-01-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative real-time imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in three dimensions based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32×32 matrix-array probe. Its capability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3-D Shear-Wave Imaging, 3-D Ultrafast Doppler Imaging and finally 3D Ultrafast combined Tissue and Flow Doppler. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3-D Ultrafast Doppler was used to obtain 3-D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, for the first time, the complex 3-D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, and the 3-D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3-D Ultrafast Ultrasound Imaging for the 3-D real-time mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability. PMID:25207828

  20. Pathways for Learning from 3D Technology

    PubMed Central

    Carrier, L. Mark; Rab, Saira S.; Rosen, Larry D.; Vasquez, Ludivina; Cheever, Nancy A.

    2016-01-01

    The purpose of this study was to find out if 3D stereoscopic presentation of information in a movie format changes a viewer's experience of the movie content. Four possible pathways from 3D presentation to memory and learning were considered: a direct connection based on cognitive neuroscience research; a connection through "immersion" in that 3D presentations could provide additional sensorial cues (e.g., depth cues) that lead to a higher sense of being surrounded by the stimulus; a connection through general interest such that 3D presentation increases a viewer’s interest that leads to greater attention paid to the stimulus (e.g., "involvement"); and a connection through discomfort, with the 3D goggles causing discomfort that interferes with involvement and thus with memory. The memories of 396 participants who viewed two-dimensional (2D) or 3D movies at movie theaters in Southern California were tested. Within three days of viewing a movie, participants filled out an online anonymous questionnaire that queried them about their movie content memories, subjective movie-going experiences (including emotional reactions and "presence") and demographic backgrounds. The responses to the questionnaire were subjected to path analyses in which several different links between 3D presentation to memory (and other variables) were explored. The results showed there were no effects of 3D presentation, either directly or indirectly, upon memory. However, the largest effects of 3D presentation were on emotions and immersion, with 3D presentation leading to reduced positive emotions, increased negative emotions and lowered immersion, compared to 2D presentations. PMID:28078331

  1. The psychology of the 3D experience

    NASA Astrophysics Data System (ADS)

    Janicke, Sophie H.; Ellis, Andrew

    2013-03-01

    With 3D televisions expected to reach 50% home saturation as early as 2016, understanding the psychological mechanisms underlying the user response to 3D technology is critical for content providers, educators and academics. Unfortunately, research examining the effects of 3D technology has not kept pace with the technology's rapid adoption, resulting in large-scale use of a technology about which very little is actually known. Recognizing this need for new research, we conducted a series of studies measuring and comparing many of the variables and processes underlying both 2D and 3D media experiences. In our first study, we found narratives within primetime dramas had the power to shift viewer attitudes in both 2D and 3D settings. However, we found no difference in persuasive power between 2D and 3D content. We contend this lack of effect was the result of poor conversion quality and the unique demands of 3D production. In our second study, we found 3D technology significantly increased enjoyment when viewing sports content, yet offered no added enjoyment when viewing a movie trailer. The enhanced enjoyment of the sports content was shown to be the result of heightened emotional arousal and attention in the 3D condition. We believe the lack of effect found for the movie trailer may be genre-related. In our final study, we found 3D technology significantly enhanced enjoyment of two video games from different genres. The added enjoyment was found to be the result of an increased sense of presence.

  2. The 3D Flow Field Around an Embedded Planet

    NASA Astrophysics Data System (ADS)

    Fung, Jeffrey; Artymowicz, Pawel; Wu, Yanqin

    2015-10-01

    3D modifications to the well-studied 2D flow topology around an embedded planet have the potential to resolve long-standing problems in planet formation theory. We present a detailed analysis of the 3D isothermal flow field around a 5 Earth-mass planet on a fixed circular orbit, simulated using our graphics processing unit hydrodynamics code PEnGUIn. We find that, overall, the horseshoe region has a columnar structure extending vertically much beyond the Hill sphere of the planet. This columnar structure is only broken for some of the widest horseshoe streamlines, along which high altitude fluid descends rapidly into the planet’s Bondi sphere, performs one horseshoe turn, and exits the Bondi sphere radially in the midplane. A portion of this flow exits the horseshoe region altogether, which we refer to as the “transient” horseshoe flow. The flow continues as it rolls up into a pair of up-down symmetric horizontal vortex lines shed into the wake of the planet. This flow, unique to 3D, affects both planet accretion and migration. It prevents the planet from sustaining a hydrostatic atmosphere due to its intrusion into the Bondi sphere, and leads to a significant corotation torque on the planet, unanticipated by 2D analysis. In the reported simulation, starting with a Σ ∝ r^(-3/2) radial surface density profile, this torque is positive and partially cancels with the negative differential Lindblad torque, resulting in a factor of three slower planet migration rate. Finally, we report that 3D effects can be suppressed by a sufficiently large disk viscosity, leading to results similar to 2D.

  3. Solar carbon monoxide: poster child for 3D effects.

    NASA Astrophysics Data System (ADS)

    Ayres, T. R.; Lyons, J. R.; Ludwig, H.-G.; Caffau, E.; Wedemeyer-Böhm, S.

    Photospheric infrared (2-6 μm) rovibrational bands of carbon monoxide (CO) provide a tough test for 3D convection models such as those calculated using CO5BOLD. The molecular formation is highly temperature-sensitive, and thus responds in an exaggerated way to thermal fluctuations in the dynamic atmosphere. CO, itself, is an important tracer of the oxygen abundance, a still controversial issue in solar physics; as well as the heavy isotopes of carbon (13C) and oxygen (18O, 17O), which, relative to terrestrial values, are fingerprints of fractionation processes that operated in the primitive solar nebula. We show how 3D models impact the CO line formation, and add in a second constraint involving the near-UV Ca II line wings, which also are highly temperature sensitive, but in the opposite sense to the molecules. We find that our reference CO5BOLD snapshots appear to be slightly too cool on average in the outer layers of the photosphere where the CO absorptions and Ca II wing emissions arise. We show, further, that previous 1D modeling was systematically biased toward higher oxygen abundances and lower isotopic ratios (e.g., R23 ≡ 12C/13C), suggesting an isotopically “heavy” Sun contrary to direct capture measurements of solar wind light ions by the Genesis Discovery Mission. New 3D ratios for the oxygen isotopes are much closer to those reported by Genesis, and the associated oxygen abundance from CO now is consistent with the recent Caffau et al. study of atomic oxygen. Some lingering discrepancies perhaps can be explained by magnetic bright points. Solar CO demonstrates graphically the wide gulf that can occur between a 3D analysis and a 1D one.

  4. IMAT graphics manual

    NASA Technical Reports Server (NTRS)

    Stockwell, Alan E.; Cooper, Paul A.

    1991-01-01

    The Integrated Multidisciplinary Analysis Tool (IMAT) consists of a menu driven executive system coupled with a relational database which links commercial structures, structural dynamics and control codes. The IMAT graphics system, a key element of the software, provides a common interface for storing, retrieving, and displaying graphical information. The IMAT Graphics Manual shows users of commercial analysis codes (MATRIXx, MSC/NASTRAN and I-DEAS) how to use the IMAT graphics system to obtain high quality graphical output using familiar plotting procedures. The manual explains the key features of the IMAT graphics system, illustrates their use with simple step-by-step examples, and provides a reference for users who wish to take advantage of the flexibility of the software to customize their own applications.

  5. 3-D Object Recognition from Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Smith, W.; Walker, A. S.; Zhang, B.

    2011-09-01

    The market for real-time 3-D mapping includes not only traditional geospatial applications but also navigation of unmanned autonomous vehicles (UAVs). Massively parallel processes such as graphics processing unit (GPU) computing make real-time 3-D object recognition and mapping achievable. Geospatial technologies such as digital photogrammetry and GIS offer advanced capabilities to produce 2-D and 3-D static maps using UAV data. The goal is to develop real-time UAV navigation through increased automation. It is challenging for a computer to identify a 3-D object such as a car, a tree or a house, yet automatic 3-D object recognition is essential to increasing the productivity of geospatial data such as 3-D city site models. In the past three decades, researchers have used radiometric properties to identify objects in digital imagery with limited success, because these properties vary considerably from image to image. Consequently, our team has developed software that recognizes certain types of 3-D objects within 3-D point clouds. Although our software is developed for modeling, simulation and visualization, it has the potential to be valuable in robotics and UAV applications. The locations and shapes of 3-D objects such as buildings and trees are easily recognizable by a human from a brief glance at a representation of a point cloud such as terrain-shaded relief. The algorithms to extract these objects have been developed and require only the point cloud and minimal human inputs such as a set of limits on building size and a request to turn on a squaring option. The algorithms use both digital surface model (DSM) and digital elevation model (DEM), so software has also been developed to derive the latter from the former. The process continues through the following steps: identify and group 3-D object points into regions; separate buildings and houses from trees; trace region boundaries; regularize and simplify boundary polygons; construct complex roofs. 
Several case…
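    As a toy illustration of the first extraction step described above — using the DSM and DEM to identify and group 3D object points into regions — the sketch below thresholds the normalized height (DSM − DEM) on a raster grid and grows 4-connected regions with a flood fill. The grid representation, threshold value, and function name are assumptions for illustration, not the authors' software.

```python
import numpy as np
from collections import deque

def label_objects(dsm, dem, min_height=2.0):
    """Flag grid cells whose normalized height (DSM - DEM) exceeds a
    threshold, then group flagged cells into 4-connected regions that
    become candidate objects (buildings, trees)."""
    mask = (dsm - dem) > min_height
    labels = np.zeros(mask.shape, dtype=int)
    next_label = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue                      # already part of a region
        next_label += 1
        labels[seed] = next_label
        queue = deque([seed])
        while queue:                      # breadth-first flood fill
            r, c = queue.popleft()
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]
                        and mask[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = next_label
                    queue.append((nr, nc))
    return labels, next_label
```

    The later steps in the pipeline (separating buildings from trees, tracing and regularizing boundaries, constructing roofs) would operate on the labeled regions this produces.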

  6. 3D bioprinting of tissues and organs.

    PubMed

    Murphy, Sean V; Atala, Anthony

    2014-08-01

    Additive manufacturing, otherwise known as three-dimensional (3D) printing, is driving major innovations in many areas, such as engineering, manufacturing, art, education and medicine. Recent advances have enabled 3D printing of biocompatible materials, cells and supporting components into complex 3D functional living tissues. 3D bioprinting is being applied to regenerative medicine to address the need for tissues and organs suitable for transplantation. Compared with non-biological printing, 3D bioprinting involves additional complexities, such as the choice of materials, cell types, growth and differentiation factors, and technical challenges related to the sensitivities of living cells and the construction of tissues. Addressing these complexities requires the integration of technologies from the fields of engineering, biomaterials science, cell biology, physics and medicine. 3D bioprinting has already been used for the generation and transplantation of several tissues, including multilayered skin, bone, vascular grafts, tracheal splints, heart tissue and cartilaginous structures. Other applications include developing high-throughput 3D-bioprinted tissue models for research, drug discovery and toxicology.

  7. Medical 3D Printing for the Radiologist.

    PubMed

    Mitsouras, Dimitris; Liacouras, Peter; Imanzadeh, Amir; Giannopoulos, Andreas A; Cai, Tianrun; Kumamaru, Kanako K; George, Elizabeth; Wake, Nicole; Caterson, Edward J; Pomahac, Bohdan; Ho, Vincent B; Grant, Gerald T; Rybicki, Frank J

    2015-01-01

    While use of advanced visualization in radiology is instrumental in diagnosis and communication with referring clinicians, there is an unmet need to render Digital Imaging and Communications in Medicine (DICOM) images as three-dimensional (3D) printed models capable of providing both tactile feedback and tangible depth information about anatomic and pathologic states. Three-dimensional printed models, already entrenched in the nonmedical sciences, are rapidly being embraced in medicine as well as in the lay community. Incorporating 3D printing from images generated and interpreted by radiologists presents particular challenges, including training, materials and equipment, and guidelines. The overall costs of a 3D printing laboratory must be balanced by the clinical benefits. It is expected that the number of 3D-printed models generated from DICOM images for planning interventions and fabricating implants will grow exponentially. Radiologists should at a minimum be familiar with 3D printing as it relates to their field, including types of 3D printing technologies and materials used to create 3D-printed anatomic models, published applications of models to date, and clinical benefits in radiology. Online supplemental material is available for this article.

  8. Medical 3D Printing for the Radiologist

    PubMed Central

    Mitsouras, Dimitris; Liacouras, Peter; Imanzadeh, Amir; Giannopoulos, Andreas A.; Cai, Tianrun; Kumamaru, Kanako K.; George, Elizabeth; Wake, Nicole; Caterson, Edward J.; Pomahac, Bohdan; Ho, Vincent B.; Grant, Gerald T.

    2015-01-01

    While use of advanced visualization in radiology is instrumental in diagnosis and communication with referring clinicians, there is an unmet need to render Digital Imaging and Communications in Medicine (DICOM) images as three-dimensional (3D) printed models capable of providing both tactile feedback and tangible depth information about anatomic and pathologic states. Three-dimensional printed models, already entrenched in the nonmedical sciences, are rapidly being embraced in medicine as well as in the lay community. Incorporating 3D printing from images generated and interpreted by radiologists presents particular challenges, including training, materials and equipment, and guidelines. The overall costs of a 3D printing laboratory must be balanced by the clinical benefits. It is expected that the number of 3D-printed models generated from DICOM images for planning interventions and fabricating implants will grow exponentially. Radiologists should at a minimum be familiar with 3D printing as it relates to their field, including types of 3D printing technologies and materials used to create 3D-printed anatomic models, published applications of models to date, and clinical benefits in radiology. Online supplemental material is available for this article. ©RSNA, 2015 PMID:26562233

  9. 3D imaging in forensic odontology.

    PubMed

    Evans, Sam; Jones, Carl; Plassmann, Peter

    2010-06-16

    This paper describes the investigation of a new 3D capture method for acquiring and subsequent forensic analysis of bite mark injuries on human skin. When documenting bite marks with standard 2D cameras, errors in photographic technique can occur if best practice is not followed. Subsequent forensic analysis of the mark is problematic when a 3D structure is recorded into a 2D space. Although strict guidelines (BAFO) exist, these are time-consuming to follow and, due to their complexity, may produce errors. A 3D image capture and processing system might avoid the problems resulting from the 2D reduction process, simplifying the guidelines and reducing errors. Proposed solution: a series of experiments is described in this paper to demonstrate that a 3D system has the potential to produce suitable results. The experiments tested precision and accuracy of the traditional 2D and 3D methods. A 3D image capture device minimises the amount of angular distortion; therefore, such a system has the potential to create more robust forensic evidence for use in courts. A first set of experiments tested and demonstrated which method of forensic analysis creates the least amount of intra-operator error. A second set tested and demonstrated which method of image capture creates the least amount of inter-operator error and visual distortion. In a third set the effects of angular distortion on 2D and 3D methods of image capture were evaluated.

  10. NUBEAM developments and 3d halo modeling

    NASA Astrophysics Data System (ADS)

    Gorelenkova, M. V.; Medley, S. S.; Kaye, S. M.

    2012-10-01

    Recent developments related to the 3D halo model in the NUBEAM code are described. To have a reliable halo neutral source for diagnostic simulation, the TRANSP/NUBEAM code has been enhanced with full implementation of ADAS atomic physics ground-state and excited-state data for hydrogenic beams and mixed species plasma targets. The ADAS codes and database provide the density and temperature dependence of the atomic data, and the collective nature of the state excitation process. To be able to populate 3D halo output with sufficient statistical resolution, the capability to control the statistics of fast-ion CX modeling and of thermal halo launch has been added to NUBEAM. The 3D halo neutral model is based on modification and extension of the “beam in box” aligned 3D Cartesian grid that includes the neutral beam itself, 3D fast neutral densities due to CX of partially slowed down fast ions in the beam halo region, 3D thermal neutral densities due to CX deposition, and a fast neutral recapture source. More details on the 3D halo simulation design will be presented.

  11. DCSP hardware maintenance system

    SciTech Connect

    Pazmino, M.

    1995-11-01

    This paper discusses the necessary changes to be implemented on the hardware side of the DCSP database. DCSP currently tracks hardware maintenance costs in six separate databases. The goal is to develop a system that combines all the data and works from a single database. Tasks discussed in this paper include adding report-generation capability, creating a help package and a user's guide, testing the executable file, and populating the new database with data taken from the old database. A brief description of the basic process used in developing the system is also given. Conclusions about the future of the database and the delivery of the final product are then addressed, based on research and the intended use of the system.

  12. Hardware Accelerated Simulated Radiography

    SciTech Connect

    Laney, D; Callahan, S; Max, N; Silva, C; Langer, S; Frank, R

    2005-04-12

    We present the application of hardware accelerated volume rendering algorithms to the simulation of radiographs as an aid to scientists designing experiments, validating simulation codes, and understanding experimental data. The techniques presented take advantage of 32 bit floating point texture capabilities to obtain validated solutions to the radiative transport equation for X-rays. An unsorted hexahedron projection algorithm is presented for curvilinear hexahedra that produces simulated radiographs in the absorption-only regime. A sorted tetrahedral projection algorithm is presented that simulates radiographs of emissive materials. We apply the tetrahedral projection algorithm to the simulation of experimental diagnostics for inertial confinement fusion experiments on a laser at the University of Rochester. We show that the hardware accelerated solution is faster than the current technique used by scientists.
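The absorption-only regime described above reduces to the Beer-Lambert law, I = I0 exp(-∫ mu ds), evaluated along each ray. A toy CPU analogue with assumed array shapes (not the authors' GPU hexahedron projection):

```python
import numpy as np

def simulated_radiograph(mu, ds=1.0):
    """Absorption-only radiograph: transmitted intensity I = I0*exp(-sum(mu)*ds).

    `mu` is a (ny, nx, nz) array of attenuation coefficients sampled along
    rays parallel to the z axis; the line integral is approximated by a
    Riemann sum with step `ds`. Assumes I0 = 1 per pixel.
    """
    optical_depth = mu.sum(axis=-1) * ds
    return np.exp(-optical_depth)

mu = np.zeros((2, 2, 10))
mu[0, 0, :] = 0.1            # one ray passes through absorbing material
img = simulated_radiograph(mu)
print(img[0, 0], img[1, 1])  # attenuated pixel (exp(-1)) vs clear pixel (1.0)
```

The GPU versions in the abstract accelerate exactly this accumulation, using 32-bit floating-point textures to keep the transport solution validated.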

  13. Sterilization of space hardware.

    NASA Technical Reports Server (NTRS)

    Pflug, I. J.

    1971-01-01

    Discussion of various techniques of sterilization of space flight hardware using either destructive heating or the action of chemicals. Factors considered in the dry-heat destruction of microorganisms include the effects of microbial water content, temperature, the physicochemical properties of the microorganism and adjacent support, and nature of the surrounding gas atmosphere. Dry-heat destruction rates of microorganisms on the surface, between mated surface areas, or buried in the solid material of space vehicle hardware are reviewed, along with alternative dry-heat sterilization cycles, thermodynamic considerations, and considerations of final sterilization-process design. Discussed sterilization chemicals include ethylene oxide, formaldehyde, methyl bromide, dimethyl sulfoxide, peracetic acid, and beta-propiolactone.

  14. Novel fully integrated computer system for custom footwear: from 3D digitization to manufacturing

    NASA Astrophysics Data System (ADS)

    Houle, Pascal-Simon; Beaulieu, Eric; Liu, Zhaoheng

    1998-03-01

    This paper presents a recently developed custom footwear system, which integrates 3D digitization technology, range image fusion techniques, a 3D graphical environment for corrective actions, parametric curved surface representation, and computer numerical control (CNC) machining. In this system, a support designed with the help of biomechanics experts stabilizes the foot in a correct and neutral position. The foot surface is then captured by a 3D camera using active ranging techniques. Software using a library of documented foot pathologies suggests corrective actions on the orthosis. Three kinds of deformation can be applied. The first method uses pad surfaces previously scanned by our 3D scanner, which can easily be mapped onto the foot surface to modify the surface shape locally. The second is the construction of B-spline surfaces, manipulating control points and modifying knot vectors in a 3D graphical environment to build the desired deformation. The last is a manual electronic 3D pen, which may be of different shapes and sizes and has an adjustable 'pressure' setting. All applied deformations should respect G1 surface continuity, which ensures that the surface can accommodate a foot. Once the surface modification process is completed, the resulting data is sent to manufacturing software for CNC machining.
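The control-point editing described here relies on the local-support property of B-splines: moving one control point deforms only the nearby spans of the surface. A minimal one-segment sketch for a cubic B-spline curve (the paper works with full surfaces; this basis-matrix formulation is a standard textbook result, not the system's code):

```python
import numpy as np

# Uniform cubic B-spline basis matrix (standard result).
M = np.array([[-1,  3, -3, 1],
              [ 3, -6,  3, 0],
              [-3,  0,  3, 0],
              [ 1,  4,  1, 0]]) / 6.0

def bspline_point(ctrl, t):
    """Evaluate one cubic B-spline segment at parameter t in [0, 1].

    `ctrl` holds the 4 control points influencing this segment; editing a
    control point deforms only the spans it supports -- the local-control
    property exploited when reshaping the orthosis surface.
    """
    T = np.array([t**3, t**2, t, 1.0])
    return T @ M @ ctrl

ctrl = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
base = bspline_point(ctrl, 0.5)
ctrl[1] += [0.0, 0.6]                 # raise one control point
deformed = bspline_point(ctrl, 0.5)
print(base, deformed)                 # y rises, but by less than 0.6
```

Because the basis weights sum to 1 and each is below 1, the curve moves by only a fraction of the control-point displacement, giving smooth, predictable edits.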

  15. Procedural 3d Modelling for Traditional Settlements. The Case Study of Central Zagori

    NASA Astrophysics Data System (ADS)

    Kitsakis, D.; Tsiliakou, E.; Labropoulos, T.; Dimopoulou, E.

    2017-02-01

    Over the last decades 3D modelling has been a fast-growing field in Geographic Information Science, extensively applied in various domains including reconstruction and visualization of cultural heritage, especially monuments and traditional settlements. Technological advances in computer graphics allow for modelling of complex 3D objects with high precision and accuracy. Procedural modelling is an effective tool and a relatively novel method, based on the concept of algorithmic modelling. It is utilized to generate accurate 3D models and composite facade textures from sets of rules called Computer Generated Architecture grammars (CGA grammars), which define the objects' detailed geometry, rather than altering or editing the model manually. In this paper, procedural modelling tools have been exploited to generate the 3D model of a traditional settlement in the region of Central Zagori in Greece. The detailed geometries of the 3D models were derived from applying shape grammars to selected footprints, and the process resulted in a final 3D model, optimally describing the built environment of Central Zagori, in three Levels of Detail (LoD). The final 3D scene was exported and published as a 3D web scene which can be viewed with the CityEngine 3D viewer, giving a walkthrough of the whole model, as in virtual reality or game environments. This research work addresses issues regarding texture precision, LoD for 3D objects, and interactive visualization within one 3D scene, as well as the effectiveness of large-scale modelling, along with the benefits and drawbacks of procedural modelling techniques in the field of cultural heritage and, more specifically, in 3D modelling of traditional settlements.

  16. RRFC hardware operation manual

    SciTech Connect

    Abhold, M.E.; Hsue, S.T.; Menlove, H.O.; Walton, G.

    1996-05-01

    The Research Reactor Fuel Counter (RRFC) system was developed to assay the ²³⁵U content in spent Material Test Reactor (MTR) type fuel elements underwater in a spent fuel pool. RRFC assays the ²³⁵U content using active neutron coincidence counting and also incorporates an ion chamber for gross gamma-ray measurements. This manual describes RRFC hardware, including detectors, electronics, and performance characteristics.

  17. 3D packaging for integrated circuit systems

    SciTech Connect

    Chu, D.; Palmer, D.W.

    1996-11-01

    A goal was set for high-density, high-performance microelectronics, pursued through a dense 3D packing of integrated circuits. A "tool set" of assembly processes has been developed that enables 3D system designs: 3D thermal analysis, silicon electrical through vias, IC thinning, mounting wells in silicon, adhesives for silicon stacking, pretesting of IC chips before commitment to stacks, and bond pad bumping. Validation of these process developments occurred through both Sandia prototypes and subsequent commercial examples.

  18. FUN3D Manual: 12.5

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, William L.; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2014-01-01

    This manual describes the installation and execution of FUN3D version 12.5, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  19. FUN3D Manual: 12.4

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2014-01-01

    This manual describes the installation and execution of FUN3D version 12.4, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  20. 3D Immersive Visualization with Astrophysical Data

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2017-01-01

    We present the refinement of a new 3D immersion technique for astrophysical data visualization. Methodology to create 360 degree spherical panoramas is reviewed. The 3D software package Blender coupled with Python and the Google Spatial Media module are used together to create the final data products. Data can be viewed interactively with a mobile phone or tablet or in a web browser. The technique can apply to different kinds of astronomical data including 3D stellar and galaxy catalogs, images, and planetary maps.
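The spherical panoramas mentioned above are typically stored as equirectangular images: longitude maps across the image width, latitude down its height. The per-pixel inverse mapping a viewer applies can be sketched as follows (illustrative only, not Blender's or the Spatial Media module's code):

```python
import math

def pixel_to_direction(u, v, width, height):
    """Map an equirectangular panorama pixel (u, v) to a unit view vector.

    Longitude spans -pi..pi across the width; latitude spans +pi/2..-pi/2
    down the height. A 360-degree viewer evaluates this per pixel to place
    the image on a sphere around the observer.
    """
    lon = (u / width) * 2.0 * math.pi - math.pi
    lat = math.pi / 2.0 - (v / height) * math.pi
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)

# The image centre looks straight down the +z axis.
print(pixel_to_direction(1024, 512, 2048, 1024))
```

Rendering the panorama in Blender amounts to the forward version of this mapping: a camera that samples all view directions and writes each one to its (u, v) pixel.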

  1. A high capacity 3D steganography algorithm.

    PubMed

    Chao, Min-Wen; Lin, Chao-hung; Yu, Cheng-Wei; Lee, Tong-Yee

    2009-01-01

    In this paper, we present a very high-capacity and low-distortion 3D steganography scheme. Our steganography approach is based on a novel multilayered embedding scheme that hides secret messages in the vertices of 3D polygon models. Experimental results show that the cover model distortion is very small as the number of hiding layers ranges from 7 to 13. To the best of our knowledge, this novel approach provides much higher hiding capacity than other state-of-the-art approaches, while satisfying the basic low-distortion and security requirements for steganography on 3D models.
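To make the idea of vertex embedding concrete, here is a toy single-layer analogue (not the authors' multilayered scheme): hide one bit in a coordinate by snapping it to a fine grid and forcing the grid index's parity to match the bit.

```python
def embed_bit(coord, bit, step=0.001):
    """Hide one bit in a vertex coordinate via parity of its quantised value.

    Snap `coord` to a grid of size `step` and force the grid index even
    (bit 0) or odd (bit 1). Distortion stays below 1.5 * step. Toy
    illustration of vertex-domain embedding, not the paper's algorithm.
    """
    q = round(coord / step)
    if q % 2 != bit:
        q += 1
    return q * step

def extract_bit(coord, step=0.001):
    return round(coord / step) % 2

v = 12.34567
stego = embed_bit(v, 1)
print(extract_bit(stego), abs(stego - v) < 2 * 0.001)
```

A multilayered scheme like the one in the abstract stacks several such embeddings at different precisions, which is how capacity grows while per-vertex distortion stays small.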

  2. How We 3D-Print Aerogel

    SciTech Connect

    2015-04-23

    A new type of graphene aerogel will make for better energy storage, sensors, nanoelectronics, catalysis and separations. Lawrence Livermore National Laboratory researchers have made graphene aerogel microlattices with an engineered architecture via a 3D printing technique known as direct ink writing. The research appears in the April 22 edition of the journal Nature Communications. The 3D printed graphene aerogels have high surface area, excellent electrical conductivity, are lightweight, have mechanical stiffness and exhibit supercompressibility (up to 90 percent compressive strain). In addition, the 3D printed graphene aerogel microlattices show an order of magnitude improvement over bulk graphene materials and much better mass transport.

  3. FUN3D Manual: 12.6

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, William L.; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2015-01-01

    This manual describes the installation and execution of FUN3D version 12.6, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  4. FUN3D Manual: 12.9

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2016-01-01

    This manual describes the installation and execution of FUN3D version 12.9, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  5. FUN3D Manual: 13.1

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2017-01-01

    This manual describes the installation and execution of FUN3D version 13.1, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  6. FUN3D Manual: 12.7

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2015-01-01

    This manual describes the installation and execution of FUN3D version 12.7, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  7. FUN3D Manual: 13.0

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bill; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2016-01-01

    This manual describes the installation and execution of FUN3D version 13.0, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  8. FUN3D Manual: 12.8

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2015-01-01

    This manual describes the installation and execution of FUN3D version 12.8, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  9. An Improved Version of TOPAZ 3D

    SciTech Connect

    Krasnykh, Anatoly

    2003-07-29

    An improved version of the TOPAZ 3D gun code is presented as a powerful tool for beam optics simulation. In contrast to the previous version of TOPAZ 3D, the geometry of the device under test is introduced into TOPAZ 3D directly from a CAD program, such as Solid Edge or AutoCAD. In order to have this new feature, an interface was developed, using the GiD software package as a meshing code. The article describes this method with two models to illustrate the results.

  10. RHOCUBE: 3D density distributions modeling code

    NASA Astrophysics Data System (ADS)

    Nikutta, Robert; Agliozzo, Claudia

    2016-11-01

    RHOCUBE models 3D density distributions on a discrete Cartesian grid and their integrated 2D maps. It can be used for a range of applications, including modeling the electron number density in LBV shells and computing the emission measure. The RHOCUBE Python package provides several 3D density distributions, including a power-law shell, truncated Gaussian shell, constant-density torus, dual cones, and spiralling helical tubes, and can accept additional distributions. RHOCUBE provides convenient methods for shifts and rotations in 3D, and if necessary, an arbitrary number of density distributions can be combined into the same model cube and the integration ∫ dz performed through the joint density field.
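The pattern RHOCUBE implements, sampling a density component on a Cartesian cube and collapsing it along z, can be sketched in a few lines of NumPy (names and parameters here are illustrative, not RHOCUBE's actual API):

```python
import numpy as np

def gaussian_shell(n=64, r0=0.5, width=0.1, extent=1.0):
    """Sample a truncated-Gaussian shell on an n^3 Cartesian grid.

    Density peaks at radius r0 and falls off as a Gaussian in |r - r0| --
    the same kind of component RHOCUBE provides for LBV shells.
    """
    ax = np.linspace(-extent, extent, n)
    x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
    r = np.sqrt(x**2 + y**2 + z**2)
    return np.exp(-0.5 * ((r - r0) / width) ** 2)

rho = gaussian_shell()
dz = 2.0 / (64 - 1)
column = rho.sum(axis=2) * dz     # integrated 2D map: the ∫ dz projection
print(rho.shape, column.shape)
```

Combining components is then just summing cubes on the shared grid before the projection, which is essentially what the joint density field in the abstract refers to.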

  11. Explicit 3-D Hydrodynamic FEM Program

    SciTech Connect

    2000-11-07

    DYNA3D is a nonlinear explicit finite element code for analyzing 3-D structures and solid continuum. The code is vectorized and available on several computer platforms. The element library includes continuum, shell, beam, truss and spring/damper elements to allow maximum flexibility in modeling physical problems. Many materials are available to represent a wide range of material behavior, including elasticity, plasticity, composites, thermal effects and rate dependence. In addition, DYNA3D has a sophisticated contact interface capability, including frictional sliding, single surface contact and automatic contact generation.

  12. 3D-HIM: A 3D High-density Interleaved Memory for Bipolar RRAM Design

    DTIC Science & Technology

    2013-05-01

    Journal article (post-print); dates covered: Dec 2010 - Nov 2012. RRAM has emerged as one of the promising candidates for large data storage in computing systems. Moreover, building up RRAM in a three-dimensional (3D) stacking [...] brings in a potential reliability issue. To alleviate the situation, we introduce two novel 3D stacking structures built upon bipolar RRAM [...]

  13. Do-It-Yourself: 3D Models of Hydrogenic Orbitals through 3D Printing

    ERIC Educational Resources Information Center

    Griffith, Kaitlyn M.; de Cataldo, Riccardo; Fogarty, Keir H.

    2016-01-01

    Introductory chemistry students often have difficulty visualizing the 3-dimensional shapes of the hydrogenic electron orbitals without the aid of physical 3D models. Unfortunately, commercially available models can be quite expensive. 3D printing offers a solution for producing models of hydrogenic orbitals. 3D printing technology is widely…

  14. XML3D and Xflow: combining declarative 3D for the Web with generic data flows.

    PubMed

    Klein, Felix; Sons, Kristian; Rubinstein, Dmitri; Slusallek, Philipp

    2013-01-01

    Researchers have combined XML3D, which provides declarative, interactive 3D scene descriptions based on HTML5, with Xflow, a language for declarative, high-performance data processing. The result lets Web developers combine a 3D scene graph with data flows for dynamic meshes, animations, image processing, and postprocessing.

  15. Coniferous Canopy BRF Simulation Based on 3-D Realistic Scene

    NASA Technical Reports Server (NTRS)

    Wang, Xin-yun; Guo, Zhi-feng; Qin, Wen-han; Sun, Guo-qing

    2011-01-01

    Studying the radiation regime at large scales is difficult for computer simulation methods. A simplified coniferous model was investigated in the present study. It makes computer simulation methods such as L-systems and the radiosity-graphics combined method (RGM) more powerful for remote sensing of heterogeneous coniferous forests over a large-scale region. L-systems are applied to render 3-D coniferous forest scenarios, and the RGM model is used to calculate the BRF (bidirectional reflectance factor) in the visible and near-infrared regions. Results in this study show that in most cases the two agreed well, at both the tree and the forest level.
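An L-system, as used above to generate the forest scenes, is just iterated string rewriting: each symbol is replaced by its production rule, and a turtle interpreter later turns the string into geometry. A minimal sketch with a textbook bracketed-tree grammar (not the paper's actual rules):

```python
def expand(axiom, rules, depth):
    """Iteratively rewrite an L-system string.

    Symbols without a rule are copied unchanged. In a turtle
    interpretation, F draws a segment, +/- turn, and [ ] push/pop state,
    which is what produces branching plant-like geometry.
    """
    s = axiom
    for _ in range(depth):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Classic bracketed tree: every segment sprouts two side branches.
rules = {"F": "F[+F]F[-F]F"}
print(expand("F", rules, 1))
print(len(expand("F", rules, 2)))
```

The string length grows geometrically with depth, which is why a handful of rules suffices to fill a large-scale 3-D scene with trees.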

  16. Whole-body 3D scanner and scan data report

    NASA Astrophysics Data System (ADS)

    Addleman, Stephen R.

    1997-03-01

    With the first whole-body 3D scanner now available, the next adventure confronting the user is what to do with all of the data. While the system was built for anthropologists, it has created interest among users from a wide variety of fields. Users with applications in anthropology, costume design, garment design, entertainment, VR, and gaming need the data in formats unique to their fields. Data from the scanner is being converted to solid models for art and design and to NURBS for computer graphics applications. Motion capture has made scan data move and dance. The scanner has created a need for advanced application software, just as other scanners have in the past.

  17. (abstract) A High Throughput 3-D Inner Product Processor

    NASA Technical Reports Server (NTRS)

    Daud, Tuan

    1996-01-01

    A particularly challenging image processing application is real-time scene acquisition and object discrimination. It requires spatio-temporal recognition of point and resolved objects at high speeds with parallel processing algorithms. Neural network paradigms provide fine-grain parallelism and, when implemented in hardware, offer orders-of-magnitude speedup. However, neural networks implemented on a VLSI chip are planar architectures capable of efficient processing of linear vector signals rather than 2-D images. Therefore, for processing of images, a 3-D stack of neural-net ICs receiving planar inputs and consuming minimal power is required. Details of the circuits and chip architectures will be described, along with the need to develop ultralow-power electronics. Further, use of the architecture in a system for high-speed processing will be illustrated.

  18. 3D lidar imaging for detecting and understanding plant responses and canopy structure.

    PubMed

    Omasa, Kenji; Hosoi, Fumiki; Konishi, Atsumi

    2007-01-01

    Understanding and diagnosing plant responses to stress will benefit greatly from three-dimensional (3D) measurement and analysis of plant properties because plant responses are strongly related to their 3D structures. Light detection and ranging (lidar) has recently emerged as a powerful tool for direct 3D measurement of plant structure. Here the use of 3D lidar imaging to estimate plant properties such as canopy height, canopy structure, carbon stock, and species is demonstrated, and plant growth and shape responses are assessed by reviewing the development of lidar systems and their applications from the leaf level to canopy remote sensing. In addition, the recent creation of accurate 3D lidar images combined with natural colour, chlorophyll fluorescence, photochemical reflectance index, and leaf temperature images is demonstrated, thereby providing information on responses of pigments, photosynthesis, transpiration, stomatal opening, and shape to environmental stresses; these data can be integrated with 3D images of the plants using computer graphics techniques. Future lidar applications that provide more accurate dynamic estimation of various plant properties should improve our understanding of plant responses to stress and of interactions between plants and their environment. Moreover, combining 3D lidar with other passive and active imaging techniques will potentially improve the accuracy of airborne and satellite remote sensing, and make it possible to analyse 3D information on ecophysiological responses and levels of various substances in agricultural and ecological applications and in observations of the global biosphere.
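The canopy height estimation mentioned above reduces, in its simplest form, to gridding the lidar returns and differencing the highest and lowest return per cell. A crude sketch (real pipelines classify ground returns first; function and parameter names are illustrative):

```python
import numpy as np

def canopy_height_model(points, cell=1.0):
    """Grid an (N, 3) lidar point cloud into a canopy height model.

    Per grid cell, canopy height = highest return minus lowest return,
    using the lowest return as a rough local ground estimate. A toy
    version of the canopy metrics discussed in the review.
    """
    ij = np.floor(points[:, :2] / cell).astype(int)
    cells = {}
    for (i, j), z in zip(map(tuple, ij), points[:, 2]):
        lo, hi = cells.get((i, j), (z, z))
        cells[(i, j)] = (min(lo, z), max(hi, z))
    return {k: hi - lo for k, (lo, hi) in cells.items()}

pts = np.array([[0.2, 0.3, 0.0], [0.6, 0.1, 12.5], [3.4, 3.3, 0.2]])
print(canopy_height_model(pts))  # cell (0, 0) has canopy height 12.5
```

Carbon stock and structure metrics in the review build on the same gridded-height representation, with more careful ground filtering and vertical profiles per cell.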

  19. 3D hydrodynamical and radiative transfer modeling of η Carinae's colliding winds

    NASA Astrophysics Data System (ADS)

    Madura, T. I.; Clementel, N.; Gull, T. R.; Kruip, C. J. H.; Paardekooper, J.-P.; Icke, V.

    We present results of full 3D hydrodynamical and radiative transfer simulations of the colliding stellar winds in the massive binary system η Carinae. We accomplish this by applying the SimpleX algorithm for 3D radiative transfer on an unstructured Voronoi-Delaunay grid to recent 3D smoothed particle hydrodynamics (SPH) simulations of the binary colliding winds. We use SimpleX to obtain detailed ionization fractions of hydrogen and helium, in 3D, at the resolution of the original SPH simulations. We investigate several computational domain sizes and Luminous Blue Variable primary star mass-loss rates. We furthermore present new methods of visualizing and interacting with output from complex 3D numerical simulations, including 3D interactive graphics and 3D printing. While we initially focus on η Car, the methods employed can be applied to numerous other colliding wind (WR 140, WR 137, WR 19) and dusty `pinwheel' (WR 104, WR 98a) binary systems. Coupled with 3D hydrodynamical simulations, SimpleX simulations have the potential to help determine the regions where various observed time-variable emission and absorption lines form in these unique objects.

  20. Midsagittal plane extraction from brain images based on 3D SIFT

    NASA Astrophysics Data System (ADS)

    Wu, Huisi; Wang, Defeng; Shi, Lin; Wen, Zhenkun; Ming, Zhong

    2014-03-01

    Midsagittal plane (MSP) extraction from 3D brain images is considered a promising technique for human brain symmetry analysis. In this paper, we present a fast and robust MSP extraction method based on the 3D scale-invariant feature transform (SIFT). Unlike existing brain MSP extraction methods, which mainly rely on gray-level similarity, 3D edge registration, or parameterized surface matching to determine the fissure plane, our proposed method is based on distinctive 3D SIFT features, in which the fissure plane is determined by parallel 3D SIFT matching and iterative least-median-of-squares plane regression. By considering the relative scales, orientations, and flipped descriptors between two 3D SIFT features, we propose a novel metric to measure the symmetry magnitude of 3D SIFT feature pairs. By clustering and indexing the extracted SIFT features using a k-dimensional tree (KD-tree) implemented on graphics processing units, we can match multiple pairs of 3D SIFT features in parallel and solve the optimal MSP on the fly. The proposed method was evaluated on synthetic and in vivo datasets of normal and pathological cases, and validated by comparisons with state-of-the-art methods. Experimental results demonstrated that our method achieves real-time performance with better accuracy, yielding an average yaw angle error below 0.91° and an average roll angle error of no more than 0.89°.
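The geometric core of the method above is that each matched symmetric feature pair constrains the MSP: the plane's normal is parallel to the difference of the pair, and the plane passes through the pair's midpoint. A toy averaged estimate (a stand-in for the paper's SIFT matching plus least-median regression):

```python
import numpy as np

def msp_from_pairs(left, right):
    """Estimate a midsagittal plane from matched symmetric feature pairs.

    Each pair (p, q) of mirrored features gives a normal direction p - q
    and an on-plane point (p + q) / 2; averaging over pairs yields a
    simple estimate. Returns (unit normal, point on plane).
    """
    left, right = np.asarray(left, float), np.asarray(right, float)
    diffs = left - right
    # Align difference vectors before averaging (per-pair sign ambiguity).
    diffs *= np.sign(diffs @ diffs[0])[:, None]
    normal = diffs.mean(axis=0)
    normal /= np.linalg.norm(normal)
    point = ((left + right) / 2.0).mean(axis=0)
    return normal, point

# Feature pairs mirrored across the plane x = 5.
L = [[7.0, 1.0, 0.0], [6.0, 2.0, 3.0]]
R = [[3.0, 1.0, 0.0], [4.0, 2.0, 3.0]]
n, p = msp_from_pairs(L, R)
print(n, p)   # normal ~ (1, 0, 0); point has x ~ 5
```

The least-median-of-squares regression in the paper plays the role the plain mean plays here, making the fit robust to mismatched (outlier) pairs.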