Science.gov

Sample records for 3D real space

  1. [Real time 3D echocardiography]

    NASA Technical Reports Server (NTRS)

    Bauer, F.; Shiota, T.; Thomas, J. D.

    2001-01-01

    Three-dimensional representation of the heart is an old concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections, the spatial localisation and orientation of which require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matricial emitter-receiver probe complex with parallel data processing provides instantaneous acquisition of a pyramidal 64 degrees x 64 degrees volume. The image is restituted in real time and is composed of 3 planes (planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this acquisition system allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real time 3D echocardiography is ready for clinical use, some improvements are still needed to make it more user-friendly. Real time 3D echocardiography could then become an essential tool for the understanding, diagnosis and management of patients.

  2. [Real time 3D echocardiography].

    PubMed

    Bauer, F; Shiota, T; Thomas, J D

    2001-07-01

    Three-dimensional representation of the heart is an old concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections, the spatial localisation and orientation of which require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matricial emitter-receiver probe complex with parallel data processing provides instantaneous acquisition of a pyramidal 64 degrees x 64 degrees volume. The image is restituted in real time and is composed of 3 planes (planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this acquisition system allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real time 3D echocardiography is ready for clinical use, some improvements are still needed to make it more user-friendly. Real time 3D echocardiography could then become an essential tool for the understanding, diagnosis and management of patients. PMID:11494630

  3. Real-Time 3D Visualization

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Butler Hine, former director of the Intelligent Mechanism Group (IMG) at Ames Research Center, and five others partnered to start Fourth Planet, Inc., a visualization company that specializes in the intuitive visual representation of dynamic, real-time data over the Internet and Intranet. Over a five-year period, the then NASA researchers performed ten robotic field missions in harsh climes to mimic the end-to-end operations of automated vehicles trekking across another world under control from Earth. The core software technology for these missions was the Virtual Environment Vehicle Interface (VEVI). Fourth Planet has released VEVI4, the fourth generation of the VEVI software, and NetVision. VEVI4 is a cutting-edge computer graphics simulation and remote control applications tool. The NetVision package allows large companies to view and analyze in virtual 3D space such things as the health or performance of their computer network or locate a trouble spot on an electric power grid. Other products are forthcoming. Fourth Planet is currently part of the NASA/Ames Technology Commercialization Center, a business incubator for start-up companies.

  4. 3D MR imaging in real time

    NASA Astrophysics Data System (ADS)

    Guttman, Michael A.; McVeigh, Elliot R.

    2001-05-01

    A system has been developed to produce live 3D volume renderings from an MR scanner. Whereas real-time 2D MR imaging has been demonstrated by several groups, 3D volumes are currently rendered off-line to gain greater understanding of anatomical structures. For example, surgical planning is sometimes performed by viewing 2D images or 3D renderings from previously acquired image data. A disadvantage of this approach is misregistration, which could occur if the anatomy changes due to normal muscle contractions or surgical manipulation. The ability to produce volume renderings in real time and present them in the magnet room could eliminate this problem, and enable or benefit other types of interventional procedures. The system uses the data stream generated by a fast 2D multi-slice pulse sequence to update a volume rendering immediately after a new slice is available. We demonstrate some basic types of user interaction with the rendering during imaging at a rate of up to 20 frames per second.

  5. A heterogeneous sensor network simulation system with integrated terrain data for real-time target detection in 3D space

    NASA Astrophysics Data System (ADS)

    Lin, Hong; Tanner, Steve; Rushing, John; Graves, Sara; Criswell, Evans

    2008-03-01

    Large scale sensor networks composed of many low-cost small sensors networked together with a small number of high fidelity position sensors can provide a robust, fast and accurate air defense and warning system. The team has been developing simulations of such large networks, and is now adding terrain data in an effort to provide more realistic analysis of the approach. In this work, a heterogeneous sensor network simulation system with integrated terrain data for real-time target detection in a three-dimensional environment is presented. The sensor network can be composed of large numbers of low fidelity binary and bearing-only sensors, and small numbers of high fidelity position sensors, such as radars. The binary and bearing-only sensors are randomly distributed over a large geographic region, while the position sensors are distributed evenly. The elevations of the sensors are determined through the use of the DTED Level 0 dataset. The targets are located by fusing measurement information from all types of sensors modeled by the simulation. The network simulation utilizes the same search-based optimization algorithm as our previous two-dimensional sensor network simulation, with some significant modifications. The fusion algorithm is parallelized using a spatial decomposition approach: the entire surveillance area is divided into small regions and each region is assigned to one compute node. Each node processes sensor measurements and terrain data only for its assigned subregion. A master process combines the information from all the compute nodes to obtain the overall network state. The simulation results indicate that the distributed fusion algorithm is efficient enough that an optimal solution can be reached before the arrival of the next sensor data within a reasonable time interval, and real-time target detection can be achieved. The simulation was performed on a Linux cluster with communication between nodes facilitated by the Message Passing Interface.
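
    A minimal sketch of the spatial-decomposition parallelization described above, written with mpi4py: each rank fuses only the sensor measurements that fall in its sub-region, and a master rank gathers the per-region states. The region layout, measurement format and the "fusion" step are illustrative assumptions, not the authors' code.

    ```python
    # Spatial decomposition across MPI ranks: each rank processes measurements
    # from its own sub-region of the surveillance area; rank 0 gathers the
    # per-region results into an overall network state.
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Divide the x-extent of the surveillance area evenly among compute nodes.
    x_min, x_max = 0.0, 100_000.0                 # assumed extent in metres
    edges = np.linspace(x_min, x_max, size + 1)
    lo, hi = edges[rank], edges[rank + 1]

    # Placeholder measurements: (x, y, z, value) rows, identical on every rank here.
    rng = np.random.default_rng(0)
    measurements = rng.uniform([x_min, 0, 0, 0], [x_max, 50_000, 5_000, 1], (1000, 4))

    local = measurements[(measurements[:, 0] >= lo) & (measurements[:, 0] < hi)]
    # Stand-in "fusion": centroid of the detections inside this sub-region.
    local_state = local[:, :3].mean(axis=0) if len(local) else None

    states = comm.gather(local_state, root=0)     # master combines sub-region states
    if rank == 0:
        print([s for s in states if s is not None])
    ```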

  6. Ames Lab 101: Real-Time 3D Imaging

    ScienceCinema

    Zhang, Song

    2016-07-12

    Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.

  7. Ames Lab 101: Real-Time 3D Imaging

    SciTech Connect

    Zhang, Song

    2010-01-01

    Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.

  8. Real-time prediction and gating of respiratory motion in 3D space using extended Kalman filters and Gaussian process regression network.

    PubMed

    Bukhari, W; Hong, S-M

    2016-03-01

    The prediction as well as the gating of respiratory motion have received much attention over the last two decades for reducing the targeting error of the radiation treatment beam due to respiratory motion. In this article, we present a real-time algorithm for predicting respiratory motion in 3D space and realizing a gating function without pre-specifying a particular phase of the patient's breathing cycle. The algorithm, named EKF-GPRN(+), first employs an extended Kalman filter (EKF) independently along each coordinate to predict the respiratory motion and then uses a Gaussian process regression network (GPRN) to correct the prediction error of the EKF in 3D space. The GPRN is a nonparametric Bayesian algorithm for modeling input-dependent correlations between the output variables in multi-output regression. Inference in GPRN is intractable and we employ variational inference with mean field approximation to compute an approximate predictive mean and predictive covariance matrix. The approximate predictive mean is used to correct the prediction error of the EKF. The trace of the approximate predictive covariance matrix is utilized to capture the uncertainty in EKF-GPRN(+) prediction error and systematically identify breathing points with a higher probability of large prediction error in advance. This identification enables us to pause the treatment beam over such instances. EKF-GPRN(+) implements a gating function by using simple calculations based on the trace of the predictive covariance matrix. Extensive numerical experiments are performed based on a large database of 304 respiratory motion traces to evaluate EKF-GPRN(+). The experimental results show that the EKF-GPRN(+) algorithm reduces the patient-wise prediction error to 38%, 40% and 40% in root-mean-square, compared to no prediction, at lookahead lengths of 192 ms, 384 ms and 576 ms, respectively. The EKF-GPRN(+) algorithm can further reduce the prediction error by employing the gating
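
    A minimal sketch of the per-coordinate prediction idea, assuming a plain constant-velocity Kalman filter as a stand-in for the paper's extended Kalman filter; the GPRN correction and the gating step are omitted, and the noise covariances, sampling interval and toy breathing trace are assumed values.

    ```python
    # One-coordinate respiratory-motion prediction with a constant-velocity
    # Kalman filter (stand-in for the per-axis EKF described in the abstract).
    import numpy as np

    dt = 0.192                                  # one lookahead step (192 ms)
    F = np.array([[1.0, dt], [0.0, 1.0]])       # state: [position, velocity]
    H = np.array([[1.0, 0.0]])                  # only position is observed
    Q = 1e-4 * np.eye(2)                        # process noise (assumed)
    R = np.array([[1e-2]])                      # measurement noise (assumed)

    def predict_trace(z):
        """Return the one-step-ahead position predictions for a 1D trace z."""
        x, P, preds = np.array([z[0], 0.0]), np.eye(2), []
        for zk in z:
            x, P = F @ x, F @ P @ F.T + Q       # predict
            preds.append((H @ x)[0])            # prediction before seeing zk
            S = H @ P @ H.T + R                 # update with the new sample
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (np.array([zk]) - H @ x)
            P = (np.eye(2) - K @ H) @ P
        return np.array(preds)

    t = np.arange(0, 30, dt)
    trace = 5.0 * np.sin(2 * np.pi * t / 4.0)   # toy 4 s breathing cycle
    print(predict_trace(trace)[:5])
    ```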

  9. Real-time monitoring of 3D cell culture using a 3D capacitance biosensor.

    PubMed

    Lee, Sun-Mi; Han, Nalae; Lee, Rimi; Choi, In-Hong; Park, Yong-Beom; Shin, Jeon-Soo; Yoo, Kyung-Hwa

    2016-03-15

    Three-dimensional (3D) cell cultures have recently received attention because they represent a more physiologically relevant environment compared to conventional two-dimensional (2D) cell cultures. However, 2D-based imaging techniques or cell sensors are insufficient for real-time monitoring of cellular behavior in 3D cell culture. Here, we report investigations conducted with a 3D capacitance cell sensor consisting of vertically aligned pairs of electrodes. When GFP-expressing human breast cancer cells (GFP-MCF-7) encapsulated in alginate hydrogel were cultured in a 3D cell culture system, cellular activities, such as cell proliferation and apoptosis at different heights, could be monitored non-invasively and in real-time by measuring the change in capacitance with the 3D capacitance sensor. Moreover, we were able to monitor cell migration of human mesenchymal stem cells (hMSCs) with our 3D capacitance sensor.

  10. Real-time monitoring of 3D cell culture using a 3D capacitance biosensor.

    PubMed

    Lee, Sun-Mi; Han, Nalae; Lee, Rimi; Choi, In-Hong; Park, Yong-Beom; Shin, Jeon-Soo; Yoo, Kyung-Hwa

    2016-03-15

    Three-dimensional (3D) cell cultures have recently received attention because they represent a more physiologically relevant environment compared to conventional two-dimensional (2D) cell cultures. However, 2D-based imaging techniques or cell sensors are insufficient for real-time monitoring of cellular behavior in 3D cell culture. Here, we report investigations conducted with a 3D capacitance cell sensor consisting of vertically aligned pairs of electrodes. When GFP-expressing human breast cancer cells (GFP-MCF-7) encapsulated in alginate hydrogel were cultured in a 3D cell culture system, cellular activities, such as cell proliferation and apoptosis at different heights, could be monitored non-invasively and in real-time by measuring the change in capacitance with the 3D capacitance sensor. Moreover, we were able to monitor cell migration of human mesenchymal stem cells (hMSCs) with our 3D capacitance sensor. PMID:26386332

  11. 3D RISM theory with fast reciprocal-space electrostatics

    SciTech Connect

    Heil, Jochen; Kast, Stefan M.

    2015-03-21

    The calculation of electrostatic solute-solvent interactions in 3D RISM (“three-dimensional reference interaction site model”) integral equation theory is recast in a form that allows for a computational treatment analogous to the “particle-mesh Ewald” formalism as used for molecular simulations. In addition, relations that connect 3D RISM correlation functions and interaction potentials with thermodynamic quantities such as the chemical potential and average solute-solvent interaction energy are reformulated in a way that calculations of expensive real-space electrostatic terms on the 3D grid are completely avoided. These methodological enhancements allow for both a significant speedup, particularly for large solute systems, and a smoother convergence of predicted thermodynamic quantities with respect to box size, as illustrated for several benchmark systems.
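
    A minimal sketch of the reciprocal-space idea behind particle-mesh-Ewald-like treatments: the periodic Coulomb potential of a gridded charge density is obtained by dividing its Fourier transform by k² instead of summing interactions in real space. Grid size, box length and charges are assumed toy values; this is not the 3D RISM implementation.

    ```python
    # Reciprocal-space (spectral) solution of Poisson's equation on a periodic
    # 3D grid: phi(k) = 4*pi*rho(k)/k^2 in Gaussian units.
    import numpy as np

    n, L = 64, 20.0                        # grid points per axis, box edge (assumed)
    rho = np.zeros((n, n, n))
    rho[16, 32, 32] += 1.0                 # +1 point charge, grid-assigned
    rho[48, 32, 32] -= 1.0                 # -1 point charge
    rho /= (L / n) ** 3                    # convert to a charge density

    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = np.inf                   # drop the k = 0 (net-charge) term

    phi = np.fft.ifftn(4.0 * np.pi * np.fft.fftn(rho) / k2).real
    energy = 0.5 * np.sum(rho * phi) * (L / n) ** 3
    print(energy)
    ```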

  12. Real-time depth map manipulation for 3D visualization

    NASA Astrophysics Data System (ADS)

    Ideses, Ianir; Fishbain, Barak; Yaroslavsky, Leonid

    2009-02-01

    One of the key aspects of 3D visualization is computation of depth maps. Depth maps enable synthesis of 3D video from 2D video and the use of multi-view displays. Depth maps can be acquired in several ways. One method is to measure the real 3D properties of the scene objects. Other methods rely on using two cameras and computing the correspondence for each pixel. Once a depth map is acquired for every frame, it can be used to construct its artificial stereo pair. There are many known methods for computing the optical flow between adjacent video frames. The drawback of these methods is that they require extensive computation power and are not very well suited to high quality real-time 3D rendering. One efficient method for computing depth maps is extraction of motion vector information from standard video encoders. In this paper we present methods to improve the 3D visualization quality acquired from compression codecs by spatial/temporal and logical operations and manipulations. We show how an efficient real-time implementation of spatial-temporal local order statistics such as median and local adaptive filtering in the 3D-DCT domain can substantially improve the quality of depth maps and consequently 3D video while retaining real-time rendering. Real-time performance is achieved by utilizing multi-core technology using standard parallelization algorithms and libraries (OpenMP, IPP).
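
    A minimal sketch of two of the operations mentioned above: cleaning a noisy depth map with a local median filter and synthesizing an artificial stereo view by shifting pixels in proportion to depth. The motion-vector extraction, 3D-DCT filtering and OpenMP/IPP parallelization are not shown, and the frame sizes and disparity scale are assumed values.

    ```python
    # Median-filter a depth map, then build a second view by depth-dependent
    # horizontal pixel shifts (a simple depth-image-based rendering step).
    import numpy as np
    from scipy.ndimage import median_filter

    def clean_depth(depth):
        """Suppress blocky or outlier depth estimates with a 5x5 median filter."""
        return median_filter(depth, size=5)

    def synthesize_right_view(left, depth, max_disparity=16):
        """Shift each pixel horizontally by a disparity proportional to its depth."""
        h, w = left.shape
        right = np.zeros_like(left)
        disparity = (depth / depth.max() * max_disparity).astype(int)
        for y in range(h):
            for x in range(w):
                xr = x - disparity[y, x]
                if 0 <= xr < w:
                    right[y, xr] = left[y, x]
        return right

    left = np.random.randint(0, 255, (120, 160), dtype=np.uint8)   # toy frame
    depth = clean_depth(np.random.rand(120, 160))
    right = synthesize_right_view(left, depth)
    ```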

  13. Acquiring 3-D Spatial Data Of A Real Object

    NASA Astrophysics Data System (ADS)

    Wu, C. K.; Wang, D. Q.; Bajcsy, R. K.

    1983-10-01

    A method of acquiring spatial data of a real object via a stereometric system is presented. Three-dimensional (3-D) data of an object are acquired by: (1) camera calibration; (2) stereo matching; (3) multiple stereo views covering the whole object; (4) geometrical computations to determine the 3-D coordinates for each sample point of the object. The analysis and the experimental results indicate that the implemented method is capable of measuring the spatial data of a real object with satisfactory accuracy.
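
    A minimal sketch of step (4) above for a calibrated, rectified stereo pair: depth is computed from disparity and then back-projected to 3-D camera coordinates. The focal length, baseline, principal point and matched pixel coordinates are assumed values, not the paper's calibration.

    ```python
    # Triangulate one matched point pair from a rectified stereo rig.
    import numpy as np

    f = 800.0               # focal length in pixels (assumed from calibration)
    b = 0.12                # baseline between the two cameras in metres (assumed)
    cx, cy = 320.0, 240.0   # principal point (assumed)

    def triangulate(x_left, y_left, x_right):
        """Depth from disparity, then back-projection to camera coordinates."""
        disparity = x_left - x_right
        Z = f * b / disparity
        X = (x_left - cx) * Z / f
        Y = (y_left - cy) * Z / f
        return np.array([X, Y, Z])

    print(triangulate(400.0, 260.0, 380.0))   # one sample point of the object
    ```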

  14. [Real time 3D echocardiography in congenital heart disease].

    PubMed

    Acar, P; Dulac, Y; Taktak, A; Villacèque, M

    2004-05-01

    The introduction of the 3D mode in echocardiography has led to its use in everyday clinical practice. One hundred and fifty real time 3D echocardiographic examinations were performed in 20 foetuses, 110 children and 20 adults with various congenital heart lesions (shunts, valvular lesions, aortic diseases). The 4x matricial probe enables the instantaneous acquisition of transthoracic volumes. Four modes of 3D imaging were used: real time, total volume, colour Doppler and biplane. Quantitative measurements were performed at an outlying station. The feasibility of the method in the foetus, the child and the adult was 90%, 99% and 85%, respectively. Real time 3D echocardiography did not affect the diagnoses made by standard echocardiography. The 3D imaging gave a more accurate description of atrial septal defects and congenital valvular lesions. Biplane imaging was decisive in the quantitative approach to aortic dilatation of Marfan's syndrome and in segmental analysis of the foetal heart. 3D colour Doppler imaging has been disappointing but the possibilities of volumic quantification of blood flow are very promising. The present limitations of the method are the inadequate resolution in the small child and the absence of quantitative measurement on the echograph. The facility of utilisation of the matricial probe should lead to routine usage of 3D echocardiography as with the 2D and Doppler modes. Its value should be decisive in many congenital cardiac lesions requiring surgery or interventional catheterisation. PMID:15214550

  15. Real time 3D and heterogeneous data fusion

    SciTech Connect

    Little, C.Q.; Small, D.E.

    1998-03-01

    This project visualizes characterization data in a 3D setting, in real time. Real time in this sense means collecting the data and presenting it before it delays the user, and processing faster than the acquisition systems so no bottlenecks occur. The goals have been to build a volumetric viewer to display 3D data, demonstrate projecting other data, such as images, onto the 3D data, and display both the 3D and projected images as fast as the data became available. The authors have examined several ways to display 3D surface data. The most effective was generating polygonal surface meshes. They have created surface maps from a continuous stream of 3D range data, fused image data onto the geometry, and displayed the data with a standard 3D rendering package. In parallel with this, they have developed a method to project real-time images onto the surface created. A key component is mapping the data onto the correct surfaces, which requires a priori positional information along with accurate calibration of the camera and lens system.
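
    A minimal sketch of the projection step described above: with calibrated camera intrinsics and an a priori pose, each 3D surface vertex is mapped to a pixel of the live image, which gives the texture coordinate used to drape the image onto the range-derived mesh. The matrices and vertices below are assumed values, not the project's calibration data.

    ```python
    # Project mesh vertices into the camera image with a pinhole camera model.
    import numpy as np

    K = np.array([[800.0, 0.0, 320.0],   # intrinsics from camera/lens calibration
                  [0.0, 800.0, 240.0],
                  [0.0,   0.0,   1.0]])
    R = np.eye(3)                        # camera rotation (a priori pose)
    t = np.array([0.0, 0.0, 2.0])        # camera translation (a priori pose)

    def project(vertices):
        """Map Nx3 world-space vertices to Nx2 pixel coordinates."""
        cam = vertices @ R.T + t          # world frame -> camera frame
        pix = cam @ K.T                   # apply intrinsics
        return pix[:, :2] / pix[:, 2:3]   # perspective divide

    mesh_vertices = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0]])
    print(project(mesh_vertices))         # pixel (texture) coordinates per vertex
    ```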

  16. Real-time structured light intraoral 3D measurement pipeline

    NASA Astrophysics Data System (ADS)

    Gheorghe, Radu; Tchouprakov, Andrei; Sokolov, Roman

    2013-02-01

    Computer aided design and manufacturing (CAD/CAM) is increasingly becoming a standard feature and service provided to patients in dentist offices and denture manufacturing laboratories. Although the quality of the tools and data has slowly improved in recent years, due to various surface measurement challenges, practical, accurate, in vivo, real-time, high quality 3D data acquisition and processing still needs improvement. Advances in GPU computational power have allowed for achieving near real-time 3D intraoral in vivo scanning of the patient's teeth. We explore in this paper, from a real-time perspective, a hardware-software-GPU solution that addresses all the requirements mentioned before. Moreover, we exemplify and quantify the hard and soft deadlines required by such a system and illustrate how they are supported in our implementation.

  17. [Space biology in the 3d decade].

    PubMed

    Gazenko, O G; Parfenov, G P

    1982-01-01

    The paper reviews the major results of experiments on microorganisms, plants and animals flown onboard space vehicles during the past two decades. To explain the experimental findings, it is hypothesized that living beings develop an indirect adaptation to gravity effects which has a bearing only on the phylogenetic process.

  18. Design for an in-space 3D printer

    NASA Astrophysics Data System (ADS)

    McGuire, Thomas; Hirsch, Michael; Parsons, Michael; Leake, Skye; Straub, Jeremy

    2016-05-01

    This paper presents a space mission enablement and cost reduction technology: in-space 3D printing. Using in-space 3D printing, spacecraft can be lighter, require less launch volume and be designed solely for orbital operations. The proposed technology, which supports various thermoplastics and prospectively metals, is presented in detail. Key subsystems such as the energy collection system, the melting unit, and the printing unit are explained.

  19. Study on basic problems in real-time 3D holographic display

    NASA Astrophysics Data System (ADS)

    Jia, Jia; Liu, Juan; Wang, Yongtian; Pan, Yijie; Li, Xin

    2013-05-01

    In recent years, real-time three-dimensional (3D) holographic display has attracted more and more attention. Since a holographic display can entirely reconstruct the wavefront of an actual 3D scene, it can provide all the depth cues for human eye observation and perception, and it is believed to be the most promising technology for future 3D display. However, there are several unsolved basic problems for realizing large-size real-time 3D holographic display with a wide field of view. For example, commercial pixelated spatial light modulators (SLM) always lead to zero-order intensity distortion; 3D holographic display needs a huge number of sampling points for the actual objects or scenes, resulting in enormous computational time; the size and the viewing zone of the reconstructed 3D optical image are limited by the space-bandwidth product of the SLM; noise from the coherent light source as well as from the system severely degrades the quality of the 3D image; and so on. Our work is focused on these basic problems, and some initial results are presented, including a technique derived theoretically and verified experimentally to eliminate the zero-order beam caused by a pixelated phase-only SLM; a method to enlarge the reconstructed 3D image and shorten the reconstruction distance using a concave reflecting mirror; and several algorithms to speed up the calculation of computer generated holograms (CGH) for the display.
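
    A minimal sketch of the computational bottleneck mentioned above: a point-source computer generated hologram is built by summing spherical waves on the SLM plane, and only the phase is kept for a phase-only SLM. The wavelength, pixel pitch, resolution and object points are assumptions, and none of the paper's acceleration or zero-order-suppression techniques are shown.

    ```python
    # Brute-force point-cloud CGH: every object point contributes a spherical
    # wave to every SLM pixel; a real scene needs millions of points, which is
    # why the calculation has to be accelerated.
    import numpy as np

    wavelength = 532e-9                  # assumed green laser
    pitch = 8e-6                         # assumed SLM pixel pitch
    nx, ny = 512, 512                    # small SLM for the example
    k = 2.0 * np.pi / wavelength

    x = (np.arange(nx) - nx / 2) * pitch
    y = (np.arange(ny) - ny / 2) * pitch
    X, Y = np.meshgrid(x, y)

    # Object sample points: (x, y, z, amplitude).
    points = [(0.0, 0.0, 0.10, 1.0), (1e-3, -1e-3, 0.12, 0.8)]

    field = np.zeros((ny, nx), dtype=complex)
    for px, py, pz, amp in points:
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        field += amp * np.exp(1j * k * r) / r     # spherical wave per point

    phase_hologram = np.angle(field)              # phase-only CGH for the SLM
    ```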

  20. Double Negativity in 3D Space Coiling Metamaterials.

    PubMed

    Maurya, Santosh K; Pandey, Abhishek; Shukla, Shobha; Saxena, Sumit

    2016-09-21

    Metamaterials displaying a negative refractive index have remarkable potential to facilitate the manipulation of incident waves for a wide variety of applications such as cloaking, superlensing and the like. The space-coiling approach is a recently explored technique for achieving extreme properties. The space coiling phenomenon causes less energy absorption compared to local resonance phenomena for obtaining extreme parameters. Here we show extreme properties in doubly negative 3D space coiling acoustic metamaterials. The frequency dispersive spectrum of the extreme constitutive parameters has been calculated for a 2D maze and a 3D space coiling labyrinthine structure. This is in good agreement with the calculated acoustic band dispersion.

  1. Double Negativity in 3D Space Coiling Metamaterials

    NASA Astrophysics Data System (ADS)

    Maurya, Santosh K.; Pandey, Abhishek; Shukla, Shobha; Saxena, Sumit

    2016-09-01

    Metamaterials displaying a negative refractive index have remarkable potential to facilitate the manipulation of incident waves for a wide variety of applications such as cloaking, superlensing and the like. The space-coiling approach is a recently explored technique for achieving extreme properties. The space coiling phenomenon causes less energy absorption compared to local resonance phenomena for obtaining extreme parameters. Here we show extreme properties in doubly negative 3D space coiling acoustic metamaterials. The frequency dispersive spectrum of the extreme constitutive parameters has been calculated for a 2D maze and a 3D space coiling labyrinthine structure. This is in good agreement with the calculated acoustic band dispersion.

  2. Double Negativity in 3D Space Coiling Metamaterials

    PubMed Central

    Maurya, Santosh K.; Pandey, Abhishek; Shukla, Shobha; Saxena, Sumit

    2016-01-01

    Metamaterials displaying a negative refractive index have remarkable potential to facilitate the manipulation of incident waves for a wide variety of applications such as cloaking, superlensing and the like. The space-coiling approach is a recently explored technique for achieving extreme properties. The space coiling phenomenon causes less energy absorption compared to local resonance phenomena for obtaining extreme parameters. Here we show extreme properties in doubly negative 3D space coiling acoustic metamaterials. The frequency dispersive spectrum of the extreme constitutive parameters has been calculated for a 2D maze and a 3D space coiling labyrinthine structure. This is in good agreement with the calculated acoustic band dispersion. PMID:27649966

  3. Double Negativity in 3D Space Coiling Metamaterials.

    PubMed

    Maurya, Santosh K; Pandey, Abhishek; Shukla, Shobha; Saxena, Sumit

    2016-01-01

    Metamaterials displaying a negative refractive index have remarkable potential to facilitate the manipulation of incident waves for a wide variety of applications such as cloaking, superlensing and the like. The space-coiling approach is a recently explored technique for achieving extreme properties. The space coiling phenomenon causes less energy absorption compared to local resonance phenomena for obtaining extreme parameters. Here we show extreme properties in doubly negative 3D space coiling acoustic metamaterials. The frequency dispersive spectrum of the extreme constitutive parameters has been calculated for a 2D maze and a 3D space coiling labyrinthine structure. This is in good agreement with the calculated acoustic band dispersion. PMID:27649966

  4. V3D enables real-time 3D visualization and quantitative analysis of large-scale biological image data sets.

    PubMed

    Peng, Hanchuan; Ruan, Zongcai; Long, Fuhui; Simpson, Julie H; Myers, Eugene W

    2010-04-01

    The V3D system provides three-dimensional (3D) visualization of gigabyte-sized microscopy image stacks in real time on current laptops and desktops. V3D streamlines the online analysis, measurement and proofreading of complicated image patterns by combining ergonomic functions for selecting a location in an image directly in 3D space and for displaying biological measurements, such as from fluorescent probes, using the overlaid surface objects. V3D runs on all major computer platforms and can be enhanced by software plug-ins to address specific biological problems. To demonstrate this extensibility, we built a V3D-based application, V3D-Neuron, to reconstruct complex 3D neuronal structures from high-resolution brain images. V3D-Neuron can precisely digitize the morphology of a single neuron in a fruitfly brain in minutes, with about a 17-fold improvement in reliability and tenfold savings in time compared with other neuron reconstruction tools. Using V3D-Neuron, we demonstrate the feasibility of building a 3D digital atlas of neurite tracts in the fruitfly brain. PMID:20231818

  5. Custom 3D Printers Revolutionize Space Supply Chain

    NASA Technical Reports Server (NTRS)

    2015-01-01

    Under a series of SBIR contracts with Marshall Space Flight Center, start-up company Made In Space, located on the center's campus, developed a high-precision 3D printer capable of manufacturing items in microgravity. The company will soon have a printer installed on the International Space Station, altering the space supply chain. It will print supplies and tools for NASA, as well as nanosatellite shells and other items for public and private entities.

  6. Real-Time Camera Guidance for 3d Scene Reconstruction

    NASA Astrophysics Data System (ADS)

    Schindler, F.; Förstner, W.

    2012-07-01

    We propose a framework for operator guidance during the image acquisition process for reliable multi-view stereo reconstruction. The goal is to achieve full coverage of the object and sufficient overlap. Multi-view stereo is a commonly used method to reconstruct both camera trajectory and 3D object shape. After determining an initial solution, a globally optimal reconstruction is usually obtained by executing a bundle adjustment involving all images. Acquiring suitable images, however, still requires an experienced operator to ensure accuracy and completeness of the final solution. We propose an interactive framework for guiding inexperienced users or possibly an autonomous robot. Using approximate camera orientations and object points we estimate point uncertainties within a sliding bundle adjustment and suggest appropriate camera movements. A visual feedback system communicates the decisions to the user in an intuitive way. We demonstrate the suitability of our system with a virtual image acquisition simulation as well as in real-world scenarios. We show that when following the camera movements suggested by our system, the proposed framework is able to generate good approximate values for the bundle adjustment, leading to accurate results compared to ground truth after a few iterations. Possible applications are non-professional 3D acquisition systems on low-cost platforms like mobile phones, autonomously navigating robots as well as online flight planning of unmanned aerial vehicles.

  7. Real-time cylindrical curvilinear 3-D ultrasound imaging.

    PubMed

    Pua, E C; Yen, J T; Smith, S W

    2003-07-01

    In patients who are obese or exhibit signs of pulmonary disease, standard transthoracic scanning may yield poor quality cardiac images. For these conditions, two-dimensional transesophageal echocardiography (TEE) is established as an essential diagnostic tool. Current techniques in transesophageal scanning, though, are limited by incomplete visualization of cardiac structures in close proximity to the transducer. Thus, we propose a 2D curvilinear array for 3D transesophageal echocardiography in order to widen the field of view and increase visualization close to the transducer face. In this project, a 440 channel 5 MHz two-dimensional array with a 12.6 mm aperture diameter on a flexible interconnect circuit has been molded to a 4 mm radius of curvature. A 75% element yield was achieved during fabrication and an average -6 dB bandwidth of 30% was observed in pulse-echo tests. Using this transducer in conjunction with modifications to the beamformer delay software and scan converter display software of our 3D scanner, we obtained cylindrical real-time curvilinear volumetric scans of tissue phantoms, including a field of view of greater than 120 degrees in the curved, azimuth direction and 65 degrees phased array sector scans in the elevation direction. These images were achieved using a stepped subaperture across the cylindrical curvilinear direction of the transducer face and phased array sector scanning in the noncurved plane. In addition, real-time volume rendered images of a tissue mimicking phantom with holes ranging from 1 cm to less than 4 mm have been obtained. 3D color flow Doppler results have also been acquired. This configuration can theoretically achieve volumes displaying 180 degrees by 120 degrees. The transducer is also capable of obtaining images through a curvilinear stepped subaperture in azimuth in conjunction with a rectilinear stepped subaperture in elevation, further increasing the field of view close to the transducer face. Future work

  8. Space Partitioning for Privacy Enabled 3D City Models

    NASA Astrophysics Data System (ADS)

    Filippovska, Y.; Wichmann, A.; Kada, M.

    2016-10-01

    Due to recent technological progress, data capturing and processing of highly detailed (3D) data has become extensive. And despite all prospects of potential uses, data that includes personal living spaces and public buildings can also be considered as a serious intrusion into people's privacy and a threat to security. It becomes especially critical if the data is visible to the general public. Thus, a compromise is needed between open access to data and privacy requirements, which can be very different for each application. As privacy is a complex and versatile topic, the focus of this work particularly lies on the visualization of 3D urban data sets. For the purpose of privacy enabled visualizations of 3D city models, we propose to partition the (living) spaces into privacy regions, each featuring its own level of anonymity. Within each region, the depicted 2D and 3D geometry and imagery is anonymized with cartographic generalization techniques. The underlying spatial partitioning is realized as a 2D map generated as a straight skeleton of the open space between buildings. The resulting privacy cells are then merged according to the privacy requirements associated with each building to form larger regions, their borderlines smoothed, and transition zones established between privacy regions to have a harmonious visual appearance. It is exemplarily demonstrated how the proposed method generates privacy enabled 3D city models.

  9. 3D Network Analysis for Indoor Space Applications

    NASA Astrophysics Data System (ADS)

    Tsiliakou, E.; Dimopoulou, E.

    2016-10-01

    Indoor space differs from outdoor environments, since it is characterized by a higher level of structural complexity, geometry, as well as topological relations. Indoor space can be considered as the most important component in a building's conceptual modelling, on which applications such as indoor navigation, routing or analysis are performed. Therefore, the conceptual meaning of sub spaces or the activities taking place in physical building boundaries (e.g. walls), require the comprehension of the building's indoor hierarchical structure. The scope of this paper is to perform 3D network analysis in a building's interior; the paper is structured as follows: In Section 1 the definition of indoor space is provided and indoor navigation requirements are analysed. Section 2 describes the processes of indoor space modeling, as well as routing applications. In Section 3, a case study is examined involving a 3D building model generated in CityEngine (exterior shell) and ArcScene (interior parts), in which the use of commercially available software tools (ArcGIS, ESRI), in terms of indoor routing and 3D network analysis, is explored. The fundamentals of performing 3D analysis with the ArcGIS Network Analyst extension were tested. Finally, a geoprocessing model is presented, which was specifically designed to interactively find the best route in ArcScene. The paper ends with discussion and concluding remarks in Section 4.

  10. A technique for 3-D robot vision for space applications

    NASA Technical Reports Server (NTRS)

    Markandey, V.; Tagare, H.; Defigueiredo, R. J. P.

    1987-01-01

    An extension of the MIAG algorithm for recognition and motion parameter determination of general 3-D polyhedral objects based on model matching techniques and using Moment Invariants as features of object representation is discussed. Results of tests conducted on the algorithm under conditions simulating space conditions are presented.

  11. Demonstration of a 3D vision algorithm for space applications

    NASA Technical Reports Server (NTRS)

    Defigueiredo, Rui J. P. (Editor)

    1987-01-01

    This paper reports an extension of the MIAG algorithm for recognition and motion parameter determination of general 3-D polyhedral objects based on model matching techniques and using moment invariants as features of object representation. Results of tests conducted on the algorithm under conditions simulating space conditions are presented.

  12. Undersampling k-space using fast progressive 3D trajectories.

    PubMed

    Spiniak, Juan; Guesalaga, Andres; Mir, Roberto; Guarini, Marcelo; Irarrazaval, Pablo

    2005-10-01

    In 3D MRI, sampling k-space with traditional trajectories can be excessively time-consuming. Fast imaging trajectories are used in an attempt to efficiently cover the k-space and reduce the scan time without significantly affecting the image quality. In many applications, further reductions in scan time can be achieved via undersampling of the k-space; however, no clearly optimal method exists. In most 3D trajectories the k-space is divided into regions that are sampled with shots that share a common geometry (e.g., spirals). A different approach is to design trajectories that gradually but uniformly cover the k-space. In the current work, successive shots progressively add sampled regions to the 3D frequency space. By cutting the sequence short, a natural undersampling method is obtained. This can be particularly efficient because in these types of trajectories the contribution of new information by later shots is less significant. In this work the performance of progressive trajectories for different degrees of undersampling is assessed with trajectories based on missile guidance (MG) ideas. The results show that the approach can be efficient in terms of reducing the scan time, and performs better than the stack of spirals (SOS) technique, particularly under nonideal conditions.
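
    A minimal sketch of the "cut the sequence short" idea: k-space samples are ordered centre-out (a crude stand-in for the progressive missile-guidance trajectories), the acquisition is truncated at several fractions, and the image is reconstructed by zero-filled inverse FFT. The phantom, grid size and fractions are assumed values.

    ```python
    # Retrospective undersampling of a 3D k-space with a centre-out ordering.
    import numpy as np

    n = 64
    phantom = np.zeros((n, n, n))
    phantom[20:44, 20:44, 20:44] = 1.0            # toy 3D object

    kspace = np.fft.fftshift(np.fft.fftn(phantom))

    # Order k-space locations from the centre of k-space outwards.
    coords = np.indices((n, n, n)).reshape(3, -1).T - n // 2
    order = np.argsort((coords ** 2).sum(axis=1))

    for fraction in (1.0, 0.5, 0.25):             # acquisition cut short
        keep = order[: int(fraction * order.size)]
        mask = np.zeros(n ** 3, dtype=bool)
        mask[keep] = True
        under = np.where(mask.reshape(n, n, n), kspace, 0)
        recon = np.abs(np.fft.ifftn(np.fft.ifftshift(under)))
        err = np.linalg.norm(recon - phantom) / np.linalg.norm(phantom)
        print(f"fraction={fraction:.2f}  relative error={err:.3f}")
    ```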

  13. Systems biology in 3D space--enter the morphome.

    PubMed

    Lucocq, John M; Mayhew, Terry M; Schwab, Yannick; Steyer, Anna M; Hacker, Christian

    2015-02-01

    Systems-based understanding of living organisms depends on acquiring huge datasets from arrays of genes, transcripts, proteins, and lipids. These data, referred to as 'omes', are assembled using 'omics' methodologies. Currently a comprehensive, quantitative view of cellular and organellar systems in 3D space at nanoscale/molecular resolution is missing. We introduce here the term 'morphome' for the distribution of living matter within a 3D biological system, and 'morphomics' for methods of collecting 3D data systematically and quantitatively. A sampling-based approach termed stereology currently provides rapid, precise, and minimally biased morphomics. We propose that stereology solves the 'big data' problem posed by emerging wide-scale electron microscopy (EM) and can establish quantitative links between the newer nanoimaging platforms such as electron tomography, cryo-EM, and correlative microscopy.

  14. Real-time 3D video conference on generic hardware

    NASA Astrophysics Data System (ADS)

    Desurmont, X.; Bruyelle, J. L.; Ruiz, D.; Meessen, J.; Macq, B.

    2007-02-01

    Nowadays, video-conferencing is becoming more and more advantageous because of the economic and ecological cost of transport. Several platforms exist. The goal of the TIFANIS immersive platform is to let users interact as if they were physically together. Unlike previous teleimmersion systems, TIFANIS uses generic hardware to achieve an economically realistic implementation. The basic functions of the system are to capture the scene, transmit it through digital networks to other partners, and then render it according to each partner's viewing characteristics. The image processing part should run in real time. We propose to analyze the whole system. It can be split into different services like central processing unit (CPU), graphical rendering, direct memory access (DMA), and communications through the network. Most of the processing is done by the CPU. It is composed of the 3D reconstruction and the detection and tracking of faces from the video stream. However, the processing needs to be parallelized in several threads that have as few dependencies as possible. In this paper, we present these issues, and the way we deal with them.

  15. Real time 3D scanner: investigations and results

    NASA Astrophysics Data System (ADS)

    Nouri, Taoufik; Pflug, Leopold

    1993-12-01

    This article presents a concept for the reconstruction of 3-D objects using non-invasive, touchless techniques. The principle of this method is to project parallel optical interference fringes onto an object and then to record the object from two angles of view. With appropriate processing, the 3-D object is reconstructed even when it has no plane of symmetry. The 3-D surface data is available immediately in digital form for computer visualization and for analysis software tools. The optical set-up for recording the 3-D object, the 3-D data extraction and treatment, as well as the reconstruction of the 3-D object are reported and commented on. This application is dedicated to reconstructive/cosmetic surgery, CAD, animation and research purposes.

  16. Double Ring Array Catheter for In Vivo Real-Time 3D Ultrasound.

    PubMed

    Smith, Stephen W; Gardea, Paul; Patel, Vivek; Douglas, Stephen J; Wolf, Patrick D

    2014-03-12

    We developed new forward-viewing matrix transducers consisting of double ring arrays of 118 total PZT elements integrated into catheters used to deploy medical interventional devices. Our goal is 3D ultrasound guidance of medical device implantation to reduce x-ray fluoroscopy exposure. The double ring arrays were fabricated on inner and outer custom polyimide flexible circuits with inter-element spacing of 0.20 mm and then wrapped around an 11 French (Fr) catheter to produce a 15 Fr catheter (outer diameter [O.D.]). We used a braided cabling technology to connect the elements to the Volumetrics Medical Imaging (VMI) real-time 3D ultrasound scanner. Transducer performance yielded an average -6 dB fractional bandwidth of 49% ± 11% centered at 4.4 MHz for 118 elements. Real-time 3D cardiac scans of the in vivo pig model yielded good image quality including en face views of the tricuspid valve and real-time 3D guidance of an endo-myocardial biopsy catheter introduced into the left ventricle. PMID:24626564

  17. Towards a 3D Space Radiation Transport Code

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Tripathi, R. K.; Cucinotta, F. A.; Heinbockel, J. H.; Tweed, J.

    2002-01-01

    High-speed computational procedures for space radiation shielding have relied on asymptotic expansions in terms of the off-axis scatter and replacement of the general geometry problem by a collection of flat plates. This type of solution was derived for application to human-rated systems in which the radius of the shielded volume is large compared to the off-axis diffusion limiting leakage at lateral boundaries. Over the decades these computational codes have become relatively complete, and lateral diffusion effects are now being added. The analysis for developing a practical full 3D space shielding code is presented.

  18. Reconstruction and 3D visualisation based on objective real 3D based documentation.

    PubMed

    Bolliger, Michael J; Buck, Ursula; Thali, Michael J; Bolliger, Stephan A

    2012-09-01

    Reconstructions based directly upon forensic evidence alone are called primary information. Historically this consists of documentation of findings by verbal protocols, photographs and other visual means. Currently, modern imaging techniques such as 3D surface scanning and radiological methods (computed tomography, magnetic resonance imaging) are also applied. Secondary interpretation is based on facts and the examiner's experience. Usually such reconstructive expert reports are given in written form, and are often enhanced by sketches. However, narrative interpretations can, especially in complex courses of action, be difficult to present and can be misunderstood. In this report we demonstrate the use of graphic reconstruction of secondary interpretation with supporting pictorial evidence, applying digital visualisation (using 'Poser') or scientific animation (using '3D Studio Max', 'Maya'), and present methods of clearly distinguishing between factual documentation and examiners' interpretation, based on three cases. The first case involved a pedestrian who was initially struck by a car on a motorway and was then run over by a second car. The second case involved a suicidal gunshot to the head with a rifle, in which the trigger was pushed with a rod. The third case dealt with a collision between two motorcycles. Pictorial reconstruction of the secondary interpretation of these cases has several advantages. The images enable an immediate overview, give rise to enhanced clarity, and compel the examiner to look at all details if he or she is to create a complete image. PMID:21979427

  19. Integration of GPR and Laser Position Sensors for Real-Time 3D Data Fusion

    NASA Astrophysics Data System (ADS)

    Grasmueck, M.; Viggiano, D.

    2005-05-01

    Non-invasive 3D imaging visualizes anatomy and contents inside objects. Such tools are a commodity for medical doctors, who diagnose a patient's health without a scalpel, and for airport security staff, who inspect the contents of baggage without opening it. For geologists, hydrologists, archeologists and engineers wanting to see inside the shallow subsurface, such 3D tools are still a rarity. Theory and practice show that full-resolution 3D Ground Penetrating Radar (GPR) imaging requires unaliased recording of dipping reflections and diffractions. For a heterogeneous subsurface, the minimum grid spacing of GPR measurements should be a quarter wavelength or less in all directions. Consequently, positioning precision needs to be better than an eighth of a wavelength for correct grid point assignment. Until now 3D GPR imaging has not been practical: data acquisition and processing took weeks to months, and data analysis required geophysical training, with no versatile 3D systems commercially available. We have integrated novel rotary laser positioning technology with GPR into a highly efficient and simple to use 3D imaging system. The laser positioning enables acquisition of centimeter accurate x, y, and z coordinates from multiple small detectors attached to moving GPR antennae. Positions streaming with 20 updates/second from each detector are fused in real-time with the GPR data. We developed software for automated data acquisition and real-time 3D GPR data quality control on slices at selected depths. Standard formatted (SEGY) data cubes and animations are generated within an hour after the last trace has been acquired. Examples can be seen at www.3dgpr.info. Such instant 3D GPR can be used as an on-site imaging tool supporting field work, hypothesis testing, and optimal sample collection. Rotary laser positioning has the flexibility to be integrated with multiple moving GPR antennae and other geophysical sensors enabling simple and efficient high resolution 3D data acquisition at

  20. Sonification of range information for 3-D space perception.

    PubMed

    Milios, Evangelos; Kapralos, Bill; Kopinska, Agnieszka; Stergiopoulos, Sotirios

    2003-12-01

    We present a device that allows three-dimensional (3-D) space perception by sonification of range information obtained via a point laser range sensor. The laser range sensor is worn by a blindfolded user, who scans space by pointing the laser beam in different directions. The resulting stream of range measurements is then converted to an auditory signal whose frequency or amplitude varies with the range. Our device differs from existing navigation aids for the visually impaired. Such devices use sonar ranging whose primary purpose is to detect obstacles for navigation, a task to which sonar is well suited due to its wide beam width. In contrast, the purpose of our device is to allow users to perceive the details of 3-D space that surrounds them, a task to which sonar is ill suited, due to artifacts generated by multiple reflections and due to its limited range. Preliminary trials demonstrate that the user is able to easily and accurately detect corners and depth discontinuities and to perceive the size of the surrounding space.
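
    A minimal sketch of the range-to-sound mapping described above: each laser range reading becomes a short tone whose pitch falls as the range grows. The frequency limits, range window and sample rate are illustrative assumptions, not the authors' calibration.

    ```python
    # Sonify a stream of range readings as a sequence of tones.
    import numpy as np

    SAMPLE_RATE = 44100
    F_NEAR, F_FAR = 2000.0, 200.0        # Hz at the closest / farthest range (assumed)
    R_MIN, R_MAX = 0.5, 10.0             # usable range window in metres (assumed)

    def range_to_tone(range_m, duration=0.1):
        """Return one tone (float samples) encoding a single range reading."""
        r = np.clip(range_m, R_MIN, R_MAX)
        frac = (r - R_MIN) / (R_MAX - R_MIN)
        freq = F_NEAR + frac * (F_FAR - F_NEAR)   # closer -> higher pitch
        t = np.arange(int(SAMPLE_RATE * duration)) / SAMPLE_RATE
        return np.sin(2.0 * np.pi * freq * t)

    # Readings taken while sweeping the beam past a corner become audio samples.
    audio = np.concatenate([range_to_tone(r) for r in (3.0, 2.5, 0.8, 0.9, 4.0)])
    ```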

  1. VPython: Writing Real-time 3D Physics Programs

    NASA Astrophysics Data System (ADS)

    Chabay, Ruth

    2001-06-01

    VPython (http://cil.andrew.cmu.edu/projects/visual) combines the Python programming language with an innovative 3D graphics module called Visual, developed by David Scherer. Designed to make 3D physics simulations accessible to novice programmers, VPython allows the programmer to write a purely computational program without any graphics code, and produces an interactive real-time 3D graphical display. In a program, 3D objects are created and their positions modified by computational algorithms. Running in a separate thread, the Visual module monitors the positions of these objects and renders them many times per second. Using the mouse, one can zoom and rotate to navigate through the scene. After one hour of instruction, students in an introductory physics course at Carnegie Mellon University, including those who have never programmed before, write programs in VPython to model the behavior of physical systems and to visualize fields in 3D. The Numeric array processing module allows the construction of more sophisticated simulations and models as well. VPython is free and open source. The Visual module is based on OpenGL, and runs on Windows, Linux, and Macintosh.
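
    A short example in the spirit of the abstract, written against the current vpython package (the course originally used the older Visual module, so import names may differ): the program only creates objects and updates their positions computationally, and the module renders the scene in an interactive 3D window.

    ```python
    # A bouncing ball: pure computation in the loop, rendering handled by VPython.
    from vpython import sphere, box, vector, color, rate

    ball = sphere(pos=vector(0, 2, 0), radius=0.2, color=color.red, make_trail=True)
    floor = box(pos=vector(0, 0, 0), size=vector(4, 0.05, 4))
    velocity = vector(1.0, 0.0, 0.0)
    g = vector(0.0, -9.8, 0.0)
    dt = 0.01

    while True:
        rate(100)                             # at most 100 updates per second
        velocity = velocity + g * dt          # physics only; no drawing calls
        ball.pos = ball.pos + velocity * dt
        if ball.pos.y < floor.pos.y + ball.radius:
            velocity.y = -velocity.y          # bounce off the floor
    ```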

  2. Improving 3d Spatial Queries Search: Newfangled Technique of Space Filling Curves in 3d City Modeling

    NASA Astrophysics Data System (ADS)

    Uznir, U.; Anton, F.; Suhaibah, A.; Rahman, A. A.; Mioc, D.

    2013-09-01

    The advantages of three dimensional (3D) city models can be seen in various applications including photogrammetry, urban and regional planning, computer games, etc. They expand the visualization and analysis capabilities of Geographic Information Systems on cities, and they can be developed using web standards. However, these 3D city models consume much more storage compared to two dimensional (2D) spatial data. They involve extra geometrical and topological information together with semantic data. Without a proper spatial data clustering method and its corresponding spatial data access method, retrieving portions of and especially searching these 3D city models will not be done optimally. Even though current developments are based on an open data model allotted by the Open Geospatial Consortium (OGC) called CityGML, its XML-based structure makes it challenging to cluster the 3D urban objects. In this research, we propose an opponent data constellation technique of space-filling curves (3D Hilbert curves) for 3D city model data representation. Unlike previous methods that try to project 3D or n-dimensional data down to 2D or 3D using Principal Component Analysis (PCA) or Hilbert mappings, in this research we extend the Hilbert space-filling curve to one higher dimension for 3D city model data implementations. The query performance was tested using a CityGML dataset of 1,000 building blocks and the results are presented in this paper. Implementing space-filling curves in 3D city modeling will improve data retrieval time by means of optimized 3D adjacency, nearest neighbor information and 3D indexing. The Hilbert mapping, which maps a subinterval of the [0, 1] interval to the corresponding portion of the d-dimensional Hilbert's curve, preserves the Lebesgue measure and is Lipschitz continuous. Depending on the applications, several alternatives are possible in order to cluster spatial data together in the third dimension compared to its
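
    A minimal sketch of clustering 3D city objects along a space-filling curve. A Morton (Z-order) key is used here as a simpler stand-in for the 3D Hilbert curve proposed in the paper (Hilbert ordering preserves locality better); the building centroids are assumed sample data, not the CityGML test set.

    ```python
    # Sort objects by a 3D space-filling-curve key so that objects close in
    # space tend to end up close together in storage and in range queries.
    def morton3d(x, y, z, bits=10):
        """Interleave the bits of three small non-negative integers into one key."""
        key = 0
        for i in range(bits):
            key |= ((x >> i) & 1) << (3 * i)
            key |= ((y >> i) & 1) << (3 * i + 1)
            key |= ((z >> i) & 1) << (3 * i + 2)
        return key

    # Quantize building centroids to an integer grid, then sort by curve index.
    buildings = [("b1", (10.2, 4.0, 0.0)), ("b2", (10.5, 4.1, 12.0)), ("b3", (90.0, 60.0, 3.0))]

    def key_of(item):
        _, (x, y, z) = item
        return morton3d(int(x), int(y), int(z))

    for name, _ in sorted(buildings, key=key_of):
        print(name)
    ```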

  3. Embodied collaboration support system for 3D shape evaluation in virtual space

    NASA Astrophysics Data System (ADS)

    Okubo, Masashi; Watanabe, Tomio

    2005-12-01

    Collaboration mainly consists of two tasks: one is each partner's task that is performed by the individual; the other is communication with each other. Both are important objectives for any collaboration support system. In this paper, a collaboration support system for 3D shape evaluation in virtual space is proposed on the basis of studies in both 3D shape evaluation and communication support in virtual space. The proposed system provides two viewpoints, one for each task. One is the viewpoint from behind the user's own avatar, for smooth communication. The other is that of the avatar's eyes, for 3D shape evaluation. Switching between the viewpoints satisfies the task conditions for 3D shape evaluation and communication. The system basically consists of a PC, an HMD and magnetic sensors, and users can share the embodied interaction by observing the interaction between their avatars in virtual space. However, the HMD and magnetic sensors, which are worn by the users, would restrict nonverbal communication. We have therefore tried to compensate for the loss of the nodding of the partner's avatar by introducing the speech-driven embodied interactive actor InterActor. Sensory evaluation by paired comparison of 3D shapes in the collaborative situation in virtual space and in real space, together with a questionnaire, was performed. The result demonstrates the effectiveness of InterActor's nodding in the collaborative situation.

  4. Making It Real: A Cooperative, Multigrade, 3D Design Project

    ERIC Educational Resources Information Center

    Shealer, Ron; Shealer, Michelle

    2014-01-01

    This article describes a cooperative project between eighth graders and first graders called "Going Green in the Neighborhood." The project entailed the first grade students sketching home designs on paper to make a model community, and the eighth grade students taking those drawings and making them into 3D computer models and then…

  5. Geodiversity: Exploration of 3D geological model space

    NASA Astrophysics Data System (ADS)

    Lindsay, M. D.; Jessell, M. W.; Ailleres, L.; Perrouty, S.; de Kemp, E.; Betts, P. G.

    2013-05-01

    The process of building a 3D model necessitates the reconciliation of field observations, geophysical interpretation, geological data uncertainty and the prevailing tectonic evolution hypotheses and interpretations. Uncertainty is compounded when clustered data points collected at local scales are statistically upscaled to one or two points for use in regional models. Interpretation is required to interpolate between sparse field data points using ambiguous geophysical data in covered terranes. It becomes clear that multiple interpretations are possible during model construction. The various interpretations are considered as potential natural representatives, but pragmatism typically dictates that just a single interpretation is offered by the modelling process. Uncertainties are introduced into the 3D model during construction from a variety of sources and through data set optimisation that produces a single model. Practices such as these are likely to result in a model that does not adequately represent the target geology. A set of geometrical ‘geodiversity’ metrics are used to analyse a 3D model of the Gippsland Basin, southeastern Australia after perturbing geological input data via uncertainty simulation. The resulting sets of perturbed geological observations are used to calculate a suite of geological 3D models that display a range of geological architectures. The concept of biodiversity has been adapted for the geosciences to quantify geometric variability, or geodiversity, between models in order to understand the effect uncertainty has on model geometry. Various geometrical relationships (depth, volume, contact surface area, curvature and geological complexity) are used to describe the range of possibilities exhibited throughout the model suite. End-member model geodiversity metrics are classified in a similar manner to taxonomic descriptions. Further analysis of the model suite is performed using principal component analysis (PCA) to determine

  6. Integration of real-time 3D capture, reconstruction, and light-field display

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Pei, Renjing; Liu, Yongchun; Zhang, Xiao

    2015-03-01

    Effective integration of 3D acquisition, reconstruction (modeling) and display technologies into a seamless system provides an augmented experience of visualizing and analyzing real objects and scenes with realistic 3D sensation. Applications can be found in medical imaging, gaming, virtual or augmented reality and hybrid simulations. Although 3D acquisition, reconstruction, and display technologies have gained significant momentum in recent years, there seems to be a lack of attention on synergistically combining these components into an "end-to-end" 3D visualization system. We designed, built and tested an integrated 3D visualization system that is able to capture 3D light-field images in real time, perform 3D reconstruction to build a 3D model of the objects, and display the 3D model on a large autostereoscopic screen. In this article, we will present our system architecture and component designs, hardware/software implementations, and experimental results. We will elaborate on our recent progress on sparse camera array light-field 3D acquisition, real-time dense 3D reconstruction, and autostereoscopic multi-view 3D display. A prototype is finally presented with test results to illustrate the effectiveness of our proposed integrated 3D visualization system.

  7. Real-time 3D vision solution for on-orbit autonomous rendezvous and docking

    NASA Astrophysics Data System (ADS)

    Ruel, S.; English, C.; Anctil, M.; Daly, J.; Smith, C.; Zhu, S.

    2006-05-01

    Neptec has developed a vision system for the capture of non-cooperative objects on orbit. This system uses an active TriDAR sensor and a model based tracking algorithm to provide 6 degree of freedom pose information in real-time from mid range to docking. This system was selected for the Hubble Robotic Vehicle De-orbit Module (HRVDM) mission and for a Detailed Test Objective (DTO) mission to fly on the Space Shuttle. TriDAR (triangulation + LIDAR) technology makes use of a novel approach to 3D sensing by combining triangulation and Time-of-Flight (ToF) active ranging techniques in the same optical path. This approach exploits the complementary nature of these sensing technologies. Real-time tracking of target objects is accomplished using 3D model based tracking algorithms developed at Neptec in partnership with the Canadian Space Agency (CSA). The system provides 6 degrees of freedom pose estimation and incorporates search capabilities to initiate and recover tracking. Pose estimation is performed using an innovative approach that is faster than traditional techniques. This performance allows the algorithms to operate in real-time on the TriDAR's flight certified embedded processor. This paper presents results from simulation and lab testing demonstrating that the system's performance meets the requirements of a complete tracking system for on-orbit autonomous rendezvous and docking.

  8. Display of real-time 3D sensor data in a DVE system

    NASA Astrophysics Data System (ADS)

    Völschow, Philipp; Münsterer, Thomas; Strobel, Michael; Kuhn, Michael

    2016-05-01

    This paper describes the implementation of displaying real-time processed LiDAR 3D data in a DVE pilot assistance system. The goal is to display to the pilot a comprehensive image of the surrounding world without misleading or cluttering information. 3D data which can be attributed, i.e. classified, to terrain or predefined obstacle classes is depicted differently from data belonging to elevated objects which could not be classified. Display techniques may be different for head-down and head-up displays to avoid cluttering of the outside view in the latter case. While terrain is shown as shaded surfaces with grid structures or as grid structures alone, respectively, classified obstacles are typically displayed with obstacle symbols only. Data from objects elevated above ground are displayed as shaded 3D points in space. In addition, the displayed 3D points are accumulated over a certain time frame, which on the one hand allows a cohesive structure to be displayed and on the other hand allows moving objects to be displayed correctly. Furthermore, color coding or texturing can be applied based on known terrain features such as land use.

  9. 3D virtual screening of large combinatorial spaces.

    PubMed

    Muegge, Ingo; Zhang, Qiang

    2015-01-01

    A new method for 3D in silico screening of large virtual combinatorial chemistry spaces is described. The software PharmShape screens millions of individual compounds applying a multi-conformational, pharmacophore- and shape-based approach. Its extension, PharmShapeCC, is capable of screening trillions of compounds from tens of thousands of combinatorial libraries. Key elements of PharmShape and PharmShapeCC are customizable pharmacophore features, a composite inclusion sphere, library core intermediate clustering, and the determination of combinatorial library consensus orientations that allow for orthogonal enumeration of libraries. The performance of the software is illustrated by the prospective identification of a novel CXCR5 antagonist and examples of finding novel chemotypes from synthesizing and evaluating combinatorial hit libraries identified from PharmShapeCC screens for CCR1, LTA4 hydrolase, and MMP-13.

  10. Characterizing 3D Vegetation Structure from Space: Mission Requirements

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G.; Bergen, Kathleen; Blair, James B.; Dubayah, Ralph; Houghton, Richard; Hurtt, George; Kellndorfer, Josef; Lefsky, Michael; Ranson, Jon; Saatchi, Sasan; Shugart, H. H.; Wickland, Diane

    2012-01-01

    Human and natural forces are rapidly modifying the global distribution and structure of terrestrial ecosystems on which all of life depends, altering the global carbon cycle, affecting our climate now and for the foreseeable future, causing steep reductions in species diversity, and endangering Earth's sustainability. To understand changes and trends in terrestrial ecosystems and their functioning as carbon sources and sinks, and to characterize the impact of their changes on climate, habitat and biodiversity, new space assets are urgently needed to produce high spatial resolution global maps of the three-dimensional (3D) structure of vegetation, its biomass above ground, the carbon stored within and the implications for atmospheric greenhouse gas concentrations and climate. These needs were articulated in a 2007 National Research Council (NRC) report (NRC, 2007) recommending a new satellite mission, DESDynI, carrying an L-band Polarized Synthetic Aperture Radar (Pol-SAR) and a multi-beam lidar (Light Detection And Ranging) operating at 1064 nm. The objectives of this paper are to articulate the importance of these new, multi-year, 3D vegetation structure and biomass measurements, to briefly review the feasibility of radar and lidar remote sensing technology to meet these requirements, to define the data products and measurement requirements, and to consider implications of mission durations. The paper addresses these objectives by synthesizing research results and other input from a broad community of terrestrial ecology, carbon cycle, and remote sensing scientists and working groups. We conclude that: (1) current global biomass and 3-D vegetation structure information is unsuitable both for science and for management and policy. The only existing global datasets of biomass are approximations based on combining land cover type and representative carbon values, instead of measurements of actual biomass. Current measurement attempts based on radar and multispectral

  11. Eye-safe digital 3-D sensing for space applications

    NASA Astrophysics Data System (ADS)

    Beraldin, J.-Angelo; Blais, Francois; Rioux, Marc; Cournoyer, Luc; Laurin, Denis G.; MacLean, Steve G.

    2000-01-01

    This paper focuses on the characteristics and performance of an eye-safe laser range scanner (LARS) with short- and medium-range 3D sensing capabilities for space applications. This versatile LARS is a precision measurement tool that will complement the current Canadian Space Vision System. The major advantages of the LARS over conventional video-based imaging are its ability to operate with sunlight shining directly into the scanner and its immunity to spurious reflections and shadows, which occur frequently in space. Because the LARS is equipped with two high-speed galvanometers to steer the laser beam, any spatial location within the field of view of the camera can be addressed. This versatility enables the LARS to operate in two basic scan pattern modes: (1) variable-scan-resolution mode and (2) raster-scan mode. In the variable-resolution mode, the LARS can search and track targets and geometrical features on objects located within a field of view of 30 by 30 deg and with corresponding range from about 0.5 to 2000 m. The tracking mode can reach a refresh rate of up to 130 Hz. The raster mode is used primarily for the measurement of registered range and intensity information on large stationary objects. It allows, among other things, target-based measurements, feature-based measurements, and surface-reflectance monitoring. The digitizing and modeling of human subjects, cargo payloads, and environments are also possible with the LARS. Examples illustrating its capabilities are presented.

  12. Future enhancements to 3D printing and real time production

    NASA Astrophysics Data System (ADS)

    Landa, Joseph; Jenkins, Jeffery; Wu, Jerry; Szu, Harold

    2014-05-01

    The cost and scope of additive printing machines range from several hundred to hundreds of thousands of dollars. For the extra money, one gets improvements in build size, selection of material properties, resolution, and consistency. However, temperature control during build and fusing predicts the outcome and remains protected IP of the large, high-cost machines. Support material options determine the geometries that can be accomplished, which drives the cost and complexity of the printing heads. Historically, 3D printers have been used for design and prototyping efforts. Recent advances and cost reductions have sparked new interest in printed products and consumables: NASA is printing food, consumers are printing parts (e.g. cell phone cases, novelty toys), manufacturers are making tools and fixtures, and printers can even recursively print a self-similar printer (cf. MakerBot). There is a near-term promise of the capability to print products on demand at home or in the office... directly from the printer to use.

  13. The Digital Space Shuttle, 3D Graphics, and Knowledge Management

    NASA Technical Reports Server (NTRS)

    Gomez, Julian E.; Keller, Paul J.

    2003-01-01

    The Digital Shuttle is a knowledge management project that seeks to define symbiotic relationships between 3D graphics and formal knowledge representations (ontologies). 3D graphics provides geometric and visual content, in 2D and 3D CAD forms, and the capability to display systems knowledge. Because the data is so heterogeneous, and the interrelated data structures are complex, 3D graphics combined with ontologies provides mechanisms for navigating the data and visualizing relationships.

  14. 3D Modelling of Interior Spaces: Learning the Language of Indoor Architecture

    NASA Astrophysics Data System (ADS)

    Khoshelham, K.; Díaz-Vilariño, L.

    2014-06-01

    3D models of indoor environments are important in many applications, but they usually exist only for newly constructed buildings. Automated approaches to modelling indoor environments from imagery and/or point clouds can make the process easier, faster and cheaper. We present an approach to 3D indoor modelling based on a shape grammar. We demonstrate that interior spaces can be modelled by iteratively placing, connecting and merging cuboid shapes. We also show that the parameters and sequence of grammar rules can be learned automatically from a point cloud. Experiments with simulated and real point clouds show promising results, and indicate the potential of the method in 3D modelling of large indoor environments.

  15. An optical real-time 3D measurement for analysis of facial shape and movement

    NASA Astrophysics Data System (ADS)

    Zhang, Qican; Su, Xianyu; Chen, Wenjing; Cao, Yiping; Xiang, Liqun

    2003-12-01

    Optical non-contact 3-D shape measurement provides a novel and useful tool for the analysis of facial shape and movement in regular presurgical and postsurgical checks. In this article we present a system which allows a precise 3-D visualization of the patient's face before and after craniofacial surgery. We discuss the real-time 3-D image capture and processing, and the 3-D phase unwrapping method used to recover complex shape deformation during mouth movement. The results of real-time measurement of facial shape and movement will be helpful in achieving better outcomes in plastic surgery.

  16. Transport of 3D space charge dominated beams

    NASA Astrophysics Data System (ADS)

    Lü, Jian-Qin

    2013-10-01

    In this paper we present the theoretical analysis and the computer code design for intense pulsed beam transport. Intense beam dynamics is a very important issue in low-energy, high-current accelerators and beam transport systems. This problem affects beam transmission and beam quality, and therefore attracts the attention of accelerator physicists worldwide. The analysis and calculation of intense beam dynamics are very complicated, because the state of particle motion is dominated not only by the applied electromagnetic fields, but also by the beam-induced electromagnetic fields (self-fields). Moreover, the self-fields are related to the beam dimensions and particle distributions. So, it is very difficult to get self-consistent solutions of particle motion analytically. For this reason, we combine the Lie algebraic method and the particle-in-cell (PIC) scheme to simulate intense 3D beam transport. With the Lie algebraic method we analyze the particle nonlinear trajectories in the applied electromagnetic fields up to third-order approximation, and with the PIC algorithm we calculate the space charge effects on the particle motion. Based on the theoretical analysis, we have developed a computer code, which calculates beam transport systems consisting of electrostatic lenses, electrostatic accelerating columns, solenoid lenses, magnetic and electric quadrupoles, magnetic sextupoles, octupoles and different kinds of electromagnetic analyzers. The optimization calculations and the graphical display of the calculated results are provided by the code.
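
    The space-charge part of such a code rests on particle-in-cell charge deposition. The sketch below shows the standard cloud-in-cell weighting in one dimension with made-up grid and particle counts; it only illustrates the PIC ingredient, not the Lie-algebraic transport code described above.

```python
# Minimal 1D cloud-in-cell (CIC) charge deposition, the core of a PIC
# space-charge step; a real 3D transport code would do this on a 3D mesh and
# solve Poisson's equation for the self-fields. Grid size and particle count
# are arbitrary example values.
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_particles, q = 64, 10000, 1.0
dx = 1.0 / n_cells

x = rng.random(n_particles)                 # particle positions in [0, 1)
rho = np.zeros(n_cells)

# CIC weighting: each particle's charge is shared linearly between the two
# nearest grid points.
cell = np.floor(x / dx).astype(int)
frac = x / dx - cell
np.add.at(rho, cell, q * (1.0 - frac))
np.add.at(rho, (cell + 1) % n_cells, q * frac)

print("total deposited charge:", rho.sum(), "(should equal", n_particles * q, ")")
```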

  17. 3D thin film microstructures for space microrobots

    NASA Technical Reports Server (NTRS)

    Shimoyama, Isao

    1995-01-01

    Micromechanisms of locomotion and a manipulator with an external skeleton like the structure of an insect are proposed. Several micro-sized models were built on silicon wafers by using polysilicon for rigid plates and polyimide for elastic joints. Due to scale effects, friction in micromechanical components dominates over the inertial forces because friction is proportional to L^2 while mass is proportional to L^3. Therefore, to ensure efficient motion, rotational joints that exhibit rubbing should be avoided. In this paper, paper models of a robot leg and a micro-manipulator are presented to show structures with external skeletons and elastic joints. Then the large-scale implementation using plastic plates, springs, and solenoids is demonstrated. Since the assembly technique is based on paper folding, it is compatible with thin film micro-fabrication and integrated circuit (IC) planar processes. Finally, several micromechanisms were fabricated on silicon wafers to demonstrate the feasibility of building a 3D microstructure from a single planar structure that can be used for space microrobots.

  18. Real Time Quantitative 3-D Imaging of Diffusion Flame Species

    NASA Technical Reports Server (NTRS)

    Kane, Daniel J.; Silver, Joel A.

    1997-01-01

    A low-gravity environment, in space or ground-based facilities such as drop towers, provides a unique setting for the study of combustion mechanisms. Understanding the physical phenomena controlling the ignition and spread of flames in microgravity has importance for space safety as well as better characterization of dynamical and chemical combustion processes which are normally masked by buoyancy and other gravity-related effects. Even the use of so-called 'limiting cases' or the construction of 1-D or 2-D models and experiments fails to make the analysis of combustion simultaneously simple and accurate. Ideally, to bridge the gap between chemistry and fluid mechanics in microgravity combustion, species concentrations and temperature profiles are needed throughout the flame. However, restrictions associated with performing measurements in reduced gravity, especially size and weight considerations, have generally limited microgravity combustion studies to the capture of flame emissions on film or video, laser Schlieren imaging, and (intrusive) temperature measurements using thermocouples. Given the development of detailed theoretical models, more sophisticated studies are needed to provide the kind of quantitative data necessary to characterize the properties of microgravity combustion processes as well as provide accurate feedback to improve the predictive capabilities of the computational models. While there have been a myriad of fluid mechanical visualization studies in microgravity combustion, little experimental work has been completed to obtain reactant and product concentrations within a microgravity flame. This is largely due to the fact that traditional sampling methods (quenching microprobes using GC and/or mass spec analysis) are too heavy, slow, and cumbersome for microgravity experiments. Non-intrusive optical spectroscopic techniques have - up until now - also required excessively bulky, power-hungry equipment. However, with the advent of near-IR diode

  19. Real-time auto-stereoscopic visualization of 3D medical images

    NASA Astrophysics Data System (ADS)

    Portoni, Luisa; Patak, Alexandre; Noirard, Pierre; Grossetie, Jean-Claude; van Berkel, Cees

    2000-04-01

    The work described here regards multi-viewer auto-stereoscopic visualization of 3D models of anatomical structures and organs of the human body. High-quality 3D models of more than 1600 anatomical structures have been reconstructed using the Visualization Toolkit, a freely available C++ class library for 3D graphics and visualization. The 2D images used for 3D reconstruction come from the Visible Human Data Set. Auto-stereoscopic 3D image visualization is obtained using a prototype monitor developed at Philips Research Labs, UK. This special multiview 3D-LCD screen has been connected directly to an SGI workstation, where 3D reconstruction and medical imaging applications are executed. Dedicated software has been developed to implement multiview capability. A number of static or animated views of the same object can be seen simultaneously on the 3D-LCD screen by several observers, who have a real 3D perception of the visualized scene without the use of extra media such as dedicated glasses or head-mounted displays. The software applications developed allow real-time interaction with the visualized 3D models; didactic animations and movies have been realized as well.

  20. Rapid object indexing using locality sensitive hashing and joint 3D-signature space estimation.

    PubMed

    Matei, Bogdan; Shan, Ying; Sawhney, Harpreet S; Tan, Yi; Kumar, Rakesh; Huber, Daniel; Hebert, Martial

    2006-07-01

    We propose a new method for rapid 3D object indexing that combines feature-based methods with coarse alignment-based matching techniques. Our approach achieves a sublinear complexity on the number of models, maintaining at the same time a high degree of performance for real 3D sensed data that is acquired in largely uncontrolled settings. The key component of our method is to first index surface descriptors computed at salient locations from the scene into the whole model database using the Locality Sensitive Hashing (LSH), a probabilistic approximate nearest neighbor method. Progressively complex geometric constraints are subsequently enforced to further prune the initial candidates and eliminate false correspondences due to inaccuracies in the surface descriptors and the errors of the LSH algorithm. The indexed models are selected based on the MAP rule using posterior probability of the models estimated in the joint 3D-signature space. Experiments with real 3D data employing a large database of vehicles, most of them very similar in shape, containing 1,000,000 features from more than 365 models demonstrate a high degree of performance in the presence of occlusion and obscuration, unmodeled vehicle interiors and part articulations, with an average processing time between 50 and 100 seconds per query.
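
    As a rough illustration of the indexing idea, the sketch below hashes surface descriptors with a random-hyperplane LSH and votes for candidate models. The descriptor dimension, hash length and the toy voting step are assumptions for illustration, not the parameters of the published system.

```python
# Minimal random-hyperplane LSH sketch for descriptor-based model indexing.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
DIM, BITS = 64, 16                          # descriptor length, hash length
planes = rng.standard_normal((BITS, DIM))

def lsh_key(desc):
    """Sign pattern of projections onto random hyperplanes -> integer key."""
    bits = (planes @ desc) > 0
    return int(np.packbits(bits.astype(np.uint8)).view(np.uint16)[0])

# Index: map hash key -> list of (model_id, feature_id)
index = defaultdict(list)
model_feats = {m: rng.standard_normal((200, DIM)) for m in range(5)}  # toy DB
for m, feats in model_feats.items():
    for i, f in enumerate(feats):
        index[lsh_key(f)].append((m, i))

# Query: hash scene descriptors and vote for candidate models; the surviving
# candidates would then be pruned by geometric verification.
scene = model_feats[3][:50] + 0.05 * rng.standard_normal((50, DIM))
votes = defaultdict(int)
for f in scene:
    for m, _ in index.get(lsh_key(f), []):
        votes[m] += 1
candidates = sorted(votes, key=votes.get, reverse=True)[:3]
print("candidate models:", candidates)
```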

  1. Tensor3D: A computer graphics program to simulate 3D real-time deformation and visualization of geometric bodies

    NASA Astrophysics Data System (ADS)

    Pallozzi Lavorante, Luca; Dirk Ebert, Hans

    2008-07-01

    Tensor3D is a geometric modeling program with the capacity to simulate and visualize, in real time, deformation specified through a tensor matrix and applied to triangulated models representing geological bodies. 3D visualization allows the study of deformational processes that are traditionally conducted in 2D, such as simple and pure shears. Besides geometric objects that are immediately available in the program window, the program can read other models from disk, thus being able to import objects created with different open-source or proprietary programs. A strain ellipsoid and a bounding box are simultaneously shown and instantly deformed with the main object. The principal axes of strain are visualized as well to provide graphical information about the orientation of the tensor's normal components. The deformed models can also be saved, retrieved later and deformed again, in order to study different steps of progressive strain, or to make these data available to other programs. The shape of stress ellipsoids and the corresponding Mohr circles defined by any stress tensor can also be represented. The application was written using the Visualization ToolKit, a powerful scientific visualization library in the public domain. This development choice, allied to the use of the Tcl/Tk programming language, which is independent of the host computational platform, makes the program a useful tool for the study of geometric deformations directly in three dimensions in teaching as well as research activities.
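
    The core operation such a tool performs can be sketched in a few lines: apply a deformation tensor to the model vertices and derive the strain ellipsoid from its SVD. The simple-shear tensor below is an illustrative example, not Tensor3D's internals.

```python
# Apply a deformation tensor F to model vertices and recover the strain
# ellipsoid axes; F here is an example simple shear in the x-y plane.
import numpy as np

F = np.array([[1.0, 0.8, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

vertices = np.random.rand(1000, 3) - 0.5     # any triangulated model's points
deformed = vertices @ F.T                    # x' = F x for every vertex

# Principal strain axes/magnitudes from the SVD of F: F = R1 * diag(s) * R2.
# The columns of R1 are the ellipsoid axes, s are the semi-axis stretches.
R1, stretches, R2 = np.linalg.svd(F)
print("principal stretches:", np.round(stretches, 3))
print("axis orientations (columns):\n", np.round(R1, 3))
```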

  2. NASA's 3-D Animation of Tropical Storm Ulika from Space

    NASA Video Gallery

    An animated 3-D flyby of Tropical Storm Ulika using GPM's Radar data showed some strong convective storms inside the tropical storm were dropping precipitation at a rate of over 187 mm (7.4 inches)...

  3. Real Time 3D Facial Movement Tracking Using a Monocular Camera.

    PubMed

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-01-01

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework firstly utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and fuses the 2D data with a 3D face model using Extended Kalman Filter to yield 3D facial movement information. An alternating optimizing strategy is adopted to fit to different persons automatically. Experiments show that the proposed framework could track the 3D facial movement across various poses and illumination conditions. Given the real face scale the framework could track the eyelid with an error of 1 mm and mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference. PMID:27463714

  4. Real Time 3D Facial Movement Tracking Using a Monocular Camera.

    PubMed

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-07-25

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework firstly utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and fuses the 2D data with a 3D face model using Extended Kalman Filter to yield 3D facial movement information. An alternating optimizing strategy is adopted to fit to different persons automatically. Experiments show that the proposed framework could track the 3D facial movement across various poses and illumination conditions. Given the real face scale the framework could track the eyelid with an error of 1 mm and mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference.

  5. Real Time 3D Facial Movement Tracking Using a Monocular Camera

    PubMed Central

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-01-01

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework firstly utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and fuses the 2D data with a 3D face model using Extended Kalman Filter to yield 3D facial movement information. An alternating optimizing strategy is adopted to fit to different persons automatically. Experiments show that the proposed framework could track the 3D facial movement across various poses and illumination conditions. Given the real face scale the framework could track the eyelid with an error of 1 mm and mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference. PMID:27463714

  6. Advanced Visualization of Experimental Data in Real Time Using LiveView3D

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; Fleming, Gary A.

    2006-01-01

    LiveView3D is a software application that imports and displays a variety of wind tunnel derived data in an interactive virtual environment in real time. LiveView3D combines the use of streaming video fed into a three-dimensional virtual representation of the test configuration with networked communications to the test facility Data Acquisition System (DAS). This unified approach to real time data visualization provides a unique opportunity to comprehend very large sets of diverse forms of data in a real time situation, as well as in post-test analysis. This paper describes how LiveView3D has been implemented to visualize diverse forms of aerodynamic data gathered during wind tunnel experiments, most notably at the NASA Langley Research Center Unitary Plan Wind Tunnel (UPWT). Planned future developments of the LiveView3D system are also addressed.

  7. Communicating Experience of 3D Space: Mathematical and Everyday Discourse

    ERIC Educational Resources Information Center

    Morgan, Candia; Alshwaikh, Jehad

    2012-01-01

    In this article we consider data arising from student-teacher-researcher interactions taking place in the context of an experimental teaching program making use of multiple modes of communication and representation to explore three-dimensional (3D) shape. As teachers/researchers attempted to support student use of a logo-like formal language for…

  8. Bone tissue phantoms for optical flowmeters at large interoptode spacing generated by 3D-stereolithography

    PubMed Central

    Binzoni, Tiziano; Torricelli, Alessandro; Giust, Remo; Sanguinetti, Bruno; Bernhard, Paul; Spinelli, Lorenzo

    2014-01-01

    A bone tissue phantom prototype allowing to test, in general, optical flowmeters at large interoptode spacings, such as laser-Doppler flowmetry or diffuse correlation spectroscopy, has been developed by 3D-stereolithography technique. It has been demonstrated that complex tissue vascular systems of any geometrical shape can be conceived. Absorption coefficient, reduced scattering coefficient and refractive index of the optical phantom have been measured to ensure that the optical parameters reasonably reproduce real human bone tissue in vivo. An experimental demonstration of a possible use of the optical phantom, utilizing a laser-Doppler flowmeter, is also presented. PMID:25136496

  9. Bone tissue phantoms for optical flowmeters at large interoptode spacing generated by 3D-stereolithography.

    PubMed

    Binzoni, Tiziano; Torricelli, Alessandro; Giust, Remo; Sanguinetti, Bruno; Bernhard, Paul; Spinelli, Lorenzo

    2014-08-01

    A bone tissue phantom prototype allowing to test, in general, optical flowmeters at large interoptode spacings, such as laser-Doppler flowmetry or diffuse correlation spectroscopy, has been developed by 3D-stereolithography technique. It has been demonstrated that complex tissue vascular systems of any geometrical shape can be conceived. Absorption coefficient, reduced scattering coefficient and refractive index of the optical phantom have been measured to ensure that the optical parameters reasonably reproduce real human bone tissue in vivo. An experimental demonstration of a possible use of the optical phantom, utilizing a laser-Doppler flowmeter, is also presented.

  10. Bone tissue phantoms for optical flowmeters at large interoptode spacing generated by 3D-stereolithography.

    PubMed

    Binzoni, Tiziano; Torricelli, Alessandro; Giust, Remo; Sanguinetti, Bruno; Bernhard, Paul; Spinelli, Lorenzo

    2014-08-01

    A bone tissue phantom prototype allowing to test, in general, optical flowmeters at large interoptode spacings, such as laser-Doppler flowmetry or diffuse correlation spectroscopy, has been developed by 3D-stereolithography technique. It has been demonstrated that complex tissue vascular systems of any geometrical shape can be conceived. Absorption coefficient, reduced scattering coefficient and refractive index of the optical phantom have been measured to ensure that the optical parameters reasonably reproduce real human bone tissue in vivo. An experimental demonstration of a possible use of the optical phantom, utilizing a laser-Doppler flowmeter, is also presented. PMID:25136496

  11. Compilation of 3D global conductivity model of the Earth for space weather applications

    NASA Astrophysics Data System (ADS)

    Alekseev, Dmitry; Kuvshinov, Alexey; Palshin, Nikolay

    2015-07-01

    We have compiled a global three-dimensional (3D) conductivity model of the Earth with the ultimate goal of being used for realistic simulation of geomagnetically induced currents (GIC), which pose a potential threat to man-made electric systems. Bearing in mind the intrinsic frequency range of the most intense disturbances (magnetospheric substorms), with typical periods ranging from a few minutes to a few hours, the compiled 3D model represents the structure in the depth range of 0-100 km, including seawater, sediments, the Earth's crust, and partly the lithosphere/asthenosphere. More explicitly, the model consists of a series of spherical layers, whose vertical and lateral boundaries are established based on available data. To compile the model, global maps of bathymetry, sediment thickness, upper and lower crust thickness and lithosphere thickness are utilized. All maps are re-interpolated on a common grid of 0.25×0.25 degree lateral spacing. Once the geometry of the different structures is specified, each element of the structure is assigned either a certain conductivity value or a conductivity-versus-depth distribution, according to available laboratory data and conversion laws. A numerical formalism developed for compilation of the model allows for its further refinement by the incorporation of regional 3D conductivity distributions inferred from real electromagnetic data. So far we have included in our model four regional conductivity models available from recent publications, namely a surface conductance model of Russia and 3D conductivity models of Fennoscandia, Australia, and the northwestern United States.

  12. Bird's Eye View - A 3-D Situational Awareness Tool for the Space Station

    NASA Technical Reports Server (NTRS)

    Dershowitz, Adam; Chamitoff, Gregory

    2002-01-01

    Even as space-qualified computer hardware lags well behind the latest home computers, the possibility of using high-fidelity interactive 3-D graphics for displaying important on-board information has finally arrived, and is being used on board the International Space Station (ISS). With the quantity and complexity of space-flight telemetry, 3-D displays can greatly enhance the ability of users, both onboard and on the ground, to interpret data quickly and accurately. This is particularly true for data related to vehicle attitude, position, configuration, and relation to other objects on the ground or in orbit. Bird's Eye View (BEV) is a 3-D real-time application that provides a high degree of situational awareness for the crew. Its purpose is to instantly convey important motion-related parameters to the crew and mission controllers by presenting 3-D simulated camera views of the ISS in its actual environment. Driven by actual telemetry and running on board as well as on the ground, it lets the user visualize the Space Station relative to the Earth, Sun, stars, various reference frames, and selected targets, such as ground sites or communication satellites. Since the actual ISS configuration (geometry) is also modeled accurately, everything from the alignment of the solar panels to the expected view from a selected window can be visualized accurately. A virtual representation of the Space Station in real time has many useful applications. By selecting different cameras, the crew or mission control can monitor the station's orientation in space, position over the Earth, transition from day to night, direction to the Sun, the view from a particular window, or the motion of the robotic arm. By viewing the vehicle attitude and solar panel orientations relative to the Sun, the power status of the ISS can be easily visualized and understood. Similarly, the thermal impacts of vehicle attitude can be analyzed and visually confirmed. Communication

  13. 3D-Printing of Arteriovenous Malformations for Radiosurgical Treatment: Pushing Anatomy Understanding to Real Boundaries.

    PubMed

    Conti, Alfredo; Pontoriero, Antonio; Iatì, Giuseppe; Marino, Daniele; La Torre, Domenico; Vinci, Sergio; Germanò, Antonino; Pergolizzi, Stefano; Tomasello, Francesco

    2016-04-29

    Radiosurgery of arteriovenous malformations (AVMs) is a challenging procedure. Accuracy of target volume contouring is one major issue to achieve AVM obliteration while avoiding disastrous complications due to suboptimal treatment. We describe a technique to improve the understanding of the complex AVM angioarchitecture by 3D prototyping of individual lesions. Arteriovenous malformations of ten patients were prototyped by 3D printing using 3D rotational angiography (3DRA) as a template. A target volume was obtained using the 3DRA; a second volume was obtained, without awareness of the first volume, using 3DRA and the 3D-printed model. The two volumes were superimposed and the conjoint and disjoint volumes were measured. We also calculated the time needed to perform contouring and assessed the confidence of the surgeons in the definition of the target volumes using a six-point scale. The time required for the contouring of the target lesion was shorter when the surgeons used the 3D-printed model of the AVM (p=0.001). The average volume contoured without the 3D model was 5.6 ± 3 mL whereas it was 5.2 ± 2.9 mL with the 3D-printed model (p=0.003). The 3D prototypes proved to be spatially reliable. Surgeons were absolutely confident or very confident in all cases that the volume contoured using the 3D-printed model was plausible and corresponded to the real boundaries of the lesion. The total cost for each case was 50 euros whereas the cost of the 3D printer was 1600 euros. 3D prototyping of AVMs is a simple, affordable, and spatially reliable procedure that can be beneficial for radiosurgery treatment planning. According to our preliminary data, individual prototyping of the brain circulation provides an intuitive comprehension of the 3D anatomy of the lesion that can be rapidly and reliably translated into the target volume.

  14. LiveView3D: Real Time Data Visualization for the Aerospace Testing Environment

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; Fleming, Gary A.

    2006-01-01

    This paper addresses LiveView3D, a software package and associated data visualization system for use in the aerospace testing environment. The LiveView3D system allows researchers to graphically view data from numerous wind tunnel instruments in real time in an interactive virtual environment. The graphical nature of the LiveView3D display provides researchers with an intuitive view of the measurement data, making it easier to interpret the aerodynamic phenomenon under investigation. LiveView3D has been developed at the NASA Langley Research Center and has been applied in the Langley Unitary Plan Wind Tunnel (UPWT). This paper discusses the capabilities of the LiveView3D system, provides example results from its application in the UPWT, and outlines features planned for future implementation.

  15. Multithreaded real-time 3D image processing software architecture and implementation

    NASA Astrophysics Data System (ADS)

    Ramachandra, Vikas; Atanassov, Kalin; Aleksic, Milivoje; Goma, Sergio R.

    2011-03-01

    Recently, 3D displays and videos have generated a lot of interest in the consumer electronics industry. To make 3D capture and playback popular and practical, a user-friendly playback interface is desirable. Towards this end, we built a real-time software 3D video player. The 3D video player displays user-captured 3D videos, provides various 3D-specific image processing functions and ensures a pleasant viewing experience. Moreover, the player enables user interactivity by providing digital zoom and pan functionalities. This real-time 3D player was implemented on the GPU using CUDA and OpenGL. The player provides user-interactive 3D video playback. Stereo images are first read by the player from a fast drive and rectified. Further processing of the images determines the optimal convergence point in the 3D scene to reduce eye strain. The rationale for this convergence point selection takes into account scene depth and display geometry. The first step in this processing chain is identifying keypoints by detecting vertical edges within the left image. Regions surrounding reliable keypoints are then located on the right image through the use of block matching. The difference in position between the corresponding regions in the left and right images is then used to calculate disparity. The extrema of the disparity histogram give the scene disparity range. The left and right images are shifted based upon the calculated range, in order to place the desired region of the 3D scene at convergence. All the above computations are performed on one CPU thread which calls CUDA functions. Image upsampling and shifting are performed in response to user zoom and pan. The player also contains a CPU display thread, which uses OpenGL rendering (quad buffers). This thread also gathers user input for digital zoom and pan and sends it to the processing thread.
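
    A rough sketch of the convergence-point logic described above (vertical-edge keypoints, block matching, disparity histogram) follows. The window size, search range, edge threshold and keypoint subsampling are arbitrary choices, and the real player performs these steps with CUDA rather than NumPy.

```python
# Illustrative convergence-point estimation for a rectified stereo pair.
import numpy as np

def disparity_range(left, right, win=8, max_disp=64, edge_thresh=30.0):
    # 1. keypoints: strong vertical edges (large horizontal intensity gradient)
    grad_x = np.abs(np.diff(left.astype(np.float32), axis=1))
    ys, xs = np.nonzero(grad_x > edge_thresh)

    disparities = []
    h, w = left.shape
    for y, x in zip(ys[::50], xs[::50]):            # subsample keypoints
        if y < win or y + win >= h or x - max_disp < win or x + win >= w:
            continue
        patch = left[y - win:y + win, x - win:x + win]
        # 2. block matching along the same row of the right image (SAD cost)
        costs = [np.abs(patch - right[y - win:y + win,
                                      x - d - win:x - d + win]).sum()
                 for d in range(max_disp)]
        disparities.append(int(np.argmin(costs)))

    # 3. histogram extrema give the scene disparity range; the images are then
    #    shifted so the desired part of this range sits at zero disparity.
    hist, edges = np.histogram(disparities, bins=32, range=(0, max_disp))
    nonzero = np.nonzero(hist)[0]
    return edges[nonzero[0]], edges[nonzero[-1] + 1]

# Example with a synthetic pair (constant 10-pixel disparity):
L = np.random.randint(0, 255, (240, 320)).astype(np.float32)
R = np.roll(L, -10, axis=1)
print("estimated disparity range:", disparity_range(L, R))
```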

  16. Integration of virtual and real scenes within an integral 3D imaging environment

    NASA Astrophysics Data System (ADS)

    Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm

    2002-11-01

    The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television that avoids adverse psychological effects. To create truly fascinating three-dimensional television programs, a virtual studio that performs the tasks of generating, editing and integrating 3D content involving virtual and real scenes is required. The paper presents, for the first time, the procedures, factors and methods of integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, where the lens array is modelled instead of the physical camera, is described. In the model, each micro-lens that captures different elemental images of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Detailed discussion and investigation focus on depth extraction from captured integral 3D images. The method of depth calculation from disparity and the multiple-baseline method used to improve the precision of depth estimation are also presented. The concept of colour SSD and a further improvement in its precision are proposed and verified.

  17. PRIMAS: a real-time 3D motion-analysis system

    NASA Astrophysics Data System (ADS)

    Sabel, Jan C.; van Veenendaal, Hans L. J.; Furnee, E. Hans

    1994-03-01

    The paper describes a CCD TV-camera-based system for real-time multicamera 2D detection of retro-reflective targets and software for accurate and fast 3D reconstruction. Applications of this system can be found in the fields of sports, biomechanics, rehabilitation research, and various other areas of science and industry. The new feature of real-time 3D opens an even broader perspective of application areas; animations in virtual reality are an interesting example. After presenting an overview of the hardware and the camera calibration method, the paper focuses on the real-time algorithms used for matching of the images and subsequent 3D reconstruction of marker positions. When using a calibrated setup of two cameras, it is now possible to track at least ten markers at 100 Hz. Limitations in the performance are determined by the visibility of the markers, which could be improved by adding a third camera.
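
    The 3D reconstruction step of such a two-camera marker tracker is essentially linear triangulation. The sketch below triangulates a marker from two calibrated views using the standard DLT formulation; the projection matrices and marker position are illustrative values, not PRIMAS calibration data.

```python
# Linear (DLT) triangulation of a marker observed by two calibrated cameras.
import numpy as np

# Two example camera projection matrices (3x4), baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])

def triangulate(p1, p2):
    """p1, p2: 2D marker centroids (x, y) in each camera; returns a 3D point."""
    A = np.vstack([p1[0] * P1[2] - P1[0],
                   p1[1] * P1[2] - P1[1],
                   p2[0] * P2[2] - P2[0],
                   p2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # homogeneous solution of A X = 0
    return X[:3] / X[3]

# Project a known 3D marker into both cameras, then recover it.
X_true = np.array([0.1, -0.05, 2.0, 1.0])
u1 = P1 @ X_true; u1 = u1[:2] / u1[2]
u2 = P2 @ X_true; u2 = u2[:2] / u2[2]
print("recovered marker position:", np.round(triangulate(u1, u2), 4))
```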

  18. FPGA-based real-time anisotropic diffusion filtering of 3D ultrasound images

    NASA Astrophysics Data System (ADS)

    Castro-Pareja, Carlos R.; Dandekar, Omkar S.; Shekhar, Raj

    2005-02-01

    Three-dimensional ultrasonic imaging, especially the emerging real-time version of it, is particularly valuable in medical applications such as echocardiography, obstetrics and surgical navigation. A known problem with ultrasound images is their high level of speckle noise. Anisotropic diffusion filtering has been shown to be effective in enhancing the visual quality of 3D ultrasound images and as preprocessing prior to advanced image processing. However, due to its arithmetic complexity and the sheer size of 3D ultrasound images, it is not possible to perform online, real-time anisotropic diffusion filtering using standard software implementations. We present an FPGA-based architecture that allows performing anisotropic diffusion filtering of 3D images at acquisition rates, thus enabling the use of this filtering technique in real-time applications, such as visualization, registration and volume rendering.
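
    For reference, a plain software version of the filter in question is compact to write down. The sketch below implements a basic Perona-Malik anisotropic diffusion on a 3D volume with illustrative parameters; it is nowhere near the acquisition-rate throughput of the FPGA design described above.

```python
# Minimal Perona-Malik anisotropic diffusion of a 3D volume (software sketch).
import numpy as np

def anisotropic_diffusion_3d(vol, n_iter=10, kappa=30.0, lam=0.1):
    vol = vol.astype(np.float32)
    for _ in range(n_iter):
        total = np.zeros_like(vol)
        for axis in (0, 1, 2):
            # forward/backward differences along each axis (wrap at borders)
            grad_f = np.roll(vol, -1, axis=axis) - vol
            grad_b = np.roll(vol, 1, axis=axis) - vol
            # conduction coefficient c(|grad I|) = exp(-(|grad I| / kappa)^2)
            total += np.exp(-(grad_f / kappa) ** 2) * grad_f
            total += np.exp(-(grad_b / kappa) ** 2) * grad_b
        vol = vol + lam * total
    return vol

# Example: speckle-like noise on a synthetic 64^3 volume
noisy = 100.0 * np.ones((64, 64, 64)) + 20.0 * np.random.randn(64, 64, 64)
print("std before/after:", noisy.std(), anisotropic_diffusion_3d(noisy).std())
```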

  19. Real-time 3D measurement based on structured light illumination considering camera lens distortion

    NASA Astrophysics Data System (ADS)

    Feng, Shijie; Chen, Qian; Zuo, Chao; Sun, Jiasong; Yu, ShiLing

    2014-12-01

    Optical three-dimensional (3-D) profilometry is gaining increasing attention for its simplicity, flexibility, high accuracy, and non-contact nature. Recent advances in imaging sensors and digital projection technology further its progress in high-speed, real-time applications, enabling 3-D shape reconstruction of moving objects and dynamic scenes. In traditional 3-D measurement systems, where processing time is not a key factor, camera lens distortion correction is performed directly. However, for time-critical high-speed applications, the time-consuming correction algorithm cannot be performed directly during the real-time process. To cope with this issue, here we present a novel high-speed real-time 3-D coordinate measuring technique based on fringe projection with consideration of the camera lens distortion. A pixel mapping relation between a distorted image and a corrected one is pre-determined and stored in computer memory for real-time fringe correction, and a lookup table (LUT) method is introduced as well for fast data processing. Our experimental results reveal that the measurement error of the in-plane coordinates is reduced by one order of magnitude and the accuracy of the out-of-plane coordinate is tripled once the distortions are eliminated. Moreover, owing to the merit of the LUT, the 3-D reconstruction can be achieved at 92.34 frames per second.
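
    The LUT idea is straightforward to sketch: the distorted-to-corrected pixel mapping is computed once offline and every frame is then corrected with a single gather. The example below uses a simple one-coefficient radial model with made-up camera parameters, not the authors' calibration or their fringe-processing chain.

```python
# Pre-computed undistortion lookup table (LUT): build the mapping once,
# then per-frame correction is just an indexed copy.
import numpy as np

H, W = 480, 640
fx = fy = 800.0
cx, cy = W / 2.0, H / 2.0
k1 = -0.25                                     # example radial distortion coefficient

# Offline: for every corrected pixel, find the distorted pixel to sample.
v, u = np.mgrid[0:H, 0:W].astype(np.float32)
x, y = (u - cx) / fx, (v - cy) / fy            # normalized image coordinates
r2 = x * x + y * y
xd, yd = x * (1 + k1 * r2), y * (1 + k1 * r2)  # apply the distortion model
map_u = np.clip(np.round(xd * fx + cx), 0, W - 1).astype(np.int32)
map_v = np.clip(np.round(yd * fy + cy), 0, H - 1).astype(np.int32)

# Online (per frame): a single gather, cheap enough for real-time use.
def undistort(frame):
    return frame[map_v, map_u]

frame = np.random.randint(0, 255, (H, W), dtype=np.uint8)
print("corrected frame shape:", undistort(frame).shape)
```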

  20. 3D-Pathology: a real-time system for quantitative diagnostic pathology and visualisation in 3D

    NASA Astrophysics Data System (ADS)

    Gottrup, Christian; Beckett, Mark G.; Hager, Henrik; Locht, Peter

    2005-02-01

    This paper presents the results of the 3D-Pathology project conducted under the European EC Framework 5. The aim of the project was, through the application of 3D image reconstruction and visualization techniques, to improve the diagnostic and prognostic capabilities of medical personnel when analyzing pathological specimens using transmitted light microscopy. A fully automated, computer-controlled microscope system has been developed to capture 3D images of specimen content. 3D image reconstruction algorithms have been implemented and applied to the acquired volume data in order to facilitate the subsequent 3D visualization of the specimen. Three potential application fields, immunohistology, chromogenic in situ hybridization (CISH) and cytology, have been tested using the prototype system. For both immunohistology and CISH, use of the system furnished significant additional information to the pathologist.

  1. Real-time visualization of 3-D dynamic microscopic objects using optical diffraction tomography.

    PubMed

    Kim, Kyoohyun; Kim, Kyung Sang; Park, Hyunjoo; Ye, Jong Chul; Park, Yongkeun

    2013-12-30

    The 3-D refractive index (RI) distribution is an intrinsic bio-marker carrying chemical and structural information about biological cells. Here we develop an optical diffraction tomography technique for the real-time reconstruction of the 3-D RI distribution, employing sparse-angle illumination and a graphics processing unit (GPU) implementation. The execution time for the tomographic reconstruction is 0.21 s for 96³ voxels, which is 17 times faster than that of a conventional approach. We demonstrate the real-time visualization capability by imaging the dynamics of the Brownian motion of an anisotropic colloidal dimer and the dynamic shape change of a red blood cell under shear flow.

  2. Real-time 3D-surface-guided head refixation useful for fractionated stereotactic radiotherapy

    SciTech Connect

    Li Shidong; Liu Dezhi; Yin Gongjie; Zhuang Ping; Geng, Jason

    2006-02-15

    Accurate and precise head refixation in fractionated stereotactic radiotherapy has been achieved through alignment of real-time 3D-surface images with a reference surface image. The reference surface image is either a 3D optical surface image taken at simulation with the desired treatment position, or a CT/MRI-surface rendering in the treatment plan with corrections for patient motion during CT/MRI scans and partial volume effects. The real-time 3D surface images are rapidly captured by using a 3D video camera mounted on the ceiling of the treatment vault. Any facial expression such as mouth opening that affects surface shape and location can be avoided using a new facial monitoring technique. The image artifacts on the real-time surface can generally be removed by setting a threshold of jumps at the neighboring points while preserving detailed features of the surface of interest. Such a real-time surface image, registered in the treatment machine coordinate system, provides a reliable representation of the patient head position during the treatment. A fast automatic alignment between the real-time surface and the reference surface using a modified iterative-closest-point method leads to an efficient and robust surface-guided target refixation. Experimental and clinical results demonstrate the excellent efficacy of <2 min set-up time, the desired accuracy and precision of <1 mm in isocenter shifts, and <1 deg. in rotation.
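
    The surface-alignment step can be illustrated with a generic point-to-point ICP (nearest neighbours plus Kabsch/SVD alignment). The paper's modified ICP and its speed-ups are not reproduced here, and the surfaces below are synthetic point sets, not optical surface images.

```python
# Generic point-to-point ICP: align a source surface to a reference surface.
import numpy as np
from scipy.spatial import cKDTree

def icp(source, reference, n_iter=20):
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    tree = cKDTree(reference)
    for _ in range(n_iter):
        _, idx = tree.query(src)               # closest reference points
        ref = reference[idx]
        mu_s, mu_r = src.mean(0), ref.mean(0)
        H = (src - mu_s).T @ (ref - mu_r)      # cross-covariance (Kabsch)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:               # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_r - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Example: recover a small known rigid shift of a synthetic surface patch.
ref = np.random.rand(2000, 3)
src = ref + np.array([0.02, -0.01, 0.005])     # applied shift
R_est, t_est = icp(src, ref)
print("recovered translation (approx. negative of applied shift):",
      np.round(t_est, 3))
```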

  3. Learning Dictionaries of Sparse Codes of 3D Movements of Body Joints for Real-Time Human Activity Understanding

    PubMed Central

    Qi, Jin; Yang, Zhiyong

    2014-01-01

    Real-time human activity recognition is essential for human-robot interactions for assisted healthy independent living. Most previous work in this area has been performed on traditional two-dimensional (2D) videos, and both global and local methods have been used. Since 2D videos are sensitive to changes of lighting conditions, view angle, and scale, researchers began to explore applications of 3D information in human activity understanding in recent years. Unfortunately, features that work well on 2D videos usually do not perform well on 3D videos, and there is no consensus on what 3D features should be used. Here we propose a model of human activity recognition based on 3D movements of body joints. Our method has three steps: learning dictionaries of sparse codes of 3D movements of joints, sparse coding, and classification. In the first step, space-time volumes of 3D movements of body joints are obtained via dense sampling and independent component analysis is then performed to construct a dictionary of sparse codes for each activity. In the second step, the space-time volumes are projected to the dictionaries and a set of sparse histograms of the projection coefficients are constructed as feature representations of the activities. Finally, the sparse histograms are used as inputs to a support vector machine to recognize human activities. We tested this model on three databases of human activities and found that it outperforms the state-of-the-art algorithms. Thus, this model can be used for real-time human activity recognition in many applications. PMID:25473850
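
    A hedged sketch of the three-step pipeline (ICA dictionary learning, projection and histogramming, SVM classification) is shown below on toy data. FastICA stands in for the ICA step, and all data shapes, class names and parameters are made-up illustrations, not the paper's setup.

```python
# Toy version of: per-class ICA dictionaries -> coefficient histograms -> SVM.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_joints, n_frames, n_atoms = 15, 30, 20

def random_clips(n, offset):
    """Toy stand-in for densely sampled space-time volumes of joint motion."""
    return offset + rng.standard_normal((n, n_joints * 3 * n_frames))

# Step 1: learn one dictionary per activity class via ICA.
train = {label: random_clips(40, offset)
         for label, offset in (("walk", 0.0), ("sit", 1.5))}
dicts = {label: FastICA(n_components=n_atoms, random_state=0).fit(clips)
         for label, clips in train.items()}

# Step 2: project each clip onto every dictionary, histogram the coefficients.
def features(clip):
    feats = []
    for ica in dicts.values():
        coeffs = ica.transform(clip.reshape(1, -1))[0]
        hist, _ = np.histogram(coeffs, bins=10, range=(-3, 3))
        feats.append(hist)
    return np.concatenate(feats)

X = np.array([features(c) for label in train for c in train[label]])
y = [label for label in train for _ in range(40)]

# Step 3: SVM classification on the sparse-histogram features.
clf = SVC(kernel="linear").fit(X, y)
print("training accuracy:", clf.score(X, y))
```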

  4. [A new approach to the tricuspid valve in Ebstein's anomaly by real time 3D echocardiography].

    PubMed

    Taktak, A; Acar, P; Dulac, Y; Abadir, S; Chilon, T; Roux, D; Glock, Y; Fournial, G

    2005-05-01

    Ebstein's anomaly affects the tricuspid valve with a large range of anatomical forms. Successful tricuspid valvuloplasty depends mainly on the ability to mobilise the leaflets. Evaluation of the leaflet surface is difficult with 2D echocardiography whereas 3D echocardiography provides intracardiac views of the valve. The authors used this method in 10 patients with 3 modes of imaging: biplane, real time and total volume. The study population (age: 1 day to 30 years) included: 1 prenatal diagnosis, 1 neonate with refractory cyanosis, 5 patients with mild tricuspid regurgitation, 3 patients with severe tricuspid regurgitation, 2 of whom underwent valvuloplasty. 3D echocardiography was disappointing in the foetus and neonate because of poor spatial resolution. The ventricular view of the tricuspid valve in older children and adults allowed analysis of tricuspid leaflet coaptation and of the mechanism of regurgitation. The commissures and leaflet surfaces were assessed. The results of surgical valvuloplasty could be evaluated by 3D echocardiography. 3D echocardiography is now transthoracic and a real time investigation. Technical advances are required before it comes into routine usage: a more manoeuvrable matricial probe (integrating pulsed and continuous wave Doppler) and larger volume real time 3D imaging with better resolution. Its role in the assessment of Ebstein's anomaly should be evaluated in a larger series of patients. PMID:15966604

  5. A real-time noise filtering strategy for photon counting 3D imaging lidar.

    PubMed

    Zhang, Zijing; Zhao, Yuan; Zhang, Yong; Wu, Long; Su, Jianzhong

    2013-04-22

    For a direct-detection 3D imaging lidar, the use of a Geiger-mode avalanche photodiode (Gm-APD) can greatly enhance the detection sensitivity of the lidar system, since each range measurement requires only a single detected photon. Furthermore, the Gm-APD offers significant advantages in reducing the size, mass, power and complexity of the system. However, the inevitable noise, including background noise, dark count noise and so on, remains a significant challenge to obtaining a clear 3D image of the target of interest. This paper presents a smart strategy which can filter out false alarms at the stage of acquisition of raw time-of-flight (TOF) data and obtain a clear 3D image in real time. As a result, a clear 3D image is obtained from the experimental system despite the background noise of a sunny day.

  6. A real-time noise filtering strategy for photon counting 3D imaging lidar.

    PubMed

    Zhang, Zijing; Zhao, Yuan; Zhang, Yong; Wu, Long; Su, Jianzhong

    2013-04-22

    For a direct-detection 3D imaging lidar, the use of a Geiger-mode avalanche photodiode (Gm-APD) can greatly enhance the detection sensitivity of the lidar system, since each range measurement requires only a single detected photon. Furthermore, the Gm-APD offers significant advantages in reducing the size, mass, power and complexity of the system. However, the inevitable noise, including background noise, dark count noise and so on, remains a significant challenge to obtaining a clear 3D image of the target of interest. This paper presents a smart strategy which can filter out false alarms at the stage of acquisition of raw time-of-flight (TOF) data and obtain a clear 3D image in real time. As a result, a clear 3D image is obtained from the experimental system despite the background noise of a sunny day. PMID:23609635

  7. [3D real time contrast enhanced ultrasonography,a new technique].

    PubMed

    Dietrich, C F

    2002-02-01

    While 3D sonography has become established in gynecology, abdominal applications have been mainly restricted to case reports. However, recent advances in computer technology have supported the development of new systems with motion detection methods and image registration algorithms - making it possible to acquire 3D data without position sensors, before and after administration of contrast enhancing agents. Hepatic (and also splenic) applications involve the topographic localization of masses in relation to the vessels, e.g. hepatic veins and portal vein branches prior to surgical procedures (segment localization). 3D imaging in the characterization of liver tumors after administration of contrast enhancing agents could become of special importance. We report on the first use of 3D imaging of the liver and spleen under real time conditions in 10 patients, using contrast enhanced phase inversion imaging with low mechanical index, which may improve the detection rate and characterization of liver and splenic tumors. PMID:11898076

  8. 3D model-based detection and tracking for space autonomous and uncooperative rendezvous

    NASA Astrophysics Data System (ADS)

    Shang, Yang; Zhang, Yueqiang; Liu, Haibo

    2015-10-01

    In order to fully navigate using a vision sensor, a 3D edge-model-based detection and tracking technique was developed. Firstly, we proposed a target detection strategy over a sequence of several images from the 3D model to initialize the tracking. The overall purpose of this approach is to robustly match each image with the model views of the target. We therefore designed a line segment detection and matching method based on multi-scale space technology. Experiments on real images showed that our method is highly robust under various image changes. Secondly, we proposed a method based on a 3D particle filter (PF) coupled with M-estimation to track and estimate the pose of the target efficiently. In the proposed approach, a similarity observation model was designed according to a new distance function for line segments. Then, based on the tracking results of the PF, the pose was optimized using M-estimation. Experiments indicated that the proposed method can effectively track and accurately estimate the pose of a freely moving target in an unconstrained environment.
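
    A generic particle-filter loop of the kind referred to above is sketched below for a one-dimensional pose. The Gaussian likelihood is only a placeholder for the paper's line-segment similarity model, and the M-estimation refinement is omitted; all numbers are illustrative.

```python
# Generic particle filter for tracking a scalar pose (illustration only).
import numpy as np

rng = np.random.default_rng(1)
N = 500                                     # number of particles
particles = rng.normal(0.0, 1.0, N)         # initial pose hypotheses
weights = np.ones(N) / N

def likelihood(pose, observation, sigma=0.2):
    """Placeholder observation model; the real system scores line-segment
    similarity between projected model edges and detected image edges."""
    return np.exp(-0.5 * ((pose - observation) / sigma) ** 2)

true_pose = 0.0
for step in range(30):
    true_pose += 0.05                       # target moves
    obs = true_pose + rng.normal(0.0, 0.1)  # noisy measurement
    particles += rng.normal(0.05, 0.02, N)  # propagate with a motion model
    weights *= likelihood(particles, obs)
    weights /= weights.sum()
    # resample when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < N / 2:
        idx = rng.choice(N, size=N, p=weights)
        particles, weights = particles[idx], np.ones(N) / N

estimate = np.sum(weights * particles)      # weighted-mean pose estimate
print("true pose %.3f, estimate %.3f" % (true_pose, estimate))
```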

  9. Testing the hybrid-3D Hillslope Hydrological Model in a Real-World Controlled Environment

    NASA Astrophysics Data System (ADS)

    Hazenberg, P.; Broxton, P. D.; Gochis, D. J.; Niu, G. Y.; Pelletier, J. D.; Troch, P. A. A.; Zeng, X.

    2015-12-01

    Hillslopes play an important role in converting rainfall into runoff and, as such, influence the terrestrial dynamics of the Earth's climate system. Recently, we have developed a hybrid-3D (h3D) hillslope hydrological model that couples a 1D vertical soil column model with a lateral pseudo-2D saturated zone and overland flow model. The h3D model gives similar results to the CATchment HYdrological model (CATHY), which simulates the subsurface movement of water with the 3D Richards equation, though the h3D model runs about 2-3 orders of magnitude faster. In the current work, the ability of the h3D model to predict real-world hydrological dynamics is assessed using a number of recharge-drainage experiments within the Landscape Evolution Observatory (LEO) at Biosphere 2 near Tucson, Arizona, USA. LEO offers accurate and high-resolution (both temporally and spatially) observations of the inputs, outputs and storage dynamics of several hillslopes. The level of detail of these observations is generally not possible with real-world hillslope studies. Therefore, LEO offers an optimal environment to test the h3D model. The h3D model captures the observed storage, baseflow, and overland flow dynamics of both a larger and a smaller hillslope. Furthermore, it simulates overland flow better than CATHY. The h3D model has difficulties correctly representing the height of the saturated zone close to the seepage face of the smaller hillslope, though. There is a gravel layer near this seepage face, and the numerical boundary condition of the h3D model is insufficient to capture the hydrological dynamics within this region. In addition, the h3D model is used to test the hypothesis that model parameters change through time due to the migration of soil particles during the recharge-drainage experiments. An in-depth calibration of the h3D model parameters reveals that the best results are obtained by applying an event-based optimization procedure as compared

  10. Automatic 2D to 3D conversion implemented for real-time applications

    NASA Astrophysics Data System (ADS)

    Ponomaryov, Volodymyr; Ramos-Diaz, Eduardo; Gonzalez Huitron, Victor

    2014-05-01

    Different hardware implementations of the designed automatic 2D to 3D video color conversion employing a 2D video sequence are presented. The analyzed framework includes joint processing of neighboring frames using the following blocks: CIELa*b* color space conversion, wavelet transform, edge detection using the high-frequency wavelet sub-bands (HL, LH and HH), color segmentation via k-means on the a*b* color plane, up-sampling, disparity map (DM) estimation, adaptive postfiltering, and finally, anaglyph 3D scene generation. During edge detection, the Donoho threshold is computed, each sub-band is binarized according to the chosen threshold, and finally the thresholded image is formed. DM estimation is performed in the following manner: in the left stereo image (or frame), a window with varying sizes is used according to the information obtained from the binarized sub-band image, distinguishing different texture areas in the LL sub-band image. Stereo matching is performed between the two (left and right) LL sub-band images using processing with different window sizes. An upsampling procedure is employed in order to obtain the enhanced DM. The adaptive post-processing procedure is based on a median filter and k-means segmentation in the a*b* color plane. The SSIM and QBP criteria are applied in order to compare the performance of the proposed framework against other disparity map computation techniques. The designed technique has been implemented on a DSP TMS320DM648 and in Matlab's Simulink module on a PC with Windows 7 and a graphics card (NVIDIA Quadro K2000), demonstrating that the proposed approach can be applied in real-time processing mode.
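
    The edge-detection step can be illustrated with a short sketch, assuming PyWavelets and a grayscale frame held as a NumPy array; the wavelet choice and the use of the diagonal sub-band for the noise estimate are assumptions, not taken from the paper.

    ```python
    import numpy as np
    import pywt

    def binarize_detail_subbands(gray_frame, wavelet="haar"):
        # One-level 2D DWT: cA is the LL approximation; cH, cV, cD are the detail sub-bands.
        cA, (cH, cV, cD) = pywt.dwt2(gray_frame.astype(float), wavelet)
        # Donoho's universal threshold: sigma * sqrt(2 ln N), with sigma estimated from the
        # median absolute deviation of the diagonal detail coefficients.
        sigma = np.median(np.abs(cD)) / 0.6745
        threshold = sigma * np.sqrt(2.0 * np.log(gray_frame.size))
        # Binarized detail maps mark edge/texture regions that later guide the variable
        # matching-window sizes used for disparity-map estimation.
        return cA, [(np.abs(band) > threshold).astype(np.uint8) for band in (cH, cV, cD)]

    frame = np.zeros((128, 128)); frame[:, 64:] = 255.0   # synthetic test frame
    ll, edge_maps = binarize_detail_subbands(frame)
    ```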

  11. Real-Time Modeling and 3D Visualization of Source Dynamics and Connectivity Using Wearable EEG

    PubMed Central

    Mullen, Tim; Kothe, Christian; Chi, Yu Mike; Ojeda, Alejandro; Kerth, Trevor; Makeig, Scott; Cauwenberghs, Gert; Jung, Tzyy-Ping

    2014-01-01

    This report summarizes our recent efforts to deliver real-time data extraction, preprocessing, artifact rejection, source reconstruction, multivariate dynamical system analysis (including spectral Granger causality) and 3D visualization as well as classification within the open-source SIFT and BCILAB toolboxes. We report the application of such a pipeline to simulated data and real EEG data obtained from a novel wearable high-density (64-channel) dry EEG system. PMID:24110155

  12. Real-time volume rendering of 4D image using 3D texture mapping

    NASA Astrophysics Data System (ADS)

    Hwang, Jinwoo; Kim, June-Sic; Kim, Jae Seok; Kim, In Young; Kim, Sun Il

    2001-05-01

    A four-dimensional image is 3D volume data that varies with time. It is used to represent deforming or moving objects in applications such as virtual surgery or 4D ultrasound. It is difficult to render a 4D image with conventional ray-casting or shear-warp factorization methods because of their long rendering times or the pre-processing stage required whenever the volume data change. Even when 3D texture mapping is used, repeated volume loading is also time-consuming in 4D image rendering. In this study, we propose a method to reduce data loading time using coherence between the currently loaded volume and the previously loaded volume in order to achieve real-time rendering based on 3D texture mapping. Volume data are divided into small bricks, and each brick being loaded is tested for similarity to the one already loaded in memory. If the brick passes the test, it is defined as a 3D texture by OpenGL functions. Later, the texture slices of the brick are mapped onto polygons and blended by OpenGL blending functions. All bricks undergo this test. Fifty continuously deforming volumes are rendered at interactive rates on an SGI ONYX. Real-time volume rendering based on 3D texture mapping is currently available on PCs.
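
    A minimal sketch of the brick-level coherence test follows, assuming a mean-absolute-difference similarity measure and 16-voxel bricks, neither of which is specified in the abstract.

    ```python
    import numpy as np

    def changed_bricks(prev_volume, next_volume, brick=16, tol=1.0):
        """Yield the origin index of every brick whose mean absolute difference
        exceeds tol; only those bricks would be re-uploaded as 3D textures."""
        nz, ny, nx = prev_volume.shape
        for z in range(0, nz, brick):
            for y in range(0, ny, brick):
                for x in range(0, nx, brick):
                    a = prev_volume[z:z+brick, y:y+brick, x:x+brick].astype(float)
                    b = next_volume[z:z+brick, y:y+brick, x:x+brick].astype(float)
                    if np.mean(np.abs(a - b)) > tol:
                        yield (z, y, x)

    # Usage: for each index returned, re-define that brick as an OpenGL 3D texture;
    # all other bricks keep the texture already resident on the graphics board.
    vol_t0 = np.zeros((64, 64, 64), dtype=np.uint8)
    vol_t1 = vol_t0.copy(); vol_t1[0:16, 0:16, 0:16] = 200
    print(list(changed_bricks(vol_t0, vol_t1)))
    ```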

  13. 3D-Printing of Arteriovenous Malformations for Radiosurgical Treatment: Pushing Anatomy Understanding to Real Boundaries

    PubMed Central

    Conti, Alfredo; Pontoriero, Antonio; Iatì, Giuseppe; Marino, Daniele; La Torre, Domenico; Vinci, Sergio; Germanò, Antonino; Pergolizzi, Stefano; Tomasello, Francesco

    2016-01-01

    Radiosurgery of arteriovenous malformations (AVMs) is a challenging procedure. Accuracy of target volume contouring is one major issue to achieve AVM obliteration while avoiding disastrous complications due to suboptimal treatment. We describe a technique to improve the understanding of the complex AVM angioarchitecture by 3D prototyping of individual lesions. Arteriovenous malformations of ten patients were prototyped by 3D printing using 3D rotational angiography (3DRA) as a template. A target volume was obtained using the 3DRA; a second volume was obtained, without awareness of the first volume, using 3DRA and the 3D-printed model. The two volumes were superimposed and the conjoint and disjoint volumes were measured. We also calculated the time needed to perform contouring and assessed the confidence of the surgeons in the definition of the target volumes using a six-point scale. The time required for the contouring of the target lesion was shorter when the surgeons used the 3D-printed model of the AVM (p=0.001). The average volume contoured without the 3D model was 5.6 ± 3 mL whereas it was 5.2 ± 2.9 mL with the 3D-printed model (p=0.003). The 3D prototypes proved to be spatially reliable. Surgeons were absolutely confident or very confident in all cases that the volume contoured using the 3D-printed model was plausible and corresponded to the real boundaries of the lesion. The total cost for each case was 50 euros whereas the cost of the 3D printer was 1600 euros. 3D prototyping of AVMs is a simple, affordable, and spatially reliable procedure that can be beneficial for radiosurgery treatment planning. According to our preliminary data, individual prototyping of the brain circulation provides an intuitive comprehension of the 3D anatomy of the lesion that can be rapidly and reliably translated into the target volume. PMID:27335707

  14. 3D-Printing of Arteriovenous Malformations for Radiosurgical Treatment: Pushing Anatomy Understanding to Real Boundaries.

    PubMed

    Conti, Alfredo; Pontoriero, Antonio; Iatì, Giuseppe; Marino, Daniele; La Torre, Domenico; Vinci, Sergio; Germanò, Antonino; Pergolizzi, Stefano; Tomasello, Francesco

    2016-01-01

    Radiosurgery of arteriovenous malformations (AVMs) is a challenging procedure. Accuracy of target volume contouring is one major issue to achieve AVM obliteration while avoiding disastrous complications due to suboptimal treatment. We describe a technique to improve the understanding of the complex AVM angioarchitecture by 3D prototyping of individual lesions. Arteriovenous malformations of ten patients were prototyped by 3D printing using 3D rotational angiography (3DRA) as a template. A target volume was obtained using the 3DRA; a second volume was obtained, without awareness of the first volume, using 3DRA and the 3D-printed model. The two volumes were superimposed and the conjoint and disjoint volumes were measured. We also calculated the time needed to perform contouring and assessed the confidence of the surgeons in the definition of the target volumes using a six-point scale. The time required for the contouring of the target lesion was shorter when the surgeons used the 3D-printed model of the AVM (p=0.001). The average volume contoured without the 3D model was 5.6 ± 3 mL whereas it was 5.2 ± 2.9 mL with the 3D-printed model (p=0.003). The 3D prototypes proved to be spatially reliable. Surgeons were absolutely confident or very confident in all cases that the volume contoured using the 3D-printed model was plausible and corresponded to the real boundaries of the lesion. The total cost for each case was 50 euros whereas the cost of the 3D printer was 1600 euros. 3D prototyping of AVMs is a simple, affordable, and spatially reliable procedure that can be beneficial for radiosurgery treatment planning. According to our preliminary data, individual prototyping of the brain circulation provides an intuitive comprehension of the 3D anatomy of the lesion that can be rapidly and reliably translated into the target volume. PMID:27335707

  15. Real-time 3D surface-image-guided beam setup in radiotherapy of breast cancer

    SciTech Connect

    Djajaputra, David; Li Shidong

    2005-01-01

    We describe an approach for external beam radiotherapy of breast cancer that utilizes the three-dimensional (3D) surface information of the breast. The surface data of the breast are obtained from a 3D optical camera that is rigidly mounted on the ceiling of the treatment vault. This 3D camera utilizes light in the visible range; therefore, it introduces no ionizing radiation to the patient. In addition to the surface topographical information of the treated area, the camera also captures gray-scale information that is overlaid on the 3D surface image. This allows us to visualize the skin markers and automatically determine the isocenter position and the beam angles in the breast tangential fields. The field sizes and shapes of the tangential, supraclavicular, and internal mammary gland fields can all be determined according to the 3D surface image of the target. A least-squares method is first introduced for the tangential-field setup that is useful for compensating for target shape changes. The entire process of capturing the 3D surface data and subsequent calculation of beam parameters typically requires less than 1 min. Our tests on phantom experiments and patient images have achieved an accuracy of 1 mm in shift and 0.5 degrees in rotation. Importantly, the target shape and position changes in each treatment session can both be corrected through this real-time image-guided system.
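
    The least-squares setup step can be illustrated with a standard rigid point-set alignment (Kabsch/Procrustes), shown below as a stand-in for the paper's tangential-field method; point correspondence between the reference and current surfaces is assumed, and all values are placeholders.

    ```python
    import numpy as np

    def rigid_fit(reference_pts, current_pts):
        """Return rotation R and translation t minimizing ||R @ ref_i + t - cur_i||^2."""
        ref_c = reference_pts - reference_pts.mean(axis=0)
        cur_c = current_pts - current_pts.mean(axis=0)
        U, _, Vt = np.linalg.svd(ref_c.T @ cur_c)
        # Guard against a reflection sneaking into the least-squares rotation.
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = current_pts.mean(axis=0) - R @ reference_pts.mean(axis=0)
        return R, t

    # Hypothetical surface points: the recovered shift/rotation would translate into
    # couch and beam-angle corrections for the tangential fields.
    ref = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
    cur = ref + np.array([0.003, -0.002, 0.001])   # a pure millimetre-scale shift
    R, t = rigid_fit(ref, cur)
    ```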

  16. 3D Embedded Reconfigurable Riometer for Heliospheric Space Missions

    NASA Astrophysics Data System (ADS)

    Dekoulis, George

    2016-07-01

    This paper describes the development of a new three-dimensional embedded reconfigurable Riometer for performing remote sensing of planetary magnetospheres. The system couples the in situ measurements of probe or orbiter magnetospheric space missions. The new prototype features a multi-frequency mode that allows measurements at frequencies where the signatures of heliospheric physics events are distinct in the ionized planetary plasma. For our planet, similar measurements are meaningful for frequencies below 55 MHz. Observation frequencies above 55 MHz yield direct measurements of the Cosmic Microwave Background intensity. The system acts as a prototyping platform for subsequent space exploration phased-array imaging experiments, due to its high-intensity scientific processing capabilities. The performance improvement over existing systems in operation is in the range of 80%, due to the state-of-the-art hardware and scientific processing used.

  17. A volumetric sensor for real-time 3D mapping and robot navigation

    NASA Astrophysics Data System (ADS)

    Fournier, Jonathan; Ricard, Benoit; Laurendeau, Denis

    2006-05-01

    The use of robots for (semi-) autonomous operations in complex terrains such as urban environments poses difficult mobility, mapping, and perception challenges. To be able to work efficiently, a robot should be provided with sensors and software such that it can perceive and analyze the world in 3D. Real-time 3D sensing and perception in this operational context are paramount. To address these challenges, DRDC Valcartier has developed over the past years a compact sensor that combines a wide baseline stereo camera and a laser scanner with a full 360 degree azimuth and 55 degree elevation field of view allowing the robot to view and manage overhang obstacles as well as obstacles at ground level. Sensing in 3D is common but to efficiently navigate and work in complex terrain, the robot should also perceive, decide and act in three dimensions. Therefore, 3D information should be preserved and exploited in all steps of the process. To achieve this, we use a multiresolution octree to store the acquired data, allowing mapping of large environments while keeping the representation compact and memory efficient. Ray tracing is used to build and update the 3D occupancy model. This model is used, via a temporary 2.5D map, for navigation, obstacle avoidance and efficient frontier-based exploration. This paper describes the volumetric sensor concept, describes its design features and presents an overview of the 3D software framework that allows 3D information persistency through all computation steps. Simulation and real-world experiments are presented at the end of the paper to demonstrate the key elements of our approach.
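
    The ray-traced occupancy update can be sketched as follows, using a hash-map voxel grid instead of the paper's multiresolution octree; the cell size and log-odds increments are assumed values.

    ```python
    import numpy as np

    L_FREE, L_OCC, VOXEL = -0.4, 0.85, 0.10   # assumed log-odds increments and 10 cm cells
    log_odds = {}                              # voxel index -> log-odds of being occupied

    def integrate_ray(origin, endpoint):
        origin, endpoint = np.asarray(origin, float), np.asarray(endpoint, float)
        n = max(int(np.linalg.norm(endpoint - origin) / (0.5 * VOXEL)), 1)
        # Cells traversed by the ray become more likely to be free...
        for s in np.linspace(0.0, 1.0, n, endpoint=False):
            key = tuple(np.floor((origin + s * (endpoint - origin)) / VOXEL).astype(int))
            log_odds[key] = log_odds.get(key, 0.0) + L_FREE
        # ...and the cell containing the measured return becomes more likely occupied.
        hit = tuple(np.floor(endpoint / VOXEL).astype(int))
        log_odds[hit] = log_odds.get(hit, 0.0) + L_OCC

    integrate_ray([0.0, 0.0, 0.0], [1.0, 0.3, 0.2])
    ```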

  18. Real-time 3D human pose recognition from reconstructed volume via voxel classifiers

    NASA Astrophysics Data System (ADS)

    Yoo, ByungIn; Choi, Changkyu; Han, Jae-Joon; Lee, Changkyo; Kim, Wonjun; Suh, Sungjoo; Park, Dusik; Kim, Junmo

    2014-03-01

    This paper presents a human pose recognition method which simultaneously reconstructs a human volume based on an ensemble of voxel classifiers from a single depth image in real time. Human pose recognition is a difficult task since a single depth camera can capture only the visible surfaces of a human body. In order to recognize invisible (self-occluded) surfaces of a human body, the proposed algorithm employs voxel classifiers trained with multi-layered synthetic voxels. Specifically, ray-casting onto a volumetric human model generates synthetic voxels, where each voxel consists of a 3D position and an ID corresponding to the body part. The synthesized volumetric data, which contain both visible and invisible body voxels, are utilized to train the voxel classifiers. As a result, the voxel classifiers not only identify the visible voxels but also reconstruct the 3D positions and the IDs of the invisible voxels. The experimental results show improved performance in estimating human poses due to the capability of inferring the invisible human body voxels. It is expected that the proposed algorithm can be applied to many fields such as telepresence, gaming, virtual fitting, wellness business, and real 3D content control on real 3D displays.
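
    A skeletal sketch of the ensemble voxel classifier is given below, using scikit-learn's random forest and made-up per-voxel features and body-part labels; the paper's actual training data come from multi-layered synthetic voxels rendered from a volumetric human model, which is not reproduced here.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(5000, 16))      # 16 hypothetical per-voxel features
    y_train = rng.integers(0, 31, size=5000)   # 31 hypothetical body-part IDs

    # An ensemble of trees stands in for the paper's voxel classifiers.
    clf = RandomForestClassifier(n_estimators=50, max_depth=12, random_state=0)
    clf.fit(X_train, y_train)

    # At run time, every candidate voxel (visible or inferred) receives a body-part label.
    part_ids = clf.predict(rng.normal(size=(100, 16)))
    ```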

  19. Space Radar Image of Long Valley, California - 3D view

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is a three-dimensional perspective view of Long Valley, California, by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar on board the space shuttle Endeavour. This view was constructed by overlaying a color composite SIR-C image on a digital elevation map. The digital elevation map was produced using radar interferometry, a process by which radar data acquired on different passes of the space shuttle are compared to obtain elevation information. The data were acquired on April 13, 1994 and on October 3, 1994, during the first and second flights of the SIR-C/X-SAR radar instrument. The color composite radar image was produced by assigning red to the C-band (horizontally transmitted and vertically received) polarization; green to the C-band (vertically transmitted and received) polarization; and blue to the ratio of the two data sets. Blue areas in the image are smooth and yellow areas are rock outcrops with varying amounts of snow and vegetation. The view is looking north along the northeastern edge of the Long Valley caldera, a volcanic collapse feature created 750,000 years ago and the site of continued subsurface activity. Crowley Lake is off the image to the left. Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory

  20. Space Radar Image of Long Valley, California in 3-D

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This three-dimensional perspective view of Long Valley, California, was created from data taken by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar on board the space shuttle Endeavour. This image was constructed by overlaying a color composite SIR-C radar image on a digital elevation map. The digital elevation map was produced using radar interferometry, a process by which radar data are acquired on different passes of the space shuttle. The two data passes are compared to obtain elevation information. The interferometry data were acquired on April 13, 1994 and on October 3, 1994, during the first and second flights of the SIR-C/X-SAR instrument. The color composite radar image was taken in October and was produced by assigning red to the C-band (horizontally transmitted and vertically received) polarization; green to the C-band (vertically transmitted and received) polarization; and blue to the ratio of the two data sets. Blue areas in the image are smooth and yellow areas are rock outcrops with varying amounts of snow and vegetation. The view is looking north along the northeastern edge of the Long Valley caldera, a volcanic collapse feature created 750,000 years ago and the site of continued subsurface activity. Crowley Lake is the large dark feature in the foreground. Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are

  1. Space Radar Image Isla Isabela in 3-D

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This is a three-dimensional view of Isabela, one of the Galapagos Islands located off the western coast of Ecuador, South America. This view was constructed by overlaying a Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) image on a digital elevation map produced by TOPSAR, a prototype airborne interferometric radar which produces simultaneous image and elevation data. The vertical scale in this image is exaggerated by a factor of 1.87. The SIR-C/X-SAR image was taken on the 40th orbit of space shuttle Endeavour. The image is centered at about 0.5 degree south latitude and 91 degrees west longitude and covers an area of 75 by 60 kilometers (47 by 37 miles). The radar incidence angle at the center of the image is about 20 degrees. The western Galapagos Islands, which lie about 1,200 kilometers (750 miles) west of Ecuador in the eastern Pacific, have six active volcanoes similar to the volcanoes found in Hawaii and reflect the volcanic processes that occur where the ocean floor is created. Since the time of Charles Darwin's visit to the area in 1835, there have been more than 60 recorded eruptions on these volcanoes. This SIR-C/X-SAR image of Alcedo and Sierra Negra volcanoes shows the rougher lava flows as bright features, while ash deposits and smooth pahoehoe lava flows appear dark. Vertical exaggeration of relief is a common tool scientists use to detect relationships between structure (for example, faults and fractures) and topography. Spaceborne Imaging Radar-C and X-Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data

  2. Space Radar Image of Mammoth, California in 3-D

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This is a three-dimensional perspective of Mammoth Mountain, California. This view was constructed by overlaying a Spaceborne Imaging Radar-C (SIR-C) radar image on a U.S. Geological Survey digital elevation map. Vertical exaggeration is 1.87 times. The image is centered at 37.6 degrees north, 119.0 degrees west. It was acquired from the Spaceborne Imaging Radar C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) onboard space shuttle Endeavour on its 67th orbit on April 13, 1994. In this color representation, red is C-band HV-polarization, green is C-band VV-polarization and blue is the ratio of C-band VV to C-band HV. Blue areas are smooth, and yellow areas are rock outcrops with varying amounts of snow and vegetation. Crowley Lake is in the foreground, and Highway 395 crosses in the middle of the image. Mammoth Mountain is shown in the upper right. Spaceborne Imaging Radar-C and X-Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI).

  3. Space Radar Image of Missoula, Montana in 3-D

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is a three-dimensional perspective view of Missoula, Montana, created by combining two spaceborne radar images using a technique known as interferometry. Visualizations like this are useful because they show scientists the shapes of the topographic features such as mountains and valleys. This technique helps to clarify the relationships of the different types of materials on the surface detected by the radar. The view is looking north-northeast. The blue circular area at the lower left corner is a bend of the Bitterroot River just before it joins the Clark Fork, which runs through the city. Crossing the Bitterroot River is the bridge of U.S. Highway 93. Highest mountains in this image are at elevations of 2,200 meters (7,200 feet). The city is about 975 meters (3,200 feet) above sea level. The bright yellow areas are urban and suburban zones, dark brown and blue-green areas are grasslands, bright green areas are farms, light brown and purple areas are scrub and forest, and bright white and blue areas are steep rocky slopes. The two radar images were taken on successive days by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) onboard the space shuttle Endeavour in October 1994. The digital elevation map was produced using radar interferometry, a process in which radar data are acquired on different passes of the space shuttle. The two data passes are compared to obtain elevation information. Radar image data are draped over the topography to provide the color with the following assignments: red is L-band vertically transmitted, vertically received; green is C-band vertically transmitted, vertically received; and blue are differences seen in the L-band data between the two days. This image is centered near 46.9 degrees north latitude and 114.1 degrees west longitude. No vertical exaggeration factor has been applied to the data. SIR-C/X-SAR, a joint mission of the German, Italian and United States space agencies, is part of NASA

  4. New fabrication techniques for ring-array transducers for real-time 3D intravascular ultrasound.

    PubMed

    Light, Edward D; Lieu, Victor; Smith, Stephen W

    2009-10-01

    We have previously described miniature 2D array transducers integrated into a Cook Medical, Inc. vena cava filter deployment device. While functional, the fabrication technique was very labor intensive and did not lend itself well to efficient fabrication of large numbers of devices. We developed two new fabrication methods that we believe can be used to efficiently manufacture these types of devices in greater than prototype numbers. One transducer consisted of 55 elements operating near 5 MHz. The interelement spacing is 0.20 mm. It was constructed on a flat piece of copper-clad polyimide and then wrapped around an 11 French catheter of a Cook Medical, Inc. inferior vena cava (IVC) filter deployment device. We used a braided wiring technology from Tyco Electronics Corp. to connect the elements to our real-time 3D ultrasound scanner. Typical measured transducer element bandwidth was 20% centered at 4.7 MHz and the 50 Ω round trip insertion loss was -82 dB. The mean of the nearest neighbor cross talk was -37.0 dB. The second method consisted of a 46-cm long single layer flex circuit from MicroConnex that terminates in an interconnect that plugs directly into our system cable. This transducer had 70 elements at 0.157 mm interelement spacing operating at 4.8 MHz. Typical measured transducer element bandwidth was 29% and the 50 Ω round trip insertion loss was -83 dB. The mean of the nearest neighbor cross talk was -33.0 dB. PMID:20458877

  5. Real-time 3D visualization of volumetric video motion sensor data

    SciTech Connect

    Carlson, J.; Stansfield, S.; Shawver, D.; Flachs, G.M.; Jordan, J.B.; Bao, Z.

    1996-11-01

    This paper addresses the problem of improving detection, assessment, and response capabilities of security systems. Our approach combines two state-of-the-art technologies: volumetric video motion detection (VVMD) and virtual reality (VR). This work capitalizes on the ability of VVMD technology to provide three-dimensional (3D) information about the position, shape, and size of intruders within a protected volume. The 3D information is obtained by fusing motion detection data from multiple video sensors. The second component involves the application of VR technology to display information relating to the sensors and the sensor environment. VR technology enables an operator, or security guard, to be immersed in a 3D graphical representation of the remote site. VVMD data is transmitted from the remote site via ordinary telephone lines. There are several benefits to displaying VVMD information in this way. Because the VVMD system provides 3D information and because the sensor environment is a physical 3D space, it seems natural to display this information in 3D. Also, the 3D graphical representation depicts essential details within and around the protected volume in a natural way for human perception. Sensor information can also be more easily interpreted when the operator can 'move' through the virtual environment and explore the relationships between the sensor data, objects and other visual cues present in the virtual environment. By exploiting the powerful ability of humans to understand and interpret 3D information, we expect to improve the means for visualizing and interpreting sensor information, allow a human operator to assess a potential threat more quickly and accurately, and enable a more effective response. This paper will detail both the VVMD and VR technologies and will discuss a prototype system based upon their integration.

  6. Feasibility of real-time 3D echocardiography in weightlessness during parabolic flight.

    PubMed

    Caiani, E G; Sugeng, L; Weinert, L; Husson, S; Bailliart, O; Capderou, A; Lang, R M; Vaida, P

    2004-07-01

    The aim of the study was to test the feasibility of transthoracic real-time 3D (Philips) echocardiography (RT3D) during parabolic flight, to allow direct measurement of changes in heart chamber volumes during the parabola. One RT3D dataset corresponding to one cardiac cycle was acquired at each gravity phase (1 Gz, 1.8 Gz, 0 Gz, 1.8 Gz) during breath-hold in 8 unmedicated normal subjects (41 +/- 8 years old) in the standing upright position. Preliminary results, obtained by semi-automatically tracing left ventricular (LV) and left atrial (LA) endocardial contours in multiple views (Tomtec), showed a significant (p<0.05) reduction, compared to 1 Gz, of LV and LA volumes with 1.8 Gz, and a significant increase with 0 Gz. Further analysis will focus on the right heart.

  7. Autonomous Real-Time Interventional Scan Plane Control With a 3-D Shape-Sensing Needle

    PubMed Central

    Plata, Juan Camilo; Holbrook, Andrew B.; Park, Yong-Lae; Pauly, Kim Butts; Daniel, Bruce L.; Cutkosky, Mark R.

    2016-01-01

    This study demonstrates real-time scan plane control dependent on three-dimensional needle bending, as measured from magnetic resonance imaging (MRI)-compatible optical strain sensors. A biopsy needle with embedded fiber Bragg grating (FBG) sensors to measure surface strains is used to estimate its full 3-D shape and control the imaging plane of an MR scanner in real-time, based on the needle’s estimated profile. The needle and scanner coordinate frames are registered to each other via miniature radio-frequency (RF) tracking coils, and the scan planes autonomously track the needle as it is deflected, keeping its tip in view. A 3-D needle annotation is superimposed over MR-images presented in a 3-D environment with the scanner’s frame of reference. Scan planes calculated based on the FBG sensors successfully follow the tip of the needle. Experiments using the FBG sensors and RF coils to track the needle shape and location in real-time had an average root mean square error of 4.2 mm when comparing the estimated shape to the needle profile as seen in high resolution MR images. This positional variance is less than the image artifact caused by the needle in high resolution SPGR (spoiled gradient recalled) images. Optical fiber strain sensors can estimate a needle’s profile in real-time and be used for MRI scan plane control to potentially enable faster and more accurate physician response. PMID:24968093
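
    Under simplified small-deflection beam assumptions (not taken from the paper), the strain-to-shape step can be sketched as follows: strain at a known fiber offset gives local curvature, and integrating curvature twice along the shaft gives the deflection profile. All values are hypothetical.

    ```python
    import numpy as np

    def needle_deflection(strains, fiber_offset, ds):
        """strains: axial strain at each grating; fiber_offset: distance of the fiber from
        the neutral axis; ds: spacing between gratings (same length unit as fiber_offset)."""
        curvature = np.asarray(strains, float) / fiber_offset   # kappa = strain / r
        slope = np.cumsum(curvature) * ds                       # first integration
        return np.cumsum(slope) * ds                            # second integration: deflection

    # Three gratings spaced 40 mm apart on a fiber 0.5 mm off-axis (hypothetical values).
    tip_profile = needle_deflection([1e-4, 2e-4, 1.5e-4], fiber_offset=0.5, ds=40.0)
    ```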

  8. Interaction Design and Usability of Learning Spaces in 3D Multi-user Virtual Worlds

    NASA Astrophysics Data System (ADS)

    Minocha, Shailey; Reeves, Ahmad John

    Three-dimensional virtual worlds are multimedia, simulated environments, often managed over the Web, which users can 'inhabit' and interact with via their own graphical self-representations known as 'avatars'. 3D virtual worlds are being used in many applications: education/training, gaming, social networking, marketing and commerce. Second Life is the most widely used 3D virtual world in education. However, problems associated with usability, navigation and wayfinding in 3D virtual worlds may impact on student learning and engagement. Based on empirical investigations of learning spaces in Second Life, this paper presents design guidelines to improve the usability and ease of navigation in 3D spaces. Methods of data collection include semi-structured interviews with Second Life students, educators and designers. The findings have revealed that design principles from the fields of urban planning, Human-Computer Interaction, Web usability, geography and psychology can influence the design of spaces in 3D multi-user virtual environments.

  9. Calculating Least Risk Paths in 3d Indoor Space

    NASA Astrophysics Data System (ADS)

    Vanclooster, A.; De Maeyer, Ph.; Fack, V.; Van de Weghe, N.

    2013-08-01

    Over the last couple of years, research on indoor environments has gained a fresh impetus; more specifically, applications that support navigation and wayfinding have become one of the booming industries. Indoor navigation research currently covers the technological aspect of indoor positioning and the modelling of indoor space. The algorithmic development to support navigation has so far been left mostly untouched, as most applications mainly rely on adapting Dijkstra's shortest path algorithm to an indoor network. However, alternative algorithms for outdoor navigation have been proposed that add a more cognitive notion to the calculated paths and as such adhere to natural wayfinding behaviour (e.g. simplest paths, least risk paths). These algorithms are currently restricted to outdoor applications. The need for indoor cognitive algorithms is highlighted by the more challenging navigation and orientation due to the specific indoor structure (e.g. fragmentation, less visibility, confined areas…). As such, the clarity and ease of route instructions is of paramount importance when distributing indoor routes. A shortest or fastest path indoors does not necessarily align with the cognitive mapping of the building. Therefore, the aim of this research is to extend those richer cognitive algorithms to three-dimensional indoor environments. More specifically for this paper, we will focus on the application of the least risk path algorithm of Grum (2005) to an indoor space. The algorithm as proposed by Grum (2005) is duplicated and tested in a complex multi-storey building. The results of several least risk path calculations are compared to the shortest paths in indoor environments in terms of total length, improvement in route description complexity and number of turns. Several scenarios are tested in this comparison: paths covering a single floor, paths crossing several building wings and/or floors. Adjustments to the algorithm are proposed to be more aligned to the
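
    In the spirit of the least risk path idea, the sketch below runs a Dijkstra-style search whose edge cost combines length with a weighted risk term; the edge attributes and the weighting are hypothetical and do not reproduce the exact formulation of Grum (2005).

    ```python
    import heapq

    def least_risk_path(graph, start, goal, risk_weight=1.0):
        """graph: node -> list of (neighbour, length, risk). Returns (cost, path)."""
        queue, best = [(0.0, start, [start])], {start: 0.0}
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == goal:
                return cost, path
            for nxt, length, risk in graph.get(node, []):
                new_cost = cost + length + risk_weight * risk
                if new_cost < best.get(nxt, float("inf")):
                    best[nxt] = new_cost
                    heapq.heappush(queue, (new_cost, nxt, path + [nxt]))
        return float("inf"), []

    # A tiny indoor network where the shorter corridor passes a 'riskier' junction.
    g = {"A": [("B", 10, 5), ("C", 12, 1)], "B": [("D", 10, 5)], "C": [("D", 12, 1)]}
    print(least_risk_path(g, "A", "D", risk_weight=2.0))   # prefers A-C-D
    ```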

  10. Space Radar Image of Karakax Valley, China 3-D

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This three-dimensional perspective of the remote Karakax Valley in the northern Tibetan Plateau of western China was created by combining two spaceborne radar images using a technique known as interferometry. Visualizations like this are helpful to scientists because they reveal where the slopes of the valley are cut by erosion, as well as the accumulations of gravel deposits at the base of the mountains. These gravel deposits, called alluvial fans, are a common landform in desert regions that scientists are mapping in order to learn more about Earth's past climate changes. Higher up the valley side is a clear break in the slope, running straight, just below the ridge line. This is the trace of the Altyn Tagh fault, which is much longer than California's San Andreas fault. Geophysicists are studying this fault for clues it may be able to give them about large faults. Elevations range from 4000 m (13,100 ft) in the valley to over 6000 m (19,700 ft) at the peaks of the glaciated Kun Lun mountains running from the front right towards the back. Scale varies in this perspective view, but the area is about 20 km (12 miles) wide in the middle of the image, and there is no vertical exaggeration. The two radar images were acquired on separate days during the second flight of the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) aboard the space shuttle Endeavour in October 1994. The interferometry technique provides elevation measurements of all points in the scene. The resulting digital topographic map was used to create this view, looking northwest from high over the valley. Variations in the colors can be related to gravel, sand and rock outcrops. This image is centered at 36.1 degrees north latitude, 79.2 degrees east longitude. Radar image data are draped over the topography to provide the color with the following assignments: Red is L-band vertically transmitted, vertically received; green is the average of L-band vertically transmitted

  11. Towards a Real Estate Registry 3d Model in Portugal: Some Illustrative Case Studies

    NASA Astrophysics Data System (ADS)

    de Almeida, J.-P.; Ellul, C.; Rodrigues-de-Carvalho, M. M.

    2013-09-01

    The 3D concept has emerged as a key concept within geoinformation science. 3D geoinformation has been proved to be feasible, and its added value over 2D geoinformation is widely acknowledged by researchers from various fields. Even so, the merits of the 3D concept still need to be exploited further, and more specific applications and associated products are needed, such as within the real estate cadastre, our ultimate field of interest. The growing densification of urban land use is consequently increasing situations of vertical stratification of ownership rights. Traditional 2D cadastral models are not able to fully handle spatial information on those rights in the third dimension. Thus, 3D cadastre has been attracting researchers seeking to better register and spatially represent real-world overlapping situations. A centralised distributed cadastral management system, implementing a 2D cadastral model, has been conceived by the national cadastral agency in Portugal: the so-called SiNErGIC. The authors seek to show with this paper that there is nevertheless room for further investigation of the suitability of a 3D modelling approach instead, which should not be confined only to topological-geometric representations but should also be extended in order to incorporate the legal/administrative component. This paper intends to be the first step towards the design of a prototype of a 3D cadastral model capable of handling the overall multipurpose cadastral reality in Portugal; it focuses primarily on the clear identification of some case studies that may illustrate the pertinence of such an approach in the context of this country.

  12. Space Radar Image of Kilauea, Hawaii in 3-D

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This is a three-dimensional perspective view of a false-color image of the eastern part of the Big Island of Hawaii. It was produced using all three radar frequencies -- X-band, C-band and L-band -- from the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) flying on the space shuttle Endeavour, overlaid on a U.S. Geological Survey digital elevation map. Visible in the center of the image in blue are the summit crater (Kilauea Caldera) which contains the smaller Halemaumau Crater, and the line of collapse craters below them that form the Chain of Craters Road. The image was acquired on April 12, 1994 during orbit 52 of the space shuttle. The area shown is approximately 34 by 57 kilometers (21 by 35 miles) with the top of the image pointing toward northwest. The image is centered at about 155.25 degrees west longitude and 19.5 degrees north latitude. The false colors are created by displaying three radar channels of different frequency. Red areas correspond to high backscatter at L-HV polarization, while green areas exhibit high backscatter at C-HV polarization. Finally, blue shows high return at X-VV polarization. Using this color scheme, the rain forest appears bright on the image, while the green areas correspond to lower vegetation. The lava flows have different colors depending on their types and are easily recognizable due to their shapes. The flows at the top of the image originated from the Mauna Loa volcano. Kilauea volcano has been almost continuously active for more than the last 11 years. Field teams that were on the ground specifically to support these radar observations report that there was vigorous surface activity about 400 meters (one-quarter mile) inland from the coast. A moving lava flow about 200 meters (650 feet) in length was observed at the time of the shuttle overflight, raising the possibility that subsequent images taken during this mission will show changes in the landscape. Currently, most of the lava that is

  13. Extraction of the 3D Free Space from Building Models for Indoor Navigation

    NASA Astrophysics Data System (ADS)

    Diakité, A. A.; Zlatanova, S.

    2016-10-01

    For several decades, indoor navigation has been investigated exclusively from a 2D perspective, based on floor plans, projections and other 2D representations of buildings. Nevertheless, 3D representations are closer to our reality and offer a more intuitive description of the space configuration. Thanks to recent advances in 3D modelling, 3D navigation is slowly but increasingly gaining interest in indoor applications. However, because the structure of indoor environments is often more complex than outdoor ones, very simplified models are used and obstacles are not considered for indoor navigation, leading to limited possibilities in complex buildings. In this paper we consider the entire configuration of the indoor environment in 3D and introduce a method to extract from it the actual navigable space as a network of connected 3D spaces (volumes). We describe how to construct such 3D free spaces from semantically rich and furnished IFC models. The approach combines the geometric, the topological and the semantic information available in a 3D model to isolate the free space from the rest of the components. Furthermore, the extraction of such navigable spaces in building models lacking semantic information is also considered. A data structure named combinatorial maps is used to support the operations required by the process while preserving the topological and semantic information of the input models.
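
    A toy version of the free-space isolation step is sketched below on an assumed voxel grid rather than on IFC geometry (and without combinatorial maps): obstacle boxes are rasterized as occupied, and a flood fill from a seed point collects the connected free volume that a 3D navigation network could use.

    ```python
    import numpy as np
    from collections import deque

    def free_space(shape, obstacle_boxes, seed):
        occupied = np.zeros(shape, dtype=bool)
        for z0, y0, x0, z1, y1, x1 in obstacle_boxes:        # walls, furniture, slabs...
            occupied[z0:z1, y0:y1, x0:x1] = True
        free = np.zeros(shape, dtype=bool)
        queue = deque([seed])
        while queue:
            z, y, x = queue.popleft()
            if not (0 <= z < shape[0] and 0 <= y < shape[1] and 0 <= x < shape[2]):
                continue
            if occupied[z, y, x] or free[z, y, x]:
                continue
            free[z, y, x] = True                              # reachable navigable cell
            queue.extend([(z + 1, y, x), (z - 1, y, x), (z, y + 1, x),
                          (z, y - 1, x), (z, y, x + 1), (z, y, x - 1)])
        return free

    navigable = free_space((20, 50, 50), [(0, 10, 10, 20, 12, 40)], seed=(1, 1, 1))
    ```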

  14. Design of Learning Spaces in 3D Virtual Worlds: An Empirical Investigation of "Second Life"

    ERIC Educational Resources Information Center

    Minocha, Shailey; Reeves, Ahmad John

    2010-01-01

    "Second Life" (SL) is a three-dimensional (3D) virtual world, and educational institutions are adopting SL to support their teaching and learning. Although the question of how 3D learning spaces should be designed to support student learning and engagement has been raised among SL educators and designers, there is hardly any guidance or research…

  15. Space Radar Image of Death Valley in 3-D

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This picture is a three-dimensional perspective view of Death Valley, California. This view was constructed by overlaying a SIR-C radar image on a U.S. Geological Survey digital elevation map. The SIR-C image is centered at 36.629 degrees north latitude and 117.069 degrees west longitude. We are looking at Stove Pipe Wells, which is the bright rectangle located in the center of the picture frame. Our vantage point is located atop a large alluvial fan centered at the mouth of Cottonwood Canyon. In the foreground on the left, we can see the sand dunes near Stove Pipe Wells. In the background on the left, the Valley floor gradually falls in elevation toward Badwater, the lowest spot in the United States. In the background on the right we can see Tucki Mountain. This SIR-C/X-SAR supersite is an area of extensive field investigations and has been visited by both Space Radar Lab astronaut crews. Elevations in the Valley range from 70 meters (230 feet) below sea level, the lowest in the United States, to more than 3,300 meters (10,800 feet) above sea level. Scientists are using SIR-C/X-SAR data from Death Valley to help answer a number of different questions about Earth's geology. One question concerns how alluvial fans are formed and change through time under the influence of climatic changes and earthquakes. Alluvial fans are gravel deposits that wash down from the mountains over time. They are visible in the image as circular, fan-shaped bright areas extending into the darker valley floor from the mountains. Information about the alluvial fans helps scientists study Earth's ancient climate. Scientists know the fans are built up through climatic and tectonic processes and they will use the SIR-C/X-SAR data to understand the nature and rates of weathering processes on the fans, soil formation and the transport of sand and dust by the wind. SIR-C/X-SAR's sensitivity to centimeter-scale (inch-scale) roughness provides detailed maps of surface texture. Such information

  16. Space Radar Image of Kilauea, Hawaii in 3-D

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This is a three-dimensional perspective view of a false-color image of the eastern part of the Big Island of Hawaii. It was produced using all three radar frequencies -- X-band, C-band and L-band -- from the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) flying on the space shuttle Endeavour, overlaid on a U.S. Geological Survey digital elevation map. Visible in the center of the image in blue are the summit crater (Kilauea Caldera) which contains the smaller Halemaumau Crater, and the line of collapse craters below them that form the Chain of Craters Road. The image was acquired on April 12, 1994 during orbit 52 of the space shuttle. The area shown is approximately 34 by 57 kilometers (21 by 35 miles) with the top of the image pointing toward northwest. The image is centered at about 155.25 degrees west longitude and 19.5 degrees north latitude. The false colors are created by displaying three radar channels of different frequency. Red areas correspond to high backscatter at L-HV polarization, while green areas exhibit high backscatter at C-HV polarization. Finally, blue shows high return at X-VV polarization. Using this color scheme, the rain forest appears bright on the image, while the green areas correspond to lower vegetation. The lava flows have different colors depending on their types and are easily recognizable due to their shapes. The flows at the top of the image originated from the Mauna Loa volcano. Kilauea volcano has been almost continuously active for more than the last 11 years. Field teams that were on the ground specifically to support these radar observations report that there was vigorous surface activity about 400 meters (one-quarter mile) inland from the coast. A moving lava flow about 200 meters (650 feet) in length was observed at the time of the shuttle overflight, raising the possibility that subsequent images taken during this mission will show changes in the landscape. Currently, most of the lava that is

  17. 3D Data Mapping and Real-Time Experiment Control and Visualization in Brain Slices.

    PubMed

    Navarro, Marco A; Hibbard, Jaime V K; Miller, Michael E; Nivin, Tyler W; Milescu, Lorin S

    2015-10-20

    Here, we propose two basic concepts that can streamline electrophysiology and imaging experiments in brain slices and enhance data collection and analysis. The first idea is to interface the experiment with a software environment that provides a 3D scene viewer in which the experimental rig, the brain slice, and the recorded data are represented to scale. Within the 3D scene viewer, the user can visualize a live image of the sample and 3D renderings of the recording electrodes with real-time position feedback. Furthermore, the user can control the instruments and visualize their status in real time. The second idea is to integrate multiple types of experimental data into a spatial and temporal map of the brain slice. These data may include low-magnification maps of the entire brain slice, for spatial context, or any other type of high-resolution structural and functional image, together with time-resolved electrical and optical signals. The entire data collection can be visualized within the 3D scene viewer. These concepts can be applied to any other type of experiment in which high-resolution data are recorded within a larger sample at different spatial and temporal coordinates.

  18. Real-Time 3D Reconstruction from Images Taken from a UAV

    NASA Astrophysics Data System (ADS)

    Zingoni, A.; Diani, M.; Corsini, G.; Masini, A.

    2015-08-01

    We designed a method for creating 3D models of objects and areas from two aerial images acquired from a UAV. The models are generated automatically and in real time, and consist of dense, true-colour reconstructions of the considered areas, which give the operator the impression of being physically present within the scene. The proposed method needs only a cheap compact camera mounted on a small UAV. No additional instrumentation is necessary, so the costs are very limited. The method consists of two main parts: the design of the acquisition system and the 3D reconstruction algorithm. In the first part, the choices for the acquisition geometry and for the camera parameters are optimized in order to yield the best performance. In the second part, a reconstruction algorithm extracts the 3D model from the two acquired images, maximizing the accuracy under the real-time constraint. A test was performed in monitoring a construction yard, obtaining very promising results. Highly realistic and easy-to-interpret 3D models of objects and areas of interest were produced in less than one second, with an accuracy of about 0.5 m. Given its characteristics, the designed method is suitable for video surveillance, remote sensing and monitoring, especially in applications that require intuitive and reliable information quickly, such as disaster monitoring, search and rescue, and area surveillance.
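
    Assuming OpenCV and two already-rectified frames (the file names, focal length and baseline below are placeholders), the dense reconstruction step can be sketched with block matching followed by reprojection to depth; the paper's own algorithm is not reproduced here.

    ```python
    import cv2
    import numpy as np

    # Placeholder file names for two rectified, grayscale frames from the UAV camera.
    left = cv2.imread("frame_left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("frame_right.png", cv2.IMREAD_GRAYSCALE)

    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left, right).astype(np.float32) / 16.0   # fixed-point output

    # Reproject to depth with an assumed focal length (pixels) and baseline (metres).
    focal_px, baseline_m = 1000.0, 5.0
    depth = np.where(disparity > 0, focal_px * baseline_m / disparity, 0.0)
    ```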

  19. 3D Data Mapping and Real-Time Experiment Control and Visualization in Brain Slices.

    PubMed

    Navarro, Marco A; Hibbard, Jaime V K; Miller, Michael E; Nivin, Tyler W; Milescu, Lorin S

    2015-10-20

    Here, we propose two basic concepts that can streamline electrophysiology and imaging experiments in brain slices and enhance data collection and analysis. The first idea is to interface the experiment with a software environment that provides a 3D scene viewer in which the experimental rig, the brain slice, and the recorded data are represented to scale. Within the 3D scene viewer, the user can visualize a live image of the sample and 3D renderings of the recording electrodes with real-time position feedback. Furthermore, the user can control the instruments and visualize their status in real time. The second idea is to integrate multiple types of experimental data into a spatial and temporal map of the brain slice. These data may include low-magnification maps of the entire brain slice, for spatial context, or any other type of high-resolution structural and functional image, together with time-resolved electrical and optical signals. The entire data collection can be visualized within the 3D scene viewer. These concepts can be applied to any other type of experiment in which high-resolution data are recorded within a larger sample at different spatial and temporal coordinates. PMID:26488641

  20. Space Radar Image of Death Valley in 3-D

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This picture is a three-dimensional perspective view of Death Valley, California. This view was constructed by overlaying a SIR-C radar image on a U.S. Geological Survey digital elevation map. The SIR-C image is centered at 36.629 degrees north latitude and 117.069 degrees west longitude. We are looking at Stove Pipe Wells, which is the bright rectangle located in the center of the picture frame. Our vantage point is located atop a large alluvial fan centered at the mouth of Cottonwood Canyon. In the foreground on the left, we can see the sand dunes near Stove Pipe Wells. In the background on the left, the Valley floor gradually falls in elevation toward Badwater, the lowest spot in the United States. In the background on the right we can see Tucki Mountain. This SIR-C/X-SAR supersite is an area of extensive field investigations and has been visited by both Space Radar Lab astronaut crews. Elevations in the Valley range from 70 meters (230 feet) below sea level, the lowest in the United States, to more than 3,300 meters (10,800 feet) above sea level. Scientists are using SIR-C/X-SAR data from Death Valley to help the answer a number of different questions about Earth's geology. One question concerns how alluvial fans are formed and change through time under the influence of climatic changes and earthquakes. Alluvial fans are gravel deposits that wash down from the mountains over time. They are visible in the image as circular, fan-shaped bright areas extending into the darker valley floor from the mountains. Information about the alluvial fans helps scientists study Earth's ancient climate. Scientists know the fans are built up through climatic and tectonic processes and they will use the SIR-C/X-SAR data to understand the nature and rates of weathering processes on the fans, soil formation and the transport of sand and dust by the wind. SIR-C/X-SAR's sensitivity to centimeter-scale (inch-scale) roughness provides detailed maps of surface texture. Such information

  1. 3D real-time measurement system of seam with laser

    NASA Astrophysics Data System (ADS)

    Huang, Min-shuang; Huang, Jun-fen

    2014-02-01

    A 3-D real-time measurement system for the seam outline based on Moiré projection is proposed and designed. The system is composed of a laser diode (LD), grating, CCD, video A/D, FPGA, DSP and an output interface. The principle and hardware makeup of the high-speed, real-time image processing circuit based on a Digital Signal Processor (DSP) and a Field Programmable Gate Array (FPGA) are introduced. The noise generation mechanism under poor welding field conditions is analyzed when Moiré stripes are projected onto a welding workpiece surface. A median filter is adopted to smooth the acquired laser image of the seam, and measurement results for a 3-D outline image of the weld groove are then provided.
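
    The smoothing step can be illustrated with SciPy's median filter as a stand-in for the DSP/FPGA pipeline; the image content and kernel size below are placeholders.

    ```python
    import numpy as np
    from scipy.ndimage import median_filter

    laser_image = np.random.rand(480, 640)            # placeholder for the captured frame
    laser_image[::37, ::23] = 5.0                      # simulate impulsive arc/spatter noise
    smoothed = median_filter(laser_image, size=3)      # 3x3 median suppresses the speckle
    ```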

  2. fVisiOn: glasses-free tabletop 3D display to provide virtual 3D media naturally alongside real media

    NASA Astrophysics Data System (ADS)

    Yoshida, Shunsuke

    2012-06-01

    A novel glasses-free tabletop 3D display, named fVisiOn, floats virtual 3D objects on an empty, flat, tabletop surface and enables multiple viewers to observe raised 3D images from any angle around the full 360°. Our glasses-free 3D image reproduction method employs a combination of an optical device and an array of projectors and produces continuous horizontal parallax in the direction of a circular path located above the table. The optical device shapes a hollow cone and works as an anisotropic diffuser. The circularly arranged projectors cast numerous rays into the optical device. Each ray passes through a corresponding point on a virtual object's surface and is oriented toward a viewing area around the table. At any viewpoint on the ring-shaped viewing area, both eyes collect fractional images from different projectors, and all the viewers around the table can perceive the scene as 3D from their perspectives because the images include binocular disparity. The entire principle is installed beneath the table, so the tabletop area remains clear. No ordinary tabletop activities are disturbed. Many people can naturally share the 3D images displayed together with real objects on the table. In our latest prototype, we employed a handmade optical device and an array of over 100 tiny projectors. This configuration reproduces static and animated 3D scenes for a 130° viewing area and allows 5-cm-tall virtual characters to play soccer and dance on the table.

  3. Web GIS in practice V: 3-D interactive and real-time mapping in Second Life

    PubMed Central

    Boulos, Maged N Kamel; Burden, David

    2007-01-01

    This paper describes technologies from Daden Limited for geographically mapping and accessing live news stories/feeds, as well as other real-time, real-world data feeds (e.g., Google Earth KML feeds and GeoRSS feeds) in the 3-D virtual world of Second Life, by plotting and updating the corresponding Earth location points on a globe or some other suitable form (in-world), and further linking those points to relevant information and resources. This approach enables users to visualise, interact with, and even walk or fly through, the plotted data in 3-D. Users can also do the reverse: put pins on a map in the virtual world, and then view the data points on the Web in Google Maps or Google Earth. The technologies presented thus serve as a bridge between mirror worlds like Google Earth and virtual worlds like Second Life. We explore the geo-data display potential of virtual worlds and their likely convergence with mirror worlds in the context of the future 3-D Internet or Metaverse, and reflect on the potential of such technologies and their future possibilities, e.g. their use to develop emergency/public health virtual situation rooms to effectively manage emergencies and disasters in real time. The paper also covers some of the issues associated with these technologies, namely user interface accessibility and individual privacy. PMID:18042275

  5. Mobile viewer system for virtual 3D space using infrared LED point markers and camera

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Taneji, Shoto

    2006-09-01

    The authors have developed a 3D workspace system using collaborative imaging devices. A stereoscopic display enables this system to project 3D information. In this paper, we describe the position detecting system for a see-through 3D viewer. A 3D display system is a useful technology for virtual reality, mixed reality and augmented reality. We have researched spatial imaging and interaction systems, and have previously proposed 3D displays that use a slit parallax barrier, a lenticular screen and holographic optical elements (HOEs) for displaying active images [1-4]. The purpose of this paper is to propose an interactive system using these 3D imaging technologies. The observer can view virtual images in the real world while watching the screen of a see-through 3D viewer. The goal of our research is to build a display system in which, when users see the real world through the mobile viewer, the system presents virtual 3D images that float in the air, and the observers can touch and interact with these floating images, for example so that children can model virtual clay. The key technologies of this system are the position recognition system and the spatial imaging display. The 3D images are presented by the improved parallax barrier 3D display. Here the authors discuss the measuring method for the mobile viewer using infrared LED point markers and a camera in the 3D workspace (augmented reality world). The authors present the geometric analysis of the proposed measuring method, which is the simplest method in that it uses a single camera rather than a stereo camera, and the results of our viewer system.
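
    The single-camera measurement discussed above is essentially a pose-from-known-markers problem. The sketch below shows how such a pose could be recovered with OpenCV's solvePnP; the marker layout (assumed coplanar), the pixel detections and the camera intrinsics are all invented for illustration and are not taken from the paper.

```python
import numpy as np
import cv2

# Known 3D positions of the infrared LED markers on the mobile viewer,
# in the viewer's own frame (illustrative values, assumed coplanar).
object_points = np.array([
    [0.0, 0.0, 0.0],
    [0.1, 0.0, 0.0],
    [0.1, 0.1, 0.0],
    [0.0, 0.1, 0.0],
], dtype=np.float64)

# Their detected pixel coordinates in the single camera image
# (in practice these would come from blob detection on the IR image).
image_points = np.array([
    [320.0, 240.0],
    [400.0, 238.0],
    [402.0, 168.0],
    [322.0, 170.0],
], dtype=np.float64)

# Assumed pinhole intrinsics of the workspace camera, no distortion.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
print("rotation (Rodrigues):", rvec.ravel())
print("translation:", tvec.ravel())
```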

  6. Evaluation of Home Delivery of Lectures Utilizing 3D Virtual Space Infrastructure

    ERIC Educational Resources Information Center

    Nishide, Ryo; Shima, Ryoichi; Araie, Hiromu; Ueshima, Shinichi

    2007-01-01

    Evaluation experiments have been essential in exploring home delivery of lectures for which users can experience campus lifestyle and distant learning through 3D virtual space. This paper discusses the necessity of virtual space for distant learners by examining the effects of virtual space. The authors have pursued the possibility of…

  7. 3D-SURFER 2.0: web platform for real-time search and characterization of protein surfaces.

    PubMed

    Xiong, Yi; Esquivel-Rodriguez, Juan; Sael, Lee; Kihara, Daisuke

    2014-01-01

    The increasing number of uncharacterized protein structures necessitates the development of computational approaches for function annotation using the protein tertiary structures. Protein structure database search is the basis of any structure-based functional elucidation of proteins. 3D-SURFER is a web platform for real-time protein surface comparison of a given protein structure against the entire PDB using 3D Zernike descriptors. It can smoothly navigate the protein structure space in real-time from one query structure to another. A major new feature of Release 2.0 is the ability to compare the protein surface of a single chain, a single domain, or a single complex against databases of protein chains, domains, complexes, or a combination of all three in the latest PDB. Additionally, two types of protein structures can now be compared: all-atom-surface and backbone-atom-surface. The server can also accept a batch job for a large number of database searches. Pockets in protein surfaces can be identified by VisGrid and LIGSITEcsc. The server is available at http://kiharalab.org/3d-surfer/. PMID:24573477
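
    The real-time search described above boils down to ranking precomputed descriptor vectors by their distance to the query's descriptor. A minimal sketch with random placeholder vectors is given below; the 121-coefficient descriptor length and the plain Euclidean ranking are assumptions for illustration, not a reproduction of the 3D-SURFER backend.

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder database: one descriptor vector per protein surface
# (121 components is a commonly used length for 3D Zernike descriptors).
database = rng.random((10_000, 121))
ids = [f"chain_{i:05d}" for i in range(database.shape[0])]

query = rng.random(121)

# Euclidean distance between the query descriptor and every entry;
# smaller distance means more similar surface shape.
dists = np.linalg.norm(database - query, axis=1)
top = np.argsort(dists)[:5]
for rank, idx in enumerate(top, start=1):
    print(rank, ids[idx], round(float(dists[idx]), 4))
```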

  9. V-Man Generation for 3-D Real Time Animation. Chapter 5

    NASA Technical Reports Server (NTRS)

    Nebel, Jean-Christophe; Sibiryakov, Alexander; Ju, Xiangyang

    2007-01-01

    The V-Man project has developed an intuitive authoring and intelligent system to create, animate, control and interact in real-time with a new generation of 3D virtual characters: The V-Men. It combines several innovative algorithms coming from Virtual Reality, Physical Simulation, Computer Vision, Robotics and Artificial Intelligence. Given a high-level task like "walk to that spot" or "get that object", a V-Man generates the complete animation required to accomplish the task. V-Men synthesise motion at runtime according to their environment, their task and their physical parameters, drawing upon their unique set of skills manufactured during character creation. The key to the system is the automated creation of realistic V-Men, not requiring the expertise of an animator. It is based on real human data captured by 3D static and dynamic body scanners, which is then processed to generate firstly animatable body meshes, secondly 3D garments and finally skinned body meshes.

  10. Enablement of defense missions with in-space 3D printing

    NASA Astrophysics Data System (ADS)

    Parsons, Michael; McGuire, Thomas; Hirsch, Michael; Leake, Skye; Straub, Jeremy

    2016-05-01

    Outer space has the potential to become the battlefield of the 21st century. If this occurs, the United States will need to invest heavily into research and development regarding space assets, construction approaches, and anti-satellite technologies in order to ensure the requisite level of offensive and deterrent capabilities exist. One challenge that the U.S. faces is the expense of inserting satellites into orbit. With an in-space 3D printer, engineers would not need to incur the design and construction costs for developing a satellite that can survive the launch into orbit. Instead, they could just create the best design for their application and the in-space 3D printer could print and deploy it in orbit. This paper considers the foregoing and other uses for a 3D printer in space that advance national security.

  11. 2D array transducers for real-time 3D ultrasound guidance of interventional devices

    NASA Astrophysics Data System (ADS)

    Light, Edward D.; Smith, Stephen W.

    2009-02-01

    We describe catheter ring arrays for real-time 3D ultrasound guidance of devices such as vascular grafts, heart valves and vena cava filters. We have constructed several prototypes operating at 5 MHz and consisting of 54 elements using the W.L. Gore & Associates, Inc. micro-miniature ribbon cables. We have recently constructed a new transducer using a braided wiring technology from Precision Interconnect. This transducer consists of 54 elements at 4.8 MHz with pitch of 0.20 mm and typical -6 dB bandwidth of 22%. In all cases, the transducer and wiring assembly were integrated with an 11 French catheter of a Cook Medical deployment device for vena cava filters. Preliminary in vivo and in vitro testing is ongoing including simultaneous 3D ultrasound and x-ray fluoroscopy.

  12. Real-time 3D radiation risk assessment supporting simulation of work in nuclear environments.

    PubMed

    Szőke, I; Louka, M N; Bryntesen, T R; Bratteli, J; Edvardsen, S T; Rø Eitrheim, K K; Bodor, K

    2014-06-01

    This paper describes the latest developments at the Institute for Energy Technology (IFE) in Norway, in the field of real-time 3D (three-dimensional) radiation risk assessment for the support of work simulation in nuclear environments. 3D computer simulation can greatly facilitate efficient work planning, briefing, and training of workers. It can also support communication within and between work teams, and with advisors, regulators, the media and public, at all the stages of a nuclear installation's lifecycle. Furthermore, it is also a beneficial tool for reviewing current work practices in order to identify possible gaps in procedures, as well as to support the updating of international recommendations, dissemination of experience, and education of the current and future generation of workers. IFE has been involved in research and development into the application of 3D computer simulation and virtual reality (VR) technology to support work in radiological environments in the nuclear sector since the mid 1990s. During this process, two significant software tools have been developed, the VRdose system and the Halden Planner, and a number of publications have been produced to contribute to improving the safety culture in the nuclear industry. This paper describes the radiation risk assessment techniques applied in earlier versions of the VRdose system and the Halden Planner, for visualising radiation fields and calculating dose, and presents new developments towards implementing a flexible and up-to-date dosimetric package in these 3D software tools, based on new developments in the field of radiation protection. The latest versions of these 3D tools are capable of more accurate risk estimation, permit more flexibility via a range of user choices, and are applicable to a wider range of irradiation situations than their predecessors. PMID:24727389
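
    As a deliberately simplified illustration of the dose bookkeeping such planning tools perform, the sketch below sums inverse-square point-source contributions along a worker path. The source terms, geometry and sampling are invented, and real dosimetric packages of the kind described account for shielding, build-up and source geometry far more carefully.

```python
import numpy as np

# Hypothetical point sources: position (m) and dose rate at 1 m,
# expressed directly in microSv/h for simplicity.
sources = [
    {"pos": np.array([2.0, 0.0, 1.0]), "rate_at_1m": 50.0},
    {"pos": np.array([-1.0, 3.0, 0.5]), "rate_at_1m": 20.0},
]

def dose_rate(point):
    """Inverse-square sum over all sources at a point (no shielding)."""
    total = 0.0
    for s in sources:
        r2 = float(np.sum((point - s["pos"]) ** 2))
        total += s["rate_at_1m"] / max(r2, 0.01)  # clamp to avoid blow-up
    return total

# Accumulate dose along a straight walk-through sampled once per second.
path = np.linspace([0.0, -5.0, 1.0], [0.0, 5.0, 1.0], 60)
accumulated_uSv = sum(dose_rate(p) / 3600.0 for p in path)  # per-hour -> per-second
print(f"estimated accumulated dose: {accumulated_uSv:.3f} microSv")
```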

  13. Demonstration of digital hologram recording and 3D-scenes reconstruction in real-time

    NASA Astrophysics Data System (ADS)

    Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Kulakov, Mikhail N.; Kurbatova, Ekaterina A.; Molodtsov, Dmitriy Y.; Rodin, Vladislav G.

    2016-04-01

    Digital holography is a technique that allows the reconstruction of information about 2D objects and 3D scenes. This is achieved by registering the interference pattern formed by two beams: an object beam and a reference beam. The pattern registered by the digital camera is then processed, which yields the amplitude and phase of the object beam. Reconstruction of the shape of 2D objects and 3D scenes can be performed numerically (using a computer) or optically (using spatial light modulators - SLMs). In this work, a Megaplus II ES11000 camera was used for digital hologram recording. The camera has 4008 × 2672 pixels with sizes of 9 μm × 9 μm. For hologram recording, a 50 mW frequency-doubled Nd:YAG laser with a wavelength of 532 nm was used. A liquid crystal on silicon SLM, HoloEye PLUTO VIS, was used for optical reconstruction of the digital holograms. The SLM has 1920 × 1080 pixels with sizes of 8 μm × 8 μm. For object reconstruction, a 10 mW He-Ne laser with a wavelength of 632.8 nm was used. The setups for digital hologram recording and their optical reconstruction with the SLM were combined as follows. The MegaPlus Central Control Software allows frames registered by the camera to be displayed on the computer monitor with a small delay, and the SLM can act as an additional monitor. As a result, the captured frames can be shown on the SLM display in near real time, so recording and reconstruction of the 3D scenes is obtained in real time. The resolution of the displayed frames was chosen to equal that of the SLM; the number of pixels was therefore limited by the SLM resolution, and the frame rate by that of the camera. This holographic video setup was applied without additional program implementations that would increase time delays between hologram recording and object reconstruction. The setup was demonstrated for the reconstruction of 3D scenes.
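
    Numerical reconstruction of a recorded hologram (the "using a computer" route mentioned above) is commonly done by propagating the hologram field with the angular-spectrum method. The sketch below shows that propagation on a synthetic on-axis hologram; the geometry, the plane-wave reference and the parameter values are assumptions rather than the authors' processing chain.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pixel_pitch, z):
    """Propagate a complex optical field a distance z (all units metres)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_pitch)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Free-space transfer function; evanescent components are cut off.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(1j * 2 * np.pi / wavelength * z * np.sqrt(np.maximum(arg, 0.0)))
    H[arg < 0] = 0.0
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Toy hologram: interference of an on-axis plane reference wave with the
# propagated field of a small rectangular aperture (illustrative numbers).
wavelength, pitch, z = 532e-9, 9e-6, 0.2
n = 512
obj = np.zeros((n, n), complex)
obj[200:220, 300:320] = 1.0
obj_at_sensor = angular_spectrum_propagate(obj, wavelength, pitch, z)
hologram = np.abs(1.0 + obj_at_sensor) ** 2       # intensity on the camera

# Reconstruction: treat the hologram as a field and propagate it back.
recon = angular_spectrum_propagate(hologram.astype(complex), wavelength, pitch, -z)
print("peak of reconstructed amplitude:", float(np.abs(recon).max()))
```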

  14. Development of a Wireless and Near Real-Time 3D Ultrasound Strain Imaging System.

    PubMed

    Chen, Zhaohong; Chen, Yongdong; Huang, Qinghua

    2016-04-01

    Ultrasound elastography is an important medical imaging tool for characterization of lesions. In this paper, we present a wireless and near real-time 3D ultrasound strain imaging system. It uses a 3D translating device to control a commercial linear ultrasound transducer to collect pre-compression and post-compression radio-frequency (RF) echo signal frames. The RF frames are wirelessly transferred to a high-performance server via a local area network (LAN). A dynamic programming strain estimation algorithm is implemented with the compute unified device architecture (CUDA) on the graphic processing unit (GPU) in the server to calculate the strain image after receiving a pre-compression RF frame and a post-compression RF frame at the same position. Each strain image is inserted into a strain volume which can be rendered in near real-time. We take full advantage of the translating device to precisely control the probe movement and compression. The GPU-based parallel computing techniques are designed to reduce the computation time. Phantom and in vivo experimental results demonstrate that our system can generate strain volumes with good quality and display an incrementally reconstructed volume image in near real-time. PMID:26954841
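
    A CPU-only sketch of the core strain-estimation idea: estimate axial displacement between pre- and post-compression RF lines window by window, then take the axial gradient of displacement as strain. The synthetic signal, window sizes and the simple correlation peak search below stand in for the paper's GPU-based dynamic-programming estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic pre-compression RF line and a 1% uniformly strained copy.
n = 4000
pre = rng.standard_normal(n)
strain_true = 0.01
depth = np.arange(n)
post = np.interp(depth * (1.0 - strain_true), depth, pre)

win, step = 200, 100
centres, displacement = [], []
for start in range(0, n - 2 * win, step):
    ref = pre[start:start + win]
    # Search the post-compression signal over a small lag range and keep
    # the lag with the highest correlation (dot product) with the window.
    best_lag, best_score = 0, -np.inf
    for lag in range(0, win // 2):
        seg = post[start + lag:start + lag + win]
        score = float(np.dot(ref, seg))
        if score > best_score:
            best_score, best_lag = score, lag
    centres.append(start + win // 2)
    displacement.append(best_lag)

# Strain is the axial derivative of the displacement estimates.
strain = np.gradient(np.array(displacement, float), np.array(centres, float))
print("mean estimated strain:", float(strain.mean()))
```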

  15. A Real-time, 3D Musculoskeletal Model for Dynamic Simulation of Arm Movements

    PubMed Central

    Chadwick, Edward K.; Blana, Dimitra; van den Bogert, Antonie J.; Kirsch, Robert F.

    2010-01-01

    Neuroprostheses can be used to restore movement of the upper limb in individuals with high-level spinal cord injury. Development and evaluation of command and control schemes for such devices typically requires real-time, “patient-in-the-loop” experimentation. A real-time, three-dimensional, musculoskeletal model of the upper limb has been developed for use in a simulation environment to allow such testing to be carried out non-invasively. The model provides real-time feedback of human arm dynamics that can be displayed to the user in a virtual reality environment. The model has a three degree-of-freedom gleno-humeral joint as well as elbow flexion/extension and pronation/supination, and contains 22 muscles of the shoulder and elbow divided into multiple elements. The model is able to run in real time on modest desktop hardware and demonstrates that a large-scale, 3D model can be made to run in real time. This is a prerequisite for a real-time, whole arm model that will form part of a dynamic arm simulator for use in the development, testing and user training of neural prosthesis systems. PMID:19272926

  16. Coarse integral holography approach for real 3D color video displays.

    PubMed

    Chen, J S; Smithwick, Q Y J; Chu, D P

    2016-03-21

    A colour holographic display is considered the ultimate apparatus to provide the most natural 3D viewing experience. It encodes a 3D scene as holographic patterns that then are used to reproduce the optical wavefront. The main challenge at present is for the existing technologies to cope with the full information bandwidth required for the computation and display of holographic video. We have developed a dynamic coarse integral holography approach using opto-mechanical scanning, coarse integral optics and a low space-bandwidth-product high-bandwidth spatial light modulator to display dynamic holograms with a large space-bandwidth-product at video rates, combined with an efficient rendering algorithm to reduce the information content. This makes it possible to realise a full-parallax, colour holographic video display with a bandwidth of 10 billion pixels per second, and an adequate image size and viewing angle, as well as all relevant 3D cues. Our approach is scalable and the prototype can achieve even better performance with continuing advances in hardware components. PMID:27136858

  18. A scalable beamforming architecture for real-time 3D ultrasonic imaging using nonuniform sampling

    NASA Astrophysics Data System (ADS)

    Dandekar, Omkar; Castro-Pareja, Carlos R.; Shekhar, Raj

    2006-03-01

    Real-time acquisition of 3D volumes is an emerging trend in medical imaging. True real-time 3D ultrasonic imaging is particularly valuable for echocardiography and trauma imaging as well as an intraoperative imaging technique for surgical navigation. Since the frame rate of ultrasonic imaging is fundamentally limited by the speed of sound, many schemes of forming multiple receive beams with a single transmit event have been proposed. With the advent of parallel receive beamforming, several architectures to form multiple (4-8) scan lines at a time have been suggested. Most of these architectures employ uniform sampling and input memory banks to store the samples acquired from all the channels. Some recent developments like crossed electrode array, coded excitation, and synthetic aperture imaging facilitate forming an entire 2D plane with a single transmit event. These techniques are speeding up frame rate to eventually accomplish true real-time 3D ultrasonic imaging. We present an FPGA-based scalable architecture capable of forming a complete scan plane in the time it usually takes to form a single scan line. Our current implementation supports 32 input channels per FPGA and up to 128 dynamically focused beam outputs. The desired focusing delay resolution is achieved using a hybrid scheme, with a combination of nonuniform sampling of the analog channels and linear interpolation for nonsparse delays within a user-specified minimum sampling interval. Overall, our pipelined architecture is capable of processing the input RF data in an online fashion, thereby reducing the input storage requirements and potentially providing better image quality.
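
    Stripped of the FPGA pipelining and nonuniform-sampling details, the underlying operation is delay-and-sum beamforming with fractional focusing delays obtained by linear interpolation. A software sketch of that operation for a single focal point is given below; the aperture, sampling rate and sound speed are illustrative assumptions.

```python
import numpy as np

def delay_and_sum(rf, element_x, focus, c=1540.0, fs=40e6):
    """Beamform one focal point from per-channel RF data.

    rf        : (n_channels, n_samples) received RF samples
    element_x : (n_channels,) lateral element positions in metres
    focus     : (x, z) focal point in metres
    Fractional delays are applied by linear interpolation (np.interp),
    mirroring the interpolation step used for fine delay resolution.
    """
    n_channels, n_samples = rf.shape
    t = np.arange(n_samples) / fs
    fx, fz = focus
    out = 0.0
    for ch in range(n_channels):
        # Two-way time of flight: down to the focus and back to this
        # element (transmit path simplified to the echo depth).
        d_rx = np.hypot(fx - element_x[ch], fz)
        tau = (fz + d_rx) / c
        out += np.interp(tau, t, rf[ch])
    return out / n_channels

# Illustrative 32-channel aperture with random data.
rng = np.random.default_rng(0)
rf = rng.standard_normal((32, 2048))
element_x = (np.arange(32) - 15.5) * 0.3e-3   # 0.3 mm pitch
print(delay_and_sum(rf, element_x, focus=(0.0, 0.02)))
```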

  19. 3D space perception as embodied cognition in the history of art images

    NASA Astrophysics Data System (ADS)

    Tyler, Christopher W.

    2014-02-01

    Embodied cognition is a concept that provides a deeper understanding of the aesthetics of art images. This study considers the role of embodied cognition in the appreciation of 3D pictorial space, 4D action space, its extension through mirror reflection to embodied self-cognition, and its relation to the neuroanatomical organization of the aesthetic response.

  20. Superintegrable potentials on 3D Riemannian and Lorentzian spaces with nonconstant curvature

    SciTech Connect

    Ballesteros, A.; Enciso, A.; Herranz, F. J.; Ragnisco, O.

    2010-02-15

    A quantum sl(2,R) coalgebra (with deformation parameter z) is shown to underlie the construction of a large class of superintegrable potentials on 3D curved spaces that include the nonconstant curvature analogs of the spherical, hyperbolic, and (anti-)de Sitter spaces. The connection and curvature tensors for these 'deformed' spaces are fully studied by working on two different phase spaces. The former directly comes from a 3D symplectic realization of the deformed coalgebra, while the latter is obtained through a map leading to a spherical-type phase space. In this framework, the nondeformed limit z → 0 is identified with the flat contraction leading to the Euclidean and Minkowskian spaces/potentials. The resulting Hamiltonians always admit, at least, three functionally independent constants of motion coming from the coalgebra structure. Furthermore, the intrinsic oscillator and Kepler potentials on such Riemannian and Lorentzian spaces of nonconstant curvature are identified, and several examples of them are explicitly presented.

  1. A real-time misalignment correction algorithm for stereoscopic 3D cameras

    NASA Astrophysics Data System (ADS)

    Pekkucuksen, Ibrahim E.; Batur, Aziz Umit; Zhang, Buyue

    2012-03-01

    Camera calibration is an important problem for stereo 3-D cameras since the misalignment between the two views can lead to vertical disparities that significantly degrade 3-D viewing quality. Offline calibration during manufacturing is not always an option especially for mass produced cameras due to cost. In addition, even if one-time calibration is performed during manufacturing, its accuracy cannot be maintained indefinitely because environmental factors can lead to changes in camera hardware. In this paper, we propose a real-time stereo calibration solution that runs inside a consumer camera and continuously estimates and corrects for the misalignment between the stereo cameras. Our algorithm works by processing images of natural scenes and does not require the use of special calibration charts. The algorithm first estimates the disparity in horizontal and vertical directions between the corresponding blocks from stereo images. Then, this initial estimate is refined with two dimensional search using smaller sub-blocks. The displacement data and block coordinates are fed to a modified affine transformation model and outliers are discarded to keep the modeling error low. Finally, the estimated affine parameters are split by half and misalignment correction is applied to each view accordingly. The proposed algorithm significantly reduces the misalignment between stereo frames and enables a more comfortable 3-D viewing experience.
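
    A stripped-down sketch of the estimation stage described above: given per-block displacement measurements, fit an affine model to the vertical disparity by least squares and split the correction between the two views. The block matching, sub-block refinement and outlier rejection from the paper are omitted, and the measurements below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

# Block centres (x, y) in pixels and the measured vertical displacement of
# each block between the two views (synthetic: a small rotation/shear plus
# an offset, with measurement noise).
xy = rng.uniform([0, 0], [1920, 1080], size=(200, 2))
true_params = np.array([0.002, -0.001, 3.0])      # a, b, c in dy = a*x + b*y + c
dy = xy @ true_params[:2] + true_params[2] + rng.normal(0, 0.3, 200)

# Least-squares fit of the affine vertical-disparity model dy = a*x + b*y + c.
A = np.column_stack([xy, np.ones(len(xy))])
params, *_ = np.linalg.lstsq(A, dy, rcond=None)

# Split the estimated misalignment evenly, so each view is warped by half
# of the correction in opposite directions.
left_correction = -0.5 * params
right_correction = +0.5 * params
print("estimated (a, b, c):", np.round(params, 4))
```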

  2. Real-Time 3D Tracking and Reconstruction on Mobile Phones.

    PubMed

    Prisacariu, Victor Adrian; Kähler, Olaf; Murray, David W; Reid, Ian D

    2015-05-01

    We present a novel framework for jointly tracking a camera in 3D and reconstructing the 3D model of an observed object. Due to the region based approach, our formulation can handle untextured objects, partial occlusions, motion blur, dynamic backgrounds and imperfect lighting. Our formulation also allows for a very efficient implementation which achieves real-time performance on a mobile phone, by running the pose estimation and the shape optimisation in parallel. We use a level set based pose estimation but completely avoid the, typically required, explicit computation of a global distance. This leads to tracking rates of more than 100 Hz on a desktop PC and 30 Hz on a mobile phone. Further, we incorporate additional orientation information from the phone's inertial sensor which helps us resolve the tracking ambiguities inherent to region based formulations. The reconstruction step first probabilistically integrates 2D image statistics from selected keyframes into a 3D volume, and then imposes coherency and compactness using a total variational regularisation term. The global optimum of the overall energy function is found using a continuous max-flow algorithm and we show that, similar to tracking, the integration of per voxel posteriors instead of likelihoods improves the precision and accuracy of the reconstruction.

  3. View generation for 3D-TV using image reconstruction from irregularly spaced samples

    NASA Astrophysics Data System (ADS)

    Vázquez, Carlos

    2007-02-01

    Three-dimensional television (3D-TV) will become the next big step in the development of advanced TV systems. One of the major challenges for the deployment of 3D-TV systems is the diversity of display technologies and the high cost of capturing multi-view content. Depth image-based rendering (DIBR) has been identified as a key technology for the generation of new views for stereoscopic and multi-view displays from a small number of views captured and transmitted. We propose a disparity compensation method for DIBR that does not require spatial interpolation of the disparity map. We use a forward-mapping disparity compensation with real precision. The proposed method deals with the irregularly sampled image resulting from this disparity compensation process by applying a re-sampling algorithm based on a bi-cubic spline function space that produces smooth images. The fact that no approximation is made on the position of the samples implies that geometrical distortions in the final images due to approximations in sample positions are minimized. We also paid attention to the occlusion problem. Our algorithm detects the occluded regions in the newly generated images and uses simple depth-aware inpainting techniques to fill the gaps created by newly exposed areas. We tested the proposed method in the context of generation of views needed for viewing on SynthaGram TM auto-stereoscopic displays. We used as input either a 2D image plus a depth map or a stereoscopic pair with the associated disparity map. Our results show that this technique provides high quality images to be viewed on different display technologies such as stereoscopic viewing with shutter glasses (two views) and lenticular auto-stereoscopic displays (nine views).
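
    The forward-mapping step can be sketched in a few lines: each source pixel is pushed to its disparity-shifted position in the new view and unfilled positions are flagged as holes for inpainting. The real-precision scattered-sample re-sampling with bi-cubic splines described in the paper is replaced here by simple rounding, purely for illustration.

```python
import numpy as np

def forward_warp(image, disparity, scale=1.0):
    """Forward-map a view horizontally by a per-pixel disparity map.

    image     : (H, W) or (H, W, 3) source view
    disparity : (H, W) horizontal disparity in pixels
    scale     : fraction of the disparity to apply (virtual baseline)
    Returns the warped image and a boolean hole mask of unfilled pixels.
    """
    h, w = disparity.shape
    out = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    xs = np.arange(w)
    for y in range(h):
        # Target column for every source pixel on this row (rounded here;
        # the paper keeps real-valued positions and re-samples instead).
        tx = np.rint(xs + scale * disparity[y]).astype(int)
        valid = (tx >= 0) & (tx < w)
        out[y, tx[valid]] = image[y, valid]
        filled[y, tx[valid]] = True
    return out, ~filled

# Tiny example: a gradient image shifted by a constant disparity of 4 px.
img = np.tile(np.linspace(0, 1, 64), (32, 1))
disp = np.full((32, 64), 4.0)
warped, holes = forward_warp(img, disp)
print("hole pixels to inpaint:", int(holes.sum()))
```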

  4. Real-Time Interactive Facilities Associated With A 3-D Medical Workstation

    NASA Astrophysics Data System (ADS)

    Goldwasser, S. M.; Reynolds, R. A.; Talton, D.; Walsh, E.

    1986-06-01

    Biomedical workstations of the future will incorporate three-dimensional interactive capabilities which provide real-time response to most common operator requests. Such systems will find application in many areas of medicine including clinical diagnosis, surgical and radiation therapy planning, biomedical research based on functional imaging, and medical education. This paper considers the requirements of these future systems in terms of image quality, performance, and the interactive environment, and examines the relationship of workstation capabilities to specific medical applications. We describe a prototype physician's workstation that we have designed and built to meet many of these requirements (using conventional graphics technology in conjunction with a custom real-time 3-D processor), and give an account of the remaining issues and challenges that future designers of such systems will have to address.

  5. Delft3D-FLOW on PRACE infrastructures for real life hydrodynamic applications.

    NASA Astrophysics Data System (ADS)

    Donners, John; Genseberger, Menno; Jagers, Bert; de Goede, Erik; Mourits, Adri

    2013-04-01

    PRACE, the Partnership for Advanced Computing in Europe, offers access to the largest high-performance computing systems in Europe. PRACE invites and helps industry to increase their innovative potential through the use of the PRACE infrastructure. This poster describes different efforts to assist Deltares with porting the open-source simulation software Delft3D-FLOW to PRACE infrastructures. Analysis of the performance on these infrastructures has been done for real-life flow applications. Delft3D-FLOW is a 2D and 3D shallow water solver which calculates non-steady flow and transport phenomena resulting from tidal and meteorological forcing on a curvilinear, boundary fitted grid in Cartesian or spherical coordinates. It also includes a module which computes sediment transport (both suspended and bed total load) and morphological changes for an arbitrary number of cohesive and non-cohesive fractions. As Delft3D-FLOW has been developed over several decades, with a variety of functionality and over 350k lines of source code, porting to PRACE infrastructures needs some effort. At the moment Delft3D-FLOW uses MPI with domain decomposition in one direction as its parallelisation approach. Because it is hard to identify scaling issues if one immediately starts with a complex case with many features enabled, different cases with increasing complexity have been used to investigate scaling of this parallelisation approach on several PRACE platforms. As a base reference case we started with a schematic high-resolution 2D hydrodynamic model of the river Waal that turned out to be surprisingly well-suited to the highly-parallel PRACE machines. Although Delft3D-FLOW employs a sophisticated build system, several modifications were required to port it to most PRACE systems due to the use of specific, highly-tuned compilers and MPI-libraries. After this we moved to a 3D hydrodynamic model of Rotterdam harbour that includes sections of the rivers Rhine and Meuse and a part of the North

  6. Soil water content variability in the 3D 'support-spacing-extent' space of scale metrics

    NASA Astrophysics Data System (ADS)

    Pachepsky, Yakov; Martinez, Gonzalo; Vereecken, Harry

    2014-05-01

    Knowledge of soil water content variability provides important insight into soil functioning and is essential in many applications. This variability is known to be scale-dependent, and divergent statements about how the magnitude of variability changes with scale can be found in the literature. We undertook a systematic review to see how the definition of scale can affect conclusions about the scale-dependence of soil water content variability. Support, spacing, and extent are three metrics used to characterize scale in hydrology. Available data sets describe changes in soil moisture variability with changes in one or more of these scale metrics. We found six types of experiments involving a change of scale. With data obtained without a change in extent, the scale change in some cases consisted of the simultaneous change of support and spacing. This was done with remote sensing data, and a power-law decrease in variance with increasing support was found. Datasets collected with different supports or sample volumes for the same extent and spacing showed a decrease of variance as the sample size increased. A variance increase was common when the scale change consisted of a change in spacing without a change in support or extent. An increase in variance with the extent of the study area was demonstrated with data describing the evolution of variability with increasing size of the area under investigation (extent), without modification of support. The variance generally increased with the extent when the spacing was changed so that the change in variability over areas of different sizes was studied with the same number of samples of equal support. Finally, there are remote sensing datasets that document a decrease in variability with a change in extent for a given support without modification of spacing. Overall, published information on the effect of scale on soil water content variability in the 3D space of scale metrics did not contain controversies in qualitative terms

  7. Real time 3D visualization of ultrasonic data using a standard PC.

    PubMed

    Nikolov, Svetoslav Ivanov; Pablo Gómez Gonzaléz, Juan; Arendt Jensen, Jørgen

    2003-08-01

    This paper describes a flexible, software-based scan converter capable of rendering 3D volumetric data in real time on a standard PC. The display system is used in the remotely accessible and software-configurable multichannel ultrasound sampling system (RASMUS system) developed at the Center for Fast Ultrasound Imaging. The display system is split into two modules: data transfer and display. These two modules are independent and communicate using shared memory and a predefined set of functions. It is, thus, possible to use the display program with a different data-transfer module which is tailored to another source of data (scanner, database, etc.). The data-transfer module of the RASMUS system is based on a digital signal processor from Analog Devices--ADSP 21060. The beamformer is connected to a PC via the link channels of the ADSP. A direct memory access channel transfers the data from the ADSP to a memory buffer. The display module, which is based on OpenGL, uses this memory buffer as a texture map that is passed to the graphics board. The scan conversion, image interpolation, and logarithmic compression are performed by the graphics board, thus reducing the load on the main processor to a minimum. The scan conversion is done by mapping the ultrasonic data to polygons. The format of the image is determined only by the coordinates of the polygons allowing for any kind of geometry to be displayed on the screen. Data from color flow mapping is added by alpha-blending. The 3D data are displayed either as cross-sectional planes, or as a fully rendered 3D volume displayed as a pyramid. All sides of the pyramid can be changed to reveal B-mode or C-mode scans, and the pyramid can be rotated in all directions in real time.
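
    The scan conversion handed to the graphics board above is, geometrically, a mapping from beam/sample coordinates to screen pixels. The same mapping can be sketched on the CPU as below for a sector geometry; the sector angle, interpolation order and image size are assumptions, not the RASMUS settings.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def scan_convert(env, depth_m, sector_deg, out_shape=(512, 512)):
    """Convert (beam, sample) envelope data to a Cartesian sector image."""
    n_beams, n_samples = env.shape
    h, w = out_shape
    # Cartesian grid in metres, transducer at the top centre of the image.
    x = np.linspace(-depth_m, depth_m, w)
    z = np.linspace(0.0, depth_m, h)
    X, Z = np.meshgrid(x, z)
    r = np.hypot(X, Z)
    theta = np.degrees(np.arctan2(X, Z))
    # Map (r, theta) back to fractional (beam, sample) indices.
    beam_idx = (theta + sector_deg / 2) / sector_deg * (n_beams - 1)
    samp_idx = r / depth_m * (n_samples - 1)
    img = map_coordinates(env, [beam_idx, samp_idx], order=1, cval=0.0)
    img[(np.abs(theta) > sector_deg / 2) | (r > depth_m)] = 0.0
    return img

env = np.abs(np.random.default_rng(0).standard_normal((128, 1024)))
print(scan_convert(env, depth_m=0.12, sector_deg=75.0).shape)
```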

  8. 2.5D real waveform and real noise simulation of receiver functions in 3D models

    NASA Astrophysics Data System (ADS)

    Schiffer, Christian; Jacobsen, Bo; Balling, Niels

    2014-05-01

    There are several reasons why a real-data receiver function differs from the theoretical receiver function in a 1D model representing the stratification under the seismometer. Main reasons are ambient noise, spectral deficiencies in the impinging P-waveform, and wavefield propagation in laterally varying velocity variations. We present a rapid "2.5D" modelling approach which takes these aspects into account, so that a given 3D velocity model of the crust and uppermost mantle can be tested more realistically against observed recordings from seismometer arrays. Each recorded event at each seismometer is simulated individually through the following steps: A 2D section is extracted from the 3D model along the direction towards the hypocentre. A properly slanted plane or curved impulsive wavefront is propagated through this 2D section, resulting in noise free and spectrally complete synthetic seismometer data. The real vertical component signal is taken as a proxy of the real impingent wavefield, so by convolution and subsequent addition of real ambient noise recorded just before the P-arrival we get synthetic vertical and horizontal component data which very closely match the spectral signal content and signal to noise ratio of this specific recording. When these realistic synthetic data undergo exactly the same receiver function estimation and subsequent graphical display we get a much more realistic image to compare to the real-data receiver functions. We applied this approach to the Central Fjord area in East Greenland (Schiffer et al., 2013), where a 3D velocity model of crust and uppermost mantle was adjusted to receiver functions from 2 years of seismometer recordings and wide angle crustal profiles (Schlindwein and Jokat, 1999; Voss and Jokat, 2007). Computationally this substitutes tens or hundreds of heavy 3D computations with hundreds or thousands of single-core 2D computations which parallelize very efficiently on common multicore systems. In perspective

  9. Powering an in-space 3D printer using solar light energy

    NASA Astrophysics Data System (ADS)

    Leake, Skye; McGuire, Thomas; Parsons, Michael; Hirsch, Michael P.; Straub, Jeremy

    2016-05-01

    This paper describes how a solar power source can enable in-space 3D printing without requiring conversion to electric power and back. A design for an in-space 3D printer is presented, with a particular focus on the power generation system. Then, key benefits are presented and evaluated. Specifically, the approach facilitates the design of a spacecraft that can be built, launched, and operated at very low cost levels. The proposed approach also facilitates easy configuration of the amount of energy that is supplied. Finally, it facilitates easier disposal by removing the heavy metals and radioactive materials required for a nuclear-power solution.

  10. Miniature stereoscopic video system provides real-time 3D registration and image fusion for minimally invasive surgery

    NASA Astrophysics Data System (ADS)

    Yaron, Avi; Bar-Zohar, Meir; Horesh, Nadav

    2007-02-01

    Sophisticated surgeries require the integration of several medical imaging modalities, like MRI and CT, which are three-dimensional. Many efforts are invested in providing the surgeon with this information in an intuitive & easy to use manner. A notable development, made by Visionsense, enables the surgeon to visualize the scene in 3D using a miniature stereoscopic camera. It also provides real-time 3D measurements that allow registration of navigation systems as well as 3D imaging modalities, overlaying these images on the stereoscopic video image in real-time. The real-time MIS 'see through tissue' fusion solutions enable the development of new MIS procedures in various surgical segments, such as spine, abdomen, cardio-thoracic and brain. This paper describes 3D surface reconstruction and registration methods using Visionsense camera, as a step toward fully automated multi-modality 3D registration.

  11. Development of CT and 3D-CT Using Flat Panel Detector Based Real-Time Digital Radiography System

    SciTech Connect

    Ravindran, V. R.; Sreelakshmi, C.; Vibin

    2008-09-26

    The application of Digital Radiography in the Nondestructive Evaluation (NDE) of space vehicle components is a recent development in India. A Real-time DR system based on an amorphous silicon Flat Panel Detector was developed for the NDE of solid rocket motors at the Rocket Propellant Plant of VSSC a few years back. The technique has been successfully established for the nondestructive evaluation of solid rocket motors. The DR images recorded for a few solid rocket specimens are presented in the paper. The Real-time DR system is capable of generating sufficient digital X-ray image data with object rotation for CT image reconstruction. In this paper the indigenous development of CT imaging based on the Real-time DR system for solid rocket motors is presented. Studies are also carried out to generate 3D-CT images from a set of adjacent CT images of the rocket motor. The capability of revealing the spatial location and characterisation of defects is demonstrated by the CT and 3D-CT images generated.
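
    CT generation from a rotating-object DR sequence amounts to reconstructing each slice from its set of angular projections. A minimal parallel-beam illustration with filtered back projection is sketched below; the use of scikit-image and the phantom slice are assumptions about tooling and data, not the software described in the paper.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

# Stand-in slice through an object, e.g. one horizontal cut of a motor.
slice_true = resize(shepp_logan_phantom(), (256, 256))

# One projection per rotation step of the object in front of the detector.
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(slice_true, theta=angles)

# Filtered back projection reconstructs the slice; stacking many such
# slices (one per detector row) yields the 3D-CT volume.
slice_fbp = iradon(sinogram, theta=angles, filter_name="ramp")
print("reconstruction error (RMS):",
      float(np.sqrt(np.mean((slice_fbp - slice_true) ** 2))))
```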

  12. Usage of Underground Space for 3D Cadastre Purposes and Related Problems in Turkey

    PubMed Central

    Aydin, Cevdet C.

    2008-01-01

    Modern cities have been trying to meet their needs for space by using not only surface structures but also by considering subsurface space use. It is also anticipated that without planning of underground spaces for supporting surface city life in the years and generations to come, there will be serious and unavoidable problems with growing populations. The current Turkish cadastral system, including land right registrations, has been trying to meet users' needs in all aspects since 1924. Today Turkey's national cadastre services are carried out by the General Directorate of Land Titles and Cadastre (TKGM). The Cadastre Law, Number 3402, was approved in 1985 to eliminate problems by gathering all existing cadastral regulations under one law and also to produce 3D cadastral bases to include underground spaces and determine their legal status in Turkey. Although the mandate for 3D cadastre works is described and explained by the laws, until now the bases have been created in 2D and the reality is that legal gaps and deficiencies presently exist in them. In this study, the usage of underground spaces for the current cadastral system in Turkey was briefly evaluated, the concept of 3D cadastral data is examined and the need for using subsurface and 3D cadastre in addition to the traditional 2D register system, related problems and registration are mentioned with specific examples, but without focusing on a specific model.

  13. Fusion of laser and image sensory data for 3-D modeling of the free navigation space

    NASA Technical Reports Server (NTRS)

    Mass, M.; Moghaddamzadeh, A.; Bourbakis, N.

    1994-01-01

    A fusion technique which combines two different types of sensory data for 3-D modeling of a navigation space is presented. The sensory data is generated by a vision camera and a laser scanner. The problem of different resolutions for these sensory data was solved by reduced image resolution, fusion of different data, and use of a fuzzy image segmentation technique.

  14. An analytic solution of initial boundary value problem for 3D quasicrystals in half space

    NASA Astrophysics Data System (ADS)

    Akmaz, Hakan K.; Korkmaz, Alper

    2012-10-01

    In this article, the initial boundary value problem for 3D quasicrystals in half space is considered. An analytic method is proposed for a special form of the initial conditions and of the nonhomogeneous term. It is explained that a weak solution of the problem can be constructed, in a form similar to that of the data, by using symbolic calculations.

  15. A Secret 3D Model Sharing Scheme with Reversible Data Hiding Based on Space Subdivision

    NASA Astrophysics Data System (ADS)

    Tsai, Yuan-Yu

    2016-03-01

    Secret sharing is a highly relevant research field, and its application to 2D images has been thoroughly studied. However, secret sharing schemes have not kept pace with the advances of 3D models. With the rapid development of 3D multimedia techniques, extending the application of secret sharing schemes to 3D models has become necessary. In this study, an innovative secret 3D model sharing scheme for point geometries based on space subdivision is proposed. Each point in the secret point geometry is first encoded into a series of integer values that fall within [0, p - 1], where p is a predefined prime number. The share values are derived by substituting the specified integer values for all coefficients of the sharing polynomial. The surface reconstruction and the sampling concepts are then integrated to derive a cover model with sufficient model complexity for each participant. Finally, each participant has a separate 3D stego model with embedded share values. Experimental results show that the proposed technique supports reversible data hiding and the share values have higher levels of privacy and improved robustness. This technique is simple and has proven to be a feasible secret 3D model sharing scheme.
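
    The sharing step described above is, at its core, Shamir's polynomial secret sharing applied to each encoded coefficient. The sketch below shares and recovers a single encoded coordinate value; the prime, threshold and share count are small placeholders rather than the parameters used in the paper.

```python
import random

P = 257          # illustrative prime; coordinates are encoded into [0, P-1]
K = 3            # threshold: any K shares reconstruct the secret
N = 5            # number of participants / stego cover models

def make_shares(secret, k=K, n=N, p=P):
    """Shamir sharing: the secret is the constant term of a random degree k-1 polynomial."""
    coeffs = [secret] + [random.randrange(p) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p
        shares.append((x, y))
    return shares

def recover(shares, p=P):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * (-xj)) % p
                den = (den * (xi - xj)) % p
        secret = (secret + yi * num * pow(den, -1, p)) % p
    return secret

coord = 123                            # one encoded point-coordinate value
shares = make_shares(coord)
print(recover(shares[:K]) == coord)    # any 3 of the 5 shares suffice
```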

  16. Real-time 3D vectorcardiography: an application for didactic use

    NASA Astrophysics Data System (ADS)

    Daniel, G.; Lissa, G.; Medina Redondo, D.; Vásquez, L.; Zapata, D.

    2007-11-01

    The traditional approach to teach the physiological basis of electrocardiography, based only on textbooks, turns out to be insufficient or confusing for students of biomedical sciences. The addition of laboratory practice to the curriculum enables students to approach theoretical aspects from a hands-on experience, resulting in a more efficient and deeper knowledge of the phenomena of interest. Here, we present the development of a PC-based application meant to facilitate the understanding of cardiac bioelectrical phenomena by visualizing in real time the instantaneous 3D cardiac vector. The system uses 8 standard leads from a 12-channel electrocardiograph. The application interface has pedagogic objectives, and facilitates the observation of cardiac depolarization and repolarization and its temporal relationship with the ECG, making it simpler to interpret.

  17. Real-time 3D medical structure segmentation using fast evolving active contours

    NASA Astrophysics Data System (ADS)

    Wang, Xiaotao; Wang, Qiang; Hao, Zhihui; Xu, Kuanhong; Guo, Ping; Ren, Haibing; Jang, Wooyoung; Kim, Jung-bae

    2014-03-01

    Segmentation of 3D medical structures in real time is an important yet intractable problem for clinical applications due to the high computation and memory cost. In this paper we propose a novel fast-evolving active contour model to reduce the computation and memory requirements. The basic idea is to evolve the compactly represented dynamic contour interface as far as possible per iteration. Our method encodes the zero level set via a single unordered list and evolves the list recursively by adding activated adjacent neighbors to its end, so that active parts of the zero level set move far enough per iteration as the list is scanned. To guarantee the robustness of this process, a new approximation of curvature for the integer-valued level set is proposed as the internal force to penalize list smoothness and restrain continual list growth. In addition, the number of list scans is used as a hard upper constraint to control list growth. Together with the internal force, efficient regional and constrained external forces, whose computations are only performed along the unordered list, are also provided to attract the list toward object boundaries. In particular, our model calculates the regional force only in a narrow band outside the zero level set and can efficiently segment multiple regions simultaneously as well as handle backgrounds with multiple components. Compared with state-of-the-art algorithms, our algorithm is one order of magnitude faster with similar segmentation accuracy and can achieve real-time performance for the segmentation of 3D medical structures on a standard PC.

  18. Etiology of phantom limb syndrome: Insights from a 3D default space consciousness model.

    PubMed

    Jerath, Ravinder; Crawford, Molly W; Jensen, Mike

    2015-08-01

    In this article, we examine phantom limb syndrome to gain insights into how the brain functions as the mind and how consciousness arises. We further explore our previously proposed consciousness model in which consciousness and body schema arise when information from throughout the body is processed by corticothalamic feedback loops and integrated by the thalamus. The parietal lobe spatially maps visual and non-visual information and the thalamus integrates and recreates this processed sensory information within a three-dimensional space termed the "3D default space." We propose that phantom limb syndrome and phantom limb pain arise when the afferent signaling from the amputated limb is lost but the neural circuits remain intact. In addition, integration of conflicting sensory information within the default 3D space and the loss of inhibitory afferent feedback to efferent motor activity from the amputated limb may underlie phantom limb pain.

  19. Enablement of scientific remote sensing missions with in-space 3D printing

    NASA Astrophysics Data System (ADS)

    Hirsch, Michael; McGuire, Thomas; Parsons, Michael; Leake, Skye; Straub, Jeremy

    2016-05-01

    This paper provides an overview of the capability of a 3D printer to successfully operate in space to create structures and equipment useful in the field of scientific remote sensing. Applications of this printer involve oceanography, weather tracking, as well as space exploration sensing. The design for the 3D printer includes a parabolic array to collect and focus thermal energy. This thermal energy can then be used to heat the extrusion head, allowing for the successful extrusion of the print material. Print material can range from plastics to metals, with the hope of being able to extrude aluminum for its low-mass structural integrity and its conductive properties. The printer will be able to print structures as well as electrical components. The current process of creating and launching a remote sensor into space is constrained by many factors such as gravity on Earth, the forces of launch, the size of the launch vehicle, and the number of available launches. The design intent of the in-space 3D printer is to ease or eliminate these constraints, making space-based scientific remote sensors a more readily available resource.

  20. International Space Station (ISS) 3D Printer Performance and Material Characterization Methodology

    NASA Technical Reports Server (NTRS)

    Bean, Q. A.; Cooper, K. G.; Edmunson, J. E.; Johnston, M. M.; Werkheiser, M. J.

    2015-01-01

    In order for human exploration of the Solar System to be sustainable, manufacturing of necessary items on-demand in space or on planetary surfaces will be a requirement. As a first step towards this goal, the 3D Printing In Zero-G (3D Print) technology demonstration made the first items fabricated in space on the International Space Station. From those items, and comparable prints made on the ground, information about the microgravity effects on the printing process can be determined. Lessons learned from this technology demonstration will be applicable to other in-space manufacturing technologies, and may affect the terrestrial manufacturing industry as well. The flight samples were received at the George C. Marshall Space Flight Center on 6 April 2015. These samples will undergo a series of tests designed to not only thoroughly characterize the samples, but to identify microgravity effects manifested during printing by comparing their results to those of samples printed on the ground. Samples will be visually inspected, photographed, scanned with structured light, and analyzed with scanning electron microscopy. Selected samples will be analyzed with computed tomography; some will be assessed using ASTM standard tests. These tests will provide the information required to determine the effects of microgravity on 3D printing in microgravity.

  1. CAD Tools for Creating Space-filling 3D Escher Tiles

    SciTech Connect

    Howison, Mark; Sequin, Carlo H.

    2009-04-10

    We discuss the design and implementation of CAD tools for creating decorative solids that tile 3-space in a regular, isohedral manner. Starting with the simplest case of extruded 2D tilings, we describe geometric algorithms used for maintaining boundary representations of 3D tiles, including a Java implementation of an interactive constrained Delaunay triangulation library and a mesh-cutting algorithm used in layering extruded tiles to create more intricate designs. Finally, we demonstrate a CAD tool for creating 3D tilings that are derived from cubic lattices. The design process for these 3D tiles is more constrained, and hence more difficult, than in the 2D case, and it raises additional user interface issues.
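
    The simplest case mentioned above, extruding a 2D tile into a 3D solid, can be illustrated by lifting a polygon outline into prism vertices and faces. The toy version below uses a convex fan for the caps instead of the constrained Delaunay triangulation used by the actual tool, so it is only a sketch of the idea.

```python
import numpy as np

def extrude_polygon(outline, height):
    """Extrude a simple (convex, CCW) 2D polygon into a closed prism mesh.

    Returns (vertices, faces) where faces are triangles indexing vertices.
    """
    outline = np.asarray(outline, dtype=float)
    n = len(outline)
    bottom = np.column_stack([outline, np.zeros(n)])
    top = np.column_stack([outline, np.full(n, height)])
    vertices = np.vstack([bottom, top])

    faces = []
    # Side walls: one quad (two triangles) per outline edge.
    for i in range(n):
        j = (i + 1) % n
        faces += [(i, j, n + j), (i, n + j, n + i)]
    # Caps: fan triangulation, valid for convex outlines only.
    for i in range(1, n - 1):
        faces.append((0, i + 1, i))              # bottom cap (faces down)
        faces.append((n, n + i, n + i + 1))      # top cap (faces up)
    return vertices, np.array(faces)

# A square tile extruded to height 1.
verts, tris = extrude_polygon([(0, 0), (1, 0), (1, 1), (0, 1)], height=1.0)
print(verts.shape, tris.shape)   # (8, 3) (12, 3)
```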

  2. Automatic alignment of standard views in 3D echocardiograms using real-time tracking

    NASA Astrophysics Data System (ADS)

    Orderud, Fredrik; Torp, Hans; Rabben, Stein Inge

    2009-02-01

    In this paper, we present an automatic approach for alignment of standard apical and short-axis slices, and correcting them for out-of-plane motion in 3D echocardiography. This is enabled by using real-time Kalman tracking to perform automatic left ventricle segmentation using a coupled deformable model, consisting of a left ventricle model, as well as structures for the right ventricle and left ventricle outflow tract. Landmark points from the segmented model are then used to generate standard apical and short-axis slices. The slices are automatically updated after tracking in each frame to correct for out-of-plane motion caused by longitudinal shortening of the left ventricle. Results from a dataset of 35 recordings demonstrate the potential for automating apical slice initialization and dynamic short-axis slices. Apical 4-chamber, 2-chamber and long-axis slices are generated based on an assumption of fixed angle between the slices, and short-axis slices are generated so that they follow the same myocardial tissue over the entire cardiac cycle. The error compared to manual annotation was 8.4 +/- 3.5 mm for apex, 3.6 +/- 1.8 mm for mitral valve and 8.4 +/- 7.4 for apical 4-chamber view. The high computational efficiency and automatic behavior of the method enables it to operate in real-time, potentially during image acquisition.

  3. Numerical simulations of Rock Avalanches with DAN-3D: from real case to analogue models

    NASA Astrophysics Data System (ADS)

    Longchamp, Céline; Penna, Ivanna; Sauthier, Claire; Jaboyedoff, Michel

    2013-04-01

    Rock avalanches are rapid events with the capacity to develop long and unexpected runouts, which can evolve into catastrophic events that are difficult to predict. In order to better understand unusual travel distances, analogue and numerical modeling are often used. The comparison between real cases, analogue models and dynamic models is key to constraining and understanding the parameters governing rock avalanche runouts. In the Pampeanas range (Argentina), the Potrero de Leyes rock avalanche involved 0.23 km3 of highly fractured metamorphic rocks that spread in the piedmont area without any topographical constraint, resulting in a runout of 4.8 km. In this study we first attempt to apply analogue models to replicate the rock avalanche deposit. The analogue modeling consists of releasing a granular material (calibrated, angular carborundum sand) along a slope, recreating landscape conditions similar to the real case. The material is not constrained laterally and spreads freely on a flat deposition surface. For a volume of 50 cm3, the runout is 50 cm and the deposit has a length of 10 cm and a width of 19 cm. For a volume of 100 cm3, the runout is 65 cm and the deposit has a length of 25 cm and a width of 30 cm. In a further step we model both the real case and the results of the analogue models. Dynamic models are carried out with DAN-3D, a dynamic model for the prediction of the runout of rapid landslides (O. Hungr, 1995; O. Hungr & S.G. Evans, 1996). The simulations for both volumes tested with the analogue model give satisfactory results. In fact, for the volume of 50 cm3, the deposit has a length of 10 cm and a width of 20 cm, and for the volume of 100 cm3, the deposit has a length of 25 cm and a width of 50 cm. The shape and thickness of the deposit obtained with DAN-3D are also similar to those obtained with the analogue models.

  4. Real-time computer-generated integral imaging and 3D image calibration for augmented reality surgical navigation.

    PubMed

    Wang, Junchen; Suenaga, Hideyuki; Liao, Hongen; Hoshi, Kazuto; Yang, Liangjing; Kobayashi, Etsuko; Sakuma, Ichiro

    2015-03-01

    Autostereoscopic 3D image overlay for augmented reality (AR) based surgical navigation has been studied and reported many times. For the purpose of surgical overlay, the 3D image is expected to have the same geometric shape as the original organ, and can be transformed to a specified location for image overlay. However, how to generate a 3D image with high geometric fidelity and quantitative evaluation of 3D image's geometric accuracy have not been addressed. This paper proposes a graphics processing unit (GPU) based computer-generated integral imaging pipeline for real-time autostereoscopic 3D display, and an automatic closed-loop 3D image calibration paradigm for displaying undistorted 3D images. Based on the proposed methods, a novel AR device for 3D image surgical overlay is presented, which mainly consists of a 3D display, an AR window, a stereo camera for 3D measurement, and a workstation for information processing. The evaluation on the 3D image rendering performance with 2560×1600 elemental image resolution shows the rendering speeds of 50-60 frames per second (fps) for surface models, and 5-8 fps for large medical volumes. The evaluation of the undistorted 3D image after the calibration yields sub-millimeter geometric accuracy. A phantom experiment simulating oral and maxillofacial surgery was also performed to evaluate the proposed AR overlay device in terms of the image registration accuracy, 3D image overlay accuracy, and the visual effects of the overlay. The experimental results show satisfactory image registration and image overlay accuracy, and confirm the system usability.

  5. 3D imaging of translucent media with a plenoptic sensor based on phase space optics

    NASA Astrophysics Data System (ADS)

    Zhang, Xuanzhe; Shu, Bohong; Du, Shaojun

    2015-05-01

    Traditional stereo imaging technology does not work for dynamic translucent media, because such media show no obvious characteristic patterns and the use of multiple cameras is not allowed in most cases. Phase space optics can solve this problem by extracting depth information directly from the "space-spatial frequency" distribution of the target, obtained with a plenoptic sensor using a single lens. This paper discusses how depth information is represented in phase space data, and the corresponding calculation algorithms for different degrees of transparency. A 3D imaging example of a waterfall is given at the end.

  6. FMRI Reveals a Dissociation between Grasping and Perceiving the Size of Real 3D Objects

    PubMed Central

    Cavina-Pratesi, Cristiana; Goodale, Melvyn A.; Culham, Jody C.

    2007-01-01

    Background Almost 15 years after its formulation, evidence for the neuro-functional dissociation between a dorsal action stream and a ventral perception stream in the human cerebral cortex is still based largely on neuropsychological case studies. To date, there is no unequivocal evidence for separate visual computations of object features for performance of goal-directed actions versus perceptual tasks in the neurologically intact human brain. We used functional magnetic resonance imaging to test explicitly whether or not brain areas mediating size computation for grasping are distinct from those mediating size computation for perception. Methodology/Principal Findings Subjects were presented with the same real graspable 3D objects and were required to perform a number of different tasks: grasping, reaching, size discrimination, pattern discrimination or passive viewing. As in prior studies, the anterior intraparietal area (AIP) in the dorsal stream was more active during grasping, when object size was relevant for planning the grasp, than during reaching, when object properties were irrelevant for movement planning (grasping>reaching). Activity in AIP showed no modulation, however, when size was computed in the context of a purely perceptual task (size = pattern discrimination). Conversely, the lateral occipital (LO) cortex in the ventral stream was modulated when size was computed for perception (size>pattern discrimination) but not for action (grasping = reaching). Conclusions/Significance While areas in both the dorsal and ventral streams responded to the simple presentation of 3D objects (passive viewing), these areas were differentially activated depending on whether the task was grasping or perceptual discrimination, respectively. The demonstration of dual coding of an object for the purposes of action on the one hand and perception on the other in the same healthy brains offers a substantial contribution to the current debate about the nature of

  7. Real-time 3D Fourier-domain optical coherence tomography guided microvascular anastomosis

    NASA Astrophysics Data System (ADS)

    Huang, Yong; Ibrahim, Zuhaib; Lee, W. P. A.; Brandacher, Gerald; Kang, Jin U.

    2013-03-01

    Vascular and microvascular anastomosis is considered to be the foundation of plastic and reconstructive surgery, hand surgery, transplant surgery, vascular surgery and cardiac surgery. In the last two decades, innovative techniques such as vascular coupling devices, thermo-reversible poloxamers and suture-less cuffs have been introduced. Intra-operative surgical guidance using a surgical imaging modality that provides an in-depth view and 3D imaging can improve outcomes following both conventional and innovative anastomosis techniques. Optical coherence tomography (OCT) is a noninvasive, high-resolution (micron level), high-speed, 3D imaging modality that has been adopted widely in biomedical and clinical applications. In this work we performed a proof-of-concept evaluation study of OCT as an assistive intraoperative and post-operative imaging modality for microvascular anastomosis of rodent femoral vessels. The OCT imaging modality provided a lateral resolution of 12 μm, an axial resolution of 3.0 μm in air, and an imaging speed of 0.27 volumes/s, which could provide the surgeon with a clearly visualized vessel lumen wall and the suture needle position relative to the vessel during intraoperative imaging. Graphics processing unit (GPU) accelerated phase-resolved Doppler OCT (PRDOCT) imaging of the surgical site was performed as a post-operative evaluation of the anastomosed vessels and to visualize blood flow and thrombus formation. This information could help surgeons improve surgical precision in this highly challenging anastomosis of rodent vessels with diameters less than 0.5 mm. Our imaging modality could not only detect accidental suture through the back wall of the lumen but also promptly diagnose and predict thrombosis immediately after reperfusion. Hence, real-time OCT can assist in the decision-making process intra-operatively and help avoid post-operative complications.

  8. SU-E-J-237: Real-Time 3D Anatomy Estimation From Undersampled MR Acquisitions

    SciTech Connect

    Glitzner, M; Lagendijk, J; Raaymakers, B; Crijns, S; Senneville, B Denis de

    2015-06-15

    Recent developments made MRI guided radiotherapy feasible. Performing simultaneous imaging during fractions can provide information about changing anatomy by means of deformable image registration for either immediate plan adaptations or accurate dose accumulation on the changing anatomy. In 3D MRI, however, acquisition time is considerable and scales with resolution. Furthermore, intra-scan motion degrades image quality. In this work, we investigate the sensitivity of registration quality to image resolution: potentially, by employing spatial undersampling, the acquisition time of MR images for the purpose of deformable image registration can be reduced significantly. On a volunteer, 3D-MR imaging data was sampled in a navigator-gated manner, acquiring one axial volume (360×260×100 mm³) per 3 s during the exhale phase. A T1-weighted FFE sequence was used with an acquired voxel size of 2.5 mm³ for a duration of 17 min. Deformation vector fields were evaluated for 100 imaging cycles with respect to the initial anatomy using deformable image registration based on optical flow. Subsequently, the imaging data was downsampled by a factor of 2, simulating a fourfold acquisition speed. Displacements of the downsampled volumes were then calculated by the same process. In kidney-liver boundaries and the region around the stomach/duodenum, prominent organ drifts could be observed in both the original and the downsampled imaging data. An increasing displacement of approximately 2 mm was observed for the kidney, while an area around the stomach showed sudden displacements of 4 mm. Comparison of the motile points over time showed high reproducibility between the displacements of high-resolution and downsampled volumes: over a 17 min acquisition, the componentwise RMS error was not more than 0.38 mm. Based on the synthetic experiments, 3D nonrigid image registration shows little sensitivity to image resolution and the displacement information is preserved even when halving the
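
    A minimal Python sketch of the resolution experiment described above is given below. It uses a rigid phase-correlation shift (skimage's phase_cross_correlation) as a simple stand-in for the study's deformable optical-flow registration, and synthetic volumes in place of the navigator-gated MR data; only the full-resolution versus 2x-downsampled comparison and the component-wise RMS difference follow the abstract.

```python
# Estimate a displacement at full and at 2x-downsampled resolution and compare
# the component-wise RMS difference. A rigid phase-correlation shift stands in
# for the deformable optical-flow registration used in the study.
import numpy as np
from scipy.ndimage import zoom, shift as nd_shift
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(0)
reference = rng.random((64, 64, 32))            # "initial anatomy" (synthetic)
true_shift = np.array([1.6, -2.3, 0.8])         # voxels
moving = nd_shift(reference, true_shift, order=1)

def estimate_shift(ref, mov):
    shift_est, _, _ = phase_cross_correlation(ref, mov, upsample_factor=10)
    return np.asarray(shift_est)

# Full-resolution estimate (in voxels of the original grid)
d_full = estimate_shift(reference, moving)

# Downsample by 2 in each axis, register, and rescale back to original voxels
ref_lo = zoom(reference, 0.5, order=1)
mov_lo = zoom(moving, 0.5, order=1)
d_low = estimate_shift(ref_lo, mov_lo) * 2.0

rms = np.sqrt(np.mean((d_full - d_low) ** 2))
print("full-res displacement:", d_full)
print("half-res displacement:", d_low)
print(f"component-wise RMS difference: {rms:.3f} voxels")
```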

  9. 3D histogram visualization in different color spaces with application in color clustering classification

    NASA Astrophysics Data System (ADS)

    Marcu, Gabriel G.; Abe, Satoshi

    1995-04-01

    The paper presents a dynamic visualization procedure for the 3D histogram of color images. The procedure runs for the RGB, YMC, HSV and HSL device-dependent color spaces and for the Lab and Luv device-independent color spaces, and it is easily extendable to other color spaces if the analytical form of the color transformations is available. Each histogram value is represented in the color space as a colored ball, at a position corresponding to the place of the color in the color space. The paper presents procedures for nonlinear ball normalization, drawing order, space edge drawing, and translation, scaling and rotation of the histogram. The 3D histogram visualization procedure can be used in different applications described in the second part of the paper. It makes it possible to obtain a clear representation of the range of colors in an image, to derive and compare the efficiency of different clustering procedures for color classification, to display comparatively the gamuts of different color devices, to select the color space for an optimal mapping procedure of out-of-gamut colors that minimizes the hue error, and to detect misalignment of the RGB planes in a sequential process.
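
    The core idea, a 3D histogram whose non-empty bins are drawn as colored balls at their positions in the color space, can be sketched in a few lines of Python; the image, bin count and ball-size normalization below are illustrative choices, and the paper's drawing-order and edge-drawing procedures are not reproduced.

```python
# Bin an RGB image into a coarse 3D histogram and draw each non-empty bin as a
# ball at its position in RGB space, colored by the bin's color and sized by
# its count.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(120, 160, 3), dtype=np.uint8)  # placeholder image

bins = 8                                   # 8x8x8 histogram
idx = (image.reshape(-1, 3) // (256 // bins)).astype(int)
hist = np.zeros((bins, bins, bins), dtype=int)
np.add.at(hist, (idx[:, 0], idx[:, 1], idx[:, 2]), 1)

r, g, b = np.nonzero(hist)
counts = hist[r, g, b]
centers = (np.stack([r, g, b], axis=1) + 0.5) / bins   # bin centers in [0, 1]

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
# Nonlinear size normalization (cube root) so large bins do not hide small ones
ax.scatter(centers[:, 0], centers[:, 1], centers[:, 2],
           s=20 * np.cbrt(counts), c=centers, depthshade=True)
ax.set_xlabel("R"); ax.set_ylabel("G"); ax.set_zlabel("B")
plt.show()
```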

  10. Second order superintegrable systems in conformally flat spaces. IV. The classical 3D Staeckel transform and 3D classification theory

    SciTech Connect

    Kalnins, E.G.; Kress, J.M.; Miller, W. Jr.

    2006-04-15

    This article is one of a series that lays the groundwork for a structure and classification theory of second order superintegrable systems, both classical and quantum, in conformally flat spaces. In the first part of the article we study the Staeckel transform (or coupling constant metamorphosis) as an invertible mapping between classical superintegrable systems on different three-dimensional spaces. We show first that all superintegrable systems with nondegenerate potentials are multiseparable and then that each such system on any conformally flat space is Staeckel equivalent to a system on a constant curvature space. In the second part of the article we classify all the superintegrable systems that admit separation in generic coordinates. We find that there are eight families of these systems.

  11. Benchmarking of 3D space charge codes using direct phase space measurements from photoemission high voltage dc gun

    NASA Astrophysics Data System (ADS)

    Bazarov, Ivan V.; Dunham, Bruce M.; Gulliford, Colwyn; Li, Yulin; Liu, Xianghong; Sinclair, Charles K.; Soong, Ken; Hannon, Fay

    2008-10-01

    We present a comparison between space charge calculations and direct measurements of the transverse phase space of space charge dominated electron bunches from a high voltage dc photoemission gun followed by an emittance compensation solenoid magnet. The measurements were performed using a double-slit emittance measurement system over a range of bunch charge and solenoid current values. The data are compared with detailed simulations using the 3D space charge codes GPT and Parmela3D. The initial particle distributions were generated from measured transverse and temporal laser beam profiles at the photocathode. The beam brightness as a function of beam fraction is calculated for the measured phase space maps and found to approach within a factor of 2 the theoretical maximum set by the thermal energy and the accelerating field at the photocathode.

  12. 3D Space Radiation Transport in a Shielded ICRU Tissue Sphere

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.

    2014-01-01

    A computationally efficient 3DHZETRN code capable of simulating High Charge (Z) and Energy (HZE) and light ions (including neutrons) under space-like boundary conditions with enhanced neutron and light ion propagation was recently developed for a simple homogeneous shield object. Monte Carlo benchmarks were used to verify the methodology in slab and spherical geometry, and the 3D corrections were shown to provide significant improvement over the straight-ahead approximation in some cases. In the present report, the new algorithms with well-defined convergence criteria are extended to inhomogeneous media within a shielded tissue slab and a shielded tissue sphere and tested against Monte Carlo simulation to verify the solution methods. The 3D corrections are again found to more accurately describe the neutron and light ion fluence spectra as compared to the straight-ahead approximation. These computationally efficient methods provide a basis for software capable of space shield analysis and optimization.

  13. The Space B^{-1}_{∞,∞}, Volumetric Sparseness, and 3D NSE

    NASA Astrophysics Data System (ADS)

    Farhat, Aseel; Grujić, Zoran; Leitmeyer, Keith

    2016-09-01

    In the context of the L^∞-theory of the 3D NSE, it is shown that smallness of a solution in the Besov space B^{-1}_{∞,∞} suffices to prevent a possible blow-up. In particular, it is revealed that this condition implies a particular local spatial structure of the regions of high velocity magnitude, namely, local volumetric sparseness on a scale comparable to the radius of spatial analyticity measured in L^∞.

  14. Application Of Metric Space Technique (mst) In 2-d And 3-d To Sdss Dr5

    NASA Astrophysics Data System (ADS)

    Wu, Yongfeng; Batuski, D. J.; Khalil, A.

    2009-01-01

    The Metric Space Technique (MST) is a 2-D analysis method that uses multiple measures for quantitative analysis of any type of structure in an 'image'. All potential values of the measures for such distributions are treated as coordinates in a multi-parameter space, and the analysis is based on considering a sample's measures (called 'output functions') and their distance, in this multi-parameter space, from the origin, which corresponds to the measures of the observed SDSS sample. Applications of this method to thin (approximately 2-D) slices of SDSS DR5 have yielded a detailed comparison of numerical models (Berlind et al. 2006, Croton et al. 2005) against the structure of the SDSS 2-D galaxy distribution in multi-parameter space. We present those results, including a discussion of the effects of transforming from physical space to redshift space on the statistics at different scales. We have also extended this 2-D method into 3-D, and we present comparisons of the 3-D SDSS galaxy distribution against the same numerical simulations.

  15. Modeling of 3d Space-time Surface of Potential Fields and Hydrogeologic Modeling of Nuclear Waste Disposal Sites

    NASA Astrophysics Data System (ADS)

    Shestopalov, V.; Bondarenko, Y.; Zayonts, I.; Rudenko, Y.

    extracted from the total vertical and horizontal gradients, respectively, both shaded from 5° northeast to 355° northwest. The dip of multi-layer surfaces indicates the down-"gradient" direction in the fields. The methodology of 3D STSI is based on the analysis of vertical and horizontal anisotropy of gravity and magnetic fields, as well as of a multi-layer 3D space-time surface model (3D STSM) of the stress fields. The 3D STSM is a multi-layer topology structure of lineaments or gradients (edges) and surfaces calculated from uniform matrices of the geophysical fields. One of the information components of the stress field characteristics is the aspect and slope of compressive and tensile stresses. Overlaying the 3D STSI and lineaments with maps of multi-layer gradients enables the creation of a highly reliable 3D Space-Time Kinematic Model (3D STKM). The analysis of the 3D STKM included: the space-time reconstruction of force directions and the strain distribution scheme during the formation of geological structures and structural paragenesis (lineaments) of potential fields; and prediction of the real locations of expected tectonic dislocations, zones of rock fracturing and disintegration, and mass-stable blocks. Based on these data, the 3D STSM are drawn which reflect the geodynamics of territory development on the basis of paleotectonic reconstruction of the successive activity stages that formed the present-day lithosphere. Thus the three-dimensional STSM allows geodynamic processes to be unmixed in any interval of fixed space-time in coordinates x, y, t(z). The integration of the 3D STSM with 3D seismic models also enables the creation of structural-kinematic and geodynamic maps of the Earth's crust at different depths. As a result, the CNPP areas are classified into zones of compressive and tensile stresses characterized by enhanced rock permeability, and zones of consolidation with minimal rock permeability. In addition, the vertically alternating zones of

  16. Characterization of 3D joint space morphology using an electrostatic model (with application to osteoarthritis)

    NASA Astrophysics Data System (ADS)

    Cao, Qian; Thawait, Gaurav; Gang, Grace J.; Zbijewski, Wojciech; Reigel, Thomas; Brown, Tyler; Corner, Brian; Demehri, Shadpour; Siewerdsen, Jeffrey H.

    2015-02-01

    Joint space morphology can be indicative of the risk, presence, progression, and/or treatment response of disease or trauma. We describe a novel methodology of characterizing joint space morphology in high-resolution 3D images (e.g. cone-beam CT (CBCT)) using a model based on elementary electrostatics that overcomes a variety of basic limitations of existing 2D and 3D methods. The method models each surface of a joint as a conductor at fixed electrostatic potential and characterizes the intra-articular space in terms of the electric field lines resulting from the solution of Gauss’ Law and the Laplace equation. As a test case, the method was applied to discrimination of healthy and osteoarthritic subjects (N = 39) in 3D images of the knee acquired on an extremity CBCT system. The method demonstrated improved diagnostic performance (area under the receiver operating characteristic curve, AUC > 0.98) compared to simpler methods of quantitative measurement and qualitative image-based assessment by three expert musculoskeletal radiologists (AUC = 0.87, p-value = 0.007). The method is applicable to simple (e.g. the knee or elbow) or multi-axial joints (e.g. the wrist or ankle) and may provide a useful means of quantitatively assessing a variety of joint pathologies.
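
    A minimal 2D toy version of the electrostatic model described above can be written in Python: the two articular surfaces are held at potentials 0 and 1, the Laplace equation is relaxed in the space between them, and the joint-space width is read off as the length of traced field lines. The geometry, grid size and iteration count are invented for illustration.

```python
# Toy 2D sketch of the electrostatic joint-space model: conductors at 0 and 1,
# Jacobi relaxation of the Laplace equation, field-line lengths as local width.
import numpy as np

nx, ny = 120, 80
phi = np.zeros((ny, nx))
surface_lo = 20 + (5 * np.sin(np.linspace(0, np.pi, nx))).astype(int)  # "bone 1"
surface_hi = 55 + (3 * np.cos(np.linspace(0, np.pi, nx))).astype(int)  # "bone 2"

mask = np.zeros_like(phi, dtype=bool)        # True where potential is fixed
for j in range(nx):
    mask[:surface_lo[j] + 1, j] = True       # conductor at phi = 0
    mask[surface_hi[j]:, j] = True           # conductor at phi = 1
    phi[surface_hi[j]:, j] = 1.0

# Jacobi relaxation of the Laplace equation in the joint space
for _ in range(3000):
    new = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                  np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
    phi = np.where(mask, phi, new)

gy, gx = np.gradient(phi)                    # field direction ~ grad(phi)

def field_line_length(x0, y0, step=0.25):
    """Trace a field line from the phi=0 surface until it reaches phi~1."""
    x, y, length = float(x0), float(y0), 0.0
    for _ in range(10000):
        i = min(max(int(round(y)), 0), ny - 1)
        j = min(max(int(round(x)), 0), nx - 1)
        if phi[i, j] >= 0.99:
            break
        ex, ey = gx[i, j], gy[i, j]
        norm = np.hypot(ex, ey) + 1e-12
        x += step * ex / norm
        y += step * ey / norm
        length += step
    return length

widths = [field_line_length(j, surface_lo[j] + 1) for j in range(10, nx - 10, 10)]
print("joint-space widths along the surface (grid units):", np.round(widths, 1))
```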

  17. Characterization of 3D Joint Space Morphology Using an Electrostatic Model (with Application to Osteoarthritis)

    PubMed Central

    Cao, Qian; Thawait, Gaurav; Gang, Grace J.; Zbijewski, Wojciech; Reigel, Thomas; Brown, Tyler; Corner, Brian; Demehri, Shadpour; Siewerdsen, Jeffrey H.

    2015-01-01

    Joint space morphology can be indicative of the risk, presence, progression, and/or treatment response of disease or trauma. We describe a novel methodology of characterizing joint space morphology in high-resolution 3D images [e.g., cone-beam CT (CBCT)] using a model based on elementary electrostatics that overcomes a variety of basic limitations of existing 2D and 3D methods. The method models each surface of a joint as a conductor at fixed electrostatic potential and characterizes the intra-articular space in terms of the electric field lines resulting from the solution of Gauss’ Law and the Laplace equation. As a test case, the method was applied to discrimination of healthy and osteoarthritic subjects (N = 39) in 3D images of the knee acquired on an extremity CBCT system. The method demonstrated improved diagnostic performance (area under the receiver operating characteristic curve, AUC > 0.98) compared to simpler methods of quantitative measurement and qualitative image-based assessment by three expert musculoskeletal radiologists (AUC = 0.87, p-value = 0.007). The method is applicable to simple (e.g., the knee or elbow) or multi-axial joints (e.g., the wrist or ankle) and may provide a useful means of quantitatively assessing a variety of joint pathologies. PMID:25575100

  18. Real-Time Climate Simulations in the Interactive 3D Game Universe Sandbox ²

    NASA Astrophysics Data System (ADS)

    Goldenson, N. L.

    2014-12-01

    Exploration in an open-ended computer game is an engaging way to explore climate and climate change. Everyone can explore physical models with real-time visualization in the educational simulator Universe Sandbox ² (universesandbox.com/2), which includes basic climate simulations on planets. I have implemented a time-dependent, one-dimensional meridional heat transport energy balance model to run and be adjustable in real time in the midst of a larger simulated system. Universe Sandbox ² is based on the original game - at its core a gravity simulator - with new physically based content for stellar evolution and for handling collisions between bodies. Existing users are mostly science enthusiasts in informal settings. We believe that this is the first climate simulation to be implemented in a professionally developed computer game with modern 3D graphical output in real time. The type of simple climate model we have adopted helps us depict the seasonal cycle and the more drastic changes that come from changing the orbit or other external forcings. Users can alter the climate as the simulation is running by altering the star(s) in the simulation, dragging to change orbits and obliquity, adjusting the climate simulation parameters directly, or changing other properties like CO2 concentration that affect the model parameters in representative ways. Ongoing visuals of the expansion and contraction of sea ice and snow cover respond to the temperature calculations, making it easy to explore a variety of scenarios and intuitive to understand the output. Variables like temperature can also be graphed in real time. We balance computational constraints with the ability to capture the physical phenomena we wish to visualize, giving everyone access to a simple open-ended meridional energy balance climate simulation to explore and experiment with. The software lends itself to labs at a variety of levels about climate concepts including seasons, the Greenhouse effect
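
    For readers unfamiliar with this class of model, a minimal time-dependent, one-dimensional meridional energy-balance model of the kind described above can be written in a few dozen lines of Python; the parameter values below are standard textbook-style choices, not the values used in Universe Sandbox ².

```python
# 1D meridional energy-balance model: diffusive heat transport on a sin(latitude)
# grid, linearized outgoing longwave radiation, crude temperature-dependent albedo.
import numpy as np

nlat = 90
x = np.linspace(-0.99, 0.99, nlat)       # x = sin(latitude), uniform grid
dx = x[1] - x[0]

S0 = 1361.0                              # solar constant [W m^-2]
S = (S0 / 4.0) * (1.0 - 0.482 * 0.5 * (3 * x**2 - 1))  # annual-mean insolation
A, B = 203.3, 2.09                       # OLR = A + B*T [W m^-2], T in deg C
D = 0.55                                 # meridional diffusion coefficient
C = 4.0e8                                # column heat capacity [J m^-2 K^-1]

def albedo(T):
    return np.where(T < -10.0, 0.62, 0.30)   # crude ice-albedo switch

T = np.full(nlat, 10.0)                  # initial temperature [deg C]
dt = 86400.0                             # 1-day explicit time step (stable here)
for _ in range(10000):                   # ~27 model years
    # Diffusive heat transport: D * d/dx[(1 - x^2) dT/dx]
    flux = (1.0 - (0.5 * (x[1:] + x[:-1]))**2) * np.diff(T) / dx
    transport = np.zeros(nlat)
    transport[1:-1] = D * np.diff(flux) / dx
    dTdt = (S * (1.0 - albedo(T)) - (A + B * T) + transport) / C
    T = T + dt * dTdt

print(f"global mean {np.mean(T):5.1f} C, equator {T[nlat // 2]:5.1f} C, "
      f"poles {T[0]:5.1f} / {T[-1]:5.1f} C")
```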

  19. Efficient near-real-time monitoring of 3D surface displacements in complex landslide scenarios

    NASA Astrophysics Data System (ADS)

    Allasia, Paolo; Manconi, Andrea; Giordan, Daniele; Baldo, Marco; Lollino, Giorgio

    2013-04-01

    Ground deformation measurements play a key role in landslide monitoring activities. A wide spectrum of instruments and methods is nowadays available, ranging from in-situ to remote sensing approaches. In emergency scenarios, monitoring is often based on automated instruments capable of achieving accurate measurements, possibly with very high temporal resolution, in order to obtain the best information about the evolution of the landslide in near-real-time for early warning purposes. However, the availability of tools for rapid and efficient exploitation, understanding and interpretation of the retrieved measurements is still a challenge. This issue is particularly relevant in contexts where monitoring is fundamental to support early warning systems aimed at ensuring the safety of people and/or infrastructure. Furthermore, in many cases the results obtained may be difficult to read and disseminate, especially when people of different backgrounds are involved (e.g. scientists, authorities, civil protection operators, decision makers, etc.). In this work, we extend the concept of automatic and near-real-time operation from the acquisition of measurements to data processing and dissemination, in order to achieve efficient monitoring of surface displacements in landslide scenarios. We developed an algorithm that goes automatically and in near-real-time from the acquisition of 3D displacements in a landslide area to the efficient dissemination of the monitoring results via the web. This set of straightforward procedures is called ADVICE (ADVanced dIsplaCement monitoring system for Early warning), and has already been successfully applied in several emergency scenarios. The algorithm includes: (i) data acquisition and transfer protocols; (ii) data collection, filtering, and validation; (iii) data analysis and restitution through a set of dedicated software, such as ©3DA [1]; (iv) recognition of displacement/velocity thresholds and early warning; (v) short term
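
    Step (iv) of the chain above can be illustrated with a short Python sketch: velocities are derived from a 3D displacement time series and an alert is raised when a displacement or velocity threshold is exceeded. The thresholds, window length and synthetic data are invented for illustration and do not reproduce ADVICE's actual filtering and validation chain.

```python
# Threshold-based early-warning check on a synthetic 3D displacement series.
import numpy as np

# Hourly 3D positions of one monitored point [m] (synthetic accelerating creep)
t_hours = np.arange(0, 72)
drift = 0.002 * t_hours + 0.00005 * t_hours**2
positions = np.column_stack([drift, 0.3 * drift, 0.1 * drift])
positions += np.random.default_rng(2).normal(0, 0.0005, positions.shape)  # noise

DISP_THRESHOLD_M = 0.05          # total 3D displacement threshold (assumed)
VEL_THRESHOLD_M_PER_DAY = 0.04   # velocity threshold (assumed)

displacement = np.linalg.norm(positions - positions[0], axis=1)
# Velocity from a 12-hour moving window, converted to m/day
window = 12
velocity = np.full_like(displacement, np.nan)
velocity[window:] = (displacement[window:] - displacement[:-window]) / (window / 24.0)

for h, (d, v) in enumerate(zip(displacement, velocity)):
    if d > DISP_THRESHOLD_M or (not np.isnan(v) and v > VEL_THRESHOLD_M_PER_DAY):
        print(f"ALERT at t = {h} h: displacement {d*1000:.1f} mm, "
              f"velocity {v*1000:.1f} mm/day")
        break
```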

  20. A real-time emergency response workstation using a 3-D numerical model initialized with sodar

    SciTech Connect

    Lawver, B.S.; Sullivan, T.J.; Baskett, R.L.

    1993-01-28

    Many emergency response dispersion modeling systems provide simple Gaussian models driven by single meteorological tower inputs to estimate the downwind consequences from accidental spills or stack releases. Complex meteorological or terrain settings demand more sophisticated resolution of the three-dimensional structure of the atmosphere to reliably calculate plume dispersion. Mountain valleys and sea breeze flows are two common examples of such settings. To address these complexities, the authors have implemented the three-dimensional diagnostic MATHEW mass-adjusted wind field and ADPIC particle-in-cell dispersion models on a workstation for use in real-time emergency response modeling. MATHEW/ADPIC have shown their utility in a variety of complex settings over the last 15 years within the Department of Energy's Atmospheric Release Advisory Capability (ARAC) project. The models are initialized using an array of surface wind measurements from meteorological towers coupled with vertical profiles from an acoustic sounder (sodar). The workstation automatically acquires the meteorological data every 15 minutes. A source term is generated using either defaults or a real-time stack monitor. Model outputs include contoured isopleths displayed on site geography or plume densities shown over 3-D color shaded terrain. The models are automatically updated every 15 minutes to provide the emergency response manager with a continuous display of potentially hazardous ground-level conditions if an actual release were to occur. Model run time is typically less than 2 minutes on 6-megaflop (approximately 30 MIPS) workstations. Data acquisition, limited by dial-up modem communications, requires 3 to 5 minutes.
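
    For contrast with the 3-D MATHEW/ADPIC approach, the "simple Gaussian model driven by a single meteorological tower" baseline mentioned above can be sketched in a few lines of Python using the standard ground-reflected Gaussian plume formula; the dispersion-coefficient curves and source parameters below are illustrative assumptions.

```python
# Steady-state Gaussian plume concentration for a continuous point release.
import numpy as np

def gaussian_plume(x, y, z, Q=1.0, u=3.0, H=30.0):
    """Ground-level-reflected Gaussian plume.
    x: downwind, y: crosswind, z: height [m]; Q release rate [g/s];
    u wind speed [m/s]; H effective release height [m]. Returns [g/m^3]."""
    # Rough neutral-stability sigma curves (power-law placeholders)
    sigma_y = 0.08 * x / np.sqrt(1.0 + 0.0001 * x)
    sigma_z = 0.06 * x / np.sqrt(1.0 + 0.0015 * x)
    cross = np.exp(-y**2 / (2 * sigma_y**2))
    vert = (np.exp(-(z - H)**2 / (2 * sigma_z**2)) +
            np.exp(-(z + H)**2 / (2 * sigma_z**2)))       # ground reflection
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * cross * vert

# Ground-level centerline concentration at a few downwind distances
for x in (100.0, 500.0, 1000.0, 5000.0):
    print(f"x = {x:6.0f} m  ->  C = {gaussian_plume(x, 0.0, 0.0):.2e} g/m^3")
```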

  1. Registration of Real-Time 3-D Ultrasound to Tomographic Images of the Abdominal Aorta.

    PubMed

    Brekken, Reidar; Iversen, Daniel Høyer; Tangen, Geir Arne; Dahl, Torbjørn

    2016-08-01

    The purpose of this study was to develop an image-based method for registration of real-time 3-D ultrasound to computed tomography (CT) of the abdominal aorta, targeting future use in ultrasound-guided endovascular intervention. We proposed a method in which a surface model of the aortic wall was segmented from CT, and the approximate initial location of this model relative to the ultrasound volume was manually indicated. The model was iteratively transformed to automatically optimize correspondence to the ultrasound data. Feasibility was studied using data from a silicon phantom and in vivo data from a volunteer with previously acquired CT. Through visual evaluation, the ultrasound and CT data were seen to correspond well after registration. Both aortic lumen and branching arteries were well aligned. The processing was done offline, and the registration took approximately 0.2 s per ultrasound volume. The results encourage further patient studies to investigate accuracy, robustness and clinical value of the approach. PMID:27156015

  2. Real-time 3D visualization of cellular rearrangements during cardiac valve formation.

    PubMed

    Pestel, Jenny; Ramadass, Radhan; Gauvrit, Sebastien; Helker, Christian; Herzog, Wiebke; Stainier, Didier Y R

    2016-06-15

    During cardiac valve development, the single-layered endocardial sheet at the atrioventricular canal (AVC) is remodeled into multilayered immature valve leaflets. Most of our knowledge about this process comes from examining fixed samples that do not allow a real-time appreciation of the intricacies of valve formation. Here, we exploit non-invasive in vivo imaging techniques to identify the dynamic cell behaviors that lead to the formation of the immature valve leaflets. We find that in zebrafish, the valve leaflets consist of two sets of endocardial cells at the luminal and abluminal side, which we refer to as luminal cells (LCs) and abluminal cells (ALCs), respectively. By analyzing cellular rearrangements during valve formation, we observed that the LCs and ALCs originate from the atrium and ventricle, respectively. Furthermore, we utilized Wnt/β-catenin and Notch signaling reporter lines to distinguish between the LCs and ALCs, and also found that cardiac contractility and/or blood flow is necessary for the endocardial expression of these signaling reporters. Thus, our 3D analyses of cardiac valve formation in zebrafish provide fundamental insights into the cellular rearrangements underlying this process.

  3. GPU based, real-time tracking of perturbed, 3D plasma equilibria

    NASA Astrophysics Data System (ADS)

    Rath, N.; Bialek, J.; Byrne, P. J.; Debono, B.; Levesque, J. P.; Li, B.; Mauel, M. E.; Maurer, D. A.; Navratil, G. A.; Shiraki, D.

    2011-10-01

    The new high-resolution magnetic diagnostics and actuators of the HBT-EP tokamak are used to evaluate a novel approach to long-wavelength MHD mode control: instead of controlling the amplitude of specific preselected perturbations from axisymmetry, the control system will attempt to control the 3D shape of the plasma. This approach frees the experimenter from having to know the approximate shape of the expected instabilities ahead of time, and lifts the restriction of the control reference having to be the perfectly axisymmetric state. Instead, the plasma can be maintained in an arbitrary perturbed equilibrium, which may be selected for beneficial plasma properties. The increased computational demands on the control system are handled by a graphics processing unit (GPU) with 448 computing cores that interfaces directly to digitizers and analog output boards. The control system is designed to handle 96 inputs and 64 outputs with cycle times below 5 microseconds and I/O latencies below 10 microseconds. We report on the technical and theoretical design of the control system and give experimental results from testing the system's observer module, which tracks the perturbed plasma equilibrium in real-time. This work was supported by US-DOE grant DE-FG02-86ER53222.

  4. The Quantitative Measurement Of Temperature Distribution In 3-D Thermal Field With High-Speed Real-Time Holographic Interferometry

    NASA Astrophysics Data System (ADS)

    Ji-zong, Wu; Wei-qiao, Fu; Qin, Wu

    1989-06-01

    The theory of using high-speed real-time holographic interferometry to quantitatively measure a 3-D thermal field is discussed in this paper. An experimental arrangement is described, and the holographic interference fringes of the thermal field formed by electric heating coil wires, recorded with the high-speed camera, are presented. With the CONCEPT 32/2725 computer system and corresponding programs, the distribution of the 3-D thermal field is calculated and plotted. Finally, the problems that remain to be improved and solved for this method of quantitative 3-D thermal field measurement are discussed.

  5. TIPS Placement in Swine, Guided by Electromagnetic Real-Time Needle Tip Localization Displayed on Previously Acquired 3-D CT

    SciTech Connect

    Solomon, Stephen B.; Magee, Carolyn; Acker, David E.; Venbrux, Anthony C.

    1999-09-15

    Purpose: To determine the feasibility of guiding a transjugular intrahepatic portosystemic shunt (TIPS) procedure with an electromagnetic real-time needle tip position sensor coupled to previously acquired 3-dimensional (3-D) computed tomography (CT) images. Methods: An electromagnetic position sensor was placed at the tip of a Colapinto needle. The real-time position and orientation of the needle tip were then displayed on previously acquired 3-D CT images, which were registered to the five swine. Portal vein puncture was then attempted in all animals. Results: The computer-calculated accuracy of the position sensor was on average 3 mm. Four of five portal vein punctures were successful. In the successful cases, only one or two attempts were necessary and success was achieved within minutes. Conclusion: A real-time position sensor attached to the tip of a Colapinto needle and coupled to previously acquired 3-D CT images may potentially aid in entering the portal vein during the TIPS procedure.

  6. High-speed real-time 3-D coordinates measurement based on fringe projection profilometry considering camera lens distortion

    NASA Astrophysics Data System (ADS)

    Feng, Shijie; Chen, Qian; Zuo, Chao; Sun, Jiasong; Yu, Shi Ling

    2014-10-01

    Optical three-dimensional (3-D) profilometry is gaining increasing attention for its simplicity, flexibility, high accuracy, and non-contact nature. Recent advances in imaging sensors and digital projection technology further its progress in high-speed, real-time applications, enabling 3-D shape reconstruction of moving objects and dynamic scenes. However, the camera lens is never perfect, and lens distortion does influence the accuracy of the measurement result, which is often overlooked in existing real-time 3-D shape measurement systems. Here we present a novel high-speed real-time 3-D coordinate measuring technique based on fringe projection that takes camera lens distortion into account. A pixel mapping relation between a distorted image and a corrected one is pre-determined and stored in computer memory for real-time fringe correction. The out-of-plane height is obtained first, and the two corresponding in-plane coordinates are then acquired on the basis of the solved height. In addition, a lookup table (LUT) method is introduced for fast data processing. Our experimental results reveal that the measurement error of the in-plane coordinates is reduced by one order of magnitude and the accuracy of the out-of-plane coordinate is tripled once the distortions are eliminated. Moreover, owing to the generated LUTs, a 3-D reconstruction speed of 92.34 frames per second can be achieved.
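
    The pre-computed pixel-mapping idea described above can be sketched in Python: for an assumed radial distortion model, a lookup table mapping each corrected pixel to its location in the distorted image is built once, after which every incoming fringe image is corrected with a fast remap. The intrinsics and distortion coefficients below are invented, and scipy's map_coordinates stands in for whatever remapping routine the authors used.

```python
# Pre-computed undistortion LUT plus fast per-frame remap for fringe images.
import numpy as np
from scipy.ndimage import map_coordinates

H, W = 480, 640
fx, fy, cx, cy = 800.0, 800.0, W / 2.0, H / 2.0   # assumed intrinsics
k1, k2 = -0.25, 0.08                              # assumed radial distortion

# Build the LUT once: for every corrected pixel, where does it live in the
# distorted image?
v, u = np.mgrid[0:H, 0:W].astype(np.float64)
xn = (u - cx) / fx                                 # normalized coordinates
yn = (v - cy) / fy
r2 = xn**2 + yn**2
scale = 1.0 + k1 * r2 + k2 * r2**2                 # forward radial model
u_dist = fx * xn * scale + cx
v_dist = fy * yn * scale + cy
lut = np.stack([v_dist, u_dist])                   # kept in memory

def correct(frame):
    """Apply the pre-computed LUT to one distorted fringe image."""
    return map_coordinates(frame, lut, order=1, mode="nearest")

# Example: correct a synthetic fringe pattern
fringe = 0.5 + 0.5 * np.cos(2 * np.pi * u / 16.0)  # placeholder fringe image
corrected = correct(fringe)
print(corrected.shape, corrected.dtype)
```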

  7. 3D Visualization of near real-time remote-sensing observation for hurricanes field campaign using Google Earth API

    NASA Astrophysics Data System (ADS)

    Li, P.; Turk, J.; Vu, Q.; Knosp, B.; Hristova-Veleva, S. M.; Lambrigtsen, B.; Poulsen, W. L.; Licata, S.

    2009-12-01

    NASA is planning a new field experiment, the Genesis and Rapid Intensification Processes (GRIP), in the summer of 2010 to better understand how tropical storms form and develop into major hurricanes. The DC-8 aircraft and the Global Hawk Unmanned Airborne System (UAS) will be deployed, loaded with instruments for measurements including lightning, temperature, 3D wind, precipitation, liquid and ice water contents, and aerosol and cloud profiles. During the field campaign, both the spaceborne and the airborne observations will be collected in real time and integrated with the hurricane forecast models. This observation-model integration will help the campaign achieve its science goals by allowing team members to effectively plan the mission with current forecasts. To support the GRIP experiment, JPL developed a website for interactive visualization of all related remote-sensing observations in the GRIP geographical domain using the new Google Earth API. All the observations are collected in near real time (NRT) with 2- to 5-hour latency. The observations include a 1 km blended Sea Surface Temperature (SST) map from GHRSST L2P products; 6-hour composite images of GOES IR; stability indices, temperature and vapor profiles from AIRS and AMSU-B; microwave brightness temperature and rain index maps from AMSR-E, SSMI and TRMM-TMI; ocean surface wind vectors, vorticity and divergence of the wind from QuikSCAT; the 3D precipitation structure from TRMM-PR; and vertical profiles of cloud and precipitation from CloudSat. All the NRT observations are collected from the data centers and science facilities at NASA and NOAA, subsetted, re-projected, and composited into hourly or daily data products depending on the frequency of the observation. The data products are then displayed on the 3D Google Earth plug-in at the JPL Tropical Cyclone Information System (TCIS) website. The data products offered by the TCIS in the Google Earth display include image overlays, wind vectors, clickable

  8. Accurate Automatic Detection of Densely Distributed Cell Nuclei in 3D Space

    PubMed Central

    Tokunaga, Terumasa; Kanamori, Manami; Teramoto, Takayuki; Jang, Moon Sun; Kuge, Sayuri; Ishihara, Takeshi; Yoshida, Ryo; Iino, Yuichi

    2016-01-01

    To measure the activity of neurons using whole-brain activity imaging, precise detection of each neuron or its nucleus is required. In the head region of the nematode C. elegans, the neuronal cell bodies are distributed densely in three-dimensional (3D) space. However, no existing computational methods of image analysis can separate them with sufficient accuracy. Here we propose a highly accurate segmentation method based on the curvatures of the iso-intensity surfaces. To obtain accurate positions of nuclei, we also developed a new procedure for least squares fitting with a Gaussian mixture model. Combining these methods enables accurate detection of densely distributed cell nuclei in a 3D space. The proposed method was implemented as a graphical user interface program that allows visualization and correction of the results of automatic detection. Additionally, the proposed method was applied to time-lapse 3D calcium imaging data, and most of the nuclei in the images were successfully tracked and measured. PMID:27271939
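
    The nucleus-position refinement step can be illustrated with a short Python sketch in which a Gaussian mixture model is fitted to synthetic 3D point data and the component means are taken as nucleus centers; scikit-learn's EM-based GaussianMixture is used here as a stand-in for the authors' least-squares fitting procedure, and the curvature-based segmentation itself is not reproduced.

```python
# Fit a 3D Gaussian mixture to densely packed synthetic "nuclei" and report
# the component means as nucleus centers.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)

# Synthetic data: three overlapping blobs of fluorescence-weighted points
true_centers = np.array([[10.0, 12.0, 5.0],
                         [13.0, 12.5, 5.5],
                         [11.5, 16.0, 6.0]])      # densely packed in 3D
points = np.vstack([c + rng.normal(0, 0.8, size=(400, 3)) for c in true_centers])

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
gmm.fit(points)

order = np.argsort(gmm.means_[:, 1])              # sort for readable output
for mean, weight in zip(gmm.means_[order], gmm.weights_[order]):
    print(f"nucleus center ~ {np.round(mean, 2)}, weight {weight:.2f}")
```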

  9. Accurate Automatic Detection of Densely Distributed Cell Nuclei in 3D Space.

    PubMed

    Toyoshima, Yu; Tokunaga, Terumasa; Hirose, Osamu; Kanamori, Manami; Teramoto, Takayuki; Jang, Moon Sun; Kuge, Sayuri; Ishihara, Takeshi; Yoshida, Ryo; Iino, Yuichi

    2016-06-01

    To measure the activity of neurons using whole-brain activity imaging, precise detection of each neuron or its nucleus is required. In the head region of the nematode C. elegans, the neuronal cell bodies are distributed densely in three-dimensional (3D) space. However, no existing computational methods of image analysis can separate them with sufficient accuracy. Here we propose a highly accurate segmentation method based on the curvatures of the iso-intensity surfaces. To obtain accurate positions of nuclei, we also developed a new procedure for least squares fitting with a Gaussian mixture model. Combining these methods enables accurate detection of densely distributed cell nuclei in a 3D space. The proposed method was implemented as a graphical user interface program that allows visualization and correction of the results of automatic detection. Additionally, the proposed method was applied to time-lapse 3D calcium imaging data, and most of the nuclei in the images were successfully tracked and measured. PMID:27271939

  11. Issues and Challenges of Teaching and Learning in 3D Virtual Worlds: Real Life Case Studies

    ERIC Educational Resources Information Center

    Pfeil, Ulrike; Ang, Chee Siang; Zaphiris, Panayiotis

    2009-01-01

    We aimed to study the characteristics and usage patterns of 3D virtual worlds in the context of teaching and learning. To achieve this, we organised a full-day workshop to explore, discuss and investigate the educational use of 3D virtual worlds. Thirty participants took part in the workshop. All conversations were recorded and transcribed for…

  12. Supersymmetric D3/D7 for holographic flavors on curved space

    NASA Astrophysics Data System (ADS)

    Karch, Andreas; Robinson, Brandon; Uhlemann, Christoph F.

    2015-11-01

    We derive a new class of supersymmetric D3/D7 brane configurations, which make it possible to holographically describe N=4 SYM coupled to massive N=2 flavor degrees of freedom on spaces of constant curvature. We systematically solve the κ-symmetry condition for D7-brane embeddings into AdS4-sliced AdS5×S5, and find supersymmetric embeddings in a simple closed form. Up to a critical mass, these embeddings come in surprisingly diverse families, and we present a first study of their (holographic) phenomenology. We carry out the holographic renormalization, compute the one-point functions and attempt a field-theoretic interpretation of the different families. To complete the catalog of supersymmetric D3/D7 configurations, we construct analogous embeddings for flavored N=4 SYM on S4 and dS4.

  13. Parallel Imaging of 3D Surface Profile with Space-Division Multiplexing.

    PubMed

    Lee, Hyung Seok; Cho, Soon-Woo; Kim, Gyeong Hun; Jeong, Myung Yung; Won, Young Jae; Kim, Chang-Seok

    2016-01-01

    We have developed a modified optical frequency domain imaging (OFDI) system that performs parallel imaging of three-dimensional (3D) surface profiles by using the space-division multiplexing (SDM) method with dual-area swept-source beams. We have also demonstrated that 3D surface information for two different areas can be obtained at the same time with only one camera by our method. In this study, two fields of view (FOVs) of 11.16 mm × 5.92 mm were acquired within 0.5 s. The height range for each FOV was 460 µm, and the axial and transverse resolutions were 3.6 and 5.52 µm, respectively.

  14. Gust Acoustics Computation with a Space-Time CE/SE Parallel 3D Solver

    NASA Technical Reports Server (NTRS)

    Wang, X. Y.; Himansu, A.; Chang, S. C.; Jorgenson, P. C. E.; Reddy, D. R. (Technical Monitor)

    2002-01-01

    The benchmark Problem 2 in Category 3 of the Third Computational Aero-Acoustics (CAA) Workshop is solved using the space-time conservation element and solution element (CE/SE) method. This problem concerns the unsteady response of an isolated finite-span swept flat-plate airfoil bounded by two parallel walls to an incident gust. The acoustic field generated by the interaction of the gust with the flat-plate airfoil is computed by solving the 3D (three-dimensional) Euler equations in the time domain using a parallel version of a 3D CE/SE solver. The effect of the gust orientation on the far-field directivity is studied. Numerical solutions are presented and compared with analytical solutions, showing a reasonable agreement.

  15. Probabilistic Modeling of Conformational Space for 3D Machine Learning Approaches.

    PubMed

    Jahn, Andreas; Hinselmann, Georg; Fechner, Nikolas; Henneges, Carsten; Zell, Andreas

    2010-05-17

    We present a new probabilistic encoding of the conformational space of a molecule that allows for the integration into common similarity calculations. The method uses distance profiles of flexible atom-pairs and computes generative models that describe the distance distribution in the conformational space. The generative models permit the use of probabilistic kernel functions and, therefore, our approach can be used to extend existing 3D molecular kernel functions, as applied in support vector machines, to build QSAR models. The resulting kernels are valid 4D kernel functions and reduce the dependency of the model quality on suitable conformations of the molecules. We showed in several experiments the robust performance of the 4D kernel function, which was extended by our approach, in comparison to the original 3D-based kernel function. The new method compares the conformational space of two molecules within one kernel evaluation. Hence, the number of kernel evaluations is significantly reduced in comparison to common kernel-based conformational space averaging techniques. Additionally, the performance gain of the extended model correlates with the flexibility of the data set and enables an a priori estimation of the model improvement.
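
    The flavor of this encoding can be conveyed with a small Python sketch: each flexible atom pair is summarized by a Gaussian over its inter-atomic distance across conformers, and two molecules are compared with a closed-form probability-product (Bhattacharyya) kernel between the per-pair distributions. The distances are synthetic and the kernel is a generic illustration, not the authors' exact formulation.

```python
# Compare two "molecules" via Gaussian summaries of atom-pair distance profiles.
import numpy as np

def bhattacharyya_gaussian(mu1, s1, mu2, s2):
    """Closed-form Bhattacharyya coefficient between two 1D Gaussians."""
    var = s1**2 + s2**2
    return np.sqrt(2.0 * s1 * s2 / var) * np.exp(-((mu1 - mu2) ** 2) / (4.0 * var))

def pair_profiles(distance_samples):
    """Fit a Gaussian (mean, std) to each flexible atom-pair distance profile."""
    return [(np.mean(d), np.std(d) + 1e-6) for d in distance_samples]

def conformational_kernel(mol_a, mol_b):
    """Average pairwise Bhattacharyya coefficient over matched atom pairs."""
    vals = [bhattacharyya_gaussian(ma, sa, mb, sb)
            for (ma, sa), (mb, sb) in zip(pair_profiles(mol_a), pair_profiles(mol_b))]
    return float(np.mean(vals))

rng = np.random.default_rng(4)
# Two "molecules", each with 3 flexible atom pairs sampled over 50 conformers
mol_a = [rng.normal(4.2, 0.3, 50), rng.normal(6.0, 0.8, 50), rng.normal(3.1, 0.1, 50)]
mol_b = [rng.normal(4.5, 0.4, 50), rng.normal(5.2, 0.6, 50), rng.normal(3.1, 0.2, 50)]

print(f"conformational-space similarity: {conformational_kernel(mol_a, mol_b):.3f}")
print(f"self-similarity:                 {conformational_kernel(mol_a, mol_a):.3f}")
```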

  16. 3D Analysis of Remote-Sensed Heliospheric Data for Space Weather Forecasting

    NASA Astrophysics Data System (ADS)

    Yu, H. S.; Jackson, B. V.; Hick, P. P.; Buffington, A.; Bisi, M. M.; Odstrcil, D.; Hong, S.; Kim, J.; Yi, J.; Tokumaru, M.; Gonzalez-Esparza, A.

    2015-12-01

    The University of California, San Diego (UCSD) time-dependent iterative kinematic reconstruction technique has been used and expanded upon for over two decades. It currently provides some of the most accurate predictions and three-dimensional (3D) analyses of heliospheric solar-wind parameters now available using interplanetary scintillation (IPS) data. The parameters provided include reconstructions of velocity, density, and magnetic fields. Precise time-dependent results are obtained at any solar distance in the inner heliosphere using current Solar-Terrestrial Environment Laboratory (STELab), Nagoya University, Japan IPS data sets, but the reconstruction technique can also incorporate data from other IPS systems from around the world. With access to worldwide IPS data systems, not only can predictions using the reconstruction technique be made without observation dead times due to poor longitude coverage or system outages, but the program can itself be used to standardize observations of IPS. Additionally, these analyses are now being exploited as inner-boundary values to drive an ENLIL 3D-MHD heliospheric model in real time. A major potential benefit is that the more realistic physics of 3D-MHD modeling will provide an automatic forecast of CMEs and corotating structures up to several days in advance of the event/features arriving at Earth, with or without involving coronagraph imagery or the necessity of magnetic fields being used to provide the background solar wind speeds.

  17. Application of 3D WebGIS and real-time technique in earthquake information publishing and visualization

    NASA Astrophysics Data System (ADS)

    Li, Boren; Wu, Jianping; Pan, Mao; Huang, Jing

    2015-06-01

    In hazard management, earthquake researchers have utilized GIS to ease the process of managing disasters, and WebGIS has been used to assess hazards and seismic risk. Although such systems provide a visual analysis platform based on GIS technology, they lack a general approach to extending WebGIS for processing dynamic data, especially real-time data. In this paper, we propose a novel real-time 3D visual earthquake information publishing model based on WebGIS and a digital globe to improve the ability of WebGIS-based systems to process real-time data. On the basis of the model, we implement a real-time 3D earthquake information publishing system, EqMap3D. The system can not only publish real-time earthquake information but also display these data and their background geoscience information in a 3D scene. It provides a powerful tool for display, analysis, and decision-making for researchers and administrators, and facilitates better communication between geoscience researchers and the interested public.

  18. Real Time 3D Echocardiographic Evaluation of Iatrogenic Atrial Septal Defects After Percutaneous Transvenous Mitral Commissurotomy

    PubMed Central

    Devarakonda, Sarath Babu; Mannuva, Boochi Babu; Durgaprasad, Rajasekhar; Velam, Vanajakshamma; Akula, Vidya Sagar; Kasala, Latheef

    2015-01-01

    Introduction: Percutaneous transvenous mitral commissurotomy (PTMC) is a safe and effective procedure for the relief of severe mitral stenosis. PTMC is widely performed, and many transseptal procedures requiring large-diameter catheters and sheaths are becoming popular, so knowledge of iatrogenic atrial septal defects (iASD) is vital. This study assessed the use of real-time 3D echocardiography (RT3DE) and the incidence of iASD in a cohort of patients undergoing transseptal catheterization during PTMC. Methods: One hundred ten patients underwent PTMC. The reliability and accuracy of RT3DE for iASD detection were determined, RT3DE was compared with 2D echocardiography (2DE) for iASD occurrence, influencing variables were analyzed, and patients were followed up for 1 year. Results: RT3DE was more reliable and accurate for the study of iASD than 2DE. Color RT3DE detected iASD in 94 patients (85.5%), whereas 2DE detected iASD in 74 (67.3%) (P < .0001). On follow-up, 85% had iASD post-procedure, 56% at 6 months, and 19% at 1-year follow-up. The mean iASD diameter was 5.41 ± 3.12 mm and the mean area 6.57 ± 3.81 mm2. iASD correlated with patient height, Wilkins score, pre-PTMC LA 'v', and post-PTMC LVEDP. Conclusion: RT3DE imaging is superior in accuracy to traditional 2DE techniques. All the modes of RT3DE are useful in the assessment of iASD. iASD measured by RT3DE correlates with several patient, procedural and echocardiographic variables. PMID:26430495

  19. PSF Rotation with Changing Defocus and Applications to 3D Imaging for Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Prasad, S.; Kumar, R.

    2013-09-01

    For a clear, well corrected imaging aperture in space, the point-spread function (PSF) in its Gaussian image plane has the conventional, diffraction-limited, tightly focused Airy form. Away from that plane, however, the PSF broadens rapidly, resulting in a loss of sensitivity and transverse resolution that makes such a traditional best-optics approach untenable for rapid 3D image acquisition. One must scan in focus to maintain high sensitivity and resolution as one acquires image data, slice by slice, from a 3D volume with reduced efficiency. In this paper we describe a computational-imaging approach to overcome this limitation, one that uses pupil-phase engineering to fashion a PSF that, although not as tight as the Airy spot, maintains its shape and size while rotating uniformly with changing defocus over many waves of defocus phase at the pupil edge. As one of us has shown recently [1], the subdivision of a circular pupil aperture into M Fresnel zones, with the mth zone having an outer radius proportional to √m and impressing a spiral phase profile of the form mφ on the light wave, where φ is the azimuthal angle coordinate measured from a fixed x axis (the dislocation line), yields a PSF that rotates with defocus while keeping its shape and size. Physically speaking, a nonzero defocus of a point source means a quadratic optical phase in the pupil that, because of the square-root dependence of the zone radius on the zone number, increases on average by the same amount from one zone to the next. This uniformly incrementing phase yields, in effect, a rotation of the dislocation line, and thus a rotated PSF. Since the zone-to-zone phase increment depends linearly on defocus to first order, the PSF rotates uniformly with changing defocus. For an M-zone pupil, a complete rotation of the PSF occurs when the defocus-induced phase at the pupil edge changes by M waves. Our recent simulations of reconstructions from image data for 3D image scenes comprised of point sources at
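
    The construction can be checked numerically with a short Python sketch: a circular pupil is divided into M Fresnel zones with outer radii proportional to √m, zone m carries a spiral phase mφ, a quadratic defocus phase is added, and the far-field PSF is obtained by FFT; the orientation of the main off-axis lobe is then reported for a few defocus values. Grid size, zone count and scaling are arbitrary illustrative choices.

```python
# Numerical sketch of a rotating PSF from a zoned spiral-phase pupil.
import numpy as np

N, M = 256, 7                                   # grid size, number of zones
y, x = np.mgrid[-N//2:N//2, -N//2:N//2] / (N / 2.0)
r = np.hypot(x, y)
phi = np.arctan2(y, x)
pupil = (r <= 1.0).astype(float)

# Zone index: zone m occupies sqrt((m-1)/M) < r <= sqrt(m/M)
zone = np.clip(np.ceil(M * r**2), 1, M)
spiral = np.exp(1j * zone * phi)                # m*phi spiral phase per zone

def psf(defocus_waves):
    """Intensity PSF for a given defocus (waves of phase at the pupil edge)."""
    defocus = np.exp(1j * 2 * np.pi * defocus_waves * r**2)
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil * spiral * defocus)))
    return np.abs(field) ** 2

def lobe_angle(img):
    """Orientation of the PSF's brightest off-axis lobe (degrees)."""
    iy, ix = np.unravel_index(np.argmax(img), img.shape)
    return np.degrees(np.arctan2(iy - N // 2, ix - N // 2))

for d in (0.0, 1.0, 2.0, 3.0):
    print(f"defocus {d:.1f} waves -> main lobe at {lobe_angle(psf(d)):6.1f} deg")
```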

  20. Helicopter Flight Test of a Compact, Real-Time 3-D Flash Lidar for Imaging Hazardous Terrain During Planetary Landing

    NASA Technical Reports Server (NTRS)

    Roback, VIncent E.; Amzajerdian, Farzin; Brewster, Paul F.; Barnes, Bruce W.; Kempton, Kevin S.; Reisse, Robert A.; Bulyshev, Alexander E.

    2013-01-01

    A second generation, compact, real-time, air-cooled 3-D imaging Flash Lidar sensor system, developed from a number of cutting-edge components from industry and NASA, is lab-characterized and helicopter flight tested under the Autonomous Precision Landing and Hazard Detection and Avoidance Technology (ALHAT) project. The ALHAT project is seeking to develop a guidance, navigation, and control (GN&C) and sensing system based on lidar technology capable of enabling safe, precise crewed or robotic landings in challenging terrain on planetary bodies under any ambient lighting conditions. The Flash Lidar incorporates a 3-D imaging video camera based on indium gallium arsenide avalanche photodiode and novel micro-electronic technology for a 128 x 128 pixel array operating at a video rate of 20 Hz, a high pulse-energy 1.06 µm neodymium-doped yttrium aluminum garnet (Nd:YAG) laser, a remote laser safety termination system, high-performance transmitter and receiver optics with one- and five-degree fields of view (FOV), enhanced onboard thermal control, as well as a compact and self-contained suite of support electronics housed in a single box and built around a PC-104 architecture to enable autonomous operations. The Flash Lidar was developed and then characterized at two NASA Langley Research Center (LaRC) outdoor laser test range facilities both statically and dynamically, integrated with other ALHAT GN&C subsystems from partner organizations, and installed onto a Bell UH-1H Iroquois "Huey" helicopter at LaRC. The integrated system was flight tested at the NASA Kennedy Space Center (KSC) on simulated lunar approach to a custom hazard field consisting of rocks, craters, hazardous slopes, and safe sites near the Shuttle Landing Facility runway, starting at slant ranges of 750 m. In order to evaluate different methods of achieving hazard detection, the lidar, in conjunction with the ALHAT hazard detection and GN&C system, operates in both a narrow 1° FOV raster

  1. Multi-scale simulations of space problems with iPIC3D

    NASA Astrophysics Data System (ADS)

    Lapenta, Giovanni; Bettarini, Lapo; Markidis, Stefano

    The implicit Particle-in-Cell method for the computer simulation of space plasma, and its implementation in a three-dimensional parallel code called iPIC3D, are presented. The implicit integration in time of the Vlasov-Maxwell system removes the numerical stability constraints and enables kinetic plasma simulations at magnetohydrodynamics scales. Simulations of magnetic reconnection in plasma are presented to show the effectiveness of the algorithm. In particular, we show a number of simulations done for large-scale 3D systems using the physical mass ratio for hydrogen. Most notably, one simulation treats kinetically a box of tens of Earth radii in each direction and was conducted using about 16000 processors of the Pleiades NASA computer. The work is conducted in collaboration with the MMS-IDS theory team from the University of Colorado (M. Goldman, D. Newman and L. Andersson). Reference: Stefano Markidis, Giovanni Lapenta, Rizwan-uddin, "Multi-scale simulations of plasma with iPIC3D", Mathematics and Computers in Simulation, available online 17 October 2009, http://dx.doi.org/10.1016/j.matcom.2009.08.038

  2. Using articulated scene models for dynamic 3d scene analysis in vista spaces

    NASA Astrophysics Data System (ADS)

    Beuter, Niklas; Swadzba, Agnes; Kummert, Franz; Wachsmuth, Sven

    2010-09-01

    In this paper we describe an efficient but detailed new approach to analyze complex dynamic scenes directly in 3D. The resulting information is important for mobile robots solving tasks in the area of household robotics. In our work, a mobile robot builds an articulated scene model by observing the environment in the visual field, or rather in the so-called vista space. The articulated scene model consists of essential knowledge about the static background, about autonomously moving entities like humans or robots, and finally, in contrast to existing approaches, information about articulated parts. These parts describe movable objects like chairs, doors or other tangible entities which could be moved by an agent. The combination of the static scene, the self-moving entities and the movable objects in one articulated scene model enhances the calculation of each single part. The reconstruction process for parts of the static scene benefits from the removal of the dynamic parts and, in turn, the moving parts can be extracted more easily through the knowledge about the background. In our experiments we show that the system simultaneously delivers an accurate static background model, moving persons and movable objects. This information in the articulated scene model enables a mobile robot to detect and keep track of interaction partners, to navigate safely through the environment and, finally, to strengthen the interaction with the user through the knowledge about the 3D articulated objects and 3D scene analysis.

  3. Real-time 3-D X-ray and gamma-ray viewer

    NASA Technical Reports Server (NTRS)

    Yin, L. I. (Inventor)

    1983-01-01

    A multi-pinhole aperture lead screen forms an equal plurality of invisible mini-images having dissimilar perspectives of an X-ray and gamma-ray emitting object (ABC) onto a near-earth phosphor layer. This layer provides visible light mini-images directly into a visible light image intensifier. A viewing screen having an equal number of dissimilar perspective apertures distributed across its face in a geometric pattern identical to the lead screen, provides a viewer with a real, pseudoscopic image (A'B'C') of the object with full horizontal and vertical parallax. Alternatively, a third screen identical to viewing screen and spaced apart from a second visible light image intensifier, may be positioned between the first image intensifier and the viewing screen, thereby providing the viewer with a virtual, orthoscopic image (A"B"C") of the object (ABC) with full horizontal and vertical parallax.

  4. Space-charge driven emittance growth in a 3D mismatched anisotropic beam

    SciTech Connect

    Qiang, J.; Ryne, R.D.; Hofmann, I.

    2002-12-03

    In this paper we present a 3D simulation study of the emittance growth in a mismatched anisotropic beam. The equipartitioning driven by a 4th order space-charge resonance can be significantly modified by the presence of mismatch oscillation and halo formation. This causes emittance growth in both the longitudinal and transverse directions which could drive the beam even further away from equipartition. The averaged emittance growth per degree of freedom follows the upper bound of the 2D free energy limit plus the contributions from equipartitioning.

  5. PARALLEL 3-D SPACE CHARGE CALCULATIONS IN THE UNIFIED ACCELERATOR LIBRARY.

    SciTech Connect

    D'IMPERIO, N.L.; LUCCIO, A.U.; MALITSKY, N.

    2006-06-26

    The paper presents the integration of the SIMBAD space charge module in the UAL framework. SIMBAD is a Particle-in-Cell (PIC) code. Its 3-D Parallel approach features an optimized load balancing scheme based on a genetic algorithm. The UAL framework enhances the SIMBAD standalone version with the interactive ROOT-based analysis environment and an open catalog of accelerator algorithms. The composite package addresses complex high intensity beam dynamics and has been developed as part of the FAIR SIS 100 project.

  6. Portable high-intensity focused ultrasound system with 3D electronic steering, real-time cavitation monitoring, and 3D image reconstruction algorithms: a preclinical study in pigs

    PubMed Central

    2014-01-01

    Purpose: The aim of this study was to evaluate the safety and accuracy of a new portable ultrasonography-guided high-intensity focused ultrasound (USg-HIFU) system with a 3-dimensional (3D) electronic steering transducer, a simultaneous ablation and imaging module, real-time cavitation monitoring, and 3D image reconstruction algorithms. Methods: To address the accuracy of the transducer, hydrophones in a water chamber were used to assess the generation of sonic fields. An animal study was also performed in five pigs by ablating in vivo thighs by single-point sonication (n=10) or volume sonication (n=10) and ex vivo kidneys by single-point sonication (n=10). Histological and statistical analyses were performed. Results: In the hydrophone study, peak voltages were detected within 1.0 mm from the targets on the y- and z-axes and within 2.0-mm intervals along the x-axis (z-axis, direction of ultrasound propagation; y- and x-axes, perpendicular to the direction of ultrasound propagation). Twenty-nine of 30 HIFU sessions successfully created ablations at the target. The in vivo porcine thigh study showed only a small discrepancy (width, 0.5-1.1 mm; length, 3.0 mm) between the planning ultrasonograms and the pathological specimens. Inordinate thermal damage was not observed in the adjacent tissues or sonic pathways in the in vivo thigh and ex vivo kidney studies. Conclusion: Our study suggests that this new USg-HIFU system may be a safe and accurate technique for ablating soft tissues and encapsulated organs. PMID:25038809

  7. Representing geometric structures in 3D tomography soil images: Application to pore-space modeling

    NASA Astrophysics Data System (ADS)

    Monga, Olivier; Ndeye Ngom, Fatou; François Delerue, Jean

    2007-09-01

    Only in the last decade have geoscientists started to use 3D computed tomography (CT) images of soil for better understanding and modeling of soil properties. In this paper, we propose one of the first approaches to allow the definition and computation of stable (intrinsic) geometric representations of structures in 3D CT soil images. This addresses the open problem set by the description of volume shapes from discrete traces without any a priori information. The basic concept involves representing the volume shape by a piecewise approximation using simple volume primitives (bowls, cylinders, cones, etc.). This typical representation is assumed to optimize a criterion ensuring its stability. This criterion includes the representation scale, which characterizes the trade-off between the fitting error and the number of patches. We also take into account the preservation of topological properties of the initial shape: the number of connected components, adjacency relationships, etc. We propose an efficient computation method for this piecewise approximation using cylinders or bowls. For cylinders, we use optimal region growing in a valuated adjacency graph that represents the primitives and their adjacency relationships. For bowls, we compute a minimal set of Delaunay spheres recovering the skeleton. Our method is applied to modeling of a coarse pore space extracted from 3D CT soil images. The piecewise bowls approximation gives a geometric formalism corresponding to the intuitive notion of pores and also an efficient way to compute it. This geometric and topological representation of coarse pore space can be used, for instance, to simulate biological activity in soil.

  8. 3D Printing in Zero-G Experiment, In Space Manufacturing (LPS, 4)

    NASA Technical Reports Server (NTRS)

    Bean, Quincy; Cooper, Ken; Werkheiser, Niki

    2015-01-01

    The 3D Printing in Zero-G Experiment has been an ongoing effort for several years. In June 2014 the technology demonstration 3D printer was launched to the International Space Station. In November 2014 the first 21 parts were manufactured in orbit, marking the beginning of a paradigm shift that will allow astronauts to be more self-sufficient and pave the way to larger-scale orbital manufacturing. Prior to launch, the same 21 parts were built on the ground with the flight unit using the same feedstock. These ground control samples are to be tested alongside the flight samples in order to determine if there is a measurable difference between parts built on the ground vs. parts built in space. As of this writing, testing has not yet commenced. Tests to be performed are structured light scanning for volume and geometric discrepancies, CT scanning for density measurement, destructive testing of mechanical samples, and SEM analysis for inter-laminar adhesion discrepancies. Additionally, an ABS material characterization was performed on mechanical samples built from the same CAD files as the flight and ground samples on different machine/feedstock combinations. The purpose of this testing was twofold: first, to obtain mechanical data in order to have a baseline comparison for the flight and ground samples, and second, to ascertain if there is a measurable difference between machines and feedstock.

  9. CheS-Mapper - Chemical Space Mapping and Visualization in 3D

    PubMed Central

    2012-01-01

    Analyzing chemical datasets is a challenging task for scientific researchers in the field of chemoinformatics. It is important, yet difficult, to understand the relationship between the structure of chemical compounds, their physico-chemical properties, and biological or toxic effects. In this respect, visualization tools can help to better comprehend the underlying correlations. Our recently developed 3D molecular viewer CheS-Mapper (Chemical Space Mapper) divides large datasets into clusters of similar compounds and consequently arranges them in 3D space, such that their spatial proximity reflects their similarity. The user can indirectly determine similarity by selecting which features to employ in the process. The tool can use and calculate different kinds of features, like structural fragments as well as quantitative chemical descriptors. These features can be highlighted within CheS-Mapper, which helps the chemist to better understand patterns and regularities and relate the observations to established scientific knowledge. As a final function, the tool can also be used to select and export specific subsets of a given dataset for further analysis. PMID:22424447
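
    As a rough illustration of the general idea of arranging compounds in 3D by descriptor similarity, and not of CheS-Mapper's actual algorithm or feature handling, the hedged sketch below clusters a placeholder descriptor matrix with a simple k-means loop and projects it onto three principal components for a 3D layout.

        # Sketch: cluster compounds by descriptor similarity and embed them in 3D.
        # Purely illustrative; the descriptor matrix and cluster count are placeholders.
        import numpy as np

        rng = np.random.default_rng(1)
        X = rng.normal(size=(300, 12))        # placeholder: 300 compounds x 12 descriptors

        # Standardize descriptors so each contributes comparably to distances.
        X = (X - X.mean(axis=0)) / X.std(axis=0)

        # Simple k-means (Lloyd's algorithm).
        k = 5
        centers = X[rng.choice(len(X), k, replace=False)]
        for _ in range(50):
            d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            labels = d.argmin(axis=1)
            centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])

        # PCA (via SVD of the already-centered matrix) to three components for 3D coordinates.
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        coords3d = X @ Vt[:3].T

        for j in range(k):
            print(f"cluster {j}: {np.sum(labels == j)} compounds, "
                  f"3D centroid = {coords3d[labels == j].mean(axis=0).round(2)}")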

  10. Real-Time Analysis of Endogenous Wnt Signalling in 3D Mesenchymal Stromal Cells.

    PubMed

    Saleh, Fatima; Carstairs, Alice; Etheridge, S Leah; Genever, Paul

    2016-01-01

    Wnt signalling has been implicated in the regulation of stem cell self-renewal and differentiation; however, the majority of in vitro studies are carried out using monolayer 2D culture techniques. Here, we used mesenchymal stromal cell (MSC) EGFP reporter lines responsive to Wnt pathway activation in a 3D spheroid culture system to mimic better the in vivo environment. Endogenous Wnt signalling was then investigated under basal conditions and when MSCs were induced to undergo osteogenic and adipogenic differentiation. Interestingly, endogenous Wnt signalling was only active during 3D differentiation whereas 2D cultures showed no EGFP expression throughout an extended differentiation time-course. Furthermore, exogenous Wnt signalling in 3D adipogenic conditions inhibited differentiation compared to unstimulated controls. In addition, suppressing Wnt signalling by Dkk-1 restored and facilitated adipogenic differentiation in MSC spheroids. Our findings indicate that endogenous Wnt signalling is active and can be tracked in 3D MSC cultures where it may act as a molecular switch in adipogenesis. The identification of the signalling pathways that regulate MSCs in a 3D in vivo-like environment will advance our understanding of the molecular mechanisms that control MSC fate. PMID:27668000

  11. Real-Time Analysis of Endogenous Wnt Signalling in 3D Mesenchymal Stromal Cells

    PubMed Central

    Saleh, Fatima; Etheridge, S. Leah

    2016-01-01

    Wnt signalling has been implicated in the regulation of stem cell self-renewal and differentiation; however, the majority of in vitro studies are carried out using monolayer 2D culture techniques. Here, we used mesenchymal stromal cell (MSC) EGFP reporter lines responsive to Wnt pathway activation in a 3D spheroid culture system to mimic better the in vivo environment. Endogenous Wnt signalling was then investigated under basal conditions and when MSCs were induced to undergo osteogenic and adipogenic differentiation. Interestingly, endogenous Wnt signalling was only active during 3D differentiation whereas 2D cultures showed no EGFP expression throughout an extended differentiation time-course. Furthermore, exogenous Wnt signalling in 3D adipogenic conditions inhibited differentiation compared to unstimulated controls. In addition, suppressing Wnt signalling by Dkk-1 restored and facilitated adipogenic differentiation in MSC spheroids. Our findings indicate that endogenous Wnt signalling is active and can be tracked in 3D MSC cultures where it may act as a molecular switch in adipogenesis. The identification of the signalling pathways that regulate MSCs in a 3D in vivo-like environment will advance our understanding of the molecular mechanisms that control MSC fate. PMID:27668000

  12. Real-Time Analysis of Endogenous Wnt Signalling in 3D Mesenchymal Stromal Cells

    PubMed Central

    Saleh, Fatima; Etheridge, S. Leah

    2016-01-01

    Wnt signalling has been implicated in the regulation of stem cell self-renewal and differentiation; however, the majority of in vitro studies are carried out using monolayer 2D culture techniques. Here, we used mesenchymal stromal cell (MSC) EGFP reporter lines responsive to Wnt pathway activation in a 3D spheroid culture system to mimic better the in vivo environment. Endogenous Wnt signalling was then investigated under basal conditions and when MSCs were induced to undergo osteogenic and adipogenic differentiation. Interestingly, endogenous Wnt signalling was only active during 3D differentiation whereas 2D cultures showed no EGFP expression throughout an extended differentiation time-course. Furthermore, exogenous Wnt signalling in 3D adipogenic conditions inhibited differentiation compared to unstimulated controls. In addition, suppressing Wnt signalling by Dkk-1 restored and facilitated adipogenic differentiation in MSC spheroids. Our findings indicate that endogenous Wnt signalling is active and can be tracked in 3D MSC cultures where it may act as a molecular switch in adipogenesis. The identification of the signalling pathways that regulate MSCs in a 3D in vivo-like environment will advance our understanding of the molecular mechanisms that control MSC fate.

  13. 3D models as a platform for urban analysis and studies on human perception of space

    NASA Astrophysics Data System (ADS)

    Fisher-Gewirtzman, D.

    2012-10-01

    The objective of this work is to develop integrated visual analysis and modelling for environmental and urban systems with respect to interior space layout and functionality. This work involves interdisciplinary research efforts that focus primarily on the architectural design discipline, yet incorporate experts from other disciplines, such as geoinformatics, computer science and environment-behavior studies. This work integrates an advanced Spatial Openness Index (SOI) model within a realistic geovisualized Geographical Information System (GIS) environment and assessment using subjective residents' evaluations. The advanced SOI model measures the volume of visible space at any required viewpoint, practically for every room or function. This model enables accurate 3D simulation of the built environment regarding the built structure and surrounding vegetation. This paper demonstrates the work on a case study: a 3D model of the Neve-Shaanan neighbourhood in Haifa was developed. Students who live in this neighbourhood participated in this research. Their apartments were modelled in detail and inserted into a general model representing the topography and the volumes of buildings. The visual space for each room in every apartment was documented and measured, and at the same time the students were asked to answer questions regarding their perception of space and the view from their residence. The results of this research have shown a potential contribution to professional users, such as researchers, designers and city planners. This model can be easily used by professionals and by non-professionals such as city dwellers, contractors and developers. This work continues with additional case studies having different building typologies and a variety of functions, using virtual reality tools.

  14. Programmable real-time applications with the 3D-Flow for input data rate systems of hundreds of MHz

    SciTech Connect

    Crosetto, D.

    1996-02-01

    The applicability of the 3D-Flow system to different experimental setups for real-time applications in the range of hundreds of nanoseconds is described. The results of the simulation of several real-time applications using the 3D-Flow demonstrate the advantages of a simple architecture that carries out operations in a balanced manner using regular connections and exceptionally few replicated components compared to conventional microprocessors. Diverse applications can be found that will benefit from this approach: High Energy Physics (HEP), which typically requires discerning patterns from thousands of accelerator particle collision signals at input data rates of up to 40 MHz; Medical Imaging, which requires interactive tools for studying fast-occurring biological processes; processing output from high-rate CCD cameras in commercial applications, such as quality control in manufacturing; data compression; speech and character recognition; automatic automobile guidance, and other applications. The 3D-Flow system was conceived for experiments at the Superconducting Super Collider (SSC). It was adopted by the Gamma Electron and Muon (GEM) experiment, where it was to be used for particle identification. The target of the 3D-Flow system was real-time pattern recognition at 100 million frames/sec.

  15. Real-time 3D adaptive filtering for portable imaging systems

    NASA Astrophysics Data System (ADS)

    Bockenbach, Olivier; Ali, Murtaza; Wainwright, Ian; Nadeski, Mark

    2015-03-01

    Portable imaging devices have proven valuable for emergency medical services both in the field and hospital environments and are becoming more prevalent in clinical settings where the use of larger imaging machines is impractical. 3D adaptive filtering is one of the most advanced techniques aimed at noise reduction and feature enhancement, but is computationally very demanding and hence often not able to run with sufficient performance on a portable platform. In recent years, advanced multicore DSPs have been introduced that attain high processing performance while maintaining low levels of power dissipation. These processors enable the implementation of complex algorithms like 3D adaptive filtering, improving the image quality of portable medical imaging devices. In this study, the performance of a 3D adaptive filtering algorithm on a digital signal processor (DSP) is investigated. The performance is assessed by filtering a volume of size 512x256x128 voxels sampled at a rate of 10 MVoxels/sec.
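
    As a hedged illustration of the kind of computation involved, and not of the specific algorithm benchmarked in the record above, the sketch below applies a simple local-variance-adaptive (Lee-style) smoothing filter to a 3D volume with NumPy/SciPy; the volume size, kernel size and noise level are placeholders.

        # Sketch of a simple variance-adaptive 3D smoothing filter (Lee-type).
        # Illustrative only; the DSP implementation in the record is more elaborate.
        import numpy as np
        from scipy.ndimage import uniform_filter

        def adaptive_filter_3d(vol, size=3, noise_var=0.01):
            """Blend each voxel with its local mean according to local signal variance."""
            local_mean = uniform_filter(vol, size=size)
            local_sqr_mean = uniform_filter(vol * vol, size=size)
            local_var = np.maximum(local_sqr_mean - local_mean ** 2, 0.0)
            # Weight -> 1 in structured regions (high variance), -> 0 in flat/noisy regions.
            gain = local_var / (local_var + noise_var)
            return local_mean + gain * (vol - local_mean)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            vol = np.zeros((64, 64, 64), dtype=np.float32)
            vol[20:44, 20:44, 20:44] = 1.0                    # a bright cube as "anatomy"
            noisy = vol + rng.normal(0, 0.1, vol.shape).astype(np.float32)
            filtered = adaptive_filter_3d(noisy, size=5, noise_var=0.01)
            print("noise std before/after (flat region):",
                  float(noisy[:10, :10, :10].std()), float(filtered[:10, :10, :10].std()))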

  16. 3D Modeling of Building Indoor Spaces and Closed Doors from Imagery and Point Clouds

    PubMed Central

    Díaz-Vilariño, Lucía; Khoshelham, Kourosh; Martínez-Sánchez, Joaquín; Arias, Pedro

    2015-01-01

    3D models of indoor environments are increasingly gaining importance due to the wide range of applications to which they can be subjected: from redesign and visualization to monitoring and simulation. These models usually exist only for newly constructed buildings; therefore, the development of automatic approaches for reconstructing 3D indoors from imagery and/or point clouds can make the process easier, faster and cheaper. Among the constructive elements defining a building interior, doors are very common elements and their detection can be very useful either for knowing the environment structure, to perform an efficient navigation or to plan appropriate evacuation routes. The fact that doors are topologically connected to walls by being coplanar, together with the unavoidable presence of clutter and occlusions indoors, increases the inherent complexity of the automation of the recognition process. In this work, we present a pipeline of techniques used for the reconstruction and interpretation of building interiors based on point clouds and images. The methodology analyses the visibility problem of indoor environments and goes in depth with door candidate detection. The presented approach is tested in real data sets showing its potential with a high door detection rate and applicability for robust and efficient envelope reconstruction. PMID:25654723

  17. 3D modeling of building indoor spaces and closed doors from imagery and point clouds.

    PubMed

    Díaz-Vilariño, Lucía; Khoshelham, Kourosh; Martínez-Sánchez, Joaquín; Arias, Pedro

    2015-02-03

    3D models of indoor environments are increasingly gaining importance due to the wide range of applications to which they can be subjected: from redesign and visualization to monitoring and simulation. These models usually exist only for newly constructed buildings; therefore, the development of automatic approaches for reconstructing 3D indoors from imagery and/or point clouds can make the process easier, faster and cheaper. Among the constructive elements defining a building interior, doors are very common elements and their detection can be very useful either for knowing the environment structure, to perform an efficient navigation or to plan appropriate evacuation routes. The fact that doors are topologically connected to walls by being coplanar, together with the unavoidable presence of clutter and occlusions indoors, increases the inherent complexity of the automation of the recognition process. In this work, we present a pipeline of techniques used for the reconstruction and interpretation of building interiors based on point clouds and images. The methodology analyses the visibility problem of indoor environments and goes in depth with door candidate detection. The presented approach is tested in real data sets showing its potential with a high door detection rate and applicability for robust and efficient envelope reconstruction.

  18. 3D modeling of building indoor spaces and closed doors from imagery and point clouds.

    PubMed

    Díaz-Vilariño, Lucía; Khoshelham, Kourosh; Martínez-Sánchez, Joaquín; Arias, Pedro

    2015-01-01

    3D models of indoor environments are increasingly gaining importance due to the wide range of applications to which they can be subjected: from redesign and visualization to monitoring and simulation. These models usually exist only for newly constructed buildings; therefore, the development of automatic approaches for reconstructing 3D indoors from imagery and/or point clouds can make the process easier, faster and cheaper. Among the constructive elements defining a building interior, doors are very common elements and their detection can be very useful either for knowing the environment structure, to perform an efficient navigation or to plan appropriate evacuation routes. The fact that doors are topologically connected to walls by being coplanar, together with the unavoidable presence of clutter and occlusions indoors, increases the inherent complexity of the automation of the recognition process. In this work, we present a pipeline of techniques used for the reconstruction and interpretation of building interiors based on point clouds and images. The methodology analyses the visibility problem of indoor environments and goes in depth with door candidate detection. The presented approach is tested in real data sets showing its potential with a high door detection rate and applicability for robust and efficient envelope reconstruction. PMID:25654723

  19. Touring Mars Online, Real-time, in 3D for Math and Science Educators and Students

    ERIC Educational Resources Information Center

    Jones, Greg; Kalinowski, Kevin

    2007-01-01

    This article discusses a project that placed over 97% of Mars' topography made available from NASA into an interactive 3D multi-user online learning environment beginning in 2003. In 2005 curriculum materials that were created to support middle school math and science education were developed. Research conducted at the University of North Texas…

  20. Real-Time Large-Scale 3D Reconstruction by Fusing Kinect and IMU Data

    NASA Astrophysics Data System (ADS)

    Huai, J.; Zhang, Y.; Yilmaz, A.

    2015-08-01

    Kinect-style RGB-D cameras have been used to build large-scale dense 3D maps for indoor environments. These maps can serve many purposes such as robot navigation and augmented reality. However, generating dense 3D maps of large-scale environments is still very challenging. In this paper, we present a mapping system for 3D reconstruction that fuses measurements from a Kinect and an inertial measurement unit (IMU) to estimate motion. Our major achievements include: (i) large-scale consistent 3D reconstruction is realized by volume shifting and loop closure; (ii) the coarse-to-fine iterative closest point (ICP) algorithm, the SIFT odometry, and IMU odometry are combined to robustly and precisely estimate pose. In particular, ICP runs routinely to track the Kinect motion. If ICP fails in planar areas, the SIFT odometry provides an incremental motion estimate. If both ICP and the SIFT odometry fail, e.g., upon abrupt motion or inadequate features, the incremental motion is estimated by the IMU. Additionally, the IMU also observes the roll and pitch angles, which can reduce long-term drift of the sensor assembly. In experiments on a consumer laptop, our system estimates motion at 8 Hz on average while integrating color images to the local map and saving volumes of meshes concurrently. Moreover, it is immune to tracking failures, and has smaller drift than the state-of-the-art systems in large-scale reconstruction.
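
    The fallback logic described above (ICP first, then feature-based odometry, then IMU integration) can be summarised as a short control-flow sketch. The estimator functions below are placeholders standing in for ICP, SIFT matching and IMU strapdown integration; this is an assumed structure for illustration, not the authors' implementation.

        # Sketch of a motion-estimation fallback cascade: ICP -> features -> IMU.
        # The three estimators are placeholder stubs; only the cascade logic is real.
        import numpy as np

        def icp_estimate(prev_depth, cur_depth):
            # Placeholder: would return (success, 4x4 incremental transform) from ICP.
            return False, np.eye(4)

        def feature_odometry(prev_rgb, cur_rgb):
            # Placeholder: would fail when too few features are matched.
            return False, np.eye(4)

        def imu_integrate(imu_samples, dt):
            # Placeholder: crude incremental motion from averaged IMU samples.
            T = np.eye(4)
            T[:3, 3] = np.asarray(imu_samples).mean(axis=0)[:3] * dt
            return T

        def incremental_motion(prev_frame, cur_frame, imu_samples, dt):
            ok, T = icp_estimate(prev_frame["depth"], cur_frame["depth"])
            if ok:
                return T, "icp"
            ok, T = feature_odometry(prev_frame["rgb"], cur_frame["rgb"])
            if ok:
                return T, "features"
            return imu_integrate(imu_samples, dt), "imu"

        pose = np.eye(4)
        frame = {"depth": np.zeros((480, 640)), "rgb": np.zeros((480, 640, 3))}
        T, source = incremental_motion(frame, frame, np.zeros((10, 6)), dt=0.125)
        pose = pose @ T
        print("incremental motion supplied by:", source)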

  1. 3D real-time visualization of blood flow in cerebral aneurysms by light field particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Carlsohn, Matthias F.; Kemmling, André; Petersen, Arne; Wietzke, Lennart

    2016-04-01

    Cerebral aneurysms require endovascular treatment to eliminate potentially lethal hemorrhagic rupture by hemostasis of blood flow within the aneurysm. Devices (e.g. coils and flow diverters) promote hemostasis; however, measurement of blood flow within an aneurysm or cerebral vessel before and after device placement on a microscopic level has not been possible so far. This would allow better individualized treatment planning and improve the manufacturing design of devices. For experimental analysis, direct measurement of real-time microscopic cerebrovascular flow in micro-structures may be an alternative to computed flow simulations. Applying microscopic aneurysm flow measurement on a regular basis to empirically assess a high number of different anatomic shapes and the corresponding effect of different devices would require a fast, reliable and low-cost method with high-throughput assessment. Transparent three-dimensional (3D) models of brain vessels and aneurysms may be used for microscopic flow measurements by particle image velocimetry (PIV); however, up to now the size of the structures has set the limits for conventional 3D-imaging camera set-ups. Online flow assessment requires additional computational power to cope with processing the large amounts of data generated by sequences of multi-view stereo images, e.g. generated by a light field camera capturing the 3D information by plenoptic imaging of complex flow processes. Recently, a fast and low-cost workflow for producing patient-specific three-dimensional models of cerebral arteries has been established by stereo-lithographic (SLA) 3D printing. These 3D arterial models are transparent and exhibit a replication precision within the submillimeter range required for accurate flow measurements under physiological conditions. We therefore test the feasibility of microscopic flow measurements by PIV analysis using a plenoptic camera system capturing light field image sequences. Averaging across a sequence of

  2. A Bayesian approach to real-time 3D tumor localization via monoscopic x-ray imaging during treatment delivery

    SciTech Connect

    Li, Ruijiang; Fahimian, Benjamin P.; Xing, Lei

    2011-07-15

    Purpose: Monoscopic x-ray imaging with on-board kV devices is an attractive approach for real-time image guidance in modern radiation therapy such as VMAT or IMRT, but it falls short in providing reliable information along the direction of the imaging x-ray. By effectively taking into consideration projection data at prior times and/or angles through a Bayesian formalism, the authors develop an algorithm for real-time and full 3D tumor localization with a single x-ray imager during treatment delivery. Methods: First, a prior probability density function is constructed using the 2D tumor locations on the projection images acquired during patient setup. Whenever an x-ray image is acquired during the treatment delivery, the corresponding 2D tumor location on the imager is used to update the likelihood function. The unresolved third dimension is obtained by maximizing the posterior probability distribution. The algorithm can also be used in a retrospective fashion when all the projection images during the treatment delivery are used for 3D localization purposes. The algorithm does not involve complex optimization of any model parameter and therefore can be used in a "plug-and-play" fashion. The authors validated the algorithm using (1) simulated 3D linear and elliptic motion and (2) 3D tumor motion trajectories of a lung and a pancreas patient reproduced by a physical phantom. Continuous kV images were acquired over a full gantry rotation with the Varian TrueBeam on-board imaging system. Three scenarios were considered: fluoroscopic setup, cone beam CT setup, and retrospective analysis. Results: For the simulation study, the RMS 3D localization error is 1.2 and 2.4 mm for the linear and elliptic motions, respectively. For the phantom experiments, the 3D localization error is < 1 mm on average and < 1.5 mm at the 95th percentile in the lung and pancreas cases for all three scenarios. The difference in 3D localization error for different scenarios is small and is not
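
    The core idea above (a 3D prior combined with a 2D measurement from a single imager, with the unresolved axis recovered from the posterior) can be illustrated with a toy linear-Gaussian version, in which the posterior maximum coincides with the standard Kalman measurement update. The geometry, prior and noise values below are placeholders, not the paper's data or exact model.

        # Toy linear-Gaussian illustration of resolving the third dimension from a
        # 2D imager measurement plus a 3D prior. Not the paper's algorithm or data.
        import numpy as np

        mu0 = np.array([0.0, 0.0, 0.0])            # prior mean (mm), e.g. from setup images
        Sigma0 = np.diag([4.0, 4.0, 25.0])         # prior covariance; broad along depth
        R = np.diag([0.25, 0.25])                  # imager measurement noise (mm^2)

        theta = np.deg2rad(30.0)                   # current gantry angle (placeholder)
        # Rows of H: the two in-plane imager axes; the unmeasured axis is the imaging ray.
        H = np.array([[np.cos(theta), 0.0, -np.sin(theta)],
                      [0.0,           1.0,  0.0]])

        z = np.array([1.2, -0.4])                  # measured 2D tumor location (mm)

        # Gaussian posterior via the standard linear update (Kalman measurement step);
        # its mean is the MAP estimate, including the unresolved component.
        S = H @ Sigma0 @ H.T + R
        K = Sigma0 @ H.T @ np.linalg.inv(S)
        mu_post = mu0 + K @ (z - H @ mu0)
        Sigma_post = Sigma0 - K @ H @ Sigma0

        print("MAP 3D position (mm):", mu_post.round(2))
        print("posterior std per axis (mm):", np.sqrt(np.diag(Sigma_post)).round(2))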

  3. Real-time visual sensing system achieving high-speed 3D particle tracking with nanometer resolution.

    PubMed

    Cheng, Peng; Jhiang, Sissy M; Menq, Chia-Hsiang

    2013-11-01

    This paper presents a real-time visual sensing system, which is created to achieve high-speed three-dimensional (3D) motion tracking of microscopic spherical particles in aqueous solutions with nanometer resolution. The system comprises a complementary metal-oxide-semiconductor (CMOS) camera, a field programmable gate array (FPGA), and real-time image processing programs. The CMOS camera has high photosensitivity and superior SNR. It acquires images of 128×120 pixels at a frame rate of up to 10,000 frames per second (fps) under the white light illumination from a standard 100 W halogen lamp. The real-time image stream is downloaded from the camera directly to the FPGA, wherein a 3D particle-tracking algorithm is implemented to calculate the 3D positions of the target particle in real time. Two important objectives, i.e., real-time estimation of the 3D position matches the maximum frame rate of the camera and the timing of the output data stream of the system is precisely controlled, are achieved. Two sets of experiments were conducted to demonstrate the performance of the system. First, the visual sensing system was used to track the motion of a 2 μm polystyrene bead, whose motion was controlled by a three-axis piezo motion stage. The ability to track long-range motion with nanometer resolution in all three axes is demonstrated. Second, it was used to measure the Brownian motion of the 2 μm polystyrene bead, which was stabilized in aqueous solution by a laser trapping system. PMID:24216655
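
    One common way to obtain a 3D bead position from a single camera image, sketched below, is a sub-pixel intensity-weighted centroid for x/y and a depth estimate from matching the defocused spot against a pre-recorded calibration stack. This is a generic illustration under assumed conditions (synthetic images, a monotonic width-versus-depth calibration), not the FPGA algorithm of the record.

        # Sketch: sub-pixel x/y centroid + z from best match against a calibration stack.
        # Synthetic data and the width-vs-depth relation are placeholders.
        import numpy as np

        def make_bead(shape, cx, cy, sigma):
            yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
            return np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2.0 * sigma ** 2))

        def spot_width(z):
            # Assumed monotonic defocus calibration (placeholder relation).
            return 2.0 + 1.0 * (z + 1.0)

        def localize(img, calib_stack, calib_z):
            w = img - img.min()
            yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
            cx = float((w * xx).sum() / w.sum())        # sub-pixel x centroid
            cy = float((w * yy).sum() / w.sum())        # sub-pixel y centroid
            # z: smallest sum-of-squared-differences against the calibration stack
            # (a real system would re-centre the patch before matching).
            ssd = ((calib_stack - img) ** 2).sum(axis=(1, 2))
            return cx, cy, float(calib_z[int(np.argmin(ssd))])

        calib_z = np.linspace(-1.0, 1.0, 21)            # calibration depths (microns)
        calib_stack = np.stack([make_bead((64, 64), 32.0, 32.0, spot_width(z))
                                for z in calib_z])

        img = make_bead((64, 64), 33.4, 30.7, spot_width(0.3))   # synthetic "measured" frame
        print("estimated (x, y, z):", localize(img, calib_stack, calib_z))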

  4. Real-time 3-D shape measurement with composite phase-shifting fringes and multi-view system.

    PubMed

    Tao, Tianyang; Chen, Qian; Da, Jian; Feng, Shijie; Hu, Yan; Zuo, Chao

    2016-09-01

    In recent years, fringe projection has become an established and essential method for dynamic three-dimensional (3-D) shape measurement in different fields such as online inspection and real-time quality control. Numerous high-speed 3-D shape measurement methods have been developed by either employing high-speed hardware, minimizing the number of pattern projections, or both. However, dynamic 3-D shape measurement of arbitrarily-shaped objects with full sensor resolution without the necessity of additional pattern projections is still a big challenge. In this work, we introduce a high-speed 3-D shape measurement technique based on composite phase-shifting fringes and a multi-view system. The geometry constraint is adopted to search the corresponding points independently without additional images. Meanwhile, by analysing the 3-D position and the main wrapped phase of the corresponding point, pairs with an incorrect 3-D position or a considerable phase difference are effectively rejected. All of the qualified corresponding points are then corrected, and the unique one as well as the related period order is selected through the embedded triangular wave. Finally, considering that some points can only be captured by one of the cameras due to the occlusions, these points may have different fringe orders in the two views, so a left-right consistency check is employed to eliminate those erroneous period orders in this case. Several experiments on both static and dynamic scenes are performed, verifying that our method can achieve a speed of 120 frames per second (fps) with 25-period fringe patterns for fast, dense, and accurate 3-D measurement.
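
    The wrapped-phase computation that underlies any phase-shifting fringe method is the classical N-step formula, sketched below. This shows only that standard step; the composite coding, stereo correspondence search and period-order selection described in the record are not reproduced here, and the synthetic fringe images are placeholders.

        # Standard N-step phase-shifting: recover the wrapped phase from N fringe images
        # with equally spaced phase shifts 2*pi*n/N. Classical formula only.
        import numpy as np

        def wrapped_phase(images):
            """images: array of shape (N, H, W), shift of frame n is 2*pi*n/N."""
            N = images.shape[0]
            n = np.arange(N).reshape(-1, 1, 1)
            num = (images * np.sin(2 * np.pi * n / N)).sum(axis=0)
            den = (images * np.cos(2 * np.pi * n / N)).sum(axis=0)
            return -np.arctan2(num, den)          # wrapped phase in (-pi, pi]

        # Synthetic test: a linear phase ramp observed through 4-step fringes.
        H, W, N = 120, 160, 4
        true_phase = np.linspace(0, 3 * np.pi, W)[None, :] * np.ones((H, 1))
        shifts = 2 * np.pi * np.arange(N) / N
        imgs = np.stack([0.5 + 0.4 * np.cos(true_phase + s) for s in shifts])

        phi = wrapped_phase(imgs)
        err = np.angle(np.exp(1j * (phi - true_phase)))   # compare modulo 2*pi
        print("max wrapped-phase error:", float(np.abs(err).max()))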

  5. Real-time 3-D shape measurement with composite phase-shifting fringes and multi-view system.

    PubMed

    Tao, Tianyang; Chen, Qian; Da, Jian; Feng, Shijie; Hu, Yan; Zuo, Chao

    2016-09-01

    In recent years, fringe projection has become an established and essential method for dynamic three-dimensional (3-D) shape measurement in different fields such as online inspection and real-time quality control. Numerous high-speed 3-D shape measurement methods have been developed by either employing high-speed hardware, minimizing the number of pattern projections, or both. However, dynamic 3-D shape measurement of arbitrarily-shaped objects with full sensor resolution without the necessity of additional pattern projections is still a big challenge. In this work, we introduce a high-speed 3-D shape measurement technique based on composite phase-shifting fringes and a multi-view system. The geometry constraint is adopted to search the corresponding points independently without additional images. Meanwhile, by analysing the 3-D position and the main wrapped phase of the corresponding point, pairs with an incorrect 3-D position or a considerable phase difference are effectively rejected. All of the qualified corresponding points are then corrected, and the unique one as well as the related period order is selected through the embedded triangular wave. Finally, considering that some points can only be captured by one of the cameras due to the occlusions, these points may have different fringe orders in the two views, so a left-right consistency check is employed to eliminate those erroneous period orders in this case. Several experiments on both static and dynamic scenes are performed, verifying that our method can achieve a speed of 120 frames per second (fps) with 25-period fringe patterns for fast, dense, and accurate 3-D measurement. PMID:27607632

  6. The 3D Space and Spin Velocities of a Gamma-ray Pulsar

    NASA Astrophysics Data System (ADS)

    Romani, Roger W.

    2016-04-01

    PSR J2030+4415 is a LAT-discovered 0.5 Myr-old gamma-ray pulsar with an X-ray synchrotron trail and a rare Hα bowshock. We have obtained GMOS IFU spectroscopic imaging of this shell, and show a sweep through the remarkable Hα structure, comparing it with the high-energy emission. These data provide a unique 3D map of the momentum distribution of the relativistic pulsar wind. This shows that the pulsar is moving nearly in the plane of the sky and that the pulsar wind has a polar component misaligned with the space velocity. The spin axis is shown to be inclined some 95 degrees to the Earth line of sight, explaining why this is a radio-quiet, gamma-only pulsar. Intriguingly, the shell also shows multiple bubbles that suggest that the pulsar wind power has varied substantially over the past 500 years.

  7. 360 degree realistic 3D image display and image processing from real objects

    NASA Astrophysics Data System (ADS)

    Luo, Xin; Chen, Yue; Huang, Yong; Tan, Xiaodi; Horimai, Hideyoshi

    2016-09-01

    A 360-degree realistic 3D image display system based on a direct light scanning method, the so-called Holo-Table, is introduced in this paper. High-density directional continuous 3D motion images can be displayed easily with only one spatial light modulator. Using the holographic screen as the beam deflector, a 360-degree full horizontal viewing angle was achieved. As an accompanying part of the system, a CMOS camera based image acquisition platform was built to feed the display engine, which can perform full 360-degree continuous imaging of a sample at the center. Customized image processing techniques such as scaling, rotation, and format transformation were also developed and embedded into the system control software platform. Finally, several samples were imaged to demonstrate the capability of our system.

  8. Argonaute 3D: a real-time cooperative medical planning software on DSL network.

    PubMed

    Le Mer, Pascal; Soler, Luc; Pavy, Dominique; Bernard, Alain; Moreau, Johan; Mutter, Didier; Marescaux, Jacques

    2004-01-01

    Today, the diagnosis of cancer and the choice of therapy involve many specialized practitioners. They are generally located at different places and have to take the best decision as promptly as possible, despite the difficulty of CT-scan or MRI interpretation. Argonaute 3D is a tool that easily overcomes these issues, thanks to a cooperative solution based on virtual reality. An experiment in which four practitioners located across France met virtually made it possible to assess the value of this solution.

  9. Needle Trajectory and Tip Localization in Real-Time 3-D Ultrasound Using a Moving Stylus.

    PubMed

    Beigi, Parmida; Rohling, Robert; Salcudean, Tim; Lessoway, Victoria A; Ng, Gary C

    2015-07-01

    Described here is a novel approach to needle localization in 3-D ultrasound based on automatic detection of small changes in appearance on movement of the needle stylus. By stylus oscillation, including its full insertion into the cannula to the tip, the image processing techniques can localize the needle trajectory and the tip in the 3-D ultrasound volume. The 3-D needle localization task is reduced to two 2-D localizations using orthogonal projections. To evaluate our method, we tested it on three different ex vivo tissue types, and the preliminary results indicated that the method accuracy lies within clinical acceptance, with average error ranges of 0.9°-1.4° in needle trajectory and 0.8-1.1 mm in needle tip. Results also indicate that method performance is independent of the echogenicity of the tissue. This technique is a safe way of producing ultrasonic intensity changes and appears to introduce negligible risk to the patient, as the outer cannula remains fixed.
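
    The general idea of exploiting stylus motion can be illustrated with a simple image-processing sketch: difference two 2D projections of the volume, keep the pixels that changed, and fit a line through them for the trajectory, with the tip taken as the far end of the detected segment. This is a generic sketch on synthetic data, not the authors' method or thresholds.

        # Sketch: needle trajectory and tip from frame differencing plus a line fit.
        # Synthetic projections; thresholds and geometry are placeholders.
        import numpy as np

        def needle_from_difference(frame_a, frame_b, thresh=0.2):
            diff = np.abs(frame_b - frame_a)
            ys, xs = np.nonzero(diff > thresh)                 # pixels that changed
            pts = np.column_stack([xs, ys]).astype(float)
            centroid = pts.mean(axis=0)
            # Principal direction of the changed pixels = estimated trajectory.
            _, _, Vt = np.linalg.svd(pts - centroid, full_matrices=False)
            direction = Vt[0]
            # Tip: the changed pixel farthest along the trajectory (the insertion side
            # would disambiguate the sign in practice).
            t = (pts - centroid) @ direction
            tip = pts[np.argmax(t)]
            return centroid, direction, tip

        # Synthetic pair of projections: a line segment that grows by a few pixels.
        a = np.zeros((128, 128))
        b = np.zeros((128, 128))
        for i in range(60):
            a[20 + i // 2, 10 + i] = 1.0
        for i in range(66):
            b[20 + i // 2, 10 + i] = 1.0

        centroid, direction, tip = needle_from_difference(a, b)
        print("trajectory direction:", direction.round(3), " tip (x, y):", tip)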

  10. Effects of Presence, Copresence, and Flow on Learning Outcomes in 3D Learning Spaces

    ERIC Educational Resources Information Center

    Hassell, Martin D.; Goyal, Sandeep; Limayem, Moez; Boughzala, Imed

    2012-01-01

    The level of satisfaction and effectiveness of 3D virtual learning environments were examined. Additionally, 3D virtual learning environments were compared with face-to-face learning environments. Students that experienced higher levels of flow and presence also experienced more satisfaction but not necessarily more effectiveness with 3D virtual…

  11. Robust 3D Quantification of Glacial Landforms: A Use of Idealised Drumlins in a Real DEM

    NASA Astrophysics Data System (ADS)

    Hillier, J. K.; Smith, M. S.

    2012-04-01

    Drumlins' attributes, such as height (h) and volume (V), may preserve important information about the dynamics of former ice sheets. However, measurement errors are large (e.g., 39.2% of V within ±25% of their real values for the 'cookie cutter') and, in general, poorly understood. To accurately quantify the morphology of glacial landforms, the relief belonging to that landform must be reliably isolated from other components of the landscape (e.g. buildings, hills). A number of techniques have been proposed for this regional-residual separation (RRS). Which is best? Justifications for those applied remain qualitative assertions. A recently developed, novel method using idealised drumlins of known size (h_in, V_in) in a real digital elevation model (DEM) is used to quantitatively determine the best RRS technique, allowing general guidelines for quantifying glacial landforms to be proposed. 184 drumlins with digitised outlines in western Central Scotland are used as a case study. The NEXTMap surface model (DSM) is the primary dataset employed. A variety of techniques are then investigated for their ability to recover sizes (h_r, V_r). A metric, ɛ, is used that maximises the number of H_r/H_in values near 1.0 whilst giving equal weight to different drumlin sizes: a metric dominated by the large number of small drumlins is not desirable. For simplicity, the semi-automated 'cookie cutter' technique is used as a baseline for comparison. This removes heights within a drumlin from a DEM, cuts a hole, then estimates its basal surface by interpolating across the space with a fully tensioned bi-cubic spline (-T1). Metrics for h and V are ɛ_h = 0.885 and ɛ_V = 0.247. Other tensions do not improve this significantly, with ɛ_V of 0.245 at best, but using Delaunay triangulation reduces ɛ_V to 0.206. Windowed 'sliding median' filters, which do not require heights within drumlins to be removed, attain a minimum ɛ_V of 0.470 at a best width of 340 m (-Fm340). Finally, even crudely

  12. Toward real-time endoscopically-guided robotic navigation based on a 3D virtual surgical field model

    NASA Astrophysics Data System (ADS)

    Gong, Yuanzheng; Hu, Danying; Hannaford, Blake; Seibel, Eric J.

    2015-03-01

    The challenge is to accurately guide the surgical tool within the three-dimensional (3D) surgical field for robotically-assisted operations such as tumor margin removal from a debulked brain tumor cavity. The proposed technique is 3D image-guided surgical navigation based on matching intraoperative video frames to a 3D virtual model of the surgical field. A small laser-scanning endoscopic camera was attached to a mock minimally-invasive surgical tool that was manipulated toward a region of interest (residual tumor) within a phantom of a debulked brain tumor. Video frames from the endoscope provided features that were matched to the 3D virtual model, which was reconstructed earlier by raster scanning over the surgical field. Camera pose (position and orientation) is recovered by implementing a constrained bundle adjustment algorithm. Navigational error during the approach to the fluorescence target (residual tumor) is determined by comparing the calculated camera pose to the measured camera pose using a micro-positioning stage. From these preliminary results, computation efficiency of the algorithm in MATLAB code is near real-time (2.5 sec for each estimation of pose), which can be improved by implementation in C++. Error analysis produced a 3-mm distance error and a 2.5-degree orientation error on average. The sources of these errors come from 1) inaccuracy of the 3D virtual model, generated on a calibrated RAVEN robotic platform with stereo tracking; 2) inaccuracy of endoscope intrinsic parameters, such as focal length; and 3) any endoscopic image distortion from scanning irregularities. This work demonstrates the feasibility of micro-camera 3D guidance of a robotic surgical tool.

  13. In Vivo 3D Meibography of the Human Eyelid Using Real Time Imaging Fourier-Domain OCT

    PubMed Central

    Hwang, Ho Sik; Shin, Jun Geun; Lee, Byeong Ha; Eom, Tae Joong; Joo, Choun-Ki

    2013-01-01

    Recently, we reported obtaining tomograms of meibomian glands from healthy volunteers using commercial anterior segment optical coherence tomography (AS-OCT), which is widely employed in clinics for examination of the anterior segment. However, we could not create 3D images of the meibomian glands, because the commercial OCT does not have a 3D reconstruction function. In this study we report the creation of 3D images of the meibomian glands by reconstructing the tomograms of these glands using high speed Fourier-Domain OCT (FD-OCT) developed in our laboratory. This research was jointly undertaken at the Department of Ophthalmology, Seoul St. Mary's Hospital (Seoul, Korea) and the Advanced Photonics Research Institute of Gwangju Institute of Science and Technology (Gwangju, Korea) with two healthy volunteers and seven patients with meibomian gland dysfunction. A real time imaging FD-OCT system based on a high-speed wavelength swept laser was developed that had a spectral bandwidth of 100 nm at the 1310 nm center wavelength. The axial resolution was 5 µm and the lateral resolution was 13 µm in air. Using this device, the meibomian glands of nine subjects were examined. A series of tomograms from the upper eyelid measuring 5 mm (from left to right, B-scan) × 2 mm (from upper part to lower part, C-scan) were collected. Three-D images of the meibomian glands were then reconstructed using 3D “data visualization, analysis, and modeling software”. Established infrared meibography was also performed for comparison. The 3D images of healthy subjects clearly showed the meibomian glands, which looked similar to bunches of grapes. These results were consistent with previous infrared meibography results. The meibomian glands were parallel to each other, and the saccular acini were clearly visible. Here we report the successful production of 3D images of human meibomian glands by reconstructing tomograms of these glands with high speed FD-OCT. PMID:23805297

  14. In Vivo 3D Meibography of the Human Eyelid Using Real Time Imaging Fourier-Domain OCT.

    PubMed

    Hwang, Ho Sik; Shin, Jun Geun; Lee, Byeong Ha; Eom, Tae Joong; Joo, Choun-Ki

    2013-01-01

    Recently, we reported obtaining tomograms of meibomian glands from healthy volunteers using commercial anterior segment optical coherence tomography (AS-OCT), which is widely employed in clinics for examination of the anterior segment. However, we could not create 3D images of the meibomian glands, because the commercial OCT does not have a 3D reconstruction function. In this study we report the creation of 3D images of the meibomian glands by reconstructing the tomograms of these glands using high speed Fourier-Domain OCT (FD-OCT) developed in our laboratory. This research was jointly undertaken at the Department of Ophthalmology, Seoul St. Mary's Hospital (Seoul, Korea) and the Advanced Photonics Research Institute of Gwangju Institute of Science and Technology (Gwangju, Korea) with two healthy volunteers and seven patients with meibomian gland dysfunction. A real time imaging FD-OCT system based on a high-speed wavelength swept laser was developed that had a spectral bandwidth of 100 nm at the 1310 nm center wavelength. The axial resolution was 5 µm and the lateral resolution was 13 µm in air. Using this device, the meibomian glands of nine subjects were examined. A series of tomograms from the upper eyelid measuring 5 mm (from left to right, B-scan) × 2 mm (from upper part to lower part, C-scan) were collected. Three-D images of the meibomian glands were then reconstructed using 3D "data visualization, analysis, and modeling software". Established infrared meibography was also performed for comparison. The 3D images of healthy subjects clearly showed the meibomian glands, which looked similar to bunches of grapes. These results were consistent with previous infrared meibography results. The meibomian glands were parallel to each other, and the saccular acini were clearly visible. Here we report the successful production of 3D images of human meibomian glands by reconstructing tomograms of these glands with high speed FD-OCT.

  15. A time-space domain stereo finite difference method for 3D scalar wave propagation

    NASA Astrophysics Data System (ADS)

    Chen, Yushu; Yang, Guangwen; Ma, Xiao; He, Conghui; Song, Guojie

    2016-11-01

    The time-space domain finite difference methods reduce numerical dispersion effectively by minimizing the error in the joint time-space domain. However, their interpolating coefficients are related to the Courant numbers, leading to significant extra time costs for loading the coefficients consecutively according to velocity in heterogeneous models. In the present study, we develop a time-space domain stereo finite difference (TSSFD) method for the 3D scalar wave equation. The method propagates both the displacements and their gradients simultaneously to keep more information about the wavefields, and minimizes the maximum phase velocity error directly using constant interpolation coefficients for different Courant numbers. We obtain the optimal constant coefficients by combining the truncated Taylor series approximation and the time-space domain optimization, and adjust the coefficients to improve the stability condition. Subsequent investigation shows that the TSSFD can suppress numerical dispersion effectively with high computational efficiency. The maximum phase velocity error of the TSSFD is just 3.09% even with only 2 sampling points per minimum wavelength when the Courant number is 0.4. Numerical experiments show that to generate wavefields with no visible numerical dispersion, the computational efficiency of the TSSFD is 576.9%, 193.5%, 699.0%, and 191.6% of those of the 4th-order and 8th-order Lax-Wendroff correction (LWC) method, the 4th-order staggered grid method (SG), and the 8th-order optimal finite difference method (OFD), respectively. Meanwhile, the TSSFD is compatible with the unsplit convolutional perfectly matched layer (CPML) boundary condition for absorbing artificial boundaries. The efficiency and capability to handle complex velocity models make it an attractive tool in imaging methods such as acoustic reverse time migration (RTM).
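
    For context, the sketch below shows a conventional explicit finite-difference scheme for the scalar wave equation (2nd-order in time, 4th-order in space, 2D for brevity) together with the Courant-number stability check the record refers to. This is the kind of baseline scheme the TSSFD is compared against, not the TSSFD itself; grid size, velocity and source parameters are placeholders.

        # Baseline explicit FD solver for the 2D scalar wave equation (illustrative only).
        import numpy as np

        nx, nz, dx, dt, nt = 200, 200, 10.0, 1.0e-3, 500
        c = np.full((nz, nx), 2000.0)                 # homogeneous velocity (m/s)
        courant = float(c.max()) * dt / dx
        assert courant < 0.5, f"explicit scheme unstable, Courant number = {courant:.2f}"

        u_prev = np.zeros((nz, nx))
        u = np.zeros((nz, nx))

        def laplacian4(u, dx):
            """4th-order accurate Laplacian on interior points (zeros on the border)."""
            lap = np.zeros_like(u)
            lap[2:-2, 2:-2] = (
                (-u[2:-2, :-4] + 16 * u[2:-2, 1:-3] - 30 * u[2:-2, 2:-2]
                 + 16 * u[2:-2, 3:-1] - u[2:-2, 4:]) +
                (-u[:-4, 2:-2] + 16 * u[1:-3, 2:-2] - 30 * u[2:-2, 2:-2]
                 + 16 * u[3:-1, 2:-2] - u[4:, 2:-2])
            ) / (12.0 * dx * dx)
            return lap

        for it in range(nt):
            # Ricker-like source injected at the grid centre (25 Hz, delayed 40 ms).
            t = it * dt
            arg = (np.pi * 25.0 * (t - 0.04)) ** 2
            src = (1.0 - 2.0 * arg) * np.exp(-arg)
            u_next = 2 * u - u_prev + (c * dt) ** 2 * laplacian4(u, dx)
            u_next[nz // 2, nx // 2] += src * dt ** 2
            u_prev, u = u, u_next

        print("wavefield RMS after", nt, "steps:", float(np.sqrt((u ** 2).mean())))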

  16. A unified 3D default space consciousness model combining neurological and physiological processes that underlie conscious experience

    PubMed Central

    Jerath, Ravinder; Crawford, Molly W.; Barnes, Vernon A.

    2015-01-01

    The Global Workspace Theory and Information Integration Theory are two of the most currently accepted consciousness models; however, these models do not address many aspects of conscious experience. We compare these models to our previously proposed consciousness model in which the thalamus fills-in processed sensory information from corticothalamic feedback loops within a proposed 3D default space, resulting in the recreation of the internal and external worlds within the mind. This 3D default space is composed of all cells of the body, which communicate via gap junctions and electrical potentials to create this unified space. We use 3D illustrations to explain how both visual and non-visual sensory information may be filled-in within this dynamic space, creating a unified seamless conscious experience. This neural sensory memory space is likely generated by baseline neural oscillatory activity from the default mode network, other salient networks, brainstem, and reticular activating system. PMID:26379573

  17. Revitalizing the Space Shuttle's Thermal Protection System with Reverse Engineering and 3D Vision Technology

    NASA Technical Reports Server (NTRS)

    Wilson, Brad; Galatzer, Yishai

    2008-01-01

    The Space Shuttle is protected by a Thermal Protection System (TPS) made of tens of thousands of individually shaped heat protection tiles. With every flight, tiles are damaged on take-off and return to Earth. After each mission, the heat tiles must be fixed or replaced depending on the level of damage. As part of the return-to-flight mission, the TPS requirements are more stringent, leading to a significant increase in heat tile replacements. The replacement operation requires scanning tile cavities, and in some cases the actual tiles. The 3D scan data is used to reverse engineer each tile into a precise CAD model, which in turn is exported to a CAM system for the manufacture of the heat protection tile. Scanning is performed while other activities are going on in the shuttle processing facility. Many technicians work simultaneously on the space shuttle structure, which results in structural movements and vibrations. This paper will cover a portable, ultra-fast data acquisition approach used to scan surfaces in this unstable environment.

  18. Understanding WCAG2.0 Colour Contrast Requirements Through 3D Colour Space Visualisation.

    PubMed

    Sandnes, Frode Eika

    2016-01-01

    Sufficient contrast between text and background is needed to achieve good readability. WCAG 2.0 provides a specific definition of sufficient contrast on the web. However, the definition is hard to understand and most designers thus use contrast calculators to validate their colour choices. Often, such checks are performed after design and this may be too late. This paper proposes a colour selection approach based on three-dimensional visualisation of the colour space. The complex non-linear relationships between the colour components become comprehensible when viewed in 3D. The method visualises the available colours in an intuitive manner and allows designers to check a colour against the set of other valid colours. Unlike the contrast calculators, the proposed method is proactive and fun to use. A colour space builder was developed and the resulting models were viewed with a point cloud viewer. The technique can be used as both a design tool and a pedagogical aid to teach colour theory and design. PMID:27534328
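
    The WCAG 2.0 contrast definition the record refers to is the ratio (L_lighter + 0.05) / (L_darker + 0.05) between the relative luminances of the two colours, tested against 4.5:1 for normal text. The sketch below computes it directly; the example colours are arbitrary and the code is an illustration of the published formula, not of the paper's visualisation tool.

        # WCAG 2.0 contrast check: relative luminance and contrast ratio.
        def relative_luminance(rgb):
            def channel(c):
                c = c / 255.0
                return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
            r, g, b = (channel(v) for v in rgb)
            return 0.2126 * r + 0.7152 * g + 0.0722 * b

        def contrast_ratio(fg, bg):
            l1, l2 = relative_luminance(fg), relative_luminance(bg)
            lighter, darker = max(l1, l2), min(l1, l2)
            return (lighter + 0.05) / (darker + 0.05)

        text, background = (102, 102, 102), (255, 255, 255)   # grey text on white
        ratio = contrast_ratio(text, background)
        print(f"contrast ratio {ratio:.2f}:1 ->",
              "passes WCAG AA (normal text)" if ratio >= 4.5 else "fails WCAG AA")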

  19. Improved time-space method for 3-D heat transfer problems including global warming

    SciTech Connect

    Saitoh, T.S.; Wakashima, Shinichiro

    1999-07-01

    In this paper, the Time-Space Method (TSM), which has been proposed for solving general heat transfer and fluid flow problems, was improved in order to cover global and urban warming. The TSM is effective in almost all transient heat transfer and fluid flow problems, and has already been applied to 2-D melting problems (or moving boundary problems). The computer running time will be reduced to only 1/100th-1/1000th of that of existing schemes for 2-D and 3-D problems. However, in order to apply it to much larger-scale problems, for example, global warming, urban warming and general ocean circulation, the SOR method (or other iterative methods) in four dimensions is somewhat tedious and provokingly slow. Motivated by the above situation, the authors improved the iteration speed of the previous TSM by introducing the following ideas: (1) Timewise chopping: the time domain is chopped into small pieces to save memory; (2) Adaptive iteration: converged regions are eliminated from further iteration; (3) Internal selective iteration: equations with slow iteration speed in the iterative procedure are selectively iterated to accelerate overall convergence; and (4) False transient integration: a false transient term is added to the Poisson-type equation and the resulting equation is treated as a parabolic equation. By adopting the above improvements, higher-order finite difference schemes and a hybrid mesh, the computer running time of the TSM is reduced to some 1/4600th of that of the conventional explicit method for a typical 3-D natural convection problem in a closed cavity. The proposed TSM will be more efficacious for large-scale environmental problems, such as global warming, urban warming and general ocean circulation, in which a tremendous computing time would be required.
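
    Item (4) above, false transient integration, can be illustrated on its own: a steady Poisson-type equation is augmented with a pseudo-time derivative and marched explicitly until the solution stops changing. The small sketch below applies that idea to a 2D Poisson problem; the grid, source term and tolerance are placeholders, and the scheme shown is the plain explicit march, not the accelerated TSM.

        # Illustration of "false transient" integration: march d T/d tau = Laplacian(T) + S
        # in pseudo-time until it reaches the steady Poisson solution.
        import numpy as np

        n, h, S = 65, 1.0 / 64, 1.0
        T = np.zeros((n, n))                    # Dirichlet boundaries T = 0
        dtau = 0.24 * h * h                     # explicit pseudo-time step (stable for <= h^2/4)

        for it in range(200000):
            lap = (T[:-2, 1:-1] + T[2:, 1:-1] + T[1:-1, :-2] + T[1:-1, 2:]
                   - 4.0 * T[1:-1, 1:-1]) / (h * h)
            dT = dtau * (lap + S)               # false transient update toward the steady state
            T[1:-1, 1:-1] += dT
            if np.abs(dT).max() < 1e-8:
                break

        print(f"converged after {it} pseudo-time steps, centre value T = {T[n//2, n//2]:.5f}")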

  20. Real-time 3D imaging of microstructure growth in battery cells using indirect MRI.

    PubMed

    Ilott, Andrew J; Mohammadi, Mohaddese; Chang, Hee Jung; Grey, Clare P; Jerschow, Alexej

    2016-09-27

    Lithium metal is a promising anode material for Li-ion batteries due to its high theoretical specific capacity and low potential. The growth of dendrites is a major barrier to the development of high capacity, rechargeable Li batteries with lithium metal anodes, and hence, significant efforts have been undertaken to develop new electrolytes and separator materials that can prevent this process or promote smooth deposits at the anode. Central to these goals, and to the task of understanding the conditions that initiate and propagate dendrite growth, is the development of analytical and nondestructive techniques that can be applied in situ to functioning batteries. MRI has recently been demonstrated to provide noninvasive imaging methodology that can detect and localize microstructure buildup. However, until now, monitoring dendrite growth by MRI has been limited to observing the relatively insensitive metal nucleus directly, thus restricting the temporal and spatial resolution and requiring special hardware and acquisition modes. Here, we present an alternative approach to detect a broad class of metallic dendrite growth via the dendrites' indirect effects on the surrounding electrolyte, allowing for the application of fast 3D (1)H MRI experiments with high resolution. We use these experiments to reconstruct 3D images of growing Li dendrites from MRI, revealing details about the growth rate and fractal behavior. Radiofrequency and static magnetic field calculations are used alongside the images to quantify the amount of the growing structures.

  1. A spheroid toxicity assay using magnetic 3D bioprinting and real-time mobile device-based imaging.

    PubMed

    Tseng, Hubert; Gage, Jacob A; Shen, Tsaiwei; Haisler, William L; Neeley, Shane K; Shiao, Sue; Chen, Jianbo; Desai, Pujan K; Liao, Angela; Hebel, Chris; Raphael, Robert M; Becker, Jeanne L; Souza, Glauco R

    2015-01-01

    An ongoing challenge in biomedical research is the search for simple, yet robust assays using 3D cell cultures for toxicity screening. This study addresses that challenge with a novel spheroid assay, wherein spheroids, formed by magnetic 3D bioprinting, contract immediately as cells rearrange and compact the spheroid in relation to viability and cytoskeletal organization. Thus, spheroid size can be used as a simple metric for toxicity. The goal of this study was to validate spheroid contraction as a cytotoxic endpoint using 3T3 fibroblasts in response to 5 toxic compounds (all-trans retinoic acid, dexamethasone, doxorubicin, 5'-fluorouracil, forskolin), sodium dodecyl sulfate (+control), and penicillin-G (-control). Real-time imaging was performed with a mobile device to increase throughput and efficiency. All compounds but penicillin-G significantly slowed contraction in a dose-dependent manner (Z' = 0.88). Cells in 3D were more resistant to toxicity than cells in 2D, whose toxicity was measured by the MTT assay. Fluorescent staining and gene expression profiling of spheroids confirmed these findings. The results of this study validate spheroid contraction within this assay as an easy, biologically relevant endpoint for high-throughput compound screening in representative 3D environments. PMID:26365200
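
    The Z' factor quoted above (Z' = 0.88) is the standard screening-window statistic computed from the positive and negative controls. The sketch below shows that calculation; the control readings are hypothetical values, not data from the study.

```python
# Z'-factor sketch for a plate-based assay; the readings below are hypothetical
# spheroid diameters (mm) for the positive (SDS) and negative (penicillin-G) controls.
import numpy as np

def z_prime(positive, negative):
    """Z' = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|; > 0.5 indicates an excellent assay."""
    pos, neg = np.asarray(positive, float), np.asarray(negative, float)
    return 1.0 - 3.0 * (pos.std(ddof=1) + neg.std(ddof=1)) / abs(pos.mean() - neg.mean())

print(z_prime([1.02, 0.98, 1.01, 0.99], [0.61, 0.63, 0.60, 0.62]))  # ~0.76 for these made-up values
```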

  3. Low-cost real-time 3D PC distributed-interactive-simulation (DIS) application for C4I

    NASA Astrophysics Data System (ADS)

    Gonthier, David L.; Veron, Harry

    1998-04-01

    A 3D Distributed Interactive Simulation (DIS) application was developed and demonstrated in a PC environment. The application can run in stealth mode or as a player in exercises that include battlefield simulations such as ModSAF. PCs can be clustered together, but not necessarily collocated, to run a simulation or training exercise on their own. A 3D perspective view of the battlefield is displayed that includes terrain, trees, buildings and other objects supported by the DIS application. Screen update rates of 15 to 20 frames per second have been achieved with fully lit and textured scenes, thus providing high-quality, fast graphics. A complete PC system can be configured for under $2,500. The software runs under Windows95 and WindowsNT. It is written in C++ and uses a commercial API called RenderWare for 3D rendering. The software uses Microsoft Foundation Classes and Microsoft DirectPlay for joystick input. The RenderWare libraries enhance performance through optimization for MMX and the Pentium Pro processor. Rendering is handled by RenderWare together with the Righteous 3D graphics board from Orchid Technologies, which has an advertised rendering rate of up to 2 million texture-mapped triangles per second. A low-cost PC DIS simulator that can partake in a real-time collaborative simulation with other platforms is thus achieved.

  4. Development and comparison of projection and image space 3D nodule insertion techniques

    NASA Astrophysics Data System (ADS)

    Robins, Marthony; Solomon, Justin; Sahbaee, Pooyan; Samei, Ehsan

    2016-04-01

    This study aimed to develop and compare two methods of inserting computerized virtual lesions into CT datasets. Twenty-four physical (synthetic) nodules of three sizes and four morphologies were inserted into an anthropomorphic chest phantom (LUNGMAN, KYOTO KAGAKU). The phantom was scanned (Somatom Definition Flash, Siemens Healthcare) with and without nodules present, and images were reconstructed with filtered back projection and iterative reconstruction (SAFIRE) at 0.6 mm slice thickness using a standard thoracic CT protocol at multiple dose settings. Virtual 3D CAD models based on the physical nodules were virtually inserted (accounting for the system MTF) into the nodule-free CT data using two techniques: projection-based and image-based insertion. Nodule volumes were estimated using a commercial segmentation tool (iNtuition, TeraRecon, Inc.). Differences were tested using paired t-tests, and R2 goodness of fit was computed between the virtually and physically inserted nodules. Both insertion techniques resulted in nodule volumes very similar to the real nodules (<3% difference), and in most cases the differences were not statistically significant. Also, R2 values were above 0.97 for both insertion techniques. These data imply that these techniques can confidently be used as a means of inserting virtual nodules into CT datasets. These techniques can be instrumental in building hybrid CT datasets composed of patient images with virtually inserted nodules.
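
    The statistical comparison described above (paired t-tests plus an R² goodness of fit between physically and virtually inserted nodule volumes) can be sketched as follows; the volume arrays are hypothetical stand-ins, not the study's measurements.

```python
# Paired t-test and R^2 comparison of physically vs. virtually inserted nodule volumes;
# the arrays are hypothetical volumes in mm^3, not the study's measurements.
import numpy as np
from scipy import stats

physical = np.array([524.0, 1767.0, 4189.0, 510.0, 1750.0, 4200.0])
virtual = np.array([530.0, 1790.0, 4150.0, 505.0, 1762.0, 4251.0])

t_stat, p_value = stats.ttest_rel(physical, virtual)            # paired test on the volume differences
slope, intercept, r, _, _ = stats.linregress(physical, virtual)  # agreement between the two measurements
print(f"paired t-test p = {p_value:.3f}, R^2 = {r**2:.3f}")
```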

  5. Passive Markers for Tracking Surgical Instruments in Real-Time 3-D Ultrasound Imaging

    PubMed Central

    Stoll, Jeffrey; Ren, Hongliang; Dupont, Pierre E.

    2013-01-01

    A family of passive echogenic markers is presented by which the position and orientation of a surgical instrument can be determined in a 3-D ultrasound volume, using simple image processing. Markers are attached near the distal end of the instrument so that they appear in the ultrasound volume along with the instrument tip. They are detected and measured within the ultrasound image, thus requiring no external tracking device. This approach facilitates imaging instruments and tissue simultaneously in ultrasound-guided interventions. Marker-based estimates of instrument pose can be used in augmented reality displays or for image-based servoing. Design principles for marker shapes are presented that ensure imaging system and measurement uniqueness constraints are met. An error analysis is included that can be used to guide marker design and which also establishes a lower bound on measurement uncertainty. Finally, examples of marker measurement and tracking algorithms are presented along with experimental validation of the concepts. PMID:22042148

  6. Real-scale 3D models of the scoliotic spine from biplanar radiography without calibration objects.

    PubMed

    Moura, Daniel C; Barbosa, Jorge G

    2014-10-01

    This paper presents a new method for modelling the spines of subjects and making accurate 3D measurements using standard radiologic systems without requiring calibration objects. The method makes use of the focal distance and statistical models for estimating the geometrical parameters of the system. A dataset of 32 subjects was used to assess this method. The results show small errors for the main clinical indices, such as an RMS error of 0.49° for the Cobb angle, 0.50° for kyphosis, 0.38° for lordosis, and 2.62 mm for the spinal length. This method is the first to achieve this level of accuracy without requiring the use of calibration objects when acquiring radiographs. We conclude that the proposed method allows for the evaluation of scoliosis with a much simpler setup than currently available methods. PMID:24908193

  7. Testing 3D landform quantification methods with synthetic drumlins in a real digital elevation model

    NASA Astrophysics Data System (ADS)

    Hillier, John K.; Smith, Mike J.

    2012-06-01

    Metrics such as height and volume quantifying the 3D morphology of landforms are important observations that reflect and constrain Earth surface processes. Errors in such measurements are, however, poorly understood. A novel approach, using statistically valid 'synthetic' landscapes to quantify the errors, is presented. The utility of the approach is illustrated using a case study of 184 drumlins observed in Scotland as quantified from a Digital Elevation Model (DEM) by the 'cookie cutter' extraction method. To create the synthetic DEMs, observed drumlins were removed from the measured DEM and replaced by elongate 3D Gaussian ones of equivalent dimensions positioned randomly with respect to the 'noise' (e.g. trees) and regional trends (e.g. hills) that cause the errors. Then, errors in the cookie cutter extraction method were investigated by using it to quantify these 'synthetic' drumlins, whose location and size are known. Thus, the approach determines which key metrics are recovered accurately. For example, mean height of 6.8 m is recovered poorly at 12.5 ± 0.6 (2σ) m, but mean volume is recovered correctly. Additionally, quantification methods can be compared: a variant on the cookie cutter using an un-tensioned spline induced about twice (× 1.79) as much error. Finally, a previously reported statistically significant (p = 0.007) difference in mean volume between sub-populations of different ages, which may reflect formational processes, is demonstrated to be only 30-50% likely to exist in reality. Critically, the synthetic DEMs are demonstrated to realistically model parameter recovery, primarily because they are still almost entirely the original landscape. Results are insensitive to the exact method used to create the synthetic DEMs, and the approach could be readily adapted to assess a variety of landforms (e.g. craters, dunes and volcanoes).
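
    A minimal sketch of the core step, inserting an elongate 3D Gaussian 'synthetic' drumlin into a DEM at a chosen position, is given below. The DEM, drumlin dimensions and orientation are illustrative assumptions, not values from the study (apart from the 6.8 m mean height used as an example amplitude).

```python
# Inserting an elongate Gaussian "synthetic drumlin" into a DEM; sizes, orientation
# and the noisy background DEM are illustrative assumptions.
import numpy as np

def add_gaussian_drumlin(dem, row, col, height, sigma_long, sigma_trans, azimuth_deg):
    """Add an elongate Gaussian bump (in place) with its long axis along azimuth_deg."""
    rows, cols = np.indices(dem.shape)
    theta = np.deg2rad(azimuth_deg)
    u = (cols - col) * np.cos(theta) + (rows - row) * np.sin(theta)   # along-axis coordinate
    v = -(cols - col) * np.sin(theta) + (rows - row) * np.cos(theta)  # across-axis coordinate
    dem += height * np.exp(-0.5 * ((u / sigma_long) ** 2 + (v / sigma_trans) ** 2))
    return dem

rng = np.random.default_rng(0)
dem = rng.normal(0.0, 0.5, (500, 500))   # stand-in for a measured DEM with metre-scale noise
add_gaussian_drumlin(dem, row=250, col=250, height=6.8, sigma_long=40, sigma_trans=12, azimuth_deg=30)
```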

  8. Real-time geometric scene estimation for RGBD images using a 3D box shape grammar

    NASA Astrophysics Data System (ADS)

    Willis, Andrew R.; Brink, Kevin M.

    2016-06-01

    This article describes a novel real-time algorithm for the purpose of extracting box-like structures from RGBD image data. In contrast to conventional approaches, the proposed algorithm includes two novel attributes: (1) it divides the geometric estimation procedure into subroutines having atomic incremental computational costs, and (2) it uses a generative "Block World" perceptual model that infers both concave and convex box elements from detection of primitive box substructures. The end result is an efficient geometry processing engine suitable for use in real-time embedded systems such as those on UAVs, where it is intended to be an integral component for robotic navigation and mapping applications.

  9. Development of real-time motion capture system for 3D on-line games linked with virtual character

    NASA Astrophysics Data System (ADS)

    Kim, Jong Hyeong; Ryu, Young Kee; Cho, Hyung Suck

    2004-10-01

    With the development of 3-D virtual reality, motion tracking is becoming an essential part of entertainment, medical, sports, education and industrial applications. Virtual human characters in digital animation and game applications have been controlled by interfacing devices such as mice, joysticks and MIDI sliders. Such devices cannot make a virtual human character move smoothly and naturally. Furthermore, high-end human motion capture systems on the commercial market are expensive and complicated. In this paper, we propose a practical and fast motion capture system consisting of optical sensors, and link its data to a 3-D game character in real time. The prototype experimental setup was successfully applied to a boxing game, which requires very fast movement of the human character.

  10. A model and simulation to predict 3D imaging LADAR sensor systems performance in real-world type environments

    NASA Astrophysics Data System (ADS)

    Grasso, Robert J.; Dippel, George F.; Russo, Leonard E.

    2006-08-01

    BAE SYSTEMS reports on a program to develop a high-fidelity model and simulation to predict the performance of angle-angle-range 3D flash LADAR Imaging Sensor systems. Accurate methods to model and simulate performance from 3D LADAR systems have been lacking, relying upon either single pixel LADAR performance or extrapolating from passive detection FPA performance. The model and simulation here is developed expressly for 3D angle-angle-range imaging LADAR systems. To represent an accurate "real world" type environment this model and simulation accounts for: 1) laser pulse shape; 2) detector array size; 3) detector noise figure; 4) detector gain; 5) target attributes; 6) atmospheric transmission; 7) atmospheric backscatter; 8) atmospheric turbulence; 9) obscurants; 10) obscurant path length, and; 11) platform motion. The angle-angle-range 3D flash LADAR model and simulation accounts for all pixels in the detector array by modeling and accounting for the non-uniformity of each individual pixel. Here, noise sources and gain are modeled based upon their pixel-to-pixel statistical variation. A cumulative probability function is determined by integrating the normal distribution with respect to detector gain, and, for each pixel, a random number is compared with the cumulative probability function resulting in a different gain for each pixel within the array. In this manner very accurate performance is determined pixel-by-pixel for the entire array. Model outputs are 3D images of the far-field distribution across the array as intercepted by the target, gain distribution, power distribution, average signal-to-noise, and probability of detection across the array.
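
    The per-pixel gain draw described above can be sketched by inverting the cumulative probability function of the normal gain distribution, so that one uniform random number per pixel yields that pixel's gain. The array size and gain statistics below are illustrative assumptions, not the model's actual parameters.

```python
# Per-pixel detector gain drawn by comparing a uniform random number with the
# cumulative probability function of a normal gain distribution (i.e. inverse-CDF
# sampling); the array size and gain statistics are illustrative assumptions.
import numpy as np
from scipy.stats import norm

mean_gain, sigma_gain = 100.0, 5.0                  # assumed mean gain and pixel-to-pixel spread
rng = np.random.default_rng(1)
u = rng.uniform(size=(128, 128))                    # one random number per detector pixel
pixel_gain = norm.ppf(u, loc=mean_gain, scale=sigma_gain)  # gain at which the CDF equals u
```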

  11. Finding Space in Second Life, NASA Education and Public Outreach in a 3D Metaverse

    NASA Astrophysics Data System (ADS)

    Ireton, F. M.

    2007-12-01

    Second Life (SL) is a virtual 3D simulation or metaverse with almost eight million users worldwide. SL has seen explosive growth in the four years it has been available and hosts a number of educational and institutional "islands" or sims. Federal agencies with an SL presence include NASA and NOAA. There are several educational institutions and education-specific sims in SL. At any one time there may be as many as 40,000 users online. Users develop a persona and are seen on screen as a human figure or avatar. Avatars are able to move around the sim islands by walking or flying and move from island to island or remote locations by teleporting. While a big part of the Second Life experience deals with avatar interactions and exploring, there is an active community of builders who create the scenery, buildings, and other artifacts of the SL world including clothing and other personal items. SL builders start with basic shapes and, through size manipulation on three axes and adding texture to the shapes, create a myriad of objects - a 3D world. This paper will deal with the design and creation of exhibit halls for NASA's LRO/LCROSS mission slated for launch in October 2008 and a NASA-sponsored aeronautical engineering student challenge contest. The exhibit halls will be placed on the NASA-sponsored Co-Lab sim and will feature models of the spacecraft and the instruments carried on board and student exhibits. There also will be storyboards with information about the mission and contest. Where appropriate there will be links to external websites for further information. The exhibits will be interactive to support the outreach efforts associated with the mission and the contest. Upon completion of the visit to the LRO/LCROSS hall participants will have the opportunity to visit a nearby sandbox - SL parlance for a building area - to design and build a spacecraft from a suite of instruments provided for them depending on their area of interest. Real limitations such as mass

  12. [Measurement of left atrial and ventricular volumes in real-time 3D echocardiography. Validation by nuclear magnetic resonance

    NASA Technical Reports Server (NTRS)

    Bauer, F.; Shiota, T.; Qin, J. X.; White, R. D.; Thomas, J. D.

    2001-01-01

    The measurement of the left ventricular ejection fraction is important for the evaluation of cardiomyopathy and depends on the measurement of left ventricular volumes. There are no existing conventional echocardiographic means of measuring the true left atrial and ventricular volumes without mathematical approximations. The aim of this study was to test a new real-time 3-dimensional echocardiographic system for calculating left atrial and ventricular volumes in 40 patients after in vitro validation. The volumes of the left atrium and ventricle acquired from real-time 3-D echocardiography in the apical view were calculated in 7 sections parallel to the surface of the probe and compared with atrial (10 patients) and ventricular (30 patients) volumes calculated by nuclear magnetic resonance with the Simpson method, and with volumes of water in balloons placed in a cistern. Linear regression analysis showed an excellent correlation between the real volume of water in the balloons and the volumes given by real-time 3-dimensional echocardiography (y = 0.94x + 5.5, r = 0.99, p < 0.001, D = -10 +/- 4.5 ml). A good correlation was observed between real-time 3-dimensional echocardiography and nuclear magnetic resonance for the measurement of left atrial and ventricular volumes (y = 0.95x - 10, r = 0.91, p < 0.001, D = -14.8 +/- 19.5 ml and y = 0.87x + 10, r = 0.98, p < 0.001, D = -8.3 +/- 18.7 ml, respectively). The authors conclude that real-time three-dimensional echocardiography allows accurate measurement of left heart volumes, underscoring the clinical potential of this new 3-D method.

  13. Scale Space Graph Representation and Kernel Matching for Non Rigid and Textured 3D Shape Retrieval.

    PubMed

    Garro, Valeria; Giachetti, Andrea

    2016-06-01

    In this paper we introduce a novel framework for 3D object retrieval that relies on tree-based shape representations (TreeSha) derived from the analysis of the scale-space of the Auto Diffusion Function (ADF) and on specialized graph kernels designed for their comparison. By coupling maxima of the Auto Diffusion Function with the related basins of attraction, we can link the information at different scales encoding spatial relationships in a graph description that is isometry invariant and can easily incorporate texture and additional geometrical information as node and edge features. Using custom graph kernels it is then possible to estimate shape dissimilarities adapted to different specific tasks and different categories of models, making the procedure a powerful and flexible tool for shape recognition and retrieval. Experimental results demonstrate that the method can provide retrieval scores similar to or better than the state-of-the-art on textured and non-textured shape retrieval benchmarks, and give interesting insights into the effectiveness of different shape descriptors and graph kernels.

  14. Atlas-registration based image segmentation of MRI human thigh muscles in 3D space

    NASA Astrophysics Data System (ADS)

    Ahmad, Ezak; Yap, Moi Hoon; Degens, Hans; McPhee, Jamie S.

    2014-03-01

    Automatic segmentation of anatomic structures of magnetic resonance thigh scans can be a challenging task due to the potential lack of precisely defined muscle boundaries and issues related to intensity inhomogeneity or bias field across an image. In this paper, we demonstrate a combination framework of atlas construction and image registration methods to propagate the desired region of interest (ROI) between the atlas image and the targeted MRI thigh scans for quadriceps muscle, femur cortical layer and bone marrow segmentations. The proposed system employs a semi-automatic segmentation method on an initial image in one dataset (from a series of images). The segmented initial image is then used as an atlas image to automate the segmentation of other images in the MRI scans (3-D space). The process includes ROI labeling, atlas construction and registration, and morphological transforms that establish pixel correspondences (in terms of feature and intensity values) between the atlas (template) image and the targeted image, based on the prior atlas information and non-rigid image registration methods.

  15. Holographic display system for dynamic synthesis of 3D light fields with increased space bandwidth product.

    PubMed

    Agour, Mostafa; Falldorf, Claas; Bergmann, Ralf B

    2016-06-27

    We present a new method for the generation of a dynamic wave field with high space bandwidth product (SBP). The dynamic wave field is generated from several wave fields diffracted by a display which comprises multiple spatial light modulators (SLMs) each having a comparably low SBP. In contrast to similar approaches in stereoscopy, we describe how the independently generated wave fields can be coherently superposed. A major benefit of the scheme is that the display system may be extended to provide an even larger display. A compact experimental configuration which is composed of four phase-only SLMs to realize the coherent combination of independent wave fields is presented. Effects of important technical parameters of the display system on the wave field generated across the observation plane are investigated. These effects include, e.g., the tilt of the individual SLM and the gap between the active areas of multiple SLMs. As an example of application, holographic reconstruction of a 3D object with parallax effects is demonstrated. PMID:27410593

  16. Real Time Gabor-Domain Optical Coherence Microscopy for 3D Imaging.

    PubMed

    Rolland, Jannick P; Canavesi, Cristina; Tankam, Patrice; Cogliati, Andrea; Lanis, Mara; Santhanam, Anand P

    2016-01-01

    Fast, robust, nondestructive 3D imaging is needed for the characterization of microscopic tissue structures across various clinical applications. A custom microelectromechanical system (MEMS)-based 2D scanner was developed to achieve, together with a multi-level GPU architecture, 55 kHz fast-axis A-scan acquisition in a Gabor-domain optical coherence microscopy (GD-OCM) custom instrument. GD-OCM yields high-definition micrometer-class volumetric images. A dynamic depth-of-focusing capability, through a bio-inspired liquid-lens-based microscope design as in whales' eyes, was developed so that the instrument maintains high definition throughout a large, 1 mm³ imaging volume. Maturing this technology is key to enabling integration within the workflow of clinical environments. Imaging at an invariant resolution of 2 μm has been achieved throughout a volume of 1 × 1 × 0.6 mm³, acquired in less than 2 minutes. Volumetric scans of human skin in vivo and an excised human cornea are presented. PMID:27046601

  17. Real-time 3D computed tomographic reconstruction using commodity graphics hardware

    NASA Astrophysics Data System (ADS)

    Xu, Fang; Mueller, Klaus

    2007-07-01

    The recent emergence of various types of flat-panel x-ray detectors and C-arm gantries now enables the construction of novel imaging platforms for a wide variety of clinical applications. Many of these applications require interactive 3D image generation, which cannot be satisfied with inexpensive PC-based solutions using the CPU. We present a solution based on commodity graphics hardware (GPUs) to provide these capabilities. While GPUs have been employed for CT reconstruction before, our approach provides significant speedups by exploiting the various built-in hardwired graphics pipeline components for the most expensive CT reconstruction task, backprojection. We show that the timings so achieved are superior to those obtained when using the GPU merely as a multi-processor, without a drop in reconstruction quality. In addition, we also show how the data flow across the graphics pipeline can be optimized, by balancing the load among the pipeline components. The result is a novel streaming CT framework that conceptualizes the reconstruction process as a steady flow of data across a computing pipeline, updating the reconstruction result immediately after the projections have been acquired. Using a single PC equipped with a single high-end commodity graphics board (the Nvidia 8800 GTX), our system is able to process clinically-sized projection data at speeds meeting and exceeding the typical flat-panel detector data production rates, enabling throughput rates of 40-50 projections s⁻¹ for the reconstruction of 512³ volumes.
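
    As a point of reference for the backprojection step that the GPU pipeline accelerates, a minimal pixel-driven backprojection is sketched below for a parallel-beam geometry; the paper itself targets cone-beam C-arm data, so the geometry and sizes here are simplifying assumptions.

```python
# Minimal pixel-driven backprojection (parallel-beam) as a CPU reference for the step
# the GPU pipeline accelerates; geometry, filtering and sizes are simplifying assumptions.
import numpy as np

def backproject(filtered_sinogram, angles_deg, size):
    """Accumulate filtered projections (n_angles x n_detectors) into a size x size image."""
    recon = np.zeros((size, size))
    centre = (size - 1) / 2.0
    ys, xs = np.indices((size, size))
    xs, ys = xs - centre, ys - centre
    for proj, ang in zip(filtered_sinogram, np.deg2rad(angles_deg)):
        t = np.clip(xs * np.cos(ang) + ys * np.sin(ang) + centre, 0, proj.size - 1.001)
        t0 = t.astype(int)
        w = t - t0
        recon += (1 - w) * proj[t0] + w * proj[t0 + 1]   # linear interpolation along the detector
    return recon * np.pi / (2 * len(angles_deg))

# usage: recon = backproject(filtered_sino, np.linspace(0, 180, 360, endpoint=False), 512)
```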

  18. Development of 3D touch trigger probe with real-time observation

    NASA Astrophysics Data System (ADS)

    Chu, Chih-Liang; Wu, Cheng-Yu

    2010-08-01

    This study aims at inventing a low-cost but high-precision 3D touch trigger probe (or CMM probe). The tip ball of the stylus, with a diameter smaller than 100 μm, is made by a micro electro-discharge machine and wire electro-discharge grinding. The stylus is mounted at the centre of a stiff cross-form frame, which in turn is suspended on four micro beams. As proven by several experiments, this structure restricts the degrees of freedom in three directions. Displacement and 2D angle sensing are performed using modified commercial DVD pickup heads to measure the three degrees of motional freedom of the suspension structure. As for application, since the tip ball is difficult to identify with the naked eye, we use a modified commercial webcam and microscope to create a micro imaging system. This imaging system has been tested to have a 2.8 mm × 2.1 mm field of view and a 1.5 mm depth of field.

  19. 3D Markov Process for Traffic Flow Prediction in Real-Time

    PubMed Central

    Ko, Eunjeong; Ahn, Jinyoung; Kim, Eun Yi

    2016-01-01

    Recently, the correct estimation of traffic flow has begun to be considered an essential component in intelligent transportation systems. In this paper, a new statistical method to predict traffic flows using time series analyses and geometric correlations is proposed. The novelty of the proposed method is two-fold: (1) a 3D heat map is designed to describe the traffic conditions between roads, which can effectively represent the correlations between spatially- and temporally-adjacent traffic states; and (2) the relationship between the adjacent roads in the spatiotemporal domain is represented by cliques in a Markov random field (MRF), and the clique parameters are obtained by example-based learning. In order to assess the validity of the proposed method, it is tested using data from expressway traffic that are provided by the Korean Expressway Corporation, and the performance of the proposed method is compared with existing approaches. The results demonstrate that the proposed method can predict traffic conditions with an accuracy of 85%, and this accuracy can be improved further. PMID:26821025

  20. Real-time 3D imaging of Haines jumps in porous media flow

    PubMed Central

    Berg, Steffen; Ott, Holger; Klapp, Stephan A.; Schwing, Alex; Neiteler, Rob; Brussee, Niels; Makurat, Axel; Leu, Leon; Enzmann, Frieder; Schwarz, Jens-Oliver; Kersten, Michael; Irvine, Sarah; Stampanoni, Marco

    2013-01-01

    Newly developed high-speed, synchrotron-based X-ray computed microtomography enabled us to directly image pore-scale displacement events in porous rock in real time. Common approaches to modeling macroscopic fluid behavior are phenomenological, have many shortcomings, and lack consistent links to elementary pore-scale displacement processes, such as Haines jumps and snap-off. Unlike the common singular pore jump paradigm based on observations of restricted artificial capillaries, we found that Haines jumps typically cascade through 10–20 geometrically defined pores per event, accounting for 64% of the energy dissipation. Real-time imaging provided a more detailed fundamental understanding of the elementary processes in porous media, such as hysteresis, snap-off, and nonwetting phase entrapment, and it opens the way for a rigorous process for upscaling based on thermodynamic models. PMID:23431151

  1. The 3D reconstruction of greenhouse tomato plant based on real organ samples and parametric L-system

    NASA Astrophysics Data System (ADS)

    Xin, Longjiao; Xu, Lihong; Li, Dawei; Fu, Daichang

    2014-04-01

    In this paper, a fast and effective 3D reconstruction method for the growth of the greenhouse tomato plant is proposed, using real organ samples and a parametric L-system. By analyzing the stereo structure of the tomato plant, we extract rules and parameters to assemble an L-system that is able to simulate the plant growth, and the components of the L-system are then translated into plant organ entities via image processing and computer graphics techniques. This method can efficiently and faithfully simulate the growing process of the greenhouse tomato plant.
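
    A toy parametric L-system of the kind used to drive such a plant model is sketched below; the symbols, parameters and production rules are invented for illustration, whereas the paper derives its rules from measured tomato organs.

```python
# Toy parametric L-system: modules are (symbol, parameter-list) pairs and each rule
# maps a module to a list of successor modules; symbols and rules are invented here.
def expand(axiom, rules, iterations):
    modules = axiom
    for _ in range(iterations):
        new_modules = []
        for symbol, params in modules:
            produce = rules.get(symbol)
            new_modules.extend(produce(params) if produce else [(symbol, params)])
        modules = new_modules
    return modules

# A(l): an apex of length l becomes an internode I(l), a leaf L(0.6*l) and a smaller apex A(0.8*l)
rules = {"A": lambda p: [("I", p), ("L", [0.6 * p[0]]), ("A", [0.8 * p[0]])]}
print(expand([("A", [10.0])], rules, 3))
```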

  2. The Non-Newtonian Rheology of Real Magmas: insights into 3D microstructures

    NASA Astrophysics Data System (ADS)

    Pistone, M.; Caricchi, L.; Ulmer, P.; Reusser, E.; Marone, F.; Burlini, L.

    2010-12-01

    We present high-resolution 3D microstructures of three-phase magmas composed of melt, bubbles and crystals in different proportions deformed at magmatic pressure and temperature conditions. This study aims to constrain the dependence of rheological and physical properties of magmas on the viscosity of the silicate melt, the applied deformation rate, the relative contents of crystals and bubbles and on the interactions between these phases. The starting material is composed of a hydrous haplogranitic melt containing H2O (2.26 wt%) and CO2 (624 ppm) and different proportions of quartz crystals (between 24 and 65 vol%; 63-125 μm in diameter) and bubbles (between 9 and 12 vol%; 5-150 μm in diameter). Experiments were performed in simple shear using a HT-HP internally-heated Paterson-type rock deformation apparatus (Paterson and Olgaard, 2000) at strain rates ranging between 5×10⁻⁵ s⁻¹ and 4×10⁻³ s⁻¹, at a constant pressure of 200 MPa and temperatures ranging between 723 and 1023 K. Synchrotron based X-ray tomographic microscopy performed at the TOMCAT beamline (Stampanoni et al., 2006) at the Swiss Light Source enabled quantitative evaluation of the 3D microstructure. At high temperature and low strain rate conditions the silicate melt behaves as a Newtonian liquid (Webb and Dingwell, 1990). Higher deformation rates and the contemporary presence of gas bubbles and solid crystals make magma rheology more complex and non-Newtonian behaviour occurs. In all experimental runs two different non-Newtonian effects were observed: shear thinning (decrease of viscosity with increasing strain rate) in high crystal-content magmas (55-65 vol% crystals; 9-10 vol% bubbles) and shear thickening (increase of viscosity with increasing strain rate) in magmas at lower degree of crystallinity (24 vol% crystals; 12 vol% bubbles). Both behaviours were observed at intermediate crystal-content (44 vol% crystals; 12 vol% bubbles), with an initial thickening that subsequently gives way to

  3. Effects of scene content and layout on the perceived light direction in 3D spaces.

    PubMed

    Xia, Ling; Pont, Sylvia C; Heynderickx, Ingrid

    2016-08-01

    The lighting and furnishing of an interior space (i.e., the reflectance of its materials, the geometries of the furnishings, and their arrangement) determine the appearance of this space. Conversely, human observers infer lighting properties from the space's appearance. We conducted two psychophysical experiments to investigate how the perception of the light direction is influenced by a scene's objects and their layout using real scenes. In the first experiment, we confirmed that the shape of the objects in the scene and the scene layout influence the perceived light direction. In the second experiment, we systematically investigated how specific shape properties influenced the estimation of the light direction. The results showed that increasing the number of visible faces of an object, ultimately using globally spherical shapes in the scene, supported the veridicality of the estimated light direction. Furthermore, symmetric arrangements in the scene improved the estimation of the tilt direction. Thus, human perception of light should integrally consider materials, scene content, and layout. PMID:27548091

  4. Texture-based visualization of unsteady 3D flow by real-time advection and volumetric illumination.

    PubMed

    Weiskopf, Daniel; Schafhitzel, Tobias; Ertl, Thomas

    2007-01-01

    This paper presents an interactive technique for the dense texture-based visualization of unsteady 3D flow, taking into account issues of computational efficiency and visual perception. High efficiency is achieved by a 3D graphics processing unit (GPU)-based texture advection mechanism that implements logical 3D grid structures by physical memory in the form of 2D textures. This approach results in fast read and write access to physical memory, independent of GPU architecture. Slice-based direct volume rendering is used for the final display. We investigate two alternative methods for the volumetric illumination of the result of texture advection: First, gradient-based illumination that employs a real-time computation of gradients, and, second, line-based lighting based on illumination in codimension 2. In addition to the Phong model, perception-guided rendering methods are considered, such as cool/warm shading, halo rendering, or color-based depth cueing. The problems of clutter and occlusion are addressed by supporting a volumetric importance function that enhances features of the flow and reduces visual complexity in less interesting regions. GPU implementation aspects, performance measurements, and a discussion of results are included to demonstrate our visualization approach.
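
    The "logical 3D grid in a physical 2D texture" layout mentioned above can be sketched as a slice-tiling scheme: the volume's z-slices are packed into a 2D atlas so that a voxel (x, y, z) is addressed with purely 2D reads. The tiling layout below is an assumption for illustration, not the paper's exact addressing scheme.

```python
# Addressing a logical 64^3 volume stored as a 512 x 512 2D atlas of z-slices;
# the 8-tiles-per-row layout is an illustrative assumption.
import numpy as np

def flat_index(x, y, z, dim, tiles_per_row):
    """Map voxel (x, y, z) of a dim^3 volume to (row, col) in the tiled 2D texture."""
    tile_row, tile_col = divmod(z, tiles_per_row)
    return tile_row * dim + y, tile_col * dim + x

dim, tiles_per_row = 64, 8
atlas = np.zeros((dim * (dim // tiles_per_row), dim * tiles_per_row))  # 512 x 512 physical texture
r, c = flat_index(10, 20, 33, dim, tiles_per_row)
atlas[r, c] = 1.0   # write one voxel through the 2D mapping
```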

  5. Real-time 3-D SAFT-UT system evaluation and validation

    SciTech Connect

    Doctor, S.R.; Schuster, G.J.; Reid, L.D.; Hall, T.E.

    1996-09-01

    SAFT-UT technology is shown to provide significant enhancements to the inspection of materials used in US nuclear power plants. This report provides guidelines for the implementation of SAFT-UT technology and shows the results from its application. An overview of the development of SAFT-UT is provided so that the reader may become familiar with the technology. Then the basic fundamentals are presented with an extensive list of references. A comprehensive operating procedure, which is used in conjunction with the SAFT-UT field system developed by Pacific Northwest Laboratory (PNL), provides the recipe for both SAFT data acquisition and analysis. The specification for the hardware implementation is provided for the SAFT-UT system along with a description of the subsequent developments and improvements. One development of technical interest is the SAFT real time processor. Performance of the real-time processor is impressive and comparison is made of this dedicated parallel processor to a conventional computer and to the newer high-speed computer architectures designed for image processing. Descriptions of other improvements, including a robotic scanner, are provided. Laboratory parametric and application studies, performed by PNL and not previously reported, are discussed followed by a section on field application work in which SAFT was used during inservice inspections of operating reactors.

  6. Beyond optical molasses: 3D Raman sideband cooling of atomic cesium to high phase-space density

    PubMed

    Kerman; Vuletic; Chin; Chu

    2000-01-17

    We demonstrate a simple, general purpose method to cool neutral atoms. A sample containing 3×10⁸ cesium atoms prepared in a magneto-optical trap is cooled and simultaneously spin polarized in 10 ms at a density of 1.1×10¹¹ cm⁻³ to a phase-space density nλ_dB³ = 1/500, which is almost 3 orders of magnitude higher than attainable in free space with optical molasses. The technique is based on 3D degenerate Raman sideband cooling in optical lattices and remains efficient even at densities where the mean lattice site occupation is close to unity.
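
    Written out explicitly, the phase-space density quoted above combines the number density with the thermal de Broglie wavelength; the following lines restate it in standard notation (the numbers are those reported in the abstract).

```latex
% Phase-space density and thermal de Broglie wavelength in standard notation;
% the numbers are those reported in the abstract.
\[
  \rho \;=\; n\,\lambda_{\mathrm{dB}}^{3},
  \qquad
  \lambda_{\mathrm{dB}} \;=\; \frac{h}{\sqrt{2\pi m k_{B} T}},
\]
\[
  n = 1.1\times 10^{11}\ \mathrm{cm^{-3}},
  \qquad
  \rho \approx \tfrac{1}{500}.
\]
```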

  7. Computing and monitoring potential of public spaces by shading analysis using 3d lidar data and advanced image analysis

    NASA Astrophysics Data System (ADS)

    Zwolinski, A.; Jarzemski, M.

    2015-04-01

    The paper regards the specific context of public spaces in the "shadow" of tall buildings located in European cities. The majority of tall buildings in European cities were built in the last 15 years. Tall buildings appear mainly in city centres, directly at important public spaces that provide a viable environment for inhabitants with a variety of public functions (open spaces, green areas, recreation places, shops, services, etc.). All these amenities and services are under the direct impact of extensive shading from the tall buildings. The paper focuses on the analysis and representation of the impact of shading from tall buildings on various public spaces in cities using 3D city models. The computer environment of 3D city models in the cityGML standard uses 3D LiDAR data as one of the data types for the definition of 3D cities. The structure of cityGML allows analytic applications using existing computer tools, as well as the development of new techniques to estimate the extent of shading coming from high-rises, affecting life in public spaces. These measurable shading parameters at a specific time are crucial for the proper functioning, viability and attractiveness of public spaces, and are therefore extremely important for the location of tall buildings at main public spaces in cities. The paper explores the impact of shading from tall buildings in different spatial contexts, using cityGML models based on core LiDAR data to support controlled urban development in the sense of viable public spaces. The article is prepared within the research project 2TaLL: Application of 3D Virtual City Models in Urban Analyses of Tall Buildings, realized as a part of the Polish-Norway Grants.

  8. 3D Real-Time Echocardiography Combined with Mini Pressure Wire Generate Reliable Pressure-Volume Loops in Small Hearts

    PubMed Central

    Linden, Katharina; Dewald, Oliver; Gatzweiler, Eva; Seehase, Matthias; Duerr, Georg Daniel; Dörner, Jonas; Kleppe, Stephanie

    2016-01-01

    Background Pressure-volume loops (PVL) provide vital information regarding ventricular performance and pathophysiology in cardiac disease. Unfortunately, acquisition of PVL by conductance technology is not feasible in neonates and small children due to the available human catheter size and resulting invasiveness. The aim of the study was to validate the accuracy of PVL in small hearts using volume data obtained by real-time three-dimensional echocardiography (3DE) and simultaneously acquired pressure data. Methods In 17 piglets (weight range: 3.6–8.0 kg) left ventricular PVL were generated by 3DE and simultaneous recordings of ventricular pressure using a mini pressure wire (PVL3D). PVL3D were compared to conductance catheter measurements (PVLCond) under various hemodynamic conditions (baseline, alpha-adrenergic stimulation with phenylephrine, beta-adrenoreceptor-blockage using esmolol). In order to validate the accuracy of 3D volumetric data, cardiac magnetic resonance imaging (CMR) was performed in another 8 piglets. Results Correlation between CMR- and 3DE-derived volumes was good (enddiastolic volume: mean bias -0.03ml ±1.34ml). Computation of PVL3D in small hearts was feasible and comparable to results obtained by conductance technology. Bland-Altman analysis showed a low bias between PVL3D and PVLCond. Systolic and diastolic parameters were closely associated (Intraclass-Correlation Coefficient for: systolic myocardial elastance 0.95, arterial elastance 0.93, diastolic relaxation constant tau 0.90, indexed end-diastolic volume 0.98). Hemodynamic changes under different conditions were well detected by both methods (ICC 0.82 to 0.98). Inter- and intra-observer coefficients of variation were below 5% for all parameters. Conclusions PVL3D generated from 3DE combined with mini pressure wire represent a novel, feasible and reliable method to assess different hemodynamic conditions of cardiac function in hearts comparable to neonate and infant size. This

  9. Intracellular nanomanipulation by a photonic-force microscope with real-time acquisition of a 3D stiffness matrix

    NASA Astrophysics Data System (ADS)

    Bertseva, E.; Singh, A. S. G.; Lekki, J.; Thévenaz, P.; Lekka, M.; Jeney, S.; Gremaud, G.; Puttini, S.; Nowak, W.; Dietler, G.; Forró, L.; Unser, M.; Kulik, A. J.

    2009-07-01

    A traditional photonic-force microscope (PFM) results in huge sets of data, which requires tedious numerical analysis. In this paper, we propose instead an analog signal processor to attain real-time capabilities while retaining the richness of the traditional PFM data. Our system is devoted to intracellular measurements and is fully interactive through the use of a haptic joystick. Using our specialized analog hardware along with a dedicated algorithm, we can extract the full 3D stiffness matrix of the optical trap in real time, including the off-diagonal cross-terms. Our system is also capable of simultaneously recording data for subsequent offline analysis. This allows us to check that a good correlation exists between the classical analysis of stiffness and our real-time measurements. We monitor the PFM beads using an optical microscope. The force-feedback mechanism of the haptic joystick helps us in interactively guiding the bead inside living cells and collecting information from its (possibly anisotropic) environment. The instantaneous stiffness measurements are also displayed in real time on a graphical user interface. The whole system has been built and is operational; here we present early results that confirm the consistency of the real-time measurements with offline computations.

  10. Real-Time Estimation of 3-D Needle Shape and Deflection for MRI-Guided Interventions

    PubMed Central

    Park, Yong-Lae; Elayaperumal, Santhi; Daniel, Bruce; Ryu, Seok Chang; Shin, Mihye; Savall, Joan; Black, Richard J.; Moslehi, Behzad; Cutkosky, Mark R.

    2015-01-01

    We describe an MRI-compatible biopsy needle instrumented with optical fiber Bragg gratings for measuring bending deflections of the needle as it is inserted into tissues. During procedures, such as diagnostic biopsies and localized treatments, it is useful to track any tool deviation from the planned trajectory to minimize positioning errors and procedural complications. The goal is to display tool deflections in real time, with greater bandwidth and accuracy than when viewing the tool in MR images. A standard 18 ga × 15 cm inner needle is prepared using a fixture, and 350-μm-deep grooves are created along its length. Optical fibers are embedded in the grooves. Two sets of sensors, located at different points along the needle, provide an estimate of the bent profile, as well as temperature compensation. Tests of the needle in a water bath showed that it produced no adverse imaging artifacts when used with the MR scanner. PMID:26405428

  11. Detection of hidden objects using a real-time 3-D millimeter-wave imaging system

    NASA Astrophysics Data System (ADS)

    Rozban, Daniel; Aharon, Avihai; Levanon, Assaf; Abramovich, Amir; Yitzhaky, Yitzhak; Kopeika, N. S.

    2014-10-01

    Millimeter (mm) and sub-mm wavelengths, or the terahertz (THz) band, have several properties that motivate their use in imaging for security applications such as recognition of hidden objects, dangerous materials and aerosols, imaging through walls as in hostage situations, and also in bad weather conditions. There is no known ionization hazard for biological tissue, and atmospheric degradation of THz radiation is relatively low for practical imaging distances. We recently developed a new technology for the detection of THz radiation. This technology is based on very inexpensive plasma neon indicator lamps, also known as Glow Discharge Detectors (GDDs), that can be used as very sensitive THz radiation detectors. Using them, we designed and constructed a Focal Plane Array (FPA) and obtained recognizable 2-dimensional THz images of both dielectric and metallic objects. Using THz waves, it is shown here that even concealed weapons made of dielectric material can be detected. An example is an image of a knife concealed inside a leather bag and also under heavy clothing. Three-dimensional imaging using radar methods can enhance those images since it allows the isolation of the concealed objects from the body and from environmental clutter such as nearby furniture or other people. The GDDs enable direct heterodyning between the electric field of the target signal and the reference signal, eliminating the requirement for expensive mixers, sources, and Low Noise Amplifiers (LNAs). We expanded the ability of the FPA so that we are able to obtain recognizable 2-dimensional THz images in real time. We show here that THz detection of objects in three dimensions, using FMCW principles, is also applicable in real time. This imaging system is also shown to be capable of imaging objects from distances that allow standoff detection of suspicious objects and humans.

  12. A Comprehensive Software System for Interactive, Real-time, Visual 3D Deterministic and Stochastic Groundwater Modeling

    NASA Astrophysics Data System (ADS)

    Li, S.

    2002-05-01

    Taking advantage of the recent developments in groundwater modeling research and in computer, image and graphics processing, and object-oriented programming technologies, Dr. Li and his research group have recently developed a comprehensive software system for unified deterministic and stochastic groundwater modeling. Characterized by a new real-time modeling paradigm and improved computational algorithms, the software simulates 3D unsteady flow and reactive transport in general groundwater formations subject to both systematic and "randomly" varying stresses and geological and chemical heterogeneity. The software system has the following distinct features and capabilities: Interactive simulation and real-time visualization and animation of flow in response to deterministic as well as stochastic stresses. Interactive, visual, and real-time particle tracking, random walk, and reactive plume modeling in both systematically and randomly fluctuating flow. Interactive statistical inference, scattered data interpolation, regression, ordinary and universal Kriging, and conditional and unconditional simulation. Real-time, visual and parallel conditional flow and transport simulations. Interactive water and contaminant mass balance analysis and visual and real-time flux update. Interactive, visual, and real-time monitoring of head and flux hydrographs and concentration breakthroughs. Real-time modeling and visualization of aquifer transition from confined to unconfined to partially de-saturated or completely dry, and rewetting. Simultaneous and embedded subscale models, with automatic and real-time regional-to-local data extraction, and multiple subscale flow and transport models. Real-time modeling of steady and transient vertical flow patterns on multiple arbitrarily-shaped cross-sections and simultaneous visualization of aquifer stratigraphy, properties, hydrological features (rivers, lakes, wetlands, wells, drains, surface seeps), and dynamically adjusted surface flooding area

  13. MR image reconstruction of sparsely sampled 3D k-space data by projection-onto-convex sets.

    PubMed

    Peng, Haidong; Sabati, Mohammad; Lauzon, Louis; Frayne, Richard

    2006-07-01

    In many rapid three-dimensional (3D) magnetic resonance (MR) imaging applications, such as when following a contrast bolus in the vasculature using a moving table technique, the desired k-space data cannot be fully acquired due to scan time limitations. One solution to this problem is to sparsely sample the data space. Typically, the central zone of k-space is fully sampled, but the peripheral zone is partially sampled. We have experimentally evaluated the application of the projection-onto-convex sets (POCS) and zero-filling (ZF) algorithms for the reconstruction of sparsely sampled 3D k-space data. Both a subjective assessment (by direct image visualization) and an objective analysis [using standard image quality parameters such as global and local performance error and signal-to-noise ratio (SNR)] were employed. Compared to ZF, the POCS algorithm was found to be a powerful and robust method for reconstructing images from sparsely sampled 3D k-space data, a practical strategy for greatly reducing scan time. The POCS algorithm reconstructed a faithful representation of the true image and improved image quality with regard to global and local performance error, with respect to the ZF images. SNR, however, was superior to ZF only when more than 20% of the data were sparsely sampled. POCS-based methods show potential for reconstructing fast 3D MR images obtained by sparse sampling.
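
    A minimal POCS loop for this setting alternates between enforcing the acquired k-space samples and projecting onto an image-domain constraint set. The sketch below uses a simple real-valued, non-negative image constraint and a fixed iteration count as assumptions; the paper's exact constraint sets are not reproduced.

```python
# POCS loop for sparsely sampled k-space: alternate a data-consistency projection
# (keep acquired samples) with an image-domain projection (here: real, non-negative).
# The mask, constraint and iteration count are illustrative assumptions.
import numpy as np

def pocs_reconstruct(kspace, mask, n_iter=50):
    """kspace: zero-filled measured data; mask: 1 where samples were acquired, 0 elsewhere."""
    image = np.fft.ifftn(kspace)                      # zero-filling gives the initial estimate
    for _ in range(n_iter):
        image = np.clip(image.real, 0, None)          # projection onto the image-domain constraint set
        k_est = np.fft.fftn(image)
        k_est = mask * kspace + (1 - mask) * k_est    # projection onto the data-consistency set
        image = np.fft.ifftn(k_est)
    return np.abs(image)

# usage (shapes only): recon = pocs_reconstruct(measured_kspace, sampling_mask)
```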

  14. Left ventricular endocardial surface detection based on real-time 3D echocardiographic data

    NASA Technical Reports Server (NTRS)

    Corsi, C.; Borsari, M.; Consegnati, F.; Sarti, A.; Lamberti, C.; Travaglini, A.; Shiota, T.; Thomas, J. D.

    2001-01-01

    OBJECTIVE: A new computerized semi-automatic method for left ventricular (LV) chamber segmentation is presented. METHODS: The LV is imaged by real-time three-dimensional echocardiography (RT3DE). The surface detection model, based on level set techniques, is applied to RT3DE data for image analysis. The modified level set partial differential equation we use is solved by applying numerical methods for conservation laws. The initial conditions are manually established on some slices of the entire volume. The solution obtained for each slice is a contour line corresponding to the boundary between the LV cavity and the LV endocardium. RESULTS: The mathematical model has been applied to sequences of frames of human hearts (volume range: 34-109 ml) from both 2D imaging with off-line 3D reconstruction and RT3DE data. Volume estimates obtained by this new semi-automatic method show an excellent correlation with those obtained by manual tracing (r = 0.992). The dynamic change of LV volume during the cardiac cycle is also obtained. CONCLUSION: The volume estimation method is accurate; edge-based segmentation, image completion and volume reconstruction can be accomplished. The visualization technique also allows navigation into the reconstructed volume and display of any section of the volume.
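
    One explicit upwind update of a level-set function on a 2D slice, of the general family used for the endocardial boundary here, is sketched below. The edge-stopping speed, time step and initialisation are illustrative assumptions; the authors' modified PDE and conservation-law solver are not reproduced.

```python
# One explicit upwind update of a level-set function phi driven by a speed field,
# d(phi)/dt = speed * |grad(phi)|; speed, time step and initialisation are assumptions.
import numpy as np

def level_set_step(phi, speed, dt=0.4):
    dx_f, dx_b = np.roll(phi, -1, 1) - phi, phi - np.roll(phi, 1, 1)
    dy_f, dy_b = np.roll(phi, -1, 0) - phi, phi - np.roll(phi, 1, 0)
    grad_plus = np.sqrt(np.maximum(dx_b, 0)**2 + np.minimum(dx_f, 0)**2 +
                        np.maximum(dy_b, 0)**2 + np.minimum(dy_f, 0)**2)
    grad_minus = np.sqrt(np.minimum(dx_b, 0)**2 + np.maximum(dx_f, 0)**2 +
                         np.minimum(dy_b, 0)**2 + np.maximum(dy_f, 0)**2)
    return phi + dt * (np.maximum(speed, 0) * grad_plus + np.minimum(speed, 0) * grad_minus)

# usage: speed can be an edge-stopping map such as 1 / (1 + |grad(image)|^2), and phi is
# initialised as a signed distance to the contours drawn manually on a few slices.
```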

  15. The Application of GIS 3D Modeling and Analysis Technology in Real Estate Mass Appraisal - Taking landscape and sunlight factors as the example

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Li, Y.; Liu, B.; Liu, C.

    2014-04-01

    Based on a procedural modeling approach and 2D GIS building data for Shenzhen, 3D external models of buildings are generated by CityEngine quickly and in batch mode, and a 3D internal model is generated by vectorizing the distribution of housing units within the target building. Following that, landscape analysis and sunlight analysis based on GIS visibility analysis methods are applied to the 3D model of the target building to obtain concrete quantitative indexes, such as landscape visual range and sunshine duration, which can significantly influence real estate value. Finally, a drawing with a 3D visualization of the landscape and sunshine information is produced. Compared with the traditional manual modeling method, the results showed that the rule-based 3D modeling method on the CityEngine platform can take full advantage of existing GIS data and improve the efficiency of 3D modeling by rapidly and automatically generating refined building models in batch mode. Meanwhile, compared with subjective human judgment, the building landscape and sunlight analysis model built by visibility analysis can quantify landscape and sunshine indexes more accurately. Furthermore, applying these indexes in a real estate mass appraisal model for calculation and analysis will reduce the index errors caused by subjective human judgment. In addition, precise 3D visualization can provide appraisers with a more intuitive and efficient view for representing real estate. This greatly improves the efficiency and accuracy of real estate appraisal.

  16. Space-geodetic Constraints on GIA Models with 3D Viscosity

    NASA Astrophysics Data System (ADS)

    Van Der Wal, W.; Xu, Z.

    2012-12-01

    Models for Glacial Isostatic Adjustment (GIA) are an important correction to observations of mass change in the polar regions. Inputs for GIA models include past ice thickness and deformation parameters of the Earth's mantle, both of which are imperfectly known. Here we focus on the latter by investigating GIA models with 3D viscosity and composite (linear and non-linear) flow laws. It was found recently that GIA models with a composite flow law result in a better fit to historic sea level data, but they predict too low present-day uplift rates and gravity rates. Here GIA models are fit to space-geodetic constraints in Fennoscandia and North America. The preferred models are used to calculate the magnitude of the GIA correction on mass change estimates in Greenland and Antarctica. The observations used are GRACE Release 4 solutions from CSR and GFZ and published GPS solutions for North America and Fennoscandia, as well as historic sea level data. The GIA simulations are performed with a finite element model of a spherical, self-gravitating, incompressible Earth with 2x2 degree elements. Parameters in the flow laws are taken from seismology, heatflow measurements and experimental constraints and the ice loading history is prescribed by ICE-5G. It was found that GRACE and GPS derived uplift rates agree at the level of 1 mm/year in North America and at a level of 0.5 mm/year in Fennoscandia, the difference between the two regions being due to larger GPS errors and under sampling in North America. It can be concluded that both GPS and GRACE see the same process and the effects of filtering, noise and non-GIA processes such as land hydrology are likely to be small. Two GIA models are found that bring present-day uplift rate close to observed values in North America and Fennoscandia. These models result in a GIA correction of -17 Gt/year and -26 Gt/year on Greenland mass balance estimates from GRACE.

  17. Demonstrating Advancements in 3D Analysis and Prediction Tools for Space Weather Forecasting utilizing the Enlil Model

    NASA Astrophysics Data System (ADS)

    Murphy, J. J.; Elkington, S. R.; Schmitt, P.; Wiltberger, M. J.; Baker, D. N.

    2012-12-01

    Simulation models of the heliospheric and geospace environments can provide key insights into the geoeffective potential of solar disturbances such as Coronal Mass Ejections and High Speed Solar Wind Streams. Analysis and prediction tools for post-processing and visualizing simulation results greatly enhance the utility of these models in aiding space weather forecasters to predict the terrestrial consequences of these events. The Center For Integrated Space Weather Modeling (CISM) Knowledge Transfer (KT) group is making significant progress on an integrated post-processing, analysis, and prediction tool based on the ParaView open-source visualization application for space weather prediction. These tools will provide space weather forecasters with 3D situational awareness of the solar wind, CMEs, and eventually the geospace environment. Current work focuses on bringing new 3D analysis and prediction tools for the Enlil heliospheric model to space weather forecasters. In this effort we present a ParaView-based model interface that will provide forecasters with an interactive system for analyzing complete 3D datasets from modern space weather models.

  18. Design and application of real-time visual attention model for the exploration of 3D virtual environments.

    PubMed

    Hillaire, Sébastien; Lécuyer, Anatole; Regia-Corte, Tony; Cozot, Rémi; Royan, Jérôme; Breton, Gaspard

    2012-03-01

    This paper studies the design and application of a novel visual attention model designed to compute the user's gaze position automatically, i.e., without using a gaze-tracking system. The model we propose is specifically designed for real-time first-person exploration of 3D virtual environments. It is the first model adapted to this context which can compute in real time a continuous gaze point position instead of a set of 3D objects potentially observed by the user. To do so, contrary to previous models which use a mesh-based representation of visual objects, we introduce a representation based on surface elements. Our model also simulates visual reflexes and the cognitive processes which take place in the brain, such as the gaze behavior associated with first-person navigation in the virtual environment. Our visual attention model combines both bottom-up and top-down components to compute a continuous gaze point position on screen that is intended to match the user's actual gaze. We conducted an experiment to study and compare the performance of our method with a state-of-the-art approach. Our results are significantly better, with accuracy gains sometimes exceeding 100 percent. This suggests that computing a gaze point in a 3D virtual environment in real time is possible and is a valid approach compared to object-based approaches. Finally, we expose different applications of our model when exploring virtual environments. We present different algorithms which can improve or adapt the visual feedback of virtual environments based on gaze information. We first propose a level-of-detail approach that heavily relies on multiple-texture sampling. We show that it is possible to use the gaze information of our visual attention model to increase visual quality where the user is looking, while maintaining a high refresh rate. Second, we introduce the use of the visual attention model in three visual effects inspired by the human visual system, namely: depth-of-field blur, camera
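
    The combination of bottom-up and top-down components into a single continuous gaze point can be illustrated as below. This is a minimal stand-in, not the published surface-element model: the per-element saliencies and the mixing weights are assumed inputs, and the gaze point is simply the attention-weighted centroid on screen.

        # Minimal sketch (illustrative, not the published model): combine bottom-up
        # and top-down saliency per screen-space element and return a continuous
        # gaze point as the attention-weighted centroid.
        import numpy as np

        def gaze_point(positions, bottom_up, top_down, w_bu=0.5, w_td=0.5):
            """positions: (N, 2) screen coords; bottom_up/top_down: (N,) saliency in [0, 1]."""
            attention = w_bu * bottom_up + w_td * top_down
            weights = attention / attention.sum()
            return weights @ positions            # (2,) expected gaze position

        # Example with three candidate elements on a 1920x1080 screen.
        pos = np.array([[400.0, 300.0], [960.0, 540.0], [1500.0, 800.0]])
        bu = np.array([0.2, 0.9, 0.4])            # e.g. contrast / motion cues
        td = np.array([0.1, 0.8, 0.7])            # e.g. task relevance, navigation habits
        print(gaze_point(pos, bu, td))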

  20. Real-time 3D visualization of the thoraco-abdominal surface during breathing with body movement and deformation extraction.

    PubMed

    Povšič, K; Jezeršek, M; Možina, J

    2015-07-01

    Real-time 3D visualization of the breathing displacements can be a useful diagnostic tool in order to immediately observe the most active regions on the thoraco-abdominal surface. The developed method is capable of separating non-relevant torso movement and deformations from the deformations that are solely related to breathing. This makes it possible to visualize only the breathing displacements. The system is based on the structured laser triangulation principle, with simultaneous spatial and color data acquisition of the thoraco-abdominal region. Based on the tracking of the attached passive markers, the torso movement and deformation is compensated using rigid and non-rigid transformation models on the three-dimensional (3D) data. The total time of 3D data processing together with visualization equals 20 ms per cycle. In vitro verification of the rigid movement extraction was performed using the iterative closest point algorithm as a reference. Furthermore, a volumetric evaluation on a live subject was performed to establish the accuracy of the rigid and non-rigid model. The root mean square deviation between the measured and the reference volumes shows an error of ±0.08 dm³ for rigid movement extraction. Similarly, the error was calculated to be ±0.02 dm³ for torsional deformation extraction and ±0.11 dm³ for lateral bending deformation extraction. The results confirm that during the torso movement and deformation, the proposed method is sufficiently accurate to visualize only the displacements related to breathing. The method can be used, for example, during the breathing exercise on an indoor bicycle or a treadmill.
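
    The rigid part of the movement extraction can be sketched as a least-squares (Kabsch-style) fit of a rotation and translation to the tracked marker positions, which is then inverted to bring the measured surface back into the reference torso pose. This is an assumed formulation for illustration; the published pipeline and its non-rigid step may differ.

        # Minimal sketch (assumed approach, not the authors' implementation): estimate
        # the rigid torso motion from tracked marker positions with a Kabsch-style fit,
        # then remove it so only breathing-related displacement remains.
        import numpy as np

        def fit_rigid(src, dst):
            """Least-squares rotation R and translation t with dst ~= src @ R.T + t."""
            c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
            h = (src - c_src).T @ (dst - c_dst)
            u, _, vt = np.linalg.svd(h)
            d = np.sign(np.linalg.det(vt.T @ u.T))        # avoid reflections
            r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
            t = c_dst - r @ c_src
            return r, t

        def remove_rigid_motion(surface, markers_ref, markers_now):
            """Map the current surface points back into the reference pose of the torso."""
            r, t = fit_rigid(markers_ref, markers_now)
            return (surface - t) @ r                      # inverse rigid transform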

  1. Real-time cardiac synchronization with fixed volume frame rate for reducing physiological instabilities in 3D FMRI.

    PubMed

    Tijssen, Rob H N; Okell, Thomas W; Miller, Karla L

    2011-08-15

    Although 2D echo-planar imaging (EPI) remains the dominant method for functional MRI (FMRI), 3D readouts are receiving more interest as these sequences have a favorable signal-to-noise ratio (SNR) and enable imaging at a high isotropic resolution. Spoiled gradient-echo (SPGR) and balanced steady-state free-precession (bSSFP) are rapid sequences that are typically acquired with highly segmented 3D readouts, and are thus less sensitive to image distortion and signal dropout. They therefore provide a powerful alternative for FMRI in areas with strong susceptibility offsets, such as deep gray matter structures and the brainstem. Unfortunately, the multi-shot nature of the readout makes these sequences highly sensitive to physiological fluctuations, and large signal instabilities are observed in the inferior regions of the brain. In this work a characterization of the source of these instabilities is given and a new method is presented to reduce the instabilities observed in 3D SPGR and bSSFP. Rapidly acquired single-slice data, which critically sampled the respiratory and cardiac waveforms, showed that cardiac pulsation is the dominant source of the instabilities. Simulations further showed that synchronizing the readout to the cardiac cycle reduces the instabilities considerably. A real-time synchronization method was therefore developed, which utilizes parallel-imaging techniques to allow cardiac synchronization without alteration of the volume acquisition rate. The implemented method significantly improves the temporal stability in areas that are affected by cardiac-related signal fluctuations. In bSSFP data the tSNR in the brainstem increased by 45%, at the cost of a small reduction in tSNR in the cortical areas. In SPGR the temporal stability improved by approximately 20% in the subcortical structures as well as in cortical gray matter when synchronization was performed.

  2. Simultaneous bilateral real-time 3-d transcranial ultrasound imaging at 1 MHz through poor acoustic windows.

    PubMed

    Lindsey, Brooks D; Nicoletto, Heather A; Bennett, Ellen R; Laskowitz, Daniel T; Smith, Stephen W

    2013-04-01

    Ultrasound imaging has been proposed as a rapid, portable alternative imaging modality to examine stroke patients in pre-hospital or emergency room settings. However, in performing transcranial ultrasound examinations, 8%-29% of patients in a general population may present with window failure, in which case it is not possible to acquire clinically useful sonographic information through the temporal bone acoustic window. In this work, we describe the technical considerations, design and fabrication of low-frequency (1.2 MHz), large aperture (25.3 mm) sparse matrix array transducers for 3-D imaging in the event of window failure. These transducers are integrated into a system for real-time 3-D bilateral transcranial imaging (the ultrasound brain helmet), and color flow imaging capabilities at 1.2 MHz are directly compared with arrays operating at 1.8 MHz in a flow phantom with attenuation comparable to the in vivo case. Contrast-enhanced imaging allowed visualization of arteries of the Circle of Willis in 5 of 5 subjects and 8 of 10 sides of the head despite probe placement outside of the acoustic window. Results suggest that this type of transducer may allow acquisition of useful images either in individuals with poor windows or outside of the temporal acoustic window in the field.

  3. Real-time processor for 3-D information extraction from image sequences by a moving area sensor

    NASA Astrophysics Data System (ADS)

    Hattori, Tetsuo; Nakada, Makoto; Kubo, Katsumi

    1990-11-01

    This paper presents a real-time image processor for obtaining three-dimensional (3-D) distance information from the image sequence produced by a moving area sensor. The processor has been developed for an automated visual inspection robot system (pilot system) with an autonomous vehicle which moves around a power plant avoiding obstacles and checks whether there are defects or abnormal phenomena such as steam leakage from valves. The processor determines the distance between objects in the input image and the area sensor by deciding corresponding points (pixels) between the first and last input images, tracing the loci of edges through a sequence of sixteen images. The hardware which plays an important role consists of two kinds of boards: mapping boards which can transform the X-coordinate (horizontal direction) and Y-coordinate (vertical direction) for each horizontal row of images, and a regional labelling board which extracts the connected loci of edges through the image sequence. This paper also shows the whole processing flow of the distance detection algorithm. Since the processor can continuously process images (512x512x8 [pixels*bits per frame]) at the NTSC video rate, it takes about 0.7 [sec] to measure the 3-D distance from sixteen input images. The measurement error is at most 10 percent when the area sensor laterally moves over a range of 20 [centimeters] and when the measured scene, including a complicated background, is at a distance of 4 [meters] from
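
    The underlying depth recovery can be illustrated with the standard pinhole relation between lateral sensor motion and pixel disparity: depth Z = f · B / d, with B the baseline travelled between the first and last of the sixteen frames and d the accumulated disparity of a tracked edge. The focal length in the example below is an assumed value, not a parameter of the described hardware.

        # Minimal sketch (illustrative): depth from lateral sensor motion under a
        # pinhole model. For an edge tracked from the first to the last of the
        # sixteen frames, depth Z = f * B / d, where B is the total lateral baseline
        # travelled by the sensor and d the accumulated pixel disparity.
        def depth_from_motion(x_first_px, x_last_px, baseline_m, focal_px):
            disparity = abs(x_last_px - x_first_px)      # pixels
            if disparity == 0:
                return float("inf")                      # effectively infinite range
            return focal_px * baseline_m / disparity     # metres

        # Example: 0.20 m of lateral motion, assumed 800 px focal length, 40 px disparity -> 4 m.
        print(depth_from_motion(100.0, 140.0, baseline_m=0.20, focal_px=800.0))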

  4. Simultaneous real-time 3D photoacoustic tomography and EEG for neurovascular coupling study in an animal model of epilepsy

    NASA Astrophysics Data System (ADS)

    Wang, Bo; Xiao, Jiaying; Jiang, Huabei

    2014-08-01

    Objective. Neurovascular coupling in epilepsy is poorly understood; its study requires simultaneous monitoring of hemodynamic changes and neural activity in the brain. Approach. Here for the first time we present a combined real-time 3D photoacoustic tomography (PAT) and electrophysiology/electroencephalography (EEG) system for the study of neurovascular coupling in epilepsy, whose ability was demonstrated with a pentylenetetrazol (PTZ) induced generalized seizure model in rats. Two groups of experiments were carried out with different wavelengths to detect the changes of oxy-hemoglobin (HbO2) and deoxy-hemoglobin (HbR) signals in the rat brain. We extracted the average PAT signals of the superior sagittal sinus (SSS) and compared them with the EEG signal. Main results. Results showed that the seizure process can be divided into three stages. A 'dip' lasting for 1-2 min in the first stage and a subsequent hyperperfusion in the second stage were observed. The HbO2 signal and the HbR signal were generally negatively correlated. The change of blood flow was also estimated. All the results acquired here were in accordance with other published results. Significance. Compared to other existing functional neuroimaging tools, the method proposed here enables reliable tracking of the hemodynamic signal with both high spatial and high temporal resolution in 3D, so it is more suitable for neurovascular coupling studies of epilepsy.
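
    Recovering HbO2 and HbR from measurements at two wavelengths reduces to inverting a 2x2 absorption system. The sketch below shows that linear unmixing step only; the extinction coefficients are placeholders, not the values used in the study.

        # Minimal sketch (illustrative): recover HbO2 and HbR concentration changes
        # from photoacoustic amplitudes at two wavelengths by inverting the system
        # mu_a(lambda) = eps_HbO2(lambda)*[HbO2] + eps_HbR(lambda)*[HbR].
        # The extinction coefficients below are placeholders, not the study's values.
        import numpy as np

        EPS = np.array([[0.69, 3.84],     # lambda_1: [eps_HbO2, eps_HbR]  (arbitrary units)
                        [1.10, 0.78]])    # lambda_2

        def unmix_hemoglobin(mu_a_two_wavelengths):
            """mu_a_two_wavelengths: (2,) absorption estimates -> (HbO2, HbR)."""
            return np.linalg.solve(EPS, np.asarray(mu_a_two_wavelengths, float))

        hbo2, hbr = unmix_hemoglobin([2.5, 1.2])
        print(f"HbO2 = {hbo2:.2f}, HbR = {hbr:.2f} (relative units)")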

  5. C-ME: A 3D Community-Based, Real-Time Collaboration Tool for Scientific Research and Training

    PubMed Central

    Kolatkar, Anand; Kennedy, Kevin; Halabuk, Dan; Kunken, Josh; Marrinucci, Dena; Bethel, Kelly; Guzman, Rodney; Huckaby, Tim; Kuhn, Peter

    2008-01-01

    The need for effective collaboration tools is growing as multidisciplinary proteome-wide projects and distributed research teams become more common. The resulting data is often quite disparate, stored in separate locations, and not contextually related. Collaborative Molecular Modeling Environment (C-ME) is an interactive community-based collaboration system that allows researchers to organize information, visualize data on a two-dimensional (2-D) or three-dimensional (3-D) basis, and share and manage that information with collaborators in real time. C-ME stores the information in industry-standard databases that are immediately accessible by appropriate permission within the computer network directory service or anonymously across the internet through the C-ME application or through a web browser. The system addresses two important aspects of collaboration: context and information management. C-ME allows a researcher to use a 3-D atomic structure model or a 2-D image as a contextual basis on which to attach and share annotations to specific atoms or molecules or to specific regions of a 2-D image. These annotations provide additional information about the atomic structure or image data that can then be evaluated, amended or added to by other project members. PMID:18286178

  6. Medical applications of fast 3D cameras in real-time image-guided radiotherapy (IGRT) of cancer

    NASA Astrophysics Data System (ADS)

    Li, Shidong; Li, Tuotuo; Geng, Jason

    2013-03-01

    Dynamic volumetric medical imaging (4DMI) has reduced motion artifacts, increased early diagnosis of small mobile tumors, and improved target definition for treatment planning. High speed cameras for video, X-ray, or other forms of sequential imaging allow live tracking of external or internal movement useful for real-time image-guided radiation therapy (IGRT). However, no form of 4DMI can track organ motion in real time, and no camera has been correlated with 4DMI to show volumetric changes. With a brief review of various IGRT techniques, we propose a fast 3D camera for live-video stereovision, an automatic surface-motion identifier to classify body or respiratory motion, a mechanical model for synchronizing the external surface movement with the internal target displacement by combined use of the real-time stereovision and pre-treatment 4DMI, and dynamic multi-leaf collimation for adaptively aiming at the moving target. Our preliminary results demonstrate that the technique is feasible and efficient in IGRT of mobile targets. A clinical trial has been initiated for validation of its spatial and temporal accuracies and dosimetric impact for intensity-modulated RT (IMRT), volumetric-modulated arc therapy (VMAT), and stereotactic body radiotherapy (SBRT) of any mobile tumors. The technique can be extended for surface-guided stereotactic needle insertion in biopsy of small lung nodules.

  7. VR-Planets : a 3D immersive application for real-time flythrough images of planetary surfaces

    NASA Astrophysics Data System (ADS)

    Civet, François; Le Mouélic, Stéphane

    2015-04-01

    During the last two decades, a fleet of planetary probes has acquired several hundred gigabytes of images of planetary surfaces. Mars has been particularly well covered thanks to the Mars Global Surveyor, Mars Express and Mars Reconnaissance Orbiter spacecraft. The HRSC, CTX and HiRISE instruments allowed the computation of Digital Elevation Models with resolutions ranging from hundreds of meters down to 1 meter per pixel, and corresponding orthoimages with resolutions from a few hundred meters down to 25 centimeters per pixel. The integration of such huge data sets into a system allowing user-friendly manipulation, either for scientific investigation or for public outreach, can represent a real challenge. We are investigating how innovative tools can be used to freely fly over reconstructed landscapes in real time, using technologies derived from the game industry and virtual reality. We have developed an application based on a game engine, using planetary data, to immerse users in real Martian landscapes. The user can freely navigate in each scene at full spatial resolution using a game controller. The current rendering is compatible with several visualization devices such as 3D active screens, virtual reality headsets (Oculus Rift), and Android devices.

  8. Incorporation of 3-D Scanning Lidar Data into Google Earth for Real-time Air Pollution Observation

    NASA Astrophysics Data System (ADS)

    Chiang, C.; Nee, J.; Das, S.; Sun, S.; Hsu, Y.; Chiang, H.; Chen, S.; Lin, P.; Chu, J.; Su, C.; Lee, W.; Su, L.; Chen, C.

    2011-12-01

    A 3-D Differential Absorption Scanning Lidar (DIASL) system has been designed to be small and lightweight, suitable for installation in various vehicles and places for monitoring air pollutants, and to display the detailed real-time temporal and spatial variability of trace gases via Google Earth. The fast scanning techniques and visual information can rapidly identify the locations and sources of the polluting gases and assess the most affected areas. This helps the Environmental Protection Agency (EPA) to protect people's health and abate air pollution as quickly as possible. The distributions of the atmospheric pollutants and their relationship with local meteorological parameters measured with ground-based instruments will also be discussed. Details will be presented in the upcoming symposium.

  9. Towards real-time 3D US to CT bone image registration using phase and curvature feature based GMM matching.

    PubMed

    Brounstein, Anna; Hacihaliloglu, Ilker; Guy, Pierre; Hodgson, Antony; Abugharbieh, Rafeef

    2011-01-01

    In order to use pre-operatively acquired computed tomography (CT) scans to guide surgical tool movements in orthopaedic surgery, the CT scan must first be registered to the patient's anatomy. Three-dimensional (3D) ultrasound (US) could potentially be used for this purpose if the registration process could be made sufficiently automatic, fast and accurate, but existing methods have difficulties meeting one or more of these criteria. We propose a near-real-time US-to-CT registration method that matches point clouds extracted from local phase images with points selected in part on the basis of local curvature. The point clouds are represented as Gaussian Mixture Models (GMM) and registration is achieved by minimizing the statistical dissimilarity between the GMMs using an L2 distance metric. We present quantitative and qualitative results on both phantom and clinical pelvis data and show a mean registration time of 2.11 s with a mean accuracy of 0.49 mm.
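
    The registration cost described above, an L2 distance between two Gaussian mixtures, has a closed form for isotropic Gaussians; the sketch below evaluates it for two equal-weight point-cloud GMMs. It is illustrative rather than the paper's implementation; in practice this distance would be minimized over the rigid-transform parameters applied to one of the clouds.

        # Minimal sketch (illustrative, not the paper's implementation): L2 distance
        # between two isotropic Gaussian mixtures, usable as a registration cost.
        # Each point cloud becomes a GMM with equal weights and a shared sigma.
        import numpy as np

        def gauss_overlap(mu_a, mu_b, var_sum):
            """Integral of the product of two isotropic 3-D Gaussians (closed form)."""
            d2 = np.sum((mu_a[:, None, :] - mu_b[None, :, :]) ** 2, axis=-1)
            return np.exp(-d2 / (2.0 * var_sum)) / (2.0 * np.pi * var_sum) ** 1.5

        def l2_distance(points_a, points_b, sigma=1.0):
            """|| f - g ||_2^2 = int f^2 - 2 int f g + int g^2 for equal-weight GMMs."""
            pa, pb = np.asarray(points_a, float), np.asarray(points_b, float)
            wa, wb = 1.0 / len(pa), 1.0 / len(pb)
            var = sigma ** 2
            ff = wa * wa * gauss_overlap(pa, pa, 2 * var).sum()
            gg = wb * wb * gauss_overlap(pb, pb, 2 * var).sum()
            fg = wa * wb * gauss_overlap(pa, pb, 2 * var).sum()
            return ff - 2.0 * fg + gg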

  10. Stereoscopic 3D Projections with MITAKA An Important Tool to Get People Interested in Astronomy and Space Science in Peru

    NASA Astrophysics Data System (ADS)

    Shiomi, Nemoto; Shoichi, Itoh; Hidehiko, Agata; Mario, Zegarra; Jose, Ishitsuka; Edwin, Choque; Adita, Quispe; Tsunehiko, Kato

    2014-02-01

    The National Astronomical Observatory of Japan has developed the space simulation software "Mitaka". By running Mitaka on two PCs with two projectors fitted with polarizing filters and viewing through polarized glasses, we can enjoy space travel in three dimensions. Anyone can download Mitaka from anywhere in the world over the Internet, but only Japanese and English versions have been prepared so far. We prepared a Spanish version of Mitaka, and we are now giving projections for local people. Experiencing the universe in three dimensions is very memorable for people, and it has become an opportunity to get them interested in astronomy and the space sciences. A room with a capacity of 40 people, next to our planetarium, has been conditioned for 3D projections; a portable system is also available. Due to the success of this new outreach system, more 3D show rooms will be implemented within the country.

  11. Automated real-time search and analysis algorithms for a non-contact 3D profiling system

    NASA Astrophysics Data System (ADS)

    Haynes, Mark; Wu, Chih-Hang John; Beck, B. Terry; Peterman, Robert J.

    2013-04-01

    The purpose of this research is to develop a new means of identifying and extracting geometrical feature statistics from a non-contact precision-measurement 3D profilometer. Autonomous algorithms have been developed to search through large-scale Cartesian point clouds to identify and extract geometrical features. These algorithms are developed with the intent of providing real-time production quality control of cold-rolled steel wires. The steel wires in question are prestressing steel reinforcement wires for concrete members. The geometry of the wire is critical to the performance of the overall concrete structure. For this research a custom 3D non-contact profilometry system has been developed that utilizes laser displacement sensors for submicron-resolution surface profiling. Optimizations in the control and sensory system allow data points to be collected at up to approximately 400,000 points per second. In order to achieve geometrical feature extraction and tolerancing with this large volume of data, the algorithms employed are optimized for parsing large data quantities. The methods used provide a unique means of maintaining high-resolution data of the surface profiles while keeping algorithm running times within practical bounds for industrial application. Through a combination of regional sampling, iterative search, spatial filtering, frequency filtering, spatial clustering, and template matching, a robust feature identification method has been developed. These algorithms provide an autonomous means of verifying tolerances in geometrical features. The key method of identifying the features is a combination of downhill simplex search and geometrical feature templates. By performing downhill simplex through several procedural programming layers of different search and filtering techniques, very specific geometrical features can be identified within the point cloud and analyzed for proper tolerancing. Being able to perform this quality control in real time
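
    The downhill-simplex plus template step can be sketched as fitting a parametric feature template to a measured height profile with a Nelder-Mead search. The template shape, its parameters and the synthetic data below are illustrative assumptions, not the paper's templates.

        # Minimal sketch (assumed formulation): fit a parametric indent template to a
        # measured 1-D height profile with downhill-simplex (Nelder-Mead) search, as a
        # stand-in for the paper's template-matching step.
        import numpy as np
        from scipy.optimize import minimize

        def indent_template(x, center, depth, width):
            """Smooth dent of given depth and width centred at `center`."""
            return -depth * np.exp(-((x - center) / width) ** 2)

        def fit_template(x, heights, x0=(0.0, 0.05, 0.5)):
            cost = lambda p: np.sum((heights - indent_template(x, *p)) ** 2)
            res = minimize(cost, x0, method="Nelder-Mead")
            return res.x                      # (center, depth, width) of best fit

        # Example on synthetic data resembling a rolled-in indent on a wire surface.
        x = np.linspace(-3.0, 3.0, 200)
        measured = indent_template(x, 0.4, 0.08, 0.6) + 0.002 * np.random.randn(x.size)
        print(fit_template(x, measured))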

  12. Movement-Based Estimation and Visualization of Space Use in 3D for Wildlife Ecology and Conservation

    PubMed Central

    Tracey, Jeff A.; Sheppard, James; Zhu, Jun; Wei, Fuwen; Swaisgood, Ronald R.; Fisher, Robert N.

    2014-01-01

    Advances in digital biotelemetry technologies are enabling the collection of bigger and more accurate data on the movements of free-ranging wildlife in space and time. Although many biotelemetry devices record 3D location data with x, y, and z coordinates from tracked animals, the third z coordinate is typically not integrated into studies of animal spatial use. Disregarding the vertical component may seriously limit understanding of animal habitat use and niche separation. We present novel movement-based kernel density estimators and computer visualization tools for generating and exploring 3D home ranges based on location data. We use case studies of three wildlife species – giant panda, dugong, and California condor – to demonstrate the ecological insights and conservation management benefits provided by 3D home range estimation and visualization for terrestrial, aquatic, and avian wildlife research. PMID:24988114

  13. Movement-based estimation and visualization of space use in 3D for wildlife ecology and conservation

    USGS Publications Warehouse

    Tracey, Jeff A.; Sheppard, James; Zhu, Jun; Wei, Fu-Wen; Swaisgood, Ronald R.; Fisher, Robert N.

    2014-01-01

    Advances in digital biotelemetry technologies are enabling the collection of bigger and more accurate data on the movements of free-ranging wildlife in space and time. Although many biotelemetry devices record 3D location data with x, y, and z coordinates from tracked animals, the third z coordinate is typically not integrated into studies of animal spatial use. Disregarding the vertical component may seriously limit understanding of animal habitat use and niche separation. We present novel movement-based kernel density estimators and computer visualization tools for generating and exploring 3D home ranges based on location data. We use case studies of three wildlife species – giant panda, dugong, and California condor – to demonstrate the ecological insights and conservation management benefits provided by 3D home range estimation and visualization for terrestrial, aquatic, and avian wildlife research.

  14. Movement-based estimation and visualization of space use in 3D for wildlife ecology and conservation.

    PubMed

    Tracey, Jeff A; Sheppard, James; Zhu, Jun; Wei, Fuwen; Swaisgood, Ronald R; Fisher, Robert N

    2014-01-01

    Advances in digital biotelemetry technologies are enabling the collection of bigger and more accurate data on the movements of free-ranging wildlife in space and time. Although many biotelemetry devices record 3D location data with x, y, and z coordinates from tracked animals, the third z coordinate is typically not integrated into studies of animal spatial use. Disregarding the vertical component may seriously limit understanding of animal habitat use and niche separation. We present novel movement-based kernel density estimators and computer visualization tools for generating and exploring 3D home ranges based on location data. We use case studies of three wildlife species--giant panda, dugong, and California condor--to demonstrate the ecological insights and conservation management benefits provided by 3D home range estimation and visualization for terrestrial, aquatic, and avian wildlife research. PMID:24988114
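
    As a simplified illustration of estimating a 3D utilization density from telemetry fixes, the sketch below applies a standard (not movement-based) Gaussian kernel density estimate to x, y, z locations and evaluates it on a grid; the movement-based estimators of the papers above additionally condition the kernels on the animal's movement path.

        # Minimal sketch (standard 3-D KDE, not the movement-based estimator of the
        # paper): estimate a utilization density from x, y, z telemetry fixes and
        # evaluate it on a coarse grid for visualization.
        import numpy as np
        from scipy.stats import gaussian_kde

        def home_range_density(fixes_xyz, grid_steps=20):
            """fixes_xyz: (N, 3) locations -> (grid points, density values)."""
            kde = gaussian_kde(fixes_xyz.T)                       # expects (3, N)
            lo, hi = fixes_xyz.min(axis=0), fixes_xyz.max(axis=0)
            axes = [np.linspace(l, h, grid_steps) for l, h in zip(lo, hi)]
            gx, gy, gz = np.meshgrid(*axes, indexing="ij")
            pts = np.vstack([gx.ravel(), gy.ravel(), gz.ravel()])
            return pts.T, kde(pts)

        # Example: 500 synthetic fixes clustered around two roost sites.
        rng = np.random.default_rng(0)
        fixes = np.vstack([rng.normal([0, 0, 50], 20, (250, 3)),
                           rng.normal([300, 200, 120], 30, (250, 3))])
        points, density = home_range_density(fixes)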

  15. Modeling and Analysis of a Lunar Space Reactor with the Computer Code RELAP5-3D/ATHENA

    SciTech Connect

    Carbajo, Juan J; Qualls, A L

    2008-01-01

    The transient analysis 3-dimensional (3-D) computer code RELAP5-3D/ATHENA has been employed to model and analyze a space reactor of 180 kW (thermal), 40 kW (net, electrical) with eight Stirling engines (SEs). Each SE will generate over 6 kWe; the excess power will be needed for the pumps and other power management devices. The reactor will be cooled by NaK (a eutectic mixture of sodium and potassium which is liquid at ambient temperature). This space reactor is intended to be deployed over the surface of the Moon or Mars. The reactor operating life will be 8 to 10 years. The RELAP5-3D/ATHENA code is being developed and maintained by Idaho National Laboratory. The code can employ a variety of coolants in addition to water, the original coolant employed with early versions of the code. The code can also use 3-D volumes and 3-D junctions, thus allowing for a more realistic representation of complex geometries. A combination of 3-D and 1-D volumes is employed in this study. The space reactor model consists of a primary loop and two secondary loops connected by two heat exchangers (HXs). Each secondary loop provides heat to four SEs. The primary loop includes the nuclear reactor with the lower and upper plena, the core with 85 fuel pins, and two vertical heat exchangers (HXs). The maximum coolant temperature of the primary loop is 900 K. The secondary loops also employ NaK as a coolant at a maximum temperature of 877 K. The SE heads are at a temperature of 800 K and the cold sinks are at a temperature of ~400 K. Two radiators will be employed to remove heat from the SEs. The SE HXs surrounding the SE heads are of annular design and have been modeled using 3-D volumes. These 3-D models have been used to improve the HX design by optimizing the flows of coolant and maximizing the heat transferred to the SE heads. The transients analyzed include failure of one or more Stirling engines, trip of the reactor pump, and trips of the secondary loop pumps feeding the HXs of the

  16. A 3-D CFD Analysis of the Space Shuttle RSRM With Propellant Fins @ 1 sec. Burn-Back

    NASA Technical Reports Server (NTRS)

    Morstadt, Robert A.

    2003-01-01

    In this study 3-D Computational Fluid Dynamics (CFD) runs have been made for the Space Shuttle RSRM using 2 different grids and 4 different turbulence models, which were the standard k-epsilon, the RNG k-epsilon, the realizable k-epsilon, and the Reynolds stress model. The RSRM forward segment consists of 11 fins. By taking advantage of the forward fin symmetry, only half of one fin along the axis had to be used in making the grid. This meant that the 3-D model consisted of a pie slice that encompassed 1/22nd of the motor circumference and ran along the axis of the entire motor. The 3-D flow patterns in the forward fin region are of particular interest. Close inspection of these flow patterns indicates that 2 counter-rotating axial vortices emerge from each submerged solid propellant fin. Thus, the 3-D CFD analysis allows insight into complicated internal motor flow patterns that are not available from the simpler 2-D axi-symmetric studies. In addition, a comparison is made between the 3-D bore pressure drop and the 2-D axi-symmetric pressure drop.

  17. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

    The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, and communicate? The CARARE project is currently realising this in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We currently have a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrating 3D objects in physical or virtual museums. Nevertheless, Europeana has a tremendous potential as a multi-faceted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video and sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  18. Real-time 3D millimeter wave imaging based FMCW using GGD focal plane array as detectors

    NASA Astrophysics Data System (ADS)

    Levanon, Assaf; Rozban, Daniel; Kopeika, Natan S.; Yitzhaky, Yitzhak; Abramovich, Amir

    2014-03-01

    Millimeter wave (MMW) imaging systems are required for applications in medicine, communications, homeland security, and space technology. This is because there is no known ionization hazard for biological tissue, and atmospheric attenuation in this range of the spectrum is relatively low. The lack of inexpensive room-temperature imaging systems makes it difficult to provide a suitable MMW system for many of the above applications. A 3D MMW imaging system based on chirp radar was previously studied using a scanning imaging system with a single detector. The system presented here proposes to employ the chirp radar method with a Glow Discharge Detector (GDD) Focal Plane Array (FPA) of plasma-based detectors. Each point on the object corresponds to a point in the image and includes the distance information. This will enable 3D MMW imaging. The radar system requires that the millimeter wave detector (GDD) be able to operate as a heterodyne detector. Since the source of radiation is a frequency modulated continuous wave (FMCW), the detected signal resulting from heterodyne detection gives the object's depth information according to the value of the difference frequency, in addition to the reflectance of the image. In this work we experimentally demonstrate the feasibility of implementing an imaging system based on radar principles and an FPA of GDD devices. This imaging system is shown to be capable of imaging objects from distances of at least 10 meters.
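
    The depth recovery from the difference (beat) frequency follows the standard FMCW relation R = c · f_beat · T_sweep / (2 · B); the sweep parameters in the example below are illustrative, not the system's actual chirp settings.

        # Minimal sketch (standard FMCW relation, illustrative parameters): the beat
        # frequency from heterodyne detection maps to target range via
        # R = c * f_beat * T_sweep / (2 * B), where B is the chirp bandwidth.
        C = 299_792_458.0                     # speed of light, m/s

        def range_from_beat(f_beat_hz, sweep_time_s, bandwidth_hz):
            return C * f_beat_hz * sweep_time_s / (2.0 * bandwidth_hz)

        # Example: assumed 2 GHz chirp over 1 ms; a ~133 kHz beat corresponds to ~10 m.
        print(range_from_beat(133.3e3, 1e-3, 2e9))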

  20. PLANETARY NEBULAE DETECTED IN THE SPITZER SPACE TELESCOPE GLIMPSE 3D LEGACY SURVEY

    SciTech Connect

    Zhang Yong; Hsia, Chih-Hao; Kwok, Sun E-mail: xiazh@hku.hk

    2012-01-20

    We used data from the Spitzer Galactic Legacy Infrared Mid-Plane Survey Extraordinaire (GLIMPSE) to investigate the mid-infrared (MIR) properties of planetary nebulae (PNs) and PN candidates. In previous studies of GLIMPSE I and II data, we have shown that these MIR data are very useful in distinguishing PNs from other emission-line objects. In the present paper, we focus on the PNs in the field of the GLIMPSE 3D survey, which has a more extensive latitude coverage. We found a total of 90 Macquarie-AAO-Strasbourg (MASH) and MASH II PNs and 101 known PNs to have visible MIR counterparts in the GLIMPSE 3D survey area. The images and photometry of these PNs are presented. Combining the derived IRAC photometry at 3.6, 4.5, 5.8, and 8.0 μm with the existing photometric measurements from other infrared catalogs, we are able to construct spectral energy distributions (SEDs) of these PNs. Among the most notable objects in this survey is the PN M1-41, whose GLIMPSE 3D image reveals a large bipolar structure more than 3 arcmin in extent.

  1. Projecting 2D gene expression data into 3D and 4D space.

    PubMed

    Gerth, Victor E; Katsuyama, Kaori; Snyder, Kevin A; Bowes, Jeff B; Kitayama, Atsushi; Ueno, Naoto; Vize, Peter D

    2007-04-01

    Video games typically generate virtual 3D objects by texture mapping an image onto a 3D polygonal frame. The feeling of movement is then achieved by mathematically simulating camera movement relative to the polygonal frame. We have built customized scripts that adapt video game authoring software to texture mapping images of gene expression data onto b-spline based embryo models. This approach, known as UV mapping, associates two-dimensional (U and V) coordinates within images to the three dimensions (X, Y, and Z) of a b-spline model. B-spline model frameworks were built either from confocal data or de novo extracted from 2D images, once again using video game authoring approaches. This system was then used to build 3D models of 182 genes expressed in developing Xenopus embryos and to implement these in a web-accessible database. Models can be viewed via simple Internet browsers and utilize OpenGL hardware acceleration via a Shockwave plugin. Not only does this database display static data in a dynamic and scalable manner, the UV mapping system also serves as a method to align different images to a common framework, an approach that may make high-throughput automated comparisons of gene expression patterns possible. Finally, video game systems also have elegant methods for handling movement, allowing biomechanical algorithms to drive the animation of models. With further development, these biomechanical techniques offer practical methods for generating virtual embryos that recapitulate morphogenesis. PMID:17366623

  2. Real-time microstructure imaging by Laue microdiffraction: A sample application in laser 3D printed Ni-based superalloys

    NASA Astrophysics Data System (ADS)

    Zhou, Guangni; Zhu, Wenxin; Shen, Hao; Li, Yao; Zhang, Anfeng; Tamura, Nobumichi; Chen, Kai

    2016-06-01

    Synchrotron-based Laue microdiffraction has been widely applied to characterize the local crystal structure, orientation, and defects of inhomogeneous polycrystalline solids by raster scanning them under a micro/nano focused polychromatic X-ray probe. In a typical experiment, a large number of Laue diffraction patterns are collected, requiring novel data reduction and analysis approaches, especially for researchers who do not have access to fast parallel computing capabilities. In this article, a novel approach is developed by plotting the distributions of the average recorded intensity and the average filtered intensity of the Laue patterns. Visualization of the characteristic microstructural features is realized in real time during data collection. As an example, this method is applied to image key features such as microcracks, carbides, the heat affected zone, and dendrites in a laser assisted 3D printed Ni-based superalloy, at a speed much faster than data collection. Such an analytical approach remains valid for a wide range of crystalline solids, and therefore extends the application range of the Laue microdiffraction technique to problems where real-time decision-making during the experiment is crucial (for instance time-resolved non-reversible experiments).
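
    The data reduction described above can be sketched as collapsing each Laue pattern to two scalars, the mean raw intensity and the mean of a filtered pattern, and arranging them into maps over the raster scan. The choice of a median filter and its size below are assumptions for illustration; the abstract does not specify the filter.

        # Minimal sketch (assumed details): reduce each Laue pattern of a raster scan
        # to two scalars (mean raw intensity, mean filtered intensity) and assemble
        # them into maps that reveal microstructural features during data collection.
        import numpy as np
        from scipy.ndimage import median_filter

        def pattern_statistics(pattern, filter_size=3):
            """pattern: 2-D detector image -> (mean raw intensity, mean filtered intensity)."""
            return float(pattern.mean()), float(median_filter(pattern, size=filter_size).mean())

        def build_maps(patterns, scan_shape):
            """patterns: iterable of 2-D arrays in scan order; scan_shape: (rows, cols)."""
            stats = np.array([pattern_statistics(p) for p in patterns])
            raw_map = stats[:, 0].reshape(scan_shape)
            filtered_map = stats[:, 1].reshape(scan_shape)
            return raw_map, filtered_map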

  3. Computational hologram synthesis and representation on spatial light modulators for real-time 3D holographic imaging

    NASA Astrophysics Data System (ADS)

    Reichelt, Stephan; Leister, Norbert

    2013-02-01

    In dynamic computer-generated holography that utilizes spatial light modulators, both hologram synthesis and hologram representation are essential in terms of fast computation and high reconstruction quality. For hologram synthesis, i.e. the computation step, Fresnel transform based or point-source based raytracing methods can be applied. In the encoding step, the complex wave-field has to be optimally represented by the SLM with its given modulation capability. Proper hologram reconstruction implies a simultaneous and independent amplitude and phase modulation of the input wave-field by the SLM. In this paper, we discuss methods for representing full complex holograms on SLMs, considering inherent SLM parameters such as modulation type and bit depth and their effect on reconstruction performance measures such as diffraction efficiency and SNR. We review the three implementation schemes of Burckhardt amplitude-only representation, phase-only macro-pixel representation, and two-phase interference representation. Besides the optical performance we address their hardware complexity and required computational load. Finally, we experimentally demonstrate holographic reconstructions of the different representation schemes as obtained by functional prototypes utilizing SeeReal's viewing-window holographic display technology. The proposed hardware implementations enable fast encoding of complex-valued hologram data and thus will pave the way for commercial real-time holographic 3D imaging in the near future.
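
    Of the three representation schemes, the Burckhardt amplitude-only decomposition has a compact textbook form: every complex hologram value is split into three non-negative amplitudes assigned to sub-pixels carrying phases of 0, 120 and 240 degrees. The sketch below implements that generic decomposition, not SeeReal's encoder.

        # Minimal sketch (textbook Burckhardt decomposition, not SeeReal's encoder):
        # represent a complex hologram value as three non-negative amplitudes on an
        # amplitude-only SLM, assigned to sub-pixels carrying phases 0, 120 and 240 deg.
        import cmath, math

        def burckhardt_components(c):
            """Return (a0, a1, a2) >= 0 with c = a0 + a1*exp(2j*pi/3) + a2*exp(4j*pi/3)."""
            amp, phase = abs(c), cmath.phase(c) % (2.0 * math.pi)
            sector = int(phase // (2.0 * math.pi / 3.0))        # which pair of base phasors
            local = phase - sector * 2.0 * math.pi / 3.0
            a = [0.0, 0.0, 0.0]
            a[sector] = amp * (math.cos(local) + math.sin(local) / math.sqrt(3.0))
            a[(sector + 1) % 3] = 2.0 * amp * math.sin(local) / math.sqrt(3.0)
            return tuple(a)

        # Round-trip check for an arbitrary complex wave-field sample.
        c = 0.3 - 0.7j
        a0, a1, a2 = burckhardt_components(c)
        print(a0 + a1 * cmath.exp(2j * math.pi / 3) + a2 * cmath.exp(4j * math.pi / 3))  # ~ c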

  4. Real-time microstructure imaging by Laue microdiffraction: A sample application in laser 3D printed Ni-based superalloys.

    PubMed

    Zhou, Guangni; Zhu, Wenxin; Shen, Hao; Li, Yao; Zhang, Anfeng; Tamura, Nobumichi; Chen, Kai

    2016-01-01

    Synchrotron-based Laue microdiffraction has been widely applied to characterize the local crystal structure, orientation, and defects of inhomogeneous polycrystalline solids by raster scanning them under a micro/nano focused polychromatic X-ray probe. In a typical experiment, a large number of Laue diffraction patterns are collected, requiring novel data reduction and analysis approaches, especially for researchers who do not have access to fast parallel computing capabilities. In this article, a novel approach is developed by plotting the distributions of the average recorded intensity and the average filtered intensity of the Laue patterns. Visualization of the characteristic microstructural features is realized in real time during data collection. As an example, this method is applied to image key features such as microcracks, carbides, the heat affected zone, and dendrites in a laser assisted 3D printed Ni-based superalloy, at a speed much faster than data collection. Such an analytical approach remains valid for a wide range of crystalline solids, and therefore extends the application range of the Laue microdiffraction technique to problems where real-time decision-making during the experiment is crucial (for instance time-resolved non-reversible experiments). PMID:27302087

  5. Real-time microstructure imaging by Laue microdiffraction: A sample application in laser 3D printed Ni-based superalloys

    PubMed Central

    Zhou, Guangni; Zhu, Wenxin; Shen, Hao; Li, Yao; Zhang, Anfeng; Tamura, Nobumichi; Chen, Kai

    2016-01-01

    Synchrotron-based Laue microdiffraction has been widely applied to characterize the local crystal structure, orientation, and defects of inhomogeneous polycrystalline solids by raster scanning them under a micro/nano focused polychromatic X-ray probe. In a typical experiment, a large number of Laue diffraction patterns are collected, requiring novel data reduction and analysis approaches, especially for researchers who do not have access to fast parallel computing capabilities. In this article, a novel approach is developed by plotting the distributions of the average recorded intensity and the average filtered intensity of the Laue patterns. Visualization of the characteristic microstructural features is realized in real time during data collection. As an example, this method is applied to image key features such as microcracks, carbides, the heat affected zone, and dendrites in a laser assisted 3D printed Ni-based superalloy, at a speed much faster than data collection. Such an analytical approach remains valid for a wide range of crystalline solids, and therefore extends the application range of the Laue microdiffraction technique to problems where real-time decision-making during the experiment is crucial (for instance time-resolved non-reversible experiments). PMID:27302087

  6. NASA's "Eyes On The Solar System:" A Real-time, 3D-Interactive Tool to Teach the Wonder of Planetary Science

    NASA Astrophysics Data System (ADS)

    Hussey, K.

    2014-12-01

    NASA's Jet Propulsion Laboratory is using video game technology to immerse students, the general public and mission personnel in our solar system and beyond. "Eyes on the Solar System," a cross-platform, real-time, 3D-interactive application that can run on-line or as a stand-alone "video game," is of particular interest to educators looking for inviting tools to capture students' interest in a format they like and understand (eyes.nasa.gov). It gives users an extraordinary view of our solar system by virtually transporting them across space and time to make first-person observations of spacecraft, planetary bodies and NASA/ESA missions in action. Key scientific results, illustrated with video presentations, supporting imagery and web links, are embedded contextually into the solar system. Educators who want an interactive, game-based approach to engage students in learning Planetary Science will see how "Eyes" can be effectively used to teach its principles to grades 3 through 14. The presentation will include a detailed demonstration of the software along with a description/demonstration of how this technology is being adapted for education. There will also be a preview of coming attractions. This work is being conducted by the Visualization Technology Applications and Development Group at NASA's Jet Propulsion Laboratory, the same team responsible for "Eyes on the Earth 3D" and "Eyes on Exoplanets," which can be viewed at eyes.nasa.gov/earth and eyes.nasa.gov/exoplanets.

  7. Applying a 3D Situational Virtual Learning Environment to the Real World Business--An Extended Research in Marketing

    ERIC Educational Resources Information Center

    Wang, Shwu-huey

    2012-01-01

    In order to understand (1) what kind of students can be supported by a three-dimensional virtual learning environment (3D VLE), and (2) the relationship between a conventional test (i.e., a paper-and-pencil test) and the 3D VLE used in this study, the study designs a 3D virtual supermarket (3DVS) to help students transform their role…

  8. Fusion of current technologies with real-time 3D MEMS ladar for novel security and defense applications

    NASA Astrophysics Data System (ADS)

    Siepmann, James P.

    2006-05-01

    Through the utilization of scanning MEMS mirrors in ladar devices, a whole new range of potential military, Homeland Security, law enforcement, and civilian applications is now possible. Currently, ladar devices are typically large (>15,000 cc), heavy (>15 kg), and expensive (>$100,000), while current MEMS ladar designs are more than an order of magnitude smaller, opening up a myriad of potential new applications. One such application with current technology is a GPS-integrated MEMS ladar unit, which could be used for real-time border monitoring or the creation of virtual 3D battlefields after being dropped or propelled into hostile territory. Another current technology that can be integrated into a MEMS ladar unit is digital video, which can give high resolution and true color to a picture that is then enhanced with range information in a real-time display format that is easier for the user to understand and assimilate than typical gray-scale or false color images. The problem with using 2-axis MEMS mirrors in ladar devices is that in order to have a resonance frequency capable of practical real-time scanning, they must either be quite small and/or have a low maximum tilt angle. Typically, this value has been less than or equal to 10 mg·mm²·kHz²·degrees. We have been able to solve this problem by using angle amplification techniques that utilize a series of MEMS mirrors and/or a specialized set of optics to achieve a broad field of view. These techniques and some of their novel applications will be explained and discussed herein.

  9. Effect of space balance 3D training using visual feedback on balance and mobility in acute stroke patients

    PubMed Central

    Ko, YoungJun; Ha, HyunGeun; Bae, Young-Hyeon; Lee, WanHee

    2015-01-01

    [Purpose] The purpose of the study was to determine the effects of balance training with Space Balance 3D, which is a computerized measurement and visual feedback balance assessment system, on balance and mobility in acute stroke patients. [Subjects and Methods] This was a randomized controlled trial in which 52 subjects were assigned randomly into either an experimental group or a control group. The experimental group, which contained 26 subjects, received balance training with a Space Balance 3D exercise program and conventional physical therapy interventions 5 times per week during 3 weeks. Outcome measures were examined before and after the 3-week interventions using the Berg Balance Scale (BBS), Timed Up and Go (TUG) test, and Postural Assessment Scale for Stroke Patients (PASS). The data were analyzed by a two-way repeated measures ANOVA using SPSS 19.0. [Results] The results revealed a nonsignificant interaction effect between group and time period for both groups before and after the interventions in the BBS score, TUG score, and PASS score. In addition, the experimental group showed more improvement than the control group in the BBS, TUG and PASS scores, but the differences were not significant. In the comparisons within the groups by time, both groups showed significant improvement in BBS, TUG, and PASS scores. [Conclusion] The Space Balance 3D training with conventional physical therapy intervention is recommended for improvement of balance and mobility in acute stroke patients. PMID:26157270

  10. Effect of space balance 3D training using visual feedback on balance and mobility in acute stroke patients.

    PubMed

    Ko, YoungJun; Ha, HyunGeun; Bae, Young-Hyeon; Lee, WanHee

    2015-05-01

    [Purpose] The purpose of the study was to determine the effects of balance training with Space Balance 3D, which is a computerized measurement and visual feedback balance assessment system, on balance and mobility in acute stroke patients. [Subjects and Methods] This was a randomized controlled trial in which 52 subjects were assigned randomly into either an experimental group or a control group. The experimental group, which contained 26 subjects, received balance training with a Space Balance 3D exercise program and conventional physical therapy interventions 5 times per week during 3 weeks. Outcome measures were examined before and after the 3-week interventions using the Berg Balance Scale (BBS), Timed Up and Go (TUG) test, and Postural Assessment Scale for Stroke Patients (PASS). The data were analyzed by a two-way repeated measures ANOVA using SPSS 19.0. [Results] The results revealed a nonsignificant interaction effect between group and time period for both groups before and after the interventions in the BBS score, TUG score, and PASS score. In addition, the experimental group showed more improvement than the control group in the BBS, TUG and PASS scores, but the differences were not significant. In the comparisons within the groups by time, both groups showed significant improvement in BBS, TUG, and PASS scores. [Conclusion] The Space Balance 3D training with conventional physical therapy intervention is recommended for improvement of balance and mobility in acute stroke patients.

  11. MONTE GENEROSO ROCKFALL FIELD TEST (SWITZERLAND): Real size experiment to constraint 2D and 3D rockfall simulations

    NASA Astrophysics Data System (ADS)

    Humair, F.; Matasci, B.; Carrea, D.; Pedrazzini, A.; Loye, A.; Pedrozzi, G.; Nicolet, P.; Jaboyedoff, M.

    2012-04-01

    account the results of the experimental testing are performed and compared with the a-priori simulations. 3D simulations were performed using a software package that takes into account the effect of the forest cover on the block trajectories (RockyFor 3D) and another that neglects this aspect (Rotomap; geo&soft international). 2D simulation (RocFall; Rocscience) profiles were located along the block paths deduced from the 3D simulations. The preliminary results show that: (1) high-speed movies are promising and allow us to track the blocks using video software; (2) the a-priori simulations tend to overestimate the runout distance, which is certainly due to an underestimation of the obstacles as well as to the breaking of the falling rocks, which is not taken into account in the models; (3) the trajectories deduced from both the a-priori simulations and the real-size experiment highlight the major influence of the channelized slope morphology on the rock paths, as the rocks tend to follow the flow direction. This indicates that the 2D simulations have to be performed along the line of the flow direction.

  12. Space-time evolution of a growth fold (Betic Cordillera, Spain). Evidences from 3D geometrical modelling

    NASA Astrophysics Data System (ADS)

    Martin-Rojas, Ivan; Alfaro, Pedro; Estévez, Antonio

    2014-05-01

    We present a study that combines several software tools (iGIS©, ArcGIS©, Autocad©, etc.) and data (geological mapping, high-resolution digital topographic data, high-resolution aerial photographs, etc.) to create a detailed 3D geometric model of an active fault-propagation growth fold. This 3D model clearly shows the structural features of the analysed fold, as well as growth relationships and sedimentary patterns. The results obtained permit us to discuss the kinematics and structural evolution of the fold and the fault in time and space. The studied fault-propagation fold is the Crevillente syncline. This fold represents the northern limit of the Bajo Segura Basin, an intermontane basin in the Eastern Betic Cordillera (SE Spain) developed from the upper Miocene onward. The 3D features of the Crevillente syncline, including its growth pattern, indicate that limb rotation, and consequently fault activity, was higher during the Messinian than during the Tortonian. From the Pliocene onward, our data indicate that limb rotation and fault activity remain steady or probably decrease. This evolution of the Crevillente syncline through time is not the same all along the structure; in fact, the 3D geometric model indicates that the observed lateral heterogeneity is related to along-strike variation of fault displacement.

  13. Nondestructive testing of 3D disperse systems with micro- and nano-particles: N-dimensional space of optical parameters

    NASA Astrophysics Data System (ADS)

    Bezrukova, Alexandra G.

    2006-04-01

    The simultaneous analysis of 3D disperse systems (DS) with micro- and nano-particles by refractometry, absorbency, fluorescence and different types of light scattering can help in elaborating sensing elements for specific impurity control. We have investigated, with a complex of optical methods, different 3D DS such as: proteins, nucleoproteids, lipoproteids, liposomes, viruses, virosomes, lipid emulsions, blood substitutes, latexes, liquid crystals, biological cells of various forms and sizes (including bacterial cells), metallic powders, clays, kimberlites, zeolites, oils, crude oils, samples of natural and water-supply waters, etc. This experience suggests that each 3D DS can be characterised by an N-dimensional vector in an N-dimensional space of optical parameters. By fusing the various optical data it is possible to solve the inverse physical problem of detecting an impurity in mixtures of 3D DS using methods of information statistical theory. It is important that, in this case, polymodality of the particle size distribution is not an obstacle.

  14. GARN: Sampling RNA 3D Structure Space with Game Theory and Knowledge-Based Scoring Strategies.

    PubMed

    Boudard, Mélanie; Bernauer, Julie; Barth, Dominique; Cohen, Johanne; Denise, Alain

    2015-01-01

    Cellular processes involve large numbers of RNA molecules. The functions of these RNA molecules and their binding to molecular machines are highly dependent on their 3D structures. One of the key challenges in RNA structure prediction and modeling is predicting the spatial arrangement of the various structural elements of RNA. As RNA folding is generally hierarchical, methods involving coarse-grained models hold great promise for this purpose. We present here a novel coarse-grained method for sampling, based on game theory and knowledge-based potentials. This strategy, GARN (Game Algorithm for RNa sampling), is often much faster than previously described techniques and generates large sets of solutions closely resembling the native structure. GARN is thus a suitable starting point for the molecular modeling of large RNAs, particularly those with experimental constraints. GARN is available from: http://garn.lri.fr/.

  15. GARN: Sampling RNA 3D Structure Space with Game Theory and Knowledge-Based Scoring Strategies

    PubMed Central

    Boudard, Mélanie; Bernauer, Julie; Barth, Dominique; Cohen, Johanne; Denise, Alain

    2015-01-01

    Cellular processes involve large numbers of RNA molecules. The functions of these RNA molecules and their binding to molecular machines are highly dependent on their 3D structures. One of the key challenges in RNA structure prediction and modeling is predicting the spatial arrangement of the various structural elements of RNA. As RNA folding is generally hierarchical, methods involving coarse-grained models hold great promise for this purpose. We present here a novel coarse-grained method for sampling, based on game theory and knowledge-based potentials. This strategy, GARN (Game Algorithm for RNa sampling), is often much faster than previously described techniques and generates large sets of solutions closely resembling the native structure. GARN is thus a suitable starting point for the molecular modeling of large RNAs, particularly those with experimental constraints. GARN is available from: http://garn.lri.fr/. PMID:26313379

  16. The impacts of open-mouth breathing on upper airway space in obstructive sleep apnea: 3-D MDCT analysis.

    PubMed

    Kim, Eun Joong; Choi, Ji Ho; Kim, Kang Woo; Kim, Tae Hoon; Lee, Sang Hag; Lee, Heung Man; Shin, Chol; Lee, Ki Yeol; Lee, Seung Hoon

    2011-04-01

    Open-mouth breathing during sleep is a risk factor for obstructive sleep apnea (OSA) and is associated with increased disease severity and upper airway collapsibility. The aim of this study was to investigate the effect of open-mouth breathing on the upper airway space in patients with OSA using three-dimensional multi-detector computed tomography (3-D MDCT). The study design included a case-control study with planned data collection. The study was performed at a tertiary medical center. 3-D MDCT analysis was conducted on 52 patients with OSA under two experimental conditions: mouth closed and mouth open. Under these conditions, we measured the minimal cross-sectional area of the retropalatal and retroglossal regions (mXSA-RP, mXSA-RG), as well as the upper airway length (UAL), defined as the vertical dimension from hard palate to hyoid. We also computed the volume of the upper airway space by 3-D reconstruction of both conditions. When the mouth was open, mXSA-RP and mXSA-RG significantly decreased and the UAL significantly increased, irrespective of the severity of OSA. However, between the closed- and open-mouth states, there was no significant change in upper airway volume at any severity of OSA. Results suggest that the more elongated and narrow upper airway during open-mouth breathing may aggravate the collapsibility of the upper airway and, thus, negatively affect OSA severity.

  17. A Real-Time Magnetoencephalography Brain-Computer Interface Using Interactive 3D Visualization and the Hadoop Ecosystem.

    PubMed

    McClay, Wilbert A; Yadav, Nancy; Ozbek, Yusuf; Haas, Andy; Attias, Hagaii T; Nagarajan, Srikantan S

    2015-09-30

    Ecumenically, the fastest growing segment of Big Data is human biology-related data, and the annual data creation is on the order of zettabytes. The implications are global across industries, of which the treatment of brain-related illnesses and trauma could see the most significant and immediate effects. The next generation of health care IT and sensory devices are acquiring and storing massive amounts of patient-related data. An innovative Brain-Computer Interface (BCI) for interactive 3D visualization is presented utilizing the Hadoop Ecosystem for data analysis and storage. The BCI is an implementation of Bayesian factor analysis algorithms that can distinguish distinct thought actions using magnetoencephalographic (MEG) brain signals. We have collected data on five subjects yielding 90% positive performance in MEG mid- and post-movement activity. We describe a driver that substitutes the actions of the BCI as mouse button presses for real-time use in visual simulations. This process has been added into a flight visualization demonstration. By thinking left or right, the user experiences the aircraft turning in the chosen direction. The driver components of the BCI can be compiled into any software and substitute a user's intent for specific keyboard strikes or mouse button presses. The BCI's data analytics of a subject's MEG brainwaves and flight visualization performance are stored and analyzed using the Hadoop Ecosystem as a quick retrieval data warehouse.

  18. A Real-Time Magnetoencephalography Brain-Computer Interface Using Interactive 3D Visualization and the Hadoop Ecosystem.

    PubMed

    McClay, Wilbert A; Yadav, Nancy; Ozbek, Yusuf; Haas, Andy; Attias, Hagaii T; Nagarajan, Srikantan S

    2015-01-01

    Ecumenically, the fastest growing segment of Big Data is human biology-related data, and the annual data creation is on the order of zettabytes. The implications are global across industries, of which the treatment of brain-related illnesses and trauma could see the most significant and immediate effects. The next generation of health care IT and sensory devices are acquiring and storing massive amounts of patient-related data. An innovative Brain-Computer Interface (BCI) for interactive 3D visualization is presented utilizing the Hadoop Ecosystem for data analysis and storage. The BCI is an implementation of Bayesian factor analysis algorithms that can distinguish distinct thought actions using magnetoencephalographic (MEG) brain signals. We have collected data on five subjects yielding 90% positive performance in MEG mid- and post-movement activity. We describe a driver that substitutes the actions of the BCI as mouse button presses for real-time use in visual simulations. This process has been added into a flight visualization demonstration. By thinking left or right, the user experiences the aircraft turning in the chosen direction. The driver components of the BCI can be compiled into any software and substitute a user's intent for specific keyboard strikes or mouse button presses. The BCI's data analytics of a subject's MEG brainwaves and flight visualization performance are stored and analyzed using the Hadoop Ecosystem as a quick retrieval data warehouse. PMID:26437432

  19. Towards real-time 3D US to CT bone image registration using phase and curvature feature based GMM matching.

    PubMed

    Brounstein, Anna; Hacihaliloglu, Ilker; Guy, Pierre; Hodgson, Antony; Abugharbieh, Rafeef

    2011-01-01

    In order to use pre-operatively acquired computed tomography (CT) scans to guide surgical tool movements in orthopaedic surgery, the CT scan must first be registered to the patient's anatomy. Three-dimensional (3D) ultrasound (US) could potentially be used for this purpose if the registration process could be made sufficiently automatic, fast and accurate, but existing methods have difficulties meeting one or more of these criteria. We propose a near-real-time US-to-CT registration method that matches point clouds extracted from local phase images with points selected in part on the basis of local curvature. The point clouds are represented as Gaussian Mixture Models (GMM) and registration is achieved by minimizing the statistical dissimilarity between the GMMs using an L2 distance metric. We present quantitative and qualitative results on both phantom and clinical pelvis data and show a mean registration time of 2.11 s with a mean accuracy of 0.49 mm. PMID:22003622
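
    A minimal sketch of the core idea described here, with equally weighted isotropic GMMs built on the extracted points, closed-form Gaussian overlap integrals, and a rigid transform optimized to minimize the L2 distance; the kernel width, optimizer, parametrization and synthetic point clouds below are illustrative assumptions, not the published implementation:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist
from scipy.spatial.transform import Rotation

def gauss_overlap(a, b, sigma):
    """Sum of pairwise integrals of isotropic Gaussians centred on two point sets."""
    d2 = cdist(a, b, "sqeuclidean")
    norm = (4.0 * np.pi * sigma**2) ** 1.5
    return np.exp(-d2 / (4.0 * sigma**2)).sum() / norm

def l2_distance(fixed, moving, sigma=1.0):
    """L2 distance between two equally weighted isotropic GMMs."""
    return (gauss_overlap(fixed, fixed, sigma) / len(fixed)**2
            - 2.0 * gauss_overlap(fixed, moving, sigma) / (len(fixed) * len(moving))
            + gauss_overlap(moving, moving, sigma) / len(moving)**2)

def register(fixed, moving, sigma=1.0):
    """Find the rigid transform (Euler angles + translation) minimizing the GMM L2 distance."""
    def cost(p):
        transformed = Rotation.from_euler("xyz", p[:3]).apply(moving) + p[3:]
        return l2_distance(fixed, transformed, sigma)
    res = minimize(cost, np.zeros(6), method="Powell")
    return res.x

# Usage: recover a small known offset between a point cloud and a shifted copy of itself.
rng = np.random.default_rng(1)
us_points = rng.normal(size=(200, 3))
ct_points = us_points + np.array([0.5, -0.3, 0.2])
print(register(ct_points, us_points))  # translation part should approach (0.5, -0.3, 0.2)
```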

  20. A Real-Time Magnetoencephalography Brain-Computer Interface Using Interactive 3D Visualization and the Hadoop Ecosystem

    PubMed Central

    McClay, Wilbert A.; Yadav, Nancy; Ozbek, Yusuf; Haas, Andy; Attias, Hagaii T.; Nagarajan, Srikantan S.

    2015-01-01

    Ecumenically, the fastest growing segment of Big Data is human biology-related data, and the annual data creation is on the order of zettabytes. The implications are global across industries, of which the treatment of brain-related illnesses and trauma could see the most significant and immediate effects. The next generation of health care IT and sensory devices are acquiring and storing massive amounts of patient-related data. An innovative Brain-Computer Interface (BCI) for interactive 3D visualization is presented utilizing the Hadoop Ecosystem for data analysis and storage. The BCI is an implementation of Bayesian factor analysis algorithms that can distinguish distinct thought actions using magnetoencephalographic (MEG) brain signals. We have collected data on five subjects yielding 90% positive performance in MEG mid- and post-movement activity. We describe a driver that substitutes the actions of the BCI as mouse button presses for real-time use in visual simulations. This process has been added into a flight visualization demonstration. By thinking left or right, the user experiences the aircraft turning in the chosen direction. The driver components of the BCI can be compiled into any software and substitute a user's intent for specific keyboard strikes or mouse button presses. The BCI's data analytics of a subject's MEG brainwaves and flight visualization performance are stored and analyzed using the Hadoop Ecosystem as a quick retrieval data warehouse. PMID:26437432

  1. Minimum slice spacing required to reconstruct 3D shape for serial sections of breast tissue for comparison with medical imaging

    NASA Astrophysics Data System (ADS)

    Reis, Sara; Eiben, Bjoern; Mertzanidou, Thomy; Hipwell, John; Hermsen, Meyke; van der Laak, Jeroen; Pinder, Sarah; Bult, Peter; Hawkes, David

    2015-03-01

    There is currently an increasing interest in combining the information obtained from radiology and histology with the intent of gaining a better understanding of how different tumour morphologies can lead to distinctive radiological signs which might predict overall treatment outcome. Relating information at different resolution scales is challenging. Reconstructing 3D volumes from histology images could be the key to interpreting and relating the radiological image signal to tissue microstructure. The goal of this study is to determine the minimum sampling (maximum spacing between histological sections through a fixed surgical specimen) required to create a 3D reconstruction of the specimen to a specific tolerance. We present initial results for one lumpectomy specimen case where 33 consecutive histology slides were acquired.

  2. Assessing quality of urban underground spaces by coupling 3D geological models: The case study of Foshan city, South China

    NASA Astrophysics Data System (ADS)

    Hou, Weisheng; Yang, Liang; Deng, Dongcheng; Ye, Jing; Clarke, Keith; Yang, Zhijun; Zhuang, Wenming; Liu, Jianxiong; Huang, Jichun

    2016-04-01

    Urban underground spaces (UUS), especially those containing natural resources that have not yet been utilized, have been recognized as important for future sustainable development in large cities. One of the key steps in city planning is to estimate the quality of urban underground space resources, since they are major determinants of suitable land use. Yet geological constraints are rarely taken into consideration in urban planning, nor are the uncertainties in the quality of the available assessments. Based on Fuzzy Set theory and the analytic hierarchy process, a 3D stepwise process for the quality assessment of geotechnical properties of natural resources in UUS is presented. The process includes an index system for construction factors; area partitioning; the extraction of geological attributes; the creation of a relative membership grade matrix; the evaluation of subject and destination layers; and indeterminacy analysis. A 3D geological model of the study area was introduced into the process that extracted geological attributes as constraints. This 3D geological model was coupled with borehole data for Foshan City, Guangdong province, South China, and the indeterminacies caused by the cell size and the geological strata constraints were analyzed. The results of the case study show that (1) a relatively correct result can be obtained if the cell size is near to the average sampling distance of the boreholes; (2) the constraints of the 3D geological model have a major role in establishing the UUS quality level and distribution, especially at the boundaries of the geological bodies; and (3) the assessment result is impacted by an interaction between the cell resolution and the geological model used.
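
    A minimal sketch of the analytic-hierarchy-process weighting step and a schematic fuzzy membership combination for one grid cell; the factor names, the pairwise judgments and the membership grades below are illustrative assumptions, not the values used in the Foshan case study:

```python
import numpy as np

# Hypothetical pairwise comparison matrix (Saaty scale) for three assessment
# factors, e.g. soil bearing capacity, groundwater level, fault proximity.
A = np.array([[1.0, 3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 1/2.0, 1.0]])

def ahp_weights(A):
    """Priority weights = normalized principal eigenvector of the comparison matrix."""
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    return w / w.sum(), vals[k].real

w, lam = ahp_weights(A)
n = A.shape[0]
CI = (lam - n) / (n - 1)          # consistency index
CR = CI / 0.58                    # Saaty random index for n = 3 is 0.58
print("weights:", w, "consistency ratio:", CR)

# Relative membership grades of one grid cell against quality levels I-III
# for each factor (rows = factors, columns = quality levels); values assumed.
R = np.array([[0.7, 0.2, 0.1],
              [0.4, 0.4, 0.2],
              [0.1, 0.3, 0.6]])
print("cell quality-level membership:", w @ R)  # weighted fuzzy evaluation
```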

  3. Innovative radar products for the 3D, high-resolution and real-time monitoring of the convective activity in the airspace around airports

    NASA Astrophysics Data System (ADS)

    Tabary, P.; Bousquet, O.; Sénési, S.; Josse, P.

    2009-09-01

    Airports are expected to become critical areas in the future given the projected doubling of air traffic by 2020. The increased density of aircraft in airport airspaces calls for improved systems and products to monitor potential hazards in real time and thus meet the airport objectives in terms of safety and throughput. Among all meteorological hazards, convection is certainly the most impacting one. We describe here some innovative radar products that have recently been developed and tested at Météo France around the Paris airports. Those products rely on the French Doppler radar network, consisting today of 24 elements, some of them polarimetric. Reflectivity and Doppler volumetric data are concentrated from all 24 radar sites in real time at the central level (Toulouse), where 3D Cartesian mosaics covering the entire French territory (i.e. a typical 1,000 by 1,000 km² area) are generated. The innovation with respect to what has been done previously is that the three components of the wind are retrieved by operational combination of the radial velocities. The final product, available in real time every 15 minutes with a spatial resolution of 2.5 km horizontally and 500 m vertically, is a 3D grid giving the interpolated reflectivity and wind field (u, v and w) values. The 2.5 km resolution, arising from the fact that the retrieval is carried out every 15 minutes from radars typically spaced apart by 150 km, is not sufficient for airport airspace monitoring but is valuable for en-route monitoring. Its extension to the entire European space is foreseen. To address the specific needs in the airport areas, a downscaling technique has been proposed to merge the above-mentioned low-resolution 3D wind and reflectivity fields with the high resolution (5 minutes and 1 km²) 2D imagery of the Trappes radar, which is the one that covers the Paris airports. The merging approach is based on the assumption that the Vertical Profile of Reflectivity (i.e. the

  4. On the application of focused ion beam nanotomography in characterizing the 3D pore space geometry of Opalinus clay

    NASA Astrophysics Data System (ADS)

    Keller, Lukas M.; Holzer, Lorenz; Wepf, Roger; Gasser, Philippe; Münch, Beat; Marschall, Paul

    The evaluation and optimization of radioactive disposal systems requires a comprehensive understanding of mass transport processes. Among others, mass transport in porous geomaterials depends crucially on the topology and geometry of the pore space. Thus, understanding the mechanism of mass transport processes ultimately requires a 3D characterization of the pore structure. Here, we demonstrate the potential of focused ion beam nanotomography (FIB-nT) in characterizing the 3D geometry of pore space in clay rocks, i.e. Opalinus clay. In order to preserve the microstructure and to reduce sample preparation artefacts we used high pressure freezing and subsequent freeze drying to prepare the samples. Resolution limitations placed the lower limit in pore radii that can be analyzed by FIB-nT at about 10-15 nm. Image analysis and the calculation of the pore size distribution revealed that pores with radii larger than 15 nm are related to a porosity of about 3 vol.%. To validate the method, we compared the pore size distribution obtained by FIB-nT with the one obtained by N2 adsorption analysis. The latter yielded a porosity of about 13 vol.%. This means that FIB-nT can describe around 20-30% of the total pore space. For pore radii larger than 15 nm the pore size distributions obtained by FIB-nT and N2 adsorption analysis were in good agreement. This suggests that FIB-nT can provide representative data on the spatial distribution of pores for pore sizes in the range of about 10-100 nm. Based on the spatial analysis of the 3D data we extracted information on the spatial distribution of pore space geometrical properties.

  5. An embedded real-time red peach detection system based on an OV7670 camera, ARM cortex-M4 processor and 3D look-up tables.

    PubMed

    Teixidó, Mercè; Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Nogués, Miquel; Palacín, Jordi

    2012-10-22

    This work proposes the development of an embedded real-time fruit detection system for future automatic fruit harvesting. The proposed embedded system is based on an ARM Cortex-M4 (STM32F407VGT6) processor and an Omnivision OV7670 color camera. The future goal of this embedded vision system will be to control a robotized arm to automatically select and pick some fruit directly from the tree. The complete embedded system has been designed to be placed directly in the gripper tool of the future robotized harvesting arm. The embedded system will be able to perform real-time fruit detection and tracking by using a three-dimensional look-up-table (LUT) defined in the RGB color space and optimized for fruit picking. Additionally, two different methodologies for creating optimized 3D LUTs based on existing linear color models and fruit histograms were implemented in this work and compared for the case of red peaches. The resulting system is able to acquire general and zoomed orchard images and to update the relative tracking information of a red peach in the tree ten times per second.
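
    A minimal sketch of how a quantized 3D RGB look-up table could be built from a simple linear color rule and then used to classify pixels; the 32-level quantization and the red-dominance rule below are illustrative assumptions, not the authors' optimized LUTs:

```python
import numpy as np

LEVELS = 32                       # quantize each RGB channel to 32 bins (assumed granularity)
STEP = 256 // LEVELS

def build_lut():
    """Precompute a boolean 3D LUT: True where a bin's colour is 'red peach-like'."""
    centers = (np.arange(LEVELS) + 0.5) * STEP
    r, g, b = np.meshgrid(centers, centers, centers, indexing="ij")
    # Illustrative linear colour model: red clearly dominates green and blue.
    return (r > 90) & (r > 1.3 * g) & (r > 1.3 * b)

LUT = build_lut()

def detect(image_rgb: np.ndarray) -> np.ndarray:
    """Classify every pixel with three integer divisions and one table look-up."""
    idx = (image_rgb // STEP).astype(np.intp)
    return LUT[idx[..., 0], idx[..., 1], idx[..., 2]]

# Usage: a 2x2 test image with two reddish pixels.
img = np.array([[[200, 60, 50], [40, 120, 40]],
                [[90, 90, 90], [230, 100, 80]]], dtype=np.uint8)
print(detect(img))
```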

  6. An Embedded Real-Time Red Peach Detection System Based on an OV7670 Camera, ARM Cortex-M4 Processor and 3D Look-Up Tables

    PubMed Central

    Teixidó, Mercè; Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Nogués, Miquel; Palacín, Jordi

    2012-01-01

    This work proposes the development of an embedded real-time fruit detection system for future automatic fruit harvesting. The proposed embedded system is based on an ARM Cortex-M4 (STM32F407VGT6) processor and an Omnivision OV7670 color camera. The future goal of this embedded vision system will be to control a robotized arm to automatically select and pick some fruit directly from the tree. The complete embedded system has been designed to be placed directly in the gripper tool of the future robotized harvesting arm. The embedded system will be able to perform real-time fruit detection and tracking by using a three-dimensional look-up-table (LUT) defined in the RGB color space and optimized for fruit picking. Additionally, two different methodologies for creating optimized 3D LUTs based on existing linear color models and fruit histograms were implemented in this work and compared for the case of red peaches. The resulting system is able to acquire general and zoomed orchard images and to update the relative tracking information of a red peach in the tree ten times per second. PMID:23202040

  7. Use and Evaluation of 3D GeoWall Visualizations in Undergraduate Space Science Classes

    NASA Astrophysics Data System (ADS)

    Turner, N. E.; Hamed, K. M.; Lopez, R. E.; Mitchell, E. J.; Gray, C. L.; Corralez, D. S.; Robinson, C. A.; Soderlund, K. M.

    2005-12-01

    One persistent difficulty many astronomy students face is the lack of a 3-dimensional mental model of the systems being studied, in particular the Sun-Earth-Moon system. Students without such a mental model can have a very hard time conceptualizing the geometric relationships that cause, for example, the cycle of lunar phases or the pattern of seasons. The GeoWall is a recently developed and affordable projection mechanism for three-dimensional stereo visualization which is becoming a popular tool in classrooms and research labs for use in geology classes, but as yet very little work has been done involving the GeoWall for astronomy classes. We present results from a large study involving over 1000 students of varied backgrounds: some students were tested at the University of Texas at El Paso, a large public university on the US-Mexico border, and other students were from the Florida Institute of Technology, a small, private, technical school in Melbourne, Florida. We wrote a lecture tutorial-style lab to go along with a GeoWall 3D visual of the Earth-Moon system and tested the students before and after with several diagnostics. Students were given pre- and post-tests using the Lunar Phase Concept Inventory (LPCI) as well as a separate evaluation written specifically for this project. We found the lab useful for both populations of students, but not equally effective for all. We discuss reactions from the students and their improvement, as well as whether the students are able to correctly assess the usefulness of the project for their own learning.

  8. ¹³C NMR-distance matrix descriptors: optimal abstract 3D space granularity for predicting estrogen binding.

    PubMed

    Slavov, Svetoslav H; Geesaman, Elizabeth L; Pearce, Bruce A; Schnackenberg, Laura K; Buzatu, Dan A; Wilkes, Jon G; Beger, Richard D

    2012-07-23

    An improved three-dimensional quantitative spectral data-activity relationship (3D-QSDAR) methodology was used to build and validate models relating the activity of 130 estrogen receptor binders to specific structural features. In 3D-QSDAR, each compound is represented by a unique fingerprint constructed from ¹³C chemical shift pairs and associated interatomic distances. Grids of different granularity can be used to partition the abstract fingerprint space into congruent "bins", for which the optimal size was previously unexplored. For this purpose, the endocrine disruptor knowledge base data were used to generate 50 3D-QSDAR models with bins ranging in size from 2 ppm × 2 ppm × 0.5 Å to 20 ppm × 20 ppm × 2.5 Å, each of which was validated using 100 training/test set partitions. The best average predictivity in terms of the test-set R² was achieved at 10 ppm × 10 ppm × Z Å (Z = 0.5, ..., 2.5 Å). It was hypothesized that this optimum depends on the chemical shifts' estimation error (±4.13 ppm) and the precision of the calculated interatomic distances. The highest ranked bins from the partial least-squares weights were found to be associated with structural features known to be essential for binding to the estrogen receptor.
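
    A minimal sketch of how such a fingerprint could be partitioned into congruent bins of a chosen granularity; the shift and distance ranges and the toy fragment below are illustrative assumptions, not the published binning code:

```python
import numpy as np
from itertools import combinations

def qsdar_bins(shifts_ppm, distances, bin_ppm=10.0, bin_ang=1.0,
               max_ppm=250.0, max_ang=15.0):
    """
    Bin a 3D-QSDAR-style fingerprint: every pair of carbon atoms contributes a point
    (shift_i, shift_j, interatomic distance), counted in a grid of congruent bins.
    """
    n_s = int(np.ceil(max_ppm / bin_ppm))
    n_d = int(np.ceil(max_ang / bin_ang))
    grid = np.zeros((n_s, n_s, n_d))
    for (i, si), (j, sj) in combinations(enumerate(shifts_ppm), 2):
        a, b = sorted((si, sj))            # order shifts so the fingerprint is symmetric
        ia = min(int(a // bin_ppm), n_s - 1)
        ib = min(int(b // bin_ppm), n_s - 1)
        idd = min(int(distances[i, j] // bin_ang), n_d - 1)
        grid[ia, ib, idd] += 1
    return grid.ravel()                    # flattened occupancy vector for PLS modelling

# Usage with a hypothetical 4-carbon fragment.
shifts = [21.3, 128.5, 135.2, 170.1]                                    # 13C shifts (ppm)
dist = np.abs(np.subtract.outer(np.arange(4.0), np.arange(4.0))) * 1.5  # toy distances (Angstrom)
print(qsdar_bins(shifts, dist).nonzero()[0])
```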

  9. 3D Reconfigurable NoC Multiprocessor Imaging Interferometer for Space Climate

    NASA Astrophysics Data System (ADS)

    Dekoulis, George

    2016-07-01

    This paper describes the development of an imaging interferometer for long-term observations of solar activity related events. Heliospheric physics phenomena are responsible for causing irregularities in the ionospheric-magnetospheric plasmasphere. Distinct signatures of these events are captured and studied over long periods of time, yielding crucial conclusions about short-term Space Weather and, in the long run, about Space Climate. The new prototype features an eight-channel implementation. The available hardware resources permit a 256-channel configuration for accurate beam scanning of the Earth's plasmasphere. A dual-polarization scheme has been implemented for obtaining accurate measurements. The system is based on state-of-the-art three-dimensional reconfigurable logic and exhibits a performance increase in the range of 70% compared to similar instruments in operation. Special circuits allow measurements of the most intense heliospheric physics events to be fully captured and analyzed.

  10. Neural correlates of visuospatial consciousness in 3D default space: insights from contralateral neglect syndrome.

    PubMed

    Jerath, Ravinder; Crawford, Molly W

    2014-08-01

    One of the most compelling questions still unanswered in neuroscience is how consciousness arises. In this article, we examine visual processing, the parietal lobe, and contralateral neglect syndrome as a window into consciousness and into how the brain functions as the mind, and we introduce a mechanism for the processing of visual information and its role in consciousness. We propose that consciousness arises from integration of information from throughout the body and brain by the thalamus, and that the thalamus reimages visual and other sensory information from throughout the cortex in a default three-dimensional space in the mind. We further suggest that the thalamus generates a dynamic default three-dimensional space by integrating processed information from corticothalamic feedback loops, creating an infrastructure that may form the basis of our consciousness. Further experimental evidence is needed to examine and support this hypothesis and the role of the thalamus, and to further elucidate the mechanism of consciousness.

  11. 3D Embedded Reconfigurable SoC for Expediting Magnetometric Space Missions

    NASA Astrophysics Data System (ADS)

    Dekoulis, George

    2016-07-01

    This paper describes the development of a state-of-the-art three-dimensional embedded reconfigurable System-on-Chip (SoC) for accelerating the design of future magnetometric space missions. This involves measurements of planetary magnetic fields or measurements of heliospheric physics events' signatures superimposed on the aggregate measurements of the stronger planetary fields. The functionality of the embedded core is fully customizable, therefore, its operation is independent of the magnetic sensor being used. Standard calibration procedures still apply for setting the magnetometer measurements to the desired initial state and removing any seriatim interference inferred by the adjacent environment. The system acts as a pathfinder for future high-resolution heliospheric space missions.

  12. Quantification of Shunt Volume Through Ventricular Septal Defect by Real-Time 3-D Color Doppler Echocardiography: An in Vitro Study.

    PubMed

    Zhu, Meihua; Ashraf, Muhammad; Tam, Lydia; Streiff, Cole; Kimura, Sumito; Shimada, Eriko; Sahn, David J

    2016-05-01

    Quantification of shunt volume is important for ventricular septal defects (VSDs). The aim of the in vitro study described here was to test the feasibility of using real-time 3-D color Doppler echocardiography (RT3-D-CDE) to quantify shunt volume through a modeled VSD. Eight porcine heart phantoms with VSDs ranging in diameter from 3 to 25 mm were studied. Each phantom was passively driven at five different stroke volumes from 30 to 70 mL and two stroke rates, 60 and 120 strokes/min. RT3-D-CDE full volumes were obtained at color Doppler volume rates of 15, 20 and 27 volumes/s. Shunt flow derived from RT3-D-CDE was linearly correlated with pump-driven stroke volume (R = 0.982). RT3-D-CDE-derived shunt volumes from the three color Doppler flow rate settings and the two stroke rate acquisitions did not differ (p > 0.05). The use of RT3-D-CDE to determine shunt volume through VSDs is feasible. Different color volume rates and heart rates within the clinically/physiologically relevant range have no effect on VSD 3-D shunt volume determination.

  13. Microwave and camera sensor fusion for the shape extraction of metallic 3D space objects

    NASA Technical Reports Server (NTRS)

    Shaw, Scott W.; Defigueiredo, Rui J. P.; Krishen, Kumar

    1989-01-01

    The vacuum of space presents special problems for optical image sensors. Metallic objects in this environment can produce intense specular reflections and deep shadows. By combining the polarized RCS with an incomplete camera image, it has become possible to better determine the shape of some simple three-dimensional objects. The radar data are used in an iterative procedure that generates successive approximations to the target shape by minimizing the error between computed scattering cross-sections and the observed radar returns. Favorable results have been obtained for simulations and experiments reconstructing plates, ellipsoids, and arbitrary surfaces.

  14. Multi-hole seismic modeling in 3-D space and cross-hole seismic tomography analysis for boulder detection

    NASA Astrophysics Data System (ADS)

    Cheng, Fei; Liu, Jiangping; Wang, Jing; Zong, Yuquan; Yu, Mingyu

    2016-11-01

    A boulder stone, a common geological feature in south China, refers to the remnant of a granite body that has been unevenly weathered. Undetected boulders could adversely impact the schedule and safety of subway construction when using the tunnel boring machine (TBM) method. Therefore, boulder detection has always been a key issue that must be solved before construction. Nowadays, cross-hole seismic tomography is a high resolution technique capable of boulder detection; however, the method can only solve for velocity in a 2-D slice between two wells, and the size and central position of the boulder are generally difficult to obtain accurately. In this paper, the authors conduct a multi-hole wave field simulation and characteristic analysis of a boulder model based on 3-D elastic wave staggered-grid finite difference theory, and also a 2-D imaging analysis based on first arrival travel time. The results indicate that (1) full wave field records can be obtained from multi-hole seismic wave simulations. The simulation results describe the seismic wave propagation pattern around cross-hole high-velocity spherical geological bodies in more detail and can serve as a basis for the wave field analysis. (2) When a cross-hole seismic section cuts through the boulder, the proposed method provides satisfactory cross-hole tomography results; however, when the section is positioned close to the boulder, such a high-velocity object in 3-D space affects the surrounding wave field. The received diffracted wave interferes with the primary wave, and in consequence the picked first arrival travel time is not derived from the profile, which results in a false appearance of high-velocity geological features. Finally, the results of the 2-D analysis in 3-D modeling space are compared with the physical model test with respect to the effect of a high-velocity body on the seismic tomographic measurements.
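
    A schematic sketch of the first-arrival picture used in the 2-D imaging step: straight-ray travel times are integrated through a gridded velocity slice containing a high-velocity boulder. The grid size, velocities and borehole geometry below are assumed values, and a real tomographic workflow would use bent rays or an eikonal solver rather than straight rays:

```python
import numpy as np

def straight_ray_time(vel, dx, src, rec, n_samp=400):
    """Integrate slowness along a straight ray from src to rec through a 2-D velocity grid."""
    pts = np.linspace(src, rec, n_samp)                 # sample points along the ray (m)
    idx = np.clip((pts / dx).astype(int), 0, np.array(vel.shape) - 1)
    slowness = 1.0 / vel[idx[:, 0], idx[:, 1]]
    seg = np.linalg.norm(rec - src) / (n_samp - 1)      # length of each ray segment
    return slowness.sum() * seg

# Toy slice between two boreholes about 20 m apart: 1800 m/s weathered granite
# hosting a 4 m wide high-velocity (5500 m/s) boulder.
dx = 0.5
vel = np.full((80, 40), 1800.0)        # (depth, offset) cells of 0.5 m
vel[30:38, 16:24] = 5500.0

src = np.array([10.0, 0.0])            # source in hole 1 at 10 m depth
for z in np.arange(5.0, 36.0, 5.0):    # receivers down hole 2
    rec = np.array([z, 19.5])
    print(f"receiver at {z:4.1f} m: {straight_ray_time(vel, dx, src, rec)*1e3:6.2f} ms")
```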

  15. A mapping of an ensemble of mitochondrial sequences for various organisms into 3D space based on the word composition.

    PubMed

    Aita, Takuyo; Nishigaki, Koichi

    2012-11-01

    To visualize a bird's-eye view of an ensemble of mitochondrial genome sequences for various species, we recently developed a novel method of mapping a biological sequence ensemble into three-dimensional (3D) vector space. First, we represented a biological sequence of a species s by a word-composition vector x(s), where its length |x(s)| represents the sequence length, its unit vector x(s)/|x(s)| represents the relative composition of the K-tuple words through the sequence, and the dimension N = 4^K is the number of all possible words of length K. Second, we mapped the vector x(s) to the 3D position vector y(s), based on the two following simple principles: (1) |y(s)| = |x(s)| and (2) the angle between y(s) and y(t) maximally correlates with the angle between x(s) and x(t). The mitochondrial genome sequences for 311 species, including 177 Animalia, 85 Fungi and 49 Green plants, were mapped into 3D space by using K = 7. The mapping was successful because the angles between vectors before and after the mapping highly correlated with each other (correlation coefficients were 0.92-0.97). Interestingly, the Animalia kingdom is distributed along a single arc belt (just like the Milky Way on a celestial globe), and the Fungi and Green plant kingdoms are distributed in a similar arc belt. These two arc belts intersect at their respective middle regions and form a cross structure just like a jet aircraft fuselage and its wings. This new mapping method will allow researchers to intuitively interpret the visual information presented in the maps in a highly effective manner.
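
    A minimal sketch of the word-composition representation described above: the K-tuple count vector of a sequence is rescaled so that its norm equals the sequence length while its direction encodes the relative composition. The toy random sequences and the small K used in the usage example are illustrative assumptions:

```python
import numpy as np
from itertools import product

def composition_vector(seq: str, k: int = 7) -> np.ndarray:
    """x(s): direction = relative K-tuple composition, norm = sequence length."""
    words = ["".join(w) for w in product("ACGT", repeat=k)]   # N = 4**k dimensions
    index = {w: i for i, w in enumerate(words)}
    counts = np.zeros(len(words))
    for i in range(len(seq) - k + 1):
        counts[index[seq[i:i + k]]] += 1
    return counts * (len(seq) / np.linalg.norm(counts))

def angle(x, y):
    """Angle between two composition vectors (what the 3D mapping tries to preserve)."""
    return np.degrees(np.arccos(np.clip(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)), -1, 1)))

# Usage with short toy sequences and a small K so the example stays light-weight.
rng = np.random.default_rng(0)
s1 = "".join(rng.choice(list("ACGT"), 500))
s2 = "".join(rng.choice(list("ACGT"), 500))
x1, x2 = composition_vector(s1, k=3), composition_vector(s2, k=3)
print(np.linalg.norm(x1), angle(x1, x2))
```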

  16. A mapping of an ensemble of mitochondrial sequences for various organisms into 3D space based on the word composition.

    PubMed

    Aita, Takuyo; Nishigaki, Koichi

    2012-11-01

    To visualize a bird's-eye view of an ensemble of mitochondrial genome sequences for various species, we recently developed a novel method of mapping a biological sequence ensemble into three-dimensional (3D) vector space. First, we represented a biological sequence of a species s by a word-composition vector x(s), where its length |x(s)| represents the sequence length, its unit vector x(s)/|x(s)| represents the relative composition of the K-tuple words through the sequence, and the dimension N = 4^K is the number of all possible words of length K. Second, we mapped the vector x(s) to the 3D position vector y(s), based on the two following simple principles: (1) |y(s)| = |x(s)| and (2) the angle between y(s) and y(t) maximally correlates with the angle between x(s) and x(t). The mitochondrial genome sequences for 311 species, including 177 Animalia, 85 Fungi and 49 Green plants, were mapped into 3D space by using K = 7. The mapping was successful because the angles between vectors before and after the mapping highly correlated with each other (correlation coefficients were 0.92-0.97). Interestingly, the Animalia kingdom is distributed along a single arc belt (just like the Milky Way on a celestial globe), and the Fungi and Green plant kingdoms are distributed in a similar arc belt. These two arc belts intersect at their respective middle regions and form a cross structure just like a jet aircraft fuselage and its wings. This new mapping method will allow researchers to intuitively interpret the visual information presented in the maps in a highly effective manner. PMID:22776549

  17. Architecture of web services in the enhancement of real-time 3D video virtualization in cloud environment

    NASA Astrophysics Data System (ADS)

    Bada, Adedayo; Wang, Qi; Alcaraz-Calero, Jose M.; Grecos, Christos

    2016-04-01

    This paper proposes a new approach to improving the application of 3D video rendering and streaming by jointly exploring and optimizing both cloud-based virtualization and web-based delivery. The proposed web service architecture first establishes a software virtualization layer based on QEMU (Quick Emulator), open-source virtualization software that is able to virtualize system components except for 3D rendering, which is still in its infancy. The architecture then exploits the cloud environment to boost the speed of the rendering at the QEMU software virtualization layer. The capabilities and inherent limitations of Virgil 3D, which is one of the most advanced 3D virtual Graphics Processing Units (GPUs) available, are analyzed through benchmarking experiments and integrated into the architecture to further speed up the rendering. Experimental results are reported and analyzed to demonstrate the benefits of the proposed approach.

  18. Quantum 3D spin-glass system on the scales of space-time periods of external electromagnetic fields

    SciTech Connect

    Gevorkyan, A. S.

    2012-10-15

    A dielectric medium consisting of rigidly polarized molecules has been treated as a quantum 3D disordered spin system. It is shown that, using Birkhoff's ergodic hypothesis, the initial 3D disordered spin problem on the scales of the space-time periods of the external field is reduced to two conditionally separable 1D problems. The first problem describes a 1D disordered N-particle quantum system with relaxation in a random environment, while the second one describes the statistical properties of an ensemble of disordered 1D steric spin chains of a certain length. Based on the constructions developed in both problems, the polarizability coefficient related to collective orientational effects under the influence of the external field was calculated. On the basis of these investigations the Clausius-Mossotti (CM) equation has been generalized, as well as the equation for the permittivity. It is shown that, under the influence of weak standing electromagnetic fields, a catastrophe can arise in the CM equation, which can substantially change the behavior of the permittivity in the X-ray region on the macroscopic scale of space.

  19. Real-time 3D image reconstruction of a 24×24 row-column addressing array: from raw data to image

    NASA Astrophysics Data System (ADS)

    Li, Chunyu; Yang, Jiali; Li, Xu; Zhong, Xiaoli; Song, Junjie; Ding, Mingyue; Yuchi, Ming

    2016-04-01

    This paper presents real-time 3-D image reconstruction for a 7.5-MHz, 24×24 row-column addressing array transducer. The transducer works with a predesigned transmit/receive module. After the raw data are captured by the NI PXIe data acquisition (DAQ) module, the following processing steps are performed: delay and sum (DAS), baseline calibration, envelope detection, logarithmic compression, down-sampling, gray-scale mapping and 3-D display. These steps are optimized for obtaining real-time 3-D images. A fixed-point focusing scheme is applied in delay and sum (DAS) to obtain line data from channel data. A zero-phase high-pass filter is used to correct the baseline shift of the echo. The classical Hilbert transform is adopted to detect the envelopes of the echoes. Logarithmic compression is applied to amplify the weak signals and narrow the gap between them and the strong ones. Down-sampling reduces the amount of data to improve the processing speed. Linear gray-scale mapping is introduced so that the weakest signal is mapped to 0 and the strongest to 255. The real-time 3-D images are displayed in multi-planar mode, which shows three orthogonal sections (vertical, coronal and transverse). A trigger signal is sent from the transmit/receive module to the DAQ module at the start of each volume data generation to ensure synchronization between the two modules. All procedures, including data acquisition (DAQ), signal processing and image display, are programmed on the LabVIEW platform. 675 MB of raw echo data are acquired in one minute to generate 24×24×48 3-D images at 27 fps. An experiment on a strongly reflecting object (an aluminum slice) shows the feasibility of the whole process from raw data to real-time 3-D images.
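
    A simplified sketch of the per-line post-beamforming steps named above (baseline removal with a zero-phase high-pass filter, Hilbert envelope detection, logarithmic compression and linear gray-scale mapping); the filter order, cut-off, dynamic range and the synthetic echo are assumptions, not the LabVIEW implementation:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 40e6                                   # assumed sampling rate (Hz)

def process_line(rf_line, dyn_range_db=40.0):
    """RF line -> baseline-corrected, envelope-detected, log-compressed 8-bit gray values."""
    b, a = butter(2, 1e6 / (fs / 2), btype="highpass")
    rf = filtfilt(b, a, rf_line)            # zero-phase high-pass removes the baseline shift
    env = np.abs(hilbert(rf))               # envelope via the analytic signal
    db = 20.0 * np.log10(env / env.max() + 1e-12)
    db = np.clip(db, -dyn_range_db, 0.0)    # logarithmic compression to a fixed dynamic range
    return np.uint8(255 * (db + dyn_range_db) / dyn_range_db)  # linear gray-scale mapping

# Usage: a synthetic 7.5 MHz echo riding on a slowly drifting baseline.
t = np.arange(0, 20e-6, 1 / fs)
rf = 0.2 * t * 1e5 + np.exp(-((t - 10e-6) ** 2) / (2 * (0.5e-6) ** 2)) * np.sin(2 * np.pi * 7.5e6 * t)
gray = process_line(rf)
print(gray.min(), gray.max())
```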

  20. Multiresolutional schemata for unsupervised learning of autonomous robots for 3D space operation

    NASA Technical Reports Server (NTRS)

    Lacaze, Alberto; Meystel, Michael; Meystel, Alex

    1994-01-01

    This paper describes a novel approach to the development of a learning control system for an autonomous space robot (ASR), which presents the ASR as a 'baby', that is, a system with no a priori knowledge of the world in which it operates, but with behavior acquisition techniques that allow it to build this knowledge from the experience of acting within a particular environment (we call it an Astro-baby). The learning techniques are rooted in a recursive algorithm for inductive generation of nested schemata molded on processes of early cognitive development in humans. The algorithm extracts data from the environment and, by means of correlation and abduction, creates schemata that are used for control. The system is robust enough to deal with a constantly changing environment because such changes provoke the creation of new schemata by generalizing from experiences, while still maintaining minimal computational complexity, thanks to the system's multiresolutional nature.

  1. Commentary on accessing 3-D currents in space: Experiences from Cluster

    NASA Astrophysics Data System (ADS)

    Dunlop, M. W.; Haaland, S.; Escoubet, P. C.; Dong, X.-C.

    2016-08-01

    The curlometer was introduced to estimate the electric current density from four-point measurements in space, anticipating the realization of the four-spacecraft Cluster mission, which began full science operations in February 2001. The method uses Ampère's law to estimate the current from the magnetic field measurements, suitable for the high-conductivity plasma of the magnetosphere and surrounding regions. The accuracy of the method is limited by the knowledge of the spatial separations, the accuracy of the magnetic field measurement, and the relative scale size of the current structures sampled, but it has nevertheless proven to be robust and reliable in many regions of the magnetosphere. The method has been applied successfully, and has been a key element, in studies of the magnetopause currents, the magnetotail current sheet, and the ring current, as well as allowing other current structures such as flux tubes and field-aligned currents to be determined. The method is also applicable to situations where fewer than four spacecraft are closely grouped or where special assumptions (particularly stationarity) can be made. In view of the new four-point observations of the MMS mission taking place now, which cover a dramatically different spatial regime, we comment on the performance, adaptability, and lessons learnt from the curlometer technique. We emphasize the adaptability of the method, in particular to the new sampling regime offered by the MMS mission, thereby offering a tool to address open questions on small-scale current structures.
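
    A compact sketch of the four-point estimate underlying the curlometer: differences of B between spacecraft give a linear estimate of the magnetic gradient tensor, its antisymmetric part gives curl B, Ampère's law converts that to a current density, and div B serves as the usual quality indicator. The tetrahedron geometry and field values below are synthetic, and real applications include further corrections not shown here:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

def curlometer(positions, b_fields):
    """
    positions: (4, 3) spacecraft positions in m; b_fields: (4, 3) magnetic field in T.
    Returns current density J (A/m^2) and |div B| / |curl B| as a quality indicator.
    """
    dr = positions[1:] - positions[0]            # 3 separation vectors
    db = b_fields[1:] - b_fields[0]              # 3 field differences
    grad = np.linalg.solve(dr, db)               # grad[a, b] ~ dB_b/dx_a (linear field assumed)
    curl = np.array([grad[1, 2] - grad[2, 1],
                     grad[2, 0] - grad[0, 2],
                     grad[0, 1] - grad[1, 0]])
    div_b = np.trace(grad)
    return curl / MU0, abs(div_b) / np.linalg.norm(curl)

# Usage: a synthetic current sheet B = (B0 * z / L, 0, 0) carries J_y = B0 / (mu0 * L).
B0, L = 20e-9, 1e6
tetra = np.array([[0, 0, 0], [100e3, 0, 0], [0, 100e3, 0], [0, 0, 100e3]], float)
B = np.array([[B0 * r[2] / L, 0.0, 0.0] for r in tetra])
J, quality = curlometer(tetra, B)
print(J, quality)
```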

  2. A simple 3D plasma instrument with an electrically adjustable geometric factor for space research

    NASA Astrophysics Data System (ADS)

    Rohner, U.; Saul, L.; Wurz, P.; Allegrini, F.; Scheer, J.; McComas, D.

    2012-02-01

    We report on the design and experimental verification of a novel charged particle detector and an energy spectrometer with variable geometric factor functionality. Charged particle populations in the inner heliosphere create fluxes that can vary over many orders of magnitude in flux intensity. Space missions that plan to observe plasma fluxes, for example when travelling close to the Sun or to a planetary magnetosphere, require rapid particle measurements over the full three-dimensional velocity distribution. Traditionally, such measurements are carried out with plasma instrumentation with a fixed geometrical factor, which can only operate in a limited range of flux intensity. Here we report on the design and testing of a prototype sensor, which is capable of measuring particle flux with high angular and energy resolution, yet has a variable geometric factor that is controlled without moving parts. This prototype was designed in support of a proposal to make fast electron measurements on the Solar Probe Plus (SP+) mission planned by NASA. We simulated the ion optics inside the instrument and optimized the performance to design and build our prototype. This prototype was then tested in the MEFISTO facility at the University of Bern and its performance was verified over the full range of azimuth, elevation, energy and intensity.

  3. 1D-3D hybrid modeling-from multi-compartment models to full resolution models in space and time.

    PubMed

    Grein, Stephan; Stepniewski, Martin; Reiter, Sebastian; Knodel, Markus M; Queisser, Gillian

    2014-01-01

    Investigation of cellular and network dynamics in the brain by means of modeling and simulation has evolved into a highly interdisciplinary field that uses sophisticated modeling and simulation approaches to understand distinct areas of brain function. Depending on the underlying complexity, these models vary in their level of detail in order to cope with the attached computational cost. Hence, for large network simulations, single neurons are typically reduced to time-dependent signal processors, dismissing the spatial aspect of each cell. For single cells or networks with relatively small numbers of neurons, general purpose simulators allow for space- and time-dependent simulations of electrical signal processing, based on cable equation theory. An emerging field in Computational Neuroscience encompasses a new level of detail by incorporating the full three-dimensional morphology of cells and organelles into three-dimensional, space- and time-dependent simulations. While every approach has its advantages and limitations, such as computational cost, integrated and methods-spanning simulation approaches could, depending on the network size, establish new ways to investigate the brain. In this paper we present a hybrid simulation approach that makes use of reduced 1D models (using, e.g., the NEURON simulator) which couple to fully resolved models for simulating cellular and sub-cellular dynamics, including the detailed three-dimensional morphology of neurons and organelles. In order to couple 1D and 3D simulations, we present a geometry-, membrane potential- and intracellular concentration mapping framework, with which graph-based morphologies, e.g., in the swc- or hoc-format, are mapped to full surface and volume representations of the neuron, and computational data from 1D simulations can be used as boundary conditions for full 3D simulations and vice versa. Thus, established models and data, based on general purpose 1D simulators, can be directly coupled to the
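
    A minimal sketch of the 1D side of such a coupling: a graph-based SWC morphology is read into compartments whose frustum surface areas could then carry boundary data onto a full 3D surface mesh. The inline morphology is a toy example, and the actual mapping onto a 3D surface or volume grid described in the paper is not reproduced here:

```python
import numpy as np

def read_swc(lines):
    """Parse SWC rows (id, type, x, y, z, radius, parent) into a compartment table."""
    nodes = {}
    for ln in lines:
        ln = ln.strip()
        if not ln or ln.startswith("#"):
            continue
        nid, ntype, x, y, z, r, parent = ln.split()
        nodes[int(nid)] = (np.array([float(x), float(y), float(z)]), float(r), int(parent))
    return nodes

def frustum_segments(nodes):
    """Yield (length, lateral surface area) of every parent-child segment."""
    for nid, (p, r, parent) in nodes.items():
        if parent == -1 or parent not in nodes:
            continue
        q, rq, _ = nodes[parent]
        h = np.linalg.norm(p - q)
        slant = np.hypot(h, r - rq)
        yield h, np.pi * (r + rq) * slant   # lateral area of a truncated cone

# Usage with a tiny toy morphology (units: micrometres).
swc = """# toy neuron
1 1 0 0 0 5 -1
2 3 10 0 0 2 1
3 3 20 0 0 1 2
"""
nodes = read_swc(swc.splitlines())
lengths, areas = zip(*frustum_segments(nodes))
print(sum(lengths), sum(areas))
```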

  4. 1D-3D hybrid modeling—from multi-compartment models to full resolution models in space and time

    PubMed Central

    Grein, Stephan; Stepniewski, Martin; Reiter, Sebastian; Knodel, Markus M.; Queisser, Gillian

    2014-01-01

    Investigation of cellular and network dynamics in the brain by means of modeling and simulation has evolved into a highly interdisciplinary field that uses sophisticated modeling and simulation approaches to understand distinct areas of brain function. Depending on the underlying complexity, these models vary in their level of detail in order to cope with the attached computational cost. Hence, for large network simulations, single neurons are typically reduced to time-dependent signal processors, dismissing the spatial aspect of each cell. For single cells or networks with relatively small numbers of neurons, general purpose simulators allow for space- and time-dependent simulations of electrical signal processing, based on cable equation theory. An emerging field in Computational Neuroscience encompasses a new level of detail by incorporating the full three-dimensional morphology of cells and organelles into three-dimensional, space- and time-dependent simulations. While every approach has its advantages and limitations, such as computational cost, integrated and methods-spanning simulation approaches could, depending on the network size, establish new ways to investigate the brain. In this paper we present a hybrid simulation approach that makes use of reduced 1D models (using, e.g., the NEURON simulator) which couple to fully resolved models for simulating cellular and sub-cellular dynamics, including the detailed three-dimensional morphology of neurons and organelles. In order to couple 1D and 3D simulations, we present a geometry-, membrane potential- and intracellular concentration mapping framework, with which graph-based morphologies, e.g., in the swc- or hoc-format, are mapped to full surface and volume representations of the neuron, and computational data from 1D simulations can be used as boundary conditions for full 3D simulations and vice versa. Thus, established models and data, based on general purpose 1D simulators, can be directly coupled to

  5. The DOSIS and DOSIS 3D Experiments onboard the International Space Station - Results from the Active DOSTEL Instruments

    NASA Astrophysics Data System (ADS)

    Burmeister, Soenke; Berger, Thomas; Reitz, Guenther; Beaujean, Rudolf; Boehme, Matthias; Haumann, Lutz; Labrenz, Johannes; Kortmann, Onno

    2012-07-01

    Besides the effects of the microgravity environment and the psychological and psychosocial problems experienced in confined spaces, radiation is the main health detriment for long duration human space missions. The radiation environment encountered in space differs in nature from that on Earth, consisting mostly of highly energetic ions from protons up to iron, resulting in radiation levels far exceeding those present on Earth for occupational radiation workers. Accurate knowledge of the physical characteristics of the space radiation field as a function of solar activity, orbital parameters and the different shielding configurations of the International Space Station (ISS) is therefore needed. For the investigation of the spatial and temporal distribution of the radiation field inside the European COLUMBUS module, the experiment DOSIS (Dose Distribution Inside the ISS), under the lead of DLR, was launched on July 15th 2009 with STS-127 to the ISS. The experimental package was transferred from the Space Shuttle into COLUMBUS on July 18th. It consists of a combination of passive detector packages (PDP) distributed at 11 locations inside the European Columbus Laboratory and two active radiation detectors (DOSTELs) with a DDPU (DOSTEL Data and Power Unit) in a nomex pouch (DOSIS MAIN BOX) mounted at a fixed location beneath the European Physiology Module rack (EPM) inside COLUMBUS. The DOSTELs measured during the lowest solar minimum conditions of the space age from July 18th 2009 to June 16th 2011. In July 2011 the active hardware was transferred to ground for refurbishment and preparation for the DOSIS-3D experiment. The hardware will be launched with the Soyuz 30S flight to the ISS on May 15th 2012 and activated approximately ten days later. Data will be transferred from the DOSTEL units to ground via the EPM rack, which is activated approximately every four weeks for this purpose. First results from the active DOSIS-3D measurements such as count rate profiles

  6. An Augmented Reality based 3D Catalog

    NASA Astrophysics Data System (ADS)

    Yamada, Ryo; Kishimoto, Katsumi

    This paper presents a 3D catalog system that uses Augmented Reality technology. The use of Web-based catalog systems that present products in 3D form is increasing in various fields, along with the rapid and widespread adoption of Electronic Commerce. However, 3D shapes could previously only be seen in a virtual space, and it was difficult to understand how the products would actually look in the real world. To solve this, we propose a method that combines the virtual and real worlds simply and intuitively. The method applies Augmented Reality technology, and the system developed based on the method enables users to evaluate 3D virtual products in a real environment.

  7. Educational use of 3D models and photogrammetry content: the Europeana space project for Cypriot UNESCO monuments

    NASA Astrophysics Data System (ADS)

    Ioannides, M.; Chatzigrigoriou, P.; Bokolas, V.; Nikolakopoulou, V.; Athanasiou, V.

    2016-08-01

    Digital heritage data are now more accessible through crowdsourcing platforms, social media and blogs. At the same time, evolving technology in 3D modelling, laser scanning and 3D reconstruction is constantly upgrading and multiplying the information that we can use from heritage digitisation. The question of how to reuse this information in different contexts arises. Educators and students are potential users of the digital content; developing for them an adaptable environment for applications and services is our challenge. One of the main objectives of the EU Europeana Space project is the development of a holistic approach for educating people (adults and children) about monuments in Cyprus that are listed on the UNESCO World Heritage list. The challenge was to use Europeana data (pictures and 3D objects) in such a way that the information on the platform would be comprehensible to the users. Most of the data have little metadata and lack historical and cultural value descriptions (semantics). The proposed model is based on a cross-cultural approach which responds to the multicultural features of the present era as well as to contemporary pedagogical and methodological directions. The system uses these innovative digital heritage resources to help the user, in a user-friendly way, to learn about the different phases of the monument, its history, pathology state, architectural value and conservation stage. The result is a responsive platform, accessible through smart devices and desktop computers (in the frame of "Bring Your Own Device", a.k.a. BYOD), where every monument is a different course and every course is addressed to different age groups (from elementary level to adults' vocational training).

  8. Real-Time 3D Fluoroscopy-Guided Large Core Needle Biopsy of Renal Masses: A Critical Early Evaluation According to the IDEAL Recommendations

    SciTech Connect

    Kroeze, Stephanie G. C.; Huisman, Merel; Verkooijen, Helena M.; Diest, Paul J. van; Ruud Bosch, J. L. H.; Bosch, Maurice A. A. J. van den

    2012-06-15

    Introduction: Three-dimensional (3D) real-time fluoroscopy cone beam CT is a promising new technique for image-guided biopsy of solid tumors. We evaluated the technical feasibility, diagnostic accuracy, and complications of this technique for guidance of large-core needle biopsy in patients with suspicious renal masses. Methods: Thirteen patients with 13 suspicious renal masses underwent large-core needle biopsy under 3D real-time fluoroscopy cone beam CT guidance. Imaging acquisition and subsequent 3D reconstruction was done by a mobile flat-panel detector (FD) C-arm system to plan the needle path. Large-core needle biopsies were taken by the interventional radiologist. Technical success, accuracy, and safety were evaluated according to the Innovation, Development, Exploration, Assessment, Long-term study (IDEAL) recommendations. Results: Median tumor size was 2.6 (range, 1.0-14.0) cm. In ten (77%) patients, the histological diagnosis corresponded to the imaging findings: five were malignancies, five benign lesions. Technical feasibility was 77% (10/13); in three patients biopsy results were inconclusive. The lesion size of these three patients was <2.5 cm. One patient developed a minor complication. Median follow-up was 16.0 (range, 6.4-19.8) months. Conclusions: 3D real-time fluoroscopy cone beam CT-guided biopsy of renal masses is feasible and safe. However, these first results suggest that diagnostic accuracy may be limited in patients with renal masses <2.5 cm.

  9. Apply Multi-baseline SAR Interferometry on Long Term Space-borne SAR Data for 3-D Reconstruction in Forest and Urban Areas

    NASA Astrophysics Data System (ADS)

    Lin, Q.; Zebker, H. A.

    2014-12-01

    Multi-baseline Synthetic Aperture Radar (MB SAR) tomography is a promising extension of traditional SAR interferometry. By coherently combining SAR images acquired from different baseline locations, MB SAR tomography can achieve unprecedented full 3-D imaging of volumetric and layover scatterers for each SAR cell. Its capability for 3-D reflectivity reconstruction and separation of multiple scatterers is enormously helpful for different scientific applications in forestry, agriculture, glaciology, etc. However, when applied to repeat-pass space-borne interferometric datasets, Fourier-based MB SAR tomography is generally affected by unsatisfactory imaging quality due to the low number of unevenly distributed baselines, atmospheric phase disturbances and temporal decorrelation. In this paper, we propose several signal processing techniques for overcoming these limitations in order to obtain better image quality: (1) we develop a robust interpolator to translate the nonuniform grid to a uniform one, which largely improves the image quality; (2) we apply the robust Capon spectrum estimation method to improve the resolution and to handle uncertainty in the steering matrix; (3) for the atmospheric disturbance and radiometric calibration, we select a flat, known area of the image to estimate the atmospheric offset. We first test our approach on simulated SAR data. Compared with the Fourier-based method, the result shows better sidelobe suppression and robustness to unknown multiplicative phase noise. Finally, we test the algorithm using real ALOS PALSAR L-band data acquired between August 2009 and February 2011 near the Harvard Forest area, MA, USA.
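
    A schematic sketch of the elevation-focusing step, contrasting plain Fourier beamforming with a diagonally loaded Capon spectrum on a synthetic stack of baselines; the wavelength, range, baselines and scatterer height are assumed values, and the robust interpolation and phase-calibration steps proposed in the paper are not reproduced:

```python
import numpy as np

lam, r0 = 0.236, 7.5e5                # L-band wavelength (m) and slant range (m), assumed values
b_perp = np.linspace(-1500, 1500, 9)  # perpendicular baselines of the image stack (m), assumed
kz = 4 * np.pi * b_perp / (lam * r0)  # elevation wavenumber for each baseline

def steering(s):
    """Steering vector of a scatterer at elevation s above the reference plane."""
    return np.exp(1j * kz * s)

# Simulate L looks of a single scatterer at 18 m elevation plus noise.
rng = np.random.default_rng(2)
L, s_true = 40, 18.0
Y = steering(s_true)[:, None] * (rng.normal(size=L) + 1j * rng.normal(size=L))
Y += 0.1 * (rng.normal(size=(9, L)) + 1j * rng.normal(size=(9, L)))

# Sample covariance with diagonal loading, then Fourier and Capon elevation spectra.
R = Y @ Y.conj().T / L + 1e-2 * np.eye(9) * np.trace(Y @ Y.conj().T).real / (9 * L)
Rinv = np.linalg.inv(R)
s_axis = np.linspace(-50, 80, 261)
fourier = [np.mean(np.abs(steering(s).conj() @ Y)) for s in s_axis]
capon = [1.0 / (steering(s).conj() @ Rinv @ steering(s)).real for s in s_axis]
print("Fourier peak:", s_axis[np.argmax(fourier)], "m  Capon peak:", s_axis[np.argmax(capon)], "m")
```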

  10. The differential algebra based multiple level fast multipole algorithm for 3D space charge field calculation and photoemission simulation

    DOE PAGES

    None, None

    2015-09-28

    Coulomb interaction between charged particles inside a bunch is one of the most important collective effects in beam dynamics, becoming even more significant as the energy of the particle beam is lowered to accommodate analytical and low-Z material imaging purposes such as in the time-resolved Ultrafast Electron Microscope (UEM) development currently underway at Michigan State University. In addition, space charge effects are the key limiting factor in the development of ultrafast atomic resolution electron imaging and diffraction technologies and are also correlated with an irreversible growth in rms beam emittance due to fluctuating components of the nonlinear electron dynamics. In the short pulse regime used in the UEM, space charge effects also lead to virtual cathode formation, in which the negative charge of the electrons emitted at earlier times, combined with the attractive surface field, hinders further emission of particles and causes a degradation of the pulse properties. Space charge and virtual cathode effects and their remediation are core issues for the development of the next generation of high-brightness UEMs. Since analytical models are only applicable in special cases, numerical simulations, in addition to experiments, are usually necessary to accurately understand the space charge effect. In this paper we introduce a grid-free differential algebra based multiple level fast multipole algorithm, which calculates the 3D space charge field for n charged particles in an arbitrary distribution with an efficiency of O(n), and the implementation of the algorithm in a simulation code for space-charge-dominated photoemission processes.

  11. The differential algebra based multiple level fast multipole algorithm for 3D space charge field calculation and photoemission simulation

    SciTech Connect

    None, None

    2015-09-28

    Coulomb interaction between charged particles inside a bunch is one of the most important collective effects in beam dynamics, becoming even more significant as the energy of the particle beam is lowered to accommodate analytical and low-Z material imaging purposes such as in the time-resolved Ultrafast Electron Microscope (UEM) development currently underway at Michigan State University. In addition, space charge effects are the key limiting factor in the development of ultrafast atomic resolution electron imaging and diffraction technologies and are also correlated with an irreversible growth in rms beam emittance due to fluctuating components of the nonlinear electron dynamics. In the short pulse regime used in the UEM, space charge effects also lead to virtual cathode formation, in which the negative charge of the electrons emitted at earlier times, combined with the attractive surface field, hinders further emission of particles and causes a degradation of the pulse properties. Space charge and virtual cathode effects and their remediation are core issues for the development of the next generation of high-brightness UEMs. Since analytical models are only applicable in special cases, numerical simulations, in addition to experiments, are usually necessary to accurately understand the space charge effect. In this paper we introduce a grid-free differential algebra based multiple level fast multipole algorithm, which calculates the 3D space charge field for n charged particles in an arbitrary distribution with an efficiency of O(n), and the implementation of the algorithm in a simulation code for space-charge-dominated photoemission processes.
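
    For orientation, here is a minimal brute-force O(n²) sketch of the 3D space-charge field that the grid-free fast multipole algorithm described above is designed to evaluate in O(n); the softening parameter, units and function names are illustrative assumptions, not part of the paper's algorithm.

```python
# Brute-force O(n^2) reference for the 3D space-charge (Coulomb) field; the
# fast multipole method summarised above reduces this cost to O(n).
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def space_charge_field(positions, charges, softening=1e-9):
    """Electric field (V/m) at each particle due to all other particles.

    positions : (n, 3) array of particle coordinates in metres
    charges   : (n,) array of charges in coulombs
    """
    n = positions.shape[0]
    E = np.zeros((n, 3))
    for i in range(n):
        r = positions[i] - positions           # vectors from every source to target i
        d2 = np.sum(r * r, axis=1) + softening**2
        d2[i] = np.inf                         # exclude self-interaction
        E[i] = np.sum(charges[:, None] * r / d2[:, None]**1.5, axis=0)
    return E / (4 * np.pi * EPS0)
```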

  12. A 3D boundary integral equation method for ultrasonic scattering in a fluid-loaded elastic half space

    NASA Astrophysics Data System (ADS)

    Kimoto, K.; Hirose, S.

    2002-05-01

    This paper presents a boundary integral equation method for 3D ultrasonic scattering problems in a fluid-loaded elastic half space. Since a full-scale numerical calculation using the finite element or boundary element method is still very expensive, we formulate a boundary integral equation for the scattered field that is amenable to numerical treatment. In order to solve the problem using the integral equation, however, the wave field without scattering objects, the so-called free field, needs to be given in advance. We calculate the free field by the plane wave spectral method, where an asymptotic approximation is introduced for computational efficiency. To show the efficiency of our method, scattering by a spherical cavity near the fluid-solid interface is solved and the validity of the results is discussed.

  13. Simulated and Real Sheet-of-Light 3D Object Scanning Using a-Si:H Thin Film PSD Arrays

    PubMed Central

    Contreras, Javier; Tornero, Josep; Ferreira, Isabel; Martins, Rodrigo; Gomes, Luis; Fortunato, Elvira

    2015-01-01

    A MATLAB/SIMULINK software simulation model (structure and component blocks) has been constructed in order to view and analyze the potential of the PSD (Position Sensitive Detector) array concept technology before it is further expanded or developed. This simulation allows changing most of its parameters, such as the number of elements in the PSD array, the direction of vision, the viewing/scanning angle, the object rotation, translation, sample/scan/simulation time, etc. In addition, results show for the first time the possibility of scanning an object in 3D when using an a-Si:H thin film 128 PSD array sensor and hardware/software system. Moreover, this sensor technology is able to perform these scans and render 3D objects at high speeds and high resolutions when using a sheet-of-light laser within a triangulation platform. As shown by the simulation, a substantial enhancement in 3D object profile image quality and realism can be achieved by increasing the number of elements of the PSD array sensor as well as by achieving an optimal position response from the sensor, since the definition of the 3D object profile clearly depends on the correct and accurate position response of each detector as well as on the size of the PSD array. PMID:26633403
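
    As a hedged sketch of the sheet-of-light triangulation such a PSD array performs, the relation below recovers depth from the stripe position reported by a single detector element; the geometry convention and parameter names are assumptions for illustration and are not taken from the paper or its SIMULINK model.

```python
# Minimal sketch of sheet-of-light triangulation for one PSD element.
# Baseline, focal length and laser angle are placeholder parameters.
import math

def depth_from_psd(x_psd, focal_length, baseline, laser_angle_rad):
    """Depth (same units as baseline) of the laser stripe imaged at x_psd.

    x_psd           : lateral position on the detector where the stripe is sensed
    focal_length    : lens focal length (same units as x_psd)
    baseline        : lateral offset between laser source and camera optical centre
    laser_angle_rad : angle of the laser sheet relative to the optical axis,
                      tilted toward the camera axis in this convention
    """
    # Intersect the camera viewing ray (from the imaged position) with the
    # known laser plane: z * tan(view) = baseline - z * tan(laser).
    view_angle = math.atan2(x_psd, focal_length)
    return baseline / (math.tan(laser_angle_rad) + math.tan(view_angle))
```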

  14. Bedside assistance in freehand ultrasonic diagnosis by real-time visual feedback of 3D scatter diagram of pulsatile tissue-motion

    NASA Astrophysics Data System (ADS)

    Fukuzawa, M.; Kawata, K.; Nakamori, N.; Kitsunezuka, Y.

    2011-03-01

    By real-time visual feedback of a 3D scatter diagram of pulsatile tissue-motion, freehand ultrasonic diagnosis of neonatal ischemic diseases has been assisted at the bedside. The 2D ultrasonic movie was taken with a conventional ultrasonic apparatus (ATL HDI5000) and 5-7 MHz ultrasonic probes carrying a compact tilt-sensor to measure the probe orientation. The real-time 3D visualization was realized by developing an extended version of the PC-based visualization system. The software was originally developed on the DirectX platform and optimized with the streaming SIMD extensions. The 3D scatter diagram of the latest pulsatile tissues has been continuously generated and visualized as a projection image, together with the ultrasonic movie of the current section, at more than 15 fps. It revealed the 3D structure of pulsatile tissues such as the middle and posterior cerebral arteries, the Willis ring and the cerebellar arteries, whose blood flow is of great interest to pediatricians because asphyxiated and/or low-birth-weight neonates have a high risk of ischemic diseases such as hypoxic-ischemic encephalopathy and periventricular leukomalacia. Since the pulsatile tissue-motion is due to local blood flow, it can be concluded that the system developed in this work is very useful for assisting freehand ultrasonic diagnosis of ischemic diseases in the neonatal cranium.

  15. Semi-automatic characterization of fractured rock masses using 3D point clouds: discontinuity orientation, spacing and SMR geomechanical classification

    NASA Astrophysics Data System (ADS)

    Riquelme, Adrian; Tomas, Roberto; Abellan, Antonio; Cano, Miguel; Jaboyedoff, Michel

    2015-04-01

    Investigation of fractured rock masses for different geological applications (e.g. fractured reservoir exploitation, rock slope instability, rock engineering, etc.) requires a deep geometric understanding of the discontinuity sets affecting rock exposures. Recent advances in 3D data acquisition using photogrammetric and/or LiDAR techniques currently allow a quick and accurate characterization of rock mass discontinuities. This contribution presents a methodology for: (a) the use of 3D point clouds for the identification and analysis of planar surfaces outcropping in a rocky slope; (b) the calculation of the spacing between different discontinuity sets; (c) the semi-automatic calculation of the parameters that play a capital role in the Slope Mass Rating geomechanical classification. As for part (a) (discontinuity orientation), our proposal identifies and defines the algebraic equations of the different discontinuity sets of the rock slope surface by applying an analysis based on a neighbouring-points coplanarity test. Additionally, the procedure finds principal orientations by Kernel Density Estimation and identifies clusters (Riquelme et al., 2014). As a result of this analysis, each point is classified with a discontinuity set and with an outcrop plane (cluster). Regarding part (b) (discontinuity spacing), our proposal utilises the previously classified point cloud to investigate how different outcropping planes are linked in space. Discontinuity spacing is calculated for each pair of linked clusters within the same discontinuity set, and the spacing values are then analysed by calculating their statistics. Finally, as for part (c), the previous results are used to calculate the parameters F1, F2 and F3 of the Slope Mass Rating geomechanical classification. This analysis is carried out for each discontinuity set using their respective orientations extracted in part (a). The open access tool SMRTool (Riquelme et al., 2014) is then used to calculate the F1 to F3 correction factors
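
    A minimal sketch of the kind of neighbouring-points coplanarity test referred to in part (a) is given below: a plane is fitted to each point's k nearest neighbours by eigen-decomposition of the local covariance, and points with negligible out-of-plane scatter are flagged. The neighbourhood size k and the flatness threshold are illustrative assumptions, not the values used by the authors.

```python
# Sketch of a neighbourhood coplanarity test on a 3D point cloud: keep points
# whose local covariance has a very small smallest eigenvalue (planar patch).
import numpy as np
from scipy.spatial import cKDTree

def coplanar_points(cloud, k=30, flatness=0.01):
    """Return a mask of points with locally planar neighbourhoods and the
    unit normal of the plane fitted at every point.

    cloud : (n, 3) array of point coordinates
    """
    tree = cKDTree(cloud)
    _, idx = tree.query(cloud, k=k)
    normals = np.zeros_like(cloud)
    mask = np.zeros(len(cloud), dtype=bool)
    for i, nb in enumerate(idx):
        pts = cloud[nb] - cloud[nb].mean(axis=0)
        # Smallest eigenvalue measures out-of-plane scatter; its eigenvector
        # is the local plane normal (eigenvalues returned in ascending order).
        w, v = np.linalg.eigh(pts.T @ pts / len(nb))
        normals[i] = v[:, 0]
        mask[i] = w[0] < flatness * w[1:].sum()
    return mask, normals
```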

  16. Speaking Volumes About 3-D

    NASA Technical Reports Server (NTRS)

    2002-01-01

    In 1999, Genex submitted a proposal to Stennis Space Center for a volumetric 3-D display technique that would provide multiple users with a 360-degree perspective to simultaneously view and analyze 3-D data. The futuristic capabilities of the VolumeViewer(R) have offered tremendous benefits to commercial users in the fields of medicine and surgery, air traffic control, pilot training and education, computer-aided design/computer-aided manufacturing, and military/battlefield management. The technology has also helped NASA to better analyze and assess the various data collected by its satellite and spacecraft sensors. Genex capitalized on its success with Stennis by introducing two separate products to the commercial market that incorporate key elements of the 3-D display technology designed under an SBIR contract. The company's Rainbow 3D(R) imaging camera is a novel, three-dimensional surface profile measurement system that can obtain a full-frame 3-D image in less than 1 second. The third product is the 360-degree OmniEye(R) video system. Ideal for intrusion detection, surveillance, and situation management, this unique camera system offers a continuous, panoramic view of a scene in real time.

  17. Optical display of magnified, real and orthoscopic 3-D object images by moving-direct-pixel-mapping in the scalable integral-imaging system

    NASA Astrophysics Data System (ADS)

    Zhang, Miao; Piao, Yongri; Kim, Eun-Soo

    2011-10-01

    In this paper, we propose a novel approach for the reconstruction of magnified, real and orthoscopic three-dimensional (3-D) object images by using the moving-direct-pixel-mapping (MDPM) method in the MALT (moving-array-lenslet-technique)-based scalable integral-imaging system. In the proposed system, multiple sets of elemental image arrays (EIAs) are captured with the MALT, and these picked-up EIAs are computationally transformed into depth-converted ones by using the proposed MDPM method. Then, these depth-converted EIAs are combined and interlaced together to form an enlarged EIA, from which magnified, real and orthoscopic 3-D object images can be optically displayed without any degradation of resolution. Good experimental results finally confirmed the feasibility of the proposed method.

  18. Real-Depth imaging: a new (no glasses) 3D imaging technology with video/data projection applications

    NASA Astrophysics Data System (ADS)

    Dolgoff, Eugene

    1997-05-01

    Floating Images, Inc. has developed the software and hardware for a new, patent-pending 'floating 3D, off-the-screen-experience' display technology. This technology has the potential to become the next standard for home and arcade video games, computers, corporate presentations, Internet/Intranet viewing, and television. Current '3D graphics' technologies are actually flat on screen. Floating Images technology actually produces images at different depths from any display, such as CRT and LCD, for television, computer, projection, and other formats. In addition, unlike stereoscopic 3D imaging, no glasses, headgear, or other viewing aids are used. And, unlike current autostereoscopic imaging technologies, there is virtually no restriction on where viewers can sit to view the images, with no 'bad' or 'dead' zones, flipping, or pseudoscopy. In addition to providing traditional depth cues such as perspective and background image occlusion, the new technology also provides both horizontal and vertical binocular parallax and accommodation that coincides with convergence. Since accommodation coincides with convergence, viewing these images doesn't produce headaches, fatigue, or eye-strain, regardless of how long they are viewed. The imagery must either be formatted for the Floating Images platform when written, or existing software can be reformatted without much difficulty. The optical hardware system can be made to accommodate virtually any projection system to produce Floating Images for the boardroom, video arcade, stage shows, or the classroom.

  19. Exciton-polariton oscillations in real space

    NASA Astrophysics Data System (ADS)

    Liew, T. C. H.; Rubo, Y. G.; Kavokin, A. V.

    2014-12-01

    We introduce and model spin-Rabi oscillations based on exciton-polaritons in semiconductor microcavities. The phase and polarization of oscillations can be controlled by resonant coherent pulses and the propagation of oscillating domains gives rise to phase-dependent interference patterns in real space. We show that interbranch polariton-polariton scattering controls the propagation of oscillating domains, which can be used to realize logic gates based on an analog variable phase.

  20. A Sensory 3D Map of the Odor Description Space Derived from a Comparison of Numeric Odor Profile Databases.

    PubMed

    Zarzo, Manuel

    2015-06-01

    Many authors have proposed different schemes of odor classification, which are useful to aid the complex task of describing smells. However, reaching a consensus on a particular classification seems difficult because our psychophysical space of odor description is a continuum and is not clustered into well-defined categories. An alternative approach is to describe the perceptual space of odors as a low-dimensional coordinate system. This idea was first proposed by Crocker and Henderson in 1927, who suggested using numeric profiles based on 4 dimensions: "fragrant," "acid," "burnt," and "caprylic." In the present work, the odor profiles of 144 aroma chemicals were compared by means of statistical regression with comparable numeric odor profiles obtained from 2 databases, enabling a plausible interpretation of the 4 dimensions. Based on the results and taking into account comparable 2D sensory maps of odor descriptors from the literature, a 3D sensory map (odor cube) has been drawn up to improve understanding of the similarities and dissimilarities of the odor descriptors most frequently used in fragrance chemistry. PMID:25847969

  1. Position tracking of moving liver lesion based on real-time registration between 2D ultrasound and 3D preoperative images

    SciTech Connect

    Weon, Chijun; Hyun Nam, Woo; Lee, Duhgoon; Ra, Jong Beom; Lee, Jae Young

    2015-01-15

    Purpose: Registration between 2D ultrasound (US) and 3D preoperative magnetic resonance (MR) (or computed tomography, CT) images has been studied recently for US-guided intervention. However, the existing techniques have some limits, either in the registration speed or the performance. The purpose of this work is to develop a real-time and fully automatic registration system between two intermodal images of the liver, and subsequently an indirect lesion positioning/tracking algorithm based on the registration result, for image-guided interventions. Methods: The proposed position tracking system consists of three stages. In the preoperative stage, the authors acquire several 3D preoperative MR (or CT) images at different respiratory phases. Based on the transformations obtained from nonrigid registration of the acquired 3D images, they then generate a 4D preoperative image along the respiratory phase. In the intraoperative preparatory stage, they properly attach a 3D US transducer to the patient’s body and fix its pose using a holding mechanism. They then acquire a couple of respiratory-controlled 3D US images. Via the rigid registration of these US images to the 3D preoperative images in the 4D image, the pose information of the fixed-pose 3D US transducer is determined with respect to the preoperative image coordinates. As feature(s) to use for the rigid registration, they may choose either internal liver vessels or the inferior vena cava. Since the latter is especially useful in patients with a diffuse liver disease, the authors newly propose using it. In the intraoperative real-time stage, they acquire 2D US images in real-time from the fixed-pose transducer. For each US image, they select candidates for its corresponding 2D preoperative slice from the 4D preoperative MR (or CT) image, based on the predetermined pose information of the transducer. The correct corresponding image is then found among those candidates via real-time 2D registration based on a

  2. Modeling Semantic Emotion Space Using a 3D Hypercube-Projection: An Innovative Analytical Approach for the Psychology of Emotions.

    PubMed

    Trnka, Radek; Lačev, Alek; Balcar, Karel; Kuška, Martin; Tavel, Peter

    2016-01-01

    The widely accepted two-dimensional circumplex model of emotions posits that most instances of human emotional experience can be understood within the two general dimensions of valence and activation. Currently, this model is facing some criticism, because complex emotions in particular are hard to define within only these two general dimensions. The present theory-driven study introduces an innovative analytical approach working in a way other than the conventional, two-dimensional paradigm. The main goal was to map and project semantic emotion space in terms of mutual positions of various emotion prototypical categories. Participants (N = 187; 54.5% females) judged 16 discrete emotions in terms of valence, intensity, controllability and utility. The results revealed that these four dimensional input measures were uncorrelated. This implies that valence, intensity, controllability and utility represented clearly different qualities of discrete emotions in the judgments of the participants. Based on this data, we constructed a 3D hypercube-projection and compared it with various two-dimensional projections. This contrasting enabled us to detect several sources of bias when working with the traditional, two-dimensional analytical approach. Contrasting two-dimensional and three-dimensional projections revealed that the 2D models provided biased insights about how emotions are conceptually related to one another along multiple dimensions. The results of the present study point out the reductionist nature of the two-dimensional paradigm in the psychological theory of emotions and challenge the widely accepted circumplex model.

  3. Modeling Semantic Emotion Space Using a 3D Hypercube-Projection: An Innovative Analytical Approach for the Psychology of Emotions

    PubMed Central

    Trnka, Radek; Lačev, Alek; Balcar, Karel; Kuška, Martin; Tavel, Peter

    2016-01-01

    The widely accepted two-dimensional circumplex model of emotions posits that most instances of human emotional experience can be understood within the two general dimensions of valence and activation. Currently, this model is facing some criticism, because complex emotions in particular are hard to define within only these two general dimensions. The present theory-driven study introduces an innovative analytical approach working in a way other than the conventional, two-dimensional paradigm. The main goal was to map and project semantic emotion space in terms of mutual positions of various emotion prototypical categories. Participants (N = 187; 54.5% females) judged 16 discrete emotions in terms of valence, intensity, controllability and utility. The results revealed that these four dimensional input measures were uncorrelated. This implies that valence, intensity, controllability and utility represented clearly different qualities of discrete emotions in the judgments of the participants. Based on this data, we constructed a 3D hypercube-projection and compared it with various two-dimensional projections. This contrasting enabled us to detect several sources of bias when working with the traditional, two-dimensional analytical approach. Contrasting two-dimensional and three-dimensional projections revealed that the 2D models provided biased insights about how emotions are conceptually related to one another along multiple dimensions. The results of the present study point out the reductionist nature of the two-dimensional paradigm in the psychological theory of emotions and challenge the widely accepted circumplex model. PMID:27148130

  4. Is principal component analysis an effective tool to predict face attractiveness? A contribution based on real 3D faces of highly selected attractive women, scanned with stereophotogrammetry.

    PubMed

    Galantucci, Luigi Maria; Di Gioia, Eliana; Lavecchia, Fulvio; Percoco, Gianluca

    2014-05-01

    In the literature, several papers report studies on mathematical models used to describe facial features and to predict female facial beauty based on 3D human face data. Many authors have proposed the principal component analysis (PCA) method, which permits modeling of the entire human face using a limited number of parameters. In some cases, these models have been correlated with beauty classifications, obtaining good attractiveness predictability using wrapped 2D or 3D models. To verify these results, in this paper, the authors conducted a three-dimensional digitization study of 66 very attractive female subjects using a computerized noninvasive tool known as 3D digital photogrammetry. The sample consisted of the 64 contestants of the final phase of the Miss Italy 2010 beauty contest, plus the two highest ranked contestants in the 2009 competition. PCA was conducted on this sample of real faces to verify whether there is a correlation between ranking and the principal components of the face models. There was no correlation; therefore, this hypothesis is not confirmed for our sample. Considering that the results of the contest are not solely a function of facial attractiveness, but are undoubtedly significantly impacted by it, the authors conclude, based on their experience and on real faces, that PCA is not a valid prediction tool for attractiveness. The database of the features belonging to the analyzed sample is downloadable online and further contributions are welcome. PMID:24728666
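
    The following sketch illustrates the type of check described: PCA is run on flattened, point-corresponded 3D face meshes and each principal-component score is tested for correlation with the contest ranking. The array layout, the number of components, and the use of Spearman rank correlation are assumptions for illustration, not the authors' exact procedure.

```python
# Sketch: PCA on corresponded 3D face meshes, then rank correlation of the
# per-subject principal-component scores with an attractiveness ranking.
import numpy as np
from scipy.stats import spearmanr

def pca_rank_correlation(faces, ranking, n_components=10):
    """faces   : (n_subjects, n_vertices * 3) flattened, corresponded meshes
       ranking : (n_subjects,) attractiveness rank (1 = best)"""
    X = faces - faces.mean(axis=0)
    # PCA via SVD; rows of Vt are the principal modes of face-shape variation.
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    scores = U[:, :n_components] * S[:n_components]   # per-subject PC scores
    # One (correlation, p-value) pair per principal component.
    return [spearmanr(scores[:, j], ranking) for j in range(n_components)]
```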

  5. Real time animation of space plasma phenomena

    NASA Technical Reports Server (NTRS)

    Jordan, K. F.; Greenstadt, E. W.

    1987-01-01

    In pursuit of real time animation of computer simulated space plasma phenomena, the code was rewritten for the Massively Parallel Processor (MPP). The program creates a dynamic representation of the global bowshock which is based on actual spacecraft data and designed for three dimensional graphic output. This output consists of time slice sequences which make up the frames of the animation. With the MPP, 16384, 512 or 4 frames can be calculated simultaneously depending upon which characteristic is being computed. The run time was greatly reduced which promotes the rapid sequence of images and makes real time animation a foreseeable goal. The addition of more complex phenomenology in the constructed computer images is now possible and work proceeds to generate these images.

  6. Integrated monolithic 3D MEMS scanner for switchable real time vertical/horizontal cross-sectional imaging.

    PubMed

    Li, Haijun; Duan, Xiyu; Qiu, Zhen; Zhou, Quan; Kurabayashi, Katsuo; Oldham, Kenn R; Wang, Thomas D

    2016-02-01

    We present an integrated monolithic, electrostatic 3D MEMS scanner with a compact chip size of 3.2 × 2.9 mm². Use of parametric excitation near resonance frequencies produced large optical deflection angles up to ± 27° and ± 28.5° in the X- and Y-axes and displacements up to 510 μm in the Z-axis with low drive voltages at atmospheric pressure. When packaged in a dual axes confocal endomicroscope, horizontal and vertical cross-sectional images can be collected seamlessly in tissue with a large field-of-view of >1 × 1 mm² and 1 × 0.41 mm², respectively, at 5 frames/sec.

  7. Real NASA Inspiration in a Virtual Space

    NASA Technical Reports Server (NTRS)

    Petersen, Ruth; Starr, Bob; Anderson, Susan

    2003-01-01

    NASA exemplifies the spirit of exploration of new horizons - from flight in earth's skies to missions in space. As we know from our experience as teachers, one of the best ways to motivate students' interest in mathematics, science, technology, and engineering is to allow them to explore the universe through NASA's rich history of air and space exploration and current missions. But how? It's not really practical for large numbers of students to talk to NASA astronauts, researchers, scientists, and engineers in person. NASA offers tools that make it possible for hundreds of students to visit with NASA through videoconferencing. These visits provide a real-world connection to scientists and their research and support the NASA mission statement: To inspire the next generation of explorers ... as only NASA can.

  8. Accuracy of x-ray image-based 3D localization from two C-arm views: a comparison between an ideal system and a real device

    NASA Astrophysics Data System (ADS)

    Brost, Alexander; Strobel, Norbert; Yatziv, Liron; Gilson, Wesley; Meyer, Bernhard; Hornegger, Joachim; Lewin, Jonathan; Wacker, Frank

    2009-02-01

    C-arm X-ray imaging devices are commonly used for minimally invasive cardiovascular or other interventional procedures. Calibrated state-of-the-art systems can, however, not only be used for 2D imaging but also for three-dimensional reconstruction, either using tomographic techniques or even stereotactic approaches. To evaluate the accuracy of X-ray object localization from two views, a simulation study assuming an ideal imaging geometry was carried out first. This was backed up with a phantom experiment involving a real C-arm angiography system. Both studies were based on a phantom comprising five point objects. These point objects were projected onto a flat-panel detector under different C-arm view positions. The resulting 2D positions were perturbed by adding Gaussian noise to simulate 2D point localization errors. In the next step, 3D point positions were triangulated from two views. A 3D error was computed by taking differences between the 3D positions reconstructed from the perturbed 2D positions and the initial 3D positions of the five points. This experiment was repeated for various C-arm angulations involving angular differences ranging from 15° to 165°. The smallest 3D reconstruction error was achieved, as expected, by views that were 90° apart. In this case, the simulation study yielded a 3D error of 0.82 mm ± 0.24 mm (mean ± standard deviation) for 2D noise with a standard deviation of 1.232 mm (4 detector pixels). The experimental result for this view configuration, obtained on an AXIOM Artis C-arm (Siemens AG, Healthcare Sector, Forchheim, Germany) system, was 0.98 mm ± 0.29 mm. These results show that state-of-the-art C-arm systems can localize instruments with millimeter accuracy, and that they can accomplish this almost as well as an idealized theoretical counterpart. High stereotactic localization accuracy, good patient access, and CT-like 3D imaging capabilities render state-of-the-art C-arm systems ideal devices for X
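
    A minimal sketch of the two-view triangulation step evaluated in this study is shown below, using the standard linear (DLT) formulation; the 3×4 projection matrices are assumed to come from system calibration, and this is a generic textbook construction rather than the authors' exact implementation.

```python
# Linear (DLT) triangulation of a 3D point from its 2D positions in two
# calibrated views, e.g. two C-arm angulations.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """P1, P2 : 3x4 projection matrices of the two views
       x1, x2 : 2D detector coordinates of the same point in each view"""
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Homogeneous least squares: right singular vector of the smallest
    # singular value minimises the algebraic residual of the system.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```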

  9. Free-space coherent optical communication with orbital angular, momentum multiplexing/demultiplexing using a hybrid 3D photonic integrated circuit.

    PubMed

    Guan, Binbin; Scott, Ryan P; Qin, Chuan; Fontaine, Nicolas K; Su, Tiehui; Ferrari, Carlo; Cappuzzo, Mark; Klemens, Fred; Keller, Bob; Earnshaw, Mark; Yoo, S J B

    2014-01-13

    We demonstrate free-space space-division multiplexing (SDM) with 15 orbital angular momentum (OAM) states using a three-dimensional (3D) photonic integrated circuit (PIC). The hybrid device consists of a silica planar lightwave circuit (PLC) coupled to a 3D waveguide circuit to multiplex/demultiplex OAM states. The low-excess-loss hybrid device is used in multiplexing and demultiplexing link experiments with individual OAM states and with two simultaneous OAM states, using a 20 Gb/s, 1.67 b/s/Hz quadrature phase-shift-keyed (QPSK) signal, and shows error-free performance over 379,960 tested bits for all OAM states.

  10. Accuracy of a Mitral Valve Segmentation Method Using J-Splines for Real-Time 3D Echocardiography Data

    PubMed Central

    Siefert, Andrew W.; Icenogle, David A.; Rabbah, Jean-Pierre; Saikrishnan, Neelakantan; Rossignac, Jarek; Lerakis, Stamatios; Yoganathan, Ajit P.

    2013-01-01

    Patient-specific models of the heart’s mitral valve (MV) exhibit potential for surgical planning. While advances in 3D echocardiography (3DE) have provided adequate resolution to extract MV leaflet geometry, no study has quantitatively assessed the accuracy of their modeled leaflets versus a ground-truth standard for temporal frames beyond systolic closure or for differing valvular dysfunctions. The accuracy of a 3DE-based segmentation methodology based on J-splines was assessed for porcine MVs with known 4D leaflet coordinates within a pulsatile simulator during closure, peak closure, and opening for a control, prolapsed, and billowing MV model. For all time points, the mean distance errors between the segmented models and ground-truth data were 0.40±0.32 mm, 0.52±0.51 mm, and 0.74±0.69 mm for the control, flail, and billowing models. For all models and temporal frames, 95% of the distance errors were below 1.64 mm. When applied to a patient data set, segmentation was able to confirm a regurgitant orifice and post-operative improvements in coaptation. This study provides an experimental platform for assessing the accuracy of an MV segmentation methodology at phases beyond systolic closure and for differing MV dysfunctions. Results demonstrate the accuracy of a MV segmentation methodology for the development of future surgical planning tools. PMID:23460042

  11. Registration of fast cine cardiac MR slices to 3D preprocedural images: toward real-time registration for MRI-guided procedures

    NASA Astrophysics Data System (ADS)

    Smolikova, Renata; Wachowiak, Mark P.; Drangova, Maria

    2004-05-01

    Interventional cardiac magnetic resonance (MR) procedures are the subject of an increasing number of research studies. Typically, during the procedure only two-dimensional images of oblique slices can be presented to the interventionalist in real time. There is a clear benefit to being able to register the real-time 2D slices to a previously acquired 3D computed tomography (CT) or MR image of the heart. Results from a study of the accuracy of registration of 2D cardiac images of an anesthetized pig to a 3D volume obtained in diastole are presented. Fast cine MR images representing twenty phases of the cardiac cycle were obtained of a 2D slice in a known oblique orientation. The 2D images were initially mis-oriented at distances ranging from 2 to 20 mm, and rotations of +/-10 degrees about all three axes. Images from all 20 cardiac phases were registered to examine the effect of timing between the 2D image and the 3D pre-procedural image. Linear registration using mutual information computed with 64 histogram bins yielded the highest accuracy. For the diastolic phases, mean translation and rotation errors ranged between 0.91 and 1.32 mm and between 1.73 and 2.10 degrees. Scans acquired at other phases also had high accuracy. These results are promising for the use of real time MR in image-guided cardiac interventions, and demonstrate the feasibility of registering 2D oblique MR slices to previously acquired single-phase volumes without preprocessing.
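
    For illustration, a minimal sketch of the similarity measure named above, mutual information computed from a 64-bin joint histogram between the 2D cine slice and the corresponding re-sampled plane of the 3D volume, might look as follows; this is a generic formulation, not the registration package actually used in the study.

```python
# Mutual information between two intensity images from a 64-bin joint histogram.
import numpy as np

def mutual_information(a, b, bins=64):
    """a, b : intensity arrays of the same shape (e.g. the 2D slice and the
    re-sliced plane of the 3D volume); returns MI in nats."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                    # joint probability estimate
    px = pxy.sum(axis=1, keepdims=True)        # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)        # marginal of image b
    nz = pxy > 0                               # skip empty histogram cells
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))
```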

  12. 3D computed tomographic evaluation of the upper airway space of patients undergoing mandibular distraction osteogenesis for micrognathia.

    PubMed

    Bianchi, A; Betti, E; Badiali, G; Ricotta, F; Marchetti, C; Tarsitano, A

    2015-10-01

    Mandibular distraction osteogenesis (MDO) is currently an accepted method of treatment for patients requiring reconstruction of hypoplastic mandibles. To date, one of the unsolved problems is how to assess the quantitative increase in mandible length needed to achieve a significant change in the volume of the posterior airway space (PAS) in children with mandibular micrognathia following distraction osteogenesis. The purpose of this study is to present a quantitative volumetric evaluation of the PAS in young patients undergoing distraction osteogenesis for micrognathia using 3D-CT data sets and to compare it with the pre-operative situation. In this observational retrospective study, we report our experience in five consecutive patients who underwent MDO in an attempt to relieve severe upper airway obstruction. Each patient was evaluated before treatment (T0) and at the end of the distraction procedure (T1) with computed tomography (CT) in axial, coronal, and sagittal planes and three-dimensional CT of the facial bones and upper airway. Using parameters to extract only data within anatomic constraints, a digital set of the edited upper airway volume was obtained. The volume determination was used for volumetric quantification of the upper airway. The computed tomographic digital data were used to evaluate the upper airway volumes both pre-distraction and post-distraction. The mean length of distraction was 23 mm. Quantitative assessment of the upper airway volume before and after distraction demonstrated increased volumes ranging from 84% to 3,087%, with a mean of 536%. In conclusion, our study seems to show that MDO can significantly increase the volume of the PAS in patients with upper airway obstruction following micrognathia, by an average of 5 times. Furthermore, the worse the starting volume, the greater the increase in PAS for equal distraction.

  13. Effect of Clouds on Optical Imaging of the Space Shuttle During the Ascent Phase: A Statistical Analysis Based on a 3D Model

    NASA Technical Reports Server (NTRS)

    Short, David A.; Lane, Robert E., Jr.; Winters, Katherine A.; Madura, John T.

    2004-01-01

    Clouds are highly effective in obscuring optical images of the Space Shuttle taken during its ascent by ground-based and airborne tracking cameras. Because the imagery is used for quick-look and post-flight engineering analysis, the Columbia Accident Investigation Board (CAIB) recommended the return-to-flight effort include an upgrade of the imaging system to enable it to obtain at least three useful views of the Shuttle from lift-off to at least solid rocket booster (SRB) separation (NASA 2003). The lifetimes of individual cloud elements capable of obscuring optical views of the Shuttle are typically 20 minutes or less. Therefore, accurately observing and forecasting cloud obscuration over an extended network of cameras poses an unprecedented challenge for the current state of observational and modeling techniques. In addition, even the best numerical simulations based on real observations will never reach "truth." In order to quantify the risk that clouds would obscure optical imagery of the Shuttle, a 3D model to calculate probabilistic risk was developed. The model was used to estimate the ability of a network of optical imaging cameras to obtain at least N simultaneous views of the Shuttle from lift-off to SRB separation in the presence of an idealized, randomized cloud field.

  14. Efficient Numerical Modeling of 3D, Half-Space, Slow-Slip and Quasi-Dynamic Earthquake Ruptures

    NASA Astrophysics Data System (ADS)

    Bradley, A. M.; Segall, P.

    2011-12-01

    Motivated by the hypothesis that dilatancy plays a critical role in faulting in subduction zones, we are developing FDRA2 (Fault Dynamics with the Radiation-damping Approximation), a software package to simulate three-dimensional quasi-dynamic faulting that includes rate-state friction, thermal pressurization, and dilatancy (following Segall and Rice [1995]) in a finite-width shear zone. This work builds on the two-dimensional simulations performed by FDRA1 (Bradley and Segall [AGU 2010], Segall and Bradley [submitted]). These simulations show that at lower background effective normal stress (σ̄), slow slip events occur spontaneously, whereas at higher σ̄, slip is inertially limited. At intermediate σ̄, dynamic events are followed by quiescent periods and then long durations of repeating slow slip events. Models with depth-dependent properties produce sequences similar to those observed in Cascadia. Like FDRA1, FDRA2 solves partial differential equations in pressure and temperature on profiles normal to the fault. The diffusion equations are discretized in space using finite differences on a nonuniform mesh with greater density near the fault. The full system of equations is a semi-explicit index-1 differential algebraic equation (DAE) in slip, slip rate, state, fault zone porosity, pressure, and temperature. We integrate state, porosity, and slip explicitly; solve the momentum balance equation on the fault for slip rate; and integrate pressure and temperature implicitly. Adaptive time steps are limited by accuracy and by the stability criterion governing explicit integration of the hyperbolic PDEs, but not by the more stringent criterion governing the parabolic PDEs. To compute elasticity in a 3D half-space, FDRA2 compresses the large, dense matrix arising from the boundary element method using an H-matrix. The work to perform a matrix-vector product scales almost linearly, rather than quadratically, in the number of fault cells. A new technique to relate the error

  15. Stereoscopic helmet mounted system for real time 3D environment reconstruction and indoor ego-motion estimation

    NASA Astrophysics Data System (ADS)

    Donato, Giuseppe; Sequeira, Vitor M.; Sadka, Abdul

    2008-04-01

    A novel type of stereoscopic Helmet Mounted System for simultaneous user localization and mapping applications is described. This paper presents precise real-time volume data reconstruction. The system is designed for users who need to explore and navigate unprepared indoor environments without any support from a GPS signal or environment preparation through preinstalled markers. Augmented Reality features in support of self-navigation can be interactively added by placing virtual markers at the desired positions in the world coordinate system. They can then be retrieved when the marker is back in the user's field of view and used as visual alerts or for back-path finding.

  16. Improved image guidance technique for minimally invasive mitral valve repair using real-time tracked 3D ultrasound

    NASA Astrophysics Data System (ADS)

    Rankin, Adam; Moore, John; Bainbridge, Daniel; Peters, Terry

    2016-03-01

    In the past ten years, numerous new surgical and interventional techniques have been developed for treating heart valve disease without the need for cardiopulmonary bypass. Heart valve repair is now being performed in a blood-filled environment, reinforcing the need for accurate and intuitive imaging techniques. Previous work has demonstrated how augmenting ultrasound with virtual representations of specific anatomical landmarks can greatly simplify interventional navigation challenges and increase patient safety. These techniques often complicate interventions by requiring additional steps taken to manually define and initialize virtual models. Furthermore, overlaying virtual elements into real-time image data can also obstruct the view of salient image information. To address these limitations, a system was developed that uses real-time volumetric ultrasound alongside magnetically tracked tools presented in an augmented virtuality environment to provide a streamlined navigation guidance platform. In phantom studies simulating a beating-heart navigation task, procedure duration and tool path metrics have achieved comparable performance to previous work in augmented virtuality techniques, and considerable improvement over standard of care ultrasound guidance.

  17. A full-field and real-time 3D surface imaging augmented DOT system for in-vivo small animal studies

    NASA Astrophysics Data System (ADS)

    Yi, Steven X.; Yang, Bingcheng; Yin, Gongjie

    2010-02-01

    A crucial parameter in Diffuse Optical Tomography (DOT) is the construction of an accurate forward model, which greatly depends on the tissue boundary. Since photon propagation is a three-dimensional volumetric problem, extraction and subsequent modeling of three-dimensional boundaries is essential. The original experimental demonstrations of the feasibility of DOT to reconstruct absorbers, scatterers and fluorochromes used phantoms or tissues confined appropriately to conform to easily modeled geometries such as a slab or a cylinder. In later years several methods have been developed to model photon propagation through diffuse media with complex boundaries using numerical solutions of the diffusion or transport equation (finite elements or differences) or, more recently, analytical methods based on the tangent-plane method. While optical examinations performed simultaneously with anatomical imaging modalities such as MRI provide well-defined boundaries, very limited progress has been made so far in extracting full-field (360 degree) boundaries for in-vivo three-dimensional stand-alone DOT imaging. In this paper, we present a desktop multi-spectrum in-vivo 3D DOT system for small animal imaging. This system is augmented with Technest's full-field 3D cameras. The built system has the capability of acquiring 3D object surface profiles in real time and registering the 3D boundary with diffuse tomography. Extensive experiments are performed on phantoms and small animals by our collaborators at the Center for Molecular Imaging Research (CMIR) at Massachusetts General Hospital (MGH) and Harvard Medical School. The data show successfully reconstructed DOT results with improved accuracy.

  18. A real-time monitoring/emergency response workstation using a 3-D numerical model initialized with SODAR

    SciTech Connect

    Lawver, B.S.; Sullivan, T.J.; Baskett, R.L.

    1993-05-10

    Many workstation-based emergency response dispersion modeling systems provide simple Gaussian models driven by single meteorological tower inputs to estimate the downwind consequences from accidental spills or stack releases. Complex meteorological or terrain settings demand more sophisticated resolution of the three-dimensional structure of the atmosphere to reliably calculate plume dispersion. Mountain valleys and sea breeze flows are two common examples of such settings. To address these complexities, we have implemented the three-dimensional diagnostic MATHEW mass-adjusted wind field and ADPIC particle-in-cell dispersion models on a workstation for use in real-time emergency response modeling. Both MATHEW and ADPIC have shown their utility in a variety of complex settings over the last 15 years within the Department of Energy's Atmospheric Release Advisory Capability project.

  19. A 3D immersive application for real-time flythrough of planetary surfaces : The VR2Planets project

    NASA Astrophysics Data System (ADS)

    Civet, F.; Le Mouélic, S.

    2015-10-01

    During the last two decades, a fleet of planetary probes has acquired several hundred gigabytes of images of planetary surfaces. Mars has been particularly well covered thanks to the Mars Global Surveyor, Mars Express and Mars Reconnaissance Orbiter spacecraft. The HRSC [1], CTX and HiRISE [2] instruments allowed the computation of Digital Elevation Models with a resolution from hundreds of meters up to 1 meter per pixel, and corresponding orthoimages with a resolution from a few hundred meters up to 25 centimeters per pixel. The integration of such huge data sets into a system allowing user-friendly manipulation, either for scientific use or for public outreach, represents a real challenge, which we are investigating in this study.

  20. Performance and suitability assessment of a real-time 3D electromagnetic needle tracking system for interstitial brachytherapy

    PubMed Central

    Boutaleb, Samir; Fillion, Olivier; Bonillas, Antonio; Hautvast, Gilion; Binnekamp, Dirk; Beaulieu, Luc

    2015-01-01

    Purpose: Accurate insertion and overall needle positioning are key requirements for effective brachytherapy treatments. This work aims at demonstrating the accuracy performance and the suitability of the Aurora® V1 Planar Field Generator (PFG) electromagnetic tracking system (EMTS) for real-time treatment assistance in interstitial brachytherapy procedures. Material and methods: The system's performance was characterized in two distinct studies. First, in an environment free of EM disturbance, the boundaries of the detection volume of the EMTS were characterized and a tracking error analysis was performed. Second, a distortion analysis was conducted as a means of assessing the tracking accuracy performance of the system in the presence of potential EM disturbance generated by the proximity of standard brachytherapy components. Results: The tracking accuracy experiments showed that positional errors were typically 2 ± 1 mm in a zone restricted to the first 30 cm of the detection volume. However, at the edges of the detection volume, sensor position errors of up to 16 mm were recorded. On the other hand, orientation errors remained low at ± 2° for most of the measurements. The EM distortion analysis showed that the presence of typical brachytherapy components in the vicinity of the EMTS had little influence on tracking accuracy. Position errors of less than 1 mm were recorded with all components except with a metallic arm support, which induced a mean absolute error of approximately 1.4 mm when located 10 cm away from the needle sensor. Conclusions: The Aurora® V1 PFG EMTS possesses great potential for real-time treatment assistance in general interstitial brachytherapy. In view of our experimental results, we however recommend that the needle axis remain as parallel as possible to the generator surface during treatment and that the tracking zone be restricted to the first 30 cm from the generator surface. PMID:26622231

  1. Real-time slicing of data space

    SciTech Connect

    Crawfis, R.A.

    1996-07-01

    Real-time rendering of iso-contour surfaces is problematic for large complex data sets. In this paper, an algorithm is presented that allows very rapid representation of an interval set surrounding an iso-contour surface. The algorithm draws upon three main ideas. A fast indexing scheme is used to select only those data points near the contour surface. Hardware-assisted splatting is then employed on these data points to produce a volume rendering of the interval set. Finally, by shifting a small window through the indexing scheme or data space, animated volumes are produced showing the changing contour values. In addition to allowing fast selection and rendering of the data, the indexing scheme allows a much compressed representation of the data by eliminating "noise" data points.
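
    A minimal sketch of the fast-indexing idea described above, assuming a simple sorted-array index: the scalar values are sorted once, and the points inside an interval around the current iso-value are selected with two binary searches before being handed to the splatting stage. The class and parameter names are illustrative; the paper's actual indexing scheme may differ.

```python
# Sorted-array interval index: select only the samples near an iso-value.
import numpy as np

class IntervalIndex:
    def __init__(self, values, points):
        order = np.argsort(values)
        self.values = values[order]        # sorted scalar field samples
        self.points = points[order]        # matching 3D sample positions

    def select(self, iso, width):
        """Return the sample positions whose value lies in [iso-width, iso+width]."""
        lo = np.searchsorted(self.values, iso - width, side="left")
        hi = np.searchsorted(self.values, iso + width, side="right")
        return self.points[lo:hi]          # candidates handed to the splatter
```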

  2. In situ and real time characterization of interface microstructure in 3D alloy solidification: benchmark microgravity experiments in the DECLIC-Directional Solidification Insert on ISS

    NASA Astrophysics Data System (ADS)

    Ramirez, A.; Chen, L.; Bergeon, N.; Billia, B.; Gu, Jiho; Trivedi, R.

    2012-01-01

    Dynamical microstructure formation and selection during solidification processing, which have a major influence on the properties of the processed materials, occur during the growth process. In situ observation of the solid-liquid interface morphology evolution is thus necessary. On Earth, convection effects dominate in bulk samples and may strongly interact with the microstructure dynamics and alter pattern characterization. A series of solidification experiments with 3D cylindrical sample geometry was conducted in succinonitrile (SCN)-0.24 wt% camphor (a transparent model system) in a microgravity environment in the Directional Solidification Insert of the DECLIC facility of CNES (the French space agency) on the International Space Station (ISS). Microgravity enabled homogeneous values of the control parameters over the whole interface, allowing homogeneous patterns suitable for quantitative benchmark data to be obtained. First analyses of the characteristics of the pattern (spacing, order, etc.) and of its dynamics in microgravity will be presented.

  3. Use of a graphics processing unit (GPU) to facilitate real-time 3D graphic presentation of the patient skin-dose distribution during fluoroscopic interventional procedures.

    PubMed

    Rana, Vijay; Rudin, Stephen; Bednarek, Daniel R

    2012-02-23

    We have developed a dose-tracking system (DTS) that calculates the radiation dose to the patient's skin in real time by acquiring exposure parameters and imaging-system geometry from the digital bus on a Toshiba Infinix C-arm unit. The cumulative dose values are then displayed as a color map on an OpenGL-based 3D graphic of the patient for immediate feedback to the interventionalist. Determining which elements on the surface of the patient 3D graphic intersect the beam and calculating the dose for these elements in real time demand fast computation. Reducing the size of the elements results in more computational load on the processor, and therefore a tradeoff occurs between the resolution of the patient graphic and the real-time performance of the DTS. The speed of the DTS for calculating dose to the skin is limited by the central processing unit (CPU) and can be improved by using the parallel processing power of a graphics processing unit (GPU). Here, we compare the performance speed of GPU-based DTS software to that of the current CPU-based software as a function of the resolution of the patient graphics. Results show a tremendous improvement in speed using the GPU. While an increase in the spatial resolution of the patient graphics resulted in slowing down the computational speed of the DTS on the CPU, the speed of the GPU-based DTS was hardly affected. This GPU-based DTS can be a powerful tool for providing accurate, real-time feedback about patient skin dose to physicians while performing interventional procedures.
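
    As a rough, hedged sketch of the per-element work that such a dose-tracking system parallelises, the snippet below tests which skin-mesh elements fall inside a conical beam and accumulates an inverse-square-weighted dose for them; the beam model, the exposure constant, and all names are placeholders and do not reflect the actual DTS calibration or geometry handling.

```python
# Illustrative per-element beam test and dose accumulation on a skin mesh.
import numpy as np

def accumulate_dose(centroids, dose_map, source, beam_axis, half_angle, exposure_k):
    """centroids : (n, 3) skin-element centroid positions
       dose_map  : (n,) running skin dose, updated in place"""
    v = centroids - source                      # source-to-element vectors
    d = np.linalg.norm(v, axis=1)
    axis = beam_axis / np.linalg.norm(beam_axis)
    cos_angle = (v @ axis) / d
    inside = cos_angle > np.cos(half_angle)     # element lies inside the beam cone
    # Inverse-square falloff with an illustrative exposure-dependent constant.
    dose_map[inside] += exposure_k / d[inside] ** 2
    return dose_map
```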

  4. A GPU-based framework for modeling real-time 3D lung tumor conformal dosimetry with subject-specific lung tumor motion

    NASA Astrophysics Data System (ADS)

    Min, Yugang; Santhanam, Anand; Neelakkantan, Harini; Ruddy, Bari H.; Meeks, Sanford L.; Kupelian, Patrick A.

    2010-09-01

    In this paper, we present a graphics processing unit (GPU)-based simulation framework to calculate the delivered dose to a 3D moving lung tumor and its surrounding normal tissues, which are undergoing subject-specific lung deformations. The GPU-based simulation framework models the motion of the 3D volumetric lung tumor and its surrounding tissues, simulates the dose delivery using the dose extracted from a treatment plan generated with the Pinnacle Treatment Planning System (Philips) for one of the 3DCTs of the 4DCT, and predicts the amount and location of radiation dose deposited inside the lung. The 4DCT lung datasets were registered with each other using a modified optical flow algorithm. The motion of the tumor and the motion of the surrounding tissues were simulated by measuring the changes in lung volume during the radiotherapy treatment using spirometry. The real-time dose delivered to the tumor for each beam is generated by summing the dose delivered to the target volume at each increase in lung volume during the beam delivery time period. The simulation results showed the real-time capability of the framework at 20 discrete tumor motion steps per breath, which is higher than the number of 4DCT steps (approximately 12) reconstructed during multiple breathing cycles.

  5. Towards real-time 3D US-CT registration on the beating heart for guidance of minimally invasive cardiac interventions

    NASA Astrophysics Data System (ADS)

    Li, Feng; Lang, Pencilla; Rajchl, Martin; Chen, Elvis C. S.; Guiraudon, Gerard; Peters, Terry M.

    2012-02-01

    Compared to conventional open-heart surgeries, minimally invasive cardiac interventions cause less trauma and fewer side effects for patients. However, a direct view of surgical targets and tools is usually not available in minimally invasive procedures, which makes image-guided navigation systems essential. The choice of imaging modalities used in the navigation systems must consider the capability of imaging soft tissues, spatial and temporal resolution, compatibility and flexibility in the OR, and financial cost. In this paper, we propose a new means of guidance for minimally invasive cardiac interventions using 3D real-time ultrasound images to show the intra-operative heart motion together with preoperative CT image(s) employed to provide high-quality 3D anatomical context. We also develop a method to register intra-operative ultrasound and pre-operative CT images in close to real time. The registration method has two stages. In the first, anatomical features are segmented from the first frame of the ultrasound images and the CT image(s). A feature-based registration is used to align those features. The result of this is used as an initialization in the second stage, in which a mutual-information-based registration is used to register every ultrasound frame to the CT image(s). A GPU-based implementation is used to accelerate the registration.

  6. Positional Awareness Map 3D (PAM3D)

    NASA Technical Reports Server (NTRS)

    Hoffman, Monica; Allen, Earl L.; Yount, John W.; Norcross, April Louise

    2012-01-01

    The Western Aeronautical Test Range of the National Aeronautics and Space Administration's Dryden Flight Research Center needed to address the aging software and hardware of its current situational awareness display application, the Global Real-Time Interactive Map (GRIM). GRIM was initially developed in the late 1980s and executes on older PC architectures using a Linux operating system that is no longer supported. Additionally, the software is difficult to maintain due to its complexity and loss of developer knowledge. It was decided that a replacement application must be developed or acquired in the near future. The replacement must provide the functionality of the original system, the ability to monitor test flight vehicles in real time, and add improvements such as high-resolution imagery and true 3-dimensional capability. This paper will discuss the process of determining the best approach to replace GRIM, and the functionality and capabilities of the first release of the Positional Awareness Map 3D.

  7. Reasoning about geological space: Coupling 3D GeoModels and topological queries as an aid to spatial data selection

    NASA Astrophysics Data System (ADS)

    Pouliot, Jacynthe; Bédard, Karine; Kirkwood, Donna; Lachance, Bernard

    2008-05-01

    Topological relationships between geological objects are of great interest for mining and petroleum exploration. Indeed, adjacency, inclusion and intersection are common relationships between geological objects such as faults, geological units, fractures, mineralized zones and reservoirs. However, in the context of 3D modeling, current geometric data models used to store those objects are not designed to manage explicit topological relationships. For example, with Gocad© software, topological analyses are possible but they require a series of successive manipulations and are time-consuming. This paper presents the development of a 3D topological query prototype, TQuery, compatible with the Gocad© modeling platform. It allows the user to export Gocad© objects to a data storage model that regularizes the topological relationships between objects. The development of TQuery was oriented towards the use of volumetric objects that are composed of tetrahedrons. Exported data are then retrieved and used for 3D topological and spatial queries. One of the advantages of TQuery is that different types of objects can be queried at the same time without restricting the operations to voxel regions. TQuery allows the user to analyze data more quickly and efficiently and does not require a 3D modeling specialist to use it, which is particularly attractive in the context of a decision-making aid. The prototype was tested on a 3D GeoModel of a continental red-bed copper deposit in the Silurian Robitaille Formation (Transfiguration property, Québec, Canada).
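    The kind of topological query TQuery answers can be illustrated with a minimal adjacency test between two tetrahedralized objects: two volumes are adjacent if they share at least one triangular face. The Python sketch below is a simplified stand-in; TQuery's actual storage model and query engine are not reproduced here.

        # Adjacency between two tetrahedral meshes via shared triangular faces.
        from itertools import combinations

        def faces(tetrahedra):
            """Set of (sorted) triangular faces of a tetrahedral mesh."""
            out = set()
            for tet in tetrahedra:                     # tet = 4 vertex ids
                for tri in combinations(sorted(tet), 3):
                    out.add(tri)
            return out

        def adjacent(obj_a, obj_b):
            """True if the two objects share a boundary face."""
            return bool(faces(obj_a) & faces(obj_b))

        # Two tetrahedra sharing face (0, 1, 2):
        print(adjacent([(0, 1, 2, 3)], [(0, 1, 2, 4)]))   # True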

  8. Future Mission Concept for 3-D Aerosol Monitoring From Space Based on Fusion of Remote Sensing Approaches

    NASA Astrophysics Data System (ADS)

    Diner, D. J.; Kahn, R. A.; Hostetler, C. A.; Ferrare, R. A.; Hair, J. W.; Cairns, B.; Torres, O.

    2006-05-01

    extinction independently along with vertically resolved estimates of microphysical properties, thus representing a significant advance relative to simpler backscatter systems such as GLAS and CALIPSO. This fusion of satellite-based approaches is aimed at observing the 3-D distribution of aerosol abundances, sizes, shapes, and absorption, and would represent a major technological advance in our ability to monitor and characterize near-surface particulate matter from space.

  9. Real-time 3D reconstruction of road curvature in far look-ahead distance from analysis of image sequences

    NASA Astrophysics Data System (ADS)

    Behringer, Reinhold

    1995-12-01

    A system for visual road recognition in far look-ahead distance, implemented in the autonomous road vehicle VaMP (a passenger car), is described. Visual cues of a road in a video image are the bright lane markings and the edges formed at the road borders. At a distance of more than 100 m, the most relevant road cue is the homogeneous road area, limited by the two border edges. These cues can be detected by the image processing module KRONOS applying edge detection techniques and areal 2D segmentation based on resolution triangles (analogous to a resolution pyramid). An estimation process performs an update of a state vector, which describes spatial road shape and vehicle orientation relative to the road. This state vector is estimated every 40 ms by exploiting knowledge about the vehicle movement (spatio-temporal model of vehicle dynamics) and the road design rules (clothoidal segments). Kalman filter techniques are applied to obtain an optimal estimate of the state vector by evaluating the measurements of the road border positions in the image sequence taken by a set of CCD cameras. The road consists of segments with piecewise constant curvature parameters. The borders between these segments can be detected by applying methods which have been developed for detection of discontinuities during time-discrete measurements. The road recognition system has been tested in autonomous drives with VaMP on public Autobahnen in real traffic at speeds up to 130 km/h.
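    The 40 ms estimation cycle can be sketched as a small Kalman filter over the clothoid curvature states, as in the following Python example. The two-state model (curvature c0 and curvature rate c1), the noise values, and the assumption that a curvature measurement is derived from the detected border positions are illustrative simplifications of the full VaMP state vector.

        # One Kalman predict/update cycle for clothoid road-curvature estimation.
        import numpy as np

        def kalman_step(x, P, z, v, dt=0.04, q=1e-7, r=1e-4):
            """x = [c0, c1] (curvature, curvature rate), P = 2x2 covariance,
            z = measured curvature from the image, v = vehicle speed (m/s)."""
            ds = v * dt                                  # arc length driven in 40 ms
            F = np.array([[1.0, ds], [0.0, 1.0]])        # clothoid: c0 <- c0 + c1*ds
            H = np.array([[1.0, 0.0]])                   # only c0 is observed
            x = F @ x                                    # predict
            P = F @ P @ F.T + q * np.eye(2)
            S = H @ P @ H.T + r                          # innovation covariance
            K = P @ H.T / S                              # Kalman gain
            x = x + (K * (z - H @ x)).ravel()            # update
            P = (np.eye(2) - K @ H) @ P
            return x, P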

  10. Real time radiation measurements in space

    NASA Astrophysics Data System (ADS)

    Thomson, I.; Mackay, G.

    Radiation composed of energetic electrons, protons, photons, and galactic cosmic rays will be experienced by all space missions and may have effects on radiation sensitive electronic components and biological specimens. Radiation issues of interest to microgravity and biological experiments are discussed and the design of a new direct reading electronic radiation monitoring system is described. The proposed system consists of a radiation sensitive metal oxide semiconductor field effect transistor (MOSFET) specially designed to respond to ionizing radiation. On exposure to radiation, a permanent charge is stored in the MOSFET's insulating oxide, altering the device's electrical characteristics in a manner directly proportional to the absorbed dose. A simple circuit reads the MOSFET's cumulative dose, making it possible to obtain real-time measurements and store the data or transfer the data to an earth station. Tests have shown that the MOSFET dosimeter exhibits a linear response up to at least 30,000 centiGray at a resolution of 0.1 centiGray. The MOSFET dosimetry system will be installed on the European Space Agency's ARTEP satellite scheduled for launch in November 1991.

  11. Light field display and 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Iwane, Toru

    2016-06-01

    Light field optics and its applications have become quite popular in recent years. With light field optics, a real 3D space can be described on a 2D plane as 4D data, which we call light field data. This process can be divided into two procedures. First, the real 3D scene is optically reduced with an imaging lens. Second, this optically reduced 3D image is encoded into light field data. In the latter procedure we can say that 3D information is encoded onto a plane as 2D data by a lens array plate. This transformation is reversible, and the acquired light field data can be decoded again into a 3D image with the arrayed lens plate. "Refocusing" (focusing the image at a chosen point after the picture has been taken), the light-field camera's most popular function, is essentially a sectioning process from the encoded 3D data (light field data) to a 2D image. In this paper we first show our actual light field camera and our 3D display using acquired and computer-simulated light field data, on which a real 3D image is reconstructed. Second, we explain our data processing method, whose arithmetic operations are performed not in the Fourier domain but in the real domain. Our 3D display system is then characterized by a few features: the reconstructed image has finer resolution than the density of the arrayed lenses, and it is not necessary to align the lens array plate with the flat display on which the light field data are shown.
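    The refocusing operation mentioned above, performed in the real domain, can be sketched as a shift-and-add over the sub-aperture views decoded from the light field data. The 4D array layout and the refocus parameter in this Python sketch are illustrative assumptions rather than the authors' implementation.

        # Refocusing as real-domain shift-and-add over sub-aperture views.
        import numpy as np

        def refocus(lightfield, alpha):
            """lightfield: 4D array [u, v, y, x]; alpha: relative refocus depth."""
            U, V, H, W = lightfield.shape
            out = np.zeros((H, W))
            for u in range(U):
                for v in range(V):
                    dy = int(round(alpha * (u - U // 2)))   # shift proportional to
                    dx = int(round(alpha * (v - V // 2)))   # angular offset
                    out += np.roll(lightfield[u, v], shift=(dy, dx), axis=(0, 1))
            return out / (U * V)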

  12. On the Correlation Between Fatigue Striation Spacing and Crack Growth Rate: A Three-Dimensional (3-D) X-ray Synchrotron Tomography Study

    NASA Astrophysics Data System (ADS)

    Williams, Jason J.; Yazzie, Kyle E.; Connor Phillips, N.; Chawla, Nikhilesh; Xiao, Xinghui; de Carlo, Francesco; Iyyer, Nagaraja; Kittur, Maddan

    2011-12-01

    In situ three-dimensional (3-D) X-ray synchrotron tomography of fatigue crack growth was conducted in a 7075-T6 aluminum alloy. Local measurements of da/dN were possible with the 3-D data sets obtained from tomography. A comparison with fatigue striation spacings obtained from scanning electron microscopy of the fracture surfaces yielded excellent correlation with da/dN obtained from tomography. The X-ray tomography technique can be used to obtain highly accurate and representative measurements of crack growth locally in the microstructure of the material.

  13. Use of MV and kV imager correlation for maintaining continuous real-time 3D internal marker tracking during beam interruptions

    NASA Astrophysics Data System (ADS)

    Wiersma, R. D.; Riaz, N.; Dieterich, Sonja; Suh, Yelin; Xing, L.

    2009-01-01

    The integration of onboard kV imaging together with an MV electronic portal imaging device (EPID) on linear accelerators (LINAC) can provide an easy-to-implement real-time 3D organ position monitoring solution for treatment delivery. Currently, real-time MV-kV tracking has only been demonstrated by simultaneous imaging with both MV and kV imaging devices. However, modalities such as step-and-shoot IMRT (SS-IMRT), which inherently contain MV beam interruptions, can lead to loss of target information necessary for 3D localization. Additionally, continuous kV imaging throughout the treatment delivery can lead to high levels of imaging dose to the patient. This work demonstrates for the first time how full 3D target tracking can be maintained even in the presence of such beam interruption, or MV/kV beam interleave, by use of a relatively simple correlation model together with MV-kV tracking. A moving correlation model was constructed using both present and prior positions of the marker in the available MV or kV image to compute the position of the marker on the interrupted imager. A commercially available radiotherapy system, equipped with both MV and kV imaging devices, was used to deliver typical SS-IMRT lung treatment plans to a 4D phantom containing internally embedded metallic markers. To simulate actual lung tumor motion, previously recorded 4D lung patient motion data were used. Lung tumor motion data of five separate patients were input into the 4D phantom, and typical SS-IMRT lung plans were delivered to simulate actual clinical deliveries. Application of the correlation model to SS-IMRT lung treatment deliveries was found to be an effective solution for maintaining continuous 3D tracking during 'step' beam interruptions. For deliveries involving five or more gantry angles with 50 or more fields per plan, the positional errors were found to have <=1 mm root mean squared error (RMSE) in all three spatial directions. In addition to increasing the robustness of
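    The moving correlation model can be sketched as a sliding-window affine fit between the marker positions seen simultaneously on the two imagers; during a 'step' MV interruption, the fit predicts the missing MV position from the current kV position so that 3D localization can continue. The Python class below is an illustrative simplification, not the authors' implementation.

        # Sliding-window correlation model between kV and MV marker positions.
        import numpy as np
        from collections import deque

        class MovingCorrelation:
            def __init__(self, window=50):
                self.pairs = deque(maxlen=window)        # (kv_xy, mv_xy) pairs

            def update(self, kv_xy, mv_xy):              # both imagers available
                self.pairs.append((np.asarray(kv_xy), np.asarray(mv_xy)))

            def predict_mv(self, kv_xy):                 # MV beam interrupted
                kv = np.array([p[0] for p in self.pairs])
                mv = np.array([p[1] for p in self.pairs])
                A = np.hstack([kv, np.ones((len(kv), 1))])    # affine fit
                coef, *_ = np.linalg.lstsq(A, mv, rcond=None)
                return np.append(kv_xy, 1.0) @ coef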

  14. Use of MV and kV imager correlation for maintaining continuous real-time 3D internal marker tracking during beam interruptions.

    PubMed

    Wiersma, R D; Riaz, N; Dieterich, Sonja; Suh, Yelin; Xing, L

    2009-01-01

    The integration of onboard kV imaging together with an MV electronic portal imaging device (EPID) on linear accelerators (LINAC) can provide an easy-to-implement real-time 3D organ position monitoring solution for treatment delivery. Currently, real-time MV-kV tracking has only been demonstrated by simultaneous imaging with both MV and kV imaging devices. However, modalities such as step-and-shoot IMRT (SS-IMRT), which inherently contain MV beam interruptions, can lead to loss of target information necessary for 3D localization. Additionally, continuous kV imaging throughout the treatment delivery can lead to high levels of imaging dose to the patient. This work demonstrates for the first time how full 3D target tracking can be maintained even in the presence of such beam interruption, or MV/kV beam interleave, by use of a relatively simple correlation model together with MV-kV tracking. A moving correlation model was constructed using both present and prior positions of the marker in the available MV or kV image to compute the position of the marker on the interrupted imager. A commercially available radiotherapy system, equipped with both MV and kV imaging devices, was used to deliver typical SS-IMRT lung treatment plans to a 4D phantom containing internally embedded metallic markers. To simulate actual lung tumor motion, previously recorded 4D lung patient motion data were used. Lung tumor motion data of five separate patients were input into the 4D phantom, and typical SS-IMRT lung plans were delivered to simulate actual clinical deliveries. Application of the correlation model to SS-IMRT lung treatment deliveries was found to be an effective solution for maintaining continuous 3D tracking during 'step' beam interruptions. For deliveries involving five or more gantry angles with 50 or more fields per plan, the positional errors were found to have <=1 mm root mean squared error (RMSE) in all three spatial directions. In addition to increasing the robustness

  15. Use of a graphics processing unit (GPU) to facilitate real-time 3D graphic presentation of the patient skin-dose distribution during fluoroscopic interventional procedures

    NASA Astrophysics Data System (ADS)

    Rana, Vijay; Rudin, Stephen; Bednarek, Daniel R.

    2012-03-01

    We have developed a dose-tracking system (DTS) that calculates the radiation dose to the patient's skin in real time by acquiring exposure parameters and imaging-system geometry from the digital bus on a Toshiba Infinix C-arm unit. The cumulative dose values are then displayed as a color map on an OpenGL-based 3D graphic of the patient for immediate feedback to the interventionalist. Determination of those elements on the surface of the patient 3D-graphic that intersect the beam and calculation of the dose for these elements in real time demand fast computation. Reducing the size of the elements results in more computational load on the computer processor and therefore a trade-off occurs between the resolution of the patient graphic and the real-time performance of the DTS. The speed of the DTS for calculating dose to the skin is limited by the central processing unit (CPU) and can be improved by using the parallel processing power of a graphics processing unit (GPU). Here, we compare the performance speed of GPU-based DTS software to that of the current CPU-based software as a function of the resolution of the patient graphics. Results show a tremendous improvement in speed using the GPU. While an increase in the spatial resolution of the patient graphics resulted in slowing down the computational speed of the DTS on the CPU, the speed of the GPU-based DTS was hardly affected. This GPU-based DTS can be a powerful tool for providing accurate, real-time feedback about patient skin-dose to physicians while performing interventional procedures.
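    The per-element computation that the GPU parallelizes can be sketched as follows: find the skin-graphic elements inside the collimated beam cone and add an inverse-square-scaled dose increment. NumPy vectorization stands in for the GPU kernel in this Python sketch, and all names and the simple cone/inverse-square model are illustrative assumptions.

        # Vectorized stand-in for the per-element skin-dose update.
        import numpy as np

        def update_skin_dose(elements, dose_map, source, beam_axis, half_angle,
                             dose_at_ref, ref_dist):
            """elements: (N, 3) element centroids; dose_map: (N,) cumulative dose."""
            v = elements - source                        # focal spot -> element rays
            dist = np.linalg.norm(v, axis=1)
            cosang = (v @ beam_axis) / (dist * np.linalg.norm(beam_axis))
            in_beam = cosang >= np.cos(half_angle)       # inside the beam cone
            dose_map[in_beam] += dose_at_ref * (ref_dist / dist[in_beam]) ** 2
            return dose_map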

  16. Nanoelectronic three-dimensional (3D) nanotip sensing array for real-time, sensitive, label-free sequence specific detection of nucleic acids

    PubMed Central

    Yang, Lu; Koochak, Zahra; Harris, James S.; Davis, Ronald W.

    2016-01-01

    The improvements in our ability to sequence and genotype DNA have opened up numerous avenues in the understanding of human biology and medicine with various applications, especially in medical diagnostics. However, the realization of label-free, real-time, high-throughput and low-cost biosensing platforms that detect molecular interactions with a high level of sensitivity has so far been hindered by two factors: first, slow binding kinetics caused by the scarcity of probe molecules on the sensors, and second, limited mass transport due to the planar (two-dimensional) structure of current biosensors. Here we present a novel three-dimensional (3D), highly sensitive, real-time, inexpensive and label-free nanotip array as a rapid and direct platform for sequence-specific DNA screening. Our nanotip sensors are designed to have a nanoscale thin film as their sensing area (~20 nm), sandwiched between two sensing electrodes. The tip is then conjugated to a DNA oligonucleotide complementary to the sequence of interest, which is electrochemically detected in real time via impedance changes upon the formation of a double-stranded helix at the sensor interface. This 3D configuration is specifically designed to improve the biomolecular hit rate and the detection speed. We demonstrate that our nanotip array effectively detects oligonucleotides in a sequence-specific and highly sensitive manner, yielding concentration-dependent impedance change measurements with a target concentration as low as 10 pM and discrimination against even a single mismatch. Notably, our nanotip sensors achieve this accurate, sensitive detection without relying on signal indicators or enhancing molecules like fluorophores. The array can also easily be scaled for highly multiplexed detection with up to 5000 sensors per square centimeter, and integrated into microfluidic devices. The versatile, rapid, and sensitive performance of the nanotip array makes it an excellent candidate for point-of-care diagnostics, and high

  17. NOTE: A software tool for 2D/3D visualization and analysis of phase-space data generated by Monte Carlo modelling of medical linear accelerators

    NASA Astrophysics Data System (ADS)

    Neicu, Toni; Aljarrah, Khaled M.; Jiang, Steve B.

    2005-10-01

    A computer program has been developed for novel 2D/3D visualization and analysis of the phase-space parameters of Monte Carlo simulations of medical accelerator radiation beams. The software is written in the IDL language and reads the phase-space data generated in the BEAMnrc/BEAM Monte Carlo code format. Contour and colour-wash plots of the fluence, mean energy, energy fluence, mean angle, spectral distribution, energy fluence distribution, angular distribution, and slices and projections of the 3D ZLAST distribution can be calculated and displayed. Based on our experience of using it at Massachusetts General Hospital, the software has proven to be a useful tool for analysis and verification of Monte Carlo-generated phase-space files. The software is in the public domain.

  18. Real-space renormalization in statistical mechanics

    NASA Astrophysics Data System (ADS)

    Efrati, Efi; Wang, Zhe; Kolan, Amy; Kadanoff, Leo P.

    2014-04-01

    This review compares the conceptualization and practice of early real-space renormalization group methods with the conceptualization of more recent real-space transformations based on tensor networks. For specificity, it focuses upon two basic methods: the "potential-moving" approach most used in the period 1975-1980 and the "rewiring method" as it has been developed in the last five years. The newer method, part of a development called the tensor renormalization group, was originally based on principles of quantum entanglement. It is specialized for computing approximations for tensor products constituting, for example, the free energy or the ground state energy of a large system. It can attack a wide variety of problems, including quantum problems, which would otherwise be intractable. The older method is formulated in terms of spin variables and permits a straightforward construction and analysis of fixed points in rather transparent terms. However, in the form described here it is unsystematic, offers no path for improvement, and is of unknown reliability. The new method is formulated in terms of index variables which may be considered as linear combinations of the statistical variables. Free energies emerge naturally, but fixed points are more subtle. Further, physical interpretations of the index variables are often elusive due to a gauge symmetry which allows only selected combinations of tensor entries to have physical significance. In applications, both methods employ analyses with varying degrees of complexity. The complexity is parametrized by an integer called χ (or D in the recent literature). Both methods are examined in action by using them to compute fixed points related to Ising models for small values of the complexity parameter. They behave quite differently. The old method gives a reasonably good picture of the fixed point, as measured, for example, by the accuracy of the measured critical indices. This happens at low values of χ, but there is no

  19. 3D Reconstructed Cyto-, Muscarinic M2 Receptor, and Fiber Architecture of the Rat Brain Registered to the Waxholm Space Atlas.

    PubMed

    Schubert, Nicole; Axer, Markus; Schober, Martin; Huynh, Anh-Minh; Huysegoms, Marcel; Palomero-Gallagher, Nicola; Bjaalie, Jan G; Leergaard, Trygve B; Kirlangic, Mehmet E; Amunts, Katrin; Zilles, Karl

    2016-01-01

    High-resolution multiscale and multimodal 3D models of the brain are essential tools to understand its complex structural and functional organization. Neuroimaging techniques addressing different aspects of brain organization should be integrated in a reference space to enable topographically correct alignment and subsequent analysis of the various datasets and their modalities. The Waxholm Space (http://software.incf.org/software/waxholm-space) is a publicly available 3D coordinate-based standard reference space for the mapping and registration of neuroanatomical data in rodent brains. This paper provides a newly developed pipeline combining imaging and reconstruction steps with a novel registration strategy to integrate new neuroimaging modalities into the Waxholm Space atlas. As a proof of principle, we incorporated large scale high-resolution cyto-, muscarinic M2 receptor, and fiber architectonic images of rat brains into the 3D digital MRI based atlas of the Sprague Dawley rat in Waxholm Space. We describe the whole workflow, from image acquisition to reconstruction and registration of these three modalities into the Waxholm Space rat atlas. The registration of the brain sections into the atlas is performed by using both linear and non-linear transformations. The validity of the procedure is qualitatively demonstrated by visual inspection, and a quantitative evaluation is performed by measurement of the concordance between representative atlas-delineated regions and the same regions based on receptor or fiber architectonic data. This novel approach enables for the first time the generation of 3D reconstructed volumes of nerve fibers and fiber tracts, or of muscarinic M2 receptor density distributions, in an entire rat brain. Additionally, our pipeline facilitates the inclusion of further neuroimaging datasets, e.g., 3D reconstructed volumes of histochemical stainings or of the regional distributions of multiple other receptor types, into the Waxholm Space

  20. 3D Reconstructed Cyto-, Muscarinic M2 Receptor, and Fiber Architecture of the Rat Brain Registered to the Waxholm Space Atlas

    PubMed Central

    Schubert, Nicole; Axer, Markus; Schober, Martin; Huynh, Anh-Minh; Huysegoms, Marcel; Palomero-Gallagher, Nicola; Bjaalie, Jan G.; Leergaard, Trygve B.; Kirlangic, Mehmet E.; Amunts, Katrin; Zilles, Karl

    2016-01-01

    High-resolution multiscale and multimodal 3D models of the brain are essential tools to understand its complex structural and functional organization. Neuroimaging techniques addressing different aspects of brain organization should be integrated in a reference space to enable topographically correct alignment and subsequent analysis of the various datasets and their modalities. The Waxholm Space (http://software.incf.org/software/waxholm-space) is a publicly available 3D coordinate-based standard reference space for the mapping and registration of neuroanatomical data in rodent brains. This paper provides a newly developed pipeline combining imaging and reconstruction steps with a novel registration strategy to integrate new neuroimaging modalities into the Waxholm Space atlas. As a proof of principle, we incorporated large scale high-resolution cyto-, muscarinic M2 receptor, and fiber architectonic images of rat brains into the 3D digital MRI based atlas of the Sprague Dawley rat in Waxholm Space. We describe the whole workflow, from image acquisition to reconstruction and registration of these three modalities into the Waxholm Space rat atlas. The registration of the brain sections into the atlas is performed by using both linear and non-linear transformations. The validity of the procedure is qualitatively demonstrated by visual inspection, and a quantitative evaluation is performed by measurement of the concordance between representative atlas-delineated regions and the same regions based on receptor or fiber architectonic data. This novel approach enables for the first time the generation of 3D reconstructed volumes of nerve fibers and fiber tracts, or of muscarinic M2 receptor density distributions, in an entire rat brain. Additionally, our pipeline facilitates the inclusion of further neuroimaging datasets, e.g., 3D reconstructed volumes of histochemical stainings or of the regional distributions of multiple other receptor types, into the Waxholm Space

  1. Nuclear accessibility of β-actin mRNA is measured by 3D single-molecule real-time tracking.

    PubMed

    Smith, Carlas S; Preibisch, Stephan; Joseph, Aviva; Abrahamsson, Sara; Rieger, Bernd; Myers, Eugene; Singer, Robert H; Grunwald, David

    2015-05-25

    Imaging single proteins or RNAs allows direct visualization of the inner workings of the cell. Typically, three-dimensional (3D) images are acquired by sequentially capturing a series of 2D sections. The time required to step through the sample often impedes imaging of large numbers of rapidly moving molecules. Here we applied multifocus microscopy (MFM) to instantaneously capture 3D single-molecule real-time images in live cells, visualizing cell nuclei at 10 volumes per second. We developed image analysis techniques to analyze messenger RNA (mRNA) diffusion in the entire volume of the nucleus. Combining MFM with precise registration between fluorescently labeled mRNA, nuclear pore complexes, and chromatin, we obtained globally optimal image alignment within 80-nm precision using transformation models. We show that β-actin mRNAs freely access the entire nucleus and fewer than 60% of mRNAs are more than 0.5 µm away from a nuclear pore, and we do so for the first time accounting for spatial inhomogeneity of nuclear organization. PMID:26008747
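    The pore-distance statistic reported above can be computed with a nearest-neighbour query, as in the following illustrative Python sketch (coordinates assumed to be in micrometers; the actual registration and tracking pipeline is not reproduced).

        # Fraction of mRNAs farther than 0.5 um from the nearest nuclear pore.
        import numpy as np
        from scipy.spatial import cKDTree

        def fraction_far_from_pore(mrna_xyz, pore_xyz, cutoff_um=0.5):
            tree = cKDTree(pore_xyz)            # (M, 3) nuclear pore positions
            dist, _ = tree.query(mrna_xyz)      # nearest-pore distance per mRNA
            return float(np.mean(dist > cutoff_um))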

  2. Left-ventricle segmentation in real-time 3D echocardiography using a hybrid active shape model and optimal graph search approach

    NASA Astrophysics Data System (ADS)

    Zhang, Honghai; Abiose, Ademola K.; Campbell, Dwayne N.; Sonka, Milan; Martins, James B.; Wahle, Andreas

    2010-03-01

    Quantitative analysis of the left ventricular shape and motion patterns associated with left ventricular mechanical dyssynchrony (LVMD) is essential for diagnosis and treatment planning in congestive heart failure. Real-time 3D echocardiography (RT3DE) used for LVMD analysis is frequently limited by heavy speckle noise or partially incomplete data; thus, a segmentation method utilizing learned global shape knowledge is beneficial. In this study, the endocardial surface of the left ventricle (LV) is segmented using a hybrid approach combining active shape model (ASM) with optimal graph search. The latter is used to achieve landmark refinement in the ASM framework. Optimal graph search translates the 3D segmentation into the detection of a minimum-cost closed set in a graph and can produce a globally optimal result. Various types of information (gradient, intensity distributions, and regional-property terms) are used to define the costs for the graph search. The developed method was tested on 44 RT3DE datasets acquired from 26 LVMD patients. The segmentation accuracy was assessed by surface positioning error and volume overlap measured for the whole LV as well as 16 standard LV regions. The segmentation produced very good results that were not achievable using ASM or graph search alone.

  3. Nuclear accessibility of β-actin mRNA is measured by 3D single-molecule real-time tracking

    PubMed Central

    Smith, Carlas S.; Preibisch, Stephan; Joseph, Aviva; Abrahamsson, Sara; Rieger, Bernd; Myers, Eugene; Singer, Robert H.

    2015-01-01

    Imaging single proteins or RNAs allows direct visualization of the inner workings of the cell. Typically, three-dimensional (3D) images are acquired by sequentially capturing a series of 2D sections. The time required to step through the sample often impedes imaging of large numbers of rapidly moving molecules. Here we applied multifocus microscopy (MFM) to instantaneously capture 3D single-molecule real-time images in live cells, visualizing cell nuclei at 10 volumes per second. We developed image analysis techniques to analyze messenger RNA (mRNA) diffusion in the entire volume of the nucleus. Combining MFM with precise registration between fluorescently labeled mRNA, nuclear pore complexes, and chromatin, we obtained globally optimal image alignment within 80-nm precision using transformation models. We show that β-actin mRNAs freely access the entire nucleus and fewer than 60% of mRNAs are more than 0.5 µm away from a nuclear pore, and we do so for the first time accounting for spatial inhomogeneity of nuclear organization. PMID:26008747

  4. Detection of Leptomeningeal Metastasis by Contrast-Enhanced 3D T1-SPACE: Comparison with 2D FLAIR and Contrast-Enhanced 2D T1-Weighted Images

    PubMed Central

    Gil, Bomi; Hwang, Eo-Jin; Lee, Song; Jang, Jinhee; Jung, So-Lyung; Ahn, Kook-Jin; Kim, Bum-soo

    2016-01-01

    Introduction: To compare the diagnostic accuracy of contrast-enhanced 3D (three-dimensional) T1-weighted sampling perfection with application-optimized contrasts by using different flip angle evolutions (T1-SPACE), 2D fluid-attenuated inversion recovery (FLAIR) images and contrast-enhanced 2D T1-weighted images in the detection of leptomeningeal metastasis, without resorting to invasive procedures such as CSF tapping. Materials and Methods: Three groups of patients were included retrospectively over 9 months (from 2013-04-01 to 2013-12-31): group 1, patients with positive malignant cells in CSF cytology (n = 22); group 2, stroke patients with steno-occlusion in the ICA or MCA (n = 16); and group 3, patients with negative results on MRI, whose symptoms were dizziness or headache (n = 25). A total of 63 sets of MR images were separately collected and randomly arranged: (1) CE 3D T1-SPACE; (2) 2D FLAIR; and (3) CE T1-GRE, using a 3-Tesla MR system. A faculty neuroradiologist with 8 years of experience and a second-year radiology trainee reviewed each MR image set, blinded to the results of CSF cytology, and coded their observations as positive or negative for leptomeningeal metastasis. The CSF cytology result was considered the gold standard. The sensitivity and specificity of each MR sequence were calculated. Diagnostic accuracy was compared using McNemar's test, and a Cohen's kappa analysis was performed to assess inter-observer agreement. Results: Diagnostic accuracy did not differ between 3D T1-SPACE and CSF cytology for either rater. However, the accuracy of 2D FLAIR and 2D contrast-enhanced T1-weighted GRE was inconsistent between the two raters. The kappa statistics were 0.657 (3D T1-SPACE), 0.420 (2D FLAIR), and 0.160 (2D contrast-enhanced T1-weighted GRE); the 3D T1-SPACE images showed the highest inter-observer agreement between the raters. Conclusions: Compared to 2D FLAIR and 2D contrast-enhanced T1-weighted GRE, contrast-enhanced 3D T1-SPACE showed a better detection rate of
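    The evaluation statistics used here (sensitivity and specificity against CSF cytology, and Cohen's kappa between the two raters) can be reproduced with a few lines of Python; the sketch below assumes binary positive/negative reads and is independent of the study data.

        # Sensitivity/specificity against cytology and Cohen's kappa between raters.
        import numpy as np

        def sens_spec(reads, cytology):
            r, c = np.asarray(reads, bool), np.asarray(cytology, bool)
            tp, fn = np.sum(r & c), np.sum(~r & c)
            tn, fp = np.sum(~r & ~c), np.sum(r & ~c)
            return tp / (tp + fn), tn / (tn + fp)

        def cohens_kappa(rater1, rater2):
            r1, r2 = np.asarray(rater1, bool), np.asarray(rater2, bool)
            po = np.mean(r1 == r2)                                         # observed
            pe = r1.mean() * r2.mean() + (1 - r1.mean()) * (1 - r2.mean()) # chance
            return (po - pe) / (1 - pe)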

  5. Novel System for Real-Time Integration of 3-D Echocardiography and Fluoroscopy for Image-Guided Cardiac Interventions: Preclinical Validation and Clinical Feasibility Evaluation

    PubMed Central

    Housden, R. James; Ma, Yingliang; Rajani, Ronak; Gao, Gang; Nijhof, Niels; Cathier, Pascal; Bullens, Roland; Gijsbers, Geert; Parish, Victoria; Kapetanakis, Stamatis; Hancock, Jane; Rinaldi, C. Aldo; Cooklin, Michael; Gill, Jaswinder; Thomas, Martyn; O'Neill, Mark D.; Razavi, Reza; Rhode, Kawal S.

    2014-01-01

    Real-time imaging is required to guide minimally invasive catheter-based cardiac interventions. While transesophageal echocardiography allows for high-quality visualization of cardiac anatomy, X-ray fluoroscopy provides excellent visualization of devices. We have developed a novel image fusion system that allows real-time integration of 3-D echocardiography and the X-ray fluoroscopy. The system was validated in the following two stages: 1) preclinical to determine function and validate accuracy; and 2) in the clinical setting to assess clinical workflow feasibility and determine overall system accuracy. In the preclinical phase, the system was assessed using both phantom and porcine experimental studies. Median 2-D projection errors of 4.5 and 3.3 mm were found for the phantom and porcine studies, respectively. The clinical phase focused on extending the use of the system to interventions in patients undergoing either atrial fibrillation catheter ablation (CA) or transcatheter aortic valve implantation (TAVI). Eleven patients were studied with nine in the CA group and two in the TAVI group. Successful real-time view synchronization was achieved in all cases with a calculated median distance error of 2.2 mm in the CA group and 3.4 mm in the TAVI group. A standard clinical workflow was established using the image fusion system. These pilot data confirm the technical feasibility of accurate real-time echo-fluoroscopic image overlay in clinical practice, which may be a useful adjunct for real-time guidance during interventional cardiac procedures. PMID:27170872

  6. Using DOE-ARM and Space-Based Assets to Assess the Quality of Air Force Weather 3D Cloud Analysis and Forecast Products

    NASA Astrophysics Data System (ADS)

    Nobis, T. E.

    2015-12-01

    Air Force Weather (AFW) has documented requirements for global cloud analysis and forecasting to support DoD missions around the world. To meet these needs, AFW utilizes a number of cloud products. Cloud analyses are constructed using 17 different near-real-time satellite sources. Products include analysis of the individual satellite transmissions at native satellite resolution and an hourly global merge of all 17 sources on a 24-km grid. AFW has also recently started creation of a time-delayed global cloud reanalysis to produce a 'best possible' analysis for climatology and verification purposes. Forecasted cloud products include global short-range cloud forecasts created using advection techniques as well as statistically post-processed cloud forecast products derived from various global and regional numerical weather forecast models. All of these cloud products cover different spatial and temporal resolutions and are produced on a number of different grid projections. The longer term vision of AFW is to consolidate these various approaches into a uniform global numerical weather modeling (NWM) system using advanced cloudy-data assimilation processes to construct the analysis and a licensed version of UKMO's Unified Model to produce the various cloud forecast products. In preparation for this evolution in cloud modeling support, AFW has started to aggressively benchmark the performance of their current capabilities. Cloud information collected from so-called 'active' sensors on the ground at the DOE-ARM sites and from space by such instruments as CloudSat, CALIPSO and CATS are being utilized to characterize the performance of AFW products derived largely by passive means. The goal is to understand the performance of the 3D cloud analysis and forecast products of today to help shape the requirements and standards for the future NWM-driven system. This presentation will show selected results from these benchmarking efforts and highlight insights and observations.

  7. RESCU: A real space electronic structure method

    NASA Astrophysics Data System (ADS)

    Michaud-Rioux, Vincent; Zhang, Lei; Guo, Hong

    2016-02-01

    In this work we present RESCU, a powerful MATLAB-based Kohn-Sham density functional theory (KS-DFT) solver. We demonstrate that RESCU can compute the electronic structure properties of systems comprising many thousands of atoms using modest computer resources, e.g. 16 to 256 cores. Its computational efficiency is achieved by exploiting four routes. First, we use numerical atomic orbital (NAO) techniques to efficiently generate a good-quality initial subspace, which is crucially required by Chebyshev filtering methods. Second, we exploit the fact that only a subspace spanning the occupied Kohn-Sham states is required, and accurately solving the KS equation using eigensolvers can generally be avoided. Third, by judiciously analyzing and optimizing various parts of the procedure in RESCU, we delay the O(N^3) scaling to large N, and our tests show that RESCU scales consistently as O(N^2.3) from a few hundred atoms to more than 5000 atoms when using a real-space grid discretization. The scaling is better or comparable in a NAO basis up to the 14,000-atom level. Fourth, we exploit various numerical algorithms and, in particular, we introduce a partial Rayleigh-Ritz algorithm to achieve efficiency gains for systems comprising more than 10,000 electrons. We demonstrate the power of RESCU in solving KS-DFT problems using many examples running on 16, 64 and/or 256 cores: a 5832-atom Si supercell; an 8788-atom Al supercell; a 5324-atom Cu supercell; and a small DNA molecule submerged in 1713 water molecules for a total of 5399 atoms. The KS-DFT calculation is fully converged in a few hours in all cases. Our results suggest that the RESCU method has reached a milestone of solving systems of thousands of atoms by KS-DFT on a modest computer cluster.
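    The Chebyshev filtering step referred to above can be illustrated with a dense NumPy toy: a degree-m Chebyshev polynomial of the Hamiltonian damps the unwanted (high-energy) part of the spectrum, so the filtered block converges toward the occupied subspace without a full eigensolve. This sketch only shows the idea; RESCU's real-space grids, NAO basis and parallelization are not represented.

        # Toy Chebyshev-filtered subspace iteration with a Rayleigh-Ritz step.
        import numpy as np

        def chebyshev_filter(H, X, m, lower, upper):
            """Apply T_m((H - c)/e) to the block X, where [lower, upper] is the
            unwanted part of the spectrum mapped onto [-1, 1]."""
            e = (upper - lower) / 2.0
            c = (upper + lower) / 2.0
            Y, Y_prev = (H @ X - c * X) / e, X           # T_1 and T_0 applied to X
            for _ in range(2, m + 1):                    # Chebyshev recurrence
                Y, Y_prev = 2.0 * (H @ Y - c * Y) / e - Y_prev, Y
            return Y

        def rayleigh_ritz(H, X):
            Q, _ = np.linalg.qr(X)                       # orthonormalize the block
            evals, V = np.linalg.eigh(Q.T @ H @ Q)       # small projected problem
            return evals, Q @ V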

  8. Enhancing Scientific Collaboration, Transparency, and Public Access: Utilizing the Second Life Platform to Convene a Scientific Conference in 3-D Virtual Space

    NASA Astrophysics Data System (ADS)

    McGee, B. W.

    2006-12-01

    Recent studies reveal a general mistrust of science as well as a distorted perception of the scientific method by the public at large. Concurrently, the number of science undergraduate and graduate students is in decline. By taking advantage of emergent technologies not only for direct public outreach but also to enhance public accessibility to the science process, it may be possible to both begin a reversal of popular scientific misconceptions and to engage a new generation of scientists. The Second Life platform is a 3-D virtual world produced and operated by Linden Research, Inc., a privately owned company instituted to develop new forms of immersive entertainment. Free and downloadable to the public, Second Life offers an embedded physics engine, streaming audio and video capability, and unlike other "multiplayer" software, the objects and inhabitants of Second Life are entirely designed and created by its users, providing an open-ended experience without the structure of a traditional video game. Already, educational institutions, virtual museums, and real-world businesses are utilizing Second Life for teleconferencing, pre-visualization, and distance education, as well as to conduct traditional business. However, the untapped potential of Second Life lies in its versatility, where the limitations of traditional scientific meeting venues do not exist, and attendees need not be restricted by prohibitive travel costs. It will be shown that the Second Life system enables scientific authors and presenters at a "virtual conference" to display figures and images at full resolution, employ audio-visual content typically not available to conference organizers, and to perform demonstrations or premiere three-dimensional renderings of objects, processes, or information. An enhanced presentation like those possible with Second Life would be more engaging to non-scientists, and such an event would be accessible to the general users of Second Life, who could have an

  9. Fourier-Space Nonlinear Rayleigh-Taylor Growth Measurements of 3D Laser-Imprinted Modulations in Planar Targets

    SciTech Connect

    Smalyuk, V.A.; Sadot, O.; Delettrez, J.A.; Meyerhofer, D.D.; Regan, S.P.; Sangster, T.C.

    2005-12-05

    Nonlinear growth of 3-D broadband nonuniformities was measured near saturation levels using x-ray radiography in planar foils accelerated by laser light. The initial target modulations were seeded by laser nonuniformities and later amplified during acceleration by Rayleigh-Taylor instability. The nonlinear saturation velocities are measured for the first time and are found to be in excellent agreement with Haan predictions. The measured growth of long-wavelength modes is consistent with enhanced, nonlinear, long-wavelength generation in ablatively driven targets.

  10. FluoRender: An Application of 2D Image Space Methods for 3D and 4D Confocal Microscopy Data Visualization in Neurobiology Research

    PubMed Central

    Wan, Yong; Otsuna, Hideo; Chien, Chi-Bin; Hansen, Charles

    2013-01-01

    2D image space methods are processing methods applied after the volumetric data are projected and rendered into the 2D image space, such as 2D filtering, tone mapping and compositing. In the application domain of volume visualization, most 2D image space methods can be carried out more efficiently than their 3D counterparts. Most importantly, 2D image space methods can be used to enhance volume visualization quality when applied together with volume rendering methods. In this paper, we present and discuss the applications of a series of 2D image space methods as enhancements to confocal microscopy visualizations, including 2D tone mapping, 2D compositing, and 2D color mapping. These methods are easily integrated with our existing confocal visualization tool, FluoRender, and the outcome is a full-featured visualization system that meets neurobiologists’ demands for qualitative analysis of confocal microscopy data. PMID:23584131
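    Two of the 2D image-space operations discussed (tone mapping and compositing) can be sketched in a few lines of NumPy operating on already-rendered images; the gamma curve and the premultiplied-alpha convention below are illustrative choices, not FluoRender's exact implementation.

        # 2D image-space enhancement of rendered volume projections.
        import numpy as np

        def tone_map(image, gamma=0.7):
            """Simple gamma tone mapping of a rendered intensity image in [0, 1]."""
            return np.clip(image, 0.0, 1.0) ** gamma

        def composite_over(front_rgba, back_rgba):
            """Porter-Duff 'over' compositing of two premultiplied RGBA images,
            e.g. two separately rendered fluorescence channels."""
            a = front_rgba[..., 3:4]
            rgb = front_rgba[..., :3] + (1.0 - a) * back_rgba[..., :3]
            alpha = a + (1.0 - a) * back_rgba[..., 3:4]
            return np.concatenate([rgb, alpha], axis=-1)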

  11. FluoRender: An Application of 2D Image Space Methods for 3D and 4D Confocal Microscopy Data Visualization in Neurobiology Research.

    PubMed

    Wan, Yong; Otsuna, Hideo; Chien, Chi-Bin; Hansen, Charles

    2012-01-01

    2D image space methods are processing methods applied after the volumetric data are projected and rendered into the 2D image space, such as 2D filtering, tone mapping and compositing. In the application domain of volume visualization, most 2D image space methods can be carried out more efficiently than their 3D counterparts. Most importantly, 2D image space methods can be used to enhance volume visualization quality when applied together with volume rendering methods. In this paper, we present and discuss the applications of a series of 2D image space methods as enhancements to confocal microscopy visualizations, including 2D tone mapping, 2D compositing, and 2D color mapping. These methods are easily integrated with our existing confocal visualization tool, FluoRender, and the outcome is a full-featured visualization system that meets neurobiologists' demands for qualitative analysis of confocal microscopy data.

  12. EDITORIAL: From reciprocal space to real space in surface science

    NASA Astrophysics Data System (ADS)

    Bartels, Ludwig; Ernst, Karl-Heinz

    2012-09-01

    This issue is dedicated to Karl-Heinz Rieder on the occasion of his 70th birthday. It contains contributions written by his former students and colleagues from all over the world. Experimental techniques based on free electrons, such as photoelectron spectroscopy, electron microscopy and low energy electron diffraction (LEED), were foundational to surface science. While the first revealed the band structures of materials, the second provided nanometer scale imagery and the latter elucidated the atomic scale periodicity of surfaces. All required an (ultra-)high vacuum, and LEED illustrated impressively that adsorbates, such as carbon monoxide, hydrogen or oxygen, can markedly and periodically restructure surfaces from their bulk termination, even at pressures ten orders of magnitude or more below atmospheric. Yet these techniques were not generally able to reveal atomic scale surface defects, nor could they faithfully show adsorption of light atoms such as hydrogen. Although a complete atom, helium can also be regarded as a wave with a de Broglie wavelength that allows the study of surface atomic periodicities at a delicateness and sensitivity exceeding that of electron-based techniques. In combination, these and other techniques generated insight into the periodicity of surfaces and their vibrational properties, yet were limited to simple and periodic surface setups. All that changed with the advent of scanning tunneling microscopy (STM) roughly 30 years ago, allowing real space access to surface defects and individual adsorbates. Applied at low temperatures, not only can STM establish a height profile of surfaces, but it can also perform spectroscopy and serve as an actuator capable of rearranging individual species at atomic scale resolution. The direct and intuitive manner in which STM provided access as a spectator and as an actor to the atomic scale was foundational to today's surface science and to the development of the concepts of nanoscience in general. The

  13. Exploring interaction with 3D volumetric displays

    NASA Astrophysics Data System (ADS)

    Grossman, Tovi; Wigdor, Daniel; Balakrishnan, Ravin

    2005-03-01

    Volumetric displays generate true volumetric 3D images by actually illuminating points in 3D space. As a result, viewing their contents is similar to viewing physical objects in the real world. These displays provide a 360 degree field of view, and do not require the user to wear hardware such as shutter glasses or head-trackers. These properties make them a promising alternative to traditional display systems for viewing imagery in 3D. Because these displays have only recently been made available commercially (e.g., www.actuality-systems.com), their current use tends to be limited to non-interactive output-only display devices. To take full advantage of the unique features of these displays, however, it would be desirable if the 3D data being displayed could be directly interacted with and manipulated. We investigate interaction techniques for volumetric display interfaces, through the development of an interactive 3D geometric model building application. While this application area itself presents many interesting challenges, our focus is on the interaction techniques that are likely generalizable to interactive applications for other domains. We explore a very direct style of interaction where the user interacts with the virtual data using direct finger manipulations on and around the enclosure surrounding the displayed 3D volumetric image.

  14. Physics-based Simulation of Human Posture Using 3D Whole Body Scanning Technology for Astronaut Space Suit Evaluation

    NASA Technical Reports Server (NTRS)

    Kim, Kyu-Jung

    2005-01-01

    Over the past few years, high-precision three-dimensional (3D) full-body laser scanners have been developed as a powerful anthropometry tool for quantifying the morphology of the human body. The full-body scanner can quickly extract body characteristics in a non-contact fashion. The Anthropometry and Biomechanics Facility (ABF) requires the capability to simulate the kinematics of a digital human in various postures, whereas the laser scanner can only capture a single static posture at a time. During this summer fellowship period, a theoretical study was conducted to estimate an arbitrary posture from a series of example postures through finite element (FE) approximation; it found that a four-point isoparametric FE approximation would result in reasonable maximum position errors of less than 5%. Subsequent pilot scan experiments demonstrated that a bead marker with a nominal size of 6 mm could be used for digitizing the 3-D coordinates of anatomical landmarks for further kinematic analysis. Two sessions of human subject testing were conducted to reconstruct arbitrary postures from a set of example postures for each joint motion of the forearm/hand complex and the whole upper extremity.
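    The four-point isoparametric approximation mentioned above can be sketched with the standard bilinear shape functions: the four example postures sit at the corners of an element in a two-parameter posture space, and an arbitrary posture is blended from them. The parameterization and array shapes in this Python sketch are illustrative assumptions.

        # Bilinear (four-node) isoparametric interpolation of scanned postures.
        import numpy as np

        def shape_functions(xi, eta):
            """Shape functions N_i at natural coordinates (xi, eta) in [-1, 1]."""
            return 0.25 * np.array([(1 - xi) * (1 - eta),
                                    (1 + xi) * (1 - eta),
                                    (1 + xi) * (1 + eta),
                                    (1 - xi) * (1 + eta)])

        def interpolate_posture(example_postures, xi, eta):
            """example_postures: (4, n_landmarks, 3) corner postures."""
            N = shape_functions(xi, eta)
            return np.tensordot(N, np.asarray(example_postures), axes=1)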

  15. Volumetric 3D display using a DLP projection engine

    NASA Astrophysics Data System (ADS)

    Geng, Jason

    2012-03-01

    In this article, we describe a volumetric 3D display system based on the high-speed DLP(TM) (Digital Light Processing) projection engine. Existing two-dimensional (2D) flat-screen displays often lead to ambiguity and confusion in high-dimensional data/graphics presentation due to lack of true depth cues. Even with the help of powerful 3D rendering software, three-dimensional (3D) objects displayed on a 2D flat screen may still fail to provide spatial relationship or depth information correctly and effectively. Essentially, 2D displays have to rely upon the capability of the human brain to piece together a 3D representation from 2D images. Despite the impressive mental capability of the human visual system, its visual perception is not reliable if certain depth cues are missing. In contrast, volumetric 3D display technologies to be discussed in this article are capable of displaying 3D volumetric images in true 3D space. Each "voxel" on a 3D image (analogous to a pixel in a 2D image) is physically located at the spatial position where it is supposed to be, and emits light from that position in all directions to form a real 3D image in 3D space. Such a volumetric 3D display provides both physiological depth cues and psychological depth cues to the human visual system to truthfully perceive 3D objects. It yields a realistic spatial representation of 3D objects and simplifies our understanding of the complexity of 3D objects and the spatial relationships among them.

  16. Rapid Real-Time SpaceWire Emulation

    NASA Astrophysics Data System (ADS)

    Mudie, Stephen; Parkes, Steve; Dunstan, Martin

    2015-09-01

    The SpaceWire Electronic Ground Support Equipment (EGSE) test and development unit from STAR-Dundee can be used to very rapidly emulate the real-time behaviour of SpaceWire equipment. The behaviour of the equipment to emulate is described in a script using a SpaceWire-specific scripting language. Once configured, the SpaceWire EGSE unit operates independently of software. This paper describes three camera emulation scripts to demonstrate the rapid real-time SpaceWire emulation possible using the SpaceWire EGSE.

  17. 3D rapid mapping

    NASA Astrophysics Data System (ADS)

    Isaksson, Folke; Borg, Johan; Haglund, Leif

    2008-04-01

    In this paper, the performance of passive range-measurement imaging using stereo techniques in real-time applications is described. Stereo vision uses multiple images to get depth resolution in a similar way as Synthetic Aperture Radar (SAR) uses multiple measurements to obtain better spatial resolution. This technique has been used in photogrammetry for a long time, but it will be shown that it is now possible to do the calculations, with carefully designed image processing algorithms, on e.g. a PC in real time. In order to obtain high-resolution, quantitative data in the stereo estimation, a mathematical camera model is used. The parameters of the camera model are determined in a calibration rig or, in the case of a moving camera, the scene itself can be used to calibrate most of the parameters. After calibration, an ordinary TV camera has an angular resolution comparable to a theodolite, but at a much lower price. The paper will present results from high-resolution 3D imagery from air to ground. The 3D results from stereo calculation of image pairs are stitched together into a large database to form a 3D model of the area covered.
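    The core range relation behind the passive stereo measurement is the standard disparity-to-depth conversion, sketched below in Python for a rectified image pair (the focal length in pixels and the baseline in meters are assumed to come from the calibrated camera model).

        # Depth from disparity for a calibrated, rectified stereo pair.
        import numpy as np

        def depth_from_disparity(disparity_px, focal_px, baseline_m):
            """Z = f * B / d; zero disparity maps to infinite range."""
            d = np.asarray(disparity_px, dtype=float)
            with np.errstate(divide="ignore"):
                return np.where(d > 0, focal_px * baseline_m / d, np.inf)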

  18. Optical coherence tomography for ultrahigh-resolution 3D imaging of cell development and real-time guiding for photodynamic therapy

    NASA Astrophysics Data System (ADS)

    Wang, Tianshi; Zhen, Jinggao; Wang, Bo; Xue, Ping

    2009-11-01

    Optical coherence tomography is an emerging technique for cross-sectional imaging with high spatial resolution on the micrometer scale. It enables in vivo, non-invasive imaging with no need to contact the sample and is widely used in biological and clinical applications. In this paper, optical coherence tomography is demonstrated for both biological and clinical applications. For the biological application, a white-light interference microscope is developed for ultrahigh-resolution full-field optical coherence tomography (full-field OCT) to implement 3D imaging of biological tissue. A spatial resolution of 0.9 μm × 1.1 μm (transverse × axial) is achieved, and a system sensitivity of 85 dB is obtained at an acquisition time of 5 s per image. The development of a mouse embryo is studied layer by layer with our ultrahigh-resolution full-field OCT. For the clinical application, a handheld optical coherence tomography system is designed for real-time, in situ imaging of port wine stain (PWS) patients and for providing surgical guidance for photodynamic therapy (PDT) treatment. A light source with a center wavelength of 1310 nm, a -3 dB bandwidth of 90 nm and an optical power of 9 mW is utilized. A lateral resolution of 8 μm and an axial resolution of 7 μm, at a rate of 2 frames per second and with 102 dB sensitivity, are achieved in biological tissue. It is shown that OCT images distinguish normal and PWS tissue very well in the clinic and can serve as a valuable diagnostic tool for PDT treatment.

  19. Three-Dimensional Rotation, Twist and Torsion Analyses Using Real-Time 3D Speckle Tracking Imaging: Feasibility, Reproducibility, and Normal Ranges in Pediatric Population

    PubMed Central

    Han, Wei; Gao, Jun; He, Lin; Yang, Yali; Yin, Ping; Xie, Mingxing; Ge, Shuping

    2016-01-01

    Background and Objective: The specific aim of this study was to evaluate the feasibility, reproducibility and maturational changes of LV rotation, twist and torsion variables measured by real-time 3D speckle-tracking echocardiography (RT3DSTE) in children. Methods: A prospective study was conducted in 347 consecutive healthy subjects (181 males/156 females, mean age 7.12 ± 5.3 years, range from birth to 18 years) using real-time 3D echocardiography (3DE). The LV rotation, twist and torsion measurements were made off-line using TomTec software. Manual landmark selection and endocardial border editing were performed in 3 planes (apical 2-, 4-, and 3-chamber views) and semi-automated tracking yielded LV rotation, twist and torsion measurements. LV rotation, twist and torsion analysis by RT3DSTE was feasible in 307 out of 347 subjects (88.5%). Results: There was no correlation between rotation or twist and age, height, weight, BSA or heart rate. However, there was a statistically significant, but very modest, correlation between LV torsion and age (R² = 0.036, P < 0.001). Normal ranges were defined for rotation and twist in this cohort, and for torsion for each age group. The intra-observer and inter-observer variabilities for apical and basal rotation, twist and torsion ranged from 7.3% ± 3.8% to 12.3% ± 8.8% and from 8.8% ± 4.6% to 15.7% ± 10.1%, respectively. Conclusions: We conclude that analysis of LV rotation, twist and torsion by this new RT3DSTE is feasible and reproducible in the pediatric population. There is no maturational change in rotation and twist, but torsion decreases with age in this cohort. Further refinement is warranted to validate the utility of this new methodology in more sensitive and quantitative evaluation of congenital and acquired heart diseases in children. PMID:27427968

  20. A radial sampling strategy for uniform k-space coverage with retrospective respiratory gating in 3D ultrashort-echo-time lung imaging.

    PubMed

    Park, Jinil; Shin, Taehoon; Yoon, Soon Ho; Goo, Jin Mo; Park, Jang-Yeon

    2016-05-01

    The purpose of this work was to develop a 3D radial-sampling strategy which maintains uniform k-space sample density after retrospective respiratory gating, and demonstrate its feasibility in free-breathing ultrashort-echo-time lung MRI. A multi-shot, interleaved 3D radial sampling function was designed by segmenting a single-shot trajectory of projection views such that each interleaf samples k-space in an incoherent fashion. An optimal segmentation factor for the interleaved acquisition was derived based on an approximate model of respiratory patterns such that radial interleaves are evenly accepted during the retrospective gating. The optimality of the proposed sampling scheme was tested by numerical simulations and phantom experiments using human respiratory waveforms. Retrospectively, respiratory-gated, free-breathing lung MRI with the proposed sampling strategy was performed in healthy subjects. The simulation yielded the most uniform k-space sample density with the optimal segmentation factor, as evidenced by the smallest standard deviation of the number of neighboring samples as well as minimal side-lobe energy in the point spread function. The optimality of the proposed scheme was also confirmed by minimal image artifacts in phantom images. Human lung images showed that the proposed sampling scheme significantly reduced streak and ring artifacts compared with the conventional retrospective respiratory gating while suppressing motion-related blurring compared with full sampling without respiratory gating. In conclusion, the proposed 3D radial-sampling scheme can effectively suppress the image artifacts due to non-uniform k-space sample density in retrospectively respiratory-gated lung MRI by uniformly distributing gated radial views across the k-space.
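    The role of the segmentation factor can be illustrated with a toy simulation: split the single-shot ordering of projection views into S interleaves played back-to-back, apply a periodic end-expiration acceptance window, and measure how evenly the accepted views are spread along the single-shot ordering (a rough proxy for k-space uniformity). The timing values and the rectangular acceptance window in this Python sketch are illustrative assumptions; the paper's trajectory design and respiratory model are not reproduced.

        # Toy check of view-acceptance uniformity versus segmentation factor.
        import numpy as np

        def coverage_spread(n_views, seg_factor, tr=0.003, resp_period=4.0, duty=0.4):
            per = n_views // seg_factor
            order = np.arange(per * seg_factor)          # single-shot view index
            # View i goes to interleaf i % seg_factor; interleaves are played
            # back-to-back, so its acquisition (time) index is:
            acq = (order % seg_factor) * per + order // seg_factor
            t = acq * tr
            accepted = ((t % resp_period) / resp_period) < duty   # gating window
            gaps = np.diff(np.flatnonzero(accepted))     # gaps along k-space order
            return gaps.std() / gaps.mean()              # lower = more uniform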

  1. The moduli spaces of 3d N ≥ 2 Chern-Simons gauge theories and their Hilbert series

    NASA Astrophysics Data System (ADS)

    Cremonesi, Stefano; Mekareeya, Noppadol; Zaffaroni, Alberto

    2016-10-01

    We present a formula for the Hilbert series that counts gauge invariant chiral operators in a large class of 3d N ≥ 2 Yang-Mills-Chern-Simons theories. The formula counts 't Hooft monopole operators dressed by gauge invariants of a residual gauge theory of massless fields in the monopole background. We provide a general formula for the case of abelian theories, where nonperturbative corrections are absent, and consider a few examples of nonabelian theories where nonperturbative corrections are well understood. We also analyze in detail nonabelian ABJ(M) theories as well as worldvolume theories of M2-branes probing Calabi-Yau fourfold and hyperKähler twofold singularities with N ≥ 2 and N ≥ 3 supersymmetry.

  2. Long term dose monitoring onboard the European Columbus module of the International Space Station (ISS) in the frame of the DOSIS and DOSIS 3D project

    NASA Astrophysics Data System (ADS)

    Berger, Thomas

    The radiation environment encountered in space differs in nature from that on Earth, consisting mostly of highly energetic ions from protons up to iron and resulting in radiation levels far exceeding those experienced on Earth by occupational radiation workers. Accurate knowledge of the physical characteristics of the space radiation field as a function of solar activity, orbital parameters and the different shielding configurations of the International Space Station (ISS) is therefore needed. To investigate the spatial and temporal distribution of the radiation field inside the European Columbus module, the experiment "Dose Distribution Inside the ISS" (DOSIS), under the project and science lead of the German Aerospace Center (DLR), was launched on July 15th 2009 with STS-127 to the ISS. The DOSIS experiment consists of a combination of "Passive Detector Packages" (PDP) distributed at eleven locations inside Columbus for the measurement of the spatial variation of the radiation field, and two active Dosimetry Telescopes (DOSTELs) with a Data and Power Unit (DDPU) in a dedicated Nomex pouch mounted at a fixed location beneath the European Physiology Module (EPM) rack for the measurement of the temporal variation of the radiation field parameters. The DOSIS experiment suite measured during the lowest solar minimum conditions of the space age, from July 2009 to June 2011. In July 2011 the active hardware was transferred to ground for refurbishment and preparation for the follow-up DOSIS 3D experiment. The hardware for DOSIS 3D was launched with Soyuz 30S to the ISS on May 15th 2012. The PDPs are replaced with each even-numbered Soyuz flight, starting with Soyuz 30S. Data from the active detectors are transferred to ground via the EPM rack, which is activated once a month for this purpose. The presentation will give an overview of the DOSIS and DOSIS 3D experiments and focus on the results from the passive radiation detectors of the DOSIS 3D experiment.

  3. 'If you assume, you can make an ass out of u and me': a decade of the disector for stereological counting of particles in 3D space.

    PubMed Central

    Mayhew, T M; Gundersen, H J

    1996-01-01

    The year 1984 was a watershed in stereology. It saw the introduction of highly efficient and unbiased design-based methods for counting the number of arbitrary objects in 3-dimensional (3D) space using 2D sectional images. The only requirement is that the objects be unambiguously identifiable on parallel sections or successive focal planes. The move away from the 'assumption-based' and 'model-based' methods applied previously has been a major scientific advance. It has led to the resolution of several problems in different biomedical areas. The basic principle which makes possible 3D counting from sections is the disector. Here, we review the disector principle and consider its impact on the counting and sizing of biological particles. From now on, there can be no excuse for applying the biased counting methods of yesteryear. Their continued use, despite the availability of unbiased alternatives, should be seen as paying homage to History rather than advancing Science. PMID:8655396
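
    A minimal sketch of the physical disector counting rule on synthetic data may help make the principle concrete (all particle sizes and the section separation are invented for the example): a particle contributes to the count Q⁻ only if it is transected by the reference section but not by the paired look-up section, and numerical density is estimated as the total count divided by the summed disector volume (frame area × section separation).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic tissue block: particle centres and radii in a 100x100x100 um^3 cube.
n_particles = 500
centres = rng.uniform(0, 100, size=(n_particles, 3))       # um
radii = rng.uniform(3, 6, size=n_particles)                 # um (diameter > h below)
true_Nv = n_particles / 100 ** 3                             # particles per um^3

def hit(z_plane):
    """Which particles are transected by a section at height z_plane."""
    return np.abs(centres[:, 2] - z_plane) < radii

# Physical disector: pairs of parallel sections a distance h apart.
h = 5.0                                                      # section separation [um]
frame_area = 100.0 * 100.0                                   # counting frame [um^2]
Q_minus, n_disectors = 0, 0
for z in np.arange(10, 90, 10):                              # reference section heights
    ref, look = hit(z), hit(z + h)
    Q_minus += np.sum(ref & ~look)   # counted: present in reference, absent in look-up
    n_disectors += 1

Nv_est = Q_minus / (n_disectors * frame_area * h)
print(f"true N_V = {true_Nv:.2e} /um^3, disector estimate = {Nv_est:.2e} /um^3")
```

    The estimate is unbiased regardless of particle shape or size distribution, provided the section separation does not exceed the smallest particle height, which is the practical condition hidden in the toy numbers above.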

  4. Joint Design of Excitation k-Space Trajectory and RF Pulse for Small-Tip 3D Tailored Excitation in MRI.

    PubMed

    Sun, Hao; Fessler, Jeffrey A; Noll, Douglas C; Nielsen, Jon-Fredrik

    2016-02-01

    We propose a new method for the joint design of the k-space trajectory and RF pulse in 3D small-tip tailored excitation. Designing time-varying RF and gradient waveforms for a desired 3D target excitation pattern in MRI poses a non-linear, non-convex, constrained optimization problem of relatively large size that is difficult to solve directly. Existing joint pulse design approaches are therefore typically restricted to predefined trajectory types such as EPI or stack-of-spirals that intrinsically satisfy the maximum gradient and slew-rate constraints and reduce the problem size (dimensionality) dramatically, but lead to suboptimal excitation accuracy for a given pulse duration. Here we use a 2nd-order B-spline basis that can be fitted to an arbitrary k-space trajectory and that allows the gradient constraints to be implemented efficiently. We show that this allows the joint optimization problem to be solved with quite general k-space trajectories. Starting from an arbitrary initial trajectory, we first approximate the trajectory using the B-spline basis, and then optimize the corresponding coefficients. We evaluate our method in simulation using four different k-space initializations: stack-of-spirals, SPINS, KT-points, and a new method based on KT-points. In all cases, our approach leads to substantial improvement in excitation accuracy for a given pulse duration. We also validated our method for inner-volume excitation using phantom experiments. The computation is fast enough for online applications.
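
    A toy numpy sketch of the parameterization idea (not the authors' optimizer or constraint handling) is given below: each axis of the k-space trajectory is written as a linear combination of uniform quadratic (2nd-order) B-spline basis functions, so the trajectory, the gradient waveform and the slew rate are all linear in the spline coefficients, which is what makes hardware constraints easy to impose during joint optimization. The raster time, gyromagnetic constant and toy trajectory are illustrative assumptions.

```python
import numpy as np

def bspline2(x):
    """Uniform quadratic B-spline kernel, support [0, 3)."""
    y = np.zeros_like(x)
    m = (x >= 0) & (x < 1); y[m] = 0.5 * x[m] ** 2
    m = (x >= 1) & (x < 2); y[m] = 0.75 - (x[m] - 1.5) ** 2
    m = (x >= 2) & (x < 3); y[m] = 0.5 * (3 - x[m]) ** 2
    return y

# Basis matrix: n_t time samples, n_c coefficients per spatial axis.
n_t, n_c = 512, 40
dt = 4e-6                                           # gradient raster time [s] (assumed)
t = np.arange(n_t) * dt
u = t / t[-1] * (n_c - 2)                           # map time onto spline index coordinate
B = np.stack([bspline2(u - j + 2) for j in range(n_c)], axis=1)   # (n_t, n_c)

# Fit spline coefficients to an arbitrary initial 3D trajectory (a toy spiral here),
# then recover gradient and slew rate, both *linear* in the coefficients.
gamma = 4257.6                                      # Hz/G (assumed units)
s = u / (n_c - 2)                                   # normalized arc parameter in [0, 1]
k_target = np.stack([s * np.cos(8 * np.pi * s),
                     s * np.sin(8 * np.pi * s),
                     0.2 * s], axis=1)              # toy trajectory [cycles/cm]
coef, *_ = np.linalg.lstsq(B, k_target, rcond=None)            # (n_c, 3)

k_fit = B @ coef
grad = np.gradient(k_fit, dt, axis=0) / gamma       # gradient waveform [G/cm]
slew = np.gradient(grad, dt, axis=0)                # slew rate [G/cm/s]
print(f"max fit error {np.abs(k_fit - k_target).max():.2e}, "
      f"|g|max {np.abs(grad).max():.3f} G/cm, |s|max {np.abs(slew).max():.0f} G/cm/s")
```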

  5. EDITORIAL: From reciprocal space to real space in surface science

    NASA Astrophysics Data System (ADS)

    Bartels, Ludwig; Ernst, Karl-Heinz

    2012-09-01

    This issue is dedicated to Karl-Heinz Rieder on the occasion of his 70th birthday. It contains contributions written by his former students and colleagues from all over the world. Experimental techniques based on free electrons, such as photoelectron spectroscopy, electron microscopy and low energy electron diffraction (LEED), were foundational to surface science. While the first revealed the band structures of materials, the second provided nanometer-scale imagery and the latter elucidated the atomic-scale periodicity of surfaces. All required an (ultra-)high vacuum, and LEED illustrated impressively that adsorbates, such as carbon monoxide, hydrogen or oxygen, can markedly and periodically restructure surfaces from their bulk termination, even at pressures ten orders of magnitude or more below atmospheric. Yet these techniques were not generally able to reveal atomic-scale surface defects, nor could they faithfully show adsorption of light atoms such as hydrogen. Although a complete atom, helium can also be regarded as a wave with a de Broglie wavelength that allows the study of surface atomic periodicities with a delicacy and sensitivity exceeding that of electron-based techniques. In combination, these and other techniques generated insight into the periodicity of surfaces and their vibrational properties, yet were limited to simple and periodic surface setups. All that changed with the advent of scanning tunneling microscopy (STM) roughly 30 years ago, allowing real-space access to surface defects and individual adsorbates. Applied at low temperatures, STM can not only establish a height profile of surfaces, but can also perform spectroscopy and serve as an actuator capable of rearranging individual species at atomic-scale resolution. The direct and intuitive manner in which STM provided access, as a spectator and as an actor, to the atomic scale was foundational to today's surface science and to the development of the concepts of nanoscience in general. The

  6. Real-time space system control with expert systems

    NASA Technical Reports Server (NTRS)

    Leinweber, David; Hawkinson, Lowell; Perry, John

    1988-01-01

    Many aspects of space system operations involve continuous control of real-time processes. These processes include electrical power system monitoring, prelaunch and ongoing propulsion system health and maintenance, environmental and life support systems, space suit checkout, onboard manufacturing, and vehicle servicing, including satellites, shuttles, orbital maneuvering vehicles, orbital transfer vehicles and remote teleoperators. Traditionally, monitoring of these critical real-time processes has been done by trained human experts monitoring telemetry data. However, the long duration of future space missions and the high cost of crew time in space create a powerful economic incentive for the development of highly autonomous knowledge-based expert control procedures for these space systems.

  7. An experiment to study the effects of space flight on cells of mesenchymal origin in a new 3D-graft model in vitro

    NASA Astrophysics Data System (ADS)

    Volova, Larissa

    One of the major health problems of astronauts is disorders of the musculoskeletal system, which makes studies of the effect of space flight factors on osteoblastic and chondroblastic cells in vitro highly relevant. An experiment to study the viability and proliferative activity of cultured cells of mesenchymal origin (chondroblasts and dermal fibroblasts) was performed on the spacecraft "BION-M" № 1 with the scientific equipment "BIOKONT-B". To study the effect of space flight conditions in vitro at the cellular level, a new model was developed based on a 3D graft of allogeneic demineralized spongiosa produced with the Lioplast® technology. For the space experiment and the simultaneous ground experiment, cell cultures of hyaline cartilage and human skin were obtained in the laboratory of the Institute of Experimental Medicine and Biotechnology of Samara State Medical University, grown, and then identified by morphological and immunohistochemical methods. The cells were seeded onto the porous 3D grafts (verified by scanning electron and confocal microscopy) and cultured in full growth medium. After completion of the flight of the spacecraft "BION-M" № 1, the biological objects were examined using a scanning electron microscope (JEOL JSM-6390A Analysis Station, Japan), confocal microscopy and an LDH test. The results revealed that after the 30-day flight the cells not only remained viable but also actively proliferated during the flight, their number increasing by almost 8 times. In the synchronous ground experiment, all the cells had died by this date. The experiment confirmed the adequacy of the proposed 3D-graft model for studying the effect of space flight on the morphological and functional characteristics of cells in vitro.

  8. 2D mapping of the MV photon fluence and 3D dose reconstruction in real time for quality assurance during radiotherapy treatment

    NASA Astrophysics Data System (ADS)

    Alrowaili, Z. A.; Lerch, M. L. F.; Carolan, M.; Fuduli, I.; Porumb, C.; Petasecca, M.; Metcalfe, P.; Rosenfeld, A. B.

    2015-09-01

    Summary: The photon irradiation response of a 2D solid-state transmission detector array mounted in a linac block tray is used to reconstruct the projected 2D dose map in a homogeneous phantom along rays that diverge from the X-ray source and pass through each of the 121 detector elements. A single diode response-to-dose scaling factor, applied to all detectors, is utilised in the reconstruction to demonstrate that real-time QA during radiotherapy treatment is feasible. Purpose: To quantitatively demonstrate reconstruction of the real-time radiation dose from the irradiation response of the 11×11 silicon Magic Plate (MP) detector array operated in Transmission Mode (MPTM). Methods and Materials: In transmission mode the MP is positioned in the block tray of a linac so that the central detector of the array lies on the central axis of the radiation beam. This central detector is used to determine the conversion factor from measured irradiation response to reconstructed dose at any point on the central axis within a homogeneous solid-water phantom. The same conversion factor is used for all MP detector elements lying within the irradiation field. Using the two sets of data, the 2D or 3D dose map can be reconstructed in the homogeneous phantom. The technique we have developed is illustrated here for different depths and irradiation field sizes (5 × 5 cm² to 40 × 40 cm²), as well as for a highly non-uniform irradiation field. Results: We find that the MPTM response is proportional to the projected 2D dose map measured at a specific phantom depth, the "sweet depth". A single factor, for several irradiation field sizes and depths, is derived to reconstruct the dose in the phantom along rays projected from the photon source through each MPTM detector element. We demonstrate that, for all field sizes using the above method, the 2D reconstructed and measured doses agree to within ±2.48% (2 standard deviations) for all in-field MP detector elements. Conclusions: a
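
    A schematic numpy sketch of the reconstruction geometry described above follows (all distances, the detector pitch and the calibration value are invented for the example, not the published calibration): every transmission-detector reading is multiplied by one response-to-dose factor fixed on the central axis, and each element is mapped onto the phantom plane along the ray from the source through that element.

```python
import numpy as np

# Illustrative geometry: 11x11 transmission detector in the block tray, and the
# phantom plane ("sweet depth") where the projected dose map is reconstructed.
SDD = 65.0           # source-to-detector distance [cm]      (assumed)
SPD = 100.0          # source-to-phantom-plane distance [cm] (assumed)
pitch = 1.0          # detector element pitch [cm]           (assumed)

# Toy detector response for a flat 5x5-element field (arbitrary units).
resp = np.zeros((11, 11))
resp[3:8, 3:8] = 1.0

# Single response-to-dose factor: chosen so the central detector element
# reproduces a reference dose measured on the central axis at the sweet depth.
dose_cax = 2.0                               # cGy/MU at the sweet depth (toy value)
k = dose_cax / resp[5, 5]

# Map each element onto the phantom plane along the ray from the source through
# that element: lateral positions magnify by SPD/SDD, dose scales by k only.
iy, ix = np.mgrid[0:11, 0:11]
x_plane = (ix - 5) * pitch * SPD / SDD
y_plane = (iy - 5) * pitch * SPD / SDD
dose_plane = k * resp                        # projected 2D dose map at the sweet depth

print(f"plane extent ±{x_plane.max():.1f} cm; CAX dose {dose_plane[5, 5]:.2f} cGy/MU")
```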

  9. Real-time 3D internal marker tracking during arc radiotherapy by the use of combined MV kV imaging

    NASA Astrophysics Data System (ADS)

    Liu, W.; Wiersma, R. D.; Mao, W.; Luxton, G.; Xing, L.

    2008-12-01

    To minimize the adverse dosimetric effect caused by tumor motion, it is desirable to have real-time knowledge of the tumor position throughout the beam delivery process. A promising technique to realize the real-time image-guided scheme in external beam radiation therapy is through the combined use of MV and onboard kV beam imaging. The success of this MV-kV triangulation approach for fixed-gantry radiation therapy has been demonstrated. With the increasing acceptance of modern arc radiotherapy in the clinic, a timely and clinically important question is whether the image guidance strategy can be extended to arc therapy to provide the urgently needed real-time tumor motion information. While conceptually feasible, there are a number of theoretical and practical issues specific to the arc delivery that need to be resolved before clinical implementation. The purpose of this work is to establish a robust procedure of system calibration for combined MV and kV imaging for internal marker tracking during arc delivery and to demonstrate the feasibility and accuracy of the technique. A commercially available LINAC equipped with an onboard kV imager and electronic portal imaging device (EPID) was used for the study. A custom-built phantom with multiple ball bearings was used to calibrate the stereoscopic MV-kV imaging system to provide the transformation parameters from imaging pixels to 3D world coordinates. The accuracy of the fiducial tracking system was examined using a 4D motion phantom capable of moving in accordance with a pre-programmed trajectory. Overall, spatial accuracy of MV-kV fiducial tracking during the arc delivery process for normal adult breathing amplitude and period was found to be better than 1 mm. For fast motion, the results depended on the imaging frame rates. The RMS error ranged from ~0.5 mm for the normal adult breathing pattern to ~1.5 mm for more extreme cases with a low imaging frame rate of 3.4 Hz. In general, highly accurate real
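
    A compact numpy illustration of the stereoscopic triangulation step is sketched below (the source positions, marker position and noise level are invented for the example, not the calibration of this work): each imager defines a ray from its source through the detected marker, and the 3D marker position is recovered as the least-squares point closest to both rays.

```python
import numpy as np

def closest_point_to_rays(origins, directions):
    """Least-squares 3D point closest to a set of rays (origin + direction)."""
    A = np.zeros((3, 3)); b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)       # projector onto plane normal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Invented geometry: MV beam axis and kV imager mounted 90 degrees apart.
marker_true = np.array([1.2, -0.8, 2.5])     # cm, in room coordinates
mv_src = np.array([0.0, 0.0, 100.0])         # cm (toy source positions)
kv_src = np.array([100.0, 0.0, 0.0])

# Each imager contributes the ray from its source through the marker as seen on
# its projection image (simulated directly here, with a little detection noise).
rng = np.random.default_rng(1)
rays = [marker_true - mv_src + rng.normal(0, 5e-3, 3),
        marker_true - kv_src + rng.normal(0, 5e-3, 3)]

est = closest_point_to_rays([mv_src, kv_src], rays)
print(f"triangulated marker {est.round(3)} cm, "
      f"error {np.linalg.norm(est - marker_true) * 10:.2f} mm")
```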

  10. Real-time 3D internal marker tracking during arc radiotherapy by the use of combined MV-kV imaging.

    PubMed

    Liu, W; Wiersma, R D; Mao, W; Luxton, G; Xing, L

    2008-12-21

    To minimize the adverse dosimetric effect caused by tumor motion, it is desirable to have real-time knowledge of the tumor position throughout the beam delivery process. A promising technique to realize the real-time image-guided scheme in external beam radiation therapy is through the combined use of MV and onboard kV beam imaging. The success of this MV-kV triangulation approach for fixed-gantry radiation therapy has been demonstrated. With the increasing acceptance of modern arc radiotherapy in the clinic, a timely and clinically important question is whether the image guidance strategy can be extended to arc therapy to provide the urgently needed real-time tumor motion information. While conceptually feasible, there are a number of theoretical and practical issues specific to the arc delivery that need to be resolved before clinical implementation. The purpose of this work is to establish a robust procedure of system calibration for combined MV and kV imaging for internal marker tracking during arc delivery and to demonstrate the feasibility and accuracy of the technique. A commercially available LINAC equipped with an onboard kV imager and electronic portal imaging device (EPID) was used for the study. A custom-built phantom with multiple ball bearings was used to calibrate the stereoscopic MV-kV imaging system to provide the transformation parameters from imaging pixels to 3D world coordinates. The accuracy of the fiducial tracking system was examined using a 4D motion phantom capable of moving in accordance with a pre-programmed trajectory. Overall, spatial accuracy of MV-kV fiducial tracking during the arc delivery process for normal adult breathing amplitude and period was found to be better than 1 mm. For fast motion, the results depended on the imaging frame rates. The RMS error ranged from approximately 0.5 mm for the normal adult breathing pattern to approximately 1.5 mm for more extreme cases with a low imaging frame rate of 3.4 Hz. In general

  11. Single Molecule 3D Orientation in Time and Space: A 6D Dynamic Study on Fluorescently Labeled Lipid Membranes.

    PubMed

    Börner, Richard; Ehrlich, Nicky; Hohlbein, Johannes; Hübner, Christian G

    2016-05-01

    Interactions between single molecules profoundly depend on their mutual three-dimensional orientation. Recently, we demonstrated a technique that allows for orientation determination of single dipole emitters using a polarization-resolved distribution of fluorescence into several detection channels. As the method is based on the detection of single photons, it additionally allows for performing fluorescence correlation spectroscopy (FCS) as well as dynamical anisotropy measurements, thereby providing access to fast orientational dynamics down to the nanosecond time scale. The 3D orientation is particularly interesting in non-isotropic environments such as lipid membranes, which are of great importance in biology. We used giant unilamellar vesicles (GUVs) labeled with fluorescent dyes down to a single-molecule concentration as a model system for both assessing the robustness of the orientation determination at different timescales and quantifying the associated errors. The vesicles provide a well-defined spherical surface, such that the use of fluorescent lipid dyes (DiO) allows a wide range of dipole orientations to be established experimentally. To complement our experimental data, we performed Monte Carlo simulations of the rotational dynamics of dipoles incorporated into lipid membranes. Our study offers a comprehensive view of the dye orientation behavior in a lipid membrane with high spatiotemporal resolution, representing a six-dimensional fluorescence detection approach. PMID:26972111

  12. Real-time analysis of integrin-mediated chemotactic migration of T lymphocytes within 3-D extracellular matrix-like gels.

    PubMed

    Franitza, S; Alon, R; Lider, O

    1999-05-27

    We have developed a novel 3-D gel reconstituted with major extracellular matrix (ECM) glycoproteins to follow, in real time, the dynamics of migration of human T cells locomoting on gradients formed by representative chemoattractants: the C-C chemokine RANTES and the pro-inflammatory cytokine IL-2. In the absence of chemoattractants, none of the T cells migrated directionally and the levels of random migration or cell polarization were low. However, major fractions of T cells placed in IL-2 and RANTES gradients in the gels polarized immediately after exposure to the chemoattractants. Shortly after polarization, 25% of the T cells migrated, in either a random or directional fashion, towards the sources of the chemoattractants; an additional 5-10% of the cells remained polarized but stationary. The number of T cells migrating directionally towards RANTES or IL-2 peaked along with the formation of the chemotactic gradients. The directional migration of T cells was increased by a short pre-exposure to low doses of IL-2, which did not alter the level of expression of the beta1 integrins. The directional migration of T cells towards IL-2 and RANTES was mediated by IL-2R and pertussis toxin-sensitive receptors, respectively, and the directional, and to a lesser degree the random, locomotion of T cells induced by both chemoattractants required intact tyrosine kinase signaling and the activities of the alpha4, alpha5 and, to a lesser degree, the alpha2 and alpha6 members of the beta1 integrin family. Our system enables the real-time tracking of individual locomoting lymphocytes and the analysis of their dynamic interactions with ECM components and cytokines. PMID:10365778

  13. OFAI: 3D block tracking for a real-size rockfall experiment in the weathered volcanic context of Tahiti, French Polynesia

    NASA Astrophysics Data System (ADS)

    Dewez, Thomas; Nachbaur, Aude; Mathon, Christian; Sedan, Olivier; Berger, Frédéric; Des Garets, Emmanuel

    2010-05-01

    The Land Management Authority of French Polynesia contracted BRGM to run a real-size rockfall experiment, code-named OFAI, in September 2009. The purposes of the experiment were twofold: first, to observe real-size rock trajectories in a context of variably weathered volcanic rock slopes; and second, to use the observed rockfall trajectories to calibrate block propagation numerical models (see Mathon et al., EGU 2010, this session). 90 basalt blocks were dropped down a 150-m-long slope made of hard basalt veins, lenses of colluvium and erosion channels covered in blocks of various sizes. Parameters of the experiment concerned the shape (from nearly perfect spheres to elongated cubes) and mass of the blocks (from 300 kg to >5000 kg), and the launching point, chosen so as to bounce the blocks off both stiff basalt veins and colluvium lenses. The presentation addresses the monitoring technique developed to measure block trajectories in 3D and the variables extracted from them. A set of two 50-frame-per-second digital reflex cameras (Panasonic GH1) was installed on two prominent vantage points in order to record block motion in stereoscopy. A series of ground control points, surveyed with centimetre accuracy, served to orient pairs of images in the local topographic reference frame. This enabled the computation of block position at 50 Hz along a ca. 30-m-long section of the slope, constrained by the cameras' field of view. These results were then processed to extract parameters such as velocity (horizontal, vertical, rotational, incident and reflected), number of impacts, and height of rebounds in relation to ground cover properties.
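
    As a small illustration of how per-impact kinematics can be extracted from such a 50 Hz 3D track, the sketch below uses a synthetic trajectory (not the OFAI data): velocities are obtained by differencing the positions, and an impact is flagged where the vertical velocity switches from downward to upward, giving incident and reflected speeds.

```python
import numpy as np

fps = 50.0
dt = 1.0 / fps

# Synthetic 3D block track [m]: a falling block that rebounds once (toy physics).
t = np.arange(0, 2.0, dt)
z = np.where(t < 1.0, 5.0 - 4.9 * t ** 2,
             2.0 * (t - 1.0) - 4.9 * (t - 1.0) ** 2)
xyz = np.stack([3.0 * t, 0.1 * t, np.maximum(z, 0.0)], axis=1)

# Velocities from central differences of the 50 Hz positions.
vel = np.gradient(xyz, dt, axis=0)
speed = np.linalg.norm(vel, axis=1)

# Impact = vertical velocity switching from downward to upward between frames.
vz = vel[:, 2]
impacts = np.where((vz[:-1] < 0) & (vz[1:] > 0))[0]
for i in impacts:
    print(f"impact near t = {t[i]:.2f} s: incident speed {speed[i]:.2f} m/s, "
          f"reflected speed {speed[i + 1]:.2f} m/s")
```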

  14. Coming down to Earth: Helping Teachers Use 3D Virtual Worlds in Across-Spaces Learning Situations

    ERIC Educational Resources Information Center

    Muñoz-Cristóbal, Juan A.; Prieto, Luis P.; Asensio-Pérez, Juan I.; Martínez-Monés, Alejandra; Jorrín-Abellán, Iván M.; Dimitriadis, Yannis

    2015-01-01

    Different approaches have explored how to provide seamless learning across multiple ICT-enabled physical and virtual spaces, including three-dimensional virtual worlds (3DVW). However, these approaches present limitations that may reduce their acceptance in authentic educational practice: The difficulties of authoring and sharing teacher-created…

  15. Coherent Doppler Wind Lidar Development at NASA Langley Research Center for NASA Space-Based 3-D Winds Mission

    NASA Technical Reports Server (NTRS)

    Singh, Upendra N.; Kavaya, Michael J.; Yu, Jirong; Koch, Grady J.

    2012-01-01

    We review the 20-plus years of pulsed transmit laser development at NASA Langley Research Center (LaRC) aimed at enabling a coherent Doppler wind lidar to measure global winds from Earth orbit. We also briefly discuss the many other ingredients needed to prepare for this space mission.

  16. Constraining the Properties of the Eta Carinae System via 3-D SPH Models of Space-Based Observations: The Absolute Orientation of the Binary Orbit

    NASA Technical Reports Server (NTRS)

    Madura, Thomas I.; Gull, Theodore R.; Owocki, Stanley P.; Okazaki, Atsuo T.; Russell, Christopher M. P.

    2011-01-01

    The extremely massive (> 90 solar masses) and luminous (≈ 5 × 10^6 solar luminosities) star Eta Carinae, with its spectacular bipolar "Homunculus" nebula, comprises one of the most remarkable and intensely observed stellar systems in the Galaxy. However, many of its underlying physical parameters remain unknown. Multiwavelength variations observed to occur every 5.54 years are interpreted as being due to the collision of a massive wind from the primary star with the fast, less dense wind of a hot companion star in a highly elliptical (e ≈ 0.9) orbit. Using three-dimensional (3-D) Smoothed Particle Hydrodynamics (SPH) simulations of the binary wind-wind collision, together with radiative transfer codes, we compute synthetic spectral images of [Fe III] emission line structures and compare them to existing Hubble Space Telescope/Space Telescope Imaging Spectrograph (HST/STIS) observations. We are thus able, for the first time, to tightly constrain the absolute orientation of the binary orbit on the sky. An orbit with an inclination of ≈ 40°, an argument of periapsis ω ≈ 255°, and a projected orbital axis with a position angle of ≈ 312° east of north provides the best fit to the observations, implying that the orbital axis is closely aligned in 3-D space with the Homunculus symmetry axis, and that the companion star orbits clockwise on the sky relative to the primary.

  17. Mechanical, Electromagnetic, and X-ray Shielding Characterization of a 3D Printable Tungsten-Polycarbonate Polymer Matrix Composite for Space-Based Applications

    NASA Astrophysics Data System (ADS)

    Shemelya, Corey M.; Rivera, Armando; Perez, Angel Torrado; Rocha, Carmen; Liang, Min; Yu, Xiaoju; Kief, Craig; Alexander, David; Stegeman, James; Xin, Hao; Wicker, Ryan B.; MacDonald, Eric; Roberson, David A.

    2015-08-01

    Material-extrusion three-dimensional (3D) printing has recently attracted much interest because of its process flexibility, rapid response to design alterations, and ability to create structures "on-the-go". For this reason, 3D printing has possible applications in the rapid creation of space-based devices, for example cube satellites (CubeSats). This work focused on the fabrication and characterization of tungsten-doped polycarbonate polymer matrix composites specifically designed for x-ray radiation-shielding applications. The resulting polycarbonate-tungsten composite intentionally utilizes low tungsten loading levels to provide x-ray shielding while limiting effects on other properties of the material, for example weight, electromagnetic functionality, and mechanical strength. The fabrication process, from tungsten functionalization to filament extrusion, is described, together with material characterization covering printability, x-ray attenuation, tensile strength, impact resistance, gigahertz permittivity, and failure analysis. The proposed materials are uniquely advantageous when implemented in 3D printed structures, because even a small volume fraction of tungsten has been shown to substantially alter the properties of the resulting composite.

  18. Additive Manufacturing and 3D Printing in NASA: An Overview of Current Projects and Future Initiatives for Space Exploration

    NASA Technical Reports Server (NTRS)

    Clinton, R. G., Jr.

    2014-01-01

    NASA, including each Mission Directorate, is investing in, experimenting with, and/or utilizing AM across a broad spectrum of applications and projects. Centers have created, and continue to create, partnerships with industry, other government agencies, other Centers, and universities. In-house additive manufacturing capability enables rapid iteration of the entire design, development and testing process, increasing innovation and reducing risk and cost to projects. For deep space exploration, AM offers a significant reduction in logistics costs and risk by providing the ability to create parts on demand. There are challenges: the overwhelming message from the recent JANNAF AM for Propulsion Applications TIM was "certification." NASA will continue to work with its partners to address this and other challenges, to advance the state of the art in AM, and to incorporate these capabilities into an array of applications from aerospace to science missions to deep space exploration.

  19. The New Realm of 3-D Vision

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Dimension Technologies Inc. developed a line of 2-D/3-D Liquid Crystal Display (LCD) screens, including a 15-inch model priced at consumer levels. DTI's family of flat panel LCD displays, called the Virtual Window(TM), provides real-time 3-D images without the use of glasses, head trackers, helmets, or other viewing aids. Most of the company's initial 3-D display research was funded through NASA's Small Business Innovation Research (SBIR) program. The images on DTI's displays appear to leap off the screen and hang in space. The display accepts input from computers or stereo video sources, and can be switched from 3-D to full-resolution 2-D viewing with the push of a button. The Virtual Window displays have applications in data visualization, medicine, architecture, business, real estate, entertainment, and other research, design, military, and consumer applications. Displays are currently used for computer games, protein analysis, and surgical imaging. The technology greatly benefits the medical field, as surgical simulators are helping to increase the skills of surgical residents. Virtual Window(TM) is a trademark of Dimension Technologies Inc.

  20. The fast multipole method in the differential algebra framework for the calculation of 3D space charge fields

    NASA Astrophysics Data System (ADS)

    Zhang, He

    2013-01-01

    The space charge effect is one of the most important collective effects in beam dynamics studies. In many cases, numerical simulations are inevitable in order to get a clear understanding of this effect. Particle-particle interaction algorithms and particle-in-cell algorithms are widely used in space charge effect simulations, but both have difficulties in dealing with highly correlated beams with abnormal distributions or complicated geometries. We developed a new algorithm to calculate the three-dimensional self-field between charged particles by combining differential algebra (DA) techniques with the fast multipole method (FMM). The FMM hierarchically decomposes the whole charged domain into many small regions. For each region it uses multipole expansions to represent the potential/field contributions from the particles far away from the region and then converts the multipole expansions into a local expansion inside the region. The potential/field due to the far-away particles is calculated from the expansions and the potential/field due to the nearby particles is calculated from the Coulomb force law. The DA techniques are used in the calculation, translation and conversion of the expansions. The new algorithm scales linearly with the total number of particles and it is suitable for any arbitrary charge distribution. Using the DA techniques, we can calculate both the potential/field and its high-order derivatives, which will be useful for including the space charge effect in transfer maps in the future. We first present the single-level FMM, which decomposes the whole domain into boxes of the same size. It works best for charge distributions that are not overly non-uniform. Then we present the multilevel fast multipole algorithm (MLFMA), which decomposes the whole domain into different-sized boxes according to the charge density. Finer boxes are generated where the higher charge density exists; thus the algorithm works for any
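
    The core multipole idea that the FMM builds on can be illustrated in a few lines of numpy (single cluster, monopole and dipole terms only; the actual method adds higher-order terms, the box hierarchy, local expansions and the DA machinery): the potential of a distant cluster of charges is approximated by a short expansion about the cluster centre instead of a sum over every particle.

```python
import numpy as np

rng = np.random.default_rng(0)

# A cluster of charges near the origin, evaluated at a distant target point.
n = 2000
pos = rng.normal(0.0, 0.1, size=(n, 3))          # cluster of radius ~0.1
q = rng.uniform(0.5, 1.5, size=n)                # positive charges
target = np.array([3.0, 1.0, 0.5])               # well separated from the cluster

# Direct O(n) sum (what a particle-particle code does for *every* pair).
phi_direct = np.sum(q / np.linalg.norm(target - pos, axis=1))

# Multipole expansion about the cluster centre, truncated after the dipole term:
# phi(r) ~ Q/|r - c| + p.(r - c)/|r - c|^3, with Q total charge, p dipole moment.
c = pos.mean(axis=0)
Q = q.sum()
p = ((pos - c) * q[:, None]).sum(axis=0)
d = target - c
phi_multipole = Q / np.linalg.norm(d) + p @ d / np.linalg.norm(d) ** 3

print(f"direct {phi_direct:.6f}  multipole {phi_multipole:.6f}  "
      f"rel. error {abs(phi_multipole - phi_direct) / phi_direct:.2e}")
```

    The error shrinks as the cluster gets smaller relative to the separation, which is exactly what the hierarchical box decomposition guarantees for every far-field interaction.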

  1. Studies of a new class of high electro-thermal performing Polyimide embedded with 3D scaffold in the harsh environment of outer space

    NASA Astrophysics Data System (ADS)

    Loeblein, Manuela; Bolker, Asaf; Tsang, Siu Hon; Atar, Nurit; Uzan-Saguy, Cecile; Verker, Ronen; Gouzman, Irina; Grossman, Eitan; Teo, Edwin Hang Tong

    Polyimides (PIs) have been widely used as outer-space coatings due to their chemical stability and flexibility. Nevertheless, their poor thermal conductivity and completely electrically insulating characteristics have caused severe limitations, such as thermal management challenges and spacecraft electrostatic charging, which force the use of additional materials, such as brittle ITO, to fully withstand the harsh environment of space. For this reason, we developed a new composite material via infiltration of PI with a 3D scaffold, which improves the PI's performance and resilience and enables a single flexible material to be used to protect spacecraft. Here we present a study of this new material based on ground-simulated outer-space environments. It includes an exhaustive range of tests simulating space environments in accordance with the European Cooperation for Space Standardization (ECSS), covering atomic oxygen (AO) etching, gamma-ray exposure and outgassing properties over extended periods of time and under strenuous mechanical bending and thermal annealing cycles. Measurement methods for the harsh environment of space and the obtained results will be presented.

  2. Real-Time Data Use for Operational Space Weather Products

    NASA Astrophysics Data System (ADS)

    Quigley, S.; Nobis, T. E.

    2010-12-01

    The Space Vehicles Directorate of the Air Force Research Laboratory (AFRL/RVBX) and the Space Environment Division of the Space and Missile Systems Center (AFSPC SYAG/WMLE) have combined efforts to design, develop, test, implement, and validate numerical and graphical products for Air Force Space Command's (AFSPC) Space Environmental Effects Fusion System (SEEFS). These products were developed to analyze, specify, and forecast the effects of the near-Earth space environment on Department of Defense weapons, navigation, communications, and surveillance systems in real/near-real time. This real-time attribute is the primary factor that allows for actual operational product output, but it is also responsible for a variety of detrimental effects that need to be considered, researched, mitigated, or otherwise eliminated in future/upgraded product applications. This presentation will provide brief overviews of the SEEFS products, along with information and recommendations concerning their near/real-time data acquisition and use, including input data requirements, input/output ownership, observation cadence, transmission/receipt links and cadence, data latency, quality control, error propagation and associated confidence-level applications, and ensemble model run potential. Validation issues related to real-time data will also be addressed, along with recommendations for new real-time data archiving that should prove operationally beneficial.

  3. Unwrapped wavefront evaluation in phase-shifting interferometry based on 3D dynamic fringe processing in state space.

    PubMed

    Garifullin, Azat; Gurov, Igor; Volynsky, Maxim

    2016-08-01

    Recovery of an unwrapped wavefront in phase-shifting interferometry is considered when the wavefront phase increments are determined between previous and subsequent fringe patterns as well as between adjacent pixels of the current fringe pattern. A parametric model of a three-dimensional interferometric signal and the recurrence processing algorithm in state space are utilized, providing an evaluation of an unwrapped wavefront phase at each phase shift step in dynamic mode. Estimates of the achievable accuracy and experimental results of the wavefront recovery are presented. Comparison with the conventional seven-frame phase-shifting algorithm, which is one of the most accurate, confirmed the high accuracy and noise immunity of the proposed method. PMID:27505660

  4. Compact, High Energy 2-micron Coherent Doppler Wind Lidar Development for NASA's Future 3-D Winds Measurement from Space

    NASA Technical Reports Server (NTRS)

    Singh, Upendra N.; Koch, Grady; Yu, Jirong; Petros, Mulugeta; Beyon, Jeffrey; Kavaya, Michael J.; Trieu, Bo; Chen, Songsheng; Bai, Yingxin; Petzar, paul; Modlin, Edward A.; Barnes, Bruce W.; Demoz, Belay B.

    2010-01-01

    This paper presents an overview of 2-micron laser transmitter development at NASA Langley Research Center for coherent-detection lidar profiling of winds. The novel high-energy, 2-micron, Ho:Tm:LuLiF laser technology developed at NASA Langley was employed to study the laser technology currently envisioned by NASA for future global coherent Doppler lidar wind measurements. The 250 mJ, 10 Hz laser was designed as an integral part of a compact lidar transceiver developed for future aircraft flight. Ground-based wind profiles made with this transceiver will be presented. NASA Langley is currently funded to build complete Doppler lidar systems using this transceiver for the DC-8 aircraft in autonomous operation. Recently, the LaRC 2-micron coherent Doppler wind lidar system was selected to contribute to the NASA Science Mission Directorate (SMD) Earth Science Division (ESD) hurricane field experiment in 2010, titled Genesis and Rapid Intensification Processes (GRIP). The Doppler lidar system will measure vertical profiles of horizontal vector winds from the DC-8 aircraft using NASA Langley's existing 2-micron, pulsed, coherent-detection Doppler wind lidar system that is ready for DC-8 integration. The measurements will typically extend from the DC-8 to the Earth's surface. They will be highly accurate in both wind magnitude and direction. Displays of the data will be provided in real time on the DC-8. The pulsed Doppler wind lidar of NASA Langley Research Center is much more powerful than past Doppler lidars. The operating range, accuracy, range resolution, and time resolution will be unprecedented. We expect the data to play a key role, combined with the other sensors, in improving understanding and predictive algorithms for hurricane strength and track.

  5. 3-D Reconstruction of Macular Type II Cell Innervation Patterns in Space-Flight and Control Rats

    NASA Technical Reports Server (NTRS)

    Ross, Muriel Dorothy; Montgomery, K.; Linton, S.; Cheng, R.; Tomko, David L. (Technical Monitor)

    1995-01-01

    A semiautomated method for reconstructing objects from serial thin sections has been developed in the Biocomputation Center. The method is being used to completely reconstruct, for the first time, type II hair cells and their innervation. The purposes are to learn more about the fundamental circuitry of the macula on Earth and to determine whether changes in connectivity occur under space flight conditions. Data captured directly from a transmission electron microscope via a video camera are sent to a graphics workstation. There, the digitized micrographs are mosaicked into sections and contours are traced, registered and displayed by semiautomated methods. Current reconstructions are of type II cells from the medial part of rat maculas collected in-flight on the Space Life Sciences-2 mission, 4.5 hrs post-flight, and from a ground control. Results show that typical type II cells receive processes from up to six nearby calyces or afferents. Nearly all processes are elongated and have bouton-like enlargements; some have numerous vesicles. Multiple (2 to 4) processes from a single calyx to a type II cell are common, and approximately 1/3 of the processes innervate 2 or 3 type II cells or a neighboring cluster. From 2% to 6% of the cells resemble type I cells morphologically but have demi-calyces. Thus far, increments in synaptic number in type II cells of flight rats are prominent along processes that supply two hair cells. It is clear that reconstruction methods provide insights into details of macular circuitry not obtainable by other techniques. The results demonstrate a morphological basis for interactions between adjacent receptive fields through feedback-feedforward connections, and for dynamic alterations in receptive field range and activity during preprocessing of linear acceleratory information by the maculas. The reconstruction method we have developed will find further applications in the study of the details of neuronal architecture of more complex systems, to

  6. A model-based 3D template matching technique for pose acquisition of an uncooperative space object.

    PubMed

    Opromolla, Roberto; Fasano, Giancarmine; Rufino, Giancarlo; Grassi, Michele

    2015-01-01

    This paper presents a customized three-dimensional template matching technique for autonomous pose determination of uncooperative targets. This topic is relevant to advanced space applications, like active debris removal and on-orbit servicing. The proposed technique is model-based and produces estimates of the target pose without any prior pose information, by processing three-dimensional point clouds provided by a LIDAR. These estimates are then used to initialize a pose tracking algorithm. Peculiar features of the proposed approach are the use of a reduced number of templates and the idea of building the database of templates on-line, thus significantly reducing the amount of on-board stored data with respect to traditional techniques. An algorithm variant is also introduced aimed at further accelerating the pose acquisition time and reducing the computational cost. Technique performance is investigated within a realistic numerical simulation environment comprising a target model, LIDAR operation and various target-chaser relative dynamics scenarios, relevant to close-proximity flight operations. Specifically, the capability of the proposed techniques to provide a pose solution suitable to initialize the tracking algorithm is demonstrated, as well as their robustness against highly variable pose conditions determined by the relative dynamics. Finally, a criterion for autonomous failure detection of the presented techniques is presented.
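
    A toy sketch of the template-matching idea follows (not the authors' implementation; the model, the noise level and the restriction to a single yaw angle are assumptions made for brevity): candidate attitudes are sampled, a template point cloud of the target model is generated for each, and the acquired cloud is scored against every template by mean nearest-neighbour distance, with the best-scoring attitude used to initialize tracking.

```python
import numpy as np
from scipy.spatial import cKDTree

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Toy target model: random points on a box-like body (stand-in for a CAD model).
rng = np.random.default_rng(2)
model = rng.uniform([-1, -0.5, -0.3], [1, 0.5, 0.3], size=(400, 3))

# "Acquired" LIDAR cloud: the model rotated by an unknown yaw, plus sensor noise.
true_yaw = np.deg2rad(37.0)
cloud = model @ rot_z(true_yaw).T + rng.normal(0, 0.01, model.shape)

# On-line template database: the model rendered at a coarse grid of candidate yaws,
# each scored by mean nearest-neighbour distance to the acquired cloud.
candidates = np.deg2rad(np.arange(0, 360, 5))
tree = cKDTree(cloud)
scores = [tree.query(model @ rot_z(a).T)[0].mean() for a in candidates]

best = candidates[int(np.argmin(scores))]
print(f"estimated yaw {np.rad2deg(best):.1f} deg (true {np.rad2deg(true_yaw):.1f} deg); "
      "this coarse estimate would then seed the pose-tracking algorithm")
```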

  7. A Model-Based 3D Template Matching Technique for Pose Acquisition of an Uncooperative Space Object

    PubMed Central

    Opromolla, Roberto; Fasano, Giancarmine; Rufino, Giancarlo; Grassi, Michele

    2015-01-01

    This paper presents a customized three-dimensional template matching technique for autonomous pose determination of uncooperative targets. This topic is relevant to advanced space applications, like active debris removal and on-orbit servicing. The proposed technique is model-based and produces estimates of the target pose without any prior pose information, by processing three-dimensional point clouds provided by a LIDAR. These estimates are then used to initialize a pose tracking algorithm. Peculiar features of the proposed approach are the use of a reduced number of templates and the idea of building the database of templates on-line, thus significantly reducing the amount of on-board stored data with respect to traditional techniques. An algorithm variant is also introduced aimed at further accelerating the pose acquisition time and reducing the computational cost. Technique performance is investigated within a realistic numerical simulation environment comprising a target model, LIDAR operation and various target-chaser relative dynamics scenarios, relevant to close-proximity flight operations. Specifically, the capability of the proposed techniques to provide a pose solution suitable to initialize the tracking algorithm is demonstrated, as well as their robustness against highly variable pose conditions determined by the relative dynamics. Finally, a criterion for autonomous failure detection of the presented techniques is presented. PMID:25785309

  8. A model-based 3D template matching technique for pose acquisition of an uncooperative space object.

    PubMed

    Opromolla, Roberto; Fasano, Giancarmine; Rufino, Giancarlo; Grassi, Michele

    2015-01-01

    This paper presents a customized three-dimensional template matching technique for autonomous pose determination of uncooperative targets. This topic is relevant to advanced space applications, like active debris removal and on-orbit servicing. The proposed technique is model-based and produces estimates of the target pose without any prior pose information, by processing three-dimensional point clouds provided by a LIDAR. These estimates are then used to initialize a pose tracking algorithm. Peculiar features of the proposed approach are the use of a reduced number of templates and the idea of building the database of templates on-line, thus significantly reducing the amount of on-board stored data with respect to traditional techniques. An algorithm variant is also introduced aimed at further accelerating the pose acquisition time and reducing the computational cost. Technique performance is investigated within a realistic numerical simulation environment comprising a target model, LIDAR operation and various target-chaser relative dynamics scenarios, relevant to close-proximity flight operations. Specifically, the capability of the proposed techniques to provide a pose solution suitable to initialize the tracking algorithm is demonstrated, as well as their robustness against highly variable pose conditions determined by the relative dynamics. Finally, a criterion for autonomous failure detection of the presented techniques is presented. PMID:25785309

  9. Space Electron Density Gradient Studies using a 3D Embedded Reconfigurable Sounder and ESA/NASA CLUSTER Mission

    NASA Astrophysics Data System (ADS)

    Dekoulis, George

    2016-07-01

    This paper provides a direct comparison between data captured by a new embedded reconfigurable digital sounder, different ground-based ionospheric sounders spread around Europe, and the ESA/NASA CLUSTER mission. The CLUSTER mission consists of four identical space probes flying in a formation that allows measurements of the electron density gradient in the local magnetic field. Both the ground-based and the spacecraft instrumentation assist in studying the motion, geometry and boundaries of the plasmasphere. The comparison results are in agreement with each other. Some slight deviations among the captured data were expected from the beginning of this investigation; these small discrepancies are reasonable and are analyzed seriatim. The results of this research are significant, since the level of the plasma's ionization, which is related to the solar activity, dominates the propagation of electromagnetic waves through it. Similarly, unusually high solar activity presents serious hazards to orbiting satellites, spaceborne instrumentation, satellite communications and infrastructure located on the Earth's surface. Long-term collaborative study of the data must continue in order to identify and determine the enhanced risk in advance. This would allow scientists to propose an immediate remedy.

  10. [3-D ultrasound in gastroenterology].

    PubMed

    Zoller, W G; Liess, H

    1994-06-01

    Three-dimensional (3D) sonography represents a further development of noninvasive diagnostic imaging based on real-time two-dimensional (2D) sonography. The use of transparent rotating scans, comparable to a block of glass, generates a 3D effect. The objective of the present study was to optimize the 3D presentation of abdominal findings. Additional investigations were made with a new volumetric program to determine the volume of selected findings of the liver. The results were compared with the volumes estimated from 2D sonography and 2D computed tomography (CT). For the processing of 3D images, typical parameter constellations were found for the different findings, which facilitated 3D image processing. In more than 75% of the cases examined we found an optimal 3D presentation of sonographic findings with respect to the evaluation criteria we developed for the 3D imaging of processed data. Considerable differences were found among the three techniques in the estimated volumes of the liver findings. 3D ultrasound represents a valuable method to judge morphological appearance in abdominal findings. The possibility of volumetric measurements enlarges its potential diagnostic significance. Further clinical investigations are necessary to find out whether definite differentiation between benign and malignant findings is possible.
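
    A simple sketch of disk-summation volumetry of the kind used to quantify a finding from a stack of parallel 2D sections is shown below (synthetic contours of a sphere, not the program evaluated in the study): the area enclosed by each traced contour is computed with the shoelace formula, multiplied by the slice spacing, and summed.

```python
import numpy as np

def polygon_area(xy):
    """Shoelace formula for the area enclosed by a closed 2D contour."""
    x, y = xy[:, 0], xy[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

# Synthetic finding: a sphere of radius 2 cm traced on parallel slices 2 mm apart.
dz = 0.2                                     # slice spacing [cm]
R = 2.0
z_slices = np.arange(-R + dz / 2, R, dz)
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)

volume = 0.0
for z in z_slices:
    r = np.sqrt(max(R ** 2 - z ** 2, 0.0))   # contour radius on this slice
    contour = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)
    volume += polygon_area(contour) * dz     # disk summation (Cavalieri principle)

print(f"estimated volume {volume:.2f} cm^3 "
      f"(analytic 4/3*pi*R^3 = {4 / 3 * np.pi * R ** 3:.2f} cm^3)")
```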

  11. Bootstrapping Critical Ising Model on Three Dimensional Real Projective Space.

    PubMed

    Nakayama, Yu

    2016-04-01

    Given conformal data on a flat Euclidean space, we use crosscap conformal bootstrap equations to numerically solve the Lee-Yang model as well as the critical Ising model on a three-dimensional real projective space. We check the rapid convergence of our bootstrap program in two dimensions against the exact solutions available. Based on the comparison, we estimate that our systematic error on the numerically solved one-point functions of the critical Ising model on a three-dimensional real projective space is less than 1%. Our method opens up a novel way to solve conformal field theories on nontrivial geometries.

  12. Real-space density profile reconstruction of stacked voids

    NASA Astrophysics Data System (ADS)

    Pisani, Alice; Sutter, P.; Lavaux, G.; Wandelt, B.

    2016-10-01

    Modern surveys give us access to high-quality large-scale structure measurements. In this framework, cosmic voids appear as a new potential probe of cosmology. We discuss the use of cosmic voids as standard spheres and their capacity to constrain new physics, dark energy and cosmological models. We introduce the Alcock-Paczyński test and its use with voids. We discuss the main difficulties in working with cosmic voids: redshift-space distortions, the sparsity of data, and peculiar velocities. We present a method to reconstruct the spherical density profiles of void stacks in real space, without redshift-space distortions. We show its application to a toy model and a dark matter simulation, as well as a first application to the reconstruction of real-space density profiles of cosmic void stacks from the Sloan Digital Sky Survey.
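
    A minimal sketch of the stacking step is given below (random synthetic tracers, and ignoring the redshift-space reconstruction that is the actual subject of the work): tracer distances from each void centre are rescaled by that void's radius and binned in radial shells, and the shell counts are normalized by shell volume and mean density to form the stacked profile.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic catalogue: void centres/radii and tracer positions in a cubic box.
L = 200.0                                    # box size [Mpc/h] (toy)
tracers = rng.uniform(0, L, size=(200_000, 3))
centres = rng.uniform(30, 170, size=(50, 3))
radii = rng.uniform(8, 15, size=50)          # effective void radii R_v

edges = np.linspace(0, 3, 31)                # radial bins in units of r / R_v
counts = np.zeros(len(edges) - 1)
vol_weight = np.zeros(len(edges) - 1)

for c, Rv in zip(centres, radii):
    r = np.linalg.norm(tracers - c, axis=1) / Rv      # rescaled distances
    h, _ = np.histogram(r, bins=edges)
    counts += h
    vol_weight += 4 / 3 * np.pi * np.diff(edges ** 3) * Rv ** 3   # shell volumes

nbar = len(tracers) / L ** 3
profile = counts / vol_weight / nbar         # stacked density in units of the mean
print(np.round(profile[:10], 2))             # ~1 everywhere for random tracers

# With real (or simulated) tracers the profile dips well below 1 inside r < R_v
# and rises to a compensating ridge near the void edge.
```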

  13. Development of visual 3D virtual environment for control software

    NASA Technical Reports Server (NTRS)

    Hirose, Michitaka; Myoi, Takeshi; Amari, Haruo; Inamura, Kohei; Stark, Lawrence

    1991-01-01

    Virtual environments for software visualization may enable complex programs to be created and maintained. A typical application might be the control of regional electric power systems. As these encompass broader computer networks than ever, construction of such systems becomes very difficult. Conventional text-oriented environments are useful for programming individual processors. However, they are obviously insufficient for programming a large and complicated system that includes large numbers of computers connected to each other; such programming is called 'programming in the large.' As a solution to this problem, the authors are developing a graphic programming environment wherein one can visualize complicated software in a virtual 3D world. One of the major features of the environment is the 3D representation of concurrent processes. The 3D representation is used to supply both network-wide interprocess programming capability (the capability for 'programming in the large') and real-time programming capability. The authors' idea is to fuse both the block diagram (which is useful for checking the relationships among large numbers of processes or processors) and the time chart (which is useful for checking precise timing for synchronization) into a single 3D space. The 3D representation gives us a capability for direct and intuitive planning or understanding of the complicated relationships among many concurrent processes. To realize the 3D representation, a technology enabling easy handling of virtual 3D objects is a definite necessity. Using a stereo display system and a gesture input device (VPL DataGlove), a prototype of the virtual workstation has been implemented. The workstation can supply the 'sensation' of the virtual 3D space to a programmer. Software for the 3D programming environment is implemented on the workstation. According to preliminary assessments, a 50 percent reduction of programming effort is achieved by using the virtual 3D environment. The authors expect that the 3D

  14. Real-Time and Near Real-Time Data for Space Weather Applications and Services

    NASA Astrophysics Data System (ADS)

    Singer, H. J.; Balch, C. C.; Biesecker, D. A.; Matsuo, T.; Onsager, T. G.

    2015-12-01

    Space weather can be defined as conditions in the vicinity of Earth and in the interplanetary environment that are caused primarily by solar processes and influenced by conditions on Earth and its atmosphere. Examples of space weather are the conditions that result from geomagnetic storms, solar particle events, and bursts of intense solar flare radiation. These conditions can have impacts on modern-day technologies such as GPS or electric power grids and on human activities such as astronauts living on the International Space Station or explorers traveling to the moon or Mars. While the ultimate space weather goal is accurate prediction of future space weather conditions, for many applications and services, we rely on real-time and near-real time observations and model results for the specification of current conditions. In this presentation, we will describe the space weather system and the need for real-time and near-real time data that drive the system, characterize conditions in the space environment, and are used by models for assimilation and validation. Currently available data will be assessed and a vision for future needs will be given. The challenges for establishing real-time data requirements, as well as acquiring, processing, and disseminating the data will be described, including national and international collaborations. In addition to describing how the data are used for official government products, we will also give examples of how these data are used by both the public and private sector for new applications that serve the public.

  15. Combinatorial 3D Mechanical Metamaterials

    NASA Astrophysics Data System (ADS)

    Coulais, Corentin; Teomy, Eial; de Reus, Koen; Shokef, Yair; van Hecke, Martin

    2015-03-01

    We present a class of elastic structures which exhibit 3D folding motion. Our structures consist of cubic lattices of anisotropic unit cells that can be tiled in a complex combinatorial fashion. We design and 3D-print this complex ordered mechanism, in which we combine elastic hinges and defects to tailor the mechanics of the material. Finally, we use this large design space to encode smart functionalities such as surface patterning and multistability.

  16. 2D and 3D Traveling Salesman Problem

    ERIC Educational Resources Information Center

    Haxhimusa, Yll; Carpenter, Edward; Catrambone, Joseph; Foldes, David; Stefanov, Emil; Arns, Laura; Pizlo, Zygmunt

    2011-01-01

    When a two-dimensional (2D) traveling salesman problem (TSP) is presented on a computer screen, human subjects can produce near-optimal tours in linear time. In this study we tested human performance on a real and virtual floor, as well as in a three-dimensional (3D) virtual space. Human performance on the real floor is as good as that on a…

  17. Multibaseline IFSAR for 3D target reconstruction

    NASA Astrophysics Data System (ADS)

    Ertin, Emre; Moses, Randolph L.; Potter, Lee C.

    2008-04-01

    We consider three-dimensional target reconstruction from SAR data collected on multiple complete circular apertures at different elevation angles. The 3-D resolution of circular SAR systems is constrained by two factors: the sparse sampling in elevation and the limited azimuthal persistence of the reflectors in the scene. Three-dimensional target reconstruction with multipass circular SAR data is further complicated by nonuniform elevation spacing in real flight paths and the non-constant elevation angle throughout the circular pass. In this paper we first develop parametric spectral estimation methods that extend the standard IFSAR method of height estimation to apertures at more than two elevation angles. Next, we show that linear interpolation of the phase history data leads to unsatisfactory performance in 3-D reconstruction from nonuniformly sampled elevation passes. We then present a new sparsity-regularized interpolation algorithm to preprocess nonuniform elevation samples to create a virtual uniform linear array geometry. We illustrate the performance of the proposed method using simulated backscatter data.

  18. Axonemal Positioning and Orientation in 3-D Space for Primary Cilia: What is Known, What is Assumed, and What Needs Clarification

    PubMed Central

    Farnum, Cornelia E.; Wilsman, Norman J.

    2012-01-01

    Two positional characteristics of the ciliary axoneme – its location on the plasma membrane as it emerges from the cell, and its orientation in three-dimensional space – are known to be critical for optimal function of actively motile cilia (including nodal cilia), as well as for modified cilia associated with special senses. However, these positional characteristics have not been analyzed to any significant extent for primary cilia. This review briefly summarizes the history of knowledge of these two positional characteristics across a wide spectrum of cilia, emphasizing their importance for proper function. Then the review focuses on what is known about these same positional characteristics for primary cilia in all major tissue types where they have been reported. The review emphasizes major areas that would be productive for future research for understanding how positioning and 3-D orientation of primary cilia may be related to their hypothesized signaling roles within different cellular populations. PMID:22012592

  19. Unassisted 3D camera calibration

    NASA Astrophysics Data System (ADS)

    Atanassov, Kalin; Ramachandra, Vikas; Nash, James; Goma, Sergio R.

    2012-03-01

    With the rapid growth of 3D technology, 3D image capture has become a critical part of the 3D feature set on mobile phones. 3D image quality is affected by the scene geometry as well as by on-the-device processing. An automatic 3D system usually assumes known camera poses established by factory calibration using a special chart. In real-life settings, the pose parameters estimated by factory calibration can be negatively impacted by movements of the lens barrel due to shaking, focusing, or a camera drop. If any of these factors displaces the optical axes of either or both cameras, the vertical disparity might exceed the maximum tolerable margin and the 3D user may experience eye strain or headaches. To make 3D capture more practical, one needs to consider unassisted calibration (on arbitrary scenes). In this paper, we propose an algorithm that relies on detection and matching of keypoints between left and right images. Frames containing erroneous matches, along with frames with insufficiently rich keypoint constellations, are detected and discarded. Roll, pitch, yaw, and scale differences between left and right frames are then estimated. The algorithm performance is evaluated in terms of the remaining vertical disparity as compared to the maximum tolerable vertical disparity.
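
    A minimal sketch of the keypoint detect-and-match step and of measuring the residual vertical disparity between a left/right frame pair is shown below, using OpenCV's ORB features and a brute-force matcher. It is only an illustration of the general approach (including discarding frames with too few matches); it does not implement the paper's roll/pitch/yaw/scale estimation, and all function names and thresholds are assumptions.

```python
import cv2
import numpy as np

def vertical_disparity_stats(left_gray, right_gray, min_matches=30):
    """Detect and match keypoints between left and right frames and return
    simple vertical-disparity statistics; frame pairs with too few matches
    are rejected, loosely mirroring the frame-discarding step described above."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_l, des_l = orb.detectAndCompute(left_gray, None)
    kp_r, des_r = orb.detectAndCompute(right_gray, None)
    if des_l is None or des_r is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_l, des_r)
    if len(matches) < min_matches:
        return None  # insufficiently rich keypoint constellation
    dy = np.array([kp_r[m.trainIdx].pt[1] - kp_l[m.queryIdx].pt[1] for m in matches])
    return {"median_dy": float(np.median(dy)),
            "p95_abs_dy": float(np.percentile(np.abs(dy), 95))}
```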

  20. Spacecraft 3D Augmented Reality Mobile App

    NASA Technical Reports Server (NTRS)

    Hussey, Kevin J.; Doronila, Paul R.; Kumanchik, Brian E.; Chan, Evan G.; Ellison, Douglas J.; Boeck, Andrea; Moore, Justin M.

    2013-01-01

    The Spacecraft 3D application allows users to learn about and interact with iconic NASA missions in a new and immersive way using common mobile devices. Using Augmented Reality (AR) techniques to project 3D renditions of the mission spacecraft into real-world surroundings, users can interact with and learn about Curiosity, GRAIL, Cassini, and Voyager. Additional updates on future missions, animations, and information will be ongoing. Using a printed AR Target and camera on a mobile device, users can get up close with these robotic explorers, see how some move, and learn about these engineering feats, which are used to expand knowledge and understanding about space. The software receives input from the mobile device's camera to recognize the presence of an AR marker in the camera's field of view. It then displays a 3D rendition of the selected spacecraft in the user's physical surroundings, on the mobile device's screen, while it tracks the device's movement in relation to the physical position of the spacecraft's 3D image on the AR marker.
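
    The marker-tracking idea described above (recognize a printed target in the camera feed, then render a model at the recovered pose) can be sketched with OpenCV's ArUco module, as below. This is not the app's actual implementation or SDK; it is a hedged illustration that assumes OpenCV >= 4.7 with the aruco module, known camera intrinsics, and an assumed marker size.

```python
import cv2
import numpy as np

# Camera intrinsics are assumed known (e.g., from a prior calibration).
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)
marker_length_m = 0.10  # printed marker side length in meters (assumed)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

def marker_pose(frame_bgr):
    """Detect a printed marker and estimate the camera-relative pose that an AR
    renderer would use to anchor a 3D spacecraft model in the scene."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)
    if ids is None or len(corners) == 0:
        return None
    half = marker_length_m / 2.0
    # Marker corners in its own plane, ordered to match the detector output.
    obj_pts = np.array([[-half, half, 0], [half, half, 0],
                        [half, -half, 0], [-half, -half, 0]], dtype=np.float32)
    img_pts = corners[0].reshape(4, 2).astype(np.float32)
    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, camera_matrix, dist_coeffs)
    return (rvec, tvec) if ok else None
```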

  1. Constraining the Properties of the Eta Carinae System via 3-D SPH Models of Space-Based Observations: The Absolute Orientation of the Binary Orbit

    NASA Technical Reports Server (NTRS)

    Madura, Thomas I.; Gull, Theodore R.; Owocki, Stanley P.; Okazaki, Atsuo T.; Russell, Christopher M. P.

    2010-01-01

    The extremely massive (> 90 Solar Mass) and luminous (>= 5 x 10(exp 6) Solar Luminosity) star Eta Carinae, with its spectacular bipolar "Homunculus" nebula, comprises one of the most remarkable and intensely observed stellar systems in the galaxy. However, many of its underlying physical parameters remain a mystery. Multiwavelength variations observed to occur every 5.54 years are interpreted as being due to the collision of a massive wind from the primary star with the fast, less dense wind of a hot companion star in a highly elliptical (e approx. 0.9) orbit. Using three-dimensional (3-D) Smoothed Particle Hydrodynamics (SPH) simulations of the binary wind-wind collision in Eta Car, together with radiative transfer codes, we compute synthetic spectral images of [Fe III] emission line structures and compare them to existing Hubble Space Telescope/Space Telescope Imaging Spectrograph (HST/STIS) observations. We are thus able, for the first time, to constrain the absolute orientation of the binary orbit on the sky. An orbit with an inclination of i approx. 40 deg, an argument of periapsis omega approx. 255 deg, and a projected orbital axis with a position angle of approx. 312 deg east of north provides the best fit to the observations, implying that the orbital axis is closely aligned in 3-D space with the Homunculus symmetry axis, and that the companion star orbits clockwise on the sky relative to the primary.

  2. A new method for real-time co-registration of 3D coronary angiography and intravascular ultrasound or optical coherence tomography.

    PubMed

    Carlier, Stéphane; Didday, Rich; Slots, Tristan; Kayaert, Peter; Sonck, Jeroen; El-Mourad, Mike; Preumont, Nicolas; Schoors, Dany; Van Camp, Guy

    2014-06-01

    We present a new, clinically practical method for online co-registration of 3D quantitative coronary angiography (QCA) and intravascular ultrasound (IVUS) or optical coherence tomography (OCT). The workflow is based on two modified, commercially available software packages. The reconstruction steps are explained and compared to previously available methods, and the feasibility for different clinical scenarios is illustrated. The co-registration appears accurate and robust, and it introduces only a minimal delay in normal cath lab activities. The new method is based on the 3D angiographic reconstruction of the catheter path and does not require operator identification of landmarks to establish the image synchronization.
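
    The core idea of path-based co-registration, mapping each pullback frame to a position along the 3D-reconstructed catheter path, can be illustrated as below. This is a simplified sketch under an assumed constant pullback speed; it is not the commercial packages' method, and all names and parameters are hypothetical.

```python
import numpy as np

def frame_positions_along_path(path_xyz, frame_count, pullback_mm_s, frame_rate_hz):
    """Map IVUS/OCT pullback frames to 3D positions along a catheter path.

    path_xyz: (N, 3) centerline points from the 3D angiographic reconstruction,
    ordered from the pullback start. Assumes constant pullback speed (an
    idealization) and returns one 3D position per frame.
    """
    seg = np.diff(path_xyz, axis=0)
    arc = np.concatenate(([0.0], np.cumsum(np.linalg.norm(seg, axis=1))))   # mm
    frame_s = np.arange(frame_count) * pullback_mm_s / frame_rate_hz        # mm travelled per frame
    frame_s = np.clip(frame_s, 0.0, arc[-1])
    # Interpolate each coordinate against arc length.
    return np.stack([np.interp(frame_s, arc, path_xyz[:, k]) for k in range(3)], axis=1)
```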

  3. Real-space Berry phases: Skyrmion soccer (invited)

    SciTech Connect

    Everschor-Sitte, Karin; Sitte, Matthias

    2014-05-07

    Berry phases occur when a system adiabatically evolves along a closed curve in parameter space. This tutorial-like article focuses on Berry phases accumulated in real space. In particular, we consider the situation where an electron traverses a smooth magnetic structure, while its magnetic moment adjusts to the local magnetization direction. Mapping the adiabatic physics to an effective problem in terms of emergent fields reveals that certain magnetic textures, skyrmions, are tailor-made to study these Berry phase effects.

  4. Real-space Berry phases: Skyrmion soccer (invited)

    NASA Astrophysics Data System (ADS)

    Everschor-Sitte, Karin; Sitte, Matthias

    2014-05-01

    Berry phases occur when a system adiabatically evolves along a closed curve in parameter space. This tutorial-like article focuses on Berry phases accumulated in real space. In particular, we consider the situation where an electron traverses a smooth magnetic structure, while its magnetic moment adjusts to the local magnetization direction. Mapping the adiabatic physics to an effective problem in terms of emergent fields reveals that certain magnetic textures, skyrmions, are tailor-made to study these Berry phase effects.
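
    For reference, the standard textbook relations behind these two records are summarized below; the notation is ours, not the authors': the Berry phase for a closed path C in parameter space, and the real-space emergent field felt by an electron whose spin adiabatically follows a smooth magnetization texture n(r), whose flux counts the skyrmion winding number Q.

```latex
% Berry phase for a closed path C in parameter space:
\gamma = \oint_C \mathbf{A}(\mathbf{R}) \cdot d\mathbf{R},
\qquad
\mathbf{A}(\mathbf{R}) = i \,\langle u(\mathbf{R}) \,|\, \nabla_{\mathbf{R}} \,|\, u(\mathbf{R}) \rangle .

% Real-space picture: the adiabatically aligned spin experiences an emergent
% magnetic field whose flux counts the skyrmion winding number Q:
B^{e}_{z} = \frac{\hbar}{2}\, \hat{n} \cdot \left( \partial_x \hat{n} \times \partial_y \hat{n} \right),
\qquad
Q = \frac{1}{4\pi} \int \hat{n} \cdot \left( \partial_x \hat{n} \times \partial_y \hat{n} \right) dx\, dy .
```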

  5. 3D Position and Velocity Vector Computations of Objects Jettisoned from the International Space Station Using Close-Range Photogrammetry Approach

    NASA Technical Reports Server (NTRS)

    Papanyan, Valeri; Oshle, Edward; Adamo, Daniel

    2008-01-01

    Measurement of the jettisoned object departure trajectory and velocity vector in the International Space Station (ISS) reference frame is vitally important for prompt evaluation of the object's imminent orbit. We report on the first successful application of photogrammetric analysis of ISS imagery for the prompt computation of a jettisoned object's position and velocity vectors. As examples of post-EVA analyses, we present the Floating Potential Probe (FPP) and the Russian "Orlan" space suit jettisons, as well as the near-real-time (provided several hours after the separation) computations of the Video Stanchion Support Assembly Flight Support Assembly (VSSA-FSA) and Early Ammonia Servicer (EAS) jettisons during the US astronauts' spacewalk. Standard close-range photogrammetry analysis was used during this EVA to analyze two on-board camera image sequences down-linked from the ISS. In this approach the ISS camera orientations were computed from known coordinates of several reference points on the ISS hardware. Then the position of the jettisoned object for each time frame was computed from its image in each frame of the video clips. In another, "quick-look" approach used in near-real time, the orientation of the cameras was computed from their position (from the ISS CAD model) and operational data (pan and tilt), and then the location of the jettisoned object was calculated for only several frames of the two synchronized movies. Keywords: Photogrammetry, International Space Station, jettisons, image analysis.
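
    The geometric core of this approach, triangulating a point from two synchronized views with known camera models and differencing the per-frame positions to get a velocity estimate, can be sketched as follows. This is a generic linear (DLT) triangulation under the assumption that 3x4 projection matrices are already available (per the abstract, from reference points on the ISS hardware or from the CAD model plus pan/tilt data); function names are illustrative, not the authors' tools.

```python
import numpy as np

def triangulate_point(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one 3D point from its pixel coordinates
    (u, v) in two synchronized views with known 3x4 projection matrices."""
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.array([u1 * P1[2] - P1[0],
                  v1 * P1[2] - P1[1],
                  u2 * P2[2] - P2[0],
                  v2 * P2[2] - P2[1]])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]          # dehomogenize to metric coordinates

def velocity_vectors(positions_m, dt_s):
    """Central-difference velocity estimates from per-frame 3D positions."""
    return np.gradient(np.asarray(positions_m), dt_s, axis=0)
```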

  6. Three Dimensional (3D) Printing: A Straightforward, User-Friendly Protocol to Convert Virtual Chemical Models to Real-Life Objects

    ERIC Educational Resources Information Center

    Rossi, Sergio; Benaglia, Maurizio; Brenna, Davide; Porta, Riccardo; Orlandi, Manuel

    2015-01-01

    A simple procedure to convert Protein Data Bank files (.pdb) into stereolithography files (.stl) using the VMD software (Visual Molecular Dynamics) is reported. This tutorial allows the generation, with a very simple protocol, of customized three-dimensional structures that can be printed by a low-cost 3D printer and used for teaching chemical education…

  7. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into
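
    As an illustration of the kind of derived function computed from the solution-file variables just described, the sketch below evaluates static pressure from the conservative quantities stored at each grid point (density, x-, y-, and z-momentum, and stagnation energy per unit volume). Only the arithmetic is shown; reading the binary grid and q files, whose layout varies between PLOT3D format variants, is omitted, and the nondimensionalization with gamma = 1.4 is an assumption.

```python
import numpy as np

def static_pressure(rho, rho_u, rho_v, rho_w, e_total, gamma=1.4):
    """Static pressure from conservative flow variables at each grid point:
    p = (gamma - 1) * (E - 0.5 * (rho*u^2 + rho*v^2 + rho*w^2)).

    Inputs are arrays over the grid (density, momentum components, stagnation
    energy per unit volume), as stored in a PLOT3D solution (q) file.
    """
    kinetic = 0.5 * (rho_u**2 + rho_v**2 + rho_w**2) / rho
    return (gamma - 1.0) * (e_total - kinetic)
```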

  8. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  9. Four-chamber heart modeling and automatic segmentation for 3-D cardiac