Sample records for "provide real-time visualization"

  1. Improvements and Additions to NASA Near Real-Time Earth Imagery

    NASA Technical Reports Server (NTRS)

    Cechini, Matthew; Boller, Ryan; Baynes, Kathleen; Schmaltz, Jeffrey; DeLuca, Alexandar; King, Jerome; Thompson, Charles; Roberts, Joe; Rodriguez, Joshua; Gunnoe, Taylor

    2016-01-01

    For many years, the NASA Global Imagery Browse Services (GIBS) has worked closely with the Land, Atmosphere Near real-time Capability for EOS (Earth Observing System) (LANCE) system to provide near real-time imagery visualizations of AIRS (Atmospheric Infrared Sounder), MLS (Microwave Limb Sounder), MODIS (Moderate Resolution Imaging Spectroradiometer), OMI (Ozone Monitoring Instrument), and, more recently, VIIRS (Visible Infrared Imaging Radiometer Suite) science parameters. These visualizations are readily available through standard web services and the NASA Worldview client. Access to near real-time imagery provides a critical capability to GIBS and Worldview users, and GIBS continues to improve its support for near real-time imagery in end-user applications. The focus of this presentation will be the following completed or planned GIBS system and imagery enhancements relating to near real-time imagery visualization.

  2. Issues in visual support to real-time space system simulation solved in the Systems Engineering Simulator

    NASA Technical Reports Server (NTRS)

    Yuen, Vincent K.

    1989-01-01

    The Systems Engineering Simulator has addressed the major issues in providing visual data to its real-time man-in-the-loop simulations. Out-the-window views and CCTV views are provided by three scene systems to give the astronauts their real-world views. To expand the window coverage for the Space Station Freedom workstation a rotating optics system is used to provide the widest field of view possible. To provide video signals to as many viewpoints as possible, windows and CCTVs, with a limited amount of hardware, a video distribution system has been developed to time-share the video channels among viewpoints at the selection of the simulation users. These solutions have provided the visual simulation facility for real-time man-in-the-loop simulations for the NASA space program.

  3. Scientific & Intelligence Exascale Visualization Analysis System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Money, James H.

    SIEVAS provides an immersive visualization framework for connecting multiple systems in real time for data science. It can connect multiple COTS and GOTS products in a seamless fashion for data fusion, data analysis, and viewing. It provides this capability by using a combination of microservices, real-time messaging, and a web-service-compliant back-end system.

  4. Perception of CPR quality: Influence of CPR feedback, Just-in-Time CPR training and provider role.

    PubMed

    Cheng, Adam; Overly, Frank; Kessler, David; Nadkarni, Vinay M; Lin, Yiqun; Doan, Quynh; Duff, Jonathan P; Tofil, Nancy M; Bhanji, Farhan; Adler, Mark; Charnovich, Alex; Hunt, Elizabeth A; Brown, Linda L

    2015-02-01

    Many healthcare providers rely on visual perception to guide cardiopulmonary resuscitation (CPR), but little is known about the accuracy of provider perceptions of CPR quality. We aimed to describe the difference between perceived and measured CPR quality, and to determine the impact of provider role, real-time visual CPR feedback, and Just-in-Time (JIT) CPR training on provider perceptions. We conducted secondary analyses of data collected from a prospective, multicenter, randomized trial of 324 healthcare providers who participated in a simulated cardiac arrest scenario between July 2012 and April 2014. Participants were randomized to one of four permutations of JIT CPR training and real-time visual CPR feedback. We calculated the difference between perceived and measured quality of CPR and reported the proportion of subjects accurately estimating the quality of CPR within each study arm. Participants overestimated achieving adequate chest compression depth (mean difference range: 16.1-60.6%) and rate (range: 0.2-51%), and underestimated chest compression fraction (0.2-2.9%) across all arms. Compared to no intervention, the use of real-time feedback and JIT CPR training (alone or in combination) improved perception of depth (p<0.001). Accurate estimation of CPR quality was poor for chest compression depth (0-13%), rate (5-46%) and chest compression fraction (60-63%). Perception of depth is more accurate in CPR providers versus team leaders (27.8% vs. 7.4%; p=0.043) when using real-time feedback. Healthcare providers' visual perception of CPR quality is poor. Perceptions of CPR depth are improved by using real-time visual feedback and with prior JIT CPR training. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  5. High Performance Real-Time Visualization of Voluminous Scientific Data Through the NOAA Earth Information System (NEIS).

    NASA Astrophysics Data System (ADS)

    Stewart, J.; Hackathorn, E. J.; Joyce, J.; Smith, J. S.

    2014-12-01

    Within our community, data volume is rapidly expanding. These data have limited value if one cannot interact with or visualize them in a timely manner. The scientific community needs the ability to dynamically visualize, analyze, and interact with these data along with other environmental data in real time, regardless of physical location or data format. Within the National Oceanic and Atmospheric Administration (NOAA), the Earth System Research Laboratory (ESRL) is actively developing the NOAA Earth Information System (NEIS). Previously, the NEIS team investigated methods of data discovery and interoperability. The recent focus has shifted to high-performance real-time visualization, allowing NEIS to bring massive amounts of 4-D data, including output from weather forecast models as well as data from different observations (surface obs, upper air, etc.), into one place. Our server-side architecture provides a real-time stream-processing system that uses server-based NVIDIA Graphics Processing Units (GPUs) for data processing, wavelet-based compression, and other preparation techniques for visualization, allowing NEIS to minimize the bandwidth and latency of data delivery to end users. Client side, users interact with NEIS services through the visualization application developed at ESRL called TerraViz. TerraViz is built on the Unity game engine and takes advantage of the GPU, allowing a user to interact with large data sets in real time in ways that might not have been possible before. Through these technologies, the NEIS team has improved accessibility to 'Big Data' and provided tools for novel visualization and seamless integration of data across time and space, regardless of data size, physical location, or data format. These capabilities make it possible to see global interactions and their importance for weather prediction. Additionally, they allow greater access than currently exists, helping to foster scientific collaboration and new ideas. This presentation will provide an update on recent enhancements of the NEIS architecture and visualization capabilities, challenges faced, and ongoing research activities related to this project.
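    The wavelet-based compression mentioned in the record above can be illustrated with a minimal sketch (not NEIS code, and not its actual wavelet or transport stack): a one-level 2D Haar transform followed by coefficient thresholding, so only the largest coefficients need to be streamed to clients. All names and parameters here are illustrative.

```python
import numpy as np

def haar2d(x):
    """One level of an orthonormal 2D Haar wavelet transform (rows, then columns)."""
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)
    x = np.hstack([lo, hi])
    lo = (x[0::2, :] + x[1::2, :]) / np.sqrt(2)
    hi = (x[0::2, :] - x[1::2, :]) / np.sqrt(2)
    return np.vstack([lo, hi])

def ihaar2d(c):
    """Exact inverse of haar2d (undo columns, then rows)."""
    n, m = c.shape
    x = np.empty_like(c)
    x[0::2, :] = (c[:n // 2, :] + c[n // 2:, :]) / np.sqrt(2)
    x[1::2, :] = (c[:n // 2, :] - c[n // 2:, :]) / np.sqrt(2)
    y = np.empty_like(x)
    y[:, 0::2] = (x[:, :m // 2] + x[:, m // 2:]) / np.sqrt(2)
    y[:, 1::2] = (x[:, :m // 2] - x[:, m // 2:]) / np.sqrt(2)
    return y

def compress(field, keep=0.3):
    """Zero all but (roughly) the largest `keep` fraction of wavelet coefficients."""
    c = haar2d(field)
    thresh = np.quantile(np.abs(c), 1 - keep)
    return np.where(np.abs(c) >= thresh, c, 0.0)

# A smooth synthetic "model field"; smooth data compresses well under wavelets.
field = np.add.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
sparse = compress(field, keep=0.3)
recon = ihaar2d(sparse)
```

    In a streaming setting, only the nonzero coefficients (and their positions) would be sent, and the client would run the inverse transform; for smooth fields the reconstruction error stays small even at aggressive thresholds.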

  6. Telerobotic Haptic Exploration in Art Galleries and Museums for Individuals with Visual Impairments.

    PubMed

    Park, Chung Hyuk; Ryu, Eun-Seok; Howard, Ayanna M

    2015-01-01

    This paper presents a haptic telepresence system that enables visually impaired users to explore visually rich locations, such as art galleries and museums, by using a telepresence robot, an RGB-D sensor (color and depth camera), and a haptic interface. Recent improvements in RGB-D sensors have enabled real-time access to 3D spatial information in the form of point clouds. However, the real-time representation of these data as a tangible haptic experience has not been sufficiently explored, especially in the case of telepresence for individuals with visual impairments. Thus, the proposed system addresses real-time haptic exploration of remote 3D information through video encoding and real-time 3D haptic rendering of the remote real-world environment. This paper investigates two scenarios in haptic telepresence, i.e., mobile navigation and object exploration in a remote environment. Participants with and without visual impairments took part in experiments based on the two scenarios, and the system performance was validated. In conclusion, the proposed framework provides a new methodology of haptic telepresence for individuals with visual impairments, offering an enhanced interactive experience in which they can remotely access public places (art galleries and museums) with the aid of the haptic modality and robotic telepresence.
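    The point-cloud step this record relies on, turning an RGB-D depth image into 3D points that a haptic renderer can probe, is standard pinhole-camera back-projection. A minimal sketch (the intrinsics and the tiny depth image are illustrative, not the paper's sensor calibration):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image to a 3D point cloud with the pinhole model.
    (fx, fy) are focal lengths in pixels, (cx, cy) the principal point.
    Zero depth means 'no return' and is dropped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # keep only valid returns

# Toy depth image with a single valid return at pixel (u=2, v=1), 2 m away.
depth = np.zeros((4, 4))
depth[1, 2] = 2.0
pts = depth_to_points(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
```

    A haptic loop would then query this cloud (e.g. nearest point to the proxy position) at each update to compute contact forces.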

  7. Visualizing Syllables: Real-Time Computerized Feedback within a Speech-Language Intervention

    ERIC Educational Resources Information Center

    DeThorne, Laura; Aparicio Betancourt, Mariana; Karahalios, Karrie; Halle, Jim; Bogue, Ellen

    2015-01-01

    Computerized technologies now offer unprecedented opportunities to provide real-time visual feedback to facilitate children's speech-language development. We employed a mixed-method design to examine the effectiveness of two speech-language interventions aimed at facilitating children's multisyllabic productions: one incorporated a novel…

  8. Real-Time Monitoring of Scada Based Control System for Filling Process

    NASA Astrophysics Data System (ADS)

    Soe, Aung Kyaw; Myint, Aung Naing; Latt, Maung Maung; Theingi

    2008-10-01

    This paper presents a design for real-time monitoring of a filling system using Supervisory Control and Data Acquisition (SCADA). The production process is monitored in real time using Visual Basic .NET programming under Visual Studio 2005, without dedicated SCADA software. The software integrators are programmed to get the required information for the configuration screens. Component simulation is displayed on the computer screen using a parallel port between the computer and the filling devices. Programs for real-time simulation of the filling process from the pure drinking-water industry are provided.

  9. Real-time tracking using stereo and motion: Visual perception for space robotics

    NASA Technical Reports Server (NTRS)

    Nishihara, H. Keith; Thomas, Hans; Huber, Eric; Reid, C. Ann

    1994-01-01

    The state-of-the-art in computing technology is rapidly attaining the performance necessary to implement many early vision algorithms at real-time rates. This new capability is helping to accelerate progress in vision research by improving our ability to evaluate the performance of algorithms in dynamic environments. In particular, we are becoming much more aware of the relative stability of various visual measurements in the presence of camera motion and system noise. This new processing speed is also allowing us to raise our sights toward accomplishing much higher-level processing tasks, such as figure-ground separation and active object tracking, in real-time. This paper describes a methodology for using early visual measurements to accomplish higher-level tasks; it then presents an overview of the high-speed accelerators developed at Teleos to support early visual measurements. The final section describes the successful deployment of a real-time vision system to provide visual perception for the Extravehicular Activity Helper/Retriever robotic system in tests aboard NASA's KC135 reduced gravity aircraft.

  10. Real-time scalable visual analysis on mobile devices

    NASA Astrophysics Data System (ADS)

    Pattath, Avin; Ebert, David S.; May, Richard A.; Collins, Timothy F.; Pike, William

    2008-02-01

    Interactive visual presentation of information can help an analyst gain faster and better insight from data. When combined with situational or context information, visualization on mobile devices is invaluable to in-field responders and investigators. However, several challenges are posed by the form-factor of mobile devices in developing such systems. In this paper, we classify these challenges into two broad categories - issues in general mobile computing and issues specific to visual analysis on mobile devices. Using NetworkVis and Infostar as example systems, we illustrate some of the techniques that we employed to overcome many of the identified challenges. NetworkVis is an OpenVG-based real-time network monitoring and visualization system developed for Windows Mobile devices. Infostar is a flash-based interactive, real-time visualization application intended to provide attendees access to conference information. Linked time-synchronous visualization, stylus/button-based interactivity, vector graphics, overview-context techniques, details-on-demand and statistical information display are some of the highlights of these applications.

  11. Real-time lexical comprehension in young children learning American Sign Language.

    PubMed

    MacDonald, Kyle; LaMarr, Todd; Corina, David; Marchman, Virginia A; Fernald, Anne

    2018-04-16

    When children interpret spoken language in real time, linguistic information drives rapid shifts in visual attention to objects in the visual world. This language-vision interaction can provide insights into children's developing efficiency in language comprehension. But how does language influence visual attention when the linguistic signal and the visual world are both processed via the visual channel? Here, we measured eye movements during real-time comprehension of a visual-manual language, American Sign Language (ASL), by 29 native ASL-learning children (16-53 mos, 16 deaf, 13 hearing) and 16 fluent deaf adult signers. All signers showed evidence of rapid, incremental language comprehension, tending to initiate an eye movement before sign offset. Deaf and hearing ASL-learners showed similar gaze patterns, suggesting that the in-the-moment dynamics of eye movements during ASL processing are shaped by the constraints of processing a visual language in real time and not by differential access to auditory information in day-to-day life. Finally, variation in children's ASL processing was positively correlated with age and vocabulary size. Thus, despite competition for attention within a single modality, the timing and accuracy of visual fixations during ASL comprehension reflect information processing skills that are important for language acquisition regardless of language modality. © 2018 John Wiley & Sons Ltd.

  12. Dashboard visualizations: Supporting real-time throughput decision-making.

    PubMed

    Franklin, Amy; Gantela, Swaroop; Shifarraw, Salsawit; Johnson, Todd R; Robinson, David J; King, Brent R; Mehta, Amit M; Maddow, Charles L; Hoot, Nathan R; Nguyen, Vickie; Rubio, Adriana; Zhang, Jiajie; Okafor, Nnaemeka G

    2017-07-01

    Providing timely and effective care in the emergency department (ED) requires the management of individual patients as well as the flow and demands of the entire department. Strategic changes to work processes, such as adding a flow coordination nurse or a physician in triage, have demonstrated improvements in throughput times. However, such global strategic changes do not address the real-time, often opportunistic workflow decisions of individual clinicians in the ED. We believe that real-time representation of the status of the entire emergency department and each patient within it through information visualizations will better support clinical decision-making in-the-moment and provide for rapid intervention to improve ED flow. This notion is based on previous work where we found that clinicians' workflow decisions were often based on an in-the-moment local perspective, rather than a global perspective. Here, we discuss the challenges of designing and implementing visualizations for ED through a discussion of the development of our prototype Throughput Dashboard and the potential it holds for supporting real-time decision-making. Copyright © 2017. Published by Elsevier Inc.

  13. Real-Time Motion Tracking for Mobile Augmented/Virtual Reality Using Adaptive Visual-Inertial Fusion

    PubMed Central

    Fang, Wei; Zheng, Lianyu; Deng, Huanjun; Zhang, Hongbo

    2017-01-01

    In mobile augmented/virtual reality (AR/VR), real-time 6-Degree of Freedom (DoF) motion tracking is essential for the registration between virtual scenes and the real world. However, due to the limited computational capacity of mobile terminals today, the latency between consecutive arriving poses would damage the user experience in mobile AR/VR. Thus, a visual-inertial based real-time motion tracking for mobile AR/VR is proposed in this paper. By means of high frequency and passive outputs from the inertial sensor, the real-time performance of arriving poses for mobile AR/VR is achieved. In addition, to alleviate the jitter phenomenon during the visual-inertial fusion, an adaptive filter framework is established to cope with different motion situations automatically, enabling the real-time 6-DoF motion tracking by balancing the jitter and latency. Besides, the robustness of the traditional visual-only based motion tracking is enhanced, giving rise to a better mobile AR/VR performance when motion blur is encountered. Finally, experiments are carried out to demonstrate the proposed method, and the results show that this work is capable of providing a smooth and robust 6-DoF motion tracking for mobile AR/VR in real-time. PMID:28475145
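    The core idea in this record, high-rate inertial prediction corrected by lower-rate visual poses, with a gain that adapts to motion so jitter and latency are balanced, can be sketched as a 1-D complementary filter. This is an illustrative toy, not the paper's filter; all rates, noise levels, and gains are assumptions.

```python
import numpy as np

def fuse(imu_rate_hz, cam_rate_hz, duration_s, true_fn,
         gain_slow=0.02, gain_fast=0.2):
    """Complementary-filter sketch: integrate noisy IMU velocity at high rate,
    then pull the estimate toward the latest (slower, slightly stale) visual
    pose. The correction gain adapts to motion: small when nearly still (less
    jitter), large when moving fast (less lag)."""
    rng = np.random.default_rng(0)
    dt = 1.0 / imu_rate_hz
    cam_every = imu_rate_hz // cam_rate_hz      # IMU steps per camera frame
    est, last_visual = 0.0, 0.0
    out = []
    for k in range(int(duration_s * imu_rate_hz)):
        t0 = k * dt
        # Noisy IMU velocity sample, then high-rate prediction step.
        vel = (true_fn(t0 + dt) - true_fn(t0)) / dt + rng.normal(0, 0.05)
        est += vel * dt
        # Low-rate visual pose (noisy, and stale between camera frames).
        if k % cam_every == 0:
            last_visual = true_fn(t0) + rng.normal(0, 0.01)
        # Adaptive correction gain based on current motion magnitude.
        gain = gain_fast if abs(vel) > 0.5 else gain_slow
        est += gain * (last_visual - est)
        out.append((t0 + dt, est))
    return out

# Track a 1 Hz sinusoidal "pose" with a 200 Hz IMU and a 20 Hz camera.
traj = fuse(200, 20, 2.0, lambda t: np.sin(2 * np.pi * t))
```

    A real 6-DoF system does the same thing with quaternions and an EKF-style filter, but the prediction/correction structure and the gain trade-off are the same.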

  14. Real-Time Motion Tracking for Mobile Augmented/Virtual Reality Using Adaptive Visual-Inertial Fusion.

    PubMed

    Fang, Wei; Zheng, Lianyu; Deng, Huanjun; Zhang, Hongbo

    2017-05-05

    In mobile augmented/virtual reality (AR/VR), real-time 6-Degree of Freedom (DoF) motion tracking is essential for the registration between virtual scenes and the real world. However, due to the limited computational capacity of mobile terminals today, the latency between consecutive arriving poses would damage the user experience in mobile AR/VR. Thus, a visual-inertial based real-time motion tracking for mobile AR/VR is proposed in this paper. By means of high frequency and passive outputs from the inertial sensor, the real-time performance of arriving poses for mobile AR/VR is achieved. In addition, to alleviate the jitter phenomenon during the visual-inertial fusion, an adaptive filter framework is established to cope with different motion situations automatically, enabling the real-time 6-DoF motion tracking by balancing the jitter and latency. Besides, the robustness of the traditional visual-only based motion tracking is enhanced, giving rise to a better mobile AR/VR performance when motion blur is encountered. Finally, experiments are carried out to demonstrate the proposed method, and the results show that this work is capable of providing a smooth and robust 6-DoF motion tracking for mobile AR/VR in real-time.

  15. Adaptive Kalman filtering for real-time mapping of the visual field

    PubMed Central

    Ward, B. Douglas; Janik, John; Mazaheri, Yousef; Ma, Yan; DeYoe, Edgar A.

    2013-01-01

    This paper demonstrates the feasibility of real-time mapping of the visual field for clinical applications. Specifically, three aspects of this problem were considered: (1) experimental design, (2) statistical analysis, and (3) display of results. Proper experimental design is essential to achieving a successful outcome, particularly for real-time applications. A random-block experimental design was shown to have less sensitivity to measurement noise, as well as greater robustness to error in modeling of the hemodynamic impulse response function (IRF) and greater flexibility than common alternatives. In addition, random encoding of the visual field allows for the detection of voxels that are responsive to multiple, not necessarily contiguous, regions of the visual field. Due to its recursive nature, the Kalman filter is ideally suited for real-time statistical analysis of visual field mapping data. An important feature of the Kalman filter is that it can be used for nonstationary time series analysis. The capability of the Kalman filter to adapt, in real time, to abrupt changes in the baseline arising from subject motion inside the scanner and other external system disturbances is important for the success of clinical applications. The clinician needs real-time information to evaluate the success or failure of the imaging run and to decide whether to extend, modify, or terminate the run. Accordingly, the analytical software provides real-time displays of (1) brain activation maps for each stimulus segment, (2) voxel-wise spatial tuning profiles, (3) time plots of the variability of response parameters, and (4) time plots of activated volume. PMID:22100663
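    The recursive structure that makes the Kalman filter suitable for real-time analysis, and its ability to adapt to abrupt baseline shifts, can be shown with a scalar sketch (not the paper's fMRI model; the noise variances and the simulated jump are illustrative):

```python
import numpy as np

def kalman_track(measurements, q=0.01, r=0.25):
    """Scalar Kalman filter with a random-walk state model:
    x_k = x_{k-1} + w (process noise q), z_k = x_k + v (measurement noise r).
    Because the state is allowed to drift (q > 0), the filter adapts to
    abrupt baseline shifts instead of averaging them away."""
    x, p = 0.0, 1.0          # initial state estimate and variance
    out = []
    for z in measurements:
        p = p + q            # predict: state may have drifted
        k = p / (p + r)      # Kalman gain
        x = x + k * (z - x)  # update with the innovation
        p = (1 - k) * p      # posterior variance
        out.append(x)
    return out

rng = np.random.default_rng(1)
# Baseline that jumps abruptly halfway through (e.g. subject motion).
truth = np.concatenate([np.zeros(200), np.full(200, 2.0)])
zs = truth + rng.normal(0, 0.5, size=400)
est = kalman_track(zs)
```

    Each update touches only the previous estimate and one new sample, which is why the per-timepoint cost is constant and the clinician can watch estimates evolve during the run.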

  16. Real-time evaluation and visualization of learner performance in a mixed-reality environment for clinical breast examination.

    PubMed

    Kotranza, Aaron; Lind, D Scott; Lok, Benjamin

    2012-07-01

    We investigate the efficacy of incorporating real-time feedback of user performance within mixed-reality environments (MREs) for training real-world tasks with tightly coupled cognitive and psychomotor components. This paper presents an approach to providing real-time evaluation and visual feedback of learner performance in an MRE for training clinical breast examination (CBE). In a user study of experienced and novice CBE practitioners (n = 69), novices receiving real-time feedback performed equivalently or better than more experienced practitioners in the completeness and correctness of the exam. A second user study (n = 8) followed novices through repeated practice of CBE in the MRE. Results indicate that skills improvement in the MRE transfers to the real-world task of CBE of human patients. This initial case study demonstrates the efficacy of MREs incorporating real-time feedback for training real-world cognitive-psychomotor tasks.

  17. Multi-Mission Simulation and Visualization for Real-Time Telemetry Display, Playback and EDL Event Reconstruction

    NASA Technical Reports Server (NTRS)

    Pomerantz, M. I.; Lim, C.; Myint, S.; Woodward, G.; Balaram, J.; Kuo, C.

    2012-01-01

    The Jet Propulsion Laboratory's Entry, Descent and Landing (EDL) Reconstruction Task has developed a software system that provides mission operations personnel and analysts with a real-time telemetry-based live display, playback, and post-EDL reconstruction capability that leverages the existing high-fidelity, physics-based simulation framework and modern game-engine-derived 3D visualization system developed in the JPL Dynamics and Real Time Simulation (DARTS) Lab. Developed as a multi-mission solution, the EDL Telemetry Visualization (ETV) system has been used for a variety of projects, including NASA's Mars Science Laboratory (MSL), NASA's Low Density Supersonic Decelerator (LDSD), and JPL's MoonRise lunar sample return proposal.

  18. Real-time quasi-3D tomographic reconstruction

    NASA Astrophysics Data System (ADS)

    Buurlage, Jan-Willem; Kohr, Holger; Palenstijn, Willem Jan; Joost Batenburg, K.

    2018-06-01

    Developments in acquisition technology and a growing need for time-resolved experiments pose great computational challenges in tomography. In addition, access to reconstructions in real time is a highly demanded feature but has so far been out of reach. We show that by exploiting the mathematical properties of filtered backprojection-type methods, access to real-time reconstructions of arbitrarily oriented slices becomes feasible. Furthermore, we present software for visualization and on-demand reconstruction of slices. A user can interactively shift and rotate slices in a GUI, while the software updates the slice in real time. For certain use cases, the possibility to study arbitrarily oriented slices in real time directly from the measured data provides sufficient visual and quantitative insight. Two such applications are discussed in this article.
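    The mathematical property this record exploits is that filtered backprojection is linear and local: each projection row is filtered once, and then only the pixels of the requested slice need to be backprojected, so a single slice is cheap to produce on demand. A minimal 2D parallel-beam sketch (geometry, phantom, and normalization are illustrative, not the published software):

```python
import numpy as np

def fbp_slice(sino, thetas, n):
    """Filtered backprojection of one n x n slice from a parallel-beam
    sinogram (one row per angle). Rows are ramp-filtered in Fourier space,
    then smeared back along their angle; only this slice's pixels are
    ever touched, which is what makes on-demand slices cheap."""
    n_det = sino.shape[1]
    ramp = np.abs(np.fft.fftfreq(n_det))            # ideal ramp filter
    filtered = np.fft.ifft(np.fft.fft(sino, axis=1) * ramp, axis=1).real
    xs = np.arange(n) - n // 2
    X, Y = np.meshgrid(xs, xs, indexing="xy")       # pixel coordinates
    slice_img = np.zeros((n, n))
    for th, row in zip(thetas, filtered):
        s = X * np.cos(th) + Y * np.sin(th)         # detector coordinate per pixel
        idx = np.clip(np.round(s).astype(int) + n_det // 2, 0, n_det - 1)
        slice_img += row[idx]                       # nearest-neighbour backprojection
    return slice_img * np.pi / len(thetas)

# Point phantom at (x=8, y=-5): its projection is a spike at s = x cos(th) + y sin(th).
n, n_det = 64, 95
thetas = np.linspace(0, np.pi, 180, endpoint=False)
sino = np.zeros((len(thetas), n_det))
for i, th in enumerate(thetas):
    s = int(round(8 * np.cos(th) - 5 * np.sin(th))) + n_det // 2
    sino[i, s] = 1.0
img = fbp_slice(sino, thetas, n)
```

    The reconstruction peaks at the phantom's location; in the real-time setting the filtering is done once per incoming projection, and each slice request only pays for the backprojection loop.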

  19. CyberPetri at CDX 2016: Real-time Network Situation Awareness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arendt, Dustin L.; Best, Daniel M.; Burtner, Edwin R.

    CyberPetri is a novel visualization technique that provides a flexible map of the network based on available characteristics, such as IP address, operating system, or service. Previous work introduced CyberPetri as a visualization feature in Ocelot, a network defense tool that helped security analysts understand and respond to an active defense scenario. In this paper we present a case study in which we use the CyberPetri visualization technique to support real-time situation awareness during the 2016 Cyber Defense Exercise.

  20. Advanced Visualization of Experimental Data in Real Time Using LiveView3D

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; Fleming, Gary A.

    2006-01-01

    LiveView3D is a software application that imports and displays a variety of wind tunnel derived data in an interactive virtual environment in real time. LiveView3D combines the use of streaming video fed into a three-dimensional virtual representation of the test configuration with networked communications to the test facility Data Acquisition System (DAS). This unified approach to real time data visualization provides a unique opportunity to comprehend very large sets of diverse forms of data in a real time situation, as well as in post-test analysis. This paper describes how LiveView3D has been implemented to visualize diverse forms of aerodynamic data gathered during wind tunnel experiments, most notably at the NASA Langley Research Center Unitary Plan Wind Tunnel (UPWT). Planned future developments of the LiveView3D system are also addressed.

  1. Real-time feedback on nonverbal clinical communication. Theoretical framework and clinician acceptance of ambient visual design.

    PubMed

    Hartzler, A L; Patel, R A; Czerwinski, M; Pratt, W; Roseway, A; Chandrasekaran, N; Back, A

    2014-01-01

    This article is part of the focus theme of Methods of Information in Medicine on "Pervasive Intelligent Technologies for Health". Effective nonverbal communication between patients and clinicians fosters both the delivery of empathic patient-centered care and positive patient outcomes. Although nonverbal skill training is a recognized need, few efforts to enhance patient-clinician communication provide visual feedback on nonverbal aspects of the clinical encounter. We describe a novel approach that uses social signal processing technology (SSP) to capture nonverbal cues in real time and to display ambient visual feedback on control and affiliation, two primary yet distinct dimensions of interpersonal nonverbal communication. To examine the design and clinician acceptance of ambient visual feedback on nonverbal communication, we 1) formulated a model of relational communication to ground SSP and 2) conducted a formative user study using mixed methods to explore the design of visual feedback. Based on a model of relational communication, we reviewed interpersonal communication research to map nonverbal cues to signals of affiliation and control evidenced in patient-clinician interaction. Corresponding with this theoretical framework, we designed ambient real-time visualizations that reflect variations of affiliation and control. To explore clinicians' acceptance of this visual feedback, we conducted a lab study using the Wizard-of-Oz technique to simulate system use with 16 healthcare professionals. We followed up with seven of those participants through interviews to iterate on the design with a revised visualization that addressed emergent design considerations. Ambient visual feedback on nonverbal communication provides a theoretically grounded and acceptable way to give clinicians awareness of their nonverbal communication style. We provide implications for the design of such visual feedback that encourages empathic patient-centered communication, including considerations of metaphor, color, size, position, and timing of feedback. Ambient visual feedback from SSP holds promise as an acceptable means of facilitating empathic patient-centered nonverbal communication.

  2. Achieving real-time capsule endoscopy (CE) video visualization through panoramic imaging

    NASA Astrophysics Data System (ADS)

    Yi, Steven; Xie, Jean; Mui, Peter; Leighton, Jonathan A.

    2013-02-01

    In this paper, we present a novel real-time capsule endoscopy (CE) video visualization concept based on panoramic imaging. Typical CE videos run about 8 hours and are manually reviewed by physicians to locate diseases such as bleeding and polyps. To date, there is no commercially available tool capable of providing stabilized and processed CE video that is easy to analyze in real time, so the burden on physicians' disease-finding efforts is substantial. In fact, since the CE camera sensor has a limited forward-looking view and a low frame rate (typically 2 frames per second), and captures very close-range imaging of the GI tract surface, it is no surprise that traditional visualization methods based on tracking and registration often fail to work. This paper presents a novel concept for real-time CE video stabilization and display. Instead of working directly on traditional forward-looking FOV (field of view) images, we work on panoramic images to bypass many problems facing traditional imaging modalities. Methods for panoramic image generation based on optical lens principles, leading to real-time data visualization, are presented. In addition, non-rigid panoramic image registration methods are discussed.

  3. Real-time Magnetic Resonance Imaging Guidance for Cardiovascular Procedures

    PubMed Central

    Horvath, Keith A.; Li, Ming; Mazilu, Dumitru; Guttman, Michael A.; McVeigh, Elliot R.

    2008-01-01

    Magnetic resonance imaging (MRI) of the cardiovascular system has proven to be an invaluable diagnostic tool. Because it allows real-time imaging, MRI guidance of intraoperative procedures can provide superb visualization, which can facilitate a variety of interventions and minimize the trauma of the operations as well. In addition to the anatomic detail, MRI can provide intraoperative assessment of organ and device function. Instruments and devices can be marked to enhance visualization and tracking. All of this is an advance over standard x-ray or ultrasonic imaging. PMID:18395633

  4. Real-Time Visualization of Tissue Ischemia

    NASA Technical Reports Server (NTRS)

    Bearman, Gregory H. (Inventor); Chrien, Thomas D. (Inventor); Eastwood, Michael L. (Inventor)

    2000-01-01

    A real-time display of tissue ischemia comprising three CCD video cameras, each with a narrow-bandwidth filter at the correct wavelength, is discussed. The cameras simultaneously view an area of tissue suspected of having ischemic areas through beamsplitters. The output from each camera is adjusted to give the correct signal intensity for combining with the others into an image for display. If necessary, a digital signal processor (DSP) can implement algorithms for image enhancement prior to display; current DSP engines are fast enough to give real-time display. Measurement at three wavelengths, combined into a real-time Red-Green-Blue (RGB) video display with a DSP board to implement image algorithms, provides direct visualization of ischemic areas.
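    The per-camera gain adjustment and three-channel combination described above amount to stacking the three narrowband frames into an RGB image. A minimal sketch (the gains and frame values are illustrative, not the invention's calibration):

```python
import numpy as np

def compose_rgb(frames, gains):
    """Merge three narrowband camera frames (values in [0, 1]) into one RGB
    frame for display. `gains` plays the role of the per-camera intensity
    adjustment; out-of-range values are clipped for display."""
    rgb = np.stack([g * f for g, f in zip(gains, frames)], axis=-1)
    return np.clip(rgb, 0.0, 1.0)

h, w = 4, 4
cam_a = np.full((h, w), 0.2)   # e.g. an ischemia-sensitive wavelength
cam_b = np.full((h, w), 0.5)
cam_c = np.full((h, w), 0.9)
frame = compose_rgb([cam_a, cam_b, cam_c], gains=(2.0, 1.0, 1.5))
```

    A DSP stage would slot in between acquisition and `compose_rgb` (or after it), applying enhancement algorithms per channel before the combined frame is displayed.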

  5. The use of real-time ultrasound imaging for biofeedback of lumbar multifidus muscle contraction in healthy subjects.

    PubMed

    Van, Khai; Hides, Julie A; Richardson, Carolyn A

    2006-12-01

    Randomized controlled trial. To determine if the provision of visual biofeedback using real-time ultrasound imaging enhances the ability to activate the multifidus muscle. Increasingly clinicians are using real-time ultrasound as a form of biofeedback when re-educating muscle activation. The effectiveness of this form of biofeedback for the multifidus muscle has not been reported. Healthy subjects were randomly divided into groups that received different forms of biofeedback. All subjects received clinical instruction on how to activate the multifidus muscle isometrically prior to testing and verbal feedback regarding the amount of multifidus contraction, which occurred during 10 repetitions (acquisition phase). In addition, 1 group received visual biofeedback (watched the multifidus muscle contract) using real-time ultrasound imaging. All subjects were reassessed a week later (retention phase). Subjects from both groups improved their voluntary contraction of the multifidus muscle in the acquisition phase (P<.001) and the ability to recruit the multifidus muscle differed between groups (P<.05), with subjects in the group that received visual ultrasound biofeedback achieving greater improvements. In addition, the group that received visual ultrasound biofeedback retained their improvement in performance from week 1 to week 2 (P>.90), whereas the performance of the other group decreased (P<.05). Real-time ultrasound imaging can be used to provide visual biofeedback and improve performance and retention in the ability to activate the multifidus muscle in healthy subjects.

  6. Real-Time Visualization of Network Behaviors for Situational Awareness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Best, Daniel M.; Bohn, Shawn J.; Love, Douglas V.

    Plentiful, complex, and dynamic data make understanding the state of an enterprise network difficult. Although visualization can help analysts understand baseline behaviors in network traffic and identify off-normal events, visual analysis systems often do not scale well to operational data volumes (in the hundreds of millions to billions of transactions per day) nor to analysis of emergent trends in real-time data. We present a system that combines multiple, complementary visualization techniques coupled with in-stream analytics, behavioral modeling of network actors, and a high-throughput processing platform called MeDICi. This system provides situational understanding of real-time network activity to help analysts take proactive response steps. We have developed these techniques using requirements gathered from the government users for which the tools are being developed. By linking multiple visualization tools to a streaming analytic pipeline, and designing each tool to support a particular kind of analysis (from high-level awareness to detailed investigation), analysts can understand the behavior of a network across multiple levels of abstraction.

  7. Real-space and real-time dynamics of CRISPR-Cas9 visualized by high-speed atomic force microscopy.

    PubMed

    Shibata, Mikihiro; Nishimasu, Hiroshi; Kodera, Noriyuki; Hirano, Seiichi; Ando, Toshio; Uchihashi, Takayuki; Nureki, Osamu

    2017-11-10

    The CRISPR-associated endonuclease Cas9 binds to a guide RNA and cleaves double-stranded DNA with a sequence complementary to the RNA guide. The Cas9-RNA system has been harnessed for numerous applications, such as genome editing. Here we use high-speed atomic force microscopy (HS-AFM) to visualize the real-space and real-time dynamics of CRISPR-Cas9 in action. HS-AFM movies indicate that, whereas apo-Cas9 adopts unexpected flexible conformations, Cas9-RNA forms a stable bilobed structure and interrogates target sites on the DNA by three-dimensional diffusion. These movies also provide real-time visualization of the Cas9-mediated DNA cleavage process. Notably, the Cas9 HNH nuclease domain fluctuates upon DNA binding, and subsequently adopts an active conformation, where the HNH active site is docked at the cleavage site in the target DNA. Collectively, our HS-AFM data extend our understanding of the action mechanism of CRISPR-Cas9.

  8. Low Cost Embedded Stereo System for Underwater Surveys

    NASA Astrophysics Data System (ADS)

    Nawaf, M. M.; Boï, J.-M.; Merad, D.; Royer, J.-P.; Drap, P.

    2017-11-01

    This paper details both the hardware and software design and realization of a hand-held embedded stereo system for underwater imaging. The designed system can run most image processing techniques smoothly in real time. The developed functions provide direct visual feedback on the quality of the captured images, which helps the operator take appropriate actions regarding movement speed and lighting conditions. The proposed functionalities can be easily customized or upgraded, and new functions can be easily added thanks to the supported libraries. Furthermore, by connecting the designed system to a more powerful computer, real-time visual odometry can be run on the captured images to provide live navigation and a site coverage map. We use a visual odometry method adapted to systems with low computational resources and long autonomy requirements. The system was tested in a real context and showed its robustness and promising further perspectives.

  9. A proposed intracortical visual prosthesis image processing system.

    PubMed

    Srivastava, N R; Troyk, P

    2005-01-01

    It has long been a goal of neuroprosthesis researchers to develop a system that could provide artificial vision to a large population of individuals with blindness. Earlier researchers demonstrated that electrically stimulating the visual cortex can evoke spatial visual percepts, i.e., phosphenes. The goal of a visual cortex prosthesis is to stimulate the visual cortex and generate visual perception in real time to restore vision. Even though the normal working of the visual system is not completely understood, the existing knowledge has inspired research groups to develop strategies for visual cortex prostheses that can help blind patients in their daily activities. A major limitation in this work is the development of an image processing system for converting an electronic image, as captured by a camera, into a real-time data stream for stimulation of the implanted electrodes. This paper proposes a system that captures the image using a camera and uses dedicated real-time image processing hardware to deliver electrical pulses to intracortical electrodes. The system must be flexible enough to adapt to individual patients and to various image reconstruction strategies. Here we consider a preliminary architecture for this system.

  10. Decision support system for outage management and automated crew dispatch

    DOEpatents

    Kang, Ning; Mousavi, Mirrasoul

    2018-01-23

    A decision support system is provided for utility operations to assist with crew dispatch and restoration activities following a disturbance in a multiphase power distribution network, by providing a real-time visualization of the possible fault location(s). The system covers faults that occur on fuse-protected laterals. It uses real-time data from intelligent electronic devices, coupled with other data sources such as static feeder maps, to provide a complete picture of the disturbance event, guiding the utility crew to the most probable location(s). This information is provided in real time, reducing restoration time and avoiding more costly and laborious fault location finding practices.

  11. Comparison of deep neural networks to spatio-temporal cortical dynamics of human visual object recognition reveals hierarchical correspondence

    PubMed Central

    Cichy, Radoslaw Martin; Khosla, Aditya; Pantazis, Dimitrios; Torralba, Antonio; Oliva, Aude

    2016-01-01

    The complex multi-stage architecture of cortical visual pathways provides the neural basis for efficient visual object recognition in humans. However, the stage-wise computations therein remain poorly understood. Here, we compared temporal (magnetoencephalography) and spatial (functional MRI) visual brain representations with representations in an artificial deep neural network (DNN) tuned to the statistics of real-world visual recognition. We showed that the DNN captured the stages of human visual processing in both time and space from early visual areas towards the dorsal and ventral streams. Further investigation of crucial DNN parameters revealed that while model architecture was important, training on real-world categorization was necessary to enforce spatio-temporal hierarchical relationships with the brain. Together our results provide an algorithmically informed view on the spatio-temporal dynamics of visual object recognition in the human visual brain. PMID:27282108

  12. Comparison of deep neural networks to spatio-temporal cortical dynamics of human visual object recognition reveals hierarchical correspondence.

    PubMed

    Cichy, Radoslaw Martin; Khosla, Aditya; Pantazis, Dimitrios; Torralba, Antonio; Oliva, Aude

    2016-06-10

    The complex multi-stage architecture of cortical visual pathways provides the neural basis for efficient visual object recognition in humans. However, the stage-wise computations therein remain poorly understood. Here, we compared temporal (magnetoencephalography) and spatial (functional MRI) visual brain representations with representations in an artificial deep neural network (DNN) tuned to the statistics of real-world visual recognition. We showed that the DNN captured the stages of human visual processing in both time and space from early visual areas towards the dorsal and ventral streams. Further investigation of crucial DNN parameters revealed that while model architecture was important, training on real-world categorization was necessary to enforce spatio-temporal hierarchical relationships with the brain. Together our results provide an algorithmically informed view on the spatio-temporal dynamics of visual object recognition in the human visual brain.

  13. Expanding Access and Usage of NASA Near Real-Time Imagery and Data

    NASA Astrophysics Data System (ADS)

    Cechini, M.; Murphy, K. J.; Boller, R. A.; Schmaltz, J. E.; Thompson, C. K.; Huang, T.; McGann, J. M.; Ilavajhala, S.; Alarcon, C.; Roberts, J. T.

    2013-12-01

    In late 2009, the Land Atmosphere Near-real-time Capability for EOS (LANCE) was created to greatly expand the range of near real-time data products from a variety of Earth Observing System (EOS) instruments. Since that time, NASA's Earth Observing System Data and Information System (EOSDIS) developed the Global Imagery Browse Services (GIBS) to provide highly responsive, scalable, and expandable imagery services that distribute near real-time imagery in an intuitive and geo-referenced format. The GIBS imagery services provide access through standards-based protocols such as the Open Geospatial Consortium (OGC) Web Map Tile Service (WMTS) and standard mapping file formats such as the Keyhole Markup Language (KML). Leveraging these standard mechanisms opens NASA near real-time imagery to a broad landscape of mapping libraries supporting mobile applications. By integrating easily with mobile application development libraries, GIBS makes it possible for NASA imagery to become a reliable and valuable source for end-user applications. Recently, EOSDIS has taken steps to integrate near real-time metadata products into the EOS ClearingHOuse (ECHO) metadata repository. Registration of near real-time metadata allows for near real-time data discovery through ECHO clients. In keeping with the near real-time data processing requirements, the ECHO ingest model allows for low-latency metadata insertion and updates. Combined with the ECHO repository, the fast visual access of GIBS imagery can now be linked directly back to the source data file(s). Through the use of discovery standards such as OpenSearch, desktop and mobile applications can connect users to more than just an image. As data services, such as the OGC Web Coverage Service, become more prevalent within the EOSDIS system, applications may even be able to connect users from imagery to data values. In addition, the full resolution GIBS imagery provides visual context to other GIS data and tools.
The NASA near real-time imagery covers a broad set of Earth science disciplines. By leveraging the ECHO and GIBS services, these data can become a visual context within which other GIS activities are performed. The focus of this presentation is to discuss the GIBS imagery and ECHO metadata services facilitating near real-time discovery and usage. Existing synergies and future possibilities will also be discussed. The NASA Worldview demonstration client will be used to show an existing application combining the ECHO and GIBS services.
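
    The WMTS access pattern described above can be illustrated with a small URL builder. This is a sketch of the GIBS RESTful tile request layout; the layer name, tile matrix set, and endpoint details here are examples and should be checked against the current GIBS documentation.

```python
def gibs_tile_url(layer, date, z, y, x, matrix_set="250m", ext="jpg"):
    """Build a RESTful WMTS tile URL for the GIBS EPSG:4326 endpoint.

    Follows the pattern
    {base}/{layer}/default/{date}/{TileMatrixSet}/{TileMatrix}/{TileRow}/{TileCol}.{ext}
    The layer and matrix-set names used below are illustrative examples.
    """
    base = "https://gibs.earthdata.nasa.gov/wmts/epsg4326/best"
    return f"{base}/{layer}/default/{date}/{matrix_set}/{z}/{y}/{x}.{ext}"

# e.g. one true-color MODIS Terra tile for a given day
url = gibs_tile_url("MODIS_Terra_CorrectedReflectance_TrueColor",
                    "2013-12-01", 2, 1, 1)
```

    A mapping library that speaks WMTS (or plain tiled XYZ URLs) can consume such endpoints directly, which is what makes GIBS easy to embed in mobile clients.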

  14. A study of internet of things real-time data updating based on WebSocket

    NASA Astrophysics Data System (ADS)

    Wei, Shoulin; Yu, Konglin; Dai, Wei; Liang, Bo; Zhang, Xiaoli

    2015-12-01

    The Internet of Things (IoT) is gradually entering the industrial stage. Web applications in IoT, such as monitoring, instant messaging, and real-time quote systems, need changes to be transmitted to clients in real time without the client constantly refreshing and sending requests. These applications often need to be as fast as possible and to provide nearly real-time components. Real-time data updating is becoming the core of application-layer visualization technology in IoT: with server-side data push, the running state of "things" in IoT can be displayed in real time. This paper discusses several current real-time data updating methods and explores the advantages and disadvantages of each. We explore the use of WebSocket in a new approach to real-time data updating in IoT, since WebSocket provides a low-delay, low-network-throughput solution for full-duplex communication.
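
    The server push that makes WebSocket attractive here comes down to the framing defined in RFC 6455: after the handshake, the server can emit a frame at any moment with no client request. A minimal, stdlib-only sketch of encoding an unmasked server-to-client text frame:

```python
import struct

def encode_text_frame(message: str) -> bytes:
    """Encode a server-to-client WebSocket text frame (RFC 6455).

    Server frames are unmasked: byte 0 is FIN | text opcode = 0x81,
    followed by a 7-bit length, with 126/127 escapes for longer payloads.
    """
    payload = message.encode("utf-8")
    n = len(payload)
    if n < 126:
        header = bytes([0x81, n])                       # short payload
    elif n < 0x10000:
        header = bytes([0x81, 126]) + struct.pack(">H", n)  # 16-bit length
    else:
        header = bytes([0x81, 127]) + struct.pack(">Q", n)  # 64-bit length
    return header + payload
```

    In practice a server-side library handles this framing, but the small fixed overhead per message (2 bytes here for short payloads) is exactly the low-throughput property the abstract refers to, compared with repeated HTTP polling.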

  15. Regional early flood warning system: design and implementation

    NASA Astrophysics Data System (ADS)

    Chang, L. C.; Yang, S. N.; Kuo, C. L.; Wang, Y. F.

    2017-12-01

    This study proposes a prototype regional early flood inundation warning system for Tainan City, Taiwan. AI technology is used to forecast multi-step-ahead regional flood inundation maps during storm events. The computing time is only a few seconds, which enables real-time regional flood inundation forecasting. A database is built to organize data and information for building real-time forecasting models, maintaining the relations of forecasted points, and displaying forecasted results, while real-time data acquisition is another key task, since the model requires immediate access to rain gauge information to provide forecast services. All database-related programs are built on Microsoft SQL Server using Visual C# to extract real-time hydrological data, manage data, store forecasted results, and feed the visual map-based display. The regional early flood inundation warning system uses up-to-date web technologies, driven by the database and real-time data acquisition, to display on-line forecast flood inundation depths in the study area. The friendly interface sequentially shows the inundated area on Google Maps, along with the maximum inundation depth and its location, and provides a KMZ file download of the results that can be viewed in Google Earth. The developed system can provide all the relevant information and on-line forecast results, helping city authorities to make decisions during typhoon events and take actions to mitigate losses.
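
    The KMZ export mentioned above is just zipped KML, and a forecast point can be written as a KML placemark with stdlib tools. The element layout below is a generic KML sketch, not the system's actual output format, and the site name and coordinates are invented for illustration.

```python
import xml.etree.ElementTree as ET

def inundation_placemark_kml(name, lon, lat, depth_m):
    """Build a minimal KML document with one placemark marking a
    forecast maximum inundation depth (illustrative layout only)."""
    ns = "http://www.opengis.net/kml/2.2"
    kml = ET.Element("{%s}kml" % ns)
    doc = ET.SubElement(kml, "{%s}Document" % ns)
    pm = ET.SubElement(doc, "{%s}Placemark" % ns)
    ET.SubElement(pm, "{%s}name" % ns).text = name
    ET.SubElement(pm, "{%s}description" % ns).text = (
        "Max depth: %.2f m" % depth_m)
    pt = ET.SubElement(pm, "{%s}Point" % ns)
    # KML coordinates are lon,lat,altitude
    ET.SubElement(pt, "{%s}coordinates" % ns).text = f"{lon},{lat},0"
    return ET.tostring(kml, encoding="unicode")

doc = inundation_placemark_kml("Site A", 120.2, 23.0, 1.2)
```

    Zipping one or more such KML documents into a `.kmz` archive yields a file that Google Earth opens directly.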

  16. A method for real-time visual stimulus selection in the study of cortical object perception.

    PubMed

    Leeds, Daniel D; Tarr, Michael J

    2016-06-01

    The properties utilized by visual object perception in the mid- and high-level ventral visual pathway are poorly understood. To better establish and explore possible models of these properties, we adopt a data-driven approach in which we repeatedly interrogate neural units using functional Magnetic Resonance Imaging (fMRI) to establish each unit's image selectivity. This approach to imaging necessitates a search through a broad space of stimulus properties using a limited number of samples. To more quickly identify the complex visual features underlying human cortical object perception, we implemented a new functional magnetic resonance imaging protocol in which visual stimuli are selected in real-time based on BOLD responses to recently shown images. Two variations of this protocol were developed, one relying on natural object stimuli and a second based on synthetic object stimuli, both embedded in feature spaces based on the complex visual properties of the objects. During fMRI scanning, we continuously controlled stimulus selection in the context of a real-time search through these image spaces in order to maximize neural responses across pre-determined 1 cm(3) brain regions. Elsewhere we have reported the patterns of cortical selectivity revealed by this approach (Leeds et al., 2014). In contrast, here our objective is to present more detailed methods and explore the technical and biological factors influencing the behavior of our real-time stimulus search. We observe that: 1) searches converged more reliably when exploring a more precisely parameterized space of synthetic objects; 2) real-time estimation of cortical responses to stimuli is reasonably consistent; 3) search behavior was acceptably robust to delays in stimulus displays and subject motion effects. Overall, our results indicate that real-time fMRI methods may provide a valuable platform for continuing study of localized neural selectivity, both for visual object representation and beyond.
Copyright © 2016 Elsevier Inc. All rights reserved.

  17. A method for real-time visual stimulus selection in the study of cortical object perception

    PubMed Central

    Leeds, Daniel D.; Tarr, Michael J.

    2016-01-01

    The properties utilized by visual object perception in the mid- and high-level ventral visual pathway are poorly understood. To better establish and explore possible models of these properties, we adopt a data-driven approach in which we repeatedly interrogate neural units using functional Magnetic Resonance Imaging (fMRI) to establish each unit's image selectivity. This approach to imaging necessitates a search through a broad space of stimulus properties using a limited number of samples. To more quickly identify the complex visual features underlying human cortical object perception, we implemented a new functional magnetic resonance imaging protocol in which visual stimuli are selected in real-time based on BOLD responses to recently shown images. Two variations of this protocol were developed, one relying on natural object stimuli and a second based on synthetic object stimuli, both embedded in feature spaces based on the complex visual properties of the objects. During fMRI scanning, we continuously controlled stimulus selection in the context of a real-time search through these image spaces in order to maximize neural responses across predetermined 1 cm3 brain regions. Elsewhere we have reported the patterns of cortical selectivity revealed by this approach (Leeds 2014). In contrast, here our objective is to present more detailed methods and explore the technical and biological factors influencing the behavior of our real-time stimulus search. We observe that: 1) Searches converged more reliably when exploring a more precisely parameterized space of synthetic objects; 2) Real-time estimation of cortical responses to stimuli is reasonably consistent; 3) Search behavior was acceptably robust to delays in stimulus displays and subject motion effects. Overall, our results indicate that real-time fMRI methods may provide a valuable platform for continuing study of localized neural selectivity, both for visual object representation and beyond. PMID:26973168

  18. Time Series Data Visualization in World Wide Telescope

    NASA Astrophysics Data System (ADS)

    Fay, J.

    WorldWide Telescope provides a rich set of time series visualizations for both archival and real-time data. WWT consists of interactive desktop tools for immersive visualization and HTML5 web-based controls that can be utilized in customized web pages. WWT supports a range of display options including full dome, power walls, stereo, and virtual reality headsets.

  19. Real-time dynamic display of registered 4D cardiac MR and ultrasound images using a GPU

    NASA Astrophysics Data System (ADS)

    Zhang, Q.; Huang, X.; Eagleson, R.; Guiraudon, G.; Peters, T. M.

    2007-03-01

    In minimally invasive image-guided surgical interventions, different imaging modalities, such as magnetic resonance imaging (MRI), computed tomography (CT), and real-time three-dimensional (3D) ultrasound (US), can provide complementary, multi-spectral image information. Multimodality dynamic image registration is a well-established approach that permits real-time diagnostic information to be enhanced by placing lower-quality real-time images within a high quality anatomical context. For the guidance of cardiac procedures, it would be valuable to register dynamic MRI or CT with intraoperative US. However, in practice, either the high computational cost prohibits such real-time visualization of volumetric multimodal images in a real-world medical environment, or else the resulting image quality is not satisfactory for accurate guidance during the intervention. Modern graphics processing units (GPUs) provide the programmability, parallelism and increased computational precision to begin to address this problem. In this work, we first outline our research on dynamic 3D cardiac MR and US image acquisition, real-time dual-modality registration and US tracking. Then we describe image processing and optimization techniques for 4D (3D + time) cardiac image real-time rendering. We also present our multimodality 4D medical image visualization engine, which directly runs on a GPU in real-time by exploiting the advantages of the graphics hardware. In addition, techniques such as multiple transfer functions for different imaging modalities, dynamic texture binding, advanced texture sampling and multimodality image compositing are employed to facilitate the real-time display and manipulation of the registered dual-modality dynamic 3D MR and US cardiac datasets.
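
    The per-modality transfer functions and multimodality compositing mentioned above can be sketched on the CPU with NumPy. On the GPU the lookup table would live in a 1-D texture and the blend in a shader; the lookup-table shape and "over" operator below are standard volume-rendering conventions, not details taken from the paper.

```python
import numpy as np

def apply_transfer_fn(intensities, lut):
    """Map scalar intensities (0-255) to RGBA through a per-modality
    lookup table, emulating a GPU 1-D transfer-function texture."""
    return lut[np.asarray(intensities, dtype=np.uint8)]

def composite_over(front_rgba, back_rgba):
    """'Over' alpha compositing of two RGBA images (floats in 0-1),
    as used when blending a registered US slice over an MR slice."""
    fa = front_rgba[..., 3:4]
    rgb = front_rgba[..., :3] * fa + back_rgba[..., :3] * (1.0 - fa)
    a = fa + back_rgba[..., 3:4] * (1.0 - fa)
    return np.concatenate([rgb, a], axis=-1)

# Example: a tiny transfer function that shows only intensity 10 as red
lut = np.zeros((256, 4), dtype=np.float32)
lut[10] = [1.0, 0.0, 0.0, 1.0]
rgba = apply_transfer_fn(np.array([[10, 0], [0, 10]]), lut)
```

    Using a separate lookup table per modality is what lets one rendering pass assign, say, grayscale to MR and a warm colormap to US before the blend.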

  20. Real-time visual communication to aid disaster recovery in a multi-segment hybrid wireless networking system

    NASA Astrophysics Data System (ADS)

    Al Hadhrami, Tawfik; Wang, Qi; Grecos, Christos

    2012-06-01

    When natural disasters or other large-scale incidents occur, obtaining accurate and timely information on the developing situation is vital to effective disaster recovery operations. High-quality video streams and high-resolution images, if available in real time, would provide an invaluable source of current situation reports to the incident management team. Meanwhile, a disaster often causes significant damage to the communications infrastructure. Therefore, another essential requirement for disaster management is the ability to rapidly deploy a flexible incident area communication network. Such a network would facilitate the transmission of real-time video streams and still images from the disrupted area to remote command and control locations. In this paper, a comprehensive end-to-end video/image transmission system between an incident area and a remote control centre is proposed and implemented, and its performance is experimentally investigated. In this study a hybrid multi-segment communication network is designed that seamlessly integrates terrestrial wireless mesh networks (WMNs), distributed wireless visual sensor networks (WSNs), an airborne platform with video camera balloons, and a Digital Video Broadcasting-Satellite (DVB-S) system. By carefully integrating all of these rapidly deployable, interworking and collaborative networking technologies, we can fully exploit the joint benefits provided by WMNs, WSNs, balloon camera networks and DVB-S for real-time video streaming and image delivery in emergency situations among the disaster-hit area, the remote control centre and the rescue teams in the field. The whole proposed system is implemented in a proven simulator. Through extensive simulations, the real-time visual communication performance of this integrated system has been numerically evaluated, towards a more in-depth understanding of how to support high-quality visual communications in such a demanding context.

  1. Integration for navigation on the UMASS mobile perception lab

    NASA Technical Reports Server (NTRS)

    Draper, Bruce; Fennema, Claude; Rochwerger, Benny; Riseman, Edward; Hanson, Allen

    1994-01-01

    Integration of real-time visual procedures for use on the Mobile Perception Lab (MPL) was presented. The MPL is an autonomous vehicle designed for testing visually guided behavior. Two critical areas of focus in the system design were data storage/exchange and process control. The Intermediate Symbolic Representation (ISR3) supported data storage and exchange, and the MPL script monitor provided process control. Resource allocation, inter-process communication, and real-time control are difficult problems which must be solved in order to construct strong autonomous systems.

  2. Real-time computer-based visual feedback improves visual acuity in downbeat nystagmus - a pilot study.

    PubMed

    Teufel, Julian; Bardins, S; Spiegel, Rainer; Kremmyda, O; Schneider, E; Strupp, M; Kalla, R

    2016-01-04

    Patients with downbeat nystagmus syndrome suffer from oscillopsia, which leads to unstable visual perception and therefore impaired visual acuity. The aim of this study was to use real-time computer-based visual feedback to compensate for the destabilizing slow-phase eye movements. The patients sat in front of a computer screen with the head fixed on a chin rest. Eye movements were recorded by an eye tracking system (EyeSeeCam®). We tested visual acuity with a fixed Landolt C (static condition) and during a real-time feedback-driven condition (dynamic) in gaze straight ahead and in (20°) sideward gaze. In the dynamic condition, the Landolt C moved according to the slow-phase eye velocity of the downbeat nystagmus. The Shapiro-Wilk test was used to test for normal distribution and one-way ANOVA for comparison. Ten patients with downbeat nystagmus were included in the study. Median age was 76 years and the median duration of symptoms was 6.3 years (SD +/- 3.1 y). The mean slow-phase velocity was moderate during gaze straight ahead (1.44°/s, SD +/- 1.18°/s) and increased significantly in sideward gaze (mean left 3.36°/s; right 3.58°/s). In gaze straight ahead, we found no difference between the static and feedback-driven conditions. In sideward gaze, visual acuity improved in five out of ten subjects during the feedback-driven condition (p = 0.043). This study provides proof of concept that non-invasive real-time computer-based visual feedback can compensate for the slow-phase velocity (SPV) in downbeat nystagmus (DBN). Real-time visual feedback may therefore be a promising aid for patients suffering from oscillopsia and impaired on-screen text reading. Recent technological advances in the area of virtual reality displays might soon render this approach feasible in fully mobile settings.
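
    The feedback rule at the heart of the dynamic condition is simple: shift the optotype by the same displacement the eye drifts each frame, so the retinal image of the target stays put. The toy simulation below illustrates that idea under strong simplifying assumptions (constant slow-phase velocity, no quick phases, perfect tracking); the numbers are illustrative, not the study's data.

```python
def max_retinal_slip(frames, spv_deg_s, dt_s, feedback=True):
    """Simulate eye drift at a constant slow-phase velocity (deg/s) and
    return the peak target-vs-eye misalignment (deg) over the trial.

    With feedback on, the target is shifted by the same per-frame
    displacement as the eye, so the relative position never changes.
    """
    eye = 0.0     # eye position, degrees
    target = 0.0  # on-screen target position, degrees
    worst = 0.0
    for _ in range(frames):
        eye += spv_deg_s * dt_s          # slow drift of the eye
        if feedback:
            target += spv_deg_s * dt_s   # move the Landolt C with the eye
        worst = max(worst, abs(target - eye))
    return worst

# e.g. 100 frames at 60 Hz with a 3.5 deg/s slow phase
slip_on = max_retinal_slip(100, 3.5, 1 / 60, feedback=True)
slip_off = max_retinal_slip(100, 3.5, 1 / 60, feedback=False)
```

    With feedback the slip stays at zero in this idealized model, while without it the misalignment grows linearly with trial duration, which is the source of oscillopsia.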

  3. Augmented Virtuality: A Real-time Process for Presenting Real-world Visual Sensory Information in an Immersive Virtual Environment for Planetary Exploration

    NASA Astrophysics Data System (ADS)

    McFadden, D.; Tavakkoli, A.; Regenbrecht, J.; Wilson, B.

    2017-12-01

    Virtual Reality (VR) and Augmented Reality (AR) applications have recently seen an impressive growth, thanks to the advent of commercial Head Mounted Displays (HMDs). This new visualization era has opened the possibility of presenting researchers from multiple disciplines with data visualization techniques not possible via traditional 2D screens. In a purely VR environment researchers are presented with the visual data in a virtual environment, whereas in a purely AR application, a piece of virtual object is projected into the real world with which researchers could interact. There are several limitations to the purely VR or AR application when taken within the context of remote planetary exploration. For example, in a purely VR environment, contents of the planet surface (e.g. rocks, terrain, or other features) should be created off-line from a multitude of images using image processing techniques to generate 3D mesh data that will populate the virtual surface of the planet. This process usually takes a tremendous amount of computational resources and cannot be delivered in real-time. As an alternative, video frames may be superimposed on the virtual environment to save processing time. However, such rendered video frames will lack 3D visual information -i.e. depth information. In this paper, we present a technique to utilize a remotely situated robot's stereoscopic cameras to provide a live visual feed from the real world into the virtual environment in which planetary scientists are immersed. Moreover, the proposed technique will blend the virtual environment with the real world in such a way as to preserve both the depth and visual information from the real world while allowing for the sensation of immersion when the entire sequence is viewed via an HMD such as Oculus Rift. The figure shows the virtual environment with an overlay of the real-world stereoscopic video being presented in real-time into the virtual environment. 
Notice the preservation of the object's shape, shadows, and depth information. The distortions shown in the image are due to the rendering of the stereoscopic data into a 2D image for the purposes of taking screenshots.

  4. History of visual systems in the Systems Engineering Simulator

    NASA Technical Reports Server (NTRS)

    Christianson, David C.

    1989-01-01

    The Systems Engineering Simulator (SES) houses a variety of real-time computer generated visual systems. The earliest machine dates from the mid-1960's and is one of the first real-time graphics systems in the world. The latest acquisition is the state-of-the-art Evans and Sutherland CT6. Between the span of time from the mid-1960's to the late 1980's, tremendous strides have been made in the real-time graphics world. These strides include advances in both software and hardware engineering. The purpose is to explore the history of the development of these real-time computer generated image systems from the first machine to the present. Hardware advances as well as software algorithm changes are presented. This history is not only quite interesting but also provides us with a perspective with which we can look backward and forward.

  5. High-power graphic computers for visual simulation: a real-time--rendering revolution

    NASA Technical Reports Server (NTRS)

    Kaiser, M. K.

    1996-01-01

    Advances in high-end graphics computers in the past decade have made it possible to render visual scenes of incredible complexity and realism in real time. These new capabilities make it possible to manipulate and investigate the interactions of observers with their visual world in ways once only dreamed of. This paper reviews how these developments have affected two preexisting domains of behavioral research (flight simulation and motion perception) and have created a new domain (virtual environment research) which provides tools and challenges for the perceptual psychologist. Finally, the current limitations of these technologies are considered, with an eye toward how perceptual psychologists might shape future developments.

  6. Virtual Diagnostic Interface: Aerospace Experimentation in the Synthetic Environment

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; McCrea, Andrew C.

    2009-01-01

    The Virtual Diagnostics Interface (ViDI) methodology combines two-dimensional image processing and three-dimensional computer modeling to provide comprehensive in-situ visualizations commonly utilized for in-depth planning of wind tunnel and flight testing, real-time visualization of experimental data, and unique merging of experimental and computational data sets in both real-time and post-test analysis. The preparation of such visualizations encompasses interactive three-dimensional environments, traditional and state-of-the-art image processing techniques, database management, and the development of toolsets with user-friendly graphical user interfaces. ViDI has been under development at the NASA Langley Research Center for over 15 years and has a long track record of providing unique and insightful solutions for a wide variety of experimental testing techniques and for validation of computational simulations. This report addresses the various aspects of ViDI and how it has been applied to test programs as varied as NASCAR race car testing in NASA wind tunnels and real-time operations during Space Shuttle aerodynamic flight testing. In addition, future trends and applications are outlined.

  7. Virtual Diagnostic Interface: Aerospace Experimentation in the Synthetic Environment

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; McCrea, Andrew C.

    2010-01-01

    The Virtual Diagnostics Interface (ViDI) methodology combines two-dimensional image processing and three-dimensional computer modeling to provide comprehensive in-situ visualizations commonly utilized for in-depth planning of wind tunnel and flight testing, real-time visualization of experimental data, and unique merging of experimental and computational data sets in both real-time and post-test analysis. The preparation of such visualizations encompasses interactive three-dimensional environments, traditional and state-of-the-art image processing techniques, database management, and the development of toolsets with user-friendly graphical user interfaces. ViDI has been under development at the NASA Langley Research Center for over 15 years and has a long track record of providing unique and insightful solutions for a wide variety of experimental testing techniques and for validation of computational simulations. This report addresses the various aspects of ViDI and how it has been applied to test programs as varied as NASCAR race car testing in NASA wind tunnels and real-time operations during Space Shuttle aerodynamic flight testing. In addition, future trends and applications are outlined.

  8. Fast interactive real-time volume rendering of real-time three-dimensional echocardiography: an implementation for low-end computers

    NASA Technical Reports Server (NTRS)

    Saracino, G.; Greenberg, N. L.; Shiota, T.; Corsi, C.; Lamberti, C.; Thomas, J. D.

    2002-01-01

    Real-time three-dimensional echocardiography (RT3DE) is an innovative cardiac imaging modality. However, partly due to a lack of user-friendly software, RT3DE has not been widely accepted as a clinical tool. The objective of this study was to develop and implement a fast, interactive volume renderer for RT3DE datasets designed for a clinical environment, where speed and simplicity are not secondary to accuracy. Thirty-six patients (20 regurgitation, 8 normal, 8 cardiomyopathy) were imaged using RT3DE. Using our newly developed software, all 3D data sets were rendered in real time throughout the cardiac cycle, and assessment of cardiac function and pathology was performed for each case. The real-time interactive volume visualization system is user-friendly and instantly provides consistent and reliable 3D images without expensive workstations or dedicated hardware. We believe that this novel tool can be used clinically for dynamic visualization of cardiac anatomy.

  9. Remote-controlled pan, tilt, zoom cameras at Kilauea and Mauna Loa Volcanoes, Hawai'i

    USGS Publications Warehouse

    Hoblitt, Richard P.; Orr, Tim R.; Castella, Frederic; Cervelli, Peter F.

    2008-01-01

    Lists of important volcano-monitoring disciplines usually include seismology, geodesy, and gas geochemistry. Visual monitoring - the essence of volcanology - is usually not mentioned. Yet observations of the outward appearance of a volcano provide data as important as that provided by the other disciplines. The eye was almost certainly the first volcano-monitoring tool used by early man. Early volcanology was mostly descriptive and was based on careful visual observations of volcanoes. There is still no substitute for the eye of an experienced volcanologist. Today, scientific instruments replace or augment our senses as monitoring tools because instruments are faster and more sensitive, work tirelessly day and night, keep better records, operate in hazardous environments, do not generate lawsuits when damaged or destroyed, and in most cases are cheaper. Furthermore, instruments are capable of detecting phenomena that are outside the reach of our senses. The human eye is now augmented by the camera. Sequences of timed images provide a record of visual phenomena that occur on and above the surface of volcanoes. Photographic monitoring is a fundamental monitoring tool; image sequences can often provide the basis for interpreting other data streams. Monitoring data are most useful when they are generated and available for analysis in real time or near real time. This report describes the current (as of 2006) system for real-time photograph acquisition and transmission from remote sites on Kilauea and Mauna Loa volcanoes to the U.S. Geological Survey Hawaiian Volcano Observatory (HVO). It also describes how the photographs are archived and analyzed. In addition to providing system documentation for HVO, we hope that the report will prove useful as a practical guide to the construction of a high-bandwidth network for the telemetry of real-time data from remote locations.

  10. Time-dependent transition density matrix for visualizing charge-transfer excitations in photoexcited organic donor-acceptor systems

    NASA Astrophysics Data System (ADS)

    Li, Yonghui; Ullrich, Carsten

    2013-03-01

    The time-dependent transition density matrix (TDM) is a useful tool to visualize and interpret the induced charges and electron-hole coherences of excitonic processes in large molecules. Combined with time-dependent density functional theory on a real-space grid (as implemented in the octopus code), the TDM is a computationally viable visualization tool for optical excitation processes in molecules. It provides real-time maps of particles and holes, which give information on excitations, in particular those with charge-transfer character, that cannot be obtained from the density alone. Illustrations of the TDM and comparisons with standard density-difference plots will be shown for photoexcited organic donor-acceptor molecules. This work is supported by NSF Grant DMR-1005651.

  11. Integration of real-time 3D capture, reconstruction, and light-field display

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Pei, Renjing; Liu, Yongchun; Zhang, Xiao

    2015-03-01

    Effective integration of 3D acquisition, reconstruction (modeling) and display technologies into a seamless system provides an augmented experience of visualizing and analyzing real objects and scenes with realistic 3D sensation. Applications can be found in medical imaging, gaming, virtual or augmented reality and hybrid simulations. Although 3D acquisition, reconstruction, and display technologies have gained significant momentum in recent years, there has been little attention on synergistically combining these components into an "end-to-end" 3D visualization system. We designed, built and tested an integrated 3D visualization system that is able to capture 3D light-field images in real time, perform 3D reconstruction to build a 3D model of the objects, and display the 3D model on a large autostereoscopic screen. In this article, we present our system architecture and component designs, hardware/software implementations, and experimental results. We elaborate on our recent progress on sparse camera array light-field 3D acquisition, real-time dense 3D reconstruction, and autostereoscopic multi-view 3D display. A prototype is finally presented with test results to illustrate the effectiveness of our proposed integrated 3D visualization system.

  12. Handheld real-time volumetric 3-D gamma-ray imaging

    NASA Astrophysics Data System (ADS)

    Haefner, Andrew; Barnowski, Ross; Luke, Paul; Amman, Mark; Vetter, Kai

    2017-06-01

    This paper presents the concept of real-time fusion of gamma-ray imaging and visual scene data for a hand-held mobile Compton imaging system in 3-D. The ability to obtain and integrate both gamma-ray and scene data from a mobile platform enables improved capabilities in the localization and mapping of radioactive materials. This not only enhances the ability to localize these materials, but it also provides important contextual information of the scene which once acquired can be reviewed and further analyzed subsequently. To demonstrate these concepts, the high-efficiency multimode imager (HEMI) is used in a hand-portable implementation in combination with a Microsoft Kinect sensor. This sensor, in conjunction with open-source software, provides the ability to create a 3-D model of the scene and to track the position and orientation of HEMI in real-time. By combining the gamma-ray data and visual data, accurate 3-D maps of gamma-ray sources are produced in real-time. This approach is extended to map the location of radioactive materials within objects with unknown geometry.

  13. A Prototype Visualization of Real-time River Drainage Network Response to Rainfall

    NASA Astrophysics Data System (ADS)

    Demir, I.; Krajewski, W. F.

    2011-12-01

    The Iowa Flood Information System (IFIS) is a web-based platform developed by the Iowa Flood Center (IFC) to provide access to and visualization of flood inundation maps, real-time flood conditions, short-term and seasonal flood forecasts, and other flood-related data for communities in Iowa. The key element of the system's architecture is the notion of community: the locations of communities near streams and rivers define basin boundaries. The IFIS streams rainfall data from NEXRAD radar and provides three interfaces, including animations of rainfall intensity, daily rainfall totals, and rainfall accumulations for the past 14 days in Iowa. A real-time interactive visualization interface was developed using past rainfall intensity data. The interface creates community-based rainfall products on demand, using the watershed boundaries of each community as a mask. Each individual rainfall pixel is tracked along the drainage network, and pixels draining to the same location are accumulated. The interface loads recent rainfall data at five-minute intervals and combines it with current values. The latest web technologies, including the HTML5 Canvas and JavaScript, are utilized, and the interface's performance is optimized to run smoothly on modern web browsers. The interface controls allow users to change internal parameters of the system and the operating conditions of the animation. The interface will help communities understand the effects of rainfall on water transport in stream and river networks and make better-informed decisions regarding the threat of floods. This presentation provides an overview of a unique visualization interface and discusses future plans for real-time dynamic presentations of streamflow forecasting.
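The per-pixel routing described above (each rainfall pixel is walked down the drainage network and its depth accumulated at every pixel it drains through) can be sketched as follows. This is a minimal illustration with invented names, not IFIS code; the actual interface implements this in JavaScript on the HTML5 Canvas:

```python
# Sketch: accumulate per-pixel rainfall down a drainage network.
# Each pixel drains to exactly one downstream pixel; None marks an outlet.
def accumulate_rainfall(rain, downstream):
    """rain: {pixel: depth}; downstream: {pixel: next_pixel_or_None}.
    Returns the total depth draining through each pixel."""
    total = {p: 0.0 for p in rain}
    for pixel, depth in rain.items():
        node = pixel
        while node is not None:      # walk the flow path to the outlet
            total[node] += depth     # this rainfall passes every pixel below
            node = downstream[node]
    return total

# Tiny 3-pixel network: A -> B -> C (outlet)
rain = {"A": 2.0, "B": 1.0, "C": 0.5}
downstream = {"A": "B", "B": "C", "C": None}
acc = accumulate_rainfall(rain, downstream)
# acc["C"] collects everything upstream: 2.0 + 1.0 + 0.5 = 3.5
```

A real grid would encode `downstream` from D8 flow directions derived from a digital elevation model, but the accumulation logic is the same.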

  14. A Web-based Data Intensive Visualization of Real-time River Drainage Network Response to Rainfall

    NASA Astrophysics Data System (ADS)

    Demir, I.; Krajewski, W. F.

    2012-04-01

    The Iowa Flood Information System (IFIS) is a web-based platform developed by the Iowa Flood Center (IFC) to provide access to and visualization of flood inundation maps, real-time flood conditions, short-term and seasonal flood forecasts, and other flood-related data for communities in Iowa. The key element of the system's architecture is the notion of community: the locations of communities near streams and rivers define basin boundaries. The IFIS streams rainfall data from NEXRAD radar and provides three interfaces, including animations of rainfall intensity, daily rainfall totals, and rainfall accumulations for the past 14 days in Iowa. A real-time interactive visualization interface was developed using past rainfall intensity data. The interface creates community-based rainfall products on demand, using the watershed boundaries of each community as a mask. Each individual rainfall pixel is tracked along the drainage network, and pixels draining to the same location are accumulated. The interface loads recent rainfall data at five-minute intervals and combines it with current values. The latest web technologies, including the HTML5 Canvas and JavaScript, are utilized, and the interface's performance is optimized to run smoothly on modern web browsers. The interface controls allow users to change internal parameters of the system and the operating conditions of the animation. The interface will help communities understand the effects of rainfall on water transport in stream and river networks and make better-informed decisions regarding the threat of floods. This presentation provides an overview of a unique visualization interface and discusses future plans for real-time dynamic presentations of streamflow forecasting.

  15. Real-time recording and classification of eye movements in an immersive virtual environment.

    PubMed

    Diaz, Gabriel; Cooper, Joseph; Kit, Dmitry; Hayhoe, Mary

    2013-10-10

    Despite the growing popularity of virtual reality environments, few laboratories are equipped to investigate eye movements within these environments. This primer is intended to reduce the time and effort required to incorporate eye-tracking equipment into a virtual reality environment. We discuss issues related to the initial startup and provide algorithms necessary for basic analysis. Algorithms are provided for the calculation of gaze angle within a virtual world using a monocular eye-tracker in a three-dimensional environment. In addition, we provide algorithms for the calculation of the angular distance between the gaze and a relevant virtual object and for the identification of fixations, saccades, and pursuit eye movements. Finally, we provide tools that temporally synchronize gaze data and the visual stimulus and enable real-time assembly of a video-based record of the experiment using the QuickTime MOV format, available at http://sourceforge.net/p/utdvrlibraries/. This record contains the visual stimulus, the gaze cursor, and associated numerical data and can be used for data export, visual inspection, and validation of calculated gaze movements.
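The angular-distance computation described above, between the gaze direction and the direction to a virtual object, reduces to the angle between two 3D vectors; event classification is then commonly done by thresholding angular velocity. The sketch below uses names and a threshold of our own choosing, not the paper's library:

```python
import math

def angular_distance(gaze, target):
    """Angle in degrees between two 3D direction vectors in the same frame."""
    dot = sum(g * t for g, t in zip(gaze, target))
    ng = math.sqrt(sum(g * g for g in gaze))
    nt = math.sqrt(sum(t * t for t in target))
    cos_a = max(-1.0, min(1.0, dot / (ng * nt)))  # clamp for numeric safety
    return math.degrees(math.acos(cos_a))

def classify(angular_velocities, saccade_threshold_deg_s=65.0):
    """Crude fixation/saccade split by angular velocity (illustrative threshold)."""
    return ["saccade" if v > saccade_threshold_deg_s else "fixation"
            for v in angular_velocities]

# Gaze straight ahead vs. an object 45 degrees to the right:
theta = angular_distance((0.0, 0.0, 1.0), (1.0, 0.0, 1.0))
labels = classify([10.0, 300.0])
```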

  16. Real-time recording and classification of eye movements in an immersive virtual environment

    PubMed Central

    Diaz, Gabriel; Cooper, Joseph; Kit, Dmitry; Hayhoe, Mary

    2013-01-01

    Despite the growing popularity of virtual reality environments, few laboratories are equipped to investigate eye movements within these environments. This primer is intended to reduce the time and effort required to incorporate eye-tracking equipment into a virtual reality environment. We discuss issues related to the initial startup and provide algorithms necessary for basic analysis. Algorithms are provided for the calculation of gaze angle within a virtual world using a monocular eye-tracker in a three-dimensional environment. In addition, we provide algorithms for the calculation of the angular distance between the gaze and a relevant virtual object and for the identification of fixations, saccades, and pursuit eye movements. Finally, we provide tools that temporally synchronize gaze data and the visual stimulus and enable real-time assembly of a video-based record of the experiment using the QuickTime MOV format, available at http://sourceforge.net/p/utdvrlibraries/. This record contains the visual stimulus, the gaze cursor, and associated numerical data and can be used for data export, visual inspection, and validation of calculated gaze movements. PMID:24113087

  17. SU-F-T-91: Development of Real Time Abdominal Compression Force (ACF) Monitoring System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, T; Kim, D; Kang, S

    Purpose: Hard-plate-based abdominal compression is known to be effective, but no explicit method exists to quantify abdominal compression force (ACF) and maintain the proper ACF throughout the whole procedure. In addition, even with compression, 4D CT is necessary to manage residual motion, but 4D CT is often not possible due to reduced surrogating sensitivity. In this study, we developed and evaluated a system that both monitors ACF in real time and provides a surrogating signal even under compression. The system can also provide visual biofeedback. Methods: The system developed consists of a compression plate, an ACF monitoring unit, and a visual-biofeedback device. The ACF monitoring unit contains a thin air balloon the size of the compression plate and a gas pressure sensor. The unit is attached to the bottom of the plate and is thus placed between the plate and the patient when compression is applied, where it detects compression pressure. For the reliability test, 3 volunteers were directed to take several different breathing patterns, and the ACF variation was compared with the respiratory flow and the external respiratory signal to assure that the system provides corresponding behavior. In addition, guiding waveforms were generated based on free breathing and then applied to evaluate the effectiveness of visual biofeedback. Results: We could monitor ACF variation in real time and confirmed that the data correlated with both the respiratory flow data and the external respiratory signal. Even under abdominal compression, it was possible to have the subjects successfully follow the guide patterns using the visual-biofeedback system. Conclusion: The developed real-time ACF monitoring system was found to be functional as intended and consistent.
    With the capability of both providing a real-time surrogating signal under compression and enabling visual biofeedback, the system should improve the quality of respiratory motion management in radiation therapy. This research was supported by the Mid-career Researcher Program through NRF funded by the Ministry of Science, ICT & Future Planning of Korea (NRF-2014R1A2A1A10050270) and by the Radiation Technology R&D program through the National Research Foundation of Korea funded by the Ministry of Science, ICT & Future Planning (No. 2013M2A2A7038291).

  18. Real-time Position Based Population Data Analysis and Visualization Using Heatmap for Hazard Emergency Response

    NASA Astrophysics Data System (ADS)

    Ding, R.; He, T.

    2017-12-01

    With the increased popularity of mobile applications and services, there has been a growing demand for more advanced mobile technologies that utilize real-time Location Based Services (LBS) data to support natural hazard response efforts. Compared to traditional sources like the census bureau, which often can only provide historical and static data, an LBS service can provide more current data to drive a real-time natural hazard response system that more accurately processes and assesses issues such as population density in areas impacted by a hazard. However, manually preparing or preprocessing the data to suit the needs of a particular application is time-consuming. This research aims to implement a population heatmap visual analytics system based on real-time data for natural disaster emergency management. The system comprises a three-layered architecture: data collection, data processing, and visual analysis. Real-time, location-based data meeting certain aggregation conditions are collected from multiple sources across the Internet, then processed and stored in a cloud-based data store. Parallel computing is utilized to provide fast and accurate access to the pre-processed population data based on criteria such as the disaster event, and to generate a location-based population heatmap as well as other types of visual digital outputs using auxiliary analysis tools. At present, a prototype system has been developed that geographically covers the entire region of China and combines population heatmaps with data from the Earthquake Catalogs database. Preliminary results indicate that the generation of dynamic population density heatmaps with the prototype system has effectively supported rapid earthquake emergency rescue and evacuation efforts, as well as helping responders and decision makers evaluate and assess earthquake damage.
    Correlation analyses revealed that the aggregation and movement of people depended on various factors, including the time of earthquake occurrence and the location of the epicenter. This research hopes to build upon the success of the prototype system in order to improve and extend it to support the analysis of earthquakes and other types of natural hazard events.
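The core of any population heatmap is binning point locations onto a regular grid and rendering the counts. A minimal sketch of that binning step, with invented parameter names (not the prototype system's code):

```python
# Sketch: count points per grid cell for heatmap rendering.
# x0, y0: grid origin; cell: cell size; nx, ny: grid dimensions.
def grid_density(points, x0, y0, cell, nx, ny):
    counts = [[0] * nx for _ in range(ny)]
    for x, y in points:
        i = int((x - x0) // cell)   # column index
        j = int((y - y0) // cell)   # row index
        if 0 <= i < nx and 0 <= j < ny:
            counts[j][i] += 1       # points outside the grid are ignored
    return counts

pts = [(0.2, 0.3), (0.4, 0.1), (1.5, 0.5)]   # e.g. device positions
density = grid_density(pts, 0.0, 0.0, 1.0, 2, 1)
# density == [[2, 1]]: two points in the first cell, one in the second
```

A production system would do this in parallel over sharded data and map counts through a color ramp; the per-cell counting itself is unchanged.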

  19. Visualizing TZVOLCANO GNSS Data with Grafana via the EarthCube Cyberinfrastructure CHORDS: an Example of Dashboard Creation for the Geosciences

    NASA Astrophysics Data System (ADS)

    Nguyen, T. T.; Stamps, D. S.

    2017-12-01

    Visualizing societally relevant data in easy-to-comprehend formats is necessary for informed decision-making by non-scientist stakeholders. Despite scientists' efforts to inform the public, there continues to be a disconnect in information between stakeholders and scientists. Closing the gap in knowledge requires increased communication between the two groups, facilitated by models and data visualizations. In this work we use real-time streaming data from TZVOLCANO, a network of GNSS/GPS sensors that monitor the active volcano Ol Doinyo Lengai in Tanzania, as a test case for visualizing societally relevant data. Real-time data from TZVOLCANO are streamed into the US NSF Geodesy Facility UNAVCO archive (www.unavco.org), from which data are made available through the EarthCube cyberinfrastructure CHORDS (Cloud-Hosted Real-Time Data Services for the geosciences). CHORDS uses InfluxDB to make streaming data accessible in Grafana, open-source software that specializes in the display of time series. With over 350 downloadable "dashboards", Grafana is an emerging tool for data visualization. Creating user-friendly visualizations ("dashboards") for the TZVOLCANO GNSS/GPS data in Tanzania can help scientists and stakeholders communicate effectively so that informed decisions can be made about volcanic hazards during a time-sensitive crisis. Our use of Grafana's dashboards for one specific case study provides an example for other geoscientists to develop analogous visualizations, with the objectives of increasing the knowledge of the general public and facilitating a more informed decision-making process.
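Because CHORDS fronts an InfluxDB store, each time-series sample ultimately takes the form of an InfluxDB line-protocol record before Grafana can chart it. A sketch of formatting one hypothetical GNSS sample (the measurement and field names here are made up, not the TZVOLCANO schema):

```python
# Sketch: format a sample as InfluxDB line protocol:
#   measurement,tag=... field=...,field=... timestamp_ns
def to_line_protocol(measurement, tags, fields, ts_ns):
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

line = to_line_protocol(
    "gnss_position",                       # hypothetical measurement name
    {"station": "OLO1"},                   # hypothetical station tag
    {"east_mm": 12.4, "north_mm": -3.1},   # hypothetical displacement fields
    1500000000000000000,                   # timestamp in nanoseconds
)
# e.g. "gnss_position,station=OLO1 east_mm=12.4,north_mm=-3.1 1500000000000000000"
```

In a deployment, such lines are POSTed to the database's write endpoint; Grafana then queries the stored series to drive its dashboard panels.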

  20. Near-infrared intraoperative imaging during resection of an anterior mediastinal soft tissue sarcoma.

    PubMed

    Predina, Jarrod D; Newton, Andrew D; Desphande, Charuhas; Singhal, Sunil

    2018-01-01

    Sarcomas are rare malignancies that are generally treated with multimodal therapy protocols incorporating complete local resection, chemotherapy and radiation. Unfortunately, even with this aggressive approach, local recurrences are common. Near-infrared intraoperative imaging is a novel technology that provides real-time visual feedback that can improve identification of disease during resection. The presented study describes utilization of a near-infrared agent (indocyanine green) during resection of an anterior mediastinal sarcoma. Real-time fluorescent feedback provided visual information that helped the surgeon during tumor localization, margin assessment and dissection from mediastinal structures. This rapidly evolving technology may prove useful in patients with primary sarcomas arising from other locations or with other mediastinal neoplasms.

  1. Celeris: A GPU-accelerated open source software with a Boussinesq-type wave solver for real-time interactive simulation and visualization

    NASA Astrophysics Data System (ADS)

    Tavakkol, Sasan; Lynett, Patrick

    2017-08-01

    In this paper, we introduce an interactive coastal wave simulation and visualization software package called Celeris. Celeris is open-source software that requires minimal preparation to run on a Windows machine. The software solves the extended Boussinesq equations using a hybrid finite volume-finite difference method and supports moving shoreline boundaries. The simulation and visualization are performed on the GPU using Direct3D libraries, which enables the software to run faster than real time. Celeris provides a first-of-its-kind interactive modeling platform for coastal wave applications, and it supports simultaneous visualization with both photorealistic and colormapped rendering capabilities. We validate the software through comparison with three standard benchmarks for non-breaking and breaking waves.
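Celeris itself solves the extended Boussinesq equations on the GPU. As a much-simplified illustration of the explicit time stepping such a real-time solver performs, here is one step of a 1D linear shallow-water system on a staggered finite-difference grid (the depth, gravity, and grid values are assumed for the example; this is not Celeris code):

```python
# Sketch: one explicit step of the 1D linear shallow-water equations
#   d(eta)/dt = -H du/dx,   du/dt = -g d(eta)/dx
# eta holds n surface elevations; u holds n-1 velocities between them.
G, H = 9.81, 10.0   # gravity (m/s^2) and still-water depth (m), assumed

def step(eta, u, dx, dt):
    n = len(eta)
    # update velocities from the surface gradient...
    u_new = [u[i] - G * dt / dx * (eta[i + 1] - eta[i]) for i in range(n - 1)]
    # ...then interior elevations from the velocity divergence
    eta_new = list(eta)
    for i in range(1, n - 1):
        eta_new[i] = eta[i] - H * dt / dx * (u_new[i] - u_new[i - 1])
    return eta_new, u_new

eta = [0.0, 0.02, 0.0, 0.0, 0.0]   # small initial hump (m)
u = [0.0] * 4
# stability requires dt < dx / sqrt(G*H); here dx/c is about 0.5 s
eta, u = step(eta, u, dx=5.0, dt=0.1)
```

A real-time engine repeats such steps on the GPU fast enough to outpace the rendered wall-clock time, which is what makes interactive parameter changes possible.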

  2. VAST Challenge 2016: Streaming Visual Analytics

    DTIC Science & Technology

    2016-10-25

    understand rapidly evolving situations. To support such tasks, visual analytics solutions must move well beyond systems that simply provide real-time...received. Mini-Challenge 1: Design Challenge Mini-Challenge 1 focused on systems to support security and operational analytics at the Euybia...Challenge 1 was to solicit novel approaches for streaming visual analytics that push the boundaries for what constitutes a visual analytics system , and to

  3. Hybrid 2-D and 3-D Immersive and Interactive User Interface for Scientific Data Visualization

    DTIC Science & Technology

    2017-08-01

    visualization, 3-D interactive visualization, scientific visualization, virtual reality, real-time ray tracing...scientists to employ in the real world. Other than user-friendly software and hardware setup, scientists also need to be able to perform their usual...and scientific visualization communities mostly have different research priorities. For the VR community, the ability to support real-time user

  4. Effects of Real-Time Visual Feedback on Pre-Service Teachers' Singing

    ERIC Educational Resources Information Center

    Leong, S.; Cheng, L.

    2014-01-01

    This pilot study focuses on the use of real-time visual feedback technology (VFT) in vocal training. The empirical research has two aims: to ascertain the effectiveness of the real-time visual feedback software "Sing & See" in the vocal training of pre-service music teachers and the teachers' perspective on their experience with…

  5. Demons registration for in vivo and deformable laser scanning confocal endomicroscopy.

    PubMed

    Chiew, Wei-Ming; Lin, Feng; Seah, Hock Soon

    2017-09-01

    A critical effect found in noninvasive in vivo endomicroscopic imaging modalities is image distortions due to sporadic movement exhibited by living organisms. In three-dimensional confocal imaging, this effect results in a dataset that is tilted across deeper slices. Apart from that, the sequential flow of the imaging-processing pipeline restricts real-time adjustments due to the unavailability of information obtainable only from subsequent stages. To solve these problems, we propose an approach to render Demons-registered datasets as they are being captured, focusing on the coupling between registration and visualization. To improve the acquisition process, we also propose a real-time visual analytics tool, which complements the imaging pipeline and the Demons registration pipeline with useful visual indicators to provide real-time feedback for immediate adjustments. We highlight the problem of deformation within the visualization pipeline for object-ordered and image-ordered rendering. Visualizations of critical information including registration forces and partial renderings of the captured data are also presented in the analytics system. We demonstrate the advantages of the algorithmic design through experimental results with both synthetically deformed datasets and actual in vivo, time-lapse tissue datasets expressing natural deformations. Remarkably, this algorithm design is for embedded implementation in intelligent biomedical imaging instrumentation with customizable circuitry. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  6. Demons registration for in vivo and deformable laser scanning confocal endomicroscopy

    NASA Astrophysics Data System (ADS)

    Chiew, Wei Ming; Lin, Feng; Seah, Hock Soon

    2017-09-01

    A critical effect found in noninvasive in vivo endomicroscopic imaging modalities is image distortions due to sporadic movement exhibited by living organisms. In three-dimensional confocal imaging, this effect results in a dataset that is tilted across deeper slices. Apart from that, the sequential flow of the imaging-processing pipeline restricts real-time adjustments due to the unavailability of information obtainable only from subsequent stages. To solve these problems, we propose an approach to render Demons-registered datasets as they are being captured, focusing on the coupling between registration and visualization. To improve the acquisition process, we also propose a real-time visual analytics tool, which complements the imaging pipeline and the Demons registration pipeline with useful visual indicators to provide real-time feedback for immediate adjustments. We highlight the problem of deformation within the visualization pipeline for object-ordered and image-ordered rendering. Visualizations of critical information including registration forces and partial renderings of the captured data are also presented in the analytics system. We demonstrate the advantages of the algorithmic design through experimental results with both synthetically deformed datasets and actual in vivo, time-lapse tissue datasets expressing natural deformations. Remarkably, this algorithm design is for embedded implementation in intelligent biomedical imaging instrumentation with customizable circuitry.

  7. Real-time vision, tactile cues, and visual form agnosia: removing haptic feedback from a “natural” grasping task induces pantomime-like grasps

    PubMed Central

    Whitwell, Robert L.; Ganel, Tzvi; Byrne, Caitlin M.; Goodale, Melvyn A.

    2015-01-01

    Investigators study the kinematics of grasping movements (prehension) under a variety of conditions to probe visuomotor function in normal and brain-damaged individuals. “Natural” prehensile acts are directed at the goal object and are executed using real-time vision. Typically, they also entail the use of tactile, proprioceptive, and kinesthetic sources of haptic feedback about the object (“haptics-based object information”) once contact with the object has been made. Natural and simulated (pantomimed) forms of prehension are thought to recruit different cortical structures: patient DF, who has visual form agnosia following bilateral damage to her temporal-occipital cortex, loses her ability to scale her grasp aperture to the size of targets (“grip scaling”) when her prehensile movements are based on a memory of a target previewed 2 s before the cue to respond or when her grasps are directed towards a visible virtual target but she is denied haptics-based information about the target. In the first of two experiments, we show that when DF performs real-time pantomimed grasps towards a 7.5 cm displaced imagined copy of a visible object such that her fingers make contact with the surface of the table, her grip scaling is in fact quite normal. This finding suggests that real-time vision and terminal tactile feedback are sufficient to preserve DF’s grip scaling slopes. In the second experiment, we examined an “unnatural” grasping task variant in which a tangible target (along with any proxy such as the surface of the table) is denied (i.e., no terminal tactile feedback). To do this, we used a mirror-apparatus to present virtual targets with and without a spatially coincident copy for the participants to grasp. 
We compared the grasp kinematics from trials with and without terminal tactile feedback to a real-time pantomimed grasping task (one without tactile feedback) in which participants visualized a copy of the visible target, as instructed in our laboratory in the past. Compared to natural grasps, removing tactile feedback increased reaction time (RT), slowed the velocity of the reach, reduced in-flight grip aperture, increased the slopes relating grip aperture to target width, and reduced the final grip aperture (FGA). All of these effects were also observed in the real-time pantomime grasping task. These effects seem to be independent of those that arise from using the mirror in general, as we also compared grasps directed towards virtual targets to those directed at real ones viewed directly through a pane of glass. These comparisons showed that the grasps directed at virtual targets increased grip aperture, slowed the velocity of the reach, and reduced the slopes relating grip aperture to target width. Thus, using the mirror has real consequences for grasp kinematics, reflecting the importance of task-relevant sources of online visual information for the programming and updating of natural prehensile movements. Taken together, these results provide compelling support for the view that removing terminal tactile feedback, even when the grasps are target-directed, induces a switch from real-time visual control towards one that depends more on visual perception and cognitive supervision. Providing terminal tactile feedback and real-time visual information can evidently keep the dorsal visuomotor system operating normally for prehensile acts. PMID:25999834
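The "grip scaling slopes" discussed above are simply the least-squares slope relating grip aperture to target width across trials. A minimal sketch, using illustrative values rather than the study's data:

```python
# Sketch: estimating a grip-scaling slope, i.e. the linear slope relating
# grip aperture to target width, as used in prehension kinematics.
# The widths and apertures below are hypothetical, not data from the study.
import numpy as np

def grip_scaling_slope(target_width_mm, grip_aperture_mm):
    """Least-squares slope of grip aperture regressed on target width."""
    slope, intercept = np.polyfit(target_width_mm, grip_aperture_mm, deg=1)
    return slope, intercept

# Hypothetical trials: apertures overshoot width by a fixed safety margin,
# giving a slope near 1 (normal grip scaling).
widths = np.array([20.0, 30.0, 40.0, 50.0])
apertures = np.array([45.0, 55.0, 65.0, 75.0])  # width + 25 mm margin
slope, intercept = grip_scaling_slope(widths, apertures)
print(round(slope, 3), round(intercept, 1))  # 1.0 25.0
```

A flattened slope (well below 1) is what signals that aperture is no longer being scaled to target size.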

  8. Real-time vision, tactile cues, and visual form agnosia: removing haptic feedback from a "natural" grasping task induces pantomime-like grasps.

    PubMed

    Whitwell, Robert L; Ganel, Tzvi; Byrne, Caitlin M; Goodale, Melvyn A

    2015-01-01

    Investigators study the kinematics of grasping movements (prehension) under a variety of conditions to probe visuomotor function in normal and brain-damaged individuals. "Natural" prehensile acts are directed at the goal object and are executed using real-time vision. Typically, they also entail the use of tactile, proprioceptive, and kinesthetic sources of haptic feedback about the object ("haptics-based object information") once contact with the object has been made. Natural and simulated (pantomimed) forms of prehension are thought to recruit different cortical structures: patient DF, who has visual form agnosia following bilateral damage to her temporal-occipital cortex, loses her ability to scale her grasp aperture to the size of targets ("grip scaling") when her prehensile movements are based on a memory of a target previewed 2 s before the cue to respond or when her grasps are directed towards a visible virtual target but she is denied haptics-based information about the target. In the first of two experiments, we show that when DF performs real-time pantomimed grasps towards a 7.5 cm displaced imagined copy of a visible object such that her fingers make contact with the surface of the table, her grip scaling is in fact quite normal. This finding suggests that real-time vision and terminal tactile feedback are sufficient to preserve DF's grip scaling slopes. In the second experiment, we examined an "unnatural" grasping task variant in which a tangible target (along with any proxy such as the surface of the table) is denied (i.e., no terminal tactile feedback). To do this, we used a mirror-apparatus to present virtual targets with and without a spatially coincident copy for the participants to grasp. 
We compared the grasp kinematics from trials with and without terminal tactile feedback to a real-time pantomimed grasping task (one without tactile feedback) in which participants visualized a copy of the visible target, as instructed in our laboratory in the past. Compared to natural grasps, removing tactile feedback increased reaction time (RT), slowed the velocity of the reach, reduced in-flight grip aperture, increased the slopes relating grip aperture to target width, and reduced the final grip aperture (FGA). All of these effects were also observed in the real-time pantomime grasping task. These effects seem to be independent of those that arise from using the mirror in general, as we also compared grasps directed towards virtual targets to those directed at real ones viewed directly through a pane of glass. These comparisons showed that the grasps directed at virtual targets increased grip aperture, slowed the velocity of the reach, and reduced the slopes relating grip aperture to target width. Thus, using the mirror has real consequences for grasp kinematics, reflecting the importance of task-relevant sources of online visual information for the programming and updating of natural prehensile movements. Taken together, these results provide compelling support for the view that removing terminal tactile feedback, even when the grasps are target-directed, induces a switch from real-time visual control towards one that depends more on visual perception and cognitive supervision. Providing terminal tactile feedback and real-time visual information can evidently keep the dorsal visuomotor system operating normally for prehensile acts.

  9. Real-time data acquisition and control system for the measurement of motor and neural data

    PubMed Central

    Bryant, Christopher L.; Gandhi, Neeraj J.

    2013-01-01

This paper outlines a powerful, yet flexible real-time data acquisition and control system for use in the triggering and measurement of both analog and digital events. Built using the LabVIEW development architecture (version 7.1) and freely available, this system provides precisely timed auditory and visual stimuli to a subject while recording analog data and timestamps of neural activity retrieved from a window discriminator. The system utilizes the most recent real-time (RT) technology to provide not only a guaranteed data acquisition rate of 1 kHz, but also a guaranteed system response time of 1 ms, which is much more difficult to achieve. The system interface is Windows-based and easy to use, providing a host of configurable options for end-user customization. PMID:15698659
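The scheduling pattern behind a guaranteed acquisition rate can be sketched generically: run the loop against absolute deadlines so that timing error does not accumulate, and flag any overrun. This is only an illustration of the idea, not the LabVIEW RT implementation, and an interpreted language cannot actually guarantee 1 kHz hard real time:

```python
# Generic fixed-rate acquisition loop using absolute deadlines.
# A hard-RT system would treat any overrun as a fault; here we just count them.
import time

def acquire_fixed_rate(sample_fn, rate_hz, n_samples):
    period = 1.0 / rate_hz
    t0 = time.perf_counter()
    samples, overruns = [], 0
    for i in range(n_samples):
        deadline = t0 + (i + 1) * period   # absolute, so drift cannot accumulate
        samples.append(sample_fn())
        now = time.perf_counter()
        if now > deadline:
            overruns += 1                  # missed the deadline for this cycle
        else:
            time.sleep(deadline - now)     # wait out the rest of the period
    return samples, overruns

# Illustration at 100 Hz with a dummy sampling function.
data, missed = acquire_fixed_rate(lambda: 0.0, rate_hz=100, n_samples=10)
print(len(data), missed)
```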

  10. Subjective Estimation of Task Time and Task Difficulty of Simple Movement Tasks.

    PubMed

    Chan, Alan H S; Hoffmann, Errol R

    2017-01-01

It has been demonstrated in previous work that the same neural structures are used for both imagined and real movements. To provide a strong test of the similarity of imagined and actual movement times, 4 simple movement tasks were used to determine the relationship between estimated task time and actual movement time. The tasks were single-component visually controlled movements, 2-component visually controlled movements with a low index of difficulty (ID), and pin-to-hole transfer movements. For each task there was good correspondence between the mean estimated times and actual movement times. In all cases, the same factors determined the actual and estimated movement times: the amplitudes of movement and the IDs of the component movements; however, the contribution of each of these variables differed for the imagined and real tasks. Generally, the standard deviations of the estimated times were linearly related to the estimated time values. Overall, the data provide strong evidence for the same neural structures being used for both imagined and actual movements.
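The index of difficulty (ID) referred to above is the Fitts' law quantity ID = log2(2A/W) for movement amplitude A and target width W, with movement time modeled as MT = a + b·ID. A short sketch, where the coefficients a and b are hypothetical fitted constants rather than values from the study:

```python
# Fitts' law: ID = log2(2A / W); predicted movement time MT = a + b * ID.
import math

def index_of_difficulty(amplitude, width):
    """ID in bits for a movement of given amplitude to a target of given width."""
    return math.log2(2.0 * amplitude / width)

def movement_time_ms(amplitude, width, a=50.0, b=150.0):
    """Fitts' law prediction; a and b are illustrative constants, not fitted data."""
    return a + b * index_of_difficulty(amplitude, width)

print(index_of_difficulty(160, 20))   # 4.0 bits
print(movement_time_ms(160, 20))      # 650.0 ms with these constants
```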

  11. Visualization Improves Supraclavicular Access to the Subclavian Vein in a Mixed Reality Simulator.

    PubMed

    Sappenfield, Joshua Warren; Smith, William Brit; Cooper, Lou Ann; Lizdas, David; Gonsalves, Drew B; Gravenstein, Nikolaus; Lampotang, Samsun; Robinson, Albert R

    2018-07-01

    We investigated whether visual augmentation (3D, real-time, color visualization) of a procedural simulator improved performance during training in the supraclavicular approach to the subclavian vein, not as widely known or used as its infraclavicular counterpart. To train anesthesiology residents to access a central vein, a mixed reality simulator with emulated ultrasound imaging was created using an anatomically authentic, 3D-printed, physical mannequin based on a computed tomographic scan of an actual human. The simulator has a corresponding 3D virtual model of the neck and upper chest anatomy. Hand-held instruments such as a needle, an ultrasound probe, and a virtual camera controller are directly manipulated by the trainee and tracked and recorded with submillimeter resolution via miniature, 6 degrees of freedom magnetic sensors. After Institutional Review Board approval, 69 anesthesiology residents and faculty were enrolled and received scripted instructions on how to perform subclavian venous access using the supraclavicular approach based on anatomic landmarks. The volunteers were randomized into 2 cohorts. The first used real-time 3D visualization concurrently with trial 1, but not during trial 2. The second did not use real-time 3D visualization concurrently with trial 1 or 2. However, after trial 2, they observed a 3D visualization playback of trial 2 before performing trial 3 without visualization. An automated scoring system based on time, success, and errors/complications generated objective performance scores. Nonparametric statistical methods were used to compare the scores between subsequent trials, differences between groups (real-time visualization versus no visualization versus delayed visualization), and improvement in scores between trials within groups. 
Although the real-time visualization group demonstrated significantly better performance than the delayed visualization group on trial 1 (P = .01), there was no group-dependent difference in gain scores between performance on the first trial and performance on the final trial (P = .13). In the delayed visualization group, the difference in performance between trial 1 and trial 2 was not significant (P = .09); reviewing performance on trial 2 before trial 3 resulted in improved performance when compared to trial 1 (P < .0001). There was no significant difference in median scores (P = .13) between the real-time visualization and delayed visualization groups for the last trial after both groups had received visualization. Participants reported a significant improvement in confidence in performing supraclavicular access to the subclavian vein. Standard deviations of scores, a measure of performance variability, decreased in the delayed visualization group after viewing the visualization. Real-time visual augmentation (3D visualization) in the mixed reality simulator improved performance during supraclavicular access to the subclavian vein. No difference was seen in the final trial of the group that received real-time visualization compared to the group that had delayed visualization playback of their prior attempt. Training with the mixed reality simulator improved participant confidence in performing an unfamiliar technique.
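The nonparametric group comparisons reported above are of the rank-based kind exemplified by the Mann-Whitney U statistic. A minimal sketch with made-up scores (this is not the study's scoring formula or its data):

```python
# Mann-Whitney U: count the (a, b) pairs in which a score from group A
# exceeds a score from group B, counting ties as half.
def mann_whitney_u(group_a, group_b):
    u = 0.0
    for a in group_a:
        for b in group_b:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u

realtime = [82, 90, 75, 88]   # hypothetical trial-1 scores, visualization group
delayed  = [60, 72, 65, 70]   # hypothetical trial-1 scores, no-visualization group
u = mann_whitney_u(realtime, delayed)
print(u)  # 16.0: every real-time score beats every delayed score here
```

In practice one would use a library routine (e.g. a rank-sum test with a P value) rather than the raw statistic, but the pairwise-comparison logic is the same.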

  12. Real-time Three-dimensional Echocardiography: From Diagnosis to Intervention.

    PubMed

    Orvalho, João S

    2017-09-01

    Echocardiography is one of the most important diagnostic tools in veterinary cardiology, and one of the greatest recent developments is real-time three-dimensional imaging. Real-time three-dimensional echocardiography is a new ultrasonography modality that provides comprehensive views of the cardiac valves and congenital heart defects. The main advantages of this technique, particularly real-time three-dimensional transesophageal echocardiography, are the ability to visualize the catheters, and balloons or other devices, and the ability to image the structure that is undergoing intervention with unprecedented quality. This technique may become one of the main choices for the guidance of interventional cardiology procedures. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Advanced visualization platform for surgical operating room coordination: distributed video board system.

    PubMed

    Hu, Peter F; Xiao, Yan; Ho, Danny; Mackenzie, Colin F; Hu, Hao; Voigt, Roger; Martz, Douglas

    2006-06-01

    One of the major challenges for day-of-surgery operating room coordination is accurate and timely situation awareness. Distributed and secure real-time status information is key to addressing these challenges. This article reports on the design and implementation of a passive status monitoring system in a 19-room surgical suite of a major academic medical center. Key design requirements considered included integrated real-time operating room status display, access control, security, and network impact. The system used live operating room video images and patient vital signs obtained through monitors to automatically update events and operating room status. Images were presented on a "need-to-know" basis, and access was controlled by identification badge authorization. The system delivered reliable real-time operating room images and status with acceptable network impact. Operating room status was visualized at 4 separate locations and was used continuously by clinicians and operating room service providers to coordinate operating room activities.

  14. Visual-servoing optical microscopy

    DOEpatents

    Callahan, Daniel E.; Parvin, Bahram

    2009-06-09

The present invention provides methods and devices for the knowledge-based discovery and optimization of differences between cell types. In particular, the present invention provides visual servoing optical microscopy, as well as analysis methods. The present invention provides means for the close monitoring of hundreds of individual, living cells over time; quantification of dynamic physiological responses in multiple channels; real-time digital image segmentation and analysis; intelligent, repetitive computer-applied cell stress and cell stimulation; and the ability to return to the same field of cells for long-term studies and observation. The present invention further provides means to optimize culture conditions for specific subpopulations of cells.

  15. Visual-servoing optical microscopy

    DOEpatents

    Callahan, Daniel E [Martinez, CA; Parvin, Bahram [Mill Valley, CA

    2011-05-24

    The present invention provides methods and devices for the knowledge-based discovery and optimization of differences between cell types. In particular, the present invention provides visual servoing optical microscopy, as well as analysis methods. The present invention provides means for the close monitoring of hundreds of individual, living cells over time; quantification of dynamic physiological responses in multiple channels; real-time digital image segmentation and analysis; intelligent, repetitive computer-applied cell stress and cell stimulation; and the ability to return to the same field of cells for long-term studies and observation. The present invention further provides means to optimize culture conditions for specific subpopulations of cells.

  16. Visual-servoing optical microscopy

    DOEpatents

    Callahan, Daniel E; Parvin, Bahram

    2013-10-01

    The present invention provides methods and devices for the knowledge-based discovery and optimization of differences between cell types. In particular, the present invention provides visual servoing optical microscopy, as well as analysis methods. The present invention provides means for the close monitoring of hundreds of individual, living cells over time; quantification of dynamic physiological responses in multiple channels; real-time digital image segmentation and analysis; intelligent, repetitive computer-applied cell stress and cell stimulation; and the ability to return to the same field of cells for long-term studies and observation. The present invention further provides means to optimize culture conditions for specific subpopulations of cells.

  17. Five-dimensional ultrasound system for soft tissue visualization.

    PubMed

    Deshmukh, Nishikant P; Caban, Jesus J; Taylor, Russell H; Hager, Gregory D; Boctor, Emad M

    2015-12-01

A five-dimensional ultrasound (US) system is proposed as a real-time pipeline involving fusion of 3D B-mode data with the 3D ultrasound elastography (USE) data, as well as visualization of these fused data and a real-time update capability for each consecutive scan. 3D B-mode data assist in visualizing the anatomy of the target organ, and 3D elastography data add strain information. We investigate the feasibility of such a system and show that an end-to-end real-time system, from acquisition to visualization, can be developed. We present a system that consists of (a) a real-time 3D elastography algorithm based on a normalized cross-correlation (NCC) computation on a GPU; (b) real-time 3D B-mode acquisition and network transfer; (c) scan conversion of 3D elastography and B-mode volumes (if acquired by a 4D wobbler probe); and (d) visualization software that fuses, visualizes, and updates 3D B-mode and 3D elastography data in real time. We achieved a 4.45-fold speed improvement for the threaded version of the NCC-based 3D USE versus the non-threaded version. The maximum speed was 79 volumes/s for 3D scan conversion. In a phantom, we validated the dimensions of a 2.2-cm-diameter sphere in the scan-converted B-mode volume. Also, we validated the 5D US system visualization transfer function and detected 1- and 2-cm spherical objects (phantom lesions). Finally, we applied the system to a phantom consisting of three lesions to delineate the lesions from the surrounding background regions of the phantom. A 5D US system is achievable with real-time performance. We can distinguish between hard and soft areas in a phantom using the transfer functions.
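The NCC computation at the core of such an elastography pipeline finds, for each window of pre-compression RF data, the axial shift of the post-compression data that best matches it. A pure-NumPy sketch for a single 1D window (the paper's GPU pipeline does this for every window of a 3D volume; the simulated signals here are illustrative):

```python
# Normalized cross-correlation (NCC) displacement search for elastography.
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-length windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def best_shift(pre, post, win, max_lag):
    """Lag (in samples) maximizing NCC between pre[:win] and shifted post."""
    scores = [ncc(pre[:win], post[lag:lag + win]) for lag in range(max_lag + 1)]
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
pre = rng.standard_normal(256)   # simulated pre-compression RF window
post = np.roll(pre, 5)           # simulate a 5-sample tissue displacement
print(best_shift(pre, post, win=128, max_lag=20))  # 5
```

Strain is then estimated from the spatial gradient of these per-window displacements.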

  18. SensorDB: a virtual laboratory for the integration, visualization and analysis of varied biological sensor data.

    PubMed

    Salehi, Ali; Jimenez-Berni, Jose; Deery, David M; Palmer, Doug; Holland, Edward; Rozas-Larraondo, Pablo; Chapman, Scott C; Georgakopoulos, Dimitrios; Furbank, Robert T

    2015-01-01

    To our knowledge, there is no software or database solution that supports large volumes of biological time series sensor data efficiently and enables data visualization and analysis in real time. Existing solutions for managing data typically use unstructured file systems or relational databases. These systems are not designed to provide instantaneous response to user queries. Furthermore, they do not support rapid data analysis and visualization to enable interactive experiments. In large scale experiments, this behaviour slows research discovery, discourages the widespread sharing and reuse of data that could otherwise inform critical decisions in a timely manner and encourage effective collaboration between groups. In this paper we present SensorDB, a web based virtual laboratory that can manage large volumes of biological time series sensor data while supporting rapid data queries and real-time user interaction. SensorDB is sensor agnostic and uses web-based, state-of-the-art cloud and storage technologies to efficiently gather, analyse and visualize data. Collaboration and data sharing between different agencies and groups is thereby facilitated. SensorDB is available online at http://sensordb.csiro.au.

  19. Techniques for efficient, real-time, 3D visualization of multi-modality cardiac data using consumer graphics hardware.

    PubMed

    Levin, David; Aladl, Usaf; Germano, Guido; Slomka, Piotr

    2005-09-01

    We exploit consumer graphics hardware to perform real-time processing and visualization of high-resolution, 4D cardiac data. We have implemented real-time, realistic volume rendering, interactive 4D motion segmentation of cardiac data, visualization of multi-modality cardiac data and 3D display of multiple series cardiac MRI. We show that an ATI Radeon 9700 Pro can render a 512x512x128 cardiac Computed Tomography (CT) study at 0.9 to 60 frames per second (fps) depending on rendering parameters and that 4D motion based segmentation can be performed in real-time. We conclude that real-time rendering and processing of cardiac data can be implemented on consumer graphics cards.
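The volume rendering described above rests on front-to-back alpha compositing, which graphics hardware evaluates per pixel. Written out for a single ray, with a toy transfer function and illustrative sample values (not the paper's rendering parameters):

```python
# Front-to-back compositing along one ray through a volume.
import numpy as np

def composite_ray(samples, transfer):
    """`transfer` maps a sample intensity to (color, alpha)."""
    color, alpha = 0.0, 0.0
    for s in samples:
        c, a = transfer(s)
        color += (1.0 - alpha) * a * c   # accumulate premultiplied color
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                 # early ray termination
            break
    return color, alpha

tf = lambda s: (s, 0.25 * s)             # toy transfer function
ray = np.linspace(0.1, 1.0, 10)          # intensities sampled along one ray
color, alpha = composite_ray(ray, tf)
print(round(color, 3), round(alpha, 3))
```

Early ray termination, shown above, is one of the standard optimizations that makes real-time rates achievable on consumer GPUs.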

  20. 3D Data Mapping and Real-Time Experiment Control and Visualization in Brain Slices.

    PubMed

    Navarro, Marco A; Hibbard, Jaime V K; Miller, Michael E; Nivin, Tyler W; Milescu, Lorin S

    2015-10-20

    Here, we propose two basic concepts that can streamline electrophysiology and imaging experiments in brain slices and enhance data collection and analysis. The first idea is to interface the experiment with a software environment that provides a 3D scene viewer in which the experimental rig, the brain slice, and the recorded data are represented to scale. Within the 3D scene viewer, the user can visualize a live image of the sample and 3D renderings of the recording electrodes with real-time position feedback. Furthermore, the user can control the instruments and visualize their status in real time. The second idea is to integrate multiple types of experimental data into a spatial and temporal map of the brain slice. These data may include low-magnification maps of the entire brain slice, for spatial context, or any other type of high-resolution structural and functional image, together with time-resolved electrical and optical signals. The entire data collection can be visualized within the 3D scene viewer. These concepts can be applied to any other type of experiment in which high-resolution data are recorded within a larger sample at different spatial and temporal coordinates. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  1. Improving lower limb weight distribution asymmetry during the squat using Nintendo Wii Balance Boards and real-time feedback.

    PubMed

    McGough, Rian; Paterson, Kade; Bradshaw, Elizabeth J; Bryant, Adam L; Clark, Ross A

    2012-01-01

    Weight-bearing asymmetry (WBA) may be detrimental to performance and could increase the risk of injury; however, detecting and reducing it is difficult in a field setting. This study assessed whether a portable and simple-to-use system designed with multiple Nintendo Wii Balance Boards (NWBBs) and customized software can be used to evaluate and improve WBA. Fifteen elite Australian Rules Footballers and 32 age-matched, untrained participants were tested for measures of WBA while squatting. The NWBB and customized software provided real-time visual feedback of WBA during half of the trials. Outcome measures included the mean mass difference (MMD) between limbs, interlimb symmetry index (SI), and percentage of time spent favoring a single limb (TFSL). Significant reductions in MMD (p = 0.028) and SI (p = 0.007) with visual feedback were observed for the entire group data. Subgroup analysis revealed significant reductions in MMD (p = 0.047) and SI (p = 0.026) with visual feedback in the untrained sample; however, the reductions in the trained sample were nonsignificant. The trained group showed significantly less WBA for TFSL under both visual conditions (no feedback: p = 0.015, feedback: p = 0.017). Correlation analysis revealed that participants with high levels of WBA had the greatest response to feedback (p < 0.001, ρ = 0.557). In conclusion, WBA exists in healthy untrained adults, and these asymmetries can be reduced using real-time visual feedback provided by an NWBB-based system. Healthy, well-trained professional athletes do not possess the same magnitude of WBA. Inexpensive, portable, and widely available gaming technology may be used to evaluate and improve WBA in clinical and sporting settings.
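The outcome measures above can be computed directly from per-limb mass time series (e.g. one balance board under each foot). The formulas below are common definitions of these measures, assumed for illustration rather than taken verbatim from the paper, and the samples are hypothetical:

```python
# Weight-bearing asymmetry metrics from two per-limb mass series.
def wba_metrics(left_kg, right_kg):
    n = len(left_kg)
    # Mean mass difference (MMD) between limbs, in kg.
    mmd = sum(abs(l - r) for l, r in zip(left_kg, right_kg)) / n
    # Symmetry index (SI): mean |difference| relative to mean total load, in %.
    si = sum(2.0 * abs(l - r) / (l + r) for l, r in zip(left_kg, right_kg)) / n * 100.0
    # Time favoring a single limb (TFSL): % of samples loading one limb more.
    tfsl = max(sum(1 for l, r in zip(left_kg, right_kg) if l > r),
               sum(1 for l, r in zip(left_kg, right_kg) if r > l)) / n * 100.0
    return mmd, si, tfsl

left  = [40.0, 41.0, 42.0, 40.0]   # hypothetical samples during a squat
right = [38.0, 39.0, 40.0, 42.0]
mmd, si, tfsl = wba_metrics(left, right)
print(round(mmd, 2), round(si, 1), round(tfsl, 1))  # 2.0 5.0 75.0
```

Real-time feedback then amounts to recomputing these measures over a sliding window and displaying them to the athlete.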

  2. Interactive dual-volume rendering visualization with real-time fusion and transfer function enhancement

    NASA Astrophysics Data System (ADS)

    Macready, Hugh; Kim, Jinman; Feng, David; Cai, Weidong

    2006-03-01

Dual-modality imaging scanners combining functional PET and anatomical CT constitute a challenge in volumetric visualization that can be limited by high computational demand and expense. This study aims at providing physicians with multi-dimensional visualization tools, in order to navigate and manipulate the data running on a consumer PC. We have maximized the utilization of the pixel-shader architecture of low-cost graphics hardware and texture-based volume rendering to provide visualization tools with a high degree of interactivity. All the software was developed using OpenGL and Silicon Graphics Inc. Volumizer, tested on a Pentium mobile CPU on a PC notebook with 64M graphic memory. We render the individual modalities separately and perform real-time per-voxel fusion. We designed a novel "alpha-spike" transfer function to interactively identify structures of interest from volume rendering of PET/CT. This works by assigning a non-linear opacity to the voxels, thus allowing the physician to selectively eliminate or reveal information from the PET/CT volumes. As the PET and CT are rendered independently, manipulations can be applied to individual volumes; for instance, a transfer function can be applied to the CT to reveal the lung boundary while adjusting the fusion ratio between the CT and PET to enhance the contrast of a tumour region, with the resultant manipulated data sets fused together in real-time as the adjustments are made. In addition to conventional navigation and manipulation tools, such as scaling, LUT, volume slicing, and others, our strategy permits efficient visualization of PET/CT volume rendering which can potentially aid in interpretation and diagnosis.
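A hedged sketch of what an "alpha-spike" transfer function could look like: opacity is near zero everywhere except a narrow spike around a chosen intensity, so only voxels near that value are revealed. The Gaussian form and parameter names below are assumptions for illustration, not the paper's exact formulation:

```python
# Assumed alpha-spike transfer function: a narrow opacity spike at `center`.
import math

def alpha_spike(intensity, center, width, peak_alpha=1.0, base_alpha=0.0):
    """Non-linear opacity: Gaussian spike at `center`, flat elsewhere."""
    spike = math.exp(-((intensity - center) ** 2) / (2.0 * width ** 2))
    return base_alpha + (peak_alpha - base_alpha) * spike

# Voxels at the spike are opaque; voxels far from it are nearly transparent.
print(round(alpha_spike(0.70, center=0.70, width=0.05), 3))  # 1.0
print(round(alpha_spike(0.20, center=0.70, width=0.05), 3))  # 0.0
```

Interactively dragging `center` and `width` is what lets a structure of interest (e.g. a lung boundary) be isolated from the rest of the volume.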

  3. LiveView3D: Real Time Data Visualization for the Aerospace Testing Environment

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; Fleming, Gary A.

    2006-01-01

    This paper addresses LiveView3D, a software package and associated data visualization system for use in the aerospace testing environment. The LiveView3D system allows researchers to graphically view data from numerous wind tunnel instruments in real time in an interactive virtual environment. The graphical nature of the LiveView3D display provides researchers with an intuitive view of the measurement data, making it easier to interpret the aerodynamic phenomenon under investigation. LiveView3D has been developed at the NASA Langley Research Center and has been applied in the Langley Unitary Plan Wind Tunnel (UPWT). This paper discusses the capabilities of the LiveView3D system, provides example results from its application in the UPWT, and outlines features planned for future implementation.

  4. Real-Time Monitoring and Evaluation of a Visual-Based Cervical Cancer Screening Program Using a Decision Support Job Aid.

    PubMed

    Peterson, Curtis W; Rose, Donny; Mink, Jonah; Levitz, David

    2016-05-16

    In many developing nations, cervical cancer screening is done by visual inspection with acetic acid (VIA). Monitoring and evaluation (M&E) of such screening programs is challenging. An enhanced visual assessment (EVA) system was developed to augment VIA procedures in low-resource settings. The EVA System consists of a mobile colposcope built around a smartphone, and an online image portal for storing and annotating images. A smartphone app is used to control the mobile colposcope, and upload pictures to the image portal. In this paper, a new app feature that documents clinical decisions using an integrated job aid was deployed in a cervical cancer screening camp in Kenya. Six organizations conducting VIA used the EVA System to screen 824 patients over the course of a week, and providers recorded their diagnoses and treatments in the application. Real-time aggregated statistics were broadcast on a public website. Screening organizations were able to assess the number of patients screened, alongside treatment rates, and the patients who tested positive and required treatment in real time, which allowed them to make adjustments as needed. The real-time M&E enabled by "smart" diagnostic medical devices holds promise for broader use in screening programs in low-resource settings.

  5. Weighted feature selection criteria for visual servoing of a telerobot

    NASA Technical Reports Server (NTRS)

    Feddema, John T.; Lee, C. S. G.; Mitchell, O. R.

    1989-01-01

Because of the continually changing environment of a space station, visual feedback is a vital element of a telerobotic system. A real-time visual servoing system would allow a telerobot to track and manipulate randomly moving objects. Methodologies for the automatic selection of image features to be used to visually control the relative position between an eye-in-hand telerobot and a known object are devised. A weighted criteria function with both image recognition and control components is used to select the combination of image features which provides the best control. Simulation and experimental results of a PUMA robot arm visually tracking a randomly moving carburetor gasket with a visual update time of 70 milliseconds are discussed.

  6. TRMM Precipitation Application Examples Using Data Services at NASA GES DISC

    NASA Technical Reports Server (NTRS)

    Liu, Zhong; Ostrenga, D.; Teng, W.; Kempler, S.; Greene, M.

    2012-01-01

Data services to support precipitation applications are important for maximizing the societal benefits of the NASA TRMM (Tropical Rainfall Measuring Mission) and the future GPM (Global Precipitation Measurement) mission. TRMM application examples using data services at the NASA GES DISC, including samples from users around the world, will be presented in this poster. Precipitation applications often require near-real-time support. The GES DISC provides such support through: 1) Providing near-real-time precipitation products through TOVAS; 2) Maps of current conditions for monitoring precipitation and its anomaly around the world; 3) A user friendly tool (TOVAS) to analyze and visualize near-real-time and historical precipitation products; and 4) The GES DISC Hurricane Portal that provides near-real-time monitoring services for the Atlantic basin. Since the launch of TRMM, the GES DISC has developed data services to support precipitation applications around the world. In addition to the near-real-time services, other services include: 1) User friendly TRMM Online Visualization and Analysis System (TOVAS; URL: http://disc2.nascom.nasa.gov/Giovanni/tovas/); 2) Mirador (http://mirador.gsfc.nasa.gov/), a simplified interface for searching, browsing, and ordering Earth science data at GES DISC. Mirador is designed to be fast and easy to learn; 3) Data via OPeNDAP (http://disc.sci.gsfc.nasa.gov/services/opendap/). The OPeNDAP provides remote access to individual variables within datasets in a form usable by many tools, such as IDV, McIDAS-V, Panoply, Ferret and GrADS; and 4) The Open Geospatial Consortium (OGC) Web Map Service (WMS) (http://disc.sci.gsfc.nasa.gov/services/wxs_ogc.shtml). The WMS is an interface that allows the use of data and enables clients to build customized maps with data coming from a different network.
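A client of an OGC WMS endpoint like the one listed above builds a GetMap request from a standard parameter set. The sketch below follows the WMS 1.1.1 specification; the endpoint URL and layer name are placeholders, not the actual GES DISC service names:

```python
# Building an OGC WMS 1.1.1 GetMap request URL.
from urllib.parse import urlencode

def wms_getmap_url(endpoint, layer, bbox, width=512, height=256):
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "STYLES": "",
        "SRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),  # minx,miny,maxx,maxy
        "WIDTH": str(width),
        "HEIGHT": str(height),
        "FORMAT": "image/png",
    }
    return endpoint + "?" + urlencode(params)

# Hypothetical precipitation layer over a whole-earth bounding box.
url = wms_getmap_url("https://example.gov/wms", "TRMM_3B42_daily",
                     bbox=(-180, -90, 180, 90))
print(url)
```

Fetching that URL would return a rendered PNG map tile that a client can overlay on its own basemap.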

  7. Interactive MPEG-4 low-bit-rate speech/audio transmission over the Internet

    NASA Astrophysics Data System (ADS)

    Liu, Fang; Kim, JongWon; Kuo, C.-C. Jay

    1999-11-01

The recently developed MPEG-4 technology enables the coding and transmission of natural and synthetic audio-visual data in the form of objects. In an effort to extend the object-based functionality of MPEG-4 to real-time Internet applications, architectural prototypes of multiplex layer and transport layer tailored for transmission of MPEG-4 data over IP are under debate among the Internet Engineering Task Force (IETF) and the MPEG-4 Systems Ad Hoc group. In this paper, we present an architecture for an interactive MPEG-4 speech/audio transmission system over the Internet. It utilizes a framework of the Real Time Streaming Protocol (RTSP) in conjunction with the Real-time Transport Protocol (RTP) to provide controlled, on-demand delivery of real-time speech/audio data. Based on a client-server model, a couple of low bit-rate bit streams (real-time speech/audio, pre-encoded speech/audio) are multiplexed and transmitted via a single RTP channel to the receiver. The MPEG-4 Scene Description (SD) and Object Descriptor (OD) bit streams are securely sent through the RTSP control channel. Upon receiving, an initial MPEG-4 audio-visual scene is constructed after de-multiplexing, decoding of bit streams, and scene composition. A receiver is allowed to manipulate the initial audio-visual scene presentation locally, or interactively arrange scene changes by sending requests to the server.
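Each speech/audio packet in such a system is carried in an RTP packet whose fixed 12-byte header is defined by RFC 3550. A sketch of packing that header (the payload type, sequence number, and SSRC values below are illustrative):

```python
# Packing the fixed 12-byte RTP header (RFC 3550).
import struct

def rtp_header(payload_type, seq, timestamp, ssrc, marker=0):
    version, padding, extension, csrc_count = 2, 0, 0, 0
    byte0 = (version << 6) | (padding << 5) | (extension << 4) | csrc_count
    byte1 = (marker << 7) | (payload_type & 0x7F)
    # Network byte order: two single bytes, 16-bit seq, 32-bit timestamp, 32-bit SSRC.
    return struct.pack("!BBHII", byte0, byte1, seq, timestamp, ssrc)

hdr = rtp_header(payload_type=14, seq=1, timestamp=160, ssrc=0x12345678)
print(len(hdr), hdr[0] >> 6)  # 12 2  (12-byte header, RTP version 2)
```

The sequence number and timestamp are what let the receiver reorder packets and reconstruct the media timeline before scene composition.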

  8. On-patient see-through augmented reality based on visual SLAM.

    PubMed

    Mahmoud, Nader; Grasa, Óscar G; Nicolau, Stéphane A; Doignon, Christophe; Soler, Luc; Marescaux, Jacques; Montiel, J M M

    2017-01-01

    An augmented reality system to visualize a 3D preoperative anatomical model on the intra-operative patient is proposed. The only hardware requirement is a commercial tablet-PC equipped with a camera; no external tracking device nor artificial landmarks on the patient are required. We resort to visual SLAM to provide markerless, real-time tablet-PC camera localization with respect to the patient. The preoperative model is registered to the patient through 4-6 anchor points, which correspond to anatomical references selected on the tablet-PC screen at the beginning of the procedure. Accurate, real-time preoperative model alignment (approximately 5-mm mean FRE and TRE) was achieved, even when anchors were not visible in the current field of view. The system has been experimentally validated on human volunteers, in vivo pigs, and a phantom. The proposed system can be smoothly integrated into the surgical workflow because it: (1) operates in real time, (2) requires minimal additional hardware (only a tablet-PC with a camera), (3) is robust to occlusion, and (4) requires minimal interaction from the medical staff.
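The anchor-point registration step can be sketched in simplified form. The paper registers the model through 4-6 anchor points; the snippet below shows only a translation-only alignment of anchor centroids (a full rigid registration would also solve for rotation, e.g. with the Kabsch/SVD algorithm), using made-up coordinates:

```python
def centroid(points):
    """Mean position of a list of 3D points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def translation_align(model_anchors, patient_anchors):
    """Translation-only alignment of preoperative model anchors onto
    patient anchors: shift the model centroid onto the patient centroid.
    Returns the translation and the moved model anchors."""
    cm, cp = centroid(model_anchors), centroid(patient_anchors)
    t = tuple(cp[i] - cm[i] for i in range(3))
    aligned = [tuple(p[i] + t[i] for i in range(3)) for p in model_anchors]
    return t, aligned
```

For example, anchors at (0,0,0) and (1,0,0) aligned onto (5,5,5) and (6,5,5) yield the translation (5,5,5).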

  9. Live Aircraft Encounter Visualization at FutureFlight Central

    NASA Technical Reports Server (NTRS)

    Murphy, James R.; Chinn, Fay; Monheim, Spencer; Otto, Neil; Kato, Kenji; Archdeacon, John

    2018-01-01

    Researchers at the National Aeronautics and Space Administration (NASA) have developed an aircraft data streaming capability that can be used to visualize live aircraft in near real time. During a joint Federal Aviation Administration (FAA)/NASA Airborne Collision Avoidance System flight series, test sorties between unmanned aircraft and manned intruder aircraft were shown in real time at NASA Ames' FutureFlight Central tower facility as a virtual representation of the encounter. This capability leveraged existing live surveillance, video, and audio data streams distributed through a Live, Virtual, Constructive test environment, then depicted the encounter from the point of view of any aircraft in the system, showing the proximity of the other aircraft. For the demonstration, position report data were sent to the ground from on-board sensors on the unmanned aircraft. The point of view can be changed dynamically, allowing encounters to be observed from all angles. Visualizing the encounters in real time provides a safe and effective method for observing live flight testing and a strong alternative to traveling to the remote test range.

  10. Real-time Visualization of Tissue Dynamics during Embryonic Development and Malignant Transformation

    NASA Astrophysics Data System (ADS)

    Yamada, Kenneth

    Tissues undergo dramatic changes in organization during embryonic development, as well as during cancer progression and invasion. Recent advances in microscopy now allow us to visualize and track directly the dynamic movements of tissues, their constituent cells, and cellular substructures. This behavior can now be visualized not only in regular tissue culture on flat surfaces (`2D' environments), but also in a variety of 3D environments that may provide physiological cues relevant to understanding dynamics within living organisms. Acquisition of imaging data using various microscopy modalities will provide rich opportunities for determining the roles of physical factors and for computational modeling of complex processes in living tissues. Direct visualization of real-time motility is providing insight into biology spanning multiple spatio-temporal scales. Many cells in our body are known to be in contact with connective tissue and other forms of extracellular matrix. They make this contact through microscopic cellular adhesions that bind to matrix proteins. In particular, fluorescence microscopy has revealed that cells dynamically probe and bend the matrix at the sites of cell adhesions, and that 3D matrix architecture, stiffness, and elasticity can each regulate migration of the cells. Conversely, cells remodel their local matrix as organs form or tumors invade. Cancer cells can invade tissues using microscopic protrusions that degrade the surrounding matrix; in this case, the local matrix protein concentration is more important for inducing the micro-invasive protrusions than stiffness. On the length scales of tissues, transiently high rates of individual cell movement appear to help establish organ architecture. In fact, isolated cells can self-organize to form tissue structures. In all of these cases, in-depth real-time visualization will ultimately provide the extensive data needed for computer modeling and for testing hypotheses in which physical forces interact closely with cell signaling to form organs or promote tumor invasion.

  11. Towards real-time medical diagnostics using hyperspectral imaging technology

    NASA Astrophysics Data System (ADS)

    Bjorgan, Asgeir; Randeberg, Lise L.

    2015-07-01

    Hyperspectral imaging provides non-contact, high-resolution spectral images that have substantial diagnostic potential, for example in the diagnosis and early detection of arthritis in finger joints. Processing speed is currently a limitation for clinical use of the technique. A real-time system for analysis and visualization using GPU processing and threaded CPU processing is presented. Images showing blood oxygenation, blood volume fraction, and vessel-enhanced images are among the data calculated in real time. This study shows the potential of real-time processing in this context. A combination of the processing modules will be used in the detection of arthritic finger joints from hyperspectral reflectance and transmittance data.

  12. Breath-Based Monitoring of Pilot Hypoxia - Proof of Concept

    DTIC Science & Technology

    2016-04-21

    vest, and there are no aircraft connections required. Operation is entirely automatic and data visualization is available via a Bluetooth-connected...to USB-connected Flash-RAM (storage depends on module size; 32 GB supported). • Bluetooth transmission of data in real time • Automated storage...via an Android tablet (Figure 4). The tablet acquires the data transmitted over Bluetooth by the pilot-worn system module and provides a real-time

  13. A Comprehensive Optimization Strategy for Real-time Spatial Feature Sharing and Visual Analytics in Cyberinfrastructure

    NASA Astrophysics Data System (ADS)

    Li, W.; Shao, H.

    2017-12-01

    For geospatial cyberinfrastructure-enabled web services, the ability to rapidly transmit and share spatial data over the Internet plays a critical role in meeting the demands of real-time change detection, response, and decision-making. Especially for vector datasets, which serve as irreplaceable and concrete material in data-driven geospatial applications, rich geometry and property information facilitates the development of interactive, efficient, and intelligent data analysis and visualization applications. However, big-data issues have hindered the wide adoption of vector datasets in web services. In this research, we propose a comprehensive optimization strategy to enhance the performance of vector data transmission and processing. This strategy combines: 1) pre-computed and on-the-fly generalization, which automatically determines the proper simplification level through the introduction of an appropriate distance tolerance (ADT) to meet various visualization requirements while speeding up simplification; 2) a progressive attribute transmission method that reduces data size and therefore service response time; and 3) compressed data transmission with dynamic selection of the compression method to maximize service efficiency under different computing and network environments. A cyberinfrastructure web portal was developed to implement the proposed technologies. After applying our optimization strategies, substantial performance enhancement is achieved. We expect this work to widen the use of web services providing vector data to support real-time spatial feature sharing, visual analytics, and decision-making.
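The generalization step rests on line simplification under a distance tolerance; the classic Douglas-Peucker algorithm is a standard way to do this (the paper's ADT mechanism, which chooses the tolerance automatically, is not reproduced here):

```python
def point_line_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    return abs(dy * px - dx * py + bx * ay - by * ax) / (dx * dx + dy * dy) ** 0.5

def simplify(points, tolerance):
    """Douglas-Peucker simplification: keep the farthest point from the
    chord if it exceeds the tolerance, then recurse on both halves."""
    if len(points) < 3:
        return list(points)
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        d = point_line_dist(points[i], points[0], points[-1])
        if d > dmax:
            dmax, idx = d, i
    if dmax <= tolerance:
        return [points[0], points[-1]]
    left = simplify(points[:idx + 1], tolerance)
    return left[:-1] + simplify(points[idx:], tolerance)
```

A near-collinear run such as [(0,0), (1,0.05), (2,0)] collapses to its endpoints at tolerance 0.1, while a genuine corner like (1,2) survives.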

  14. Real-time software-based end-to-end wireless visual communications simulation platform

    NASA Astrophysics Data System (ADS)

    Chen, Ting-Chung; Chang, Li-Fung; Wong, Andria H.; Sun, Ming-Ting; Hsing, T. Russell

    1995-04-01

    Wireless channel impairments pose many challenges to real-time visual communications. In this paper, we describe a real-time, software-based wireless visual communications simulation platform that can be used for performance evaluation in real time. The platform consists of two personal computers serving as hosts. The major components of each PC host are a real-time programmable video codec, a wireless channel simulator, and a network interface for data transport between the two hosts. These three components are interfaced in real time to show the interaction of various wireless channels and video coding algorithms. Their programmable features allow users to evaluate user-controlled wireless channel effects without physically carrying out experiments that are limited in scope, time-consuming, and costly. Using this simulation platform as a testbed, we have experimented with several wireless channel effects, including Rayleigh fading, antenna diversity, channel filtering, symbol timing, modulation, and packet loss.

  15. The Effect of Nonverbal Cues on the Interpretation of Utterances by People with Visual Impairments

    ERIC Educational Resources Information Center

    Sak-Wernicka, Jolanta

    2014-01-01

    Introduction: The purpose of this article is to explore the effect of nonverbal information (gestures and facial expressions) provided in real time on the interpretation of utterances by people with total blindness. Methods: The article reports on an exploratory study performed on two groups of participants with visual impairments who were tested…

  16. A Free Program for Using and Teaching an Accessible Electronic Wayfinding Device

    ERIC Educational Resources Information Center

    Greenberg, Maya Delgado; Kuns, Jerry

    2012-01-01

    Accessible Global Positioning Systems (GPS) are changing the way many people with visual impairments (that is, those who are blind or have low vision) travel. GPS provides real-time orientation information so that a traveler with a visual impairment can make informed decisions about path of travel and destination. Orientation and mobility (O&M)…

  17. Parametric Study of Diffusion-Enhancement Networks for Spatiotemporal Grouping in Real-Time Artificial Vision

    DTIC Science & Technology

    1993-04-01

    suggesting it occurs in later visual motion processing (long-range or second-order system). Figure 2. Gamma motion. (a) A light of fixed spatial extent is illuminated then extinguished. (b) The percept is of a light expanding and then...while smaller, type-B cells provide input to its parvocellular subdivision. From here the magnocellular pathway progresses up through visual cortex area V

  18. Realistic tissue visualization using photoacoustic image

    NASA Astrophysics Data System (ADS)

    Cho, Seonghee; Managuli, Ravi; Jeon, Seungwan; Kim, Jeesu; Kim, Chulhong

    2018-02-01

    Visualization methods are very important in biomedical imaging. Biomedical imaging has the unique advantage of conveying information intuitively in an image, and this advantage can be greatly improved by choosing an appropriate visualization method. The task is more complicated for volumetric data. Volume data has the advantage of containing 3D spatial information, but the data itself cannot directly present that value: because images are always displayed in 2D space, visualization is the key step that realizes the value of volume data. However, visualizing 3D data requires complicated algorithms and carries a high computational burden, so specialized algorithms and computing optimization are important issues for volume data. Photoacoustic imaging is a unique imaging modality that can visualize the optical properties of deep tissue. Because the color of an organism is mainly determined by its light-absorbing components, photoacoustic data can provide tissue color information that is closer to real tissue color. In this research, we developed realistic tissue visualization using acoustic-resolution photoacoustic volume data. To achieve realistic visualization, we designed a specialized color transfer function that depends on the depth of the tissue from the skin. We used a direct ray casting method and processed color while computing the shader parameters. In the rendering results, we succeeded in obtaining realistic texture from the photoacoustic data: rays reflected at the surface were visualized in white, and color reflected from deep tissue was visualized in red, like skin tissue. We also implemented the CUDA algorithm in an OpenGL environment for real-time interactive imaging.
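A depth-dependent color transfer function of the kind described can be sketched as a simple interpolation from a white surface color toward a skin-like red with depth. The colors, scaling, and depth normalization below are illustrative assumptions, not the authors' parameters:

```python
def depth_color(intensity, depth, max_depth):
    """Depth-dependent transfer function sketch: signals near the surface
    render toward white, deeper signals toward a skin-like red, both
    scaled by the photoacoustic intensity (0..1)."""
    t = min(max(depth / max_depth, 0.0), 1.0)   # 0 at surface, 1 at max depth
    surface = (1.0, 1.0, 1.0)                   # white
    deep = (0.8, 0.2, 0.2)                      # reddish, skin-like
    rgb = tuple((1 - t) * s + t * d for s, d in zip(surface, deep))
    return tuple(round(intensity * c, 3) for c in rgb)
```

In a ray caster, this function would be evaluated per sample along each ray, with the result accumulated into the pixel color.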

  19. C-SPADE: a web-tool for interactive analysis and visualization of drug screening experiments through compound-specific bioactivity dendrograms

    PubMed Central

    Alam, Zaid; Peddinti, Gopal

    2017-01-01

    The advent of the polypharmacology paradigm in drug discovery calls for novel chemoinformatic tools for analyzing compounds’ multi-targeting activities. Such tools should provide an intuitive representation of the chemical space through capturing and visualizing underlying patterns of compound similarities linked to their polypharmacological effects. Most of the existing compound-centric chemoinformatics tools lack interactive options and user interfaces that are critical for the real-time needs of chemical biologists carrying out compound screening experiments. Toward that end, we introduce C-SPADE, an open-source exploratory web-tool for interactive analysis and visualization of drug profiling assays (biochemical, cell-based or cell-free) using compound-centric similarity clustering. C-SPADE allows the users to visually map the chemical diversity of a screening panel, explore investigational compounds in terms of their similarity to the screening panel, perform polypharmacological analyses and guide drug-target interaction predictions. C-SPADE requires only the raw drug profiling data as input, and it automatically retrieves the structural information and constructs the compound clusters in real time, thereby reducing the time required for manual analysis in drug development or repurposing applications. The web-tool provides a customizable visual workspace that can either be downloaded as a figure or a Newick tree file or shared as a hyperlink with other users. C-SPADE is freely available at http://cspade.fimm.fi/. PMID:28472495

  20. A real-time plantar pressure feedback device for foot unloading.

    PubMed

    Femery, Virginie G; Moretto, Pierre G; Hespel, Jean-Michel G; Thévenon, André; Lensel, Ghislaine

    2004-10-01

    Objective: To develop and test a plantar pressure control device that provides both visual and auditory feedback and is suitable for correcting plantar pressure distribution patterns in persons susceptible to neuropathic foot ulceration. Design: Pilot test. Setting: Sports medicine laboratory in a university in France. Participant: One healthy man in his mid-thirties. Intervention: Not applicable. Main outcome measures: A device was developed based on real-time feedback, incorporating an acoustic alarm and visual signals, adjusted to a specific pressure load. Plantar pressure was measured during walking, at 6 sensor locations over 27 steps under 2 conditions: (1) natural and (2) unloaded in response to device feedback. Results: The subject was able to modify his gait in response to the auditory and visual signals. He did not compensate for the decrease of peak pressure under the first metatarsal by increasing the duration of the load shift under this area. Gait pattern modification centered on a mediolateral load shift. The auditory signal provided a warning system alerting the user to potentially harmful plantar pressures; the visual signal indicated the degree of pressure. Conclusions: People who have lost nociceptive perception, as in cases of diabetic neuropathy, may be able to change their walking pattern in response to the feedback provided by this device. The visual signal may have diagnostic value in determining plantar pressures in such patients. This pilot test indicates that further studies are warranted.
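The device's feedback logic can be sketched as a per-sensor threshold check: an auditory alarm when the load exceeds the set pressure and a graded visual indication of how close the load is to it. The threshold handling and the three-level visual grading below are illustrative, not the actual device design:

```python
def feedback(pressures, threshold):
    """Per-sensor feedback states: 'audio_alarm' warns that the set
    pressure is exceeded; 'visual_level' (0..3) grades how close the
    measured load is to the threshold."""
    alerts = {}
    for sensor, p in pressures.items():
        alerts[sensor] = {
            "audio_alarm": p > threshold,
            "visual_level": min(3, int(3 * p / threshold)),
        }
    return alerts
```

For example, with a threshold of 100 kPa, a 120 kPa reading under the first metatarsal triggers the alarm at full visual level, while a 40 kPa heel reading shows a low visual level and no alarm.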

  1. Illustrative visualization of 3D city models

    NASA Astrophysics Data System (ADS)

    Doellner, Juergen; Buchholz, Henrik; Nienhaus, Marc; Kirsch, Florian

    2005-03-01

    This paper presents an illustrative visualization technique that provides expressive representations of large-scale 3D city models, inspired by the tradition of artistic and cartographic visualizations typically found in bird's-eye view and panoramic maps. We define a collection of city model components and a real-time multi-pass rendering algorithm that achieves comprehensible, abstract 3D city model depictions based on edge enhancement, color-based and shadow-based depth cues, and procedural facade texturing. Illustrative visualization provides an effective visual interface to urban spatial information and associated thematic information complementing visual interfaces based on the Virtual Reality paradigm, offering a huge potential for graphics design. Primary application areas include city and landscape planning, cartoon worlds in computer games, and tourist information systems.

  2. Pilot Task Profiles, Human Factors, And Image Realism

    NASA Astrophysics Data System (ADS)

    McCormick, Dennis

    1982-06-01

    Computer Image Generation (CIG) visual systems provide real-time scenes for state-of-the-art flight training simulators. The visual system requires a greater understanding of training tasks, human factors, and the concept of image realism to produce an effective and efficient training scene than is required by other types of visual systems. Image realism must be defined in terms of pilot visual information requirements. Human factors analysis of training and perception is necessary to determine the pilot's information requirements. System analysis then determines how the CIG and display device can best provide essential information to the pilot. This analysis procedure ensures optimum training effectiveness and system performance.

  3. Real-Time MRI-Guided Cardiac Cryo-Ablation: A Feasibility Study.

    PubMed

    Kholmovski, Eugene G; Coulombe, Nicolas; Silvernagel, Joshua; Angel, Nathan; Parker, Dennis; Macleod, Rob; Marrouche, Nassir; Ranjan, Ravi

    2016-05-01

    MRI-based ablation provides an attractive capability of seeing ablation-related tissue changes in real time. Here we describe a real-time MRI-based cardiac cryo-ablation system. Studies were performed in a canine model (n = 4) using MR-compatible cryo-ablation devices built for animal use: a focal cryo-catheter with an 8-mm tip and a 28-mm-diameter cryo-balloon. The main steps of the MRI-guided cardiac cryo-ablation procedure (real-time navigation, confirmation of tip-tissue contact, confirmation of vessel occlusion, real-time monitoring of freeze zone formation, and intra-procedural assessment of lesions) were validated in a 3 Tesla clinical MRI scanner. The MRI-compatible cryo-devices were advanced to the right atrium (RA) and right ventricle (RV) and their position was confirmed by real-time MRI. Specifically, contact between the catheter tip and myocardium and occlusion of the superior vena cava (SVC) by the balloon were visually validated. Focal cryo-lesions were created in the RV septum. Circumferential ablation of the SVC-RA junction with no gaps was achieved using the cryo-balloon. Real-time visualization of freeze zone formation was achieved in all studies when lesions were successfully created. The ablations and presence of collateral damage were confirmed by T1-weighted and late gadolinium enhancement MRI and gross pathological examination. This study confirms the feasibility of an MRI-based cryo-ablation system in performing cardiac ablation procedures. The system allows real-time catheter navigation, confirmation of catheter tip-tissue contact, validation of vessel occlusion by the cryo-balloon, real-time monitoring of freeze zone formation, and intra-procedural assessment of ablations including collateral damage. © 2016 Wiley Periodicals, Inc.

  4. A Web service-based architecture for real-time hydrologic sensor networks

    NASA Astrophysics Data System (ADS)

    Wong, B. P.; Zhao, Y.; Kerkez, B.

    2014-12-01

    Recent advances in web services and cloud computing provide new means by which to process and respond to real-time data. This is particularly true of platforms built for the Internet of Things (IoT). These enterprise-scale platforms have been designed to exploit the IP-connectivity of sensors and actuators, providing a robust means by which to route real-time data feeds and respond to events of interest. While powerful and scalable, these platforms have yet to be adopted by the hydrologic community, where the value of real-time data impacts both scientists and decision makers. We discuss the use of one such IoT platform for the purpose of large-scale hydrologic measurements, showing how rapid deployment and ease-of-use allows scientists to focus on their experiment rather than software development. The platform is hardware agnostic, requiring only IP-connectivity of field devices to capture, store, process, and visualize data in real-time. We demonstrate the benefits of real-time data through a real-world use case by showing how our architecture enables the remote control of sensor nodes, thereby permitting the nodes to adaptively change sampling strategies to capture major hydrologic events of interest.
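The adaptive-sampling behavior described, where remote control lets nodes change sampling strategy to capture events, reduces to a small control rule on the node. The stage-rise trigger and interval values below are illustrative assumptions, not the authors' configuration:

```python
def next_interval(levels, base=900, fast=60, rise_threshold=0.05):
    """Choose the next sampling interval (seconds) from recent stage
    readings (meters): switch from the base rate to rapid sampling when
    the water level rises faster than `rise_threshold` per sample."""
    if len(levels) >= 2 and levels[-1] - levels[-2] > rise_threshold:
        return fast
    return base
```

In an IoT deployment this rule could run in the cloud platform, which then pushes the new interval down to the IP-connected node.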

  5. Effect of Real-Time Feedback on Screw Placement Into Synthetic Cancellous Bone.

    PubMed

    Gustafson, Peter A; Geeslin, Andrew G; Prior, David M; Chess, Joseph L

    2016-08-01

    The objective of this study is to evaluate whether real-time torque feedback may reduce the occurrence of stripping when inserting nonlocking screws through fracture plates into synthetic cancellous bone. Five attending orthopaedic surgeons and five senior-level orthopaedic residents inserted 8 screws in each phase. In phase I, screws were inserted without feedback, simulating conventional techniques. In phase II, screws were driven with visual torque feedback. In phase III, screws were again inserted with conventional techniques. Comparison of these 3 phases with respect to screw insertion torque, surgeon rank, and perception of stripping was used to establish the effects of feedback. Seventy-three of 239 screws resulted in stripping. During the first phase, no feedback was provided and the overall strip rate was 41.8%; this decreased to 15% with visual feedback (P < 0.001) and returned to 35% when repeated without feedback. With feedback, a lower average torque was applied over a narrower torque distribution. Residents stripped 40.8% of screws compared with 20.2% for attending surgeons. Surgeons were poor at perceiving whether they stripped. Prevention and identification of stripping is influenced by surgeon perception of tactile sensation, which is significantly improved by real-time visual feedback of a torque-versus-roll curve. This concept of real-time feedback appears to improve performance in synthetic cancellous bone and may lead to improved fixation in cancellous bone in a surgical setting.
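One plausible way real-time torque feedback can flag stripping is by detecting measured torque falling below the running peak while the screw keeps turning. The sketch below uses an assumed 20% drop criterion, not a value from the study:

```python
def stripping_warning(torques, drop_fraction=0.2):
    """Return the sample index at which insertion torque first drops more
    than `drop_fraction` below the running peak (a probable strip), or
    None if no such drop occurs."""
    peak = 0.0
    for i, t in enumerate(torques):
        peak = max(peak, t)
        if peak > 0 and t < (1 - drop_fraction) * peak:
            return i
    return None
```

A monotonically rising torque trace returns None; a trace that peaks at 1.0 N·m and then falls to 0.7 N·m triggers the warning.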

  6. Real-Time Performance Feedback for the Manual Control of Spacecraft

    NASA Astrophysics Data System (ADS)

    Karasinski, John Austin

    Real-time performance metrics were developed to quantify workload, situational awareness, and manual task performance for use as visual feedback to pilots of aerospace vehicles. Results from prior lunar lander experiments with variable levels of automation were replicated and extended to provide insights for the development of real-time metrics. Increased levels of automation resulted in increased flight performance, lower workload, and increased situational awareness. Automated Speech Recognition (ASR) was employed to detect verbal callouts as a limited measure of subjects' situational awareness. A one-dimensional manual tracking task and a simple instructor-model visual feedback scheme were developed. This feedback was indicated to the operator by changing the color of a guidance element on the primary flight display, similar to how a flight instructor points out elements of a display to a student pilot. Experiments showed that for this low-complexity task, visual feedback did not change subject performance, but did increase the subjects' measured workload. Insights gained from these experiments were applied to a Simplified Aid for EVA Rescue (SAFER) inspection task. The effects of variations of an instructor-model performance-feedback strategy on human performance in a novel SAFER inspection task were investigated. Real-time feedback was found to have a statistically significant effect of improving subject performance and decreasing workload in this complicated four-degree-of-freedom manual control task with two secondary tasks.

  7. Adaptive Monocular Visual-Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices.

    PubMed

    Piao, Jin-Chun; Kim, Shin-Dug

    2017-11-07

    Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual-inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual-inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual-inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual-inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method.
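The adaptive execution module described, which dynamically selects visual-inertial odometry or optical-flow-based fast visual odometry, can be sketched as a policy function. The feature-count and motion-rate thresholds below are invented for illustration and are not those of the cited paper:

```python
def select_odometry(num_features, motion_deg_per_s, battery_low):
    """Adaptive execution policy sketch: fall back to lightweight
    optical-flow odometry when tracking is easy (many features, slow
    motion) or resources are scarce; otherwise run full visual-inertial
    odometry for robustness."""
    if battery_low or (num_features > 200 and motion_deg_per_s < 10):
        return "optical_flow_vo"
    return "visual_inertial_odometry"
```

Such a policy would be evaluated every frame (or every few frames), so the pipeline can switch modes as scene texture and device motion change.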

  8. Robot Vision Library

    NASA Technical Reports Server (NTRS)

    Howard, Andrew B.; Ansar, Adnan I.; Litwin, Todd E.; Goldberg, Steven B.

    2009-01-01

    The JPL Robot Vision Library (JPLV) provides real-time robot vision algorithms for developers who are not vision specialists. The package includes algorithms for stereo ranging, visual odometry, and unsurveyed camera calibration, and has unique support for very wide-angle lenses.

  9. Graphical user interface concepts for tactical augmented reality

    NASA Astrophysics Data System (ADS)

    Argenta, Chris; Murphy, Anne; Hinton, Jeremy; Cook, James; Sherrill, Todd; Snarski, Steve

    2010-04-01

    Applied Research Associates and BAE Systems are working together to develop a wearable augmented reality system under the DARPA ULTRA-Vis program. Our approach to achieving the objectives of ULTRA-Vis, called iLeader, incorporates a full-color 40° field of view (FOV) see-through holographic waveguide integrated with sensors for full position and head tracking to provide an unobtrusive information system for operational maneuvers. iLeader will enable warfighters to mark up the 3D battle-space with symbologic identification of graphical control measures, friendly force positions, and enemy/target locations. Our augmented reality display provides dynamic real-time painting of symbols on real objects, a pose-sensitive 360° representation of relevant object positions, and visual feedback for a variety of system activities. The iLeader user interface and situational awareness graphical representations are highly intuitive, non-disruptive, and always tactically relevant. We used best human-factors practices, system engineering expertise, and cognitive task analysis to design effective strategies for presenting real-time situational awareness to the military user without distorting their natural senses and perception. We present requirements identified for presenting information within a see-through display in combat environments, challenges in designing suitable visualization capabilities, and solutions that enable us to bring real-time iconic command and control to the tactical user community.

  10. Demonstrating the Value of Near Real-time Satellite-based Earth Observations in a Research and Education Framework

    NASA Astrophysics Data System (ADS)

    Chiu, L.; Hao, X.; Kinter, J. L.; Stearn, G.; Aliani, M.

    2017-12-01

    The launch of the GOES-16 series provides an opportunity to advance near real-time applications in natural hazard detection, monitoring, and warning. This study demonstrates the capability and value of receiving real-time satellite-based Earth observations over fast terrestrial networks and processing high-resolution remote sensing data in a university environment. The demonstration system includes four components: 1) near real-time data receiving and processing; 2) data analysis and visualization; 3) event detection and monitoring; and 4) information dissemination. Various tools are developed and integrated to receive and process GRB data in near real time, produce images and value-added data products, and detect and monitor extreme weather events such as hurricanes, fires, flooding, fog, and lightning. A web-based application system is developed to disseminate near real-time satellite images and data products. The images are generated in a GIS-compatible format (GeoTIFF) to enable convenient use and integration in various GIS platforms. This study enhances the capacity for undergraduate and graduate education in Earth system and climate sciences and related applications, teaching the basic principles and technology of real-time applications with remote sensing measurements. It also provides an integrated platform for near real-time monitoring of extreme weather events, which is helpful for various user communities.

  11. Real-Time Aerodynamic Flow and Data Visualization in an Interactive Virtual Environment

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; Fleming, Gary A.

    2005-01-01

    Significant advances have been made in non-intrusive flow field diagnostics in the past decade. Camera-based techniques are now capable of determining physical quantities such as surface deformation, surface pressure and temperature, flow velocities, and molecular species concentration. In each case, extracting the pertinent information from the large volume of acquired data requires powerful and efficient data visualization tools. The additional requirement for real-time visualization is fueled by an increased emphasis on minimizing test time in expensive facilities. This paper addresses a capability titled LiveView3D, which is the first step in the development of an in-depth, real-time data visualization and analysis tool for use in aerospace testing facilities.

  12. Subjective evaluation of two stereoscopic imaging systems exploiting visual attention to improve 3D quality of experience

    NASA Astrophysics Data System (ADS)

    Hanhart, Philippe; Ebrahimi, Touradj

    2014-03-01

Crosstalk and vergence-accommodation rivalry negatively impact the quality of experience (QoE) provided by stereoscopic displays. However, exploiting visual attention and adapting the 3D rendering process on the fly can reduce these drawbacks. In this paper, we propose and evaluate two different approaches that exploit visual attention to improve 3D QoE on stereoscopic displays: an offline system, which uses a saliency map to predict gaze position, and an online system, which uses a remote eye tracker to measure gaze position in real time. The gaze points were used in conjunction with the disparity map to extract the disparity of the object of interest. Horizontal image translation was performed to bring the fixated object onto the screen plane. The user preference between the standard 3D mode and the two proposed systems was evaluated through a subjective evaluation. Results show that exploiting visual attention significantly improves image quality and visual comfort, with a slight advantage for real-time gaze determination. Depth quality is also improved, but the difference is not significant.
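The horizontal image translation step can be sketched as follows, assuming a disparity convention in which a feature at column x in the left view appears at column x - d in the right view; the function names and the zero-filled borders are illustrative, not the authors' implementation:

```python
import numpy as np

def shift_horizontal(img, shift):
    """Translate an image horizontally by `shift` pixels, zero-filling
    the exposed border (a real system would crop or inpaint instead)."""
    out = np.zeros_like(img)
    if shift > 0:
        out[:, shift:] = img[:, :-shift]
    elif shift < 0:
        out[:, :shift] = img[:, -shift:]
    else:
        out[:] = img
    return out

def retarget_to_screen_plane(left, right, disparity_map, gaze_xy):
    """Zero the disparity of the fixated object: read the disparity at
    the gaze point and translate the right view by that amount, so the
    fixated object lands on the screen plane."""
    gx, gy = gaze_xy
    d = int(round(float(disparity_map[gy, gx])))
    return left, shift_horizontal(right, d)
```

After retargeting, the object under the gaze point has zero disparity, which is what places it on the screen plane and relieves the vergence-accommodation conflict for that object.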

  13. Flood Damage and Loss Estimation for Iowa on Web-based Systems using HAZUS

    NASA Astrophysics Data System (ADS)

    Yildirim, E.; Sermet, M. Y.; Demir, I.

    2016-12-01

The importance of decision support systems for flood emergency response and loss estimation grows with the social and economic impacts of flooding. Several software systems are available to researchers and decision makers for estimating flood damage. HAZUS-MH, developed by FEMA (Federal Emergency Management Agency), is one of the most widely used desktop programs for estimating the economic losses and social impacts of disasters such as earthquakes, hurricanes, and flooding (riverine and coastal). HAZUS implements its loss estimation methodology through a geographic information system (GIS) and contains structural, demographic, and vehicle information across the United States. It thus allows decision makers to understand and predict possible casualties and damage from floods by running flood simulations through a GIS application. However, because it uses static data, it does not represent real-time conditions. To close this gap, this research presents an overview of a web-based infrastructure coupling HAZUS with real-time data provided by IFIS (Iowa Flood Information System). IFIS, developed by the Iowa Flood Center, is a one-stop web platform for accessing community-based flood conditions, forecasts, visualizations, inundation maps, and flood-related data, information, and applications. A large volume of real-time observational data from a variety of sensors and remote sensing resources (radars, rain gauges, stream sensors, etc.) and flood inundation models is staged in a user-friendly map environment that is accessible to the general public. By providing cross-sectional analyses between HAZUS-MH and IFIS datasets, emergency managers can evaluate flood damage during flood events more easily and under real-time conditions. By matching data from the HAZUS-MH census tract layer with IFC gauges, decision makers can observe and evaluate the economic effects of flooding.
The system will also provide visualization of the data using augmented reality for see-through displays. Emergency management experts can take advantage of this visualization mode to manage flood response activities in real time. The forecast system developed by the Iowa Flood Center will also be used to predict the probable damage of a flood.

  14. Interactive Visualization and Analysis of Geospatial Data Sets - TrikeND-iGlobe

    NASA Astrophysics Data System (ADS)

    Rosebrock, Uwe; Hogan, Patrick; Chandola, Varun

    2013-04-01

The visualization of scientific datasets is becoming an ever-increasing challenge as advances in computing technologies have enabled scientists to build high-resolution climate models that produce petabytes of climate data. Interrogating and analyzing these large datasets in real time is a task that pushes the boundaries of computing hardware and software. But integrating climate datasets with geospatial data requires a considerable amount of effort and close familiarity with various data formats and projection systems, which has prevented widespread utilization outside the climate community. TrikeND-iGlobe is a sophisticated software tool that bridges this gap, allowing easy integration of climate datasets with geospatial datasets and providing sophisticated visualization and analysis capabilities. The objective for TrikeND-iGlobe is the continued building of an open-source 4D virtual globe application using NASA World Wind technology that integrates analysis of climate model outputs with remote sensing observations as well as demographic and environmental data sets. This will facilitate a better understanding of global and regional phenomena, and the impact analysis of climate extreme events. The critical aim is real-time interactive interrogation. At the data-centric level, the primary aim is to enable the user to interact with the data in real time for the purpose of analysis, locally or remotely. TrikeND-iGlobe provides the basis for the incorporation of modular tools that provide extended interactions with the data, including sub-setting, aggregation, re-shaping, time series analysis methods, and animation to produce publication-quality imagery. TrikeND-iGlobe may be run locally or can be accessed via a web interface supported by high-performance visualization compute nodes placed close to the data.
It supports visualizing heterogeneous data formats: traditional geospatial datasets along with scientific datasets with geographic coordinates (NetCDF, HDF, etc.). It also supports multiple data access mechanisms, including HTTP, FTP, WMS, WCS, and the THREDDS Data Server (for NetCDF data). For scientific data, TrikeND-iGlobe supports various visualization capabilities, including animations and vector field visualization. TrikeND-iGlobe is a collaborative open-source project; contributors include NASA (ARC-PX), ORNL (Oak Ridge National Laboratory), Unidata, Kansas University, CSIRO CMAR Australia, and Geoscience Australia.

  15. Fast Tracking Data to Informed Decisions: An Advanced Information System to Improve Environmental Understanding and Management (Invited)

    NASA Astrophysics Data System (ADS)

    Minsker, B. S.; Myers, J.; Liu, Y.; Bajcsy, P.

    2010-12-01

Emerging sensing and information technologies are rapidly creating a new paradigm for environmental research and management, in which data from multiple sensors and information sources can guide real-time adaptive observation and decision making. This talk will provide an overview of emerging cyberinfrastructure and three case studies that illustrate its potential: combined sewer overflows in Chicago, hypoxia in Corpus Christi Bay, Texas, and sustainable agriculture in Illinois. An advanced information system for real-time decision making and visual geospatial analytics will be presented as an example of cyberinfrastructure that enables easier implementation of numerous real-time applications.

  16. PC-PVT 2.0: An updated platform for psychomotor vigilance task testing, analysis, prediction, and visualization.

    PubMed

    Reifman, Jaques; Kumar, Kamal; Khitrov, Maxim Y; Liu, Jianbo; Ramakrishnan, Sridhar

    2018-07-01

The psychomotor vigilance task (PVT) has been widely used to assess the effects of sleep deprivation on human neurobehavioral performance. To facilitate research in this field, we previously developed the PC-PVT, a freely available software system analogous to the "gold-standard" PVT-192 that, in addition to allowing for simple visual reaction time (RT) tests, also allows for near real-time PVT analysis, prediction, and visualization on a personal computer (PC). Here we present PC-PVT 2.0 for the Windows 10 operating system, which has the capability to couple the PVT tests of a study protocol with the study's sleep/wake and caffeine schedules, and to make real-time individualized predictions of PVT performance for such schedules. We characterized the accuracy and precision of the software in measuring RT, using 44 distinct combinations of PC hardware system configurations. We found that 15 system configurations measured RTs with an average delay of less than 10 ms, an error comparable to that of the PVT-192. To achieve such small delays, the system configuration should always use a gaming mouse as the means to respond to visual stimuli. We recommend using a discrete graphics processing unit for desktop PCs and an external monitor for laptop PCs. This update integrates a study's sleep/wake and caffeine schedules with the testing software, facilitating testing and outcome visualization, and provides near real-time individualized PVT predictions for any sleep-loss condition considering caffeine effects. The software, with its enhanced PVT analysis, visualization, and prediction capabilities, can be freely downloaded from https://pcpvt.bhsai.org. Published by Elsevier B.V.
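The kind of outcome metrics such a platform computes from a session's reaction times can be sketched as follows. The 100 ms false-start cutoff and 500 ms lapse threshold are common PVT conventions, not constants taken from PC-PVT 2.0:

```python
def pvt_summary(rts_ms):
    """Summarize one PVT session with conventional outcome metrics.

    rts_ms: reaction times in milliseconds. Responses under 100 ms are
    treated as false starts; RTs of 500 ms or more count as lapses
    (common PVT practice, not a PC-PVT-specific rule).
    """
    valid = [rt for rt in rts_ms if rt >= 100]            # drop false starts
    lapses = sum(1 for rt in valid if rt >= 500)          # count lapses
    mean_rt = sum(valid) / len(valid)                     # mean RT (ms)
    mean_speed = sum(1000.0 / rt for rt in valid) / len(valid)  # mean 1/RT (1/s)
    return {"n": len(valid), "mean_rt_ms": mean_rt,
            "lapses": lapses, "mean_speed_per_s": mean_speed}
```

Mean response speed (1/RT) is often preferred over mean RT in sleep-loss studies because it is less dominated by a few very long lapses.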

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duchaineau, M.; Wolinsky, M.; Sigeti, D.E.

Real-time terrain rendering for interactive visualization remains a demanding task. We present a novel algorithm with several advantages over previous methods: our method is unusually stingy with polygons yet achieves real-time performance and is scalable to arbitrary regions and resolutions. The method provides a continuous terrain mesh of specified triangle count having provably minimum error in restricted but reasonably general classes of permissible meshes and error metrics. Our method provides an elegant solution to guaranteeing certain elusive types of consistency in scenes produced by multiple scene generators which share a common finest-resolution database but which otherwise operate entirely independently. This consistency is achieved by exploiting the freedom of choice of error metric allowed by the algorithm to provide, for example, multiple exact lines-of-sight in real time. Our methods rely on an off-line pre-processing phase to construct a multi-scale data structure consisting of triangular terrain approximations enhanced ("thickened") with world-space error information. In real time, this error data is efficiently transformed into screen space, where it is used to guide a greedy top-down triangle subdivision algorithm which produces the desired minimal-error continuous terrain mesh. Our algorithm has been implemented and operates at real-time rates.
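The greedy top-down subdivision step can be sketched with a priority queue keyed on screen-space error. This is an illustrative reduction of the idea with hypothetical triangle/error callbacks; the published algorithm additionally guarantees crack-free continuous meshes via split dependencies, which this sketch omits:

```python
import heapq
import itertools

def greedy_refine(roots, children_fn, budget):
    """Greedy top-down refinement: repeatedly split the triangle whose
    screen-space error is largest until the triangle budget is reached.

    roots: list of (triangle, error) pairs for the coarsest mesh.
    children_fn(tri): returns the two (child, error) pairs produced by
    splitting `tri`, or None if `tri` is at finest resolution.
    """
    tick = itertools.count()   # tie-breaker so the heap never compares triangles
    heap = [(-err, next(tick), tri) for tri, err in roots]
    heapq.heapify(heap)
    leaves, n = [], len(heap)
    while heap and n < budget:
        _, _, tri = heapq.heappop(heap)
        kids = children_fn(tri)
        if kids is None:       # finest resolution: keep as-is
            leaves.append(tri)
            continue
        n += 1                 # one triangle replaced by two
        for child, err in kids:
            heapq.heappush(heap, (-err, next(tick), child))
    return leaves + [tri for _, _, tri in heap]
```

Because the largest-error triangle is always split first, the resulting mesh of `budget` triangles greedily minimizes the maximum remaining error, which is the behavior the abstract describes.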

  18. Image Analysis via Soft Computing: Prototype Applications at NASA KSC and Product Commercialization

    NASA Technical Reports Server (NTRS)

    Dominguez, Jesus A.; Klinko, Steve

    2011-01-01

This slide presentation reviews the use of "soft computing," which differs from "hard computing" in that it is more tolerant of imprecision, partial truth, uncertainty, and approximation, and its use in image analysis. Soft computing provides flexible information processing to handle real-life ambiguous situations and achieve tractability, robustness, low solution cost, and a closer resemblance to human decision making. Several systems are or have been developed: Fuzzy Reasoning Edge Detection (FRED), Fuzzy Reasoning Adaptive Thresholding (FRAT), image enhancement techniques, and visual/pattern recognition. These systems are compared with examples that show the effectiveness of each. The NASA applications reviewed are Real-Time (RT) Anomaly Detection, Real-Time (RT) Moving Debris Detection, and the Columbia investigation. The RT anomaly detection example reviews the case of a damaged cable for the emergency egress system. The use of these techniques is further illustrated in the Columbia investigation with the location and detection of foam debris. There are several applications in commercial use: image enhancement, human screening and privacy protection, visual inspection, 3D heart visualization, tumor detection, and X-ray image enhancement.

  19. Spectral analysis method and sample generation for real time visualization of speech

    NASA Astrophysics Data System (ADS)

    Hobohm, Klaus

A method for translating speech signals into optical patterns, characterized by high sound discriminability and learnability and designed to provide deaf persons with feedback for controlling their speech, is presented. Important properties of the speech production and perception processes, and of the organs involved in these mechanisms, are recalled in order to define requirements for speech visualization. It is established that the spectral representation must match the time, frequency, and amplitude resolution of hearing, and that continuous variations of the acoustic parameters of the speech signal must be depicted by continuous variations of the images. A color table was developed for dynamic illustration, and sonograms were generated with five spectral analysis methods, including Fourier transformation and linear predictive coding. To evaluate sonogram quality, test subjects had to recognize consonant-vowel-consonant words; an optimized analysis method was achieved with a fast Fourier transformation and a postprocessor. A hardware concept for a real-time speech visualization system, based on multiprocessor technology in a personal computer, is presented.
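The Fourier-based analysis path can be sketched as a short-time spectral pipeline. This is a generic illustration with assumed parameters (256-sample Hann frames, 50% hop), not the five analysis methods or the postprocessor evaluated in the paper:

```python
import numpy as np

def sonogram(signal, frame_len=256, hop=128):
    """Magnitude sonogram via short-time Fourier analysis: Hann-windowed
    frames, one FFT per frame, log-magnitude in dB.

    Returns an array of shape (n_frames, frame_len // 2 + 1), i.e.
    time along axis 0 and frequency along axis 1.
    """
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    spectra = np.abs(np.fft.rfft(frames, axis=1))   # one spectrum per frame
    return 20 * np.log10(spectra + 1e-12)           # dB scale for display

# Example: a 1 kHz tone sampled at 8 kHz concentrates its energy in
# frequency bin 1000 / (8000 / 256) = 32.
fs = 8000
t = np.arange(fs) / fs
S = sonogram(np.sin(2 * np.pi * 1000 * t))
```

Mapping each dB value through a color table, column by column as frames arrive, gives the kind of continuously updating optical pattern the abstract describes.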

  20. Near Real Time Integration of Satellite and Radar Data for Probabilistic Nearcasting of Severe Weather

    NASA Astrophysics Data System (ADS)

    Pilone, D.; Quinn, P.; Mitchell, A. E.; Baynes, K.; Shum, D.

    2014-12-01

This talk introduces the audience to some of the very real challenges associated with visualizing data from disparate sources, as encountered during the development of real-world applications. In addition to the fundamental challenges of dealing with the data and imagery, this talk discusses usability problems encountered while trying to provide interactive and user-friendly visualization tools. By the end of this talk, the audience will be aware of some of the pitfalls of data visualization, along with tools and techniques to help mitigate them. There are many sources of variable-resolution visualizations of science data available to application developers, including NASA's Global Imagery Browse Services (GIBS); however, integrating and leveraging visualizations in modern applications faces a number of challenges, including:
- Varying visualized Earth "tile sizes," resulting in challenges merging disparate sources
- Multiple visualization frameworks and toolkits with varying strengths and weaknesses
- Global composite imagery vs. imagery matching EOSDIS granule distribution
- Challenges visualizing geographically overlapping data with different temporal bounds
- User interaction with overlapping or collocated data
- Complex data boundaries and shapes combined with multi-orbit data and polar projections
- Discovering the availability of visualizations and the specific parameters, color palettes, and configurations used to produce them
In addition to discussing the challenges and approaches involved in visualizing disparate data, we will discuss solutions and components we will make available as open source to encourage reuse and accelerate application development.

  1. Self-regulation of inter-hemispheric visual cortex balance through real-time fMRI neurofeedback training.

    PubMed

    Robineau, F; Rieger, S W; Mermoud, C; Pichon, S; Koush, Y; Van De Ville, D; Vuilleumier, P; Scharnowski, F

    2014-10-15

    Recent advances in neurofeedback based on real-time functional magnetic resonance imaging (fMRI) allow for learning to control spatially localized brain activity in the range of millimeters across the entire brain. Real-time fMRI neurofeedback studies have demonstrated the feasibility of self-regulating activation in specific areas that are involved in a variety of functions, such as perception, motor control, language, and emotional processing. In most of these previous studies, participants trained to control activity within one region of interest (ROI). In the present study, we extended the neurofeedback approach by now training healthy participants to control the interhemispheric balance between their left and right visual cortices. This was accomplished by providing feedback based on the difference in activity between a target visual ROI and the corresponding homologue region in the opposite hemisphere. Eight out of 14 participants learned to control the differential feedback signal over the course of 3 neurofeedback training sessions spread over 3 days, i.e., they produced consistent increases in the visual target ROI relative to the opposite visual cortex. Those who learned to control the differential feedback signal were subsequently also able to exert that control in the absence of neurofeedback. Such learning to voluntarily control the balance between cortical areas of the two hemispheres might offer promising rehabilitation approaches for neurological or psychiatric conditions associated with pathological asymmetries in brain activity patterns, such as hemispatial neglect, dyslexia, or mood disorders. Copyright © 2014 Elsevier Inc. All rights reserved.
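The differential feedback signal can be illustrated as the percent-signal-change difference between the target ROI and its contralateral homologue. This is a hypothetical reduction of the approach described above, not the authors' exact computation:

```python
def differential_feedback(target, homologue, target_base, homologue_base):
    """Differential neurofeedback signal: percent signal change (PSC) of
    the target visual ROI minus the PSC of its contralateral homologue.

    target, homologue: lists of recent mean ROI activity samples.
    target_base, homologue_base: baseline activity for each ROI.
    A positive value means the target ROI is up-regulated relative to
    the opposite hemisphere, which is what participants train toward.
    """
    psc_target = 100.0 * (sum(target) / len(target) - target_base) / target_base
    psc_homologue = (100.0 * (sum(homologue) / len(homologue) - homologue_base)
                     / homologue_base)
    return psc_target - psc_homologue
```

Feeding this value back (e.g., as a thermometer display) rewards inter-hemispheric imbalance rather than global activation, which is the key difference from single-ROI neurofeedback.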

  2. A Review on Real-Time 3D Ultrasound Imaging Technology

    PubMed Central

    Zeng, Zhaozheng

    2017-01-01

Real-time three-dimensional (3D) ultrasound (US) has attracted increasing attention in medical research because it provides interactive feedback that helps clinicians acquire high-quality images as well as timely spatial information about the scanned area, and it is therefore necessary in intraoperative ultrasound examinations. Many publications have addressed real-time or near real-time visualization of 3D ultrasound using volumetric probes or the routinely used two-dimensional (2D) probes. So far, a review of how to design an interactive system with appropriate processing algorithms has been missing, resulting in a lack of systematic understanding of the relevant technology. In this article, previous and recent work on designing real-time or near real-time 3D ultrasound imaging systems is reviewed. Specifically, the data acquisition techniques, reconstruction algorithms, volume rendering methods, and clinical applications are presented. Moreover, the advantages and disadvantages of state-of-the-art approaches are discussed in detail. PMID:28459067

  3. A Review on Real-Time 3D Ultrasound Imaging Technology.

    PubMed

    Huang, Qinghua; Zeng, Zhaozheng

    2017-01-01

Real-time three-dimensional (3D) ultrasound (US) has attracted increasing attention in medical research because it provides interactive feedback that helps clinicians acquire high-quality images as well as timely spatial information about the scanned area, and it is therefore necessary in intraoperative ultrasound examinations. Many publications have addressed real-time or near real-time visualization of 3D ultrasound using volumetric probes or the routinely used two-dimensional (2D) probes. So far, a review of how to design an interactive system with appropriate processing algorithms has been missing, resulting in a lack of systematic understanding of the relevant technology. In this article, previous and recent work on designing real-time or near real-time 3D ultrasound imaging systems is reviewed. Specifically, the data acquisition techniques, reconstruction algorithms, volume rendering methods, and clinical applications are presented. Moreover, the advantages and disadvantages of state-of-the-art approaches are discussed in detail.

  4. A reconfigurable visual-programming library for real-time closed-loop cellular electrophysiology

    PubMed Central

    Biró, István; Giugliano, Michele

    2015-01-01

Most software platforms for cellular electrophysiology are limited in terms of flexibility, hardware support, ease of use, or re-configuration and adaptation for non-expert users. Moreover, advanced experimental protocols requiring real-time closed-loop operation to investigate excitability, plasticity, and dynamics are largely inaccessible to users without moderate to substantial computer proficiency. Here we present an approach based on MATLAB/Simulink that exploits the benefits of LEGO-like visual programming and configuration, combined with a small but easily extensible library of functional software components. We provide and validate several examples implementing conventional and more sophisticated experimental protocols, such as dynamic clamp or the combined use of intracellular and extracellular methods involving closed-loop real-time control. The functionality of each of these examples is demonstrated with relevant experiments. These can be used as a starting point to create and support a larger variety of electrophysiological tools and methods, hopefully extending the range of default techniques and protocols currently employed in experimental labs across the world. PMID:26157385

  5. A real-time inverse quantised transform for multi-standard with dynamic resolution support

    NASA Astrophysics Data System (ADS)

    Sun, Chi-Chia; Lin, Chun-Ying; Zhang, Ce

    2016-06-01

In this paper, a real-time configurable intellectual property (IP) core is presented for the image/video decoding process, compatible with the MPEG-4 Visual and H.264/AVC standards. The inverse quantised discrete cosine and integer transform can perform both the inverse quantised discrete cosine transform and the inverse quantised inverse integer transform using only shift and add operations. Meanwhile, the COordinate Rotation DIgital Computer (CORDIC) iterations and compensation steps are adjustable in order to trade off video compression quality against data throughput. The implementations are embedded in the publicly available XVID codec 1.2.2 for the standard MPEG-4 Visual and in the H.264/AVC reference software JM 16.1, where experimental results show that the balance between computational complexity and video compression quality is retained. Finally, FPGA synthesis results show that the proposed IP core offers low hardware cost and provides real-time performance for Full HD and 4K-2K video decoding.

  6. Stereoscopic augmented reality for laparoscopic surgery.

    PubMed

    Kang, Xin; Azizian, Mahdi; Wilson, Emmanuel; Wu, Kyle; Martin, Aaron D; Kane, Timothy D; Peters, Craig A; Cleary, Kevin; Shekhar, Raj

    2014-07-01

Conventional laparoscopes provide a flat representation of the three-dimensional (3D) operating field and are incapable of visualizing internal structures located beneath visible organ surfaces. Computed tomography (CT) and magnetic resonance (MR) images are difficult to fuse in real time with laparoscopic views due to the deformable nature of soft-tissue organs. Utilizing emerging camera technology, we have developed a real-time stereoscopic augmented-reality (AR) system for laparoscopic surgery by merging live laparoscopic ultrasound (LUS) with stereoscopic video. The system creates two new visual cues: (1) perception of true depth with improved understanding of 3D spatial relationships among anatomical structures, and (2) visualization of critical internal structures along with a more comprehensive visualization of the operating field. The stereoscopic AR system has been designed for near-term clinical translation with seamless integration into the existing surgical workflow. It is composed of a stereoscopic vision system, a LUS system, and an optical tracker. Specialized software processes streams of imaging data from the tracked devices and registers them in real time. The resulting two ultrasound-augmented video streams (one for the left eye and one for the right eye) give a live stereoscopic AR view of the operating field. The team conducted a series of stereoscopic AR interrogations of the liver, gallbladder, biliary tree, and kidneys in two swine. The preclinical studies demonstrated the feasibility of the stereoscopic AR system during in vivo procedures. Major internal structures could be easily identified. The system exhibited unobservable latency with acceptable image-to-video registration accuracy. We presented the first in vivo use of a complete system with stereoscopic AR visualization capability. This new capability introduces new visual cues and enhances visualization of the surgical anatomy.
The system shows promise to improve the precision and expand the capacity of minimally invasive laparoscopic surgeries.

  7. Real-time detection and discrimination of visual perception using electrocorticographic signals

    NASA Astrophysics Data System (ADS)

    Kapeller, C.; Ogawa, H.; Schalk, G.; Kunii, N.; Coon, W. G.; Scharinger, J.; Guger, C.; Kamada, K.

    2018-06-01

    Objective. Several neuroimaging studies have demonstrated that the ventral temporal cortex contains specialized regions that process visual stimuli. This study investigated the spatial and temporal dynamics of electrocorticographic (ECoG) responses to different types and colors of visual stimulation that were presented to four human participants, and demonstrated a real-time decoder that detects and discriminates responses to untrained natural images. Approach. ECoG signals from the participants were recorded while they were shown colored and greyscale versions of seven types of visual stimuli (images of faces, objects, bodies, line drawings, digits, and kanji and hiragana characters), resulting in 14 classes for discrimination (experiment I). Additionally, a real-time system asynchronously classified ECoG responses to faces, kanji and black screens presented via a monitor (experiment II), or to natural scenes (i.e. the face of an experimenter, natural images of faces and kanji, and a mirror) (experiment III). Outcome measures in all experiments included the discrimination performance across types based on broadband γ activity. Main results. Experiment I demonstrated an offline classification accuracy of 72.9% when discriminating among the seven types (without color separation). Further discrimination of grey versus colored images reached an accuracy of 67.1%. Discriminating all colors and types (14 classes) yielded an accuracy of 52.1%. In experiment II and III, the real-time decoder correctly detected 73.7% responses to face, kanji and black computer stimuli and 74.8% responses to presented natural scenes. Significance. Seven different types and their color information (either grey or color) could be detected and discriminated using broadband γ activity. Discrimination performance maximized for combined spatial-temporal information. 
The discrimination of stimulus color information provided the first ECoG-based evidence for color-related population-level cortical broadband γ responses in humans. Stimulus categories can be detected from their ECoG responses in real time, within 500 ms of stimulus onset.

  8. Visualization and Analysis for Near-Real-Time Decision Making in Distributed Workflows

    DOE PAGES

    Pugmire, David; Kress, James; Choi, Jong; ...

    2016-08-04

Data-driven science is becoming increasingly common and complex, and is placing tremendous stress on visualization and analysis frameworks. Data sources producing 10 GB per second (and more) are becoming increasingly commonplace in simulation, sensor, and experimental sciences. These data sources, which are often distributed around the world, must be analyzed by teams of scientists that are also distributed. Enabling scientists to view, query, and interact with such large volumes of data in near-real time requires a rich fusion of visualization and analysis techniques, middleware, and workflow systems. Here, this paper discusses initial research into visualization and analysis of distributed data workflows that enables scientists to make near-real-time decisions about large volumes of time-varying data.

  9. Practical, Real-Time, and Robust Watermarking on the Spatial Domain for High-Definition Video Contents

    NASA Astrophysics Data System (ADS)

    Kim, Kyung-Su; Lee, Hae-Yeoun; Im, Dong-Hyuck; Lee, Heung-Kyu

Commercial markets employ digital rights management (DRM) systems to protect valuable high-definition (HD) quality videos. DRM systems use watermarking to provide copyright protection and ownership authentication of multimedia contents. We propose a real-time video watermarking scheme for HD video in the uncompressed domain. In particular, our approach takes a practical perspective, satisfying perceptual quality, real-time processing, and robustness requirements. We simplify and optimize a human visual system mask for real-time performance and also apply a dithering technique for invisibility. Extensive experiments are performed to prove that the proposed scheme satisfies the invisibility, real-time processing, and robustness requirements against video processing attacks. We concentrate on video processing attacks that commonly occur when HD-quality videos are displayed on portable devices. These attacks include not only scaling and low bit-rate encoding, but also malicious attacks such as format conversion and frame rate change.
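The core embedding idea, a key-seeded pattern shaped by a perceptual mask and recovered by correlation, can be sketched as below. The crude local-activity mask and mean-based detector here are stand-ins; the paper's simplified HVS mask and dithering step are more elaborate:

```python
import numpy as np

def embed(frame, key, strength=2.0):
    """Additive spatial-domain watermark: a key-seeded +/-1 pattern scaled
    by a cheap activity mask (textured areas hide more distortion).
    `frame` is a grayscale image as a float array."""
    rng = np.random.default_rng(key)
    w = rng.choice((-1.0, 1.0), size=frame.shape)
    # Local activity proxy: absolute deviation from the global mean,
    # capped at 1.0. A real HVS mask uses local luminance and texture.
    mask = np.minimum(np.abs(frame - frame.mean()) / 32.0, 1.0)
    return frame + strength * mask * w, w

def detect(frame, key):
    """Correlate the mean-removed frame with the key's pattern; a clearly
    positive score indicates the watermark is present."""
    rng = np.random.default_rng(key)
    w = rng.choice((-1.0, 1.0), size=frame.shape)
    return float(np.mean((frame - frame.mean()) * w))
```

Because embedding and detection both touch every pixel exactly once, the scheme stays O(pixels) per frame, which is the property that makes spatial-domain schemes attractive for real-time HD use.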

  10. Effective real-time vehicle tracking using discriminative sparse coding on local patches

    NASA Astrophysics Data System (ADS)

    Chen, XiangJun; Ye, Feiyue; Ruan, Yaduan; Chen, Qimei

    2016-01-01

A visual tracking framework that provides an object detector and tracker, focused on effective and efficient visual tracking in surveillance for real-world intelligent transport system applications, is proposed. The framework casts the tracking task as problems of object detection, feature representation, and classification, which differs from appearance model-matching approaches. Through a feature representation called discriminative sparse coding on local patches (DSCLP), which trains a dictionary on clustered local patches sampled from both positive and negative datasets, the discriminative power and robustness are improved remarkably, making the method more robust in complex realistic settings with all kinds of degraded image quality. Moreover, by catching objects through one-time background subtraction, along with offline dictionary training, computation time is dramatically reduced, which enables the framework to achieve real-time tracking performance even in high-definition sequences with heavy traffic. Experimental results show that this work outperforms some state-of-the-art methods in terms of speed, accuracy, and robustness, and exhibits increased robustness in complex real-world scenarios with degraded image quality caused by vehicle occlusion, image blur due to rain or fog, and changes in viewpoint or scale.
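The one-time background subtraction stage that catches candidate objects can be sketched as below; the single-bounding-box output is a simplification (a real detector would label connected components to separate individual vehicles before patch extraction):

```python
import numpy as np

def catch_objects(frame, background, thresh=25):
    """Threshold the difference between the current frame and a static
    background model, and return the bounding box of changed pixels as
    (x0, y0, x1, y1), or None if nothing moved. Grayscale sketch only."""
    mask = np.abs(frame.astype(int) - background.astype(int)) > thresh
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return xs.min(), ys.min(), xs.max(), ys.max()
```

Patches cropped from such boxes would then be encoded against the offline-trained dictionary and classified, which is where the DSCLP representation enters the pipeline.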

  11. OCT-based angiography in real time with hand-held probe

    NASA Astrophysics Data System (ADS)

    Gelikonov, Grigory V.; Moiseev, Alexander A.; Ksenofontov, Sergey Y.; Terpelov, Dmitry A.; Gelikonov, Valentine M.

    2018-03-01

This work is dedicated to the development of an OCT system capable of visualizing the blood vessel network for everyday clinical use. The following problems were solved during development: compensation of natural tissue displacements induced by the contact scanning mode and by physiological motion of patients (e.g., respiratory and cardiac motions), and online visualization of the vessel network to provide feedback for the system operator.

  12. A Lyapunov Function Based Remedial Action Screening Tool Using Real-Time Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitra, Joydeep; Ben-Idris, Mohammed; Faruque, Omar

    This report summarizes the outcome of a research project that comprised the development of a Lyapunov function based remedial action screening tool using real-time data (L-RAS). The L-RAS is an advanced computational tool that is intended to assist system operators in making real-time redispatch decisions to preserve power grid stability. The tool relies on screening contingencies using a homotopy method based on Lyapunov functions to avoid, to the extent possible, the use of time domain simulations. This enables transient stability evaluation at real-time speed without the use of massively parallel computational resources. The project combined the following components. 1. Development of a methodology for contingency screening using a homotopy method based on Lyapunov functions and real-time data. 2. Development of a methodology for recommending remedial actions based on the screening results. 3. Development of a visualization and operator interaction interface. 4. Testing of the screening tool, validation of control actions, and demonstration of project outcomes on a representative real system simulated on a Real-Time Digital Simulator (RTDS) cluster. The project was led by Michigan State University (MSU), where the theoretical models, including homotopy-based screening, trajectory correction using real-time data, and remedial action, were developed and implemented in the form of research-grade software. Los Alamos National Laboratory (LANL) contributed to the development of energy margin sensitivity dynamics, which constituted a part of the remedial action portfolio. Florida State University (FSU) and Southern California Edison (SCE) developed a model of the SCE system that was implemented on FSU's RTDS cluster to simulate real-time data, which was streamed over the internet to MSU, where the L-RAS tool was executed; remedial actions were communicated back to FSU to execute stabilizing controls on the simulated system.
    LCG Consulting developed the visualization and operator interaction interface, based on specifications provided by MSU. The project was performed from October 2012 to December 2016, at the end of which the L-RAS tool, as described above, was completed and demonstrated. The project resulted in the following innovations and contributions: (a) the L-RAS software prototype, tested on a simulated system, vetted by utility personnel, and potentially ready for wider testing and commercialization; (b) an RTDS-based test bed that can be used for future research in the field; (c) a suite of breakthrough theoretical contributions to the field of power system stability and control; and (d) a new tool for visualization of power system stability margins. While detailed descriptions of the development and implementation of the various project components have been provided in the quarterly reports, this final report provides an overview of the complete project, demonstrated using public domain test systems commonly used in the literature. The SCE system, and demonstrations thereon, are not included in this report due to Critical Energy Infrastructure Information (CEII) restrictions.
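
    The project's homotopy-based screening itself is not detailed in this summary. Purely as an illustration of the underlying idea of Lyapunov (transient energy) function screening, the sketch below computes an energy margin for a textbook single-machine infinite-bus model: a contingency screens as stable when the system energy at fault clearing is below the critical energy at the controlling unstable equilibrium. All parameter values are hypothetical and the function names are ours:

```python
import math

def energy(delta, omega, M, Pm, Pmax, delta_s):
    """Transient energy V = kinetic + potential for a single-machine
    infinite bus with inertia M, mechanical power Pm, peak transfer Pmax."""
    ke = 0.5 * M * omega**2
    pe = -Pm * (delta - delta_s) - Pmax * (math.cos(delta) - math.cos(delta_s))
    return ke + pe

def energy_margin(delta_c, omega_c, M, Pm, Pmax):
    """Critical energy minus energy at fault clearing; > 0 screens as stable."""
    delta_s = math.asin(Pm / Pmax)   # stable equilibrium angle
    delta_u = math.pi - delta_s      # controlling unstable equilibrium
    v_cr = energy(delta_u, 0.0, M, Pm, Pmax, delta_s)
    v_cl = energy(delta_c, omega_c, M, Pm, Pmax, delta_s)
    return v_cr - v_cl

# Hypothetical machine: mild disturbance vs. severe one at fault clearing
M, Pm, Pmax = 0.1, 0.8, 1.6
print(energy_margin(0.7, 0.5, M, Pm, Pmax) > 0)   # mild: screens stable
print(energy_margin(2.2, 4.0, M, Pm, Pmax) > 0)   # severe: screens unstable
```

    The attraction of such a direct method for an L-RAS-style tool is exactly what the report states: the margin is an algebraic evaluation, so no time domain simulation is needed for the screening pass.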

  13. Learning Reverse Engineering and Simulation with Design Visualization

    NASA Technical Reports Server (NTRS)

    Hemsworth, Paul J.

    2018-01-01

    The Design Visualization (DV) group supports work at the Kennedy Space Center by utilizing metrology data with Computer-Aided Design (CAD) models and simulations to provide accurate visual representations that aid in decision-making. The capability to measure and simulate objects in real time helps to predict and avoid potential problems before they become expensive in addition to facilitating the planning of operations. I had the opportunity to work on existing and new models and simulations in support of DV and NASA’s Exploration Ground Systems (EGS).

  14. A low cost real-time motion tracking approach using webcam technology.

    PubMed

    Krishnan, Chandramouli; Washabaugh, Edward P; Seetharaman, Yogesh

    2015-02-05

    Physical therapy is an important component of gait recovery for individuals with locomotor dysfunction. There is a growing body of evidence that suggests that incorporating a motor learning task through visual feedback of movement trajectory is a useful approach to facilitate therapeutic outcomes. Visual feedback is typically provided by recording the subject's limb movement patterns using a three-dimensional motion capture system and displaying it in real-time using customized software. However, this approach can seldom be used in the clinic because of the technical expertise required to operate this device and the cost involved in procuring a three-dimensional motion capture system. In this paper, we describe a low cost two-dimensional real-time motion tracking approach using a simple webcam and an image processing algorithm in LabVIEW Vision Assistant. We also evaluated the accuracy of this approach using a high precision robotic device (Lokomat) across various walking speeds. Further, the reliability and feasibility of real-time motion-tracking were evaluated in healthy human participants. The results indicated that the measurements from the webcam tracking approach were reliable and accurate. Experiments on human subjects also showed that participants could utilize the real-time kinematic feedback generated from this device to successfully perform a motor learning task while walking on a treadmill. These findings suggest that the webcam motion tracking approach is a feasible low cost solution to perform real-time movement analysis and training. Copyright © 2014 Elsevier Ltd. All rights reserved.
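
    The paper's implementation uses LabVIEW Vision Assistant; purely as an illustrative sketch of how a two-dimensional kinematic feedback signal can be derived from tracked marker centroids, one might compute a joint angle from three marker positions (the coordinates below are hypothetical pixel values, not data from the study):

```python
import math

def joint_angle(proximal, joint, distal):
    """Included angle (degrees) at `joint` between the two limb segments,
    computed from 2-D marker centroids via the dot product."""
    v1 = (proximal[0] - joint[0], proximal[1] - joint[1])
    v2 = (distal[0] - joint[0], distal[1] - joint[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

# Hypothetical hip, knee and ankle marker centroids (pixels) from one frame:
# a nearly straight leg gives a knee angle close to 180 degrees.
print(round(joint_angle((320, 100), (330, 200), (320, 300)), 1))   # -> 168.6
```

    Streaming such an angle to a display at camera frame rate is what turns a plain webcam into the real-time feedback device the abstract describes.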

  15. A low cost real-time motion tracking approach using webcam technology

    PubMed Central

    Krishnan, Chandramouli; Washabaugh, Edward P.; Seetharaman, Yogesh

    2014-01-01

    Physical therapy is an important component of gait recovery for individuals with locomotor dysfunction. There is a growing body of evidence that suggests that incorporating a motor learning task through visual feedback of movement trajectory is a useful approach to facilitate therapeutic outcomes. Visual feedback is typically provided by recording the subject’s limb movement patterns using a three-dimensional motion capture system and displaying it in real-time using customized software. However, this approach can seldom be used in the clinic because of the technical expertise required to operate this device and the cost involved in procuring a three-dimensional motion capture system. In this paper, we describe a low cost two-dimensional real-time motion tracking approach using a simple webcam and an image processing algorithm in LabVIEW Vision Assistant. We also evaluated the accuracy of this approach using a high precision robotic device (Lokomat) across various walking speeds. Further, the reliability and feasibility of real-time motion-tracking were evaluated in healthy human participants. The results indicated that the measurements from the webcam tracking approach were reliable and accurate. Experiments on human subjects also showed that participants could utilize the real-time kinematic feedback generated from this device to successfully perform a motor learning task while walking on a treadmill. These findings suggest that the webcam motion tracking approach is a feasible low cost solution to perform real-time movement analysis and training. PMID:25555306

  16. Real-time, haptics-enabled simulator for probing ex vivo liver tissue.

    PubMed

    Lister, Kevin; Gao, Zhan; Desai, Jaydev P

    2009-01-01

    The advent of complex surgical procedures has driven the need for realistic surgical training simulators. Comprehensive simulators that provide realistic visual and haptic feedback during surgical tasks are required to familiarize surgeons with the procedures they are to perform. Complex organ geometry inherent to biological tissues and intricate material properties drive the need for finite element methods to assure accurate tissue displacement and force calculations. Advances in real-time finite element methods have not reached the state where they are applicable to soft tissue surgical simulation. Therefore, a real-time, haptics-enabled simulator for probing of soft tissue has been developed that utilizes preprocessed finite element data (derived from an accurate constitutive model of the soft tissue, obtained from carefully collected experimental data) to accurately replicate the probing task in real time.

  17. Towards human-controlled, real-time shape sensing based flexible needle steering for MRI-guided percutaneous therapies.

    PubMed

    Li, Meng; Li, Gang; Gonenc, Berk; Duan, Xingguang; Iordachita, Iulian

    2017-06-01

    Accurate needle placement into soft tissue is essential to percutaneous prostate cancer diagnosis and treatment procedures. This paper discusses the steering of a 20 gauge (G) needle integrated with three sets of fiber Bragg grating (FBG) sensors. A fourth-order polynomial shape reconstruction method is introduced and compared with previous approaches. To control the needle, a bicycle-model-based navigation method is developed to provide visual guidance lines for clinicians. A real-time model updating method is proposed for needle steering inside inhomogeneous tissue. A series of experiments was performed to evaluate the proposed needle shape reconstruction, visual guidance and real-time model updating methods. Targeting experiments were performed in soft plastic phantoms and in vitro tissues with insertion depths ranging between 90 and 120 mm. Average targeting errors calculated from the acquired camera images were 0.40 ± 0.35 mm in homogeneous plastic phantoms, 0.61 ± 0.45 mm in multilayer plastic phantoms and 0.69 ± 0.25 mm in ex vivo tissue. The results endorse the feasibility and accuracy of the needle shape reconstruction and visual guidance methods developed in this work. The approach implemented for the multilayer phantom study could facilitate accurate needle placement in real inhomogeneous tissues. Copyright © 2016 John Wiley & Sons, Ltd.
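
    A fourth-order deflection polynomial arises naturally if the three curvature samples (one per FBG sensor station) are fitted with a quadratic and integrated twice under a small-deflection, clamped-base assumption (curvature ≈ y''(s), with y(0) = y'(0) = 0). This is a plausible reading of the reconstruction, not necessarily the authors' exact method; the sensor locations and curvature values below are hypothetical:

```python
import numpy as np

def reconstruct_deflection(s_sensors, curvatures, s_eval):
    """Fit a quadratic to three curvature samples, then integrate twice
    (clamped base: y(0) = y'(0) = 0) to get a 4th-order deflection polynomial."""
    kappa = np.polyfit(s_sensors, curvatures, 2)   # y''(s) as a quadratic
    slope = np.polyint(kappa)                      # y'(s), cubic
    deflection = np.polyint(slope)                 # y(s), quartic
    return np.polyval(deflection, s_eval)

# Hypothetical FBG stations (mm from the needle base) and curvatures.
# Sanity check: constant curvature k should give y(s) close to k*s**2/2.
s = np.array([30.0, 60.0, 90.0])
k = 1e-3  # 1/mm
tip = reconstruct_deflection(s, np.array([k, k, k]), 120.0)
print(round(float(tip), 2))   # -> 7.2  (i.e., 1e-3 * 120**2 / 2 mm)
```

    Because the fit and the two integrations are closed-form polynomial operations, the reconstruction is cheap enough to run at the sensor interrogation rate, which is what real-time steering requires.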

  18. Real-time visual biofeedback during weight bearing improves therapy compliance in patients following lower extremity fractures.

    PubMed

    Raaben, Marco; Holtslag, Herman R; Leenen, Luke P H; Augustine, Robin; Blokhuis, Taco J

    2018-01-01

    Individuals with lower extremity fractures are often instructed on how much weight to bear on the affected extremity. Previous studies have shown limited therapy compliance in weight bearing during rehabilitation. In this study we investigated the effect of real-time visual biofeedback on weight bearing in individuals with lower extremity fractures in two conditions: full weight bearing and touch-down weight bearing. Eleven participants with full weight bearing and 12 participants with touch-down weight bearing after lower extremity fractures were measured with an ambulatory biofeedback system. The participants first walked 15 m with the biofeedback system used only to register the weight bearing. The same protocol was then repeated with real-time visual feedback during weight bearing, allowing the participants to adapt their loading to the desired level and improve therapy compliance. In participants with full weight bearing, real-time visual biofeedback resulted in a significant increase in loading, from 50.9 ± 7.51% bodyweight (BW) without feedback to 63.2 ± 6.74% BW with feedback (P = 0.0016). In participants with touch-down weight bearing, the exerted lower extremity load decreased from 16.7 ± 9.77 kg without feedback to 10.27 ± 4.56 kg with feedback (P = 0.0718). More importantly, the variance between individual steps significantly decreased after feedback (P = 0.018). Ambulatory monitoring of weight bearing after lower extremity fractures showed that therapy compliance is low, both in full and touch-down weight bearing. Real-time visual biofeedback resulted in significantly higher peak loads in full weight bearing and increased accuracy of individual steps in touch-down weight bearing. Real-time visual biofeedback therefore results in improved therapy compliance after lower extremity fractures. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Cherenkov Video Imaging Allows for the First Visualization of Radiation Therapy in Real Time

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jarvis, Lesley A., E-mail: Lesley.a.jarvis@hitchcock.org; Norris Cotton Cancer Center at the Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire; Zhang, Rongxiao

    Purpose: To determine whether Cherenkov light imaging can visualize radiation therapy in real time during breast radiation therapy. Methods and Materials: An intensified charge-coupled device (CCD) camera was synchronized to the 3.25-μs radiation pulses of the clinical linear accelerator, with the intensifier gain set to ×100. Cherenkov images were acquired continuously (2.8 frames/s) during fractionated whole breast irradiation, with each frame an accumulation of 100 radiation pulses (approximately 5 monitor units). Results: The first patient images ever created are used to illustrate that Cherenkov emission can be visualized as a video during conditions typical for breast radiation therapy, even with complex treatment plans, mixed energies, and modulated treatment fields. Images were generated correlating to the superficial dose received by the patient and potentially to the location of the resulting skin reactions. Major blood vessels are visible in the images, providing the potential to use these as biological landmarks for improved geometric accuracy. The potential for this system to detect radiation therapy misadministrations, which can result from hardware malfunction or patient positioning setup errors during individual fractions, is shown. Conclusions: Cherenkoscopy is a unique method for visualizing surface dose, enabling real-time quality control. We propose that this system could detect radiation therapy errors in everyday clinical practice at a time when these errors can be corrected, resulting in improved safety and quality of radiation therapy.

  20. The development of real-time stability supports visual working memory performance: Young children's feature binding can be improved through perceptual structure.

    PubMed

    Simmering, Vanessa R; Wood, Chelsey M

    2017-08-01

    Working memory is a basic cognitive process that predicts higher-level skills. A central question in theories of working memory development is the generality of the mechanisms proposed to explain improvements in performance. Prior theories have been closely tied to particular tasks and/or age groups, limiting their generalizability. The cognitive dynamics theory of visual working memory development has been proposed to overcome this limitation. From this perspective, developmental improvements arise through the coordination of cognitive processes to meet demands of different behavioral tasks. This notion is described as real-time stability, and can be probed through experiments that assess how changing task demands impact children's performance. The current studies test this account by probing visual working memory for colors and shapes in a change detection task that compares detection of changes to new features versus swaps in color-shape binding. In Experiment 1, 3- to 4-year-old children showed impairments specific to binding swaps, as predicted by decreased real-time stability early in development; 5- to 6-year-old children showed a slight advantage on binding swaps, but 7- to 8-year-old children and adults showed no difference across trial types. Experiment 2 tested the proposed explanation of young children's binding impairment through added perceptual structure, which supported the stability and precision of feature localization in memory, a process key to detecting binding swaps. This additional structure improved young children's binding swap detection, but not new-feature detection or adults' performance. These results provide further evidence for the cognitive dynamics and real-time stability explanation of visual working memory development. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  1. Loop-Mediated Isothermal Amplification for Detection of Endogenous Sad1 Gene in Cotton: An Internal Control for Rapid Onsite GMO Testing.

    PubMed

    Singh, Monika; Bhoge, Rajesh K; Randhawa, Gurinderjit

    2018-04-20

    Background: Confirming the integrity of seed samples in powdered form is important prior to conducting a genetically modified organism (GMO) test. Rapid onsite methods may provide a technological solution to check for genetically modified (GM) events at ports of entry. In India, Bt cotton is the commercialized GM crop with four approved GM events; however, 59 GM events have been approved globally. GMO screening is required to test for authorized GM events. The identity and amplifiability of test samples could be ensured first by employing endogenous genes as an internal control. Objective: A rapid onsite detection method was developed for an endogenous reference gene, stearoyl acyl carrier protein desaturase (Sad1) of cotton, employing visual and real-time loop-mediated isothermal amplification (LAMP). Methods: The assays were performed at a constant temperature of 63°C for 30 min for visual LAMP and 62°C for 40 min for real-time LAMP. Positive amplification was visualized as a change in color from orange to green on addition of SYBR® Green or detected as real-time amplification curves. Results: Specificity of the LAMP assays was confirmed using a set of 10 samples. The LOD for visual LAMP was 0.1%, detecting 40 target copies, and for real-time LAMP 0.05%, detecting 20 target copies. Conclusions: The developed methods could be utilized to confirm the integrity of seed powder prior to conducting a GMO test for specific GM events of cotton. Highlights: LAMP assays for the endogenous Sad1 gene of cotton have been developed for use as an internal control for onsite GMO testing in cotton.

  2. A systematic review: the influence of real time feedback on wheelchair propulsion biomechanics.

    PubMed

    Symonds, Andrew; Barbareschi, Giulia; Taylor, Stephen; Holloway, Catherine

    2018-01-01

    Clinical guidelines recommend that, in order to minimize upper limb injury risk, wheelchair users adopt a semi-circular pattern with a slow cadence and a large push arc. This review examines whether real time feedback can be used to influence manual wheelchair propulsion biomechanics. Clinical trials and case series comparing the use of real time feedback against no feedback were included. A general review was performed and methodological quality assessed by two independent practitioners using the Downs and Black checklist. The review was completed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Six papers met the inclusion criteria. Selected studies involved 123 participants and analysed the effect of visual and, in one case, haptic feedback. Across the studies it was shown that participants were able to achieve significant changes in propulsion biomechanics when provided with real time feedback. However, targeting a single propulsion variable might lead to unwanted alterations in other parameters. Methodological assessment identified weaknesses in external validity. Visual feedback could be used to consistently increase push arc and decrease push rate, and may be the best focus for feedback training. Further investigation is required to assess such interventions during outdoor propulsion. Implications for Rehabilitation: Upper limb pain and injuries are common secondary disorders that negatively affect wheelchair users' physical activity and quality of life. Clinical guidelines suggest that manual wheelchair users should aim to propel with a semi-circular pattern, a low push rate, and a large push arc in order to minimise upper limb loading. Real time visual and haptic feedback are effective tools for improving propulsion biomechanics in both complete novices and experienced manual wheelchair users.

  3. Are Visual Informatics Actually Useful in Practice: A Study in a Film Studies Context

    NASA Astrophysics Data System (ADS)

    Mohamad Ali, Nazlena; Smeaton, Alan F.

    This paper describes our work examining whether providing a visual informatics application in an educational scenario, in particular video content analysis, actually yields real benefit in practice. We provide a new software tool in the domain of movie content analysis technologies for use by film studies students at Dublin City University, and we address the research question of measuring the 'benefit' students gain from using these technologies. We examine their real practices in studying for the module using our advanced application as compared with conventional DVD browsing of movie content. In carrying out this experiment, we found that with the new technologies students achieve better essay outcomes and higher satisfaction levels, and the mean time spent analyzing movies is longer.

  4. SU-D-BRF-05: A Novel System to Provide Real-Time Image-Guidance for Intrauterine Tandem Insertion and Placement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Price, M; Fontenot, J

    Purpose: To develop a system that provides real-time image-guidance for intrauterine tandem insertion and placement for brachytherapy. Methods: The conceptualized system consists of an intrauterine tandem with a transparent, lensed tip, a flexible miniature fiber optic scope, a light source, and an interface for CCD coupling. The tandem tip was designed to act as a lens providing a wide field-of-view (FOV) with minimal image distortion and a focal length appropriate for the application. The system is designed so that once inserted, the image-guidance component of the system can be removed and brachytherapy can be administered without interfering with source transport or disturbing tandem placement. Proof-of-principle studies were conducted to assess the conceptualized system's (1) lens functionality (clarity, focus and FOV) and (2) ability to visualize the cervical os of a female placed in the lithotomy position. Results: A prototype of this device was constructed using a commercial tandem modified to incorporate a transparent tip that internally coupled with a 1.9mm diameter fiber optic cable. The 900mm-long cable terminated at an interface that provided illumination as well as facilitated visualization of patient anatomy on a computer. The system provided a 23mm FOV with a focal length of 1cm and provided clear visualization of the cervix, cervical fornix and cervical os. The optical components of the system are easily removed without perturbing the position of a tandem placed in a common fixation clamp. Conclusion: Clinicians frequently encounter difficulty inserting an intrauterine tandem through the cervical os, circumventing fibrotic tissue or masses within the uterus, and positioning the tandem without perforating the uterus.
    To mitigate these challenges, we have designed and conducted proof-of-principle studies to discern the utility of a prototype device that provides real-time image-guidance for intrauterine tandem placement using fiber optic components.

  5. Real-Time Visualization Tool Integrating STEREO, ACE, SOHO and the SDO

    NASA Astrophysics Data System (ADS)

    Schroeder, P. C.; Luhmann, J. G.; Marchant, W.

    2011-12-01

    The STEREO/IMPACT team has developed a new web-based visualization tool for near real-time data from the STEREO instruments, ACE and SOHO as well as relevant models of solar activity. This site integrates images, solar energetic particle, solar wind plasma and magnetic field measurements in an intuitive way using near real-time products from NOAA and other sources to give an overview of recent space weather events. This site enhances the browse tools already available at UC Berkeley, UCLA and Caltech which allow users to visualize similar data from the start of the STEREO mission. Our new near real-time tool utilizes publicly available real-time data products from a number of missions and instruments, including SOHO LASCO C2 images from the SOHO team's NASA site, SDO AIA images from the SDO team's NASA site, STEREO IMPACT SEP data plots and ACE EPAM data plots from the NOAA Space Weather Prediction Center and STEREO spacecraft positions from the STEREO Science Center.

  6. Real-time distributed video coding for 1K-pixel visual sensor networks

    NASA Astrophysics Data System (ADS)

    Hanca, Jan; Deligiannis, Nikos; Munteanu, Adrian

    2016-07-01

    Many applications in visual sensor networks (VSNs) demand the low-cost wireless transmission of video data. In this context, distributed video coding (DVC) has proven its potential to achieve state-of-the-art compression performance while maintaining low computational complexity of the encoder. Despite their proven capabilities, current DVC solutions overlook hardware constraints, and this renders them unsuitable for practical implementations. This paper introduces a DVC architecture that offers highly efficient wireless communication in real-world VSNs. The design takes into account the severe computational and memory constraints imposed by practical implementations on low-resolution visual sensors. We study performance-complexity trade-offs for feedback-channel removal, propose learning-based techniques for rate allocation, and investigate various simplifications of side information generation yielding real-time decoding. The proposed system is evaluated against H.264/AVC intra, Motion-JPEG, and our previously designed DVC prototype for low-resolution visual sensors. Extensive experimental results on various data show significant improvements in multiple configurations. The proposed encoder achieves real-time performance on a 1k-pixel visual sensor mote. Real-time decoding is performed on a Raspberry Pi single-board computer or a low-end notebook PC. To the best of our knowledge, the proposed codec is the first practical DVC deployment on low-resolution VSNs.

  7. Tracking real-time neural activation of conceptual knowledge using single-trial event-related potentials.

    PubMed

    Amsel, Ben D

    2011-04-01

    Empirically derived semantic feature norms categorized into different types of knowledge (e.g., visual, functional, auditory) can be summed to create number-of-feature counts per knowledge type. Initial evidence suggests several such knowledge types may be recruited during language comprehension. The present study provides a more detailed understanding of the timecourse and intensity of influence of several such knowledge types on real-time neural activity. A linear mixed-effects model was applied to single trial event-related potentials for 207 visually presented concrete words measured on total number of features (semantic richness), imageability, and number of visual motion, color, visual form, smell, taste, sound, and function features. Significant influences of multiple feature types occurred before 200ms, suggesting parallel neural computation of word form and conceptual knowledge during language comprehension. Function and visual motion features most prominently influenced neural activity, underscoring the importance of action-related knowledge in computing word meaning. The dynamic time courses and topographies of these effects are most consistent with a flexible conceptual system wherein temporally dynamic recruitment of representations in modal and supramodal cortex are a crucial element of the constellation of processes constituting word meaning computation in the brain. Copyright © 2011 Elsevier Ltd. All rights reserved.
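
    The study fitted a linear mixed-effects model to single-trial ERPs. As a simplified stand-in for that analysis, the mass-univariate idea can be sketched with plain per-timepoint least squares on synthetic data; the feature names, effect window and all numbers below are hypothetical, chosen only to show the shape of the computation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic single-trial data: 207 items x 50 timepoints, with amplitude at
# each timepoint driven by two hypothetical feature counts plus noise.
n_items, n_times = 207, 50
visual_motion = rng.poisson(3, n_items).astype(float)
function_feats = rng.poisson(2, n_items).astype(float)
X = np.column_stack([np.ones(n_items), visual_motion, function_feats])
true_betas = np.zeros((3, n_times))
true_betas[1, 20:35] = 1.5        # say motion features matter in one window
eeg = X @ true_betas + rng.normal(0, 0.5, (n_items, n_times))

# Ordinary least squares fitted independently at every timepoint yields a
# timecourse of coefficients per knowledge type.
betas, *_ = np.linalg.lstsq(X, eeg, rcond=None)

# Motion coefficient near 1.5 inside the window, near 0 outside it
print(abs(betas[1, 25] - 1.5) < 0.1, abs(betas[1, 5]) < 0.1)
```

    A mixed-effects model additionally includes random effects for subjects and items, which OLS omits; the point here is only how feature counts regress onto neural activity timepoint by timepoint.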

  8. Real-time visualization and quantification of retrograde cardioplegia delivery using near infrared fluorescent imaging.

    PubMed

    Rangaraj, Aravind T; Ghanta, Ravi K; Umakanthan, Ramanan; Soltesz, Edward G; Laurence, Rita G; Fox, John; Cohn, Lawrence H; Bolman, R M; Frangioni, John V; Chen, Frederick Y

    2008-01-01

    Homogeneous delivery of cardioplegia is essential for myocardial protection during cardiac surgery. Presently, there exist no established methods to quantitatively assess cardioplegia distribution intraoperatively and determine when retrograde cardioplegia is required. In this study, we evaluate the feasibility of near infrared (NIR) imaging for real-time visualization of cardioplegia distribution in a porcine model. A portable, intraoperative, real-time NIR imaging system was utilized. NIR fluorescent cardioplegia solution was developed by incorporating indocyanine green (ICG) into crystalloid cardioplegia solution. Real-time NIR imaging was performed while the fluorescent cardioplegia solution was infused via the retrograde route in five ex vivo normal porcine hearts and in five ex vivo porcine hearts status post left anterior descending (LAD) coronary artery ligation. Horizontal cross-sections of the hearts were obtained at proximal, middle, and distal LAD levels. Videodensitometry was performed to quantify distribution of fluorophore content. The progressive distribution of cardioplegia was clearly visualized with NIR imaging. Complete visualization of retrograde distribution occurred within 4 minutes of infusion. Videodensitometry revealed retrograde cardioplegia, primarily distributed to the left ventricle (LV) and anterior septum. In hearts with LAD ligation, antegrade cardioplegia did not distribute to the anterior LV. This deficiency was compensated for with retrograde cardioplegia supplementation. Incorporation of ICG into cardioplegia allows real-time visualization of cardioplegia delivery via NIR imaging. This technology may prove useful in guiding intraoperative decisions pertaining to when retrograde cardioplegia is mandated.
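
    Videodensitometry here amounts to quantifying fluorophore signal per anatomical region of a cross-section image. A minimal sketch, assuming rectangular regions of interest with hypothetical labels and a toy image (not the study's data), is:

```python
import numpy as np

def roi_mean_intensity(image, rois):
    """Mean fluorescence intensity inside each labelled rectangular ROI,
    given as (row0, row1, col0, col1) slices into the image."""
    return {name: float(image[r0:r1, c0:c1].mean())
            for name, (r0, r1, c0, c1) in rois.items()}

# Toy NIR frame of a cross-section: one well-perfused and one poorly
# perfused region, with hypothetical ROI labels.
frame = np.zeros((100, 100))
frame[10:30, 10:90] = 200.0
frame[70:90, 10:90] = 10.0
rois = {"lv_free_wall": (10, 30, 10, 90), "anterior_wall": (70, 90, 10, 90)}
print(roi_mean_intensity(frame, rois))
# -> {'lv_free_wall': 200.0, 'anterior_wall': 10.0}
```

    Comparing such per-region means between antegrade and retrograde infusions is the quantitative step behind the distribution claims in the abstract.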

  9. Visual Data Exploration for Balance Quantification in Real-Time During Exergaming.

    PubMed

    Soancatl Aguilar, Venustiano; J van de Gronde, Jasper; J C Lamoth, Claudine; van Diest, Mike; M Maurits, Natasha; B T M Roerdink, Jos

    2017-01-01

    Unintentional injuries are among the ten leading causes of death in older adults; falls cause 60% of these deaths. Despite their effectiveness to improve balance and reduce the risk of falls, balance training programs have several drawbacks in practice, such as lack of engaging elements, boring exercises, and the effort and cost of travelling, ultimately resulting in low adherence. Exergames, that is, digital games controlled by body movements, have been proposed as an alternative to improve balance. One of the main challenges for exergames is to automatically quantify balance during game-play in order to adapt the game difficulty according to the skills of the player. Here we perform a multidimensional exploratory data analysis, using visualization techniques, to find useful measures for quantifying balance in real-time. First, we visualize exergaming data, derived from 400 force plate recordings of 40 participants from 20 to 79 years and 10 trials per participant, as heat maps and violin plots to get quick insight into the nature of the data. Second, we extract known and new features from the data, such as instantaneous speed, measures of dispersion, turbulence measures derived from speed, and curvature values. Finally, we analyze and visualize these features using several visualizations such as a heat map, overlapping violin plots, a parallel coordinate plot, a projection of the two first principal components, and a scatter plot matrix. Our visualizations and findings suggest that heat maps and violin plots can provide quick insight and directions for further data exploration. The most promising measures to quantify balance in real-time are speed, curvature and a turbulence measure, because these measures show age-related changes in balance performance. The next step is to apply the present techniques to data of whole body movements as recorded by devices such as Kinect.
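
    The speed and curvature measures highlighted above can be computed directly from a planar centre-of-pressure trajectory using standard differential-geometry formulas; the sketch below is a generic implementation, not the authors' exact pipeline:

```python
import numpy as np

def speed_and_curvature(x, y, dt):
    """Instantaneous speed and curvature of a planar centre-of-pressure path,
    using finite differences: kappa = |x'y'' - y'x''| / speed**3."""
    dx, dy = np.gradient(x, dt), np.gradient(y, dt)
    ddx, ddy = np.gradient(dx, dt), np.gradient(dy, dt)
    speed = np.hypot(dx, dy)
    curvature = np.abs(dx * ddy - dy * ddx) / np.clip(speed, 1e-9, None) ** 3
    return speed, curvature

# Sanity check on a circle of radius 5 cm traversed at 1 rad/s:
# speed should be ~5 cm/s and curvature ~1/5 = 0.2 everywhere.
t = np.arange(0, 2 * np.pi, 0.001)
x, y = 5 * np.cos(t), 5 * np.sin(t)
speed, curv = speed_and_curvature(x, y, 0.001)
print(round(float(speed[500]), 2), round(float(curv[500]), 2))   # -> 5.0 0.2
```

    Because both quantities come from local finite differences, they can be updated sample by sample during game-play, which is what real-time difficulty adaptation needs.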

  10. Visual Data Exploration for Balance Quantification in Real-Time During Exergaming

    PubMed Central

    van de Gronde, Jasper J.; Lamoth, Claudine J. C.; van Diest, Mike; Maurits, Natasha M.; Roerdink, Jos B. T. M.

    2017-01-01

    Unintentional injuries are among the ten leading causes of death in older adults; falls cause 60% of these deaths. Despite their effectiveness in improving balance and reducing the risk of falls, balance training programs have several drawbacks in practice, such as a lack of engaging elements, boring exercises, and the effort and cost of travelling, ultimately resulting in low adherence. Exergames, that is, digital games controlled by body movements, have been proposed as an alternative way to improve balance. One of the main challenges for exergames is to automatically quantify balance during game-play in order to adapt the game difficulty to the skills of the player. Here we perform a multidimensional exploratory data analysis, using visualization techniques, to find useful measures for quantifying balance in real time. First, we visualize exergaming data, derived from 400 force plate recordings of 40 participants aged 20 to 79 years with 10 trials per participant, as heat maps and violin plots to get quick insight into the nature of the data. Second, we extract known and new features from the data, such as instantaneous speed, measures of dispersion, turbulence measures derived from speed, and curvature values. Finally, we analyze and visualize these features using several visualizations such as a heat map, overlapping violin plots, a parallel coordinate plot, a projection of the first two principal components, and a scatter plot matrix. Our visualizations and findings suggest that heat maps and violin plots can provide quick insight and directions for further data exploration. The most promising measures for quantifying balance in real time are speed, curvature and a turbulence measure, because these measures show age-related changes in balance performance. The next step is to apply the present techniques to data of whole-body movements as recorded by devices such as Kinect. PMID:28135284
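The instantaneous-speed and curvature features described above can be computed directly from a force-plate centre-of-pressure trajectory. A minimal sketch in Python (hypothetical function name; the paper's full feature set also includes dispersion and turbulence measures), using the planar-curve formula κ = |x'y'' − y'x''| / (x'² + y'²)^(3/2):

```python
import numpy as np

def balance_features(xy, dt):
    """Instantaneous speed and curvature of a 2-D centre-of-pressure path.

    xy : (N, 2) array of force-plate COP positions; dt : sampling interval.
    Curvature of a planar curve: k = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2).
    """
    v = np.gradient(xy, dt, axis=0)          # first derivative (velocity)
    a = np.gradient(v, dt, axis=0)           # second derivative (acceleration)
    speed = np.linalg.norm(v, axis=1)
    cross = v[:, 0] * a[:, 1] - v[:, 1] * a[:, 0]
    curvature = np.abs(cross) / np.maximum(speed**3, 1e-12)
    return speed, curvature
```

For a circular trajectory of radius r traversed at constant angular rate ω, this yields speed r·ω and curvature 1/r at interior samples.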

  11. Usefulness of real-time three-dimensional ultrasonography in percutaneous nephrostomy: an animal study.

    PubMed

    Hongzhang, Hong; Xiaojuan, Qin; Shengwei, Zhang; Feixiang, Xiang; Yujie, Xu; Haibing, Xiao; Gallina, Kazobinka; Wen, Ju; Fuqing, Zeng; Xiaoping, Zhang; Mingyue, Ding; Huageng, Liang; Xuming, Zhang

    2018-05-17

    To evaluate the effect of real-time three-dimensional (3D) ultrasonography (US) in guiding percutaneous nephrostomy (PCN). A hydronephrosis model was devised in which the ureters of 16 beagles were obstructed. The beagles were divided equally into groups 1 and 2. In group 1, PCN was performed under real-time 3D US guidance, while in group 2 PCN was guided by two-dimensional (2D) US. Needle-tract visualization score, puncture time and number of puncture attempts were recorded for the two groups. In group 1, the needle-tract visualization score, puncture time and number of puncture attempts were 3, 7.3 ± 3.1 s and one, respectively. In group 2, the respective results were 1.4 ± 0.5, 21.4 ± 5.8 s and 2.1 ± 0.6. Visualization of the needle tract in group 1 was superior to that in group 2, and both puncture time and number of puncture attempts were lower in group 1 than in group 2. Real-time 3D US-guided PCN is superior to 2D US-guided PCN in terms of visualization of the needle tract and the targeted pelvicalyceal system, enabling quicker puncture. Real-time 3D US-guided puncture of the kidney holds great promise for clinical implementation in PCN. © 2018 The Authors. BJU International © 2018 BJU International. Published by John Wiley & Sons Ltd.

  12. On the use of Augmented Reality techniques in learning and interpretation of cardiologic data.

    PubMed

    Lamounier, Edgard; Bucioli, Arthur; Cardoso, Alexandre; Andrade, Adriano; Soares, Alcimar

    2010-01-01

    Augmented Reality is a technology that provides people with more intuitive ways of interaction and visualization, close to those of the real world. The number of applications using Augmented Reality is growing every day, and results can already be seen in several fields such as Education, Training, Entertainment and Medicine. The system proposed in this article aims to provide a friendly and intuitive Augmented Reality interface for heartbeat evaluation and visualization. Cardiologic data are loaded from several distinct sources: standard heartbeat frequency patterns (for example, for situations such as running or sleeping), files of heartbeat signals, scanned electrocardiographs and real-time acquisition of the patient's heartbeat. All of these data are processed to produce visualizations within Augmented Reality environments. The results obtained in this research show that the developed system is able to simplify the understanding of concepts about the heartbeat and its functioning. Furthermore, the system can help health professionals in the task of retrieving, processing and converting data from all the sources handled by the system, with the support of an editing and visualization mode.

  13. Robot-assisted real-time magnetic resonance image-guided transcatheter aortic valve replacement.

    PubMed

    Miller, Justin G; Li, Ming; Mazilu, Dumitru; Hunt, Tim; Horvath, Keith A

    2016-05-01

    Real-time magnetic resonance imaging (rtMRI)-guided transcatheter aortic valve replacement (TAVR) offers improved visualization, real-time imaging, and pinpoint accuracy with device delivery. Unfortunately, performing a TAVR in an MRI scanner can be a difficult task owing to limited space and an awkward working environment. Our solution was to design an MRI-compatible robot-assisted device to insert and deploy a self-expanding valve from a remote computer console. We present our preliminary results in a swine model. We used an MRI-compatible robotic arm and developed a valve delivery module. A 12-mm trocar was inserted in the apex of the heart via a subxiphoid incision. The delivery device and nitinol stented prosthesis were mounted on the robot. Two continuous real-time imaging planes provided a virtual real-time 3-dimensional reconstruction. The valve was deployed remotely by the surgeon via a graphic user interface. In this acute nonsurvival study, 8 swine underwent robot-assisted rtMRI TAVR for evaluation of feasibility. Device deployment took a mean of 61 ± 5 seconds. Postdeployment necropsy was performed to confirm correlations between imaging and actual valve positions. These results demonstrate the feasibility of robot-assisted TAVR using rtMRI guidance. This approach may eliminate some of the challenges of performing a procedure while working inside an MRI scanner, and may improve the success of TAVR. It provides superior visualization during the insertion process, pinpoint accuracy of deployment, and, potentially, communication between the imaging device and the robotic module to prevent incorrect or misaligned deployment. Copyright © 2016 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.

  14. TopicLens: Efficient Multi-Level Visual Topic Exploration of Large-Scale Document Collections.

    PubMed

    Kim, Minjeong; Kang, Kyeongpil; Park, Deokgun; Choo, Jaegul; Elmqvist, Niklas

    2017-01-01

    Topic modeling, which reveals underlying topics of a document corpus, has been actively adopted in visual analytics for large-scale document collections. However, due to its significant processing time and non-interactive nature, topic modeling has so far not been tightly integrated into a visual analytics workflow. Instead, most such systems are limited to utilizing a fixed, initial set of topics. Motivated by this gap in the literature, we propose a novel interaction technique called TopicLens that allows a user to dynamically explore data through a lens interface where topic modeling and the corresponding 2D embedding are efficiently computed on the fly. To support this interaction in real time while maintaining view consistency, we propose a novel efficient topic modeling method and a semi-supervised 2D embedding algorithm. Our work is based on improving state-of-the-art methods such as nonnegative matrix factorization and t-distributed stochastic neighbor embedding. Furthermore, we have built a web-based visual analytics system integrated with TopicLens. We use this system to measure the performance and the visualization quality of our proposed methods. We provide several scenarios showcasing the capability of TopicLens using real-world datasets.
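The topic modeling above builds on nonnegative matrix factorization (NMF). As a point of reference, here is the standard Lee-Seung multiplicative-update NMF minimizing the Frobenius norm; this is a minimal baseline sketch, not the paper's accelerated, interactive variant:

```python
import numpy as np

def nmf(V, k, iters=1000, seed=0):
    """Rank-k nonnegative factorization V ~ W @ H via Lee-Seung
    multiplicative updates minimizing the Frobenius norm."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)   # update topics
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)   # update loadings
    return W, H
```

For a document-term matrix V, each row of H acts as a topic (a weight vector over terms) and each row of W gives a document's topic loadings.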

  15. Managing Quality and Safety in Real Time? Evidence from an Interview Study.

    PubMed

    Randell, Rebecca; Keen, Justin; Gates, Cara; Ferguson, Emma; Long, Andrew; Ginn, Claire; McGinnis, Elizabeth; Whittle, Jackie

    2016-01-01

    Health systems around the world are investing increasing effort in monitoring care quality and safety. Dashboards can support this process, providing summary data on processes and outcomes of care, making use of data visualization techniques such as graphs. As part of a study exploring development and use of dashboards in English hospitals, we interviewed senior managers across 15 healthcare providers. Findings revealed substantial variation in the sophistication of the dashboards in place, which largely presented retrospective data items determined by national bodies and depended on manual collation from a number of systems. Where real-time systems were in place, they supported staff in proactively managing quality and safety.

  16. Unifying Terrain Awareness for the Visually Impaired through Real-Time Semantic Segmentation

    PubMed Central

    Yang, Kailun; Wang, Kaiwei; Romera, Eduardo; Hu, Weijian; Sun, Dongming; Sun, Junwei; Cheng, Ruiqi; Chen, Tianxue; López, Elena

    2018-01-01

    Navigational assistance aims to help visually-impaired people move about their environment safely and independently. This task is challenging because it requires detecting a wide variety of scenes to provide higher-level assistive awareness. Vision-based technologies with monocular detectors or depth sensors have emerged from several years of research. These separate approaches have achieved remarkable results with relatively low processing time and have improved the mobility of impaired people to a large extent. However, running all detectors jointly increases the latency and burdens the computational resources. In this paper, we propose pixel-wise semantic segmentation to cover navigation-related perception needs in a unified way. This is critical not only for terrain awareness regarding traversable areas, sidewalks, stairs and water hazards, but also for the avoidance of short-range obstacles, fast-approaching pedestrians and vehicles. The core of our unification proposal is a deep architecture aimed at attaining efficient semantic understanding. We have integrated the approach in a wearable navigation system by incorporating robust depth segmentation. A comprehensive set of experiments demonstrates accuracy competitive with state-of-the-art methods while maintaining real-time speed. We also present a closed-loop field test involving real visually-impaired users, demonstrating the effectiveness and versatility of the assistive framework. PMID:29748508

  17. Real-Time Noise Removal for Line-Scanning Hyperspectral Devices Using a Minimum Noise Fraction-Based Approach

    PubMed Central

    Bjorgan, Asgeir; Randeberg, Lise Lyngsnes

    2015-01-01

    Processing line-by-line and in real time can be convenient for some applications of line-scanning hyperspectral imaging technology. Some types of processing, like inverse modeling and spectral analysis, can be sensitive to noise. The MNF (minimum noise fraction) transform provides suitable denoising performance, but requires full image availability for the estimation of image and noise statistics. In this work, a modified algorithm is proposed. Incrementally updated statistics enable the algorithm to denoise the image line by line. The denoising performance has been compared to conventional MNF and found to be equal. With satisfactory denoising performance and a real-time implementation, the developed algorithm can denoise line-scanned hyperspectral images in real time. The elimination of waiting time before denoised data become available is an important step towards real-time visualization of processed hyperspectral data. The source code can be found at http://www.github.com/ntnu-bioopt/mnf. This includes an implementation of conventional MNF denoising. PMID:25654717
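The incrementally updated statistics can be illustrated with a Welford-style running mean and covariance that folds in one scan line at a time, so MNF-style statistics are available after every line instead of after the whole cube. A sketch of the idea only (the published implementation is in the linked repository):

```python
import numpy as np

class StreamingStats:
    """Incrementally updated mean and covariance over hyperspectral lines."""

    def __init__(self, bands):
        self.n = 0
        self.mean = np.zeros(bands)
        self.M2 = np.zeros((bands, bands))   # sum of outer-product deviations

    def add(self, line):
        """Fold one scanned line (pixels x bands) into the running stats."""
        for x in line:                        # Welford update per pixel
            self.n += 1
            d = x - self.mean
            self.mean += d / self.n
            self.M2 += np.outer(d, x - self.mean)

    def covariance(self):
        return self.M2 / (self.n - 1)
```

After each call to `add`, the running mean and covariance match what a batch computation over all lines seen so far would give.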

  18. Novel System for Real-Time Integration of 3-D Echocardiography and Fluoroscopy for Image-Guided Cardiac Interventions: Preclinical Validation and Clinical Feasibility Evaluation.

    PubMed

    Arujuna, Aruna V; Housden, R James; Ma, Yingliang; Rajani, Ronak; Gao, Gang; Nijhof, Niels; Cathier, Pascal; Bullens, Roland; Gijsbers, Geert; Parish, Victoria; Kapetanakis, Stamatis; Hancock, Jane; Rinaldi, C Aldo; Cooklin, Michael; Gill, Jaswinder; Thomas, Martyn; O'Neill, Mark D; Razavi, Reza; Rhode, Kawal S

    2014-01-01

    Real-time imaging is required to guide minimally invasive catheter-based cardiac interventions. While transesophageal echocardiography allows for high-quality visualization of cardiac anatomy, X-ray fluoroscopy provides excellent visualization of devices. We have developed a novel image fusion system that allows real-time integration of 3-D echocardiography and X-ray fluoroscopy. The system was validated in the following two stages: 1) preclinical, to determine function and validate accuracy; and 2) in the clinical setting, to assess clinical workflow feasibility and determine overall system accuracy. In the preclinical phase, the system was assessed using both phantom and porcine experimental studies. Median 2-D projection errors of 4.5 and 3.3 mm were found for the phantom and porcine studies, respectively. The clinical phase focused on extending the use of the system to interventions in patients undergoing either atrial fibrillation catheter ablation (CA) or transcatheter aortic valve implantation (TAVI). Eleven patients were studied, with nine in the CA group and two in the TAVI group. Successful real-time view synchronization was achieved in all cases with a calculated median distance error of 2.2 mm in the CA group and 3.4 mm in the TAVI group. A standard clinical workflow was established using the image fusion system. These pilot data confirm the technical feasibility of accurate real-time echo-fluoroscopic image overlay in clinical practice, which may be a useful adjunct for real-time guidance during interventional cardiac procedures.

  19. Real-time target tracking of soft tissues in 3D ultrasound images based on robust visual information and mechanical simulation.

    PubMed

    Royer, Lucas; Krupa, Alexandre; Dardenne, Guillaume; Le Bras, Anthony; Marchand, Eric; Marchal, Maud

    2017-01-01

    In this paper, we present a real-time approach for tracking deformable structures in 3D ultrasound sequences. Our method obtains the target displacements by combining robust dense motion estimation and mechanical model simulation. We evaluate our method on simulated data, phantom data, and real data. Results demonstrate that this novel approach has the advantage of providing correct motion estimation despite different ultrasound shortcomings, including speckle noise, large shadows and ultrasound gain variation. Furthermore, we show the good performance of our method with respect to state-of-the-art techniques by testing on the 3D databases provided by the MICCAI CLUST'14 and CLUST'15 challenges. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Visualizer: 3D Gridded Data Visualization Software for Geoscience Education and Research

    NASA Astrophysics Data System (ADS)

    Harwood, C.; Billen, M. I.; Kreylos, O.; Jadamec, M.; Sumner, D. Y.; Kellogg, L. H.; Hamann, B.

    2008-12-01

    In both research and education, learning is an interactive and iterative process of exploring and analyzing data or model results. However, visualization software often presents challenges on the path to learning because it assumes the user already knows the locations and types of features of interest, instead of enabling flexible and intuitive examination of results. We present examples of research and teaching using Visualizer, software specifically designed to create an effective and intuitive environment for interactive, scientific analysis of 3D gridded data. Visualizer runs in a range of 3D virtual reality environments (e.g., GeoWall, ImmersaDesk, or CAVE), but also provides a similar level of real-time interactivity on a desktop computer. When using Visualizer in a 3D-enabled environment, the software allows the user to interact with the data images as real objects, grabbing, rotating or walking around the data to gain insight and perspective. On the desktop, simple features, such as a set of cross-bars marking the plane of the screen, provide extra 3D spatial cues that allow the user to more quickly understand geometric relationships within the data. This platform portability allows the user to more easily integrate research results into classroom demonstrations and exercises, while the interactivity provides an engaging environment for self-directed and inquiry-based learning by students. Visualizer software is freely available for download (www.keckcaves.org) and runs on Mac OSX and Linux platforms.

  1. Tactical visualization module

    NASA Astrophysics Data System (ADS)

    Kachejian, Kerry C.; Vujcic, Doug

    1999-07-01

    The Tactical Visualization Module (TVM) research effort will develop and demonstrate a portable, tactical information system to enhance the situational awareness of individual warfighters and small military units by providing real-time access to manned and unmanned aircraft, tactically mobile robots, and unattended sensors. TVM consists of a family of portable and hand-held devices being advanced into a next-generation, embedded capability. It enables warfighters to visualize the tactical situation by providing real-time video, imagery, maps, floor plans, and 'fly-through' video on demand. When combined with unattended ground sensors, such as Combat-Q, TVM permits warfighters to validate and verify tactical targets. The use of TVM results in faster target engagement times, increased survivability, and reduction of the potential for fratricide. TVM technology can support both mounted and dismounted tactical forces involved in land, sea, and air warfighting operations. As a PCMCIA card, TVM can be embedded in portable, hand-held, and wearable PCs. Thus, it leverages emerging tactical displays including flat-panel, head-mounted displays. The end result of the program will be the demonstration of the system with U.S. Army and USMC personnel in an operational environment. Raytheon Systems Company, the U.S. Army Soldier Systems Command -- Natick RDE Center (SSCOM-NRDEC) and the Defense Advanced Research Projects Agency (DARPA) are partners in developing and demonstrating the TVM technology.

  2. Real-Time Single Molecule Visualization of SH2 Domain Membrane Recruitment in Growth Factor Stimulated Cells.

    PubMed

    Oh, Dongmyung

    2017-01-01

    In the last decade, single molecule tracking (SMT) techniques have emerged as a versatile tool for molecular cell biology research. This approach allows researchers to monitor the real-time behavior of individual molecules in living cells with nanometer and millisecond resolution. As a result, it is possible to visualize biological processes as they occur at a molecular level in real time. Here we describe a method for the real-time visualization of SH2 domain membrane recruitment from the cytoplasm to epidermal growth factor (EGF)-induced phosphotyrosine sites on the EGF receptor. Further, we describe methods that utilize SMT data to define SH2 domain membrane dynamics parameters such as binding (τ), dissociation (k_d), and diffusion (D) rates. Together these methods may allow us to gain greater understanding of signal transduction dynamics and the molecular basis of disease-related aberrant pathways.
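As an illustration of how SMT trajectories yield such parameters, the sketch below assumes first-order unbinding (exponentially distributed dwell times) and free 2-D Brownian diffusion in the membrane plane; the function names are hypothetical, not from the paper:

```python
import numpy as np

def dissociation_rate(dwell_times):
    """MLE of the dissociation rate k_d from single-molecule membrane
    dwell times, assuming first-order (exponential) unbinding."""
    dwell_times = np.asarray(dwell_times, dtype=float)
    return 1.0 / dwell_times.mean()

def diffusion_coefficient(xy, dt):
    """2-D diffusion coefficient from the mean squared one-step
    displacement: <r^2> = 4*D*dt for Brownian motion in the plane."""
    steps = np.diff(xy, axis=0)
    msd = (steps**2).sum(axis=1).mean()
    return msd / (4.0 * dt)
```

Both estimators are the standard textbook forms; real SMT analyses additionally correct for localization error and finite trajectory length.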

  3. Internet Telepresence by Real-Time View-Dependent Image Generation with Omnidirectional Video Camera

    NASA Astrophysics Data System (ADS)

    Morita, Shinji; Yamazawa, Kazumasa; Yokoya, Naokazu

    2003-01-01

    This paper describes a new networked telepresence system which realizes virtual tours into a visualized dynamic real world without significant time delay. Our system is realized by the following three steps: (1) video-rate omnidirectional image acquisition, (2) transportation of an omnidirectional video stream via internet, and (3) real-time view-dependent perspective image generation from the omnidirectional video stream. Our system is applicable to real-time telepresence in the situation where the real world to be seen is far from an observation site, because the time delay from the change of the user's viewing direction to the change of displayed image is small and does not depend on the actual distance between both sites. Moreover, multiple users can look around from a single viewpoint in a visualized dynamic real world in different directions at the same time. In experiments, we have proved that the proposed system is useful for internet telepresence.
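Step (3), view-dependent perspective image generation, can be sketched for an equirectangular panorama representation (an assumption made for illustration; the system's actual omnidirectional camera model may differ). Each pixel of the desired perspective view defines a viewing ray, which is rotated by the user's gaze direction and looked up in the panorama:

```python
import numpy as np

def perspective_view(pano, fov_deg, yaw_deg, pitch_deg, out_w, out_h):
    """Nearest-neighbour resampling of an equirectangular panorama
    (H x W [x C] array) into a pinhole perspective view for one gaze
    direction (yaw/pitch in degrees)."""
    H, W = pano.shape[:2]
    f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2.0)    # focal length, px
    xs, ys = np.meshgrid(np.arange(out_w) - (out_w - 1) / 2.0,
                         (out_h - 1) / 2.0 - np.arange(out_h))
    d = np.stack([xs, ys, np.full_like(xs, f)], axis=-1)   # viewing rays
    d /= np.linalg.norm(d, axis=-1, keepdims=True)
    p, y = np.radians(pitch_deg), np.radians(yaw_deg)
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(p), -np.sin(p)],
                   [0, np.sin(p), np.cos(p)]])
    Ry = np.array([[np.cos(y), 0, np.sin(y)],
                   [0, 1, 0],
                   [-np.sin(y), 0, np.cos(y)]])
    d = d @ (Ry @ Rx).T                    # rotate rays into panorama frame
    lon = np.arctan2(d[..., 0], d[..., 2])                 # [-pi, pi]
    lat = np.arcsin(np.clip(d[..., 1], -1.0, 1.0))         # [-pi/2, pi/2]
    u = np.rint((lon / (2 * np.pi) + 0.5) * W).astype(int) % W
    v = np.rint((0.5 - lat / np.pi) * H).astype(int).clip(0, H - 1)
    return pano[v, u]
```

Because only the rotation changes per frame, the lookup tables can be regenerated per viewing direction at video rate, which is what makes the view-dependent generation real-time.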

  4. Data-Driven Geospatial Visual Analytics for Real-Time Urban Flooding Decision Support

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Hill, D.; Rodriguez, A.; Marini, L.; Kooper, R.; Myers, J.; Wu, X.; Minsker, B. S.

    2009-12-01

    Urban flooding is responsible for the loss of life and property as well as the release of pathogens and other pollutants into the environment. Previous studies have shown that the spatial distribution of intense rainfall significantly impacts the triggering and behavior of urban flooding. However, no general-purpose tools yet exist for deriving rainfall data and rendering them in real time at the resolution of the hydrologic units used for analyzing urban flooding. This paper presents a new visual analytics system that derives and renders rainfall data from the NEXRAD weather radar system at the sewershed (i.e., urban hydrologic unit) scale in real time for a Chicago stormwater management project. We introduce a lightweight Web 2.0 approach which takes advantage of scientific workflow management and publishing capabilities developed at NCSA (National Center for Supercomputing Applications), a streaming-data-aware semantic content management repository, web-based Google Earth/Maps and time-aware KML (Keyhole Markup Language). A collection of polygon-based virtual sensors is created from the NEXRAD Level II data using spatial, temporal and thematic transformations at the sewershed level in order to produce persistent virtual rainfall data sources for the animation. The animated, color-coded rainfall map of the sewersheds can be played in real time as a movie, using time-aware KML inside the web-browser-based Google Earth, for visually analyzing the spatiotemporal patterns of rainfall intensity. Such a system provides valuable information for situational awareness and improved decision support during extreme storm events in an urban area. Future work includes incorporating additional data (such as basement flooding event data) or physics-based predictive models for more integrated data-driven decision support.
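A time-aware KML fragment for one color-coded sewershed polygon could be generated along these lines; the element values are hypothetical and this is not the project's actual pipeline (note that KML colors use aabbggrr hex ordering):

```python
def rainfall_placemark(name, coords, value_hex, begin, end):
    """One time-stamped, colour-coded sewershed polygon as a KML Placemark.

    coords: list of (lon, lat) pairs (first == last closes the ring);
    value_hex: KML aabbggrr colour encoding rainfall intensity;
    begin/end: ISO 8601 timestamps for the <TimeSpan> animation window.
    """
    ring = " ".join(f"{lon},{lat},0" for lon, lat in coords)
    return (
        f"<Placemark><name>{name}</name>"
        f"<TimeSpan><begin>{begin}</begin><end>{end}</end></TimeSpan>"
        f"<Style><PolyStyle><color>{value_hex}</color></PolyStyle></Style>"
        f"<Polygon><outerBoundaryIs><LinearRing><coordinates>{ring}"
        f"</coordinates></LinearRing></outerBoundaryIs></Polygon>"
        f"</Placemark>"
    )
```

Emitting one such Placemark per sewershed per radar time step and wrapping them in a KML Document is enough for Google Earth's time slider to animate the rainfall map.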

  5. A High-Speed, Real-Time Visualization and State Estimation Platform for Monitoring and Control of Electric Distribution Systems: Implementation and Field Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lundstrom, Blake; Gotseff, Peter; Giraldez, Julieta

    Continued deployment of renewable and distributed energy resources is fundamentally changing the way that electric distribution systems are controlled and operated; more sophisticated active system control and greater situational awareness are needed. Real-time measurements and distribution system state estimation (DSSE) techniques enable more sophisticated system control and, when combined with visualization applications, greater situational awareness. This paper presents a novel demonstration of a high-speed, real-time DSSE platform and related control and visualization functionalities, implemented using existing open-source software and distribution system monitoring hardware. Live scrolling strip charts of meter data and intuitive annotated map visualizations of the entire state (obtained via DSSE) of a real-world distribution circuit are shown. The DSSE implementation is validated to demonstrate provision of accurate voltage data. This platform allows for enhanced control and situational awareness using only a minimum quantity of distribution system measurement units and modest data and software infrastructure.

  6. Computational Modeling and Real-Time Control of Patient-Specific Laser Treatment of Cancer

    PubMed Central

    Fuentes, D.; Oden, J. T.; Diller, K. R.; Hazle, J. D.; Elliott, A.; Shetty, A.; Stafford, R. J.

    2014-01-01

    An adaptive feedback control system is presented which employs a computational model of bioheat transfer in living tissue to guide, in real time, laser treatments of prostate cancer monitored by magnetic resonance thermal imaging (MRTI). The system is built on what can be referred to as cyberinfrastructure: a complex structure of high-speed networks, large-scale parallel computing devices, laser optics, imaging, visualizations, inverse-analysis algorithms, mesh generation, and control systems that guide laser therapy to optimally control the ablation of cancerous tissue. The computational system has been successfully tested on an in vivo canine prostate. Over the course of an 18-minute laser-induced thermal therapy (LITT) performed at the M.D. Anderson Cancer Center (MDACC) in Houston, Texas, the computational models were calibrated to intra-operative real-time thermal imaging treatment data, and the calibrated models controlled the bioheat transfer to within 5°C of the predetermined treatment plan. The computational arena is in Austin, Texas, and managed at the Institute for Computational Engineering and Sciences (ICES). The system is designed to control the bioheat transfer remotely while simultaneously providing real-time remote visualization of the ongoing treatment. Postoperative histology of the canine prostate reveals that the damage region was within the targeted 1.2 cm diameter treatment objective. PMID:19148754

  7. Computational modeling and real-time control of patient-specific laser treatment of cancer.

    PubMed

    Fuentes, D; Oden, J T; Diller, K R; Hazle, J D; Elliott, A; Shetty, A; Stafford, R J

    2009-04-01

    An adaptive feedback control system is presented which employs a computational model of bioheat transfer in living tissue to guide, in real time, laser treatments of prostate cancer monitored by magnetic resonance thermal imaging. The system is built on what can be referred to as cyberinfrastructure: a complex structure of high-speed networks, large-scale parallel computing devices, laser optics, imaging, visualizations, inverse-analysis algorithms, mesh generation, and control systems that guide laser therapy to optimally control the ablation of cancerous tissue. The computational system has been successfully tested on an in vivo canine prostate. Over the course of an 18 min laser-induced thermal therapy performed at the M.D. Anderson Cancer Center (MDACC) in Houston, Texas, the computational models were calibrated to intra-operative real-time thermal imaging treatment data, and the calibrated models controlled the bioheat transfer to within 5 degrees C of the predetermined treatment plan. The computational arena is in Austin, Texas, and managed at the Institute for Computational Engineering and Sciences (ICES). The system is designed to control the bioheat transfer remotely while simultaneously providing real-time remote visualization of the ongoing treatment. Post-operative histology of the canine prostate reveals that the damage region was within the targeted 1.2 cm diameter treatment objective.
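The bioheat transfer model underlying both versions of this work is the Pennes equation. A minimal 1-D explicit finite-difference sketch with illustrative tissue parameter values (not the authors' calibrated model, which is 3-D and MRTI-calibrated):

```python
import numpy as np

def pennes_step(T, dt, dx, k=0.5, rho_c=3.6e6, w_rho_c_b=2000.0,
                T_a=37.0, Q=None):
    """One explicit finite-difference step of the 1-D Pennes bioheat
    equation: rho*c*dT/dt = k*d2T/dx2 + w_b*rho_b*c_b*(T_a - T) + Q,
    where Q is the laser heat source. Boundary temperatures are held
    fixed (Dirichlet). Parameter values are illustrative only."""
    if Q is None:
        Q = np.zeros_like(T)
    lap = np.zeros_like(T)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2    # interior Laplacian
    dT = (k * lap + w_rho_c_b * (T_a - T) + Q) / rho_c
    Tn = T + dt * dT
    Tn[0], Tn[-1] = T[0], T[-1]                            # fixed boundaries
    return Tn
```

A feedback controller such as the one described can then adjust the source term Q each step so the predicted temperature tracks the treatment plan.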

  8. Connectivity-based neurofeedback: Dynamic causal modeling for real-time fMRI☆

    PubMed Central

    Koush, Yury; Rosa, Maria Joao; Robineau, Fabien; Heinen, Klaartje; W. Rieger, Sebastian; Weiskopf, Nikolaus; Vuilleumier, Patrik; Van De Ville, Dimitri; Scharnowski, Frank

    2013-01-01

    Neurofeedback based on real-time fMRI is an emerging technique that can be used to train voluntary control of brain activity. Such brain training has been shown to lead to behavioral effects that are specific to the functional role of the targeted brain area. However, real-time fMRI-based neurofeedback has so far been limited mainly to training localized brain activity within a region of interest. Here, we overcome this limitation by presenting near real-time dynamic causal modeling in order to provide feedback information based on connectivity between brain areas rather than activity within a single brain area. Using a visual–spatial attention paradigm, we show that participants can voluntarily control a feedback signal that is based on the Bayesian model comparison between two predefined model alternatives, i.e. the connectivity between left visual cortex and left parietal cortex vs. the connectivity between right visual cortex and right parietal cortex. Our new approach thus allows for training voluntary control over specific functional brain networks. Because most mental functions and most neurological disorders are associated with network activity rather than with activity in a single brain region, this novel approach is an important methodological innovation that more directly targets functionally relevant brain networks. PMID:23668967
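With a flat prior over the two model alternatives, Bayesian model comparison reduces to a logistic function of the log-evidence difference, which is one way such a feedback signal could be formed (an illustrative sketch, not the authors' DCM implementation):

```python
import numpy as np

def model_posterior(log_evidence_1, log_evidence_2):
    """Posterior probability of model 1 given the two log model evidences
    and a flat model prior: a logistic function of their difference."""
    d = log_evidence_1 - log_evidence_2
    return 1.0 / (1.0 + np.exp(-d))
```

A value near 1 would indicate strong evidence for the first connectivity model, near 0 for the second, and 0.5 that the data do not distinguish them.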

  9. [Image processing system of visual prostheses based on digital signal processor DM642].

    PubMed

    Xie, Chengcheng; Lu, Yanyu; Gu, Yun; Wang, Jing; Chai, Xinyu

    2011-09-01

    This paper employed a DSP platform to create a real-time, portable image processing system, and introduced a series of commonly used algorithms for visual prostheses. The results of performance evaluation revealed that this platform can execute image processing algorithms in real time.

  10. Real-time Author Co-citation Mapping for Online Searching.

    ERIC Educational Resources Information Center

    Lin, Xia; White, Howard D.; Buzydlowski, Jan

    2003-01-01

    Describes the design and implementation of a prototype visualization system, AuthorLink, to enhance author searching. AuthorLink is based on author co-citation analysis and visualization mapping algorithms. AuthorLink produces interactive author maps in real time from a database of 1.26 million records supplied by the Institute for Scientific…
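Author co-citation analysis begins with pairwise co-citation counts over the reference lists of citing records; a minimal sketch (illustrative, not AuthorLink's implementation):

```python
from collections import Counter
from itertools import combinations

def cocitation_counts(reference_lists):
    """Author co-citation counts: within each citing record's reference
    list, every unordered pair of cited authors is co-cited once."""
    counts = Counter()
    for refs in reference_lists:
        for a, b in combinations(sorted(set(refs)), 2):
            counts[(a, b)] += 1
    return counts
```

The resulting pair counts form the co-citation matrix that visualization mapping algorithms (e.g. multidimensional scaling or self-organizing maps) then project into an interactive 2-D author map.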

  11. Using Real-Time Visual Feedback to Improve Posture at Computer Workstations

    ERIC Educational Resources Information Center

    Sigurdsson, Sigurdur O.; Austin, John

    2008-01-01

    The purpose of the current study was to examine the effects of a multicomponent intervention that included discrimination training, real-time visual feedback, and self-monitoring on postural behavior at a computer workstation in a simulated office environment. Using a nonconcurrent multiple baseline design across 8 participants, the study assessed…

  12. Automatic optimization high-speed high-resolution OCT retinal imaging at 1 μm

    NASA Astrophysics Data System (ADS)

    Cua, Michelle; Liu, Xiyun; Miao, Dongkai; Lee, Sujin; Lee, Sieun; Bonora, Stefano; Zawadzki, Robert J.; Mackenzie, Paul J.; Jian, Yifan; Sarunic, Marinko V.

    2015-03-01

    High-resolution OCT retinal imaging is important in providing visualization of various retinal structures to aid researchers in better understanding the pathogenesis of vision-robbing diseases. However, conventional optical coherence tomography (OCT) systems face a trade-off between lateral resolution and depth of focus. In this report, we present the development of a focus-stacking OCT system with automatic optimization for high-resolution, extended-focal-range clinical retinal imaging. A variable-focus liquid lens was added to correct for defocus in real time. GPU-accelerated segmentation and optimization were used to provide real-time, layer-specific en face visualization as well as depth-specific focus adjustment. After optimization, multiple volumes focused at different depths were acquired, registered, and stitched together to yield a single, high-resolution focus-stacked dataset. Using this system, we show high-resolution images of the optic nerve head (ONH), from which we extracted clinically relevant parameters such as nerve fiber layer thickness and lamina cribrosa microarchitecture.
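    The stacking step itself reduces to keeping, per pixel, the sample from whichever depth-focused image is locally sharpest. A rough sketch using gradient magnitude as the sharpness proxy (an assumption for illustration; the paper's GPU-accelerated, layer-segmented pipeline is far more involved):

```python
import numpy as np

def focus_stack(images):
    """Fuse images focused at different depths: per pixel, keep the value
    from the image with the largest local gradient magnitude (sharpness)."""
    stack = np.stack([np.asarray(im, dtype=float) for im in images])
    gy, gx = np.gradient(stack, axis=(1, 2))   # spatial gradients per image
    sharpness = gx ** 2 + gy ** 2
    best = np.argmax(sharpness, axis=0)        # index of sharpest image per pixel
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]
```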

  13. A Real-Time Cardiac Arrhythmia Classification System with Wearable Sensor Networks

    PubMed Central

    Hu, Sheng; Wei, Hongxing; Chen, Youdong; Tan, Jindong

    2012-01-01

    Long-term continuous monitoring of the electrocardiogram (ECG) in a free-living environment provides valuable information for the prevention of heart attacks and other high-risk diseases. This paper presents the design of a real-time wearable ECG monitoring system with associated cardiac arrhythmia classification algorithms. One of the striking advantages is that the ECG analog front-end and on-node digital processing are designed to remove most of the noise and bias. In addition, the wearable sensor node is able to monitor the patient's ECG and motion signals in an unobtrusive way. To realize real-time medical analysis, the ECG is digitized and transmitted to a smart phone via Bluetooth. On the smart phone, the ECG waveform is visualized, and a novel layered hidden Markov model is seamlessly integrated to classify multiple cardiac arrhythmias in real time. Experimental results demonstrate that a clean and reliable ECG waveform can be captured in multiple stressed conditions and that the real-time cardiac arrhythmia classification is competitive with other workbenches. PMID:23112746

  14. Real-time Visualization and Quantification of Retrograde Cardioplegia Delivery using Near Infrared Fluorescent Imaging

    PubMed Central

    Rangaraj, Aravind T.; Ghanta, Ravi K.; Umakanthan, Ramanan; Soltesz, Edward G.; Laurence, Rita G.; Fox, John; Cohn, Lawrence H.; Bolman, R. M.; Frangioni, John V.; Chen, Frederick Y.

    2009-01-01

    Background and Aim of the Study Homogeneous delivery of cardioplegia is essential for myocardial protection during cardiac surgery. Presently, there exist no established methods to quantitatively assess cardioplegia distribution intraoperatively and determine when retrograde cardioplegia is required. In this study, we evaluate the feasibility of near infrared (NIR) imaging for real-time visualization of cardioplegia distribution in a porcine model. Methods A portable, intraoperative, real-time NIR imaging system was utilized. NIR fluorescent cardioplegia solution was developed by incorporating indocyanine green (ICG) into crystalloid cardioplegia solution. Real-time NIR imaging was performed while the fluorescent cardioplegia solution was infused via the retrograde route in 5 ex-vivo normal porcine hearts and in 5 ex-vivo porcine hearts status post left anterior descending (LAD) coronary artery ligation. Horizontal cross-sections of the hearts were obtained at proximal, middle, and distal LAD levels. Videodensitometry was performed to quantify distribution of fluorophore content. Results The progressive distribution of cardioplegia was clearly visualized with NIR imaging. Complete visualization of retrograde distribution occurred within 4 minutes of infusion. Videodensitometry revealed that retrograde cardioplegia primarily distributed to the left ventricle and anterior septum. In hearts with LAD ligation, antegrade cardioplegia did not distribute to the anterior left ventricle. This deficiency was compensated for with retrograde cardioplegia supplementation. Conclusions Incorporation of ICG into cardioplegia allows real-time visualization of cardioplegia delivery via NIR imaging. This technology may prove useful in guiding intraoperative decisions pertaining to when retrograde cardioplegia is mandated. PMID:19016995

  15. Direct manipulation of virtual objects

    NASA Astrophysics Data System (ADS)

    Nguyen, Long K.

    Interacting with a Virtual Environment (VE) generally requires the user to correctly perceive the relative position and orientation of virtual objects. For applications requiring interaction in personal space, the user may also need to accurately judge the position of the virtual object relative to that of a real object, for example, a virtual button and the user's real hand. This is difficult since VEs generally only provide a subset of the cues experienced in the real world. Complicating matters further, VEs presented by currently available visual displays may be inaccurate or distorted due to technological limitations. Fundamental physiological and psychological aspects of vision as they pertain to the task of object manipulation were thoroughly reviewed. Other sensory modalities -- proprioception, haptics, and audition -- and their cross-interactions with each other and with vision are briefly discussed. Visual display technologies, the primary component of any VE, were canvassed and compared. Current applications and research were gathered and categorized by different VE types and object interaction techniques. While object interaction research abounds in the literature, pockets of research gaps remain. Direct, dexterous, manual interaction with virtual objects in Mixed Reality (MR), where the real, seen hand accurately and effectively interacts with virtual objects, has not yet been fully quantified. An experimental test bed was designed to provide the highest accuracy attainable for salient visual cues in personal space. Optical alignment and user calibration were carefully performed. The test bed accommodated the full continuum of VE types and sensory modalities for comprehensive comparison studies. Experimental designs included two sets, each measuring depth perception and object interaction. The first set addressed the extreme end points of the Reality-Virtuality (R-V) continuum -- Immersive Virtual Environment (IVE) and Reality Environment (RE). 
This validated, linked, and extended several previous research findings, using one common test bed and participant pool. The results provided a proven method and solid reference points for further research. The second set of experiments leveraged the first to explore the full R-V spectrum and included additional, relevant sensory modalities. It consisted of two full-factorial experiments providing for rich data and key insights into the effect of each type of environment and each modality on accuracy and timeliness of virtual object interaction. The empirical results clearly showed that mean depth perception error in personal space was less than four millimeters whether the stimuli presented were real, virtual, or mixed. Likewise, mean error for the simple task of pushing a button was less than four millimeters whether the button was real or virtual. Mean task completion time was less than one second. Key to the high accuracy and quick task performance time observed was the correct presentation of the visual cues, including occlusion, stereoscopy, accommodation, and convergence. With performance results already near optimal level with accurate visual cues presented, adding proprioception, audio, and haptic cues did not significantly improve performance. Recommendations for future research include enhancement of the visual display and further experiments with more complex tasks and additional control variables.

  16. Real-time processing of dual band HD video for maintaining operational effectiveness in degraded visual environments

    NASA Astrophysics Data System (ADS)

    Parker, Steve C. J.; Hickman, Duncan L.; Smith, Moira I.

    2015-05-01

    Effective reconnaissance, surveillance and situational awareness, using dual band sensor systems, require the extraction, enhancement and fusion of salient features, with the processed video being presented to the user in an ergonomic and interpretable manner. HALO™ is designed to meet these requirements and provides an affordable, real-time, and low-latency image fusion solution on a low size, weight and power (SWAP) platform. The system has been progressively refined through field trials to increase its operating envelope and robustness. The result is a video processor that improves detection, recognition and identification (DRI) performance, whilst lowering operator fatigue and reaction times in complex and highly dynamic situations. This paper compares the performance of HALO™, both qualitatively and quantitatively, with conventional blended fusion for operation in degraded visual environments (DVEs), such as those experienced during ground and air-based operations. Although image blending provides a simple fusion solution, which explains its common adoption, the results presented demonstrate that its performance is poor compared to the HALO™ fusion scheme in DVE scenarios.

  17. NASA World Wind Near Real Time Data for Earth

    NASA Astrophysics Data System (ADS)

    Hogan, P.

    2013-12-01

    Innovation requires open standards for data exchange, not to mention access to data, so that value-added information intelligence can be continually created and advanced by the larger community. Likewise, innovation by academia and entrepreneurial enterprise alike is greatly benefited by an open platform that provides the basic technology for access and visualization of that data. NASA World Wind Java, and now NASA World Wind iOS for the iPhone and iPad, provide that technology. Whether the interest is weather science or climate science, emergency response or supply chain, seeing spatial data in its native context of Earth accelerates understanding and improves decision-making. NASA World Wind open source technology provides the basic elements for 4D visualization, using Open Geospatial Consortium (OGC) protocols, while allowing for customized access to any data, big or small, including support for NetCDF. NASA World Wind includes access to a suite of US Government WMS servers with near real-time data. The larger community can readily capitalize on this technology, building their own value-added applications, either open or proprietary.
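    An OGC WMS GetMap request is just a parameterized URL, which is why any client can pull imagery from such servers. A minimal request builder (the endpoint and layer name below are placeholders for illustration, not actual server values):

```python
from urllib.parse import urlencode

def wms_getmap_url(base_url, layer, bbox, width=512, height=256):
    """Build an OGC WMS 1.3.0 GetMap URL.

    bbox is (min_lat, min_lon, max_lat, max_lon): WMS 1.3.0 uses
    latitude-first axis order for EPSG:4326.
    """
    params = {
        "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
        "LAYERS": layer, "STYLES": "",
        "CRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width, "HEIGHT": height,
        "FORMAT": "image/png",
    }
    return base_url + "?" + urlencode(params)

# Hypothetical endpoint and layer, for illustration only:
url = wms_getmap_url("https://example.org/wms", "night_lights", (-90, -180, 90, 180))
```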

  18. The Ocean Observatories Initiative: Data Access and Visualization via the Graphical User Interface

    NASA Astrophysics Data System (ADS)

    Garzio, L. M.; Belabbassi, L.; Knuth, F.; Smith, M. J.; Crowley, M. F.; Vardaro, M.; Kerfoot, J.

    2016-02-01

    The Ocean Observatories Initiative (OOI), funded by the National Science Foundation, is a broad-scale, multidisciplinary effort to transform oceanographic research by providing users with real-time access to long-term datasets from a variety of deployed physical, chemical, biological, and geological sensors. The global array component of the OOI includes four high latitude sites: Irminger Sea off Greenland, Station Papa in the Gulf of Alaska, Argentine Basin off the coast of Argentina, and Southern Ocean near coordinates 55°S and 90°W. Each site is composed of fixed moorings, hybrid profiler moorings and mobile assets, with a total of approximately 110 instruments at each site. Near real-time (telemetered) and recovered data from these instruments can be visualized and downloaded via the OOI Graphical User Interface. In this Interface, the user can visualize scientific parameters via six different plotting functions with options to specify time ranges and apply various QA/QC tests. Data streams from all instruments can also be downloaded in different formats (CSV, JSON, and NetCDF) for further data processing, visualization, and comparison to supplementary datasets. In addition, users can view alerts and alarms in the system, access relevant metadata and deployment information for specific instruments, and find infrastructure specifics for each array including location, sampling strategies, deployment schedules, and technical drawings. These datasets from the OOI provide an unprecedented opportunity to transform oceanographic research and education, and will be readily accessible to the general public via the OOI's Graphical User Interface.

  19. Interactive Learning Modules: Enabling Near Real-Time Oceanographic Data Use In Undergraduate Education

    NASA Astrophysics Data System (ADS)

    Kilb, D. L.; Fundis, A. T.; Risien, C. M.

    2012-12-01

    The focus of the Education and Public Engagement (EPE) component of the NSF's Ocean Observatories Initiative (OOI) is to provide a new layer of cyber-interactivity for undergraduate educators to bring near real-time data from the global ocean into learning environments. To accomplish this, we are designing six online services including: 1) visualization tools, 2) a lesson builder, 3) a concept map builder, 4) educational web services (middleware), 5) collaboration tools and 6) an educational resource database. Here, we report on our Fall 2012 release that includes the first four of these services: 1) Interactive visualization tools allow users to interactively select data of interest, display the data in various views (e.g., maps, time-series and scatter plots) and obtain statistical measures such as mean, standard deviation and a regression line fit to select data. Specific visualization tools include a tool to compare different months of data, a time series explorer tool to investigate the temporal evolution of select data parameters (e.g., sea water temperature or salinity), a glider profile tool that displays ocean glider tracks and associated transects, and a data comparison tool that allows users to view the data either in scatter plot view comparing one parameter with another, or in time series view. 2) Our interactive lesson builder tool allows users to develop a library of online lesson units, which are collaboratively editable and sharable and provides starter templates designed from learning theory knowledge. 3) Our interactive concept map tool allows the user to build and use concept maps, a graphical interface to map the connection between concepts and ideas. This tool also provides semantic-based recommendations, and allows for embedding of associated resources such as movies, images and blogs. 4) Education web services (middleware) will provide an educational resource database API.

  20. Visualization assisted by parallel processing

    NASA Astrophysics Data System (ADS)

    Lange, B.; Rey, H.; Vasques, X.; Puech, W.; Rodriguez, N.

    2011-01-01

    This paper discusses the experimental results of our visualization model for data extracted from sensors. The objective is to find a computationally efficient method to produce a real-time rendering visualization for a large amount of data. We developed a visualization method to monitor the temperature variance of a data center. Sensors are placed on three layers and do not cover the whole room. We use the particle paradigm to interpolate the sensor data: particles model the "space" of the room. In this work we partition the particle set using two mathematical methods, Delaunay triangulation and Voronoi cells, both presented by Avis and Bhattacharya. Particles provide information on the room temperature at different coordinates over time. To locate and update particle data we define a computational cost function. To solve this function efficiently, we use a client-server paradigm: the server computes the data and clients display it on different kinds of hardware. This paper is organized as follows. The first part presents related algorithms used to visualize large flows of data. The second part presents the different platforms and methods that were evaluated in order to determine the best solution for the proposed task. The benchmark uses the computational cost of our algorithm, based on locating particles relative to sensors and on updating particle values. The benchmark was run on a personal computer using single-core CPU, multi-core, GPU, and hybrid GPU/CPU programming. GPU programming is a growing research area; it allows real-time rendering instead of precomputed rendering. To improve our results, we also ran our algorithm on a High Performance Computing (HPC) platform; this benchmark was used to improve the multi-core method. HPC is commonly used in data visualization (astronomy, physics, etc.) to improve rendering and achieve real-time performance.
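    Partitioning the room into Voronoi cells around the sensors amounts to assigning each particle the reading of its nearest sensor. A minimal illustration of that assignment step (not the paper's cost function or its GPU implementation):

```python
def nearest_sensor_values(particles, sensors):
    """Assign each particle the reading of its nearest sensor -- the
    discrete analogue of a Voronoi partition of the room.

    particles: list of (x, y, z) positions.
    sensors:   list of ((x, y, z), temperature) pairs.
    """
    def dist2(p, q):
        # squared Euclidean distance; no sqrt needed for a nearest test
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return [min(sensors, key=lambda s: dist2(p, s[0]))[1] for p in particles]
```

    For example, with sensors at (0,0,0) reading 20.0 and (10,0,0) reading 30.0, a particle at (9,0,0) is assigned 30.0.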

  1. On-chip visual perception of motion: a bio-inspired connectionist model on FPGA.

    PubMed

    Torres-Huitzil, César; Girau, Bernard; Castellanos-Sánchez, Claudio

    2005-01-01

    Visual motion provides useful information for understanding the dynamics of a scene, allowing intelligent systems to interact with their environment. Motion computation is usually constrained by real-time requirements that call for the design and implementation of specific hardware architectures. In this paper, the design of a hardware architecture for a bio-inspired neural model for motion estimation is presented. The motion estimation is based on a strongly localized bio-inspired connectionist model with a particular adaptation of spatio-temporal Gabor-like filtering. The architecture consists of three main modules that perform spatial, temporal, and excitatory-inhibitory connectionist processing. The biomimetic architecture is modeled, simulated, and validated in VHDL. Synthesis results on a Field Programmable Gate Array (FPGA) device show that real-time performance is achievable at an affordable silicon area.

  2. IoT for Real-Time Measurement of High-Throughput Liquid Dispensing in Laboratory Environments.

    PubMed

    Shumate, Justin; Baillargeon, Pierre; Spicer, Timothy P; Scampavia, Louis

    2018-04-01

    Critical to maintaining quality control in high-throughput screening is the need for constant monitoring of liquid-dispensing fidelity. Traditional methods involve operator intervention with gravimetric analysis to monitor the gross accuracy of full plate dispenses, visual verification of contents, or dedicated weigh stations on screening platforms that introduce potential bottlenecks and increase the plate-processing cycle time. We present a unique solution using open-source hardware, software, and 3D printing to automate dispenser accuracy determination by providing real-time dispense weight measurements via a network-connected precision balance. This system uses an Arduino microcontroller to connect a precision balance to a local network. By integrating the precision balance as an Internet of Things (IoT) device, it gains the ability to provide real-time gravimetric summaries of dispensing, generate timely alerts when problems are detected, and capture historical dispensing data for future analysis. All collected data can then be accessed via a web interface for reviewing alerts and dispensing information in real time or remotely for timely intervention of dispense errors. The development of this system also leveraged 3D printing to rapidly prototype sensor brackets, mounting solutions, and component enclosures.
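    The gravimetric check at the heart of such a system is simple: compare the balance's before/after readings against the expected dispense weight and alert on deviation. A sketch of that logic (the function name and the 5% tolerance are invented for illustration, not taken from the paper):

```python
def dispense_ok(before_g, after_g, expected_g, tol_pct=5.0):
    """Return (within_tolerance, dispensed_grams) for one dispense,
    flagging it when the gravimetric error exceeds tol_pct percent.
    tol_pct=5.0 is an illustrative default, not the paper's value."""
    dispensed = after_g - before_g
    error_pct = abs(dispensed - expected_g) / expected_g * 100.0
    return error_pct <= tol_pct, dispensed
```

    In the deployed system a check like this would run on readings streamed from the network-connected balance, raising an alert and logging the record when it fails.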

  3. Real-Time System for Water Modeling and Management

    NASA Astrophysics Data System (ADS)

    Lee, J.; Zhao, T.; David, C. H.; Minsker, B.

    2012-12-01

    Working closely with the Texas Commission on Environmental Quality (TCEQ) and the University of Texas at Austin (UT-Austin), we are developing a real-time system for water modeling and management using advanced cyberinfrastructure, data integration and geospatial visualization, and numerical modeling. The state of Texas suffered a severe drought in 2011 that cost the state $7.62 billion in agricultural losses (crops and livestock). Devastating situations such as this could potentially be avoided with better water modeling and management strategies that incorporate state of the art simulation and digital data integration. The goal of the project is to prototype a near-real-time decision support system for river modeling and management in Texas that can serve as a national and international model to promote more sustainable and resilient water systems. The system uses National Weather Service current and predicted precipitation data as input to the Noah-MP Land Surface model, which forecasts runoff, soil moisture, evapotranspiration, and water table levels given land surface features. These results are then used by a river model called RAPID, along with an error model currently under development at UT-Austin, to forecast stream flows in the rivers. Model forecasts are visualized as a Web application for TCEQ decision makers, who issue water diversion (withdrawal) permits and any needed drought restrictions; permit holders; and reservoir operation managers. Users will be able to adjust model parameters to predict the impacts of alternative curtailment scenarios or weather forecasts. A real-time optimization system under development will help TCEQ to identify optimal curtailment strategies to minimize impacts on permit holders and protect health and safety. To develop the system we have implemented RAPID as a remotely-executed modeling service using the Cyberintegrator workflow system with input data downloaded from the North American Land Data Assimilation System. 
The Cyberintegrator workflow system provides RESTful web services for users to provide inputs, execute workflows, and retrieve outputs. Along with REST endpoints, PAW (Publishable Active Workflows) provides the web user interface toolkit for us to develop web applications with scientific workflows. The prototype web application is built on top of workflows with PAW, so that users will have a user-friendly web environment to provide input parameters, execute the model, and visualize/retrieve the results using geospatial mapping tools. In future work the optimization model will be developed and integrated into the workflow.

  4. Identification of real-time diagnostic measures of visual distraction with an automatic eye-tracking system.

    PubMed

    Zhang, Harry; Smith, Matthew R H; Witt, Gerald J

    2006-01-01

    This study was conducted to identify eye glance measures that are diagnostic of visual distraction. Visual distraction degrades performance, but real-time diagnostic measures have not been identified. In a driving simulator, 14 participants responded to a lead vehicle braking at -2 or -2.7 m/s² periodically while reading a varying number of words (6-15 words every 13 s) on peripheral displays (with diagonal eccentricities of 24 degrees, 43 degrees, and 75 degrees). As the number of words and display eccentricity increased, total glance duration and reaction time increased and driving performance suffered. Correlation coefficients between several glance measures and reaction time or performance variables were reliably high, indicating that these glance measures are diagnostic of visual distraction. It is predicted that for every 25% increase in total glance duration, reaction time is increased by 0.39 s and standard deviation of lane position is increased by 0.06 m. Potential applications of this research include assessing visual distraction in real time, delivering advisories to distracted drivers to reorient their attention to driving, and using distraction information to adapt forward collision and lane departure warning systems to enhance system effectiveness.
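    The reported linear relations can be applied directly; a small helper illustrating the study's stated effect sizes (extrapolating linearly beyond the studied range is an assumption):

```python
def predicted_deltas(glance_increase_pct):
    """Per the study: each 25% increase in total glance duration predicts
    ~0.39 s more reaction time and ~0.06 m more SD of lane position."""
    steps = glance_increase_pct / 25.0
    return steps * 0.39, steps * 0.06

# e.g., 50% longer total glance duration:
rt_delta, sdlp_delta = predicted_deltas(50.0)
```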

  5. A reference web architecture and patterns for real-time visual analytics on large streaming data

    NASA Astrophysics Data System (ADS)

    Kandogan, Eser; Soroker, Danny; Rohall, Steven; Bak, Peter; van Ham, Frank; Lu, Jie; Ship, Harold-Jeffrey; Wang, Chun-Fu; Lai, Jennifer

    2013-12-01

    Monitoring and analysis of streaming data, such as social media, sensors, and news feeds, has become increasingly important for business and government. The volume and velocity of incoming data are key challenges. To effectively support monitoring and analysis, statistical and visual analytics techniques need to be seamlessly integrated; analytic techniques for a variety of data types (e.g., text, numerical) and scope (e.g., incremental, rolling-window, global) must be properly accommodated; interaction, collaboration, and coordination among several visualizations must be supported in an efficient manner; and the system should support the use of different analytics techniques in a pluggable manner. Especially in web-based environments, these requirements pose restrictions on the basic visual analytics architecture for streaming data. In this paper we report on our experience of building a reference web architecture for real-time visual analytics of streaming data, identify and discuss architectural patterns that address these challenges, and report on applying the reference architecture for real-time Twitter monitoring and analysis.
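    Of the analytic scopes listed (incremental, rolling-window, global), the rolling-window case is the one that forces explicit state management in a streaming backend. A minimal sketch of a rolling-window mean operator (illustrative only, not part of the reference architecture):

```python
from collections import deque

class RollingMean:
    """Maintain the mean of the last `size` stream values in O(1) per
    update -- the kind of stateful analytic operator a streaming visual
    analytics backend must host per visualization."""
    def __init__(self, size):
        self.buf = deque(maxlen=size)
        self.total = 0.0

    def push(self, x):
        if len(self.buf) == self.buf.maxlen:
            self.total -= self.buf[0]   # value about to be evicted
        self.buf.append(x)
        self.total += x
        return self.total / len(self.buf)
```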

  6. Research on robot mobile obstacle avoidance control based on visual information

    NASA Astrophysics Data System (ADS)

    Jin, Jiang

    2018-03-01

    Enabling robots to detect obstacles and controlling them to avoid obstacles has been a key research topic in robot control. In this paper, a scheme for visual information acquisition is proposed. By interpreting the visual information, it is transformed into an information source for path processing. While following the established route, the algorithm adjusts the trajectory in real time when obstacles are encountered, achieving intelligent control of the mobile robot. Simulation results show that, through the integration of visual sensing information, obstacle information is fully obtained while the real-time responsiveness and accuracy of the robot's motion control are guaranteed.

  7. Tracking the Spatiotemporal Neural Dynamics of Real-world Object Size and Animacy in the Human Brain.

    PubMed

    Khaligh-Razavi, Seyed-Mahdi; Cichy, Radoslaw Martin; Pantazis, Dimitrios; Oliva, Aude

    2018-06-07

    Animacy and real-world size are properties that describe any object and thus bring basic order into our perception of the visual world. Here, we investigated how the human brain processes real-world size and animacy. For this, we applied representational similarity to fMRI and MEG data to yield a view of brain activity with high spatial and temporal resolutions, respectively. Analysis of fMRI data revealed that a distributed and partly overlapping set of cortical regions extending from occipital to ventral and medial temporal cortex represented animacy and real-world size. Within this set, parahippocampal cortex stood out as the region representing animacy and size stronger than most other regions. Further analysis of the detailed representational format revealed differences among regions involved in processing animacy. Analysis of MEG data revealed overlapping temporal dynamics of animacy and real-world size processing starting at around 150 msec and provided the first neuromagnetic signature of real-world object size processing. Finally, to investigate the neural dynamics of size and animacy processing simultaneously in space and time, we combined MEG and fMRI with a novel extension of MEG-fMRI fusion by representational similarity. This analysis revealed partly overlapping and distributed spatiotemporal dynamics, with parahippocampal cortex singled out as a region that represented size and animacy persistently when other regions did not. Furthermore, the analysis highlighted the role of early visual cortex in representing real-world size. A control analysis revealed that the neural dynamics of processing animacy and size were distinct from the neural dynamics of processing low-level visual features. Together, our results provide a detailed spatiotemporal view of animacy and size processing in the human brain.

  8. Exploring Gigabyte Datasets in Real Time: Architectures, Interfaces and Time-Critical Design

    NASA Technical Reports Server (NTRS)

    Bryson, Steve; Gerald-Yamasaki, Michael (Technical Monitor)

    1998-01-01

    Architectures and Interfaces: The implications of real-time interaction on software architecture design: decoupling of interaction/graphics and computation into asynchronous processes. The performance requirements of graphics and computation for interaction. Time management in such an architecture. Examples of how visualization algorithms must be modified for high performance. A brief survey of interaction techniques and design, including direct manipulation and manipulation via widgets. The talk discusses how human factors considerations drove the design and implementation of the virtual wind tunnel. Time-Critical Design: A survey of time-critical techniques for both computation and rendering. Emphasis on the assignment of a time budget both to the overall visualization environment and to each individual visualization technique in the environment. The estimation of the benefit and cost of an individual technique. Examples of the modification of visualization algorithms to allow time-critical control.

  9. Real-time, label-free, intraoperative visualization of peripheral nerves and micro-vasculatures using multimodal optical imaging techniques

    PubMed Central

    Cha, Jaepyeong; Broch, Aline; Mudge, Scott; Kim, Kihoon; Namgoong, Jung-Man; Oh, Eugene; Kim, Peter

    2018-01-01

    Accurate, real-time identification and display of critical anatomic structures, such as nerves and vasculature, are critical for reducing complications and improving surgical outcomes. Human vision is frequently limited in clearly distinguishing and contrasting these structures. We present a novel imaging system which enables noninvasive visualization of critical anatomic structures during surgical dissection. Peripheral nerves are visualized by snapshot polarimetry, which calculates anisotropic optical properties. Vascular structures, both venous and arterial, are identified and monitored in real time using near-infrared laser speckle contrast imaging. We evaluate the system by performing in vivo animal studies with qualitative comparison against contrast-agent-aided fluorescence imaging. PMID:29541506
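    Laser speckle contrast imaging conventionally computes the local contrast K = σ/μ over small windows, where lower K indicates moving scatterers (e.g., blood flow). A straightforward, unoptimized sketch of that computation (not the authors' real-time pipeline):

```python
import numpy as np

def speckle_contrast(img, win=5):
    """Local speckle contrast K = std/mean over win x win windows.
    Regions with moving scatterers (flow) blur the speckle and give low K."""
    img = np.asarray(img, dtype=float)
    h = img.shape[0] - win + 1
    w = img.shape[1] - win + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            patch = img[i:i + win, j:j + win]
            out[i, j] = patch.std() / patch.mean()
    return out
```

    Real-time systems replace the explicit loops with separable box filters (or a GPU kernel), since mean and variance per window can be computed from running sums.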

  10. Real-time digital signal processing for live electro-optic imaging.

    PubMed

    Sasagawa, Kiyotaka; Kanno, Atsushi; Tsuchiya, Masahiro

    2009-08-31

    We present an imaging system that enables real-time magnitude and phase detection of modulated signals and its application to a Live Electro-optic Imaging (LEI) system, which realizes instantaneous visualization of RF electric fields. The real-time acquisition of magnitude and phase images of a modulated optical signal at 5 kHz is demonstrated by imaging with a Si-based high-speed CMOS image sensor and real-time signal processing with a digital signal processor. In the LEI system, RF electric fields are probed with light via an electro-optic crystal plate and downconverted to an intermediate frequency by parallel optical heterodyning, which can be detected with the image sensor. The artifacts caused by the optics and the image sensor characteristics are corrected by image processing. As examples, we demonstrate real-time visualization of electric fields from RF circuits.

  11. End-to-End Flow Control for Visual-Haptic Communication under Bandwidth Change

    NASA Astrophysics Data System (ADS)

    Yashiro, Daisuke; Tian, Dapeng; Yakoh, Takahiro

    This paper proposes an end-to-end flow controller for visual-haptic communication. A visual-haptic communication system transmits non-real-time packets, which contain large-size visual data, and real-time packets, which contain small-size haptic data. When the transmission rate of visual data exceeds the communication bandwidth, the visual-haptic communication system becomes unstable owing to buffer overflow. To solve this problem, an end-to-end flow controller is proposed. This controller determines the optimal transmission rate of visual data on the basis of the traffic conditions, which are estimated by the packets for haptic communication. Experimental results confirm that in the proposed method, a short packet-sending interval and a short delay are achieved under bandwidth change, and thus, high-precision visual-haptic communication is realized.
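    The controller described above adjusts the visual-data rate based on congestion signs carried by the haptic packets. As an illustration only (this AIMD-style rule is a stand-in, not the paper's exact control law), one update step might look like:

```python
def update_rate(rate, rtt, rtt_target, r_min, r_max, gain=0.1):
    """One step of a hypothetical end-to-end rate controller.

    Shrinks the visual-data transmission rate when the haptic-packet
    round-trip time exceeds a target (a sign of queue build-up), and
    probes for spare bandwidth otherwise.
    """
    if rtt > rtt_target:
        rate *= 1.0 - gain          # multiplicative back-off: buffers are filling
    else:
        rate += gain * r_min        # additive increase: probe for spare bandwidth
    return min(max(rate, r_min), r_max)
```

    Clamping to [r_min, r_max] keeps the visual stream from starving entirely or exceeding the link capacity.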

  12. Hand-held optoacoustic probe for three-dimensional imaging of human morphology and function

    NASA Astrophysics Data System (ADS)

    Deán-Ben, X. Luís.; Razansky, Daniel

    2014-03-01

    We report on a hand-held imaging probe for real-time optoacoustic visualization of deep tissues in three dimensions. The proposed solution incorporates a two-dimensional array of ultrasonic sensors densely distributed on a spherical surface, whereas illumination is performed coaxially through a cylindrical cavity in the array. Visualization of three-dimensional tomographic data at a frame rate of 10 images per second is enabled by parallel recording of 256 time-resolved signals for each individual laser pulse along with a highly efficient GPU-based real-time reconstruction. A liquid coupling medium (water), enclosed in a transparent membrane, is used to guarantee transmission of the optoacoustically generated waves to the ultrasonic detectors. Excitation at multiple wavelengths further allows imaging of spectrally distinctive tissue chromophores such as oxygenated and deoxygenated haemoglobin. The performance is showcased by video-rate tracking of deep tissue vasculature and three-dimensional measurements of blood oxygenation in a healthy human volunteer. The flexibility provided by the hand-held hardware design, combined with the real-time operation, makes the developed platform highly usable for both small animal research and clinical imaging in multiple indications, including cancer, inflammation, skin and cardiovascular diseases, and diagnostics of the lymphatic system and breast.

  13. Real time 3D visualization of intraoperative organ deformations using structured dictionary.

    PubMed

    Wang, Dan; Tewfik, Ahmed H

    2012-04-01

    Restricted visualization of the surgical field is one of the most critical challenges for minimally invasive surgery (MIS). Current intraoperative visualization systems are promising. However, they can hardly meet the requirements of high resolution and real time 3D visualization of the surgical scene to support the recognition of anatomic structures for safe MIS procedures. In this paper, we present a new approach for real time 3D visualization of organ deformations based on optical imaging patches with limited field-of-view and a single preoperative scan of magnetic resonance imaging (MRI) or computed tomography (CT). The idea for reconstruction is motivated by our empirical observation that the spherical harmonic coefficients corresponding to distorted surfaces of a given organ lie in lower dimensional subspaces in a structured dictionary that can be learned from a set of representative training surfaces. We provide both theoretical and practical designs for achieving these goals. Specifically, we discuss details about the selection of limited optical views and the registration of partial optical images with a single preoperative MRI/CT scan. The design proposed in this paper is evaluated with both finite element modeling data and ex vivo experiments. The ex vivo test is conducted on fresh porcine kidneys using 3D MRI scans with 1.2 mm resolution and a portable laser scanner with an accuracy of 0.13 mm. Results show that the proposed method achieves a sub-3 mm spatial resolution in terms of Hausdorff distance when using only one preoperative MRI scan and the optical patch from the single-sided view of the kidney. The reconstruction frame rate is between 10 frames/s and 39 frames/s depending on the complexity of the test model.
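    The reconstruction idea above, distorted-surface coefficients lying in a low-dimensional subspace learned from training surfaces, can be sketched as a least-squares fit of subspace coordinates to the observed entries, followed by expansion to the full surface. The names and the plain single-subspace model below are illustrative; the paper's structured dictionary is richer:

```python
import numpy as np

def reconstruct(partial_obs, obs_rows, basis):
    """Recover full surface coefficients from a partial view.

    basis: (n, k) matrix whose columns span a learned low-dimensional
    subspace (k << n); obs_rows indexes the entries actually observed.
    Solves least squares for the subspace coordinates, then expands.
    """
    sub = basis[obs_rows, :]                       # observed part of the basis
    coords, *_ = np.linalg.lstsq(sub, partial_obs, rcond=None)
    return basis @ coords                          # full reconstruction
```

    As long as the observed rows of the basis have full column rank, far fewer observations than surface coefficients suffice, which is what makes limited-field-of-view optical patches workable.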

  14. Subjective and objective evaluation of visual fatigue on viewing 3D display continuously

    NASA Astrophysics Data System (ADS)

    Wang, Danli; Xie, Yaohua; Yang, Xinpan; Lu, Yang; Guo, Anxiang

    2015-03-01

    In recent years, three-dimensional (3D) displays have become more and more popular in many fields. Although they can provide a better viewing experience, they cause extra problems, e.g., visual fatigue. Subjective or objective methods are usually used in discrete viewing processes to evaluate visual fatigue. However, little research combines subjective indicators and objective ones in an entirely continuous viewing process. In this paper, we propose a method to evaluate real-time visual fatigue both subjectively and objectively. Subjects watch stereo contents on a polarized 3D display continuously. Visual Reaction Time (VRT), Critical Flicker Frequency (CFF), Punctum Maximum Accommodation (PMA) and subjective scores of visual fatigue are collected before and after viewing. During the viewing process, the subjects rate the visual fatigue whenever it changes, without breaking the viewing process. At the same time, the blink frequency (BF) and percentage of eye closure (PERCLOS) of each subject are recorded for comparison with a previous study. The results show that subjective visual fatigue and PERCLOS increase with time and are greater in a continuous viewing process than in a discrete one. The BF also increases with time during the continuous viewing process. Besides, the visual fatigue induced significant changes in VRT, CFF and PMA.
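    The two objective eye measures used above are straightforward to compute from a per-frame eyelid-closure signal. A small sketch using the conventional 80%-closure criterion (the signal name and threshold convention are assumptions, not taken from the paper):

```python
import numpy as np

def perclos(closure, threshold=0.8):
    """Fraction of samples in which the eye is at least `threshold` closed.

    `closure` holds per-frame eyelid-closure fractions in [0, 1]; PERCLOS
    is conventionally the proportion of time the eye is >= 80% closed.
    """
    closure = np.asarray(closure, dtype=float)
    return float(np.mean(closure >= threshold))

def blink_frequency(closure, fps, threshold=0.8):
    """Blinks per minute: count rising edges through the closure threshold."""
    closed = np.asarray(closure, dtype=float) >= threshold
    blinks = int(np.sum(closed[1:] & ~closed[:-1])) + int(closed[0])
    minutes = len(closure) / fps / 60.0
    return blinks / minutes
```

    Counting rising edges rather than closed frames keeps a long eye closure from registering as many blinks.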

  15. 3D visualization techniques for the STEREO-mission

    NASA Astrophysics Data System (ADS)

    Wiegelmann, T.; Podlipnik, B.; Inhester, B.; Feng, L.; Ruan, P.

    The forthcoming STEREO mission will observe the Sun from two different viewpoints. We expect about 2 GB of data per day, which calls for suitable data presentation techniques. A key feature of STEREO is that it will provide, for the first time, a 3D view of the Sun and the solar corona. In our normal environment we see objects three-dimensionally because the light from real 3D objects needs different travel times to reach our left and right eyes. As a consequence, we see slightly different images with our two eyes, which gives us information about the depth of objects and a corresponding 3D impression. Techniques for the 3D visualization of scientific and other data on paper, TV, computer screen, cinema, etc. are well known, e.g., the two-colour anaglyph technique, shutter glasses, polarization filters, and head-mounted displays. We discuss advantages and disadvantages of these techniques and how they can be applied to STEREO data. The 3D visualization techniques are not limited to visual images but can also be used to show the reconstructed coronal magnetic field and the energy and helicity distribution. In advance of STEREO, we test the method with data from SOHO, which provides us different viewpoints through the solar rotation. This restricts the analysis to structures which remain stationary for several days; real STEREO data will not be affected by these limitations, however.
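    Of the techniques listed, the two-colour anaglyph is the simplest to express in code: route the left view to the red channel and the right view to the green and blue channels, so red-cyan glasses deliver each view to one eye. A minimal grayscale sketch:

```python
import numpy as np

def red_cyan_anaglyph(left, right):
    """Combine grayscale left/right views into a red-cyan anaglyph image.

    The left image feeds the red channel; the right image feeds the green
    and blue (cyan) channels, so filter glasses separate the two views.
    """
    out = np.zeros(left.shape + (3,), dtype=left.dtype)
    out[..., 0] = left          # red   <- left-eye view
    out[..., 1] = right         # green <- right-eye view
    out[..., 2] = right         # blue  <- right-eye view
    return out
```

    The same channel routing works for colour inputs if the views are first converted to luminance, at the cost of discarding hue information.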

  16. Kinematic Visual Biofeedback Improves Accuracy of Learning a Swallowing Maneuver and Accuracy of Clinician Cues During Training.

    PubMed

    Azola, Alba M; Sunday, Kirstyn L; Humbert, Ianessa A

    2017-02-01

    Submental surface electromyography (ssEMG) visual biofeedback is widely used to train swallowing maneuvers. This study compares the effect of ssEMG and videofluoroscopy (VF) visual biofeedback on hyo-laryngeal accuracy when training a swallowing maneuver. Furthermore, it examines the clinician's ability to provide accurate verbal cues during swallowing maneuver training. Thirty healthy adults performed the volitional laryngeal vestibule closure maneuver (vLVC), which involves swallowing and sustaining closure of the laryngeal vestibule for 2 s. The study included two stages: (1) first accurate demonstration of the vLVC maneuver, followed by (2) training, consisting of 20 vLVC training swallows. Participants were randomized into three groups: (a) ssEMG biofeedback only, (b) VF biofeedback only, and (c) mixed biofeedback (VF for the first-accurate-demonstration stage and ssEMG for the training stage). Participants' performances were verbally critiqued or reinforced in real time while both the clinician and the participant were observing the assigned visual biofeedback. VF and ssEMG were continuously recorded for all participants. Results show that the accuracy of both vLVC performance and clinician cues was greater with VF biofeedback than with either ssEMG or mixed biofeedback (p < 0.001). Using ssEMG to provide real-time biofeedback during training could lead to errors while learning and training a swallowing maneuver.

  17. Use of a Mobile Application to Help Students Develop Skills Needed in Solving Force Equilibrium Problems

    NASA Astrophysics Data System (ADS)

    Yang, Eunice

    2016-02-01

    This paper discusses the use of a free mobile engineering application (app) called Autodesk® ForceEffect™ to provide students assistance with spatial visualization of forces and more practice in solving/visualizing statics problems compared to the traditional pencil-and-paper method. ForceEffect analyzes static rigid-body systems using free-body diagrams (FBDs) and provides solutions in real time. It is cost-free software that is available for download on the Internet. The software is supported on the iOS™, Android™, and Google Chrome™ platforms. It is easy to use, and the learning curve is approximately two hours using the tutorial provided within the app. ForceEffect can present students with different problem modalities (textbook, real-world, and design) to help them acquire and improve the skills needed to solve force equilibrium problems. Although this paper focuses on the engineering mechanics statics course, the technology discussed is also relevant to the introductory physics course.

  18. Visualization of Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Gerald-Yamasaki, Michael; Hultquist, Jeff; Bryson, Steve; Kenwright, David; Lane, David; Walatka, Pamela; Clucas, Jean; Watson, Velvin; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    Scientific visualization serves the dual purpose of exploration and exposition of the results of numerical simulations of fluid flow. Along with the basic visualization process, which transforms source data into images, there are four additional components to a complete visualization system: Source Data Processing, User Interface and Control, Presentation, and Information Management. The requirements imposed by the desired mode of operation (i.e., real-time, interactive, or batch) and the source data have their effect on each of these visualization system components. The special requirements imposed by the wide variety and size of the source data provided by the numerical simulation of fluid flow present an enormous challenge to the visualization system designer. We describe the visualization system components, including specific visualization techniques, and how the mode of operation and source data requirements affect the construction of computational fluid dynamics visualization systems.

  19. A GIS-Enabled, Michigan-Specific, Hierarchical Groundwater Modeling and Visualization System

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Li, S.; Mandle, R.; Simard, A.; Fisher, B.; Brown, E.; Ross, S.

    2005-12-01

    Efficient management of groundwater resources relies on a comprehensive database that represents the characteristics of the natural groundwater system as well as analysis and modeling tools to describe the impacts of decision alternatives. Many agencies in Michigan have spent several years compiling expensive and comprehensive surface water and groundwater inventories and other related spatial data that describe their respective areas of responsibility. However, most often this wealth of descriptive data has only been utilized for basic mapping purposes. The benefits from analyzing these data, using GIS analysis functions or externally developed analysis models or programs, have yet to be systematically realized. In this talk, we present a comprehensive software environment that allows Michigan groundwater resources managers and frontline professionals to make more effective use of the available data and improve their ability to manage and protect groundwater resources, address potential conflicts, design cleanup schemes, and prioritize investigation activities. In particular, we take advantage of the Interactive Ground Water (IGW) modeling system and convert it to a customized software environment specifically for analyzing, modeling, and visualizing the Michigan statewide groundwater database. The resulting Michigan IGW modeling system (IGW-M) is completely window-based, fully interactive, and seamlessly integrated with a GIS mapping engine. The system operates in real time (on the fly), providing dynamic, hierarchical mapping, modeling, spatial analysis, and visualization. 
Specifically, IGW-M allows water resources and environmental professionals in Michigan to: * Access and utilize the extensive data from the statewide groundwater database, interactively manipulate GIS objects, and display and query the associated data and attributes; * Analyze and model the statewide groundwater database, interactively convert GIS objects into numerical model features, automatically extract data and attributes, and simulate unsteady groundwater flow and contaminant transport in response to water and land management decisions; * Visualize and map model simulations and predictions with data from the statewide groundwater database in a seamless interactive environment. IGW-M has the potential to significantly improve the productivity of Michigan groundwater management investigations. It changes the role of engineers and scientists in modeling and analyzing the statewide groundwater database from heavily physical tasks to cognitive problem-solving and decision-making tasks. The seamless real-time integration, real-time visual interaction, and real-time processing capability allows a user to focus on critical management issues, conflicts, and constraints; to quickly and iteratively examine conceptual approximations, management and planning scenarios, and site characterization assumptions; to identify dominant processes; to evaluate data worth and sensitivity; and to guide further data-collection activities. We illustrate the power and effectiveness of the IGW-M modeling and visualization system with a real case study and a real-time, live demonstration.

  20. Evaluation of Augmented Reality Feedback in Surgical Training Environment.

    PubMed

    Zahiri, Mohsen; Nelson, Carl A; Oleynikov, Dmitry; Siu, Ka-Chun

    2018-02-01

    Providing computer-based laparoscopic surgical training has several advantages that enhance the training process. Self-evaluation and real-time performance feedback are two of these advantages, which reduce trainees' dependence on expert feedback. The goal of this study was to investigate the use of a visual time indicator as real-time feedback during laparoscopic surgical training. Twenty novices participated in this study, working with (and without) different presentations of time indicators. They performed a standard peg transfer task, and their completion times and muscle activity were recorded and compared. Also of interest was whether the use of this type of feedback induced any side effect in terms of motivation or muscle fatigue. Of the 20 participants, 15 (75%) preferred using a time indicator in the training process rather than having no feedback. However, time to task completion showed no significant difference in performance with the time indicator; furthermore, no significant differences in muscle activity or muscle fatigue were detected with or without time feedback. The absence of a significant difference between task performance with and without time feedback shows that visual real-time feedback can be included in surgical training based on user preference. Trainees may benefit from this type of feedback in the form of increased motivation. The extent to which this can influence training frequency, leading to performance improvement, is a question for further study.

  1. Iowa Flood Information System: Towards Integrated Data Management, Analysis and Visualization

    NASA Astrophysics Data System (ADS)

    Demir, I.; Krajewski, W. F.; Goska, R.; Mantilla, R.; Weber, L. J.; Young, N.

    2012-04-01

    The Iowa Flood Information System (IFIS) is a web-based platform developed by the Iowa Flood Center (IFC) to provide access to flood inundation maps, real-time flood conditions, short-term and seasonal flood forecasts, flood-related data, information, and interactive visualizations for communities in Iowa. The key element of the system's architecture is the notion of community. Locations of the communities, those near streams and rivers, define basin boundaries. The IFIS provides community-centric watershed and river characteristics, weather (rainfall) conditions, and streamflow data and visualization tools. Interactive interfaces allow access to inundation maps for different stage and return-period values, and flooding scenarios with contributions from multiple rivers. Real-time and historical data on water levels, gauge heights, and rainfall conditions are available in the IFIS by streaming data from automated IFC bridge sensors, USGS stream gauges, NEXRAD radars, and NWS forecasts. Simple 2D and 3D interactive visualizations in the IFIS make the data more understandable to the general public. Users are able to filter data sources for their communities and selected rivers. The data and information in the IFIS are also accessible through web services and mobile applications. The IFIS is optimized for various browsers and screen sizes to provide access through multiple platforms, including tablets and mobile devices. The IFIS includes a rainfall-runoff forecast model to provide a five-day flood risk estimate for around 500 communities in Iowa. Multiple view modes in the IFIS accommodate different user types, from the general public to researchers and decision makers, by providing different levels of tools and details. River view mode allows users to visualize data from multiple IFC bridge sensors and USGS stream gauges to follow flooding conditions along a river. 
The IFIS will help communities make better-informed decisions on the occurrence of floods and will alert communities in advance to help minimize flood damage. This presentation provides an overview and live demonstration of the tools and interfaces in the IFIS developed to date to provide a platform for one-stop access to flood-related data, visualizations, flood conditions, and forecasts.

  2. Reference Network Real-Time Services Control Techniques

    NASA Astrophysics Data System (ADS)

    Nykiel, Grzegorz; Szolucha, Marcin

    2013-04-01

    Differential corrections and services for the real-time kinematic (RTK) method are in many cases used to support surveys that form the basis for administrative decisions. For that reason, services which allow GNSS measurements to be performed should be constantly monitored to minimize the risk of errors or unexpected gaps in observations. A system providing such control is the subject of work carried out under grant NR09-0010-10/2010 at the Military University of Technology. This study developed a concept for monitoring the real-time services of the Polish reference network ASG-EUPOS and implemented software that informs users about system accuracy. The main objectives of the concept were maximum use of existing infrastructure while minimizing the cost of installing new elements, and providing users with calculation results via the ASG-EUPOS website. At the same time, the concept assumes an open module architecture that allows successive development of applications and integration with existing solutions. This paper presents several solutions and algorithms which have been implemented and tested. It also contains some examples of data visualization methods.

  3. Developing an Interactive Data Visualization Tool to Assess the Impact of Decision Support on Clinical Operations.

    PubMed

    Huber, Timothy C; Krishnaraj, Arun; Monaghan, Dayna; Gaskin, Cree M

    2018-05-18

    Due to mandates from recent legislation, clinical decision support (CDS) software is being adopted by radiology practices across the country. This software provides imaging study decision support for referring providers at the point of order entry. CDS systems produce a large volume of data, providing opportunities for research and quality improvement. Following the integration of a commercially available CDS product into the electronic health record, an interactive dashboard was created using a commercially available data visualization platform (Tableau, Seattle, WA) to better visualize and analyze trends in these data. Data generated by the CDS were exported from the data warehouse, where they were stored, into the platform, allowing real-time visualization of the data generated by the decision support software. The dashboard allowed the output of the CDS platform to be analyzed more easily and facilitated hypothesis generation. Integrating data visualization tools into clinical decision support tools allows for easier data analysis and can streamline research and quality improvement efforts.

  4. Dare to Compare

    ERIC Educational Resources Information Center

    Beigie, Darin

    2016-01-01

    A recent trend in school mathematics has been to launch student inquiry with real-world contexts that capture student interest and intrigue (Meyer 2011, 2012; Kane 2015). Often these starting points harness the immediacy and power of the Internet to provide strong visuals and timely relevance. The goal of the resulting student inquiry is to foster…

  5. Visually Exploring Transportation Schedules.

    PubMed

    Palomo, Cesar; Guo, Zhan; Silva, Cláudio T; Freire, Juliana

    2016-01-01

    Public transportation schedules are designed by agencies to optimize service quality under multiple constraints. However, real service usually deviates from the plan. Therefore, transportation analysts need to identify, compare and explain both eventual and systemic performance issues that must be addressed so that better timetables can be created. The purely statistical tools commonly used by analysts pose many difficulties due to the large number of attributes at trip- and station-level for planned and real service. Also challenging is the need for models at multiple scales to search for patterns at different times and stations, since analysts do not know exactly where or when relevant patterns might emerge and need to compute statistical summaries for multiple attributes at different granularities. To aid in this analysis, we worked in close collaboration with a transportation expert to design TR-EX, a visual exploration tool developed to identify, inspect and compare spatio-temporal patterns for planned and real transportation service. TR-EX combines two new visual encodings inspired by Marey's Train Schedule: Trips Explorer for trip-level analysis of frequency, deviation and speed; and Stops Explorer for station-level study of delay, wait time, reliability and performance deficiencies such as bunching. To tackle overplotting and to provide a robust representation for a large number of trips and stops at multiple scales, the system supports variable kernel bandwidths to achieve the level of detail required by users for different tasks. We justify our design decisions based on specific analysis needs of transportation analysts. We provide anecdotal evidence of the efficacy of TR-EX through a series of case studies that explore NYC subway service, which illustrate how TR-EX can be used to confirm hypotheses and derive new insights through visual exploration.

  6. Real-time visual simulation of APT system based on RTW and Vega

    NASA Astrophysics Data System (ADS)

    Xiong, Shuai; Fu, Chengyu; Tang, Tao

    2012-10-01

    The Matlab/Simulink simulation model of an APT (acquisition, pointing and tracking) system is analyzed and established. The model's C code, which can be used for real-time simulation, is then generated by RTW (Real-Time Workshop). Practical experiments show that running the C code produces the same simulation result as running the Simulink model directly in the Matlab environment. MultiGen-Vega is a real-time 3D scene simulation software system. With it and OpenGL, the APT scene simulation platform is developed and used to render and display the virtual scenes of the APT system. To add necessary graphics effects to the virtual scenes in real time, GLSL (OpenGL Shading Language) shaders are used on a programmable GPU. By calling the C code, the scene simulation platform can adjust the system parameters on-line and obtain the APT system's real-time simulation data to drive the scenes. Practical application shows that this visual simulation platform has high efficiency, low cost and good simulation effect.

  7. Real-time biscuit tile image segmentation method based on edge detection.

    PubMed

    Matić, Tomislav; Aleksi, Ivan; Hocenski, Željko; Kraus, Dieter

    2018-05-01

    In this paper we propose a novel real-time Biscuit Tile Segmentation (BTS) method for images from a ceramic tile production line. The BTS method is based on signal change detection and contour tracing, with the main goal of separating tile pixels from the background in images captured on the production line. Usually, human operators visually inspect and classify produced ceramic tiles. Computer vision and image processing techniques can automate the visual inspection process if they fulfill real-time requirements. An important step in this process is real-time segmentation of tile pixels. The BTS method is implemented for parallel execution on a GPU device to satisfy the real-time constraints of the tile production line. The BTS method outperforms 2D threshold-based methods, 1D edge detection methods and contour-based methods. The proposed BTS method is in use in the biscuit tile production line. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
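    The signal-change stage of such a pipeline can be illustrated with 1D profiles: collapse the image to row and column means and find where they rise above the background level. The threshold and bright-tile-on-dark-background layout below are illustrative assumptions, not the published BTS algorithm:

```python
import numpy as np

def tile_bounds(image, background=0.0, margin=10.0):
    """Locate a bright tile on a dark background via 1D signal change.

    Averages the image along rows and columns, then finds where each
    profile exceeds the background level plus a margin; returns the
    bounding box as (top, bottom, left, right), inclusive.
    """
    row_profile = image.mean(axis=1)
    col_profile = image.mean(axis=0)
    rows = np.where(row_profile > background + margin)[0]
    cols = np.where(col_profile > background + margin)[0]
    return rows[0], rows[-1], cols[0], cols[-1]
```

    Reducing a 2D search to two 1D threshold scans is what makes this kind of detection cheap enough for production-line frame rates; the full method then traces the actual tile contour within the box.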

  8. REAL TIME MRI GUIDED RADIOFREQUENCY ATRIAL ABLATION AND VISUALIZATION OF LESION FORMATION AT 3-TESLA

    PubMed Central

    Vergara, Gaston R.; Vijayakumar, Sathya; Kholmovski, Eugene G.; Blauer, Joshua J.E.; Guttman, Mike A.; Gloschat, Christopher; Payne, Gene; Vij, Kamal; Akoum, Nazem W.; Daccarett, Marcos; McGann, Christopher J.; MacLeod, Rob S.; Marrouche, Nassir F.

    2011-01-01

    Background: MRI allows visualization of the location and extent of RF ablation lesions and myocardial scar formation, as well as real-time (RT) assessment of lesion formation. In this study, we report a novel 3-Tesla RT-MRI based porcine RF ablation model and visualization of lesion formation in the atrium during RF energy delivery. Objective: To develop a 3-Tesla RT-MRI based catheter ablation and lesion visualization system. Methods: RF energy was delivered to six pigs under RT-MRI guidance. A novel MRI-compatible mapping and ablation catheter was used. Under RT-MRI, this catheter was safely guided and positioned within either the left or right atrium. Unipolar and bipolar electrograms were recorded. The catheter tip-tissue interface was visualized with a T1-weighted gradient echo sequence. RF energy was then delivered in a power-controlled fashion. Myocardial changes and lesion formation were visualized with a T2-weighted (T2w) HASTE sequence during ablation. Results: Real-time visualization of lesion formation was achieved in 30% of the ablations performed. In the other cases, either the lesion was formed outside the imaged region (25%) or no lesion was created (45%), presumably due to poor tissue-catheter tip contact. The presence of lesions was confirmed by late gadolinium enhancement (LGE) MRI and macroscopic tissue examination. Conclusion: MRI-compatible catheters can be navigated and RF energy safely delivered under 3-Tesla RT-MRI guidance. It is also feasible to record electrograms during RT imaging. Real-time visualization of the lesion as it forms during delivery of RF energy is possible and was demonstrated using T2w HASTE imaging. PMID:21034854

  9. Visualization and simulation techniques for surgical simulators using actual patient's data.

    PubMed

    Radetzky, Arne; Nürnberger, Andreas

    2002-11-01

    Because of the increasing complexity of surgical interventions, research in surgical simulation has become more and more important over the last years. However, the simulation of tissue deformation is still a challenging problem, mainly due to the short response times that are required for real-time interaction. The demands on hardware and software are even greater if not only modeled human anatomy is used but also the anatomy of actual patients. This is required if the surgical simulator is to be used as a training medium for expert surgeons rather than students. In this article, suitable visualization and simulation methods for surgical simulation utilizing actual patients' datasets are described. The advantages and disadvantages of direct and indirect volume rendering for the visualization are discussed, and a neuro-fuzzy system is described which can be used for the simulation of interactive tissue deformations. The neuro-fuzzy system makes it possible to define the deformation behavior based on a linguistic description of the tissue characteristics or to learn the dynamics by using measured data of real tissue. Furthermore, a simulator for minimally invasive neurosurgical interventions is presented that utilizes the described visualization and simulation methods. The structure of the simulator is described in detail, and the results of a system evaluation by an experienced neurosurgeon--a quantitative comparison between different methods of virtual endoscopy as well as a comparison between real brain images and virtual endoscopies--are given. The evaluation proved that the simulator provides higher realism of the visualization and simulation than other currently available simulators. Copyright 2002 Elsevier Science B.V.

  10. Promoting smoke-free homes: a novel behavioral intervention using real-time audio-visual feedback on airborne particle levels.

    PubMed

    Klepeis, Neil E; Hughes, Suzanne C; Edwards, Rufus D; Allen, Tracy; Johnson, Michael; Chowdhury, Zohir; Smith, Kirk R; Boman-Davis, Marie; Bellettiere, John; Hovell, Melbourne F

    2013-01-01

    Interventions are needed to protect the health of children who live with smokers. We pilot-tested a real-time intervention for promoting behavior change in homes that reduces second hand tobacco smoke (SHS) levels. The intervention uses a monitor and feedback system to provide immediate auditory and visual signals triggered at defined thresholds of fine particle concentration. Dynamic graphs of real-time particle levels are also shown on a computer screen. We experimentally evaluated the system, field-tested it in homes with smokers, and conducted focus groups to obtain general opinions. Laboratory tests of the monitor demonstrated SHS sensitivity, stability, precision equivalent to at least 1 µg/m3, and low noise. A linear relationship (R2 = 0.98) was observed between the monitor and average SHS mass concentrations up to 150 µg/m3. Focus groups and interviews with intervention participants showed in-home use to be acceptable and feasible. The intervention was evaluated in 3 homes with combined baseline and intervention periods lasting 9 to 15 full days. Two families modified their behavior by opening windows or doors, smoking outdoors, or smoking less. We observed evidence of lower SHS levels in these homes. The remaining household voiced reluctance to changing their smoking activity and did not exhibit lower SHS levels in main smoking areas or clear behavior change; however, family members expressed receptivity to smoking outdoors. This study established the feasibility of the real-time intervention, laying the groundwork for controlled trials with larger sample sizes. Visual and auditory cues may prompt family members to take immediate action to reduce SHS levels. Dynamic graphs of SHS levels may help families make decisions about specific mitigation approaches.
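    The threshold-triggered cue logic at the heart of such a feedback system is easy to sketch; the two cutoffs below are placeholders for illustration, not the thresholds used in the study:

```python
def feedback_state(pm25, warn=25.0, alarm=50.0):
    """Map a fine-particle reading (ug/m3) to a feedback signal.

    Mirrors the idea of auditory and visual cues triggered at defined
    particle-concentration thresholds: quiet below `warn`, a gentle cue
    between `warn` and `alarm`, and an insistent cue at or above `alarm`.
    """
    if pm25 >= alarm:
        return "alarm"    # e.g. loud tone plus red display
    if pm25 >= warn:
        return "warning"  # e.g. soft tone plus yellow display
    return "ok"
```

    A deployed monitor would also debounce the signal (require several consecutive readings over a threshold) so that brief spikes do not trigger the alarm.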

  11. Promoting Smoke-Free Homes: A Novel Behavioral Intervention Using Real-Time Audio-Visual Feedback on Airborne Particle Levels

    PubMed Central

    Klepeis, Neil E.; Hughes, Suzanne C.; Edwards, Rufus D.; Allen, Tracy; Johnson, Michael; Chowdhury, Zohir; Smith, Kirk R.; Boman-Davis, Marie; Bellettiere, John; Hovell, Melbourne F.

    2013-01-01

    Interventions are needed to protect the health of children who live with smokers. We pilot-tested a real-time intervention for promoting behavior change that reduces secondhand tobacco smoke (SHS) levels in homes. The intervention uses a monitor and feedback system to provide immediate auditory and visual signals triggered at defined thresholds of fine particle concentration. Dynamic graphs of real-time particle levels are also shown on a computer screen. We experimentally evaluated the system, field-tested it in homes with smokers, and conducted focus groups to obtain general opinions. Laboratory tests of the monitor demonstrated SHS sensitivity, stability, precision equivalent to at least 1 µg/m³, and low noise. A linear relationship (R² = 0.98) was observed between the monitor and average SHS mass concentrations up to 150 µg/m³. Focus groups and interviews with intervention participants showed in-home use to be acceptable and feasible. The intervention was evaluated in 3 homes with combined baseline and intervention periods lasting 9 to 15 full days. Two families modified their behavior by opening windows or doors, smoking outdoors, or smoking less. We observed evidence of lower SHS levels in these homes. The remaining household voiced reluctance to change their smoking activity and did not exhibit lower SHS levels in main smoking areas or clear behavior change; however, family members expressed receptivity to smoking outdoors. This study established the feasibility of the real-time intervention, laying the groundwork for controlled trials with larger sample sizes. Visual and auditory cues may prompt family members to take immediate action to reduce SHS levels. Dynamic graphs of SHS levels may help families make decisions about specific mitigation approaches. PMID:24009742
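The feedback mechanism described above (auditory and visual signals fired when particle levels cross a threshold) can be sketched in a few lines. This is an illustrative sketch, not the study's software; the threshold and readings are hypothetical:

```python
# Illustrative sketch of threshold-triggered feedback: readings above a
# set fine-particle threshold trigger an alert, and all readings are kept
# so they can also be plotted as a dynamic graph.
THRESHOLD_UG_M3 = 25.0   # hypothetical alert threshold, in µg/m³

def process_reading(ug_m3, history):
    """Record one particle reading and report whether to alert."""
    history.append(ug_m3)                # retained for the real-time graph
    return ug_m3 >= THRESHOLD_UG_M3     # True -> fire audio/visual signal

history = []
alerts = [process_reading(v, history) for v in (5.0, 12.0, 40.0, 18.0)]
print(alerts)  # [False, False, True, False]
```

A real system would debounce alerts and average over short windows, but the core logic is just this comparison against a configured threshold.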

  12. Towards real-time remote processing of laparoscopic video

    NASA Astrophysics Data System (ADS)

    Ronaghi, Zahra; Duffy, Edward B.; Kwartowitz, David M.

    2015-03-01

    Laparoscopic surgery is a minimally invasive surgical technique where surgeons insert a small video camera into the patient's body to visualize internal organs and small tools to perform surgical procedures. However, the benefit of small incisions has a drawback of limited visualization of subsurface tissues, which can lead to navigational challenges in the delivery of therapy. Image-guided surgery (IGS) uses images to map subsurface structures and can reduce the limitations of laparoscopic surgery. One particular laparoscopic camera system of interest is the vision system of the daVinci-Si robotic surgical system (Intuitive Surgical, Sunnyvale, CA, USA). The video streams generate approximately 360 megabytes of data per second, demonstrating a trend towards increased data sizes in medicine, primarily due to higher-resolution video cameras and imaging equipment. Processing this data on a bedside PC has become challenging, and a high-performance computing (HPC) environment may not always be available at the point of care. To process this data on remote HPC clusters at the typical 30 frames per second (fps) rate, each 11.9 MB video frame must be processed by a server and returned within 1/30th of a second. The ability to acquire, process and visualize data in real time is essential for the performance of complex tasks as well as for minimizing risk to the patient. As a result, utilizing high-speed networks to access computing clusters will lead to real-time medical image processing and improve surgical experiences by providing real-time augmented laparoscopic data. We aim to develop a medical video processing system using an OpenFlow software-defined network that is capable of connecting to multiple remote medical facilities and HPC servers.
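The throughput figures in this abstract are easy to verify with back-of-the-envelope arithmetic:

```python
# Back-of-the-envelope check of the video throughput figures quoted above.
frame_mb = 11.9                    # MB per video frame (from the abstract)
fps = 30                           # typical laparoscopic video frame rate
throughput_mb_s = frame_mb * fps   # sustained data rate the link must carry
deadline_ms = 1000 / fps           # per-frame round-trip processing budget

print(round(throughput_mb_s))      # 357, consistent with "approximately 360 MB/s"
print(round(deadline_ms, 1))       # 33.3 ms per frame
```

The 33 ms budget must cover network transit in both directions plus server-side processing, which is why the paper couples remote HPC with high-speed, software-defined networking.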

  13. The effect of automated monitoring and real-time prompting on nurses' hand hygiene performance.

    PubMed

    Levchenko, Alexander I; Boscart, Veronique M; Fernie, Geoff R

    2013-10-01

    Adequate hand hygiene compliance by healthcare staff is considered an effective method to reduce hospital-acquired infections. The electronic system developed at Toronto Rehabilitation Institute automatically detects hand hygiene opportunities and records hand hygiene actions. It includes an optional visual hand hygiene status indication, generates real-time hand hygiene prompting signals, and enables automated monitoring of individual and aggregated hand hygiene performance. The system was installed on a complex continuous care unit at the entrance to 17 patient rooms and a utility room. A total of 93 alcohol gel and soap dispensers were instrumented and 14 nurses were provided with the personal wearable electronic monitors. The study included three phases with the system operating in three different modes: (1) an inactive mode during the first phase when hand hygiene opportunities and hand hygiene actions were recorded but prompting and visual indication functions were disabled, (2) only hand hygiene status indicators were enabled during the second phase, and (3) both hand hygiene status and real-time hand hygiene prompting signals were enabled during the third phase. Data collection was performed automatically during all of the three phases. The system indicated significantly higher hand hygiene activity rates and compliance during the third phase, with both hand hygiene indication and real-time prompting functions enabled. To increase the efficacy of the technology, its use was supplemented with individual performance reviews of the automatically collected data.
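The compliance metric that such a system can compute automatically is simply hygiene actions performed divided by opportunities detected. A minimal sketch, with made-up numbers rather than the study's data:

```python
# Sketch of automated hand hygiene compliance reporting: compliance is
# the ratio of recorded hygiene actions to detected opportunities.
# The per-phase counts below are hypothetical, not the study's results.
def compliance_rate(actions, opportunities):
    if opportunities == 0:
        return 0.0
    return actions / opportunities

phase_logs = {               # phase -> (actions, opportunities)
    "inactive":   (62, 155),
    "indicators": (81, 150),
    "prompting":  (118, 160),
}
for phase, (acts, opps) in phase_logs.items():
    print(phase, f"{compliance_rate(acts, opps):.0%}")
```

Because opportunities and actions are logged per wearer, the same ratio can be reported per nurse or aggregated per unit, matching the individual performance reviews mentioned above.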

  14. Applicability of Deep-Learning Technology for Relative Object-Based Navigation

    DTIC Science & Technology

    2017-09-01

    One of the possible approaches for navigating an unmanned ground vehicle (UGV) is through real-time visual odometry. To navigate in such an environment, the UGV needs to be able to detect, identify, and relate the static...

  15. Probe-based confocal laser endomicroscopy (pCLE) - a new imaging technique for in situ localization of spermatozoa.

    PubMed

    Trottmann, Matthias; Stepp, Herbert; Sroka, Ronald; Heide, Michael; Liedl, Bernhard; Reese, Sven; Becker, Armin J; Stief, Christian G; Kölle, Sabine

    2015-05-01

    In azoospermic patients, spermatozoa are routinely obtained by testicular sperm extraction (TESE). However, success rates of this technique are moderate, because the site of excision of testicular tissue is determined arbitrarily. Therefore the aim of this study was to establish probe-based confocal laser endomicroscopy (pCLE), a novel biomedical imaging technique that provides non-invasive, real-time visualisation of tissue at histological resolution. Using pCLE we clearly visualized longitudinal and horizontal views of the tubuli seminiferi contorti and localized vital spermatozoa. Obtained images and real-time videos were subsequently compared with confocal laser scanning microscopy (CLSM) of spermatozoa and tissues, respectively. Image: comparative visualization of single native spermatozoa by confocal laser scanning microscopy (CLSM, left) and probe-based laser endomicroscopy (pCLE, right) using Pro Flex™ UltraMini O after staining with acriflavine. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Integration of Real-Time Intraoperative Contrast-Enhanced Ultrasound and Color Doppler Ultrasound in the Surgical Treatment of Spinal Cord Dural Arteriovenous Fistulas.

    PubMed

    Della Pepa, Giuseppe Maria; Sabatino, Giovanni; Sturiale, Carmelo Lucio; Marchese, Enrico; Puca, Alfredo; Olivi, Alessandro; Albanese, Alessio

    2018-04-01

    In the surgical treatment of spinal dural arteriovenous fistulas (DAVFs), intraoperative definition of anatomic characteristics of the DAVF and identification of the fistulous point is mandatory to effectively exclude the DAVF. Intraoperative ultrasound and contrast-enhanced ultrasound integrated with color Doppler ultrasound was applied in the surgical setting for a cervical DAVF to identify the fistulous point and evaluate correct occlusion of the fistula. Integration of intraoperative ultrasound and contrast-enhanced ultrasound is a simple, cost-effective technique that provides an opportunity for real-time dynamic visualization of DAVF vascular patterns, identification of the fistulous point, and assessment of correct exclusion. Compared with other intraoperative tools, such as indocyanine green videoangiography, it allows the surgeon to visualize hidden anatomic and vascular structures, minimizing surgical manipulation and guiding the surgeon during resection. Copyright © 2018 Elsevier Inc. All rights reserved.

  17. Guidance of visual attention by semantic information in real-world scenes

    PubMed Central

    Wu, Chia-Chien; Wick, Farahnaz Ahmed; Pomplun, Marc

    2014-01-01

    Recent research on attentional guidance in real-world scenes has focused on object recognition within the context of a scene. This approach has been valuable for determining some factors that drive the allocation of visual attention and determine visual selection. This article provides a review of experimental work on how different components of context, especially semantic information, affect attentional deployment. We review work from the areas of object recognition, scene perception, and visual search, highlighting recent studies examining semantic structure in real-world scenes. A better understanding of how humans parse scene representations will not only improve current models of visual attention but also advance next-generation computer vision systems and human-computer interfaces. PMID:24567724

  18. Visual analytics techniques for large multi-attribute time series data

    NASA Astrophysics Data System (ADS)

    Hao, Ming C.; Dayal, Umeshwar; Keim, Daniel A.

    2008-01-01

    Time series data commonly occur when variables are monitored over time. Many real-world applications involve the comparison of long time series across multiple variables (multi-attributes). Often business people want to compare this year's monthly sales with last year's sales to make decisions. Data warehouse administrators (DBAs) want to know their daily data loading job performance, and they need to detect outliers early enough to act upon them. In this paper, two new visual analytic techniques are introduced: the color cell-based Visual Time Series Line Charts and Maps highlight significant changes over time in long time series data, and the new Visual Content Query facilitates finding the contents and histories of interesting patterns and anomalies, which leads to root cause identification. We have applied both methods to two real-world applications, mining an enterprise data warehouse and customer credit card fraud data, to illustrate the wide applicability and usefulness of these techniques.

  19. Harnessing the web information ecosystem with wiki-based visualization dashboards.

    PubMed

    McKeon, Matt

    2009-01-01

    We describe the design and deployment of Dashiki, a public website where users may collaboratively build visualization dashboards through a combination of a wiki-like syntax and interactive editors. Our goals are to extend existing research on social data analysis into presentation and organization of data from multiple sources, explore new metaphors for these activities, and participate more fully in the web's information ecology by providing tighter integration with real-time data. To support these goals, our design includes novel and low-barrier mechanisms for editing and layout of dashboard pages and visualizations, connection to data sources, and coordinating interaction between visualizations. In addition to describing these technologies, we provide a preliminary report on the public launch of a prototype based on this design, including a description of the activities of our users derived from observation and interviews.

  20. A Novel Artificial Intelligence System for Endotracheal Intubation.

    PubMed

    Carlson, Jestin N; Das, Samarjit; De la Torre, Fernando; Frisch, Adam; Guyette, Francis X; Hodgins, Jessica K; Yealy, Donald M

    2016-01-01

    Adequate visualization of the glottic opening is a key factor to successful endotracheal intubation (ETI); however, few objective tools exist to help guide providers' ETI attempts toward the glottic opening in real time. Machine learning/artificial intelligence has helped to automate the detection of other visual structures, but its utility with ETI is unknown. We sought to test the accuracy of various computer algorithms in identifying the glottic opening, creating a tool that could aid successful intubation. We collected a convenience sample of providers who each performed ETI 10 times on a mannequin using a video laryngoscope (C-MAC, Karl Storz Corp, Tuttlingen, Germany). We recorded each attempt and reviewed one-second time intervals for the presence or absence of the glottic opening. Four different machine learning/artificial intelligence algorithms analyzed each attempt and time point: k-nearest neighbor (KNN), support vector machine (SVM), decision trees, and neural networks (NN). We used half of the videos to train the algorithms and the second half to test the accuracy, sensitivity, and specificity of each algorithm. We enrolled seven providers: three Emergency Medicine attendings and four paramedic students. From the 70 total recorded laryngoscopic video attempts, we created 2,465 time intervals. The algorithms had the following sensitivity and specificity for detecting the glottic opening: KNN (70%, 90%), SVM (70%, 90%), decision trees (68%, 80%), and NN (72%, 78%). Initial efforts at computer algorithms using artificial intelligence are able to identify the glottic opening with over 80% accuracy. With further refinements, video laryngoscopy has the potential to provide real-time, directional feedback to the provider to help guide successful ETI.
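Sensitivity and specificity as reported above are computed per time interval from the detector's confusion matrix. A minimal sketch with made-up labels (not the study's data):

```python
# Sketch of the frame-level evaluation metrics used above: sensitivity is
# the fraction of glottis-visible intervals the detector catches, and
# specificity is the fraction of glottis-absent intervals it rejects.
def sensitivity_specificity(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# 1 = glottic opening visible in the interval, 0 = not visible (toy data)
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(sens, spec)  # 0.75 0.8333...
```

Reporting both numbers matters here: a detector biased toward always answering "glottis visible" would inflate sensitivity while its specificity collapsed.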

  1. Real-time echocardiogram transmission protocol based on regions and visualization modes.

    PubMed

    Cavero, Eva; Alesanco, Álvaro; García, José

    2014-09-01

    This paper proposes an Echocardiogram Transmission Protocol (ETP) for real-time end-to-end transmission of echocardiograms over IP networks. The ETP has been designed taking into account the echocardiogram characteristics of each visualized region, encoding each region according to its data type, visualization characteristics and diagnostic importance in order to improve the coding and thus the transmission efficiency. Furthermore, each region is sent separately and different error protection techniques can be used for each region. This leads to an efficient use of resources and provides greater protection for those regions with more clinical information. Synchronization is implemented for regions that change over time. The echocardiogram composition is different for each device. The protocol is valid for all echocardiogram devices thanks to the incorporation of configuration information which includes the composition of the echocardiogram. The efficiency of the ETP has been demonstrated in terms of the number of bits sent with the proposed protocol. The codec and transmission rates used for the regions of interest have been set according to previous recommendations. Although the saving in coded bits depends on the video composition, a coding gain of more than 7% has been achieved compared with transmission without the ETP.

  2. Using a virtual world for robot planning

    NASA Astrophysics Data System (ADS)

    Benjamin, D. Paul; Monaco, John V.; Lin, Yixia; Funk, Christopher; Lyons, Damian

    2012-06-01

    We are building a robot cognitive architecture that constructs a real-time virtual copy of itself and its environment, including people, and uses the model to process perceptual information and to plan its movements. This paper describes the structure of this architecture. The software components of this architecture include PhysX for the virtual world, OpenCV and the Point Cloud Library for visual processing, and the Soar cognitive architecture that controls the perceptual processing and task planning. The RS (Robot Schemas) language is implemented in Soar, providing the ability to reason about concurrency and time. This Soar/RS component controls visual processing, deciding which objects and dynamics to render into PhysX, and the degree of detail required for the task. As the robot runs, its virtual model diverges from physical reality, and errors grow. The Match-Mediated Difference component monitors these errors by comparing the visual data with corresponding data from virtual cameras, and notifies Soar/RS of significant differences, e.g. a new object that appears, or an object that changes direction unexpectedly. Soar/RS can then run PhysX much faster than real-time and search among possible future world paths to plan the robot's actions. We report experimental results in indoor environments.

  3. Three-dimensional online surface reconstruction of augmented fluorescence lifetime maps using photometric stereo (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Unger, Jakob; Lagarto, Joao; Phipps, Jennifer; Ma, Dinglong; Bec, Julien; Sorger, Jonathan; Farwell, Gregory; Bold, Richard; Marcu, Laura

    2017-02-01

    Multi-Spectral Time-Resolved Fluorescence Spectroscopy (ms-TRFS) can provide label-free real-time feedback on tissue composition and pathology during surgical procedures by resolving the fluorescence decay dynamics of the tissue. Recently, an ms-TRFS system has been developed in our group, allowing for either point-spectroscopy fluorescence lifetime measurements or dynamic raster tissue scanning by merging a 450 nm aiming beam with the pulsed fluorescence excitation light in a single fiber collection. In order to facilitate an augmented real-time display of fluorescence decay parameters, the lifetime values are back projected to the white light video. The goal of this study is to develop a 3D real-time surface reconstruction aiming for a comprehensive visualization of the decay parameters and providing an enhanced navigation for the surgeon. Using a stereo camera setup, we use a combination of image feature matching and aiming beam stereo segmentation to establish a 3D surface model of the decay parameters. After camera calibration, texture-related features are extracted for both camera images and matched providing a rough estimation of the surface. During the raster scanning, the rough estimation is successively refined in real-time by tracking the aiming beam positions using an advanced segmentation algorithm. The method is evaluated for excised breast tissue specimens showing a high precision and running in real-time with approximately 20 frames per second. The proposed method shows promising potential for intraoperative navigation, i.e. tumor margin assessment. Furthermore, it provides the basis for registering the fluorescence lifetime maps to the tissue surface adapting it to possible tissue deformations.

  4. An Intelligent Cooperative Visual Sensor Network for Urban Mobility

    PubMed Central

    Leone, Giuseppe Riccardo; Petracca, Matteo; Salvetti, Ovidio; Azzarà, Andrea

    2017-01-01

    Smart cities are demanding solutions for improved traffic efficiency, in order to guarantee optimal access to mobility resources available in urban areas. Intelligent video analytics deployed directly on board embedded sensors offers great opportunities to gather highly informative data about traffic and transport, allowing reconstruction of a neat real-time picture of urban mobility patterns. In this paper, we present a visual sensor network in which each node embeds computer vision logic for analyzing urban traffic in real time. The nodes in the network share their perceptions and build a global and comprehensive interpretation of the analyzed scenes in a cooperative and adaptive fashion. This is possible thanks to an especially designed Internet of Things (IoT) compliant middleware which encompasses in-network event composition as well as full support of the Machine-2-Machine (M2M) communication mechanism. The potential of the proposed cooperative visual sensor network is shown with two sample applications in urban mobility connected to the estimation of vehicular flows and parking management. Besides providing detailed results for each key component of the proposed solution, the validity of the approach is demonstrated by extensive field tests that proved the suitability of the system in providing a scalable, adaptable and extensible data collection layer for managing and understanding mobility in smart cities. PMID:29125535

  5. An Intelligent Cooperative Visual Sensor Network for Urban Mobility.

    PubMed

    Leone, Giuseppe Riccardo; Moroni, Davide; Pieri, Gabriele; Petracca, Matteo; Salvetti, Ovidio; Azzarà, Andrea; Marino, Francesco

    2017-11-10

    Smart cities are demanding solutions for improved traffic efficiency, in order to guarantee optimal access to mobility resources available in urban areas. Intelligent video analytics deployed directly on board embedded sensors offers great opportunities to gather highly informative data about traffic and transport, allowing reconstruction of a neat real-time picture of urban mobility patterns. In this paper, we present a visual sensor network in which each node embeds computer vision logic for analyzing urban traffic in real time. The nodes in the network share their perceptions and build a global and comprehensive interpretation of the analyzed scenes in a cooperative and adaptive fashion. This is possible thanks to an especially designed Internet of Things (IoT) compliant middleware which encompasses in-network event composition as well as full support of the Machine-2-Machine (M2M) communication mechanism. The potential of the proposed cooperative visual sensor network is shown with two sample applications in urban mobility connected to the estimation of vehicular flows and parking management. Besides providing detailed results for each key component of the proposed solution, the validity of the approach is demonstrated by extensive field tests that proved the suitability of the system in providing a scalable, adaptable and extensible data collection layer for managing and understanding mobility in smart cities.

  6. Feasibility of real-time magnetic resonance imaging-guided endomyocardial biopsies: An in-vitro study.

    PubMed

    Lossnitzer, Dirk; Seitz, Sebastian A; Krautz, Birgit; Schnackenburg, Bernhard; André, Florian; Korosoglou, Grigorios; Katus, Hugo A; Steen, Henning

    2015-07-26

    To investigate whether magnetic resonance (MR) guidance can improve the performance and safety of endomyocardial biopsy procedures, a novel MR-compatible bioptome was evaluated in a series of in-vitro experiments in a 1.5T magnetic resonance imaging (MRI) system. The bioptome was inserted into explanted porcine and bovine hearts under real-time MR guidance employing a steady-state free precession sequence. The artifact produced by the metal element at the tip and the signal voids caused by the bioptome were visually tracked for navigation and allowed its constant and precise localization. Cardiac structural elements and the target regions for the biopsy were clearly visible. Our method allowed a significantly better spatial visualization of the bioptome's tip compared to conventional X-ray guidance. The specific device design of the bioptome avoided inducible currents and therefore subsequent heating. The novel MR-compatible bioptome provided superior cardiovascular magnetic resonance soft-tissue visualization for MR-guided myocardial biopsies. Not least, the use of MRI guidance for endomyocardial biopsies completely avoided radiation exposure for both patients and interventionalists. MRI-guided endomyocardial biopsies provide better navigation than conventional X-ray guidance and could therefore improve the specificity and reproducibility of cardiac biopsies in future studies.

  7. Real-time decoding of the direction of covert visuospatial attention

    NASA Astrophysics Data System (ADS)

    Andersson, Patrik; Ramsey, Nick F.; Raemaekers, Mathijs; Viergever, Max A.; Pluim, Josien P. W.

    2012-08-01

    Brain-computer interfaces (BCIs) make it possible to translate a person’s intentions into actions without depending on the muscular system. Brain activity is measured and classified into commands, thereby creating a direct link between the mind and the environment, enabling, e.g., cursor control or navigation of a wheelchair or robot. Most BCI research is conducted with scalp EEG but recent developments move toward intracranial electrodes for paralyzed people. The vast majority of BCI studies focus on the motor system as the appropriate target for recording and decoding movement intentions. However, properties of the visual system may make the visual system an attractive and intuitive alternative. We report on a study investigating feasibility of decoding covert visuospatial attention in real time, exploiting the full potential of a 7 T MRI scanner to obtain the necessary signal quality, capitalizing on earlier fMRI studies indicating that covert visuospatial attention changes activity in the visual areas that respond to stimuli presented in the attended area of the visual field. Healthy volunteers were instructed to shift their attention from the center of the screen to one of four static targets in the periphery, without moving their eyes from the center. During the first part of the fMRI-run, the relevant brain regions were located using incremental statistical analysis. During the second part, the activity in these regions was extracted and classified, and the subject was given visual feedback of the result. Performance was assessed as the number of trials where the real-time classifier correctly identified the direction of attention. On average, 80% of trials were correctly classified (chance level <25%) based on a single image volume, indicating very high decoding performance. 
While we restricted the experiment to five attention target regions (four peripheral and one central), the number of directions can be higher provided the brain activity patterns can be distinguished. In summary, the visual system promises to be an effective target for BCI control.

  8. Using Visualization in Cockpit Decision Support Systems

    NASA Technical Reports Server (NTRS)

    Aragon, Cecilia R.

    2005-01-01

    In order to safely operate their aircraft, pilots must make rapid decisions based on integrating and processing large amounts of heterogeneous information. Visual displays are often the most efficient method of presenting safety-critical data to pilots in real time. However, care must be taken to ensure the pilot is provided with the appropriate amount of information to make effective decisions and not become cognitively overloaded. The results of two usability studies of a prototype airflow hazard visualization cockpit decision support system are summarized. The studies demonstrate that such a system significantly improves the performance of helicopter pilots landing under turbulent conditions. Based on these results, design principles and implications for cockpit decision support systems using visualization are presented.

  9. Real-time continuous visual biofeedback in the treatment of speech breathing disorders following childhood traumatic brain injury: report of one case.

    PubMed

    Murdoch, B E; Pitt, G; Theodoros, D G; Ward, E C

    1999-01-01

    The efficacy of traditional and physiological biofeedback methods for modifying abnormal speech breathing patterns was investigated in a child with persistent dysarthria following severe traumatic brain injury (TBI). An A-B-A-B single-subject experimental research design was utilized to provide the subject with two exclusive periods of therapy for speech breathing, based on traditional therapy techniques and physiological biofeedback methods, respectively. Traditional therapy techniques included establishing optimal posture for speech breathing, explanation of the movement of the respiratory muscles, and a hierarchy of non-speech and speech tasks focusing on establishing an appropriate level of sub-glottal air pressure, and improving the subject's control of inhalation and exhalation. The biofeedback phase of therapy utilized variable inductance plethysmography (or Respitrace) to provide real-time, continuous visual biofeedback of ribcage circumference during breathing. As in traditional therapy, a hierarchy of non-speech and speech tasks was devised to improve the subject's control of his respiratory pattern. Throughout the project, the subject's respiratory support for speech was assessed both instrumentally and perceptually. Instrumental assessment included kinematic and spirometric measures, and perceptual assessment included the Frenchay Dysarthria Assessment, Assessment of Intelligibility of Dysarthric Speech, and analysis of a speech sample. The results of the study demonstrated that real-time continuous visual biofeedback techniques for modifying speech breathing patterns were not only effective, but superior to the traditional therapy techniques for modifying abnormal speech breathing patterns in a child with persistent dysarthria following severe TBI. These results show that physiological biofeedback techniques are potentially useful clinical tools for the remediation of speech breathing impairment in the paediatric dysarthric population.

  10. Advanced Technology for Portable Personal Visualization.

    DTIC Science & Technology

    1992-06-01

    Advanced Technology for Portable Personal Visualization, Progress Report January-June 1992. Topics include interactive radiosity and virtual-environment ultrasound. Planned work: extend the system with support for textures, model partitioning, more complex radiosity emitters, and the replacement of model parts with objects from our model libraries; add real-time, interactive radiosity to the display program on Pixel-Planes 5; and move the real-time model mesh-generation to the...

  11. Optoelectronic aid for patients with severely restricted visual fields in daylight conditions

    NASA Astrophysics Data System (ADS)

    Peláez-Coca, María Dolores; Sobrado-Calvo, Paloma; Vargas-Martín, Fernando

    2011-11-01

    In this study we evaluated the immediate effectiveness of an optoelectronic visual field expander in a sample of subjects with retinitis pigmentosa suffering from a severe peripheral visual field restriction. The aid uses the augmented-view concept and provides subjects with visual information from outside their visual field. The tests were carried out in daylight conditions. The optoelectronic aid comprises an FPGA-based real-time video processor, a wide-angle mini camera and a transparent see-through head-mounted display. This optoelectronic aid is called SERBA (Sistema Electro-óptico Reconfigurable de Ayuda para Baja Visión, a reconfigurable electro-optical aid system for low vision). We previously showed that, without compromising residual vision, the SERBA system provides information about objects within an area about three times greater on average than the remaining visual field of the subjects [1]. In this paper we address the effects of the device on mobility under daylight conditions with and without SERBA. The participants were six subjects with retinitis pigmentosa. In this mobility test, better results were obtained when subjects were wearing the SERBA system; specifically, both the number of contacts with low-level obstacles and the number of mobility errors decreased significantly. A longer training period with the device might improve its usefulness.

  12. Real-Time Indoor Scene Description for the Visually Impaired Using Autoencoder Fusion Strategies with Visible Cameras.

    PubMed

    Malek, Salim; Melgani, Farid; Mekhalfi, Mohamed Lamine; Bazi, Yakoub

    2017-11-16

    This paper describes three coarse image-description strategies meant to give visually impaired individuals a rough perception of surrounding objects, with application to indoor spaces. The algorithms operate on images grabbed by the user with a chest-mounted camera and output a list of objects likely present in the surrounding indoor scene. First, different colour-, texture-, and shape-based feature extractors are applied, followed by a feature-learning step using AutoEncoder (AE) models. Second, the produced features are fused and fed into a multilabel classifier to list the potential objects. The conducted experiments show that fusing a set of AE-learned features yields higher classification rates than using the features individually. Furthermore, compared with reference works, our method (i) yields higher classification accuracies and (ii) runs at least four times faster, which enables a potential full real-time application.

  13. Climate Outreach Using Regional Coastal Ocean Observing System Portals

    NASA Astrophysics Data System (ADS)

    Anderson, D. M.; Hernandez, D. L.; Wakely, A.; Bochenek, R. J.; Bickel, A.

    2015-12-01

    Coastal oceans are dynamic, changing environments affected by processes ranging from seconds to millennia. On the east and west coasts of the U.S., regional observing systems have deployed and sustained a remarkably diverse array of observing tools and sensors. Data portals visualize and provide access to real-time sensor networks, and have emerged as interactive tools for educators to help students explore and understand climate. Bringing data portals to outreach events, into classrooms, and onto tablets and smartphones enables educators to address topics and phenomena happening right now. For example, at the 2015 Charleston Science Technology Engineering and Math (STEM) Festival, visitors navigated the SECOORA (Southeast Coastal Ocean Observing Regional Association) data portal to view real-time marine meteorological conditions off South Carolina. Map-based entry points provide an intuitive interface for most students; an array of time series and other visualizations depicts many of the essential principles of climate science manifest in the coastal zone; and data download/extract options provide access to the data and documentation for further inquiry by advanced users. Beyond the exposition of climate principles, the portal experience reveals remarkable technologies in action and shows how the observing system is enabled by the activity of many different partners.

  14. Hierarchical Spatio-temporal Visual Analysis of Cluster Evolution in Electrocorticography Data

    DOE PAGES

    Murugesan, Sugeerth; Bouchard, Kristofer; Chang, Edward; ...

    2016-10-02

    Here, we present ECoG ClusterFlow, a novel interactive visual analysis tool for the exploration of high-resolution electrocorticography (ECoG) data. Our system detects and visualizes dynamic high-level structures, such as communities, using the time-varying spatial connectivity network derived from the high-resolution ECoG data. ECoG ClusterFlow provides a multi-scale visualization of the spatio-temporal patterns underlying the time-varying communities using two views: 1) an overview summarizing the evolution of clusters over time and 2) a hierarchical glyph-based technique that uses data aggregation and small-multiples techniques to visualize the propagation of clusters in their spatial domain. ECoG ClusterFlow makes it possible 1) to compare the spatio-temporal evolution patterns across various time intervals, 2) to compare the temporal information at varying levels of granularity, and 3) to investigate the evolution of spatial patterns without occluding the spatial context information. Lastly, we present case studies, done in collaboration with neuroscientists on our team, for both simulated and real epileptic seizure data, aimed at evaluating the effectiveness of our approach.

  15. Drawing disability in Japanese manga: visual politics, embodied masculinity, and wheelchair basketball in Inoue Takehiko's REAL.

    PubMed

    Wood, Andrea

    2013-12-01

    This work explores disability in the cultural context of contemporary Japanese comics. In contrast to Western comics, Japanese manga have permeated the social fabric of Japan to the extent that vast numbers of people read manga on a daily basis. It has, in fact, become such a popular medium for visual communication that the Japanese government and education systems utilize manga as a social acculturation and teaching tool. This multibillion-dollar industry is incredibly diverse, and one particularly popular genre is sports manga. However, Inoue Takehiko's award-winning manga series REAL departs from more conventional sports manga, which typically focus on able-bodied characters with sometimes exaggerated superhuman physical abilities, by adopting a more realistic approach to the world of wheelchair basketball and the people who play it. At the same time, REAL explores attitudes toward disability in Japanese culture, where disability is at times rendered "invisible" either through accessibility problems or lingering associations of disability and shame. It is therefore extremely significant that manga, a visual medium, is rendering disability visible: the ultimate movement from margin to center. REAL devotes considerable attention to realistically illustrating the lived experiences of its characters both on and off the court. Consequently, the series not only educates readers about wheelchair basketball but also provides compelling insight into Japanese cultural notions about masculinity, family, responsibility, and identity. The basketball players, at first marginalized by their disability, join together in the unity of a sport typically characterized by its "abledness."

  16. MO-DE-BRA-04: Hands-On Fluoroscopy Safety Training with Real-Time Patient and Staff Dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vanderhoek, M; Bevins, N

    Purpose: Fluoroscopically guided interventions (FGI) are routinely performed across many different hospital departments. However, many involved staff members have minimal training regarding safe and optimal use of fluoroscopy systems. We developed and taught a hands-on fluoroscopy safety class incorporating real-time patient and staff dosimetry in order to promote safer and more optimal use of fluoroscopy during FGI. Methods: The hands-on fluoroscopy safety class is taught in an FGI suite, unique to each department. A patient-equivalent phantom is set on the patient table with an ion chamber positioned at the x-ray beam entrance to the phantom. This provides a surrogate measure of patient entrance dose. Multiple solid-state dosimeters (RaySafe i2 dosimetry system™) are deployed at different distances from the phantom (0.1, 1, 3 meters), which provide surrogate measures of staff dose. Instructors direct participating clinical staff to operate the fluoroscopy system as they view live fluoroscopic images, patient entrance dose, and staff doses in real time. During class, instructors work with clinical staff to investigate how patient entrance dose, staff doses, and image quality are affected by different parameters, including pulse rate, magnification, collimation, beam angulation, imaging mode, system geometry, distance, and shielding. Results: Real-time dose visualization enables clinical staff to directly see and learn how to optimize their use of their own fluoroscopy system to minimize patient and staff dose, yet maintain sufficient image quality for FGI. As a direct result of the class, multiple hospital departments have implemented changes to their imaging protocols, including reduction of the default fluoroscopy pulse rate and increased use of collimation and lower-dose fluoroscopy modes. Conclusion: Hands-on fluoroscopy safety training substantially benefits from real-time patient and staff dosimetry incorporated into the class. Real-time dose display helps clinical staff visualize, internalize, and ultimately utilize the safety techniques learned during the training. RaySafe/Unfors/Fluke lent us a portable version of their RaySafe i2 dosimetry system for 6 months.

  17. Visualizing Mobility of Public Transportation System.

    PubMed

    Zeng, Wei; Fu, Chi-Wing; Arisona, Stefan Müller; Erath, Alexander; Qu, Huamin

    2014-12-01

    Public transportation systems (PTSs) play an important role in modern cities, providing shared, mass transportation services that are essential for the general public. However, due to their increasing complexity, designing effective methods to visualize and explore PTSs is highly challenging. Most existing techniques employ network visualization methods and focus on showing the network topology across stops while ignoring various mobility-related factors such as riding time, transfer time, waiting time, and round-the-clock patterns. This work aims to visualize and explore passenger mobility in a PTS through a family of analytical tasks based on input from transportation researchers. After exploring different design alternatives, we arrived at an integrated solution with three visualization modules: an isochrone map view for geographical information, an isotime flow map view for effective temporal information comparison and manipulation, and an OD-pair journey view for detailed visual analysis of mobility factors along routes between specific origin-destination pairs. The isotime flow map linearizes a flow map into a parallel isoline representation, maximizing the visualization of mobility information along the horizontal time axis while presenting clear and smooth pathways from origin to destinations. Moreover, we devise several interactive visual query methods for users to easily explore the dynamics of PTS mobility over space and time. Lastly, we construct a PTS mobility model from millions of real passenger trajectories and evaluate our visualization techniques in assorted case studies conducted with transportation researchers.

  18. Pen-Enabled, Real-Time Student Engagement for Teaching in STEM Subjects

    ERIC Educational Resources Information Center

    Urban, Sylvia

    2017-01-01

    The introduction of pen-enabling devices has been demonstrated to increase a student's ability to solve problems, communicate, and learn during note taking. For the science, technology, engineering, and mathematics subjects that are considered to be symbolic in nature, pen interfaces are better suited for visual-spatial content and also provide a…

  19. Dormitory Residents Reduce Electricity Consumption when Exposed to Real-Time Visual Feedback and Incentives

    ERIC Educational Resources Information Center

    Petersen, John E.; Shunturov, Vladislav; Janda, Kathryn; Platt, Gavin; Weinberger, Kate

    2007-01-01

    Purpose: In residential buildings, personal choices influence electricity and water consumption. Prior studies indicate that information feedback can stimulate resource conservation. College dormitories provide an excellent venue for controlled study of the effects of feedback. The goal of this study is to assess how different resolutions of…

  20. Digital Education Governance: Data Visualization, Predictive Analytics, and "Real-Time" Policy Instruments

    ERIC Educational Resources Information Center

    Williamson, Ben

    2016-01-01

    Educational institutions and governing practices are increasingly augmented with digital database technologies that function as new kinds of policy instruments. This article surveys and maps the landscape of digital policy instrumentation in education and provides two detailed case studies of new digital data systems. The Learning Curve is a…

  1. Tracking Real-Time Neural Activation of Conceptual Knowledge Using Single-Trial Event-Related Potentials

    ERIC Educational Resources Information Center

    Amsel, Ben D.

    2011-01-01

    Empirically derived semantic feature norms categorized into different types of knowledge (e.g., visual, functional, auditory) can be summed to create number-of-feature counts per knowledge type. Initial evidence suggests several such knowledge types may be recruited during language comprehension. The present study provides a more detailed…

  2. An affordable wearable video system for emergency response training

    NASA Astrophysics Data System (ADS)

    King-Smith, Deen; Mikkilineni, Aravind; Ebert, David; Collins, Timothy; Delp, Edward J.

    2009-02-01

    Many emergency response units currently face restrictive budgets that prohibit their use of advanced technology-based training solutions. Our work focuses on creating an affordable, mobile, state-of-the-art emergency response training solution through the integration of low-cost, commercially available products. The system we have developed provides tracking, audio, and video capability, coupled with other sensors, all viewable through a unified visualization system. In this paper we focus on the video sub-system, which provides real-time tracking and video feeds from the training environment through a system of wearable and stationary cameras. These two camera systems interface with a management system that handles storage and indexing of video during and after training exercises. The wearable systems give the command center live video and tracking information for each trainee in the exercise. The stationary camera systems provide a fixed point of reference for viewing action during the exercise and consist of a small Linux-based portable computer and a mountable camera. The video management system consists of a server and database that work in tandem with a visualization application to provide real-time and after-action review capability to the training system.

  3. RealSurf - A Tool for the Interactive Visualization of Mathematical Models

    NASA Astrophysics Data System (ADS)

    Stussak, Christian; Schenzel, Peter

    For applications in fine art, architecture, and engineering it is often important to visualize and explore complex mathematical models. In the past, static physical models of such surfaces were collected in museums and mathematical institutes. To examine their properties, and for aesthetic reasons, it can be helpful to explore them interactively in 3D in real time. For the class of implicitly given algebraic surfaces we developed the tool RealSurf. Here we give an introduction to the program and some hints for the design of interesting surfaces.

  4. Climate Engine - Monitoring Drought with Google Earth Engine

    NASA Astrophysics Data System (ADS)

    Hegewisch, K.; Daudert, B.; Morton, C.; McEvoy, D.; Huntington, J. L.; Abatzoglou, J. T.

    2016-12-01

    Drought has adverse effects on society through reduced water availability and agricultural production and increased wildfire risk. An abundance of remotely sensed imagery and climate data is being collected in near-real time and can provide place-based monitoring and early warning of drought and related hazards. However, despite this growing wealth of Earth observations, tools that quickly access, process, and visualize the archives and provide answers at decision-relevant scales are lacking. We have developed ClimateEngine.org, a web application that uses Google's Earth Engine platform to enable users to quickly compute and visualize real-time observations. A suite of drought indices allows us to monitor and track drought from local (30-meter) to regional scales and to contextualize current droughts within the historical record. Climate Engine is currently being used by U.S. federal agencies and researchers to develop baseline conditions and impact assessments related to agricultural, ecological, and hydrological drought. Climate Engine is also working with the Famine Early Warning Systems Network (FEWS NET) to expedite monitoring of agricultural drought over broad areas at risk of food insecurity globally.
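    Contextualizing a current observation within the historical record, as described above, often reduces to a percentile ranking against same-location, same-season values. A generic sketch of that calculation (illustrative only, not Climate Engine's actual API):

```python
def drought_percentile(history, current):
    """Rank the current observation (e.g. precipitation or soil moisture)
    against the historical record for the same location and season.
    Low percentiles indicate drought; ties count as half a rank."""
    below = sum(1 for h in history if h < current)
    ties = sum(1 for h in history if h == current)
    return 100.0 * (below + 0.5 * ties) / len(history)
```

For example, a value ranking second-lowest in a 10-year record falls in the 15th percentile, which many drought products would flag as abnormally dry.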

  5. Real-time new satellite product demonstration from microwave sensors and GOES-16 at NRL TC web

    NASA Astrophysics Data System (ADS)

    Cossuth, J.; Richardson, K.; Surratt, M. L.; Bankert, R.

    2017-12-01

    The Naval Research Laboratory (NRL) Tropical Cyclone (TC) satellite webpage (https://www.nrlmry.navy.mil/TC.html) provides demonstration analyses of storm imagery to benefit operational TC forecast centers around the world. With the availability of new spectral information from GOES-16 satellite data and recent research into improved visualization methods for microwave data, experimental imagery was operationally tested to visualize the structural changes of TCs during the 2017 hurricane season. This presentation provides an introduction to these innovative satellite analysis methods and NRL's next-generation satellite analysis system (the Geolocated Information Processing System, GeoIPS™), and demonstrates the added value of additional spectral frequencies when monitoring storms in near-real time.

  6. A real-time phoneme counting algorithm and application for speech rate monitoring.

    PubMed

    Aharonson, Vered; Aharonson, Eran; Raichlin-Levi, Katia; Sotzianu, Aviv; Amir, Ofer; Ovadia-Blechman, Zehava

    2017-03-01

    Adults who stutter can learn to control and improve their speech fluency by modifying their speaking rate. Existing speech therapy technologies can assist this practice by monitoring speaking rate and providing feedback to the patient, but cannot provide an accurate, quantitative measurement of speaking rate. Moreover, most technologies are too complex and costly to be used for home practice. We developed an algorithm and a smartphone application that monitor a patient's speaking rate in real time and provide user-friendly feedback to both patient and therapist. Our speaking rate computation is performed by a phoneme counting algorithm which implements spectral transition measure extraction to estimate phoneme boundaries. The algorithm is implemented in real time in a mobile application that presents its results in a user-friendly interface. The application incorporates two modes: one provides the patient with visual feedback of his/her speech rate for self-practice and another provides the speech therapist with recordings, speech rate analysis and tools to manage the patient's practice. The algorithm's phoneme counting accuracy was validated on ten healthy subjects who read a paragraph at slow, normal and fast paces, and was compared to manual counting of speech experts. Test-retest and intra-counter reliability were assessed. Preliminary results indicate differences of -4% to 11% between automatic and human phoneme counting. Differences were largest for slow speech. The application can thus provide reliable, user-friendly, real-time feedback for speaking rate control practice. Copyright © 2017 Elsevier Inc. All rights reserved.
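    The pipeline described above (spectral transition measure, peak-picking for phoneme boundaries, phonemes per second) can be sketched roughly as follows. This is an illustration of the general idea only, not the paper's validated algorithm: the frame sizes, least-squares window, threshold, and peak-picking rule are all assumed parameters.

```python
import numpy as np

def phoneme_rate(signal, sr, frame_len=0.025, hop=0.010, win=2, thresh=None):
    """Estimate speaking rate (phonemes/sec) by counting peaks in a
    spectral transition measure (STM) computed over short-time spectra."""
    n = int(frame_len * sr)
    h = int(hop * sr)
    frames = [signal[i:i + n] * np.hanning(n)
              for i in range(0, len(signal) - n, h)]
    # Log-magnitude spectra, one row per frame.
    S = np.log(np.abs(np.fft.rfft(frames, axis=1)) + 1e-10)
    # STM: mean squared local slope of each spectral bin's trajectory,
    # from a least-squares fit over a (2*win+1)-frame window.
    k = np.arange(-win, win + 1)
    stm = np.zeros(len(S))
    for t in range(win, len(S) - win):
        slopes = (k @ S[t - win:t + win + 1]) / (k @ k)  # per-bin LS slope
        stm[t] = np.mean(slopes ** 2)
    if thresh is None:
        thresh = stm.mean() + stm.std()
    # Local maxima above threshold are taken as phoneme boundaries.
    peaks = [t for t in range(1, len(stm) - 1)
             if stm[t] > thresh and stm[t] >= stm[t - 1] and stm[t] > stm[t + 1]]
    return len(peaks) / (len(signal) / sr)
```

Spectral transitions are sharpest at phoneme boundaries, so the peak count divided by the utterance duration approximates the speaking rate the application displays.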

  7. PACS-based interface for 3D anatomical structure visualization and surgical planning

    NASA Astrophysics Data System (ADS)

    Koehl, Christophe; Soler, Luc; Marescaux, Jacques

    2002-05-01

    The interpretation of radiological images is routine but remains a rather difficult task for physicians. It requires complex mental processes that translate 2D slices into the 3D localization and volume determination of visible diseases. Easier and more extensive visualization and exploitation of medical images can be achieved through computer-based systems that provide real help from patient admission to post-operative follow-up. To this end, we have developed a 3D visualization interface linked to a PACS database that allows manipulation of and interaction with virtual organs delineated from CT or MRI scans. This software provides 3D real-time surface rendering of anatomical structures, accurate evaluation of volumes and distances, and improved radiological image analysis and exam annotation through a negatoscope tool. It also provides a surgical planning tool allowing the positioning of an interactive laparoscopic instrument and organ resection. The system could revolutionize the field of computerized imaging technology: it provides a handy and portable tool for pre-operative and intra-operative analysis of anatomy and pathology in various medical fields, and constitutes a first step toward future augmented reality and surgical simulation systems.

  8. Visual tracking using neuromorphic asynchronous event-based cameras.

    PubMed

    Ni, Zhenjiang; Ieng, Sio-Hoi; Posch, Christoph; Régnier, Stéphane; Benosman, Ryad

    2015-04-01

    This letter presents a novel, computationally efficient, and robust pattern tracking method based on time-encoded, frame-free visual data. Recent interdisciplinary developments, combining inputs from engineering and biology, have yielded a novel type of camera that encodes visual information into a continuous stream of asynchronous temporal events. These events encode temporal contrast and intensity locally in space and time. We show that this sparse yet accurately timed information is well suited as a computational input for object tracking. In this letter, visual data processing is performed for each incoming event at the time it arrives. The method provides a continuous and iterative estimation of the geometric transformation between the model and the events representing the tracked object. It can handle isometries, similarities, and affine distortions, and allows for unprecedented real-time performance at equivalent frame rates in the kilohertz range on a standard PC. Furthermore, by using the dimension of time, currently underexploited by most artificial vision systems, the presented method is able to solve ambiguous cases of object occlusion that classical frame-based techniques handle poorly.
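    The per-event update idea can be illustrated with a translation-only toy version: each incoming event nudges the running transform estimate toward the displacement between the event and its nearest model point. This is a deliberately simplified sketch (the paper's method also estimates rotation, scaling, and affine terms); the gain and gating radius are illustrative assumptions.

```python
import numpy as np

def track_events(model_pts, events, gain=0.05, match_radius=5.0):
    """Event-driven tracking sketch: maintain a 2D translation estimate
    and update it incrementally as each event arrives, instead of
    batch-processing whole frames."""
    t_est = np.zeros(2)                       # current translation estimate
    for x, y in events:                       # events arrive one at a time
        e = np.array([x, y], float)
        shifted = model_pts + t_est           # model under current estimate
        d = np.linalg.norm(shifted - e, axis=1)
        j = np.argmin(d)
        if d[j] < match_radius:               # gate out clutter events
            t_est += gain * (e - shifted[j])  # small per-event correction
    return t_est
```

Because every event applies only a tiny correction, the estimate tracks the object continuously at event rate rather than at a fixed frame rate.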

  9. REACH: Real-Time Data Awareness in Multi-Spacecraft Missions

    NASA Technical Reports Server (NTRS)

    Maks, Lori; Coleman, Jason; Obenschain, Arthur F. (Technical Monitor)

    2002-01-01

    Missions have been proposed that will use multiple spacecraft to perform scientific or commercial tasks; indeed, in the commercial world, some spacecraft constellations already exist. Aside from the technical challenges of constructing and flying these missions, there is also the financial challenge presented by the traditional model of the flight operations team (FOT) when it is applied to a constellation mission. Proposed constellation missions range in size from three spacecraft to more than 50. If the current ratio of three to five FOT personnel per spacecraft is maintained, the size of the FOT becomes cost-prohibitive. The Advanced Architectures and Automation Branch at the Goddard Space Flight Center (GSFC Code 588) saw the potential to reduce the cost of these missions by creating new user interfaces to ground system health-and-safety data. The goal is to enable a smaller FOT to remain aware of and responsive to the increased amount of ground system information in a multi-spacecraft environment. Rather than abandon the tried and true, these interfaces were developed to run alongside existing ground system software to provide additional support to the FOT. The new user interfaces have been combined in a tool called REACH, the Real-time Evaluation and Analysis of Consolidated Health: a software product that uses advanced visualization techniques to make spacecraft anomalies easy to spot, no matter how many spacecraft are in the constellation. REACH reads a real-time stream of data from the ground system and displays it to the FOT such that anomalies are easy to pick out and investigate. Data visualization has been used in ground system operations for many years; to provide a unique visualization tool, we developed a unique source of data to visualize: the REACH Health Model Engine.
    The Health Model Engine is rule-based software that receives real-time telemetry and outputs "health" information for the subsystems and spacecraft the telemetry belongs to. The Health Engine can run out of the box or be tailored with a scripting language. Out of the box, it uses limit violations to determine the health of subsystems and spacecraft; when tailored, it determines health using equations combining the values and limits of any telemetry in the spacecraft. The REACH visualizations then "roll up" the information from the Health Engine into high-level summary displays, which can be "zoomed" into for increasing levels of detail. Currently REACH is installed in the Small Explorer (SMEX) lab at GSFC and is monitoring three of their five spacecraft. We are scheduled to install REACH in the Mid-sized Explorer (MIDEX) lab, which will allow us to monitor up to six more spacecraft. The process of installing and using our "research" software in an operational environment has provided many insights into which parts of REACH are a step forward and which of our ideas are missteps. Our paper explores the new concepts in spacecraft health-and-safety visualization, the difficulties such systems face in the operational environment, and the cost and safety issues of multi-spacecraft missions.
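    A minimal sketch of the out-of-the-box behavior, where limit violations roll up from telemetry points to subsystems to the spacecraft. The limits, point names, and two-color scale here are hypothetical illustrations, not REACH's actual rule set or API.

```python
# Hypothetical limit table and subsystem grouping (illustrative only).
LIMITS = {
    "battery_v":   (24.0, 34.0),
    "bus_current": (0.0, 12.0),
    "panel_temp":  (-80.0, 90.0),
}
SUBSYSTEMS = {"power": ["battery_v", "bus_current"], "thermal": ["panel_temp"]}

def point_health(name, value):
    """A telemetry point is red if it violates its limits, else green."""
    lo, hi = LIMITS[name]
    return "green" if lo <= value <= hi else "red"

def roll_up(telemetry):
    """Roll health up the hierarchy: a subsystem is as unhealthy as its
    worst telemetry point, and the spacecraft as unhealthy as its worst
    subsystem -- the 'roll up' into summary displays the abstract describes."""
    rank = {"green": 0, "red": 1}
    sub = {s: max((point_health(p, telemetry[p]) for p in pts), key=rank.get)
           for s, pts in SUBSYSTEMS.items()}
    craft = max(sub.values(), key=rank.get)
    return sub, craft
```

The summary display shows only the spacecraft-level color; "zooming in" corresponds to descending from `craft` to `sub` to individual points.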

  10. Visual in vivo degradation of injectable hydrogel by real-time and non-invasive tracking using carbon nanodots as fluorescent indicator.

    PubMed

    Wang, Lei; Li, Baoqiang; Xu, Feng; Li, Ying; Xu, Zheheng; Wei, Daqing; Feng, Yujie; Wang, Yaming; Jia, Dechang; Zhou, Yu

    2017-11-01

    Visualizing the in vivo degradation of a hydrogel by fluorescence-based tracking and monitoring is crucial for quantitatively depicting its degradation profile in a real-time, non-invasive manner. However, commonly used fluorescent imaging faces limitations such as the intrinsic photobleaching of organic fluorophores and the uncertain perturbation of degradation induced by changes in the molecular structure of the hydrogel. To address these problems, we employed photoluminescent carbon nanodots (CNDs) with low photobleaching, red emission, and good biocompatibility as a fluorescent indicator for real-time, non-invasive visualization of the in vitro/in vivo degradation of injectable hydrogels mixed with CNDs. The in vitro/in vivo toxicity results suggested that CNDs were nontoxic, and the embedded CNDs did not diffuse out of the hydrogels in the absence of degradation. We obtained similar degradation kinetics (PBS-enzyme) between gravimetric and visual determination, and established a mathematical equation that quantitatively depicts the in vitro degradation profile of hydrogels for the prediction of in vivo hydrogel degradation. Based on the in vitro data, we developed a visual platform that can quantitatively depict the in vivo degradation behavior of new injectable biomaterials through real-time, non-invasive fluorescence tracking. So far, this methodology has been applied to the subcutaneous degradation of injectable hydrogel at depths down to 7 mm in small-animal trials. It holds great potential for the rational design and convenient in vivo screening of biocompatible, biodegradable injectable hydrogels in tissue engineering. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. RighTime: A real time clock correcting program for MS-DOS-based computer systems

    NASA Technical Reports Server (NTRS)

    Becker, G. Thomas

    1993-01-01

    A computer program is described that effectively eliminates the shortcomings of the DOS system clock in PC/AT-class computers. RighTime is a small, sophisticated memory-resident program that automatically corrects both the DOS system clock and the hardware 'CMOS' real-time clock (RTC) in real time. RighTime learns what corrections are required without operator interaction beyond the occasional accurate time set. Both warm (power-on) and cool (power-off) errors are corrected, usually yielding better than one-part-per-million accuracy in the typical desktop computer with no additional hardware, and RighTime increases the system clock resolution from approximately 0.0549 second to 0.01 second. Program tools are also available that allow visualization of RighTime's actions, verification of its performance, and display of its history log, and that provide data for graphing system clock behavior. The program has found application in a wide variety of industries, including astronomy, satellite tracking, communications, broadcasting, transportation, public utilities, manufacturing, medicine, and the military.
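    The kind of correction such a program learns can be illustrated with a linear drift model fitted from two accurate time sets. This is a sketch of the general idea only, not RighTime's actual algorithm (which also distinguishes warm and cool error and corrects the RTC).

```python
def learn_drift(set1, set2):
    """Estimate clock drift from two accurate time sets.  Each set is a
    (clock_reading, true_time) pair in seconds; the result is seconds of
    error accumulated per second of elapsed true time."""
    c1, t1 = set1
    c2, t2 = set2
    return ((c2 - t2) - (c1 - t1)) / (t2 - t1)

def corrected(clock_reading, last_set, drift):
    """Correct a raw clock reading using the learned drift rate
    (first-order correction, adequate for ppm-scale drift)."""
    c0, t0 = last_set
    elapsed = clock_reading - c0
    return t0 + elapsed - drift * elapsed
```

For a clock running fast by 10 ppm, two time sets 100,000 s apart recover the drift, after which raw readings can be corrected to well under a hundredth of a second.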

  12. Real-Time Electronic Dashboard Technology and Its Use to Improve Pediatric Radiology Workflow.

    PubMed

    Shailam, Randheer; Botwin, Ariel; Stout, Markus; Gee, Michael S

    The purpose of our study was to create a real-time electronic dashboard in the pediatric radiology reading room providing a visual display of updated information regarding scheduled and in-progress radiology examinations that could help radiologists to improve clinical workflow and efficiency. To accomplish this, a script was set up to automatically send real-time HL7 messages from the radiology information system (Epic Systems, Verona, WI) to an Iguana Interface engine, with relevant data regarding examinations stored in an SQL Server database for visual display on the dashboard. Implementation of an electronic dashboard in the reading room of a pediatric radiology academic practice has led to several improvements in clinical workflow, including decreasing the time interval for radiologist protocol entry for computed tomography or magnetic resonance imaging examinations as well as fewer telephone calls related to unprotocoled examinations. Other advantages include enhanced ability of radiologists to anticipate and attend to examinations requiring radiologist monitoring or scanning, as well as to work with technologists and operations managers to optimize scheduling in radiology resources. We foresee increased utilization of electronic dashboard technology in the future as a method to improve radiology workflow and quality of patient care. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Real-time network security situation visualization and threat assessment based on semi-Markov process

    NASA Astrophysics Data System (ADS)

    Chen, Junhua

    2013-03-01

    To cope with the large amounts of data in current sensed environments, decision-aid tools should provide their understanding of situations in a time-efficient manner, so there is an increasing need for real-time network security situation awareness and threat assessment. In this study, a state transition model of network vulnerabilities based on a semi-Markov process is proposed first. Once events are triggered by an attacker's action or a system response, the current states of the vulnerabilities are known. We then calculate the transition probabilities of each vulnerability from its current state to the security-failure state. Furthermore, to improve the accuracy of our algorithms, we adjust the probabilities of exploiting a vulnerability according to the attacker's skill level. In light of the preconditions and postconditions of vulnerabilities in the network, an attack graph is built to visualize the security situation in real time. Subsequently, we predict attack paths, recognize attack intentions, and estimate impact through analysis of the attack graph. These results give administrators insight into intrusion steps and help them determine the security state and assess threats. Finally, testing in a network shows that this method is reasonable and feasible and can take on a substantial analysis workload to facilitate administrators' work.
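    The transition-probability calculation can be illustrated with an embedded Markov chain over a hypothetical vulnerability lifecycle. This sketch ignores the holding-time distributions that make the paper's model semi-Markov, and the states and numbers are invented for illustration, including the skill adjustment.

```python
import numpy as np

# Hypothetical 4-state vulnerability lifecycle (illustrative numbers):
# 0 = dormant, 1 = probed, 2 = exploited, 3 = security failure (absorbing).
P = np.array([
    [0.70, 0.30, 0.00, 0.00],
    [0.20, 0.50, 0.30, 0.00],
    [0.00, 0.10, 0.50, 0.40],
    [0.00, 0.00, 0.00, 1.00],
])

def adjust_for_skill(P, skill):
    """Scale forward (toward-failure) transition probabilities by attacker
    skill, then renormalize each row so it remains a distribution."""
    Q = P.copy()
    for i in range(len(Q) - 1):
        Q[i, i + 1:] *= skill
        Q[i] /= Q[i].sum()
    return Q

def failure_prob(P, start, steps):
    """Probability of having reached the absorbing failure state within
    `steps` transitions, starting from state `start`."""
    dist = np.zeros(len(P))
    dist[start] = 1.0
    for _ in range(steps):
        dist = dist @ P
    return dist[-1]
```

Raising the skill factor increases every forward probability, so the predicted failure probability at any horizon grows, which is the qualitative behavior the threat assessment relies on.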

  14. Hand Path Priming in Manual Obstacle Avoidance: Evidence that the Dorsal Stream Does Not Only Control Visually Guided Actions in Real Time

    ERIC Educational Resources Information Center

    Jax, Steven A.; Rosenbaum, David A.

    2007-01-01

    According to a prominent theory of human perception and performance (M. A. Goodale & A. D. Milner, 1992), the dorsal, action-related stream only controls visually guided actions in real time. Such a system would be predicted to show little or no action priming from previous experience. The 3 experiments reported here were designed to determine…

  15. Real-time processing of ASL signs: Delayed first language acquisition affects organization of the mental lexicon

    PubMed Central

    Lieberman, Amy M.; Borovsky, Arielle; Hatrak, Marla; Mayberry, Rachel I.

    2014-01-01

    Sign language comprehension requires visual attention to the linguistic signal and visual attention to referents in the surrounding world, whereas these processes are divided between the auditory and visual modalities for spoken language comprehension. Additionally, the age of onset of first language acquisition and the quality and quantity of linguistic input for deaf individuals are highly heterogeneous, which is rarely the case for hearing learners of spoken languages. Little is known about how these modality and developmental factors affect real-time lexical processing. In this study, we ask how these factors impact real-time recognition of American Sign Language (ASL) signs using a novel adaptation of the visual world paradigm in deaf adults who learned sign from birth (Experiment 1) and in deaf individuals who were late learners of ASL (Experiment 2). Results revealed that although both groups of signers demonstrated rapid, incremental processing of ASL signs, only native signers demonstrated early and robust activation of sub-lexical features of signs during real-time recognition. Our findings suggest that the organization of the mental lexicon into units of both form and meaning is a product of infant language learning and not of the sensory and motor modality through which the linguistic signal is sent and received. PMID:25528091

  16. The CommonGround Visual Paradigm for Biosurveillance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Livnat, Yarden; Jurrus, Elizabeth R.; Gundlapalli, Adi V.

    2013-06-14

    Biosurveillance is a critical area in the intelligence community for real-time detection of disease outbreaks. Identifying epidemics enables analysts to detect and monitor disease outbreaks that might be spread from natural causes or from possible biological warfare attacks. Containing these events and disseminating alerts requires the ability to rapidly find, classify and track harmful biological signatures. In this paper, we describe a novel visual paradigm to conduct biosurveillance using an Infectious Disease Weather Map. Our system provides a visual common ground in which users can view, explore and discover emerging concepts and correlations such as symptoms, syndromes, pathogens, and geographic locations.

  17. GPU-based efficient realistic techniques for bleeding and smoke generation in surgical simulators.

    PubMed

    Halic, Tansel; Sankaranarayanan, Ganesh; De, Suvranu

    2010-12-01

    In actual surgery, smoke and bleeding due to cauterization provide important visual cues to the surgeon, which have been proposed as factors in surgical skill assessment. While several virtual reality (VR)-based surgical simulators have incorporated the effects of bleeding and smoke generation, they are not realistic due to the requirement of real-time performance. To be interactive, visual updates must be performed at a minimum of 30 Hz and haptic (touch) information must be refreshed at 1 kHz. Simulation of smoke and bleeding is, therefore, either ignored or simulated using highly simplified techniques, since other computationally intensive processes compete for the available Central Processing Unit (CPU) resources. In this study we developed a novel low-cost method to generate realistic bleeding and smoke in VR-based surgical simulators, which outsources the computations to the graphical processing unit (GPU), thus freeing up the CPU for other time-critical tasks. This method is independent of the complexity of the organ models in the virtual environment. User studies were performed using 20 subjects to determine the visual quality of the simulations compared to real surgical videos. The smoke and bleeding simulations were implemented as part of a laparoscopic adjustable gastric banding (LAGB) simulator. For the bleeding simulation, the original implementation using the shader did not incur noticeable overhead. However, for smoke generation, an input/output (I/O) bottleneck was observed and two different methods were developed to overcome this limitation. Based on our benchmark results, a buffered approach performed better than a pipelined approach and could support up to 15 video streams in real time. Human subject studies showed that the visual realism of the simulations was as good as in real surgery (median rating of 4 on a 5-point Likert scale). Based on the performance results and subject study, both bleeding and smoke simulations were concluded to be efficient, highly realistic, and well suited to VR-based surgical simulators. Copyright © 2010 John Wiley & Sons, Ltd.
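    The buffered approach that the benchmarks favor can be illustrated with a CPU-side double-buffer sketch. The actual simulator performs this with GPU pixel buffers, so the class below is only an analogue of the idea, not the paper's implementation.

```python
# Illustrative sketch of a buffered readback strategy: while the renderer
# fills one buffer with new frames, the consumer drains the other, hiding
# I/O latency behind rendering. CPU-side analogue only; the real system
# uses GPU buffers.
class DoubleBuffer:
    def __init__(self):
        self.buffers = [[], []]
        self.write_idx = 0              # renderer writes into this buffer

    def write_frame(self, frame):
        self.buffers[self.write_idx].append(frame)

    def swap(self):
        """Flip buffers at a frame boundary; return the filled one for readback."""
        ready = self.buffers[self.write_idx]
        self.write_idx ^= 1
        self.buffers[self.write_idx] = []
        return ready

db = DoubleBuffer()
drained = []
for frame in range(6):
    db.write_frame(frame)
    if frame % 2 == 1:                  # swap every two frames
        drained.extend(db.swap())
print(drained)
```

    The design choice is the classic producer-consumer decoupling: the renderer never blocks on the slow consumer, which is why the buffered variant outperformed the strictly pipelined one in the reported benchmarks.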

  18. GPU-based Efficient Realistic Techniques for Bleeding and Smoke Generation in Surgical Simulators

    PubMed Central

    Halic, Tansel; Sankaranarayanan, Ganesh; De, Suvranu

    2010-01-01

    Background In actual surgery, smoke and bleeding due to cautery processes provide important visual cues to the surgeon, which have been proposed as factors in surgical skill assessment. While several virtual reality (VR)-based surgical simulators have incorporated the effects of bleeding and smoke generation, they are not realistic due to the requirement of real-time performance. To be interactive, visual updates must be performed at a minimum of 30 Hz and haptic (touch) information must be refreshed at 1 kHz. Simulation of smoke and bleeding is, therefore, either ignored or simulated using highly simplified techniques, since other computationally intensive processes compete for the available CPU resources. Methods In this work, we develop a novel low-cost method to generate realistic bleeding and smoke in VR-based surgical simulators which outsources the computations to the graphical processing unit (GPU), thus freeing up the CPU for other time-critical tasks. This method is independent of the complexity of the organ models in the virtual environment. User studies were performed using 20 subjects to determine the visual quality of the simulations compared to real surgical videos. Results The smoke and bleeding simulations were implemented as part of a Laparoscopic Adjustable Gastric Banding (LAGB) simulator. For the bleeding simulation, the original implementation using the shader did not incur noticeable overhead. However, for smoke generation, an I/O (input/output) bottleneck was observed and two different methods were developed to overcome this limitation. Based on our benchmark results, a buffered approach performed better than a pipelined approach and could support up to 15 video streams in real time. Human subject studies showed that the visual realism of the simulations was as good as in real surgery (median rating of 4 on a 5-point Likert scale). Conclusions Based on the performance results and subject study, both bleeding and smoke simulations were concluded to be efficient, highly realistic, and well suited to VR-based surgical simulators. PMID:20878651

  19. Intelligent video storage of visual evidences on site in fast deployment

    NASA Astrophysics Data System (ADS)

    Desurmont, Xavier; Bastide, Arnaud; Delaigle, Jean-Francois

    2004-07-01

    In this article we present a generic, flexible, scalable and robust approach for an intelligent real-time forensic visual system. The proposed implementation can be rapidly deployed and requires minimal logistic support, as it embeds low-complexity devices (PCs and cameras) that communicate over a wireless network. The goal of these advanced tools is to provide intelligent video storage of potential video evidence for fast intervention during deployment around a hazardous sector after a terrorist attack, a disaster, an air crash, or before an attempted one. Advanced video analysis tools, such as segmentation and tracking, are provided to support intelligent storage and annotation.

  20. Iowa Flood Information System

    NASA Astrophysics Data System (ADS)

    Demir, I.; Krajewski, W. F.; Goska, R.; Mantilla, R.; Weber, L. J.; Young, N.

    2011-12-01

    The Iowa Flood Information System (IFIS) is a web-based platform developed by the Iowa Flood Center (IFC) to provide access to flood inundation maps, real-time flood conditions, flood forecasts (both short-term and seasonal), flood-related data, information, and interactive visualizations for communities in Iowa. The key element of the system's architecture is the notion of community. The locations of the communities, which lie near streams and rivers, define basin boundaries. The IFIS provides community-centric watershed and river characteristics, weather (rainfall) conditions, and streamflow data and visualization tools. Interactive interfaces allow access to inundation maps for different stage and return-period values, and to flooding scenarios with contributions from multiple rivers. Real-time and historical data on water levels, gauge heights, and rainfall conditions are available in the IFIS by streaming data from automated IFC bridge sensors, USGS stream gauges, NEXRAD radars, and NWS forecasts. Simple 2D and 3D interactive visualizations in the IFIS make the data more understandable to the general public. Users are able to filter data sources for their communities and selected rivers. The data and information in the IFIS are also accessible through web services and mobile applications. The IFIS is optimized for various browsers and screen sizes to provide access through multiple platforms, including tablets and mobile devices. The IFIS includes a rainfall-runoff forecast model to provide a five-day flood risk estimate for around 500 communities in Iowa. Multiple view modes in the IFIS accommodate different user types, from the general public to researchers and decision makers, by providing different levels of tools and detail. River view mode allows users to visualize data from multiple IFC bridge sensors and USGS stream gauges to follow flooding conditions along a river. The IFIS will help communities make better-informed decisions on the occurrence of floods, and will alert communities in advance to help minimize flood damage. This presentation provides an overview of the tools and interfaces in the IFIS developed to date, which provide a platform for one-stop access to flood-related data, visualizations, flood conditions, and forecasts.

  1. Flood Risk Management in Iowa through an Integrated Flood Information System

    NASA Astrophysics Data System (ADS)

    Demir, Ibrahim; Krajewski, Witold

    2013-04-01

    The Iowa Flood Information System (IFIS) is a web-based platform developed by the Iowa Flood Center (IFC) to provide access to flood inundation maps, real-time flood conditions, flood forecasts (both short-term and seasonal), flood-related data, information, and interactive visualizations for communities in Iowa. The key element of the system's architecture is the notion of community. The locations of the communities, which lie near streams and rivers, define basin boundaries. The IFIS provides community-centric watershed and river characteristics, weather (rainfall) conditions, and streamflow data and visualization tools. Interactive interfaces allow access to inundation maps for different stage and return-period values, and to flooding scenarios with contributions from multiple rivers. Real-time and historical data on water levels, gauge heights, and rainfall conditions are available in the IFIS by streaming data from automated IFC bridge sensors, USGS stream gauges, NEXRAD radars, and NWS forecasts. Simple 2D and 3D interactive visualizations in the IFIS make the data more understandable to the general public. Users are able to filter data sources for their communities and selected rivers. The data and information in the IFIS are also accessible through web services and mobile applications. The IFIS is optimized for various browsers and screen sizes to provide access through multiple platforms, including tablets and mobile devices. The IFIS includes a rainfall-runoff forecast model to provide a five-day flood risk estimate for around 1100 communities in Iowa. Multiple view modes in the IFIS accommodate different user types, from the general public to researchers and decision makers, by providing different levels of tools and detail. River view mode allows users to visualize data from multiple IFC bridge sensors and USGS stream gauges to follow flooding conditions along a river. The IFIS will help communities make better-informed decisions on the occurrence of floods, and will alert communities in advance to help minimize flood damage. This presentation provides an overview and live demonstration of the tools and interfaces in the IFIS developed to date, which provide a platform for one-stop access to flood-related data, visualizations, flood conditions, and forecasts.

  2. Early warning of active fire hotspots through NASA FIRMS fire information system

    NASA Astrophysics Data System (ADS)

    Ilavajhala, S.; Davies, D.; Schmaltz, J. E.; Murphy, K. J.

    2014-12-01

    Forest fires and wildfires can threaten ecosystems, wildlife, property, and often large swaths of the population. Early warning of active fire hotspots plays a crucial role in planning, managing, and mitigating the damaging effects of wildfires. The NASA Fire Information for Resource Management System (FIRMS) has been providing active fire location information to users in easy-to-use formats for the better part of the last decade, with a view to improving alerting mechanisms and response times in fighting forest fires and wildfires. FIRMS utilizes fires flagged as hotspots by the MODIS instrument flying aboard the Aqua and Terra satellites and sends early warning of detected hotspots via email in near real time or as daily and weekly summaries. The email alerts can also be customized to cover a particular region of interest, a country, or a specific protected area or park. In addition, a web mapping component named "Web Fire Mapper" helps users query and visualize hotspots. A newer version of Web Fire Mapper is being developed to enhance the existing visualization and alerting capabilities. Plans include supporting near real-time imagery from the Aqua and Terra satellites to provide more helpful context while viewing fires. Plans are also underway to upgrade the email alert system to provide mobile-formatted messages and short text messages (SMS). The newer version of FIRMS will also allow users to obtain geo-located image snapshots, which stakeholders can import into local GIS software to support further analysis. This talk will discuss the FIRMS system, its enhancements, and its role in helping map, alert on, and monitor fire hotspots by providing quick data visualization, querying, and download capabilities.

  3. Real-time three-dimensional transesophageal echocardiography in the assessment of mechanical prosthetic mitral valve ring thrombosis.

    PubMed

    Ozkan, Mehmet; Gürsoy, Ozan Mustafa; Astarcıoğlu, Mehmet Ali; Gündüz, Sabahattin; Cakal, Beytullah; Karakoyun, Süleyman; Kalçık, Macit; Kahveci, Gökhan; Duran, Nilüfer Ekşi; Yıldız, Mustafa; Cevik, Cihan

    2013-10-01

    Although 2-dimensional (2D) transesophageal echocardiography (TEE) is the gold standard for the diagnosis of prosthetic valve thrombosis, nonobstructive clots located on mitral valve rings can be missed. Real-time 3-dimensional (3D) TEE has incremental value in the visualization of mitral prostheses. The aim of this study was to investigate the utility of real-time 3D TEE in the diagnosis of mitral prosthetic ring thrombosis. The clinical outcomes of these patients in relation to real-time 3D transesophageal echocardiographic findings were analyzed. Of 1,263 patients who underwent echocardiographic studies, 174 patients (37 men, 137 women) with mitral ring thrombosis detected by real-time 3D TEE constituted the main study population. Patients were followed prospectively on oral anticoagulation for 25 ± 7 months. Eighty-nine patients (51%) had thrombi that were missed on 2D TEE and depicted only on real-time 3D TEE. The remaining cases were partially visualized with 2D TEE but completely visualized with real-time 3D TEE. Thirty-seven patients (21%) had thromboembolism. The mean thickness of the ring thrombosis in patients with thromboembolism was greater than that in patients without thromboembolism (3.8 ± 0.9 vs 2.8 ± 0.7 mm, p < 0.001). One hundred fifty-five patients (89%) underwent real-time 3D TEE during follow-up. There were no thrombi in 39 patients (25%); 45 (29%) had regression of thrombi, and there was no change in thrombus size in 68 patients (44%). Thrombus size increased in 3 patients (2%). Thrombosis was confirmed surgically and histopathologically in 12 patients (7%). In conclusion, real-time 3D TEE can detect prosthetic mitral ring thrombosis that could be missed on 2D TEE and cause thromboembolic events. Copyright © 2013 Elsevier Inc. All rights reserved.

  4. XpertTrack: Precision Autonomous Measuring Device Developed for Real Time Shipments Tracker

    PubMed Central

    Viman, Liviu; Daraban, Mihai; Fizesan, Raul; Iuonas, Mircea

    2016-01-01

    This paper proposes a software and hardware solution for real-time condition monitoring applications. The proposed device, called XpertTrack, exchanges data through the GPRS protocol over a GSM network and monitors the temperature and vibration of critical merchandise during commercial shipments anywhere on the globe. Another feature of this real-time tracker is to provide GPS and GSM positioning with a precision of 10 m or less. In order to interpret the condition of the merchandise, data acquisition, analysis, and visualization are done with 0.1 °C accuracy for the temperature sensor and 10 levels of shock sensitivity for the acceleration sensor. In addition, the architecture allows the number and types of sensors to be increased, so that companies can use this flexible solution to monitor a large percentage of their fleet. PMID:26978360
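    A compact wire format of the kind such a tracker might use can be sketched with fixed-point fields. This record layout is purely hypothetical, not XpertTrack's actual protocol, but it shows how 0.1 °C temperature resolution and a 10-level shock scale fit in an 11-byte payload suitable for GPRS.

```python
# Hypothetical packing of one tracker reading: latitude/longitude scaled by
# 1e5 (~1 m grid), temperature at 0.1 °C resolution as a signed 16-bit
# integer, and a shock level 0-9. Illustrative format only.
import struct

RECORD = ">ii h B"   # big-endian: lat*1e5, lon*1e5, temp*10, shock level

def pack_reading(lat: float, lon: float, temp_c: float, shock: int) -> bytes:
    return struct.pack(RECORD, round(lat * 1e5), round(lon * 1e5),
                       round(temp_c * 10), shock)

def unpack_reading(payload: bytes) -> tuple:
    lat, lon, t, shock = struct.unpack(RECORD, payload)
    return lat / 1e5, lon / 1e5, t / 10.0, shock

msg = pack_reading(46.7667, 23.5898, 21.37, 3)
print(len(msg), unpack_reading(msg))   # 11-byte payload round-trips the reading
```

    Keeping each reading this small matters on GPRS, where bandwidth and per-message cost are the binding constraints.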

  5. XpertTrack: Precision Autonomous Measuring Device Developed for Real Time Shipments Tracker.

    PubMed

    Viman, Liviu; Daraban, Mihai; Fizesan, Raul; Iuonas, Mircea

    2016-03-10

    This paper proposes a software and hardware solution for real-time condition monitoring applications. The proposed device, called XpertTrack, exchanges data through the GPRS protocol over a GSM network and monitors the temperature and vibration of critical merchandise during commercial shipments anywhere on the globe. Another feature of this real-time tracker is to provide GPS and GSM positioning with a precision of 10 m or less. In order to interpret the condition of the merchandise, data acquisition, analysis, and visualization are done with 0.1 °C accuracy for the temperature sensor and 10 levels of shock sensitivity for the acceleration sensor. In addition, the architecture allows the number and types of sensors to be increased, so that companies can use this flexible solution to monitor a large percentage of their fleet.

  6. Watching excitons move: the time-dependent transition density matrix

    NASA Astrophysics Data System (ADS)

    Ullrich, Carsten

    2012-02-01

    Time-dependent density-functional theory allows one to calculate excitation energies and the associated transition densities in principle exactly. The transition density matrix (TDM) provides additional information on electron-hole localization and coherence of specific excitations of the many-body system. We have extended the TDM concept into the real-time domain in order to visualize the excited-state dynamics in conjugated molecules. The time-dependent TDM is defined as an implicit density functional, and can be approximately obtained from the time-dependent Kohn-Sham orbitals. The quality of this approximation is assessed in simple model systems. A computational scheme for real molecular systems is presented: the time-dependent Kohn-Sham equations are solved with the OCTOPUS code and the time-dependent Kohn-Sham TDM is calculated using a spatial partitioning scheme. The method is applied to show in real time how locally created electron-hole pairs spread out over neighboring conjugated molecular chains. The coupling mechanism, electron-hole coherence, and the possibility of charge separation are discussed.
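    A hedged sketch of the quantity described (notation assumed here, not taken from the paper): the time-dependent TDM built from the occupied time-dependent Kohn-Sham orbitals, referenced against the ground-state one-particle density matrix so that only excited-state electron-hole structure remains:

```latex
\Gamma(\mathbf{r},\mathbf{r}',t) \;\approx\;
\sum_{j=1}^{N_{\mathrm{occ}}}
\Bigl[\varphi_j(\mathbf{r},t)\,\varphi_j^{*}(\mathbf{r}',t)
\;-\;\varphi_j^{(0)}(\mathbf{r})\,\varphi_j^{(0)*}(\mathbf{r}')\Bigr]
```

    Electron-hole localization and coherence then appear in the off-diagonal structure of \(\Gamma\) in \((\mathbf{r},\mathbf{r}')\), which is what the real-time visualization tracks as the excitation spreads across the conjugated chains.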

  7. Living Color Frame System: PC graphics tool for data visualization

    NASA Technical Reports Server (NTRS)

    Truong, Long V.

    1993-01-01

    Living Color Frame System (LCFS) is a personal computer software tool for generating real-time graphics applications. It is applicable to a wide range of data visualization tasks in virtual environment applications. Engineers often use computer graphics to enhance the interpretation of data under observation. These graphics become more complicated when 'run time' animations are required, such as those found in many typical modern artificial intelligence and expert systems. Living Color Frame System solves many of these real-time graphics problems.

  8. Smart unattended sensor networks with scene understanding capabilities

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2006-05-01

    Unattended sensor systems are new technologies that are supposed to provide enhanced situation awareness to military and law enforcement agencies. A network of such sensors cannot be very effective in field conditions if it can only transmit visual information to human operators or alert them to motion. In real field conditions, events may happen in many nodes of a network simultaneously. But the number of control personnel is always limited, and the attention of human operators can simply be drawn to particular network nodes while a more dangerous threat goes unnoticed in other nodes. Sensor networks would be more effective if equipped with a system similar to human vision in its ability to understand visual information. For this, human vision uses a rough but wide peripheral system that tracks motion and regions of interest, a narrow but precise foveal system that analyzes and recognizes objects in the center of the selected region of interest, and visual intelligence that provides scene and object context and resolves ambiguity and uncertainty in the visual information. Biologically inspired Network-Symbolic models convert image information into an 'understandable' Network-Symbolic format, which is similar to relational knowledge models. The equivalent of the interaction between peripheral and foveal systems is achieved in the network-symbolic system via interaction between the Visual and Object Buffers and the top-level knowledge system.

  9. U.S. Electric System Operating Data

    EIA Publications

    EIA provides hourly electricity operating data, including actual and forecast demand, net generation, and the power flowing between electric systems. EIA's new U.S. Electric System Operating Data tool provides nearly real-time demand data, plus analysis and visualizations of hourly, daily, and weekly electricity supply and demand on a national and regional level for all of the 66 electric system balancing authorities that make up the U.S. electric grid.

  10. Improving visual perception through neurofeedback

    PubMed Central

    Scharnowski, Frank; Hutton, Chloe; Josephs, Oliver; Weiskopf, Nikolaus; Rees, Geraint

    2012-01-01

    Perception depends on the interplay of ongoing spontaneous activity and stimulus-evoked activity in sensory cortices. This raises the possibility that training ongoing spontaneous activity alone might be sufficient for enhancing perceptual sensitivity. To test this, we trained human participants to control ongoing spontaneous activity in circumscribed regions of retinotopic visual cortex using real-time functional MRI based neurofeedback. After training, we tested participants using a new and previously untrained visual detection task that was presented at the visual field location corresponding to the trained region of visual cortex. Perceptual sensitivity was significantly enhanced only when participants who had previously learned control over ongoing activity were now exercising control, and only for that region of visual cortex. Our new approach allows us to non-invasively and non-pharmacologically manipulate regionally specific brain activity, and thus provide ‘brain training’ to deliver particular perceptual enhancements. PMID:23223302

  11. Archiving and Near Real Time Visualization of USGS Instantaneous Data

    NASA Astrophysics Data System (ADS)

    Zaslavsky, I.; Ryan, D.; Whitenack, T.; Valentine, D. W.; Rodriguez, M.

    2009-12-01

    The CUAHSI Hydrologic Information System project has been developing databases, services, and online and desktop software applications supporting standards-based publication of and access to large volumes of hydrologic data from US federal agencies and academic partners. In particular, the CUAHSI WaterML 1.x schema specification for exchanging hydrologic time series, earlier published as an OGC Discussion Paper (2007), has been adopted by the United States Geological Survey to provide web service access to USGS daily values and instantaneous data. The latter service, making available raw measurements of discharge, gage height, and several other parameters for over 10,000 USGS real-time measurement points, was announced by the USGS as an experimental WaterML-compliant service at the end of July 2009. We demonstrate an online application that leverages the new service for nearly continuous harvesting of USGS real-time data, and simultaneous visualization and analysis of the data streams. To make this possible, we integrate service components of the CUAHSI software stack with the Open Source Data Turbine (OSDT) system, an NSF-supported software environment for robust and scalable assimilation of multimedia data streams (e.g., from sensors) and for interfacing with a variety of viewers, databases, archival systems, and client applications. Our application continuously queries the USGS Instantaneous water data service (which provides access to 15-min measurements updated at the USGS every 4 hours) and maps the results for each station-variable combination to a separate "channel", which is used by OSDT to quickly access and manipulate the time series. About 15,000 channels are used, which makes this by far the largest deployment of OSDT. Using the RealTime Data Viewer, users can now select one or more stations of interest (e.g., upstream or downstream of each other) and observe and annotate simultaneous dynamics in the respective discharge and gage height values, using fast-forward or backward modes, a real-time mode, etc. Memory management, scheduling service-based retrieval from USGS web services, and organizing access to 7,330 selected stations turned out to be the major challenges in this project. To allow station navigation, stations are grouped by state and county in the user interface. The memory footprint was monitored under different Java VM settings to find the correct regime. These and other solutions are discussed in the paper, accompanied by a series of examples of simultaneous visualization of discharge from multiple stations as a component of hydrologic analysis.
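    The station-variable-to-channel mapping described above can be sketched as a small in-memory store. Names and sizes below are illustrative; the real system maps roughly 15,000 such channels inside OSDT.

```python
# Sketch of a (station, variable) -> channel store: each channel keeps a
# bounded, time-stamped series so a viewer can follow or replay it.
from collections import deque

class ChannelStore:
    def __init__(self, maxlen: int = 96):       # e.g. one day of 15-min values
        self.channels = {}
        self.maxlen = maxlen

    def put(self, station: str, variable: str, timestamp: int, value: float):
        key = (station, variable)
        self.channels.setdefault(key, deque(maxlen=self.maxlen)).append(
            (timestamp, value))

    def latest(self, station: str, variable: str):
        series = self.channels.get((station, variable))
        return series[-1] if series else None

store = ChannelStore()
# Simulated 15-minute discharge samples for one USGS-style site ID
for i, q in enumerate([120.0, 118.5, 131.2]):
    store.put("01646500", "discharge", 900 * i, q)
print(store.latest("01646500", "discharge"))
```

    Bounding each channel with a fixed-length buffer is one way to keep the memory footprint predictable, which the abstract identifies as a major challenge at this channel count.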

  12. Development of real time abdominal compression force monitoring and visual biofeedback system

    NASA Astrophysics Data System (ADS)

    Kim, Tae-Ho; Kim, Siyong; Kim, Dong-Su; Kang, Seong-Hee; Cho, Min-Seok; Kim, Kyeong-Hyeon; Shin, Dong-Seok; Suh, Tae-Suk

    2018-03-01

    In this study, we developed and evaluated a system that can monitor abdominal compression force (ACF) in real time and provide a surrogate signal, even under abdominal compression, as well as visual biofeedback (VBF). The real-time ACF monitoring system consists of an abdominal compression device, an ACF monitoring unit, and a control system including an in-house ACF management program. We anticipated that the ACF variation caused by respiratory abdominal motion could be used as a respiratory surrogate signal. Four volunteers participated in a test to obtain correlation coefficients between ACF variation and tidal volume. A simulation study with another group of six volunteers was performed to evaluate the feasibility of the proposed system. In the simulation, we investigated the reproducibility of the compression setup and proposed a further enhanced shallow breathing (ESB) technique using VBF, in which the amplitude of the breathing range under abdominal compression is intentionally reduced. The correlation coefficient between the ACF variation caused by respiratory abdominal motion and the tidal volume signal was evaluated for each volunteer, with R² values ranging from 0.79 to 0.84. The ACF variation was similar to a respiratory pattern, and slight variations in ACF ranges were observed among sessions. An average ACF control rate (i.e., compliance) of about 73-77% over five trials was observed in all volunteer subjects except one (64%) when there was no VBF. The targeted ACF range was intentionally reduced to achieve ESB for the VBF simulation. With VBF, in spite of the reduced target range, the overall ACF control rate improved by about 20% in all volunteers except one (4%), demonstrating the effectiveness of VBF. The developed monitoring system could help reduce inter-fraction ACF setup error and intra-fraction ACF variation. With the capability of providing a real-time surrogate signal and VBF under compression, it could improve the quality of respiratory tumor motion management in abdominal compression radiation therapy.

  13. Large Terrain Continuous Level of Detail 3D Visualization Tool

    NASA Technical Reports Server (NTRS)

    Myint, Steven; Jain, Abhinandan

    2012-01-01

    This software solves the problem of displaying terrains that are usually too large to be rendered in real time on standard workstations. The Large Terrain Continuous Level of Detail 3D Visualization Tool can visualize terrain data sets composed of billions of vertices at greater than 30 frames per second. It uses a continuous level-of-detail technique called clipmapping, and it offloads much of the work of breaking the terrain into levels of detail onto the GPU (graphics processing unit) for faster processing.
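    The clipmapping idea can be sketched as nested rings of detail centered on the viewer, with each coarser level covering twice the extent of the previous one. The function below is an illustrative level-selection rule, not the tool's actual implementation, and the extents are made-up numbers.

```python
# Minimal sketch of clipmap level selection: nested square rings of terrain
# detail around the viewer, each successive level covering twice the extent
# at half the resolution. Values are illustrative.
def clipmap_level(distance: float, finest_extent: float, num_levels: int) -> int:
    """Return the level of detail covering a vertex at `distance` from the
    viewer (0 = finest)."""
    extent = finest_extent
    for level in range(num_levels):
        if distance <= extent:
            return level
        extent *= 2.0                  # each coarser ring doubles the extent
    return num_levels - 1              # clamp to the coarsest level

# With a 64 m finest ring and 8 levels, nearby vertices get full detail
# while distant ones fall into progressively coarser rings.
levels = [clipmap_level(d, 64.0, 8) for d in (10, 100, 1000, 100000)]
print(levels)
```

    Because the per-vertex level is a simple function of distance, this selection maps naturally onto the GPU, which is where the tool offloads the level-of-detail work.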

  14. A real-time articulatory visual feedback approach with target presentation for second language pronunciation learning.

    PubMed

    Suemitsu, Atsuo; Dang, Jianwu; Ito, Takayuki; Tiede, Mark

    2015-10-01

    Articulatory information can support learning or remediating pronunciation of a second language (L2). This paper describes an electromagnetic articulometer-based visual feedback approach that presents an articulatory target in real time to facilitate L2 pronunciation learning. The approach trains learners to adjust their articulatory positions to match targets for an L2 vowel, estimated from productions of vowels that overlap in both the L1 and the L2. Training of Japanese learners on the American English vowel /æ/ that included visual feedback improved its pronunciation regardless of whether audio training was also included. Articulatory visual feedback is thus shown to be an effective method for facilitating L2 pronunciation learning.

  15. Recent Advancements in the Infrared Flow Visualization System for the NASA Ames Unitary Plan Wind Tunnels

    NASA Technical Reports Server (NTRS)

    Garbeff, Theodore J., II; Baerny, Jennifer K.

    2017-01-01

The following details recent efforts undertaken at the NASA Ames Unitary Plan wind tunnels to design and deploy an advanced, production-level infrared (IR) flow visualization data system. Highly sensitive IR cameras, coupled with in-line image processing, have enabled visualization of wind tunnel model surface flow features as they develop in real time. Boundary layer transition, shock impingement, junction flow, vortex dynamics, and buffet are routinely observed in both transonic and supersonic flow regimes, all without the need for dedicated ramps in test-section total temperature. Successful measurements have been performed on wing-body sting-mounted test articles, semi-span floor-mounted aircraft models, and sting-mounted launch vehicle configurations. The unique requirements of imaging in production wind tunnel testing have led to advancements in the deployment of advanced IR cameras in a harsh test environment; robust data acquisition, storage, and workflow; real-time image processing algorithms; and the evaluation of optimal surface treatments. The addition of a multi-camera IR flow visualization data system to the Ames UPWT has proven to be a valuable analysis tool in the study of new and existing aircraft and launch vehicle aerodynamics and has provided new insight for the evaluation of computational techniques.

  16. Glyph-based generic network visualization

    NASA Astrophysics Data System (ADS)

    Erbacher, Robert F.

    2002-03-01

Network managers and system administrators face an enormous task in this day of growing network usage. This is particularly true at e-commerce companies and others dependent on a computer network for their livelihood. Network managers and system administrators must monitor activity for intrusions and misuse while simultaneously monitoring the performance of the network. In this paper, we describe our visualization techniques for assisting in the monitoring of networks for both of these tasks. The goal of these techniques is to integrate the visual representation of network performance/usage with data relevant to intrusion detection; the main difficulty arises from the differing intrinsic data and layout needs of the two tasks. Glyph-based techniques are used to indicate the representative values of the necessary data parameters over time. Our techniques are also geared toward providing an environment that can be used continuously for constant real-time monitoring of the network.

  17. Real-Time Control of a Video Game Using Eye Movements and Two Temporal EEG Sensors.

    PubMed

    Belkacem, Abdelkader Nasreddine; Saetia, Supat; Zintus-art, Kalanyu; Shin, Duk; Kambara, Hiroyuki; Yoshimura, Natsue; Berrached, Nasreddine; Koike, Yasuharu

    2015-01-01

EEG-controlled gaming applications range widely from strictly medical to completely nonmedical applications. Games can provide not only entertainment but also strong motivation for practice, thereby achieving better control with rehabilitation systems. In this paper we present real-time control of a video game with eye movements, an asynchronous and noninvasive communication system using two temporal EEG sensors. We used wavelets to detect the instant of eye movement and time-series characteristics to distinguish between six classes of eye movement. A control interface was developed to test the proposed algorithm in real-time experiments with open and closed eyes. Using visual feedback, a mean classification accuracy of 77.3% was obtained for control with six commands; using auditory feedback, a mean classification accuracy of 80.2% was obtained for control with five commands. The algorithm was then applied to control the direction and speed of character movement in a two-dimensional video game. Results showed that the proposed algorithm had an efficient response speed and timing with a bit rate of 30 bits/min, demonstrating its efficacy and robustness in real-time control.
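The paper's wavelet-based detector is not specified here, but the underlying idea, flagging sample indices where the temporal-channel signal jumps sharply as it does during an eye movement, can be sketched with a simple Haar-like sliding difference (the window size and threshold are hypothetical):

```python
def detect_eye_movement(signal, win=5, threshold=50.0):
    """Haar-like sliding difference: compare the mean of `win` samples
    after index i with the mean of `win` samples before it. A large
    jump marks a saccade-like step in the trace. Simplified stand-in
    for the paper's wavelet detector; parameters are illustrative."""
    events = []
    for i in range(win, len(signal) - win):
        before = sum(signal[i - win:i]) / win
        after = sum(signal[i:i + win]) / win
        if abs(after - before) > threshold:
            events.append(i)
    return events
```

On a synthetic trace that steps from 0 to 100 at sample 20, the detector flags indices around 20 and stays silent on a flat trace.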

  18. Real-Time Control of a Video Game Using Eye Movements and Two Temporal EEG Sensors

    PubMed Central

    Saetia, Supat; Zintus-art, Kalanyu; Shin, Duk; Kambara, Hiroyuki; Yoshimura, Natsue; Berrached, Nasreddine; Koike, Yasuharu

    2015-01-01

EEG-controlled gaming applications range widely from strictly medical to completely nonmedical applications. Games can provide not only entertainment but also strong motivation for practice, thereby achieving better control with rehabilitation systems. In this paper we present real-time control of a video game with eye movements, an asynchronous and noninvasive communication system using two temporal EEG sensors. We used wavelets to detect the instant of eye movement and time-series characteristics to distinguish between six classes of eye movement. A control interface was developed to test the proposed algorithm in real-time experiments with open and closed eyes. Using visual feedback, a mean classification accuracy of 77.3% was obtained for control with six commands; using auditory feedback, a mean classification accuracy of 80.2% was obtained for control with five commands. The algorithm was then applied to control the direction and speed of character movement in a two-dimensional video game. Results showed that the proposed algorithm had an efficient response speed and timing with a bit rate of 30 bits/min, demonstrating its efficacy and robustness in real-time control. PMID:26690500

  19. Helmet-mounted displays in long-range-target visual acquisition

    NASA Astrophysics Data System (ADS)

    Wilkins, Donald F.

    1999-07-01

Aircrews have always sought a tactical advantage within the visual range (WVR) arena -- usually defined as 'see the opponent first.' Even with radar and identification friend or foe (IFF) systems, the pilot who visually acquires his opponent first has a significant advantage. The Helmet Mounted Cueing System (HMCS), equipped with a camera, offers an opportunity to correct the problems with previous approaches. By applying real-time image enhancement techniques and feeding the image to the pilot on the HMD, the target can be visually acquired well beyond the range of the unaided eye. This paper explores the camera and display requirements for such a system and places those requirements within the context of other requirements, such as weight.

  20. Canine spontaneous glioma: A translational model system for convection-enhanced delivery

    PubMed Central

    Dickinson, Peter J.; LeCouteur, Richard A.; Higgins, Robert J.; Bringas, John R.; Larson, Richard F.; Yamashita, Yoji; Krauze, Michal T.; Forsayeth, John; Noble, Charles O.; Drummond, Daryl C.; Kirpotin, Dmitri B.; Park, John W.; Berger, Mitchel S.; Bankiewicz, Krystof S.

    2010-01-01

    Canine spontaneous intracranial tumors bear striking similarities to their human tumor counterparts and have the potential to provide a large animal model system for more realistic validation of novel therapies typically developed in small rodent models. We used spontaneously occurring canine gliomas to investigate the use of convection-enhanced delivery (CED) of liposomal nanoparticles, containing topoisomerase inhibitor CPT-11. To facilitate visualization of intratumoral infusions by real-time magnetic resonance imaging (MRI), we included identically formulated liposomes loaded with Gadoteridol. Real-time MRI defined distribution of infusate within both tumor and normal brain tissues. The most important limiting factor for volume of distribution within tumor tissue was the leakage of infusate into ventricular or subarachnoid spaces. Decreased tumor volume, tumor necrosis, and modulation of tumor phenotype correlated with volume of distribution of infusate (Vd), infusion location, and leakage as determined by real-time MRI and histopathology. This study demonstrates the potential for canine spontaneous gliomas as a model system for the validation and development of novel therapeutic strategies for human brain tumors. Data obtained from infusions monitored in real time in a large, spontaneous tumor may provide information, allowing more accurate prediction and optimization of infusion parameters. Variability in Vd between tumors strongly suggests that real-time imaging should be an essential component of CED therapeutic trials to allow minimization of inappropriate infusions and accurate assessment of clinical outcomes. PMID:20488958

  1. Comparative diagnostic evaluation of OMP31 gene based TaqMan® real-time PCR assay with visual LAMP assay and indirect ELISA for caprine brucellosis.

    PubMed

    Saini, Suman; Gupta, V K; Gururaj, K; Singh, D D; Pawaiya, R V S; Gangwar, N K; Mishra, A K; Dwivedi, Deepak; Andani, Dimple; Kumar, Ashok; Goswami, T K

    2017-08-01

Brucellosis is one of the leading causes of abortion in domestic animals, imposing costs on both the economy and society. The disease is highly zoonotic and poses a risk to animal handlers. It causes stillbirth, loss of kids, and abortion in the last term of pregnancy. Reproductive damage includes infertility in does and orchitis and epididymitis in breeding bucks, which result in high financial losses to farmers and the agriculture industry as a whole. Diagnosing the disease at the field level requires highly sensitive and specific assays. In the current study, a visual loop-mediated isothermal amplification (LAMP) assay and a TaqMan® real-time PCR assay were developed with high sensitivity and specificity. For the TaqMan® assay, real-time PCR primers were designed from discontiguous conserved sequences of the Omp31 gene, and the Omp31 probes were labeled with the 6-FAM reporter dye at the 5' end and the BHQ-1 quencher at the 3' end. Published primers targeting the Omp25 gene were used for the visual LAMP assay. Sensitivity of the standardized visual LAMP and TaqMan® real-time PCR assays was determined by serial dilution (10^2 to 10^-4 ng) of positive Brucella melitensis DNA obtained from standard culture. The TaqMan® probe real-time assay can detect as little as 100 fg of B. melitensis DNA, whereas culture from vaginal swab washings has a limit of detection (LOD) of only 1 cfu/ml. Similarly, the visual LAMP assay can detect as little as 10 fg of B. melitensis DNA, compared with an LOD of 30 cfu/ml for culture of vaginal swab washings. Both assays were compared with serological tests (serum tube agglutination test (STAT) and indirect enzyme-linked immunosorbent assay (iELISA)) for diagnostic sensitivity and specificity. Diagnostic sensitivities and specificities for the TaqMan® real-time PCR vs. LAMP assays were 98 and 100% vs. 100 and 97.8%, respectively. Results of the visual LAMP assay indicate that LAMP is a fast, specific, sensitive, inexpensive, and suitable method for diagnosing B. melitensis infection under field conditions. The Omp31 TaqMan® probe real-time assay, given its high specificity, can be used in conjunction with other field-based diagnostic tests.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eto, Joseph H.; Parashar, Manu; Lewis, Nancy Jo

The Real Time System Operations (RTSO) 2006-2007 project focused on two parallel technical tasks: (1) Real-Time Applications of Phasors for Monitoring, Alarming and Control; and (2) Real-Time Voltage Security Assessment (RTVSA) Prototype Tool. The overall goal of the phasor applications project was to accelerate adoption and foster greater use of new, more accurate, time-synchronized phasor measurements by conducting research and prototyping applications on California ISO's phasor platform -- the Real-Time Dynamics Monitoring System (RTDMS) -- that provide previously unavailable information on the dynamic stability of the grid. Feasibility assessment studies were conducted on potential applications of this technology for small-signal stability monitoring, validating/improving existing stability nomograms, conducting frequency response analysis, and obtaining real-time sensitivity information on key metrics to assess grid stress. Based on study findings, prototype applications for real-time visualization and alarming, small-signal stability monitoring, measurement-based sensitivity analysis, and frequency response assessment were developed and factory- and field-tested at the California ISO and at BPA. The goal of the RTVSA project was to provide California ISO with a prototype voltage security assessment tool that runs in real time within California ISO's new reliability and congestion management system. CERTS conducted a technical assessment of appropriate algorithms and developed a prototype incorporating state-of-the-art algorithms (such as the continuation power flow, direct method, boundary orbiting method, and hyperplanes) into a framework most suitable for an operations environment. Based on study findings, a functional specification was prepared, which the California ISO has since used to procure a production-quality tool that is now part of a suite of advanced computational tools used by California ISO for reliability and congestion management.

  3. Error amplification to promote motor learning and motivation in therapy robotics.

    PubMed

    Shirzad, Navid; Van der Loos, H F Machiel

    2012-01-01

To study the effects of different feedback error amplification methods on a subject's upper-limb motor learning and affect during a point-to-point reaching exercise, we developed a real-time controller for a robotic manipulandum. The reaching environment was visually distorted by implementing a thirty-degree rotation between the coordinate systems of the robot's end-effector and the visual display. Feedback error amplification was provided to subjects as they trained to learn reaching within the visually rotated environment. Error amplification was provided either visually or through both haptic and visual means, each method with two different amplification gains. Subjects' performance (i.e., trajectory error) and self-reports to a questionnaire were used to study the speed and amount of adaptation promoted by each error amplification method and subjects' emotional changes. We found that providing haptic and visual feedback promotes faster adaptation to the distortion and increases subjects' satisfaction with the task, leading to a higher level of attentiveness during the exercise. This finding can be used to design a novel exercise regimen, where alternating between error amplification methods is used to both increase a subject's motor learning and maintain a minimum level of motivational engagement in the exercise. In future experiments, we will test whether such exercise methods will lead to a faster learning time and greater motivation to pursue a therapy exercise regimen.
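The visual distortion described above amounts to rotating the displayed cursor position relative to the robot's end-effector. A minimal sketch of that mapping (the thirty-degree value comes from the study; the function itself is an illustrative sketch, not the authors' controller code):

```python
import math

def rotate_cursor(x, y, angle_deg=30.0):
    """Rotate the displayed cursor position (x, y) about the origin,
    as in a visuomotor-rotation experiment: the subject moves the
    end-effector, but sees the rotated point on screen."""
    a = math.radians(angle_deg)
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))
```

To adapt, the subject must learn to aim thirty degrees away from the visual target so that the rotated feedback lands on it.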

  4. NCAR's Research Data Archive: OPeNDAP Access for Complex Datasets

    NASA Astrophysics Data System (ADS)

    Dattore, R.; Worley, S. J.

    2014-12-01

Many datasets have complex structures, including hundreds of parameters and numerous vertical levels, grid resolutions, and temporal products. Making these data accessible is a challenge for a data provider. OPeNDAP is a powerful protocol for delivering, in real time, multi-file datasets that can be ingested by many analysis and visualization tools, but for these datasets there are too many choices about how to aggregate. Simple aggregation schemes can fail to support, or at least severely complicate, many potential studies based on complex datasets. We address this issue by using a rich file-content metadata collection to create a real-time, customized OPeNDAP service that matches the full suite of access possibilities for complex datasets. The Climate Forecast System Reanalysis (CFSR) and its extension, the Climate Forecast System Version 2 (CFSv2), produced by the National Centers for Environmental Prediction (NCEP) and hosted by the Research Data Archive (RDA) at the Computational and Information Systems Laboratory (CISL) at NCAR, are examples of complex datasets that are difficult to aggregate with existing data server software. CFSR and CFSv2 contain 141 distinct parameters on 152 vertical levels, six grid resolutions, and 36 products (analyses, n-hour forecasts, multi-hour averages, etc.), where not all parameter/level combinations are available at all grid resolution/product combinations. These data are archived in the RDA with the data structure provided by the producer; no additional reorganization or aggregation has been applied. Since 2011, users have been able to request customized subsets (e.g., temporal, parameter, spatial) from CFSR/CFSv2, which are processed in delayed mode and then downloaded to a user's system. Until now, this complexity has made it difficult to provide real-time OPeNDAP access to the data.
We have developed a service that leverages the already-existing subsetting interface and allows users to create a virtual dataset with its own structure (das, dds). The user receives a URL to the customized dataset that can be used by existing tools to ingest, analyze, and visualize the data. This presentation will detail the metadata system and OPeNDAP server that enable user-customized real-time access and show an example of how a visualization tool can access the data.
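Under the DAP2 protocol, a customized subset like the one described above is requested through a constraint expression appended to the dataset URL, with one index range per dimension. A minimal sketch of building such a URL (the base URL, variable name, and dimensions are placeholders, not actual RDA endpoints):

```python
def opendap_subset_url(base_url, variable, **index_ranges):
    """Build a DAP2 constraint-expression URL requesting a hyperslab
    subset of one variable, e.g. tmp[0:23][100:200]. The keyword
    order must match the variable's dimension order; all names here
    are hypothetical placeholders."""
    slices = "".join(f"[{lo}:{hi}]" for lo, hi in index_ranges.values())
    return f"{base_url}?{variable}{slices}"
```

A tool that speaks OPeNDAP can then fetch only the requested hyperslab instead of whole files, which is what makes real-time access to such a large archive feasible.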

  5. A mobile application to support collection and analytics of real-time critical care data.

    PubMed

    Vankipuram, Akshay; Vankipuram, Mithra; Ghaemmaghami, Vafa; Patel, Vimla L

    2017-11-01

    Data collection, in high intensity environments, poses several challenges including the ability to observe multiple streams of information. These problems are especially evident in critical care, where monitoring of the Advanced Trauma Life Support (ATLS) protocol provides an excellent opportunity to study the efficacy of applications that allow for the rapid capture of event information, providing theoretically-driven feedback using the data. Our goal was, (a) to design and implement a way to capture data on deviation from the standard practice based on the theoretical foundation of error classification from our past research, (b) to provide a means to meaningfully visualize the collected data, and (c) to provide a proof-of-concept for this implementation, using some understanding of user experience in clinical practice. We present the design and development of a web application designed to be used primarily on mobile devices and a summary data viewer to allow clinicians to, (a) track their activities, (b) provide real-time feedback of deviations from guidelines and protocols, and (c) provide summary feedback highlighting decisions made. We used a framework previously developed to classify activities in trauma as the theoretical foundation of the rules designed to do the same algorithmically, in our application. Attending physicians at a Level 1 trauma center used the application in the clinical setting and provided feedback for iterative development. Informal interviews and surveys were used to gain some deeper understanding of the user experience using this application in-situ. Activity visualizations were created highlighting decisions made during a trauma code as well as classification of tasks per the theoretical framework. The attendings reviewed the efficacy of the data visualizations as part of their interviews. We also conducted a proof-of-concept evaluation by way of usability questionnaire. 
Two attendings rated 4 of the 6 usability categories highly (inter-rater reliability: R = 0.87; weighted kappa = 0.59). This could be attributed to the fact that they were able to fit the application into their regular workflow during a trauma code relatively seamlessly; a deeper evaluation is required to explain this further. Our application can be used to capture and present data that accurately reflect work activities in real time in complex critical care environments, without significant interruptions to workflow.

  6. Visual measurement of the evaporation process of a sessile droplet by dual-channel simultaneous phase-shifting interferometry.

    PubMed

    Sun, Peng; Zhong, Liyun; Luo, Chunshu; Niu, Wenhu; Lu, Xiaoxu

    2015-07-16

To perform visual measurement of the evaporation process of a sessile droplet, a dual-channel simultaneous phase-shifting interferometry (DCSPSI) method is proposed. Using polarization components to simultaneously generate a pair of orthogonal interferograms with a phase shift of π/2, the real-time phase of a dynamic process can be retrieved with a two-step phase-shifting algorithm. Using the proposed DCSPSI system, the transient mass (TM) during evaporation of sessile droplets with different initial masses was obtained by measuring the real-time 3D shape of each droplet. Moreover, the mass flux density (MFD) of the evaporating droplet and its regional distribution were also calculated and analyzed. The experimental results show that the proposed DCSPSI supplies a visual, accurate, noncontact, nondestructive, global tool for real-time multi-parameter measurement of droplet evaporation.
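With two interferograms shifted by π/2, I1 = A + B·cos(φ) and I2 = A + B·cos(φ + π/2) = A − B·sin(φ), the phase follows from an arctangent once the background A is known or estimated. A scalar sketch of that two-step retrieval (a simplified stand-in for the full DCSPSI reconstruction, which operates per pixel):

```python
import math

def two_step_phase(i1, i2, background):
    """Recover phase from two pi/2-shifted interferogram intensities:
        I1 = A + B*cos(phi)
        I2 = A + B*cos(phi + pi/2) = A - B*sin(phi)
    so B*sin(phi) = A - I2 and B*cos(phi) = I1 - A, giving
    phi = atan2(A - I2, I1 - A). `background` is the (estimated) A."""
    return math.atan2(background - i2, i1 - background)
```

Applying this per pixel yields a wrapped phase map; the droplet's 3D shape then follows from phase unwrapping and the optical geometry, steps not shown here.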

  7. An Open-Source Hardware and Software System for Acquisition and Real-Time Processing of Electrophysiology during High Field MRI

    PubMed Central

    Purdon, Patrick L.; Millan, Hernan; Fuller, Peter L.; Bonmassar, Giorgio

    2008-01-01

Simultaneous recording of electrophysiology and functional magnetic resonance imaging (fMRI) is a technique of growing importance in neuroscience. Rapidly evolving clinical and scientific requirements have created a need for hardware and software that can be customized for specific applications. Hardware may require customization to enable a variety of recording types (e.g., electroencephalogram, local field potentials, or multi-unit activity) while meeting the stringent and costly requirements of MRI safety and compatibility. Real-time signal processing tools are an enabling technology for studies of learning, attention, sleep, epilepsy, neurofeedback, and neuropharmacology, yet real-time signal processing tools are difficult to develop. We describe an open source system for simultaneous electrophysiology and fMRI featuring low-noise (< 0.6 uV p-p input noise), electromagnetic compatibility for MRI (tested up to 7 Tesla), and user-programmable real-time signal processing. The hardware distribution provides the complete specifications required to build an MRI-compatible electrophysiological data acquisition system, including circuit schematics, printed circuit board (PCB) layouts, Gerber files for PCB fabrication and robotic assembly, a bill of materials with part numbers, data sheets, and vendor information, and test procedures. The software facilitates rapid implementation of real-time signal processing algorithms. This system has been used in human EEG/fMRI studies at 3 and 7 Tesla examining the auditory system, visual system, sleep physiology, and anesthesia, as well as in intracranial electrophysiological studies of the non-human primate visual system during 3 Tesla fMRI, and in human hyperbaric physiology studies at depths of up to 300 feet below sea level. PMID:18761038

  8. An open-source hardware and software system for acquisition and real-time processing of electrophysiology during high field MRI.

    PubMed

    Purdon, Patrick L; Millan, Hernan; Fuller, Peter L; Bonmassar, Giorgio

    2008-11-15

Simultaneous recording of electrophysiology and functional magnetic resonance imaging (fMRI) is a technique of growing importance in neuroscience. Rapidly evolving clinical and scientific requirements have created a need for hardware and software that can be customized for specific applications. Hardware may require customization to enable a variety of recording types (e.g., electroencephalogram, local field potentials, or multi-unit activity) while meeting the stringent and costly requirements of MRI safety and compatibility. Real-time signal processing tools are an enabling technology for studies of learning, attention, sleep, epilepsy, neurofeedback, and neuropharmacology, yet real-time signal processing tools are difficult to develop. We describe an open-source system for simultaneous electrophysiology and fMRI featuring low-noise (<0.6microV p-p input noise), electromagnetic compatibility for MRI (tested up to 7T), and user-programmable real-time signal processing. The hardware distribution provides the complete specifications required to build an MRI-compatible electrophysiological data acquisition system, including circuit schematics, printed circuit board (PCB) layouts, Gerber files for PCB fabrication and robotic assembly, a bill of materials with part numbers, data sheets, and vendor information, and test procedures. The software facilitates rapid implementation of real-time signal processing algorithms. This system has been used in human EEG/fMRI studies at 3 and 7T examining the auditory system, visual system, sleep physiology, and anesthesia, as well as in intracranial electrophysiological studies of the non-human primate visual system during 3T fMRI, and in human hyperbaric physiology studies at depths of up to 300 feet below sea level.

  9. Biological Visualization, Imaging and Simulation(Bio-VIS) at NASA Ames Research Center: Developing New Software and Technology for Astronaut Training and Biology Research in Space

    NASA Technical Reports Server (NTRS)

    Smith, Jeffrey

    2003-01-01

The Bio-Visualization, Imaging and Simulation (BioVIS) Technology Center at NASA's Ames Research Center is dedicated to developing and applying advanced visualization, computation and simulation technologies to support NASA Space Life Sciences research and the objectives of the Fundamental Biology Program. Research ranges from high resolution 3D cell imaging and structure analysis, virtual environment simulation of fine sensory-motor tasks, computational neuroscience and biophysics to biomedical/clinical applications. Computer simulation research focuses on the development of advanced computational tools for astronaut training and education. Virtual Reality (VR) and Virtual Environment (VE) simulation systems have become important training tools in many fields from flight simulation to, more recently, surgical simulation. The type and quality of training provided by these computer-based tools ranges widely, but the value of real-time VE computer simulation as a method of preparing individuals for real-world tasks is well established. Astronauts routinely use VE systems for various training tasks, including Space Shuttle landings, robot arm manipulations and extravehicular activities (space walks). Currently, there are no VE systems to train astronauts for basic and applied research experiments which are an important part of many missions. The Virtual Glovebox (VGX) is a prototype VE system for real-time physically-based simulation of the Life Sciences Glovebox where astronauts will perform many complex tasks supporting research experiments aboard the International Space Station. The VGX consists of a physical display system utilizing dual LCD projectors and circular polarization to produce a desktop-sized 3D virtual workspace. Physically-based modeling tools (Arachi Inc.) provide real-time collision detection, rigid body dynamics, physical properties and force-based controls for objects.
The human-computer interface consists of two magnetic tracking devices (Ascension Inc.) attached to instrumented gloves (Immersion Inc.) which co-locate the user's hands with hand/forearm representations in the virtual workspace. Force feedback is possible in a work volume defined by a Phantom Desktop device (SensAble Inc.). Graphics are written in OpenGL. The system runs on a 2.2 GHz Pentium 4 PC. The prototype VGX provides astronauts and support personnel with a real-time, physically-based VE system to simulate basic research tasks both on Earth and in the microgravity of space. The immersive virtual environment of the VGX also makes it a useful tool for virtual engineering applications, including CAD development, procedure design, and simulation of human-system interaction in a desktop-sized work volume.

  10. Tactical Mission Command (TMC)

    DTIC Science & Technology

    2016-03-01

capabilities to Army commanders and their staffs, consisting primarily of a user-customizable Common Operating Picture (COP) enabled with real-time... COP viewer and data management capability. It is a collaborative visualization and planning application that also provides a common map display... COP): Display the COP consisting of the following: (1) Friendly forces determined by the commander including subordinate and supporting units at

  11. The influence of clutter on real-world scene search: evidence from search efficiency and eye movements.

    PubMed

    Henderson, John M; Chanceaux, Myriam; Smith, Tim J

    2009-01-23

    We investigated the relationship between visual clutter and visual search in real-world scenes. Specifically, we investigated whether visual clutter, indexed by feature congestion, sub-band entropy, and edge density, correlates with search performance as assessed both by traditional behavioral measures (response time and error rate) and by eye movements. Our results demonstrate that clutter is related to search performance. These results hold for both traditional search measures and for eye movements. The results suggest that clutter may serve as an image-based proxy for search set size in real-world scenes.
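Edge density, one of the clutter indices named above, is commonly computed as the fraction of pixels whose local intensity gradient exceeds a threshold. A minimal pure-Python sketch of that idea (the threshold and neighborhood are illustrative; the study's exact edge operator may differ):

```python
def edge_density(image, threshold=30):
    """Fraction of interior pixels whose horizontal or vertical
    intensity difference exceeds `threshold`. `image` is a list of
    rows of grayscale values. A simple stand-in for the edge-density
    clutter measure, not the paper's exact operator."""
    rows, cols = len(image), len(image[0])
    edges, total = 0, 0
    for r in range(rows - 1):
        for c in range(cols - 1):
            total += 1
            dx = abs(image[r][c + 1] - image[r][c])
            dy = abs(image[r + 1][c] - image[r][c])
            if max(dx, dy) > threshold:
                edges += 1
    return edges / total
```

A uniform image scores 0, while a scene full of fine texture and object boundaries scores high, which is the sense in which the index proxies for search set size.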

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pugmire, David; Kress, James; Choi, Jong

Data driven science is becoming increasingly common and complex, and is placing tremendous stress on visualization and analysis frameworks. Data sources producing 10 GB per second (and more) are becoming increasingly commonplace in simulation, sensor, and experimental sciences. These data sources, which are often distributed around the world, must be analyzed by teams of scientists that are also distributed. Enabling scientists to view, query, and interact with such large volumes of data in near-real time requires a rich fusion of visualization and analysis techniques, middleware, and workflow systems. This paper discusses initial research into visualization and analysis of distributed data workflows that enables scientists to make near-real-time decisions over large volumes of time-varying data.

  13. Real-time Crystal Growth Visualization and Quantification by Energy-Resolved Neutron Imaging.

    PubMed

    Tremsin, Anton S; Perrodin, Didier; Losko, Adrian S; Vogel, Sven C; Bourke, Mark A M; Bizarri, Gregory A; Bourret, Edith D

    2017-04-20

    Energy-resolved neutron imaging is investigated as a real-time diagnostic tool for visualization and in-situ measurements of "blind" processes. This technique is demonstrated for the Bridgman-type crystal growth enabling remote and direct measurements of growth parameters crucial for process optimization. The location and shape of the interface between liquid and solid phases are monitored in real-time, concurrently with the measurement of elemental distribution within the growth volume and with the identification of structural features with a ~100 μm spatial resolution. Such diagnostics can substantially reduce the development time between exploratory small scale growth of new materials and their subsequent commercial production. This technique is widely applicable and is not limited to crystal growth processes.

  14. Real-time Crystal Growth Visualization and Quantification by Energy-Resolved Neutron Imaging

    NASA Astrophysics Data System (ADS)

    Tremsin, Anton S.; Perrodin, Didier; Losko, Adrian S.; Vogel, Sven C.; Bourke, Mark A. M.; Bizarri, Gregory A.; Bourret, Edith D.

    2017-04-01

    Energy-resolved neutron imaging is investigated as a real-time diagnostic tool for visualization and in-situ measurements of “blind” processes. This technique is demonstrated for the Bridgman-type crystal growth enabling remote and direct measurements of growth parameters crucial for process optimization. The location and shape of the interface between liquid and solid phases are monitored in real-time, concurrently with the measurement of elemental distribution within the growth volume and with the identification of structural features with a ~100 μm spatial resolution. Such diagnostics can substantially reduce the development time between exploratory small scale growth of new materials and their subsequent commercial production. This technique is widely applicable and is not limited to crystal growth processes.

  15. Real-time Crystal Growth Visualization and Quantification by Energy-Resolved Neutron Imaging

    PubMed Central

    Tremsin, Anton S.; Perrodin, Didier; Losko, Adrian S.; Vogel, Sven C.; Bourke, Mark A.M.; Bizarri, Gregory A.; Bourret, Edith D.

    2017-01-01

    Energy-resolved neutron imaging is investigated as a real-time diagnostic tool for visualization and in-situ measurements of “blind” processes. This technique is demonstrated for the Bridgman-type crystal growth enabling remote and direct measurements of growth parameters crucial for process optimization. The location and shape of the interface between liquid and solid phases are monitored in real-time, concurrently with the measurement of elemental distribution within the growth volume and with the identification of structural features with a ~100 μm spatial resolution. Such diagnostics can substantially reduce the development time between exploratory small scale growth of new materials and their subsequent commercial production. This technique is widely applicable and is not limited to crystal growth processes. PMID:28425461

  16. Real-Time Process Analytics in Emergency Healthcare.

    PubMed

    Koufi, Vassiliki; Malamateniou, Flora; Prentza, Adrianna; Vassilacopoulos, George

    2017-01-01

    Emergency medical systems (EMS) are considered to be amongst the most crucial systems, as they involve a variety of activities performed from the time of a call to an ambulance service until the patient's discharge from the emergency department of a hospital. These activities are closely interrelated, so collaboration and coordination become a vital issue for patients and for emergency healthcare service performance. The utilization of standard workflow technology in the context of Service Oriented Architecture can provide an appropriate technological infrastructure for defining and automating EMS processes that span organizational boundaries, so as to create and empower collaboration and coordination among the participating organizations. In such systems, the utilization of leading-edge analytics tools can prove important, as it can facilitate real-time extraction and visualization of useful insights from the mountains of generated data pertaining to emergency case management. This paper presents a framework which provides healthcare professionals with just-in-time insight within and across emergency healthcare processes by performing real-time analysis on process-related data, in order to better support decision making and identify potential critical risks that may affect the provision of emergency care to patients.

  17. Implementation of a General Real-Time Visual Anomaly Detection System Via Soft Computing

    NASA Technical Reports Server (NTRS)

    Dominguez, Jesus A.; Klinko, Steve; Ferrell, Bob; Steinrock, Todd (Technical Monitor)

    2001-01-01

    The intelligent visual system detects anomalies or defects in real time under normal lighting operating conditions. The application is essentially a learning machine that integrates fuzzy logic (FL), artificial neural network (ANN), and genetic algorithm (GA) schemes to process the image, run the learning process, and finally detect the anomalies or defects. The system acquires the image, performs segmentation to separate the object being tested from the background, preprocesses the image using fuzzy reasoning, performs the final segmentation using fuzzy reasoning techniques to retrieve regions with potential anomalies or defects, and finally retrieves them using a learning model built via ANN and GA techniques. FL provides a powerful framework for knowledge representation and overcomes the uncertainty and vagueness typically found in image analysis. ANN provides learning capabilities, and GA leads to robust learning results. An application prototype currently runs on a regular PC under Windows NT, and preliminary work has been performed to build an embedded version with multiple image processors. The application prototype is being tested at the Kennedy Space Center (KSC), Florida, to visually detect anomalies along slide basket cables utilized by the astronauts to evacuate the NASA Shuttle launch pad in an emergency. The potential applications of this anomaly detection system in an open environment are quite wide. Another current, potentially viable application at NASA is in detecting anomalies of the NASA Space Shuttle Orbiter's radiator panels.
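    The abstract above gives no code; as a hedged illustration of the fuzzy-reasoning segmentation step it describes, the sketch below assigns each pixel a degree of membership in the "object" class through a sigmoid membership function and keeps pixels above an alpha-cut. The parameters (`midpoint`, `steepness`) are illustrative assumptions, not values from the KSC system.

```python
# Illustrative sketch (not the authors' code): fuzzy-reasoning-style
# segmentation in which each pixel gets a degree of membership in the
# "object" class instead of a hard binary threshold.
import math

def object_membership(intensity, midpoint=128.0, steepness=0.05):
    """Sigmoid membership function: degree (0..1) to which a pixel
    belongs to the bright 'object' class."""
    return 1.0 / (1.0 + math.exp(-steepness * (intensity - midpoint)))

def segment(image, cut=0.5):
    """Keep pixels whose object membership exceeds the alpha-cut."""
    return [[1 if object_membership(p) > cut else 0 for p in row]
            for row in image]

image = [[10, 200, 180],
         [15,  20, 220]]
print(segment(image))  # [[0, 1, 1], [0, 0, 1]]
```

    The soft membership value, unlike a hard threshold, can also be passed on to later stages as a confidence score.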

  18. Technical note: real-time web-based wireless visual guidance system for radiotherapy.

    PubMed

    Lee, Danny; Kim, Siyong; Palta, Jatinder R; Kim, Taeho

    2017-06-01

    We describe a Web-based wireless visual guidance system that mitigates issues associated with hard-wired, audio-visual aided patient interactive motion management systems, which are cumbersome to use in routine clinical practice. The Web-based wireless visual display duplicates an existing visual display of a respiratory-motion management system for visual guidance. The visual display of the existing system is sent to legacy Web clients over a private wireless network, thereby allowing a wireless setting for real-time visual guidance. In this study, an active breathing coordinator (ABC) trace was used as the input for the visual display, which was captured and transmitted to Web clients. Virtual reality goggles require two images (left- and right-eye views) for visual display. We investigated the performance of Web-based wireless visual guidance by quantifying (1) the network latency of visual displays between an ABC computer display and the Web clients of a laptop, an iPad mini 2, and an iPhone 6, and (2) the frame rate of visual display on the Web clients in frames per second (fps). The network latency of visual display between the ABC computer and Web clients was about 100 ms, and the frame rate was 14.0 fps (laptop), 9.2 fps (iPad mini 2), and 11.2 fps (iPhone 6). In addition, the visual display for virtual reality goggles was successfully shown on the iPhone 6 with 100 ms latency and 11.2 fps. High network security was maintained by utilizing the private network configuration. This study demonstrated that Web-based wireless visual guidance can be a promising technique for clinical motion management systems which require real-time visual display of their outputs. Based on the results of this study, our approach has the potential to reduce clutter associated with wired systems, reduce space requirements, and extend the use of medical devices from static usage to interactive and dynamic usage in a radiotherapy treatment vault.

  19. Real-time visual tracking of less textured three-dimensional objects on mobile platforms

    NASA Astrophysics Data System (ADS)

    Seo, Byung-Kuk; Park, Jungsik; Park, Hanhoon; Park, Jong-Il

    2012-12-01

    Natural feature-based approaches are still challenging for mobile applications (e.g., mobile augmented reality), because they are feasible only in limited environments such as highly textured and planar scenes/objects, and they need powerful mobile hardware for fast and reliable tracking. In many cases where conventional approaches are not effective, three-dimensional (3-D) knowledge of target scenes would be beneficial. We present a well-established framework for real-time visual tracking of less textured 3-D objects on mobile platforms. Our framework is based on model-based tracking that efficiently exploits partially known 3-D scene knowledge such as object models and a background's distinctive geometric or photometric knowledge. Moreover, we elaborate on implementation in order to make it suitable for real-time vision processing on mobile hardware. The performance of the framework is tested and evaluated on recent commercially available smartphones, and its feasibility is shown by real-time demonstrations.

  20. Sensing Super-position: Visual Instrument Sensor Replacement

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Schipper, John F.

    2006-01-01

    The coming decade of fast, cheap, and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome limitations of the human senses. This project addresses the technical feasibility of augmenting human vision through Sensing Super-position using a Visual Instrument Sensory Organ Replacement (VISOR). The current implementation of the VISOR device translates visual and other passive or active sensory instruments into sounds, which become relevant when the visual resolution is insufficient for very difficult and particular sensing tasks. A successful Sensing Super-position meets many human and pilot-vehicle system requirements. The system can be further developed into a cheap, portable, and low-power device, taking into account the limited capabilities of the human user as well as the typical characteristics of the user's dynamic environment. The system operates in real time, giving the desired information for the particular augmented sensing tasks. The Sensing Super-position device increases perceived image resolution via an auditory representation as well as the visual representation. Auditory mapping is performed to distribute an image in time. The three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g., histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns. The known capabilities of the human hearing system to learn and understand complicated auditory patterns provided the basic motivation for developing an image-to-sound mapping system.
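    A column-scan image-to-sound mapping of the kind described above (row position mapped to frequency, brightness to amplitude, column position to time) can be sketched as follows. This is an illustrative reconstruction, not the published VISOR transform; the frequency range, slice duration, and sample rate are assumptions.

```python
# Hedged sketch of an image-to-sound mapping: scan columns left to
# right; each column becomes one time slice in which every row
# contributes a sinusoid whose frequency depends on the row index and
# whose amplitude is the pixel brightness (0..1).
import math

def image_to_audio(image, sample_rate=8000, slice_dur=0.05,
                   f_lo=200.0, f_hi=2000.0):
    rows, cols = len(image), len(image[0])
    # One frequency per row, spread linearly between f_lo and f_hi.
    freqs = [f_lo + (f_hi - f_lo) * r / max(rows - 1, 1) for r in range(rows)]
    n = int(sample_rate * slice_dur)  # samples per column slice
    samples = []
    for c in range(cols):
        for i in range(n):
            t = i / sample_rate
            s = sum(image[r][c] * math.sin(2 * math.pi * freqs[r] * t)
                    for r in range(rows))
            samples.append(s / rows)  # normalize by row count
    return samples

img = [[0.0, 1.0],
       [1.0, 0.0]]
audio = image_to_audio(img)
print(len(audio))  # two slices of 400 samples each -> 800
```

    A listener hears each image column in sequence, with higher rows at higher pitches; finer images simply add more frequencies per slice.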

  1. Interactive visualization of vegetation dynamics

    USGS Publications Warehouse

    Reed, B.C.; Swets, D.; Bard, L.; Brown, J.; Rowland, James

    2001-01-01

    Satellite imagery provides a mechanism for observing seasonal dynamics of the landscape that have implications for near real-time monitoring of agriculture, forest, and range resources. This study illustrates a technique for visualizing timely information on key events during the growing season (e.g., onset, peak, duration, and end of growing season), as well as the status of the current growing season with respect to the recent historical average. Using time-series analysis of normalized difference vegetation index (NDVI) data from the advanced very high resolution radiometer (AVHRR) satellite sensor, seasonal dynamics can be derived. We have developed a set of Java-based visualization and analysis tools to make comparisons between the seasonal dynamics of the current year with those from the past twelve years. In addition, the visualization tools allow the user to query underlying databases such as land cover or administrative boundaries to analyze the seasonal dynamics of areas of their own interest. The Java-based tools (data exploration and visualization analysis or DEVA) use a Web-based client-server model for processing the data. The resulting visualization and analysis, available via the Internet, is of value to those responsible for land management decisions, resource allocation, and at-risk population targeting.
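    A minimal sketch of deriving the key seasonal events named above (onset, peak, duration, and end of the growing season) from an NDVI time series, using a common half-amplitude threshold rule. This is an illustration with made-up values, not the DEVA implementation.

```python
# Illustrative sketch: onset and end of season are the first and last
# composite periods where NDVI exceeds half the seasonal amplitude
# above the baseline; peak is the maximum-NDVI period.
def season_metrics(ndvi):
    base, peak = min(ndvi), max(ndvi)
    thresh = base + 0.5 * (peak - base)
    above = [i for i, v in enumerate(ndvi) if v >= thresh]
    onset, end = above[0], above[-1]
    return {"onset": onset, "peak": ndvi.index(peak),
            "end": end, "duration": end - onset + 1}

# 12 hypothetical composite periods of NDVI values
ndvi = [0.2, 0.2, 0.3, 0.5, 0.7, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.2]
print(season_metrics(ndvi))
# {'onset': 3, 'peak': 5, 'end': 7, 'duration': 5}
```

    Comparing these metrics for the current year against the multi-year average is what supports the anomaly analysis described in the abstract.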

  2. Real-Time Visualization and Manipulation of the Metastatic Trajectory of Breast Cancer Cells

    DTIC Science & Technology

    2017-09-01

    AWARD NUMBER: W81XWH-13-1-0173 TITLE: Real-Time Visualization and Manipulation of the Metastatic Trajectory of Breast Cancer Cells ... of this work was to engineer breast cancer cells to irreversibly alter the genome of nearby cells through exosomal transfer of Cre recombinase from ... the cancer cells to surrounding cells. Our goal was to use this study to activate green fluorescent protein in the host reporter cells in the ...

  3. Visual Servoing for an Autonomous Hexarotor Using a Neural Network Based PID Controller.

    PubMed

    Lopez-Franco, Carlos; Gomez-Avila, Javier; Alanis, Alma Y; Arana-Daniel, Nancy; Villaseñor, Carlos

    2017-08-12

    In recent years, unmanned aerial vehicles (UAVs) have gained significant attention. However, we face two major drawbacks when working with UAVs: high nonlinearities, and unknown position in 3D space, since the vehicle is not provided with on-board sensors that can measure its position with respect to a global coordinate system. In this paper, we present a real-time implementation of a servo control, integrating vision sensors with a neural proportional integral derivative (PID) controller, in order to develop a hexarotor image-based visual servo control (IBVS) that knows the position of the robot by using a velocity vector as a reference to control the hexarotor position. This integration requires tight coordination between control algorithms, models of the system to be controlled, sensors, hardware and software platforms, and well-defined interfaces to allow the real-time implementation, as well as the design of different processing stages with their respective communication architecture. All of these issues and others make real-time implementation a difficult task. To show the effectiveness of the sensor integration and control algorithm in addressing these issues on a highly nonlinear system with noisy sensors such as cameras, experiments were performed on the Asctec Firefly on-board computer, including both simulation and experimental results.

  4. Visual Servoing for an Autonomous Hexarotor Using a Neural Network Based PID Controller

    PubMed Central

    Lopez-Franco, Carlos; Alanis, Alma Y.; Arana-Daniel, Nancy; Villaseñor, Carlos

    2017-01-01

    In recent years, unmanned aerial vehicles (UAVs) have gained significant attention. However, we face two major drawbacks when working with UAVs: high nonlinearities, and unknown position in 3D space, since the vehicle is not provided with on-board sensors that can measure its position with respect to a global coordinate system. In this paper, we present a real-time implementation of a servo control, integrating vision sensors with a neural proportional integral derivative (PID) controller, in order to develop a hexarotor image-based visual servo control (IBVS) that knows the position of the robot by using a velocity vector as a reference to control the hexarotor position. This integration requires tight coordination between control algorithms, models of the system to be controlled, sensors, hardware and software platforms, and well-defined interfaces to allow the real-time implementation, as well as the design of different processing stages with their respective communication architecture. All of these issues and others make real-time implementation a difficult task. To show the effectiveness of the sensor integration and control algorithm in addressing these issues on a highly nonlinear system with noisy sensors such as cameras, experiments were performed on the Asctec Firefly on-board computer, including both simulation and experimental results. PMID:28805689

  5. Application and API for Real-time Visualization of Ground-motions and Tsunami

    NASA Astrophysics Data System (ADS)

    Aoi, S.; Kunugi, T.; Suzuki, W.; Kubo, T.; Nakamura, H.; Azuma, H.; Fujiwara, H.

    2015-12-01

    Due to recent progress in seismographs and communication environments, real-time, continuous ground-motion observation has become technically and economically feasible. K-NET and KiK-net, the nationwide strong-motion networks operated by NIED, cover all of Japan with about 1750 stations in total. More than half of the stations transmit ground-motion indexes and/or waveform data every second. Traditionally, strong-motion data were recorded by event-triggered instruments over non-continuous telephone lines that were connected only after an earthquake. Though the data from such networks mainly contribute to preparations for future earthquakes, the huge amount of real-time data from a dense network is expected to contribute directly to the mitigation of ongoing earthquake disasters through, e.g., automatically shutting down plants and helping decision-making for initial response. By generating distribution maps of these indexes and uploading them to the website, we implemented a real-time ground-motion monitoring system, the Kyoshin (strong-motion in Japanese) monitor. This web service (www.kyoshin.bosai.go.jp) started in 2008, and anyone can grasp the current ground motions of Japan. Though this service provides only ground-motion maps in GIF format, digital data are important for taking full advantage of real-time strong-motion data to mitigate ongoing disasters. We have developed a WebAPI to provide real-time data and related information such as ground motions (5 km mesh) and arrival times estimated from EEW (earthquake early warning). All response data from this WebAPI are in JSON format and are easy to parse. We also developed a Kyoshin monitor application for smartphones, 'Kmoni view', using the API. In this application, ground motions estimated from EEW are overlaid on the map together with the observed one-second-interval indexes. The application can play back previous earthquakes for demonstration or disaster drills.
    In a mobile environment, data traffic and battery life are limited, and it is not practical to visualize all the data continuously. The application therefore has an automatic starting (pop-up) function triggered by EEW. A similar WebAPI and application for tsunami are being prepared using the pressure data recorded by the dense offshore observation network (S-net), which is under construction along the Japan Trench.
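    Since the abstract states only that the WebAPI returns JSON, the sketch below uses hypothetical field names (`stations`, `code`, `intensity`) to illustrate how easily a client can parse such a response and pick out stations exceeding an intensity threshold.

```python
# Sketch of consuming a JSON WebAPI response like the one described;
# the payload structure below is an assumption for illustration.
import json

payload = json.dumps({
    "time": "2015-12-01T10:00:00+09:00",
    "stations": [
        {"code": "TKY007", "intensity": 3.1},
        {"code": "KNG001", "intensity": 1.2},
        {"code": "CHB005", "intensity": 4.5},
    ],
})

def strong_stations(raw, threshold=3.0):
    """Return codes of stations whose real-time intensity meets the threshold."""
    data = json.loads(raw)
    return [s["code"] for s in data["stations"] if s["intensity"] >= threshold]

print(strong_stations(payload))  # ['TKY007', 'CHB005']
```

    JSON's self-describing structure is what makes the API "easy to parse" for third-party clients such as the smartphone application.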

  6. Novel techniques of real-time blood flow and functional mapping: technical note.

    PubMed

    Kamada, Kyousuke; Ogawa, Hiroshi; Saito, Masato; Tamura, Yukie; Anei, Ryogo; Kapeller, Christoph; Hayashi, Hideaki; Prueckl, Robert; Guger, Christoph

    2014-01-01

    There are two main approaches to intraoperative monitoring in neurosurgery. One approach is related to fluorescent phenomena and the other is related to oscillatory neuronal activity. We developed novel techniques to visualize blood flow (BF) conditions in real time, based on indocyanine green videography (ICG-VG) and the electrophysiological phenomenon of high gamma activity (HGA). We investigated the use of ICG-VG in four patients with moyamoya disease and two with arteriovenous malformation (AVM), and we investigated the use of real-time HGA mapping in four patients with brain tumors who underwent lesion resection with awake craniotomy. Real-time data processing of ICG-VG was based on perfusion imaging, which generated parameters including arrival time (AT), mean transit time (MTT), and BF of brain surface vessels. During awake craniotomy, we analyzed the frequency components of brain oscillation and performed real-time HGA mapping to identify functional areas. Processed results were projected on a wireless monitor linked to the operating microscope. After revascularization for moyamoya disease, AT and BF were significantly shortened and increased, respectively, suggesting hyperperfusion. Real-time fusion images on the wireless monitor provided anatomical, BF, and functional information simultaneously, and allowed the resection of AVMs under the microscope. Real-time HGA mapping during awake craniotomy rapidly indicated the eloquent areas of motor and language function and significantly shortened the operation time. These novel techniques, which we introduced, might improve the reliability of intraoperative monitoring and enable the development of rational and objective surgical strategies.

  7. Novel Techniques of Real-time Blood Flow and Functional Mapping: Technical Note

    PubMed Central

    KAMADA, Kyousuke; OGAWA, Hiroshi; SAITO, Masato; TAMURA, Yukie; ANEI, Ryogo; KAPELLER, Christoph; HAYASHI, Hideaki; PRUECKL, Robert; GUGER, Christoph

    2014-01-01

    There are two main approaches to intraoperative monitoring in neurosurgery. One approach is related to fluorescent phenomena and the other is related to oscillatory neuronal activity. We developed novel techniques to visualize blood flow (BF) conditions in real time, based on indocyanine green videography (ICG-VG) and the electrophysiological phenomenon of high gamma activity (HGA). We investigated the use of ICG-VG in four patients with moyamoya disease and two with arteriovenous malformation (AVM), and we investigated the use of real-time HGA mapping in four patients with brain tumors who underwent lesion resection with awake craniotomy. Real-time data processing of ICG-VG was based on perfusion imaging, which generated parameters including arrival time (AT), mean transit time (MTT), and BF of brain surface vessels. During awake craniotomy, we analyzed the frequency components of brain oscillation and performed real-time HGA mapping to identify functional areas. Processed results were projected on a wireless monitor linked to the operating microscope. After revascularization for moyamoya disease, AT and BF were significantly shortened and increased, respectively, suggesting hyperperfusion. Real-time fusion images on the wireless monitor provided anatomical, BF, and functional information simultaneously, and allowed the resection of AVMs under the microscope. Real-time HGA mapping during awake craniotomy rapidly indicated the eloquent areas of motor and language function and significantly shortened the operation time. These novel techniques, which we introduced, might improve the reliability of intraoperative monitoring and enable the development of rational and objective surgical strategies. PMID:25263624

  8. Detecting spatial patterns of rivermouth processes using a geostatistical framework for near-real-time analysis

    USGS Publications Warehouse

    Xu, Wenzhao; Collingsworth, Paris D.; Bailey, Barbara; Carlson Mazur, Martha L.; Schaeffer, Jeff; Minsker, Barbara

    2017-01-01

    This paper proposes a geospatial analysis framework and software to interpret water-quality sampling data from towed undulating vehicles in near-real time. The framework includes data quality assurance and quality control processes, automated kriging interpolation along undulating paths, and local hotspot and cluster analyses. These methods are implemented in an interactive Web application developed using the Shiny package in the R programming environment to support near-real time analysis along with 2- and 3-D visualizations. The approach is demonstrated using historical sampling data from an undulating vehicle deployed at three rivermouth sites in Lake Michigan during 2011. The normalized root-mean-square error (NRMSE) of the interpolation averages approximately 10% in 3-fold cross validation. The results show that the framework can be used to track river plume dynamics and provide insights on mixing, which could be related to wind and seiche events.
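    The reported skill metric, normalized root-mean-square error (NRMSE) under 3-fold cross validation, can be sketched as below. A nearest-neighbour predictor stands in for the kriging interpolator, which would require a geostatistics library; the data are made up for illustration.

```python
# Sketch: NRMSE (RMSE divided by the observed range) evaluated with
# 3-fold cross validation along a 1-D sampling track. A nearest-
# neighbour predictor substitutes for kriging in this toy example.
import math

def nrmse(obs, pred):
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))
    return rmse / (max(obs) - min(obs))

def cross_validate(xs, ys, k=3):
    scores = []
    for fold in range(k):
        train = [(x, y) for i, (x, y) in enumerate(zip(xs, ys)) if i % k != fold]
        test = [(x, y) for i, (x, y) in enumerate(zip(xs, ys)) if i % k == fold]
        # Predict each held-out point from its nearest training neighbour.
        preds = [min(train, key=lambda t: abs(t[0] - x))[1] for x, _ in test]
        scores.append(nrmse([y for _, y in test], preds))
    return sum(scores) / k

xs = [0, 1, 2, 3, 4, 5, 6, 7, 8]
ys = [1.0, 1.2, 1.1, 1.4, 1.5, 1.7, 1.6, 1.9, 2.0]
print(round(cross_validate(xs, ys), 3))
```

    An NRMSE near 0.1, as reported in the abstract, means held-out prediction errors average about 10% of the observed data range.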

  9. Where Are the Academic Jobs? Interactive Exploration of Job Advertisements in Geospatial and Topical Space

    NASA Astrophysics Data System (ADS)

    Zoss, Angela M.; Conover, Michael; Börner, Katy

    This paper details a methodology for capturing, analyzing, and communicating one specific type of real-time data: advertisements of currently available academic jobs. The work was inspired by the American Recovery and Reinvestment Act of 2009 (ARRA) [2], which provides approximately $100 billion for education, creating a historic opportunity to create and save hundreds of thousands of jobs. Here, we discuss methodological challenges and practical problems when developing interactive visual interfaces to real-time data streams such as job advertisements. Related work is discussed, preliminary solutions are presented, and future work is outlined. The presented approach should be valuable for dealing with the enormous volume and complexity of social and behavioral data that evolve continuously in real time, analyses of which need to be communicated to a broad audience of researchers, practitioners, clients, educators, and interested policymakers, as originally suggested by Hemmings and Wilkinson [1].

  10. Real-time imaging of quantum entanglement.

    PubMed

    Fickler, Robert; Krenn, Mario; Lapkiewicz, Radek; Ramelow, Sven; Zeilinger, Anton

    2013-01-01

    Quantum entanglement is widely regarded as one of the most prominent features of quantum mechanics and quantum information science. Although photonic entanglement is routinely studied in many experiments nowadays, its signature has been beyond the grasp of real-time imaging. Here we show that modern technology, namely triggered intensified charge-coupled device (ICCD) cameras, is fast and sensitive enough to image in real time the effect of the measurement of one photon on its entangled partner. To quantitatively verify the non-classicality of the measurements, we determine the detected photon number and error margin from the registered intensity image within a certain region. Additionally, the use of the ICCD camera allows us to demonstrate the high flexibility of the setup in creating any desired spatial-mode entanglement, which suggests as well that visual imaging in quantum optics not only provides a better intuitive understanding of entanglement but will improve applications of quantum science.

  11. Real-Time Imaging of Quantum Entanglement

    PubMed Central

    Fickler, Robert; Krenn, Mario; Lapkiewicz, Radek; Ramelow, Sven; Zeilinger, Anton

    2013-01-01

    Quantum entanglement is widely regarded as one of the most prominent features of quantum mechanics and quantum information science. Although photonic entanglement is routinely studied in many experiments nowadays, its signature has been beyond the grasp of real-time imaging. Here we show that modern technology, namely triggered intensified charge-coupled device (ICCD) cameras, is fast and sensitive enough to image in real time the effect of the measurement of one photon on its entangled partner. To quantitatively verify the non-classicality of the measurements, we determine the detected photon number and error margin from the registered intensity image within a certain region. Additionally, the use of the ICCD camera allows us to demonstrate the high flexibility of the setup in creating any desired spatial-mode entanglement, which suggests as well that visual imaging in quantum optics not only provides a better intuitive understanding of entanglement but will improve applications of quantum science. PMID:23715056

  12. A flexible software architecture for scalable real-time image and video processing applications

    NASA Astrophysics Data System (ADS)

    Usamentiaga, Rubén; Molleda, Julio; García, Daniel F.; Bulnes, Francisco G.

    2012-06-01

    Real-time image and video processing applications require skilled architects, and recent trends in the hardware platform make the design and implementation of these applications increasingly complex. Many frameworks and libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing applications. However, they tend to lack flexibility because they are normally oriented towards particular types of applications, or they impose specific data processing models such as the pipeline. Other issues include large memory footprints, difficulty for reuse and inefficient execution on multicore processors. This paper presents a novel software architecture for real-time image and video processing applications which addresses these issues. The architecture is divided into three layers: the platform abstraction layer, the messaging layer, and the application layer. The platform abstraction layer provides a high level application programming interface for the rest of the architecture. The messaging layer provides a message passing interface based on a dynamic publish/subscribe pattern. A topic-based filtering in which messages are published to topics is used to route the messages from the publishers to the subscribers interested in a particular type of messages. The application layer provides a repository for reusable application modules designed for real-time image and video processing applications. These modules, which include acquisition, visualization, communication, user interface and data processing modules, take advantage of the power of other well-known libraries such as OpenCV, Intel IPP, or CUDA. Finally, we present different prototypes and applications to show the possibilities of the proposed architecture.
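    The topic-based publish/subscribe routing described for the messaging layer can be illustrated with a minimal sketch. This is a toy model of the pattern, not the authors' architecture: subscribers register interest in a topic, and published messages are delivered only to subscribers of that topic.

```python
# Minimal topic-based publish/subscribe bus: messages published to a
# topic are routed only to the callbacks subscribed to that topic.
from collections import defaultdict

class MessageBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self._subscribers[topic]:
            callback(message)

bus = MessageBus()
frames = []
bus.subscribe("frames/raw", frames.append)     # e.g. a visualization module
bus.publish("frames/raw", {"id": 1, "data": b"\x00\x01"})
bus.publish("stats/fps", 14.0)                 # no subscriber: dropped
print(len(frames))  # 1
```

    Because publishers and subscribers know only topic names, acquisition, processing, and visualization modules can be recombined without touching each other's code, which is the decoupling the abstract highlights.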

  13. Enabling Real-time Water Decision Support Services Using Model as a Service

    NASA Astrophysics Data System (ADS)

    Zhao, T.; Minsker, B. S.; Lee, J. S.; Salas, F. R.; Maidment, D. R.; David, C. H.

    2014-12-01

    Through application of computational methods and an integrated information system, data and river modeling services can help researchers and decision makers more rapidly understand river conditions under alternative scenarios. To enable this capability, workflows (i.e., analysis and model steps) are created and published as Web services delivered through an internet browser, including model inputs, a published workflow service, and visualized outputs. The RAPID model, a river routing model developed at the University of Texas at Austin for parallel computation of river discharge, has been implemented as a workflow and published as a Web application. This allows non-technical users to remotely execute the model and visualize results as a service through a simple Web interface. The model service and Web application have been prototyped in the San Antonio and Guadalupe River Basin in Texas, with input from university and agency partners. In the future, optimization model workflows will be developed to link with the RAPID model workflow to provide real-time water allocation decision support services.

  14. Gas Discharge Visualization: An Imaging and Modeling Tool for Medical Biometrics

    PubMed Central

    Kostyuk, Nataliya; Cole, Phyadragren; Meghanathan, Natarajan; Isokpehi, Raphael D.; Cohly, Hari H. P.

    2011-01-01

    The need for automated identification of a disease makes the issue of medical biometrics very current in our society. Not all available biometric tools provide real-time feedback. We introduce the gas discharge visualization (GDV) technique as one of the biometric tools that have the potential to identify deviations from the normal functional state at early stages and in real time. GDV is a nonintrusive technique to capture the physiological and psychoemotional status of a person and the functional status of different organs and organ systems through the electrophotonic emissions of fingertips placed on the surface of an impulse analyzer. This paper first introduces biometrics and its different types and then specifically focuses on medical biometrics and the potential applications of GDV in medical biometrics. We also present our previous experience with GDV in research regarding autism and the potential use of GDV in combination with computer science for the potential development of a biological pattern/biomarker for different kinds of health abnormalities, including cancer and mental diseases. PMID:21747817

  15. Gas discharge visualization: an imaging and modeling tool for medical biometrics.

    PubMed

    Kostyuk, Nataliya; Cole, Phyadragren; Meghanathan, Natarajan; Isokpehi, Raphael D; Cohly, Hari H P

    2011-01-01

    The need for automated identification of disease makes medical biometrics a timely issue. Not all available biometric tools provide real-time feedback. We introduce the gas discharge visualization (GDV) technique as a biometric tool with the potential to identify deviations from the normal functional state at early stages and in real time. GDV is a nonintrusive technique that captures the physiological and psychoemotional status of a person, and the functional status of different organs and organ systems, through the electrophotonic emissions of fingertips placed on the surface of an impulse analyzer. This paper first introduces biometrics and its different types, then focuses specifically on medical biometrics and the potential applications of GDV in this field. We also present our previous experience with GDV in research on autism, and discuss the potential use of GDV in combination with computer science to develop biological patterns/biomarkers for various health abnormalities, including cancer and mental diseases.

  16. Transforming GIS data into functional road models for large-scale traffic simulation.

    PubMed

    Wilkie, David; Sewall, Jason; Lin, Ming C

    2012-06-01

    There exists a vast amount of geographic information system (GIS) data that model road networks around the world as polylines with attributes. In this form, the data are insufficient for applications such as simulation and 3D visualization, tools that will grow in power and demand as sensor data become more pervasive and as governments try to optimize their existing physical infrastructure. In this paper, we propose an efficient method for enhancing a road map from a GIS database to create a geometrically and topologically consistent 3D model to be used in real-time traffic simulation, interactive visualization of virtual worlds, and autonomous vehicle navigation. The resulting representation provides important road features for traffic simulations, including ramps, highways, overpasses, legal merge zones, and intersections with arbitrary states, and it is independent of the simulation methodology. We test the 3D models of road networks generated by our algorithm in real-time traffic simulation using both macroscopic and microscopic techniques.
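    A typical first step when turning raw GIS polylines into smooth road centerlines is resampling each polyline at uniform arc-length spacing. The sketch below illustrates only that preprocessing step, under assumed 2D coordinates; it is not the authors' full pipeline.

```python
import math

def resample_polyline(points, spacing):
    """Resample a 2D polyline at approximately uniform arc-length spacing,
    preserving both endpoints. Illustrative preprocessing only."""
    # cumulative arc length at each original vertex
    cum = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        cum.append(cum[-1] + math.hypot(x1 - x0, y1 - y0))
    total = cum[-1]
    n = max(1, int(total // spacing))
    targets = [i * total / n for i in range(n + 1)]
    out, seg = [], 0
    for t in targets:
        # advance to the segment containing arc length t
        while seg < len(points) - 2 and cum[seg + 1] < t:
            seg += 1
        span = cum[seg + 1] - cum[seg]
        a = 0.0 if span == 0 else (t - cum[seg]) / span
        (x0, y0), (x1, y1) = points[seg], points[seg + 1]
        out.append((x0 + a * (x1 - x0), y0 + a * (y1 - y0)))
    return out

# An L-shaped polyline of total length 15, resampled every 2.5 units.
pts = resample_polyline([(0, 0), (10, 0), (10, 5)], spacing=2.5)
```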

  17. Enhancement of Temporal Resolution and BOLD Sensitivity in Real-Time fMRI using Multi-Slab Echo-Volumar Imaging

    PubMed Central

    Posse, Stefan; Ackley, Elena; Mutihac, Radu; Rick, Jochen; Shane, Matthew; Murray-Krezan, Cristina; Zaitsev, Maxim; Speck, Oliver

    2012-01-01

    In this study, a new approach to high-speed fMRI using multi-slab echo-volumar imaging (EVI) is developed that minimizes geometrical image distortion and spatial blurring, and enables nonaliased sampling of physiological signal fluctuation to increase BOLD sensitivity compared to conventional echo-planar imaging (EPI). Real-time fMRI using whole-brain 4-slab EVI with 286 ms temporal resolution (4 mm isotropic voxel size) and partial-brain 2-slab EVI with 136 ms temporal resolution (4×4×6 mm3 voxel size) was performed on a clinical 3 Tesla MRI scanner equipped with a 12-channel head coil. Four-slab EVI of visual and motor tasks significantly increased mean (visual: 96%, motor: 66%) and maximum t-score (visual: 263%, motor: 124%) and mean (visual: 59%, motor: 131%) and maximum (visual: 29%, motor: 67%) BOLD signal amplitude compared with EPI. Time-domain moving-average filtering (2 s width) to suppress physiological noise from cardiac and respiratory fluctuations further improved mean (visual: 196%, motor: 140%) and maximum (visual: 384%, motor: 200%) t-scores and increased extents of activation (visual: 73%, motor: 70%) compared to EPI. Similar sensitivity enhancement, which is attributed to the high sampling rate at only moderately reduced temporal signal-to-noise ratio (mean: −52%) and longer sampling of the BOLD effect in the echo-time domain compared to EPI, was measured in auditory cortex. Two-slab EVI further improved temporal resolution for measuring task-related activation and enabled mapping of five major resting-state networks (RSNs) in individual subjects in 5 min scans. The bilateral sensorimotor, the default mode and the occipital RSNs were detectable in time frames as short as 75 s. In conclusion, the high sampling rate of real-time multi-slab EVI significantly improves sensitivity for studying the temporal dynamics of hemodynamic responses and for characterizing functional networks at high field strength in short measurement times. PMID:22398395
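    The 2 s time-domain moving-average filter the study uses to suppress cardiac and respiratory fluctuations can be sketched as follows. The window length is derived from the reported 286 ms sampling interval; the filter itself is a generic centered moving average, not the authors' exact implementation.

```python
def moving_average(signal, window):
    """Centered moving-average filter; edges use a shrunken window."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

# A 2 s window at the paper's 286 ms sampling interval spans ~7 samples.
tr = 0.286
window = round(2.0 / tr)  # 7 samples
smoothed = moving_average([0, 0, 1, 0, 0, 1, 0, 0], window)
```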

  18. Novel Scientific Visualization Interfaces for Interactive Information Visualization and Sharing

    NASA Astrophysics Data System (ADS)

    Demir, I.; Krajewski, W. F.

    2012-12-01

    As geoscientists are confronted with increasingly massive datasets, from environmental observations to simulations, one of the biggest challenges is having the right tools to gain scientific insight from the data and communicate that understanding to stakeholders. Recent developments in web technologies make it easy to manage, visualize, and share large data sets with the general public. Novel visualization techniques and dynamic user interfaces allow users to interact with data and modify parameters to create custom views, gaining insight from simulations and environmental observations. This requires developing new data models and intelligent knowledge-discovery techniques to explore and extract information from complex computational simulations or large data repositories. Scientific visualization will be an increasingly important component of comprehensive environmental information platforms. This presentation provides an overview of the trends and challenges in the field of scientific visualization, and demonstrates the information visualization and communication tools in the Iowa Flood Information System (IFIS), developed in light of these challenges. The IFIS is a web-based platform developed by the Iowa Flood Center (IFC) to provide access to, and visualization of, flood inundation maps, real-time flood conditions, short-term and seasonal flood forecasts, and other flood-related data for communities in Iowa. The key element of the system's architecture is the notion of community: the locations of communities, those near streams and rivers, define basin boundaries. The IFIS provides community-centric watershed and river characteristics, weather (rainfall) conditions, and streamflow data and visualization tools. Interactive interfaces provide access to inundation maps for different stage and return-period values, and to flooding scenarios with contributions from multiple rivers. Real-time and historical data on water levels, gauge heights, and rainfall conditions are available in the IFIS. 2D and 3D interactive visualizations make the data more understandable to the general public. Users can filter data sources for their communities and selected rivers. The data and information in the IFIS are also accessible through web services and mobile applications. The IFIS is optimized for various browsers and screen sizes to provide access through multiple platforms, including tablets and mobile devices. Multiple view modes accommodate different user types, from the general public to researchers and decision makers, by providing different levels of tools and detail. The river view mode allows users to visualize data from multiple IFC bridge sensors and USGS stream gauges to follow flooding conditions along a river. The IFIS will help communities make better-informed decisions on the occurrence of floods, and will alert communities in advance to help minimize flood damage.

  19. A MATLAB-based graphical user interface for the identification of muscular activations from surface electromyography signals.

    PubMed

    Mengarelli, Alessandro; Cardarelli, Stefano; Verdini, Federica; Burattini, Laura; Fioretti, Sandro; Di Nardo, Francesco

    2016-08-01

    In this paper a graphical user interface (GUI) built in the MATLAB® environment is presented. This interactive tool has been developed for the analysis of surface electromyography (sEMG) signals, and in particular for the assessment of muscle activation time intervals. After signal import, the tool performs a first analysis in a fully user-independent way, providing a reliable computation of the muscular activation sequences. Furthermore, the user has the opportunity to modify each parameter of the on/off identification algorithm implemented in the tool. The user-friendly GUI allows immediate evaluation of the effect that modifying any single parameter has on the recognition of activation intervals, through real-time updating and visualization of the muscular activation/deactivation sequences. The possibility to accept the initial signal analysis or to modify the on/off identification for each signal, with real-time visual feedback, makes this GUI-based tool a valuable instrument in clinical and research applications, as well as from an educational perspective.
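    A common baseline for this kind of on/off identification is a rectified moving-average envelope compared against an amplitude threshold. The sketch below illustrates that generic approach; the threshold rule, window width, and factor k are assumptions, not the paper's algorithm or its parameters.

```python
def detect_activations(emg, fs, win_s=0.05, k=3.0):
    """Detect (onset, offset) sample intervals from a raw sEMG trace via a
    rectified moving-average envelope and an amplitude threshold.
    A generic sketch of threshold-based onset detection."""
    rect = [abs(x) for x in emg]
    w = max(1, int(win_s * fs))
    env = [sum(rect[max(0, i - w):i + 1]) / (i - max(0, i - w) + 1)
           for i in range(len(rect))]
    baseline = sorted(env)[len(env) // 10]  # low percentile as rest level
    thresh = k * max(baseline, 1e-12)
    active = [e > thresh for e in env]
    # collapse the boolean mask into (onset, offset) intervals
    intervals, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            intervals.append((start, i))
            start = None
    if start is not None:
        intervals.append((start, len(active)))
    return intervals

# synthetic trace at fs = 1000 Hz: rest, one burst, rest
sig = [0.01] * 300 + [0.5, -0.5] * 150 + [0.01] * 300
bursts = detect_activations(sig, fs=1000)
```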

  20. Atmospheric Radiation Measurement's Data Management Facility captures metadata and uses visualization tools to assist in routine data management.

    NASA Astrophysics Data System (ADS)

    Keck, N. N.; Macduff, M.; Martin, T.

    2017-12-01

    The Atmospheric Radiation Measurement (ARM) program's Data Management Facility (DMF) plays a critical support role in processing and curating data generated by the Department of Energy's ARM Program. Data are collected in near real time from hundreds of observational instruments spread across the globe. Data are then ingested hourly to provide time-series data in NetCDF (Network Common Data Form) with standardized metadata. Based on automated processes and a variety of user reviews, the data may need to be reprocessed. Final data sets are then stored and accessed by users through the ARM Archive. Over the course of 20 years, a suite of data visualization tools has been developed to facilitate the operational processes that manage and maintain the more than 18,000 real-time events that move 1.3 TB of data each day through the various stages of the DMF's data system. This poster will present the resources and methodology used to capture metadata and the tools that assist in routine data management and discoverability.

  1. Acting to gain information: Real-time reasoning meets real-time perception

    NASA Technical Reports Server (NTRS)

    Rosenschein, Stan

    1994-01-01

    Recent advances in intelligent reactive systems suggest new approaches to the problem of deriving task-relevant information from perceptual systems in real time. The author will describe work in progress aimed at coupling intelligent control mechanisms to real-time perception systems, with special emphasis on frame rate visual measurement systems. A model for integrated reasoning and perception will be discussed, and recent progress in applying these ideas to problems of sensor utilization for efficient recognition and tracking will be described.

  2. A contourlet transform based algorithm for real-time video encoding

    NASA Astrophysics Data System (ADS)

    Katsigiannis, Stamos; Papaioannou, Georgios; Maroulis, Dimitris

    2012-06-01

    In recent years, real-time video communication over the internet has been widely utilized for applications like video conferencing. Streaming live video over heterogeneous IP networks, including wireless networks, requires video coding algorithms that can support various levels of quality in order to adapt to the network end-to-end bandwidth and transmitter/receiver resources. In this work, a scalable video coding and compression algorithm based on the Contourlet Transform is proposed. The algorithm allows for multiple levels of detail, without re-encoding the video frames, simply by dropping the encoded information referring to higher resolution than needed. Compression is achieved by means of lossy and lossless methods, as well as variable-bit-rate encoding schemes. Furthermore, due to the transformation utilized, it does not suffer from the blocking artifacts that occur with many widely adopted compression algorithms. Another highly advantageous characteristic of the algorithm is the suppression of noise induced by the low-quality sensors usually encountered in web cameras, due to the manipulation of the transform coefficients at the compression stage. The proposed algorithm is designed to introduce minimal coding delay, thus achieving real-time performance. Performance is enhanced by utilizing the vast computational capabilities of modern GPUs, providing satisfactory encoding and decoding times at relatively low cost. These characteristics make this method suitable for applications like video conferencing that demand real-time performance along with the highest visual quality possible for each user. The presented performance and quality evaluation shows that the proposed algorithm achieves better or comparable visual quality relative to the other compression and encoding methods tested, while maintaining a satisfactory compression ratio. Especially at low bitrates, it produces images that are friendlier to the human eye than those of block-based coding algorithms such as the MPEG family, introducing fuzziness and blurring instead of artificial block artifacts.
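    The scalability idea, dropping encoded detail that exceeds the needed resolution, can be shown in miniature with a one-level Haar average standing in for the contourlet transform (an assumption made here for brevity): the low-frequency subband alone still decodes to a valid, coarser frame.

```python
def ll_subband(block):
    """One-level 2D Haar approximation (LL) subband of a grayscale block
    with even dimensions. The three detail subbands are simply not kept,
    mimicking a decoder that drops higher-resolution information; the
    paper itself uses the contourlet transform instead of Haar."""
    h, w = len(block), len(block[0])
    ll = [[0.0] * (w // 2) for _ in range(h // 2)]
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            ll[i // 2][j // 2] = (block[i][j] + block[i][j + 1] +
                                  block[i + 1][j] + block[i + 1][j + 1]) / 4.0
    return ll

frame = [[0, 0, 8, 8],
         [0, 0, 8, 8],
         [4, 4, 4, 4],
         [4, 4, 4, 4]]
low_res = ll_subband(frame)  # a 2x2 preview, decodable on its own
```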

  3. Real-time mandibular angle reduction surgical simulation with haptic rendering.

    PubMed

    Wang, Qiong; Chen, Hui; Wu, Wen; Jin, Hai-Yang; Heng, Pheng-Ann

    2012-11-01

    Mandibular angle reduction is a popular and efficient procedure widely used to alter the facial contour. The primary surgical instruments employed in the surgery, the reciprocating saw and the round burr, share a common feature: they operate at high speed. Inexperienced surgeons generally need long practice to learn how to minimize the risks caused by uncontrolled contacts and cutting motions when manipulating instruments with high-speed reciprocation or rotation. In this paper, a virtual-reality-based surgical simulator for mandibular angle reduction was designed and implemented on a CUDA-based platform. High-fidelity visual and haptic feedback is provided to enhance perception in a realistic virtual surgical environment. Impulse-based haptic models were employed to simulate the contact forces and torques on the instruments, providing convincing haptic sensation for surgeons controlling the instruments at different reciprocation or rotation velocities. Real-time methods for bone removal and reconstruction during surgical procedures are proposed to support realistic visual feedback. The simulated contact forces were verified by comparison against actual force data measured on the constructed mechanical platform. An empirical study based on patient-specific data was conducted to evaluate the ability of the proposed system to train surgeons of varying experience. The results confirm the validity of our simulator.

  4. Position estimation and driving of an autonomous vehicle by monocular vision

    NASA Astrophysics Data System (ADS)

    Hanan, Jay C.; Kayathi, Pavan; Hughlett, Casey L.

    2007-04-01

    Automatic adaptive tracking in real time for target recognition provided autonomous control of a scale-model electric truck. The two-wheel-drive truck was modified as an autonomous rover test-bed for vision-based guidance and navigation. Methods were implemented to monitor tracking error and ensure a safe, accurate arrival at the intended science target. Some methods are situation independent, relying only on the confidence error of the target recognition algorithm; other methods exploit the combination of motion and tracking to filter out anomalies. In either case, only a single calibrated camera was needed for position estimation. Results from real-time autonomous driving tests in the JPL simulated Mars yard are presented. Recognition error was often situation dependent. In the rover case, the background was in motion and could be characterized to provide visual cues on rover travel such as rate, pitch, roll, and distance to objects of interest or hazards. Objects in the scene may be used as landmarks, or waypoints, for such estimates. As objects are approached, their scale increases and their orientation may change; particularly on rough terrain, these orientation and scale changes may be unpredictable. Feature extraction combined with the neural network algorithm was successful in providing visual odometry in the simulated Mars environment.

  5. Registration of angiographic image on real-time fluoroscopic image for image-guided percutaneous coronary intervention.

    PubMed

    Kim, Dongkue; Park, Sangsoo; Jeong, Myung Ho; Ryu, Jeha

    2018-02-01

    In percutaneous coronary intervention (PCI), cardiologists must study two different X-ray image sources: a fluoroscopic image and an angiogram. Manipulating a guidewire while alternately monitoring the two separate images on separate screens requires a deep understanding of the anatomy of the coronary vessels and substantial training. We propose 2D/2D spatiotemporal registration of the two images into a single image in order to provide cardiologists with enhanced visual guidance in PCI. The proposed method uses the cross-correlation of the two ECG series accompanying the images to temporally synchronize them and register an angiographic image onto the fluoroscopic image. A guidewire centerline is then extracted from the fluoroscopic image in real time, and the alignment of the centerline with the vessel outlines of the chosen angiographic image is optimized using the iterative closest point algorithm for spatial registration. A proof-of-concept evaluation with a phantom coronary vessel model and engineering students showed a greater than 74% reduction in erroneous insertions into nontarget branches compared to the non-registration method, and a more than 47% reduction in task completion time when performing guidewire manipulation for very difficult tasks. Evaluation with a small number of experienced doctors shows a potentially significant reduction in both task completion time and error rate for difficult tasks. The total registration time with real procedure X-ray (angiographic and fluoroscopic) images is approximately 60 ms, which is within the fluoroscopic image acquisition rate of 15 Hz. By providing cardiologists with better visual guidance in PCI, the proposed spatiotemporal image registration method is shown to be useful in advancing the guidewire into the coronary vessel branches, especially those difficult to insert into.
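    The temporal-synchronization step, finding the lag that best aligns the two ECG series, can be sketched with a brute-force cross-correlation search. The synthetic traces below are assumptions for illustration; real ECG alignment would first detrend and normalize both series.

```python
def best_lag(a, b, max_lag):
    """Return the shift of b (in samples) that maximizes its
    cross-correlation with a. Minimal sketch of ECG-based temporal sync."""
    def corr(lag):
        return sum(a[i] * b[i - lag] for i in range(len(a))
                   if 0 <= i - lag < len(b))
    return max(range(-max_lag, max_lag + 1), key=corr)

ecg_fluoro = [0, 0, 0, 5, 1, 0, 0, 5, 1, 0]  # synthetic R-peaks
ecg_angio  = [0, 5, 1, 0, 0, 5, 1, 0, 0, 0]  # same rhythm, 2 samples earlier
lag = best_lag(ecg_fluoro, ecg_angio, max_lag=4)
```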

  6. vMon-mobile provides wireless connection to the electronic patient record

    NASA Astrophysics Data System (ADS)

    Oliveira, Pedro P., Jr.; Rebelo, Marina; Pilon, Paulo E.; Gutierrez, Marco A.; Tachinardi, Umberto

    2002-05-01

    This work presents the development of a set of tools to help doctors continuously monitor critical patients. Real-time monitoring signals are displayed via a Web-based Electronic Patient Record (Web-EPR) developed at the Heart Institute. Any computer on the hospital's intranet can access the Web-EPR, which opens a browser plug-in called vMon. Recently, vMon was adapted to wireless mobile devices, providing the same real-time visualization of vital signs as its desktop counterpart. The monitoring network communicates with the hospital network through a gateway using HL7 messages and can export waveforms in real time using the multicast protocol through an API library. A dedicated ActiveX component was built that establishes the streaming of the biomedical signals under monitoring and displays them in an Internet Explorer 5.x browser. The mobile version, called vMon-mobile, parses the browser window and delivers it to a PDA device connected to a local area network. The result is a virtual monitor presenting real-time data on a mobile device. All parameters and signals acquired from the moment the patient is connected to the monitors are stored for a few days, and the most clinically relevant information is added to the patient's EPR.

  7. NOAA's Science On a Sphere Education Program: Application of a Scientific Visualization System to Teach Earth System Science and Improve our Understanding About Creating Effective Visualizations

    NASA Astrophysics Data System (ADS)

    McDougall, C.; McLaughlin, J.

    2008-12-01

    NOAA has developed several programs aimed at facilitating the use of earth system science data and data visualizations by formal and informal educators. One of them, Science On a Sphere, a visualization display system that uses networked LCD projectors to display animated global datasets onto the outside of a suspended, 1.7-meter-diameter opaque sphere, enables science centers, museums, and universities to display real-time and current earth system science data. NOAA's Office of Education has provided grants to such education institutions to develop exhibits featuring Science On a Sphere (SOS), create content for them, and evaluate audience impact. Currently, 20 public education institutions have permanent Science On a Sphere exhibits and 6 more will be installed soon. These institutions, and others that are working to create and evaluate content for this system, work collaboratively as a network to improve our collective knowledge about how to create educationally effective visualizations. Network members include other federal agencies, such as NASA and the Department of Energy, and major museums such as the Smithsonian and the American Museum of Natural History, as well as a variety of mid-sized and small museums and universities. Although the audiences in these institutions vary widely in their scientific awareness and understanding, we find there are misconceptions and a lack of familiarity with viewing visualizations that are common among the audiences. Through evaluations performed in these institutions we continue to evolve our understanding of how to create content that is understandable by those with minimal scientific literacy. The findings from our network will be presented, including the importance of providing context, real-world connections, and imagery to accompany the visualizations, and the need for audience orientation before the visualizations are viewed. Additionally, we will review the publicly accessible virtual library housing over 200 datasets for SOS and any other real or virtual globe. These datasets represent contributions from NOAA, NASA, the Department of Energy, and the public institutions that are displaying the spheres.

  8. High-resolution ultrasound imaging and noninvasive optoacoustic monitoring of blood variables in peripheral blood vessels

    NASA Astrophysics Data System (ADS)

    Petrov, Irene Y.; Petrov, Yuriy; Prough, Donald S.; Esenaliev, Rinat O.

    2011-03-01

    Ultrasound imaging is being widely used in clinics to obtain diagnostic information non-invasively and in real time. A high-resolution ultrasound imaging platform, Vevo (VisualSonics, Inc.) provides in vivo, real-time images with exceptional resolution (up to 30 microns) using high-frequency transducers (up to 80 MHz). Recently, we built optoacoustic systems for probing radial artery and peripheral veins that can be used for noninvasive monitoring of total hemoglobin concentration, oxyhemoglobin saturation, and concentration of important endogenous and exogenous chromophores (such as ICG). In this work we used the high-resolution ultrasound imaging system Vevo 770 for visualization of the radial artery and peripheral veins and acquired corresponding optoacoustic signals from them using the optoacoustic systems. Analysis of the optoacoustic data with a specially developed algorithm allowed for measurement of blood oxygenation in the blood vessels as well as for continuous, real-time monitoring of arterial and venous blood oxygenation. Our results indicate that: 1) the optoacoustic technique (unlike pure optical approaches and other noninvasive techniques) is capable of accurate peripheral venous oxygenation measurement; and 2) peripheral venous oxygenation is dependent on skin temperature and local hemodynamics. Moreover, we performed for the first time (to the best of our knowledge) a comparative study of optoacoustic arterial oximetry and a standard pulse oximeter in humans and demonstrated superior performance of the optoacoustic arterial oximeter, in particular at low blood flow.

  9. Real-Time Risk Assessment Framework for Unmanned Aircraft System (UAS) Traffic Management (UTM)

    NASA Technical Reports Server (NTRS)

    Ancel, Ersin; Capristan, Francisco M.; Foster, John V.; Condotta, Ryan

    2017-01-01

    The new Federal Aviation Administration (FAA) Small Unmanned Aircraft rule (Part 107) marks the first national regulations for commercial operation of small unmanned aircraft systems (sUAS) under 55 pounds within the National Airspace System (NAS). Although sUAS flights may not be performed beyond visual line of sight or over non-participant structures and people, the safety of sUAS operations must still be maintained and tracked at all times. Moreover, future safety-critical operations of sUAS (e.g., for package delivery) are already being conceived and tested. NASA's Unmanned Aircraft System Traffic Management (UTM) concept aims to facilitate the safe use of low-altitude airspace for sUAS operations. This paper introduces the UTM Risk Assessment Framework (URAF), which was developed to provide real-time safety evaluation and tracking capability within the UTM concept. The URAF uses Bayesian Belief Networks (BBNs) to propagate off-nominal condition probabilities based on real-time component failure indicators. This information is then used to assess the risk to people on the ground by calculating the potential impact area and the effects of the impact. The visual representation of the expected impact area and the nominal risk level can assist operators and controllers with dynamic trajectory planning and execution. The URAF was applied to a case study to illustrate the concept.
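    At its smallest, the kind of probability propagation a BBN performs is a Bayesian update of an off-nominal condition given a fired failure indicator. The two-node example below uses made-up numbers, not URAF parameters.

```python
# Toy two-node Bayesian update: how a real-time failure indicator shifts
# the probability of an off-nominal condition. All numbers are illustrative.
p_fail = 0.01            # prior P(component failure)
p_ind_given_fail = 0.95  # indicator sensitivity
p_ind_given_ok = 0.05    # indicator false-alarm rate

# Bayes' rule: P(failure | indicator fired)
p_ind = p_ind_given_fail * p_fail + p_ind_given_ok * (1 - p_fail)
p_fail_given_ind = p_ind_given_fail * p_fail / p_ind
```

Even with a sensitive indicator, the low prior keeps the posterior modest (about 0.16 here), which is why a full network combines several indicators before declaring an off-nominal condition.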

  10. Skylab explores the Earth

    NASA Technical Reports Server (NTRS)

    1977-01-01

    Data from visual observations are integrated with the results of analyses of approximately 600 of the nearly 2,000 photographs taken of Earth during the 84-day Skylab 4 mission to provide additional information on (1) Earth features and processes; (2) operational procedures and constraints in observing and photographing the planet; and (3) the use of man in real-time analysis of oceanic and atmospheric phenomena.

  11. Absence of Sublexical Representations in Late-Learning Signers? A Statistical Critique of Lieberman et al. (2015)

    ERIC Educational Resources Information Center

    Salverda, Anne Pier

    2016-01-01

    Lieberman, Borovsky, Hatrak, and Mayberry (2015) used a modified version of the visual-world paradigm to examine the real-time processing of signs in American Sign Language. They examined the activation of phonological and semantic competitors in native signers and late-learning signers and concluded that their results provide evidence that the…

  12. A Distributed GPU-Based Framework for Real-Time 3D Volume Rendering of Large Astronomical Data Cubes

    NASA Astrophysics Data System (ADS)

    Hassan, A. H.; Fluke, C. J.; Barnes, D. G.

    2012-05-01

    We present a framework to volume-render three-dimensional data cubes interactively using distributed ray-casting and volume-bricking over a cluster of workstations powered by one or more graphics processing units (GPUs) and a multi-core central processing unit (CPU). The main design target for this framework is to provide an in-core visualization solution able to provide three-dimensional interactive views of terabyte-sized data cubes. We tested the presented framework using a computing cluster comprising 64 nodes with a total of 128 GPUs. The framework proved to be scalable to render a 204 GB data cube with an average of 30 frames per second. Our performance analyses also compare the use of NVIDIA Tesla 1060 and 2050 GPU architectures and the effect of increasing the visualization output resolution on the rendering performance. Although our initial focus, as shown in the examples presented in this work, is volume rendering of spectral data cubes from radio astronomy, we contend that our approach has applicability to other disciplines where close to real-time volume rendering of terabyte-order three-dimensional data sets is a requirement.
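    The core accumulation step of volume ray-casting, front-to-back alpha compositing along one ray, can be sketched as follows. This is the textbook compositing rule, not the framework's distributed GPU implementation; sample values and opacities below are illustrative.

```python
def composite_ray(samples, alphas):
    """Front-to-back alpha compositing of one ray through a volume.
    samples: emitted intensities per step; alphas: opacity per step."""
    color, trans = 0.0, 1.0  # accumulated intensity, remaining transparency
    for c, a in zip(samples, alphas):
        color += trans * a * c
        trans *= (1.0 - a)
        if trans < 1e-3:     # early ray termination once nearly opaque
            break
    return color

# three samples along a ray, each half-opaque
value = composite_ray([1.0, 0.5, 0.2], [0.5, 0.5, 0.5])
```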

  13. Neuromorphic VLSI vision system for real-time texture segregation.

    PubMed

    Shimonomura, Kazuhiro; Yagi, Tetsuya

    2008-10-01

    The visual system of the brain can perceive an external scene in real time with extremely low power dissipation, although the response speed of an individual neuron is considerably lower than that of semiconductor devices. The neurons in the visual pathway generate their receptive fields using a parallel and hierarchical architecture. This architecture of the visual cortex is interesting and important for designing novel perception systems from an engineering perspective. The aim of this study is to develop vision-system hardware, inspired by the hierarchical visual processing in V1, for real-time texture segregation. The system consists of a silicon retina, an orientation chip, and a field-programmable gate array (FPGA) circuit. The silicon retina emulates the neural circuits of the vertebrate retina and exhibits a Laplacian-of-Gaussian-like receptive field. The orientation chip selectively aggregates multiple pixels of the silicon retina to produce Gabor-like receptive fields tuned to various orientations, mimicking the feed-forward model proposed by Hubel and Wiesel. The FPGA circuit receives the output of the orientation chip and computes the responses of the complex cells. Using this system, the neural images of simple cells were computed in real time for various orientations and spatial frequencies. Using the orientation-selective outputs of the multi-chip system, real-time texture segregation was conducted based on a computational model inspired by psychophysics and neurophysiology. The texture image was filtered by the two orthogonally oriented receptive fields of the multi-chip system, and the filtered images were combined to segregate areas of different texture orientation with the aid of the FPGA. The present system is also useful for investigating the functions of higher-order cells that can be obtained by combining the simple and complex cells.
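    The principle of segregating textures by comparing two orthogonally oriented channels can be illustrated in software with simple gradient energies standing in for the chip's Gabor-like receptive fields (a deliberate simplification; the hardware computes far richer responses).

```python
def orientation_energy(img):
    """Per-pixel horizontal vs. vertical gradient energy: a crude software
    stand-in for two orthogonal orientation channels."""
    h, w = len(img), len(img[0])
    eh = [[0.0] * w for _ in range(h)]  # responds to vertical stripes
    ev = [[0.0] * w for _ in range(h)]  # responds to horizontal stripes
    for i in range(h):
        for j in range(w):
            if j + 1 < w:
                eh[i][j] = (img[i][j + 1] - img[i][j]) ** 2
            if i + 1 < h:
                ev[i][j] = (img[i + 1][j] - img[i][j]) ** 2
    return eh, ev

# texture patch: vertical stripes on the left, horizontal stripes on the right
left = [[c % 2 for c in range(4)] for _ in range(4)]
right = [[r % 2] * 4 for r in range(4)]
img = [lrow + rrow for lrow, rrow in zip(left, right)]
eh, ev = orientation_energy(img)
# the vertical-stripe region dominates the horizontal-gradient channel
left_h = sum(eh[i][j] for i in range(4) for j in range(3))
left_v = sum(ev[i][j] for i in range(4) for j in range(3))
```

Comparing the two channel energies region by region separates the patch into its two texture areas, which is the essence of the segregation model.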

  14. Real-time Crystal Growth Visualization and Quantification by Energy-Resolved Neutron Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tremsin, Anton S.; Perrodin, Didier; Losko, Adrian S.

    Energy-resolved neutron imaging is investigated as a real-time diagnostic tool for visualization and in-situ measurements of "blind" processes. This technique is demonstrated for Bridgman-type crystal growth, enabling remote and direct measurements of growth parameters crucial for process optimization. The location and shape of the interface between liquid and solid phases are monitored in real time, concurrently with the measurement of elemental distribution within the growth volume and with the identification of structural features at ~100 μm spatial resolution. Such diagnostics can substantially reduce the development time between exploratory small-scale growth of new materials and their subsequent commercial production. This technique is widely applicable and is not limited to crystal growth processes.

  15. Real-time Crystal Growth Visualization and Quantification by Energy-Resolved Neutron Imaging

    DOE PAGES

    Tremsin, Anton S.; Perrodin, Didier; Losko, Adrian S.; ...

    2017-04-20

Energy-resolved neutron imaging is investigated as a real-time diagnostic tool for visualization and in-situ measurements of "blind" processes. This technique is demonstrated for Bridgman-type crystal growth, enabling remote and direct measurements of growth parameters crucial for process optimization. The location and shape of the interface between the liquid and solid phases are monitored in real time, concurrently with the measurement of elemental distribution within the growth volume and the identification of structural features with ~100 μm spatial resolution. Such diagnostics can substantially reduce the development time between exploratory small-scale growth of new materials and their subsequent commercial production. This technique is widely applicable and is not limited to crystal growth processes.

  16. Indexed triangle strips optimization for real-time visualization using genetic algorithm: preliminary study

    NASA Astrophysics Data System (ADS)

    Tanaka, Kiyoshi; Takano, Shuichi; Sugimura, Tatsuo

    2000-10-01

In this work we focus on indexed triangle strips, an extended representation of triangle strips that improves the efficiency of the geometrical transformation of vertices, and present a method to construct optimal indexed triangle strips using a Genetic Algorithm (GA) for real-time visualization. The main objective of this work is to construct indexed triangle strips optimally by improving the ratio at which data stored in the cache memory are reused, while simultaneously reducing the total number of indices with the GA. Simulation results verify that the average number of indices and the cache-miss ratio per polygon could be kept small, and consequently the total visualization time required for the optimum solution obtained by this scheme could be remarkably reduced.
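The quantity being optimized can be made concrete with a post-transform vertex-cache simulation: each index not resident in the cache forces a vertex re-transformation, and the GA searches for orderings that minimize this miss ratio together with the index count. A sketch under assumed cache semantics (FIFO, size 4 here; real GPU caches vary):

```python
from collections import deque

def cache_miss_ratio(indices, cache_size=16):
    """Count vertex-transform misses under a FIFO post-transform cache."""
    cache = deque(maxlen=cache_size)     # oldest resident vertex is evicted first
    misses = 0
    for idx in indices:
        if idx not in cache:
            misses += 1
            cache.append(idx)            # FIFO: cache hits do not refresh residency
    return misses / len(indices)

# Same six triangles over vertices 0..7, indexed in two different orders.
strip_order = [0, 1, 2,  1, 2, 3,  2, 3, 4,  3, 4, 5,  4, 5, 6,  5, 6, 7]
shuffled    = [0, 1, 2,  4, 5, 6,  1, 2, 3,  5, 6, 7,  2, 3, 4,  3, 4, 5]

r_strip = cache_miss_ratio(strip_order, cache_size=4)
r_shuffled = cache_miss_ratio(shuffled, cache_size=4)
```

Here the strip-like ordering transforms each of the eight vertices exactly once (8 misses in 18 indices), while the scattered ordering re-transforms almost every index (16 of 18).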

  17. UML as a cell and biochemistry modeling language.

    PubMed

    Webb, Ken; White, Tony

    2005-06-01

The systems biology community is building increasingly complex models and simulations of cells and other biological entities, and is beginning to look at alternatives to traditional representations such as those provided by ordinary differential equations (ODE). The lessons learned over the years by the software development community in designing and building increasingly complex telecommunication and other commercial real-time reactive systems can be advantageously applied to the problems of modeling in the biology domain. Making use of the object-oriented (OO) paradigm, the unified modeling language (UML) and Real-Time Object-Oriented Modeling (ROOM) visual formalisms, and the Rational Rose RealTime (RRT) visual modeling tool, we describe a multi-step process we have used to construct top-down models of cells and cell aggregates. The simple example model described in this paper includes membranes with lipid bilayers, multiple compartments including a variable number of mitochondria, substrate molecules, enzymes with reaction rules, and metabolic pathways. We demonstrate the relevance of abstraction, reuse, objects, classes, component and inheritance hierarchies, multiplicity, visual modeling, and other current software development best practices. We show how it is possible to start with a direct diagrammatic representation of a biological structure such as a cell, using terminology familiar to biologists, and, by following a process of gradually adding more and more detail, arrive at a system with structure and behavior of arbitrary complexity that can run and be observed on a computer. We discuss our CellAK (Cell Assembly Kit) approach in terms of features found in SBML, CellML, E-CELL, Gepasi, Jarnac, StochSim, Virtual Cell, and membrane computing systems.
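The containment hierarchy and reaction rules that the paper models in UML/ROOM can be caricatured in a few object-oriented classes; the class names and the single reaction below are invented for illustration and correspond to no specific CellAK model:

```python
class Compartment:
    """A nested container (UML composition): a cell holds organelles, etc."""
    def __init__(self, name):
        self.name = name
        self.parts = []        # containment hierarchy
        self.substrates = {}   # molecule name -> count

    def add(self, part):
        self.parts.append(part)
        return part

class Enzyme:
    """An enzyme with a minimal reaction rule: one substrate -> one product."""
    def __init__(self, substrate, product):
        self.substrate, self.product = substrate, product

    def step(self, comp):
        if comp.substrates.get(self.substrate, 0) > 0:
            comp.substrates[self.substrate] -= 1
            comp.substrates[self.product] = comp.substrates.get(self.product, 0) + 1

cell = Compartment("cell")
mito = cell.add(Compartment("mitochondrion"))   # multiplicity: add as many as needed
mito.substrates["pyruvate"] = 10
pdh = Enzyme("pyruvate", "acetyl-CoA")          # illustrative reaction only
for _ in range(3):
    pdh.step(mito)
```

Running `step` repeatedly plays the role of the RRT state-machine execution: structure (composition, multiplicity) and behavior (reaction rules) live in the same model and can be observed as it runs.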

  18. Binocular Goggle Augmented Imaging and Navigation System provides real-time fluorescence image guidance for tumor resection and sentinel lymph node mapping

    NASA Astrophysics Data System (ADS)

    B. Mondal, Suman; Gao, Shengkui; Zhu, Nan; Sudlow, Gail P.; Liang, Kexian; Som, Avik; Akers, Walter J.; Fields, Ryan C.; Margenthaler, Julie; Liang, Rongguang; Gruev, Viktor; Achilefu, Samuel

    2015-07-01

    The inability to identify microscopic tumors and assess surgical margins in real-time during oncologic surgery leads to incomplete tumor removal, increases the chances of tumor recurrence, and necessitates costly repeat surgery. To overcome these challenges, we have developed a wearable goggle augmented imaging and navigation system (GAINS) that can provide accurate intraoperative visualization of tumors and sentinel lymph nodes in real-time without disrupting normal surgical workflow. GAINS projects both near-infrared fluorescence from tumors and the natural color images of tissue onto a head-mounted display without latency. Aided by tumor-targeted contrast agents, the system detected tumors in subcutaneous and metastatic mouse models with high accuracy (sensitivity = 100%, specificity = 98% ± 5% standard deviation). Human pilot studies in breast cancer and melanoma patients using a near-infrared dye show that the GAINS detected sentinel lymph nodes with 100% sensitivity. Clinical use of the GAINS to guide tumor resection and sentinel lymph node mapping promises to improve surgical outcomes, reduce rates of repeat surgery, and improve the accuracy of cancer staging.

  19. Real-time Graphics Processing Unit Based Fourier Domain Optical Coherence Tomography and Surgical Applications

    NASA Astrophysics Data System (ADS)

    Zhang, Kang

    2011-12-01

In this dissertation, real-time Fourier domain optical coherence tomography (FD-OCT) capable of multi-dimensional micrometer-resolution imaging, targeted specifically at microsurgical intervention applications, was developed and studied. As part of this work, several ultra-high-speed real-time FD-OCT imaging and sensing systems were proposed and developed. A real-time 4D (3D + time) OCT system platform was developed that uses the graphics processing unit (GPU) to accelerate OCT signal processing, image reconstruction, visualization, and volume rendering. Several GPU-based algorithms, such as the non-uniform fast Fourier transform (NUFFT), numerical dispersion compensation, and a multi-GPU implementation, were developed to improve the impulse response, SNR roll-off, and stability of the system. Full-range, complex-conjugate-free FD-OCT was also implemented on the GPU architecture to double the imaging range and improve SNR. These technologies overcome the image reconstruction and visualization bottlenecks that exist widely in current ultra-high-speed FD-OCT systems and open the way to interventional OCT imaging for guided microsurgery. A hand-held common-path optical coherence tomography (CP-OCT) distance-sensor-based microsurgical tool was developed and validated. Through real-time signal processing, edge detection, and feedback control, the tool was shown to be capable of tracking a target surface and compensating for motion. A micro-incision test on a phantom was performed using the CP-OCT-sensor-integrated hand-held tool, which showed an incision error of less than +/-5 microns, compared to errors of >100 microns for free-hand incision. The CP-OCT distance sensor has also been used to enhance the accuracy and safety of optical nerve stimulation. Finally, several experiments were conducted to validate the system for surgical applications. One of them involved 4D OCT-guided micro-manipulation using a phantom. Multiple volume renderings of one 3D data set were performed at different view angles, allowing the user to monitor the micro-manipulation accurately and to see the tool-to-target spatial relation clearly in real time. The system was also validated by imaging multiple biological samples, such as a human fingerprint, a human cadaver head, and small animals. Compared to conventional surgical microscopes, GPU-based real-time FD-OCT can provide surgeons with a comprehensive real-time spatial view of the microsurgical region and accurate depth perception.
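The core reconstruction step that the GPU accelerates is, at its simplest, an FFT of the spectral interferogram along wavenumber. A toy single-reflector sketch (the source band, depth, and windowing are illustrative assumptions; a real system would add the NUFFT k-linearization and dispersion compensation described above):

```python
import numpy as np

N = 2048
k = np.linspace(2 * np.pi / 900e-9, 2 * np.pi / 810e-9, N)  # assumed source band (rad/m)
z = 150e-6                                                  # single reflector depth (m)
interferogram = np.cos(2 * k * z)                           # spectral fringes ~ cos(2kz)

# A-scan: magnitude FFT of the (already k-linear) windowed interferogram
ascan = np.abs(np.fft.fft(interferogram * np.hanning(N)))[: N // 2]

dk = k[1] - k[0]
depth_axis = np.arange(N // 2) * np.pi / (N * dk)           # depth per FFT bin
peak_depth = depth_axis[np.argmax(ascan[1:]) + 1]           # skip the DC bin
```

The A-scan peak recovers the reflector depth to within one FFT bin (about 4 μm with these numbers), which is why the throughput of this transform, not its mathematics, is the bottleneck GPUs address.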

  20. A cloud-based framework for large-scale traditional Chinese medical record retrieval.

    PubMed

    Liu, Lijun; Liu, Li; Fu, Xiaodong; Huang, Qingsong; Zhang, Xianwen; Zhang, Yin

    2018-01-01

Electronic medical records are increasingly common in medical practice, and their secondary use has become increasingly important. Secondary use relies on the ability to retrieve complete information about desired patient populations, and retrieving relevant medical records effectively and accurately from large-scale medical big data is becoming a major challenge. We therefore propose an efficient and robust cloud-based framework for large-scale Traditional Chinese Medical Records (TCMRs) retrieval. First, we propose a parallel index-building method and build a distributed search cluster; the former improves the performance of index building, and the latter provides highly concurrent online TCMRs retrieval. Second, a real-time multi-indexing model is proposed to ensure that the latest relevant TCMRs are indexed and retrieved in real time, and a semantics-based query expansion method and a multi-factor ranking model are proposed to improve retrieval quality. Third, we implement a template-based visualization method for displaying medical reports via a friendly web interface, which enhances availability and universality. In conclusion, compared with current medical record retrieval systems, our system provides advantages that are useful in improving the secondary use of large-scale traditional Chinese medical records in a cloud environment. The proposed system is also more easily integrated with existing clinical systems and can be used in various scenarios. Copyright © 2017. Published by Elsevier Inc.
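The multi-factor ranking idea can be sketched as a weighted combination of text relevance, freshness, and popularity; the weights, decay constant, and field names below are hypothetical, not those of the proposed system:

```python
from dataclasses import dataclass
import math

@dataclass
class Record:
    doc_id: str
    bm25: float        # text-relevance score from the search cluster
    timestamp: float   # seconds since epoch
    views: int         # usage count

def rank(records, now, w_rel=0.7, w_fresh=0.2, w_pop=0.1):
    """Hypothetical multi-factor score: relevance + freshness decay + popularity."""
    def score(r):
        freshness = math.exp(-(now - r.timestamp) / (180 * 86400))  # ~6-month decay
        popularity = math.log1p(r.views)                            # damp raw counts
        return w_rel * r.bm25 + w_fresh * freshness + w_pop * popularity
    return sorted(records, key=score, reverse=True)

now = 1_700_000_000.0
recent_hit = Record("r1", bm25=10.0, timestamp=now, views=100)
stale_weak = Record("r2", bm25=1.0, timestamp=now - 1e8, views=1)
ranked = rank([stale_weak, recent_hit], now)
```

Keeping relevance dominant while letting freshness break ties matches the paper's goal of surfacing the latest indexed records without sacrificing retrieval quality.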

  1. Forecasting and visualization of wildfires in a 3D geographical information system

    NASA Astrophysics Data System (ADS)

    Castrillón, M.; Jorge, P. A.; López, I. J.; Macías, A.; Martín, D.; Nebot, R. J.; Sabbagh, I.; Quintana, F. M.; Sánchez, J.; Sánchez, A. J.; Suárez, J. P.; Trujillo, A.

    2011-03-01

This paper describes a wildfire forecasting application based on a 3D virtual environment and a fire simulation engine. A novel open-source framework is presented for the development of 3D graphics applications over large geographic areas, offering high-performance 3D visualization and powerful interaction tools for the Geographic Information Systems (GIS) community. The application includes a remote module that allows simultaneous connections by several users for monitoring a real wildfire event. The system is able to compose a realistic picture of what is actually happening in the area of the wildfire, with dynamic 3D objects and the locations of human and material resources in real time, providing a new perspective for analyzing the wildfire information. The user can simulate and visualize the propagation of a fire on the terrain, integrating spatial information on topography and vegetation types with weather and wind data. The application communicates with a remote web service that is in charge of the simulation task. The user may specify several parameters through a friendly interface before the application sends the information to the remote server responsible for carrying out the wildfire forecast using the FARSITE simulation model. During the process, the server connects to different external resources to obtain up-to-date meteorological data. The client application implements a realistic 3D visualization of the fire's evolution on the landscape. A Level-of-Detail (LOD) strategy improves the performance of the visualization system.

  2. Human Haptic Interaction with Soft Objects: Discriminability, Force Control, and Contact Visualization

    DTIC Science & Technology

    1998-01-01

consisted of a videomicroscopy system and a tactile stimulator system. By using this setup, real-time images from the contact region as well as the... Videomicroscopy system. 4.3.2 Tactile stimulator system. 4.3.3 Real-time imaging setup. 4.3.4 Active and passive touch experiments. 4.3.5... contact process is an important step. In this study, therefore, a videomicroscopy system was built to visualize the contact region of the fingerpad

  3. Web-GIS-based SARS epidemic situation visualization

    NASA Astrophysics Data System (ADS)

    Lu, Xiaolin

    2004-03-01

In order to research, perform statistical analysis on, and broadcast information about the SARS epidemic situation according to its spatial position, this paper proposes a unified global visualization information platform for the SARS epidemic situation based on Web-GIS and scientific visualization technology. To set up this platform, the architecture of a Web-GIS-based interoperable information system is adopted, enabling the public to report SARS virus information to health care centers visually using web visualization technology. A GIS Java applet is used to visualize the relationship between spatial graphical data and virus distribution, and other web-based graphics such as curves, bars, maps, and multi-dimensional figures are used to visualize the relationship of SARS virus trends with time, patient numbers, or locations. The platform is designed to display SARS information in real time, simulate the real epidemic situation visually, and offer analysis tools for health departments and policy-making government departments to support decision-making in preventing the spread of the SARS virus. It could be used to analyze the virus situation through a visual graphics interface, isolate the areas around a virus source, and bring the situation under control within the shortest time. It could be applied to SARS-prevention systems for information broadcasting, data management, statistical analysis, and decision support.

  4. WISP information display system user's manual

    NASA Technical Reports Server (NTRS)

    Alley, P. L.; Smith, G. R.

    1978-01-01

The wind shears program (WISP) supports the collection of data on magnetic tape for permanent storage or analysis. This document provides: (1) the hardware and software configuration required to execute the WISP system, and the start-up procedure from a power-down condition; (2) the data collection task, the calculations performed on the incoming data, and a description of the magnetic tape format; (3) the data display task, with examples of displays obtained from execution of the real-time simulation program; and (4) the raw-data dump task, with examples of the operator actions required to obtain the desired format. The procedures outlined herein allow continuous data collection at the expense of real-time visual displays.

  5. Real-time image mosaicing for medical applications.

    PubMed

    Loewke, Kevin E; Camarillo, David B; Jobst, Christopher A; Salisbury, J Kenneth

    2007-01-01

    In this paper we describe the development of a robotically-assisted image mosaicing system for medical applications. The processing occurs in real-time due to a fast initial image alignment provided by robotic position sensing. Near-field imaging, defined by relatively large camera motion, requires translations as well as pan and tilt orientations to be measured. To capture these measurements we use 5-d.o.f. sensing along with a hand-eye calibration to account for sensor offset. This sensor-based approach speeds up the mosaicing, eliminates cumulative errors, and readily handles arbitrary camera motions. Our results have produced visually satisfactory mosaics on a dental model but can be extended to other medical images.
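A common way to perform the fine visual-alignment step once position sensing has supplied a coarse estimate is phase correlation, which recovers the residual translation between overlapping frames. A minimal translation-only sketch (the paper's 5-d.o.f. sensing and hand-eye calibration are outside its scope):

```python
import numpy as np

def phase_correlate(a, b):
    """Integer translation aligning frame b onto frame a via phase correlation."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    F /= np.abs(F) + 1e-12                       # keep phase only
    corr = np.abs(np.fft.ifft2(F))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > a.shape[0] // 2: dy -= a.shape[0]    # wrap to signed shifts
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return dy, dx                                # np.roll(b, (dy, dx)) ~ a

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, (5, -3), axis=(0, 1))     # known circular shift
dy, dx = phase_correlate(img, shifted)
```

`phase_correlate` returns the roll that maps the second frame back onto the first, so the known (5, -3) shift comes back as (-5, 3); in a mosaicing loop this residual would be added to the sensor-predicted alignment.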

  6. LCFM - LIVING COLOR FRAME MAKER: PC GRAPHICS GENERATION AND MANAGEMENT TOOL FOR REAL-TIME APPLICATIONS

    NASA Technical Reports Server (NTRS)

    Truong, L. V.

    1994-01-01

Computer graphics are often applied for better understanding and interpretation of data under observation. These graphics become more complicated when animation is required at run time, as in many typical modern artificial-intelligence and expert systems. Living Color Frame Maker is a solution to many of these real-time graphics problems. Living Color Frame Maker (LCFM) is a graphics generation and management tool for IBM or IBM-compatible personal computers. To eliminate graphics programming, the graphic designer can use LCFM to generate computer graphics frames. The graphical frames are then saved as text files, in a readable and disclosed format, which can be easily accessed and manipulated by user programs for a wide range of real-time visual information applications. For example, LCFM can be implemented in a frame-based expert system to provide visual aids for systems management. For monitoring, diagnosis, and/or control purposes, circuit or system diagrams can be brought to "life" by using designated video colors and intensities to symbolize the status of hardware components (via real-time feedback from sensors). Thus, the status of the system itself can be displayed. The Living Color Frame Maker is user-friendly with graphical interfaces, and provides on-line help instructions. All options are executed using mouse commands and are displayed on a single menu for fast and easy operation. LCFM is written in C++ using the Borland C++ 2.0 compiler for IBM PC series computers and compatible computers running MS-DOS. The program requires a mouse and an EGA/VGA display. A minimum of 77K of RAM is also required for execution. The documentation is provided in electronic form on the distribution medium in WordPerfect format. A sample MS-DOS executable is provided on the distribution medium. The standard distribution medium for this program is one 5.25 inch 360K MS-DOS format diskette.
The contents of the diskette are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. The Living Color Frame Maker tool was developed in 1992.

  7. Sustained Attention in Real Classroom Settings: An EEG Study.

    PubMed

    Ko, Li-Wei; Komarov, Oleksii; Hairston, W David; Jung, Tzyy-Ping; Lin, Chin-Teng

    2017-01-01

Sustained attention is a process that enables the maintenance of response persistence and continuous effort over extended periods of time. Performing attention-related tasks in real life involves the need to ignore a variety of distractions and inhibit attention shifts to irrelevant activities. This study investigates electroencephalography (EEG) spectral changes during a sustained attention task within a real classroom environment. Eighteen healthy students were instructed to recognize, as fast as possible, special visual targets displayed during regular university lectures. Sorting their EEG spectra with respect to response times, which indicated the level of visual alertness to randomly introduced visual stimuli, revealed significant changes in brain oscillation patterns. The results of power-frequency analysis demonstrated a relationship between variations in the EEG spectral dynamics and impaired performance in the sustained attention task. Across subjects and sessions, prolongation of the response time was preceded by an increase in the delta and theta EEG powers over the occipital region, and a decrease in the beta power over the occipital and temporal regions. Meanwhile, implementation of the complex attention task paradigm in a real-world classroom setting makes it possible to investigate specific mutual links between brain activities and factors that cause impaired behavioral performance, such as the development and manifestation of classroom mental fatigue. The findings of the study set a basis for developing a system capable of estimating the level of visual attention during real classroom activities by monitoring changes in the EEG spectra.

  8. Sustained Attention in Real Classroom Settings: An EEG Study

    PubMed Central

    Ko, Li-Wei; Komarov, Oleksii; Hairston, W. David; Jung, Tzyy-Ping; Lin, Chin-Teng

    2017-01-01

Sustained attention is a process that enables the maintenance of response persistence and continuous effort over extended periods of time. Performing attention-related tasks in real life involves the need to ignore a variety of distractions and inhibit attention shifts to irrelevant activities. This study investigates electroencephalography (EEG) spectral changes during a sustained attention task within a real classroom environment. Eighteen healthy students were instructed to recognize, as fast as possible, special visual targets displayed during regular university lectures. Sorting their EEG spectra with respect to response times, which indicated the level of visual alertness to randomly introduced visual stimuli, revealed significant changes in brain oscillation patterns. The results of power-frequency analysis demonstrated a relationship between variations in the EEG spectral dynamics and impaired performance in the sustained attention task. Across subjects and sessions, prolongation of the response time was preceded by an increase in the delta and theta EEG powers over the occipital region, and a decrease in the beta power over the occipital and temporal regions. Meanwhile, implementation of the complex attention task paradigm in a real-world classroom setting makes it possible to investigate specific mutual links between brain activities and factors that cause impaired behavioral performance, such as the development and manifestation of classroom mental fatigue. The findings of the study set a basis for developing a system capable of estimating the level of visual attention during real classroom activities by monitoring changes in the EEG spectra. PMID:28824396

  9. An Integrated Hydrologic Monitoring Network

    NASA Astrophysics Data System (ADS)

    Tedesco, L. P.; Baker, M. P.; Hall, B. E.

    2004-12-01

Ecological studies depend on the ability to monitor an environment, collect data at appropriate spatial and temporal scales, and analyze those data from the diverse viewpoints of many relevant disciplines. Historically, environmental studies have been conducted by small teams of researchers, usually collecting data by hand at some set but low frequency, and organizing it according to ad hoc, project-specific goals. Recent years have seen dramatic advancement in the ability to gather environmental data remotely and therefore at much higher frequency. We are working to create a dynamic and integrated network of environmental sensors in natural environments to acquire real-time data, and to create visualization tools appropriate for different audiences to promote scientific exploration. Instrumentation includes an array of water quality and water level sondes and probes distributed throughout three Central Indiana counties. Instrument platforms currently include five river monitoring platforms utilizing YSI water quality and level probes; a lake buoy array that includes three YSI sonde packages monitoring physical, chemical, and biological parameters; and over fifteen YSI and Solinst groundwater probes recording both level and water quality. Many sites are providing real-time data, and several additional sites are scheduled to come online in the coming months. Visualization of this real-time data from remote sensors distributed throughout Central Indiana presents numerous challenges. The benefits of successfully integrating remotely deployed environmental sensors in a post-9/11 world are obvious.
We are working to bridge both the extremes associated with the frequency of data collection and the lack of data coordination by creating techniques for data networking and retrieval, along with data management, analysis, and visualization capabilities that operate across a range of computing platforms, to make these data immediately accessible and useful to a range of interested parties across multiple disciplines. We are working to integrate multiple data streams into a coherent database and to create applications that allow users to view data from multiple instruments at different sites. Creating visualizations of real-time, dynamic data from the everyday world and delivering them via web applications, as well as through innovative display spaces, will be a key outcome of this program. On-line tools for QA/QC, data queries, graphing, and sensitivity analysis are under development. Our goal is to use the instrumented sites to create analysis and presentation applications that foster a community of learners interested in understanding these ecosystems and the larger environmental issues that they represent. This broad-based community will include environmental researchers, university faculty in lecture halls, math and science teachers, university and K-12 students, civic leaders, and educators at informal learning centers.

  10. A computational model of visual marking using an inter-connected network of spiking neurons: the spiking search over time & space model (sSoTS).

    PubMed

    Mavritsaki, Eirini; Heinke, Dietmar; Humphreys, Glyn W; Deco, Gustavo

    2006-01-01

    In the real world, visual information is selected over time as well as space, when we prioritise new stimuli for attention. Watson and Humphreys [Watson, D., Humphreys, G.W., 1997. Visual marking: prioritizing selection for new objects by top-down attentional inhibition of old objects. Psychological Review 104, 90-122] presented evidence that new information in search tasks is prioritised by (amongst other processes) active ignoring of old items - a process they termed visual marking. In this paper we present, for the first time, an explicit computational model of visual marking using biologically plausible activation functions. The "spiking search over time and space" model (sSoTS) incorporates different synaptic components (NMDA, AMPA, GABA) and a frequency adaptation mechanism based on [Ca(2+)] sensitive K(+) current. This frequency adaptation current can act as a mechanism that suppresses the previously attended items. We show that, when coupled with a process of active inhibition applied to old items, frequency adaptation leads to old items being de-prioritised (and new items prioritised) across time in search. Furthermore, the time course of these processes mimics the time course of the preview effect in human search. The results indicate that the sSoTS model can provide a biologically plausible account of human search over time as well as space.
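The [Ca(2+)]-dependent K(+) adaptation mechanism at the heart of sSoTS can be illustrated with a leaky integrate-and-fire unit whose spikes build up a calcium trace that feeds back as an inhibitory current; all constants here are illustrative, not the published model's parameters:

```python
import numpy as np

def adapting_lif(I, dt=1.0, tau_m=20.0, tau_ca=200.0, g_ahp=0.02,
                 v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire with a spike-triggered, Ca-like adaptation current."""
    v, ca, spikes = 0.0, 0.0, []
    for t, drive in enumerate(I):
        v += dt * (-v / tau_m + drive - g_ahp * ca)  # adaptation opposes the drive
        ca += dt * (-ca / tau_ca)                    # calcium trace decays slowly
        if v >= v_th:
            v = v_reset
            ca += 1.0                                # spike-triggered Ca influx
            spikes.append(t)
    return spikes

spikes = adapting_lif([0.08] * 2000)   # constant input drive
isis = np.diff(spikes)                 # inter-spike intervals
```

Under constant drive the inter-spike intervals lengthen as the calcium trace accumulates; this activity-dependent suppression over time is the ingredient that, combined with active inhibition, de-prioritises old items in the model.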

  11. Total On-line Access Data System (TOADS): Phase II Final Report for the Period August 2002 - August 2004

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yuracko, K. L.; Parang, M.; Landguth, D. C.

    2004-09-13

TOADS (Total On-line Access Data System) is a new generation of real-time monitoring and information management system developed to support unattended environmental monitoring and long-term stewardship of U.S. Department of Energy facilities and sites. TOADS enables project managers, regulators, and stakeholders to view environmental monitoring information in real time over the Internet. Deployment of TOADS at government facilities and sites will reduce the cost of monitoring while increasing confidence and trust in cleanup and long-term stewardship activities. TOADS: reliably interfaces with and acquires data from a wide variety of external databases, remote systems, and sensors such as contaminant monitors, area monitors, atmospheric condition monitors, visual surveillance systems, intrusion devices, motion detectors, fire/heat detection devices, and gas/vapor detectors; provides notification and triggers alarms as appropriate; performs QA/QC on data inputs and logs the status of instruments/devices; provides a fully functional data management system capable of storing, analyzing, and reporting on data; provides an easy-to-use Internet-based user interface that offers visualization of the site, data, and events; and enables the community to monitor local environmental conditions in real time. During this Phase II STTR project, TOADS was developed and successfully deployed for unattended facility, environmental, and radiological monitoring at a Department of Energy facility.

  12. Real-time visual mosaicking and navigation on the seafloor

    NASA Astrophysics Data System (ADS)

    Richmond, Kristof

    Remote robotic exploration holds vast potential for gaining knowledge about extreme environments accessible to humans only with great difficulty. Robotic explorers have been sent to other solar system bodies, and on this planet into inaccessible areas such as caves and volcanoes. In fact, the largest unexplored land area on earth lies hidden in the airless cold and intense pressure of the ocean depths. Exploration in the oceans is further hindered by water's high absorption of electromagnetic radiation, which both inhibits remote sensing from the surface, and limits communications with the bottom. The Earth's oceans thus provide an attractive target for developing remote exploration capabilities. As a result, numerous robotic vehicles now routinely survey this environment, from remotely operated vehicles piloted over tethers from the surface to torpedo-shaped autonomous underwater vehicles surveying the mid-waters. However, these vehicles are limited in their ability to navigate relative to their environment. This limits their ability to return to sites with precision without the use of external navigation aids, and to maneuver near and interact with objects autonomously in the water and on the sea floor. The enabling of environment-relative positioning on fully autonomous underwater vehicles will greatly extend their power and utility for remote exploration in the furthest reaches of the Earth's waters---even under ice and under ground---and eventually in extraterrestrial liquid environments such as Europa's oceans. This thesis presents an operational, fielded system for visual navigation of underwater robotic vehicles in unexplored areas of the seafloor. The system does not depend on external sensing systems, using only instruments on board the vehicle. As an area is explored, a camera is used to capture images and a composite view, or visual mosaic, of the ocean bottom is created in real time. 
Side-to-side visual registration of images is combined with dead-reckoned navigation information in a framework allowing the creation and updating of large, locally consistent mosaics. These mosaics are used as maps in which the vehicle can navigate and localize itself with respect to points in the environment. The system achieves real-time performance in several ways. First, wherever possible, direct sensing of motion parameters is used in place of extracting them from visual data. Second, trajectories are chosen to enable a hierarchical search for side-to-side links which limits the amount of searching performed without sacrificing robustness. Finally, the map estimation is formulated as a sparse, linear information filter allowing rapid updating of large maps. The visual navigation enabled by the work in this thesis represents a new capability for remotely operated vehicles, and an enabling capability for a new generation of autonomous vehicles which explore and interact with remote, unknown and unstructured underwater environments. The real-time mosaic can be used on current tethered vehicles to create pilot aids and provide a vehicle user with situational awareness of the local environment and the position of the vehicle within it. For autonomous vehicles, the visual navigation system enables precise environment-relative positioning and mapping, without requiring external navigation systems, opening the way for ever-expanding autonomous exploration capabilities. The utility of this system was demonstrated in the field at sites of scientific interest using the ROVs Ventana and Tiburon operated by the Monterey Bay Aquarium Research Institute. A number of sites in and around Monterey Bay, California were mosaicked using the system, culminating in a complete imaging of the wreck site of the USS Macon , where real-time visual mosaics containing thousands of images were generated while navigating using only sensor systems on board the vehicle.
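The map estimation described above — dead-reckoned links plus side-to-side registration links fused in a sparse, linear information filter — reduces, in one dimension, to accumulating quadratic constraints into an information matrix and solving. A toy sketch with invented measurements:

```python
import numpy as np

# Poses x0..x3 along a survey line, x0 anchored at the origin.
# Dead reckoning says each step is +1.0; a visual registration between
# frames 0 and 3 measures their offset as 2.7 and is weighted more heavily.
constraints = [        # (i, j, measured x_j - x_i, information weight)
    (0, 1, 1.0, 1.0),
    (1, 2, 1.0, 1.0),
    (2, 3, 1.0, 1.0),
    (0, 3, 2.7, 4.0),  # side-to-side camera registration link
]

n = 4
H = np.zeros((n, n))   # information matrix (sparse in the real system)
b = np.zeros(n)
H[0, 0] = 1e6          # strong prior anchoring x0 at 0
for i, j, z, w in constraints:
    # each residual w*(x_j - x_i - z)^2 adds quadratic terms to H and b
    H[i, i] += w; H[j, j] += w
    H[i, j] -= w; H[j, i] -= w
    b[i] -= w * z; b[j] += w * z

x = np.linalg.solve(H, b)   # maximum-likelihood pose estimates
```

The camera link pulls the dead-reckoned chain (which alone would place x3 at 3.0) toward its measured offset of 2.7, spreading the correction consistently over the intermediate poses — the locally consistent map update, in miniature.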

  13. Progress in using real-time GPS for seismic monitoring of the Cascadia megathrust

    NASA Astrophysics Data System (ADS)

    Szeliga, W. M.; Melbourne, T. I.; Santillan, V. M.; Scrivner, C.; Webb, F.

    2014-12-01

    We report on progress in our development of a comprehensive real-time GPS-based seismic monitoring system for the Cascadia subduction zone. This system is based on 1 Hz point position estimates computed in the ITRF08 reference frame. Convergence from phase and range observables to point position estimates is accelerated using a Kalman-filter-based on-line stream editor. Positions are estimated using a short-arc approach and algorithms from JPL's GIPSY-OASIS software with satellite clock and orbit products from the International GNSS Service (IGS). The resulting positions show typical RMS scatter of 2.5 cm in the horizontal and 5 cm in the vertical with latencies below 2 seconds. To facilitate the use of these point position streams for applications such as seismic monitoring, we broadcast real-time positions and covariances using custom-built streaming software. This software is capable of buffering 24-hour streams for hundreds of stations and providing them through a RESTful web interface. To demonstrate the power of this approach, we have developed a Java-based front-end that provides a real-time visual display of time-series, vector displacement, and contoured peak ground displacement. We have also implemented continuous estimation of finite fault slip along the Cascadia megathrust using a network inversion filter (NIF) approach. The resulting continuous slip distributions are combined with pre-computed tsunami Green's functions to generate real-time tsunami run-up estimates for the entire Cascadia coastal margin. This Java-based front-end is available for download through the PANGA website. We currently analyze 80 PBO and PANGA stations along the Cascadia margin and are gearing up to process all 400+ real-time stations operating in the Pacific Northwest, many of which are currently telemetered in real time to CWU. These will serve as milestones towards our overarching goal of extending our processing to include all of the available real-time streams from the Pacific rim.
In addition, we are developing methodologies to combine our real-time solutions with those from Scripps Institution of Oceanography's PPP-AR real-time solutions as well as real-time solutions from the USGS. These combined products should improve the robustness and reliability of real-time point-position streams in the near future.
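The abstract describes 1 Hz point positions with roughly 2.5 cm horizontal scatter, smoothed by Kalman-filter-based stream editing. A minimal scalar constant-position Kalman filter illustrates the smoothing idea; the noise parameters and data below are invented stand-ins, and the actual GIPSY-OASIS processing is far more involved.

```python
# Illustrative scalar Kalman filter for smoothing a noisy 1 Hz position
# stream (a toy stand-in for the on-line stream editing described above).

def kalman_smooth(zs, q=1e-4, r=0.025 ** 2):
    """Constant-position model: process noise q, measurement noise r (m^2)."""
    x, p = zs[0], r          # initialize from the first sample
    out = []
    for z in zs:
        p += q               # predict: position uncertainty grows
        k = p / (p + r)      # Kalman gain
        x += k * (z - x)     # update toward the new measurement
        p *= (1 - k)
        out.append(x)
    return out

# A constant true position of 1.0 m observed with ~2.5 cm scatter:
measurements = [1.02, 0.98, 1.03, 0.97, 1.01, 0.99, 1.02, 0.98]
smoothed = kalman_smooth(measurements)
```

The filtered stream has visibly less scatter than the raw one, which is what makes small coseismic offsets stand out in displays of this kind.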

  14. Cloud-Hosted Real-time Data Services for the Geosciences (CHORDS)

    NASA Astrophysics Data System (ADS)

    Daniels, M. D.; Graves, S. J.; Kerkez, B.; Chandrasekar, V.; Vernon, F.; Martin, C. L.; Maskey, M.; Keiser, K.; Dye, M. J.

    2015-12-01

    The Cloud-Hosted Real-time Data Services for the Geosciences (CHORDS) project, funded as part of NSF's EarthCube initiative, addresses the ever-increasing importance of real-time scientific data, particularly in mission-critical scenarios where informed decisions must be made rapidly. Advances in the distribution of real-time data are allowing many new transient phenomena in space-time to be observed; however, real-time decision-making is infeasible in many cases because these streaming data are either completely inaccessible or available only through proprietary in-house tools or displays. This lack of accessibility prohibits advanced algorithm and workflow development that could be initiated or enhanced by these data streams. Small research teams do not have the resources to develop tools for the broad dissemination of their valuable real-time data and could benefit from an easy-to-use, scalable, cloud-based solution to facilitate access. CHORDS proposes to make a very diverse suite of real-time data available to the broader geosciences community in order to allow innovative new science in these areas to thrive. This presentation will highlight recently developed CHORDS portal tools and processing systems aimed at addressing some of the gaps in handling real-time data, particularly in the provisioning of data from the "long-tail" scientific community through a simple interface deployed in the cloud. The CHORDS system will connect these real-time streams via standard services from the Open Geospatial Consortium (OGC) and does so in a way that is simple and transparent to the data provider. Broad use of the CHORDS framework will expand the role of real-time data within the geosciences, and enhance the potential of streaming data sources to enable adaptive experimentation and real-time hypothesis testing.
Adherence to community data and metadata standards will promote the integration of CHORDS real-time data with existing standards-compliant analysis, visualization and modeling tools.
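To make the "simple interface" concrete, here is a sketch of how an instrument might push a sample to a CHORDS-style portal over plain HTTP. The URL shape (a `measurements/url_create` path taking `instrument_id`, an access key, and variable=value pairs) is an assumption loosely modeled on the CHORDS ingest convention; the portal documentation should be consulted for the exact parameters.

```python
# Build (but do not send) an ingest URL for a CHORDS-style portal.
# The endpoint path and parameter names are assumptions, not a spec.
from urllib.parse import urlencode, urlsplit, parse_qs

def build_ingest_url(portal, instrument_id, key, **variables):
    query = {"instrument_id": instrument_id, "key": key, **variables}
    return f"{portal}/measurements/url_create?{urlencode(query)}"

url = build_ingest_url("http://portal.example.org", 25, "SECRET",
                       temp=22.1, wind_speed=3.4)
# An urllib.request.urlopen(url) call would then stream the sample in.
```

The point of the sketch is the low barrier to entry: a data provider only needs to issue an HTTP GET per sample, and the portal handles buffering and OGC-standard redistribution.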

  15. A BHR Composite Network-Based Visualization Method for Deformation Risk Level of Underground Space

    PubMed Central

    Zheng, Wei; Zhang, Xiaoya; Lu, Qi

    2015-01-01

    This study proposes a visualization processing method for the deformation risk level of underground space. The proposed method is based on a BP-Hopfield-RGB (BHR) composite network. Complex environmental factors are integrated in the BP neural network. Dynamic monitoring data are then automatically classified in the Hopfield network. The deformation risk level is combined with the RGB color space model and displayed visually in real time. Experiments were then conducted with an ultrasonic omnidirectional sensor device for structural deformation monitoring. The proposed method is also compared with some typical methods using a benchmark dataset. Results show that the BHR composite network visualizes the deformation monitoring process in real time and can dynamically indicate dangerous zones. PMID:26011618
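The final RGB display stage can be sketched simply: map a normalized risk level onto a color ramp so dangerous zones stand out. The linear green-to-red ramp below is an illustrative assumption; the abstract does not specify the actual BHR color mapping.

```python
# Minimal sketch of a risk-to-color display stage: map a normalized
# deformation risk level (0 = safe, 1 = critical) to an RGB triple.
# The linear green-to-red ramp is an assumption for illustration.

def risk_to_rgb(level):
    """level in [0, 1] -> (r, g, b) bytes; green for safe, red for danger."""
    level = max(0.0, min(1.0, level))
    return (int(255 * level), int(255 * (1.0 - level)), 0)

colors = [risk_to_rgb(x) for x in (0.0, 0.5, 1.0)]
```

In a real monitor, each sensor zone would be painted with its own color every update cycle, giving the dynamic danger-zone indication the abstract describes.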

  16. A framework for visualization of battlefield network behavior

    NASA Astrophysics Data System (ADS)

    Perzov, Yury; Yurcik, William

    2006-05-01

    An extensible network simulation application was developed to study wireless battlefield communications. The application monitors node mobility and depicts broadcast and unicast traffic as expanding rings and directed links. The network simulation was specially designed to support fault injection to show the impact of air strikes on disabling nodes. The application takes standard ns-2 trace files as input and provides performance data output in different graphical forms (histograms and x/y plots). Network visualization via animation of simulation output can be saved in AVI format, which may serve as a basis for a real-time battlefield awareness system.
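Since the tool consumes standard ns-2 trace files, a short parsing sketch shows the kind of input it works from. The layout assumed below is the classic wired-trace column order (event, time, source node, destination node, packet type, size, ...); wireless traces use a richer format, so treat this as an illustrative assumption.

```python
# Sketch of parsing classic ns-2 trace lines, assuming the column layout
# "event time src dst pkttype size flags fid srcaddr dstaddr seq pktid".

def parse_ns2_line(line):
    f = line.split()
    return {"event": f[0], "time": float(f[1]),
            "src": int(f[2]), "dst": int(f[3]),
            "pkttype": f[4], "size": int(f[5])}

trace = [
    "+ 1.84375 0 2 cbr 210 ------- 0 0.0 3.1 225 610",
    "r 1.84471 2 1 cbr 210 ------- 1 3.0 1.0 195 600",
]
records = [parse_ns2_line(l) for l in trace]
# e.g. total bytes received, a typical input to the histogram output:
bytes_received = sum(r["size"] for r in records if r["event"] == "r")
```

Aggregates like this per-event byte count are what feed the histograms and x/y plots the abstract mentions.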

  17. Time-frequency feature representation using multi-resolution texture analysis and acoustic activity detector for real-life speech emotion recognition.

    PubMed

    Wang, Kun-Ching

    2015-01-14

    The classification of emotional speech is mostly considered in speech-related research on human-computer interaction (HCI). This paper presents a novel feature extraction method based on multi-resolution texture image information (MRTII). The MRTII feature set is derived from multi-resolution texture analysis for the characterization and classification of different emotions in a speech signal. The motivation is that emotions have different intensity values in different frequency bands. In terms of human visual perception, the texture properties of the multi-resolution emotional speech spectrogram should form a good feature set for emotion classification in speech. Furthermore, multi-resolution texture analysis can give a clearer discrimination between emotions than uniform-resolution texture analysis. In order to provide high accuracy of emotional discrimination, especially in real life, an acoustic activity detection (AAD) algorithm is applied in the MRTII-based feature extraction. Considering the presence of many blended emotions in real life, this paper makes use of two corpora of naturally-occurring dialogs recorded in real-life call centers. Compared with the traditional Mel-scale Frequency Cepstral Coefficients (MFCC) and state-of-the-art features, the MRTII features also improve the correct classification rates of the proposed systems across different language databases. Experimental results show that the proposed MRTII-based feature information, inspired by human visual perception of the spectrogram image, provides significant classification performance for real-life emotion recognition in speech.
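The core idea of treating a spectrogram as a texture image can be illustrated with a single-scale texture statistic. The co-occurrence contrast below (mean squared gray-level difference between neighboring pixels) is one classic texture measure; MRTII itself is multi-resolution and more elaborate, so this is only a sketch, and the tiny quantized "spectrogram" patches are invented.

```python
# Toy texture statistic on a quantized spectrogram patch: mean squared
# gray-level difference between neighboring pixels (a co-occurrence
# contrast measure). A smooth patch scores low, a ridged one high.

def glcm_contrast(img, dx=1, dy=0):
    """img: rows of integer gray levels; (dx, dy): neighbor offset."""
    total, count = 0, 0
    for y in range(len(img) - dy):
        for x in range(len(img[0]) - dx):
            d = img[y][x] - img[y + dy][x + dx]
            total += d * d
            count += 1
    return total / count

flat   = [[2, 2, 2, 2]] * 4   # smooth "spectrogram" patch
ridged = [[0, 3, 0, 3]] * 4   # strongly textured patch
```

Statistics like this, computed per frequency band and per resolution level, are the kind of feature the abstract argues can separate emotions that differ in band-wise intensity structure.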

  18. Perception-based 3D tactile rendering from a single image for human skin examinations by dynamic touch.

    PubMed

    Kim, K; Lee, S

    2015-05-01

    Diagnosis of skin conditions depends on the assessment of skin surface properties that are represented more by tactile properties such as stiffness, roughness, and friction than by visual information. For this reason, adding tactile feedback to existing vision-based diagnosis systems can help dermatologists diagnose skin diseases or disorders more accurately. The goal of our research was therefore to develop a tactile rendering system for skin examinations by dynamic touch. Our development consists of two stages: converting a single image to a 3D haptic surface and rendering the generated haptic surface in real time. Conversion from single 2D images to 3D surfaces was implemented using human perception data collected in a psychophysical experiment that measured human visual and haptic sensitivity to 3D skin surface changes. For the second stage, we utilized real skin biomechanical properties found by prior studies. Our tactile rendering system is a standalone system that can be used with any single camera and haptic feedback device. We evaluated the performance of our system by conducting an identification experiment with three different skin images and five subjects. The participants had to identify one of the three skin surfaces by using a haptic device (Falcon) only. No visual cue was provided for the experiment. The results indicate that our system provides sufficient performance to render discernible tactile feedback for different skin surfaces. Our system uses only a single skin image and automatically generates a 3D haptic surface based on human haptic perception. Realistic skin interactions can be provided in real time for the purpose of skin diagnosis, simulations, or training. Our system can also be used for other applications such as virtual reality and cosmetic applications. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
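The first stage, turning a single 2-D image into a 3-D haptic surface, can be sketched as an intensity-to-height mapping. The paper derives its mapping from psychophysical measurements; the linear scaling and the `max_depth_mm` gain below are illustrative assumptions standing in for that measured mapping.

```python
# Sketch of the image-to-heightmap stage: scale pixel intensity to a
# surface height. The linear map and depth gain are assumptions; the
# paper's actual mapping is psychophysically measured, not linear.

def image_to_heightmap(gray, max_depth_mm=0.5):
    """gray: rows of 0-255 intensities -> rows of surface heights in mm."""
    return [[max_depth_mm * (v / 255.0) for v in row] for row in gray]

patch = [[0, 128, 255],
         [64, 128, 192]]
heights = image_to_heightmap(patch)
```

A haptic renderer would then sample this height field under the proxy position each servo tick, applying the skin stiffness and friction models the abstract mentions.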

  19. Real-time magnetic resonance-guided ablation of typical right atrial flutter using a combination of active catheter tracking and passive catheter visualization in man: initial results from a consecutive patient series.

    PubMed

    Hilbert, Sebastian; Sommer, Philipp; Gutberlet, Matthias; Gaspar, Thomas; Foldyna, Borek; Piorkowski, Christopher; Weiss, Steffen; Lloyd, Thomas; Schnackenburg, Bernhard; Krueger, Sascha; Fleiter, Christian; Paetsch, Ingo; Jahnke, Cosima; Hindricks, Gerhard; Grothoff, Matthias

    2016-04-01

    Recently cardiac magnetic resonance (CMR) imaging has been found feasible for the visualization of the underlying substrate for cardiac arrhythmias as well as for the visualization of cardiac catheters for diagnostic and ablation procedures. Real-time CMR-guided cavotricuspid isthmus ablation was performed in a series of six patients using a combination of active catheter tracking and catheter visualization with real-time MR imaging. Cardiac magnetic resonance utilizing a 1.5 T system was performed in patients under deep propofol sedation. A three-dimensional whole-heart sequence with navigator technique and a fast automated segmentation algorithm was used for online segmentation of all cardiac chambers, which were thereafter displayed on a dedicated image guidance platform. In three out of six patients complete isthmus block could be achieved in the MR scanner; two of these patients did not need any additional fluoroscopy. In the first patient technical issues called for a completion of the procedure in a conventional laboratory; in another two patients the isthmus was partially blocked by magnetic resonance imaging (MRI)-guided ablation. The mean procedural time for the MR procedure was 109 ± 58 min. The intubation of the coronary sinus (CS) was performed within a mean time of 2.75 ± 2.21 min. Total fluoroscopy time for completion of the isthmus block ranged from 0 to 7.5 min. The combination of active catheter tracking and passive real-time visualization in CMR-guided electrophysiologic (EP) studies using advanced interventional hardware and software was safe and enabled efficient navigation, mapping, and ablation. These cases demonstrate significant progress in the development of MR-guided EP procedures. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2015. For permissions please email: journals.permissions@oup.com.

  20. Near Real Time Review of Instrument Performance using the Airborne Data Processing and Analysis Software Package

    NASA Astrophysics Data System (ADS)

    Delene, D. J.

    2014-12-01

    Research aircraft that conduct atmospheric measurements carry an increasing array of instrumentation. While on-board personnel constantly review instrument parameters and time series plots, there is an overwhelming number of items to track. Furthermore, directing the aircraft flight takes up much of the flight scientist's time. Typically, a flight engineer is given the responsibility of reviewing the status of on-board instruments. While major issues like not receiving data are quickly identified during a flight, subtle issues like low but believable concentration measurements may go unnoticed. Therefore, it is critical to review data after a flight in near real time. The Airborne Data Processing and Analysis (ADPAA) software package used by the University of North Dakota automates the post-processing of aircraft flight data. Utilizing scripts to process the measurements recorded by data acquisition systems enables the generation of data files within an hour of flight completion. The ADPAA Cplot visualization program allows plots to be generated quickly, enabling timely review of all recorded and processed parameters. Near real-time review of aircraft flight data enables instrument problems to be identified, investigated and fixed before conducting another flight. On one flight, near real-time data review resulted in the identification of unusually low measurements of cloud condensation nuclei, and rapid data visualization enabled the timely investigation of the cause. As a result, a leak was found and fixed before the next flight. With the high cost of aircraft flights, it is critical to find and fix instrument problems in a timely manner. The use of automated processing scripts and quick visualization software enables scientists to review aircraft flight data in near real time and identify potential problems.
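The "low but believable" failure mode described above (the CCN leak) is exactly the kind of thing a simple post-flight screening pass can catch. The sketch below flags valid-looking but suspiciously low samples; the threshold and the sample values are invented for illustration and are not ADPAA code.

```python
# Sketch of a near real-time QC check: flag samples that are valid
# numbers but fall below a plausibility threshold, as with the
# low-but-believable CCN concentrations described above. The threshold
# and data are invented for illustration.

def flag_low_values(times, values, threshold):
    """Return (time, value) pairs that are valid but below threshold."""
    return [(t, v) for t, v in zip(times, values)
            if v is not None and 0 <= v < threshold]

times = [0, 1, 2, 3, 4]
ccn   = [450, 420, 35, 30, 410]   # cm^-3; the mid-flight dip suggests a leak
flags = flag_low_values(times, ccn, threshold=100)
```

Running checks like this over every processed parameter within an hour of landing is what turns "subtle issues may go unnoticed" into an actionable pre-next-flight report.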

  1. How virtual reality works: illusions of vision in "real" and virtual environments

    NASA Astrophysics Data System (ADS)

    Stark, Lawrence W.

    1995-04-01

    Visual illusions abound in normal vision--illusions of clarity and completeness, of continuity in time and space, of presence and vivacity--and are part and parcel of the visual world in which we live. These illusions are discussed in terms of the human visual system, with its high-resolution fovea, moved from point to point in the visual scene by rapid saccadic eye movements (EMs). This sampling of visual information is supplemented by a low-resolution, wide peripheral field of view, especially sensitive to motion. Cognitive-spatial models controlling perception, imagery, and 'seeing' also control the EMs that shift the fovea in the Scanpath mode. These illusions provide for presence, the sense of being within an environment. They equally well lead to 'telepresence,' the sense of being within a virtual display, especially if the operator is intensely interacting within an eye-hand and head-eye human-machine interface that provides congruent visual and motor frames of reference. Interaction, immersion, and interest compel telepresence; intuitive functioning and engineered information flows can optimize human adaptation to the artificial new world of virtual reality, as virtual reality expands into entertainment, simulation, telerobotics, scientific visualization and other professional work.

  2. X-33 Flight Visualization

    NASA Technical Reports Server (NTRS)

    Laue, Jay H.

    1998-01-01

    The X-33 flight visualization effort has resulted in the integration of high-resolution terrain data with vehicle position and attitude data for planned flights of the X-33 vehicle from its launch site at Edwards AFB, California, to landings at Michael Army Air Field, Utah, and Maelstrom AFB, Montana. Video and Web Site representations of these flight visualizations were produced. In addition, a totally new module was developed to control viewpoints in real-time using a joystick input. Efforts have been initiated, and are presently being continued, for real-time flight coverage visualizations using the data streams from the X-33 vehicle flights. The flight visualizations that have resulted thus far give convincing support to the expectation that the flights of the X-33 will be exciting and significant space flight milestones... flights of this nation's one-half scale predecessor to its first single-stage-to-orbit, fully-reusable launch vehicle system.

  3. Image visualization of hyperspectral spectrum for LWIR

    NASA Astrophysics Data System (ADS)

    Chong, Eugene; Jeong, Young-Su; Lee, Jai-Hoon; Park, Dong Jo; Kim, Ju Hyun

    2015-07-01

    The image visualization of a real-time hyperspectral spectrum in the long-wave infrared (LWIR) range of 900-1450 cm-1 by a color-matching function is addressed. It is well known that the absorption spectra of the main toxic industrial chemical (TIC) and chemical warfare agent (CWA) clouds are detected in this spectral region. Furthermore, significant spectral peaks due to various background species and unknown targets are also present. However, these are often dismissed as noise, limiting the utility of the data. Herein, we applied a color-matching function that uses the information from hyperspectral data emitted from the materials and surfaces of artificial or natural backgrounds in the LWIR region. This information was used to classify and differentiate the background signals from the targeted substances, and the results were visualized as image data without additional visual equipment. The tristimulus-value-based visualization can quickly identify the background species and target in real-time detection in the LWIR.
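The tristimulus idea can be sketched directly: project the measured spectrum onto three matching functions spanning the band, then scale the three projections to RGB. The Gaussian matching functions and spectrum below are assumptions for illustration; the paper's actual matching functions are not given in the abstract.

```python
# Sketch of a color-matching step for LWIR spectra: project a spectrum
# onto three basis ("matching") functions to get tristimulus-like values,
# then scale to RGB. The Gaussian bases are illustrative assumptions.
import math

def gaussian(center, width):
    return lambda wn: math.exp(-((wn - center) / width) ** 2)

# Three assumed matching functions spanning the 900-1450 cm^-1 band:
bases = [gaussian(c, 120.0) for c in (1000.0, 1175.0, 1350.0)]

def tristimulus(wavenumbers, radiances):
    vals = [sum(b(w) * r for w, r in zip(wavenumbers, radiances))
            for b in bases]
    peak = max(vals) or 1.0
    return tuple(int(255 * v / peak) for v in vals)

wns = list(range(900, 1460, 10))
spectrum = [gaussian(1350.0, 60.0)(w) for w in wns]  # peak near 1350 cm^-1
rgb = tristimulus(wns, spectrum)
```

A cloud absorbing near the high-wavenumber end of the band maps to a color dominated by the third channel, so materially different spectra land in visibly different colors without any extra display hardware.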

  4. Integrating advanced visualization technology into the planetary Geoscience workflow

    NASA Astrophysics Data System (ADS)

    Huffman, John; Forsberg, Andrew; Loomis, Andrew; Head, James; Dickson, James; Fassett, Caleb

    2011-09-01

    Recent advances in computer visualization have allowed us to develop new tools for analyzing the data gathered during planetary missions, which is important, since these data sets have grown exponentially in recent years to tens of terabytes in size. As part of the Advanced Visualization in Solar System Exploration and Research (ADVISER) project, we utilize several advanced visualization techniques created specifically with planetary image data in mind. The Geoviewer application allows real-time active stereo display of images, which in aggregate have billions of pixels. The ADVISER desktop application platform allows fast three-dimensional visualization of planetary images overlain on digital terrain models. Both applications include tools for easy data ingest and real-time analysis in a programmatic manner. Incorporation of these tools into our everyday scientific workflow has proved important for scientific analysis, discussion, and publication, and enabled effective and exciting educational activities for students from high school through graduate school.

  5. Real-time feedback enhances forward propulsion during walking in old adults.

    PubMed

    Franz, Jason R; Maletis, Michela; Kram, Rodger

    2014-01-01

    Reduced propulsive function during the push-off phase of walking plays a central role in the deterioration of walking ability with age. We used real-time propulsive feedback to test the hypothesis that old adults have an underutilized propulsive reserve available during walking. Eight old adults (mean [SD] age: 72.1 [3.9] years) and 11 young adults (age: 21.0 [1.5] years) participated. For our primary aim, old subjects walked: 1) normally, 2) with visual feedback of their peak propulsive ground reaction forces, and 3) with visual feedback of their medial gastrocnemius electromyographic activity during push-off. We asked those subjects to match a target set to 20% and 40% greater propulsive force or push-off muscle activity than normal walking. We tested young subjects walking normally only to provide reference ground reaction force values. Walking normally, old adults exerted 12.5% smaller peak propulsive forces than young adults (P<0.01). However, old adults significantly increased their propulsive forces and push-off muscle activities when we provided propulsive feedback. Most notably, force feedback elicited propulsive forces that were equal to or 10.5% greater than those of young adults (+20% target, P=0.87; +40% target, P=0.02). With electromyographic feedback, old adults significantly increased their push-off muscle activities but without increasing their propulsive forces. Old adults with propulsive deficits have a considerable and underutilized propulsive reserve available during level walking. Further, real-time propulsive feedback represents a promising therapeutic strategy to improve the forward propulsion of old adults and thus maintain their walking ability and independence. © 2013.

  6. Visualizing frequent patterns in large multivariate time series

    NASA Astrophysics Data System (ADS)

    Hao, M.; Marwah, M.; Janetzko, H.; Sharma, R.; Keim, D. A.; Dayal, U.; Patnaik, D.; Ramakrishnan, N.

    2011-01-01

    The detection of previously unknown, frequently occurring patterns in time series, often called motifs, has been recognized as an important task. However, it is difficult to discover and visualize these motifs as their numbers increase, especially in large multivariate time series. To find frequent motifs, we use several temporal data mining and event encoding techniques to cluster and convert a multivariate time series to a sequence of events. Then we quantify the efficiency of the discovered motifs by linking them with a performance metric. To visualize frequent patterns in a large time series with potentially hundreds of nested motifs on a single display, we introduce three novel visual analytics methods: (1) motif layout, using colored rectangles for visualizing the occurrences and hierarchical relationships of motifs in a multivariate time series, (2) motif distortion, for enlarging or shrinking motifs as appropriate for easy analysis and (3) motif merging, to combine a number of identical adjacent motif instances without cluttering the display. Analysts can interactively optimize the degree of distortion and merging to get the best possible view. A specific motif (e.g., the most efficient or least efficient motif) can be quickly detected from a large time series for further investigation. We have applied these methods to two real-world data sets: data center cooling and oil well production. The results provide important new insights into the recurring patterns.
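The discovery step described above (encode the multivariate series as event symbols, then find frequently recurring subsequences) can be sketched with a simple subsequence counter. The event sequence, motif length, and frequency threshold below are invented for illustration; the paper's actual mining and encoding techniques are more elaborate.

```python
# Sketch of frequent-motif discovery on an event-encoded time series:
# count every length-k subsequence and keep the frequent ones.
# The event sequence and thresholds are invented for illustration.
from collections import Counter

def frequent_motifs(events, k=2, min_count=2):
    counts = Counter(tuple(events[i:i + k])
                     for i in range(len(events) - k + 1))
    return {m: c for m, c in counts.items() if c >= min_count}

events = ["idle", "cool", "load", "cool", "load", "cool", "idle"]
motifs = frequent_motifs(events, k=2)
```

Each discovered motif would then be laid out as a colored rectangle over its occurrences, with the distortion and merging operations handling display clutter.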

  7. Concept of Operations Visualization in Support of Ares I Production

    NASA Technical Reports Server (NTRS)

    Chilton, James H.; Smith, David Alan

    2008-01-01

    Boeing was selected in 2007 to manufacture the Ares I Upper Stage and Instrument Unit according to NASA's design, which required the use of the latest manufacturing and integration processes to meet NASA budget and schedule targets. Past production experience has established that the majority of the life cycle cost is determined during the initial design process. Concept of Operations (CONOPs) visualizations/simulations help to reduce life cycle cost during the early design stage. Production and operation visualizations can reduce tooling, factory capacity, safety, and build process risks while spreading program support across government, academic, media and public constituencies. The NASA/Boeing production visualization (DELMIA; Digital Enterprise Lean Manufacturing Interactive Application) promotes timely, concurrent and collaborative producibility analysis (Boeing) while supporting Upper Stage Design Cycles (NASA). The DELMIA CONOPs visualization reduced overall Upper Stage production flow time at the manufacturing facility by over 100 man-days to 312.5 man-days and helped to identify technical access issues. The NASA/Boeing Interactive Concept of Operations (ICON) provides interactive access to Ares using real mission parameters, allows users to configure the mission, which encourages ownership and identifies areas for improvement, allows mission operations or spacecraft detail to be added as needed, and provides an effective, low-cost advocacy, outreach and education tool.

  8. Netgram: Visualizing Communities in Evolving Networks

    PubMed Central

    Mall, Raghvendra; Langone, Rocco; Suykens, Johan A. K.

    2015-01-01

    Real-world complex networks are dynamic in nature and change over time. The change is usually observed in the interactions within the network over time. Complex networks exhibit community-like structures. A key feature of the dynamics of complex networks is the evolution of communities over time. Several methods have been proposed to detect and track the evolution of these groups over time. However, there is no generic tool which visualizes all the aspects of group evolution in dynamic networks, including birth, death, splitting, merging, expansion, shrinkage and continuation of groups. In this paper, we propose Netgram: a tool for visualizing the evolution of communities in time-evolving graphs. Netgram maintains the evolution of communities over two consecutive time stamps in tables which are used to create a query database using the SQL outer-join operation. It uses a line-based visualization technique which adheres to certain design principles and aesthetic guidelines. Netgram uses a greedy solution to order the initial community information provided by the evolutionary clustering technique such that we have fewer line cross-overs in the visualization. This makes it easier to track the progress of individual communities in time-evolving graphs. Netgram is a generic toolkit which can be used with any evolutionary community detection algorithm, as illustrated in our experiments. We use Netgram for visualization of topic evolution in the NIPS conference over a period of 11 years and observe the emergence and merging of several disciplines in the field of information processing systems. PMID:26356538
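The outer-join bookkeeping can be sketched in miniature: relate community labels at two consecutive time stamps so that unmatched rows on either side record deaths and births. The Jaccard-overlap matching rule and the membership sets below are invented for illustration; they stand in for whatever the evolutionary clustering provides.

```python
# Sketch of outer-join-style community tracking across two time stamps:
# matched pairs continue, left-only rows are deaths, right-only births.
# The overlap rule and membership sets are invented for illustration.

def outer_join_communities(prev, curr, overlap=0.5):
    """prev/curr: {label: set(members)} -> list of (prev_label, curr_label);
    None on one side marks a death (left only) or a birth (right only)."""
    links, matched_prev, matched_curr = [], set(), set()
    for p, pm in prev.items():
        for c, cm in curr.items():
            if len(pm & cm) / max(len(pm | cm), 1) >= overlap:
                links.append((p, c))
                matched_prev.add(p)
                matched_curr.add(c)
    links += [(p, None) for p in prev if p not in matched_prev]  # deaths
    links += [(None, c) for c in curr if c not in matched_curr]  # births
    return links

t0 = {"A": {1, 2, 3}, "B": {4, 5}}
t1 = {"A": {1, 2, 3, 6}, "C": {7, 8}}   # B dies, C is born
links = outer_join_communities(t0, t1)
```

Chaining these per-step link tables across all time stamps yields the rows from which the line-based visualization is drawn.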

  9. Foggy perception slows us down.

    PubMed

    Pretto, Paolo; Bresciani, Jean-Pierre; Rainer, Gregor; Bülthoff, Heinrich H

    2012-10-30

    Visual speed is believed to be underestimated at low contrast, which has been proposed as an explanation of excessive driving speed in fog. Combining psychophysics measurements and driving simulation, we confirm that speed is underestimated when contrast is reduced uniformly for all objects of the visual scene independently of their distance from the viewer. However, we show that when contrast is reduced more for distant objects, as is the case in real fog, visual speed is actually overestimated, prompting drivers to decelerate. Using an artificial anti-fog (that is, fog characterized by better visibility for distant than for close objects), we demonstrate for the first time that perceived speed depends on the spatial distribution of contrast over the visual scene rather than the global level of contrast per se. Our results cast new light on how reduced visibility conditions affect perceived speed, providing important insight into the human visual system. DOI: http://dx.doi.org/10.7554/eLife.00031.001.

  10. Adaptive Optics Analysis of Visual Benefit with Higher-order Aberrations Correction of Human Eye - Poster Paper

    NASA Astrophysics Data System (ADS)

    Xue, Lixia; Dai, Yun; Rao, Xuejun; Wang, Cheng; Hu, Yiyun; Liu, Qian; Jiang, Wenhan

    2008-01-01

    Higher-order aberration correction can improve the visual performance of the human eye to some extent. To evaluate how much visual benefit can be obtained with higher-order aberration correction, we developed an adaptive optics vision simulator (AOVS). Dynamic real-time optimized modal compensation was used to implement various customized higher-order ocular aberration correction strategies. The experimental results indicate that higher-order aberration correction can improve the visual performance of the human eye compared with lower-order aberration correction alone, but the degree of improvement and the best correction strategy differ between individuals. Some subjects acquired great visual benefit when higher-order aberrations were corrected, but others acquired little visual benefit even when all higher-order aberrations were corrected. Therefore, relative to a general lower-order aberration correction strategy, a customized higher-order aberration correction strategy is needed to obtain optimal visual improvement for each individual. The AOVS provides an effective tool for higher-order ocular aberration optometry for customized ocular aberration correction.

  11. Media/Device Configurations for Platoon Leader Tactical Training

    DTIC Science & Technology

    1985-02-01

    Inputs to the Platoon Leader: the device should simulate the real-time receipt of all tactical voice communication, audio and visual battlefield cues, and visual communication signals (Table 4, Continued: Functional Capability Categories). A reduced capability level (0.8) covers receipt of limited tactical voice communication, plus audio and visual battlefield cues, and visual communication signals.

  12. A framework for small infrared target real-time visual enhancement

    NASA Astrophysics Data System (ADS)

    Sun, Xiaoliang; Long, Gucan; Shang, Yang; Liu, Xiaolin

    2015-03-01

    This paper proposes a framework for real-time visual enhancement of small infrared targets. The framework consists of three parts: energy accumulation for small infrared target enhancement, noise suppression and weighted fusion. A dynamic-programming-based track-before-detect algorithm is adopted in the energy accumulation to detect the target accurately and enhance the target's intensity notably. In the noise suppression, the target region is weighted by a Gaussian mask according to the target's Gaussian shape. In order to fuse the processed target region and the unprocessed background smoothly, the intensity in the target region is treated as a weight in the fusion. Experiments on real small infrared target images indicate that the proposed framework enhances the small infrared target markedly and improves the image's visual quality notably. The proposed framework outperforms traditional algorithms in enhancing small infrared targets, especially for images in which the target is barely visible.
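The dynamic-programming energy accumulation at the heart of track-before-detect can be sketched in one dimension: each position accumulates its own intensity plus the best accumulated energy among reachable positions in the previous frame, so a dim target moving consistently builds up energy while random noise does not. The frames and step limit below are invented for illustration.

```python
# Minimal 1-D sketch of dynamic-programming energy accumulation for
# track-before-detect. Frame values and the motion limit are invented.

def accumulate(frames, max_step=1):
    """Accumulate per-position energy over frames, allowing the target
    to move at most max_step positions between consecutive frames."""
    energy = list(frames[0])
    for frame in frames[1:]:
        prev = energy
        energy = []
        for i, v in enumerate(frame):
            lo = max(0, i - max_step)
            hi = min(len(frame), i + max_step + 1)
            energy.append(v + max(prev[lo:hi]))
    return energy

# A dim target drifting right by one pixel per frame:
frames = [
    [0, 3, 0, 0, 0],
    [0, 0, 3, 0, 0],
    [0, 0, 0, 3, 0],
]
energy = accumulate(frames)
target_position = energy.index(max(energy))
```

The accumulated peak both localizes the target and supplies the boosted intensity that the enhancement stage fuses back into the displayed image.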

  13. A real time biofeedback using Kinect and Wii to improve gait for post-total knee replacement rehabilitation: a case study report.

    PubMed

    Levinger, Pazit; Zeina, Daniel; Teshome, Assefa K; Skinner, Elizabeth; Begg, Rezaul; Abbott, John Haxby

    2016-01-01

    This study aimed to develop a low-cost real-time biofeedback system to assist with rehabilitation for patients following total knee replacement (TKR) and to assess its feasibility in a post-TKR single-patient case study with a comparison group. The biofeedback system consisted of a Microsoft Kinect(TM) and a Nintendo Wii balance board with dedicated software. A six-week inpatient rehabilitation program augmented by biofeedback was tested in a single patient following TKR. Three patients who underwent six weeks of standard rehabilitation with no biofeedback served as a control group. Gait, function and pain were assessed and compared before and after rehabilitation. The biofeedback software incorporated real-time visual feedback to correct limb alignment, movement pattern and weight distribution. Improvements in pain, function and quality of life were observed in both groups. The strong improvement in the knee moment pattern demonstrated in the case study indicates the feasibility of the biofeedback-augmented intervention. This novel biofeedback software used simple, commercially accessible equipment that can feasibly be incorporated to augment a post-TKR rehabilitation program. Our preliminary results indicate the potential of this biofeedback-assisted rehabilitation to improve knee function during gait; further research is required to test this hypothesis. Implications for Rehabilitation: The real-time biofeedback system developed integrated custom-made software and simple, low-cost, commercially accessible equipment (Kinect and Wii balance board) to provide augmented information during rehabilitation following TKR. The software incorporated key rehabilitation principles and visual feedback to correct alignment of the lower legs, pelvis and trunk, as well as providing feedback on limb weight distribution.
The case-study patient demonstrated greater improvement in knee function, achieving a more normal biphasic knee moment following the six-week biofeedback intervention.

  14. Online decoding of object-based attention using real-time fMRI.

    PubMed

    Niazi, Adnan M; van den Broek, Philip L C; Klanke, Stefan; Barth, Markus; Poel, Mannes; Desain, Peter; van Gerven, Marcel A J

    2014-01-01

    Visual attention is used to selectively filter relevant information depending on current task demands and goals. Visual attention is called object-based attention when it is directed to coherent forms or objects in the visual field. This study used real-time functional magnetic resonance imaging for moment-to-moment decoding of attention to spatially overlapped objects belonging to two different object categories. First, a whole-brain classifier was trained on pictures of faces and places. Subjects then saw transparently overlapped pictures of a face and a place, and attended to only one of them while ignoring the other. The category of the attended object, face or place, was decoded on a scan-by-scan basis using the previously trained decoder. The decoder performed at 77.6% accuracy, indicating that, despite competing bottom-up sensory input, object-based visual attention biased neural patterns towards those of the attended object. Furthermore, a comparison between different classification approaches indicated that the representation of faces and places is distributed rather than focal. This implies that real-time decoding of object-based attention requires a multivariate decoding approach that can detect these distributed patterns of cortical activity. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
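The two-phase scheme above (offline training on face/place scans, then scan-by-scan online decoding) can be sketched with a generic linear classifier. Everything here is simulated stand-in data, not the study's fMRI pipeline; logistic regression is one plausible choice of multivariate decoder.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical stand-in for voxel patterns: we simulate two distributed
# activity patterns (one per category) plus per-scan noise.
rng = np.random.default_rng(0)
n_voxels = 500
face_pattern = rng.normal(size=n_voxels)
place_pattern = rng.normal(size=n_voxels)

def simulate_scans(pattern, n, noise=1.0):
    """n noisy whole-brain scans expressing the given category pattern."""
    return pattern + noise * rng.normal(size=(n, n_voxels))

# Offline phase: train the whole-brain classifier on labeled scans.
X_train = np.vstack([simulate_scans(face_pattern, 40),
                     simulate_scans(place_pattern, 40)])
y_train = np.array([0] * 40 + [1] * 40)  # 0 = face, 1 = place
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Online phase: decode the attended category one scan at a time.
new_scan = simulate_scans(face_pattern, 1)
print("attended:", "face" if clf.predict(new_scan)[0] == 0 else "place")
```

Because the classifier weights span all voxels, it can exploit exactly the kind of distributed (non-focal) category representation the study reports.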

  15. Visualizing electron dynamics in organic materials: Charge transport through molecules and angular resolved photoemission

    NASA Astrophysics Data System (ADS)

    Kümmel, Stephan

    Being able to visualize the dynamics of electrons in organic materials is a fascinating perspective. Simulations based on time-dependent density functional theory allow us to realize this hope, as they visualize the flow of charge through molecular structures in real space and real time. We present results on two fundamental processes: photoemission from organic semiconductor molecules and charge transport through molecular structures. In the first part we demonstrate that angular resolved photoemission intensities - from both theory and experiment - can often be interpreted as a visualization of molecular orbitals. However, counter-intuitive quantum-mechanical electron dynamics, such as emission perpendicular to the direction of the electrical field, can substantially alter the picture, adding surprising features to the molecular orbital interpretation. In a second study we calculate the flow of charge through conjugated molecules. The calculations show in real time how breaks in the conjugation can lead to a local buildup of charge and the formation of local electrical dipoles. These can interact with neighboring molecular chains. As a consequence, collections of ''molecular electrical wires'' can show distinctly different characteristics from ''classical electrical wires''. German Science Foundation GRK 1640.

  16. Fast Deep Tracking via Semi-Online Domain Adaptation

    NASA Astrophysics Data System (ADS)

    Li, Xiaoping; Luo, Wenbing; Zhu, Yi; Li, Hanxi; Wang, Mingwen

    2018-04-01

    Deep trackers have demonstrated overwhelming superiority over shallow methods. Unfortunately, they also suffer from low frame rates. To alleviate this problem, a number of real-time deep trackers have been proposed that remove the online updating procedure from the CNN model. However, the absence of online updates leads to a significant drop in tracking accuracy. In this work, we propose to perform domain adaptation for visual tracking in two stages, transferring information from the visual tracking domain and the instance domain respectively. In this way, the proposed visual tracker achieves tracking accuracy comparable to state-of-the-art trackers and runs at real-time speed on an average consumer GPU.

  17. Specialized Computer Systems for Environment Visualization

    NASA Astrophysics Data System (ADS)

    Al-Oraiqat, Anas M.; Bashkov, Evgeniy A.; Zori, Sergii A.

    2018-06-01

    The need for real-time image generation of landscapes arises in various fields, as part of tasks solved by virtual and augmented reality systems as well as geographic information systems. Such systems provide opportunities for collecting, storing, analyzing and graphically visualizing geographic data. Algorithmic and hardware/software tools for increasing the realism and efficiency of environment visualization in 3D visualization systems are proposed. This paper discusses a modified path-tracing algorithm with a two-level hierarchy of bounding volumes that finds intersections with axis-aligned bounding boxes. The proposed algorithm eliminates branching, which makes it better suited to implementation on multi-threaded CPUs and GPUs. A modified ROAM algorithm is used to solve the problem of high-quality visualization of reliefs and landscapes. The algorithm is implemented on parallel systems: an MPI cluster and Compute Unified Device Architecture (CUDA) networks. Results show that the implementation on MPI clusters is more efficient than on GPU/graphics processing clusters and allows real-time synthesis. The organization and algorithms of a parallel GPU system for 3D pseudo-stereo image/video synthesis are proposed, based on an analysis of the feasibility of realizing each stage on a parallel GPU architecture. An experimental prototype of a specialized hardware/software system for 3D pseudo-stereo imaging and video was developed on the CPU/GPU. The experimental results show that the proposed adaptation of 3D pseudo-stereo imaging to the architecture of GPU systems is efficient, accelerating the computational procedures of 3D pseudo-stereo synthesis for the anaglyph and anamorphic formats of the 3D stereo frame without additional optimization. The acceleration averages 11 and 54 times on the test GPUs.
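The branch-elimination idea for the axis-aligned bounding-box test can be illustrated with the classic slab method, written with min/max operations instead of per-axis conditionals. This is a generic sketch, not the paper's implementation; it assumes the reciprocal direction components are finite (zero direction components need separate handling upstream).

```python
def ray_aabb_intersect(origin, inv_dir, box_min, box_max):
    """Branch-free slab test for a ray against an axis-aligned bounding box.

    inv_dir holds 1/d for each ray direction component. Replacing per-axis
    branching with min/max keeps the inner loop uniform across rays, which
    is what makes the test SIMD- and GPU-friendly.
    """
    tmin, tmax = 0.0, float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t1 = (lo - o) * inv
        t2 = (hi - o) * inv
        tmin = max(tmin, min(t1, t2))  # entry into this slab
        tmax = min(tmax, max(t1, t2))  # exit from this slab
    return tmin <= tmax  # hit if the per-axis slab intervals overlap
```

For example, a ray from the origin with direction (1, 1, 1) hits the box spanning (1, 1, 1) to (2, 2, 2), while a box offset along one axis is correctly rejected because its slab intervals no longer overlap.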

  18. A new visually improved and sensitive loop mediated isothermal amplification (LAMP) for diagnosis of symptomatic falciparum malaria.

    PubMed

    Mohon, Abu Naser; Elahi, Rubayet; Khan, Wasif A; Haque, Rashidul; Sullivan, David J; Alam, Mohammad Shafiul

    2014-06-01

    Molecular diagnosis of malaria by nucleotide amplification requires sophisticated and expensive instruments, typically found only in well-established laboratories. Loop-mediated isothermal amplification (LAMP) has provided a new platform for an easily adaptable molecular technique for the diagnosis of malaria without the use of expensive instruments. A new primer set has been designed targeting the 18S rRNA gene for the detection of Plasmodium falciparum in whole blood samples. The efficacy of LAMP using the new primer set was assessed in this study in comparison to that of a previously described set of LAMP primers, as well as against microscopy and real-time PCR as reference methods for detecting P. falciparum. Pre-addition of hydroxy naphthol blue (HNB) to the LAMP reaction caused a distinct color change, thereby improving the visual detection system. The new LAMP assay was found to be 99.1% sensitive compared to microscopy and 98.1% compared to real-time PCR, while its specificity was 99% and 100% against microscopy and real-time PCR, respectively. Moreover, the LAMP method was in very good agreement with microscopy and real-time PCR (0.94 and 0.98, respectively). This new LAMP method can detect at least 5 parasites/μL of infected blood within 35 min, while the other LAMP method tested in this study could detect a minimum of 100 parasites/μL of human blood after 60 min of amplification. Thus, the new method is sensitive and specific, can be carried out in a very short time, and can substitute for PCR in healthcare clinics and standard laboratories. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. The Use of Microcomputer Based Laboratories in Chemistry Secondary Education: Present State of the Art and Ideas for Research-Based Practice

    ERIC Educational Resources Information Center

    Tortosa, Montserrat

    2012-01-01

    In microcomputer-based laboratories (MBL) with data loggers, one or more sensors are connected to an interface, and this to a computer. This equipment allows real-time visualization of the variables of an experiment and makes it possible to measure magnitudes that are difficult to measure with traditional equipment. Research shows that…

  20. [Design and implementation of online statistical analysis function in information system of air pollution and health impact monitoring].

    PubMed

    Lü, Yiran; Hao, Shuxin; Zhang, Guoqing; Liu, Jie; Liu, Yue; Xu, Dongqun

    2018-01-01

    To implement an online statistical analysis function in the information system for air pollution and health impact monitoring, and to obtain data analysis results in real time. Descriptive statistics, time-series analysis and multivariate regression analysis were implemented online on top of the database software using SQL and visual tools. The system generates basic statistical tables and summary tables of air pollution exposure and health impact data online; generates trend charts for each data component online, with interactive connections to the database; and generates interface tables that can be exported directly to R, SAS and SPSS. The information system for air pollution and health impact monitoring thus implements statistical analysis online and can provide real-time analysis results to its users.
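The kind of online summary table such a system generates with SQL can be sketched against an in-memory SQLite database. The schema, table and column names below are illustrative assumptions, not taken from the monitoring system described above.

```python
import sqlite3

# Illustrative exposure data only -- names and values are assumptions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE pm25_daily (city TEXT, day TEXT, pm25 REAL);
    INSERT INTO pm25_daily VALUES
        ('A', '2018-01-01', 35.0), ('A', '2018-01-02', 80.0),
        ('B', '2018-01-01', 20.0), ('B', '2018-01-02', 28.0);
""")

# One SQL statement produces an online descriptive-statistics summary
# table of the exposure data, grouped per city.
rows = conn.execute("""
    SELECT city,
           COUNT(*)            AS n_days,
           ROUND(AVG(pm25), 1) AS mean_pm25,
           MAX(pm25)           AS max_pm25
    FROM pm25_daily
    GROUP BY city
    ORDER BY city
""").fetchall()
for row in rows:
    print(row)
```

The same query results, fetched as plain rows, are what an export layer would hand off to R, SAS, or SPSS.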

  1. Verbal Modification via Visual Display

    ERIC Educational Resources Information Center

    Richmond, Edmun B.; Wallace-Childers, La Donna

    1977-01-01

    The inability of foreign language students to produce acceptable approximations of new vowel sounds initiated a study to devise a real-time visual display system whereby the students could match vowel production to a visual pedagogical model. The system used amateur radio equipment and a standard oscilloscope. (CHK)

  2. Stereoscopic applications for design visualization

    NASA Astrophysics Data System (ADS)

    Gilson, Kevin J.

    2007-02-01

    Advances in display technology and 3D design visualization applications have made real-time stereoscopic visualization of architectural and engineering projects a reality. Parsons Brinkerhoff (PB) is a transportation consulting firm that has used digital visualization tools from their inception and has helped pioneer the application of those tools to large-scale infrastructure projects. PB is one of the first Architecture/Engineering/Construction (AEC) firms to implement a CAVE, an immersive presentation environment that includes stereoscopic rear-projection capability. The firm also employs a portable stereoscopic front-projection system, and shutter-glass systems for smaller groups. PB is using commercial real-time 3D applications in combination with traditional 3D modeling programs to visualize and present large AEC projects to planners, clients and decision makers in stereo. These presentations create more immersive and spatially realistic presentations of the proposed designs. This paper will present the basic display tools and applications, and the 3D modeling techniques PB is using to produce interactive stereoscopic content. The paper will discuss several architectural and engineering design visualizations we have produced.

  3. Visualization of ocean forecast in BYTHOS

    NASA Astrophysics Data System (ADS)

    Zhuk, E.; Zodiatis, G.; Nikolaidis, A.; Stylianou, S.; Karaolia, A.

    2016-08-01

    The Cyprus Oceanography Center has constantly searched for new ideas for developing and implementing innovative methods concerning the use of information systems in oceanography, to suit both the Center's monitoring and forecasting products. Within this scope, two major online data management and visualization systems have been developed and utilized: CYCOFOS and BYTHOS. The Cyprus Coastal Ocean Forecasting and Observing System (CYCOFOS) provides a variety of operational predictions, such as ultra-high, high and medium resolution ocean forecasts in the Levantine Basin, offshore and coastal sea-state forecasts in the Mediterranean and Black Sea, tide forecasting in the Mediterranean, ocean remote sensing in the Eastern Mediterranean, and coastal and offshore monitoring. As a rich internet application, BYTHOS enables scientists to search, visualize and download oceanographic data online and in real time. A recent improvement of the BYTHOS system is its extension with access to, and visualization of, CYCOFOS data, overlaying forecast fields and observing data. The CYCOFOS data are stored on an OPeNDAP server in netCDF format; PHP and Python scripts were developed to search, process and visualize them. Data visualization is achieved through MapServer. The BYTHOS forecast access interface allows users to find the required forecast field by type, parameter, region, level and time. It also makes it possible to overlay different forecast and observing data for comprehensive analysis of a sea basin.

  4. Building large mosaics of confocal endomicroscopic images using visual servoing.

    PubMed

    Rosa, Benoît; Erden, Mustafa Suphi; Vercauteren, Tom; Herman, Benoît; Szewczyk, Jérôme; Morel, Guillaume

    2013-04-01

    Probe-based confocal laser endomicroscopy provides real-time microscopic images of tissues contacted by a small probe that can be inserted in vivo through a minimally invasive access. Mosaicking consists in sweeping the probe in contact with the tissue to be imaged while collecting the video stream, then processing the images to assemble them into a large mosaic. While most of the literature in this field has focused on image processing, little attention has been paid so far to the way the probe motion can be controlled. This is a crucial issue, since the precision of the probe trajectory control drastically influences the quality of the final mosaic. Robotically controlled motion has the potential to provide enough precision to perform mosaicking. In this paper, we emphasize the difficulties of implementing such an approach. First, probe-tissue contacts generate deformations that prevent the image trajectory from being properly controlled. Second, in the context of the minimally invasive procedures targeted by our research, robotic devices are likely to exhibit limited quality of distal probe motion control at the microscopic scale. To cope with these problems, visual servoing from real-time endomicroscopic images is proposed in this paper. It is implemented on two different devices (a high-accuracy industrial robot and a prototype minimally invasive device). Experiments on different kinds of environments (printed paper and ex vivo tissues) show that the quality of the visually servoed probe motion is sufficient to build mosaics with minimal distortion in spite of disturbances.
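At its core, visual servoing is a feedback law that converts the image-space error of a tracked feature into a motion command. The sketch below is a minimal proportional image-based law under assumed units and gain, not the paper's controller, which must additionally reject tissue-deformation disturbances.

```python
def visual_servo_step(feature_px, target_px, gain=0.5):
    """One iteration of a proportional image-based visual servoing law.

    feature_px: current (x, y) position of the tracked image feature.
    target_px:  desired (x, y) position on the planned mosaic trajectory.
    Returns a velocity command (here in px/frame, an assumed unit) that
    drives the feature toward the target; the error shrinks geometrically
    by a factor (1 - gain) per frame.
    """
    ex = target_px[0] - feature_px[0]
    ey = target_px[1] - feature_px[1]
    return (gain * ex, gain * ey)
```

Iterating this law in a closed loop, with the feature position re-measured from each new endomicroscopic frame, converges the feature onto the target regardless of small calibration errors, which is the practical appeal of servoing on the image itself.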

  5. [Parallel virtual reality visualization of extreme large medical datasets].

    PubMed

    Tang, Min

    2010-04-01

    On the basis of a brief description of grid computing, the essence and critical techniques of parallel visualization of extremely large medical datasets are discussed in connection with the intranets and commodity computers of hospitals. This paper introduces several core techniques, including the hardware structure, software framework, load balancing and virtual-reality visualization. The Maximum Intensity Projection algorithm is parallelized on a common PC cluster. In the virtual-reality world, three-dimensional models can be rotated, zoomed, translated and cut interactively and conveniently through a control panel built on the Virtual Reality Modeling Language (VRML). Experimental results demonstrate that this method provides promising real-time results and can serve as a good assistant in making clinical diagnoses.

  6. RICA: a reliable and image configurable arena for cyborg bumblebee based on CAN bus.

    PubMed

    Gong, Fan; Zheng, Nenggan; Xue, Lei; Xu, Kedi; Zheng, Xiaoxiang

    2014-01-01

    In this paper, we designed a reliable and image-configurable flight arena, RICA, for developing cyborg bumblebees. To meet the spatial and temporal requirements of bumblebees, the Controller Area Network (CAN) bus is adopted to interconnect the LED display modules, ensuring the reliability and real-time performance of the arena system. Easily configurable interfaces on a desktop computer, implemented with Python scripts, are provided to transmit visual patterns to the LED distributor online and configure RICA dynamically. The new arena system will be a powerful tool for investigating the quantitative relationship between visual inputs and induced flight behaviors, and will also be helpful for visual-motor research in other related fields.

  7. Visualizing Chemical Interaction Dynamics of Confined DNA Molecules

    NASA Astrophysics Data System (ADS)

    Henkin, Gilead; Berard, Daniel; Stabile, Frank; Leslie, Sabrina

    We present a novel nanofluidic approach to controllably introducing reagent molecules to interact with confined biopolymers, and to visualizing the reaction dynamics in real time. By dynamically deforming a flow cell using CLiC (Convex Lens-induced Confinement) microscopy, we are able to tune reaction chamber dimensions from micrometer to nanometer scales. We apply this gentle deformation to load and extend DNA polymers within embedded nanotopographies and visualize their interactions with other molecules in solution. Quantifying the change in configuration of polymers within embedded nanotopographies in response to binding/unbinding of reagent molecules provides new insights into the consequent change in their physical properties. CLiC technology enables an ultra-sensitive, massively parallel biochemical analysis platform that can access a broader range of interaction parameters than existing devices.

  8. Plug and Play web-based visualization of mobile air monitoring data (Abstract)

    EPA Science Inventory

    EPA’s Real-Time Geospatial (RETIGO) Data Viewer web-based tool is a new program reducing the technical barrier to visualize and understand geospatial air data time series collected using wearable, bicycle-mounted, or vehicle-mounted air sensors. The RETIGO tool, with anticipated...

  9. Interventional MRI-guided catheter placement and real time drug delivery to the central nervous system.

    PubMed

    Han, Seunggu J; Bankiewicz, Krystof; Butowski, Nicholas A; Larson, Paul S; Aghi, Manish K

    2016-06-01

    Local delivery of therapeutic agents into the brain has many advantages; however, the inability to predict, visualize and confirm the infusion into the intended target has been a major hurdle in its clinical development. Here, we describe the current workflow and application of the interventional MRI (iMRI) system for catheter placement and real time visualization of infusion. We have applied real time convection-enhanced delivery (CED) of therapeutic agents with iMRI across a number of different clinical trials settings in neuro-oncology and movement disorders. Ongoing developments and accumulating experience with the technique and technology of drug formulations, CED platforms, and iMRI systems will continue to make local therapeutic delivery into the brain more accurate, efficient, effective and safer.

  10. Wide-area, real-time monitoring and visualization system

    DOEpatents

    Budhraja, Vikram S.; Dyer, James D.; Martinez Morales, Carlos A.

    2013-03-19

    A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for metrics being monitored by the monitor computer are stored in a data base, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.

  11. Wide-area, real-time monitoring and visualization system

    DOEpatents

    Budhraja, Vikram S [Los Angeles, CA; Dyer, James D [La Mirada, CA; Martinez Morales, Carlos A [Upland, CA

    2011-11-15

    A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for metrics being monitored by the monitor computer are stored in a data base, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.

  12. Progress on the CWU READI Analysis Center

    NASA Astrophysics Data System (ADS)

    Melbourne, T. I.; Szeliga, W. M.; Santillan, V. M.; Scrivner, C.

    2015-12-01

    Real-time GPS position streams are desirable for a variety of seismic monitoring and hazard mitigation applications. We report on progress in our development of a comprehensive real-time GPS-based seismic monitoring system for the Cascadia subduction zone. This system is based on 1 Hz point position estimates computed in the ITRF08 reference frame. Convergence from phase and range observables to point position estimates is accelerated using a Kalman filter based, on-line stream editor that produces independent estimations of carrier phase integer biases and other parameters. Positions are then estimated using a short-arc approach and algorithms from JPL's GIPSY-OASIS software with satellite clock and orbit products from the International GNSS Service (IGS). The resulting positions show typical RMS scatter of 2.5 cm in the horizontal and 5 cm in the vertical with latencies below 2 seconds. To facilitate the use of these point position streams for applications such as seismic monitoring, we broadcast real-time positions and covariances using custom-built aggregation-distribution software based on RabbitMQ messaging platform. This software is capable of buffering 24-hour streams for hundreds of stations and providing them through a REST-ful web interface. To demonstrate the power of this approach, we have developed a Java-based front-end that provides a real-time visual display of time-series, displacement vector fields, and map-view, contoured, peak ground displacement. This Java-based front-end is available for download through the PANGA website. We are currently analyzing 80 PBO and PANGA stations along the Cascadia margin and gearing up to process all 400+ real-time stations that are operating in the Pacific Northwest, many of which are currently telemetered in real-time to CWU. These will serve as milestones towards our over-arching goal of extending our processing to include all of the available real-time streams from the Pacific rim. 
In addition, we have developed a Kalman filter to combine CWU real-time PPP solutions with those from Scripps Institution of Oceanography's PPP-AR real-time solutions, as well as real-time solutions from the USGS. These combined products should improve the robustness and reliability of real-time point-position streams in the near future.
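For a single epoch, a static Kalman-style combination of independent position solutions reduces to inverse-variance weighting, which can be sketched as follows. The interface and values are illustrative assumptions, not the CWU combination filter.

```python
def combine_estimates(estimates):
    """Inverse-variance fusion of independent position estimates.

    estimates: list of (position_m, variance_m2) pairs -- e.g. the CWU,
    SIO, and USGS solutions for one station at one epoch. Returns the
    fused position and its (always smaller) fused variance, which is
    exactly the static Kalman update for independent measurements.
    """
    num = sum(p / v for p, v in estimates)
    den = sum(1.0 / v for p, v in estimates)
    return num / den, 1.0 / den
```

Two equally uncertain solutions average to their midpoint with half the variance; a noisier third stream is automatically down-weighted rather than discarded, which is why combined products tend to be more robust than any single stream.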

  13. Real-Time Visualization of an HPF-based CFD Simulation

    NASA Technical Reports Server (NTRS)

    Kremenetsky, Mark; Vaziri, Arsi; Haimes, Robert; Chancellor, Marisa K. (Technical Monitor)

    1996-01-01

    Current time-dependent CFD simulations produce very large multi-dimensional data sets at each time step. The visual analysis of computational results is traditionally performed by post-processing the static data on graphics workstations. We present results from an alternate approach in which we analyze the simulation data in situ on each processing node at the time of simulation. The locally analyzed results, usually more economical and in reduced form, are then combined and sent back for visualization on a graphics workstation.

  14. ChRIS--A web-based neuroimaging and informatics system for collecting, organizing, processing, visualizing and sharing of medical data.

    PubMed

    Pienaar, Rudolph; Rannou, Nicolas; Bernal, Jorge; Hahn, Daniel; Grant, P Ellen

    2015-01-01

    The utility of web browsers for general-purpose computing, long anticipated, is only now coming to fruition. In this paper we present a web-based medical image data and information management software platform called ChRIS ([Boston] Children's Research Integration System). ChRIS' deep functionality allows for easy retrieval of medical image data from resources typically found in hospitals; organizes and presents information in a modern feed-like interface; provides access to a growing library of plugins that process these data, typically on a connected high-performance compute cluster; allows for easy data sharing between users and instances of ChRIS; and provides powerful 3D visualization and real-time collaboration.

  15. Method and System for Air Traffic Rerouting for Airspace Constraint Resolution

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz (Inventor); Morando, Alexander R. (Inventor); Sheth, Kapil S. (Inventor); McNally, B. David (Inventor); Clymer, Alexis A. (Inventor); Shih, Fu-tai (Inventor)

    2017-01-01

    A dynamic constraint avoidance route system automatically analyzes routes of aircraft flying, or to be flown, in or near constraint regions and attempts to find more time and fuel efficient reroutes around current and predicted constraints. The dynamic constraint avoidance route system continuously analyzes all flight routes and provides reroute advisories that are dynamically updated in real time. The dynamic constraint avoidance route system includes a graphical user interface that allows users to visualize, evaluate, modify if necessary, and implement proposed reroutes.

  16. An effective visualization technique for depth perception in augmented reality-based surgical navigation.

    PubMed

    Choi, Hyunseok; Cho, Byunghyun; Masamune, Ken; Hashizume, Makoto; Hong, Jaesung

    2016-03-01

    Depth perception is a major issue in augmented reality (AR)-based surgical navigation. We propose an AR and virtual reality (VR) switchable visualization system with distance information, and evaluate its performance in a surgical navigation set-up. To improve depth perception, seamless switching from AR to VR was implemented. In addition, the minimum distance between the tip of the surgical tool and the nearest organ was provided in real time. To evaluate the proposed techniques, five physicians and 20 non-medical volunteers participated in experiments. Targeting error, time taken, and numbers of collisions were measured in simulation experiments. There was a statistically significant difference between a simple AR technique and the proposed technique. We confirmed that depth perception in AR could be improved by the proposed seamless switching between AR and VR, and providing an indication of the minimum distance also facilitated the surgical tasks. Copyright © 2015 John Wiley & Sons, Ltd.

  17. Science information systems: Archive, access, and retrieval

    NASA Technical Reports Server (NTRS)

    Campbell, William J.

    1991-01-01

    The objective of this research is to develop technology for the automated characterization and interactive retrieval and visualization of very large, complex scientific data sets. Technologies will be developed for the following specific areas: (1) rapidly archiving data sets; (2) automatically characterizing and labeling data in near real-time; (3) providing users with the ability to browse contents of databases efficiently and effectively; (4) providing users with the ability to access and retrieve system independent data sets electronically; and (5) automatically alerting scientists to anomalies detected in data.

  18. Design of penicillin fermentation process simulation system

    NASA Astrophysics Data System (ADS)

    Qi, Xiaoyu; Yuan, Zhonghu; Qi, Xiaoxuan; Zhang, Wenqi

    2011-10-01

    Real-time monitoring of batch processes is attracting increasing attention: it can ensure safety and provide products of consistent quality. The design of a simulation system for batch-process fault diagnosis is therefore of great significance. In this paper, penicillin fermentation, a typical nonlinear, dynamic, multi-stage batch production process, is taken as the research object. A visual, human-machine interactive simulation software system based on the Windows operating system was developed. The simulation system provides an effective platform for research on batch-process fault diagnosis.

  19. Self-synchronizing Schlieren photography and interferometry for the visualization of unsteady transonic flows

    NASA Technical Reports Server (NTRS)

    Kadlec, R.

    1979-01-01

    The use of self-synchronizing stroboscopic Schlieren and laser interferometer systems to obtain quantitative space-time measurements of distinguished flow surfaces, streakline patterns, and the density field of two-dimensional flows with periodic content was investigated. A large-field, single-path stroboscopic Schlieren system was designed, constructed and successfully applied to visualize four periodic flows: the near wake behind an oscillating airfoil, edge-tone sound generation, a 2-D planar wall jet, and an axisymmetric pulsed sonic jet. This visualization technique provides an effective means of studying quasi-periodic flows in real time. Because the image on the viewing screen is a spatial signal average of the coherent periodic motion rather than a single realization, the high-speed motion of a quasi-periodic flow can be reconstructed by recording photographs of the flow at different fixed time delays within one cycle. The preliminary design and construction of a self-synchronizing stroboscopic laser interferometer with a modified Mach-Zehnder optical system is also reported.

  20. A real-time dashboard for managing pathology processes.

    PubMed

    Halwani, Fawaz; Li, Wei Chen; Banerjee, Diponkar; Lessard, Lysanne; Amyot, Daniel; Michalowski, Wojtek; Giffen, Randy

    2016-01-01

    The Eastern Ontario Regional Laboratory Association (EORLA) is a newly established association of all the laboratory and pathology departments of Eastern Ontario that currently includes facilities from eight hospitals. All surgical specimens for EORLA are processed in one central location, the Department of Pathology and Laboratory Medicine (DPLM) at The Ottawa Hospital (TOH), where the rapid growth and influx of surgical and cytology specimens has created many challenges in ensuring the timely processing of cases and reports. Although the entire process is maintained and tracked in a clinical information system, this system lacks pre-emptive warnings that can help management address issues as they arise. Dashboard technology provides automated, real-time visual clues that can be used to alert management when a case or specimen is not being processed within predefined time frames. We describe the development of a dashboard that helps pathology clinical management make informed decisions on specimen allocation and tracking. The dashboard was designed and developed in two phases, following a prototyping approach. The first prototype of the dashboard helped monitor and manage pathology processes at the DPLM. The use of this dashboard helped to uncover operational inefficiencies and contributed to an improvement of turn-around time within The Ottawa Hospital's DPLM. It also allowed the discovery of additional requirements, leading to a second prototype that provides finer-grained, real-time information about individual cases and specimens. We successfully developed a dashboard that enables managers to address delays and bottlenecks in specimen allocation and tracking. This support ensures that pathology reports are provided within the time-frame standards required for high-quality patient care. Given the importance of rapid diagnostics for a number of diseases, the use of real-time dashboards within pathology departments could contribute to improving the quality of patient care beyond EORLA's.
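The predefined-time-frame alerting the abstract describes can be sketched as a simple overdue check. The stage names, thresholds, and case records below are illustrative assumptions, not EORLA's actual workflow or standards:

```python
from datetime import datetime, timedelta

# Illustrative per-stage time frames (assumed values, not EORLA's standards).
THRESHOLDS = {
    "accessioning": timedelta(hours=4),
    "grossing": timedelta(hours=24),
    "reporting": timedelta(hours=72),
}

def flag_overdue(cases, now):
    """Return (case_id, stage) pairs whose current stage has exceeded its time frame."""
    alerts = []
    for case in cases:
        limit = THRESHOLDS.get(case["stage"])
        if limit is not None and now - case["stage_started"] > limit:
            alerts.append((case["id"], case["stage"]))
    return alerts

cases = [
    {"id": "S18-001", "stage": "grossing",
     "stage_started": datetime(2018, 3, 1, 8, 0)},   # in grossing for 26 h
    {"id": "S18-002", "stage": "reporting",
     "stage_started": datetime(2018, 3, 2, 9, 0)},   # in reporting for 1 h
]
print(flag_overdue(cases, datetime(2018, 3, 2, 10, 0)))
# -> [('S18-001', 'grossing')]
```

A real dashboard would poll the clinical information system for these timestamps and render the flagged cases as visual alerts rather than printing them.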

  1. Understanding and Analyzing Latency of Near Real-time Satellite Data

    NASA Astrophysics Data System (ADS)

    Han, W.; Jochum, M.; Brust, J.

    2016-12-01

    Acquiring and disseminating time-sensitive satellite data in a timely manner is a major concern for researchers and decision makers in weather forecasting, severe-weather warning, disaster and emergency response, environmental monitoring, and related fields. Understanding and analyzing the latency of near real-time satellite data helps explore the whole data-transmission flow, identify possible issues, and better connect data providers and users. The STAR (Center for Satellite Applications and Research of NOAA) Central Data Repository (SCDR) is a central repository that acquires, manipulates, and disseminates various types of near real-time satellite datasets to internal and external users. In this system, important timestamps, including observation beginning/end, processing, uploading, downloading, and ingestion, are retrieved and organized in the database, so the length of each transmission phase can be determined easily. The open-source NoSQL database MongoDB was selected to manage the timestamp information because of its dynamic schema, aggregation, and data-processing features. A user-friendly interface was developed to visualize and characterize the latency interactively. Taking the Himawari-8 HSD (Himawari Standard Data) file as an example, the data-transmission phases, including creating the HSD file from satellite observation, uploading the file to HimawariCloud, updating the file link in the webpage, and downloading and ingesting the file into SCDR, are worked out from the above-mentioned timestamps. The latencies can be viewed by time period, day of week, or hour of day in chart or table format, and anomalous latencies can be detected and reported through the user interface. Latency analysis gives data providers and users actionable insight into how to improve the transmission of near real-time satellite data and enhance its acquisition and management.
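The phase-by-phase latency computation described above can be sketched from per-file timestamps. The field and phase names here are illustrative assumptions, not SCDR's actual schema:

```python
from datetime import datetime

# One record per received file; field names mirror the phases in the abstract
# but are assumed for illustration, not taken from SCDR.
record = {
    "obs_end":    datetime(2016, 8, 1, 12, 0, 0),
    "uploaded":   datetime(2016, 8, 1, 12, 6, 30),
    "downloaded": datetime(2016, 8, 1, 12, 9, 0),
    "ingested":   datetime(2016, 8, 1, 12, 10, 45),
}

# (phase name, start field, end field)
PHASES = [("creation", "obs_end", "uploaded"),
          ("transfer", "uploaded", "downloaded"),
          ("ingest",   "downloaded", "ingested")]

def phase_latencies(rec):
    """Seconds spent in each transmission phase, plus end-to-end latency."""
    out = {name: (rec[end] - rec[start]).total_seconds()
           for name, start, end in PHASES}
    out["total"] = (rec["ingested"] - rec["obs_end"]).total_seconds()
    return out

print(phase_latencies(record))
# -> {'creation': 390.0, 'transfer': 150.0, 'ingest': 105.0, 'total': 645.0}
```

With the records stored in MongoDB, the same per-phase differences could instead be computed server-side with an aggregation pipeline and then grouped by hour of day or day of week for the charts the abstract describes.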

  2. Using inferential sensors for quality control of Everglades Depth Estimation Network water-level data

    USGS Publications Warehouse

    Petkewich, Matthew D.; Daamen, Ruby C.; Roehl, Edwin A.; Conrads, Paul

    2016-09-29

    The Everglades Depth Estimation Network (EDEN), with over 240 real-time gaging stations, provides hydrologic data for freshwater and tidal areas of the Everglades. These data are used to generate daily water-level and water-depth maps of the Everglades that are used to assess biotic responses to hydrologic change resulting from the U.S. Army Corps of Engineers Comprehensive Everglades Restoration Plan. The generation of EDEN daily water-level and water-depth maps depends on high-quality real-time data from water-level stations. Real-time data are automatically checked for outliers by assigning minimum and maximum thresholds for each station. Small errors in the real-time data, such as gradual drift of malfunctioning pressure transducers, are more difficult to identify immediately with visual inspection of time-series plots and may only be identified during on-site inspections of the stations. Correcting these small errors in the data often is time consuming, and water-level data may not be finalized for several months. To provide daily water-level and water-depth maps on a near real-time basis, EDEN needed an automated process to identify errors in water-level data and to provide estimates for missing or erroneous water-level data. The Automated Data Assurance and Management (ADAM) software uses inferential sensor technology often used in industrial applications. Rather than installing a redundant sensor to measure a process, such as an additional water-level station, inferential sensors (or virtual sensors) were developed for each station that make accurate estimates of the process measured by the hard sensor (the water-level gaging station). The inferential sensors in the ADAM software are empirical models that use inputs from one or more proximal stations. The advantage of ADAM is that it provides a redundant signal for the sensor in the field without the environmental threats associated with field conditions at stations (flood or hurricane, for example). In the event that a station does malfunction, ADAM provides an accurate estimate for the period of missing data. The ADAM software also is used in the quality assurance and quality control of the data. The virtual signals are compared to the real-time data, and if the difference between the two signals exceeds a certain tolerance, corrective action to the data and (or) the gaging station can be taken. The ADAM software is automated so that, each morning, the real-time EDEN data are compared to the inferential sensor signals and digital reports highlighting potentially erroneous real-time data are generated for appropriate support personnel. The development and application of inferential sensors is easily transferable to other real-time hydrologic monitoring networks.
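The virtual-sensor idea, estimating one station's water level from a proximal station and flagging divergence beyond a tolerance, can be sketched with a one-predictor least-squares fit. The station readings and tolerance below are made-up illustrations, not ADAM's actual models:

```python
def fit_linear(x, y):
    """Ordinary least squares for y ~ a*x + b (a single proximal station)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

# Historical training pairs (illustrative): proximal vs. target water level, in feet.
proximal = [7.1, 7.4, 7.9, 8.3, 8.8]
target   = [6.0, 6.3, 6.8, 7.2, 7.7]
a, b = fit_linear(proximal, target)

def check(proximal_now, measured_now, tolerance=0.15):
    """Compare the virtual-sensor estimate to the real-time reading."""
    estimate = a * proximal_now + b
    return estimate, abs(estimate - measured_now) > tolerance

est, suspect = check(8.0, 6.9)   # healthy reading: estimate matches measurement
_, drifted = check(8.0, 7.4)     # reading 0.5 ft off: possible transducer drift
```

The real system uses empirical models over one or more proximal stations; the point of the sketch is only the QC rule: when the virtual and real signals diverge beyond the tolerance, the station is flagged in the morning report.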

  3. The design of real time infrared image generation software based on Creator and Vega

    NASA Astrophysics Data System (ADS)

    Wang, Rui-feng; Wu, Wei-dong; Huo, Jun-xiu

    2013-09-01

    To meet the requirement for highly realistic, real-time dynamic infrared imagery in infrared image simulation, a method for designing a real-time infrared image simulation application on the VC++ platform is proposed, based on the visual simulation software Creator and Vega. The functions of Creator are introduced briefly, and the main features of the Vega development environment are analyzed. Methods for infrared modeling of targets and backgrounds are presented, the design flow chart of the development process of the IR image real-time generation software is given, and the functions of the TMM Tool, the MAT Tool, and the sensor module are explained; the real-time performance of the software is also addressed.

  4. Discrimination of holograms and real objects by pigeons (Columba livia) and humans (Homo sapiens).

    PubMed

    Stephan, Claudia; Steurer, Michael M; Aust, Ulrike

    2014-08-01

    The type of stimulus material employed in visual tasks is crucial to all comparative cognition research that involves object recognition. There is considerable controversy about the use of 2-dimensional stimuli and the impact that the lack of the 3rd dimension (i.e., depth) may have on animals' performance in tests for their visual and cognitive abilities. We report evidence of discrimination learning using a completely novel type of stimuli, namely, holograms. Like real objects, holograms provide full 3-dimensional shape information but they also offer many possibilities for systematically modifying the appearance of a stimulus. Hence, they provide a promising means for investigating visual perception and cognition of different species in a comparative way. We trained pigeons and humans to discriminate either between 2 real objects or between holograms of the same 2 objects, and we subsequently tested both species for the transfer of discrimination to the other presentation mode. The lack of any decrements in accuracy suggests that real objects and holograms were perceived as equivalent in both species and shows the general appropriateness of holograms as stimuli in visual tasks. A follow-up experiment involving the presentation of novel views of the training objects and holograms revealed some interspecies differences in rotational invariance, thereby confirming and extending the results of previous studies. Taken together, these results suggest that holograms may not only provide a promising tool for investigating yet unexplored issues, but their use may also lead to novel insights into some crucial aspects of comparative visual perception and categorization.

  5. Videoexoscopic real-time intraoperative navigation for spinal neurosurgery: a novel co-adaptation of two existing technology platforms, technical note.

    PubMed

    Huang, Meng; Barber, Sean Michael; Steele, William James; Boghani, Zain; Desai, Viren Rajendrakumar; Britz, Gavin Wayne; West, George Alexander; Trask, Todd Wilson; Holman, Paul Joseph

    2018-06-01

    Image-guided approaches to spinal instrumentation and interbody fusion have been widely popularized in the last decade [1-5]. Navigated pedicle screws are significantly less likely to breach [2, 3, 5, 6]. Navigation otherwise remains a point-reference tool because the projection is off-axis to the surgeon's inline loupe or microscope view. The Synaptive BrightMatter Drive videoexoscope monitor system represents a new paradigm for off-axis high-definition (HD) surgical visualization. It has many advantages over the traditional microscope and loupes, which have already been demonstrated in a cadaveric study [7]. An auxiliary but powerful capability of this system is projection of a second, modifiable image in a split-screen configuration. We hypothesized that integration of the Medtronic and Synaptive platforms could permit simultaneous visualization of reconstructed navigation and surgical-field images. With navigated instruments, this configuration can support live image-guided surgery, or real-time navigation (RTN). Medtronic O-arm/Stealth S7 navigation, MetRx, NavLock, and SureTrak spinal systems were implemented on a prone cadaveric specimen with a stream output to the Synaptive display. Surgical visualization was provided by a Storz Image S1 platform and camera mounted on the Synaptive robotic BrightMatter Drive. We successfully co-adapted the two platforms. A minimally invasive transforaminal lumbar interbody fusion (MIS TLIF) and an open pedicle subtraction osteotomy (PSO) were performed using a navigated high-speed drill under RTN. Disc shavers and trials were also used under RTN during the MIS TLIF. The synergy of the Synaptive HD videoexoscope robotic drive and the Medtronic Stealth platform allows for live image-guided surgery (RTN). Off-axis projection also allows upright, neutral cervical spine operative ergonomics for the surgeons and improved surgical-team visualization and education compared to traditional means. This technique has the potential to augment existing minimally invasive and open approaches, but will require long-term outcome measurements for efficacy.

  6. DspaceOgreTerrain 3D Terrain Visualization Tool

    NASA Technical Reports Server (NTRS)

    Myint, Steven; Jain, Abhinandan; Pomerantz, Marc I.

    2012-01-01

    DspaceOgreTerrain is an extension to the DspaceOgre 3D visualization tool that supports real-time visualization of various terrain types, including digital elevation maps, planets, and meshes. DspaceOgreTerrain supports creating 3D representations of terrains and placing them in a scene graph. The 3D representations allow for a continuous level of detail, GPU-based rendering, and overlaying graphics like wheel tracks and shadows. It supports reading data from the SimScape terrain-modeling library. DspaceOgreTerrain solves the problem of displaying the results of simulations that involve very large terrains. In the past, it has been used to visualize simulations of vehicle traverses on Lunar and Martian terrains. These terrains were made up of billions of vertices and would not have been renderable in real-time without using a continuous level of detail rendering technique.

  7. Real time visualization of dynamic magnetic fields with a nanomagnetic ferrolens

    NASA Astrophysics Data System (ADS)

    Markoulakis, Emmanouil; Rigakis, Iraklis; Chatzakis, John; Konstantaras, Antonios; Antonidakis, Emmanuel

    2018-04-01

    Due to advancements in nanomagnetism and the latest nanomagnetic materials and devices, a new potential field has been opened up for research and applications that was not possible before. We herein propose a new research field and application for nanomagnetism: the visualization of dynamic magnetic fields in real time; in short, Nano Magnetic Vision. A new methodology, technique, and apparatus were invented and prototyped in order to demonstrate and test this new application. As an application example, the visualization of the dynamic magnetic field of a transmitting antenna was chosen. Never-before-seen high-resolution photos and real-time color video revealing the actual dynamic magnetic field inside a transmitting radio antenna rod have been captured for the first time. The antenna rod is fed with six-hundred-volt orthogonal pulses. This unipolar signal is in the very low frequency (VLF) range. The signal, combined with the extremely short electrical length of the rod, ensures the generation of a relatively strong fluctuating magnetic field, analogous to the transmitted signal, along and inside the antenna. This field is induced into a ferrolens and becomes visible in real time within the frequency spectrum of normal human vision. The name we have given to the new observation apparatus is the SPIONs Superparamagnetic Ferrolens Microscope (SSFM), a powerful passive scientific observation tool with many other potential applications in the near future.

  8. Real-Time Agent-Based Modeling Simulation with in-situ Visualization of Complex Biological Systems: A Case Study on Vocal Fold Inflammation and Healing.

    PubMed

    Seekhao, Nuttiiya; Shung, Caroline; JaJa, Joseph; Mongeau, Luc; Li-Jessen, Nicole Y K

    2016-05-01

    We present an efficient and scalable scheme for implementing agent-based modeling (ABM) simulation with in situ visualization of large complex systems on heterogeneous computing platforms. The scheme is designed to make optimal use of the resources available on a heterogeneous platform consisting of a multicore CPU and a GPU, resulting in minimal to no resource idle time. Furthermore, the scheme was implemented under a client-server paradigm that enables remote users to visualize and analyze simulation data as it is being generated at each time step of the model. Performance of a simulation case study of vocal fold inflammation and wound healing with 3.8 million agents shows 35× and 7× speedups in execution time over single-core and multi-core CPU implementations, respectively. Each iteration of the model took less than 200 ms to simulate, visualize, and send the results to the client. This enables users to monitor the simulation in real time and modify its course as needed.

  9. Mapping language to visual referents: Does the degree of image realism matter?

    PubMed

    Saryazdi, Raheleh; Chambers, Craig G

    2018-01-01

    Studies of real-time spoken language comprehension have shown that listeners rapidly map unfolding speech to available referents in the immediate visual environment. This has been explored using various kinds of 2-dimensional (2D) stimuli, with convenience or availability typically motivating the choice of a particular image type. However, work in other areas has suggested that certain cognitive processes are sensitive to the level of realism in 2D representations. The present study examined the process of mapping language to depictions of objects that are more or less realistic, namely photographs versus clipart images. A custom stimulus set was first created by generating clipart images directly from photographs of real objects. Two visual world experiments were then conducted, varying whether referent identification was driven by noun or verb information. A modest benefit for clipart stimuli was observed during real-time processing, but only for noun-driven mappings. The results are discussed in terms of their implications for studies of visually situated language processing. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.

  10. Influences of Visual Attention and Reading Time on Children and Adults

    ERIC Educational Resources Information Center

    Wei, Chun-Chun; Ma, Min-Yuan

    2017-01-01

    This study investigates the relationship between visual attention and reading time using a mobile electroencephalography device. The mobile electroencephalography device uses a single channel dry sensor, which easily measures participants' attention in the real-world reading environment. The results reveal that age significantly influences visual…

  11. Obstructed bi-leaflet prosthetic mitral valve imaging with real-time three-dimensional transesophageal echocardiography.

    PubMed

    Shimbo, Mai; Watanabe, Hiroyuki; Kimura, Shunsuke; Terada, Mai; Iino, Takako; Iino, Kenji; Ito, Hiroshi

    2015-01-01

    Real-time three-dimensional transesophageal echocardiography (RT3D-TEE) can provide unique visualization and better understanding of the relationship among cardiac structures. Here, we report the case of an 85-year-old woman with an obstructed mitral prosthetic valve diagnosed promptly by RT3D-TEE, which clearly showed a leaflet stuck in the closed position. The opening and closing angles of the valve leaflets measured by RT3D-TEE were compatible with those measured by fluoroscopy. Moreover, RT3D-TEE revealed, in the ring of the prosthetic valve, thrombi that were not visible on fluoroscopy. RT3D-TEE might be a valuable diagnostic technique for prosthetic mitral valve thrombosis. © 2014 Wiley Periodicals, Inc.

  12. 3-D surface reconstruction of patient specific anatomic data using a pre-specified number of polygons.

    PubMed

    Aharon, S; Robb, R A

    1997-01-01

    Virtual reality environments provide highly interactive, natural control of the visualization process, significantly enhancing the scientific value of the data produced by medical imaging systems. Due to the computational and real-time display-update requirements of virtual reality interfaces, however, the complexity of the organ and tissue surfaces that can be displayed is limited. In this paper, we present a new algorithm for producing a polygonal surface containing a pre-specified number of polygons from patient- or subject-specific volumetric image data. The advantage of this new algorithm is that it effectively tiles complex structures with a specified number of polygons selected to optimize the trade-off between surface detail and real-time display rates.

  13. Multimodality optical imaging of embryonic heart microstructure

    PubMed Central

    Yelin, Ronit; Yelin, Dvir; Oh, Wang-Yuhl; Yun, Seok H.; Boudoux, Caroline; Vakoc, Benjamin J.; Bouma, Brett E.; Tearney, Guillermo J.

    2009-01-01

    Study of developmental heart defects requires the visualization of the microstructure and function of the embryonic myocardium, ideally with minimal alterations to the specimen. We demonstrate multiple endogenous contrast optical techniques for imaging the Xenopus laevis tadpole heart. Each technique provides distinct and complementary imaging capabilities, including: 1. 3-D coherence microscopy with subcellular (1 to 2 µm) resolution in fixed embryos, 2. real-time reflectance confocal microscopy with large penetration depth in vivo, and 3. ultra-high speed (up to 900 frames per second) that enables real-time 4-D high resolution imaging in vivo. These imaging modalities can provide a comprehensive picture of the morphologic and dynamic phenotype of the embryonic heart. The potential of endogenous-contrast optical microscopy is demonstrated for investigation of the teratogenic effects of ethanol. Microstructural abnormalities associated with high levels of ethanol exposure are observed, including compromised heart looping and loss of ventricular trabecular mass. PMID:18163837

  14. Multimodality optical imaging of embryonic heart microstructure.

    PubMed

    Yelin, Ronit; Yelin, Dvir; Oh, Wang-Yuhl; Yun, Seok H; Boudoux, Caroline; Vakoc, Benjamin J; Bouma, Brett E; Tearney, Guillermo J

    2007-01-01

    Study of developmental heart defects requires the visualization of the microstructure and function of the embryonic myocardium, ideally with minimal alterations to the specimen. We demonstrate multiple endogenous contrast optical techniques for imaging the Xenopus laevis tadpole heart. Each technique provides distinct and complementary imaging capabilities, including: 1. 3-D coherence microscopy with subcellular (1 to 2 µm) resolution in fixed embryos, 2. real-time reflectance confocal microscopy with large penetration depth in vivo, and 3. ultra-high speed (up to 900 frames per second) that enables real-time 4-D high resolution imaging in vivo. These imaging modalities can provide a comprehensive picture of the morphologic and dynamic phenotype of the embryonic heart. The potential of endogenous-contrast optical microscopy is demonstrated for investigation of the teratogenic effects of ethanol. Microstructural abnormalities associated with high levels of ethanol exposure are observed, including compromised heart looping and loss of ventricular trabecular mass.

  15. Bedside assistance in freehand ultrasonic diagnosis by real-time visual feedback of 3D scatter diagram of pulsatile tissue-motion

    NASA Astrophysics Data System (ADS)

    Fukuzawa, M.; Kawata, K.; Nakamori, N.; Kitsunezuka, Y.

    2011-03-01

    By real-time visual feedback of a 3D scatter diagram of pulsatile tissue motion, freehand ultrasonic diagnosis of neonatal ischemic diseases has been assisted at the bedside. The 2D ultrasonic movie was taken with a conventional ultrasonic apparatus (ATL HDI5000) and 5-7 MHz ultrasonic probes fitted with a compact tilt-sensor to measure probe orientation. Real-time 3D visualization was realized by developing an extended version of a PC-based visualization system. The software was originally developed on the DirectX platform and optimized with the streaming SIMD extensions. The 3D scatter diagram of the latest pulsatile tissues is continuously generated and visualized as a projection image, together with the ultrasonic movie of the current section, at more than 15 fps. It revealed the 3D structure of pulsatile tissues such as the middle and posterior cerebral arteries, the circle of Willis, and the cerebellar arteries, in whose blood flow pediatricians take great interest because asphyxiated and/or low-birth-weight neonates have a high risk of ischemic diseases such as hypoxic-ischemic encephalopathy and periventricular leukomalacia. Since pulsatile tissue motion is due to local blood flow, it can be concluded that the system developed in this work is very useful for assisting freehand ultrasonic diagnosis of ischemic diseases in the neonatal cranium.

  16. Development of a GIS-based integrated framework for coastal seiches monitoring and forecasting: A North Jiangsu shoal case study

    NASA Astrophysics Data System (ADS)

    Qin, Rufu; Lin, Liangzhao

    2017-06-01

    Coastal seiches have become an increasingly important issue in coastal science and present many challenges, particularly when attempting to provide warning services. This paper presents the methodologies, techniques, and integrated services adopted for the design and implementation of a Seiches Monitoring and Forecasting Integration Framework (SMAF-IF). The SMAF-IF is an integrated system that combines different types of sensors and numerical models with Geographic Information System (GIS) and web techniques, focused on coastal seiche event detection and early warning in the North Jiangsu shoal, China. The in situ sensors perform automatic and continuous monitoring of the state of the marine environment, and the numerical models provide meteorological and physical-oceanographic parameter estimates. Model-output processing software was developed in C# using ArcGIS Engine functions, providing the capability to automatically generate visualization maps and warning information. Leveraging the ArcGIS Flex API and ASP.NET web services, a web-based GIS framework was designed to facilitate quasi-real-time data access, interactive visualization and analysis, and the provision of early-warning services for end users. The integrated framework proposed in this study enables decision-makers and the public to respond quickly to emergency coastal seiche events and allows easy adaptation to other regions and scientific domains related to real-time monitoring and forecasting.

  17. [Clinical analysis of real-time iris recognition guided LASIK with femtosecond laser flap creation for myopic astigmatism].

    PubMed

    Jie, Li-ming; Wang, Qian; Zheng, Lin

    2013-08-01

    To assess the safety, efficacy, stability, and changes in cylindrical degree and axis after real-time iris-recognition-guided LASIK with femtosecond laser flap creation for the correction of myopic astigmatism. Retrospective case series. This observational case study comprised 136 patients (249 eyes) with myopic astigmatism followed for 6 months. Patients were divided into 3 groups according to the pre-operative cylindrical degree: Group 1, -0.75 to -1.25 D, 106 eyes; Group 2, -1.50 to -2.25 D, 89 eyes; and Group 3, -2.50 to -5.00 D, 54 eyes. They were also grouped by pre-operative astigmatism axis: Group A, with-the-rule astigmatism (WTRA), 156 eyes; Group B, against-the-rule astigmatism (ATRA), 64 eyes; Group C, oblique-axis astigmatism, 29 eyes. After the femtosecond laser flap was created, real-time iris-recognition-guided excimer ablation was performed. The naked visual acuity, the best-corrected visual acuity, and the degree and axis of astigmatism were analyzed and compared at 1, 3, and 6 months postoperatively. Static iris recognition detected eye cyclotorsional misalignment of 2.37° ± 2.16°; dynamic iris recognition detected an intraoperative cyclotorsional misalignment range of 0-4.3°. Six months after operation, the naked visual acuity was 0.5 or better in 100% of cases. No eye lost ≥ 1 line of best spectacle-corrected visual acuity (BSCVA). Six months after operation, the naked vision of 227 eyes surpassed the BSCVA, and 87 eyes gained 1 line of BSCVA. The degree of astigmatism decreased from (-1.72 ± 0.77) D pre-operation to (-0.29 ± 0.25) D post-operation. Six months after operation, WTRA decreased from 157 eyes (pre-operation) to 43 eyes (post-operation), ATRA decreased from 63 eyes (pre-operation) to 28 eyes (post-operation), oblique astigmatism increased from 29 eyes to 34 eyes, and 144 eyes became non-astigmatic. Real-time iris-recognition-guided LASIK with femtosecond laser flap creation can compensate for deviation from eye cyclotorsion, decrease iatrogenic astigmatism, and provide more precise treatment of the degree and axis of astigmatism. It is an effective and safe procedure for the treatment of myopic astigmatism.

  18. Real-time dose calculation and visualization for the proton therapy of ocular tumours

    NASA Astrophysics Data System (ADS)

    Pfeiffer, Karsten; Bendl, Rolf

    2001-03-01

    A new real-time dose calculation and visualization was developed as part of the new 3D treatment planning tool OCTOPUS for proton therapy of ocular tumours within a national research project together with the Hahn-Meitner Institut Berlin. The implementation resolves the common separation between parameter definition, dose calculation and evaluation and allows a direct examination of the expected dose distribution while adjusting the treatment parameters. The new tool allows the therapist to move the desired dose distribution under visual control in 3D to the appropriate place. The visualization of the resulting dose distribution as a 3D surface model, on any 2D slice or on the surface of specified ocular structures is done automatically when adapting parameters during the planning process. In addition, approximate dose volume histograms may be calculated with little extra time. The dose distribution is calculated and visualized in 200 ms with an accuracy of 6% for the 3D isodose surfaces and 8% for other objects. This paper discusses the advantages and limitations of this new approach.

  19. 40-in. OMS Kevlar(Registered Trademark) COPV S/N 007 Stress Rupture Test NDE

    NASA Technical Reports Server (NTRS)

    Saulsberry, Regor; Greene, Nate; Forth, Scott; Leifeste, Mark; Gallus, Tim; Yoder, Tommy; Keddy, Chris; Mandaras, Eric; Wincheski, Buzz; Williams, Philip; hide

    2010-01-01

    The presentation examines pretest nondestructive evaluation (NDE), including external/internal visual inspection, Raman spectroscopy, laser shearography, and laser profilometry; real-time NDE, including eddy current, acoustic emission (AE), and real-time portable Raman spectroscopy; and AE application to carbon/epoxy composite overwrapped pressure vessels.

  20. ScatterBlogs2: real-time monitoring of microblog messages through user-guided filtering.

    PubMed

    Bosch, Harald; Thom, Dennis; Heimerl, Florian; Püttmann, Edwin; Koch, Steffen; Krüger, Robert; Wörner, Michael; Ertl, Thomas

    2013-12-01

    The number of microblog posts published daily has reached a level that hampers the effective retrieval of relevant messages, and the amount of information conveyed through services such as Twitter is still increasing. Analysts require new methods for monitoring their topic of interest, dealing with the data volume and its dynamic nature. It is of particular importance to provide situational awareness for decision making in time-critical tasks. Current tools for monitoring microblogs typically filter messages based on user-defined keyword queries and metadata restrictions. Used on their own, such methods can have drawbacks with respect to filter accuracy and adaptability to changes in trends and topic structure. We suggest ScatterBlogs2, a new approach to let analysts build task-tailored message filters in an interactive and visual manner based on recorded messages of well-understood previous events. These message filters include supervised classification and query creation backed by the statistical distribution of terms and their co-occurrences. The created filter methods can be orchestrated and adapted afterwards for interactive, visual real-time monitoring and analysis of microblog feeds. We demonstrate the feasibility of our approach for analyzing the Twitter stream in emergency management scenarios.
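The event-backed filtering idea, weighting terms by their distribution in messages from a well-understood past event and then scoring the incoming stream, can be sketched as follows. This is a simplified stand-in, not the actual ScatterBlogs2 classifiers; the training messages, smoothing, and threshold are made-up illustrations:

```python
from collections import Counter

def train_term_weights(relevant, irrelevant):
    """Weight each term by how much more often it appeared in relevant
    messages from a past event (add-one smoothing avoids division by zero)."""
    rel = Counter(w for m in relevant for w in m.lower().split())
    irr = Counter(w for m in irrelevant for w in m.lower().split())
    return {w: (rel[w] + 1) / (irr[w] + 1) for w in set(rel) | set(irr)}

def score(message, weights):
    """Average per-word evidence; unseen words get a neutral weight of 1.0."""
    words = message.lower().split()
    return sum(weights.get(w, 1.0) for w in words) / len(words)

# Toy training messages from a hypothetical past flood event.
relevant = ["flood warning downtown", "river rising fast flood"]
irrelevant = ["great concert downtown", "lunch was great"]
weights = train_term_weights(relevant, irrelevant)

incoming = ["flood levels rising near the river", "concert tickets on sale"]
flagged = [m for m in incoming if score(m, weights) > 1.2]
print(flagged)
# -> ['flood levels rising near the river']
```

In the paper's workflow the analyst builds such filters visually from recorded events and can adapt them while monitoring the live stream; the sketch only shows the underlying term-statistics scoring.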

  1. Virtualized Traffic: reconstructing traffic flows from discrete spatiotemporal data.

    PubMed

    Sewall, Jason; van den Berg, Jur; Lin, Ming C; Manocha, Dinesh

    2011-01-01

    We present a novel concept, Virtualized Traffic, to reconstruct and visualize continuous traffic flows from discrete spatiotemporal data provided by traffic sensors or generated artificially to enhance a sense of immersion in a dynamic virtual world. Given the positions of each car at two recorded locations on a highway and the corresponding time instances, our approach can reconstruct the traffic flows (i.e., the dynamic motions of multiple cars over time) between the two locations along the highway for immersive visualization of virtual cities or other environments. Our algorithm is applicable to high-density traffic on highways with an arbitrary number of lanes and takes into account the geometric, kinematic, and dynamic constraints on the cars. Our method reconstructs the car motion that automatically minimizes the number of lane changes, respects safety distance to other cars, and computes the acceleration necessary to obtain a smooth traffic flow subject to the given constraints. Furthermore, our framework can process a continuous stream of input data in real time, enabling the users to view virtualized traffic events in a virtual world as they occur. We demonstrate our reconstruction technique with both synthetic and real-world input. © 2011 IEEE Published by the IEEE Computer Society
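The core reconstruction step (fitting a motion profile between two recorded position/time pairs for each car) can be illustrated with a minimal constant-acceleration sketch. The paper's full method additionally enforces lane-change, safety-distance, and dynamic constraints; the function name here is hypothetical:

```python
def reconstruct_segment(x0, x1, t0, t1, v0):
    """Fit constant-acceleration motion between two sensor readings.

    Given a car observed at position x0 at time t0 and at x1 at t1,
    entering the segment with speed v0, solve
        x1 = x0 + v0*dt + 0.5*a*dt**2
    for the acceleration a, and return (a, exit_speed).
    """
    dt = t1 - t0
    a = 2.0 * ((x1 - x0) - v0 * dt) / dt**2
    return a, v0 + a * dt
```

Chaining such segments per car, with the exit speed of one segment feeding the next, yields a smooth trajectory consistent with every sensor reading.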

  2. Off-the-shelf real-time monitoring of satellite constellations in a visual 3-D environment

    NASA Technical Reports Server (NTRS)

    Schwuttke, Ursula M.; Hervias, Felipe; Cheng, Cecilia Han; Mactutis, Anthony; Angelino, Robert

    1996-01-01

The multimission spacecraft analysis system (MSAS) data monitor is a generic software product for future real-time data monitoring and analysis. The system represents the status of a satellite constellation through the shape, color, motion and position of graphical objects floating in a three-dimensional virtual reality environment. It may be used for the monitoring of large volumes of data, for viewing results in configurable displays, and for providing high-level and detailed views of a constellation of monitored satellites. It is considered that the data monitor is an improvement on conventional graphic and text-based displays as it increases the amount of data that the operator can absorb in a given period, and can be installed and configured without the requirement for software development by the end user. The functionality of the system is described, including navigation abilities, representation of alarms in the cybergrid, limit violation, real-time trend analysis, and alarm status indication.

  3. REACH: Real-Time Data Awareness in Multi-Spacecraft Missions

    NASA Technical Reports Server (NTRS)

    Maks, Lori; Coleman, Jason; Hennessy, Joseph F. (Technical Monitor)

    2002-01-01

    NASA's Advanced Architectures and Automation Branch at the Goddard Space Flight Center (Code 588) saw the potential to reduce the cost of constellation missions by creating new user interfaces to the ground system health-and-safety data. The goal is to enable a small Flight Operations Team (FOT) to remain aware and responsive to the increased amount of ground system information in a multi-spacecraft environment. Rather than abandon the tried and true, these interfaces were developed to run alongside existing ground system software to provide additional support to the FOT. These new user interfaces have been combined in a tool called REACH. REACH-the Real-time Evaluation and Analysis of Consolidated Health-is a software product that uses advanced visualization techniques to make spacecraft anomalies easy to spot, no matter how many spacecraft are in the constellation. REACH reads numerous real-time streams of data from the ground system(s) and displays synthesized information to the FOT such that anomalies are easy to pick out and investigate.

  4. GiPSi: a framework for open source/open architecture software development for organ-level surgical simulation.

    PubMed

    Cavuşoğlu, M Cenk; Göktekin, Tolga G; Tendick, Frank

    2006-04-01

    This paper presents the architectural details of an evolving open source/open architecture software framework for developing organ-level surgical simulations. Our goal is to facilitate shared development of reusable models, to accommodate heterogeneous models of computation, and to provide a framework for interfacing multiple heterogeneous models. The framework provides an application programming interface for interfacing dynamic models defined over spatial domains. It is specifically designed to be independent of the specifics of the modeling methods used, and therefore facilitates seamless integration of heterogeneous models and processes. Furthermore, each model has separate geometries for visualization, simulation, and interfacing, allowing the model developer to choose the most natural geometric representation for each case. Input/output interfaces for visualization and haptics for real-time interactive applications have also been provided.

  5. Toward real-time regional earthquake simulation II: Real-time Online earthquake Simulation (ROS) of Taiwan earthquakes

    NASA Astrophysics Data System (ADS)

    Lee, Shiann-Jong; Liu, Qinya; Tromp, Jeroen; Komatitsch, Dimitri; Liang, Wen-Tzong; Huang, Bor-Shouh

    2014-06-01

We developed a Real-time Online earthquake Simulation system (ROS) to simulate regional earthquakes in Taiwan. The ROS uses a centroid moment tensor solution of seismic events from a Real-time Moment Tensor monitoring system (RMT), which provides all the point source parameters including the event origin time, hypocentral location, moment magnitude and focal mechanism within 2 min after the occurrence of an earthquake. Then, all of the source parameters are automatically forwarded to the ROS to perform an earthquake simulation, which is based on a spectral-element method (SEM). A new island-wide, high-resolution SEM mesh model is developed for the whole of Taiwan in this study. We have improved SEM mesh quality by introducing a thin high-resolution mesh layer near the surface to accommodate steep and rapidly varying topography. The mesh for the shallow sedimentary basin is adjusted to reflect its complex geometry and sharp lateral velocity contrasts. The grid resolution at the surface is about 545 m, which is sufficient to resolve topography and tomography data for simulations accurate up to 1.0 Hz. The ROS is also an infrastructural service, making online earthquake simulation feasible. Users can conduct their own earthquake simulation by providing a set of source parameters through the ROS webpage. For visualization, a ShakeMovie and ShakeMap are produced during the simulation. The time needed for one event is roughly 3 min for a 70 s ground motion simulation. The ROS is operated online at the Institute of Earth Sciences, Academia Sinica (http://ros.earth.sinica.edu.tw/). Our long-term goal for the ROS system is to contribute to public earth science outreach and to realize seismic ground motion prediction in real time.

  6. Creating wavelet-based models for real-time synthesis of perceptually convincing environmental sounds

    NASA Astrophysics Data System (ADS)

    Miner, Nadine Elizabeth

    1998-09-01

This dissertation presents a new wavelet-based method for synthesizing perceptually convincing, dynamic sounds using parameterized sound models. The sound synthesis method is applicable to a variety of applications including Virtual Reality (VR), multi-media, entertainment, and the World Wide Web (WWW). A unique contribution of this research is the modeling of the stochastic, or non-pitched, sound components. This stochastic-based modeling approach leads to perceptually compelling sound synthesis. Two preliminary studies were conducted to provide data on multi-sensory interaction and audio-visual synchronization timing. These results contributed to the design of the new sound synthesis method. The method uses a four-phase development process, including analysis, parameterization, synthesis and validation, to create the wavelet-based sound models. A patent is pending for this dynamic sound synthesis method, which provides perceptually-realistic, real-time sound generation. This dissertation also presents a battery of perceptual experiments developed to verify the sound synthesis results. These experiments are applicable for validation of any sound synthesis technique.

  7. South Atlantic Bight Synoptic Offshore Observational Network

    DTIC Science & Technology

    1999-09-30

GOAL The long-term goal is to evaluate underwater television for providing fishery managers real-time visual data on reef fish communities which will... overfishing, that a complete moratorium on fishing for this species has been suggested by the South Atlantic Fishery Management Council. There is a... our understanding of fish community dynamics. Also, SC DNR fishery scientists are conducting research on fish communities of artificial reefs that are

  8. Improving Visual Survey Capabilities for Marine Mammal Studies

    DTIC Science & Technology

    2015-09-30

pedestals, and wooden disks were shipped to Mount Desert Rock Island off the Maine coast for installation on the upper floor of the lighthouse there... ESTCP) and Navy Living Marine Resources (LMR) Program. This project will demonstrate and evaluate real-time passive acoustic detection... Four custom wooden disks were fabricated by the WHOI carpenter shop to provide a shelf for observers to rest their arms. Two sets of binoculars

  9. Force-Balance Dynamic Display

    NASA Technical Reports Server (NTRS)

    Ferris, Alice T.; White, William C.

    1988-01-01

    Balance dynamic display unit (BDDU) is compact system conditioning six dynamic analog signals so they are monitored simultaneously in real time on single-trace oscilloscope. Typical BDDU oscilloscope display in scan mode shows each channel occupying one-sixth of total trace. System features two display modes usable with conventional, single-channel oscilloscope: multiplexed six-channel "bar-graph" format and single-channel display. Two-stage visual and audible limit alarm provided for each channel.

  10. Savant Genome Browser 2: visualization and analysis for population-scale genomics.

    PubMed

    Fiume, Marc; Smith, Eric J M; Brook, Andrew; Strbenac, Dario; Turner, Brian; Mezlini, Aziz M; Robinson, Mark D; Wodak, Shoshana J; Brudno, Michael

    2012-07-01

    High-throughput sequencing (HTS) technologies are providing an unprecedented capacity for data generation, and there is a corresponding need for efficient data exploration and analysis capabilities. Although most existing tools for HTS data analysis are developed for either automated (e.g. genotyping) or visualization (e.g. genome browsing) purposes, such tools are most powerful when combined. For example, integration of visualization and computation allows users to iteratively refine their analyses by updating computational parameters within the visual framework in real-time. Here we introduce the second version of the Savant Genome Browser, a standalone program for visual and computational analysis of HTS data. Savant substantially improves upon its predecessor and existing tools by introducing innovative visualization modes and navigation interfaces for several genomic datatypes, and synergizing visual and automated analyses in a way that is powerful yet easy even for non-expert users. We also present a number of plugins that were developed by the Savant Community, which demonstrate the power of integrating visual and automated analyses using Savant. The Savant Genome Browser is freely available (open source) at www.savantbrowser.com.

  11. Savant Genome Browser 2: visualization and analysis for population-scale genomics

    PubMed Central

    Smith, Eric J. M.; Brook, Andrew; Strbenac, Dario; Turner, Brian; Mezlini, Aziz M.; Robinson, Mark D.; Wodak, Shoshana J.; Brudno, Michael

    2012-01-01

    High-throughput sequencing (HTS) technologies are providing an unprecedented capacity for data generation, and there is a corresponding need for efficient data exploration and analysis capabilities. Although most existing tools for HTS data analysis are developed for either automated (e.g. genotyping) or visualization (e.g. genome browsing) purposes, such tools are most powerful when combined. For example, integration of visualization and computation allows users to iteratively refine their analyses by updating computational parameters within the visual framework in real-time. Here we introduce the second version of the Savant Genome Browser, a standalone program for visual and computational analysis of HTS data. Savant substantially improves upon its predecessor and existing tools by introducing innovative visualization modes and navigation interfaces for several genomic datatypes, and synergizing visual and automated analyses in a way that is powerful yet easy even for non-expert users. We also present a number of plugins that were developed by the Savant Community, which demonstrate the power of integrating visual and automated analyses using Savant. The Savant Genome Browser is freely available (open source) at www.savantbrowser.com. PMID:22638571

  12. Probe Oscillation Shear Wave Elastography: Initial In Vivo Results in Liver.

    PubMed

    Mellema, Daniel C; Song, Pengfei; Kinnick, Randall R; Trzasko, Joshua D; Urban, Matthew W; Greenleaf, James F; Manduca, Armando; Chen, Shigao

    2018-05-01

    Shear wave elastography methods are able to accurately measure tissue stiffness, allowing these techniques to monitor the progression of hepatic fibrosis. While many methods rely on acoustic radiation force to generate shear waves for 2-D imaging, probe oscillation shear wave elastography (PROSE) provides an alternative approach by generating shear waves through continuous vibration of the ultrasound probe while simultaneously detecting the resulting motion. The generated shear wave field in in vivo liver is complicated, and the amplitude and quality of these shear waves can be influenced by the placement of the vibrating probe. To address these challenges, a real-time shear wave visualization tool was implemented to provide instantaneous visual feedback to optimize probe placement. Even with the real-time display, it was not possible to fully suppress residual motion with established filtering methods. To solve this problem, the shear wave signal in each frame was decoupled from motion and other sources through the use of a parameter-free empirical mode decomposition before calculating shear wave speeds. This method was evaluated in a phantom as well as in in vivo livers from five volunteers. PROSE results in the phantom as well as in vivo liver correlated well with independent measurements using the commercial General Electric Logiq E9 scanner.

  13. The Integrated Virtual Environment Rehabilitation Treadmill System

    PubMed Central

    Feasel, Jeff; Whitton, Mary C.; Kassler, Laura; Brooks, Frederick P.; Lewek, Michael D.

    2015-01-01

    Slow gait speed and interlimb asymmetry are prevalent in a variety of disorders. Current approaches to locomotor retraining emphasize the need for appropriate feedback during intensive, task-specific practice. This paper describes the design and feasibility testing of the integrated virtual environment rehabilitation treadmill (IVERT) system intended to provide real-time, intuitive feedback regarding gait speed and asymmetry during training. The IVERT system integrates an instrumented, split-belt treadmill with a front-projection, immersive virtual environment. The novel adaptive control system uses only ground reaction force data from the treadmill to continuously update the speeds of the two treadmill belts independently, as well as to control the speed and heading in the virtual environment in real time. Feedback regarding gait asymmetry is presented 1) visually as walking a curved trajectory through the virtual environment and 2) proprioceptively in the form of different belt speeds on the split-belt treadmill. A feasibility study involving five individuals with asymmetric gait found that these individuals could effectively control the speed of locomotion and perceive gait asymmetry during the training session. Although minimal changes in overground gait symmetry were observed immediately following a single training session, further studies should be done to determine the IVERT’s potential as a tool for rehabilitation of asymmetric gait by providing patients with congruent visual and proprioceptive feedback. PMID:21652279
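One plausible sketch of the kind of mapping such a controller might perform: a symmetry index computed from per-limb stance impulses drives the two belt speeds apart. This is an illustrative assumption for clarity, not the IVERT system's actual adaptive control law, and all names are hypothetical:

```python
def belt_speed_update(left_impulse, right_impulse, base_speed, gain=0.5):
    """Map per-limb ground-reaction-force stance impulses to belt speeds.

    A symmetry index in [-1, 1] is computed from the two impulses;
    each belt speed is nudged in proportion, so the limb bearing less
    load sees a slower belt.  Returns (left_speed, right_speed, asym).
    """
    total = left_impulse + right_impulse
    asym = (left_impulse - right_impulse) / total if total else 0.0
    return base_speed * (1 + gain * asym), base_speed * (1 - gain * asym), asym
```

The same symmetry index could steer the heading in the virtual environment, producing the curved-trajectory feedback the paper describes.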

  14. Binocular Goggle Augmented Imaging and Navigation System provides real-time fluorescence image guidance for tumor resection and sentinel lymph node mapping

    PubMed Central

    B. Mondal, Suman; Gao, Shengkui; Zhu, Nan; Sudlow, Gail P.; Liang, Kexian; Som, Avik; Akers, Walter J.; Fields, Ryan C.; Margenthaler, Julie; Liang, Rongguang; Gruev, Viktor; Achilefu, Samuel

    2015-01-01

    The inability to identify microscopic tumors and assess surgical margins in real-time during oncologic surgery leads to incomplete tumor removal, increases the chances of tumor recurrence, and necessitates costly repeat surgery. To overcome these challenges, we have developed a wearable goggle augmented imaging and navigation system (GAINS) that can provide accurate intraoperative visualization of tumors and sentinel lymph nodes in real-time without disrupting normal surgical workflow. GAINS projects both near-infrared fluorescence from tumors and the natural color images of tissue onto a head-mounted display without latency. Aided by tumor-targeted contrast agents, the system detected tumors in subcutaneous and metastatic mouse models with high accuracy (sensitivity = 100%, specificity = 98% ± 5% standard deviation). Human pilot studies in breast cancer and melanoma patients using a near-infrared dye show that the GAINS detected sentinel lymph nodes with 100% sensitivity. Clinical use of the GAINS to guide tumor resection and sentinel lymph node mapping promises to improve surgical outcomes, reduce rates of repeat surgery, and improve the accuracy of cancer staging. PMID:26179014

  15. Quality labeled faces in the wild (QLFW): a database for studying face recognition in real-world environments

    NASA Astrophysics Data System (ADS)

    Karam, Lina J.; Zhu, Tong

    2015-03-01

    The varying quality of face images is an important challenge that limits the effectiveness of face recognition technology when applied in real-world applications. Existing face image databases do not consider the effect of distortions that commonly occur in real-world environments. This database (QLFW) represents an initial attempt to provide a set of labeled face images spanning the wide range of quality, from no perceived impairment to strong perceived impairment for face detection and face recognition applications. Types of impairment include JPEG2000 compression, JPEG compression, additive white noise, Gaussian blur and contrast change. Subjective experiments are conducted to assess the perceived visual quality of faces under different levels and types of distortions and also to assess the human recognition performance under the considered distortions. One goal of this work is to enable automated performance evaluation of face recognition technologies in the presence of different types and levels of visual distortions. This will consequently enable the development of face recognition systems that can operate reliably on real-world visual content in the presence of real-world visual distortions. Another goal is to enable the development and assessment of visual quality metrics for face images and for face detection and recognition applications.

  16. Genevar: a database and Java application for the analysis and visualization of SNP-gene associations in eQTL studies.

    PubMed

    Yang, Tsun-Po; Beazley, Claude; Montgomery, Stephen B; Dimas, Antigone S; Gutierrez-Arcelus, Maria; Stranger, Barbara E; Deloukas, Panos; Dermitzakis, Emmanouil T

    2010-10-01

    Genevar (GENe Expression VARiation) is a database and Java tool designed to integrate multiple datasets, and provides analysis and visualization of associations between sequence variation and gene expression. Genevar allows researchers to investigate expression quantitative trait loci (eQTL) associations within a gene locus of interest in real time. The database and application can be installed on a standard computer in database mode and, in addition, on a server to share discoveries among affiliations or the broader community over the Internet via web services protocols. http://www.sanger.ac.uk/resources/software/genevar.
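At its core, a single SNP-gene eQTL test regresses expression levels on genotype dosage. The following is a minimal sketch of that underlying association statistic, not Genevar's actual implementation; the function name is hypothetical:

```python
import math

def eqtl_association(genotypes, expression):
    """Least-squares slope and Pearson r for one SNP-gene pair.

    genotypes: allele dosages (0/1/2), expression: matched expression
    levels for the same individuals.
    """
    n = len(genotypes)
    mg = sum(genotypes) / n
    me = sum(expression) / n
    cov = sum((g - mg) * (e - me) for g, e in zip(genotypes, expression))
    vg = sum((g - mg) ** 2 for g in genotypes)
    ve = sum((e - me) ** 2 for e in expression)
    return cov / vg, cov / math.sqrt(vg * ve)
```

A tool like Genevar would repeat such a test across many SNPs in a locus and visualize the resulting association strengths interactively.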

  17. Musculoskeletal-see-through mirror: computational modeling and algorithm for whole-body muscle activity visualization in real time.

    PubMed

    Murai, Akihiko; Kurosaki, Kosuke; Yamane, Katsu; Nakamura, Yoshihiko

    2010-12-01

In this paper, we present a system that estimates and visualizes muscle tensions in real time using optical motion capture and electromyography (EMG). The system overlays rendered musculoskeletal human model on top of a live video image of the subject. The subject therefore has an impression that he/she sees the muscles with tension information through the cloth and skin. The main technical challenge lies in real-time estimation of muscle tension. Since existing algorithms using mathematical optimization to distribute joint torques to muscle tensions are too slow for our purpose, we develop a new algorithm that computes a reasonable approximation of muscle tensions based on the internal connections between muscles known as neuronal binding. The algorithm can estimate the tensions of 274 muscles in only 16 ms, and the whole visualization system runs at about 15 fps. The developed system has been applied to assist sports training, and user case studies show its usefulness. Possible applications include interfaces for assisting rehabilitation. Copyright © 2010 Elsevier Ltd. All rights reserved.
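For context, the classical optimization that the paper approximates distributes a known joint torque over the muscles crossing that joint. A minimal closed-form minimum-norm sketch for a single joint is shown below; it is illustrative only, ignoring the muscles-only-pull constraint and the neuronal-binding heuristic the paper actually uses:

```python
def distribute_torque(tau, moment_arms):
    """Minimum-norm distribution of one joint torque to its muscles.

    Solves  min sum(f_i**2)  subject to  sum(r_i * f_i) = tau,
    whose closed-form solution is  f_i = tau * r_i / sum(r_j**2).
    tau: joint torque; moment_arms: per-muscle moment arms r_i.
    """
    denom = sum(r * r for r in moment_arms)
    return [tau * r / denom for r in moment_arms]
```

Repeating even this cheap closed form over hundreds of muscles hints at why a 16 ms budget rules out iterative constrained optimization.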

  18. Automated Testing Experience of the Linear Aerospike SR-71 Experiment (LASRE) Controller

    NASA Technical Reports Server (NTRS)

    Larson, Richard R.

    1999-01-01

System controllers must be fail-safe, low-cost, flexible to software changes, able to output health and status words, and permit rapid retest qualification. The system controller designed and tested for the aerospike engine program was an attempt to meet these requirements. This paper describes (1) the aerospike controller design, (2) the automated simulation testing techniques, and (3) the real-time monitoring data visualization structure. Controller cost was minimized by design of a single-string system that used an off-the-shelf 486 central processing unit (CPU). A linked-list architecture, with states (nodes) defined in a user-friendly state table, accomplished software changes to the controller. Proven to be fail-safe, this system reported the abort cause and automatically reverted to a safe condition for any first failure. A real-time simulation and test system automated the software checkout and retest requirements. A program requirement to decode all abort causes in real time during all ground and flight tests assured the safety of flight decisions and the proper execution of mission rules. The design also included health and status words, and provided real-time analysis and interpretation of all health and status data.

  19. The ultrasound brain helmet: early human feasibility study of multiple simultaneous 3D scans of cerebral vasculature

    NASA Astrophysics Data System (ADS)

    Lindsey, Brooks D.; Ivancevich, Nikolas M.; Whitman, John; Light, Edward; Fronheiser, Matthew; Nicoletto, Heather A.; Laskowitz, Daniel T.; Smith, Stephen W.

    2009-02-01

    We describe early stage experiments to test the feasibility of an ultrasound brain helmet to produce multiple simultaneous real-time 3D scans of the cerebral vasculature from temporal and suboccipital acoustic windows of the skull. The transducer hardware and software of the Volumetrics Medical Imaging real-time 3D scanner were modified to support dual 2.5 MHz matrix arrays of 256 transmit elements and 128 receive elements which produce two simultaneous 64° pyramidal scans. The real-time display format consists of two coronal B-mode images merged into a 128° sector, two simultaneous parasagittal images merged into a 128° × 64° C-mode plane, and a simultaneous 64° axial image. Real-time 3D color Doppler images acquired in initial clinical studies after contrast injection demonstrate flow in several representative blood vessels. An offline Doppler rendering of data from two transducers simultaneously scanning via the temporal windows provides an early visualization of the flow in vessels on both sides of the brain. The long-term goal is to produce real-time 3D ultrasound images of the cerebral vasculature from a portable unit capable of internet transmission, thus enabling interactive 3D imaging, remote diagnosis and earlier therapeutic intervention. We are motivated by the urgency for rapid diagnosis of stroke due to the short time window of effective therapeutic intervention.

  20. Global Static Indexing for Real-Time Exploration of Very Large Regular Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pascucci, V; Frank, R

    2001-07-23

In this paper we introduce a new indexing scheme for progressive traversal and visualization of large regular grids. We demonstrate the potential of our approach by providing a tool that displays at interactive rates planar slices of scalar field data with very modest computing resources. We obtain unprecedented results both in terms of absolute performance and, more importantly, in terms of scalability. On a laptop computer we provide real time interaction with a 2048³ grid (8 Giga-nodes) using only 20MB of memory. On an SGI Onyx we slice interactively an 8192³ grid (1/2 tera-nodes) using only 60MB of memory. The scheme relies simply on the determination of an appropriate reordering of the rectilinear grid data and a progressive construction of the output slice. The reordering minimizes the amount of I/O performed during the out-of-core computation. The progressive and asynchronous computation of the output provides flexible quality/speed tradeoffs and a time-critical and interruptible user interface.
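The reordering described is a hierarchical variant of Z-order (Lebesgue) indexing; a plain Morton bit-interleave conveys the idea. A sketch, with `morton3` as a hypothetical name (11 bits per axis covers a 2048³ grid):

```python
def morton3(x, y, z, bits=11):
    """Interleave the bits of (x, y, z) into one Z-order index.

    Spatially nearby grid nodes map to nearby indices, so a slice
    traversal touches far fewer disk blocks than row-major order.
    """
    idx = 0
    for b in range(bits):
        idx |= ((x >> b) & 1) << (3 * b)
        idx |= ((y >> b) & 1) << (3 * b + 1)
        idx |= ((z >> b) & 1) << (3 * b + 2)
    return idx
```

Storing the grid on disk in this order (or the paper's hierarchical refinement of it) is what keeps the out-of-core working set down to tens of megabytes.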

  1. Visual tracking for multi-modality computer-assisted image guidance

    NASA Astrophysics Data System (ADS)

    Basafa, Ehsan; Foroughi, Pezhman; Hossbach, Martin; Bhanushali, Jasmine; Stolka, Philipp

    2017-03-01

    With optical cameras, many interventional navigation tasks previously relying on EM, optical, or mechanical guidance can be performed robustly, quickly, and conveniently. We developed a family of novel guidance systems based on wide-spectrum cameras and vision algorithms for real-time tracking of interventional instruments and multi-modality markers. These navigation systems support the localization of anatomical targets, support placement of imaging probe and instruments, and provide fusion imaging. The unique architecture - low-cost, miniature, in-hand stereo vision cameras fitted directly to imaging probes - allows for an intuitive workflow that fits a wide variety of specialties such as anesthesiology, interventional radiology, interventional oncology, emergency medicine, urology, and others, many of which see increasing pressure to utilize medical imaging and especially ultrasound, but have yet to develop the requisite skills for reliable success. We developed a modular system, consisting of hardware (the Optical Head containing the mini cameras) and software (components for visual instrument tracking with or without specialized visual features, fully automated marker segmentation from a variety of 3D imaging modalities, visual observation of meshes of widely separated markers, instant automatic registration, and target tracking and guidance on real-time multi-modality fusion views). From these components, we implemented a family of distinct clinical and pre-clinical systems (for combinations of ultrasound, CT, CBCT, and MRI), most of which have international regulatory clearance for clinical use. We present technical and clinical results on phantoms, ex- and in-vivo animals, and patients.

  2. Multimodal ophthalmic imaging using spectrally encoded scanning laser ophthalmoscopy and optical coherence tomography

    NASA Astrophysics Data System (ADS)

    El-Haddad, Mohamed T.; Malone, Joseph D.; Li, Jianwei D.; Bozic, Ivan; Arquitola, Amber M.; Joos, Karen M.; Patel, Shriji N.; Tao, Yuankai K.

    2017-08-01

    Ophthalmic surgery involves manipulation of delicate, layered tissue structures on milli- to micrometer scales. Traditional surgical microscopes provide an inherently two-dimensional view of the surgical field with limited depth perception which precludes accurate depth-resolved visualization of these tissue layers, and limits the development of novel surgical techniques. We demonstrate multimodal swept-source spectrally encoded scanning laser ophthalmoscopy and optical coherence tomography (SS-SESLO-OCT) to address current limitations of image-guided ophthalmic microsurgery. SS-SESLO-OCT provides inherently co-registered en face and cross-sectional field-of-views (FOVs) at a line rate of 400 kHz and >2 GPix/s throughput. We show in vivo imaging of the anterior segment and retinal fundus of a healthy volunteer, and preliminary results of multi-volumetric mosaicking for ultrawide-field retinal imaging with 90° FOV. Additionally, a scan-head was rapid-prototyped with a modular architecture which enabled integration of SS-SESLO-OCT with traditional surgical microscope and slit-lamp imaging optics. Ex vivo surgical maneuvers were simulated in cadaveric porcine eyes. The system throughput enabled volumetric acquisition at 10 volumes-per-second (vps) and allowed visualization of surgical dynamics in corneal sweeps, compressions, and dissections, and retinal sweeps, compressions, and elevations. SESLO en face images enabled simple real-time co-registration with the surgical microscope FOV, and OCT cross-sections provided depth-resolved visualization of instrument-tissue interactions. Finally, we demonstrate novel augmented-reality integration with the surgical view using segmentation overlays to aid surgical guidance. SS-SESLO-OCT may benefit clinical diagnostics by enabling aiming, registration, and mosaicking; and intraoperative imaging by allowing for real-time surgical feedback, instrument tracking, and overlays of computationally extracted biomarkers of disease.

  3. Time-Frequency Feature Representation Using Multi-Resolution Texture Analysis and Acoustic Activity Detector for Real-Life Speech Emotion Recognition

    PubMed Central

    Wang, Kun-Ching

    2015-01-01

The classification of emotional speech is widely considered in speech-related research on human-computer interaction (HCI). This paper presents a novel feature extraction method based on multi-resolution texture image information (MRTII). The MRTII feature set is derived from multi-resolution texture analysis for the characterization and classification of different emotions in a speech signal. The motivation is that emotions have different intensity values in different frequency bands. In terms of human visual perception, the texture properties of a multi-resolution emotional speech spectrogram should form a good feature set for emotion classification in speech. Furthermore, multi-resolution texture analysis gives a clearer discrimination between emotions than uniform-resolution texture analysis. To provide high accuracy of emotional discrimination, especially in real life, an acoustic activity detection (AAD) algorithm is applied within the MRTII-based feature extraction. Considering the presence of many blended emotions in real life, this paper makes use of two corpora of naturally occurring dialogs recorded in real-life call centers. Compared with traditional Mel-scale Frequency Cepstral Coefficients (MFCC) and state-of-the-art features, the MRTII features improve the correct classification rates of the proposed systems across different language databases. Experimental results show that the proposed MRTII-based features, inspired by human visual perception of the spectrogram image, provide significant classification gains for real-life emotion recognition in speech. PMID:25594590

  4. A results-based process for evaluation of diverse visual analytics tools

    NASA Astrophysics Data System (ADS)

    Rubin, Gary; Berger, David H.

    2013-05-01

    With the pervasiveness of still and full-motion imagery in commercial and military applications, the need to ingest and analyze these media has grown rapidly in recent years. Additionally, video hosting and live camera websites provide a near real-time view of our changing world with unprecedented spatial coverage. To take advantage of these controlled and crowd-sourced opportunities, sophisticated visual analytics (VA) tools are required to accurately and efficiently convert raw imagery into usable information. Whether investing in VA products or evaluating algorithms for potential development, it is important for stakeholders to understand the capabilities and limitations of visual analytics tools. Visual analytics algorithms are being applied to problems related to Intelligence, Surveillance, and Reconnaissance (ISR), facility security, and public safety monitoring, to name a few. The diversity of requirements means that a one-size-fits-all approach to performance assessment will not work. We present a process for evaluating the efficacy of algorithms in real-world conditions, thereby allowing users and developers of video analytics software to understand software capabilities and identify potential shortcomings. The results-based approach described in this paper uses an analysis of end-user requirements and Concept of Operations (CONOPS) to define Measures of Effectiveness (MOEs), test data requirements, and evaluation strategies. We define metrics that individually do not fully characterize a system, but when used together are a powerful way to reveal both strengths and weaknesses. We provide examples of data products, such as heatmaps, performance maps, detection timelines, and rank-based probability-of-detection curves.
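    The probability-of-detection and false-alarm measures behind the data products named above can be illustrated with a minimal sketch; the function name and counts below are hypothetical examples, not part of the evaluated tools:

```python
def detection_metrics(true_positives, false_positives, missed):
    """Basic detection metrics of the kind combined in results-based VA
    evaluation (illustrative sketch only)."""
    pod = true_positives / (true_positives + missed)            # probability of detection
    far = false_positives / (true_positives + false_positives)  # false alarm ratio
    return pod, far
```

Sweeping a detector's confidence threshold and recomputing these counts at each setting yields the rank-based probability-of-detection curves the paper describes.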

  5. A Method for Measuring the Effective Throughput Time Delay in Simulated Displays Involving Manual Control

    NASA Technical Reports Server (NTRS)

    Jewell, W. F.; Clement, W. F.

    1984-01-01

    The advent and widespread use of the computer-generated image (CGI) device to simulate visual cues has a mixed impact on the realism and fidelity of flight simulators. On the plus side, CGIs provide greater flexibility in scene content than terrain boards and closed-circuit television-based visual systems, and they have the potential for a greater field of view. However, on the minus side, CGIs introduce relatively long time delays into the visual simulation. In many CGIs, this delay is as much as 200 ms, which is comparable to the inherent delay time of the pilot. Because most CGIs use multiloop processing and smoothing algorithms and are linked to a multiloop host computer, it is seldom possible to identify a unique throughput time delay, and it is therefore difficult to quantify the performance of the closed-loop pilot-simulator system relative to the real-world task. A method to address these issues using the critical task tester is described. Some empirical results from applying the method are presented, and a novel technique for improving the performance of CGIs is discussed.

  6. Software architecture for time-constrained machine vision applications

    NASA Astrophysics Data System (ADS)

    Usamentiaga, Rubén; Molleda, Julio; García, Daniel F.; Bulnes, Francisco G.

    2013-01-01

    Real-time image and video processing applications require skilled architects, and recent trends in hardware platforms make the design and implementation of these applications increasingly complex. Many frameworks and libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing applications. However, they tend to lack flexibility, because they are normally oriented toward particular types of applications, or they impose specific data processing models such as the pipeline. Other issues include large memory footprints, difficulty of reuse, and inefficient execution on multicore processors. We present a novel software architecture for time-constrained machine vision applications that addresses these issues. The architecture is divided into three layers. The platform abstraction layer provides a high-level application programming interface for the rest of the architecture. The messaging layer provides a message-passing interface based on a dynamic publish/subscribe pattern. Topic-based filtering, in which messages are published to topics, routes messages from publishers to the subscribers interested in a particular type of message. The application layer provides a repository of reusable application modules designed for machine vision applications. These modules, which include acquisition, visualization, communication, user interface, and data processing, take advantage of the power of well-known libraries such as OpenCV, Intel IPP, or CUDA. Finally, the proposed architecture is applied to a real machine vision application: a jam detector for steel pickling lines.
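    The topic-based publish/subscribe routing of the messaging layer can be sketched in a few lines; this is an illustrative toy (the class and method names are ours), not the architecture's actual API:

```python
from collections import defaultdict

class TopicBus:
    """Minimal topic-based publish/subscribe router (illustrative sketch)."""

    def __init__(self):
        # topic name -> list of subscriber callbacks
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register interest in one type of message."""
        self._subs[topic].append(callback)

    def publish(self, topic, message):
        """Deliver a message only to subscribers of its topic."""
        for cb in self._subs[topic]:
            cb(message)
```

In such a design an acquisition module might publish to a "frames" topic while visualization and processing modules subscribe to it, decoupling producers from consumers.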

  7. Lie group model neuromorphic geometric engine for real-time terrain reconstruction from stereoscopic aerial photos

    NASA Astrophysics Data System (ADS)

    Tsao, Thomas R.; Tsao, Doris

    1997-04-01

    In the 1980s, neurobiologists suggested a simple mechanism in primate visual cortex for maintaining a stable and invariant representation of a moving object. The receptive fields of visual neurons undergo real-time transforms in response to motion, to maintain a stable representation. When the visual stimulus changes due to motion, the geometric transform of the stimulus triggers a dual transform of the receptive field. This dual transform in the receptive fields compensates for geometric variation in the stimulus. This process can be modelled using a Lie group method. The massive array of affine-parameter-sensing circuits functions as a smart sensor tightly coupled to the passive imaging sensor (retina). The neural geometric engine is a neuromorphic computing device simulating our Lie group model of spatial perception in the primate primary visual cortex. We have developed a computer simulation, experimented on realistic and synthetic image data, and performed preliminary research on using analog VLSI technology to implement the neural geometric engine. We have benchmark-tested the engine on DMA's terrain data against their results and have built an analog integrated circuit to verify the computational structure of the engine. When fully implemented on an analog VLSI chip, it will be able to accurately reconstruct a 3D terrain surface in real time from stereoscopic imagery.

  8. Real-time distortion correction for visual inspection systems based on FPGA

    NASA Astrophysics Data System (ADS)

    Liang, Danhua; Zhang, Zhaoxia; Chen, Xiaodong; Yu, Daoyin

    2008-03-01

    Visual inspection is a new technology based on computer vision research, which focuses on measuring an object's geometry and location. It can be widely used in online measurement and other real-time measurement processes. Because of the defects of traditional visual inspection, a new visual detection mode, all-digital intelligent acquisition and transmission, is presented. The image processing, including filtering, image compression, binarization, edge detection, and distortion correction, can be completed in programmable devices (FPGAs). As a wide-field-angle lens is adopted in the system, the output images have serious distortion. Limited by the computing speed of the computer, software can only correct the distortion of static images, not of dynamic images. To meet the real-time requirement, we designed a distortion correction system based on an FPGA. In this hardware approach, the spatial correction data are first calculated in software, then converted into hardware storage addresses and stored in a hardware look-up table, through which data can be read out to correct the gray levels. The major benefit of using an FPGA is that the same circuit can be used for other circularly symmetric wide-angle lenses without modification.
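    The software stage of such a correction, precomputing a look-up table of source coordinates that the hardware then reads to remap gray levels, can be sketched as follows. The single-coefficient radial model and all names here are illustrative assumptions, not the authors' calibration:

```python
import numpy as np

def build_undistort_lut(h, w, k1, cx, cy):
    """Precompute source pixel coordinates for radial undistortion.
    k1 is a single radial coefficient; (cx, cy) is the distortion center.
    Illustrative model only: src = center + (dst - center) * (1 + k1 * r^2)."""
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    dx, dy = xs - cx, ys - cy
    r2 = dx * dx + dy * dy
    scale = 1.0 + k1 * r2
    src_x = np.clip(np.round(cx + dx * scale), 0, w - 1).astype(np.int32)
    src_y = np.clip(np.round(cy + dy * scale), 0, h - 1).astype(np.int32)
    return src_y, src_x

def undistort(img, lut):
    """Hardware analogue: read each output gray level through the LUT."""
    sy, sx = lut
    return img[sy, sx]
```

Because the LUT is computed once offline, the per-frame work reduces to memory reads, which is what makes the FPGA implementation real-time for dynamic images.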

  9. Phenomenological reliving and visual imagery during autobiographical recall in Alzheimer’s disease

    PubMed Central

    El Haj, Mohamad; Kapogiannis, Dimitrios; Antoine, Pascal

    2016-01-01

    Multiple studies have shown compromise of autobiographical memory and phenomenological reliving in Alzheimer’s disease (AD). We investigated various phenomenological features of autobiographical memory to determine their relative vulnerability in AD. To this aim, participants with early AD and cognitively normal older adult controls were asked to retrieve an autobiographical event and rate, on a 5-point scale, metacognitive judgments (i.e., reliving, back in time, remembering, and realness), component processes (i.e., visual imagery, auditory imagery, language, and emotion), narrative properties (i.e., rehearsal and importance), and spatiotemporal specificity (i.e., spatial details and temporal details). AD participants showed lower general autobiographical recall than controls, and poorer reliving, travel in time, remembering, realness, visual imagery, auditory imagery, language, rehearsal, and spatial detail, a decrease that was especially pronounced for visual imagery. Yet, AD participants showed high ratings for emotion and importance. Early AD seems to compromise many phenomenological features, especially visual imagery, but also seems to preserve some other features. PMID:27003216

  10. Phenomenological Reliving and Visual Imagery During Autobiographical Recall in Alzheimer's Disease.

    PubMed

    El Haj, Mohamad; Kapogiannis, Dimitrios; Antoine, Pascal

    2016-03-16

    Multiple studies have shown compromise of autobiographical memory and phenomenological reliving in Alzheimer's disease (AD). We investigated various phenomenological features of autobiographical memory to determine their relative vulnerability in AD. To this aim, participants with early AD and cognitively normal older adult controls were asked to retrieve an autobiographical event and rate, on a five-point scale, metacognitive judgments (i.e., reliving, back in time, remembering, and realness), component processes (i.e., visual imagery, auditory imagery, language, and emotion), narrative properties (i.e., rehearsal and importance), and spatiotemporal specificity (i.e., spatial details and temporal details). AD participants showed lower general autobiographical recall than controls, and poorer reliving, travel in time, remembering, realness, visual imagery, auditory imagery, language, rehearsal, and spatial detail, a decrease that was especially pronounced for visual imagery. Yet, AD participants showed high ratings for emotion and importance. Early AD seems to compromise many phenomenological features, especially visual imagery, but also seems to preserve some other features.

  11. Visualizing Dynamic Bitcoin Transaction Patterns.

    PubMed

    McGinn, Dan; Birch, David; Akroyd, David; Molina-Solana, Miguel; Guo, Yike; Knottenbelt, William J

    2016-06-01

    This work presents a systemic top-down visualization of Bitcoin transaction activity to explore dynamically generated patterns of algorithmic behavior. Bitcoin dominates the cryptocurrency markets and presents researchers with a rich source of real-time transactional data. The pseudonymous yet public nature of the data presents opportunities for the discovery of human and algorithmic behavioral patterns of interest to many parties such as financial regulators, protocol designers, and security analysts. However, retaining visual fidelity to the underlying data to retain a fuller understanding of activity within the network remains challenging, particularly in real time. We expose an effective force-directed graph visualization employed in our large-scale data observation facility to accelerate this data exploration and derive useful insight among domain experts and the general public alike. The high-fidelity visualizations demonstrated in this article allowed for collaborative discovery of unexpected high frequency transaction patterns, including automated laundering operations, and the evolution of multiple distinct algorithmic denial of service attacks on the Bitcoin network.
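    A force-directed graph layout of the kind the visualization employs can be sketched as a single iteration of a generic spring-embedder (attraction along edges, pairwise repulsion between all nodes). This is a toy Fruchterman-Reingold-style step under assumed parameters, not the authors' large-scale implementation:

```python
import math

def force_step(pos, edges, dt=0.05, k=1.0):
    """One iteration of a toy force-directed layout (illustrative sketch).
    pos: {node: (x, y)}; edges: list of (node, node) pairs."""
    forces = {n: [0.0, 0.0] for n in pos}
    nodes = list(pos)
    # Pairwise repulsion keeps unrelated nodes apart.
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            dx, dy = pos[b][0] - pos[a][0], pos[b][1] - pos[a][1]
            d = math.hypot(dx, dy) or 1e-9
            rep = k * k / d
            fx, fy = rep * dx / d, rep * dy / d
            forces[a][0] -= fx; forces[a][1] -= fy
            forces[b][0] += fx; forces[b][1] += fy
    # Spring attraction pulls transacting (connected) nodes together.
    for a, b in edges:
        dx, dy = pos[b][0] - pos[a][0], pos[b][1] - pos[a][1]
        d = math.hypot(dx, dy) or 1e-9
        att = d * d / k
        fx, fy = att * dx / d, att * dy / d
        forces[a][0] += fx; forces[a][1] += fy
        forces[b][0] -= fx; forces[b][1] -= fy
    return {n: (pos[n][0] + dt * forces[n][0],
                pos[n][1] + dt * forces[n][1]) for n in pos}
```

Run repeatedly as transactions stream in, such a layout makes clusters such as laundering chains or denial-of-service bursts visually separate from background activity.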

  12. Visualizing Dynamic Bitcoin Transaction Patterns

    PubMed Central

    McGinn, Dan; Birch, David; Akroyd, David; Molina-Solana, Miguel; Guo, Yike; Knottenbelt, William J.

    2016-01-01

    This work presents a systemic top-down visualization of Bitcoin transaction activity to explore dynamically generated patterns of algorithmic behavior. Bitcoin dominates the cryptocurrency markets and presents researchers with a rich source of real-time transactional data. The pseudonymous yet public nature of the data presents opportunities for the discovery of human and algorithmic behavioral patterns of interest to many parties such as financial regulators, protocol designers, and security analysts. However, retaining visual fidelity to the underlying data to retain a fuller understanding of activity within the network remains challenging, particularly in real time. We expose an effective force-directed graph visualization employed in our large-scale data observation facility to accelerate this data exploration and derive useful insight among domain experts and the general public alike. The high-fidelity visualizations demonstrated in this article allowed for collaborative discovery of unexpected high frequency transaction patterns, including automated laundering operations, and the evolution of multiple distinct algorithmic denial of service attacks on the Bitcoin network. PMID:27441715

  13. Human Factors in Streaming Data Analysis: Challenges and Opportunities for Information Visualization: Human Factors in Streaming Data Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dasgupta, Aritra; Arendt, Dustin L.; Franklin, Lyndsey R.

    Real-world systems change continuously, and across domains like traffic monitoring and cyber security such changes occur within short time scales. This leads to a streaming data problem and produces unique challenges for the human in the loop, as analysts have to ingest and make sense of dynamic patterns in real time. In this paper, our goal is to study how the state of the art in streaming data visualization handles these challenges and to reflect on the gaps and opportunities. To this end, we make three contributions: i) a problem characterization identifying domain-specific goals and challenges for handling streaming data, ii) a survey and analysis of the state of the art in streaming data visualization research with a focus on the visualization design space, and iii) reflections on the perceptually motivated design challenges and potential research directions for addressing them.

  14. Real-time catheter localization and visualization using three-dimensional echocardiography

    NASA Astrophysics Data System (ADS)

    Kozlowski, Pawel; Bandaru, Raja Sekhar; D'hooge, Jan; Samset, Eigil

    2017-03-01

    Real-time three-dimensional transesophageal echocardiography (RT3D-TEE) is increasingly used during minimally invasive cardiac surgeries (MICS). In many cath labs, RT3D-TEE is already one of the requisite tools for image guidance during MICS. However, the visualization of the catheter is not always satisfactory, making 3D-TEE challenging to use as the only modality for guidance. We propose a novel technique for better visualization of the catheter along with the cardiac anatomy using TEE alone, exploiting both beamforming and post-processing methods. We extended our earlier method, called Delay and Standard Deviation (DASD) beamforming, to 3D in order to enhance specular reflections. The beamformed image was further post-processed by the Frangi filter to segment the catheter. Multi-variate visualization techniques enabled us to render both the standard tissue image and the DASD beamformed image on a clinical ultrasound scanner simultaneously. A frame rate of 15 FPS was achieved.

  15. An intelligent system for real time automatic defect inspection on specular coated surfaces

    NASA Astrophysics Data System (ADS)

    Li, Jinhua; Parker, Johné M.; Hou, Zhen

    2005-07-01

    Product visual inspection is still performed manually or semi-automatically in most industries, from simple ceramic tile grading to complex automotive body panel paint defect and surface quality inspection. Moreover, specular surfaces present an additional challenge to conventional vision systems due to specular reflections, which may mask the true location of objects and lead to incorrect measurements. Some sophisticated visual inspection methods have been developed in recent years. Unfortunately, most of them are computationally intensive; systems built on those methods are either inapplicable or very costly for real-time inspection. In this paper, we describe an integrated low-cost intelligent system developed to automatically capture, extract, and segment defects on specular surfaces with uniform color coatings. The system inspects and locates regular surface defects with lateral dimensions as small as a millimeter. The proposed system is implemented on a group of smart cameras, using their on-board processing capability to achieve real-time inspection. The experimental results on real test panels demonstrate the effectiveness and robustness of the proposed system.

  16. EO/IR scene generation open source initiative for real-time hardware-in-the-loop and all-digital simulation

    NASA Astrophysics Data System (ADS)

    Morris, Joseph W.; Lowry, Mac; Boren, Brett; Towers, James B.; Trimble, Darian E.; Bunfield, Dennis H.

    2011-06-01

    The US Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) and the Redstone Test Center (RTC) have formed the Scene Generation Development Center (SGDC) to support the Department of Defense (DoD) open source EO/IR scene generation initiative for real-time hardware-in-the-loop and all-digital simulation. Various branches of the DoD have invested significant resources in the development of advanced scene and target signature generation codes. The SGDC goal is to maintain unlimited government rights and controlled access to government open source scene generation and signature codes. In addition, the SGDC provides development support to a multi-service community of test and evaluation (T&E) users, developers, and integrators in a collaborative environment. The SGDC has leveraged the DoD Defense Information Systems Agency (DISA) ProjectForge (https://Project.Forge.mil), which provides a collaborative development and distribution environment for the DoD community. The SGDC will develop and maintain several codes for tactical and strategic simulation, such as the Joint Signature Image Generator (JSIG), the Multi-spectral Advanced Volumetric Real-time Imaging Compositor (MAVRIC), and Office of the Secretary of Defense (OSD) Test and Evaluation Science and Technology (T&E/S&T) thermal modeling and atmospherics packages, such as EOView, CHARM, and STAR. Other utility packages included are the ContinuumCore for real-time messaging and data management and IGStudio for run-time visualization and scenario generation.

  17. Virtual hydrology observatory: an immersive visualization of hydrology modeling

    NASA Astrophysics Data System (ADS)

    Su, Simon; Cruz-Neira, Carolina; Habib, Emad; Gerndt, Andreas

    2009-02-01

    The Virtual Hydrology Observatory will provide students with the ability to observe an integrated hydrology simulation through an instructional interface, using a desktop-based or immersive virtual reality setup. It is the goal of the virtual hydrology observatory application to facilitate the introduction of field experience and observational skills into hydrology courses through innovative virtual techniques that mimic activities during actual field visits. The simulation part of the application is developed from the integrated atmospheric forecast model, Weather Research and Forecasting (WRF), and the hydrology model, Gridded Surface/Subsurface Hydrologic Analysis (GSSHA). Outputs from both the WRF and GSSHA models are then used to generate the final visualization components of the Virtual Hydrology Observatory. The visualization data processing techniques provided by VTK include 2D Delaunay triangulation and data optimization. Once all the visualization components are generated, they are integrated with the simulation data using the VRFlowVis and VR Juggler software toolkits. VR Juggler is used primarily to provide the Virtual Hydrology Observatory application with a fully immersive, real-time 3D interaction experience, while VRFlowVis provides the integration framework for the hydrologic simulation data, graphical objects, and user interaction. A six-sided CAVE™-like system is used to run the Virtual Hydrology Observatory to provide the students with a fully immersive experience.

  18. Geobrowser Enhanced Access of Real-Time Antarctic Data

    NASA Astrophysics Data System (ADS)

    Breen, P.; Judge, D.; Cunningham, N.; Kirsch, P. J.

    2007-12-01

    A proof-of-principle project was initiated in the Fall of 2006 to develop a system enabling remote field station and ship-borne data, collected in near real time, to be discovered, visualised and acquired through a web-accessible framework. The two principal enabling drivers for this system were the recent improvements in communications with remote field stations and ships, and the advent of low-cost, easily accessible geobrowser technology providing the ability to visualise multiple, sometimes physically disparate, datasets within a common interface. Strongly spatial in nature, the oceanographic datasets suggested the incorporation of geobrowser (Google Earth) technology into this framework. A number of scientific benefits were identified by the project; these include enhancing the overall value of many of the datasets through their real-time contribution to forecasting models, satellite ground-truthing, and calibration of autonomous instrumentation. Improved efficacy of fieldwork led to rapid discovery of problems and the ability to deal with them promptly, to correct or improve experiment parameters, and to increase the capability for routine collection of high-quality data. In the past it may have been over a year before data arrived back at HQ, potentially unusable, definitely unrepeatable, and significantly reducing or delaying scientific output. The geobrowser interface provides the platform from which the spatial data are discovered; for example, ship tracks and aspects of the physical oceanography such as sea surface temperature can be directly visualized. Importantly, ancillary and auxiliary information and metadata can be linked to the cruise data in a straightforward and accessible manner; scientists in Cambridge using a geobrowser were able to access and visualize cruise data from the Southern Ocean 20 minutes after collection.
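    Serving strongly spatial data such as ship tracks to a geobrowser like Google Earth typically means emitting KML. A minimal, hypothetical serializer (the function name and track data are ours, not the project's) might look like:

```python
def track_to_kml(name, coords):
    """Serialize a ship track as a minimal KML LineString placemark.
    coords: list of (lon, lat) pairs; KML orders coordinates lon,lat,alt.
    Illustrative sketch only."""
    pts = " ".join(f"{lon},{lat},0" for lon, lat in coords)
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
        f'<Placemark><name>{name}</name>'
        f'<LineString><coordinates>{pts}</coordinates></LineString>'
        '</Placemark></Document></kml>'
    )
```

Regenerating such a file as new fixes arrive, and pointing the geobrowser at it as a network link, is one simple way to achieve the near real-time visualization the project describes.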

  19. Eye guidance during real-world scene search: The role color plays in central and peripheral vision.

    PubMed

    Nuthmann, Antje; Malcolm, George L

    2016-01-01

    The visual system utilizes environmental features to direct gaze efficiently when locating objects. While previous research has isolated various features' contributions to gaze guidance, these studies generally used sparse displays and did not investigate how features facilitated search as a function of their location in the visual field. The current study investigated how features across the visual field, particularly color, facilitate gaze guidance during real-world search. A gaze-contingent window followed participants' eye movements, restricting color information to specified regions. Scene images were presented in full color; with color in the periphery and gray in central vision; with gray in the periphery and color in central vision; or in grayscale. Color conditions were crossed with a search cue manipulation, with the target cued either with a word label or an exact picture. Search times increased as color information in the scene decreased. A gaze-data-based decomposition of search time revealed color-mediated effects on specific subprocesses of search. Color in peripheral vision facilitated target localization, whereas color in central vision facilitated target verification. Picture cues facilitated search, with the effects of cue specificity and scene color combining additively. When available, the visual system utilizes the environment's color information to facilitate different real-world visual search behaviors based on the location within the visual field.

  20. Real-Time Tracking of the Extreme Rainfall of Hurricanes Harvey, Irma, and Maria using UCI CHRS's iRain System

    NASA Astrophysics Data System (ADS)

    Shearer, E. J.; Nguyen, P.; Ombadi, M.; Palacios, T.; Huynh, P.; Furman, D.; Tran, H.; Braithwaite, D.; Hsu, K. L.; Sorooshian, S.; Logan, W. S.

    2017-12-01

    During the 2017 hurricane season, three major hurricanes (Harvey, Irma, and Maria) devastated the Atlantic coast of the US and the Caribbean Islands. Harvey set the record for the rainiest storm in continental US history, Irma was the longest-lived powerful hurricane ever observed, and Maria was the costliest storm in Puerto Rican history. The recorded maximum precipitation totals for these storms were 65, 16, and 20 inches respectively. These events provided the Center for Hydrometeorology and Remote Sensing (CHRS) an opportunity to test its global real-time satellite precipitation observation system, iRain, on extreme storm events. The iRain system has been under development through a collaboration between CHRS at the University of California, Irvine (UCI) and UNESCO's International Hydrological Program (IHP). iRain provides near real-time high-resolution (0.04°, approx. 4 km) global (60°N - 60°S) satellite precipitation data estimated by the PERSIANN-Cloud Classification System (PERSIANN-CCS) algorithm developed by scientists at CHRS. The user-interactive and web-accessible iRain system allows users to visualize and download real-time global satellite precipitation estimates and track the development and path of the current 50 largest storms globally from data generated by the PERSIANN-CCS algorithm. iRain continuously proves to be an effective tool for measuring real-time precipitation amounts of extreme storms, especially in locations that do not have extensive rain gauge or radar coverage. Such areas include large portions of the world's oceans and continents such as Africa and Asia. CHRS also created a mobile app version of the system named "iRain UCI", available for iOS and Android devices. During these storms, real-time rainfall data generated by PERSIANN-CCS was consistently comparable to radar and rain gauge data.
This presentation evaluates iRain's efficiency as a tool for extreme precipitation monitoring and provides an evaluation of the PERSIANN-CCS real-time rainfall estimates during Hurricanes Harvey, Irma, and Maria in relation to radar and rain gauge data using continuous (correlation, root mean square error, and bias) and categorical (POD and FAR) indices. These results present the relative skill of PERSIANN-CCS real-time data to radar and rain gauge data.
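    The continuous and categorical indices named above (correlation, RMSE, bias, POD, FAR) can be computed with a short sketch; the rain/no-rain threshold and function name are illustrative assumptions, not the study's configuration:

```python
import math

def verification_indices(est, obs, rain_thresh=0.1):
    """Continuous (correlation, RMSE, multiplicative bias) and categorical
    (POD, FAR) skill indices for precipitation estimates vs. reference data.
    Illustrative sketch; rain_thresh is an assumed rain/no-rain cutoff."""
    n = len(est)
    me, mo = sum(est) / n, sum(obs) / n
    cov = sum((e - me) * (o - mo) for e, o in zip(est, obs))
    se = math.sqrt(sum((e - me) ** 2 for e in est))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    corr = cov / (se * so) if se and so else float("nan")
    rmse = math.sqrt(sum((e - o) ** 2 for e, o in zip(est, obs)) / n)
    bias = sum(est) / sum(obs)  # multiplicative bias
    hits = sum(1 for e, o in zip(est, obs) if e >= rain_thresh and o >= rain_thresh)
    misses = sum(1 for e, o in zip(est, obs) if e < rain_thresh and o >= rain_thresh)
    false_alarms = sum(1 for e, o in zip(est, obs) if e >= rain_thresh and o < rain_thresh)
    pod = hits / (hits + misses) if hits + misses else float("nan")
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else float("nan")
    return corr, rmse, bias, pod, far
```

Applied to co-located satellite-estimate and gauge/radar pairs, these are the standard indices used to report the relative skill of real-time products like PERSIANN-CCS.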

  1. Millimeter-wave imaging sensor data evaluation

    NASA Technical Reports Server (NTRS)

    Wilson, William J.; Ibbott, Anthony C.

    1987-01-01

    A passive 3-mm radiometer system with a mechanically scanned antenna was built for use on a small aircraft or an Unmanned Aerial Vehicle to produce near-real-time, moderate-resolution (0.5) images of the ground. One of the main advantages of this passive imaging sensor is that it is able to provide surveillance information through dust, smoke, fog and clouds when visual and IR systems are unusable. It can also be used for a variety of remote sensing applications, such as measurements of surface moisture, surface temperature, vegetation extent and snow cover. It is also possible to detect reflective objects under vegetation cover.

  2. TU-FG-BRB-12: Real-Time Visualization of Discrete Spot Scanning Proton Therapy Beam for Quality Assurance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsuzaki, Y; Jenkins, C; Yang, Y

    Purpose: With the growing adoption of proton beam therapy there is an increasing need for effective and user-friendly tools for performing quality assurance (QA) measurements. The speed and versatility of spot-scanning proton beam (PB) therapy systems present unique challenges for traditional QA tools. To address these challenges, a proof-of-concept system was developed to visualize, in real time, the delivery of individual spots from a spot-scanning PB in order to perform QA measurements. Methods: The PB is directed toward a custom phantom with planar faces coated with a radioluminescent phosphor (Gd2O2S:Tb). As the proton beam passes through the phantom, visible light is emitted from the coating and collected by a nearby CMOS camera. The images are processed to determine the locations at which the beam impinges on each face of the phantom. By so doing, the location of each beam can be determined relative to the phantom. The cameras are also used to capture images of the laser alignment system. The phantom contains x-ray fiducials so that it can be easily located with kV imagers. Using this data, several quality assurance parameters can be evaluated. Results: The proof-of-concept system was able to visualize discrete PB spots with energies ranging from 70 MeV to 220 MeV. Images were obtained with integration times ranging from 20 to 0.019 milliseconds. If not limited by data transmission, this would correspond to a frame rate of 52,000 fps. Such frame rates enabled visualization of individual spots in real time. Spot locations were found to be highly correlated (R² = 0.99) with the nozzle-mounted spot position monitor, indicating excellent spot positioning accuracy. Conclusion: The system was shown to be capable of imaging individual spots for all clinical beam energies. Future development will focus on extending the image processing software to provide automated results for a variety of QA tests.
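    Determining where a beam impinges on a phosphor-coated face from a camera frame is, at its simplest, an intensity-weighted centroid over thresholded pixels. The sketch below is a generic illustration of that step, not the system's actual processing pipeline:

```python
import numpy as np

def spot_centroid(image, threshold):
    """Locate a scintillation spot as the intensity-weighted centroid of
    pixels at or above threshold. Returns (row, col) in pixel coordinates,
    or None if no pixel exceeds the threshold. Illustrative sketch only."""
    mask = image >= threshold
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    w = image[ys, xs].astype(np.float64)
    return float((ys * w).sum() / w.sum()), float((xs * w).sum() / w.sum())
```

Mapping such pixel centroids through a camera-to-phantom calibration would give the beam positions that are then compared against the nozzle-mounted spot position monitor.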

  3. Near Real-time Ecological Forecasting of Peatland Responses to Warming and CO2 Treatment through EcoPAD-SPRUCE

    NASA Astrophysics Data System (ADS)

    Huang, Y.; Jiang, J.; Stacy, M.; Ricciuto, D. M.; Hanson, P. J.; Sundi, N.; Luo, Y.

    2016-12-01

    Ecological forecasting is critical in various aspects of our coupled human-nature systems, such as disaster risk reduction, natural resource management, and climate change mitigation. Novel advancements are urgently needed to deepen our understanding of ecosystem dynamics, boost the predictive capacity of ecology, and provide timely and effective information for decision-makers in a rapidly changing world. Our Ecological Platform for Assimilation of Data (EcoPAD) facilitates the integration of current best knowledge from models, manipulative experiments, observations and other modern techniques, and provides both near real-time and long-term forecasting of ecosystem dynamics. As a case study, the web-based EcoPAD platform synchronizes real- or near real-time field measurements from the Spruce and Peatland Responses Under Climatic and Environmental Change Experiment (SPRUCE), a whole-ecosystem warming and CO2 enrichment treatment experiment, assimilates multiple data streams into process-based models, enhances timely feedback between modelers and experimenters, and ultimately improves ecosystem forecasting while making the best use of current knowledge. In addition to enabling users to (i) estimate model parameters or state variables, (ii) quantify uncertainty of estimated parameters and projected states of ecosystems, (iii) evaluate model structures, (iv) assess sampling strategies, and (v) conduct ecological forecasting, EcoPAD-SPRUCE automates the workflow from real-time data acquisition through model simulation to result visualization. EcoPAD-SPRUCE promotes seamless feedback between modelers and experimenters, working hand in hand to better forecast future changes. The framework of EcoPAD-SPRUCE (with a flexible API, Application Programming Interface) is easily portable and will benefit scientific communities, policy makers, and the general public.

  4. Stereoscopic display of 3D models for design visualization

    NASA Astrophysics Data System (ADS)

    Gilson, Kevin J.

    2006-02-01

    Advances in display technology and 3D design visualization applications have made real-time stereoscopic visualization of architectural and engineering projects a reality. Parsons Brinkerhoff (PB) is a transportation consulting firm that has used digital visualization tools from their inception and has helped pioneer the application of those tools to large-scale infrastructure projects. PB is one of the first Architecture/Engineering/Construction (AEC) firms to implement a CAVE, an immersive presentation environment that includes stereoscopic rear-projection capability. The firm also employs a portable stereoscopic front-projection system, and shutter-glass systems for smaller groups. PB is using commercial real-time 3D applications in combination with traditional 3D modeling programs to visualize and present large AEC projects to planners, clients and decision makers in stereo. These presentations create more immersive and spatially realistic views of the proposed designs. This paper will present the basic display tools and applications, and the 3D modeling techniques PB is using to produce interactive stereoscopic content. The paper will discuss several architectural and engineering design visualizations we have produced.

  5. Is There Computer Graphics after Multimedia?

    ERIC Educational Resources Information Center

    Booth, Kellogg S.

    Computer graphics has been driven by the desire to generate real-time imagery subject to constraints imposed by the human visual system. The future of computer graphics, when off-the-shelf systems have full multimedia capability and when standard computing engines render imagery faster than real-time, remains to be seen. A dedicated pipeline for…

  6. Real-Time Geospatial Data Viewer (RETIGO): Web-Based Tool for Researchers and Citizen Scientists to Explore their Air Measurements

    EPA Science Inventory

    The collection of air measurements in real-time on moving platforms, such as wearable, bicycle-mounted, or vehicle-mounted air sensors, is becoming an increasingly common method to investigate local air quality. However, visualizing and analyzing geospatial air monitoring data re...

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krause, Josua; Dasgupta, Aritra; Fekete, Jean-Daniel

    Dealing with the curse of dimensionality is a key challenge in high-dimensional data visualization. We present SeekAView to address three main gaps in the existing research literature. First, automated methods like dimensionality reduction or clustering suffer from a lack of transparency in letting analysts interact with their outputs in real-time to suit their exploration strategies. The results often suffer from a lack of interpretability, especially for domain experts not trained in statistics and machine learning. Second, exploratory visualization techniques like scatter plots or parallel coordinates suffer from a lack of visual scalability: it is difficult to present a coherent overview of interesting combinations of dimensions. Third, the existing techniques do not provide a flexible workflow that allows for multiple perspectives into the analysis process by automatically detecting and suggesting potentially interesting subspaces. In SeekAView we address these issues using suggestion based visual exploration of interesting patterns for building and refining multidimensional subspaces. Compared to the state-of-the-art in subspace search and visualization methods, we achieve higher transparency in showing not only the results of the algorithms, but also interesting dimensions calibrated against different metrics. We integrate a visually scalable design space with an iterative workflow guiding the analysts by choosing the starting points and letting them slice and dice through the data to find interesting subspaces and detect correlations, clusters, and outliers. We present two usage scenarios for demonstrating how SeekAView can be applied in real-world data analysis scenarios.

  8. Headlines: Planet Earth: Improving Climate Literacy with Short Format News Videos

    NASA Astrophysics Data System (ADS)

    Tenenbaum, L. F.; Kulikov, A.; Jackson, R.

    2012-12-01

    One of the challenges of communicating climate science is the sense that climate change is remote and unconnected to daily life, something that's happening to someone else or in the future. To help face this challenge, NASA's Global Climate Change website http://climate.nasa.gov has launched a new video series, "Headlines: Planet Earth," which focuses on current climate news events. This rapid-response video series uses 3D video visualization technology combined with real-time satellite data and images to throw a spotlight on real-world events. The "Headlines: Planet Earth" news video products will be deployed frequently, ensuring timeliness. NASA's Global Climate Change website makes extensive use of interactive media, immersive visualizations, ground-based and remote images, narrated and time-lapse videos, time-series animations, and real-time scientific data, plus maps and user-friendly graphics that make the scientific content both accessible and engaging to the public. The site has also won two consecutive Webby Awards for Best Science Website. Connecting climate science to current real-world events will contribute to improving climate literacy by making climate science relevant to everyday life.

  9. Real Time Data Acquisition and Online Signal Processing for Magnetoencephalography

    NASA Astrophysics Data System (ADS)

    Rongen, H.; Hadamschek, V.; Schiek, M.

    2006-06-01

    To establish improved therapies for patients suffering from severe neurological and psychiatric diseases, a demand-controlled, desynchronizing brain pacemaker has been developed with techniques from statistical physics and nonlinear dynamics. To optimize the novel therapeutic approach, brain activity is investigated with a Magnetoencephalography (MEG) system prior to surgery. For this, a real-time data acquisition system for a 148-channel MEG and online signal processing for artifact rejection, filtering, cross-trial phase resetting analysis, and three-dimensional (3-D) reconstruction of the cerebral current sources was developed. The developed PCI bus hardware is based on an FPGA and DSP design, using the benefits of both architectures. The reconstruction and visualization of the 3-D volume data is done by the PC that hosts the real-time DAQ and pre-processing board. The framework of the MEG-online system is introduced, and the architecture of the real-time DAQ board and online reconstruction is described. In addition, we show first results with the MEG-online system for the investigation of dynamic brain activities in relation to external visual stimulation, based on test data sets.

  10. Real-world spatial regularities affect visual working memory for objects.

    PubMed

    Kaiser, Daniel; Stein, Timo; Peelen, Marius V

    2015-12-01

    Traditional memory research has focused on measuring and modeling the capacity of visual working memory for simple stimuli such as geometric shapes or colored disks. Although these studies have provided important insights, it is unclear how their findings apply to memory for more naturalistic stimuli. An important aspect of real-world scenes is that they contain a high degree of regularity: For instance, lamps appear above tables, not below them. In the present study, we tested whether such real-world spatial regularities affect working memory capacity for individual objects. Using a delayed change-detection task with concurrent verbal suppression, we found enhanced visual working memory performance for objects positioned according to real-world regularities, as compared to irregularly positioned objects. This effect was specific to upright stimuli, indicating that it did not reflect low-level grouping, because low-level grouping would be expected to equally affect memory for upright and inverted displays. These results suggest that objects can be held in visual working memory more efficiently when they are positioned according to frequently experienced real-world regularities. We interpret this effect as the grouping of single objects into larger representational units.

  11. Unleashing the Power of Distributed CPU/GPU Architectures: Massive Astronomical Data Analysis and Visualization Case Study

    NASA Astrophysics Data System (ADS)

    Hassan, A. H.; Fluke, C. J.; Barnes, D. G.

    2012-09-01

    Upcoming and future astronomy research facilities will systematically generate terabyte-sized data sets, moving astronomy into the petascale data era. While such facilities will provide astronomers with unprecedented levels of accuracy and coverage, the increases in dataset size and dimensionality will pose serious computational challenges for many current astronomy data analysis and visualization tools. With such data sizes, even simple data analysis tasks (e.g. calculating a histogram or computing data minimum/maximum) may not be achievable without access to a supercomputing facility. To effectively handle such dataset sizes, which exceed today's single machine memory and processing limits, we present a framework that exploits the distributed power of GPUs and many-core CPUs, with the goal of providing data analysis and visualization tasks as a service for astronomers. By mixing shared and distributed memory architectures, our framework effectively utilizes the underlying hardware infrastructure, handling both batched and real-time data analysis and visualization tasks. Offering such functionality as a service in a “software as a service” manner will reduce the total cost of ownership, provide an easy-to-use tool to the wider astronomical community, and enable a more optimized utilization of the underlying hardware infrastructure.
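    The "simple data analysis tasks" mentioned above (histograms, minima, maxima) distribute naturally because their partial results merge cheaply. The following sketch shows that map-then-merge pattern in plain Python, with small hypothetical chunks standing in for data spread across GPU/CPU nodes; a real deployment would run the per-chunk step in parallel on the hardware.

```python
# Sketch of the chunked ("out-of-core") reduction pattern: each worker
# reduces its own chunk, then the partial results are merged. Data and
# bin edges here are invented; the last edge is treated as exclusive.

def partial_stats(chunk, bin_edges):
    """Histogram counts plus min/max for one chunk of a large dataset."""
    counts = [0] * (len(bin_edges) - 1)
    for x in chunk:
        for i in range(len(bin_edges) - 1):
            if bin_edges[i] <= x < bin_edges[i + 1]:
                counts[i] += 1
                break
    return counts, min(chunk), max(chunk)

def merge(results):
    """Combine per-chunk partial results into global statistics."""
    counts = [sum(c) for c in zip(*(r[0] for r in results))]
    return counts, min(r[1] for r in results), max(r[2] for r in results)

chunks = [[0.1, 0.4, 0.9], [0.2, 0.6], [0.5, 0.8, 0.3]]
edges = [0.0, 0.25, 0.5, 0.75, 1.0]
hist, dmin, dmax = merge([partial_stats(c, edges) for c in chunks])
```

    Because the merge step only sees small partial results, the full dataset never has to fit in any single node's memory.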

  12. a Real-Time GIS Platform for High Sour Gas Leakage Simulation, Evaluation and Visualization

    NASA Astrophysics Data System (ADS)

    Li, M.; Liu, H.; Yang, C.

    2015-07-01

    The development of high-sulfur gas fields, also known as sour gas fields, faces a series of safety-control and emergency-management problems. High expectations are placed on GIS-based emergency response systems, given the high pressure, high hydrogen sulfide content, complex terrain, and dense population of the Sichuan Basin, southwest China. Most research on hydrogen sulfide gas dispersion simulation and evaluation is aimed at environmental impact assessment (EIA) or emergency preparedness planning. This paper introduces a real-time GIS platform for high-sulfur gas emergency response. Combining real-time data from leak detection systems and meteorological monitoring stations, the GIS platform provides functions for simulating, evaluating, and displaying different spatial-temporal toxic gas distribution patterns and evaluation results. The paper first proposes the architecture of the emergency response/management system; second, explains the simulation workflow of CALPUFF, EPA's Gaussian dispersion model, under highly complex terrain with real-time data; and third, explains the emergency workflow and the spatial analysis functions for computing the accident-affected areas and population and the optimal evacuation routes. Finally, a well blowout scenario is used to verify the system. The study shows that a GIS platform integrating real-time data and CALPUFF models will be one of the essential operational platforms for emergency management of high-sulfur gas fields.
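    For context, a steady-state Gaussian plume formula (far simpler than CALPUFF, which is a puff model driven by time-varying meteorology) shows how a leak's ground-level centerline concentration is computed from emission rate, wind, and dispersion coefficients. All input values below are made up for illustration.

```python
import math

# Illustrative steady-state Gaussian plume, ground-level centerline
# concentration downwind of a continuous point source, with ground
# reflection. This is NOT CALPUFF; inputs are invented numbers.

def plume_concentration(q, u, sigma_y, sigma_z, h):
    """q: emission rate (g/s); u: wind speed (m/s); sigma_y, sigma_z:
    dispersion coefficients (m) at the receptor distance; h: effective
    release height (m). Returns concentration in g/m^3."""
    return (q / (math.pi * u * sigma_y * sigma_z)) * math.exp(
        -h * h / (2.0 * sigma_z ** 2))

c = plume_concentration(q=100.0, u=3.0, sigma_y=40.0, sigma_z=20.0, h=50.0)
```

    Raising the release height or the wind speed lowers the ground-level concentration; mapping this kind of sensitivity over terrain and population is what the emergency platform's spatial analysis functions automate.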

  13. Image enhancement of real-time television to benefit the visually impaired.

    PubMed

    Wolffsohn, James S; Mukhopadhyay, Ditipriya; Rubinstein, Martin

    2007-09-01

    To examine the use of real-time, generic edge detection, image processing techniques to enhance the television viewing of the visually impaired. Prospective, clinical experimental study. One hundred and two sequential visually impaired participants (average age 73.8 +/- 14.8 years; 59% female) at a single center optimized a dynamic television image with respect to edge detection filter (Prewitt, Sobel, or the two combined), color (red, green, blue, or white), and intensity (one to 15 times) of the overlaid edges. They then rated the original television footage compared with a black-and-white image displaying the edges detected, and with the original television image with the detected edges overlaid in the chosen color and at the intensity selected. Footage of news, an advertisement, and the end of program credits was subjectively assessed in a random order. The Prewitt filter was preferred (44%) over the Sobel filter (27%) or a combination of the two (28%). Green and white were equally popular for displaying the detected edges (32%), with blue (22%) and red (14%) less so. The average preferred edge intensity was 3.5 +/- 1.7 times. The image-enhanced television was significantly preferred to the original (P < .001), which in turn was preferred to viewing the detected edges alone (P < .001), for each of the footage clips. Preference was not dependent on the condition causing visual impairment. Seventy percent were definitely willing to buy a set-top box that could achieve these effects for a reasonable price. Simple generic edge detection image enhancement can be performed on television in real time and significantly enhances viewing for the visually impaired.
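    The generic edge detection used in this study can be sketched compactly. The toy Prewitt filter below, run on a tiny synthetic grayscale "frame", produces the kind of edge map that was overlaid on the television footage; the colour/intensity overlay step and real-time video handling are omitted.

```python
# Prewitt edge magnitude on a tiny grayscale image (values 0-255).
# Borders are left at zero; a TV implementation runs this per frame.

PREWITT_X = [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]
PREWITT_Y = [[-1, -1, -1], [0, 0, 0], [1, 1, 1]]

def prewitt_magnitude(img):
    """Edge magnitude at interior pixels of a 2D list-of-lists image."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(PREWITT_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(PREWITT_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step edge: left half dark, right half bright.
frame = [[0, 0, 255, 255] for _ in range(4)]
edges = prewitt_magnitude(frame)
```

    The Sobel kernel differs only in weighting the centre row/column by 2, which is why a viewer-selectable choice between the two filters is cheap to offer.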

  14. Measuring, Predicting and Visualizing Short-Term Change in Word Representation and Usage in VKontakte Social Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, Ian B.; Arendt, Dustin L.; Bell, Eric B.

    Language in social media is extremely dynamic: new words emerge, trend and disappear, while the meaning of existing words can fluctuate over time. This work addresses several important tasks of visualizing and predicting short term text representation shift, i.e. the change in a word’s contextual semantics. We study the relationship between short-term concept drift and representation shift on a large social media corpus – VKontakte collected during the Russia-Ukraine crisis in 2014 – 2015. We visualize short-term representation shift for example keywords and build predictive models to forecast short-term shifts in meaning from previous meaning as well as from concept drift. We show that short-term representation shift can be accurately predicted up to several weeks in advance and that visualization provides insight into meaning change. Our approach can be used to explore and characterize specific aspects of the streaming corpus during crisis events and potentially improve other downstream classification tasks including real-time event forecasting in social media.

  15. Enhancements to VTK enabling Scientific Visualization in Immersive Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Leary, Patrick; Jhaveri, Sankhesh; Chaudhary, Aashish

    Modern scientific, engineering and medical computational simulations, as well as experimental and observational data sensing/measuring devices, produce enormous amounts of data. While statistical analysis provides insight into this data, scientific visualization is tactically important for scientific discovery, product design and data analysis. These benefits are impeded, however, when scientific visualization algorithms are implemented from scratch, a time-consuming and redundant process in immersive application development. This process can greatly benefit from leveraging the state-of-the-art open-source Visualization Toolkit (VTK) and its community. Over the past two (almost three) decades, integrating VTK with a virtual reality (VR) environment has only been attempted to varying degrees of success. In this paper, we demonstrate two new approaches to simplify this amalgamation of an immersive interface with visualization rendering from VTK. In addition, we cover several enhancements to VTK that provide near real-time updates and efficient interaction. Finally, we demonstrate the combination of VTK with both Vrui and OpenVR immersive environments in example applications.

  16. Visual Image Sensor Organ Replacement

    NASA Technical Reports Server (NTRS)

    Maluf, David A.

    2014-01-01

    This innovation is a system that augments human vision through a technique called "Sensing Super-position" using a Visual Instrument Sensory Organ Replacement (VISOR) device. The VISOR device translates data from visual and other sensors (e.g., thermal) into sounds to enable very difficult sensing tasks. Three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g. histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. Because the human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns, the translation of images into sounds reduces the risk of accidentally filtering out important clues. The VISOR device was developed to augment the current state-of-the-art head-mounted (helmet) display systems. It provides the ability to sense beyond the human visible light range, to increase human sensing resolution, to use wider angle visual perception, and to improve the ability to sense distances. It also allows compensation for movement by the human or changes in the scene being viewed.
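    The core mapping (image rows to audio frequencies, columns to time, brightness to amplitude) can be sketched as follows. The frequency range, sample rate, and slice duration below are invented, not VISOR's actual parameters.

```python
import math

# Sketch of the image-to-sound idea: each image column becomes a time
# slice, each row a frequency band, and pixel brightness the amplitude.
# All constants here are invented for illustration.

def image_to_audio(img, f_lo=200.0, f_hi=2000.0, slice_ms=50, rate=8000):
    """Render a grayscale image (values 0-255) as a list of audio samples."""
    rows = len(img)
    freqs = [f_lo + (f_hi - f_lo) * r / max(rows - 1, 1) for r in range(rows)]
    n = int(rate * slice_ms / 1000)         # samples per image column
    samples = []
    for col in range(len(img[0])):          # left-to-right scan = time axis
        for i in range(n):
            t = i / rate
            s = sum((img[r][col] / 255.0) *
                    math.sin(2 * math.pi * freqs[rows - 1 - r] * t)
                    for r in range(rows))   # top rows map to high pitch
            samples.append(s / rows)        # keep amplitude within [-1, 1]
        # (phase restarts at each column in this toy version)
    return samples

audio = image_to_audio([[255, 0], [0, 255]])  # tiny 2x2 "image"
```

    A bright pixel in the top row thus produces a high-pitched tone during its column's time slice, which is the kind of auditory pattern a trained listener learns to interpret.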

  17. Visually-guided attention enhances target identification in a complex auditory scene.

    PubMed

    Best, Virginia; Ozmeral, Erol J; Shinn-Cunningham, Barbara G

    2007-06-01

    In auditory scenes containing many similar sound sources, sorting of acoustic information into streams becomes difficult, which can lead to disruptions in the identification of behaviorally relevant targets. This study investigated the benefit of providing simple visual cues for when and/or where a target would occur in a complex acoustic mixture. Importantly, the visual cues provided no information about the target content. In separate experiments, human subjects either identified learned birdsongs in the presence of a chorus of unlearned songs or recalled strings of spoken digits in the presence of speech maskers. A visual cue indicating which loudspeaker (from an array of five) would contain the target improved accuracy for both kinds of stimuli. A cue indicating which time segment (out of a possible five) would contain the target also improved accuracy, but much more for birdsong than for speech. These results suggest that in real world situations, information about where a target of interest is located can enhance its identification, while information about when to listen can also be helpful when targets are unfamiliar or extremely similar to their competitors.

  19. Calibration of RGBD camera and cone-beam CT for 3D intra-operative mixed reality visualization.

    PubMed

    Lee, Sing Chun; Fuerst, Bernhard; Fotouhi, Javad; Fischer, Marius; Osgood, Greg; Navab, Nassir

    2016-06-01

    This work proposes a novel algorithm to register cone-beam computed tomography (CBCT) volumes and 3D optical (RGBD) camera views. The co-registered real-time RGBD camera and CBCT imaging enable a novel augmented reality solution for orthopedic surgeries, which allows arbitrary views using digitally reconstructed radiographs overlaid on the reconstructed patient's surface without the need to move the C-arm. An RGBD camera is rigidly mounted on the C-arm near the detector. We introduce a calibration method based on the simultaneous reconstruction of the surface and the CBCT scan of an object. The transformation between the two coordinate spaces is recovered using Fast Point Feature Histogram descriptors and the Iterative Closest Point algorithm. Several experiments are performed to assess the repeatability and the accuracy of this method. Target registration error is measured on multiple visual and radio-opaque landmarks to evaluate the accuracy of the registration. Mixed reality visualizations from arbitrary angles are also presented for simulated orthopedic surgeries. To the best of our knowledge, this is the first calibration method which uses only tomographic and RGBD reconstructions. This means that the method does not impose a particular shape of the phantom. We demonstrate a marker-less calibration of CBCT volumes and 3D depth cameras, achieving reasonable registration accuracy. This design requires a one-time factory calibration, is self-contained, and could be integrated into existing mobile C-arms to provide real-time augmented reality views from arbitrary angles.
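    The Iterative Closest Point algorithm cited above alternates nearest-neighbour matching with a closed-form rigid-transform fit. The fit step is sketched below in 2D on toy point sets; the paper itself registers 3D surfaces using FPFH correspondences, so this is only the underlying idea, not the authors' implementation.

```python
import math

# Closed-form least-squares rigid fit in 2D: the transform-estimation
# step inside each ICP iteration, assuming correspondences are known.

def best_rigid_2d(src, dst):
    """Rotation angle and translation mapping src onto dst, least squares."""
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n
    cdy = sum(p[1] for p in dst) / n
    # Cross and dot products of the centred pairs give the optimal angle.
    s_cross = sum((sx - csx) * (dy - cdy) - (sy - csy) * (dx - cdx)
                  for (sx, sy), (dx, dy) in zip(src, dst))
    s_dot = sum((sx - csx) * (dx - cdx) + (sy - csy) * (dy - cdy)
                for (sx, sy), (dx, dy) in zip(src, dst))
    theta = math.atan2(s_cross, s_dot)
    tx = cdx - (csx * math.cos(theta) - csy * math.sin(theta))
    ty = cdy - (csx * math.sin(theta) + csy * math.cos(theta))
    return theta, tx, ty

# dst is src rotated 90 degrees about the origin: the fit recovers that.
src = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)]
dst = [(0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
theta, tx, ty = best_rigid_2d(src, dst)
```

    Full ICP repeats this fit after re-matching each source point to its current nearest destination point, so registration accuracy hinges on good initial correspondences, which is where the FPFH descriptors come in.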

  20. A web Accessible Framework for Discovery, Visualization and Dissemination of Polar Data

    NASA Astrophysics Data System (ADS)

    Kirsch, P. J.; Breen, P.; Barnes, T. D.

    2007-12-01

    A web accessible information framework, currently under development within the Physical Sciences Division of the British Antarctic Survey, is described. The datasets accessed are generally heterogeneous in nature, from fields including space physics, meteorology, atmospheric chemistry, ice physics, and oceanography. Many of these are returned in near real time over a 24/7 limited-bandwidth link from remote Antarctic stations and ships. The requirement is to provide various user groups, each with disparate interests and demands, a system incorporating a browsable and searchable catalogue, bespoke data summary visualization, metadata access facilities, and download utilities. The system allows timely access to raw and processed datasets through an easily navigable discovery interface. Once discovered, a summary of the dataset can be visualized in a manner prescribed by the particular projects and user communities, or the dataset may be downloaded, subject to any accessibility restrictions that exist. In addition, access to related ancillary information, including software, documentation, related URLs, and information concerning non-electronic media (of particular relevance to some legacy datasets), is made directly available, having automatically been associated with a dataset during the discovery phase. Major components of the framework include the relational database containing the catalogue; the organizational structure of the systems holding the data, which enables automatic updates of the system catalogue and real-time access to data; the user interface design; and administrative and data management scripts allowing straightforward incorporation of utilities, datasets, and system maintenance.

  1. A software module for implementing auditory and visual feedback on a video-based eye tracking system

    NASA Astrophysics Data System (ADS)

    Rosanlall, Bharat; Gertner, Izidor; Geri, George A.; Arrington, Karl F.

    2016-05-01

    We describe here the design and implementation of a software module that provides both auditory and visual feedback of the eye position measured by a commercially available eye tracking system. The present audio-visual feedback module (AVFM) serves as an extension to the Arrington Research ViewPoint EyeTracker, but it can be easily modified for use with other similar systems. Two modes of audio feedback and one mode of visual feedback are provided in reference to a circular area-of-interest (AOI). Auditory feedback can be either a click tone emitted when the user's gaze point enters or leaves the AOI, or a sinusoidal waveform with frequency inversely proportional to the distance from the gaze point to the center of the AOI. Visual feedback is in the form of a small circular light patch that is presented whenever the gaze point is within the AOI. The AVFM processes data that are sent to a dynamic-link library by the EyeTracker. The AVFM's multithreaded implementation also allows real-time data collection (1 kHz sampling rate) and graphics processing that allows display of the current and past gaze points as well as the AOI. The feedback provided by the AVFM described here has applications in military target acquisition and personnel training, as well as in visual experimentation, clinical research, marketing research, and sports training.
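    The inverse-distance tone described above reduces to a one-line mapping. The gain and minimum-distance clamp below are invented values, not the AVFM's actual constants.

```python
import math

# Distance-to-pitch rule: tone frequency is inversely proportional to the
# gaze point's distance from the AOI centre. Constants are illustrative.

def feedback_frequency(gaze, aoi_center, gain=10000.0, min_dist=1.0):
    """Tone frequency in Hz; gaze closer to the AOI centre -> higher pitch."""
    dist = math.hypot(gaze[0] - aoi_center[0], gaze[1] - aoi_center[1])
    return gain / max(dist, min_dist)   # clamp avoids unbounded pitch

near = feedback_frequency((105, 100), (100, 100))  # 5 px from centre
far = feedback_frequency((200, 100), (100, 100))   # 100 px from centre
```

    Evaluated at the tracker's 1 kHz sample rate, such a rule turns the gaze trajectory into a continuously gliding pitch without any visual load on the user.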

  2. An augmented-reality edge enhancement application for Google Glass.

    PubMed

    Hwang, Alex D; Peli, Eli

    2014-08-01

    Google Glass provides a platform that can be easily extended to include a vision enhancement tool. We have implemented an augmented vision system on Glass, which overlays enhanced edge information over the wearer's real-world view, to provide contrast-improved central vision to the Glass wearer. The enhanced central vision can be naturally integrated with scanning. Google Glass' camera lens distortions were corrected using image warping. Because the camera and virtual display are horizontally separated by 16 mm, and the camera aiming and virtual display projection angle are off by 10°, the warped camera image had to go through a series of three-dimensional transformations to minimize parallax errors before the final projection to the Glass' see-through virtual display. All image processing was implemented to achieve near real-time performance. The impacts of the contrast enhancements were measured for three normal-vision subjects, with and without a diffuser film to simulate vision loss. For all three subjects, significantly improved contrast sensitivity was achieved when the subjects used the edge enhancements with a diffuser film. The performance boost is limited by the Glass camera's performance. The authors assume that this accounts for why performance improvements were observed only with the diffuser filter condition (simulating low vision). Improvements were measured with simulated visual impairments. With the benefit of see-through augmented reality edge enhancement, a natural visual scanning process is possible, suggesting that the device may provide better visual function in a cosmetically and ergonomically attractive format for patients with macular degeneration.

  3. What can neuromorphic event-driven precise timing add to spike-based pattern recognition?

    PubMed

    Akolkar, Himanshu; Meyer, Cedric; Clady, Xavier; Marre, Olivier; Bartolozzi, Chiara; Panzeri, Stefano; Benosman, Ryad

    2015-03-01

    This letter introduces a study to precisely measure what an increase in spike timing precision can add to spike-driven pattern recognition algorithms. The concept of generating spikes from images by converting gray levels into spike timings is currently at the basis of almost every spike-based model of biological visual systems. The use of images naturally leads to generating incorrect, artificial, and redundant spike timings and, more important, also contradicts biological findings indicating that visual processing is massively parallel and asynchronous with high temporal resolution. A new concept for acquiring visual information through pixel-individual asynchronous level-crossing sampling has been proposed in a recent generation of asynchronous neuromorphic visual sensors. Unlike conventional cameras, these sensors acquire data not at fixed points in time for the entire array but at fixed amplitude changes of their input, resulting in data that are optimally sparse in space and time: each pixel fires individually and is precisely timed only when new (previously unknown) information is available (event based). This letter uses the high temporal resolution spiking output of neuromorphic event-based visual sensors to show that lowering time precision degrades performance on several recognition tasks, specifically when reaching the conventional range of machine vision acquisition frequencies (30-60 Hz). The use of information theory to characterize separability between classes for each temporal resolution shows that high temporal acquisition provides up to 70% more information than conventional spikes generated from frame-based acquisition as used in standard artificial vision, thus drastically increasing the separability between classes of objects. Experiments on real data show that the amount of information loss is correlated with temporal precision. Our information-theoretic study highlights the potential of neuromorphic asynchronous visual sensors for both practical applications and theoretical investigations. Moreover, it suggests that representing visual information as a precise sequence of spike times, as reported in the retina, offers considerable advantages for neuro-inspired visual computations.
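    The pixel-individual level-crossing sampling described in the letter can be sketched for a single pixel: an event is emitted only when the input has moved a fixed step away from the level that triggered the previous event, so a constant input produces no data at all. Signal values and the threshold below are arbitrary illustration numbers.

```python
# Single-pixel sketch of asynchronous level-crossing sampling: events are
# (sample_index, polarity) pairs emitted at fixed amplitude changes.

def level_crossing_events(signal, step=10.0):
    """Return (sample_index, polarity) events for each +/- step crossing."""
    events = []
    ref = signal[0]                 # level at the last emitted event
    for t, x in enumerate(signal[1:], start=1):
        while x - ref >= step:      # input rose by at least one step
            ref += step
            events.append((t, +1))
        while ref - x >= step:      # input fell by at least one step
            ref -= step
            events.append((t, -1))
    return events

# A constant input generates no events; a ramp generates sparse,
# precisely timed events.
evts = level_crossing_events([0, 5, 12, 25, 18])
```

    In a real sensor each pixel runs this rule independently and asynchronously, which is what makes the output sparse in space and time rather than frame-locked.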

  4. Integration of Geographical Information Systems and Geophysical Applications with Distributed Computing Technologies.

    NASA Astrophysics Data System (ADS)

    Pierce, M. E.; Aktas, M. S.; Aydin, G.; Fox, G. C.; Gadgil, H.; Sayar, A.

    2005-12-01

    We examine the application of Web Service Architectures and Grid-based distributed computing technologies to geophysics and geo-informatics. We are particularly interested in the integration of Geographical Information System (GIS) services with distributed data mining applications. GIS services provide the general purpose framework for building archival data services, real time streaming data services, and map-based visualization services that may be integrated with data mining and other applications through the use of distributed messaging systems and Web Service orchestration tools. Building upon our previous work in these areas, we present our current research efforts. These include fundamental investigations into increasing XML-based Web service performance, supporting real time data streams, and integrating GIS mapping tools with audio/video collaboration systems for shared display and annotation.

  5. Real time 3D structural and Doppler OCT imaging on graphics processing units

    NASA Astrophysics Data System (ADS)

    Sylwestrzak, Marcin; Szlag, Daniel; Szkulmowski, Maciej; Gorczyńska, Iwona; Bukowska, Danuta; Wojtkowski, Maciej; Targowski, Piotr

    2013-03-01

    In this report, the application of graphics processing unit (GPU) programming for real-time 3D Fourier domain Optical Coherence Tomography (FdOCT) imaging, with implementation of Doppler algorithms for visualization of flows in capillary vessels, is presented. Generally, processing FdOCT data on the computer's main processor (CPU) constitutes the main limitation for real-time imaging. Employing additional algorithms, such as Doppler OCT analysis, makes this processing even more time consuming. Recently developed GPUs, which offer very high computational power, provide a solution to this problem. Taking advantage of them for massively parallel data processing allows real-time imaging in FdOCT. The presented software for structural and Doppler OCT performs the complete processing and visualization of 2D data consisting of 2000 A-scans generated from 2048-pixel spectra at a frame rate of about 120 fps. 3D imaging in the same mode, on volume data built of 220 × 100 A-scans, is performed at a rate of about 8 frames per second. In this paper, the software architecture, organization of the threads, and the optimizations applied are shown. For illustration, screenshots recorded during real-time imaging of a phantom (a homogeneous water solution of Intralipid in a glass capillary) and the human eye in vivo are presented.
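    At its core, the Doppler computation compares the phase of the same depth pixel across successive A-scans: moving scatterers shift the phase, static ones do not. A minimal sketch of that step (toy complex values rather than FFT output, and no GPU parallelism) is:

```python
import cmath

# Per-depth phase difference between two successive complex A-scans.
# Values here are toy numbers; the real pipeline obtains A-scans via
# FFT of the acquired spectra, massively parallelized on the GPU.

def doppler_phase_shift(a_scan_prev, a_scan_next):
    """Per-depth phase difference (radians) between two complex A-scans."""
    return [cmath.phase(b * a.conjugate())   # arg of product = phase delta
            for a, b in zip(a_scan_prev, a_scan_next)]

prev = [1 + 0j, 1 + 1j]                      # two depth pixels
curr = [1 + 0j, (1 + 1j) * cmath.exp(0.5j)]  # second pixel shifted 0.5 rad
shifts = doppler_phase_shift(prev, curr)
```

    The phase shift per inter-scan interval is proportional to the axial flow velocity, which is what the Doppler maps of capillary flow visualize.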

  6. ARC integration into the NEAMS Workbench

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stauff, N.; Gaughan, N.; Kim, T.

    2017-01-01

    One of the objectives of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Integration Product Line (IPL) is to facilitate the deployment of the high-fidelity codes developed within the program. The Workbench initiative was launched in FY-2017 by the IPL to facilitate the transition from conventional tools to high fidelity tools. The Workbench provides a common user interface for model creation, real-time validation, execution, output processing, and visualization for integrated codes.

  7. Method and System for Dynamic Automated Corrections to Weather Avoidance Routes for Aircraft in En Route Airspace

    NASA Technical Reports Server (NTRS)

    McNally, B. David (Inventor); Erzberger, Heinz (Inventor); Sheth, Kapil (Inventor)

    2015-01-01

    A dynamic weather route system automatically analyzes routes for in-flight aircraft flying in convective weather regions and attempts to find more time- and fuel-efficient reroutes around current and predicted weather cells. The dynamic weather route system continuously analyzes all flights and provides reroute advisories that are dynamically updated in real time while the aircraft are in flight. The dynamic weather route system includes a graphical user interface that allows users to visualize, evaluate, modify if necessary, and implement proposed reroutes.

  8. Foggy perception slows us down

    PubMed Central

    Pretto, Paolo; Bresciani, Jean-Pierre; Rainer, Gregor; Bülthoff, Heinrich H

    2012-01-01

    Visual speed is believed to be underestimated at low contrast, which has been proposed as an explanation of excessive driving speed in fog. Combining psychophysics measurements and driving simulation, we confirm that speed is underestimated when contrast is reduced uniformly for all objects of the visual scene, independently of their distance from the viewer. However, we show that when contrast is reduced more for distant objects, as is the case in real fog, visual speed is actually overestimated, prompting drivers to decelerate. Using an artificial "anti-fog" (fog characterized by better visibility for distant than for close objects), we demonstrate for the first time that perceived speed depends on the spatial distribution of contrast over the visual scene rather than on the global level of contrast per se. Our results cast new light on how reduced visibility conditions affect perceived speed, providing important insight into the human visual system. DOI: http://dx.doi.org/10.7554/eLife.00031.001 PMID:23110253
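    The three viewing conditions contrasted in the abstract can be captured in a toy model: uniform attenuation, fog whose attenuation grows with distance (a Beer-Lambert-style falloff), and the artificial anti-fog that reverses the gradient. The functional forms and parameter values below are illustrative assumptions, not the paper's stimulus specification.

```python
import math

def contrast(c0, distance, mode, k=0.1):
    """Toy contrast models for the three viewing conditions.

    'uniform'  : contrast reduced equally at all distances
    'fog'      : contrast falls off with distance (Beer-Lambert-like)
    'anti-fog' : contrast rises with distance (the artificial condition)
    """
    if mode == "uniform":
        return 0.5 * c0
    if mode == "fog":
        return c0 * math.exp(-k * distance)
    if mode == "anti-fog":
        return c0 * (1.0 - math.exp(-k * distance))
    raise ValueError(mode)

# In real fog, a distant object loses more contrast than a near one;
# in anti-fog the ordering is reversed.
near_fog, far_fog = contrast(1.0, 5.0, "fog"), contrast(1.0, 50.0, "fog")
near_anti, far_anti = contrast(1.0, 5.0, "anti-fog"), contrast(1.0, 50.0, "anti-fog")
```

    The study's key manipulation is exactly this spatial gradient of contrast, rather than its mean level.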

  9. Detecting changes in real-world objects: The relationship between visual long-term memory and change blindness.

    PubMed

    Brady, Timothy F; Konkle, Talia; Oliva, Aude; Alvarez, George A

    2009-01-01

    A large body of literature has shown that observers often fail to notice significant changes in visual scenes, even when these changes happen right in front of their eyes. For instance, people often fail to notice if their conversation partner is switched to another person, or if large background objects suddenly disappear [1, 2]. These 'change blindness' studies have led to the inference that the amount of information we remember about each item in a visual scene may be quite low [1]. However, in recent work we have demonstrated that long-term memory is capable of storing a massive number of visual objects with significant detail about each item [3]. In the present paper we attempt to reconcile these findings by demonstrating that observers do not experience 'change blindness' with the real world objects used in our previous experiment if they are given sufficient time to encode each item. The results reported here suggest that one of the major causes of change blindness for real-world objects is a lack of encoding time or attention to each object (see also refs. 4 and 5).

  10. Rapid interactions between lexical semantic and word form analysis during word recognition in context: evidence from ERPs.

    PubMed

    Kim, Albert; Lai, Vicky

    2012-05-01

    We used ERPs to investigate the time course of interactions between lexical semantic and sublexical visual word form processing during word recognition. Participants read sentence-embedded pseudowords that orthographically resembled a contextually supported real word (e.g., "She measured the flour so she could bake a ceke…") or did not (e.g., "She measured the flour so she could bake a tont…") along with nonword consonant strings (e.g., "She measured the flour so she could bake a srdt…"). Pseudowords that resembled a contextually supported real word ("ceke") elicited an enhanced positivity at 130 msec (P130), relative to real words (e.g., "She measured the flour so she could bake a cake…"). Pseudowords that did not resemble a plausible real word ("tont") enhanced the N170 component, as did nonword consonant strings ("srdt"). The effect pattern shows that the visual word recognition system is, perhaps, counterintuitively, more rapidly sensitive to minor than to flagrant deviations from contextually predicted inputs. The findings are consistent with rapid interactions between lexical and sublexical representations during word recognition, in which rapid lexical access of a contextually supported word (CAKE) provides top-down excitation of form features ("cake"), highlighting the anomaly of an unexpected word "ceke."

  11. Interactive Visualization of Near Real-Time and Production Global Precipitation Mission Data Online Using CesiumJS

    NASA Astrophysics Data System (ADS)

    Lammers, M.

    2016-12-01

    Advancements in the capabilities of JavaScript frameworks and web browsing technology make online visualization of large geospatial datasets viable. Commonly this is done using static image overlays, pre-rendered animations, or cumbersome geoservers. These methods can limit interactivity and/or place a large burden on server-side post-processing and storage of data. Geospatial data, and satellite data specifically, benefit from being visualized both on and above a three-dimensional surface. The open-source JavaScript framework CesiumJS, developed by Analytical Graphics, Inc., leverages the WebGL protocol to do just that. It has entered the void left by the abandonment of the Google Earth Web API, and it serves as a capable and well-maintained platform upon which data can be displayed. This paper will describe the technology behind the two primary products developed as part of the NASA Precipitation Processing System STORM website: GPM Near Real Time Viewer (GPMNRTView) and STORM Virtual Globe (STORM VG). GPMNRTView reads small post-processed CZML files derived from various Level 1 through 3 near real-time products. For swath-based products, several brightness temperature channels or precipitation-related variables are available for animating in virtual real-time as the satellite observed them on and above the Earth's surface. With grid-based products, only precipitation rates are available, but the grid points are visualized in such a way that they can be interactively examined to explore raw values. STORM VG reads values directly off the HDF5 files, converting the information into JSON on the fly. All data points both on and above the surface can be examined here as well. Both the raw values and, if relevant, elevations are displayed. Surface and above-ground precipitation rates from select Level 2 and 3 products are shown. Examples from both products will be shown, including visuals from high impact events observed by GPM constellation satellites.
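    The CZML files that GPMNRTView consumes are JSON documents following Cesium's public CZML structure: an array whose first packet declares the document, with subsequent packets carrying time-tagged positions and display properties. The sketch below builds a minimal document of that shape in Python; the coordinates, times, and the `precipRate_mm_hr` property name are made-up illustrations, not actual GPM product fields.

```python
import json

# Minimal CZML document: one time-tagged point that could represent a
# precipitation observation above the Earth's surface. Packet and field
# names follow the public CZML specification; values are illustrative.
czml = [
    {"id": "document", "name": "GPM swath sample", "version": "1.0"},
    {
        "id": "obs-0",
        "availability": "2016-03-01T00:00:00Z/2016-03-01T00:10:00Z",
        "position": {
            "epoch": "2016-03-01T00:00:00Z",
            # Flat list of [seconds, lon(deg), lat(deg), height(m)] samples
            "cartographicDegrees": [0, -80.0, 25.0, 5000.0,
                                    600, -79.5, 25.5, 5000.0],
        },
        "point": {"pixelSize": 6},
        "properties": {"precipRate_mm_hr": 12.4},  # hypothetical payload
    },
]
doc = json.dumps(czml)
```

    Loading such a document with CesiumJS's `CzmlDataSource` lets the client interpolate the point along the swath in "virtual real time," which is the interactivity the abstract highlights.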

  12. Interactive Visualization of Near Real Time and Production Global Precipitation Measurement (GPM) Mission Data Online Using CesiumJS

    NASA Technical Reports Server (NTRS)

    Lammers, Matthew

    2016-01-01

    Advancements in the capabilities of JavaScript frameworks and web browsing technology make online visualization of large geospatial datasets viable. Commonly this is done using static image overlays, pre-rendered animations, or cumbersome geoservers. These methods can limit interactivity and/or place a large burden on server-side post-processing and storage of data. Geospatial data, and satellite data specifically, benefit from being visualized both on and above a three-dimensional surface. The open-source JavaScript framework CesiumJS, developed by Analytical Graphics, Inc., leverages the WebGL protocol to do just that. It has entered the void left by the abandonment of the Google Earth Web API, and it serves as a capable and well-maintained platform upon which data can be displayed. This paper will describe the technology behind the two primary products developed as part of the NASA Precipitation Processing System STORM website: GPM Near Real Time Viewer (GPMNRTView) and STORM Virtual Globe (STORM VG). GPMNRTView reads small post-processed CZML files derived from various Level 1 through 3 near real-time products. For swath-based products, several brightness temperature channels or precipitation-related variables are available for animating in virtual real-time as the satellite observed them on and above the Earth's surface. With grid-based products, only precipitation rates are available, but the grid points are visualized in such a way that they can be interactively examined to explore raw values. STORM VG reads values directly off the HDF5 files, converting the information into JSON on the fly. All data points both on and above the surface can be examined here as well. Both the raw values and, if relevant, elevations are displayed. Surface and above-ground precipitation rates from select Level 2 and 3 products are shown. Examples from both products will be shown, including visuals from high impact events observed by GPM constellation satellites.

  13. Benchmarking Spike-Based Visual Recognition: A Dataset and Evaluation

    PubMed Central

    Liu, Qian; Pineda-García, Garibaldi; Stromatias, Evangelos; Serrano-Gotarredona, Teresa; Furber, Steve B.

    2016-01-01

    Today, increasing attention is being paid to research into spike-based neural computation, both to gain a better understanding of the brain and to explore biologically-inspired computation. Within this field, the primate visual pathway and its hierarchical organization have been extensively studied. Spiking Neural Networks (SNNs), inspired by the understanding of observed biological structure and function, have been successfully applied to visual recognition and classification tasks. In addition, implementations on neuromorphic hardware have enabled large-scale networks to run in (or even faster than) real time, making spike-based neural vision processing accessible on mobile robots. Neuromorphic sensors such as silicon retinas are able to feed such mobile systems with real-time visual stimuli. A new set of vision benchmarks for spike-based neural processing are now needed to measure progress quantitatively within this rapidly advancing field. We propose that a large dataset of spike-based visual stimuli is needed to provide meaningful comparisons between different systems, and a corresponding evaluation methodology is also required to measure the performance of SNN models and their hardware implementations. In this paper we first propose an initial NE (Neuromorphic Engineering) dataset based on standard computer vision benchmarks and using digits from the MNIST database. This dataset is compatible with the state of current research on spike-based image recognition. The corresponding spike trains are produced using a range of techniques: rate-based Poisson spike generation, rank order encoding, and recorded output from a silicon retina with both flashing and oscillating input stimuli. In addition, a complementary evaluation methodology is presented to assess both model-level and hardware-level performance.
Finally, we demonstrate the use of the dataset and the evaluation methodology using two SNN models to validate the performance of the models and their hardware implementations. With this dataset we hope to (1) promote meaningful comparison between algorithms in the field of neural computation, (2) allow comparison with conventional image recognition methods, (3) provide an assessment of the state of the art in spike-based visual recognition, and (4) help researchers identify future directions and advance the field. PMID:27853419
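    Of the encoding techniques the benchmark names, rate-based Poisson spike generation is the simplest: each pixel's intensity sets an independent firing rate, and spikes are drawn Bernoulli-style per time step. The sketch below is a generic illustration of that standard scheme, not the dataset's generation code; the rate, duration, and time-step values are assumptions.

```python
import numpy as np

def poisson_spike_trains(intensities, max_rate_hz=100.0,
                         duration_s=1.0, dt_s=0.001, seed=0):
    """Rate-coded Poisson spike trains from pixel intensities in [0, 1].

    Each pixel fires independently with probability rate*dt per time
    step. Returns a boolean array of shape (n_steps, n_pixels).
    """
    rng = np.random.default_rng(seed)
    rates = np.asarray(intensities) * max_rate_hz  # Hz per pixel
    n_steps = int(round(duration_s / dt_s))
    return rng.random((n_steps, rates.size)) < rates * dt_s

# A bright pixel should emit roughly max_rate spikes per second,
# a mid-grey pixel about half as many, and a dark pixel none.
spikes = poisson_spike_trains([1.0, 0.5, 0.0], max_rate_hz=100.0)
counts = spikes.sum(axis=0)
```

    Because the encoding is stochastic, benchmark comparisons typically average over several independently seeded spike-train realizations of the same image.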

  14. Benchmarking Spike-Based Visual Recognition: A Dataset and Evaluation.

    PubMed

    Liu, Qian; Pineda-García, Garibaldi; Stromatias, Evangelos; Serrano-Gotarredona, Teresa; Furber, Steve B

    2016-01-01

    Today, increasing attention is being paid to research into spike-based neural computation, both to gain a better understanding of the brain and to explore biologically-inspired computation. Within this field, the primate visual pathway and its hierarchical organization have been extensively studied. Spiking Neural Networks (SNNs), inspired by the understanding of observed biological structure and function, have been successfully applied to visual recognition and classification tasks. In addition, implementations on neuromorphic hardware have enabled large-scale networks to run in (or even faster than) real time, making spike-based neural vision processing accessible on mobile robots. Neuromorphic sensors such as silicon retinas are able to feed such mobile systems with real-time visual stimuli. A new set of vision benchmarks for spike-based neural processing are now needed to measure progress quantitatively within this rapidly advancing field. We propose that a large dataset of spike-based visual stimuli is needed to provide meaningful comparisons between different systems, and a corresponding evaluation methodology is also required to measure the performance of SNN models and their hardware implementations. In this paper we first propose an initial NE (Neuromorphic Engineering) dataset based on standard computer vision benchmarks and using digits from the MNIST database. This dataset is compatible with the state of current research on spike-based image recognition. The corresponding spike trains are produced using a range of techniques: rate-based Poisson spike generation, rank order encoding, and recorded output from a silicon retina with both flashing and oscillating input stimuli. In addition, a complementary evaluation methodology is presented to assess both model-level and hardware-level performance.
Finally, we demonstrate the use of the dataset and the evaluation methodology using two SNN models to validate the performance of the models and their hardware implementations. With this dataset we hope to (1) promote meaningful comparison between algorithms in the field of neural computation, (2) allow comparison with conventional image recognition methods, (3) provide an assessment of the state of the art in spike-based visual recognition, and (4) help researchers identify future directions and advance the field.

  15. Localization of magnetic pills

    PubMed Central

    Laulicht, Bryan; Gidmark, Nicholas J.; Tripathi, Anubhav; Mathiowitz, Edith

    2011-01-01

    Numerous therapeutics demonstrate optimal absorption or activity at specific sites in the gastrointestinal (GI) tract. Yet, safe, effective pill retention within a desired region of the GI remains an elusive goal. We report a safe, effective method for localizing magnetic pills. To ensure safety and efficacy, we monitor and regulate attractive forces between a magnetic pill and an external magnet, while visualizing internal dose motion in real time using biplanar videofluoroscopy. Real-time monitoring yields direct visual confirmation of localization completely noninvasively, providing a platform for investigating the therapeutic benefits imparted by localized oral delivery of new and existing drugs. Additionally, we report the in vitro measurements and calculations that enabled prediction of successful magnetic localization in the rat small intestines for 12 h. The designed system for predicting and achieving successful magnetic localization can readily be applied to any area of the GI tract within any species, including humans. The described system represents a significant step forward in the ability to localize magnetic pills safely and effectively anywhere within the GI tract. What our magnetic pill localization strategy adds to the state of the art, if used as an oral drug delivery system, is the ability to monitor the force exerted by the pill on the tissue and to locate the magnetic pill within the test subject all in real time. This advance ensures both safety and efficacy of magnetic localization during the potential oral administration of any magnetic pill-based delivery system. PMID:21257903
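    The safety monitoring the abstract emphasizes amounts to keeping the pill-magnet attraction within bounds as separation changes. For two coaxial magnetic dipoles, the standard far-field result is F = 3μ₀m₁m₂ / (2πd⁴), so the force falls by a factor of 16 when the separation doubles. The sketch below uses that textbook formula with illustrative moments and distances, not the paper's measured values.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def coaxial_dipole_force(m1, m2, d):
    """Attractive force (N) between two coaxial magnetic dipoles.

    Far-field formula F = 3*mu0*m1*m2 / (2*pi*d**4); moments in A*m^2,
    separation d in metres. Values used below are illustrative only.
    """
    return 3 * MU0 * m1 * m2 / (2 * math.pi * d ** 4)

# The d**-4 dependence is what makes real-time monitoring important:
# small changes in separation produce large changes in tissue force.
f_close = coaxial_dipole_force(0.1, 1.0, 0.05)  # pill 5 cm from magnet
f_far = coaxial_dipole_force(0.1, 1.0, 0.10)    # pill 10 cm from magnet
```

    In a monitoring loop, the external magnet's position would be adjusted so the computed (or measured) force stays below a tissue-safety threshold.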

  16. Design and implementation of visual-haptic assistive control system for virtual rehabilitation exercise and teleoperation manipulation.

    PubMed

    Veras, Eduardo J; De Laurentis, Kathryn J; Dubey, Rajiv

    2008-01-01

    This paper describes the design and implementation of a control system that integrates visual and haptic information to give assistive force feedback through a haptic controller (Omni Phantom) to the user. A sensor-based assistive function and velocity scaling program provides force feedback that helps the user complete trajectory-following exercises for rehabilitation purposes. The system also incorporates a PUMA robot for teleoperation; a camera and a laser range finder, controlled in real time by a PC, were integrated into the system to help the user define the intended path to the selected target. The real-time force feedback from the remote robot to the haptic controller is made possible by using effective multithreading programming strategies in the control system design and by novel sensor integration. The sensor-based assistant function concept applied to teleoperation, as well as shared control, enhances the motion range and manipulation capabilities of users executing rehabilitation exercises such as trajectory following along a sensor-based defined path. The system is modularly designed to allow for integration of different master devices and sensors. Furthermore, because this real-time system is versatile, the haptic component can be used separately from the telerobotic component; in other words, one can use the haptic device for rehabilitation purposes in cases where assistance is needed to perform tasks (e.g., stroke rehab) and also for teleoperation with force feedback and sensor assistance in either supervisory or automatic modes.

  17. Categorization and decision-making in a neurobiologically plausible spiking network using a STDP-like learning rule.

    PubMed

    Beyeler, Michael; Dutt, Nikil D; Krichmar, Jeffrey L

    2013-12-01

    Understanding how the human brain is able to efficiently perceive and understand a visual scene is still a field of ongoing research. Although many studies have focused on the design and optimization of neural networks to solve visual recognition tasks, most of them either lack neurobiologically plausible learning rules or decision-making processes. Here we present a large-scale model of a hierarchical spiking neural network (SNN) that integrates a low-level memory encoding mechanism with a higher-level decision process to perform a visual classification task in real-time. The model consists of Izhikevich neurons and conductance-based synapses for realistic approximation of neuronal dynamics, a spike-timing-dependent plasticity (STDP) synaptic learning rule with additional synaptic dynamics for memory encoding, and an accumulator model for memory retrieval and categorization. The full network, which comprised 71,026 neurons and approximately 133 million synapses, ran in real-time on a single off-the-shelf graphics processing unit (GPU). The network was constructed on a publicly available SNN simulator that supports general-purpose neuromorphic computer chips. The network achieved 92% correct classifications on MNIST in 100 rounds of random sub-sampling, which is comparable to other SNN approaches and provides a conservative and reliable performance metric. Additionally, the model correctly predicted reaction times from psychophysical experiments. Because of the scalability of the approach and its neurobiological fidelity, the current model can be extended to an efficient neuromorphic implementation that supports more generalized object recognition and decision-making architectures found in the brain. Copyright © 2013 Elsevier Ltd. All rights reserved.
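    The neuron model named in the abstract has a compact published form: Izhikevich's two-variable system v' = 0.04v² + 5v + 140 − u + I, u' = a(bv − u), with the reset v←c, u←u+d when v reaches 30 mV. The sketch below is a minimal Euler-integration illustration of that model using the standard regular-spiking parameters, not the paper's 71,026-neuron network; the injected current is an assumption.

```python
import numpy as np

def izhikevich_step(v, u, i_inj, a=0.02, b=0.2, c=-65.0, d=8.0, dt=1.0):
    """One Euler step (dt in ms) of the Izhikevich neuron model.

    Regular-spiking parameters (a, b, c, d) from Izhikevich (2003).
    Returns updated (v, u) arrays and a boolean spike mask.
    """
    fired = v >= 30.0                  # spike threshold (mV)
    v = np.where(fired, c, v)          # reset membrane potential
    u = np.where(fired, u + d, u)      # reset recovery variable
    v = v + dt * (0.04 * v ** 2 + 5 * v + 140 - u + i_inj)
    u = u + dt * a * (b * v - u)
    return v, u, fired

# Drive one neuron with a constant current and count spikes over 1 s.
v, u = np.array([-65.0]), np.array([-13.0])
n_spikes = 0
for _ in range(1000):                  # 1000 ms at dt = 1 ms
    v, u, fired = izhikevich_step(v, u, i_inj=10.0)
    n_spikes += int(fired[0])
```

    Because the update is a few array operations applied identically to every neuron, it vectorizes across tens of thousands of neurons, which is what makes the GPU real-time figure in the abstract attainable.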

  18. Information Communication using Knowledge Engine on Flood Issues

    NASA Astrophysics Data System (ADS)

    Demir, I.; Krajewski, W. F.

    2012-04-01

    The Iowa Flood Information System (IFIS) is a web-based platform developed by the Iowa Flood Center (IFC) to provide access to and visualization of flood inundation maps, real-time flood conditions, short-term and seasonal flood forecasts, and other flood-related data for communities in Iowa. The system is designed for use by the general public, often people with no domain knowledge and a poor general science background. To improve communication with such an audience, we have introduced a new way in IFIS to get information on flood-related issues: instead of navigating among hundreds of features and interfaces of the information system and web-based sources, users receive dynamic computations based on a collection of built-in data, analyses, and methods. The IFIS Knowledge Engine connects to distributed sources of real-time stream gauges, in-house data sources, and analysis and visualization tools to answer questions grouped into several categories. Users are able to provide input for queries within the categories of rainfall, flood conditions, forecasts, inundation maps, flood risk, and data sensors. Our goal is the systematization of knowledge on flood-related issues and a single source for definitive answers to factual queries. The long-term goal of this knowledge engine is to make all flood-related knowledge easily accessible to everyone and to provide an educational geoinformatics tool. A future implementation of the system will accept free-form input and offer voice recognition capabilities within browser and mobile applications. We intend to deliver increasing capabilities for the system over the coming releases of IFIS. This presentation provides an overview of our Knowledge Engine, its unique information interface and functionality as an educational tool, and discusses future plans for providing knowledge on flood-related issues and resources.

  19. Toward real-time regional earthquake simulation of Taiwan earthquakes

    NASA Astrophysics Data System (ADS)

    Lee, S.; Liu, Q.; Tromp, J.; Komatitsch, D.; Liang, W.; Huang, B.

    2013-12-01

    We developed a Real-time Online earthquake Simulation system (ROS) to simulate regional earthquakes in Taiwan. The ROS uses a centroid moment tensor solution of seismic events from a Real-time Moment Tensor monitoring system (RMT), which provides all the point source parameters, including the event origin time, hypocentral location, moment magnitude, and focal mechanism, within 2 minutes after the occurrence of an earthquake. All of the source parameters are then automatically forwarded to the ROS to perform an earthquake simulation based on a spectral-element method (SEM). We have improved SEM mesh quality by introducing a thin high-resolution mesh layer near the surface to accommodate steep and rapidly varying topography. The mesh for the shallow sedimentary basin is adjusted to reflect its complex geometry and sharp lateral velocity contrasts. The grid resolution at the surface is about 545 m, which is sufficient to resolve topography and tomography data for simulations accurate up to 1.0 Hz. The ROS is also an infrastructural service, making online earthquake simulation feasible. Users can conduct their own earthquake simulation by providing a set of source parameters through the ROS webpage. For visualization, a ShakeMovie and a ShakeMap are produced during the simulation. The time needed for one event is roughly 3 minutes for a 70 sec ground motion simulation. The ROS is operated online at the Institute of Earth Sciences, Academia Sinica (http://ros.earth.sinica.edu.tw/). Our long-term goal for the ROS system is to contribute to public earth science outreach and to realize seismic ground motion prediction in real time.

  20. Space Radiation Monitoring Center at SINP MSU

    NASA Astrophysics Data System (ADS)

    Kalegaev, Vladimir; Barinova, Wera; Barinov, Oleg; Bobrovnikov, Sergey; Dolenko, Sergey; Mukhametdinova, Ludmila; Myagkova, Irina; Nguen, Minh; Panasyuk, Mikhail; Shiroky, Vladimir; Shugay, Julia

    2015-04-01

    Data on energetic particle fluxes from Russian satellites are collected at the Space Monitoring Data Center at Moscow State University in near real-time mode. The web portal http://smdc.sinp.msu.ru/ provides operational information on the radiation state of near-Earth space. Operational data come from the ELECTRO-L1 and Meteor-M2 space missions. High-resolution data on energetic electron fluxes from MSU's VERNOV satellite, with the RELEC instrumentation on board, are also available. Specific tools allow the visual representation of the satellite orbit in 3D space simultaneously with particle flux variations. Concurrent operational data coming from other spacecraft (ACE, GOES, SDO) and from the Earth's surface (geomagnetic indices) are used to represent the geomagnetic and radiation state of the near-Earth environment. The Internet portal http://swx.sinp.msu.ru provides access to actual data characterizing the level of solar activity and the geomagnetic and radiation conditions in the heliosphere and the Earth's magnetosphere in real-time mode. Operational forecasting services automatically generate alerts on particle flux enhancements above threshold values, both for SEP and for relativistic electrons, using data from LEO and GEO orbits. Models of the space environment working in autonomous mode are used to generalize the information obtained from different missions to the whole magnetosphere. Online applications built on these models provide short-term forecasts of SEP and relativistic electron fluxes at GEO and LEO, as well as online forecasting of the Dst and Kp indices up to 1.5 hours ahead. Velocities of high-speed solar wind streams at the Earth's orbit are estimated 3-4 days in advance. A visualization system provides representation of experimental and modeling data in 2D and 3D.
