Sample records for allowed visual observation

  1. Sequential Ideal-Observer Analysis of Visual Discriminations.

    ERIC Educational Resources Information Center

    Geisler, Wilson S.

    1989-01-01

A new analysis, based on the concept of the ideal observer in signal detection theory, is described. It allows the flow of discrimination information to be traced through the initial physiological stages of visual processing for arbitrary spatio-chromatic stimuli, and the information content of those stimuli to be measured. (TJH)

  2. Large-Scale Overlays and Trends: Visually Mining, Panning and Zooming the Observable Universe.

    PubMed

    Luciani, Timothy Basil; Cherinka, Brian; Oliphant, Daniel; Myers, Sean; Wood-Vasey, W Michael; Labrinidis, Alexandros; Marai, G Elisabeta

    2014-07-01

We introduce a web-based computing infrastructure to assist the visual integration, mining and interactive navigation of large-scale astronomy observations. Following an analysis of the application domain, we design a client-server architecture to fetch distributed image data and to partition local data into a spatial index structure that allows prefix-matching of spatial objects. In conjunction with hardware-accelerated pixel-based overlays and an online cross-registration pipeline, this approach allows the fetching, displaying, panning and zooming of gigapixel panoramas of the sky in real time. To further facilitate the integration and mining of spatial and non-spatial data, we introduce interactive trend images: compact visual representations for identifying outlier objects and for studying trends within large collections of spatial objects of a given class. In a demonstration, images from three sky surveys (SDSS, FIRST and simulated LSST results) are cross-registered and integrated as overlays, allowing cross-spectrum analysis of astronomy observations. Trend images are interactively generated from catalog data and used to visually mine astronomy observations of similar type. The front-end of the infrastructure uses the web technologies WebGL and HTML5 to enable cross-platform, web-based functionality. Our approach attains interactive rendering frame rates; its power and flexibility enable it to serve the needs of the astronomy community. Evaluation on three case studies, as well as feedback from domain experts, emphasizes the benefits of this visual approach to the observational astronomy field, and its potential benefits to large-scale geospatial visualization in general.
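The abstract mentions partitioning local data into a spatial index that "allows prefix-matching of spatial objects." The paper's actual scheme is not given here; the following is a minimal quadkey-style sketch (all names hypothetical) of the underlying idea, that string prefixes of a quadtree key correspond to enclosing sky cells, so cell membership reduces to a prefix test:

```python
def quadkey(ra, dec, depth=8):
    """Encode a sky position (degrees) as a quadtree key whose string
    prefixes correspond to enclosing cells (illustrative scheme only)."""
    x0, x1, y0, y1 = 0.0, 360.0, -90.0, 90.0
    key = ""
    for _ in range(depth):
        xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
        quad = (2 if dec >= ym else 0) + (1 if ra >= xm else 0)
        key += str(quad)
        x0, x1 = (xm, x1) if ra >= xm else (x0, xm)
        y0, y1 = (ym, y1) if dec >= ym else (y0, ym)
    return key

def objects_in_cell(index, prefix):
    """Prefix match: all objects whose key starts with the cell's key."""
    return [obj for key, obj in index if key.startswith(prefix)]

# Usage: index a few objects, then fetch everything in a coarse cell.
objs = [("A", 10.0, 5.0), ("B", 10.2, 5.1), ("C", 200.0, -40.0)]
index = [(quadkey(ra, dec), name) for name, ra, dec in objs]
cell = quadkey(10.0, 5.0, depth=4)
print(objects_in_cell(index, cell))  # A and B share the coarse cell
```

A sorted key list would let the same prefix query run as a binary-search range scan, which is what makes such indexes practical at catalog scale.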

  3. First-Person Visualizations of the Special and General Theory of Relativity

    ERIC Educational Resources Information Center

    Kraus, U.

    2008-01-01

    Visualizations that adopt a first-person point of view allow observation and, in the case of interactive simulations, experimentation with relativistic scenes. This paper gives examples of three types of first-person visualizations: watching objects that move at nearly the speed of light, being a high-speed observer looking at a static environment…

  4. Visualizing SPH Cataclysmic Variable Accretion Disk Simulations with Blender

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.; Wood, Matthew A.

    2015-01-01

We present innovative ways to use Blender, a 3D graphics package, to visualize smoothed particle hydrodynamics particle data of cataclysmic variable accretion disks. We focus on the methods of shape key data constructs to increase data I/O and manipulation speed. The implementation of the methods outlined allows for compositing of the various visualization layers into a final animation. Viewing the disk in 3D from different angles allows for a visual analysis of the physical system and orbits. The techniques have a wide-ranging set of applications in astronomical visualization, including both observational and theoretical data.

  5. An Ideal Observer Analysis of Visual Working Memory

    PubMed Central

    Sims, Chris R.; Jacobs, Robert A.; Knill, David C.

    2013-01-01

    Limits in visual working memory (VWM) strongly constrain human performance across many tasks. However, the nature of these limits is not well understood. In this paper we develop an ideal observer analysis of human visual working memory, by deriving the expected behavior of an optimally performing, but limited-capacity memory system. This analysis is framed around rate–distortion theory, a branch of information theory that provides optimal bounds on the accuracy of information transmission subject to a fixed information capacity. The result of the ideal observer analysis is a theoretical framework that provides a task-independent and quantitative definition of visual memory capacity and yields novel predictions regarding human performance. These predictions are subsequently evaluated and confirmed in two empirical studies. Further, the framework is general enough to allow the specification and testing of alternative models of visual memory (for example, how capacity is distributed across multiple items). We demonstrate that a simple model developed on the basis of the ideal observer analysis—one which allows variability in the number of stored memory representations, but does not assume the presence of a fixed item limit—provides an excellent account of the empirical data, and further offers a principled re-interpretation of existing models of visual working memory. PMID:22946744
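The rate-distortion framing above can be made concrete with the textbook Gaussian case, where the minimum achievable mean-squared error at rate R bits is D(R) = σ²·2^(−2R). The snippet below is an illustrative sketch, not the authors' model; it shows the qualitative set-size prediction that follows from dividing a fixed capacity across items:

```python
def gaussian_distortion_bound(sigma2, rate_bits):
    """Minimum achievable mean-squared error for a Gaussian source with
    variance sigma2 encoded at rate_bits bits per item:
    D(R) = sigma^2 * 2**(-2R) (standard rate-distortion result)."""
    return sigma2 * 2.0 ** (-2.0 * rate_bits)

# A fixed capacity spread over n items gives each item C/n bits, so the
# per-item error bound grows with set size -- the kind of task-independent
# capacity prediction the ideal-observer analysis evaluates.
capacity_bits = 8.0
for n in (1, 2, 4, 8):
    d = gaussian_distortion_bound(1.0, capacity_bits / n)
    print(f"set size {n}: per-item error bound {d:.4f}")
```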

  6. A medaka model of cancer allowing direct observation of transplanted tumor cells in vivo at a cellular-level resolution.

    PubMed

    Hasegawa, Sumitaka; Maruyama, Kouichi; Takenaka, Hikaru; Furukawa, Takako; Saga, Tsuneo

    2009-08-18

    The recent success with small fish as an animal model of cancer with the aid of fluorescence technique has attracted cancer modelers' attention because it would be possible to directly visualize tumor cells in vivo in real time. Here, we report a medaka model capable of allowing the observation of various cell behaviors of transplanted tumor cells, such as cell proliferation and metastasis, which were visualized easily in vivo. We established medaka melanoma (MM) cells stably expressing GFP and transplanted them into nonirradiated and irradiated medaka. The tumor cells were grown at the injection sites in medaka, and the spatiotemporal changes were visualized under a fluorescence stereoscopic microscope at a cellular-level resolution, and even at a single-cell level. Tumor dormancy and metastasis were also observed. Interestingly, in irradiated medaka, accelerated tumor growth and metastasis of the transplanted tumor cells were directly visualized. Our medaka model provides an opportunity to visualize in vivo tumor cells "as seen in a culture dish" and would be useful for in vivo tumor cell biology.

  7. A Unified Air-Sea Visualization System: Survey on Gridding Structures

    NASA Technical Reports Server (NTRS)

    Anand, Harsh; Moorhead, Robert

    1995-01-01

The goal is to develop a Unified Air-Sea Visualization System (UASVS) to enable the rapid fusion of observational, archival, and model data for verification and analysis. To design and develop UASVS, modelers were polled to determine the gridding structures and visualization systems used, and their needs with respect to visual analysis. A basic UASVS requirement is to allow a modeler to explore multiple data sets within a single environment, or to interpolate multiple datasets onto one unified grid. From this survey, the UASVS should be able to visualize 3D scalar/vector fields; render isosurfaces; visualize arbitrary slices of the 3D data; visualize data defined on spectral element grids with the minimum number of interpolation stages; render contours; produce 3D vector plots and streamlines; provide unified visualization of satellite images, observations and model output overlays; display the visualization on a projection of the user's choice; implement functions so the user can derive diagnostic values; animate the data to see the time-evolution; animate ocean and atmosphere at different rates; store the record of cursor movement, smooth the path, and animate a window around the moving path; repeatedly start and stop the visual time-stepping; generate VHS tape animations; work on a variety of workstations; and allow visualization across clusters of workstations and scalable high performance computer systems.

  8. Fingerprint detection

    DOEpatents

    Saunders, George C.

    1992-01-01

    A method for detection and visualization of latent fingerprints is provided and includes contacting a substrate containing a latent print thereon with a colloidal metal composition for time sufficient to allow reaction of said colloidal metal composition with said latent print, and preserving or recording the observable print. Further, the method for detection and visualization of latent fingerprints can include contacting the metal composition-latent print reaction product with a secondary metal-containing solution for time sufficient to allow precipitation of said secondary metal thereby enhancing the visibility of the latent print, and preserving or recording the observable print.

  9. VirGO: A Visual Browser for the ESO Science Archive Facility

    NASA Astrophysics Data System (ADS)

    Chéreau, Fabien

    2012-04-01

    VirGO is the next generation Visual Browser for the ESO Science Archive Facility developed by the Virtual Observatory (VO) Systems Department. It is a plug-in for the popular open source software Stellarium adding capabilities for browsing professional astronomical data. VirGO gives astronomers the possibility to easily discover and select data from millions of observations in a new visual and intuitive way. Its main feature is to perform real-time access and graphical display of a large number of observations by showing instrumental footprints and image previews, and to allow their selection and filtering for subsequent download from the ESO SAF web interface. It also allows the loading of external FITS files or VOTables, the superimposition of Digitized Sky Survey (DSS) background images, and the visualization of the sky in a `real life' mode as seen from the main ESO sites. All data interfaces are based on Virtual Observatory standards which allow access to images and spectra from external data centers, and interaction with the ESO SAF web interface or any other VO applications supporting the PLASTIC messaging system.

10. Software applications to three-dimensional visualization of forest landscapes -- A case study demonstrating the use of visual nature studio (VNS) in visualizing fire spread in forest landscapes

    Treesearch

    Brian J. Williams; Bo Song; Chou Chiao-Ying; Thomas M. Williams; John Hom

    2010-01-01

Three-dimensional (3D) visualization is a useful tool that depicts virtual forest landscapes on a computer. Previous studies in visualization have required high-end computer hardware and specialized technical skills. A virtual forest landscape can be used to show different effects of disturbances and management scenarios on a computer, which allows observation of forest...

  11. The Effect of Multispectral Image Fusion Enhancement on Human Efficiency

    DTIC Science & Technology

    2017-03-20

human visual system by applying a technique commonly used in visual perception research: ideal observer analysis. Using this approach, we establish...applications, analytic techniques, and procedural methods used across studies. This paper uses ideal observer analysis to establish a framework that allows...augmented similarly to incorporate research involving more complex stimulus content. Additionally, the ideal observer can be adapted for a number of

  12. Video Allows Young Scientists New Ways to Be Seen

    ERIC Educational Resources Information Center

    Park, John C.

    2009-01-01

    Science is frequently a visual endeavor, dependent on direct or indirect observations. Teachers have long employed motion pictures in the science classroom to allow students to make indirect observations, but the capabilities of digital video offer opportunities to engage students in active science learning. Not only can watching a digital video…

  13. VirGO: A Visual Browser for the ESO Science Archive Facility

    NASA Astrophysics Data System (ADS)

    Chéreau, F.

    2008-08-01

    VirGO is the next generation Visual Browser for the ESO Science Archive Facility developed by the Virtual Observatory (VO) Systems Department. It is a plug-in for the popular open source software Stellarium adding capabilities for browsing professional astronomical data. VirGO gives astronomers the possibility to easily discover and select data from millions of observations in a new visual and intuitive way. Its main feature is to perform real-time access and graphical display of a large number of observations by showing instrumental footprints and image previews, and to allow their selection and filtering for subsequent download from the ESO SAF web interface. It also allows the loading of external FITS files or VOTables, the superimposition of Digitized Sky Survey (DSS) background images, and the visualization of the sky in a `real life' mode as seen from the main ESO sites. All data interfaces are based on Virtual Observatory standards which allow access to images and spectra from external data centers, and interaction with the ESO SAF web interface or any other VO applications supporting the PLASTIC messaging system. The main website for VirGO is at http://archive.eso.org/cms/virgo.

  14. The use of VIEWIT and perspective plot to assist in determining the landscape's visual absorption capability

    Treesearch

    Wayne Tlusty

    1979-01-01

The concept of Visual Absorption Capability (VAC) is widely used by Forest Service Landscape Architects. The use of computer-generated graphics can aid in combining the number of times an area is seen, distance from the observer, and land aspect relative to the viewer to determine visual magnitude. Perspective Plot allows both fast and inexpensive graphic analysis of VAC allocations, for...

  15. VirGO: A Visual Browser for the ESO Science Archive Facility

    NASA Astrophysics Data System (ADS)

    Hatziminaoglou, Evanthia; Chéreau, Fabien

    2009-03-01

    VirGO is the next generation Visual Browser for the ESO Science Archive Facility (SAF) developed in the Virtual Observatory Project Office. VirGO enables astronomers to discover and select data easily from millions of observations in a visual and intuitive way. It allows real-time access and the graphical display of a large number of observations by showing instrumental footprints and image previews, as well as their selection and filtering for subsequent download from the ESO SAF web interface. It also permits the loading of external FITS files or VOTables, as well as the superposition of Digitized Sky Survey images to be used as background. All data interfaces are based on Virtual Observatory (VO) standards that allow access to images and spectra from external data centres, and interaction with the ESO SAF web interface or any other VO applications.

  16. Exclusively visual analysis of classroom group interactions

    NASA Astrophysics Data System (ADS)

    Tucker, Laura; Scherr, Rachel E.; Zickler, Todd; Mazur, Eric

    2016-12-01

    Large-scale audiovisual data that measure group learning are time consuming to collect and analyze. As an initial step towards scaling qualitative classroom observation, we qualitatively coded classroom video using an established coding scheme with and without its audio cues. We find that interrater reliability is as high when using visual data only—without audio—as when using both visual and audio data to code. Also, interrater reliability is high when comparing use of visual and audio data to visual-only data. We see a small bias to code interactions as group discussion when visual and audio data are used compared with video-only data. This work establishes that meaningful educational observation can be made through visual information alone. Further, it suggests that after initial work to create a coding scheme and validate it in each environment, computer-automated visual coding could drastically increase the breadth of qualitative studies and allow for meaningful educational analysis on a far greater scale.

  17. An ideal observer analysis of visual working memory.

    PubMed

    Sims, Chris R; Jacobs, Robert A; Knill, David C

    2012-10-01

    Limits in visual working memory (VWM) strongly constrain human performance across many tasks. However, the nature of these limits is not well understood. In this article we develop an ideal observer analysis of human VWM by deriving the expected behavior of an optimally performing but limited-capacity memory system. This analysis is framed around rate-distortion theory, a branch of information theory that provides optimal bounds on the accuracy of information transmission subject to a fixed information capacity. The result of the ideal observer analysis is a theoretical framework that provides a task-independent and quantitative definition of visual memory capacity and yields novel predictions regarding human performance. These predictions are subsequently evaluated and confirmed in 2 empirical studies. Further, the framework is general enough to allow the specification and testing of alternative models of visual memory (e.g., how capacity is distributed across multiple items). We demonstrate that a simple model developed on the basis of the ideal observer analysis-one that allows variability in the number of stored memory representations but does not assume the presence of a fixed item limit-provides an excellent account of the empirical data and further offers a principled reinterpretation of existing models of VWM. PsycINFO Database Record (c) 2012 APA, all rights reserved.

  18. Bronchial intubation could be detected by the visual stethoscope techniques in pediatric patients.

    PubMed

    Kimura, Tetsuro; Suzuki, Akira; Mimuro, Soichiro; Makino, Hiroshi; Sato, Shigehito

    2012-12-01

We created a system that allows the visualization of breath sounds (visual stethoscope). We compared the visual stethoscope technique with auscultation for the detection of bronchial intubation in pediatric patients. In the auscultation group, an anesthesiologist advanced the tracheal tube, while another anesthesiologist auscultated bilateral breath sounds to detect the change and/or disappearance of unilateral breath sounds. In the visualization group, the stethoscope was used to detect changes in breath sounds and/or disappearance of unilateral breath sounds. The distance from the edge of the mouth to the carina was measured using a fiberoptic bronchoscope. Forty pediatric patients were enrolled in the study. At the point at which irregular breath sounds were auscultated, the tracheal tube was located at 0.5 ± 0.8 cm on the bronchial side from the carina. When a detectable change of shape of the visualized breath sound was observed, the tracheal tube was located 0.1 ± 1.2 cm on the bronchial side (not significant). At the point at which unilateral breath sounds were auscultated or a unilateral shape of the visualized breath sound was observed, the tracheal tube was 1.5 ± 0.8 or 1.2 ± 1.0 cm on the bronchial side, respectively (not significant). The visual stethoscope allowed the left and the right lung sounds to be displayed simultaneously, and detected changes in breath sounds and unilateral breath sounds as the tracheal tube was advanced. © 2012 Blackwell Publishing Ltd.

  19. Direct Observation of Individual Charges and Their Dynamics on Graphene by Low-Energy Electron Holography.

    PubMed

    Latychevskaia, Tatiana; Wicki, Flavio; Longchamp, Jean-Nicolas; Escher, Conrad; Fink, Hans-Werner

    2016-09-14

    Visualizing individual charges confined to molecules and observing their dynamics with high spatial resolution is a challenge for advancing various fields in science, ranging from mesoscopic physics to electron transfer events in biological molecules. We show here that the high sensitivity of low-energy electrons to local electric fields can be employed to directly visualize individual charged adsorbates and to study their behavior in a quantitative way. This makes electron holography a unique probing tool for directly visualizing charge distributions with a sensitivity of a fraction of an elementary charge. Moreover, spatial resolution in the nanometer range and fast data acquisition inherent to lens-less low-energy electron holography allows for direct visual inspection of charge transfer processes.

  20. Effects of age, gender, and stimulus presentation period on visual short-term memory.

    PubMed

    Kunimi, Mitsunobu

    2016-01-01

    This study focused on age-related changes in visual short-term memory using visual stimuli that did not allow verbal encoding. Experiment 1 examined the effects of age and the length of the stimulus presentation period on visual short-term memory function. Experiment 2 examined the effects of age, gender, and the length of the stimulus presentation period on visual short-term memory function. The worst memory performance and the largest performance difference between the age groups were observed in the shortest stimulus presentation period conditions. The performance difference between the age groups became smaller as the stimulus presentation period became longer; however, it did not completely disappear. Although gender did not have a significant effect on d' regardless of the presentation period in the young group, a significant gender-based difference was observed for stimulus presentation periods of 500 ms and 1,000 ms in the older group. This study indicates that the decline in visual short-term memory observed in the older group is due to the interaction of several factors.
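The sensitivity measure d' reported in this study is the standard signal-detection index d' = z(hit rate) − z(false-alarm rate). A minimal computation, using only the standard library (the example rates are made up):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity: d' = z(H) - z(FA),
    where z is the inverse standard-normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical observer: 85% hits, 20% false alarms.
print(round(d_prime(0.85, 0.20), 3))
```

Because d' separates sensitivity from response bias, it lets age- or gender-related differences in memory be compared without being confounded by differences in willingness to respond "seen."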

  1. High Performance Real-Time Visualization of Voluminous Scientific Data Through the NOAA Earth Information System (NEIS).

    NASA Astrophysics Data System (ADS)

    Stewart, J.; Hackathorn, E. J.; Joyce, J.; Smith, J. S.

    2014-12-01

Within our community, data volume is rapidly expanding. These data have limited value if one cannot visualize and interact with them in a timely manner. The scientific community needs the ability to dynamically visualize, analyze, and interact with these data along with other environmental data in real time, regardless of physical location or data format. Within the National Oceanic and Atmospheric Administration (NOAA), the Earth System Research Laboratory (ESRL) is actively developing the NOAA Earth Information System (NEIS). Previously, the NEIS team investigated methods of data discovery and interoperability. The recent focus has shifted to high-performance real-time visualization, allowing NEIS to bring massive amounts of 4-D data, including output from weather forecast models as well as data from different observations (surface observations, upper air, etc.), together in one place. Our server-side architecture provides a real-time stream-processing system that uses server-based NVIDIA graphics processing units (GPUs) for data processing, wavelet-based compression, and other preparation techniques for visualization, allowing NEIS to minimize the bandwidth and latency of data delivery to end users. Client-side, users interact with NEIS services through the visualization application developed at ESRL called TerraViz. TerraViz is developed using the Unity game engine and takes advantage of the GPU, allowing a user to interact with large data sets in real time in ways that might not have been possible before. Through these technologies, the NEIS team has improved accessibility to 'Big Data' and provided tools for novel visualization and seamless integration of data across time and space, regardless of data size, physical location, or data format. These capabilities provide the ability to see global interactions and their importance for weather prediction. Additionally, they allow greater access than currently exists, helping to foster scientific collaboration and new ideas. This presentation will provide an update on recent enhancements of the NEIS architecture and visualization capabilities, challenges faced, as well as ongoing research activities related to this project.
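The wavelet-based compression step mentioned in the abstract can be sketched with a one-level Haar transform that keeps only the largest-magnitude coefficients; NEIS's actual GPU pipeline is not described here, so this is purely illustrative:

```python
import numpy as np

def haar_compress(signal, keep_fraction=0.25):
    """One-level Haar transform, zero all but the largest-magnitude
    coefficients, then reconstruct. Sketch of lossy wavelet compression;
    smooth fields survive aggressive truncation well."""
    a = (signal[0::2] + signal[1::2]) / np.sqrt(2)   # averages
    d = (signal[0::2] - signal[1::2]) / np.sqrt(2)   # details
    coeffs = np.concatenate([a, d])
    k = max(1, int(keep_fraction * coeffs.size))
    thresh = np.sort(np.abs(coeffs))[-k]             # k-th largest magnitude
    coeffs[np.abs(coeffs) < thresh] = 0.0            # drop small coefficients
    a, d = coeffs[:a.size], coeffs[a.size:]
    out = np.empty_like(signal, dtype=float)         # inverse transform
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out

x = np.linspace(0, 1, 16) ** 2                       # a smooth field
err = np.max(np.abs(haar_compress(x) - x))
print(f"max reconstruction error: {err:.4f}")
```

Production systems apply multiple transform levels and entropy-code the surviving coefficients, but the bandwidth saving comes from the same idea: most of a smooth field's energy sits in a few coefficients.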

  2. Simulation Exploration through Immersive Parallel Planes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunhart-Lupo, Nicholas J; Bush, Brian W; Gruchalla, Kenny M

We present a visualization-driven simulation system that tightly couples systems-dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates, which map the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest; a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selections, and filter time. The brushing and selection actions are used both to explore existing data and to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.
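The mapping described, each observation becoming a polyline over the axes with brushing for selection, can be sketched in a few lines. This is plain 2-D parallel coordinates, not the immersive paired-plane variant, and all data below are made up:

```python
def polyline(obs, lo, hi):
    """Vertices (axis_index, normalized value) for one observation;
    lo/hi give each axis's data range for normalization to [0, 1]."""
    return [(i, (v - l) / (h - l))
            for i, (v, l, h) in enumerate(zip(obs, lo, hi))]

def brush(data, axis, vmin, vmax):
    """Select observations whose value on `axis` lies in [vmin, vmax],
    i.e. whose polylines pass through the brushed interval."""
    return [obs for obs in data if vmin <= obs[axis] <= vmax]

data = [(1.0, 20.0, 0.3), (2.5, 35.0, 0.9), (4.0, 10.0, 0.5)]
lo, hi = (0.0, 0.0, 0.0), (5.0, 50.0, 1.0)
print(polyline(data[0], lo, hi))   # [(0, 0.2), (1, 0.4), (2, 0.3)]
print(brush(data, axis=1, vmin=15.0, vmax=40.0))
```

In the system described, the brushed region also seeds new simulation runs; here the selection result would simply be handed to whatever launches them.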

  3. Simulation Exploration through Immersive Parallel Planes: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunhart-Lupo, Nicholas; Bush, Brian W.; Gruchalla, Kenny

We present a visualization-driven simulation system that tightly couples systems-dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates, which map the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest; a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selections, and filter time. The brushing and selection actions are used both to explore existing data and to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.

  4. Inter- and intra-observer agreement of BI-RADS-based subjective visual estimation of amount of fibroglandular breast tissue with magnetic resonance imaging: comparison to automated quantitative assessment.

    PubMed

    Wengert, G J; Helbich, T H; Woitek, R; Kapetas, P; Clauser, P; Baltzer, P A; Vogl, W-D; Weber, M; Meyer-Baese, A; Pinker, Katja

    2016-11-01

    To evaluate the inter-/intra-observer agreement of BI-RADS-based subjective visual estimation of the amount of fibroglandular tissue (FGT) with magnetic resonance imaging (MRI), and to investigate whether FGT assessment benefits from an automated, observer-independent, quantitative MRI measurement by comparing both approaches. Eighty women with no imaging abnormalities (BI-RADS 1 and 2) were included in this institutional review board (IRB)-approved prospective study. All women underwent un-enhanced breast MRI. Four radiologists independently assessed FGT with MRI by subjective visual estimation according to BI-RADS. Automated observer-independent quantitative measurement of FGT with MRI was performed using a previously described measurement system. Inter-/intra-observer agreements of qualitative and quantitative FGT measurements were assessed using Cohen's kappa (k). Inexperienced readers achieved moderate inter-/intra-observer agreement and experienced readers a substantial inter- and perfect intra-observer agreement for subjective visual estimation of FGT. Practice and experience reduced observer-dependency. Automated observer-independent quantitative measurement of FGT was successfully performed and revealed only fair to moderate agreement (k = 0.209-0.497) with subjective visual estimations of FGT. Subjective visual estimation of FGT with MRI shows moderate intra-/inter-observer agreement, which can be improved by practice and experience. Automated observer-independent quantitative measurements of FGT are necessary to allow a standardized risk evaluation. • Subjective FGT estimation with MRI shows moderate intra-/inter-observer agreement in inexperienced readers. • Inter-observer agreement can be improved by practice and experience. • Automated observer-independent quantitative measurements can provide reliable and standardized assessment of FGT with MRI.
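Cohen's kappa, used here for inter-/intra-observer agreement, compares observed agreement p_o with the agreement expected by chance p_e: κ = (p_o − p_e) / (1 − p_e). A minimal sketch with made-up category ratings (not data from this study):

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters over the same items:
    kappa = (p_o - p_e) / (1 - p_e)."""
    n = len(r1)
    p_o = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    p_e = sum(c1[k] * c2[k] for k in c1) / (n * n)         # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical readers assigning BI-RADS-style categories:
reader1 = ["a", "a", "b", "b", "c", "c", "c", "d"]
reader2 = ["a", "a", "b", "c", "c", "c", "b", "d"]
print(round(cohens_kappa(reader1, reader2), 3))
```

Values near 0.4-0.6 are conventionally read as "moderate" and 0.6-0.8 as "substantial" agreement, the bands the abstract's qualitative labels refer to.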

  5. Interactive Visualization of Large-Scale Hydrological Data using Emerging Technologies in Web Systems and Parallel Programming

    NASA Astrophysics Data System (ADS)

    Demir, I.; Krajewski, W. F.

    2013-12-01

As geoscientists are confronted with increasingly massive datasets, from environmental observations to simulations, one of the biggest challenges is having the right tools to gain scientific insight from the data and communicate the understanding to stakeholders. Recent developments in web technologies make it easy to manage, visualize and share large data sets with the general public. Novel visualization techniques and dynamic user interfaces allow users to interact with data, and to modify parameters to create custom views of the data to gain insight from simulations and environmental observations. This requires developing new data models and intelligent knowledge discovery techniques to explore and extract information from complex computational simulations or large data repositories. Scientific visualization will be an increasingly important component in building comprehensive environmental information platforms. This presentation provides an overview of the trends and challenges in the field of scientific visualization, and demonstrates information visualization and communication tools developed in light of these challenges.

  6. Promoting Visualization Skills through Deconstruction Using Physical Models and a Visualization Activity Intervention

    NASA Astrophysics Data System (ADS)

    Schiltz, Holly Kristine

    Visualization skills are important in learning chemistry, as these skills have been shown to correlate with high ability in problem solving. Students' understanding of visual information and their problem-solving processes may only ever be accessed indirectly: through verbalization, gestures, drawings, etc. In this research, deconstruction of complex visual concepts was aligned with the promotion of students' verbalization of visualized ideas to teach students to solve complex visual tasks independently. All instructional tools and teaching methods were developed in accordance with the principles of the theoretical framework, the Modeling Theory of Learning: deconstruction of visual representations into model components, comparisons to reality, and recognition of students' own problem-solving strategies. Three physical model systems were designed to provide students with visual and tangible representations of chemical concepts. The Permanent Reflection Plane Demonstration provided visual indicators that students used to support or invalidate the presence of a reflection plane. The 3-D Coordinate Axis system provided an environment that allowed students to visualize and physically enact symmetry operations in a relevant molecular context. The Proper Rotation Axis system was designed to provide a physical and visual frame of reference to showcase multiple symmetry elements that students must identify in a molecular model. In focus groups, students taking inorganic chemistry who worked with the physical model systems had difficulty documenting and verbalizing processes and descriptions of visual concepts. Frequently asked student questions were classified, but students also interacted with visual information through gestures and model manipulations. 
In an effort to characterize how much students used visualization during lecture or recitation, we developed observation rubrics to gather information about students' visualization artifacts and examined the effect that instructors' modeled visualization artifacts had on students. No patterns emerged from the passive observation of visualization artifacts in lecture or recitation, but the need to elicit visual information from students was made clear. Deconstruction proved to be a valuable method for instruction and assessment of visual information. Three strategies for using deconstruction in teaching were distilled from the lessons and observations of the student focus groups: begin with observations of what is given in an image and what it is composed of; identify the relationships between components to find additional operations in different environments about the molecule; and deconstruct the steps of challenging questions to reveal mistakes. An intervention was developed to teach students to use deconstruction and verbalization to analyze complex visualization tasks and employ the principles of the theoretical framework. The activities were scaffolded to introduce increasingly challenging concepts to students, but also to support them as they learned visually demanding chemistry concepts. Several themes were observed in the analysis of the visualization activities. Students used deconstruction by documenting which parts of the images were useful for interpretation of the visual. Students identified valid patterns and rules within the images, which signified understanding of the arrangement of information presented in the representation. Successful strategy communication was identified when students documented personal strategies that allowed them to complete the activity tasks. Finally, students demonstrated the ability to extend symmetry skills to advanced applications they had not previously seen. 
This work shows how deconstruction and verbalization may have a great impact on how students master difficult topics; combined, they offer students a powerful strategy for approaching visually demanding chemistry problems, and offer the instructor unique insight into students' mentally constructed strategies.

  7. A visual tristimulus projection colorimeter.

    PubMed

    Valberg, A

    1971-01-01

    Based on the optical principle of a slide projector, a visual tristimulus projection colorimeter has been developed. The colorimeter operates with easily interchangeable sets of primary color filters placed in a frame at the objective. The apparatus has proved to be fairly accurate: the reproducibility of the color matches, as measured by the standard deviation, is equal to the visual sensitivity to color differences for each observer. Examples of deviations in the matches among individuals, as well as deviations compared with the CIE 1931 Standard Observer, are given. These deviations are demonstrated to be solely due to individual differences in the perception of metameric colors. Thus, taking advantage of objective observation (allowing all adjustments to be judged by a group of impartial observers), the colorimeter provides an excellent aid in the study of discrimination, metamerism, and related effects which are of considerable interest in current research in colorimetry and in the study of color vision tests.

  8. An airborne system for vortex flow visualization on the F-18 high-alpha research vehicle

    NASA Technical Reports Server (NTRS)

    Curry, Robert E.; Richwine, David M.

    1988-01-01

    A flow visualization system for the F-18 high-alpha research vehicle is described which allows direct observation of the separated vortex flows over a wide range of flight conditions. The system consists of a smoke generator system, on-board photographic and video systems, and instrumentation. In the present concept, smoke is entrained into the low-pressure vortex core, and vortex breakdown is indicated by a rapid diffusion of the smoke. The resulting pattern is observed using photographic and video images and is correlated with measured flight conditions.

  9. Science Opportunity Analyzer (SOA) Version 8

    NASA Technical Reports Server (NTRS)

    Witoff, Robert J.; Polanskey, Carol A.; Aguinaldo, Anna Marie A.; Liu, Ning; Hofstadter, Mark D.

    2013-01-01

    SOA allows scientists to plan spacecraft observations. It facilitates the identification of geometrically interesting times in a spacecraft's orbit that a user can use to plan observations or instrument-driven spacecraft maneuvers. These observations can then be visualized in multiple ways in both two- and three-dimensional views. When observations have been optimized within a spacecraft's flight rules, the resulting plans can be output for use by other JPL uplink tools. Now in its eighth major version, SOA improves on these capabilities in a modern and integrated fashion. SOA consists of five major functions: Opportunity Search, Visualization, Observation Design, Constraint Checking, and Data Output. Opportunity Search is a GUI-driven interface to existing search engines that can be used to identify times when a spacecraft is in a specific geometrical relationship with other bodies in the solar system. This function can be used for advanced mission planning as well as for making last-minute adjustments to mission sequences in response to trajectory modifications. Visualization is a key aspect of SOA. The user can view observation opportunities in either a 3D representation or a 2D map projection. Observation Design allows the user to orient the spacecraft and visualize the projection of the instrument field of view for that orientation, using the same views as Opportunity Search. Constraint Checking is provided to validate various geometrical and physical aspects of an observation design. The user has the ability to easily create custom rules or to use official project-generated flight rules. This capability may also allow scientists to easily assess the cost to science if flight rule changes occur. Data Output allows the user to compute ancillary data related to an observation or to a given position of the spacecraft along its trajectory. The data can be saved as a tab-delimited text file or viewed as a graph. 
SOA combines science planning functionality unique to both JPL and the sponsoring spacecraft. SOA is able to ingest JPL SPICE kernels that are used to drive the tool and its computations. A Percy search engine is included that identifies interesting time periods from which the user can build observations. When observations are built, flight-like orientation algorithms replicate spacecraft dynamics to closely simulate those of the flight spacecraft. SOA v8 represents a large step forward from SOA v7 in terms of quality, reliability, maintainability, efficiency, and user experience. A tailored agile development environment has been built around SOA that provides automated unit testing, continuous build and integration, a consolidated Web-based code and documentation storage environment, modern Java enhancements, and a focus on usability.

  10. Mars @ ASDC

    NASA Astrophysics Data System (ADS)

    Carraro, Francesco

    "Mars @ ASDC" is a project born with the goal of using new web technologies to assist researchers involved in the study of Mars. The project employs the Mars map and JavaScript APIs provided by Google to visualize data acquired by space missions at the planet. So far, visualization of tracks acquired by MARSIS and regions observed by VIRTIS-Rosetta has been implemented. The main reason for the creation of this kind of tool is the difficulty of handling hundreds or thousands of acquisitions, like the ones from MARSIS, and the consequent difficulty of finding observations related to a particular region. This led to the development of a tool which allows users to search for acquisitions either by defining the region of interest through a set of geometrical parameters or by manually selecting the region on the map with a few mouse clicks. The system allows the visualization of tracks (acquired by MARSIS) or regions (acquired by VIRTIS-Rosetta) which intersect the user-defined region. MARSIS tracks can be visualized in both Mercator and polar projections, while the regions observed by VIRTIS can presently be visualized only in Mercator projection. The Mercator projection is the standard map provided by Google; the polar projections are provided by NASA and have been developed to be used in combination with the Google APIs. The whole project has been developed following the "open source" philosophy: the client-side code which handles the functioning of the web page is written in JavaScript, the server-side code which executes the searches for tracks or regions is written in PHP, and the database underlying the system is MySQL.
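At its core, the region search described above reduces to an intersection test between a user-defined region and each acquisition's footprint. A hedged Python sketch of that test on axis-aligned bounding boxes (the production system uses PHP/MySQL; the track names and coordinates below are invented for illustration, and longitude wrap-around is ignored):

```python
# Toy sketch of a footprint search: find tracks whose bounding box
# intersects a user-defined region. Boxes are (lon_min, lat_min,
# lon_max, lat_max); antimeridian wrap-around is not handled.
def bbox_intersects(a, b):
    """Axis-aligned bounding-box intersection test."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

# Hypothetical MARSIS track footprints (illustrative values only).
tracks = {
    "orbit_0123": (10.0, -5.0, 12.5, 20.0),
    "orbit_0456": (-40.0, 30.0, -35.0, 55.0),
}

def search(region):
    """Return the ids of all tracks intersecting the query region."""
    return sorted(t for t, bb in tracks.items() if bbox_intersects(bb, region))

hits = search((8.0, 0.0, 11.0, 10.0))  # region drawn by the user
```

In practice such a test is usually pushed into the database (e.g. as a spatial index query) so that only candidate tracks are fetched.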

  11. An experimental facility for the visual study of turbulent flows.

    NASA Technical Reports Server (NTRS)

    Brodkey, R. S.; Hershey, H. C.; Corino, E. R.

    1971-01-01

    An experimental technique which allows visual observations of the wall area in turbulent pipe flow is described in detail. It requires neither the introduction of any injection or measuring device into the flow nor the presence of a two-phase flow or of a non-Newtonian fluid. The technique involves suspending solid MgO particles of colloidal size in trichloroethylene and photographing their motions near the wall with a high speed movie camera moving with the flow. Trichloroethylene was chosen in order to eliminate the index of refraction problem in a curved wall. Evaluation of the technique including a discussion of limitations is included. Also the technique is compared with previous methods of visual observations of turbulent flow.

  12. Metadata Mapper: a web service for mapping data between independent visual analysis components, guided by perceptual rules

    NASA Astrophysics Data System (ADS)

    Rogowitz, Bernice E.; Matasci, Naim

    2011-03-01

    The explosion of online scientific data from experiments, simulations, and observations has given rise to an avalanche of algorithmic, visualization and imaging methods. There has also been enormous growth in the introduction of tools that provide interactive interfaces for exploring these data dynamically. Most systems, however, do not support the real-time exploration of patterns and relationships across tools and do not provide guidance on which colors, colormaps or visual metaphors will be most effective. In this paper, we introduce a general architecture for sharing metadata between applications and a "Metadata Mapper" component that allows the analyst to decide how metadata from one component should be represented in another, guided by perceptual rules. This system is designed to support "brushing [1]," in which highlighting a region of interest in one application automatically highlights corresponding values in another, allowing the scientist to develop insights from multiple sources. Our work builds on the component-based iPlant Cyberinfrastructure [2] and provides a general approach to supporting interactive exploration across independent visualization and visual analysis components.
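The brushing pattern described above can be sketched as a broker that broadcasts a selection to registered components, each of which maps the shared metadata (here simply record indices) into its own view. This is an illustrative reduction, not the paper's API; the broker and listener names are hypothetical:

```python
# Minimal sketch of linked brushing: a selection made in one component
# is broadcast to all registered listeners, which update their own
# highlight state. Names here are hypothetical, not the paper's API.
class SelectionBroker:
    def __init__(self):
        self.listeners = []

    def register(self, callback):
        """Register a component's highlight callback."""
        self.listeners.append(callback)

    def brush(self, selected_ids):
        """Broadcast a set of selected record ids to every component."""
        for callback in self.listeners:
            callback(selected_ids)

highlighted = {"scatter": set(), "table": set()}
broker = SelectionBroker()
broker.register(lambda ids: highlighted["scatter"].update(ids))
broker.register(lambda ids: highlighted["table"].update(ids))

broker.brush({3, 7})  # user highlights records 3 and 7 in any one view
```

The perceptual-rule layer of the Metadata Mapper would sit between the broker and each callback, deciding how the shared selection is rendered (color, colormap, metaphor) in each component.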

  13. DNA Data Visualization (DDV): Software for Generating Web-Based Interfaces Supporting Navigation and Analysis of DNA Sequence Data of Entire Genomes.

    PubMed

    Neugebauer, Tomasz; Bordeleau, Eric; Burrus, Vincent; Brzezinski, Ryszard

    2015-01-01

    Data visualization methods are necessary during the exploration and analysis activities of an increasingly data-intensive scientific process. There are few existing visualization methods for raw nucleotide sequences of a whole genome or chromosome. Software for data visualization should allow the researchers to create accessible data visualization interfaces that can be exported and shared with others on the web. Herein, novel software developed for generating DNA data visualization interfaces is described. The software converts DNA data sets into images that are further processed as multi-scale images to be accessed through a web-based interface that supports zooming, panning and sequence fragment selection. Nucleotide composition frequencies and GC skew of a selected sequence segment can be obtained through the interface. The software was used to generate DNA data visualization of human and bacterial chromosomes. Examples of visually detectable features such as short and long direct repeats, long terminal repeats, mobile genetic elements, heterochromatic segments in microbial and human chromosomes, are presented. The software and its source code are available for download and further development. The visualization interfaces generated with the software allow for the immediate identification and observation of several types of sequence patterns in genomes of various sizes and origins. The visualization interfaces generated with the software are readily accessible through a web browser. This software is a useful research and teaching tool for genetics and structural genomics.
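The per-fragment statistics the interface reports can be sketched as follows (an illustrative re-implementation, not the DDV source; GC skew is taken as (G - C)/(G + C) over the selected fragment):

```python
# Illustrative sketch of two statistics reported for a selected
# sequence fragment: nucleotide composition frequencies and GC skew.
from collections import Counter

def nucleotide_stats(seq):
    """Return ({base: frequency}, gc_skew) for a DNA fragment."""
    seq = seq.upper()
    counts = Counter(seq)
    freqs = {b: counts[b] / len(seq) for b in "ACGT"}
    g, c = counts["G"], counts["C"]
    # GC skew = (G - C) / (G + C); 0.0 if the fragment has no G or C
    skew = (g - c) / (g + c) if (g + c) else 0.0
    return freqs, skew

freqs, skew = nucleotide_stats("ATGGGCATTA")  # toy fragment
```

Plotting the cumulative GC skew along a bacterial chromosome is a standard way to make replication origin and terminus visually detectable, which is the kind of sequence pattern the interfaces expose.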

  14. Improved superficial brain hemorrhage visualization in susceptibility weighted images by constrained minimum intensity projection

    NASA Astrophysics Data System (ADS)

    Castro, Marcelo A.; Pham, Dzung L.; Butman, John

    2016-03-01

    Minimum intensity projection is a technique commonly used to display magnetic resonance susceptibility weighted images, allowing the observer to better visualize hemorrhages and vasculature. The technique displays the minimum intensity in a given projection within a thick slab, allowing different connectivity patterns to be easily revealed. Unfortunately, the low signal intensity of the skull within the thick slab can mask superficial tissues near the skull base and other regions. Because superficial microhemorrhages are a common feature of traumatic brain injury, this effect limits the ability to properly diagnose and follow up patients. In order to overcome this limitation, we developed a method to allow minimum intensity projection to properly display superficial tissues adjacent to the skull. Our approach is based on two brain masks, the largest of which includes extracerebral voxels. The analysis of the rind within both masks containing the actual brain boundary allows reclassification of those voxels initially missed in the smaller mask. Morphological operations are applied to guarantee accuracy and topological correctness, and the mean intensity within the mask is assigned to all outer voxels. This prevents bone from dominating superficial regions in the projection, enabling superior visualization of cortical hemorrhages and vessels.
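The core step — assign the mean in-mask intensity to out-of-mask voxels so that low-intensity bone cannot win the minimum — can be sketched in NumPy on a toy volume (an illustrative reduction of the method, not the authors' code; the mask construction and morphological steps are omitted):

```python
# Toy sketch of a constrained minimum intensity projection: voxels
# outside the brain mask are replaced by the mean in-mask intensity
# before projecting, so dark skull no longer masks superficial tissue.
import numpy as np

def constrained_minip(volume, mask, axis=0):
    vol = volume.astype(float).copy()
    vol[~mask] = volume[mask].mean()  # neutralize out-of-mask (bone) voxels
    return vol.min(axis=axis)         # minimum intensity along the slab axis

vol = np.array([[[5.0, 90.0], [80.0, 70.0]],
                [[85.0, 95.0], [75.0, 2.0]]])      # two-slice toy "slab"
mask = np.array([[[False, True], [True, True]],
                 [[True, True], [True, False]]])   # False = skull/background
mip = constrained_minip(vol, mask, axis=0)
```

Without the mask step, the dark out-of-mask voxels (5.0 and 2.0) would dominate their columns of the projection; with it, the in-mask minima survive.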

  15. Quantifying Ant Activity Using Vibration Measurements

    PubMed Central

    Oberst, Sebastian; Baro, Enrique Nava; Lai, Joseph C. S.; Evans, Theodore A.

    2014-01-01

    Ant behaviour is of great interest due to their sociality. Ant behaviour is typically observed visually, however there are many circumstances where visual observation is not possible. It may be possible to assess ant behaviour using vibration signals produced by their physical movement. We demonstrate through a series of bioassays with different stimuli that the level of activity of meat ants (Iridomyrmex purpureus) can be quantified using vibrations, corresponding to observations with video. We found that ants exposed to physical shaking produced the highest average vibration amplitudes followed by ants with stones to drag, then ants with neighbours, illuminated ants and ants in darkness. In addition, we devised a novel method based on wavelet decomposition to separate the vibration signal owing to the initial ant behaviour from the substrate response, which will allow signals recorded from different substrates to be compared directly. Our results indicate the potential to use vibration signals to classify some ant behaviours in situations where visual observation could be difficult. PMID:24658467
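Quantifying activity from a vibration trace can be sketched as a windowed RMS amplitude (a toy illustration of the amplitude comparison; the paper's wavelet-based separation of the substrate response is not reproduced here):

```python
# Toy sketch: summarize a vibration trace as RMS amplitude over fixed
# windows, so recordings under different stimuli can be compared.
import numpy as np

def windowed_rms(signal, window):
    """RMS amplitude of consecutive non-overlapping windows."""
    n = len(signal) // window * window          # drop the ragged tail
    chunks = np.asarray(signal[:n], dtype=float).reshape(-1, window)
    return np.sqrt((chunks ** 2).mean(axis=1))

quiet = np.zeros(100)                            # simulated inactivity
active = np.concatenate([np.zeros(50), 3.0 * np.ones(50)])  # simulated burst
rms = windowed_rms(np.concatenate([quiet, active]), window=50)
```

Ranking such per-window amplitudes across bioassays is what allows statements like "shaken ants produced the highest average vibration amplitudes, followed by ants with stones to drag".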

  16. Innovative Ways of Visualising Meta Data in 4D Using Open Source Libraries

    NASA Astrophysics Data System (ADS)

    Balhar, Jakub; Valach, Pavel; Veselka, Jonas; Voumard, Yann

    2016-08-01

    There are more and more data being measured by different Earth Observation satellites around the world. The ever increasing amount of these data presents new challenges and opportunities for their visualization. In this paper we propose how to visualize the amount, distribution and structure of the data in a transparent way that takes the time dimension into account as well. Our approach allows us to get a global overview as well as detailed regional information about the distribution of products from EO missions. We focus on introducing our mobile-friendly and easy-to-use web mapping application for 4D visualization of the data. Apart from that, we also present a Java application which can read and process the data from various data sources.

  17. The Role of Prediction In Perception: Evidence From Interrupted Visual Search

    PubMed Central

    Mereu, Stefania; Zacks, Jeffrey M.; Kurby, Christopher A.; Lleras, Alejandro

    2014-01-01

    Recent studies of rapid resumption—an observer’s ability to quickly resume a visual search after an interruption—suggest that predictions underlie visual perception. Previous studies showed that when the search display changes unpredictably after the interruption, rapid resumption disappears. This conclusion is at odds with our everyday experience, where the visual system seems to be quite efficient despite continuous changes of the visual scene; in the real world, however, changes can typically be anticipated based on previous knowledge. The present study aimed to evaluate whether changes to the visual display can be incorporated into the perceptual hypotheses if observers are allowed to anticipate such changes. Results strongly suggest that an interrupted visual search can be rapidly resumed even when information in the display has changed after the interruption, so long as participants can not only anticipate the changes but are also aware that such changes might occur. PMID:24820440

  18. Color-coded visualization of magnetic resonance imaging multiparametric maps

    NASA Astrophysics Data System (ADS)

    Kather, Jakob Nikolas; Weidner, Anja; Attenberger, Ulrike; Bukschat, Yannick; Weis, Cleo-Aron; Weis, Meike; Schad, Lothar R.; Zöllner, Frank Gerrit

    2017-01-01

    Multiparametric magnetic resonance imaging (mpMRI) data are increasingly used in the clinic, e.g., for the diagnosis of prostate cancer. In contrast to conventional MR imaging data, multiparametric data typically include functional measurements such as diffusion and perfusion imaging sequences. Conventionally, these measurements are visualized with a one-dimensional color scale, allowing only one-dimensional information to be encoded. Yet human perception places visual information in a three-dimensional color space. In theory, each dimension of this space can be utilized to encode visual information. We addressed this issue and developed a new method for tri-variate color-coded visualization of mpMRI data sets. We showed the usefulness of our method in a preclinical and in a clinical setting: in imaging data of a rat model of acute kidney injury, the method yielded characteristic visual patterns. In a clinical data set of N = 13 prostate cancer mpMRI scans, we assessed diagnostic performance in a blinded study with N = 5 observers. Compared to conventional radiological evaluation, color-coded visualization was comparable in terms of positive and negative predictive values. Thus, we showed that human observers can successfully make use of the novel method. This method can be broadly applied to visualize different types of multivariate MRI data.

  19. Flow Visualization by Elastic Light Scattering in the Boundary Layer of a Supersonic Flow

    NASA Technical Reports Server (NTRS)

    Herring, G. C.; Hillard, Mervin E., Jr.

    2000-01-01

    We demonstrate instantaneous flow visualization of the boundary layer region of a Mach 2.5 supersonic flow over a flat plate that is interacting with an impinging shock wave. Tests were performed in the Unitary Plan Wind Tunnel (UPWT) at NASA Langley Research Center. The technique is elastic light scattering using 10-nsec laser pulses at 532 nm. We emphasize that no seed material of any kind, including water (H2O), is purposely added to the flow. The scattered light comes from a residual impurity that normally exists in the flow medium after the air drying process. Thus, the technique described here differs from the traditional vapor-screen method, which is typically accomplished by the addition of extra H2O vapor to the airflow. The flow is visualized with a series of thin two-dimensional light sheets (oriented perpendicular to the streamwise direction) that are located at several positions downstream of the leading edge of the model. This geometry allows the direct observation of the unsteady flow structure in the spanwise dimension of the model and also allows the indirect observation of the boundary layer growth in the streamwise dimension.

  20. Novel Web-based Education Platforms for Information Communication utilizing Gamification, Virtual and Immersive Reality

    NASA Astrophysics Data System (ADS)

    Demir, I.

    2015-12-01

    Recent developments in internet technologies make it possible to manage and visualize large data on the web. Novel visualization techniques and interactive user interfaces allow users to create realistic environments and interact with data to gain insight from simulations and environmental observations. This presentation showcases information communication interfaces, games, and virtual and immersive reality applications for supporting teaching and learning of concepts in atmospheric and hydrological sciences. The information communication platforms utilize the latest web technologies and allow accessing and visualizing large-scale data on the web. The simulation system is a web-based 3D interactive learning environment for teaching hydrological and atmospheric processes and concepts. It provides a visually striking platform with realistic terrain, weather information and water simulation, and an environment for students to learn about earth science processes and the effects of development and human activity on the terrain. Users can access the system in three visualization modes: virtual reality, augmented reality, and immersive reality using a heads-up display. The system provides various scenarios customized to fit the age and education level of various users.

  1. Advancing Water Science through Data Visualization

    NASA Astrophysics Data System (ADS)

    Li, X.; Troy, T.

    2014-12-01

    As water scientists, we are increasingly handling larger and larger datasets with many variables, making it easy to lose ourselves in the details. Advanced data visualization will play an increasingly significant role in propelling the development of water science in research, economy, policy and education. It can enable analysis within research, furthering scientists' understanding of behavior and processes, and can potentially affect how the public, whom we often want to inform, understands our work. Unfortunately for water scientists, data visualization is approached in an ad hoc manner, when a more formal methodology or understanding could significantly improve both research within the academy and outreach to the public. First, to broaden and deepen scientific understanding, data visualization allows more targets of analysis to be processed simultaneously and can represent variables effectively, revealing patterns, trends and relationships; it can even open new research directions or branches of water science. Visualization helps us separate pivotal from trivial influencing factors more clearly when abstracting a complex target system. By providing direct visual perception of the differences between observational data and model predictions, data visualization allows researchers to quickly examine the quality of models in water science. Second, data visualization can also improve public awareness and perhaps influence behavior. By offering decision makers clearer perspectives on the potential profits of water, data visualization can amplify the economic value of water science and increase relevant employment. By providing policymakers with compelling visuals of the role of water in social and natural systems, data visualization can advance water management and legislation for water conservation. 
By building their own data visualizations through apps and games about water science, members of the public can absorb knowledge about water indirectly and develop awareness of water problems.

  2. Four types of ensemble coding in data visualizations.

    PubMed

    Szafir, Danielle Albers; Haroz, Steve; Gleicher, Michael; Franconeri, Steven

    2016-01-01

    Ensemble coding supports rapid extraction of visual statistics about distributed visual information. Researchers typically study this ability with the goal of drawing conclusions about how such coding extracts information from natural scenes. Here we argue that a second domain can serve as another strong inspiration for understanding ensemble coding: graphs, maps, and other visual presentations of data. Data visualizations allow observers to leverage their ability to perform visual ensemble statistics on distributions of spatial or featural visual information to estimate actual statistics on data. We survey the types of visual statistical tasks that occur within data visualizations across everyday examples, such as scatterplots, and more specialized images, such as weather maps or depictions of patterns in text. We divide these tasks into four categories: identification of sets of values, summarization across those values, segmentation of collections, and estimation of structure. We point to unanswered questions for each category and give examples of such cross-pollination in the current literature. Increased collaboration between the data visualization and perceptual psychology research communities can inspire new solutions to challenges in visualization while simultaneously exposing unsolved problems in perception research.

  3. Development of Four Dimensional Human Model that Enables Deformation of Skin, Organs and Blood Vessel System During Body Movement - Visualizing Movements of the Musculoskeletal System.

    PubMed

    Suzuki, Naoki; Hattori, Asaki; Hashizume, Makoto

    2016-01-01

    We constructed a four dimensional human model that is able to visualize the structure of a whole human body, including the inner structures, in real-time to allow us to analyze human dynamic changes in the temporal, spatial and quantitative domains. To verify whether our model was generating changes according to real human body dynamics, we measured a participant's skin expansion and compared it to that of the model conducted under the same body movement. We also made a contribution to the field of orthopedics, as we were able to devise a display method that enables the observer to more easily observe the changes made in the complex skeletal muscle system during body movements, which in the past were difficult to visualize.

  4. JackIn Head: Immersive Visual Telepresence System with Omnidirectional Wearable Camera.

    PubMed

    Kasahara, Shunichi; Nagai, Shohei; Rekimoto, Jun

    2017-03-01

    Sharing one's own immersive experience over the Internet is one of the ultimate goals of telepresence technology. In this paper, we present JackIn Head, a visual telepresence system featuring an omnidirectional wearable camera with image motion stabilization. Spherical omnidirectional video footage taken around the head of a local user is stabilized and then broadcast to others, allowing remote users to explore the immersive visual environment independently of the local user's head direction. We describe the system design of JackIn Head and report the evaluation results of real-time image stabilization and alleviation of cybersickness. Then, through an exploratory observation study, we investigate how individuals can remotely interact, communicate with, and assist each other with our system. We report our observation and analysis of inter-personal communication, demonstrating the effectiveness of our system in augmenting remote collaboration.

  5. Storage of features, conjunctions and objects in visual working memory.

    PubMed

    Vogel, E K; Woodman, G F; Luck, S J

    2001-02-01

    Working memory can be divided into separate subsystems for verbal and visual information. Although the verbal system has been well characterized, the storage capacity of visual working memory has not yet been established for simple features or for conjunctions of features. The authors demonstrate that it is possible to retain information about only 3-4 colors or orientations in visual working memory at one time. Observers are also able to retain both the color and the orientation of 3-4 objects, indicating that visual working memory stores integrated objects rather than individual features. Indeed, objects defined by a conjunction of four features can be retained in working memory just as well as single-feature objects, allowing many individual features to be retained when distributed across a small number of objects. Thus, the capacity of visual working memory must be understood in terms of integrated objects rather than individual features.

  6. Visualization of ocean forecast in BYTHOS

    NASA Astrophysics Data System (ADS)

    Zhuk, E.; Zodiatis, G.; Nikolaidis, A.; Stylianou, S.; Karaolia, A.

    2016-08-01

    The Cyprus Oceanography Center has constantly searched for new ideas for developing and implementing innovative methods in the use of information systems in oceanography, to suit both the Center's monitoring and forecasting products. Within this scope, two major online data management and visualization systems have been developed and utilized: CYCOFOS and BYTHOS. The Cyprus Coastal Ocean Forecasting and Observing System (CYCOFOS) provides a variety of operational predictions, such as ultra-high, high and medium resolution ocean forecasts in the Levantine Basin, offshore and coastal sea state forecasts in the Mediterranean and Black Sea, tide forecasting in the Mediterranean, ocean remote sensing in the Eastern Mediterranean, and coastal and offshore monitoring. As a rich internet application, BYTHOS enables scientists to search, visualize and download oceanographic data online and in real time. The recent improvement of the BYTHOS system is its extension with access to and visualization of CYCOFOS data, overlaying forecast fields and observing data. The CYCOFOS data are stored on an OPeNDAP server in netCDF format; PHP and Python scripts were developed to search, process and visualize them. Data visualization is achieved through MapServer. The BYTHOS forecast access interface allows users to search for the required forecast field by type, parameter, region, level and time. It also provides the opportunity to overlay different forecast and observing data, which can be used for complex analysis of sea basin conditions.

  7. Three visualization approaches for communicating and exploring PIT tag data

    USGS Publications Warehouse

    Letcher, Benjamin; Walker, Jeffrey D.; O'Donnell, Matthew; Whiteley, Andrew R.; Nislow, Keith; Coombs, Jason

    2018-01-01

    As the number, size and complexity of ecological datasets have increased, narrative and interactive raw-data visualizations have emerged as important tools for exploring and understanding these large datasets. As a demonstration, we developed three visualizations to communicate and explore passive integrated transponder tag data from two long-term field studies. We created three independent visualizations for the same dataset, allowing separate entry points for users with different goals and experience levels. The first visualization uses a narrative approach to introduce users to the study. The second visualization provides interactive cross-filters that allow users to explore multi-variate relationships in the dataset. The last visualization allows users to visualize the movement histories of individual fish within the stream network. This suite of visualization tools allows a progressive discovery of more detailed information and should make the data accessible to users with a wide variety of backgrounds and interests.

  8. Using Graphic Novels, Anime, and the Internet in an Urban High School

    ERIC Educational Resources Information Center

    Frey, Nancy; Fisher, Douglas

    2004-01-01

    Alternative genres such as graphic novels, manga, and anime are employed to build on students' multiple literacies. It is observed that use of visual stories allowed students to discuss how the authors conveyed mood and tone through images.

  9. Time-varying spatial data integration and visualization: 4 Dimensions Environmental Observations Platform (4-DEOS)

    NASA Astrophysics Data System (ADS)

    Paciello, Rossana; Coviello, Irina; Filizzola, Carolina; Genzano, Nicola; Lisi, Mariano; Mazzeo, Giuseppe; Pergola, Nicola; Sileo, Giancanio; Tramutoli, Valerio

    2014-05-01

    In environmental studies, the integration of heterogeneous, time-varying data is a very common requirement for investigating, and possibly visualizing, correlations among the physical parameters underlying the dynamics of complex phenomena. Datasets used in such applications often have different spatial and temporal resolutions, and in some cases the superimposition of asynchronous layers is required. Traditionally, the platforms used for spatio-temporal visual data analysis allow users to overlay spatial data, managing time with a 'snapshot' data model in which each stack of layers is labeled with a different time. This kind of architecture incorporates neither temporal indexing nor the third spatial dimension, which is usually given as an independent additional layer. Conversely, a full representation of a generic environmental parameter P(x,y,z,t) in the 4D space-time domain would allow asynchronous datasets to be handled, as well as less traditional data products (e.g., vertical sections, point time series, etc.). In this paper we present the 4 Dimensions Environmental Observation Platform (4-DEOS), a system based on a Client-Broker-Server web services architecture. This platform is a new open-source solution for both timely access and easy integration and visualization of heterogeneous, asynchronous geospatial products (maps, vertical profiles or sections, point time series, etc.). The innovative aspect of the 4-DEOS system is that users can analyze data/products individually while moving through time, with the option of pausing the display of some data/products to focus on other parameters and better study their temporal evolution. The platform offers two distinct display modes, covering either a time interval or a single instant, and users can visualize data/products in two ways: i) showing each parameter in a dedicated window, or ii) overlapping all parameters in a single window. A sliding time bar allows users to follow the temporal evolution of the selected data/product. With this software, users can identify events partially correlated with each other not only in the spatial dimensions but also in the time domain, even at different time lags.
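    The central idea above, a 4D field P(x,y,z,t) from which asynchronous datasets are aligned at a requested instant, can be sketched with plain arrays. The grids, time stamps and nearest-time strategy below are illustrative assumptions, not 4-DEOS internals.

```python
import numpy as np

# Two hypothetical datasets on the same spatial grid but sampled at
# different (asynchronous) times. To overlay them at a requested instant,
# each is indexed by its nearest available time step.
rng = np.random.default_rng(0)

times_a = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # hours
field_a = rng.random((5, 3, 4, 4))              # P(t, z, y, x)

times_b = np.array([0.5, 2.5, 4.5])             # asynchronous sampling
field_b = rng.random((3, 3, 4, 4))

def slice_at(times, field, t):
    """Return (snapshot time, spatial slab) of the 4-D field nearest to t."""
    i = int(np.argmin(np.abs(times - t)))
    return times[i], field[i]

t_req = 2.2
ta, slab_a = slice_at(times_a, field_a, t_req)
tb, slab_b = slice_at(times_b, field_b, t_req)
print(ta, tb)          # nearest snapshots: 2.0 and 2.5
print(slab_a.shape)    # (3, 4, 4): one (z, y, x) volume per parameter
```

    Keeping time as a first-class index (rather than a label on a stack of 2D layers) is what makes the single-instant and time-interval display modes above straightforward: both reduce to selecting one or several indices along the t axis.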

  10. Observing Double Stars

    NASA Astrophysics Data System (ADS)

    Genet, Russell M.; Fulton, B. J.; Bianco, Federica B.; Martinez, John; Baxter, John; Brewer, Mark; Carro, Joseph; Collins, Sarah; Estrada, Chris; Johnson, Jolyon; Salam, Akash; Wallen, Vera; Warren, Naomi; Smith, Thomas C.; Armstrong, James D.; McGaughey, Steve; Pye, John; Mohanan, Kakkala; Church, Rebecca

    2012-05-01

    Double stars have been systematically observed since William Herschel initiated his program in 1779. In 1803 he reported that, to his surprise, many of the systems he had been observing for a quarter century were gravitationally bound binary stars. In 1830 the first binary orbital solution was obtained, leading eventually to the determination of stellar masses. Double star observations have been a prolific field, with observations and discoveries - often made by students and amateurs - routinely published in a number of specialized journals such as the Journal of Double Star Observations. All published double star observations from Herschel's to the present have been incorporated in the Washington Double Star Catalog. In addition to reviewing the history of visual double stars, we discuss four observational technologies and illustrate these with our own observational results from both California and Hawaii on telescopes ranging from small SCTs to the 2-meter Faulkes Telescope North on Haleakala. Two of these technologies are visual observations aimed primarily at published "hands-on" student science education, and CCD observations of both bright and very faint doubles. The other two are recent technologies that have launched a double star renaissance. These are lucky imaging and speckle interferometry, both of which can use electron-multiplying CCD cameras to allow short (30 ms or less) exposures that are read out at high speed with very low noise. Analysis of thousands of high speed exposures allows normal seeing limitations to be overcome so very close doubles can be accurately measured.

  11. Novel Scientific Visualization Interfaces for Interactive Information Visualization and Sharing

    NASA Astrophysics Data System (ADS)

    Demir, I.; Krajewski, W. F.

    2012-12-01

    As geoscientists are confronted with increasingly massive datasets, from environmental observations to simulations, one of the biggest challenges is having the right tools to gain scientific insight from the data and communicate that understanding to stakeholders. Recent developments in web technologies make it easy to manage, visualize and share large data sets with the general public. Novel visualization techniques and dynamic user interfaces allow users to interact with data and modify parameters to create custom views, gaining insight from simulations and environmental observations. This requires developing new data models and intelligent knowledge-discovery techniques to explore and extract information from complex computational simulations or large data repositories. Scientific visualization will be an increasingly important component of comprehensive environmental information platforms. This presentation provides an overview of the trends and challenges in the field of scientific visualization, and demonstrates information visualization and communication tools in the Iowa Flood Information System (IFIS), developed in light of these challenges. The IFIS is a web-based platform developed by the Iowa Flood Center (IFC) to provide access to and visualization of flood inundation maps, real-time flood conditions, short-term and seasonal flood forecasts, and other flood-related data for communities in Iowa. The key element of the system's architecture is the notion of community. Locations of the communities, those near streams and rivers, define basin boundaries. The IFIS provides community-centric watershed and river characteristics, weather (rainfall) conditions, and streamflow data and visualization tools. Interactive interfaces allow access to inundation maps for different stage and return-period values, and to flooding scenarios with contributions from multiple rivers.
Real-time and historical data on water levels, gauge heights, and rainfall conditions are available in the IFIS. 2D and 3D interactive visualizations in the IFIS make the data more understandable to the general public. Users are able to filter data sources for their communities and selected rivers. The data and information in the IFIS are also accessible through web services and mobile applications. The IFIS is optimized for various browsers and screen sizes to provide access through multiple platforms, including tablets and mobile devices. Multiple view modes in the IFIS accommodate different user types, from the general public to researchers and decision makers, by providing different levels of tools and detail. River view mode allows users to visualize data from multiple IFC bridge sensors and USGS stream gauges to follow flooding conditions along a river. The IFIS will help communities make better-informed decisions on the occurrence of floods, and will alert communities in advance to help minimize flood damage.

  12. PULSED AIR SPARGING IN AQUIFERS CONTAMINATED WITH DENSE NONAQUEOUS PHASE LIQUIDS

    EPA Science Inventory

    Air sparging was evaluated for remediation of tetrachloroethylene (PCE) present as dense nonaqueous phase liquid (DNAPL) in aquifers. A two-dimensional laboratory tank with a transparent front wall allowed for visual observation of DNAPL mobilization. A DNAPL zone 50 cm high was ...

  13. Characterizing the effects of feature salience and top-down attention in the early visual system.

    PubMed

    Poltoratski, Sonia; Ling, Sam; McCormack, Devin; Tong, Frank

    2017-07-01

    The visual system employs a sophisticated balance of attentional mechanisms: salient stimuli are prioritized for visual processing, yet observers can also ignore such stimuli when their goals require directing attention elsewhere. A powerful determinant of visual salience is local feature contrast: if a local region differs from its immediate surround along one or more feature dimensions, it will appear more salient. We used high-resolution functional MRI (fMRI) at 7T to characterize the modulatory effects of bottom-up salience and top-down voluntary attention within multiple sites along the early visual pathway, including visual areas V1-V4 and the lateral geniculate nucleus (LGN). Observers viewed arrays of spatially distributed gratings, where one of the gratings immediately to the left or right of fixation differed from all other items in orientation or motion direction, making it salient. To investigate the effects of directed attention, observers were cued to attend to the grating to the left or right of fixation, which was either salient or nonsalient. Results revealed reliable additive effects of top-down attention and stimulus-driven salience throughout visual areas V1-hV4. In comparison, the LGN exhibited significant attentional enhancement but was not reliably modulated by orientation- or motion-defined salience. Our findings indicate that top-down effects of spatial attention can influence visual processing at the earliest possible site along the visual pathway, including the LGN, whereas the processing of orientation- and motion-driven salience primarily involves feature-selective interactions that take place in early cortical visual areas. NEW & NOTEWORTHY While spatial attention allows for specific, goal-driven enhancement of stimuli, salient items outside of the current focus of attention must also be prioritized. We used 7T fMRI to compare salience and spatial attentional enhancement along the early visual hierarchy. 
We report additive effects of attention and bottom-up salience in early visual areas, suggesting that salience enhancement is not contingent on the observer's attentional state. Copyright © 2017 the American Physiological Society.

  14. Perceiving groups: The people perception of diversity and hierarchy.

    PubMed

    Phillips, L Taylor; Slepian, Michael L; Hughes, Brent L

    2018-05-01

    The visual perception of individuals has received considerable attention (visual person perception), but little social psychological work has examined the processes underlying the visual perception of groups of people (visual people perception). Ensemble-coding is a visual mechanism that automatically extracts summary statistics (e.g., average size) of lower-level sets of stimuli (e.g., geometric figures), and also extends to the visual perception of groups of faces. Here, we consider whether ensemble-coding supports people perception, allowing individuals to form rapid, accurate impressions about groups of people. Across nine studies, we demonstrate that people visually extract high-level properties (e.g., diversity, hierarchy) that are unique to social groups, as opposed to individual persons. Observers rapidly and accurately perceived group diversity and hierarchy, or variance across race, gender, and dominance (Studies 1-3). Further, results persist when observers are given very short display times, backward pattern masks, color- and contrast-controlled stimuli, and absolute versus relative response options (Studies 4a-7b), suggesting robust effects supported specifically by ensemble-coding mechanisms. Together, we show that humans can rapidly and accurately perceive not only individual persons, but also emergent social information unique to groups of people. These people perception findings demonstrate the importance of visual processes for enabling people to perceive social groups and behave effectively in group-based social interactions. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  15. Latent binocular function in amblyopia.

    PubMed

    Chadnova, Eva; Reynaud, Alexandre; Clavagnier, Simon; Hess, Robert F

    2017-11-01

    Recently, psychophysical studies have shown that humans with amblyopia do have binocular function that is not normally revealed, owing to dominant suppressive interactions under normal viewing conditions. Here we use magnetoencephalography (MEG) combined with dichoptic visual stimulation to investigate the underlying binocular function in humans with amblyopia, using stimuli whose temporal properties would be expected to bypass suppressive effects and reveal any underlying binocular function. We recorded contrast response functions in visual cortical area V1 of amblyopes and normal observers using a steady-state visually evoked response (SSVER) protocol. The stimuli were frequency-tagged at 4 Hz and 6 Hz, which allowed identification of the responses from each eye and was of a sufficiently high temporal frequency (>3 Hz) to bypass suppression. To characterize binocular function, we compared dichoptic masking between the two eyes in normal and amblyopic participants, as well as interocular phase differences in the two groups. We observed that the primary visual cortex responds less to stimulation of the amblyopic eye than to that of the fellow eye. The pattern of interaction in the amblyopic visual system, however, was not significantly different between the amblyopic and fellow eyes, although the amblyopic suppressive interactions were weaker than those observed in the binocular system of our normal observers. Furthermore, we identified an interocular processing delay of approximately 20 ms in our amblyopic group. To conclude, when suppression is greatly reduced, as is the case with our stimulation above 3 Hz, the amblyopic visual system exhibits a lack of binocular interactions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Image analysis for microelectronic retinal prosthesis.

    PubMed

    Hallum, L E; Cloherty, S L; Lovell, N H

    2008-01-01

    By way of extracellular stimulating electrodes, a microelectronic retinal prosthesis aims to render discrete luminous spots (so-called phosphenes) in the visual field, thereby providing a phosphene image (PI) as a rudimentary remediation of profound blindness. As part thereof, a digital camera, or some other photosensitive array, captures frames; the frames are analyzed; and phosphenes are actuated accordingly by way of modulated charge injections. Here, we present a method for assessing image analysis schemes for integration with a prosthetic device, that is, the means of converting the captured image (high resolution) to modulated charge injections (low resolution). We use the mutual-information function to quantify the amount of information conveyed to the PI observer (device implantee), while accounting for the statistics of visual stimuli. We demonstrate an effective scheme involving overlapping Gaussian kernels, and discuss extensions of the method to account for short-term visual memory in observers, and for their perceptual errors of omission and commission.
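    The mutual-information criterion the abstract relies on can be illustrated with a small discrete example. The joint distributions below are toy assumptions (four stimulus classes, four phosphene patterns), not the paper's stimulus statistics; the point is only how I(S; P) quantifies information conveyed to the observer.

```python
import numpy as np

def mutual_information(joint):
    """Mutual information I(S; P) in bits from a discrete joint distribution
    over stimulus classes (rows) and phosphene patterns (columns)."""
    joint = joint / joint.sum()
    ps = joint.sum(axis=1, keepdims=True)   # marginal over stimuli
    pp = joint.sum(axis=0, keepdims=True)   # marginal over patterns
    nz = joint > 0                          # skip zero cells (0 log 0 = 0)
    return float((joint[nz] * np.log2(joint[nz] / (ps @ pp)[nz])).sum())

# Each of 4 equiprobable stimuli maps to a unique pattern: all 2 bits survive.
perfect = np.eye(4) / 4.0
# Pattern is independent of stimulus: nothing is conveyed.
independent = np.full((4, 4), 1 / 16)

print(mutual_information(perfect))      # 2.0 bits (= log2 of 4 classes)
print(mutual_information(independent))  # 0.0 bits
```

    A real low-resolution PI sits between these extremes; comparing candidate image-analysis schemes then amounts to comparing how many of the stimulus bits each scheme's charge-injection mapping preserves.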

  17. WORDGRAPH: Keyword-in-Context Visualization for NETSPEAK's Wildcard Search.

    PubMed

    Riehmann, Patrick; Gruendl, Henning; Potthast, Martin; Trenkmann, Martin; Stein, Benno; Froehlich, Benno

    2012-09-01

    The WORDGRAPH helps writers in visually choosing phrases while writing a text. It checks for the commonness of phrases and allows for the retrieval of alternatives by means of wildcard queries. To support such queries, we implement a scalable retrieval engine, which returns high-quality results within milliseconds using a probabilistic retrieval strategy. The results are displayed as a WORDGRAPH visualization or as a textual list. The graphical interface provides an effective means for interactive exploration of search results using filter techniques, query expansion, and navigation. Our observations indicate that, of three investigated retrieval tasks, the textual interface is sufficient for the phrase verification task, whereas both interfaces support context-sensitive word choice, and the WORDGRAPH best supports the exploration of a phrase's context or the underlying corpus. Our user study confirms these observations and shows that WORDGRAPH is generally the preferred interface over the textual result list for queries containing multiple wildcards.

  18. ViSBARD: Visual System for Browsing, Analysis and Retrieval of Data

    NASA Astrophysics Data System (ADS)

    Roberts, D. Aaron; Boller, Ryan; Rezapkin, V.; Coleman, J.; McGuire, R.; Goldstein, M.; Kalb, V.; Kulkarni, R.; Luckyanova, M.; Byrnes, J.; Kerbel, U.; Candey, R.; Holmes, C.; Chimiak, R.; Harris, B.

    2018-04-01

    ViSBARD interactively visualizes and analyzes space physics data. It provides an interactive, integrated 3-D and 2-D environment to determine correlations between measurements across many spacecraft. It supports a variety of spacecraft data products and MHD models and is easily extensible to others. ViSBARD provides a way of visualizing multiple vector and scalar quantities as measured by many spacecraft at once. The data are displayed three-dimensionally along the orbits, which may be displayed either as connected lines or as points. The data display allows the rapid determination of vector configurations, correlations between many measurements at multiple points, and global relationships. With the addition of magnetohydrodynamic (MHD) model data, this environment can also be used to validate simulation results against observed data, use simulated data to provide a global context for sparse observed data, and apply feature detection techniques to the simulated data.

  19. 3D visualization of Thoraco-Lumbar Spinal Lesions in German Shepherd Dog

    NASA Astrophysics Data System (ADS)

    Azpiroz, J.; Krafft, J.; Cadena, M.; Rodríguez, A. O.

    2006-09-01

    Computed tomography (CT) has been found to be an excellent imaging modality due to its sensitivity in characterizing the morphology of the spine in dogs. This technique is considered to be particularly helpful for diagnosing spinal cord atrophy and spinal stenosis. The three-dimensional visualization of organs and bones can significantly improve the diagnosis of certain diseases in dogs. CT images of a German Shepherd dog's spinal cord were acquired to generate stacks, which were digitally processed and arranged into a volume image. All imaging experiments were acquired using standard clinical protocols on a clinical CT scanner. The three-dimensional visualization allowed us to observe anatomical structures that are otherwise not visible in two-dimensional images. The combination of an imaging modality like CT with image processing techniques can be a powerful tool for the diagnosis of a number of animal diseases.

  20. The Application of the Montage Image Mosaic Engine To The Visualization Of Astronomical Images

    NASA Astrophysics Data System (ADS)

    Berriman, G. Bruce; Good, J. C.

    2017-05-01

    The Montage Image Mosaic Engine was designed as a scalable toolkit, written in C for performance and portability across *nix platforms, that assembles FITS images into mosaics. This code is freely available and has been widely used in the astronomy and IT communities for research, product generation, and for developing next-generation cyber-infrastructure. Recently, it has begun finding applicability in the field of visualization. This development has come about because the toolkit design allows easy integration into scalable systems that process data for subsequent visualization in a browser or client. The toolkit includes a visualization tool suitable for automation and for integration into Python: mViewer creates, with a single command, complex multi-color images overlaid with coordinate displays, labels, and observation footprints, and includes an adaptive image histogram equalization method that preserves the structure of a stretched image over its dynamic range. The Montage toolkit contains functionality originally developed to support the creation and management of mosaics that also offers value to visualization: a background rectification algorithm that reveals faint structure in an image, and tools for creating cutout and downsampled versions of large images. Version 5 of Montage offers support for visualizing data written in the HEALPix sky-tessellation scheme, and functionality for processing and organizing images to comply with the TOAST sky-tessellation scheme required for consumption by the World Wide Telescope (WWT). Four online tutorials allow readers to reproduce and extend all the visualizations presented in this paper.
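    mViewer's stretch is an adaptive histogram equalization; a plain (global, non-adaptive) CDF-based equalization conveys the underlying idea and is sketched below. This is a generic illustration, not Montage's algorithm: mapping each pixel through the empirical CDF spreads output values evenly across the display range, so faint structure is not crushed by a few bright pixels.

```python
import numpy as np

def equalize(image, levels=256):
    """Global histogram-equalization stretch: map each pixel through the
    empirical CDF so output values fill the display range evenly.
    (mViewer's adaptive method is more elaborate; this is the basic idea.)"""
    flat = image.ravel()
    order = np.argsort(flat)
    ranks = np.empty_like(order)
    ranks[order] = np.arange(flat.size)      # rank of each pixel by brightness
    stretched = ranks / (flat.size - 1)      # empirical CDF in [0, 1]
    return (stretched * (levels - 1)).reshape(image.shape)

# Synthetic "image" with flux concentrated in a few bright pixels, as is
# typical of astronomical data with large dynamic range.
rng = np.random.default_rng(1)
img = rng.exponential(scale=1.0, size=(8, 8))
eq = equalize(img)
print(eq.min(), eq.max())   # 0.0 255.0
```

    Because the mapping is monotone in pixel brightness, relative ordering of features is preserved while the histogram of displayed values becomes flat.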

  1. Using Immersive Visualizations to Improve Decision Making and Enhancing Public Understanding of Earth Resource and Climate Issues

    NASA Astrophysics Data System (ADS)

    Yu, K. C.; Raynolds, R. G.; Dechesne, M.

    2008-12-01

    New visualization technologies, from ArcGIS to Google Earth, have allowed for the integration of complex, disparate data sets to produce visually rich and compelling three-dimensional models of sub-surface and surface resource distribution patterns. The rendering of these models allows the public to quickly understand complicated geospatial relationships that would otherwise take much longer to explain using traditional media. We have impacted the community through topical policy presentations at both state and city levels, adult education classes at the Denver Museum of Nature and Science (DMNS), and public lectures at DMNS. We have constructed three-dimensional models from well data and surface observations which allow policy makers to better understand the distribution of groundwater in sandstone aquifers of the Denver Basin. Our presentations to local governments in the Denver metro area have allowed resource managers to better project future ground water depletion patterns, and to encourage development of alternative sources. DMNS adult education classes on water resources, geography, and regional geology, as well as public lectures on global issues such as earthquakes, tsunamis, and resource depletion, have utilized the visualizations developed from these research models. In addition to presenting GIS models in traditional lectures, we have also made use of the immersive display capabilities of the digital "fulldome" Gates Planetarium at DMNS. The real-time Uniview visualization application installed at Gates was designed for teaching astronomy, but it can be re-purposed for displaying our model datasets in the context of the Earth's surface. The 17-meter diameter dome of the Gates Planetarium allows an audience to have an immersive experience, similar to the virtual-reality CAVEs employed by the oil exploration industry, that would otherwise not be available to the general public.
Public lectures in the dome allow audiences of over 100 people to comprehend dynamically changing geospatial datasets in an exciting and engaging fashion. In our presentation, we will demonstrate how new software tools like Uniview can be used to dramatically enhance and accelerate public comprehension of complex, multi-scale geospatial phenomena.

  2. Visual detection following retinal damage: predictions of an inhomogeneous retino-cortical model

    NASA Astrophysics Data System (ADS)

    Arnow, Thomas L.; Geisler, Wilson S.

    1996-04-01

    A model of human visual detection performance has been developed, based on available anatomical and physiological data for the primate visual system. The inhomogeneous retino-cortical (IRC) model computes detection thresholds by comparing simulated neural responses to target patterns with responses to a uniform background of the same luminance. The model incorporates human ganglion cell sampling distributions; macaque monkey ganglion cell receptive field properties; macaque cortical cell contrast nonlinearities; and an optimal decision rule based on ideal-observer theory. Spatial receptive field properties of cortical neurons were not included. Two parameters were allowed to vary while minimizing the squared error between predicted and observed thresholds. One parameter was decision efficiency; the other was the relative strength of the ganglion-cell center and surround. The latter was only allowed to vary within a small range consistent with known physiology. Contrast sensitivity was measured for sine-wave gratings as a function of spatial frequency, target size and eccentricity. Contrast sensitivity was also measured for an airplane target as a function of target size, with and without artificial scotomas. The results of these experiments, as well as contrast sensitivity data from the literature, were compared to predictions of the IRC model. Predictions were reasonably good for both grating and airplane targets.

  3. Viewing the dynamics and control of visual attention through the lens of electrophysiology

    PubMed Central

    Woodman, Geoffrey F.

    2013-01-01

    How we find what we are looking for in complex visual scenes is a seemingly simple ability that has taken half a century to unravel. The first study to use the term visual search showed that as the number of objects in a complex scene increases, observers' reaction times increase proportionally (Green and Anderson, 1956). This observation suggests that our ability to process the objects in the scenes is limited in capacity. However, if it is known that the target will have a certain feature attribute, for example, that it will be red, then only an increase in the number of red items increases reaction time. This observation suggests that we can control which visual inputs receive the benefit of our limited capacity to recognize the objects, such as those defined by the color red, as the items we seek. The nature of the mechanisms that underlie these basic phenomena in the literature on visual search has been more difficult to definitively determine. In this paper, I discuss how electrophysiological methods have provided us with the necessary tools to understand the nature of the mechanisms that give rise to the effects observed in the first visual search paper. I begin by describing how recordings of event-related potentials from humans and nonhuman primates have shown us how attention is deployed to possible target items in complex visual scenes. Then, I will discuss how event-related potential experiments have allowed us to directly measure the memory representations that are used to guide these deployments of attention to items with target-defining features. PMID:23357579

  4. Distributed Visualization Project

    NASA Technical Reports Server (NTRS)

    Craig, Douglas; Conroy, Michael; Kickbusch, Tracey; Mazone, Rebecca

    2016-01-01

    Distributed Visualization allows anyone, anywhere to see any simulation at any time. Development focuses on algorithms, software, data formats, data systems and processes to enable sharing simulation-based information across temporal and spatial boundaries without requiring stakeholders to possess highly-specialized and very expensive display systems. It also introduces abstraction between the native and shared data, which allows teams to share results without giving away proprietary or sensitive data. The initial implementation of this capability is the Distributed Observer Network (DON) version 3.1. DON 3.1 is available for public release in the NASA Software Store (https://software.nasa.gov/software/KSC-13775) and works with version 3.0 of the Model Process Control specification (an XML Simulation Data Representation and Communication Language) to display complex graphical information and associated Meta-Data.

  5. Testing Models for Perceptual Discrimination Using Repeatable Noise

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    Adding noise to stimuli to be discriminated allows estimation of observer classification functions based on the correlation between observer responses and relevant features of the noisy stimuli. Examples will be presented of stimulus features that are found in auditory tone detection and visual Vernier acuity. Using the standard signal detection model (Thurstone scaling), we derive formulas to estimate the proportion of the observer's decision variable variance that is controlled by the added noise. One is based on the probability of agreement of the observer with him/herself on trials with the same noise sample. Another is based on the relative performance of the observer and the model. When these do not agree, the model can be rejected. A second derivation gives the probability of agreement of observer and model when the observer follows the model except for internal noise. Agreement significantly less than this amount allows rejection of the model.
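    The repeatable-noise logic above can be sketched with a Monte Carlo simulation under the standard signal detection model. All parameters here are illustrative assumptions, not the paper's values: the decision variable is D = E + I, where E comes from the added (repeatable) external noise and I is internal noise; presenting the same external noise sample twice and checking whether the yes/no response agrees with itself indexes how much of the decision-variable variance the added noise controls.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials = 200_000
sigma_e, sigma_i = 1.0, 0.5   # external vs. internal noise s.d. (assumed)

E = rng.normal(0.0, sigma_e, n_trials)    # frozen noise, shared across passes
I1 = rng.normal(0.0, sigma_i, n_trials)   # internal noise, pass 1
I2 = rng.normal(0.0, sigma_i, n_trials)   # internal noise, pass 2
resp1 = (E + I1) > 0                      # yes/no decision at criterion 0
resp2 = (E + I2) > 0

p_agree = (resp1 == resp2).mean()
var_from_noise = sigma_e**2 / (sigma_e**2 + sigma_i**2)
print(round(p_agree, 3), var_from_noise)
```

    With these assumed noise levels, the added noise controls 0.8 of the decision-variable variance and self-agreement lands near 0.80; shrinking the internal noise drives agreement toward 1, which is the relationship the abstract exploits to test (and potentially reject) a model.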

  6. Pelagic habitat visualization: the need for a third (and fourth) dimension: HabitatSpace

    USGS Publications Warehouse

    Beegle-Krause, C; Vance, Tiffany; Reusser, Debbie; Stuebe, David; Howlett, Eoin

    2009-01-01

    Habitat in open water is not simply a 2-D to 2.5-D surface such as the ocean bottom or the air-water interface. Rather, pelagic habitat is a 3-D volume of water that can change over time, leading us to the term habitat space. Visualization and analysis in 2-D is well supported with GIS tools, but a new tool was needed for visualization and analysis in four dimensions. Observational data (cruise profiles (x_o, y_o, z, t_o)), numerical circulation model fields (x, y, z, t), and trajectories (larval fish, 4-D lines) need to be merged together in a meaningful way for visualization and analysis. As a first step toward this new framework, UNIDATA's Integrated Data Viewer (IDV) has been used to create a set of tools for habitat analysis in 4-D. IDV was designed for 3-D+time geospatial data in the meteorological community. The netCDF Java libraries allow the tool to read many file formats, including remotely located data (e.g., data available via OPeNDAP). With this project, IDV has been adapted for use in delineating habitat space for multiple fish species in the ocean. The ability to define and visualize boundaries of a water mass that meets specific biologically relevant criteria (e.g., volume, connectedness, and inter-annual variability), based on model results and observational data, will allow managers to investigate the survival of individual year classes of commercially important fisheries. Better understanding of the survival of these year classes will lead to improved forecasting of fisheries recruitment.
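    Delineating a water mass that meets biologically relevant criteria, as described above, reduces at its simplest to thresholding 3-D fields and measuring the resulting volume. The sketch below is a hypothetical illustration: the fields, thresholds, and cell size are invented, and it omits the connectedness and inter-annual criteria the abstract also mentions.

```python
import numpy as np

# Hypothetical 3-D ocean state on a regular (z, y, x) grid.
rng = np.random.default_rng(3)
nz, ny, nx = 10, 20, 20
temp = rng.uniform(2.0, 12.0, (nz, ny, nx))    # degrees C (invented field)
salt = rng.uniform(30.0, 36.0, (nz, ny, nx))   # PSU (invented field)

# "Habitat space": cells inside assumed biologically relevant bounds.
suitable = (temp > 4.0) & (temp < 8.0) & (salt > 32.0) & (salt < 34.0)

cell_volume_km3 = 0.5                          # assumed uniform cell size
habitat_volume = suitable.sum() * cell_volume_km3
fraction = suitable.mean()                     # fraction of the domain
print(round(fraction, 3), habitat_volume)
```

    Repeating the same thresholding on each model time step (or each year's fields) would give the time series of habitat volume from which inter-annual variability, and hence year-class survival, could be examined.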

  7. First impressions: gait cues drive reliable trait judgements.

    PubMed

    Thoresen, John C; Vuong, Quoc C; Atkinson, Anthony P

    2012-09-01

    Personality trait attribution can underpin important social decisions and yet requires little effort; even a brief exposure to a photograph can generate lasting impressions. Body movement is a channel readily available to observers and allows judgements to be made when facial and body appearances are less visible; e.g., from great distances. Across three studies, we assessed the reliability of trait judgements of point-light walkers and identified motion-related visual cues driving observers' judgements. The findings confirm that observers make reliable, albeit inaccurate, trait judgements, and these were linked to a small number of motion components derived from a Principal Component Analysis of the motion data. Parametric manipulation of the motion components linearly affected trait ratings, providing strong evidence that the visual cues captured by these components drive observers' trait judgements. Subsequent analyses suggest that reliability of trait ratings was driven by impressions of emotion, attractiveness and masculinity. Copyright © 2012 Elsevier B.V. All rights reserved.
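    The motion-component approach can be sketched with PCA via SVD, assuming a plain walkers-by-features matrix rather than the study's actual point-light encoding; the data and morphing weights below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: 50 point-light walkers x 30 motion features
# (e.g. joint amplitudes and phases).
X = rng.normal(size=(50, 30))
mean = X.mean(axis=0)
Xc = X - mean

# PCA via SVD: rows of Vt are motion components; U * S are walker scores.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * S

# Parametric manipulation: step a synthetic walker along the first motion
# component, analogous to morphing the cue that drives trait ratings.
morphs = [mean + w * Vt[0] for w in (-2.0, 0.0, 2.0)]
```

    Rating such morphs and checking that judgements vary linearly with the weight w mirrors the parametric manipulation reported above.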

  8. Modeling and Visualizing Flow of Chemical Agents Across Complex Terrain

    NASA Technical Reports Server (NTRS)

    Kao, David; Kramer, Marc; Chaderjian, Neal

    2005-01-01

    Release of chemical agents across complex terrain presents a real threat to homeland security. Modeling and visualization tools are being developed that capture fluid flow-terrain interaction as well as point-source dispersal along downstream flow paths. These analytic tools, when coupled with UAV atmospheric observations, provide predictive capabilities that allow for rapid emergency response as well as the development of a comprehensive preemptive counter-threat evacuation plan. The visualization tools involve high-end computing and massively parallel processing combined with texture mapping. We demonstrate our approach across a mountainous portion of northern California under two contrasting meteorological conditions. Animations depicting flow over this geographical location provide immediate assistance in decision support and crisis management.

  9. Effects of space allowance on the behaviour of long-term housed shelter dogs.

    PubMed

    Normando, Simona; Contiero, Barbara; Marchesini, Giorgio; Ricci, Rebecca

    2014-03-01

    The aim of this study was to assess the effects of space allowance (4.5 m²/head vs. 9 m²/head) on the behaviour of shelter dogs (Canis familiaris) at different times of the day (from 10:30 to 13:30 vs. from 14:30 to 17:30), and the dogs' preference between two types of beds (fabric bed vs. plastic basket). Twelve neutered dogs (seven males and five females aged 3-8 years) housed in pairs were observed using a scan sampling recording method every 20 s, for a total of 14,592 scans/treatment. An increase in space allowance increased the general level of activity (risk ratio (RR)=1.34), standing (RR=1.37), positive social interactions (RR=2.14), visual exploration of the environment (RR=1.21), and vocalisations (RR=2.35). Dogs spent more time in the sitting (RR=1.39) or standing (RR=1.88) posture, in positive interactions (RR=1.85), and in active visual exploration (RR=1.99) during the morning than in the afternoon. The dogs were more often observed in the fabric bed than in the plastic basket (53% vs. 15% of total scans, p<0.001). Results suggest that a 9.0 m²/head space allowance could be more beneficial to dogs than one of 4.5 m²/head. Copyright © 2014 Elsevier B.V. All rights reserved.
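    The risk ratios reported above compare the probability of observing a behaviour under one condition against the other. A minimal sketch follows; the scan counts are invented to reproduce the reported RR of 2.14 for positive social interactions, and are not taken from the study:

```python
# Risk ratio from scan-sampling counts (counts below are hypothetical).
def risk_ratio(events_a, scans_a, events_b, scans_b):
    """RR = P(behaviour | condition A) / P(behaviour | condition B)."""
    return (events_a / scans_a) / (events_b / scans_b)

# e.g. positive social interactions in 642 of 14592 scans at 9 m2/head
# vs. 300 of 14592 scans at 4.5 m2/head:
rr = risk_ratio(642, 14592, 300, 14592)
print(f"RR = {rr:.2f}")
```

    With equal numbers of scans per treatment, the RR reduces to the ratio of event counts, which is why the denominators cancel here.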

  10. The Use of Geogebra Software as a Calculus Teaching and Learning Tool

    ERIC Educational Resources Information Center

    Nobre, Cristiane Neri; Meireles, Magali Rezende Gouvêa; Vieira, Niltom, Jr.; de Resende, Mônica Neli; da Costa, Lucivânia Ester; da Rocha, Rejane Corrêa

    2016-01-01

    Information and Communication Technologies (ICT) in education provide a new learning environment in which students build their own knowledge, allowing visualization and experimentation. This study evaluated the Geogebra software in the learning process of Calculus. It was observed that the proposed activities helped in the graphical…

  11. Facial color is an efficient mechanism to visually transmit emotion

    PubMed Central

    Benitez-Quiroz, Carlos F.; Srinivasan, Ramprakash

    2018-01-01

    Facial expressions of emotion in humans are believed to be produced by contracting one’s facial muscles, generally called action units. However, the surface of the face is also innervated with a large network of blood vessels. Blood flow variations in these vessels yield visible color changes on the face. Here, we study the hypothesis that these visible facial colors allow observers to successfully transmit and visually interpret emotion even in the absence of facial muscle activation. To study this hypothesis, we address the following two questions. Are observable facial colors consistent within and differential between emotion categories and positive vs. negative valence? And does the human visual system use these facial colors to decode emotion from faces? These questions suggest the existence of an important, unexplored mechanism of the production of facial expressions of emotion by a sender and their visual interpretation by an observer. The results of our studies provide evidence in favor of our hypothesis. We show that people successfully decode emotion using these color features, even in the absence of any facial muscle activation. We also demonstrate that this color signal is independent from that provided by facial muscle movements. These results support a revised model of the production and perception of facial expressions of emotion where facial color is an effective mechanism to visually transmit and decode emotion. PMID:29555780

  12. Facial color is an efficient mechanism to visually transmit emotion.

    PubMed

    Benitez-Quiroz, Carlos F; Srinivasan, Ramprakash; Martinez, Aleix M

    2018-04-03

    Facial expressions of emotion in humans are believed to be produced by contracting one's facial muscles, generally called action units. However, the surface of the face is also innervated with a large network of blood vessels. Blood flow variations in these vessels yield visible color changes on the face. Here, we study the hypothesis that these visible facial colors allow observers to successfully transmit and visually interpret emotion even in the absence of facial muscle activation. To study this hypothesis, we address the following two questions. Are observable facial colors consistent within and differential between emotion categories and positive vs. negative valence? And does the human visual system use these facial colors to decode emotion from faces? These questions suggest the existence of an important, unexplored mechanism of the production of facial expressions of emotion by a sender and their visual interpretation by an observer. The results of our studies provide evidence in favor of our hypothesis. We show that people successfully decode emotion using these color features, even in the absence of any facial muscle activation. We also demonstrate that this color signal is independent from that provided by facial muscle movements. These results support a revised model of the production and perception of facial expressions of emotion where facial color is an effective mechanism to visually transmit and decode emotion. Copyright © 2018 the Author(s). Published by PNAS.

  13. Interactive visualization to advance earthquake simulation

    USGS Publications Warehouse

    Kellogg, L.H.; Bawden, G.W.; Bernardin, T.; Billen, M.; Cowgill, E.; Hamann, B.; Jadamec, M.; Kreylos, O.; Staadt, O.; Sumner, D.

    2008-01-01

    The geological sciences are challenged to manage and interpret increasing volumes of data as observations and simulations increase in size and complexity. For example, simulations of earthquake-related processes typically generate complex, time-varying data sets in two or more dimensions. To facilitate interpretation and analysis of these data sets, to evaluate the underlying models, and to drive future calculations, we have developed methods of interactive visualization with a special focus on using immersive virtual reality (VR) environments to interact with models of Earth's surface and interior. Virtual mapping tools allow virtual "field studies" in inaccessible regions. Interactive tools allow us to manipulate shapes in order to construct models of geological features for geodynamic models, while feature extraction tools support quantitative measurement of structures that emerge from numerical simulation or field observations, thereby enabling us to improve our interpretation of the dynamical processes that drive earthquakes. VR has traditionally been used primarily as a presentation tool, albeit with active navigation through data. Reaping the full intellectual benefits of immersive VR as a tool for scientific analysis requires building on the method's strengths, that is, using both 3D perception and interaction with observed or simulated data. This approach also takes advantage of the specialized skills of geological scientists, who are trained to interpret the often limited geological and geophysical data available from field observations. © Birkhäuser 2008.

  14. 3D Feature Extraction for Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Silver, Deborah

    1996-01-01

    Visualization techniques provide tools that help scientists identify observed phenomena in scientific simulation. To be useful, these tools must allow the user to extract regions, classify and visualize them, abstract them for simplified representations, and track their evolution. Object Segmentation provides a technique to extract and quantify regions of interest within these massive datasets. This article explores basic algorithms to extract coherent amorphous regions from two-dimensional and three-dimensional scalar unstructured grids. The techniques are applied to datasets from Computational Fluid Dynamics and those from Finite Element Analysis.
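    The extraction step can be sketched as threshold-plus-connected-components. This toy version uses a structured 2-D grid for brevity, whereas the article's algorithms operate on unstructured grids, where the neighbour lookup would come from mesh connectivity rather than index arithmetic:

```python
import numpy as np
from collections import deque

def extract_regions(field, threshold):
    """Label 4-connected regions of a 2-D scalar grid exceeding a threshold."""
    mask = field > threshold
    labels = np.zeros(field.shape, dtype=int)
    next_label = 0
    for i, j in zip(*np.nonzero(mask)):
        if labels[i, j]:
            continue                      # already part of an earlier region
        next_label += 1
        labels[i, j] = next_label
        queue = deque([(i, j)])
        while queue:                      # breadth-first flood fill
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < field.shape[0] and 0 <= nx < field.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = next_label
                    queue.append((ny, nx))
    return labels, next_label

field = np.array([[0, 5, 5, 0],
                  [0, 5, 0, 0],
                  [0, 0, 0, 5]], dtype=float)
labels, n = extract_regions(field, 1.0)
print(n)  # 2 coherent regions
```

    Per-region quantities (volume, extrema, centroid) then follow from aggregating cells by label, which is the basis for tracking a region's evolution across time steps.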

  15. How actions shape perception: learning action-outcome relations and predicting sensory outcomes promote audio-visual temporal binding

    PubMed Central

    Desantis, Andrea; Haggard, Patrick

    2016-01-01

    To maintain a temporally-unified representation of audio and visual features of objects in our environment, the brain recalibrates audio-visual simultaneity. This process allows adjustment for both differences in time of transmission and time for processing of audio and visual signals. In four experiments, we show that the cognitive processes for controlling instrumental actions also have strong influence on audio-visual recalibration. Participants learned that right and left hand button-presses each produced a specific audio-visual stimulus. Following one action the audio preceded the visual stimulus, while for the other action audio lagged vision. In a subsequent test phase, left and right button-press generated either the same audio-visual stimulus as learned initially, or the pair associated with the other action. We observed recalibration of simultaneity only for previously-learned audio-visual outcomes. Thus, learning an action-outcome relation promotes temporal grouping of the audio and visual events within the outcome pair, contributing to the creation of a temporally unified multisensory object. This suggests that learning action-outcome relations and the prediction of perceptual outcomes can provide an integrative temporal structure for our experiences of external events. PMID:27982063

  16. How actions shape perception: learning action-outcome relations and predicting sensory outcomes promote audio-visual temporal binding.

    PubMed

    Desantis, Andrea; Haggard, Patrick

    2016-12-16

    To maintain a temporally-unified representation of audio and visual features of objects in our environment, the brain recalibrates audio-visual simultaneity. This process allows adjustment for both differences in time of transmission and time for processing of audio and visual signals. In four experiments, we show that the cognitive processes for controlling instrumental actions also have strong influence on audio-visual recalibration. Participants learned that right and left hand button-presses each produced a specific audio-visual stimulus. Following one action the audio preceded the visual stimulus, while for the other action audio lagged vision. In a subsequent test phase, left and right button-press generated either the same audio-visual stimulus as learned initially, or the pair associated with the other action. We observed recalibration of simultaneity only for previously-learned audio-visual outcomes. Thus, learning an action-outcome relation promotes temporal grouping of the audio and visual events within the outcome pair, contributing to the creation of a temporally unified multisensory object. This suggests that learning action-outcome relations and the prediction of perceptual outcomes can provide an integrative temporal structure for our experiences of external events.

  17. Image gathering and restoration - Information and visual quality

    NASA Technical Reports Server (NTRS)

    Mccormick, Judith A.; Alter-Gartenberg, Rachel; Huck, Friedrich O.

    1989-01-01

    A method is investigated for optimizing the end-to-end performance of image gathering and restoration for visual quality. To achieve this objective, one must inevitably confront the problems that the visual quality of restored images depends on perceptual rather than mathematical considerations and that these considerations vary with the target, the application, and the observer. The method adopted in this paper is to optimize image gathering informationally and to restore images interactively to obtain the visually preferred trade-off among fidelity, resolution, sharpness, and clarity. The results demonstrate that this method leads to significant improvements in visual quality over traditional digital processing methods. These traditional methods allow a significant loss of visual quality to occur because they treat the design of the image-gathering system and the formulation of the image-restoration algorithm as two separate tasks and fail to account for the transformations between the continuous and the discrete representations in image gathering and reconstruction.

  18. Escape from harm: linking affective vision and motor responses during active avoidance

    PubMed Central

    Keil, Andreas

    2014-01-01

    When organisms confront unpleasant objects in their natural environments, they engage in behaviors that allow them to avoid aversive outcomes. Here, we linked visual processing of threat to its behavioral consequences by including a motor response that terminated exposure to an aversive event. Dense-array steady-state visual evoked potentials were recorded in response to conditioned threat and safety signals viewed in active or passive behavioral contexts. The amplitude of neuronal responses in visual cortex increased additively, as a function of emotional value and action relevance. The gain in local cortical population activity for threat relative to safety cues persisted when aversive reinforcement was behaviorally terminated, suggesting a lingering emotionally based response amplification within the visual system. Distinct patterns of long-range neural synchrony emerged between the visual cortex and extravisual regions. Increased coupling between visual and higher-order structures was observed specifically during active perception of threat, consistent with a reorganization of neuronal populations involved in linking sensory processing to action preparation. PMID:24493849

  19. Computerized visual feedback: an adjunct to robotic-assisted gait training.

    PubMed

    Banz, Raphael; Bolliger, Marc; Colombo, Gery; Dietz, Volker; Lünenburger, Lars

    2008-10-01

    Robotic devices for walking rehabilitation allow new possibilities for providing performance-related information to patients during gait training. Based on motor learning principles, augmented feedback during robotic-assisted gait training might improve the rehabilitation process used to regain walking function. This report presents a method to provide visual feedback implemented in a driven gait orthosis (DGO). The purpose of the study was to compare the immediate effect on motor output in subjects during robotic-assisted gait training when they used computerized visual feedback and when they followed verbal instructions of a physical therapist. Twelve people with neurological gait disorders due to incomplete spinal cord injury participated. Subjects were instructed to walk within the DGO in 2 different conditions. They were asked to increase their motor output by following the instructions of a therapist and by observing visual feedback. In addition, the subjects' opinions about using visual feedback were investigated by a questionnaire. Computerized visual feedback and verbal instructions by the therapist were observed to result in a similar change in motor output in subjects when walking within the DGO. Subjects reported that they were more motivated and concentrated on their movements when using computerized visual feedback compared with when no form of feedback was provided. Computerized visual feedback is a valuable adjunct to robotic-assisted gait training. It represents a relevant tool to increase patients' motor output, involvement, and motivation during gait training, similar to verbal instructions by a therapist.

  20. Using Interactive Visualization to Analyze Solid Earth Data and Geodynamics Models

    NASA Astrophysics Data System (ADS)

    Kellogg, L. H.; Kreylos, O.; Billen, M. I.; Hamann, B.; Jadamec, M. A.; Rundle, J. B.; van Aalsburg, J.; Yikilmaz, M. B.

    2008-12-01

    The geological sciences are challenged to manage and interpret increasing volumes of data as observations and simulations increase in size and complexity. Major projects such as EarthScope and GeoEarthScope are producing the data needed to characterize the structure and kinematics of Earth's surface and interior at unprecedented resolution. At the same time, high-performance computing enables high-precision and fine- detail simulation of geodynamics processes, complementing the observational data. To facilitate interpretation and analysis of these datasets, to evaluate models, and to drive future calculations, we have developed methods of interactive visualization with a special focus on using immersive virtual reality (VR) environments to interact with models of Earth's surface and interior. VR has traditionally been used primarily as a presentation tool allowing active navigation through data. Reaping the full intellectual benefits of immersive VR as a tool for accelerated scientific analysis requires building on the method's strengths, that is, using both 3D perception and interaction with observed or simulated data. Our approach to VR takes advantage of the specialized skills of geoscientists who are trained to interpret geological and geophysical data generated from field observations. Interactive tools allow the scientist to explore and interpret geodynamic models, tomographic models, and topographic observations, while feature extraction tools support quantitative measurement of structures that emerge from numerical simulations or field observations. The use of VR technology enables us to improve our interpretation of crust and mantle structure and of geodynamical processes. Mapping tools based on computer visualization allow virtual "field studies" in inaccessible regions, and an interactive tool allows us to construct digital fault models for use in numerical models. 
Using the interactive tools on a high-end platform, such as an immersive virtual reality room known as a Cave Automatic Virtual Environment (CAVE), enables the scientist to stand inside a three-dimensional dataset while taking measurements. The CAVE involves three or more projection surfaces arranged as walls in a room. Stereo projectors combined with a motion tracking system create an immersive experience akin to carrying out research in the field. This high-end system provides significant advantages for scientists working with complex volumetric data. The interactive tools also work on low-cost platforms that provide stereo views and the potential for interactivity, such as a Geowall or a 3D-enabled TV. The Geowall is also a well-established tool for education and, in combination with the tools we have developed, enables the rapid transfer of research data and new knowledge to the classroom. The interactive visualization tools can also be used on a desktop or laptop with or without stereo capability. Further information about the Virtual Reality User Interface (VRUI), the 3DVisualizer, the virtual mapping tools, and the LIDAR viewer can be found on the KeckCAVES website, www.keckcaves.org.

  1. Time limits during visual foraging reveal flexible working memory templates.

    PubMed

    Kristjánsson, Tómas; Thornton, Ian M; Kristjánsson, Árni

    2018-06-01

    During difficult foraging tasks, humans rarely switch between target categories, but switch frequently during easier foraging. Does this reflect fundamental limits on visual working memory (VWM) capacity or simply strategic choice due to effort? Our participants performed time-limited or unlimited foraging tasks where they tapped stimuli from 2 target categories while avoiding items from 2 distractor categories. These time limits should have no effect if capacity imposes limits on VWM representations but more flexible VWM could allow observers to use VWM according to task demands in each case. We found that with time limits, participants switched more frequently and switch-costs became much smaller than during unlimited foraging. Observers can therefore switch between complex (conjunction) target categories when needed. We propose that while maintaining many complex templates in working memory is effortful and observers avoid this, they can do so if this fits task demands, showing the flexibility of working memory representations used for visual exploration. This is in contrast with recent proposals, and we discuss the implications of these findings for theoretical accounts of working memory. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  2. Augmented Visual Experience of Simulated Solar Phenomena

    NASA Astrophysics Data System (ADS)

    Tucker, A. O., IV; Berardino, R. A.; Hahne, D.; Schreurs, B.; Fox, N. J.; Raouafi, N.

    2017-12-01

    The Parker Solar Probe (PSP) mission will explore the Sun's corona, studying the solar wind, flares, and coronal mass ejections. The effects of these phenomena can impact the technology that we use in ways that are not readily apparent, including affecting satellite communications and power grids. Determining the structure and dynamics of coronal magnetic fields, tracing the flow of energy that heats the corona, and exploring dusty plasma near the Sun to understand its influence on solar wind and energetic particle formation requires a suite of sensors on board the PSP spacecraft that are engineered to observe specific phenomena. Using models of these sensors and simulated observational data, we can visualize what the PSP spacecraft will "see" during its multiple passes around the Sun. Augmented reality (AR) technologies enable convenient user access to massive data sets. We are developing an application that allows users to experience environmental data from the point of view of the PSP spacecraft in AR using the Microsoft HoloLens. Observational data, including imagery, magnetism, temperature, and density, are visualized in 4D within the user's immediate environment. Our application provides an educational tool for comprehending the complex relationships of observational data, which aids in our understanding of the Sun.

  3. Visual and Auditory Components in the Perception of Asynchronous Audiovisual Speech

    PubMed Central

    Alcalá-Quintana, Rocío

    2015-01-01

    Research on asynchronous audiovisual speech perception manipulates experimental conditions to observe their effects on synchrony judgments. Probabilistic models establish a link between the sensory and decisional processes underlying such judgments and the observed data, via interpretable parameters that allow testing hypotheses and making inferences about how experimental manipulations affect such processes. Two models of this type have recently been proposed, one based on independent channels and the other using a Bayesian approach. Both models are fitted here to a common data set, with a subsequent analysis of the interpretation they provide about how experimental manipulations affected the processes underlying perceived synchrony. The data consist of synchrony judgments as a function of audiovisual offset in a speech stimulus, under four within-subjects manipulations of the quality of the visual component. The Bayesian model could not accommodate asymmetric data, was rejected by goodness-of-fit statistics for 8/16 observers, and was found to be nonidentifiable, which renders its parameter estimates uninterpretable. The independent-channels model captured asymmetric data, was rejected for only 1/16 observers, and identified how sensory and decisional processes mediating asynchronous audiovisual speech perception are affected by manipulations that only alter the quality of the visual component of the speech signal. PMID:27551361

  4. Interactive visual exploration and refinement of cluster assignments.

    PubMed

    Kern, Michael; Lex, Alexander; Gehlenborg, Nils; Johnson, Chris R

    2017-09-12

    With ever-increasing amounts of data produced in biology research, scientists are in need of efficient data analysis methods. Cluster analysis, combined with visualization of the results, is one such method that can be used to make sense of large data volumes. At the same time, cluster analysis is known to be imperfect and depends on the choice of algorithms, parameters, and distance measures. Most clustering algorithms don't properly account for ambiguity in the source data, as records are often assigned to discrete clusters, even if an assignment is unclear. While there are metrics and visualization techniques that allow analysts to compare clusterings or to judge cluster quality, there is no comprehensive method that allows analysts to evaluate, compare, and refine cluster assignments based on the source data, derived scores, and contextual data. In this paper, we introduce a method that explicitly visualizes the quality of cluster assignments, allows comparisons of clustering results and enables analysts to manually curate and refine cluster assignments. Our methods are applicable to matrix data clustered with partitional, hierarchical, and fuzzy clustering algorithms. Furthermore, we enable analysts to explore clustering results in context of other data, for example, to observe whether a clustering of genomic data results in a meaningful differentiation in phenotypes. Our methods are integrated into Caleydo StratomeX, a popular, web-based, disease subtype analysis tool. We show in a usage scenario that our approach can reveal ambiguities in cluster assignments and produce improved clusterings that better differentiate genotypes and phenotypes.
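    One simple way to make assignment ambiguity explicit, in the spirit of the method, is to score each record by the entropy of its fuzzy cluster memberships. The membership matrix below is invented, and this is an illustrative metric rather than necessarily the one StratomeX uses:

```python
import numpy as np

# Hypothetical fuzzy membership matrix: 4 records x 3 clusters, rows sum to 1.
M = np.array([[0.98, 0.01, 0.01],
              [0.50, 0.45, 0.05],
              [0.34, 0.33, 0.33],
              [0.05, 0.90, 0.05]])

# Normalised entropy of each record's memberships:
# 0 = unambiguous assignment, 1 = maximally ambiguous.
eps = 1e-12
entropy = -(M * np.log(M + eps)).sum(axis=1) / np.log(M.shape[1])
print(np.round(entropy, 2))
```

    Mapping such scores to a visual channel (e.g. saturation) is one way to surface exactly the records whose assignments an analyst may want to curate by hand.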

  5. Calibration of visually guided reaching is driven by error-corrective learning and internal dynamics.

    PubMed

    Cheng, Sen; Sabes, Philip N

    2007-04-01

    The sensorimotor calibration of visually guided reaching changes on a trial-to-trial basis in response to random shifts in the visual feedback of the hand. We show that a simple linear dynamical system is sufficient to model the dynamics of this adaptive process. In this model, an internal variable represents the current state of sensorimotor calibration. Changes in this state are driven by error feedback signals, which consist of the visually perceived reach error, the artificial shift in visual feedback, or both. Subjects correct for ≥20% of the error observed on each movement, despite being unaware of the visual shift. The state of adaptation is also driven by internal dynamics, consisting of a decay back to a baseline state and a "state noise" process. State noise includes any source of variability that directly affects the state of adaptation, such as variability in sensory feedback processing, the computations that drive learning, or the maintenance of the state. This noise is accumulated in the state across trials, creating temporal correlations in the sequence of reach errors. These correlations allow us to distinguish state noise from sensorimotor performance noise, which arises independently on each trial from random fluctuations in the sensorimotor pathway. We show that these two noise sources contribute comparably to the overall magnitude of movement variability. Finally, the dynamics of adaptation measured with random feedback shifts generalizes to the case of constant feedback shifts, allowing for a direct comparison of our results with more traditional blocked-exposure experiments.
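    The trial-to-trial dynamics described can be sketched as a scalar linear dynamical system. The parameter values below are illustrative, not the fitted ones, and state noise is switched off to expose the deterministic steady state under a constant feedback shift:

```python
a, b = 0.98, 0.25      # retention toward baseline and error-correction rate
shift = 2.0            # constant artificial shift of the visual feedback

x = 0.0                # state of sensorimotor calibration
for _ in range(300):
    error = x + shift  # visually perceived reach error on this trial
    x = a * x - b * error  # decay plus error-driven correction (noise omitted)

# Steady state of x[t+1] = a*x[t] - b*(x[t] + s):  x* = -b*s / (1 - a + b)
x_star = -b * shift / (1 - a + b)
print(round(x, 3), round(x_star, 3))
```

    Adding a noise term to the state update would accumulate across trials and produce the temporal correlations in reach errors that the authors use to separate state noise from performance noise.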

  6. Visualization of diversity in large multivariate data sets.

    PubMed

    Pham, Tuan; Hess, Rob; Ju, Crystal; Zhang, Eugene; Metoyer, Ronald

    2010-01-01

    Understanding the diversity of a set of multivariate objects is an important problem in many domains, including ecology, college admissions, investing, machine learning, and others. However, to date, very little work has been done to help users achieve this kind of understanding. Visual representation is especially appealing for this task because it offers the potential to allow users to efficiently observe the objects of interest in a direct and holistic way. Thus, in this paper, we attempt to formalize the problem of visualizing the diversity of a large (more than 1000 objects), multivariate (more than 5 attributes) data set as one worth deeper investigation by the information visualization community. In doing so, we contribute a precise definition of diversity, a set of requirements for diversity visualizations based on this definition, and a formal user study design intended to evaluate the capacity of a visual representation for communicating diversity information. Our primary contribution, however, is a visual representation, called the Diversity Map, for visualizing diversity. An evaluation of the Diversity Map using our study design shows that users can judge elements of diversity consistently and as or more accurately than when using the only other representation specifically designed to visualize diversity.

  7. Endogenous Sequential Cortical Activity Evoked by Visual Stimuli

    PubMed Central

    Miller, Jae-eun Kang; Hamm, Jordan P.; Jackson, Jesse; Yuste, Rafael

    2015-01-01

    Although the functional properties of individual neurons in primary visual cortex have been studied intensely, little is known about how neuronal groups could encode changing visual stimuli using temporal activity patterns. To explore this, we used in vivo two-photon calcium imaging to record the activity of neuronal populations in primary visual cortex of awake mice in the presence and absence of visual stimulation. Multidimensional analysis of the network activity allowed us to identify neuronal ensembles defined as groups of cells firing in synchrony. These synchronous groups of neurons were themselves activated in sequential temporal patterns, which repeated at much higher proportions than chance and were triggered by specific visual stimuli such as natural visual scenes. Interestingly, sequential patterns were also present in recordings of spontaneous activity without any sensory stimulation and were accompanied by precise firing sequences at the single-cell level. Moreover, intrinsic dynamics could be used to predict the occurrence of future neuronal ensembles. Our data demonstrate that visual stimuli recruit similar sequential patterns to the ones observed spontaneously, consistent with the hypothesis that already existing Hebbian cell assemblies firing in predefined temporal sequences could be the microcircuit substrate that encodes visual percepts changing in time. PMID:26063915

  8. Visual information mining in remote sensing image archives

    NASA Astrophysics Data System (ADS)

    Pelizzari, Andrea; Descargues, Vincent; Datcu, Mihai P.

    2002-01-01

    The present article focuses on the development of interactive exploratory tools for visually mining the image content in large remote sensing archives. Two aspects are treated: the iconic visualization of the global information in the archive and the progressive visualization of the image details. The proposed methods are integrated into the Image Information Mining (I2M) system. The images and image structure in the I2M system are indexed based on a probabilistic approach. The resulting links are managed by a relational database. Both the intrinsic complexity of the observed images and the diversity of user requests result in a great number of associations in the database. Thus new tools have been designed to visualize, in iconic representation, the relationships created during a query or information mining operation: the visualization of the query results positioned on the geographical map, a quick-looks gallery, visualization of the measure of goodness of the query, and visualization of the image space for statistical evaluation purposes. Additionally, the I2M system is enhanced with progressive detail visualization in order to allow better access for operator inspection. I2M is a three-tier Java architecture and is optimized for the Internet.

  9. Video game experience and its influence on visual attention parameters: an investigation using the framework of the Theory of Visual Attention (TVA).

    PubMed

    Schubert, Torsten; Finke, Kathrin; Redel, Petra; Kluckow, Steffen; Müller, Hermann; Strobach, Tilo

    2015-05-01

    Experts with video game experience, in contrast to non-experienced persons, are superior in multiple domains of visual attention. However, it is an open question which basic aspects of attention underlie this superiority. We approached this question using the framework of the Theory of Visual Attention (TVA) with tools that allowed us to assess various parameters that are related to different visual attention aspects (e.g., perception threshold, processing speed, visual short-term memory storage capacity, top-down control, spatial distribution of attention) and that are measurable on the same experimental basis. In Experiment 1, we found advantages of video game experts in perception threshold and visual processing speed, the latter being restricted to the lower positions of the computer display used. The observed advantages were not significantly moderated by general person-related characteristics such as personality traits, sensation seeking, intelligence, social anxiety, or health status. Experiment 2 tested a potential causal link between the expert advantages and video game practice with an intervention protocol. It found no effects of action video gaming on perception threshold, visual short-term memory storage capacity, iconic memory storage, top-down control, and spatial distribution of attention after 15 days of training. However, the observed selective improvement of processing speed at the lower positions of the computer screen after video game training, together with retest effects, is suggestive of limited possibilities to improve basic aspects of visual attention (TVA) with practice. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. Visual Working Memory Enhances the Neural Response to Matching Visual Input.

    PubMed

    Gayet, Surya; Guggenmos, Matthias; Christophel, Thomas B; Haynes, John-Dylan; Paffen, Chris L E; Van der Stigchel, Stefan; Sterzer, Philipp

    2017-07-12

    Visual working memory (VWM) is used to maintain visual information available for subsequent goal-directed behavior. The content of VWM has been shown to affect the behavioral response to concurrent visual input, suggesting that visual representations originating from VWM and from sensory input draw upon a shared neural substrate (i.e., a sensory recruitment stance on VWM storage). Here, we hypothesized that visual information maintained in VWM would enhance the neural response to concurrent visual input that matches the content of VWM. To test this hypothesis, we measured fMRI BOLD responses to task-irrelevant stimuli acquired from 15 human participants (three males) performing a concurrent delayed match-to-sample task. In this task, observers were sequentially presented with two shape stimuli and a retro-cue indicating which of the two shapes should be memorized for subsequent recognition. During the retention interval, a task-irrelevant shape (the probe) was briefly presented in the peripheral visual field, which could either match or mismatch the shape category of the memorized stimulus. We show that this probe stimulus elicited a stronger BOLD response, and allowed for increased shape-classification performance, when it matched rather than mismatched the concurrently memorized content, despite identical visual stimulation. Our results demonstrate that VWM enhances the neural response to concurrent visual input in a content-specific way. This finding is consistent with the view that neural populations involved in sensory processing are recruited for VWM storage, and it provides a common explanation for a plethora of behavioral studies in which VWM-matching visual input elicits a stronger behavioral and perceptual response. SIGNIFICANCE STATEMENT Humans heavily rely on visual information to interact with their environment and frequently must memorize such information for later use. 
Visual working memory allows for maintaining such visual information in the mind's eye after termination of its retinal input. It is hypothesized that information maintained in visual working memory relies on the same neural populations that process visual input. Accordingly, the content of visual working memory is known to affect our conscious perception of concurrent visual input. Here, we demonstrate for the first time that visual input elicits an enhanced neural response when it matches the content of visual working memory, both in terms of signal strength and information content. Copyright © 2017 the authors 0270-6474/17/376638-10$15.00/0.

  11. Interactive 3D visualization for theoretical virtual observatories

    NASA Astrophysics Data System (ADS)

    Dykes, T.; Hassan, A.; Gheller, C.; Croton, D.; Krokos, M.

    2018-06-01

    Virtual observatories (VOs) are online hubs of scientific knowledge. They encompass a collection of platforms dedicated to the storage and dissemination of astronomical data, from simple data archives to e-research platforms offering advanced tools for data exploration and analysis. Whilst the more mature platforms within VOs primarily serve the observational community, there are also services fulfilling a similar role for theoretical data. Scientific visualization can be an effective tool for analysis and exploration of data sets made accessible through web platforms for theoretical data, which often contain spatial dimensions and properties inherently suitable for visualization via e.g. mock imaging in 2D or volume rendering in 3D. We analyse the current state of 3D visualization for big theoretical astronomical data sets through scientific web portals and virtual observatory services. We discuss some of the challenges for interactive 3D visualization and how it can augment the workflow of users in a virtual observatory context. Finally we showcase a lightweight client-server visualization tool for particle-based data sets, allowing quantitative visualization via data filtering, highlighting two example use cases within the Theoretical Astrophysical Observatory.

  12. Sustained multifocal attentional enhancement of stimulus processing in early visual areas predicts tracking performance.

    PubMed

    Störmer, Viola S; Winther, Gesche N; Li, Shu-Chen; Andersen, Søren K

    2013-03-20

    Keeping track of multiple moving objects is an essential ability of visual perception. However, the mechanisms underlying this ability are not well understood. We instructed human observers to track five or seven independent randomly moving target objects amid identical nontargets and recorded steady-state visual evoked potentials (SSVEPs) elicited by these stimuli. Visual processing of moving targets, as assessed by SSVEP amplitudes, was continuously facilitated relative to the processing of identical but irrelevant nontargets. The cortical sources of this enhancement were located to areas including early visual cortex V1-V3 and motion-sensitive area MT, suggesting that the sustained multifocal attentional enhancement during multiple object tracking already operates at hierarchically early stages of visual processing. Consistent with this interpretation, the magnitude of attentional facilitation during tracking in a single trial predicted the speed of target identification at the end of the trial. Together, these findings demonstrate that attention can flexibly and dynamically facilitate the processing of multiple independent object locations in early visual areas and thereby allow for tracking of these objects.

  13. Sensitivity to synchronicity of biological motion in normal and amblyopic vision

    PubMed Central

    Luu, Jennifer Y.; Levi, Dennis M.

    2017-01-01

    Amblyopia is a developmental disorder of spatial vision that results from abnormal early visual experience usually due to the presence of strabismus, anisometropia, or both strabismus and anisometropia. Amblyopia results in a range of visual deficits that cannot be corrected by optics because the deficits reflect neural abnormalities. Biological motion refers to the motion patterns of living organisms, and is normally displayed as points of lights positioned at the major joints of the body. In this experiment, our goal was twofold. We wished to examine whether the human visual system in people with amblyopia retained the higher-level processing capabilities to extract visual information from the synchronized actions of others, therefore retaining the ability to detect biological motion. Specifically, we wanted to determine if the synchronized interaction of two agents performing a dancing routine allowed the amblyopic observer to use the actions of one agent to predict the expected actions of a second agent. We also wished to establish whether synchronicity sensitivity (detection of synchronized versus desynchronized interactions) is impaired in amblyopic observers relative to normal observers. The two aims are differentiated in that the first aim looks at whether synchronized actions result in improved expected action predictions while the second aim quantitatively compares synchronicity sensitivity, or the ratio of desynchronized to synchronized detection sensitivities, to determine if there is a difference between normal and amblyopic observers. Our results show that the ability to detect biological motion requires more samples in both eyes of amblyopes than in normal control observers. The increased sample threshold is not the result of low-level losses but may reflect losses in feature integration due to undersampling in the amblyopic visual system. 
However, like normal observers, amblyopes are more sensitive to synchronized versus desynchronized interactions, indicating that higher-level processing of biological motion remains intact. We also found no impairment in synchronicity sensitivity in the amblyopic visual system relative to the normal visual system. Since there is no impairment in synchronicity sensitivity in either the nonamblyopic or amblyopic eye of amblyopes, our results suggest that the higher order processing of biological motion is intact. PMID:23474301

  14. Distributed Observer Network

    NASA Technical Reports Server (NTRS)

    Conroy, Michael; Mazzone, Rebecca; Little, William; Elfrey, Priscilla; Mann, David; Mabie, Kevin; Cuddy, Thomas; Loundermon, Mario; Spiker, Stephen; McArthur, Frank

    2010-01-01

    The Distributed Observer Network (DON) is a NASA-collaborative environment that leverages game technology to bring three-dimensional simulations to conventional desktop and laptop computers in order to allow teams of engineers working on design and operations, either individually or in groups, to view and collaborate on 3D representations of data generated by authoritative tools such as Delmia Envision, Pro/Engineer, or Maya. The DON takes models and telemetry from these sources and, using commercial game engine technology, displays the simulation results in a 3D visual environment. DON has been designed to enhance accessibility and user ability to observe and analyze visual simulations in real time. A variety of NASA mission segment simulations [Synergistic Engineering Environment (SEE) data, NASA Enterprise Visualization Analysis (NEVA) ground processing simulations, the DSS simulation for lunar operations, and the Johnson Space Center (JSC) TRICK tool for guidance, navigation, and control analysis] were experimented with. Desired functionalities [i.e., TiVo-like functions, the capability to communicate textually or via Voice-over-Internet Protocol (VoIP) among team members, and the ability to write and save notes to be accessed later] were targeted. The resulting DON application was slated for early 2008 release to support simulation use for the Constellation Program and its teams. Those using the DON connect through a client that runs on their PC or Mac. This enables them to observe and analyze the simulation data as their schedule allows, and to review it as frequently as desired. DON team members can move freely within the virtual world. Preset camera points can be established, enabling team members to jump to specific views. This improves opportunities for shared analysis of options, design reviews, tests, operations, training, and evaluations, and improves prospects for verification of requirements, issues, and approaches among dispersed teams.

  15. SERVIR: The Regional Visualization and Monitoring System

    NASA Technical Reports Server (NTRS)

    Irwin, Daniel E.

    2010-01-01

    This slide presentation reviews the SERVIR program. SERVIR is a partnership between NASA and USAID with three international nodes: Central America, Africa, and the Himalaya region. SERVIR, using satellite observations and ground-based observations, is used by decision makers to allow for improved monitoring of air quality, extreme weather, biodiversity, and changes in land cover, and has also been used to respond to environmental threats such as wildfires, floods, landslides, harmful algal blooms, and earthquakes.

  16. Improved Visualization of Glaucomatous Retinal Damage Using High-speed Ultrahigh-Resolution Optical Coherence Tomography

    PubMed Central

    Mumcuoglu, Tarkan; Wollstein, Gadi; Wojtkowski, Maciej; Kagemann, Larry; Ishikawa, Hiroshi; Gabriele, Michelle L.; Srinivasan, Vivek; Fujimoto, James G.; Duker, Jay S.; Schuman, Joel S.

    2009-01-01

    Purpose To test if improving optical coherence tomography (OCT) resolution and scanning speed improves the visualization of glaucomatous structural changes as compared with conventional OCT. Design Prospective observational case series. Participants Healthy and glaucomatous subjects in various stages of disease. Methods Subjects were scanned at a single visit with commercially available OCT (StratusOCT) and high-speed ultrahigh-resolution (hsUHR) OCT. The prototype hsUHR OCT had an axial resolution of 3.4 μm (3 times higher than StratusOCT), with an A-scan rate of 24 000 hertz (60 times faster than StratusOCT). The fast scanning rate allowed the acquisition of novel scanning patterns such as raster scanning, which provided dense coverage of the retina and optic nerve head. Main Outcome Measures Discrimination of retinal tissue layers and detailed visualization of retinal structures. Results High-speed UHR OCT provided a marked improvement in tissue visualization as compared with StratusOCT. This allowed the identification of numerous retinal layers, including the ganglion cell layer, which is specifically prone to glaucomatous damage. Fast scanning and the enhanced A-scan registration properties of hsUHR OCT provided maps of the macula and optic nerve head with unprecedented detail, including en face OCT fundus images and retinal nerve fiber layer thickness maps. Conclusion High-speed UHR OCT improves visualization of the tissues relevant to the detection and management of glaucoma. PMID:17884170
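    The resolution and speed factors quoted above imply the conventional system's operating point; a quick arithmetic check using only the values stated in the abstract:

```python
# Specifications of the hsUHR OCT prototype, as quoted in the abstract
hsuhr_axial_resolution_um = 3.4   # axial resolution, micrometers
hsuhr_ascan_rate_hz = 24_000      # A-scan rate, hertz

# The abstract's "3 times higher" resolution and "60 times faster"
# scan-rate factors imply the conventional StratusOCT operating point:
stratus_axial_resolution_um = hsuhr_axial_resolution_um * 3  # ~10 um
stratus_ascan_rate_hz = hsuhr_ascan_rate_hz / 60             # 400 A-scans/s
```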

  17. Introduction to Photolithography: Preparation of Microscale Polymer Silhouettes

    ERIC Educational Resources Information Center

    Berkowski, Kimberly L.; Plunkett, Kyle N.; Moore, Jeffrey S.

    2005-01-01

    A study describes an easy procedure based on a negative photoresist process designed for junior high or high school students, which will introduce them to the key terms and concepts of photolithography. The experiment allows students to visualize the fundamental process behind microchip fabrication, observe the rapid prototyping enabled by such a…

  18. BingEO: Enable Distributed Earth Observation Data for Environmental Research

    NASA Astrophysics Data System (ADS)

    Wu, H.; Yang, C.; Xu, Y.

    2010-12-01

    Our planet is facing great environmental challenges including global climate change, environmental vulnerability, extreme poverty, and a shortage of clean cheap energy. To address these problems, scientists are developing various models to analyze, forecast, and simulate geospatial phenomena to support critical decision making. These models not only challenge our computing technology, but also challenge us to meet the huge demand for earth observation data. Through various policies and programs, open and free sharing of earth observation data is advocated in earth science. Currently, thousands of data sources are freely available online through open standards such as Web Map Service (WMS), Web Feature Service (WFS) and Web Coverage Service (WCS). Seamless sharing and access to these resources call for a spatial Cyberinfrastructure (CI) to enable the use of spatial data for the advancement of related applied sciences including environmental research. Based on the Microsoft Bing Search Engine and Bing Maps, a seamlessly integrated visual tool is under development to bridge the gap between researchers/educators and earth observation data providers. With this tool, earth science researchers/educators can easily and visually find the best data sets for their research and education. The tool includes a registry and its supporting module on the server side and an integrated portal as its client. The proposed portal, Bing Earth Observation (BingEO), is based on Bing Search and Bing Maps to: 1) Use Bing Search to discover Web Map Services (WMS) resources available over the internet; 2) Develop and maintain a registry to manage all the available WMS resources and constantly monitor their service quality; 3) Allow users to manually register data services; 4) Provide a Bing Maps-based Web application to visualize the data on a high-quality and easy-to-manipulate map platform and enable users to select the best data layers online. 
Given the amount of observation data accumulated already and still growing, BingEO will allow these resources to be utilized more widely, intensively, efficiently and economically in earth science applications.

  19. A Perspective of Our Planet's Atmosphere, Land, and Oceans: A View from Space

    NASA Technical Reports Server (NTRS)

    King, Michael D.; Tucker, Compton

    2002-01-01

    A bird's-eye view of the Earth from afar and up close reveals the power and magnificence of the Earth and juxtaposes the simultaneous impacts and powerlessness of humankind. The NASA Electronic Theater presents Earth science observations and visualizations in an historical perspective. Fly in from outer space to South America with its Andes Mountains and the glaciers of Patagonia, ending up close and personal in Buenos Aires. See the latest spectacular images from NASA and NOAA remote sensing missions like GOES, TRMM, Landsat 7, QuikScat, and Terra, which will be visualized and explained in the context of global change. See visualizations of global data sets currently available from Earth orbiting satellites, including the Earth at night with its city lights, aerosols from biomass burning in South America and Africa, and global cloud properties. See the dynamics of vegetation growth and decay over South America over 17 years, in contrast to the North American and African continents. New visualization tools allow us to roam and zoom through massive global mosaic images, from the Himalayas to the dynamics of the Pacific Ocean that affect the climate of South and North America, including Landsat and Terra tours of South America and Africa showing land use and land cover change from Patagonia to the Amazon Basin, the Andes Mountains, the Pantanal, and the Bolivian highlands. Landsat fly-ins to Rio de Janeiro and Buenos Aires will be shown to emphasize the capabilities of new satellite technology to visualize our natural environment. Spectacular new visualizations of the global atmosphere and oceans are shown. See massive dust storms sweeping across Africa and across the Atlantic to the Caribbean and Amazon basin. See ocean vortices and currents that bring up the nutrients to feed tiny phytoplankton and draw the fish, giant whales, and fishermen. 
See how the ocean blooms in response to these currents and El Nino/La Nina climate changes. We will illustrate these and other topics with a dynamic theater-style presentation, along with animations of satellite launch deployments and orbital mapping to highlight aspects of Earth observations from space.

  20. MATISSE a web-based tool to access, visualize and analyze high resolution minor bodies observation

    NASA Astrophysics Data System (ADS)

    Zinzi, Angelo; Capria, Maria Teresa; Palomba, Ernesto; Antonelli, Lucio Angelo; Giommi, Paolo

    2016-07-01

    In recent years planetary exploration missions have acquired data from minor bodies (i.e., dwarf planets, asteroids, and comets) at a level of detail never reached before. Since these objects often present very irregular shapes (as in the case of comet 67P/Churyumov-Gerasimenko, target of the ESA Rosetta mission), "classical" two-dimensional projections of observations are difficult to understand. With the aim of providing the scientific community a tool to access, visualize and analyze data in a new way, the ASI Science Data Center started to develop MATISSE (Multi-purposed Advanced Tool for the Instruments for the Solar System Exploration - http://tools.asdc.asi.it/matisse.jsp) in late 2012. This tool allows 3D web-based visualization of data acquired by planetary exploration missions: the output can be either the straightforward projection of the selected observation over the shape model of the target body or the visualization of a higher-order product (average/mosaic, difference, ratio, RGB) computed directly online with MATISSE. Standard outputs of the tool also comprise downloadable files to be used with GIS software (GeoTIFF and ENVI formats) and very high-resolution 3D files to be viewed with the free software ParaView. So far, the tool has been used first and most frequently to visualize data acquired by the VIRTIS-M instrument onboard Rosetta observing comet 67P. The success of this task, well represented by the number of published works that used images made with MATISSE, confirmed the need for a different approach to correctly visualize data coming from irregularly shaped bodies. In the near future the datasets available to MATISSE are planned to be extended, starting with the addition of VIR-Dawn observations of both Vesta and Ceres, and also using standard protocols to access data stored in external repositories, such as NASA ODE and the Planetary VO.

  1. Low target prevalence is a stubborn source of errors in visual search tasks

    PubMed Central

    Wolfe, Jeremy M.; Horowitz, Todd S.; Van Wert, Michael J.; Kenner, Naomi M.; Place, Skyler S.; Kibbi, Nour

    2009-01-01

    In visual search tasks, observers look for targets in displays containing distractors. Likelihood that targets will be missed varies with target prevalence, the frequency with which targets are presented across trials. Miss error rates are much higher at low target prevalence (1–2%) than at high prevalence (50%). Unfortunately, low prevalence is characteristic of important search tasks like airport security and medical screening where miss errors are dangerous. A series of experiments show this prevalence effect is very robust. In signal detection terms, the prevalence effect can be explained as a criterion shift and not a change in sensitivity. Several efforts to induce observers to adopt a better criterion fail. However, a regime of brief retraining periods with high prevalence and full feedback allows observers to hold a good criterion during periods of low prevalence with no feedback. PMID:17999575
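    The criterion-shift account can be made concrete with an equal-variance signal detection sketch: holding sensitivity (d') fixed while moving the decision criterion upward reproduces the elevated miss rate. The numbers below are illustrative only, not fitted to the study's data.

```python
from statistics import NormalDist

phi = NormalDist().cdf      # standard normal CDF
z = NormalDist().inv_cdf    # inverse CDF (z-transform)

def hit_fa_rates(d_prime, criterion):
    """Equal-variance Gaussian observer: noise ~ N(0, 1), target ~ N(d', 1).
    The observer responds 'target present' when the internal response
    exceeds `criterion` (measured from the noise mean)."""
    hit = 1 - phi(criterion - d_prime)
    false_alarm = 1 - phi(criterion)
    return hit, false_alarm

D_PRIME = 2.0  # illustrative sensitivity, held fixed across prevalence levels

hit_hi, fa_hi = hit_fa_rates(D_PRIME, 1.0)  # neutral criterion (high prevalence)
hit_lo, fa_lo = hit_fa_rates(D_PRIME, 2.0)  # conservative shift (low prevalence)
miss_hi, miss_lo = 1 - hit_hi, 1 - hit_lo   # miss rate rises sharply

# Sensitivity recovered from hits and false alarms is unchanged in both cases:
d_recovered_hi = z(hit_hi) - z(fa_hi)
d_recovered_lo = z(hit_lo) - z(fa_lo)
```

    With these illustrative settings the miss rate climbs from roughly 16% to 50% while the recovered d' stays at 2.0, which is the signature of a criterion shift rather than a sensitivity change.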

  2. Visual-search models for location-known detection tasks

    NASA Astrophysics Data System (ADS)

    Gifford, H. C.; Karbaschi, Z.; Banerjee, K.; Das, M.

    2017-03-01

    Lesion-detection studies that analyze a fixed target position are generally considered predictive of studies involving lesion search, but the extent of the correlation often goes untested. The purpose of this work was to develop a visual-search (VS) model observer for location-known tasks that, coupled with previous work on localization tasks, would allow efficient same-observer assessments of how search and other task variations can alter study outcomes. The model observer featured adjustable parameters to control the search radius around the fixed lesion location and the minimum separation between suspicious locations. Comparisons were made against human observers, a channelized Hotelling observer and a nonprewhitening observer with eye filter in a two-alternative forced-choice study with simulated lumpy background images containing stationary anatomical and quantum noise. These images modeled single-pinhole nuclear medicine scans with different pinhole sizes. When the VS observer's search radius was optimized with training images, close agreement was obtained with human-observer results. Some performance differences between the humans could be explained by varying the model observer's separation parameter. The range of optimal pinhole sizes identified by the VS observer was in agreement with the range determined with the channelized Hotelling observer.
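    A minimal sketch of how the two adjustable parameters could operate on a scalar "suspiciousness" map: a search radius restricting candidates to the neighborhood of the known lesion location, and a minimum separation suppressing nearby duplicates. The function and parameter names are invented for illustration and are not the paper's implementation.

```python
import numpy as np

def suspicious_locations(score_map, cue, search_radius, min_sep, thresh):
    """Greedy candidate selection: accept the highest-scoring pixels above
    `thresh` that lie within `search_radius` of the cued lesion location,
    suppressing any candidate closer than `min_sep` to one already accepted."""
    ys, xs = np.indices(score_map.shape)
    near = (ys - cue[0]) ** 2 + (xs - cue[1]) ** 2 <= search_radius ** 2
    ok = near & (score_map >= thresh)
    candidates = sorted(
        zip(score_map[ok].tolist(), ys[ok].tolist(), xs[ok].tolist()),
        reverse=True,  # best score first
    )
    accepted = []
    for score, y, x in candidates:
        if all((y - ay) ** 2 + (x - ax) ** 2 >= min_sep ** 2
               for _, ay, ax in accepted):
            accepted.append((score, y, x))
    return accepted

# Toy map: two nearby peaks inside the search radius, one peak outside it
smap = np.zeros((5, 5))
smap[2, 2], smap[2, 3], smap[0, 0] = 1.0, 0.9, 0.8
found = suspicious_locations(smap, cue=(2, 2), search_radius=2,
                             min_sep=2, thresh=0.5)
```

    In the toy example the off-radius peak is excluded by the search radius and the weaker neighbor is suppressed by the separation parameter, leaving a single reported location.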

  3. Cal-Adapt: California's Climate Data Resource and Interactive Toolkit

    NASA Astrophysics Data System (ADS)

    Thomas, N.; Mukhtyar, S.; Wilhelm, S.; Galey, B.; Lehmer, E.

    2016-12-01

    Cal-Adapt is a web-based application that provides an interactive toolkit and information clearinghouse to help agencies, communities, local planners, resource managers, and the public understand climate change risks and impacts at the local level. The website offers interactive, visually compelling, and useful data visualization tools that show how climate change might affect California using downscaled continental climate data. Cal-Adapt is supporting California's Fourth Climate Change Assessment through providing access to the wealth of modeled and observed data and adaptation-related information produced by California's scientific community. The site has been developed by UC Berkeley's Geospatial Innovation Facility (GIF) in collaboration with the California Energy Commission's (CEC) Research Program. The Cal-Adapt website allows decision makers, scientists and residents of California to turn research results and climate projections into effective adaptation decisions and policies. Since its release to the public in June 2011, Cal-Adapt has been visited by more than 94,000 unique visitors from over 180 countries, all 50 U.S. states, and 689 California localities. We will present several key visualizations that have been employed by Cal-Adapt's users to support their efforts to understand local impacts of climate change, indicate the breadth of data available, and delineate specific use cases. Recently, CEC and GIF have been developing and releasing Cal-Adapt 2.0, which includes updates and enhancements that increase its ease of use, information value, visualization tools, and data accessibility. We showcase how Cal-Adapt is evolving in response to feedback from a variety of sources to present finer-resolution downscaled data, and offer an open API that allows other organizations to access Cal-Adapt climate data and build domain-specific visualization and planning tools. 
Through a combination of locally relevant information, visualization tools, and access to primary data, Cal-Adapt allows users to investigate how the climate is projected to change in their areas of interest.

  4. Visualization of Electrical Field of Electrode Using Voltage-Controlled Fluorescence Release

    PubMed Central

    Jia, Wenyan; Wu, Jiamin; Gao, Di; Wang, Hao; Sun, Mingui

    2016-01-01

    In this study we propose an approach to directly visualize electrical current distribution at the electrode-electrolyte interface of a biopotential electrode. High-speed fluorescent microscopic images are acquired when an electric potential is applied across the interface to trigger the release of fluorescent material from the surface of the electrode. These images are analyzed computationally to obtain the distribution of the electric field from the fluorescent intensity of each pixel. Our approach allows direct observation of microscopic electrical current distribution around the electrode. Experiments are conducted to validate the feasibility of the fluorescent imaging method. PMID:27253615

  5. UVMAS: Venus ultraviolet-visual mapping spectrometer

    NASA Astrophysics Data System (ADS)

    Bellucci, G.; Zasova, L.; Altieri, F.; Nuccilli, F.; Ignatiev, N.; Moroz, V.; Khatuntsev, I.; Korablev, O.; Rodin, A.

    This paper summarizes the capabilities and technical solutions of an Ultraviolet-Visual Mapping Spectrometer designed for remote sensing of Venus from a planetary orbiter. The UVMAS consists of a multichannel camera with a spectral range of 0.19–0.49 μm which acquires data in several spectral channels (up to 400) with a spectral resolution of 0.58 nm. The instantaneous field of view of the instrument is 0.244 × 0.244 mrad. These characteristics allow: a) study of the upper cloud dynamics and chemistry; b) constraints on the unknown absorber; c) observation of the night-side airglow.

  6. Diagnostics of boundary layer transition by shear stress sensitive liquid crystals

    NASA Astrophysics Data System (ADS)

    Shapoval, E. S.

    2016-10-01

    Previous research indicates that the problem of boundary layer transition visualization on metal models in wind tunnels (WT), a fundamental question in experimental aerodynamics, is not yet solved. At TsAGI, together with the Khristianovich Institute of Theoretical and Applied Mechanics (ITAM), a method of shear-stress-sensitive liquid crystals (LC) allowing flow visualization was proposed. This method allows testing several flow conditions in one wind tunnel run and does not require covering the investigated model with a special heat-insulating coating that spoils the model geometry. The LC coating is easily applied to the model surface by spray or even by brush. Its thickness is about 40 micrometers and it does not spoil the surface quality. Initially the coating has a definite color; under shear stress the LC coating changes color, and this change is proportional to the shear stress. The whole process can be observed visually and is recorded by camera during the tests. The findings of the research showed that it is possible to visualize boundary layer transition, flow separation, shock waves, and the flow pattern as a whole. The proposed method of shear-stress-sensitive liquid crystals is promising for future research.

  7. 3D imaging of cleared human skin biopsies using light-sheet microscopy: A new way to visualize in-depth skin structure.

    PubMed

    Abadie, S; Jardet, C; Colombelli, J; Chaput, B; David, A; Grolleau, J-L; Bedos, P; Lobjois, V; Descargues, P; Rouquette, J

    2018-05-01

    Human skin is composed of the superimposition of tissue layers of various thicknesses and components. Histological staining of skin sections is the benchmark approach to analyse the organization and integrity of human skin biopsies; however, this approach does not allow 3D tissue visualization. Alternatively, confocal or two-photon microscopy is an effective approach to perform fluorescence-based 3D imaging. However, owing to light scattering, these methods display limited light penetration in depth. The objectives of this study were therefore to combine optical clearing and light-sheet fluorescence microscopy (LSFM) to perform in-depth optical sectioning of 5 mm-thick human skin biopsies and generate 3D images of entire human skin biopsies. A benzyl alcohol and benzyl benzoate solution was used to successfully optically clear entire formalin-fixed human skin biopsies, making them transparent. In-depth optical sectioning was performed with LSFM on the basis of tissue-autofluorescence observations. 3D image analysis of optical sections generated with LSFM was performed by using the Amira® software. This new approach allowed us to observe in situ the different layers and compartments of human skin, such as the stratum corneum, the dermis and epidermal appendages. With this approach, we easily performed 3D reconstruction to visualise an entire human skin biopsy. Finally, we demonstrated that this method is useful to visualise and quantify histological anomalies, such as epidermal hyperplasia. The combination of optical clearing and LSFM has new applications in dermatology and dermatological research by allowing 3D visualization and analysis of whole human skin biopsies. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  8. Automatic transfer function design for medical visualization using visibility distributions and projective color mapping.

    PubMed

    Cai, Lile; Tay, Wei-Liang; Nguyen, Binh P; Chui, Chee-Kong; Ong, Sim-Heng

    2013-01-01

    Transfer functions play a key role in volume rendering of medical data, but transfer function manipulation is unintuitive and can be time-consuming; achieving an optimal visualization of patient anatomy or pathology is difficult. To overcome this problem, we present a system for automatic transfer function design based on visibility distribution and projective color mapping. Instead of assigning opacity directly based on voxel intensity and gradient magnitude, the opacity transfer function is automatically derived by matching the observed visibility distribution to a target visibility distribution. An automatic color assignment scheme based on projective mapping is proposed to assign colors that allow for the visual discrimination of different structures, while also reflecting the degree of similarity between them. When our method was tested on several medical volumetric datasets, the key structures within the volume were clearly visualized with minimal user intervention. Copyright © 2013 Elsevier Ltd. All rights reserved.
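    The visibility-matching idea described above can be sketched in a few lines. This is a minimal illustration only, assuming a simplified setting of one opacity value per intensity bin and a proportional update rule; the function name and update rule are illustrative stand-ins, not the paper's actual algorithm:

```python
# Sketch: nudge per-bin opacities so the observed visibility distribution
# approaches a target distribution. Simplified, illustrative update rule.

def adjust_opacity(opacity, observed_vis, target_vis, rate=0.5):
    """One adjustment step; distributions are normalized before comparison."""
    s_o, s_t = sum(observed_vis), sum(target_vis)
    observed = [v / s_o for v in observed_vis]
    target = [v / s_t for v in target_vis]
    # Bins less visible than desired gain opacity; over-visible bins lose it.
    return [min(1.0, max(0.0, o * (1.0 + rate * (t - ob) / (ob + 1e-9))))
            for o, ob, t in zip(opacity, observed, target)]

opacity = [0.3, 0.3, 0.3, 0.3]               # initial opacity for 4 intensity bins
observed = [0.6, 0.2, 0.15, 0.05]            # e.g. background currently dominates
target = [0.1, 0.3, 0.3, 0.3]                # desired: emphasize inner structures
print(adjust_opacity(opacity, observed, target))
```

    In a real renderer this step would be iterated, with visibility re-measured from the rendered volume after each opacity update.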

  9. Short temporal asynchrony disrupts visual object recognition

    PubMed Central

    Singer, Jedediah M.; Kreiman, Gabriel

    2014-01-01

    Humans can recognize objects and scenes in a small fraction of a second. The cascade of signals underlying rapid recognition might be disrupted by temporally jittering different parts of complex objects. Here we investigated the time course over which shape information can be integrated to allow for recognition of complex objects. We presented fragments of object images in an asynchronous fashion and behaviorally evaluated categorization performance. We observed that visual recognition was significantly disrupted by asynchronies of approximately 30 ms, suggesting that spatiotemporal integration begins to break down with even small deviations from simultaneity. However, moderate temporal asynchrony did not completely obliterate recognition; in fact, integration of visual shape information persisted even with an asynchrony of 100 ms. We describe the data with a concise model based on the dynamic reduction of uncertainty about what image was presented. These results emphasize the importance of timing in visual processing and provide strong constraints for the development of dynamical models of visual shape recognition. PMID:24819738

  10. Visualizing Dynamic Bitcoin Transaction Patterns.

    PubMed

    McGinn, Dan; Birch, David; Akroyd, David; Molina-Solana, Miguel; Guo, Yike; Knottenbelt, William J

    2016-06-01

    This work presents a systemic top-down visualization of Bitcoin transaction activity to explore dynamically generated patterns of algorithmic behavior. Bitcoin dominates the cryptocurrency markets and presents researchers with a rich source of real-time transactional data. The pseudonymous yet public nature of the data presents opportunities for the discovery of human and algorithmic behavioral patterns of interest to many parties such as financial regulators, protocol designers, and security analysts. However, retaining visual fidelity to the underlying data, and thereby a fuller understanding of activity within the network, remains challenging, particularly in real time. We expose an effective force-directed graph visualization employed in our large-scale data observation facility to accelerate this data exploration and derive useful insight among domain experts and the general public alike. The high-fidelity visualizations demonstrated in this article allowed for collaborative discovery of unexpected high-frequency transaction patterns, including automated laundering operations, and the evolution of multiple distinct algorithmic denial-of-service attacks on the Bitcoin network.
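    The force-directed layout at the heart of such a visualization can be sketched with a basic attract/repel iteration. The tiny transaction graph, force constants, and iteration count below are toy assumptions; the article's system streams live transactions with hardware acceleration, which this sketch does not attempt:

```python
import math, random

# Toy force-directed layout over a hypothetical four-node transaction graph.
random.seed(0)
nodes = ["tx1", "tx2", "addrA", "addrB"]
edges = [("addrA", "tx1"), ("tx1", "addrB"), ("addrB", "tx2"), ("tx2", "addrA")]
pos = {n: [random.uniform(-1, 1), random.uniform(-1, 1)] for n in nodes}

for _ in range(200):
    force = {n: [0.0, 0.0] for n in nodes}
    # Repulsion between every pair of nodes keeps the layout spread out.
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            dx, dy = pos[a][0] - pos[b][0], pos[a][1] - pos[b][1]
            d2 = dx * dx + dy * dy + 1e-6
            f = 0.01 / d2
            force[a][0] += f * dx; force[a][1] += f * dy
            force[b][0] -= f * dx; force[b][1] -= f * dy
    # Spring attraction pulls connected nodes (transactions/addresses) together.
    for a, b in edges:
        dx, dy = pos[b][0] - pos[a][0], pos[b][1] - pos[a][1]
        force[a][0] += 0.1 * dx; force[a][1] += 0.1 * dy
        force[b][0] -= 0.1 * dx; force[b][1] -= 0.1 * dy
    for n in nodes:
        pos[n][0] += force[n][0]; pos[n][1] += force[n][1]

print({n: (round(p[0], 2), round(p[1], 2)) for n, p in pos.items()})
```

    The equilibrium spacing emerges where spring attraction balances pairwise repulsion; distinctive transaction patterns (fan-outs, chains, loops) then become visible as geometric motifs.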

  11. Visualizing Dynamic Bitcoin Transaction Patterns

    PubMed Central

    McGinn, Dan; Birch, David; Akroyd, David; Molina-Solana, Miguel; Guo, Yike; Knottenbelt, William J.

    2016-01-01

    Abstract This work presents a systemic top-down visualization of Bitcoin transaction activity to explore dynamically generated patterns of algorithmic behavior. Bitcoin dominates the cryptocurrency markets and presents researchers with a rich source of real-time transactional data. The pseudonymous yet public nature of the data presents opportunities for the discovery of human and algorithmic behavioral patterns of interest to many parties such as financial regulators, protocol designers, and security analysts. However, retaining visual fidelity to the underlying data, and thereby a fuller understanding of activity within the network, remains challenging, particularly in real time. We expose an effective force-directed graph visualization employed in our large-scale data observation facility to accelerate this data exploration and derive useful insight among domain experts and the general public alike. The high-fidelity visualizations demonstrated in this article allowed for collaborative discovery of unexpected high-frequency transaction patterns, including automated laundering operations, and the evolution of multiple distinct algorithmic denial-of-service attacks on the Bitcoin network. PMID:27441715

  12. Masking by Gratings Predicted by an Image Sequence Discriminating Model: Testing Models for Perceptual Discrimination Using Repeatable Noise

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    Adding noise to stimuli to be discriminated allows estimation of observer classification functions based on the correlation between observer responses and relevant features of the noisy stimuli. Examples will be presented of stimulus features that are found in auditory tone detection and visual vernier acuity. Using the standard signal detection model (Thurstone scaling), we derive formulas to estimate the proportion of the observer's decision variable variance that is controlled by the added noise. One is based on the probability of agreement of the observer with him/herself on trials with the same noise sample. Another is based on the relative performance of the observer and the model. When these do not agree, the model can be rejected. A second derivation gives the probability of agreement of observer and model when the observer follows the model except for internal noise. Agreement significantly less than this amount allows rejection of the model.
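    The repeated-noise ("double-pass") agreement logic can be illustrated with a small simulation under the standard equal-variance signal detection model. The variance ratio, criterion, and trial count below are illustrative assumptions, not values from the paper:

```python
import random

# Double-pass simulation: the same external noise sample drives the decision
# variable on both passes; only the internal noise differs between passes.
random.seed(1)
N_TRIALS = 20000
SIGMA_EXT = 1.0   # decision-variable s.d. driven by the added stimulus noise
SIGMA_INT = 1.0   # internal noise s.d. (assumption: equal to external here)

agree = 0
for _ in range(N_TRIALS):
    e = random.gauss(0.0, SIGMA_EXT)             # shared noise-driven component
    r1 = (e + random.gauss(0.0, SIGMA_INT)) > 0  # pass 1 decision
    r2 = (e + random.gauss(0.0, SIGMA_INT)) > 0  # pass 2 decision
    agree += (r1 == r2)

p_agree = agree / N_TRIALS
print(f"P(agreement) = {p_agree:.3f}")  # ~0.67 when internal = external noise
```

    Higher self-agreement implies the added noise controls a larger share of the decision variable's variance; an observer dominated by internal noise would agree with him/herself only slightly above chance.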

  13. Interactive Visualization to Advance Earthquake Simulation

    NASA Astrophysics Data System (ADS)

    Kellogg, Louise H.; Bawden, Gerald W.; Bernardin, Tony; Billen, Magali; Cowgill, Eric; Hamann, Bernd; Jadamec, Margarete; Kreylos, Oliver; Staadt, Oliver; Sumner, Dawn

    2008-04-01

    The geological sciences are challenged to manage and interpret increasing volumes of data as observations and simulations increase in size and complexity. For example, simulations of earthquake-related processes typically generate complex, time-varying data sets in two or more dimensions. To facilitate interpretation and analysis of these data sets, evaluate the underlying models, and drive future calculations, we have developed methods of interactive visualization with a special focus on using immersive virtual reality (VR) environments to interact with models of Earth’s surface and interior. Virtual mapping tools allow virtual “field studies” in inaccessible regions. Interactive tools allow us to manipulate shapes in order to construct models of geological features for geodynamic models, while feature extraction tools support quantitative measurement of structures that emerge from numerical simulation or field observations, thereby enabling us to improve our interpretation of the dynamical processes that drive earthquakes. VR has traditionally been used primarily as a presentation tool, albeit with active navigation through data. Reaping the full intellectual benefits of immersive VR as a tool for scientific analysis requires building on the method’s strengths, that is, using both 3D perception and interaction with observed or simulated data. This approach also takes advantage of the specialized skills of geological scientists, who are trained to interpret the often limited geological and geophysical data available from field observations.

  14. Preparing for Future Learning with a Tangible User Interface: The Case of Neuroscience

    ERIC Educational Resources Information Center

    Schneider, B.; Wallace, J.; Blikstein, P.; Pea, R.

    2013-01-01

    In this paper, we describe the development and evaluation of a microworld-based learning environment for neuroscience. Our system, BrainExplorer, allows students to discover the way neural pathways work by interacting with a tangible user interface. By severing and reconfiguring connections, users can observe how the visual field is impaired and,…

  15. Rubber Bands as Model Polymers in Couette Flow

    ERIC Educational Resources Information Center

    Dunstan, Dave E.

    2008-01-01

    We present a simple device for demonstrating the essential aspects of polymers in flow in the classroom. Rubber bands are used as a macroscopic model of polymers to allow direct visual observation of the flow-induced changes in orientation and conformation. A transparent Perspex Couette cell, constructed from two sections of a tube, is used to…

  16. Bridging views in cinema: a review of the art and science of view integration.

    PubMed

    Levin, Daniel T; Baker, Lewis J

    2017-09-01

    Recently, there has been a surge of interest in the relationship between film and cognitive science. This is reflected in a new science of cinema that can help us both to understand this art form, and to produce new insights about cognition and perception. In this review, we begin by describing how the initial development of cinema involved close observation of audience response. This allowed filmmakers to develop an informal theory of visual cognition that helped them to isolate and creatively recombine fundamental elements of visual experience. We review research exploring naturalistic forms of visual perception and cognition that have opened the door to a productive convergence between the dynamic visual art of cinema and the science of visual cognition that can enrich both. In particular, we discuss how parallel understandings of view integration in cinema and in cognitive science have been converging to support a new understanding of meaningful visual experience. WIREs Cogn Sci 2017, 8:e1436. doi: 10.1002/wcs.1436. For further resources related to this article, please visit the WIREs website. © 2017 Wiley Periodicals, Inc.

  17. Change Blindness Phenomena for Virtual Reality Display Systems.

    PubMed

    Steinicke, Frank; Bruder, Gerd; Hinrichs, Klaus; Willemsen, Pete

    2011-09-01

    In visual perception, change blindness describes the phenomenon that persons viewing a visual scene may apparently fail to detect significant changes in that scene. These phenomena have been observed in both computer-generated imagery and real-world scenes. Several studies have demonstrated that change blindness effects occur primarily during visual disruptions such as blinks or saccadic eye movements. However, until now the influence of stereoscopic vision on change blindness has not been studied thoroughly in the context of visual perception research. In this paper, we introduce change blindness techniques for stereoscopic virtual reality (VR) systems, providing the ability to substantially modify a virtual scene in a manner that is difficult for observers to perceive. We evaluate techniques for semi-immersive VR systems, i.e., a passive and active stereoscopic projection system, as well as an immersive VR system, i.e., a head-mounted display, and compare the results to those of monoscopic viewing conditions. For stereoscopic viewing conditions, we found that change blindness phenomena occur with the same magnitude as in monoscopic viewing conditions. Furthermore, we have evaluated the potential of the presented techniques for allowing abrupt, and yet significant, changes of a stereoscopically displayed virtual reality environment.

  18. User Driven Data Mining, Visualization and Decision Making for NOAA Observing System and Data Investments

    NASA Astrophysics Data System (ADS)

    Austin, M.

    2016-12-01

    The National Oceanic and Atmospheric Administration (NOAA) observing system enterprise represents a $2.4B annual investment. Earth observations from these systems are foundational to NOAA's mission to describe, understand, and predict the Earth's environment. NOAA's decision makers are charged with managing this complex portfolio of observing systems to serve the national interest effectively and efficiently. The Technology Planning & Integration for Observation (TPIO) Office currently maintains an observing system portfolio for NOAA's validated user observation requirements, observing capabilities, and resulting data products and services. TPIO performs data analytics to provide NOAA leadership with business case recommendations for making sound budgetary decisions. Over the last year, TPIO has moved from massive spreadsheets to intuitive dashboards that enable Federal agencies as well as the general public to explore user observation requirements and environmental observing systems that monitor and predict changes in the environment. This change has led to an organizational data management shift toward analytics and visualizations, allowing analysts more time to focus on understanding the data, discovering insights, and effectively communicating the information to decision makers. Moving forward, the next step is to facilitate a cultural change toward self-serve data sharing across NOAA, other Federal agencies, and the public, using intuitive data visualizations that answer relevant business questions for users of NOAA's Observing System Enterprise. Users and producers of environmental data will become aware of the need for enhanced communication to simplify information exchange and achieve multipurpose goals across a variety of disciplines. NOAA cannot achieve its goal of producing environmental intelligence without data that can be shared by multiple user communities. This presentation will describe where we are on this journey and will provide examples of these visualizations, promoting a better understanding of NOAA's environmental sensing capabilities and enabling improved communication to decision makers in an effective and intuitive manner.

  19. The aryl hydrocarbon receptor is required for developmental closure of the ductus venosus in the neonatal mouse.

    PubMed

    Lahvis, Garet P; Pyzalski, Robert W; Glover, Edward; Pitot, Henry C; McElwee, Matthew K; Bradfield, Christopher A

    2005-03-01

    A developmental role for the Ahr locus has been indicated by the observation that mice harboring a null allele display a portocaval vascular shunt throughout life. To define the ontogeny and determine the identity of this shunt, we developed a visualization approach in which three-dimensional (3D) images of the developing liver vasculature are generated from serial sections. Applying this 3D visualization approach at multiple developmental times allowed us to demonstrate that the portocaval shunt observed in Ahr-null mice is the remnant of an embryonic structure and is not acquired after birth. We observed that the shunt is found in late-stage wild-type embryos but closes during the first 48 h of postnatal life. In contrast, the same structure fails to close in Ahr-null mice and remains open throughout adulthood. The ontogeny of this shunt, along with its 3D position, allowed us to conclude that this shunt is a patent developmental structure known as the ductus venosus (DV). Upon searching for a physiological cause of the patent DV, we observed that during the first 48 h, most major hepatic veins, such as the portal and umbilical veins, normally decrease in diameter but do not change in Ahr-null mice. This observation suggests that failure of the DV to close may be the consequence of increased blood pressure or a failure in vasoconstriction in the developing liver.

  20. The GeoViz Toolkit: Using component-oriented coordination methods for geographic visualization and analysis

    PubMed Central

    Hardisty, Frank; Robinson, Anthony C.

    2010-01-01

    In this paper we present the GeoViz Toolkit, an open-source, internet-delivered program for geographic visualization and analysis that features a diverse set of software components which can be flexibly combined by users who do not have programming expertise. The design and architecture of the GeoViz Toolkit allows us to address three key research challenges in geovisualization: allowing end users to create their own geovisualization and analysis component set on-the-fly, integrating geovisualization methods with spatial analysis methods, and making geovisualization applications sharable between users. Each of these tasks necessitates a robust yet flexible approach to inter-tool coordination. The coordination strategy we developed for the GeoViz Toolkit, called Introspective Observer Coordination, leverages and combines key advances in software engineering from the last decade: automatic introspection of objects, software design patterns, and reflective invocation of methods. PMID:21731423
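    The "Introspective Observer Coordination" idea of discovering compatible components by reflection and wiring them automatically can be sketched as follows. The naming convention (emit_<event> / on_<event>) and the two component classes are hypothetical stand-ins for illustration, not the GeoViz Toolkit's actual Java API:

```python
# Sketch: auto-wire observer connections by introspecting component objects.
# Convention (assumed for this sketch): emit_<event> methods publish,
# on_<event> methods subscribe.

class MapView:
    def __init__(self):
        self._subscribers = []
    def emit_selection(self, ids):
        for callback in self._subscribers:
            callback(ids)

class ScatterPlot:
    def __init__(self):
        self.selected = None
    def on_selection(self, ids):
        self.selected = ids

def autowire(components):
    """Connect emit_<event> publishers to on_<event> handlers via reflection."""
    for pub in components:
        for name in dir(pub):                      # introspect the object
            if not name.startswith("emit_"):
                continue
            event = name[len("emit_"):]
            for sub in components:
                handler = getattr(sub, "on_" + event, None)
                if callable(handler):              # reflective method lookup
                    pub._subscribers.append(handler)

mapview, plot = MapView(), ScatterPlot()
autowire([mapview, plot])
mapview.emit_selection([3, 7])   # selecting on the map updates the plot
print(plot.selected)             # [3, 7]
```

    The point of the design is that users combine components without writing glue code: coordination is discovered from the components' own interfaces at runtime.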

  1. Video quality assessment using a statistical model of human visual speed perception.

    PubMed

    Wang, Zhou; Li, Qiang

    2007-12-01

    Motion is one of the most important types of information contained in natural video, but direct use of motion information in the design of video quality assessment algorithms has not been deeply investigated. Here we propose to incorporate a recent model of human visual speed perception [Nat. Neurosci. 9, 578 (2006)] and model visual perception in an information communication framework. This allows us to estimate both the motion information content and the perceptual uncertainty in video signals. Improved video quality assessment algorithms are obtained by incorporating the model as spatiotemporal weighting factors, where the weight increases with the information content and decreases with the perceptual uncertainty. Consistent improvement over existing video quality assessment algorithms is observed in our validation with the Video Quality Experts Group (VQEG) Phase I test data set.
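    The weighting scheme described above can be illustrated with a toy pooling function. All array values and the exact weight formula below are made-up assumptions that only show how information content and perceptual uncertainty trade off; the paper's actual model derives both quantities from visual speed perception:

```python
# Sketch: pool per-region quality scores with weights that grow with
# information content and shrink with perceptual uncertainty.

def pooled_quality(local_q, info, uncertainty):
    """Weighted mean of local quality; weight = information / uncertainty."""
    weights = [i / (u + 1e-9) for i, u in zip(info, uncertainty)]
    return sum(w * q for w, q in zip(weights, local_q)) / sum(weights)

local_q = [0.9, 0.5, 0.7]   # per-region quality scores (toy values)
info = [2.0, 1.0, 1.0]      # motion information content per region (toy)
uncert = [1.0, 1.0, 4.0]    # perceptual uncertainty per region (toy)
print(round(pooled_quality(local_q, info, uncert), 3))
```

    Here the high-information, low-uncertainty first region dominates the pooled score, while the highly uncertain third region contributes little.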

  2. The Science Behind the NASA/NOAA Electronic Theater 2002

    NASA Technical Reports Server (NTRS)

    Hasler, A. Fritz; Starr, David (Technical Monitor)

    2002-01-01

    Details of the science stories and scientific results behind the Etheater Earth Science visualizations from the major remote sensing institutions around the country will be explained. The NASA Electronic Theater presents Earth science observations and visualizations in a historical perspective. Fly in from outer space to Temple Square and the University of Utah campus. Go back to the early weather satellite images from the 1960s and see them contrasted with the latest US/Europe/Japan global weather data. See the latest images and image sequences from NASA and NOAA missions like Terra, GOES, NOAA, TRMM, SeaWiFS, and Landsat 7, visualized with state-of-the-art tools. A similar retrospective of numerical weather models from the 1960s will be compared with the latest "year 2002" high-resolution models. See the inner workings of a powerful hurricane as it is sliced and dissected using the University of Wisconsin Vis5D interactive visualization system. The largest supercomputers are now capable of realistic modeling of the global oceans. See ocean vortices and currents that bring up the nutrients that feed phytoplankton and zooplankton, and that draw in the krill, fish, whales, and fishermen. See how the ocean blooms in response to these currents and El Nino/La Nina climate regimes. The Internet and networks have appeared while computers and visualizations have vastly improved over the last 40 years. These advances make it possible to present the broad scope and detailed structure of the huge new observed and simulated datasets in a compelling and instructive manner. New visualization tools allow us to interactively roam and zoom through massive global images larger than 40,000 x 20,000 pixels. Powerful movie players allow us to interactively roam, zoom, and loop through 4000 x 4000 pixel, bigger-than-HDTV movies of up to 5000 frames. New 3D tools allow highly interactive manipulation of detailed perspective views of many changing model quantities.
    See the 1 m resolution before and after shots of lower Manhattan and the Pentagon after the September 11 disaster, as well as shots of Afghanistan from the Space Imaging IKONOS, and debris plume images from Terra MODIS and SPOT Image. Visualizations of hurricanes Michelle (2001), Floyd, Mitch, Fran, and Linda will be shown on the SGI Octane graphics supercomputer. Our visualizations of these storms have been featured on the covers of National Geographic, Time, Newsweek, and Popular Science. Highlights will be shown from NASA's large collection of High Definition TV (HDTV) visualization clips. New visualizations of a Los Alamos global ocean model and high-resolution results of a NASA/JPL Atlantic ocean basin model, showing currents and salinity features, will be shown. El Nino/La Nina effects on sea surface temperature and sea surface height of the Pacific Ocean will also be shown. The SST simulations will be compared with GOES Gulf Stream animations and ocean productivity observations. Tours will be given of the entire Earth's land surface at 500 m resolution from recently composited Terra MODIS data. Visualizations will be shown from the Earth Science Etheater 2001, recently presented in New Zealand, Johannesburg, Tokyo, Paris, Munich, Sydney, Melbourne, Honolulu, Washington, New York City, Pasadena, UCAR/Boulder, and Penn State University. The presentation will use a 2-CPU SGI/CRAY Octane super graphics workstation with 4 GB RAM and a terabyte disk array at 2048 x 768 resolution, plus a multimedia laptop and three high-resolution projectors. Visualizations will also be featured from museum exhibits and presentations, including the Smithsonian Air & Space Museum in Washington, the IMAX theater at the Maryland Science Center in Baltimore, the James Lovell Discovery World science museum in Milwaukee, and the American Museum of Natural History (NYC) Hayden Planetarium IMAX theater.
    The Etheater is sponsored by NASA, NOAA, and the American Meteorological Society. This presentation is brought to you by the University of Utah College of Mines and Earth Sciences and the Utah Museum of Natural History.

  3. Visual laterality in dolphins: importance of the familiarity of stimuli

    PubMed Central

    2012-01-01

    Background Many studies of cerebral asymmetries in different species lead, on the one hand, to a better understanding of the functions of each cerebral hemisphere and, on the other hand, to the development of an evolutionary history of hemispheric laterality. Our animal model is particularly interesting because of its original evolutionary path, i.e. a return to aquatic life after a terrestrial phase. The rare reports concerning visual laterality in marine mammals have mainly investigated discrimination processes. As dolphins are a migrant species, they are confronted with a changing environment. Being able to categorize novel versus familiar objects would allow dolphins to adapt rapidly to novel environments. Visual laterality could be a prerequisite for this adaptability. To date, no study, to our knowledge, has analyzed the environmental factors that could influence their visual laterality. Results We investigated the visual laterality expressed spontaneously at the water surface by a group of five common bottlenose dolphins (Tursiops truncatus) in response to various stimuli. The stimuli presented ranged from very familiar objects (known and manipulated previously) to familiar objects (known but never manipulated) to unfamiliar objects (unknown, never seen previously). At the group level, dolphins used their left eye to observe very familiar objects and their right eye to observe unfamiliar objects. However, either eye was used indifferently to observe familiar objects of intermediate valence. Conclusion Our results suggest different visual cerebral processes based either on the global shape of well-known objects or on local details of unknown objects. Moreover, the manipulation of an object appears necessary for these dolphins to construct a global representation of the object, enabling its immediate categorization for subsequent use. 
    Our experimental results point to cognitive capacities of dolphins that might be crucial for their life in the wild, given their fission-fusion social system and migratory behaviour. PMID:22239860

  4. Visual laterality in dolphins: importance of the familiarity of stimuli.

    PubMed

    Blois-Heulin, Catherine; Crével, Mélodie; Böye, Martin; Lemasson, Alban

    2012-01-12

    Many studies of cerebral asymmetries in different species lead, on the one hand, to a better understanding of the functions of each cerebral hemisphere and, on the other hand, to the development of an evolutionary history of hemispheric laterality. Our animal model is particularly interesting because of its original evolutionary path, i.e. a return to aquatic life after a terrestrial phase. The rare reports concerning visual laterality in marine mammals have mainly investigated discrimination processes. As dolphins are a migrant species, they are confronted with a changing environment. Being able to categorize novel versus familiar objects would allow dolphins to adapt rapidly to novel environments. Visual laterality could be a prerequisite for this adaptability. To date, no study, to our knowledge, has analyzed the environmental factors that could influence their visual laterality. We investigated the visual laterality expressed spontaneously at the water surface by a group of five common bottlenose dolphins (Tursiops truncatus) in response to various stimuli. The stimuli presented ranged from very familiar objects (known and manipulated previously) to familiar objects (known but never manipulated) to unfamiliar objects (unknown, never seen previously). At the group level, dolphins used their left eye to observe very familiar objects and their right eye to observe unfamiliar objects. However, either eye was used indifferently to observe familiar objects of intermediate valence. Our results suggest different visual cerebral processes based either on the global shape of well-known objects or on local details of unknown objects. Moreover, the manipulation of an object appears necessary for these dolphins to construct a global representation of the object, enabling its immediate categorization for subsequent use. Our experimental results point to cognitive capacities of dolphins that might be crucial for their life in the wild, given their fission-fusion social system and migratory behaviour.

  5. The KUPNetViz: a biological network viewer for multiple -omics datasets in kidney diseases.

    PubMed

    Moulos, Panagiotis; Klein, Julie; Jupp, Simon; Stevens, Robert; Bascands, Jean-Loup; Schanstra, Joost P

    2013-07-24

    Constant technological advances have allowed scientists in biology to migrate from conventional single-omics to multi-omics experimental approaches, challenging bioinformatics to bridge this multi-tiered information. Ongoing research in renal biology is no exception. The results of large-scale and/or high-throughput experiments, presenting a wealth of information on kidney disease, are scattered across the web. To tackle this problem, we recently presented the KUPKB, a multi-omics data repository for renal diseases. In this article, we describe KUPNetViz, a biological graph exploration tool allowing the exploration of KUPKB data through the visualization of biomolecule interactions. KUPNetViz enables the integration of multi-layered experimental data over different species, renal locations, and renal diseases with protein-protein interaction networks and allows association with biological functions, biochemical pathways and other functional elements such as miRNAs. KUPNetViz focuses on the simplicity of its usage and the clarity of the resulting networks by reducing and/or automating advanced functionalities present in other biological network visualization packages. In addition, it allows the extrapolation of biomolecule interactions across different species, leading to the formulation of new plausible hypotheses, more adequate experiment design, and the suggestion of novel biological mechanisms. We demonstrate the value of KUPNetViz with two usage examples: the integration of calreticulin as a key player in a larger interaction network in renal graft rejection, and the novel observation of the strong association of interleukin-6 with polycystic kidney disease. KUPNetViz is an interactive and flexible biological network visualization and exploration tool. It provides renal biologists with biological network snapshots of the complex integrated data of the KUPKB, allowing the formulation of new hypotheses in a user-friendly manner.

  6. The KUPNetViz: a biological network viewer for multiple -omics datasets in kidney diseases

    PubMed Central

    2013-01-01

    Background Constant technological advances have allowed scientists in biology to migrate from conventional single-omics to multi-omics experimental approaches, challenging bioinformatics to bridge this multi-tiered information. Ongoing research in renal biology is no exception. The results of large-scale and/or high-throughput experiments, presenting a wealth of information on kidney disease, are scattered across the web. To tackle this problem, we recently presented the KUPKB, a multi-omics data repository for renal diseases. Results In this article, we describe KUPNetViz, a biological graph exploration tool allowing the exploration of KUPKB data through the visualization of biomolecule interactions. KUPNetViz enables the integration of multi-layered experimental data over different species, renal locations, and renal diseases with protein-protein interaction networks and allows association with biological functions, biochemical pathways and other functional elements such as miRNAs. KUPNetViz focuses on the simplicity of its usage and the clarity of the resulting networks by reducing and/or automating advanced functionalities present in other biological network visualization packages. In addition, it allows the extrapolation of biomolecule interactions across different species, leading to the formulation of new plausible hypotheses, more adequate experiment design, and the suggestion of novel biological mechanisms. We demonstrate the value of KUPNetViz with two usage examples: the integration of calreticulin as a key player in a larger interaction network in renal graft rejection, and the novel observation of the strong association of interleukin-6 with polycystic kidney disease. Conclusions KUPNetViz is an interactive and flexible biological network visualization and exploration tool. It provides renal biologists with biological network snapshots of the complex integrated data of the KUPKB, allowing the formulation of new hypotheses in a user-friendly manner. 
PMID:23883183

  7. A qualitative inquiry into the effects of visualization on high school chemistry students' learning process of molecular structure

    NASA Astrophysics Data System (ADS)

    Deratzou, Susan

    This research studies the process by which high school chemistry students visualize chemical structures and its role in learning chemical bonding and molecular structure. Little research exists with high school chemistry students, and more is needed (Gabel & Sherwood, 1980; Seddon & Moore, 1986; Seddon, Tariq, & Dos Santos Veiga, 1984). Using visualization tests (Ekstrom, French, Harman, & Dermen, 1990a), a learning style inventory (Brown & Cooper, 1999), and observations through a case study design, this study found that visual learners performed better but needed more practice and training. Statistically, all five pre- and post-test visualization test comparisons were highly significant in the two-tailed t-test (p < .01). The research findings are: (1) Students who tested high in the Visual (Language and/or Numerical) and Tactile Learning Styles (and Social Learning) had an advantage. Students who learned the chemistry concepts more effectively were better at visualizing structures and using molecular models to enhance their knowledge. (2) Students showed improvement in learning after visualization practice. Training in visualization would improve students' visualization abilities and provide them with a way to think about these concepts. (3) Conceptualization of concepts indicated that visualizing ability was critical and that it could be acquired. Support for this finding was provided by pre- and post-Visualization Test data with a highly significant t-test. (4) Various molecular animation programs and websites were found to be effective. (5) Visualization and modeling of structures encompassed both two- and three-dimensional space. The Visualization Test findings suggested that the students performed better with basic rotation of structures than with two- and three-dimensional objects. (6) Data from observations suggest that teaching style was an important factor in student learning of molecular structure. 
(7) Students did learn the chemistry concepts. Based on the Visualization Test results, which showed that most of the students performed better on the post-test, the visualization experience and the abstract nature of the content allowed them to transfer some of their chemical understanding and practice to non-chemical structures. Finally, implications for teaching of chemistry, students learning chemistry, curriculum, and research for the field of chemical education were discussed.
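    The significance test reported above is a two-tailed paired t-test on pre/post scores. As a minimal sketch of that computation, with hypothetical scores rather than the study's data:

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Two-tailed paired t statistic and degrees of freedom."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
    return t, n - 1

# Hypothetical pre/post visualization-test scores (not the study's data).
pre = [12, 15, 11, 14, 13, 10, 16, 12]
post = [16, 18, 15, 17, 18, 14, 19, 15]
t, df = paired_t(pre, post)
```

A t statistic this far into the tail corresponds to p well below .01 for 7 degrees of freedom.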

  8. Reflected ray retrieval from radio occultation data using radio holographic filtering of wave fields in ray space

    NASA Astrophysics Data System (ADS)

    Gorbunov, Michael E.; Cardellach, Estel; Lauritsen, Kent B.

    2018-03-01

    Linear and non-linear representations of wave fields constitute the basis of modern algorithms for the analysis of radio occultation (RO) data. Linear representations are implemented by Fourier Integral Operators, which allow for high-resolution retrieval of bending angles. Non-linear representations include the Wigner Distribution Function (WDF), which equals the pseudo-density of energy in ray space. Representations allow for filtering wave fields by suppressing some areas of the ray space and mapping the field back from the transformed space to the initial one. We apply this technique to the retrieval of reflected rays from RO observations. The use of reflected rays may increase the accuracy of the retrieval of atmospheric refractivity. Reflected rays can be identified by visual inspection of WDF or spectrogram plots. Numerous examples from COSMIC data indicate that reflections are mostly observed over oceans or snow, in particular over Antarctica. We introduce a reflection index that characterizes the relative intensity of the reflected ray with respect to the direct ray. The index allows for the automatic identification of events with reflections. We use the radio holographic estimate of the errors of the retrieved bending angle profiles of reflected rays. A comparison of indices evaluated for a large base of events, including visual identification of reflections, indicated good agreement with our definition of the reflection index.
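    An index of relative reflected-ray intensity can be sketched as an energy ratio between the two ray-space regions. This is an illustrative definition, not necessarily the paper's exact formula; `wdf_reflected` and `wdf_direct` are assumed to be WDF pseudo-energy samples from the respective regions:

```python
import math

def reflection_index(wdf_reflected, wdf_direct, eps=1e-12):
    """Relative intensity of the reflected vs. the direct ray, from
    pseudo-energy summed over each region of ray space (in dB)."""
    e_refl = sum(wdf_reflected)
    e_dir = sum(wdf_direct)
    return 10.0 * math.log10((e_refl + eps) / (e_dir + eps))

# A reflected ray carrying 10% of the direct ray's energy gives -10 dB.
idx = reflection_index([0.2, 0.3, 0.1], [2.0, 3.0, 1.0])
```

Thresholding such an index is one way to automate the identification of events with reflections.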

  9. A VBA Desktop Database for Proposal Processing at National Optical Astronomy Observatories

    NASA Astrophysics Data System (ADS)

    Brown, Christa L.

    National Optical Astronomy Observatories (NOAO) has developed a relational Microsoft Windows desktop database using Microsoft Access and the Microsoft Office programming language, Visual Basic for Applications (VBA). The database is used to track data relating to observing proposals from original receipt through the review process, scheduling, observing, and final statistical reporting. The database has automated proposal processing and distribution of information. It allows NOAO to collect and archive data so as to query and analyze information about our science programs in new ways.

  10. A Causal Inference Model Explains Perception of the McGurk Effect and Other Incongruent Audiovisual Speech.

    PubMed

    Magnotti, John F; Beauchamp, Michael S

    2017-02-01

    Audiovisual speech integration combines information from auditory speech (talker's voice) and visual speech (talker's mouth movements) to improve perceptual accuracy. However, if the auditory and visual speech emanate from different talkers, integration decreases accuracy. Therefore, a key step in audiovisual speech perception is deciding whether auditory and visual speech have the same source, a process known as causal inference. A well-known illusion, the McGurk Effect, consists of incongruent audiovisual syllables, such as auditory "ba" + visual "ga" (AbaVga), that are integrated to produce a fused percept ("da"). This illusion raises two fundamental questions: first, given the incongruence between the auditory and visual syllables in the McGurk stimulus, why are they integrated; and second, why does the McGurk effect not occur for other, very similar syllables (e.g., AgaVba). We describe a simplified model of causal inference in multisensory speech perception (CIMS) that predicts the perception of arbitrary combinations of auditory and visual speech. We applied this model to behavioral data collected from 60 subjects perceiving both McGurk and non-McGurk incongruent speech stimuli. The CIMS model successfully predicted both the audiovisual integration observed for McGurk stimuli and the lack of integration observed for non-McGurk stimuli. An identical model without causal inference failed to accurately predict perception for either form of incongruent speech. The CIMS model uses causal inference to provide a computational framework for studying how the brain performs one of its most important tasks, integrating auditory and visual speech cues to allow us to communicate with others.
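    Causal inference for cue combination of this kind is commonly formalized as Bayesian model averaging over "one cause" vs. "two causes". The sketch below is a generic Gaussian version in that spirit, not the CIMS implementation itself; `va`/`vv` are assumed cue noise variances and `vp` a stimulus prior variance:

```python
import math

def gauss(x, mu, var):
    """Gaussian density at x."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def p_common(xa, xv, va, vv, vp=100.0, prior=0.5):
    """Posterior probability that auditory and visual cues share one cause."""
    # One cause: the cue difference and the reliability-weighted mean
    # are independent Gaussians.
    w = (xa / va + xv / vv) / (1 / va + 1 / vv)
    like1 = gauss(xa - xv, 0.0, va + vv) * gauss(w, 0.0, vp + va * vv / (va + vv))
    # Two causes: each cue is drawn from its own source under the prior.
    like2 = gauss(xa, 0.0, va + vp) * gauss(xv, 0.0, vv + vp)
    return prior * like1 / (prior * like1 + (1 - prior) * like2)

match = p_common(5.0, 5.0, 1.0, 1.0)    # congruent cues: integrate
clash = p_common(5.0, -5.0, 1.0, 1.0)   # strongly incongruent cues: segregate
```

Close cues yield a high probability of a common cause (integration, as for McGurk stimuli), while widely discrepant cues yield a low one (segregation).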

  11. In-motion optical sensing for assessment of animal well-being

    NASA Astrophysics Data System (ADS)

    Atkins, Colton A.; Pond, Kevin R.; Madsen, Christi K.

    2017-05-01

    The application of in-motion optical sensor measurements was investigated for inspecting livestock soundness as a means of animal well-being. An optical sensor-based platform was used to collect in-motion, weight-related information. Eight steers, weighing between 680 and 1134 kg, were evaluated twice. Six of the 8 steers were used for further evaluation and analysis. Hoof impacts caused plate flexion that was optically sensed. Observed kinetic differences between animals' strides at a walking or running/trotting gait with significant force distributions of animals' hoof impacts allowed for observation of real-time, biometric patterns. Overall, optical sensor-based measurements identified hoof differences between and within animals in motion that may allow for diagnosis of musculoskeletal unsoundness without visual evaluation.

  12. NASA's Earth Observations of the Global Environment

    NASA Technical Reports Server (NTRS)

    King, Michael D.

    2005-01-01

    A bird's-eye view of the Earth from afar and up close reveals the power and magnificence of the Earth and juxtaposes the simultaneous impacts and powerlessness of humankind. The NASA Electronic Theater presents Earth science observations and visualizations in an historical perspective. Fly in from outer space to Africa and Cape Town. See the latest spectacular images from NASA & NOAA remote sensing missions like Meteosat, TRMM, Landsat 7, and Terra, visualized and explained in the context of global change. See visualizations of global data sets currently available from Earth-orbiting satellites, including the Earth at night with its city lights, aerosols from biomass burning in the Middle East and Africa, and the retreat of the glaciers on Mt. Kilimanjaro. See the dynamics of vegetation growth and decay over Africa over 17 years. New visualization tools allow us to roam & zoom through massive global mosaic images, including Landsat and Terra tours of Africa and South America showing land use and land cover change from the Bolivian highlands. Spectacular new visualizations of the global atmosphere & oceans are shown. See massive dust storms sweeping across Africa and across the Atlantic to the Caribbean and Amazon basin. See ocean vortexes and currents that bring up the nutrients to feed tiny phytoplankton and draw the fish, whales and fishermen. See how the ocean blooms in response to these currents and El Niño/La Niña. We will illustrate these and other topics with a dynamic theater-style presentation, along with animations of satellite launch deployments and orbital mapping to highlight aspects of Earth observations from space.

  13. ChromaStarPy: A Stellar Atmosphere and Spectrum Modeling and Visualization Lab in Python

    NASA Astrophysics Data System (ADS)

    Short, C. Ian; Bayer, Jason H. T.; Burns, Lindsey M.

    2018-02-01

    We announce ChromaStarPy, an integrated general stellar atmospheric modeling and spectrum synthesis code written entirely in Python 3. ChromaStarPy is a direct port of the ChromaStarServer (CSServ) Java modeling code described in earlier papers in this series, and many of the associated JavaScript (JS) post-processing procedures have been ported and incorporated into CSPy so that students have access to ready-made data products. A Python integrated development environment (IDE) allows a student in a more advanced course to experiment with the code and to graphically visualize intermediate and final results, ad hoc, while running it. CSPy allows students and researchers to compare modeled to observed spectra in the same IDE in which they are processing observational data, while having complete control over the stellar parameters affecting the synthetic spectra. We also take the opportunity to describe improvements that have been made to the related codes, ChromaStar (CS), CSServ, and ChromaStarDB (CSDB), that, where relevant, have also been incorporated into CSPy. The application may be found at the home page of the OpenStars project: http://www.ap.smu.ca/OpenStars/.

  14. Visualization for Molecular Dynamics Simulation of Gas and Metal Surface Interaction

    NASA Astrophysics Data System (ADS)

    Puzyrkov, D.; Polyakov, S.; Podryga, V.

    2016-02-01

    The development of methods, algorithms and applications for the visualization of molecular dynamics simulation outputs is discussed. Visual analysis of the results of such calculations is a complex and pressing problem, especially for large-scale simulations. To solve this challenging task it is necessary to decide: 1) which data parameters to render, 2) which type of visualization to choose, and 3) which development tools to use. In the present work an attempt to answer these questions was made. For visualization, we chose to draw particles at their 3D coordinates, along with their velocity vectors, trajectories and volume density in the form of isosurfaces or fog. We tested a post-processing and visualization approach based on the Python language with additional libraries. Parallel software was also developed that allows processing of large volumes of data in the 3D regions of the examined system. This software produces the desired results in parallel with the calculations and finally assembles the rendered frames into a video file. The software package "Enthought Mayavi2" was used as the visualization tool. This application gave us the opportunity to study the interaction of a gas with a metal surface and to closely observe the adsorption effect.
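    The volume density mentioned above (rendered as isosurfaces or fog) is essentially a spatial histogram of particle positions. A minimal pure-Python sketch, assuming a cubic box and a uniform grid:

```python
def density_grid(positions, box, n):
    """Bin 3D particle positions (cubic box of side `box`) into an
    n x n x n count grid -- the volume density later rendered as
    isosurfaces or fog."""
    grid = [[[0] * n for _ in range(n)] for _ in range(n)]
    h = box / n  # cell edge length
    for x, y, z in positions:
        # Clamp to the last cell so points on the far boundary stay in range.
        i, j, k = (min(int(c / h), n - 1) for c in (x, y, z))
        grid[i][j][k] += 1
    return grid

pts = [(0.1, 0.1, 0.1), (0.9, 0.9, 0.9), (0.95, 0.9, 0.85)]
g = density_grid(pts, box=1.0, n=2)
```

In practice the counts would be normalized by cell volume and handed to a tool such as Mayavi2 for isosurface extraction.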

  15. Neural representation of form-contingent color filling-in in the early visual cortex.

    PubMed

    Hong, Sang Wook; Tong, Frank

    2017-11-01

    Perceptual filling-in exemplifies the constructive nature of visual processing. Color, a prominent surface property of visual objects, can appear to spread to neighboring areas that lack any color. We investigated cortical responses to a color filling-in illusion that effectively dissociates perceived color from the retinal input (van Lier, Vergeer, & Anstis, 2009). Observers adapted to a star-shaped stimulus with alternating red- and cyan-colored points to elicit a complementary afterimage. By presenting an achromatic outline that enclosed one of the two afterimage colors, perceptual filling-in of that color was induced in the unadapted central region. Visual cortical activity was monitored with fMRI, and analyzed using multivariate pattern analysis. Activity patterns in early visual areas (V1-V4) reliably distinguished between the two color-induced filled-in conditions, but only higher extrastriate visual areas showed the predicted correspondence with color perception. Activity patterns allowed for reliable generalization between filled-in colors and physical presentations of perceptually matched colors in areas V3 and V4, but not in earlier visual areas. These findings suggest that the perception of filled-in surface color likely requires more extensive processing by extrastriate visual areas, in order for the neural representation of surface color to become aligned with perceptually matched real colors.
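    The cross-generalization test described above (train on filled-in colors, test on physically presented colors) can be illustrated with a toy classifier. The voxel patterns below are hypothetical, and a nearest-centroid rule stands in for the study's multivariate pattern analysis:

```python
def centroid(rows):
    """Mean pattern across training examples."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def nearest(x, c0, c1):
    """Classify x by squared distance to the two class centroids."""
    d0 = sum((a - b) ** 2 for a, b in zip(x, c0))
    d1 = sum((a - b) ** 2 for a, b in zip(x, c1))
    return 0 if d0 <= d1 else 1

# Hypothetical 2-voxel patterns: train on filled-in colors...
filled_red = [[1.0, 0.1], [0.9, 0.2]]
filled_cyan = [[0.1, 1.0], [0.2, 0.8]]
# ...then test on perceptually matched real colors.
real_red, real_cyan = [0.8, 0.3], [0.3, 0.9]

c_red, c_cyan = centroid(filled_red), centroid(filled_cyan)
acc = (nearest(real_red, c_red, c_cyan) == 0) + (nearest(real_cyan, c_red, c_cyan) == 1)
```

Above-chance accuracy on the held-out condition is the signature of a shared color representation between filled-in and real colors.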

  16. JPL Earth Science Center Visualization Multitouch Table

    NASA Astrophysics Data System (ADS)

    Kim, R.; Dodge, K.; Malhotra, S.; Chang, G.

    2014-12-01

    The JPL Earth Science Center visualization table combines specialized software and hardware to provide multitouch, multiuser, and remote display control, creating a seamlessly integrated experience for visualizing JPL missions and their remote sensing data. The software is fully GIS-capable through time-aware OGC WMTS, using the Lunar Mapping and Modeling Portal as the GIS backend to continuously ingest and retrieve real-time remote sensing data and satellite location data. The 55-inch and 82-inch unlimited-finger-count multitouch displays allow multiple users to explore JPL Earth missions and visualize remote sensing data through an intuitive, interactive touch-based graphical user interface. To improve the integrated experience, the Earth Science Center Visualization Table team developed network streaming, which allows the table software to stream data visualizations to nearby remote displays over a computer network. This visualization/presentation tool is intended not only to support Earth science operations; it is specifically designed for education and public outreach and will contribute significantly to STEM. Our presentation will include an overview of our software and hardware and a showcase of our system.

  17. Toward semantic-based retrieval of visual information: a model-based approach

    NASA Astrophysics Data System (ADS)

    Park, Youngchoon; Golshani, Forouzan; Panchanathan, Sethuraman

    2002-07-01

    This paper centers on the problem of automated visual content classification. To enable classification-based image or visual-object retrieval, we propose a new image representation scheme called the visual context descriptor (VCD): a multidimensional vector in which each element represents the frequency of a unique visual property of an image or region. VCD uses predetermined quality dimensions (i.e., feature types and quantization levels) and semantic model templates mined a priori. Not only observed visual cues but also contextually relevant visual features are proportionally incorporated into the VCD. The contextual relevance of a visual cue to a semantic class is determined by correlation analysis of ground-truth samples. Such co-occurrence analysis of visual cues requires transforming a real-valued visual feature vector (e.g., color histogram, Gabor texture) into discrete events (like terms in text). Good-features-to-track, the rule of thirds, iterative k-means clustering, and TSVQ are used to transform feature vectors into unified symbolic representations called visual terms. Similarity-based visual-cue frequency estimation is also proposed to ensure correct model learning and matching, since sparse sample data makes frequency estimation of visual cues unstable. The proposed method naturally allows heterogeneous visual, temporal, and spatial cues to be integrated in a single classification or matching framework, and it can easily be integrated into a semantic knowledge base such as a thesaurus or ontology. Robust semantic visual model-template creation and object-based image retrieval are demonstrated on the proposed content description scheme.
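    The idea of incorporating both observed and contextually relevant visual terms can be sketched as a weighted term-frequency vector. The vocabulary and co-occurrence weights below are hypothetical, and the weighting scheme is illustrative rather than the paper's exact formulation:

```python
def vcd(observed, cooccur, vocab):
    """Visual context descriptor sketch: observed visual-term counts,
    plus contextually relevant terms weighted by co-occurrence strength."""
    desc = {t: 0.0 for t in vocab}
    for term, count in observed.items():
        desc[term] += count
        # Propagate contextual relevance to correlated terms.
        for other, corr in cooccur.get(term, {}).items():
            desc[other] += corr * count
    return [desc[t] for t in vocab]

# Hypothetical visual terms and a single co-occurrence relation.
vocab = ["sky_blue", "cloud_white", "grass_green"]
obs = {"sky_blue": 4}
cooc = {"sky_blue": {"cloud_white": 0.5}}
v = vcd(obs, cooc, vocab)
```

Here the unseen but correlated term `cloud_white` receives a proportional share of the observed `sky_blue` count, while unrelated terms stay at zero.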

  18. Development of a VR-based Treadmill Control Interface for Gait Assessment of Patients with Parkinson’s Disease

    PubMed Central

    Park, Hyung-Soon; Yoon, Jung Won; Kim, Jonghyun; Iseki, Kazumi; Hallett, Mark

    2013-01-01

    Freezing of gait (FOG) is a commonly observed phenomenon in Parkinson’s disease, but its causes and mechanisms are not fully understood. This paper presents the development of a virtual reality (VR)-based body-weight supported treadmill interface (BWSTI) designed and applied to investigate FOG. The BWSTI provides a safe and controlled walking platform which allows investigators to assess gait impairments under various conditions that simulate real life. In order to be able to evoke FOG, our BWSTI employed a novel speed adaptation controller, which allows patients to drive the treadmill speed. Our interface responsively follows the subject’s intention of changing walking speed by the combined use of feedback and feedforward controllers. To provide realistic visual stimuli, a three dimensional VR system is interfaced with the speed adaptation controller and synchronously displays realistic visual cues. The VR-based BWSTI was tested with three patients with PD who are known to have FOG. Visual stimuli that might cause FOG were shown to them while the speed adaptation controller adjusted treadmill speed to follow the subjects’ intention. Two of the three subjects showed FOG during the treadmill walking. PMID:22275661

  19. Multi Agent Reward Analysis for Learning in Noisy Domains

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Agogino, Adrian K.

    2005-01-01

    In many multi-agent learning problems, it is difficult to determine, a priori, the agent reward structure that will lead to good performance. This problem is particularly pronounced in continuous, noisy domains ill-suited to the simple tabular backup schemes commonly used in TD(lambda)/Q-learning. In this paper, we present a new reward evaluation method that allows the tradeoff between coordination among the agents and the difficulty of the learning problem each agent faces to be visualized. This method is independent of the learning algorithm and is only a function of the problem domain and the agents' reward structure. We then use this reward efficiency visualization method to determine an effective reward without performing extensive simulations. We test this method in both a static and a dynamic multi-rover learning domain where the agents have continuous state spaces and where their actions are noisy (e.g., the agents' movement decisions are not always carried out properly). Our results show that in the more difficult dynamic domain, the reward efficiency visualization method provides a two-order-of-magnitude speedup in selecting a good reward. Most importantly, it allows one to quickly create and verify rewards tailored to the observational limitations of the domain.

  20. Peptide-activated gold nanoparticles for selective visual sensing of virus

    NASA Astrophysics Data System (ADS)

    Sajjanar, Basavaraj; Kakodia, Bhuvna; Bisht, Deepika; Saxena, Shikha; Singh, Arvind Kumar; Joshi, Vinay; Tiwari, Ashok Kumar; Kumar, Satish

    2015-05-01

    In this study, we report a peptide-gold nanoparticle (AuNP)-based visual sensor for viruses. Citrate-stabilized AuNPs (20 ± 1.9 nm) were functionalized through a strong sulfur-gold interface using a cysteinylated virus-specific peptide. Peptide-Cys-AuNPs formed complexes with the viruses, causing them to aggregate. The aggregation can be observed with the naked eye, and with a UV-Vis spectrophotometer, as a color change from bright red to purple. The test allows fast and selective detection of specific viruses. Spectroscopic measurements showed a high linear correlation (R2 = 0.995) between the change in the optical density ratio (OD610/OD520) and the concentration of virus. The new method was compared with the hemagglutination (HA) test for Newcastle disease virus (NDV). The results indicated that peptide-Cys-AuNP was more sensitive and could visually detect the minimum number of virus particles present in biological samples. The limit of detection for NDV was 0.125 HA units of the virus. The method allows selective detection and quantification of NDV and requires neither isolation of viral RNA nor PCR experiments. This strategy may be utilized for detection of other important human and animal viral pathogens.
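    The reported calibration, a linear relation between the OD610/OD520 ratio and virus concentration, amounts to an ordinary least-squares fit. The data points below are hypothetical stand-ins, not the study's measurements:

```python
def linfit(xs, ys):
    """Ordinary least-squares slope, intercept, and R^2."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return b, a, 1 - ss_res / ss_tot

# Hypothetical calibration points: virus amount (HA units) vs. OD610/OD520.
conc = [0.125, 0.25, 0.5, 1.0, 2.0]
ratio = [0.52, 0.58, 0.71, 0.96, 1.45]
slope, intercept, r2 = linfit(conc, ratio)
```

A positive slope with R^2 near 1 is what supports using the OD ratio for quantification.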

  1. Linking Science Analysis with Observation Planning: A Full Circle Data Lifecycle

    NASA Technical Reports Server (NTRS)

    Grosvenor, Sandy; Jones, Jeremy; Koratkar, Anuradha; Li, Connie; Mackey, Jennifer; Neher, Ken; Wolf, Karl; Obenschain, Arthur F. (Technical Monitor)

    2001-01-01

    A clear goal of the Virtual Observatory (VO) is to enable new science through analysis of integrated astronomical archives. An additional and powerful possibility of the VO is to link and integrate these new analyses with the planning of new observations. By providing tools that can be used for observation planning in the VO, the VO will allow the data lifecycle to come full circle: from theory to observations to data and back around to new theories and new observations. The Scientist's Expert Assistant (SEA) Simulation Facility (SSF) is working to combine the ability to access existing archives with the ability to model and visualize new observations. Integrating the two will allow astronomers to better use the integrated archives of the VO to plan and predict the success of potential new observations more efficiently. The full-circle lifecycle enabled by SEA can allow astronomers to make substantial leaps in the quality of data and science returns on new observations. Our paper examines the exciting potential of integrating archival analysis with new observation planning, such as performing data calibration analysis on archival images and using that analysis to predict the success of new observations, or performing dynamic signal-to-noise analysis combining historical results with modeling of new instruments or targets. We will also describe how the development of the SSF is progressing and what its successes and challenges have been.
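    The "dynamic signal-to-noise analysis" mentioned above can be illustrated with the standard CCD signal-to-noise equation, showing how exposure time trades against sky, dark, and read noise. The parameter values are assumptions purely for illustration, not SEA's actual model:

```python
import math

def snr(source_rate, sky_rate, dark_rate, read_noise, npix, t):
    """Standard CCD signal-to-noise for an exposure of t seconds:
    S / sqrt(S + npix * (sky + dark + read_noise**2))."""
    s = source_rate * t
    noise = math.sqrt(s + npix * (sky_rate * t + dark_rate * t + read_noise ** 2))
    return s / noise

# Assumed rates (e-/s) and a 9-pixel aperture, for illustration only.
short = snr(100.0, 10.0, 1.0, 5.0, 9, t=10.0)
long = snr(100.0, 10.0, 1.0, 5.0, 9, t=100.0)
```

Evaluating such a function over candidate exposure times (or over archival brightness estimates for a target) is the kind of prediction an observation-planning tool can make before any telescope time is spent.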

  2. Linking Science Analysis with Observation Planning: A Full Circle Data Lifecycle

    NASA Technical Reports Server (NTRS)

    Jones, Jeremy; Grosvenor, Sandy; Wolf, Karl; Li, Connie; Koratkar, Anuradha; Powers, Edward I. (Technical Monitor)

    2001-01-01

    A clear goal of the Virtual Observatory (VO) is to enable new science through analysis of integrated astronomical archives. An additional and powerful possibility of the VO is to link and integrate these new analyses with the planning of new observations. By providing tools that can be used for observation planning in the VO, the VO will allow the data lifecycle to come full circle: from theory to observations to data and back around to new theories and new observations. The Scientist's Expert Assistant (SEA) Simulation Facility (SSF) is working to combine the ability to access existing archives with the ability to model and visualize new observations. Integrating the two will allow astronomers to better use the integrated archives of the VO to plan and predict the success of potential new observations. The full-circle lifecycle enabled by SEA can allow astronomers to make substantial leaps in the quality of data and science returns on new observations. Our paper will examine the exciting potential of integrating archival analysis with new observation planning, such as performing data calibration analysis on archival images and using that analysis to predict the success of new observations, or performing dynamic signal-to-noise analysis combining historical results with modeling of new instruments or targets. We will also describe how the development of the SSF is progressing and what its successes and challenges have been.

  3. A Force-Visualized Silicone Retractor Attachable to Surgical Suction Pipes.

    PubMed

    Watanabe, Tetsuyou; Koyama, Toshio; Yoneyama, Takeshi; Nakada, Mitsutoshi

    2017-04-05

    This paper presents a silicone retractor with visually observable retraction force, an extension of a previously developed system that had the same functions of retracting, suction, and force sensing. These features provide not only high usability, by reducing the number of tool changes, but also safer retraction informed by the visualized force. Suction is achieved by attaching the retractor to a suction pipe. The retractor has a deformable sensing component including a hole filled with a liquid. The hole is connected to an outer tube, and the liquid level is displaced in proportion to the deformation produced by the retracting load. The liquid level can be observed near the surgeon's fingertips, which enhances usability. The new hybrid structure of soft sensing and hard retracting allows miniaturization of the retractor, with a resolution better than 0.05 N over a range of 0.1-0.7 N. The overall structure is made of silicone, which has the advantages of disposability, low cost, and easy sterilization/disinfection. The system was validated experimentally.

  4. Distance-dependent pattern blending can camouflage salient aposematic signals.

    PubMed

    Barnett, James B; Cuthill, Innes C; Scott-Samuel, Nicholas E

    2017-07-12

    The effect of viewing distance on the perception of visual texture is well known: spatial frequencies higher than the resolution limit of an observer's visual system will be summed and perceived as a single combined colour. In animal defensive colour patterns, distance-dependent pattern blending may allow aposematic patterns, salient at close range, to match the background to distant observers. Indeed, recent research has indicated that reducing the distance from which a salient signal can be detected can increase survival over camouflage or conspicuous aposematism alone. We investigated whether the spatial frequency of conspicuous and cryptically coloured stripes affects the rate of avian predation. Our results are consistent with pattern blending acting to camouflage salient aposematic signals effectively at a distance. Experiments into the relative rate of avian predation on edible model caterpillars found that increasing spatial frequency (thinner stripes) increased survival. Similarly, visual modelling of avian predators showed that pattern blending increased the similarity between caterpillar and background. These results show how a colour pattern can be tuned to reveal or conceal different information at different distances, and produce tangible survival benefits. © 2017 The Author(s).
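    Distance-dependent pattern blending can be simulated by averaging pixels over the observer's resolution limit: stripes that are high-contrast up close average to a single uniform tone from afar. A minimal 1-D sketch (the stripe pattern and blending factor are illustrative):

```python
def blend(pattern, factor):
    """Average each run of `factor` pixels -- what happens when stripe
    frequency exceeds the observer's resolution limit at a distance."""
    return [sum(pattern[i:i + factor]) / factor
            for i in range(0, len(pattern), factor)]

# 1 = conspicuous stripe, 0 = cryptic stripe, each one pixel wide.
thin = [1, 0] * 8
far = blend(thin, 4)  # viewed from afar: the stripes merge into one mid-tone
```

Up close the pattern has full contrast (a salient aposematic signal); at a distance every blended sample is the same intermediate value, which can be tuned to match the background.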

  5. Live Aircraft Encounter Visualization at FutureFlight Central

    NASA Technical Reports Server (NTRS)

    Murphy, James R.; Chinn, Fay; Monheim, Spencer; Otto, Neil; Kato, Kenji; Archdeacon, John

    2018-01-01

    Researchers at the National Aeronautics and Space Administration (NASA) have developed an aircraft data streaming capability that can be used to visualize live aircraft in near real time. During a joint Federal Aviation Administration (FAA)/NASA Airborne Collision Avoidance System flight series, test sorties between unmanned aircraft and manned intruder aircraft were shown in real time at NASA Ames' FutureFlight Central tower facility as a virtual representation of the encounter. This capability leveraged existing live surveillance, video, and audio data streams distributed through a Live, Virtual, Constructive test environment, then depicted the encounter from the point of view of any aircraft in the system, showing the proximity of the other aircraft. For the demonstration, position report data were sent to the ground from on-board sensors on the unmanned aircraft. The point of view can be changed dynamically, allowing encounters to be observed from all angles. Visualizing the encounters in real time provides a safe and effective method for observing live flight testing and a strong alternative to travel to the remote test range.

  6. Desktop Cloud Visualization: the new technology to remote access 3D interactive applications in the Cloud.

    PubMed

    Torterolo, Livia; Ruffino, Francesco

    2012-01-01

    In the proposed demonstration we will present DCV (Desktop Cloud Visualization): a unique technology that allows users to remotely access 2D and 3D interactive applications over a standard network. This allows geographically dispersed doctors to work collaboratively and to acquire anatomical or pathological images and visualize them for further investigation.

  7. Laser light visual cueing for freezing of gait in Parkinson disease: A pilot study with male participants.

    PubMed

    Bunting-Perry, Lisette; Spindler, Meredith; Robinson, Keith M; Noorigian, Joseph; Cianci, Heather J; Duda, John E

    2013-01-01

    Freezing of gait (FOG) is a debilitating feature of Parkinson disease (PD). In this pilot study, we sought to assess the efficacy of a rolling walker with a laser beam visual cue to treat FOG in PD patients. We recruited 22 subjects with idiopathic PD who experienced on- and off-medication FOG. Subjects performed three walking tasks both with and without the laser beam while on medications. Outcome measures included time to complete tasks, number of steps, and number of FOG episodes. A crossover design allowed within-group comparisons between the two conditions. No significant differences were observed between the two walking conditions across the three tasks. The laser beam, when applied as a visual cue on a rolling walker, did not diminish FOG in this study.

  8. Visualization in hydrological and atmospheric modeling and observation

    NASA Astrophysics Data System (ADS)

    Helbig, C.; Rink, K.; Kolditz, O.

    2013-12-01

    In recent years, visualization of geoscientific and climate data has become increasingly important due to challenges such as climate change, flood prediction and the development of water management schemes for arid and semi-arid regions. Models for simulations based on such data often have a large number of heterogeneous input data sets, ranging from remote sensing data and geometric information (such as GPS data) to sensor data from specific observation sites. Data integration using such information is not straightforward, and a large number of potential problems may occur due to artifacts, inconsistencies between data sets, or errors from incorrectly calibrated or contaminated measurement devices. Algorithms to automatically detect many such problems are often numerically expensive or difficult to parameterize. In contrast, combined visualization of various data sets is often a surprisingly efficient means for an expert to detect artifacts or inconsistencies, as well as to discuss properties of the data. Therefore, the development of general visualization strategies for atmospheric or hydrological data will often support researchers during assessment and preprocessing of the data for model setup. When investigating specific phenomena, visualization is vital for assessing the progress of an ongoing simulation during runtime as well as for evaluating the plausibility of the results. We propose a number of such strategies based on established visualization methods that are applicable to a large range of data set types, are computationally inexpensive enough to be applied to time-dependent data, and can be easily parameterized based on the specific focus of the research. Examples include the highlighting of certain aspects of complex data sets using, for example, an application-dependent parameterization of glyphs, iso-surfaces or streamlines. 
    In addition, we employ basic rendering techniques allowing affine transformations, changes in opacity, and variation of transfer functions. We found that similar strategies can be applied to hydrological and atmospheric data, such as the use of streamlines to visualize wind or fluid flow, or iso-surfaces to indicate groundwater recharge levels in the subsurface and humidity levels in the atmosphere. We applied these strategies to a wide range of hydrological and climate applications such as groundwater flow, the distribution of chemicals in water bodies, the development of convection cells in the atmosphere, and heat flux at the earth's surface. Results have been evaluated in discussions with experts from hydrogeology and meteorology.
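Strategies like the streamline visualizations described above can be prototyped with very little code. The sketch below (plain NumPy with a synthetic rotation field standing in for wind or groundwater flow; not the authors' implementation) shows the core operation behind streamline rendering, tracing a path through a sampled vector field:

```python
import numpy as np

def trace_streamline(vx, vy, start, step=0.1, n_steps=200):
    """Trace one streamline through a 2D vector field sampled on a unit grid,
    using forward-Euler steps and nearest-neighbour velocity lookup."""
    h, w = vx.shape
    pts = [np.asarray(start, dtype=float)]   # (x, y) positions
    for _ in range(n_steps):
        x, y = pts[-1]
        i, j = int(round(y)), int(round(x))
        if not (0 <= i < h and 0 <= j < w):
            break                            # streamline left the domain
        v = np.array([vx[i, j], vy[i, j]])
        speed = np.linalg.norm(v)
        if speed < 1e-12:
            break                            # stagnation point
        pts.append(pts[-1] + step * v / speed)
    return np.array(pts)

# Synthetic solid-body rotation field (stand-in for a wind or flow data set).
n = 50
yy, xx = np.mgrid[0:n, 0:n]
vx = -(yy - n / 2.0)
vy = xx - n / 2.0

line = trace_streamline(vx, vy, start=(35.0, 25.0))
# On a pure rotation field the traced points keep a near-constant radius.
radii = np.hypot(line[:, 0] - n / 2.0, line[:, 1] - n / 2.0)
```

A production tool would add higher-order integration and seeding strategies, but the parameterization knobs the abstract mentions (step size, seed placement, glyph styling) all hang off a loop of this shape.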

  9. Developing a Data Visualization System for the Bank of America Chicago Marathon (Chicago, Illinois USA).

    PubMed

    Hanken, Taylor; Young, Sam; Smilowitz, Karen; Chiampas, George; Waskowski, David

    2016-10-01

    As one of the largest marathons worldwide, the Bank of America Chicago Marathon (BACCM; Chicago, Illinois USA) accumulates high volumes of data. Race organizers and engaged agencies need the ability to access specific data in real-time. This report details a data visualization system designed for the Chicago Marathon and establishes key principles for event management data visualization. The data visualization system allows for efficient data communication among the organizing agencies of Chicago endurance events. Agencies can observe the progress of the race throughout the day and obtain needed information, such as the number and location of runners on the course and current weather conditions. Implementation of the system can reduce time-consuming, face-to-face interactions between involved agencies by having key data streams in one location, streamlining communications with the purpose of improving race logistics, as well as medical preparedness and response. Hanken T , Young S , Smilowitz K , Chiampas G , Waskowski D . Developing a data visualization system for the Bank of America Chicago Marathon (Chicago, Illinois USA). Prehosp Disaster Med. 2016;31(5):572-577.

  10. Condensation in Nanoporous Packed Beds.

    PubMed

    Ally, Javed; Molla, Shahnawaz; Mostowfi, Farshid

    2016-05-10

    In materials with tiny, nanometer-scale pores, liquid condensation is shifted from the bulk saturation pressure observed at larger scales. This effect is called capillary condensation and can block pores, which has major consequences in hydrocarbon production, as well as in fuel cells, catalysis, and powder adhesion. In this study, high pressure nanofluidic condensation studies are performed using propane and carbon dioxide in a colloidal crystal packed bed. Direct visualization allows the extent of condensation to be observed, as well as inference of the pore geometry from Bragg diffraction. We show experimentally that capillary condensation depends on pore geometry and wettability because these factors determine the shape of the menisci that coalesce when pore filling occurs, contrary to the typical assumption that all pore structures can be modeled as cylindrical and perfectly wetting. We also observe capillary condensation at higher pressures than previously reported, which is important because many applications involving this phenomenon occur well above atmospheric pressure, and there is little, if any, experimental validation of capillary condensation at such pressures, particularly with direct visualization.
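The pressure shift described here is classically estimated with the Kelvin equation, which also makes the abstract's point about wettability explicit through its contact-angle term. The sketch below uses illustrative, roughly water-like constants (not the study's propane/CO2 data):

```python
import math

def kelvin_ratio(gamma, molar_volume, radius_m, theta_deg=0.0, T=293.15):
    """p/p0 at which capillary condensation occurs in a cylindrical pore
    (Kelvin equation, hemispherical meniscus with contact angle theta)."""
    R = 8.314  # gas constant, J/(mol K)
    cos_t = math.cos(math.radians(theta_deg))
    return math.exp(-2.0 * gamma * molar_volume * cos_t / (radius_m * R * T))

# Illustrative, water-like constants (NOT the paper's fluids):
gamma, Vm = 0.072, 1.8e-5                   # surface tension N/m, molar volume m^3/mol
tight = kelvin_ratio(gamma, Vm, 2e-9)       # 2 nm pore, perfectly wetting
loose = kelvin_ratio(gamma, Vm, 50e-9)      # 50 nm pore: much smaller shift
neutral = kelvin_ratio(gamma, Vm, 2e-9, theta_deg=90.0)  # non-wetting: no shift
```

The smaller and more wettable the pore, the further below bulk saturation condensation occurs, which is exactly the geometry/wettability coupling the experiments probe.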

  11. Testing the importance of auditory detections in avian point counts

    USGS Publications Warehouse

    Brewster, J.P.; Simons, T.R.

    2009-01-01

    Recent advances in the methods used to estimate detection probability during point counts suggest that the detection process is shaped by the types of cues available to observers. For example, models of the detection process based on distance-sampling or time-of-detection methods may yield different results for auditory versus visual cues because of differences in the factors that affect the transmission of these cues from a bird to an observer or differences in an observer's ability to localize cues. Previous studies suggest that auditory detections predominate in forested habitats, but it is not clear how often observers hear birds prior to detecting them visually. We hypothesized that auditory cues might be even more important than previously reported, so we conducted an experiment in a forested habitat in North Carolina that allowed us to better separate auditory and visual detections. Three teams of three observers each performed simultaneous 3-min unlimited-radius point counts at 30 points in a mixed-hardwood forest. One team member could see but not hear birds, one could hear but not see them, and the third was nonhandicapped. Of the total number of birds detected, 2.9% were detected by deafened observers, 75.1% by blinded observers, and 78.2% by nonhandicapped observers. Detections by blinded and nonhandicapped observers were the same only 54% of the time. Our results suggest that the detection of birds in forest habitats is almost entirely by auditory cues. Because many factors affect the probability that observers will detect auditory cues, the accuracy and precision of avian point count estimates are likely lower than assumed by most field ornithologists. © 2009 Association of Field Ornithologists.

  12. Interactive visualization of vegetation dynamics

    USGS Publications Warehouse

    Reed, B.C.; Swets, D.; Bard, L.; Brown, J.; Rowland, James

    2001-01-01

    Satellite imagery provides a mechanism for observing seasonal dynamics of the landscape that have implications for near real-time monitoring of agriculture, forest, and range resources. This study illustrates a technique for visualizing timely information on key events during the growing season (e.g., onset, peak, duration, and end of growing season), as well as the status of the current growing season with respect to the recent historical average. Using time-series analysis of normalized difference vegetation index (NDVI) data from the advanced very high resolution radiometer (AVHRR) satellite sensor, seasonal dynamics can be derived. We have developed a set of Java-based visualization and analysis tools to make comparisons between the seasonal dynamics of the current year with those from the past twelve years. In addition, the visualization tools allow the user to query underlying databases such as land cover or administrative boundaries to analyze the seasonal dynamics of areas of their own interest. The Java-based tools (data exploration and visualization analysis or DEVA) use a Web-based client-server model for processing the data. The resulting visualization and analysis, available via the Internet, is of value to those responsible for land management decisions, resource allocation, and at-risk population targeting.
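The key seasonal events listed above (onset, peak, duration, end of season) can be derived from a smoothed NDVI time series with simple threshold logic. The sketch below uses one common amplitude-threshold convention on synthetic biweekly composites; the operational USGS algorithm differs in detail:

```python
import numpy as np

def season_metrics(ndvi, threshold=0.5):
    """Onset, peak, and end of growing season from one year of smoothed NDVI.
    Onset/end are the first/last samples exceeding `threshold` of the seasonal
    amplitude above the baseline (an assumed convention, for illustration)."""
    base, amp = ndvi.min(), ndvi.max() - ndvi.min()
    cut = base + threshold * amp
    above = np.where(ndvi >= cut)[0]
    return {"onset": int(above[0]),
            "peak": int(np.argmax(ndvi)),
            "end": int(above[-1])}

# Synthetic year of 26 biweekly composites with a Gaussian green-up pulse.
t = np.arange(26)
ndvi = 0.15 + 0.55 * np.exp(-((t - 13) ** 2) / 18.0)
m = season_metrics(ndvi)
```

Comparing `m` for the current year against the same metrics computed over the historical record gives the year-to-year anomaly view the DEVA tools provide.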

  13. Deep recurrent neural network reveals a hierarchy of process memory during dynamic natural vision.

    PubMed

    Shi, Junxing; Wen, Haiguang; Zhang, Yizhen; Han, Kuan; Liu, Zhongming

    2018-05-01

    The human visual cortex extracts both spatial and temporal visual features to support perception and guide behavior. Deep convolutional neural networks (CNNs) provide a computational framework to model cortical representation and organization for spatial visual processing, but they are unable to explain how the brain processes temporal information. To overcome this limitation, we extended a CNN by adding recurrent connections to its different layers to allow spatial representations to be remembered and accumulated over time. The extended model, or the recurrent neural network (RNN), embodied a hierarchical and distributed model of process memory as an integral part of visual processing. Unlike the CNN, the RNN learned spatiotemporal features from videos to enable action recognition. The RNN better predicted cortical responses to natural movie stimuli than the CNN at all visual areas, especially those along the dorsal stream. As a fully observable model of visual processing, the RNN also revealed a cortical hierarchy of temporal receptive windows, dynamics of process memory, and spatiotemporal representations. These results support the hypothesis of process memory, and demonstrate the potential of using the RNN for in-depth computational understanding of dynamic natural vision. © 2018 Wiley Periodicals, Inc.
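The architectural idea, adding a recurrent connection so spatial feature maps are retained and accumulated over time, can be illustrated in a few lines of NumPy. This is a toy 1x1 "convolution" with a leaky recurrent state, not the authors' trained network; it shows that the recurrent unit keeps a decaying memory of the stimulus after it ends, while the purely feedforward unit forgets immediately:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1(x, w):
    # toy "convolution": 1x1 channel mixing on a (C, H, W) feature map
    return np.einsum('oc,chw->ohw', w, x)

C, H, W = 4, 8, 8
w_ff = rng.normal(0, 0.5, (C, C))   # feedforward weights
alpha = 0.8                          # recurrent leak: fraction of state kept

def run(frames, recurrent):
    h = np.zeros((C, H, W))
    trace = []
    for x in frames:
        drive = np.maximum(conv1x1(x, w_ff), 0.0)      # ReLU feedforward drive
        h = alpha * h + drive if recurrent else drive  # recurrent accumulation
        trace.append(np.abs(h).mean())
    return trace

# 5 stimulus frames followed by 10 blank frames
frames = [rng.normal(size=(C, H, W)) for _ in range(5)] + [np.zeros((C, H, W))] * 10
ff = run(frames, recurrent=False)
rec = run(frames, recurrent=True)
```

Stacking such units with different `alpha` values per layer is one way to obtain the hierarchy of temporal receptive windows the study reports.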

  14. The influence of visual and vestibular orientation cues in a clock reading task.

    PubMed

    Davidenko, Nicolas; Cheong, Yeram; Waterman, Amanda; Smith, Jacob; Anderson, Barrett; Harmon, Sarah

    2018-05-23

    We investigated how performance in the real-life perceptual task of analog clock reading is influenced by the clock's orientation with respect to egocentric, gravitational, and visual-environmental reference frames. In Experiment 1, we designed a simple clock-reading task and found that observers' reaction time to correctly tell the time depends systematically on the clock's orientation. In Experiment 2, we dissociated egocentric from environmental reference frames by having participants sit upright or lie sideways while performing the task. We found that both reference frames substantially contribute to response times in this task. In Experiment 3, we placed upright or rotated participants in an upright or rotated immersive virtual environment, which allowed us to further dissociate vestibular from visual cues to the environmental reference frame. We found evidence of environmental reference frame effects only when visual and vestibular cues were aligned. We discuss the implications for the design of remote and head-mounted displays. Copyright © 2018 Elsevier Inc. All rights reserved.

  15. Square or sine: finding a waveform with high success rate of eliciting SSVEP.

    PubMed

    Teng, Fei; Chen, Yixin; Choong, Aik Min; Gustafson, Scott; Reichley, Christopher; Lawhead, Pamela; Waddell, Dwight

    2011-01-01

    Steady state visual evoked potential (SSVEP) is the brain's natural electrical potential response to visual stimuli at specific frequencies. A visual stimulus flashing at a given frequency will entrain the SSVEP at the same frequency, thereby allowing determination of the subject's visual focus. The faster an SSVEP is identified, the higher the information transmission rate the system achieves. Thus, an effective stimulus, defined as one with a high success rate of eliciting SSVEP and a high signal-to-noise ratio, is desired. Also, researchers have observed that harmonic frequencies often appear in the SSVEP at a reduced magnitude. Are the harmonics in the SSVEP elicited by the fundamental stimulating frequency or by artifacts of the stimuli? In this paper, we compare the SSVEP responses to three periodic stimuli: square wave (with different duty cycles), triangle wave, and sine wave, to find an effective stimulus. We also demonstrate the connection between the strength of the harmonics in the SSVEP and the type of stimulus.
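The harmonic question can be made concrete by inspecting the spectra of the candidate stimuli themselves (no EEG involved). A 50%-duty-cycle square wave carries power at odd harmonics of the flicker frequency, while a sine carries essentially none, which is why stimulus artifacts are a plausible source of SSVEP harmonics:

```python
import numpy as np

fs, f0, T = 1000, 10, 2.0                      # sample rate (Hz), flicker freq, duration (s)
t = np.arange(0, T, 1.0 / fs)

sine = np.sin(2 * np.pi * f0 * t)
square = np.sign(np.sin(2 * np.pi * f0 * t))   # 50% duty cycle

def harmonic_amp(x, freq):
    """Normalized spectral amplitude of x at the given frequency."""
    spec = np.abs(np.fft.rfft(x)) / len(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    return spec[np.argmin(np.abs(freqs - freq))]

sq_f0, sq_3f0 = harmonic_amp(square, f0), harmonic_amp(square, 3 * f0)
si_f0, si_3f0 = harmonic_amp(sine, f0), harmonic_amp(sine, 3 * f0)
```

For an ideal square wave the 3rd-harmonic amplitude is one third of the fundamental (Fourier series 4/(k*pi) for odd k), so a square-wave stimulus injects harmonic content by construction.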

  16. NGC 2024: Far-infrared and radio molecular observations

    NASA Technical Reports Server (NTRS)

    Thronson, H. A., Jr.; Lada, C. J.; Schwartz, P. R.; Smith, H. A.; Smith, J.; Glaccum, W.; Harper, D. A.; Loewenstein, R. F.

    1984-01-01

    Far-infrared continuum and millimeter-wave molecular observations are presented for the infrared and radio source NGC 2024. The measurements are obtained at relatively high angular resolution, enabling a description of the source energetics and mass distribution in greater detail than previously reported. The object appears to be dominated by a dense ridge of material, extended in the north/south direction and centered on the dark lane that is seen in visual photographs. Maps of the source using the high density molecules CS and HCN confirm this picture and allow a description of the core structure and molecular abundances. The radio molecular and infrared observations support the idea that an important exciting star in NGC 2024 has yet to be identified and is centered on the dense ridge about 1' south of the bright mid-infrared source IRS 2. The data presented here allow a model for the source to be presented.

  17. Identification of a localized core mode in a helicon plasma

    NASA Astrophysics Data System (ADS)

    Green, Daniel A.; Chakraborty Thakur, Saikat; Tynan, George R.; Light, Adam D.

    2017-10-01

    We present imaging measurements of a newly observed mode in the core of the Controlled Shear Decorrelation Experiment - Upgrade (CSDX-U). CSDX-U is a well-characterized linear machine producing dense plasmas relevant to the tokamak edge (Te ≈ 3 eV, ne ≈ 10^13 cm^-3). Typical fluctuations are dominated by electron drift waves, with evidence for Kelvin-Helmholtz vortices appearing near the plasma edge. A new mode has been observed using high-speed imaging that appears at high magnetic field strengths and is confined to the inner third of the plasma column. A cross-spectral phase technique allows direct visualization of dominant spatial structures as a function of frequency. Experimental dispersion curve estimates are constructed from imaging data alone, and allow direct comparison of theoretical dispersion relations to the observed mode. We present preliminary identification of the mode based on its dispersion curve, and compare the results with electrostatic probe measurements.
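The cross-spectral phase technique reduces, for each pair of pixels, to taking the phase of the cross-spectrum of their time series at the frequency of interest. A minimal 1D sketch with synthetic signals (not CSDX-U data) recovers an imposed phase lag between two "pixels":

```python
import numpy as np

fs, f0, N = 200.0, 12.0, 2000
t = np.arange(N) / fs
lag = np.pi / 3                       # imposed phase lag between two "pixels"
rng = np.random.default_rng(1)
a = np.sin(2 * np.pi * f0 * t) + 0.2 * rng.normal(size=N)
b = np.sin(2 * np.pi * f0 * t - lag) + 0.2 * rng.normal(size=N)

# Cross-spectrum S_ab(f) = A(f) * conj(B(f)); its phase at the mode
# frequency is the relative phase between the two time series.
A, B = np.fft.rfft(a), np.fft.rfft(b)
freqs = np.fft.rfftfreq(N, 1.0 / fs)
S_ab = A * np.conj(B)
k = np.argmin(np.abs(freqs - f0))
phase = np.angle(S_ab[k])
```

Mapping `phase` over all camera pixels at a fixed frequency yields the spatial-structure images the abstract describes; stepping through frequencies separates co-existing modes.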

  18. Crystal accumulation in the Hanford Waste Treatment Plant high level waste melter: Summary of 2017 experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, K.; Fowley, M.

    A full-scale, transparent mock-up of the Hanford Tank Waste Treatment and Immobilization Project High Level Waste glass melter riser and pour spout has been constructed to allow for testing with visual feedback of particle settling, accumulation, and resuspension when operating with a controlled fraction of crystals in the glass melt. Room temperature operation with silicone oil and magnetite particles simulating molten glass and spinel crystals, respectively, allows for direct observation of flow patterns and settling patterns. The fluid and particle mixture is recycled within the system for each test.

  19. Preliminary investigation of visual attention to human figures in photographs: potential considerations for the design of aided AAC visual scene displays.

    PubMed

    Wilkinson, Krista M; Light, Janice

    2011-12-01

    Many individuals with complex communication needs may benefit from visual aided augmentative and alternative communication systems. In visual scene displays (VSDs), language concepts are embedded into a photograph of a naturalistic event. Humans play a central role in communication development and might be important elements in VSDs. However, many VSDs omit human figures. In this study, the authors sought to describe the distribution of visual attention to humans in naturalistic scenes as compared with other elements. Nineteen college students observed 8 photographs in which a human figure appeared near 1 or more items that might be expected to compete for visual attention (such as a Christmas tree or a table loaded with food). Eye-tracking technology allowed precise recording of participants' gaze. The fixation duration over a 7-s viewing period and latency to view elements in the photograph were measured. Participants fixated on the human figures more rapidly and for longer than expected based on the size of these figures, regardless of the other elements in the scene. Human figures attract attention in a photograph even when presented alongside other attractive distracters. Results suggest that humans may be a powerful means to attract visual attention to key elements in VSDs.

  20. Target-present guessing as a function of target prevalence and accumulated information in visual search.

    PubMed

    Peltier, Chad; Becker, Mark W

    2017-05-01

    Target prevalence influences visual search behavior. At low target prevalence, miss rates are high and false alarms are low, while the opposite is true at high prevalence. Several models of search aim to describe search behavior, one of which has been specifically intended to model search at varying prevalence levels. The multiple decision model (Wolfe & Van Wert, Current Biology, 20(2), 121-124, 2010) posits that all searches that end before the observer detects a target result in a target-absent response. However, researchers have found very high false alarms in high-prevalence searches, suggesting that prevalence rates may be used as a source of information to make "educated guesses" after search termination. Here, we further examine the ability of prevalence level and knowledge gained during visual search to influence guessing rates. We manipulate target prevalence and the amount of information that an observer accumulates about a search display prior to making a response, to test whether these sources of evidence are used to inform target-present guess rates. We find that observers use both information about target prevalence rates and information about the proportion of the array inspected prior to making a response, allowing them to make an informed and statistically driven guess about the target's presence.
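An "educated guess" of the kind described can be formalized with Bayes' rule: the posterior probability that a target is present, given that a fraction of the display has been inspected without finding one. This is a normative sketch under simplifying assumptions (uniformly placed target, perfect detection of inspected items), not the authors' fitted model:

```python
def guess_present_prob(prevalence, fraction_inspected):
    """Posterior P(target present | not found after inspecting a fraction
    of the display). Assumes a uniformly located target and perfect
    detection of inspected items (illustrative simplifications)."""
    p, q = prevalence, fraction_inspected
    miss_likelihood = 1.0 - q          # P(not yet found | target present)
    return p * miss_likelihood / (p * miss_likelihood + (1.0 - p))

high = guess_present_prob(0.9, 0.5)    # high prevalence, half the array inspected
low = guess_present_prob(0.1, 0.5)     # low prevalence, same inspection
```

Both manipulations in the study map onto this expression: prevalence sets the prior, and the proportion of the array inspected sets the likelihood of having missed a present target.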

  1. Interactive investigations into planetary interiors

    NASA Astrophysics Data System (ADS)

    Rose, I.

    2015-12-01

    Many processes in Earth science are difficult to observe or visualize due to the large timescales and lengthscales over which they operate. The dynamics of planetary mantles are particularly challenging as we cannot even look at the rocks involved. As a result, much teaching material on mantle dynamics relies on static images and cartoons, many of which are decades old. Recent improvements in computing power and technology (largely driven by game and web development) have allowed for advances in real-time physics simulations and visualizations, but these have been slow to affect Earth science education. Here I demonstrate a teaching tool for mantle convection and seismology which solves the equations for conservation of mass, momentum, and energy in real time, allowing users to make changes to the simulation and immediately see the effects. The user can ask and answer questions about what happens when they add heat in one place, or take it away from another place, or increase the temperature at the base of the mantle. They can also pause the simulation, and while it is paused, create and visualize seismic waves traveling through the mantle. These allow for investigations into and discussions about plate tectonics, earthquakes, hot spot volcanism, and planetary cooling. The simulation is rendered to the screen using OpenGL, and is cross-platform. It can be run as a native application for maximum performance, but it can also be embedded in a web browser for easy deployment and portability.
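The energy-conservation part of such a real-time solver can be sketched as an explicit finite-difference step of the heat equation, with a user interaction ("click to inject heat") modeled as setting one cell hot. This NumPy sketch omits advection and momentum, and folds the timestep and grid spacing into a single stability-limited coefficient; it is not the demonstrated tool's solver:

```python
import numpy as np

def diffuse_step(T, kappa=0.2):
    """One explicit finite-difference step of the 2D heat equation with
    periodic boundaries. kappa stands for kappa*dt/dx^2 and must stay
    below 0.25 for numerical stability."""
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4.0 * T)
    return T + kappa * lap

n = 32
T = np.zeros((n, n))
T[16, 16] = 100.0            # user interaction: inject heat at one cell
total = T.sum()              # total thermal energy before diffusion
for _ in range(50):
    T = diffuse_step(T)
```

With periodic boundaries the step conserves total energy exactly, which is the kind of invariant a real-time teaching simulation must maintain while the user pokes at it.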

  2. 3D visualization of Thoraco-Lumbar Spinal Lesions in German Shepherd Dog

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azpiroz, J.; Krafft, J.; Cadena, M.

    2006-09-08

    Computed tomography (CT) has been found to be an excellent imaging modality due to its sensitivity to characterize the morphology of the spine in dogs. This technique is considered to be particularly helpful for diagnosing spinal cord atrophy and spinal stenosis. The three-dimensional visualization of organs and bones can significantly improve the diagnosis of certain diseases in dogs. CT images of a German shepherd dog's spinal cord were acquired to generate stacks, which were digitally processed and arranged into a volume image. All images were acquired using standard clinical protocols on a clinical CT scanner. The three-dimensional visualization allowed us to observe anatomical structures that are otherwise not possible to observe with two-dimensional images. The combination of an imaging modality like CT together with image processing techniques can be a powerful tool for the diagnosis of a number of animal diseases.
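The stack-then-reslice workflow described here is straightforward to sketch. With synthetic arrays standing in for the DICOM slices (real data loading is omitted), stacking axial slices into a volume makes sagittal and coronal planes, unavailable in any single 2D image, directly accessible:

```python
import numpy as np

# Synthetic axial CT slices (H x W intensity arrays) standing in for a series.
rng = np.random.default_rng(4)
slices = [rng.integers(0, 1000, size=(64, 64)) for _ in range(40)]

# Stack into a (z, y, x) volume, then reslice along the other two axes --
# the step that exposes structures no single axial image can show.
volume = np.stack(slices, axis=0)
sagittal = volume[:, :, 32]                  # fixed x: a (z, y) plane
coronal = volume[:, 32, :]                   # fixed y: a (z, x) plane
```

Rendering `volume` with iso-surfaces or volume ray casting is then a display problem; the diagnostic gain comes from having the full 3D array to slice arbitrarily.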

  3. Components of Attention in Grapheme-Color Synesthesia: A Modeling Approach.

    PubMed

    Ásgeirsson, Árni Gunnar; Nordfang, Maria; Sørensen, Thomas Alrik

    2015-01-01

    Grapheme-color synesthesia is a condition where the perception of graphemes consistently and automatically evokes an experience of non-physical color. Many have studied how synesthesia affects the processing of achromatic graphemes, but less is known about the synesthetic processing of physically colored graphemes. Here, we investigated how the visual processing of colored letters is affected by the congruence or incongruence of synesthetic grapheme-color associations. We briefly presented graphemes (10-150 ms) to 9 grapheme-color synesthetes and to 9 control observers. Their task was to report as many letters (targets) as possible, while ignoring digits (distractors). Graphemes were either congruently or incongruently colored with respect to the synesthetes' reported grapheme-color associations. A mathematical model, based on Bundesen's (1990) Theory of Visual Attention (TVA), was fitted to each observer's data, allowing us to estimate discrete components of visual attention. The models suggested that the synesthetes processed congruent letters faster than incongruent ones, and that they were able to retain more congruent letters in visual short-term memory, while the control group's model parameters were not significantly affected by congruence. The increase in processing speed when synesthetes process congruent letters suggests that synesthesia affects the processing of letters at a perceptual level. To account for the benefit in processing speed, we propose that synesthetic associations become integrated into the categories of graphemes, and that letter colors are considered as evidence for making certain perceptual categorizations in the visual system. We also propose that enhanced visual short-term memory capacity for congruently colored graphemes can be explained by the synesthetes' expertise regarding their specific grapheme-color associations.
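The two fitted TVA components, processing speed and visual short-term memory capacity, can be illustrated with a Monte-Carlo sketch of a TVA-style exponential race. All parameter values here are invented for illustration; the authors fitted TVA analytically to observed report distributions rather than simulating:

```python
import numpy as np

rng = np.random.default_rng(2)

def expected_report(v, n_items=6, K=4, t=0.05, n_sim=20000):
    """Monte-Carlo sketch of a TVA-like race: each of n_items letters
    finishes encoding after an Exp(v) time; letters finishing before the
    exposure t enter VSTM, which holds at most K items. Returns the mean
    number of reportable letters."""
    times = rng.exponential(1.0 / v, size=(n_sim, n_items))
    n_encoded = (times <= t).sum(axis=1)
    return np.minimum(n_encoded, K).mean()

congruent = expected_report(v=40.0)    # hypothetical faster rate for congruent letters
incongruent = expected_report(v=15.0)  # hypothetical slower rate for incongruent ones
```

A higher race rate `v` (the speed advantage found for congruent letters) yields more reported letters at brief exposures, and raising `K` would capture the separate VSTM-capacity benefit.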

  4. The use of unmanned aerial vehicle imagery in intertidal monitoring

    NASA Astrophysics Data System (ADS)

    Konar, Brenda; Iken, Katrin

    2018-01-01

    Intertidal monitoring projects are often limited in their practicality because traditional methods such as visual surveys or removal of biota are often limited in the spatial extent for which data can be collected. Here, we used imagery from a small unmanned aerial vehicle (sUAV) to test its potential use in rocky intertidal and intertidal seagrass surveys in the northern Gulf of Alaska. Images captured by the sUAV in the high, mid and low intertidal strata on a rocky beach and within a seagrass bed were compared to data derived concurrently from observer visual surveys and to images taken by observers on the ground. Observer visual data always resulted in the highest taxon richness, but when observer data were aggregated to the lower taxonomic resolution obtained by the sUAV images, overall community composition was mostly similar between the two methods. Ground camera images and sUAV images yielded mostly comparable community composition despite the typically higher taxonomic resolution obtained by the ground camera. We conclude that monitoring goals or research questions that can be answered on a relatively coarse taxonomic level can benefit from an sUAV-based approach because it allows much larger spatial coverage within the time constraints of a low tide interval than is possible by observers on the ground. We demonstrated this large-scale applicability by using sUAV images to develop maps that show the distribution patterns and patchiness of seagrass.
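The aggregation step, collapsing observers' fine-grained records to the coarser taxonomic resolution resolvable in sUAV images before comparing richness, can be sketched with a simple lookup table. The taxa and groupings below are invented for illustration, not the study's species list:

```python
# Hypothetical mapping from observer-level taxa to sUAV-resolvable groups.
to_coarse = {
    "Fucus distichus": "brown algae",
    "Alaria marginata": "brown algae",
    "Mytilus trossulus": "mussels",
    "Balanus glandula": "barnacles",
    "Semibalanus cariosus": "barnacles",
}

observer = ["Fucus distichus", "Alaria marginata", "Mytilus trossulus",
            "Balanus glandula", "Semibalanus cariosus"]

observer_richness = len(set(observer))           # richness at observer resolution
aggregated = {to_coarse[sp] for sp in observer}  # collapse to coarse groups
aggregated_richness = len(aggregated)
```

Richness necessarily drops (or stays equal) under aggregation, which is why observer surveys always scored highest before the data were put on a common taxonomic footing.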

  5. Developing an Interactive Data Visualization Tool to Assess the Impact of Decision Support on Clinical Operations.

    PubMed

    Huber, Timothy C; Krishnaraj, Arun; Monaghan, Dayna; Gaskin, Cree M

    2018-05-18

    Due to mandates from recent legislation, clinical decision support (CDS) software is being adopted by radiology practices across the country. This software provides imaging study decision support for referring providers at the point of order entry. CDS systems produce a large volume of data, providing opportunities for research and quality improvement. In order to better visualize and analyze trends in this data, an interactive data visualization dashboard was created using a commercially available data visualization platform. Following the integration of a commercially available clinical decision support product into the electronic health record, a dashboard was created using a commercially available data visualization platform (Tableau, Seattle, WA). Data generated by the CDS were exported from the data warehouse, where they were stored, into the platform. This allowed for real-time visualization of the data generated by the decision support software. The creation of the dashboard allowed the output from the CDS platform to be more easily analyzed and facilitated hypothesis generation. Integrating data visualization tools into clinical decision support tools allows for easier data analysis and can streamline research and quality improvement efforts.

  6. Time-Resolved Influences of Functional DAT1 and COMT Variants on Visual Perception and Post-Processing

    PubMed Central

    Bender, Stephan; Rellum, Thomas; Freitag, Christine; Resch, Franz; Rietschel, Marcella; Treutlein, Jens; Jennen-Steinmetz, Christine; Brandeis, Daniel; Banaschewski, Tobias; Laucht, Manfred

    2012-01-01

    Background: Dopamine plays an important role in orienting and the regulation of selective attention to relevant stimulus characteristics. Thus, we examined the influences of functional variants related to dopamine inactivation in the dopamine transporter (DAT1) and catechol-O-methyltransferase genes (COMT) on the time-course of visual processing in a contingent negative variation (CNV) task. Methods: 64-channel EEG recordings were obtained from 195 healthy adolescents of a community-based sample during a continuous performance task (A-X version). Early and late CNV as well as preceding visual evoked potential components were assessed. Results: Significant additive main effects of DAT1 and COMT on the occipito-temporal early CNV were observed. In addition, there was a trend towards an interaction between the two polymorphisms. Source analysis showed early CNV generators in the ventral visual stream and in frontal regions. There was a strong negative correlation between occipito-temporal visual post-processing and the frontal early CNV component. The early CNV time interval 500–1000 ms after the visual cue was specifically affected while the preceding visual perception stages were not influenced. Conclusions: Late visual potentials allow the genomic imaging of dopamine inactivation effects on visual post-processing. The same specific time-interval has been found to be affected by DAT1 and COMT during motor post-processing but not motor preparation. We propose the hypothesis that similar dopaminergic mechanisms modulate working memory encoding in both the visual and motor and perhaps other systems. PMID:22844499

  7. Time-resolved influences of functional DAT1 and COMT variants on visual perception and post-processing.

    PubMed

    Bender, Stephan; Rellum, Thomas; Freitag, Christine; Resch, Franz; Rietschel, Marcella; Treutlein, Jens; Jennen-Steinmetz, Christine; Brandeis, Daniel; Banaschewski, Tobias; Laucht, Manfred

    2012-01-01

    Dopamine plays an important role in orienting and the regulation of selective attention to relevant stimulus characteristics. Thus, we examined the influences of functional variants related to dopamine inactivation in the dopamine transporter (DAT1) and catechol-O-methyltransferase genes (COMT) on the time-course of visual processing in a contingent negative variation (CNV) task. 64-channel EEG recordings were obtained from 195 healthy adolescents of a community-based sample during a continuous performance task (A-X version). Early and late CNV as well as preceding visual evoked potential components were assessed. Significant additive main effects of DAT1 and COMT on the occipito-temporal early CNV were observed. In addition, there was a trend towards an interaction between the two polymorphisms. Source analysis showed early CNV generators in the ventral visual stream and in frontal regions. There was a strong negative correlation between occipito-temporal visual post-processing and the frontal early CNV component. The early CNV time interval 500-1000 ms after the visual cue was specifically affected while the preceding visual perception stages were not influenced. Late visual potentials allow the genomic imaging of dopamine inactivation effects on visual post-processing. The same specific time-interval has been found to be affected by DAT1 and COMT during motor post-processing but not motor preparation. We propose the hypothesis that similar dopaminergic mechanisms modulate working memory encoding in both the visual and motor and perhaps other systems.

  8. Three-dimensional analysis of scoliosis surgery using stereophotogrammetry

    NASA Astrophysics Data System (ADS)

    Jang, Stanley B.; Booth, Kellogg S.; Reilly, Chris W.; Sawatzky, Bonita J.; Tredwell, Stephen J.

    1994-04-01

    A new stereophotogrammetric analysis and 3D visualization allow accurate assessment of the scoliotic spine during instrumentation. Stereophoto pairs taken at each stage of the operation and robust statistical techniques are used to compute 3D transformations of the vertebrae between stages. These determine rotation, translation, goodness of fit, and overall spinal contour. A polygonal model of the spine built with a commercial 3D modeling package is used to produce an animation sequence of the transformation. The visualizations have provided some important observations. Correction of the scoliosis is achieved largely through vertebral translation and coronal plane rotation, contrary to claims that large axial rotations are required. The animations provide valuable qualitative information for surgeons assessing the results of scoliotic correction.
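Recovering a vertebra's rigid transformation between operative stages from matched landmark points is classically done with the Kabsch/Procrustes least-squares solution. The sketch below (synthetic landmarks, not the paper's photogrammetric data) recovers a known rotation and translation:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rotation R and translation t with R @ P[i] + t ~= Q[i]
    (Kabsch algorithm via SVD of the cross-covariance matrix)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

# Synthetic vertebral landmarks, rotated 10 degrees about z and translated.
rng = np.random.default_rng(3)
P = rng.normal(size=(6, 3))
th = np.radians(10.0)
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true

R, t = rigid_transform(P, Q)
angle_deg = np.degrees(np.arccos((np.trace(R) - 1.0) / 2.0))
```

The residual after applying `(R, t)` gives the goodness-of-fit figure mentioned in the abstract, and decomposing `R` into coronal and axial components supports the translation-versus-rotation comparison.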

  9. Nature's Notebook 2010: Data & participant summary

    USGS Publications Warehouse

    Crimmins, Theresa M.; Rosemartin, Alyssa H.; Marsh, R. Lee; Denny, Ellen G.; Enquist, Carolyn A.F.; Weltzin, Jake F.

    2011-01-01

    Data submitted by Nature’s Notebook participants show patterns that follow latitude and elevation. Multiple years of observations now allow for year‐to‐year comparisons within and across species. As such, these data should be useful to a variety of stakeholders interested in the spatial and temporal patterns of plant and animal activity on a national scale; through time, these data should also empower scientists, resource managers, and the public in decision‐making and adapting to variable and changing climates and environments. Data submitted to Nature’s Notebook and supporting metadata are available for download at www.usanpn.org/results/data. Additionally, data visualization tools are available online at www.usanpn.org/results/visualizations.

  10. Design, Synthesis, and Evaluation of N- and C-Terminal Protein Bioconjugates as G Protein-Coupled Receptor Agonists.

    PubMed

    Healey, Robert D; Wojciechowski, Jonathan P; Monserrat-Martinez, Ana; Tan, Susan L; Marquis, Christopher P; Sierecki, Emma; Gambin, Yann; Finch, Angela M; Thordarson, Pall

    2018-02-21

    A G protein-coupled receptor (GPCR) agonist protein, thaumatin, was site-specifically conjugated at the N- or C-terminus with a fluorophore for visualization of GPCR:agonist interactions. The N-terminus was specifically conjugated using a synthetic 2-pyridinecarboxyaldehyde reagent. The interaction profiles observed for N- and C-terminal conjugates varied: N-terminal conjugates interacted very weakly with the GPCR of interest, whereas C-terminal conjugates bound to the receptor. These chemical biology tools allow therapeutic protein:GPCR interactions to be monitored and visualized. The methodology used for site-specific bioconjugation represents an advance in the application of 2-pyridinecarboxyaldehydes for N-terminal specific bioconjugations.

  11. Anatomical Analysis of the Retinal Specializations to a Crypto-Benthic, Micro-Predatory Lifestyle in the Mediterranean Triplefin Blenny Tripterygion delaisi

    PubMed Central

    Fritsch, Roland; Collin, Shaun P.; Michiels, Nico K.

    2017-01-01

    The environment and lifestyle of a species are known to exert selective pressure on the visual system, often demonstrating a tight link between visual morphology and ecology. Many studies have predicted the visual requirements of a species by examining the anatomical features of the eye. However, among the vast number of studies on visual specializations in aquatic animals, only a few have focused on small benthic fishes that occupy a heterogeneous and spatially complex visual environment. This study investigates the general retinal anatomy including the topography of both the photoreceptor and ganglion cell populations and estimates the spatial resolving power (SRP) of the eye of the Mediterranean triplefin Tripterygion delaisi. Retinal wholemounts were prepared to systematically and quantitatively analyze photoreceptor and retinal ganglion cell (RGC) densities using design-based stereology. To further examine the retinal structure, we also used magnetic resonance imaging (MRI) and histological examination of retinal cross sections. Observations of the triplefin’s eyes revealed them to be highly mobile, allowing them to view the surroundings without body movements. A rostral aphakic gap and the elliptical shape of the eye extend its visual field rostrally and allow for a rostro-caudal accommodatory axis, enabling this species to focus on prey at close range. Single and twin cones dominate the retina and are consistently arranged in one of two regular patterns, which may enhance motion detection and color vision. The retina features a prominent, dorso-temporal, convexiclivate fovea with an average density of 104,400 double and 30,800 single cones per mm2, and 81,000 RGCs per mm2. Based on photoreceptor spacing, SRP was calculated to be between 6.7 and 9.0 cycles per degree. Location and resolving power of the fovea would benefit the detection and identification of small prey in the lower frontal region of the visual field. PMID:29311852
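
    As an aside on the SRP estimate quoted above: with the cone density in hand, the Nyquist-limit calculation is short. A minimal sketch, assuming a hexagonal mosaic and an illustrative 2 mm focal length (the abstract does not report focal length, so this does not reproduce the paper's 6.7-9.0 cpd figure exactly):

```python
import math

def nyquist_srp(cone_density_mm2: float, focal_length_mm: float) -> float:
    """Nyquist-limited spatial resolving power in cycles per degree.

    Assumes a hexagonal cone mosaic, so the row spacing is
    s = sqrt(2 / (sqrt(3) * D)) for density D per mm^2, and a simple-lens
    eye whose retinal magnification factor is f * pi / 180 mm per degree.
    """
    row_spacing_mm = math.sqrt(2.0 / (math.sqrt(3.0) * cone_density_mm2))
    rmf_mm_per_deg = focal_length_mm * math.pi / 180.0
    return rmf_mm_per_deg / (2.0 * row_spacing_mm)

# Peak double-cone density from the abstract; the 2 mm focal length is an
# illustrative assumption, not a value reported in the paper.
srp = nyquist_srp(104_400, 2.0)
```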

  12. Anatomical Analysis of the Retinal Specializations to a Crypto-Benthic, Micro-Predatory Lifestyle in the Mediterranean Triplefin Blenny Tripterygion delaisi.

    PubMed

    Fritsch, Roland; Collin, Shaun P; Michiels, Nico K

    2017-01-01

    The environment and lifestyle of a species are known to exert selective pressure on the visual system, often demonstrating a tight link between visual morphology and ecology. Many studies have predicted the visual requirements of a species by examining the anatomical features of the eye. However, among the vast number of studies on visual specializations in aquatic animals, only a few have focused on small benthic fishes that occupy a heterogeneous and spatially complex visual environment. This study investigates the general retinal anatomy including the topography of both the photoreceptor and ganglion cell populations and estimates the spatial resolving power (SRP) of the eye of the Mediterranean triplefin Tripterygion delaisi. Retinal wholemounts were prepared to systematically and quantitatively analyze photoreceptor and retinal ganglion cell (RGC) densities using design-based stereology. To further examine the retinal structure, we also used magnetic resonance imaging (MRI) and histological examination of retinal cross sections. Observations of the triplefin's eyes revealed them to be highly mobile, allowing them to view the surroundings without body movements. A rostral aphakic gap and the elliptical shape of the eye extend its visual field rostrally and allow for a rostro-caudal accommodatory axis, enabling this species to focus on prey at close range. Single and twin cones dominate the retina and are consistently arranged in one of two regular patterns, which may enhance motion detection and color vision. The retina features a prominent, dorso-temporal, convexiclivate fovea with an average density of 104,400 double and 30,800 single cones per mm2, and 81,000 RGCs per mm2. Based on photoreceptor spacing, SRP was calculated to be between 6.7 and 9.0 cycles per degree. Location and resolving power of the fovea would benefit the detection and identification of small prey in the lower frontal region of the visual field.

  13. Model My Watershed: A high-performance cloud application for public engagement, watershed modeling and conservation decision support

    NASA Astrophysics Data System (ADS)

    Aufdenkampe, A. K.; Tarboton, D. G.; Horsburgh, J. S.; Mayorga, E.; McFarland, M.; Robbins, A.; Haag, S.; Shokoufandeh, A.; Evans, B. M.; Arscott, D. B.

    2017-12-01

    The Model My Watershed Web app (https://app.wikiwatershed.org/) and the BiG-CZ Data Portal (http://portal.bigcz.org/) are web applications that share a common codebase and a common goal: to deliver high-performance discovery, visualization and analysis of geospatial data in an intuitive user interface in a web browser. Model My Watershed (MMW) was designed as a decision support system for watershed conservation implementation. The BiG-CZ Data Portal was designed to provide context and background data for research sites. Users begin by creating an Area of Interest via an automated watershed delineation tool, a free-draw tool, selection of a predefined area such as a county or USGS Hydrological Unit (HUC), or uploading a custom polygon. Both Web apps visualize and provide summary statistics of land use, soil groups, streams, climate and other geospatial information. MMW then allows users to run a watershed model to simulate different scenarios of human impacts on stormwater runoff and water quality. The BiG-CZ Data Portal allows users to search for scientific and monitoring data within the Area of Interest; the system also serves as a prototype for the upcoming Monitor My Watershed web app. Both systems integrate with CUAHSI cyberinfrastructure, including visualizing observational data from the CUAHSI Water Data Center and storing user data via CUAHSI HydroShare. Both systems also integrate with the new EnviroDIY Water Quality Data Portal (http://data.envirodiy.org/), a system for crowd-sourcing environmental monitoring data using open-source sensor stations (http://envirodiy.org/mayfly/) and based on the Observations Data Model v2.
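
    As a flavor of the event-scale runoff arithmetic a stormwater scenario tool evaluates, here is a sketch of the standard SCS curve-number relation; the inputs below are illustrative, and MMW's actual model chain is more elaborate than this single equation:

```python
def scs_runoff_inches(precip_in: float, curve_number: float) -> float:
    """Event runoff depth (inches) from the SCS curve-number method.

    S = 1000/CN - 10 is the potential maximum retention; runoff begins
    once rainfall exceeds the initial abstraction Ia = 0.2 * S.
    """
    s = 1000.0 / curve_number - 10.0
    ia = 0.2 * s
    if precip_in <= ia:
        return 0.0  # all rainfall absorbed before runoff starts
    return (precip_in - ia) ** 2 / (precip_in - ia + s)

# Illustrative scenario comparison: the same 3-inch storm on a pervious
# surface (CN 75) versus a nearly impervious one (CN 98).
q_pervious = scs_runoff_inches(3.0, 75.0)
q_impervious = scs_runoff_inches(3.0, 98.0)
```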

  14. 3D Planetary Data Visualization with CesiumJS

    NASA Astrophysics Data System (ADS)

    Larsen, K. W.; DeWolfe, A. W.; Nguyen, D.; Sanchez, F.; Lindholm, D. M.

    2017-12-01

    Complex spacecraft orbits and multi-instrument observations can be challenging to visualize with traditional 2D plots. To facilitate the exploration of planetary science data, we have developed a set of web-based interactive 3D visualizations for the MAVEN and MMS missions using the free CesiumJS library. The Mars Atmospheric and Volatile Evolution (MAVEN) mission has been collecting data at Mars since September 2014. The MAVEN3D project allows playback of one day's orbit at a time, displaying the spacecraft's position and orientation. Selected science data sets can be overplotted on the orbit track, including vectors for magnetic field and ion flow velocities. We also provide an overlay of the M-GITM model on the planet itself. MAVEN3D is available at the MAVEN public website: https://lasp.colorado.edu/maven/sdc/public/pages/maven3d/ The Magnetospheric MultiScale Mission (MMS) consists of one hundred instruments on four spacecraft flying in formation around Earth, investigating the interactions between the solar wind and Earth's magnetic field. While the highest-temporal-resolution data are not received and processed until later, continuous daily observations of the particle and field environments are made available as soon as they are received. Traditional "quick-look" static plots have long been the first interaction with data from a mission of this nature. Our new 3D Quicklook viewer allows data from all four spacecraft to be viewed in an interactive web application as soon as the data are ingested into the MMS Science Data Center, less than one day after collection, to better help identify scientifically interesting data.
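
    CesiumJS animates time-tagged positions supplied as CZML, a JSON packet format; a minimal sketch of generating such a document follows (the spacecraft id, epoch, and positions are made up for illustration, not MAVEN or MMS data products):

```python
import json

def make_czml(sat_id, epoch_iso, samples):
    """Build a minimal CZML document for one spacecraft track.

    samples: list of (seconds_since_epoch, lon_deg, lat_deg, alt_m).
    The first packet must be the document packet with version "1.0";
    "cartographicDegrees" interleaves [t, lon, lat, alt, t, lon, ...].
    """
    flat = [value for sample in samples for value in sample]
    return json.dumps([
        {"id": "document", "name": sat_id, "version": "1.0"},
        {
            "id": sat_id,
            "position": {"epoch": epoch_iso, "cartographicDegrees": flat},
            "path": {"leadTime": 0, "trailTime": 3600},  # draw the past hour
            "point": {"pixelSize": 8},
        },
    ])

czml = make_czml("sat-demo", "2017-12-01T00:00:00Z",
                 [(0, 0.0, 0.0, 400e3), (60, 3.5, 1.0, 410e3)])
```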

  15. Titanbrowse: a new paradigm for access, visualization and analysis of hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Penteado, Paulo F.

    2016-10-01

    Currently there are archives and tools to explore remote sensing imaging, but these lack some functionality needed for hyperspectral imagers: 1) Querying and serving only whole datacubes is not enough, since in each cube there is typically a large variation in observation geometry over the spatial pixels. Thus, often the most useful unit for selecting observations of interest is not a whole cube but rather a single spectrum. 2) Pixel-specific geometric data included in the standard pipelines is calculated at only one point per pixel. Particularly for selections of pixels from many different cubes, or observations near the limb, it is necessary to know the actual extent of each pixel. 3) Database queries need to use not only the metadata, but also the spectral data. For instance, one query might look for atypical values of some band, or atypical relations between bands denoting spectral features (such as ratios or differences between bands). 4) There is the need to evaluate arbitrary, dynamically-defined, complex functions of the data (beyond just simple arithmetic operations), both for selection in the queries and for visualization, to interactively tune the queries to the observations of interest. 5) Making the most useful query for some analysis often requires interactive visualization integrated with data selection and processing, because the user needs to explore how different functions of the data vary over the observations without having to download data and import it into visualization software. 6) Complementary to interactive use, an API allowing programmatic access to the system is needed for systematic data analyses. 7) Direct access to calibrated and georeferenced data is needed, without requiring users to download data and software and learn to process them. We present titanbrowse, a database, exploration and visualization system for Cassini VIMS observations of Titan, designed to fulfill the aforementioned needs.
    While it originally ran on data in the user's computer, we are now developing an online version, so that users do not need to download software and data. The server, which we maintain, processes the queries and communicates the results to the client the user runs. http://ppenteado.net/titanbrowse.
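
    Point 3's spectrum-level querying can be illustrated with a small sketch: treat a cube as a (rows, cols, bands) array and select individual spectra by a band-ratio predicate. The band indices and threshold below are invented for illustration and are not VIMS-specific:

```python
import numpy as np

# Stand-in for a hyperspectral cube: 64 x 64 spatial pixels, 256 bands.
rng = np.random.default_rng(1)
cube = rng.random((64, 64, 256))

# A dynamically-defined predicate over the spectral data itself: a ratio
# of two bands standing in for a spectral-feature detector.
ratio = cube[:, :, 120] / cube[:, :, 40]
mask = ratio > 1.5

# The selection unit is the single spectrum, not the whole cube: collect
# one (bands,) vector per matching pixel, across the cube.
rows, cols = np.nonzero(mask)
selected = cube[rows, cols, :]
```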

  16. Integrating advanced visualization technology into the planetary Geoscience workflow

    NASA Astrophysics Data System (ADS)

    Huffman, John; Forsberg, Andrew; Loomis, Andrew; Head, James; Dickson, James; Fassett, Caleb

    2011-09-01

    Recent advances in computer visualization have allowed us to develop new tools for analyzing the data gathered during planetary missions, which is important, since these data sets have grown exponentially in recent years to tens of terabytes in size. As part of the Advanced Visualization in Solar System Exploration and Research (ADVISER) project, we utilize several advanced visualization techniques created specifically with planetary image data in mind. The Geoviewer application allows real-time active stereo display of images, which in aggregate have billions of pixels. The ADVISER desktop application platform allows fast three-dimensional visualization of planetary images overlain on digital terrain models. Both applications include tools for easy data ingest and real-time analysis in a programmatic manner. Incorporation of these tools into our everyday scientific workflow has proved important for scientific analysis, discussion, and publication, and enabled effective and exciting educational activities for students from high school through graduate school.

  17. You see what you have learned. Evidence for an interrelation of associative learning and visual selective attention.

    PubMed

    Feldmann-Wüstefeld, Tobias; Uengoer, Metin; Schubö, Anna

    2015-11-01

    Besides visual salience and observers' current intention, prior learning experience may influence the deployment of visual attention. Associative learning models postulate that observers pay more attention to stimuli previously experienced as reliable predictors of specific outcomes. To investigate the impact of learning experience on the deployment of attention, we combined an associative learning task with a visual search task and measured event-related potentials of the EEG as neural markers of attention deployment. In the learning task, participants categorized stimuli varying in color/shape, with only one dimension being predictive of category membership. In the search task, participants searched for a shape target while disregarding irrelevant color distractors. Behavioral results showed that color distractors impaired performance to a greater degree when color rather than shape was predictive in the learning task. Neurophysiological results showed that the amplified distraction was due to differential attention deployment (N2pc). Experiment 2 showed that when color was predictive for learning, color distractors captured more attention in the search task (ND component) and more suppression of the color distractors was required (PD component). The present results thus demonstrate that priority in visual attention is biased toward predictive stimuli, which allows learning experience to shape selection. We also show that learning experience can overrule strong top-down control (blocked tasks, Experiment 3) and that learning experience has a longer-term effect on attention deployment (tasks on two successive days, Experiment 4). © 2015 Society for Psychophysiological Research.

  18. Development of independent locomotion in children with a severe visual impairment.

    PubMed

    Hallemans, Ann; Ortibus, Els; Truijen, Steven; Meire, Francoise

    2011-01-01

    Locomotion of children and adults with a visual impairment (ages 1-44, n = 28) was compared to that of age-matched individuals with normal vision (n = 60). Participants walked barefoot at preferred speed while their gait was recorded by a Vicon® system. Walking speed, heading angle, step frequency, stride length, step width, stance phase duration and double support time were determined. Differences between groups, relationships with age and possible interaction effects were investigated. With increasing age, overall improvements in gait parameters are observed. Differences between groups were a slower walking speed, a shorter stride length, and a prolonged duration of stance and of double support in the individuals with a visual impairment. These may be considered either as adaptations to balance problems or as strategies to allow the foot to probe the ground. Copyright © 2011 Elsevier Ltd. All rights reserved.

  19. Flow visualization methods for field test verification of CFD analysis of an open gloveport

    DOE PAGES

    Strons, Philip; Bailey, James L.

    2017-01-01

    Anemometer readings alone cannot provide a complete picture of air flow patterns at an open gloveport. Having a means to visualize air flow for field tests in general provides greater insight by indicating direction in addition to the magnitude of the air flow velocities in the region of interest. Furthermore, flow visualization is essential for Computational Fluid Dynamics (CFD) verification, where important modeling assumptions play a significant role in analyzing the chaotic nature of low-velocity air flow. A good example is shown in Figure 1, where an unexpected vortex pattern occurred during a field test that could not have been measured relying only on anemometer readings. Here, observing and measuring the patterns of the smoke flowing into the gloveport allowed the CFD model to be appropriately updated to match the actual flow velocities in both magnitude and direction.

  20. Orientation selectivity sharpens motion detection in Drosophila

    PubMed Central

    Fisher, Yvette E.; Silies, Marion; Clandinin, Thomas R.

    2015-01-01

    Detecting the orientation and movement of edges in a scene is critical to visually guided behaviors of many animals. What are the circuit algorithms that allow the brain to extract such behaviorally vital visual cues? Using in vivo two-photon calcium imaging in Drosophila, we describe direction selective signals in the dendrites of T4 and T5 neurons, detectors of local motion. We demonstrate that this circuit performs selective amplification of local light inputs, an observation that constrains motion detection models and confirms a core prediction of the Hassenstein-Reichardt Correlator (HRC). These neurons are also orientation selective, responding strongly to static features that are orthogonal to their preferred axis of motion, a tuning property not predicted by the HRC. This coincident extraction of orientation and direction sharpens directional tuning through surround inhibition and reveals a striking parallel between visual processing in flies and vertebrate cortex, suggesting a universal strategy for motion processing. PMID:26456048
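
    The Hassenstein-Reichardt Correlator named above is simple enough to sketch: each subunit multiplies one input with a delayed copy of its neighbor, and the two mirror-symmetric subunits are subtracted, yielding a signed, direction-selective output. A minimal sketch with synthetic signals (delay and stimulus parameters are illustrative):

```python
import numpy as np

def hrc_response(a, b, tau=5):
    """Hassenstein-Reichardt correlator over two neighboring inputs.

    a, b: 1-D arrays sampled at two adjacent points in space.
    tau: delay (in samples) applied to one arm of each subunit.
    The mirror-symmetric subtraction makes the output signed:
    positive for motion from a's location toward b's.
    """
    a_delayed = np.concatenate([np.zeros(tau), a[:-tau]])
    b_delayed = np.concatenate([np.zeros(tau), b[:-tau]])
    return a_delayed * b - b_delayed * a

# A Gaussian brightness bump passes a's location, then b's 5 samples later.
t = np.arange(200)
a = np.exp(-((t - 80) ** 2) / 50.0)
b = np.exp(-((t - 85) ** 2) / 50.0)
preferred = hrc_response(a, b, tau=5).sum()  # motion a -> b
null = hrc_response(b, a, tau=5).sum()       # motion b -> a
```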

  1. What are the Shapes of Response Time Distributions in Visual Search?

    PubMed Central

    Palmer, Evan M.; Horowitz, Todd S.; Torralba, Antonio; Wolfe, Jeremy M.

    2011-01-01

    Many visual search experiments measure reaction time (RT) as their primary dependent variable. Analyses typically focus on mean (or median) RT. However, given enough data, the RT distribution can be a rich source of information. For this paper, we collected about 500 trials per cell per observer for both target-present and target-absent displays in each of three classic search tasks: feature search, with the target defined by color; conjunction search, with the target defined by both color and orientation; and spatial configuration search for a 2 among distractor 5s. This large data set allows us to characterize the RT distributions in detail. We present the raw RT distributions and fit several psychologically motivated functions (ex-Gaussian, ex-Wald, Gamma, and Weibull) to the data. We analyze and interpret parameter trends from these four functions within the context of theories of visual search. PMID:21090905
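
    Of the four candidate distributions, the ex-Gaussian is the most widely used for RTs; SciPy exposes it as exponnorm with shape parameter K = τ/σ. A minimal sketch of fitting it to simulated RTs (the generative parameters below are invented for illustration, not values from this study):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Ex-Gaussian generative story: a Gaussian stage (mu, sigma) plus an
# independent exponential stage (tau). Units here are milliseconds.
mu, sigma, tau = 400.0, 50.0, 150.0
rts = rng.normal(mu, sigma, 5000) + rng.exponential(tau, 5000)

# SciPy parameterizes the ex-Gaussian as exponnorm(K, loc, scale)
# with K = tau / sigma, loc = mu, scale = sigma.
K, loc, scale = stats.exponnorm.fit(rts)
mu_hat, sigma_hat = loc, scale
tau_hat = K * scale  # recovered exponential component
```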

  2. Fluorescence imaging of chromosomal DNA using click chemistry

    NASA Astrophysics Data System (ADS)

    Ishizuka, Takumi; Liu, Hong Shan; Ito, Kenichiro; Xu, Yan

    2016-09-01

    Chromosome visualization is essential for chromosome analysis and genetic diagnostics. Here, we developed a click chemistry approach for multicolor imaging of chromosomal DNA instead of the traditional dye method. We first demonstrated that the commercially available reagents allow for the multicolor staining of chromosomes. We then prepared two pro-fluorophore moieties that served as light-up reporters to stain chromosomal DNA based on click reaction and visualized the clear chromosomes in multicolor. We applied this strategy in fluorescence in situ hybridization (FISH) and identified, with high sensitivity and specificity, telomere DNA at the end of the chromosome. We further extended this approach to observe several basic stages of cell division. We found that the click reaction enables direct visualization of the chromosome behavior in cell division. These results suggest that the technique can be broadly used for imaging chromosomes and may serve as a new approach for chromosome analysis and genetic diagnostics.

  3. Versatile design and synthesis platform for visualizing genomes with Oligopaint FISH probes

    PubMed Central

    Beliveau, Brian J.; Joyce, Eric F.; Apostolopoulos, Nicholas; Yilmaz, Feyza; Fonseka, Chamith Y.; McCole, Ruth B.; Chang, Yiming; Li, Jin Billy; Senaratne, Tharanga Niroshini; Williams, Benjamin R.; Rouillard, Jean-Marie; Wu, Chao-ting

    2012-01-01

    A host of observations demonstrating the relationship between nuclear architecture and processes such as gene expression have led to a number of new technologies for interrogating chromosome positioning. Whereas some of these technologies reconstruct intermolecular interactions, others have enhanced our ability to visualize chromosomes in situ. Here, we describe an oligonucleotide- and PCR-based strategy for fluorescence in situ hybridization (FISH) and a bioinformatic platform that enables this technology to be extended to any organism whose genome has been sequenced. The oligonucleotide probes are renewable, highly efficient, and able to robustly label chromosomes in cell culture, fixed tissues, and metaphase spreads. Our method gives researchers precise control over the sequences they target and allows for single and multicolor imaging of regions ranging from tens of kilobases to megabases with the same basic protocol. We anticipate this technology will lead to an enhanced ability to visualize interphase and metaphase chromosomes. PMID:23236188

  4. Astrometric observations of visual binaries using 26-inch refractor during 2007-2014 at Pulkovo

    NASA Astrophysics Data System (ADS)

    Izmailov, I. S.; Roshchina, E. A.

    2016-04-01

    We present the results of 15184 astrometric observations of 322 visual binaries carried out in 2007-2014 at Pulkovo observatory. In 2007, the 26-inch refractor (F = 10413 mm, D = 65 cm) was equipped with the CCD camera FLI ProLine 09000 (FOV 12' × 12', 3056 × 3056 pixels, 0.238 arcsec pixel-1). Telescope automation and the installation of a weather monitoring system allowed us to increase the number of observations significantly. Visual binary and multiple systems with an angular distance in the interval 1."1-78."6 (7."3 on average) were included in the observing program. The results were studied in detail for systematic errors using calibration star pairs. No dependence of errors on temperature, pressure, or hour angle was detected. The dependence of the 26-inch refractor's scale on temperature was taken into account in the calculations. The accuracy of measurement of a single CCD image is in the range of 0."0005 to 0."289, 0."021 on average along both coordinates. Mean errors in annual average values of angular distance and position angle are equal to 0."005 and 0.°04, respectively. The results are available at http://izmccd.puldb.ru/vds.htm and in the Strasbourg Astronomical Data Center (CDS). In the catalog, the separations and position angles per night of observation and as annual averages are presented, as well as errors for all the values and standard deviations of a single observation. We also present a comparison of 50 pairs of stars with known orbital solutions against their ephemerides.
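
    The catalog quantities above (separation ρ and position angle θ) follow from the measured relative coordinates; a minimal sketch of the standard small-angle computation, using a made-up star pair (the coordinates are illustrative, not Pulkovo data):

```python
import math

def sep_pa(ra1_deg, dec1_deg, ra2_deg, dec2_deg):
    """Separation (arcsec) and position angle (deg, North through East)
    of the secondary (star 2) relative to the primary (star 1), using the
    small-angle tangent-plane approximation valid for arcsecond pairs."""
    dec0 = math.radians((dec1_deg + dec2_deg) / 2.0)
    dra = (ra2_deg - ra1_deg) * math.cos(dec0) * 3600.0  # on-sky arcsec
    ddec = (dec2_deg - dec1_deg) * 3600.0                # arcsec
    rho = math.hypot(dra, ddec)
    theta = math.degrees(math.atan2(dra, ddec)) % 360.0
    return rho, theta

# A made-up pair 10 arcsec apart, due east: PA should come out 90 degrees.
ra_offset = 10.0 / 3600.0 / math.cos(math.radians(30.0))
rho, theta = sep_pa(180.0, 30.0, 180.0 + ra_offset, 30.0)
```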

  5. 3D mapping of existing observing capabilities in the frame of GAIA-CLIM H2020 project

    NASA Astrophysics Data System (ADS)

    Emanuele, Tramutola; Madonna, Fabio; Marco, Rosoldi; Francesco, Amato

    2017-04-01

    The aim of the Gap Analysis for Integrated Atmospheric ECV CLImate Monitoring (GAIA-CLIM) project is to improve our ability to use ground-based and sub-orbital observations to characterise satellite observations for a number of atmospheric Essential Climate Variables (ECVs). The key outcomes will be a "Virtual Observatory" (VO) facility of co-locations and their uncertainties and a report on gaps in capabilities or understanding, which shall be used to inform subsequent Horizon 2020 activities. In particular, Work Package 1 (WP1) of the GAIA-CLIM project is devoted to the geographical mapping of existing non-satellite measurement capabilities for a number of ECVs in the atmospheric, oceanic and terrestrial domains. The work carried out within WP1 has allowed us to provide users with an up-to-date geographical identification, at the European and global scales, of current surface-based, balloon-based and oceanic (floats) observing capabilities on an ECV-by-ECV basis for several parameters which can be obtained using space-based observations from past, present and planned satellite missions. Having alighted on a set of metadata schema to follow, a consistent collection of discovery metadata has been compiled into a common structure and will be made available to users through the GAIA-CLIM VO in 2018. The metadata can be interactively visualized through a 3D Graphical User Interface. The metadataset includes 54 plausible networks and 2 permanent aircraft infrastructures for EO characterisation in the context of GAIA-CLIM, currently operating on different spatial domains and measuring different ECVs using one or more measurement techniques. Each classified network has in addition been assessed for suitability against metrological criteria to identify those with a level of maturity which enables closure on a comparison with satellite measurements. The metadata GUI is based on Cesium, a free and open-source virtual globe library written in JavaScript.
    It allows users to apply different filters to the data displayed on the globe, selecting data per ECV, network, measurement type and level of maturity. Filtering is implemented as a query to a GeoServer web application through the WFS interface, on a data layer configured on our Postgres database with the PostGIS extension; filters set in the GUI are expressed using ECQL (Extended Common Query Language). The GUI allows users to visualize in real time the current non-satellite observing capabilities along with the satellite platforms measuring the same ECVs. The satellite ground track and the footprint of the instruments on board can also be visualized. This work contributes to improved metadata and web map services and facilitates users' experience in the spatio-temporal analysis of Earth Observation data.
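
    The filter mechanism described above (ECQL passed through WFS to GeoServer) can be sketched as a GetFeature request; the endpoint, layer name, and attribute names below are hypothetical placeholders, since the abstract does not publish the actual schema:

```python
from urllib.parse import urlencode

# Hypothetical endpoint, layer, and attributes, for illustration only.
base = "https://example.org/geoserver/wfs"
cql = "ecv = 'temperature' AND maturity >= 3 AND network IN ('GRUAN', 'AERONET')"

# GeoServer accepts an (E)CQL predicate via its CQL_FILTER vendor
# parameter on a standard WFS GetFeature request.
params = {
    "service": "WFS",
    "version": "2.0.0",
    "request": "GetFeature",
    "typeNames": "gaia:stations",
    "outputFormat": "application/json",
    "CQL_FILTER": cql,
}
url = base + "?" + urlencode(params)
```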

  6. Attention affects visual perceptual processing near the hand.

    PubMed

    Cosman, Joshua D; Vecera, Shaun P

    2010-09-01

    Specialized, bimodal neural systems integrate visual and tactile information in the space near the hand. Here, we show that visuo-tactile representations allow attention to influence early perceptual processing, namely, figure-ground assignment. Regions that were reached toward were more likely than other regions to be assigned as foreground figures, and hand position competed with image-based information to bias figure-ground assignment. Our findings suggest that hand position allows attention to influence visual perceptual processing and that visual processes typically viewed as unimodal can be influenced by bimodal visuo-tactile representations.

  7. 3D Orbit Visualization for Earth-Observing Missions

    NASA Technical Reports Server (NTRS)

    Jacob, Joseph C.; Plesea, Lucian; Chafin, Brian G.; Weiss, Barry H.

    2011-01-01

    This software visualizes orbit paths for the Orbiting Carbon Observatory (OCO), but was designed to be general and applicable to any Earth-observing mission. The software uses the Google Earth user interface to provide a visual mechanism to explore spacecraft orbit paths, ground footprint locations, and local cloud cover conditions. In addition, a drill-down capability allows users to point and click on a particular observation frame to pop up ancillary information such as data product filenames and directory paths, latitude, longitude, time stamp, column-average dry air mole fraction of carbon dioxide, and solar zenith angle. This software can be integrated with the ground data system for any Earth-observing mission to automatically generate daily orbit path data products in Google Earth KML format. These KML data products can be directly loaded into the Google Earth application for interactive 3D visualization of the orbit paths for each mission day. Each time the application runs, the daily orbit paths are encapsulated in a KML file for each mission day since the last time the application ran. Alternatively, the daily KML for a specified mission day may be generated. The application automatically extracts the spacecraft position and ground footprint geometry as a function of time from a daily Level 1B data product created and archived by the mission's ground data system software. In addition, ancillary data, such as the column-averaged dry air mole fraction of carbon dioxide and solar zenith angle, are automatically extracted from a Level 2 mission data product. Zoom, pan, and rotate capabilities are provided through the standard Google Earth interface. Cloud cover is indicated with an image layer from the MODIS (Moderate Resolution Imaging Spectroradiometer) aboard the Aqua satellite, which is automatically retrieved from JPL's OnEarth Web service.
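
    The daily KML product described above boils down to emitting a time-ordered LineString of spacecraft positions. A minimal sketch with made-up sample points (element names follow the KML 2.2 specification; nothing here is taken from the OCO pipeline):

```python
def orbit_to_kml(name, points):
    """Render (lon_deg, lat_deg, alt_m) samples as a KML LineString.

    KML coordinates are comma-separated lon,lat,alt triples joined by
    spaces; altitudeMode "absolute" keeps the track at orbital altitude
    instead of clamping it to the ground.
    """
    coords = " ".join(f"{lon},{lat},{alt}" for lon, lat, alt in points)
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <Placemark>
      <name>{name}</name>
      <LineString>
        <altitudeMode>absolute</altitudeMode>
        <coordinates>{coords}</coordinates>
      </LineString>
    </Placemark>
  </Document>
</kml>"""

# Illustrative track: three samples at a roughly 705 km altitude.
track = [(-120.0, 34.0, 705000.0), (-118.5, 38.0, 705000.0), (-117.0, 42.0, 705000.0)]
kml = orbit_to_kml("orbit-day-001", track)
```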

  8. Hydrolysis of polycarbonate in sub-critical water in fused silica capillary reactor with in situ Raman spectroscopy

    USGS Publications Warehouse

    Pan, Z.; Chou, I-Ming; Burruss, R.C.

    2009-01-01

    The advantages of using fused silica capillary reactor (FSCR) instead of conventional autoclave for studying chemical reactions at elevated pressure and temperature conditions were demonstrated in this study, including the allowance for visual observation under a microscope and in situ Raman spectroscopic characterization of polycarbonate and coexisting phases during hydrolysis in subcritical water.

  9. Design of a noninvasive face mask for ocular occlusion in rats and assessment in a visual discrimination paradigm.

    PubMed

    Hager, Audrey M; Dringenberg, Hans C

    2012-12-01

    The rat visual system is structured such that the large (>90 %) majority of retinal ganglion axons reach the contralateral lateral geniculate nucleus (LGN) and visual cortex (V1). This anatomical design allows for the relatively selective activation of one cerebral hemisphere under monocular viewing conditions. Here, we describe the design of a harness and face mask allowing simple and noninvasive monocular occlusion in rats. The harness is constructed from synthetic fiber (shoelace-type material) and fits around the girth region and neck, allowing for easy adjustments to fit rats of various weights. The face mask consists of soft rubber material that is attached to the harness by Velcro strips. Eyeholes in the mask can be covered by additional Velcro patches to occlude either one or both eyes. Rats readily adapt to wearing the device, allowing behavioral testing under different types of viewing conditions. We show that rats successfully acquire a water-maze-based visual discrimination task under monocular viewing conditions. Following task acquisition, interocular transfer was assessed. Performance with the previously occluded, "untrained" eye was impaired, suggesting that training effects were partially confined to one cerebral hemisphere. The method described herein provides a simple and noninvasive means to restrict visual input for studies of visual processing and learning in various rodent species.

  10. Typical Toddlers' Participation in “Just-in-Time” Programming of Vocabulary for Visual Scene Display Augmentative and Alternative Communication Apps on Mobile Technology: A Descriptive Study

    PubMed Central

    Drager, Kathryn; Light, Janice; Caron, Jessica Gosnell

    2017-01-01

    Purpose Augmentative and alternative communication (AAC) promotes communicative participation and language development for young children with complex communication needs. However, the motor, linguistic, and cognitive demands of many AAC technologies restrict young children's operational use of and influence over these technologies. The purpose of the current study is to better understand young children's participation in programming vocabulary “just in time” on an AAC application with minimized demands. Method A descriptive study was implemented to highlight the participation of 10 typically developing toddlers (M age: 16 months, range: 10–22 months) in just-in-time vocabulary programming in an AAC app with visual scene displays. Results All 10 toddlers participated in some capacity in adding new visual scene displays and vocabulary to the app just in time. Differences in participation across steps were observed, suggesting variation in the developmental demands of controls involved in vocabulary programming. Conclusions Results from the current study provide clinical insights toward involving young children in AAC programming just in time and steps that may allow for more independent participation or require more scaffolding. Technology designed to minimize motor, cognitive, and linguistic demands may allow children to participate in programming devices at a younger age. PMID:28586825

  11. Recent progress in the imaging of soil processes at the microscopic scale, and a look ahead

    NASA Astrophysics Data System (ADS)

    Garnier, Patricia; Baveye, Philippe C.; Pot, Valérie; Monga, Olivier; Portell, Xavier

    2016-04-01

    Over the last few years, tremendous progress has been achieved in the visualization of soil structures at the microscopic scale. Computed tomography, based on synchrotron X-ray beams or table-top equipment, allows the visualization of pore geometry at micrometric resolution. Chemical and microbiological information obtainable in 2D cuts through soils can now be interpolated, with the support of CT-data, to produce 3-dimensional maps. In parallel with these analytical advances, significant progress has also been achieved in the computer simulation and visualization of a range of physical, chemical, and microbiological processes taking place in soil pores. In terms of water distribution and transport in soils, for example, the use of Lattice-Boltzmann models as well as models based on geometric primitives has been shown recently to reproduce very faithfully observations made with synchrotron X-ray tomography. Coupling of these models with fungal and bacterial growth models allows the description of a range of microbiologically-mediated processes of great importance at the moment, for example in terms of carbon sequestration. In this talk, we shall review progress achieved to date in this field, indicate where questions remain unanswered, and point out areas where further advances are expected in the next few years.

  12. Typical Toddlers' Participation in "Just-in-Time" Programming of Vocabulary for Visual Scene Display Augmentative and Alternative Communication Apps on Mobile Technology: A Descriptive Study.

    PubMed

    Holyfield, Christine; Drager, Kathryn; Light, Janice; Caron, Jessica Gosnell

    2017-08-15

    Augmentative and alternative communication (AAC) promotes communicative participation and language development for young children with complex communication needs. However, the motor, linguistic, and cognitive demands of many AAC technologies restrict young children's operational use of and influence over these technologies. The purpose of the current study is to better understand young children's participation in programming vocabulary "just in time" on an AAC application with minimized demands. A descriptive study was implemented to highlight the participation of 10 typically developing toddlers (M age: 16 months, range: 10-22 months) in just-in-time vocabulary programming in an AAC app with visual scene displays. All 10 toddlers participated in some capacity in adding new visual scene displays and vocabulary to the app just in time. Differences in participation across steps were observed, suggesting variation in the developmental demands of controls involved in vocabulary programming. Results from the current study provide clinical insights toward involving young children in AAC programming just in time and steps that may allow for more independent participation or require more scaffolding. Technology designed to minimize motor, cognitive, and linguistic demands may allow children to participate in programming devices at a younger age.

  13. The neural basis of visual dominance in the context of audio-visual object processing.

    PubMed

    Schmid, Carmen; Büchel, Christian; Rose, Michael

    2011-03-01

    Visual dominance refers to the observation that in bimodal environments vision often has an advantage over other senses in humans. A better memory performance is therefore assumed for visual material than for, e.g., auditory material. However, the reason for this preferential processing and its relation to memory formation is largely unknown. In this fMRI experiment, we manipulated cross-modal competition and attention, two factors that both modulate bimodal stimulus processing and can affect memory formation. Pictures and sounds of objects were presented simultaneously at two levels of recognisability, thus manipulating the amount of cross-modal competition. Attention was manipulated via task instruction and directed either to the visual or the auditory modality. The factorial design allowed a direct comparison of the effects between both modalities. The resulting memory performance showed that visual dominance was limited to a distinct task setting. Visual object memory was superior to auditory object memory only when attention was allocated towards the competing modality. During encoding, cross-modal competition and attention towards the opponent domain reduced fMRI signals in both neural systems, but cross-modal competition was more pronounced in the auditory system, and only in auditory cortex was this competition further modulated by attention. Furthermore, the reduction of neural activity in auditory cortex during encoding was closely related to the behavioural auditory memory impairment. These results indicate that visual dominance emerges from the visual system's lesser vulnerability to competition from the auditory domain. Copyright © 2010 Elsevier Inc. All rights reserved.

  14. A browser-based 3D Visualization Tool designed for comparing CERES/CALIOP/CloudSAT level-2 data sets.

    NASA Astrophysics Data System (ADS)

    Chu, C.; Sun-Mack, S.; Chen, Y.; Heckert, E.; Doelling, D. R.

    2017-12-01

    At NASA Langley, Clouds and the Earth's Radiant Energy System (CERES) and Moderate Resolution Imaging Spectroradiometer (MODIS) data are merged with Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) data from the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) mission and with CloudSat Cloud Profiling Radar (CPR) data. The CERES merged product (C3M) matches up to three CALIPSO footprints with each MODIS pixel along its ground track, then assigns the nearest CloudSat footprint to each of those MODIS pixels. The cloud properties from MODIS, retrieved using the CERES algorithms, are included in C3M with the matched CALIPSO and CloudSat products, along with radiances from 18 MODIS channels. The dataset is used to validate the CERES-retrieved MODIS cloud properties and the computed TOA and surface flux differences obtained using MODIS or CALIOP/CloudSat retrieved clouds. This information is then used to tune the computed fluxes to match the CERES observed TOA flux. A visualization tool will be invaluable for determining the cause of large cloud and flux differences in order to improve the methodology. This effort is part of a larger effort to allow users to order the CERES C3M product sub-setted by time and parameter, as well as the previously mentioned visualization capabilities. This presentation will show a new graphical 3D interface, 3D-CERESVis, that allows users to view both passive remote sensing satellites (MODIS and CERES) and active satellites (CALIPSO and CloudSat), such that the detailed vertical structures of cloud properties from CALIPSO and CloudSat are displayed side by side with horizontally retrieved cloud properties from MODIS and CERES. Similarly, the CERES computed profile fluxes, whether derived using MODIS or CALIPSO/CloudSat clouds, can also be compared. 3D-CERESVis is a browser-based visualization tool that makes use of techniques such as multiple synchronized cursors, COLLADA-format data and Cesium.

  15. Advanced Engineering Technology for Measuring Performance.

    PubMed

    Rutherford, Drew N; D'Angelo, Anne-Lise D; Law, Katherine E; Pugh, Carla M

    2015-08-01

    The demand for competency-based assessments in surgical training is growing. Use of advanced engineering technology for clinical skills assessment allows for objective measures of hands-on performance. Clinical performance can be assessed in several ways via quantification of an assessee's hand movements (motion tracking), direction of visual attention (eye tracking), levels of stress (physiologic marker measurements), and location and pressure of palpation (force measurements). Innovations in video recording technology and qualitative analysis tools allow for a combination of observer- and technology-based assessments. Overall the goal is to create better assessments of surgical performance with robust validity evidence. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Ontogenetic improvement of visual function in the medaka Oryzias latipes based on an optomotor testing system for larval and adult fish

    USGS Publications Warehouse

    Carvalho, Paulo S. M.; Noltie, Douglas B.; Tillitt, D.E.

    2002-01-01

    We developed a system for the evaluation of visual function in larval and adult fish. Both optomotor (swimming) and optokinetic (eye movement) responses were monitored and recorded using a system of rotating stripes. The system allowed manipulation of factors such as the width of the stripes, the rotation speed of the striped drum, and light illuminance levels within both the scotopic and photopic ranges. Precise control of these factors allowed quantitative measurements of visual acuity and motion detection. Using this apparatus, we tested the hypothesis that significant posthatch ontogenetic improvements in visual function occur in the medaka Oryzias latipes, and also that this species shows significant in ovo neuronal development. Significant improvements in the acuity angle alpha (ability to discriminate detail) were observed, from approximately 5 degrees at hatch to 1 degree in the oldest adult stages. In addition, we measured a significant improvement in flicker fusion thresholds (motion detection skills) between larval and adult life stages within both the scotopic and photopic ranges of light illuminance. Flicker fusion thresholds (mean ± SD) at log I = 1.96 (photopic) varied from 37.2 ± 1.6 cycles/s in young adults to 18.6 ± 1.6 cycles/s in young larvae 10 days posthatch. At log I = −2.54 (scotopic), flicker fusion thresholds varied from 5.8 ± 0.7 cycles/s in young adults to 1.7 ± 0.4 cycles/s in young larvae 10 days posthatch. Light sensitivity increased approximately 2.9 log units from early hatched larval stages to adults. The demonstrated ontogenetic improvements in visual function probably enable the fish to explore new resources, thereby enlarging their fundamental niche.
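
    The acuity angle reported above can be related to stimulus geometry with the standard visual-angle formula, alpha = 2·atan(w / 2d) for a stripe of width w viewed at distance d. The sketch below is a minimal illustration; the stripe width and viewing distance are hypothetical values, not the apparatus settings used in the study.

    ```python
    import math

    def acuity_angle_deg(stripe_width: float, distance: float) -> float:
        """Visual angle (degrees) subtended by one stripe of the drum."""
        return math.degrees(2 * math.atan(stripe_width / (2 * distance)))

    # Hypothetical: a 5 mm stripe viewed from 30 mm away.
    alpha = acuity_angle_deg(5.0, 30.0)
    ```

    Narrower stripes or greater viewing distances give smaller angles; an adult fish resolving the ~1 degree threshold reported above discriminates roughly five times finer detail than a hatchling at ~5 degrees.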

  17. Interactive Visual Analysis within Dynamic Ocean Models

    NASA Astrophysics Data System (ADS)

    Butkiewicz, T.

    2012-12-01

    The many observation and simulation based ocean models available today can provide crucial insights for all fields of marine research and can serve as valuable references when planning data collection missions. However, the increasing size and complexity of these models makes leveraging their contents difficult for end users. Through a combination of data visualization techniques, interactive analysis tools, and new hardware technologies, the data within these models can be made more accessible to domain scientists. We present an interactive system that supports exploratory visual analysis within large-scale ocean flow models. The currents and eddies within the models are illustrated using effective, particle-based flow visualization techniques. Stereoscopic displays and rendering methods are employed to ensure that the user can correctly perceive the complex 3D structures of depth-dependent flow patterns. Interactive analysis tools are provided which allow the user to experiment through the introduction of their customizable virtual dye particles into the models to explore regions of interest. A multi-touch interface provides natural, efficient interaction, with custom multi-touch gestures simplifying the otherwise challenging tasks of navigating and positioning tools within a 3D environment. We demonstrate the potential applications of our visual analysis environment with two examples of real-world significance: Firstly, an example of using customized particles with physics-based behaviors to simulate pollutant release scenarios, including predicting the oil plume path for the 2010 Deepwater Horizon oil spill disaster. Secondly, an interactive tool for plotting and revising proposed autonomous underwater vehicle mission pathlines with respect to the surrounding flow patterns predicted by the model; as these survey vessels have extremely limited energy budgets, designing more efficient paths allows for greater survey areas.

  18. Visualization on the Web of 20 Years of Crop Rotation and Wildlife Co-Evolutions

    NASA Astrophysics Data System (ADS)

    Plumejeaud-Perreau, Christine; Poitevin, Cyril; Bretagnolle, Vincent

    2018-05-01

    The accumulation of evidence of the effects of intensive agricultural practices on wildlife fauna and flora, and on biodiversity in general, has been widely published in scientific papers (Tildman, 1999). However, the data supporting these conclusions are often kept hidden within research institutions. This paper presents a data visualization system open on the Web, allowing citizens comprehensive access to data produced by such a research institution, collected over more than 20 years. The Web information system was designed to ease the comparison of data issued from various databases describing the same object, the agricultural landscape, at different scales and through different observation devices. An interactive visualization is proposed to examine the co-evolution of fauna and flora together with agricultural practices. It mixes aerial orthoimagery produced since 1950 with vectorial data showing the evolution of agricultural parcels alongside that of a few sentinel species such as the Montagu's harrier. This is achieved through a composition of maps, charts and timelines, and specific tools for comparison. Particular attention is given to observation-effort bias in order to show meaningful statistical aggregates.

  19. Vision in the natural world.

    PubMed

    Hayhoe, Mary M; Rothkopf, Constantin A

    2011-03-01

    Historically, the study of visual perception has followed a reductionist strategy, with the goal of understanding complex visually guided behavior through separate analysis of its elemental components. Recent developments in monitoring behavior, such as the measurement of eye movements in unconstrained observers, have allowed investigation of the use of vision in the natural world. This has led to a variety of insights that would be difficult to achieve in more constrained experimental contexts. In general, it shifts the focus of vision research away from the properties of the stimulus toward a consideration of the behavioral goals of the observer. It appears that behavioral goals are a critical factor in controlling the acquisition of visual information from the world. This insight has been accompanied by a growing understanding of the importance of reward in modulating the underlying neural mechanisms, and by theoretical developments using reinforcement learning models of complex behavior. These developments provide us with the tools to understand how tasks are represented in the brain, and how they control the acquisition of information through the use of gaze. WIREs Cogn Sci 2011 2:158-166. DOI: 10.1002/wcs.113. Copyright © 2010 John Wiley & Sons, Ltd.

  20. Visual Reconciliation of Alternative Similarity Spaces in Climate Modeling.

    PubMed

    Poco, Jorge; Dasgupta, Aritra; Wei, Yaxing; Hargrove, William; Schwalm, Christopher R; Huntzinger, Deborah N; Cook, Robert; Bertini, Enrico; Silva, Claudio T

    2014-12-01

    Visual data analysis often requires grouping data objects based on their similarity. In many application domains, researchers use algorithms and techniques like clustering and multidimensional scaling to extract groupings from data. While extracting these groups using a single similarity criterion is relatively straightforward, comparing alternative criteria poses additional challenges. In this paper we define visual reconciliation as the problem of reconciling multiple alternative similarity spaces through visualization and interaction. We derive this problem from our work on model comparison in climate science, where climate modelers are faced with the challenge of making sense of alternative ways to describe their models: one through the output they generate, another through the large set of properties that describe them. Ideally, they want to understand whether groups of models with similar spatio-temporal behaviors share similar sets of criteria or, conversely, whether similar criteria lead to similar behaviors. We propose a visual analytics solution based on linked views that addresses this problem by allowing the user to dynamically create, modify and observe the interaction among groupings, thereby making the potential explanations apparent. We present case studies that demonstrate the usefulness of our technique in the area of climate science.
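
    One simple way to quantify how well two alternative groupings of the same objects agree, before any visual reconciliation, is a pairwise agreement score such as the classical Rand index. This is an illustrative assumption on our part, not the paper's interactive technique:

    ```python
    from itertools import combinations

    def rand_index(labels_a, labels_b):
        """Fraction of object pairs on which two groupings agree,
        i.e. the pair is together in both or apart in both."""
        agree = 0
        pairs = list(combinations(range(len(labels_a)), 2))
        for i, j in pairs:
            same_a = labels_a[i] == labels_a[j]
            same_b = labels_b[i] == labels_b[j]
            agree += (same_a == same_b)
        return agree / len(pairs)

    # Two hypothetical groupings of four climate models:
    # one by output behavior, one by model properties.
    score = rand_index([0, 0, 1, 1], [0, 0, 1, 2])
    ```

    A score of 1.0 means the two similarity spaces induce identical groupings; lower scores flag the disagreements a reconciliation view would need to surface.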

  1. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events.

    PubMed

    Stekelenburg, Jeroen J; Vroomen, Jean

    2012-01-01

    In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical sub-additive amplitude reductions (AV - V < A) were found for the auditory N1 and P2 for spatially congruent and incongruent conditions. The new finding is that this N1 suppression was greater for the spatially congruent stimuli. A very early audiovisual interaction was also found at 40-60 ms (P50) in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.

  2. Collaborative visual analytics of radio surveys in the Big Data era

    NASA Astrophysics Data System (ADS)

    Vohl, Dany; Fluke, Christopher J.; Hassan, Amr H.; Barnes, David G.; Kilborn, Virginia A.

    2017-06-01

    Radio survey datasets comprise an increasing number of individual observations stored as sets of multidimensional data. In large survey projects, astronomers commonly face limitations regarding: 1) interactive visual analytics of sufficiently large subsets of data; 2) synchronous and asynchronous collaboration; and 3) documentation of the discovery workflow. To support collaborative data inquiry, we present encube, a large-scale comparative visual analytics framework. encube can utilise advanced visualization environments such as the CAVE2 (a hybrid 2D and 3D virtual reality environment powered with a 100 Tflop/s GPU-based supercomputer and 84 million pixels) for collaborative analysis of large subsets of data from radio surveys. It can also run on standard desktops, providing a capable visual analytics experience across the display ecology. encube is composed of four primary units enabling compute-intensive processing, advanced visualisation, dynamic interaction, parallel data query, along with data management. Its modularity will make it simple to incorporate astronomical analysis packages and Virtual Observatory capabilities developed within our community. We discuss how encube builds a bridge between high-end display systems (such as CAVE2) and the classical desktop, preserving all traces of the work completed on either platform - allowing the research process to continue wherever you are.

  3. Effectiveness of glucose monitoring systems modified for the visually impaired.

    PubMed

    Bernbaum, M; Albert, S G; Brusca, S; McGinnis, J; Miller, D; Hoffmann, J W; Mooradian, A D

    1993-10-01

    To compare three glucose meters modified for use by individuals with diabetes and visual impairment with regard to accuracy, precision, and clinical reliability. Ten subjects with diabetes and visual impairment performed self-monitoring of blood glucose using each of three commercially available blood glucose meters modified for visually impaired users (the AccuChek Freedom [Boehringer Mannheim, Indianapolis, IN], the Diascan SVM [Home Diagnostics, Eatontown, NJ], and the One Touch [Lifescan, Milpitas, CA]). The meters were independently evaluated by a laboratory technologist for precision and accuracy. Only two meters were acceptable with regard to laboratory precision (coefficient of variation < 10%): the AccuChek and the One Touch. The AccuChek and the One Touch did not differ significantly with regard to laboratory estimates of accuracy. A great discrepancy in clinical reliability, however, was observed between these two meters. The AccuChek maintained a high degree of reliability (y = 0.99X + 0.44, r = 0.97, P = 0.001). The visually impaired subjects were unable to perform reliable testing using the One Touch system because of a lack of appropriate tactile landmarks and auditory signals. In addition to laboratory assessments of glucose meters, monitoring systems designed for the visually impaired must include adequate tactile and audible feedback features to allow for the acquisition and placement of appropriate blood samples.
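
    The precision criterion used above (coefficient of variation below 10%) is straightforward to compute; the glucose readings in the sketch are hypothetical illustration values, not data from the study.

    ```python
    import statistics

    def coefficient_of_variation(readings):
        """CV (%) = sample standard deviation / mean * 100."""
        return statistics.stdev(readings) / statistics.mean(readings) * 100

    # Hypothetical repeated readings (mg/dl) of one control sample:
    cv = coefficient_of_variation([100, 102, 98, 101, 99])
    # A meter passes the study's precision criterion when cv < 10.
    ```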

  4. Does the walking task matter? Influence of different walking conditions on dual-task performances in young and older persons.

    PubMed

    Beurskens, Rainer; Bock, Otmar

    2013-12-01

    Previous literature suggests that age-related deficits of dual-task walking are particularly pronounced with second tasks that require continuous visual processing. Here we evaluate whether the difficulty of the walking task matters as well. To this end, participants were asked to walk along a straight pathway 20 m in length under four different walking conditions: (a) wide path and preferred pace; (b) narrow path and preferred pace; (c) wide path and fast pace; (d) obstacled wide path and preferred pace. Each condition was performed concurrently with a task requiring visual processing or fine motor control, and all tasks were also performed alone, which allowed us to calculate the dual-task costs (DTC). Results showed that the age-related increase of DTC is substantially larger with the visually demanding task than with the motor-demanding task, more so when walking on a narrow or obstacled path. We attribute these observations to the fact that visual scanning of the environment becomes more crucial when walking in difficult terrain: the higher visual demand of those conditions accentuates age-related deficits in coordinating walking with a visual non-walking task. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
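
    Dual-task costs are conventionally expressed as the relative performance decrement from the single-task to the dual-task condition. The abstract does not spell out the exact formula used, so the sketch below follows that common convention with made-up numbers.

    ```python
    def dual_task_cost(single, dual, higher_is_better=True):
        """DTC (%) relative to single-task performance; positive = decrement."""
        if higher_is_better:
            return (single - dual) / single * 100
        # For measures where lower is better (e.g., completion time):
        return (dual - single) / single * 100

    # Hypothetical: walking speed drops from 1.25 m/s alone to 1.00 m/s
    # while performing a concurrent visual task.
    dtc = dual_task_cost(1.25, 1.00)
    ```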

  5. Classroom Demonstration of the Visual Effects of Eye Diseases

    PubMed Central

    Raphail, Ann-Marie; Bach, Emily C.; Hallock, Robert M.

    2014-01-01

    An understanding of the visual system is a fundamental aspect of many neuroscience and psychology courses. These classes often cover a variety of visual diseases that are correlated with the anatomy of the visual system, e.g., cataracts are caused by a clouding of the lens. Here, we describe an easy way to modify standard laboratory glasses/goggles to simulate the various perceptual deficits that accompany vision disorders such as astigmatism, cataracts, diabetic retinopathy, glaucoma, optic neuritis, posterior vitreous detachment, and retinitis pigmentosa. For example, when teaching about cataracts, students can put on glasses that mimic how severe cataracts affect one’s vision. Using the glasses will allow students to draw connections between the disorder, its perceptual deficits, and the underlying anatomy. We also discuss floaters in the eye and provide an easy method to allow students to detect their own floaters. Together, these demonstrations make for a more dynamic and interactive class on the visual system that will better link diseases of the eye to anatomy and perception, and allow undergraduate students to develop a better understanding of the visual system as a whole. PMID:24693262

  6. Rehabilitation of Reading and Visual Exploration in Visual Field Disorders: Transfer or Specificity?

    ERIC Educational Resources Information Center

    Schuett, Susanne; Heywood, Charles A.; Kentridge, Robert W.; Dauner, Ruth; Zihl, Josef

    2012-01-01

    Reading and visual exploration impairments in unilateral homonymous visual field disorders are frequent and disabling consequences of acquired brain injury. Compensatory therapies have been developed, which allow patients to regain sufficient reading and visual exploration performance through systematic oculomotor training. However, it is still…

  7. Computer vision enhances mobile eye-tracking to expose expert cognition in natural-scene visual-search tasks

    NASA Astrophysics Data System (ADS)

    Keane, Tommy P.; Cahill, Nathan D.; Tarduno, John A.; Jacobs, Robert A.; Pelz, Jeff B.

    2014-02-01

    Mobile eye-tracking provides a rare opportunity to record and elucidate cognition in action. In our research, we are searching for patterns in, and distinctions between, the visual-search performance of experts and novices in the geosciences. Traveling to regions resulting from various geological processes as part of an introductory field studies course in geology, we record the prima facie gaze patterns of experts and novices when they are asked to determine the modes of geological activity that have formed the scene-view presented to them. Recording eye video and scene video in natural settings generates complex imagery that requires advanced applications of computer vision research to generate registrations and mappings between the views of separate observers. By developing such mappings, we can place many observers into a single mathematical space where we can spatio-temporally analyze inter- and intra-subject fixations, saccades, and head motions. While working towards perfecting these mappings, we developed an updated experiment setup that allowed us to statistically analyze intra-subject eye-movement events without the need for a common domain. Through such analyses we are finding statistical differences between novices and experts in these visual-search tasks. In the course of this research we have developed a unified, open-source software framework for the processing, visualization, and interaction of mobile eye-tracking and high-resolution panoramic imagery.
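
    Registering separate observers' views into a single space is often done with a planar projective mapping (a homography); the abstract does not specify the authors' exact registration method, so the following is only a hypothetical sketch of applying a 3×3 homography to a gaze point.

    ```python
    def apply_homography(H, x, y):
        """Map an image point (x, y) through a 3x3 homography given as
        row-major nested lists, returning normalized coordinates."""
        xs = H[0][0] * x + H[0][1] * y + H[0][2]
        ys = H[1][0] * x + H[1][1] * y + H[1][2]
        w = H[2][0] * x + H[2][1] * y + H[2][2]
        return xs / w, ys / w

    # Hypothetical homography: pure translation by (+2, -1) pixels,
    # e.g. aligning one observer's scene frame with another's.
    H = [[1, 0, 2], [0, 1, -1], [0, 0, 1]]
    mapped = apply_homography(H, 10, 10)
    ```

    In practice such a matrix would be estimated from matched scene features between the two views; once known, every fixation can be projected into the shared coordinate frame for inter-subject comparison.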

  8. Effect of fixation positions on perception of lightness

    NASA Astrophysics Data System (ADS)

    Toscani, Matteo; Valsecchi, Matteo; Gegenfurtner, Karl R.

    2015-03-01

    Visual acuity, luminance sensitivity, contrast sensitivity, and color sensitivity are maximal in the fovea and decrease with retinal eccentricity. Therefore every scene is perceived by integrating the small, high resolution samples collected by moving the eyes around. Moreover, when viewing ambiguous figures the fixated position influences the dominance of the possible percepts. Therefore fixations could serve as a selection mechanism whose function is not confined to finely resolve the selected detail of the scene. Here this hypothesis is tested in the lightness perception domain. In a first series of experiments we demonstrated that when observers matched the color of natural objects they based their lightness judgments on objects' brightest parts. During this task the observers tended to fixate points with above average luminance, suggesting a relationship between perception and fixations that we causally proved using a gaze contingent display in a subsequent experiment. Simulations with rendered physical lighting show that higher values in an object's luminance distribution are particularly informative about reflectance. In a second series of experiments we considered a high level strategy that the visual system uses to segment the visual scene in a layered representation. We demonstrated that eye movement sampling mediates between the layer segregation and its effects on lightness perception. Together these studies show that eye fixations are partially responsible for the selection of information from a scene that allows the visual system to estimate the reflectance of a surface.

  9. Telescopic multi-resolution augmented reality

    NASA Astrophysics Data System (ADS)

    Jenkins, Jeffrey; Frenchi, Christopher; Szu, Harold

    2014-05-01

    To ensure a self-consistent scaling approximation, the underlying microscopic fluctuation components can naturally influence macroscopic means, which may give rise to emergent observable phenomena. In this paper, we describe a consistent macroscopic (cm-scale), mesoscopic (micron-scale), and microscopic (nano-scale) approach to introduce Telescopic Multi-Resolution (TMR) into current Augmented Reality (AR) visualization technology. We propose to couple TMR-AR by introducing an energy-matter interaction engine framework that is based on known physics, biology, and chemistry principles. An immediate payoff of TMR-AR is a self-consistent approximation of the interaction between microscopic observables and their direct effect on the macroscopic system that is driven by real-world measurements. Such an interdisciplinary approach enables us not only to achieve multi-scale, telescopic visualization of real and virtual information, but also to conduct thought experiments through AR. As a result of this consistency, the framework allows us to explore a large-dimensionality parameter space of measured and unmeasured regions. Towards this direction, we explore how to build learnable libraries of biological, physical, and chemical mechanisms. Fusing analytical sensors with TMR-AR libraries provides a robust framework to optimize testing and evaluation through data-driven or virtual synthetic simulations. Visualizing mechanisms of interaction requires identification of observable image features that can indicate the presence of information at multiple spatial and temporal scales of analog data. The AR methodology was originally developed to enhance pilot training as well as "make believe" entertainment in a user-friendly digital environment. We believe TMR-AR can someday help us conduct thought experiments scientifically, pedagogically visualized in zoom-in-and-out, consistent, multi-scale approximations.

  10. NASA/NOAA Electronic Theater: 90 Minutes of Spectacular Visualization

    NASA Technical Reports Server (NTRS)

    Hasler, A. F.

    2004-01-01

    The NASA/NOAA Electronic Theater presents Earth science observations and visualizations from space in a historical perspective. Fly in from outer space to Asheville and the conference auditorium. Zoom through the cosmos to SLC, site of the 2002 Winter Olympics, using 1 m IKONOS 'spy satellite' data. Contrast the 1972 Apollo 17 'Blue Marble' image of the Earth with the latest US and international global satellite images that allow us to view our planet from any vantage point. See the latest spectacular images from NASA/NOAA remote sensing missions like Terra, GOES, TRMM, SeaWiFS, and Landsat 7, of storms and fires like Hurricane Isabel and the LA/San Diego fire storms of 2003. See how High Definition Television (HDTV) is revolutionizing the way we do science communication. Take the pulse of the planet on daily, annual and 30-year time scales. See daily thunderstorms, the annual blooming of the northern hemisphere land masses and oceans, fires in Africa, dust storms in Iraq, and carbon monoxide exhaust from global burning. See visualizations featured on Newsweek, TIME, National Geographic, and Popular Science covers and on national and international network TV. Spectacular new global visualizations of the observed and simulated atmosphere and oceans are shown. See the currents and vortexes in the oceans that bring up the nutrients driving blooms in response to El Niño/La Niña climate changes. The E-Theater will be presented using the latest High Definition TV (HDTV) and video projection technology on a large screen. See the global city lights, and the great NE US blackout of August 2003, observed by the 'night-vision' DMSP satellite.

  11. Radio Frequency Identification and Motion-sensitive Video Efficiently Automate Recording of Unrewarded Choice Behavior by Bumblebees

    PubMed Central

    Orbán, Levente L.; Plowright, Catherine M.S.

    2014-01-01

    We present two methods for observing bumblebee choice behavior in an enclosed testing space. The first method consists of Radio Frequency Identification (RFID) readers built into artificial flowers that display various visual cues, and RFID tags (i.e., passive transponders) glued to the thorax of bumblebee workers. The novelty in our implementation is that RFID readers are built directly into artificial flowers that are capable of displaying several distinct visual properties such as color, pattern type, spatial frequency (i.e., “busyness” of the pattern), and symmetry (spatial frequency and symmetry were not manipulated in this experiment). Additionally, these visual displays in conjunction with the automated systems are capable of recording unrewarded and untrained choice behavior. The second method consists of recording choice behavior at artificial flowers using motion-sensitive high-definition camcorders. Bumblebees have number tags glued to their thoraces for unique identification. The advantage of this implementation over RFID is that in addition to landing behavior, alternative measures of preference such as hovering and antennation may also be observed. Both automation methods increase experimental control and internal validity by allowing larger-scale studies that take into account individual differences. External validity is also improved because bees can freely enter and exit the testing environment without constraints such as the availability of a research assistant on-site. Compared to human observation in real time, the automated methods are more cost-effective and possibly less error-prone. PMID:25489677

  12. Radio Frequency Identification and motion-sensitive video efficiently automate recording of unrewarded choice behavior by bumblebees.

    PubMed

    Orbán, Levente L; Plowright, Catherine M S

    2014-11-15

    We present two methods for observing bumblebee choice behavior in an enclosed testing space. The first method consists of Radio Frequency Identification (RFID) readers built into artificial flowers that display various visual cues, and RFID tags (i.e., passive transponders) glued to the thorax of bumblebee workers. The novelty in our implementation is that RFID readers are built directly into artificial flowers that are capable of displaying several distinct visual properties such as color, pattern type, spatial frequency (i.e., "busyness" of the pattern), and symmetry (spatial frequency and symmetry were not manipulated in this experiment). Additionally, these visual displays in conjunction with the automated systems are capable of recording unrewarded and untrained choice behavior. The second method consists of recording choice behavior at artificial flowers using motion-sensitive high-definition camcorders. Bumblebees have number tags glued to their thoraces for unique identification. The advantage of this implementation over RFID is that in addition to landing behavior, alternative measures of preference such as hovering and antennation may also be observed. Both automation methods increase experimental control and internal validity by allowing larger-scale studies that take into account individual differences. External validity is also improved because bees can freely enter and exit the testing environment without constraints such as the availability of a research assistant on-site. Compared to human observation in real time, the automated methods are more cost-effective and possibly less error-prone.
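
    The automated recording described above ultimately reduces a stream of raw RFID reads to discrete flower choices. As a minimal sketch of that reduction step (the function, field names, and the 5-second revisit gap are illustrative assumptions, not the authors' implementation), bursts of consecutive reads of the same tag at the same flower can be collapsed into visits:

```python
from dataclasses import dataclass

@dataclass
class TagRead:
    tag_id: str      # RFID tag glued to a bee's thorax
    flower_id: str   # artificial flower housing the reader
    t: float         # read time in seconds

def count_visits(reads, revisit_gap=5.0):
    """Collapse bursts of consecutive reads of the same tag at the same
    flower into discrete visits. A new visit starts when the time since
    the last read of that (tag, flower) pair exceeds `revisit_gap`."""
    last_seen = {}
    visits = {}
    for r in sorted(reads, key=lambda r: r.t):
        key = (r.tag_id, r.flower_id)
        if key not in last_seen or r.t - last_seen[key] > revisit_gap:
            visits[key] = visits.get(key, 0) + 1
        last_seen[key] = r.t
    return visits
```

    A per-(bee, flower) visit count of this kind is what would feed the unrewarded-choice analyses; the real system would additionally need to handle reader dropouts and tag collisions.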

  13. Attentional gain and processing capacity limits predict the propensity to neglect unexpected visual stimuli.

    PubMed

    Papera, Massimiliano; Richards, Anne

    2016-05-01

    Exogenous allocation of attentional resources allows the visual system to encode and maintain representations of stimuli in visual working memory (VWM). However, limits in the processing capacity to allocate resources can prevent unexpected visual stimuli from gaining access to VWM and thereby to consciousness. Using a novel approach to create unbiased stimuli of increasing saliency, we investigated visual processing during a visual search task in individuals who show a high or low propensity to neglect unexpected stimuli. When the propensity to inattention is high, ERP recordings show diminished amplification, concomitant with a decrease in theta band power, during the N1 latency, followed by poor target enhancement during the N2 latency. Furthermore, a later modulation in the P3 latency was also found in individuals showing a propensity to visual neglect, suggesting that more effort is required for conscious maintenance of visual information in VWM. Effects during early stages of processing (N80 and P1) were also observed, suggesting that sensitivity to contrasts and medium-to-high spatial frequencies may be modulated by low-level saliency (although no statistical group differences were found). In accordance with the Global Workspace Model, our data indicate that a lack of resources in low-level processors and visual attention may be responsible for the failure to "ignite" a state of high-level activity spread across several brain areas that is necessary for stimuli to access awareness. These findings may aid in the development of diagnostic tests and interventions to detect and reduce the propensity to neglect unexpected visual stimuli. © 2016 Society for Psychophysiological Research.

  14. An annotation system for 3D fluid flow visualization

    NASA Technical Reports Server (NTRS)

    Loughlin, Maria M.; Hughes, John F.

    1995-01-01

    Annotation is a key activity of data analysis. However, current systems for data analysis focus almost exclusively on visualization. We propose a system which integrates annotations into a visualization system. Annotations are embedded in 3D data space, using the Post-it metaphor. This embedding allows context-based information storage and retrieval, and facilitates information sharing in collaborative environments. We provide a traditional database filter and a Magic Lens filter to create specialized views of the data. The system has been customized for fluid flow applications, with features which allow users to store parameters of visualization tools and sketch 3D volumes.

  15. Phase transitions in mixed gas hydrates: experimental observations versus calculated data.

    PubMed

    Schicks, Judith M; Naumann, Rudolf; Erzinger, Jörg; Hester, Keith C; Koh, Carolyn A; Sloan, E Dendy

    2006-06-15

    This paper presents the phase behavior of multicomponent gas hydrate systems formed from primarily methane with small amounts of ethane and propane. Experimental conditions were typically in a pressure range between 1 and 6 MPa, and the temperature range was between 260 and 290 K. These multicomponent systems have been investigated using a variety of techniques including microscopic observations, Raman spectroscopy, and X-ray diffraction. These techniques, used in combination, allowed for measurement of the hydrate structure and composition, while observing the morphology of the hydrate crystals measured. The hydrate formed immediately below the three-phase line (V-L --> V-L-H) and contained crystals that were both light and dark in appearance. The light crystals, which visually were a single solid phase, showed a spectroscopic indication for the presence of occluded free gas in the hydrate. In contrast, the dark crystals were measured to be structure II (sII) without the presence of these occluded phases. Along with hydrate measurements near the decomposition line, an unexpected transformation process was visually observed at P-T-conditions in the stability field of the hydrates. Larger crystallites transformed into a foamy solid upon cooling over this transition line (between 5 and 10 K below the decomposition temperature). Below the transition line, a mixture of sI and sII was detected. This is the first time that these multicomponent systems have been investigated at these pressure and temperature conditions using both visual and spectroscopic techniques. These techniques enabled us to observe and measure the unexpected transformation process showing coexistence of different gas hydrate phases.

  16. Observation and visualization: reflections on the relationship between science, visual arts, and the evolution of the scientific image.

    PubMed

    Kolijn, Eveline

    2013-10-01

    The connections between biological sciences, art and printed images are of great interest to the author. She reflects on the historical relevance of visual representations for science. She argues that the connection between art and science seems to have diminished during the twentieth century. However, this connection is currently growing stronger again through digital media and new imaging methods. Scientific illustrations have fuelled art, while visual modeling tools have assisted scientific research. As a print media artist, she explores the relationship between art and science in her studio practice and will present this historical connection with examples related to evolution, microbiology and her own work. Art and science share a common source, which leads to scrutiny and enquiry. Science sets out to reveal and explain our reality, whereas art comments and makes connections that don't need to be tested by rigorous protocols. Art and science should each be evaluated on their own merit. Allowing room for both in the quest to understand our world will lead to an enriched experience.

  17. Effects of Different Heave Motion Components on Pilot Pitch Control Behavior

    NASA Technical Reports Server (NTRS)

    Zaal, Petrus M. T.; Zavala, Melinda A.

    2016-01-01

    The study described in this paper had two objectives. The first objective was to investigate if a different weighting of heave motion components decomposed at the center of gravity, allowing for a higher fidelity of individual components, would result in pilot manual pitch control behavior and performance closer to that observed with full aircraft motion. The second objective was to investigate if decomposing the heave components at the aircraft's instantaneous center of rotation rather than at the center of gravity could result in additional improvements in heave motion fidelity. Twenty-one general aviation pilots performed a pitch attitude control task in an experiment conducted on the Vertical Motion Simulator at NASA Ames under different hexapod motion conditions. The large motion capability of the Vertical Motion Simulator also allowed for a full aircraft motion condition, which served as a baseline. The controlled dynamics were of a transport category aircraft trimmed close to the stall point. When the ratio of center of gravity pitch heave to center of gravity heave increased in the hexapod motion conditions, pilot manual control behavior and performance became increasingly more similar to what is observed with full aircraft motion. Pilot visual and motion gains significantly increased, while the visual lead time constant decreased. The pilot visual and motion time delays remained approximately constant and decreased, respectively. The neuromuscular damping and frequency both decreased, with their values more similar to what is observed with real aircraft motion when there was an equal weighting of the heave of the center of gravity and heave due to rotations about the center of gravity. In terms of open-loop performance, the disturbance and target crossover frequencies increased and decreased, respectively, and their corresponding phase margins remained constant and increased, respectively. The decomposition point of the heave components only had limited effects on pilot manual control behavior and performance.

  18. High-performance object tracking and fixation with an online neural estimator.

    PubMed

    Kumarawadu, Sisil; Watanabe, Keigo; Lee, Tsu-Tian

    2007-02-01

    Vision-based target tracking and fixation to keep objects that move in three dimensions in view is important for many tasks in several fields, including intelligent transportation systems and robotics. Much of the visual control literature has focused on the kinematics of visual control and ignored a number of significant dynamic control issues that limit performance. In line with this, this paper presents a neural network (NN)-based binocular tracking scheme for high-performance target tracking and fixation with minimum sensory information. The procedure allows the designer to take into account the physical (Lagrangian dynamics) properties of the vision system in the control law. The design objective is to synthesize a binocular tracking controller that explicitly takes the system's dynamics into account, yet needs no knowledge of dynamic nonlinearities or joint velocity sensory information. The combined neurocontroller-observer scheme can guarantee the uniform ultimate bounds of the tracking, observer, and NN weight estimation errors under fairly general conditions on the controller-observer gains. The controller is tested and verified via simulation in the presence of severe target motion changes.

  19. Influence of daytime running lamps on visual reaction time of pedestrians when detecting turn indicators.

    PubMed

    Peña-García, Antonio; de Oña Lopez, Rocío; Espín Estrella, Antonio; Aznar Dols, Fernando; Calvo Poyo, Franscisco J; Molero Mesa, Evaristo; de Oña López, Juan

    2010-10-01

    This article describes an experiment that studied the influence of Daytime Running Lamps (DRL) on pedestrian detection of turn indicators. An experimental device including one DRL and one turn indicator was used to determine the Visual Reaction Times (VRT) of 148 observers in different situations involving turn indicator activation. These situations were combinations of three main variables: the color of the DRL, the separation between the DRL and the turn indicator, and the observation angle. Significant changes in VRT were found depending on the configurations above, especially the observation angle and the color of the DRL. This second result demonstrates that amber DRLs inhibit the detection of turn indicators. One of the main aims of this paper is to recommend that carmakers introduce only white DRLs on new vehicles. We also intend to advise regulatory bodies working on automotive regulation about the consequences of allowing amber DRLs and about the danger of introducing constraints on the distance between the DRL and the turn indicator without further experimental evidence. Copyright © 2010 Elsevier Ltd and National Safety Council. All rights reserved.

  20. Ultra-precise Masses and Magnitudes for the Gliese 268 M-dwarf Binary

    NASA Astrophysics Data System (ADS)

    Barry, R. K.; Demory, B. O.; Ségransan, D.; Forveille, T.; Danchi, W. C.; di Folco, E.; Queloz, D.; Torres, G.; Traub, W. A.; Delfosse, X.; Mayor, M.; Perrier, C.; Udry, S.

    2009-02-01

    Recent advances in astrometry using interferometry and precision radial velocity techniques combined allow for a significant improvement in the precision of masses of M-dwarf stars in visual systems. We report recent astrometric observations of Gliese 268, an M-dwarf binary with a 10.4 day orbital period, with the IOTA interferometer and radial velocity observations with the ELODIE instrument. Combining these measurements leads to preliminary masses of the constituent stars with uncertainties of 0.4%. The masses of the components are 0.22596+/-0.00084 Msolar for the primary and 0.19230+/-0.00071 Msolar for the secondary. The system parallax is determined by these observations to be 0.1560+/-.0030 arcsec (2.0% uncertainty) and is within Hipparcos error bars (0.1572+/-.0033). We tested these physical parameters, along with the near-infrared luminosities of the stars, against stellar evolution models for low-mass stars. Discrepancies between the measured and theoretical values point toward a low-level departure from the predictions. These results are among the most precise masses measured for visual binaries.
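
    The mass determination above rests on Kepler's third law for a visual binary: once the interferometric orbit gives the semimajor axis in angular units and the parallax converts it to AU, the total system mass in solar masses follows from a³/P². A minimal sketch of that standard relation (the function name and argument choices are illustrative; the paper's actual analysis is a simultaneous astrometric-plus-radial-velocity orbital solution that also apportions mass between the components):

```python
def total_mass_solar(angular_a_arcsec, parallax_arcsec, period_days):
    """Kepler's third law for a visual binary in convenient units:
    with the semimajor axis a in AU (angular size divided by parallax)
    and the period P in years, the total mass in solar masses is a**3 / P**2."""
    a_au = angular_a_arcsec / parallax_arcsec   # small-angle conversion to AU
    p_yr = period_days / 365.25
    return a_au ** 3 / p_yr ** 2
```

    The cubic dependence on a (and hence on parallax) is why the 2.0% parallax uncertainty, not the orbit itself, often dominates the mass error budget in such systems.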

  1. GloVis

    USGS Publications Warehouse

    Houska, Treva R.; Johnson, A.P.

    2012-01-01

    The Global Visualization Viewer (GloVis) trifold provides basic information for online access to a subset of satellite and aerial photography collections from the U.S. Geological Survey Earth Resources Observation and Science (EROS) Center archive. The GloVis (http://glovis.usgs.gov/) browser-based utility allows users to search and download National Aerial Photography Program (NAPP), National High Altitude Photography (NHAP), Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Earth Observing-1 (EO-1), Global Land Survey, Moderate Resolution Imaging Spectroradiometer (MODIS), and TerraLook data. Minimum computer system requirements and customer service contact information also are included in the brochure.

  2. Integration of Robotics and 3D Visualization to Modernize the Expeditionary Warfare Demonstrator (EWD)

    DTIC Science & Technology

    2009-09-01

    his schedule is. I learned most from our informal discussions and collaboration with other industry professionals. Amela was instrumental in allowing...me to effectively analyze, structure and critique my work. I take many professional lessons learned from Amela with me as I leave NPS. Thanks to...observers began learning about maneuver warfare in a large-scale battle. The demonstration was recognized as a huge success after General von Muffling

  3. MEDSAT - A remote sensing satellite for malaria early warning and control

    NASA Technical Reports Server (NTRS)

    Vesecky, John; Slawski, James; Stottlemeyer, Bret; De La Sierra, Ruben; Daida, Jason; Wood, Byron; Lawless, James

    1992-01-01

    A remote sensing medical satellite (MEDSAT) aids in the control of carrier- (vector-) borne disease. The prototype design is a light satellite to test for control of malaria. The design features a 340-kg satellite with visual/IR and SAR sensors in a low-inclination orbit observing a number of worldwide test sites. The approach is to use four-band visual/IR and dual-polarized L-band SAR images obtained from MEDSAT in concert with in-situ data to estimate the temporal and spatial variations of malaria risk. This allows public health resources to be focused on the most vulnerable areas at the appropriate time. It is concluded that a light-satellite design for MEDSAT with a Pegasus launch is feasible.

  4. Visual identification and similarity measures used for on-line motion planning of autonomous robots in unknown environments

    NASA Astrophysics Data System (ADS)

    Martínez, Fredy; Martínez, Fernando; Jacinto, Edwar

    2017-02-01

    In this paper we propose an on-line motion planning strategy for autonomous robots in dynamic and locally observable environments. In this approach, we first visually identify geometric shapes in the environment by filtering images. Then, an ART-2 network is used to establish the similarity between patterns. The proposed algorithm allows a robot to establish its relative location in the environment and to define its navigation path based on images of the environment and their similarity to reference images. This is an efficient and minimalist method that uses the similarity of landmark view patterns to navigate to the desired destination. Laboratory tests on real prototypes demonstrate the performance of the algorithm.
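
    The core of the navigation scheme is comparing the current view against stored reference landmarks. The paper uses an ART-2 network for this pattern matching; as a simplified, hypothetical stand-in, plain cosine similarity between feature vectors illustrates the idea of selecting the best-matching landmark:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def best_landmark(view, references):
    """Return the name of the reference landmark whose stored feature
    vector is most similar to the current view's feature vector."""
    return max(references, key=lambda name: cosine_similarity(view, references[name]))
```

    In the actual system the feature vectors would come from the image-filtering stage, and ART-2's vigilance parameter would additionally let the robot reject views that match no known landmark well enough.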

  5. Arthroscopic approach and intra-articular anatomy of the dorsal and plantar synovial compartments of the bovine tarsocrural joint.

    PubMed

    Lardé, Hélène; Nichols, Sylvain; Babkine, Marie; Desrochers, André

    2017-01-01

    To determine arthroscopic approaches to the dorsal and plantar synovial compartments of the tarsocrural joint in adult cattle, and to describe the arthroscopic intra-articular anatomy from each approach. Ex vivo study. Fresh adult bovine cadavers (n = 7). Two tarsocrural joints were injected with latex to determine arthroscopic portal locations, and arthroscopy of the tarsocrural joint was performed on 12 tarsi. The dorsolateral approach was made through the large pouch located between the long digital extensor and peroneus longus tendons. The dorsomedial approach was made just medial to the common synovial sheath of the tibialis cranialis, peroneus tertius, and long digital extensor tendons. The plantarolateral and plantaromedial approaches were made lateral and medial to the tarsal tendon sheath, respectively. Each approach allowed visualization of the distal tibia articulating with the proximal trochlea of the talus. Consistently observed structures included the distal intermediate ridge of the tibia, and the medial and lateral trochlear ridges and trochlear groove of the talus. The lateral and medial malleoli were best assessed from the dorsal approaches. From the lateral approaches, evaluation of the abaxial surface of the lateral trochlear ridge allowed visualization of the fibulocalcaneal joint. From the plantar approaches, additional observed structures included the coracoid process of the calcaneus, the plantar trochlea of the talus, and the plantar talotibial and talofibular ligaments. In cattle, the dorsolateral and plantarolateral approaches allowed for the best evaluation of the dorsal and plantar aspects of the tarsocrural joint, respectively. © 2017 The American College of Veterinary Surgeons.

  6. Subconscious Visual Cues during Movement Execution Allow Correct Online Choice Reactions

    PubMed Central

    Leukel, Christian; Lundbye-Jensen, Jesper; Christensen, Mark Schram; Gollhofer, Albert; Nielsen, Jens Bo; Taube, Wolfgang

    2012-01-01

    Part of the sensory information we receive is processed by our central nervous system without conscious perception. Subconscious processing has been shown to be capable of triggering motor reactions. In the present study, we asked whether visual information that is not consciously perceived could influence decision-making in a choice reaction task. Ten healthy subjects (28±5 years) executed two different experimental protocols. In the Motor reaction protocol, a visual target cue was shown on a computer screen. Depending on the displayed cue, subjects had to either complete a reaching movement (go-condition) or abort the movement (stop-condition). The cue was presented with different display durations (20–160 ms). In the second, Verbalization protocol, subjects verbalized what they experienced on the screen. Again, the cue was presented with different display durations. This second protocol tested for conscious perception of the visual cue. The results of this study show that subjects achieved significantly more correct responses in the Motor reaction protocol than in the Verbalization protocol. This difference was only observed at the very short display durations of the visual cue. Since correct responses in the Verbalization protocol required conscious perception of the visual information, our findings imply that the subjects performed correct motor responses to visual cues of which they were not consciously aware. It is therefore concluded that humans may reach decisions based on subconscious visual information in a choice reaction task. PMID:23049749

  7. A Polymer Visualization System with Accurate Heating and Cooling Control and High-Speed Imaging

    PubMed Central

    Wong, Anson; Guo, Yanting; Park, Chul B.; Zhou, Nan Q.

    2015-01-01

    A visualization system to observe crystal and bubble formation in polymers under high temperature and pressure has been developed. Using this system, polymer can be subjected to a programmable thermal treatment to simulate the process in high pressure differential scanning calorimetry (HPDSC). With a high-temperature/high-pressure view-cell unit, this system enables in situ observation of crystal formation in semi-crystalline polymers to complement thermal analyses with HPDSC. The high-speed recording capability of the camera not only allows detailed recording of crystal formation, it also enables in situ capture of plastic foaming processes with a high temporal resolution. To demonstrate the system’s capability, crystal formation and foaming processes of polypropylene/carbon dioxide systems were examined. It was observed that crystals nucleated and grew into spherulites, and they grew at faster rates as temperature decreased. This observation agrees with the crystallinity measurement obtained with the HPDSC. Cell nucleation first occurred at crystals’ boundaries due to CO2 exclusion from crystal growth fronts. Subsequently, cells were nucleated around the existing ones due to tensile stresses generated in the constrained amorphous regions between networks of crystals. PMID:25915031

  8. An Unroofing Method to Observe the Cytoskeleton Directly at Molecular Resolution Using Atomic Force Microscopy

    PubMed Central

    Usukura, Eiji; Narita, Akihiro; Yagi, Akira; Ito, Shuichi; Usukura, Jiro

    2016-01-01

    An improved unroofing method enabled the cantilever of an atomic force microscope (AFM) to reach directly into a cell to visualize the intracellular cytoskeletal actin filaments, microtubules, clathrin coats, and caveolae in phosphate-buffered saline (PBS) at a higher resolution than conventional electron microscopy. All of the actin filaments clearly exhibited a short periodicity of approximately 5–6 nm, which was derived from globular actins linked to each other to form filaments, as well as a long helical periodicity. The polarity of the actin filaments appeared to be determined by the shape of the periodic striations. Microtubules were identified based on their thickness. Clathrin coats and caveolae were observed on the cytoplasmic surface of cell membranes. The area containing clathrin molecules and their terminal domains was directly visualized. Characteristic ridge structures located at the surface of the caveolae were observed at high resolution, similar to those observed with electron microscopy (EM). Overall, unroofing allowed intracellular AFM imaging in a liquid environment with a level of quality equivalent or superior to that of EM. Thus, AFMs are anticipated to provide cutting-edge findings in cell biology and histology. PMID:27273367

  9. OnSight: Multi-platform Visualization of the Surface of Mars

    NASA Astrophysics Data System (ADS)

    Abercrombie, S. P.; Menzies, A.; Winter, A.; Clausen, M.; Duran, B.; Jorritsma, M.; Goddard, C.; Lidawer, A.

    2017-12-01

    A key challenge of planetary geology is to develop an understanding of an environment that humans cannot (yet) visit. Instead, scientists rely on visualizations created from images sent back by robotic explorers, such as the Curiosity Mars rover. OnSight is a multi-platform visualization tool that helps scientists and engineers to visualize the surface of Mars. Terrain visualization allows scientists to understand the scale and geometric relationships of the environment around the Curiosity rover, both for scientific understanding and for tactical consideration in safely operating the rover. OnSight includes a web-based 2D/3D visualization tool, as well as an immersive mixed reality visualization. In addition, OnSight offers a novel feature for communication among the science team. Using the multiuser feature of OnSight, scientists can meet virtually on Mars, to discuss geology in a shared spatial context. Combining web-based visualization with immersive visualization allows OnSight to leverage strengths of both platforms. This project demonstrates how 3D visualization can be adapted to either an immersive environment or a computer screen, and will discuss advantages and disadvantages of both platforms.

  10. Virtual presence for mission visualization: computer game technology provides a new approach

    NASA Astrophysics Data System (ADS)

    Hussey, K.

    2007-08-01

    The concept of virtual presence for mission and planetary science visualization is to allow the public to "see" in space as if they were either riding aboard or standing next to an ESA/NASA spacecraft. Our approach to accomplishing this goal is to utilize and extend the same technology used by the computer gaming industry. With this technology, people would be able to immediately "look" in any direction from their virtual location and "zoom in" at will. Whenever real data for their "view" exists, it would be incorporated into the scene. Where data is missing, a high-fidelity simulation of the view would be generated to fill in the chosen field of view. The observer could also change the time of observation into the past or future. The potential for the application of this technology to the development of educational curricula is huge. On the engineering side, all allowable spacecraft and environmental parameters that are being measured and sent to Earth would be immediately viewable, as if looking at the dashboard of a car or the instrument panel of an aircraft. Historical information could also be displayed upon request. This can revolutionize the way the general public and the planetary science community view ESA/NASA missions, and it provides an educational context that is attractive to the younger generation. While conceptually this technology is quite simple to use, the cross-discipline technical challenges are very demanding. The technology is currently under development and application at JPL to assist current missions in viewing their data, communicating with the public, and visualizing future mission plans. Real-time demonstrations of the technology described will be shown.

  11. Video-microscopy of NCAP films: the observation of LC droplets in real time

    NASA Astrophysics Data System (ADS)

    Reamey, Robert H.; Montoya, Wayne; Wong, Abraham

    1992-06-01

    We have used video-microscopy to observe the behavior of liquid crystal (LC) droplets within nematic droplet-polymer (NCAP) films as the droplets respond to an applied electric field. The textures observed at intermediate fields yielded information about the dynamics of liquid crystal orientation within the droplets. The nematic droplet-polymer films had low LC content (less than 1 percent) to allow the observation of individual droplets in the 2-6 micrometer size range. The aqueous emulsification technique was used to prepare the films, as it allows the straightforward preparation of low-LC-content films with a controlled droplet size range. Standard electro-optical (E-O) tests were also performed on the films, allowing us to correlate single-droplet behavior with that of the film as a whole. Hysteresis measured in E-O tests was visually confirmed by droplet orientation dynamics; a film which had high hysteresis in E-O tests exhibited distinctly different LC orientations within the droplets when ramped up in voltage than when ramped down. Ramping the applied voltage to well above saturation resulted in some droplets becoming 'stuck' in a new droplet structure, which could be made to revert back to bipolar with high-voltage pulses or with heat.

  12. Integrating Satellite, Radar and Surface Observation with Time and Space Matching

    NASA Astrophysics Data System (ADS)

    Ho, Y.; Weber, J.

    2015-12-01

    The Integrated Data Viewer (IDV) from Unidata is a Java™-based software framework for analyzing and visualizing geoscience data. It brings together the ability to display and work with satellite imagery, gridded data, surface observations, balloon soundings, NWS WSR-88D Level II and Level III RADAR data, and NOAA National Profiler Network data, all within a unified interface. Applying time and space matching to satellite, radar, and surface observation datasets automatically synchronizes the displays from different data sources and spatially subsets them to match the display area in the view window. These features allow IDV users to effectively integrate these observations and provide three-dimensional views of a weather system to better understand the underlying dynamics and physics of weather phenomena.
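
    The time-matching step described can be sketched as nearest-neighbor pairing of observation timestamps within a tolerance (a hypothetical Python illustration only; the IDV itself is written in Java and its matching logic also handles spatial subsetting and animation synchronization):

```python
from bisect import bisect_left

def match_nearest_time(times_a, times_b, tolerance):
    """Pair each timestamp in times_a with the nearest timestamp in the
    sorted list times_b; pairs farther apart than `tolerance` are dropped.
    Binary search keeps this O(len(a) * log(len(b)))."""
    matches = []
    for t in times_a:
        i = bisect_left(times_b, t)
        # Nearest neighbor is either just before or just at/after t.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(times_b)]
        j = min(candidates, key=lambda j: abs(times_b[j] - t))
        if abs(times_b[j] - t) <= tolerance:
            matches.append((t, times_b[j]))
    return matches
```

    With radar volume scans every ~5 minutes and satellite images every ~15, a tolerance on the order of half the coarser interval keeps each display frame showing data from approximately the same moment.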

  13. Auditory and visual connectivity gradients in frontoparietal cortex

    PubMed Central

    Hellyer, Peter J.; Wise, Richard J. S.; Leech, Robert

    2016-01-01

    Abstract A frontoparietal network of brain regions is often implicated in both auditory and visual information processing. Although it is possible that the same set of multimodal regions subserves both modalities, there is increasing evidence that there is a differentiation of sensory function within frontoparietal cortex. Magnetic resonance imaging (MRI) in humans was used to investigate whether different frontoparietal regions showed intrinsic biases in connectivity with visual or auditory modalities. Structural connectivity was assessed with diffusion tractography and functional connectivity was tested using functional MRI. A dorsal–ventral gradient of function was observed, where connectivity with visual cortex dominates dorsal frontal and parietal connections, while connectivity with auditory cortex dominates ventral frontal and parietal regions. A gradient was also observed along the posterior–anterior axis, although in opposite directions in prefrontal and parietal cortices. The results suggest that the location of neural activity within frontoparietal cortex may be influenced by these intrinsic biases toward visual and auditory processing. Thus, the location of activity in frontoparietal cortex may be influenced as much by stimulus modality as by the cognitive demands of a task. It was concluded that stimulus modality was spatially encoded throughout frontal and parietal cortices, and it was speculated that such an arrangement allows for top–down modulation of modality‐specific information to occur within higher‐order cortex. This could provide a potentially faster and more efficient pathway by which top–down selection between sensory modalities could occur, by constraining modulations to within frontal and parietal regions rather than requiring long‐range connections to sensory cortices. Hum Brain Mapp 38:255–270, 2017. © 2016 Wiley Periodicals, Inc. PMID:27571304

  14. A Microfluidic-Enabled Mechanical Microcompressor for the Immobilization of Live Single- and Multi-Cellular Specimens

    PubMed Central

    Yan, Yingjun; Jiang, Liwei; Aufderheide, Karl J.; Wright, Gus A.; Terekhov, Alexander; Costa, Lino; Qin, Kevin; McCleery, W. Tyler; Fellenstein, John J.; Ustione, Alessandro; Robertson, J. Brian; Johnson, Carl Hirschie; Piston, David W.; Hutson, M. Shane; Wikswo, John P.; Hofmeister, William; Janetopoulos, Chris

    2014-01-01

    A microcompressor is a precision mechanical device that flattens and immobilizes living cells and small organisms for optical microscopy, allowing enhanced visualization of sub-cellular structures and organelles. We have developed an easily fabricated device, which can be equipped with microfluidics, permitting the addition of media or chemicals during observation. This device can be used on both upright and inverted microscopes. The apparatus permits micrometer precision flattening for nondestructive immobilization of specimens as small as a bacterium, while also accommodating larger specimens, such as Caenorhabditis elegans, for long-term observations. The compressor mount is removable and allows easy specimen addition and recovery for later observation. Several customized specimen beds can be incorporated into the base. To demonstrate the capabilities of the device, we have imaged numerous cellular events in several protozoan species, in yeast cells, and in Drosophila melanogaster embryos. We have been able to document previously unreported events, and also perform photobleaching experiments, in conjugating Tetrahymena thermophila. PMID:24444078

  15. Farris-Tang retractor in optic nerve sheath decompression surgery.

    PubMed

    Spiegel, Jennifer A; Sokol, Jason A; Whittaker, Thomas J; Bernard, Benjamin; Farris, Bradley K

    2016-01-01

    Our purpose is to introduce the use of the Farris-Tang retractor in optic nerve sheath decompression surgery. The procedure of optic nerve sheath fenestration was reviewed at our tertiary care teaching hospital, including the use of the Farris-Tang retractor. Pseudotumor cerebri is a syndrome of increased intracranial pressure without a clear cause. Surgical treatment can be effective in cases in which medical therapy has failed and disc swelling with visual field loss progresses. Optic nerve sheath decompression surgery (ONDS) involves cutting slits or windows in the optic nerve sheath to allow cerebrospinal fluid to escape, reducing the pressure around the optic nerve. We introduce the Farris-Tang retractor, which allows for excellent visualization of the optic nerve sheath during this surgery, facilitating the fenestration of the sheath and visualization of the subsequent cerebrospinal fluid egress. Utilizing a medial conjunctival approach, the Farris-Tang retractor allows for easy retraction of the medial orbital tissue and reduces the incidence of orbital fat protrusion through Tenon's capsule. The retractor allows safe, easy, and effective access to the optic nerve with good visualization in optic nerve sheath decompression surgery. This, in turn, allows for greater surgical efficiency and positive patient outcomes.

  16. Visualizing Uncertainty for Probabilistic Weather Forecasting based on Reforecast Analogs

    NASA Astrophysics Data System (ADS)

    Pelorosso, Leandro; Diehl, Alexandra; Matković, Krešimir; Delrieux, Claudio; Ruiz, Juan; Gröeller, M. Eduard; Bruckner, Stefan

    2016-04-01

    Numerical weather forecasts are prone to uncertainty coming from inaccuracies in the initial and boundary conditions and lack of precision in numerical models. Ensembles of forecasts partially address these problems by considering several runs of the numerical model. Each forecast is generated with different initial and boundary conditions and different model configurations [GR05]. The ensembles can be expressed as probabilistic forecasts, which have proven to be very effective in decision-making processes [DE06]. An ensemble of forecasts represents only some of the possible future atmospheric states, usually underestimating the degree of uncertainty in the predictions [KAL03, PH06]. Hamill and Whitaker [HW06] introduced the "Reforecast Analog Regression" (RAR) technique to overcome the limitations of ensemble forecasting. This technique produces probabilistic predictions based on the analysis of historical forecasts and observations. Visual analytics provides tools for processing, visualizing, and exploring data to gain new insights and discover hidden information patterns in an interactive exchange between the user and the application [KMS08]. In this work, we introduce Albero, a visual analytics solution for probabilistic weather forecasting based on the RAR technique. Albero targets at least two different types of users: "forecasters", meteorologists working in operational weather forecasting, and "researchers", who work on the construction of numerical prediction models. Albero is an efficient tool for analyzing precipitation forecasts, allowing forecasters to make and communicate quick decisions. Our solution facilitates the analysis of a set of probabilistic forecasts, associated statistical data, observations, and uncertainty. A dashboard with small multiples of probabilistic forecasts allows forecasters to analyze at a glance the distribution of probabilities as a function of time, space, and magnitude. 
It provides the user with a more accurate measure of forecast uncertainty that could result in better decision-making. It offers different levels of abstraction to help with the recalibration of the RAR method. It also has an inspection tool that displays the selected analogs, their observations, and statistical data, giving users access to the inner parts of the method and unveiling hidden information. References [GR05] GNEITING T., RAFTERY A. E.: Weather forecasting with ensemble methods. Science 310, 5746, 248-249, 2005. [KAL03] KALNAY E.: Atmospheric modeling, data assimilation and predictability. Cambridge University Press, 2003. [PH06] PALMER T., HAGEDORN R.: Predictability of weather and climate. Cambridge University Press, 2006. [HW06] HAMILL T. M., WHITAKER J. S.: Probabilistic quantitative precipitation forecasts based on reforecast analogs: Theory and application. Monthly Weather Review 134, 11, 3209-3229, 2006. [DE06] DEITRICK S., EDSALL R.: The influence of uncertainty visualization on decision making: An empirical evaluation. Springer, 2006. [KMS08] KEIM D. A., MANSMANN F., SCHNEIDEWIND J., THOMAS J., ZIEGLER H.: Visual analytics: Scope and challenges. Springer, 2008.
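    The analog idea at the core of the RAR technique can be sketched as follows (a simplified, hypothetical illustration, not the authors' implementation: the distance measure, variable names, and synthetic data are assumptions). The closest historical forecasts to today's forecast are found, and their verifying observations yield an exceedance probability.

```python
import numpy as np

def analog_probability(current_forecast, past_forecasts, past_observations,
                       n_analogs=50, threshold=1.0):
    """Crude analog-based probabilistic forecast: find the past forecasts most
    similar to today's forecast and use their verifying observations to
    estimate the probability of exceeding a precipitation threshold."""
    distances = np.abs(past_forecasts - current_forecast)
    idx = np.argsort(distances)[:n_analogs]        # indices of closest analogs
    analog_obs = past_observations[idx]
    return float(np.mean(analog_obs > threshold))  # exceedance probability

# Synthetic reforecast archive (hypothetical numbers, mm of precipitation)
rng = np.random.default_rng(0)
past_fc = rng.gamma(2.0, 2.0, size=5000)
past_obs = past_fc + rng.normal(0.0, 1.5, size=5000)  # noisy "truth"
p = analog_probability(6.0, past_fc, past_obs, threshold=5.0)
```

    An operational implementation would match multi-variable forecast patterns over a spatial window rather than a single scalar, but the probability estimate is formed the same way.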

  17. Satellite and airborne oil spill remote sensing: State of the art and application to the BP DeepWater Horizon oil spill

    USGS Publications Warehouse

    Leifer, I.; Clark, R.; Jones, C.; Holt, B.; Svejkovsky, J.; Swayze, G.

    2011-01-01

    The vast, persistent, and unconstrained oil release from the DeepWater Horizon (DWH) challenged the spill response, which required accurate quantitative oil assessment at synoptic and operational scales. Experienced observers are the mainstay of oil spill response. Key limitations are weather, scene illumination geometry, and few trained observers, leading to potential observer bias. Aiding the response was extensive passive and active satellite and airborne remote sensing, including intelligent system augmentation, reviewed herein. Oil slick appearance strongly depends on many factors, such as emulsion composition and scene geometry, yielding false positives and great thickness uncertainty. Oil thicknesses and the oil-to-water ratios for thick slicks were derived quantitatively with a new spectral library approach based on the shape and depth of spectral features related to C-H vibration bands. The approach used near-infrared imaging spectroscopy data from the AVIRIS (Airborne Visible/InfraRed Imaging Spectrometer) instrument on the NASA ER-2 stratospheric airplane. Extrapolation to the total slick used MODIS satellite visual-spectrum broadband data, which observes sunglint reflection from surface slicks, i.e., indicates the presence of oil and/or surfactant slicks. Oil slick emissivity is less than seawater's, allowing MODIS thermal infrared (TIR) nighttime identification; however, water temperature variations can cause false positives. Some strong emissivity features near 6.7 and 9.7 µm could be analyzed as for the AVIRIS short-wave infrared features, but require high-spectral-resolution data. TIR spectral trends can allow fresh/weathered oil discrimination. Satellite Synthetic Aperture Radar (SSAR) provided synoptic data under all-sky conditions by observing oil dampening of capillary waves; however, SSAR typically cannot discriminate thick from thin oil slicks. Airborne UAVSAR's significantly greater signal-to-noise ratio and fine spatial resolution allowed successful mapping of oil slick thickness-related patterns. Laser-induced fluorescence (LIF) can quantify oil thicknesses by Raman scattering line distortions, but saturates for >20-µm-thick oil and depends on oil optical characteristics and sea state. Combined with laser bathymetry, LIF can provide submerged oil remote sensing.
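    The feature-depth measurement underlying such spectral-library approaches can be illustrated with a standard continuum-removal sketch (hypothetical spectrum, wavelengths, and function names; not the actual AVIRIS processing chain): a straight continuum is drawn across an absorption feature's shoulders, and the depth is how far the reflectance at the band center dips below it.

```python
import numpy as np

def band_depth(wavelengths, reflectance, left, center, right):
    """Continuum-removed depth of an absorption feature: interpolate a straight
    continuum between two shoulder wavelengths and measure the relative dip of
    the reflectance at the band center below it (0 = no feature)."""
    r = np.interp([left, center, right], wavelengths, reflectance)
    continuum = r[0] + (r[2] - r[0]) * (center - left) / (right - left)
    return 1.0 - r[1] / continuum

# Hypothetical spectrum with a dip near 2.3 µm (the C-H vibration band region)
wl = np.linspace(2.0, 2.5, 51)
refl = 0.4 - 0.1 * np.exp(-((wl - 2.3) ** 2) / (2 * 0.02 ** 2))
depth = band_depth(wl, refl, left=2.2, center=2.3, right=2.4)
```

    In the thickness-retrieval context, measured depths like this are compared against a library of depths computed for known oil thicknesses and oil-to-water ratios.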

  18. 6D Visualization of Multidimensional Data by Means of Cognitive Technology

    NASA Astrophysics Data System (ADS)

    Vitkovskiy, V.; Gorohov, V.; Komarinskiy, S.

    2010-12-01

    On the basis of the cognitive-graphics concept, we have developed a software system for visualization and analysis. It is intended to train and sharpen the researcher's intuition, to raise interest and motivation for creative scientific cognition, and to support an interactive dialogue with the problem under study. The Space Hedgehog system is the next step in cognitive tools for multidimensional data analysis. Its technique for cognitive 6D visualization of multidimensional data was developed on the basis of research into cognitive visualization and the associated technology development. Space Hedgehog allows direct dynamic visualization of 6D objects and draws on the experience gained in creating the Space Walker program and its applications.

  19. MOPEX: a software package for astronomical image processing and visualization

    NASA Astrophysics Data System (ADS)

    Makovoz, David; Roby, Trey; Khan, Iffat; Booth, Hartley

    2006-06-01

    We present MOPEX - a software package for astronomical image processing and display. The package is a combination of command-line driven image processing software written in C/C++ with a Java-based GUI. The main image processing capabilities include creating mosaic images, image registration, background matching, and point source extraction, as well as a number of minor image processing tasks. The combination of image processing and display capabilities allows for a much more intuitive and efficient way of performing image processing. The GUI allows the control over image processing and display to be closely intertwined. Parameter settings, validation, and specific processing options are entered by the user through a set of intuitive dialog boxes. Visualization feeds back into further processing by providing prompt feedback on the processing results. The GUI also allows for further analysis by accessing and displaying data from existing image and catalog servers using a virtual observatory approach. Even though MOPEX was originally designed for the Spitzer Space Telescope mission, many of its functions are of general usefulness and can be used for working with existing astronomical data and for new missions. The software used in the package has undergone intensive testing and benefited greatly from effective software reuse. The visualization part has been used for observation planning for both the Spitzer and Herschel Space Telescopes as part of the tool Spot. The visualization capabilities of Spot have been enhanced and integrated with the image processing functionality of the command-line driven MOPEX. The image processing software is used in the Spitzer automated pipeline processing, which has been in operation for nearly 3 years. The image processing capabilities have also been tested in off-line processing by numerous astronomers at various institutions around the world. The package is multi-platform and includes automatic update capabilities. 
The software package has been developed by a small group of software developers and scientists at the Spitzer Science Center. It is available for distribution at the Spitzer Science Center web page.

  20. The Art of Observation: A Pedagogical Framework.

    PubMed

    Wellbery, Caroline; McAteer, Rebecca A

    2015-12-01

    Observational skills, honed through experience with the literary and visual arts, bring together in a timely manner many of the goals of the medical humanities, providing thematic cohesion through the act of seeing while aiming to advance clinical skills through a unified practice. In an arts observation pedagogy, nature writing serves as an apt model for precise, clinically relevant linguistic noticing because meticulous attention to the natural world involves scientific precision; additionally, a number of visual metaphors employed in medicine are derived from close observation of the natural world. Close reading reinforces observational skills as part of integrative, multidisciplinary clinical practice. Literary precision provides an educational bridge to recognizing the importance of detail in the clinical realm. In weighing multiple perspectives, observation applied to practice helps learners understand the nuances of the role of witness, activating reflection consonant with the viewer's professional identity. The realization that seeing is highly filtered through the observer's values allows the act of observation to come under scrutiny, opening the observer's gaze to disturbance and challenging the values and precepts of the prevailing medical culture. Application of observational skills can, for example, help observers recognize and address noxious effects of the built environment. As learners describe what they see, they also develop the communication skills needed to articulate both problems and possible improvements within their expanding sphere of influence. The ability to craft this speech as public narrative can lead to interventions with positive impacts on physicians, their colleagues, and patients.

  1. Digital photorefraction

    NASA Astrophysics Data System (ADS)

    Costa, Manuel F. M.; Jorge, Jorge M.

    1998-01-01

    The early evaluation of the visual status of human infants is critically important. It is of utmost importance to the development of the child's visual system that she perceives clear, focused retinal images. Furthermore, if refractive problems are not corrected in due time, amblyopia may occur. Photorefraction is a non-invasive clinical tool that is particularly convenient for this population. Qualitative or semi-quantitative information about refractive errors, accommodation, strabismus, amblyogenic factors, and some pathologies (cataracts) can thus be easily obtained. The photorefraction setup we established, using recent advances in imaging devices, image processing, and fiber optics, allows the implementation of both the isotropic and eccentric photorefraction approaches. Essentially, both methods consist of delivering a light beam into the eyes: the beam is refracted by the ocular media, strikes the retina (in or out of focus), reflects off it, and is collected by a camera. The system is formed by one CCD color camera and a light source. A beam splitter in front of the camera's objective allows coaxial illumination and observation. An optomechanical system also allows eccentric illumination. The light source is of the flash type and is synchronized with the camera's image acquisition. The camera's image is digitized and displayed in real time. Image processing routines are applied for image enhancement and feature extraction.

  2. Digital photorefraction

    NASA Astrophysics Data System (ADS)

    Costa, Manuel F.; Jorge, Jorge M.

    1997-12-01

    The early evaluation of the visual status of human infants is critically important. It is of utmost importance to the development of the child's visual system that she perceives clear, focused retinal images. Furthermore, if refractive problems are not corrected in due time, amblyopia may occur. Photorefraction is a non-invasive clinical tool that is particularly convenient for this population. Qualitative or semi-quantitative information about refractive errors, accommodation, strabismus, amblyogenic factors, and some pathologies (cataracts) can thus be easily obtained. The photorefraction setup we established, using recent advances in imaging devices, image processing, and fiber optics, allows the implementation of both the isotropic and eccentric photorefraction approaches. Essentially, both methods consist of delivering a light beam into the eyes: the beam is refracted by the ocular media, strikes the retina (in or out of focus), reflects off it, and is collected by a camera. The system is formed by one CCD color camera and a light source. A beam splitter in front of the camera's objective allows coaxial illumination and observation. An optomechanical system also allows eccentric illumination. The light source is of the flash type and is synchronized with the camera's image acquisition. The camera's image is digitized and displayed in real time. Image processing routines are applied for image enhancement and feature extraction.

  3. Spatial and object-based attention modulates broadband high-frequency responses across the human visual cortical hierarchy.

    PubMed

    Davidesco, Ido; Harel, Michal; Ramot, Michal; Kramer, Uri; Kipervasser, Svetlana; Andelman, Fani; Neufeld, Miri Y; Goelman, Gadi; Fried, Itzhak; Malach, Rafael

    2013-01-16

    One of the puzzling aspects in the visual attention literature is the discrepancy between electrophysiological and fMRI findings: whereas fMRI studies reveal strong attentional modulation in the earliest visual areas, single-unit and local field potential studies yielded mixed results. In addition, it is not clear to what extent spatial attention effects extend from early to high-order visual areas. Here we addressed these issues using electrocorticography recordings in epileptic patients. The patients performed a task that allowed simultaneous manipulation of both spatial and object-based attention. They were presented with composite stimuli, consisting of a small object (face or house) superimposed on a large one, and in separate blocks, were instructed to attend one of the objects. We found a consistent increase in broadband high-frequency (30-90 Hz) power, but not in visual evoked potentials, associated with spatial attention starting with V1/V2 and continuing throughout the visual hierarchy. The magnitude of the attentional modulation was correlated with the spatial selectivity of each electrode and its distance from the occipital pole. Interestingly, the latency of the attentional modulation showed a significant decrease along the visual hierarchy. In addition, electrodes placed over high-order visual areas (e.g., fusiform gyrus) showed both effects of spatial and object-based attention. Overall, our results help to reconcile previous observations of discrepancy between fMRI and electrophysiology. They also imply that spatial attention effects can be found both in early and high-order visual cortical areas, in parallel with their stimulus tuning properties.
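    A simplified illustration of a broadband high-frequency power measure of the kind described above (synthetic one-second signal and hypothetical function names; not the authors' analysis pipeline): power is averaged over the 30-90 Hz band of the spectrum, so an attention-related increase in that band raises the measure while low-frequency activity leaves it untouched.

```python
import numpy as np

def broadband_power(signal, fs, f_lo=30.0, f_hi=90.0):
    """Mean spectral power in a frequency band, estimated from the FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return float(psd[band].mean())

fs = 512
t = np.arange(fs) / fs                 # 1 s of synthetic signal
gamma = np.sin(2 * np.pi * 60 * t)     # 60 Hz component inside the band
alpha = np.sin(2 * np.pi * 10 * t)     # 10 Hz component outside the band
p_attend = broadband_power(gamma + alpha, fs)
p_baseline = broadband_power(alpha, fs)
```

    Real electrocorticography analyses typically use windowed spectral estimates on trial epochs, but the band-averaging step is the same.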

  4. Using the virtual reality device Oculus Rift for neuropsychological assessment of visual processing capabilities

    PubMed Central

    Foerster, Rebecca M.; Poth, Christian H.; Behler, Christian; Botsch, Mario; Schneider, Werner X.

    2016-01-01

    Neuropsychological assessment of human visual processing capabilities strongly depends on visual testing conditions, including room lighting, stimuli, and viewing distance. This limits standardization, threatens reliability, and prevents the assessment of core visual functions such as visual processing speed. Increasingly available virtual reality devices make it possible to address these problems. One such device is the portable, light-weight, and easy-to-use Oculus Rift. It is head-mounted and covers the entire visual field, thereby shielding and standardizing the visual stimulation. A fundamental prerequisite for using the Oculus Rift in neuropsychological assessment is sufficient test-retest reliability. Here, we compare the test-retest reliabilities of Bundesen’s visual processing components (visual processing speed, threshold of conscious perception, capacity of visual working memory) as measured with the Oculus Rift and a standard CRT computer screen. Our results show that the Oculus Rift measures the processing components as reliably as the standard CRT. This means that the Oculus Rift is applicable for standardized and reliable assessment and diagnosis of elementary cognitive functions in laboratory and clinical settings. The Oculus Rift thus provides the opportunity to compare visual processing components between individuals and institutions and to establish statistical norm distributions. PMID:27869220

  5. Using the virtual reality device Oculus Rift for neuropsychological assessment of visual processing capabilities.

    PubMed

    Foerster, Rebecca M; Poth, Christian H; Behler, Christian; Botsch, Mario; Schneider, Werner X

    2016-11-21

    Neuropsychological assessment of human visual processing capabilities strongly depends on visual testing conditions, including room lighting, stimuli, and viewing distance. This limits standardization, threatens reliability, and prevents the assessment of core visual functions such as visual processing speed. Increasingly available virtual reality devices make it possible to address these problems. One such device is the portable, light-weight, and easy-to-use Oculus Rift. It is head-mounted and covers the entire visual field, thereby shielding and standardizing the visual stimulation. A fundamental prerequisite for using the Oculus Rift in neuropsychological assessment is sufficient test-retest reliability. Here, we compare the test-retest reliabilities of Bundesen's visual processing components (visual processing speed, threshold of conscious perception, capacity of visual working memory) as measured with the Oculus Rift and a standard CRT computer screen. Our results show that the Oculus Rift measures the processing components as reliably as the standard CRT. This means that the Oculus Rift is applicable for standardized and reliable assessment and diagnosis of elementary cognitive functions in laboratory and clinical settings. The Oculus Rift thus provides the opportunity to compare visual processing components between individuals and institutions and to establish statistical norm distributions.

  6. Reef-coral proteins as visual, non-destructive reporters for plant transformation.

    PubMed

    Wenck, A; Pugieux, C; Turner, M; Dunn, M; Stacy, C; Tiozzo, A; Dunder, E; van Grinsven, E; Khan, R; Sigareva, M; Wang, W C; Reed, J; Drayton, P; Oliver, D; Trafford, H; Legris, G; Rushton, H; Tayab, S; Launis, K; Chang, Y-F; Chen, D-F; Melchers, L

    2003-11-01

    Recently, five novel fluorescent proteins have been isolated from non-bioluminescent species of reef-coral organisms and have been made available through ClonTech. They are AmCyan, AsRed, DsRed, ZsGreen and ZsYellow. These proteins are valuable as reporters for transformation because they do not require a substrate or external co-factor to emit fluorescence and can be tested in vivo without destruction of the tissue under study. We have evaluated them in a large range of plants, both monocots and dicots, and our results indicate that they are valuable reporting tools for transformation in a wide variety of crops. We report here their successful expression in wheat, maize, barley, rice, banana, onion, soybean, cotton, tobacco, potato and tomato. Transient expression could be observed as early as 24 h after DNA delivery in some cases, allowing for very clear visualization of individually transformed cells. Stable transgenic events were generated, using mannose, kanamycin or hygromycin selection. Transgenic plants were phenotypically normal, showing a wide range of fluorescence levels, and were fertile. Expression of AmCyan, ZsGreen and AsRed was visible in maize T1 seeds, allowing visual segregation to more than 99% accuracy. The excitation and emission wavelengths of some of these proteins are significantly different; the difference is enough for the simultaneous visualization of cells transformed with more than one of the fluorescent proteins. These proteins will become useful tools for transformation optimization and other studies. The wide variety of plants successfully tested demonstrates that these proteins will potentially find broad use in plant biology.

  7. Direct Visualization of Exciton Reequilibration in the LH1 and LH2 Complexes of Rhodobacter sphaeroides by Multipulse Spectroscopy

    PubMed Central

    Cohen Stuart, Thomas A.; Vengris, Mikas; Novoderezhkin, Vladimir I.; Cogdell, Richard J.; Hunter, C. Neil; van Grondelle, Rienk

    2011-01-01

    The dynamics of the excited states of the light-harvesting complexes LH1 and LH2 of Rhodobacter sphaeroides are governed, mainly, by the excitonic nature of these ring-systems. In a pump-dump-probe experiment, the first pulse promotes LH1 or LH2 to its excited state and the second pulse dumps a portion of the excited state. By selective dumping, we can disentangle the dynamics normally hidden in the excited-state manifold. We find that by using this multiple-excitation technique we can visualize a 400-fs reequilibration reflecting relaxation between the two lowest exciton states that cannot be directly explored by conventional pump-probe. An oscillatory feature is observed within the exciton reequilibration, which is attributed to a coherent motion of a vibrational wavepacket with a period of ∼150 fs. Our disordered exciton model allows a quantitative interpretation of the observed reequilibration processes occurring in these antennas. PMID:21539791

  8. Direct visualization of exciton reequilibration in the LH1 and LH2 complexes of Rhodobacter sphaeroides by multipulse spectroscopy.

    PubMed

    Cohen Stuart, Thomas A; Vengris, Mikas; Novoderezhkin, Vladimir I; Cogdell, Richard J; Hunter, C Neil; van Grondelle, Rienk

    2011-05-04

    The dynamics of the excited states of the light-harvesting complexes LH1 and LH2 of Rhodobacter sphaeroides are governed, mainly, by the excitonic nature of these ring-systems. In a pump-dump-probe experiment, the first pulse promotes LH1 or LH2 to its excited state and the second pulse dumps a portion of the excited state. By selective dumping, we can disentangle the dynamics normally hidden in the excited-state manifold. We find that by using this multiple-excitation technique we can visualize a 400-fs reequilibration reflecting relaxation between the two lowest exciton states that cannot be directly explored by conventional pump-probe. An oscillatory feature is observed within the exciton reequilibration, which is attributed to a coherent motion of a vibrational wavepacket with a period of ∼150 fs. Our disordered exciton model allows a quantitative interpretation of the observed reequilibration processes occurring in these antennas. Copyright © 2011 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  9. Bayesian modeling of cue interaction: bistability in stereoscopic slant perception.

    PubMed

    van Ee, Raymond; Adams, Wendy J; Mamassian, Pascal

    2003-07-01

    Our two eyes receive different views of a visual scene, and the resulting binocular disparities enable us to reconstruct its three-dimensional layout. However, the visual environment is also rich in monocular depth cues. We examined the resulting percept when observers view a scene in which there are large conflicts between the surface slant signaled by binocular disparities and the slant signaled by monocular perspective. For a range of disparity-perspective cue conflicts, many observers experience bistability: They are able to perceive two distinct slants and to flip between the two percepts in a controlled way. We present a Bayesian model that describes the quantitative aspects of perceived slant on the basis of the likelihoods of both perspective and disparity slant information combined with prior assumptions about the shape and orientation of objects in the scene. Our Bayesian approach can be regarded as an overarching framework that allows researchers to study all aspects of cue integration, including perceptual decisions, in a unified manner.
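    As background, the standard Bayesian fusion of two Gaussian cues, in which each cue is weighted by its inverse variance, can be sketched as follows (hypothetical slants and variances; the paper's full model also includes prior assumptions and yields bistability for large conflicts, where this simple averaging breaks down):

```python
def fuse_cues(mu_disparity, var_disparity, mu_perspective, var_perspective):
    """Maximum-likelihood fusion of two Gaussian slant cues: each cue is
    weighted by its inverse variance (its reliability), and the fused
    estimate is more precise than either cue alone."""
    w_d = 1.0 / var_disparity
    w_p = 1.0 / var_perspective
    mu = (w_d * mu_disparity + w_p * mu_perspective) / (w_d + w_p)
    var = 1.0 / (w_d + w_p)
    return mu, var

# Hypothetical conflict: disparity signals 30 deg slant (reliable, var 4),
# perspective signals 10 deg (less reliable, var 16)
mu, var = fuse_cues(30.0, 4.0, 10.0, 16.0)
```

    The fused slant lands nearer the more reliable disparity cue; a single intermediate percept like this is what simple fusion predicts for small conflicts.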

  10. Visual reaction time for chromaticity changes at constant luminance in different color representation systems

    NASA Astrophysics Data System (ADS)

    Jimenez del Barco, L.; Jimenez, J. R.; Rubino, M.; Diaz, J. A.

    1996-09-01

    The results obtained by different authors show that when a color stimulus changes in both luminance and chromaticity, the visual reaction time (VRT) of an observer in detecting this change depends only on the luminance change and is governed by Pieron's law. In the present work, we evaluate the VRT needed by an observer to detect the chromaticity difference between an adapting and a variable stimulus. For this, we used the experimental method of hue substitution, which allows us to keep the luminance channel constant and thereby study the temporal response to changes in chromaticity alone. The experiments were carried out with a CRT color monitor, and the experimental results are expressed in different color-representation systems. The UCS-CIE 1964 (U*, V*, W*) and CIELUV systems show good correlations between the VRT and the chromaticity difference expressed in those systems, with the VRT fitting an expression that follows Pieron's law: VRT - VRT0 = k(ΔE)^(-β).
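    A Pieron's-law relation of this form can be evaluated directly; a minimal sketch with hypothetical parameter values (the paper reports its own fitted constants) shows the characteristic behavior, with reaction time falling toward an asymptote as the chromaticity difference grows:

```python
def pieron_vrt(delta_e, vrt0, k, beta):
    """Pieron's-law reaction time: VRT = vrt0 + k * delta_e**(-beta).
    Large stimulus differences drive VRT down toward the asymptote vrt0."""
    return vrt0 + k * delta_e ** (-beta)

# Hypothetical parameters: 250 ms asymptote, k = 400, beta = 0.8
vrt_small = pieron_vrt(2.0, 250.0, 400.0, 0.8)   # small color difference: slow
vrt_large = pieron_vrt(20.0, 250.0, 400.0, 0.8)  # large color difference: fast
```

    Fitting vrt0, k, and beta to measured VRT-vs-ΔE data is what the correlation analysis in the abstract amounts to.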

  11. Boosting the Motor Outcome of the Untrained Hand by Action Observation: Mirror Visual Feedback, Video Therapy, or Both Combined-What Is More Effective?

    PubMed

    Bähr, Florian; Ritter, Alexander; Seidel, Gundula; Puta, Christian; Gabriel, Holger H W; Hamzei, Farsin

    2018-01-01

    Action observation (AO) allows access to a network that processes visuomotor and sensorimotor inputs and is believed to be involved in observational learning of motor skills. We conducted three consecutive experiments to examine the boosting effect of AO on the motor outcome of the untrained hand by either mirror visual feedback (MVF), video therapy (VT), or a combination of both. In the first experiment, healthy participants trained either with MVF or without mirror feedback, while in the second experiment, participants either trained with VT or observed animal videos. In the third experiment, participants first observed video clips, followed by either training with MVF or training without mirror feedback. The outcomes for the untrained hand were quantified by scores from five motor tasks. The results demonstrated that MVF and VT significantly increase the motor performance of the untrained hand through the use of AO. We found that MVF was the most effective approach for increasing the performance of the target effector. In contrast, the combination of MVF and VT turned out to be less effective from a clinical perspective. The results suggest that action-related motor competence of the untrained hand is acquired through both mirror-based and video-based AO.

  12. Boosting the Motor Outcome of the Untrained Hand by Action Observation: Mirror Visual Feedback, Video Therapy, or Both Combined—What Is More Effective?

    PubMed Central

    Ritter, Alexander; Seidel, Gundula; Puta, Christian; Gabriel, Holger H. W.; Hamzei, Farsin

    2018-01-01

    Action observation (AO) allows access to a network that processes visuomotor and sensorimotor inputs and is believed to be involved in observational learning of motor skills. We conducted three consecutive experiments to examine the boosting effect of AO on the motor outcome of the untrained hand by either mirror visual feedback (MVF), video therapy (VT), or a combination of both. In the first experiment, healthy participants trained either with MVF or without mirror feedback, while in the second experiment, participants either trained with VT or observed animal videos. In the third experiment, participants first observed video clips, followed by either training with MVF or training without mirror feedback. The outcomes for the untrained hand were quantified by scores from five motor tasks. The results demonstrated that MVF and VT significantly increase the motor performance of the untrained hand through the use of AO. We found that MVF was the most effective approach for increasing the performance of the target effector. In contrast, the combination of MVF and VT turned out to be less effective from a clinical perspective. The results suggest that action-related motor competence of the untrained hand is acquired through both mirror-based and video-based AO. PMID:29849570

  13. Web-based interactive visualization in a Grid-enabled neuroimaging application using HTML5.

    PubMed

    Siewert, René; Specovius, Svenja; Wu, Jie; Krefting, Dagmar

    2012-01-01

    Interactive visualization and correction of intermediate results are required in many medical image analysis pipelines. To allow such interaction during the remote execution of compute- and data-intensive applications, new features of HTML5 are used. They allow for transparent integration of user interaction into Grid- or Cloud-enabled scientific workflows. Both 2D and 3D visualization and data manipulation can be performed through a scientific gateway without the need to install specific software or web browser plugins. The possibilities of web-based visualization are presented along the FreeSurfer pipeline, a popular compute- and data-intensive software tool for quantitative neuroimaging.

  14. Quantifying the effect of colorization enhancement on mammogram images

    NASA Astrophysics Data System (ADS)

    Wojnicki, Paul J.; Uyeda, Elizabeth; Micheli-Tzanakou, Evangelia

    2002-04-01

    Current methods of radiological display provide only grayscale images of mammograms. Limiting the image space to grayscale provides only luminance differences and textures as cues for object recognition within the image. However, color can be an important and significant cue in the detection of shapes and objects. Increasing detection ability allows the radiologist to interpret the images in more detail, improving object recognition and diagnostic accuracy. Color detection experiments using our stimulus system have demonstrated that an observer can detect an average of only 140 levels of grayscale. An optimally colorized image can allow a user to distinguish 250-1000 different levels, hence increasing potential image feature detection by 2-7 times. By implementing a colorization map that follows the luminance map of the original grayscale image, the luminance profile is preserved and color is isolated as the enhancement mechanism. The effects of this enhancement mechanism on the shape, frequency composition, and statistical characteristics of the Visual Evoked Potential (VEP) are analyzed and presented, providing a quantitative measure of the colorization's effectiveness.
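    The luminance-preserving colorization described above can be sketched as a mapping from gray level to RGB that keeps the (Rec. 601) luma equal to the original gray value, so color is the only added cue. The black-to-yellow-to-white path is one illustrative choice, not the study's actual map.

```python
import numpy as np

# Luminance-preserving colorization sketch: each gray level g in [0, 1] maps
# to an RGB triple whose Rec. 601 luma equals g, so color is the only added
# cue.  The black -> yellow -> white path is an illustrative choice only.

LUMA = np.array([0.299, 0.587, 0.114])   # Rec. 601 luma weights
Y_YELLOW = LUMA[0] + LUMA[1]             # luma of pure yellow (~0.886)

def colorize(g):
    """Map gray levels in [0, 1] to RGB triples with identical luma."""
    g = np.asarray(g, dtype=float)
    t_low = np.clip(g / Y_YELLOW, 0.0, 1.0)                        # black -> yellow
    t_high = np.clip((g - Y_YELLOW) / (1.0 - Y_YELLOW), 0.0, 1.0)  # yellow -> white
    r = np.where(g <= Y_YELLOW, t_low, 1.0)
    gr = np.where(g <= Y_YELLOW, t_low, 1.0)
    b = np.where(g <= Y_YELLOW, 0.0, t_high)
    return np.stack([r, gr, b], axis=-1)

grays = np.linspace(0.0, 1.0, 11)
colors = colorize(grays)
print(np.allclose(colors @ LUMA, grays))  # luminance profile is preserved
```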

  15. Visualizing time: how linguistic metaphors are incorporated into displaying instruments in the process of interpreting time-varying signals

    NASA Astrophysics Data System (ADS)

    Garcia-Belmonte, Germà

    2017-06-01

    Spatial visualization is a well-established topic of education research that has helped improve science and engineering students' skills in spatial relations. Connections have been established between visualization as a comprehension tool and instruction in several scientific fields. Learning about dynamic processes mainly relies upon static spatial representations or images. Visualization of time is inherently problematic because time can be conceptualized in terms of two opposite conceptual metaphors based on spatial relations, as inferred from conventional linguistic patterns. The situation is particularly demanding when time-varying signals are recorded using displaying electronic instruments and the image must be properly interpreted. This work deals with the interplay between linguistic metaphors, visual thinking, and scientific instrument mediation in the process of interpreting time-varying signals displayed by electronic instruments. The analysis draws on a simplified version of a communication system as an example of practical signal recording and image visualization in a physics and engineering laboratory experience. Instrumentation delivers meaningful signal representations because it is designed to incorporate a specific and culturally favored view of time. It is suggested that difficulties in interpreting time-varying signals are linked to the existing dual perception of conflicting time metaphors. The activation of a specific space-time conceptual mapping might allow for a proper signal interpretation. Instruments then play a central role as visualization mediators by yielding an image that matches specific perception abilities and practical purposes. Here I identify two ways of understanding time that students encounter along different educational trajectories. Interestingly, specific displaying instruments belonging to different cultural traditions incorporate contrasting views of time.
One of them sees time in terms of a dynamic metaphor consisting of a static observer looking at passing events. This is a general and widespread practice in contemporary mass culture, which lies behind the process of making sense of moving images usually visualized by means of movie shots. In contrast, scientific culture has favored another conceptualization of time (the static time metaphor) that historically fostered the construction of graphs and the incorporation of time-dependent functions, as represented on the Cartesian plane, into displaying instruments. Both types of culture, scientific and mass, are considered highly technological in the sense that complex instruments, apparatus, or machines participate in their visual practices.

  16. Visual display aid for orbital maneuvering - Design considerations

    NASA Technical Reports Server (NTRS)

    Grunwald, Arthur J.; Ellis, Stephen R.

    1993-01-01

    This paper describes the development of an interactive proximity operations planning system that allows on-site planning of fuel-efficient multiburn maneuvers in a potential multispacecraft environment. Although this display system most directly assists planning by providing visual feedback to aid visualization of the trajectories and constraints, its most significant features include: (1) the use of an 'inverse dynamics' algorithm that removes control nonlinearities facing the operator, and (2) a trajectory planning technique that separates, through a 'geometric spreadsheet', the normally coupled complex problems of planning orbital maneuvers and allows solution by an iterative sequence of simple independent actions. The visual feedback of trajectory shapes and operational constraints, provided by user-transparent and continuously active background computations, allows the operator to make fast, iterative design changes that rapidly converge to fuel-efficient solutions. The planning tool provides an example of operator-assisted optimization of nonlinear cost functions.

  17. Interactive Visualization of Infrared Spectral Data: Synergy of Computation, Visualization, and Experiment for Learning Spectroscopy

    NASA Astrophysics Data System (ADS)

    Lahti, Paul M.; Motyka, Eric J.; Lancashire, Robert J.

    2000-05-01

    A straightforward procedure is described to combine computation of molecular vibrational modes using commonly available molecular modeling programs with visualization of the modes using advanced features of the MDL Information Systems Inc. Chime World Wide Web browser plug-in. Minor editing of experimental spectra that are stored in the JCAMP-DX format allows linkage of IR spectral frequency ranges to Chime molecular display windows. The spectra and animation files can be combined by Hypertext Markup Language programming to allow interactive linkage between experimental spectra and computationally generated vibrational displays. Both the spectra and the molecular displays can be interactively manipulated to allow the user maximum control of the objects being viewed. This procedure should be very valuable not only for aiding students through visual linkage of spectra and various vibrational animations, but also by assisting them in learning the advantages and limitations of computational chemistry by comparison to experiment.
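    A minimal sketch of reading (frequency, intensity) pairs from a JCAMP-DX fragment in the simple ##XYPOINTS form mentioned above. Real JCAMP-DX files often use compressed ##XYDATA encodings that this toy reader does not handle, and the sample spectrum is invented.

```python
# Toy reader for the uncompressed ##XYPOINTS=(XY..XY) form of JCAMP-DX.
# The sample spectrum below is invented; compressed ##XYDATA encodings
# used by many real files are not handled by this sketch.

SAMPLE = """##TITLE=Demo IR spectrum
##JCAMP-DX=4.24
##XYPOINTS=(XY..XY)
3000.0, 0.12; 2950.0, 0.45
1700.0, 0.88; 1600.0, 0.30
##END=
"""

def read_xypoints(text):
    points, in_block = [], False
    for line in text.splitlines():
        if line.startswith("##XYPOINTS"):
            in_block = True               # start collecting data lines
            continue
        if line.startswith("##"):         # any other label ends the block
            in_block = False
            continue
        if in_block:
            for pair in line.split(";"):  # pairs are "x, y" separated by ";"
                if pair.strip():
                    x, y = (float(v) for v in pair.split(","))
                    points.append((x, y))
    return points

pts = read_xypoints(SAMPLE)
print(len(pts), pts[0])
```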

  18. A Global Repository for Planet-Sized Experiments and Observations

    NASA Technical Reports Server (NTRS)

    Williams, Dean; Balaji, V.; Cinquini, Luca; Denvil, Sebastien; Duffy, Daniel; Evans, Ben; Ferraro, Robert D.; Hansen, Rose; Lautenschlager, Michael; Trenham, Claire

    2016-01-01

    Working across U.S. federal agencies, international agencies, and multiple worldwide data centers, and spanning seven international network organizations, the Earth System Grid Federation (ESGF) allows users to access, analyze, and visualize data using a globally federated collection of networks, computers, and software. Its architecture employs a system of geographically distributed peer nodes that are independently administered yet united by common federation protocols and application programming interfaces (APIs). The full ESGF infrastructure has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the Coupled Model Intercomparison Project (CMIP) output used by the Intergovernmental Panel on Climate Change assessment reports. Data served by ESGF not only include model output (i.e., CMIP simulation runs) but also include observational data from satellites and instruments, reanalyses, and generated images. Metadata summarize basic information about the data for fast and easy data discovery.
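    Programmatic access of the kind described typically goes through an index node's search API. The sketch below only composes a query URL and makes no network call; the endpoint path, node URL, and facet names follow ESGF esg-search conventions but should be treated as assumptions to verify against your node's documentation.

```python
from urllib.parse import urlencode

# Sketch: composing a query against an ESGF index node's search API.
# Endpoint path, node URL and facet values are assumptions for illustration;
# verify them against the esg-search documentation of your node.

def esgf_search_url(base, **facets):
    params = {"format": "application/solr+json", "limit": 10, **facets}
    return base + "/esg-search/search?" + urlencode(params)

url = esgf_search_url(
    "https://esgf-node.llnl.gov",
    project="CMIP6",
    variable="tas",        # near-surface air temperature
    frequency="mon",
)
print(url)
```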

  19. Visual observing reports

    NASA Astrophysics Data System (ADS)

    Roggemans, Paul

    2016-01-01

    In this overview we summarize reports published by visual observers shortly after the field work was done, while first impressions and memories of the real meteor observing experiences were fresh in mind. With March and April being quiet months meteor-wise, and with the weather circumstances in 2016 rather unfavorable, almost no visual observing efforts were reported. Long-term visual observer Koen Miskotte was able to observe during this rather poorly known period and reported his data in MeteorNews.org. The Eta Aquariids 2016 provided a surprisingly nice display, well covered by fellow visual observer Paul Jones in Florida.

  20. Adding Test Generation to the Teaching Machine

    ERIC Educational Resources Information Center

    Bruce-Lockhart, Michael; Norvell, Theodore; Crescenzi, Pierluigi

    2009-01-01

    We propose an extension of the Teaching Machine project, called Quiz Generator, that allows instructors to produce assessment quizzes in the field of algorithm and data structures quite easily. This extension makes use of visualization techniques and is based on new features of the Teaching Machine that allow third-party visualizers to be added as…

  1. Visualizing Molecular Chirality in the Organic Chemistry Laboratory Using Cholesteric Liquid Crystals

    ERIC Educational Resources Information Center

    Popova, Maia; Bretz, Stacey Lowery; Hartley, C. Scott

    2016-01-01

    Although stereochemistry is an important topic in second-year undergraduate organic chemistry, there are limited options for laboratory activities that allow direct visualization of macroscopic chiral phenomena. A novel, guided-inquiry experiment was developed that allows students to explore chirality in the context of cholesteric liquid crystals.…

  2. Guidelines for assigning allowable properties to visually graded foreign species based on test data from full sized specimens

    Treesearch

    David W. Green; Bradley E. Shelley

    2006-01-01

    The objective of this document is to provide philosophy and guidelines for the assignment of allowable properties to visually graded dimension lumber produced from trees not grown in the United States. This document assumes, as a starting point, the procedures of ASTM D 1990.

  3. [New developments in 2005 for the management of visual deficit in the infant].

    PubMed

    Bursztyn, J

    2005-03-01

    The main progress in pediatric ophthalmology is recognition of the importance and efficacy of screening for refractive defects. This screening should occur as early as possible, allowing spectacle correction that prevents complications such as amblyopia and strabismus. It also allows simultaneous screening for organic anomalies and early treatment.

  4. Web-based, GPU-accelerated, Monte Carlo simulation and visualization of indirect radiation imaging detector performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Han; Sharma, Diksha; Badano, Aldo, E-mail: aldo.badano@fda.hhs.gov

    2014-12-15

    Purpose: Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridMANTIS, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load using dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webMANTIS and visualMANTIS, to facilitate the setup of computational experiments via hybridMANTIS. Methods: The visualization tools visualMANTIS and webMANTIS enable the user to control simulation properties through a user interface. In the case of webMANTIS, control via a web browser allows access through mobile devices such as smartphones or tablets. webMANTIS acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. Results: The output consists of the point response, pulse-height spectrum, and optical transport statistics generated by hybridMANTIS. The users can download the output images and statistics through a zip file for future reference. In addition, webMANTIS provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns and allows the user to trace the history of the optical photons. Conclusions: The visualization tools visualMANTIS and webMANTIS provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers while allowing users to save simulation parameters and results from prior experiments.
The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying input parameters to receiving visual feedback for the model predictions.
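    A pulse-height spectrum of the kind these tools display is, in essence, a histogram of detected optical photons per primary event. The sketch below uses synthetic Poisson event statistics as a stand-in for hybridMANTIS output; no assumption is made about its actual file format.

```python
import numpy as np

# Synthetic pulse-height spectrum: histogram of optical photons detected per
# primary x-ray event.  The Poisson yield is a hypothetical stand-in for
# hybridMANTIS output, not its actual data format.
rng = np.random.default_rng(7)
mean_yield, n_events = 1200.0, 5000
detected = rng.poisson(mean_yield, size=n_events)   # photons counted per event

counts, edges = np.histogram(detected, bins=50)     # the pulse-height spectrum
peak_bin = int(counts.argmax())
peak_center = 0.5 * (edges[peak_bin] + edges[peak_bin + 1])
print(int(counts.sum()), round(peak_center))
```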

  5. Virtual Images: Going Through the Looking Glass

    NASA Astrophysics Data System (ADS)

    Mota, Ana Rita; dos Santos, João Lopes

    2017-01-01

    Virtual images are often introduced through a "geometric" perspective, with little conceptual or qualitative illustrations, hindering a deeper understanding of this physical concept. In this paper, we present two rather simple observations that force a critical reflection on the optical nature of a virtual image. This approach is supported by the reflect-view, a useful device in geometrical optics classes because it allows a visual confrontation between virtual images and real objects that seemingly occupy the same region of space.

  6. A system to program projects to meet visual quality objectives

    Treesearch

    Fred L. Henley; Frank L. Hunsaker

    1979-01-01

    The U. S. Forest Service has established Visual Quality Objectives for National Forest lands and determined a method to ascertain the Visual Absorption Capability of those lands. Combining the two mapping inventories has allowed the Forest Service to retain the visual quality while managing natural resources.

  7. Numbers, Pictures, and Politics: Teaching Research Methods through Data Visualizations

    ERIC Educational Resources Information Center

    Rom, Mark Carl

    2015-01-01

    Data visualization is the term used to describe the methods and technologies used to allow the exploration and communication of quantitative information graphically. Data visualization is a rapidly growing and evolving discipline, and visualizations are widely used to cover politics. Yet, while popular and scholarly publications widely use…

  8. Imaging mass spectrometry in microbiology

    PubMed Central

    Watrous, Jeramie D.; Dorrestein, Pieter C.

    2013-01-01

    Mass spectrometry tools that allow the 2-D visualization of the distribution of trace metals, metabolites, surface lipids, peptides, and proteins directly from biological samples, without the need for chemical tagging or antibodies, are becoming increasingly useful for microbiology applications. These tools, comprising different imaging mass spectrometry techniques, are ushering in an exciting new era of discovery by allowing the generation of chemical hypotheses based on the spatial mapping of atoms and molecules that can correlate to or transcend observed phenotypes. In this review, we explore the wide range of imaging mass spectrometry techniques available to microbiologists and describe their unique applications to microbiology with respect to the types of microbiology samples to be investigated. PMID:21822293

  9. Swipe transfer assembly

    DOEpatents

    Christiansen, Robert M.; Mills, William C.

    1992-01-01

    The swipe transfer assembly is a mechanical assembly used in conjunction with glove boxes and other sealed containments. It is used to pass small samples into or out of glove boxes without an open breach of the containment, and includes a rotational cylinder inside a fixed cylinder, the inside cylinder being rotatable through an arc of approximately 240° relative to the outer cylinder. An offset of 120° from end to end allows only one port to be opened at a time. The assembly is made of stainless steel or aluminum and clear acrylic plastic to enable visual observation. The assembly allows transfer of swipes and smears from radiological and other specially controlled environments.

  10. Molecular matter waves - tools and applications

    NASA Astrophysics Data System (ADS)

    Juffmann, Thomas; Sclafani, Michele; Knobloch, Christian; Cheshnovsky, Ori; Arndt, Markus

    2013-05-01

    Fluorescence microscopy allows us to visualize the gradual emergence of a deterministic far-field matter-wave diffraction pattern from stochastically arriving single molecules. We create a slow beam of phthalocyanine molecules via laser desorption from a glass window. The small source size provides the transverse coherence required to observe an interference pattern in the far-field behind an ultra-thin nanomachined grating. There the molecules are deposited onto a quartz window and can be imaged in situ and in real time with single molecule sensitivity. This new setup not only allows for a textbook demonstration of quantum interference, but also enables quantitative explorations of the van der Waals interaction between molecules and material gratings.
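    The far-field pattern described is the textbook N-slit Fraunhofer result. The sketch below evaluates it numerically for illustrative wavelength, slit width, and grating period, not the experiment's actual values.

```python
import numpy as np

# Scalar Fraunhofer pattern of an N-slit grating: single-slit envelope times
# the N-slit interference factor.  Wavelength, slit width, period and slit
# count are illustrative values only, not the experiment's parameters.
wavelength = 5e-12                                   # de Broglie wavelength, m
period, width, n_slits = 100e-9, 50e-9, 100

theta = np.linspace(-3e-4, 3e-4, 20001)              # diffraction angle, rad
beta = np.pi * width * np.sin(theta) / wavelength    # single-slit phase
gamma = np.pi * period * np.sin(theta) / wavelength  # slit-to-slit phase

envelope = np.sinc(beta / np.pi) ** 2                # np.sinc(x) = sin(pi x)/(pi x)
sin_g = np.sin(gamma)
safe = np.where(np.abs(sin_g) < 1e-9, 1.0, sin_g)    # guard the 0/0 limit
grating = np.where(np.abs(sin_g) < 1e-9, float(n_slits) ** 2,
                   (np.sin(n_slits * gamma) / safe) ** 2)
intensity = envelope * grating / n_slits ** 2        # normalized to 1 at theta = 0
print(round(float(intensity.max()), 3))
```

    The central maximum sits at theta = 0, with principal maxima at sin(theta) = m * wavelength / period attenuated by the single-slit envelope.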

  11. Flexible wings in flapping flight

    NASA Astrophysics Data System (ADS)

    Moret, Lionel; Thiria, Benjamin; Zhang, Jun

    2007-11-01

    We study the effect of passive pitching and flexible deflection of wings on forward flapping flight. The wings are flapped vertically in water and are allowed to move freely horizontally. The forward speed is set by the flapping wing itself through the balance of drag and thrust. We show that, by allowing the wing to passively pitch or by adding a flexible extension at its trailing edge, the forward speed is significantly increased. Detailed measurements of wing deflection and passive pitching, together with flow visualization, are used to explain our observations. The advantage of having a wing with finite rigidity/flexibility is discussed as we compare the current results with our biological inspirations such as birds and fish.

  12. A Systematic Review and Meta-Analysis on the Safety of Vascular Endothelial Growth Factor (VEGF) Inhibitors for the Treatment of Retinopathy of Prematurity

    PubMed Central

    Pertl, Laura; Steinwender, Gernot; Mayer, Christoph; Hausberger, Silke; Pöschl, Eva-Maria; Wackernagel, Werner; Wedrich, Andreas; El-Shabrawi, Yosuf; Haas, Anton

    2015-01-01

    Introduction Laser photocoagulation is the current gold-standard treatment for proliferative retinopathy of prematurity (ROP). However, it permanently reduces the visual field and might induce myopia. Vascular endothelial growth factor (VEGF) inhibitors for the treatment of ROP may enable continuing vascularization of the retina, potentially allowing preservation of the visual field. However, concern remains regarding their use in infants. This meta-analysis explores the safety of VEGF inhibitors. Methods The Ovid interface was used to perform a systematic review of the literature in the databases PubMed, EMBASE, and the Cochrane Library. Results This meta-analysis included 24 original reports (covering 1,457 eyes) on VEGF inhibitor treatment for ROP. The trials were solely observational except for one randomized and two case-control studies. We estimated a 6-month risk of retreatment per eye of 2.8%, and a 6-month risk of ocular complication without the need for retreatment of 1.6% per eye. Systemic complications were reported only as isolated incidents. Discussion VEGF inhibitors seem to be associated with low recurrence and ocular complication rates. They may have the benefit of potentially preserving the visual field and producing lower rates of myopia. Due to the lack of data, the risk of systemic side effects cannot be assessed. PMID:26083024

  13. Intelligent Visualization of Geo-Information on the Future Web

    NASA Astrophysics Data System (ADS)

    Slusallek, P.; Jochem, R.; Sons, K.; Hoffmann, H.

    2012-04-01

    Visualization is a key component of the "Observation Web" and will become even more important in the future as geo data becomes more widely accessible. The common statement that "data that cannot be seen does not exist" is especially true for non-experts, like most citizens. The Web provides the most interesting platform for making data easily and widely available. However, today's Web is not well suited for the interactive visualization and exploration that is often needed for geo data. Support for 3D data was added only recently and at an extremely low level (WebGL), but even the 2D visualization capabilities of HTML (e.g., images, canvas, SVG) are rather limited, especially regarding interactivity. We have developed XML3D as an extension to HTML5. It allows for compactly describing 2D and 3D data directly as elements of an HTML5 document. All graphics elements are part of the Document Object Model (DOM) and can be manipulated via the same set of DOM events and methods that millions of Web developers use on a daily basis. Thus, XML3D makes highly interactive 2D and 3D visualization easily usable, not only for geo data. XML3D is supported by any WebGL-capable browser, but we also provide native implementations in Firefox and Chromium. As an example, we show how OpenStreetMap data can be mapped directly to XML3D and visualized interactively in any Web page. We show how this data can be easily augmented with additional data from the Web via a few lines of Javascript. We also show how embedded semantic data (via RDFa) allows for linking the visualization back to the data's origin, thus providing an immersive interface for interacting with and modifying the original data. XML3D is used as key input for standardization within the W3C Community Group on "Declarative 3D for the Web" chaired by the DFKI and has recently been selected as one of the Generic Enablers for the EU Future Internet initiative.

  14. Integration of visual and non-visual self-motion cues during voluntary head movements in the human brain.

    PubMed

    Schindler, Andreas; Bartels, Andreas

    2018-05-15

    Our phenomenological experience of the stable world is maintained by continuous integration of visual self-motion with extra-retinal signals. However, due to conventional constraints of fMRI acquisition in humans, neural responses to visuo-vestibular integration have only been studied using artificial stimuli, in the absence of voluntary head motion. We here circumvented these limitations and let participants move their heads during scanning. The slow dynamics of the BOLD signal allowed us to acquire neural signals related to head motion after the observer's head was stabilized by inflatable air cushions. Visual stimuli were presented on head-fixed display goggles and updated in real time as a function of head motion that was tracked using an external camera. Two conditions simulated forward translation of the participant. During physical head rotation, the congruent condition simulated a stable world, whereas the incongruent condition added arbitrary lateral motion. Importantly, both conditions were precisely matched in visual properties and head rotation. By comparing congruent with incongruent conditions we found evidence consistent with the multi-modal integration of visual cues with head motion into a coherent "stable world" percept in the parietal operculum and in an anterior part of parieto-insular cortex (aPIC). In the visual motion network, human regions MST, a dorsal part of VIP, the cingulate sulcus visual area (CSv), and a region in precuneus (Pc) showed differential responses to the same contrast. The results demonstrate for the first time neural multimodal interactions between precisely matched congruent versus incongruent visual and non-visual cues during physical head movement in the human brain. The methodological approach opens the path to a new class of fMRI studies with unprecedented temporal and spatial control over visuo-vestibular stimulation. Copyright © 2018 Elsevier Inc. All rights reserved.

  15. Dissection and lateral mounting of zebrafish embryos: analysis of spinal cord development.

    PubMed

    Beck, Aaron P; Watt, Roland M; Bonner, Jennifer

    2014-02-28

    The zebrafish spinal cord is an effective investigative model for nervous system research for several reasons. First, genetic, transgenic and gene knockdown approaches can be utilized to examine the molecular mechanisms underlying nervous system development. Second, large clutches of developmentally synchronized embryos provide large experimental sample sizes. Third, the optical clarity of the zebrafish embryo permits researchers to visualize progenitor, glial, and neuronal populations. Although zebrafish embryos are transparent, specimen thickness can impede effective microscopic visualization. One reason for this is the tandem development of the spinal cord and overlying somite tissue. Another reason is the large yolk ball, which is still present during periods of early neurogenesis. In this article, we demonstrate microdissection and removal of the yolk in fixed embryos, which allows microscopic visualization while preserving surrounding somite tissue. We also demonstrate semipermanent mounting of zebrafish embryos. This permits observation of neurodevelopment in the dorso-ventral and anterior-posterior axes, as it preserves the three-dimensionality of the tissue.

  16. Dissection and Lateral Mounting of Zebrafish Embryos: Analysis of Spinal Cord Development

    PubMed Central

    Beck, Aaron P.; Watt, Roland M.; Bonner, Jennifer

    2014-01-01

    The zebrafish spinal cord is an effective investigative model for nervous system research for several reasons. First, genetic, transgenic and gene knockdown approaches can be utilized to examine the molecular mechanisms underlying nervous system development. Second, large clutches of developmentally synchronized embryos provide large experimental sample sizes. Third, the optical clarity of the zebrafish embryo permits researchers to visualize progenitor, glial, and neuronal populations. Although zebrafish embryos are transparent, specimen thickness can impede effective microscopic visualization. One reason for this is the tandem development of the spinal cord and overlying somite tissue. Another reason is the large yolk ball, which is still present during periods of early neurogenesis. In this article, we demonstrate microdissection and removal of the yolk in fixed embryos, which allows microscopic visualization while preserving surrounding somite tissue. We also demonstrate semipermanent mounting of zebrafish embryos. This permits observation of neurodevelopment in the dorso-ventral and anterior-posterior axes, as it preserves the three-dimensionality of the tissue. PMID:24637734

  17. Does visual short-term memory have a high-capacity stage?

    PubMed

    Matsukura, Michi; Hollingworth, Andrew

    2011-12-01

    Visual short-term memory (VSTM) has long been considered a durable, limited-capacity system for the brief retention of visual information. However, recent work by Sligte et al. (PLoS ONE 3:e1699, 2008) reported that, relatively early after the removal of a memory array, a cue allowed participants to access a fragile, high-capacity stage of VSTM that is distinct from iconic memory. In the present study, we examined whether this stage division is warranted by attempting to corroborate the existence of an early, high-capacity form of VSTM. The results of four experiments did not support Sligte et al.'s claim, since we did not obtain evidence for VSTM retention that exceeded traditional estimates of capacity. However, performance approaching that observed by Sligte et al. can be achieved through extensive practice, providing a clear explanation for their findings. Our evidence favors the standard view of VSTM as a limited-capacity system that maintains a few object representations in a relatively durable form.
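    "Traditional estimates of capacity" in change-detection tasks are commonly computed with Cowan's K formula. The formula below is a standard-methods illustration, not taken from this paper, and the hit and false-alarm rates are hypothetical.

```python
# Cowan's K, a widely used change-detection capacity estimate:
# K = set size * (hit rate - false alarm rate).
# The rates below are hypothetical example values.

def cowan_k(hit_rate, false_alarm_rate, set_size):
    """Estimate the number of items held in visual short-term memory."""
    return set_size * (hit_rate - false_alarm_rate)

k = cowan_k(hit_rate=0.80, false_alarm_rate=0.20, set_size=6)
print(round(k, 1))  # ~3.6 items, within the classic 3-4 item range
```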

  18. Interactive Web-based Floodplain Simulation System for Realistic Experiments of Flooding and Flood Damage

    NASA Astrophysics Data System (ADS)

    Demir, I.

    2013-12-01

    Recent developments in web technologies make it easy to manage and visualize large data sets and to share them with the general public. Novel visualization techniques and dynamic user interfaces allow users to create realistic environments and to interact with data to gain insight from simulations and environmental observations. The floodplain simulation system is a web-based 3D interactive flood simulation environment for creating real-world flooding scenarios. The simulation system provides a visually striking platform with realistic terrain information and water simulation. Students can create and modify predefined scenarios, control environmental parameters, and evaluate flood mitigation techniques. The web-based simulation system provides an environment in which children and adults can learn about flooding, flood damage, and the effects of development and human activity in the floodplain. The system provides various scenarios customized to fit the age and education level of the users. This presentation provides an overview of the web-based flood simulation system and demonstrates its capabilities for various flooding and land use scenarios.

  19. Microscopic visualization of metabotropic glutamate receptors on the surface of living cells using bifunctional magnetic resonance imaging probes.

    PubMed

    Mishra, Anurag; Mishra, Ritu; Gottschalk, Sven; Pal, Robert; Sim, Neil; Engelmann, Joern; Goldberg, Martin; Parker, David

    2014-02-19

    A series of bimodal metabotropic glutamate-receptor targeted MRI contrast agents has been developed and evaluated, based on established competitive metabotropic Glu receptor subtype 5 (mGluR5) antagonists. In order to directly visualize mGluR5 binding of these agents on the surface of live astrocytes, variations in the core structure were made. A set of gadolinium conjugates containing either a cyanine dye or a fluorescein moiety was accordingly prepared, to allow visualization by optical microscopy in cellulo. In each case, surface receptor binding was compromised and cell internalization observed. Another approach, examining the location of a terbium analogue via sensitized emission, also exhibited nonspecific cell uptake in neuronal cell line models. Finally, biotin derivatives of two lead compounds were prepared, and the specificity of binding to the mGluR5 cell surface receptors was demonstrated with the aid of their fluorescently labeled avidin conjugates, using both total internal reflection fluorescence (TIRF) and confocal microscopy.

  20. An Hour of Spectacular Visualization

    NASA Technical Reports Server (NTRS)

    Hasler, Arthur F.

    2005-01-01

    The NASA/NOAA Electronic Theater presents Earth science observations and visualizations from space in a historical perspective. Fly in from outer space to Athens, site of the 2004 Summer Olympics, and to the Far East using 1 m IKONOS "Spy Satellite" data. Contrast the 1972 Apollo 17 "Blue Marble" image of the Earth with the latest US and International global satellite images that allow us to view our Planet from any vantage point. See the latest spectacular images from NASA/NOAA/Commercial remote sensing missions like Terra, GOES, TRMM, SeaWiFS, & Landsat 7, QuickBird imagery of the SE Asia Tsunami, the devastation of Hurricane Katrina in New Orleans this year, and the LA/San Diego Fires of 2003. See how High Definition Television (HDTV) is revolutionizing the way we do science communication. Take the pulse of the planet on a daily, annual and 30-year time scale. See daily thunderstorms, the annual blooming of the northern hemisphere land masses and oceans, fires in Africa, dust storms in Iraq, and carbon monoxide exhaust from global burning. See visualizations featured on Newsweek, TIME, National Geographic, Popular Science covers & National & International Network TV. Spectacular new global visualizations of the observed and simulated atmosphere & oceans are shown. See the currents and vortexes in the oceans that bring up the nutrients to feed tiny plankton and draw the fish, whales and fishermen. See how the ocean blooms in response to El Nino/La Nina climate changes. The E-theater will be presented using the latest High Definition TV (HDTV) and video projection technology on a large screen. See city lights around the globe and in your area observed by the "night-vision" DMSP satellite. Also see how Keyhole and Google Maps are using satellite and aerial photography to help you find your house and plan your vacation.

  1. An Hour of Spectacular Visualization

    NASA Technical Reports Server (NTRS)

    Hasler, Arthur F.

    2004-01-01

    The NASA/NOAA Electronic Theater presents Earth science observations and visualizations from space in a historical perspective. Fly in from outer space to the Far East and down to Beijing and Bangkok. Zoom through the Cosmos to the site of the 2004 Summer Olympic games in Athens using 1 m IKONOS "Spy Satellite" data. Contrast the 1972 Apollo 17 "Blue Marble" image of the Earth with the latest US and International global satellite images that allow us to view our Planet from any vantage point. See the latest spectacular images from NASA/NOAA remote sensing missions like Terra, GOES, TRMM, SeaWiFS, & Landsat 7, of typhoons/hurricanes and fires in California and around the planet. See how High Definition Television (HDTV) is revolutionizing the way we do science communication. Take the pulse of the planet on a daily, annual and 30-year time scale. See daily thunderstorms, the annual greening of the northern hemisphere land masses and oceans, fires in Africa, dust storms in Iraq, and carbon monoxide exhaust from global burning. See visualizations featured on Newsweek, TIME, National Geographic, Popular Science covers & National & International Network TV. Spectacular new global visualizations of the observed and simulated atmosphere & oceans are shown. See the currents and vortexes in the oceans that bring up the nutrients to feed tiny plankton and draw the fish, whales and fishermen. See how the ocean blooms in response to El Nino/La Nina climate changes. The E-theater will be presented using the latest High Definition TV (HDTV) and video projection technology on a large screen. See the global city lights, showing population concentrations in the US, Africa, and Asia observed by the "night-vision" DMSP satellite.

  2. NASA/NOAA Electronic Theater: An Hour of Spectacular Visualization

    NASA Technical Reports Server (NTRS)

    Hasler, A. F.

    2004-01-01

    The NASA/NOAA Electronic Theater presents Earth science observations and visualizations from space in a historical perspective. Fly in from outer space to Utah, Logan and the USU Agriculture Station. Compare zooms through the Cosmos to the sites of the 2004 Summer and 2002 Winter Olympic games using 1 m IKONOS "Spy Satellite" data. Contrast the 1972 Apollo 17 "Blue Marble" image of the Earth with the latest US and International global satellite images that allow us to view our Planet from any vantage point. See the latest spectacular images from NASA/NOAA remote sensing missions like Terra, GOES, TRMM, SeaWiFS, & Landsat 7, of storms & fires like Hurricanes Charley & Isabel and the LA/San Diego Fire Storms of 2003. See how High Definition Television (HDTV) is revolutionizing the way we do science communication. Take the pulse of the planet on a daily, annual and 30-year time scale. See daily thunderstorms, the annual greening of the northern hemisphere land masses and oceans, fires in Africa, dust storms in Iraq, and carbon monoxide exhaust from global burning. See visualizations featured on Newsweek, TIME, National Geographic, Popular Science covers & National & International Network TV. Spectacular new global visualizations of the observed and simulated atmosphere & oceans are shown. See the currents and vortexes in the oceans that bring up the nutrients to feed tiny plankton and draw the fish, whales and fishermen. See how the ocean blooms in response to El Nino/La Nina climate changes. The E-theater will be presented using the latest High Definition TV and video projection technology on a large screen. See the global city lights, and the great NE US blackout of August 2003 observed by the "night-vision" DMSP satellite.

  3. 3D Visualization for Planetary Missions

    NASA Astrophysics Data System (ADS)

    DeWolfe, A. W.; Larsen, K.; Brain, D.

    2018-04-01

    We have developed visualization tools, built on the Cesium JavaScript library, for viewing planetary orbiters and science data in 3D for both Earth and Mars, allowing viewers to visualize the position and orientation of spacecraft together with science data.
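    Cesium typically ingests time-tagged spacecraft ephemerides as CZML, a JSON packet format. A minimal sketch of assembling one position packet in Python; the spacecraft id, epoch, and sample values are hypothetical, not from the tools described above:

```python
import json

def orbit_packet(sat_id, epoch, samples):
    """Build a CZML position packet. samples are
    (seconds-from-epoch, lon_deg, lat_deg, alt_m) tuples,
    flattened into the CZML cartographicDegrees array."""
    flat = [value for sample in samples for value in sample]
    return {
        "id": sat_id,
        "position": {
            "epoch": epoch,
            "cartographicDegrees": flat,
        },
    }

packet = orbit_packet(
    "MAVEN",  # hypothetical spacecraft id
    "2018-04-01T00:00:00Z",
    [(0, 0.0, 10.0, 400000.0), (60, 1.5, 12.0, 401000.0)],
)
czml_text = json.dumps(packet)
```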

  4. Teaching Technology Education to Visually Impaired Students.

    ERIC Educational Resources Information Center

    Mann, Rene

    1987-01-01

    Discusses various types of visual impairments and how the learning environment can be adapted to limit their effect. Presents suggestions for adapting industrial arts laboratory activities to maintain safety standards while allowing the visually impaired to participate. (CH)

  5. A Bayesian Account of Visual-Vestibular Interactions in the Rod-and-Frame Task.

    PubMed

    Alberts, Bart B G T; de Brouwer, Anouk J; Selen, Luc P J; Medendorp, W Pieter

    2016-01-01

    Panoramic visual cues, as generated by the objects in the environment, provide the brain with important information about gravity direction. To derive an optimal, i.e., Bayesian, estimate of gravity direction, the brain must combine panoramic information with gravity information detected by the vestibular system. Here, we examined the individual sensory contributions to this estimate psychometrically. We asked human subjects to judge the orientation (clockwise or counterclockwise relative to gravity) of a briefly flashed luminous rod, presented within an oriented square frame (rod-in-frame). Vestibular contributions were manipulated by tilting the subject's head, whereas visual contributions were manipulated by changing the viewing distance of the rod and frame. Results show a cyclical modulation of the frame-induced bias in perceived verticality across a 90° range of frame orientations. The magnitude of this bias decreased significantly with larger viewing distance, as if visual reliability was reduced. Biases increased significantly when the head was tilted, as if vestibular reliability was reduced. A Bayesian optimal integration model, with distinct vertical and horizontal panoramic weights, a gain factor to allow for visual reliability changes, and ocular counterroll in response to head tilt, provided a good fit to the data. We conclude that subjects flexibly weigh visual panoramic and vestibular information based on their orientation-dependent reliability, resulting in the observed verticality biases and the associated response variabilities.
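    The reliability weighting described above follows the standard Bayesian cue-combination rule: each cue is weighted by its inverse variance, and the fused estimate is more reliable than either cue alone. A minimal sketch with made-up variances (not the paper's fitted parameters):

```python
def combine(mu_vis, var_vis, mu_vest, var_vest):
    """Inverse-variance-weighted fusion of a visual (panoramic) and a
    vestibular estimate of gravity direction, in degrees."""
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_vest)
    mu = w_vis * mu_vis + (1 - w_vis) * mu_vest
    var = 1 / (1 / var_vis + 1 / var_vest)  # fused variance is smaller
    return mu, var

# A tilted frame biases the visual estimate; increasing viewing
# distance (larger var_vis) would shift weight toward the vestibular cue.
mu, var = combine(mu_vis=10.0, var_vis=4.0, mu_vest=0.0, var_vest=4.0)
```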

  6. SOCR Motion Charts: An Efficient, Open-Source, Interactive and Dynamic Applet for Visualizing Longitudinal Multivariate Data

    PubMed Central

    Al-Aziz, Jameel; Christou, Nicolas; Dinov, Ivo D.

    2011-01-01

    The amount, complexity and provenance of data have dramatically increased in the past five years. Visualization of observed and simulated data is a critical component of any social, environmental, biomedical or scientific quest. Dynamic, exploratory and interactive visualization of multivariate data, without preprocessing by dimensionality reduction, remains a nearly insurmountable challenge. The Statistics Online Computational Resource (www.SOCR.ucla.edu) provides portable online aids for probability and statistics education, technology-based instruction and statistical computing. We have developed a new Java-based infrastructure, SOCR Motion Charts, for discovery-based exploratory analysis of multivariate data. This interactive data visualization tool enables the visualization of high-dimensional longitudinal data. SOCR Motion Charts allows mapping of ordinal, nominal and quantitative variables onto time, 2D axes, size, colors, glyphs and appearance characteristics, which facilitates the interactive display of multidimensional data. We validated this new visualization paradigm using several publicly available multivariate datasets including Ice-Thickness, Housing Prices, Consumer Price Index, and California Ozone Data. SOCR Motion Charts is designed using object-oriented programming, implemented as a Java Web-applet and is available to the entire community on the web at www.socr.ucla.edu/SOCR_MotionCharts. It can be used as an instructional tool for rendering and interrogating high-dimensional data in the classroom, as well as a research tool for exploratory data analysis. PMID:21479108

  7. A physiologically based nonhomogeneous Poisson counter model of visual identification.

    PubMed

    Christensen, Jeppe H; Markussen, Bo; Bundesen, Claus; Kyllingsbæk, Søren

    2018-04-30

    A physiologically based nonhomogeneous Poisson counter model of visual identification is presented. The model was developed in the framework of a Theory of Visual Attention (Bundesen, 1990; Kyllingsbæk, Markussen, & Bundesen, 2012) and meant for modeling visual identification of objects that are mutually confusable and hard to see. The model assumes that the visual system's initial sensory response consists in tentative visual categorizations, which are accumulated by leaky integration of both transient and sustained components comparable with those found in spike density patterns of early sensory neurons. The sensory response (tentative categorizations) feeds independent Poisson counters, each of which accumulates tentative object categorizations of a particular type to guide overt identification performance. We tested the model's ability to predict the effect of stimulus duration on observed distributions of responses in a nonspeeded (pure accuracy) identification task with eight response alternatives. The time courses of correct and erroneous categorizations were well accounted for when the event-rates of competing Poisson counters were allowed to vary independently over time in a way that mimicked the dynamics of receptive field selectivity as found in neurophysiological studies. Furthermore, the initial sensory response yielded theoretical hazard rate functions that closely resembled empirically estimated ones. Finally, supplied with a Naka-Rushton type contrast gain control, the model provided an explanation for Bloch's law.
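    One way to simulate such competing counters is discrete-time accumulation with time-varying event rates, e.g. a transient burst decaying to a sustained level, with the fullest counter at stimulus offset determining the reported category. A rough sketch under those assumptions, not the authors' implementation; all parameter values are illustrative:

```python
import math
import random

def simulate_counters(rates, duration, dt=0.001, seed=1):
    """Accumulate independent nonhomogeneous Poisson counters.
    rates: list of functions t -> event rate (Hz), one per category."""
    rng = random.Random(seed)
    counts = [0] * len(rates)
    for step in range(int(duration / dt)):
        t = step * dt
        for i, rate in enumerate(rates):
            # P(event in [t, t + dt)) ~= rate(t) * dt for small dt
            if rng.random() < rate(t) * dt:
                counts[i] += 1
    return counts

# Transient-plus-sustained rate for the target category,
# sustained-only rate for a confusable distractor.
target = lambda t: 80.0 * math.exp(-t / 0.05) + 40.0
distractor = lambda t: 40.0
counts = simulate_counters([target, distractor], duration=0.2)
```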

  8. Visual search asymmetries within color-coded and intensity-coded displays.

    PubMed

    Yamani, Yusuke; McCarley, Jason S

    2010-06-01

    Color and intensity coding provide perceptual cues to segregate categories of objects within a visual display, allowing operators to search more efficiently for needed information. Even within a perceptually distinct subset of display elements, however, it may often be useful to prioritize items representing urgent or task-critical information. The design of symbology to produce search asymmetries (Treisman & Souther, 1985) offers a potential technique for doing this, but it is not obvious from existing models of search that an asymmetry observed in the absence of extraneous visual stimuli will persist within a complex color- or intensity-coded display. To address this issue, in the current study we measured the strength of a visual search asymmetry within displays containing color- or intensity-coded extraneous items. The asymmetry persisted strongly in the presence of extraneous items that were drawn in a different color (Experiment 1) or a lower contrast (Experiment 2) than the search-relevant items, with the targets favored by the search asymmetry producing highly efficient search. The asymmetry was attenuated but not eliminated when extraneous items were drawn in a higher contrast than search-relevant items (Experiment 3). Results imply that the coding of symbology to exploit visual search asymmetries can facilitate visual search for high-priority items even within color- or intensity-coded displays.

  9. The use of head/eye-centered, hand-centered and allocentric representations for visually guided hand movements and perceptual judgments.

    PubMed

    Thaler, Lore; Todd, James T

    2009-04-01

    Two experiments are reported that were designed to measure the accuracy and reliability of both visually guided hand movements (Exp. 1) and perceptual matching judgments (Exp. 2). The specific procedure for informing subjects of the required response on each trial was manipulated so that some tasks could only be performed using an allocentric representation of the visual target; others could be performed using either an allocentric or hand-centered representation; still others could be performed based on an allocentric, hand-centered or head/eye-centered representation. Both head/eye and hand centered representations are egocentric because they specify visual coordinates with respect to the subject. The results reveal that accuracy and reliability of both motor and perceptual responses are highest when subjects direct their response towards a visible target location, which allows them to rely on a representation of the target in head/eye-centered coordinates. Systematic changes in averages and standard deviations of responses are observed when subjects cannot direct their response towards a visible target location, but have to represent target distance and direction in either hand-centered or allocentric visual coordinates instead. Subjects' motor and perceptual performance agree quantitatively well. These results strongly suggest that subjects process head/eye-centered representations differently from hand-centered or allocentric representations, but that they process visual information for motor actions and perceptual judgments together.

  10. Musician Map: visualizing music collaborations over time

    NASA Astrophysics Data System (ADS)

    Yim, Ji-Dong; Shaw, Chris D.; Bartram, Lyn

    2009-01-01

    In this paper we introduce Musician Map, a web-based interactive tool for visualizing relationships among popular musicians who have released recordings since 1950. Musician Map accepts search terms from the user, uses these terms to retrieve data from MusicBrainz.org and AudioScrobbler.net, and visualizes the results. Musician Map visualizes relationships of various kinds between music groups and individual musicians, such as band membership, musical collaborations, and linkage to other artists that are generally regarded as being similar in musical style. These relationships are plotted between artists using a new timeline-based visualization in which a node in a traditional node-link diagram has been transformed into a Timeline-Node, which allows the visualization of an evolving entity over time, such as the membership in a band. This allows the user to pursue social trend queries such as "Do Hip-Hop artists collaborate differently than Rock artists?"

  11. The visual white matter: The application of diffusion MRI and fiber tractography to vision science

    PubMed Central

    Rokem, Ariel; Takemura, Hiromasa; Bock, Andrew S.; Scherf, K. Suzanne; Behrmann, Marlene; Wandell, Brian A.; Fine, Ione; Bridge, Holly; Pestilli, Franco

    2017-01-01

    Visual neuroscience has traditionally focused much of its attention on understanding the response properties of single neurons or neuronal ensembles. The visual white matter and the long-range neuronal connections it supports are fundamental in establishing such neuronal response properties and visual function. This review article provides an introduction to measurements and methods to study the human visual white matter using diffusion MRI. These methods allow us to measure the microstructural and macrostructural properties of the white matter in living human individuals; they allow us to trace long-range connections between neurons in different parts of the visual system and to measure the biophysical properties of these connections. We also review a range of findings from recent studies on connections between different visual field maps, the effects of visual impairment on the white matter, and the properties underlying networks that process visual information supporting visual face recognition. Finally, we discuss a few promising directions for future studies. These include new methods for analysis of MRI data, open datasets that are becoming available to study brain connectivity and white matter properties, and open source software for the analysis of these data. PMID:28196374

  12. Automating Geospatial Visualizations with Smart Default Renderers for Data Exploration Web Applications

    NASA Astrophysics Data System (ADS)

    Ekenes, K.

    2017-12-01

    This presentation will outline the process of creating a web application for exploring large amounts of scientific geospatial data using modern automated cartographic techniques. Traditional cartographic methods, including data classification, may inadvertently hide geospatial and statistical patterns in the underlying data. This presentation demonstrates how to use smart web APIs that analyze a dataset as it loads and suggest the most appropriate visualizations based on its statistics. Because only a few visualizations suit any given dataset well, and because many users never move beyond default values, it is imperative to provide smart default color schemes tailored to the dataset rather than static defaults. The smart APIs expose multiple functions for automating visualizations, along with UI elements that let users create more than one visualization for a dataset, since there is no single best way to visualize it. Because bivariate and multivariate visualizations are particularly difficult to create effectively, this automated approach takes the guesswork out of the process and provides a number of ways to generate multivariate visualizations for the same variables, allowing the user to choose the visualization most appropriate for their presentation. The methods used in these APIs and the renderers they generate are not available elsewhere. The presentation will show how statistics can serve as the basis for automating default visualizations of data along continuous ramps, creating more refined visualizations while revealing the spread and outliers of the data. Adding interactive components that instantaneously alter visualizations allows users to unearth previously unknown spatial patterns among one or more variables. These applications may focus on a single dataset that is frequently updated, or be configurable for a variety of datasets from multiple sources.
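    One plausible form of such a statistics-driven default is to anchor a continuous color ramp at the data's mean and one standard deviation to either side, so that spread and outliers remain visible. A sketch of that idea; the stop layout and colors are assumptions for illustration, not the API described above:

```python
import statistics

def smart_ramp_stops(values, colors=("#2166ac", "#f7f7f7", "#b2182b")):
    """Derive color-ramp stops from data statistics: mean - 1 sd,
    mean, and mean + 1 sd, clamped to the observed data range."""
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)
    lo, hi = min(values), max(values)
    stops = [max(lo, mean - sd), mean, min(hi, mean + sd)]
    return list(zip(stops, colors))

stops = smart_ramp_stops([2, 4, 4, 4, 5, 5, 7, 9])
```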

  13. VisOHC: Designing Visual Analytics for Online Health Communities

    PubMed Central

    Kwon, Bum Chul; Kim, Sung-Hee; Lee, Sukwon; Choo, Jaegul; Huh, Jina; Yi, Ji Soo

    2015-01-01

    Through online health communities (OHCs), patients and caregivers exchange their illness experiences and strategies for overcoming the illness, and provide emotional support. To facilitate healthy and lively conversations in these communities, their members should be continuously monitored and nurtured by OHC administrators. The main challenge of OHC administrators' tasks lies in understanding the diverse dimensions of conversation threads that lead to productive discussions in their communities. In this paper, we present a design study in which three domain experts participated (an OHC researcher and two OHC administrators), conducted to arrive at a visual analytics solution. Through our design study, we characterized the domain goals of OHC administrators and derived tasks to achieve these goals. As a result of this study, we propose a system called VisOHC, which visualizes individual OHC conversation threads as collapsed boxes–a visual metaphor of conversation threads. In addition, we augmented the posters' reply authorship network with marks and/or beams to show conversation dynamics within threads. We also developed unique measures tailored to the characteristics of OHCs, which can be encoded for thread visualizations at the users' requests. Our observation of the two administrators while using VisOHC showed that it supports their tasks and reveals interesting insights into online health communities. Finally, we share our methodological lessons on probing visual designs together with domain experts by allowing them to freely encode measurements into visual variables. PMID:26529688

  14. Multi-modal assessment of on-road demand of voice and manual phone calling and voice navigation entry across two embedded vehicle systems.

    PubMed

    Mehler, Bruce; Kidd, David; Reimer, Bryan; Reagan, Ian; Dobres, Jonathan; McCartt, Anne

    2016-03-01

    One purpose of integrating voice interfaces into embedded vehicle systems is to reduce drivers' visual and manual distractions with 'infotainment' technologies. However, there is scant research on actual benefits in production vehicles or how different interface designs affect attentional demands. Driving performance, visual engagement, and indices of workload (heart rate, skin conductance, subjective ratings) were assessed in 80 drivers randomly assigned to drive a 2013 Chevrolet Equinox or Volvo XC60. The Chevrolet MyLink system allowed completing tasks with one voice command, while the Volvo Sensus required multiple commands to navigate the menu structure. When calling a phone contact, both voice systems reduced visual demand relative to the visual-manual interfaces, with reductions for drivers in the Equinox being greater. The Equinox 'one-shot' voice command showed advantages during contact calling but had significantly higher error rates than Sensus during destination address entry. For both secondary tasks, neither voice interface entirely eliminated visual demand. Practitioner Summary: The findings reinforce the observation that most, if not all, automotive auditory-vocal interfaces are multi-modal interfaces in which the full range of potential demands (auditory, vocal, visual, manipulative, cognitive, tactile, etc.) need to be considered in developing optimal implementations and evaluating drivers' interaction with the systems. Social Media: In-vehicle voice-interfaces can reduce visual demand but do not eliminate it and all types of demand need to be taken into account in a comprehensive evaluation.

  15. VisOHC: Designing Visual Analytics for Online Health Communities.

    PubMed

    Kwon, Bum Chul; Kim, Sung-Hee; Lee, Sukwon; Choo, Jaegul; Huh, Jina; Yi, Ji Soo

    2016-01-01

    Through online health communities (OHCs), patients and caregivers exchange their illness experiences and strategies for overcoming the illness, and provide emotional support. To facilitate healthy and lively conversations in these communities, their members should be continuously monitored and nurtured by OHC administrators. The main challenge of OHC administrators' tasks lies in understanding the diverse dimensions of conversation threads that lead to productive discussions in their communities. In this paper, we present a design study in which three domain experts participated (an OHC researcher and two OHC administrators), conducted to arrive at a visual analytics solution. Through our design study, we characterized the domain goals of OHC administrators and derived tasks to achieve these goals. As a result of this study, we propose a system called VisOHC, which visualizes individual OHC conversation threads as collapsed boxes-a visual metaphor of conversation threads. In addition, we augmented the posters' reply authorship network with marks and/or beams to show conversation dynamics within threads. We also developed unique measures tailored to the characteristics of OHCs, which can be encoded for thread visualizations at the users' requests. Our observation of the two administrators while using VisOHC showed that it supports their tasks and reveals interesting insights into online health communities. Finally, we share our methodological lessons on probing visual designs together with domain experts by allowing them to freely encode measurements into visual variables.

  16. Three-dimensional visualization of nanostructured surfaces and bacterial attachment using Autodesk® Maya®.

    PubMed

    Boshkovikj, Veselin; Fluke, Christopher J; Crawford, Russell J; Ivanova, Elena P

    2014-02-28

    There has been a growing interest in understanding the ways in which bacteria interact with nano-structured surfaces. As a result, there is a need for innovative approaches to enable researchers to visualize the biological processes taking place, despite the fact that it is not possible to directly observe these processes. We present a novel approach for the three-dimensional visualization of bacterial interactions with nano-structured surfaces using the software package Autodesk Maya. Our approach comprises a semi-automated stage, where actual surface topographic parameters, obtained using an atomic force microscope, are imported into Maya via a custom Python script, followed by a 'creative stage', where the bacterial cells and their interactions with the surfaces are visualized using available experimental data. The 'Dynamics' and 'nDynamics' capabilities of the Maya software allowed the construction and visualization of plausible interaction scenarios. This capability provides a practical aid to knowledge discovery, assists in the dissemination of research results, and provides an opportunity for an improved public understanding. We validated our approach by graphically depicting the interactions between the two bacteria being used for modeling purposes, Staphylococcus aureus and Pseudomonas aeruginosa, with different titanium substrate surfaces that are routinely used in the production of biomedical devices.
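    The semi-automated import stage described above amounts to turning an AFM height grid into 3D vertex positions that a script can hand to Maya's mesh-creation commands. A Maya-independent sketch of that conversion step; the grid spacing, units, and function name are assumptions, not the authors' script:

```python
def heightmap_to_vertices(heights, spacing_nm=10.0):
    """Convert a 2D grid of AFM height samples (nm) into (x, y, z)
    vertex positions, with x and y laid out on a regular grid."""
    verts = []
    for row, line in enumerate(heights):
        for col, z in enumerate(line):
            verts.append((col * spacing_nm, row * spacing_nm, z))
    return verts

# A tiny 2x2 height grid stands in for real AFM topography data.
verts = heightmap_to_vertices([[0.0, 2.5], [1.0, 3.0]])
```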

  17. Three-dimensional visualization of nanostructured surfaces and bacterial attachment using Autodesk® Maya®

    NASA Astrophysics Data System (ADS)

    Boshkovikj, Veselin; Fluke, Christopher J.; Crawford, Russell J.; Ivanova, Elena P.

    2014-02-01

    There has been a growing interest in understanding the ways in which bacteria interact with nano-structured surfaces. As a result, there is a need for innovative approaches to enable researchers to visualize the biological processes taking place, despite the fact that it is not possible to directly observe these processes. We present a novel approach for the three-dimensional visualization of bacterial interactions with nano-structured surfaces using the software package Autodesk Maya. Our approach comprises a semi-automated stage, where actual surface topographic parameters, obtained using an atomic force microscope, are imported into Maya via a custom Python script, followed by a `creative stage', where the bacterial cells and their interactions with the surfaces are visualized using available experimental data. The `Dynamics' and `nDynamics' capabilities of the Maya software allowed the construction and visualization of plausible interaction scenarios. This capability provides a practical aid to knowledge discovery, assists in the dissemination of research results, and provides an opportunity for an improved public understanding. We validated our approach by graphically depicting the interactions between the two bacteria being used for modeling purposes, Staphylococcus aureus and Pseudomonas aeruginosa, with different titanium substrate surfaces that are routinely used in the production of biomedical devices.

  18. Interactive Learning Environment: Web-based Virtual Hydrological Simulation System using Augmented and Immersive Reality

    NASA Astrophysics Data System (ADS)

    Demir, I.

    2014-12-01

    Recent developments in internet technologies make it possible to manage and visualize large data on the web. Novel visualization techniques and interactive user interfaces allow users to create realistic environments and interact with data to gain insight from simulations and environmental observations. The hydrological simulation system is a web-based 3D interactive learning environment for teaching hydrological processes and concepts. The simulation system provides a visually striking platform with realistic terrain information and water simulation. Students can create or load predefined scenarios, control environmental parameters, and evaluate environmental mitigation alternatives. The web-based simulation system provides an environment for students to learn about hydrological processes (e.g. flooding and flood damage) and the effects of development and human activity in the floodplain. The system utilizes the latest web technologies and the graphics processing unit (GPU) for water simulation and object collisions on the terrain. Users can access the system in three visualization modes: virtual reality, augmented reality, and immersive reality using a heads-up display. The system provides various scenarios customized to fit the age and education level of various users. This presentation provides an overview of the web-based flood simulation system and demonstrates its capabilities for the various visualization and interaction modes.

  19. Expertise for upright faces improves the precision but not the capacity of visual working memory.

    PubMed

    Lorenc, Elizabeth S; Pratte, Michael S; Angeloni, Christopher F; Tong, Frank

    2014-10-01

    Considerable research has focused on how basic visual features are maintained in working memory, but little is currently known about the precision or capacity of visual working memory for complex objects. How precisely can an object be remembered, and to what extent might familiarity or perceptual expertise contribute to working memory performance? To address these questions, we developed a set of computer-generated face stimuli that varied continuously along the dimensions of age and gender, and we probed participants' memories using a method-of-adjustment reporting procedure. This paradigm allowed us to separately estimate the precision and capacity of working memory for individual faces, on the basis of the assumptions of a discrete capacity model, and to assess the impact of face inversion on memory performance. We found that observers could maintain up to four to five items on average, with equally good memory capacity for upright and upside-down faces. In contrast, memory precision was significantly impaired by face inversion at every set size tested. Our results demonstrate that the precision of visual working memory for a complex stimulus is not strictly fixed but, instead, can be modified by learning and experience. We find that perceptual expertise for upright faces leads to significant improvements in visual precision, without modifying the capacity of working memory.
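
    The discrete-capacity model referenced here treats each report as either a noisy read-out of a stored item or a random guess, so precision and capacity can be estimated separately. A generative sketch of that mixture (parameter names and values are illustrative, not the paper's fits):

```python
import random

def simulate_mixture_responses(n_trials, set_size, capacity=4,
                               sd_mem=10.0, max_err=90.0, seed=0):
    """Generate response errors (deg) under a discrete-capacity mixture:
    an item is in memory with probability p_mem = min(capacity/set_size, 1),
    yielding a small Gaussian error (higher precision = smaller sd_mem);
    otherwise the observer guesses uniformly over the response range."""
    rng = random.Random(seed)
    p_mem = min(capacity / set_size, 1.0)
    errors = []
    for _ in range(n_trials):
        if rng.random() < p_mem:
            errors.append(rng.gauss(0.0, sd_mem))
        else:
            errors.append(rng.uniform(-max_err, max_err))
    return errors, p_mem

errors, p_mem = simulate_mixture_responses(2000, set_size=6, capacity=4)
```

    Under this model, the reported inversion effect corresponds to a larger sd_mem (lower precision) with capacity, and hence p_mem, unchanged.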

  20. Three-dimensional visualization of nanostructured surfaces and bacterial attachment using Autodesk® Maya®

    PubMed Central

    Boshkovikj, Veselin; Fluke, Christopher J.; Crawford, Russell J.; Ivanova, Elena P.

    2014-01-01

    There has been a growing interest in understanding the ways in which bacteria interact with nano-structured surfaces. As a result, there is a need for innovative approaches to enable researchers to visualize the biological processes taking place, despite the fact that it is not possible to directly observe these processes. We present a novel approach for the three-dimensional visualization of bacterial interactions with nano-structured surfaces using the software package Autodesk Maya. Our approach comprises a semi-automated stage, where actual surface topographic parameters, obtained using an atomic force microscope, are imported into Maya via a custom Python script, followed by a 'creative stage', where the bacterial cells and their interactions with the surfaces are visualized using available experimental data. The 'Dynamics' and 'nDynamics' capabilities of the Maya software allowed the construction and visualization of plausible interaction scenarios. This capability provides a practical aid to knowledge discovery, assists in the dissemination of research results, and provides an opportunity for an improved public understanding. We validated our approach by graphically depicting the interactions between the two bacteria being used for modeling purposes, Staphylococcus aureus and Pseudomonas aeruginosa, with different titanium substrate surfaces that are routinely used in the production of biomedical devices. PMID:24577105

  1. Acetazolamide-induced vasodilation does not inhibit the visually evoked flow response

    PubMed Central

    Yonai, Yaniv; Boms, Neta; Molnar, Sandor; Rosengarten, Bernhard; Bornstein, Natan M; Csiba, Laszlo; Olah, Laszlo

    2010-01-01

    Different methods are used to assess the vasodilator ability of cerebral blood vessels; however, the exact mechanism of cerebral vasodilation, induced by different stimuli, is not entirely known. Our aim was to investigate whether the potent vasodilator agent, acetazolamide (AZ), inhibits the neurovascular coupling, which also requires vasodilation. Therefore, visually evoked flow parameters were examined by transcranial Doppler in ten healthy subjects before and after AZ administration. Pulsatility index and peak systolic flow velocity changes, evoked by visual stimulus, were recorded in the posterior cerebral arteries before and after intravenous administration of 15 mg/kg AZ. Repeated-measures ANOVA did not show significant group main effect between the visually evoked relative flow velocity time courses before and after AZ provocation (P=0.43). Visual stimulation induced significant increase of relative flow velocity and decrease of pulsatility index not only before but also at the maximal effect of AZ. These results suggest that maximal cerebral vasodilation cannot be determined by the clinically accepted dose of AZ (15 mg/kg) and prove that neurovascular coupling remains preserved despite AZ-induced vasodilation. Our observation indicates independent regulation of vasodilation during neurovascular coupling, allowing the adaptation of cerebral blood flow according to neuronal activity even if other processes require significant vasodilation. PMID:19809468
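
    Two of the reported quantities have standard definitions worth making explicit: the Gosling pulsatility index and the visually evoked relative flow velocity change. A minimal sketch with illustrative posterior cerebral artery velocities (the numbers are not from the study):

```python
def pulsatility_index(v_sys, v_dia, v_mean):
    """Gosling pulsatility index from transcranial Doppler velocities:
    PI = (peak systolic - end diastolic) / mean flow velocity."""
    return (v_sys - v_dia) / v_mean

def relative_flow_change(v_stim, v_base):
    """Visually evoked relative flow velocity change, in percent."""
    return 100.0 * (v_stim - v_base) / v_base

# Illustrative PCA velocities (cm/s) at rest and during visual stimulation.
pi_rest = pulsatility_index(60.0, 25.0, 40.0)
gain = relative_flow_change(48.0, 40.0)
```

    A stimulus-evoked rise in flow velocity together with a fall in PI is the pattern the study reports both before and at the maximal effect of acetazolamide.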

  2. Behavioural and immunological responses to an immune challenge in Octopus vulgaris.

    PubMed

    Locatello, Lisa; Fiorito, Graziano; Finos, Livio; Rasotto, Maria B

    2013-10-02

    Behavioural and immunological changes consequent to stress and infection are largely unexplored in cephalopods, despite the wide employment of species such as Octopus vulgaris in studies that require their manipulation and prolonged maintenance in captivity. Here we explore O. vulgaris behavioural and immunological (i.e. haemocyte number and serum lysozyme activity) responses to an in vivo immune challenge with Escherichia coli lipopolysaccharides (LPS). Behavioural changes of immune-treated and sham-injected animals were observed in both sight-allowed and isolated conditions, i.e. visually interacting or not with a conspecific. Immune stimulation primarily caused a significant increase in the number of circulating haemocytes 4 h after the treatment, while serum lysozyme activity showed a less clear response. However, the effect of LPS on the circulating haemocytes began to vanish 24 h after injection. Our observations indicate a significant change in behaviour consequent to LPS administration, with treated octopuses exhibiting a decrease in general activity pattern when kept in the isolated condition. A similar decrease was not observed in the sight-allowed condition, where we noticed a specific significant reduction only in the time spent visually interacting with the conspecific. Overall, significant, but lower, behavioural and immunological effects of injection were also detected in sham-injected animals, suggesting a non-trivial susceptibility to manipulation and haemolymph sampling. Our results gain importance in light of changes to the regulations for the use of cephalopods in scientific procedures, which call for the prompt development of guidelines covering many aspects of cephalopod provision, maintenance and welfare. © 2013.

  3. The effect of multispectral image fusion enhancement on human efficiency.

    PubMed

    Bittner, Jennifer L; Schill, M Trent; Mohd-Zaid, Fairul; Blaha, Leslie M

    2017-01-01

    The visual system can be highly influenced by changes to visual presentation. Thus, numerous techniques have been developed to augment imagery in an attempt to improve human perception. The current paper examines the potential impact of one such enhancement, multispectral image fusion, where imagery captured in varying spectral bands (e.g., visible, thermal, night vision) is algorithmically combined to produce an output to strengthen visual perception. We employ ideal observer analysis over a series of experimental conditions to (1) establish a framework for testing the impact of image fusion over the varying aspects surrounding its implementation (e.g., stimulus content, task) and (2) examine the effectiveness of fusion on human information processing efficiency in a basic application. We used a set of rotated Landolt C images captured with a number of individual sensor cameras and combined across seven traditional fusion algorithms (e.g., Laplacian pyramid, principal component analysis, averaging) in a 1-of-8 orientation task. We found that, contrary to the idea of fused imagery always producing a greater impact on perception, single-band imagery can be just as influential. Additionally, efficiency data were shown to fluctuate based on sensor combination instead of fusion algorithm, suggesting the need for examining multiple factors to determine the success of image fusion. Our use of ideal observer analysis, a popular technique from the vision sciences, provides not only a standard for testing fusion in direct relation to the visual system but also allows for comparable examination of fusion across its associated problem space of application.
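
    Ideal observer analysis expresses human performance as statistical efficiency relative to the ideal observer for the same stimuli. A minimal sketch of that definition (the d' values are illustrative, not the study's data):

```python
def efficiency(d_human, d_ideal):
    """Statistical efficiency of a human observer relative to the ideal
    observer for the same stimuli: eta = (d'_human / d'_ideal) ** 2."""
    return (d_human / d_ideal) ** 2

# Illustrative: a human d' of 1.2 against an ideal-observer d' of 3.0
# on the same fused imagery corresponds to 16% efficiency.
eta = efficiency(1.2, 3.0)
```

    Comparing eta across sensor combinations and fusion algorithms is what lets the study attribute performance differences to the inputs rather than to the fusion step alone.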

  4. Eye movements reveal epistemic curiosity in human observers.

    PubMed

    Baranes, Adrien; Oudeyer, Pierre-Yves; Gottlieb, Jacqueline

    2015-12-01

    Saccadic (rapid) eye movements are a primary means by which humans and non-human primates sample visual information. However, while saccadic decisions are intensively investigated in instrumental contexts where saccades guide subsequent actions, it is largely unknown how they may be influenced by curiosity - the intrinsic desire to learn. While saccades are sensitive to visual novelty and visual surprise, no study has examined their relation to epistemic curiosity - interest in symbolic, semantic information. To investigate this question, we tracked the eye movements of human observers while they read trivia questions and, after a brief delay, were visually given the answer. We show that higher curiosity was associated with earlier anticipatory orienting of gaze toward the answer location without changes in other metrics of saccades or fixations, and that these influences were distinct from those produced by variations in confidence and surprise. Across subjects, the enhancement of anticipatory gaze was correlated with measures of trait curiosity from personality questionnaires. Finally, a machine learning algorithm could predict curiosity in a cross-subject manner, relying primarily on statistical features of the gaze position before the answer onset and independently of covariations in confidence or surprise, suggesting potential practical applications for educational technologies, recommender systems and research in cognitive sciences. With this article, we provide full access to the annotated database, allowing readers to reproduce the results. Epistemic curiosity produces specific effects on oculomotor anticipation that can be used to read out curiosity states. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  5. Vortex filament method as a tool for computational visualization of quantum turbulence

    PubMed Central

    Hänninen, Risto; Baggaley, Andrew W.

    2014-01-01

    The vortex filament model has become a standard and powerful tool to visualize the motion of quantized vortices in helium superfluids. In this article, we present an overview of the method and highlight its impact in aiding our understanding of quantum turbulence, particularly superfluid helium. We present an analysis of the structure and arrangement of quantized vortices. Our results are in agreement with previous studies showing that under certain conditions, vortices form coherent bundles, which allows for classical vortex stretching, giving quantum turbulence a classical nature. We also offer an explanation for the differences between the observed properties of counterflow and pure superflow turbulence in a pipe. Finally, we suggest a mechanism for the generation of coherent structures in the presence of normal fluid shear. PMID:24704873
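
    In the vortex filament model each filament point moves with the velocity induced by the vortex tangle; under the local induction approximation this velocity is proportional to s' x s'' (tangent cross curvature). A small numerical illustration, not the authors' code: discretizing a vortex ring and recovering its self-induced motion along the ring axis.

```python
import math

def lia_direction(points):
    """Local induction approximation for a closed vortex filament:
    at each node the induced velocity lies along s' x s'' (tangent
    cross curvature), computed here with central finite differences."""
    n = len(points)
    dirs = []
    for i in range(n):
        p_prev, p_next, p = points[i - 1], points[(i + 1) % n], points[i]
        # central-difference tangent and curvature vectors
        t = [(p_next[k] - p_prev[k]) / 2.0 for k in range(3)]
        c = [p_next[k] - 2.0 * p[k] + p_prev[k] for k in range(3)]
        dirs.append((t[1] * c[2] - t[2] * c[1],
                     t[2] * c[0] - t[0] * c[2],
                     t[0] * c[1] - t[1] * c[0]))
    return dirs

# Discretized vortex ring of radius 1 in the x-y plane: the induced
# motion at every node should point along the +z axis.
N = 64
ring = [(math.cos(2 * math.pi * i / N), math.sin(2 * math.pi * i / N), 0.0)
        for i in range(N)]
velocities = lia_direction(ring)
```

    Full simulations evaluate the Biot-Savart integral over the whole tangle; the LIA term above is the leading local contribution.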

  6. Electron microscopic visualization of complementary labeled DNA with platinum-containing guanine derivative.

    PubMed

    Loukanov, Alexandre; Filipov, Chavdar; Mladenova, Polina; Toshev, Svetlin; Emin, Saim

    2016-04-01

    The object of the present report is to provide a method for the visualization of DNA in TEM by complementary labeling of cytosine with a guanine derivative containing platinum as a contrast-enhancing heavy element. The stretched single-chain DNA was obtained by modifying double-stranded DNA. The labeling method comprises the following steps: (i) stretching and adsorption of DNA on the support film of an electron microscope grid (the hydrophobic carbon film holding negatively charged DNA); (ii) complementary labeling of the cytosine bases from the stretched single-stranded DNA pieces on the support film with the platinum-containing guanine derivative to form base-specific hydrogen bonds; and (iii) producing a magnified image of the base-specific labeled DNA. Stretched single-stranded DNA on a support film is obtained by a rapid elongation of DNA pieces on the surface between air and aqueous buffer solution. The attached platinum-containing guanine derivative serves as a high-density marker; it can be discriminated from the surrounding background of the carbon support film and visualized by conventional TEM observation at an accelerating voltage of 100 kV. This method allows examination of specific nucleic macromolecules through atom-by-atom analysis and is a promising way toward future DNA sequencing or molecular diagnostics of nucleic acids by electron microscopic observation. © 2016 Wiley Periodicals, Inc.

  7. Interactive 3D Mars Visualization

    NASA Technical Reports Server (NTRS)

    Powell, Mark W.

    2012-01-01

    The Interactive 3D Mars Visualization system provides high-performance, immersive visualization of satellite and surface vehicle imagery of Mars. The software can be used in mission operations to provide the most accurate position information for the Mars rovers to date. When integrated into the mission data pipeline, this system allows mission planners to view the location of the rover on Mars to 0.01-meter accuracy with respect to satellite imagery, with dynamic updates to incorporate the latest position information. Given this information so early in the planning process, rover drivers are able to plan more accurate drive activities for the rover than ever before, significantly increasing the science activities that can be executed. Scientifically, this 3D mapping information puts all of the science analyses to date into geologic context on a daily basis instead of weeks or months, as was the norm prior to this contribution. This allows the science planners to judge the efficacy of their previously executed science observations much more efficiently, and achieve greater science return as a result. The Interactive 3D Mars surface view is a Mars terrain browsing software interface that encompasses the entire region of exploration for a Mars surface exploration mission. The view is interactive, allowing the user to pan in any direction by clicking and dragging, or to zoom in or out by scrolling the mouse or touchpad. The toolset currently includes a tool for selecting a point of interest, and a ruler tool for displaying the distance between, and positions of, two points of interest. The mapping information can be harvested and shared through ubiquitous online mapping tools like Google Mars, NASA WorldWind, and Worldwide Telescope.

  8. The role of explicit and implicit standards in visual speed discrimination.

    PubMed

    Norman, J Farley; Pattison, Kristina F; Norman, Hideko F; Craft, Amy E; Wiesemann, Elizabeth Y; Taylor, M Jett

    2008-01-01

    Five experiments were designed to investigate visual speed discrimination. Variations of the method of constant stimuli were used to obtain speed discrimination thresholds in experiments 1, 2, 4, and 5, while the method of single stimuli was used in experiment 3. The observers' thresholds were significantly influenced by the choice of psychophysical method and by changes in the standard speed. The observers' judgments were unaffected, however, by changes in the magnitude of random variations in stimulus duration, reinforcing the conclusions of Lappin et al (1975 Journal of Experimental Psychology: Human Perception and Performance 1 383-394). When an implicit standard was used, the observers produced relatively low discrimination thresholds (7.0% of the standard speed), verifying the results of McKee (1981 Vision Research 21 491-500). When an explicit standard was used in a 2AFC variant of the method of constant stimuli, however, the observers' discrimination thresholds increased by 74% (to 12.2%), resembling the high thresholds obtained by Mandriota et al (1962 Science 138 437-438). A subsequent signal-detection analysis revealed that the observers' actual sensitivities to differences in speed were in fact equivalent for both psychophysical methods. The formation of an implicit standard in the method of single stimuli allows human observers to make judgments of speed that are as precise as those obtained when explicit standards are available.
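
    The signal-detection analysis mentioned converts percent correct into d' with design-specific formulas, which is what allows sensitivities from the two psychophysical methods to be compared on a common scale. A sketch using the standard conversions (the proportions below are illustrative, not the study's data):

```python
import math
from statistics import NormalDist

def dprime_2afc(prop_correct):
    """Sensitivity from 2AFC percent correct: d' = sqrt(2) * z(PC)."""
    return math.sqrt(2) * NormalDist().inv_cdf(prop_correct)

def dprime_yes_no(hit_rate, fa_rate):
    """Sensitivity from a single-stimulus (yes/no-style) design:
    d' = z(H) - z(FA)."""
    nd = NormalDist()
    return nd.inv_cdf(hit_rate) - nd.inv_cdf(fa_rate)

d2afc = dprime_2afc(0.76)
dyn = dprime_yes_no(0.69, 0.31)
```

    With these illustrative inputs the two designs yield nearly identical d' values, mirroring the paper's conclusion that the methods differ in measured threshold but not in underlying sensitivity.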

  9. Spatial Visualization in Introductory Geology Courses

    NASA Astrophysics Data System (ADS)

    Reynolds, S. J.

    2004-12-01

    Visualization is critical to solving most geologic problems, which involve events and processes across a broad range of space and time. Accordingly, spatial visualization is an essential part of undergraduate geology courses. In such courses, students learn to visualize three-dimensional topography from two-dimensional contour maps, to observe landscapes and extract clues about how that landscape formed, and to imagine the three-dimensional geometries of geologic structures and how these are expressed on the Earth's surface or on geologic maps. From such data, students reconstruct the geologic history of areas, trying to visualize the sequence of ancient events that formed a landscape. To understand the role of visualization in student learning, we developed numerous interactive QuickTime Virtual Reality animations to teach students the most important visualization skills and approaches. For topography, students can spin and tilt contour-draped, shaded-relief terrains, flood virtual landscapes with water, and slice into terrains to understand profiles. To explore 3D geometries of geologic structures, they interact with virtual blocks that can be spun, sliced into, faulted, and made partially transparent to reveal internal structures. They can tilt planes to see how they interact with topography, and spin and tilt geologic maps draped over digital topography. The GeoWall system allows students to see some of these materials in true stereo. We used various assessments to research the effectiveness of these materials and to document visualization strategies students use. Our research indicates that, compared to control groups, students using such materials improve more in their geologic visualization abilities and in their general visualization abilities as measured by a standard spatial visualization test. Also, females achieve greater gains, improving their general visualization abilities to the same level as males. Misconceptions that students carry obstruct learning, but are largely undocumented. Many students, for example, cannot visualize that the landscape in which rock layers were deposited was different from the landscape in which the rocks are exposed today, even in the Grand Canyon.

  10. Data Visualization: An Exploratory Study into the Software Tools Used by Businesses

    ERIC Educational Resources Information Center

    Diamond, Michael; Mattia, Angela

    2017-01-01

    Data visualization is a key component to business and data analytics, allowing analysts in businesses to create tools such as dashboards for business executives. Various software packages allow businesses to create these tools in order to manipulate data for making informed business decisions. The focus is to examine what skills employers are…

  11. Data Visualization: An Exploratory Study into the Software Tools Used by Businesses

    ERIC Educational Resources Information Center

    Diamond, Michael; Mattia, Angela

    2015-01-01

    Data visualization is a key component to business and data analytics, allowing analysts in businesses to create tools such as dashboards for business executives. Various software packages allow businesses to create these tools in order to manipulate data for making informed business decisions. The focus is to examine what skills employers are…

  12. Visual Literacy in Bloom: Using Bloom's Taxonomy to Support Visual Learning Skills

    ERIC Educational Resources Information Center

    Arneson, Jessie B.; Offerdahl, Erika G.

    2018-01-01

    "Vision and Change" identifies science communication as one of the core competencies in undergraduate biology. Visual representations are an integral part of science communication, allowing ideas to be shared among and between scientists and the public. As such, development of scientific visual literacy should be a desired outcome of…

  13. Conveying Clinical Reasoning Based on Visual Observation via Eye-Movement Modelling Examples

    ERIC Educational Resources Information Center

    Jarodzka, Halszka; Balslev, Thomas; Holmqvist, Kenneth; Nystrom, Marcus; Scheiter, Katharina; Gerjets, Peter; Eika, Berit

    2012-01-01

    Complex perceptual tasks, like clinical reasoning based on visual observations of patients, require not only conceptual knowledge about diagnostic classes but also the skills to visually search for symptoms and interpret these observations. However, medical education so far has focused very little on how visual observation skills can be…

  14. A Quantitative Visual Mapping and Visualization Approach for Deep Ocean Floor Research

    NASA Astrophysics Data System (ADS)

    Hansteen, T. H.; Kwasnitschka, T.

    2013-12-01

    Geological fieldwork on the sea floor is still impaired by our inability to resolve features at sub-meter resolution in a quantifiable reference frame and over an area large enough to reveal the context of local observations. In order to overcome these issues, we have developed an integrated workflow of visual mapping techniques leading to georeferenced data sets, which we examine using state-of-the-art visualization technology to recreate an effective working style of field geology. We demonstrate a microbathymetrical workflow based on photogrammetric reconstruction of ROV imagery referenced to the acoustic vehicle track. The advantage over established acoustical systems lies in the true three-dimensionality of the data, as opposed to the perspective projection from above produced by downward-looking mapping methods. A full-color texture mosaic derived from the imagery allows studies at resolutions beyond the resolved geometry (usually one order of magnitude below the image resolution), while color gives additional clues that can only be partly resolved in acoustic backscatter. The creation of a three-dimensional model changes the working style from the temporal domain of a video recording back to the spatial domain of a map. We examine these datasets using a custom-developed immersive virtual visualization environment. The ARENA (Artificial Research Environment for Networked Analysis) features a (lower) hemispherical screen at a diameter of six meters, accommodating up to four scientists at once, thus providing the ability to browse data interactively among a group of researchers. This environment facilitates (1) the development of spatial understanding analogous to on-land outcrop studies, (2) quantitative observations of seafloor morphology and physical parameters of its deposits, and (3) more effective formulation and communication of working hypotheses.

  15. Remote vs. head-mounted eye-tracking: a comparison using radiologists reading mammograms

    NASA Astrophysics Data System (ADS)

    Mello-Thoms, Claudia; Gur, David

    2007-03-01

    Eye position monitoring has been used for decades in radiology in order to determine how radiologists interpret medical images. Using these devices, several discoveries about the perception/decision-making process have been made, such as the importance of comparisons of perceived abnormalities with selected areas of the background, the likelihood that a true lesion will attract visual attention early in the reading process, and the finding that most misses attract prolonged visual dwell, often comparable to dwell in the location of reported lesions. However, eye position tracking is a cumbersome process, which often requires the observer to wear helmet gear that contains the eye tracker itself and a magnetic head tracker, which allows for the computation of head position. Observers tend to complain of fatigue after wearing the gear for a prolonged time. Recently, with the advances made in remote eye tracking, the use of head-mounted systems seemed destined to become a thing of the past. In this study we evaluated a remote eye-tracking system and compared it to a head-mounted system as radiologists read a case set of one-view mammograms on a high-resolution display. We compared visual search parameters between the two systems, such as time to hit the location of the lesion for the first time, amount of dwell time in the location of the lesion, total time analyzing the image, etc. We also evaluated the observers' impressions of both systems, and what their perceptions were of the restrictions of each system.
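
    The search parameters compared here (time to first hit, dwell in the lesion location) can be computed from a fixation list once the lesion coordinates are known. A minimal sketch, assuming a hypothetical fixation format of (x, y, onset_ms, duration_ms) and a rectangular lesion ROI:

```python
def lesion_dwell_stats(fixations, roi):
    """Time to first hit and total dwell within a lesion ROI.

    fixations: list of (x, y, onset_ms, duration_ms), in temporal order.
    roi: (x_min, y_min, x_max, y_max) bounding box of the lesion location.
    """
    x0, y0, x1, y1 = roi
    first_hit = None
    dwell = 0
    for x, y, onset, dur in fixations:
        if x0 <= x <= x1 and y0 <= y <= y1:
            if first_hit is None:
                first_hit = onset  # onset of the first fixation in the ROI
            dwell += dur
    return first_hit, dwell

# Illustrative fixation sequence: the second and third fixations land
# on the lesion.
fixes = [(100, 100, 0, 250), (510, 320, 250, 400), (520, 330, 650, 300)]
first_hit, dwell = lesion_dwell_stats(fixes, roi=(480, 300, 560, 360))
```

    The same statistics can be computed per tracking system, which is the comparison the study reports.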

  16. Methods study for the relocation of visual information in central scotoma cases

    NASA Astrophysics Data System (ADS)

    Scherlen, Anne-Catherine; Gautier, Vincent

    2005-03-01

    In this study we tested the benefit to reading performance of different ways of relocating the visual information hidden under the scotoma. Relocation (or unmasking) compensates for the loss of information and prevents the patient from developing viewing strategies poorly adapted to reading. Eight healthy subjects performed a reading task while a central scotoma of various sizes was simulated for each. We then evaluated reading speed (words/min) under three relocation methods: all masked information relocated (i) to both sides of the scotoma or (ii) to the right of the scotoma, and (iii) only the letters essential for word recognition relocated to the right of the scotoma. We compared these reading speeds with the pathological condition, i.e. without relocating visual information. Our results show that unmasking improves reading speed when all of the visual information is unmasked to the right of the scotoma, though only for large scotomas. Taking word morphology into account, the perception of only certain letters outside the scotoma can be sufficient to improve reading speed. A deeper understanding of reading processes in the presence of a scotoma will open new perspectives for visual information unmasking. Multidisciplinary expertise from engineers, ophthalmologists, linguists and clinicians should make it possible to optimize the reading benefit of unmasking.
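
    A hypothetical sketch of the "relocate to the right" manipulation on a single line of text (the authors' gaze-contingent display is not described at this level of detail; the masking character and indices below are illustrative):

```python
def relocate_right(line, start, end, mask="#"):
    """Simulate a central scotoma over line[start:end] and relocate the
    hidden characters immediately to the right of the scotoma."""
    hidden = line[start:end]                     # text the scotoma masks
    masked = line[:start] + mask * (end - start) # scotoma placeholder
    return masked + hidden + line[end:]

shown = relocate_right("reading with a scotoma", 8, 14)
```

    Variant (iii) of the record would relocate only a subset of `hidden` (the letters essential for word recognition) instead of the whole span.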

  17. Motion based parsing for video from observational psychology

    NASA Astrophysics Data System (ADS)

    Kokaram, Anil; Doyle, Erika; Lennon, Daire; Joyeux, Laurent; Fuller, Ray

    2006-01-01

    In Psychology it is common to conduct studies involving the observation of humans undertaking some task. The sessions are typically recorded on video and used for subjective visual analysis. The subjective analysis is tedious and time consuming, not only because much useless video material is recorded but also because subjective measures of human behaviour are not necessarily repeatable. This paper presents tools using content based video analysis that allow automated parsing of video from one such study involving Dyslexia. The tools rely on implicit measures of human motion that can be generalised to other applications in the domain of human observation. Results comparing quantitative assessment of human motion with subjective assessment are also presented, illustrating that the system is a useful scientific tool.
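
    The "implicit measures of human motion" are not specified in detail in the record; a crude, illustrative stand-in is frame-differencing motion energy, thresholded to parse a clip into active and inactive segments (all names and thresholds here are assumptions):

```python
def motion_energy(prev_frame, frame, threshold=10):
    """Per-frame motion measure: fraction of pixels whose absolute
    intensity change between consecutive frames exceeds a threshold."""
    changed = 0
    total = 0
    for row_a, row_b in zip(prev_frame, frame):
        for a, b in zip(row_a, row_b):
            total += 1
            if abs(a - b) > threshold:
                changed += 1
    return changed / total

def segment_active(energies, level=0.2):
    """Parse a clip into active/inactive frames by thresholding the
    motion-energy signal -- a stand-in for content-based parsing."""
    return [e > level for e in energies]

# 2x2 toy frames: one of four pixels changes between frames.
f0 = [[0, 0], [0, 0]]
f1 = [[0, 0], [0, 200]]
energy = motion_energy(f0, f1)
```

    A quantitative signal like this is repeatable across viewings, which is the advantage the paper claims over purely subjective annotation.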

  18. Lateralized hybrid faces: evidence of a valence-specific bias in the processing of implicit emotions.

    PubMed

    Prete, Giulia; Laeng, Bruno; Tommasi, Luca

    2014-01-01

    It is well known that hemispheric asymmetries exist for both the analyses of low-level visual information (such as spatial frequency) and high-level visual information (such as emotional expressions). In this study, we assessed which of the above factors underlies perceptual laterality effects with "hybrid faces": a type of stimulus that allows testing for unaware processing of emotional expressions, when the emotion is displayed in the low-frequency information while an image of the same face with a neutral expression is superimposed to it. Despite hybrid faces being perceived as neutral, the emotional information modulates observers' social judgements. In the present study, participants were asked to assess friendliness of hybrid faces displayed tachistoscopically, either centrally or laterally to fixation. We found a clear influence of the hidden emotions also with lateral presentations. Happy faces were rated as more friendly and angry faces as less friendly with respect to neutral faces. In general, hybrid faces were evaluated as less friendly when they were presented in the left visual field/right hemisphere than in the right visual field/left hemisphere. The results extend the validity of the valence hypothesis in the specific domain of unaware (subcortical) emotion processing.
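
    Hybrid stimuli of this kind combine the low spatial frequencies of an emotional image with the high-frequency residual of a neutral one. A toy sketch on 1-D luminance profiles, using a moving average as the low-pass filter (the actual stimuli apply calibrated spatial-frequency filters to face images):

```python
def low_pass(signal, k=3):
    """Crude low-pass: centered moving average with edge clamping."""
    n = len(signal)
    out = []
    for i in range(n):
        window = [signal[min(max(j, 0), n - 1)]
                  for j in range(i - k // 2, i + k // 2 + 1)]
        out.append(sum(window) / len(window))
    return out

def hybrid(emotional, neutral, k=3):
    """Hybrid stimulus: low frequencies of the emotional image plus the
    high-frequency residual of the neutral image."""
    lo = low_pass(emotional, k)
    hi = [nv - lv for nv, lv in zip(neutral, low_pass(neutral, k))]
    return [a + b for a, b in zip(lo, hi)]

profile_emotional = [0.2, 0.8, 0.3, 0.9, 0.1]
profile_neutral = [0.5, 0.5, 0.6, 0.4, 0.5]
h = hybrid(profile_emotional, profile_neutral)
```

    Because the high frequencies come from the neutral face, the hybrid is perceived as neutral while the low-frequency emotional content can still bias judgements, as the record describes.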

  19. The differential optomotor response of the four-eyed fish Anableps anableps.

    PubMed

    Albensi, B C; Powell, J H

    1998-01-01

    The perception of motion is important for the survival and reproduction of many animals, including fish. In the laboratory, support for this idea comes from the observation that many fish show a tendency to follow a series of stripes revolving around a circular aquarium. This response, known as the optomotor response (OMR), is recognized as an innate behavior in many species. The 'four-eyed' fishes of the genus Anableps are unusual fish from Central and South America that, despite their name, actually have only two eyes. Each eye is divided into upper and lower halves, both internally and externally. This peculiar dual visual system allows Anableps to feed on creatures that swim or land near or on the water surface, or to flee from flying predators attacking from above. It was hypothesized that Anableps should also possess the OMR. We used the OMR as a test to investigate potential differential visual processing in Anableps in normal and 'blinded' fish (the eyes were actually covered, not physically blinded). It was found that the OMR does exist in Anableps and that the strength of this response depends on the visual field being tested: a stronger OMR was seen as a result of visual stimulation from the aerial environment.

  20. No evidence for enhancements to visual working memory with transcranial direct current stimulation to prefrontal or posterior parietal cortices.

    PubMed

    Robison, Matthew K; McGuirk, William P; Unsworth, Nash

    2017-08-01

    The present study examined the relative contributions of the prefrontal cortex (PFC) and posterior parietal cortex (PPC) to visual working memory. Evidence from a number of different techniques has led to the theory that the PFC controls access to working memory (i.e., filtering), determining which information is encoded and maintained for later use, whereas the parietal cortex determines how much information is held at any given time, regardless of relevance (i.e., capacity; McNab & Klingberg, 2008; Vogel, McCollough, & Machizawa, 2005). To test this theory, we delivered transcranial direct current stimulation (tDCS) to the right PFC and right PPC and measured visual working memory capacity and filtering abilities both during and immediately following stimulation. We observed no evidence that tDCS to either the PFC or PPC significantly improved visual working memory. Although the present results did not allow us to make firm theoretical conclusions about the roles of the PFC and PPC in working memory, the results add to the growing body of literature surrounding tDCS and its associated behavioral and neurophysiological effects. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  1. Global Precipitation Mission Visualization Tool

    NASA Technical Reports Server (NTRS)

    Schwaller, Mathew

    2011-01-01

    The Global Precipitation Mission (GPM) software provides graphic visualization tools that enable easy comparison of ground- and space-based radar observations. It was initially designed to compare ground radar reflectivity from operational, ground-based, S- and C-band meteorological radars with comparable measurements from the Tropical Rainfall Measuring Mission (TRMM) satellite's precipitation radar instrument. This design is also applicable to other ground-based and space-based radars, and allows both ground- and space-based radar data to be compared for validation purposes. The tool creates an operational system that routinely performs several steps. It ingests satellite radar data (precipitation radar data from TRMM) and ground-based meteorological radar data from a number of sources. Principally, the ground radar data come from national networks of weather radars (see figure). The data ingested by the visualization tool must conform to the data formats used in GPM Validation Network Geometry-matched data product generation. The software also performs match-ups of the radar volume data for the ground- and space-based data, as well as statistical and graphical analysis (including two-dimensional graphical displays) on the match-up data. The visualization tool software is written in IDL, and can be operated either in the IDL development environment or as a stand-alone executable function.
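
    The match-up statistics step can be illustrated with a toy comparison of matched ground- and space-radar reflectivity volumes. The actual tool is written in IDL; this Python sketch uses invented dBZ values purely for illustration:

    ```python
    # Toy ground- vs space-radar reflectivity match-up statistics.
    # The dBZ values below are invented sample data, not GPM output.
    ground = [32.0, 40.5, 28.0, 35.2]
    space  = [30.5, 41.0, 27.0, 36.0]

    # Per-volume difference and simple summary statistics, as a
    # stand-in for the tool's statistical analysis of match-up data
    diffs = [g - s for g, s in zip(ground, space)]
    bias = sum(diffs) / len(diffs)
    rmse = (sum(d * d for d in diffs) / len(diffs)) ** 0.5
    print(round(bias, 2), round(rmse, 2))
    ```

    A real match-up would first pair radar volumes geometrically (as in the Validation Network products) before computing such statistics.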

  2. The Meteor and Fireball Network of the Sociedad Malagueña de Astronomía

    NASA Astrophysics Data System (ADS)

    Aznar, J. C.; Castellón, A.; Gálvez, F.; Martínez, E.; Troughton, B.; Núñez, J. M.; Villalba, F.

    2016-12-01

    One of the fields to which the Málaga Astronomical Society (SMA) has been most actively dedicated is the study of meteors and meteor showers. Since 2006, the SMA has reported visual observations and photographic detections from the El Pinillo station (Torremolinos, Spain). In 2013 it was decided to give the effort an extra boost by building a camera network that would allow the calculation of the atmospheric trajectory of a meteoroid and, where possible, the derivation of its orbital elements.

  3. Control of Surface Attack by Gallium Alloys in Electrical Contacts.

    DTIC Science & Technology

    1986-03-28

    and atmospheric control but does not allow visual observation of the contact brushes. This machine is a small homopolar motor built from mild steel... Keywords: collectors, gallium, homopolar devices, liquid metals. Electrical contact between a copper... Figure list: test rig with felt metal brushes; homopolar test apparatus; rewetting of alloy track; alloy track after running with finger brushes.

  4. Visually Tracking Translocations in Living Cells | Center for Cancer Research

    Cancer.gov

    Chromosomal translocations, the fusion of pieces of DNA from different chromosomes, are often observed in cancer cells and can even cause cancer. However, little is known about the dynamics and regulation of translocation formation. To investigate this critical process, Tom Misteli, Ph.D., in CCR’s Laboratory of Receptor Biology and Gene Expression, and his colleague Vassilis Roukos, Ph.D., developed a novel experimental system that allowed the researchers to see, for the first time, translocations form in individual, live cells.

  5. Computer-Based Tools for Inquiry in Undergraduate Classrooms: Results from the VGEE

    NASA Astrophysics Data System (ADS)

    Pandya, R. E.; Bramer, D. J.; Elliott, D.; Hay, K. E.; Mallaiahgari, L.; Marlino, M. R.; Middleton, D.; Ramamurthy, M. K.; Scheitlin, T.; Weingroff, M.; Wilhelmson, R.; Yoder, J.

    2002-05-01

    The Visual Geophysical Exploration Environment (VGEE) is a suite of computer-based tools designed to help learners connect observable, large-scale geophysical phenomena to underlying physical principles. Technologically, this connection is mediated by Java-based interactive tools: a multi-dimensional visualization environment, authentic scientific data-sets, concept models that illustrate fundamental physical principles, and an interactive web-based work management system for archiving and evaluating learners' progress. Our preliminary investigations showed, however, that the tools alone are not sufficient to empower undergraduate learners; learners have trouble organizing inquiry and using the visualization tools effectively. To address these issues, the VGEE includes an inquiry strategy and scaffolding activities that are similar to strategies used successfully in K-12 classrooms. The strategy is organized around four steps: identify, relate, explain, and integrate. In the first step, students construct visualizations from data to try to identify salient features of a particular phenomenon. They compare their previous conceptions of a phenomenon to the data to examine their current knowledge and motivate investigation. Next, students use the multivariable functionality of the visualization environment to relate the different features they identified. Explain moves the learner temporarily outside the visualization to the concept models, where they explore fundamental physical principles. Finally, in integrate, learners use these fundamental principles within the visualization environment by literally placing the concept model within the visualization environment as a probe and watching it respond to larger-scale patterns. This capability, unique to the VGEE, addresses the disconnect that novice learners often experience between fundamental physics and observable phenomena.
It also gives learners the opportunity to reflect on and refine their knowledge, as well as to anchor it within a context for long-term retention. We are implementing the VGEE in one of two otherwise identical entry-level atmospheric science courses. In addition to comparing student learning and attitudes in the two courses, we are analyzing student participation with the VGEE to evaluate its effectiveness and usability. In particular, we seek to identify the scaffolding students need to construct physically meaningful multi-dimensional visualizations, and to evaluate the effectiveness of the visualization-embedded concept models in addressing inert knowledge. We will also examine the utility of the inquiry strategy in developing content knowledge, process-of-science knowledge, and discipline-specific investigatory skills. Our presentation will include video examples of student use to illustrate our findings.

  6. Interactive Webmap-Based Science Planning for BepiColombo

    NASA Astrophysics Data System (ADS)

    McAuliffe, J.; Martinez, S.; Ortiz de Landaluce, I.; de la Fuente, S.

    2015-10-01

    For BepiColombo, ESA's mission to Mercury, we will build a web-based, map-based interface to the Science Planning System. This interface will allow the mission's science teams to visually define targets for observations and interactively specify which operations will make up a given observation. This will be a radical departure from previous ESA mission-planning methods. Such an interface will rely heavily on GIS technologies. This interface will provide footprint coverage of all existing archived data for Mercury, including a set of built-in basemaps. This will allow the science teams to analyse their planned observations and operational constraints with relevant contextual information from their own instrument, other BepiColombo instruments or previous missions. The interface will allow users to import and export data in commonly used GIS formats, so that it can be visualised together with the latest planning information (e.g. import custom basemaps) or analysed in other GIS software. The interface will work with an object-oriented concept of an observation that will be a key characteristic of the overall BepiColombo science-planning concept. Observation templates or classes will be tracked right through the planning-execution-processing-archiving cycle to the final archived science products. By using an interface that synthesises all relevant available information, the science teams will have a better understanding of the operational environment; it will enhance their ability to plan efficiently, minimising or removing manual planning steps. Interactive 3D visualisation of the planned, scheduled and executed observations, simulation of the viewing conditions and interactive modification of the observation parameters are also being considered.

  7. On the barn owl's visual pre-attack behavior: I. Structure of head movements and motion patterns.

    PubMed

    Ohayon, Shay; van der Willigen, Robert F; Wagner, Hermann; Katsman, Igor; Rivlin, Ehud

    2006-09-01

    Barn owls exhibit a rich repertoire of head movements before taking off for prey capture. These movements occur mainly at light levels that allow for the visual detection of prey. To investigate these movements and their functional relevance, we filmed the pre-attack behavior of barn owls. Off-line image analysis enabled reconstruction of all six degrees of freedom of head movements. Three categories of head movements were observed: fixations, head translations and head rotations. The observed rotations contained a translational component. Head rotations did not follow Listing's law, but could be well described by a second-order surface, indicating that they are in close agreement with Donders' law. Head translations did not contain any significant rotational components. Translations were further segmented into straight-line and curved paths. Translations along an axis perpendicular to the line of sight were similar to peering movements observed in other animals. We suggest that these basic motion elements (fixations, head rotations, translations along a straight line, and translations along a curved trajectory) may be combined to form longer and more complex behaviors. We speculate that these head movements mainly underlie the estimation of distance during prey capture.

  8. Preliminary Study on Diffraction Enhanced Radiographic Imaging for a Canine Model of Cartilage Damage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muehleman, C.; Li, J.; Zhong, Z.

    2006-01-01

    Objective: To demonstrate the ability of a novel radiographic technique, Diffraction Enhanced Radiographic Imaging (DEI), to render high-contrast images of canine knee joints for the identification of cartilage lesions in situ. Methods: DEI was carried out at the X-15A beamline at Brookhaven National Laboratory on intact canine knee joints with varying levels of cartilage damage. Two independent observers graded the DE images for lesions, and these grades were correlated with the gross morphological grade. Results: The correlation of gross visual grades with DEI grades for the 18 canine knee joints as determined by observer 1 (r2=0.8856, P=0.001) and observer 2 (r2=0.8818, P=0.001) was high. The overall weighted κ value for inter-observer agreement was 0.93, considered high agreement. Conclusion: The present study is the first to examine the efficacy of DEI for cartilage lesions in an animal joint, from very early signs through erosion down to subchondral bone, representing the spectrum of cartilage changes occurring in human osteoarthritis (OA). Here we show that DEI allows the visualization of cartilage lesions in intact canine knee joints with good accuracy. Hence, DEI may be applicable for following joint degeneration in animal models of OA.
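
    The weighted kappa (κ) statistic used above to quantify inter-observer agreement can be computed as in the following sketch. The grade data and the linear weighting scheme are illustrative assumptions; the abstract does not state which weighting was used:

    ```python
    import numpy as np

    def weighted_kappa(rater1, rater2, n_categories, weights="linear"):
        """Weighted Cohen's kappa for two raters' ordinal grades (0-based)."""
        r1, r2 = np.asarray(rater1), np.asarray(rater2)
        # Observed joint distribution of the two raters' grades
        obs = np.zeros((n_categories, n_categories))
        for a, b in zip(r1, r2):
            obs[a, b] += 1
        obs /= obs.sum()
        # Expected joint distribution if the raters were independent
        exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))
        # Disagreement weights: zero on the diagonal, growing off it
        i, j = np.indices((n_categories, n_categories))
        w = np.abs(i - j) if weights == "linear" else (i - j) ** 2
        return 1.0 - (w * obs).sum() / (w * exp).sum()

    # Hypothetical lesion grades (0-3) for ten joints from two observers
    grades_obs1 = [0, 1, 2, 2, 3, 1, 0, 3, 2, 1]
    grades_obs2 = [0, 1, 2, 3, 3, 1, 0, 3, 2, 2]
    kappa = weighted_kappa(grades_obs1, grades_obs2, 4)
    print(round(kappa, 3))  # high agreement on this toy data
    ```

    Perfect agreement yields κ = 1; chance-level agreement yields κ = 0.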

  9. Interactive 4D Visualization of Sediment Transport Models

    NASA Astrophysics Data System (ADS)

    Butkiewicz, T.; Englert, C. M.

    2013-12-01

    Coastal sediment transport models simulate the effects that waves, currents, and tides have on near-shore bathymetry and features such as beaches and barrier islands. Understanding these dynamic processes is integral to the study of coastline stability, beach erosion, and environmental contamination. Furthermore, analyzing the results of these simulations is a critical task in the design, placement, and engineering of coastal structures such as seawalls, jetties, support pilings for wind turbines, etc. Despite the importance of these models, there is a lack of available visualization software that allows users to explore and perform analysis on these datasets in an intuitive and effective manner. Existing visualization interfaces for these datasets often present only one variable at a time, using two dimensional plan or cross-sectional views. These visual restrictions limit the ability to observe the contents in the proper overall context, both in spatial and multi-dimensional terms. To improve upon these limitations, we use 3D rendering and particle system based illustration techniques to show water column/flow data across all depths simultaneously. We can also encode multiple variables across different perceptual channels (color, texture, motion, etc.) to enrich surfaces with multi-dimensional information. Interactive tools are provided, which can be used to explore the dataset and find regions-of-interest for further investigation. Our visualization package provides an intuitive 4D (3D, time-varying) visualization of sediment transport model output. In addition, we are also integrating real world observations with the simulated data to support analysis of the impact from major sediment transport events. In particular, we have been focusing on the effects of Superstorm Sandy on the Redbird Artificial Reef Site, offshore of Delaware Bay. 
Based on our pre- and post-storm high-resolution sonar surveys, there has been significant scour and bedform migration around the sunken subway cars and other vessels present at the Redbird site. Due to the extensive surveying and historical data availability in the area, the site is highly attractive for comparing hindcasted sediment transport simulations to our observations of actual changes. This work has the potential to strengthen the accuracy of sediment transport modeling, as well as help predict and prepare for future changes due to similar extreme sediment transport events. Our visualization shows a simple sediment transport model with tidal flows causing significant erosion (red) and deposition (blue).

  10. SnopViz, an interactive snow profile visualization tool

    NASA Astrophysics Data System (ADS)

    Fierz, Charles; Egger, Thomas; Gerber, Matthias; Bavay, Mathias; Techel, Frank

    2016-04-01

    SnopViz is a visualization tool for both simulation outputs of the snow-cover model SNOWPACK and observed snow profiles. It has been designed to fulfil the needs of operational services (Swiss Avalanche Warning Service, Avalanche Canada) as well as to offer the flexibility required to satisfy the specific needs of researchers. This JavaScript application runs on any modern browser and does not require an active Internet connection. The open source code is available for download from models.slf.ch, where examples can also be run. Both the SnopViz library and the SnopViz User Interface will become a full replacement of the current research visualization tool SN_GUI for SNOWPACK. The SnopViz library is a stand-alone application that parses the provided input files, for example, a single snow profile (CAAML file format) or multiple snow profiles as output by SNOWPACK (PRO file format). A plugin architecture allows for handling JSON objects (JavaScript Object Notation) as well, and plugins for other file formats may be added easily. The outputs are provided either as vector graphics (SVG) or JSON objects. The SnopViz User Interface (UI) is a browser-based stand-alone interface. It runs in every modern browser, including IE, and allows user interaction with the graphs. SVG, the XML-based standard for vector graphics, was chosen because of its easy interaction with JavaScript and good software support (Adobe Illustrator, Inkscape) for manipulating graphs outside SnopViz for publication purposes. SnopViz provides new visualization for SNOWPACK timeline output as well as time series input and output. The current output format for SNOWPACK timelines was retained, while time series are read from SMET files, a file format used in conjunction with the open source data handling code MeteoIO. Finally, SnopViz is able to render single snow profiles, either observed or modelled, that are provided as CAAML files.
This file format (caaml.org/Schemas/V5.0/Profiles/SnowProfileIACS) is an international standard to exchange snow profile data. It is supported by the International Association of Cryospheric Sciences (IACS) and was developed in collaboration with practitioners (Avalanche Canada).
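
    The plugin architecture described above, in which each parser plugin handles one input format and emits a common profile representation, might be sketched as follows. SnopViz itself is written in JavaScript; this Python sketch invents the format names, the toy SMET-like syntax, and the profile dictionary layout:

    ```python
    import json

    # Registry mapping a format name to its parser plugin.
    # All names and structures here are illustrative assumptions.
    PARSERS = {}

    def register(fmt):
        """Decorator that registers a parser plugin for one format."""
        def wrap(fn):
            PARSERS[fmt] = fn
            return fn
        return wrap

    @register("json")
    def parse_json(text):
        return json.loads(text)

    @register("smet")
    def parse_smet(text):
        # Toy SMET-like parser: "timestamp value" pairs, one per line
        rows = [line.split() for line in text.strip().splitlines()]
        return {"series": [(ts, float(v)) for ts, v in rows]}

    def load(fmt, text):
        """Dispatch to whichever plugin handles the given format."""
        return PARSERS[fmt](text)

    profile = load("smet", "2016-01-01T00:00 1.5\n2016-01-01T01:00 1.7")
    print(len(profile["series"]))
    ```

    Adding support for a new file format then amounts to registering one more parser function, without touching the dispatch code.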

  11. Voluntarily controlled but not merely observed visual feedback affects postural sway

    PubMed Central

    Asai, Tomohisa; Hiromitsu, Kentaro; Imamizu, Hiroshi

    2018-01-01

    Online stabilization of human standing posture utilizes multisensory afferences (e.g., vision). Whereas visual feedback of spontaneous postural sway can stabilize postural control especially when observers concentrate on their body and intend to minimize postural sway, the effect of intentional control of visual feedback on postural sway itself remains unclear. This study assessed quiet standing posture in healthy adults voluntarily controlling or merely observing visual feedback. The visual feedback (moving square) had either low or high gain and was either horizontally flipped or not. Participants in the voluntary-control group were instructed to minimize their postural sway while voluntarily controlling visual feedback, whereas those in the observation group were instructed to minimize their postural sway while merely observing visual feedback. As a result, magnified and flipped visual feedback increased postural sway only in the voluntary-control group. Furthermore, regardless of the instructions and feedback manipulations, the experienced sense of control over visual feedback positively correlated with the magnitude of postural sway. We suggest that voluntarily controlled, but not merely observed, visual feedback is incorporated into the feedback control system for posture and begins to affect postural sway. PMID:29682421

  12. Visual Working Memory Supports the Inhibition of Previously Processed Information: Evidence from Preview Search

    ERIC Educational Resources Information Center

    Al-Aidroos, Naseem; Emrich, Stephen M.; Ferber, Susanne; Pratt, Jay

    2012-01-01

    In four experiments we assessed whether visual working memory (VWM) maintains a record of previously processed visual information, allowing old information to be inhibited, and new information to be prioritized. Specifically, we evaluated whether VWM contributes to the inhibition (i.e., visual marking) of previewed distractors in a preview search.…

  13. Application of Frameworks in the Analysis and (Re)design of Interactive Visual Learning Tools

    ERIC Educational Resources Information Center

    Liang, Hai-Ning; Sedig, Kamran

    2009-01-01

    Interactive visual learning tools (IVLTs) are software environments that encode and display information visually and allow learners to interact with the visual information. This article examines the application and utility of frameworks in the analysis and design of IVLTs at the micro level. Frameworks play an important role in any design. They…

  14. Covert enaction at work: Recording the continuous movements of visuospatial attention to visible or imagined targets by means of Steady-State Visual Evoked Potentials (SSVEPs).

    PubMed

    Gregori Grgič, Regina; Calore, Enrico; de'Sperati, Claudio

    2016-01-01

    Whereas overt visuospatial attention is customarily measured with eye tracking, covert attention is assessed by various methods. Here we exploited Steady-State Visual Evoked Potentials (SSVEPs) - the oscillatory responses of the visual cortex to incoming flickering stimuli - to record the movements of covert visuospatial attention in a way operatively similar to eye tracking (attention tracking), which allowed us to compare motion observation and motion extrapolation with and without eye movements. Observers fixated a central dot and covertly tracked a target oscillating horizontally and sinusoidally. In the background, the left and the right halves of the screen flickered at two different frequencies, generating two SSVEPs in occipital regions whose size varied reciprocally as observers attended to the moving target. The two signals were combined into a single quantity that was modulated at the target frequency in a quasi-sinusoidal way, often clearly visible in single trials. The modulation continued almost unchanged when the target was switched off and observers mentally extrapolated its motion in imagery, and also when observers pointed their finger at the moving target during covert tracking, or imagined doing so. The amplitude of modulation during covert tracking was ∼25-30% of that measured when observers followed the target with their eyes. We used 4 electrodes in parieto-occipital areas, but similar results were achieved with a single electrode in Oz. In a second experiment we tested ramp and step motion. During overt tracking, SSVEPs were remarkably accurate, showing both saccadic-like and smooth pursuit-like modulations of cortical responsiveness, although during covert tracking the modulation deteriorated. Covert tracking was better with sinusoidal motion than ramp motion, and better with moving targets than stationary ones. 
The clear modulation of cortical responsiveness recorded during both overt and covert tracking, identical for motion observation and motion extrapolation, suggests including covert attention movements in enactive theories of mental imagery. Copyright © 2015 Elsevier Ltd. All rights reserved.
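
    The "single quantity" combining the two flicker-tagged signals can be illustrated with a normalized left-right difference of the two SSVEP amplitudes. The reciprocal-modulation model, gain, and frequencies below are illustrative assumptions, not the study's published parameters:

    ```python
    import numpy as np

    fs = 250.0                      # sampling rate (Hz), assumed
    t = np.arange(0, 10, 1 / fs)    # 10 s trial
    f_target = 0.5                  # target oscillation (Hz), assumed

    # Target position: sinusoidal horizontal motion in [-1, 1]
    pos = np.sin(2 * np.pi * f_target * t)

    # Simulated SSVEP envelopes: attending rightward boosts the
    # right-hemifield tag and suppresses the left one (reciprocal).
    amp_right = 1.0 + 0.3 * pos
    amp_left = 1.0 - 0.3 * pos

    # Single combined quantity: normalized left/right difference
    index = (amp_right - amp_left) / (amp_right + amp_left)

    # In this noiseless simulation the index recovers the target
    # motion up to a gain factor
    corr = np.corrcoef(index, pos)[0, 1]
    print(round(corr, 3))
    ```

    With real EEG the envelopes would be extracted by narrow-band filtering at the two flicker frequencies, and noise would reduce the correlation well below this idealized value.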

  15. Simultaneous in vivo visualization and localization of solid oral dosage forms in the rat gastrointestinal tract by magnetic resonance imaging (MRI).

    PubMed

    Christmann, V; Rosenberg, J; Seega, J; Lehr, C M

    1997-08-01

    Bioavailability of orally administered drugs is much influenced by the behavior, performance and fate of the dosage form within the gastrointestinal (GI) tract. Therefore, MRI in vivo methods that allow for the simultaneous visualization of solid oral dosage forms and anatomical structures of the GI tract have been investigated. Oral contrast agents containing Gd-DTPA were used to depict the lumen of the digestive organs. Solid oral dosage forms were visualized in a rat model by a 1H-MRI double contrast technique (magnetite-labelled microtablets) and a combination of 1H- and 19F-MRI (fluorine-labelled minicapsules). Simultaneous visualization of solid oral dosage forms and the GI environment in the rat was possible using MRI. Microtablets could reproducibly be monitored in the rat stomach and in the intestines using a 1H-MRI double contrast technique. Fluorine-labelled minicapsules were detectable in the rat stomach by a combination of 1H- and 19F-MRI in vivo. The in vivo 1H-MRI double contrast technique described allows solid oral dosage forms in the rat GI tract to be depicted. Solid dosage forms can easily be labelled by incorporating trace amounts of non-toxic iron oxide (magnetite) particles. 1H-MRI is a promising tool for observing such pharmaceutical dosage forms in humans. Combined 1H- and 19F-MRI offer a means of unambiguously localizing solid oral dosage forms in more distal parts of the GI tract. Studies correlating MRI examinations with drug plasma levels could provide valuable information for the development of pharmaceutical dosage forms.

  16. Visual Analytics for Heterogeneous Geoscience Data

    NASA Astrophysics Data System (ADS)

    Pan, Y.; Yu, L.; Zhu, F.; Rilee, M. L.; Kuo, K. S.; Jiang, H.; Yu, H.

    2017-12-01

    Geoscience data obtained from diverse sources are routinely leveraged by scientists to study various phenomena. The principal data sources include observations and model simulation outputs. These data are characterized by spatiotemporal heterogeneity originating from the different instrument design specifications and/or computational model requirements used in the data generation processes. Such inherent heterogeneity poses several challenges in exploring and analyzing geoscience data. First, scientists often wish to identify features or patterns co-located among multiple data sources to derive and validate hypotheses. Heterogeneous data make it a tedious task to search for such features across dissimilar datasets. Second, features of geoscience data are typically multivariate. It is challenging to tackle the high dimensionality of geoscience data and explore the relations among multiple variables in a scalable fashion. Third, there is a lack of transparency in traditional automated approaches, such as feature detection or clustering, in that scientists cannot intuitively interact with their analysis processes and interpret results. To address these issues, we present a new scalable approach that can assist scientists in analyzing voluminous and diverse geoscience data. We expose a high-level query interface that allows users to easily express customized queries to search for features of interest across multiple heterogeneous datasets. For identified features, we develop a visualization interface that enables interactive exploration and analytics in a linked-view manner. Specific visualization techniques, ranging from scatter plots to parallel coordinates, are employed in each view to allow users to explore various aspects of the features. Different views are linked and refreshed according to user interactions in any individual view. In this manner, a user can interactively and iteratively gain insight into the data through a variety of visual analytics operations.
We demonstrate with use cases how scientists can combine the query and visualization interfaces to enable a customized workflow facilitating studies using heterogeneous geoscience datasets.
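
    A high-level query interface of the kind described might look like the following sketch, in which user-supplied predicates are evaluated over two differently structured datasets and the resulting features are co-located. The schema, field names, thresholds, and tolerance are invented for illustration:

    ```python
    # Minimal sketch of a cross-dataset feature query (invented schema).
    observations = [
        {"lat": 10.0, "lon": 20.0, "precip_mm": 35.0},
        {"lat": 10.1, "lon": 20.1, "precip_mm": 2.0},
    ]
    simulation = [
        {"lat": 10.0, "lon": 20.0, "cloud_top_km": 12.5},
        {"lat": 50.0, "lon": 60.0, "cloud_top_km": 4.0},
    ]

    def query(dataset, predicate):
        """Filter one dataset with a user-supplied predicate."""
        return [rec for rec in dataset if predicate(rec)]

    def colocate(a, b, tol=0.2):
        """Pair records from two datasets within tol degrees of each other."""
        return [(x, y) for x in a for y in b
                if abs(x["lat"] - y["lat"]) <= tol
                and abs(x["lon"] - y["lon"]) <= tol]

    heavy_rain = query(observations, lambda r: r["precip_mm"] > 10)
    deep_cloud = query(simulation, lambda r: r["cloud_top_km"] > 10)
    matches = colocate(heavy_rain, deep_cloud)
    print(len(matches))
    ```

    The co-located pairs would then be handed to the linked visualization views for interactive exploration; a scalable implementation would replace the brute-force pairing with a spatial index.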

  17. Titan's lakes and Mare observed by the Visual and Infrared Mapping Spectrometer

    NASA Astrophysics Data System (ADS)

    Brown, R. H.; Soderblom, L. A.; Sotin, C.; Barnes, J. W.; Hayes, A. G.; Lawrence, K. J.; Le Mouelic, S.; Rodriguez, S.; Soderblom, J. M.; Baines, K. H.; Buratti, B. J.; Clark, R. N.; Jaumann, R.; Nicholson, P. D.; Stephan, K.

    2012-04-01

    Titan is the only place, besides Earth, that holds stable liquid bodies at its surface. The large Kraken Mare, first seen by ISS [1], was then observed by the radar instrument, which discovered a large number of small lakes as well as two other Mare [2]. The liquid nature of these radar-dark features was later confirmed by the specular reflection observed by the Visual and Infrared Mapping Spectrometer (VIMS) over Kraken Mare [3] and by the very low albedo at 5-micron over Ontario Lacus [4]. The three largest lakes are called Mare and are all located in the north polar area. It is remarkable that most of these lakes have been observed near the North Pole, with only one large lake, Ontario Lacus, located in the south polar area. This observation suggests the influence of orbital parameters on the meteorology and on the occurrence of the rainfalls that refill the depressions [5]. Ethane was detected by the VIMS instrument as one component of Ontario Lacus [4]. These lakes and Mare play a key role in Titan's meteorology, as demonstrated by recent global circulation models [6]. Determining the composition and the evolution of those lakes has become a primary science objective of the Cassini extended mission. Since Titan entered northern spring in August 2009, the North Pole has been illuminated, allowing observations at optical wavelengths. On June 5, 2010 the Visual and Infrared Mapping Spectrometer (VIMS) onboard the Cassini spacecraft observed the north polar area with a pixel size of 3 to 7 km. These observations demonstrate that little of the solar flux at 5-micron is scattered by the atmosphere, which allowed us to build a mosaic covering an area of more than 500,000 km2 that overlaps and complements observations made by the Synthetic Aperture Radar (SAR) in 2007. We find an excellent correlation between the shape of the radar-dark area known as Ligeia Mare and the VIMS 5-micron dark unit. 
Matching most of the radar shoreline, the 2010 VIMS observations suggest that the 125,000-km2 surface area of Ligeia Mare measured by RADAR in 2007 has not significantly changed [7]. The analysis of the 2-micron spectral window confirms the presence of ethane [8]. Because its saturation vapor pressure is several orders of magnitude smaller than that of methane, liquid ethane is expected to be very stable at Titan's surface conditions, which could explain the stability of the shorelines if ethane is the major compound of the lakes. VIMS observations of Ontario Lacus are planned in 2012 before it disappears in the polar night. Several observations of the northern lakes are planned in 2012 as well as observations of the Mare later in the mission. This work has been performed at the Jet Propulsion Laboratory, California Institute of Technology, under contract to NASA. Government sponsorship acknowledged.

  18. Gonioscopy-assisted Transluminal Trabeculotomy (GATT): Thermal Suture Modification With a Dye-stained Rounded Tip.

    PubMed

    Grover, Davinder S; Fellman, Ronald L

    2016-06-01

    To describe a novel technique for thermally marking the tip of a suture, in preparation for a gonioscopy-assisted transluminal trabeculotomy. One patient was used as an example for this technique. Technique report. The authors introduce a modification of a novel surgical procedure (GATT) in which a suture is marked and thermally blunted, allowing proper visualization while performing an ab interno, minimally invasive, circumferential 360-degree suture trabeculotomy. The authors have previously reported on the GATT surgery with the use of an illuminated microcatheter, which allowed for visualization of the tip of the catheter as it circumnavigated Schlemm canal. This modification allows for similar visualization of the tip of the suture but is much more cost-effective while still maintaining similar safety.

  19. Scientific Visualization Made Easy for the Scientist

    NASA Astrophysics Data System (ADS)

    Westerhoff, M.; Henderson, B.

    2002-12-01

    amira® is an application program used in creating 3D visualizations and geometric models of 3D image data sets from various application areas, e.g. medicine, biology, biochemistry, chemistry, physics, and engineering. It has demonstrated significant adoption in the marketplace since becoming commercially available in 2000. The rapid adoption has expanded the features being requested by the user base and broadened the scope of the amira product offering. The amira product offering includes amira Standard, amiraDev™, used to extend the product capabilities by users, amiraMol™, used for molecular visualization, amiraDeconv™, used to improve quality of image data, and amiraVR™, used in immersive VR environments. amira allows the user to construct a visualization tailored to his or her needs without requiring any programming knowledge. It also allows 3D objects to be represented as grids suitable for numerical simulations, notably as triangular surfaces and volumetric tetrahedral grids. The amira application also provides methods to generate such grids from voxel data representing an image volume, and it includes a general-purpose interactive 3D viewer. amiraDev provides an application-programming interface (API) that allows the user to add new components by C++ programming. amira supports many import formats including a 'raw' format allowing immediate access to your native uniform data sets. amira uses the power and speed of the OpenGL® and Open Inventor™ graphics libraries and 3D graphics accelerators to allow you to access over 145 modules, enabling you to process, probe, analyze and visualize your data. The amiraMol™ extension adds powerful tools for molecular visualization to the existing amira platform. amiraMol™ contains support for standard molecular file formats, tools for visualization and analysis of static molecules as well as molecular trajectories (time series). amiraDeconv adds tools for the deconvolution of 3D microscopic images. 
Deconvolution is the process of increasing image quality and resolution by computationally compensating artifacts of the recording process. amiraDeconv supports 3D wide-field microscopy as well as 3D confocal microscopy. It offers both non-blind and blind image deconvolution algorithms. Non-blind deconvolution uses an individually measured point spread function, while blind algorithms work on the basis of only a few recording parameters (like numerical aperture or zoom factor). amiraVR is a specialized and extended version of the amira visualization system dedicated to use in immersive installations, such as large-screen stereoscopic projections, CAVE® or Holobench® systems. Among other features, it supports multi-threaded multi-pipe rendering, head-tracking, advanced 3D interaction concepts, and 3D menus allowing interaction with any amira object in the same way as on the desktop. With its unique set of features, amiraVR represents both a VR (Virtual Reality) ready application for scientific and medical visualization in immersive environments, and a development platform that allows building VR applications.
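
The non-blind deconvolution described above is conventionally implemented with the Richardson-Lucy iteration; the numpy-only sketch below illustrates that generic scheme (this is not amira's actual implementation, and the function names and circular-boundary assumption are ours):

```python
import numpy as np

def _conv(a, otf):
    # Circular convolution via the FFT (otf = fft2 of the origin-centred PSF).
    return np.real(np.fft.ifft2(np.fft.fft2(a) * otf))

def richardson_lucy(image, psf, n_iter=30):
    """Minimal Richardson-Lucy non-blind deconvolution with circular
    boundary conditions. `psf` must have the same shape as `image` and
    be centred at index (0, 0) (e.g. via np.fft.ifftshift)."""
    otf = np.fft.fft2(psf)
    otf_mirror = np.conj(otf)  # Fourier-domain equivalent of the flipped PSF
    est = np.full(image.shape, image.mean())
    for _ in range(n_iter):
        blurred = _conv(est, otf)
        ratio = image / (blurred + 1e-12)   # small epsilon avoids divide-by-zero
        est = est * _conv(ratio, otf_mirror)
    return est
```

Each iteration blurs the current estimate, compares it with the observed image, and multiplicatively corrects the estimate, which keeps it non-negative and preserves total flux.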

  20. Increasing awareness and preparedness by an exhibition and studying the effect of visuals

    NASA Astrophysics Data System (ADS)

    Charrière, Marie; Bogaard, Thom; Malet, Jean-Philippe; Mostert, Erik

    2013-04-01

    Damages caused by natural hazards can be reduced not only by protection, management and intervention activities, but also by information and communication to improve awareness and preparedness of local communities and tourists. Risk communication is particularly crucial for mountainous areas, such as the Ubaye Valley (France), as they are affected by multiple hazards and are particularly sensitive to the potential effects of climate and socio-economic changes which may increase the risk associated with natural hazards significantly. An exhibition is a powerful tool to communicate with the general public. It allows: (1) targeting specific audiences, (2) transmitting technical and scientific knowledge using a suitable language, (3) anchoring the collective memory of past events, (4) visualizing and emotionalizing the topic of natural hazards, (5) strengthening the communication between peers, and (6) highlighting local resources and knowledge. In addition to these theoretical advantages, an exhibition may fulfill the requirements of a community. In the Ubaye Valley (France), this tool was proposed by the stakeholders themselves to increase awareness and preparedness of the general public. To meet this demand, the exhibition was designed around three general topics: (1) the natural phenomena and their potential consequences on the elements at risk, (2) the management and protection measures (individual and collective) and (3) the evolution of events and knowledge from the past up to the present and the anticipation of future situations. Besides being a real risk communication practice, this exhibition will be the setting for an extensive research project studying the effect of the use of visualization tools on the awareness and preparedness of a community. A wide range of visuals (photos, videos, maps, models, animations, multimedia, etc.) will present many dimensions of locally occurring natural hazards and risk problems. 
The aim of the research is (1) to verify the theoretical advantages of visual communication, such as conveying strong messages and making them easy to remember, (2) to measure the change in awareness and preparedness after exposure to such media, and (3) to propose guidelines for further development and use of visual tools for natural hazard risk communication. To conduct this analysis, questionnaires and direct observation will be applied. The first method will make it possible to measure changes in knowledge and perceptions, as the same questionnaire will be filled in by visitors before and after their visit to the exhibition. Additional items of the questionnaire will deal with opinions on the different visualization tools, i.e. fulfillment of the needs and requirements of the visitors. Direct observation will be used for analyzing the relative attraction of each of the visualization tools. This research will help to determine which tools are most suitable for communicating to the community not only as a whole, but also to its sub-groups, i.e. children or adults, locals or tourists, etc.

  1. Visualization of Flowfield Modification by RCS Jets on a Capsule Entry Vehicle

    NASA Technical Reports Server (NTRS)

    Danehy, P. M.; Inman, J. A.; Alderfer, D. W.; Buck, G. M.; Schwartz, R.

    2008-01-01

    Nitric oxide planar laser-induced fluorescence (NO PLIF) has been used to visualize the flow on the aft-body of an entry capsule having an activated RCS jet in NASA Langley Research Center's 31-Inch Mach 10 wind tunnel facility. A capsule shape representative of the Apollo command module was tested. These tests were performed to demonstrate the ability of the PLIF method to visualize RCS jet flow while providing some preliminary input to NASA's Orion Vehicle design team. Two different RCS nozzle designs - conical and contoured - were tested. The conical and contoured nozzles had area ratios of 13.4 and 22.5 respectively. The conical nozzle had a half-angle of 10 degrees. Low- and high-Reynolds-number cases were investigated by changing the tunnel stagnation pressure from 350 psi to 1300 psi, resulting in freestream Reynolds numbers of 0.56 and 1.8 million per foot respectively. For both of these cases, three different jet plenum pressures were tested (nominally 56, 250 and 500 psi). A single angle of attack was investigated (24 degrees). NO PLIF uses an ultraviolet laser sheet to interrogate a slice in the flow containing seeded NO; this UV light excites fluorescence from the NO molecules which is detected by a high-speed digital camera. The system has a spatial resolution of about 200 microns (2-pixel blurring) and flow-stopping time resolution (approximately 1 microsecond). NO was seeded into the flow in two different ways. First, the RCS jet fluid was seeded with approximately 1-5% NO, with the balance N2. This allowed observation of the shape, structure and trajectory of the RCS jets. Visualizations of both laminar and turbulent jet flow features were obtained. Visualizations were obtained with the tunnel operating at Mach 10 and also with the test section held at a constant pressure similar to the aftbody static pressure (0.04 psi) obtained during tunnel runs. These two conditions are called "tunnel on" and "tunnel off" respectively. 
Second, the forebody flow was seeded with a very low flowrate (<100 standard cubic centimeters per minute) of pure NO. This trace gas was entrained into, and allowed visualization of, the shear layer forming between the expansion fan on the shoulder of the model and the recirculating separated flow in the wake of the model. This shear layer was observed to be laminar in the absence of RCS jet operation and turbulent above a certain RCS jet flowrate. Furthermore, the operation of the RCS jet is seen to push the shear layer out away from the model, with higher jet pressures resulting in larger deflections. Figures show some data from this test, partially processed. In the final paper, these images will be processed and rendered on a three-dimensional visualization of the test hardware for clearer visualization and interpretation of the flowfields.

  2. Visions of our Planet's Atmosphere, Land and Oceans: NASA/NOAA Electronic-Theater 2002. Spectacular Visualizations of our Blue Marble

    NASA Technical Reports Server (NTRS)

    Hasler, A. F.; Starr, David (Technical Monitor)

    2002-01-01

    The NASA/NOAA Electronic Theater presents Earth science observations and visualizations in a historical perspective. Fly in from outer space to the 2002 Winter Olympic Stadium Site of the Olympic Opening and Closing Ceremonies in Salt Lake City. Fly in and through Olympic Alpine Venues using 1 m IKONOS "Spy Satellite" data. Go back to the early weather satellite images from the 1960s and see them contrasted with the latest US and international global satellite weather movies including hurricanes & "tornadoes". See the latest visualizations of spectacular images from NASA/NOAA remote sensing missions like Terra, GOES, TRMM, SeaWiFS, Landsat 7 including new 1-min GOES rapid scan image sequences of the Nov 9th 2001 Midwest tornadic thunderstorms and have them explained. See how High-Definition Television (HDTV) is revolutionizing the way we communicate science (in cooperation with the American Museum of Natural History in NYC). See dust storms in Africa and smoke plumes from fires in Mexico. See visualizations featured on the covers of Newsweek, TIME, National Geographic, Popular Science & on National & International Network TV. New computer software tools allow us to roam & zoom through massive global images, e.g. Landsat tours of the US and Africa, showing desert and mountain geology as well as seasonal changes in vegetation. See animations of the polar ice packs and the motion of gigantic Antarctic icebergs from SeaWinds data. Spectacular new visualizations of the global atmosphere & oceans are shown. See vortices and currents in the global oceans that bring up the nutrients to feed tiny algae and draw the fish, whales and fishermen. See how the ocean blooms in response to these currents and El Niño/La Niña climate changes. See the city lights, fishing fleets, gas flares and biomass burning of the Earth at night observed by the "night-vision" DMSP military satellite.

  3. Filming the invisible - time-resolved visualization of compressible flows

    NASA Astrophysics Data System (ADS)

    Kleine, H.

    2010-04-01

    Essentially all processes in gasdynamics are invisible to the naked eye as they occur in a transparent medium. The task to observe them is further complicated by the fact that most of these processes are also transient, often with characteristic times that are considerably below the threshold of human perception. Both difficulties can be overcome by combining visualization methods that reveal changes in the transparent medium, and high-speed photography techniques that “stop” the motion of the flow. The traditional approach is to reconstruct a transient process from a series of single images, each taken in a different experiment at a different instant. This approach, which is still widely used today, can only be expected to give reliable results when the process is reproducible. Truly time-resolved visualization, which yields a sequence of flow images in a single experiment, has been attempted for more than a century, but many of the developed camera systems were characterized by a high level of complexity and limited quality of the results. Recent advances in digital high-speed photography have changed this situation and have provided the tools to investigate, with relative ease and in sufficient detail, the true development of a transient flow with characteristic time scales down to one microsecond. This paper discusses the potential and the limitations one encounters when using density-sensitive visualization techniques in time-resolved mode. Several examples illustrate how this approach can reveal and explain a number of previously undetected phenomena in a variety of highly transient compressible flows. It is demonstrated that time-resolved visualization offers numerous advantages which normally outweigh its shortcomings, mainly the often-encountered loss in resolution. 
Apart from the capability to track the location and/or shape of flow features in space and time, adequate time-resolved visualization allows one to observe the development of deliberately introduced near-isentropic perturbation wavelets. This new diagnostic tool can be used to qualitatively and quantitatively determine otherwise inaccessible thermodynamic properties of a compressible flow.

  4. Evolview v2: an online visualization and management tool for customized and annotated phylogenetic trees.

    PubMed

    He, Zilong; Zhang, Huangkai; Gao, Shenghan; Lercher, Martin J; Chen, Wei-Hua; Hu, Songnian

    2016-07-08

    Evolview is an online visualization and management tool for customized and annotated phylogenetic trees. It allows users to visualize phylogenetic trees in various formats, customize the trees through built-in functions and user-supplied datasets and export the customization results to publication-ready figures. Its 'dataset system' contains not only the data to be visualized on the tree, but also 'modifiers' that control various aspects of the graphical annotation. Evolview is a single-page application (like Gmail); its carefully designed interface allows users to upload, visualize, manipulate and manage trees and datasets all in a single webpage. Developments since the last public release include a modern dataset editor with keyword highlighting functionality, seven newly added types of annotation datasets, collaboration support that allows users to share their trees and datasets, and various improvements to the web interface and performance. In addition, we included eleven new 'Demo' trees to demonstrate the basic functionalities of Evolview, and five new 'Showcase' trees inspired by publications to showcase the power of Evolview in producing publication-ready figures. Evolview is freely available at: http://www.evolgenius.info/evolview/. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  5. Interactive Visualization of Dependencies

    ERIC Educational Resources Information Center

    Moreno, Camilo Arango; Bischof, Walter F.; Hoover, H. James

    2012-01-01

    We present an interactive tool for browsing course requisites as a case study of dependency visualization. This tool uses multiple interactive visualizations to allow the user to explore the dependencies between courses. A usability study revealed that the proposed browser provides significant advantages over traditional methods, in terms of…

  6. The iconic memory skills of brain injury survivors and non-brain injured controls after visual scanning training.

    PubMed

    McClure, J T; Browning, R T; Vantrease, C M; Bittle, S T

    1994-01-01

    Previous research suggests that traumatic brain injury (TBI) results in impairment of iconic memory abilities. This raises serious implications for brain injury rehabilitation. Most cognitive rehabilitation programs do not include iconic memory training. Instead it is common for cognitive rehabilitation programs to focus on attention and concentration skills, memory skills, and visual scanning skills. This study compared the iconic memory skills of brain-injury survivors and control subjects who all reached criterion levels of visual scanning skills. This involved previous training for the brain-injury survivors using popular visual scanning programs that allowed them to visually scan with response time and accuracy within normal limits. Control subjects required only minimal training to reach normal-limits criteria. This comparison allows for the dissociation of visual scanning skills and iconic memory skills. The results are discussed in terms of their implications for cognitive rehabilitation and the relationship between visual scanning training and iconic memory skills. We would like to acknowledge the contribution of Jeffrey D. Vantrease, who wrote the software program for the Iconic Memory procedure and measurement.

  7. Visualizer: 3D Gridded Data Visualization Software for Geoscience Education and Research

    NASA Astrophysics Data System (ADS)

    Harwood, C.; Billen, M. I.; Kreylos, O.; Jadamec, M.; Sumner, D. Y.; Kellogg, L. H.; Hamann, B.

    2008-12-01

    In both research and education learning is an interactive and iterative process of exploring and analyzing data or model results. However, visualization software often presents challenges on the path to learning because it assumes the user already knows the locations and types of features of interest, instead of enabling flexible and intuitive examination of results. We present examples of research and teaching using the software, Visualizer, specifically designed to create an effective and intuitive environment for interactive, scientific analysis of 3D gridded data. Visualizer runs in a range of 3D virtual reality environments (e.g., GeoWall, ImmersaDesk, or CAVE), but also provides a similar level of real-time interactivity on a desktop computer. When using Visualizer in a 3D-enabled environment, the software allows the user to interact with the data images as real objects, grabbing, rotating or walking around the data to gain insight and perspective. On the desktop, simple features, such as a set of cross-bars marking the plane of the screen, provide extra 3D spatial cues that allow the user to more quickly understand geometric relationships within the data. This platform portability allows the user to more easily integrate research results into classroom demonstrations and exercises, while the interactivity provides an engaging environment for self-directed and inquiry-based learning by students. Visualizer software is freely available for download (www.keckcaves.org) and runs on Mac OSX and Linux platforms.

  8. Optically secured information retrieval using two authenticated phase-only masks.

    PubMed

    Wang, Xiaogang; Chen, Wen; Mei, Shengtao; Chen, Xudong

    2015-10-23

    We propose an algorithm for jointly designing two phase-only masks (POMs) that allow for the encryption and noise-free retrieval of triple images. The images required for optical retrieval are first stored in quick-response (QR) codes for noise-free retrieval and flexible readout. Two sparse POMs are respectively calculated from two different images used as references for authentication, based on a modified Gerchberg-Saxton algorithm (GSA) and pixel extraction, and are then used as support constraints in a modified double-phase retrieval algorithm (MPRA), together with the above-mentioned QR codes. No visible information about the target images or the reference images can be obtained from either of these authenticated POMs. This approach allows users to authenticate the two POMs used for image reconstruction without visual observation of the reference images. It also offers users convenient access and readout with mobile devices.
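
The modified GSA and MPRA are the paper's own contribution, but the underlying Gerchberg-Saxton iteration is standard; a minimal textbook sketch follows (function and parameter names are illustrative, not the authors' code):

```python
import numpy as np

def gerchberg_saxton(target_amp, source_amp, n_iter=100, seed=0):
    """Textbook Gerchberg-Saxton phase retrieval: alternate between the
    source plane and its Fourier (target) plane, enforcing the known
    amplitude in each plane and keeping only the phase estimate.
    Note: this is the classic algorithm, not the paper's modified GSA."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, source_amp.shape)  # random start
    for _ in range(n_iter):
        field = source_amp * np.exp(1j * phase)        # enforce source amplitude
        far = np.fft.fft2(field)
        far = target_amp * np.exp(1j * np.angle(far))  # enforce target amplitude
        phase = np.angle(np.fft.ifft2(far))            # keep only the phase
    return phase  # candidate phase-only mask
```

After enough iterations, a uniform beam passed through `exp(1j * phase)` approximates the target amplitude in the Fourier plane, which is the sense in which the returned array acts as a phase-only mask.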

  9. Optically secured information retrieval using two authenticated phase-only masks

    PubMed Central

    Wang, Xiaogang; Chen, Wen; Mei, Shengtao; Chen, Xudong

    2015-01-01

    We propose an algorithm for jointly designing two phase-only masks (POMs) that allow for the encryption and noise-free retrieval of triple images. The images required for optical retrieval are first stored in quick-response (QR) codes for noise-free retrieval and flexible readout. Two sparse POMs are respectively calculated from two different images used as references for authentication, based on a modified Gerchberg-Saxton algorithm (GSA) and pixel extraction, and are then used as support constraints in a modified double-phase retrieval algorithm (MPRA), together with the above-mentioned QR codes. No visible information about the target images or the reference images can be obtained from either of these authenticated POMs. This approach allows users to authenticate the two POMs used for image reconstruction without visual observation of the reference images. It also offers users convenient access and readout with mobile devices. PMID:26494213

  10. Optically secured information retrieval using two authenticated phase-only masks

    NASA Astrophysics Data System (ADS)

    Wang, Xiaogang; Chen, Wen; Mei, Shengtao; Chen, Xudong

    2015-10-01

    We propose an algorithm for jointly designing two phase-only masks (POMs) that allow for the encryption and noise-free retrieval of triple images. The images required for optical retrieval are first stored in quick-response (QR) codes for noise-free retrieval and flexible readout. Two sparse POMs are respectively calculated from two different images used as references for authentication, based on a modified Gerchberg-Saxton algorithm (GSA) and pixel extraction, and are then used as support constraints in a modified double-phase retrieval algorithm (MPRA), together with the above-mentioned QR codes. No visible information about the target images or the reference images can be obtained from either of these authenticated POMs. This approach allows users to authenticate the two POMs used for image reconstruction without visual observation of the reference images. It also offers users convenient access and readout with mobile devices.

  11. Nebula: reconstruction and visualization of scattering data in reciprocal space.

    PubMed

    Reiten, Andreas; Chernyshov, Dmitry; Mathiesen, Ragnvald H

    2015-04-01

    Two-dimensional solid-state X-ray detectors can now operate at considerable data throughput rates that allow full three-dimensional sampling of scattering data from extended volumes of reciprocal space on second-to-minute timescales. For such experiments, simultaneous analysis and visualization allows for remeasurements and a more dynamic measurement strategy. A new software package, Nebula, is presented. It efficiently reconstructs X-ray scattering data, generates three-dimensional reciprocal-space data sets that can be visualized interactively, and aims to enable real-time processing in high-throughput measurements by employing parallel computing on commodity hardware.
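
At the core of such a reconstruction is mapping each detector pixel to the reciprocal-space vector q = k_out - k_in; the sketch below shows the usual elastic-scattering geometry (a generic illustration, not Nebula's API, and all names and parameter values are our assumptions):

```python
import numpy as np

def pixel_to_q(px, py, dist, pix_size, wavelength):
    """Map a detector pixel (offset in pixels from the direct-beam centre,
    flat detector normal to the beam at distance `dist`) to the
    reciprocal-space vector q = k_out - k_in, assuming elastic
    scattering (|k_out| = |k_in|). Units: metres throughout."""
    k = 2.0 * np.pi / wavelength                  # wavevector magnitude
    v = np.array([px * pix_size, py * pix_size, dist], dtype=float)
    k_out = k * v / np.linalg.norm(v)             # scattered-beam direction
    k_in = np.array([0.0, 0.0, k])                # incident beam along +z
    return k_out - k_in
```

Applying this map to every pixel of every frame (with the sample rotation folded into k_in and k_out) is what populates the 3D reciprocal-space volume the abstract describes.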

  12. Nebula: reconstruction and visualization of scattering data in reciprocal space

    PubMed Central

    Reiten, Andreas; Chernyshov, Dmitry; Mathiesen, Ragnvald H.

    2015-01-01

    Two-dimensional solid-state X-ray detectors can now operate at considerable data throughput rates that allow full three-dimensional sampling of scattering data from extended volumes of reciprocal space on second-to-minute timescales. For such experiments, simultaneous analysis and visualization allows for remeasurements and a more dynamic measurement strategy. A new software package, Nebula, is presented. It efficiently reconstructs X-ray scattering data, generates three-dimensional reciprocal-space data sets that can be visualized interactively, and aims to enable real-time processing in high-throughput measurements by employing parallel computing on commodity hardware. PMID:25844083

  13. Collaborative Visualization and Analysis of Multi-dimensional, Time-dependent and Distributed Data in the Geosciences Using the Unidata Integrated Data Viewer

    NASA Astrophysics Data System (ADS)

    Meertens, C. M.; Murray, D.; McWhirter, J.

    2004-12-01

    Over the last five years, UNIDATA has developed an extensible and flexible software framework for analyzing and visualizing geoscience data and models. The Integrated Data Viewer (IDV), initially developed for visualization and analysis of atmospheric data, has broad interdisciplinary application across the geosciences including atmospheric, ocean, and most recently, earth sciences. As part of the NSF-funded GEON Information Technology Research project, UNAVCO has enhanced the IDV to display earthquakes, GPS velocity vectors, and plate boundary strain rates. These and other geophysical parameters can be viewed simultaneously with three-dimensional seismic tomography and mantle geodynamic model results. Disparate data sets of different formats, variables, geographical projections and scales can automatically be displayed in a common projection. The IDV is efficient and fully interactive allowing the user to create and vary 2D and 3D displays with contour plots, vertical and horizontal cross-sections, plan views, 3D isosurfaces, vector plots and streamlines, as well as point data symbols or numeric values. Data probes (values and graphs) can be used to explore the details of the data and models. The IDV is a freely available Java application using Java3D and VisAD and runs on most computers. UNIDATA provides easy-to-follow instructions for download, installation and operation of the IDV. The IDV primarily uses netCDF, a self-describing binary file format, to store multi-dimensional data, related metadata, and source information. The IDV is designed to work with OPeNDAP-equipped data servers that provide real-time observations and numerical models from distributed locations. Users can capture and share screens and animations, or exchange XML "bundles" that contain the state of the visualization and embedded links to remote data files. 
A real-time collaborative feature allows groups of users to remotely link IDV sessions via the Internet and simultaneously view and control the visualization. A Jython-based formulation facility allows computations on disparate data sets using simple formulas. Although the IDV is an advanced tool for research, its flexible architecture has also been exploited for educational purposes with the Virtual Geophysical Exploration Environment (VGEE) development. The VGEE demonstration added physical concept models to the IDV and curricula for atmospheric science education intended for the high school to graduate student levels.

  14. 3D visualization of unsteady 2D airplane wake vortices

    NASA Technical Reports Server (NTRS)

    Ma, Kwan-Liu; Zheng, Z. C.

    1994-01-01

    Air flowing around the wing tips of an airplane forms horizontal tornado-like vortices that can be dangerous to following aircraft. The dynamics of such vortices, including ground and atmospheric effects, can be predicted by numerical simulation, allowing the safety and capacity of airports to be improved. In this paper, we introduce three-dimensional techniques for visualizing time-dependent, two-dimensional wake vortex computations, and the hazard strength of such vortices near the ground. We describe a vortex core tracing algorithm and a local tiling method to visualize the vortex evolution. The tiling method converts time-dependent, two-dimensional vortex cores into three-dimensional vortex tubes. Finally, a novel approach calculates the induced rolling moment on the following airplane at each grid point within a region near the vortex tubes and thus allows three-dimensional visualization of the hazard strength of the vortices. We also suggest ways of combining multiple visualization methods to present more information simultaneously.
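
The induced-rolling-moment calculation mentioned above can be approximated with classical strip theory; the sketch below is a generic illustration of that idea, not the authors' code, and every parameter value (air density, airspeed, lift slope) is an illustrative assumption:

```python
import numpy as np

def induced_rolling_moment(gamma, y0, z0, span, chord,
                           rho=1.225, v_inf=70.0,
                           lift_slope=2.0 * np.pi, n=201):
    """Strip-theory estimate of the rolling moment that a single infinite
    line vortex (circulation `gamma`, offset (y0, z0) from the wing
    centre; z0 != 0 keeps us off the singular core) induces on a
    following wing of the given span and constant chord."""
    y = np.linspace(-span / 2.0, span / 2.0, n)   # spanwise stations
    r2 = (y - y0) ** 2 + z0 ** 2
    w = gamma * (y - y0) / (2.0 * np.pi * r2)     # induced upwash (2D Biot-Savart)
    alpha = w / v_inf                             # local angle-of-attack change
    dl = 0.5 * rho * v_inf ** 2 * chord * lift_slope * alpha  # lift per unit span
    dy = y[1] - y[0]
    return float(np.sum(dl * y) * dy)             # moment about the roll axis
```

Evaluating this at each grid point near a vortex tube yields a scalar hazard field of the kind the paper visualizes in 3D.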

  15. From genes to brain oscillations: is the visual pathway the epigenetic clue to schizophrenia?

    PubMed

    González-Hernández, J A; Pita-Alcorta, C; Cedeño, I R

    2006-01-01

    Molecular and gene expression data, and more recently mitochondrial genes and possible epigenetic regulation by non-coding genes, are revolutionizing our views on schizophrenia. Genes and epigenetic mechanisms are triggered by cell-cell interaction and by external stimuli. A number of recent clinical and molecular observations indicate that epigenetic factors may be operational in the origin of the illness. Based on the molecular insights, gene expression profiles and epigenetic regulation of genes, we went back to the neurophysiology (brain oscillations) and found a putative role for visual experience (i.e. visual stimuli) as an epigenetic factor. The functional evidence provided here establishes a direct link between the striate and extrastriate unimodal visual cortex and the neurobiology of schizophrenia. This result supports the hypothesis that 'visual experience' has a potential role as an epigenetic factor and contributes to triggering and/or maintaining the progression of schizophrenia. In this case, candidate genes sensitive to the visual 'insult' may be located within the visual cortex, including associative areas, while the integrity of the visual pathway before reaching the primary visual cortex is preserved. The same effect can be expected if target genes are localised within the visual pathway, which is actually more sensitive to 'insult' during early life than the cortex per se. If this process affects gene expression at these sites, a stable, sensory-specific 'insult', i.e. distorted visual information, enters the visual system and spreads to fronto-temporo-parietal multimodal areas even from early maturation periods. 
The difference in the timing of postnatal neuroanatomical events between such areas and the primary visual cortex in humans (with the former reaching the same developmental landmarks later in life than the latter) is 'optimal' for establishing abnormal 'cell communication' mediated by the visual system that may further interfere with local physiology. In this context, the strategy for searching for target genes needs to be rearranged and redirected toward vision-related genes. In addition, psychophysical studies combining functional neuroimaging and electrophysiology are strongly recommended in the search for epigenetic clues that will allow gene association studies in schizophrenia to be carried out.

  16. Design and Implementation of Cancellation Tasks for Visual Search Strategies and Visual Attention in School Children

    ERIC Educational Resources Information Center

    Wang, Tsui-Ying; Huang, Ho-Chuan; Huang, Hsiu-Shuang

    2006-01-01

    We propose a computer-assisted cancellation test system (CACTS) to understand the visual attention performance and visual search strategies in school children. The main aim of this paper is to present our design and development of the CACTS and demonstrate some ways in which computer techniques can allow the educator not only to obtain more…

  17. Dynamic interactions between visual working memory and saccade target selection

    PubMed Central

    Schneegans, Sebastian; Spencer, John P.; Schöner, Gregor; Hwang, Seongmin; Hollingworth, Andrew

    2014-01-01

    Recent psychophysical experiments have shown that working memory for visual surface features interacts with saccadic motor planning, even in tasks where the saccade target is unambiguously specified by spatial cues. Specifically, a match between a memorized color and the color of either the designated target or a distractor stimulus influences saccade target selection, saccade amplitudes, and latencies in a systematic fashion. To elucidate these effects, we present a dynamic neural field model in combination with new experimental data. The model captures the neural processes underlying visual perception, working memory, and saccade planning relevant to the psychophysical experiment. It consists of a low-level visual sensory representation that interacts with two separate pathways: a spatial pathway implementing spatial attention and saccade generation, and a surface feature pathway implementing color working memory and feature attention. Due to bidirectional coupling between visual working memory and feature attention in the model, the working memory content can indirectly exert an effect on perceptual processing in the low-level sensory representation. This in turn biases saccadic movement planning in the spatial pathway, allowing the model to quantitatively reproduce the observed interaction effects. The continuous coupling between representations in the model also implies that modulation should be bidirectional, and model simulations provide specific predictions for complementary effects of saccade target selection on visual working memory. These predictions were empirically confirmed in a new experiment: Memory for a sample color was biased toward the color of a task-irrelevant saccade target object, demonstrating the bidirectional coupling between visual working memory and perceptual processing. PMID:25228628

  18. Haltere mechanosensory influence on tethered flight behavior in Drosophila.

    PubMed

    Mureli, Shwetha; Fox, Jessica L

    2015-08-01

    In flies, mechanosensory information from modified hindwings known as halteres is combined with visual information for wing-steering behavior. Haltere input is necessary for free flight, making it difficult to study the effects of haltere ablation under natural flight conditions. We thus used tethered Drosophila melanogaster flies to examine the relationship between halteres and the visual system, using wide-field motion or moving figures as visual stimuli. Haltere input was altered by surgically decreasing its mass, or by removing it entirely. Haltere removal does not affect the flies' ability to flap or steer their wings, but it does increase the temporal frequency at which they modify their wingbeat amplitude. Reducing the haltere mass decreases the optomotor reflex response to wide-field motion, and removing the haltere entirely does not further decrease the response. Decreasing the mass does not attenuate the response to figure motion, but removing the entire haltere does attenuate the response. When flies are allowed to control a visual stimulus in closed-loop conditions, haltereless flies fixate figures with the same acuity as intact flies, but cannot stabilize a wide-field stimulus as accurately as intact flies can. These manipulations suggest that the haltere mass is influential in wide-field stabilization, but less so in figure tracking. In both figure and wide-field experiments, we observe responses to visual motion with and without halteres, indicating that during tethered flight, intact halteres are not strictly necessary for visually guided wing-steering responses. However, the haltere feedback loop may operate in a context-dependent way to modulate responses to visual motion. © 2015. Published by The Company of Biologists Ltd.

  19. A Presentation of Spectacular Visualizations

    NASA Technical Reports Server (NTRS)

    Hasler, Fritz; Pierce, Hal; Einaudi, Franco (Technical Monitor)

    2000-01-01

    The NASA/NOAA/AMS Earth Science Electronic Theater presents Earth science observations and visualizations in a historical perspective. Fly in from outer space to Florida and the KSC Visitor's Center. Go back to the early weather satellite images from the 1960s and see them contrasted with the latest international global satellite weather movies, including killer hurricanes & tornadic thunderstorms. See the latest spectacular images from NASA and NOAA remote sensing missions like GOES, NOAA, TRMM, SeaWiFS, Landsat7, & the new Terra, visualized with state-of-the-art tools. Shown in High Definition TV resolution (2048 x 768 pixels) are visualizations of hurricanes Lenny, Floyd, Georges, Mitch, Fran and Linda. See visualizations featured on covers of magazines like Newsweek, TIME, National Geographic, Popular Science and on national & international network TV. New Digital Earth visualization tools allow us to roam & zoom through massive global images, including a Landsat tour of the US with drill-downs into major cities using 1 m resolution spy-satellite technology from the Space Imaging IKONOS satellite. Spectacular new visualizations of the global atmosphere & oceans are shown. See massive dust storms sweeping across Africa. See ocean vortices and currents that bring up the nutrients to feed tiny plankton and draw the fish, giant whales and fishermen. See how the ocean blooms in response to these currents and El Nino/La Nina climate changes. The demonstration is interactively driven by an SGI Octane Graphics Supercomputer with dual CPUs, 5 Gigabytes of RAM and a Terabyte disk, using two projectors across the super-sized Universe Theater panoramic screen.

  20. A Presentation of Spectacular Visualizations. Visions of Our Planet's Atmosphere, Land and Oceans: ETheater Presentation

    NASA Technical Reports Server (NTRS)

    Hasler, Fritz; Pierce, Hal; Einaudi, Franco (Technical Monitor)

    2000-01-01

    The NASA/NOAA/AMS Earth Science Electronic Theater presents Earth science observations and visualizations in a historical perspective. Fly in from outer space to Florida and the KSC Visitor's Center. Go back to the early weather satellite images from the 1960s and see them contrasted with the latest international global satellite weather movies, including killer hurricanes & tornadic thunderstorms. See the latest spectacular images from NASA and NOAA remote sensing missions like GOES, NOAA, TRMM, SeaWiFS, Landsat7, & the new Terra, visualized with state-of-the-art tools. Shown in High Definition TV resolution (2048 x 768 pixels) are visualizations of hurricanes Lenny, Floyd, Georges, Mitch, Fran and Linda. See visualizations featured on covers of magazines like Newsweek, TIME, National Geographic, Popular Science and on national & international network TV. New Digital Earth visualization tools allow us to roam & zoom through massive global images, including a Landsat tour of the US with drill-downs into major cities using 1 m resolution spy-satellite technology from the Space Imaging IKONOS satellite. Spectacular new visualizations of the global atmosphere & oceans are shown. See massive dust storms sweeping across Africa. See ocean vortices and currents that bring up the nutrients to feed tiny plankton and draw the fish, giant whales and fishermen. See how the ocean blooms in response to these currents and El Nino/La Nina climate changes. The demonstration is interactively driven by an SGI Octane Graphics Supercomputer with dual CPUs, 5 Gigabytes of RAM and a Terabyte disk, using two projectors across the super-sized Universe Theater panoramic screen.

  1. Alpha-galactosidase versus active charcoal for improving sonographic visualization of abdominal organs in patients with excessive intestinal gas

    PubMed Central

    Maconi, G.; Bolzacchini, E.; Radice, E.; Marzocchi, M.; Badini, M.

    2012-01-01

    Background and aims: Intestinal gas is a frequent cause of poor visualization during gastrointestinal ultrasound (US). The enzyme alpha-galactosidase may reduce intestinal gas production, thereby improving abdominal US visualization. We compared the efficacies of alpha-galactosidase and active charcoal in improving US visualization in patients with previous unsatisfactory abdominal US scans caused by excessive intestinal gas. Materials and methods: 45 patients with poor visualization of at least one target organ, i.e. the pancreas, hepatic lobes (score 0–2) or common bile duct (CBD) (score 0–1), were enrolled in a prospective, randomized, crossover, observer-blinded study. The patients received alpha-galactosidase (Sinaire Forte, Promefarm, Milan, Italy) 600 GalU t.i.d. for 2 days before abdominal US plus 900 GalU the morning of the exam, or active charcoal 448 mg t.i.d. for 2 days before the exam plus 672 mg the morning of the exam. Visualization was graded as follows: 0 = none (complete gas interference); 1 = severe interference; 2 = moderate interference; 3 = mild interference; 4 = complete (no gas interference). Results: 42 patients completed the study. Both alpha-galactosidase and active charcoal improved the visualization of target organs. Visualization of the right hepatic lobe, CBD and pancreatic tail was significantly improved (vs. baseline) only by alpha-galactosidase (p < 0.01). Scores ≥3 for all parts of the pancreas and both hepatic lobes were achieved in only 12.5% of the patients after both treatments. Both products were well tolerated. Conclusion: Alpha-galactosidase and active charcoal can improve US visualization of abdominal organs in patients whose scans are frequently unsatisfactory due to excessive intestinal gas. Visualization of the pancreatic tail and right hepatic lobe was significantly improved only by alpha-galactosidase. However, both treatments allowed adequate visualization of all target organs during the same examination in only a few patients. PMID:23730387

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Minelli, Annalisa, E-mail: Annalisa.Minelli@univ-brest.fr; Marchesini, Ivan, E-mail: Ivan.Marchesini@irpi.cnr.it; Taylor, Faith E., E-mail: Faith.Taylor@kcl.ac.uk

    Although there are clear economic and environmental incentives for producing energy from solar and wind power, there can be local opposition to their installation due to their impact upon the landscape. To date, no international guidelines exist to guide quantitative visual impact assessment of these facilities, making the planning process somewhat subjective. In this paper we demonstrate the development of a method and an Open Source GIS tool to quantitatively assess the visual impact of these facilities using line-of-sight techniques. The methods here build upon previous studies by (i) more accurately representing the shape of energy producing facilities, (ii) taking into account the distortion of the perceived shape and size of facilities caused by the location of the observer, (iii) calculating the possible obscuring of facilities caused by terrain morphology and (iv) allowing the combination of various facilities to more accurately represent the landscape. The tool has been applied to real and synthetic case studies and compared to recently published results from other models, and demonstrates an improvement in accuracy of the calculated visual impact of facilities. The tool is named r.wind.sun and is freely available from GRASS GIS AddOns. - Highlights: • We develop a tool to quantify wind turbine and photovoltaic panel visual impact. • The tool is freely available to download and edit as a module of GRASS GIS. • The tool takes into account visual distortion of the shape and size of objects. • The accuracy of calculation of visual impact is improved over previous methods.
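
    The two core ideas, terrain occlusion and observer-dependent apparent size, can be sketched in a few lines. This is a minimal pure-Python illustration, not the r.wind.sun implementation; the heights, spacing, and profile are assumed example values:

```python
import math

def line_of_sight(profile, obs_height=2.0, fac_height=80.0):
    """Return True if the top of a facility at the far end of a terrain
    profile is visible from an observer at the near end.
    profile: terrain elevations (m) sampled at unit horizontal spacing,
    profile[0] beneath the observer, profile[-1] beneath the facility."""
    x_end = len(profile) - 1
    y0 = profile[0] + obs_height
    y1 = profile[-1] + fac_height
    for x in range(1, x_end):
        # elevation of the straight sight line above sample x
        y_line = y0 + (y1 - y0) * x / x_end
        if profile[x] > y_line:
            return False  # terrain blocks the view
    return True

def apparent_angle(height, distance):
    """Angular size (radians) subtended by a facility of the given height
    at the given horizontal distance: the perceived-size distortion that
    depends on observer location."""
    return math.atan2(height, distance)
```

    A flat profile leaves an 80 m turbine visible, while a 60 m ridge midway blocks it, and the same turbine subtends a smaller angle from farther away.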

  3. Reliability of Visual and Somatosensory Feedback in Skilled Movement: The Role of the Cerebellum.

    PubMed

    Mizelle, J C; Oparah, Alexis; Wheaton, Lewis A

    2016-01-01

    The integration of vision and somatosensation is required to allow for accurate motor behavior. While both sensory systems contribute to an understanding of the state of the body through continuous updating and estimation, how the brain processes unreliable sensory information remains to be fully understood in the context of complex action. Using functional brain imaging, we sought to understand the role of the cerebellum in weighting visual and somatosensory feedback by selectively reducing the reliability of each sense individually during a tool use task. We broadly hypothesized upregulated activation of the sensorimotor and cerebellar areas during movement with reduced visual reliability, and upregulated activation of occipital brain areas during movement with reduced somatosensory reliability. As specifically compared to reduced somatosensory reliability, we expected greater activations of ipsilateral sensorimotor cerebellum for intact visual and somatosensory reliability. Further, we expected that ipsilateral posterior cognitive cerebellum would be affected with reduced visual reliability. We observed that reduced visual reliability results in a trend towards the relative consolidation of sensorimotor activation and an expansion of cerebellar activation. In contrast, reduced somatosensory reliability was characterized by the absence of cerebellar activations and a trend towards the increase of right frontal, left parietofrontal activation, and temporo-occipital areas. Our findings highlight the role of the cerebellum for specific aspects of skillful motor performance. This has relevance to understanding basic aspects of brain functions underlying sensorimotor integration, and provides a greater understanding of cerebellar function in tool use motor control.

  4. Multi-modal assessment of on-road demand of voice and manual phone calling and voice navigation entry across two embedded vehicle systems

    PubMed Central

    Mehler, Bruce; Kidd, David; Reimer, Bryan; Reagan, Ian; Dobres, Jonathan; McCartt, Anne

    2016-01-01

    Abstract One purpose of integrating voice interfaces into embedded vehicle systems is to reduce drivers’ visual and manual distractions with ‘infotainment’ technologies. However, there is scant research on actual benefits in production vehicles or how different interface designs affect attentional demands. Driving performance, visual engagement, and indices of workload (heart rate, skin conductance, subjective ratings) were assessed in 80 drivers randomly assigned to drive a 2013 Chevrolet Equinox or Volvo XC60. The Chevrolet MyLink system allowed completing tasks with one voice command, while the Volvo Sensus required multiple commands to navigate the menu structure. When calling a phone contact, both voice systems reduced visual demand relative to the visual–manual interfaces, with reductions for drivers in the Equinox being greater. The Equinox ‘one-shot’ voice command showed advantages during contact calling but had significantly higher error rates than Sensus during destination address entry. For both secondary tasks, neither voice interface entirely eliminated visual demand. Practitioner Summary: The findings reinforce the observation that most, if not all, automotive auditory–vocal interfaces are multi-modal interfaces in which the full range of potential demands (auditory, vocal, visual, manipulative, cognitive, tactile, etc.) need to be considered in developing optimal implementations and evaluating drivers’ interaction with the systems. Social Media: In-vehicle voice-interfaces can reduce visual demand but do not eliminate it and all types of demand need to be taken into account in a comprehensive evaluation. PMID:26269281

  5. Integration of spectral domain optical coherence tomography with microperimetry generates unique datasets for the simultaneous identification of visual function and retinal structure in ophthalmological applications

    NASA Astrophysics Data System (ADS)

    Koulen, Peter; Gallimore, Gary; Vincent, Ryan D.; Sabates, Nelson R.; Sabates, Felix N.

    2011-06-01

    Conventional perimeters are used routinely in various eye disease states to evaluate the central visual field and to quantitatively map sensitivity. However, standard automated perimetry proves difficult for retina and specifically macular disease due to the need for central and steady fixation. Advances in instrumentation have led to microperimetry, which incorporates eye tracking for placement of macular sensitivity values onto an image of the macular fundus thus enabling a precise functional and anatomical mapping of the central visual field. Functional sensitivity of the retina can be compared with the observed structural parameters that are acquired with high-resolution spectral domain optical coherence tomography and by integration of scanning laser ophthalmoscope-driven imaging. Findings of the present study generate a basis for age-matched comparison of sensitivity values in patients with macular pathology. Microperimetry registered with detailed structural data performed before and after intervention treatments provides valuable information about macular function, disease progression and treatment success. This approach also allows for the detection of disease or treatment related changes in retinal sensitivity when visual acuity is not affected and can drive the decision making process in choosing different treatment regimens and guiding visual rehabilitation. This has immediate relevance for applications in central retinal vein occlusion, central serous choroidopathy, age-related macular degeneration, familial macular dystrophy and several other forms of retina related visual disability.

  6. Keratinocytes in culture accumulate phagocytosed melanosomes in the perinuclear area.

    PubMed

    Ando, Hideya; Niki, Yoko; Yoshida, Masaki; Ito, Masaaki; Akiyama, Kaoru; Kim, Jin-Hwa; Yoon, Tae-Jin; Lee, Jeung-Hoon; Matsui, Mary S; Ichihashi, Masamitsu

    2010-02-01

    There are many techniques for evaluating melanosome transfer to keratinocytes but the spectrophotometric quantification of melanosomes incorporated by keratinocyte phagocytosis has not been previously reported. Here we describe a new method that allows the spectrophotometric visualization of melanosome uptake by normal human keratinocytes in culture. Fontana-Masson staining of keratinocytes incubated with isolated melanosomes showed the accumulation of incorporated melanosomes in the perinuclear areas of keratinocytes within 48 h. Electron microscopic observations of melanosomes ingested by keratinocytes revealed that many phagosomes containing clusters of melanosomes or their fragments were localized in the perinuclear area. A known inhibitor of keratinocyte phagocytosis which inhibits protease-activated receptor-2, i.e., soybean trypsin inhibitor, decreased melanosome uptake by keratinocytes in a dose-dependent manner. These data suggest that our method is a useful model to quantitate keratinocyte phagocytosis of melanosomes visually in vitro.

  7. In Internet-Based Visualization System Study about Breakthrough Applet Security Restrictions

    NASA Astrophysics Data System (ADS)

    Chen, Jie; Huang, Yan

    In realizing an Internet-based visualization system for protein molecules, the system needs to allow users to observe the molecular structure on the local computer; that is, clients can generate the three-dimensional graphics from a PDB file on the client computer. This requires Applet access to local files, which raises the question of Applet security restrictions. This paper covers two realization methods: 1. Use the signature tools, key management tools and Policy Editor provided by the JDK to digitally sign and authenticate the Java Applet, breaking through certain security restrictions in the browser. 2. Use a Servlet agent to implement indirect data access, breaking through the traditional Java Virtual Machine sandbox model's restriction on Applet capability. Both ways can break through the Applet's security restrictions, but each has its own strengths.

  8. Visual Basic VPython Interface: Charged Particle in a Magnetic Field

    NASA Astrophysics Data System (ADS)

    Prayaga, Chandra

    2006-12-01

    A simple Visual Basic (VB) to VPython interface is described and illustrated with the example of a charged particle in a magnetic field. The interface passes data to Python through a text file. The first component of the interface is a user-friendly data entry screen designed in VB, in which the user can input values of the charge, mass, initial position and initial velocity of the particle, and the magnetic field. Next, a command button is coded to write these values to a text file. Another command button starts the VPython program, which reads the data from the text file, numerically solves the equation of motion, and provides the 3D graphics animation. Students can use the interface to run the program several times with different data and observe changes in the motion.
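
    The Python side of such an interface can be sketched as follows. The "name=value" file format and parameter names are assumptions for illustration, and the VPython animation layer is omitted; only the file hand-off and the numerical solution of the Lorentz-force equation of motion are shown:

```python
import math

def read_params(path):
    """Parse the simple 'name=value' text file written by the VB front
    end (this file format is an assumed example, not the paper's spec)."""
    params = {}
    with open(path) as f:
        for line in f:
            if "=" in line:
                key, val = line.split("=", 1)
                params[key.strip()] = float(val)
    return params

def step(state, q, m, B, dt):
    """Advance (x, y, vx, vy) one semi-implicit Euler step under the
    Lorentz force F = q v x B, for a uniform field B along +z."""
    x, y, vx, vy = state
    ax = q * B * vy / m    # (v x B)_x = vy * Bz
    ay = -q * B * vx / m   # (v x B)_y = -vx * Bz
    vx += ax * dt
    vy += ay * dt
    return x + vx * dt, y + vy * dt, vx, vy
```

    With q = m = B = 1 and unit speed, the particle traces a circle of radius r = mv/(qB) = 1, which students can verify by varying the inputs.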

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strons, Philip; Bailey, James L.

    Anemometer readings alone cannot provide a complete picture of air flow patterns at an open gloveport. Having a means to visualize air flow for field tests in general provides greater insight by indicating direction in addition to the magnitude of the air flow velocities in the region of interest. Furthermore, flow visualization is essential for Computational Fluid Dynamics (CFD) verification, where important modeling assumptions play a significant role in analyzing the chaotic nature of low-velocity air flow. A good example is shown in Figure 1, where an unexpected vortex pattern occurred during a field test that could not have been measured relying only on anemometer readings. Here, observing and measuring the patterns of the smoke flowing into the gloveport allowed the CFD model to be appropriately updated to match the actual flow velocities in both magnitude and direction.

  10. The effect of asymmetric vortex wake characteristics on a slender delta wing undergoing wing rock motion

    NASA Technical Reports Server (NTRS)

    Arena, A. S., Jr.; Nelson, R. C.

    1989-01-01

    An experimental investigation into the fluid mechanisms responsible for wing rock on a slender delta wing with 80 deg leading edge sweep has been conducted. Time history and flow visualization data are presented for a wide angle-of-attack range. The use of an air bearing spindle has allowed the motion of the wing to be free from bearing friction or mechanical hysteresis. A bistable static condition has been found in vortex breakdown at an angle of attack of 40 deg which causes an overshoot of the steady state rocking amplitude. Flow visualization experiments also reveal a difference in static and dynamic breakdown locations on the wing. A hysteresis loop in dynamic breakdown location similar to that seen on pitching delta wings was observed as the wing was undergoing the limit cycle oscillation.

  11. Rarefied flow diagnostics using pulsed high-current electron beams

    NASA Technical Reports Server (NTRS)

    Wojcik, Radoslaw M.; Schilling, John H.; Erwin, Daniel A.

    1990-01-01

    The use of high-current short-pulse electron beams in low-density gas flow diagnostics is introduced. Efficient beam propagation is demonstrated at pressures up to 300 microns. The beams, generated by low-pressure pseudospark discharges in helium, provide extremely high fluorescence levels, allowing time-resolved visualization in high-background environments. The fluorescence signal frequency is species-dependent, allowing instantaneous visualization of mixing flowfields.

  12. Time-resolved imaging of the MALDI linear-TOF ion cloud: direct visualization and exploitation of ion optical phenomena using a position- and time-sensitive detector.

    PubMed

    Ellis, Shane R; Soltwisch, Jens; Heeren, Ron M A

    2014-05-01

    In this study, we describe the implementation of a position- and time-sensitive detection system (Timepix detector) to directly visualize the spatial distributions of the matrix-assisted laser desorption ionization ion cloud in a linear-time-of-flight (MALDI linear-ToF) as it is projected onto the detector surface. These time-resolved images allow direct visualization of m/z-dependent ion focusing effects that occur within the ion source of the instrument. The influence of key parameters, namely extraction voltage (E(V)), pulsed-ion extraction (PIE) delay, and even the matrix-dependent initial ion velocity was investigated and were found to alter the focusing properties of the ion-optical system. Under certain conditions where the spatial focal plane coincides with the detector plane, so-called x-y space focusing could be observed (i.e., the focusing of the ion cloud to a small, well-defined spot on the detector). Such conditions allow for the stigmatic ion imaging of intact proteins for the first time on a commercial linear ToF-MS system. In combination with the ion-optical magnification of the system (~100×), a spatial resolving power of 11–16 μm with a pixel size of 550 nm was recorded within a laser spot diameter of ~125 μm. This study demonstrates both the diagnostic and analytical advantages offered by the Timepix detector in ToF-MS.
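
    The m/z-dependence underlying these focusing effects follows from the basic linear-ToF relation t = L·sqrt(m/(2zeU)): all ions gain the same energy per charge, so heavier ions drift more slowly. A brief sketch (the drift length and acceleration voltage below are illustrative numbers, not the instrument's values):

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
DALTON = 1.66053906660e-27   # unified atomic mass unit, kg

def flight_time(mz, drift_length=1.2, accel_voltage=20000.0):
    """Arrival time (s) of an ion of mass-to-charge mz (Da per charge)
    in an idealized linear ToF: accelerated through accel_voltage, then
    drifting drift_length at constant speed, so t = L * sqrt(m/(2 z e U))."""
    v = math.sqrt(2.0 * E_CHARGE * accel_voltage / (mz * DALTON))
    return drift_length / v
```

    Arrival time scales as sqrt(m/z), which is exactly the separation that the time-resolved detector images resolve.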

  13. The Arctic Research Mapping Application (ARMAP): a Geoportal for Visualizing Project-level Information About U.S. Funded Research in the Arctic

    NASA Astrophysics Data System (ADS)

    Kassin, A.; Cody, R. P.; Barba, M.; Gaylord, A. G.; Manley, W. F.; Score, R.; Escarzaga, S. M.; Tweedie, C. E.

    2016-12-01

    The Arctic Research Mapping Application (ARMAP; http://armap.org/) is a suite of online applications and data services that support Arctic science by providing project tracking information (who is doing what, when, and where in the region) for United States Government funded projects. In collaboration with 17 research agencies, project locations are displayed in a visually enhanced web mapping application. Key information about each project is presented along with links to web pages that provide additional information, including links to data where possible. The latest ARMAP iteration has i) reworked the search user interface (UI) to enable multiple filters to be applied in user-driven queries and ii) implemented the ArcGIS JavaScript API 4.0 to allow deployment of 3D maps directly into a user's web browser and enhanced customization of popups. Module additions include i) a dashboard UI powered by a back-end Apache Solr engine to visualize data in intuitive and interactive charts; and ii) a printing module that allows users to customize maps and export them to different formats (pdf, ppt, gif and jpg). New reference layers and an updated ship tracks layer have also been added. These enhancements aim to improve discoverability, enhance logistics coordination, identify geographic gaps in research/observation effort, and foster enhanced collaboration among the research community. Additionally, ARMAP can be used to demonstrate past, present, and future research effort supported by the U.S. Government.

  14. Our World Their World

    ERIC Educational Resources Information Center

    Brisco, Nicole

    2011-01-01

    Build, create, make, blog, develop, organize, structure, perform. These are just a few verbs that illustrate the visual world. These words create images that allow students to respond to their environment. Visual culture studies recognize the predominance of visual forms of media, communication, and information in the postmodern world. This…

  15. How Much Gravity Is Needed to Establish the Perceptual Upright?

    PubMed Central

    Harris, Laurence R.; Herpers, Rainer; Hofhammer, Thomas; Jenkin, Michael

    2014-01-01

    Might the gravity levels found on other planets and on the moon be sufficient to provide an adequate perception of upright for astronauts? Can the amount of gravity required be predicted from the physiological threshold for linear acceleration? The perception of upright is determined not only by gravity but also visual information when available and assumptions about the orientation of the body. Here, we used a human centrifuge to simulate gravity levels from zero to earth gravity along the long-axis of the body and measured observers' perception of upright using the Oriented Character Recognition Test (OCHART) with and without visual cues arranged to indicate a direction of gravity that differed from the body's long axis. This procedure allowed us to assess the relative contribution of the added gravity in determining the perceptual upright. Control experiments off the centrifuge allowed us to measure the relative contributions of normal gravity, vision, and body orientation for each participant. We found that the influence of 1 g in determining the perceptual upright did not depend on whether the acceleration was created by lying on the centrifuge or by normal gravity. The 50% threshold for centrifuge-simulated gravity's ability to influence the perceptual upright was at around 0.15 g, close to the level of moon gravity but much higher than the threshold for detecting linear acceleration along the long axis of the body. This observation may partially explain the instability of moonwalkers but is good news for future missions to Mars. PMID:25184481
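
    Analyses of the "relative contribution" of cues to the perceptual upright are commonly modeled as a weighted vector sum of the gravity, vision, and body-axis directions. The sketch below illustrates that idea under assumed equal default weights; it is not the paper's fitted model:

```python
import math

def perceptual_upright(grav_deg, vis_deg, body_deg, w_g=1.0, w_v=1.0, w_b=1.0):
    """Weighted vector-sum estimate (degrees) of the perceptual upright
    from the directions indicated by gravity, vision, and the body axis.
    The weights express each cue's relative contribution."""
    x = y = 0.0
    for ang, w in ((grav_deg, w_g), (vis_deg, w_v), (body_deg, w_b)):
        x += w * math.cos(math.radians(ang))
        y += w * math.sin(math.radians(ang))
    return math.degrees(math.atan2(y, x))
```

    Reducing w_g toward zero, as in weak simulated gravity, shifts the estimate toward the visual and body cues, mirroring the finding that sub-threshold gravity levels fail to anchor the perceptual upright.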

  16. A web portal for hydrodynamical, cosmological simulations

    NASA Astrophysics Data System (ADS)

    Ragagnin, A.; Dolag, K.; Biffi, V.; Cadolle Bel, M.; Hammer, N. J.; Krukau, A.; Petkova, M.; Steinborn, D.

    2017-07-01

    This article describes a data centre hosting a web portal for accessing and sharing the output of large, cosmological, hydrodynamical simulations with a broad scientific community. It also allows users to receive related scientific data products by directly processing the raw simulation data on a remote computing cluster. The data centre has a multi-layer structure: a web portal, a job control layer, a computing cluster and an HPC storage system. The outer layer enables users to choose an object from the simulations. Objects can be selected by visually inspecting 2D maps of the simulation data, by performing highly compound and elaborate queries, or graphically by plotting arbitrary combinations of properties. The user can then run analysis tools on the chosen object, processing the raw simulation data directly. The job control layer is responsible for handling and performing the analysis jobs, which are executed on a computing cluster. The innermost layer is an HPC storage system which hosts the large, raw simulation data. The following services are available to users: (I) CLUSTERINSPECT visualizes properties of member galaxies of a selected galaxy cluster; (II) SIMCUT returns the raw data of a sub-volume around a selected object from a simulation, containing all the original, hydrodynamical quantities; (III) SMAC creates idealized 2D maps of various physical quantities and observables of a selected object; (IV) PHOX generates virtual X-ray observations with specifications of various current and upcoming instruments.

  17. Assessment of technical condition of concrete pavement by the example of district road

    NASA Astrophysics Data System (ADS)

    Linek, M.; Nita, P.; Żebrowski, W.; Wolka, P.

    2018-05-01

    The article presents a comprehensive assessment of concrete pavement condition. The analyses covered a district road located in the Swietokrzyskie province, in use for 11 years. Comparative analyses were conducted twice. The first analysis was carried out after 9 years of pavement operation, in 2015. In order to assess the extent of pavement degradation, the tests were repeated in 2017. Within the scope of the field research, the traffic intensity within the analysed road section was determined. Visual assessment of pavement condition was conducted according to the guidelines included in SOSN-B. Visual assessment can be extended by ground-penetrating radar measurements, which allow a comprehensive assessment of structural changes that have occurred across the pavement's entire thickness and length. The assessment also included performance parameters, i.e. pavement regularity, surface roughness and texture. Extending the test results with an assessment of changes in the internal structure of the concrete composite, together with structure observations by means of a Scanning Electron Microscope, allows the parameters of the internal structure of the hardened concrete to be assessed. Supplementing these observations with computed tomography scans provides comprehensive information on possible discontinuities in the composite structure. Based on the analysis of the obtained results, conclusions concerning the analysed pavement's condition were reached. It was determined that the pavement exhibits high performance parameters, its condition is good and it does not require any repairs. Maintenance treatment was suggested in order to extend the period of proper operation of the analysed pavement.

  18. How much gravity is needed to establish the perceptual upright?

    PubMed

    Harris, Laurence R; Herpers, Rainer; Hofhammer, Thomas; Jenkin, Michael

    2014-01-01

    Might the gravity levels found on other planets and on the moon be sufficient to provide an adequate perception of upright for astronauts? Can the amount of gravity required be predicted from the physiological threshold for linear acceleration? The perception of upright is determined not only by gravity but also visual information when available and assumptions about the orientation of the body. Here, we used a human centrifuge to simulate gravity levels from zero to earth gravity along the long-axis of the body and measured observers' perception of upright using the Oriented Character Recognition Test (OCHART) with and without visual cues arranged to indicate a direction of gravity that differed from the body's long axis. This procedure allowed us to assess the relative contribution of the added gravity in determining the perceptual upright. Control experiments off the centrifuge allowed us to measure the relative contributions of normal gravity, vision, and body orientation for each participant. We found that the influence of 1 g in determining the perceptual upright did not depend on whether the acceleration was created by lying on the centrifuge or by normal gravity. The 50% threshold for centrifuge-simulated gravity's ability to influence the perceptual upright was at around 0.15 g, close to the level of moon gravity but much higher than the threshold for detecting linear acceleration along the long axis of the body. This observation may partially explain the instability of moonwalkers but is good news for future missions to Mars.

  19. Comparative evaluation of toric intraocular lens alignment and visual quality with image-guided surgery and conventional three-step manual marking.

    PubMed

    Titiyal, Jeewan S; Kaur, Manpreet; Jose, Cijin P; Falera, Ruchita; Kinkar, Ashutosh; Bageshwar, Lalit Ms

    2018-01-01

To compare toric intraocular lens (IOL) alignment assisted by image-guided surgery or manual marking methods and its impact on visual quality. This prospective comparative study enrolled 80 eyes with cataract and astigmatism ≥1.5 D to undergo phacoemulsification with toric IOL alignment by the manual marking method using a bubble marker (group I, n=40) or by Callisto eye and Z align (group II, n=40). Postoperatively, accuracy of alignment and visual quality were assessed with a ray tracing aberrometer. The primary outcome measure was deviation from the target axis of implantation. Secondary outcome measures were visual quality and acuity. Follow-up was performed on postoperative days (PODs) 1 and 30. Deviation from the target axis of implantation was significantly less in group II on PODs 1 and 30 (group I: 5.5°±3.3°, group II: 3.6°±2.6°; p=0.005). Postoperative refractive cylinder was -0.89±0.35 D in group I and -0.64±0.36 D in group II (p=0.003). Visual acuity was comparable between the two groups. Visual quality measured in terms of Strehl ratio (p<0.05) and modulation transfer function (MTF) (p<0.05) was significantly better in the image-guided surgery group. A significant negative correlation was observed between deviation from the target axis and the visual quality parameters (Strehl ratio and MTF) (p<0.05). Image-guided surgery allows precise alignment of the toric IOL without the need for reference marking, and is associated with superior visual quality that correlates with the precision of IOL alignment.

  20. Comparative evaluation of toric intraocular lens alignment and visual quality with image-guided surgery and conventional three-step manual marking

    PubMed Central

    Titiyal, Jeewan S; Kaur, Manpreet; Jose, Cijin P; Falera, Ruchita; Kinkar, Ashutosh; Bageshwar, Lalit MS

    2018-01-01

Purpose To compare toric intraocular lens (IOL) alignment assisted by image-guided surgery or manual marking methods and its impact on visual quality. Patients and methods This prospective comparative study enrolled 80 eyes with cataract and astigmatism ≥1.5 D to undergo phacoemulsification with toric IOL alignment by the manual marking method using a bubble marker (group I, n=40) or by Callisto eye and Z align (group II, n=40). Postoperatively, accuracy of alignment and visual quality were assessed with a ray tracing aberrometer. The primary outcome measure was deviation from the target axis of implantation. Secondary outcome measures were visual quality and acuity. Follow-up was performed on postoperative days (PODs) 1 and 30. Results Deviation from the target axis of implantation was significantly less in group II on PODs 1 and 30 (group I: 5.5°±3.3°, group II: 3.6°±2.6°; p=0.005). Postoperative refractive cylinder was −0.89±0.35 D in group I and −0.64±0.36 D in group II (p=0.003). Visual acuity was comparable between the two groups. Visual quality measured in terms of Strehl ratio (p<0.05) and modulation transfer function (MTF) (p<0.05) was significantly better in the image-guided surgery group. A significant negative correlation was observed between deviation from the target axis and the visual quality parameters (Strehl ratio and MTF) (p<0.05). Conclusion Image-guided surgery allows precise alignment of the toric IOL without the need for reference marking. It is associated with superior visual quality which correlates with the precision of IOL alignment. PMID:29731603

  1. Visual activity predicts auditory recovery from deafness after adult cochlear implantation.

    PubMed

    Strelnikov, Kuzma; Rouger, Julien; Demonet, Jean-François; Lagleyre, Sebastien; Fraysse, Bernard; Deguine, Olivier; Barone, Pascal

    2013-12-01

Modern cochlear implantation technologies allow deaf patients to understand auditory speech; however, the implants deliver only a coarse auditory input, and patients must rely on long-term adaptive processes to achieve coherent percepts. In adults with post-lingual deafness, most speech recovery occurs during the first year after cochlear implantation, but there is large variability in both the level of cochlear implant outcomes and the time course of recovery. It has been proposed that when profoundly deaf subjects receive a cochlear implant, visual cross-modal reorganization of the brain is deleterious for auditory speech recovery. We tested this hypothesis in post-lingually deaf adults by analysing whether brain activity shortly after implantation correlated with the level of auditory recovery 6 months later. Based on brain activity induced by a speech-processing task, we found strong positive correlations in areas outside the auditory cortex. The highest positive correlations were found in the occipital cortex involved in visual processing, as well as in the posterior-temporal cortex known for audio-visual integration. A further positively correlated area was localized in the left inferior frontal area known for speech processing. Our results demonstrate that the functional level of the visual modality is related to the proficiency of auditory recovery. Based on the positive correlation of visual activity with auditory speech recovery, we suggest that the visual modality may facilitate the perception of a word's auditory counterpart in communicative situations. The link demonstrated between visual activity and auditory speech perception indicates that visuoauditory synergy is crucial for cross-modal plasticity and for fostering speech-comprehension recovery in adult cochlear-implanted deaf patients.

  2. Solid Hydrogen Experiments for Atomic Propellants: Particle Formation, Imaging, Observations, and Analyses

    NASA Technical Reports Server (NTRS)

    Palaszewski, Bryan

    2005-01-01

This report presents particle formation observations and detailed analyses of images from experiments on the formation of solid hydrogen particles in liquid helium. Hydrogen was frozen into particles in liquid helium and observed with a video camera. The solid hydrogen particle sizes and the total mass of hydrogen particles were estimated. These newly analyzed data are from the test series held on February 28, 2001. Particle sizes from the previous testing in 1999 and the testing in 2001 were similar. Though the 2001 testing created similar particle sizes, new particle formation phenomena were observed: microparticles and delayed particle formation. These image analyses are some of the first steps toward visually characterizing these particles, and they allow designers to understand what issues must be addressed in atomic propellant feed system designs for future aerospace vehicles.

  3. Object permanence in lemurs.

    PubMed

    Deppe, Anja M; Wright, Patricia C; Szelistowski, William A

    2009-03-01

Object permanence, the ability to mentally represent objects that have disappeared from view, should be advantageous to animals in their interaction with the natural world. The objective of this study was to examine whether lemurs possess object permanence. Thirteen adult subjects representing four species of diurnal lemur (Eulemur fulvus rufus, Eulemur mongoz, Lemur catta and Hapalemur griseus) were presented with seven standard Piagetian visible and invisible object displacement tests, plus a single visible test in which the subject had to wait a predetermined time before being allowed to search, and two invisible tests in which each hiding place was made visually unique. In all visible tests, lemurs were able to find an object that had been in clear view before being hidden. However, when lemurs had to wait for up to 25 s before searching, performance declined with increasing delay. Subjects did not outperform chance on any invisible displacement regardless of whether hiding places were visually uniform or unique; therefore, the upper limit of object permanence observed was Stage 5b. The lemur species in this study eat stationary foods and are not subject to stalking predators, thus Stage 5 object permanence is probably sufficient to solve most problems encountered in the wild.

  4. McIDAS-V: Data Analysis and Visualization for NPOESS and GOES-R

    NASA Astrophysics Data System (ADS)

    Rink, T.; Achtor, T. H.

    2009-12-01

McIDAS-V, the next-generation McIDAS, is being built on top of a modern, cross-platform software framework which supports the development of 4-D, interactive displays and the integration of a wide array of geophysical data. As the replacement for McIDAS, the development emphasis is on future satellite observation platforms such as NPOESS and GOES-R. Data interrogation, analysis and visualization capabilities have been developed for multi- and hyper-spectral instruments like MODIS, AIRS and IASI, and are being extended for application to VIIRS and CrIS. Compatibility with GOES-R ABI Level 1 and Level 2 product storage formats has been demonstrated. The abstract data model, which can internalize almost any geophysical data, opens up new possibilities for data fusion techniques, for example polar and geostationary (LEO/GEO) synergy for research and validation. McIDAS-V follows an object-oriented design model, using the Java programming language, allowing specialized extensions for new sources of data and for novel displays and interactive behavior. The reference application, what the user sees on startup, can be customized, and the system has a persistence mechanism allowing sharing of the application state across the internet. McIDAS-V is open-source and free to the public.

  5. Visual Double Stars - St. Mary's High School Astronomy Club

    NASA Astrophysics Data System (ADS)

    Bensel, Holly; Tran, Thanh; Hicks, Sean; He, Yifan; Moczygemba, Mitchell; Shi, Yuqi; Sternenberg, Leah; Watson, Kaycia; Rooney, Kieran; Birmingham, Paige; You, Ruiyang

    2017-01-01

The St. Mary’s School Astronomy Club is working towards measuring the separations and position angles of relatively unstudied visual binary stars. We are starting by confirming prior results we obtained at the Pine Mountain Observatory Summer Science Research Workshop in 2009-2012 on ARY 52 (Frey et al. 2009, JDSO), Iota Bootis (Bensel et al. 2009, JDSO), and Mizar (Bensel et al. 2009, JDSO). We are also comparing our results with those published in the Washington Double Star Catalog (Mason 2009). We are using Pine Mountain Observatory’s remote-imaging 14-inch Meade Schmidt-Cassegrain telescope equipped with a CCD camera, operated by Scott Fisher at the University of Oregon, and local astronomer Sean Curry’s 12.5" PlaneWave CDK telescope. We are practicing with tools such as astrometry.net and DS9 software to measure the separations and position angles of known double stars with well-established values before attempting new measurements. Our next project will be to study “neglected” visual double stars, lesser-studied double stars with fainter magnitudes. (A neglected double star is one that has not been observed extensively or recently.) Double star analysis is relatively straightforward and can be performed with equipment available to most high schools. Educational outcomes include instrument setup, orientation, instruction, observations, analysis, presentation of data, and writing up findings for publication. Accurate recording of data is a useful and important life skill for all students to learn, as is learning to work together to accomplish a specific goal. This project allows novice and experienced observers to work hand-in-hand toward such a goal, for example the publication of a research paper in the Journal of Double Star Observations.

  6. Keypress-Based Musical Preference Is Both Individual and Lawful.

    PubMed

    Livengood, Sherri L; Sheppard, John P; Kim, Byoung W; Malthouse, Edward C; Bourne, Janet E; Barlow, Anne E; Lee, Myung J; Marin, Veronica; O'Connor, Kailyn P; Csernansky, John G; Block, Martin P; Blood, Anne J; Breiter, Hans C

    2017-01-01

Musical preference is highly individualized and is an area of active study to develop methods for its quantification. Recently, preference-based behavior, associated with activity in brain reward circuitry, has been shown to follow lawful, quantifiable patterns, despite broad variation across individuals. These patterns, observed using a keypress paradigm with visual stimuli, form the basis for relative preference theory (RPT). Here, we sought to determine if such patterns extend to non-visual domains (i.e., audition) and dynamic stimuli, potentially providing a method to supplement psychometric, physiological, and neuroimaging approaches to preference quantification. For this study, we adapted our keypress paradigm to two sets of stimuli consisting of seventeenth to twenty-first century western art music (Classical) and twentieth to twenty-first century jazz and popular music (Popular). We studied a pilot sample and then a separate primary experimental sample with this paradigm, and used iterative mathematical modeling to determine if RPT relationships were observed with high R2 fits. We further assessed the extent of heterogeneity in the rank ordering of keypress-based responses across subjects. As expected, individual rank orderings of preferences were quite heterogeneous, yet we observed mathematical patterns fitting these data similar to those observed previously with visual stimuli. These patterns in music preference were recurrent across two cohorts and two stimulus sets, and scaled between individual and group data, adhering to the requirements for lawfulness. Our findings suggest a general neuroscience framework that predicts human approach/avoidance behavior, while also allowing for individual differences and the broad diversity of human choices; the resulting framework may offer novel approaches to advancing music neuroscience, or its applications to medicine and recommendation systems.

  7. Keypress-Based Musical Preference Is Both Individual and Lawful

    PubMed Central

    Livengood, Sherri L.; Sheppard, John P.; Kim, Byoung W.; Malthouse, Edward C.; Bourne, Janet E.; Barlow, Anne E.; Lee, Myung J.; Marin, Veronica; O'Connor, Kailyn P.; Csernansky, John G.; Block, Martin P.; Blood, Anne J.; Breiter, Hans C.

    2017-01-01

    Musical preference is highly individualized and is an area of active study to develop methods for its quantification. Recently, preference-based behavior, associated with activity in brain reward circuitry, has been shown to follow lawful, quantifiable patterns, despite broad variation across individuals. These patterns, observed using a keypress paradigm with visual stimuli, form the basis for relative preference theory (RPT). Here, we sought to determine if such patterns extend to non-visual domains (i.e., audition) and dynamic stimuli, potentially providing a method to supplement psychometric, physiological, and neuroimaging approaches to preference quantification. For this study, we adapted our keypress paradigm to two sets of stimuli consisting of seventeenth to twenty-first century western art music (Classical) and twentieth to twenty-first century jazz and popular music (Popular). We studied a pilot sample and then a separate primary experimental sample with this paradigm, and used iterative mathematical modeling to determine if RPT relationships were observed with high R2 fits. We further assessed the extent of heterogeneity in the rank ordering of keypress-based responses across subjects. As expected, individual rank orderings of preferences were quite heterogeneous, yet we observed mathematical patterns fitting these data similar to those observed previously with visual stimuli. These patterns in music preference were recurrent across two cohorts and two stimulus sets, and scaled between individual and group data, adhering to the requirements for lawfulness. Our findings suggest a general neuroscience framework that predicts human approach/avoidance behavior, while also allowing for individual differences and the broad diversity of human choices; the resulting framework may offer novel approaches to advancing music neuroscience, or its applications to medicine and recommendation systems. PMID:28512395

  8. Eye Movements to Natural Images as a Function of Sex and Personality

    PubMed Central

    Mercer Moss, Felix Joseph; Baddeley, Roland; Canagarajah, Nishan

    2012-01-01

Women and men are different. As humans are highly visual animals, these differences should be reflected in the pattern of eye movements they make when interacting with the world. We examined fixation distributions of 52 women and men while viewing 80 natural images and found systematic differences in their spatial and temporal characteristics. The most striking of these was that women looked away from, and usually below, many objects of interest, particularly when rating images in terms of their potency. We also found reliable differences correlated with the images' semantic content, the observers' personality, and how the images were semantically evaluated. Information theoretic techniques showed that many of these differences increased with viewing time. These effects were not small: the fixations to a single action or romance film image allow the classification of the sex of an observer with 64% accuracy. While men and women may live in the same environment, what they see in this environment is reliably different. Our findings have important implications for both past and future eye movement research while confirming the significant role individual differences play in visual attention. PMID:23248740

  9. Climate Engine - Monitoring Drought with Google Earth Engine

    NASA Astrophysics Data System (ADS)

    Hegewisch, K.; Daudert, B.; Morton, C.; McEvoy, D.; Huntington, J. L.; Abatzoglou, J. T.

    2016-12-01

Drought has adverse effects on society through reduced water availability and agricultural production and increased wildfire risk. An abundance of remotely sensed imagery and climate data is being collected in near-real time that can provide place-based monitoring and early warning of drought and related hazards. However, in an era of an increasing wealth of earth observations, tools that quickly access, compute, and visualize archives and provide answers at relevant scales to better inform decision-making are lacking. We have developed ClimateEngine.org, a web application that uses Google's Earth Engine platform to enable users to quickly compute and visualize real-time observations. A suite of drought indices allows us to monitor and track drought from local (30-meter) to regional scales and to contextualize current droughts within the historical record. Climate Engine is currently being used by U.S. federal agencies and researchers to develop baseline conditions and impact assessments related to agricultural, ecological, and hydrological drought. Climate Engine is also working with the Famine Early Warning Systems Network (FEWS NET) to expedite monitoring of agricultural drought over broad areas at risk of food insecurity globally.

  10. iClimate: a climate data and analysis portal

    NASA Astrophysics Data System (ADS)

    Goodman, P. J.; Russell, J. L.; Merchant, N.; Miller, S. J.; Juneja, A.

    2015-12-01

    We will describe a new climate data and analysis portal called iClimate that facilitates direct comparisons between available climate observations and climate simulations. Modeled after the successful iPlant Collaborative Discovery Environment (www.iplantcollaborative.org) that allows plant scientists to trade and share environmental, physiological and genetic data and analyses, iClimate provides an easy-to-use platform for large-scale climate research, including the storage, sharing, automated preprocessing, analysis and high-end visualization of large and often disparate observational and model datasets. iClimate will promote data exploration and scientific discovery by providing: efficient and high-speed transfer of data from nodes around the globe (e.g. PCMDI and NASA); standardized and customized data/model metrics; efficient subsampling of datasets based on temporal period, geographical region or variable; and collaboration tools for sharing data, workflows, analysis results, and data visualizations with collaborators or with the community at large. We will present iClimate's capabilities, and demonstrate how it will simplify and enhance the ability to do basic or cutting-edge climate research by professionals, laypeople and students.

  11. Neural systems for preparatory control of imitation.

    PubMed

    Cross, Katy A; Iacoboni, Marco

    2014-01-01

    Humans have an automatic tendency to imitate others. Previous studies on how we control these tendencies have focused on reactive mechanisms, where inhibition of imitation is implemented after seeing an action. This work suggests that reactive control of imitation draws on at least partially specialized mechanisms. Here, we examine preparatory imitation control, where advance information allows control processes to be employed before an action is observed. Drawing on dual route models from the spatial compatibility literature, we compare control processes using biological and non-biological stimuli to determine whether preparatory imitation control recruits specialized neural systems that are similar to those observed in reactive imitation control. Results indicate that preparatory control involves anterior prefrontal, dorsolateral prefrontal, posterior parietal and early visual cortices regardless of whether automatic responses are evoked by biological (imitative) or non-biological stimuli. These results indicate both that preparatory control of imitation uses general mechanisms, and that preparatory control of imitation draws on different neural systems from reactive imitation control. Based on the regions involved, we hypothesize that preparatory control is implemented through top-down attentional biasing of visual processing.

  12. A Bayesian Account of Visual–Vestibular Interactions in the Rod-and-Frame Task

    PubMed Central

    de Brouwer, Anouk J.; Medendorp, W. Pieter

    2016-01-01

Panoramic visual cues, as generated by the objects in the environment, provide the brain with important information about gravity direction. To derive an optimal, i.e., Bayesian, estimate of gravity direction, the brain must combine panoramic information with gravity information detected by the vestibular system. Here, we examined the individual sensory contributions to this estimate psychometrically. We asked human subjects to judge the orientation (clockwise or counterclockwise relative to gravity) of a briefly flashed luminous rod, presented within an oriented square frame (rod-in-frame). Vestibular contributions were manipulated by tilting the subject’s head, whereas visual contributions were manipulated by changing the viewing distance of the rod and frame. Results show a cyclical modulation of the frame-induced bias in perceived verticality across a 90° range of frame orientations. The magnitude of this bias decreased significantly with larger viewing distance, as if visual reliability was reduced. Biases increased significantly when the head was tilted, as if vestibular reliability was reduced. A Bayesian optimal integration model, with distinct vertical and horizontal panoramic weights, a gain factor to allow for visual reliability changes, and ocular counterroll in response to head tilt, provided a good fit to the data. We conclude that subjects flexibly weigh visual panoramic and vestibular information based on their orientation-dependent reliability, resulting in the observed verticality biases and the associated response variabilities. PMID:27844055
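
    The Bayesian optimal-integration account above weights each cue by its reliability (inverse variance). A minimal sketch of precision-weighted fusion of two Gaussian cues (the function name and all numbers are illustrative, not the authors' fitted parameters):

```python
import math

def fuse_cues(mu_vis, sigma_vis, mu_vest, sigma_vest):
    """Precision-weighted (Bayesian) fusion of a visual and a vestibular
    estimate of gravity direction, each modeled as a Gaussian.
    Returns the posterior mean and standard deviation (in degrees)."""
    w_vis = 1.0 / sigma_vis ** 2    # visual reliability (precision)
    w_vest = 1.0 / sigma_vest ** 2  # vestibular reliability
    mu = (w_vis * mu_vis + w_vest * mu_vest) / (w_vis + w_vest)
    sigma = math.sqrt(1.0 / (w_vis + w_vest))
    return mu, sigma

# Head upright, frame tilted 20 deg: the more reliable vestibular cue
# pulls the fused estimate most of the way back toward true vertical.
mu, sigma = fuse_cues(mu_vis=20.0, sigma_vis=8.0, mu_vest=0.0, sigma_vest=4.0)
```

    Increasing sigma_vest (as with head tilt) shifts the fused estimate toward the frame, reproducing the qualitative pattern of larger frame-induced biases reported above.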

  13. Towards a Comprehensive Computational Simulation System for Turbomachinery

    NASA Technical Reports Server (NTRS)

    Shih, Ming-Hsin

    1994-01-01

The objective of this work is to develop algorithms associated with a comprehensive computational simulation system for turbomachinery flow fields. The development proceeds in a modular fashion, comprising grid generation, visualization, network, simulation, toolbox, and flow modules. An interactive grid generation module is customized to facilitate the grid generation process for complicated turbomachinery configurations. With its user-friendly graphical user interface, the user may interactively manipulate the default settings to obtain a quality grid in a fraction of the time usually required to build a grid about the same geometry with a general-purpose grid generation code. Non-Uniform Rational B-Spline formulations are utilized in the algorithm to maintain geometry fidelity while redistributing grid points on the solid surfaces. Bezier curve formulations are used for interactive construction of inner boundaries and for interactive point distribution. Cascade surfaces are transformed from three-dimensional surfaces of revolution into two-dimensional parametric planes for easy manipulation; this transformation allows the manipulated plane grids to be mapped back to surfaces of revolution by any generatrix definition. A sophisticated visualization module allows visualization of both the grid and the flow solution, steady or unsteady. A network module allows data transfer in a heterogeneous environment. A flow module is integrated into the system using an existing turbomachinery flow code. A simulation module combines the network, flow, and visualization modules to achieve near real-time flow simulation about turbomachinery geometries. A toolbox module supports the overall task. A batch version of the grid generation module allows portability and has been extended to perform dynamic grid generation for pitch-changing turbomachinery configurations. Various applications with different characteristics are presented to demonstrate the success of this system.
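
    The Bezier machinery used for boundary construction and point distribution can be made concrete with de Casteljau's algorithm, the standard numerically stable way to evaluate a Bezier curve from its control points (a generic sketch, not code from the system described):

```python
def de_casteljau(points, t):
    """Evaluate a Bezier curve at parameter t (0 <= t <= 1) from its
    control points via repeated linear interpolation (de Casteljau)."""
    pts = [list(p) for p in points]
    n = len(pts)
    for r in range(1, n):
        for i in range(n - r):
            # Blend each adjacent pair of points at ratio t.
            pts[i] = [(1 - t) * a + t * b for a, b in zip(pts[i], pts[i + 1])]
    return tuple(pts[0])

# Quadratic curve arching from (0, 0) to (2, 0) through control point (1, 2):
midpoint = de_casteljau([(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)], 0.5)  # (1.0, 1.0)
```

    Interactive point distribution along a boundary then amounts to evaluating such a curve at a user-controlled set of parameter values.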

  14. Web-based, GPU-accelerated, Monte Carlo simulation and visualization of indirect radiation imaging detector performance.

    PubMed

    Dong, Han; Sharma, Diksha; Badano, Aldo

    2014-12-01

Monte Carlo simulations play a vital role in understanding the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridmantis, uses a novel hybrid concept to model indirect scintillator detectors, balancing the computational load between CPU and graphics processing unit (GPU) processors and obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webmantis and visualmantis, that facilitate the setup of computational experiments via hybridmantis. The visualization tools enable the user to control simulation properties through a user interface. In the case of webmantis, control via a web browser allows access through mobile devices such as smartphones or tablets. webmantis acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users execute different experiments in parallel. The output consists of the point response, the pulse-height spectrum, and optical transport statistics generated by hybridmantis. Users can download the output images and statistics as a zip file for future reference. In addition, webmantis provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns, allowing the user to trace the history of the optical photons. The tools provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers, while allowing users to save simulation parameters and results from prior experiments. The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying input parameters to receiving visual feedback on the model predictions.

  15. Suppression of spontaneous nystagmus during different visual fixation conditions.

    PubMed

    Hirvonen, Timo P; Juhola, Martti; Aalto, Heikki

    2012-07-01

Analysis of spontaneous nystagmus is important in the evaluation of dizzy patients. The aim was to measure how different visual conditions affect the properties of nystagmus using three-dimensional video-oculography (VOG). We compared the prevalence, frequency and slow phase velocity (SPV) of spontaneous nystagmus with gaze fixation allowed, with Frenzel's glasses, and in total darkness. Twenty-five patients (35 measurements) with peripheral vestibular pathologies were included. The prevalence of nystagmus with gaze fixation was 40%; it increased significantly to 66% with Frenzel's glasses and the regular room lights on (p < 0.01), to 83% when the regular room lights were switched off (p = 0.014), and further to 100% in total darkness (p = 0.025). The mean SPV of nystagmus with visual fixation allowed was 1.0°/s. It increased to 2.4°/s with Frenzel's glasses and room lights on, and to 3.1°/s when the regular room lights were switched off. The mean SPV in total darkness was 6.9°/s. The difference was highly significant between all test conditions (p < 0.01). The frequency of nystagmus was 0.7 beats/s with gaze fixation, 0.8 beats/s in both test conditions with Frenzel's glasses on, and 1.2 beats/s in total darkness. The frequency in total darkness was significantly higher than with Frenzel's glasses (p < 0.05), and more so than with visual fixation (p = 0.003). VOG in total darkness is superior for detecting nystagmus, since Frenzel's glasses still allow some visual suppression, an effect that is reinforced when gaze fixation is allowed. Strict control of the visual surroundings is essential in interpreting peripheral nystagmus.

  16. The Macro and Micro of it Is that Entropy Is the Spread of Energy

    NASA Astrophysics Data System (ADS)

    Phillips, Jeffrey A.

    2016-09-01

    While entropy is often described as "disorder," it is better thought of as a measure of how spread out energy is within a system. To illustrate this interpretation of entropy to introductory college or high school students, several activities have been created. Students first study the relationship between microstates and macrostates to better understand the probabilities involved. Then, each student observes how a system evolves as energy is allowed to move within it. By studying how the class's ensemble of systems evolves, the tendency of energy to spread, rather than concentrate, can be observed. All activities require minimal equipment and provide students with a tactile and visual experience with entropy.
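
    The classroom activity above, in which students watch energy move within a system, can be mimicked in a few lines: start with all the energy concentrated in one cell and let quanta hop at random, and the energy spreads out rather than re-concentrating (a toy illustration in the spirit of the activity, not the authors' materials):

```python
import random

def spread_energy(cells=20, quanta=40, steps=5000, seed=1):
    """Toy Einstein-solid-style model: all quanta start in one cell, then
    each step moves one quantum from a randomly chosen occupied cell to a
    cell chosen uniformly at random. Total energy is conserved throughout,
    yet the initial concentration spreads across the system."""
    rng = random.Random(seed)
    energy = [quanta] + [0] * (cells - 1)
    for _ in range(steps):
        occupied = [i for i, e in enumerate(energy) if e > 0]
        donor = rng.choice(occupied)
        energy[donor] -= 1
        energy[rng.randrange(cells)] += 1
    return energy

final = spread_energy()  # the energy ends up distributed over many cells
```

    Running many such systems, as a class would, shows the same tendency every time: spread-out macrostates correspond to vastly more microstates than concentrated ones.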

  17. ASERA: A Spectrum Eye Recognition Assistant

    NASA Astrophysics Data System (ADS)

    Yuan, Hailong; Zhang, Haotong; Zhang, Yanxia; Lei, Yajuan; Dong, Yiqiao; Zhao, Yongheng

    2018-04-01

ASERA, A Spectrum Eye Recognition Assistant, aids in quasar spectral recognition and redshift measurement and can also be used to recognize various types of spectra of stars, galaxies and AGNs (Active Galactic Nuclei). This interactive software allows users to visualize observed spectra, superimpose template spectra from the Sloan Digital Sky Survey (SDSS), and interactively access related spectral line information. ASERA is an efficient and user-friendly semi-automated toolkit for the accurate classification of spectra observed by LAMOST (the Large Sky Area Multi-object Fiber Spectroscopic Telescope) and is available as a standalone Java application and as a Java applet. The software offers several functions, including wavelength and flux scale settings, zooming in and out, redshift estimation, and spectral line identification.

  18. The Incremental Launching Method for Educational Virtual Model

    NASA Astrophysics Data System (ADS)

    Martins, Octávio; Sampaio, A. Z.

    This paper describes the application of virtual reality technology to the development of an educational model related to the construction of a bridge. The model allows visualization of the physical progression of the work following a planned construction sequence, observation of details of the form of every component of the works, and study of the type and method of operation of the equipment applied in the construction. The model admits interaction, and thus some degree of collaboration between students and teachers in the analysis of aspects concerning geometric forms, working methodology or other technical issues observed using the application. The model presents distinct advantages as an educational aid in first-degree courses in Civil Engineering.

  19. The Extraction of Information From Visual Persistence

    ERIC Educational Resources Information Center

    Erwin, Donald E.

    1976-01-01

    This research sought to distinguish among three concepts of visual persistence by substituting the physical presence of the target stimulus while simultaneously inhibiting the formation of a persisting representation. Reportability of information about the stimuli was compared to a condition in which visual persistence was allowed to fully develop…

  20. Storyline Visualizations of Eye Tracking of Movie Viewing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balint, John T.; Arendt, Dustin L.; Blaha, Leslie M.

    Storyline visualizations offer an approach that promises to capture the spatio-temporal characteristics of individual observers and simultaneously illustrate emerging group behaviors. We develop a visual analytics approach to parsing, aligning, and clustering fixation sequences from eye tracking data. Visualization of the results captures the similarities and differences across a group of observers performing a common task. We apply our storyline approach to visualize gaze patterns of people watching dynamic movie clips. Storylines mitigate some of the shortcomings of existing spatio-temporal visualization techniques and, importantly, continue to highlight individual observer behavioral dynamics.
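    The abstract does not give the algorithm, but the align-and-cluster step it describes can be sketched under simple assumptions: encode each observer's fixation sequence as a string of area-of-interest (AOI) labels (labels hypothetical) and group observers by pairwise edit distance.

```python
def edit_distance(a, b):
    """Levenshtein distance between two label sequences (one-row dynamic programming)."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (ca != cb))
    return dp[-1]

def cluster(seqs, max_dist):
    """Greedy grouping: a sequence joins the first cluster holding a near neighbour."""
    clusters = []
    for s in seqs:
        for c in clusters:
            if any(edit_distance(s, t) <= max_dist for t in c):
                c.append(s)
                break
        else:
            clusters.append([s])
    return clusters

# Hypothetical fixation sequences over AOI labels A, B, C (one char per fixation):
# three viewers scan similarly, one diverges.
gazes = ["AABAC", "AABAC", "ABBAC", "CCCBA"]
groups = cluster(gazes, max_dist=1)
```

    Each resulting cluster would then be drawn as one converging bundle of lines in the storyline view, with the divergent observer rendered as a separate thread.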

  1. Cognitive approaches for patterns analysis and security applications

    NASA Astrophysics Data System (ADS)

    Ogiela, Marek R.; Ogiela, Lidia

    2017-08-01

    This paper presents new opportunities for developing innovative solutions for semantic pattern classification and visual cryptography based on cognitive and bio-inspired approaches. Such techniques can be used to evaluate the meaning of analyzed patterns or encrypted information, and allow that meaning to be incorporated into the classification task or encryption process. They also allow crypto-biometric solutions to be used to extend personalized cryptography methodologies based on visual pattern analysis. In particular, the application of cognitive information systems to the semantic analysis of different patterns is presented, and a novel application of such systems to visual secret sharing is described. Visual shares for the divided information can be created by a threshold procedure, which may depend on personal abilities to recognize image details visible on the divided images.
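    The threshold-based share construction mentioned above can be illustrated with the classic Naor-Shamir (2,2) visual secret sharing scheme; this is a textbook sketch, not necessarily the authors' procedure. Each secret pixel expands into a subpixel pair on each share, and stacking the transparencies (a pixelwise OR) reveals the secret by contrast.

```python
import random

def make_shares(secret, rng=random.Random(0)):
    """(2,2) visual secret sharing sketch: each secret bit (1 = black)
    expands to a 2-subpixel pattern on each of the two shares."""
    s1, s2 = [], []
    for bit in secret:
        pattern = rng.choice([(0, 1), (1, 0)])   # random half-black pair
        s1.append(pattern)
        # white pixel: identical patterns; black pixel: complementary patterns
        s2.append(pattern if bit == 0 else tuple(1 - p for p in pattern))
    return s1, s2

def overlay(s1, s2):
    """Stacking transparencies = pixelwise OR of the subpixels."""
    return [tuple(a | b for a, b in zip(p1, p2)) for p1, p2 in zip(s1, s2)]

secret = [1, 0, 1, 1, 0]          # hypothetical 1-D "image", 1 = black
s1, s2 = make_shares(secret)
stacked = overlay(s1, s2)
# Black secret pixels stack fully black (both subpixels on); white pixels stay
# half-black, so the secret appears by contrast while each share alone is random.
recovered = [1 if sum(p) == 2 else 0 for p in stacked]
assert recovered == secret
```

    A personalized variant in the authors' spirit could choose the subpixel patterns so that recognizing the revealed details requires particular perceptual abilities.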

  2. WebViz: A Web-based Collaborative Interactive Visualization System for Large-Scale Data Sets

    NASA Astrophysics Data System (ADS)

    Yuen, D. A.; McArthur, E.; Weiss, R. M.; Zhou, J.; Yao, B.

    2010-12-01

    WebViz is a web-based application designed to conduct collaborative, interactive visualizations of large data sets for multiple users, allowing researchers situated all over the world to utilize the visualization services offered by the University of Minnesota's Laboratory for Computational Sciences and Engineering (LCSE). This ongoing project has been built upon over the last 3 1/2 years. The motivation behind WebViz lies primarily with the need to parse through an increasing amount of data produced by the scientific community as larger and faster multicore and massively parallel computers, including general-purpose GPU computing, come to market. WebViz allows these large data sets to be visualized online by anyone with an account, saving users time and resources by letting them visualize data 'on the fly' wherever they may be located. By leveraging AJAX via the Google Web Toolkit (http://code.google.com/webtoolkit/), we are able to provide users with a remote web portal to LCSE's (http://www.lcse.umn.edu) large-scale interactive visualization system already in place at the University of Minnesota. LCSE's custom hierarchical volume rendering software provides high-resolution visualizations on the order of 15 million pixels and has been employed for visualizing data primarily from simulations in astrophysics and geophysical fluid dynamics. In the current version of WebViz, we have implemented a highly extensible back-end framework built around HTTP "server push" technology. The web application is accessible via a variety of devices including netbooks, iPhones, and other web- and JavaScript-enabled cell phones.
Features in the current version include the ability for users to (1) securely log in, (2) launch multiple visualizations, (3) conduct collaborative visualization sessions, (4) delegate control aspects of a visualization to others, and (5) engage in collaborative chats with other users within the user interface of the web application. These features are all in addition to a full range of essential visualization functions, including 3-D camera and object orientation, position manipulation, time-stepping control, and custom color/alpha mapping.

  3. In Situ Visualization of the Phase Behavior of Oil Samples Under Refinery Process Conditions.

    PubMed

    Laborde-Boutet, Cedric; McCaffrey, William C

    2017-02-21

    To help address production issues in refineries caused by the fouling of process units and lines, we have developed a setup as well as a method to visualize the behavior of petroleum samples under process conditions. The experimental setup relies on a custom-built micro-reactor fitted with a sapphire window at the bottom, which is placed over the objective of an inverted microscope equipped with a cross-polarizer module. Using reflection microscopy enables the visualization of opaque samples, such as petroleum vacuum residues, or asphaltenes. The combination of the sapphire window from the micro-reactor with the cross-polarizer module of the microscope on the light path allows high-contrast imaging of isotropic and anisotropic media. While observations are carried out, the micro-reactor can be heated to the temperature range of cracking reactions (up to 450 °C), can be subjected to H2 pressure relevant to hydroconversion reactions (up to 16 MPa), and can stir the sample by magnetic coupling. Observations are typically carried out by taking snapshots of the sample under cross-polarized light at regular time intervals. Image analyses may not only provide information on the temperature, pressure, and reactive conditions yielding phase separation, but may also give an estimate of the evolution of the chemical (absorption/reflection spectra) and physical (refractive index) properties of the sample before the onset of phase separation.

  4. The Use of Spinning-Disk Confocal Microscopy for the Intravital Analysis of Platelet Dynamics in Response to Systemic and Local Inflammation

    PubMed Central

    Jenne, Craig N.; Wong, Connie H. Y.; Petri, Björn; Kubes, Paul

    2011-01-01

    Platelets are central players in inflammation and are an important component of the innate immune response. The ability to visualize platelets within the live host is essential to understanding their role in these processes. Past approaches have involved adoptive transfer of labelled platelets, non-specific dyes, or the use of fluorescent antibodies to tag platelets in vivo. Often, these techniques result in either the activation of the platelet, or blockade of specific platelet receptors. In this report, we describe two new methods for intravital visualization of platelet biology, intravenous administration of labelled anti-CD49b, which labels all platelets, and CD41-YFP transgenic mice, in which a percentage of platelets express YFP. Both approaches label endogenous platelets and allow for their visualization using spinning-disk confocal fluorescent microscopy. Following LPS-induced inflammation, we were able to measure a significant increase in both the number and size of platelet aggregates observed within the vasculature of a number of different tissues. Real-time observation of these platelet aggregates reveals them to be large, dynamic structures that are continually expanding and sloughing-off into circulation. Using these techniques, we describe for the first time, platelet recruitment to, and behaviour within numerous tissues of the mouse, both under control conditions and following LPS induced inflammation. PMID:21949865

  5. The Role of Audio-Visual Feedback in a Thought-Based Control of a Humanoid Robot: A BCI Study in Healthy and Spinal Cord Injured People.

    PubMed

    Tidoni, Emmanuele; Gergondet, Pierre; Fusco, Gabriele; Kheddar, Abderrahmane; Aglioti, Salvatore M

    2017-06-01

    The efficient control of our body and successful interaction with the environment are possible through the integration of multisensory information. Brain-computer interface (BCI) may allow people with sensorimotor disorders to actively interact in the world. In this study, visual information was paired with auditory feedback to improve the BCI control of a humanoid surrogate. Healthy and spinal cord injured (SCI) people were asked to embody a humanoid robot and complete a pick-and-place task by means of a visual evoked potentials BCI system. Participants observed the remote environment from the robot's perspective through a head mounted display. Human-footsteps and computer-beep sounds were used as synchronous/asynchronous auditory feedback. Healthy participants achieved better placing accuracy when listening to human footstep sounds relative to a computer-generated sound. SCI people demonstrated more difficulty in steering the robot during asynchronous auditory feedback conditions. Importantly, subjective reports highlighted that the BCI mask overlaying the display did not limit the observation of the scenario and the feeling of being in control of the robot. Overall, the data seem to suggest that sensorimotor-related information may improve the control of external devices. Further studies are required to understand how the contribution of residual sensory channels could improve the reliability of BCI systems.

  6. Visual adaptation dominates bimodal visual-motor action adaptation

    PubMed Central

    de la Rosa, Stephan; Ferstl, Ylva; Bülthoff, Heinrich H.

    2016-01-01

    A long-standing debate revolves around the question of whether visual action recognition primarily relies on visual or motor action information. Previous studies mainly examined the contribution of either visual or motor information to action recognition. Yet the interaction of visual and motor action information is particularly important for understanding action recognition in social interactions, where humans often observe and execute actions at the same time. Here, we behaviourally examined the interaction of visual and motor action recognition processes when participants simultaneously observe and execute actions. We took advantage of behavioural action adaptation effects to investigate behavioural correlates of neural action recognition mechanisms. In line with previous results, we find that prolonged visual exposure (visual adaptation) and prolonged execution of the same action with closed eyes (non-visual motor adaptation) influence action recognition. However, when participants simultaneously adapted visually and motorically (akin to simultaneous execution and observation of actions in social interactions), adaptation effects were modulated only by visual, not motor, adaptation. Action recognition therefore relies primarily on vision-based mechanisms in situations that require simultaneous action observation and execution, such as social interactions. The results suggest caution when associating social behaviour in social interactions with motor-based information. PMID:27029781

  7. ProXL (Protein Cross-Linking Database): A Platform for Analysis, Visualization, and Sharing of Protein Cross-Linking Mass Spectrometry Data

    PubMed Central

    2016-01-01

    ProXL is a Web application and accompanying database designed for sharing, visualizing, and analyzing bottom-up protein cross-linking mass spectrometry data with an emphasis on structural analysis and quality control. ProXL is designed to be independent of any particular software pipeline. The import process is simplified by the use of the ProXL XML data format, which shields developers of data importers from the relative complexity of the relational database schema. The database and Web interfaces function equally well for any software pipeline and allow data from disparate pipelines to be merged and contrasted. ProXL includes robust public and private data sharing capabilities, including a project-based interface designed to ensure security and facilitate collaboration among multiple researchers. ProXL provides multiple interactive and highly dynamic data visualizations that facilitate structural-based analysis of the observed cross-links as well as quality control. ProXL is open-source, well-documented, and freely available at https://github.com/yeastrc/proxl-web-app. PMID:27302480

  8. A dataset of stereoscopic images and ground-truth disparity mimicking human fixations in peripersonal space

    PubMed Central

    Canessa, Andrea; Gibaldi, Agostino; Chessa, Manuela; Fato, Marco; Solari, Fabio; Sabatini, Silvio P.

    2017-01-01

    Binocular stereopsis is the ability of a visual system, belonging to a live being or a machine, to interpret the different visual information deriving from two eyes/cameras for depth perception. From this perspective, the ground-truth information about three-dimensional visual space, which is hardly available, is an ideal tool both for evaluating human performance and for benchmarking machine vision algorithms. In the present work, we implemented a rendering methodology in which the camera pose mimics realistic eye pose for a fixating observer, thus including convergent eye geometry and cyclotorsion. The virtual environment we developed relies on highly accurate 3D virtual models, and its full controllability allows us to obtain the stereoscopic pairs together with the ground-truth depth and camera pose information. We thus created a stereoscopic dataset: GENUA PESTO—GENoa hUman Active fixation database: PEripersonal space STereoscopic images and grOund truth disparity. The dataset aims to provide a unified framework useful for a number of problems relevant to human and computer vision, from scene exploration and eye movement studies to 3D scene reconstruction. PMID:28350382

  9. Curve Boxplot: Generalization of Boxplot for Ensembles of Curves.

    PubMed

    Mirzargar, Mahsa; Whitaker, Ross T; Kirby, Robert M

    2014-12-01

    In simulation science, computational scientists often study the behavior of their simulations by repeated solutions with variations in parameters and/or boundary values or initial conditions. Through such simulation ensembles, one can try to understand or quantify the variability or uncertainty in a solution as a function of the various inputs or model assumptions. In response to a growing interest in simulation ensembles, the visualization community has developed a suite of methods for allowing users to observe and understand the properties of these ensembles in an efficient and effective manner. An important aspect of visualizing simulations is the analysis of derived features, often represented as points, surfaces, or curves. In this paper, we present a novel, nonparametric method for summarizing ensembles of 2D and 3D curves. We propose an extension of a method from descriptive statistics, data depth, to curves. We also demonstrate a set of rendering and visualization strategies for showing rank statistics of an ensemble of curves, which is a generalization of traditional whisker plots or boxplots to multidimensional curves. Results are presented for applications in neuroimaging, hurricane forecasting and fluid dynamics.
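    The data-depth idea the curve boxplot extends can be sketched for the simplest case, band depth with pairs of curves (J = 2): a curve is deep if many pairwise envelopes contain it entirely. The toy constant curves below are purely illustrative, not the paper's datasets.

```python
from itertools import combinations

def band_depth(curves):
    """Band depth (J = 2) sketch: a curve's depth is the fraction of curve
    pairs whose pointwise min/max envelope completely contains it."""
    n = len(curves)
    depths = []
    for c in curves:
        inside = sum(
            all(min(a, b) <= y <= max(a, b) for a, b, y in zip(u, v, c))
            for u, v in combinations(curves, 2))
        depths.append(inside / (n * (n - 1) / 2))
    return depths

# Five hypothetical curves sampled at 5 points; the central curve at 0 sits
# inside the most pairwise bands, the extremes at +/-2 inside the fewest.
curves = [[0] * 5, [1] * 5, [-1] * 5, [2] * 5, [-2] * 5]
d = band_depth(curves)
assert d[0] == max(d)          # the central curve is the "median"
assert min(d) == d[3] == d[4]  # the extremes are the candidate outliers
```

    In a curve boxplot, the deepest curve plays the role of the median, the 50% deepest curves form the box, and low-depth curves are flagged as outliers.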

  10. Incidental learning speeds visual search by lowering response thresholds, not by improving efficiency: evidence from eye movements.

    PubMed

    Hout, Michael C; Goldinger, Stephen D

    2012-02-01

    When observers search for a target object, they incidentally learn the identities and locations of "background" objects in the same display. This learning can facilitate search performance, eliciting faster reaction times for repeated displays. Despite these findings, visual search has been successfully modeled using architectures that maintain no history of attentional deployments; they are amnesic (e.g., Guided Search Theory). In the current study, we asked two questions: (1) under what conditions does such incidental learning occur, and (2) what does viewing behavior reveal about the efficiency of attentional deployments over time? In two experiments, we tracked eye movements during repeated visual search, and we tested incidental memory for repeated nontarget objects. Across conditions, the consistency of search sets and spatial layouts was manipulated to assess their respective contributions to learning. Using viewing behavior, we contrasted three potential accounts for faster searching with experience. The results indicate that learning does not result in faster object identification or greater search efficiency. Instead, familiar search arrays appear to allow faster resolution of search decisions, whether targets are present or absent.

  11. ProXL (Protein Cross-Linking Database): A Platform for Analysis, Visualization, and Sharing of Protein Cross-Linking Mass Spectrometry Data.

    PubMed

    Riffle, Michael; Jaschob, Daniel; Zelter, Alex; Davis, Trisha N

    2016-08-05

    ProXL is a Web application and accompanying database designed for sharing, visualizing, and analyzing bottom-up protein cross-linking mass spectrometry data with an emphasis on structural analysis and quality control. ProXL is designed to be independent of any particular software pipeline. The import process is simplified by the use of the ProXL XML data format, which shields developers of data importers from the relative complexity of the relational database schema. The database and Web interfaces function equally well for any software pipeline and allow data from disparate pipelines to be merged and contrasted. ProXL includes robust public and private data sharing capabilities, including a project-based interface designed to ensure security and facilitate collaboration among multiple researchers. ProXL provides multiple interactive and highly dynamic data visualizations that facilitate structural-based analysis of the observed cross-links as well as quality control. ProXL is open-source, well-documented, and freely available at https://github.com/yeastrc/proxl-web-app .

  12. Direct visualization of nanoparticle dynamics at liquid interfaces

    NASA Astrophysics Data System (ADS)

    Gao, Yige; Kim, Paul; Hoagland, David; Russell, Tom

    Ionic liquids, because of their negligible vapor pressures and moderate viscosities, are suitable media in which to investigate the dynamics of different types of dispersed nanoparticles by scanning electron microscopy. No liquid cell is necessary. Here, Brownian motions of nanoparticles partially wetted at the vacuum-liquid interface are visualized by low-voltage SEM under conditions that allow single-particle tracking for tens of minutes or longer. Conductive, nonconductive, semiconductive, and core-shell conductive-nonconductive nanoparticles have all been studied, and their interactions with each other in one- and two-component layers, as manifested in particle trajectories, differ significantly. For example, Au-coated silica nanoparticles aggregate above a threshold current, whereas aggregated silica-coated Au nanoparticles disaggregate under the same conditions. The impact of surface concentration on nanoparticle dynamics was observed for one-component and two-component layers, with both global and localized motions visualized for single particles even in dense environments. As the surface concentration increases, the diffusion coefficient drops, and when the concentration reaches a critical threshold, the nanoparticles are essentially frozen. Financial support from NSF DMR-1619651 is acknowledged.
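    A standard analysis behind statements like "the diffusion coefficient drops" is to estimate diffusivity from the mean squared displacement (MSD) of tracked trajectories. Below is a minimal sketch on a synthetic 2-D random walk; it is not the authors' code, and the step size is an arbitrary illustration.

```python
import random

def msd(traj, lag):
    """Mean squared displacement of a 2-D trajectory at a given frame lag."""
    d2 = [(traj[i + lag][0] - traj[i][0]) ** 2 +
          (traj[i + lag][1] - traj[i][1]) ** 2
          for i in range(len(traj) - lag)]
    return sum(d2) / len(d2)

# Synthetic free Brownian trajectory: Gaussian steps of std sigma per axis
# per frame, so theory gives MSD(lag) = 2 * sigma**2 * lag.
rng = random.Random(1)
sigma = 0.5
x = y = 0.0
traj = [(x, y)]
for _ in range(20000):
    x += rng.gauss(0, sigma)
    y += rng.gauss(0, sigma)
    traj.append((x, y))

sigma2_est = msd(traj, 1) / 2       # recover sigma**2 from the lag-1 MSD
assert abs(sigma2_est - sigma ** 2) < 0.1 * sigma ** 2
# MSD grows roughly linearly with lag for free diffusion:
assert msd(traj, 10) > 5 * msd(traj, 1)
```

    With a real frame interval dt, the lag-1 MSD gives D = MSD(1) / (4 * dt) in 2-D; a sub-linear MSD-vs-lag curve would signal the crowding-induced slowdown described above.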

  13. Image segmentation and 3D visualization for MRI mammography

    NASA Astrophysics Data System (ADS)

    Li, Lihua; Chu, Yong; Salem, Angela F.; Clark, Robert A.

    2002-05-01

    MRI mammography has a number of advantages, including the tomographic, and therefore three-dimensional (3-D), nature of the images. It allows the application of MRI mammography to breasts with dense tissue, post-operative scarring, and silicone implants. However, due to the vast quantity of images and the subtlety of differences between MR sequences, there is a need for reliable computer diagnosis to reduce the radiologist's workload. The purpose of this work was to develop automatic breast/tissue segmentation and visualization algorithms to aid physicians in detecting and observing abnormalities in the breast. Two segmentation algorithms were developed: one for breast segmentation, the other for glandular tissue segmentation. In breast segmentation, the MRI image is first segmented using an adaptive growing clustering method. Two tracing algorithms were then developed to refine the breast-air and chest-wall boundaries. The glandular tissue segmentation was performed using an adaptive thresholding method, in which the threshold value was spatially adaptive, computed over a sliding window. The 3D visualization of the segmented 2D slices of MRI mammography was implemented in the IDL environment. Breast and glandular tissue rendering, slicing and animation were displayed.
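    The spatially adaptive thresholding step can be sketched as follows; the window size, offset, and toy image are assumptions for illustration, not the paper's parameters. Each pixel is compared against the mean of its local window, which lets a bright structure be detected even on a background whose intensity drifts across the image.

```python
def adaptive_threshold(img, win=2, offset=0):
    """Sliding-window adaptive threshold sketch: a pixel is foreground if it
    exceeds the mean of the (2*win+1)-sized window centred on it."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[ii][jj]
                    for ii in range(max(0, i - win), min(h, i + win + 1))
                    for jj in range(max(0, j - win), min(w, j + win + 1))]
            local_mean = sum(vals) / len(vals)
            out[i][j] = 1 if img[i][j] > local_mean + offset else 0
    return out

# Toy "image": a bright blob on a left-to-right intensity ramp that would
# defeat any single global threshold.
img = [[j * 10 + (100 if (2 <= i <= 4 and 2 <= j <= 4) else 0)
        for j in range(8)] for i in range(8)]
mask = adaptive_threshold(img, win=2)
assert mask[3][3] == 1    # blob interior detected
assert mask[0][0] == 0    # background corner stays off
```

    Production implementations would typically add an offset margin and post-process the mask (e.g., morphological cleanup) before the 3D rendering stage.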

  14. Visualization of hydrodynamic pilot-wave dynamics

    NASA Astrophysics Data System (ADS)

    Prost, Victor; Quintela, Julio; Harris, Daniel; Brun, Pierre-Thomas; Bush, John

    2015-11-01

    We present a low-cost device for examining the dynamics of droplets bouncing on a vibrating fluid bath, suitable for educational purposes. Dual control of vibrational and strobing frequency from a cell phone application allowed us to reduce the total cost to 60 dollars. Illumination with inhomogeneous colored light allows for striking visualization of the droplet dynamics and accompanying wave field via still photography or high-speed videography. Thanks to the NSF.

  15. Automated registration of tail bleeding in rats.

    PubMed

    Johansen, Peter B; Henriksen, Lars; Andresen, Per R; Lauritzen, Brian; Jensen, Kåre L; Juhl, Trine N; Tranholm, Mikael

    2008-05-01

    An automated system for registration of tail bleeding in rats using a camera and a user-designed PC-based software program has been developed. The live and processed images are displayed on the screen and are exported together with a text file for later statistical processing of the data, allowing calculation of e.g. number of bleeding episodes, bleeding times and bleeding areas. Proof-of-principle was achieved when the camera captured the blood stream after infusion of rat whole blood into saline. Suitability was assessed by recording of bleeding profiles in heparin-treated rats, demonstrating that the system was able to capture on/off bleedings and that the data transfer and analysis were conducted successfully. Then, bleeding profiles were visually recorded by two independent observers simultaneously with the automated recordings after tail transection in untreated rats. Linear relationships were found in the number of bleedings, demonstrating, however, a statistically significant difference in the recording of bleeding episodes between observers. Also, the bleeding time was longer for visual compared to automated recording. No correlation was found between blood loss and bleeding time in untreated rats, but in heparinized rats a correlation was suggested. Finally, the blood loss correlated with the automated recording of bleeding area. In conclusion, the automated system has proven suitable for replacing visual recordings of tail bleedings in rats. Inter-observer differences can be eliminated, monotonous repetitive work avoided, and a higher throughput of animals achieved in less time. The automated system will lead to an increased understanding of the nature of bleeding following tail transection in different rodent models.

  16. The NASA/NOAA Electronic Theater

    NASA Technical Reports Server (NTRS)

    Hasler, A. F.

    2003-01-01

    The NASA/NOAA Electronic Theater presents Earth science observations and visualizations from space in a historical perspective. Fly in from outer space to Cambridge and Harvard University. Zoom through the cosmos to SLC, site of the 2002 Winter Olympics, using 1 m IKONOS "spy satellite" data. Contrast the 1972 Apollo 17 "Blue Marble" image of the Earth with the latest US and international global satellite images that allow us to view our planet from any vantage point. See the latest spectacular images from NASA/NOAA remote sensing missions like Terra, GOES, TRMM, SeaWiFS, & Landsat 7, of storms & fires like Hurricane Isabel and the San Diego firestorms of 2003. See how High Definition Television (HDTV) is revolutionizing the way we do science communication. Take the pulse of the planet on daily, annual and 30-year time scales. See daily thunderstorms, the annual blooming of the northern hemisphere landmasses and oceans, fires in Africa, dust storms in Iraq, and carbon monoxide exhaust from global burning. See visualizations featured on Newsweek, TIME, National Geographic, and Popular Science covers & national and international network TV. Spectacular new global visualizations of the observed and simulated atmosphere & oceans are shown. See the currents and vortexes in the oceans that bring up the nutrients to feed tiny plankton and draw the fish, whales and fishermen. See how the ocean blooms in response to El Niño/La Niña climate changes. The Etheater will be presented using the latest High Definition TV (HDTV) and video projection technology on a large screen. See the global city lights, and the great NE US blackout of August 2003 observed by the "night-vision" DMSP satellite.

  17. Analysis of Total Visual and CCD V-Broadband Observation of Comet C/1995 O1 (Hale-Bopp): 1995-2003

    NASA Astrophysics Data System (ADS)

    de Almeida, A. A.; Boczko, R.; Lopes, A. R.; Sanzovo, G. C.

    The wealth of available information on total visual magnitudes and broadband-V CCD observations of the exceptionally bright Comet C/1995 O1 (Hale-Bopp) provided an excellent opportunity to test the Semi-Empirical Method of Visual Magnitudes (de Almeida, Singh & Huebner, 1997) for very bright comets. The main objective is to extend the method to include total visual magnitude observations obtained with a CCD detector and V filter in our analysis of total visual magnitudes and to obtain a single light curve. We compare the careful CCD V-broadband observations of Liller (1997) by plotting them together with the total visual magnitude observations from experienced visual observers found in the International Comet Quarterly (ICQ) archive. We find good agreement, despite the fact that CCDs with V-filter passbands systematically detect more coma than visual observers, since they have different responses to C2, the main emission from the coma, and consequently should be used with larger aperture diameters. A data set of ˜400 selected CCD observations covering about the same 5-year time span as the ˜12,000 ICQ total visual magnitude observations was used in the analysis. A least-squares fit to the values yielded relations for water production rate vs. heliocentric distance for the pre- and post-perihelion phases, which are converted into gas production rates (in g/s) released by the nucleus. The dimension of the nucleus as well as its effective active area is determined and compared to other works.
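    A least-squares fit of production rate versus heliocentric distance is conventionally done in log-log space, fitting Q(r) = A * r**(-n). The sketch below uses synthetic, noise-free rates; the exponent and normalization are illustrative only, not the fitted Hale-Bopp values.

```python
from math import log, exp

def power_law_fit(r, q):
    """Least-squares fit of log Q = log A - n * log r, the usual form for
    comet gas-production rates vs. heliocentric distance."""
    xs = [log(ri) for ri in r]
    ys = [log(qi) for qi in q]
    m = len(xs)
    xbar = sum(xs) / m
    ybar = sum(ys) / m
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) /
             sum((x - xbar) ** 2 for x in xs))
    return exp(ybar - slope * xbar), -slope   # (A, n) in Q = A * r**-n

# Hypothetical rates following Q = 1e30 * r**-2.5 at a few heliocentric
# distances in AU (values illustrative, not Hale-Bopp measurements).
r = [0.9, 1.5, 2.3, 3.8, 5.0]
q = [1e30 * ri ** -2.5 for ri in r]
A, n_exp = power_law_fit(r, q)
assert abs(n_exp - 2.5) < 1e-6
```

    With real observations, separate fits for the pre- and post-perihelion arcs would yield the two relations the abstract describes, and Q(r) can then be converted to a mass-loss rate in g/s.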

  18. BioVEC: a program for biomolecule visualization with ellipsoidal coarse-graining.

    PubMed

    Abrahamsson, Erik; Plotkin, Steven S

    2009-09-01

    Biomolecule Visualization with Ellipsoidal Coarse-graining (BioVEC) is a tool for visualizing molecular dynamics simulation data while allowing coarse-grained residues to be rendered as ellipsoids. BioVEC reads in configuration files, which may be output from molecular dynamics simulations that include orientation output in either quaternion or ANISOU format, and can render frames of the trajectory in several common image formats for subsequent concatenation into a movie file. The BioVEC program is written in C++, uses the OpenGL API for rendering, and is open source. It is lightweight, allows user-defined settings and textures, and runs on either Windows or Linux platforms.
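    Rendering an ellipsoid glyph from quaternion orientation output requires converting each unit quaternion to a rotation matrix before applying it to the glyph. Below is a minimal sketch of that standard conversion (not BioVEC's actual code).

```python
import math

def quat_to_matrix(w, x, y, z):
    """Unit quaternion (w, x, y, z) -> 3x3 rotation matrix (row-major),
    as needed to orient an ellipsoid from quaternion trajectory output."""
    return [
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ]

# 90-degree rotation about the z-axis: q = (cos 45, 0, 0, sin 45).
m = quat_to_matrix(math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))

# Applying it to the x-axis should give the y-axis.
vx = [sum(m[i][j] * (1.0, 0.0, 0.0)[j] for j in range(3)) for i in range(3)]
assert all(abs(a - b) < 1e-9 for a, b in zip(vx, (0.0, 1.0, 0.0)))
```

    The ANISOU path differs only in that the symmetric displacement tensor is diagonalized first; its eigenvectors give the rotation and its eigenvalues the ellipsoid's semi-axes.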

  19. Living Liquid: Design and Evaluation of an Exploratory Visualization Tool for Museum Visitors.

    PubMed

    Ma, J; Liao, I; Ma, Kwan-Liu; Frazier, J

    2012-12-01

    Interactive visualizations can allow science museum visitors to explore new worlds by seeing and interacting with scientific data. However, designing interactive visualizations for informal learning environments, such as museums, presents several challenges. First, visualizations must engage visitors on a personal level. Second, visitors often lack the background to interpret visualizations of scientific data. Third, visitors have very limited time at individual exhibits in museums. This paper examines these design considerations through the iterative development and evaluation of an interactive exhibit as a visualization tool that gives museumgoers access to scientific data generated and used by researchers. The exhibit prototype, Living Liquid, encourages visitors to ask and answer their own questions while exploring the time-varying global distribution of simulated marine microbes using a touchscreen interface. Iterative development proceeded through three rounds of formative evaluations using think-aloud protocols and interviews, each round informing a key visualization design decision: (1) what to visualize to initiate inquiry, (2) how to link data at the microscopic scale to global patterns, and (3) how to include additional data that allows visitors to pursue their own questions. Data from visitor evaluations suggests that, when designing visualizations for public audiences, one should (1) avoid distracting visitors from data that they should explore, (2) incorporate background information into the visualization, (3) favor understandability over scientific accuracy, and (4) layer data accessibility to structure inquiry. Lessons learned from this case study add to our growing understanding of how to use visualizations to actively engage learners with scientific data.

  20. Lessons Learned in Developing and Validating Models of Visual Search and Target Acquisition

    DTIC Science & Technology

    2000-03-01

    …texture transition that might occur with a… shown in Figure 5, and it allows the model to simulate the performance of experienced human… Neisser and others have found that after extensive training, observers can learn to rapidly pick… features support pop-out.

  1. Feasibility of Using Remotely Sensed Data to Aid in Long-Term Monitoring of Biodiversity

    NASA Technical Reports Server (NTRS)

    Carroll, Mark L.; Brown, Molly E.; Elders, Akiko; Johnson, Kiersten

    2014-01-01

    Remote sensing is defined as making observations of an event or phenomenon without physically sampling it. Typically this is done with instruments and sensors mounted on anything from poles extended over a cornfield, to airplanes, to satellites orbiting the Earth. The sensors have characteristics that allow them to detect and record information regarding the emission and reflectance of electromagnetic energy from a surface or object. That information can then be represented visually on a screen or paper map, or used in data analysis to inform decision-making.

  2. ESEM analysis of polymeric film in EVA-modified cement paste

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silva, D.A.; Monteiro, P.J.M.

    2005-10-01

    Portland cement pastes modified by 20% weight (polymer/cement ratio) of poly(ethylene-co-vinyl acetate) (EVA) were prepared, cured, and immersed in water for 11 days. The effects of water saturation and drying on the EVA polymeric film formed in cement pastes were observed using environmental scanning electron microscopy (ESEM). This technique allowed the imaging of the EVA film even in saturated samples. The decrease of the relative humidity inside the ESEM chamber did not cause any visual modification of the polymeric film during its drying.

  3. Attention to Color Sharpens Neural Population Tuning via Feedback Processing in the Human Visual Cortex Hierarchy.

    PubMed

    Bartsch, Mandy V; Loewe, Kristian; Merkel, Christian; Heinze, Hans-Jochen; Schoenfeld, Mircea A; Tsotsos, John K; Hopf, Jens-Max

    2017-10-25

    Attention can facilitate the selection of elementary object features such as color, orientation, or motion. This is referred to as feature-based attention and it is commonly attributed to a modulation of the gain and tuning of feature-selective units in visual cortex. Although gain mechanisms are well characterized, little is known about the cortical processes underlying the sharpening of feature selectivity. Here, we show with high-resolution magnetoencephalography in human observers (men and women) that sharpened selectivity for a particular color arises from feedback processing in the human visual cortex hierarchy. To assess color selectivity, we analyze the response to a color probe that varies in color distance from an attended color target. We find that attention causes an initial gain enhancement in anterior ventral extrastriate cortex that is coarsely selective for the target color and transitions within ∼100 ms into a sharper tuned profile in more posterior ventral occipital cortex. We conclude that attention sharpens selectivity over time by attenuating the response at lower levels of the cortical hierarchy to color values neighboring the target in color space. These observations support computational models proposing that attention tunes feature selectivity in visual cortex through backward-propagating attenuation of units less tuned to the target. SIGNIFICANCE STATEMENT Whether searching for your car, a particular item of clothing, or just obeying traffic lights, in everyday life, we must select items based on color. But how does attention allow us to select a specific color? Here, we use high spatiotemporal resolution neuromagnetic recordings to examine how color selectivity emerges in the human brain. We find that color selectivity evolves as a coarse to fine process from higher to lower levels within the visual cortex hierarchy. 
Our observations support computational models proposing that feature selectivity increases over time by attenuating the responses of less-selective cells in lower-level brain areas. These data emphasize that color perception involves multiple areas across a hierarchy of regions, interacting with each other in a complex, recursive manner. Copyright © 2017 the authors 0270-6474/17/3710346-12$15.00/0.
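    The abstract distinguishes a gain enhancement (scaled response) from a sharpening of tuning (attenuation of responses to neighboring feature values). As an illustrative sketch only, with invented numbers and not the authors' model, the contrast can be expressed with a Gaussian tuning curve over color distance:

```python
import numpy as np

# Illustrative sketch (not the paper's model): two ways attention can
# modulate a Gaussian tuning curve over color distance from the target.
def tuning(distance, amplitude=1.0, width=30.0):
    """Response as a function of color distance (in deg of color space)."""
    return amplitude * np.exp(-(distance ** 2) / (2 * width ** 2))

distances = np.linspace(-90, 90, 181)     # index 90 is the target color
baseline = tuning(distances)
gain = tuning(distances, amplitude=1.5)   # pure gain: profile scaled up
sharpened = tuning(distances, width=15.0) # sharpening: narrower profile

# Gain leaves the relative profile unchanged; sharpening attenuates
# responses to colors neighboring the target, as the abstract describes.
assert np.isclose(gain[90] / baseline[90], 1.5)
assert sharpened[90] == baseline[90]      # same response at the target
assert sharpened[60] < baseline[60]       # attenuated 30 deg away
```

    The sharpened curve keeps the peak response while suppressing near-target colors, which is the signature the authors attribute to feedback attenuation of less-tuned units.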

  4. Semantics by analogy for illustrative volume visualization☆

    PubMed Central

    Gerl, Moritz; Rautek, Peter; Isenberg, Tobias; Gröller, Eduard

    2012-01-01

    We present an interactive graphical approach for the explicit specification of semantics for volume visualization. This explicit and graphical specification of semantics for volumetric features allows us to visually assign meaning to both input and output parameters of the visualization mapping. This is in contrast to the implicit way of specifying semantics using transfer functions. In particular, we demonstrate how to realize a dynamic specification of semantics which allows users to flexibly explore a wide range of mappings. Our approach is based on three concepts. First, we use semantic shader augmentation to automatically add rule-based rendering functionality to static visualization mappings in a shader program, while preserving the visual abstraction that the initial shader encodes. With this technique we extend recent developments that define a mapping between data attributes and visual attributes with rules, which are evaluated using fuzzy logic. Second, we let users define the semantics by analogy through brushing on renderings of the data attributes of interest. Third, the rules are specified graphically in an interface that provides visual clues for potential modifications. Together, the presented methods offer a high degree of freedom in the specification and exploration of rule-based mappings and avoid the limitations of a linguistic rule formulation. PMID:23576827

  5. Interactive Mapping on Virtual Terrain Models Using RIMS (Real-time, Interactive Mapping System)

    NASA Astrophysics Data System (ADS)

    Bernardin, T.; Cowgill, E.; Gold, R. D.; Hamann, B.; Kreylos, O.; Schmitt, A.

    2006-12-01

    Recent and ongoing space missions are yielding new multispectral data for the surfaces of Earth and other planets at unprecedented rates and spatial resolution. With their high spatial resolution and widespread coverage, these data have opened new frontiers in observational Earth and planetary science. But they have also precipitated an acute need for new analytical techniques. To address this problem, we have developed RIMS, a Real-time, Interactive Mapping System that allows scientists to visualize, interact with, and map directly on, three-dimensional (3D) displays of georeferenced texture data, such as multispectral satellite imagery, that is draped over a surface representation derived from digital elevation data. The system uses a quadtree-based multiresolution method to render in real time high-resolution (3 to 10 m/pixel) data over large (800 km by 800 km) spatial areas. It allows users to map inside this interactive environment by generating georeferenced and attributed vector-based elements that are draped over the topography. We explain the technique using 15 m ASTER stereo-data from Iraq, P.R. China, and other remote locations because our particular motivation is to develop a technique that permits the detailed (10 m to 1000 m) neotectonic mapping over large (100 km to 1000 km long) active fault systems that is needed to better understand active continental deformation on Earth. RIMS also includes a virtual geologic compass that allows users to fit a plane to geologic surfaces and thereby measure their orientations. It also includes tools that allow 3D surface reconstruction of deformed and partially eroded surfaces such as folded bedding planes. These georeferenced map and measurement data can be exported to, or imported from, a standard GIS (geographic information systems) file format. 
Our interactive, 3D visualization and analysis system is designed for those who study planetary surfaces, including neotectonic geologists, geomorphologists, marine geophysicists, and planetary scientists. The strength of our system is that it combines interactive rendering with interactive mapping and measurement of features observed in topographic and texture data. Comparison with commercially available software indicates that our system improves mapping accuracy and efficiency. More importantly, it enables Earth scientists to rapidly achieve a deeper level of understanding of remotely sensed data, as observations can be made that are not possible with existing systems.
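    RIMS renders high-resolution terrain over large areas with a quadtree-based multiresolution method. A minimal sketch of that idea, with hypothetical names and a made-up refinement threshold (not the RIMS implementation), is view-dependent tile selection: refine tiles near the viewer, keep coarse tiles far away.

```python
# Hedged sketch (names and threshold hypothetical, not the RIMS code):
# quadtree level-of-detail selection for terrain tiles.
def select_tiles(x0, y0, size, viewer, level, max_level, out):
    """Recursively refine tiles near the viewer; keep coarse tiles far away."""
    cx, cy = x0 + size / 2, y0 + size / 2
    dist = ((cx - viewer[0]) ** 2 + (cy - viewer[1]) ** 2) ** 0.5
    # Refine while the tile is large relative to its distance from the viewer.
    if level < max_level and size > 0.5 * dist:
        half = size / 2
        for dx in (0, half):
            for dy in (0, half):
                select_tiles(x0 + dx, y0 + dy, half, viewer,
                             level + 1, max_level, out)
    else:
        out.append((level, x0, y0, size))

# An 800 km x 800 km area, viewer near one corner (units in km).
tiles = []
select_tiles(0.0, 0.0, 800.0, viewer=(100.0, 100.0),
             level=0, max_level=4, out=tiles)
# Tiles near (100, 100) come back at finer levels than distant ones.
```

    The same traversal drives which image pyramid levels are streamed in, which is what keeps rendering interactive over 800 km by 800 km areas.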

  6. Short Term Motor-Skill Acquisition Improves with Size of Self-Controlled Virtual Hands

    PubMed Central

    Ossmy, Ori; Mukamel, Roy

    2017-01-01

    Visual feedback in general, and from the body in particular, is known to influence the performance of motor skills in humans. However, it is unclear how the acquisition of motor skills depends on specific visual feedback parameters such as the size of performing effector. Here, 21 healthy subjects physically trained to perform sequences of finger movements with their right hand. Through the use of 3D Virtual Reality devices, visual feedback during training consisted of virtual hands presented on the screen, tracking subject’s hand movements in real time. Importantly, the setup allowed us to manipulate the size of the displayed virtual hands across experimental conditions. We found that performance gains increase with the size of virtual hands. In contrast, when subjects trained by mere observation (i.e., in the absence of physical movement), manipulating the size of the virtual hand did not significantly affect subsequent performance gains. These results demonstrate that when it comes to short-term motor skill learning, the size of visual feedback matters. Furthermore, these results suggest that highest performance gains in individual subjects are achieved when the size of the virtual hand matches their real hand size. These results may have implications for optimizing motor training schemes. PMID:28056023

  7. Engine flow visualization using a copper vapor laser

    NASA Technical Reports Server (NTRS)

    Regan, Carolyn A.; Chun, Kue S.; Schock, Harold J., Jr.

    1987-01-01

    A flow visualization system has been developed to determine the air flow within the combustion chamber of a motored, axisymmetric engine. The engine has been equipped with a transparent quartz cylinder, allowing complete optical access to the chamber. A 40-Watt copper vapor laser is used as the light source. Its beam is focused down to a sheet approximately 1 mm thick. The light plane is passed through the combustion chamber, and illuminates oil particles which were entrained in the intake air. The light scattered off of the particles is recorded by a high speed rotating prism movie camera. A movie is then made showing the air flow within the combustion chamber for an entire four-stroke engine cycle. The system is synchronized so that a pulse generated by the camera triggers the laser's thyratron. The camera is run at 5,000 frames per second; the trigger drives one laser pulse per frame. This paper describes the optics used in the flow visualization system, the synchronization circuit, and presents results obtained from the movie. This is believed to be the first published study showing a planar observation of airflow in a four-stroke piston-cylinder assembly. These flow visualization results have been used to interpret flow velocity measurements previously obtained with a laser Doppler velocimetry system.
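    The synchronization scheme described above (one laser pulse per camera frame at 5,000 frames per second) fixes the pulse timing. A back-of-envelope check of those numbers from the abstract:

```python
# One laser pulse is triggered per camera frame at 5,000 frames per second,
# so the thyratron must fire every 200 microseconds.
frames_per_second = 5_000
pulse_period_us = 1_000_000 / frames_per_second
assert pulse_period_us == 200.0
```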

  8. Web-based visualization of gridded datasets using OceanBrowser

    NASA Astrophysics Data System (ADS)

    Barth, Alexander; Watelet, Sylvain; Troupin, Charles; Beckers, Jean-Marie

    2015-04-01

    OceanBrowser is a web-based visualization tool for gridded oceanographic data sets. Those data sets are typically four-dimensional (longitude, latitude, depth and time). OceanBrowser allows one to visualize horizontal sections at a given depth and time to examine the horizontal distribution of a given variable. It also offers the possibility to display the results on an arbitrary vertical section. To study the evolution of the variable in time, the horizontal and vertical sections can also be animated. Vertical sections can also be generated along a fixed distance from the coast or a fixed ocean depth. The user can customize the plot by changing the color map, the range of the color bar and the type of the plot (linearly interpolated color, simple contours, filled contours), and download the current view as a simple image or as a Keyhole Markup Language (KML) file for visualization in applications such as Google Earth. The data products can also be accessed as NetCDF files and through OPeNDAP. Third-party layers from a web map service can also be integrated. OceanBrowser is used in the frame of the SeaDataNet project (http://gher-diva.phys.ulg.ac.be/web-vis/) and EMODNET Chemistry (http://oceanbrowser.net/emodnet/) to distribute gridded data sets interpolated from in situ observations using DIVA (Data-Interpolating Variational Analysis).
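    The horizontal and vertical sections described above are, at bottom, slices through a four-dimensional gridded field. A minimal sketch of that operation, with a made-up variable name and grid sizes (not OceanBrowser's internals):

```python
import numpy as np

# Illustrative only: a (time, depth, lat, lon) gridded ocean variable,
# here 12 monthly steps, 20 depth levels, on a 2-degree grid.
temperature = np.random.rand(12, 20, 90, 180)

def horizontal_section(field, time_index, depth_index):
    """2D (lat, lon) slice at a fixed depth and time."""
    return field[time_index, depth_index, :, :]

def vertical_section(field, time_index, lon_index):
    """2D (depth, lat) slice along a meridian at a fixed time."""
    return field[time_index, :, :, lon_index]

surface_january = horizontal_section(temperature, time_index=0, depth_index=0)
assert surface_january.shape == (90, 180)
assert vertical_section(temperature, 0, 0).shape == (20, 90)
```

    Animating a section then amounts to stepping `time_index` and re-rendering the returned 2D array.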

  9. Canonical Visual Size for Real-World Objects

    PubMed Central

    Konkle, Talia; Oliva, Aude

    2012-01-01

    Real-world objects can be viewed at a range of distances and thus can be experienced at a range of visual angles within the visual field. Given the large amount of visual size variation possible when observing objects, we examined how internal object representations represent visual size information. In a series of experiments which required observers to access existing object knowledge, we observed that real-world objects have a consistent visual size at which they are drawn, imagined, and preferentially viewed. Importantly, this visual size is proportional to the logarithm of the assumed size of the object in the world, and is best characterized not as a fixed visual angle, but by the ratio of the object and the frame of space around it. Akin to the previous literature on canonical perspective, we term this consistent visual size information the canonical visual size. PMID:20822298
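    The reported regularity is that the canonical object-to-frame ratio grows with the logarithm of the object's assumed real-world size. A sketch of that relation with invented slope and intercept (not the paper's fitted values):

```python
import math

# Hypothetical linear fit on log size, for illustration only.
def canonical_ratio(assumed_size_cm, slope=0.1, intercept=0.2):
    """Object-to-frame ratio as a linear function of log10 assumed size."""
    return intercept + slope * math.log10(assumed_size_cm)

# A key (~5 cm) is drawn small within its frame; a car (~450 cm) much larger.
key, car = canonical_ratio(5), canonical_ratio(450)
assert key < car
```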

  10. An Alternative Option to Dedicated Braille Notetakers for People with Visual Impairments: Universal Technology for Better Access

    ERIC Educational Resources Information Center

    Hong, Sunggye

    2012-01-01

    Technology provides equal access to information and helps people with visual impairments to complete tasks more independently. Among various assistive technology options for people with visual impairments, braille notetakers have been considered the most significant because of their technological innovation. Braille notetakers allow users who are…

  11. Evaluation of Software for Introducing Protein Structure: Visualization and Simulation

    ERIC Educational Resources Information Center

    White, Brian; Kahriman, Azmin; Luberice, Lois; Idleh, Farhia

    2010-01-01

    Communicating an understanding of the forces and factors that determine a protein's structure is an important goal of many biology and biochemistry courses at a variety of levels. Many educators use computer software that allows visualization of these complex molecules for this purpose. Although visualization is in wide use and has been associated…

  12. Learner-Information Interaction: A Macro-Level Framework Characterizing Visual Cognitive Tools

    ERIC Educational Resources Information Center

    Sedig, Kamran; Liang, Hai-Ning

    2008-01-01

    Visual cognitive tools (VCTs) are external mental aids that maintain and display visual representations (VRs) of information (i.e., structures, objects, concepts, ideas, and problems). VCTs allow learners to operate upon the VRs to perform epistemic (i.e., reasoning and knowledge-based) activities. In VCTs, the mechanism by which learners operate…

  13. Visual Imagery for Letters and Words. Final Report.

    ERIC Educational Resources Information Center

    Weber, Robert J.

    In a series of six experiments, undergraduate college students visually imagined letters or words and then classified as rapidly as possible the imagined letters for some physical property such as vertical height. This procedure allowed for a preliminary assessment of the temporal parameters of visual imagination. The results delineate a number of…

  14. Semantic layers for illustrative volume rendering.

    PubMed

    Rautek, Peter; Bruckner, Stefan; Gröller, Eduard

    2007-01-01

    Direct volume rendering techniques map volumetric attributes (e.g., density, gradient magnitude, etc.) to visual styles. Commonly this mapping is specified by a transfer function. The specification of transfer functions is a complex task and requires expert knowledge about the underlying rendering technique. In the case of multiple volumetric attributes and multiple visual styles the specification of the multi-dimensional transfer function becomes more challenging and non-intuitive. We present a novel methodology for the specification of a mapping from several volumetric attributes to multiple illustrative visual styles. We introduce semantic layers that allow a domain expert to specify the mapping in the natural language of the domain. A semantic layer defines the mapping of volumetric attributes to one visual style. Volumetric attributes and visual styles are represented as fuzzy sets. The mapping is specified by rules that are evaluated with fuzzy logic arithmetic. The user specifies the fuzzy sets and the rules without special knowledge about the underlying rendering technique. Semantic layers allow for a linguistic specification of the mapping from attributes to visual styles, replacing the traditional transfer function specification.
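    A toy sketch of one such fuzzy rule, in the spirit of the semantic layers described above (the membership function, rule, and defuzzification below are invented for illustration, not taken from the paper):

```python
# Illustrative fuzzy rule: IF density IS high THEN opacity IS high.
def high_density(d):
    """Fuzzy membership of 'high density' on a 0..1 attribute scale (ramp)."""
    return min(1.0, max(0.0, (d - 0.4) / 0.4))

def rule_opacity(density):
    """Evaluate the rule; defuzzify trivially as opacity = firing degree."""
    return high_density(density)

assert rule_opacity(0.2) == 0.0   # low density -> fully transparent
assert rule_opacity(0.8) == 1.0   # high density -> fully opaque
assert 0.0 < rule_opacity(0.6) < 1.0   # partial membership in between
```

    The appeal of the linguistic form is visible even in this toy: the rule reads as domain language ("high density is opaque") rather than as a transfer function lookup table.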

  15. Saccade frequency response to visual cues during gait in Parkinson's disease: the selective role of attention.

    PubMed

    Stuart, Samuel; Lord, Sue; Galna, Brook; Rochester, Lynn

    2018-04-01

    Gait impairment is a core feature of Parkinson's disease (PD) with implications for falls risk. Visual cues improve gait in PD, but the underlying mechanisms are unclear. Evidence suggests that attention and vision play an important role; however, the relative contribution from each is unclear. Measurement of visual exploration (specifically saccade frequency) during gait allows for real-time measurement of attention and vision. Understanding how visual cues influence visual exploration may allow inferences about the underlying mechanisms of this response, which could help to develop effective therapeutics. This study aimed to examine saccade frequency during gait in response to a visual cue in PD and older adults and to investigate the roles of attention and vision in visual cue response in PD. A mobile eye-tracker measured saccade frequency during gait in 55 people with PD and 32 age-matched controls. Participants walked in a straight line with and without a visual cue (50 cm transverse lines) presented under single-task and dual-task (concurrent digit span recall) conditions. Saccade frequency was reduced when walking in PD compared to controls; however, visual cues ameliorated this saccadic deficit. Visual cues significantly increased saccade frequency in both PD and controls under both single-task and dual-task conditions. Attention rather than visual function was central to saccade frequency and gait response to visual cues in PD. In conclusion, this study highlights the impact of visual cues on visual exploration when walking and the important role of attention in PD. Understanding these complex features will help inform intervention development. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  16. A novel model for ectopic, chronic, intravital multiphoton imaging of bone marrow vasculature and architecture in split femurs

    PubMed Central

    Bălan, Mirela; Kiefer, Friedemann

    2015-01-01

    Creating a model for intravital visualization of femoral bone marrow, a major site of hematopoiesis in adult mammalian organisms, poses a serious challenge, in that it needs to overcome bone opacity and the inaccessibility of marrow. Furthermore, meaningful analysis of bone marrow developmental and differentiation processes requires the repetitive observation of the same site over long periods of time, which we refer to as chronic imaging. To surmount these issues, we developed a chronic intravital imaging model that allows the observation of split femurs, ectopically transplanted into a dorsal skinfold chamber of a host mouse. Repeated, long term observations are facilitated by multiphoton microscopy, an imaging technique that combines superior imaging capacity at greater tissue depth with low phototoxicity. The transplanted, ectopic femur was stabilized by its sterile environment and rapidly connected to the host vasculature, allowing further development and observation of extended processes. After optimizing transplant age and grafting procedure, we observed the development of new woven bone and maturation of secondary ossification centers in the transplanted femurs, preceded by the sprouting of a sinusoidal-like vascular network, which was almost entirely composed of femoral endothelial cells. After two weeks, the transplant was still populated with stromal and haematopoietic cells belonging to both donor and host. Over this time frame, the transplant partially retained myeloid progenitor cells with single and multi-lineage differentiation capacity. In summary, our model allowed repeated intravital imaging of bone marrow angiogenesis and hematopoiesis. It represents a promising starting point for the development of improved chronic optical imaging models for femoral bone marrow. PMID:28243515

  17. Cycle-specific female preferences for visual and non-visual cues in the horse (Equus caballus)

    PubMed Central

    Burger, Dominik; Meuwly, Charles; Thomas, Selina; Sieme, Harald; Oberthür, Michael; Wedekind, Claus; Meinecke-Tillmann, Sabine

    2018-01-01

    Although female preferences are well studied in many mammals, the possible effects of the oestrous cycle are not yet sufficiently understood. Here we investigate female preferences for visual and non-visual male traits relative to the periodic cycling between sexual proceptivity (oestrus) and inactivity (dioestrus) in the polygynous horse (Equus caballus). We individually exposed mares to stallions in four experimental situations: (i) mares in oestrus, visual contact with stallions allowed, (ii) mares in oestrus, with blinds (wooden partitions preventing visual contact but allowing for acoustic and olfactory communication), (iii) mares in dioestrus, no blinds, and (iv) mares in dioestrus, with blinds. Contact times of the mares with each stallion, defined as the cumulative amount of time a mare was in the vicinity of an individual stallion and actively searching contact, were used to rank stallions according to each mare’s preferences. We found that preferences based on visual traits differed significantly from preferences based on non-visual traits in dioestrous mares. The mares then showed a preference for older and larger males, but only if visual cues were available. In contrast, oestrous mares showed consistent preferences with or without blinds, i.e. their preferences were mainly based on non-visual traits and could not be predicted by male age or size. Stallions who were generally preferred displayed a high libido that may have positively influenced female interest or may have been a consequence of it. We conclude that the oestrous cycle has a significant influence on female preferences for visual and non-visual male traits in the horse. PMID:29466358

  18. The Generation of Novel MR Imaging Techniques to Visualize Inflammatory/Degenerative Mechanisms and the Correlation of MR Data with 3D Microscopic Changes

    DTIC Science & Technology

    2012-09-01

    [Abstract text garbled in extraction. Recoverable fragments describe techniques to concurrently stain and three-dimensionally analyze many cell types, and new methods that allowed the visualization of structures in damaged samples that were not visible using conventional techniques. AWARD NUMBER: W81XWH-11-1-0705.]

  19. Web based tools for visualizing imaging data and development of XNATView, a zero footprint image viewer

    PubMed Central

    Gutman, David A.; Dunn, William D.; Cobb, Jake; Stoner, Richard M.; Kalpathy-Cramer, Jayashree; Erickson, Bradley

    2014-01-01

    Advances in web technologies now allow direct visualization of imaging data sets without necessitating the download of large file sets or the installation of software. This allows centralization of file storage and facilitates image review and analysis. XNATView is a light framework recently developed in our lab to visualize DICOM images stored in The Extensible Neuroimaging Archive Toolkit (XNAT). It consists of a PyXNAT-based framework to wrap around the REST application programming interface (API) and query the data in XNAT. XNATView was developed to simplify quality assurance, help organize imaging data, and facilitate data sharing for intra- and inter-laboratory collaborations. Its zero-footprint design allows the user to connect to XNAT from a web browser, navigate through projects, experiments, and subjects, and view DICOM images with accompanying metadata all within a single viewing instance. PMID:24904399

  20. JavaScript: Data Visualizations

    EPA Pesticide Factsheets

    D3 is a JavaScript library that, in a manner similar to the jQuery library, allows direct inspection and manipulation of the Document Object Model, but is intended primarily for data visualization.

  1. Efficient summary statistical representation when change localization fails.

    PubMed

    Haberman, Jason; Whitney, David

    2011-10-01

    People are sensitive to the summary statistics of the visual world (e.g., average orientation/speed/facial expression). We readily derive this information from complex scenes, often without explicit awareness. Given the fundamental and ubiquitous nature of summary statistical representation, we tested whether this kind of information is subject to the attentional constraints imposed by change blindness. We show that information regarding the summary statistics of a scene is available despite limited conscious access. In a novel experiment, we found that while observers can suffer from change blindness (i.e., fail to localize where change occurred between two views of the same scene), they could nevertheless accurately report changes in the summary statistics (or "gist") of the very same scene. In the experiment, observers saw two successively presented sets of 16 faces that varied in expression. Four of the faces in the first set changed from one emotional extreme (e.g., happy) to another (e.g., sad) in the second set. Observers performed poorly when asked to locate any of the faces that changed (change blindness). However, when asked about the ensemble (which set was happier, on average), observer performance remained high. Observers were sensitive to the average expression even when they failed to localize any specific object change. That is, even when observers could not locate the very faces driving the change in average expression between the two sets, they nonetheless derived a precise ensemble representation. Thus, the visual system may be optimized to process summary statistics in an efficient manner, allowing it to operate despite minimal conscious access to the information presented.
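    A toy numerical version of the ensemble-change logic described above (the stimulus values are invented; expressions are coded on a happy(+1)..sad(-1) axis):

```python
import random

# 16 'faces' with expression values; four flip toward 'sad' in the second set.
random.seed(0)
set1 = [random.uniform(-1, 1) for _ in range(16)]
set2 = list(set1)
for i in random.sample(range(16), 4):
    set2[i] = set2[i] - 0.8   # four items change by a fixed amount

mean1 = sum(set1) / len(set1)
mean2 = sum(set2) / len(set2)

# The ensemble statistic shifts by a predictable amount (4 * 0.8 / 16 = 0.2)
# even though any single changed item is hard to localize among 16.
assert mean2 < mean1
assert abs((mean1 - mean2) - 4 * 0.8 / 16) < 1e-9
```

    The point the toy makes explicit: a reliable change in the mean can coexist with per-item changes that are individually near threshold, which is why gist report can succeed where localization fails.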

  2. A rapid method to visualize von willebrand factor multimers by using agarose gel electrophoresis, immunolocalization and luminographic detection.

    PubMed

    Krizek, D R; Rick, M E

    2000-03-15

    A highly sensitive and rapid clinical method for the visualization of the multimeric structure of von Willebrand Factor in plasma and platelets is described. The method utilizes submerged horizontal agarose gel electrophoresis, followed by transfer of the von Willebrand Factor onto a polyvinylidene fluoride membrane, and immunolocalization and luminographic visualization of the von Willebrand Factor multimeric pattern. This method distinguishes type 1 from types 2A and 2B von Willebrand disease, allowing timely evaluation and classification of von Willebrand Factor in patient plasma. It also allows visualization of the unusually high molecular weight multimers present in platelets. There are several major advantages to this method, including rapid processing, simplicity of gel preparation, high sensitivity to low concentrations of von Willebrand Factor, and elimination of radioactivity.

  3. Fully automatic three-dimensional visualization of intravascular optical coherence tomography images: methods and feasibility in vivo

    PubMed Central

    Ughi, Giovanni J; Adriaenssens, Tom; Desmet, Walter; D’hooge, Jan

    2012-01-01

    Intravascular optical coherence tomography (IV-OCT) is an imaging modality that can be used for the assessment of intracoronary stents. Recent publications have pointed out that 3D visualizations have potential advantages compared to conventional 2D representations. However, 3D imaging still requires a time-consuming manual procedure not suitable for on-line application during coronary interventions. We propose an algorithm for a rapid and fully automatic 3D visualization of IV-OCT pullbacks. IV-OCT images are first processed for the segmentation of the different structures. This also allows for automatic pullback calibration. Then, according to the segmentation results, different structures are depicted with different colors to visualize the vessel wall, the stent and the guide-wire in detail. Final 3D rendering results are obtained through the use of a commercial 3D DICOM viewer. Manual analysis was used as ground-truth for the validation of the segmentation algorithms. A correlation value of 0.99 and good limits of agreement (Bland-Altman statistics) were found over 250 images randomly extracted from 25 in vivo pullbacks. Moreover, 3D rendering was compared to angiography, to pictures of deployed stents made available by the manufacturers, and to conventional 2D imaging, corroborating the visualization results. Computational time for the visualization of an entire data set was ~74 s. The proposed method allows for the on-line use of 3D IV-OCT during percutaneous coronary interventions, potentially allowing treatment optimization. PMID:23243578
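    The validation above reports Bland-Altman limits of agreement between automatic and manual segmentation. A minimal sketch of that statistic (the measurement values below are invented, not the paper's data):

```python
import statistics

# Bland-Altman: bias and 95% limits of agreement between two raters.
def bland_altman(auto, manual):
    diffs = [a - m for a, m in zip(auto, manual)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    # 95% limits of agreement: bias +/- 1.96 * SD of the differences.
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Invented example measurements (e.g., lumen areas in mm^2).
auto_areas = [5.1, 6.3, 4.8, 7.0, 5.9]
manual_areas = [5.0, 6.4, 4.9, 6.8, 6.0]
bias, (lo, hi) = bland_altman(auto_areas, manual_areas)
assert lo < bias < hi   # the bias always lies inside its own limits
```

    Narrow limits around a near-zero bias are what "good limits of agreement" means in the abstract: the automatic method neither systematically over- nor under-measures relative to the manual ground truth.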

  4. Location accuracy evaluation of lightning location systems using natural lightning flashes recorded by a network of high-speed cameras

    NASA Astrophysics Data System (ADS)

    Alves, J.; Saraiva, A. C. V.; Campos, L. Z. D. S.; Pinto, O., Jr.; Antunes, L.

    2014-12-01

    This work presents a method for evaluating the location accuracy of all Lightning Location Systems (LLS) in operation in southeastern Brazil, using natural cloud-to-ground (CG) lightning flashes. This is done through a network of multiple high-speed cameras (the RAMMER network) installed in the Paraiba Valley region - SP - Brazil. The RAMMER network (Automated Multi-camera Network for Monitoring and Study of Lightning) is composed of four high-speed cameras operating at 2,500 frames per second. Three stationary black-and-white (B&W) cameras were situated in the cities of São José dos Campos and Caçapava. A fourth color camera was mobile (installed in a car), but operated from a fixed location during the observation period, within the city of São José dos Campos. The average distance among cameras was 13 kilometers. Each RAMMER sensor position was determined so that the network could observe the same lightning flash from different angles, and all recorded videos were GPS (Global Positioning System) time stamped, allowing comparisons of events between cameras and the LLS. The RAMMER sensor is basically composed of a computer, a Phantom high-speed camera (version 9.1) and a GPS unit. The lightning cases analyzed in the present work were observed by at least two cameras; their positions were visually triangulated and the results compared with the BrasilDAT network during the summer seasons of 2011/2012 and 2012/2013. The visual triangulation method is presented in detail. The calibration procedure showed an accuracy of 9 meters between the accurate GPS position of the triangulated object and the result of the visual triangulation method. Lightning return stroke positions, estimated with the visual triangulation method, were compared with LLS locations. Differences between solutions were not greater than 1.8 km.
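    The core geometry of visual triangulation from two cameras can be sketched simply: each camera at a known position measures a bearing to the lightning channel, and the position estimate is the intersection of the two rays. This is a simplified 2D illustration with invented coordinates, not the RAMMER calibration procedure:

```python
import math

# Intersect two bearing rays from known camera positions (coordinates in km,
# azimuths in radians measured from the +x axis).
def triangulate(p1, az1, p2, az2):
    d1 = (math.cos(az1), math.sin(az1))
    d2 = (math.cos(az2), math.sin(az2))
    # Solve p1 + t1*d1 = p2 + t2*d2 for t1 (2x2 system, Cramer's rule).
    det = d1[0] * (-d2[1]) - d1[1] * (-d2[0])
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * (-d2[1]) - ry * (-d2[0])) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Target at (5, 5): camera A at the origin sees it at 45 deg,
# camera B at (10, 0) sees it at 135 deg.
x, y = triangulate((0.0, 0.0), math.radians(45), (10.0, 0.0), math.radians(135))
assert abs(x - 5.0) < 1e-6 and abs(y - 5.0) < 1e-6
```

    The real procedure works with GPS-surveyed camera positions and calibrated image coordinates, but the location estimate reduces to this kind of ray intersection.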

  5. Herding Cats: Geocuration Practices Employed for Field Research Data Collection Activities and Visualization by Blueprint Earth

    NASA Astrophysics Data System (ADS)

    Hoover, R.; Harrison, M.; Sonnenthal, N.; Hernandez, A.; Pelaez, J.

    2015-12-01

    Researchers investigating interdisciplinary topics must work to understand the barriers created by information silos in order to collaborate productively on complex Earth science questions. These barriers pose acute challenges when research is driven by observations rather than hypotheses, because communication between collaborators hinges on data synthesis techniques that often vary greatly between disciplines. Field data collection across disciplines creates still more challenges, and employing student researchers of varying abilities demands an approach that is structured yet flexible enough to accommodate inherent differences in the subjective portions of student data collection. Blueprint Earth is performing system-level environmental observations in the broad areas of geology, biology, hydrology, and atmospheric science. Traditional field data collection methodologies are employed for ease of reproducibility, but must translate across disciplinary information silos. The information collected must be readily usable in formulating hypotheses based on field observations, which requires that all investigators involved in data analysis understand the key metrics. Blueprint Earth demonstrates the ability to create clear data standards across several disciplines while incorporating a quality-control process, which allows the data to be converted into functional visualizations. Additionally, geocuration is organized so that the data will be ready for public dissemination upon completion of field research.

  6. Digital holographic interferometry applied to the investigation of ignition process.

    PubMed

    Pérez-Huerta, J S; Saucedo-Anaya, Tonatiuh; Moreno, I; Ariza-Flores, D; Saucedo-Orozco, B

    2017-06-12

    We use the digital holographic interferometry (DHI) technique to visualize the early ignition process of a butane-air mixture flame. Because this event occurs in a short time (a few milliseconds), a fast CCD camera is used to study it. As more detail is required for monitoring the temporal evolution of the process, less light from the combustion is captured by the CCD camera, resulting in a deficient, underexposed image. Direct CCD observation of the combustion process is therefore limited (to 1,000 frames per second). To overcome this drawback, we propose the use of DHI together with a high-power laser to supply enough light to increase the capture speed, thus improving visualization of the phenomenon in its initial moments. An experimental optical setup based on DHI is used to obtain a long sequence of phase maps that allows us to observe two transitory stages in the ignition process: a first explosion that emits only faint visible light, and a second stage induced by temperature variations as the flame emerges. While the second stage can be monitored directly by the CCD camera, the first is hardly detected by direct observation, and DHI clearly reveals it. Furthermore, our method can easily be adapted to visualize other types of fast processes.

  7. Endocytosis and interaction of poly (amidoamine) dendrimers with Caco-2 cells.

    PubMed

    Kitchens, Kelly M; Foraker, Amy B; Kolhatkar, Rohit B; Swaan, Peter W; Ghandehari, Hamidreza

    2007-11-01

    To investigate the internalization and subcellular trafficking of fluorescently labeled poly(amidoamine) (PAMAM) dendrimers in intestinal cell monolayers. PAMAM dendrimers with positive or negative surface charge were conjugated to fluorescein isothiocyanate (FITC) and visualized for colocalization with endocytosis markers using confocal microscopy. The effect of concentration, generation, and charge on the morphology of microvilli was observed using transmission electron microscopy. Both cationic and anionic PAMAM dendrimers internalized within 20 min and differentially colocalized with the endocytosis markers clathrin, EEA-1, and LAMP-1. Transmission electron microscopy analysis showed a concentration-, generation-, and surface-charge-dependent effect on microvilli morphology. These studies provide visual evidence that endocytic mechanism(s) contribute to the internalization and subcellular trafficking of PAMAM dendrimers across intestinal cells, and that appropriate selection of PAMAM dendrimers based on surface charge, concentration, and generation number allows the application of these polymers to oral drug delivery.

  8. Evidence against the temporal subsampling account of illusory motion reversal

    PubMed Central

    Kline, Keith A.; Eagleman, David M.

    2010-01-01

    An illusion of reversed motion may occur sporadically while viewing continuous smooth motion. This has been suggested as evidence of discrete temporal sampling by the visual system, in analogy to the sampling that generates the wagon-wheel effect on film. In an alternative theory, the illusion results not from discrete sampling but from perceptual rivalry between appropriately activated and spuriously activated motion detectors. Results of the current study demonstrate that illusory reversals of two spatially overlapping and orthogonal motions often occur separately, providing evidence against the possibility that illusory motion reversal (IMR) is caused by temporal sampling within a visual region. Further, we find that IMR occurs with non-uniform and non-periodic stimuli, an observation that is not accounted for by the temporal sampling hypothesis. We propose that a motion aftereffect is superimposed on the moving stimulus, sporadically allowing motion detectors for the reverse direction to dominate perception. PMID:18484852

  9. The pH ruler: a Java applet for developing interactive exercises on acids and bases.

    PubMed

    Barrette-Ng, Isabelle H

    2011-07-01

    In introductory biochemistry courses, it is often a struggle to teach the basic concepts of acid-base chemistry in a manner that is relevant to biological systems. To help students gain a more intuitive and visual understanding of abstract acid-base concepts, a simple graphical construct called the pH ruler Java applet was developed. The applet allows students to visualize the abundance of the different protonation states of diprotic and triprotic amino acids at different pH values. Using the applet, the student can drag a widget along a slider bar to change the pH and observe, in real time, changes in the abundance of the different ionization states of the amino acid. This tool provides a means for developing more complex inquiry-based, active-learning exercises to teach more advanced topics of biochemistry, such as protein purification, protein structure, and enzyme mechanism.
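    The quantities such an applet displays follow directly from the standard acid-base distribution equations: at any pH, the fractional abundance of each protonation state of a diprotic species is fixed by the two pKa values. A minimal sketch (generic values, not tied to the applet's code):

    ```python
    def diprotic_fractions(pH, pKa1, pKa2):
        """Fractional abundances of the three protonation states of a
        diprotic acid (H2A, HA-, A2-) at a given pH, from the standard
        distribution equations."""
        h = 10 ** -pH
        k1, k2 = 10 ** -pKa1, 10 ** -pKa2
        denom = h * h + h * k1 + k1 * k2
        return (h * h / denom, h * k1 / denom, k1 * k2 / denom)
    ```

    At pH equal to pKa1 the first two states are equally abundant, which is exactly the crossover a student sees when dragging the slider through that point.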

  10. Single α-particle irradiation permits real-time visualization of RNF8 accumulation at DNA damaged sites

    NASA Astrophysics Data System (ADS)

    Muggiolu, Giovanna; Pomorski, Michal; Claverie, Gérard; Berthet, Guillaume; Mer-Calfati, Christine; Saada, Samuel; Devès, Guillaume; Simon, Marina; Seznec, Hervé; Barberet, Philippe

    2017-01-01

    As well as being a significant source of environmental radiation exposure, α-particles are increasingly considered for use in targeted radiation therapy. A better understanding of α-particle-induced damage at the DNA scale can be achieved by following their tracks in real time in targeted living cells. Focused α-particle microbeams can facilitate this but, because of their low energy (up to a few MeV) and limited range, α-particle detection, delivery, and follow-up observation of radiation-induced damage remain difficult. In this study, we developed a thin boron-doped nanocrystalline diamond membrane that allows reliable single-α-particle detection and single-cell irradiation with negligible beam scattering. The radiation-induced responses to single 3 MeV α-particles delivered with the focused microbeam are visualized in situ over the thirty minutes following irradiation through the accumulation of the GFP-tagged RNF8 protein at DNA damage sites.

  11. Display characterization by eye: contrast ratio and discrimination throughout the grayscale

    NASA Astrophysics Data System (ADS)

    Gille, Jennifer; Arend, Larry; Larimer, James O.

    2004-06-01

    We have measured the ability of observers to estimate the contrast ratio (maximum white luminance divided by minimum black or gray luminance) of various displays and to assess luminance discrimination over the tone scale of the display. This was done using only the computer itself and easily distributed devices such as neutral-density filters. The ultimate goal of this work is to determine how much of the characterization of a display can be performed by the ordinary user in situ, in a manner that takes advantage of the unique abilities of the human visual system and measures visually important aspects of the display. We discuss the relationship among contrast ratio, tone scale, display transfer function, and room lighting. These results may contribute to the development of applications that allow optimization of displays for the situated viewer/display system without instrumentation and without indirect inferences from laboratory to workplace.

  12. Pycortex: an interactive surface visualizer for fMRI

    PubMed Central

    Gao, James S.; Huth, Alexander G.; Lescroart, Mark D.; Gallant, Jack L.

    2015-01-01

    Surface visualizations of fMRI provide a comprehensive view of cortical activity. However, surface visualizations are difficult to generate, and most common visualization techniques rely on unnecessary interpolation, which limits the fidelity of the resulting maps. Furthermore, it is difficult to understand the relationship between flattened cortical surfaces and the underlying 3D anatomy using currently available tools. To address these problems we have developed pycortex, a Python toolbox for interactive surface mapping and visualization. Pycortex exploits the power of modern graphics cards to sample volumetric data on a per-pixel basis, allowing dense and accurate mapping of the voxel grid across the surface. Anatomical and functional information can be projected onto the cortical surface. The surface can be inflated and flattened interactively, aiding interpretation of the correspondence between the anatomical surface and the flattened cortical sheet. The output of pycortex can be viewed using WebGL, a technology compatible with modern web browsers. This allows complex fMRI surface maps to be distributed broadly online without requiring installation of complex software. PMID:26483666
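    The volume-to-surface mapping that pycortex performs per pixel on the GPU can be illustrated in miniature on the CPU: map each vertex coordinate through the inverse of the volume's affine transform and read off the nearest voxel. A sketch only (nearest-neighbor sampling; function and variable names are illustrative and are not pycortex's API):

    ```python
    import numpy as np

    def sample_volume_on_surface(volume, vertices, affine):
        """Nearest-voxel sampling of a 3D data volume at surface-vertex
        coordinates (world space -> voxel indices via the inverse affine)."""
        inv = np.linalg.inv(affine)
        homog = np.c_[vertices, np.ones(len(vertices))]   # homogeneous coords
        ijk = np.rint((homog @ inv.T)[:, :3]).astype(int)  # voxel indices
        ijk = np.clip(ijk, 0, np.array(volume.shape) - 1)  # stay in bounds
        return volume[ijk[:, 0], ijk[:, 1], ijk[:, 2]]
    ```

    Doing this per screen pixel rather than per vertex, as pycortex does, is what avoids the interpolation losses the abstract mentions.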

  13. ESTEEM: A Novel Framework for Qualitatively Evaluating and Visualizing Spatiotemporal Embeddings in Social Media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arendt, Dustin L.; Volkova, Svitlana

    Analyzing and visualizing large amounts of social media communication, and contrasting short-term conversation changes over time and geo-location, is extremely important for commercial and government applications. Earlier approaches to large-scale text stream summarization used dynamic topic models and trending words. Instead, we rely on text embeddings: low-dimensional word representations in a continuous vector space in which similar words are embedded near each other. This paper presents ESTEEM, a novel tool for visualizing and evaluating spatiotemporal embeddings learned from streaming social media texts. Our tool allows users to monitor and analyze query words and their closest neighbors with an interactive interface. We used state-of-the-art techniques to learn embeddings and developed a visualization to represent dynamically changing relations between words in social media over time and other dimensions. This is the first interactive visualization of streaming text representations learned from social media texts that also allows users to contrast differences across multiple dimensions of the data.
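    The "closest neighbors" view such a tool provides reduces to cosine similarity in the embedding space. A minimal sketch with a toy vocabulary (names and vectors are invented for illustration; this is not the ESTEEM codebase):

    ```python
    import numpy as np

    def nearest_neighbors(query, vocab, embeddings, k=3):
        """Return the k words whose embedding vectors have the highest
        cosine similarity to the query word's vector (query excluded)."""
        # Row-normalize so a dot product equals cosine similarity.
        E = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
        q = E[vocab.index(query)]
        sims = E @ q
        order = np.argsort(-sims)                 # descending similarity
        return [vocab[i] for i in order if vocab[i] != query][:k]
    ```

    Tracking how a query word's neighbor list shifts between time slices or regions is the kind of spatiotemporal contrast the tool visualizes interactively.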

  14. Chemical and visual communication during mate searching in rock shrimp.

    PubMed

    Díaz, Eliecer R; Thiel, Martin

    2004-06-01

    Mate searching in crustaceans depends on different communicational cues, of which chemical and visual cues are most important. Herein we examined the role of chemical and visual communication during mate searching and assessment in the rock shrimp Rhynchocinetes typus. Adult male rock shrimp experience major ontogenetic changes. The terminal molt stages (named "robustus") are dominant and capable of monopolizing females during the mating process. Previous studies had shown that most females preferably mate with robustus males, but how these dominant males and receptive females find each other is uncertain, and is the question we examined herein. In a Y-maze designed to test for the importance of waterborne chemical cues, we observed that females approached the robustus male significantly more often than the typus male. Robustus males, however, were unable to locate receptive females via chemical signals. Using an experimental set-up that allowed testing for the importance of visual cues, we demonstrated that receptive females do not use visual cues to select robustus males, but robustus males use visual cues to find receptive females. Visual cues used by the robustus males were the tumults created by agitated aggregations of subordinate typus males around the receptive females. These results indicate a strong link between sexual communication and the mating system of rock shrimp in which dominant males monopolize receptive females. We found that females and males use different (sex-specific) communicational cues during mate searching and assessment, and that the sexual communication of rock shrimp is similar to that of the American lobster, where females are first attracted to the dominant males by chemical cues emitted by these males. A brief comparison between these two species shows that female behaviors during sexual communication contribute strongly to the outcome of mate searching and assessment.

  15. A Presentation of Spectacular Visualizations

    NASA Technical Reports Server (NTRS)

    Hasler, Fritz; Einaudi, Franco (Technical Monitor)

    2000-01-01

    The NASA/NOAA/AMS Earth Science Electronic Theater presents Earth science observations and visualizations in a historical perspective. Fly in from outer space to Florida and the KSC Visitor's Center. Go back to the early weather satellite images from the 1960s and see them contrasted with the latest international global satellite weather movies, including killer hurricanes and tornadic thunderstorms. See the latest spectacular images from NASA and National Oceanic and Atmospheric Administration (NOAA) remote sensing missions such as the Geostationary Operational Environmental Satellites (GOES), NOAA, the Tropical Rainfall Measuring Mission (TRMM), SeaWiFS, Landsat 7, and the new Terra, visualized with state-of-the-art tools. Shown in High Definition TV resolution (2048 x 768 pixels) are visualizations of hurricanes Lenny, Floyd, Georges, Mitch, Fran, and Linda. See visualizations featured on the covers of magazines such as Newsweek, TIME, National Geographic, and Popular Science, and on national and international network TV. New Digital Earth visualization tools allow us to roam and zoom through massive global images, including a Landsat tour of the US with drill-downs into major cities using one-meter-resolution spy-satellite technology from the Space Imaging IKONOS satellite. Spectacular new visualizations of the global atmosphere and oceans are shown. See massive dust storms sweeping across Africa. See ocean vortexes and currents that bring up the nutrients that feed tiny plankton and draw the fish, giant whales, and fishermen. See how the ocean blooms in response to these currents and El Niño/La Niña climate changes. The demonstration is interactively driven by an SGI Octane graphics supercomputer with dual CPUs, 5 gigabytes of RAM, and a terabyte disk, using two projectors across the supersized Universe Theater panoramic screen.

  16. Indocyanine green for intraoperative localization of ureter.

    PubMed

    Siddighi, Sam; Yune, Junchan Joshua; Hardesty, Jeffrey

    2014-10-01

    Intraurethral injection of indocyanine green (ICG; Akorn, Lake Forest, IL) and visualization under near-infrared (NIR) light allow real-time delineation of the ureter. This technology can help prevent iatrogenic ureteral injury during pelvic surgery. Patients were scheduled to undergo robot-assisted laparoscopic sacrocolpopexy. Before the robotic surgery started, the tip of a 6-F ureteral catheter was inserted into the ureteral orifice. Twenty-five milligrams of ICG was dissolved in 10 mL of sterile water and injected through the open catheter. The same procedure was repeated on the opposite side. The ICG reversibly stained the inside lining of the ureter by binding to proteins in the urothelial layer. During the course of robotic surgery, the NIR laser on the da Vinci Si surgical robot (Intuitive Surgical, Inc., Sunnyvale, CA) was used to excite the ICG molecules, and the infrared emission was captured by the da Vinci filtered lens system and electronically converted to green color. Thus, the ureter fluoresced green, which allowed its definitive identification throughout the entire case. In all of the >10 patients, we were able to visualize both ureters with this technology, although brightness varied with the depth of the ureter beneath the peritoneal surface; in a morbidly obese patient, for example, the ureters were not as bright green. There were no intraoperative or postoperative adverse effects attributable to ICG administration over up to 2 months of observation. In our experience, this novel method of intraurethral ICG injection helped identify the entire course of the ureter and allowed a safe approach to tissues adjacent to the urinary tract. The advantage of our technique is that it requires the insertion of only the tip of a ureteral catheter. Despite our limited cohort of patients, our findings are consistent with previous reports of the excellent safety profile of intravenous and intrabiliary ICG.

  17. Electric field-induced emission enhancement and modulation in individual CdSe nanowires.

    PubMed

    Vietmeyer, Felix; Tchelidze, Tamar; Tsou, Veronica; Janko, Boldizsar; Kuno, Masaru

    2012-10-23

    CdSe nanowires show reversible emission-intensity enhancements when subjected to electric field strengths ranging from 5 to 22 MV/m. Under alternating positive and negative biases, emission-intensity modulation depths of 14 ± 7% are observed. Individual wires are studied by placing them in parallel-plate capacitor-like structures and monitoring their emission intensities via single-nanostructure microscopy. The observed emission sensitivities are rationalized by the field-induced modulation of carrier detrapping rates from nanowire defect sites responsible for nonradiative relaxation processes. The exclusion of these states from subsequent photophysics leads to the observed photoluminescence quantum-yield enhancements. We explain the phenomenon quantitatively by developing a kinetic model that accounts for field-induced variations in carrier detrapping rates. The observed phenomenon allows direct visualization of trap-state behavior in individual CdSe nanowires and represents a first step toward new optical techniques for probing defects in low-dimensional materials.

  18. Learning to visually perceive the relative mass of colliding balls in globally and locally constrained task ecologies.

    PubMed

    Jacobs, D M; Runeson, S; Michaels, C F

    2001-10-01

    Novice observers differ from each other in the kinematic variables they use for the perception of kinetic properties, but they converge on more useful variables after practice with feedback. The colliding-balls paradigm was used to investigate how the convergence depends on the relations between the candidate variables and the to-be-perceived property, relative mass. Experiment 1 showed that observers do not change in the variables they use if the variables with which they start allow accurate performance. Experiment 2 showed that, at least for some observers, convergence can be facilitated by reducing the correlations between commonly used nonspecifying variables and relative mass but not by keeping those variables constant. Experiments 3a and 3b further demonstrated that observers learn not to rely on a particular nonspecifying variable if the correlation between that variable and relative mass is reduced.

  19. Visions of Our Planet's Atmosphere, Land & Oceans - ETheater Presentation

    NASA Technical Reports Server (NTRS)

    Hasler, F.

    2000-01-01

    The NASA/NOAA/AMS Earth Science Electronic Theater presents Earth science observations and visualizations in a historical perspective. Fly in from outer space to Florida and the KSC Visitor's Center. Go back to the early weather satellite images from the 1960s and see them contrasted with the latest international global satellite weather movies, including killer hurricanes and tornadic thunderstorms. See the latest spectacular images from NASA and NOAA remote sensing missions like GOES, NOAA, TRMM, SeaWiFS, Landsat 7, and the new Terra, visualized with state-of-the-art tools. Shown in High Definition TV resolution (2048 x 768 pixels) are visualizations of hurricanes Lenny, Floyd, Georges, Mitch, Fran, and Linda. See visualizations featured on the covers of magazines like Newsweek, TIME, National Geographic, and Popular Science, and on national and international network TV. New Digital Earth visualization tools allow us to roam and zoom through massive global images, including a Landsat tour of the US with drill-downs into major cities using 1-m-resolution spy-satellite technology from the Space Imaging IKONOS satellite. Spectacular new visualizations of the global atmosphere and oceans are shown. See massive dust storms sweeping across Africa. See ocean vortexes and currents that bring up the nutrients that feed tiny plankton and draw the fish, giant whales, and fishermen. See how the ocean blooms in response to these currents and El Niño/La Niña climate changes. The demonstration is interactively driven by an SGI Octane graphics supercomputer with dual CPUs, 5 gigabytes of RAM, and a terabyte disk, using two projectors across the supersized Universe Theater panoramic screen.

  20. Audiovisual correspondence between musical timbre and visual shapes

    PubMed Central

    Adeli, Mohammad; Rouat, Jean; Molotchnikoff, Stéphane

    2014-01-01

    This article investigates the cross-modal correspondences between musical timbre and visual shapes. Previous studies of audio-visual correspondences have mostly used features such as pitch, loudness, light intensity, visual size, and color characteristics, and most have employed simple stimuli (e.g., simple tones). In this experiment, 23 musical sounds varying in fundamental frequency and timbre but fixed in loudness were used. Each sound was presented once against colored shapes and once against grayscale shapes. Subjects had to select the visual equivalent of a given sound, i.e., its shape, color (or grayscale), and vertical position. This scenario permitted studying the associations between normalized timbre and visual shapes, as well as revisiting some previous findings with more complex stimuli. One hundred nineteen subjects (31 female and 88 male) participated in the online experiment. Subjects included 36 self-described professional musicians, 47 self-described amateur musicians, and 36 self-described non-musicians; 31 subjects also reported synesthesia-like experiences. A strong association between the timbre of envelope-normalized sounds and visual shapes was observed. Subjects strongly associated soft timbres with blue, green, or light-gray rounded shapes; harsh timbres with red, yellow, or dark-gray sharp angular shapes; and timbres combining elements of softness and harshness with a mixture of the two previous shapes. Color or grayscale had no effect on timbre-shape associations. Fundamental frequency was not associated with height, grayscale, or color. The significant correspondence between timbre and shape revealed by the present work could inform the design of sensory-substitution systems that might help the blind perceive shapes through timbre. PMID:24910604

  1. Visions of Our Planet's Atmosphere, Land and Oceans: Electronic-Theater 2000

    NASA Technical Reports Server (NTRS)

    Hasler, A. F.

    2000-01-01

    The NASA/NOAA/AMS Earth Science Electronic Theater presents Earth science observations and visualizations in a historical perspective. Fly in from outer space to the Delaware Bay and Philadelphia area. Go back to the early weather satellite images from the 1960s and see them contrasted with the latest international global satellite weather movies, including killer tropical cyclones and tornadic thunderstorms. See the latest spectacular images from NASA, NOAA, and EUMETSAT remote sensing missions like GOES, Meteosat, NOAA, TRMM, SeaWiFS, Landsat 7, and the new Terra, visualized with state-of-the-art tools. Shown in High Definition TV resolution (2048 x 768 pixels) are visualizations of hurricanes Lenny, Floyd, Georges, Mitch, Fran, and Linda. See visualizations featured on the covers of magazines like Newsweek, TIME, National Geographic, and Popular Science, and on national and international network TV. New Digital Earth visualization tools allow us to roam and zoom through massive global images, including Landsat tours of the US and Africa, with drill-downs into major global cities using 1-m-resolution commercialized spy-satellite technology from the Space Imaging IKONOS satellite. Spectacular new visualizations of the global atmosphere and oceans are shown. See massive dust storms sweeping across Africa. See ocean vortexes and currents that bring up the nutrients that feed tiny plankton and draw the fish, giant whales, and fishermen. See how the ocean blooms in response to these currents and El Niño/La Niña climate changes. The demonstration is interactively driven by an SGI Octane graphics supercomputer with dual CPUs, 5 gigabytes of RAM, and a terabyte disk, using two projectors across a supersized panoramic screen.

  2. NASA/NOAA/AMS Earth Science Electronic Theatre

    NASA Technical Reports Server (NTRS)

    Hasler, Fritz; Pierce, Hal; Einaudi, Franco (Technical Monitor)

    2001-01-01

    The NASA/NOAA/AMS Earth Science Electronic Theater presents Earth science observations and visualizations in a historical perspective. Fly in from outer space to Florida and the KSC Visitor's Center. Go back to the early weather satellite images from the 1960s and see them contrasted with the latest international global satellite weather movies, including killer hurricanes and tornadic thunderstorms. See the latest spectacular images from NASA and NOAA remote sensing missions like GOES, NOAA, TRMM, SeaWiFS, Landsat 7, and the new Terra, visualized with state-of-the-art tools. Shown in High Definition TV resolution (2048 x 768 pixels) are visualizations of hurricanes Lenny, Floyd, Georges, Mitch, Fran, and Linda. See visualizations featured on the covers of magazines like Newsweek, TIME, National Geographic, and Popular Science, and on national and international network TV. New Digital Earth visualization tools allow us to roam and zoom through massive global images, including a Landsat tour of the US with drill-downs into major cities using 1-m-resolution spy-satellite technology from the Space Imaging IKONOS satellite. Spectacular new visualizations of the global atmosphere and oceans are shown. See massive dust storms sweeping across Africa. See ocean vortexes and currents that bring up the nutrients that feed tiny plankton and draw the fish, giant whales, and fishermen. See how the ocean blooms in response to these currents and El Niño/La Niña climate changes. The demonstration is interactively driven by an SGI Octane graphics supercomputer with dual CPUs, 5 gigabytes of RAM, and a terabyte disk, using two projectors across the supersized Universe Theater panoramic screen.

  3. Results of Observations of Occultations of Stars by Main-Belt and Trojan Asteroids, and the Promise of Gaia

    NASA Astrophysics Data System (ADS)

    Dunham, David W.; Herald, David Russell; Preston, Steven; Loader, Brian; Bixby Dunham, Joan

    2016-10-01

    For 40 years, the sizes and shapes of scores of asteroids have been determined from observations of asteroidal occultations, and many hundreds of high-precision positions of asteroids relative to stars have been measured. Earlier this year, the 3,000th observation of an asteroidal occultation was documented. Some of the first evidence for satellites of asteroids came from the early efforts; now, the orbits and sizes of some satellites discovered by other means have been refined from occultation observations. Several close binary stars have also been discovered, and the angular diameters of some stars have been measured, from analysis of these observations. The International Occultation Timing Association (IOTA) coordinates this activity worldwide, from predicting and publicizing the events to accurately timing the occultations from as many stations as possible and publishing and archiving the observations. The first observations were timed visually, but now nearly all observations are either video-recorded or recorded with CCD drift scans, allowing small magnitude-drop events to be recorded and yielding more consistent results. Techniques have been developed that allow one or two observers to set up multiple stations with small telescopes, video cameras, and timers, thereby recording many chords, even across a whole asteroid; some examples will be shown. Later this year, the first release of Gaia data will allow us to greatly improve the vast star catalog that we use for both predicting and analyzing these events. Although the first asteroidal data must wait until the fourth Gaia release, before then we can greatly improve the orbits of asteroids that have occulted three or more stars in the past, so that we can begin computing the paths of future occultations by them to an accuracy of a few kilometers. In a couple of years, we will be able to realistically predict one to two orders of magnitude more events than we can now, allowing efforts to be concentrated on the smaller objects of highest scientific interest, including some comets.
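    Each timed chord the abstract mentions converts to a length by multiplying the star's disappearance-to-reappearance interval at one station by the asteroid shadow's ground velocity; many such chords together trace the asteroid's silhouette. A minimal sketch (function name and units are illustrative):

    ```python
    def chord_length_km(t_disappear_s, t_reappear_s, shadow_velocity_km_s):
        """Length of one occultation chord: the duration the star stayed
        hidden at a station, times the shadow's ground velocity."""
        return shadow_velocity_km_s * (t_reappear_s - t_disappear_s)
    ```

    For example, a 4.2-second occultation with a shadow moving at 20 km/s corresponds to an 84 km chord across the asteroid.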

  4. Computer programming for generating visual stimuli.

    PubMed

    Bukhari, Farhan; Kurylo, Daniel D

    2008-02-01

    Critical to vision research is the generation of visual displays with precise control over stimulus metrics. Generating stimuli often requires adapting commercial software or developing specialized software for specific research applications. To facilitate this process, we provide an overview that allows nonexpert users to generate and customize stimuli for vision research. We first review relevant hardware and software considerations, to allow selection of the display hardware, operating system, programming language, and graphics packages most appropriate for specific research applications. We then describe the framework of a generic computer program that can be adapted for a broad range of experimental applications. Stimuli are generated in the context of trial events, allowing the display of text messages, the monitoring of subject responses and reaction times, and the inclusion of contingency algorithms. This approach allows direct control and management of computer-generated visual stimuli while exploiting the full capabilities of modern hardware and software systems. The flowchart and source code for the stimulus-generating program may be downloaded from www.psychonomic.org/archive.
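    The trial-event framework described above reduces to a loop that presents a stimulus, polls for a response, and records the reaction time. A framework-neutral sketch with hypothetical caller-supplied callbacks (no display library is used here, unlike the article's actual program):

    ```python
    import time

    def run_trial(draw_stimulus, poll_response, timeout_s=2.0):
        """One trial: present a stimulus via a caller-supplied callback,
        then poll for a response until timeout, returning the response
        and the measured reaction time in seconds."""
        draw_stimulus()
        start = time.monotonic()
        while time.monotonic() - start < timeout_s:
            response = poll_response()   # None means "no response yet"
            if response is not None:
                return response, time.monotonic() - start
        return None, timeout_s           # timed out with no response
    ```

    Contingency algorithms of the kind the abstract mentions would sit around this loop, e.g. choosing the next trial's stimulus based on the returned response and reaction time.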

  5. pV3-Gold Visualization Environment for Computer Simulations

    NASA Technical Reports Server (NTRS)

    Babrauckas, Theresa L.

    1997-01-01

    A new visualization environment, pV3-Gold, can be used during and after a computer simulation to extract and visualize the physical features in the results. This environment, which is an extension of the pV3 visualization environment developed at the Massachusetts Institute of Technology with guidance and support by researchers at the NASA Lewis Research Center, features many tools that allow users to display data in various ways.

  6. Changes in Visual/Spatial and Analytic Strategy Use in Organic Chemistry with the Development of Expertise

    ERIC Educational Resources Information Center

    Vlacholia, Maria; Vosniadou, Stella; Roussos, Petros; Salta, Katerina; Kazi, Smaragda; Sigalas, Michael; Tzougraki, Chryssa

    2017-01-01

    We present two studies that investigated the adoption of visual/spatial and analytic strategies by individuals at different levels of expertise in the area of organic chemistry, using the Visual Analytic Chemistry Task (VACT). The VACT allows the direct detection of analytic strategy use without drawing inferences about underlying mental…

  7. Spatial Visualization by Realistic 3D Views

    ERIC Educational Resources Information Center

    Yue, Jianping

    2008-01-01

    In this study, the popular Purdue Spatial Visualization Test-Visualization by Rotations (PSVT-R) in isometric drawings was recreated with CAD software that allows 3D solid modeling and rendering to provide more realistic pictorial views. Both the original and the modified PSVT-R tests were given to students and their scores on the two tests were…

  8. Visualizing Time: How Linguistic Metaphors Are Incorporated into Displaying Instruments in the Process of Interpreting Time-Varying Signals

    ERIC Educational Resources Information Center

    Garcia-Belmonte, Germà

    2017-01-01

    Spatial visualization is a well-established topic of education research that has allowed improving science and engineering students' skills on spatial relations. Connections have been established between visualization as a comprehension tool and instruction in several scientific fields. Learning about dynamic processes mainly relies upon static…

  9. Qualitative Differences in the Representation of Abstract versus Concrete Words: Evidence from the Visual-World Paradigm

    ERIC Educational Resources Information Center

    Dunabeitia, Jon Andoni; Aviles, Alberto; Afonso, Olivia; Scheepers, Christoph; Carreiras, Manuel

    2009-01-01

    In the present visual-world experiment, participants were presented with visual displays that included a target item that was a semantic associate of an abstract or a concrete word. This manipulation allowed us to test a basic prediction derived from the qualitatively different representational framework that supports the view of different…

  10. The Right Hemisphere Advantage in Visual Change Detection Depends on Temporal Factors

    ERIC Educational Resources Information Center

    Spotorno, Sara; Faure, Sylvane

    2011-01-01

    What accounts for the Right Hemisphere (RH) functional superiority in visual change detection? An original task which combines one-shot and divided visual field paradigms allowed us to direct change information initially to the RH or the Left Hemisphere (LH) by deleting, respectively, an object included in the left or right half of a scene…

  11. Preparing Content-Rich Learning Environments with VPython and Excel, Controlled by Visual Basic for Applications

    ERIC Educational Resources Information Center

    Prayaga, Chandra

    2008-01-01

    A simple interface between VPython and Microsoft (MS) Office products such as Word and Excel, controlled by Visual Basic for Applications, is described. The interface allows the preparation of content-rich, interactive learning environments by taking advantage of the three-dimensional (3D) visualization capabilities of VPython and the GUI…

  12. A Visual Galaxy Classification Interface and its Classroom Application

    NASA Astrophysics Data System (ADS)

    Kautsch, Stefan J.; Phung, Chau; VanHilst, Michael; Castro, Victor H

    2014-06-01

    Galaxy morphology is an important topic in modern astronomy for understanding questions concerning the evolution and formation of galaxies and their dark matter content. In order to engage students in exploring galaxy morphology, we developed a web-based, graphical interface that allows students to visually classify galaxy images according to various morphological types. The website is designed with HTML5, JavaScript, PHP, and a MySQL database. The classification interface provides hands-on research experience and training for students and interested clients, and allows them to contribute to studies of galaxy morphology. We present the first results of a pilot study and compare the visually classified types from our interface with those from automated classification routines.
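
    As a rough illustration of the comparison step (visual vs. automated types), the sketch below computes an overall agreement rate and a confusion tally; the morphological labels and the automated routine are hypothetical stand-ins, not the study's data.

```python
from collections import Counter

# Compare visual classifications against an automated routine:
# overall agreement fraction plus per-pair confusion counts.

def classification_agreement(visual, automated):
    """Return (agreement_fraction, Counter of (visual, automated) pairs)."""
    assert len(visual) == len(automated)
    confusion = Counter(zip(visual, automated))
    agree = sum(v == a for v, a in zip(visual, automated))
    return agree / len(visual), confusion

# Illustrative labels for four galaxies:
visual    = ["spiral", "elliptical", "spiral", "irregular"]
automated = ["spiral", "spiral",     "spiral", "irregular"]
rate, confusion = classification_agreement(visual, automated)
# rate == 0.75; the one disagreement is ("elliptical", "spiral")
```

The off-diagonal entries of the confusion tally show which morphological types the automated routine systematically mislabels relative to human classifiers.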

  13. GODIVA2: interactive visualization of environmental data on the Web.

    PubMed

    Blower, J D; Haines, K; Santokhee, A; Liu, C L

    2009-03-13

    GODIVA2 is a dynamic website that provides visual access to several terabytes of physically distributed, four-dimensional environmental data. It allows users to explore large datasets interactively without the need to install new software or download and understand complex data. Through the use of open international standards, GODIVA2 maintains a high level of interoperability with third-party systems, allowing diverse datasets to be mutually compared. Scientists can use the system to search for features in large datasets and to diagnose the output from numerical simulations and data processing algorithms. Data providers around Europe have adopted GODIVA2 as an INSPIRE-compliant dynamic quick-view system for providing visual access to their data.

  14. A method to determine the impact of reduced visual function on nodule detection performance.

    PubMed

    Thompson, J D; Lança, C; Lança, L; Hogg, P

    2017-02-01

    In this study we aim to validate a method for assessing reduced visual function and observer performance concurrently within a nodule detection task. Three consultant radiologists completed a nodule detection task under three conditions: without visual defocus (0.00 Dioptres; D) and with two different magnitudes of visual defocus (-1.00 D and -2.00 D). Defocus was applied with lenses, and visual function was assessed prior to each image evaluation. Observers evaluated the same cases on each occasion; these comprised 50 abnormal cases containing 1-4 simulated nodules (5, 8, 10 and 12 mm spherical diameter, 100 HU) placed within a phantom, and 25 normal cases (images containing no nodules). Data were collected under the free-response paradigm and analysed using RJafroc. A difference in nodule detection performance would be considered significant at p < 0.05. All observers had acceptable visual function prior to beginning the nodule detection task. Visual acuity was reduced to an unacceptable level for two observers when defocussed to -1.00 D and for one observer when defocussed to -2.00 D. Stereoacuity was unacceptable for one observer when defocussed to -2.00 D. Despite unsatisfactory visual function in the presence of defocus, we were unable to find a statistically significant difference in nodule detection performance (F(2,4) = 3.55, p = 0.130). A method to assess visual function and observer performance concurrently is proposed. In this pilot evaluation we were unable to detect any difference in nodule detection performance when using lenses to reduce visual function. Copyright © 2016 The College of Radiographers. Published by Elsevier Ltd. All rights reserved.
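
    The optical effect of the defocus lenses can be approximated with textbook geometric optics: the angular diameter of the retinal blur disc is roughly the pupil diameter (in metres) times the defocus (in dioptres), in radians. This is a back-of-the-envelope sketch with an assumed pupil size, not a calculation from the study.

```python
import math

# Geometric-optics estimate of the blur produced by a defocus lens:
# blur disc angle (rad) ~ pupil diameter (m) * defocus (dioptres).
# The 3 mm pupil is an illustrative assumption, not a measured value.

def blur_arcmin(defocus_dioptres: float, pupil_mm: float = 3.0) -> float:
    """Approximate angular diameter of the defocus blur disc, in arcmin."""
    blur_rad = (pupil_mm / 1000.0) * abs(defocus_dioptres)
    return math.degrees(blur_rad) * 60.0

# -1.00 D with a 3 mm pupil blurs each point to roughly 10 arcmin,
# large compared with the ~1 arcmin grain of normal visual acuity.
```

This makes the paper's point quantitative: even -1.00 D of defocus degrades acuity well below normal, yet the nodule detection results remained statistically unchanged.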

  15. The use of intonation for turn anticipation in observed conversations without visual signals as source of information

    PubMed Central

    Keitel, Anne; Daum, Moritz M.

    2015-01-01

    The anticipation of a speaker's next turn is a key element of successful conversation. This can be achieved using a multitude of cues. In natural conversation, the most important cue for adults to anticipate the end of a turn (and therefore the beginning of the next turn) is the semantic and syntactic content. In addition, prosodic cues, such as intonation, or visual signals that occur before a speaker starts speaking (e.g., opening the mouth) help to identify the beginning and the end of a speaker's turn. Early in life, prosodic cues seem to be more important than in adulthood. For example, it was previously shown that 3-year-old children anticipated more turns in observed conversations when intonation was available than when it was not, and this beneficial effect was present neither in younger children nor in adults (Keitel et al., 2013). In the present study, we investigated this effect in greater detail. Videos of conversations between puppets with either normal or flattened intonation were presented to children (1-year-olds and 3-year-olds) and adults. The use of puppets allowed the control of visual signals: the verbal signals (speech) started at exactly the same time as the visual signals (mouth opening). With respect to the children, our findings replicate the results of the previous study: 3-year-olds anticipated more turns with normal intonation than with flattened intonation, whereas 1-year-olds did not show this effect. In contrast to our previous findings, the adults showed the same intonation effect as the 3-year-olds. This suggests that adults' cue use varies depending on the characteristics of a conversation. Our results further support the notion that the cues used to anticipate conversational turns differ in development. PMID:25713548

  16. Predictive Coding: A Possible Explanation of Filling-In at the Blind Spot

    PubMed Central

    Raman, Rajani; Sarkar, Sandip

    2016-01-01

    Filling-in at the blind spot is a perceptual phenomenon in which the visual system fills the informational void, which arises due to the absence of retinal input corresponding to the optic disc, with surrounding visual attributes. It is known that during filling-in, nonlinear neural responses are observed in early visual areas that correlate with the percept, but knowledge of the underlying neural mechanism for filling-in at the blind spot is far from complete. In this work, we attempted to present a fresh perspective on the computational mechanism of the filling-in process within the framework of hierarchical predictive coding, which provides a functional explanation for a range of neural responses in the cortex. We simulated a three-level hierarchical network and observed its response while stimulating the network with different bar stimuli across the blind spot. We found that the predictive-estimator neurons that represent the blind spot in primary visual cortex exhibit an elevated nonlinear response when the bar stimulated both sides of the blind spot. Using a generative model, we also show that these responses represent filling-in completion. All these results are consistent with the findings of psychophysical and physiological studies. We also demonstrate that the tolerance in filling-in qualitatively matches the experimental findings for non-aligned bars. We discuss this phenomenon in the predictive coding paradigm and show that all our results can be explained by taking into account the efficient coding of natural images, along with feedback and feed-forward connections that allow priors and predictions to co-evolve to arrive at the best prediction. These results suggest that the filling-in process may be a manifestation of the general computational principle of hierarchical predictive coding of natural images. PMID:26959812
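
    A single level of the hierarchical predictive-coding scheme this work builds on (Rao & Ballard style) can be sketched as a gradient update of an estimator toward explaining its input; the weights and input below are toy values, not the paper's three-level network.

```python
# Minimal one-level predictive-coding sketch: an estimator r predicts the
# input x through generative weights U, and is driven by the prediction
# error x - U r until the prediction matches the input. Toy values only.

def predictive_coding_step(r, U, x, lr=0.1):
    """One gradient step of estimator r toward explaining input x."""
    prediction = [sum(U[i][j] * r[j] for j in range(len(r)))
                  for i in range(len(x))]
    error = [x[i] - prediction[i] for i in range(len(x))]
    # r moves along U^T * error, reducing the squared prediction error
    new_r = [r[j] + lr * sum(U[i][j] * error[i] for i in range(len(x)))
             for j in range(len(r))]
    return new_r, sum(e * e for e in error)

U = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]   # generative weights (toy)
x = [1.0, 0.5, 0.75]                        # input, e.g. bar flanking a gap
r = [0.0, 0.0]
errs = []
for _ in range(50):
    r, err = predictive_coding_step(r, U, x)
    errs.append(err)
# the prediction error shrinks as r converges on an explanation of x
```

In the hierarchical version, each level's estimator additionally receives top-down predictions, which is what lets an interpolated "filled-in" response emerge at units with no direct input.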

  17. Simulation services and analysis tools at the CCMC to study multi-scale structure and dynamics of Earth's magnetopause

    NASA Astrophysics Data System (ADS)

    Kuznetsova, M. M.; Liu, Y. H.; Rastaetter, L.; Pembroke, A. D.; Chen, L. J.; Hesse, M.; Glocer, A.; Komar, C. M.; Dorelli, J.; Roytershteyn, V.

    2016-12-01

    The presentation will provide an overview of new tools, services and models implemented at the Community Coordinated Modeling Center (CCMC) to facilitate analysis of MMS dayside results. We will provide updates on the implementation of Particle-in-Cell (PIC) simulations at the CCMC and on opportunities for on-line visualization and analysis of results of PIC simulations of asymmetric magnetic reconnection for different guide fields and boundary conditions. Fields, plasma parameters, and particle distribution moments, as well as particle distribution functions calculated in selected regions of the vicinity of reconnection sites, can be analyzed through the web-based interactive visualization system. In addition, there are options to request distribution functions in user-selected regions of interest, to fly through simulated magnetic reconnection configurations, and to map distributions to facilitate comparisons with observations. A broad collection of global magnetosphere models hosted at the CCMC provides the opportunity to put MMS observations and local PIC simulations into global context. We recently implemented the RECON-X post-processing tool (Glocer et al., 2016), which allows users to determine the location of the separator surface around closed field lines and between open field lines and solar wind field lines. The tool also finds the separatrix line where the two surfaces touch, and the positions of magnetic nulls. The surfaces and the separatrix line can be visualized relative to satellite positions in the dayside magnetosphere using an interactive HTML-5 visualization for each time step processed. To validate global magnetosphere models' capability to simulate the locations of dayside magnetosphere boundaries, we will analyze the proximity of MMS to simulated separatrix locations for a set of MMS diffusion region crossing events.

  18. An electrocorticographic BCI using code-based VEP for control in video applications: a single-subject study

    PubMed Central

    Kapeller, Christoph; Kamada, Kyousuke; Ogawa, Hiroshi; Prueckl, Robert; Scharinger, Josef; Guger, Christoph

    2014-01-01

    A brain-computer interface (BCI) allows the user to control a device or software with brain activity. Many BCIs rely on visual stimuli with constant stimulation cycles that elicit steady-state visual evoked potentials (SSVEP) in the electroencephalogram (EEG). This EEG response can be generated with an LED or a computer screen flashing at a constant frequency, and similar EEG activity can be elicited with pseudo-random stimulation sequences on a screen (code-based BCI). Using electrocorticography (ECoG) instead of EEG promises higher spatial and temporal resolution and leads to more dominant evoked potentials due to visual stimulation. This work is focused on BCIs based on visual evoked potentials (VEP) and their capability as a continuous control interface for the augmentation of video applications. One 35-year-old female subject with implanted subdural grids participated in the study. The task was to select one out of four visual targets, while each was flickering with a code sequence. After a calibration run including 200 code sequences, a linear classifier was used during an evaluation run to identify the selected visual target based on the generated code-based VEPs over 20 trials. Multiple ECoG buffer lengths were tested, and the subject reached a mean online classification accuracy of 99.21% for a window length of 3.15 s. Finally, the subject performed an unsupervised free run in combination with visual feedback of the current selection. Additionally, an algorithm was implemented to suppress false-positive selections, which allowed the subject to start and stop the BCI at any time. The code-based BCI system attained very high online accuracy, which makes this approach very promising for control applications where a continuous control signal is needed. PMID:25147509
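
    The code-based selection principle (not the authors' actual linear classifier) can be illustrated by correlating a buffered response against each target's code template and picking the best match; the templates and response below are toy values, not ECoG data.

```python
# Toy sketch of code-based VEP target identification: each of four targets
# flickers with its own pseudo-random code, and the buffered response is
# assigned to the target whose code template it correlates with best.

def pearson(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def classify(response, templates):
    """Index of the code template most correlated with the response."""
    scores = [pearson(response, t) for t in templates]
    return max(range(len(templates)), key=scores.__getitem__)

# Four illustrative stimulation codes and a noisy copy of code 0:
templates = [[1, 0, 1, 1, 0, 0], [0, 1, 1, 0, 1, 0],
             [1, 1, 0, 0, 0, 1], [0, 0, 0, 1, 1, 1]]
response = [0.9, 0.1, 1.1, 0.8, -0.1, 0.2]
# classify(response, templates) -> 0
```

A false-positive gate like the one described above could then require the winning correlation to exceed a threshold before a selection is emitted, letting the user idle without triggering commands.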

  19. 3D visualization of ultra-fine ICON climate simulation data

    NASA Astrophysics Data System (ADS)

    Röber, Niklas; Spickermann, Dela; Böttinger, Michael

    2016-04-01

    Advances in high performance computing and model development allow the simulation of finer and more detailed climate experiments. The new ICON model is based on an unstructured triangular grid and can be used for a wide range of applications, ranging from global coupled climate simulations down to very detailed and high resolution regional experiments. It consists of an atmospheric and an oceanic component and scales very well to high numbers of cores. This allows us to conduct very detailed climate experiments with ultra-fine resolutions. ICON is jointly developed in partnership with DKRZ by the Max Planck Institute for Meteorology and the German Weather Service. This presentation discusses our current workflow for analyzing and visualizing this high resolution data. The ICON model has been used for eddy-resolving (<10 km) ocean simulations, as well as for ultra-fine cloud-resolving (120 m) atmospheric simulations. This results in very large 3D time-dependent multivariate data that need to be displayed and analyzed. We have developed specific plugins for the freely available visualization software ParaView and Vapor, which allow us to read and handle data at this scale. Within ParaView, we can additionally compare prognostic variables with performance data side by side to investigate the performance and scalability of the model. With the simulation running in parallel on several hundred nodes, an equal load balance is imperative. In our presentation we show visualizations of high-resolution ICON oceanographic and HDCP2 atmospheric simulations that were created using ParaView and Vapor. Furthermore, we discuss our current efforts to improve our visualization capabilities, thereby exploring the potential of regular in-situ visualization as well as of in-situ compression / post visualization.

  20. Cosmic cookery: making a stereoscopic 3D animated movie

    NASA Astrophysics Data System (ADS)

    Holliman, Nick; Baugh, Carlton; Frenk, Carlos; Jenkins, Adrian; Froner, Barbara; Hassaine, Djamel; Helly, John; Metcalfe, Nigel; Okamoto, Takashi

    2006-02-01

    This paper describes our experience making a short stereoscopic movie visualizing the development of structure in the universe during the 13.7 billion years from the Big Bang to the present day. Aimed at a general audience for the Royal Society's 2005 Summer Science Exhibition, the movie illustrates how the latest cosmological theories based on dark matter and dark energy are capable of producing structures as complex as spiral galaxies, and allows the viewer to directly compare observations from the real universe with theoretical results. 3D is an inherent feature of the cosmology data sets, and stereoscopic visualization provides a natural way to present the images to the viewer, in addition to allowing researchers to visualize these vast, complex data sets. The presentation of the movie used passive, linearly polarized projection onto a 2 m wide screen, but it was also required to play back on a Sharp RD3D display and in anaglyph projection at venues without dedicated stereoscopic display equipment. Additionally, lenticular prints were made from key images in the movie. We discuss the following technical challenges during the stereoscopic production process: 1) controlling the depth presentation, 2) editing the stereoscopic sequences, and 3) generating compressed movies in display-specific formats. We conclude that the generation of high quality stereoscopic movie content using desktop tools and equipment is feasible. This does require careful quality control and manual intervention, but we believe these overheads are worthwhile when presenting inherently 3D data, as the result is significantly increased impact and better understanding of complex 3D scenes.

  1. Evolution of crossmodal reorganization of the voice area in cochlear-implanted deaf patients.

    PubMed

    Rouger, Julien; Lagleyre, Sébastien; Démonet, Jean-François; Fraysse, Bernard; Deguine, Olivier; Barone, Pascal

    2012-08-01

    Psychophysical and neuroimaging studies in both animal and human subjects have clearly demonstrated that cortical plasticity following sensory deprivation leads to a brain functional reorganization that favors the spared modalities. In postlingually deaf patients, the use of a cochlear implant (CI) allows a recovery of the auditory function, which will probably counteract the cortical crossmodal reorganization induced by hearing loss. To study the dynamics of such reversed crossmodal plasticity, we designed a longitudinal neuroimaging study involving the follow-up of 10 postlingually deaf adult CI users engaged in a visual speechreading task. While speechreading activates Broca's area in normally hearing subjects (NHS), the activity level elicited in this region in CI patients is abnormally low and increases progressively with post-implantation time. Furthermore, speechreading in CI patients induces abnormal crossmodal activations in right anterior regions of the superior temporal cortex normally devoted to processing human voice stimuli (temporal voice-sensitive areas-TVA). These abnormal activity levels diminish with post-implantation time and tend towards the levels observed in NHS. First, our study revealed that the neuroplasticity after cochlear implantation involves not only auditory but also visual and audiovisual speech processing networks. Second, our results suggest that during deafness, the functional links between cortical regions specialized in face and voice processing are reallocated to support speech-related visual processing through cross-modal reorganization. Such reorganization allows a more efficient audiovisual integration of speech after cochlear implantation. These compensatory sensory strategies are later completed by the progressive restoration of the visuo-audio-motor speech processing loop, including Broca's area. Copyright © 2011 Wiley Periodicals, Inc.

  2. Toward Rigorous Parameterization of Underconstrained Neural Network Models Through Interactive Visualization and Steering of Connectivity Generation

    PubMed Central

    Nowke, Christian; Diaz-Pier, Sandra; Weyers, Benjamin; Hentschel, Bernd; Morrison, Abigail; Kuhlen, Torsten W.; Peyser, Alexander

    2018-01-01

    Simulation models in many scientific fields can have non-unique solutions or unique solutions which can be difficult to find. Moreover, in evolving systems, unique final state solutions can be reached by multiple different trajectories. Neuroscience is no exception. Often, neural network models are subject to parameter fitting to obtain desirable output comparable to experimental data. Parameter fitting without sufficient constraints and a systematic exploration of the possible solution space can lead to conclusions valid only around local minima or around non-minima. To address this issue, we have developed an interactive tool for visualizing and steering parameters in neural network simulation models. In this work, we focus particularly on connectivity generation, since finding suitable connectivity configurations for neural network models constitutes a complex parameter search scenario. The development of the tool has been guided by several use cases: the tool allows researchers to steer the parameters of the connectivity generation during the simulation, thus quickly growing networks composed of multiple populations with a targeted mean activity. The flexibility of the software allows scientists to explore other connectivity and neuron variables apart from the ones presented as use cases. With this tool, we enable an interactive exploration of parameter spaces, foster a better understanding of neural network models, and grapple with the crucial problem of non-unique network solutions and trajectories. In addition, we observe a reduction in turnaround times for the assessment of these models, due to interactive visualization while the simulation is computed. PMID:29937723

  3. Comparative visualization of genetic and physical maps with Strudel.

    PubMed

    Bayer, Micha; Milne, Iain; Stephen, Gordon; Shaw, Paul; Cardle, Linda; Wright, Frank; Marshall, David

    2011-05-01

    Data visualization can play a key role in comparative genomics, for example, underpinning the investigation of conserved synteny patterns. Strudel is a desktop application that allows users to easily compare both genetic and physical maps interactively and efficiently. It can handle large datasets from several genomes simultaneously, and allows all-by-all comparisons between these. Installers for Strudel are available for Windows, Linux, Solaris and Mac OS X at http://bioinf.scri.ac.uk/strudel/.

  4. Electronic-Theater 2001: Visions of Our Planet's Atmosphere, Land and Oceans

    NASA Technical Reports Server (NTRS)

    Hasler, Authur; Starr, David OC. (Technical Monitor)

    2001-01-01

    The NASA/NOAA/AMS Electronic Theater presents Earth science observations and visualizations in a historical perspective. Fly in from outer space to Madison, Wisconsin, and the Monona Terrace Center. Drop in on the Kennedy Space Center and Park City, Utah, site of the 2002 Olympics, using 1 m IKONOS "Spy Satellite" data. Go back to the early weather satellite images from the 1960s pioneered by UW scientists, and see them contrasted with the latest US and international global satellite weather movies, including hurricanes & tornadoes. See the latest spectacular images from NASA/NOAA remote sensing missions like Terra, GOES, TRMM, SeaWiFS, and Landsat 7, visualized & explained. See how High Definition Television (HDTV) is revolutionizing the way we communicate science, in cooperation with the American Museum of Natural History in NYC. See dust storms in Africa and smoke plumes from fires in Mexico. See visualizations featured on Newsweek, TIME, National Geographic, and Popular Science covers & national & international network TV. New visualization tools allow us to roam & zoom through massive global images, e.g. Landsat tours of the US, Africa, & New Zealand showing desert and mountain geology as well as seasonal changes in vegetation. See animations of the polar ice packs and the motion of gigantic Antarctic icebergs from SeaWinds data. Spectacular new visualizations of the global atmosphere & oceans are shown. See massive dust storms sweeping across Africa. See vortices and currents in the global oceans that bring up the nutrients to feed tiny plankton and draw the fish, whales, and fishermen. See how the ocean blooms in response to these currents and El Niño/La Niña climate changes. The demonstration is interactively driven by an SGI Onyx II Graphics Supercomputer with four CPUs, 8 Gigabytes of RAM, and a Terabyte of disk, with five projectors on a giant IMAX-sized 18 x 72 ft screen. See the city lights, fishing fleets, gas flares, and bio-mass burning of the Earth at night observed by the "night-vision" DMSP military satellite.

  5. Visions of Our Planet's Atmosphere, Land and Oceans Electronic-Theater 2001

    NASA Technical Reports Server (NTRS)

    Hasler, A. F.; Einaudi, Franco (Technical Monitor)

    2001-01-01

    The NASA/NOAA/AMS Electronic Theater presents Earth science observations and visualizations in a historical perspective. Fly in from outer space to Fredericton, New Brunswick. Drop in on the Kennedy Space Center and Park City, Utah, site of the 2002 Olympics, using 1 m IKONOS "Spy Satellite" data. Go back to the early weather satellite images from the 1960s and see them contrasted with the latest US and international global satellite weather movies, including hurricanes & tornadoes. See the latest spectacular images from NASA/NOAA and Canadian remote sensing missions like Terra, GOES, TRMM, SeaWiFS, Landsat 7, and Radarsat, visualized & explained. See how High Definition Television (HDTV) is revolutionizing the way we communicate science, in cooperation with the American Museum of Natural History in NYC. See dust storms in Africa and smoke plumes from fires in Mexico. See visualizations featured on Newsweek, TIME, National Geographic, and Popular Science covers & national & international network TV. New visualization tools allow us to roam & zoom through massive global images, e.g. Landsat tours of the US, Africa, & New Zealand showing desert and mountain geology as well as seasonal changes in vegetation. See animations of the polar ice packs and the motion of gigantic Antarctic icebergs from SeaWinds data. Spectacular new visualizations of the global atmosphere & oceans are shown. See massive dust storms sweeping across Africa. See vortices and currents in the global oceans that bring up the nutrients to feed tiny plankton and draw the fish, whales, and fishermen. See how the ocean blooms in response to these currents and El Niño/La Niña climate changes. The demonstration is interactively driven by an SGI Onyx II Graphics Supercomputer with four CPUs, 8 Gigabytes of RAM, and a Terabyte of disk, with multiple projectors on a giant screen. See the city lights, fishing fleets, gas flares, and bio-mass burning of the Earth at night observed by the "night-vision" DMSP military satellite.

  6. EEG Theta Dynamics within Frontal and Parietal Cortices for Error Processing during Reaching Movements in a Prism Adaptation Study Altering Visuo-Motor Predictive Planning

    PubMed Central

    Bonfiglio, Luca; Minichilli, Fabrizio; Cantore, Nicoletta; Carboncini, Maria Chiara; Piccotti, Emily; Rossi, Bruno

    2016-01-01

    Modulation of frontal midline theta (fmθ) is observed during error commission, but little is known about the role of theta oscillations in correcting motor behaviours. We investigated the EEG activity of healthy participants executing a reaching task under variable degrees of prism-induced visuo-motor distortion and visual occlusion of the initial arm trajectory. This task introduces directional errors of different magnitudes. The discrepancy between predicted and actual movement directions (i.e. the error), at the time when visual feedback (hand appearance) became available, elicits a signal that triggers on-line movement correction. Analyses were performed on 25 EEG channels. For each participant, the median value of the angular error of all reaching trials was used to partition the EEG epochs into high- and low-error conditions. We computed event-related spectral perturbations (ERSP) time-locked either to visual feedback or to the onset of movement correction. ERSP time-locked to the onset of visual feedback showed that fmθ increased in the high- but not in the low-error condition, with an approximate time lag of 200 ms. Moreover, when single epochs were sorted by the degree of motor error, fmθ started to increase when a certain level of error was exceeded and then scaled with error magnitude. When ERSP were time-locked to the onset of movement correction, the fmθ increase anticipated this event with an approximate time lead of 50 ms. During successive trials, an error reduction was observed that was associated with indices of adaptation (i.e., aftereffects), suggesting the need to explore whether theta oscillations may facilitate learning. To our knowledge, this is the first study in which the EEG signal recorded during reaching movements was time-locked to the onset of the error visual feedback. This allowed us to conclude that theta oscillations, putatively generated by anterior cingulate cortex activation, are implicated in error processing in semi-naturalistic motor behaviours. PMID:26963919

  7. EEG Theta Dynamics within Frontal and Parietal Cortices for Error Processing during Reaching Movements in a Prism Adaptation Study Altering Visuo-Motor Predictive Planning.

    PubMed

    Arrighi, Pieranna; Bonfiglio, Luca; Minichilli, Fabrizio; Cantore, Nicoletta; Carboncini, Maria Chiara; Piccotti, Emily; Rossi, Bruno; Andre, Paolo

    2016-01-01

    Modulation of frontal midline theta (fmθ) is observed during error commission, but little is known about the role of theta oscillations in correcting motor behaviours. We investigated EEG activity of healthy participants executing a reaching task under variable degrees of prism-induced visuo-motor distortion and visual occlusion of the initial arm trajectory. This task introduces directional errors of different magnitudes. The discrepancy between predicted and actual movement directions (i.e. the error) at the time when visual feedback (hand appearance) becomes available elicits a signal that triggers on-line movement correction. Analyses were performed on 25 EEG channels. For each participant, the median value of the angular error across all reaching trials was used to partition the EEG epochs into high- and low-error conditions. We computed event-related spectral perturbations (ERSP) time-locked either to visual feedback or to the onset of movement correction. ERSP time-locked to the onset of visual feedback showed that fmθ increased in the high- but not in the low-error condition, with an approximate time lag of 200 ms. Moreover, when single epochs were sorted by the degree of motor error, fmθ started to increase once a certain level of error was exceeded and then scaled with error magnitude. When ERSP were time-locked to the onset of movement correction, the fmθ increase anticipated this event with an approximate time lead of 50 ms. Across successive trials, an error reduction was observed that was associated with indices of adaptation (i.e., aftereffects), suggesting the need to explore whether theta oscillations may facilitate learning. To our knowledge, this is the first study in which the EEG signal recorded during reaching movements was time-locked to the onset of the visual feedback of the error.
This allowed us to conclude that theta oscillations putatively generated by anterior cingulate cortex activation are implicated in error processing in semi-naturalistic motor behaviours.
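The median-split and ERSP computation described above can be sketched on synthetic data. Here `scipy.signal.spectrogram` stands in for whatever time-frequency decomposition the authors actually used, and all sizes (sampling rate, epoch length, trial count) are illustrative assumptions:

```python
import numpy as np
from scipy.signal import spectrogram

rng = np.random.default_rng(0)
fs = 250                       # sampling rate (Hz), illustrative
n_trials, n_samples = 40, 500  # 2 s epochs time-locked to visual feedback

# Synthetic single-channel epochs and per-trial angular errors (degrees)
epochs = rng.standard_normal((n_trials, n_samples))
errors = rng.uniform(0, 20, n_trials)

# Median split into high- and low-error conditions, as in the study
median_err = np.median(errors)
high = epochs[errors > median_err]
low = epochs[errors <= median_err]

def ersp(trials, fs, baseline_end=0.5):
    """Event-related spectral perturbation: trial-averaged spectrogram,
    expressed in dB relative to an early-epoch baseline window."""
    f, t, S = spectrogram(trials, fs=fs, nperseg=64, noverlap=48, axis=-1)
    power = S.mean(axis=0)                        # average over trials
    base = power[:, t < baseline_end].mean(axis=1, keepdims=True)
    return f, t, 10 * np.log10(power / base)

f, t, ersp_high = ersp(high, fs)
theta = (f >= 4) & (f < 8)                        # frontal-midline theta band
theta_course = ersp_high[theta].mean(axis=0)      # dB change over time
```

With real data, `theta_course` for the high-error condition would show the ~200 ms post-feedback theta increase the abstract reports.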

  8. WinTICS-24 --- A Telescope Control Interface for MS Windows

    NASA Astrophysics Data System (ADS)

    Hawkins, R. Lee

    1995-12-01

    WinTICS-24 is a telescope control system interface and observing assistant written in Visual Basic for MS Windows. It provides the ability to control a telescope and up to 3 other instruments via the serial ports on an IBM-PC compatible computer, all from one consistent user interface. In addition to telescope control, WinTICS contains an observing logbook, trouble log (which can automatically email its entries to a responsible person), lunar phase display, object database (which allows the observer to type in the name of an object and automatically slew to it), a time of minimum calculator for eclipsing binary stars, and an interface to the Guide CD-ROM for bringing up finder charts of the current telescope coordinates. Currently WinTICS supports control of DFM telescopes, but is easily adaptable to other telescopes and instrumentation.
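At the protocol level, the "type an object name, slew to it" workflow reduces to formatting a coordinate command for the serial port. A minimal sketch in Python — the `SLEW` command text and the `slew_command` helper are hypothetical illustrations, not the actual DFM control protocol:

```python
def sexagesimal(value, is_ra=False):
    """Format decimal hours (RA) or degrees (Dec) as HH:MM:SS.S / +DD:MM:SS.S."""
    sign = '-' if value < 0 else '+'
    v = abs(value)
    d = int(v)
    m = int((v - d) * 60)
    s = (v - d - m / 60) * 3600
    if is_ra:
        return f"{d:02d}:{m:02d}:{s:04.1f}"
    return f"{sign}{d:02d}:{m:02d}:{s:04.1f}"

def slew_command(ra_hours, dec_degrees):
    """Build a hypothetical ASCII slew command of the kind a serial-port
    telescope interface might transmit (illustrative, not DFM's format)."""
    return f"SLEW {sexagesimal(ra_hours, is_ra=True)} {sexagesimal(dec_degrees)}\r"
```

An object-database lookup would map a name to (RA, Dec) and pass the result to such a formatter before writing it to the serial port.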

  9. Neuromodulation and mitochondrial transport: live imaging in hippocampal neurons over long durations.

    PubMed

    Edelman, David B; Owens, Geoffrey C; Chen, Sigeng

    2011-06-17

    To understand the relationship between mitochondrial transport and neuronal function, it is critical to observe mitochondrial behavior in live cultured neurons for extended durations(1-3). This is now possible through the use of vital dyes and fluorescent proteins with which cytoskeletal components, organelles, and other structures in living cells can be labeled and then visualized via dynamic fluorescence microscopy. For example, in embryonic chicken sympathetic neurons, mitochondrial movement was characterized using the vital dye rhodamine 123(4). In another study, mitochondria were visualized in rat forebrain neurons by transfection of mitochondrially targeted eYFP(5). However, imaging of primary neurons over minutes, hours, or even days presents a number of issues. Foremost among these are: 1) maintenance of culture conditions such as temperature, humidity, and pH during long imaging sessions; 2) a strong, stable fluorescent signal to assure both the quality of acquired images and accurate measurement of signal intensity during image analysis; and 3) limiting exposure times during image acquisition to minimize photobleaching and avoid phototoxicity. Here, we describe a protocol that permits the observation, visualization, and analysis of mitochondrial movement in cultured hippocampal neurons with high temporal resolution and under optimal life support conditions. We have constructed an affordable stage-top incubator that provides good temperature regulation and atmospheric gas flow, and also limits the degree of media evaporation, assuring stable pH and osmolarity. This incubator is connected, via inlet and outlet hoses, to a standard tissue culture incubator, which provides constant humidity levels and an atmosphere of 5-10% CO2/air. This design offers a cost-effective alternative to significantly more expensive microscope incubators that don't necessarily assure the viability of cells over many hours or even days. 
To visualize mitochondria, we infect cells with a lentivirus encoding a red fluorescent protein that is targeted to the mitochondrion. This assures a strong and persistent signal, which, in conjunction with the use of a stable xenon light source, allows us to limit exposure times during image acquisition and all but precludes photobleaching and phototoxicity. Two injection ports on the top of the stage-top incubator allow the acute administration of neurotransmitters and other reagents intended to modulate mitochondrial movement. In sum, lentivirus-mediated expression of an organelle-targeted red fluorescent protein and the combination of our stage-top incubator, a conventional inverted fluorescence microscope, CCD camera, and xenon light source allow us to acquire time-lapse images of mitochondrial transport in living neurons over longer durations than those possible in studies deploying conventional vital dyes and off-the-shelf life support systems.

  10. Fast and slow readers of the Hebrew language show divergence in brain response ∼200 ms post stimulus: an ERP study.

    PubMed

    Korinth, Sebastian Peter; Breznitz, Zvia

    2014-01-01

    Higher N170 amplitudes to words and to faces were recently reported for faster readers of German. Since the shallow German orthography allows phonological recoding of single letters, the reported speed advantages might have their origin in especially well-developed visual processing skills of faster readers. In contrast to German, adult readers of Hebrew are forced to process letter chunks up to whole words. This dependence on more complex visual processing might have created ceiling effects for this skill. Therefore, the current study examined whether visual processing skills, as reflected by N170 amplitudes, also explain reading speed differences in the deep Hebrew orthography. Forty university students, native speakers of Hebrew without reading impairments, performed a lexical decision task (i.e., deciding whether a visually presented stimulus represents a real word or a pseudoword) and a face decision task (i.e., deciding whether a face was presented complete or with missing facial features) while their electroencephalogram was recorded from 64 scalp positions. In both tasks, stronger event-related potentials (ERPs) were observed for faster readers in time windows at about 200 ms. Unlike in previous studies, ERP waveforms in the relevant time windows did not correspond to N170 scalp topographies. The results support the notion of visual processing ability as an orthography-independent marker of reading proficiency, which advances our understanding of regular and impaired reading development.
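Extracting a mean amplitude in a fixed post-stimulus window (here around 200 ms) is the kind of ERP measure compared between faster and slower readers. A minimal sketch on a synthetic waveform — the window bounds and the `window_amplitude` helper are illustrative assumptions, not the study's exact analysis:

```python
import numpy as np

def window_amplitude(erp, times, t_start=0.17, t_end=0.23):
    """Mean ERP amplitude in a post-stimulus window (~200 ms here)."""
    mask = (times >= t_start) & (times < t_end)
    return erp[..., mask].mean(axis=-1)

# Illustrative: 100 ms pre- to 500 ms post-stimulus at 500 Hz,
# with a synthetic N170-like negative deflection peaking at 200 ms
times = np.arange(-0.1, 0.5, 0.002)
erp = -np.exp(-((times - 0.2) ** 2) / (2 * 0.02 ** 2))
amp = window_amplitude(erp, times)
```

Per-subject values of `amp` (one per condition) are what would then be correlated with reading speed.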

  11. Is the Theory of Mind deficit observed in visual paradigms in schizophrenia explained by an impaired attention toward gaze orientation?

    PubMed

    Roux, Paul; Forgeot d'Arc, Baudoin; Passerieux, Christine; Ramus, Franck

    2014-08-01

    Schizophrenia is associated with poor Theory of Mind (ToM), particularly in goal and belief attribution to others. It is also associated with abnormal gaze behaviors toward others: individuals with schizophrenia usually look less at others' faces and gaze, which are crucial epistemic cues that contribute to correct mental state inferences. This study tests the hypothesis that impaired ToM in schizophrenia might be related to a deficit in visual attention toward gaze orientation. We adapted a previous non-verbal ToM paradigm consisting of animated cartoons that allow the assessment of goal and belief attribution. In the true and false belief conditions, an object was displaced while an agent was either looking at it or away, respectively. Eye movements were recorded to quantify visual attention to gaze orientation (the proportion of time participants spent looking at the head of the agent while the target object changed locations). Twenty-nine patients with schizophrenia and 29 matched controls were tested. Compared to controls, patients looked significantly less at the agent's head and had lower performance in belief and goal attribution. Performance in belief and goal attribution increased significantly with the head-looking percentage. When the head-looking percentage was entered as a covariate, the group effect on belief and goal attribution performance was no longer significant. Patients' deficit on this visual ToM paradigm is thus entirely explained by decreased visual attention toward gaze. Copyright © 2014 Elsevier B.V. All rights reserved.
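The covariate logic here — a group difference that vanishes once gaze to the head is controlled for — can be illustrated with a toy regression. All numbers are made up for the sketch; in the extreme case where ToM performance is driven entirely by gaze, the group coefficient drops to zero when the covariate enters the model:

```python
import numpy as np

# Synthetic head-looking percentages: lower in patients, same
# within-group spread in both groups (purely illustrative)
gaze_ctrl = 60 + np.arange(29) % 5
gaze_pat = 40 + np.arange(29) % 5
gaze = np.concatenate([gaze_ctrl, gaze_pat]).astype(float)
group = np.concatenate([np.zeros(29), np.ones(29)])  # 1 = patient
tom = 0.05 * gaze                # ToM score driven entirely by gaze

X1 = np.column_stack([np.ones_like(gaze), group])        # group only
X2 = np.column_stack([np.ones_like(gaze), group, gaze])  # + covariate
b1, *_ = np.linalg.lstsq(X1, tom, rcond=None)
b2, *_ = np.linalg.lstsq(X2, tom, rcond=None)
# b1[1] shows a clear group effect; b2[1] is ~0 once gaze is included
```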

  12. Radiation Channels Close to a Plasmonic Nanowire Visualized by Back Focal Plane Imaging

    PubMed Central

    Hartmann, Nicolai; Piatkowski, Dawid; Ciesielski, Richard; Mackowski, Sebastian; Hartschuh, Achim

    2014-01-01

    We investigated the angular radiation patterns, a key characteristic of an emitting system, from individual silver nanowires decorated with rare-earth-ion-doped nanocrystals. Back focal plane radiation patterns of the nanocrystal photoluminescence after local two-photon excitation can be described by two emission channels: excitation of propagating surface plasmons in the nanowire followed by leakage radiation, and direct dipolar emission observed also in the absence of the nanowire. Theoretical modeling reproduces the observed radiation patterns, which depend strongly on the position of excitation along the nanowire. Our analysis allows us to estimate the branching ratio between the two emission channels and to determine the diameter-dependent surface plasmon quasi-momentum, both important parameters of emitter-plasmon structures. PMID:24131299

  13. Diderot: a Domain-Specific Language for Portable Parallel Scientific Visualization and Image Analysis.

    PubMed

    Kindlmann, Gordon; Chiw, Charisee; Seltzer, Nicholas; Samuels, Lamont; Reppy, John

    2016-01-01

    Many algorithms for scientific visualization and image analysis are rooted in the world of continuous scalar, vector, and tensor fields, but are programmed in low-level languages and libraries that obscure their mathematical foundations. Diderot is a parallel domain-specific language that is designed to bridge this semantic gap by providing the programmer with a high-level, mathematical programming notation that allows direct expression of mathematical concepts in code. Furthermore, Diderot provides parallel performance that takes advantage of modern multicore processors and GPUs. The high-level notation allows a concise and natural expression of the algorithms and the parallelism allows efficient execution on real-world datasets.

  14. Off-surface infrared flow visualization

    NASA Technical Reports Server (NTRS)

    Manuel, Gregory S. (Inventor); Obara, Clifford J. (Inventor); Daryabeigi, Kamran (Inventor); Alderfer, David W. (Inventor)

    1993-01-01

    A method for visualizing off-surface flows is provided. The method consists of releasing a gas with infrared absorbing and emitting characteristics into a fluid flow and imaging the flow with an infrared imaging system. This method allows for visualization of off-surface fluid flow in flight. Its novelty lies in an apparatus for flow visualization that is contained within the aircraft so as not to disrupt the airflow around the aircraft, is effective at various speeds and altitudes, and is longer-lasting than previous methods of flow visualization.

  15. Probabilistic Modeling and Visualization of the Flexibility in Morphable Models

    NASA Astrophysics Data System (ADS)

    Lüthi, M.; Albrecht, T.; Vetter, T.

    Statistical shape models, and in particular morphable models, have gained widespread use in computer vision, computer graphics and medical imaging. Researchers have started to build models of almost any anatomical structure in the human body. While these models provide a useful prior for many image analysis tasks, relatively little of the shape information represented by the morphable model is exploited. We propose a method for computing and visualizing the remaining flexibility when a part of the shape is fixed. Our method, which is based on Probabilistic PCA, not only leads to an approach for reconstructing the full shape from partial information, but also allows us to investigate and visualize the uncertainty of a reconstruction. To show the feasibility of our approach, we performed experiments on a statistical model of the human face and the femur bone. The visualization of the remaining flexibility allows for greater insight into the statistical properties of the shape.
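The "remaining flexibility" of a Gaussian shape model when part of the shape is fixed is the conditional covariance of the free coordinates given the fixed ones. A toy sketch using standard Gaussian conditioning on a PPCA covariance — dimensions, indices, and values are made up, and the paper's actual model details may differ:

```python
import numpy as np

rng = np.random.default_rng(1)
d, q = 6, 2                       # toy shape dimension, latent dimension
W = rng.standard_normal((d, q))
mu = np.zeros(d)
sigma2 = 0.1
C = W @ W.T + sigma2 * np.eye(d)  # PPCA model covariance: WW^T + sigma^2 I

fixed = np.array([0, 1])          # indices of the fixed part of the shape
free = np.array([2, 3, 4, 5])
x_fixed = np.array([1.0, -0.5])   # observed values of the fixed part

# Condition the Gaussian shape distribution on the fixed part
Caa = C[np.ix_(fixed, fixed)]
Cba = C[np.ix_(free, fixed)]
Cbb = C[np.ix_(free, free)]
K = Cba @ np.linalg.inv(Caa)
mean_free = mu[free] + K @ (x_fixed - mu[fixed])  # best reconstruction
cov_free = Cbb - K @ Cba.T                        # remaining flexibility

# Per-coordinate uncertainty, the quantity one would visualize on the shape
flexibility = np.sqrt(np.diag(cov_free))
```

`mean_free` plays the role of the reconstruction from partial information, and `cov_free` quantifies how much the unfixed part can still vary.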

  16. The perception of visual images encoded in musical form: a study in cross-modality information transfer.

    PubMed Central

    Cronly-Dillon, J; Persaud, K; Gregory, R P

    1999-01-01

    This study demonstrates the ability of blind (previously sighted) and blindfolded (sighted) subjects to reconstruct and identify a number of visual targets transformed into equivalent musical representations. Visual images are deconstructed through a process which selectively segregates different features of the image into separate packages. These are then encoded in sound and presented as a polyphonic musical melody which resembles a Baroque fugue with many voices, allowing subjects to analyse the component voices selectively in combination, or separately in sequence, in a manner which allows a subject to patch together and bind the different features of the object into a mental percept of a single recognizable entity. The visual targets used in this study included a variety of geometrical figures, simple high-contrast line drawings of man-made objects, and natural and urban scenes, translated into sound and presented to the subject in polyphonic musical form. PMID:10643086
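The general idea of a polyphonic image-to-sound code can be sketched as follows — this is a hedged illustration of one possible mapping, not the authors' actual encoding: each image row becomes one "voice", columns are scanned left to right as time steps, and an 'on' pixel sounds that voice's pitch:

```python
import numpy as np

def image_to_voices(img, base_midi=48, step=2):
    """Map a binary image (rows x cols) to per-voice note-event lists of
    (time_step, midi_pitch) pairs; higher image rows map to higher pitches.
    Illustrative encoding only."""
    n_rows, n_cols = img.shape
    voices = []
    for r in range(n_rows):
        pitch = base_midi + step * (n_rows - 1 - r)  # top row = highest pitch
        events = [(c, pitch) for c in range(n_cols) if img[r, c]]
        voices.append(events)
    return voices

# A diagonal line becomes a descending sequence spread across voices
img = np.eye(4, dtype=bool)
voices = image_to_voices(img)
```

Played together, the voices form the polyphonic texture; played separately, each voice exposes one feature package for the listener to bind mentally.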

  17. Single-case research design in pediatric psychology: considerations regarding data analysis.

    PubMed

    Cohen, Lindsey L; Feinstein, Amanda; Masuda, Akihiko; Vowles, Kevin E

    2014-03-01

    Single-case research allows for an examination of behavior and can demonstrate the functional relation between intervention and outcome in pediatric psychology. This review highlights key assumptions, methodological and design considerations, and options for data analysis. Single-case methodology and guidelines are reviewed with an in-depth focus on visual and statistical analyses. Guidelines allow for the careful evaluation of design quality and visual analysis. A number of statistical techniques have been introduced to supplement visual analysis, but to date, there is no consensus on their recommended use in single-case research design. Single-case methodology is invaluable for advancing pediatric psychology science and practice, and guidelines have been introduced to enhance the consistency, validity, and reliability of these studies. Experts generally agree that visual inspection is the optimal method of analysis in single-case design; however, statistical approaches are becoming increasingly evaluated and used to augment data interpretation.
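As one concrete example of a statistic used to supplement visual analysis in single-case designs, Nonoverlap of All Pairs (NAP) counts the proportion of baseline/treatment observation pairs showing improvement, with ties counted half. A sketch (the review itself does not endorse any single statistic):

```python
def nap(baseline, treatment, improvement='decrease'):
    """Nonoverlap of All Pairs: proportion of baseline/treatment pairs
    showing improvement (ties count 0.5). 1.0 = complete nonoverlap."""
    better = ties = 0
    for b in baseline:
        for t in treatment:
            if (t < b) if improvement == 'decrease' else (t > b):
                better += 1
            elif t == b:
                ties += 1
    return (better + 0.5 * ties) / (len(baseline) * len(treatment))
```

For a pain-reduction outcome, `nap([5, 6, 7], [2, 3, 4])` returns 1.0, i.e. every treatment observation improves on every baseline observation.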

  18. Visualizing Sea Level Rise with Augmented Reality

    NASA Astrophysics Data System (ADS)

    Kintisch, E. S.

    2013-12-01

    Looking Glass is an iPhone application that visualizes future scenarios of sea level rise in 3-D, overlaid on live camera imagery in situ. Using a technology known as augmented reality, the app allows a lay user to explore various scenarios of sea level rise through a visual interface, and then to see, in an immersive, dynamic way, how those scenarios would affect a real place. The first part of the experience activates users' cognitive, quantitative thinking, teaching them how global sea level rise, tides and storm surge contribute to flooding; the second allows an emotional response to a striking visual depiction of possible future catastrophe. This project represents a partnership between a science journalist, MIT, and the Rhode Island School of Design, and the talk will touch on lessons this project provides for structuring and executing such multidisciplinary efforts in future design projects.

  19. Navigation-supported diagnosis of the substantia nigra by matching midbrain sonography and MRI

    NASA Astrophysics Data System (ADS)

    Salah, Zein; Weise, David; Preim, Bernhard; Classen, Joseph; Rose, Georg

    2012-03-01

    Transcranial sonography (TCS) is a well-established neuroimaging technique that allows for visualizing several brainstem structures, including the substantia nigra, and aids the diagnosis and differential diagnosis of various movement disorders, especially Parkinsonian syndromes. However, proximate brainstem anatomy can hardly be recognized due to the limited image quality of B-scans. In this paper, a visualization system for the diagnosis of the substantia nigra is presented, which utilizes neuronavigated TCS to reconstruct tomographical slices from registered MRI datasets and visualizes them simultaneously with the corresponding TCS planes in real time. To generate MRI tomographical slices, the tracking data of the calibrated ultrasound probe are passed to an optimized slicing algorithm, which computes cross sections at arbitrary positions and orientations from the registered MRI dataset. The extracted MRI cross sections are finally fused with the region of interest from the ultrasound image. The system allows for the computation and visualization of slices at a near real-time rate. Preliminary tests show added value over pure sonographic imaging. The system also allows for reconstructing volumetric (3D) ultrasonic data of the region of interest, and thus contributes to enhancing the diagnostic yield of midbrain sonography.
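The slicing step — computing an MRI cross section at an arbitrary tracked pose — can be sketched with trilinear sampling. The `extract_slice` helper and its plane parameterization (a center point plus two in-plane axes, as a tracked probe pose would provide) are illustrative, not the authors' optimized algorithm:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def extract_slice(volume, center, u, v, size=32, spacing=1.0):
    """Sample an oblique slice from a 3-D volume. The plane is given by a
    center point and two orthonormal in-plane axes u, v. Trilinear
    interpolation via map_coordinates(order=1)."""
    c = np.asarray(center, float)
    u, v = np.asarray(u, float), np.asarray(v, float)
    r = (np.arange(size) - size / 2) * spacing
    # Volume coordinates of every pixel on the slice grid
    grid = (c[:, None, None]
            + u[:, None, None] * r[None, :, None]
            + v[:, None, None] * r[None, None, :])
    return map_coordinates(volume, grid.reshape(3, -1), order=1).reshape(size, size)

# Axis-aligned sanity check: slicing along (x, y) at fixed z reproduces
# the corresponding voxel plane of the volume
vol = np.arange(64 ** 3, dtype=float).reshape(64, 64, 64)
sl = extract_slice(vol, center=[32.0, 32.0, 10.0], u=[1, 0, 0], v=[0, 1, 0])
```

With a genuinely oblique pose (tilted `u`, `v`), the same call yields the arbitrary-orientation cross sections fused with the ultrasound image.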

  20. Generalization of color-difference formulas for any illuminant and any observer by assuming perfect color constancy in a color-vision model based on the OSA-UCS system.

    PubMed

    Oleari, Claudio; Melgosa, Manuel; Huertas, Rafael

    2011-11-01

    The most widely used color-difference formulas are based on color-difference data obtained under D65 or similar illumination and for a 10° visual field; i.e., these formulas hold true for the CIE 1964 observer adapted to the D65 illuminant. This work considers the psychometric color-vision model based on the Optical Society of America-Uniform Color Scales (OSA-UCS) system previously published by the first author [J. Opt. Soc. Am. A 21, 677 (2004); Color Res. Appl. 30, 31 (2005)], with the additional hypothesis that complete illuminant adaptation with perfect color constancy holds in the visual evaluation of color differences. In this way, a computational procedure is defined for color conversion between different illuminant adaptations, offering an alternative to current chromatic adaptation transforms. This color conversion also allows conversion between different observers, e.g., CIE 1964 and CIE 1931. The conversion is applied here to color-difference evaluation for any observer and any illuminant adaptation: the transformations convert tristimulus values related to any observer and illuminant adaptation to those related to the observer and illuminant adaptation assumed in the definition of the color-difference formulas, i.e., the CIE 1964 observer adapted to the D65 illuminant, after which the known color-difference formulas can be applied. Adaptations to the illuminants A, C, F11, D50, and Planckian and daylight illuminants at any color temperature, for the CIE 1931 and CIE 1964 observers, are considered as examples, and all the corresponding transformations are given for practical use.
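For contrast with the paper's approach, a standard chromatic adaptation transform of the kind it offers an alternative to can be sketched with von Kries scaling in the Bradford space. The matrix values are the published Bradford coefficients; the `adapt` function and the white points used below are a generic illustration, not the paper's OSA-UCS-based procedure:

```python
import numpy as np

# Bradford matrix (XYZ -> cone-like RGB responses)
M_BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                       [-0.7502,  1.7135,  0.0367],
                       [ 0.0389, -0.0685,  1.0296]])

def adapt(xyz, white_src, white_dst):
    """von Kries-style adaptation of tristimulus values from a source
    illuminant white point to a destination white point."""
    rgb_ws = M_BRADFORD @ white_src
    rgb_wd = M_BRADFORD @ white_dst
    scale = np.diag(rgb_wd / rgb_ws)            # per-channel gain
    M = np.linalg.inv(M_BRADFORD) @ scale @ M_BRADFORD
    return M @ xyz

# Example: D65 -> illuminant A white points (Y normalized to 1)
d65 = np.array([0.95047, 1.0, 1.08883])
ill_a = np.array([1.09850, 1.0, 0.35585])
xyz_a = adapt(d65, d65, ill_a)   # the source white maps to the target white
```

By construction the transform maps the source white point exactly onto the destination white point, and adapting back recovers the original stimulus.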
