Science.gov

Sample records for algorithm visualization technology

  1. Algorithm Visualization in Teaching Practice

    ERIC Educational Resources Information Center

    Törley, Gábor

    2014-01-01

    This paper presents the history of algorithm visualization (AV), highlighting teaching-methodology aspects. A combined two-group pedagogical experiment is also presented, which measured the efficiency of AV and its impact on abstract thinking. According to the results, students who learned with AV performed better in the experiment.

  2. Hypermedia and visual technology

    NASA Technical Reports Server (NTRS)

    Walker, Lloyd

    1990-01-01

    Applications of a codified professional practice that uses visual representations of the thoughts and ideas of a working group are reported in order to improve productivity, problem solving, and innovation. This visual technology process was developed under the auspices of General Foods as part of a multi-year study. The study resulted in the validation of this professional service as a way to use art and design to facilitate productivity and innovation and to define new opportunities. It was also used by NASA for planning Lunar/Mars exploration and by other companies for general business and advanced strategic planning, developing new product concepts, and litigation support. General Foods has continued to use the service for packaging innovation studies.

  3. Effects of visualization on algorithm comprehension

    NASA Astrophysics Data System (ADS)

    Mulvey, Matthew

    Computer science students are expected to learn and apply a variety of core algorithms that are an essential part of the field. Any one of these algorithms by itself is not necessarily extremely complex, but remembering the large variety of algorithms and the differences between them is challenging. To address this challenge, we present a novel algorithm visualization tool designed to enhance students' understanding of Dijkstra's algorithm by allowing them to discover the rules of the algorithm for themselves. It is hoped that a deeper understanding of the algorithm will help students correctly select, adapt, and apply the appropriate algorithm when presented with a problem to solve, and that what is learned here will be applicable to the design of other visualization tools designed to teach different algorithms. Our visualization tool is currently in the prototype stage, and this thesis discusses the pedagogical approach that informs its design, as well as the results of some initial usability testing. Finally, to clarify the direction for further development of the tool, four different variations of the prototype were implemented, and the instructional effectiveness of each was assessed by having a small sample of participants use the different versions of the prototype and then take a quiz to assess their comprehension of the algorithm.

  4. HEP visualization and video technology

    SciTech Connect

    Lebrun, P.; Swoboda, D.

    1994-12-31

    The use of scientific visualization for HEP analysis is briefly reviewed. The applications are highly interactive and very dynamic in nature. At Fermilab, E687, in collaboration with Visual Media Services, has produced a half-hour videotape demonstrating the capability of SGI-EXPLORER applied to a Dalitz analysis of charm decay. This short contribution describes the authors' experience with visualization and video technologies.

  5. Algorithm of contrast enhancement for visual document images with underexposure

    NASA Astrophysics Data System (ADS)

    Tian, Da-zeng; Hao, Yong; Ha, Ming-hu; Tian, Xue-dong; Ha, Yan

    2008-03-01

    A visual document image is an electronic image of newspapers, books, or magazines captured with a digital camera, digital camcorder, or similar device; acquiring it is more convenient than scanning. With the development of OCR technology, visual document images can be recognized by OCR. However, digital images are degraded by various factors during acquisition, processing, and transmission, and one of the main problems affecting image quality, leading to unpleasant pictures, is improper exposure to light. Preprocessing therefore becomes significant before recognition in order to obtain an image that satisfies recognition requirements. For low-contrast images with underexposure, a new object-enhancement algorithm based on separating the image background, tailored to the characteristics of visual document images, is proposed. The method first calculates the separation threshold and then processes foreground and background differently: the various gray values in the background are merged into a single gray value, whereas the contrast of the foreground is enhanced. The proposed algorithm was implemented in Visual C++ 6.0, and its results were compared with those of Otsu's method and histogram equalization. The experimental results show clearly that the algorithm adequately enhances the details of image objects, increases the recognition rate, and avoids blocking artifacts.
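
    As an illustration of the background/foreground scheme this abstract describes, here is a minimal Python/NumPy sketch; Otsu's threshold is used as a stand-in for the paper's separation threshold, and the specific background gray value and foreground stretching are assumptions, not the authors' implementation.

```python
import numpy as np

def otsu_threshold(img: np.ndarray) -> int:
    """Classic Otsu threshold on an 8-bit grayscale image (a stand-in for the
    paper's background/foreground separation threshold)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

def enhance_document(img: np.ndarray) -> np.ndarray:
    """Merge the (bright) paper background into one gray value and stretch the
    contrast of the (dark) foreground text, as the abstract describes.
    Assumes dark text on a lighter page."""
    t = otsu_threshold(img)
    out = img.astype(float).copy()
    out[img >= t] = 255                 # unify background gray values
    fg = img < t
    if fg.any():                        # linearly stretch foreground to [0, t]
        lo, hi = out[fg].min(), out[fg].max()
        out[fg] = (out[fg] - lo) / max(hi - lo, 1) * t
    return out.clip(0, 255).astype(np.uint8)
```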

  6. Visual Analytics Technology Transition Progress

    SciTech Connect

    Scholtz, Jean; Cook, Kristin A.; Whiting, Mark A.; Lemon, Douglas K.; Greenblatt, Howard

    2009-09-23

    The authors describe the transition process for visual analytic tools and contrast it with the transition process for more traditional software tools. Taking these differences into account, the paper describes a user-oriented approach to technology transition, including a discussion of key factors that should be considered and adapted to each situation. The progress made in transitioning visual analytic tools over the past five years is described, and the challenges that remain are enumerated.

  7. Learning sorting algorithms through visualization construction

    NASA Astrophysics Data System (ADS)

    Cetin, Ibrahim; Andrews-Larson, Christine

    2016-01-01

    Recent increased interest in computational thinking poses an important question to researchers: What are the best ways to teach fundamental computing concepts to students? Visualization is suggested as one way of supporting student learning. This mixed-method study aimed to (i) examine the effect of instruction in which students constructed visualizations on students' programming achievement and students' attitudes toward computer programming, and (ii) explore how this kind of instruction supports students' learning according to their self-reported experiences in the course. The study was conducted with 58 pre-service teachers who were enrolled in their second programming class. They expect to teach information technology and computing-related courses at the primary and secondary levels. An embedded experimental model was utilized as a research design. Students in the experimental group were given instruction that required students to construct visualizations related to sorting, whereas students in the control group viewed pre-made visualizations. After the instructional intervention, eight students from each group were selected for semi-structured interviews. The results showed that the intervention based on visualization construction resulted in significantly better acquisition of sorting concepts. However, there was no significant difference between the groups with respect to students' attitudes toward computer programming. Qualitative data analysis indicated that students in the experimental group constructed necessary abstractions through their engagement in visualization construction activities. The authors of this study argue that the students' active engagement in the visualization construction activities explains only one side of students' success. The other side can be explained through the instructional approach, constructionism in this case, used to design instruction. The conclusions and implications of this study can be used by researchers and

  8. Visual Analytics Science and Technology

    SciTech Connect

    Wong, Pak C.

    2007-03-01

    It is an honor to welcome you to the first theme issue of information visualization (IVS) dedicated entirely to the study of visual analytics. It all started from the establishment of the U.S. Department of Homeland Security (DHS) sponsored National Visualization and Analytics Center™ (NVAC™) at the Pacific Northwest National Laboratory (PNNL) in 2004. In 2005, under the leadership of NVAC, a team of the world’s best and brightest multidisciplinary scholars coauthored its first research and development (R&D) agenda Illuminating the Path, which defines the study as “the science of analytical reasoning facilitated by interactive visual interfaces.” Among the most exciting, challenging, and educational events developed since then was the first IEEE Symposium on Visual Analytics Science and Technology (VAST) held in Baltimore, Maryland in October 2006. This theme issue features seven outstanding articles selected from the IEEE VAST proceedings and a commentary article contributed by Jim Thomas, the director of NVAC, on the status and progress of the center.

  9. Scientific and Technical Visualization in Technology Education

    ERIC Educational Resources Information Center

    Ernst, Jeremy V.; Clark, Aaron C.

    2007-01-01

    In this article, the authors discuss Visualization in Technology Education (VisTE). VisTE units are designed to enhance students' knowledge of science, develop good visual and presentation skills, build understanding of emerging technologies, and, most of all, help with the integration of standards that promote technological literacy. Technological changes…

  10. Designing, Visualizing, and Discussing Algorithms within a CS 1 Studio Experience: An Empirical Study

    ERIC Educational Resources Information Center

    Hundhausen, Christopher D.; Brown, Jonathan L.

    2008-01-01

    Within the context of an introductory CS1 unit on algorithmic problem-solving, we are exploring the pedagogical value of a novel active learning activity--the "studio experience"--that actively engages learners with algorithm visualization technology. In a studio experience, student pairs are tasked with (a) developing a solution to an algorithm…

  11. Data Mining Technologies Inspired from Visual Principle

    NASA Astrophysics Data System (ADS)

    Xu, Zongben

    In this talk we review recent work by our group on data mining (DM) technologies derived from simulating visual principles. By viewing a DM problem as a cognition problem and treating a data set as an image with a light point located at each datum position, we developed a series of highly efficient algorithms for clustering, classification, and regression by mimicking visual principles. In pattern recognition, human eyes seem to possess a singular aptitude for grouping objects and finding important structure efficiently; thus, a DM algorithm that simulates the visual system may solve some basic problems in DM research. From this point of view, we proposed a new approach to data clustering that models the blurring effect of lateral retinal interconnections based on scale-space theory. In this approach, as the data image blurs, smaller light blobs merge into larger ones until the whole image becomes a single light blob at a sufficiently low level of resolution. By identifying each blob with a cluster, the blurring process generates a family of clusterings along the hierarchy. The proposed approach provides unique solutions to long-standing problems in clustering, such as cluster validity and sensitivity to initialization. We extended this approach to classification and regression problems by combining Weber's law from physiology with known facts about cell response classification. The resulting classification and regression algorithms are proven to be very efficient and address the problems of model selection and applicability to huge data sets in DM technologies. We finally applied a similar idea to the difficult parameter-setting problem in support vector machines (SVM). Viewing parameter setting as the problem of choosing a visual scale at which the global and local structures of a data set are preserved, and the difference between the two structures is maximized in the feature space, we derived a
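
    The scale-space clustering idea sketched above (clusters as light blobs that merge under increasing blur) can be illustrated with a short Python example; the rasterization grid, blur scales, and blob threshold below are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np
from scipy import ndimage

def blob_count_at_scale(points: np.ndarray, sigma: float, grid: int = 256) -> int:
    """Rasterize 2-D data points into an 'image', blur it at the given scale,
    and count the surviving light blobs.  Reading each blob as one cluster and
    sweeping sigma from small to large yields the clustering hierarchy the
    abstract describes (illustrative only, not the authors' exact algorithm)."""
    p = (points - points.min(0)) / (np.ptp(points, 0) + 1e-12)  # normalize to [0, 1]
    img = np.zeros((grid, grid))
    idx = np.clip((p * (grid - 1)).astype(int), 0, grid - 1)
    np.add.at(img, (idx[:, 1], idx[:, 0]), 1.0)     # one "light point" per datum
    blurred = ndimage.gaussian_filter(img, sigma)
    mask = blurred > 0.05 * blurred.max()            # crude blob segmentation
    _, n_blobs = ndimage.label(mask)
    return n_blobs

# Two well-separated point clouds: several blobs at fine scales, one at coarse scales.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(8, 1, (200, 2))])
print([blob_count_at_scale(data, s) for s in (4, 16, 64)])  # counts shrink toward 1
```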

  12. Algorithm Visualization System for Teaching Spatial Data Algorithms

    ERIC Educational Resources Information Center

    Nikander, Jussi; Helminen, Juha; Korhonen, Ari

    2010-01-01

    TRAKLA2 is a web-based learning environment for data structures and algorithms. The system delivers automatically assessed algorithm simulation exercises that are solved using a graphical user interface. In this work, we introduce a novel learning environment for spatial data algorithms, SDA-TRAKLA2, which has been implemented on top of the…

  13. Learning Sorting Algorithms through Visualization Construction

    ERIC Educational Resources Information Center

    Cetin, Ibrahim; Andrews-Larson, Christine

    2016-01-01

    Recent increased interest in computational thinking poses an important question to researchers: What are the best ways to teach fundamental computing concepts to students? Visualization is suggested as one way of supporting student learning. This mixed-method study aimed to (i) examine the effect of instruction in which students constructed…

  14. UFLIC: A Line Integral Convolution Algorithm for Visualizing Unsteady Flows

    NASA Technical Reports Server (NTRS)

    Shen, Han-Wei; Kao, David L.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    This paper presents an algorithm, UFLIC (Unsteady Flow LIC), to visualize vector data in unsteady flow fields. Using the Line Integral Convolution (LIC) as the underlying method, a new convolution algorithm is proposed that can effectively trace the flow's global features over time. The new algorithm consists of a time-accurate value depositing scheme and a successive feed-forward method. The value depositing scheme accurately models the flow advection, and the successive feed-forward method maintains the coherence between animation frames. Our new algorithm can produce time-accurate, highly coherent flow animations to highlight global features in unsteady flow fields. CFD scientists, for the first time, are able to visualize unsteady surface flows using our algorithm.
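
    For readers unfamiliar with LIC itself, the following Python sketch shows plain static LIC (white noise convolved along streamlines of a steady field); UFLIC's time-accurate value depositing and successive feed-forward steps are not reproduced here.

```python
import numpy as np

def lic(vx: np.ndarray, vy: np.ndarray, length: int = 20, seed: int = 0) -> np.ndarray:
    """Tiny static Line Integral Convolution sketch: convolve white noise along
    streamlines of a steady 2-D vector field.  UFLIC extends this idea with a
    time-accurate value-depositing scheme and a successive feed-forward method
    for unsteady fields; that machinery is not reproduced here."""
    h, w = vx.shape
    noise = np.random.default_rng(seed).random((h, w))
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for direction in (1.0, -1.0):            # trace both up- and downstream
                px, py = float(x), float(y)
                for _ in range(length):
                    i, j = int(round(py)), int(round(px))
                    if not (0 <= i < h and 0 <= j < w):
                        break
                    total += noise[i, j]
                    count += 1
                    mag = np.hypot(vx[i, j], vy[i, j]) + 1e-9
                    px += direction * vx[i, j] / mag  # unit step along the flow
                    py += direction * vy[i, j] / mag
            out[y, x] = total / max(count, 1)
    return out

# Example: a circular flow field yields the familiar swirling LIC texture.
ys, xs = np.mgrid[-1:1:64j, -1:1:64j]
texture = lic(-ys, xs)
```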

  15. Teaching Geography with 3-D Visualization Technology

    ERIC Educational Resources Information Center

    Anthamatten, Peter; Ziegler, Susy S.

    2006-01-01

    Technology that helps students view images in three dimensions (3-D) can support a broad range of learning styles. "Geo-Wall systems" are visualization tools that allow scientists, teachers, and students to project stereographic images and view them in 3-D. We developed and presented 3-D visualization exercises in several undergraduate courses.…

  16. Instructional Technology and Molecular Visualization

    ERIC Educational Resources Information Center

    Appling, Jeffrey R.; Peake, Lisa C.

    2004-01-01

    The effect of intervening use of molecular visualization software was tested on 73 first-year general chemistry students. Pretests and posttests included both traditional multiple-choice questions and model-building activities. Overall students improved after working with the software, although students performed less well on the model-building…

  17. Visualizing and improving the robustness of phase retrieval algorithms

    SciTech Connect

    Tripathi, Ashish; Leyffer, Sven; Munson, Todd; Wild, Stefan M.

    2015-06-01

    Coherent x-ray diffractive imaging is a novel imaging technique that utilizes phase retrieval and nonlinear optimization methods to image matter at nanometer scales. We explore how the convergence properties of a popular phase retrieval algorithm, Fienup's HIO, behave by introducing a reduced dimensionality problem allowing us to visualize and quantify convergence to local minima and the globally optimal solution. We then introduce generalizations of HIO that improve upon the original algorithm's ability to converge to the globally optimal solution.
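
    For context, a minimal NumPy sketch of the standard HIO iteration (alternately enforcing the measured Fourier magnitudes and a real-space support constraint) is shown below; the paper's reduced-dimensionality visualization and its generalizations of HIO are beyond this sketch.

```python
import numpy as np

def hio(measured_mag: np.ndarray, support: np.ndarray,
        beta: float = 0.9, iters: int = 500, seed: int = 0) -> np.ndarray:
    """Minimal sketch of Fienup's Hybrid Input-Output (HIO) phase retrieval:
    alternately enforce the measured Fourier magnitudes and a real-space
    support/non-negativity constraint.  The paper's reduced-dimensionality
    study and its improved variants of HIO are not reproduced here."""
    rng = np.random.default_rng(seed)
    support = support.astype(bool)
    g = rng.random(measured_mag.shape) * support      # random start inside support
    for _ in range(iters):
        G = np.fft.fft2(g)
        G = measured_mag * np.exp(1j * np.angle(G))   # keep phase, impose magnitude
        g_prime = np.real(np.fft.ifft2(G))
        violates = (~support) | (g_prime < 0)
        g = np.where(violates, g - beta * g_prime, g_prime)  # HIO update rule
    return g

# Toy usage: recover a small object from its Fourier magnitudes and a loose support.
truth = np.zeros((64, 64)); truth[20:30, 25:40] = 1.0
support = np.zeros((64, 64)); support[15:35, 20:45] = 1.0
recovered = hio(np.abs(np.fft.fft2(truth)), support)
```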

  18. Integrating Algorithm Visualization Video into a First-Year Algorithm and Data Structure Course

    ERIC Educational Resources Information Center

    Crescenzi, Pilu; Malizia, Alessio; Verri, M. Cecilia; Diaz, Paloma; Aedo, Ignacio

    2012-01-01

    In this paper we describe the results that we have obtained while integrating algorithm visualization (AV) movies (tightly integrated with the other teaching material) within a first-year undergraduate course on algorithms and data structures. Our experimental results seem to support the hypothesis that making these movies available significantly…

  19. Visual alignment technology for seamless steel pipe linearity measurement

    NASA Astrophysics Data System (ADS)

    Wu, Bin; Xue, Ting; Zhu, Jigui; Ye, Shenghua

    2006-06-01

    Linearity measurement is a key problem in the seamless steel pipe industry. For modern seamless steel pipe production, the traditional method cannot meet the needs of on-line, real-time measurement. Visual inspection has developed rapidly in recent years and offers high speed, high precision, non-contact operation, automation, and high manoeuvrability. A novel approach to on-line, real-time linearity measurement of seamless steel pipe based on visual alignment technology is therefore presented in this paper. The theory of visual alignment measurement is introduced first. An on-line, real-time linearity measuring system consisting of multiple structured-light sensors was then built for a seamless steel pipe factory in Tianjin using visual alignment technology. The key technologies of visual alignment, such as the optimal design of the high-precision structured-light sensor, coordinate integration of multiple sensors, the mathematical model of visual measurement, and a high-precision algorithm for computing ellipse centers, are studied in detail. Measurement results show that the measuring system is sound and can measure not only the linearity but also the coaxiality of large-scale parts.

  20. A Topology Visualization Early Warning Distribution Algorithm for Large-Scale Network Security Incidents

    PubMed Central

    He, Hui; Fan, Guotao; Ye, Jianwei; Zhang, Weizhe

    2013-01-01

    It is of great significance to research early warning systems for large-scale network security incidents. Such a system can improve the network's emergency response capabilities, alleviate the damage of cyber attacks, and strengthen the system's counterattack ability. A comprehensive early warning system that combines active measurement and anomaly detection is presented in this paper, and the key visualization algorithm and technology of the system are mainly discussed. Plane visualization of the large-scale network system is realized based on a divide-and-conquer approach. First, the topology of the large-scale network is divided into small-scale networks by the MLkP/CR algorithm. Second, a subgraph plane visualization algorithm is applied to each small-scale network. Finally, the small-scale networks' topologies are combined into an overall topology by an automatic distribution algorithm based on force analysis. Because the algorithm transforms the large-scale network topology plane visualization problem into a series of small-scale topology visualization and distribution problems, it has higher parallelism and is able to handle the display of ultra-large-scale network topologies. PMID:24191145
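
    A rough sketch of this divide-and-conquer layout pipeline, using NetworkX stand-ins (community detection instead of the MLkP/CR partitioner, spring layouts instead of the force-analysis distribution), might look like this:

```python
import networkx as nx
from networkx.algorithms import community

def partitioned_layout(G: nx.Graph, scale: float = 0.15) -> dict:
    """Sketch of the divide-and-conquer layout the abstract describes: split the
    topology into small subnetworks, lay each one out independently, then place
    the subnetworks on a coarse layout.  The paper's MLkP/CR partitioner and its
    force-analysis distribution are replaced here by NetworkX community detection
    and spring layouts (stand-ins only)."""
    communities = list(community.greedy_modularity_communities(G))
    node_to_comm = {n: i for i, c in enumerate(communities) for n in c}
    # Coarse graph: one node per subnetwork, laid out first to distribute them.
    coarse = nx.Graph()
    coarse.add_nodes_from(range(len(communities)))
    for u, v in G.edges():
        if node_to_comm[u] != node_to_comm[v]:
            coarse.add_edge(node_to_comm[u], node_to_comm[v])
    centers = nx.spring_layout(coarse, seed=1)
    # Fine layout: each subnetwork laid out locally, then shifted to its center.
    pos = {}
    for i, comm in enumerate(communities):
        sub_pos = nx.spring_layout(G.subgraph(comm), seed=1, scale=scale)
        for n, (x, y) in sub_pos.items():
            pos[n] = (centers[i][0] + x, centers[i][1] + y)
    return pos

# Example usage on a synthetic clustered network.
positions = partitioned_layout(nx.connected_caveman_graph(12, 8))
```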

  2. Notions of Technology and Visual Literacy

    ERIC Educational Resources Information Center

    Stankiewicz, Mary Ann

    2004-01-01

    For many art educators, the word "technology" conjures up visions of overhead projectors and VCRs, video and digital cameras, computers equipped with graphic programs and presentation software, digital labs where images rendered in pixels replace the debris of charcoal dust and puddled paints. One forgets that visual literacy and technology have…

  3. Visual Empirical Region of Influence (VERI) Pattern Recognition Algorithms

    2002-05-01

    We developed new pattern recognition (PR) algorithms based on a human visual perception model. We named these algorithms Visual Empirical Region of Influence (VERI) algorithms. To compare the new algorithms' effectiveness against other PR algorithms, we benchmarked their clustering capabilities with a standard set of two-dimensional data that is well known in the PR community. The VERI algorithm succeeded in clustering all the data correctly; no existing algorithm had previously clustered all the patterns in the data set successfully. The commands to execute VERI algorithms are quite difficult to master when executed from a DOS command line, and the algorithm requires several parameters to operate correctly. From our own experience we realized that if we wanted to provide a new data analysis tool to the PR community, we would have to make the tool powerful, yet easy and intuitive to use. That was our motivation for developing graphical user interfaces (GUIs) for the VERI algorithms. We developed GUIs to control the VERI algorithm in a single-pass mode and in an optimization mode. We also developed a visualization technique that allows users to graphically animate and visually inspect multi-dimensional data after it has been classified by the VERI algorithms. The visualization package is integrated into the single-pass interface. Both the single-pass interface and the optimization interface are part of the PR software package we have developed and make available to other users. The single-pass mode only finds PR results for the sets of features in the data set that are manually requested by the user. The optimization mode uses a brute-force method of searching through the combinations of features in a data set for features that produce

  4. Technologies for Visualization in Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Miceli, Kristina D.; Cooper, D. M. (Technical Monitor)

    1993-01-01

    State-of-the-art research in computational aerosciences produces complex, time-dependent datasets. Simulations can also be multidisciplinary in nature, coupling two or more physical disciplines such as fluid dynamics, structural dynamics, thermodynamics, and acoustics. Many diverse technologies are necessary for visualizing computational aerosciences simulations. This paper describes these technologies and how they contribute to building effective tools for use by domain scientists. These technologies include data management, distributed environments, advanced user interfaces, rapid prototyping environments, parallel computation, and methods to visualize the scalar and vector fields associated with computational aerosciences datasets.

  5. Visual Attention and Applications in Multimedia Technologies

    PubMed Central

    Le Callet, Patrick; Niebur, Ernst

    2013-01-01

    Making technological advances in the field of human-machine interactions requires that the capabilities and limitations of the human perceptual system are taken into account. The focus of this report is an important mechanism of perception, visual selective attention, which is becoming more and more important for multimedia applications. We introduce the concept of visual attention and describe its underlying mechanisms. In particular, we introduce the concepts of overt and covert visual attention, and of bottom-up and top-down processing. Challenges related to modeling visual attention and their validation using ad hoc ground truth are also discussed. Examples of the usage of visual attention models in image and video processing are presented. We emphasize multimedia delivery, retargeting and quality assessment of image and video, medical imaging, and the field of stereoscopic 3D images applications. PMID:24489403

  6. APPLYING SIMPLE TECHNOLOGY ACCOMPLISHES VISUAL INSPECTION CHALLENGES

    SciTech Connect

    Robinson, C

    2007-07-21

    This paper discusses the successful implementation of simple video technologies at the Savannah River Site (SRS) to perform complex visual inspection, monitoring, and surveillance tasks. Because SRS facilities are similar to those of an industrial plant, the environmental and accessibility considerations for remote viewing are the primary determining factors in the selection of technology. The constraints and challenges associated with remote viewing are discussed, and examples of applications are given.

  7. Mobile assistive technologies for the visually impaired.

    PubMed

    Hakobyan, Lilit; Lumsden, Jo; O'Sullivan, Dympna; Bartlett, Hannah

    2013-01-01

    There are around 285 million visually impaired people worldwide, and around 370,000 people are registered as blind or partially sighted in the UK. Ongoing advances in information technology (IT) are increasing the scope for IT-based mobile assistive technologies to facilitate the independence, safety, and improved quality of life of the visually impaired. Research is being directed at making mobile phones and other handheld devices accessible via our haptic (touch) and audio sensory channels. We review research and innovation within the field of mobile assistive technology for the visually impaired and, in so doing, highlight the need for successful collaboration between clinical expertise, computer science, and domain users to realize fully the potential benefits of such technologies. We initially reflect on research that has been conducted to make mobile phones more accessible to people with vision loss. We then discuss innovative assistive applications designed for the visually impaired that are either delivered via mainstream devices and can be used while in motion (e.g., mobile phones) or are embedded within an environment that may be in motion (e.g., public transport) or within which the user may be in motion (e.g., smart homes). PMID:24054999

  8. New mobile technologies and visual acuity.

    PubMed

    Livingstone, I A T; Lok, A S L; Tarbert, C

    2014-01-01

    Mobile devices have shown promise in visual assessment. Traditional acuity measurement involves retro-illuminated charts or card-based modalities. Mobile platforms bring potential to improve on both portability and objectivity. The present research activity relates to design and validation of a novel tablet-based infant acuity test. Early results in an adult cohort, with various levels of artificially degraded vision, suggest improved test-retest reliability compared with current standards for infant acuity. Future pragmatic trials will assess the value of this emerging technology in pediatric visual screening. PMID:25570420

  9. Delicate visual artifacts of advanced digital video processing algorithms

    NASA Astrophysics Data System (ADS)

    Nicolas, Marina M.; Lebowsky, Fritz

    2005-03-01

    With the advent of digital TV, sophisticated video processing algorithms have been developed to improve the rendering of motion and colors. However, the perceived subjective quality of these new systems sometimes conflicts with the objective, measurable improvement we expect to get. In this presentation, we show examples where algorithms should visually improve the skin tone rendering of decoded pictures under normal conditions, but surprisingly fail when the quality of MPEG encoding drops below a just-noticeable threshold. In particular, we demonstrate that simple objective criteria used for the optimization, such as SAD, PSNR or histograms, sometimes fail, partly because they are defined on a global scale, ignoring local characteristics of the picture content. We then integrate a simple human visual model to measure potential artifacts with regard to spatial and temporal variations of the objects' characteristics. Tuning some of the model's parameters allows correlating the perceived objective quality with compression metrics of various encoders. We show the evolution of our reference parameters with respect to the compression ratios. Finally, using the output of the model, we can control the parameters of the skin tone algorithm to reach an improvement in overall system quality.

  10. Visual tracking method based on cuckoo search algorithm

    NASA Astrophysics Data System (ADS)

    Gao, Ming-Liang; Yin, Li-Ju; Zou, Guo-Feng; Li, Hai-Tao; Liu, Wei

    2015-07-01

    Cuckoo search (CS) is a new meta-heuristic optimization algorithm based on the obligate brood parasitism of some cuckoo species combined with the Lévy flight behavior of some birds and fruit flies. It has been found to be efficient in solving global optimization problems. An application of CS to the visual tracking problem is presented. The relationship between optimization and visual tracking is studied comparatively, and the sensitivity and adjustment of the CS parameters in the tracking system are studied experimentally. To demonstrate the tracking ability of a CS-based tracker, a comparative study of the tracking accuracy and speed of the CS-based tracker against six state-of-the-art trackers (particle filter, mean-shift, PSO, ensemble tracker, fragments tracker, and compressive tracker) is presented. Comparative results show that the CS-based tracker outperforms the other trackers.
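
    A generic cuckoo-search maximizer of the kind described (Lévy flights around the best nest plus abandonment of the worst nests) can be sketched in Python as follows; the tracking-specific fitness function, step size, and parameter values are illustrative assumptions, not the authors' tracker.

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(dim: int, rng: np.random.Generator, beta: float = 1.5) -> np.ndarray:
    """Levy-distributed step lengths via Mantegna's method."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(fitness, bounds, n_nests=25, pa=0.25, iters=100, seed=0):
    """Generic cuckoo-search maximizer.  In the tracking setting the abstract
    describes, `fitness` would score how well an image patch at a candidate
    state matches the target appearance model."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    nests = rng.uniform(lo, hi, (n_nests, lo.size))
    scores = np.array([fitness(n) for n in nests])
    best = nests[scores.argmax()].copy()
    for _ in range(iters):
        for i in range(n_nests):
            # Levy flight biased toward the current best nest.
            step = 0.01 * levy_step(lo.size, rng) * (nests[i] - best)
            cand = np.clip(nests[i] + step, lo, hi)
            s = fitness(cand)
            if s > scores[i]:
                nests[i], scores[i] = cand, s
        # Abandon a fraction pa of the worst nests and rebuild them at random.
        worst = scores.argsort()[:int(pa * n_nests)]
        nests[worst] = rng.uniform(lo, hi, (worst.size, lo.size))
        scores[worst] = [fitness(n) for n in nests[worst]]
        best = nests[scores.argmax()].copy()
    return best

# Toy usage: recover a 2-D "target position" that maximizes a smooth score.
found = cuckoo_search(lambda p: -np.sum((p - np.array([30.0, 40.0])) ** 2),
                      bounds=([0, 0], [100, 100]))
```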

  11. Human Computation in Visualization: Using Purpose Driven Games for Robust Evaluation of Visualization Algorithms.

    PubMed

    Ahmed, N; Zheng, Ziyi; Mueller, K

    2012-12-01

    Due to the inherent characteristics of the visualization process, most of the problems in this field have strong ties with human cognition and perception. This makes the human brain and sensory system the only truly appropriate evaluation platform for evaluating and fine-tuning a new visualization method or paradigm. However, getting humans to volunteer for these purposes has always been a significant obstacle, and thus this phase of the development process has traditionally formed a bottleneck, slowing down progress in visualization research. We propose to take advantage of the newly emerging field of Human Computation (HC) to overcome these challenges. HC promotes the idea that rather than considering humans as users of the computational system, they can be made part of a hybrid computational loop consisting of traditional computation resources and the human brain and sensory system. This approach is particularly successful in cases where part of the computational problem is considered intractable using known computer algorithms but is trivial to common sense human knowledge. In this paper, we focus on HC from the perspective of solving visualization problems and also outline a framework by which humans can be easily seduced to volunteer their HC resources. We introduce a purpose-driven game titled "Disguise" which serves as a prototypical example for how the evaluation of visualization algorithms can be mapped into a fun and addicting activity, allowing this task to be accomplished in an extensive yet cost effective way. Finally, we sketch out a framework that transcends from the pure evaluation of existing visualization methods to the design of a new one. PMID:26357117

  12. Machine vision algorithm generation using human visual models

    NASA Astrophysics Data System (ADS)

    Daley, Wayne D.; Doll, Theodore J.; McWhorter, Shane W.; Wasilewski, Anthony A.

    1999-01-01

    The design of robust machine vision algorithms is one of the most difficult parts of developing and integrating automated systems. Historically, most of the techniques have been developed using ad hoc methodologies. This problem is more severe in the area of natural/biological products, where it has been difficult to capture and model the natural variability to be expected in the products. This presents difficulties in performing quality and process control in the meat, fruit, and vegetable industries. While some systems have been introduced, they do not adequately address the wide range of needs. This paper proposes an algorithm development technique that utilizes models of the human visual system. It addresses the subset of problems that humans perform well on but that have proven difficult to automate with standard machine vision techniques. The basis of the technique evaluation is the Georgia Tech Vision model. This approach demonstrates a high level of accuracy in its ability to solve difficult problems. This paper presents the approach, the results, and possibilities for implementation.

  13. Medical visualization based on VRML technology and its application

    NASA Astrophysics Data System (ADS)

    Yin, Meng; Luo, Qingming; Lu, Qiang; Sheng, Rongbing; Liu, Yafeng

    2003-07-01

    Current high-performance computers and advanced image processing capabilities have made the application of three-dimensional visualization to biomedical images a great aid to research in biomedical engineering. To take advantage of current Internet technology, where 3-D data are typically stored and processed on powerful servers accessed via TCP/IP, we made the isosurface results generally applicable to medical visualization. The system therefore uses the 3-D file format VRML2.0, which is used through the Web interface for manipulating 3-D models. The program generates and modifies triangular isosurface meshes with the marching cubes algorithm and uses OpenGL and MFC techniques to render the isosurfaces and manipulate voxel data. This software provides adequate visualization of volumetric data. Its drawbacks are that 3-D image processing on personal computers is rather slow and the set of tools for 3-D visualization is limited. However, these limitations have not affected the applicability of this platform to the tasks needed in elementary laboratory experiments or data preprocessing. With the help of OCT and MPE scanning image systems, applying these techniques to the visualization of the rabbit brain and constructing data sets of hierarchical subdivisions of the cerebral information, we can establish a virtual environment on the World Wide Web for rabbit brain research from its gross anatomy to its tissue and cellular levels of detail, providing graphical modeling and information management of both the outer and the inner space of the rabbit brain.

  14. SciDAC Visualization and Analytics Center for Enabling Technologies

    SciTech Connect

    Bethel, E. Wes; Johnson, Chris; Joy, Ken; Ahern, Sean; Pascucci,Valerio; Childs, Hank; Cohen, Jonathan; Duchaineau, Mark; Hamann, Bernd; Hansen, Charles; Laney, Dan; Lindstrom, Peter; Meredith, Jermey; Ostrouchov, George; Parker, Steven; Silva, Claudio; Sanderson, Allen; Tricoche, Xavier.

    2007-06-30

    The Visualization and Analytics Center for Enabling Technologies (VACET) focuses on leveraging scientific visualization and analytics software technology as an enabling technology for increasing scientific productivity and insight. Advances in computational technology have resulted in an 'information big bang,' which in turn has created a significant data understanding challenge. This challenge is widely acknowledged to be one of the primary bottlenecks in contemporary science. The vision of VACET is to adapt, extend, create when necessary, and deploy visual data analysis solutions that are responsive to the needs of DOE's computational and experimental scientists. Our center is engineered to be directly responsive to those needs and to deliver solutions for use in DOE's large open computing facilities. The research and development directly target data understanding problems provided by our scientific application stakeholders. VACET draws from a diverse set of visualization technology ranging from production-quality applications and application frameworks to state-of-the-art algorithms for visualization, analysis, analytics, data manipulation, and data management.

  15. Visual Literacy: The Missing Piece of Your Technology Integration Course

    ERIC Educational Resources Information Center

    Sosa, Teri

    2009-01-01

    This article reports the result of an action research study that explored the need for visual literacy as an additional instructional input for students creating technology integration solutions. The introduction of visual literacy concepts is useful in two ways. First, it raises visual considerations to the conscious consideration of students.…

  16. Textile Visual Materials: Appropriate Technology in Action.

    ERIC Educational Resources Information Center

    Donoghue, Beverly Emerson

    An innovative educational medium--screenprinted visual aids on cloth--is one alternative to conventional media in Africa, where visual materials are important communication tools but conventional media and materials are often scarce. A production process for cloth visual aids was developed and evaluated in Ghana and Sudan through the…

  17. Accommodating Technology in the Visual Literacy Classroom.

    ERIC Educational Resources Information Center

    Lloyd, Carla V.; Barnhurst, Kevin G.

    The development of a visual literacy facility, the Creative Visual Lab, at the S. I. Newhouse School of Public Communications at Syracuse University (New York) is described. The facility was designed to provide students with the instruction that would develop their computer proficiency and visual sensitivity without being, in itself, completely…

  18. Visualizing Global Wildfire Automated Biomass Burning Algorithm Data

    NASA Astrophysics Data System (ADS)

    Schmidt, C. C.; Hoffman, J.; Prins, E. M.

    2013-12-01

    The Wildfire Automated Biomass Burning Algorithm (WFABBA) produces fire detection and characterization from a global constellation of geostationary satellites on a real-time basis. Presenting these data in a timely and meaningful way has been a challenge, but as hardware, software, and web tools have advanced, new options have rapidly arisen. The WFABBA team at the Cooperative Institute for Meteorological Satellite Studies (CIMSS) at the Space Science and Engineering Center (SSEC) has begun implementation of a web-based framework that allows a user to visualize current and archived fire data from NOAA's Geostationary Operational Environmental Satellite (GOES), EUMETSAT's Meteosat Second Generation (MSG), JMA's Multifunction Transport Satellite (MTSAT), and KMA's COMS series of satellites. User group needs vary from simple examination of the most recent data to multi-hour composites and animations, as well as saving datasets for further review. In order to maximize the usefulness of the data, a user-friendly and scalable interface has been under development that will, when complete, allow access to approximately 18 years of WFABBA data, as well as the data produced in real time. Implemented, planned, and potential additional features will be examined.

  19. Identification of coherent patterns in gene expression data using an efficient biclustering algorithm and parallel coordinate visualization

    PubMed Central

    Cheng, Kin-On; Law, Ngai-Fong; Siu, Wan-Chi; Liew, Alan Wee-Chung

    2008-01-01

    Background The DNA microarray technology allows the measurement of expression levels of thousands of genes under tens/hundreds of different conditions. In microarray data, genes with similar functions usually co-express under certain conditions only [1]. Thus, biclustering which clusters genes and conditions simultaneously is preferred over the traditional clustering technique in discovering these coherent genes. Various biclustering algorithms have been developed using different bicluster formulations. Unfortunately, many useful formulations result in NP-complete problems. In this article, we investigate an efficient method for identifying a popular type of biclusters called additive model. Furthermore, parallel coordinate (PC) plots are used for bicluster visualization and analysis. Results We develop a novel and efficient biclustering algorithm which can be regarded as a greedy version of an existing algorithm known as pCluster algorithm. By relaxing the constraint in homogeneity, the proposed algorithm has polynomial-time complexity in the worst case instead of exponential-time complexity as in the pCluster algorithm. Experiments on artificial datasets verify that our algorithm can identify both additive-related and multiplicative-related biclusters in the presence of overlap and noise. Biologically significant biclusters have been validated on the yeast cell-cycle expression dataset using Gene Ontology annotations. Comparative study shows that the proposed approach outperforms several existing biclustering algorithms. We also provide an interactive exploratory tool based on PC plot visualization for determining the parameters of our biclustering algorithm. Conclusion We have proposed a novel biclustering algorithm which works with PC plots for an interactive exploratory analysis of gene expression data. Experiments show that the biclustering algorithm is efficient and is capable of detecting co-regulated genes. The interactive analysis enables an optimum
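
    The additive bicluster model targeted by the article can be illustrated with a short Python check of additive coherence (the mean-squared residual after removing row and column effects); the paper's greedy, pCluster-style search itself is not reproduced here.

```python
import numpy as np

def additive_residual(submatrix: np.ndarray) -> float:
    """Mean-squared residual of a candidate bicluster under the additive model
    a_ij = mu + alpha_i + beta_j.  A score near zero means the genes in the
    bicluster shift together across the selected conditions; this is only an
    illustration of the bicluster type, not the paper's search algorithm."""
    row_mean = submatrix.mean(axis=1, keepdims=True)
    col_mean = submatrix.mean(axis=0, keepdims=True)
    residual = submatrix - row_mean - col_mean + submatrix.mean()
    return float((residual ** 2).mean())

# A perfectly additive bicluster (rows are shifted copies of each other) scores 0.
block = np.array([[1.0, 3.0, 2.0],
                  [2.0, 4.0, 3.0],
                  [5.0, 7.0, 6.0]])
print(additive_residual(block))                 # -> 0.0
print(additive_residual(np.random.rand(3, 3)))  # noticeably larger
```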

  20. Comparing Learning Performance of Students Using Algorithm Visualizations Collaboratively on Different Engagement Levels

    ERIC Educational Resources Information Center

    Laakso, Mikko-Jussi; Myller, Niko; Korhonen, Ari

    2009-01-01

    In this paper, two emerging learning and teaching methods have been studied: collaboration in concert with algorithm visualization. When visualizations have been employed in collaborative learning, collaboration introduces new challenges for the visualization tools. In addition, new theories are needed to guide the development and research of the…

  1. Vector Field Visual Data Analysis Technologies for Petascale Computational Science

    SciTech Connect

    Garth, Christoph; Deines, Eduard; Joy, Kenneth I.; Bethel, E. Wes; Childs, Hank; Weber, Gunther; Ahern, Sean; Pugmire, Dave; Sanderson, Allen; Johnson, Chris

    2009-11-13

    State-of-the-art computational science simulations generate large-scale vector field data sets. Visualization and analysis is a key aspect of obtaining insight into these data sets and represents an important challenge. This article discusses possibilities and challenges of modern vector field visualization and focuses on methods and techniques developed in the SciDAC Visualization and Analytics Center for Enabling Technologies (VACET) and deployed in the open-source visualization tool, VisIt.

  2. Teaching Intonation in Discourse Using Speech Visualization Technology

    ERIC Educational Resources Information Center

    Levis, John; Pickering, Lucy

    2004-01-01

    Intonation, long thought to be a key to effectiveness in spoken language, is more and more commonly addressed in English language teaching through the use of speech visualization technology. While the use of visualization technology is a crucial advance in the teaching of intonation, such teaching can be further enhanced by connecting technology…

  3. SciDAC Visualization and Analytics Center for Enabling Technologies

    SciTech Connect

    Joy, Kenneth I.

    2014-09-14

    This project focuses on leveraging scientific visualization and analytics software technology as an enabling technology for increasing scientific productivity and insight. Advances in computational technology have resulted in an "information big bang," which in turn has created a significant data understanding challenge. This challenge is widely acknowledged to be one of the primary bottlenecks in contemporary science. The vision for our Center is to respond directly to that challenge by adapting, extending, creating when necessary and deploying visualization and data understanding technologies for our science stakeholders. Using an organizational model as a Visualization and Analytics Center for Enabling Technologies (VACET), we are well positioned to be responsive to the needs of a diverse set of scientific stakeholders in a coordinated fashion using a range of visualization, mathematics, statistics, computer and computational science and data management technologies.

  4. Using Technology to Support Visual Learning Strategies

    ERIC Educational Resources Information Center

    O'Bannon, Blanche; Puckett, Kathleen; Rakes, Glenda

    2006-01-01

    Visual learning is a strategy for visually representing the structure of information and for representing the ways in which concepts are related. Based on the work of Ausubel, these hierarchical maps facilitate student learning of unfamiliar information in the K-12 classroom. This paper presents the research base for this Type II computer tool, as…

  5. Hardware Implementation of a Spline-Based Genetic Algorithm for Embedded Stereo Vision Sensor Providing Real-Time Visual Guidance to the Visually Impaired

    NASA Astrophysics Data System (ADS)

    Lee, Dah-Jye; Anderson, Jonathan D.; Archibald, James K.

    2008-12-01

    Many image and signal processing techniques have been applied to medical and health care applications in recent years. In this paper, we present a robust signal processing approach that can be used to solve the correspondence problem for an embedded stereo vision sensor to provide real-time visual guidance to the visually impaired. This approach is based on our new one-dimensional (1D) spline-based genetic algorithm to match signals. The algorithm processes image data lines as 1D signals to generate a dense disparity map, from which 3D information can be extracted. With recent advances in electronics technology, this 1D signal matching technique can be implemented and executed in parallel in hardware such as field-programmable gate arrays (FPGAs) to provide real-time feedback about the environment to the user. In order to complement (not replace) traditional aids for the visually impaired such as canes and Seeing Eye dogs, vision systems that provide guidance to the visually impaired must be affordable, easy to use, compact, and free from attributes that are awkward or embarrassing to the user. "Seeing Eye Glasses," an embedded stereo vision system utilizing our new algorithm, meets all these requirements.
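
    For contrast with the paper's spline-based genetic algorithm, the following Python sketch shows the brute-force 1-D scanline matching it is designed to improve upon; the window size and disparity range are arbitrary assumptions, not values from the paper.

```python
import numpy as np

def scanline_disparity(left_row: np.ndarray, right_row: np.ndarray,
                       max_disp: int = 32, window: int = 5) -> np.ndarray:
    """Baseline 1-D correspondence sketch: for each pixel of the left scanline,
    pick the disparity whose windowed sum-of-absolute-differences against the
    right scanline is smallest.  The paper replaces this brute-force matching
    with a spline-based genetic algorithm suited to FPGA implementation; that
    algorithm is not reproduced here."""
    n = left_row.size
    half = window // 2
    disp = np.zeros(n, dtype=int)
    for x in range(half, n - half):
        lw = left_row[x - half:x + half + 1].astype(float)
        best_d, best_cost = 0, np.inf
        for d in range(min(max_disp, x - half) + 1):
            rw = right_row[x - d - half:x - d + half + 1].astype(float)
            cost = np.abs(lw - rw).sum()
            if cost < best_cost:
                best_d, best_cost = d, cost
        disp[x] = best_d
    return disp
```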

  6. How Dynamic Visualization Technology Can Support Molecular Reasoning

    ERIC Educational Resources Information Center

    Levy, Dalit

    2013-01-01

    This paper reports the results of a study aimed at exploring the advantages of dynamic visualization for the development of better understanding of molecular processes. We designed a technology-enhanced curriculum module in which high school chemistry students conduct virtual experiments with dynamic molecular visualizations of solid, liquid, and…

  7. Integrated Mathematics, Science, and Technology: An Introduction to Scientific Visualization.

    ERIC Educational Resources Information Center

    Thomas, David A.; And Others

    1996-01-01

    Demonstrates the use of scientific visualization, a computer graphics technology developed to extend the use of our visual system to contexts and problem-solving situations where sight itself is not directly possible or in which normal vision fails to provide adequate opportunity for analysis. (DDR)

  8. Evaluating Microcomputer Access Technology for Use by Visually Impaired Students.

    ERIC Educational Resources Information Center

    Ruconich, Sandra

    1984-01-01

    The article outlines advantages and limitations of five types of access to microcomputer technology for visually impaired students: electronic braille, paper braille, Optacon, synthetic speech, and enlarged print. Additional considerations in access decisions are noted. (CL)

  9. Visual pattern recognition network: its training algorithm and its optoelectronic architecture

    NASA Astrophysics Data System (ADS)

    Wang, Ning; Liu, Liren

    1996-07-01

    A visual pattern recognition network and its training algorithm are proposed. The network is constructed of a one-layer morphology network and a two-layer modified Hamming net. This visual network can implement invariant pattern recognition with respect to image translation and size projection. After supervised learning takes place, the visual network extracts image features and classifies patterns much the same as living beings do. Moreover, we set up its optoelectronic architecture for real-time pattern recognition.

  10. Enhanced Detection of Multivariate Outliers Using Algorithm-Based Visual Display Techniques.

    ERIC Educational Resources Information Center

    Dickinson, Wendy B.

    This study uses an algorithm-based visual display technique (FACES) to provide enhanced detection of multivariate outliers within large-scale data sets. The FACES computer graphing algorithm (H. Chernoff, 1973) constructs a cartoon-like face, using up to 18 variables for each case. A major advantage of FACES is the ability to store and show the…

  11. A novel evaluation metric based on visual perception for moving target detection algorithm

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Liu, Lei; Cui, Minjie; Li, He

    2016-05-01

    Each of the traditional performance evaluation indices for moving target detection algorithms emphasizes a different aspect, which makes it inconvenient to evaluate the performance of an algorithm comprehensively and objectively. In particular, when the detection results of different algorithms contain the same numbers of foreground and background points, all of the traditional indices are identical, and they cannot be used to compare the performance of the algorithms; this is the disadvantage of traditional evaluation indices, which take the pixel as the unit of calculation. To solve this problem, this paper combines features of the human visual perception system and presents a new evaluation index, Visual Fluctuation (VF), based on the principle of image blocks, to evaluate the performance of moving target detection algorithms. Experiments showed that the new evaluation index based on visual perception makes up for the deficiency of the traditional ones, and the calculation results not only accord with human visual perception but also evaluate the performance of moving target detection algorithms more objectively.

  12. Visualization of large medical data sets using memory-optimized CPU and GPU algorithms

    NASA Astrophysics Data System (ADS)

    Kiefer, Gundolf; Lehmann, Helko; Weese, Juergen

    2005-04-01

    With the evolution of medical scanners towards higher spatial resolutions, the sizes of image data sets are increasing rapidly. To profit from the higher resolution in medical applications such as 3D-angiography for a more efficient and precise diagnosis, high-performance visualization is essential. However, to make sure that the performance of a volume rendering algorithm scales with the performance of future computer architectures, technology trends need to be considered. The design of such scalable volume rendering algorithms remains challenging. One of the major trends in the development of computer architectures is the wider use of cache memory hierarchies to bridge the growing gap between the faster evolving processing power and the slower evolving memory access speed. In this paper we propose ways to exploit the standard PC's cache memories supporting the main processors (CPUs) and the graphics hardware (graphics processing unit, GPU), respectively, for computing Maximum Intensity Projections (MIPs). To this end, we describe a generic and flexible way to improve the cache efficiency of software ray casting algorithms and show by means of cache simulations that it enables cache miss rates close to the theoretical optimum. For GPU-based rendering we propose a similar, brick-based technique to optimize the utilization of onboard caches and the transfer of data to the GPU on-board memory. All algorithms produce images of identical quality, which enables us to compare the performance of their implementations in a fair way without trading quality for speed. Our comparison indicates that the proposed methods perform superior, in particular for large data sets.
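
    The brick-based idea for cache-friendly Maximum Intensity Projection can be sketched in a few lines of NumPy; this is a simplified, axis-aligned CPU illustration only, not the paper's ray caster or its GPU implementation.

```python
import numpy as np

def mip_bricked(volume: np.ndarray, brick: int = 32) -> np.ndarray:
    """Maximum Intensity Projection along the z axis, computed brick by brick.
    Numerically this equals volume.max(axis=0); the point of the bricking, as
    in the paper, is that each brick stays resident in a cache level (or in
    GPU on-board memory) while it is processed.  Simplified CPU sketch only."""
    _, h, w = volume.shape
    mip = np.empty((h, w), dtype=volume.dtype)
    for y0 in range(0, h, brick):
        for x0 in range(0, w, brick):
            tile = volume[:, y0:y0 + brick, x0:x0 + brick]   # one brick column
            mip[y0:y0 + brick, x0:x0 + brick] = tile.max(axis=0)
    return mip

# Sanity check: identical to the naive projection.
vol = np.random.rand(64, 100, 130).astype(np.float32)
assert np.array_equal(mip_bricked(vol), vol.max(axis=0))
```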

  13. A visual sensitivity based low-bit-rate image compression algorithm

    NASA Astrophysics Data System (ADS)

    Xia, Qing; Li, Xiaoguang; Li, Zhuo

    2013-03-01

    In this paper, we present a visual sensitivity based low-bit-rate image compression algorithm. Using the idea that different image regions have different perceptual significance for quality, the input image is divided into edges, textures, and smooth regions. For the edges, the standard JPEG algorithm with an appropriate quantization step is applied so that the details can be preserved. For the textures, the JPEG algorithm is applied to a down-scaled version. For the smooth regions, a skipping scheme is employed in the compression process to save bits. Experimental results show the superior performance of our method in terms of both compression efficiency and visual quality.
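
    The block classification that drives this kind of region-adaptive coding might look like the following Python sketch; the gradient-based measure and thresholds are assumptions standing in for whatever sensitivity criterion the paper uses, and the per-class JPEG treatment is only indicated in the comments.

```python
import numpy as np

def classify_blocks(gray: np.ndarray, block: int = 8,
                    edge_thr: float = 30.0, tex_thr: float = 10.0) -> np.ndarray:
    """Label each block of a grayscale image as 0=smooth, 1=texture, 2=edge
    using its mean gradient magnitude (a simple stand-in for the paper's
    visual sensitivity measure).  The per-class treatment then follows the
    abstract: edge blocks get fine quantization, texture blocks are coded at
    reduced scale, and smooth blocks are skipped to save bits."""
    gy, gx = np.gradient(gray.astype(float))
    grad = np.hypot(gx, gy)
    h, w = gray.shape
    labels = np.zeros((h // block, w // block), dtype=int)
    for by in range(labels.shape[0]):
        for bx in range(labels.shape[1]):
            g = grad[by * block:(by + 1) * block, bx * block:(bx + 1) * block].mean()
            labels[by, bx] = 2 if g > edge_thr else (1 if g > tex_thr else 0)
    return labels
```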

  14. Knowledge-based visual image processing IDE model for algorithm and system rapid prototyping

    NASA Astrophysics Data System (ADS)

    Zhang, Biyin; Chen, Wei; Wang, Yuanbin

    2009-10-01

    A novel intelligent model for an image processing (IP) research integrated development environment (IDE) is presented for rapidly converting a conceptual model of an IP algorithm into a computational model and program implementation. Considering the psychology of IP research and computer programming, the model presents a cyclic model of the IP research process and establishes an improved expert system prototype. Visualization approaches are introduced to visualize the three phases of IP development. An intelligent methodology is applied to reuse algorithms, graphical user interfaces (GUIs), and data visualization tools, allowing researchers to focus their attention on their own algorithm models of interest. Experimental results show that development based on the new model enables rapid algorithm prototype modeling with great efficiency and speed.

  15. Visual Impairments. Tech Use Guide: Using Computer Technology.

    ERIC Educational Resources Information Center

    Carr, Annette

    This guide describes adaptive technology for reading printed text and producing written material, to assist the student who has a visual impairment. The special technologies discussed include auditory text access, text enlargement, tactile text access, portable notetaking devices, and computer access. The guide concludes with lists of the…

  16. Assistive Technology Competencies for Teachers of Students with Visual Impairments

    ERIC Educational Resources Information Center

    Smith, Derrick W.; Kelley, Pat; Maushak, Nancy J.; Griffin-Shirley, Nora; Lan, William Y.

    2009-01-01

    Using the expert opinion of more than 30 professionals, this Delphi study set out to develop a set of assistive technology competencies for teachers of students with visual impairments. The result of the study was the development of a highly reliable and valid set of 111 assistive technology competencies. (Contains 2 tables.)

  17. Reasoning visualization in expert systems - The applicability of algorithm animation techniques

    NASA Technical Reports Server (NTRS)

    Selig, William J.; Johannes, James D.

    1990-01-01

    This paper presents the results of research into providing a means for users to flexibly create visualizations of the reasoning processes of forward-chaining rule-based expert systems using algorithm animation techniques. Levels of reasoning are described in order to identify the information necessary from the expert system development environment for these visualizations. A dual-process visualization environment is presented consisting of: (1) a version of CLIPS modified for the identified information access requirements; and (2) VISOR, an algorithm animation-based system for creating visualizations of arbitrary complexity which can be triggered by 'interesting event' messages from the running expert-system application. This is followed by examples from several visualizations performed during the scope of this work.

  18. Do the Visual Complexity Algorithms Match the Generalization Process in Geographical Displays?

    NASA Astrophysics Data System (ADS)

    Brychtová, A.; Çöltekin, A.; Pászto, V.

    2016-06-01

    In this study, we first develop a hypothesis that existing quantitative visual complexity measures will overall reflect the level of cartographic generalization, and test this hypothesis. Specifically, to test our hypothesis, we first selected common geovisualization types (i.e., cartographic maps, hybrid maps, satellite images and shaded relief maps) and retrieved examples as provided by Google Maps, OpenStreetMap and SchweizMobil by swisstopo. The selected geovisualizations vary in cartographic design choices, scene contents and levels of generalization. Following this, we applied one of Rosenholtz et al.'s (2007) visual clutter algorithms to obtain quantitative visual complexity scores for screenshots of the selected maps. We hypothesized that visual complexity should be constant across generalization levels; however, the algorithm suggested that the complexity of small-scale (less detailed) displays is higher than that of large-scale (highly detailed) displays. We also observed vast differences in visual complexity among map providers, which we attribute to their varying approaches towards the cartographic design and generalization process. Our efforts will contribute towards creating recommendations as to how the visual complexity algorithms could be optimized for cartographic products, and eventually be utilized as a part of the cartographic design process to assess the visual complexity.

  19. An Algorithm for Treating Uncertainties in the Visualization of Pipeline Sensors' Datasets

    NASA Astrophysics Data System (ADS)

    Olufemi, A. Folorunso; Sunar, Mohd. Shahrizal; Kari, Sarudin

    Researchers have long seen visualization as a tool for presenting data based on available datasets. Its usage is, however, undermined by its inability to acknowledge the uncertainties associated with real-world measurements. Visualization results are said to be "too generous", providing visual assumptions that may not be far from reality, yet the associated inaccuracies can become significant when dealing with life-dependent datasets. Uncertainty is now becoming a significant research interest, whereas accuracy is in most cases a neglected issue. Two wrong assumptions are commonly made: the first is that the data visualized are accurate, and the second is that the visualization process is free from errors. The objectives of this paper are to present the implications of these inaccuracies and to propose a treatment algorithm for the visualization of pipeline sensors' datasets. The paper also features attributes that give a user an idea of the inaccuracies in sensor datasets.

  20. 3D Visualization of Machine Learning Algorithms with Astronomical Data

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2016-01-01

    We present innovative machine learning (ML) methods using unsupervised clustering with minimum spanning trees (MSTs) to study 3D astronomical catalogs. Utilizing Python code to build trees based on galaxy catalogs, we can render the results with the visualization suite Blender to produce interactive 360 degree panoramic videos. The catalogs and their ML results can be explored in a 3D space using mobile devices, tablets or desktop browsers. We compare the statistics of the MST results to a number of machine learning methods relating to optimization and efficiency.
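
    A minimal sketch of the MST construction step is shown below, using SciPy's minimum_spanning_tree on a Euclidean distance matrix of mock 3D catalog positions; the catalog itself and any subsequent Blender rendering step are assumptions for illustration.

        import numpy as np
        from scipy.spatial.distance import pdist, squareform
        from scipy.sparse.csgraph import minimum_spanning_tree

        def catalog_mst(xyz):
            """Return the edge list (i, j, length) of the Euclidean MST of 3D points.

            Note: in the dense-graph representation, a zero distance (coincident
            points) would be treated as a missing edge, which is fine for
            continuous positions.
            """
            dist = squareform(pdist(xyz))               # dense pairwise distances
            mst = minimum_spanning_tree(dist).tocoo()   # sparse MST of the graph
            return list(zip(mst.row, mst.col, mst.data))

        # Usage: a mock catalog of 500 galaxy positions (arbitrary units); the edges
        # could then be exported to a renderer such as Blender for interactive viewing.
        points = np.random.rand(500, 3) * 100.0
        edges = catalog_mst(points)
        print(len(edges), "MST edges")                  # N - 1 edges for N points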

  1. Applications of aerospace technology in industry: A technology transfer profile. Visual display systems

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The growth of common as well as emerging visual display technologies is surveyed. The major inference is that contemporary society is rapidly growing ever more reliant on visual display for a variety of purposes. Because of its unique mission requirements, the National Aeronautics and Space Administration has contributed in an important and specific way to the growth of visual display technology. These contributions are characterized by the use of computer-driven visual displays to provide an enormous amount of information concisely, rapidly and accurately.

  2. A new algorithm of laser 3D visualization based on space-slice

    NASA Astrophysics Data System (ADS)

    Yang, Hui; Song, Yanfeng; Song, Yong; Cao, Jie; Hao, Qun

    2013-12-01

    Traditional visualization algorithms based on three-dimensional (3D) laser point cloud data consist of two steps: splitting the point cloud data into different target objects, and establishing 3D surface models of the target objects to realize visualization using interpolation points or surface fitting methods. However, most of these algorithms suffer from disadvantages such as low efficiency and loss of image detail. To cope with these problems, a 3D visualization algorithm based on space-slices is proposed in this paper, which includes two steps: data classification and image reconstruction. In the first step, an edge detection method is used to check parametric continuity and extract edges, preliminarily classifying the data into different target regions. In the second step, the divided data are split further into space-slices according to their coordinates. Within each space-slice of the point cloud data, one-dimensional interpolation is adopted to smooth the curve connecting each group of points. Finally, the interpolation points obtained from each group are used to compute the fitting surface. As expected, the visual morphology of the objects is obtained. Simulation results compared with real scenes show that the final visual images have explicit details and that the overall visual result is natural.
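
    The space-slice idea of grouping points into slices and smoothing each slice with one-dimensional interpolation might be sketched as follows; the slice width, the x-axis slicing direction and the single-valued z(y) profile per slice are simplifying assumptions rather than the authors' exact procedure.

        import numpy as np

        def slice_and_interpolate(points, slice_width=0.1, samples=200):
            """Split an (N, 3) point cloud into x-slices and resample each slice's z(y) profile.

            Each slice is treated as a single-valued curve z(y), an illustrative
            simplification of the space-slice reconstruction.
            """
            curves = []
            x, y, z = points.T
            for x0 in np.arange(x.min(), x.max(), slice_width):
                mask = (x >= x0) & (x < x0 + slice_width)
                if mask.sum() < 2:
                    continue
                ys, zs = y[mask], z[mask]
                order = np.argsort(ys)
                y_new = np.linspace(ys.min(), ys.max(), samples)
                z_new = np.interp(y_new, ys[order], zs[order])   # 1D interpolation per slice
                curves.append((x0 + slice_width / 2, y_new, z_new))
            return curves   # adjacent slice curves can then be lofted into a fitted surface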

  3. IR and visual image registration based on mutual information and PSO-Powell algorithm

    NASA Astrophysics Data System (ADS)

    Zhuang, Youwen; Gao, Kun; Miu, Xianghu

    2014-11-01

    Infrared and visual image registration has wide application in the fields of remote sensing and the military. Mutual information (MI) has proved effective and successful in the infrared and visual image registration process. To find the most appropriate registration parameters, optimization algorithms such as the Particle Swarm Optimization (PSO) algorithm or the Powell search method are often used. The PSO algorithm has strong global search ability and its search speed is fast at the beginning, but its weakness is low search performance in the late search stage; in the image registration process it often spends a lot of time on useless search and the solution's precision is low. The Powell search method has strong local search ability, but its search performance and time are more sensitive to the initial values; in image registration it is often trapped by local maxima and gives wrong results. In this paper, a novel hybrid algorithm combining the PSO algorithm and the Powell search method is proposed. It combines the advantages of both, avoiding entrapment by local maxima while achieving higher precision. Firstly, the PSO algorithm obtains a registration parameter close to the global optimum. Based on this result, the Powell search method is then used to find a more precise registration parameter. The experimental results show that the algorithm can effectively correct the scale, rotation and translation differences between two images, that it is a good solution for registering infrared and visible images, and that it outperforms traditional optimization algorithms in both time and precision.
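
    A compact sketch of the hybrid strategy, under several assumptions (a histogram-based MI estimator, a translation-only warp via integer pixel rolls, and small swarm settings), is given below: a coarse particle-swarm search maximizes mutual information, and SciPy's Powell method then refines the best candidate. In practice a subpixel interpolating warp would be used; the integer roll only keeps the sketch short.

        import numpy as np
        from scipy.optimize import minimize

        def mutual_information(a, b, bins=32):
            """Histogram-based mutual information between two equally sized images."""
            hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
            p = hist / hist.sum()
            px, py = p.sum(axis=1), p.sum(axis=0)
            nz = p > 0
            return np.sum(p[nz] * np.log(p[nz] / (px[:, None] * py[None, :])[nz]))

        def cost(shift, fixed, moving):
            moved = np.roll(moving, (int(round(shift[0])), int(round(shift[1]))), axis=(0, 1))
            return -mutual_information(fixed, moved)        # minimize negative MI

        def register(fixed, moving, n_particles=20, iters=30, bound=20.0):
            rng = np.random.default_rng(0)
            pos = rng.uniform(-bound, bound, (n_particles, 2))
            vel = np.zeros_like(pos)
            best = pos.copy()
            best_f = np.array([cost(p, fixed, moving) for p in pos])
            g = best[best_f.argmin()].copy()
            for _ in range(iters):                          # coarse global search (PSO)
                r1, r2 = rng.random((2, n_particles, 1))
                vel = 0.7 * vel + 1.5 * r1 * (best - pos) + 1.5 * r2 * (g - pos)
                pos = np.clip(pos + vel, -bound, bound)
                f = np.array([cost(p, fixed, moving) for p in pos])
                improved = f < best_f
                best[improved], best_f[improved] = pos[improved], f[improved]
                g = best[best_f.argmin()].copy()
            # local refinement of the swarm's best candidate
            res = minimize(cost, g, args=(fixed, moving), method="Powell")
            return res.x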

  4. Advanced visualization technology for terascale particle accelerator simulations

    SciTech Connect

    Ma, K-L; Schussman, G.; Wilson, B.; Ko, K.; Qiang, J.; Ryne, R.

    2002-11-16

    This paper presents two new hardware-assisted rendering techniques developed for interactive visualization of the terascale data generated from numerical modeling of next-generation accelerator designs. The first technique, based on a hybrid rendering approach, makes possible interactive exploration of large-scale particle data from particle beam dynamics modeling. The second technique, based on a compact texture-enhanced representation, exploits the advanced features of commodity graphics cards to achieve perceptually effective visualization of the very dense and complex electromagnetic fields produced from the modeling of reflection and transmission properties of open structures in an accelerator design. Because of the collaborative nature of the overall accelerator modeling project, the visualization technology developed is for both desktop and remote visualization settings. We have tested the techniques using both time-varying particle data sets containing up to one billion particles per time step and electromagnetic field data sets with millions of mesh elements.

  5. Technological Solutions for Visually Impaired People in Sweden.

    ERIC Educational Resources Information Center

    Lindstrom, J. I.

    1990-01-01

    This article discusses technology available in Sweden for visually impaired and deaf-blind people. It describes systems for stop announcements on buses and trams, queuing systems in shops and banks, text telephones, synthetic speech or braille displays of newspapers and other information sources, and home computers. Ideas for the future are also…

  6. Visualization and Students' Performance in Technology-Based Calculus.

    ERIC Educational Resources Information Center

    Galindo, Enrique

    The relationship between college students' preferred mode of processing mathematical information--visual or nonvisual--and their performance in calculus classes with and without technology was investigated. Students elected one of three different versions of an introductory differential calculus course: using graphing calculators, using the…

  7. Visual Metaphors in the Representation of Communication Technology.

    ERIC Educational Resources Information Center

    Kaplan, Stuart Jay

    1990-01-01

    Examines the role of metaphors (particularly visual metaphors) in communicating social values associated with new communication technology by analyzing magazine advertisements for computing and advanced telecommunications products and services. Finds that the "lever" and the "synthesis of old and new values" metaphors are dominant in both general…

  8. Realistic terrain visualization based on 3D virtual world technology

    NASA Astrophysics Data System (ADS)

    Huang, Fengru; Lin, Hui; Chen, Bin; Xiao, Cai

    2010-11-01

    The rapid advances in information technologies, e.g., network, graphics processing, and virtual world, have provided challenges and opportunities for new capabilities in information systems, Internet applications, and virtual geographic environments, especially geographic visualization and collaboration. In order to achieve meaningful geographic capabilities, we need to explore and understand how these technologies can be used to construct virtual geographic environments to help to engage geographic research. The generation of three-dimensional (3D) terrain plays an important part in geographical visualization, computer simulation, and virtual geographic environment applications. The paper introduces concepts and technologies of virtual worlds and virtual geographic environments, explores integration of realistic terrain and other geographic objects and phenomena of natural geographic environment based on SL/OpenSim virtual world technologies. Realistic 3D terrain visualization is a foundation of construction of a mirror world or a sand box model of the earth landscape and geographic environment. The capabilities of interaction and collaboration on geographic information are discussed as well. Further virtual geographic applications can be developed based on the foundation work of realistic terrain visualization in virtual environments.

  9. Realistic terrain visualization based on 3D virtual world technology

    NASA Astrophysics Data System (ADS)

    Huang, Fengru; Lin, Hui; Chen, Bin; Xiao, Cai

    2009-09-01

    The rapid advances in information technologies, e.g., network, graphics processing, and virtual world, have provided challenges and opportunities for new capabilities in information systems, Internet applications, and virtual geographic environments, especially geographic visualization and collaboration. In order to achieve meaningful geographic capabilities, we need to explore and understand how these technologies can be used to construct virtual geographic environments to help to engage geographic research. The generation of three-dimensional (3D) terrain plays an important part in geographical visualization, computer simulation, and virtual geographic environment applications. The paper introduces concepts and technologies of virtual worlds and virtual geographic environments, explores integration of realistic terrain and other geographic objects and phenomena of natural geographic environment based on SL/OpenSim virtual world technologies. Realistic 3D terrain visualization is a foundation of construction of a mirror world or a sand box model of the earth landscape and geographic environment. The capabilities of interaction and collaboration on geographic information are discussed as well. Further virtual geographic applications can be developed based on the foundation work of realistic terrain visualization in virtual environments.

  10. Software and Algorithms for Biomedical Image Data Processing and Visualization

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Lambert, James; Lam, Raymond

    2004-01-01

    A new software equipped with novel image processing algorithms and graphical-user-interface (GUI) tools has been designed for automated analysis and processing of large amounts of biomedical image data. The software, called PlaqTrak, has been specifically used for analysis of plaque on teeth of patients. New algorithms have been developed and implemented to segment teeth of interest from surrounding gum, and a real-time image-based morphing procedure is used to automatically overlay a grid onto each segmented tooth. Pattern recognition methods are used to classify plaque from surrounding gum and enamel, while ignoring glare effects due to the reflection of camera light and ambient light from enamel regions. The PlaqTrak system integrates these components into a single software suite with an easy-to-use GUI (see Figure 1) that allows users to do an end-to-end run of a patient's record, including tooth segmentation of all teeth, grid morphing of each segmented tooth, and plaque classification of each tooth image. The automated and accurate processing of the captured images to segment each tooth [see Figure 2(a)] and then detect plaque on a tooth-by-tooth basis is a critical component of the PlaqTrak system to do clinical trials and analysis with minimal human intervention. These features offer distinct advantages over other competing systems that analyze groups of teeth or synthetic teeth. PlaqTrak divides each segmented tooth into eight regions using an advanced graphics morphing procedure [see results on a chipped tooth in Figure 2(b)], and a pattern recognition classifier is then used to locate plaque [red regions in Figure 2(d)] and enamel regions. The morphing allows analysis within regions of teeth, thereby facilitating detailed statistical analysis such as the amount of plaque present on the biting surfaces on teeth. This software system is applicable to a host of biomedical applications, such as cell analysis and life detection, or robotic applications, such

  11. Presentation Technology in the Age of Electronic Eloquence: From Visual Aid to Visual Rhetoric

    ERIC Educational Resources Information Center

    Cyphert, Dale

    2007-01-01

    Attention to presentation technology in the public speaking classroom has grown along with its contemporary use, but instruction generally positions the topic as a subset of visual aids. As contemporary public discourse enters an age of electronic eloquence, instructional focus on verbal communication might limit students' capacity to effectively…

  12. Looking at Algorithm Visualization through the Eyes of Pre-Service ICT Teachers

    ERIC Educational Resources Information Center

    Saltan, Fatih

    2016-01-01

    The study investigated pre-service ICT teachers' perceptions of algorithm visualization (AV) with regard to appropriateness of teaching levels and contribution to learning and motivation. In order to achieve this aim, a qualitative case study was carried out. The participants consisted of 218 pre-service ICT teachers from four different…

  13. GreedEx: A Visualization Tool for Experimentation and Discovery Learning of Greedy Algorithms

    ERIC Educational Resources Information Center

    Velazquez-Iturbide, J. A.; Debdi, O.; Esteban-Sanchez, N.; Pizarro, C.

    2013-01-01

    Several years ago we presented an experimental, discovery-learning approach to the active learning of greedy algorithms. This paper presents GreedEx, a visualization tool developed to support this didactic method. The paper states the design goals of GreedEx, makes explicit the major design decisions adopted, and describes its main characteristics…

  14. A Survey of Successful Evaluations of Program Visualization and Algorithm Animation Systems

    ERIC Educational Resources Information Center

    Urquiza-Fuentes, Jaime; Velazquez-Iturbide, J. Angel

    2009-01-01

    This article reviews successful educational experiences in using program and algorithm visualizations (PAVs). First, we survey a total of 18 PAV systems that were subject to 33 evaluations. We found that half of the systems have only been tested for usability, and those were shallow inspections. The rest were evaluated with respect to their…

  15. Visual Tracking Based on an Improved Online Multiple Instance Learning Algorithm.

    PubMed

    Wang, Li Jia; Zhang, Hua

    2016-01-01

    An improved online multiple instance learning (IMIL) algorithm for visual tracking is proposed. In the IMIL algorithm, the importance of each instance's contribution to the bag probability is weighted according to its probability. A selection strategy based on an inner product is presented to choose weak classifiers from a classifier pool, which avoids computing the instance probabilities and bag probability M times. Furthermore, a feedback strategy is presented to update the weak classifiers. In the feedback update strategy, different weights are assigned to the tracking result and the template according to the maximum classifier score. Finally, the presented algorithm is compared with other state-of-the-art algorithms. The experimental results demonstrate that the proposed tracking algorithm runs in real time and is robust to occlusion and appearance changes. PMID:26843855

  16. Visual Tracking Based on an Improved Online Multiple Instance Learning Algorithm

    PubMed Central

    Wang, Li Jia; Zhang, Hua

    2016-01-01

    An improved online multiple instance learning (IMIL) algorithm for visual tracking is proposed. In the IMIL algorithm, the importance of each instance's contribution to the bag probability is weighted according to its probability. A selection strategy based on an inner product is presented to choose weak classifiers from a classifier pool, which avoids computing the instance probabilities and bag probability M times. Furthermore, a feedback strategy is presented to update the weak classifiers. In the feedback update strategy, different weights are assigned to the tracking result and the template according to the maximum classifier score. Finally, the presented algorithm is compared with other state-of-the-art algorithms. The experimental results demonstrate that the proposed tracking algorithm runs in real time and is robust to occlusion and appearance changes. PMID:26843855

  17. Robotic vision technology and algorithms for space applications

    NASA Technical Reports Server (NTRS)

    Krishen, Kumar

    1988-01-01

    The vision data requirements for various automation and robotics applications for the Space Station are discussed. The advanced systems technology involved with robotic sensing for perception is reviewed, noting the unique requirements of vision systems in space. Three areas of algorithm development are discussed: shape extraction based on illumination, shape extraction by sensor fusion, and generalized image point correspondence. Possibilities for future developments in robotic vision technology are considered.

  18. Framework and algorithms for illustrative visualizations of time-varying flows on unstructured meshes

    DOE PAGESBeta

    Rattner, Alexander S.; Guillen, Donna Post; Joshi, Alark; Garimella, Srinivas

    2016-03-17

    Photo- and physically realistic techniques are often insufficient for visualization of fluid flow simulations, especially for 3D and time-varying studies. Substantial research effort has been dedicated to the development of non-photorealistic and illustration-inspired visualization techniques for compact and intuitive presentation of such complex datasets. However, a great deal of work has been reproduced in this field, as many research groups have developed specialized visualization software. Additionally, interoperability between illustrative visualization software is limited due to diverse processing and rendering architectures employed in different studies. In this investigation, a framework for illustrative visualization is proposed, and implemented in MarmotViz, a ParaView plug-in, enabling its use on a variety of computing platforms with various data file formats and mesh geometries. Region-of-interest identification and feature-tracking algorithms incorporated into this tool are described. Implementations of multiple illustrative effect algorithms are also presented to demonstrate the use and flexibility of this framework. Here, by providing an integrated framework for illustrative visualization of CFD data, MarmotViz can serve as a valuable asset for the interpretation of simulations of ever-growing scale.

  19. Generalized Framework and Algorithms for Illustrative Visualization of Time-Varying Data on Unstructured Meshes

    SciTech Connect

    Alexander S. Rattner; Donna Post Guillen; Alark Joshi

    2012-12-01

    Photo- and physically-realistic techniques are often insufficient for visualization of simulation results, especially for 3D and time-varying datasets. Substantial research efforts have been dedicated to the development of non-photorealistic and illustration-inspired visualization techniques for compact and intuitive presentation of such complex datasets. While these efforts have yielded valuable visualization results, a great deal of work has been reproduced in studies as individual research groups often develop purpose-built platforms. Additionally, interoperability between illustrative visualization software is limited due to specialized processing and rendering architectures employed in different studies. In this investigation, a generalized framework for illustrative visualization is proposed, and implemented in marmotViz, a ParaView plugin, enabling its use on a variety of computing platforms with various data file formats and mesh geometries. Detailed descriptions of the region-of-interest identification and feature-tracking algorithms incorporated into this tool are provided. Additionally, implementations of multiple illustrative effect algorithms are presented to demonstrate the use and flexibility of this framework. By providing a framework and useful underlying functionality, the marmotViz tool can act as a springboard for future research in the field of illustrative visualization.

  20. MODIS algorithm development and data visualization using ACTS

    NASA Technical Reports Server (NTRS)

    Abbott, Mark R.

    1992-01-01

    The study of the Earth as a system will require the merger of scientific and data resources on a much larger scale than has been done in the past. New methods of scientific research, particularly in the development of geographically dispersed, interdisciplinary teams, are necessary if we are to understand the complexity of the Earth system. Even the planned satellite missions themselves, such as the Earth Observing System, will require much more interaction between researchers and engineers if they are to produce scientifically useful data products. A key component in these activities is the development of flexible, high bandwidth data networks that can be used to move large amounts of data as well as allow researchers to communicate in new ways, such as through video. The capabilities of the Advanced Communications Technology Satellite (ACTS) will allow the development of such networks. The Pathfinder global AVHRR data set and the upcoming SeaWiFS Earthprobe mission would serve as a testbed in which to develop the tools to share data and information among geographically distributed researchers. Our goal is to develop a 'Distributed Research Environment' that can be used as a model for scientific collaboration in the EOS era. The challenge is to unite the advances in telecommunications with the parallel advances in computing and networking.

  1. Validation of Statistical Sampling Algorithms in Visual Sample Plan (VSP): Summary Report

    SciTech Connect

    Nuffer, Lisa L; Sego, Landon H.; Wilson, John E.; Hassig, Nancy L.; Pulsipher, Brent A.; Matzke, Brett D.

    2009-02-18

    The U.S. Department of Homeland Security, Office of Technology Development (OTD) contracted with a set of U.S. Department of Energy national laboratories, including the Pacific Northwest National Laboratory (PNNL), to write a Remediation Guidance for Major Airports After a Chemical Attack. The report identifies key activities and issues that should be considered by a typical major airport following an incident involving release of a toxic chemical agent. Four experimental tasks were identified that would require further research in order to supplement the Remediation Guidance. One of the tasks, Task 4, OTD Chemical Remediation Statistical Sampling Design Validation, dealt with statistical sampling algorithm validation. This report documents the results of the sampling design validation conducted for Task 4. In 2005, the Government Accountability Office (GAO) performed a review of the past U.S. responses to Anthrax terrorist cases. Part of the motivation for this PNNL report was a major GAO finding that there was a lack of validated sampling strategies in the U.S. response to Anthrax cases. The report (GAO 2005) recommended that probability-based methods be used for sampling design in order to address confidence in the results, particularly when all sample results showed no remaining contamination. The GAO also expressed a desire that the methods be validated, which is the main purpose of this PNNL report. The objective of this study was to validate probability-based statistical sampling designs and the algorithms pertinent to within-building sampling that allow the user to prescribe or evaluate confidence levels of conclusions based on data collected as guided by the statistical sampling designs. Specifically, the designs found in the Visual Sample Plan (VSP) software were evaluated. VSP was used to calculate the number of samples and the sample location for a variety of sampling plans applied to an actual release site. Most of the sampling designs validated are

  2. Verification of visual odometry algorithms with an OpenGL-based software tool

    NASA Astrophysics Data System (ADS)

    Skulimowski, Piotr; Strumillo, Pawel

    2015-05-01

    We present a software tool called a stereovision egomotion sequence generator that was developed for testing visual odometry (VO) algorithms. Various approaches to single and multicamera VO algorithms are reviewed first, and then a reference VO algorithm that has served to demonstrate the program's features is described. The program offers simple tools for defining virtual static three-dimensional scenes and arbitrary six-degrees-of-freedom motion paths within such scenes, and outputs sequences of stereovision images, disparity ground-truth maps, and segmented scene images. A simple script language is proposed that simplifies tests of VO algorithms for user-defined scenarios. The program's capabilities are demonstrated by testing a reference VO technique that employs stereoscopy and feature tracking.

  3. ROCIT : a visual object recognition algorithm based on a rank-order coding scheme.

    SciTech Connect

    Gonzales, Antonio Ignacio; Reeves, Paul C.; Jones, John J.; Farkas, Benjamin D.

    2004-06-01

    This document describes ROCIT, a neural-inspired object recognition algorithm based on a rank-order coding scheme that uses a light-weight neuron model. ROCIT coarsely simulates a subset of the human ventral visual stream from the retina through the inferior temporal cortex. It was designed to provide an extensible baseline from which to improve the fidelity of the ventral stream model and explore the engineering potential of rank order coding with respect to object recognition. This report describes the baseline algorithm, the model's neural network architecture, the theoretical basis for the approach, and reviews the history of similar implementations. Illustrative results are used to clarify algorithm details. A formal benchmark to the 1998 FERET fafc test shows above average performance, which is encouraging. The report concludes with a brief review of potential algorithmic extensions for obtaining scale and rotational invariance.

  4. Volumetric visualization algorithm development for an FPGA-based custom computing machine

    NASA Astrophysics Data System (ADS)

    Sallinen, Sami J.; Alakuijala, Jyrki; Helminen, Hannu; Laitinen, Joakim

    1998-05-01

    Rendering volumetric medical images is a burdensome computational task for contemporary computers due to the large size of the data sets. Custom designed reconfigurable hardware could considerably speed up volume visualization if an algorithm suitable for the platform is used. We present an algorithm and speedup techniques for visualizing volumetric medical CT and MR images with a custom-computing machine based on a Field Programmable Gate Array (FPGA). We also present simulated performance results of the proposed algorithm calculated with a software implementation running on a desktop PC. Our algorithm is capable of generating perspective projection renderings of single and multiple isosurfaces with transparency, simulated X-ray images, and Maximum Intensity Projections (MIP). Although more speedup techniques exist for parallel projection than for perspective projection, we have constrained ourselves to perspective viewing, because of its importance in the field of radiotherapy. The algorithm we have developed is based on ray casting, and the rendering is sped up by three different methods: shading speedup by gradient precalculation, a new generalized version of Ray-Acceleration by Distance Coding (RADC), and background ray elimination by speculative ray selection.

  5. New bionic navigation algorithm based on the visual navigation mechanism of bees

    NASA Astrophysics Data System (ADS)

    Huang, Yufeng; Liu, Yi; Liu, Jianguo

    2015-04-01

    Based on research into the visual navigation mechanisms of flying insects, especially honeybees, a novel navigation algorithm integrating entropy flow with a Kalman filter is introduced in this paper. The concepts of the entropy image and entropy flow are also introduced, which characterize topographic features and measure changes of the image, respectively. To characterize the texture features and spatial distribution of an image, the new concept of the contrast entropy image is presented. Applying the contrast entropy image to the navigation algorithm and comparing its navigation performance with simulation results for the intensity entropy image leads to the conclusion that the contrast entropy image performs better and is more robust in navigation.

  6. Simulating Visual Learning and Optical Illusions via a Network-Based Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Siu, Theodore; Vivar, Miguel; Shinbrot, Troy

    We present a neural network model that uses a genetic algorithm to identify spatial patterns. We show that the model both learns and reproduces common visual patterns and optical illusions. Surprisingly, we find that the illusions generated are a direct consequence of the network architecture used. We discuss the implications of our results and the insights that we gain on how humans fall for optical illusions.

  7. Tensor dissimilarity based adaptive seeding algorithm for DT-MRI visualization with streamtubes

    NASA Astrophysics Data System (ADS)

    Weldeselassie, Yonas T.; Hamarneh, Ghassan; Weiskopf, Daniel

    2007-03-01

    In this paper, we propose an adaptive seeding strategy for visualization of diffusion tensor magnetic resonance imaging (DT-MRI) data using streamtubes. DT-MRI is a medical imaging modality that captures unique water diffusion properties and fiber orientation information of the imaged tissues. Visualizing DT-MRI data using streamtubes has the advantage that not only the anisotropic nature of the diffusion is visualized but also the underlying anatomy of biological structures is revealed. This makes streamtubes significant for the analysis of fibrous tissues in medical images. In order to avoid rendering multiple similar streamtubes, an adaptive seeding strategy is employed which takes into account similarity of tensors in a given region. The goal is to automate the process of generating seed points such that regions with dissimilar tensors are assigned more seed points compared to regions with similar tensors. The algorithm is based on tensor dissimilarity metrics that take into account both diffusion magnitudes and directions to optimize the seeding positions and density of streamtubes in order to reduce the visual clutter. Two recent advances in tensor calculus and tensor dissimilarity metrics are utilized: the Log-Euclidean and the J-divergence. Results show that adaptive seeding not only helps to cull unnecessary streamtubes that would obscure visualization but also does so without having to compute the culled streamtubes, which makes the visualization process faster.
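
    One of the cited dissimilarity measures, the Log-Euclidean distance, can be sketched directly: it is the Frobenius norm of the difference between the matrix logarithms of two diffusion tensors. How a seeding pass would use the score is only summarized in the comment; the example tensors are assumed.

        import numpy as np
        from scipy.linalg import logm

        def log_euclidean_distance(t1, t2):
            """Log-Euclidean dissimilarity between two symmetric positive-definite 3x3 tensors."""
            return np.linalg.norm(logm(t1) - logm(t2), ord="fro")

        # Usage: a nearly isotropic and a strongly anisotropic tensor are far apart,
        # so a seeding pass could place extra streamtube seeds where neighboring
        # voxels exceed a chosen dissimilarity threshold.
        iso = np.diag([1.0, 1.0, 1.0])
        aniso = np.diag([3.0, 0.4, 0.3])
        print(log_euclidean_distance(iso, aniso))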

  8. How Dynamic Visualization Technology can Support Molecular Reasoning

    NASA Astrophysics Data System (ADS)

    Levy, Dalit

    2012-11-01

    This paper reports the results of a study aimed at exploring the advantages of dynamic visualization for the development of better understanding of molecular processes. We designed a technology-enhanced curriculum module in which high school chemistry students conduct virtual experiments with dynamic molecular visualizations of solid, liquid, and gas. They interact with the visualizations and carry out inquiry activities to make and refine connections between observable phenomena and atomic level processes related to phase change. The explanations proposed by 300 pairs of students in response to pre/post-assessment items have been analyzed using a scale for measuring the level of molecular reasoning. Results indicate that from pretest to posttest, students make progress in their level of molecular reasoning and are better able to connect intermolecular forces and phase change in their explanations. The paper presents the results through the lens of improvement patterns and the metaphor of the "ladder of molecular reasoning," and discusses how this adds to our understanding of the benefits of interacting with dynamic molecular visualizations.

  9. Incorporating a Wheeled Vehicle Model in a New Monocular Visual Odometry Algorithm for Dynamic Outdoor Environments

    PubMed Central

    Jiang, Yanhua; Xiong, Guangming; Chen, Huiyan; Lee, Dah-Jye

    2014-01-01

    This paper presents a monocular visual odometry algorithm that incorporates a wheeled vehicle model for ground vehicles. The main innovation of this algorithm is to use the single-track bicycle model to interpret the relationship between the yaw rate and side slip angle, which are the two most important parameters that describe the motion of a wheeled vehicle. Additionally, the pitch angle is also considered since the planar-motion hypothesis often fails due to the dynamic characteristics of wheel suspensions and tires in real-world environments. Linearization is used to calculate a closed-form solution of the motion parameters that works as a hypothesis generator in a RAndom SAmple Consensus (RANSAC) scheme to reduce the complexity in solving equations involving trigonometric functions. All inliers found are used to refine the winner solution through minimizing the reprojection error. Finally, the algorithm is applied to real-time on-board visual localization applications. Its performance is evaluated by comparing against the state-of-the-art monocular visual odometry methods using both synthetic data and publicly available datasets over several kilometers in dynamic outdoor environments. PMID:25256109
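
    The hypothesize-and-verify structure described above can be sketched as a generic RANSAC loop followed by inlier-only refinement; here solve_minimal and reprojection_error are hypothetical placeholders for the paper's closed-form bicycle-model solver and its reprojection error, and the sample size, iteration count and threshold are assumed.

        import numpy as np
        from scipy.optimize import least_squares

        def ransac_motion(matches, solve_minimal, reprojection_error,
                          min_set=3, iters=200, thresh=1.0, seed=0):
            """Generic hypothesize-and-verify loop followed by inlier-only refinement.

            solve_minimal(subset) -> parameter vector (hypothetical closed-form solver)
            reprojection_error(params, matches) -> per-match residuals (hypothetical)
            `matches` is assumed to be a NumPy array of correspondences.
            """
            rng = np.random.default_rng(seed)
            best_params, best_inliers, best_count = None, None, -1
            for _ in range(iters):
                subset = matches[rng.choice(len(matches), min_set, replace=False)]
                params = solve_minimal(subset)                         # hypothesis generator
                inliers = np.abs(reprojection_error(params, matches)) < thresh
                if inliers.sum() > best_count:
                    best_params, best_inliers, best_count = params, inliers, int(inliers.sum())
            # refine the winning hypothesis by minimizing reprojection error on inliers only
            refined = least_squares(lambda p: reprojection_error(p, matches[best_inliers]),
                                    best_params)
            return refined.x, best_inliers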

  10. Incorporating a wheeled vehicle model in a new monocular visual odometry algorithm for dynamic outdoor environments.

    PubMed

    Jiang, Yanhua; Xiong, Guangming; Chen, Huiyan; Lee, Dah-Jye

    2014-01-01

    This paper presents a monocular visual odometry algorithm that incorporates a wheeled vehicle model for ground vehicles. The main innovation of this algorithm is to use the single-track bicycle model to interpret the relationship between the yaw rate and side slip angle, which are the two most important parameters that describe the motion of a wheeled vehicle. Additionally, the pitch angle is also considered since the planar-motion hypothesis often fails due to the dynamic characteristics of wheel suspensions and tires in real-world environments. Linearization is used to calculate a closed-form solution of the motion parameters that works as a hypothesis generator in a RAndom SAmple Consensus (RANSAC) scheme to reduce the complexity in solving equations involving trigonometric functions. All inliers found are used to refine the winner solution through minimizing the reprojection error. Finally, the algorithm is applied to real-time on-board visual localization applications. Its performance is evaluated by comparing against the state-of-the-art monocular visual odometry methods using both synthetic data and publicly available datasets over several kilometers in dynamic outdoor environments. PMID:25256109

  11. Visual Servoing: A technology in search of an application

    SciTech Connect

    Feddema, J.T.

    1994-05-01

    Considerable research has been performed on Robotic Visual Servoing (RVS) over the past decade. Using real-time visual feedback, researchers have demonstrated that robotic systems can pick up moving parts, insert bolts, apply sealant, and guide vehicles. With the rapid improvements being made in computing and image processing hardware, one would expect that every robot manufacturer would have an RVS option by the end of the 1990s. So why aren't the Fanucs, ABBs, Adepts, and Motomans of the world investing heavily in RVS? I would suggest four reasons: cost, complexity, reliability, and lack of demand. Solutions to the first three are approaching the point where RVS could be commercially available; however, the lack of demand is keeping RVS from becoming a reality in the near future. A new set of applications is needed to focus near-term RVS development. These must be applications which currently do not have solutions. Once developed and working in one application area, the technology is more likely to quickly spread to other areas. DOE has several applications that are looking for technological solutions, such as agile weapons production, weapons disassembly, decontamination and dismantlement of nuclear facilities, and hazardous waste remediation. This paper will examine a few of these areas and suggest directions for application-driven visual servoing research.

  12. High-sensitivity strain visualization using electroluminescence technologies

    NASA Astrophysics Data System (ADS)

    Xu, Jian; Jo, Hongki

    2016-04-01

    Visualizing mechanical strain/stress changes is an emerging area in structural health monitoring. Several ways are available to visualize strain changes through the color/brightness change of materials subjected to mechanical stresses, for example, using mechanoluminescence (ML) materials and mechanoresponsive polymers (MRP). However, these approaches are not yet effectively applicable to civil engineering systems, due to insufficient sensitivity to the low-level strain of typical civil structures and their limitations in measuring both static and dynamic strain. In this study, the design and validation of high-sensitivity strain visualization using electroluminescence technologies are presented. A high-sensitivity Wheatstone bridge circuit, whose balance is precisely controllable, is used with a gain-adjustable amplifier. Monochrome electroluminescence (EL) technology is employed to convert both static and dynamic strain changes into brightness/color changes of the EL materials, through either a brightness change mode (BCM) or a color alternation mode (CAM). A prototype has been made and calibrated in the lab, and the linearity between strain and brightness change has been investigated.
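
    To give a feel for the signal levels that make high gain and careful bridge balancing necessary, a small worked example of the quarter-bridge approximation V_out ≈ V_exc · GF · ε / 4 is shown below; the gauge factor, excitation voltage, strain level and amplifier gain are assumed typical values, not figures from the study.

        # Quarter-bridge approximation: V_out ≈ V_exc * GF * strain / 4
        gauge_factor = 2.0       # typical metallic foil gauge (assumed)
        v_excitation = 5.0       # bridge excitation voltage in volts (assumed)
        strain = 50e-6           # 50 microstrain, a low level typical of civil structures

        v_out = v_excitation * gauge_factor * strain / 4.0
        print(f"bridge output: {v_out * 1e6:.1f} uV")             # ~125 uV

        gain = 1000.0            # gain-adjustable amplifier setting (assumed)
        print(f"amplified signal: {v_out * gain * 1e3:.1f} mV")   # ~125 mV to drive the EL stage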

  13. Creation of an Accurate Algorithm to Detect Snellen Best Documented Visual Acuity from Ophthalmology Electronic Health Record Notes

    PubMed Central

    French, Dustin D; Gill, Manjot; Mitchell, Christopher; Jackson, Kathryn; Kho, Abel; Bryar, Paul J

    2016-01-01

    Background Visual acuity is the primary measure used in ophthalmology to determine how well a patient can see. Visual acuity for a single eye may be recorded in multiple ways for a single patient visit (e.g., Snellen vs. Jäger units vs. font print size), and be recorded for either distance or near vision. Capturing the best documented visual acuity (BDVA) of each eye in an individual patient visit is an important step for making electronic ophthalmology clinical notes useful in research. Objective Currently, there is limited methodology for capturing BDVA in an efficient and accurate manner from electronic health record (EHR) notes. We developed an algorithm to detect BDVA for right and left eyes from defined fields within electronic ophthalmology clinical notes. Methods We designed an algorithm to detect the BDVA from defined fields within 295,218 ophthalmology clinical notes with visual acuity data present. About 5668 unique responses were identified and an algorithm was developed to map all of the unique responses to a structured list of Snellen visual acuities. Results Visual acuity was captured from a total of 295,218 ophthalmology clinical notes during the study dates. The algorithm identified all visual acuities in the defined visual acuity section for each eye and returned a single BDVA for each eye. A clinician chart review of 100 random patient notes showed 99% accuracy in detecting BDVA from these records, with 1% observed error. Conclusions Our algorithm successfully captures the best documented Snellen distance visual acuity from ophthalmology clinical notes and transforms a variety of inputs into a structured Snellen equivalent list. Our work, to the best of our knowledge, represents the first attempt at capturing visual acuity accurately from large numbers of electronic ophthalmology notes. Use of this algorithm can benefit research groups interested in assessing visual acuity for patient-centered outcomes. All codes used for this study are currently
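
    The mapping step can be sketched as follows: free-text acuity entries are normalized to Snellen denominators with a lookup table and a regular expression, and the best documented value is the smallest denominator seen for that eye in the visit. The lookup values and the regex are illustrative assumptions, not the published mapping of the 5668 responses.

        import re

        # Illustrative lookup from non-Snellen notations to Snellen equivalents (assumed values).
        EQUIVALENTS = {"CF": "20/200", "HM": "20/400", "J1": "20/25", "J2": "20/30"}

        def to_denominator(text):
            """Return the Snellen denominator for one recorded acuity, or None if unparseable."""
            text = text.strip().upper()
            text = EQUIVALENTS.get(text, text)
            m = re.match(r"20/(\d+)", text)
            return int(m.group(1)) if m else None

        def best_documented_va(entries):
            """Best documented VA = the entry with the smallest Snellen denominator."""
            denoms = [d for d in (to_denominator(e) for e in entries) if d is not None]
            return f"20/{min(denoms)}" if denoms else None

        # Usage: several notations recorded for one eye in a single visit.
        print(best_documented_va(["20/40", "J1", "CF"]))   # -> 20/25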

  14. DOE's SciDAC Visualization and Analytics Center for EnablingTechnologies -- Strategy for Petascale Visual Data Analysis Success

    SciTech Connect

    Bethel, E Wes; Johnson, Chris; Aragon, Cecilia; Rubel, Oliver; Weber, Gunther; Pascucci, Valerio; Childs, Hank; Bremer, Peer-Timo; Whitlock, Brad; Ahern, Sean; Meredith, Jeremey; Ostrouchov, George; Joy, Ken; Hamann, Bernd; Garth, Christoph; Cole, Martin; Hansen, Charles; Parker, Steven; Sanderson, Allen; Silva, Claudio; Tricoche, Xavier

    2007-10-01

    The focus of this article is on how one group of researchers, the DOE SciDAC Visualization and Analytics Center for Enabling Technologies (VACET), is tackling the daunting task of enabling knowledge discovery through visualization and analytics on some of the world's largest and most complex datasets and on some of the world's largest computational platforms. As a Center for Enabling Technology, VACET's mission is the creation of usable, production-quality visualization and knowledge discovery software infrastructure that runs on large, parallel computer systems at DOE's Open Computing facilities and that provides solutions to the challenging visual data exploration and knowledge discovery needs of modern science, particularly the DOE science community.

  15. Spatial Information Processing: Standards-Based Open Source Visualization Technology

    NASA Astrophysics Data System (ADS)

    Hogan, P.

    2009-12-01

    Spatial information intelligence is a global issue that will increasingly affect our ability to survive as a species. Collectively we must better appreciate the complex relationships that make life on Earth possible. Providing spatial information in its native context can accelerate our ability to process that information. To maximize this ability to process information, three basic elements are required: data delivery (server technology), data access (client technology), and data processing (information intelligence). NASA World Wind provides open source client and server technologies based on open standards. The possibilities for data processing and data sharing are enhanced by this inclusive infrastructure for geographic information. It is interesting that this open source and open standards approach, unfettered by proprietary constraints, simultaneously provides for entirely proprietary use of this same technology. 1. WHY WORLD WIND? NASA World Wind began as a single program with specific functionality, to deliver NASA content. But as the possibilities for virtual globe technology became more apparent, we found that while enabling a new class of information technology, we were also getting in the way. Researchers, developers and even users expressed their desire for World Wind functionality in ways that would serve their specific needs. They want it in their web pages. They want to add their own features. They want to manage their own data. They told us that only with this kind of flexibility could their objectives and the potential for this technology be truly realized. World Wind client technology is a set of development tools, a software development kit (SDK) that allows a software engineer to create applications requiring geographic visualization technology. 2. MODULAR COMPONENTRY Accelerated evolution of a technology requires that the essential elements of that technology be modular components such that each can advance independently of the others.

  16. Chaotic Visual Cryptosystem Using Empirical Mode Decomposition Algorithm for Clinical EEG Signals.

    PubMed

    Lin, Chin-Feng

    2016-03-01

    This paper proposes a chaotic visual cryptosystem using an empirical mode decomposition (EMD) algorithm for clinical electroencephalography (EEG) signals. The basic design concept is to integrate two-dimensional (2D) chaos-based encryption scramblers, the EMD algorithm, and a 2D block interleaver method to achieve a robust and unpredictable visual encryption mechanism. Energy-intrinsic mode function (IMF) distribution features of the clinical EEG signal are developed for the chaotic encryption parameters. The maximum and second maximum energy ratios of the IMFs of a clinical EEG signal to its referred total energy are used for the starting points of the logistic-map type encrypted chaotic signals in the x and y vectors, respectively. The minimum and second minimum energy ratios of the IMFs of a clinical EEG signal to its referred total energy are used for the security level parameters of the logistic-map type encrypted chaotic signals in the x and y vectors, respectively. Three EEG databases and seventeen clinical EEG signals were tested, and the average r and mse values are 0.0201 and 4.2626 × 10(-29), respectively, between the original and the chaotically encrypted (through EMD) clinical EEG signals. The chaotically encrypted signal cannot be recovered if there is an error in the input parameters, for example, an initial point error of 0.000001 %. The encryption effects of the proposed chaotic EMD visual encryption mechanism are excellent. PMID:26645316
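
    The general flavor of parameter-sensitive chaotic scrambling can be illustrated with a much-simplified sketch: an IMF energy ratio seeds a logistic map whose sequence drives a permutation of the signal samples. The seeding rule, the logistic parameter r and the permutation-only "encryption" are assumptions; the paper's full 2D scrambler and block interleaver are not reproduced here.

        import numpy as np

        def logistic_sequence(x0, n, r=3.99):
            """Chaotic logistic-map sequence x_{k+1} = r * x_k * (1 - x_k)."""
            xs = np.empty(n)
            x = x0
            for i in range(n):
                x = r * x * (1.0 - x)
                xs[i] = x
            return xs

        def scramble(signal, imf_energy_ratios):
            """Permute samples with a logistic map seeded by the largest IMF energy ratio (assumed rule)."""
            x0 = max(imf_energy_ratios)
            order = np.argsort(logistic_sequence(x0, len(signal)))
            return signal[order], order

        def unscramble(scrambled, order):
            restored = np.empty_like(scrambled)
            restored[order] = scrambled
            return restored

        # Usage: a tiny change to the seed (e.g., 1e-6) yields a completely different order,
        # so recovery fails without the exact IMF-derived parameters.
        eeg = np.sin(np.linspace(0, 20, 1000)) + 0.1 * np.random.randn(1000)
        enc, order = scramble(eeg, [0.62, 0.21, 0.09])
        assert np.allclose(unscramble(enc, order), eeg)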

  17. An on-line calibration algorithm for external parameters of visual system based on binocular stereo cameras

    NASA Astrophysics Data System (ADS)

    Wang, Liqiang; Liu, Zhen; Zhang, Zhonghua

    2014-11-01

    Stereo vision is key in visual measurement, robot vision, and autonomous navigation. Before operating a stereo vision system, the intrinsic parameters of each camera and the external parameters of the system need to be calibrated. In engineering practice, the intrinsic parameters remain unchanged after the cameras are calibrated, but the positional relationship between the cameras can change because of vibration, knocks and pressure in the vicinity of railways or motor workshops. Especially for large baselines, even minute changes in translation or rotation can affect the epipolar geometry and scene triangulation to such a degree that the visual system becomes disabled. A technology providing both real-time checking and on-line recalibration of the external parameters of the stereo system therefore becomes particularly important. This paper presents an on-line method for checking and recalibrating the positional relationship between stereo cameras. In epipolar geometry, the external parameters of the cameras can be obtained by factorization of the fundamental matrix, which offers a way to calculate the external camera parameters without any special targets. If the intrinsic camera parameters are known, the external parameters of the system can be calculated from a number of random matched points. The process is: (i) estimating the fundamental matrix from the feature point correspondences; (ii) computing the essential matrix from the fundamental matrix; (iii) obtaining the external parameters by decomposition of the essential matrix. In the step of computing the fundamental matrix, traditional methods are sensitive to noise and cannot ensure estimation accuracy. We consider the feature distribution in the actual scene images and introduce a regional weighted normalization algorithm to improve the accuracy of the fundamental matrix estimation. In contrast to traditional algorithms, experiments on simulated data prove that the method improves estimation
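
    Steps (ii) and (iii) of the process can be sketched in a few lines of NumPy: the essential matrix is formed as E = K2^T F K1 and decomposed by SVD into the four candidate rotation/translation pairs; estimating F from correspondences (step i) is assumed to have been done already, for instance with a normalized eight-point method.

        import numpy as np

        def decompose_external(F, K1, K2):
            """Recover candidate (R, t) pairs from a fundamental matrix and camera intrinsics.

            E = K2^T F K1; the SVD of E yields two rotations and a translation direction,
            giving the usual four (R, t) candidates to be disambiguated by cheirality.
            """
            E = K2.T @ F @ K1
            U, _, Vt = np.linalg.svd(E)
            # enforce det(U) = det(V) = +1 so the results are proper rotations
            if np.linalg.det(U) < 0:
                U[:, -1] *= -1
            if np.linalg.det(Vt) < 0:
                Vt[-1, :] *= -1
            W = np.array([[0.0, -1.0, 0.0],
                          [1.0,  0.0, 0.0],
                          [0.0,  0.0, 1.0]])
            R1, R2 = U @ W @ Vt, U @ W.T @ Vt
            t = U[:, 2]                      # translation known only up to scale
            return [(R1, t), (R1, -t), (R2, t), (R2, -t)]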

  18. Algorithm-supported visual error correction (AVEC) of heart rate measurements in dogs, Canis lupus familiaris.

    PubMed

    Schöberl, Iris; Kortekaas, Kim; Schöberl, Franz F; Kotrschal, Kurt

    2015-12-01

    Dog heart rate (HR) is characterized by a respiratory sinus arrhythmia, and therefore makes an automatic algorithm for error correction of HR measurements hard to apply. Here, we present a new method of error correction for HR data collected with the Polar system, including (1) visual inspection of the data, (2) a standardized way to decide with the aid of an algorithm whether or not a value is an outlier (i.e., "error"), and (3) the subsequent removal of this error from the data set. We applied our new error correction method to the HR data of 24 dogs and compared the uncorrected and corrected data, as well as the algorithm-supported visual error correction (AVEC) with the Polar error correction. The results showed that fewer values were identified as errors after AVEC than after the Polar error correction (p < .001). After AVEC, the HR standard deviation and variability (HRV; i.e., RMSSD, pNN50, and SDNN) were significantly greater than after correction by the Polar tool (all p < .001). Furthermore, the HR data strings with deleted values seemed to be closer to the original data than were those with inserted means. We concluded that our method of error correction is more suitable for dog HR and HR variability than is the customized Polar error correction, especially because AVEC decreases the likelihood of Type I errors, preserves the natural variability in HR, and does not lead to a time shift in the data. PMID:25540125
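
    The decision rule at the core of such an algorithm-supported inspection might look like the following sketch, which flags an interval that deviates from the local median of its neighbors by more than a tolerance and leaves confirmation and deletion to the analyst; the window length and tolerance are assumed values, not the published criteria.

        import numpy as np

        def flag_outliers(rr_ms, window=5, tolerance=0.25):
            """Flag RR intervals deviating from the local median by more than `tolerance` (fraction).

            The flags are suggestions for visual inspection, not automatic removal.
            """
            rr = np.asarray(rr_ms, dtype=float)
            half = window // 2
            flags = np.zeros(rr.size, dtype=bool)
            for i in range(rr.size):
                lo, hi = max(0, i - half), min(rr.size, i + half + 1)
                neighbors = np.delete(rr[lo:hi], i - lo)   # local window without the value itself
                med = np.median(neighbors)
                flags[i] = abs(rr[i] - med) > tolerance * med
            return flags

        # Usage: a dropped beat shows up as one interval roughly twice its neighbors.
        rr = [620, 640, 635, 1270, 630, 625, 640]
        print(flag_outliers(rr))   # only the 1270 ms value is flagged for review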

  19. A disparity-based stereo algorithm for mobility hazard avoidance in artificial visual prosthetic system

    NASA Astrophysics Data System (ADS)

    Li, Ruonan; Zhang, Xudong

    2005-10-01

    Helping the walking patient avoid mobility hazards is one of the key tasks of an artificial visual prosthesis, a newly emerging research area. A mobility hazard detection algorithm designed especially for such a system is proposed, in which a U-V-D space model is constructed for the non-hazard targets in the ground plane, so that objects violating the model are easily identified as obstacles or pits. An iterative maximum likelihood procedure is devised to fit the model accurately and robustly while achieving fast convergence.
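
    A commonly used ground-plane test in disparity space, offered here only as a loose stand-in for the paper's U-V-D construction, fits a linear relation d ≈ a·v + b between image row and disparity for ground pixels and flags pixels that deviate from it (obstacles stick out, pits fall below); the linear ground model and the threshold are assumptions.

        import numpy as np

        def fit_ground_model(rows, disparities):
            """Least-squares fit of d ≈ a*v + b for pixels assumed to lie on the ground."""
            A = np.column_stack([rows, np.ones_like(rows)])
            (a, b), *_ = np.linalg.lstsq(A, disparities, rcond=None)
            return a, b

        def hazard_mask(disparity_map, a, b, thresh=2.0):
            """Flag pixels whose disparity deviates from the ground model by more than thresh."""
            h, w = disparity_map.shape
            v = np.arange(h)[:, None] * np.ones((1, w))        # row index of every pixel
            expected = a * v + b
            return np.abs(disparity_map - expected) > thresh   # True marks obstacles or pits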

  20. Assessing the GOANNA Visual Field Algorithm Using Artificial Scotoma Generation on Human Observers

    PubMed Central

    Chong, Luke X.; Turpin, Andrew; McKendrick, Allison M.

    2016-01-01

    Purpose To validate the performance of a new perimetric algorithm (Gradient-Oriented Automated Natural Neighbor Approach; GOANNA) in humans using a novel combination of computer simulation and human testing, which we call Artificial Scotoma Generation (ASG). Methods Fifteen healthy observers were recruited. Baseline conventional automated perimetry was performed on the Octopus 900. Visual field sensitivity was measured using two different procedures: GOANNA and Zippy Estimation by Sequential Testing (ZEST). Four different scotoma types were induced in each observer by implementing a novel technique that inserts a step between the algorithm and the perimeter, which in turn alters presentation levels to simulate scotomata in human observers. Accuracy, precision, and unique number of locations tested were measured, with the maximum difference between a location and its neighbors (Max_d) used to stratify results. Results GOANNA sampled significantly more locations than ZEST (paired t-test, P < 0.001), while maintaining comparable test times. Difference plots showed that GOANNA displayed greater accuracy than ZEST when Max_d was in the 10 to 30 dB range (with the exception of Max_d = 20 dB; Wilcoxon, P < 0.001). Similarly, GOANNA demonstrated greater precision than ZEST when Max_d was in the 20 to 30 dB range (Wilcoxon, P < 0.001). Conclusions We have introduced a novel method for assessing accuracy of perimetric algorithms in human observers. Results observed in the current study agreed with the results seen in earlier simulation studies, and thus provide support for performing larger scale clinical trials with GOANNA in the future. Translational Relevance The GOANNA perimetric testing algorithm offers a new paradigm for visual field testing where locations for testing are chosen that target scotoma borders. Further, the ASG methodology used in this paper to assess GOANNA shows promise as a hybrid between computer simulation and patient testing, which may allow more

  1. Restoring visual perception using microsystem technologies: engineering and manufacturing perspectives.

    PubMed

    Krisch, I; Hosticka, B J

    2007-01-01

    Microsystem technologies offer significant advantages in the development of neural prostheses. In the last two decades, it has become feasible to develop intelligent prostheses that are fully implantable into the human body with respect to functionality, complexity, size, weight, and compactness. Design and development enforce collaboration of various disciplines including physicians, engineers, and scientists. The retina implant system can be taken as one sophisticated example of a prosthesis which bypasses neural defects and enables direct electrical stimulation of nerve cells. This micro implantable visual prosthesis assists blind patients to return to the normal course of life. The retina implant is intended for patients suffering from retinitis pigmentosa or macular degeneration. In this contribution, we focus on the epiretinal prosthesis and discuss topics like system design, data and power transfer, fabrication, packaging and testing. In detail, the system is based upon an implantable micro electro stimulator which is powered and controlled via a wireless inductive link. Microelectronic circuits for data encoding and stimulation are assembled on flexible substrates with an integrated electrode array. The implant system is encapsulated using parylene C and silicone rubber. Results extracted from experiments in vivo demonstrate the retinotopic activation of the visual cortex. PMID:17691337

  2. Autonomous Visual Navigation of an Indoor Environment Using a Parsimonious, Insect Inspired Familiarity Algorithm

    PubMed Central

    Brayfield, Brad P.

    2016-01-01

    The navigation of bees and ants from hive to food and back has captivated people for more than a century. Recently, the Navigation by Scene Familiarity Hypothesis (NSFH) has been proposed as a parsimonious approach that is congruent with the limited neural elements of these insects’ brains. In the NSFH approach, an agent completes an initial training excursion, storing images along the way. To retrace the path, the agent scans the area and compares the current scenes to those previously experienced. By turning and moving to minimize the pixel-by-pixel differences between encountered and stored scenes, the agent is guided along the path without having memorized the sequence. An important premise of the NSFH is that the visual information of the environment is adequate to guide navigation without aliasing. Here we demonstrate that an image landscape of an indoor setting possesses ample navigational information. We produced a visual landscape of our laboratory and part of the adjoining corridor consisting of 2816 panoramic snapshots arranged in a grid at 12.7-cm centers. We show that pixel-by-pixel comparisons of these images yield robust translational and rotational visual information. We also produced a simple algorithm that tracks previously experienced routes within our lab based on an insect-inspired scene familiarity approach and demonstrate that adequate visual information exists for an agent to retrace complex training routes, including those where the path’s end is not visible from its origin. We used this landscape to systematically test the interplay of sensor morphology, angles of inspection, and similarity threshold with the recapitulation performance of the agent. Finally, we compared the relative information content and chance of aliasing within our visually rich laboratory landscape to scenes acquired from indoor corridors with more repetitive scenery. PMID:27119720
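
    The familiarity comparison at the heart of the approach can be sketched in a few lines: the agent's current panoramic view is rotated through candidate headings and scored, pixel by pixel, against every stored training view, and the agent turns toward the heading with the lowest difference. Array sizes and the rotation step below are illustrative assumptions, not the parameters used in the study.

      import numpy as np

      def familiarity(view, memory):
          """Lowest sum of squared pixel differences against all stored views."""
          return min(float(np.sum((view - m) ** 2)) for m in memory)

      def best_heading(panorama, memory, step_px=4):
          """Scan headings by circularly shifting the panorama; return the best shift."""
          shifts = range(0, panorama.shape[1], step_px)
          scores = [familiarity(np.roll(panorama, s, axis=1), memory) for s in shifts]
          return list(shifts)[int(np.argmin(scores))]

      # Usage: store views during a training run, then at each step turn by the
      # shift that minimizes unfamiliarity and move forward a small fixed distance.
      memory = [np.random.rand(16, 90) for _ in range(50)]   # stand-in training views
      current = np.random.rand(16, 90)
      turn = best_heading(current, memory)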

  3. Autonomous Visual Navigation of an Indoor Environment Using a Parsimonious, Insect Inspired Familiarity Algorithm.

    PubMed

    Gaffin, Douglas D; Brayfield, Brad P

    2016-01-01

    The navigation of bees and ants from hive to food and back has captivated people for more than a century. Recently, the Navigation by Scene Familiarity Hypothesis (NSFH) has been proposed as a parsimonious approach that is congruent with the limited neural elements of these insects' brains. In the NSFH approach, an agent completes an initial training excursion, storing images along the way. To retrace the path, the agent scans the area and compares the current scenes to those previously experienced. By turning and moving to minimize the pixel-by-pixel differences between encountered and stored scenes, the agent is guided along the path without having memorized the sequence. An important premise of the NSFH is that the visual information of the environment is adequate to guide navigation without aliasing. Here we demonstrate that an image landscape of an indoor setting possesses ample navigational information. We produced a visual landscape of our laboratory and part of the adjoining corridor consisting of 2816 panoramic snapshots arranged in a grid at 12.7-cm centers. We show that pixel-by-pixel comparisons of these images yield robust translational and rotational visual information. We also produced a simple algorithm that tracks previously experienced routes within our lab based on an insect-inspired scene familiarity approach and demonstrate that adequate visual information exists for an agent to retrace complex training routes, including those where the path's end is not visible from its origin. We used this landscape to systematically test the interplay of sensor morphology, angles of inspection, and similarity threshold with the recapitulation performance of the agent. Finally, we compared the relative information content and chance of aliasing within our visually rich laboratory landscape to scenes acquired from indoor corridors with more repetitive scenery. PMID:27119720

  4. Increasing observer objectivity with audio-visual technology: the Sphygmocorder.

    PubMed

    Atkins; O'Brien; Wesseling; Guelen

    1997-10-01

    The most fallible component of blood pressure measurement is the human observer. The traditional technique of measuring blood pressure does not allow the result of the measurement to be checked by independent observers, thereby leaving the method open to bias. In the Sphygmocorder, several components used to measure blood pressure have been combined innovatively with audio-visual recording technology to produce a system consisting of a mercury sphygmomanometer, an occluding cuff, an automatic inflation-deflation source, a stethoscope, a microphone capable of detecting Korotkoff sounds, a camcorder and a display screen. The accuracy of the Sphygmocorder against the trained human observer has been confirmed previously using the protocol of the British Hypertension Society and in this article the updated system incorporating a number of innovations is described. PMID:10234128

  5. A novel color image compression algorithm using the human visual contrast sensitivity characteristics

    NASA Astrophysics Data System (ADS)

    Yao, Juncai; Liu, Guizhong

    2016-07-01

    In order to achieve a higher image compression ratio and improve visual perception of the decompressed image, a novel color image compression scheme based on the contrast sensitivity characteristics of the human visual system (HVS) is proposed. In the proposed scheme, the image is first converted into the YCrCb color space and divided into sub-blocks. Afterwards, the discrete cosine transform is carried out for each sub-block, and three quantization matrices are built to quantize the frequency spectrum coefficients of the images by combining the contrast sensitivity characteristics of the HVS. The Huffman algorithm is used to encode the quantized data. The inverse process involves decompression and matching to reconstruct the decompressed color image. Simulations are carried out for two color images. The results show that, at approximately the same compression ratio, the average structural similarity index measurement (SSIM) and peak signal-to-noise ratio (PSNR) could be increased by 2.78% and 5.48%, respectively, compared with joint photographic experts group (JPEG) compression. The results indicate that the proposed compression algorithm is feasible and effective, achieving a higher compression ratio while preserving encoding and image quality, and can fully meet the needs of everyday storage and transmission of color images.
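
    The block-transform and quantization stage can be sketched as follows; the CSF-style quantization matrix below is an illustrative stand-in for the three matrices derived in the paper, and only the luminance path is shown.

      import numpy as np

      def dct_matrix(n=8):
          """Orthonormal DCT-II matrix for an n x n block."""
          k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
          c = np.sqrt(2.0 / n) * np.cos((2 * i + 1) * k * np.pi / (2 * n))
          c[0, :] = np.sqrt(1.0 / n)
          return c

      def hvs_quant_matrix(n=8, base=16.0, slope=6.0):
          """Coarser quantization at higher spatial frequencies (lower CSF)."""
          u, v = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
          return base + slope * (u + v)

      def quantize_block(block, C, Q):
          coeffs = C @ block @ C.T            # 2-D DCT of the 8x8 block
          return np.round(coeffs / Q)         # HVS-weighted quantization

      C, Q = dct_matrix(), hvs_quant_matrix()
      block = np.random.randint(0, 256, (8, 8)).astype(float) - 128.0
      symbols = quantize_block(block, C, Q)   # these integers would be Huffman-coded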

  6. A rib-specific multimodal registration algorithm for fused unfolded rib visualization using PET/CT

    NASA Astrophysics Data System (ADS)

    Kaftan, Jens N.; Kopaczka, Marcin; Wimmer, Andreas; Platsch, Günther; Declerck, Jérôme

    2014-03-01

    Respiratory motion affects the alignment of PET and CT volumes from PET/CT examinations in a non-rigid manner. This becomes particularly apparent if reviewing fine anatomical structures such as ribs when assessing bone metastases, which frequently occur in many advanced cancers. To make this routine diagnostic task more efficient, a fused unfolded rib visualization for 18F-NaF PET/CT is presented. It allows the whole rib cage to be reviewed in a single image. This advanced visualization is enabled by a novel rib-specific registration algorithm that rigidly optimizes the local alignment of each individual rib in both modalities based on a matched filter response function. More specifically, rib centerlines are automatically extracted from CT and subsequently individually aligned to the corresponding bone-specific PET rib uptake pattern. The proposed method has been validated on 20 PET/CT scans acquired at different clinical sites. It has been demonstrated that the presented rib-specific registration method significantly improves the rib alignment without having to run complex deformable registration algorithms. At the same time, it guarantees that rib lesions are not further deformed, which may otherwise affect quantitative measurements such as SUVs. Considering clinically relevant distance thresholds, the centerline portion with good alignment compared to the ground truth improved from 60.6% to 86.7% after registration, while approximately 98% can still be considered acceptably aligned.

  7. Automatic mapping of visual cortex receptive fields: a fast and precise algorithm.

    PubMed

    Fiorani, Mario; Azzi, João C B; Soares, Juliana G M; Gattass, Ricardo

    2014-01-15

    An important issue for neurophysiological studies of the visual system is the definition of the region of the visual field that can modify a neuron's activity (i.e., the neuron's receptive field - RF). Usually a trade-off exists between precision and the time required to map a RF. Manual methods (qualitative) are fast but impose a variable degree of imprecision, while quantitative methods are more precise but usually require more time. We describe a rapid quantitative method for mapping visual RFs that is derived from computerized tomography and named back-projection. This method finds the intersection of responsive regions of the visual field based on spike density functions that are generated over time in response to long bars moving in different directions. An algorithm corrects the response profiles for latencies and allows for the conversion of the time domain into a 2D-space domain. The final product is an RF map that shows the distribution of the neuronal activity in visual-spatial coordinates. In addition to mapping the RF, this method also provides functional properties, such as latency, orientation and direction preference indexes. This method exhibits the following beneficial properties: (a) speed; (b) ease of implementation; (c) precise RF localization; (d) sensitivity (this method can map RFs based on few responses); (e) reliability (this method provides consistent information about RF shapes and sizes, which will allow for comparative studies); (f) comprehensiveness (this method can scan for RFs over an extensive area of the visual field); (g) informativeness (it provides functional quantitative data about the RF); and (h) usefulness (this method can map RFs in regions without direct retinal inputs, such as the cortical representations of the optic disc and of retinal lesions, which should allow for studies of functional connectivity, reorganization and neural plasticity). Furthermore, our method allows for precise mapping of RFs in a 30° by 30
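
    The tomography-style reconstruction can be sketched as follows: each latency-corrected spike-density function is treated as a 1-D response profile along the sweep direction of its bar, and smearing every profile across the map perpendicular to its direction and summing concentrates activity at the receptive field. The grid size and the time-to-position conversion below are illustrative assumptions, not the paper's calibration.

      import numpy as np

      def latency_correct(spike_density, latency_bins):
          """Shift a spike-density function back by the estimated response latency."""
          return np.roll(spike_density, -latency_bins)

      def backproject(profiles, angles_deg, grid=64):
          """Accumulate 1-D sweep profiles into a 2-D receptive-field map."""
          ys, xs = np.mgrid[0:grid, 0:grid]
          ys = ys - grid / 2.0
          xs = xs - grid / 2.0
          rf = np.zeros((grid, grid))
          for profile, ang in zip(profiles, angles_deg):
              profile = np.asarray(profile, dtype=float)
              a = np.deg2rad(ang)
              # signed distance of each pixel along the sweep direction
              pos = xs * np.cos(a) + ys * np.sin(a)
              idx = np.clip(((pos + grid / 2.0) / grid * profile.size).astype(int),
                            0, profile.size - 1)
              rf += profile[idx]              # smear the profile across the map
          return rf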

  8. SAR data exploitation: computational technology enabling SAR ATR algorithm development

    NASA Astrophysics Data System (ADS)

    Majumder, Uttam K.; Casteel, Curtis H., Jr.; Buxa, Peter; Minardi, Michael J.; Zelnio, Edmund G.; Nehrbass, John W.

    2007-04-01

    A fundamental issue with synthetic aperture radar (SAR) application development is data processing and exploitation in real-time or near real-time. The power of high performance computing (HPC) clusters, FPGA, and the IBM Cell processor presents new algorithm development possibilities that have not been fully leveraged. In this paper, we will illustrate the capability of SAR data exploitation which was impractical over the last decade due to computing limitations. We can envision that SAR imagery encompassing city size coverage at extremely high levels of fidelity could be processed at near-real time using the above technologies to empower the warfighter with access to critical information for the war on terror, homeland defense, as well as urban warfare.

  9. 77 FR 5291 - Thermo Tech Technologies Inc., T.V.G. Technologies Ltd., and Visual Frontier, Inc.; Order of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-02

    ... From the Federal Register Online via the Government Publishing Office SECURITIES AND EXCHANGE COMMISSION Thermo Tech Technologies Inc., T.V.G. Technologies Ltd., and Visual Frontier, Inc.; Order of... accurate information concerning the securities of Visual Frontier, Inc. because it has not filed...

  10. A Gaze-Driven Evolutionary Algorithm to Study Aesthetic Evaluation of Visual Symmetry

    PubMed Central

    Bertamini, Marco; Jones, Andrew; Holmes, Tim; Zanker, Johannes M.

    2016-01-01

    Empirical work has shown that people like visual symmetry. We used a gaze-driven evolutionary algorithm technique to answer three questions about symmetry preference. First, do people automatically evaluate symmetry without explicit instruction? Second, is perfect symmetry the best stimulus, or do people prefer a degree of imperfection? Third, does initial preference for symmetry diminish after familiarity sets in? Stimuli were generated as phenotypes from an algorithmic genotype, with genes for symmetry (coded as deviation from a symmetrical template, deviation–symmetry, DS gene) and orientation (0° to 90°, orientation, ORI gene). An eye tracker identified phenotypes that were good at attracting and retaining the gaze of the observer. Resulting fitness scores determined the genotypes that passed to the next generation. We recorded changes to the distribution of DS and ORI genes over 20 generations. When participants looked for symmetry, there was an increase in high-symmetry genes. When participants looked for the patterns they preferred, there was a smaller increase in symmetry, indicating that people tolerated some imperfection. Conversely, there was no increase in symmetry during free viewing, and no effect of familiarity or orientation. This work demonstrates the viability of the evolutionary algorithm approach as a quantitative measure of aesthetic preference. PMID:27433324
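
    The selection loop can be sketched as follows; the gaze-derived fitness is replaced here by a simple stand-in that favors symmetric phenotypes, and the population size, mutation width and crossover scheme are assumptions rather than the study's settings.

      import random

      def make_genotype():
          return {"DS": random.uniform(0.0, 1.0), "ORI": random.uniform(0.0, 90.0)}

      def dwell_time(genotype):
          # Placeholder for the eye-tracker measurement: here symmetric patterns
          # simply attract longer simulated gaze.
          return (1.0 - genotype["DS"]) + random.gauss(0.0, 0.1)

      def next_generation(population, keep=0.5, sigma=0.05):
          scored = sorted(population, key=dwell_time, reverse=True)
          parents = scored[: max(2, int(len(scored) * keep))]
          children = []
          while len(children) < len(population):
              a, b = random.sample(parents, 2)
              child = {k: random.choice([a[k], b[k]]) for k in a}       # crossover
              child["DS"] = min(1.0, max(0.0, child["DS"] + random.gauss(0, sigma)))
              children.append(child)
          return children

      pop = [make_genotype() for _ in range(20)]
      for _ in range(20):                      # 20 generations, as in the study
          pop = next_generation(pop)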

  11. Feature extraction algorithm for 3D scene modeling and visualization using monostatic SAR

    NASA Astrophysics Data System (ADS)

    Jackson, Julie A.; Moses, Randolph L.

    2006-05-01

    We present a feature extraction algorithm to detect scattering centers in three dimensions using monostatic synthetic aperture radar imagery. We develop attributed scattering center models that describe the radar response of canonical shapes. We employ these models to characterize a complex target geometry as a superposition of simpler, low-dimensional structures. Such a characterization provides a means for target visualization. Fitting an attributed scattering model to sensed radar data comprises two problems: detection and estimation. The detection problem is to find canonical targets in clutter. The estimation problem then fits the detected canonical shape model with parameters, such as size and orientation, that correspond to the measured target response. We present an algorithm to detect canonical scattering structures amidst clutter and to estimate the corresponding model parameters. We employ full-polarimetric imagery to accurately classify canonical shapes. Interferometric processing allows us to estimate scattering center locations in three dimensions. We apply the algorithm to scattering prediction data of a simple scene comprised of canonical scatterers and to scattering predictions of a backhoe.

  12. Performance analysis of visual tracking algorithms for motion-based user interfaces on mobile devices

    NASA Astrophysics Data System (ADS)

    Winkler, Stefan; Rangaswamy, Karthik; Tedjokusumo, Jefry; Zhou, ZhiYing

    2008-02-01

    Determining the self-motion of a camera is useful for many applications. A number of visual motion-tracking algorithms have been developed to date, each with their own advantages and restrictions. Some of them have also made their foray into the mobile world, powering augmented reality-based applications on phones with inbuilt cameras. In this paper, we compare the performances of three feature or landmark-guided motion tracking algorithms, namely marker-based tracking with MXRToolkit, face tracking based on CamShift, and MonoSLAM. We analyze and compare the complexity, accuracy, sensitivity, robustness and restrictions of each of the above methods. Our performance tests are conducted in two stages: the first stage of testing uses video sequences created with simulated camera movements along the six degrees of freedom in order to compare accuracy in tracking, while the second stage analyzes the robustness of the algorithms by testing for manipulative factors like image scaling and frame-skipping.

  13. A Gaze-Driven Evolutionary Algorithm to Study Aesthetic Evaluation of Visual Symmetry.

    PubMed

    Makin, Alexis D J; Bertamini, Marco; Jones, Andrew; Holmes, Tim; Zanker, Johannes M

    2016-03-01

    Empirical work has shown that people like visual symmetry. We used a gaze-driven evolutionary algorithm technique to answer three questions about symmetry preference. First, do people automatically evaluate symmetry without explicit instruction? Second, is perfect symmetry the best stimulus, or do people prefer a degree of imperfection? Third, does initial preference for symmetry diminish after familiarity sets in? Stimuli were generated as phenotypes from an algorithmic genotype, with genes for symmetry (coded as deviation from a symmetrical template, deviation-symmetry, DS gene) and orientation (0° to 90°, orientation, ORI gene). An eye tracker identified phenotypes that were good at attracting and retaining the gaze of the observer. Resulting fitness scores determined the genotypes that passed to the next generation. We recorded changes to the distribution of DS and ORI genes over 20 generations. When participants looked for symmetry, there was an increase in high-symmetry genes. When participants looked for the patterns they preferred, there was a smaller increase in symmetry, indicating that people tolerated some imperfection. Conversely, there was no increase in symmetry during free viewing, and no effect of familiarity or orientation. This work demonstrates the viability of the evolutionary algorithm approach as a quantitative measure of aesthetic preference. PMID:27433324

  14. Infrastructure for Scalable and Interoperable Visualization and Analysis Software Technology

    SciTech Connect

    Bethel, E. Wes

    2004-08-01

    This document describes the LBNL vision for issues to be considered when assembling a large, multi-institution visualization and analysis effort. It was drafted at the request of the PNNL National Visual Analytics Center in July 2004.

  15. Assistive Technology Competencies of Teachers of Students with Visual Impairments: A Comparison of Perceptions

    ERIC Educational Resources Information Center

    Zhou, Li; Smith, Derrick W.; Parker, Amy T.; Griffin-Shirley, Nora

    2011-01-01

    This study surveyed teachers of students with visual impairments in Texas on their perceptions of a set of assistive technology competencies developed for teachers of students with visual impairments by Smith and colleagues (2009). Differences in opinion between practicing teachers of students with visual impairments and Smith's group of…

  16. Fast algorithms for visualizing fluid motion in steady flow on unstructured grids

    NASA Technical Reports Server (NTRS)

    Ueng, S. K.; Sikorski, K.; Ma, Kwan-Liu

    1995-01-01

    The plotting of streamlines is an effective way of visualizing fluid motion in steady flows. Additional information about the flowfield, such as local rotation and expansion, can be shown by drawing the streamline in the form of a ribbon or tube. In this paper, we present efficient algorithms for the construction of streamlines, streamribbons and streamtubes on unstructured grids. A specialized version of the Runge-Kutta method has been developed to speed up the integration of particle paths. We have also derived closed-form solutions for calculating angular rotation rate and radius to construct streamribbons and streamtubes, respectively. According to our analysis and test results, these formulations are two to four times better in performance than previous numerical methods. As a large number of traces are calculated, the improved performance could be significant.
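
    A particle-trace sketch in the same spirit, using a classical fixed-step RK4 integrator rather than the specialized Runge-Kutta variant and closed-form ribbon formulas developed in the paper; the velocity field is assumed to be any callable that interpolates the unstructured-grid data at a point.

      import numpy as np

      def rk4_step(pos, velocity, h):
          """One fourth-order Runge-Kutta step of the particle trace."""
          k1 = velocity(pos)
          k2 = velocity(pos + 0.5 * h * k1)
          k3 = velocity(pos + 0.5 * h * k2)
          k4 = velocity(pos + h * k3)
          return pos + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

      def streamline(seed, velocity, h=0.05, steps=500):
          """Trace a streamline from a seed point until the velocity vanishes."""
          path = [np.asarray(seed, dtype=float)]
          for _ in range(steps):
              v = velocity(path[-1])
              if np.linalg.norm(v) < 1e-9:      # stagnation point: stop the trace
                  break
              path.append(rk4_step(path[-1], velocity, h))
          return np.array(path)

      # Example: a simple rotational field traced from (1, 0, 0).
      swirl = lambda p: np.array([-p[1], p[0], 0.1])
      line = streamline([1.0, 0.0, 0.0], swirl)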

  17. An Efficient Algorithm Embedded in an Ultrasonic Visualization Technique for Damage Inspection Using the AE Sensor Excitation Method

    PubMed Central

    Liu, Yaolu; Goda, Riu; Samata, Kiyoshi; Kanda, Atsushi; Hu, Ning; Zhang, Jianyu; Ning, Huiming; Wu, Liangke

    2014-01-01

    To improve the reliability of a Lamb wave visualization technique and to obtain more information about structural damages (e.g., size and shape), we put forward a new signal processing algorithm to identify damage more clearly in an inspection region. Since the kinetic energy of material particles in a damaged area would suddenly change when ultrasonic waves encounter the damage, the new algorithm embedded in the wave visualization technique is aimed at monitoring the kinetic energy variations of all points in an inspection region to construct a damage diagnostic image. To validate the new algorithm, three kinds of surface damages on the center of aluminum plates, including two non-penetrative slits with different depths and a circular dent, were experimentally inspected. From the experimental results, it can be found that the new algorithm can remarkably enhance the quality of the diagnostic image, especially for some minor defects. PMID:25356647
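
    The energy-monitoring idea reduces to a short computation over the measured wavefield: kinetic energy at each grid point is proportional to the squared particle velocity, and the diagnostic image records how abruptly that energy changes as the wave passes. Using the peak frame-to-frame jump as the damage index is an illustrative simplification, not the authors' exact formulation.

      import numpy as np

      def damage_image(velocity_frames):
          """velocity_frames: array (T, H, W) of out-of-plane velocity snapshots."""
          energy = velocity_frames ** 2                 # kinetic energy up to a constant
          jumps = np.abs(np.diff(energy, axis=0))       # frame-to-frame energy variation
          return jumps.max(axis=0)                      # peak variation at each point

      frames = np.random.rand(200, 64, 64)              # stand-in wavefield measurement
      img = damage_image(frames)                        # bright pixels suggest damage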

  18. Technology as an Aid in Assessing Visual Acuity in Severely/Profoundly Retarded Children.

    ERIC Educational Resources Information Center

    Longo, Julie; And Others

    1982-01-01

    Technology has been used to measure visual acuity with the severely or profoundly mentally retarded child. The following categories of technology have been used for assessment: the recording of visual fixation within the habituation paradigm; equipment to measure eye movements and pursuits; operant techniques; and electrodiagnostic techniques…

  19. Visuals for Interactive Video: Old Fashioned Images for a New Fangled Technology.

    ERIC Educational Resources Information Center

    Braden, Roberts A.

    Pointing out that interactive video (IAV) represents a synthesis of four primary technologies--computers, television, visual design, and instructional design--this paper discusses the what, why, and how of IAV visuals. The features and relevant aspects of each technology are briefly discussed, as well as the impact of each of these technologies…

  20. Rolling ball sifting algorithm for the augmented visual inspection of carotid bruit auscultation.

    PubMed

    Huang, Adam; Lee, Chung-Wei; Liu, Hon-Man

    2016-01-01

    Carotid bruits are systolic sounds associated with turbulent blood flow through atherosclerotic stenosis in the neck. They are audible intermittent high-frequency (above 200 Hz) sounds mixed with background noise and transmitted low-frequency (below 100 Hz) heart sounds that wax and wane periodically. It is a nontrivial task to extract both bruits and heart sounds with high fidelity for further computer-aided auscultation and diagnosis. In this paper we propose a rolling ball sifting algorithm that is capable of filtering signals with a sharper frequency selectivity mechanism in the time domain. By rolling two balls (one above and one below the signal) of a suitable radius, the balls are large enough to roll over bruits and yet small enough to ride on heart sound waveforms. The high-frequency bruits can then be extracted according to a tangibility criterion by using the local extrema touched by the balls. Similarly, the low-frequency heart sounds can be acquired by a larger radius. By visualizing the periodicity information of both the extracted heart sounds and bruits, the proposed visual inspection method can potentially improve carotid bruit diagnosis accuracy. PMID:27452722
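
    A hedged sketch of the two-envelope idea: a ball of radius r rolled along the underside and the top of the waveform gives a lower and an upper envelope, whose midline keeps the waveforms the ball rides on, while components the ball rolls over are returned as the residual. The grey-scale opening used here approximates the geometric rolling-ball contact and is not the authors' exact tangibility criterion; radii would be chosen small for bruit extraction and larger for heart sounds.

      import numpy as np

      def ball_profile(radius):
          x = np.arange(-radius, radius + 1)
          return np.sqrt(radius ** 2 - x ** 2)          # half-circle structuring element

      def roll_below(sig, radius):
          """Lower envelope: grey-scale opening with a ball-shaped element."""
          ball = ball_profile(radius)
          pad = np.pad(sig, radius, mode="edge")
          eroded = np.array([np.min(pad[i:i + ball.size] - ball) for i in range(sig.size)])
          pad = np.pad(eroded, radius, mode="edge")
          return np.array([np.max(pad[i:i + ball.size] + ball) for i in range(sig.size)])

      def sift(signal, radius):
          sig = np.asarray(signal, dtype=float)
          lower = roll_below(sig, radius)
          upper = -roll_below(-sig, radius)             # upper envelope by symmetry
          smooth = 0.5 * (lower + upper)                # component the ball rides on
          return smooth, sig - smooth                   # (ridden band, rolled-over band)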

  1. Rolling ball sifting algorithm for the augmented visual inspection of carotid bruit auscultation

    PubMed Central

    Huang, Adam; Lee, Chung-Wei; Liu, Hon-Man

    2016-01-01

    Carotid bruits are systolic sounds associated with turbulent blood flow through atherosclerotic stenosis in the neck. They are audible intermittent high-frequency (above 200 Hz) sounds mixed with background noise and transmitted low-frequency (below 100 Hz) heart sounds that wax and wane periodically. It is a nontrivial task to extract both bruits and heart sounds with high fidelity for further computer-aided auscultation and diagnosis. In this paper we propose a rolling ball sifting algorithm that is capable of filtering signals with a sharper frequency selectivity mechanism in the time domain. By rolling two balls (one above and one below the signal) of a suitable radius, the balls are large enough to roll over bruits and yet small enough to ride on heart sound waveforms. The high-frequency bruits can then be extracted according to a tangibility criterion by using the local extrema touched by the balls. Similarly, the low-frequency heart sounds can be acquired by a larger radius. By visualizing the periodicity information of both the extracted heart sounds and bruits, the proposed visual inspection method can potentially improve carotid bruit diagnosis accuracy. PMID:27452722

  2. Visual Sensor Technology for Advanced Surveillance Systems: Historical View, Technological Aspects and Research Activities in Italy

    PubMed Central

    Foresti, Gian Luca; Micheloni, Christian; Piciarelli, Claudio; Snidaro, Lauro

    2009-01-01

    The paper is a survey of the main technological aspects of advanced visual-based surveillance systems. A brief historical view of such systems from the origins to nowadays is given together with a short description of the main research projects in Italy on surveillance applications in the last twenty years. The paper then describes the main characteristics of an advanced visual sensor network that (a) directly processes locally acquired digital data, (b) automatically modifies intrinsic (focus, iris) and extrinsic (pan, tilt, zoom) parameters to increase the quality of acquired data and (c) automatically selects the best subset of sensors in order to monitor a given moving object in the observed environment. PMID:22574011

  3. Computer Modeling and Visualization in Design Technology: An Instructional Model.

    ERIC Educational Resources Information Center

    Guidera, Stan

    2002-01-01

    Design visualization can increase awareness of issues related to perceptual and psychological aspects of design that computer-assisted design and computer modeling may not allow. A pilot university course developed core skills in modeling and simulation using visualization. Students were consistently able to meet course objectives. (Contains 16…

  4. Assistive Technologies for Library Patrons with Visual Disabilities

    ERIC Educational Resources Information Center

    Sunrich, Matthew; Green, Ravonne

    2007-01-01

    This study provides an overview of the various products available for library patrons with blindness or visual impairments. To provide some insight into the status of library services for patrons with blindness, a sample of American universities that are recognized for their programs for students with visual impairments was surveyed to discern…

  5. Adaptive and robust algorithms and tests for visual-based navigation of a space robotic manipulator

    NASA Astrophysics Data System (ADS)

    Sabatini, Marco; Monti, Riccardo; Gasbarri, Paolo; Palmerini, Giovanni B.

    2013-02-01

    Optical navigation for guidance and control of robotic systems is a well-established technique from both theoretic and practical points of view. According to the positioning of the camera, the problem can be approached in two ways: the first one, "hand-in-eye", deals with a fixed camera, external to the robot, which allows determining the position of the target object to be reached. The second one, "eye-in-hand", consists of a camera accommodated on the end-effector of the manipulator. Here, the target object position is not determined in an absolute reference frame, but with respect to the image plane of the mobile camera. In this paper, the algorithms and the test campaign applied to the case of the planar multibody manipulator developed in the Guidance and Navigation Lab at the University of Rome La Sapienza are reported with respect to the eye-in-hand case. In fact, since the space environment is the target application of this research activity, it is quite difficult to imagine a fixed, non-floating camera in the case of an orbital grasping maneuver. The classic approach of Image Based Visual Servoing considers the evaluation of the control actions directly on the basis of the error between the current image of a feature and the image of the same feature in a final desired configuration. Both simulation and experimental tests show that such a classic approach can fail when navigation errors and actuation delays are included. Moreover, changing light conditions or the presence of unexpected obstacles can lead to a camera failure in target acquisition. In order to overcome these two problems, a Modified Image Based Visual Servoing algorithm and an Extended Kalman Filtering for feature position estimation are developed and applied. In particular, the filter performs quite well if the target's depth information is supplied. A simple procedure for estimating initial target depth is therefore developed and tested. As a result of the application of all the
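
    The classic IBVS step the authors start from can be sketched as follows: the camera twist is computed from the error between current and desired normalized feature coordinates through the pseudo-inverse of the point-feature interaction matrix, which needs an estimate of each feature's depth Z (hence the depth-estimation filter discussed above). The gain and depth values are illustrative, and this is the textbook law rather than the paper's modified algorithm.

      import numpy as np

      def interaction_matrix(features, depths):
          """Stack the 2x6 interaction matrix of each normalized point feature."""
          rows = []
          for (x, y), Z in zip(features, depths):
              rows.append([-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y])
              rows.append([0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x])
          return np.array(rows)

      def ibvs_command(current, desired, depths, gain=0.5):
          """Camera twist (vx, vy, vz, wx, wy, wz) driving features to their targets."""
          error = (np.asarray(current) - np.asarray(desired)).ravel()
          L = interaction_matrix(current, depths)
          return -gain * np.linalg.pinv(L) @ error

      cmd = ibvs_command(current=[(0.10, 0.05), (-0.08, 0.12), (0.02, -0.07)],
                         desired=[(0.00, 0.00), (-0.10, 0.10), (0.05, -0.05)],
                         depths=[1.2, 1.1, 1.3])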

  6. Integrating advanced visualization technology into the planetary Geoscience workflow

    NASA Astrophysics Data System (ADS)

    Huffman, John; Forsberg, Andrew; Loomis, Andrew; Head, James; Dickson, James; Fassett, Caleb

    2011-09-01

    Recent advances in computer visualization have allowed us to develop new tools for analyzing the data gathered during planetary missions, which is important, since these data sets have grown exponentially in recent years to tens of terabytes in size. As part of the Advanced Visualization in Solar System Exploration and Research (ADVISER) project, we utilize several advanced visualization techniques created specifically with planetary image data in mind. The Geoviewer application allows real-time active stereo display of images, which in aggregate have billions of pixels. The ADVISER desktop application platform allows fast three-dimensional visualization of planetary images overlain on digital terrain models. Both applications include tools for easy data ingest and real-time analysis in a programmatic manner. Incorporation of these tools into our everyday scientific workflow has proved important for scientific analysis, discussion, and publication, and enabled effective and exciting educational activities for students from high school through graduate school.

  7. An algorithmic method for functionally defining regions of interest in the ventral visual pathway.

    PubMed

    Julian, J B; Fedorenko, Evelina; Webster, Jason; Kanwisher, Nancy

    2012-05-01

    In a widely used functional magnetic resonance imaging (fMRI) data analysis method, functional regions of interest (fROIs) are handpicked in each participant using macroanatomic landmarks as guides, and the response of these regions to new conditions is then measured. A key limitation of this standard handpicked fROI method is the subjectivity of decisions about which clusters of activated voxels should be treated as the particular fROI in question in each subject. Here we apply the Group-Constrained Subject-Specific (GSS) method for defining fROIs, recently developed for identifying language fROIs (Fedorenko et al., 2010), to algorithmically identify fourteen well-studied category-selective regions of the ventral visual pathway (Kanwisher, 2010). We show that this method retains the benefit of defining fROIs in individual subjects without the subjectivity inherent in the traditional handpicked fROI approach. The tools necessary for using this method are available on our website (http://web.mit.edu/bcs/nklab/GSS.shtml). PMID:22398396

  8. Improving chemical mapping algorithm and visualization in full-field hard x-ray spectroscopic imaging

    NASA Astrophysics Data System (ADS)

    Chang, Cheng; Xu, Wei; Chen-Wiegart, Yu-chen Karen; Wang, Jun; Yu, Dantong

    2013-12-01

    X-ray Absorption Near Edge Structure (XANES) imaging, an advanced absorption spectroscopy technique, at the Transmission X-ray Microscopy (TXM) Beamline X8C of NSLS enables high-resolution chemical mapping (a.k.a. chemical composition identification or chemical spectra fitting). Two-Dimensional (2D) chemical mapping has been successfully applied to study many functional materials to determine the percentages of chemical components at each pixel position of the material images. In chemical mapping, the attenuation coefficient spectrum of the material (sample) can be fitted with the weighted sum of standard spectra of individual chemical compositions, where the weights are the percentages to be calculated. In this paper, we first implemented and compared two fitting approaches: (i) a brute force enumeration method, and (ii) a constrained least square minimization algorithm proposed by us. Next, since 2D spectrum fitting can be conducted pixel by pixel, both methods can, in principle, be implemented in parallel. In order to demonstrate the feasibility of parallel computing in the chemical mapping problem and investigate how much efficiency improvement can be achieved, we used the second approach as an example and implemented a parallel version for a multi-core computer cluster. Finally, we used a novel way to visualize the calculated chemical compositions, by which domain scientists could grasp the percentage difference easily without looking into the real data.
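
    One plausible reading of the constrained per-pixel fit is non-negative least squares over the standard spectra, with the weights then reported as fractions; scipy's NNLS routine is used below as a stand-in for the authors' constrained minimization, and because each pixel is independent the loop parallelizes exactly as the abstract describes.

      import numpy as np
      from scipy.optimize import nnls

      def fit_pixel(standards, spectrum):
          """standards: (n_energies, n_phases); spectrum: (n_energies,)."""
          weights, _ = nnls(standards, spectrum)
          total = weights.sum()
          return weights / total if total > 0 else weights    # phase fractions

      def chemical_map(standards, spectra):
          """spectra: (n_pixels, n_energies) -> (n_pixels, n_phases) composition map."""
          # Every pixel is fitted independently, so this loop is what would be
          # distributed across cores or cluster nodes in a parallel implementation.
          return np.array([fit_pixel(standards, s) for s in spectra])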

  9. Assistive Technology Approaches for Large-Scale Assessment: Perceptions of Teachers of Students with Visual Impairments

    ERIC Educational Resources Information Center

    Johnstone, Christopher; Thurlow, Martha; Altman, Jason; Timmons, Joe; Kato, Kentaro

    2009-01-01

    Assistive technology approaches to aid students with visual impairments are becoming commonplace in schools. These approaches, however, present challenges for assessment because students' level of access to different technologies may vary by school district and state. To better understand what assistive technology tools are used in reading…

  10. Identification of Quality Visual-Based Learning Material for Technology Education

    ERIC Educational Resources Information Center

    Katsioloudis, Petros

    2010-01-01

    It is widely known that the use of visual technology enhances learning by providing a better understanding of the topic as well as motivating students. If all visual-based learning materials (tables, figures, photos, etc.) were equally effective in facilitating student achievement of all kinds of educational objectives, there would virtually be no…

  11. Assessment of Indoor Route-Finding Technology for People Who Are Visually Impaired

    ERIC Educational Resources Information Center

    Kalia, Amy A.; Legge, Gordon E.; Roy, Rudrava; Ogale, Advait

    2010-01-01

    This study investigated navigation with route instructions generated by digital-map software and synthetic speech. The participants, either visually impaired or sighted wearing blindfolds, successfully located rooms in an unfamiliar building. Users with visual impairments demonstrated better route-finding performance when the technology provided…

  12. Automatic Tuning of a Retina Model for a Cortical Visual Neuroprosthesis Using a Multi-Objective Optimization Genetic Algorithm.

    PubMed

    Martínez-Álvarez, Antonio; Crespo-Cano, Rubén; Díaz-Tahoces, Ariadna; Cuenca-Asensi, Sergio; Ferrández Vicente, José Manuel; Fernández, Eduardo

    2016-11-01

    The retina is a very complex neural structure, which contains many different types of neurons interconnected with great precision, enabling sophisticated conditioning and coding of the visual information before it is passed via the optic nerve to higher visual centers. The encoding of visual information is one of the basic questions in visual and computational neuroscience and is also of seminal importance in the field of visual prostheses. In this framework, it is essential for artificial retina systems to function as similarly as possible to biological retinas. This paper proposes an automatic evolutionary multi-objective strategy based on the NSGA-II algorithm for tuning retina models. Four metrics were adopted for guiding the algorithm in the search for those parameters that make the synthetic retinal model output best approximate real electrophysiological recordings. Results show that this procedure exhibits high flexibility when different trade-offs have to be considered during the design of customized neuroprostheses. PMID:27354187

  13. Audio Visual Technology and the Teaching of Foreign Languages.

    ERIC Educational Resources Information Center

    Halbig, Michael C.

    Skills in comprehending the spoken language source are becoming increasingly important due to the audio-visual orientation of our culture. It would seem natural, therefore, to adjust the learning goals and environment accordingly. The video-cassette machine is an ideal means for creating this learning environment and developing the listening…

  14. POLYMERASE CHAIN REACTION (PCR) TECHNOLOGY IN VISUAL BEACH

    EPA Science Inventory

    In 2000, the US Congress passed the Beaches Environmental Assessment and Coastal Health Act under which the EPA has the mandate to manage all significant public beaches by 2008. As a result, EPA, USGS and NOAA are developing the Visual Beach program which consists of software eq...

  15. Complex scenes and situations visualization in hierarchical learning algorithm with dynamic 3D NeoAxis engine

    NASA Astrophysics Data System (ADS)

    Graham, James; Ternovskiy, Igor V.

    2013-06-01

    We applied a two stage unsupervised hierarchical learning system to model complex dynamic surveillance and cyber space monitoring systems using a non-commercial version of the NeoAxis visualization software. The hierarchical scene learning and recognition approach is based on hierarchical expectation maximization, and was linked to a 3D graphics engine for validation of learning and classification results and understanding the human - autonomous system relationship. Scene recognition is performed by taking synthetically generated data and feeding it to a dynamic logic algorithm. The algorithm performs hierarchical recognition of the scene by first examining the features of the objects to determine which objects are present, and then determines the scene based on the objects present. This paper presents a framework within which low level data linked to higher-level visualization can provide support to a human operator and be evaluated in a detailed and systematic way.

  16. VACET: Proposed SciDAC2 Visualization and Analytics Center forEnabling Technologies

    SciTech Connect

    Bethel, W.; Johnson, Chris; Hansen, Charles; Parker, Steve; Sanderson, Allen; Silva, Claudio; Tricoche, Xavier; Pascucci, Valerio; Childs, Hank; Cohen, Jonathon; Duchaineau, Mark; Laney, Dan; Lindstrom,Peter; Ahern, Sean; Meredith, Jeremy; Ostrouchov, George; Joy, Ken; Hamann, Bernd

    2006-06-19

    This paper accompanies a poster that is being presented at the SciDAC 2006 meeting in Denver, CO. This project focuses on leveraging scientific visualization and analytics software technology as an enabling technology for increasing scientific productivity and insight. Advances in computational technology have resulted in an "information big bang," which in turn has created a significant data understanding challenge. This challenge is widely acknowledged to be one of the primary bottlenecks in contemporary science. The vision for our Center is to respond directly to that challenge by adapting, extending, creating when necessary and deploying visualization and data understanding technologies for our science stakeholders. Using an organizational model as a Visualization and Analytics Center for Enabling Technologies (VACET), we are well positioned to be responsive to the needs of a diverse set of scientific stakeholders in a coordinated fashion using a range of visualization, mathematics, statistics, computer and computational science and data management technologies.

  17. The evolution of a visual-to-auditory sensory substitution device using interactive genetic algorithms.

    PubMed

    Wright, Thomas; Ward, Jamie

    2013-08-01

    Sensory substitution is a promising technique for mitigating the loss of a sensory modality. Sensory substitution devices (SSDs) work by converting information from the impaired sense (e.g., vision) into another, intact sense (e.g., audition). However, there are a potentially infinite number of ways of converting images into sounds, and it is important that the conversion takes into account the limits of human perception and other user-related factors (e.g., whether the sounds are pleasant to listen to). The device explored here is termed "polyglot" because it generates a very large set of solutions. Specifically, we adapt a procedure that has been in widespread use in the design of technology but has rarely been used as a tool to explore perception-namely, interactive genetic algorithms. In this procedure, a very large range of potential sensory substitution devices can be explored by creating a set of "genes" with different allelic variants (e.g., different ways of translating luminance into loudness). The most successful devices are then "bred" together, and we statistically explore the characteristics of the selected-for traits after multiple generations. The aim of the present study is to produce design guidelines for a better SSD. In three experiments, we vary the way that the fitness of the device is computed: by asking the user to rate the auditory aesthetics of different devices (Experiment 1), and by measuring the ability of participants to match sounds to images (Experiment 2) and the ability to perceptually discriminate between two sounds derived from similar images (Experiment 3). In each case, the traits selected for by the genetic algorithm represent the ideal SSD for that task. Taken together, these traits can guide the design of a better SSD. PMID:23298393
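
    The gene/allele search can be sketched as below; the gene names, allele sets, population size and the random stand-in for the human rating are all illustrative assumptions, with the user's aesthetic rating or matching score supplying the fitness in the real procedure.

      import random

      GENES = {
          "luminance_to_loudness": ["linear", "log", "inverse"],
          "y_to_pitch": ["low_is_low", "low_is_high"],
          "scan_order": ["left_to_right", "column_sweep", "spiral"],
          "timbre": ["sine", "harmonic", "noise_band"],
      }

      def random_device():
          return {g: random.choice(alleles) for g, alleles in GENES.items()}

      def user_fitness(device):
          return random.random()          # replace with a rating or task score

      def breed(parents, n_children, mutation=0.1):
          children = []
          for _ in range(n_children):
              a, b = random.sample(parents, 2)
              child = {g: random.choice([a[g], b[g]]) for g in GENES}
              if random.random() < mutation:
                  g = random.choice(list(GENES))
                  child[g] = random.choice(GENES[g])
              children.append(child)
          return children

      population = [random_device() for _ in range(12)]
      for _ in range(10):                 # successive generations of user testing
          ranked = sorted(population, key=user_fitness, reverse=True)
          population = breed(ranked[:4], len(population))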

  18. Advanced metaheuristic algorithms for laser optimization in optical accelerator technologies

    NASA Astrophysics Data System (ADS)

    Tomizawa, Hiromitsu

    2011-10-01

    Lasers are among the most important experimental tools for user facilities, including synchrotron radiation and free electron lasers (FEL). In the synchrotron radiation field, lasers are widely used for experiments with Pump-Probe techniques. Especially for X-ray-FELs, lasers play important roles as seed light sources or photocathode-illuminating light sources to generate a high-brightness electron bunch. For future accelerators, laser-based technologies such as electro-optic (EO) sampling to measure ultra-short electron bunches and optical-fiber-based femtosecond timing systems have been intensively developed in the last decade. Therefore, control and optimization of laser pulse characteristics are strongly required for many kinds of experiments and for the improvement of accelerator systems. However, it is commonly believed that lasers must be tuned and customized manually by experts for each requirement. This makes it difficult for laser systems to be part of the common accelerator infrastructure. Automatic laser tuning requires sophisticated algorithms, and the metaheuristic algorithm is one of the best solutions. The metaheuristic laser tuning system is expected to reduce the human effort and time required for laser preparations. I have shown some successful results on a metaheuristic algorithm based on a genetic algorithm to optimize spatial (transverse) laser profiles, and a hill-climbing method extended with fuzzy set theory to choose one of the best laser alignments automatically for each machine requirement.

  19. A Dependence on Technology and Algorithms or a Lack of Number Sense?

    ERIC Educational Resources Information Center

    Calvert, Lynn M. Gordon

    1999-01-01

    Dependence on algorithms is as insidious as dependence on technology. States that many children and adults lack the facility to recognize and work with relationships in and between numbers and number operations. (ASK)

  20. Application of machine vision technology to the development of aids for the visually impaired

    NASA Astrophysics Data System (ADS)

    Molloy, Derek; McGowan, T.; Clarke, K.; McCorkell, C.; Whelan, Paul F.

    1994-10-01

    This paper presents an experimental system for the combination of three areas of visual cues to aid recognition. The research is aimed at investigating the possibility of using this combination of information for scene description for the visually impaired. The areas identified as providing suitable visual cues are motion, shape and color. The combination of these provide a significant amount of information for recognition and description purposes by machine vision equipment and also allow the possibility of giving the user a more complete description of their environment. Research and development in the application of machine vision technologies to rehabilitative technologies has generally concentrated on utilizing a single visual cue. A novel method for the combination of techniques and technologies successful in machine vision is being explored. Work to date has concentrated on the integration of shape recognition, motion tracking, color extraction, speech synthesis, symbolic programming and auditory imaging of colors.

  1. High End Visualization of Geophysical Datasets Using Immersive Technology: The SIO Visualization Center.

    NASA Astrophysics Data System (ADS)

    Newman, R. L.

    2002-12-01

    How many images can you display at one time with Power Point without getting "postage stamps"? Do you have fantastic datasets that you cannot view because your computer is too slow/small? Do you assume a few 2-D images of a 3-D picture are sufficient? High-end visualization centers can minimize and often eliminate these problems. The new visualization center [http://siovizcenter.ucsd.edu] at Scripps Institution of Oceanography [SIO] immerses users into a virtual world by projecting 3-D images onto a Panoram GVR-120E wall-sized floor-to-ceiling curved screen [7' x 23'] that has 3.2 mega-pixels of resolution. The Infinite Reality graphics subsystem is driven by a single-pipe SGI Onyx 3400 with a system bandwidth of 44 Gbps. The Onyx is powered by 16 MIPS R12K processors and 16 GB of addressable memory. The system is also equipped with transmitters and LCD shutter glasses which permit stereographic 3-D viewing of high-resolution images. This center is ideal for groups of up to 60 people who can simultaneously view these large-format images. A wide range of hardware and software is available, giving the users a totally immersive working environment in which to display, analyze, and discuss large datasets. The system enables simultaneous display of video and audio streams from sources such as SGI megadesktop and stereo megadesktop, S-VHS video, DVD video, and video from a Macintosh or PC. For instance, one-third of the screen might be displaying S-VHS video from a remotely-operated-vehicle [ROV], while the remaining portion of the screen might be used for an interactive 3-D flight over the same parcel of seafloor. The video and audio combinations using this system are numerous, allowing users to combine and explore data and images in innovative ways, greatly enhancing scientists' ability to visualize, understand and collaborate on complex datasets. In the not-distant future, with the rapid growth in networking speeds in the US, it will be possible for Earth Sciences

  2. School, Family and Other Influences on Assistive Technology Use: Access and Challenges for Students with Visual Impairment in Singapore

    ERIC Educational Resources Information Center

    Wong, Meng Ee; Cohen, Libby

    2011-01-01

    Assistive technologies are essential enablers for individuals with visual impairments, but although Singapore is technologically advanced, students with visual impairments are not yet full participants in this technological society. This study investigates the barriers and challenges to the use of assistive technologies by students with visual…

  3. Essays on Visual Representation Technology and Decision Making in Teams

    ERIC Educational Resources Information Center

    Peng, Chih-Hung

    2013-01-01

    Information technology has played several important roles in group decision making, such as communication support and decision support. Little is known about how information technology can be used to persuade members of a group to reach a consensus. In this dissertation, I aim to address the issues that are related to the role of visual…

  4. The use of advanced technology for visual inspection training.

    PubMed

    Gramopadhye, A; Bhagwat, S; Kimbler, D; Greenstein, J

    1998-10-01

    In the past, training with traditional methods was shown to improve inspection performance. However, advances in technology have automated training and revolutionized the way training will be delivered in the future. Examples of such technology include computer-based simulators, digital interactive video, computer-based training, and intelligent tutoring systems. Despite the lower cost and increased availability of computer technology, the application of advanced technology to training within the manufacturing industry and specifically for inspection has been limited. In this vein, a case study is presented which shows how advanced technology, along with our basic knowledge of training principles, can be used to develop a computer-based training program for a contact lens inspection task. Improvements due to computer-based inspection training were measured in an evaluation study and are reported. PMID:9703350

  5. Vision technology/algorithms for space robotics applications

    NASA Technical Reports Server (NTRS)

    Krishen, Kumar; Defigueiredo, Rui J. P.

    1987-01-01

    Automation and robotics have been proposed for space applications to increase productivity, reliability, flexibility, and safety; to automate time-consuming tasks; to increase the productivity and performance of crew-accomplished tasks; and to perform tasks beyond the capability of the crew. This paper provides a review of efforts currently in progress in the area of robotic vision. Both systems and algorithms are discussed. The evolution of future vision/sensing is projected to include the fusion of multisensors ranging from microwave to optical with multimode capability to include position, attitude, recognition, and motion parameters. The key features of the overall system design will be small size and weight, fast signal processing, robust algorithms, and accurate parameter determination. These aspects of vision/sensing are also discussed.

  6. Use of Assistive Technology by Students with Visual Impairments: Findings from a National Survey

    ERIC Educational Resources Information Center

    Kelly, Stacy M.

    2009-01-01

    This study investigated the use of assistive technology by students in the United States who are visually impaired through a secondary analysis of a nationally representative database. It found that the majority of students were not using assistive technology. Implications for interventions and potential changes in policy or practice are…

  7. Visualizing Math: How Intelligent Tutoring Technology Can Help Math-Challenged Students

    ERIC Educational Resources Information Center

    Wolf, Michael

    2010-01-01

    Many students who struggle with basic mathematics courses benefit from digital instructional technologies, including additional visual and supplemental materials on difficult concepts and skills. Currently, the intelligent tutoring technology feature in the learning management system (LMS) should make it easier for students to deal with material…

  8. A Comparative Analysis of Spatial Visualization Ability and Drafting Models for Industrial and Technology Education Students

    ERIC Educational Resources Information Center

    Katsioloudis, Petros; Jovanovic, Vukica; Jones, Mildred

    2014-01-01

    The main purpose of this study was to determine significant positive effects among the use of three different types of drafting models, and to identify whether any differences exist towards promotion of spatial visualization ability for students in Industrial Technology and Technology Education courses. In particular, the study compared the use of…

  9. Assistive Technology for Students with Visual Impairments: Challenges and Needs in Teachers' Preparation Programs and Practice

    ERIC Educational Resources Information Center

    Zhou, Li; Parker, Amy T.; Smith, Derrick W.; Griffin-Shirley, Nora

    2011-01-01

    This article reports on a survey of 165 teachers of students with visual impairments in Texas to examine their perceptions of their knowledge of assistive technology. The results showed that they had significant deficits in knowledge in 55 (74.32%) of the 74 assistive technology competencies that were examined and that 57.5% of them lacked…

  10. Application of a novel particle tracking algorithm in the flow visualization of an artificial abdominal aortic aneurysm.

    PubMed

    Zhang, Yang; Wang, Yuan; He, Wenbo; Yang, Bin

    2014-01-01

    A novel Particle Tracking Velocimetry (PTV) algorithm based on the Voronoi Diagram (VD) is proposed and abbreviated as VD-PTV. The robustness of VD-PTV for pulsatile flow is verified through a test that includes a widely used artificial flow and a classic reference algorithm. The proposed algorithm is then applied to visualize the flow in an artificial abdominal aortic aneurysm included in a pulsatile circulation system that simulates the aortic blood flow in the human body. Results show that large particles tend to gather at the upstream boundary because of the backflow eddies that follow the pulsation. This qualitative description, together with VD-PTV, has laid a foundation for future work that demands high-level quantification. PMID:25226961

  11. Algorithm for automatic forced spirometry quality assessment: technological developments.

    PubMed

    Melia, Umberto; Burgos, Felip; Vallverdú, Montserrat; Velickovski, Filip; Lluch-Ariet, Magí; Roca, Josep; Caminal, Pere

    2014-01-01

    We hypothesized that the implementation of automatic real-time assessment of quality of forced spirometry (FS) may significantly enhance the potential for extensive deployment of a FS program in the community. Recent studies have demonstrated that the application of quality criteria defined by the ATS/ERS (American Thoracic Society/European Respiratory Society) in commercially available equipment with automatic quality assessment can be markedly improved. To this end, an algorithm for assessing quality of FS automatically was reported. The current research describes the mathematical developments of the algorithm. An innovative analysis of the shape of the spirometric curve, adding 23 new metrics to the traditional 4 recommended by ATS/ERS, was done. The algorithm was created through a two-step iterative process including: (1) an initial version using the standard FS curves recommended by the ATS; and, (2) a refined version using curves from patients. In each of these steps the results were assessed against one expert's opinion. Finally, an independent set of FS curves from 291 patients was used for validation purposes. The novel mathematical approach to characterize the FS curves led to appropriate FS classification with high specificity (95%) and sensitivity (96%). The results constitute the basis for a successful transfer of FS testing to non-specialized professionals in the community. PMID:25551213

  12. Algorithm for Automatic Forced Spirometry Quality Assessment: Technological Developments

    PubMed Central

    Melia, Umberto; Burgos, Felip; Vallverdú, Montserrat; Velickovski, Filip; Lluch-Ariet, Magí; Roca, Josep; Caminal, Pere

    2014-01-01

    We hypothesized that the implementation of automatic real-time assessment of quality of forced spirometry (FS) may significantly enhance the potential for extensive deployment of a FS program in the community. Recent studies have demonstrated that the application of quality criteria defined by the ATS/ERS (American Thoracic Society/European Respiratory Society) in commercially available equipment with automatic quality assessment can be markedly improved. To this end, an algorithm for assessing quality of FS automatically was reported. The current research describes the mathematical developments of the algorithm. An innovative analysis of the shape of the spirometric curve, adding 23 new metrics to the traditional 4 recommended by ATS/ERS, was done. The algorithm was created through a two-step iterative process including: (1) an initial version using the standard FS curves recommended by the ATS; and, (2) a refined version using curves from patients. In each of these steps the results were assessed against one expert's opinion. Finally, an independent set of FS curves from 291 patients was used for validation purposes. The novel mathematical approach to characterize the FS curves led to appropriate FS classification with high specificity (95%) and sensitivity (96%). The results constitute the basis for a successful transfer of FS testing to non-specialized professionals in the community. PMID:25551213

  13. Research on algorithm for infrared hyperspectral imaging Fourier transform spectrometer technology

    NASA Astrophysics Data System (ADS)

    Wan, Lifang; Chen, Yan; Liao, Ningfang; Lv, Hang; He, Shufang; Li, Yasheng

    2015-08-01

    This paper reports an algorithm for infrared hyperspectral imaging Fourier transform spectrometer technology. Six different apodization functions are used and compared, the Forman phase-correction technique is studied and improved, and the fast Fourier transform (FFT) is used instead of linear convolution to reduce the amount of computation. The interferograms acquired by the infrared hyperspectral imaging spectrometer are corrected and reconstructed by the improved algorithm, which reduces noise and accelerates the computation while improving the spectral accuracy.
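
    The core reconstruction step described above (apodize the interferogram, then use the FFT rather than a slow convolution) can be sketched in a few lines. This is a hedged illustration only; the window choices are examples and the Forman phase correction improved in the paper is omitted.

```python
import numpy as np

def interferogram_to_spectrum(igram, window="hanning"):
    """Recover a magnitude spectrum from a (roughly symmetric) interferogram:
    remove the DC level, apodize to suppress sidelobes, then FFT.
    Illustrative only; the Forman phase correction is not included."""
    x = np.asarray(igram, dtype=float)
    x = x - x.mean()                       # remove the DC offset
    if window == "hanning":
        w = np.hanning(len(x))             # one of several possible apodization choices
    elif window == "triangular":
        w = np.bartlett(len(x))
    else:
        w = np.ones(len(x))                # boxcar = no apodization
    spec = np.fft.rfft(x * w)              # FFT instead of a slow linear convolution
    return np.abs(spec)
```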

  14. Using Dramatic Events and Visualization Technology to Teach About Watersheds

    NASA Astrophysics Data System (ADS)

    Huth, A. K.; Hall, M. K.

    2008-12-01

    We developed a GIS-based, two-unit module about dynamic watersheds that uses spatial visualization tools, inquiry-based questioning, and eyewitness accounts of historical, dramatic events to teach students about the natural phenomenon of watershed evolution. The module puts into context the relationship between watersheds and human behavior. The centerpiece of Unit 1 is the Big Thompson watershed in Colorado and the flash flood that killed 145 people in 1976. Unit 2 is a case history of the Sabino Canyon watershed in Arizona, which was ravaged by wildfires in 2002 and 2003, as well as destructive debris flows in 2006. Students examine the causes and magnitude of each of these events, and how they changed landscapes and people. Both units use MyWorld GIS and Google Earth visualization software tools. A teacher workshop in summer 2008 revealed an increased understanding of watersheds and an improved comfort level using spatial visualization software after working with the module. Prior to the workshop, a survey demonstrated that fewer than 50 percent of the workshop participants knew the name of the watershed in which they lived. After the workshop, all teachers were able to conceptually define a watershed, identify the watershed in which they live, and describe hazards that would put a watershed and its communities at risk for dramatic changes. They also showed an increased awareness of seasonal variations in streamflow and made connections between these variations and the sources that generate streamflow in different watersheds. We expect similar results among high-school students who will field test these materials during the Fall 2008 semester.

  15. Expansion of the visual angle of a car rear-view image via an image mosaic algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Zhuangwen; Zhu, Liangrong; Sun, Xincheng

    2015-05-01

    The rear-view image system is one of the active safety devices in cars and is widely applied in all types of vehicles and traffic safety areas. However, previous studies by both domestic and foreign researchers were based on a single image capture device used while reversing, so a blind area still remained for drivers. Even when multiple cameras were used to expand the visual angle of the car's rear-view image, the blind area remained because the different source images were not mosaicked together. To acquire an expanded visual angle for the car rear-view image, two charge-coupled device cameras with optical axes angled at 30 deg were mounted below the left and right fenders of a car, and rear-view heterologous images were captured in three lighting conditions: sunny outdoors, cloudy outdoors, and an underground garage. These rear-view heterologous images were rapidly registered with the scale invariant feature transform (SIFT) algorithm. Combined with the random sample consensus (RANSAC) algorithm, the two heterologous images were finally mosaicked using a linear weighted gradated in-and-out fusion algorithm, and a seamless, visual-angle-expanded rear-view image was acquired. The four-index test results showed that the algorithms mosaicked rear-view images well even in the underground garage condition, where the average rate of correct matching was the lowest among the three conditions. The rear-view image mosaic algorithm presented had the best information preservation, the shortest computation time, and the most complete preservation of image detail features compared to the mean value method (MVM) and segmental fusion method (SFM); it also performed better in real time and provided more comprehensive image details than MVM and SFM. In addition, it had the most complete image preservation from the source images among the three algorithms. The method introduced in this paper provides a basis for further research on expanding the visual angle of the car rear-view image.
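
    The registration and fusion chain named in the abstract (SIFT matching, RANSAC homography estimation, linear-weighted "in-and-out" blending) can be approximated with standard OpenCV calls. The sketch below is an assumption-laden stand-in, not the authors' implementation: the canvas size, ratio-test threshold and distance-transform weighting are illustrative choices.

```python
import cv2
import numpy as np

def mosaic_pair(img_left, img_right):
    """Register img_right onto img_left with SIFT + RANSAC, then blend with
    per-pixel linear weights (a simple gradated in-and-out fusion)."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(cv2.cvtColor(img_left, cv2.COLOR_BGR2GRAY), None)
    k2, d2 = sift.detectAndCompute(cv2.cvtColor(img_right, cv2.COLOR_BGR2GRAY), None)

    # Ratio-test matching followed by RANSAC to reject outlier correspondences.
    matcher = cv2.BFMatcher()
    good = [m for m, n in matcher.knnMatch(d2, d1, k=2) if m.distance < 0.7 * n.distance]
    src = np.float32([k2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = img_left.shape[:2]
    canvas_w = w * 2                                   # assumed output canvas width
    warped = cv2.warpPerspective(img_right, H, (canvas_w, h)).astype(np.float32)
    base = np.zeros_like(warped)
    base[:, :w] = img_left

    # Linear weights: each pixel is weighted by its distance to the image border,
    # so the seam fades gradually from one source image into the other.
    w1 = cv2.distanceTransform((base.max(axis=2) > 0).astype(np.uint8), cv2.DIST_L2, 3)
    w2 = cv2.distanceTransform((warped.max(axis=2) > 0).astype(np.uint8), cv2.DIST_L2, 3)
    total = np.clip(w1 + w2, 1e-6, None)
    out = base * (w1 / total)[..., None] + warped * (w2 / total)[..., None]
    return out.astype(np.uint8)
```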

  16. Better-Than-Visual Technologies for Next Generation Air Transportation System Terminal Maneuvering Area Operations

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Bailey, Randall E.; Shelton, Kevin J.; Jones, Denise R.; Kramer, Lynda J.; Arthur, Jarvis J., III; Williams, Steve P.; Barmore, Bryan E.; Ellis, Kyle E.; Rehfeld, Sherri A.

    2011-01-01

    A consortium of industry, academia and government agencies is devising new concepts for future U.S. aviation operations under the Next Generation Air Transportation System (NextGen). Many key capabilities are being identified to enable NextGen, including the concept of Equivalent Visual Operations (EVO) replicating the capacity and safety of today's visual flight rules (VFR) in all-weather conditions. NASA is striving to develop the technologies and knowledge to enable EVO and to extend EVO towards a Better-Than-Visual (BTV) operational concept. The BTV operational concept uses an electronic means to provide sufficient visual references of the external world and other required flight references on flight deck displays that enable VFR-like operational tempos and maintain and improve the safety of VFR while using VFR-like procedures in all-weather conditions. NASA Langley Research Center (LaRC) research on technologies to enable the concept of BTV is described.

  17. Assisting the visually impaired to deal with telephone interview jobs using information and communication technology.

    PubMed

    Yeh, Fung-Huei; Yang, Chung-Chieh

    2014-12-01

    This study proposed a new information and communication technology assisted blind telephone interview (ICT-ABTI) system to help visually impaired people do telephone interview jobs as sighted people do, and to create more diverse employment opportunities for them. The study used an ABAB design to assess the system with seven visually impaired people. As a result, they could accomplish 3,070 effective telephone interviews per month independently. The results also show that the working performance of visually impaired people can be improved effectively with appropriate design of the operational workflow and accessible software. By using the ICT-ABTI system to do telephone interview jobs, visually impaired workers become productive, income-earning, and self-sufficient. The results were also shared through the APEC Digital Opportunity Center platform to help people with visual impairments in the Philippines, Malaysia and China. PMID:25209925

  18. Evaluation of Visual Analytics Environments: The Road to the Visual Analytics Science and Technology Challenge Evaluation Methodology

    SciTech Connect

    Scholtz, Jean; Plaisant, Catherine; Whiting, Mark A.; Grinstein, Georges

    2014-09-28

    The evaluation of visual analytics environments was a topic in Illuminating the Path [Thomas 2005] as a critical aspect of moving research into practice. For a thorough understanding of the utility of the systems available, evaluation not only involves assessing the visualizations, interactions or data processing algorithms themselves, but also the complex processes that a tool is meant to support (such as exploratory data analysis and reasoning, communication through visualization, or collaborative data analysis [Lam 2012; Carpendale 2007]). Researchers and practitioners in the field have long identified many of the challenges faced when planning, conducting, and executing an evaluation of a visualization tool or system [Plaisant 2004]. Evaluation is needed to verify that algorithms and software systems work correctly and that they represent improvements over the current infrastructure. Additionally, to effectively transfer new software into a working environment, it is necessary to ensure that the software has utility for the end-users and that the software can be incorporated into the end-user's infrastructure and work practices. Evaluation test beds require datasets, tasks, metrics and evaluation methodologies. As noted in [Thomas 2005], it is difficult and expensive for any one researcher to set up an evaluation test bed, so in many cases evaluation is set up for communities of researchers or for various research projects or programs. Examples of successful community evaluations can be found in [Chinchor 1993; Voorhees 2007; FRGC 2012]. As visual analytics environments are intended to facilitate the work of human analysts, one aspect of evaluation needs to focus on the utility of the software to the end-user. This requires representative users, representative tasks, and metrics that measure the utility to the end-user. This is even more difficult as now one aspect of the test methodology is access to representative end-users to participate in the evaluation. In many

  19. The Use of Technology and Visualization in Calculus Instruction

    ERIC Educational Resources Information Center

    Samuels, Jason

    2010-01-01

    This study was inspired by a history of student difficulties in calculus, and innovation in response to those difficulties. The goals of the study were fourfold. First, to design a mathlet for students to explore local linearity. Second, to redesign the curriculum of first semester calculus around the use of technology, an emphasis on…

  20. An Alternative Option to Dedicated Braille Notetakers for People with Visual Impairments: Universal Technology for Better Access

    ERIC Educational Resources Information Center

    Hong, Sunggye

    2012-01-01

    Technology provides equal access to information and helps people with visual impairments to complete tasks more independently. Among various assistive technology options for people with visual impairments, braille notetakers have been considered the most significant because of their technological innovation. Braille notetakers allow users who are…

  1. Design and implementation of information visualization system on science and technology industry based on GIS

    NASA Astrophysics Data System (ADS)

    Wu, Xiaofang; Jiang, Liushi

    2011-02-01

    Traditional science and technology information systems usually manage data only in text and table form and analyze it with mathematical statistics; they lack spatial analysis and management of the data. Therefore, GIS technology is introduced to visualize and analyze information on the science and technology industry. First, using the Microsoft Visual Studio 2005 development platform and ArcGIS Engine, a GIS-based information visualization system for the science and technology industry is built, implementing functions such as data storage and management, query, statistics, chart analysis, and thematic map representation. It can intuitively show changes in science and technology information along both spatial and temporal axes. Then, science and technology data from Guangdong province are taken as experimental data and applied to the system. By considering factors such as humanities, geography and economics, the situation and change tendency of science and technology information in different regions are analyzed, and corresponding suggestions and methods are put forward to provide auxiliary support for the development of the science and technology industry in Guangdong province.

  2. Optimal vaccination schedule search using genetic algorithm over MPI technology

    PubMed Central

    2012-01-01

    Background Immunological strategies that achieve the prevention of tumor growth are based on the presumption that the immune system, if triggered before tumor onset, could be able to defend from specific cancers. In supporting this assertion, in the last decade active immunization approaches prevented some virus-related cancers in humans. An immunopreventive cell vaccine for the non-virus-related human breast cancer has been recently developed. This vaccine, called Triplex, targets the HER-2-neu oncogene in HER-2/neu transgenic mice and has shown to almost completely prevent HER-2/neu-driven mammary carcinogenesis when administered with an intensive and life-long schedule. Methods To better understand the preventive efficacy of the Triplex vaccine in reduced schedules we employed a computational approach. The computer model developed allowed us to test in silico specific vaccination schedules in the quest for optimality. Specifically here we present a parallel genetic algorithm able to suggest optimal vaccination schedule. Results & Conclusions The enormous complexity of combinatorial space to be explored makes this approach the only possible one. The suggested schedule was then tested in vivo, giving good results. Finally, biologically relevant outcomes of optimization are presented. PMID:23148787

  3. An algorithmic interactive planning framework in support of sustainable technologies

    NASA Astrophysics Data System (ADS)

    Prica, Marija D.

    This thesis addresses the difficult problem of generation expansion planning that employs the most effective technologies in today's changing electric energy industry. The electrical energy industry, in both the industrialized world and in developing countries, is experiencing transformation in a number of different ways. This transformation is driven by major technological breakthroughs (such as the influx of unconventional smaller-scale resources), by industry restructuring, changing environmental objectives, and the ultimate threat of resource scarcity. This thesis proposes a possible planning framework in support of sustainable technologies where sustainability is viewed as a mix of multiple attributes ranging from reliability and environmental impact to short- and long-term efficiency. The idea of centralized peak-load pricing, which accounts for the tradeoffs between cumulative operational effects and the cost of new investments, is the key concept in support of long-term planning in the changing industry. To start with, an interactive planning framework for generation expansion is posed as a distributed decision-making model. In order to reconcile the distributed sub-objectives of different decision makers with system-wide sustainability objectives, a new concept of distributed interactive peak load pricing is proposed. To be able to make the right decisions, the decision makers must have sufficient information about the estimated long-term electricity prices. The sub-objectives of power plant owners and load-serving entities are profit maximization. Optimized long-term expansion plans based on predicted electricity prices are communicated to the system-wide planning authority as long-run bids. The long-term expansion bids are cleared by the coordinating planner so that the system-wide long-term performance criteria are satisfied. The interactions between generation owners and the coordinating planning authority are repeated annually. We view the proposed

  4. Calibration of visual model for space manipulator with a hybrid LM-GA algorithm

    NASA Astrophysics Data System (ADS)

    Jiang, Wensong; Wang, Zhongyu

    2016-01-01

    A hybrid LM-GA algorithm is proposed to calibrate the camera system of a space manipulator and improve its locational accuracy. The algorithm dynamically fuses the Levenberg-Marquardt (LM) algorithm and the Genetic Algorithm (GA) to minimize the error of the nonlinear camera model. The LM algorithm is called to optimize initial camera parameters previously generated by the genetic process. Iteration stops if the optimized camera parameters meet the accuracy requirements; otherwise, new populations are generated by the GA and optimized afresh by the LM algorithm until the solutions meet the requirements. A novel measuring machine for the space manipulator is designed for on-orbit dynamic simulation and precision testing. The camera system of the space manipulator, calibrated by the hybrid LM-GA algorithm, is used for locational precision tests in this measuring instrument. The experimental results show mean composite errors of 0.074 mm for the hybrid LM-GA camera calibration model, 1.098 mm for the LM model, and 1.202 mm for the GA model. The composite standard deviations are 0.103 mm for the hybrid LM-GA model, 1.227 mm for the LM model, and 1.351 mm for the GA model. The accuracy of the hybrid LM-GA camera calibration model is more than 10 times higher than that of the other two methods. Overall, the hybrid LM-GA camera calibration model is superior to both the LM and GA camera calibration models.
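
    The dynamic LM-GA fusion loop described above can be sketched generically: a GA proposes candidate parameter vectors, Levenberg-Marquardt polishes the best of each generation, and the loop stops once the refined residual meets a tolerance. The model function, population size and tolerances below are placeholders, not the paper's camera model.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

def residuals(p, x, y):
    # Placeholder nonlinear model standing in for the camera projection model.
    return y - (p[0] * np.exp(-p[1] * x) + p[2])

def hybrid_lm_ga(x, y, pop=30, gens=50, tol=1e-6, bounds=(-5, 5)):
    """GA generates candidate parameter vectors; each generation's best candidates
    are polished with Levenberg-Marquardt. Stop when the refined cost is below tol,
    otherwise evolve a new population and repeat."""
    lo, hi = bounds
    population = rng.uniform(lo, hi, size=(pop, 3))
    best_p, best_cost = None, np.inf
    for _ in range(gens):
        costs = np.array([np.sum(residuals(p, x, y) ** 2) for p in population])
        elite = population[np.argsort(costs)[:5]]
        for p0 in elite:                                  # LM refinement step
            sol = least_squares(residuals, p0, args=(x, y), method="lm")
            if sol.cost < best_cost:
                best_p, best_cost = sol.x, sol.cost
        if best_cost < tol:
            break
        # Genetic step: uniform crossover among elites plus Gaussian mutation.
        parents = elite[rng.integers(0, len(elite), size=(pop, 2))]
        mask = rng.random((pop, 3)) < 0.5
        population = np.where(mask, parents[:, 0], parents[:, 1])
        population += rng.normal(0.0, 0.1, size=population.shape)
    return best_p, best_cost
```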

  5. A new algorithm for integrated image quality measurement based on wavelet transform and human visual system

    NASA Astrophysics Data System (ADS)

    Wang, Haihui

    2006-01-01

    An essential determinant of the value of digital images is their quality. Over the past years, there have been many attempts to develop models or metrics for image quality that incorporate elements of human visual sensitivity. However, there is currently no standard, objective definition of spectral image quality. This paper proposes a reliable automatic method for objective image quality measurement based on the wavelet transform and the human visual system. In this way the proposed measure differentiates between random and signal-dependent distortion, which have different effects on a human observer. Performance of the proposed quality measure is illustrated by examples involving images with different types of degradation. The technique relates the quality of an image to interpretation and quantification across the frequency range, in which the noise level is estimated for quality evaluation. Experimental results from using this method for image quality measurement show good correlation with subjective visual quality assessments.
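
    A toy version of a wavelet-domain, visually weighted quality measure is sketched below using PyWavelets. The per-level weights standing in for a contrast-sensitivity model are assumptions, as is the final pooling into a 0-1 score.

```python
import numpy as np
import pywt

def wavelet_hvs_quality(reference, distorted, wavelet="db2", levels=3):
    """Toy wavelet-domain quality index: decompose both images, weight the per-subband
    error by a crude contrast-sensitivity-like weight (mid frequencies count most),
    and map the pooled relative error to a 0-1 score. The weights are illustrative
    stand-ins for a real human-visual-system model."""
    ref = pywt.wavedec2(np.asarray(reference, float), wavelet, level=levels)
    dst = pywt.wavedec2(np.asarray(distorted, float), wavelet, level=levels)
    # One weight per decomposition level, coarse -> fine (assumed CSF-like shape).
    level_weights = np.array([0.4, 1.0, 0.6])[:levels]
    err, norm = 0.0, 0.0
    for lvl, (r_bands, d_bands) in enumerate(zip(ref[1:], dst[1:])):
        wgt = level_weights[lvl]
        for r, d in zip(r_bands, d_bands):       # horizontal, vertical, diagonal details
            err += wgt * np.mean((r - d) ** 2)
            norm += wgt * np.mean(r ** 2) + 1e-12
    return 1.0 / (1.0 + err / norm)              # 1.0 means identical images
```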

  6. Flight Deck Display Technologies for 4DT and Surface Equivalent Visual Operations

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Jones, Denis R.; Shelton, Kevin J.; Arthur, Jarvis J., III; Bailey, Randall E.; Allamandola, Angela S.; Foyle, David C.; Hooey, Becky L.

    2009-01-01

    NASA research is focused on flight deck display technologies that may significantly enhance situation awareness, enable new operating concepts, and reduce the potential for incidents/accidents for terminal area and surface operations. The display technologies include surface map, head-up, and head-worn displays; 4DT guidance algorithms; synthetic and enhanced vision technologies; and terminal maneuvering area traffic conflict detection and alerting systems. This work is critical to ensure that the flight deck interface technologies and the role of the human participants can support the full realization of the Next Generation Air Transportation System (NextGen) and its novel operating concepts.

  7. Resisting the Lure of Technology-Driven Design: Pedagogical Approaches to Visual Communication

    ERIC Educational Resources Information Center

    Northcut, Kathryn M.; Brumberger, Eva R.

    2010-01-01

    Technical communicators are expected to work extensively with visual texts in workplaces. Fortunately, most academic curricula include courses in which the skills necessary for such tasks are introduced and sometimes developed in depth. We identify a tension between a focus on technological skill vs. a focus on principles and theory, arguing that…

  8. Incorporating Assistive Technology for Students with Visual Impairments into the Music Classroom

    ERIC Educational Resources Information Center

    Rush, Toby W.

    2015-01-01

    Although recent advances make it easier than ever before for students with severe visual impairments to be fully accommodated in the music classroom, one of the most significant current challenges in this area is most music educators' unfamiliarity with current assistive technology. Fortunately, many of these tools are readily available and even…

  9. Digital Technology in the Visual Arts Classroom: An [un]Easy Partnership

    ERIC Educational Resources Information Center

    Wilks, Judith; Cutcher, Alexandra; Wilks, Susan

    2012-01-01

    This article scrutinizes the dichotomy of the uneasy and easy partnerships that exist between digital technology and visual arts education. The claim that by putting computers into schools "we have bought 'one half of a product'... we've bought the infrastructure and the equipment but we haven't bought the educational…

  10. The Future of Access Technology for Blind and Visually Impaired People.

    ERIC Educational Resources Information Center

    Schreier, E. M.

    1990-01-01

    This article describes potential use of new technological products and services by blind/visually impaired people. Items discussed include computer input devices, public telephones, automatic teller machines, airline and rail arrival/departure displays, ticketing machines, information retrieval systems, order-entry terminals, optical character…

  11. Receptivity toward Assistive Computer Technology by Non-Users Who Are Blind/Visually Impaired

    ERIC Educational Resources Information Center

    Leff, Lisa

    2012-01-01

    The non-use of assistive computer technology by some people who are legally blind/visually impaired was investigated to determine the reasons for lack of interest (Chiang, Cole, Gupta, Kaiser, & Starren, 2006; Williamson, Wright, Schauder & Bow, 2001). Social and psychological factors implicated in non-interest were determined by profiling…

  12. Domain Visualization Using VxInsight[R] for Science and Technology Management.

    ERIC Educational Resources Information Center

    Boyack, Kevin W.; Wylie, Brian N.; Davidson, George S.

    2002-01-01

    Presents the application of a knowledge visualization tool, VxInsight[R], to enable domain analysis for science and technology management. Uses data mining from sources of bibliographic information to define subsets of relevant information and discusses citation mapping, text mapping, and journal mapping. (Author/LRW)

  13. Generating and Analyzing Visual Representations of Conic Sections with the Use of Technological Tools

    ERIC Educational Resources Information Center

    Santos-Trigo, Manuel; Espinosa-Perez, Hugo; Reyes-Rodriguez, Aaron

    2006-01-01

    Technological tools have the potential to offer students the possibility to represent information and relationships embedded in problems and concepts in ways that involve numerical, algebraic, geometric, and visual approaches. In this paper, the authors present and discuss an example in which an initial representation of a mathematical object…

  14. Assistive Technology Competencies for Teachers of Students with Visual Impairments: A National Study

    ERIC Educational Resources Information Center

    Zhou, Li; Ajuwon, Paul M.; Smith, Derrick W.; Griffin-Shirley, Nora; Parker, Amy T.; Okungu, Phoebe

    2012-01-01

    Introduction: For practicing teachers of students with visual impairments, assistive technology has assumed an important role in the education of their students' assessment and learning of content. Little research has addressed this area; therefore, the purpose of the study presented here was to identify the teachers' self-reported possession of…

  15. Teaching Proofs and Algorithms in Discrete Mathematics with Online Visual Logic Puzzles

    ERIC Educational Resources Information Center

    Cigas, John; Hsin, Wen-Jung

    2005-01-01

    Visual logic puzzles provide a fertile environment for teaching multiple topics in discrete mathematics. Many puzzles can be solved by the repeated application of a small, finite set of strategies. Explicitly reasoning from a strategy to a new puzzle state illustrates theorems, proofs, and logic principles. These provide valuable, concrete…

  16. Parallel technology for numerical modeling of fluid dynamics problems by high-accuracy algorithms

    NASA Astrophysics Data System (ADS)

    Gorobets, A. V.

    2015-04-01

    A parallel computation technology for modeling fluid dynamics problems by finite-volume and finite-difference methods of high accuracy is presented. The development of an algorithm, the design of a software implementation, and the creation of parallel programs for computations on large-scale computing systems are considered. The presented parallel technology is based on a multilevel parallel model combining various types of parallelism: with shared and distributed memory and with multiple and single instruction streams to multiple data flows.

  17. Seismic Sleuths: Using Visualization Technology to Teach Middle School Earth Sciences Content

    NASA Astrophysics Data System (ADS)

    Peach, C.; Kilb, D.; Kent, G.; Fisler, S.

    2006-12-01

    Scientists from the Scripps Institution of Oceanography (SIO) Visualization Center and science educators from the Birch Aquarium at Scripps (BAS) and Aquatic Adventures Science Education Foundation (AASEF) collaborated to create Seismic Sleuths, a field trip experience for 6th graders that introduces concepts in global tectonics and seismicity using data visualization techniques. Designed to teach 6th grade California Earth science content standards, the program emphasizes how scientists gather and use data to understand Earth processes. The Seismic Sleuths field trip program is the culminating event for a four-week, in-school Earth science enrichment program provided to four of San Diego's most underserved middle schools by AASEF. Using data and visualization techniques adopted from the SIO Visualization Center, the fieldtrip experience reinforces concepts taught in the in-school portion of the program. During the 1 1/2 hour field trip program, students rotate through three learning stations that include 1) examination of global topography and seismicity data using an internal projection globe; 2) interactive 3-D visualization (Fledermaus) of earthquake hypocenter data and topography at convergent and divergent plate boundaries; and 3) a working ocean bottom seismometer that is used to demonstrate how seismic data are collected. Data from an evaluation of the program suggest that use of the visualization technology enhances student learning with substantial increases in student knowledge measured in pre- and post-field trip student knowledge surveys.

  18. Visual Servoing of Quadrotor Micro-Air Vehicle Using Color-Based Tracking Algorithm

    NASA Astrophysics Data System (ADS)

    Azrad, Syaril; Kendoul, Farid; Nonami, Kenzo

    This paper describes a vision-based tracking system using an autonomous quadrotor unmanned micro-aerial vehicle (MAV). The vision-based control system relies on a color target detection and tracking algorithm using the integral image, Kalman filters for relative pose estimation, and a nonlinear controller for MAV stabilization and guidance. The vision algorithm relies on information from a single onboard camera. An arbitrary target can be selected in real time from the ground control station, giving an advantage over template- and learning-based approaches. Experimental results obtained from outdoor flight tests showed that the vision-control system enabled the MAV to track and hover above the target for as long as battery power was available. The target does not need to be pre-learned, nor is a template required for detection. The results from image processing are sent to a nonlinear controller, designed by the researchers in our group, to navigate the MAV.
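
    The color-detection-plus-Kalman-filter front end described above can be sketched with OpenCV. The HSV threshold range and filter tuning below are assumptions, and the pose estimation and nonlinear flight controller of the actual system are not shown.

```python
import cv2
import numpy as np

# Constant-velocity Kalman filter: state [x, y, vx, vy], measurement [x, y].
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

# Assumed HSV range for the target color (e.g. an orange marker).
LOW = np.array((5, 120, 120), np.uint8)
HIGH = np.array((20, 255, 255), np.uint8)

def track(frame_bgr):
    """Return a smoothed (x, y) estimate of the colored target in one frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOW, HIGH)
    prediction = kf.predict()
    m = cv2.moments(mask)
    if m["m00"] > 0:                            # target visible: correct with centroid
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        est = kf.correct(np.array([[cx], [cy]], np.float32))
        return float(est[0, 0]), float(est[1, 0])
    return float(prediction[0, 0]), float(prediction[1, 0])  # occluded: coast on prediction
```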

  19. New speckle analysis algorithm for flow visualization in optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    De Pretto, Lucas R.; Nogueira, Gesse E. C.; Freitas, Anderson Z.

    2015-06-01

    Optical Coherence Tomography (OCT) is a noninvasive technique capable of generating in vivo high-resolution images. However, OCT images are degraded by a granular and random noise called speckle. Nevertheless, such a noise may be used to gather information regarding the sample, as is exploited by techniques like Speckle Variance - OCT (SV-OCT). SV-OCT is widely used in the literature, but the variance calculation is computationally expensive. Therefore, we propose a new algorithm to employ speckle in identifying flow based on the evaluation of intensity fluctuation between two consecutively acquired OCT images. Our results were compared to those obtained by traditional method of Speckle Variance to demonstrate the feasibility of the technique. Both algorithms were applied to series of OCT images from a microchannel flow phantom, as well as from a biological tissue with blood flow. The results obtained by our method are in good agreement with those from SV-OCT. We've also analyzed the performance of both algorithms, registering the processing time and memory use. Our method performed 31% faster with the same use of memory. Therefore, we demonstrated a new method to map flow on OCT images.
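
    The contrast between the two maps is easy to state in code: classic speckle variance pools a stack of B-scans, while the two-frame alternative in the spirit of this paper looks only at the intensity fluctuation between consecutive frames. The normalization below is an assumption.

```python
import numpy as np

def speckle_variance_map(frames):
    """Classic SV-OCT: per-pixel intensity variance over a stack of N B-scans."""
    return np.var(np.asarray(frames, float), axis=0)

def fluctuation_map(frame_a, frame_b):
    """Two-frame alternative in the spirit of the paper: per-pixel intensity change
    between consecutively acquired B-scans. High values indicate decorrelating
    speckle, i.e. flow; the exact normalization used here is assumed."""
    a = np.asarray(frame_a, float)
    b = np.asarray(frame_b, float)
    return np.abs(a - b) / (a + b + 1e-9)
```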

  20. iOS--Worthy of the Hype as Assistive Technology for Visual Impairments? A Phenomenological Study of iOS Device Use by Individuals with Visual Impairments

    ERIC Educational Resources Information Center

    Scott, Shari

    2013-01-01

    This qualitative study sought to explore the shared essence of the lived experiences of early adopters of iOS devices as assistive technology by persons with visual impairments. The capstone question addressed the idea of whether any one device could fully meet the assistive technology needs of this population. Purposeful sampling methods were…

  1. Art-Science-Technology collaboration through immersive, interactive 3D visualization

    NASA Astrophysics Data System (ADS)

    Kellogg, L. H.

    2014-12-01

    At the W. M. Keck Center for Active Visualization in Earth Sciences (KeckCAVES), a group of geoscientists and computer scientists collaborate to develop and use interactive, immersive 3D visualization technology to view, manipulate, and interpret data for scientific research. The visual impact of immersion in a CAVE environment can be extremely compelling, and from the outset KeckCAVES scientists have collaborated with artists to bring this technology to creative works, including theater and dance performance, installations, and gamification. The first full-fledged collaboration designed and produced a performance called "Collapse: Suddenly falling down", choreographed by Della Davidson, which investigated the human and cultural response to natural and man-made disasters. Scientific data (lidar scans of disaster sites, such as landslides and mine collapses) were fully integrated into the performance by the Sideshow Physical Theatre. This presentation will discuss both the technological and creative characteristics of, and lessons learned from, the collaboration. Many parallels between the artistic and scientific process emerged. We observed that both artists and scientists set out to investigate a topic, solve a problem, or answer a question. Refining that question or problem is an essential part of both the creative and scientific workflow. Both artists and scientists seek understanding (in this case understanding of natural disasters). Differences also emerged; the group noted that the scientists sought clarity (including but not limited to quantitative measurements) as a means to understanding, while the artists embraced ambiguity, also as a means to understanding. Subsequent art-science-technology collaborations have responded to evolving technology for visualization and include gamification as a means to explore data, and use of augmented reality for informal learning in museum settings.

  2. Neural network and genetic algorithm technology in data mining of manufacturing quality information

    NASA Astrophysics Data System (ADS)

    Song, Limei; Qu, Xing-Hua; Ye, Shenghua

    2002-03-01

    Data mining of Manufacturing Quality Information (MQI) is a key technology in quality lead control. Among data mining methods, neural networks and genetic algorithms are widely used for their strong advantages, such as nonlinearity, parallelism and accuracy. Used individually, however, each has limitations, such as slow convergence and blind searching. This paper combines their merits and applies a genetic BP algorithm to data mining of MQI. The approach has been successfully used in a key project of the Natural Science Foundation of China (NSFC), Quality Control and Zero-defect Engineering (Project No. 59735120).
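
    A minimal sketch of a "genetic BP" hybrid is given below: a GA searches for good initial weights of a tiny one-hidden-layer network, and plain backpropagation then refines the best candidate. The network size, rates and genetic operators are assumptions and do not reproduce the system used in the NSFC project.

```python
import numpy as np

rng = np.random.default_rng(1)

def unpack(w, d):
    # w packs both weight matrices of a d-input, 4-hidden-unit, 1-output network.
    return w[:d * 4].reshape(d, 4), w[d * 4:].reshape(4, 1)

def mse(w, X, y):
    W1, W2 = unpack(w, X.shape[1])
    return float(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2))

def genetic_bp(X, y, pop=40, gens=30, epochs=200, lr=0.05):
    """X: (n, d) inputs, y: (n, 1) targets.
    GA stage: evolve candidate weight vectors with elitist crossover + mutation.
    BP stage: refine the best candidate with gradient descent on the MSE loss."""
    d = X.shape[1]
    n_w = d * 4 + 4
    P = rng.normal(0, 1, (pop, n_w))
    for _ in range(gens):
        fit = np.array([mse(w, X, y) for w in P])          # lower is better
        elite = P[np.argsort(fit)[:pop // 4]]
        children = elite[rng.integers(0, len(elite), (pop, 2))].mean(axis=1)  # crossover
        P = children + rng.normal(0, 0.1, children.shape)                     # mutation
        P[0] = elite[0]                                                       # keep the best
    W1, W2 = unpack(P[0].copy(), d)
    for _ in range(epochs):                                 # plain backpropagation
        H = np.tanh(X @ W1)
        out = H @ W2
        d_out = 2 * (out - y) / len(y)
        d_H = (d_out @ W2.T) * (1 - H ** 2)
        W2 -= lr * H.T @ d_out
        W1 -= lr * X.T @ d_H
    return W1, W2
```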

  3. An alternating direction algorithm for two-phase flow visualization using gamma computed tomography

    NASA Astrophysics Data System (ADS)

    Xue, Qian; Wang, Huaxiang; Cui, Ziqiang; Yang, Chengyi

    2012-12-01

    In order to build high-speed imaging systems with low cost and low radiation leakage, the number of radioactive sources and detectors in a multiphase flow computed tomography (CT) system has to be limited. Moreover, systematic and random errors are inevitable in practical applications. The limited and corrupted measurement data make the tomographic inversion process the most critical part of multiphase flow CT. Although various iterative reconstruction algorithms have been developed based on least squares minimization, the imaging quality is still inadequate for the reconstruction of relatively complicated bubble flow. This paper extends an alternating direction method (ADM), originally proposed in compressed sensing, to image two-phase flow using a low-energy γ-CT system. An l1 norm-based regularization technique is utilized to treat the ill-posedness of the inverse problem, and the image reconstruction model is reformulated into one having partially separable objective functions; a dual-based ADM is then adopted to solve the resulting problem. The feasibility is demonstrated in prototype experiments. Comparisons between the ADM and conventional iterative algorithms show that the former noticeably improves the spatial resolution within a reasonable time.
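
    The generic alternating-direction idea behind such reconstructions can be illustrated on the standard l1-regularized least-squares problem: one sub-step solves a linear system, the other is a soft-thresholding, and a scaled dual variable ties them together. The sketch below is not the dual-based ADM or the γ-CT forward model of the paper.

```python
import numpy as np

def admm_l1(A, b, lam=0.1, rho=1.0, iters=200):
    """Alternating direction method for min 0.5*||Ax - b||^2 + lam*||x||_1.
    x-update: linear solve; z-update: soft thresholding; u: scaled dual variable.
    A generic sketch -- the paper uses a dual-based ADM tailored to gamma-CT."""
    m, n = A.shape
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))   # factor once, reuse every iteration
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    soft = lambda v, k: np.sign(v) * np.maximum(np.abs(v) - k, 0.0)
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        z = soft(x + u, lam / rho)
        u = u + x - z
    return z                                        # sparse image estimate
```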

  4. SciDAC Visualization and Analytics Center for Enabling Technology

    SciTech Connect

    Bethel, E. Wes; Johnson, Chris; Joy, Ken; Ahern, Sean; Pascucci, Valerio; Childs, Hank; Cohen, Jonathan; Duchaineau, Mark; Hamann, Bernd; Hansen, Charles; Laney, Dan; Lindstrom, Peter; Meredith, Jeremy; Ostrouchov, George; Parker, Steven; Silva, Claudio; Sanderson, Allen; Tricoche, Xavier

    2006-11-28

    The SciDAC2 Visualization and Analytics Center for Enabling Technologies (VACET) began operation on 10/1/2006. This document, dated 11/27/2006, is the first version of the VACET project management plan. It was requested by and delivered to ASCR/DOE. It outlines the Center's accomplishments in the first six weeks of operation along with broad objectives for the upcoming future (12-24 months).

  5. See-Through Technology for Biological Tissue: 3-Dimensional Visualization of Macromolecules

    PubMed Central

    2016-01-01

    Tissue clearing technology is currently one of the fastest growing fields in biomedical sciences. Tissue clearing techniques have become a powerful approach to understand further the structural information of intact biological tissues. Moreover, technological improvements in tissue clearing and optics allowed the visualization of neural network in the whole brain tissue with subcellular resolution. Here, we described an overview of various tissue-clearing techniques, with focus on the tissue-hydrogel mediated clearing methods, and discussed the main advantages and limitations of transparent tissue for clinical diagnosis. PMID:27230455

  6. See-Through Technology for Biological Tissue: 3-Dimensional Visualization of Macromolecules.

    PubMed

    Lee, Eunsoo; Kim, Hyun Jung; Sun, Woong

    2016-05-01

    Tissue clearing technology is currently one of the fastest growing fields in biomedical sciences. Tissue clearing techniques have become a powerful approach to understand further the structural information of intact biological tissues. Moreover, technological improvements in tissue clearing and optics allowed the visualization of neural network in the whole brain tissue with subcellular resolution. Here, we described an overview of various tissue-clearing techniques, with focus on the tissue-hydrogel mediated clearing methods, and discussed the main advantages and limitations of transparent tissue for clinical diagnosis. PMID:27230455

  7. Selective pattern enhancement processing for digital mammography, algorithms, and the visual evaluation

    NASA Astrophysics Data System (ADS)

    Yamada, Masahiko; Shimura, Kazuo; Nagata, Takefumi

    2003-05-01

    In order to enhance microcalcifications selectively without enhancing noise, PEM (Pattern Enhancement Processing for Mammography) has been developed, utilizing not only the frequency information but also the structural information of the specified objects. PEM processing uses two structural characteristics: steep edge structure and low-density isolated-point structure. The visual evaluation of PEM processing was done using CR mammography images at two different resolutions. The image enhanced by PEM processing was compared with the image without enhancement and with the conventional unsharp-mask processed image. In the PEM-processed image, the increase in noise due to enhancement was suppressed compared with that in the conventional unsharp-mask processed image. An evaluation using the CDMAM phantom showed that PEM processing improved the detection performance for a minute circular pattern. By combining PEM processing with low- and medium-frequency enhancement processing, both mammary glands and microcalcifications are clearly enhanced.

  8. Theory and algorithms of an efficient fringe analysis technology for automatic measurement applications.

    PubMed

    Juarez-Salazar, Rigoberto; Guerrero-Sanchez, Fermin; Robledo-Sanchez, Carlos

    2015-06-10

    Some advances in fringe analysis technology for phase computing are presented. A full scheme for phase evaluation, applicable to automatic applications, is proposed. The proposal consists of: a fringe-pattern normalization method, Fourier fringe-normalized analysis, generalized phase-shifting processing for inhomogeneous nonlinear phase shifts and spatiotemporal visibility, and a phase-unwrapping method by a rounding-least-squares approach. The theoretical principles of each algorithm are given. Numerical examples and an experimental evaluation are presented. PMID:26192836
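
    The classical building block behind generalized phase-shifting is the least-squares recovery of a wrapped phase from N fringe patterns with known shifts. The sketch below shows that step for equally spaced shifts and falls back on numpy's simple unwrap rather than the rounding-least-squares unwrapping proposed in the paper.

```python
import numpy as np

def wrapped_phase(frames, deltas):
    """Phase from N phase-shifted fringe patterns I_k = a + b*cos(phi + delta_k),
    assuming the shifts delta_k are known and equally spaced over a full period
    (the standard N-step formula; generalized phase-shifting relaxes this)."""
    frames = np.asarray(frames, float)    # shape (N, H, W)
    deltas = np.asarray(deltas, float)    # shape (N,)
    num = np.tensordot(np.sin(deltas), frames, axes=(0, 0))
    den = np.tensordot(np.cos(deltas), frames, axes=(0, 0))
    return -np.arctan2(num, den)          # wrapped to (-pi, pi]

# Example: four patterns shifted by 0, pi/2, pi, 3*pi/2, then unwrapped row by row.
# phi = wrapped_phase(I, [0, np.pi/2, np.pi, 3*np.pi/2])
# phi_unwrapped = np.unwrap(phi, axis=1)
```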

  9. PDB explorer -- a web based algorithm for protein annotation viewer and 3D visualization.

    PubMed

    Nayarisseri, Anuraj; Shardiwal, Rakesh Kumar; Yadav, Mukesh; Kanungo, Neha; Singh, Pooja; Shah, Pratik; Ahmed, Sheaza

    2014-12-01

    The PDB file format is a text format characterizing the three-dimensional structures of macromolecules available in the Protein Data Bank (PDB). Determined protein structures are often found in association with other molecules or ions such as nucleic acids, water, ions and drug molecules, which can therefore also be described in the PDB format and have been deposited in the PDB database. A PDB file is machine generated and not in a human-readable format, so a computational tool is needed to interpret it. The objective of the present study is to develop free online software for the retrieval, visualization and reading of the annotation of a protein 3D structure available in the PDB database. The main aim is to render a PDB file in human-readable form, i.e., the information in the PDB file is converted into readable sentences. The tool displays all available information from a PDB file, including its 3D structure. Programming and scripting languages such as Perl, CSS, JavaScript, Ajax, and HTML have been used for the development of PDB Explorer. PDB Explorer directly parses the PDB file, calling methods for the parsed elements: secondary structure elements, atoms, coordinates, etc. PDB Explorer is freely available at http://www.pdbexplorer.eminentbio.com/home with no log-in required. PMID:25118648
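
    The kind of parsing such a viewer performs can be illustrated by reading the fixed-column ATOM/HETATM records of a PDB file into named fields. The Python sketch below is illustrative only; PDB Explorer itself is implemented in Perl and JavaScript.

```python
def parse_pdb_atoms(path):
    """Read ATOM/HETATM records from a PDB file using the fixed column layout
    defined by the PDB format, returning human-readable dictionaries.
    Illustrative only; the PDB Explorer web tool is written in Perl/JavaScript."""
    atoms = []
    with open(path) as fh:
        for line in fh:
            if line.startswith(("ATOM", "HETATM")):
                atoms.append({
                    "serial":  int(line[6:11]),
                    "name":    line[12:16].strip(),
                    "residue": line[17:20].strip(),
                    "chain":   line[21].strip(),
                    "res_seq": int(line[22:26]),
                    "x": float(line[30:38]),
                    "y": float(line[38:46]),
                    "z": float(line[46:54]),
                    "element": line[76:78].strip(),
                })
    return atoms

# Example sentence-style output, one readable line per atom:
# for a in parse_pdb_atoms("1abc.pdb")[:3]:
#     print(f"Atom {a['serial']} ({a['name']}) of residue {a['residue']}{a['res_seq']} "
#           f"in chain {a['chain']} is at ({a['x']}, {a['y']}, {a['z']}).")
```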

  10. [Segmentation of Winter Wheat Canopy Image Based on Visual Spectral and Random Forest Algorithm].

    PubMed

    Liu, Ya-dong; Cui, Ri-xian

    2015-12-01

    Digital image analysis has been widely used in non-destructive monitoring of crop growth and nitrogen nutrition status due to its simplicity and efficiency. It is necessary to segment the winter wheat plant from the soil background to assess canopy cover, the intensity levels of the visible spectrum (R, G, and B) and other color indices derived from RGB. In the present study, based on the variation in the R, G, and B components of the sRGB color space and the L*, a*, and b* components of the CIEL*a*b* color space between wheat plant and soil background, segmentation of the wheat plant from the soil background was conducted with Otsu's method applied to the a* component of the CIEL*a*b* color space, an RGB-based random forest method, and a CIEL*a*b*-based random forest method. The ability to segment wheat plant from soil background was evaluated using segmentation accuracy. The results showed that all three methods segmented the wheat plant from the soil background well. Otsu's method had the lowest segmentation accuracy of the three, and there was only a small difference in segmentation error between the two random forest methods. In conclusion, the random forest method can segment the wheat plant from the soil background using only the visible spectral information of the canopy image, without any color component combinations or color space transformations. PMID:26964234
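
    Both segmentation routes compared in the study can be sketched with scikit-image and scikit-learn: Otsu's threshold on the a* channel, and a pixel-wise random forest on raw RGB values. The training pixels and labels below are assumed to be sampled by the user.

```python
import numpy as np
from skimage.color import rgb2lab
from skimage.filters import threshold_otsu
from sklearn.ensemble import RandomForestClassifier

def otsu_on_a_channel(rgb):
    """Plant/soil mask from Otsu's threshold on the a* channel of CIEL*a*b*.
    Green vegetation has low (negative) a*, so pixels below the threshold are plant."""
    a = rgb2lab(rgb)[:, :, 1]
    return a < threshold_otsu(a)

def rf_on_rgb(rgb, train_pixels, train_labels, n_trees=100):
    """Pixel-wise random forest on raw R, G, B values.
    train_pixels: (n, 3) array of sampled pixels; train_labels: 1 = plant, 0 = soil."""
    clf = RandomForestClassifier(n_estimators=n_trees, random_state=0)
    clf.fit(train_pixels, train_labels)
    h, w, _ = rgb.shape
    return clf.predict(rgb.reshape(-1, 3)).reshape(h, w).astype(bool)
```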

  11. The Combination Design of Enabling Technologies in Group Learning: New Study Support Service for Visually Impaired University Students

    ERIC Educational Resources Information Center

    Tangsri, Chatcai; Na-Takuatoong, Onjaree; Sophatsathit, Peraphon

    2013-01-01

    This article aims to show how the process of new service technology-based development improves the current study support service for visually impaired university students. Numerous studies have contributed to improving assisted aid technology such as screen readers, the development and the use of audiobooks, and technology that supports individual…

  12. The Use of Assistive Technology by High School Students with Visual Impairments: A Second Look at the Current Problem

    ERIC Educational Resources Information Center

    Kelly, Stacy M.

    2011-01-01

    Even though a wide variety of assistive technology tools and devices are available in the marketplace, many students with visual impairments (that is, those who are blind or have low vision) have not yet benefitted from using this specialized technology. This article presents a study that assessed the use of assistive technology by high school…

  13. Microscope and spectacle: on the complexities of using new visual technologies to communicate about wildlife conservation.

    PubMed

    Verma, Audrey; van der Wal, René; Fischer, Anke

    2015-11-01

    Wildlife conservation-related organisations increasingly employ new visual technologies in their science communication and public engagement efforts. Here, we examine the use of such technologies for wildlife conservation campaigns. We obtained empirical data from four UK-based organisations through semi-structured interviews and participant observation. Visual technologies were used to provide the knowledge and generate the emotional responses perceived by organisations as being necessary for motivating a sense of caring about wildlife. We term these two aspects 'microscope' and 'spectacle', metaphorical concepts denoting the duality through which these technologies speak to both the cognitive and the emotional. As conservation relies on public support, organisations have to be seen to deliver information that is not only sufficiently detailed and scientifically credible but also spectacular enough to capture public interest. Our investigation showed that balancing science and entertainment is a difficult undertaking for wildlife-related organisations as there are perceived risks of contriving experiences of nature and obscuring conservation aims. PMID:26508351

  14. Enhancing Visualization Skills-Improving Options and Success (EnViSIONS) of Engineering and Technology Students

    ERIC Educational Resources Information Center

    Veurink, N. L.; Hamlin, A. J.; Kampe; J. C. M.; Sorby, S. A.; Blasko, D. G.; Holliday-Darr, K. A.; Kremer, J. D. Trich; Harris, L. V. Abe; Connolly, P. E.; Sadowski, M. A.; Harris, K. S.; Brus, C. P.; Boyle, L. N.; Study, N. E.; Knott, T. W.

    2009-01-01

    Spatial visualization skills are vital to many careers and in particular to STEM fields. Materials have been developed at Michigan Technological University and Penn State Erie, The Behrend College to assess and develop spatial skills. The EnViSIONS (Enhancing Visualization Skills-Improving Options aNd Success) project is combining these materials…

  15. Computational fluid dynamics for propulsion technology: Geometric grid visualization in CFD-based propulsion technology research

    NASA Technical Reports Server (NTRS)

    Ziebarth, John P.; Meyer, Doug

    1992-01-01

    The coordination is examined of necessary resources, facilities, and special personnel to provide technical integration activities in the area of computational fluid dynamics applied to propulsion technology. Involved is the coordination of CFD activities between government, industry, and universities. Current geometry modeling, grid generation, and graphical methods are established to use in the analysis of CFD design methodologies.

  16. Demonstration of three gorges archaeological relics based on 3D-visualization technology

    NASA Astrophysics Data System (ADS)

    Xu, Wenli

    2015-12-01

    This paper focuses on the digital demonstration of Three Gorges archaeological relics to exhibit the achievements of the protective measures. A novel and effective method based on 3D visualization technology, which includes large-scale landscape reconstruction, a virtual studio, and virtual panoramic roaming, is proposed to create a digitized interactive demonstration system. The method consists of three stages: pre-processing, 3D modeling and integration. First, abundant archaeological information is classified according to its historical and geographical context. Second, a 3D model library is built with digital image processing and 3D modeling technology. Third, virtual reality technology is used to display the archaeological scenes and cultural relics vividly and realistically. The present work promotes the application of virtual reality to digital projects and enriches the content of digital archaeology.

  17. Visual feature extraction and establishment of visual tags in the intelligent visual internet of things

    NASA Astrophysics Data System (ADS)

    Zhao, Yiqun; Wang, Zhihui

    2015-12-01

    The Internet of Things (IoT) is a kind of intelligent network that can be used to locate, track, identify and supervise people and objects. One of the core technologies of the intelligent visual Internet of Things (IVIOT) is the intelligent visual tag system. In this paper, research is carried out on visual feature extraction and the establishment of visual tags for the human face based on the ORL face database. First, we use the principal component analysis (PCA) algorithm for face feature extraction; then we adopt the support vector machine (SVM) for classification and face recognition; finally, we establish a visual tag for each classified face. We conducted an experiment on a group of face images, and the results show that the proposed algorithm performs well and can present the visual tag of objects conveniently.
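
    The PCA-plus-SVM tagging pipeline described above maps directly onto a small scikit-learn sketch. The split ratio, component count and SVM parameters are assumptions, and any (n_samples, n_pixels) face matrix (e.g. flattened ORL images) can stand in for the data.

```python
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

def build_visual_tagger(face_matrix, person_ids, n_components=50):
    """face_matrix: (n_samples, n_pixels) flattened grayscale face images,
    person_ids: integer identity labels. Returns a fitted PCA+SVM pipeline plus
    its accuracy on a held-out split -- a minimal stand-in for the visual-tag
    assignment step described above."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        face_matrix, person_ids, test_size=0.25, stratify=person_ids, random_state=0)
    tagger = make_pipeline(PCA(n_components=n_components, whiten=True),
                           SVC(kernel="rbf", C=10.0, gamma="scale"))
    tagger.fit(X_tr, y_tr)
    return tagger, tagger.score(X_te, y_te)

# The predicted identity can then be stored as the visual tag of a new face:
# tag = tagger.predict(new_face.reshape(1, -1))[0]
```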

  18. Designing a Collaborative Visual Analytics Tool for Social and Technological Change Prediction.

    SciTech Connect

    Wong, Pak C.; Leung, Lai-Yung R.; Lu, Ning; Scott, Michael J.; Mackey, Patrick S.; Foote, Harlan P.; Correia, James; Taylor, Zachary T.; Xu, Jianhua; Unwin, Stephen D.; Sanfilippo, Antonio P.

    2009-09-01

    We describe our ongoing efforts to design and develop a collaborative visual analytics tool to interactively model social and technological change of our society in a future setting. The work involves an interdisciplinary team of scientists from atmospheric physics, electrical engineering, building engineering, social sciences, economics, public policy, and national security. The goal of the collaborative tool is to predict the impact of global climate change on the U.S. power grids and its implications for society and national security. These future scenarios provide critical assessment and information necessary for policymakers and stakeholders to help formulate a coherent, unified strategy toward shaping a safe and secure society. The paper introduces the problem background and related work, explains the motivation and rationale behind our design approach, presents our collaborative visual analytics tool and usage examples, and finally shares the development challenge and lessons learned from our investigation.

  19. The monocular visual imaging technology model applied in the airport surface surveillance

    NASA Astrophysics Data System (ADS)

    Qin, Zhe; Wang, Jian; Huang, Chao

    2013-08-01

    At present, civil aviation airports use surface surveillance radar monitoring and positioning systems to monitor aircraft, vehicles and other moving objects. Surface surveillance radar can cover most of the airport scene, but because of the geometry of terminals, covered bridges and other buildings, surface surveillance radar systems inevitably have some small blind spots. This paper presents a monocular vision imaging technology model for airport surface surveillance, achieving perception of moving objects in the scene, such as the locations of aircraft, vehicles and personnel. This new model provides an important complement to airport surface surveillance and differs from traditional surface surveillance radar techniques. Such a technique not only provides a clear view of object activity for the ATC, but also provides image recognition and positioning of moving targets in the area, thereby improving the efficiency of airport operations and helping avoid conflicts between aircraft and vehicles. This paper first introduces the monocular visual imaging technology model applied in airport surface surveillance and then analyzes the measurement accuracy of the model. The monocular visual imaging technology model is simple, low cost, and highly efficient. It is an advanced monitoring technique which can compensate for the blind spots of surface surveillance radar monitoring and positioning systems.

  20. Quantitative detection of defects based on Markov-PCA-BP algorithm using pulsed infrared thermography technology

    NASA Astrophysics Data System (ADS)

    Tang, Qingju; Dai, Jingmin; Liu, Junyan; Liu, Chunsheng; Liu, Yuanlin; Ren, Chunping

    2016-07-01

    Quantitative detection of the diameter and depth of debonding defects in thermal barrier coatings (TBCs) has been carried out using pulsed infrared thermography. By combining principal component analysis with neural network theory, a Markov-PCA-BP algorithm is proposed, and its principle and implementation are described. In the prediction model, the principal components that capture most of the characteristics of the thermal wave signal are set as the input, and the defect depth and diameter are set as the output. Experimental data from pulsed infrared thermography tests of TBCs with flat-bottom hole defects were selected as the training and testing samples. A Markov-PCA-BP predictive system was established, based on which both the defect depth and diameter were identified accurately, demonstrating the effectiveness of the proposed method for quantitative detection of debonding defects in TBCs.
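
    The PCA-BP portion of the prediction model can be sketched with scikit-learn: compress each defect's thermal-wave signal with PCA and regress depth and diameter with a small backpropagation network. The Markov stage of the paper's Markov-PCA-BP variant is omitted, and the shapes and hyperparameters are assumptions.

```python
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def fit_defect_model(thermal_signals, depth_diameter, n_components=8):
    """thermal_signals: (n_defects, n_time_samples) pulsed-thermography signals,
    depth_diameter: (n_defects, 2) known depth and diameter of flat-bottom holes.
    Returns a PCA + BP-network regressor (the Markov stage is not modeled here)."""
    model = make_pipeline(
        StandardScaler(),
        PCA(n_components=n_components),             # keep the dominant thermal features
        MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0))
    model.fit(thermal_signals, depth_diameter)
    return model

# Prediction for a new defect signature:
# depth, diameter = model.predict(new_signal.reshape(1, -1))[0]
```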

  1. Ontology Sparse Vector Learning Algorithm for Ontology Similarity Measuring and Ontology Mapping via ADAL Technology

    NASA Astrophysics Data System (ADS)

    Gao, Wei; Zhu, Linli; Wang, Kaiyun

    2015-12-01

    Ontology, a model of knowledge representation and storage, has had extensive applications in pharmaceutics, social science, chemistry and biology. In the age of “big data”, the constructed concepts are often represented as higher-dimensional data by scholars, and thus the sparse learning techniques are introduced into ontology algorithms. In this paper, based on the alternating direction augmented Lagrangian method, we present an ontology optimization algorithm for ontological sparse vector learning, and a fast version of such ontology technologies. The optimal sparse vector is obtained by an iterative procedure, and the ontology function is then obtained from the sparse vector. Four simulation experiments show that our ontological sparse vector learning model has a higher precision ratio on plant ontology, humanoid robotics ontology, biology ontology and physics education ontology data for similarity measuring and ontology mapping applications.

  2. Critical Visual Literacy: The New Phase of Applied Linguistics in the Era of Mobile Technology

    ERIC Educational Resources Information Center

    Dos Santos Costa, Giselda; Xavier, Antonio Carlos

    2016-01-01

    In our society, which is full of images, visual representations and visual experiences of all kinds, there is a paradoxically significant degree of visual illiteracy. Despite the importance of developing specific visual skills, visual literacy is not a priority in school curriculum (Spalter & van Dam, 2008). This work aims at (1) emphasising…

  3. The Impact of Assistive Technology on the Educational Performance of Students with Visual Impairments: A Synthesis of the Research

    ERIC Educational Resources Information Center

    Kelly, Stacy M.; Smith, Derrick W.

    2011-01-01

    This synthesis examined the research literature from 1965 to 2009 on the assistive technology that is used by individuals with visual impairments. The authors located and reviewed 256 articles for evidence-based research on assistive technology that had a positive impact on educational performance. Of the 256 studies, only 2 provided promising…

  4. 3D Simulation Technology as an Effective Instructional Tool for Enhancing Spatial Visualization Skills in Apparel Design

    ERIC Educational Resources Information Center

    Park, Juyeon; Kim, Dong-Eun; Sohn, MyungHee

    2011-01-01

    The purpose of this study is to explore the effectiveness of 3D simulation technology for enhancing spatial visualization skills in apparel design education and further to suggest an innovative teaching approach using the technology. Apparel design majors in an introductory patternmaking course, at a large Midwestern University in the United…

  5. Earthdata Search: Combining New Services and Technologies for Earth Science Data Discovery, Visualization, and Access

    NASA Astrophysics Data System (ADS)

    Quinn, P.; Pilone, D.

    2014-12-01

    A host of new services are revolutionizing discovery, visualization, and access of NASA's Earth science data holdings. At the same time, web browsers have become far more capable and open source libraries have grown to take advantage of these capabilities. Earthdata Search is a web application which combines modern browser features with the latest Earthdata services from NASA to produce a cutting-edge search and access client with features far beyond what was possible only a couple of years ago. Earthdata Search provides data discovery through the Common Metadata Repository (CMR), which provides a high-speed REST API for searching across hundreds of millions of data granules using temporal, spatial, and other constraints. It produces data visualizations by combining CMR data with Global Imagery Browse Services (GIBS) image tiles. Earthdata Search renders its visualizations using custom plugins built on Leaflet.js, a lightweight mobile-friendly open source web mapping library. The client further features an SVG-based interactive timeline view of search results. For data access, Earthdata Search provides easy temporal and spatial subsetting as well as format conversion by making use of OPeNDAP. While the client hopes to drive adoption of these services and standards, it provides fallback behavior for working with data that has not yet adopted them. This allows the client to remain on the cutting-edge of service offerings while still boasting a catalog containing thousands of data collections. In this session, we will walk through Earthdata Search and explain how it incorporates these new technologies and service offerings.
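
    A hedged sketch of the kind of CMR granule search Earthdata Search performs; the endpoint, parameter names and response layout follow the public CMR REST API as commonly documented, but they should be verified against the current CMR documentation, and the collection short name is only an example.

      import requests

      CMR_GRANULES = "https://cmr.earthdata.nasa.gov/search/granules.json"
      params = {
          "short_name": "MOD021KM",                                  # example collection short name
          "temporal": "2014-01-01T00:00:00Z,2014-01-02T00:00:00Z",   # temporal constraint
          "bounding_box": "-10,35,5,45",                             # west,south,east,north
          "page_size": 10,
      }
      resp = requests.get(CMR_GRANULES, params=params, timeout=30)
      resp.raise_for_status()
      # Atom-style JSON: matching granules are listed under feed/entry.
      for granule in resp.json()["feed"].get("entry", []):
          print(granule["title"], granule.get("time_start"))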

  6. A sensitive data extraction algorithm based on the content associated encryption technology for ICS

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Hao, Huang; Xie, Changsheng

    With the development of HD video, copyright protection has become more complicated and more advanced copyright protection technology is needed. Traditional digital copyright protection generally uses direct or selective encryption algorithms in which the key is not associated with the video content [1]. Once the encryption method is cracked or the key is stolen, the copyright of the video can be violated. To address this issue, this paper proposes a Sensitive Data Extraction Algorithm (SDEA) based on content associated encryption technology, applied to the Internet Certification Service (ICS). The principle of content associated encryption is to extract some data from the video and use the extracted data as the key to encrypt the remaining data. The extracted part of the video is called the sensitive data, and the remainder is called the main data. After extraction, the main data cannot be played, or plays only poorly. The encrypted sensitive data reach the terminal device through a safety-certificated network, while the main data are delivered on the ICS disc, and the terminal equipment synthesizes and plays the two parts together. Consequently, even if the main data on the disc are illegally obtained, the video cannot be played normally because the necessary sensitive data are missing. Experiments show that ICS using SDEA can effectively destruct the video at an extraction rate of 0.25%, and the destructed video cannot be played acceptably. The method also guarantees a consistent destructive effect across videos with different contents, and the sensitive data can be transported smoothly over home Internet bandwidth.
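
    A toy illustration of the content-associated idea, not the paper's SDEA: roughly 1 byte in 400 (about 0.25%) is extracted as "sensitive data" and also seeds the key that scrambles the remaining "main data". The step size, the SHA-256 keystream and the byte-level split are all assumptions made for the sketch; a real system would extract codec-aware data and use a vetted cipher.

      import hashlib

      def split_and_encrypt(video_bytes, step=400):
          # Extract every step-th byte as the sensitive part (~0.25% at step=400).
          sensitive = bytes(video_bytes[::step])
          main = bytearray(b for i, b in enumerate(video_bytes) if i % step != 0)
          key = hashlib.sha256(sensitive).digest()
          # Simple counter-mode keystream derived from the key (illustrative only).
          out = bytearray()
          for block_idx in range(0, len(main), 32):
              ks = hashlib.sha256(key + block_idx.to_bytes(8, "big")).digest()
              chunk = main[block_idx:block_idx + 32]
              out.extend(b ^ k for b, k in zip(chunk, ks))
          return sensitive, bytes(out)

      sensitive, scrambled_main = split_and_encrypt(bytes(range(256)) * 100)
      print(len(sensitive), len(scrambled_main))   # the sensitive part is tiny relative to the main data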

  7. Collaborative Visualization Project: shared-technology learning environments for science learning

    NASA Astrophysics Data System (ADS)

    Pea, Roy D.; Gomez, Louis M.

    1993-01-01

    Project-enhanced science learning (PESL) provides students with opportunities for `cognitive apprenticeships' in authentic scientific inquiry using computers for data-collection and analysis. Student teams work on projects with teacher guidance to develop and apply their understanding of science concepts and skills. We are applying advanced computing and communications technologies to augment and transform PESL at-a-distance (beyond the boundaries of the individual school), which is limited today to asynchronous, text-only networking and unsuitable for collaborative science learning involving shared access to multimedia resources such as data, graphs, tables, pictures, and audio-video communication. Our work creates user technology (a Collaborative Science Workbench providing PESL design support and shared synchronous document views, program, and data access; a Science Learning Resource Directory for easy access to resources including two-way video links to collaborators, mentors, museum exhibits, media-rich resources such as scientific visualization graphics), and refines enabling technologies (audiovisual and shared-data telephony, networking) for this PESL niche. We characterize participation scenarios for using these resources and we discuss national networked access to science education expertise.

  8. Designing Haptic Assistive Technology for Individuals Who Are Blind or Visually Impaired.

    PubMed

    Pawluk, Dianne T V; Adams, Richard J; Kitada, Ryo

    2015-01-01

    This paper considers issues relevant for the design and use of haptic technology for assistive devices for individuals who are blind or visually impaired in some of the major areas of importance: Braille reading, tactile graphics, orientation and mobility. We show that there is a wealth of behavioral research that is highly applicable to assistive technology design. In a few cases, conclusions from behavioral experiments have been directly applied to design with positive results. Differences in brain organization and performance capabilities between individuals who are "early blind" and "late blind" from using the same tactile/haptic accommodations, such as the use of Braille, suggest the importance of training and assessing these groups individually. Practical restrictions on device design, such as performance limitations of the technology and cost, raise questions as to which aspects of these restrictions are truly important to overcome to achieve high performance. In general, this raises the question of what it means to provide functional equivalence as opposed to sensory equivalence. PMID:26336151

  9. Game on, science - how video game technology may help biologists tackle visualization challenges.

    PubMed

    Lv, Zhihan; Tek, Alex; Da Silva, Franck; Empereur-mot, Charly; Chavent, Matthieu; Baaden, Marc

    2013-01-01

    The video games industry develops ever more advanced technologies to improve rendering, image quality, ergonomics and user experience of their creations providing very simple to use tools to design new games. In the molecular sciences, only a small number of experts with specialized know-how are able to design interactive visualization applications, typically static computer programs that cannot easily be modified. Are there lessons to be learned from video games? Could their technology help us explore new molecular graphics ideas and render graphics developments accessible to non-specialists? This approach points to an extension of open computer programs, not only providing access to the source code, but also delivering an easily modifiable and extensible scientific research tool. In this work, we will explore these questions using the Unity3D game engine to develop and prototype a biological network and molecular visualization application for subsequent use in research or education. We have compared several routines to represent spheres and links between them, using either built-in Unity3D features or our own implementation. These developments resulted in a stand-alone viewer capable of displaying molecular structures, surfaces, animated electrostatic field lines and biological networks with powerful, artistic and illustrative rendering methods. We consider this work as a proof of principle demonstrating that the functionalities of classical viewers and more advanced novel features could be implemented in substantially less time and with less development effort. Our prototype is easily modifiable and extensible and may serve others as starting point and platform for their developments. A webserver example, standalone versions for MacOS X, Linux and Windows, source code, screen shots, videos and documentation are available at the address: http://unitymol.sourceforge.net/. PMID:23483961

  10. Game On, Science - How Video Game Technology May Help Biologists Tackle Visualization Challenges

    PubMed Central

    Da Silva, Franck; Empereur-mot, Charly; Chavent, Matthieu; Baaden, Marc

    2013-01-01

    The video games industry develops ever more advanced technologies to improve rendering, image quality, ergonomics and user experience of their creations providing very simple to use tools to design new games. In the molecular sciences, only a small number of experts with specialized know-how are able to design interactive visualization applications, typically static computer programs that cannot easily be modified. Are there lessons to be learned from video games? Could their technology help us explore new molecular graphics ideas and render graphics developments accessible to non-specialists? This approach points to an extension of open computer programs, not only providing access to the source code, but also delivering an easily modifiable and extensible scientific research tool. In this work, we will explore these questions using the Unity3D game engine to develop and prototype a biological network and molecular visualization application for subsequent use in research or education. We have compared several routines to represent spheres and links between them, using either built-in Unity3D features or our own implementation. These developments resulted in a stand-alone viewer capable of displaying molecular structures, surfaces, animated electrostatic field lines and biological networks with powerful, artistic and illustrative rendering methods. We consider this work as a proof of principle demonstrating that the functionalities of classical viewers and more advanced novel features could be implemented in substantially less time and with less development effort. Our prototype is easily modifiable and extensible and may serve others as starting point and platform for their developments. A webserver example, standalone versions for MacOS X, Linux and Windows, source code, screen shots, videos and documentation are available at the address: http://unitymol.sourceforge.net/. PMID:23483961

  11. Challenges and Recent Developments in Hearing Aids: Part I. Speech Understanding in Noise, Microphone Technologies and Noise Reduction Algorithms

    PubMed Central

    Chung, King

    2004-01-01

    This review discusses the challenges in hearing aid design and fitting and the recent developments in advanced signal processing technologies to meet these challenges. The first part of the review discusses the basic concepts and the building blocks of digital signal processing algorithms, namely, the signal detection and analysis unit, the decision rules, and the time constants involved in the execution of the decision. In addition, mechanisms and the differences in the implementation of various strategies used to reduce the negative effects of noise are discussed. These technologies include the microphone technologies that take advantage of the spatial differences between speech and noise and the noise reduction algorithms that take advantage of the spectral difference and temporal separation between speech and noise. The specific technologies discussed in this paper include first-order directional microphones, adaptive directional microphones, second-order directional microphones, microphone matching algorithms, array microphones, multichannel adaptive noise reduction algorithms, and synchrony detection noise reduction algorithms. Verification data for these technologies, if available, are also summarized. PMID:15678225
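
    A toy numerical sketch of the first-order (delay-and-subtract) directional microphone principle the review describes: two omnidirectional ports a distance d apart, with an internal delay equal to the acoustic travel time between them, which produces a cardioid-like null toward the rear. The port spacing, tone frequency and sample grid are illustrative assumptions.

      import numpy as np

      c, d, f = 343.0, 0.012, 1000.0                 # speed of sound, 12 mm port spacing, 1 kHz tone
      internal_delay = d / c

      def output_level(angle_deg):
          angle = np.deg2rad(angle_deg)
          t = np.linspace(0.0, 0.02, 2000)
          external_delay = (d / c) * np.cos(angle)   # extra travel time to the rear port
          front = np.sin(2 * np.pi * f * t)
          rear_delayed = np.sin(2 * np.pi * f * (t - external_delay - internal_delay))
          out = front - rear_delayed                 # delay-and-subtract output
          return 20 * np.log10(np.sqrt(np.mean(out ** 2)) + 1e-12)

      for a in (0, 90, 180):                         # front, side, rear incidence
          print(a, round(output_level(a), 1), "dB")  # the rear direction is strongly attenuated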

  12. A Collaborative Education Network for Advancing Climate Literacy using Data Visualization Technology

    NASA Astrophysics Data System (ADS)

    McDougall, C.; Russell, E. L.; Murray, M.; Bendel, W. B.

    2013-12-01

    One of the more difficult issues in engaging broad audiences with scientific research is to present it in a way that is intuitive, captivating and up-to-date. Over the past ten years, the National Oceanic and Atmospheric Administration (NOAA) has made significant progress in this area through Science On a Sphere(R) (SOS). SOS is a room-sized, global display system that uses computers and video projectors to display Earth systems data onto a six-foot diameter sphere, analogous to a giant animated globe. This well-crafted data visualization system serves as a way to integrate and display global change phenomena, including polar ice melt, projected sea level rise, ocean acidification and global climate models. Beyond a display for individual data sets, SOS provides a holistic global perspective that highlights the interconnectedness of Earth systems, nations and communities. SOS is now a featured exhibit at more than 100 science centers, museums, universities, aquariums and other institutions around the world reaching more than 33 million visitors every year. To develop ways in which this data visualization technology and these visualizations could be used with public audiences, we recognized the need for an exchange of information among the users. To accomplish this, we established the SOS Users Collaborative Network. This network consists of the institutions that have an SOS system or partners who are creating content and educational programming for SOS. When we began the Network in 2005, many museums had limited capacity to both incorporate real-time, authentic scientific data about the Earth system and interpret global change visualizations. They needed not only the visualization platform and the scientific content, but also assistance with methods of approach. We needed feedback from these users on how to craft understandable visualizations and how to further develop the SOS platform to support learning. Through this Network and the collaboration

  13. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem-solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
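
    A minimal genetic algorithm sketch (selection, crossover, mutation) of the kind the report introduces; the toy fitness function, population size and rates are arbitrary choices for illustration, not from the report.

      import random

      random.seed(0)
      POP, BITS, GENS, MUT = 30, 20, 60, 0.02
      fitness = lambda ind: sum(ind)                             # toy objective: count of 1-bits

      pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
      for _ in range(GENS):
          # Tournament selection: keep the fitter of two randomly drawn individuals.
          parents = [max(random.sample(pop, 2), key=fitness) for _ in range(POP)]
          children = []
          for a, b in zip(parents[::2], parents[1::2]):
              cut = random.randrange(1, BITS)                    # one-point crossover
              for child in (a[:cut] + b[cut:], b[:cut] + a[cut:]):
                  children.append([1 - g if random.random() < MUT else g for g in child])
          pop = children
      print(max(map(fitness, pop)), "of", BITS)                  # best fitness found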

  14. Toward an Improved Haptic Zooming Algorithm for Graphical Information Accessed by Individuals Who Are Blind and Visually Impaired

    ERIC Educational Resources Information Center

    Rastogi, Ravi; Pawluk, Dianne T. V.

    2013-01-01

    An increasing amount of information content used in school, work, and everyday living is presented in graphical form. Unfortunately, it is difficult for people who are blind or visually impaired to access this information, especially when many diagrams are needed. One problem is that details, even in relatively simple visual diagrams, can be very…

  15. Simulation Evaluation of Synthetic Vision as an Enabling Technology for Equivalent Visual Operations

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.

    2008-01-01

    Enhanced Vision (EV) and synthetic vision (SV) systems may serve as enabling technologies to meet the challenges of the Next Generation Air Transportation System (NextGen) Equivalent Visual Operations (EVO) concept, that is, the ability to achieve or even improve on the safety of Visual Flight Rules (VFR) operations, maintain the operational tempos of VFR, and even, perhaps, retain VFR procedures independent of actual weather and visibility conditions. One significant challenge lies in the definition of the equipage required on the aircraft and at the airport to enable the EVO concept objective. A piloted simulation experiment was conducted to evaluate the effects of the presence or absence of Synthetic Vision, the location of this information during an instrument approach (i.e., on a Head-Up or Head-Down Primary Flight Display), and the type of airport lighting information on landing minima. The quantitative data from this experiment were analyzed to begin the definition of performance-based criteria for all-weather approach and landing operations. Objective results from the present study showed that better approach performance was attainable with the head-up display (HUD) compared to the head-down display (HDD). A slight improvement in HDD performance was shown when SV was added, as the pilots descended below 200 ft to a 100 ft decision altitude, but this improvement was not tested for statistical significance (nor was it expected to be statistically significant). The touchdown data showed that, regardless of the display concept flown (SV HUD, Baseline HUD, SV HDD, Baseline HDD), a majority of the runs were within the defined performance-based approach and landing criteria in all the visibility levels, approach lighting systems, and decision altitudes tested. For this visual flight maneuver, RVR appeared to be the most significant influence on touchdown performance. The approach lighting system clearly impacted the pilot's ability to descend to 100 ft

  16. Development of layout split algorithms and printability evaluation for double patterning technology

    NASA Astrophysics Data System (ADS)

    Chiou, Tsann-Bim; Socha, Robert; Chen, Hong; Chen, Luoqi; Hsu, Stephen; Nikolsky, Peter; van Oosten, Anton; Chen, Alek C.

    2008-03-01

    When using the most advanced water-based immersion scanner at the 32 nm half-pitch node, the required image resolution falls below the k1 limit of 0.25. If EUV technology is not ready for mass production, double patterning technology (DPT) is one of the solutions to bridge the gap between wet ArF and EUV platforms. DPT implies a patterning process with two photolithography/etching steps. As a result, the critical pitch is reduced by a factor of 2, which means the k1 value could increase by a factor of 2. Due to the superimposition of patterns printed by two separate patterning steps, the overlay capability, in addition to imaging capability, contributes to critical dimension uniformity (CDU). Wafer throughput as well as cost is a concern because of the increased number of process steps. Therefore, the imaging, overlay, and throughput performance of a scanner must be improved in order to implement DPT cost effectively. In addition, DPT requires innovative software to evenly split the patterns into two layers for the full chip. Although current electronic design automation (EDA) tools can split the pattern through abundant geometry-manipulation functions, these functions alone are insufficient. A rigorous pattern split requires more DPT-specific functions such as tagging/grouping critical features with two colors (and hence two layers), controlling the coloring sequence, correcting the printing error on stitching boundaries, dealing with color conflicts, increasing the coloring accuracy, considering full-chip possibility, etc. Therefore, in this paper we cover these issues by demonstrating a newly developed DPT pattern-split algorithm using a rule-based method. This method has the strong advantage of very fast processing speed, so a full-chip DPT pattern split is practical. After the pattern split, all of the color conflicts are highlighted. Some of the color conflicts can be resolved by aggressive model-based methods, while the un
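
    A greatly simplified sketch of the two-coloring idea behind such a rule-based split: features closer than a minimum spacing must go on different masks, the conflict graph is 2-colored, and odd cycles are reported as color conflicts. Treating features as centroid points and the spacing rule as a single distance threshold are assumptions made only for the illustration.

      from collections import deque

      def split_into_two_masks(features, min_space):
          # features: list of (x, y) centroids; min_space: minimum same-mask spacing.
          n = len(features)
          def too_close(a, b):
              (ax, ay), (bx, by) = a, b
              return (ax - bx) ** 2 + (ay - by) ** 2 < min_space ** 2
          adj = [[j for j in range(n) if j != i and too_close(features[i], features[j])]
                 for i in range(n)]
          color, conflicts = [None] * n, []
          for start in range(n):
              if color[start] is not None:
                  continue
              color[start] = 0
              queue = deque([start])
              while queue:
                  i = queue.popleft()
                  for j in adj[i]:
                      if color[j] is None:
                          color[j] = 1 - color[i]        # assign the opposite mask
                          queue.append(j)
                      elif color[j] == color[i] and (j, i) not in conflicts:
                          conflicts.append((i, j))       # odd cycle: cannot be 2-colored
          return color, conflicts

      # Feature centroids in nm (illustrative); minimum same-mask spacing of 60 nm.
      feats = [(0, 0), (50, 0), (100, 0), (25, 40)]
      print(split_into_two_masks(feats, 60))             # colors per feature plus flagged conflicts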

  17. A Meta-Analysis of the Educational Effectiveness of Three-Dimensional Visualization Technologies in Teaching Anatomy

    ERIC Educational Resources Information Center

    Yammine, Kaissar; Violato, Claudio

    2015-01-01

    Many medical graduates are deficient in anatomy knowledge and perhaps below the standards for safe medical practice. Three-dimensional visualization technology (3DVT) has been advanced as a promising tool to enhance anatomy knowledge. The purpose of this review is to conduct a meta-analysis of the effectiveness of 3DVT in teaching and learning…

  18. Collaborative Action Research Approach Promoting Professional Development for Teachers of Students with Visual Impairment in Assistive Technology

    ERIC Educational Resources Information Center

    Argyropoulos, Vassilios; Nikolaraizi, Magda; Tsiakali, Thomai; Kountrias, Polychronis; Koutsogiorgou, Sofia-Marina; Martos, Aineias

    2014-01-01

    This paper highlights the framework and discusses the results of an action research project which aimed to facilitate the adoption of assistive technology devices and specialized software by teachers of students with visual impairment via a digital educational game, developed specifically for this project. The persons involved in this…

  19. Long-Term Impact of Improving Visualization Abilities of Minority Engineering and Technology Students: Preliminary Results

    ERIC Educational Resources Information Center

    Study, Nancy E.

    2011-01-01

    Previous studies found that students enrolled in introductory engineering graphics courses at a historically black university (HBCU) had significantly lower than average test scores on the Purdue Spatial Visualization Test: Visualization of Rotations (PSVT) when it was administered during the first week of class. Since the ability to visualize is…

  20. Application Of Cathode-Ray Tube Technology To The Clinical Evaluation Of Visual Functions

    NASA Astrophysics Data System (ADS)

    Vernier, Francoise; Charlier, Jacques; Nguyen, Duc D.

    1988-02-01

    Cathode-ray tubes (CRTs) have many applications in the clinical evaluation of visual functions. They have been used to test visual acuity, contrast sensitivity, visual fields, and early development of vision in preverbal children. Because CRTs provide considerable flexibility for the definition of spatial and temporal components of the stimulus, their use provides an attractive solution to many visual stimulation problems. However, there are some limitations due to the scanning of the picture frame by the electron beam and also to the electron-photon conversion process. The spatial, photometric, spectral, and temporal characteristics of a specifically designed monochromatic television system are evaluated with reference to the physiological requirements of visual tests.

  1. Visual Analysis of Complex Networks and Community Structure

    NASA Astrophysics Data System (ADS)

    Wu, Bin; Ye, Qi; Wang, Yi; Bi, Ran; Suo, Lijun; Hu, Deyong; Yang, Shengqi

    Many real-world domains can be represented as complex networks. A good visualization of a large and complex network is worth more than millions of words. Visual depictions of networks, which exploit human visual processing, make the structure of such complex networks easier to grasp than purely computational representations. We start by briefly introducing some key technologies of network visualization, such as graph drawing algorithms and community discovery methods. Typical tools for network visualization are also reviewed, and a newly developed software framework for network visual analysis, JSNVA, is introduced. Finally, the applications of JSNVA in bibliometric analysis and mobile call graph analysis are presented.
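
    A minimal sketch of the kind of pipeline such frameworks combine, a community-discovery method plus a force-directed layout, using networkx and matplotlib stand-ins rather than JSNVA itself; the example graph is illustrative.

      import networkx as nx
      import matplotlib.pyplot as plt
      from networkx.algorithms.community import greedy_modularity_communities

      G = nx.karate_club_graph()
      communities = greedy_modularity_communities(G)             # community discovery
      group = {node: i for i, com in enumerate(communities) for node in com}
      pos = nx.spring_layout(G, seed=42)                         # force-directed graph drawing
      nx.draw(G, pos, node_color=[group[n] for n in G], cmap=plt.cm.Set2,
              with_labels=True, node_size=300, font_size=8)
      plt.savefig("communities.png", dpi=150)                    # nodes colored by community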

  2. Foetal images: the power of visual technology in antenatal care and the implications for women's reproductive freedom.

    PubMed

    Zechmeister, I

    2001-01-01

    Continuing medico-technical progress has led to an increasing medicalisation of pregnancy and childbirth. One of the most common technologies in this context is ultrasound. Based on some identified 'pro-technology feminist theories', notably the postmodernist feminist discourse, the technology of ultrasound is analysed focusing mainly on social and political rather than clinical issues. As empirical research suggests, ultrasound is welcomed by the majority of women. The analysis, however, shows that attitudes and decisions of women are influenced by broader social aspects. Furthermore, it demonstrates how the visual technology of ultrasound, in addition to other reproductive technology in maternity care, is linked to the 'personification' of the foetus and has therefore contributed to a new image of the foetus. The exploration of these issues challenges some arguments of feminist discourse. It draws attention to possible adverse implications of the technology for women's reproductive freedom and indicates the importance of the topic for political discussions. PMID:11874254

  3. Anaglyph Image Technology As a Visualization Tool for Teaching Geology of National Parks

    NASA Astrophysics Data System (ADS)

    Stoffer, P. W.; Phillips, E.; Messina, P.

    2003-12-01

    Anaglyphic stereo viewing technology emerged in the mid 1800's. Anaglyphs use offset images in contrasting colors (typically red and cyan) that when viewed through color filters produce a three-dimensional (3-D) image. Modern anaglyph image technology has become increasingly easy to use and relatively inexpensive using digital cameras, scanners, color printing, and common image manipulation software. Perhaps the primary drawbacks of anaglyph images include visualization problems with primary colors (such as flowers, bright clothing, or blue sky) and distortion factors in large depth-of-field images. However, anaglyphs are more versatile than polarization techniques since they can be printed, displayed on computer screens (such as on websites), or projected with a single projector (as slides or digital images), and red and cyan viewing glasses cost less than polarization glasses and other 3-D viewing alternatives. Anaglyph images are especially well suited for most natural landscapes, such as views dominated by natural earth tones (grays, browns, greens), and they work well for sepia and black and white images (making the conversion of historic stereo photography into anaglyphs easy). We used a simple stereo camera setup incorporating two digital cameras with a rigid base to photograph landscape features in national parks (including arches, caverns, cactus, forests, and coastlines). We also scanned historic stereographic images. Using common digital image manipulation software we created websites featuring anaglyphs of geologic features from national parks. We used the same images for popular 3-D poster displays at the U.S. Geological Survey Open House 2003 in Menlo Park, CA. Anaglyph photography could easily be used in combined educational outdoor activities and laboratory exercises.
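
    A short sketch of the red-cyan composition step described above: the red channel comes from the left-eye image and the green and blue channels from the right-eye image. The file names are placeholders, and the sketch assumes the two views are already aligned and of equal size.

      import numpy as np
      from PIL import Image

      def make_anaglyph(left_path, right_path, out_path="anaglyph.png"):
          # Assumes left and right views have identical dimensions.
          left = np.asarray(Image.open(left_path).convert("RGB"), dtype=np.uint8)
          right = np.asarray(Image.open(right_path).convert("RGB"), dtype=np.uint8)
          out = right.copy()                   # green and blue stay from the right-eye view
          out[..., 0] = left[..., 0]           # red channel from the left-eye view
          Image.fromarray(out).save(out_path)
          return out_path

      # make_anaglyph("left.jpg", "right.jpg")   # view through red (left) / cyan (right) glasses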

  4. Pedagogical Praxis Surrounding the Integration of Photography, Visual Literacy, Digital Literacy, and Educational Technology into Business Education Classrooms: A Focus Group Study

    ERIC Educational Resources Information Center

    Schlosser, Peter Allen

    2010-01-01

    This paper reports on an investigation into how Marketing and Business Education Teachers utilize and integrate educational technology into curriculum through the use of photography. The ontology of this visual, technological, and language interface is explored with an eye toward visual literacy, digital literacy, and pedagogical praxis, focusing…

  5. Optimized design on condensing tubes high-speed TIG welding technology magnetic control based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Lu, Lin; Chang, Yunlong; Li, Yingmin; Lu, Ming

    2013-05-01

    An orthogonal experiment and a multivariate nonlinear regression equation were used to characterize the influence of an external transverse magnetic field and the Ar flow rate on welding quality when welding condenser pipe by high-speed argon tungsten-arc welding (TIG). The magnetic induction and the Ar flow rate were taken as the optimization variables, the tensile strength of the weld was set as the objective function on the basis of genetic algorithm theory, and an optimal design was then carried out. The optimization variables were constrained according to the requirements of physical production, and the genetic algorithm in MATLAB was used for the computation. A comparison between the optimized results and the experimental parameters showed that, even with many optimization variables, suitable process parameters for high-speed welding can be chosen by means of a genetic algorithm, and the optimized welding parameters agreed with the experimental results.

  6. Effectiveness of GeoWall Visualization Technology for Conceptualization of the Sun-Earth-Moon System

    NASA Astrophysics Data System (ADS)

    Turner, N. E.; Gray, C.; Mitchell, E. J.

    2004-12-01

    One persistent difficulty many introductory astronomy students face is the lack of a 3-dimensional mental model of the Earth-Moon system. Students without such a mental model can have a very hard time conceptualizing the geometric relationships that cause the cycle of lunar phases. The GeoWall is a recently developed and affordable projection mechanism for three-dimensional stereo visualization which is becoming a popular tool in classrooms and research labs. We present results from a study of the effect of a 3-D GeoWall simulation of the sunlit Earth-Moon system on undergraduate students' ability to understand the origins of lunar phases. We test students exposed only to in-class instruction, some who completed a laboratory exercise using the GeoWall Earth-Moon simulation, some who were exposed to both, and some who completed an alternate activity involving lunar observations. Students are given pre- and post-tests using a diagnostic instrument called the Lunar Phase Concept Inventory (LPCI). We discuss the effectiveness of this technology as a teaching tool for lunar phases.

  7. In Vivo Corneal Biomechanical Properties with Corneal Visualization Scheimpflug Technology in Chinese Population

    PubMed Central

    Wu, Ying

    2016-01-01

    Purpose. To determine the repeatability of recalculated corneal visualization Scheimpflug technology (CorVis ST) parameters and to study the variation of biomechanical properties and their association with demographic and ocular characteristics. Methods. A total of 783 healthy subjects were included in this study. Comprehensive ophthalmological examinations were conducted. The repeatability of the recalculated biomechanical parameters with 90 subjects was assessed by the coefficient of variation (CV) and intraclass correlation coefficient (ICC). Univariate and multivariate linear regression models were used to identify demographic and ocular factors. Results. The repeatability of the central corneal thickness (CCT), deformation amplitude (DA), and first/second applanation time (A1/A2-time) exhibited excellent repeatability (CV% ≤ 3.312% and ICC ≥ 0.929 for all measurements). The velocity in/out (Vin/out), highest concavity- (HC-) radius, peak distance (PD), and DA showed a normal distribution. Univariate linear regression showed a statistically significant correlation between Vin, Vout, DA, PD, and HC-radius and IOP, CCT, and corneal volume, respectively. Multivariate analysis showed that IOP and CCT were negatively correlated with Vin, DA, and PD, while there was a positive correlation between Vout and HC-radius. Conclusion. The ICCs of the recalculated parameters, CCT, DA, A1-time, and A2-time, exhibited excellent repeatability. IOP, CCT, and corneal volume significantly influenced the biomechanical properties of the eye. PMID:27493965

  8. Visualizing sexual assault: an exploration of the use of optical technologies in the medico-legal context.

    PubMed

    White, Deborah; Du Mont, Janice

    2009-01-01

    This article is an exploration of the visualization of sexual assault in the context of adult women. In investigating the production of visual evidence, we outline the evolution of the specialized knowledge of medico-legal experts and describe the optical technologies involved in medical forensic examinations. We theorize that the principles and practices characterizing medicine, science and the law are mirrored in the medico-legal response to sexual assault. More specifically, we suggest that the demand for visual proof underpins the positivist approach taken in the pursuit of legal truth and that the generation of such evidence is based on producing discrete and decontextualized empirical facts through what are perceived to be objective technologies. Drawing on interview and focus group data with 14 sexual assault nurse examiners (SANEs) in Ontario, Canada, we examine perceptions and experiences of the role of the visual in sexual assault. Certain of their comments appear to lend support to our theoretical assumptions, indicating a sense of the institutional overemphasis placed on physical damage to sexually assaulted women's bodies and the drive towards the increased technologization of visual evidence documentation. They also noted that physical injuries are frequently absent and that those observed through more refined tools of microvisualization such as colposcopes may be explained away as having resulted from either vigorous consensual sex or a "trivial" sexual assault. Concerns were expressed regarding the possibly problematic ways in which either the lack or particular nature of visual evidence may play out in the legal context. The process of documenting external and internal injuries created for some an uncomfortable sense of fragmenting and objectifying the bodies of those women they must simultaneously care for. We point to the need for further research to enhance our understanding of this issue. PMID:18952339

  9. [Research on the Source Identification of Mine Water Inrush Based on LIF Technology and SIMCA Algorithm].

    PubMed

    Yan, Peng-cheng; Zhou, Meng-ran; Liu, Qi-meng; Zhang, Kai-yuan; He, Chen-yang

    2016-01-01

    Rapid source identification of mine water inrush is of great significance for early warning and prevention of mine water hazards. Because traditional chemical methods of source identification take a long time, a method for rapid source identification of mine water inrush based on laser-induced fluorescence (LIF) technology and the soft independent modeling of class analogy (SIMCA) algorithm is put forward. Laser-induced fluorescence offers fast analysis and high sensitivity: with laser excitation, fluorescence spectra can be collected in real time by a fluorescence spectrometer, and the type of water sample can be identified from its spectrum. Once the database is complete, coal mine water source identification takes only a few seconds, which is of great significance for early warning and post-disaster relief in coal mine water disasters. In the experiment, a 405 nm laser was directed into 5 kinds of water-inrush samples to obtain 100 fluorescence spectra, which were then preprocessed. Fifteen spectra of each water-inrush sample, 75 spectra in total, were used as the prediction set, and the remaining 25 spectra as the test set. Principal component analysis (PCA) was used to model each of the 5 kinds of water samples, and the samples were then classified with SIMCA on the basis of these PCA models. The fluorescence spectra of the different water-inrush samples were found to differ markedly. After Gaussian-filter preprocessing, with 2 principal components and a significance level of α = 5%, SIMCA classified both the prediction set and the test set with 100% accuracy. PMID:27228775
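
    A simplified SIMCA-style sketch of the classification step: one PCA model per water source, with a new spectrum assigned to the class whose model reconstructs it with the smallest residual. Full SIMCA additionally applies statistical limits (e.g. F-tests) to the residuals, and the class names and synthetic spectra below are illustrative stand-ins for the LIF measurements.

      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(0)
      wavelengths = 300
      classes = {name: rng.normal(loc=mu, scale=0.3, size=(20, wavelengths))
                 for name, mu in [("sandstone water", 0.0), ("limestone water", 1.0), ("goaf water", 2.0)]}

      # One 2-component PCA model per class, as in the abstract.
      models = {name: PCA(n_components=2).fit(X) for name, X in classes.items()}

      def classify(spectrum):
          residuals = {}
          for name, pca in models.items():
              recon = pca.inverse_transform(pca.transform(spectrum[None, :]))
              residuals[name] = float(np.sum((spectrum - recon[0]) ** 2))
          return min(residuals, key=residuals.get)      # smallest reconstruction residual wins

      test = rng.normal(loc=1.0, scale=0.3, size=wavelengths)   # unknown sample resembling "limestone water"
      print(classify(test))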

  10. A Web-based Multi-user Interactive Visualization System For Large-Scale Computing Using Google Web Toolkit Technology

    NASA Astrophysics Data System (ADS)

    Weiss, R. M.; McLane, J. C.; Yuen, D. A.; Wang, S.

    2009-12-01

    We have created a web-based, interactive system for multi-user collaborative visualization of large data sets (on the order of terabytes) that allows users in geographically disparate locations to simultaneously and collectively visualize large data sets over the Internet. By leveraging asynchronous JavaScript and XML (AJAX) web development paradigms via the Google Web Toolkit (http://code.google.com/webtoolkit/), we are able to provide remote, web-based users a web portal to LCSE's (http://www.lcse.umn.edu) large-scale interactive visualization system already in place at the University of Minnesota that provides high resolution visualizations to the order of 15 million pixels by Megan Damon. In the current version of our software, we have implemented a new, highly extensible back-end framework built around HTTP "server push" technology to provide a rich collaborative environment and a smooth end-user experience. Furthermore, the web application is accessible via a variety of devices including netbooks, iPhones, and other web- and javascript-enabled cell phones. New features in the current version include the ability for (1) users to launch multiple visualizations, (2) a user to invite one or more other users to view their visualization in real-time (multiple observers), (3) users to delegate control aspects of the visualization to others (multiple controllers), and (4) users to engage in collaborative chat and instant messaging with other users within the user interface of the web application. We will explain choices made regarding implementation, overall system architecture and method of operation, and the benefits of an extensible, modular design. We will also discuss future goals, features, and our plans for increasing scalability of the system which includes a discussion of the benefits potentially afforded us by a migration of server-side components to the Google Application Engine (http://code.google.com/appengine/).

  11. SBIR Phase II Final Report for Scalable Grid Technologies for Visualization Services

    SciTech Connect

    Sebastien Barre; Will Schroeder

    2006-10-15

    This project developed software tools for the automation of grid computing. In particular, the project focused on visualization and imaging tools (VTK, ParaView and ITK); i.e., we developed tools to automatically create Grid services from C++ programs implemented using the open-source VTK visualization and ITK segmentation and registration systems. This approach helps non-Grid experts to create applications using tools with which they are familiar, ultimately producing Grid services for visualization and image analysis by invocation of an automatic process.

  12. A low-power imager and compression algorithms for a brain-machine visual prosthesis for the blind

    NASA Astrophysics Data System (ADS)

    Turicchia, L.; O'Halloran, M.; Kumar, D. P.; Sarpeshkar, R.

    2008-08-01

    We present a synchronous time-based dual-threshold imager that experimentally achieves 95.5 dB dynamic range, while consuming 1.79 nJ/pixel/frame, making it one of the most wide-dynamic-range energy-efficient imagers reported. The imager has 150×256 pixels, with a pixel pitch of 12.5μm × 12.5μm and a fill factor of 42.7%. The imager is intended for use in a brain-machine visual prosthesis for the blind where energy efficiency and power are of paramount importance. Such prostheses will also need to convey visual information to patients with relatively few electrodes and in a manner that minimizes electrode interactions, just as cochlear implants have accomplished for deaf subjects. To achieve these goals, we present a strategy that compresses visual information into the basis coefficients of a few image kernels that encode enough information to provide reasonably good image reconstruction with 60 electrodes. The strategy also uses time-multiplexed stimulation of electrodes to minimize channel interactions like the continuous interleaved sampling (CIS) strategy used in cochlear implants. Some of the image kernels that we employ are similar to the receptive fields observed in biology and may thus be natural to learn, just as cochlear-implant subjects have learned to reconstruct sound from a few filter basis coefficients.

  13. Restoring Fort Frontenac in 3D: Effective Usage of 3D Technology for Heritage Visualization

    NASA Astrophysics Data System (ADS)

    Yabe, M.; Goins, E.; Jackson, C.; Halbstein, D.; Foster, S.; Bazely, S.

    2015-02-01

    This paper is composed of three elements: 3D modeling, web design, and heritage visualization. The aim is to use computer graphics design to inform and create an interest in historical visualization by rebuilding Fort Frontenac using 3D modeling and interactive design. The final model will be integrated into an interactive website to learn more about the fort's historic importance. It is apparent that using computer graphics can save time and money when it comes to historical visualization. Visitors do not have to travel to the actual archaeological buildings. They can simply use the Web in their own home to learn about this information virtually. Meticulously following historical records to create a sophisticated restoration of archaeological buildings will draw viewers into visualizations, such as the historical world of Fort Frontenac. As a result, it allows the viewers to effectively understand the fort's social system, habits, and historical events.

  14. Computer architecture for efficient algorithmic executions in real-time systems: New technology for avionics systems and advanced space vehicles

    NASA Technical Reports Server (NTRS)

    Carroll, Chester C.; Youngblood, John N.; Saha, Aindam

    1987-01-01

    Improvements and advances in the development of computer architecture now provide innovative technology for the recasting of traditional sequential solutions into high-performance, low-cost, parallel system to increase system performance. Research conducted in development of specialized computer architecture for the algorithmic execution of an avionics system, guidance and control problem in real time is described. A comprehensive treatment of both the hardware and software structures of a customized computer which performs real-time computation of guidance commands with updated estimates of target motion and time-to-go is presented. An optimal, real-time allocation algorithm was developed which maps the algorithmic tasks onto the processing elements. This allocation is based on the critical path analysis. The final stage is the design and development of the hardware structures suitable for the efficient execution of the allocated task graph. The processing element is designed for rapid execution of the allocated tasks. Fault tolerance is a key feature of the overall architecture. Parallel numerical integration techniques, tasks definitions, and allocation algorithms are discussed. The parallel implementation is analytically verified and the experimental results are presented. The design of the data-driven computer architecture, customized for the execution of the particular algorithm, is discussed.

  15. Computer architecture for efficient algorithmic executions in real-time systems: new technology for avionics systems and advanced space vehicles

    SciTech Connect

    Carroll, C.C.; Youngblood, J.N.; Saha, A.

    1987-12-01

    Improvements and advances in the development of computer architecture now provide innovative technology for the recasting of traditional sequential solutions into high-performance, low-cost, parallel system to increase system performance. Research conducted in development of specialized computer architecture for the algorithmic execution of an avionics system, guidance and control problem in real time is described. A comprehensive treatment of both the hardware and software structures of a customized computer which performs real-time computation of guidance commands with updated estimates of target motion and time-to-go is presented. An optimal, real-time allocation algorithm was developed which maps the algorithmic tasks onto the processing elements. This allocation is based on the critical path analysis. The final stage is the design and development of the hardware structures suitable for the efficient execution of the allocated task graph. The processing element is designed for rapid execution of the allocated tasks. Fault tolerance is a key feature of the overall architecture. Parallel numerical integration techniques, tasks definitions, and allocation algorithms are discussed. The parallel implementation is analytically verified and the experimental results are presented. The design of the data-driven computer architecture, customized for the execution of the particular algorithm, is discussed.

  16. Visual Prostheses: The Enabling Technology to Give Sight to the Blind

    PubMed Central

    Maghami, Mohammad Hossein; Sodagar, Amir Masoud; Lashay, Alireza; Riazi-Esfahani, Hamid; Riazi-Esfahani, Mohammad

    2014-01-01

    Millions of patients are either slowly losing their vision or are already blind due to retinal degenerative diseases such as retinitis pigmentosa (RP) and age-related macular degeneration (AMD) or because of accidents or injuries. Employment of artificial means to treat extreme vision impairment has come closer to reality during the past few decades. Currently, many research groups work towards effective solutions to restore a rudimentary sense of vision to the blind. Aside from the efforts being put on replacing damaged parts of the retina by engineered living tissues or microfabricated photoreceptor arrays, implantable electronic microsystems, referred to as visual prostheses, are also sought as promising solutions to restore vision. From a functional point of view, visual prostheses receive image information from the outside world and deliver them to the natural visual system, enabling the subject to receive a meaningful perception of the image. This paper provides an overview of technical design aspects and clinical test results of visual prostheses, highlights past and recent progress in realizing chronic high-resolution visual implants as well as some technical challenges confronted when trying to enhance the functional quality of such devices. PMID:25709777

  17. Direct or Directed: Orchestrating a More Harmonious Approach to Teaching Technology within an Art & Design Higher Education Curriculum with Special Reference to Visual Communications Courses

    ERIC Educational Resources Information Center

    Marshall, Lindsey; Meachem, Lester

    2007-01-01

    In this scoping study we have investigated the integration of subject-specific software into the structure of visual communications courses. There is a view that the response within visual communications courses to the rapid developments in technology has been linked to necessity rather than by design. Through perceptions of staff with day-to-day…

  18. Augmented Reality as a Visual and Spatial Learning Tool in Technology Education

    ERIC Educational Resources Information Center

    Thornton, Timothy; Ernst, Jeremy V.; Clark, Aaron C.

    2012-01-01

    Improvement in instructional practices through dynamic means of delivery remains a central consideration to technology educators. To help accomplish this, one must constantly utilize contemporary and cutting-edge technological applications in attempts to provide a more beneficial learning experience for students. These technologies must…

  19. Delaunay algorithm and principal component analysis for 3D visualization of mitochondrial DNA nucleoids by Biplane FPALM/dSTORM.

    PubMed

    Alán, Lukáš; Špaček, Tomáš; Ježek, Petr

    2016-07-01

    Data segmentation and object rendering are required for localization super-resolution microscopy, fluorescence photoactivation localization microscopy (FPALM), and direct stochastic optical reconstruction microscopy (dSTORM). We developed and validated methods for segmenting objects based on Delaunay triangulation in 3D space, followed by facet culling. We applied them to visualize mitochondrial nucleoids, which confine DNA in complexes with mitochondrial (mt) transcription factor A (TFAM) and gene expression machinery proteins, such as the mt single-stranded-DNA-binding protein (mtSSB). Eos2-conjugated TFAM visualized nucleoids in HepG2 cells, and this was compared with dSTORM 3D-immunocytochemistry of TFAM, mtSSB, or DNA. The localized fluorophores of the FPALM/dSTORM data were segmented using Delaunay triangulation into polyhedron models and by principal component analysis (PCA) into general PCA ellipsoids. The PCA ellipsoids were normalized to the smoothed volume of the polyhedrons or by the net unsmoothed Delaunay volume and remodeled into rotational ellipsoids to obtain models, termed DVRE. The most frequent ellipsoid nucleoid model size was 35 × 45 × 95 nm for nucleoids imaged via TFAM, 35 × 45 × 75 nm for mtDNA cores, and 25 × 45 × 100 nm for nucleoids imaged via mtSSB. Nucleoids encompassed different point densities and wide size ranges, speculatively due to different activity stemming from different TFAM/mtDNA stoichiometry and density. Considering the twofold lower axial vs. lateral resolution, only bulky DVRE models with an aspect ratio >3 and tilted toward the xy-plane were considered to be two proximal nucleoids, suspected to occur after division following mtDNA replication. The existence of proximal nucleoids in mtDNA-dSTORM 3D images of mtDNA "doubling" supports possible direct observation of mt nucleoid division after mtDNA replication. PMID:26846371
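
    A sketch of the two ingredients described above applied to one localization cluster: a Delaunay tessellation for a polyhedral volume estimate and PCA for an ellipsoid model. The synthetic point cloud stands in for the FPALM/dSTORM localizations of a single nucleoid; facet culling and the paper's DVRE normalization are omitted, and the 2-sigma ellipsoid scaling is an assumption.

      import numpy as np
      from scipy.spatial import Delaunay

      rng = np.random.default_rng(3)
      pts = rng.normal(scale=(17.5, 22.5, 47.5), size=(400, 3))   # roughly a 35 x 45 x 95 nm cloud

      tri = Delaunay(pts)
      def tet_volume(simplex):
          a, b, c, d = pts[simplex]
          return abs(np.linalg.det(np.column_stack((b - a, c - a, d - a)))) / 6.0
      delaunay_volume = sum(tet_volume(s) for s in tri.simplices)  # total tessellation volume

      centered = pts - pts.mean(axis=0)
      eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))        # PCA of the cluster
      semi_axes = 2.0 * np.sqrt(eigvals)                           # 2-sigma semi-axes of the model ellipsoid
      print("Delaunay volume (nm^3):", round(delaunay_volume))
      print("PCA ellipsoid semi-axes (nm):", np.round(np.sort(semi_axes)[::-1], 1))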

  20. Capitalizing Distance Technologies To Benefit Rural Children and Youth with Visual Disabilities.

    ERIC Educational Resources Information Center

    Ferrell, Kay Alicyn; Wright, Charles; Persichitte, Kay A.; Lowell, Nathan

    The University of Northern Colorado developed a master's degree program to train specialists in the education of students with visual disabilities in the 14-state region of the Western Interstate Commission on Higher Education. The program is student-centered, stresses effective interaction between faculty and students and among students, and uses…

  1. A Wheelchair User with Visual and Intellectual Disabilities Managing Simple Orientation Technology for Indoor Travel

    ERIC Educational Resources Information Center

    Lancioni, Giulio E.; O'Reilly, Mark F.; Singh, Nirbhay N.; Sigafoos, Jeff; Campodonico, Francesca; Oliva, Doretta

    2009-01-01

    Persons with profound visual impairments and other disabilities, such as neuromotor and intellectual disabilities, may encounter serious orientation and mobility problems even in familiar indoor environments, such as their homes. Teaching these persons to develop maps of their daily environment, using miniature replicas of the areas or some…

  2. Computer-Based Compensatory Augmentative Communications Technology for Physically Disabled, Visually Impaired, and Speech Impaired Students.

    ERIC Educational Resources Information Center

    Shell, Duane F.; And Others

    1989-01-01

    The paper addresses computer-based augmentative writing systems for physically disabled and visually impaired students and augmentative communication systems for nonverbal speech-impaired students. Among the components described are keyboard support systems, switch systems, alternate interface systems, support software, voice output systems, and…

  3. Using Drawing Technology to Assess Students' Visualizations of Chemical Reaction Processes

    ERIC Educational Resources Information Center

    Chang, Hsin-Yi; Quintana, Chris; Krajcik, Joseph

    2014-01-01

    In this study, we investigated how students used a drawing tool to visualize their ideas of chemical reaction processes. We interviewed 30 students using thinking-aloud and retrospective methods and provided them with a drawing tool. We identified four types of connections the students made as they used the tool: drawing on existing knowledge,…

  4. A "Thinking Journey" to the Planets Using Scientific Visualization Technologies: Implications to Astronomy Education.

    ERIC Educational Resources Information Center

    Yair, Yoav; Schur, Yaron; Mintz, Rachel

    2003-01-01

    Presents a novel approach to teaching astronomy and planetary sciences centered on visual images and simulations of planetary objects. Focuses on the study of the moon and the planet Mars by means of observations, interpretation, and comparison to planet Earth. (Contains 22 references.) (Author/YDS)

  5. Information Technology in Art and Design: Visual Sensitivity, Learning and Assessment.

    ERIC Educational Resources Information Center

    Genin, T.

    1991-01-01

    Discusses current changes in the structures of British art education in light of the 1988 Education Reform Act (ERA), and describes the use of microcomputers to develop visual sensitivity and skills. A test model is discussed which examines an experiential approach toward the teaching, evaluation, and interpretation of color theory. (five…

  6. [Hyperspectral technology combined with CARS algorithm to quantitatively determine the SSC in Korla fragrant pear].

    PubMed

    Zhan, Bai-Shao; Ni, Jun-Hui; Li, Jun

    2014-10-01

    Hyperspectral imaging has large data volume and high dimensionality, and the original spectral data contain substantial noise and severe scattering. In addition, the quality of acquired hyperspectral data can be influenced by non-monochromatic light, external stray light and temperature, which results in a nonlinear relationship between the acquired hyperspectral data and the predicted quality index. Therefore, the present study used the competitive adaptive reweighted sampling (CARS) algorithm to select the key variables from visible and near-infrared hyperspectral data. The performance of CARS was compared with the full spectrum, the successive projections algorithm (SPA), Monte Carlo-uninformative variable elimination (MC-UVE), the genetic algorithm (GA) and GA-SPA (genetic algorithm-successive projections algorithm). Two hundred Korla fragrant pears were used as the research objects. The SPXY algorithm was used to divide the sample set into a calibration set of 150 samples and a prediction set of 50 samples. Based on the variables selected by the different methods, linear PLS and nonlinear LS-SVM models were developed, and model performance was assessed using r2, RMSEP and RPD. A comprehensive comparison found that GA, GA-SPA and CARS can effectively select variables carrying strong and useful information. These methods, particularly CARS, can be used for variable selection from Vis-NIR hyperspectral data. The LS-SVM model based on the variables obtained from the CARS method gave the best results for SSC prediction of Korla fragrant pear, with r2, RMSEP and RPD of 0.8512, 0.2913 and 2.5924, respectively. The study showed that CARS is an effective hyperspectral variable selection method, and that a nonlinear LS-SVM model is more suitable than a linear PLS model for quantitatively determining the quality of fragrant pear from hyperspectral information. PMID:25739220
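
    A sketch of the modeling-and-evaluation step that follows variable selection: a PLS model for soluble solids content (SSC) built on a subset of wavelengths and scored with r2, RMSEP and RPD as in the abstract. The spectra, the selected bands and the SSC values are synthetic placeholders, not the Korla pear data, and the 150/50 split simply mirrors the abstract.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(7)
      n_cal, n_pred, n_bands = 150, 50, 40              # calibration/prediction split as in the abstract
      X = rng.normal(size=(n_cal + n_pred, n_bands))    # reflectance at the selected bands (synthetic)
      ssc = 11.0 + X[:, :5].sum(axis=1) * 0.4 + 0.2 * rng.normal(size=n_cal + n_pred)

      pls = PLSRegression(n_components=5).fit(X[:n_cal], ssc[:n_cal])
      pred = pls.predict(X[n_cal:]).ravel()
      resid = ssc[n_cal:] - pred
      rmsep = float(np.sqrt(np.mean(resid ** 2)))
      r2 = 1.0 - np.sum(resid ** 2) / np.sum((ssc[n_cal:] - ssc[n_cal:].mean()) ** 2)
      rpd = float(np.std(ssc[n_cal:]) / rmsep)          # ratio of performance to deviation
      print(f"r2={r2:.3f}  RMSEP={rmsep:.3f}  RPD={rpd:.2f}")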

  7. Evaluation of Static vs. Dynamic Visualizations for Engineering Technology Students and Implications on Spatial Visualization Ability: A Quasi-Experimental Study

    ERIC Educational Resources Information Center

    Katsioloudis, Petros; Dickerson, Daniel; Jovanovic, Vukica; Jones, Mildred

    2015-01-01

    The benefit of using static versus dynamic visualizations is a controversial one. Few studies have compared the effectiveness of static visualizations with that of dynamic visualizations, and the current state of the literature remains somewhat unclear. During the last decade there has been a lengthy debate about the opportunities for using…

  8. Multispectral image segmentation using parallel mean shift algorithm and CUDA technology

    NASA Astrophysics Data System (ADS)

    Zghidi, Hafedh; Walczak, Maksym; Świtoński, Adam

    2016-06-01

    We present a parallel mean shift algorithm running on CUDA and its possible application in the segmentation of multispectral images. The aim of this paper is to present a method of analyzing highly noised multispectral images of various objects, so that important features are enhanced and easier to identify. The algorithm finds applications in the analysis of multispectral images of eyes, so that certain features visible only at specific wavelengths are made clearly visible despite a high level of noise, for which the processing time is very long.
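
    The abstract gives no implementation details, but the core mean shift iteration that a CUDA version would distribute across threads (one per pixel feature vector) can be sketched serially as follows; the flat kernel and bandwidth value are illustrative assumptions.

```python
# Minimal serial mean shift sketch: each feature vector is repeatedly shifted
# toward the mean of its neighbours within a bandwidth (flat kernel). A CUDA
# version would run the inner per-point loop in parallel, one thread per point.
import numpy as np

def mean_shift(points, bandwidth=0.1, n_iter=20):
    shifted = points.astype(float).copy()
    for _ in range(n_iter):
        for i, p in enumerate(shifted):
            dist = np.linalg.norm(points - p, axis=1)
            neighbours = points[dist < bandwidth]
            if len(neighbours):
                shifted[i] = neighbours.mean(axis=0)      # move toward the local mean
    return shifted                                        # near-identical rows = one segment
```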

  9. The Use and Non-Use of Assistive Technologies from the World of Information and Communication Technology by Visually Impaired Young People: A Walk on the Tightrope of Peer Inclusion

    ERIC Educational Resources Information Center

    Soderstrom, Sylvia; Ytterhus, Borgunn

    2010-01-01

    In affluent societies how people use technology is symbolic of various values and identities. This article investigates the symbolic values and use of assistive technologies from the world of information and communication technology (ICT) in the daily lives of 11 visually impaired young Norwegians. The article draws on a qualitative interview…

  10. High resolution visualization of USArray data on a 50 megapixel display using OptIPuter technologies.

    NASA Astrophysics Data System (ADS)

    Nayak, A. M.; Vernon, F.; Kent, G.; Orcutt, J.; Kilb, D.; Newman, R.; Smarr, L.; Defanti, T.; Leigh, J.; Renambot, L.; Johnson, A.

    2004-12-01

    A 50 megapixel display wall is under construction at the Cecil H. & Ida M. Green Institute of Geophysics and Planetary Physics (IGPP) for the display of multiple interactive 3D visualizations of various geophysical datasets. This system is designed through collaboration between major NSF funded projects such as OptIPuter and USArray (Earthscope), and will allow researchers to visually analyze data and present results at extremely high resolution. The OptIPuter project (www.optiputer.net) leverages the capabilities of dedicated optical networks that interconnect instruments, processors, computer storage and visualization resources to aid in Earth Sciences research. This system comprises a cluster of seven Apple Power Mac G5 machines and twelve Apple 30" LCD screens (of maximum resolution 2560 x 1600 each) tiled to form a 4x3 array and will be the first Apple-driven tiled display to our knowledge. The Antelope software will be used for seismic data monitoring and archiving along with web-based analytical tools developed at the Array Network Facility (ANF http://anf.ucsd.edu/) at IGPP. OptIPuter software (developed by the Electronic Visualization Laboratory) such as JuxtaView (an image viewer for interacting with remotely located extremely high resolution 2D images) and Vol-a-Tile (interactive volume rendering software allowing navigation into gigabyte-sized seismic volumes) will also be used. Interactive visualizations created by scientists at IGPP that overlay heterogeneous datasets such as seismic profiles, geology strata, earthquake locations, bathymetry and high resolution satellite imagery and aerial photos, using the Fledermaus software will also be viewed. The configuration of each cluster node is: dual CPU 2.5 GHz PowerPC G5, 8 GB RAM, 500 GB disk space, NVIDIA Ultra 6800 GeForce card, and the nodes are interconnected over gigabit Ethernet. This system will also be part of the OptIPuter infrastructure, with fiber connections to the OptIPuter CAVEwave on the

  11. A novel LTE scheduling algorithm for green technology in smart grid.

    PubMed

    Hindia, Mohammad Nour; Reza, Ahmed Wasif; Noordin, Kamarul Ariffin; Chayon, Muhammad Hasibur Rashid

    2015-01-01

    Smart grid (SG) applications are being used nowadays to meet the demand of increasing power consumption. The SG application is considered a perfect solution for combining renewable energy resources and the electrical grid by creating a bidirectional communication channel between the two systems. In this paper, three SG applications applicable to renewable energy systems, namely, distribution automation (DA), distributed energy system-storage (DER) and electric vehicle (EV), are investigated in order to study their suitability in a Long Term Evolution (LTE) network. To compensate for the weaknesses in the existing scheduling algorithms, a novel bandwidth estimation and allocation technique and a new scheduling algorithm are proposed. The technique allocates available network resources based on the application's priority, whereas the algorithm makes scheduling decisions based on dynamic weighting factors of multiple criteria to satisfy the quality-of-service demands (delay, past average throughput and instantaneous transmission rate). Finally, the simulation results demonstrate that the proposed mechanism achieves higher throughput, lower delay and lower packet loss rate for DA and DER, as well as providing a degree of service for EV. In terms of fairness, the proposed algorithm shows 3%, 7% and 9% better performance compared to the exponential rule (EXP-Rule), modified-largest weighted delay first (M-LWDF) and exponential/PF (EXP/PF), respectively. PMID:25830703
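
    The abstract names the scheduling criteria (delay, past average throughput, instantaneous transmission rate) but not the exact weighting rule, so the following is only an illustrative multi-criteria scheduler in that spirit; the weights, normalisations and flow fields are assumptions, not the published algorithm.

```python
# Illustrative multi-criteria scheduling decision: score each flow by a weighted
# combination of head-of-line delay urgency, instantaneous channel rate and
# (inverse) past average throughput, then grant the resource to the top scorer.
def schedule(flows, w_delay=0.5, w_rate=0.3, w_fair=0.2):
    def score(f):
        return (w_delay * f["hol_delay"] / f["delay_budget"]          # urgency
                + w_rate * f["inst_rate"] / f["peak_rate"]            # channel quality
                + w_fair / max(f["avg_throughput"], 1e-9))            # fairness
    return max(flows, key=score)

# Hypothetical flows for the three smart grid applications named above.
flows = [
    {"name": "DA",  "hol_delay": 8.0, "delay_budget": 20.0,
     "inst_rate": 2.0, "peak_rate": 10.0, "avg_throughput": 1.5},
    {"name": "DER", "hol_delay": 3.0, "delay_budget": 50.0,
     "inst_rate": 6.0, "peak_rate": 10.0, "avg_throughput": 4.0},
    {"name": "EV",  "hol_delay": 1.0, "delay_budget": 100.0,
     "inst_rate": 9.0, "peak_rate": 10.0, "avg_throughput": 8.0},
]
print(schedule(flows)["name"])
```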

  12. A Novel LTE Scheduling Algorithm for Green Technology in Smart Grid

    PubMed Central

    Hindia, Mohammad Nour; Reza, Ahmed Wasif; Noordin, Kamarul Ariffin; Chayon, Muhammad Hasibur Rashid

    2015-01-01

    Smart grid (SG) applications are being used nowadays to meet the demand of increasing power consumption. The SG application is considered a perfect solution for combining renewable energy resources and the electrical grid by creating a bidirectional communication channel between the two systems. In this paper, three SG applications applicable to renewable energy systems, namely, distribution automation (DA), distributed energy system-storage (DER) and electric vehicle (EV), are investigated in order to study their suitability in a Long Term Evolution (LTE) network. To compensate for the weaknesses in the existing scheduling algorithms, a novel bandwidth estimation and allocation technique and a new scheduling algorithm are proposed. The technique allocates available network resources based on the application's priority, whereas the algorithm makes scheduling decisions based on dynamic weighting factors of multiple criteria to satisfy the quality-of-service demands (delay, past average throughput and instantaneous transmission rate). Finally, the simulation results demonstrate that the proposed mechanism achieves higher throughput, lower delay and lower packet loss rate for DA and DER, as well as providing a degree of service for EV. In terms of fairness, the proposed algorithm shows 3%, 7% and 9% better performance compared to the exponential rule (EXP-Rule), modified-largest weighted delay first (M-LWDF) and exponential/PF (EXP/PF), respectively. PMID:25830703

  13. Compiler Optimization Pass Visualization: The Procedural Abstraction Case

    ERIC Educational Resources Information Center

    Schaeckeler, Stefan; Shang, Weijia; Davis, Ruth

    2009-01-01

    There is an active research community concentrating on visualizations of algorithms taught in CS1 and CS2 courses. These visualizations can help students to create concrete visual images of the algorithms and their underlying concepts. Not only "fundamental algorithms" can be visualized, but also algorithms used in compilers. Visualizations that…

  14. Comparison between PCR and larvae visualization methods for diagnosis of Strongyloides stercoralis out of endemic area: A proposed algorithm.

    PubMed

    Repetto, Silvia A; Ruybal, Paula; Solana, María Elisa; López, Carlota; Berini, Carolina A; Alba Soto, Catalina D; Cappa, Stella M González

    2016-05-01

    Underdiagnosis of chronic infection with the nematode Strongyloides stercoralis may lead to severe disease in the immunosuppressed. We have therefore set up a specific and highly sensitive molecular diagnostic assay for stool samples. Here, we compared the accuracy of our polymerase chain reaction (PCR)-based method with that of conventional diagnostic methods for chronic infection. We also analyzed clinical and epidemiological predictors of infection in order to propose an algorithm for the diagnosis of strongyloidiasis useful to the clinician. Molecular and gold-standard methods were performed to evaluate a cohort of 237 individuals recruited in Buenos Aires, Argentina. Subjects were assigned according to their immunological status, eosinophilia and/or history of residence in endemic areas. Diagnosis of strongyloidiasis by PCR on the first stool sample was achieved in 71/237 (29.9%) individuals, whereas only 35/237 (27.4%) were positive by conventional methods, which required up to four serial stool samples at weekly intervals. Eosinophilia and a history of residence in endemic areas emerged as independent factors that increase the likelihood of detecting the parasite in our study population. Our results underscore the usefulness of robust molecular tools aimed at diagnosing chronic S. stercoralis infection. The evidence also highlights the need to survey patients with eosinophilia even when a history of residence in an endemic area is absent. PMID:26868702
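
    Purely as an illustration of how the predictors named above might gate the diagnostic work-up, a hypothetical triage function is sketched below; it is not the published clinical algorithm, whose exact decision rules are not reproduced in the abstract.

```python
# Hypothetical triage sketch inspired by the predictors named in the abstract
# (immunosuppression, eosinophilia, residence in an endemic area). It is NOT the
# published clinical algorithm, only an illustration of how such predictors
# might gate the choice between PCR and serial conventional stool examinations.
def suggest_workup(immunosuppressed: bool, eosinophilia: bool, endemic_residence: bool) -> str:
    if immunosuppressed or eosinophilia or endemic_residence:
        return "PCR on a single stool sample"                 # higher prior probability
    return "conventional methods on serial stool samples"     # up to four weekly samples
```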

  15. Visual Literacy and Just-In-Time-Training: Enabling Learners through Technology.

    ERIC Educational Resources Information Center

    Burton, Terry

    Quality education is often impeded by lack of instructor time and by a failure to provide instruction that is individualized and at the point of need. Integrating technology into course development can alleviate these problems, but only if the technology is easy to learn and supports a systems approach. In implementing a Web-based Technical…

  16. Influences on Visual Spatial Rotation: Science, Technology, Engineering, and Mathematics (STEM) Experiences, Age, and Gender

    ERIC Educational Resources Information Center

    Perry, Paula Christine

    2013-01-01

    Science, Technology, Engineering, and Mathematics (STEM) education curriculum is designed to strengthen students' science and math achievement through project based learning activities. As part of a STEM initiative, SeaPerch was developed at Massachusetts Institute of Technology. SeaPerch is an innovative underwater robotics program that…

  17. Three-Dimensional Media Technologies: Potentials for Study in Visual Literacy.

    ERIC Educational Resources Information Center

    Thwaites, Hal

    This paper presents an overview of three-dimensional media technologies (3Dmt). Many of the new 3Dmt are the direct result of interactions of computing, communications, and imaging technologies. Computer graphics are particularly well suited to the creation of 3D images due to the high resolution and programmable nature of the current displays.…

  18. Integrating Technology in the Classroom: A Visual Conceptualization of Teachers' Knowledge, Goals and Beliefs

    ERIC Educational Resources Information Center

    Chen, F-H.; Looi, C.-K.; Chen, W.

    2009-01-01

    In this paper, we devise a diagrammatic conceptualization to describe and represent the complex interplay of a teacher's knowledge (K), goals (G) and beliefs (B) in leveraging technology effectively in the classroom. The degree of coherency between the KGB region and the affordances of the technology serves as an indicator of the teachers'…

  19. Applying Technology to Visually Support Language and Communication in Individuals with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Shane, Howard C.; Laubscher, Emily H.; Schlosser, Ralf W.; Flynn, Suzanne; Sorce, James F.; Abramson, Jennifer

    2012-01-01

    The burgeoning role of technology in society has provided opportunities for the development of new means of communication for individuals with Autism Spectrum Disorders (ASD). This paper offers an organizational framework for describing traditional and emerging augmentative and alternative communication (AAC) technology, and highlights how tools…

  20. Learning about Urban Ecology through the Use of Visualization and Geospatial Technologies

    ERIC Educational Resources Information Center

    Barnett, Michael; Houle, Meredith; Mark, Sheron; Strauss, Eric; Hoffman, Emily

    2010-01-01

    During the past three years we have been designing and implementing a technology enhanced urban ecology program using geographic information systems (GIS) coupled with technology. Our initial work focused on professional development for in-service teachers and implementation in K-12 classrooms. However, upon reflection and analysis of the…

  1. A Visual Dashboard for Moving Health Technologies From “Lab to Village”

    PubMed Central

    Singer, Peter A

    2007-01-01

    New technologies are an important way of addressing global health challenges and human development. However, the road for new technologies from “lab to village” is neither simple nor straightforward. Until recently, there has been no conceptual framework for analyzing and addressing the myriad forces and issues involved in moving health technologies from the lab to those who need them. Recently, based on empirical research, we published such a model. In this paper, we focus on extending the model into a dashboard and examine how this dashboard can be used to manage the information related to the path from lab to village. The next step will be for groups interested in global health, and even the public via the Internet, to use the tool to help guide technologies down this tricky path to improve global health and foster human development. PMID:17951216

  2. Social representations of electricity network technologies: exploring processes of anchoring and objectification through the use of visual research methods.

    PubMed

    Devine-Wright, Hannah; Devine-Wright, Patrick

    2009-06-01

    The aim of this study was to explore everyday thinking about the UK electricity network, in light of government policy to increase the generation of electricity from renewable energy sources. Existing literature on public perceptions of electricity network technologies was broadened by adopting a more socially embedded conception of the construction of knowledge using the theory of social representations (SRT) to explore symbolic associations with network technologies. Drawing and association tasks were administered within nine discussion groups held in two places: a Scottish town where significant upgrades to the local transmission network were planned and an English city with no such plans. Our results illustrate the ways in which network technologies, such as high voltage (HV) pylons, are objectified in talk and drawings. These invoked positive as well as negative symbolic and affective associations, both at the level of specific pylons, and the 'National Grid' as a whole and are anchored in understanding of other networks such as mobile telecommunications. We conclude that visual methods are especially useful for exploring beliefs about technologies that are widespread, proximal to our everyday experience but nevertheless unfamiliar topics of everyday conversation. PMID:18789183

  3. Visualizing petroleum systems with a combination of GIS and multimedia technologies: An example from the West Siberia Basin

    SciTech Connect

    Walsh, D.B.; Grace, J.D.

    1996-12-31

    Petroleum system studies provide an ideal application for the combination of Geographic Information System (GIS) and multimedia technologies. GIS technology is used to build and maintain the spatial and tabular data within the study region. Spatial data may comprise the zones of active source rocks and potential reservoir facies. Similarly, tabular data include the attendant source rock parameters (e.g. pyrolysis results, organic carbon content) and field-level exploration and production histories for the basin. Once the spatial and tabular data base has been constructed, GIS technology is useful in finding favorable exploration trends, such as zones of high organic content, mature source rocks in positions adjacent to sealed, high porosity reservoir facies. Multimedia technology provides powerful visualization tools for petroleum system studies. The components of petroleum system development, most importantly generation, migration and trap development, typically span periods of tens to hundreds of millions of years. The ability to animate spatial data over time provides an insightful alternative for studying the development of processes which are only captured in "snapshots" by static maps. New multimedia-authoring software provides this temporal dimension. The ability to record this data on CD-ROMs and allow user-interactivity further leverages the combination of spatial data bases, tabular data bases and time-based animations. The example used for this study was the Bazhenov-Neocomian petroleum system of West Siberia.

  4. The 3D visualization technology research of submarine pipeline based Horde3D GameEngine

    NASA Astrophysics Data System (ADS)

    Yao, Guanghui; Ma, Xiushui; Chen, Genlang; Ye, Lingjian

    2013-10-01

    With the development of 3D display and virtual reality technology, its applications are becoming more and more widespread. This paper applies 3D display technology to the monitoring of submarine pipelines. We reconstruct the submarine pipeline and its surrounding seafloor terrain in the computer using the Horde3D graphics rendering engine, on top of the foundation database "submarine pipeline and relative landforms landscape synthesis database", so as to display a virtual-reality scene of the submarine pipeline and show the relevant data collected from monitoring the pipeline.

  5. Using virtual reality technology for aircraft visual inspection training: presence and comparison studies.

    PubMed

    Vora, Jeenal; Nair, Santosh; Gramopadhye, Anand K; Duchowski, Andrew T; Melloy, Brian J; Kanki, Barbara

    2002-11-01

    The aircraft maintenance industry is a complex system consisting of several interrelated human and machine components. Recognizing this, the Federal Aviation Administration (FAA) has pursued human factors related research. In the maintenance arena the research has focused on the aircraft inspection process and the aircraft inspector. Training has been identified as the primary intervention strategy to improve the quality and reliability of aircraft inspection. If training is to be successful, it is critical that we provide aircraft inspectors with appropriate training tools and environments. In response to this need, the paper outlines the development of a virtual reality (VR) system for aircraft inspection training. VR has generated much excitement but little formal proof that it is useful. However, since VR interfaces are difficult and expensive to build, the computer graphics community needs to be able to predict which applications will benefit from VR. To address this important issue, this research measured the degree of immersion and presence felt by subjects in a virtual environment simulator. Specifically, it conducted two controlled studies using the VR system developed for the visual inspection task of an aft cargo bay at the VR Lab of Clemson University. Beyond assembling the visual inspection virtual environment, a significant goal of this project was to explore subjective presence as it affects task performance. The results of this study indicated that the system scored high on the issues related to the degree of presence felt by the subjects. As a next logical step, this study then compared VR to an existing PC-based aircraft inspection simulator. The results showed that the VR system was better and preferred over the PC-based training tool. PMID:12507340

  6. Beauty and Precision: Weaving Complex Educational Technology Projects with Visual Instructional Design Languages

    ERIC Educational Resources Information Center

    Derntl, Michael; Parrish, Patrick; Botturi, Luca

    2010-01-01

    Instructional design and technology products result from many options and constraints. On the one hand, solutions should be creative, effective and flexible; on the other hand, developers and instructors need precise guidance and details on what to do during development and implementation. Communication of and about designs is supported by design…

  7. Visualizing the Future: Technology Competency Development in Clinical Medicine, and Implications for Medical Education

    ERIC Educational Resources Information Center

    Srinivasan, Malathi; Keenan, Craig R.; Yager, Joel

    2006-01-01

    Objective: In this article, the authors ask three questions. First, what will physicians need to know in order to be effective in the future? Second, what role will technology play in achieving that high level of effectiveness? Third, what specific skill sets will physicians need to master in order to become effective? Method: Through three case…

  8. Harnessing modern web application technology to create intuitive and efficient data visualization and sharing tools.

    PubMed

    Wood, Dylan; King, Margaret; Landis, Drew; Courtney, William; Wang, Runtang; Kelly, Ross; Turner, Jessica A; Calhoun, Vince D

    2014-01-01

    Neuroscientists increasingly need to work with big data in order to derive meaningful results in their field. Collecting, organizing and analyzing this data can be a major hurdle on the road to scientific discovery. This hurdle can be lowered using the same technologies that are currently revolutionizing the way that cultural and social media sites represent and share information with their users. Web application technologies and standards such as RESTful webservices, HTML5 and high-performance in-browser JavaScript engines are being utilized to vastly improve the way that the world accesses and shares information. The neuroscience community can also benefit tremendously from these technologies. We present here a web application that allows users to explore and request the complex datasets that need to be shared among the neuroimaging community. The COINS (Collaborative Informatics and Neuroimaging Suite) Data Exchange uses web application technologies to facilitate data sharing in three phases: Exploration, Request/Communication, and Download. This paper will focus on the first phase, and how intuitive exploration of large and complex datasets is achieved using a framework that centers around asynchronous client-server communication (AJAX) and also exposes a powerful API that can be utilized by other applications to explore available data. First opened to the neuroscience community in August 2012, the Data Exchange has already provided researchers with over 2500 GB of data. PMID:25206330
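
    As an indication of the kind of programmatic exploration such a REST API enables, the sketch below queries a hypothetical dataset-search endpoint; the URL and query parameters are placeholders, not the actual COINS Data Exchange API.

```python
# Sketch of client-side exploration against a hypothetical dataset-search REST
# endpoint (placeholder URL and parameters, not the actual COINS Data Exchange API).
import requests

def search_datasets(modality="fMRI", page_size=50):
    resp = requests.get(
        "https://example.org/api/datasets",                  # hypothetical endpoint
        params={"modality": modality, "limit": page_size},   # hypothetical parameters
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()                                       # e.g. a list of dataset records
```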

  9. Using Visual Technologies in the Introductory Programming Courses for Computer Science Majors

    ERIC Educational Resources Information Center

    Price, Kellie W.

    2013-01-01

    Decreasing enrollments, lower rates of student retention and changes in the learning styles of today's students are all issues that the Computer Science (CS) academic community is currently facing. As a result, CS educators are being challenged to find the right blend of technology and pedagogy for their curriculum in order to help students…

  10. Harnessing modern web application technology to create intuitive and efficient data visualization and sharing tools

    PubMed Central

    Wood, Dylan; King, Margaret; Landis, Drew; Courtney, William; Wang, Runtang; Kelly, Ross; Turner, Jessica A.; Calhoun, Vince D.

    2014-01-01

    Neuroscientists increasingly need to work with big data in order to derive meaningful results in their field. Collecting, organizing and analyzing this data can be a major hurdle on the road to scientific discovery. This hurdle can be lowered using the same technologies that are currently revolutionizing the way that cultural and social media sites represent and share information with their users. Web application technologies and standards such as RESTful webservices, HTML5 and high-performance in-browser JavaScript engines are being utilized to vastly improve the way that the world accesses and shares information. The neuroscience community can also benefit tremendously from these technologies. We present here a web application that allows users to explore and request the complex datasets that need to be shared among the neuroimaging community. The COINS (Collaborative Informatics and Neuroimaging Suite) Data Exchange uses web application technologies to facilitate data sharing in three phases: Exploration, Request/Communication, and Download. This paper will focus on the first phase, and how intuitive exploration of large and complex datasets is achieved using a framework that centers around asynchronous client-server communication (AJAX) and also exposes a powerful API that can be utilized by other applications to explore available data. First opened to the neuroscience community in August 2012, the Data Exchange has already provided researchers with over 2500 GB of data. PMID:25206330

  11. American Beauty: The Seduction of the Visual Image in the Culture of Technology

    ERIC Educational Resources Information Center

    Goudreau, Kim

    2006-01-01

    The critical examination of the film "American Beauty" reveals characteristics illustrative of the form of culture coextensive with modern technological societies. This form of culture creates an imbalance favoring the aesthetical over the ethical dimensions of human orientation. Absorption into the aesthetical dimension of the electronic or…

  12. Visualizing History: Computer Technology and the Graphic Presentation of the Past

    ERIC Educational Resources Information Center

    Moss, Mark

    2004-01-01

    Computer technology has impacted both the study and idea of history in a number of ways. The Internet has provided numerous web-sites for students to read, see and look into for historical information. Historians, both professional and public have also begun to utilize the computer in a variety of ways, both in academic terms as well as leisure…

  13. Fine-grained data assimilation algorithm with uncertainty assessment in variational modeling technology

    NASA Astrophysics Data System (ADS)

    Penenko, Alexey; Penenko, Vladimir; Tsvetova, Elena

    2013-04-01

    We consider an approach to the design of data-assimilation schemes based on introducing special control functions into the structure of the model equations to take various uncertainties into account. In the presence of measurement data, this augmented model is treated with a variational technique applied to the functional describing the misfit between measured and calculated values, with the introduced control functions as the quantities to be minimized in the phase space of the augmented model state functions. Because of the uncertainty, a weak-constraint variational principle is formulated. A discrete analogue of the variational principle functional is then constructed by means of decomposition, splitting and finite-volume methods. From the stationarity conditions for the variational principle functionals, the systems of direct and adjoint equations as well as the uncertainty equations are obtained [1, 2]. In the general case the systems can be solved iteratively, with some conditions imposed on the parameters. As splitting schemes are used, we propose to assimilate all available data at one model time step, but on the corresponding splitting stages, by means of direct algorithms without iterations. The approach can be called fine-grained data assimilation. Such versions of the algorithms are cost-effective, easy to parallelize and may be useful for integrated models of atmospheric dynamics and chemistry. In the case of the convection-diffusion stage and a one-time-step analysis window, the multidimensional model can be further decomposed with the splitting technique into a set of one-dimensional models. Each resulting one-dimensional fragment has the form of a tridiagonal block-matrix linear problem that can be solved with the matrix sweep method [3]. In the case of assimilation windows longer than one time step, the result of the fine-grained algorithm analysis can be used as an initial guess. The work is partially supported by the Programs No 4 of Presidium RAS and No 3 of Mathematical Department of
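
    For reference, the scalar form of the matrix sweep (Thomas) method mentioned for the resulting tridiagonal systems can be sketched as follows; the block version used in the paper generalises the scalar divisions to small matrix inversions.

```python
# Scalar tridiagonal sweep (Thomas algorithm) for a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i].
import numpy as np

def tridiag_sweep(a, b, c, d):
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                                # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):                       # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```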

  14. Frequency-doubling technology perimetry and multifocal visual evoked potential in glaucoma, suspected glaucoma, and control patients

    PubMed Central

    Kanadani, Fabio N; Mello, Paulo AA; Dorairaj, Syril K; Kanadani, Tereza CM

    2014-01-01

    Introduction: The gold standard in functional glaucoma evaluation is standard automated perimetry (SAP). However, SAP depends on the reliability of the patients' responses and other external factors; therefore, other technologies have been developed for earlier detection of visual field changes in glaucoma patients. Frequency-doubling perimetry (FDT) is believed to detect glaucoma earlier than SAP. The multifocal visual evoked potential (mfVEP) is an objective test for functional evaluation. Objective: To evaluate the sensitivity and specificity of FDT and mfVEP tests in normal, suspect, and glaucomatous eyes and compare the monocular and interocular mfVEP. Methods: Ninety-five eyes from 95 individuals (23 controls, 33 glaucoma suspects, 39 glaucomatous) were enrolled. All participants underwent a full ophthalmic examination, followed by SAP, FDT, and mfVEP tests. Results: The area under the curve for mean deviation and pattern standard deviation were 0.756 and 0.761, respectively, for FDT, 0.564 and 0.512 for signal and alpha for interocular mfVEP, and 0.568 and 0.538 for signal and alpha for monocular mfVEP. This difference between monocular and interocular mfVEP was not significant. Conclusion: The FDT Matrix was superior to mfVEP in glaucoma detection. The difference between monocular and interocular mfVEP in the diagnosis of glaucoma was not significant. PMID:25075173

  15. Testing the Efficacy of Synthetic Vision during Non-Normal Operations as an Enabling Technology for Equivalent Visual Operations

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Williams, Steven P.

    2008-01-01

    Synthetic Vision (SV) may serve as a revolutionary crew/vehicle interface enabling technology to meet the challenges of the Next Generation Air Transportation System Equivalent Visual Operations (EVO) concept, that is, the ability to achieve or even improve on the safety of Visual Flight Rules (VFR) operations, maintain the operational tempos of VFR, and potentially retain VFR procedures independent of actual weather and visibility conditions. One significant challenge lies in the definition of required equipage on the aircraft and on the airport to enable the EVO concept objective. An experiment was conducted to evaluate the effects of the presence or absence of SV, the location (head-up or head-down) of this information during an instrument approach, and the type of airport lighting information on landing minima. Another key element of the testing entailed investigating the pilot's awareness of and reaction to non-normal events (i.e., failure conditions) that were unexpectedly introduced into the experiment. These non-normals are critical determinants in the underlying safety of all-weather operations. This paper presents the experimental results specific to pilot response to non-normal events using head-up and head-down synthetic vision displays.

  16. Visualization of Host-Polerovirus Interaction Topologies Using Protein Interaction Reporter Technology

    PubMed Central

    DeBlasio, Stacy L.; Chavez, Juan D.; Alexander, Mariko M.; Ramsey, John; Eng, Jimmy K.; Mahoney, Jaclyn; Gray, Stewart M.; Bruce, James E.

    2015-01-01

    ABSTRACT Demonstrating direct interactions between host and virus proteins during infection is a major goal and challenge for the field of virology. Most protein interactions are not binary or easily amenable to structural determination. Using infectious preparations of a polerovirus (Potato leafroll virus [PLRV]) and protein interaction reporter (PIR), a revolutionary technology that couples a mass spectrometric-cleavable chemical cross-linker with high-resolution mass spectrometry, we provide the first report of a host-pathogen protein interaction network that includes data-derived, topological features for every cross-linked site that was identified. We show that PLRV virions have hot spots of protein interaction and multifunctional surface topologies, revealing how these plant viruses maximize their use of binding interfaces. Modeling data, guided by cross-linking constraints, suggest asymmetric packing of the major capsid protein in the virion, which supports previous epitope mapping studies. Protein interaction topologies are conserved with other species in the Luteoviridae and with unrelated viruses in the Herpesviridae and Adenoviridae. Functional analysis of three PLRV-interacting host proteins in planta using a reverse-genetics approach revealed a complex, molecular tug-of-war between host and virus. Structural mimicry and diversifying selection—hallmarks of host-pathogen interactions—were identified within host and viral binding interfaces predicted by our models. These results illuminate the functional diversity of the PLRV-host protein interaction network and demonstrate the usefulness of PIR technology for precision mapping of functional host-pathogen protein interaction topologies. IMPORTANCE The exterior shape of a plant virus and its interacting host and insect vector proteins determine whether a virus will be transmitted by an insect or infect a specific host. Gaining this information is difficult and requires years of experimentation. We used

  17. Computational algorithms dealing with the classical and statistical mechanics of celestial scale polymers in space elevator technology

    NASA Astrophysics Data System (ADS)

    Knudsen, Steven; Golubovic, Leonardo

    Prospects to build Space Elevator (SE) systems have become realistic with ultra-strong materials such as carbon nano-tubes and diamond nano-threads. At cosmic length-scales, space elevators can be modeled as polymer-like floppy strings of tethered mass beads. A new venue in SE science has emerged with the introduction of the Rotating Space Elevator (RSE) concept, supported by novel algorithms discussed in this presentation. An RSE is a loopy string reaching into outer space. Unlike the classical geostationary SE concepts of Tsiolkovsky, Artsutanov, and Pearson, our RSE exhibits an internal rotation. Thanks to this, objects sliding along the RSE loop spontaneously oscillate between two turning points, one of which is close to the Earth whereas the other one is in outer space. The RSE concept thus solves a major problem in SE technology, which is how to supply energy to the climbers moving along space elevator strings. The investigation of the classical and statistical mechanics of a floppy string interacting with objects sliding along it required the development of subtle computational algorithms described in this presentation.

  18. Integration and Exploitation of Advanced Visualization and Data Technologies to Teach STEM Subjects

    NASA Astrophysics Data System (ADS)

    Brandon, M. A.; Garrow, K. H.

    2014-12-01

    We live in an age where the volume of content available online to the general public is staggering. Integration of data from new technologies gives us amazing educational opportunities when appropriate narratives are provided. We prepared a distance-learning, credit-bearing module that showcased many currently available data sets and state-of-the-art technologies. It has been completed by many thousands of students with good feedback. Module highlights were the wide-ranging and varied online activities, which taught a wide range of STEM content. For example, it is well known that on Captain Scott's Terra Nova Expedition of 1910-13, three researchers completed "the worst journey in the world" to study emperor penguins. Using their primary records and clips from location-filmed television documentaries we can tell their story and the reasons why it was important. However, using state-of-the-art content we can go much further. Using satellite data, students can trace the path the researchers took and observe the penguin colony that they studied. Linking to modern Open Access literature, students learn how they can estimate the numbers of animals in this and similar locations. Then, by linking to freely available data from Antarctic Automatic Weather Stations, students can learn quantitatively about the climatic conditions the animals are enduring in real time. They can then download and compare this with the regional climatic record to see whether their observations are what could be expected. By considering the environment the penguins live in, students can be taught about the evolutionary and behavioural adaptations the animals have undergone to survive. In this one activity we can teach a wide range of key learning points in an engaging and coherent way. It opened some students' eyes to the range of possibilities available to learn about our own and other planets. The addition and integration of new state-of-the-art techniques and data sets only increases the opportunities to

  19. Development of closed-loop neural interface technology in a rat model: combining motor cortex operant conditioning with visual cortex microstimulation.

    PubMed

    Marzullo, Timothy Charles; Lehmkuhle, Mark J; Gage, Gregory J; Kipke, Daryl R

    2010-04-01

    Closed-loop neural interface technology that combines neural ensemble decoding with simultaneous electrical microstimulation feedback is hypothesized to improve deep brain stimulation techniques, neuromotor prosthetic applications, and epilepsy treatment. Here we describe our iterative results in a rat model of a sensory and motor neurophysiological feedback control system. Three rats were chronically implanted with microelectrode arrays in both the motor and visual cortices. The rats were subsequently trained over a period of weeks to modulate their motor cortex ensemble unit activity upon delivery of intra-cortical microstimulation (ICMS) of the visual cortex in order to receive a food reward. Rats were given continuous feedback via visual cortex ICMS during the response periods that was representative of the motor cortex ensemble dynamics. Analysis revealed that the feedback provided the animals with indicators of the behavioral trials. At the hardware level, this preparation provides a tractable test model for improving the technology of closed-loop neural devices. PMID:20144922

  20. Analysis and visualization of Arabidopsis thaliana GWAS using web 2.0 technologies

    PubMed Central

    Huang, Yu S.; Horton, Matthew; Vilhjálmsson, Bjarni J.; Seren, Ümit; Meng, Dazhe; Meyer, Christopher; Ali Amer, Muhammad; Borevitz, Justin O.; Bergelson, Joy; Nordborg, Magnus

    2011-01-01

    With large-scale genomic data becoming the norm in biological studies, the storing, integrating, viewing and searching of such data have become a major challenge. In this article, we describe the development of an Arabidopsis thaliana database that hosts the geographic information and genetic polymorphism data for over 6000 accessions and genome-wide association study (GWAS) results for 107 phenotypes representing the largest collection of Arabidopsis polymorphism data and GWAS results to date. Taking advantage of a series of the latest web 2.0 technologies, such as Ajax (Asynchronous JavaScript and XML), GWT (Google-Web-Toolkit), MVC (Model-View-Controller) web framework and Object Relationship Mapper, we have created a web-based application (web app) for the database, that offers an integrated and dynamic view of geographic information, genetic polymorphism and GWAS results. Essential search functionalities are incorporated into the web app to aid reverse genetics research. The database and its web app have proven to be a valuable resource to the Arabidopsis community. The whole framework serves as an example of how biological data, especially GWAS, can be presented and accessed through the web. In the end, we illustrate the potential to gain new insights through the web app by two examples, showcasing how it can be used to facilitate forward and reverse genetics research. Database URL: http://arabidopsis.usc.edu/ PMID:21609965

  1. Using Photos and Visual-Processing Assistive Technologies to Develop Self-Expression and Interpersonal Communication of Adolescents with Asperger Syndrome (AS)

    ERIC Educational Resources Information Center

    Shrieber, Betty; Cohen, Yael

    2013-01-01

    The purpose of this paper is to examine the use of photographs and assistive technologies for visual information processing as motivating tools for interpersonal communication of adolescents with Asperger Syndrome (AS), aged 16 to 18 years, attending special education school. Students with AS find it very difficult to create social and…

  2. Writing Fragments of Modernity: Visual Technology and Metafiction in Pablo Palacio's "Débora" and "Un hombre muerto a puntapiés"

    ERIC Educational Resources Information Center

    Ramos, Juan G.

    2016-01-01

    This current study explores the relationship between visual technology (cinema and photography) and a metanarrative preoccupation with the craft of literary narration in two texts by Pablo Palacio (Ecuador, 1906-47). In his novella "Débora" (1927), Palacio employs the language of cinema (e.g., the cinematograph, the cinema, references to…

  3. An Examination of the Effects of Collaborative Scientific Visualization via Model-Based Reasoning on Science, Technology, Engineering, and Mathematics (STEM) Learning within an Immersive 3D World

    ERIC Educational Resources Information Center

    Soleimani, Ali

    2013-01-01

    Immersive 3D worlds can be designed to effectively engage students in peer-to-peer collaborative learning activities, supported by scientific visualization, to help with understanding complex concepts associated with learning science, technology, engineering, and mathematics (STEM). Previous research studies have shown STEM learning benefits…

  4. Effects of Online Visual and Interactive Technological Tool (OVITT) on Early Adolescent Students' Mathematics Performance, Math Anxiety and Attitudes toward Math

    ERIC Educational Resources Information Center

    Orabuchi, Nkechi

    2013-01-01

    This study reported the results of a 3-month quasi-experimental study that determined the effectiveness of an online visual and interactive technological tool on sixth grade students' mathematics performance, math anxiety and attitudes towards math. There were 155 sixth grade students from a middle school in the North Texas area who participated…

  5. Designing Visual Earth: Multimedia Geographic Visualization for the Classroom.

    ERIC Educational Resources Information Center

    McWilliams, Harold

    1998-01-01

    Provides information on computer software using Geographic Information Systems (GIS) and visualization technologies and Visual Earth, a series of integrated classroom solutions for a variety of science topics. Describes some uses of GIS and Visual Earth in science classrooms. (ASK)

  6. Small animal fluorescence and bioluminescence tomography: a review of approaches, algorithms and technology update

    NASA Astrophysics Data System (ADS)

    Darne, Chinmay; Lu, Yujie; Sevick-Muraca, Eva M.

    2014-01-01

    Emerging fluorescence and bioluminescence tomography approaches have several common, yet several distinct features from established emission tomographies of PET and SPECT. Although both nuclear and optical imaging modalities involve counting of photons, nuclear imaging techniques collect the emitted high energy (100-511 keV) photons after radioactive decay of radionuclides while optical techniques count low-energy (1.5-4.1 eV) photons that are scattered and absorbed by tissues requiring models of light transport for quantitative image reconstruction. Fluorescence imaging has been recently translated into clinic demonstrating high sensitivity, modest tissue penetration depth, and fast, millisecond image acquisition times. As a consequence, the promise of quantitative optical tomography as a complement of small animal PET and SPECT remains high. In this review, we summarize the different instrumentation, methodological approaches and schema for inverse image reconstructions for optical tomography, including luminescence and fluorescence modalities, and comment on limitations and key technological advances needed for further discovery research and translation.

  7. The study of key technology on spectral reflectance reconstruction based on the algorithm of adaptive compressive sensing

    NASA Astrophysics Data System (ADS)

    Leihong, Zhang; Dong, Liang; Bei, Li; Yi, Kang; Zilan, Pan; Dawei, Zhang; Xiuhua, Ma

    2016-04-01

    In order to improve reconstruction accuracy and reduce the workload, the compressive sensing algorithm based on iterative thresholding is combined with a method for adaptive selection of the training samples, and a new adaptive compressive sensing algorithm is put forward. Three kinds of training samples are used to reconstruct the spectral reflectance of the testing sample with both the compressive sensing algorithm and the adaptive compressive sensing algorithm, and the resulting color differences and errors are compared. The experimental results show that the spectral reconstruction precision of the adaptive compressive sensing algorithm is better than that of the standard compressive sensing algorithm.
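
    The abstract does not spell out the iterative-threshold recursion, but a standard iterative soft-thresholding step of the kind used in compressive sensing reconstruction can be sketched as follows; the step size, threshold and iteration count are illustrative choices, not the paper's settings.

```python
# Iterative soft-thresholding sketch for recovering a sparse vector x from
# measurements y = A @ x: a gradient step on the least-squares term followed by
# a soft threshold that enforces sparsity.
import numpy as np

def ista(A, y, lam=0.01, n_iter=200):
    step = 1.0 / np.linalg.norm(A, 2) ** 2               # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x + step * A.T @ (y - A @ x)                 # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam * step, 0.0)   # soft threshold
    return x
```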

  8. Three-dimensional visualization and display technologies; Proceedings of the Meeting, Los Angeles, CA, Jan. 18-20, 1989

    NASA Technical Reports Server (NTRS)

    Robbins, Woodrow E. (Editor); Fisher, Scott S. (Editor)

    1989-01-01

    Special attention was given to problems of stereoscopic display devices, such as CAD for enhancement of the design process in the visual arts, stereo-TV improvement of remote manipulator performance, a voice-controlled stereographic video camera system, and head-mounted displays and their low-cost design alternatives. Also discussed were a novel approach to chromostereoscopic microscopy, computer-generated barrier-strip autostereography and lenticular stereograms, and parallax-barrier three-dimensional TV. Additional topics include processing and user interface issues and visualization applications, including automated analysis of fluid flow topology, optical tomographic measurements of mixing fluids, visualization of complex data, visualization environments, and visualization management systems.

  9. Visualizing Progress

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Reality Capture Technologies, Inc. is a spinoff company from Ames Research Center. Offering e-business solutions for optimizing management, design and production processes, RCT uses visual collaboration environments (VCEs) such as those used to prepare the Mars Pathfinder mission. The product, 4-D Reality Framework, allows multiple users from different locations to manage and share data. The insurance industry is one targeted commercial application for this technology.

  10. EDITORIAL: Focus on Visualization in Physics FOCUS ON VISUALIZATION IN PHYSICS

    NASA Astrophysics Data System (ADS)

    Sanders, Barry C.; Senden, Tim; Springel, Volker

    2008-12-01

    Advances in physics are intimately connected with developments in new technology: the telescope, precision clocks, even the computer have all heralded a shift in thinking. These landmark developments open new opportunities, accelerating research and in turn new scientific directions. These technological drivers often correspond to new instruments, but might equally well flag a new mathematical tool, an algorithm, or a means to visualize physics in a new way. Early on in this twenty-first century, scientific communities are just starting to explore the potential of digital visualization. Whether visualization is used to represent and communicate complex concepts, to understand and interpret experimental data, or to visualize solutions to complex dynamical equations, the basic tools of visualization are shared in each of these applications and implementations. High-performance computing exemplifies the integration of visualization with leading research. Visualization is an indispensable tool for analyzing and interpreting complex three-dimensional dynamics, as well as for diagnosing numerical problems in intricate parallel calculation algorithms. The effectiveness of visualization arises from exploiting the unmatched capability of the human eye and visual cortex to process the large information content of images. In a brief glance, we recognize patterns or identify subtle features even in noisy data, something that is difficult or impossible to achieve with more traditional forms of data analysis. Importantly, visualizations guide the intuition of researchers and help to comprehend physical phenomena that lie far outside of direct experience. In fact, visualizations literally allow us to see what would otherwise remain completely invisible. For example, artificial imagery created to visualize the distribution of dark matter in the Universe has been instrumental in developing the notion of a cosmic web, and in helping to establish the current standard model of

  11. Universal visualization platform

    NASA Astrophysics Data System (ADS)

    Gee, Alexander G.; Li, Hongli; Yu, Min; Smrtic, Mary Beth; Cvek, Urska; Goodell, Howie; Gupta, Vivek; Lawrence, Christine; Zhou, Jainping; Chiang, Chih-Hung; Grinstein, Georges G.

    2005-03-01

    Although there are a number of visualization systems to choose from when analyzing data, only a few of these allow for the integration of other visualization and analysis techniques. There are even fewer visualization toolkits and frameworks from which one can develop one's own visualization applications. Even within the research community, scientists either use what they can from the available tools or start from scratch to build a program in which they are able to develop new or modified visualization techniques and analysis algorithms. Presented here is a new general-purpose platform for constructing numerous visualization and analysis applications. The focus of this system is the design of and experimentation with new techniques, where the sharing of and integration with other tools becomes second nature. Moreover, this platform supports multiple large data sets, and the recording and visualizing of user sessions. Here we introduce the Universal Visualization Platform (UVP) as a modern data visualization and analysis system.

  12. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  13. The New and Computationally Efficient MIL-SOM Algorithm: Potential Benefits for Visualization and Analysis of a Large-Scale High-Dimensional Clinically Acquired Geographic Data

    PubMed Central

    Oyana, Tonny J.; Achenie, Luke E. K.; Heo, Joon

    2012-01-01

    The objective of this paper is to introduce an efficient algorithm, namely, the mathematically improved learning-self organizing map (MIL-SOM) algorithm, which speeds up the self-organizing map (SOM) training process. In the proposed MIL-SOM algorithm, the weights of Kohonen's SOM are based on the proportional-integral-derivative (PID) controller. Thus, in a typical SOM learning setting, this improvement translates to faster convergence. The basic idea is primarily motivated by the urgent need to develop algorithms with the competence to converge faster and more efficiently than conventional techniques. The MIL-SOM algorithm is tested on four training geographic datasets representing biomedical and disease informatics application domains. Experimental results show that the MIL-SOM algorithm provides a competitive, better updating procedure and performance, good robustness, and it runs faster than Kohonen's SOM. PMID:22481977
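
    For orientation, one training step of a conventional Kohonen SOM (the baseline that MIL-SOM modifies) is sketched below; the abstract does not specify how the PID terms enter the weight update, so only the standard update is shown and the PID modification is noted in a comment.

```python
# One training step of a conventional Kohonen SOM: find the best-matching unit
# (BMU), then pull it and its grid neighbours toward the input vector.
# In MIL-SOM (per the abstract) the simple learning gain would be replaced by a
# PID-controller-style term; that modification is not reproduced here.
import numpy as np

def som_step(weights, grid, x, lr=0.1, sigma=1.0):
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))        # best-matching unit
    grid_dist = np.linalg.norm(grid - grid[bmu], axis=1)        # distance on the map grid
    h = np.exp(-grid_dist ** 2 / (2 * sigma ** 2))              # Gaussian neighbourhood
    weights += lr * h[:, None] * (x - weights)                  # gain: lr (PID-based in MIL-SOM)
    return weights
```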

  14. Shifting Sands and Turning Tides: Using 3D Visualization Technology to Shape the Environment for Undergraduate Students

    NASA Astrophysics Data System (ADS)

    Jenkins, H. S.; Gant, R.; Hopkins, D.

    2014-12-01

    Teaching natural science in a technologically advancing world requires that our methods reach beyond the traditional computer interface. Innovative 3D visualization techniques and real-time augmented user interfaces enable students to create realistic environments to understand the world around them. Here, we present a series of laboratory activities that utilize an Augmented Reality Sandbox to teach basic concepts of hydrology, geology, and geography to undergraduates at Harvard University and the University of Redlands. The Augmented Reality (AR) Sandbox utilizes a real sandbox that is overlain by a digital projection of topography and a color elevation map. A Microsoft Kinect 3D camera feeds altimetry data into a software program that maps this information onto the sand surface using a digital projector. Students can then manipulate the sand and observe as the Sandbox augments their manipulations with projections of contour lines, an elevation color map, and a simulation of water. The idea for the AR Sandbox was conceived at MIT by the Tangible Media Group in 2002 and the simulation software used here was written and developed by Dr. Oliver Kreylos of the University of California - Davis as part of the NSF funded LakeViz3D project. Between 2013 and 2014, we installed AR Sandboxes at Harvard and the University of Redlands, respectively, and developed laboratory exercises to teach flooding hazard, erosion and watershed development in undergraduate earth and environmental science courses. In 2013, we introduced a series of AR Sandbox laboratories in Introductory Geology, Hydrology, and Natural Disasters courses. We found laboratories that utilized the AR Sandbox at both universities allowed students to become quickly immersed in the learning process, enabling a more intuitive understanding of the processes that govern the natural world. The physical interface of the AR Sandbox reduces barriers to learning, can be used to rapidly illustrate basic concepts of geology

  15. Alterations Induced by Bangerter Filters on the Visual Field: A Frequency Doubling Technology and Standard Automated Perimetry Study

    PubMed Central

    Schiavi, Costantino; Tassi, Filippo; Finzi, Alessandro; Cellini, Mauro

    2015-01-01

    Purpose. To investigate the effects of Bangerter filters on the visual field in healthy and in amblyopic patients. Materials and Methods. Fifteen normal adults and fifteen anisometropic amblyopia patients were analysed with standard automated perimetry (SAP) and frequency doubling technology (FDT) at baseline and with filters 0.8 and 0.1. Results. With 0.1 filter in SAP there was an increase of MD compared with controls (−10.24 ± 1.09 dB) in either the amblyopic (−11.34 ± 2.06 dB; P < 0.050) or sound eyes (−11.34 ± 1.66 dB; P < 0.030). With filters 0.8 the PSD was increased in the amblyopic eyes (2.09 ± 0.70 dB; P < 0.007) and in the sound eyes (1.92 ± 0.29 dB; P < 0.004) compared with controls. The FDT-PSD values in the control group were increased with the interposition of the filters compared to baseline (0.8; P < 0.0004 and 0.1; P < 0.0010). We did not find significant differences of the baseline PSD between amblyopic eyes (3.80 ± 2.21 dB) and the sound eyes (4.33 ± 1.31 dB) and when comparing the filters 0.8 (4.55 ± 1.50 versus 4.53 ± 1.76 dB) and 0.1 (4.66 ± 1.80 versus 5.10 ± 2.04 dB). Conclusions. The use of Bangerter filters leads to a reduction of the functionality of the magno- and parvocellular pathway. PMID:25688299

  16. Algorithmic Skin: Health-Tracking Technologies, Personal Analytics and the Biopedagogies of Digitized Health and Physical Education

    ERIC Educational Resources Information Center

    Williamson, Ben

    2015-01-01

    The emergence of digitized health and physical education, or "eHPE", embeds software algorithms in the organization of health and physical education pedagogies. Particularly with the emergence of wearable and mobile activity trackers, biosensors and personal analytics apps, algorithmic processes have an increasingly powerful part to play…

  17. Realization and optimization of AES algorithm on the TMS320DM6446 based on DaVinci technology

    NASA Astrophysics Data System (ADS)

    Jia, Wen-bin; Xiao, Fu-hai

    2013-03-01

    The application of the AES algorithm in a digital cinema system protects video data from theft and malicious tampering and so addresses its security problems. To meet the requirements for real-time, transparent encryption of high-speed audio and video data streams in the information security field, this paper analyzes the principles of the AES algorithm in depth and, based on the TMS320DM6446 hardware platform and the DaVinci software framework, proposes specific realization methods for the AES algorithm in a digital video system together with optimization solutions. The test results show that digital movies encrypted with AES-128 cannot be played normally, which ensures the security of the digital movies. A comparison of the performance of the AES-128 algorithm before and after optimization verifies the correctness and validity of the improved algorithm.
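
    A minimal sketch of the kind of frame-level encryption described above is given below; it is not the paper's TMS320DM6446/DaVinci implementation, but an illustration of AES-128 applied to a raw video frame buffer using the pycryptodome package, with placeholder key, nonce, and frame data.

        # Minimal AES-128 sketch for encrypting a raw video frame buffer (not the paper's
        # TMS320DM6446 implementation). Requires the pycryptodome package.
        import os
        from Crypto.Cipher import AES

        key = os.urandom(16)    # placeholder 128-bit key (a real system gets this from a key server)
        nonce = os.urandom(8)   # placeholder CTR nonce; a real system would vary it per frame/stream

        def encrypt_frame(frame: bytes) -> bytes:
            # CTR mode keeps the ciphertext the same length as the plaintext frame.
            return AES.new(key, AES.MODE_CTR, nonce=nonce).encrypt(frame)

        def decrypt_frame(ciphertext: bytes) -> bytes:
            return AES.new(key, AES.MODE_CTR, nonce=nonce).decrypt(ciphertext)

        frame = os.urandom(1920 * 1080 * 3 // 2)      # stand-in for one YUV420 video frame
        assert decrypt_frame(encrypt_frame(frame)) == frame
        print("frame encrypted and recovered,", len(frame), "bytes")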

  18. Visual Inference Programming

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin; Timucin, Dogan; Rabbette, Maura; Curry, Charles; Allan, Mark; Lvov, Nikolay; Clanton, Sam; Pilewskie, Peter

    2002-01-01

    The goal of visual inference programming is to develop a software framework for data analysis and to provide machine learning algorithms for interactive data exploration and visualization. The topics include: 1) Intelligent Data Understanding (IDU) framework; 2) Challenge problems; 3) What's new here; 4) Framework features; 5) Wiring diagram; 6) Generated script; 7) Results of script; 8) Initial algorithms; 9) Independent Component Analysis for instrument diagnosis; 10) Output sensory mapping virtual joystick; 11) Output sensory mapping typing; 12) Closed-loop feedback mu-rhythm control; 13) Closed-loop training; 14) Data sources; and 15) Algorithms. This paper is in viewgraph form.

  19. Visual Alert System

    NASA Technical Reports Server (NTRS)

    1985-01-01

    A visual alert system resulted from circuitry developed by Applied Cybernetics Systems for Langley as part of a space related telemetry system. James Campman, Applied Cybernetics president, left the company and founded Grace Industries, Inc. to manufacture security devices based on the Langley technology. His visual alert system combines visual and audible alerts for hearing impaired people. The company also manufactures an arson detection device called the electronic nose, and is currently researching additional applications of the NASA technology.

  20. EDITORIAL: Focus on Visualization in Physics FOCUS ON VISUALIZATION IN PHYSICS

    NASA Astrophysics Data System (ADS)

    Sanders, Barry C.; Senden, Tim; Springel, Volker

    2008-12-01

    Advances in physics are intimately connected with developments in new technology: the telescope, precision clocks, even the computer have all heralded shifts in thinking. These landmark developments open new opportunities, accelerating research and in turn opening new scientific directions. These technological drivers often correspond to new instruments, but they might equally be a new mathematical tool, an algorithm, or a means to visualize physics in a new way. Early on in this twenty-first century, scientific communities are just starting to explore the potential of digital visualization. Whether visualization is used to represent and communicate complex concepts, or to understand and interpret experimental data, or to visualize solutions to complex dynamical equations, the basic tools of visualization are shared in each of these applications and implementations. High-performance computing exemplifies the integration of visualization with leading research. Visualization is an indispensable tool for analyzing and interpreting complex three-dimensional dynamics as well as for diagnosing numerical problems in intricate parallel calculation algorithms. The effectiveness of visualization arises by exploiting the unmatched capability of the human eye and visual cortex to process the large information content of images. In a brief glance, we recognize patterns or identify subtle features even in noisy data, something that is difficult or impossible to achieve with more traditional forms of data analysis. Importantly, visualizations guide the intuition of researchers and help to comprehend physical phenomena that lie far outside of direct experience. In fact, visualizations literally allow us to see what would otherwise remain completely invisible. For example, artificial imagery created to visualize the distribution of dark matter in the Universe has been instrumental in developing the notion of a cosmic web, and in helping to establish the current standard model of

  1. Energy and Technology Review

    SciTech Connect

    Quirk, W.J.

    1993-08-01

    The Lawrence Livermore National Laboratory was established in 1952 to do research on nuclear weapons and magnetic fusion energy. Since then, other major programs have been added, including laser fusion, laser isotope separation, biomedical and environmental science, strategic defense and applied energy technology. These programs, in turn, require research in basic scientific disciplines, including chemistry and materials science, computer science and technology, engineering and physics. In this issue, Harold Brown, the Laboratory's third director and now counselor at the Center for Strategic and International Studies, reminisces about his years at Livermore and comments about the Laboratory's role in the future. Also an article on visualizing dynamic systems in three dimensions is presented. Researchers can use our interactive algorithms to translate massive quantities of numerical data into visual form and can assign the visual markers of their choice to represent three-dimensional phenomena in a two-dimensional setting, such as a monitor screen. Major work has been done in the visualization of climate modeling, but the algorithms can be used for visualizing virtually any phenomena.

  2. Energy and Technology Review

    NASA Astrophysics Data System (ADS)

    Quirk, W. J.

    1993-08-01

    The Lawrence Livermore National Laboratory was established in 1952 to do research on nuclear weapons and magnetic fusion energy. Since then, other major programs have been added, including laser fusion, laser isotope separation, biomedical and environmental science, strategic defense and applied energy technology. These programs, in turn, require research in basic scientific disciplines, including chemistry and materials science, computer science and technology, engineering and physics. In this issue, Harold Brown, the Laboratory's third director and now counselor at the Center for Strategic and International Studies, reminisces about his years at Livermore and comments about the Laboratory's role in the future. Also an article on visualizing dynamic systems in three dimensions is presented. Researchers can use our interactive algorithms to translate massive quantities of numerical data into visual form and can assign the visual markers of their choice to represent three-dimensional phenomena in a two-dimensional setting, such as a monitor screen. Major work has been done in the visualization of climate modeling, but the algorithms can be used for visualizing virtually any phenomena.

  3. Adaptation of a support vector machine algorithm for segmentation and visualization of retinal structures in volumetric optical coherence tomography data sets

    PubMed Central

    Zawadzki, Robert J.; Fuller, Alfred R.; Wiley, David F.; Hamann, Bernd; Choi, Stacey S.; Werner, John S.

    2008-01-01

    Recent developments in Fourier-domain optical coherence tomography (Fd-OCT) have increased the acquisition speed of current ophthalmic Fd-OCT instruments sufficiently to allow the acquisition of volumetric data sets of human retinas in a clinical setting. The large size and three-dimensional (3D) nature of these data sets require that intelligent data processing, visualization, and analysis tools are used to take full advantage of the available information. Therefore, we have combined methods from volume visualization and data analysis in support of better visualization and diagnosis of Fd-OCT retinal volumes. Custom-designed 3D visualization and analysis software is used to view retinal volumes reconstructed from registered B-scans. We use a support vector machine (SVM) to perform semiautomatic segmentation of retinal layers and structures for subsequent analysis including a comparison of measured layer thicknesses. We have modified the SVM to gracefully handle OCT speckle noise by treating it as a characteristic of the volumetric data. Our software has been tested successfully in clinical settings for its efficacy in assessing 3D retinal structures in healthy as well as diseased cases. Our tool facilitates diagnosis and treatment monitoring of retinal diseases. PMID:17867795
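
    The authors' speckle-aware SVM is not reproduced in the record; the fragment below only sketches the general idea of classifying voxels of an OCT-like volume with a support vector machine trained on a few user-marked seed voxels, using scikit-learn and synthetic data. The feature choices (intensity, depth, local mean) are illustrative assumptions.

        # Illustrative SVM voxel classifier for an OCT-like volume (not the authors'
        # speckle-aware SVM). Requires NumPy and scikit-learn; all data are synthetic.
        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        volume = rng.random((64, 64, 32))             # stand-in Fd-OCT volume (x, y, z)

        def voxel_features(vol, coords):
            # Per-voxel features: intensity, normalized depth, and a local mean to damp speckle.
            feats = []
            for x, y, z in coords:
                patch = vol[max(x - 1, 0):x + 2, max(y - 1, 0):y + 2, max(z - 1, 0):z + 2]
                feats.append([vol[x, y, z], z / vol.shape[2], patch.mean()])
            return np.asarray(feats)

        # A few user-marked seed voxels (semiautomatic step); label 1 = layer of interest.
        seeds = [(10, 10, 5), (20, 30, 6), (40, 12, 25), (50, 50, 28)]
        labels = np.array([1, 1, 0, 0])
        clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(voxel_features(volume, seeds), labels)

        # Classify every voxel of one B-scan slice into a binary segmentation mask.
        ys, zs = np.meshgrid(np.arange(64), np.arange(32), indexing="ij")
        coords = [(32, int(y), int(z)) for y, z in zip(ys.ravel(), zs.ravel())]
        mask = clf.predict(voxel_features(volume, coords)).reshape(64, 32)
        print("voxels assigned to the layer in this slice:", int(mask.sum()))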

  4. Using Self-Organizing Neural Network Map Combined with Ward's Clustering Algorithm for Visualization of Students' Cognitive Structural Models about Aliveness Concept

    PubMed Central

    Ugulu, Ilker; Aydin, Halil

    2016-01-01

    We propose an approach to clustering and visualization of students' cognitive structural models. We use the self-organizing map (SOM) combined with Ward's clustering to conduct cluster analysis. In the study carried out on 100 subjects, a conceptual understanding test consisting of open-ended questions was used as a data collection tool. The results of analyses indicated that students constructed the aliveness concept by associating it predominantly with humans. Motion appeared as the term most frequently associated with the aliveness concept. The results suggest that the aliveness concept has been constructed using anthropocentric and animistic cognitive structures. In the next step, we used the data obtained from the conceptual understanding test for training the SOM. Consequently, we propose a visualization method for the cognitive structure of the aliveness concept. PMID:26819579
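
    As a rough sketch of the pipeline described above, the fragment below trains a small NumPy self-organizing map on synthetic response vectors and then groups the trained prototype vectors with SciPy's Ward linkage; it is not the authors' implementation, and the data are stand-ins for the conceptual-understanding test responses.

        # Minimal SOM + Ward's clustering sketch (NumPy/SciPy); responses are synthetic
        # stand-ins for the students' conceptual-understanding test data.
        import numpy as np
        from scipy.cluster.hierarchy import ward, fcluster

        rng = np.random.default_rng(1)
        responses = rng.integers(0, 2, size=(100, 20)).astype(float)   # 100 students x 20 terms

        # Train a small 2-D SOM: each grid node keeps a prototype vector.
        grid_h, grid_w, dim = 6, 6, responses.shape[1]
        weights = rng.random((grid_h, grid_w, dim))
        gi, gj = np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij")
        for t in range(2000):
            x = responses[rng.integers(len(responses))]
            dist = np.linalg.norm(weights - x, axis=2)
            bi, bj = np.unravel_index(np.argmin(dist), dist.shape)     # best-matching unit
            lr = 0.5 * np.exp(-t / 1000)                               # decaying learning rate
            sigma = 3.0 * np.exp(-t / 1000)                            # decaying neighborhood radius
            h = np.exp(-((gi - bi) ** 2 + (gj - bj) ** 2) / (2 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)

        # Group the trained prototype vectors with Ward's hierarchical clustering.
        prototypes = weights.reshape(-1, dim)
        clusters = fcluster(ward(prototypes), t=4, criterion="maxclust")
        print("cluster label of each SOM node:\n", clusters.reshape(grid_h, grid_w))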

  5. Report of the Conference on Visual Information Processing Research and Technology (Columbia, Maryland, June 10-21, 1974).

    ERIC Educational Resources Information Center

    National Inst. of Education (DHEW), Washington, DC.

    Chapter 1 of this report, "Introduction and General Recommendations for Eye-Movement Research and Instrumentation," discusses research priorities; encouraging and supporting theories, models, or simulations of information processing; and improved instrumentation in the field of visual information processing. Chapter 2, "Summary of Specific…

  6. Creating an Adaptive Technology Using a Cheminformatics System to Read Aloud Chemical Compound Names for People with Visual Disabilities

    ERIC Educational Resources Information Center

    Kamijo, Haruo; Morii, Shingo; Yamaguchi, Wataru; Toyooka, Naoki; Tada-Umezaki, Masahito; Hirobayashi, Shigeki

    2016-01-01

    Various tactile methods, such as Braille, have been employed to enhance the recognition ability of chemical structures by individuals with visual disabilities. However, it is unknown whether reading aloud the names of chemical compounds would be effective in this regard. There are no systems currently available using an audio component to assist…

  7. In the Palm of Your Hand: A Vision of the Future of Technology for People with Visual Impairments.

    ERIC Educational Resources Information Center

    Fruchterman, James R.

    2003-01-01

    This article discusses future directions for wireless cell phones, including personal computer capabilities, multiple input and output modalities, and open source platforms, and the benefits for people with visual impairments. The use of cell phones for increased accessibility of the Internet and for electronic books is also discussed. (Contains…

  8. Fluorescence Aggregation-Caused Quenching versus Aggregation-Induced Emission: A Visual Teaching Technology for Undergraduate Chemistry Students

    ERIC Educational Resources Information Center

    Ma, Xiaofeng; Sun, Rui; Cheng, Jinghui; Liu, Jiaoyan; Gou, Fei; Xiang, Haifeng; Zhou, Xiangge

    2016-01-01

    A laboratory experiment visually exploring two opposite basic principles of fluorescence, aggregation-caused quenching (ACQ) and aggregation-induced emission (AIE), is demonstrated. The students prepare two salicylaldehyde-based Schiff bases through a simple one-pot condensation reaction of one equiv of 1,2-diamine with 2 equiv of…

  9. Correlative visualization techniques for multidimensional data

    NASA Technical Reports Server (NTRS)

    Treinish, Lloyd A.; Goettsche, Craig

    1989-01-01

    Critical to the understanding of data is the ability to provide pictorial or visual representation of those data, particularly in support of correlative data analysis. Despite the advancement of visualization techniques for scientific data over the last several years, there are still significant problems in bringing today's hardware and software technology into the hands of the typical scientist. For example, there are other computer science domains outside of computer graphics that are required to make visualization effective, such as data management. Well-defined, flexible mechanisms for data access and management must be combined with rendering algorithms, data transformation, etc. to form a generic visualization pipeline. A generalized approach to data visualization is critical for the correlative analysis of distinct, complex, multidimensional data sets in the space and Earth sciences. Different classes of data representation techniques must be used within such a framework, which can range from simple, static two- and three-dimensional line plots to animation, surface rendering, and volumetric imaging. Static examples of actual data analyses will illustrate the importance of an effective pipeline in a data visualization system.

  10. Evaluation of Visualization Software

    NASA Technical Reports Server (NTRS)

    Globus, Al; Uselton, Sam

    1995-01-01

    Visualization software is widely used in scientific and engineering research. But computed visualizations can be very misleading, and the errors are easy to miss. We feel that the software producing the visualizations must be thoroughly evaluated and the evaluation process as well as the results must be made available. Testing and evaluation of visualization software is not a trivial problem. Several methods used in testing other software are helpful, but these methods are (apparently) often not used. When they are used, the description and results are generally not available to the end user. Additional evaluation methods specific to visualization must also be developed. We present several useful approaches to evaluation, ranging from numerical analysis of mathematical portions of algorithms to measurement of human performance while using visualization systems. Along with this brief survey, we present arguments for the importance of evaluations and discussions of appropriate use of some methods.

  11. Visual field

    MedlinePlus

    Perimetry; Tangent screen exam; Automated perimetry exam; Goldmann visual field exam; Humphrey visual field exam ... Confrontation visual field exam : This is a quick and basic check of the visual field. The health care provider ...

  12. Visual Learning.

    ERIC Educational Resources Information Center

    Kirrane, Diane E.

    1992-01-01

    An increasingly visual culture is affecting work and training. Achievement of visual literacy means acquiring competence in critical analysis of visual images and in communicating through visual media. (SK)

  13. Visual field

    MedlinePlus

    Perimetry; Tangent screen exam; Automated perimetry exam; Goldmann visual field exam; Humphrey visual field exam ... Confrontation visual field exam : This is a quick and basic check of the visual field. The health care provider sits directly in front ...

  14. Declarative Visualization Queries

    NASA Astrophysics Data System (ADS)

    Pinheiro da Silva, P.; Del Rio, N.; Leptoukh, G. G.

    2011-12-01

    In an ideal interaction with machines, scientists may prefer to write declarative queries saying "what" they want from a machine rather than to write code stating "how" the machine is going to address the user request. For example, in relational databases, users have long relied on specifying queries using Structured Query Language (SQL), a declarative language to request data results from a database management system. In the context of visualizations, we see that users are still writing code based on complex visualization toolkit APIs. With the goal of improving the scientists' experience of using visualization technology, we have applied this query-answering pattern to a visualization setting, where scientists specify what visualizations they want generated using a declarative SQL-like notation. A knowledge-enhanced management system ingests the query and knows the following: (1) how to translate the query into visualization pipelines; and (2) how to execute the visualization pipelines to generate the requested visualization. We define visualization queries as declarative requests for visualizations specified in an SQL-like language. Visualization queries specify what category of visualization to generate (e.g., volumes, contours, surfaces) as well as associated display attributes (e.g., color and opacity), without any regard for implementation, thus allowing scientists to remain partially unaware of a wide range of visualization toolkit (e.g., Generic Mapping Tools and Visualization Toolkit) specific implementation details. Implementation details are only a concern for our knowledge-based visualization management system, which uses both the information specified in the query and knowledge about visualization toolkit functions to construct visualization pipelines. Knowledge about the use of visualization toolkits includes what data formats the toolkit operates on, what formats they output, and what views they can generate. Visualization knowledge, which is not
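
    The record does not spell out the query language, so the sketch below only illustrates the general pattern of translating a small declarative request into a concrete plotting pipeline; the VISUALIZE ... OF ... COLOR ... syntax and the matplotlib back end are invented for illustration.

        # Toy "declarative visualization query" translator; the query syntax and field
        # names are invented for illustration. Requires NumPy and matplotlib.
        import re
        import numpy as np
        import matplotlib.pyplot as plt

        def run_query(query, data):
            # Parse e.g. "VISUALIZE contour OF temperature COLOR plasma" and build the pipeline.
            m = re.match(r"VISUALIZE (\w+) OF (\w+)(?: COLOR (\w+))?", query)
            if not m:
                raise ValueError("unrecognized query")
            kind, variable, cmap = m.group(1), m.group(2), m.group(3) or "viridis"
            field = data[variable]
            fig, ax = plt.subplots()
            if kind == "contour":
                ax.contourf(field, cmap=cmap)       # 'contour' maps to a filled-contour pipeline
            elif kind == "surface":
                ax.imshow(field, cmap=cmap)         # 'surface' rendered here as a shaded image
            else:
                raise ValueError(f"no pipeline known for '{kind}'")
            ax.set_title(f"{kind} of {variable}")
            return fig

        x, y = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
        fig = run_query("VISUALIZE contour OF temperature COLOR plasma",
                        {"temperature": np.exp(-(x ** 2 + y ** 2))})
        fig.savefig("temperature_contour.png")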

  15. 3-D visualization of ensemble weather forecasts - Part 1: The visualization tool Met.3D (version 1.0)

    NASA Astrophysics Data System (ADS)

    Rautenhaus, M.; Kern, M.; Schäfler, A.; Westermann, R.

    2015-02-01

    We present Met.3D, a new open-source tool for the interactive 3-D visualization of numerical ensemble weather predictions. The tool has been developed to support weather forecasting during aircraft-based atmospheric field campaigns; however, it is applicable to further forecasting, research and teaching activities. Our work approaches challenging topics related to the visual analysis of numerical atmospheric model output - 3-D visualization, ensemble visualization, and how both can be used in a meaningful way suited to weather forecasting. Met.3D builds a bridge from proven 2-D visualization methods commonly used in meteorology to 3-D visualization by combining both visualization types in a 3-D context. We address the issue of spatial perception in the 3-D view and present approaches to using the ensemble to allow the user to assess forecast uncertainty. Interactivity is key to our approach. Met.3D uses modern graphics technology to achieve interactive visualization on standard consumer hardware. The tool supports forecast data from the European Centre for Medium Range Weather Forecasts and can operate directly on ECMWF hybrid sigma-pressure level grids. We describe the employed visualization algorithms, and analyse the impact of the ECMWF grid topology on computing 3-D ensemble statistical quantities. Our techniques are demonstrated with examples from the T-NAWDEX-Falcon 2012 campaign.

  16. Three-dimensional visualization of ensemble weather forecasts - Part 1: The visualization tool Met.3D (version 1.0)

    NASA Astrophysics Data System (ADS)

    Rautenhaus, M.; Kern, M.; Schäfler, A.; Westermann, R.

    2015-07-01

    We present "Met.3D", a new open-source tool for the interactive three-dimensional (3-D) visualization of numerical ensemble weather predictions. The tool has been developed to support weather forecasting during aircraft-based atmospheric field campaigns; however, it is applicable to further forecasting, research and teaching activities. Our work approaches challenging topics related to the visual analysis of numerical atmospheric model output - 3-D visualization, ensemble visualization and how both can be used in a meaningful way suited to weather forecasting. Met.3D builds a bridge from proven 2-D visualization methods commonly used in meteorology to 3-D visualization by combining both visualization types in a 3-D context. We address the issue of spatial perception in the 3-D view and present approaches to using the ensemble to allow the user to assess forecast uncertainty. Interactivity is key to our approach. Met.3D uses modern graphics technology to achieve interactive visualization on standard consumer hardware. The tool supports forecast data from the European Centre for Medium Range Weather Forecasts (ECMWF) and can operate directly on ECMWF hybrid sigma-pressure level grids. We describe the employed visualization algorithms, and analyse the impact of the ECMWF grid topology on computing 3-D ensemble statistical quantities. Our techniques are demonstrated with examples from the T-NAWDEX-Falcon 2012 (THORPEX - North Atlantic Waveguide and Downstream Impact Experiment) campaign.

  17. Proceedings of the 1984 IEEE Computer Society workshop on visual languages

    SciTech Connect

    Not Available

    1984-01-01

    This book presents the papers given at a conference on programming languages for image processing. Topics considered at the conference included algorithms, satellite pictures, a stereo vision method, a robot vision language, computer graphics, data base technology, remote sensing, man-machine systems, interactive display devices, natural language, pattern recognition, artificial intelligence, expert systems, and the nature of visual languages.

  18. BoreholeAR: A mobile tablet application for effective borehole database visualization using an augmented reality technology

    NASA Astrophysics Data System (ADS)

    Lee, Sangho; Suh, Jangwon; Park, Hyeong-Dong

    2015-03-01

    Boring logs are widely used in geological field studies since the data describe various attributes of underground and surface environments. However, it is difficult to manage multiple boring logs in the field as the conventional management and visualization methods are not suitable for integrating and combining large data sets. We developed an iPad application to enable users to search boring logs rapidly and visualize them using the augmented reality (AR) technique. For the development of the application, a standard borehole database appropriate for a mobile-based borehole database management system was designed. The application consists of three modules: an AR module, a map module, and a database module. The AR module superimposes borehole data on camera imagery as viewed by the user and provides intuitive visualization of borehole locations. The map module shows the locations of corresponding borehole data on a 2D map with additional map layers. The database module provides data management functions for large borehole databases for other modules. A field survey was also carried out using more than 100,000 borehole records.

  19. Examining a knowledge domain: Interactive visualization of the Geographic Information Science and Technology Body of Knowledge 1

    NASA Astrophysics Data System (ADS)

    Stowell, Marilyn Ruth

    This research compared the effectiveness and performance of interactive visualizations of the GIS&T Body of Knowledge 1. The visualizations were created using Processing, and display the structure and content of the Body of Knowledge using various spatial layout methods: the Indented List, Tree Graph, treemap and Similarity Graph. The first three methods utilize the existing hierarchical structure of the BoK text, while the fourth method (Similarity Graph) serves as a jumping off point for exploring content-based visualizations of the BoK. The following questions have guided the framework of this research: (1) Which of the spatial layouts is most effective for completing tasks related to the GIS&T BoK overall? How do they compare to each other in terms of performance? (2) Is one spatial layout significantly more or less effective than others for completing a particular cognitive task? (3) Is the user able to utilize the BoK as a basemap or reference system and make inferences based on BoK scorecard overlays? (4) Which design aspects of the interface assist in carrying out the survey objectives? Which design aspects of the application detract from fulfilling the objectives? To answer these questions, human subjects were recruited to participate in a survey, during which they were assigned a random spatial layout and were asked questions about the BoK based on their interaction with the visualization tool. 75 users were tested, 25 for each spatial layout. Statistical analysis revealed that there were no statistically significant differences between means for overall accuracy when comparing the three visualizations. In looking at individual questions, Tree Graph and Indented List yielded statistically significant higher scores for questions regarding the structure of the Body of Knowledge, as compared to the treemap. There was a significant strong positive correlation between the time taken to complete the survey and the final survey score. This correlation was

  20. Top Ten Interaction Challenges in Extreme-Scale Visual Analytics

    SciTech Connect

    Wong, Pak C.; Shen, Han-Wei; Chen, Chaomei

    2012-05-31

    The chapter presents ten selected user interfaces and interaction challenges in extreme-scale visual analytics. The study of visual analytics is often referred to as 'the science of analytical reasoning facilitated by interactive visual interfaces' in the literature. The discussion focuses on the issues of applying visual analytics technologies to extreme-scale scientific and non-scientific data ranging from petabyte to exabyte in sizes. The ten challenges are: in situ interactive analysis, user-driven data reduction, scalability and multi-level hierarchy, representation of evidence and uncertainty, heterogeneous data fusion, data summarization and triage for interactive query, analytics of temporally evolving features, the human bottleneck, design and engineering development, and the Renaissance of conventional wisdom. The discussion addresses concerns that arise from different areas of hardware, software, computation, algorithms, and human factors. The chapter also evaluates the likelihood of success in meeting these challenges in the near future.

  1. Evolving Attractive Faces Using Morphing Technology and a Genetic Algorithm: A New Approach to Determining Ideal Facial Aesthetics

    PubMed Central

    Wong, Brian J. F.; Karmi, Koohyar; Devcic, Zlatko; McLaren, Christine E.; Chen, Wen-Pin

    2013-01-01

    Objectives The objectives of this study were to: 1) determine if a genetic algorithm in combination with morphing software can be used to evolve more attractive faces; and 2) evaluate whether this approach can be used as a tool to define or identify the attributes of the ideal attractive face. Study Design Basic research study incorporating focus group evaluations. Methods Digital images were acquired of 250 female volunteers (18–25 y). Randomly selected images were used to produce a parent generation (P) of 30 synthetic faces using morphing software. Then, a focus group of 17 trained volunteers (18–25 y) scored each face on an attractiveness scale ranging from 1 (unattractive) to 10 (attractive). A genetic algorithm was used to select 30 new pairs from the parent generation, and these were morphed using software to produce a new first generation (F1) of faces. The F1 faces were scored by the focus group, and the process was repeated for a total of four iterations of the algorithm. The algorithm mimics natural selection by using the attractiveness score as the selection pressure; the more attractive faces are more likely to morph. All five generations (P-F4) were then scored by three focus groups: a) surgeons (n = 12), b) cosmetology students (n = 44), and c) undergraduate students (n = 44). Morphometric measurements were made of 33 specific features on each of the 150 synthetic faces, and correlated with attractiveness scores using univariate and multivariate analysis. Results The average facial attractiveness scores increased with each generation and were 3.66 (±0.60), 4.59 (±0.73), 5.50 (±0.62), 6.23 (±0.31), and 6.39 (±0.24) for P and F1–F4 generations, respectively. Histograms of attractiveness score distributions show a significant shift in the skew of each curve toward more attractive faces with each generation. Univariate analysis identified nasal width, eyebrow arch height, and lip thickness as being significantly correlated with attractiveness
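
    A compressed sketch of the selection-and-morphing loop described above follows; pixel averaging stands in for the morphing software used in the study, and rate_face() is a placeholder for the focus-group attractiveness scores.

        # Sketch of the evolutionary loop: focus-group scores act as selection pressure and
        # "morphing" two parents yields a child face. Pixel averaging and rate_face() are
        # placeholders for the morphing software and the human ratings.
        import numpy as np

        rng = np.random.default_rng(42)

        def rate_face(face):
            # Placeholder for the 1-10 focus-group attractiveness score.
            return float(np.clip(10 - np.abs(face - 0.5).mean() * 20, 1, 10))

        def morph(parent_a, parent_b):
            # Stand-in for image morphing: blend the two parent images.
            return 0.5 * (parent_a + parent_b)

        population = [rng.random((64, 64)) for _ in range(30)]        # parent generation P
        for generation in range(1, 5):                                # F1 .. F4
            scores = np.array([rate_face(f) for f in population])
            probs = scores / scores.sum()                             # fitter faces morph more often
            population = [
                morph(*(population[k] for k in rng.choice(30, size=2, replace=False, p=probs)))
                for _ in range(30)
            ]
            mean_score = np.mean([rate_face(f) for f in population])
            print(f"generation F{generation}: mean attractiveness {mean_score:.2f}")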

  2. Visualization of electronic density

    DOE PAGESBeta

    Grosso, Bastien; Cooper, Valentino R.; Pine, Polina; Hashibon, Adham; Yaish, Yuval; Adler, Joan

    2015-04-22

    An atom’s volume depends on its electronic density. Although this density can only be evaluated exactly for hydrogen-like atoms, there are many excellent numerical algorithms and packages to calculate it for other materials. 3D visualization of charge density is challenging, especially when several molecular/atomic levels are intertwined in space. We explore several approaches to 3D charge density visualization, including the extension of an anaglyphic stereo visualization application based on the AViz package to larger structures such as nanotubes. We will describe motivations and potential applications of these tools for answering interesting questions about nanotube properties.

  3. A Review and Advance Technology in Multi-Area Automatic Generation Control by Using Minority Charge Carrier Inspired Algorithm

    NASA Astrophysics Data System (ADS)

    Madichetty, Sreedhar; Panda, Susmita; Mishra, Sambeet; Dasgupta, Abhijit

    2013-11-01

    This article deals with automatic generation control of a multi-area interconnected thermal system in different modes using intelligent integral and proportional-integral controllers. An appropriate generation rate constraint has been considered for the thermal generation plants. The two interconnected thermal areas are considered with reheat turbines, and the effect of the reheat turbines on the dynamic responses has been investigated. Further, the selection of suitable integral and proportional-integral controllers has been investigated with a minority charge carrier inspired algorithm. Cumulative system performance is examined under different load perturbations in both thermal areas. Further, the system is investigated with different area control errors, and the results are explored.

  4. Fast and precise algorithms for calculating offset correction in single photon counting ASICs built in deep sub-micron technologies

    NASA Astrophysics Data System (ADS)

    Maj, P.

    2014-07-01

    An important trend in the design of readout electronics working in the single photon counting mode for hybrid pixel detectors is to minimize the single pixel area without sacrificing its functionality. This is the reason why many digital and analog blocks are made with the smallest, or next to smallest, transistors possible. This causes a problem with matching across the whole pixel matrix, which is accepted by designers and, of course, should be corrected with the use of dedicated circuitry, which, by the same rule of minimizing devices, suffers from the mismatch. Therefore, the output of such a correction circuit, controlled by an ultra-small area DAC, is not only a non-linear function, but it is also often non-monotonic. As long as it can be used for proper correction of the DC operation points inside each pixel, it is acceptable, but the time required for correction plays an important role for both chip verification and the design of a big, multi-chip system. Therefore, we present two algorithms: a precise one and a fast one. The first algorithm is based on the noise-hit profiles obtained during so-called threshold scan procedures. The fast correction procedure is based on a scan of the trim DACs and takes less than a minute in an SPC detector system consisting of several thousand pixels.
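
    Neither algorithm is reproduced in detail in the record; the fragment below only illustrates the underlying idea of offset trimming on synthetic numbers: for each pixel, the trim-DAC code whose simulated noise-edge position lands closest to a common target is selected, which narrows the offset spread across the matrix.

        # Illustrative per-pixel offset trim: pick the trim-DAC code whose (simulated)
        # noise-edge position lands closest to a common target. All numbers are synthetic.
        import numpy as np

        rng = np.random.default_rng(7)
        n_pixels, n_codes = 1000, 16                     # e.g. a 4-bit trim DAC per pixel

        pixel_offset = rng.normal(0.0, 8.0, n_pixels)    # mV, pixel-to-pixel mismatch
        dac_step = rng.normal(2.0, 0.3, (n_pixels, 1))   # mV per code, also mismatched
        codes = np.arange(n_codes)

        # Noise-edge position seen in a threshold scan, for every pixel and every trim code.
        noise_edge = pixel_offset[:, None] + dac_step * codes      # shape (n_pixels, n_codes)

        target = np.median(noise_edge)                   # common operating point for the matrix
        best_code = np.argmin(np.abs(noise_edge - target), axis=1)

        residual = noise_edge[np.arange(n_pixels), best_code] - target
        print(f"offset spread before trimming: {noise_edge[:, n_codes // 2].std():.2f} mV")
        print(f"offset spread after trimming:  {residual.std():.2f} mV")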

  5. Medical image compression algorithm based on wavelet transform

    NASA Astrophysics Data System (ADS)

    Chen, Minghong; Zhang, Guoping; Wan, Wei; Liu, Minmin

    2005-02-01

    With the rapid development of electronic imaging and multimedia technology, telemedicine is being applied to modern medical services in the hospital. Digital medical images are characterized by high resolution, high precision and vast data volume. An optimized compression algorithm can alleviate restrictions on transmission speed and data storage. This paper describes the characteristics of the human visual system based on its physiological structure, analyses the characteristics of medical images in telemedicine, and then puts forward an optimized compression algorithm based on the wavelet zerotree. After the image is smoothed, it is decomposed with Haar filters. Then the wavelet coefficients are quantized adaptively. Therefore, we can maximize compression efficiency and achieve better subjective visual quality. This algorithm can be applied to image transmission in telemedicine. In the end, we examined the feasibility of this algorithm with an image transmission experiment on the network.
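
    A minimal sketch of the described pipeline (smoothing, Haar wavelet decomposition, coefficient thresholding) is shown below using PyWavelets and SciPy on a synthetic image; the zerotree entropy-coding stage of the paper is omitted.

        # Minimal Haar-wavelet compression sketch (PyWavelets + SciPy); the zerotree coding
        # stage of the paper is omitted and the "image" is synthetic.
        import numpy as np
        import pywt
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(3)
        image = gaussian_filter(rng.random((256, 256)) * 255.0, sigma=1.0)   # smoothed input

        # Multi-level Haar decomposition.
        coeffs = pywt.wavedec2(image, "haar", level=3)

        def quantize(coeffs, keep_fraction=0.05):
            # Crude adaptive quantization: zero out all but the largest coefficients.
            arr, slices = pywt.coeffs_to_array(coeffs)
            threshold = np.quantile(np.abs(arr), 1 - keep_fraction)
            arr = np.where(np.abs(arr) >= threshold, arr, 0.0)
            return pywt.array_to_coeffs(arr, slices, output_format="wavedec2")

        reconstructed = pywt.waverec2(quantize(coeffs), "haar")
        rmse = np.sqrt(np.mean((image - reconstructed[:256, :256]) ** 2))
        print(f"kept ~5% of coefficients, RMSE = {rmse:.2f} grey levels")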

  6. VisPortal: Deploying grid-enabled visualization tools through a web-portal interface

    SciTech Connect

    Bethel, Wes; Siegerist, Cristina; Shalf, John; Shetty, Praveenkumar; Jankun-Kelly, T.J.; Kreylos, Oliver; Ma, Kwan-Liu

    2003-06-09

    The LBNL/NERSC Visportal effort explores ways to deliver advanced Remote/Distributed Visualization (RDV) capabilities through a Grid-enabled web-portal interface. The effort focuses on latency tolerant distributed visualization algorithms, GUI designs that are more appropriate for the capabilities of web interfaces, and refactoring parallel-distributed applications to work in a N-tiered component deployment strategy. Most importantly, our aim is to leverage commercially-supported technology as much as possible in order to create a deployable, supportable, and hence viable platform for delivering grid-based visualization services to collaboratory users.

  7. Accessibility of e-Learning and Computer and Information Technologies for Students with Visual Impairments in Postsecondary Education

    ERIC Educational Resources Information Center

    Fichten, Catherine S.; Asuncion, Jennison V.; Barile, Maria; Ferraro, Vittoria; Wolforth, Joan

    2009-01-01

    This article presents the results of two studies on the accessibility of e-learning materials and other information and computer and communication technologies for 143 Canadian college and university students with low vision and 29 who were blind. It offers recommendations for enhancing access, creating new learning opportunities, and eliminating…

  8. Data mashups deliver value to physician practices. New data visualization technology not just for consumers any longer.

    PubMed

    McBride, Jack

    2013-01-01

    Data mashup technology is one way medical practices can achieve greater insight into the operations of their business. Ultimately, this can lead to higher reimbursements. But what is a mashup, and how does it work? This article discusses how mashups--combining data from two or more disparate sources into a new useful service--are cropping up everywhere. PMID:23767131

  9. Applying the CHAID Algorithm to Analyze How Achievement Is Influenced by University Students' Demographics, Study Habits, and Technology Familiarity

    ERIC Educational Resources Information Center

    Baran, Bahar; Kiliç, Eylem

    2015-01-01

    The purpose of this study is to analyze three separate constructs (demographics, study habits, and technology familiarity) that can be used to identify university students' characteristics and the relationship between each of these constructs with student achievement. A survey method was used for the current study, and the participants included…

  10. Robotic Follow Algorithm

    2005-03-30

    The Robotic Follow Algorithm enables any robotic vehicle to follow a moving target while reactively choosing a route around nearby obstacles. The robotic follow behavior can be used with different camera systems and can be used with thermal or visual tracking as well as other tracking methods such as radio frequency tags.

  11. Algorithm For Modeling Coordinates Of Corners Of Buildings Determined With RTN GNSS Technology Using Vectors Translation Method

    NASA Astrophysics Data System (ADS)

    Krzyżek, Robert

    2015-09-01

    The paper presents an innovative solution which increases the reliability of determining the coordinates of corners of building structures in the RTN GNSS mode. After the base points have been surveyed in real time, it is proposed to use the method of line-line intersection, which yields the Cartesian coordinates X, Y of the corners of buildings. The coordinates obtained in this way are then subjected to an innovative solution called the method of vectors translation, which models the coordinates using an algorithm developed by the author. As a result, we obtain the Cartesian coordinates X and Y of the corners of building structures, which can be determined with very high accuracy and reliability.
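
    The vectors-translation modeling step is specific to the paper, but the preceding line-line intersection step is standard plane geometry; a minimal sketch with made-up base-point coordinates follows.

        # Planar line-line intersection sketch: each building wall is observed as a line
        # through two surveyed base points, and the hidden corner is their intersection.
        # Coordinates here are made up for illustration.
        import numpy as np

        def line_intersection(p1, p2, p3, p4):
            # Intersect line p1-p2 with line p3-p4 (2-D Cartesian X, Y).
            p1, p2, p3, p4 = map(np.asarray, (p1, p2, p3, p4))
            d1, d2 = p2 - p1, p4 - p3
            denom = d1[0] * d2[1] - d1[1] * d2[0]
            if abs(denom) < 1e-12:
                raise ValueError("lines are parallel; no unique corner")
            t = ((p3[0] - p1[0]) * d2[1] - (p3[1] - p1[1]) * d2[0]) / denom
            return p1 + t * d1

        # Two base points along each of two walls (X, Y in metres, fictitious values).
        wall_a = ((1000.00, 2000.00), (1012.35, 2003.10))
        wall_b = ((1015.80, 1995.20), (1013.90, 2010.40))
        corner = line_intersection(*wall_a, *wall_b)
        print(f"corner X = {corner[0]:.3f} m, Y = {corner[1]:.3f} m")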

  12. Visualization research on high efficiency and low NOx combustion technology with multiple air-staged and large angle counter flow of fuel-rich jet

    NASA Astrophysics Data System (ADS)

    Li, Y. Y.; Li, Y.; Lin, Z. C.; Fan, W. D.; Zhang, M. C.

    2010-03-01

    In this paper, a new technique for tangentially fired pulverized coal boilers, a high-efficiency and low-NOx combustion technology with multiple air staging and a large-angle counter flow of fuel-rich jet (ACCT for short), is proposed. Based on the traditional air-staged and rich-lean combustion technique, a NOx reduction zone is introduced through air injection between the primary and secondary combustion zones. To verify the characteristics of this technique, an experiment with a newly developed visualization method, image processing of smog tracing with fractal dimension, is carried out on a cold model of a 300 MW furnace designed with this technique. The results show that, compared to injection without counter flow, the center lines of counter-flow injection go deeper into the chamber and form a smaller tangential circle, which means the rotating momentum of the entire vortex is weaker and the exit gyration is weaker. They also show that with counter flow the fractal dimension of the boundary between the primary jet and the front fire-side air is larger, which indicates more intense turbulence and better mixing. In conclusion, image processing of smog tracing with fractal dimension can be a quantitative, convenient and effective visualization method that does not disturb the flow field, and ACCT shows the following advantages: high burnout rate, low NOx emission, stable burning, slagging prevention, and reduced temperature bias.
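
    The experimental image-processing chain is not given in the record; the fragment below sketches only the generic box-counting estimate of the fractal dimension of a binary boundary image, on a synthetic boundary rather than the smog-tracing data.

        # Generic box-counting fractal dimension of a binary boundary image (NumPy only).
        # The boundary here is synthetic, standing in for a traced jet/air interface.
        import numpy as np

        def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
            # Estimate the dimension from the slope of log N(s) versus log(1/s).
            counts = []
            for s in sizes:
                h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
                blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
                counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))  # boxes touching the boundary
            slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
            return slope

        # Synthetic wiggly boundary curve rasterized into a 256 x 512 binary image.
        x = np.linspace(0, 4 * np.pi, 512)
        y = (0.3 * np.sin(3 * x) + 0.1 * np.sin(17 * x) + 0.5) * 255
        mask = np.zeros((256, 512), dtype=bool)
        mask[np.clip(y.astype(int), 0, 255), np.arange(512)] = True
        print(f"estimated boundary dimension: {box_counting_dimension(mask):.2f}")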

  13. Thalamic Visual Prosthesis.

    PubMed

    Nguyen, Hieu T; Tangutooru, Siva M; Rountree, Corey M; Kantzos, Andrew J; Tarlochan, Faris; Yoon, W Jong; Troy, John B

    2016-08-01

    Glaucoma is a neurological disorder leading to blindness initially through the loss of retinal ganglion cells, followed by loss of neurons higher in the visual system. Some work has been undertaken to develop prostheses for glaucoma patients targeting tissues along the visual pathway, including the lateral geniculate nucleus (LGN) of the thalamus, but especially the visual cortex. This review makes the case for a visual prosthesis that targets the LGN. The compact nature and orderly structure of this nucleus make it a potentially better target to restore vision than the visual cortex. Existing research for the development of a thalamic visual prosthesis will be discussed along with the gaps that need to be addressed before such a technology could be applied clinically, as well as the challenge posed by the loss of LGN neurons as glaucoma progresses. PMID:27214884

  14. Visualization Techniques in Space and Atmospheric Sciences

    NASA Technical Reports Server (NTRS)

    Szuszczewicz, E. P. (Editor); Bredekamp, Joseph H. (Editor)

    1995-01-01

    Unprecedented volumes of data will be generated by research programs that investigate the Earth as a system and the origin of the universe, which will in turn require analysis and interpretation that will lead to meaningful scientific insight. Providing a widely distributed research community with the ability to access, manipulate, analyze, and visualize these complex, multidimensional data sets depends on a wide range of computer science and technology topics. Data storage and compression, data base management, computational methods and algorithms, artificial intelligence, telecommunications, and high-resolution display are just a few of the topics addressed. A unifying theme throughout the papers with regards to advanced data handling and visualization is the need for interactivity, speed, user-friendliness, and extensibility.

  15. Optimized algorithm module for large volume remote sensing image processing system

    NASA Astrophysics Data System (ADS)

    Jing, Changfeng; Liu, Nan; Liu, Renyi; Wang, Jiawen; Zhang, Qin

    2007-12-01

    A new algorithm module for a remote sensing image processing system is introduced in this paper. It is coded in the Visual C++ 6.0 programming language and can process large volumes of remote sensing imagery. The key technologies adopted in the algorithm module are also described. Two defects of the American remote sensing image processing system ERDAS are identified, in its image filter algorithm and in the storage of pixel values that fall outside the data type range. In the authors' system, two optimized methods have been implemented to address these two aspects. Compared with the ERDAS IMAGINE system, the two methods proved to be effective in image analysis.

  16. Visual agnosia.

    PubMed

    Álvarez, R; Masjuan, J

    2016-03-01

    Visual agnosia is defined as an impairment of object recognition in the absence of a visual acuity deficit or cognitive dysfunction that would explain this impairment. This condition is caused by lesions in the visual association cortex, sparing primary visual cortex. There are 2 main pathways that process visual information: the ventral stream, tasked with object recognition, and the dorsal stream, in charge of locating objects in space. Visual agnosia can therefore be divided into 2 major groups depending on which of the two streams is damaged. The aim of this article is to conduct a narrative review of the various visual agnosia syndromes, including recent developments in a number of these syndromes. PMID:26358494

  17. Combining usability testing with eye-tracking technology: evaluation of a visualization support for antibiotic use in intensive care.

    PubMed

    Eghdam, Aboozar; Forsman, Johanna; Falkenhav, Magnus; Lind, Mats; Koch, Sabine

    2011-01-01

    This research work is an explorative study to measure the efficiency, effectiveness and user satisfaction of a prototype called Infobiotika aiming to support antibiotic use in intensive care. The evaluation was performed by combining traditional usability testing with eye-tracking technology. The test was conducted with eight intensive care physicians, of whom four were specialists and four were residents. During three test phases participants were asked to perform three types of tasks, namely navigational tasks, clinical tasks, and tasks to measure the learning effect after 3-5 minutes of free exploration time. A post-test questionnaire was used to explore user satisfaction. Based on the results and overall observations, Infobiotika seems to be effective and efficient in terms of supporting navigation and also a learnable product for intensive care physicians, fulfilling their need to get an accurate overview of a patient's status quickly. Applying eye-tracking technology during usability testing has been shown to be a valuable complement to traditional methods that revealed many unexpected issues in terms of navigation and contributed a supplementary understanding about design problems and user performance. PMID:21893885

  18. Visual fatigue modeling for stereoscopic video shot based on camera motion

    NASA Astrophysics Data System (ADS)

    Shi, Guozhong; Sang, Xinzhu; Yu, Xunbo; Liu, Yangdong; Liu, Jing

    2014-11-01

    As three-dimensional television (3-DTV) and 3-D movies become popular, visual discomfort limits further applications of 3D display technology. The causes of visual discomfort from stereoscopic video include conflicts between accommodation and convergence, excessive binocular parallax, fast motion of objects, and so on. Here, a novel method for evaluating visual fatigue is demonstrated. Influence factors including spatial structure, motion scale and comfort zone are analyzed. According to the human visual system (HVS), viewers only need to converge their eyes on specific objects when the camera and background are static. Relative motion should be considered for different camera conditions, which determine different factor coefficients and weights. Compared with the traditional visual fatigue prediction model, a novel visual fatigue prediction model is presented. The degree of visual fatigue is predicted using a multiple linear regression method combined with subjective evaluation. Consequently, each factor reflects the characteristics of the scene, and the total visual fatigue score can be computed according to the proposed algorithm. Compared with conventional algorithms which ignore the status of the camera, our approach exhibits reliable performance in terms of correlation with subjective test results.
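
    The paper's factor definitions and fitted weights are not reproduced here; the sketch below merely illustrates fitting such a multiple linear regression of subjective fatigue scores on per-shot factors, with entirely synthetic numbers.

        # Multiple linear regression of subjective visual-fatigue scores on per-shot factors
        # (spatial structure, motion scale, comfort-zone violation). All numbers are synthetic.
        import numpy as np

        rng = np.random.default_rng(11)
        n_shots = 40
        factors = rng.random((n_shots, 3))      # columns: spatial structure, motion, comfort zone
        true_weights = np.array([1.2, 2.5, 3.1])
        scores = factors @ true_weights + 0.5 + rng.normal(0, 0.2, n_shots)  # subjective ratings

        # Least-squares fit with an intercept term.
        X = np.hstack([factors, np.ones((n_shots, 1))])
        coef, *_ = np.linalg.lstsq(X, scores, rcond=None)
        print("fitted factor weights:", np.round(coef[:3], 2), "intercept:", round(coef[3], 2))

        # Predicted fatigue score for a new shot (fast motion, object outside the comfort zone).
        new_shot = np.array([0.4, 0.9, 0.8, 1.0])
        print(f"predicted fatigue score: {new_shot @ coef:.2f}")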

  19. Rapid Sampling for Visualizations with Ordering Guarantees

    PubMed Central

    Kim, Albert; Blais, Eric; Parameswaran, Aditya; Indyk, Piotr; Madden, Sam; Rubinfeld, Ronitt

    2015-01-01

    Visualizations are frequently used as a means to understand trends and gather insights from datasets, but often take a long time to generate. In this paper, we focus on the problem of rapidly generating approximate visualizations while preserving crucial visual properties of interest to analysts. Our primary focus will be on sampling algorithms that preserve the visual property of ordering; our techniques will also apply to some other visual properties. For instance, our algorithms can be used to generate an approximate visualization of a bar chart very rapidly, where the comparisons between any two bars are correct. We formally show that our sampling algorithms are generally applicable and provably optimal in theory, in that they do not take more samples than necessary to generate the visualizations with ordering guarantees. They also work well in practice, correctly ordering output groups while taking orders of magnitude fewer samples and much less time than conventional sampling schemes. PMID:26779380
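
    The paper's provably optimal algorithms are not reproduced in the record; the fragment below only sketches the basic idea on synthetic data: keep sampling each group until the confidence intervals around the group means separate, at which point the displayed bar ordering can be trusted.

        # Sampling sketch: draw samples per group until confidence intervals around the group
        # means separate, so the bar ordering is trustworthy. This illustrates the idea only,
        # not the paper's provably optimal algorithm.
        import numpy as np

        rng = np.random.default_rng(5)
        true_means = {"A": 10.0, "B": 12.0, "C": 15.0}            # unknown to the algorithm
        samples = {g: list(rng.normal(m, 2.0, size=5)) for g, m in true_means.items()}

        def ci_halfwidth(xs, z=2.0):
            xs = np.asarray(xs)
            return z * xs.std(ddof=1) / np.sqrt(len(xs)) if len(xs) > 1 else np.inf

        def intervals_overlap():
            stats = {g: (np.mean(xs), ci_halfwidth(xs)) for g, xs in samples.items()}
            ordered = sorted(stats.items(), key=lambda kv: kv[1][0])
            return any(lo[1][0] + lo[1][1] >= hi[1][0] - hi[1][1]
                       for lo, hi in zip(ordered, ordered[1:]))

        while intervals_overlap():
            # Sample from the group whose interval is widest (cheapest way to shrink it).
            g = max(samples, key=lambda k: ci_halfwidth(samples[k]))
            samples[g].append(rng.normal(true_means[g], 2.0))

        print({g: (len(xs), round(float(np.mean(xs)), 2)) for g, xs in samples.items()})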

  20. Two Contrasting Approaches to Building High School Teacher Capacity to Teach About Local Climate Change Using Powerful Geospatial Data and Visualization Technology

    NASA Astrophysics Data System (ADS)

    Zalles, D. R.

    2011-12-01

    The presentation will compare and contrast two different place-based approaches to helping high school science teachers use geospatial data visualization technology to teach about climate change in their local regions. The approaches are being used in the development, piloting, and dissemination of two projects for high school science led by the author: the NASA-funded Data-enhanced Investigations for Climate Change Education (DICCE) and the NSF funded Studying Topography, Orographic Rainfall, and Ecosystems with Geospatial Information Technology (STORE). DICCE is bringing an extensive portal of Earth observation data, the Goddard Interactive Online Visualization and Analysis Infrastructure, to high school classrooms. STORE is making available data for viewing results of a particular IPCC-sanctioned climate change model in relation to recent data about average temperatures, precipitation, and land cover for study areas in central California and western New York State. Across the two projects, partner teachers of academically and ethnically diverse students from five states are participating in professional development and pilot testing. Powerful geospatial data representation technologies are difficult to implement in high school science because of challenges that teachers and students encounter navigating data access and making sense of data characteristics and nomenclature. Hence, on DICCE, the researchers are testing the theory that by providing a scaffolded technology-supported process for instructional design, starting from fundamental questions about the content domain, teachers will make better instructional decisions. Conversely, the STORE approach is rooted in the perspective that co-design of curricular materials among researchers and teacher partners that work off of "starter" lessons covering focal skills and understandings will lead to the most effective utilizations of the technology in the classroom. The projects' goals and strategies for student

  1. Quantifying Pilot Visual Attention in Low Visibility Terminal Operations

    NASA Technical Reports Server (NTRS)

    Ellis, Kyle K.; Arthur, J. J.; Latorella, Kara A.; Kramer, Lynda J.; Shelton, Kevin J.; Norman, Robert M.; Prinzel, Lawrence J.

    2012-01-01

    Quantifying pilot visual behavior allows researchers to determine not only where a pilot is looking and when, but also holds implications for specific behavioral tracking when these data are coupled with flight technical performance. Remote eye tracking systems have been integrated into simulators at NASA Langley with effectively no impact on the pilot environment. This paper discusses the installation and use of a remote eye tracking system. The data collection techniques from a complex human-in-the-loop (HITL) research experiment are discussed, especially the data reduction algorithms and logic that transform raw eye tracking data into quantified visual behavior metrics, and the analysis methods used to interpret visual behavior. The findings suggest superior performance for Head-Up Display (HUD) and improved attentional behavior for Head-Down Display (HDD) implementations of Synthetic Vision System (SVS) technologies for low visibility terminal area operations. Keywords: eye tracking, flight deck, NextGen, human machine interface, aviation
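
    The specific data reduction algorithms are not detailed in the record; the fragment below sketches one common reduction step, converting raw gaze samples into percentage dwell time per area of interest (AOI), with invented AOI rectangles and synthetic gaze data.

        # Reduce raw gaze samples (x, y, timestamp) into percentage dwell time per area of
        # interest (AOI). AOI rectangles and gaze samples here are synthetic placeholders.
        import numpy as np

        rng = np.random.default_rng(2)

        # AOIs as (x_min, y_min, x_max, y_max) in screen pixels, e.g. HUD vs. head-down display.
        aois = {"HUD": (200, 100, 800, 400), "HDD": (300, 500, 700, 760), "OTW": (0, 0, 1024, 99)}

        # Synthetic 60 Hz gaze trace: 10 s of samples scattered over a 1024 x 768 screen.
        t = np.arange(0, 10, 1 / 60)
        gaze = np.column_stack([rng.uniform(0, 1024, t.size), rng.uniform(0, 768, t.size)])

        def dwell_percentages(gaze_xy, timestamps, aois):
            dt = np.diff(timestamps, append=timestamps[-1] + (timestamps[-1] - timestamps[-2]))
            total = dt.sum()
            out = {}
            for name, (x0, y0, x1, y1) in aois.items():
                inside = ((gaze_xy[:, 0] >= x0) & (gaze_xy[:, 0] <= x1) &
                          (gaze_xy[:, 1] >= y0) & (gaze_xy[:, 1] <= y1))
                out[name] = 100.0 * dt[inside].sum() / total
            return out

        print({k: round(v, 1) for k, v in dwell_percentages(gaze, t, aois).items()})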

  2. Visualization of electronic density

    NASA Astrophysics Data System (ADS)

    Grosso, Bastien; Cooper, Valentino R.; Pine, Polina; Hashibon, Adham; Yaish, Yuval; Adler, Joan

    2015-10-01

    The spatial volume occupied by an atom depends on its electronic density. Although this density can only be evaluated exactly for hydrogen-like atoms, there are many excellent algorithms and packages to calculate it numerically for other materials. Three-dimensional visualization of charge density is challenging, especially when several molecular/atomic levels are intertwined in space. In this paper, we explore several approaches to this, including the extension of an anaglyphic stereo visualization application based on the AViz package for hydrogen atoms and simple molecules to larger structures such as nanotubes. We will describe motivations and potential applications of these tools for answering interesting physical questions about nanotube properties.

  3. Biological Visualization, Imaging and Simulation(Bio-VIS) at NASA Ames Research Center: Developing New Software and Technology for Astronaut Training and Biology Research in Space

    NASA Technical Reports Server (NTRS)

    Smith, Jeffrey

    2003-01-01

    The Bio-Visualization, Imaging and Simulation (BioVIS) Technology Center at NASA's Ames Research Center is dedicated to developing and applying advanced visualization, computation and simulation technologies to support NASA Space Life Sciences research and the objectives of the Fundamental Biology Program. Research ranges from high resolution 3D cell imaging and structure analysis, virtual environment simulation of fine sensory-motor tasks, computational neuroscience and biophysics to biomedical/clinical applications. Computer simulation research focuses on the development of advanced computational tools for astronaut training and education. Virtual Reality (VR) and Virtual Environment (VE) simulation systems have become important training tools in many fields from flight simulation to, more recently, surgical simulation. The type and quality of training provided by these computer-based tools ranges widely, but the value of real-time VE computer simulation as a method of preparing individuals for real-world tasks is well established. Astronauts routinely use VE systems for various training tasks, including Space Shuttle landings, robot arm manipulations and extravehicular activities (space walks). Currently, there are no VE systems to train astronauts for basic and applied research experiments which are an important part of many missions. The Virtual Glovebox (VGX) is a prototype VE system for real-time physically-based simulation of the Life Sciences Glovebox where astronauts will perform many complex tasks supporting research experiments aboard the International Space Station. The VGX consists of a physical display system utilizing dual LCD projectors and circular polarization to produce a desktop-sized 3D virtual workspace. Physically-based modeling tools (Arachi Inc.) provide real-time collision detection, rigid body dynamics, physical properties and force-based controls for objects. The human-computer interface consists of two magnetic tracking devices

  4. Extreme Scale Visual Analytics

    SciTech Connect

    Steed, Chad A; Potok, Thomas E; Pullum, Laura L; Ramanathan, Arvind; Shipman, Galen M; Thornton, Peter E

    2013-01-01

    Given the scale and complexity of today's data, visual analytics is rapidly becoming a necessity rather than an option for comprehensive exploratory analysis. In this paper, we provide an overview of three applications of visual analytics for addressing the challenges of analyzing climate, text streams, and biosurveillance data. These systems feature varying levels of interaction and high performance computing technology integration to permit exploratory analysis of large and complex data of global significance.

  5. The change in critical technologies for computational physics

    NASA Technical Reports Server (NTRS)

    Watson, Val

    1990-01-01

    It is noted that the types of technology required for computational physics are changing as the field matures. Emphasis has shifted from computer technology to algorithm technology and, finally, to visual analysis technology as areas of critical research for this field. High-performance graphical workstations tied to a supercomputer by high-speed communications, along with the development of specially tailored visualization software, have enabled analysis of highly complex fluid-dynamics simulations. Particular reference is made here to the development of visual analysis tools at NASA's Numerical Aerodynamics Simulation Facility. The next technology which this field requires is one that would eliminate visual clutter by extracting key features of simulations of physics and technology in order to create displays that clearly portray these key features. Research in the tuning of visual displays to human cognitive abilities is proposed. The immediate transfer of technology to all levels of computers, specifically the inclusion of visualization primitives in basic software developments for all workstations and PCs, is recommended.

  6. Architecture for Teraflop Visualization

    SciTech Connect

    Breckenridge, A.R.; Haynes, R.A.

    1999-04-09

    Sandia Laboratories' computational scientists are addressing a very important question: How do we get insight from the human combined with the computer-generated information? The answer inevitably leads to using scientific visualization. Going one technology leap further is teraflop visualization, where the computing model and interactive graphics are an integral whole to provide computing for insight. In order to implement our teraflop visualization architecture, all hardware installed or software coded will be based on open modules and dynamic extensibility principles. We will illustrate these concepts with examples in our three main research areas: (1) authoring content (the computer), (2) enhancing precision and resolution (the human), and (3) adding behaviors (the physics).

  7. Visual attitude propagation for small satellites

    NASA Astrophysics Data System (ADS)

    Rawashdeh, Samir A.

    As electronics become smaller and more capable, it has become possible to conduct meaningful and sophisticated satellite missions in a small form factor. However, the capability of small satellites and the range of possible applications are limited by the capabilities of several technologies, including attitude determination and control systems. This dissertation evaluates the use of image-based visual attitude propagation as a complement or alternative to other attitude determination technologies that are suitable for miniature satellites. The concept lies in using miniature cameras to track image features across frames and extracting the underlying rotation. The problem of visual attitude propagation as a small satellite attitude determination system is addressed from several aspects: related work, algorithm design, hardware and performance evaluation, possible applications, and on-orbit experimentation. These areas of consideration reflect the organization of this dissertation. A "stellar gyroscope" is developed, which is a visual star-based attitude propagator that uses relative motion of stars in an imager's field of view to infer the attitude changes. The device generates spacecraft relative attitude estimates in three degrees of freedom. Algorithms to perform the star detection, correspondence, and attitude propagation are presented. The Random Sample Consensus (RANSAC) approach is applied to the correspondence problem to successfully pair stars across frames while mitigating false-positive and false-negative star detections. This approach provides tolerance to the noise levels expected in using miniature optics and no baffling, and the noise caused by radiation dose on orbit. The hardware design and algorithms are validated using test images of the night sky. The application of the stellar gyroscope as part of a CubeSat attitude determination and control system is described. The stellar gyroscope is used to augment a MEMS gyroscope attitude propagation
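
    For illustration, the sketch below shows one way the RANSAC correspondence and relative-attitude step described above could be organized: minimal samples of candidate star pairs fit a rotation via the standard Kabsch/SVD solution, inliers are counted, and the best rotation is refit on all inliers. Function names, tolerances and iteration counts are illustrative assumptions, not the dissertation's implementation.

```python
# Hypothetical sketch (not the dissertation's implementation) of RANSAC star
# correspondence plus relative-attitude estimation between two frames.
import numpy as np

def kabsch_rotation(a, b):
    """Least-squares rotation R such that R @ a[i] is close to b[i]."""
    H = a.T @ b
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

def ransac_attitude(stars_prev, stars_curr, pairs, iters=500, tol=1e-3):
    """stars_*: (N, 3) unit vectors; pairs: candidate (i, j) index pairs."""
    rng = np.random.default_rng(0)
    best_R, best_inliers = np.eye(3), []
    for _ in range(iters):
        picks = rng.choice(len(pairs), size=2, replace=False)
        a = np.array([stars_prev[pairs[k][0]] for k in picks])
        b = np.array([stars_curr[pairs[k][1]] for k in picks])
        R = kabsch_rotation(a, b)
        # Inliers: candidate pairs whose rotated previous direction lands
        # within a small chord distance of the current direction.
        inliers = [(p, q) for p, q in pairs
                   if np.linalg.norm(R @ stars_prev[p] - stars_curr[q]) < tol]
        if len(inliers) > len(best_inliers):
            best_R, best_inliers = R, inliers
    if len(best_inliers) >= 3:            # refit on all surviving pairs
        a = np.array([stars_prev[p] for p, _ in best_inliers])
        b = np.array([stars_curr[q] for _, q in best_inliers])
        best_R = kabsch_rotation(a, b)
    return best_R, best_inliers
```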

  8. Why High Performance Visual Data Analytics is both Relevant and Difficult

    SciTech Connect

    Bethel, E. Wes; Byna, Suren; Ruebel, Oliver; Wu, K. John; Wehner, Michael

    2012-12-01

    Data visualization, data analysis, and data analytics are all integral parts of the scientific process. Collectively, these technologies provide the means to gain insight into data of ever-increasing size and complexity. Over the past two decades, a substantial amount of visualization, analysis, and analytics R&D has focused on the challenges posed by increasing data size and complexity, as well as on the increasing complexity of a rapidly changing computational platform landscape. While some of this research focuses solely on technologies, such as indexing and searching or novel analysis or visualization algorithms, other R&D projects focus on applying technological advances to specific application problems. Some of the most interesting and productive results occur when these two activities, R&D and application, are conducted in a collaborative fashion, where application needs drive R&D, and R&D results are immediately applicable to real world problems.

  9. Laser Optometric Assessment Of Visual Display Viewability

    NASA Astrophysics Data System (ADS)

    Murch, Gerald M.

    1983-08-01

    Through the technique of laser optometry, measurements of a display user's visual accommodation and binocular convergence were used to assess the visual impact of display color, technology, contrast, and work time. The studies reported here indicate the potential of visual-function measurements as an objective means of improving the design of visual displays.

  10. Visual Scripting.

    ERIC Educational Resources Information Center

    Halas, John

    Visual scripting is the coordination of words with pictures in sequence. This book presents the methods and viewpoints on visual scripting of fourteen film makers, from nine countries, who are involved in animated cinema; it contains concise examples of how a storybook and preproduction script can be prepared in visual terms; and it includes a…

  11. A Double-function Digital Watermarking Algorithm Based on Chaotic System and LWT

    NASA Astrophysics Data System (ADS)

    Yuxia, Zhao; Jingbo, Fan

    A double-function digital watermarking technology is studied, and a double-function digital watermarking algorithm for color images is presented based on a chaotic system and the lifting wavelet transform (LWT). The algorithm achieves the dual aims of copyright protection and integrity authentication of image content. Making use of features of the human visual system (HVS), the watermark image is embedded into the color image's low-frequency component and middle-frequency components by different means. The algorithm gains security by using two kinds of chaotic mappings together with the Arnold transform to scramble the watermark image, and it is efficient thanks to the LWT. Simulation experiments indicate that the algorithm is efficient and secure and that the watermark is well concealed.
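
    A greatly simplified sketch of the scramble-then-embed idea described above: the watermark is permuted with an Arnold cat map and added into the low-frequency band of a one-level Haar lifting transform. The published algorithm's chaotic maps, color-channel handling, middle-frequency embedding and extraction stage are not reproduced; names and the embedding strength are illustrative.

```python
# Greatly simplified sketch: Arnold scrambling of a square binary watermark,
# then additive embedding into the LL band of a one-level Haar lifting step.
import numpy as np

def arnold_scramble(img, rounds=5):
    """Arnold cat-map permutation of an N x N watermark image."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(rounds):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = nxt
    return out

def haar_lifting_ll(a):
    """One predict/update lifting step per axis; returns the LL band."""
    a = a.astype(float)
    even, odd = a[:, 0::2], a[:, 1::2]
    s_cols = even + (odd - even) / 2.0        # column-pair averages
    even, odd = s_cols[0::2, :], s_cols[1::2, :]
    return even + (odd - even) / 2.0          # row-pair averages -> LL

def embed(host, watermark, alpha=0.05):
    """Add a scrambled {0,1} watermark into the host's LL coefficients."""
    wm = arnold_scramble(watermark)
    ll = haar_lifting_ll(host)
    h, w = wm.shape
    ll[:h, :w] += alpha * (2.0 * wm - 1.0)    # map {0,1} -> {-1,+1}
    return ll                                  # inverse transform omitted
```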

  12. Acoustic and visual remote sensing of barrels of radioactive waste: Application of civilian and military technology to environmental management of the oceans

    SciTech Connect

    Karl, H.A.; Chin, J.L.; Maher, N.M.; Chavez, P.S. Jr.; Ueber, E.; Van Peeters, W.; Curl, H.

    1995-04-01

    As part of an ongoing strategic research project to find barrels of radioactive waste off San Francisco, the U.S. Navy (USN), the U.S. Geological Survey (USGS), and the Gulf of the Farallones National Marine Sanctuary (GFNMS) pooled their expertise, resources, and technology to form a partnership to verify new computer enhancement techniques developed for detecting targets the size of 55 gallon barrels on sidescan sonar images. Between 1946 and 1970, approximately 47,800 large barrels and other containers of radioactive waste were dumped in the ocean west of San Francisco; the containers litter an area of the sea floor of at least 1400 km² known as the Farallon Island Radioactive Waste Dump. The exact location of the containers and the potential hazard the containers pose to the environment are unknown. The USGS developed computer techniques and contracted with private industry to enhance sidescan data, collected in cooperation with the GFNMS, to detect objects as small as 55 gallon steel barrels while conducting regional scale sidescan sonar surveys. Using a subset of the regional sonar survey, images were plotted over a 125 km² area. The acoustic interpretations were verified visually using the USN DSV Sea Cliff and the unmanned Advanced Tethered Vehicle (ATV). Barrels and other physical features were found where image enhancement had indicated they would be found. The interagency cooperation among the USN, USGS, and GFNMS has led to the development of a cost-effective and time-efficient method to locate the barrels of radioactive waste. This method has universal application for locating containers of hazardous waste over a regional scale in other ocean areas such as Boston Harbor and the Kara Sea in the Arctic. This successful application of military and civilian expertise and technology has provided scientific information to help formulate policy decisions that affect the environmental management and quality of the ocean.

  13. Visualization of localization microscopy data.

    PubMed

    Baddeley, David; Cannell, Mark B; Soeller, Christian

    2010-02-01

    Localization microscopy techniques based on localizing single fluorophore molecules now routinely achieve accuracies better than 30 nm. Unlike conventional optical microscopies, localization microscopy experiments do not generate an image but a list of discrete coordinates of estimated fluorophore positions. Data display and analysis therefore generally require visualization methods that translate the position data into conventional images. Here we investigate the properties of several widely used visualization techniques and show that a commonly used algorithm based on rendering Gaussians may lead to a 1.44-fold loss of resolution. Existing methods typically do not explicitly take sampling considerations into account and thus may produce spurious structures. We present two additional visualization algorithms, an adaptive histogram method based on quad-trees and a Delaunay triangulation based visualization of point data that address some of these deficiencies. The new visualization methods are designed to suppress erroneous detail in poorly sampled image areas but avoid loss of resolution in well-sampled regions. A number of criteria for scoring visualization methods are developed as a guide for choosing among visualization methods and are used to qualitatively compare various algorithms. PMID:20082730
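
    To make the comparison concrete, the sketch below renders a set of 2D localizations both as a plain histogram image and as a Gaussian rendering (the method the paper associates with a resolution loss); the paper's adaptive quad-tree and Delaunay visualizations are not reproduced, and the pixel size and blur width are illustrative.

```python
# Minimal sketch contrasting a plain 2D histogram rendering with a Gaussian
# rendering of localization coordinates.  Coordinates are assumed to start
# near zero; all parameters are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

def histogram_image(x_nm, y_nm, pixel_nm=10.0):
    """Bin localization coordinates (in nm) into a pixel grid."""
    nx = int(np.ceil(x_nm.max() / pixel_nm)) + 1
    ny = int(np.ceil(y_nm.max() / pixel_nm)) + 1
    img, _, _ = np.histogram2d(y_nm, x_nm, bins=(ny, nx),
                               range=((0, ny * pixel_nm), (0, nx * pixel_nm)))
    return img

def gaussian_rendering(x_nm, y_nm, pixel_nm=10.0, sigma_nm=20.0):
    """Render each localization as a Gaussian of width sigma_nm."""
    return gaussian_filter(histogram_image(x_nm, y_nm, pixel_nm),
                           sigma=sigma_nm / pixel_nm)

# Example: 10,000 localizations over a 2 x 2 micron field of view.
rng = np.random.default_rng(1)
x, y = rng.uniform(0, 2000, 10000), rng.uniform(0, 2000, 10000)
hist_img = histogram_image(x, y)
gauss_img = gaussian_rendering(x, y)
```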

  14. Visual Imagery without Visual Perception?

    ERIC Educational Resources Information Center

    Bertolo, Helder

    2005-01-01

    The question regarding visual imagery and visual perception remains an open issue. Many studies have tried to understand if the two processes share the same mechanisms or if they are independent, using different neural substrates. Most research has been directed towards the need for activation of primary visual areas during imagery. Here we review…

  15. Distributed visualization

    SciTech Connect

    Arnold, T.R.

    1991-12-31

    Within the last half decade or so, two technological evolutions have culminated in mature products of potentially great utility to computer simulation. One is the emergence of low-cost workstations with versatile graphics and substantial local CPU power. The other is the adoption of UNIX as a de facto "standard" operating system on at least some machines offered by virtually all vendors. It is now possible to perform transient simulations in which the number-crunching capability of a supercomputer is harnessed to allow both process control and graphical visualization on a workstation. Such a distributed computer system is described as it now exists: a large FORTRAN application on a CRAY communicates with the balance of the simulation on a SUN-3 or SUN-4 via remote procedure call (RPC) protocol. The hooks to the application and the graphics have been made very flexible. Piping of output from the CRAY to the SUN is nonselective, allowing the user to summon data and draw or plot at will. The ensemble of control, application, data handling, and graphics modules is loosely coupled, which further generalizes the utility of the software design.

  16. Distributed visualization

    SciTech Connect

    Arnold, T.R.

    1991-01-01

    Within the last half decade or so, two technological evolutions have culminated in mature products of potentially great utility to computer simulation. One is the emergence of low-cost workstations with versatile graphics and substantial local CPU power. The other is the adoption of UNIX as a de facto "standard" operating system on at least some machines offered by virtually all vendors. It is now possible to perform transient simulations in which the number-crunching capability of a supercomputer is harnessed to allow both process control and graphical visualization on a workstation. Such a distributed computer system is described as it now exists: a large FORTRAN application on a CRAY communicates with the balance of the simulation on a SUN-3 or SUN-4 via remote procedure call (RPC) protocol. The hooks to the application and the graphics have been made very flexible. Piping of output from the CRAY to the SUN is nonselective, allowing the user to summon data and draw or plot at will. The ensemble of control, application, data handling, and graphics modules is loosely coupled, which further generalizes the utility of the software design.

  17. Brain functional network connectivity based on a visual task: visual information processing-related brain regions are significantly activated in the task state

    PubMed Central

    Yang, Yan-li; Deng, Hong-xia; Xing, Gui-yang; Xia, Xiao-luan; Li, Hai-fang

    2015-01-01

    It is not clear whether the method used in functional brain-network related research can be applied to explore the feature binding mechanism of visual perception. In this study, we investigated feature binding of color and shape in visual perception. Functional magnetic resonance imaging data were collected from 38 healthy volunteers at rest and while performing a visual perception task to construct brain networks active during resting and task states. Results showed that brain regions involved in visual information processing were obviously activated during the task. The components were partitioned using a greedy algorithm, indicating the visual network existed during the resting state. Z-values in the vision-related brain regions were calculated, confirming the dynamic balance of the brain network. Connectivity between brain regions was determined, and the result showed that occipital and lingual gyri were stable brain regions in the visual system network, the parietal lobe played a very important role in the binding process of color features and shape features, and the fusiform and inferior temporal gyri were crucial for processing color and shape information. Experimental findings indicate that understanding visual feature binding and cognitive processes will help establish computational models of vision, improve image recognition technology, and provide a new theoretical mechanism for feature binding in visual perception. PMID:25883631

  18. Visualization Design Environment

    SciTech Connect

    Pomplun, A.R.; Templet, G.J.; Jortner, J.N.; Friesen, J.A.; Schwegel, J.; Hughes, K.R.

    1999-02-01

    Improvements in the performance and capabilities of computer software and hardware systems, combined with advances in Internet technologies, have spurred innovative developments in the area of modeling, simulation and visualization. These developments combine to make it possible to create an environment where engineers can design, prototype, analyze, and visualize components in virtual space, saving the time and expenses incurred during numerous design and prototyping iterations. The Visualization Design Centers located at Sandia National Laboratories are facilities built specifically to promote the "design by team" concept. This report focuses on designing, developing and deploying this environment by detailing the design of the facility, software infrastructure and hardware systems that comprise this new visualization design environment and describes case studies that document successful application of this environment.

  19. Assisting the Visually Impaired: Obstacle Detection and Warning System by Acoustic Feedback

    PubMed Central

    Rodríguez, Alberto; Yebes, J. Javier; Alcantarilla, Pablo F.; Bergasa, Luis M.; Almazán, Javier; Cela, Andrés

    2012-01-01

    This article focuses on the design of an obstacle detection system for assisting visually impaired people. A dense disparity map is computed from the images of a stereo camera carried by the user. By using the dense disparity map, potential obstacles can be detected in 3D in indoor and outdoor scenarios. A ground plane estimation algorithm based on RANSAC plus filtering techniques allows the robust detection of the ground in every frame. A polar grid representation is proposed to account for the potential obstacles in the scene. The design is completed with acoustic feedback to assist visually impaired users while approaching obstacles. Beep sounds with different frequencies and repetitions inform the user about the presence of obstacles. Audio bone conducting technology is employed to play these sounds without interrupting the visually impaired user from hearing other important sounds from the local environment. A user study with four visually impaired volunteers supports the proposed system. PMID:23247413
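
    As an illustration of the ground-detection step, the sketch below fits a plane to 3D points with RANSAC; the stereo reconstruction, polar-grid obstacle map and acoustic feedback are omitted, and the distance threshold and iteration count are illustrative assumptions.

```python
# Hedged sketch of RANSAC ground-plane fitting on 3D points (e.g. points
# reconstructed from a dense disparity map).
import numpy as np

def fit_plane(p0, p1, p2):
    """Plane (n, d) with n.x + d = 0 through three points, or None."""
    n = np.cross(p1 - p0, p2 - p0)
    norm = np.linalg.norm(n)
    if norm < 1e-9:
        return None
    n = n / norm
    return n, -np.dot(n, p0)

def ransac_ground_plane(points, iters=300, dist_thresh=0.05):
    """points: (N, 3) array in metres; returns ((n, d), inlier mask)."""
    rng = np.random.default_rng(0)
    best_model, best_mask = None, np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(points), size=3, replace=False)
        model = fit_plane(*points[idx])
        if model is None:
            continue
        n, d = model
        mask = np.abs(points @ n + d) < dist_thresh
        if mask.sum() > best_mask.sum():
            best_model, best_mask = (n, d), mask
    return best_model, best_mask
# Points far from the fitted plane are then treated as obstacle candidates.
```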

  20. Technology.

    ERIC Educational Resources Information Center

    Online-Offline, 1998

    1998-01-01

    Focuses on technology, on advances in such areas as aeronautics, electronics, physics, the space sciences, as well as computers and the attendant progress in medicine, robotics, and artificial intelligence. Describes educational resources for elementary and middle school students, including Web sites, CD-ROMs and software, videotapes, books,…

  1. Frameless Volume Visualization.

    PubMed

    Petkov, Kaloian; Kaufman, Arie E

    2016-02-01

    We have developed a novel visualization system based on the reconstruction of high resolution and high frame rate images from a multi-tiered stream of samples that are rendered framelessly. This decoupling of the rendering system from the display system is particularly suitable when dealing with very high resolution displays or expensive rendering algorithms, where the latency of generating complete frames may be prohibitively high for interactive applications. In contrast to the traditional frameless rendering technique, we generate the lowest latency samples on the optimal sampling lattice in the 3D domain. This approach avoids many of the artifacts associated with existing sample caching and reprojection methods during interaction that may not be acceptable in many visualization applications. Advanced visualization effects are generated remotely and streamed into the reconstruction system using tiered samples with varying latencies and quality levels. We demonstrate the use of our visualization system for the exploration of volumetric data at stable guaranteed frame rates on high resolution displays, including a 470 megapixel tiled display as part of the Reality Deck immersive visualization facility. PMID:26731452

  2. Learning from Balance Sheet Visualization

    ERIC Educational Resources Information Center

    Tanlamai, Uthai; Soongswang, Oranuj

    2011-01-01

    This exploratory study examines alternative visuals and their effect on the level of learning of balance sheet users. Executive and regular classes of graduate students majoring in information technology in business were asked to evaluate the extent of acceptance and enhanced capability of these alternative visuals toward their learning…

  3. Visual Resources on the Internet.

    ERIC Educational Resources Information Center

    Jaber, William E.; Hou, Feng

    With the development of Internet technology and the proliferation of network applications, visual materials have been digitized and archived on many publicly accessible computer servers. However, these visual resources can be beneficial to educators only when they know what they are, what they look like, in what format they are created, and how…

  4. New Algorithm for Managing Childhood Illness Using Mobile Technology (ALMANACH): A Controlled Non-Inferiority Study on Clinical Outcome and Antibiotic Use in Tanzania

    PubMed Central

    Shao, Amani Flexson; Rambaud-Althaus, Clotilde; Samaka, Josephine; Faustine, Allen Festo; Perri-Moore, Seneca; Swai, Ndeniria; Mitchell, Marc; Genton, Blaise; D’Acremont, Valérie

    2015-01-01

    Introduction The decline of malaria and the scale-up of rapid diagnostic tests call for a revision of IMCI. A new algorithm (ALMANACH) running on mobile technology was developed based on the latest evidence. The objective was to ensure that ALMANACH was safe, while keeping a low rate of antibiotic prescription. Methods Consecutive children aged 2–59 months with acute illness were managed using ALMANACH (2 intervention facilities) or standard practice (2 control facilities) in Tanzania. Primary outcomes were the proportion of children cured at day 7 and the proportion who received antibiotics on day 0. Results 130/842 (15.4%) in the ALMANACH arm and 241/623 (38.7%) in the control arm were diagnosed with an infection in need of an antibiotic, while 3.8% and 9.6% had malaria. 815/838 (97.3%; 96.1–98.4%) were cured at D7 using ALMANACH versus 573/623 (92.0%; 89.8–94.1%) using standard practice (p<0.001). Of 23 children not cured at D7 using ALMANACH, 44% had skin problems, 30% pneumonia, 26% upper respiratory infection and 13% likely viral infection at D0. Secondary hospitalization occurred for one child using ALMANACH and for one, who eventually died, using standard practice. At D0, antibiotics were prescribed to 15.4% (12.9–17.9%) using ALMANACH versus 84.3% (81.4–87.1%) using standard practice (p<0.001). 2.3% (1.3–3.3%) versus 3.2% (1.8–4.6%) received an antibiotic secondarily. Conclusion Management of children using ALMANACH improved clinical outcome and reduced antibiotic prescription by 80%. This was achieved through more accurate diagnoses and hence better identification of children in need of antibiotic treatment. Building the algorithm on mobile technology allows easy access and rapid updating of the decision chart. Trial Registration Pan African Clinical Trials Registry PACTR201011000262218 PMID:26161535

  5. Mathematical Visualization

    ERIC Educational Resources Information Center

    Rogness, Jonathan

    2011-01-01

    Advances in computer graphics have provided mathematicians with the ability to create stunning visualizations, both to gain insight and to help demonstrate the beauty of mathematics to others. As educators these tools can be particularly important as we search for ways to work with students raised with constant visual stimulation, from video games…

  6. Visual Literacy

    ERIC Educational Resources Information Center

    Felten, Peter

    2008-01-01

    Living in an image-rich world does not mean students (or faculty and administrators) naturally possess sophisticated visual literacy skills, just as continually listening to an iPod does not teach a person to critically analyze or create music. Instead, "visual literacy involves the ability to understand, produce, and use culturally significant…

  7. Visual Literacy.

    ERIC Educational Resources Information Center

    Lamberski, Richard J.

    A series of articles examines visual literacy from the perspectives of definition, research, curriculum, and resources. Articles examining the definition of visual literacy approach it in terms of semantics, techniques, and exploratory definition areas. There are surveys of present and potential research, and a discussion of the problem of…

  8. Visual Closure.

    ERIC Educational Resources Information Center

    Groffman, Sidney

    An experimental test of visual closure based on an information-theory concept of perception was devised to test the ability to discriminate visual stimuli with reduced cues. The test is to be administered in a timed individual situation in which the subject is presented with sets of incomplete drawings of simple objects that he is required to name…

  9. Visual Thinking.

    ERIC Educational Resources Information Center

    Arnheim, Rudolf

    Based on the more general principle that all thinking (including reasoning) is basically perceptual in nature, the author proposes that visual perception is not a passive recording of stimulus material but an active concern of the mind. He delineates the task of visually distinguishing changes in size, shape, and position and points out the…

  10. Virtual Environments in Scientific Visualization

    NASA Technical Reports Server (NTRS)

    Bryson, Steve; Lisinski, T. A. (Technical Monitor)

    1994-01-01

    Virtual environment technology is a new way of approaching the interface between computers and humans. Emphasizing display and user control that conforms to the user's natural ways of perceiving and thinking about space, virtual environment technologies enhance the ability to perceive and interact with computer generated graphic information. This enhancement potentially has a major effect on the field of scientific visualization. Current examples of this technology include the Virtual Windtunnel being developed at NASA Ames Research Center. Other major institutions such as the National Center for Supercomputing Applications and SRI International are also exploring this technology. This talk will describe several implementations of virtual environments for use in scientific visualization. Examples include the visualization of unsteady fluid flows (the virtual windtunnel), the visualization of geodesics in curved spacetime, surface manipulation, and examples developed at various laboratories.

  11. Visualizing Energy Resources Dynamically on Earth (VERDE)

    2009-06-01

    VERDE is a software service that ingests data on real-time energy grid status and analyzes it with models and algorithms, presenting the output in a form that can be visualized by client spatio-temporal browsers.

  12. Expanding the Frontiers of Visual Analytics and Visualization

    SciTech Connect

    Dill, John; Earnshaw, Rae; Kasik, David; Vince, John; Wong, Pak C.

    2012-05-31

    Expanding the Frontiers of Visual Analytics and Visualization contains international contributions by leading researchers from within the field. Dedicated to the memory of Jim Thomas, the book begins with the dynamics of evolving a vision based on some of the principles that Jim and colleagues established and in which Jim’s leadership was evident. This is followed by chapters in the areas of visual analytics, visualization, interaction, modelling, architecture, and virtual reality, before concluding with the key area of technology transfer to industry.

  13. Building Adoption of Visual Analytics Software

    SciTech Connect

    Chinchor, Nancy; Cook, Kristin A.; Scholtz, Jean

    2012-01-05

    Adoption of technology is always difficult. Issues such as having the infrastructure necessary to support the technology, training for users, integrating the technology into current processes and tools, and having the time, managerial support, and necessary funds need to be addressed. In addition to these issues, the adoption of visual analytics tools presents specific challenges that need to be addressed. This paper discusses technology adoption challenges and approaches for visual analytics technologies.

  14. Visual cognition

    PubMed Central

    Cavanagh, Patrick

    2011-01-01

    Visual cognition, high-level vision, mid-level vision and top-down processing all refer to decision-based scene analyses that combine prior knowledge with retinal input to generate representations. The label “visual cognition” is little used at present, but research and experiments on mid- and high-level, inference-based vision have flourished, becoming in the 21st century a significant, if often understated, part of current vision research. How does visual cognition work? What are its moving parts? This paper reviews the origins and architecture of visual cognition and briefly describes some work in the areas of routines, attention, surfaces, objects, and events (motion, causality, and agency). Most vision scientists avoid being too explicit when presenting concepts about visual cognition, having learned that explicit models invite easy criticism. What we see in the literature is ample evidence for visual cognition, but few or only cautious attempts to detail how it might work. This is the great unfinished business of vision research: at some point we will be done with characterizing how the visual system measures the world and we will have to return to the question of how vision constructs models of objects, surfaces, scenes, and events. PMID:21329719

  15. Visualization of multidimensional database

    NASA Astrophysics Data System (ADS)

    Lee, Chung

    2008-01-01

    The concept of multidimensional databases has been extensively researched and widely used in actual database applications. It plays an important role in contemporary information technology, but due to the complexity of its inner structure, the database design is a complicated process and users have a hard time fully understanding and using the database. An effective visualization tool for higher-dimensional information systems helps database designers and users alike. Most visualization techniques focus on displaying dimensional data using spreadsheets and charts. This may be sufficient for databases having three or fewer dimensions, but for higher dimensions, various combinations of projection operations are needed and a full grasp of total database architecture is very difficult. This study reviews existing visualization techniques for multidimensional databases and then proposes an alternate approach to visualize a database of any dimension by adopting the tool proposed by Kiviat for software engineering processes. In this diagramming method, each dimension is represented by one branch of concentric spikes. This paper documents a C++ based visualization tool with extensive use of the OpenGL graphics library and GUI functions. Detailed examples of actual databases demonstrate the feasibility and effectiveness in visualizing multidimensional databases.
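
    A minimal sketch of the Kiviat-style layout the study adopts, with one spoke per database dimension and one closed polygon per record; the paper's own tool is C++/OpenGL, so this matplotlib stand-in only illustrates the diagramming idea, and all names are hypothetical.

```python
# Illustrative Kiviat-style plot: one spoke per dimension, one polygon per record.
import numpy as np
import matplotlib.pyplot as plt

def kiviat(records, dim_names):
    """records: (n_records, n_dims) values already normalised to [0, 1]."""
    angles = np.linspace(0, 2 * np.pi, len(dim_names), endpoint=False)
    fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
    for row in records:
        # Repeat the first value/angle so each polygon closes.
        ax.plot(np.append(angles, angles[0]), np.append(row, row[0]))
    ax.set_xticks(angles)
    ax.set_xticklabels(dim_names)
    ax.set_yticks([])
    return fig

# Example: three records of a five-dimensional fact table.
data = np.random.default_rng(2).random((3, 5))
kiviat(data, ["time", "region", "product", "channel", "measure"])
plt.show()
```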

  16. Challenges for Visual Analytics

    SciTech Connect

    Thomas, James J.; Kielman, Joseph

    2009-09-23

    Visual analytics has seen unprecedented growth in its first five years of mainstream existence. Great progress has been made in a short time, yet great challenges must be met in the next decade to provide new technologies that will be widely accepted by societies throughout the world. This paper sets the stage for some of those challenges in an effort to provide the stimulus for the research, both basic and applied, to address and exceed the envisioned potential for visual analytics technologies. We start with a brief summary of the initial challenges, followed by a discussion of the initial driving domains and applications, as well as additional applications and domains that have been a part of recent rapid expansion of visual analytics usage. We look at the common characteristics of several tools illustrating emerging visual analytics technologies, and conclude with the top ten challenges for the field of study. We encourage feedback and collaborative participation by members of the research community, the wide array of user communities, and private industry.

  17. Visual impairment.

    PubMed

    Ellenberger, Carl

    2016-01-01

    This chapter can guide the use of imaging in the evaluation of common visual syndromes: transient visual disturbance, including migraine and amaurosis fugax; acute optic neuropathy complicating multiple sclerosis, neuromyelitis optica spectrum disorder, Leber hereditary optic neuropathy, and Susac syndrome; papilledema and pseudotumor cerebri syndrome; cerebral disturbances of vision, including posterior cerebral arterial occlusion, posterior reversible encephalopathy, hemianopia after anterior temporal lobe resection, posterior cortical atrophy, and conversion blindness. Finally, practical efforts in visual rehabilitation by sensory substitution for blind patients can improve their lives and disclose new information about the brain. PMID:27430448

  18. Personal Visualization and Personal Visual Analytics.

    PubMed

    Huang, Dandan; Tory, Melanie; Aseniero, Bon Adriel; Bartram, Lyn; Bateman, Scott; Carpendale, Sheelagh; Tang, Anthony; Woodbury, Robert

    2015-03-01

    Data surrounds each and every one of us in our daily lives, ranging from exercise logs, to archives of our interactions with others on social media, to online resources pertaining to our hobbies. There is enormous potential for us to use these data to understand ourselves better and make positive changes in our lives. Visualization (Vis) and visual analytics (VA) offer substantial opportunities to help individuals gain insights about themselves, their communities and their interests; however, designing tools to support data analysis in non-professional life brings a unique set of research and design challenges. We investigate the requirements and research directions required to take full advantage of Vis and VA in a personal context. We develop a taxonomy of design dimensions to provide a coherent vocabulary for discussing personal visualization and personal visual analytics. By identifying and exploring clusters in the design space, we discuss challenges and share perspectives on future research. This work brings together research that was previously scattered across disciplines. Our goal is to call research attention to this space and engage researchers to explore the enabling techniques and technology that will support people to better understand data relevant to their personal lives, interests, and needs. PMID:26357073

  19. 3D Visualization of Volcanic Ash Dispersion Prediction with Spatial Information Open Platform in Korea

    NASA Astrophysics Data System (ADS)

    Youn, J.; Kim, T.

    2016-06-01

    Visualization of disaster dispersion predictions enables decision makers and civilians to prepare for a disaster and to reduce the damage by showing realistic simulation results. With advances in GIS technology and in volcanic disaster prediction algorithms, predicted disaster dispersions can be displayed as spatial information. However, most volcanic ash dispersion predictions are displayed in 2D, which limits understanding of the dispersion since height can be represented only by colour. For volcanic ash in particular, 3D visualization of the dispersion prediction is essential since ash clouds can cause serious aircraft accidents. In this paper, we deal with 3D visualization techniques for volcanic ash dispersion prediction using a spatial information open platform in Korea. First, time-series 3D positions and concentrations of volcanic ash are calculated with the WRF (Weather Research and Forecasting) model and a modified Fall3D algorithm. For 3D visualization, we propose three techniques: 'Cube in the Air', 'Cube in the Cube', and 'Semi-transparent Plane in the Air'. The 'Cube in the Air' method places semi-transparent cubes whose colours depend on the particle concentration. A large cube looks unrealistic when zoomed in, so cubes are subdivided into smaller cubes with an octree algorithm; this is the 'Cube in the Cube' method. For more realistic visualization, we apply the 'Semi-transparent Volcanic Ash Plane', which shows the ash as fog. The results are displayed in 'V-World', a spatial information open platform implemented by the Korean government. The proposed techniques were adopted in the Volcanic Disaster Response System implemented by the Korean Ministry of Public Safety and Security.
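
    A hypothetical sketch of the 'Cube in the Cube' subdivision: an octree-style recursion over an ash-concentration grid that emits leaf cubes (later drawn as semi-transparent boxes coloured by mean concentration) only where ash is present, subdividing further where concentration varies. Thresholds and grid sizes are illustrative; the WRF/Fall3D modelling and V-World rendering are not reproduced.

```python
# Hypothetical octree-style subdivision of a cubic ash-concentration grid.
import numpy as np

def octree_cubes(grid, origin=(0, 0, 0), var_thresh=1e-4, min_size=2):
    """Yield (origin, size, mean_concentration) leaves for a cubic grid."""
    size = grid.shape[0]
    if np.ptp(grid) <= var_thresh or size <= min_size:
        if grid.mean() > 0:                      # skip empty air
            yield origin, size, float(grid.mean())
        return
    half = size // 2
    for dx in (0, half):
        for dy in (0, half):
            for dz in (0, half):
                sub = grid[dx:dx + half, dy:dy + half, dz:dz + half]
                yield from octree_cubes(
                    sub,
                    (origin[0] + dx, origin[1] + dy, origin[2] + dz),
                    var_thresh, min_size)

# Example: a 32^3 concentration field containing one dense plume region.
field = np.zeros((32, 32, 32))
field[8:20, 10:22, 4:16] = 0.7
leaves = list(octree_cubes(field))
```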

  20. Quantum algorithms

    NASA Astrophysics Data System (ADS)

    Abrams, Daniel S.

    This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases (commonly found in ab initio physics and chemistry problems) for which all known classical algorithms require exponential time. Fast algorithms for simulating many body Fermi systems are also provided in both first and second quantized descriptions. An efficient quantum algorithm for anti-symmetrization is given as well as a detailed discussion of a simulation of the Hubbard model. In addition, quantum algorithms that calculate numerical integrals and various characteristics of stochastic processes are described. Two techniques are given, both of which obtain an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo (probabilistic) methods. I derive a simpler and slightly faster version of Grover's mean algorithm, show how to apply quantum counting to the problem, develop some variations of these algorithms, and show how both (apparently distinct) approaches can be understood from the same unified framework. Finally, the relationship between physics and computation is explored in some more depth, and it is shown that computational complexity theory depends very sensitively on physical laws. In particular, it is shown that nonlinear quantum mechanics allows for the polynomial time solution of NP-complete and #P oracle problems. Using the Weinberg model as a simple example, the explicit construction of the necessary gates is derived from the underlying physics. Nonlinear quantum algorithms are also presented using Polchinski type nonlinearities which do not allow for superluminal communication. (Copies available exclusively from MIT Libraries, Rm. 14- 0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)

  1. Treemap Visualizations for Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Ianni, J.; Gorrell, Z.

    Making sense of massive data sets is a problem for many military domains including space. With unwieldy big data sets used for space situational awareness (SSA), important trends and outliers may not be easy to spot, especially at a glance. One method being explored to visualize SSA data sets is called treemapping. Treemaps fill screen space with nested rectangles (tiles) of various sizes and colors to represent multiple dimensions of hierarchical data sets. By mapping these dimensions effectively with a tiling algorithm that maintains an appropriate aspect ratio, patterns can emerge that often would have gone unnoticed. The ability to interactively perform range filtering (in our case with sliders) and object drill-downs (hyperlinking the tiles) makes this technology powerful for in-depth analyses in addition to at-a-glance awareness. For one SSA analysis, the tiles could represent satellites that are grouped by country, sized by apogee, and colored/shaded by the launch date. Filter sliders could allow apogee range or launch dates to be narrowed for better resolution of a smaller data set. The application of this technology for the Joint Space Operations Center (JSpOC) Mission System (JMS) is being explored on a DARPA Small Business Innovative Research (SBIR) effort as a plug-in to the existing User-Defined Operational Picture (UDOP). In addition, visualization of DARPA OrbitOutlook small telescope data will be demonstrated. This research will investigate what SSA analyses are best served by treemaps, the best tiling algorithms for these problems, and how the treemaps should be integrated into the existing JMS UDOP workflow. Finally, we introduce a variation of treemaps that helps leaders allocate their time to tasks based on importance and urgency.
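
    To make the tiling idea concrete, the sketch below lays out a hypothetical two-level slice-and-dice treemap (countries, then satellites sized by an attribute such as apogee). Production treemaps usually use squarified tiling for better aspect ratios; all names and data here are illustrative, not the SBIR plug-in.

```python
# Hypothetical two-level slice-and-dice treemap layout.
def slice_and_dice(items, x, y, w, h, horizontal=True):
    """items: list of (label, size); returns [(label, (x, y, w, h)), ...]."""
    total = sum(size for _, size in items)
    rects, offset = [], 0.0
    for label, size in items:
        frac = size / total
        if horizontal:
            rects.append((label, (x + offset, y, w * frac, h)))
            offset += w * frac
        else:
            rects.append((label, (x, y + offset, w, h * frac)))
            offset += h * frac
    return rects

# Slice the canvas by country, then dice each country's satellites.
satellites = {"A": [("sat1", 3.0), ("sat2", 1.0)], "B": [("sat3", 2.0)]}
countries = [(c, sum(s for _, s in sats)) for c, sats in satellites.items()]
layout = []
for country, (cx, cy, cw, ch) in slice_and_dice(countries, 0, 0, 100, 100):
    layout += slice_and_dice(satellites[country], cx, cy, cw, ch,
                             horizontal=False)
print(layout)
```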

  2. Visualization research is growing and expanding.

    PubMed

    Ma, Kwan-Liu; Fujishiro, Issei; Li, Hua

    2008-01-01

    Visualization has become an increasingly active area of research because of its usefulness in a wide range of applications. This special issue features invited articles from PacificVis 2008 and highlights the state of the art in visualization algorithms, systems, and applications. PMID:18753031

  3. Visual cognition

    SciTech Connect

    Pinker, S.

    1985-01-01

    This book consists of essays covering issues in visual cognition, presenting experimental techniques from cognitive psychology, methods of modeling cognitive processes on computers from artificial intelligence, and methods of studying brain organization from neuropsychology. Topics considered include: parts of recognition; visual routines; upward direction, mental rotation, and discrimination of left and right turns in maps; individual differences in mental imagery and its computational analysis; and the neurological basis of mental imagery as revealed by componential analysis.

  4. Visualization of Traffic Accidents

    NASA Technical Reports Server (NTRS)

    Wang, Jie; Shen, Yuzhong; Khattak, Asad

    2010-01-01

    Traffic accidents have tremendous impact on society. Annually approximately 6.4 million vehicle accidents are reported by police in the US and nearly half of them result in catastrophic injuries. Visualizations of traffic accidents using geographic information systems (GIS) greatly facilitate handling and analysis of traffic accidents in many aspects. Environmental Systems Research Institute (ESRI), Inc. is the world leader in GIS research and development. ArcGIS, a software package developed by ESRI, has the capabilities to display events associated with a road network, such as accident locations, and pavement quality. But when event locations related to a road network are processed, the existing algorithm used by ArcGIS does not utilize all the information related to the routes of the road network and produces erroneous visualization results of event locations. This software bug causes serious problems for applications in which accurate location information is critical for emergency responses, such as traffic accidents. This paper aims to address this problem and proposes an improved method that utilizes all relevant information of traffic accidents, namely, route number, direction, and mile post, and extracts correct event locations for accurate traffic accident visualization and analysis. The proposed method generates a new shape file for traffic accidents and displays them on top of the existing road network in ArcGIS. Visualization of traffic accidents along Hampton Roads Bridge Tunnel is included to demonstrate the effectiveness of the proposed method.
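
    A hedged sketch of the linear-referencing step the improved method depends on: placing an event on a route polyline from its milepost by walking the route segments and interpolating. The route geometry, units and milepost are illustrative; the authors' actual ArcGIS shapefile workflow and direction handling are not reproduced.

```python
# Hypothetical linear referencing: milepost along a polyline -> (x, y).
import math

def locate_on_route(vertices, milepost_miles):
    """vertices: [(x, y), ...] route polyline in miles; returns (x, y)."""
    remaining = milepost_miles
    for (x0, y0), (x1, y1) in zip(vertices, vertices[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        if seg > 0 and remaining <= seg:
            t = remaining / seg
            return x0 + t * (x1 - x0), y0 + t * (y1 - y0)
        remaining -= seg
    return vertices[-1]                   # milepost beyond the route end

# Example: an accident at milepost 2.5 on a simple three-vertex route.
route = [(0.0, 0.0), (2.0, 0.0), (2.0, 3.0)]
print(locate_on_route(route, 2.5))        # -> (2.0, 0.5)
```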

  5. Space shuttle visual simulation system design study

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The current and near-future state-of-the-art in visual simulation equipment technology is related to the requirements of the space shuttle visual system. Image source, image sensing, and displays are analyzed on a subsystem basis, and the principal conclusions are used in the formulation of a recommended baseline visual system. Perceptibility and visibility are also analyzed.

  6. A bioinspired collision detection algorithm for VLSI implementation

    NASA Astrophysics Data System (ADS)

    Cuadri, J.; Linan, G.; Stafford, R.; Keil, M. S.; Roca, E.

    2005-06-01

    In this paper a bioinspired algorithm for collision detection is proposed, based on previous models of the locust (Locusta migratoria) visual system reported by F.C. Rind and her group at the University of Newcastle upon Tyne. The algorithm is suitable for VLSI implementation in standard CMOS technologies as a system-on-chip for automotive applications. The working principle of the algorithm is to process a video stream that represents the current scenario, and to fire an alarm whenever an object approaches on a collision course. Moreover, it establishes a scale of warning states, from no danger to collision alarm, depending on the activity detected in the current scenario. In the worst case, the minimum time before collision at which the model fires the collision alarm is 40 msec (1 frame before, at 25 frames per second). Since the average time to successfully fire an airbag system is 2 msec, even in the worst case, this algorithm would be very helpful to more efficiently arm the airbag system, or even to take collision avoidance countermeasures. Furthermore, two additional modules have been included: a "Topological Feature Estimator" and an "Attention Focusing Algorithm". The former takes into account the shape of the approaching object to decide whether it is a person, a road line or a car. This helps to take more adequate countermeasures and to filter false alarms. The latter centres the processing power on the most active zones of the input frame, thus saving memory and processing time resources.
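
    The sketch below is a greatly simplified, hypothetical frame-difference looming detector in the spirit of LGMD-style models: luminance change excites, a blurred copy of the previous excitation inhibits, and the pooled residual drives a graded warning level. It is not the authors' VLSI algorithm; all parameters and names are illustrative assumptions.

```python
# Greatly simplified, hypothetical looming/warning sketch (not the paper's model).
import numpy as np
from scipy.ndimage import uniform_filter

def looming_warning(frames, inhibit_size=5, gain=4.0):
    """frames: iterable of 2D grayscale arrays in [0, 1]; yields warnings in [0, 1]."""
    prev_frame, prev_excitation = None, None
    for frame in frames:
        if prev_frame is not None:
            excitation = np.abs(frame - prev_frame)
            if prev_excitation is not None:
                # Inhibition from a blurred copy of the previous excitation.
                excitation -= uniform_filter(prev_excitation, size=inhibit_size)
                excitation = np.clip(excitation, 0.0, None)
            pooled = excitation.mean()
            yield float(1.0 - np.exp(-gain * pooled))   # graded warning state
            prev_excitation = excitation
        prev_frame = frame
```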

  7. Haplotyping algorithms

    SciTech Connect

    Sobel, E.; Lange, K.; O`Connell, J.R.

    1996-12-31

    Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.

  8. Progressive Visual Analytics: User-Driven Visual Exploration of In-Progress Analytics.

    PubMed

    Stolper, Charles D; Perer, Adam; Gotz, David

    2014-12-01

    As datasets grow and analytic algorithms become more complex, the typical workflow of analysts launching an analytic, waiting for it to complete, inspecting the results, and then re-launching the computation with adjusted parameters is not realistic for many real-world tasks. This paper presents an alternative workflow, progressive visual analytics, which enables an analyst to inspect partial results of an algorithm as they become available and interact with the algorithm to prioritize subspaces of interest. Progressive visual analytics depends on adapting analytical algorithms to produce meaningful partial results and enable analyst intervention without sacrificing computational speed. The paradigm also depends on adapting information visualization techniques to incorporate the constantly refining results without overwhelming analysts and provide interactions to support an analyst directing the analytic. The contributions of this paper include: a description of the progressive visual analytics paradigm; design goals for both the algorithms and visualizations in progressive visual analytics systems; an example progressive visual analytics system (Progressive Insights) for analyzing common patterns in a collection of event sequences; and an evaluation of Progressive Insights and the progressive visual analytics paradigm by clinical researchers analyzing electronic medical records. PMID:26356879
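
    A minimal sketch of the progressive pattern described above: the analytic yields a meaningful partial result after every chunk, and the consuming loop (standing in for the visualization and the analyst) can redraw or stop early after each one. Names are illustrative and unrelated to the Progressive Insights system.

```python
# Minimal sketch of an analytic that yields partial results for a UI loop.
import numpy as np

def progressive_mean(stream, chunk=1_000):
    """Yield a running mean estimate after every chunk of the data stream."""
    count, total, buffer = 0, 0.0, []
    for value in stream:
        buffer.append(value)
        if len(buffer) == chunk:
            total += sum(buffer)
            count += len(buffer)
            buffer.clear()
            yield total / count            # partial result for the UI
    if buffer:
        total += sum(buffer)
        count += len(buffer)
        yield total / count

# "Visualization loop": redraw after each partial result, stop when stable
# (an analyst steering the computation would hook in at the same point).
data = np.random.default_rng(3).normal(loc=5.0, scale=2.0, size=50_000)
previous = None
for estimate in progressive_mean(data):
    if previous is not None and abs(estimate - previous) < 1e-3:
        break
    previous = estimate
```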

  9. Polygon cluster pattern recognition based on new visual distance

    NASA Astrophysics Data System (ADS)

    Shuai, Yun; Shuai, Haiyan; Ni, Lin

    2007-06-01

    The pattern recognition of polygon clusters is a problem of considerable interest in spatial data mining. This paper investigates the problem on the basis of spatial cognition principles and the Gestalt principles of visual recognition, combined with spatial clustering, and makes two contributions. First, it substantially improves the concept of "visual distance": the definition comprehensively accounts not only for Euclidean distance, orientation difference and size discrepancy, but also, crucially, for the similarity of object shapes, and the distance is computed on a Delaunay triangulation structure. Second, the study adopts spatial clustering analysis based on a minimum spanning tree (MST); the pruning algorithm introduces a mechanism for automatic data layering together with a simulated annealing optimization step. The study also suggests a broader research thread for GIS development: GIS is an interdisciplinary field whose research methods should be open and diverse, and mature techniques from related disciplines can be introduced into GIS provided they are adapted to its principles as a spatial cognition science. Only in this way can GIS develop on a higher and stronger plane.
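
    As an illustration of the MST-based clustering stage, the sketch below builds a minimum spanning tree over pairwise distances and cuts its longest edges to form clusters. Plain Euclidean centroid distance stands in for the paper's Delaunay-based visual distance, and the simulated-annealing pruning is not reproduced; all names are illustrative.

```python
# Illustrative MST clustering: cut the longest tree edges to split groups.
import numpy as np
from scipy.sparse.csgraph import connected_components, minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_clusters(centroids, n_clusters=3):
    """centroids: (N, 2) polygon centroids; returns one label per polygon."""
    mst = minimum_spanning_tree(squareform(pdist(centroids))).toarray()
    edges = np.argwhere(mst > 0)
    # Remove the (n_clusters - 1) longest MST edges to split the tree.
    order = np.argsort(mst[edges[:, 0], edges[:, 1]])[::-1]
    for i, j in edges[order[:n_clusters - 1]]:
        mst[i, j] = 0.0
    _, labels = connected_components(mst, directed=False)
    return labels

pts = np.array([[0, 0], [1, 0], [0, 1], [10, 10], [11, 10], [20, 0]])
print(mst_clusters(pts, n_clusters=3))    # e.g. three groups of polygons
```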

  10. Employing Omnidirectional Visual Control for Mobile Robotics.

    ERIC Educational Resources Information Center

    Wright, J. R., Jr.; Jung, S.; Steplight, S.; Wright, J. R., Sr.; Das, A.

    2000-01-01

    Describes projects using conventional technologies--incorporation of relatively inexpensive visual control with mobile robots using a simple remote control vehicle platform, a camera, a mirror, and a computer. Explains how technology teachers can apply them in the classroom. (JOW)

  11. New Technologies for Acquisition and 3-D Visualization of Geophysical and Other Data Types Combined for Enhanced Understandings and Efficiencies of Oil and Gas Operations, Deepwater Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Thomson, J. A.; Gee, L. J.; George, T.

    2002-12-01

    This presentation shows results of a visualization method used to display and analyze multiple data types in a geospatially referenced three-dimensional (3-D) space. The integrated data types include sonar and seismic geophysical data, pipeline and geotechnical engineering data, and 3-D facilities models. Visualization of these data collectively in proper 3-D orientation yields insights and synergistic understandings not previously obtainable. Key technological components of the method are: 1) high-resolution geophysical data obtained using a newly developed autonomous underwater vehicle (AUV), 2) 3-D visualization software that delivers correctly positioned display of multiple data types and full 3-D flight navigation within the data space and 3) a highly immersive visualization environment (HIVE) where multidisciplinary teams can work collaboratively to develop enhanced understandings of geospatially complex data relationships. The initial study focused on an active deepwater development area in the Green Canyon protraction area, Gulf of Mexico. Here several planned production facilities required detailed, integrated data analysis for design and installation purposes. To meet the challenges of tight budgets and short timelines, an innovative new method was developed based on the combination of newly developed technologies. Key benefits of the method include enhanced understanding of geologically complex seabed topography and marine soils yielding safer and more efficient pipeline and facilities siting. Environmental benefits include rapid and precise identification of potential locations of protected deepwater biological communities for avoidance and protection during exploration and production operations. In addition, the method allows data presentation and transfer of learnings to an audience outside the scientific and engineering team. This includes regulatory personnel, marine archaeologists, industry partners and others.

  12. Visual Prosthesis

    PubMed Central

    Schiller, Peter H.; Tehovnik, Edward J.

    2009-01-01

    There are more than 40 million blind individuals in the world whose plight would be greatly ameliorated by creating a visual prosthetic. We begin by outlining the basic operational characteristics of the visual system as this knowledge is essential for producing a prosthetic device based on electrical stimulation through arrays of implanted electrodes. We then list a series of tenets that we believe need to be followed in this effort. Central among these is our belief that the initial research in this area, which is in its infancy, should first be carried out in animals. We suggest that implantation of area V1 holds high promise as the area is of a large volume and can therefore accommodate extensive electrode arrays. We then proceed to consider coding operations that can effectively convert visual images viewed by a camera to stimulate electrode arrays to yield visual impressions that can provide shape, motion and depth information. We advocate experimental work that mimics electrical stimulation effects non-invasively in sighted human subjects using a camera from which visual images are converted into displays on a monitor akin to those created by electrical stimulation. PMID:19065857

  13. Visual stability

    PubMed Central

    Melcher, David

    2011-01-01

    Our vision remains stable even though the movements of our eyes, head and bodies create a motion pattern on the retina. One of the most important, yet basic, feats of the visual system is to correctly determine whether this retinal motion is owing to real movement in the world or rather our own self-movement. This problem has occupied many great thinkers, such as Descartes and Helmholtz, at least since the time of Alhazen. This theme issue brings together leading researchers from animal neurophysiology, clinical neurology, psychophysics and cognitive neuroscience to summarize the state of the art in the study of visual stability. Recently, there has been significant progress in understanding the limits of visual stability in humans and in identifying many of the brain circuits involved in maintaining a stable percept of the world. Clinical studies and new experimental methods, such as transcranial magnetic stimulation, now make it possible to test the causal role of different brain regions in creating visual stability and also allow us to measure the consequences when the mechanisms of visual stability break down. PMID:21242136

  14. Robot Evolutionary Localization Based on Attentive Visual Short-Term Memory

    PubMed Central

    Vega, Julio; Perdices, Eduardo; Cañas, José M.

    2013-01-01

    Cameras are one of the most relevant sensors in autonomous robots. However, two of their challenges are to extract useful information from captured images, and to manage the small field of view of regular cameras. This paper proposes implementing a dynamic visual memory to store the information gathered from a moving camera on board a robot, followed by an attention system to choose where to look with this mobile camera, and a visual localization algorithm that incorporates this visual memory. The visual memory is a collection of relevant task-oriented objects and 3D segments, and its scope is wider than the current camera field of view. The attention module takes into account the need to reobserve objects in the visual memory and the need to explore new areas. The visual memory is useful also in localization tasks, as it provides more information about robot surroundings than the current instantaneous image. This visual system is intended as underlying technology for service robot applications in real people's homes. Several experiments have been carried out, both with simulated and real Pioneer and Nao robots, to validate the system and each of its components in office scenarios. PMID:23337333

  15. Robot evolutionary localization based on attentive visual short-term memory.

    PubMed

    Vega, Julio; Perdices, Eduardo; Cañas, José M

    2013-01-01

    Cameras are one of the most relevant sensors in autonomous robots. However, two of their challenges are to extract useful information from captured images, and to manage the small field of view of regular cameras. This paper proposes implementing a dynamic visual memory to store the information gathered from a moving camera on board a robot, followed by an attention system to choose where to look with this mobile camera, and a visual localization algorithm that incorporates this visual memory. The visual memory is a collection of relevant task-oriented objects and 3D segments, and its scope is wider than the current camera field of view. The attention module takes into account the need to reobserve objects in the visual memory and the need to explore new areas. The visual memory is useful also in localization tasks, as it provides more information about robot surroundings than the current instantaneous image. This visual system is intended as underlying technology for service robot applications in real people's homes. Several experiments have been carried out, both with simulated and real Pioneer and Nao robots, to validate the system and each of its components in office scenarios. PMID:23337333

  16. Visual cues for data mining

    NASA Astrophysics Data System (ADS)

    Rogowitz, Bernice E.; Rabenhorst, David A.; Gerth, John A.; Kalin, Edward B.

    1996-04-01

    This paper describes a set of visual techniques, based on principles of human perception and cognition, which can help users analyze and develop intuitions about tabular data. Collections of tabular data are widely available, including, for example, multivariate time series data, customer satisfaction data, stock market performance data, multivariate profiles of companies and individuals, and scientific measurements. In our approach, we show how visual cues can help users perform a number of data mining tasks, including identifying correlations and interaction effects, finding clusters and understanding the semantics of cluster membership, identifying anomalies and outliers, and discovering multivariate relationships among variables. These cues are derived from psychological studies on perceptual organization, visual search, perceptual scaling, and color perception. These visual techniques are presented as a complement to the statistical and algorithmic methods more commonly associated with these tasks, and provide an interactive interface for the human analyst.

  17. Exploratory Analysis of Stochastic Local Search Algorithms in Biobjective Optimization

    NASA Astrophysics Data System (ADS)

    López-Ibáñez, Manuel; Paquete, Luís; Stützle, Thomas

    This chapter introduces two Perl programs that implement graphical tools for exploring the performance of stochastic local search algorithms for biobjective optimization problems. These tools are based on the concept of the empirical attainment function (EAF), which describes the probabilistic distribution of the outcomes obtained by a stochastic algorithm in the objective space. In particular, we consider the visualization of attainment surfaces and differences between the first-order EAFs of the outcomes of two algorithms. This visualization allows us to identify certain algorithmic behaviors in a graphical way. We explain the use of these visualization tools and illustrate them with examples arising from practice.
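
    As a rough illustration of the EAF concept (a Python sketch rather than the Perl tools described above), the following code estimates, for a few objective-space points, the fraction of runs of a stochastic biobjective minimizer whose outcomes attain each point; the run data and test points are made up for the example.

        import numpy as np

        def eaf(runs, points):
            """Empirical attainment function: for each point z, the fraction of runs
            whose outcome set attains z, i.e. contains an outcome that is <= z in
            both objectives (minimization)."""
            values = []
            for z in points:
                attained = sum(1 for run in runs if np.any(np.all(run <= z, axis=1)))
                values.append(attained / len(runs))
            return np.array(values)

        # Two hypothetical runs of a stochastic biobjective optimizer (to be minimized).
        runs = [np.array([[1.0, 4.0], [2.0, 2.0], [4.0, 1.0]]),
                np.array([[1.5, 3.0], [3.0, 1.5]])]
        points = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
        print(eaf(runs, points))   # -> [0.  0.5 1. ] under these assumptions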

  18. Science information systems: Visualization

    NASA Technical Reports Server (NTRS)

    Wall, Ray J.

    1991-01-01

    Future programs in earth science, planetary science, and astrophysics will involve complex instruments that produce data at unprecedented rates and volumes. Current methods for data display, exploration, and discovery are inadequate. Visualization technology offers a means for the user to comprehend, explore, and examine complex data sets. The goal of this program is to increase the effectiveness and efficiency of scientists in extracting scientific information from large volumes of instrument data.

  19. Library Automation Design for Visually Impaired People

    ERIC Educational Resources Information Center

    Yurtay, Nilufer; Bicil, Yucel; Celebi, Sait; Cit, Guluzar; Dural, Deniz

    2011-01-01

    Speech synthesis is a technology used in many different areas of computer science. Through text-to-speech conversion, it can offer visually impaired people a solution for reading. Motivated by this problem, this study designs a system that enables a visually impaired person to make use of all the library facilities in…

  20. Localization Algorithms of Underwater Wireless Sensor Networks: A Survey

    PubMed Central

    Han, Guangjie; Jiang, Jinfang; Shu, Lei; Xu, Yongjun; Wang, Feng

    2012-01-01

    In Underwater Wireless Sensor Networks (UWSNs), localization is one of most important technologies since it plays a critical role in many applications. Motivated by widespread adoption of localization, in this paper, we present a comprehensive survey of localization algorithms. First, we classify localization algorithms into three categories based on sensor nodes’ mobility: stationary localization algorithms, mobile localization algorithms and hybrid localization algorithms. Moreover, we compare the localization algorithms in detail and analyze future research directions of localization algorithms in UWSNs. PMID:22438752

  1. Visualization-by-Sketching: An Artist's Interface for Creating Multivariate Time-Varying Data Visualizations.

    PubMed

    Schroeder, David; Keefe, Daniel F

    2016-01-01

    We present Visualization-by-Sketching, a direct-manipulation user interface for designing new data visualizations. The goals are twofold: First, make the process of creating real, animated, data-driven visualizations of complex information more accessible to artists, graphic designers, and other visual experts with traditional, non-technical training. Second, support and enhance the role of human creativity in visualization design, enabling visual experimentation and workflows similar to what is possible with traditional artistic media. The approach is to conceive of visualization design as a combination of processes that are already closely linked with visual creativity: sketching, digital painting, image editing, and reacting to exemplars. Rather than studying and tweaking low-level algorithms and their parameters, designers create new visualizations by painting directly on top of a digital data canvas, sketching data glyphs, and arranging and blending together multiple layers of animated 2D graphics. This requires new algorithms and techniques to interpret painterly user input relative to data "under" the canvas, balance artistic freedom with the need to produce accurate data visualizations, and interactively explore large (e.g., terabyte-sized) multivariate datasets. Results demonstrate a variety of multivariate data visualization techniques can be rapidly recreated using the interface. More importantly, results and feedback from artists support the potential for interfaces in this style to attract new, creative users to the challenging task of designing more effective data visualizations and to help these users stay "in the creative zone" as they work. PMID:26529734

  2. Determination of Sea Ice Thickness from Angular and Frequency Correlation Functions and by Genetic Algorithm: A Theoretical Study of New Instrument Technology

    NASA Astrophysics Data System (ADS)

    Hussein, Z. A.; Kuga, Y.; Ishimaru, A.; Jaruwatanadilok, S.; McDonald, K. C.; Holt, B.; Pak, K.; Jordan, R.; Perovich, D.; Sturm, M.

    2004-12-01

    information about the thickness of the layers. However, the amplitude of the surface ACF/FCF is impacted by the surface roughness characteristics, and reliable ACF/FCF phase information is obtained when its amplitude is sufficiently above the instrument system noise level. Using the aforementioned model, we were able to estimate the sea ice thickness, h, from the ACF/FCF. We apply a Genetic Algorithm (GA) to the estimation. The GA method is developed to maximize a fitness function exp[-(Pm(h) - P(h))^2], where P(h) is the phase of the ACF/FCF calculated from the forward model and Pm(h) is the measured phase of the ACF/FCF; in this case the phase is obtained from data simulated with the forward model. These results show that sea ice thickness retrieval can be done by the ACF/FCF method. We are currently developing this new instrument technology under the NASA/ESTO Instrument Incubator Program (IIP). We are planning an Arctic sea ice field experiment from an aircraft in March-April 2005 to validate and improve the inversion model.
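
    As a rough illustration of the estimation step described above, the sketch below runs a minimal real-coded genetic algorithm over candidate thicknesses and maximizes a fitness that peaks when a modelled phase matches a measured phase. The forward phase model, parameter range, and GA settings are placeholders for illustration, not the instrument model from the study.

        import numpy as np

        rng = np.random.default_rng(0)

        def forward_phase(h):
            # Placeholder forward model standing in for the ACF/FCF phase P(h)
            # from the layered sea-ice scattering model (not reproduced here).
            return 0.8 * h + 0.1 * np.sin(h)

        h_true = 1.7                          # hypothetical thickness in metres
        phase_meas = forward_phase(h_true)    # stands in for the measured phase Pm

        def fitness(h):
            # Peaks when the modelled phase matches the measured phase.
            return np.exp(-(phase_meas - forward_phase(h)) ** 2)

        # Minimal real-coded GA over candidate thicknesses in [0, 5] m.
        pop = rng.uniform(0.0, 5.0, size=50)
        for _ in range(100):
            fit = fitness(pop)
            # Binary tournament selection.
            idx = rng.integers(0, len(pop), size=(len(pop), 2))
            parents = np.where(fit[idx[:, 0]] > fit[idx[:, 1]], pop[idx[:, 0]], pop[idx[:, 1]])
            # Blend crossover with a shuffled copy, plus Gaussian mutation.
            partners = rng.permutation(parents)
            alpha = rng.uniform(size=len(pop))
            pop = np.clip(alpha * parents + (1 - alpha) * partners
                          + rng.normal(0.0, 0.05, len(pop)), 0.0, 5.0)

        print("estimated thickness:", pop[np.argmax(fitness(pop))])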

  3. Breast Tissue 3D Segmentation and Visualization on MRI

    PubMed Central

    Cui, Xiangfei; Sun, Feifei

    2013-01-01

    Tissue segmentation and visualization are useful for breast lesion detection and quantitative analysis. In this paper, a 3D segmentation algorithm based on Kernel-based Fuzzy C-Means (KFCM) is proposed to separate the breast MR images into different tissues. Then, an improved volume rendering algorithm based on a new transfer function model is applied to implement 3D breast visualization. Experimental results have been shown visually and have achieved reasonable consistency. PMID:23983676
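
    To make the KFCM idea concrete, here is a minimal sketch of kernel fuzzy c-means on a toy 1D intensity sample. The Gaussian-kernel width, fuzzifier, class count, and synthetic intensities are assumptions chosen for illustration, not the parameters or data of the study.

        import numpy as np

        def kfcm(x, c=3, m=2.0, sigma=50.0, iters=100, seed=0):
            """Toy kernel fuzzy c-means on 1D intensities with a Gaussian kernel."""
            rng = np.random.default_rng(seed)
            v = rng.choice(x, size=c, replace=False)            # initial cluster centres
            for _ in range(iters):
                k = np.exp(-(x[None, :] - v[:, None]) ** 2 / sigma ** 2)   # K(x, v)
                d = np.maximum(1.0 - k, 1e-12)                  # kernel-induced distance
                u = (1.0 / d) ** (1.0 / (m - 1.0))
                u /= u.sum(axis=0, keepdims=True)               # fuzzy memberships
                w = (u ** m) * k
                v = (w * x[None, :]).sum(axis=1) / w.sum(axis=1)    # centre update
            return u, v

        # Hypothetical intensity samples from three tissue classes.
        rng = np.random.default_rng(1)
        x = np.concatenate([rng.normal(50, 5, 200),
                            rng.normal(120, 8, 200),
                            rng.normal(200, 10, 200)])
        u, v = kfcm(x)
        labels = u.argmax(axis=0)        # hard labels, e.g. for volume rendering
        print(np.sort(v))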

  4. Visualizing Dynamic Bitcoin Transaction Patterns

    PubMed Central

    McGinn, Dan; Birch, David; Akroyd, David; Molina-Solana, Miguel; Guo, Yike; Knottenbelt, William J.

    2016-01-01

    Abstract This work presents a systemic top-down visualization of Bitcoin transaction activity to explore dynamically generated patterns of algorithmic behavior. Bitcoin dominates the cryptocurrency markets and presents researchers with a rich source of real-time transactional data. The pseudonymous yet public nature of the data presents opportunities for the discovery of human and algorithmic behavioral patterns of interest to many parties such as financial regulators, protocol designers, and security analysts. However, retaining visual fidelity to the underlying data to retain a fuller understanding of activity within the network remains challenging, particularly in real time. We expose an effective force-directed graph visualization employed in our large-scale data observation facility to accelerate this data exploration and derive useful insight among domain experts and the general public alike. The high-fidelity visualizations demonstrated in this article allowed for collaborative discovery of unexpected high frequency transaction patterns, including automated laundering operations, and the evolution of multiple distinct algorithmic denial of service attacks on the Bitcoin network. PMID:27441715
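
    As a toy illustration of the force-directed idea (not the authors' large-scale observation facility), the sketch below lays out a tiny, made-up transaction graph with NetworkX's spring layout, in which densely connected address clusters become visually apparent.

        import networkx as nx
        import matplotlib.pyplot as plt

        # Hypothetical transaction graph: addresses as nodes, transfers as edges.
        edges = [("addr_A", "addr_B"), ("addr_B", "addr_C"), ("addr_C", "addr_A"),
                 ("addr_D", "addr_A"), ("addr_D", "addr_B"), ("addr_E", "addr_D")]
        g = nx.DiGraph(edges)

        # Force-directed (spring) layout: connected nodes attract and all nodes repel,
        # so dense transaction patterns such as cycles form visible clusters.
        pos = nx.spring_layout(g, seed=42)
        nx.draw_networkx(g, pos, node_size=300, arrowsize=10)
        plt.axis("off")
        plt.show()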

  5. Visualizing Dynamic Bitcoin Transaction Patterns.

    PubMed

    McGinn, Dan; Birch, David; Akroyd, David; Molina-Solana, Miguel; Guo, Yike; Knottenbelt, William J

    2016-06-01

    This work presents a systemic top-down visualization of Bitcoin transaction activity to explore dynamically generated patterns of algorithmic behavior. Bitcoin dominates the cryptocurrency markets and presents researchers with a rich source of real-time transactional data. The pseudonymous yet public nature of the data presents opportunities for the discovery of human and algorithmic behavioral patterns of interest to many parties such as financial regulators, protocol designers, and security analysts. However, retaining visual fidelity to the underlying data to retain a fuller understanding of activity within the network remains challenging, particularly in real time. We expose an effective force-directed graph visualization employed in our large-scale data observation facility to accelerate this data exploration and derive useful insight among domain experts and the general public alike. The high-fidelity visualizations demonstrated in this article allowed for collaborative discovery of unexpected high frequency transaction patterns, including automated laundering operations, and the evolution of multiple distinct algorithmic denial of service attacks on the Bitcoin network. PMID:27441715

  6. SciDAC Institute for Ultrascale Visualization

    SciTech Connect

    Humphreys, Grigori R.

    2008-09-30

    The Institute for Ultrascale Visualization aims to address visualization needs of SciDAC science domains, including research topics in advanced scientific visualization architectures, algorithms, and interfaces for understanding large, complex datasets. During the current project period, the focus of the team at the University of Virginia has been interactive remote rendering for scientific visualization. With high-performance computing resources enabling increasingly complex simulations, scientists may desire to interactively visualize huge 3D datasets. Traditional large-scale 3D visualization systems are often located very close to the processing clusters, and are linked to them with specialized connections for high-speed rendering. However, this tight coupling of processing and display limits possibilities for remote collaboration, and prohibits scientists from using their desktop workstations for data exploration. In this project, we are developing a client/server system for interactive remote 3D visualization on desktop computers.

  7. Introduction to Vector Field Visualization

    NASA Technical Reports Server (NTRS)

    Kao, David; Shen, Han-Wei

    2010-01-01

    Vector field visualization techniques are essential to help us understand the complex dynamics of flow fields. These can be found in a wide range of applications such as study of flows around an aircraft, the blood flow in our heart chambers, ocean circulation models, and severe weather predictions. The vector fields from these various applications can be visually depicted using a number of techniques such as particle traces and advecting textures. In this tutorial, we present several fundamental algorithms in flow visualization including particle integration, particle tracking in time-dependent flows, and seeding strategies. For flows near surfaces, a wide variety of synthetic texture-based algorithms have been developed to depict near-body flow features. The most common approach is based on the Line Integral Convolution (LIC) algorithm. There also exist extensions of LIC to support more flexible texture generations for 3D flow data. This tutorial reviews these algorithms. Tensor fields are found in several real-world applications and also require the aid of visualization to help users understand their data sets. Examples where one can find tensor fields include mechanics to see how material respond to external forces, civil engineering and geomechanics of roads and bridges, and the study of neural pathway via diffusion tensor imaging. This tutorial will provide an overview of the different tensor field visualization techniques, discuss basic tensor decompositions, and go into detail on glyph based methods, deformation based methods, and streamline based methods. Practical examples will be used when presenting the methods; and applications from some case studies will be used as part of the motivation.
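
    As a minimal sketch of the particle-integration step mentioned above, the following code traces a particle through a steady, analytically defined 2D vector field with classical fourth-order Runge-Kutta; the field, seed point, and step size are illustrative assumptions.

        import numpy as np

        def velocity(p):
            # Hypothetical steady 2D field: a simple circulation about the origin.
            x, y = p
            return np.array([-y, x])

        def trace_particle(seed, dt=0.05, steps=200):
            """Integrate a particle path with classical 4th-order Runge-Kutta."""
            path = [np.asarray(seed, dtype=float)]
            for _ in range(steps):
                p = path[-1]
                k1 = velocity(p)
                k2 = velocity(p + 0.5 * dt * k1)
                k3 = velocity(p + 0.5 * dt * k2)
                k4 = velocity(p + dt * k3)
                path.append(p + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0)
            return np.array(path)

        path = trace_particle([1.0, 0.0])
        print(path[-1])   # stays close to the unit circle for this field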

  8. First Generation ASCI Production Visualization Environments

    SciTech Connect

    Heermann, P.D.

    1999-04-08

    The delivery of the first one tera-operations/sec computer has significantly impacted production data visualization, affecting data transfer, post processing, and rendering. Terascale computing has motivated a need to consider the entire data visualization system; improving a single algorithm is not sufficient. This paper presents a systems approach to decrease by a factor of four the time required to prepare large data sets for visualization. For daily production use, all stages in the processing pipeline, from physics simulation code to pixels on a screen, must be balanced to yield good overall performance. Also, to complete the data path from screen to the analyst's eye, user display systems for individuals and teams are examined. Performance of the initial visualization system is compared with recent improvements. Lessons learned from the coordinated deployment of improved algorithms are also discussed, including the need for 64-bit addressing and a fully parallel data visualization pipeline.

  9. Visualizing inequality

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo

    2016-07-01

    The study of socioeconomic inequality is of substantial importance, scientific and general alike. The graphic visualization of inequality is commonly conveyed by Lorenz curves. While Lorenz curves are a highly effective statistical tool for quantifying the distribution of wealth in human societies, they are less effective a tool for the visual depiction of socioeconomic inequality. This paper introduces an alternative to Lorenz curves-the hill curves. On the one hand, the hill curves are a potent scientific tool: they provide detailed scans of the rich-poor gaps in human societies under consideration, and are capable of accommodating infinitely many degrees of freedom. On the other hand, the hill curves are a powerful infographic tool: they visualize inequality in a most vivid and tangible way, with no quantitative skills that are required in order to grasp the visualization. The application of hill curves extends far beyond socioeconomic inequality. Indeed, the hill curves are highly effective 'hyperspectral' measures of statistical variability that are applicable in the context of size distributions at large. This paper establishes the notion of hill curves, analyzes them, and describes their application in the context of general size distributions.
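
    The hill-curve construction itself is not reproduced here, but the Lorenz-curve baseline the paper contrasts it with is easy to sketch; the wealth sample below is a made-up log-normal society, and the Gini coefficient is computed from the curve as a summary.

        import numpy as np

        def lorenz_curve(wealth):
            """Return (p, L): cumulative population share vs. cumulative wealth share."""
            w = np.sort(np.asarray(wealth, dtype=float))
            cum = np.cumsum(w)
            p = np.arange(1, len(w) + 1) / len(w)
            return np.concatenate([[0.0], p]), np.concatenate([[0.0], cum / cum[-1]])

        # Hypothetical wealth sample: a heavy-tailed (log-normal) society.
        rng = np.random.default_rng(1)
        p, L = lorenz_curve(rng.lognormal(mean=0.0, sigma=1.0, size=10_000))
        gini = 1.0 - 2.0 * np.trapz(L, p)   # Gini coefficient from the Lorenz curve
        print(round(gini, 3))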

  10. The On-Line Archive Science Information Services (OASIS) as an Enabling Technology for Visual Cross-Comparisons Between Multiple Data Sets

    NASA Astrophysics Data System (ADS)

    Berriman, G. B.; Kong, M.; Good, J. C.; Lonsdale, C. J.; Voges, W.; Henry, T. J.; Bean, J. L.; Blackwell, J.

    Wide differences between the positional and photometric quality of astronomical data sets require visual examination for accurate cross-identification of sources between data sets. Given the tedium involved in these comparisons, astronomers need methods to perform rapid cross-comparisons. The On-Line Science Information Services (OASIS), a Java toolkit recently released by the IPAC Infrared Science Archive (IRSA), provides such capability. OASIS is a collection of services, designed to be run independently of each other, that access and integrate multiple image and catalog archives. Its first release provides access to IRAS, 2MASS, MSX, FIRST, and NVSS images, and is interoperable with NED and CDS. We describe its applicability to two projects that require bulk visual cross-identification: the NStars database, which serves quality-controlled data for all stellar, substellar, and planetary objects within 25 parsecs of the Sun; and identification of far-IR counterparts of 750 active galaxies measured by ROSAT.

  11. Visualization of protein interaction networks: problems and solutions

    PubMed Central

    2013-01-01

    Background Visualization concerns the representation of data visually and is an important task in scientific research. Protein-protein interactions (PPI) are discovered using either wet-lab techniques, such as mass spectrometry, or in silico prediction tools, resulting in large collections of interactions stored in specialized databases. The set of all interactions of an organism forms a protein-protein interaction network (PIN) and is an important tool for studying the behaviour of the cell machinery. Since graphic representation of PINs may highlight important substructures, e.g. protein complexes, visualization is increasingly used to study the underlying graph structure of PINs. Although graphs are well-known data structures, several open problems remain in PIN visualization: the high number of nodes and connections, the heterogeneity of nodes (proteins) and edges (interactions), and the possibility to annotate proteins and interactions with biological information extracted from ontologies (e.g. Gene Ontology), which enriches the PINs with semantic information but complicates their visualization. Methods In recent years many software tools for the visualization of PINs have been developed. Initially intended for visualization only, some of them have since been enriched with new functions for PPI data management and PIN analysis. The paper analyzes the main software tools for PIN visualization considering four main criteria: (i) technology, i.e. availability/license of the software and supported OS (Operating System) platforms; (ii) interoperability, i.e. ability to import/export networks in various formats, ability to export data in a graphic format, extensibility of the system, e.g. through plug-ins; (iii) visualization, i.e. supported layout and rendering algorithms and availability of parallel implementation; (iv) analysis, i.e. availability of network analysis functions, such as clustering or mining of the graph, and the possibility to

  12. A method for automatically abstracting visual documents

    NASA Technical Reports Server (NTRS)

    Rorvig, Mark E.

    1994-01-01

    Visual documents--motion sequences on film, videotape, and digital recording--constitute a major source of information for the Space Agency, as well as all other government and private sector entities. This article describes a method for automatically selecting key frames from visual documents. These frames may in turn be used to represent the total image sequence of visual documents in visual libraries, hypermedia systems, and training guides. The performance of the abstracting algorithm reduces 51 minutes of video sequences to 134 frames; a reduction of information in the range of 700:1.

  13. A method for automatically abstracting visual documents

    NASA Technical Reports Server (NTRS)

    Rorvig, Mark E.

    1993-01-01

    Visual documents - motion sequences on film, video-tape, and digital recordings - constitute a major source of information for the Space Agency, as well as all other government and private sector entities. This article describes a method for automatically selecting key frames from visual documents. These frames may in turn be used to represent the total image sequence of visual documents in visual libraries, hypermedia systems, and training guides. The performance of the abstracting algorithm reduces 51 minutes of video sequences to 134 frames; a reduction of information in the range of 700:1.

  14. Exploring the Relationship between Access Technology and Standardized Test Scores for Youths with Visual Impairments: Secondary Analysis of the National Longitudinal Transition Study 2

    ERIC Educational Resources Information Center

    Freeland, Amy L.; Emerson, Robert Wall; Curtis, Amy B.; Fogarty, Kieran

    2010-01-01

    This article presents the findings of a secondary analysis of the National Longitudinal Transition Study 2 that explored the predictive association between training in access technology and performance on the Woodcock-Johnson Tests of Academic Achievement: III. The results indicated that the use of access technology had a limited predictive…

  15. Information Should Be Visual: New and Emerging Technologies and Their Application in the VET Sector for Students Who Are Deaf and Hard of Hearing.

    ERIC Educational Resources Information Center

    Knuckey, J.; Lawford, L.; Kay, J.

    A project explored deaf and hard-of-hearing students' current use of new and emerging learning technology in technical and further education (TAFE) institutes across Australia. Findings indicated that new learning technologies aided communication, especially when e-mail and Internet chat are used; built self-esteem through self-directed learning;…

  16. Algorithmic Animation in Education--Review of Academic Experience

    ERIC Educational Resources Information Center

    Esponda-Arguero, Margarita

    2008-01-01

    This article is a review of the pedagogical experience obtained with systems for algorithmic animation. Algorithms consist of a sequence of operations whose effect on data structures can be visualized using a computer. Students learn algorithms by stepping the animation through the different individual operations, possibly reversing their effect.…

  17. Spatial Visualization Ability and Laparoscopic Skills in Novice Learners: Evaluating Stereoscopic versus Monoscopic Visualizations

    ERIC Educational Resources Information Center

    Roach, Victoria A.; Mistry, Manisha R.; Wilson, Timothy D.

    2014-01-01

    Elevated spatial visualization ability (Vz) is thought to influence surgical skill acquisition and performance. Current research suggests that stereo visualization technology and its association with skill performance may confer perceptual advantages. This is of particular interest in laparoscopic skill training, where stereo visualization may…

  18. Information visualization: Beyond traditional engineering

    NASA Technical Reports Server (NTRS)

    Thomas, James J.

    1995-01-01

    This presentation addresses a different aspect of the human-computer interface, specifically the human-information interface. This interface will be dominated by an emerging technology called Information Visualization (IV). IV goes beyond the traditional views of computer graphics and CAD, and enables new approaches for engineering. IV specifically must visualize text, documents, sound, images, and video in such a way that the human can rapidly interact with and understand the content structure of information entities. IV is the interactive visual interface between humans and their information resources.

  19. Algorithm development

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Lomax, Harvard

    1987-01-01

    The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

  20. On detection and visualization techniques for cyber security situation awareness

    NASA Astrophysics Data System (ADS)

    Yu, Wei; Wei, Shixiao; Shen, Dan; Blowers, Misty; Blasch, Erik P.; Pham, Khanh D.; Chen, Genshe; Zhang, Hanlin; Lu, Chao

    2013-05-01

    Networking technologies are expanding rapidly to meet worldwide communication requirements. The rapid growth of network technologies and the pervasiveness of communications pose serious security issues. In this paper, we aim to develop an integrated network defense system with situation awareness capabilities that presents useful information to human analysts. In particular, we implement a prototypical system that includes both distributed passive and active network sensors and traffic visualization features, such as 1D, 2D and 3D network traffic displays. To detect attacks effectively, we also implement algorithms that transform real-world IP address data into images, study the patterns of attacks, and use both a discrete wavelet transform (DWT) based scheme and a statistical scheme to detect attacks. Through an extensive simulation study, our data validate the effectiveness of the implemented defense system.
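
    A heavily simplified sketch of the DWT-based idea: wavelet detail coefficients of a traffic time series respond to abrupt changes, so unusually large details can flag candidate attack intervals. The synthetic packet counts, the injected burst, and the robust threshold below are assumptions made for illustration, not the paper's scheme.

        import numpy as np
        import pywt

        rng = np.random.default_rng(7)

        # Hypothetical per-second packet counts with an injected burst (e.g. a flood).
        traffic = rng.poisson(lam=100, size=256).astype(float)
        traffic[181:201] += 400.0

        # One-level Haar DWT; the burst onset and offset produce large detail coefficients.
        approx, detail = pywt.dwt(traffic, "haar")

        # Robust noise estimate (median absolute deviation), then flag outlying details.
        sigma = np.median(np.abs(detail)) / 0.6745
        suspect = np.where(np.abs(detail) > 5 * sigma)[0] * 2   # back to sample indices
        print(suspect)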

  1. Mapping scientific frontiers : the quest for knowledge visualization.

    SciTech Connect

    Boyack, Kevin W.

    2003-08-01

    multi-dimensional scaling, advanced dimensional reduction, social network analysis, Pathfinder network scaling, and landscape visualizations. No algorithms are given here; rather, these techniques are described from the point of view of enabling 'visual thinking'. The Generalized Similarity Analysis (GSA) technique used by Chen in his recent published papers is also introduced here. Information and computer science professionals would be wise not to skip through these early chapters. Although principles of gestalt psychology, cartography, thematic maps, and association techniques may be outside their technology comfort zone, or interest, these predecessors lay a groundwork for the 'visual thinking' that is required to create effective visualizations. Indeed, the great challenge in information visualization is to transform the abstract and intangible into something visible, concrete, and meaningful to the user. The second part of the book, covering the final three chapters, extends the mapping metaphor into the realm of scientific discovery through the structuring of literatures in a way that enables us to see scientific frontiers or paradigms. Case studies are used extensively to show the logical progression that has been made in recent years to get us to this point. Homage is paid to giants of the last 20 years including Michel Callon for co-word mapping, Henry Small for document co-citation analysis and specialty narratives (charting a path linking the different sciences), and Kate McCain for author co-citation analysis, whose work has led to the current state-of-the-art. The last two chapters finally answer the question - 'What does a scientific paradigm look like?' The visual answer given is specific to the GSA technique used by Chen, but does satisfy the intent of the book - to introduce a way to visually identify scientific frontiers. A variety of case studies, mostly from Chen's previously published work - supermassive black holes, cross-domain applications of

  2. Flow visualization

    NASA Astrophysics Data System (ADS)

    Weinstein, Leonard M.

    Flow visualization techniques are reviewed, with particular attention given to those applicable to liquid helium flows. Three techniques capable of obtaining qualitative and quantitative measurements of complex 3D flow fields are discussed, including focusing schlieren, particle image velocimetry, and holocinematography (HCV). It is concluded that HCV appears to be uniquely capable of obtaining full time-varying, 3D velocity field data, but is limited to the low speeds typical of liquid helium facilities.

  3. Flow visualization

    NASA Technical Reports Server (NTRS)

    Weinstein, Leonard M.

    1991-01-01

    Flow visualization techniques are reviewed, with particular attention given to those applicable to liquid helium flows. Three techniques capable of obtaining qualitative and quantitative measurements of complex 3D flow fields are discussed, including focusing schlieren, particle image velocimetry, and holocinematography (HCV). It is concluded that HCV appears to be uniquely capable of obtaining full time-varying, 3D velocity field data, but is limited to the low speeds typical of liquid helium facilities.

  4. Evaluating super resolution algorithms

    NASA Astrophysics Data System (ADS)

    Kim, Youn Jin; Park, Jong Hyun; Shin, Gun Shik; Lee, Hyun-Seung; Kim, Dong-Hyun; Park, Se Hyeok; Kim, Jaehyun

    2011-01-01

    This study intends to establish a sound testing and evaluation methodology, based upon human visual characteristics, for assessing image restoration accuracy, and to compare the subjective results with predictions from several objective evaluation methods. In total, six different super resolution (SR) algorithms - namely iterative back-projection (IBP), robust SR, maximum a posteriori (MAP), projections onto convex sets (POCS), a non-uniform interpolation, and a frequency domain approach - were selected. The performance comparison between the SR algorithms in terms of their restoration accuracy was carried out both subjectively and objectively. The former methodology relies upon the paired comparison method, which involves the simultaneous scaling of two stimuli with respect to image restoration accuracy. For the latter, both conventional image quality metrics and color difference methods are implemented. Consequently, POCS and the non-uniform interpolation outperformed the others in an ideal situation, while restoration-based methods appear more accurate to the HR image in a real-world case where prior information about the blur kernel remains unknown. However, the noise-added image could not be restored successfully by any of those methods. The latest International Commission on Illumination (CIE) standard color difference equation, CIEDE2000, was found to predict the subjective results accurately and outperformed conventional methods for evaluating the restoration accuracy of those SR algorithms.
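
    For the objective side, the CIEDE2000 colour difference is available in scikit-image, so a restoration-accuracy check can be sketched as below; the reference and restored images here are random placeholders, not the study's test set.

        import numpy as np
        from skimage.color import rgb2lab, deltaE_ciede2000

        rng = np.random.default_rng(3)

        # Placeholders for a ground-truth HR image and a super-resolved reconstruction
        # (float RGB in [0, 1]); in practice these come from the SR pipeline under test.
        reference = rng.random((64, 64, 3))
        restored = np.clip(reference + rng.normal(0.0, 0.02, reference.shape), 0.0, 1.0)

        # Mean CIEDE2000 colour difference: lower means the restoration is closer
        # to the reference in a perceptually motivated sense.
        delta_e = deltaE_ciede2000(rgb2lab(reference), rgb2lab(restored))
        print("mean deltaE00:", float(delta_e.mean()))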

  5. An algorithm for segmenting range imagery

    SciTech Connect

    Roberts, R.S.

    1997-03-01

    This report describes the technical accomplishments of the FY96 Cross Cutting and Advanced Technology (CC&AT) project at Los Alamos National Laboratory. The project focused on developing algorithms for segmenting range images. The image segmentation algorithm developed during the project is described here. In addition to segmenting range images, the algorithm can fuse multiple range images thereby providing true 3D scene models. The algorithm has been incorporated into the Rapid World Modelling System at Sandia National Laboratory.

  6. Ultrascale visualization capabilities for the ParaView/VTK framework

    2009-06-09

    The software is a set of technologies developed by the SciDAC Institute for Ultrascale Visualization in order to address the visualization needs for petascale computing and beyond. These technologies include improved I/O performance, simulation co-processing, advanced rendering capabilities, and specialized visualization techniques developed for SciDAC applications.

  7. A tool for intraoperative visualization of registration results

    NASA Astrophysics Data System (ADS)

    King, Franklin; Lasso, Andras; Pinter, Csaba; Fichtinger, Gabor

    2014-03-01

    PURPOSE: Validation of image registration algorithms is frequently accomplished by the visual inspection of the resulting linear or deformable transformation due to the lack of ground truth information. Visualization of transformations produced by image registration algorithms during image-guided interventions allows a clinician to evaluate the accuracy of the resulting transformation. Software packages that perform the visualization of transformations exist, but are not part of a clinically usable software application. We present a tool that visualizes both linear and deformable transformations and is integrated in an open-source software application framework suited for intraoperative use and general evaluation of registration algorithms. METHODS: Six different modes are available for visualization of a transform. Glyph visualization mode uses oriented and scaled glyphs, such as arrows, to represent the displacement field in 3D, whereas glyph slice visualization mode creates arrows that can be seen as a 2D vector field. Grid visualization mode creates deformed grids shown in 3D, whereas grid slice visualization mode creates a series of 2D grids. Block visualization mode creates a deformed bounding box of the warped volume. Finally, contour visualization mode creates isosurfaces and isolines that visualize the magnitude of displacement across a volume. The application 3D Slicer was chosen as the platform for the transform visualizer tool. 3D Slicer is a comprehensive open-source application framework developed for medical image computing and used for intra-operative registration. RESULTS: The transform visualizer tool fulfilled the requirements for quick evaluation of intraoperative image registrations. Visualizations were generated in 3D Slicer with little computation time on realistic datasets. It is freely available as an extension for 3D Slicer. CONCLUSION: A tool for the visualization of displacement fields was created and integrated into 3D Slicer.
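
    As a rough 2D analogue of the glyph-slice mode described above (not the 3D Slicer extension itself), the sketch below draws arrow glyphs for a synthetic displacement field with matplotlib; the swirl field and sampling step are illustrative assumptions.

        import numpy as np
        import matplotlib.pyplot as plt

        # Synthetic 2D displacement field standing in for one slice of a registration
        # transform: a gentle swirl about the image centre.
        y, x = np.mgrid[0:64:4, 0:64:4]
        dx = -(y - 32) * 0.05
        dy = (x - 32) * 0.05

        # Glyph-slice view: one arrow per sample shows displacement direction/magnitude.
        plt.quiver(x, y, dx, dy, np.hypot(dx, dy), angles="xy")
        plt.gca().set_aspect("equal")
        plt.title("Displacement glyphs (2D slice)")
        plt.show()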

  8. Uncluttered Single-Image Visualization of Vascular Structures using GPU and Integer Programming

    PubMed Central

    Won, Joong-Ho; Jeon, Yongkweon; Rosenberg, Jarrett; Yoon, Sungroh; Rubin, Geoffrey D.; Napel, Sandy

    2013-01-01

    Direct projection of three-dimensional branching structures, such as networks of cables, blood vessels, or neurons onto a 2D image creates the illusion of intersecting structural parts and creates challenges for understanding and communication. We present a method for visualizing such structures, and demonstrate its utility in visualizing the abdominal aorta and its branches, whose tomographic images might be obtained by computed tomography or magnetic resonance angiography, in a single two-dimensional stylistic image, without overlaps among branches. The visualization method, termed uncluttered single-image visualization (USIV), involves optimization of geometry. This paper proposes a novel optimization technique that utilizes an interesting connection of the optimization problem regarding USIV to the protein structure prediction problem. Adopting the integer linear programming-based formulation for the protein structure prediction problem, we tested the proposed technique using 30 visualizations produced from five patient scans with representative anatomical variants in the abdominal aortic vessel tree. The novel technique can exploit commodity-level parallelism, enabling use of general-purpose graphics processing unit (GPGPU) technology that yields a significant speedup. Comparison of the results with the other optimization technique previously reported elsewhere suggests that, in most aspects, the quality of the visualization is comparable to that of the previous one, with a significant gain in the computation time of the algorithm. PMID:22291148

  9. Managing Complexity in Multidisciplinary Visualization

    NASA Technical Reports Server (NTRS)

    Miceli, Kristina D.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    As high performance computing technology progresses, computational simulations are becoming more advanced in their capabilities. In the computational aerosciences domain, single discipline steady-state simulations computed on a single grid are far from the state-of-the-art. In their place are complex, time-dependent multidisciplinary simulations that attempt to model a given geometry more realistically. The product of these multidisciplinary simulations is a massive amount of data stored in different formats, grid topologies, units of measure, etc., as a result of the differences in the simulated physical domains. In addition to the challenges posed by setting up and performing the simulation, additional challenges exist in analyzing computational results. Visualization plays an important role in the advancement of multidisciplinary simulations. To date, visualization has been used to aid in the interpretation of large amounts of simulation data. Because the human visual system is effective in digesting a large amount of information presented graphically, visualization has helped simulation scientists to understand complex simulation results. As these simulations become even more complex, integrating several different physical domains, visualization will be critical to digest the massive amount of information. Another important role for visualization is to provide a common communication medium from which the domain scientists can use to develop, debug, and analyze their work. Multidisciplinary analyses are the next step in simulation technology, not only in computational aerosciences, but in many other areas such as global climate modeling. Visualization researchers must understand and work towards the challenges posed by multidisciplinary simulation scenarios. This paper addresses some of these challenges, describing technologies that must be investigated to create a useful visualization analysis tool for domain scientists.

  10. Processing Visual Images

    SciTech Connect

    Litke, Alan

    2006-03-27

    The back of the eye is lined by an extraordinary biological pixel detector, the retina. This neural network is able to extract vital information about the external visual world, and transmit this information in a timely manner to the brain. In this talk, Professor Litke will describe a system that has been implemented to study how the retina processes and encodes dynamic visual images. Based on techniques and expertise acquired in the development of silicon microstrip detectors for high energy physics experiments, this system can simultaneously record the extracellular electrical activity of hundreds of retinal output neurons. After presenting first results obtained with this system, Professor Litke will describe additional applications of this incredible technology.

  11. Optimization of thrust algorithm calibration for the Thrust Computing System (TCS) for the NASA Highly Maneuverable Aircraft Technology (HiMAT) vehicle's propulsion system

    NASA Technical Reports Server (NTRS)

    Hamer, M. J.; Alexander, R. I.

    1981-01-01

    A simplified gross thrust computing technique for the HiMAT J85-GE-21 engine using altitude facility data was evaluated. The results over the full engine envelope for both the standard engine mode and the open nozzle engine mode are presented. Results using afterburner casing static pressure taps are compared to those using liner static pressure taps. It is found that the technique is very accurate for both the standard and open nozzle engine modes. The difference in the algorithm accuracy for a calibration based on data from one test condition was small compared to a calibration based on data from all of the test conditions.

  12. Unsupervised Learning for Visual Pattern Analysis

    NASA Astrophysics Data System (ADS)

    Zheng, Nanning; Xue, Jianru

    This chapter presents an overview of topics and major concepts in unsupervised learning for visual pattern analysis. Cluster analysis and dimensionality reduction are two important topics in unsupervised learning. Clustering relates to the grouping of similar objects in visual perception, while dimensionality reduction is essential for the compact representation of visual patterns. In this chapter, we focus on clustering techniques, offering first a theoretical basis, then a look at some applications in visual pattern analysis. With respect to the former, we introduce both concepts and algorithms. With respect to the latter, we discuss visual perceptual grouping. In particular, the problem of image segmentation is discussed in terms of contour and region grouping. Finally, we present a brief introduction to learning visual pattern representations, which serves as a prelude to the following chapters.
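
    A minimal clustering sketch with scikit-learn, in the spirit of the perceptual grouping discussed above; the 2D feature vectors stand in for visual descriptors and are generated synthetically for the example.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(5)

        # Hypothetical 2D visual descriptors drawn from three perceptual groups.
        features = np.vstack([rng.normal(loc, 0.3, size=(100, 2))
                              for loc in ([0, 0], [3, 0], [0, 3])])

        # k-means groups similar descriptors: a basic form of perceptual grouping.
        labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
        print(np.bincount(labels))   # roughly 100 descriptors per cluster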

  13. An optimized web-based approach for collaborative stereoscopic medical visualization

    PubMed Central

    Kaspar, Mathias; Parsad, Nigel M; Silverstein, Jonathan C

    2013-01-01

    Objective Medical visualization tools have traditionally been constrained to tethered imaging workstations or proprietary client viewers, typically part of hospital radiology systems. To improve accessibility to real-time, remote, interactive, stereoscopic visualization and to enable collaboration among multiple viewing locations, we developed an open source approach requiring only a standard web browser with no added client-side software. Materials and Methods Our collaborative, web-based, stereoscopic, visualization system, CoWebViz, has been used successfully for the past 2 years at the University of Chicago to teach immersive virtual anatomy classes. It is a server application that streams server-side visualization applications to client front-ends, comprised solely of a standard web browser with no added software. Results We describe optimization considerations, usability, and performance results, which make CoWebViz practical for broad clinical use. We clarify technical advances including: enhanced threaded architecture, optimized visualization distribution algorithms, a wide range of supported stereoscopic presentation technologies, and the salient theoretical and empirical network parameters that affect our web-based visualization approach. Discussion The implementations demonstrate usability and performance benefits of a simple web-based approach for complex clinical visualization scenarios. Using this approach overcomes technical challenges that require third-party web browser plug-ins, resulting in the most lightweight client. Conclusions Compared to special software and hardware deployments, unmodified web browsers enhance remote user accessibility to interactive medical visualization. Whereas local hardware and software deployments may provide better interactivity than remote applications, our implementation demonstrates that a simplified, stable, client approach using standard web browsers is sufficient for high quality three

  14. Computer Imagery and Visualization in Built Environment Education: The CAL-Visual Approach.

    ERIC Educational Resources Information Center

    Bouchlaghem, N.; Wilson, A.; Beacham, N.; Sher, W.

    2002-01-01

    Discussion of information and communication technology in the United Kingdom focuses on the use of multimedia technologies, particularly digital imagery and visualization material, to improve student knowledge and understanding. Describes the CAL (computer assisted learning)-Visual system that was developed for civil and building engineering…

  15. Efficient Visualization of Document Streams

    NASA Astrophysics Data System (ADS)

    Grčar, Miha; Podpečan, Vid; Juršič, Matjaž; Lavrač, Nada

    In machine learning and data mining, multidimensional scaling (MDS) and MDS-like methods are extensively used for dimensionality reduction and for gaining insights into overwhelming amounts of data through visualization. With the growth of the Web and activities of Web users, the amount of data not only grows exponentially but is also becoming available in the form of streams, where new data instances constantly flow into the system, requiring the algorithm to update the model in near-real time. This paper presents an algorithm for document stream visualization through a MDS-like distance-preserving projection onto a 2D canvas. The visualization algorithm is essentially a pipeline employing several methods from machine learning. Experimental verification shows that each stage of the pipeline is able to process a batch of documents in constant time. It is shown that in the experimental setting with a limited buffer capacity and a constant document batch size, it is possible to process roughly 2.5 documents per second which corresponds to approximately 25% of the entire blogosphere rate and should be sufficient for most real-life applications.
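
    A small-batch sketch of the distance-preserving projection idea (not the paper's streaming pipeline): TF-IDF vectors of a few made-up documents are projected to 2D with metric MDS from scikit-learn, so that similar documents land near each other on the canvas.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.manifold import MDS
        from sklearn.metrics import pairwise_distances

        # A tiny batch of documents standing in for one buffer of the stream.
        docs = [
            "stock markets rally on tech earnings",
            "new graphics card accelerates rendering",
            "central bank signals interest rate cut",
            "gpu shaders and real time rendering tricks",
        ]

        # Cosine distances between TF-IDF vectors, projected to 2D with metric MDS.
        tfidf = TfidfVectorizer().fit_transform(docs)
        dist = pairwise_distances(tfidf, metric="cosine")
        coords = MDS(n_components=2, dissimilarity="precomputed",
                     random_state=0).fit_transform(dist)
        print(coords.round(2))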

  16. Multimodality Neurological Data Visualization With Multi-VOI-Based DTI Fiber Dynamic Integration.

    PubMed

    Zhang, Qi; Alexander, Murray; Ryner, Lawrence

    2016-01-01

    Brain lesions are usually located adjacent to critical spinal structures, so it is a challenging task for neurosurgeons to precisely plan a surgical procedure without damaging healthy tissues and nerves. The advancement of medical imaging technologies produces a large amount of neurological data, which are capable of showing a wide variety of brain properties. Advanced algorithms for medical data computing and visualization are critically helpful in efficiently utilizing the acquired data for disease diagnosis and for exploring brain function and structure, which aids treatment planning. In this paper, we describe new algorithms and a software framework for multiple volume-of-interest specified diffusion tensor imaging (DTI) fiber dynamic visualization. The displayed results have been integrated with a volume rendering pipeline for multimodality neurological data exploration. A depth texture indexing algorithm is used to detect DTI fiber tracts on graphics processing units (GPUs), which allows fibers to be displayed and interactively manipulated together with brain data acquired from functional magnetic resonance imaging, T1- and T2-weighted anatomic imaging, and angiographic imaging. The developed software platform is built on an object-oriented structure, which is transparent and extensible. It provides a comprehensive human-computer interface for data exploration and information extraction. GPU-accelerated high-performance computing kernels have been implemented to enable our software to dynamically visualize neurological data. The developed techniques will be useful in computer-aided neurological disease diagnosis, brain structure exploration, and general cognitive neuroscience. PMID:25376048

  17. Visual bioethics.

    PubMed

    Lauritzen, Paul

    2008-12-01

    Although images are pervasive in public policy debates in bioethics, few who work in the field attend carefully to the way that images function rhetorically. If the use of images is discussed at all, it is usually to dismiss appeals to images as a form of manipulation. Yet it is possible to speak meaningfully of visual arguments. Examining the appeal to images of the embryo and fetus in debates about abortion and stem cell research, I suggest that bioethicists would be well served by attending much more carefully to how images function in public policy debates. PMID:19085479

  18. Real-time tracking using stereo and motion: Visual perception for space robotics

    NASA Technical Reports Server (NTRS)

    Nishihara, H. Keith; Thomas, Hans; Huber, Eric; Reid, C. Ann

    1994-01-01

    The state-of-the-art in computing technology is rapidly attaining the performance necessary to implement many early vision algorithms at real-time rates. This new capability is helping to accelerate progress in vision research by improving our ability to evaluate the performance of algorithms in dynamic environments. In particular, we are becoming much more aware of the relative stability of various visual measurements in the presence of camera motion and system noise. This new processing speed is also allowing us to raise our sights toward accomplishing much higher-level processing tasks, such as figure-ground separation and active object tracking, in real-time. This paper describes a methodology for using early visual measurements to accomplish higher-level tasks; it then presents an overview of the high-speed accelerators developed at Teleos to support early visual measurements. The final section describes the successful deployment of a real-time vision system to provide visual perception for the Extravehicular Activity Helper/Retriever robotic system in tests aboard NASA's KC135 reduced gravity aircraft.

  19. A Learner-Centered Approach for Training Science Teachers through Virtual Reality and 3D Visualization Technologies: Practical Experience for Sharing

    ERIC Educational Resources Information Center

    Yeung, Yau-Yuen

    2004-01-01

    This paper presentation will report on how some science educators at the Science Department of The Hong Kong Institute of Education have successfully employed an array of innovative learning media such as three-dimensional (3D) and virtual reality (VR) technologies to create seven sets of resource kits, most of which are being placed on the…

  20. Adult Learning Strategies and Approaches (ALSA). Resources for Teachers of Adults. A Handbook of Practical Advice on Audio-Visual Aids and Educational Technology for Tutors and Organisers.

    ERIC Educational Resources Information Center

    Cummins, John; And Others

    This handbook is part of a British series of publications written for part-time tutors, volunteers, organizers, and trainers in the adult continuing education and training sectors. It offers practical advice on audiovisual aids and educational technology for tutors and organizers. The first chapter discusses how one learns. Chapter 2 addresses how…

  1. Assessment of a visually guided autonomous exploration robot

    NASA Astrophysics Data System (ADS)

    Harris, C.; Evans, R.; Tidey, E.

    2008-10-01

    A system has been developed to enable a robot vehicle to autonomously explore and map an indoor environment using only visual sensors. The vehicle is equipped with a single camera, whose output is wirelessly transmitted to an off-board standard PC for processing. Visual features within the camera imagery are extracted and tracked, and their 3D positions are calculated using a Structure from Motion algorithm. As the vehicle travels, obstacles in its surroundings are identified and a map of the explored region is generated. This paper discusses suitable criteria for assessing the performance of the system by computer-based simulation and practical experiments with a real vehicle. Performance measures identified include the positional accuracy of the 3D map and the vehicle's location, the efficiency and completeness of the exploration and the system reliability. Selected results are presented and the effect of key system parameters and algorithms on performance is assessed. This work was funded by the Systems Engineering for Autonomous Systems (SEAS) Defence Technology Centre established by the UK Ministry of Defence.

  2. Sort-First, Distributed Memory Parallel Visualization and Rendering

    SciTech Connect

    Bethel, E. Wes; Humphreys, Greg; Paul, Brian; Brederson, J. Dean

    2003-07-15

    While commodity computing and graphics hardware has increased in capacity and dropped in cost, it is still quite difficult to make effective use of such systems for general-purpose parallel visualization and graphics. We describe the results of a recent project that provides a software infrastructure suitable for general-purpose use by parallel visualization and graphics applications. Our work combines and extends two technologies: Chromium, a stream-oriented framework that implements the OpenGL programming interface; and OpenRM Scene Graph, a pipelined-parallel scene graph interface for graphics data management. Using this combination, we implement a sort-first, distributed memory, parallel volume rendering application. We describe the performance characteristics in terms of bandwidth requirements and highlight key algorithmic considerations needed to implement the sort-first system. We characterize system performance using a distributed memory parallel volume rendering application, and present performance gains realized by using scene-specific knowledge to accelerate rendering through reduced network bandwidth. The contribution of this work is an exploration of general-purpose, sort-first architecture performance characteristics as applied to distributed memory, commodity hardware, along with a description of the algorithmic support needed to realize parallel, sort-first implementations.

  3. Volumetric feature extraction and visualization of tomographic molecular imaging.

    PubMed

    Bajaj, Chandrajit; Yu, Zeyun; Auer, Manfred

    2003-01-01

    Electron tomography is useful for studying large macromolecular complexes within their cellular context. The associated problems include crowding and complexity. Data exploration and 3D visualization of complexes require rendering of tomograms as well as extraction of all features of interest. We present algorithms for fully automatic boundary segmentation and skeletonization, and demonstrate their applications in feature extraction and visualization of cell and molecular tomographic imaging. We also introduce an interactive volumetric exploration and visualization tool (Volume Rover), which encapsulates implementations of the above volumetric image processing algorithms, and additionally uses efficient multi-resolution interactive geometry and volume rendering techniques for interactive visualization. PMID:14643216

  4. [Researches of soil normalized difference water index (NDWI) of Yongding River based on multispectral remote sensing technology combined with genetic algorithm].

    PubMed

    Mao, Hai-ying; Feng, Zhong-ke; Gong, Yin-xi; Yu, Jing-xin

    2014-06-01

    Basin soil type, soil moisture content and vegetation cover index are important factors affecting the water resources of the Yongding River basin. Investigating soil moisture and watershed soil type with traditional sampling methods not only consumes considerable manpower and material resources but also introduces experimental error from the instruments and other objective factors. This article takes the Beijing section of the Yongding River Basin as the study area, using total-station field surveys of 34 sample plots combined with six TM images from 1978 to 2009 to extract soil information and the relationships between the region's soil type, soil moisture and remote sensing factors. A genetic algorithm with normalization is used to select the key factors influencing NDWI, the normalized ratio index of the green and near-infrared bands that is commonly used to extract water information from imagery. To screen the factors related to soil moisture accurately, the genetic algorithm's selection of preferred characteristics accelerates convergence by controlling the number of iterations used to filter key factors. A multiple regression method is then used to establish an NDWI inversion model; the analysis accuracy of the model is 0.987, and an additional accuracy test shows that soil available nitrogen, phosphorus and potassium content are not obviously correlated with longitude but are positively correlated with latitude and soil, with an internal precision of 87.6% when the number of iterations reaches the optimal value Maxgen. Comparing NDWI values calculated from the models relating NDWI to vegetation cover, topography, climate, etc., obtained through remote sensing and field survey, with the traditional values gives an average relative error E of -0.021% and an agreement rate P of 87.54%. The establishment of this model will provide a better practical and theoretical basis for the research and analysis of the watershed soil
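
    For reference, the NDWI mentioned above is the normalized ratio of the green and near-infrared bands; a minimal sketch is shown below, with placeholder reflectance arrays standing in for the Landsat TM green and near-infrared bands (the values are illustrative assumptions).

        import numpy as np

        def ndwi(green, nir, eps=1e-9):
            """Normalized difference water index: (green - nir) / (green + nir)."""
            green = np.asarray(green, dtype=float)
            nir = np.asarray(nir, dtype=float)
            return (green - nir) / (green + nir + eps)

        # Placeholder reflectances: the first row mimics vegetation/soil pixels,
        # the second row mimics water pixels.
        green = np.array([[0.10, 0.08], [0.30, 0.28]])
        nir = np.array([[0.30, 0.27], [0.05, 0.06]])
        print(ndwi(green, nir))   # negative over vegetation/soil, positive over water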

  5. GROTTO visualization for decision support

    NASA Astrophysics Data System (ADS)

    Lanzagorta, Marco O.; Kuo, Eddy; Uhlmann, Jeffrey K.

    1998-08-01

    In this paper we describe the GROTTO visualization projects being carried out at the Naval Research Laboratory. GROTTO is a CAVE-like system, that is, a surround-screen, surround-sound, immersive virtual reality device. We have explored GROTTO visualization in a variety of scientific areas including oceanography, meteorology, chemistry, biochemistry, computational fluid dynamics and space sciences. Research has emphasized the applications of GROTTO visualization for military, land and sea-based command and control. Examples include the visualization of ocean current models for the simulation and study of mine drifting and, inside our computational steering project, the effects of electro-magnetic radiation on missile defense satellites. We discuss plans to apply this technology to decision support applications involving the deployment of autonomous vehicles into contaminated battlefield environments, fire fighter control and hostage rescue operations.

  6. A reference guide for tree analysis and visualization

    PubMed Central

    2010-01-01

    The quantities of data obtained by the new high-throughput technologies, such as microarrays or ChIP-Chip arrays, and the large-scale OMICS-approaches, such as genomics, proteomics and transcriptomics, are becoming vast. Sequencing technologies become cheaper and easier to use, and thus large-scale evolutionary studies towards the origins of life for all species and their evolution become more and more challenging. Databases holding information about how data are related and how they are hierarchically organized expand rapidly. Clustering analysis is becoming more and more difficult to apply to very large amounts of data, since the results of these algorithms cannot be efficiently visualized. Most of the available visualization tools that are able to represent such hierarchies project data in 2D and often lack the necessary user-friendliness and interactivity. For example, current phylogenetic tree visualization tools are not able to display easy-to-understand large-scale trees with more than a few thousand nodes. In this study, we review tools that are currently available for the visualization and analysis of biological trees, mainly developed during the last decade. We describe the uniform and standard computer-readable formats used to represent tree hierarchies, and we comment on the functionality and the limitations of these tools. We also discuss how these tools can be developed further and should become integrated with various data sources. Here we focus on freely available software that offers users various tree-representation methodologies for biological data analysis. PMID:20175922

  7. High Performance Visualization using Query-Driven Visualizationand Analytics

    SciTech Connect

    Bethel, E. Wes; Campbell, Scott; Dart, Eli; Shalf, John; Stockinger, Kurt; Wu, Kesheng

    2006-06-15

    Query-driven visualization and analytics is a unique approach for high-performance visualization that offers new capabilities for knowledge discovery and hypothesis testing. The new capabilities, akin to finding needles in haystacks, are the result of combining technologies from the fields of scientific visualization and scientific data management. This approach is crucial for rapid data analysis and visualization in the petascale regime. This article describes how query-driven visualization is applied to a hero-sized network traffic analysis problem.

  8. Objectifying "Pain" in the Modern Neurosciences: A Historical Account of the Visualization Technologies Used in the Development of an "Algesiogenic Pathology", 1850 to 2000.

    PubMed

    Stahnisch, Frank W

    2015-01-01

    Particularly with the fundamental works of the Leipzig school of experimental psychophysiology (between the 1850s and 1880s), the modern neurosciences witnessed an increasing interest in attempts to objectify "pain" as a bodily signal and physiological value. This development has led to refined psychological test repertoires and new clinical measurement techniques, which became progressively paired with imaging approaches and sophisticated theories about neuropathological pain etiology. With the advent of electroencephalography since the middle of the 20th century, and through the use of brain stimulation technologies and modern neuroimaging, the chosen scientific route towards an ever more refined "objectification" of pain phenomena took firm root in Western medicine. This article provides a broad overview of landmark events and key imaging technologies, which represent the long developmental path of a field that could be called "algesiogenic pathology." PMID:26593953

  9. Objectifying “Pain” in the Modern Neurosciences: A Historical Account of the Visualization Technologies Used in the Development of an “Algesiogenic Pathology”, 1850 to 2000

    PubMed Central

    Stahnisch, Frank W.

    2015-01-01

    Particularly with the fundamental works of the Leipzig school of experimental psychophysiology (between the 1850s and 1880s), the modern neurosciences witnessed an increasing interest in attempts to objectify “pain” as a bodily signal and physiological value. This development has led to refined psychological test repertoires and new clinical measurement techniques, which became progressively paired with imaging approaches and sophisticated theories about neuropathological pain etiology. With the advent of electroencephalography since the middle of the 20th century, and through the use of brain stimulation technologies and modern neuroimaging, the chosen scientific route towards an ever more refined “objectification” of pain phenomena took firm root in Western medicine. This article provides a broad overview of landmark events and key imaging technologies, which represent the long developmental path of a field that could be called “algesiogenic pathology.” PMID:26593953

  10. Visualization rhetoric: framing effects in narrative visualization.

    PubMed

    Hullman, Jessica; Diakopoulos, Nicholas

    2011-12-01

    Narrative visualizations combine conventions of communicative and exploratory information visualization to convey an intended story. We demonstrate visualization rhetoric as an analytical framework for understanding how design techniques that prioritize particular interpretations in visualizations that "tell a story" can significantly affect end-user interpretation. We draw a parallel between narrative visualization interpretation and evidence from framing studies in political messaging, decision-making, and literary studies. Devices for understanding the rhetorical nature of narrative information visualizations are presented, informed by the rigorous application of concepts from critical theory, semiotics, journalism, and political theory. We draw attention to how design tactics represent additions or omissions of information at various levels-the data, visual representation, textual annotations, and interactivity-and how visualizations denote and connote phenomena with reference to unstated viewing conventions and codes. Classes of rhetorical techniques identified via a systematic analysis of recent narrative visualizations are presented, and characterized according to their rhetorical contribution to the visualization. We describe how designers and researchers can benefit from the potentially positive aspects of visualization rhetoric in designing engaging, layered narrative visualizations and how our framework can shed light on how a visualization design prioritizes specific interpretations. We identify areas where future inquiry into visualization rhetoric can improve understanding of visualization interpretation. PMID:22034342

  11. Linear Bregman algorithm implemented in parallel GPU

    NASA Astrophysics Data System (ADS)

    Li, Pengyan; Ke, Jue; Sui, Dong; Wei, Ping

    2015-08-01

    At present, most compressed sensing (CS) algorithms have poor converging speed and are thus difficult to run on a PC. To deal with this issue, we use a parallel GPU to implement a broadly used compressed sensing algorithm, the linear Bregman algorithm. The linear iterative Bregman algorithm is a reconstruction algorithm proposed by Osher and Cai. Compared with other CS reconstruction algorithms, the linear Bregman algorithm only involves vector and matrix multiplication and a thresholding operation, and is simpler and more efficient to program. We use C as the development language and adopt CUDA (Compute Unified Device Architecture) as the parallel computing architecture. In this paper, we compare the parallel Bregman algorithm with the traditional CPU implementation of the Bregman algorithm. In addition, we also compare the parallel Bregman algorithm with other CS reconstruction algorithms, such as the OMP and TwIST algorithms. Compared with these two algorithms, the results show that the parallel Bregman algorithm needs less time, and thus is more convenient for real-time object reconstruction, which is important given the fast-growing demands of information technology.
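    As a reference for the iteration the abstract describes, here is a minimal NumPy sketch of the linearized Bregman method for min ||x||_1 subject to Ax = b; it uses only matrix-vector products and soft thresholding, as the abstract notes. The step sizes mu and delta and the fixed iteration count are illustrative assumptions, and a GPU version would replace these array operations with CUDA kernels as the paper does.

        import numpy as np

        def soft_threshold(v, mu):
            # Component-wise shrinkage operator used by the Bregman iteration.
            return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

        def linearized_bregman(A, b, mu=1.0, delta=1.0, iters=500):
            # Recover a sparse x with A @ x ~= b using only mat-vec products
            # and thresholding; x and the auxiliary variable v start at zero.
            m, n = A.shape
            x = np.zeros(n)
            v = np.zeros(n)
            for _ in range(iters):
                v += A.T @ (b - A @ x)              # residual-driven update
                x = delta * soft_threshold(v, mu)   # shrinkage step
            return x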

  12. Visualizers, Visualizations, and Visualizees: Differences in Meaning-Making by Scientific Experts and Novices from Global Visualizations of Ocean Data

    ERIC Educational Resources Information Center

    Stofer, Kathryn A.

    2013-01-01

    Data visualizations designed for academic scientists are not immediately meaningful to everyday scientists. Communicating between a specialized, expert audience and a general, novice public is non-trivial; it requires careful translation. However, more widely available visualization technologies and platforms, including new three-dimensional…

  13. Interactive Terascale Particle Visualization

    NASA Technical Reports Server (NTRS)

    Ellsworth, David; Green, Bryan; Moran, Patrick

    2004-01-01

    This paper describes the methods used to produce an interactive visualization of a 2 TB computational fluid dynamics (CFD) data set using particle tracing (streaklines). We use the method introduced by Bruckschen et al. [2001] that pre-computes a large number of particles, stores them on disk using a space-filling curve ordering that minimizes seeks, and then retrieves and displays the particles according to the user's command. We describe how the particle computation can be performed using a PC cluster, how the algorithm can be adapted to work with a multi-block curvilinear mesh, and how the out-of-core visualization can be scaled to 296 billion particles while still achieving interactive performance on PC hardware. Compared to the earlier work, our data set size and total number of particles are an order of magnitude larger. We also describe a new compression technique that allows the lossless compression of the particles by 41% and speeds the particle retrieval by about 30%.
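    The abstract's space-filling-curve ordering can be illustrated with a Morton (Z-order) key computed from quantized particle positions: sorting by this key keeps spatial neighbors close on disk and reduces seeks. The 10-bit quantization and the bit-spreading constants below are generic choices, not the layout actually used in the paper.

        import numpy as np

        def _spread_bits(v):
            # Interleave two zero bits between each of the low 10 bits of v.
            v &= 0x3FF
            v = (v | (v << 16)) & 0x030000FF
            v = (v | (v << 8)) & 0x0300F00F
            v = (v | (v << 4)) & 0x030C30C3
            v = (v | (v << 2)) & 0x09249249
            return v

        def morton_key(x, y, z, bits=10):
            # Z-order key for a particle with coordinates normalized to [0, 1).
            scale = (1 << bits) - 1
            ix, iy, iz = (int(c * scale) for c in (x, y, z))
            return _spread_bits(ix) | (_spread_bits(iy) << 1) | (_spread_bits(iz) << 2)

        particles = np.random.rand(1000, 3)
        order = np.argsort([morton_key(*p) for p in particles])
        particles_on_disk = particles[order]   # spatial neighbors end up near each other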

  14. Visualization of Seifert surfaces.

    PubMed

    van Wijk, Jarke J; Cohen, Arjeh M

    2006-01-01

    The genus of a knot or link can be defined via Seifert surfaces. A Seifert surface of a knot or link is an oriented surface whose boundary coincides with that knot or link. Schematic images of these surfaces are shown in every text book on knot theory, but from these it is hard to understand their shape and structure. In this paper, the visualization of such surfaces is discussed. A method is presented to produce different styles of surface for knots and links, starting from the so-called braid representation. Application of Seifert's algorithm leads to depictions that show the structure of the knot and the surface, while successive relaxation via a physically based model gives shapes that are natural and resemble the familiar representations of knots. Also, we present how to generate closed oriented surfaces in which the knot is embedded, such that the knot subdivides the surface into two parts. These closed surfaces provide a direct visualization of the genus of a knot. All methods have been integrated in a freely available tool, called SeifertView, which can be used for educational and presentation purposes. PMID:16805258

  15. Visual Literacy: An Institutional Imperative

    ERIC Educational Resources Information Center

    Metros, Susan E.; Woolsey, Kristina

    2006-01-01

    Academics have a long history of claiming and defending the superiority of verbal over visual for representing knowledge. By dismissing imagery as mere decoration, they have upheld the sanctity of print for academic discourse. However, in the last decade, digital technologies have broken down the barriers between words and pictures, and many of…

  16. Visualization Tools for Adaptive Mesh Refinement Data

    SciTech Connect

    Weber, Gunther H.; Beckner, Vincent E.; Childs, Hank; Ligocki,Terry J.; Miller, Mark C.; Van Straalen, Brian; Bethel, E. Wes

    2007-05-09

    Adaptive Mesh Refinement (AMR) is a highly effective method for simulations that span a large range of spatiotemporal scales, such as astrophysical simulations that must accommodate ranges from interstellar to sub-planetary. Most mainstream visualization tools still lack support for AMR as a first class data type and AMR code teams use custom built applications for AMR visualization. The Department of Energy's (DOE's) Science Discovery through Advanced Computing (SciDAC) Visualization and Analytics Center for Enabling Technologies (VACET) is currently working on extending VisIt, which is an open source visualization tool that accommodates AMR as a first-class data type. These efforts will bridge the gap between general-purpose visualization applications and highly specialized AMR visual analysis applications. Here, we give an overview of the state of the art in AMR visualization research and tools and describe how VisIt currently handles AMR data.

  17. Visualization of Scalar Adaptive Mesh Refinement Data

    SciTech Connect

    VACET; Weber, Gunther; Weber, Gunther H.; Beckner, Vince E.; Childs, Hank; Ligocki, Terry J.; Miller, Mark C.; Van Straalen, Brian; Bethel, E. Wes

    2007-12-06

    Adaptive Mesh Refinement (AMR) is a highly effective computation method for simulations that span a large range of spatiotemporal scales, such as astrophysical simulations, which must accommodate ranges from interstellar to sub-planetary. Most mainstream visualization tools still lack support for AMR grids as a first class data type and AMR code teams use custom built applications for AMR visualization. The Department of Energy's (DOE's) Science Discovery through Advanced Computing (SciDAC) Visualization and Analytics Center for Enabling Technologies (VACET) is currently working on extending VisIt, which is an open source visualization tool that accommodates AMR as a first-class data type. These efforts will bridge the gap between general-purpose visualization applications and highly specialized AMR visual analysis applications. Here, we give an overview of the state of the art in AMR scalar data visualization research.

  18. Visualizing Music

    ERIC Educational Resources Information Center

    Overby, Alexandra

    2009-01-01

    Music has always been an important aspect of teenage life, but with the portability of the newest technological devices, it is harder and harder to separate students from their musical influences. In this article, the author describes a lesson wherein she incorporated their love of song into an engaging art project. In this lesson, she had…

  19. Dynamic Visualization of Co-expression in Systems Genetics Data

    SciTech Connect

    New, Joshua Ryan; Huang, Jian; Chesler, Elissa J

    2008-01-01

    Biologists hope to address grand scientific challenges by exploring the abundance of data made available through modern microarray technology and other high-throughput techniques. The impact of this data, however, is limited unless researchers can effectively assimilate such complex information and integrate it into their daily research; interactive visualization tools are called for to support the effort. Specifically, typical studies of gene co-expression require novel visualization tools that enable the dynamic formulation and fine-tuning of hypotheses to aid the process of evaluating sensitivity of key parameters. These tools should allow biologists to develop an intuitive understanding of the structure of biological networks and discover genes which reside in critical positions in networks and pathways. By using a graph as a universal data representation of correlation in gene expression data, our novel visualization tool employs several techniques that when used in an integrated manner provide innovative analytical capabilities. Our tool for interacting with gene co-expression data integrates techniques such as: graph layout, qualitative subgraph extraction through a novel 2D user interface, quantitative subgraph extraction using graph-theoretic algorithms or by querying an optimized b-tree, dynamic level-of-detail graph abstraction, and template-based fuzzy classification using neural networks. We demonstrate our system using a real-world workflow from a large-scale, systems genetics study of mammalian gene co-expression.

  20. Infrared image enhancement based on human visual properties

    NASA Astrophysics Data System (ADS)

    Chen, Hongyu; Hui, Bin

    2015-10-01

    With the development of modern military applications, infrared imaging technology is widely used in this field. However, limited by the mechanism of infrared imaging and the detector, infrared images have the disadvantages of low contrast and blurry edges in comparison with visible images. These shortcomings make infrared images unsuitable for observation by both humans and computers, so image enhancement is required. Traditional image enhancement methods applied to infrared images, which do not take human visual properties into account, are not convenient for human observation. This article proposes a new method that combines a layering idea with human visual properties to enhance the infrared image. The proposed method relies on bilateral filtering to separate a base component, which contains the large amplitude signal and must be compressed, from a detail component, which must be expanded because it contains the small signal variations related to fine texture. The base component is mapped into the proper 8-bit range using the human visual properties, and the detail component is processed with an adaptive gain control method. Finally, the two parts are recombined and quantized to the 8-bit domain. Experimental results show that this algorithm exceeds most current image enhancement methods in solving the problems of low contrast and blurry detail.
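    A small sketch of the base/detail decomposition the abstract outlines, using OpenCV's bilateral filter: the base layer is compressed into the displayable range and the detail layer is amplified before recombination. The sigma values, gain, and output range are illustrative assumptions, not the paper's tuned parameters.

        import cv2
        import numpy as np

        def enhance_ir(raw, sigma_space=9, sigma_range=0.05, detail_gain=3.0):
            # Normalize the raw IR frame to [0, 1] before filtering.
            img = raw.astype(np.float32)
            img = (img - img.min()) / (img.max() - img.min() + 1e-6)
            # Base layer: large-amplitude signal; detail layer: fine texture.
            base = cv2.bilateralFilter(img, d=-1, sigmaColor=sigma_range, sigmaSpace=sigma_space)
            detail = img - base
            # Compress the base into a reduced range, boost the detail, recombine to 8 bits.
            base_mapped = cv2.normalize(base, None, 0.1, 0.9, cv2.NORM_MINMAX)
            out = np.clip(base_mapped + detail_gain * detail, 0.0, 1.0)
            return (out * 255).astype(np.uint8)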

  1. Visual attention on the sphere.

    PubMed

    Bogdanova, Iva; Bur, Alexandre; Hugli, Heinz

    2008-11-01

    The human visual system makes extensive use of visual attention in order to select the most relevant information and speed up the vision process. Inspired by visual attention, several computer models have been developed and many computer vision applications rely today on such models. However, existing algorithms are not suitable for omnidirectional images, which contain a significant amount of geometrical distortion. In this paper, we present a novel computational approach that operates in spherical geometry and thus is suitable for omnidirectional images. Following one of the existing models of visual attention, the spherical saliency map is obtained by fusing together intensity, chromatic, and orientation spherical cue conspicuity maps that are themselves obtained through multiscale analysis on the sphere. Finally, the consecutive maxima in the spherical saliency map represent the spots of attention on the sphere. In the experimental part, the proposed method is compared to the standard one using a synthetic image. Also, we provide examples of spot detection in real omnidirectional scenes which show its advantages. Finally, an experiment illustrates the homogeneity of the detected visual attention in omnidirectional images. PMID:18854253
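    A planar sketch of the fusion step the abstract describes: conspicuity maps are normalized, averaged into a saliency map, and the strongest local maxima become the spots of attention. The paper performs the analogous operations in spherical geometry; the neighborhood size and number of spots below are illustrative assumptions.

        import numpy as np
        from scipy.ndimage import maximum_filter

        def saliency(intensity_map, color_map, orientation_map):
            # Normalize each conspicuity map to [0, 1] and average them.
            def norm(m):
                return (m - m.min()) / (m.max() - m.min() + 1e-9)
            return (norm(intensity_map) + norm(color_map) + norm(orientation_map)) / 3.0

        def spots_of_attention(sal, n_spots=5, neighborhood=15):
            # Keep pixels that are the maximum of their local neighborhood,
            # then return the n strongest as (row, col) spots.
            local_max = sal == maximum_filter(sal, size=neighborhood)
            rows, cols = np.nonzero(local_max)
            order = np.argsort(sal[rows, cols])[::-1][:n_spots]
            return list(zip(rows[order], cols[order]))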

  2. Visual search engine for product images

    NASA Astrophysics Data System (ADS)

    Lin, Xiaofan; Gokturk, Burak; Sumengen, Baris; Vu, Diem

    2008-01-01

    Nowadays there are many product comparison web sites, but most of them only use text information. This paper introduces a novel visual search engine for product images, which provides a brand-new way of visually locating products through Content-based Image Retrieval (CBIR) technology. We discuss the unique technical challenges, solutions, and experimental results in the design and implementation of this system.

  3. Introduction and Overview: Visualization, Retrieval, and Knowledge.

    ERIC Educational Resources Information Center

    Rorvig, Mark; Lunin, Lois F.

    1999-01-01

    Describes this perspectives issue that was designed to provide an historical background to visualization in information retrieval. Topics include knowledge, digital technology, the first visual interface to a collection at NASA (National Aeronautics and Space Administration), theoretical foundations, and applications. (LRW)

  4. Interfaces Visualize Data for Airline Safety, Efficiency

    NASA Technical Reports Server (NTRS)

    2014-01-01

    As the A-Train Constellation orbits Earth to gather data, NASA scientists and partners visualize, analyze, and communicate the information. To this end, Langley Research Center awarded SBIR funding to Fairfax, Virginia-based WxAnalyst Ltd. to refine the company's existing user interface for Google Earth to visualize data. Hawaiian Airlines is now using the technology to help manage its flights.

  5. Resources for Visually Impaired or Blind Students.

    ERIC Educational Resources Information Center

    Hart, Elizabeth

    2000-01-01

    Suggests resources for school librarians who need materials for visually impaired or blind students. Highlights include the National Library Service for the Blind and Physically Handicapped; Louis Database of Accessible Materials for People Who Are Blind or Visually Impaired; Braille books; large print books, audio books; assistive technology; and…

  6. Designing Effective Visualizations for Elementary School Science

    ERIC Educational Resources Information Center

    Kali, Yael; Linn, Marcia C.

    2008-01-01

    Research has shown that technology-enhanced visualizations can improve inquiry learning in science when they are designed to support knowledge integration. Visualizations play an especially important role in supporting science learning at elementary and middle school levels because they can make unseen and complex processes visible. We identify 4…

  7. Visual embedding: a model for visualization.

    PubMed

    Demiralp, Çağatay; Scheidegger, Carlos E; Kindlmann, Gordon L; Laidlaw, David H; Heer, Jeffrey

    2014-01-01

    The authors propose visual embedding as a model for automatically generating and evaluating visualizations. A visual embedding is a function from data points to a space of visual primitives that measurably preserves structures in the data (domain) within the mapped perceptual space (range). The authors demonstrate its use with three examples: coloring of neural tracts, scatterplots with icons, and evaluation of alternative diffusion tensor glyphs. They discuss several techniques for generating visual-embedding functions, including probabilistic graphical models for embedding in discrete visual spaces. They also describe two complementary approaches--crowdsourcing and visual product spaces--for building visual spaces with associated perceptual-distance measures. In addition, they recommend several research directions for further developing the visual-embedding model. PMID:24808163

  8. Parallelized Dilate Algorithm for Remote Sensing Image

    PubMed Central

    Zhang, Suli; Hu, Haoran; Pan, Xin

    2014-01-01

    The dilate operation is an important algorithm that can give a more connected view of a remote sensing image containing broken lines or objects. However, with the technological progress of satellite sensors, the resolution of remote sensing images has been increasing and their data volumes have become very large. This can slow the algorithm down or prevent it from producing a result within limited memory or time. To solve this problem, our research proposes a parallelized dilate algorithm for remote sensing images based on MPI and MP. Experiments show that our method runs faster than the traditional single-process algorithm. PMID:24955392
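    The decomposition behind the record above can be sketched as a row-strip split with a one-pixel halo so each strip can be dilated independently and the results stitched back together. This version uses a Python process pool as a stand-in for the MPI processes in the paper, and the 3x3 structuring element and strip count are assumptions.

        import numpy as np
        from concurrent.futures import ProcessPoolExecutor
        from scipy.ndimage import binary_dilation

        STRUCT = np.ones((3, 3), dtype=bool)   # 8-connected structuring element

        def _dilate_strip(strip):
            # Dilate one strip (including a halo row above and below), then drop the halo.
            return binary_dilation(strip, structure=STRUCT)[1:-1]

        def parallel_dilate(image, n_parts=4):
            # Pad the raster so every strip has a halo, split into row strips,
            # dilate each strip in its own process, and stitch the results.
            padded = np.pad(image, 1, mode="constant")
            bounds = np.linspace(1, padded.shape[0] - 1, n_parts + 1, dtype=int)
            strips = [padded[lo - 1:hi + 1, :] for lo, hi in zip(bounds[:-1], bounds[1:])]
            with ProcessPoolExecutor(max_workers=n_parts) as pool:
                parts = list(pool.map(_dilate_strip, strips))
            return np.vstack(parts)[:, 1:-1]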

  9. The modeling of miniature UAV flight visualization simulation platform

    NASA Astrophysics Data System (ADS)

    Li, Dong-hui; Li, Xin; Yang, Le-le; Li, Xiong

    2015-12-01

    This paper combines virtual reality technology with visual simulation theory to construct the framework of a visual simulation platform, applying the open source FlightGear simulator combined with GoogleEarth to design a small UAV flight visual simulation platform. The AC3D software is used to build 3D models of the aircraft, and model loading is completed based on an XML configuration; on this basis, the design and simulation of the visualization modeling platform are presented. Using model-driven updates and data transformation in FlightGear, the data transmission module is realized on the Visual Studio 2010 development platform. Finally, combined with GoogleEarth, the platform can achieve tracking and display.

  10. Global Radiological Source Sorting, Tracking, and Monitoring (Gradsstram) Using Emerging RFID and Web 2.0 Technologies to Provide Total Asset and Information Visualization, Paper at 2009 INMM

    SciTech Connect

    Walker, Randy M.; Kopsick, Deborah A.; Gorman, Bryan L.; Ganguly, Auroop R.; Ferren, Mitch; Shankar, Mallikarjun

    2009-01-01

    Background Thousands of shipments of radioisotopes developed in the United States (U.S.) are transported domestically and internationally for medical and industrial applications, including to partner laboratories in European Union (EU) countries. Over the past five years, the Environmental Protection Agency (EPA), the Department of Energy (DOE), and Oak Ridge National Laboratory (ORNL)1 have worked with state first responder personnel, key private sector supply chain stakeholders, the Department of Homeland Security (DHS), the Department of Transportation (DOT), the Department of Defense (DoD) and the Nuclear Regulatory Commission (NRC) on Radio Frequency Identification (RFID) tracking and monitoring of medical, research and industrial radioisotopes in commerce. ORNL was the pioneer of the international radioisotope shipping and production business. Most radioisotopes made and used today were either made or discovered at ORNL. While most of the radioisotopes used in the commercial sector are now produced and sold by the private market, ORNL still leads the world in the production of exotic, high-value and/or sensitive industrial, medical and research isotopes. The ORNL-EPA-DOE Radiological Source Tracking and Monitoring (RadSTraM) project tested, evaluated, and integrated RFID technologies in laboratory settings and at multiple private-sector shipping and distribution facilities (Perkin Elmer and DHL) to track and monitor common radioisotopes used in everyday commerce. The RFID tracking capability was also tested in association with other deployed technologies including radiation detection, chemical/explosives detection, advanced imaging, lasers, and infrared scanning. At the 2007 EU-U.S. Summit, the leaders of the US Department of Commerce (DOC) and EU European Commission (EC) committed to pursue jointly directed Lighthouse Priority Projects. These projects are intended to foster cooperation and reduce regulatory burdens with respect to transatlantic commerce. The

  11. Entry vehicle performance analysis and atmospheric guidance algorithm for precision landing on Mars. M.S. Thesis - Massachusetts Inst. of Technology

    NASA Technical Reports Server (NTRS)

    Dieriam, Todd A.

    1990-01-01

    Future missions to Mars may require pin-point landing precision, possibly on the order of tens of meters. The ability to reach a target while meeting a dynamic pressure constraint to ensure safe parachute deployment is complicated at Mars by low atmospheric density, high atmospheric uncertainty, and the desire to employ only bank angle control. The vehicle aerodynamic performance requirements and guidance necessary for a 0.5 to 1.5 lift-to-drag ratio vehicle to maximize the achievable footprint while meeting the constraints are examined. A parametric study of the various factors related to entry vehicle performance in the Mars environment is undertaken to develop general vehicle aerodynamic design requirements. The combination of low lift-to-drag ratio and low atmospheric density at Mars results in a large phugoid motion involving the dynamic pressure which complicates trajectory control. Vehicle ballistic coefficient is demonstrated to be the predominant characteristic affecting final dynamic pressure. Additionally, a speed brake is shown to be ineffective at reducing the final dynamic pressure. An adaptive precision entry atmospheric guidance scheme is presented. The guidance uses a numeric predictor-corrector algorithm to control downrange, an azimuth controller to govern crossrange, and an analytic control law to reduce the final dynamic pressure. Guidance performance is tested against a variety of dispersions, and the results from selected tests are presented. Precision entry using bank angle control only is demonstrated to be feasible at Mars.

  12. Accessing and visualizing scientific spatiotemporal data

    NASA Technical Reports Server (NTRS)

    Katz, Daniel S.; Bergou, Attila; Berriman, G. Bruce; Block, Gary L.; Collier, Jim; Curkendall, David W.; Good, John; Husman, Laura; Jacob, Joseph C.; Laity, Anastasia; Li, P. Peggy; Miller, Craig; Prince, Tom; Siegel, Herb; Williams, Roy

    2004-01-01

    This paper discusses work done by JPL's Parallel Applications Technologies Group in helping scientists access and visualize very large data sets through the use of multiple computing resources, such as parallel supercomputers, clusters, and grids.

  13. Applications and Tools for Design and Visualization.

    ERIC Educational Resources Information Center

    Hall, Kevin W.; Obregon, Rafael

    2002-01-01

    Describes visualization tools such as VRML, Tool Command Language, iPIX, QuickTIme, and Synchronized Multimedia Integration Language, which are increasingly used in manufacturing. Discusses their uses in technology education. (SK)

  14. Tactical visualization module

    NASA Astrophysics Data System (ADS)

    Kachejian, Kerry C.; Vujcic, Doug

    1999-07-01

    The Tactical Visualization Module (TVM) research effort will develop and demonstrate a portable, tactical information system to enhance the situational awareness of individual warfighters and small military units by providing real-time access to manned and unmanned aircraft, tactically mobile robots, and unattended sensors. TVM consists of a family of portable and hand-held devices being advanced into a next- generation, embedded capability. It enables warfighters to visualize the tactical situation by providing real-time video, imagery, maps, floor plans, and 'fly-through' video on demand. When combined with unattended ground sensors, such as Combat- Q, TVM permits warfighters to validate and verify tactical targets. The use of TVM results in faster target engagement times, increased survivability, and reduction of the potential for fratricide. TVM technology can support both mounted and dismounted tactical forces involved in land, sea, and air warfighting operations. As a PCMCIA card, TVM can be embedded in portable, hand-held, and wearable PCs. Thus, it leverages emerging tactical displays including flat-panel, head-mounted displays. The end result of the program will be the demonstration of the system with U.S. Army and USMC personnel in an operational environment. Raytheon Systems Company, the U.S. Army Soldier Systems Command -- Natick RDE Center (SSCOM- NRDEC) and the Defense Advanced Research Projects Agency (DARPA) are partners in developing and demonstrating the TVM technology.

  15. Illustrative visualization of 3D city models

    NASA Astrophysics Data System (ADS)

    Doellner, Juergen; Buchholz, Henrik; Nienhaus, Marc; Kirsch, Florian

    2005-03-01

    This paper presents an illustrative visualization technique that provides expressive representations of large-scale 3D city models, inspired by the tradition of artistic and cartographic visualizations typically found in bird's-eye view and panoramic maps. We define a collection of city model components and a real-time multi-pass rendering algorithm that achieves comprehensible, abstract 3D city model depictions based on edge enhancement, color-based and shadow-based depth cues, and procedural facade texturing. Illustrative visualization provides an effective visual interface to urban spatial information and associated thematic information, complementing visual interfaces based on the Virtual Reality paradigm and offering a huge potential for graphics design. Primary application areas include city and landscape planning, cartoon worlds in computer games, and tourist information systems.

  16. Visualization and Data Analysis at the Exascale

    SciTech Connect

    Ahrens, James P.

    2011-01-01

    The scope of our working group is scientific visualization and data analysis. Scientific visualization refers to the process of transforming scientific simulation and experimental data into images to facilitate visual understanding. Data analysis refers to the process of transforming data into an information-rich form via mathematical or computational algorithms to promote better understanding. We share scope on data management with the Storage group. Data management refers to the process of tracking, organizing and enhancing the use of scientific data. The purpose of our work is to enable scientific discovery and understanding. Visualization and data analysis have a broad scope: they are an integral part of scientific simulations and experiments, and also a distinct, separate service for scientific discovery and documentation purposes. Our scope includes an exascale software and hardware infrastructure that effectively supports visualization and data analysis.

  17. Why Teach Visual Culture?

    ERIC Educational Resources Information Center

    Passmore, Kaye

    2007-01-01

    Visual culture is a hot topic in art education right now as some teachers are dedicated to teaching it and others are adamant that it has no place in a traditional art class. Visual culture, the author asserts, can include just about anything that is visually represented. Although people often think of visual culture as contemporary visuals such…

  18. Concept of visual sensation.

    PubMed

    Bundesen, C

    1977-06-01

    A direct-realist account of visual sensation is outlined. The explanatory notion of elements in visual sensation (atomic sensations) is reinterpreted, and the suggested interpretation is formally justified by constructing a Boolean algebra for visual sensations. The related notion of sensory levels (visual field vs visual world) is discussed. PMID:887374

  19. Wavelet Algorithm for Feature Identification and Image Analysis

    2005-10-01

    WVL is a set of Python scripts based on the algorithm described in "A novel 3D wavelet-based filter for visualizing features in noisy biological data," W. C. Moss et al., J. Microsc. 219, 43-49 (2005)

  20. Recent Advances in VisIt: AMR Streamlines and Query-Driven Visualization

    SciTech Connect

    Weber, Gunther; Ahern, Sean; Bethel, Wes; Borovikov, Sergey; Childs, Hank; Deines, Eduard; Garth, Christoph; Hagen, Hans; Hamann, Bernd; Joy, Kenneth; Martin, David; Meredith, Jeremy; Prabhat; Pugmire, David; Rubel, Oliver; Van Straalen, Brian; Wu, Kesheng

    2009-11-12

    Adaptive Mesh Refinement (AMR) is a highly effective method for simulations spanning a large range of spatiotemporal scales such as those encountered in astrophysical simulations. Combining research in novel AMR visualization algorithms and basic infrastructure work, the Department of Energy's (DOE's) Science Discovery through Advanced Computing (SciDAC) Visualization and Analytics Center for Enabling Technologies (VACET) has extended VisIt, an open source visualization tool that can handle AMR data without converting it to alternate representations. This paper focuses on two recent advances in the development of VisIt. First, we have developed streamline computation methods that properly handle multi-domain data sets and effectively utilize multiple processors on parallel machines. Furthermore, we are working on streamline calculation methods that consider an AMR hierarchy, detect transitions from a lower resolution patch into a finer patch, and improve interpolation at level boundaries. Second, we focus on visualization of large-scale particle data sets. By integrating the DOE Scientific Data Management (SDM) Center's FastBit indexing technology into VisIt, we are able to reduce particle counts effectively by thresholding and by loading only those particles from disk that satisfy the thresholding criteria. Furthermore, using FastBit it becomes possible to compute parallel coordinate views efficiently, thus facilitating interactive data exploration of massive particle data sets.

  1. Image and video compression/decompression based on human visual perception system and transform coding

    SciTech Connect

    Fu, Chi Yung; Petrich, L. I.; Lee, M.

    1997-02-01

    The quantity of information has been growing exponentially, and the form and mix of information have been shifting into the image and video areas. However, neither the storage media nor the available bandwidth can accommodate the vastly expanding requirements for image information. A vital, enabling technology here is compression/decompression. Our compression work is based on a combination of feature-based algorithms inspired by the human visual-perception system (HVS), and some transform-based algorithms (such as our enhanced discrete cosine transform, wavelet transforms), vector quantization and neural networks. All our work was done on desktop workstations using the C++ programming language and commercially available software. During FY 1996, we explored and implemented enhanced feature-based algorithms, vector quantization, and neural-network-based compression technologies. For example, we improved the feature compression for our feature-based algorithms by a factor of two to ten, a substantial improvement. We also found some promising results when using neural networks and applying them to some video sequences. In addition, we also investigated objective measures to characterize compression results, because traditional means such as the peak signal-to-noise ratio (PSNR) are not adequate to fully characterize the results, since such measures do not take into account the details of human visual perception. We have successfully used our one-year LDRD funding as seed money to explore new research ideas and concepts, and the results of this work have led us to obtain external funding from the DoD. At this point, we are seeking matching funds from DOE to match the DoD funding so that we can bring such technologies to fruition. 9 figs., 2 tabs.
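    As a minimal illustration of the transform-coding side of the work described above, the sketch below applies a 2D DCT to an 8x8 block, keeps only the largest coefficients, and inverts the transform. The number of retained coefficients is an arbitrary knob, and the project's enhanced DCT, wavelet, vector-quantization, and HVS-based components are not represented here.

        import numpy as np
        from scipy.fft import dctn, idctn

        def compress_block(block, keep=10):
            # Forward 2D DCT of an 8x8 image block (orthonormal scaling).
            coeffs = dctn(block.astype(np.float64), norm="ortho")
            # Crude coefficient selection: zero out everything but the 'keep' largest.
            thresh = np.sort(np.abs(coeffs), axis=None)[-keep]
            coeffs[np.abs(coeffs) < thresh] = 0.0
            # Reconstruct the block from the surviving coefficients.
            return idctn(coeffs, norm="ortho")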

  2. Benchmarking image fusion algorithm performance

    NASA Astrophysics Data System (ADS)

    Howell, Christopher L.

    2012-06-01

    Registering two images produced by two separate imaging sensors having different detector sizes and fields of view requires one of the images to undergo transformation operations that may cause its overall quality to degrade with regard to visual task performance. This possible change in image quality could add to an already existing difference in measured task performance. Ideally, a fusion algorithm would take as input unaltered outputs from each respective sensor used in the process. Therefore, quantifying how well an image fusion algorithm performs should be baselined against whether the fusion algorithm retained the performance benefit achievable by each independent spectral band being fused. This study investigates an identification perception experiment using a simple and intuitive process for discriminating between image fusion algorithm performances. The results from a classification experiment using information-theory-based image metrics are presented and compared to perception test results. The results show an effective performance benchmark for image fusion algorithms can be established using human perception test data. Additionally, image metrics have been identified that either agree with or surpass the performance benchmark established.

  3. Visual enhancement of laparoscopic nephrectomies using the 3-CCD camera

    NASA Astrophysics Data System (ADS)

    Crane, Nicole J.; Kansal, Neil S.; Dhanani, Nadeem; Alemozaffar, Mehrdad; Kirk, Allan D.; Pinto, Peter A.; Elster, Eric A.; Huffman, Scott W.; Levin, Ira W.

    2006-02-01

    Many surgical techniques are currently shifting from the more conventional, open approach towards minimally invasive laparoscopic procedures. Laparoscopy results in smaller incisions, potentially leading to less postoperative pain and more rapid recoveries. One key disadvantage of laparoscopic surgery is the loss of three-dimensional assessment of organs and tissue perfusion. Advances in laparoscopic technology include high-definition monitors for improved visualization and upgrades from single charge coupled device (CCD) detectors to 3-CCD cameras, to provide a larger, more sensitive color palette and increase the perception of detail. In this discussion, we further advance existing laparoscopic technology to create greater enhancement of images obtained during radical and partial nephrectomies, in which the assessment of tissue perfusion is crucial but limited with current 3-CCD cameras. By separating the signals received by each CCD in the 3-CCD camera and by introducing a straightforward algorithm, rapid differentiation of renal vessels and perfusion is accomplished and could be performed in real time. The newly acquired images are overlaid onto conventional images for reference and comparison. This affords the surgeon the ability to accurately detect changes in tissue oxygenation despite inherent limitations of the visible light image. Such additional capability should impact procedures in which visual assessment of organ vitality is critical.
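    To illustrate the kind of channel arithmetic the abstract alludes to, here is a toy overlay that separates the three CCD signals (the RGB planes), forms a simple red/blue ratio as a perfusion proxy, and tints the conventional image with it. The specific ratio and tinting are assumptions made for this sketch; the published method defines its own combination of the CCD signals.

        import numpy as np

        def perfusion_overlay(frame_rgb, alpha=0.4):
            # Separate the three CCD signals (here, the R, G, B planes of the frame).
            rgb = frame_rgb.astype(np.float32) / 255.0
            r, b = rgb[..., 0], rgb[..., 2]
            # Crude perfusion proxy from a normalized red-blue ratio (illustrative only).
            index = (r - b) / (r + b + 1e-6)
            index = (index - index.min()) / (index.max() - index.min() + 1e-6)
            # Overlay the index onto the conventional image as a red tint.
            overlay = rgb.copy()
            overlay[..., 0] = np.clip(overlay[..., 0] + alpha * index, 0.0, 1.0)
            return (overlay * 255).astype(np.uint8)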

  4. Dynamic visual image modeling for 3D synthetic scenes in agricultural engineering

    NASA Astrophysics Data System (ADS)

    Gao, Li; Yan, Juntao; Li, Xiaobo; Ji, Yatai; Li, Xin

    This paper presents dynamic visual image modeling for 3D synthetic scenes using dynamic multichannel binocular visual images over a mobile self-organizing network. Technologies for modeling 3D synthetic scenes have been widely used in many industries. The main purpose of this paper is to use multiple networks of dynamic visual monitors and sensors to observe an unattended area, exploiting the advantages of mobile networks in rural areas to further improve existing mobile network information services and provide personalized information services. The goal of the display is to provide a perfect representation of synthetic scenes. Using low-power dynamic visual monitors and temperature/humidity sensors or GPS installed in the node equipment, monitoring data are sent at scheduled times. Then, through the mobile self-organizing network, the 3D model is rebuilt by synthesizing the returned images. On this basis, we formalize a novel algorithm for multichannel binocular visual 3D images based on fast 3D modeling. Taking advantage of these low-priced mobile devices, mobile self-organizing networks can collect large amounts of video from places that are unsuitable for human observation or impossible to reach, and accurately synthesize 3D scenes. This application will play a great role in promoting its use in agriculture.

  5. VHP - An environment for the remote visualization of heuristic processes

    NASA Technical Reports Server (NTRS)

    Crawford, Stuart L.; Leiner, Barry M.

    1991-01-01

    A software system called VHP is introduced which permits the visualization of heuristic algorithms on both resident and remote hardware platforms. The VHP is based on the DCF tool for interprocess communication and is applicable to remote algorithms which can be on different types of hardware and in languages other than VHP. The VHP system is of particular interest to systems in which the visualization of remote processes is required such as robotics for telescience applications.

  6. Adaptive image contrast enhancement algorithm for point-based rendering

    NASA Astrophysics Data System (ADS)

    Xu, Shaoping; Liu, Xiaoping P.

    2015-03-01

    Surgical simulation is a major application in computer graphics and virtual reality, and most of the existing work indicates that interactive real-time cutting simulation of soft tissue is a fundamental but challenging research problem in virtual surgery simulation systems. More specifically, it is difficult to achieve a fast enough graphic update rate (at least 30 Hz) on commodity PC hardware by utilizing traditional triangle-based rendering algorithms. In recent years, point-based rendering (PBR) has been shown to offer the potential to outperform the traditional triangle-based rendering in speed when it is applied to highly complex soft tissue cutting models. Nevertheless, the PBR algorithms are still limited in visual quality due to inherent contrast distortion. We propose an adaptive image contrast enhancement algorithm as a postprocessing module for PBR, providing high visual rendering quality as well as acceptable rendering efficiency. Our approach is based on a perceptible image quality technique with automatic parameter selection, resulting in a visual quality comparable to existing conventional PBR algorithms. Experimental results show that our adaptive image contrast enhancement algorithm produces encouraging results both visually and numerically compared to representative algorithms, and experiments conducted on the latest hardware demonstrate that the proposed PBR framework with the postprocessing module is superior to the conventional PBR algorithm and that the proposed contrast enhancement algorithm can be utilized in (or compatible with) various variants of the conventional PBR algorithm.

  7. Current State of Digital Reference in Primary and Secondary Education; The Technological Challenges of digital Reference; Question Negotiation and the Technological Environment; Evaluation of Chat Reference Service Quality; Visual Resource Reference: Collaboration between Digital Museums and Digital Libraries.

    ERIC Educational Resources Information Center

    Lankes, R. David; Penka, Jeffrey T.; Janes, Joseph; Silverstein, Joanne; White, Marilyn Domas; Abels, Eileen G.; Kaske, Neal; Goodrum, Abby A.

    2003-01-01

    Includes five articles that discuss digital reference in elementary and secondary education; the need to understand the technological environment of digital reference; question negotiation in digital reference; a pilot study that evaluated chat reference service quality; and collaborative digital museum and digital library reference services. (LRW)

  8. Visual Interface for Materials Simulations

    2004-08-01

    VIMES (Visual Interface for Materials Simulations) is a graphical user interface (GUI) for pre- and post-processing atomistic materials science calculations. The code includes tools for building and visualizing simple crystals, supercells, and surfaces, as well as tools for managing and modifying the input to Sandia materials simulation codes such as Quest (Peter Schultz, SNL 9235) and Towhee (Marcus Martin, SNL 9235). It is often useful to have a graphical interface to construct input for materials simulation codes and to analyze the output of these programs. VIMES has been designed not only to build and visualize different materials systems, but also to make several Sandia codes easier to use and analyze. Furthermore, VIMES has been designed to be reasonably easy to extend to new materials programs. We anticipate that users of Sandia materials simulation codes will use VIMES to simplify the submission and analysis of these simulations. VIMES uses standard OpenGL graphics (as implemented in the Python programming language) to display the molecules. The algorithms used to rotate, zoom, and pan molecules are all standard applications of the OpenGL libraries. VIMES uses the Marching Cubes algorithm for isosurfacing 3D data such as molecular orbitals or electron densities around the molecules.

  9. Large scale tracking algorithms.

    SciTech Connect

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  10. Visual presentation and computer animation

    SciTech Connect

    Wang, Hua.

    1991-01-01

    This paper presents an assessment of current computer graphics and video technologies as they apply to the fields of visual presentation and computer animation, including a discussion of inherent incompatibilities between computer graphics and video systems. The near-term technology trend is directed towards the integration of sound, video and computer graphics into a multimedia, desktop presentation system. With the forthcoming High-Definition Television (HDTV) standard, it can be predicted that computer graphics and video will eventually be integrated into a desktop video system. Recent advances in technology development to achieve these goals are described. 3 tabs.

  11. Comparing the quality of accessing medical literature using content-based visual and textual information retrieval

    NASA Astrophysics Data System (ADS)

    Müller, Henning; Kalpathy-Cramer, Jayashree; Kahn, Charles E., Jr.; Hersh, William

    2009-02-01

    Content-based visual information (or image) retrieval (CBIR) has been an extremely active research domain within medical imaging over the past ten years, with the goal of improving the management of visual medical information. Many technical solutions have been proposed, and application scenarios for image retrieval as well as image classification have been set up. However, in contrast to medical information retrieval using textual methods, visual retrieval has only rarely been applied in clinical practice. This is despite the large amount and variety of visual information produced in hospitals every day. This information overload imposes a significant burden upon clinicians, and CBIR technologies have the potential to help the situation. However, in order for CBIR to become an accepted clinical tool, it must demonstrate a higher level of technical maturity than it has to date. Since 2004, the ImageCLEF benchmark has included a task for the comparison of visual information retrieval algorithms for medical applications. In 2005, a task for medical image classification was introduced and both tasks have been run successfully for the past four years. These benchmarks allow an annual comparison of visual retrieval techniques based on the same data sets and the same query tasks, enabling the meaningful comparison of various retrieval techniques. The datasets used from 2004-2007 contained images and annotations from medical teaching files. In 2008, however, the dataset used was made up of 67,000 images (along with their associated figure captions and the full text of their corresponding articles) from two Radiological Society of North America (RSNA) scientific journals. This article describes the results of the medical image retrieval task of the ImageCLEF 2008 evaluation campaign. We compare the retrieval results of both visual and textual information retrieval systems from 15 research groups on the aforementioned data set. The results show clearly that, currently

  12. Fast SIFT design for real-time visual feature extraction.

    PubMed

    Chiu, Liang-Chi; Chang, Tian-Sheuan; Chen, Jiun-Yen; Chang, Nelson Yen-Chung

    2013-08-01

    Visual feature extraction with scale invariant feature transform (SIFT) is widely used for object recognition. However, its real-time implementation suffers from long latency, heavy computation, and high memory storage because of its frame level computation with iterated Gaussian blur operations. Thus, this paper proposes a layer parallel SIFT (LPSIFT) with integral image, and its parallel hardware design with an on-the-fly feature extraction flow for real-time application needs. Compared with the original SIFT algorithm, the proposed approach reduces the computational amount by 90% and memory usage by 95%. The final implementation uses 580-K gate count with 90-nm CMOS technology, and offers 6000 feature points/frame for VGA images at 30 frames/s and ∼ 2000 feature points/frame for 1920 × 1080 images at 30 frames/s at the clock rate of 100 MHz. PMID:23743775
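    The latency and memory savings in the record above come from replacing iterated Gaussian blurs with box filtering over an integral image; a generic sketch of that primitive is shown below (the summed-area table itself, not the LPSIFT hardware pipeline).

        import numpy as np

        def integral_image(img):
            # Summed-area table with a zero first row/column for easy indexing.
            ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
            ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
            return ii

        def box_sum(ii, r0, c0, r1, c1):
            # Sum of img[r0:r1, c0:c1] in O(1), independent of the box size.
            return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]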

  13. Automatic Perceptual Color Map Generation for Realistic Volume Visualization

    PubMed Central

    Silverstein, Jonathan C.; Parsad, Nigel M.; Tsirline, Victor

    2008-01-01

    Advances in computed tomography imaging technology and inexpensive high performance computer graphics hardware are making high-resolution, full color (24-bit) volume visualizations commonplace. However, many of the color maps used in volume rendering provide questionable value in knowledge representation and are non-perceptual, thus biasing data analysis or even obscuring information. These drawbacks, coupled with our need for realistic anatomical volume rendering for teaching and surgical planning, have motivated us to explore the auto-generation of color maps that combine natural colorization with the perceptual discriminating capacity of grayscale. As evidenced by the examples shown that have been created by the algorithm described, the merging of perceptually accurate and realistically colorized virtual anatomy appears to insightfully interpret and impartially enhance volume rendered patient data. PMID:18430609

  14. Visual hallucinations.

    PubMed

    Collerton, Daniel; Mosimann, Urs Peter

    2010-11-01

    Understanding of visual hallucinations is developing rapidly. Single-factor explanations based on specific pathologies have given way to complex multifactor models with wide potential applicability. Clinical studies of disorders with frequent hallucinations-dementia, delirium, eye disease and psychosis-show that dysfunction within many parts of the distributed ventral object perception system is associated with a range of perceptions from simple flashes and dots to complex formed figures and landscapes. Dissociations between these simple and complex hallucinations indicate at least two hallucinatory syndromes, though exact boundaries need clarification. Neural models of hallucinations variably emphasize the importance of constraints from top down dorsolateral frontal systems, bottom up occipital systems, interconnecting tracts, and thalamic and brainstem regulatory systems. No model has yet gained general acceptance. Both qualitative (a small number of necessary and sufficient constraints) and quantitative explanations (an accumulation of many nonspecific factors) fit existing data. Variable associations of hallucinations with emotional distress and thought disorders across and within pathologies may reflect the roles of cognitive and regulatory systems outside of the purely perceptual. Functional imaging demonstrates that hallucinations and veridical perceptions occur in the same brain areas, intimating a key role for the negotiating interface between top down and bottom up processes. Thus, hallucinations occur when a perception that incorporates a hallucinatory element can provide a better match between predicted and actual sensory input than does a purely veridical experience. Translational research that integrates understandings from clinical hallucinations and basic vision science is likely to be the key to better treatments. WIREs Cogn Sci 2010 1 781-786 For further resources related to this article, please visit the WIREs website. PMID:26271777

  15. Algorithmic chemistry

    SciTech Connect

    Fontana, W.

    1990-12-13

    In this paper, complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
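
    The iterated map on a fixed-size ensemble of interacting functions can be illustrated with a toy collision loop: two members of the pool interact, their composition joins the pool, and a random member is removed so the ensemble size stays constant. The Python sketch below only illustrates that loop; Fontana's model operates on lambda-calculus terms rather than host-language callables, and every name here is invented for the example.

      import random

      PRIMITIVES = [
          lambda x: x + 1,
          lambda x: 2 * x,
          lambda x: x * x,
          lambda x: max(x - 1, 0),
      ]

      def collide(f, g):
          # Interaction by composition: the "product" of f and g is f o g.
          return lambda x: f(g(x))

      def step(pool, rng):
          f, g = rng.sample(pool, 2)              # two functions collide
          pool.append(collide(f, g))              # their product enters the gas
          pool.pop(rng.randrange(len(pool)))      # keep the ensemble size fixed

      def run(size=50, steps=1000, seed=0):
          rng = random.Random(seed)
          pool = [rng.choice(PRIMITIVES) for _ in range(size)]
          for _ in range(steps):
              step(pool, rng)
          return pool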

  16. A Visual Analytics Agenda

    SciTech Connect

    Thomas, James J.; Cook, Kristin A.

    2006-01-01

    The September 11, 2001 attacks on the World Trade Center and the Pentagon were a wakeup call to the United States. The Hurricane Katrina disaster in August 2005 provided yet another reminder that unprecedented disasters can and do occur. And when they do, we must be able to analyze large amounts of disparate data in order to make sense of exceedingly complex situations and save lives. This need to support penetrating analysis of massive data collections is not limited to security, though. From systems biology to human health, from evaluations of product effectiveness to strategizing for competitive positioning, to assessing the results of marketing campaigns, there is a critical need to analyze very large amounts of complex information. Simply put, our ability to collect data far outstrips our ability to analyze the data we have collected. Following the September 11 attacks, the government initiated efforts to evaluate the technologies that are available today or are on the near horizon. Two National Academy of Sciences reports identified serious gaps in the technologies. Making the Nation Safer [Alberts & Wulf, 2002] describes how science and technology can be advanced to protect the nation against terrorism. Information Technology for Counterterrorism [Hennessy et al., 2003] expands upon the work of Making the Nation Safer, focusing specifically on the opportunities for information technology to help counter and respond to terrorist attacks. Significant research progress has been made in disciplines such as scientific and information visualization, statistically based exploratory and confirmatory analysis, data and knowledge representations, and perceptual and cognitive sciences. However, the research community has not adequately addressed the integration of these subspecialties to advance the ability of analysts to apply their expert human judgment to complex data in pressure-filled situations. Although some research is being done

  17. Learning Visualizations by Analogy: Promoting Visual Literacy through Visualization Morphing.

    PubMed

    Ruchikachorn, Puripant; Mueller, Klaus

    2015-09-01

    We propose the concept of teaching (and learning) unfamiliar visualizations by analogy, that is, demonstrating an unfamiliar visualization method by linking it to another, more familiar one, where the in-betweens are designed to bridge the gap between these two visualizations and explain the difference in a gradual manner. As opposed to a textual description, our morphing explains an unfamiliar visualization through purely visual means. We demonstrate our idea by way of four visualization pairs: data table and parallel coordinates, scatterplot matrix and hyperbox, linear chart and spiral chart, and hierarchical pie chart and treemap. The analogy is commutative, i.e., either member of the pair can be the unfamiliar visualization. A series of studies showed that this new paradigm can be an effective teaching tool. The participants could understand the unfamiliar visualization methods in all four pairs either fully or at least significantly better after they observed or interacted with the transitions from the familiar counterpart. The four examples suggest how helpful visualization pairings can be identified, and they will hopefully inspire other visualization morphings and associated transition strategies. PMID:26357285

  18. Visual data mining for quantized spatial data

    NASA Technical Reports Server (NTRS)

    Braverman, Amy; Kahn, Brian

    2004-01-01

    In previous papers we have shown how a well-known data compression algorithm called Entropy-constrained Vector Quantization (ECVQ) can be modified to reduce the size and complexity of very large satellite data sets. In this paper, we discuss how to visualize and understand the content of such reduced data sets.
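
    Entropy-constrained vector quantization extends the usual nearest-codeword rule with a rate penalty: each vector is assigned to the codeword j minimizing ||x - c_j||^2 + lambda * (-log2 p_j), after which codewords and their probabilities are re-estimated. The Lloyd-style sketch below illustrates that cost function only; it is an assumed toy formulation, not the modified algorithm the authors apply to satellite data.

      import numpy as np

      def ecvq(data, k, lam, iters=50, seed=0):
          # data: (N, D) array of vectors; k: codebook size; lam: rate weight.
          data = np.asarray(data, dtype=float)
          rng = np.random.default_rng(seed)
          codebook = data[rng.choice(len(data), size=k, replace=False)]
          probs = np.full(k, 1.0 / k)
          assign = np.zeros(len(data), dtype=int)
          for _ in range(iters):
              # Distortion plus entropy penalty for every (vector, codeword) pair.
              d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
              cost = d2 + lam * (-np.log2(probs + 1e-12))[None, :]
              assign = cost.argmin(axis=1)
              # Re-estimate codewords and their empirical probabilities.
              for j in range(k):
                  members = data[assign == j]
                  if len(members):
                      codebook[j] = members.mean(axis=0)
              counts = np.bincount(assign, minlength=k)
              probs = counts / counts.sum()
          return codebook, probs, assign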

  19. DSP Implementation of the Multiscale Retinex Image Enhancement Algorithm

    NASA Technical Reports Server (NTRS)

    Hines, Glenn D.; Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.

    2004-01-01

    The Retinex is a general-purpose image enhancement algorithm that is used to produce good visual representations of scenes. It performs a non-linear spatial/spectral transform that synthesizes strong local contrast enhancement and color constancy. A real-time, video frame rate implementation of the Retinex is required to meet the needs of various potential users. Retinex processing contains a relatively large number of complex computations; thus, achieving real-time performance with current technologies requires specialized hardware and software. In this paper we discuss the design and development of a digital signal processor (DSP) implementation of the Retinex. The target processor is a Texas Instruments TMS320C6711 floating point DSP. NTSC video is captured using a dedicated frame grabber card, Retinex processed, and displayed on a standard monitor. We discuss the optimizations used to achieve real-time performance of the Retinex and also describe our future plans on using alternative architectures.
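
    The multiscale Retinex itself is compact: at each scale the log of the image is compared with the log of a Gaussian-blurred "surround", and the results are averaged across scales before a final gain/offset step. The single-channel Python sketch below follows that standard formulation using scipy's gaussian_filter; the scale values, equal weights, and min/max rescaling are illustrative choices, not the tuned constants of the DSP implementation described here.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def multiscale_retinex(img, sigmas=(15, 80, 250), eps=1.0):
          # img: 2-D intensity image; sigmas: surround scales (illustrative).
          img = img.astype(np.float64) + eps          # avoid log(0)
          out = np.zeros_like(img)
          for sigma in sigmas:
              surround = gaussian_filter(img, sigma)  # Gaussian surround at this scale
              out += np.log(img) - np.log(surround)
          out /= len(sigmas)                          # equal weights across scales
          out -= out.min()                            # simple rescale to [0, 255]
          if out.max() > 0:
              out *= 255.0 / out.max()
          return out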

  20. DSP Implementation of the Retinex Image Enhancement Algorithm

    NASA Technical Reports Server (NTRS)

    Hines, Glenn; Rahman, Zia-Ur; Jobson, Daniel; Woodell, Glenn

    2004-01-01

    The Retinex is a general-purpose image enhancement algorithm that is used to produce good visual representations of scenes. It performs a non-linear spatial/spectral transform that synthesizes strong local contrast enhancement and color constancy. A real-time, video frame rate implementation of the Retinex is required to meet the needs of various potential users. Retinex processing contains a relatively large number of complex computations; thus, achieving real-time performance with current technologies requires specialized hardware and software. In this paper we discuss the design and development of a digital signal processor (DSP) implementation of the Retinex. The target processor is a Texas Instruments TMS320C6711 floating point DSP. NTSC video is captured using a dedicated frame-grabber card, Retinex processed, and displayed on a standard monitor. We discuss the optimizations used to achieve real-time performance of the Retinex and also describe our future plans on using alternative architectures.