Sample records for computer generated imagery

  1. Is There Computer Graphics after Multimedia?

    ERIC Educational Resources Information Center

    Booth, Kellogg S.

    Computer graphics has been driven by the desire to generate real-time imagery subject to constraints imposed by the human visual system. The future of computer graphics, when off-the-shelf systems have full multimedia capability and when standard computing engines render imagery faster than real-time, remains to be seen. A dedicated pipeline for…

  2. Effect of instructive visual stimuli on neurofeedback training for motor imagery-based brain-computer interface.

    PubMed

    Kondo, Toshiyuki; Saeki, Midori; Hayashi, Yoshikatsu; Nakayashiki, Kosei; Takata, Yohei

    2015-10-01

    Event-related desynchronization (ERD) of the electroencephalogram (EEG) from the motor cortex is associated with execution, observation, and mental imagery of motor tasks. Generation of ERD by motor imagery (MI) has been widely used for brain-computer interfaces (BCIs) linked to neuroprosthetics and other motor assistance devices. Control of MI-based BCIs can be acquired by neurofeedback training to reliably induce MI-associated ERD. To develop more effective training conditions, we investigated the effect of static and dynamic visual representations of target movements (a picture of forearms or a video clip of hand grasping movements) during BCI neurofeedback training. After 4 consecutive training days, the group that performed MI while viewing the video showed significant improvement in generating MI-associated ERD compared with the group that viewed the static image. This result suggests that passively observing the target movement during MI would improve the associated mental imagery and enhance MI-based BCI skills. Copyright © 2014 Elsevier B.V. All rights reserved.
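    The ERD measure at the heart of this record can be sketched numerically. The snippet below is a minimal illustration of the standard band-power definition of ERD (the percentage power drop in a frequency band during imagery relative to a rest baseline); the sampling rate, band limits, and function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def band_power(x, fs, f_lo, f_hi):
    """Mean periodogram power of signal x within [f_lo, f_hi] Hz."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].mean()

def erd_percent(baseline, task, fs=256.0, band=(8.0, 12.0)):
    """ERD% = band-power drop during motor imagery relative to rest.
    Positive values indicate desynchronization (a power decrease)."""
    p_ref = band_power(baseline, fs, *band)
    p_task = band_power(task, fs, *band)
    return 100.0 * (p_ref - p_task) / p_ref
```

    For example, a mu-band rhythm whose amplitude halves during imagery yields a 75% ERD, since power scales with amplitude squared.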

  3. From One Pixel to One Earth: Building a Living Atlas in the Cloud to Analyze and Monitor Global Patterns

    NASA Astrophysics Data System (ADS)

    Moody, D.; Brumby, S. P.; Chartrand, R.; Franco, E.; Keisler, R.; Kelton, T.; Kontgis, C.; Mathis, M.; Raleigh, D.; Rudelis, X.; Skillman, S.; Warren, M. S.; Longbotham, N.

    2016-12-01

    The recent computing performance revolution has driven improvements in sensor, communication, and storage technology. Historical, multi-decadal remote sensing datasets at the petabyte scale are now available in commercial clouds, with new satellite constellations generating petabytes per year of high-resolution imagery with daily global coverage. Cloud computing and storage, combined with recent advances in machine learning and open software, are enabling understanding of the world at an unprecedented scale and detail. We have assembled all available satellite imagery from the USGS Landsat, NASA MODIS, and ESA Sentinel programs, as well as commercial PlanetScope and RapidEye imagery, and have analyzed over 2.8 quadrillion multispectral pixels. We leveraged the commercial cloud to generate a tiled, spatio-temporal mosaic of the Earth for fast iteration and development of new algorithms combining analysis techniques from remote sensing, machine learning, and scalable compute infrastructure. Our data platform enables processing at petabytes per day rates using multi-source data to produce calibrated, georeferenced imagery stacks at desired points in time and space that can be used for pixel level or global scale analysis. We demonstrate our data platform capability by using the European Space Agency's (ESA) published 2006 and 2009 GlobCover 20+ category label maps to train and test a Land Cover Land Use (LCLU) classifier, and generate current self-consistent LCLU maps in Brazil. We train a standard classifier on 2006 GlobCover categories using temporal imagery stacks, and we validate our results on co-registered 2009 Globcover LCLU maps and 2009 imagery. We then extend the derived LCLU model to current imagery stacks to generate an updated, in-season label map. Changes in LCLU labels can now be seamlessly monitored for a given location across the years in order to track, for example, cropland expansion, forest growth, and urban developments. 
An example of change monitoring is illustrated in the included figure showing rainfed cropland change in the Mato Grosso region of Brazil between 2006 and 2009.
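    The record describes training a "standard classifier" on GlobCover labels over temporal imagery stacks without naming the algorithm. As a hedged sketch of the per-pixel setup only, the nearest-centroid classifier below illustrates the train/validate pattern on multispectral pixel vectors; the function names and array shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_centroids(pixels, labels):
    """pixels: (n, bands) spectra; labels: (n,) LCLU class ids.
    Returns the class ids and each class's mean spectrum (centroid)."""
    classes = np.unique(labels)
    return classes, np.stack([pixels[labels == c].mean(axis=0) for c in classes])

def classify(pixels, classes, centroids):
    """Assign each pixel the label of its nearest class centroid."""
    d = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]
```

    In the workflow described above, the fit step would use 2006 imagery stacks with GlobCover labels, and the classify step would be validated against the 2009 maps before being applied to current imagery.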

  4. Color Helmet Mounted Display System with Real Time Computer Generated and Video Imagery for In-Flight Simulation

    NASA Technical Reports Server (NTRS)

    Sawyer, Kevin; Jacobsen, Robert; Aiken, Edwin W. (Technical Monitor)

    1995-01-01

    NASA Ames Research Center and the US Army are developing the Rotorcraft Aircrew Systems Concepts Airborne Laboratory (RASCAL) using a Sikorsky UH-60 helicopter for the purpose of flight systems research. A primary use of the RASCAL is in-flight simulation, for which the visual scene will use computer generated imagery and synthetic vision. This research is made possible in part by a full color, wide field of view Helmet Mounted Display (HMD) system that provides high performance color imagery suitable for daytime operations in a flight-rated package. This paper describes the design and performance characteristics of the HMD system. Emphasis is placed on the design specifications, testing, and aircraft integration of Kaiser Electronics' RASCAL HMD system, which was designed and built under contract for NASA. The optical performance and design of the helmet-mounted display unit are discussed, as well as the unique capabilities provided by the system's Programmable Display Generator (PDG).

  5. Concepts of integrated satellite surveys. [thematic mapping of land use in Ethiopia, Sudan, and Morocco]

    NASA Technical Reports Server (NTRS)

    Howard, J. A.

    1974-01-01

    The United Nations initially contracted with NASA to carry out investigations in three countries; but now, as the result of rapidly increasing interest, ERTS imagery has been or is being used in 7 additional projects related to agriculture, forestry, land use, soils, landforms and hydrology. Initially the ERTS frames were simply used to provide a synoptic view of a large area of a developing country as a basis for regional surveys. From this, interest has extended to using reconstituted false color imagery and, latterly, in co-operation with Purdue University, computer generated false color mosaics and computer generated large scale maps. As many developing countries are inadequately mapped and frequently rely on outdated maps, ERTS imagery is considered to provide a very wide spectrum of valuable data. Thematic maps can be readily prepared at a scale of 1:250,000 using standard NASA imagery. These provide coverage of areas not previously mapped, supply supplementary information, and enable existing maps to be updated. There is also increasing evidence that ERTS imagery is useful for temporal studies and for providing a new dimension in integrated surveys.

  6. Spline function approximation techniques for image geometric distortion representation. [for registration of multitemporal remote sensor imagery]

    NASA Technical Reports Server (NTRS)

    Anuta, P. E.

    1975-01-01

    Least squares approximation techniques were developed for use in computer aided correction of spatial image distortions for registration of multitemporal remote sensor imagery. Polynomials were first used to define image distortion over the entire two dimensional image space. Spline functions were then investigated to determine if the combination of lower order polynomials could approximate a higher order distortion with less computational difficulty. Algorithms for generating approximating functions were developed and applied to the description of image distortion in aircraft multispectral scanner imagery. Other applications of the techniques were suggested for earth resources data processing areas other than geometric distortion representation.
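    The polynomial stage of this approach can be sketched directly: fit a low-order bivariate polynomial mapping control points in the distorted image to their reference locations by least squares. The second-order term set and function names below are illustrative assumptions; the paper's spline refinement (piecing together lower-order polynomials) is not reproduced here.

```python
import numpy as np

def poly2_design(x, y):
    """Second-order bivariate polynomial terms for distortion modelling."""
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def fit_distortion(src, dst):
    """Least-squares fit of dst = f(src), solved separately per axis.
    src, dst: (n, 2) arrays of control-point coordinates."""
    A = poly2_design(src[:, 0], src[:, 1])
    coeff_x, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
    coeff_y, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)
    return coeff_x, coeff_y

def apply_distortion(src, coeff_x, coeff_y):
    """Map source coordinates through the fitted polynomial."""
    A = poly2_design(src[:, 0], src[:, 1])
    return np.column_stack([A @ coeff_x, A @ coeff_y])
```

    With enough well-distributed ground control points, the fitted mapping can then resample one image into the geometry of the other for multitemporal registration.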

  7. Earth Resources Technology Satellite: US standard catalog No. U-12

    NASA Technical Reports Server (NTRS)

    1973-01-01

    To provide dissemination of information regarding the availability of Earth Resources Technology Satellite (ERTS) imagery, a U.S. Standard Catalog is published on a monthly schedule. The catalogs identify imagery which has been processed and input to the data files during the preceding month. The U.S. Standard Catalog includes imagery covering the Continental United States, Alaska, and Hawaii. As a supplement to these catalogs, an inventory of ERTS imagery on 16 millimeter microfilm is available. The catalogs consist of four parts: (1) annotated maps which graphically depict the geographic areas covered by the imagery listed in the current catalog, (2) a computer-generated listing organized by observation identification number (ID) with pertinent information on each image, (3) a computer listing of observations organized by longitude and latitude, and (4) observations which have had changes made in their catalog information since the original entry in the data base.

  8. Application of computer generated color graphic techniques to the processing and display of three dimensional fluid dynamic data

    NASA Technical Reports Server (NTRS)

    Anderson, B. H.; Putt, C. W.; Giamati, C. C.

    1981-01-01

    Color coding techniques used in the processing of remote sensing imagery were adapted and applied to the fluid dynamics problems associated with turbofan mixer nozzles. The computer generated color graphics were found to be useful in reconstructing the measured flow field from low resolution experimental data to give more physical meaning to this information and in scanning and interpreting the large volume of computer generated data from the three dimensional viscous computer code used in the analysis.

  9. Automatic Reconstruction of Spacecraft 3D Shape from Imagery

    NASA Astrophysics Data System (ADS)

    Poelman, C.; Radtke, R.; Voorhees, H.

    We describe a system that computes the three-dimensional (3D) shape of a spacecraft from a sequence of uncalibrated, two-dimensional images. While the mathematics of multi-view geometry is well understood, building a system that accurately recovers 3D shape from real imagery remains an art. A novel aspect of our approach is the combination of algorithms from computer vision, photogrammetry, and computer graphics. We demonstrate our system by computing spacecraft models from imagery taken by the Air Force Research Laboratory's XSS-10 satellite and DARPA's Orbital Express satellite. Using feature tie points (each identified in two or more images), we compute the relative motion of each frame and the 3D location of each feature using iterative linear factorization followed by non-linear bundle adjustment. The "point cloud" that results from this traditional shape-from-motion approach is typically too sparse to generate a detailed 3D model. Therefore, we use the computed motion solution as input to a volumetric silhouette-carving algorithm, which constructs a solid 3D model based on viewpoint consistency with the image frames. The resulting voxel model is then converted to a facet-based surface representation and is texture-mapped, yielding realistic images from arbitrary viewpoints. We also illustrate other applications of the algorithm, including 3D mensuration and stereoscopic 3D movie generation.
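    The silhouette-carving step can be sketched with a simplifying assumption of orthographic cameras: a voxel survives only if its projection falls inside the silhouette mask of every view. A real system like the one described would use the recovered perspective camera matrices from bundle adjustment; the projector functions and names below are illustrative.

```python
import numpy as np

def carve(occupancy, silhouettes, projectors):
    """Silhouette carving: keep a voxel only if every view's projection
    of it lands inside that view's silhouette mask.
    occupancy: (N, N, N) bool grid; silhouettes: list of 2D bool masks;
    projectors: functions mapping voxel indices -> (row, col) pixel indices."""
    idx = np.argwhere(occupancy)
    keep = np.ones(len(idx), dtype=bool)
    for sil, proj in zip(silhouettes, projectors):
        u, v = proj(idx)
        keep &= sil[u, v]
    carved = np.zeros_like(occupancy)
    carved[tuple(idx[keep].T)] = True
    return carved
```

    The carved voxel grid is what would then be converted to a facet-based surface and texture-mapped, as the record describes.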

  10. Monitoring land degradation in southern Tunisia: A test of LANDSAT imagery and digital data

    NASA Technical Reports Server (NTRS)

    Hellden, U.; Stern, M.

    1980-01-01

    The possible use of LANDSAT imagery and digital data for monitoring desertification indicators in Tunisia was studied. Field data were sampled in Tunisia to estimate mapping accuracy in maps generated through interpretation of LANDSAT false color composites and through processing of LANDSAT computer compatible tapes, respectively. Temporal change studies were carried out through geometric registration of computer classified windows from 1972 to classified data from 1979. Indications of land degradation were noted in some areas. No important differences in results between the interpretation approach and the computer processing approach were found.

  11. Unique digital imagery interface between a silicon graphics computer and the kinetic kill vehicle hardware-in-the-loop simulator (KHILS) wideband infrared scene projector (WISP)

    NASA Astrophysics Data System (ADS)

    Erickson, Ricky A.; Moren, Stephen E.; Skalka, Marion S.

    1998-07-01

    Providing a flexible and reliable source of IR target imagery is absolutely essential for operation of an IR Scene Projector in a hardware-in-the-loop simulation environment. The Kinetic Kill Vehicle Hardware-in-the-Loop Simulator (KHILS) at Eglin AFB provides the capability, and requisite interfaces, to supply target IR imagery to its Wideband IR Scene Projector (WISP) from three separate sources at frame rates ranging from 30 - 120 Hz. Video can be input from a VCR source at the conventional 30 Hz frame rate. Pre-canned digital imagery and test patterns can be downloaded into stored memory from the host processor and played back as individual still frames or movie sequences up to a 120 Hz frame rate. Dynamic real-time imagery to the KHILS WISP projector system, at a 120 Hz frame rate, can be provided from a Silicon Graphics Onyx computer system normally used for generation of digital IR imagery through a custom CSA-built interface which is available for either the SGI/DVP or SGI/DD02 interface port. The primary focus of this paper is to describe our technical approach and experience in the development of this unique SGI computer and WISP projector interface.

  12. Earth Resources Technology Satellite: Non-US standard catalog No. N-13

    NASA Technical Reports Server (NTRS)

    1973-01-01

    To provide dissemination of information regarding the availability of Earth Resources Technology Satellite (ERTS) imagery, a Non-U.S. Standard Catalog is published on a monthly schedule. The catalogs identify imagery which has been processed and input to the data files during the preceding month. The Non-U.S. Standard Catalog includes imagery covering all areas except that of the United States, Hawaii, and Alaska. Imagery adjacent to the Continental U.S. and Alaska borders will normally appear in the U.S. Standard Catalog. As a supplement to these catalogs, an inventory of ERTS imagery on 16 millimeter microfilm is available. The catalogs consist of four parts: (1) annotated maps which graphically depict the geographic areas covered by the imagery listed in the current catalog, (2) a computer-generated listing organized by observation identification number (ID) with pertinent information for each image, (3) a computer listing of observations organized by longitude and latitude, and (4) observations which have had changes made in their catalog information since the original entry in the data base.

  13. Satellite Imagery Analysis for Automated Global Food Security Forecasting

    NASA Astrophysics Data System (ADS)

    Moody, D.; Brumby, S. P.; Chartrand, R.; Keisler, R.; Mathis, M.; Beneke, C. M.; Nicholaeff, D.; Skillman, S.; Warren, M. S.; Poehnelt, J.

    2017-12-01

    The recent computing performance revolution has driven improvements in sensor, communication, and storage technology. Multi-decadal remote sensing datasets at the petabyte scale are now available in commercial clouds, with new satellite constellations generating petabytes/year of daily high-resolution global coverage imagery. Cloud computing and storage, combined with recent advances in machine learning, are enabling understanding of the world at a scale and at a level of detail never before feasible. We present results from an ongoing effort to develop satellite imagery analysis tools that aggregate temporal, spatial, and spectral information and that can scale with the high-rate and dimensionality of imagery being collected. We focus on the problem of monitoring food crop productivity across the Middle East and North Africa, and show how an analysis-ready, multi-sensor data platform enables quick prototyping of satellite imagery analysis algorithms, from land use/land cover classification and natural resource mapping, to yearly and monthly vegetative health change trends at the structural field level.

  14. Robotic wheelchair commanded by SSVEP, motor imagery and word generation.

    PubMed

    Bastos, Teodiano F; Muller, Sandra M T; Benevides, Alessandro B; Sarcinelli-Filho, Mario

    2011-01-01

    This work presents a robotic wheelchair that can be commanded by a Brain Computer Interface (BCI) through Steady-State Visual Evoked Potential (SSVEP), motor imagery and word generation. When using SSVEP, a statistical test is used to extract the evoked response and a decision tree is used to discriminate the stimulus frequency, allowing volunteers to operate the BCI online, with hit rates varying from 60% to 100%, and guide a robotic wheelchair through an indoor environment. When using motor imagery and word generation, three mental tasks are used: imagination of left or right hand movement, and generation of words starting with the same random letter. Linear Discriminant Analysis is used to recognize the mental tasks, and the feature extraction uses Power Spectral Density. The choice of EEG channel and frequency uses the Kullback-Leibler symmetric divergence, and a reclassification model is proposed to stabilize the classifier.
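    The LDA stage of this pipeline can be sketched for the two-class case. In the paper the features are power spectral densities of EEG channels; here the features are generic vectors, and the closed-form Fisher discriminant below (with a midpoint threshold) is a common textbook formulation, not necessarily the authors' exact implementation.

```python
import numpy as np

def fisher_lda_fit(X, y):
    """Two-class Fisher LDA: w = Sw^{-1} (mu1 - mu0), bias at the midpoint.
    X: (n, d) feature vectors (e.g. PSD features); y: (n,) labels in {0, 1}."""
    X0, X1 = X[y == 0], X[y == 1]
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class scatter matrix.
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    w = np.linalg.solve(Sw, mu1 - mu0)
    b = -0.5 * w @ (mu0 + mu1)
    return w, b

def fisher_lda_predict(X, w, b):
    """Label 1 on the mu1 side of the decision hyperplane, else 0."""
    return (X @ w + b > 0).astype(int)
```

    A multi-class extension (needed for three mental tasks) is typically built from pairwise or one-vs-rest discriminants of this form.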

  15. Real-time range generation for ladar hardware-in-the-loop testing

    NASA Astrophysics Data System (ADS)

    Olson, Eric M.; Coker, Charles F.

    1996-05-01

    Real-time closed loop simulation of LADAR seekers in a hardware-in-the-loop facility can reduce program risk and cost. This paper discusses an implementation of real-time range imagery generated in a synthetic environment at the Kinetic Kill Vehicle Hardware-in-the-Loop facility at Eglin AFB, for the stimulation of LADAR seekers and algorithms. The computer hardware platform used was a Silicon Graphics Incorporated Onyx RealityEngine. This computer contains graphics hardware and is optimized for generating visible or infrared imagery in real-time. A by-product of the rendering process, in the form of a depth buffer, is generated from all objects in view. The depth buffer is an array of integer values that contributes to the proper rendering of overlapping objects and can be converted to range values using a mathematical formula. This paper presents an optimized software approach to generating the scenes, calculating the range values, and outputting the range data for a LADAR seeker.
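    The "mathematical formula" for turning depth-buffer samples into range is, in the common case, the inversion of the perspective depth mapping. The sketch below assumes an OpenGL-style normalized depth buffer in [0, 1] with known near/far planes; the actual Onyx depth-buffer encoding used in the paper may differ.

```python
def depth_to_range(z_buf, near, far):
    """Convert a normalized depth-buffer value z_buf in [0, 1] back to
    eye-space range along the view axis, assuming the standard perspective
    mapping z_ndc = (f + n)/(f - n) - 2 f n / ((f - n) * z_eye)."""
    z_ndc = 2.0 * z_buf - 1.0  # window-space depth -> normalized device depth
    return (2.0 * near * far) / (far + near - z_ndc * (far - near))
```

    Because the mapping is hyperbolic, depth-buffer precision is concentrated near the near plane; range images recovered this way are coarser at long range, which matters for LADAR stimulation fidelity.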

  16. An integrated software system for geometric correction of LANDSAT MSS imagery

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Esilva, A. J. F. M.; Camara-Neto, G.; Serra, P. R. M.; Desousa, R. C. M.; Mitsuo, Fernando Augusta, II

    1984-01-01

    A system for geometrically correcting LANDSAT MSS imagery includes all phases of processing, from receiving a raw computer compatible tape (CCT) to the generation of a corrected CCT (or UTM mosaic). The system comprises modules for: (1) control of the processing flow; (2) calculation of satellite ephemeris and attitude parameters; (3) generation of uncorrected files from raw CCT data; (4) creation, management and maintenance of a ground control point library; (5) determination of the image correction equations, using attitude and ephemeris parameters and existing ground control points; (6) generation of the corrected LANDSAT file, using the equations determined beforehand; (7) union of LANDSAT scenes to produce a UTM mosaic; and (8) generation of the output tape, in super-structure format.

  17. A Platform for Scalable Satellite and Geospatial Data Analysis

    NASA Astrophysics Data System (ADS)

    Beneke, C. M.; Skillman, S.; Warren, M. S.; Kelton, T.; Brumby, S. P.; Chartrand, R.; Mathis, M.

    2017-12-01

    At Descartes Labs, we use the commercial cloud to run global-scale machine learning applications over satellite imagery. We have processed over 5 Petabytes of public and commercial satellite imagery, including the full Landsat and Sentinel archives. By combining open-source tools with a FUSE-based filesystem for cloud storage, we have enabled a scalable compute platform that has demonstrated reading over 200 GB/s of satellite imagery into cloud compute nodes. In one application, we generated global 15m Landsat-8, 20m Sentinel-1, and 10m Sentinel-2 composites from 15 trillion pixels, using over 10,000 CPUs. We recently created a public open-source Python client library that can be used to query and access preprocessed public satellite imagery from within our platform, and made this platform available to researchers for non-commercial projects. In this session, we will describe how you can use the Descartes Labs Platform for rapid prototyping and scaling of geospatial analyses and demonstrate examples in land cover classification.

  18. Single-Frame Cinema. Three Dimensional Computer-Generated Imaging.

    ERIC Educational Resources Information Center

    Cheetham, Edward Joseph, II

    This master's thesis provides a description of the proposed art form called single-frame cinema, which is a category of computer imagery that takes the temporal polarities of photography and cinema and unites them into a single visual vignette of time. Following introductory comments, individual chapters discuss (1) the essential physical…

  19. The extinct animal show: the paleoimagery tradition and computer generated imagery in factual television programs.

    PubMed

    Campbell, Vincent

    2009-03-01

    Extinct animals have always been popular subjects for the media, in both fiction, and factual output. In recent years, a distinctive new type of factual television program has emerged in which computer generated imagery is used extensively to bring extinct animals back to life. Such has been the commercial audience success of these programs that they have generated some public and academic debates about their relative status as science, documentary, and entertainment, as well as about their reflection of trends in factual television production, and the aesthetic tensions in the application of new media technologies. Such discussions ignore a crucial contextual feature of computer generated extinct animal programs, namely the established tradition of paleoimagery. This paper examines a selection of extinct animal shows in terms of the dominant frames of the paleoimagery genre. The paper suggests that such an examination has two consequences. First, it allows for a more context-sensitive evaluation of extinct animal programs, acknowledging rather than ignoring relevant representational traditions. Second, it allows for an appraisal and evaluation of public and critical reception of extinct animal programs above and beyond the traditional debates about tensions between science, documentary, entertainment, and public understanding.

  20. Computer vision-based technologies and commercial best practices for the advancement of the motion imagery tradecraft

    NASA Astrophysics Data System (ADS)

    Phipps, Marja; Capel, David; Srinivasan, James

    2014-06-01

    Motion imagery capabilities within the Department of Defense/Intelligence Community (DoD/IC) have advanced significantly over the last decade, attempting to meet continuously growing data collection, video processing and analytical demands in operationally challenging environments. The motion imagery tradecraft has evolved accordingly, enabling teams of analysts to effectively exploit data and generate intelligence reports across multiple phases in structured Full Motion Video (FMV) Processing Exploitation and Dissemination (PED) cells. Yet now the operational requirements are drastically changing. The exponential growth in motion imagery data continues, but to this the community adds multi-INT data, interoperability with existing and emerging systems, expanded data access, nontraditional users, collaboration, automation, and support for ad hoc configurations beyond the current FMV PED cells. To break from the legacy system lifecycle, we look towards a technology application and commercial adoption model that will meet these future Intelligence, Surveillance and Reconnaissance (ISR) challenges. In this paper, we explore the application of cutting-edge computer vision technology to meet existing FMV PED shortfalls and address future capability gaps. For example, real-time georegistration services developed from computer-vision-based feature tracking, multiple-view geometry, and statistical methods allow the fusion of motion imagery with other georeferenced information sources, providing unparalleled situational awareness. We then describe how these motion imagery capabilities may be readily deployed in a dynamically integrated analytical environment, employing an extensible framework, leveraging scalable enterprise-wide infrastructure and following commercial best practices.

  1. Assessment of synthetic image fidelity

    NASA Astrophysics Data System (ADS)

    Mitchell, Kevin D.; Moorhead, Ian R.; Gilmore, Marilyn A.; Watson, Graham H.; Thomson, Mitch; Yates, T.; Troscianko, Tomasz; Tolhurst, David J.

    2000-07-01

    Computer generated imagery is increasingly used for a wide variety of purposes, ranging from computer games to flight simulators to camouflage and sensor assessment. The fidelity required for this imagery depends on the anticipated use - for example, when used for camouflage design it must be physically correct both spectrally and spatially. The rendering techniques used will also depend upon the waveband being simulated, the spatial resolution of the sensor and the required frame rate. Rendering of natural outdoor scenes is particularly demanding, because of the statistical variation in materials and illumination, atmospheric effects and the complex geometric structures of objects such as trees. The accuracy of simulated imagery has tended to be assessed subjectively in the past. First and second order statistics do not capture many of the essential characteristics of natural scenes, while direct pixel comparison would impose an unachievable demand on the synthetic imagery. For many applications, such as camouflage design, it is important that any metrics used work in both visible and infrared wavebands. We are investigating a variety of methods of comparing real and synthetic imagery, and of comparing synthetic imagery rendered to different levels of fidelity. These techniques include neural networks, independent component analysis (ICA), higher order statistics and models of human contrast perception. This paper presents an overview of the analyses we have carried out and some initial results, along with some preliminary conclusions regarding the fidelity of synthetic imagery.

  2. Fostering Recursive Thinking in Combinatorics through the Use of Manipulatives and Computing Technology.

    ERIC Educational Resources Information Center

    Abramovich, Sergei; Pieper, Anne

    1996-01-01

    Describes the use of manipulatives for solving simple combinatorial problems which can lead to the discovery of recurrence relations for permutations and combinations. Numerical evidence and visual imagery generated by a computer spreadsheet through modeling these relations can enable students to experience the ease and power of combinatorial…

  3. Training and Personnel Systems Technology R&D Program Description FY 1988/1989. Revision

    DTIC Science & Technology

    1988-05-20

    scenario software/database, and computer generated imagery (CIG) subsystem resources; (d) investigation of feasibility of, and preparation of plans... computer language to Army flight simulator for demonstration and evaluation. The objective is to have flight simulators which use the same software as... the Automated Performance and Readiness Training System (APARTS), which is a computer software system which facilitates training management through

  4. ATR applications of minimax entropy models of texture and shape

    NASA Astrophysics Data System (ADS)

    Zhu, Song-Chun; Yuille, Alan L.; Lanterman, Aaron D.

    2001-10-01

    Concepts from information theory have recently found favor in both the mainstream computer vision community and the military automatic target recognition community. In the computer vision literature, the principles of minimax entropy learning theory have been used to generate rich probabilistic models of texture and shape. In addition, the method of types and large deviation theory has permitted the difficulty of various texture and shape recognition tasks to be characterized by 'order parameters' that determine how fundamentally vexing a task is, independent of the particular algorithm used. These information-theoretic techniques have been demonstrated using traditional visual imagery in applications such as simulating cheetah skin textures and finding roads in aerial imagery. We discuss their application to problems in the specific domain of automatic target recognition using infrared imagery. We also review recent theoretical and algorithmic developments which permit learning minimax entropy texture models for infrared textures in reasonable timeframes.

  5. Utilising E-on Vue and Unity 3D scenes to generate synthetic images and videos for visible signature analysis

    NASA Astrophysics Data System (ADS)

    Madden, Christopher S.; Richards, Noel J.; Culpepper, Joanne B.

    2016-10-01

    This paper investigates the ability to develop synthetic scenes in an image generation tool, E-on Vue, and a gaming engine, Unity 3D, which can be used to generate synthetic imagery of target objects across a variety of conditions in land environments. Developments within these tools and gaming engines have allowed the computer gaming industry to dramatically enhance the realism of the games they develop; however, they utilise shortcuts to ensure that the games run smoothly in real-time to create an immersive effect. Whilst these shortcuts may have an impact upon the realism of the synthetic imagery, they do promise a much more time-efficient method of developing imagery of different environmental conditions, and of investigating the dynamic aspect of military operations that is currently not evaluated in signature analysis. The results presented investigate how some of the common image metrics used in target acquisition modelling, namely the Δμ1, Δμ2, Δμ3, RSS, and Doyle metrics, perform on the synthetic scenes generated by E-on Vue and Unity 3D compared to real imagery of similar scenes. An exploration of the time required to develop the various aspects of the scene to enhance its realism is included, along with an overview of the difficulties associated with trying to recreate specific locations as a virtual scene. This work is an important start towards utilising virtual worlds for visible signature evaluation, and towards evaluating how equivalent synthetic imagery is to real photographs.

  6. Automated detection and enumeration of marine wildlife using unmanned aircraft systems (UAS) and thermal imagery

    PubMed Central

    Seymour, A. C.; Dale, J.; Hammill, M.; Halpin, P. N.; Johnston, D. W.

    2017-01-01

    Estimating animal populations is critical for wildlife management. Aerial surveys are used for generating population estimates, but can be hampered by cost, logistical complexity, and human risk. Additionally, human counts of organisms in aerial imagery can be tedious and subjective. Automated approaches show promise, but can be constrained by long setup times and difficulty discriminating animals in aggregations. We combine unmanned aircraft systems (UAS), thermal imagery and computer vision to improve traditional wildlife survey methods. During spring 2015, we flew fixed-wing UAS equipped with thermal sensors, imaging two grey seal (Halichoerus grypus) breeding colonies in eastern Canada. Human analysts counted and classified individual seals in imagery manually. Concurrently, an automated classification and detection algorithm discriminated seals based upon temperature, size, and shape of thermal signatures. Automated counts were within 95–98% of human estimates; at Saddle Island, the model estimated 894 seals compared to analyst counts of 913, and at Hay Island estimated 2188 seals compared to analysts’ 2311. The algorithm improves upon shortcomings of computer vision by effectively recognizing seals in aggregations while keeping model setup time minimal. Our study illustrates how UAS, thermal imagery, and automated detection can be combined to efficiently collect population data critical to wildlife management. PMID:28338047
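    The detection rule this record describes (discriminating seals by temperature, size, and shape of thermal signatures) can be sketched as threshold, then label connected blobs, then filter by area. The flood-fill labelling and the area bounds below are simplifying assumptions; the published algorithm also uses shape criteria not reproduced here.

```python
import numpy as np

def detect_warm_blobs(thermal, temp_thresh, min_area, max_area):
    """Count warm blobs in a thermal image: threshold at a body-temperature
    cutoff, group hot pixels into 4-connected components via flood fill,
    and keep components whose pixel area is plausible for the target animal."""
    hot = thermal > temp_thresh
    labels = np.zeros(hot.shape, dtype=int)
    current = 0
    for seed in np.argwhere(hot):
        if labels[tuple(seed)]:
            continue  # already assigned to a blob
        current += 1
        stack = [tuple(seed)]
        while stack:
            r, c = stack.pop()
            if not (0 <= r < hot.shape[0] and 0 <= c < hot.shape[1]):
                continue
            if not hot[r, c] or labels[r, c]:
                continue
            labels[r, c] = current
            stack += [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
    areas = np.bincount(labels.ravel())[1:]  # drop the background bin
    return int(np.sum((areas >= min_area) & (areas <= max_area)))
```

    The area bounds play the role of the size criterion in the study: single hot pixels (sensor noise, birds) fall below the minimum, while merged aggregations exceeding the maximum would need further splitting.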

  7. Automated detection and enumeration of marine wildlife using unmanned aircraft systems (UAS) and thermal imagery

    NASA Astrophysics Data System (ADS)

    Seymour, A. C.; Dale, J.; Hammill, M.; Halpin, P. N.; Johnston, D. W.

    2017-03-01

    Estimating animal populations is critical for wildlife management. Aerial surveys are used for generating population estimates, but can be hampered by cost, logistical complexity, and human risk. Additionally, human counts of organisms in aerial imagery can be tedious and subjective. Automated approaches show promise, but can be constrained by long setup times and difficulty discriminating animals in aggregations. We combine unmanned aircraft systems (UAS), thermal imagery and computer vision to improve traditional wildlife survey methods. During spring 2015, we flew fixed-wing UAS equipped with thermal sensors, imaging two grey seal (Halichoerus grypus) breeding colonies in eastern Canada. Human analysts counted and classified individual seals in imagery manually. Concurrently, an automated classification and detection algorithm discriminated seals based upon temperature, size, and shape of thermal signatures. Automated counts were within 95-98% of human estimates; at Saddle Island, the model estimated 894 seals compared to analyst counts of 913, and at Hay Island estimated 2188 seals compared to analysts’ 2311. The algorithm improves upon shortcomings of computer vision by effectively recognizing seals in aggregations while keeping model setup time minimal. Our study illustrates how UAS, thermal imagery, and automated detection can be combined to efficiently collect population data critical to wildlife management.

  8. A Semi-Automated Machine Learning Algorithm for Tree Cover Delineation from 1-m Naip Imagery Using a High Performance Computing Architecture

    NASA Astrophysics Data System (ADS)

    Basu, S.; Ganguly, S.; Nemani, R. R.; Mukhopadhyay, S.; Milesi, C.; Votava, P.; Michaelis, A.; Zhang, G.; Cook, B. D.; Saatchi, S. S.; Boyda, E.

    2014-12-01

    Accurate tree cover delineation is a useful instrument in the derivation of Above Ground Biomass (AGB) density estimates from Very High Resolution (VHR) satellite imagery data. Numerous algorithms have been designed to perform tree cover delineation in high to coarse resolution satellite imagery, but most of them do not scale to terabytes of data, typical in these VHR datasets. In this paper, we present an automated probabilistic framework for the segmentation and classification of 1-m VHR data as obtained from the National Agriculture Imagery Program (NAIP) for deriving tree cover estimates for the whole of the Continental United States, using a High Performance Computing Architecture. The results from the classification and segmentation algorithms are then consolidated into a structured prediction framework using a discriminative undirected probabilistic graphical model based on Conditional Random Field (CRF), which helps in capturing the higher order contextual dependencies between neighboring pixels. Once the final probability maps are generated, the framework is updated and re-trained by incorporating expert knowledge through the relabeling of misclassified image patches. This leads to a significant improvement in the true positive rates and reduction in false positive rates. The tree cover maps were generated for the state of California, which covers a total of 11,095 NAIP tiles and spans a total geographical area of 163,696 sq. miles. Our framework produced correct detection rates of around 85% for fragmented forests and 70% for urban tree cover areas, with false positive rates lower than 3% for both regions. Comparative studies with the National Land Cover Data (NLCD) algorithm and the LiDAR high-resolution canopy height model show the effectiveness of our algorithm in generating accurate high-resolution tree cover maps.

  9. HCMM and LANDSAT imagery for geological mapping in northwest Queensland. [Australia

    NASA Technical Reports Server (NTRS)

    Cole, M. M.; Edmiston, D. J. (Principal Investigator)

    1980-01-01

    The author has identified the following significant results. Photographic prints made from negatives of day-visible and day-IR cover of selected areas were compared with enhanced color composites generated from LANDSAT computer compatible tapes and films. For geological mapping purposes, HCMM imagery is of limited value. While large scale features like the Mikadoodi anticlinorium, contrasting lithological units, and major structures may be distinguished on day-visible and day-IR cover, the spectral bands are too broad and the resolution too coarse even for regional mapping purposes. The imagery appears to be most useful for drainage studies. Where drainage is seasonal, sequential imagery permits monitoring of broad scale water movement while the day-IR imagery yields valuable information on former channels. In plains areas subject to periodic change of stream courses, comparable IR cover at a larger scale would offer considerable potential for reconstruction of former drainage patterns essential for the correct interpretation of geochemical data relative to mineral exploration.

  10. Image Registration Workshop Proceedings

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline (Editor)

    1997-01-01

    Automatic image registration has often been considered a preliminary step for higher-level processing, such as object recognition or data fusion. But with the unprecedented amounts of data being generated, and that will continue to be generated, by newly developed sensors, automatic image registration has itself become an important research topic. This workshop presents a collection of very high quality work which has been grouped into four main areas: (1) theoretical aspects of image registration; (2) applications to satellite imagery; (3) applications to medical imagery; and (4) image registration for computer vision research.

  11. Laser Signature Prediction Using The VALUE Computer Program

    NASA Astrophysics Data System (ADS)

    Akerman, Alexander; Hoffman, George A.; Patton, Ronald

    1989-09-01

    A variety of enhancements are being made to the 1976-vintage LASERX computer code. These include:
    - Surface characterization with BRDF tabular data
    - Specular reflection from transparent surfaces
    - Generation of glint direction maps
    - Generation of relative range imagery
    - Interface to the LOWTRAN atmospheric transmission code
    - Interface to the LEOPS laser sensor code
    - User-friendly menu prompting for easy setup
    Versions of VALUE have been written for both VAX/VMS and PC/DOS computer environments. Outputs have also been revised to be user friendly and include tables, plots, and images for (1) intensity, (2) cross section, (3) reflectance, (4) relative range, (5) region type, and (6) silhouette.

  12. Slide Composition for Electronic Presentations

    ERIC Educational Resources Information Center

    Larson, Ronald B.

    2004-01-01

    Instructors who use computer-generated graphics in their lectures have many options to consider when developing their presentations. Experts give different advice on which typefaces, background and letter colors, and background imagery improve communications. This study attempted to resolve these controversies by examining how short-term recall of…

  13. Toward a Model-Based Predictive Controller Design in Brain–Computer Interfaces

    PubMed Central

    Kamrunnahar, M.; Dias, N. S.; Schiff, S. J.

    2013-01-01

    A first step in designing a robust and optimal model-based predictive controller (MPC) for brain–computer interface (BCI) applications is presented in this article. An MPC has the potential to achieve improved BCI performance compared to the performance achieved by current ad hoc, non-model-based filter applications. The parameters in designing the controller were extracted as model-based features from motor imagery task-related human scalp electroencephalography. Although the parameters can be generated from any model, linear or non-linear, we here adopted a simple autoregressive model that has well-established applications in BCI task discrimination. It was shown that the parameters generated for the controller design can also be used for motor imagery task discrimination, with performance (8–23% task discrimination errors) comparable to the discrimination performance of commonly used features such as frequency-specific band powers and the AR model parameters used directly. An optimal MPC has significant implications for high performance BCI applications. PMID:21267657

  14. Toward a model-based predictive controller design in brain-computer interfaces.

    PubMed

    Kamrunnahar, M; Dias, N S; Schiff, S J

    2011-05-01

    A first step in designing a robust and optimal model-based predictive controller (MPC) for brain-computer interface (BCI) applications is presented in this article. An MPC has the potential to achieve improved BCI performance compared to the performance achieved by current ad hoc, non-model-based filter applications. The parameters in designing the controller were extracted as model-based features from motor imagery task-related human scalp electroencephalography. Although the parameters can be generated from any model, linear or non-linear, we here adopted a simple autoregressive model that has well-established applications in BCI task discrimination. It was shown that the parameters generated for the controller design can also be used for motor imagery task discrimination, with performance (8-23% task discrimination errors) comparable to the discrimination performance of commonly used features such as frequency-specific band powers and the AR model parameters used directly. An optimal MPC has significant implications for high performance BCI applications.

  15. An earth imaging camera simulation using wide-scale construction of reflectance surfaces

    NASA Astrophysics Data System (ADS)

    Murthy, Kiran; Chau, Alexandra H.; Amin, Minesh B.; Robinson, M. Dirk

    2013-10-01

    Developing and testing advanced ground-based image processing systems for earth-observing remote sensing applications presents a unique challenge that requires advanced imagery simulation capabilities. This paper presents an earth-imaging multispectral framing camera simulation system called PayloadSim (PaySim) capable of generating terabytes of photorealistic simulated imagery. PaySim leverages previous work in 3-D scene-based image simulation, adding a novel method for automatically and efficiently constructing 3-D reflectance scenes by draping tiled orthorectified imagery over a geo-registered Digital Elevation Map (DEM). PaySim's modeling chain is presented in detail, with emphasis given to the techniques used to achieve computational efficiency. These techniques as well as cluster deployment of the simulator have enabled tuning and robust testing of image processing algorithms, and production of realistic sample data for customer-driven image product development. Examples of simulated imagery of Skybox's first imaging satellite are shown.

  16. Advanced helmet mounted display (AHMD)

    NASA Astrophysics Data System (ADS)

    Sisodia, Ashok; Bayer, Michael; Townley-Smith, Paul; Nash, Brian; Little, Jay; Cassarly, William; Gupta, Anurag

    2007-04-01

    Due to significantly increased U.S. military involvement in deterrent, observer, security, peacekeeping and combat roles around the world, the military expects significant future growth in the demand for deployable virtual reality trainers with networked simulation capability of the battle space visualization process. The use of HMD technology in simulated virtual environments has been initiated by the demand for more effective training tools. The AHMD overlays computer-generated data (symbology, synthetic imagery, enhanced imagery) on the actual and simulated visible environment. The AHMD can be used to support deployable reconfigurable training solutions as well as traditional simulation requirements, UAV augmented reality, air traffic control and Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR) applications. This paper will describe the design improvements implemented for production of the AHMD System.

  17. Digital processing of Mariner 9 television data.

    NASA Technical Reports Server (NTRS)

    Green, W. B.; Seidman, J. B.

    1973-01-01

    The digital image processing performed by the Image Processing Laboratory (IPL) at JPL in support of the Mariner 9 mission is summarized. The support is divided into the general categories of image decalibration (the removal of photometric and geometric distortions from returned imagery), computer cartographic projections in support of mapping activities, and adaptive experimenter support (flexible support to provide qualitative digital enhancements and quantitative data reduction of returned imagery). Among the tasks performed were the production of maximum discriminability versions of several hundred frames to support generation of a geodetic control net for Mars, and special enhancements supporting analysis of Phobos and Deimos images.

  18. "Data Day" and "Data Night" Definitions - Towards Producing Seamless Global Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Schmaltz, J. E.

    2017-12-01

    For centuries, the art and science of cartography has struggled with the challenge of mapping the round earth on to a flat page, or a flat computer monitor. Earth observing satellites with continuous monitoring of our planet have added the additional complexity of the time dimension to this procedure. The most common current practice is to segment this data by 24-hour Coordinated Universal Time (UTC) day and then split the day into sun side "Data Day" and shadow side "Data Night" global imagery that spans from dateline to dateline. Due to the nature of satellite orbits, simply binning the data by UTC date produces significant discontinuities at the dateline for day images and at Greenwich for night images. Instead, imagery could be generated in a fashion that follows the spatial and temporal progression of the satellite which would produce seamless imagery everywhere on the globe for all times. This presentation will explore approaches to produce such imagery but will also address some of the practical and logistical difficulties in implementing such changes. Topics will include composites versus granule/orbit based imagery, day/night versus ascending/descending definitions, and polar versus global projections.
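    One simple way to realise the orbit-following binning the abstract argues for is to shift each observation's timestamp by its longitude before taking the date, so the bin boundary follows local solar midnight rather than the UTC dateline. This is an illustrative assumption, not the presenter's algorithm.

```python
from datetime import datetime, timedelta, timezone

def data_day(utc_time, longitude_deg):
    """Illustrative sketch: bin an observation into a 'data day' by its
    local solar date (UTC shifted by longitude / 15 hours) instead of
    the raw UTC date, avoiding a discontinuity at the dateline.
    """
    solar_time = utc_time + timedelta(hours=longitude_deg / 15.0)
    return solar_time.date()
```

    Under this rule two adjacent pixels straddling the dateline fall into the same data day whenever they were observed minutes apart, which is exactly the seam the UTC-date convention breaks.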

  19. A comparison of LANDSAT TM to MSS imagery for detecting submerged aquatic vegetation in lower Chesapeake Bay

    NASA Technical Reports Server (NTRS)

    Ackleson, S. G.; Klemas, V.

    1985-01-01

    LANDSAT Thematic Mapper (TM) and Multispectral Scanner (MSS) imagery generated simultaneously over Guinea Marsh, Virginia, are assessed for their ability to detect submerged aquatic, bottom-adhering plant canopies (SAV). An unsupervised clustering algorithm is applied to both image types and the resulting classifications compared to SAV distributions derived from color aerial photography. Class confidence and accuracy are first computed for all water areas and then only for shallow areas where water depth is less than 6 feet. In both the TM and MSS imagery, masking water areas deeper than 6 ft. resulted in greater classification accuracy at confidence levels greater than 50%. Both systems perform poorly in detecting SAV with crown cover densities less than 70%. On the basis of the spectral resolution, radiometric sensitivity, and location of visible bands, TM imagery does not offer a significant advantage over MSS data for detecting SAV in Lower Chesapeake Bay. However, because the TM imagery has a higher spatial resolution, smaller SAV canopies may be detected than is possible with MSS data.

  20. Large-scale feature searches of collections of medical imagery

    NASA Astrophysics Data System (ADS)

    Hedgcock, Marcus W.; Karshat, Walter B.; Levitt, Tod S.; Vosky, D. N.

    1993-09-01

    Large scale feature searches of accumulated collections of medical imagery are required for multiple purposes, including clinical studies, administrative planning, epidemiology, teaching, quality improvement, and research. To perform a feature search of large collections of medical imagery, one can either search text descriptors of the imagery in the collection (usually the interpretation), or (if the imagery is in digital format) the imagery itself. At our institution, text interpretations of medical imagery are all available in our VA Hospital Information System. These are downloaded daily into an off-line computer. The text descriptors of most medical imagery are usually formatted as free text, and so require a user friendly database search tool to make searches quick and easy for any user to design and execute. We are tailoring such a database search tool (Liveview), developed by one of the authors (Karshat). To further facilitate search construction, we are constructing (from our accumulated interpretation data) a dictionary of medical and radiological terms and synonyms. If the imagery database is digital, the imagery which the search discovers is easily retrieved from the computer archive. We describe our database search user interface, with examples, and compare the efficacy of computer assisted imagery searches from a clinical text database with manual searches. Our initial work on direct feature searches of digital medical imagery is outlined.

  1. Comic Book Confidential

    ERIC Educational Resources Information Center

    Osterer, Irv

    2012-01-01

    The author remembers as a youngster poring over DC and Marvel comics, wondering if any of his heroes would reach TV or movie theaters. Today, with the blue-screen techniques and computer-generated imagery (CGI), it seems not a week goes by without seeing one of these characters being featured in a film. "Spiderman" has even reached Broadway! None…

  2. A Computational Analysis of Mental Image Generation: Evidence from Functional Dissociations in Split-Brain Patients.

    DTIC Science & Technology

    1984-08-20

    neuropsychological data on the apraxias and the visual agnosias imply that motor and visual memories can be separately spared or destroyed after brain... agraphia and (vice versa), and visual object agnosia without apraxia (and vice versa). We next asked him to draw the letters in

  3. Transport delay compensation for computer-generated imagery systems

    NASA Technical Reports Server (NTRS)

    Mcfarland, Richard E.

    1988-01-01

    In the problem of pure transport delay in a low-pass system, a trade-off exists with respect to performance within and beyond a frequency bandwidth. When activity beyond the band is attenuated because of other considerations, this trade-off may be used to improve the performance within the band. Specifically, transport delay in computer-generated imagery systems is reduced to a manageable problem by recognizing frequency limits in vehicle activity and manual-control capacity. Based on these limits, a compensation algorithm has been developed for use in aircraft simulation at NASA Ames Research Center. For direct measurement of transport delays, a beam-splitter experiment is presented that accounts for the complete flight simulation environment. Values determined by this experiment are appropriate for use in the compensation algorithm. The algorithm extends the bandwidth of high-frequency flight simulation to well beyond that of normal pilot inputs. Within this bandwidth, the visual scene presentation manifests negligible gain distortion and phase lag. After a year of utilization, two minor exceptions to universal simulation applicability have been identified and subsequently resolved.
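    A common way to compensate a known transport delay in an image-generator pipeline is to extrapolate the measured vehicle state forward by the delay before rendering. The second-order Taylor predictor below is a generic sketch of that idea, not necessarily the compensation algorithm developed at NASA Ames.

```python
def compensate_delay(pos, vel, acc, delay_s):
    """Illustrative sketch: predict the vehicle state `delay_s` seconds
    ahead using a second-order Taylor extrapolation, so the rendered
    scene roughly matches the vehicle's position when the frame is
    finally displayed. Accurate only within the bandwidth where the
    extrapolation holds, which matches the band-limited argument above.
    """
    return pos + vel * delay_s + 0.5 * acc * delay_s**2
```

    Because pilot inputs and vehicle dynamics are band-limited, the extrapolation error stays small inside the control bandwidth while high-frequency noise, which the predictor would amplify, is already attenuated by the low-pass system.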

  4. Mental Imagery-Based Training to Modify Mood and Cognitive Bias in Adolescents: Effects of Valence and Perspective.

    PubMed

    Burnett Heyes, S; Pictet, A; Mitchell, H; Raeder, S M; Lau, J Y F; Holmes, E A; Blackwell, S E

    2017-01-01

    Mental imagery has a powerful impact on emotion and cognitive processing in adults, and is implicated in emotional disorders. Research suggests the perspective adopted in mental imagery modulates its emotional impact. However, little is known about the impact of mental imagery in adolescence, despite adolescence being the key time for the onset of emotional dysfunction. We administered computerised positive versus mixed valence picture-word mental imagery training to male adolescent participants (N = 60, aged 11-16 years) across separate field and observer perspective sessions. Positive mood increased more following positive than mixed imagery; pleasantness ratings of ambiguous pictures increased following positive versus mixed imagery generated from field but not observer perspective; negative interpretation bias on a novel scrambled sentences task was smaller following positive than mixed imagery particularly when imagery was generated from field perspective. These findings suggest positive mental imagery generation alters mood and cognition in male adolescents, with the latter moderated by imagery perspective. Identifying key components of such training, such as imagery perspective, extends understanding of the relationship between mental imagery, mood, and cognition in adolescence.

  5. Computer-generated imagery for 4-D meteorological data

    NASA Technical Reports Server (NTRS)

    Hibbard, William L.

    1986-01-01

    The University of Wisconsin-Madison Space Science and Engineering Center is developing animated stereo display terminals for use with McIDAS (Man-computer Interactive Data Access System). This paper describes image-generation techniques which have been developed to take maximum advantage of these terminals, integrating large quantities of four-dimensional meteorological data from balloon and satellite soundings, satellite images, Doppler and volumetric radar, and conventional surface observations. The images have been designed to use perspective, shading, hidden-surface removal, and transparency to augment the animation and stereo-display geometry. They create an illusion of a moving three-dimensional model of the atmosphere. This paper describes the design of these images and a number of rules of thumb for generating four-dimensional meteorological displays.

  6. Cultural Adventures for the Google[TM] Generation

    ERIC Educational Resources Information Center

    Dann, Tammy

    2010-01-01

    Google Earth is a computer program that allows users to view the Earth through satellite imagery and maps, to see cities from above and through street views, and to search for addresses and browse locations. Many famous buildings and structures from around the world have detailed 3D views accessible on Google Earth. It is possible to explore the…

  7. A High Performance Computing Approach to Tree Cover Delineation in 1-m NAIP Imagery Using a Probabilistic Learning Framework

    NASA Technical Reports Server (NTRS)

    Basu, Saikat; Ganguly, Sangram; Michaelis, Andrew; Votava, Petr; Roy, Anshuman; Mukhopadhyay, Supratik; Nemani, Ramakrishna

    2015-01-01

    Tree cover delineation is a useful instrument in deriving Above Ground Biomass (AGB) density estimates from Very High Resolution (VHR) airborne imagery data. Numerous algorithms have been designed to address this problem, but most of them do not scale to these datasets, which are of the order of terabytes. In this paper, we present a semi-automated probabilistic framework for the segmentation and classification of 1-m National Agriculture Imagery Program (NAIP) for tree-cover delineation for the whole of Continental United States, using a High Performance Computing Architecture. Classification is performed using a multi-layer Feedforward Backpropagation Neural Network and segmentation is performed using a Statistical Region Merging algorithm. The results from the classification and segmentation algorithms are then consolidated into a structured prediction framework using a discriminative undirected probabilistic graphical model based on Conditional Random Field, which helps in capturing the higher order contextual dependencies between neighboring pixels. Once the final probability maps are generated, the framework is updated and re-trained by relabeling misclassified image patches. This leads to a significant improvement in the true positive rates and reduction in false positive rates. The tree cover maps were generated for the whole state of California, spanning a total of 11,095 NAIP tiles covering a total geographical area of 163,696 sq. miles. The framework produced true positive rates of around 88% for fragmented forests and 74% for urban tree cover areas, with false positive rates lower than 2% for both landscapes. Comparative studies with the National Land Cover Data (NLCD) algorithm and the LiDAR canopy height model (CHM) showed the effectiveness of our framework for generating accurate high-resolution tree-cover maps.

  8. A High Performance Computing Approach to Tree Cover Delineation in 1-m NAIP Imagery using a Probabilistic Learning Framework

    NASA Astrophysics Data System (ADS)

    Basu, S.; Ganguly, S.; Michaelis, A.; Votava, P.; Roy, A.; Mukhopadhyay, S.; Nemani, R. R.

    2015-12-01

    Tree cover delineation is a useful instrument in deriving Above Ground Biomass (AGB) density estimates from Very High Resolution (VHR) airborne imagery data. Numerous algorithms have been designed to address this problem, but most of them do not scale to these datasets which are of the order of terabytes. In this paper, we present a semi-automated probabilistic framework for the segmentation and classification of 1-m National Agriculture Imagery Program (NAIP) for tree-cover delineation for the whole of Continental United States, using a High Performance Computing Architecture. Classification is performed using a multi-layer Feedforward Backpropagation Neural Network and segmentation is performed using a Statistical Region Merging algorithm. The results from the classification and segmentation algorithms are then consolidated into a structured prediction framework using a discriminative undirected probabilistic graphical model based on Conditional Random Field, which helps in capturing the higher order contextual dependencies between neighboring pixels. Once the final probability maps are generated, the framework is updated and re-trained by relabeling misclassified image patches. This leads to a significant improvement in the true positive rates and reduction in false positive rates. The tree cover maps were generated for the whole state of California, spanning a total of 11,095 NAIP tiles covering a total geographical area of 163,696 sq. miles. The framework produced true positive rates of around 88% for fragmented forests and 74% for urban tree cover areas, with false positive rates lower than 2% for both landscapes. Comparative studies with the National Land Cover Data (NLCD) algorithm and the LiDAR canopy height model (CHM) showed the effectiveness of our framework for generating accurate high-resolution tree-cover maps.

  9. Information from imagery: ISPRS scientific vision and research agenda

    NASA Astrophysics Data System (ADS)

    Chen, Jun; Dowman, Ian; Li, Songnian; Li, Zhilin; Madden, Marguerite; Mills, Jon; Paparoditis, Nicolas; Rottensteiner, Franz; Sester, Monika; Toth, Charles; Trinder, John; Heipke, Christian

    2016-05-01

    With the increased availability of very high-resolution satellite imagery, terrain based imaging and participatory sensing, inexpensive platforms, and advanced information and communication technologies, the application of imagery is now ubiquitous, playing an important role in many aspects of life and work today. As a leading organisation in this field, the International Society for Photogrammetry and Remote Sensing (ISPRS) has been devoted to effectively and efficiently obtaining and utilising information from imagery since its foundation in the year 1910. This paper examines the significant challenges currently facing ISPRS and its communities, such as providing high-quality information, enabling advanced geospatial computing, and supporting collaborative problem solving. The state-of-the-art in ISPRS related research and development is reviewed and the trends and topics for future work are identified. By providing an overarching scientific vision and research agenda, we hope to call on and mobilise all ISPRS scientists, practitioners and other stakeholders to continue improving our understanding and capacity on information from imagery and to deliver advanced geospatial knowledge that enables humankind to better deal with the challenges ahead, posed for example by global change, ubiquitous sensing, and a demand for real-time information generation.

  10. Assessment of the Quality of Digital Terrain Model Produced from Unmanned Aerial System Imagery

    NASA Astrophysics Data System (ADS)

    Kosmatin Fras, M.; Kerin, A.; Mesarič, M.; Peterman, V.; Grigillo, D.

    2016-06-01

    Production of a digital terrain model (DTM) is one of the most common tasks when processing a photogrammetric point cloud generated from Unmanned Aerial System (UAS) imagery. The quality of the DTM produced in this way depends on different factors: the quality of imagery, image orientation and camera calibration, point cloud filtering, interpolation methods, etc. However, the assessment of the real quality of the DTM is very important for its further use and applications. In this paper we first describe the main steps of UAS imagery acquisition and processing based on a practical test field survey and data. The main focus of this paper is to present the approach to DTM quality assessment and to give a practical example on the test field data. The data processing and DTM quality assessment presented in this paper were performed mainly with in-house developed computer programs. The quality of a DTM comprises its accuracy, density, and completeness. Different accuracy measures are computed: RMSE, median, normalized median absolute deviation (NMAD) with their confidence intervals, and quantiles. The completeness of the DTM is a very often overlooked quality parameter, but when the DTM is produced from a point cloud this should not be neglected, as some areas might be very sparsely covered by points. The original density is presented with a density plot or map. The completeness is presented by the map of point density and the map of distances between grid points and terrain points. The results in the test area show great potential of DTM produced from UAS imagery, both in the sense of detailed representation of the terrain and of good height accuracy.
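    The robust accuracy measures named in the abstract (RMSE, median, NMAD, quantiles) are standard in DTM validation and easy to sketch from a set of height differences at checkpoints. The function name and return layout below are illustrative choices, not the authors' in-house programs.

```python
import numpy as np

def dtm_accuracy(dz):
    """Illustrative sketch of common DTM accuracy measures, computed from
    dz = DTM height minus checkpoint height (metres) at check locations.
    NMAD = 1.4826 * median(|dz - median(dz)|) is a robust sigma estimate
    that tolerates outliers (e.g. unfiltered vegetation points).
    """
    dz = np.asarray(dz, dtype=float)
    med = float(np.median(dz))
    return {
        "rmse": float(np.sqrt(np.mean(dz**2))),
        "median": med,
        "nmad": float(1.4826 * np.median(np.abs(dz - med))),
        # absolute-error quantiles, often reported as LE68 / LE95
        "q68": float(np.quantile(np.abs(dz), 0.68)),
        "q95": float(np.quantile(np.abs(dz), 0.95)),
    }
```

    Unlike RMSE, the median and NMAD barely move when a few checkpoints land on unfiltered bushes or buildings, which is why both families of measures are reported side by side.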

  11. Wakes from submerged obstacles in an open channel flow

    NASA Astrophysics Data System (ADS)

    Smith, Geoffrey B.; Marmorino, George; Dong, Charles; Miller, W. D.; Mied, Richard

    2015-11-01

    Wakes from several submerged obstacles are examined via airborne remote sensing. The primary focus will be bathymetric features in the tidal Potomac river south of Washington, DC, but others may be included as well. In the Potomac the water depth is nominally 10 m with an obstacle height of 8 m, or 80% of the depth. Infrared imagery of the water surface reveals thermal structure suitable both for interpretation of the coherent structures and for estimating surface currents. A novel image processing technique is used to generate two independent scenes with a known time offset from a single overpass from the infrared imagery, suitable for velocity estimation. Color imagery of the suspended sediment also shows suitable texture. Both the `mountain wave' regime and a traditional turbulent wake are observed, depending on flow conditions. Results are validated with in-situ ADCP transects. A computational model is used to further interpret the results.

  12. Fusion of monocular cues to detect man-made structures in aerial imagery

    NASA Technical Reports Server (NTRS)

    Shufelt, Jefferey; Mckeown, David M.

    1991-01-01

    The extraction of buildings from aerial imagery is a complex problem for automated computer vision. It requires locating regions in a scene that possess properties distinguishing them as man-made objects, as opposed to naturally occurring terrain features. It is reasonable to assume that no single detection method can correctly delineate or verify buildings in every scene. A cooperative-methods paradigm is useful in approaching the building extraction problem. Under this paradigm, each extraction technique provides information that can be assimilated into an overall interpretation of the scene. Thus, the main objective is to explore the development of a computer vision system that integrates the results of various scene analysis techniques into an accurate and robust interpretation of the underlying three-dimensional scene. The problem of building hypothesis fusion in aerial imagery is discussed. Building extraction techniques are briefly surveyed, including four building extraction, verification, and clustering systems. A method for fusing the symbolic data generated by these systems is described and applied to monocular-image and stereo-image data sets. Evaluation methods for the fusion results are described, and the fusion results are analyzed using these methods.

  13. Visual imagery of famous faces: effects of memory and attention revealed by fMRI.

    PubMed

    Ishai, Alumit; Haxby, James V; Ungerleider, Leslie G

    2002-12-01

    Complex pictorial information can be represented and retrieved from memory as mental visual images. Functional brain imaging studies have shown that visual perception and visual imagery share common neural substrates. The type of memory (short- or long-term) that mediates the generation of mental images, however, has not been addressed previously. The purpose of this study was to investigate the neural correlates underlying imagery generated from short- and long-term memory (STM and LTM). We used famous faces to localize the visual response during perception and to compare the responses during visual imagery generated from STM (subjects memorized specific pictures of celebrities before the imagery task) and imagery from LTM (subjects imagined famous faces without seeing specific pictures during the experimental session). We found that visual perception of famous faces activated the inferior occipital gyri, lateral fusiform gyri, the superior temporal sulcus, and the amygdala. Small subsets of these face-selective regions were activated during imagery. Additionally, visual imagery of famous faces activated a network of regions composed of bilateral calcarine, hippocampus, precuneus, intraparietal sulcus (IPS), and the inferior frontal gyrus (IFG). In all these regions, imagery generated from STM evoked more activation than imagery from LTM. Regardless of memory type, focusing attention on features of the imagined faces (e.g., eyes, lips, or nose) resulted in increased activation in the right IPS and right IFG. Our results suggest differential effects of memory and attention during the generation and maintenance of mental images of faces.

  14. Crop classification using temporal stacks of multispectral satellite imagery

    NASA Astrophysics Data System (ADS)

    Moody, Daniela I.; Brumby, Steven P.; Chartrand, Rick; Keisler, Ryan; Longbotham, Nathan; Mertes, Carly; Skillman, Samuel W.; Warren, Michael S.

    2017-05-01

    The increase in performance, availability, and coverage of multispectral satellite sensor constellations has led to a drastic increase in data volume and data rate. Multi-decadal remote sensing datasets at the petabyte scale are now available in commercial clouds, with new satellite constellations generating petabytes per year of daily high-resolution global coverage imagery. Data analysis capability, however, has lagged behind storage and compute developments, and has traditionally focused on individual scene processing. We present results from an ongoing effort to develop satellite imagery analysis tools that aggregate temporal, spatial, and spectral information and can scale with the high rate and dimensionality of imagery being collected. We investigate and compare the performance of pixel-level crop identification using tree-based classifiers and its dependence on both temporal and spectral features. Classification performance is assessed using as ground truth the Cropland Data Layer (CDL) crop masks generated by the US Department of Agriculture (USDA). The CDL maps provide 30 m spatial resolution, pixel-level labels for around 200 categories of land cover, but are only available after the growing season. The analysis focuses on McCook County in South Dakota and shows crop classification using a temporal stack of Landsat 8 (L8) imagery over the growing season, from April through October. Specifically, we consider the temporal L8 stack depth, as well as different normalized band difference indices, and evaluate their contribution to crop identification. We also show an extension of our algorithm to map corn and soy crops in the state of Mato Grosso, Brazil.
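As a minimal sketch of the per-pixel feature construction described above, assuming NDVI as the normalized band difference index and invented reflectance values (the actual classifier is a tree ensemble, omitted here):

```python
def ndvi(nir, red):
    """Normalized difference vegetation index for one pixel."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

# Hypothetical temporal stack: (red, nir) reflectances for one pixel
# across four Landsat 8 scenes over the growing season.
pixel_stack = [(0.08, 0.15), (0.06, 0.35), (0.05, 0.55), (0.07, 0.40)]

# Per-pixel feature vector for a tree-based classifier: one NDVI value
# per acquisition date, capturing the crop's phenological curve.
features = [ndvi(nir, red) for red, nir in pixel_stack]
```
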

  15. Automated land-use mapping from spacecraft data. [Oakland County, Michigan

    NASA Technical Reports Server (NTRS)

    Chase, P. E. (Principal Investigator); Rogers, R. H.; Reed, L. E.

    1974-01-01

    The author has identified the following significant results. In response to the need for a faster, more economical means of producing land use maps, this study evaluated the suitability of using ERTS-1 computer compatible tape (CCT) data as a basis for automatic mapping. Significant findings are: (1) automatic classification accuracy greater than 90% is achieved on categories of deep and shallow water, tended grass, rangeland, extractive (bare earth), urban, forest land, and nonforested wetlands; (2) computer-generated printouts by target class provide a quantitative measure of land use; and (3) the generation of map overlays showing land use from ERTS-1 CCTs offers a significant breakthrough in the rate at which land use maps are generated. Rather than uncorrected classified imagery or computer line printer outputs, the processing results in geometrically corrected, computer-driven pen drawings of land categories, drawn on a transparent material at a scale specified by the operator. These map overlays are economically produced and provide an efficient means of rapidly updating maps showing land use.

  16. Robust infrared target tracking using discriminative and generative approaches

    NASA Astrophysics Data System (ADS)

    Asha, C. S.; Narasimhadhan, A. V.

    2017-09-01

    The process of designing an efficient tracker for thermal infrared imagery is one of the most challenging tasks in computer vision. Although much progress has been achieved for RGB videos over the decades, the textureless and colorless properties of objects in thermal imagery impose hard constraints on the design of an efficient tracker. Tracking an object using a single feature or technique often fails to achieve high accuracy. Here, we propose an effective method to track an object in infrared imagery based on a combination of discriminative and generative approaches. The discriminative technique makes use of two complementary methods operating in parallel: a kernelized correlation filter with spatial features and an AdaBoost classifier with pixel intensity features. After obtaining optimized locations through the discriminative approaches, the generative technique is applied to determine the best target location using a linear search method. Unlike the baseline algorithms, the proposed method estimates the scale of the target by Lucas-Kanade homography estimation. To evaluate the proposed method, extensive experiments are conducted on 17 challenging infrared image sequences from the LTIR dataset, and a significant improvement in mean distance precision and mean overlap precision is achieved compared with existing trackers. Further, a quantitative and qualitative comparison of the proposed approach with state-of-the-art trackers clearly demonstrates an overall increase in performance.

  17. Single-trial effective brain connectivity patterns enhance discriminability of mental imagery tasks

    NASA Astrophysics Data System (ADS)

    Rathee, Dheeraj; Cecotti, Hubert; Prasad, Girijesh

    2017-10-01

    Objective. The majority of current approaches to connectivity-based brain-computer interface (BCI) systems focus on distinguishing between different motor imagery (MI) tasks. Brain regions associated with MI are anatomically close to each other, hence these BCI systems suffer from low performance. Our objective is to introduce a single-trial connectivity-feature-based BCI system for cognition imagery (CI) based tasks, wherein the associated brain regions are located relatively far from one another compared with those for MI. Approach. We implemented time-domain partial Granger causality (PGC) for the estimation of connectivity features in a BCI setting. The proposed hypothesis has been verified with two publicly available datasets involving MI and CI tasks. Main results. The results support the conclusion that connectivity-based features can provide better performance than a classical signal processing framework based on bandpass features coupled with spatial filtering for CI tasks, including word generation, subtraction, and spatial navigation. These results show for the first time that connectivity features can provide reliable performance for an imagery-based BCI system. Significance. We show that single-trial connectivity features for mixed imagery tasks (i.e. a combination of CI and MI) can outperform the features obtained by the current state-of-the-art method and hence can be successfully applied in BCI applications.

  18. Synthetic Vision Systems

    NASA Technical Reports Server (NTRS)

    Prinzel, L.J.; Kramer, L.J.

    2009-01-01

    A synthetic vision system is an aircraft cockpit display technology that presents the visual environment external to the aircraft using computer-generated imagery in a manner analogous to how it would appear to the pilot if forward visibility were not restricted. The purpose of this chapter is to review the state of synthetic vision systems, and discuss selected human factors issues that should be considered when designing such displays.

  19. Computationally Efficient Resampling of Nonuniform Oversampled SAR Data

    DTIC Science & Technology

    2010-05-01

    noncoherently. The resampled data is calculated using both a simple average and a weighted average of the demodulated data. The average nonuniform...trials with randomly varying accelerations. The results are shown in Fig. 5 for the noncoherent power difference and Fig. 6 for the coherent power...simple average. Figure 5. Noncoherent difference between SAR imagery generated with uniform sampling and nonuniform sampling that was resampled

  20. Advanced Training Techniques Using Computer Generated Imagery.

    DTIC Science & Technology

    1983-02-28

    described in this report has been made and is submitted along with this report. Unfortunately, the quality possible on standard monochromic 525 line...video tape is not representative of the quality of the presentations as displayed on a color beam penetration visual system, but one can, through the...YORK - LAGUARDIA (TWILIGHT) SEA SURFACE AND WAKE MINNEAPOLIS - ST. PAUL KC-135 TANKER INTERNATIONAL (TWILIGHT) MINNEAPOLIS - ST. PAUL GROUND TARGETS

  1. Specification and preliminary design of the CARTA system for satellite cartography

    NASA Technical Reports Server (NTRS)

    Machadoesilva, A. J. F. (Principal Investigator); Neto, G. C.; Serra, P. R. M.; Souza, R. C. M.; Mitsuo, Fernando Augusta, II

    1984-01-01

    Digital imagery acquired by satellite has inherent geometric distortions due to sensor characteristics and platform variations. At INPE, a software system for the geometric correction of LANDSAT MSS imagery is under development. Such corrected imagery will be useful for map generation. Important examples are the generation of LANDSAT image-charts for the Amazon region and the possibility of integrating digital satellite imagery into a Geographic Information System.

  2. Advanced Helmet Mounted Display (AHMD) for simulator applications

    NASA Astrophysics Data System (ADS)

    Sisodia, Ashok; Riser, Andrew; Bayer, Michael; McGuire, James P.

    2006-05-01

    The Advanced Helmet Mounted Display (AHMD), an augmented reality visual system first presented at last year's Cockpit and Future Displays for Defense and Security conference, has now been evaluated in a number of military simulator applications and by L-3 Link Simulation and Training. This paper presents the preliminary results of these evaluations and describes current and future simulator and training applications for HMD technology. The AHMD blends computer-generated data (symbology, synthetic imagery, enhanced imagery) with the actual and simulated visible environment. The AHMD is designed specifically for highly mobile, deployable, minimum-resource-demanding, reconfigurable virtual training systems to satisfy the military's in-theater warrior readiness objective. A description of the innovative AHMD system and future enhancements is discussed.

  3. Change detection in Arctic satellite imagery using clustering of sparse approximations (CoSA) over learned feature dictionaries

    NASA Astrophysics Data System (ADS)

    Moody, Daniela I.; Wilson, Cathy J.; Rowland, Joel C.; Altmann, Garrett L.

    2015-06-01

    Advanced pattern recognition and computer vision algorithms are of great interest for landscape characterization, change detection, and change monitoring in satellite imagery, in support of global climate change science and modeling. We present results from an ongoing effort to extend neuroscience-inspired models for feature extraction to the environmental sciences, and we demonstrate our work using Worldview-2 multispectral satellite imagery. We use a Hebbian learning rule to derive multispectral, multiresolution dictionaries directly from regional satellite normalized band difference index data. These feature dictionaries are used to build sparse scene representations, from which we automatically generate land cover labels via our CoSA algorithm: Clustering of Sparse Approximations. These data adaptive feature dictionaries use joint spectral and spatial textural characteristics to help separate geologic, vegetative, and hydrologic features. Land cover labels are estimated in example Worldview-2 satellite images of Barrow, Alaska, taken at two different times, and are used to detect and discuss seasonal surface changes. Our results suggest that an approach that learns from both spectral and spatial features is promising for practical pattern recognition problems in high resolution satellite imagery.

  4. Word-level recognition of multifont Arabic text using a feature vector matching approach

    NASA Astrophysics Data System (ADS)

    Erlandson, Erik J.; Trenkle, John M.; Vogt, Robert C., III

    1996-03-01

    Many text recognition systems recognize text imagery at the character level and assemble words from the recognized characters. An alternative approach is to recognize text imagery at the word level, without analyzing individual characters. This approach avoids the problem of individual character segmentation, and can overcome local errors in character recognition. A word-level recognition system for machine-printed Arabic text has been implemented. Arabic is a script language, and is therefore difficult to segment at the character level. Character segmentation has been avoided by recognizing text imagery of complete words. The Arabic recognition system computes a vector of image-morphological features on a query word image. This vector is matched against a precomputed database of vectors from a lexicon of Arabic words. Vectors from the database with the highest match score are returned as hypotheses for the unknown image. Several feature vectors may be stored for each word in the database. Database feature vectors generated using multiple fonts and noise models allow the system to be tuned to its input stream. Used in conjunction with database pruning techniques, this Arabic recognition system has obtained promising word recognition rates on low-quality multifont text imagery.
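The matching step described above can be sketched minimally as nearest-neighbour ranking; the lexicon words, feature vectors, and distance metric below are invented for illustration (the real system uses image-morphological features and may store several vectors per word):

```python
import math

def rank_hypotheses(query_vec, lexicon, k=3):
    """Rank lexicon entries (word -> feature vector) by Euclidean
    distance to the query word-image vector; closest first."""
    ranked = sorted(lexicon, key=lambda w: math.dist(query_vec, lexicon[w]))
    return ranked[:k]

# Toy lexicon with made-up 3-dimensional feature vectors.
lexicon = {
    "kitab":  [3.0, 1.0, 2.0],
    "qalam":  [1.0, 4.0, 0.0],
    "madina": [5.0, 5.0, 5.0],
}
hypotheses = rank_hypotheses([2.9, 1.2, 2.1], lexicon)
```
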

  5. EEG classification for motor imagery and resting state in BCI applications using multi-class Adaboost extreme learning machine

    NASA Astrophysics Data System (ADS)

    Gao, Lin; Cheng, Wei; Zhang, Jinhua; Wang, Jue

    2016-08-01

    Brain-computer interface (BCI) systems provide an alternative communication and control approach for people with limited motor function. The feature extraction and classification approach should therefore differentiate the relatively unusual state of motion intention from a common resting state. In this paper, we sought a novel approach for multi-class classification in BCI applications. We collected electroencephalographic (EEG) signals registered by electrodes placed over the scalp during left hand motor imagery, right hand motor imagery, and resting state for ten healthy human subjects. We proposed using the Kolmogorov complexity (Kc) for feature extraction and a multi-class Adaboost classifier with an extreme learning machine as the base classifier, in order to classify the three-class EEG samples. An average classification accuracy of 79.5% was obtained for the ten subjects, which greatly outperformed commonly used approaches. It is thus concluded that the proposed method could improve classification performance for multi-class motor imagery tasks. It could be applied in further studies to generate the control commands that initiate the movement of a robotic exoskeleton or orthosis, finally facilitating the rehabilitation of disabled people.
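Kolmogorov complexity is not computable exactly; a common proxy (an assumption here, not necessarily the paper's exact estimator) is the compression ratio of the quantized signal, since Lempel-Ziv compression approximates algorithmic complexity:

```python
import random
import zlib

def kc_estimate(samples):
    """Compression-based complexity proxy: compressed size over raw
    size of the 8-bit-quantized signal (higher = more irregular)."""
    raw = bytes(min(255, max(0, int(128 + 100 * s))) for s in samples)
    return len(zlib.compress(raw, 9)) / len(raw)

random.seed(42)
regular = [0.5] * 512                           # flat, highly compressible
irregular = [random.uniform(-1, 1) for _ in range(512)]
# A flat "resting" segment compresses far better than an irregular one,
# so the ratio can separate signal classes as a scalar feature.
```
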

  6. Influence of mental imagery on spatial presence and enjoyment assessed in different types of media.

    PubMed

    Weibel, David; Wissmath, Bartholomäus; Mast, Fred W

    2011-10-01

    Previous research studies on spatial presence point out that users' imagery abilities are of importance. However, this influence has not yet been tested across different media. This is surprising, because theoretical considerations suggest that mental imagery comes into play when a mediated environment lacks vividness. The aim of this study was to clarify the influence mental imagery abilities can have on the sensation of presence and on enjoyment in different mediated environments. We presented the participants (n = 60) a narrative text, a movie sequence, and a computer game. Across all media, no effect of mental imagery abilities on presence and enjoyment was found, but imagery abilities marginally interacted with the mediated environment. Individuals with high imagery abilities experienced more presence and enjoyment in the text condition. The results were different for the film condition: here, individuals with poor imagery abilities reported marginally higher enjoyment ratings, whereas the presence ratings did not differ between the two groups. Imagery abilities had no influence on presence and enjoyment in the computer game condition. The results suggest that good imagery abilities contribute to the sensations of presence and enjoyment when reading a narrative text. These results have applied relevance for media use, because a medium's effectiveness can depend on the individual's mental imagery abilities.

  7. Imageability effects on sentence judgement by right-brain-damaged adults

    PubMed Central

    Lederer, Lisa Guttentag; Scott, April Gibbs; Tompkins, Connie A.; Dickey, Michael W.

    2009-01-01

    Background For decades researchers assumed visual image generation was the province of the right hemisphere. The lack of corresponding evidence was only recently noted, yet conflicting results still leave open the possibility that the right hemisphere plays a role. This study assessed imagery generation in adult participants with and without right hemisphere damage (RHD). Imagery was operationalised as the activation of representations retrieved from long-term memory similar to those that underlie sensory experience, in the absence of the usual sensory stimulation, and in the presence of communicative stimuli. Aims The primary aim of the study was to explore the widely held belief that there is an association between the right hemisphere and imagery generation ability. We also investigated whether visual and visuo-motor imagery generation abilities differ in adults with RHD. Methods & Procedures Participants included 34 adults with unilateral RHD due to cerebrovascular accident and 38 adults who served as non-brain-damaged (NBD) controls. To assess the potential effects of RHD on the processing of language stimuli that differ in imageability, participants performed an auditory sentence verification task. Participants listened to high- and low-imageability sentences from Eddy and Glass (1981) and indicated whether each sentence was true or false. The dependent measures for this task were performance accuracy and response times (RT). Outcomes & Results In general, accuracy was higher, and response time lower, for low-imagery than for high-imagery items. Although NBD participants’ RTs for low-imagery items were significantly faster than those for high-imagery items, this difference disappeared in the group with RHD. We confirmed that this result was not due to a speed–accuracy trade-off or to syntactic differences between stimulus sets. A post hoc analysis also suggested that the group with RHD was selectively impaired in motor, rather than visual, imagery generation. Conclusions The disproportionately high RT of participants with RHD in response to low-imagery items suggests that these items had other properties that made their verification difficult for this population. The nature and extent of right hemisphere patients’ deficits in processing different types of imagery should be considered. In addition, the capacity of adults with RHD to generate visual and motor imagery should be investigated separately in future studies. PMID:20054429

  8. Virtual reality: a reality for future military pilotage?

    NASA Astrophysics Data System (ADS)

    McIntire, John P.; Martinsen, Gary L.; Marasco, Peter L.; Havig, Paul R.

    2009-05-01

    Virtual reality (VR) systems provide exciting new ways to interact with information and with the world. The visual VR environment can be synthetic (computer generated) or be an indirect view of the real world using sensors and displays. With the potential opportunities of a VR system, the question arises about what benefits or detriments a military pilot might incur by operating in such an environment. Immersive and compelling VR displays could be accomplished with an HMD (e.g., imagery on the visor), large area collimated displays, or by putting the imagery on an opaque canopy. But what issues arise when, instead of viewing the world directly, a pilot views a "virtual" image of the world? Is 20/20 visual acuity in a VR system good enough? To deliver this acuity over the entire visual field would require over 43 megapixels (MP) of display surface for an HMD or about 150 MP for an immersive CAVE system, either of which presents a serious challenge with current technology. Additionally, the same number of sensor pixels would be required to drive the displays to this resolution (and formidable network architectures required to relay this information), or massive computer clusters are necessary to create an entirely computer-generated virtual reality with this resolution. Can we presently implement such a system? What other visual requirements or engineering issues should be considered? With the evolving technology, there are many technological issues and human factors considerations that need to be addressed before a pilot is placed within a virtual cockpit.
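The 43 MP figure quoted above is consistent with sampling a roughly 120° × 100° field of view at one pixel per arcminute (the ~20/20 limit); the field-of-view numbers here are our assumption for the arithmetic, not stated in the abstract:

```python
# 20/20 acuity resolves about 1 arcminute, i.e. 60 pixels per degree.
PIX_PER_DEG = 60

h_fov_deg, v_fov_deg = 120, 100     # assumed display field of view
h_pix = h_fov_deg * PIX_PER_DEG     # horizontal pixel count
v_pix = v_fov_deg * PIX_PER_DEG     # vertical pixel count
megapixels = h_pix * v_pix / 1e6    # ~43 MP, matching the text
```
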

  9. Pan Sharpening Quality Investigation of Turkish In-Operation Remote Sensing Satellites: Applications with Rasat and GÖKTÜRK-2 Images

    NASA Astrophysics Data System (ADS)

    Ozendi, Mustafa; Topan, Hüseyin; Cam, Ali; Bayık, Çağlar

    2016-10-01

    Recently two optical remote sensing satellites, RASAT and GÖKTÜRK-2, were launched successfully by the Republic of Turkey. RASAT has 7.5 m panchromatic and 15 m visible bands, whereas GÖKTÜRK-2 has 2.5 m panchromatic and 5 m VNIR (Visible and Near Infrared) bands. Bands with different resolutions can be fused by pan-sharpening methods, an important application area of optical remote sensing imagery: the high geometric resolution of the panchromatic band and the high spectral resolution of the VNIR bands are merged. Many pan-sharpening methods exist in the literature; however, there is no standard framework for quality investigation of pan-sharpened imagery. The aim of this study is to investigate the pan-sharpening performance of RASAT and GÖKTÜRK-2 images. For this purpose, pan-sharpened images are first generated using the most popular pan-sharpening methods: IHS, Brovey, and PCA. This is followed by quantitative evaluation of the pan-sharpened images using the Correlation Coefficient (CC), Root Mean Square Error (RMSE), Relative Average Spectral Error (RASE), Spectral Angle Mapper (SAM), and Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS) metrics. The pan-sharpened images are generated and the metrics computed with the SharpQ tool, developed in the MATLAB computing language. According to the metrics, the PCA-derived pan-sharpened image is the most similar to the multispectral image for RASAT, and the Brovey-derived image is the most similar for GÖKTÜRK-2. Finally, the pan-sharpened images are evaluated qualitatively in terms of object availability and completeness for various land covers (such as urban, forest, and flat areas) by a group of operators experienced in remote sensing imagery.
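For illustration, a per-pixel sketch of the Brovey transform, one of the three fusion methods named above; the band and pan values are invented:

```python
def brovey(ms, pan):
    """Brovey pan-sharpening for one pixel: each multispectral band is
    scaled by the ratio of the panchromatic value to the MS intensity
    (mean of the bands)."""
    intensity = sum(ms) / len(ms)
    if intensity == 0:
        return tuple(0.0 for _ in ms)
    return tuple(band * pan / intensity for band in ms)

# Toy reflectances for one pixel: three MS bands plus a pan value.
sharp = brovey((0.2, 0.3, 0.4), pan=0.45)
# By construction, the fused pixel's mean intensity equals the pan
# value, which is what injects the pan band's spatial detail.
```
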

  10. Electrode channel selection based on backtracking search optimization in motor imagery brain-computer interfaces.

    PubMed

    Dai, Shengfa; Wei, Qingguo

    2017-01-01

    Common spatial pattern algorithm is widely used to estimate spatial filters in motor imagery based brain-computer interfaces. However, using a large number of channels makes common spatial pattern prone to over-fitting and renders the classification of electroencephalographic signals time-consuming. To overcome these problems, it is necessary to choose an optimal subset of all channels to save computational time and improve the classification accuracy. In this paper, a novel method based on the backtracking search optimization algorithm is proposed to automatically select the optimal channel set for common spatial pattern. Each individual in the population is an N-dimensional vector, with each component representing one channel. A population of binary codes is generated randomly at the beginning, and channels are then selected according to the evolution of these codes. The number and positions of 1's in a code denote the number and positions of the chosen channels. The objective function of the backtracking search optimization algorithm is defined as a combination of the classification error rate and the relative number of channels. Experimental results suggest that higher classification accuracy can be achieved with far fewer channels than standard common spatial pattern using all channels.
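A sketch of the binary encoding and objective described above; the weighting `lam` is a hypothetical choice, and in the real system the error rate would come from CSP-based classification on the selected channels:

```python
import random

random.seed(1)
N_CHANNELS = 16

# One individual of the population: a binary code, 1 = channel kept.
code = [random.randint(0, 1) for _ in range(N_CHANNELS)]
selected = [ch for ch, bit in enumerate(code) if bit]

def objective(error_rate, n_selected, n_total, lam=0.1):
    """Combined cost: classification error plus a penalty proportional
    to the relative number of channels (fewer channels preferred)."""
    return error_rate + lam * n_selected / n_total

# Hypothetical error rate of 20% on the selected subset.
cost = objective(0.20, len(selected), N_CHANNELS)
```

The evolutionary loop (mutation, crossover, and selection of codes by this cost) is omitted; only the encoding and fitness are shown.
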

  11. The Shuttle Mission Simulator computer generated imagery

    NASA Technical Reports Server (NTRS)

    Henderson, T. H.

    1984-01-01

    Equipment available in the primary training facility for the Space Transportation System (STS) flight crews includes the Fixed Base Simulator, the Motion Base Simulator, the Spacelab Simulator, and the Guidance and Navigation Simulator. The Shuttle Mission Simulator (SMS) consists of the Fixed Base Simulator and the Motion Base Simulator. The SMS utilizes four visual Computer Generated Image (CGI) systems. The Motion Base Simulator has a forward crew station with six-degrees of freedom motion simulation. Operation of the Spacelab Simulator is planned for the spring of 1983. The Guidance and Navigation Simulator went into operation in 1982. Aspects of orbital visual simulation are discussed, taking into account the earth scene, payload simulation, the generation and display of 1079 stars, the simulation of sun glare, and Reaction Control System jet firing plumes. Attention is also given to landing site visual simulation, and night launch and landing simulation.

  12. Experience With Bayesian Image Based Surface Modeling

    NASA Technical Reports Server (NTRS)

    Stutz, John C.

    2005-01-01

    Bayesian surface modeling from images requires modeling both the surface and the image generation process, in order to optimize the models by comparing actual and generated images. Thus it differs greatly, both conceptually and in computational difficulty, from conventional stereo surface recovery techniques. But it offers the possibility of using any number of images, taken under quite different conditions, and by different instruments that provide independent and often complementary information, to generate a single surface model that fuses all available information. I describe an implemented system, with a brief introduction to the underlying mathematical models and the compromises made for computational efficiency. I describe successes and failures achieved on actual imagery, where we went wrong and what we did right, and how our approach could be improved. Lastly I discuss how the same approach can be extended to distinct types of instruments, to achieve true sensor fusion.

  13. Assessment of satellite and aircraft multispectral scanner data for strip-mine monitoring

    NASA Technical Reports Server (NTRS)

    Spisz, E. W.; Dooley, J. T.

    1980-01-01

    The application of LANDSAT multispectral scanner data to describe the mining and reclamation changes of a hilltop surface coal mine in the rugged, mountainous area of eastern Kentucky is presented. Original single-band satellite imagery, computer-enhanced single-band imagery, and computer-classified imagery are presented for four different data sets in order to demonstrate the land cover changes that can be detected. Data obtained with an 11-band multispectral scanner on board a C-47 aircraft at an altitude of 3000 meters are also presented. Comparing the satellite data with color-infrared aerial photography and ground survey data shows that significant changes in the disrupted area can be detected from LANDSAT band 5 satellite imagery for mines with more than 100 acres of disturbed area. Band-ratio (bands 5/6) imagery, however, provides greater contrast than single-band imagery and can provide a qualitative level 1 classification of the land cover that may be useful for monitoring either the disturbed mining area or the revegetation progress. If a quantitative, accurate classification of the barren or revegetated classes is required, it is necessary to perform a detailed four-band computer classification of the data.

  14. Regressive Imagery in Creative Problem-Solving: Comparing Verbal Protocols of Expert and Novice Visual Artists and Computer Programmers

    ERIC Educational Resources Information Center

    Kozbelt, Aaron; Dexter, Scott; Dolese, Melissa; Meredith, Daniel; Ostrofsky, Justin

    2015-01-01

    We applied computer-based text analyses of regressive imagery to verbal protocols of individuals engaged in creative problem-solving in two domains: visual art (23 experts, 23 novices) and computer programming (14 experts, 14 novices). Percentages of words involving primary process and secondary process thought, plus emotion-related words, were…

  15. A hybrid NIRS-EEG system for self-paced brain computer interface with online motor imagery.

    PubMed

    Koo, Bonkon; Lee, Hwan-Gon; Nam, Yunjun; Kang, Hyohyeong; Koh, Chin Su; Shin, Hyung-Cheul; Choi, Seungjin

    2015-04-15

    For a self-paced motor imagery based brain-computer interface (BCI), the system should be able to recognize both the occurrence of a motor imagery and its type. However, because detecting the occurrence of a motor imagery is difficult, most motor imagery based BCI studies have focused on the cued motor imagery paradigm. In this paper, we present a novel hybrid BCI system that uses near infrared spectroscopy (NIRS) and electroencephalography (EEG) together to achieve an online self-paced motor imagery based BCI. We designed a unique sensor frame that records NIRS and EEG simultaneously for the realization of our system. Based on this hybrid system, we propose a novel analysis method that detects the occurrence of a motor imagery with the NIRS system and classifies its type with the EEG system. An online experiment demonstrated that our hybrid system achieved a true positive rate of about 88% and a false positive rate of 7%, with an average response time of 10.36 s. To our knowledge, no previous report has explored a hemodynamic brain switch for self-paced motor imagery based BCI with a hybrid EEG and NIRS system. Our experimental results show that the hybrid system is reliable enough for use in a practical self-paced motor imagery based BCI. Copyright © 2014 Elsevier B.V. All rights reserved.
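The system above is scored by how reliably the NIRS "brain switch" detects motor imagery occurrence; a toy computation of the reported metrics (the labels below are made up, not the paper's data):

```python
# Illustrative scoring of a self-paced brain switch: given ground-truth motor
# imagery (MI) intervals and detector outputs, compute true/false positive rates.
truth    = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]   # 1 = MI actually occurring
detected = [1, 1, 0, 0, 1, 0, 1, 0, 1, 0]   # 1 = NIRS switch fired

tp = sum(1 for t, d in zip(truth, detected) if t == 1 and d == 1)
fn = sum(1 for t, d in zip(truth, detected) if t == 1 and d == 0)
fp = sum(1 for t, d in zip(truth, detected) if t == 0 and d == 1)
tn = sum(1 for t, d in zip(truth, detected) if t == 0 and d == 0)

tpr = tp / (tp + fn)    # sensitivity of the occurrence detector
fpr = fp / (fp + tn)    # rate of spurious activations

print(f"TPR={tpr:.2f}, FPR={fpr:.2f}")   # TPR=0.80, FPR=0.20
```

In the actual system a second stage (the EEG classifier) then labels the imagery type for each true detection.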

  16. RGB-NDVI colour composites for visualizing forest change dynamics

    NASA Technical Reports Server (NTRS)

    Sader, S. A.; Winne, J. C.

    1992-01-01

    The study presents a simple and logical technique to display and quantify forest change using three dates of satellite imagery. The normalized difference vegetation index (NDVI) was computed for each date of imagery to define high and low vegetation biomass. Color composites were generated by combining each date of NDVI with either the red, green, or blue (RGB) image plane in an image display monitor. Harvest and regeneration areas were quantified by applying a modified parallelepiped classification, creating an RGB-NDVI image with 27 classes that were grouped into nine major forest change categories. Aerial photographs and stand history maps are compared with the forest changes indicated by the RGB-NDVI image. The utility of the RGB-NDVI technique for supporting forest inventories and updating forest resource information systems is presented and discussed.
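The RGB-NDVI compositing idea can be sketched in a few lines; the reflectance values here are hypothetical, chosen only to show the mechanics:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - red) / (NIR + red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + 1e-6)

# Hypothetical (NIR, red) reflectance pairs for three acquisition dates.
dates = [
    (np.array([[0.5, 0.1]]), np.array([[0.1, 0.08]])),   # t1
    (np.array([[0.5, 0.4]]), np.array([[0.1, 0.08]])),   # t2
    (np.array([[0.1, 0.5]]), np.array([[0.09, 0.1]])),   # t3
]

# Assign each date's NDVI to one display plane: red <- t1, green <- t2, blue <- t3.
rgb = np.dstack([ndvi(nir, red) for nir, red in dates])

# A pixel bright in all three planes (grey/white) held high biomass on every
# date; a pixel bright only in blue, say, regenerated by t3 after earlier harvest.
print(rgb.shape)   # (1, 2, 3)
```

The study's 27-class parallelepiped scheme amounts to thresholding each of the three NDVI planes into high/medium/low before grouping.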

  17. The Evolution of an Imagery Data System

    NASA Astrophysics Data System (ADS)

    Alarcon, C.; De Cesare, C.; Huang, T.; Roberts, J. T.; Rodriguez, J.; Cechini, M. F.; Boller, R. A.; Baynes, K.

    2016-12-01

    NASA's Global Imagery Browse Services (GIBS) has provided visualization of NASA's Earth Science data archives since 2011. The scope of GIBS has expanded over time to include community-requested features such as granule, vector, and profile imagery support. Behind the GIBS system lies the data management and automation package, The Imagery Exchange (TIE). As new features are added to GIBS, TIE must keep up with the capabilities required to automate the generation of our products while maintaining a robust generation pipeline. This presentation will focus on the challenges and solutions involved in expanding the TIE subsystem into a more evolved framework that can support the ever-growing needs of GIBS, including the effort to redesign the workflow to support sub-daily (e.g., granule) imagery while increasing the overall efficiency of the entire generation lifecycle.

  18. Imagery encoding and false recognition errors: Examining the role of imagery process and imagery content on source misattributions.

    PubMed

    Foley, Mary Ann; Foy, Jeffrey; Schlemmer, Emily; Belser-Ehrlich, Janna

    2010-11-01

    Imagery encoding effects on source-monitoring errors were explored using the Deese-Roediger-McDermott paradigm in two experiments. While viewing thematically related lists embedded in mixed picture/word presentations, participants were asked to generate images of objects or words (Experiment 1) or to simply name the items (Experiment 2). An encoding task intended to induce spontaneous images served as a control for the explicit imagery instruction conditions (Experiment 1). On the picture/word source-monitoring tests, participants were much more likely to report "seeing" a picture of an item presented as a word than the converse particularly when images were induced spontaneously. However, this picture misattribution error was reversed after generating images of words (Experiment 1) and was eliminated after simply labelling the items (Experiment 2). Thus source misattributions were sensitive to the processes giving rise to imagery experiences (spontaneous vs deliberate), the kinds of images generated (object vs word images), and the ways in which materials were presented (as pictures vs words).

  19. The use of ERTS imagery in reservoir management and operation

    NASA Technical Reports Server (NTRS)

    Cooper, S. (Principal Investigator)

    1973-01-01

    There are no author-identified significant results in this report. Preliminary analysis of ERTS-1 imagery suggests that the configuration and areal coverage of surface waters, as well as other hydrologically related terrain features, may be obtained from ERTS-1 imagery to an extent that would be useful. Computer-oriented pattern recognition techniques are being developed to help automate the identification and analysis of hydrologic features. Considerable man-machine interaction is required while training the computer for these tasks.

  20. Advanced Training Techniques Using Computer Generated Imagery.

    DTIC Science & Technology

    1981-09-15

    Annual Technical Report for the period 16 May 1980 - 15 July 1981. Prepared for the Air Force Office of Scientific Research, Directorate of Life Sciences. The record also contains fragmentary references, including: Simulation Management Branch, ATC, Randolph AFB, TX 78148, November 1977; Allbee, K. F., Semple, C. A., Aircrew Training Devices Life Cycle Cost and Worth... in Simulator Design and Application, Life Sciences, Inc., 227 Loop 820 NE, Hurst, Texas 76053, AFOSR-TR-77-0965, 30 September 1976; McDonnell Aircraft

  1. A Combination of Hand-Held Models and Computer Imaging Programs Helps Students Answer Oral Questions about Molecular Structure and Function: A Controlled Investigation of Student Learning

    ERIC Educational Resources Information Center

    Harris, Michelle A.; Peck, Ronald F.; Colton, Shannon; Morris, Jennifer; Neto, Elias Chaibub; Kallio, Julie

    2009-01-01

    We conducted a controlled investigation to examine whether a combination of computer imagery and tactile tools helps introductory cell biology laboratory undergraduate students better learn about protein structure/function relationships as compared with computer imagery alone. In all five laboratory sections, students used the molecular imaging…

  2. Conventional Microscopy vs. Computer Imagery in Chiropractic Education.

    PubMed

    Cunningham, Christine M; Larzelere, Elizabeth D; Arar, Ilija

    2008-01-01

    As human tissue pathology slides become increasingly difficult to obtain, other methods of teaching microscopy in educational laboratories must be considered. The purpose of this study was to evaluate our students' satisfaction with newly implemented computer-imagery-based laboratory instruction and to obtain their perspective on the advantages and disadvantages of computerized vs. traditional microscope laboratories. This undertaking involved the creation of a new computer laboratory. Robbins and Cotran Pathologic Basis of Disease, 7th ed., was chosen as the required text, which gave students access to the Robbins Pathology website, including the complete content of the text, the Interactive Case Study Companion, and the Virtual Microscope. Students had experience with traditional microscopes in their histology and microbiology laboratory courses. Student satisfaction with computer-based learning was assessed using a 28-question survey administered to three successive trimesters of pathology students (n=193) via the computer survey website Zoomerang. Answers were given on a scale of 1-5 and statistically analyzed using weighted averages. The survey data indicated that students were satisfied with computer-based learning activities during pathology laboratory instruction. The most favorable aspect of computer imagery was 24/7 availability (weighted avg. 4.16), followed by the clarification offered by accompanying text and captions (weighted avg. 4.08). Although advantages and disadvantages exist in both conventional microscopy and computer imagery, current pathology teaching environments warrant investigation of replacing traditional microscope exercises with computer applications. Chiropractic students supported the adoption of computer-assisted instruction in pathology laboratories.

  3. Advanced synthetic image generation models and their application to multi/hyperspectral algorithm development

    NASA Astrophysics Data System (ADS)

    Schott, John R.; Brown, Scott D.; Raqueno, Rolando V.; Gross, Harry N.; Robinson, Gary

    1999-01-01

    The need for robust image data sets for algorithm development and testing has prompted the consideration of synthetic imagery as a supplement to real imagery. The unique ability of synthetic image generation (SIG) tools to supply per-pixel truth allows algorithm writers to test difficult scenarios that would require expensive collection and instrumentation efforts. In addition, SIG data products can supply the user with 'actual' truth measurements of the entire image area that are not subject to measurement error, thereby allowing the user to more accurately evaluate the performance of their algorithm. Advanced algorithms place a high demand on synthetic imagery to reproduce both the spectro-radiometric and spatial character observed in real imagery. This paper describes a synthetic image generation model that strives to include the radiometric processes that affect spectral image formation and capture. In particular, it addresses recent advances in SIG modeling that attempt to capture the spatial/spectral correlation inherent in real images. The model is capable of simultaneously generating imagery from a wide range of sensors, allowing it to generate daylight, low-light-level and thermal image inputs for broadband, multi- and hyper-spectral exploitation algorithms.

  4. A Genetic-Based Feature Selection Approach in the Identification of Left/Right Hand Motor Imagery for a Brain-Computer Interface

    PubMed Central

    Yaacoub, Charles; Mhanna, Georges; Rihana, Sandy

    2017-01-01

    Electroencephalography is a non-invasive measure of the brain electrical activity generated by millions of neurons. Feature extraction in electroencephalography analysis is a core issue that may lead to accurate brain mental state classification. This paper presents a new feature selection method that improves left/right hand movement identification of a motor imagery brain-computer interface, based on genetic algorithms and artificial neural networks used as classifiers. Raw electroencephalography signals are first preprocessed using appropriate filtering. Feature extraction is carried out afterwards, based on spectral and temporal signal components, and thus a feature vector is constructed. As various features might be inaccurate and mislead the classifier, thus degrading the overall system performance, the proposed approach identifies a subset of features from a large feature space, such that the classifier error rate is reduced. Experimental results show that the proposed method is able to reduce the number of features to as low as 0.5% (i.e., the number of ignored features can reach 99.5%) while improving the accuracy, sensitivity, specificity, and precision of the classifier. PMID:28124985

  5. A Genetic-Based Feature Selection Approach in the Identification of Left/Right Hand Motor Imagery for a Brain-Computer Interface.

    PubMed

    Yaacoub, Charles; Mhanna, Georges; Rihana, Sandy

    2017-01-23

    Electroencephalography is a non-invasive measure of the brain electrical activity generated by millions of neurons. Feature extraction in electroencephalography analysis is a core issue that may lead to accurate brain mental state classification. This paper presents a new feature selection method that improves left/right hand movement identification of a motor imagery brain-computer interface, based on genetic algorithms and artificial neural networks used as classifiers. Raw electroencephalography signals are first preprocessed using appropriate filtering. Feature extraction is carried out afterwards, based on spectral and temporal signal components, and thus a feature vector is constructed. As various features might be inaccurate and mislead the classifier, thus degrading the overall system performance, the proposed approach identifies a subset of features from a large feature space, such that the classifier error rate is reduced. Experimental results show that the proposed method is able to reduce the number of features to as low as 0.5% (i.e., the number of ignored features can reach 99.5%) while improving the accuracy, sensitivity, specificity, and precision of the classifier.
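A minimal genetic-algorithm feature-selection loop in the spirit of the approach described above, run on synthetic data (the paper's actual EEG features, neural-network classifier, and GA operators are not reproduced; everything below is an illustrative stand-in):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: of 10 features, only the first 2 separate the classes.
n, d = 200, 10
X = rng.normal(size=(n, d))
y = rng.integers(0, 2, size=n)
X[:, 0] += 3.0 * y          # informative feature
X[:, 1] -= 3.0 * y          # informative feature

def error_rate(mask):
    """Nearest-centroid classification error using only the selected features."""
    if not mask.any():
        return 1.0
    Xs = X[:, mask]
    c0, c1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    pred = np.linalg.norm(Xs - c1, axis=1) < np.linalg.norm(Xs - c0, axis=1)
    return np.mean(pred != y)

# Tiny GA: population of bitmask chromosomes, truncation selection,
# uniform crossover, bit-flip mutation; fitness = classifier error.
pop = rng.random((30, d)) < 0.5
for gen in range(40):
    fit = np.array([error_rate(m) for m in pop])
    parents = pop[np.argsort(fit)[:10]]              # keep the 10 fittest
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(0, 10, 2)]
        child = np.where(rng.random(d) < 0.5, a, b)  # uniform crossover
        child ^= rng.random(d) < 0.05                # bit-flip mutation
        children.append(child)
    pop = np.array(children)

best = min(pop, key=error_rate)
print(error_rate(best), best[:2])   # typically low error, informative bits kept
```

The real system evaluates each chromosome by training an artificial neural network on the selected spectral/temporal features; the nearest-centroid scorer here just keeps the sketch self-contained.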

  6. Towards a real-time wide area motion imagery system

    NASA Astrophysics Data System (ADS)

    Young, R. I.; Foulkes, S. B.

    2015-10-01

    It is becoming increasingly important in both the defence and security domains to conduct persistent wide area surveillance (PWAS) of large populations of targets. Wide Area Motion Imagery (WAMI) is a key technique for achieving this wide area surveillance. The recent development of multi-million-pixel sensors has provided wide-field-of-view sensors with sufficient resolution to detect and track objects of interest across these extended areas of interest. WAMI sensors simultaneously provide high spatial and temporal resolutions, giving extreme pixel counts over large geographical areas; the high temporal resolution is required to enable effective tracking of targets. The provision of wide area coverage at high frame rates generates data deluge issues; these are especially profound if the sensor is mounted on an airborne platform, with finite data-link bandwidth and processing power that is constrained by size, weight and power (SWAP) limitations. These issues manifest themselves either as bottlenecks in the transmission of the imagery off-board or as latency in the time taken to analyse the data due to limited computational processing power.
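To see why the "data deluge" arises, a back-of-the-envelope rate calculation helps (the sensor and link parameters below are illustrative, not taken from the paper):

```python
# Hypothetical WAMI sensor: 100 megapixels, 2 frames per second, 12 bits/pixel.
pixels = 100e6
frame_rate = 2.0        # Hz
bit_depth = 12          # bits per pixel

raw_bits_per_s = pixels * frame_rate * bit_depth
raw_gbps = raw_bits_per_s / 1e9
print(f"{raw_gbps:.1f} Gbit/s raw")   # 2.4 Gbit/s

# An airborne data link carries far less, so on-board compression and/or
# processing (e.g. sending only detections and image chips) is unavoidable.
link_mbps = 50.0
reduction = raw_bits_per_s / (link_mbps * 1e6)
print(f"~{reduction:.0f}:1 reduction needed for a {link_mbps:.0f} Mbit/s link")
```

Even generous compression ratios leave a large gap, which is why the paper frames the problem as a trade between off-board transmission bottlenecks and on-board processing latency.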

  7. Imagery Induction in the Pre-Imagery Child. Technical Report No. 282.

    ERIC Educational Resources Information Center

    Levin, Joel R.; And Others

    This study extends some recently acquired knowledge about the development of visual imagery as an associative-learning strategy. Incorporating the present findings into the data already gathered, it appears that as a facilitator, sentence production precedes imagery generation since preoperational children benefit from instructions to engage in…

  8. The development of a land use inventory for regional planning using satellite imagery

    NASA Technical Reports Server (NTRS)

    Hessling, A. H.; Mara, T. G.

    1975-01-01

    Water quality planning in Ohio, Kentucky, and Indiana is reviewed in terms of use of land use data and satellite imagery. A land use inventory applicable to water quality planning and developed through computer processing of LANDSAT-1 imagery is described.

  9. Atmospheric Correction of High-Spatial-Resolution Commercial Satellite Imagery Products Using MODIS Atmospheric Products

    NASA Technical Reports Server (NTRS)

    Pagnutti, Mary; Holekamp, Kara; Ryan, Robert E.; Vaughan, Ronand; Russell, Jeff; Prados, Don; Stanley, Thomas

    2005-01-01

    Remotely sensed ground reflectance is the foundation of any interoperability or change detection technique. Satellite intercomparisons and accurate vegetation indices, such as the Normalized Difference Vegetation Index (NDVI), require the generation of accurate reflectance maps (NDVI is used to describe or infer a wide variety of biophysical parameters and is defined in terms of near-infrared (NIR) and red band reflectances). Accurate reflectance-map generation from satellite imagery relies on the removal of solar and satellite geometry and of atmospheric effects and is generally referred to as atmospheric correction. Atmospheric correction of remotely sensed imagery to ground reflectance has been widely applied to a few systems only. The ability to obtain atmospherically corrected imagery and products from various satellites is essential to enable widescale use of remotely sensed, multitemporal imagery for a variety of applications. An atmospheric correction approach derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) that can be applied to high-spatial-resolution satellite imagery under many conditions was evaluated to demonstrate a reliable, effective reflectance map generation method. Additional information is included in the original extended abstract.

  10. Viking image processing. [digital stereo imagery and computer mosaicking

    NASA Technical Reports Server (NTRS)

    Green, W. B.

    1977-01-01

    The paper discusses the camera systems capable of recording black and white and color imagery developed for the Viking Lander imaging experiment. Each Viking Lander image consisted of a matrix of numbers with 512 rows and an arbitrary number of columns up to a maximum of about 9,000. Various techniques were used in the processing of the Viking Lander images, including: (1) digital geometric transformation, (2) the processing of stereo imagery to produce three-dimensional terrain maps, and (3) computer mosaicking of distinct processed images. A series of Viking Lander images is included.

  11. Characterizing Intimate Mixtures of Materials in Hyperspectral Imagery with Albedo-based and Kernel-based Approaches

    DTIC Science & Technology

    2015-09-01

    scattering albedo (SSA) according to Hapke theory, assuming bidirectional scattering at nadir look angles, and uses a constrained linear model on the computed... following Hapke (1993) and Mustard and Pieters (1987), assuming the reflectance spectra are bidirectional. SSA spectra were also generated... from AVIRIS data collected during a JPL/USGS campaign in response to the Deepwater Horizon (DWH) oil spill incident. Out of the numerous

  12. Computer vision enhances mobile eye-tracking to expose expert cognition in natural-scene visual-search tasks

    NASA Astrophysics Data System (ADS)

    Keane, Tommy P.; Cahill, Nathan D.; Tarduno, John A.; Jacobs, Robert A.; Pelz, Jeff B.

    2014-02-01

    Mobile eye-tracking provides a rare opportunity to record and elucidate cognition in action. In our research, we are searching for patterns in, and distinctions between, the visual-search performance of experts and novices in the geosciences. Traveling to regions shaped by various geological processes as part of an introductory field studies course in geology, we record the prima facie gaze patterns of experts and novices when they are asked to determine the modes of geological activity that have formed the scene-view presented to them. Recording eye video and scene video in natural settings generates complex imagery that requires advanced applications of computer vision research to generate registrations and mappings between the views of separate observers. By developing such mappings, we can place many observers into a single mathematical space in which to spatio-temporally analyze inter- and intra-subject fixations, saccades, and head motions. While working towards perfecting these mappings, we developed an updated experimental setup that allowed us to statistically analyze intra-subject eye-movement events without the need for a common domain. Through such analyses we are finding statistical differences between novices and experts in these visual-search tasks. In the course of this research we have developed a unified, open-source software framework for the processing, visualization, and interaction of mobile eye-tracking and high-resolution panoramic imagery.

  13. Integration of satellite data with other data for nowcasting

    NASA Astrophysics Data System (ADS)

    Birkenheuer, Daniel L.

    The Program for Regional Observing and Forecasting Services (PROFS) operates its own Geostationary Operational Environmental Satellite (GOES) mode AA and mode A groundstations and generates both VISSR (Visible and Infrared Spin-Scan Radiometer) and VAS (VISSR Atmospheric Sounder) image products for its advanced meteorological workstation in real time. Derived VAS temperature soundings are received daily from the University of Wisconsin-Madison. PROFS has been improving its real-time workstation since 1981 and has used it to study mesoscale nowcasting. The workstation provides efficient access, display, and loop control of satellite products and other conventional and advanced meteorological data. Data are integrated by displaying products on standard map projections so that imagery and graphics can be combined at the workstation, by using data from a variety of sources to compute image products, and through machine analysis and modeling. The workstation's capabilities have been assessed during PROFS real-time nowcasting experiments. Nowcasts are made with the workstation, and chase teams track and observe severe weather to evaluate these nowcasts. Five-minute rapid scan visible imagery was found to be quite useful in conjunction with Doppler radar data for nowcasting. In contrast, 30-minute infrared (IR) and VAS data were beneficial for short-range forecasts. Loops of VAS water vapor imagery along with conventional IR imagery at national and regional scales showed the greatest overall utility of the satellite imagery studied. Processed sounding data showed some success depicting unstable regions prior to convection.

  14. Quantum Theory of Three-Dimensional Superresolution Using Rotating-PSF Imagery

    NASA Astrophysics Data System (ADS)

    Prasad, S.; Yu, Z.

    The inverse of the quantum Fisher information (QFI) matrix (and extensions thereof) provides the ultimate lower bound on the variance of any unbiased estimation of a parameter from statistical data, whether of intrinsically quantum mechanical or classical character. We calculate the QFI for Poisson-shot-noise-limited imagery using the rotating PSF, which can localize and resolve point sources fully in all three dimensions. We also propose an experimental approach, based on the use of a computer-generated hologram and projective measurements, to realize the QFI-limited variance for the problem of super-resolving a closely spaced pair of point sources at a highly reduced photon cost. The paper presents a preliminary analysis of the quantum-limited three-dimensional (3D) pair optical super-resolution (OSR) problem, with potential applications to astronomical imaging and 3D space-debris localization.

  15. Mental Imagery in Depression: Phenomenology, Potential Mechanisms, and Treatment Implications.

    PubMed

    Holmes, Emily A; Blackwell, Simon E; Burnett Heyes, Stephanie; Renner, Fritz; Raes, Filip

    2016-01-01

    Mental imagery is an experience like perception in the absence of a percept. It is a ubiquitous feature of human cognition, yet it has been relatively neglected in the etiology, maintenance, and treatment of depression. Imagery abnormalities in depression include an excess of intrusive negative mental imagery; impoverished positive imagery; bias for observer perspective imagery; and overgeneral memory, in which specific imagery is lacking. We consider the contribution of imagery dysfunctions to depressive psychopathology and implications for cognitive behavioral interventions. Treatment advances capitalizing on the representational format of imagery (as opposed to its content) are reviewed, including imagery rescripting, positive imagery generation, and memory specificity training. Consideration of mental imagery can contribute to clinical assessment and imagery-focused psychological therapeutic techniques and promote investigation of underlying mechanisms for treatment innovation. Research into mental imagery in depression is at an early stage. Work that bridges clinical psychology and neuroscience in the investigation of imagery-related mechanisms is recommended.

  16. A spectral-structural bag-of-features scene classifier for very high spatial resolution remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Zhao, Bei; Zhong, Yanfei; Zhang, Liangpei

    2016-06-01

    Land-use classification of very high spatial resolution remote sensing (VHSR) imagery is one of the most challenging tasks in the field of remote sensing image processing. However, the land-use classification is hard to be addressed by the land-cover classification techniques, due to the complexity of the land-use scenes. Scene classification is considered to be one of the expected ways to address the land-use classification issue. The commonly used scene classification methods of VHSR imagery are all derived from the computer vision community that mainly deal with terrestrial image recognition. Differing from terrestrial images, VHSR images are taken by looking down with airborne and spaceborne sensors, which leads to the distinct light conditions and spatial configuration of land cover in VHSR imagery. Considering the distinct characteristics, two questions should be answered: (1) Which type or combination of information is suitable for the VHSR imagery scene classification? (2) Which scene classification algorithm is best for VHSR imagery? In this paper, an efficient spectral-structural bag-of-features scene classifier (SSBFC) is proposed to combine the spectral and structural information of VHSR imagery. SSBFC utilizes the first- and second-order statistics (the mean and standard deviation values, MeanStd) as the statistical spectral descriptor for the spectral information of the VHSR imagery, and uses dense scale-invariant feature transform (SIFT) as the structural feature descriptor. From the experimental results, the spectral information works better than the structural information, while the combination of the spectral and structural information is better than any single type of information. Taking the characteristic of the spatial configuration into consideration, SSBFC uses the whole image scene as the scope of the pooling operator, instead of the scope generated by a spatial pyramid (SP) commonly used in terrestrial image classification. 
The experimental results show that the whole image as the scope of the pooling operator performs better than the scope generated by SP. In addition, SSBFC codes and pools the spectral and structural features separately to avoid mutual interruption between the spectral and structural features. The coding vectors of spectral and structural features are then concatenated into a final coding vector. Finally, SSBFC classifies the final coding vector by support vector machine (SVM) with a histogram intersection kernel (HIK). Compared with the latest scene classification methods, the experimental results with three VHSR datasets demonstrate that the proposed SSBFC performs better than the other classification methods for VHSR image scenes.
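Two of the concrete ingredients named in the abstract, the MeanStd spectral descriptor and the histogram intersection kernel (HIK), are simple to state; a sketch with hypothetical image patches (not the paper's data or full SSBFC pipeline):

```python
import numpy as np

def meanstd_descriptor(patch):
    """First- and second-order statistics (per-band mean and standard deviation),
    the statistical spectral descriptor used by SSBFC."""
    bands = patch.reshape(-1, patch.shape[-1])
    return np.concatenate([bands.mean(0), bands.std(0)])

def histogram_intersection_kernel(A, B):
    """HIK between rows of A and rows of B: K(x, y) = sum_i min(x_i, y_i)."""
    return np.array([[np.minimum(a, b).sum() for b in B] for a in A])

# Hypothetical 4x4 patches with 3 spectral bands.
rng = np.random.default_rng(1)
p1 = rng.random((4, 4, 3))
p2 = rng.random((4, 4, 3))

d1 = meanstd_descriptor(p1)          # length 6: 3 means + 3 stds
d2 = meanstd_descriptor(p2)

K = histogram_intersection_kernel(np.array([d1]), np.array([d1, d2]))
# For non-negative vectors, self-similarity is maximal: K(x, x) = sum(x) >= K(x, y).
print(K.shape)   # (1, 2)
```

In SSBFC these descriptors (together with dense SIFT codes pooled over the whole image) form the final coding vector that is fed to an SVM using this kernel.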

  17. Improving the Accuracy and Training Speed of Motor Imagery Brain-Computer Interfaces Using Wavelet-Based Combined Feature Vectors and Gaussian Mixture Model-Supervectors.

    PubMed

    Lee, David; Park, Sang-Hoon; Lee, Sang-Goog

    2017-10-07

    In this paper, we propose a set of wavelet-based combined feature vectors and a Gaussian mixture model (GMM)-supervector to enhance training speed and classification accuracy in motor imagery brain-computer interfaces. The proposed method is configured as follows: first, wavelet transforms are applied to extract the feature vectors for identification of motor imagery electroencephalography (EEG) and principal component analyses are used to reduce the dimensionality of the feature vectors and linearly combine them. Subsequently, the GMM universal background model is trained by the expectation-maximization (EM) algorithm to purify the training data and reduce its size. Finally, a purified and reduced GMM-supervector is used to train the support vector machine classifier. The performance of the proposed method was evaluated for three different motor imagery datasets in terms of accuracy, kappa, mutual information, and computation time, and compared with the state-of-the-art algorithms. The results from the study indicate that the proposed method achieves high accuracy with a small amount of training data compared with the state-of-the-art algorithms in motor imagery EEG classification.
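A hedged sketch of wavelet-based feature extraction in the spirit of the abstract, using a hand-rolled Haar transform on a synthetic EEG-like signal (the paper's actual wavelet family, decomposition levels, and PCA/GMM stages are not reproduced here):

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform:
    returns (approximation, detail) coefficients."""
    s = np.asarray(signal, dtype=float)
    a = (s[0::2] + s[1::2]) / np.sqrt(2)
    d = (s[0::2] - s[1::2]) / np.sqrt(2)
    return a, d

def wavelet_energy_features(signal, levels=3):
    """Energies of the detail subbands plus the final approximation, as a
    simple stand-in for wavelet-based EEG feature vectors."""
    feats = []
    a = np.asarray(signal, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        feats.append(np.sum(d ** 2))
    feats.append(np.sum(a ** 2))
    return np.array(feats)

# Synthetic EEG-like segment: a 10 Hz oscillation plus noise, 256 samples.
rng = np.random.default_rng(2)
x = np.sin(2 * np.pi * 10 * np.arange(256) / 128.0) + 0.1 * rng.normal(size=256)

feats = wavelet_energy_features(x)
# The Haar transform is orthonormal, so subband energies sum to the signal energy.
print(len(feats), np.allclose(feats.sum(), np.sum(x ** 2)))
```

In the proposed method such per-channel feature vectors are reduced by PCA, linearly combined, and then modeled by the GMM universal background model before SVM training.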

  18. Visualizing Cloud Properties and Satellite Imagery: A Tool for Visualization and Information Integration

    NASA Astrophysics Data System (ADS)

    Chee, T.; Nguyen, L.; Smith, W. L., Jr.; Spangenberg, D.; Palikonda, R.; Bedka, K. M.; Minnis, P.; Thieman, M. M.; Nordeen, M.

    2017-12-01

    Providing public access to research products, including cloud macro- and microphysical properties and satellite imagery, is a key concern for the NASA Langley Research Center Cloud and Radiation Group. This work describes a web-based visualization tool and API that allow end users to easily create customized cloud product and satellite imagery, ground site data, and satellite ground track information, all generated dynamically. The tool has two uses: to visualize the dynamically created imagery, and to provide direct access to the dynamically generated imagery at a later time. Internally, we draw on our practical experience building large, scalable applications to develop a system that scales well and can be deployed on the cloud. We build upon the group's experience making real-time and historical satellite cloud product information, satellite imagery, ground site data, and satellite track information accessible and easily searchable. This tool is the culmination of our prior experience with dynamic imagery generation and provides a way to build a "mash-up" of dynamically generated imagery and related kinds of information, visualized together to add value to disparate but related sources. In support of NASA strategic goals, our group aims to make as much scientific knowledge, observations, and products as possible available to the citizen science, research, and interested communities, as well as to automated systems that acquire the same information for data mining or other analytic purposes. This tool and the underlying APIs provide a valuable research resource to a wide audience, both as a standalone tool and as an easily accessed data source that can be mined or used with existing tools.

  19. The Phenomenology and Generation of Positive Mental Imagery in Early Psychosis.

    PubMed

    Laing, Jennifer; Morland, Tristan; Fornells-Ambrojo, Miriam

    2016-11-01

    Theoretical models of depression and bipolar disorder emphasise the importance of positive mental imagery in mood and behaviour. Distressing, intrusive images are common in psychosis; however, little is known about positive imagery experiences or their association with clinical symptoms. The aim of the current study was to examine the phenomenology of positive imagery in early psychosis and the relationship between the characteristics of positive, future-oriented imagery and symptom severity. Characteristics, thematic content and appraisals of recent self-reported images were examined in 31 people with early psychosis. The vividness and perceived likelihood of deliberately generated, future-oriented images were investigated in relation to clinical symptoms. Eighty-four percent of participants reported experiencing a recent positive image. Themes included the achievement of personal goals, spending enjoyable time with peers and family, loving, intimate relationships and escape from current circumstances. The vividness and perceived likelihood of generated prospective imagery were negatively correlated with levels of depression and social anxiety. The relationship between emotional problems and the ability to imagine positive, future events may have implications for motivation, mood and goal-directed behaviour in psychosis. Everyday experiences of positive imagery may represent the simulation of future goals, attempts to cope with or avoid aversive experiences, or idealised fantasy. In summary, the majority of participants experienced a recent positive image, with themes related to goal attainment and social relationships; depression and social anxiety levels were correlated with the vividness and perceived likelihood of intentionally generated positive future-oriented images. The assessment of positive imagery in early psychosis appears warranted and may provide insights regarding individual coping strategies, values and goals.
Copyright © 2015 John Wiley & Sons, Ltd.

  20. The Remote Analysis Station (RAS) as an instructional system

    NASA Technical Reports Server (NTRS)

    Rogers, R. H.; Wilson, C. L.; Dye, R. H.; Jaworski, E.

    1981-01-01

    "Hands-on" training in LANDSAT data analysis techniques can be obtained using a desk-top, interactive remote analysis station (RAS) which consists of a color CRT imagery display, with alphanumeric overwrite and keyboard, as well as a cursor controller and modem. This portable station can communicate via modem and dial-up telephone with a host computer at 1200 baud or it can be hardwired to a host computer at 9600 baud. A Z80 microcomputer controls the display refresh memory and remote station processing. LANDSAT data is displayed as three-band false-color imagery, one-band color-sliced imagery, or color-coded processed imagery. Although the display memory routinely operates at 256 x 256 picture elements, a display resolution of 128 x 128 can be selected to fill the display faster. In the false color mode the computer packs the data into one 8-bit character. When the host is not sending pictorial information the characters sent are in ordinary ASCII code. System capabilities are described.

  1. Laying the foundation to use Raspberry Pi 3 V2 camera module imagery for scientific and engineering purposes

    NASA Astrophysics Data System (ADS)

    Pagnutti, Mary; Ryan, Robert E.; Cazenavette, George; Gold, Maxwell; Harlan, Ryan; Leggett, Edward; Pagnutti, James

    2017-01-01

    A comprehensive radiometric characterization of raw-data format imagery acquired with the Raspberry Pi 3 and V2.1 camera module is presented. The Raspberry Pi is a high-performance single-board computer designed to educate and solve real-world problems. This small computer supports a camera module that uses a Sony IMX219 8 megapixel CMOS sensor. This paper shows that scientific and engineering-grade imagery can be produced with the Raspberry Pi 3 and its V2.1 camera module. Raw imagery is shown to be linear with exposure and gain (ISO), which is essential for scientific and engineering applications. Dark frame, noise, and exposure stability assessments along with flat fielding results, spectral response measurements, and absolute radiometric calibration results are described. This low-cost imaging sensor, when calibrated to produce scientific quality data, can be used in computer vision, biophotonics, remote sensing, astronomy, high dynamic range imaging, and security applications, to name a few.
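The linearity claim above is the kind of property one can verify with a least-squares fit of mean dark-corrected signal against exposure time. The sketch below uses illustrative numbers, not the paper's measurements:

```python
# Fit mean dark-corrected signal (digital numbers) against exposure time
# and report the coefficient of determination R^2; a value near 1 supports
# the linearity needed for radiometric work. Sample data are illustrative.

def linear_fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1.0 - ss_res / ss_tot

exposures_ms = [1, 2, 4, 8, 16]                   # illustrative exposure times
mean_dn = [52.0, 103.9, 208.1, 416.0, 831.8]      # illustrative mean signals
slope, intercept, r2 = linear_fit(exposures_ms, mean_dn)
```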

  2. Deploying a quantum annealing processor to detect tree cover in aerial imagery of California

    PubMed Central

    Basu, Saikat; Ganguly, Sangram; Michaelis, Andrew; Mukhopadhyay, Supratik; Nemani, Ramakrishna R.

    2017-01-01

    Quantum annealing is an experimental and potentially breakthrough computational technology for handling hard optimization problems, including problems of computer vision. We present a case study in training a production-scale classifier of tree cover in remote sensing imagery, using early-generation quantum annealing hardware built by D-Wave Systems, Inc. Beginning within a known boosting framework, we train decision stumps on texture features and vegetation indices extracted from four-band, one-meter-resolution aerial imagery from the state of California. We then impose a regularized quadratic training objective to select an optimal voting subset from among these stumps. The votes of the subset define the classifier. For optimization, the logical variables in the objective function map to quantum bits in the hardware device, while quadratic couplings are encoded as the strengths of physical interactions between the quantum bits. Hardware design limits the number of couplings between these basic physical entities to five or six. To account for this limitation in mapping large problems to the hardware architecture, we propose a truncation and rescaling of the training objective through a trainable metaparameter. The boosting process on our basic 108- and 508-variable problems, thus constituted, returns classifiers that incorporate a diverse range of color- and texture-based metrics and discriminate tree cover with accuracies as high as 92% in validation and 90% on a test scene encompassing the open space preserves and dense suburban build of Mill Valley, CA. PMID:28241028
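The stump-selection step described above amounts to minimizing a quadratic objective over binary variables (a QUBO), which the annealing hardware samples physically. A toy version, solved by brute force with illustrative coupling weights rather than the paper's trained values:

```python
# Toy QUBO: choose a subset of 3 decision stumps by minimizing
# sum_ij Q[i][j] * x_i * x_j over binary x. Diagonal terms are per-stump
# biases (negative = useful stump); off-diagonal terms penalize redundancy.
import itertools

Q = [[-2.0, 0.5, 0.0],
     [0.5, -1.5, 0.8],
     [0.0, 0.8, -1.0]]          # illustrative weights, not from the paper

def qubo_energy(bits, Q):
    return sum(Q[i][j] * bits[i] * bits[j]
               for i in range(len(bits)) for j in range(len(bits)))

# Brute force stands in for the annealer on this tiny instance.
best = min(itertools.product([0, 1], repeat=3), key=lambda b: qubo_energy(b, Q))
```

Here the minimum-energy assignment selects stumps 1 and 3 and drops stump 2, whose usefulness is outweighed by its redundancy penalties.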

  3. LANDSAT 4 band 6 data evaluation

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Computer-modelled atmospheric transmittance and path radiance values were compared with empirical values derived from aircraft underflight data. Aircraft thermal infrared imagery and calibration data were available on two dates, as were corresponding atmospheric radiosonde data. The radiosonde data were used as input to the LOWTRAN 5A code, and the aircraft data were calibrated and used to generate analogous measurements. The results of the analysis indicate a tendency for the LOWTRAN model to underestimate atmospheric path radiance and overestimate atmospheric transmittance.

  4. Virtual reality and planetary exploration

    NASA Technical Reports Server (NTRS)

    Mcgreevy, Michael W.

    1992-01-01

    NASA-Ames is intensively developing virtual-reality (VR) capabilities that can extend and augment computer-generated and remote spatial environments. VR is envisioned not only as a basis for improving the human/machine interactions involved in planetary exploration, but also as a medium for more widespread sharing of the experience of exploration, thereby broadening the support base for lunar and planetary exploration endeavors. Imagery representative of Mars is being gathered at such terrestrial sites as Antarctica and Death Valley for VR presentation.

  5. Motion/imagery secure cloud enterprise architecture analysis

    NASA Astrophysics Data System (ADS)

    DeLay, John L.

    2012-06-01

    Cloud computing, with storage virtualization and new service-oriented architectures, brings a new perspective to the design of a distributed motion imagery and persistent surveillance enterprise. Our existing research focuses mainly on content management, distributed analytics, and WAN distributed-cloud networking performance issues of cloud-based technologies. The potential of leveraging cloud-based technologies for hosting motion imagery, imagery and analytics workflows for DOD and security applications is relatively unexplored. This paper examines technologies for managing, storing, processing and disseminating motion imagery and imagery within a distributed network environment. Finally, we propose areas for future research in distributed cloud content management enterprises.

  6. A Deep Learning Scheme for Motor Imagery Classification based on Restricted Boltzmann Machines.

    PubMed

    Lu, Na; Li, Tengfei; Ren, Xiaodong; Miao, Hongyu

    2017-06-01

    Motor imagery classification is an important topic in brain-computer interface (BCI) research that enables the recognition of a subject's intention to, e.g., implement prosthesis control. The brain dynamics of motor imagery are usually measured by electroencephalography (EEG) as nonstationary time series of low signal-to-noise ratio. Although a variety of methods have been previously developed to learn EEG signal features, the deep learning idea has rarely been explored to generate new representations of EEG features and achieve further performance improvement for motor imagery classification. In this study, a novel deep learning scheme based on the restricted Boltzmann machine (RBM) is proposed. Specifically, frequency-domain representations of EEG signals obtained via fast Fourier transform (FFT) and wavelet packet decomposition (WPD) are used to train three RBMs. These RBMs are then stacked up with an extra output layer to form a four-layer neural network, named the frequential deep belief network (FDBN). The output layer employs softmax regression to accomplish the classification task, and the conjugate gradient method and backpropagation are used to fine-tune the FDBN. Extensive and systematic experiments have been performed on public benchmark datasets, and the results show that the performance improvement of FDBN over other selected state-of-the-art methods is statistically significant. Several findings that may be of significant interest to the BCI community are also presented in this article.
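A minimal sketch of the FDBN forward pass as described: three stacked sigmoid layers (standing in for the RBM-pretrained layers) feeding a softmax output. The layer sizes and random weights below are placeholders, not the trained network:

```python
# Forward pass of a 4-layer "stacked RBM + softmax" network. In the real
# FDBN the weights come from RBM pretraining and conjugate-gradient
# fine-tuning; here they are random placeholders for illustration.
import numpy as np

rng = np.random.default_rng(0)
sizes = [64, 32, 16, 8]       # input features + three hidden layers (illustrative)
n_classes = 2                 # e.g. left- vs right-hand motor imagery

weights = [rng.standard_normal((a, b)) * 0.1 for a, b in zip(sizes, sizes[1:])]
biases = [np.zeros(b) for b in sizes[1:]]
W_out = rng.standard_normal((sizes[-1], n_classes)) * 0.1
b_out = np.zeros(n_classes)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def fdbn_forward(x):
    h = x
    for W, b in zip(weights, biases):    # propagate through the stacked layers
        h = sigmoid(h @ W + b)
    return softmax(h @ W_out + b_out)    # softmax regression output layer

probs = fdbn_forward(rng.standard_normal(64))  # class probabilities, sum to 1
```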

  7. Trace gas detection in hyperspectral imagery using the wavelet packet subspace

    NASA Astrophysics Data System (ADS)

    Salvador, Mark A. Z.

    This dissertation describes research into a new remote sensing method, based on the wavelet packet transform, to detect trace gases in hyperspectral and ultra-spectral data. It attempts to improve both the computational tractability and the detection of trace gases in airborne and spaceborne spectral imagery. Atmospheric trace gas research supports various Earth science disciplines, including climatology, vulcanology, pollution monitoring, natural disasters, and intelligence and military applications. Hyperspectral and ultra-spectral data significantly increase the data glut of existing Earth science data sets; spaceborne spectral data in particular offer significantly higher spectral resolution while supporting daily global collections of the Earth. Applying the wavelet packet transform to the spectral space of hyperspectral and ultra-spectral imagery data potentially improves remote sensing detection algorithms and facilitates the parallelization of these methods for high-performance computing. This research seeks two science goals: (1) developing a new spectral imagery detection algorithm, and (2) facilitating the parallelization of trace gas detection in spectral imagery data.
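The wavelet packet transform that the dissertation builds on differs from the ordinary wavelet transform in that it recursively splits both the approximation and the detail branches, yielding a uniform tiling of the spectrum. A minimal Haar-based sketch (illustrative, not the dissertation's implementation):

```python
# One Haar wavelet packet tree: each node splits into orthonormal
# approximation/detail halves, and the split recurses on BOTH halves,
# giving 2**depth equal-width subbands.
import math

def haar_split(signal):
    s = 1.0 / math.sqrt(2.0)
    pairs = list(zip(signal[0::2], signal[1::2]))
    approx = [(a + b) * s for a, b in pairs]
    detail = [(a - b) * s for a, b in pairs]
    return approx, detail

def wavelet_packet(signal, depth):
    nodes = [signal]
    for _ in range(depth):
        nodes = [half for node in nodes for half in haar_split(node)]
    return nodes  # 2**depth subbands covering the spectrum

bands = wavelet_packet([4.0, 2.0, 6.0, 8.0, 3.0, 1.0, 5.0, 7.0], depth=2)
```

Because the Haar basis is orthonormal, the total energy of the subband coefficients equals that of the input, which makes the decomposition safe to use for subspace detection statistics.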

  8. CG2Real: Improving the Realism of Computer Generated Images Using a Large Collection of Photographs.

    PubMed

    Johnson, Micah K; Dale, Kevin; Avidan, Shai; Pfister, Hanspeter; Freeman, William T; Matusik, Wojciech

    2011-09-01

    Computer-generated (CG) images have achieved high levels of realism. This realism, however, comes at the cost of long and expensive manual modeling, and often humans can still distinguish between CG and real images. We introduce a new data-driven approach for rendering realistic imagery that uses a large collection of photographs gathered from online repositories. Given a CG image, we retrieve a small number of real images with similar global structure. We identify corresponding regions between the CG and real images using a mean-shift cosegmentation algorithm. The user can then automatically transfer color, tone, and texture from matching regions to the CG image. Our system only uses image processing operations and does not require a 3D model of the scene, making it fast and easy to integrate into digital content creation workflows. Results of a user study show that our hybrid images appear more realistic than the originals.

  9. Analysis of geologic terrain models for determination of optimum SAR sensor configuration and optimum information extraction for exploration of global non-renewable resources. Pilot study: Arkansas Remote Sensing Laboratory, part 1, part 2, and part 3

    NASA Technical Reports Server (NTRS)

    Kaupp, V. H.; Macdonald, H. C.; Waite, W. P.; Stiles, J. A.; Frost, F. S.; Shanmugam, K. S.; Smith, S. A.; Narayanan, V.; Holtzman, J. C. (Principal Investigator)

    1982-01-01

    Computer-generated radar simulations and mathematical geologic terrain models were used to establish the optimum radar sensor operating parameters for geologic research. An initial set of mathematical geologic terrain models was created for three basic landforms and families of simulated radar images were prepared from these models for numerous interacting sensor, platform, and terrain variables. The tradeoffs between the various sensor parameters and the quantity and quality of the extractable geologic data were investigated as well as the development of automated techniques of digital SAR image analysis. Initial work on a texture analysis of SEASAT SAR imagery is reported. Computer-generated radar simulations are shown for combinations of two geologic models and three SAR angles of incidence.

  10. Ocean Wave Energy Estimation Using Active Satellite Imagery as a Solution of Energy Scarce in Indonesia Case Study: Poteran Island's Water, Madura

    NASA Astrophysics Data System (ADS)

    Nadzir, Z. A.; Karondia, L. A.; Jaelani, L. M.; Sulaiman, A.; Pamungkas, A.; Koenhardono, E. S.; Sulisetyono, A.

    2015-10-01

    Ocean wave energy is one of the ocean renewable energy (ORE) sources; it has several advantages over fossil energy and is currently among the most researched energy sources in developed countries. One effort toward mapping ORE potential is computing the energy generated by ocean waves, expressed in watts per unit area, using various observation methods. SAR (Synthetic Aperture Radar) is one of the most actively developed remote sensing methods for monitoring and mapping ocean wave energy potential effectively and quickly. SAR imagery processing can be accomplished not only in remote sensing data applications but also in matrix-processing applications such as MATLAB, using Fast Fourier Transform and band-pass filtering methods in the pre-processing stage. In this research, processing and energy estimation were carried out on ALOS PALSAR satellite imagery acquired on 5/12/2009 using two methods (magnitude and wavelength). These yielded 9 potential locations of ocean wave energy with values between 0-228 W/m2 and 7 potential locations with values between 182-1317 W/m2. After a buffering process with a value of 2 km (to facilitate the construction of power plant installations), 9 sites were estimated to be the most potential locations for ocean wave energy generation, in water with an average depth of 8.058 m and an annual wind speed of 6.553 knots.
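For context, per-metre wave power figures like those above are commonly derived from the deep-water energy-flux formula P = ρ g² Hs² Te / (64π). The wave height and period below are illustrative, not the study's measured values:

```python
# Deep-water wave energy flux per metre of wave crest:
#   P = rho * g^2 * Hs^2 * Te / (64 * pi)
# with rho = seawater density, Hs = significant wave height, Te = energy period.
import math

def wave_power_per_metre(hs_m, te_s, rho=1025.0, g=9.81):
    """Wave energy flux in W per metre of crest (deep-water approximation)."""
    return rho * g ** 2 * hs_m ** 2 * te_s / (64.0 * math.pi)

p = wave_power_per_metre(hs_m=1.0, te_s=6.0)  # roughly 2.9 kW/m for a 1 m, 6 s sea
```

Note the quadratic dependence on wave height: doubling Hs quadruples the available power, which is why small differences in sea state dominate site ranking.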

  11. A randomized trial of computer-based communications using imagery and text information to alter representations of heart disease risk and motivate protective behaviour.

    PubMed

    Lee, Tarryn J; Cameron, Linda D; Wünsche, Burkhard; Stevens, Carey

    2011-02-01

    Advances in web-based animation technologies provide new opportunities to develop graphic health communications for dissemination throughout communities. We developed brief, computer-based programmes about heart disease risk, with both the imagery and text contents guided by the common-sense model (CSM) of self-regulation. The imagery depicts a three-dimensional, beating heart tailored to user-specific information. A 2 × 2 × 4 factorial design was used to manipulate concrete imagery (imagery vs. no imagery) and conceptual information (text vs. no text) about heart disease risk in prevention-oriented programmes, and to assess changes in representations and behavioural motivations from baseline to 2 days, 2 weeks, and 4 weeks post-intervention. Sedentary young adults (N = 80) were randomized to view one of four programmes: imagery plus text, imagery only, text only, or control. Participants completed measures of risk representations, worry, and physical activity and healthy diet intentions and behaviours at baseline, 2 days post-intervention (except behaviours), and 2 weeks (intentions and behaviours only) and 4 weeks later. The imagery contents increased representational beliefs and mental imagery relating to heart disease, worry, and intentions at post-intervention. Increases in sense of coherence (understanding of heart disease) and worry were sustained after 1 month. The imagery contents also increased healthy diet efforts after 2 weeks. The text contents increased beliefs about causal factors, mental images of clogged arteries, and worry at post-intervention, and increased physical activity 2 weeks later and sense of coherence 1 month later. The CSM-based programmes induced short-term changes in risk representations and behavioural motivation. The combination of CSM-based text and imagery appears most effective in instilling risk representations that motivate protective behaviour. ©2010 The British Psychological Society.

  12. KOLAM: a cross-platform architecture for scalable visualization and tracking in wide-area imagery

    NASA Astrophysics Data System (ADS)

    Fraser, Joshua; Haridas, Anoop; Seetharaman, Guna; Rao, Raghuveer M.; Palaniappan, Kannappan

    2013-05-01

    KOLAM is an open, cross-platform, interoperable, scalable and extensible framework supporting a novel multiscale spatiotemporal dual-cache data structure for big data visualization and visual analytics. This paper focuses on the use of KOLAM for target tracking in high-resolution, high-throughput wide-format video, also known as wide-area motion imagery (WAMI). It was originally developed for the interactive visualization of extremely large geospatial imagery of high spatial and spectral resolution. KOLAM is platform, operating system and (graphics) hardware independent, and supports embedded datasets scalable from hundreds of gigabytes to feasibly petabytes in size on clusters, workstations, desktops and mobile computers. In addition to rapid roam, zoom and hyper-jump spatial operations, a large number of simultaneously viewable embedded pyramid layers (also referred to as multiscale or sparse imagery), interactive colormap and histogram enhancement, spherical projection and terrain maps are supported. The KOLAM software architecture was extended to support airborne wide-area motion imagery by organizing spatiotemporal tiles in very large format video frames using a temporal cache of tiled pyramid cached data structures. The current version supports WAMI animation, fast intelligent inspection, trajectory visualization and target tracking (digital tagging); the latter by interfacing with external automatic tracking software. One of the critical needs for working with WAMI is a supervised tracking and visualization tool that allows analysts to digitally tag multiple targets, quickly review and correct tracking results and apply geospatial visual analytic tools to the generated trajectories. One-click manual tracking combined with multiple automated tracking algorithms is available to assist the analyst and increase human effectiveness.

  13. Automatic Mosaicking of Satellite Imagery Considering the Clouds

    NASA Astrophysics Data System (ADS)

    Kang, Yifei; Pan, Li; Chen, Qi; Zhang, Tong; Zhang, Shasha; Liu, Zhang

    2016-06-01

    With the rapid development of high-resolution remote sensing for Earth observation, satellite imagery is widely used in the fields of resource investigation, environment protection, and agricultural research. Image mosaicking is an important part of satellite imagery production. However, the presence of clouds causes two main problems for automatic image mosaicking: 1) image blurring may be introduced during the process of image dodging, and 2) cloudy areas may be passed through by automatically generated seamlines. To address these problems, an automatic mosaicking method is proposed for cloudy satellite imagery in this paper. Firstly, modified Otsu thresholding and morphological processing are employed to extract cloudy areas and obtain the percentage of cloud cover. Then, the cloud detection results are used to optimize the dodging and mosaicking processes, so that the mosaic image is composed of clear-sky areas rather than cloudy areas, and clear-sky areas remain sharp and distortion-free. Chinese GF-1 wide-field-of-view orthoimages are employed as experimental data. The performance of the proposed approach is evaluated in four aspects: the effect of cloud detection, the sharpness of clear-sky areas, the rationality of seamlines, and efficiency. The evaluation results demonstrate that the mosaic image obtained by our method has fewer clouds, better internal color consistency and better visual clarity than that obtained by the traditional method. The time consumed by the proposed method for 17 scenes of GF-1 orthoimages is within 4 hours on a desktop computer. This efficiency can meet general production requirements for massive satellite imagery.
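The Otsu thresholding step used for cloud extraction picks the grey level that maximizes between-class variance of the image histogram. The abstract does not detail the paper's modified variant, so the sketch below shows the classic form on a toy image, with bright values standing in for cloud:

```python
# Classic Otsu threshold: sweep all grey levels and keep the one that
# maximizes the between-class variance w_bg * w_fg * (m_bg - m_fg)^2.

def otsu_threshold(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w_bg, sum_bg = 0, -1.0, 0, 0.0
    for t in range(levels):
        w_bg += hist[t]                   # background pixel count
        if w_bg == 0:
            continue
        w_fg = total - w_bg               # foreground pixel count
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        m_bg = sum_bg / w_bg
        m_fg = (sum_all - sum_bg) / w_fg
        var = w_bg * w_fg * (m_bg - m_fg) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

image = [10, 12, 11, 13, 200, 210, 205, 198]     # dark ground, bright cloud
t = otsu_threshold(image)
cloud_fraction = sum(p > t for p in image) / len(image)  # percentage of cloud cover
```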

  14. Kinesthetic perception based on integration of motor imagery and afferent inputs from antagonistic muscles with tendon vibration.

    PubMed

    Shibata, E; Kaneko, F

    2013-04-29

    The perceptual integration of afferent inputs from two antagonistic muscles, or the perceptual integration of afferent input and motor imagery, is related to the generation of a kinesthetic sensation. However, it has not been clarified how, or indeed whether, a kinesthetic perception would be generated by motor imagery if afferent inputs from two antagonistic muscles were simultaneously induced by tendon vibration. The purpose of this study was to investigate how a kinesthetic perception would be generated by motor imagery during co-vibration of the two antagonistic muscles at the same frequency. Healthy subjects participated in this experiment. Illusory movement was evoked by tendon vibration. Next, the subjects imagined wrist flexion movement simultaneously with tendon vibration. Wrist flexor and extensor muscles were vibrated according to 4 patterns such that the difference between the two vibration frequencies was zero. After each trial, the perceived movement sensations were quantified on the basis of the velocity and direction of the ipsilateral hand-tracking movements. When the difference in frequency applied to the wrist flexor and the extensor was 0Hz, no subjects perceived movements without motor imagery. However, during motor imagery, the flexion velocity of the perceived movement was higher than the flexion velocity without motor imagery. This study clarified that the afferent inputs from the muscle spindle interact with motor imagery to evoke a kinesthetic perception, even when the difference in frequency applied to the wrist flexor and extensor was 0Hz. Furthermore, the kinesthetic perception resulting from integration of vibration and motor imagery increased depending on the vibration frequency applied to the two antagonistic muscles. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  15. Time Counts! Some Comments on System Latency in Head-Referenced Displays

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.; Adelstein, Bernard D.

    2013-01-01

    System response latency is a prominent characteristic of human-computer interaction. Laggy systems are, however, not simply annoying; they substantially reduce user productivity. The impact of latency on head-referenced display systems, particularly head-mounted systems, is especially troublesome, since it not only can interfere with dynamic registration in augmented-reality displays but also can, in some cases, indirectly contribute to motion sickness. We summarize several experiments using standard psychophysical discrimination techniques that suggest what system latencies will be required to achieve perceptual stability for spatially referenced computer-generated imagery. In conclusion, we speculate about other system performance characteristics that one would hope to have in a dream augmented-reality system.

  16. 3-D components of a biological neural network visualized in computer generated imagery. I - Macular receptive field organization

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.; Cutler, Lynn; Meyer, Glenn; Lam, Tony; Vaziri, Parshaw

    1990-01-01

    Computer-assisted, 3-dimensional reconstructions of macular receptive fields and of their linkages into a neural network have revealed new information about macular functional organization. Both type I and type II hair cells are included in the receptive fields. The fields are rounded, oblong, or elongated, but gradations between categories are common. Cell polarizations are divergent. Morphologically, each calyx of oblong and elongated fields appears to be an information processing site. Intrinsic modulation of information processing is extensive and varies with the kind of field. Each reconstructed field differs in detail from every other, suggesting that an element of randomness is introduced developmentally and contributes to endorgan adaptability.

  17. Automated Production of Movies on a Cluster of Computers

    NASA Technical Reports Server (NTRS)

    Nail, Jasper; Le, Duong; Nail, William L.; Nail, William

    2008-01-01

    A method of accelerating and facilitating production of video and film motion-picture products, and software and generic designs of computer hardware to implement the method, are undergoing development. The method provides for automation of most of the tedious and repetitive tasks involved in editing and otherwise processing raw digitized imagery into final motion-picture products. The method was conceived to satisfy requirements, in industrial and scientific testing, for rapid processing of multiple streams of simultaneously captured raw video imagery into documentation in the form of edited video imagery and video-derived data products for technical review and analysis. In the production of such video technical documentation, unlike in production of motion-picture products for entertainment, (1) it is often necessary to produce multiple video-derived data products, (2) there are usually no second chances to repeat acquisition of raw imagery, (3) it is often desired to produce final products within minutes rather than hours, days, or months, and (4) consistency and quality, rather than aesthetics, are the primary criteria for judging the products. In the present method, the workflow has both serial and parallel aspects: processing can begin before all the raw imagery has been acquired, each video stream can be subjected to different stages of processing simultaneously on different computers that may be grouped into one or more cluster(s), and the final product may consist of multiple video streams. Results of processing on different computers are shared, so that workers can collaborate effectively.

  18. A square root ensemble Kalman filter application to a motor-imagery brain-computer interface.

    PubMed

    Kamrunnahar, M; Schiff, S J

    2011-01-01

    Here we investigated the application of a non-linear ensemble Kalman filter (SPKF) to a motor imagery brain-computer interface (BCI). A square root central difference Kalman filter (SR-CDKF) was used as an approach for brain-state estimation in motor imagery task performance, using scalp electroencephalography (EEG) signals. Healthy human subjects imagined left vs. right hand movements and tongue vs. bilateral toe movements while scalp EEG signals were recorded. Offline data analysis was conducted for training the model as well as for decoding the imagined movements. Preliminary results indicate the feasibility of this approach, with a decoding accuracy of 78%-90% for the hand movements and 70%-90% for the tongue-toes movements. Ongoing research includes online BCI applications of this approach as well as combined state and parameter estimation using this algorithm with different system dynamic models.
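The SR-CDKF is a derivative-free nonlinear estimator; as a much simpler illustration of the predict/update cycle it shares with all Kalman-type filters, here is a scalar linear Kalman filter tracking a roughly constant state from noisy samples (all values illustrative, not the study's EEG model):

```python
# Scalar linear Kalman filter: predict (state assumed constant, process
# noise q), then update with each measurement z (measurement noise r).

def kalman_1d(measurements, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    x, p = x0, p0
    for z in measurements:
        p = p + q                 # predict: variance grows by process noise
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update state with measurement residual
        p = (1.0 - k) * p         # update variance
    return x, p

x_est, p_est = kalman_1d([1.1, 0.9, 1.05, 0.97, 1.02])  # converges toward ~1.0
```

The square-root and central-difference machinery in the SR-CDKF exists to carry this same cycle over to nonlinear dynamics while keeping the covariance factorization numerically stable.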

  19. Generating high temporal and spatial resolution thermal band imagery using robust sharpening approach

    USDA-ARS?s Scientific Manuscript database

    Thermal infrared band imagery provides key information for detecting wild fires, mapping land surface energy fluxes and evapotranspiration, monitoring urban heat fluxes and drought monitoring. Thermal infrared (TIR) imagery at fine resolution is required for field scale applications. However, therma...

  20. ERTS-B imagery interpretation techniques in the Tennessee Valley

    NASA Technical Reports Server (NTRS)

    Gonzalez, R. C. (Principal Investigator)

    1973-01-01

    There are no author-identified significant results in this report. The proposed investigation is a continuation of an ERTS-1 project. The principal missions are to serve as the principal supporter on computer and image processing problems for the multidisciplinary ERTS effort of the University of Tennessee, and to carry out research in improved methods for the computer processing, enhancement, and recognition of ERTS imagery.

  1. Land utilization and ecological aspects in the Sylhet-Mymensingh Haor Region of Bangladesh: An analysis of LANDSAT data

    NASA Technical Reports Server (NTRS)

    Chowdhury, M. I.; Elahi, K. M.

    1977-01-01

    The use of remote sensing data from LANDSAT (ERTS) imageries in identifying, evaluating and mapping land use patterns of the Haor area in Bangladesh was investigated. Selected cloud-free imageries of the area for the period 1972-75 were studied, mostly in bands 4, 5 and 7. The analysis combined human interpretation and computer processing of information from ground observations, aerial photographs taken during this period, and space imageries.

  2. On-Board, Real-Time Preprocessing System for Optical Remote-Sensing Imagery

    PubMed Central

    Qi, Baogui; Zhuang, Yin; Chen, He; Chen, Liang

    2018-01-01

    With the development of remote-sensing technology, optical remote-sensing imagery processing has played an important role in many application fields, such as geological exploration and natural disaster prevention. However, relative radiation correction and geometric correction are key steps in preprocessing because raw image data without preprocessing will cause poor performance during application. Traditionally, remote-sensing data are downlinked to the ground station, preprocessed, and distributed to users. This process generates long delays, which is a major bottleneck in real-time applications for remote-sensing data. Therefore, on-board, real-time image preprocessing is greatly desired. In this paper, a real-time processing architecture for on-board imagery preprocessing is proposed. First, a hierarchical optimization and mapping method is proposed to realize the preprocessing algorithm in a hardware structure, which can effectively reduce the computation burden of on-board processing. Second, a co-processing system using a field-programmable gate array (FPGA) and a digital signal processor (DSP; altogether, FPGA-DSP) based on optimization is designed to realize real-time preprocessing. The experimental results demonstrate the potential application of our system to an on-board processor, for which resources and power consumption are limited. PMID:29693585
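Relative radiometric correction, one of the preprocessing steps named above, reduces in its simplest form to a per-detector gain/offset normalization of the raw digital numbers. The calibration coefficients below are illustrative placeholders, not values from the paper:

```python
# Per-detector (per-column) relative radiometric correction of a pushbroom
# image: corrected = gain * DN + offset, one coefficient pair per detector.

def radiometric_correct(dn_rows, gains, offsets):
    """Apply per-column gain/offset to rows of raw digital numbers."""
    return [[g * dn + o for dn, g, o in zip(row, gains, offsets)]
            for row in dn_rows]

raw = [[100, 102, 98],
       [50, 52, 48]]                      # illustrative raw DN rows
corrected = radiometric_correct(raw,
                                gains=[1.0, 0.98, 1.02],
                                offsets=[0.0, 1.0, -1.0])
```

This kind of fixed-coefficient, element-wise arithmetic is exactly what maps well onto FPGA pipelines, since every output sample depends on one input sample and two constants.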

  3. Evaluation of Alternate Concepts for Synthetic Vision Flight Displays With Weather-Penetrating Sensor Image Inserts During Simulated Landing Approaches

    NASA Technical Reports Server (NTRS)

    Parrish, Russell V.; Busquets, Anthony M.; Williams, Steven P.; Nold, Dean E.

    2003-01-01

    A simulation study was conducted in 1994 at Langley Research Center that used 12 commercial airline pilots repeatedly flying complex Microwave Landing System (MLS)-type approaches to parallel runways under Category IIIc weather conditions. Two sensor insert concepts of 'Synthetic Vision Systems' (SVS) were used in the simulated flights, with a more conventional electro-optical display (similar to a Head-Up Display with raster capability for sensor imagery), flown under less restrictive visibility conditions, used as a control condition. The SVS concepts combined the sensor imagery with a computer-generated image (CGI) of an out-the-window scene based on an onboard airport database. Various scenarios involving runway traffic incursions (taxiing aircraft and parked fuel trucks) and navigational system position errors (both static and dynamic) were used to assess the pilots' ability to manage the approach task with the display concepts. The two SVS sensor insert concepts contrasted the simple overlay of sensor imagery on the CGI scene without additional image processing (the SV display) to the complex integration (the AV display) of the CGI scene with pilot-decision aiding using both object and edge detection techniques for detection of obstacle conflicts and runway alignment errors.

  4. Development of a Novel Motor Imagery Control Technique and Application in a Gaming Environment.

    PubMed

    Li, Ting; Zhang, Jinhua; Xue, Tao; Wang, Baozeng

    2017-01-01

    We present a methodology for a hybrid brain-computer interface (BCI) system, with recognition of motor imagery (MI) based on EEG and eye-blink EOG signals. We tested the BCI system in a 3D Tetris environment and an analogous 2D game environment. To enhance players' BCI control ability, the study focused on feature extraction from EEG and on a control strategy supporting Game-BCI system operation. We compared the numerical differences between spatial features extracted with the common spatial pattern (CSP) method and the proposed multifeature extraction. To demonstrate the effectiveness of the 3D game environment at enhancing players' ability to produce event-related desynchronization (ERD) and event-related synchronization (ERS), we used the 2D screen game as a comparison condition. According to a series of statistical results, the group performing MI in the 3D Tetris environment showed more significant improvements in generating MI-associated ERD/ERS. Analysis of game scores indicated that players' scores showed an obvious upward trend in the 3D Tetris environment but no comparable trend in the 2D screen game. This suggests that an immersive environment with rich control for MI would improve the associated mental imagery and enhance MI-based BCI skills.

  5. On-Board, Real-Time Preprocessing System for Optical Remote-Sensing Imagery.

    PubMed

    Qi, Baogui; Shi, Hao; Zhuang, Yin; Chen, He; Chen, Liang

    2018-04-25

    With the development of remote-sensing technology, optical remote-sensing imagery processing plays an important role in many application fields, such as geological exploration and natural disaster prevention. Relative radiometric correction and geometric correction are key preprocessing steps, because raw image data used without preprocessing yield poor performance in downstream applications. Traditionally, remote-sensing data are downlinked to a ground station, preprocessed, and then distributed to users. This process introduces long delays and is a major bottleneck for real-time applications of remote-sensing data. On-board, real-time image preprocessing is therefore greatly desired. In this paper, a real-time processing architecture for on-board imagery preprocessing is proposed. First, a hierarchical optimization and mapping method is proposed to realize the preprocessing algorithm in a hardware structure, which effectively reduces the computational burden of on-board processing. Second, a co-processing system using a field-programmable gate array (FPGA) and a digital signal processor (DSP; altogether, FPGA-DSP) based on this optimization is designed to realize real-time preprocessing. The experimental results demonstrate the potential application of our system to an on-board processor, for which resources and power consumption are limited.
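
    Relative radiometric correction of pushbroom imagery is commonly performed by normalizing each detector column to common statistics. The sketch below shows one standard variant, moment matching, in plain Python; the tiny test image and its statistics are illustrative only, since the paper targets an FPGA-DSP implementation rather than any particular correction formula:

```python
from statistics import mean, pstdev

def moment_match(image):
    """Relative radiometric correction by moment matching: rescale each
    detector column so its mean and standard deviation match the
    global image statistics, suppressing detector striping."""
    pixels = [v for row in image for v in row]
    g_mean, g_std = mean(pixels), pstdev(pixels)
    out = [row[:] for row in image]
    for c in range(len(image[0])):
        col = [row[c] for row in image]
        c_mean, c_std = mean(col), pstdev(col)
        gain = g_std / c_std if c_std else 1.0
        for r in range(len(image)):
            out[r][c] = gain * (image[r][c] - c_mean) + g_mean
    return out

# Column 2 reads systematically high (striping); correction removes it.
raw = [[10.0, 22.0], [12.0, 24.0], [14.0, 26.0]]
corrected = moment_match(raw)
```

    On hardware like the FPGA-DSP system above, the per-column gains and offsets would typically be precomputed and applied as a streaming multiply-add, which is what makes the step cheap enough for on-board use.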

  6. Selective loss of verbal imagery.

    PubMed

    Mehta, Z; Newcombe, F

    1996-05-01

    This single case study of the ability to generate verbal and non-verbal imagery in a woman who sustained a gunshot wound to the brain reports a significant difficulty in generating images of word shapes but not a significant problem in generating object images. A further dissociation, however, was observed in her ability to generate images of living vs non-living material. She made more errors in imagery and factual-information tasks for non-living items than for living items. This pattern contrasts with our previous report of the agnosic patient, M.S., who had severe difficulty in generating images of living material, whereas his ability to image the shape of words was comparable to that of normal control subjects. Furthermore, with regard to the generation of images of living compared with non-living material, M.S. showed more errors with living than non-living items, whereas the present patient, S.M., made significantly more errors with non-living relative to living items. There appear to be two types of double dissociation, which reinforce the growing evidence of dissociable impairments in the ability to generate images for different types of verbal and non-verbal material. Such dissociations, presumably related to sensory and cognitive processing demands, bear on the problem of the neural basis of imagery.

  7. A Comparison of Independent Event-Related Desynchronization Responses in Motor-Related Brain Areas to Movement Execution, Movement Imagery, and Movement Observation.

    PubMed

    Duann, Jeng-Ren; Chiou, Jin-Chern

    2016-01-01

    Electroencephalographic (EEG) event-related desynchronization (ERD) induced by movement imagery or by observing biological movements performed by someone else has recently been used extensively for brain-computer interface-based applications, such as applications used in stroke rehabilitation training and motor skill learning. However, the ERD responses induced by movement imagery and observation might not be as reliable as those induced by movement execution. Given that studies on the reliability of the EEG ERD responses induced by these activities are still lacking, here we conducted an EEG experiment with movement imagery, movement observation, and movement execution, performed multiple times each in a pseudorandomized order within the same experimental runs. Independent component analysis (ICA) was then applied to the EEG data to find the common motor-related EEG source activity shared by the three motor tasks. Finally, conditional EEG ERD responses associated with the three movement conditions were computed and compared. Among the three motor conditions, the ERD responses induced by motor execution showed the strongest and longest-lasting alpha-power suppression. The ERD responses of movement imagery and movement observation only partially resembled the ERD pattern of the movement execution condition, with slightly better detectability for the ERD responses associated with movement imagery and faster ERD responses for movement observation. This may indicate different levels of involvement of the same motor-related brain circuits during the different movement conditions. In addition, because the conditional EEG ERD responses obtained after the ICA preprocessing carried minimal contamination from unrelated or artifactual components, this result can serve as a reference for devising a brain-computer interface using the EEG ERD features of movement imagery or observation.
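
    The ERD/ERS measure referred to above is conventionally quantified as the percentage change of band power in a task interval relative to a preceding reference interval; negative values indicate desynchronization (ERD), positive values synchronization (ERS). A minimal sketch with invented sample values:

```python
def erd_percent(reference, task):
    """Event-related (de)synchronization as the percentage band-power
    change from a reference interval to a task interval.
    Negative -> ERD (power suppression), positive -> ERS."""
    p_ref = sum(x * x for x in reference) / len(reference)
    p_task = sum(x * x for x in task) / len(task)
    return 100.0 * (p_task - p_ref) / p_ref

# Alpha-band-filtered EEG samples (illustrative numbers only):
ref_epoch  = [2.0, -2.0, 2.0, -2.0]   # band power 4.0
task_epoch = [1.0, -1.0, 1.0, -1.0]   # band power 1.0
print(erd_percent(ref_epoch, task_epoch))  # -75.0, i.e. 75% ERD
```

    In practice the signals would first be band-pass filtered to the frequency band of interest (e.g. alpha) and the power averaged over many epochs before applying this ratio.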

  8. Globally scalable generation of high-resolution land cover from multispectral imagery

    NASA Astrophysics Data System (ADS)

    Stutts, S. Craig; Raskob, Benjamin L.; Wenger, Eric J.

    2017-05-01

    We present an automated method of generating high-resolution (~2 meter) land cover using a pattern recognition neural network trained on spatial and spectral features obtained from over 9000 WorldView multispectral images (MSI) in six distinct world regions. At this resolution, the network can classify small-scale objects such as individual buildings, roads, and irrigation ponds. This paper focuses on three key areas. First, we describe our land cover generation process, which involves the co-registration and aggregation of multiple spatially overlapping MSI, post-aggregation processing, and the registration of land cover to OpenStreetMap (OSM) road vectors using feature correspondence. Second, we discuss the generation of land cover derivative products and their impact in the areas of region reduction and object detection. Finally, we discuss the process of globally scaling land cover generation using cloud computing via Amazon Web Services (AWS).
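
    The per-pixel aggregation of multiple co-registered classifications can be sketched as a majority vote over class labels; the exact aggregation rule used by the authors is not stated, so this is one plausible reading with invented labels:

```python
from collections import Counter

def aggregate_labels(layers):
    """Per-pixel majority vote over co-registered land-cover layers.
    `layers` is a list of equally sized 2-D label grids; pixels with
    no data are None and are ignored in the vote."""
    rows, cols = len(layers[0]), len(layers[0][0])
    out = [[None] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            votes = Counter(l[r][c] for l in layers if l[r][c] is not None)
            if votes:
                out[r][c] = votes.most_common(1)[0][0]
    return out

# Three overlapping classifications of a 1x3 strip (invented labels):
a = [["road", "water", "building"]]
b = [["road", "water", None]]
c = [["grass", "water", "building"]]
print(aggregate_labels([a, b, c]))  # [['road', 'water', 'building']]
```

    Real pipelines would also weight votes, e.g. by acquisition date or per-image classifier confidence, before the post-aggregation processing described above.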

  9. Building high-performance system for processing a daily large volume of Chinese satellites imagery

    NASA Astrophysics Data System (ADS)

    Deng, Huawu; Huang, Shicun; Wang, Qi; Pan, Zhiqiang; Xin, Yubin

    2014-10-01

    The number of Earth observation satellites from China has increased dramatically in recent years, and those satellites acquire a large volume of imagery daily. As the main portal for image processing and distribution from these Chinese satellites, the China Centre for Resources Satellite Data and Application (CRESDA) has been working with PCI Geomatics over the last three years to solve two issues: processing the large volume of data (about 1,500 scenes, or 1 TB, per day) in a timely manner, and generating geometrically accurate orthorectified products. After three years of research and development, a high-performance system has been built and successfully delivered. The system has a service-oriented architecture and can be deployed to a cluster of computers configured with high-end computing power. The high performance is gained, first, by parallelizing the image processing algorithms using high-performance graphics processing unit (GPU) cards and multiple cores across multiple CPUs and, second, by distributing processing tasks to a cluster of computing nodes. While achieving speedups of thirty times or more compared with traditional practice, a particular methodology was developed to improve the geometric accuracy of images acquired from Chinese satellites (including HJ-1 A/B, ZY-1-02C, ZY-3, GF-1, etc.). The methodology consists of fully automatic collection of dense ground control points (GCPs) from various resources and the application of those points to refine the photogrammetric model of the images. The delivered system is up and running at CRESDA for pre-operational production; it has been generating a good return on investment by eliminating a great amount of manual labor and increasing daily data throughput more than tenfold with fewer operators. Future work, such as the development of more performance-optimized algorithms, robust image matching methods, and application workflows, has been identified to improve the system in the coming years.
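
    The task-distribution side of such a system can be sketched with Python's standard worker pools; the scene identifiers and the processing step below are placeholders, and a production deployment would fan jobs out across processes or cluster nodes rather than threads:

```python
from concurrent.futures import ThreadPoolExecutor

def orthorectify(scene):
    """Placeholder for one per-scene processing task (resampling,
    GCP refinement, ...); in the real system this is GPU/CPU heavy."""
    return scene + "-ortho"

def process_batch(scenes, workers=4):
    """Fan per-scene jobs out to a worker pool; results come back
    in submission order, so downstream cataloguing stays simple."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(orthorectify, scenes))

# Hypothetical scene IDs in the style of the satellites named above:
print(process_batch(["GF1_001", "ZY3_002", "HJ1A_003"]))
# ['GF1_001-ortho', 'ZY3_002-ortho', 'HJ1A_003-ortho']
```

    The same fan-out pattern applies whether the workers are threads, processes, or nodes in a cluster; only the executor and the serialization of scene data change.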

  10. Automated synthetic scene generation

    NASA Astrophysics Data System (ADS)

    Givens, Ryan N.

    Physics-based simulations generate synthetic imagery to help organizations anticipate system performance of proposed remote sensing systems. However, manually constructing synthetic scenes sophisticated enough to capture the complexity of real-world sites can take days to months depending on the size of the site and the desired fidelity of the scene. This research, sponsored by the Air Force Research Laboratory's Sensors Directorate, developed an automated approach to fuse high-resolution RGB imagery, lidar data, and hyperspectral imagery and then extract the necessary scene components. The method greatly reduces the time and money required to generate realistic synthetic scenes, and the work also produced new approaches that improve material identification by using information from all three input datasets.

  11. A square root ensemble Kalman filter application to a motor-imagery brain-computer interface

    PubMed Central

    Kamrunnahar, M.; Schiff, S. J.

    2017-01-01

    We here investigated a non-linear ensemble Kalman filter (SPKF) application to a motor imagery brain computer interface (BCI). A square root central difference Kalman filter (SR-CDKF) was used as an approach for brain state estimation in motor imagery task performance, using scalp electroencephalography (EEG) signals. Healthy human subjects imagined left vs. right hand movements and tongue vs. bilateral toe movements while scalp EEG signals were recorded. Offline data analysis was conducted for training the model as well as for decoding the imagery movements. Preliminary results indicate the feasibility of this approach with a decoding accuracy of 78%–90% for the hand movements and 70%–90% for the tongue-toes movements. Ongoing research includes online BCI applications of this approach as well as combined state and parameter estimation using this algorithm with different system dynamic models. PMID:22255799

  12. Development of an autonomous video rendezvous and docking system, phase 2

    NASA Technical Reports Server (NTRS)

    Tietz, J. C.; Richardson, T. E.

    1983-01-01

    The critical elements of an autonomous video rendezvous and docking system were built and used successfully in a physical laboratory simulation. The laboratory system demonstrated that a small, inexpensive electronic package and a flight computer of modest size can analyze television images to derive guidance information for spacecraft. In the ultimate application, the system would use a docking aid consisting of three flashing lights mounted on a passive target spacecraft. Television imagery of the docking aid would be processed aboard an active chase vehicle to derive the relative positions and attitudes of the two spacecraft. The demonstration system used scale models of the target spacecraft with working docking aids. A television camera mounted on a six-degree-of-freedom (DOF) simulator provided imagery of the target to simulate observations from the chase vehicle. A hardware video processor extracted statistics from the imagery, from which a computer quickly computed position and attitude. Kalman filter software then derived velocity information from the position measurements.
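
    The final step above, deriving velocity from position measurements with a Kalman filter, can be illustrated in one dimension with a constant-velocity model; all noise parameters and measurements below are invented for the sketch:

```python
def kalman_velocity(measurements, dt=1.0, q=1e-3, r=0.25):
    """1-D constant-velocity Kalman filter: estimates [position,
    velocity] from position-only measurements.
    q: process-noise intensity, r: measurement-noise variance."""
    x = [measurements[0], 0.0]            # state: position, velocity
    P = [[1.0, 0.0], [0.0, 1.0]]          # state covariance
    for z in measurements[1:]:
        # Predict with x_k = F x_{k-1}, F = [[1, dt], [0, 1]]
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # Update with a position measurement z (H = [1, 0])
        s = P[0][0] + r
        k = [P[0][0] / s, P[1][0] / s]    # Kalman gain
        y = z - x[0]                      # innovation
        x = [x[0] + k[0] * y, x[1] + k[1] * y]
        P = [[(1 - k[0]) * P[0][0], (1 - k[0]) * P[0][1]],
             [P[1][0] - k[1] * P[0][0], P[1][1] - k[1] * P[0][1]]]
    return x

# Target receding at about 2 units/s, position measured once per second:
est = kalman_velocity([0.0, 2.1, 3.9, 6.0, 8.1, 9.9, 12.0])
```

    The estimated velocity converges toward the true rate even though only positions are measured, which is exactly the role the filter played in the docking demonstration.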

  13. Music to the inner ears: exploring individual differences in musical imagery.

    PubMed

    Beaty, Roger E; Burgin, Chris J; Nusbaum, Emily C; Kwapil, Thomas R; Hodges, Donald A; Silvia, Paul J

    2013-12-01

    In two studies, we explored the frequency and phenomenology of musical imagery. Study 1 used retrospective reports of musical imagery to assess the contribution of individual differences to imagery characteristics. Study 2 used an experience sampling design to assess the phenomenology of musical imagery over the course of one week in a sample of musicians and non-musicians. Both studies found episodes of musical imagery to be common and positive: people rarely wanted such experiences to end and often heard music that was personally meaningful. Several variables predicted musical imagery, including personality, musical preferences, and positive mood. Musicians tended to hear musical imagery more often, but they reported less frequent episodes of deliberately-generated imagery. Taken together, the present research provides new insights into individual differences in musical imagery, and it supports the emerging view that such experiences are common, positive, and more voluntary than previously recognized. Copyright © 2013 Elsevier Inc. All rights reserved.

  14. Individually adapted imagery improves brain-computer interface performance in end-users with disability.

    PubMed

    Scherer, Reinhold; Faller, Josef; Friedrich, Elisabeth V C; Opisso, Eloy; Costa, Ursula; Kübler, Andrea; Müller-Putz, Gernot R

    2015-01-01

    Brain-computer interfaces (BCIs) translate oscillatory electroencephalogram (EEG) patterns into action. Different mental activities modulate spontaneous EEG rhythms in various ways. Non-stationarity and inherent variability of EEG signals, however, make reliable recognition of modulated EEG patterns challenging. Able-bodied individuals who use a BCI for the first time achieve, on average, binary classification performance of about 75%. Performance in users with central nervous system (CNS) tissue damage is typically lower. User training generally enhances the reliability of EEG pattern generation and thus also the robustness of pattern recognition. In this study, we investigated the impact of mental tasks on binary classification performance in BCI users with CNS tissue damage such as persons with stroke or spinal cord injury (SCI). Motor imagery (MI), that is, the kinesthetic imagination of movement (e.g. squeezing a rubber ball with the right hand), is the "gold standard" and is mainly used to modulate EEG patterns. Based on our recent results in able-bodied users, we hypothesized that pair-wise combination of "brain-teaser" (e.g. mental subtraction and mental word association) and "dynamic imagery" (e.g. hand and feet MI) tasks significantly increases classification performance of induced EEG patterns in the selected end-user group. Within-day (How stable is the classification within a day?) and between-day (How well does a model trained on day one perform on unseen data of day two?) analysis of the variability of mental task pair classification in nine individuals confirmed the hypothesis. We found that the use of the classical MI task pair hand vs. feet leads to significantly lower classification accuracy, on average up to 15% lower, in most users with stroke or SCI. User-specific selection of task pairs was again essential to enhance performance. We expect that this evidence will contribute significantly to making imagery-based BCI technology accessible to a larger population of users, including individuals with special needs due to CNS damage.

  15. Individually Adapted Imagery Improves Brain-Computer Interface Performance in End-Users with Disability

    PubMed Central

    Scherer, Reinhold; Faller, Josef; Friedrich, Elisabeth V. C.; Opisso, Eloy; Costa, Ursula; Kübler, Andrea; Müller-Putz, Gernot R.

    2015-01-01

    Brain-computer interfaces (BCIs) translate oscillatory electroencephalogram (EEG) patterns into action. Different mental activities modulate spontaneous EEG rhythms in various ways. Non-stationarity and inherent variability of EEG signals, however, make reliable recognition of modulated EEG patterns challenging. Able-bodied individuals who use a BCI for the first time achieve, on average, binary classification performance of about 75%. Performance in users with central nervous system (CNS) tissue damage is typically lower. User training generally enhances the reliability of EEG pattern generation and thus also the robustness of pattern recognition. In this study, we investigated the impact of mental tasks on binary classification performance in BCI users with CNS tissue damage such as persons with stroke or spinal cord injury (SCI). Motor imagery (MI), that is, the kinesthetic imagination of movement (e.g. squeezing a rubber ball with the right hand), is the "gold standard" and is mainly used to modulate EEG patterns. Based on our recent results in able-bodied users, we hypothesized that pair-wise combination of "brain-teaser" (e.g. mental subtraction and mental word association) and "dynamic imagery" (e.g. hand and feet MI) tasks significantly increases classification performance of induced EEG patterns in the selected end-user group. Within-day (How stable is the classification within a day?) and between-day (How well does a model trained on day one perform on unseen data of day two?) analysis of the variability of mental task pair classification in nine individuals confirmed the hypothesis. We found that the use of the classical MI task pair hand vs. feet leads to significantly lower classification accuracy, on average up to 15% lower, in most users with stroke or SCI. User-specific selection of task pairs was again essential to enhance performance. We expect that this evidence will contribute significantly to making imagery-based BCI technology accessible to a larger population of users, including individuals with special needs due to CNS damage. PMID:25992718

  16. Digital elevation model generation from satellite interferometric synthetic aperture radar: Chapter 5

    USGS Publications Warehouse

    Lu, Zhong; Dzurisin, Daniel; Jung, Hyung-Sup; Zhang, Lei; Lee, Wonjin; Lee, Chang-Wook

    2012-01-01

    An accurate digital elevation model (DEM) is a critical data set for characterizing the natural landscape, monitoring natural hazards, and georeferencing satellite imagery. The ideal interferometric synthetic aperture radar (InSAR) configuration for DEM production is a single-pass two-antenna system. Repeat-pass single-antenna satellite InSAR imagery, however, also can be used to produce useful DEMs. DEM generation from InSAR is advantageous in remote areas where the photogrammetric approach to DEM generation is hindered by inclement weather conditions. There are many sources of errors in DEM generation from repeat-pass InSAR imagery, for example, inaccurate determination of the InSAR baseline, atmospheric delay anomalies, and possible surface deformation because of tectonic, volcanic, or other sources during the time interval spanned by the images. This chapter presents practical solutions to identify and remove various artifacts in repeat-pass satellite InSAR images to generate a high-quality DEM.
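
    The topographic sensitivity of repeat-pass InSAR follows the standard first-order relation phase = -4*pi*Bperp*h / (lambda*R*sin(theta)), where Bperp is the perpendicular baseline, R the slant range, and theta the incidence angle (sign conventions vary between processors). A sketch of the height of ambiguity (height change per 2*pi fringe) and the phase-to-height conversion, with roughly ERS-like illustrative values not taken from the chapter:

```python
import math

def height_of_ambiguity(wavelength, slant_range, incidence, b_perp):
    """Topographic height change producing one 2*pi fringe, from the
    first-order relation phase = -4*pi*Bperp*h / (lambda*R*sin(theta))."""
    return wavelength * slant_range * math.sin(incidence) / (2.0 * b_perp)

def phase_to_height(phase, wavelength, slant_range, incidence, b_perp):
    """Convert unwrapped topographic phase (radians) to height (m)."""
    return (-phase * wavelength * slant_range * math.sin(incidence)
            / (4.0 * math.pi * b_perp))

# Roughly ERS-like C-band geometry (illustrative numbers):
lam, R, theta, bp = 0.0566, 850_000.0, math.radians(23.0), 100.0
print(round(height_of_ambiguity(lam, R, theta, bp), 1))  # about 94 m/fringe
```

    The small height of ambiguity at useful baselines is why the baseline errors, atmospheric delays, and deformation signals listed above translate directly into DEM error and must be identified and removed.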

  17. Virtual and Actual Humanoid Robot Control with Four-Class Motor-Imagery-Based Optical Brain-Computer Interface

    PubMed Central

    Batula, Alyssa M.; Kim, Youngmoo E.; Ayaz, Hasan

    2017-01-01

    Motor-imagery tasks are a popular input method for controlling brain-computer interfaces (BCIs), partially due to their similarities to naturally produced motor signals. The use of functional near-infrared spectroscopy (fNIRS) in BCIs is still emerging and has shown potential as a supplement or replacement for electroencephalography. However, studies often use only two or three motor-imagery tasks, limiting the number of available commands. In this work, we present the results of the first four-class motor-imagery-based online fNIRS-BCI for robot control. Thirteen participants utilized upper- and lower-limb motor-imagery tasks (left hand, right hand, left foot, and right foot) that were mapped to four high-level commands (turn left, turn right, move forward, and move backward) to control the navigation of a simulated or real robot. A significant improvement in classification accuracy was found between the virtual-robot-based BCI (control of a virtual robot) and the physical-robot BCI (control of the DARwIn-OP humanoid robot). Differences were also found in the oxygenated hemoglobin activation patterns of the four tasks between the first and second BCI. These results corroborate previous findings that motor imagery can be improved with feedback and imply that a four-class motor-imagery-based fNIRS-BCI could be feasible with sufficient subject training. PMID:28804712

  18. Virtual and Actual Humanoid Robot Control with Four-Class Motor-Imagery-Based Optical Brain-Computer Interface.

    PubMed

    Batula, Alyssa M; Kim, Youngmoo E; Ayaz, Hasan

    2017-01-01

    Motor-imagery tasks are a popular input method for controlling brain-computer interfaces (BCIs), partially due to their similarities to naturally produced motor signals. The use of functional near-infrared spectroscopy (fNIRS) in BCIs is still emerging and has shown potential as a supplement or replacement for electroencephalography. However, studies often use only two or three motor-imagery tasks, limiting the number of available commands. In this work, we present the results of the first four-class motor-imagery-based online fNIRS-BCI for robot control. Thirteen participants utilized upper- and lower-limb motor-imagery tasks (left hand, right hand, left foot, and right foot) that were mapped to four high-level commands (turn left, turn right, move forward, and move backward) to control the navigation of a simulated or real robot. A significant improvement in classification accuracy was found between the virtual-robot-based BCI (control of a virtual robot) and the physical-robot BCI (control of the DARwIn-OP humanoid robot). Differences were also found in the oxygenated hemoglobin activation patterns of the four tasks between the first and second BCI. These results corroborate previous findings that motor imagery can be improved with feedback and imply that a four-class motor-imagery-based fNIRS-BCI could be feasible with sufficient subject training.

  19. User's Self-Prediction of Performance in Motor Imagery Brain-Computer Interface.

    PubMed

    Ahn, Minkyu; Cho, Hohyun; Ahn, Sangtae; Jun, Sung C

    2018-01-01

    Performance variation is a critical issue in motor imagery brain-computer interfaces (MI-BCI), and various neurophysiological, psychological, and anatomical correlates have been reported in the literature. Although the main aim of such studies is to predict MI-BCI performance for the prescreening of poor performers, studies that focus on the user's own sense of the motor imagery process and directly estimate MI-BCI performance through the user's self-prediction are lacking. In this study, we test this self-prediction idea on motor imagery experimental datasets. Fifty-two subjects participated in a classical two-class motor imagery experiment and were asked to rate how easy they found motor imagery and to predict their own MI-BCI performance. An electroencephalogram (EEG) was recorded during the experiment, but no feedback on motor imagery was given to subjects. From the EEG recordings, offline classification accuracy was estimated and compared with several questionnaire scores, as well as with each subject's self-prediction of MI-BCI performance. The subjects' performance predictions showed a high positive correlation with actual performance (r = 0.64, p < 0.01). Interestingly, self-prediction became more accurate as the subjects performed more motor imagery runs, as reflected in both the correlation coefficient (pre-task to 2nd run: r = 0.02 to r = 0.54, p < 0.01) and the root-mean-square error (pre-task to 3rd run: 17.7% to 10%, p < 0.01). We demonstrated that subjects can accurately predict their MI-BCI performance even without feedback information. This implies that the human brain is an active learning system which, by experiencing the endogenous motor imagery process, can sense and assess the quality of that process. Users may thus be able to predict MI-BCI performance, and these results may contribute to a better understanding of low performance and to advancing BCI research.
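
    The two agreement measures reported above, Pearson's correlation coefficient and the root-mean-square error between predicted and actual accuracy, can be computed as follows (the score pairs are invented for illustration):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def rmse(xs, ys):
    """Root-mean-square error between predictions and observations."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(xs, ys)) / len(xs))

# Self-predicted vs. offline classification accuracy (%), invented:
predicted = [60.0, 75.0, 80.0, 90.0, 70.0]
actual    = [58.0, 72.0, 85.0, 88.0, 65.0]
print(pearson_r(predicted, actual), rmse(predicted, actual))
```

    A rising r together with a falling RMSE across runs, as reported in the abstract, indicates self-predictions that track actual accuracy increasingly closely.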

  20. Seafloor identification in sonar imagery via simulations of Helmholtz equations and discrete optimization

    NASA Astrophysics Data System (ADS)

    Engquist, Björn; Frederick, Christina; Huynh, Quyen; Zhou, Haomin

    2017-06-01

    We present a multiscale approach for identifying features in ocean beds by solving inverse problems in high frequency seafloor acoustics. The setting is based on Sound Navigation And Ranging (SONAR) imaging used in scientific, commercial, and military applications. The forward model incorporates multiscale simulations, by coupling Helmholtz equations and geometrical optics for a wide range of spatial scales in the seafloor geometry. This allows for detailed recovery of seafloor parameters including material type. Simulated backscattered data is generated using numerical microlocal analysis techniques. In order to lower the computational cost of the large-scale simulations in the inversion process, we take advantage of a pre-computed library of representative acoustic responses from various seafloor parameterizations.
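
    Matching observed backscatter against the pre-computed library amounts to a nearest-neighbor search over stored acoustic responses; a toy sketch with invented library entries and a least-squares distance:

```python
def classify_seafloor(observed, library):
    """Return the seafloor parameterization whose stored acoustic
    response is closest (least squares) to the observed backscatter."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(library, key=lambda name: dist(observed, library[name]))

# Representative backscatter curves per seafloor type (invented numbers;
# the real library entries come from Helmholtz/geometrical-optics runs):
library = {
    "sand": [0.9, 0.7, 0.5, 0.3],
    "mud":  [0.4, 0.3, 0.2, 0.1],
    "rock": [1.0, 0.9, 0.8, 0.7],
}
print(classify_seafloor([0.85, 0.72, 0.48, 0.33], library))  # sand
```

    The discrete optimization in the paper is more elaborate, but the library lookup shown here is what makes the inversion affordable: the expensive forward simulations are paid for once, offline.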

  1. A Stimulus-Independent Hybrid BCI Based on Motor Imagery and Somatosensory Attentional Orientation.

    PubMed

    Yao, Lin; Sheng, Xinjun; Zhang, Dingguo; Jiang, Ning; Mrachacz-Kersting, Natalie; Zhu, Xiangyang; Farina, Dario

    2017-09-01

    Distinctive EEG signals are generated over the motor and somatosensory cortex during mental tasks of motor imagery (MI) and somatosensory attentional orientation (SAO). In this paper, we hypothesize that a combination of these two signal modalities improves brain-computer interface (BCI) performance with respect to using either method separately and enables novel types of multi-class BCI systems. Thirty-two subjects were randomly divided into a Control Group and a Hybrid Group. In the Control Group, the subjects performed left- and right-hand motor imagery (L-MI and R-MI). In the Hybrid Group, the subjects performed four mental tasks (L-MI, R-MI, L-SAO, and R-SAO). The results indicate that combining two of the tasks in a hybrid manner (such as L-SAO and R-MI) resulted in significantly greater classification accuracy than using two MI tasks. The hybrid modality reached 86.1% classification accuracy on average, an increase of 7.70% with respect to MI alone and 7.21% with respect to SAO alone. Moreover, all 16 subjects in the hybrid modality reached at least 70% accuracy, which is considered the threshold for BCI illiteracy. In addition to the two-class results, classification accuracy was 68.1% for the three-class and 54.1% for the four-class hybrid BCI. By combining the induced brain signals from motor and somatosensory cortex, the proposed stimulus-independent hybrid BCI showed improved performance with respect to the individual modalities, reduced the proportion of BCI-illiterate subjects, and provided novel types of multi-class BCIs.

  2. Multiple template-based image matching using alpha-rooted quaternion phase correlation

    NASA Astrophysics Data System (ADS)

    DelMarco, Stephen

    2010-04-01

    In computer vision applications, image matching on quality-degraded imagery is difficult due to image content distortion and noise. State-of-the-art keypoint-based matchers, such as SURF and SIFT, work very well on clean imagery. However, performance can degrade significantly in the presence of high noise and clutter levels, which cause the formation of false features that degrade recognition performance. To address this problem, we previously developed an extension of the classical amplitude and phase correlation forms that provides improved robustness and tolerance to image geometric misalignments and noise. This extension, called Alpha-Rooted Phase Correlation (ARPC), combines Fourier-domain alpha-rooting enhancement with classical phase correlation. ARPC provides tunable parameters to control the alpha-rooting enhancement; these parameter values can be optimized to trade off between high, narrow correlation peaks and wider, smaller, but more robust peaks. Previously, we applied ARPC in the radon transform domain for logo image recognition in the presence of rotational image misalignments. In this paper, we extend ARPC to incorporate quaternion Fourier transforms, thereby creating Alpha-Rooted Quaternion Phase Correlation (ARQPC). We apply ARQPC to the logo image recognition problem and use it to perform multiple-reference logo template matching by representing multiple same-class reference templates as quaternion-valued images. We generate recognition performance results on publicly available logo imagery and compare them to results from standard approaches. We show that small deviations in reference templates of same-class logos can lead to improved recognition performance using the joint matching inherent in ARQPC.
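
    Phase correlation recovers a translation from the phase of the cross-power spectrum; alpha-rooting additionally weights the result by spectral magnitudes raised to a power alpha, trading peak sharpness against robustness. The 1-D sketch below uses a naive DFT and a simple magnitude weighting to illustrate the idea; the published ARPC/ARQPC formulation (including the radon-domain and quaternion extensions) is more elaborate:

```python
import cmath

def dft(x, inverse=False):
    """Naive O(n^2) discrete Fourier transform (fine for a demo)."""
    n, sign = len(x), 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [v / n for v in out] if inverse else out

def alpha_rooted_phase_correlation(f, g, alpha=0.5):
    """Estimate the circular shift of g relative to f: keep the phase
    of the cross-power spectrum, weight it by |F|**alpha, and locate
    the peak of the inverse transform."""
    F, G = dft(f), dft(g)
    spec = []
    for a, b in zip(F, G):
        c = b * a.conjugate()              # cross-power spectrum term
        mag = abs(c)
        phase = c / mag if mag else 0j     # unit-magnitude phase factor
        spec.append((abs(a) ** alpha) * phase)
    corr = dft(spec, inverse=True)
    return max(range(len(corr)), key=lambda i: corr[i].real)

signal = [0.0, 1.0, 3.0, 1.0, 0.0, 0.0, 0.0, 0.0]
shifted = signal[-2:] + signal[:-2]        # circular shift by +2
print(alpha_rooted_phase_correlation(signal, shifted))  # 2
```

    With alpha near 0 the method approaches pure phase correlation (sharp but noise-sensitive peaks); larger alpha reweights toward the magnitude spectrum, giving broader, more robust peaks.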

  3. Application of a common spatial pattern-based algorithm for an fNIRS-based motor imagery brain-computer interface.

    PubMed

    Zhang, Shen; Zheng, Yanchun; Wang, Daifa; Wang, Ling; Ma, Jianai; Zhang, Jing; Xu, Weihao; Li, Deyu; Zhang, Dan

    2017-08-10

    Motor imagery is one of the most investigated paradigms in the field of brain-computer interfaces (BCIs). The present study explored the feasibility of applying a common spatial pattern (CSP)-based algorithm for a functional near-infrared spectroscopy (fNIRS)-based motor imagery BCI. Ten participants performed kinesthetic imagery of their left- and right-hand movements while 20-channel fNIRS signals were recorded over the motor cortex. The CSP method was implemented to obtain the spatial filters specific for both imagery tasks. The mean, slope, and variance of the CSP filtered signals were taken as features for BCI classification. Results showed that the CSP-based algorithm outperformed two representative channel-wise methods for classifying the two imagery statuses using either data from all channels or averaged data from imagery responsive channels only (oxygenated hemoglobin: CSP-based: 75.3±13.1%; all-channel: 52.3±5.3%; averaged: 64.8±13.2%; deoxygenated hemoglobin: CSP-based: 72.3±13.0%; all-channel: 48.8±8.2%; averaged: 63.3±13.3%). Furthermore, the effectiveness of the CSP method was also observed for the motor execution data to a lesser extent. A partial correlation analysis revealed significant independent contributions from all three types of features, including the often-ignored variance feature. To our knowledge, this is the first study demonstrating the effectiveness of the CSP method for fNIRS-based motor imagery BCIs. Copyright © 2017 Elsevier B.V. All rights reserved.
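
    CSP seeks spatial filters w that maximize the variance ratio w'C1w / w'C2w between the two class covariance matrices, normally via a generalized eigendecomposition. The deliberately simplified two-channel sketch below scans candidate filter directions instead, which is brute force but makes the objective explicit; the data are toy signals, not fNIRS recordings:

```python
import math, random

def covariance(trials):
    """Average 2x2 channel covariance over trials (each trial is a
    list of (ch1, ch2) samples, assumed zero-mean)."""
    c = [[0.0, 0.0], [0.0, 0.0]]
    for trial in trials:
        n = len(trial)
        for x, y in trial:
            c[0][0] += x * x / n
            c[0][1] += x * y / n
            c[1][0] += y * x / n
            c[1][1] += y * y / n
    return [[v / len(trials) for v in row] for row in c]

def csp_filter(c1, c2, steps=360):
    """Scan unit vectors w = (cos a, sin a) for the one maximizing
    w'C1w / w'C2w -- a brute-force stand-in for the generalized
    eigendecomposition used by real CSP implementations."""
    def quad(c, w):
        return (w[0] * (c[0][0] * w[0] + c[0][1] * w[1])
                + w[1] * (c[1][0] * w[0] + c[1][1] * w[1]))
    best, best_ratio = None, -1.0
    for i in range(steps):
        a = math.pi * i / steps
        w = (math.cos(a), math.sin(a))
        ratio = quad(c1, w) / quad(c2, w)
        if ratio > best_ratio:
            best, best_ratio = w, ratio
    return best

# Toy data: class 1 varies mostly on channel 1, class 2 on channel 2:
random.seed(0)
t1 = [[(random.gauss(0, 3), random.gauss(0, 1)) for _ in range(200)]]
t2 = [[(random.gauss(0, 1), random.gauss(0, 3)) for _ in range(200)]]
w = csp_filter(covariance(t1), covariance(t2))
```

    The recovered filter points almost entirely along channel 1, the direction that best separates the class variances; the variance of the filtered signal is then the classic CSP feature, alongside the mean and slope features used in the paper.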

  4. The comparative evaluation of ERTS-1 imagery for resource inventory in land use planning. [Oregon]

    NASA Technical Reports Server (NTRS)

    Simonson, G. H. (Principal Investigator); Paine, D. P.; Lawrence, R. D.; Pyott, W. T.; Herzog, J. H.; Murray, R. J.; Norgren, J. A.; Cornwell, J. A.; Rogers, R. A.

    1973-01-01

The author has identified the following significant results. Multidiscipline team interpretation and mapping of resources for Crook County is nearly complete on 1:250,000 scale enlargements of ERTS-1 imagery. Maps of geology, landforms, soils and vegetation-land use are being interpreted to show limitations, suitabilities and geologic hazards for land use planning. Mapping of lineaments and structures from ERTS-1 imagery has shown a number of features not previously mapped in Oregon. A timber inventory of Ochoco National Forest has been made. Inventory of forest clear-cutting practices has been successfully demonstrated with ERTS-1 color composites. Soil tonal differences in fallow fields shown on ERTS-1 correspond with major soil boundaries in loess-mantled terrain. A digital classification system used for discriminating natural vegetation and geologic materials classes has been successful in separation of most major classes around Newberry Caldera, Mt. Washington and Big Summit Prairie. Computer routines are available for correction of scanner data variations and for matching scales and coordinates between digital and photographic imagery. Methods are being developed for Diazo film color printing of computer classifications and for computer-generated elevation-slope perspective plots.

  5. Global, Persistent, Real-time Multi-sensor Automated Satellite Image Analysis and Crop Forecasting in Commercial Cloud

    NASA Astrophysics Data System (ADS)

    Brumby, S. P.; Warren, M. S.; Keisler, R.; Chartrand, R.; Skillman, S.; Franco, E.; Kontgis, C.; Moody, D.; Kelton, T.; Mathis, M.

    2016-12-01

Cloud computing, combined with recent advances in machine learning for computer vision, is enabling understanding of the world at a scale and at a level of space and time granularity never before feasible. Multi-decadal Earth remote sensing datasets at the petabyte scale (8×10^15 bits) are now available in commercial cloud, and new satellite constellations will generate daily global coverage at a few meters per pixel. Public and commercial satellite observations now provide a wide range of sensor modalities, from traditional visible/infrared to dual-polarity synthetic aperture radar (SAR). This provides the opportunity to build a continuously updated map of the world supporting the academic community and decision-makers in government, finance and industry. We report on work demonstrating country-scale agricultural forecasting and global-scale land cover/land use mapping using a range of public and commercial satellite imagery. We describe processing over a petabyte of compressed raw data from 2.8 quadrillion pixels (2.8 petapixels) acquired by the US Landsat and MODIS programs over the past 40 years. Using commodity cloud computing resources, we convert the imagery to a calibrated, georeferenced, multiresolution tiled format suited for machine-learning analysis. We believe ours is the first application to process, in less than a day, on generally available resources, over a petabyte of scientific image data. We report on work combining this imagery with time-series SAR collected by ESA Sentinel-1, and on using this reprocessed dataset for experiments demonstrating country-scale food production monitoring, an indicator for famine early warning. We apply remote sensing science and machine learning algorithms to detect and classify agricultural crops and then estimate crop yields and detect threats to food security (e.g., flooding, drought). 
The software platform and analysis methodology also support monitoring water resources, forests and other general indicators of environmental health, and can detect growth and changes in cities that are displacing historical agricultural zones.

  6. A method for diagnosing surface parameters using geostationary satellite imagery and a boundary-layer model. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Polansky, A. C.

    1982-01-01

    A method for diagnosing surface parameters on a regional scale via geosynchronous satellite imagery is presented. Moisture availability, thermal inertia, atmospheric heat flux, and total evaporation are determined from three infrared images obtained from the Geostationary Operational Environmental Satellite (GOES). Three GOES images (early morning, midafternoon, and night) are obtained from computer tape. Two temperature-difference images are then created. The boundary-layer model is run, and its output is inverted via cubic regression equations. The satellite imagery is efficiently converted into output-variable fields. All computations are executed on a PDP 11/34 minicomputer. Output fields can be produced within one hour of the availability of aligned satellite subimages of a target area.
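The inversion-by-regression step lends itself to a small illustration. The sketch below fits a cubic regression that maps a modeled temperature difference back to moisture availability, mirroring how the boundary-layer model output is inverted; the toy forward relation and all names here are hypothetical, not from the thesis.

```python
import numpy as np

# Hypothetical forward relation: the boundary-layer model predicts a
# day-night temperature difference dT (K) for each moisture availability M.
moisture = np.linspace(0.0, 1.0, 50)
delta_t = 10.0 - 8.0 * moisture + 2.0 * moisture ** 2   # toy model output

# Cubic regression inverting model output back to the surface parameter,
# so a satellite-observed dT maps to moisture without re-running the model.
inverse = np.poly1d(np.polyfit(delta_t, moisture, deg=3))

# e.g. an observed dT of 7.0 K recovers the moisture that produced it
recovered = inverse(7.0)
```

Once the regression is fitted, converting a whole satellite-derived temperature-difference image into a moisture field is a single vectorised evaluation, which is what makes the approach fast on modest hardware.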

  7. Impervious surfaces mapping using high resolution satellite imagery

    NASA Astrophysics Data System (ADS)

    Shirmeen, Tahmina

In recent years, impervious surfaces have emerged not only as an indicator of the degree of urbanization, but also as an indicator of environmental quality. As impervious surface area increases, storm water runoff increases in velocity, quantity, temperature and pollution load. Any of these attributes can contribute to the degradation of natural hydrology and water quality. Various image processing techniques have been used to identify impervious surfaces; however, most of the existing impervious surface mapping tools use moderate resolution imagery. In this project, the potential of standard image processing techniques to generate impervious surface (IS) data for change detection analysis using high-resolution satellite imagery was evaluated. The city of Oxford, MS was selected as the study site for this project. Standard image processing techniques, including the Normalized Difference Vegetation Index (NDVI), Principal Component Analysis (PCA), a combination of NDVI and PCA, and image classification algorithms, were used to generate impervious surfaces from multispectral IKONOS and QuickBird imagery acquired in both leaf-on and leaf-off conditions. Accuracy assessments were performed, using truth data generated by manual classification, with Kappa statistics and zonal statistics to select the most appropriate image processing techniques for impervious surface mapping. The performance of the selected image processing techniques was enhanced by incorporating the Soil Brightness Index (SBI) and Greenness Index (GI) derived from Tasseled Cap Transformed (TCT) IKONOS and QuickBird imagery. A time series of impervious surfaces for the time frame between 2001 and 2007 was made using the refined image processing techniques to analyze the changes in IS in Oxford. 
It was found that NDVI and the combined NDVI-PCA methods are the most suitable image processing techniques for mapping impervious surfaces in leaf-off and leaf-on conditions respectively, using high resolution multispectral imagery. It was also found that IS data generated by these techniques can be refined by removing the conflicting dry soil patches using SBI and GI obtained from TCT of the same imagery used for IS data generation. The change detection analysis of the IS time series shows that Oxford experienced the major changes in IS from the year 2001 to 2004 and 2006 to 2007.
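As a generic illustration of the NDVI step above: low NDVI flags non-vegetated pixels as candidate impervious surfaces. The threshold and function names here are our assumptions; the study tuned its techniques against manually classified truth data.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, computed per pixel."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / np.maximum(nir + red, 1e-12)

def impervious_candidates(nir, red, threshold=0.2):
    """Pixels with NDVI below the threshold are non-vegetated and hence
    candidate impervious surfaces; the threshold is scene-dependent."""
    return ndvi(nir, red) < threshold
```

A refinement pass, as in the study, would then drop bright dry-soil pixels flagged by soil-brightness and greenness indices.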

  8. Correlation between high-resolution remote-sensing imagery and detailed field mapping in Cordilleran Miogeocline

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feldman, S.C.; Taranik, J.V.

    1986-05-01

Selected areas were mapped at a scale of 1:6000 in the southern Hot Creek Range (south-central Nevada), which is underlain by Paleozoic autochthonous limestone, shale, and sandstone, Paleozoic allochthonous chert and siltstone, and Tertiary rhyolitic to dacitic ash-flow tuff. The mapping was compared with computer-processed Airborne Imaging Spectrometer (AIS) data and Landsat Thematic Mapper (TM) imagery. The AIS imagery of the Hot Creek Range was acquired in 1984 by a NASA C-130 aircraft; it has a spatial resolution of 12 m and a swath width of 380 m. The sensor was developed by the Jet Propulsion Laboratory and is the first in a series of NASA imaging spectrometers. The AIS collects 128 spectral bands, having a bandwidth of approximately 9 nm, in the short-wave infrared between 1.2 and 2.4 μm. This part of the spectrum contains important narrow spectral absorption features for the carbonate ion, hydroxyl ion, and water of hydration. Using computer-processed AIS imagery, therefore, the authors can separate calcite from dolomite, and kaolinite from illite and montmorillonite, as well as differentiate geologic units containing these minerals. On the AIS imagery, the Upper Mississippian Tripon Pass Limestone shows a distinctive calcite absorption feature at 2.34 μm; this feature is not as pronounced in Cambrian and Ordovician limestones. The dolomitized Nevada Formation exhibits the dolomite absorption feature at 2.32 μm. Clay mineral absorption features near 2.2 μm can be distinguished in altered volcanics. Mineralogic identification was confirmed with field and laboratory spectroradiometer measurements, thin-section examination, and x-ray analysis. AIS results and field mapping were also compared to computer-processed Landsat TM imagery, the highest spectral and spatial resolution worldwide data set currently available.

  9. Using object-oriented classification and high-resolution imagery to map fuel types in a Mediterranean region.

    Treesearch

    L. Arroyo; S.P. Healey; W.B. Cohen; D. Cocero; J.A. Manzanera

    2006-01-01

    Knowledge of fuel load and composition is critical in fighting, preventing, and understanding wildfires. Commonly, the generation of fuel maps from remotely sensed imagery has made use of medium-resolution sensors such as Landsat. This paper presents a methodology to generate fuel type maps from high spatial resolution satellite data through object-oriented...

  10. Automatic orientation and 3D modelling from markerless rock art imagery

    NASA Astrophysics Data System (ADS)

    Lerma, J. L.; Navarro, S.; Cabrelles, M.; Seguí, A. E.; Hernández, D.

    2013-02-01

This paper investigates the use of two detectors and descriptors on image pyramids for automatic image orientation and generation of 3D models. The detectors and descriptors replace manual measurements and are used to detect, extract and match features across multiple imagery. The Scale-Invariant Feature Transform (SIFT) and the Speeded Up Robust Features (SURF) will be assessed based on speed, number of features, matched features, and precision in image and object space depending on the adopted hierarchical matching scheme. The influence of additionally applying Area-Based Matching (ABM) with normalised cross-correlation (NCC) and least-squares matching (LSM) is also investigated. The pipeline makes use of photogrammetric and computer vision algorithms, aiming at minimum interaction and maximum accuracy from a calibrated camera. Both the exterior orientation parameters and the 3D coordinates in object space are sequentially estimated combining relative orientation, single space resection and bundle adjustment. The fully automatic image-based pipeline presented herein to automate the image orientation step of a sequence of terrestrial markerless imagery is compared with manual bundle block adjustment and terrestrial laser scanning (TLS), which serves as ground truth. The benefits of applying ABM after feature-based matching (FBM) will be assessed both in image and object space for the 3D modelling of a complex rock art shelter.
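The NCC-based area matching mentioned above can be sketched in NumPy. This is a brute-force single-template illustration, not the paper's pyramid-based pipeline; names are ours.

```python
import numpy as np

def ncc(patch, template):
    """Normalised cross-correlation score between two equal-size patches."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def match_template(image, template):
    """Exhaustive area-based matching: slide the template over the image
    and return the top-left offset of the best NCC score."""
    th, tw = template.shape
    best, best_pos = -2.0, (0, 0)
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            s = ncc(image[i:i + th, j:j + tw], template)
            if s > best:
                best, best_pos = s, (i, j)
    return best_pos, best
```

In practice, ABM is applied only near locations predicted by the feature-based matches, which keeps the exhaustive search tractable and lets LSM refine to sub-pixel precision.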

  11. 3D Modelling and Rapid Prototyping for Cardiovascular Surgical Planning - Two Case Studies

    NASA Astrophysics Data System (ADS)

    Nocerino, E.; Remondino, F.; Uccheddu, F.; Gallo, M.; Gerosa, G.

    2016-06-01

In recent years, cardiovascular diagnosis, surgical planning and intervention have taken advantage of 3D modelling and rapid prototyping techniques. The starting data for the whole process is represented by medical imagery, in particular, but not exclusively, computed tomography (CT) or multi-slice CT (MCT) and magnetic resonance imaging (MRI). On the medical imagery, regions of interest, i.e. heart chambers, valves, aorta, coronary vessels, etc., are segmented and converted into 3D models, which can finally be converted into physical replicas through a 3D printing procedure. In this work, an overview of modern approaches for automatic and semiautomatic segmentation of medical imagery for 3D surface model generation is provided. The issue of accuracy checking of surface models is also addressed, together with the critical aspects of converting digital models into physical replicas through 3D printing techniques. A patient-specific 3D modelling and printing procedure (Figure 1) for surgical planning in cases of complex heart disease was developed. The procedure was applied to two case studies, for which MCT scans of the chest are available. In the article, a detailed description of the implemented patient-specific modelling procedure is provided, along with a general discussion on the potential and future developments of personalized 3D modelling and printing for surgical planning and surgeons' practice.

  12. Development of a Novel Motor Imagery Control Technique and Application in a Gaming Environment

    PubMed Central

    Xue, Tao

    2017-01-01

We present a methodology for a hybrid brain-computer interface (BCI) system, with the recognition of motor imagery (MI) based on EEG and blink EOG signals. We tested the BCI system in a 3D Tetris game and an analogous 2D game playing environment. To enhance players' BCI control ability, the study focused on feature extraction from EEG and on control strategies supporting Game-BCI system operation. We compared the numerical differences between spatial features extracted with the common spatial pattern (CSP) and the proposed multifeature extraction. To demonstrate the effectiveness of the 3D game environment at enhancing players' event-related desynchronization (ERD) and event-related synchronization (ERS) production ability, we set the 2D Screen Game as the comparison experiment. According to a series of statistical results, the group performing MI in the 3D Tetris environment showed more significant improvements in generating MI-associated ERD/ERS. Analysis of game scores indicated that the players' scores presented an obvious uptrend in the 3D Tetris environment but no comparable trend in the 2D Screen Game. This suggests that an immersive, rich-control environment for MI would improve the associated mental imagery and enhance MI-based BCI skills. PMID:28572817
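ERD/ERS is typically quantified as the relative band-power change from a baseline window to the task window. The sketch below is a generic illustration of that computation; the band edges, sampling rate and names are our assumptions, not the paper's processing chain.

```python
import numpy as np

def band_power(signal, fs, band):
    """Mean power of `signal` within a frequency band, via the FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    sel = (freqs >= band[0]) & (freqs <= band[1])
    return psd[sel].mean()

def erd_percent(task, baseline, fs=250.0, band=(8.0, 13.0)):
    """Event-related (de)synchronisation as the relative band-power
    change from baseline to task; negative values indicate ERD."""
    a = band_power(task, fs, band)
    r = band_power(baseline, fs, band)
    return 100.0 * (a - r) / r
```

A mu-band power drop during imagery relative to rest therefore shows up as a negative percentage, which is the quantity the training aims to strengthen.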

  13. A Low-Signal-to-Noise-Ratio Sensor Framework Incorporating Improved Nighttime Capabilities in DIRSIG

    NASA Astrophysics Data System (ADS)

    Rizzuto, Anthony P.

    When designing new remote sensing systems, it is difficult to make apples-to-apples comparisons between designs because of the number of sensor parameters that can affect the final image. Using synthetic imagery and a computer sensor model allows for comparisons to be made between widely different sensor designs or between competing design parameters. Little work has been done in fully modeling low-SNR systems end-to-end for these types of comparisons. Currently DIRSIG has limited capability to accurately model nighttime scenes under new moon conditions or near large cities. An improved DIRSIG scene modeling capability is presented that incorporates all significant sources of nighttime radiance, including new models for urban glow and airglow, both taken from the astronomy community. A low-SNR sensor modeling tool is also presented that accounts for sensor components and noise sources to generate synthetic imagery from a DIRSIG scene. The various sensor parameters that affect SNR are discussed, and example imagery is shown with the new sensor modeling tool. New low-SNR detectors have recently been designed and marketed for remote sensing applications. A comparison of system parameters for a state-of-the-art low-SNR sensor is discussed, and a sample design trade study is presented for a hypothetical scene and sensor.
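A minimal radiometric noise model of the kind used in such trade studies can be sketched as follows. This is our generic detector model (shot noise from signal and dark current plus Gaussian read noise), not DIRSIG's actual sensor chain; all names and numbers are illustrative.

```python
import math

def snr(signal_e, dark_e, read_noise_e):
    """Per-pixel SNR for a photon-counting detector: signal electrons
    over the quadrature sum of shot noise (signal + dark counts) and
    read noise."""
    noise = math.sqrt(signal_e + dark_e + read_noise_e ** 2)
    return signal_e / noise

# Example trade: a nighttime scene with few signal electrons is
# read-noise dominated, so lowering read noise pays off far more than
# it would in a bright, shot-noise-dominated scene.
low_light = snr(25.0, 5.0, 10.0)
bright = snr(10000.0, 5.0, 10.0)
```

Sweeping parameters like integration time, aperture, and read noise through such a model is what enables apples-to-apples comparisons between competing low-SNR designs before hardware exists.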

  14. Mimicking human expert interpretation of remotely sensed raster imagery by using a novel segmentation analysis within ArcGIS

    NASA Astrophysics Data System (ADS)

    Le Bas, Tim; Scarth, Anthony; Bunting, Peter

    2015-04-01

Traditional computer-based methods for the interpretation of remotely sensed imagery use each pixel individually, or the average of a small window of pixels, to calculate a class or thematic value, which provides an interpretation. However, when a human expert interprets imagery, the eye is excellent at finding coherent and homogeneous areas and edge features. It may therefore be advantageous for computer analysis to mimic human interpretation. A new toolbox for ArcGIS 10.x will be presented that segments the data layers into a set of polygons. Each polygon is defined by a K-means clustering and region-growing algorithm, thus finding areas, their edges and any lineations in the imagery. Attached to each polygon are the characteristics of the imagery, such as the mean and standard deviation of the pixel values within the polygon. The segmentation of imagery into a jigsaw of polygons also has the advantage that the human interpreter does not need to spend hours digitising the boundaries. The segmentation process has been taken from the RSGIS library of analysis and classification routines (Bunting et al., 2014). These routines are freeware and have been modified to be available in the ArcToolbox under the Windows (v7) operating system. Input to the segmentation process is a multi-layered raster image, for example a Landsat image, or a set of raster datasets made up from derivatives of topography. The size and number of polygons are set by the user and are dependent on the imagery used. Examples will be presented of data from the marine environment utilising bathymetric depth, slope, rugosity and backscatter from a multibeam system. Meaningful classification of the polygons using their numerical characteristics is the next goal. Object-based image analysis (OBIA) should help this workflow. Fully calibrated imagery systems will allow numerical classification to be translated into more readily understandable terms. Reference: Bunting, P., Clewley, D., Lucas, R. M., and Gillingham, S., 2014. The Remote Sensing and GIS Software Library (RSGISLib). Computers & Geosciences, 62, 216-226. http://dx.doi.org/10.1016/j.cageo.2013.08.007
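The per-pixel clustering at the heart of the segmentation can be sketched generically: plain k-means over the bands of a multi-layer raster, with deterministic farthest-point seeding. This is our illustration, not the RSGISLib implementation; region growing over the resulting label image would then form the polygons.

```python
import numpy as np

def kmeans_segment(raster, k=2, n_iter=20):
    """Label each pixel of a multi-band raster (H, W, bands) with plain
    k-means clustering on the band values."""
    h, w, bands = raster.shape
    x = raster.reshape(-1, bands).astype(float)
    # farthest-point initialisation keeps the sketch deterministic
    centers = [x[0]]
    while len(centers) < k:
        d = np.min(((x[:, None, :] - np.array(centers)[None]) ** 2).sum(axis=2), axis=1)
        centers.append(x[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(n_iter):
        # assign each pixel to its nearest centre, then update the centres
        d = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = x[labels == c].mean(axis=0)
    return labels.reshape(h, w)
```

Connected runs of identically labelled pixels become the polygons to which per-polygon statistics (mean, standard deviation) are attached for later object-based classification.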

  15. California coast nearshore processes study. [nearshore currents, sediment transport, estuaries, and river discharge]

    NASA Technical Reports Server (NTRS)

    Pirie, D. M.; Steller, D. D. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. Large scale sediment plumes from intermittent streams and rivers form detectable seasonal patterns on ERTS-1 imagery. The ocean current systems, as plotted from three California coast ERTS mosaics, were identified. Offshore patterns of sediment in areas such as the Santa Barbara Channel are traceable. These patterns extend offshore to heretofore unanticipated ranges as shown on the ERTS-1 imagery. Flying spot scanner enhancements of NASA tapes resulted in details of subtle and often invisible (to the eye) nearshore features. The suspended sediments off San Francisco and in Monterey Bay are emphasized in detail. These are areas of extremely changeable offshore sediment transport patterns. Computer generated contouring of radiance levels resulted in maps that can be used in determining surface and nearsurface suspended sediment distribution. Tentative calibrations of ERTS-1 spectral brightness against sediment load have been made using shipboard measurements. Information from the combined enhancement and interpretation techniques is applicable to operational coastal engineering programs.

  16. Quality Analysis on 3D Building Models Reconstructed from UAV Imagery

    NASA Astrophysics Data System (ADS)

    Jarzabek-Rychard, M.; Karpina, M.

    2016-06-01

Recent developments in UAV technology and structure-from-motion techniques have led to UAVs becoming standard platforms for 3D data collection. Because of their flexibility and their ability to reach inaccessible urban areas, drones appear to be an optimal solution for urban applications. Building reconstruction from data collected with UAVs has important potential to reduce labour costs for fast updates of already reconstructed 3D cities. However, especially for updating existing scenes derived from other sensors (e.g. airborne laser scanning), a proper quality assessment is necessary. The objective of this paper is thus to evaluate the potential of UAV imagery as an information source for automatic 3D building modeling at LOD2. The investigation is conducted in three ways: (1) comparing the generated SfM point cloud to ALS data; (2) computing internal consistency measures of the reconstruction process; (3) analysing the deviation of check points identified on building roofs and measured with a tacheometer. In order to gain deep insight into the modeling performance, various quality indicators are computed and analysed. The assessment performed against the ground truth shows that the building models acquired with UAV photogrammetry have an accuracy of better than 18 cm for the planimetric position and about 15 cm for the height component.
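The check-point assessment boils down to planimetric and height RMSE against surveyed coordinates. A minimal sketch, assuming (X, Y, Z) coordinate triples per check point (names ours):

```python
import numpy as np

def checkpoint_rmse(measured, reference):
    """Planimetric (XY) and height (Z) RMSE over n check points.

    measured, reference: (n, 3) arrays of X, Y, Z coordinates."""
    d = np.asarray(measured, float) - np.asarray(reference, float)
    rmse_xy = float(np.sqrt(np.mean(d[:, 0] ** 2 + d[:, 1] ** 2)))
    rmse_z = float(np.sqrt(np.mean(d[:, 2] ** 2)))
    return rmse_xy, rmse_z
```

Reporting the horizontal and vertical components separately, as the paper does, matters because photogrammetric height error typically behaves differently from planimetric error.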

  17. Mental Task Evaluation for Hybrid NIRS-EEG Brain-Computer Interfaces

    PubMed Central

    Gupta, Rishabh; Falk, Tiago H.

    2017-01-01

Based on recent electroencephalography (EEG) and near-infrared spectroscopy (NIRS) studies that showed that tasks such as motor imagery and mental arithmetic induce specific neural response patterns, we propose a hybrid brain-computer interface (hBCI) paradigm in which EEG and NIRS data are fused to improve binary classification performance. We recorded simultaneous NIRS-EEG data from nine participants performing seven mental tasks (word generation, mental rotation, subtraction, singing and navigation, and motor and face imagery). Classifiers were trained for each possible pair of tasks using (1) EEG features alone, (2) NIRS features alone, and (3) EEG and NIRS features combined, to identify the best task pairs and assess the usefulness of a multimodal approach. The NIRS-EEG approach led to an average increase in peak kappa of 0.03 when using features extracted from one-second windows (equivalent to an increase of 1.5% in classification accuracy for balanced classes). The increase was much stronger (0.20, corresponding to a 10% accuracy increase) when focusing on time windows of high NIRS performance. The EEG and NIRS analyses further unveiled relevant brain regions and important feature types. This work provides a basis for future NIRS-EEG hBCI studies aiming to improve classification performance toward more efficient and flexible BCIs. PMID:29181021

  18. Extraction of linear features on SAR imagery

    NASA Astrophysics Data System (ADS)

    Liu, Junyi; Li, Deren; Mei, Xin

    2006-10-01

Linear features are usually extracted from SAR imagery by edge detectors derived from the contrast-ratio edge detector, which has a constant false-alarm probability. The Hough Transform (HT), on the other hand, is an elegant way of extracting global features such as curve segments from binary edge images. The Randomized Hough Transform (RHT) can drastically reduce the computation time and memory usage of the HT, but it can leave a great number of accumulator cells invalid during random sampling. In this paper, we propose a new approach to extract linear features from SAR imagery: an almost automatic algorithm based on edge detection and the Randomized Hough Transform. The improved method makes full use of the directional information of each candidate edge point so as to avoid the invalid-accumulation problem. Applied results are in good agreement with the theoretical analysis, and the main linear features in SAR imagery have been extracted automatically. The method saves storage space and computation time, which shows its effectiveness and applicability.
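The Randomized Hough Transform idea can be sketched generically: instead of letting every edge pixel vote for every compatible cell, sample two points at a time, compute the single (rho, theta) line through them, and vote for that one cell. This is a baseline RHT illustration in our naming, not the paper's direction-aware improvement.

```python
import math
import random
from collections import Counter

def randomized_hough_lines(points, n_samples=3000, rho_res=1.0,
                           theta_res=math.radians(1.0), seed=0):
    """Accumulate (rho, theta) votes from random point pairs; heavily
    voted cells correspond to lines through many edge points."""
    rng = random.Random(seed)
    acc = Counter()
    for _ in range(n_samples):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        dx, dy = x2 - x1, y2 - y1
        theta = math.atan2(dx, -dy) % math.pi   # normal angle in [0, pi)
        rho = x1 * math.cos(theta) + y1 * math.sin(theta)
        acc[(round(rho / rho_res), round(theta / theta_res))] += 1
    return acc
```

Only sampled cells are ever created, which is the source of the RHT's memory savings; the paper's refinement additionally uses each edge point's gradient direction to reject pairs that would vote into spurious cells.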

  19. Integrating satellite imagery with simulation modeling to improve burn severity mapping

    Treesearch

    Eva C. Karau; Pamela G. Sikkink; Robert E. Keane; Gregory K. Dillon

    2014-01-01

    Both satellite imagery and spatial fire effects models are valuable tools for generating burn severity maps that are useful to fire scientists and resource managers. The purpose of this study was to test a new mapping approach that integrates imagery and modeling to create more accurate burn severity maps. We developed and assessed a statistical model that combines the...

  20. A fully convolutional network for weed mapping of unmanned aerial vehicle (UAV) imagery.

    PubMed

    Huang, Huasheng; Deng, Jizhong; Lan, Yubin; Yang, Aqing; Deng, Xiaoling; Zhang, Lei

    2018-01-01

Appropriate Site-Specific Weed Management (SSWM) is crucial to ensuring crop yields. Within SSWM of large areas, remote sensing is a key technology for providing accurate weed distribution information. Compared with satellite and piloted-aircraft remote sensing, an unmanned aerial vehicle (UAV) is capable of capturing high-spatial-resolution imagery, which provides more detailed information for weed mapping. The objective of this paper is to generate an accurate weed cover map based on UAV imagery. The UAV RGB imagery was collected in October 2017 over a rice field located in South China. A Fully Convolutional Network (FCN) method was proposed for weed mapping of the collected imagery. Transfer learning was used to improve generalization capability, and skip architecture was applied to increase prediction accuracy. The performance of the FCN architecture was then compared with a Patch_based CNN algorithm and a Pixel_based CNN method. Experimental results showed that our FCN method outperformed the others, both in terms of accuracy and efficiency. The overall accuracy of the FCN approach was up to 0.935, and the accuracy for weed recognition was 0.883, which means that this algorithm is capable of generating accurate weed cover maps for the evaluated UAV imagery.
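The two reported figures correspond to standard accuracy metrics over a labelled map. A small sketch of how they would be computed (our naming, with hypothetical labels; not the paper's evaluation code):

```python
import numpy as np

def accuracy_metrics(pred, truth, weed_label=1):
    """Overall accuracy, plus producer's accuracy (recall) for the weed
    class: the fraction of true weed pixels recovered by the map."""
    pred = np.asarray(pred)
    truth = np.asarray(truth)
    overall = float((pred == truth).mean())
    weed = truth == weed_label
    weed_acc = float((pred[weed] == weed_label).mean())
    return overall, weed_acc
```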

  1. Photogrammetry of the Viking-Lander imagery.

    USGS Publications Warehouse

    Wu, S.S.C.; Schafer, F.J.

    1982-01-01

    We have solved the problem of photogrammetric mapping from the Viking Lander photography in two ways: 1) by converting the azimuth and elevation scanning imagery to the equivalent of a frame picture by means of computerized rectification; and 2) by interfacing a high-speed, general-purpose computer to the AS-11A analytical plotter so that all computations of corrections can be performed in real time during the process of model orientation and map compilation. Examples are presented of photographs and maps of Earth and Mars. -from Authors

  2. High-End Scientific Computing

    EPA Pesticide Factsheets

    EPA uses high-end scientific computing, geospatial services and remote sensing/imagery analysis to support EPA's mission. The Center for Environmental Computing (CEC) assists the Agency's program offices and regions to meet staff needs in these areas.

  3. Supervised classification of aerial imagery and multi-source data fusion for flood assessment

    NASA Astrophysics Data System (ADS)

    Sava, E.; Harding, L.; Cervone, G.

    2015-12-01

Floods are among the most devastating natural hazards, and the ability to produce an accurate and timely flood assessment before, during, and after an event is critical for their mitigation and response. Remote sensing technologies have become the de facto approach for observing the Earth and its environment. However, satellite remote sensing data are not always available, and for this reason it is crucial to develop new techniques to produce flood assessments during and after an event. Recent advancements in techniques for fusing remote sensing data with near-real-time heterogeneous datasets have allowed emergency responders to more efficiently extract increasingly precise and relevant knowledge from the available information. This research presents a fusion technique using satellite remote sensing imagery coupled with non-authoritative data such as Civil Air Patrol (CAP) imagery and tweets. A new computational methodology is proposed, based on machine learning algorithms, to automatically identify water pixels in CAP imagery. Specifically, wavelet transformations are paired with multiple classifiers, run in parallel, to build models discriminating water and non-water regions. The learned classification models are first tested against a set of control cases, and then used to automatically classify each image separately. A measure of uncertainty is computed for each pixel in an image, proportional to the number of models classifying the pixel as water. Geo-tagged tweets are continuously harvested, stored in a MongoDB database, and queried in real time. They are fused with the CAP classified data and with satellite-derived flood extent results to produce comprehensive flood assessment maps. The final maps are then compared with FEMA-generated flood extents to assess their accuracy. The proposed methodology is applied to two test cases: the 2013 floods in Boulder, CO, and the 2015 floods in Texas.
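The per-pixel uncertainty measure described above amounts to ensemble voting across the parallel classifiers. A minimal sketch, assuming each classifier produces a boolean water mask (names ours):

```python
import numpy as np

def water_confidence(model_masks):
    """Per-pixel fraction of classifiers voting 'water'; the abstract's
    uncertainty measure is proportional to this vote count.

    model_masks: sequence of (H, W) boolean masks, one per classifier."""
    return np.mean(np.asarray(model_masks, dtype=float), axis=0)

def flood_extent(model_masks, min_agreement=0.5):
    """Keep pixels that at least `min_agreement` of the models call water."""
    return water_confidence(model_masks) >= min_agreement
```

Thresholding the vote fraction gives a binary extent for map products, while the raw fraction itself conveys how confident the ensemble is at each pixel.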

  4. A brain-computer interface with vibrotactile biofeedback for haptic information.

    PubMed

    Chatterjee, Aniruddha; Aggarwal, Vikram; Ramos, Ander; Acharya, Soumyadipta; Thakor, Nitish V

    2007-10-17

    It has been suggested that Brain-Computer Interfaces (BCI) may one day be suitable for controlling a neuroprosthesis. For closed-loop operation of BCI, a tactile feedback channel that is compatible with neuroprosthetic applications is desired. Operation of an EEG-based BCI using only vibrotactile feedback, a commonly used method to convey haptic senses of contact and pressure, is demonstrated with a high level of accuracy. A Mu-rhythm based BCI using a motor imagery paradigm was used to control the position of a virtual cursor. The cursor position was shown visually as well as transmitted haptically by modulating the intensity of a vibrotactile stimulus to the upper limb. A total of six subjects operated the BCI in a two-stage targeting task, receiving only vibrotactile biofeedback of performance. The location of the vibration was also systematically varied between the left and right arms to investigate location-dependent effects on performance. Subjects are able to control the BCI using only vibrotactile feedback with an average accuracy of 56% and as high as 72%. These accuracies are significantly higher than the 15% predicted by random chance if the subject had no voluntary control of their Mu-rhythm. The results of this study demonstrate that vibrotactile feedback is an effective biofeedback modality to operate a BCI using motor imagery. In addition, the study shows that placement of the vibrotactile stimulation on the biceps ipsilateral or contralateral to the motor imagery introduces a significant bias in the BCI accuracy. This bias is consistent with a drop in performance generated by stimulation of the contralateral limb. Users demonstrated the capability to overcome this bias with training.

  5. Feature Masking in Computer Game Promotes Visual Imagery

    ERIC Educational Resources Information Center

    Smith, Glenn Gordon; Morey, Jim; Tjoe, Edwin

    2007-01-01

    Can learning of mental imagery skills for visualizing shapes be accelerated with feature masking? Chemistry, physics, fine arts, military tactics, and laparoscopic surgery often depend on mentally visualizing shapes in their absence. Does working with "spatial feature-masks" (skeletal shapes, missing key identifying portions) encourage people to…

  6. Automated Synthetic Scene Generation

    DTIC Science & Technology

    2014-07-01

    Using the Beard-Maxwell BRDF model, the BRDF from Equations (3.3) and (3.4) is composed of specular, diffuse, and volumetric terms... Such models help organizations developing new remote sensing instruments anticipate sensor performance by enabling the creation of synthetic imagery... for a proposed sensor before the sensor is built. One of the largest challenges in modeling realistic synthetic imagery, however, is generating the

  7. Multi-class geospatial object detection based on a position-sensitive balancing framework for high spatial resolution remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Zhong, Yanfei; Han, Xiaobing; Zhang, Liangpei

    2018-04-01

    Multi-class geospatial object detection from high spatial resolution (HSR) remote sensing imagery is attracting increasing attention in a wide range of object-related civil and engineering applications. However, the distribution of objects in HSR remote sensing imagery is location-variable and complicated, which makes accurate detection a critical problem. Owing to the powerful feature extraction and representation capability of deep learning, integrated frameworks that combine region proposal generation with object detection have greatly improved multi-class geospatial object detection for HSR remote sensing imagery. However, while the translation-invariance built up by a convolutional neural network (CNN) seldom harms the classification stage, it easily degrades the localization accuracy of the predicted bounding boxes in the detection stage. This dilemma between translation-invariance in the classification stage and translation-variance in the detection stage had not previously been addressed for HSR remote sensing imagery, and it causes position accuracy problems for integrated region proposal generation and object detection frameworks. To further improve such frameworks, a position-sensitive balancing (PSB) framework is proposed in this paper for multi-class geospatial object detection from HSR remote sensing imagery. The proposed framework takes full advantage of a fully convolutional network (FCN), built on a residual network, and uses position-sensitive balancing to resolve the dilemma between translation-invariance in the classification stage and translation-variance in the detection stage. In addition, a pre-training mechanism is utilized to accelerate the training procedure and increase the robustness of the proposed algorithm. The proposed algorithm is validated with a publicly available 10-class object detection dataset.
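    The position-sensitive idea behind frameworks of this kind (e.g. the R-FCN family that the PSB framework builds on) can be illustrated with position-sensitive ROI pooling: each spatial bin of a region of interest is pooled from its own dedicated score map, so the score becomes sensitive to where object parts fall inside the box. The sketch below is an illustrative NumPy version for a single class, not the authors' implementation; the k x k grid size and the constant score maps are assumptions for the toy check.

    ```python
    import numpy as np

    def ps_roi_pool(score_maps, roi, k=3):
        """Position-sensitive ROI pooling for a single class.

        score_maps: array of shape (k*k, H, W); map g encodes the score for
                    the g-th spatial bin of the k x k grid.
        roi:        (x0, y0, x1, y1) in pixel coordinates.
        Returns the k x k pooled score grid; averaging it yields the class score.
        """
        x0, y0, x1, y1 = roi
        bin_w = (x1 - x0) / k
        bin_h = (y1 - y0) / k
        pooled = np.zeros((k, k))
        for i in range(k):          # bin row
            for j in range(k):      # bin column
                y_lo = int(y0 + i * bin_h)
                y_hi = max(int(y0 + (i + 1) * bin_h), y_lo + 1)
                x_lo = int(x0 + j * bin_w)
                x_hi = max(int(x0 + (j + 1) * bin_w), x_lo + 1)
                g = i * k + j       # the map dedicated to this bin
                pooled[i, j] = score_maps[g, y_lo:y_hi, x_lo:x_hi].mean()
        return pooled

    # Toy check: map g is constant g everywhere, so bin (i, j) pools to i*k + j.
    k = 3
    maps = np.stack([np.full((12, 12), float(g)) for g in range(k * k)])
    grid = ps_roi_pool(maps, (0, 0, 12, 12), k=k)
    ```

    Because each bin reads from its own map, shifting the ROI changes which pixels feed each bin, restoring translation-variance in localization while the underlying FCN features remain translation-invariant.
    
    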

  8. Simulation and evaluation of the SH-2F helicopter in a shipboard environment using the interchangeable cab system

    NASA Technical Reports Server (NTRS)

    Paulk, C. H., Jr.; Astill, D. L.; Donley, S. T.

    1983-01-01

    The operation of the SH-2F helicopter from the decks of small ships in adverse weather was simulated using a large amplitude vertical motion simulator, a wide angle computer generated imagery visual system, and an interchangeable cab (ICAB). The simulation facility, the mathematical programs, and the validation method used to ensure simulation fidelity are described. The results show the simulator to be a useful tool in simulating the ship-landing problem. Characteristics of the ICAB system and ways in which the simulation can be improved are presented.

  9. Moonwalk

    NASA Astrophysics Data System (ADS)

    Waite, C. T.

    2013-04-01

    Moonwalk is a stroll on the Moon through time and space, a lyrical history of humanity's scientific and allegorical relationship with the Moon from the beginnings of culture to the Space Age and the memories of the Cold-War generation. It is an experimental film in both genre and form, a computer animation designed for projection on a planetarium cupola. A hemispherical film, Moonwalk creates an immersive experience. The fulldome format presents aesthetic and technical challenges to create a new form of imagery and spatial montage. A seven-minute excerpt of the work-in-progress was shown at INSAPV in the Adler Planetarium, Chicago.

  10. Coral Reef Remote Sensing Using Simulated VIIRS and LDCM Imagery

    NASA Technical Reports Server (NTRS)

    Estep, Leland; Spruce, Joseph P.; Blonski, Slawomir; Moore, Roxzana

    2008-01-01

    The Rapid Prototyping Capability (RPC) node at NASA Stennis Space Center, MS, was used to simulate NASA next-generation sensor imagery over well-known coral reef areas: Looe Key, FL, and Kaneohe Bay, HI. The objective was to assess the degree to which next-generation sensor systems-the Visible/Infrared Imager/Radiometer Suite (VIIRS) and the Landsat Data Continuity Mission (LDCM)- might provide key input to the National Oceanographic and Atmospheric Administration (NOAA) Integrated Coral Observing Network (ICON)/Coral Reef Early Warning System (CREWS) Decision Support Tool (DST). The DST data layers produced from the simulated imagery concerned water quality and benthic classification map layers. The water optical parameters of interest were chlorophyll (Chl) and the absorption coefficient (a). The input imagery used by the RPC for simulation included spaceborne (Hyperion) and airborne (AVIRIS) hyperspectral data. Specific field data to complement and aid in validation of the overflight data was used when available. The results of the experiment show that the next-generation sensor systems are capable of providing valuable data layer resources to NOAA's ICON/CREWS DST.

  11. Coral Reef Remote Sensing using Simulated VIIRS and LDCM Imagery

    NASA Technical Reports Server (NTRS)

    Estep, Leland; Spruce, Joseph P.

    2007-01-01

    The Rapid Prototyping Capability (RPC) node at NASA Stennis Space Center, MS, was used to simulate NASA next-generation sensor imagery over well-known coral reef areas: Looe Key, FL, and Kaneohe Bay, HI. The objective was to assess the degree to which next-generation sensor systems the Visible/Infrared Imager/Radiometer Suite (VIIRS) and the Landsat Data Continuity Mission (LDCM) might provide key input to the National Oceanographic and Atmospheric Administration (NOAA) Integrated Coral Observing Network (ICON)/Coral Reef Early Warning System (CREWS) Decision Support Tool (DST). The DST data layers produced from the simulated imagery concerned water quality and benthic classification map layers. The water optical parameters of interest were chlorophyll (Chl) and the absorption coefficient (a). The input imagery used by the RPC for simulation included spaceborne (Hyperion) and airborne (AVIRIS) hyperspectral data. Specific field data to complement and aid in validation of the overflight data was used when available. The results of the experiment show that the next-generation sensor systems are capable of providing valuable data layer resources to NOAA's ICON/CREWS DST.

  12. Speed and accuracy improvements in FLAASH atmospheric correction of hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Perkins, Timothy; Adler-Golden, Steven; Matthew, Michael W.; Berk, Alexander; Bernstein, Lawrence S.; Lee, Jamine; Fox, Marsha

    2012-11-01

    Remotely sensed spectral imagery of the earth's surface can be used to fullest advantage when the influence of the atmosphere has been removed and the measurements are reduced to units of reflectance. Here, we provide a comprehensive summary of the latest version of the Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes atmospheric correction algorithm. We also report some new code improvements for speed and accuracy. These include the re-working of the original algorithm in C-language code parallelized with message passing interface and containing a new radiative transfer look-up table option, which replaces executions of the MODTRAN model. With computation times now as low as ~10 s per image per computer processor, automated, real-time, on-board atmospheric correction of hyper- and multi-spectral imagery is within reach.
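    The reflectance retrieval that FLAASH performs per band is commonly quoted in the form L = A*rho/(1 - rho_e*S) + B*rho_e/(1 - rho_e*S) + La, where rho is the pixel reflectance, rho_e the spatially averaged (adjacency) reflectance, and A, B, S, La come from the radiative-transfer model or look-up table. A minimal sketch of the two-step inversion follows; the coefficient values here are invented for illustration, not FLAASH outputs.

    ```python
    import numpy as np

    # Illustrative single-band atmospheric coefficients -- in FLAASH these come
    # from a MODTRAN run or the radiative-transfer look-up table; these numbers
    # are invented.
    A, B, S, La = 0.80, 0.15, 0.10, 1.2  # transmittance terms, spherical albedo, path radiance

    def flaash_reflectance(L, L_avg):
        """Invert L = A*rho/(1-rho_e*S) + B*rho_e/(1-rho_e*S) + La for rho.

        L     : at-sensor radiance of the pixel
        L_avg : spatially averaged radiance around the pixel (adjacency term)
        """
        # Step 1: estimate rho_e from the averaged radiance, setting rho == rho_e:
        #   L_avg - La = (A + B) * rho_e / (1 - rho_e * S)
        y = (L_avg - La) / (A + B)
        rho_e = y / (1 + S * y)
        # Step 2: solve the pixel equation for rho in closed form.
        return ((L - La) * (1 - rho_e * S) - B * rho_e) / A

    # Round trip: synthesize radiance from a known reflectance, then invert.
    rho_true = 0.3
    L = (A * rho_true + B * rho_true) / (1 - rho_true * S) + La
    ```

    For a spatially uniform scene (L_avg equal to L), the inversion recovers the input reflectance exactly; in practice L_avg is computed by spatial convolution over the image.
    
    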

  13. Compiling and editing agricultural strata boundaries with remotely sensed imagery and map attribute data using graphics workstations

    NASA Technical Reports Server (NTRS)

    Cheng, Thomas D.; Angelici, Gary L.; Slye, Robert E.; Ma, Matt

    1991-01-01

    The USDA presently uses labor-intensive photographic interpretation procedures to delineate large geographical areas into manageable size sampling units for the estimation of domestic crop and livestock production. Computer software to automate the boundary delineation procedure, called the computer-assisted stratification and sampling (CASS) system, was developed using a Hewlett Packard color-graphics workstation. The CASS procedures display Thematic Mapper (TM) satellite digital imagery on a graphics display workstation as the backdrop for the onscreen delineation of sampling units. USGS Digital Line Graph (DLG) data for roads and waterways are displayed over the TM imagery to aid in identifying potential sample unit boundaries. Initial analysis conducted with three Missouri counties indicated that CASS was six times faster than the manual techniques in delineating sampling units.

  14. Processing the Viking lander camera data

    NASA Technical Reports Server (NTRS)

    Levinthal, E. C.; Tucker, R.; Green, W.; Jones, K. L.

    1977-01-01

    Over 1000 camera events were returned from the two Viking landers during the Primary Mission. A system was devised for processing camera data as they were received, in real time, from the Deep Space Network. This system provided a flexible choice of parameters for three computer-enhanced versions of the data for display or hard-copy generation. Software systems allowed all but 0.3% of the imagery scan lines received on earth to be placed correctly in the camera data record. A second-order processing system was developed which allowed extensive interactive image processing including computer-assisted photogrammetry, a variety of geometric and photometric transformations, mosaicking, and color balancing using six different filtered images of a common scene. These results have been completely cataloged and documented to produce an Experiment Data Record.

  15. Geographic applications of ERTS-1 imagery to rural landscape change in eastern Tennessee

    NASA Technical Reports Server (NTRS)

    Rehder, J. B. (Principal Investigator); Omalley, J. R.

    1973-01-01

    There are no author-identified significant results in this report. A multistage sampling experiment was conducted using low (10,000') and high (60,000') altitude aircraft imagery in comparison with orbital (560 miles) ERTS imagery. Although the aircraft data provide detailed landscape observations similar to ground truth data, they cover relatively small areas per image frame for irregular static slices of time. By comparison, ERTS provides repetitive observations in a regional perspective for broad areal coverage. Microdensitometric and computer techniques are being used to analyze the ERTS imagery for gray tone signatures, comparisons, and ultimately for landscape change detection.

  16. A Wearable Channel Selection-Based Brain-Computer Interface for Motor Imagery Detection.

    PubMed

    Lo, Chi-Chun; Chien, Tsung-Yi; Chen, Yu-Chun; Tsai, Shang-Ho; Fang, Wai-Chi; Lin, Bor-Shyh

    2016-02-06

    Motor imagery-based brain-computer interface (BCI) is a communication interface between an external machine and the brain. Many kinds of spatial filters are used in BCIs to enhance the electroencephalography (EEG) features related to motor imagery. The approach of channel selection, developed to reserve meaningful EEG channels, is also an important technique for the development of BCIs. However, current BCI systems require a conventional EEG machine and EEG electrodes with conductive gel to acquire multi-channel EEG signals, and then transmit these EEG signals to a back-end computer to perform channel selection. This reduces the convenience of use in daily life and limits BCI applications. To address these issues, a novel wearable channel selection-based brain-computer interface is proposed. Retractable comb-shaped active dry electrodes are designed to measure EEG signals at hairy sites without conductive gel. Through analog common average reference (CAR) spatial filters and the firmware of the EEG acquisition module, spatial filtering is performed without any digital computation, and channel selection is carried out in the front-end device. This makes motor imagery detection practical directly on the wearable EEG device, or on commercial mobile phones and tablets with relatively low system specifications. Finally, the performance of the proposed BCI is investigated, and the experimental results show that the proposed system is a good wearable BCI system prototype.
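    The CAR spatial filter mentioned above has a very simple definition: each channel is re-referenced by subtracting the instantaneous mean of all channels (the paper realizes this in analog hardware; the digital equivalent is sketched below). Common-mode interference that appears identically on every electrode cancels out, while channel-specific sensorimotor activity survives.

    ```python
    import numpy as np

    def common_average_reference(eeg):
        """Common average reference (CAR) spatial filter.

        eeg: array of shape (n_channels, n_samples).
        Subtracts, sample by sample, the mean across all channels.
        """
        return eeg - eeg.mean(axis=0, keepdims=True)

    # Example: a common-mode artifact (e.g. mains interference) added to every
    # channel is removed, while channel-specific mu-band activity remains.
    t = np.linspace(0, 1, 256)
    artifact = 0.5 * np.sin(2 * np.pi * 50 * t)        # identical on all channels
    signals = np.vstack([np.sin(2 * np.pi * 10 * t),   # channel-specific activity
                         np.zeros_like(t),
                         np.zeros_like(t)])
    filtered = common_average_reference(signals + artifact)
    ```

    After filtering, the across-channel mean is exactly zero and the 50 Hz component is gone; the channel-specific 10 Hz signal is preserved up to the scaling that CAR introduces (here 2/3, since one of three channels carries it).
    
    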

  17. Incorporating structure from motion uncertainty into image-based pose estimation

    NASA Astrophysics Data System (ADS)

    Ludington, Ben T.; Brown, Andrew P.; Sheffler, Michael J.; Taylor, Clark N.; Berardi, Stephen

    2015-05-01

    A method for generating and utilizing structure from motion (SfM) uncertainty estimates within image-based pose estimation is presented. The method is applied to a class of problems in which SfM algorithms are utilized to form a geo-registered reference model of a particular ground area using imagery gathered during flight by a small unmanned aircraft. The model is then used to form camera pose estimates in near real-time from imagery gathered later. The resulting pose estimates can be utilized by any of the other onboard systems (e.g. as a replacement for GPS data) or downstream exploitation systems, e.g., image-based object trackers. However, many of the consumers of pose estimates require an assessment of the pose accuracy. The method for generating the accuracy assessment is presented. First, the uncertainty in the reference model is estimated. Bundle Adjustment (BA) is utilized for model generation. While the high-level approach for generating a covariance matrix of the BA parameters is straightforward, typical computing hardware is not able to support the required operations due to the scale of the optimization problem within BA. Therefore, a series of sparse matrix operations is utilized to form an exact covariance matrix for only the parameters that are needed at a particular moment. Once the uncertainty in the model has been determined, it is used to augment Perspective-n-Point pose estimation algorithms to improve the pose accuracy and to estimate the resulting pose uncertainty. The implementation of the described method is presented along with results including results gathered from flight test data.
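    One standard way to obtain an exact covariance for a parameter subset without inverting the full bundle-adjustment system, in the spirit of the sparse operations described above, is the Schur complement: the BA Hessian is block-structured, with a block-diagonal 3D-point block that is cheap to invert. The toy example below uses dense NumPy on a small stand-in matrix (a real system would use sparse factorizations); the split into 6 camera and 4 point parameters is an assumption for illustration.

    ```python
    import numpy as np

    def camera_covariance(H_cc, H_cp, H_pp):
        """Marginal covariance of camera parameters from a BA Hessian.

        The normal-equations matrix has the block form
            H = [[H_cc, H_cp],
                 [H_cp.T, H_pp]],
        where H_pp (the 3D-point block) is block-diagonal in real BA problems.
        The camera-parameter covariance is the inverse of the Schur complement,
        which equals the camera sub-block of inv(H) -- computed without ever
        forming the full inverse.
        """
        S = H_cc - H_cp @ np.linalg.solve(H_pp, H_cp.T)
        return np.linalg.inv(S)

    # Toy check against the brute-force dense inverse.
    rng = np.random.default_rng(0)
    J = rng.standard_normal((40, 10))       # stand-in Jacobian: 6 camera + 4 point params
    H = J.T @ J + 1e-3 * np.eye(10)         # SPD normal matrix (damped)
    H_cc, H_cp, H_pp = H[:6, :6], H[:6, 6:], H[6:, 6:]
    cov_cam = camera_covariance(H_cc, H_cp, H_pp)
    ```

    The result matches the corresponding sub-block of the full inverse exactly, which is why only the needed covariance columns ever have to be materialized.
    
    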

  18. Application of ERTS-1 imagery in the Vermont-New York dispute over pollution of Lake Champlain

    NASA Technical Reports Server (NTRS)

    Lind, A. O. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. ERTS-1 imagery and a composite map derived from ERTS-1 imagery were presented as evidence in a U.S. Supreme Court case involving the pollution of an interstate water body (Lake Champlain). A pollution problem generated by a large paper mill forms the basis of the suit (Vermont vs. International Paper Co. and State of New York) and ERTS-1 imagery shows the effluent pattern on the lake surface as extending into Vermont during three different times.

  19. Visible and thermal spectrum synthetic image generation with DIRSIG and MuSES for ground vehicle identification training

    NASA Astrophysics Data System (ADS)

    May, Christopher M.; Maurer, Tana O.; Sanders, Jeffrey S.

    2017-05-01

    There is a ubiquitous and never-ending need in the US armed forces for training materials that provide the warfighter with the skills needed to differentiate between friendly and enemy forces on the battlefield. The current state of the art in battlefield identification training is the Recognition of Combat Vehicles (ROC-V) tool created and maintained by the Communications-Electronics Research, Development and Engineering Center Night Vision and Electronic Sensors Directorate (CERDEC NVESD). The ROC-V training package utilizes measured visual and thermal imagery to train soldiers about the critical visual and thermal cues needed to accurately identify modern military vehicles and combatants. This paper presents an approach that has been developed to augment the existing ROC-V imagery database with synthetically generated multi-spectral imagery that will allow NVESD to provide improved training imagery at significantly lower costs.

  20. Multispectral image fusion using neural networks

    NASA Technical Reports Server (NTRS)

    Kagel, J. H.; Platt, C. A.; Donaven, T. W.; Samstad, E. A.

    1990-01-01

    A prototype system is being developed to demonstrate the use of neural network hardware to fuse multispectral imagery. This system consists of a neural network IC on a motherboard, a circuit card assembly, and a set of software routines hosted by a PC-class computer. Research in support of this consists of neural network simulations fusing 4 to 7 bands of Landsat imagery and fusing (separately) multiple bands of synthetic imagery. The simulations, results, and a description of the prototype system are presented.

  1. Generation of co-speech gestures based on spatial imagery from the right-hemisphere: evidence from split-brain patients.

    PubMed

    Kita, Sotaro; Lausberg, Hedda

    2008-02-01

    It has been claimed that the linguistically dominant (left) hemisphere is obligatorily involved in production of spontaneous speech-accompanying gestures (Kimura, 1973a, 1973b; Lavergne and Kimura, 1987). We examined this claim for gestures that are based on spatial imagery: iconic gestures with observer viewpoint (McNeill, 1992) and abstract deictic gestures (McNeill et al., 1993). We observed gesture production in three patients with complete section of the corpus callosum in commissurotomy or callosotomy (two with left-hemisphere language, and one with bilaterally represented language) and nine healthy control participants. All three patients produced spatial-imagery gestures with the left hand as well as with the right hand. However, unlike healthy controls and the split-brain patient with bilaterally represented language, the two patients with left-hemispheric language dominance coordinated speech and spatial-imagery gestures more poorly in the left hand than in the right hand. It is concluded that the linguistically non-dominant (right) hemisphere alone can generate co-speech gestures based on spatial imagery, just as the left hemisphere can.

  2. A study of the effects of degraded imagery on tactical 3D model generation using structure-from-motion

    NASA Astrophysics Data System (ADS)

    Bolick, Leslie; Harguess, Josh

    2016-05-01

    An emerging technology in the realm of airborne intelligence, surveillance, and reconnaissance (ISR) systems is structure-from-motion (SfM), which enables the creation of three-dimensional (3D) point clouds and 3D models from two-dimensional (2D) imagery. There are several existing tools, such as VisualSFM and open source project OpenSfM, to assist in this process, however, it is well-known that pristine imagery is usually required to create meaningful 3D data from the imagery. In military applications, such as the use of unmanned aerial vehicles (UAV) for surveillance operations, imagery is rarely pristine. Therefore, we present an analysis of structure-from-motion packages on imagery that has been degraded in a controlled manner.

  3. Brightening the Day With Flashes of Positive Mental Imagery: A Case Study of an Individual With Depression

    PubMed Central

    Holmes, Emily A.

    2017-01-01

    This article presents a case example of an individual with current major depression engaging in a positive mental imagery intervention, specifically a computerized cognitive training paradigm involving repeated practice in generating positive imagery in response to ambiguous scenarios. The patient's reported experience of the intervention suggests the potential of the positive imagery intervention to "brighten" everyday life via promoting involuntary "flashes" of positive mental imagery in situations related to the scenarios, with associated beneficial effects on positive affect, future expectations, and behavior. Enhancing this aspect of the training (i.e., involuntary positive imagery in contexts where it is adaptive) may hold particular promise for reducing anhedonic symptoms of depression. Developing simple computerized interventions to increase the experience of positive mental imagery in everyday life could therefore provide a useful addition to the drive to improve treatment outcomes. PMID:28152198

  4. Simulation of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Richsmeier, Steven C.; Singer-Berk, Alexander; Bernstein, Lawrence S.

    2004-01-01

    A software package generates simulated hyperspectral imagery for use in validating algorithms that generate estimates of Earth-surface spectral reflectance from hyperspectral images acquired by airborne and spaceborne instruments. This software is based on a direct simulation Monte Carlo approach for modeling three-dimensional atmospheric radiative transport, as well as reflections from surfaces characterized by spatially inhomogeneous bidirectional reflectance distribution functions. In this approach, "ground truth" is accurately known through input specification of surface and atmospheric properties, and it is practical to consider wide variations of these properties. The software can treat both land and ocean surfaces, as well as the effects of finite clouds with surface shadowing. The spectral/spatial data cubes computed by use of this software can serve both as a substitute for, and a supplement to, field validation data.

  5. Empirical measurement and model validation of infrared spectra of contaminated surfaces

    NASA Astrophysics Data System (ADS)

    Archer, Sean; Gartley, Michael; Kerekes, John; Cosofret, Bogdon; Giblin, Jay

    2015-05-01

    Liquid-contaminated surfaces generally require more sophisticated radiometric modeling to numerically describe surface properties. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) Model utilizes radiative transfer modeling to generate synthetic imagery. Within DIRSIG, a micro-scale surface property model (microDIRSIG) was used to calculate numerical bidirectional reflectance distribution functions (BRDF) of geometric surfaces with applied concentrations of liquid contamination. Simple cases where the liquid contamination was well described by optical constants on optically flat surfaces were first analytically evaluated by ray tracing and modeled within microDIRSIG. More complex combinations of surface geometry and contaminant application were then incorporated into the micro-scale model. The computed microDIRSIG BRDF outputs were used to describe surface material properties in the encompassing DIRSIG simulation. These DIRSIG generated outputs were validated with empirical measurements obtained from a Design and Prototypes (D&P) Model 102 FTIR spectrometer. Infrared spectra from the synthetic imagery and the empirical measurements were iteratively compared to identify quantitative spectral similarity between the measured data and modeled outputs. Several spectral angles between the predicted and measured emissivities differed by less than 1 degree. Synthetic radiance spectra produced from the microDIRSIG/DIRSIG combination had an RMS error of 0.21-0.81 watts/(m2-sr-μm) when compared to the D&P measurements. Results from this comparison will facilitate improved methods for identifying spectral features and detecting liquid contamination on a variety of natural surfaces.

  6. Flood Extent Mapping Using Dual-Polarimetric SENTINEL-1 Synthetic Aperture Radar Imagery

    NASA Astrophysics Data System (ADS)

    Jo, M.-J.; Osmanoglu, B.; Zhang, B.; Wdowinski, S.

    2018-04-01

    Rapid generation of synthetic aperture radar (SAR) based flood extent maps provides valuable data in disaster response efforts, thanks to the cloud-penetrating ability of microwaves. We present a method using dual-polarimetric SAR imagery acquired by the Sentinel-1a/b satellites. A false-colour map is generated using pre- and post-disaster imagery, allowing operators to distinguish between existing standing water pre-flooding and recently flooded areas. The method works best in areas of standing water and provides mixed results in urban areas. A flood depth map is also estimated by using an external DEM. We will present the methodology, its estimated accuracy, and investigations into improving the response in urban areas.
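    The pre/post false-colour idea can be sketched in a few lines. The channel assignment below (R = pre-event backscatter, G = B = post-event) is one common convention, assumed here rather than taken from the abstract: newly flooded land is bright before and dark after, so it appears red; permanent water, dark in both acquisitions, stays black; dry land is grey.

    ```python
    import numpy as np

    def flood_false_colour(pre_db, post_db, lo=-25.0, hi=0.0):
        """False-colour composite from pre- and post-event SAR backscatter (dB).

        R = pre-event, G = B = post-event (one common convention).
        Newly flooded land (bright before, dark after) shows up red;
        permanent water (dark in both) stays black; dry land is grey.
        """
        def stretch(img):
            # Linear stretch from a typical backscatter range to [0, 1].
            return np.clip((img - lo) / (hi - lo), 0.0, 1.0)
        r = stretch(pre_db)
        g = b = stretch(post_db)
        return np.dstack([r, g, b])

    # 2x2 toy scene: [dry land, newly flooded; permanent water, dry land]
    pre  = np.array([[-8.0,  -8.0], [-22.0, -10.0]])
    post = np.array([[-8.0, -22.0], [-22.0, -10.0]])
    rgb = flood_false_colour(pre, post)
    ```

    In the toy scene the flooded pixel gets a strong red channel and weak green/blue, while the permanent-water pixel is neutral and dark in all channels.
    
    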

  7. BisQue: cloud-based system for management, annotation, visualization, analysis and data mining of underwater and remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Fedorov, D.; Miller, R. J.; Kvilekval, K. G.; Doheny, B.; Sampson, S.; Manjunath, B. S.

    2016-02-01

    Logistical and financial limitations of underwater operations are inherent in marine science, including biodiversity observation. Imagery is a promising way to address these challenges, but the diversity of organisms thwarts simple automated analysis. Recent developments in computer vision methods, such as convolutional neural networks (CNN), are promising for automated classification and detection tasks but are typically very computationally expensive and require extensive training on large datasets. Therefore, managing and connecting distributed computation, large storage and human annotations of diverse marine datasets is crucial for effective application of these methods. BisQue is a cloud-based system for management, annotation, visualization, analysis and data mining of underwater and remote sensing imagery and associated data. Designed to hide the complexity of distributed storage, large computational clusters, diversity of data formats and inhomogeneous computational environments behind a user-friendly web-based interface, BisQue is built around the idea of flexible and hierarchical annotations defined by the user. Such textual and graphical annotations can describe captured attributes and the relationships between data elements. Annotations are powerful enough to describe cells in fluorescent 4D images, fish species in underwater videos and kelp beds in aerial imagery. Presently we are developing BisQue-based analysis modules for automated identification of benthic marine organisms. Recent experiments with drop-out and CNN based classification of several thousand annotated underwater images demonstrated an overall accuracy above 70% for the 15 best performing species and above 85% for the top 5 species. Based on these promising results, we have extended BisQue with a CNN-based classification system allowing continuous training on user-provided data.

  8. Real-time image processing for passive mmW imagery

    NASA Astrophysics Data System (ADS)

    Kozacik, Stephen; Paolini, Aaron; Bonnett, James; Harrity, Charles; Mackrides, Daniel; Dillon, Thomas E.; Martin, Richard D.; Schuetz, Christopher A.; Kelmelis, Eric; Prather, Dennis W.

    2015-05-01

    The transmission characteristics of millimeter waves (mmWs) make them suitable for many applications in defense and security, from airport preflight scanning to penetrating degraded visual environments such as brownout or heavy fog. While the cold sky provides sufficient illumination for these images to be taken passively in outdoor scenarios, this utility comes at a cost; the diffraction limit of the longer wavelengths involved leads to lower resolution imagery compared to the visible or IR regimes, and the low power levels inherent to passive imagery allow the data to be more easily degraded by noise. Recent techniques leveraging optical upconversion have shown significant promise, but are still subject to fundamental limits in resolution and signal-to-noise ratio. To address these issues we have applied techniques developed for visible and IR imagery to decrease noise and increase resolution in mmW imagery. We have developed these techniques into fieldable software, making use of GPU platforms for real-time operation of computationally complex image processing algorithms. We present data from a passive, 77 GHz, distributed aperture, video-rate imaging platform captured during field tests at full video rate. These videos demonstrate the increase in situational awareness that can be gained through applying computational techniques in real-time without needing changes in detection hardware.
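    The abstract does not name its algorithms, but one representative resolution-enhancement technique carried over from visible/IR imaging, and one that maps naturally onto GPUs because every step is an element-wise FFT operation, is frequency-domain Wiener deconvolution. The sketch below is a generic illustration under that assumption, not the authors' pipeline.

    ```python
    import numpy as np

    def wiener_deconvolve(blurred, psf, snr=100.0):
        """Frequency-domain Wiener deconvolution.

        Divides out the blur kernel's spectrum while damping frequencies
        where the signal-to-noise ratio is poor:
            X = conj(H) / (|H|^2 + 1/snr) * Y
        """
        H = np.fft.fft2(psf, s=blurred.shape)   # zero-pad PSF to image size
        Y = np.fft.fft2(blurred)
        X = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr) * Y
        return np.real(np.fft.ifft2(X))

    # Toy check: blur a point source with a small box PSF, then deconvolve.
    img = np.zeros((32, 32))
    img[16, 16] = 1.0
    psf = np.ones((3, 3)) / 9.0
    blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, s=img.shape)))
    restored = wiener_deconvolve(blurred, psf, snr=1e6)
    ```

    With a high assumed SNR the point source is recovered almost exactly; lowering `snr` trades sharpness for noise suppression, which matters for the low power levels inherent to passive mmW imagery.
    
    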

  9. Picture archiving and computing systems: the key to enterprise digital imaging.

    PubMed

    Krohn, Richard

    2002-09-01

    The utopian view of the electronic medical record includes the digital transformation of all aspects of patient information. Historically, imagery from the radiology, cardiology, ophthalmology, and pathology departments, as well as the emergency room, has been a morass of paper, film, and other media, isolated within each department's system architecture. In answer to this dilemma, picture archiving and computing systems have become the focal point of efforts to create a single platform for the collection, storage, and distribution of clinical imagery throughout the health care enterprise.

  10. Assessing motor imagery in brain-computer interface training: Psychological and neurophysiological correlates.

    PubMed

    Vasilyev, Anatoly; Liburkina, Sofya; Yakovlev, Lev; Perepelkina, Olga; Kaplan, Alexander

    2017-03-01

    Motor imagery (MI) is considered to be a promising cognitive tool for improving motor skills as well as for rehabilitation therapy of movement disorders. It is believed that MI training efficiency could be improved by using brain-computer interface (BCI) technology, which provides real-time feedback on a person's mental attempts. While BCI is indeed a convenient and motivating tool for practicing MI, it is not clear whether it could be used for predicting or measuring the potential positive impact of the training. In this study, we attempt to establish whether proficiency in BCI control is associated with any of the neurophysiological or psychological correlates of motor imagery, and to determine possible interrelations among them. For that purpose, we studied motor imagery in a group of 19 healthy BCI-trained volunteers and performed a correlation analysis across various quantitative assessment metrics. We examined subjects' sensorimotor event-related EEG activity, corticospinal excitability changes estimated with single-pulse transcranial magnetic stimulation (TMS), BCI accuracy, and self-assessment reports obtained with specially designed questionnaires and an interview routine. As expected, our results showed that BCI performance depends on the subject's capability to suppress EEG sensorimotor rhythms, which in turn is correlated with the idle-state amplitude of those oscillations. Neither BCI accuracy nor the EEG features associated with MI were found to correlate with the level of corticospinal excitability increase during motor imagery, or with self-assessed imagery vividness. Finally, a significant correlation was found between the level of corticospinal excitability increase and kinesthetic vividness of imagery (KVIQ-20 questionnaire). 
Our results suggest that two distinct neurophysiological mechanisms might mediate possible effects of motor imagery: non-specific cortical sensorimotor disinhibition and a focal increase in corticospinal excitability. The acquired data suggest that the BCI-based approach is unreliable for assessing motor imagery due to its strong dependence on the subject's innate EEG features (e.g. resting mu-rhythm amplitude). Therefore, employment of additional assessment protocols, such as TMS and psychological testing, is required for a more comprehensive evaluation of the subject's motor imagery training efficiency. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Brain computer interfaces for neurorehabilitation – its current status as a rehabilitation strategy post-stroke.

    PubMed

    van Dokkum, L E H; Ward, T; Laffont, I

    2015-02-01

    The idea of using brain computer interfaces (BCI) for rehabilitation emerged relatively recently. Basically, BCI for neurorehabilitation involves the recording and decoding of local brain signals generated by the patient, as he/she tries to perform a particular task (even if imperfectly), or during a mental imagery task. The main objective is to promote the recruitment of selected brain areas involved and to facilitate neural plasticity. The recorded signal can be used in several ways: (i) to objectify and strengthen motor imagery-based training, by providing the patient feedback on the imagined motor task, for example, in a virtual environment; (ii) to generate a desired motor task via functional electrical stimulation or rehabilitative robotic orthoses attached to the patient's limb – encouraging and optimizing task execution as well as "closing" the disrupted sensorimotor loop by giving the patient the appropriate sensory feedback; (iii) to understand cerebral reorganizations after lesion, in order to influence or even quantify plasticity-induced changes in brain networks. For example, applying cerebral stimulation to re-equilibrate inter-hemispheric imbalance, as shown by functional recording of brain activity during movement, may help recovery. Its potential usefulness for a patient population has been demonstrated on various levels, and its diverseness in interface applications makes it adaptable to a large population. The position and status of these very new rehabilitation systems should now be considered with respect to our current and more or less validated traditional methods, as well as in the light of the wide range of possible brain damage. The heterogeneity in post-damage expression inevitably complicates the decoding of brain signals and thus their use in pathological conditions, calling for controlled clinical trials. Copyright © 2015. Published by Elsevier Masson SAS.

  12. Interdisciplinary research on the application of ERTS-1 data to the regional land use planning process

    NASA Technical Reports Server (NTRS)

    Clapp, J. L. (Principal Investigator); Kiefer, R. W.; Mccarthy, M. M.; Niemann, B. J., Jr.

    1972-01-01

    The author has identified the following significant results. Although the degree to which ERTS-1 imagery can satisfy regional land use planning data needs is not yet known, it appears to offer means by which the data acquisition process can be immeasurably improved. The initial experiences of an interdisciplinary group attempting to formulate ways of analyzing the effectiveness of ERTS-1 imagery as a base for environmental monitoring and the resolution of regional land allocation problems are documented. Application of imagery to the regional planning process consists of utilizing representative geographical regions within the state of Wisconsin. Because of the need to describe and depict regional resource complexity in an interrelatable state, certain resources within the geographical regions have been inventoried and stored in a two-dimensional computer-based map form. Computer-oriented processes were developed to provide for the economical storage, analysis, and spatial display of natural and cultural data for regional land use planning purposes. The authors are optimistic that the imagery will provide relevant data for land use decision making at regional levels.

  13. Exploring differences between left and right hand motor imagery via spatio-temporal EEG microstate.

    PubMed

    Liu, Weifeng; Liu, Xiaoming; Dai, Ruomeng; Tang, Xiaoying

    2017-12-01

    EEG-based motor imagery is very useful in brain-computer interfaces, and how to identify the imagined movement is still an open research question. Electroencephalography (EEG) microstates reflect the spatial configuration of quasi-stable electrical potential topographies, and different microstates represent different brain functions. In this paper, the microstate method was used to process motor imagery EEG. Differences in single-trial EEG microstate sequences between two motor imagery tasks, imagination of left and right hand movement, were investigated. The microstate parameters (duration, time coverage, and occurrence per second) as well as the transition probabilities of the microstate sequences were obtained with spatio-temporal microstate analysis. The results showed significant differences (P < 0.05) between the two tasks with a paired t-test. These microstate parameters were then used as features, and a linear support vector machine (SVM) was utilized to classify the two tasks, achieving a mean accuracy of 89.17%, superior to the other methods compared. These results indicate that microstates can be a promising feature for improving the performance of brain-computer interface classification.
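
The three microstate parameters named above (duration, time coverage, occurrence per second) are straightforward to compute from a labeled microstate sequence. The sketch below is illustrative only, not the authors' code: it assumes a single-trial sequence of microstate class labels sampled at a known rate; the toy trial and sampling rate are invented.

```python
# Sketch: computing microstate parameters (mean duration, time coverage,
# occurrences per second) from a sequence of per-sample microstate labels.

from itertools import groupby

def microstate_parameters(labels, fs):
    """Return {state: (mean_duration_s, time_coverage, occurrences_per_s)}."""
    # Collapse the label sequence into runs of identical consecutive states.
    runs = [(state, len(list(group))) for state, group in groupby(labels)]
    n = len(labels)
    stats = {}
    for state in set(labels):
        lengths = [length for s, length in runs if s == state]
        mean_duration = sum(lengths) / len(lengths) / fs  # seconds per segment
        coverage = sum(lengths) / n                       # fraction of trial
        occurrence = len(lengths) / (n / fs)              # segments per second
        stats[state] = (mean_duration, coverage, occurrence)
    return stats

# Toy 1-second trial at 10 Hz with two microstate classes 'A' and 'B'
trial = ['A', 'A', 'B', 'B', 'B', 'A', 'A', 'A', 'B', 'B']
print(microstate_parameters(trial, fs=10))
```

Each class's parameter triple can then be concatenated into a feature vector for a linear classifier such as an SVM.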

  14. An Evolutionary Algorithm for Fast Intensity Based Image Matching Between Optical and SAR Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Fischer, Peter; Schuegraf, Philipp; Merkle, Nina; Storch, Tobias

    2018-04-01

    This paper presents a hybrid evolutionary algorithm for fast intensity-based matching between satellite imagery from SAR and very high-resolution (VHR) optical sensor systems. The precise and accurate co-registration of image time series and of images from different sensors is a key task in multi-sensor image processing scenarios. The necessary preprocessing step of image matching and tie-point detection is divided into a search problem and a similarity measurement. Within this paper we evaluate the use of an evolutionary search strategy for establishing the spatial correspondence between satellite imagery of optical and radar sensors. The aim of the proposed algorithm is to decrease the computational cost of the search process by formulating the search as an optimization problem. Based upon the canonical evolutionary algorithm, the proposed algorithm is adapted for SAR/optical imagery intensity-based matching. Extensions such as hybridization (e.g. local search) are introduced to lower the number of objective-function calls and to refine the result. The algorithm significantly decreases the computational cost while finding the optimal solution in a reliable way.

  15. [Implications of mental image processing in the deficits of verbal information coding during normal aging].

    PubMed

    Plaie, Thierry; Thomas, Delphine

    2008-06-01

    Our study specifies the contributions of the image generation and image maintenance processes occurring at the time of imaginal coding of verbal information in memory during normal aging. The memory capacities of 19 young adults (average age 24 years) and 19 older adults (average age 75 years) were assessed using recall tasks according to the imagery value of the stimuli to be learned. Mental visual imagery capacities were assessed using tasks of image generation and temporary storage of mental images. The analysis of variance indicates a greater age-related decrease in the concreteness effect. The major contribution of our study rests on the finding that the age-related decline of dual coding of verbal information in memory would result primarily from the decline of image maintenance capacities and from a slowdown in image generation. (PsycINFO Database Record (c) 2008 APA, all rights reserved).

  16. Utilization of ERTS data to detect plant diseases and nutrient deficiencies, soil types and moisture levels

    NASA Technical Reports Server (NTRS)

    Parks, W. L.; Sewell, J. I. (Principal Investigator); Hilty, J. W.; Rennie, J. C.

    1972-01-01

    The author has identified the following significant results. A significant finding is the identification and delineation of a large soil association in Obion County, West Tennessee. These data are now being processed through the scanner and computer and will be included in the next report, along with pictures of the printout and imagery. Channel 7 appears to provide the most useful imagery related to soil differences. Soil types have been identified through the use of aircraft imagery. However, a soil association map appears to be the best that space imagery will provide. The exception will be large areas of a uniform soil type, as occur in the Great Plains.

  17. Application of ERTS-1 imagery to land use, forest density and soil investigations in Greece

    NASA Technical Reports Server (NTRS)

    Yassoglou, N. J.; Skordalakis, E.; Koutalos, A.

    1974-01-01

    Photographic and digital imagery received from ERTS-1 was analyzed and evaluated as to its usefulness for the assessment of agricultural and forest land resources. Black and white, and color composite imagery provided spectral and spatial data, which, when matched with temporal land information, provided the basis for a semidetailed land use and forest site evaluation cartography. Color composite photographs have provided some information on the status of irrigation of agricultural lands. Computer processed digital imagery was successfully used for detailed crop classification and semidetailed soil evaluation. The results and techniques of this investigation are applicable to ecological and geological conditions similar to those prevailing in the Eastern Mediterranean.

  18. Multi Sensor Data Integration for an Accurate 3D Model Generation

    NASA Astrophysics Data System (ADS)

    Chhatkuli, S.; Satoh, T.; Tachibana, K.

    2015-05-01

    The aim of this paper is to introduce a novel technique for integrating two different data sets, i.e. a laser-scanned RGB point cloud and a 3D model derived from oblique imagery, to create a 3D model with more detail and better accuracy. In general, aerial imagery is used to create a 3D city model. Aerial imagery produces an overall decent 3D city model and is generally suited to generating 3D models of building roofs and non-complex terrain. However, the 3D model automatically generated from aerial imagery generally suffers from a lack of accuracy in deriving the 3D model of roads under bridges, details under tree canopy, isolated trees, etc. Moreover, the 3D model automatically generated from aerial imagery also suffers in many cases from undulated road surfaces, non-conforming building shapes, and the loss of minute details such as street furniture. On the other hand, laser-scanned data and images taken from a mobile vehicle platform can produce more detailed 3D models of roads, street furniture, details under bridges, etc. However, laser-scanned data and images from a mobile vehicle are not suitable for acquiring detailed 3D models of tall buildings, roof tops, and so forth. Our proposed approach to integrating multi-sensor data compensates for each data set's weaknesses and helps to create a very detailed 3D model with better accuracy. Moreover, additional details such as isolated trees and street furniture, which were missing in the original 3D model derived from aerial imagery, could also be integrated into the final model automatically. During the process, noise in the laser-scanned data, for example people and vehicles on the road, was also automatically removed. Hence, even though the two data sets were acquired in different time periods, the integrated data set, i.e. the final 3D model, was generally noise-free and without unnecessary details.

  19. Automated Generation of the Alaska Coastline Using High-Resolution Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Roth, G.; Porter, C. C.; Cloutier, M. D.; Clementz, M. E.; Reim, C.; Morin, P. J.

    2015-12-01

    Previous campaigns to map Alaska's coast at high resolution have relied on airborne, marine, or ground-based surveying and manual digitization. The coarse temporal resolution, inability to scale geographically, and high cost of field data acquisition in these campaigns are inadequate for the scale and speed of recent coastal change in Alaska. Here, we leverage the Polar Geospatial Center (PGC) archive of DigitalGlobe, Inc. satellite imagery to produce a state-wide coastline at 2 meter resolution. We first select multispectral imagery based on time and quality criteria. We then extract the near-infrared (NIR) band from each processed image, and classify each pixel as water or land with a pre-determined NIR threshold value. Processing continues with vectorizing the water-land boundary, removing extraneous data, and attaching metadata. Final coastline raster and vector products maintain the original accuracy of the orthorectified satellite data, which is often within the local tidal range. The repeat frequency of coastline production can range from 1 month to 3 years, depending on factors such as satellite capacity, cloud cover, and floating ice. Shadows from trees or structures complicate the output and merit further data cleaning. The PGC's imagery archive, unique expertise, and computing resources enabled us to map the Alaskan coastline in a few months. The DigitalGlobe archive allows us to update this coastline as new imagery is acquired, and provides baseline data for studies of coastal change and for the improvement of topographic datasets. Our results are not simply a one-time coastline, but rather a system for producing multi-temporal, automated coastlines. Workflows and tools produced with this project can be freely distributed and utilized globally. Researchers and government agencies must now consider how they can incorporate and quality-control this high-frequency, high-resolution data to meet their mapping standards and research objectives.
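
The per-pixel classification step described above (threshold the NIR band, label water vs. land, then trace the water-land boundary) can be illustrated on a toy grid. This is a dependency-free sketch, not the PGC pipeline: the threshold value and the 5x5 "NIR band" are invented, and real processing would run on orthorectified raster tiles.

```python
# Sketch: NIR thresholding to a water/land mask, then marking land pixels
# that touch water as candidate coastline pixels.

NIR_THRESHOLD = 40  # hypothetical digital-number cutoff: water absorbs NIR

nir = [
    [10, 12, 15, 60, 80],
    [11, 14, 55, 70, 85],
    [13, 50, 65, 75, 90],
    [12, 18, 52, 72, 88],
    [10, 11, 16, 58, 84],
]

land = [[px > NIR_THRESHOLD for px in row] for row in nir]  # True = land

def coastline(mask):
    """Return land pixels 4-adjacent to at least one water pixel."""
    rows, cols = len(mask), len(mask[0])
    edge = set()
    for r in range(rows):
        for c in range(cols):
            if not mask[r][c]:
                continue  # water pixel, skip
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols and not mask[rr][cc]:
                    edge.add((r, c))
    return edge

print(sorted(coastline(land)))
```

In the production workflow these boundary pixels would then be vectorized into line features and attributed with metadata.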

  20. Generation, Validation, and Application of Abundance Map Reference Data for Spectral Unmixing

    NASA Astrophysics Data System (ADS)

    Williams, McKay D.

    Reference data ("ground truth") maps traditionally have been used to assess the accuracy of imaging spectrometer classification algorithms. However, these reference data can be prohibitively expensive to produce, often do not include sub-pixel abundance estimates necessary to assess spectral unmixing algorithms, and lack published validation reports. Our research proposes methodologies to efficiently generate, validate, and apply abundance map reference data (AMRD) to airborne remote sensing scenes. We generated scene-wide AMRD for three different remote sensing scenes using our remotely sensed reference data (RSRD) technique, which spatially aggregates unmixing results from fine scale imagery (e.g., 1-m Ground Sample Distance (GSD)) to co-located coarse scale imagery (e.g., 10-m GSD or larger). We validated the accuracy of this methodology by estimating AMRD in 51 randomly-selected 10 m x 10 m plots, using seven independent methods and observers, including field surveys by two observers, imagery analysis by two observers, and RSRD using three algorithms. Results indicated statistically-significant differences between all versions of AMRD, suggesting that all forms of reference data need to be validated. Given these significant differences between the independent versions of AMRD, we proposed that the mean of all (MOA) versions of reference data for each plot and class were most likely to represent true abundances. We then compared each version of AMRD to MOA. Best case accuracy was achieved by a version of imagery analysis, which had a mean coverage area error of 2.0%, with a standard deviation of 5.6%. One of the RSRD algorithms was nearly as accurate, achieving a mean error of 3.0%, with a standard deviation of 6.3%, showing the potential of RSRD-based AMRD generation. 
Application of validated AMRD to specific coarse scale imagery involved three main parts: 1) spatial alignment of coarse and fine scale imagery, 2) aggregation of fine scale abundances to produce coarse scale imagery-specific AMRD, and 3) demonstration of comparisons between coarse scale unmixing abundances and AMRD. Spatial alignment was performed using our scene-wide spectral comparison (SWSC) algorithm, which aligned imagery with accuracy approaching the distance of a single fine scale pixel. We compared simple rectangular aggregation to coarse sensor point spread function (PSF) aggregation, and found that the PSF approach returned lower error, but that rectangular aggregation more accurately estimated true abundances at ground level. We demonstrated various metrics for comparing unmixing results to AMRD, including mean absolute error (MAE) and linear regression (LR). We additionally introduced reference data mean adjusted MAE (MA-MAE), and reference data confidence interval adjusted MAE (CIA-MAE), which account for known error in the reference data itself. MA-MAE analysis indicated that fully constrained linear unmixing of coarse scale imagery across all three scenes returned an error of 10.83% per class and pixel, with regression analysis yielding a slope = 0.85, intercept = 0.04, and R2 = 0.81. Our reference data research has demonstrated a viable methodology to efficiently generate, validate, and apply AMRD to specific examples of airborne remote sensing imagery, thereby enabling direct quantitative assessment of spectral unmixing performance.
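
The comparison metrics named above can be sketched compactly. In the code below, the abundance values are made up, and the mean-adjustment in MA-MAE is shown as simply subtracting a known mean error of the reference data from the plain MAE; that reading of the metric is an assumption for illustration, not a quote of the paper's definition.

```python
# Sketch: MAE, a mean-adjusted MAE variant, and closed-form linear regression
# for comparing unmixing abundances against abundance map reference data (AMRD).

def mae(estimates, reference):
    """Mean absolute error between two abundance vectors."""
    return sum(abs(e - r) for e, r in zip(estimates, reference)) / len(estimates)

def ma_mae(estimates, reference, reference_mean_error):
    # Assumed reading of MA-MAE: credit the unmixing result for error
    # already known to be present in the reference data itself.
    return max(0.0, mae(estimates, reference) - reference_mean_error)

def linear_regression(x, y):
    """Closed-form least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

unmixed = [0.10, 0.35, 0.50, 0.80]   # hypothetical per-pixel abundances
amrd    = [0.12, 0.30, 0.55, 0.78]   # hypothetical reference abundances

print(round(mae(unmixed, amrd), 4))
print(round(ma_mae(unmixed, amrd, reference_mean_error=0.02), 4))
print(linear_regression(amrd, unmixed))
```

A slope near 1 and an intercept near 0, as reported in the study, indicate that the unmixing abundances track the reference abundances without systematic bias.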

  1. New perspectives on archaeological prospecting: Multispectral imagery analysis from Army City, Kansas, USA

    NASA Astrophysics Data System (ADS)

    Banks, Benjamin Daniel

    Aerial imagery analysis has a long history in European archaeology, and despite early attempts, little progress has been made to promote its use in North America. Recent advances in multispectral satellite and aerial sensors are helping to make aerial imagery analysis more effective, and more cost effective, in North America. A site in northeastern Kansas is explored using multispectral aerial and satellite imagery, allowing buried features to be mapped. Many of the problems associated with early aerial imagery analysis are explored, such as knowledge of the archaeological processes that contribute to crop mark formation. Use of multispectral imagery provides a means of detecting and enhancing crop marks not easily distinguishable in visible-spectrum imagery. Unsupervised computer classifications of potential archaeological features permit their identification and interpretation, while supervised classifications, incorporating limited amounts of geophysical data, provide a more detailed understanding of the site. Supervised classifications allow the archaeological processes contributing to crop mark formation to be explored. Aerial imagery analysis is argued to be useful for a wide range of archaeological problems, reducing the person-hours and expenses needed for site delineation and mapping. This technology may be especially useful for cultural resources management.

  2. Processing ground-based near-infrared imagery of space shuttle re-entries

    NASA Astrophysics Data System (ADS)

    Spisz, Thomas S.; Taylor, Jeff C.; Kennerly, Stephen W.; Osei-Wusu, Kwame; Gibson, David M.; Horvath, Thomas J.; Zalameda, Joseph N.; Kerns, Robert V.; Shea, Edward J.; Mercer, C. David; Schwartz, Richard J.; Dantowitz, Ronald F.; Kozubal, Marek J.

    2012-06-01

    Ground-based high-resolution, calibrated, near-infrared (NIR) imagery of the Space Shuttle STS-134 Endeavour during re-entry has been obtained as part of NASA's HYTHIRM (Hypersonic Thermodynamic InfraRed Measurements) project. The long-range optical sensor package called MARS (Mobile Aerospace Reconnaissance System) was positioned in advance to acquire and track part of the shuttle re-entry. Imagery was acquired over a few minutes, with the best imagery being processed when the shuttle was at 133 kft at Mach 5.8. This paper describes the processing of the NIR imagery, building upon earlier work from the airborne imagery collections of several prior shuttle missions. Our goal is to calculate the temperature distribution of the shuttle's bottom surface as accurately as possible, considering both random and systematic errors, while maintaining all physical features in the imagery, especially local intensity variations. The processing areas described are: 1) radiometric calibration, 2) improvement of image quality, 3) atmospheric compensation, and 4) conversion to temperature. The computed temperature image is shown, as well as comparisons with thermocouples at different positions on the shuttle. A discussion of the uncertainties of the temperature estimates using the NIR imagery is also given.

  3. 3D Surface Generation from Aerial Thermal Imagery

    NASA Astrophysics Data System (ADS)

    Khodaei, B.; Samadzadegan, F.; Dadras Javan, F.; Hasani, H.

    2015-12-01

    Aerial thermal imagery has recently been applied to quantitative analysis of several scenes. For mapping purposes based on aerial thermal imagery, a high-accuracy photogrammetric process is necessary. However, due to the low geometric resolution and low contrast of thermal imaging sensors, precise 3D measurement of objects presents some challenges. In this paper the potential of thermal video for 3D surface generation is evaluated. In the pre-processing step, the thermal camera is geometrically calibrated using a calibration grid, based on emissivity differences between the background and the targets. Then, Digital Surface Model (DSM) generation from thermal video imagery is performed in four steps. Initially, frames are extracted from the video; then tie points are generated by the Scale-Invariant Feature Transform (SIFT) algorithm. Bundle adjustment is then applied and the camera position and orientation parameters are determined. Finally, a multi-resolution dense image matching algorithm is used to create a 3D point cloud of the scene. The potential of the proposed method is evaluated on thermal imagery covering an industrial area. The thermal camera has a 640×480 Uncooled Focal Plane Array (UFPA) sensor, is equipped with a 25 mm lens, and is mounted on an Unmanned Aerial Vehicle (UAV). The obtained results show that the accuracy of the 3D model generated from thermal images is comparable to the DSM generated from visible images, although the thermal-based DSM is somewhat smoother, with a lower level of texture. Comparing the generated DSM with the 9 GCPs measured in the area shows that the Root Mean Square Error (RMSE) is smaller than 5 decimetres in both the X and Y directions and 1.6 metres in the Z direction.

  4. Application of tripolar concentric electrodes and prefeature selection algorithm for brain-computer interface.

    PubMed

    Besio, Walter G; Cao, Hongbao; Zhou, Peng

    2008-04-01

    For persons with severe disabilities, a brain-computer interface (BCI) may be a viable means of communication. Laplacian electroencephalogram (EEG) recordings have been shown to improve classification in EEG recognition. In this work, the effectiveness of signals from tripolar concentric electrodes and disc electrodes was compared for use in a BCI. Two sets of left/right hand motor imagery EEG signals were acquired. An autoregressive (AR) model was developed for feature extraction, with a Mahalanobis-distance-based linear classifier for classification. An exhaustive selection algorithm was employed to analyze three factors before feature extraction: 1) the length of data in each trial to be used, 2) the start position of the data, and 3) the order of the AR model. The results showed that tripolar concentric electrodes generated significantly higher classification accuracy than disc electrodes.
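
The feature-extraction and classification chain described above can be sketched compactly. The code below is an illustrative stand-in, not the authors' implementation: it fits an order-2 AR model via the Yule-Walker equations and assigns a feature vector to the class with the smaller variance-weighted (diagonal-covariance Mahalanobis) distance. The toy signal, class means, and variances are all invented, and the AR order is fixed at 2 here although the paper treats it as a tuned factor.

```python
# Sketch: AR(2) coefficients as features, classified by a Mahalanobis-style
# distance to per-class statistics (diagonal covariance assumed for brevity).

def autocorr(x, lag):
    """Biased autocovariance estimate at the given lag."""
    n = len(x)
    m = sum(x) / n
    return sum((x[i] - m) * (x[i + lag] - m) for i in range(n - lag)) / n

def ar2_features(signal):
    """Solve the 2x2 Yule-Walker system for AR(2) coefficients."""
    r0, r1, r2 = (autocorr(signal, k) for k in (0, 1, 2))
    det = r0 * r0 - r1 * r1
    a1 = (r1 * r0 - r1 * r2) / det
    a2 = (r0 * r2 - r1 * r1) / det
    return [a1, a2]

def mahalanobis_classify(features, class_means, class_vars):
    """Assign to the class with the smallest variance-weighted distance."""
    def dist(mean, var):
        return sum((f - m) ** 2 / v for f, m, v in zip(features, mean, var))
    return min(class_means, key=lambda c: dist(class_means[c], class_vars[c]))

# Deterministic toy "EEG" trial and invented class statistics
trial_features = ar2_features([((i * 37) % 11) - 5 for i in range(128)])
print(trial_features)

class_means = {'left': [0.8, -0.2], 'right': [0.3, 0.4]}
class_vars = {'left': [0.05, 0.05], 'right': [0.05, 0.05]}
print(mahalanobis_classify([0.75, -0.1], class_means, class_vars))
```

In practice the class means and covariances would be estimated from labeled training trials, and a full (non-diagonal) covariance would be used.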

  5. Computational intelligence-based optimization of maximally stable extremal region segmentation for object detection

    NASA Astrophysics Data System (ADS)

    Davis, Jeremy E.; Bednar, Amy E.; Goodin, Christopher T.; Durst, Phillip J.; Anderson, Derek T.; Bethel, Cindy L.

    2017-05-01

    Particle swarm optimization (PSO) and genetic algorithms (GAs) are two optimization techniques from the field of computational intelligence (CI) for search problems where a direct solution cannot easily be obtained. One such problem is finding an optimal set of parameters for the maximally stable extremal region (MSER) algorithm to detect areas of interest in imagery. Specifically, this paper describes the design of a GA and PSO for optimizing MSER parameters to detect stop signs in imagery produced via simulation for use in an autonomous vehicle navigation system. Several additions to the GA and PSO are required to successfully detect stop signs in simulated images. These additions are a primary focus of this paper and include: the identification of an appropriate fitness function, the creation of a variable mutation operator for the GA, an anytime algorithm modification to allow the GA to compute a solution quickly, the addition of an exponential velocity decay function to the PSO, the addition of an "execution best" omnipresent particle to the PSO, and the addition of an attractive force component to the PSO velocity update equation. Experimentation was performed with the GA using various combinations of selection, crossover, and mutation operators, and with the PSO using various combinations of neighborhood topologies, swarm sizes, cognitive influence scalars, and social influence scalars. The results of both the GA and PSO optimized parameter sets are presented. This paper details the benefits and drawbacks of each algorithm in terms of detection accuracy, execution speed, and additions required to generate successful problem specific parameter sets.
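
The GA loop described above (selection, crossover, and a mutation operator whose strength varies over time) can be shown on a toy problem. This is a generic sketch under invented assumptions: the two "MSER-like" parameters, their ranges, and the fitness peak below are stand-ins, not the paper's detection-accuracy fitness evaluated on simulated imagery.

```python
# Sketch: a minimal elitist GA with truncation selection, uniform crossover,
# and a mutation whose perturbation size decays over generations.

import random

random.seed(42)

TARGET = (12.0, 300.0)  # pretend optimum for two parameters (delta, min_area)

def fitness(ind):
    d, a = ind
    # Invented smooth fitness peaking at TARGET (higher is better).
    return -((d - TARGET[0]) ** 2 + ((a - TARGET[1]) / 25.0) ** 2)

def mutate(ind, rate):
    # "Variable mutation": perturbation size shrinks as the rate decays.
    return [g + random.gauss(0.0, rate * scale)
            for g, scale in zip(ind, (5.0, 100.0))]

def crossover(p1, p2):
    # Uniform crossover: each gene taken from either parent.
    return [random.choice(pair) for pair in zip(p1, p2)]

pop = [[random.uniform(0, 30), random.uniform(0, 1000)] for _ in range(30)]
rate = 1.0
for generation in range(60):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                           # truncation selection
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)),
                          rate) for _ in range(20)]
    rate *= 0.95                               # anneal the mutation size

best = max(pop, key=fitness)
print(best)
```

In the paper's setting, evaluating `fitness` means running MSER with the candidate parameters over the simulated images and scoring stop-sign detections, which is why reducing the number of fitness evaluations matters.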

  6. Achieving a hybrid brain-computer interface with tactile selective attention and motor imagery.

    PubMed

    Ahn, Sangtae; Ahn, Minkyu; Cho, Hohyun; Chan Jun, Sung

    2014-12-01

    We propose a new hybrid brain-computer interface (BCI) system that integrates two different EEG tasks: tactile selective attention (TSA) using a vibro-tactile stimulator on the left/right finger and motor imagery (MI) of left/right hand movement. Event-related desynchronization (ERD) from the MI task and steady-state somatosensory evoked potential (SSSEP) from the TSA task are retrieved and combined into two hybrid senses. One hybrid approach is to measure two tasks simultaneously; the features of each task are combined for testing. Another hybrid approach is to measure two tasks consecutively (TSA first and MI next) using only MI features. For comparison with the hybrid approaches, the TSA and MI tasks are measured independently. Using a total of 16 subject datasets, we analyzed the BCI classification performance for MI, TSA and two hybrid approaches in a comparative manner; we found that the consecutive hybrid approach outperformed the others, yielding about a 10% improvement in classification accuracy relative to MI alone. It is understood that TSA may play a crucial role as a prestimulus in that it helps to generate earlier ERD prior to MI and thus sustains ERD longer and to a stronger degree; this ERD may give more discriminative information than ERD in MI alone. Overall, our proposed consecutive hybrid approach is very promising for the development of advanced BCI systems.

  7. Achieving a hybrid brain-computer interface with tactile selective attention and motor imagery

    NASA Astrophysics Data System (ADS)

    Ahn, Sangtae; Ahn, Minkyu; Cho, Hohyun; Jun, Sung Chan

    2014-12-01

    Objective. We propose a new hybrid brain-computer interface (BCI) system that integrates two different EEG tasks: tactile selective attention (TSA) using a vibro-tactile stimulator on the left/right finger and motor imagery (MI) of left/right hand movement. Event-related desynchronization (ERD) from the MI task and steady-state somatosensory evoked potential (SSSEP) from the TSA task are retrieved and combined into two hybrid senses. Approach. One hybrid approach is to measure two tasks simultaneously; the features of each task are combined for testing. Another hybrid approach is to measure two tasks consecutively (TSA first and MI next) using only MI features. For comparison with the hybrid approaches, the TSA and MI tasks are measured independently. Main results. Using a total of 16 subject datasets, we analyzed the BCI classification performance for MI, TSA and two hybrid approaches in a comparative manner; we found that the consecutive hybrid approach outperformed the others, yielding about a 10% improvement in classification accuracy relative to MI alone. It is understood that TSA may play a crucial role as a prestimulus in that it helps to generate earlier ERD prior to MI and thus sustains ERD longer and to a stronger degree; this ERD may give more discriminative information than ERD in MI alone. Significance. Overall, our proposed consecutive hybrid approach is very promising for the development of advanced BCI systems.

  8. Diminished kinesthetic and visual motor imagery ability in adults with chronic low back pain.

    PubMed

    La Touche, Roy; Grande-Alonso, Mónica; Cuenca-Martínez, Ferran; Gónzalez-Ferrero, Luis; Suso-Martí, Luis; Paris-Alemany, Alba

    2018-06-14

    Low back pain (LBP) is the most prevalent musculoskeletal problem among adults. It has been observed that patients with chronic pain show maladaptive neuroplastic changes and difficulty with imagination processes. The objective was to assess the ability of patients with chronic LBP (CLBP) to generate kinesthetic and visual motor images, and the time they spent on this mental task, compared with asymptomatic participants. This was a prospective, cross-sectional study conducted at a primary health care center in Madrid, Spain. A total of 200 participants were classified into two groups: asymptomatic participants (n = 100) and patients with CLBP (n = 100). After consenting to participate, all recruited participants received a sociodemographic questionnaire and a set of self-report measures, and completed the Revised Movement Imagery Questionnaire (MIQ-R). Outcome measures were visual and kinesthetic motor imagery ability (MIQ-R), mental chronometry using a stopwatch, and psychosocial variables assessed with self-reported questionnaires. Our results indicated that patients with CLBP had difficulty generating kinesthetic and visual motor images and also took longer to imagine them. A regression analysis indicated that in the CLBP group, the predictor variable for fear of activity and coping symptom self-efficacy was visual motor imagery (explaining 16.2% of the variance), whereas the predictor variable for LBP disability and pain management self-efficacy was kinesthetic motor imagery (explaining 17.8% of the variance). It appears that patients with CLBP have greater difficulty generating visual and kinesthetic motor images compared with asymptomatic participants, and they also need more time to perform these mental tasks. Level of evidence: II. Copyright © 2018 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.

  9. The association between brain activity and motor imagery during motor illusion induction by vibratory stimulation.

    PubMed

    Kodama, Takayuki; Nakano, Hideki; Katayama, Osamu; Murata, Shin

    2017-01-01

    The association between motor imagery ability and brain neural activity that leads to the manifestation of a motor illusion remains unclear. In this study, we examined the association between the ability to generate motor imagery and brain neural activity leading to the induction of a motor illusion by vibratory stimulation. The sample consisted of 20 healthy individuals who did not have movement or sensory disorders. We measured the time between the starting and ending points of a motor illusion (the time to illusion induction, TII) and performed electroencephalography (EEG). We conducted a temporo-spatial analysis on brain activity leading to the induction of motor illusions using the EEG microstate segmentation method. Additionally, we assessed the ability to generate motor imagery using the Japanese version of the Movement Imagery Questionnaire-Revised (JMIQ-R) prior to performing the task and examined the associations among brain neural activity levels as identified by the microstate segmentation method, TII, and the JMIQ-R scores. The results showed four typical microstates during TII and significantly higher neural activity in the ventrolateral prefrontal cortex, primary sensorimotor area, supplementary motor area (SMA), and inferior parietal lobule (IPL). Moreover, there were significant negative correlations between the neural activity of the primary motor cortex (M1), SMA, IPL, and TII, and a significant positive correlation between the neural activity of the SMA and the JMIQ-R scores. These findings suggest the possibility that a neural network primarily comprised of the neural activity of SMA and M1, which are involved in generating motor imagery, may be the neural basis for inducing motor illusions. This may aid in creating a new approach to neurorehabilitation that enables a more robust reorganization of the neural base for patients with brain dysfunction with a motor function disorder.

  10. Providing Access and Visualization to Global Cloud Properties from GEO Satellites

    NASA Astrophysics Data System (ADS)

    Chee, T.; Nguyen, L.; Minnis, P.; Spangenberg, D.; Palikonda, R.; Ayers, J. K.

    2015-12-01

    Providing public access to cloud macro and microphysical properties is a key concern for the NASA Langley Research Center Cloud and Radiation Group. This work describes a tool and method that allows end users to easily browse and access cloud information that is otherwise difficult to acquire and manipulate. The core of the tool is an application-programming interface that is made available to the public. One goal of the tool is to provide a demonstration to end users so that they can use the dynamically generated imagery as an input into their own work flows for both image generation and cloud product requisition. This project builds upon NASA Langley Cloud and Radiation Group's experience with making real-time and historical satellite cloud product imagery accessible and easily searchable. As we see the increasing use of virtual supply chains that provide additional value at each link there is value in making satellite derived cloud product information available through a simple access method as well as allowing users to browse and view that imagery as they need rather than in a manner most convenient for the data provider. Using the Open Geospatial Consortium's Web Processing Service as our access method, we describe a system that uses a hybrid local and cloud based parallel processing system that can return both satellite imagery and cloud product imagery as well as the binary data used to generate them in multiple formats. The images and cloud products are sourced from multiple satellites and also "merged" datasets created by temporally and spatially matching satellite sensors. Finally, the tool and API allow users to access information that spans the time ranges that our group has information available. In the case of satellite imagery, the temporal range can span the entire lifetime of the sensor.
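As a sketch of the access method, an OGC WPS 1.0.0 Execute request can be encoded as key-value pairs in a URL. The endpoint, process identifier, and input names below are hypothetical placeholders; the real service advertises its own via a GetCapabilities request:

```python
from urllib.parse import urlencode

# Hypothetical endpoint and process name; the actual Langley service
# publishes its own identifiers and inputs via GetCapabilities.
WPS_ENDPOINT = "https://example.gov/wps"

def build_execute_url(process, inputs):
    """Build a KVP-encoded OGC WPS 1.0.0 Execute request URL.
    WPS KVP encodes inputs as semicolon-separated key=value pairs
    in the DataInputs parameter."""
    params = {
        "service": "WPS",
        "version": "1.0.0",
        "request": "Execute",
        "identifier": process,
        "datainputs": ";".join(f"{k}={v}" for k, v in inputs.items()),
    }
    return WPS_ENDPOINT + "?" + urlencode(params)

url = build_execute_url("CloudProductImagery",
                        {"satellite": "GOES-13", "time": "2015-07-01T12:00Z"})
```

The returned URL can then be fetched with any HTTP client; the response body would carry the requested imagery or binary cloud product.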

  11. Visual object imagery and autobiographical memory: Object Imagers are better at remembering their personal past.

    PubMed

    Vannucci, Manila; Pelagatti, Claudia; Chiorri, Carlo; Mazzoni, Giuliana

    2016-01-01

    In the present study we examined whether higher levels of object imagery, a stable characteristic that reflects the ability and preference in generating pictorial mental images of objects, facilitate involuntary and voluntary retrieval of autobiographical memories (ABMs). Individuals with high (High-OI) and low (Low-OI) levels of object imagery were asked to perform an involuntary and a voluntary ABM task in the laboratory. Results showed that High-OI participants generated more involuntary and voluntary ABMs than Low-OI, with faster retrieval times. High-OI also reported more detailed memories compared to Low-OI and retrieved memories as visual images. Theoretical implications of these findings for research on voluntary and involuntary ABMs are discussed.

  12. SkySat-1: very high-resolution imagery from a small satellite

    NASA Astrophysics Data System (ADS)

    Murthy, Kiran; Shearn, Michael; Smiley, Byron D.; Chau, Alexandra H.; Levine, Josh; Robinson, M. Dirk

    2014-10-01

This paper presents details of the SkySat-1 mission, which is the first microsatellite-class commercial earth-observation system to generate sub-meter resolution panchromatic imagery, in addition to sub-meter resolution 4-band pan-sharpened imagery. SkySat-1 was built and launched for an order of magnitude lower cost than similarly performing missions. The low-cost design enables the deployment of a large imaging constellation that can provide imagery with both high temporal resolution and high spatial resolution. One key enabler of the SkySat-1 mission was simplifying the spacecraft design and instead relying on ground-based image processing to achieve high performance at the system level. The imaging instrument consists of a custom-designed high-quality optical telescope and commercially-available high frame rate CMOS image sensors. While each individually captured raw image frame shows moderate quality, ground-based image processing algorithms improve the raw data by combining data from multiple frames to boost image signal-to-noise ratio (SNR) and decrease the ground sample distance (GSD) in a process Skybox calls "digital TDI". Careful quality assessment and tuning of the spacecraft, payload, and algorithms was necessary to generate high-quality panchromatic, multispectral, and pan-sharpened imagery. Furthermore, the framing sensor configuration enabled the first commercial High-Definition full-frame rate panchromatic video to be captured from space, with approximately 1 meter ground sample distance. Details of the SkySat-1 imaging instrument and ground-based image processing system are presented, as well as an overview of the work involved with calibrating and validating the system. Examples of raw and processed imagery are shown, and the raw imagery is compared to pre-launch simulated imagery used to tune the image processing algorithms.
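The "digital TDI" idea, registering a burst of raw frames and combining them to raise SNR, can be sketched as a shift-and-add average. The sketch below assumes known integer-pixel inter-frame shifts for simplicity; the actual Skybox pipeline, which also decreases GSD, is more sophisticated:

```python
import numpy as np

def digital_tdi(frames, shifts):
    """Average a burst of noisy frames after undoing the known
    inter-frame motion (integer-pixel (dy, dx) shifts here).
    Averaging N frames reduces additive noise by ~sqrt(N)."""
    stack = [np.roll(f, (-dy, -dx), axis=(0, 1))
             for f, (dy, dx) in zip(frames, shifts)]
    return np.mean(stack, axis=0)
```

Sub-pixel registration plus interpolation onto a finer grid would be needed to reduce the ground sample distance as well.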

  13. Computational aspects of geometric correction data generation in the LANDSAT-D imagery processing

    NASA Technical Reports Server (NTRS)

    Levine, I.

    1981-01-01

A method is presented for systematic and geodetic correction data calculation. It is based on representation of image distortions as a sum of nominal distortions and linear effects caused by variations of the spacecraft position and attitude variables from their nominals. The method may be used for both MSS and TM image data, and it is incorporated into the processing by means of mostly offline calculations. Modeling shows that the maximal errors of the method are of the order of 5 m at the worst point in a frame; the standard deviations of the average errors are less than 0.8 m.

  14. Enhanced tactical radar correlator (ETRAC): true interoperability of the 1990s

    NASA Astrophysics Data System (ADS)

    Guillen, Frank J.

    1994-10-01

The enhanced tactical radar correlator (ETRAC) system is under development at Westinghouse Electric Corporation for the Army Space Program Office (ASPO). ETRAC is a real-time synthetic aperture radar (SAR) processing system that provides tactical IMINT to the corps commander. It features an open architecture comprising ruggedized commercial-off-the-shelf (COTS), UNIX-based workstations and processors. The architecture features the DoD common SAR processor (CSP), a multisensor computing platform to accommodate a variety of current and future imaging needs. ETRAC's principal functions include: (1) Mission planning and control -- ETRAC provides mission planning and control for the U-2R and ASARS-2 sensor, including capability for auto replanning, retasking, and immediate spot. (2) Image formation -- the image formation processor (IFP) provides the CPU-intensive processing capability to produce real-time imagery for all ASARS imaging modes of operation. (3) Image exploitation -- two exploitation workstations are provided for first-phase image exploitation, manipulation, and annotation. Products include INTEL reports, annotated NITF SID imagery, high resolution hard copy prints, and targeting data. ETRAC is transportable via two C-130 aircraft, with autonomous drive on/off capability for high mobility. Other autonomous capabilities include rapid setup/teardown, extended stand-alone support, internal environmental control units (ECUs), and power generation. ETRAC's mission is to provide the Army field commander with accurate, reliable, and timely imagery intelligence derived from collections made by the ASARS-2 sensor, located on board the U-2R aircraft. To accomplish this mission, ETRAC receives video phase history (VPH) directly from the U-2R aircraft and converts it in real time into soft copy imagery for immediate exploitation and dissemination to the tactical users.

  15. Textural features for image classification

    NASA Technical Reports Server (NTRS)

    Haralick, R. M.; Dinstein, I.; Shanmugam, K.

    1973-01-01

Description of some easily computable textural features based on gray-tone spatial dependence, and illustration of their application in category-identification tasks of three different kinds of image data - namely, photomicrographs of five kinds of sandstones, 1:20,000 panchromatic aerial photographs of eight land-use categories, and ERTS multispectral imagery containing several land-use categories. Two kinds of decision rules are used - one for which the decision regions are convex polyhedra (a piecewise-linear decision rule), and one for which the decision regions are rectangular parallelepipeds (a min-max decision rule). In each experiment the data set was divided into two parts, a training set and a test set. Test set identification accuracy is 89% for the photomicrographs, 82% for the aerial photographic imagery, and 83% for the satellite imagery. These results indicate that the easily computable textural features probably have a general applicability for a wide variety of image-classification applications.
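These textural features are computed from a gray-level co-occurrence matrix (GLCM), which tabulates how often pairs of gray tones co-occur at a given displacement. A minimal NumPy sketch of the matrix and two of the classic features (the naming follows common usage rather than the paper's exact notation):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one displacement, made
    symmetric and normalised to a joint probability table.
    img must hold integer gray levels in [0, levels)."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1
    P += P.T                      # count both directions of each pair
    return P / P.sum()

def contrast(P):
    """Sum of squared gray-level differences weighted by co-occurrence."""
    i, j = np.indices(P.shape)
    return ((i - j) ** 2 * P).sum()

def energy(P):
    """Sum of squared probabilities ("angular second moment")."""
    return (P ** 2).sum()
```

A full feature set would evaluate several displacements (angles and distances) and more statistics (correlation, entropy, homogeneity), as the paper describes.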

  16. Performance variation in motor imagery brain-computer interface: a brief review.

    PubMed

    Ahn, Minkyu; Jun, Sung Chan

    2015-03-30

Brain-computer interface (BCI) technology has attracted significant attention over recent decades, and has made remarkable progress. However, BCI still faces a critical hurdle, in that performance varies greatly across and even within subjects, an obstacle that degrades the reliability of BCI systems. Understanding the causes of these problems is important if we are to create more stable systems. In this short review, we report the most recent studies and findings on performance variation, especially in motor imagery-based BCI; these studies have found that low-performance groups have a less-developed brain network that is incapable of motor imagery. Further, psychological and physiological states influence performance variation within subjects. We propose a possible strategic approach to deal with this variation, which may contribute to improving the reliability of BCI. In addition, the limitations of current work and opportunities for future studies are discussed. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. The use of LANDSAT-1 imagery in mapping and managing soil and range resources in the Sand Hills region of Nebraska

    NASA Technical Reports Server (NTRS)

    Seevers, P. M. (Principal Investigator); Drew, J. V.

    1976-01-01

The author has identified the following significant results. Evaluation of ERTS-1 imagery for the Sand Hills region of Nebraska has shown that the data can be used to effectively measure several parameters of inventory needs. (1) Vegetative biomass can be estimated with a high degree of confidence using computer compatible tape data. (2) Soils can be mapped to the subgroup level with high altitude aircraft color infrared photography and to the association level with multitemporal ERTS-1 imagery. (3) Water quality in Sand Hills lakes can be estimated utilizing computer compatible tape data. (4) Center pivot irrigation can be inventoried from satellite data and can be monitored regarding site selection and relative success of establishment from high altitude aircraft color infrared photography. (5) ERTS-1 data is of exceptional value in wide-area inventory of natural resource data in the Sand Hills region of Nebraska.

  18. Remote sensing of turbidity plumes in Lake Ontario

    NASA Technical Reports Server (NTRS)

    Pluhowski, E. J.

    1973-01-01

Preliminary analyses of ERTS-1 imagery demonstrate the utility of the satellite to monitor turbidity plumes generated by the Welland Canal and the Genesee and Oswego Rivers. Although visible in high altitude photographs, the Niagara River plume is not readily identifiable from satellite imagery.

  19. Scene-based nonuniformity correction with video sequences and registration.

    PubMed

    Hardie, R C; Hayat, M M; Armstrong, E; Yasuda, B

    2000-03-10

    We describe a new, to our knowledge, scene-based nonuniformity correction algorithm for array detectors. The algorithm relies on the ability to register a sequence of observed frames in the presence of the fixed-pattern noise caused by pixel-to-pixel nonuniformity. In low-to-moderate levels of nonuniformity, sufficiently accurate registration may be possible with standard scene-based registration techniques. If the registration is accurate, and motion exists between the frames, then groups of independent detectors can be identified that observe the same irradiance (or true scene value). These detector outputs are averaged to generate estimates of the true scene values. With these scene estimates, and the corresponding observed values through a given detector, a curve-fitting procedure is used to estimate the individual detector response parameters. These can then be used to correct for detector nonuniformity. The strength of the algorithm lies in its simplicity and low computational complexity. Experimental results, to illustrate the performance of the algorithm, include the use of visible-range imagery with simulated nonuniformity and infrared imagery with real nonuniformity.
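For a linear detector model, the curve-fitting step of this algorithm reduces to a per-detector least-squares fit of gain and offset against the scene estimates obtained by averaging registered detectors. A small sketch under that linear-model assumption (function names are illustrative):

```python
import numpy as np

def fit_detector_response(scene_estimates, observed):
    """Least-squares fit of the linear detector model
    observed = gain * irradiance + offset, i.e. the curve-fitting
    step of the scene-based nonuniformity correction."""
    gain, offset = np.polyfit(scene_estimates, observed, 1)
    return gain, offset

def correct(observed, gain, offset):
    """Invert the fitted response to remove the fixed-pattern
    nonuniformity for one detector."""
    return (observed - offset) / gain
```

In the full algorithm this fit is repeated independently for every pixel of the array, using the scene values that motion carried across that pixel.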

  20. SP mountain data analysis

    NASA Technical Reports Server (NTRS)

    Rawson, R. F.; Hamilton, R. E.; Liskow, C. L.; Dias, A. R.; Jackson, P. L.

    1981-01-01

    An analysis of synthetic aperture radar data of SP Mountain was undertaken to demonstrate the use of digital image processing techniques to aid in geologic interpretation of SAR data. These data were collected with the ERIM X- and L-band airborne SAR using like- and cross-polarizations. The resulting signal films were used to produce computer compatible tapes, from which four-channel imagery was generated. Slant range-to-ground range and range-azimuth-scale corrections were made in order to facilitate image registration; intensity corrections were also made. Manual interpretation of the imagery showed that L-band represented the geology of the area better than X-band. Several differences between the various images were also noted. Further digital analysis of the corrected data was done for enhancement purposes. This analysis included application of an MSS differencing routine and development of a routine for removal of relief displacement. It was found that accurate registration of the SAR channels is critical to the effectiveness of the differencing routine. Use of the relief displacement algorithm on the SP Mountain data demonstrated the feasibility of the technique.

  1. Evaluation of SLAR and thematic mapper MSS data for forest cover mapping using computer-aided analysis techniques

    NASA Technical Reports Server (NTRS)

    Hoffer, R. M. (Principal Investigator)

    1981-01-01

Training and test data sets for CAM1S from NS-001 MSS data for two dates (geometrically adjusted to 30 meter resolution) were used to evaluate wavelength bands. Two sets of tapes containing digitized HH and HV polarization data were obtained. Because the SAR data on the 9-track tapes contained no meaningful data, the 7-track tapes were copied onto 9-track tapes at LARS. The LARSYS programs were modified and a program was written to reformat the digitized SAR data into a LARSYS format. The radar imagery is being qualitatively interpreted. Results are to be used to identify possible cover types, to produce a classification map to aid in the numerical evaluation and classification of radar data, and to develop an interpretation key for radar imagery. The four spatial resolution data sets were analyzed. A program was developed to reduce the spatial distortions resulting from variable viewing distance, and geometrically adjusted data sets were generated. A flowchart of the steps taken to geometrically adjust a data set from the NS-001 scanner is presented.

  2. Man-Made Object Extraction from Remote Sensing Imagery by Graph-Based Manifold Ranking

    NASA Astrophysics Data System (ADS)

    He, Y.; Wang, X.; Hu, X. Y.; Liu, S. H.

    2018-04-01

    The automatic extraction of man-made objects from remote sensing imagery is useful in many applications. This paper proposes an algorithm for extracting man-made objects automatically by integrating a graph model with the manifold ranking algorithm. Initially, we estimate a priori value of the man-made objects with the use of symmetric and contrast features. The graph model is established to represent the spatial relationships among pre-segmented superpixels, which are used as the graph nodes. Multiple characteristics, namely colour, texture and main direction, are used to compute the weights of the adjacent nodes. Manifold ranking effectively explores the relationships among all the nodes in the feature space as well as initial query assignment; thus, it is applied to generate a ranking map, which indicates the scores of the man-made objects. The man-made objects are then segmented on the basis of the ranking map. Two typical segmentation algorithms are compared with the proposed algorithm. Experimental results show that the proposed algorithm can extract man-made objects with high recognition rate and low omission rate.
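Manifold ranking of this kind is commonly computed in closed form as f = (I - alpha * S)^(-1) y over the symmetrically normalised graph affinity (Zhou et al. style). A NumPy sketch of that ranking step; the construction of the superpixel graph and its colour/texture/direction edge weights is omitted:

```python
import numpy as np

def manifold_ranking(W, y, alpha=0.5):
    """Closed-form manifold ranking: scores f = (I - alpha*S)^-1 y,
    where S = D^-1/2 W D^-1/2 is the symmetrically normalised
    affinity. W is the (nonnegative) node affinity matrix of the
    superpixel graph and y the initial query indicator vector."""
    d = W.sum(axis=1)
    Dinv = np.diag(1.0 / np.sqrt(d))
    S = Dinv @ W @ Dinv
    n = W.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * S, y)
```

Nodes strongly connected (on the graph) to the query receive high scores; thresholding the resulting ranking map yields the segmentation of the man-made objects.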

  3. Analyzing power spectral of electroencephalogram (EEG) signal to identify motoric arm movement using EMOTIV EPOC+

    NASA Astrophysics Data System (ADS)

    Bustomi, A.; Wijaya, S. K.; Prawito

    2017-07-01

Rehabilitation of motoric dysfunction of the body is a main objective in developing Brain Computer Interface (BCI) techniques, especially in the field of medical rehabilitation technology. BCI technology, based on the electrical activity of the brain, allows patients to restore motoric function of the body and helps them overcome mobility limitations. In this study, EEG signals were obtained from an EMOTIV EPOC+ while subjects imagined lifting an arm, and we looked for correlations between the imagined motoric muscle movement and the recorded signals. Signal processing was done in the time-frequency domain, using wavelet relative power (WRP) as the feature extraction and a support vector machine (SVM) as the classifier. A maximum accuracy of 81.3% was obtained using 8 channels (AF3, F7, F3, FC5, FC6, F4, F8, and AF4); the remaining 6 channels on the EMOTIV EPOC+ did not contribute to improving the classification accuracy.
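The wavelet relative power feature divides each sub-band's energy by the total signal energy. A pure-NumPy sketch using a Haar decomposition (the abstract does not specify the mother wavelet; Haar is chosen here for simplicity, and the SVM stage is omitted):

```python
import numpy as np

def haar_step(x):
    """One level of the Haar wavelet transform: approximation and
    detail coefficients from adjacent sample pairs."""
    x = x[: len(x) // 2 * 2]
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def wavelet_relative_power(x, levels=4):
    """Energy of each detail band (plus the final approximation)
    divided by total energy: a simple stand-in for the WRP features
    described in the paper."""
    energies = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = haar_step(a)
        energies.append((d ** 2).sum())
    energies.append((a ** 2).sum())
    e = np.array(energies)
    return e / e.sum()
```

Each EEG channel yields one such feature vector per trial; concatenating them across channels gives the input a classifier such as an SVM would be trained on.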

  4. Michigan resource inventories: Characteristics and costs of selected projects using high altitude color infrared imagery. Remote Sensing Project

    NASA Technical Reports Server (NTRS)

    Enslin, W. R.; Hill-Rowley, R.

    1976-01-01

    The procedures and costs associated with mapping land cover/use and forest resources from high altitude color infrared (CIR) imagery are documented through an evaluation of several inventory efforts. CIR photos (1:36,000) were used to classify the forests of Mason County, Michigan into six species groups, three stocking levels, and three maturity classes at a cost of $4.58/sq. km. The forest data allow the pinpointing of marketable concentrations of selected timber types, and facilitate the establishment of new forest management cooperatives. Land cover/use maps and area tabulations were prepared from small scale CIR photography at a cost of $4.28/sq. km. and $3.03/sq. km. to support regional planning programs of two Michigan agencies. procedures were also developed to facilitate analysis of this data with other natural resource information. Eleven thematic maps were generated from Windsor Township, Michigan at a cost of $1,500 by integrating grid-geocoded land cover/use, soils, topographic, and well log data using an analytical computer program.

  5. Development of Open source-based automatic shooting and processing UAV imagery for Orthoimage Using Smart Camera UAV

    NASA Astrophysics Data System (ADS)

    Park, J. W.; Jeong, H. H.; Kim, J. S.; Choi, C. U.

    2016-06-01

Recent aerial photography with unmanned aerial vehicle (UAV) systems typically controls the UAV remotely through a ground control system over a radio frequency (RF) modem link at about 430 MHz. However, this RF modem method has limitations in long-distance communication. We therefore developed a UAV communication module system that uses a smart camera's LTE (long-term evolution), Bluetooth, and Wi-Fi, and carried out close-range aerial photogrammetry with automatic shooting. The automatic shooting system consists of an image-capturing device for the drone and software for operating the smart camera and managing the imagery; shooting is triggered by the smart camera's sensors, and a shooting catalog manages the captured images and their information. The UAV imagery was processed with Open Drone Map. This study examined the feasibility of using the smart camera as the payload for a photogrammetric UAV system. The open source tools used were Android, OpenCV (Open Source Computer Vision), RTKLIB, and Open Drone Map.

  6. Decoding Intention at Sensorimotor Timescales

    PubMed Central

    Salvaris, Mathew; Haggard, Patrick

    2014-01-01

The ability to decode an individual's intentions in real time has long been a ‘holy grail’ of research on human volition. For example, a reliable method could be used to improve scientific study of voluntary action by allowing external probe stimuli to be delivered at different moments during development of intention and action. Several Brain Computer Interface applications have used motor imagery of repetitive actions to achieve this goal. These systems are relatively successful, but only if the intention is sustained over a period of several seconds; much longer than the timescales identified in psychophysiological studies for normal preparation for voluntary action. We have used a combination of sensorimotor rhythms and motor imagery training to decode intentions in a single-trial cued-response paradigm similar to those used in human and non-human primate motor control research. Decoding accuracy of over 0.83 was achieved with twelve participants. With this approach, we could decode intentions to move the left or right hand at sub-second timescales, both for choices instructed by an external stimulus and for free choices generated intentionally by the participant. The implications for volition are considered. PMID:24523855

  7. Application of NASA ERTS-1 satellite imagery in coastal studies

    NASA Technical Reports Server (NTRS)

    Magoon, O. T.; Berg, D. W. (Principal Investigator); Hallermeier, R. J.

    1973-01-01

There are no author-identified significant results in this report. Review of ERTS-1 imagery indicates that it contains information of great value in coastal engineering studies. A brief introduction is given to the methods by which imagery is generated, together with examples of its application to coastal engineering. Specific applications discussed include study of the movement of coastal and nearshore sediment-laden water masses and information for planning and construction in remote areas of the world.

  8. Computer image processing - The Viking experience. [digital enhancement techniques

    NASA Technical Reports Server (NTRS)

    Green, W. B.

    1977-01-01

    Computer processing of digital imagery from the Viking mission to Mars is discussed, with attention given to subjective enhancement and quantitative processing. Contrast stretching and high-pass filtering techniques of subjective enhancement are described; algorithms developed to determine optimal stretch and filtering parameters are also mentioned. In addition, geometric transformations to rectify the distortion of shapes in the field of view and to alter the apparent viewpoint of the image are considered. Perhaps the most difficult problem in quantitative processing of Viking imagery was the production of accurate color representations of Orbiter and Lander camera images.
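The two subjective-enhancement operations named above can be sketched in a few lines of NumPy; the percentile limits and box-filter kernel size below are illustrative choices, not the mission's tuned parameters:

```python
import numpy as np

def contrast_stretch(img, lo_pct=1, hi_pct=99):
    """Linear contrast stretch between two percentiles, clipped to
    an 8-bit display range."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    out = (img - lo) / (hi - lo) * 255.0
    return np.clip(out, 0, 255)

def high_pass(img, size=3):
    """Subtract a local box-filter mean: a simple high-pass filter
    that emphasises edges and fine detail."""
    pad = size // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    # box filter via sliding windows over the padded image
    win = np.lib.stride_tricks.sliding_window_view(padded, (size, size))
    return img - win.mean(axis=(2, 3))
```

Choosing the stretch limits and filter size from the image histogram, rather than fixing them, corresponds to the optimal-parameter algorithms the abstract mentions.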

  9. Exact Rayleigh scattering calculations for use with the Nimbus-7 Coastal Zone Color Scanner

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.; Brown, James W.; Evans, Robert H.

    1988-01-01

    The radiance reflected from a plane-parallel atmosphere and flat sea surface in the absence of aerosols has been determined with an exact multiple scattering code to improve the analysis of Nimbus-7 CZCS imagery. It is shown that the single scattering approximation normally used to compute this radiance can result in errors of up to 5 percent for small and moderate solar zenith angles. A scheme to include the effect of variations in the surface pressure in the exact computation of the Rayleigh radiance is discussed. The results of an application of these computations to CZCS imagery suggest that accurate atmospheric corrections can be obtained for solar zenith angles at least as large as 65 deg.

  10. Computer image processing in marine resource exploration

    NASA Technical Reports Server (NTRS)

    Paluzzi, P. R.; Normark, W. R.; Hess, G. R.; Hess, H. D.; Cruickshank, M. J.

    1976-01-01

    Pictographic data or imagery is commonly used in marine exploration. Pre-existing image processing techniques (software) similar to those used on imagery obtained from unmanned planetary exploration were used to improve marine photography and side-scan sonar imagery. Features and details not visible by conventional photo processing methods were enhanced by filtering and noise removal on selected deep-sea photographs. Information gained near the periphery of photographs allows improved interpretation and facilitates construction of bottom mosaics where overlapping frames are available. Similar processing techniques were applied to side-scan sonar imagery, including corrections for slant range distortion, and along-track scale changes. The use of digital data processing and storage techniques greatly extends the quantity of information that can be handled, stored, and processed.
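The slant-range correction mentioned for side-scan sonar follows from flat-bottom geometry: the horizontal ground range is one leg of a right triangle whose hypotenuse is the slant range and whose other leg is the towfish altitude. A small sketch of that conversion:

```python
import numpy as np

def slant_to_ground(slant_range, altitude):
    """Convert side-scan sonar slant range to horizontal ground
    range, assuming a flat bottom: ground = sqrt(slant^2 - alt^2).
    Returns 0 inside the water-column region (slant < altitude)."""
    sr = np.asarray(slant_range, dtype=float)
    return np.sqrt(np.maximum(sr ** 2 - altitude ** 2, 0.0))
```

Resampling each scan line onto a uniform ground-range grid with this mapping removes the near-range compression characteristic of raw sonar records; along-track scale correction is a separate resampling using the vehicle's speed.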

  11. Investigating the effects of visual distractors on the performance of a motor imagery brain-computer interface.

    PubMed

    Emami, Zahra; Chau, Tom

    2018-06-01

    Brain-computer interfaces (BCIs) allow users to operate a device or application by means of cognitive activity. This technology will ultimately be used in real-world environments which include the presence of distractors. The purpose of the study was to determine the effect of visual distractors on BCI performance. Sixteen able-bodied participants underwent neurofeedback training to achieve motor imagery-guided BCI control in an online paradigm using electroencephalography (EEG) to measure neural signals. Participants then completed two sessions of the motor imagery EEG-BCI protocol in the presence of infrequent, small visual distractors. BCI performance was determined based on classification accuracy. The presence of distractors was found to affect motor imagery-specific patterns in mu and beta power. However, the distractors did not significantly affect the BCI classification accuracy; across participants, the mean classification accuracy was 81.5 ± 14% for non-distractor trials, and 78.3 ± 17% for distractor trials. This minimal consequence suggests that the BCI was robust to distractor effects, despite motor imagery-related brain activity being attenuated amid distractors. A BCI system that mitigates distraction-related effects may improve the ease of its use and ultimately facilitate the effective translation of the technology from the lab to the home. Copyright © 2018 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.

  12. Evaluation of terrestrial photogrammetric point clouds derived from thermal imagery

    NASA Astrophysics Data System (ADS)

    Metcalf, Jeremy P.; Olsen, Richard C.

    2016-05-01

    Computer vision and photogrammetric techniques have been widely applied to digital imagery producing high density 3D point clouds. Using thermal imagery as input, the same techniques can be applied to infrared data to produce point clouds in 3D space, providing surface temperature information. The work presented here is an evaluation of the accuracy of 3D reconstruction of point clouds produced using thermal imagery. An urban scene was imaged over an area at the Naval Postgraduate School, Monterey, CA, viewing from above as with an airborne system. Terrestrial thermal and RGB imagery were collected from a rooftop overlooking the site using a FLIR SC8200 MWIR camera and a Canon T1i DSLR. In order to spatially align each dataset, ground control points were placed throughout the study area using Trimble R10 GNSS receivers operating in RTK mode. Each image dataset is processed to produce a dense point cloud for 3D evaluation.

  13. The IRGen infrared data base modeler

    NASA Technical Reports Server (NTRS)

    Bernstein, Uri

    1993-01-01

IRGen is a modeling system which creates three-dimensional IR data bases for real-time simulation of thermal IR sensors. Starting from a visual data base, IRGen computes the temperature and radiance of every data base surface under a user-specified thermal environment. The predicted gray shade of each surface is then computed from the user-specified sensor characteristics. IRGen is based on first-principles models of heat transport and heat flux sources, and it accurately simulates the variations of IR imagery with time of day and with changing environmental conditions. The starting point for creating an IRGen data base is a visual faceted data base, in which every facet has been labeled with a material code. This code is an index into a material data base which contains surface and bulk thermal properties for the material. IRGen uses the material properties to compute the surface temperature at the specified time of day. IRGen also supports image generator features such as texturing and smooth shading, which greatly enhance image realism.

  14. Improved Volitional Recall of Motor-Imagery-Related Brain Activation Patterns Using Real-Time Functional MRI-Based Neurofeedback.

    PubMed

    Bagarinao, Epifanio; Yoshida, Akihiro; Ueno, Mika; Terabe, Kazunori; Kato, Shohei; Isoda, Haruo; Nakai, Toshiharu

    2018-01-01

    Motor imagery (MI), a covert cognitive process where an action is mentally simulated but not actually performed, could be used as an effective neurorehabilitation tool for motor function improvement or recovery. Recent approaches employing brain-computer/brain-machine interfaces to provide online feedback of the MI during rehabilitation training have promising rehabilitation outcomes. In this study, we examined whether participants could volitionally recall MI-related brain activation patterns when guided using neurofeedback (NF) during training. The participants' performance was compared to that without NF. We hypothesized that participants would be able to consistently generate the relevant activation pattern associated with the MI task during training with NF compared to that without NF. To assess activation consistency, we used the performance of classifiers trained to discriminate MI-related brain activation patterns. Our results showed significantly higher predictive values of MI-related activation patterns during training with NF. Additionally, this improvement in the classification performance tends to be associated with the activation of middle temporal gyrus/inferior occipital gyrus, a region associated with visual motion processing, suggesting the importance of performance monitoring during MI task training. Taken together, these findings suggest that the efficacy of MI training, in terms of generating consistent brain activation patterns relevant to the task, can be enhanced by using NF as a mechanism to enable participants to volitionally recall task-related brain activation patterns.

  15. Exact Rayleigh scattering calculations for use with the Nimbus-7 Coastal Zone Color Scanner.

    PubMed

    Gordon, H R; Brown, J W; Evans, R H

    1988-03-01

    For improved analysis of Coastal Zone Color Scanner (CZCS) imagery, the radiance reflected from a plane-parallel atmosphere and flat sea surface in the absence of aerosols (Rayleigh radiance) has been computed with an exact multiple scattering code, i.e., including polarization. The results indicate that the single scattering approximation normally used to compute this radiance can cause errors of up to 5% for small and moderate solar zenith angles. At large solar zenith angles, such as encountered in the analysis of high-latitude imagery, the errors can become much larger, e.g., >10% in the blue band. The single scattering error also varies along individual scan lines. Comparison with multiple scattering computations using scalar transfer theory, i.e., ignoring polarization, shows that scalar theory can yield errors of approximately the same magnitude as single scattering when compared with exact computations at small to moderate values of the solar zenith angle. The exact computations can be easily incorporated into CZCS processing algorithms, and, for application to future instruments with higher radiometric sensitivity, a scheme is developed with which the effect of variations in the surface pressure can be easily and accurately included in the exact computation of the Rayleigh radiance. Direct application of these computations to CZCS imagery indicates that accurate atmospheric corrections can be made with solar zenith angles at least as large as 65 degrees and probably up to at least 70 degrees with a more sensitive instrument. This suggests that the new Rayleigh radiance algorithm should produce more consistent pigment retrievals, particularly at high latitudes.
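
    The single scattering approximation discussed above can be sketched for a single viewing geometry. This is a hedged illustration, not the paper's code: it uses the standard unpolarized Rayleigh phase function and ignores polarization and sea-surface reflection terms.

```python
import math

# Hedged sketch of the single-scattering Rayleigh path reflectance,
# ignoring polarization and surface-reflection terms.
def rayleigh_phase(theta_scat):
    """Unpolarized Rayleigh scattering phase function."""
    return 0.75 * (1.0 + math.cos(theta_scat) ** 2)

def single_scatter_reflectance(tau_r, sun_zen, view_zen, scat_ang):
    """Single-scattering reflectance: rho = tau * P(theta) / (4 mu0 mu)."""
    mu0 = math.cos(sun_zen)  # cosine of solar zenith angle
    mu = math.cos(view_zen)  # cosine of viewing zenith angle
    return tau_r * rayleigh_phase(scat_ang) / (4.0 * mu0 * mu)

# Nadir view, overhead sun, backscatter geometry, Rayleigh optical depth 0.1.
rho = single_scatter_reflectance(0.1, 0.0, 0.0, math.pi)
```

The 1/(mu0 * mu) factor is what drives the growing error at large solar zenith angles noted in the abstract: the approximation degrades precisely where the air mass along the path becomes large.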

  16. Self-generated visual imagery alters the mere exposure effect.

    PubMed

    Craver-Lemley, Catherine; Bornstein, Robert F

    2006-12-01

    To determine whether self-generated visual imagery alters liking ratings of merely exposed stimuli, 79 college students were repeatedly exposed to the ambiguous duck-rabbit figure. Half the participants were told to picture the image as a duck and half to picture it as a rabbit. When participants made liking ratings of both disambiguated versions of the figure, they rated the version consistent with earlier encoding more positively than the alternate version. Implications of these findings for theoretical models of the exposure effect are discussed.

  17. Three-Dimensional Displays In The Future Flight Station

    NASA Astrophysics Data System (ADS)

    Bridges, Alan L.

    1984-10-01

    This review paper summarizes the development and applications of computer techniques for the representation of three-dimensional data in the future flight station. It covers the development of the Lockheed-NASA Advanced Concepts Flight Station (ACFS) research simulators. These simulators contain: A Pilot's Desk Flight Station (PDFS) with five 13-inch diagonal, color, cathode ray tubes on the main instrument panel; a computer-generated day and night visual system; a six-degree-of-freedom motion base; and a computer complex. This paper reviews current research, development, and evaluation of easily modifiable display systems and software requirements for three-dimensional displays that may be developed for the PDFS. This includes the analysis and development of a 3-D representation of the entire flight profile. This 3-D flight path, or "Highway-in-the-Sky", will utilize motion and perspective cues to tightly couple the human responses of the pilot to the aircraft control systems. The use of custom logic, e.g., graphics engines, may provide the processing power and architecture required for 3-D computer-generated imagery (CGI) or visual scene simulation (VSS). Diffraction or holographic head-up displays (HUDs) will also be integrated into the ACFS simulator to permit research on the requirements and use of these "out-the-window" projection systems. Future research may include the retrieval of high-resolution, perspective view terrain maps which could then be overlaid with current weather information or other selectable cultural features.

  18. Key issues in making and using satellite-based maps in ecology: a primer.

    Treesearch

    Karin S. Fassnacht; Warren B. Cohen; Thomas A. Spies

    2006-01-01

    The widespread availability of satellite imagery and image processing software has made it relatively easy for ecologists to use satellite imagery to address questions at the landscape and regional scales. However, as often happens with complex tools that are rendered easy to use by computer software, technology may be misused or used without an understanding of some...

  19. Computer Images for Research, Teaching, and Publication in Art History and Related Disciplines.

    ERIC Educational Resources Information Center

    Rhyne, Charles S.

    The future of digital imagery has emerged as one of the central concerns of professionals in many fields, yet only a handful of art historians have taken advantage of the profession's unique expertise in the reading and interpretation of images. Art historians need to participate in scholarship defining the roles and uses of digital imagery,…

  20. Stream network analysis from orbital and suborbital imagery, Colorado River Basin, Texas

    NASA Technical Reports Server (NTRS)

    Baker, V. R. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. Orbital SL-2 imagery (earth terrain camera S-190B), received September 5, 1973, was subjected to quantitative network analysis and compared to 7.5 minute topographic mapping (scale: 1/24,000) and U.S.D.A. conventional black and white aerial photography (scale: 1/22,200). Results can only be considered suggestive because detail on the SL-2 imagery was badly obscured by heavy cloud cover. The upper Bee Creek basin was chosen for analysis because it appeared in a relatively cloud-free portion of the orbital imagery. Drainage maps were drawn from the three sources, digitized into a computer-compatible format, and analyzed by the WATER system computer program. Even at its small scale (1/172,000) and with bad haze, the orbital photo showed much drainage detail. The contour-like character of the Glen Rose Formation's resistant limestone units allowed channel definition. The errors in pattern recognition can be attributed to local areas of dense vegetation and to other areas of very high albedo caused by surficial exposure of caliche. The latter effect caused particular difficulty in the determination of drainage divides.

  1. [The Changes in the Hemodynamic Activity of the Brain during Motor Imagery Training with the Use of a Brain-Computer Interface].

    PubMed

    Frolov, A A; Husek, D; Silchenko, A V; Tintera, Y; Rydlo, J

    2016-01-01

    With the use of functional MRI (fMRI), we studied the changes in brain hemodynamic activity of healthy subjects during motor imagery training with the use of a brain-computer interface (BCI), which is based on the recognition of EEG patterns of imagined movements. ANOVA showed that there are 14 areas of the brain where statistically significant changes were registered. Detailed analysis of the activity in these areas before and after training (Student's and Mann-Whitney tests) reduced the number of areas with significantly changed activity to five; these are Brodmann areas 44 and 45, the insula, the middle frontal gyrus, and the anterior cingulate gyrus. We suggest that these changes are caused by the formation of memory traces of those brain activity patterns which are most accurately recognized by BCI classifiers as corresponding to limb movements. We also observed a tendency toward increased motor imagery activity after training. The hemodynamic activity in all 14 areas during real movements was either approximately the same as or significantly higher than during motor imagery; activity during imagined leg movements was higher than that during imagined arm movements, except in the areas of arm representation.

  2. Point Cloud and Digital Surface Model Generation from High Resolution Multiple View Stereo Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Gong, K.; Fritsch, D.

    2018-05-01

    Nowadays, multiple-view stereo satellite imagery has become a valuable data source for digital surface model generation and 3D reconstruction. In 2016, a well-organized public multiple-view stereo benchmark for commercial satellite imagery was released by the Johns Hopkins University Applied Physics Laboratory, USA. This benchmark motivated us to explore methods that can generate accurate digital surface models from a large number of high-resolution satellite images. In this paper, we propose a pipeline for processing the benchmark data into digital surface models. As a pre-procedure, we filter all possible image pairs according to incidence angle and capture date. With the selected image pairs, the relative bias-compensated model is applied for relative orientation. After generation of the epipolar image pairs, dense image matching, and triangulation, the 3D point clouds and DSMs are acquired. The DSMs are aligned to a quasi-ground plane by the relative bias-compensated model. We apply a median filter to generate the fused point cloud and DSM. By comparison with the reference LiDAR DSM, the accuracy, completeness, and robustness are evaluated. The results show that the point cloud reconstructs surfaces with small structures and that the fused DSM generated by our pipeline is accurate and robust.
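
    The per-pixel median fusion step of the pipeline can be sketched as below; array shapes and values are illustrative toy data, not the authors' implementation.

```python
import numpy as np

# Sketch of median fusion: stack co-registered pairwise DSMs and take the
# per-pixel median along the stack axis, ignoring missing pixels (NaN).
def fuse_dsms(dsm_stack):
    """dsm_stack: (n_pairs, rows, cols) array of aligned DSMs."""
    return np.nanmedian(dsm_stack, axis=0)

# Three toy 2x2 DSMs; the second has one missing pixel.
dsms = np.array([
    [[10.0, 12.0], [11.0, 13.0]],
    [[10.2, 11.8], [np.nan, 13.4]],
    [[ 9.8, 12.2], [11.2, 12.8]],
])
fused = fuse_dsms(dsms)
```

The median is a natural choice here because it is robust to the occasional gross matching error in any single pairwise DSM, which a mean would smear into the fused surface.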

  3. Amazonas project: Application of remote sensing techniques for the integrated survey of natural resources in Amazonas. [Brazil

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator)

    1981-01-01

    The use of LANDSAT multispectral scanner and return beam vidicon imagery for surveying the natural resources of the Brazilian Amazonas is described. Purposes of the Amazonas development project are summarized. The application of LANDSAT imagery to identification of vegetation coverage and soil use, identification of soil types, geomorphology, and geology and highway planning is discussed. An evaluation of the worth of LANDSAT imagery in mapping the region is presented. Maps generated by the project are included.

  4. Operational data fusion framework for building frequent Landsat-like imagery in a cloudy region

    USDA-ARS?s Scientific Manuscript database

    An operational data fusion framework is built to generate dense time-series Landsat-like images for a cloudy region by fusing Moderate Resolution Imaging Spectroradiometer (MODIS) data products and Landsat imagery. The Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) is integrated in ...

  5. Computer mapping of turbidity and circulation patterns in Saginaw Bay, Michigan from LANDSAT data

    NASA Technical Reports Server (NTRS)

    Rogers, R. H. (Principal Investigator); Reed, L. E.; Smith, V. E.

    1975-01-01

    The author has identified the following significant results. LANDSAT was used as a basis for producing geometrically-corrected, color-coded imagery of turbidity and circulation patterns in Saginaw Bay, Michigan (Lake Huron). This imagery shows nine discrete categories of turbidity, as indicated by nine Secchi depths between 0.3 and 3.3 meters. The categorized imagery provided an economical basis for extrapolating water quality parameters from point samples to unsampled areas. LANDSAT furnished a synoptic view of water mass boundaries that no amount of ground sampling or monitoring could provide.
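
    The color-coded categorization above can be illustrated with a small sketch; the nine class edges below are evenly spaced over the reported 0.3-3.3 m Secchi-depth range purely for illustration, not the study's actual calibration.

```python
import numpy as np

# Illustrative binning of a Secchi-depth raster into nine discrete
# turbidity classes spanning the reported 0.3-3.3 m range.
edges = np.linspace(0.3, 3.3, 10)  # 9 classes -> 10 edges

def turbidity_class(secchi_m):
    """Return a class index 1..9 for each depth value."""
    return np.clip(np.digitize(secchi_m, edges), 1, 9)

classes = turbidity_class(np.array([0.4, 1.0, 3.2]))
```

Each class index would then be assigned a display color to produce the color-coded turbidity imagery.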

  6. An electrophysiological study of task demands on concreteness effects: evidence for dual coding theory.

    PubMed

    Welcome, Suzanne E; Paivio, Allan; McRae, Ken; Joanisse, Marc F

    2011-07-01

    We examined ERP responses during the generation of word associates or mental images in response to concrete and abstract concepts. Of interest were the predictions of dual coding theory (DCT), which proposes that processing lexical concepts depends on functionally independent but interconnected verbal and nonverbal systems. ERP responses were time-locked to either stimulus onset or response to compensate for potential latency differences across conditions. During word associate generation, but not mental imagery, concrete items elicited a greater N400 than abstract items. A concreteness effect emerged at a later time point during the mental imagery task. Data were also analyzed using time-frequency analysis that investigated synchronization of neuronal populations over time during processing. Concrete words elicited an enhanced late desynchronization of theta-band power (723-938 ms post-stimulus onset) during associate generation. During mental imagery, abstract items elicited greater delta-band power from 800 to 1,000 ms following stimulus onset, theta-band power from 350 to 205 ms before response, and alpha-band power from 900 to 800 ms before response. Overall, the findings support DCT in suggesting that lexical concepts are not amodal and that concreteness effects are modulated by tasks that focus participants on verbal versus nonverbal, imagery-based knowledge.

  7. Edge-diffraction effects in RCS predictions and their importance in systems analysis

    NASA Astrophysics Data System (ADS)

    Friess, W. F.; Klement, D.; Ruppel, M.; Stein, Volker

    1996-06-01

    In developing RCS prediction codes, a variety of physical effects, such as edge diffraction, have to be considered, with the consequence that the computational effort increases considerably. This fact limits the field of application of such codes, especially if the RCS data serve as input parameters for system simulators, which very often need these data for a large number of observation angles and/or frequencies. Conversely, the results of a system analysis can be used to estimate the relevance of physical effects from a system viewpoint and to rank them according to their magnitude. This paper evaluates the importance for systems analysis of RCS predictions containing an edge-diffracted field. A double dihedral with strong depolarizing behavior and a generic airplane design containing many arbitrarily oriented edges are used as test structures. Data of the scattered field are generated by the RCS computer code SIGMA with and without edge diffraction effects. These data are submitted to the code DORA to determine radar range and radar detectability, and to a SAR simulator code to generate SAR imagery. In both cases special scenarios are assumed. The essential features of the computer codes in their current state are described, and the results are presented and discussed from a systems viewpoint.

  8. Techniques in processing multi-frequency multi-polarization spaceborne SAR data

    NASA Technical Reports Server (NTRS)

    Curlander, John C.; Chang, C. Y.

    1991-01-01

    This paper presents the algorithm design of the SIR-C ground data processor, with emphasis on the unique elements involved in the production of registered multifrequency polarimetric data products. A quick-look processing algorithm used for generation of low-resolution browse image products and estimation of echo signal parameters is also presented. Specifically the discussion covers: (1) azimuth reference function generation to produce registered polarimetric imagery; (2) geometric rectification to accommodate cross-track and along-track Doppler drifts; (3) multilook filtering designed to generate output imagery with a uniform resolution; and (4) efficient coding to compress the polarimetric image data for distribution.

  9. INPE LANDSAT-D thematic mapper computer compatible tape format specification

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Desouza, R. C. M.

    1982-01-01

    The format of the computer compatible tapes (CCT) which contain Thematic Mapper (TM) imagery data acquired from the LANDSAT D and D Prime satellites by the Instituto de Pesquisas Espaciais (CNPq-INPE, Brazil) is defined.

  10. Utilization of ERTS data to detect plant diseases and nutrient deficiencies, soil types and moisture levels

    NASA Technical Reports Server (NTRS)

    Parks, W. L. (Principal Investigator); Sewell, J. I.; Hilty, J. W.; Rennie, J. C.

    1973-01-01

    The author has identified the following significant results. A significant finding to date is the delineation of the Memphis soil association in Obion County, Dyer County, and in portions of Kentucky. This soil association was delineated mechanically through the use of imagery in the digital tape format, appropriate computer software, and an IBM/360/05 computer. The Waverly-Swamp association and the Obion River have been identified both on the ERTS-1 imagery and on the computer printout. These findings demonstrate the feasibility of delineating major soil associations through vegetative cover common to the association. Channel 7 provides the most information for studies of this type. Computer density printouts assist markedly in making density separations and delineating major soil moisture differences; however, signatures for soil moisture classification in this area of mixed land uses in relatively small tracts have not yet been developed.

  11. Comparison of different detection methods for persistent multiple hypothesis tracking in wide area motion imagery

    NASA Astrophysics Data System (ADS)

    Hartung, Christine; Spraul, Raphael; Schuchert, Tobias

    2017-10-01

    Wide area motion imagery (WAMI) acquired by an airborne multicamera sensor enables continuous monitoring of large urban areas. Each image can cover regions of several square kilometers and contain thousands of vehicles. Reliable vehicle tracking in this imagery is an important prerequisite for surveillance tasks, but remains challenging due to the low frame rate and small object size. Most WAMI tracking approaches rely on moving object detections generated by frame differencing or background subtraction. These detection methods fail when objects slow down or stop. Recent approaches for persistent tracking compensate for missing motion detections by combining a detection-based tracker with a second tracker based on appearance or local context. To avoid the additional complexity introduced by combining two trackers, we employ an alternative single-tracker framework that is based on multiple hypothesis tracking and recovers missing motion detections with a classifier-based detector. We integrate an appearance-based similarity measure, merge handling, vehicle-collision tests, and clutter handling to adapt the approach to the specific context of WAMI tracking. We apply the tracking framework to a region of interest of the publicly available WPAFB 2009 dataset for quantitative evaluation; a comparison to other persistent WAMI trackers demonstrates state-of-the-art performance of the proposed approach. Furthermore, we analyze in detail the impact of different object detection methods and detector settings on the quality of the output tracking results. For this purpose, we choose four different motion-based detection methods that vary in detection performance and computation time to generate the input detections. As detector parameters can be adjusted to achieve different precision and recall performance, we combine each detection method with detector settings that yield (1) high precision and low recall, (2) high recall and low precision, and (3) the best f-score. Comparing the tracking performance achieved with all generated sets of input detections allows us to quantify the sensitivity of the tracker to different types of detector errors and to derive recommendations for detector and parameter choice.
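
    The three detector operating points above are chosen by precision, recall, and f-score, which can be sketched as follows (the detection counts are invented for illustration):

```python
# Precision, recall, and f-score for a detector setting, from counts of
# true positives (tp), false positives (fp), and missed objects (fn).
def precision_recall_f(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f

# A permissive threshold trades precision for recall.
strict = precision_recall_f(tp=80, fp=5, fn=40)    # high precision, low recall
loose = precision_recall_f(tp=110, fp=60, fn=10)   # high recall, low precision
```

Sweeping the detector threshold and keeping the setting with the highest f-score yields the third operating point compared in the study.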

  12. Learning about Computers through Art History and Art Practice.

    ERIC Educational Resources Information Center

    Lichtman, Loy

    1996-01-01

    Describes a Victoria University (Australia) program that combines art history, computer graphics, and studio practice. Discusses the social applications of technology, the creation and manipulation of computer imagery, and the ways that these impact traditional concepts of art. The program has proven particularly successful with female students.…

  13. Cloud GIS Based Watershed Management

    NASA Astrophysics Data System (ADS)

    Bediroğlu, G.; Colak, H. E.

    2017-11-01

    In this study, we developed a Cloud GIS based watershed management system using a cloud computing architecture. Cloud GIS is used as SaaS (Software as a Service) and DaaS (Data as a Service). We applied GIS analysis in the cloud to test SaaS and deployed GIS datasets in the cloud to test DaaS. We used a hybrid cloud computing model, drawing on ready web-based mapping services hosted in the cloud (world topology, satellite imagery). After creating geodatabases covering hydrology (rivers, lakes), soil maps, climate maps, rain maps, geology, and land use, we uploaded them to the system. The watershed of the study area was determined in the cloud using ready-hosted topology maps. After uploading all the datasets, we applied various GIS analyses and queries. Results showed that Cloud GIS technology brings speed and efficiency to watershed management studies. Moreover, the system can easily be adapted for similar land analysis and management studies.

  14. Spectral discrimination of lithologic facies in the granite of the Pedra Branca Goias using LANDSAT 1 digital imagery

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J.; Almeido, R., Jr.

    1982-01-01

    The applicability of LANDSAT MSS imagery for discriminating geobotanical associations observed in zones of cassiterite-rich metasomatic alterations in the granitic body of Serra da Pedra Branca was investigated. Computer compatible tapes of dry and rainy season imagery were analyzed. Image enlargement, corrections, linear contrast stretch, and ratioing of noncorrelated spectral bands were performed using the Image 100 with a grey scale of 256 levels between zero and 255. Only bands 5 and 7 were considered. Band ratioing of the noncorrelated channels (5 and 7) of rainy season imagery permits distinction of areas with different percentages of vegetation coverage, which correspond to geobotanical associations in the area studied. The linear contrast stretch of channel 5, especially of the dry season image, is very unsatisfactory in this area.
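
    The two enhancements compared above, band ratioing and linear contrast stretch, can be sketched as below; the pixel values are toy data, not MSS imagery.

```python
import numpy as np

# Sketch of two classic image enhancements: a pixelwise ratio of two bands,
# and a linear contrast stretch of one band to a 0-255 gray scale.
def band_ratio(band5, band7, eps=1e-6):
    """Pixelwise ratio; eps guards against division by zero."""
    return band5 / (band7 + eps)

def linear_stretch(band, levels=256):
    """Stretch a band so its min-max span fills 0..levels-1."""
    lo, hi = band.min(), band.max()
    return np.round((band - lo) / (hi - lo) * (levels - 1)).astype(np.uint8)

b5 = np.array([[20.0, 40.0], [60.0, 80.0]])
stretched = linear_stretch(b5)
```

Ratioing suppresses illumination differences common to both bands, which is why it highlights vegetation-cover contrasts that a single stretched band misses.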

  15. How Visuo-Spatial Mental Imagery Develops: Image Generation and Maintenance

    PubMed Central

    Wimmer, Marina C.; Maras, Katie L.; Robinson, Elizabeth J.; Doherty, Martin J.; Pugeault, Nicolas

    2015-01-01

    Two experiments examined the nature of visuo-spatial mental imagery generation and maintenance in 4-, 6-, 8-, and 10-year-old children and adults (N = 211). The key questions were how image generation and maintenance develop (Experiment 1) and how accurately children and adults coordinate mental and visually perceived images (Experiment 2). Experiment 1 indicated that basic image generation and maintenance abilities are present at 4 years of age but the precision with which images are generated and maintained improves particularly between 4 and 8 years. In addition to increased precision, Experiment 2 demonstrated that generated and maintained mental images become increasingly similar to visually perceived objects. Altogether, findings suggest that for simple tasks demanding image generation and maintenance, children attain adult-like precision younger than previously reported. This research also sheds new light on the ability to coordinate mental images with visual images in children and adults. PMID:26562296

  16. Object-based habitat mapping using very high spatial resolution multispectral and hyperspectral imagery with LiDAR data

    NASA Astrophysics Data System (ADS)

    Onojeghuo, Alex Okiemute; Onojeghuo, Ajoke Ruth

    2017-07-01

    This study investigated the combined use of multispectral/hyperspectral imagery and LiDAR data for habitat mapping across parts of south Cumbria, North West England. The methodology adopted in this study integrated spectral information contained in pansharpened QuickBird multispectral/AISA Eagle hyperspectral imagery and LiDAR-derived measures with object-based machine learning classifiers and ensemble analysis techniques. Using the LiDAR point cloud data, elevation models (such as the Digital Surface Model and Digital Terrain Model rasters) and intensity features were extracted directly. The LiDAR-derived measures exploited in this study included the Canopy Height Model, intensity, and topographic information (i.e. mean, maximum, and standard deviation). These three LiDAR measures were combined with spectral information contained in the pansharpened QuickBird and Eagle MNF transformed imagery for image classification experiments. A fusion of pansharpened QuickBird multispectral and Eagle MNF hyperspectral imagery with all LiDAR-derived measures generated the best classification accuracies, 89.8% and 92.6% respectively. These results were generated with the Support Vector Machine and Random Forest machine learning algorithms, respectively. The ensemble analysis of all three machine learning classifiers for the pansharpened QuickBird and Eagle MNF fused data outputs did not significantly increase the overall classification accuracy. Results of the study demonstrate the potential of combining either very high spatial resolution multispectral or hyperspectral imagery with LiDAR data for habitat mapping.
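
    The Canopy Height Model mentioned above is conventionally derived as the difference between the Digital Surface Model and the Digital Terrain Model; here is a minimal sketch with toy rasters, not the study's LiDAR tiles.

```python
import numpy as np

# Sketch of Canopy Height Model derivation: per-pixel above-ground height
# is the surface model minus the terrain model, with negative noise clipped.
def canopy_height_model(dsm, dtm):
    """CHM = DSM - DTM, clipped at zero to remove small negative artifacts."""
    return np.clip(dsm - dtm, 0.0, None)

dsm = np.array([[105.0, 112.0], [108.0, 107.5]])  # first-return surface (m)
dtm = np.array([[100.0, 100.5], [101.0, 108.0]])  # bare-earth terrain (m)
chm = canopy_height_model(dsm, dtm)
```

Clipping at zero handles pixels where interpolation noise makes the terrain model locally higher than the surface model.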

  17. Integration of aerial oblique imagery and terrestrial imagery for optimized 3D modeling in urban areas

    NASA Astrophysics Data System (ADS)

    Wu, Bo; Xie, Linfu; Hu, Han; Zhu, Qing; Yau, Eric

    2018-05-01

    Photorealistic three-dimensional (3D) models are fundamental to the spatial data infrastructure of a digital city, and have numerous potential applications in areas such as urban planning, urban management, urban monitoring, and urban environmental studies. Recent developments in aerial oblique photogrammetry based on aircraft or unmanned aerial vehicles (UAVs) offer promising techniques for 3D modeling. However, 3D models generated from aerial oblique imagery in urban areas with densely distributed high-rise buildings may show geometric defects and blurred textures, especially on building façades, due to problems such as occlusion and large camera tilt angles. Meanwhile, mobile mapping systems (MMSs) can capture terrestrial images of close-range objects from a complementary view on the ground at a high level of detail, but do not offer full coverage. The integration of aerial oblique imagery with terrestrial imagery offers promising opportunities to optimize 3D modeling in urban areas. This paper presents a novel method of integrating these two image types through automatic feature matching and combined bundle adjustment between them, and based on the integrated results to optimize the geometry and texture of the 3D models generated from aerial oblique imagery. Experimental analyses were conducted on two datasets of aerial and terrestrial images collected in Dortmund, Germany and in Hong Kong. The results indicate that the proposed approach effectively integrates images from the two platforms and thereby improves 3D modeling in urban areas.

  18. Remote Measurement of Heat Flux from Power Plant Cooling Lakes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garrett, Alfred J.; Kurzeja, Robert J.; Villa-Aleman, Eliel

    2013-06-01

    Laboratory experiments have demonstrated a correlation between the rate of heat loss q" from an experimental fluid to the air above and the standard deviation σ of the thermal variability in images of the fluid surface. These experimental results imply that q" can be derived directly from thermal imagery by computing σ. This paper analyzes thermal imagery collected over two power plant cooling lakes to determine if the same relationship exists. Turbulent boundary layer theory predicts a linear relationship between q" and σ when both forced (wind-driven) and free (buoyancy-driven) convection are present. Datasets derived from ground- and helicopter-based imagery collections had correlation coefficients between σ and q" of 0.45 and 0.76, respectively. Values of q" computed from a function of σ and friction velocity u* derived from turbulent boundary layer theory had higher correlations with measured values of q" (0.84 and 0.89). Finally, this research may be applicable to the problem of calculating losses of heat from the ocean to the atmosphere during high-latitude cold-air outbreaks because it does not require the information typically needed to compute sensible, evaporative, and thermal radiation energy losses to the atmosphere.
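
    The core statistic above, σ of the thermal image, and the predicted linear relation between q" and σ can be sketched with synthetic numbers (not the cooling-lake data; the slope and intercept below are invented for illustration).

```python
import numpy as np

# Sketch: sigma is the standard deviation of brightness temperature over a
# thermal image patch; q'' is then modeled as linear in sigma via an
# ordinary least-squares fit to (sigma, q'') samples.
def thermal_sigma(image):
    """Standard deviation of surface temperature over an image patch."""
    return float(np.std(image))

# Synthetic (sigma, q'') pairs lying exactly on q'' = 400*sigma + 50.
sigmas = np.array([0.2, 0.4, 0.6, 0.8])
q_flux = 400.0 * sigmas + 50.0
slope, intercept = np.polyfit(sigmas, q_flux, 1)
```

In practice the fitted slope would itself depend on the friction velocity u*, as the boundary-layer theory cited above predicts.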

  19. Improving estimates of streamflow characteristics using LANDSAT-1 (ERTS-1) imagery. [Delmarva Peninsula

    NASA Technical Reports Server (NTRS)

    Hollyday, E. F. (Principal Investigator)

    1975-01-01

    The author has identified the following significant results. Streamflow characteristics in the Delmarva Peninsula derived from the records of daily discharge of 20 gaged basins are representative of the full range in flow conditions and include all of those commonly used for design or planning purposes. They include annual flood peaks with recurrence intervals of 2, 5, 10, 25, and 50 years, mean annual discharge, standard deviation of the mean annual discharge, mean monthly discharges, standard deviation of the mean monthly discharges, low-flow characteristics, flood volume characteristics, and the discharge equalled or exceeded 50 percent of the time. Streamflow and basin characteristics were related by a technique of multiple regression using a digital computer. A control group of equations was computed using basin characteristics derived from maps and climatological records. An experimental group of equations was computed using basin characteristics derived from LANDSAT imagery as well as from maps and climatological records. Based on a reduction in standard error of estimate equal to or greater than 10 percent, the equations for 12 stream flow characteristics were substantially improved by adding to the analyses basin characteristics derived from LANDSAT imagery.
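
    The regression comparison above can be sketched as follows; all numbers are synthetic, and the variable names (area, forest) are hypothetical stand-ins for a map-derived and an imagery-derived basin characteristic. The standard error of estimate is computed as in ordinary multiple regression.

```python
import numpy as np

# Sketch: regress a streamflow characteristic on basin characteristics and
# check whether adding an imagery-derived predictor lowers the standard
# error of estimate (SEE).
def std_error_of_estimate(X, y):
    """SEE = sqrt(SSE / (n - p)) for a least-squares fit with intercept."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return float(np.sqrt(resid @ resid / (len(y) - A.shape[1])))

area = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])    # map-derived predictor
forest = np.array([1.0, 4.0, 2.0, 8.0, 5.0, 7.0])  # imagery-derived predictor
flow = 2.0 * area + 3.0 * forest                   # toy response variable

see_maps_only = std_error_of_estimate(area[:, None], flow)
see_with_imagery = std_error_of_estimate(np.column_stack([area, forest]), flow)
```

Because the toy response depends on both predictors, the SEE drops when the imagery-derived predictor is added, mirroring the ≥10% reductions reported above.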

  20. Mental imagery in emotion and emotional disorders.

    PubMed

    Holmes, Emily A; Mathews, Andrew

    2010-04-01

    Mental imagery has been considered relevant to psychopathology due to its supposed special relationship with emotion, although evidence for this assumption has been conspicuously lacking. The present review is divided into four main sections: (1) First, we review evidence that imagery can evoke emotion in at least three ways: a direct influence on emotional systems in the brain that are responsive to sensory signals; overlap between processes involved in mental imagery and perception which can lead to responding "as if" to real emotion-arousing events; and the capacity of images to make contact with memories for emotional episodes in the past. (2) Second, we describe new evidence confirming that imagery does indeed evoke greater emotional responses than verbal representation, although the extent of emotional response depends on the image perspective adopted. (3) Third, a heuristic model is presented that contrasts the generation of language-based representations with imagery and offers an account of their differing effects on emotion, beliefs and behavior. (4) Finally, based on the foregoing review, we discuss the role of imagery in maintaining emotional disorders, and its uses in psychological treatment. Copyright 2010 Elsevier Ltd. All rights reserved.

  1. Physiological correlates of imagery-induced orgasm in women.

    PubMed

    Whipple, B; Ogden, G; Komisaruk, B R

    1992-04-01

    Orgasm has been reported to occur in response to imagery in the absence of any physical stimulation. This study was undertaken to ascertain whether the subjective report of imagery-induced orgasm is accompanied by physiological and perceptual events that are characteristic of genitally stimulated orgasm. Subjects were women who claimed that they could experience orgasm from imagery alone. Orgasm from self-induced imagery or genital self-stimulation generated significant increases in systolic blood pressure, heart rate, pupil diameter, pain detection threshold, and pain tolerance threshold over resting control conditions. These findings provide evidence that orgasm from self-induced imagery and genital self-stimulation can each produce significant and substantial net sympathetic activation and concomitant significant increases in pain thresholds. The increases in the self-induced imagery orgasm condition were comparable in magnitude to those in the genital self-stimulation-produced orgasm condition. On this basis we state that physical genital stimulation is evidently not necessary to produce a state that is reported to be an orgasm and that a reassessment of the nature of orgasm is warranted.

  2. Spatio-Temporal Analysis of Forest Fire Risk and Danger Using LANDSAT Imagery.

    PubMed

    Saglam, Bülent; Bilgili, Ertugrul; Dincdurmaz, Bahar; Kadiogulari, Ali Ihsan; Kücük, Ömer

    2008-06-20

    Computing fire danger and fire risk on a spatio-temporal scale is of crucial importance in fire management planning, and in the simulation of fire growth and development across a landscape. However, due to the complex nature of forests, fire risk and danger potential maps are considered one of the most difficult thematic layers to build up. Remote sensing and digital terrain data have been introduced for efficient discrete classification of fire risk and fire danger potential. In this study, Landsat imagery from two dates was used for determining spatio-temporal change of fire risk and danger potential in the Korudag forest planning unit in northwestern Turkey. The method comprised the following two steps: (1) creation of indices of the factors influencing fire risk and danger; (2) evaluation of spatio-temporal changes in fire risk and danger of given areas using remote sensing as a quick and inexpensive means and determining the pace of forest cover change. Fire risk and danger potential indices were based on species composition, stand crown closure, stand development stage, insolation, slope, proximity of agricultural lands to forest, and distance from settlement areas. Using the indices generated, fire risk and danger maps were produced for the years 1987 and 2000. Spatio-temporal analyses were then carried out based on the maps produced. Results obtained from the study showed that the use of Landsat imagery provided a valuable characterization and mapping of vegetation structure and type with overall classification accuracy higher than 83%.
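    The two-step indexing procedure can be illustrated with a toy weighted overlay. The factor layers, their 0-1 scaling, and the weights below are invented for the sketch; the paper's actual classes and weightings are not reproduced here.

```python
import numpy as np

# Hypothetical 0-1 factor layers for a tiny 3x3 study area.
species   = np.array([[0.9, 0.8, 0.3], [0.8, 0.5, 0.3], [0.3, 0.3, 0.1]])  # fire-prone composition
crown     = np.array([[0.7, 0.6, 0.4], [0.6, 0.5, 0.2], [0.2, 0.2, 0.1]])  # crown closure
slope     = np.array([[0.8, 0.5, 0.5], [0.4, 0.4, 0.3], [0.3, 0.2, 0.2]])  # steepness
proximity = np.array([[1.0, 0.9, 0.2], [0.9, 0.5, 0.2], [0.1, 0.1, 0.0]])  # nearness to fields/settlements

# Assumed weights summing to 1 (a real study would calibrate these).
weights = {"species": 0.4, "crown": 0.2, "slope": 0.2, "proximity": 0.2}

risk = (weights["species"] * species + weights["crown"] * crown
        + weights["slope"] * slope + weights["proximity"] * proximity)

# Map the continuous index into low / moderate / high danger classes.
danger = np.digitize(risk, bins=[0.33, 0.66])   # 0 = low, 1 = moderate, 2 = high
```

    Running the same overlay on layers from two dates (here, 1987 and 2000) and differencing the class maps is the essence of the spatio-temporal comparison.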

  3. Irdis: A Digital Scene Storage And Processing System For Hardware-In-The-Loop Missile Testing

    NASA Astrophysics Data System (ADS)

    Sedlar, Michael F.; Griffith, Jerry A.

    1988-07-01

    This paper describes the implementation of a Seeker Evaluation and Test Simulation (SETS) Facility at Eglin Air Force Base. This facility will be used to evaluate imaging infrared (IIR) guided weapon systems by performing various types of laboratory tests. One such test is termed Hardware-in-the-Loop (HIL) simulation (Figure 1), in which the actual flight of a weapon system is simulated as closely as possible in the laboratory. As shown in the figure, there are four major elements in the HIL test environment: the weapon/sensor combination, an aerodynamic simulator, an imagery controller, and an infrared imagery system. The paper concentrates on the approaches and methodologies used in the imagery controller and infrared imaging system elements for generating scene information. For procurement purposes, these two elements have been combined into an Infrared Digital Injection System (IRDIS) which provides scene storage, processing, and output interface to drive a radiometric display device or to directly inject digital video into the weapon system (bypassing the sensor). The paper describes in detail how standard and custom image processing functions have been combined with off-the-shelf mass storage and computing devices to produce a system which provides high sample rates (greater than 90 Hz), a large terrain database, high weapon rates of change, and multiple independent targets. A photo-based approach has been used to maximize terrain and target fidelity, thus providing a rich and complex scene for weapon/tracker evaluation.

  4. Toward a 30m resolution time series of historical global urban expansion: exploring variation in North American cities

    NASA Astrophysics Data System (ADS)

    Stuhlmacher, M.; Wang, C.; Georgescu, M.; Tellman, B.; Balling, R.; Clinton, N. E.; Collins, L.; Goldblatt, R.; Hanson, G.

    2016-12-01

    Global representations of modern day urban land use and land cover (LULC) extent are becoming increasingly prevalent. Yet considerable uncertainties in the representation of built environment extent (i.e. global classifications generated from 250m resolution MODIS imagery or the United States' National Land Cover Database) remain because of the lack of a systematic, globally consistent methodological approach. We aim to increase resolution and accuracy and improve upon past efforts by establishing a data-driven definition of the urban landscape, based on Landsat 5, 7 & 8 imagery and ancillary data sets. Continuous and discrete machine learning classification algorithms have been developed in Google Earth Engine (GEE), a powerful online cloud-based geospatial storage and parallel-computing platform. Additionally, thousands of ground truth points have been selected from high resolution imagery to fill in the previous lack of accurate data to be used for training and validation. We will present preliminary classification and accuracy assessments for select cities in the United States and Mexico. Our approach has direct implications for development of projected urban growth that is grounded on realistic identification of urbanizing hot-spots, with consequences for local to regional scale climate change, energy demand, water stress, human health, urban-ecological interactions, and efforts used to prioritize adaptation and mitigation strategies to offset large-scale climate change. Future work to apply the built-up detection algorithm globally and yearly is underway in a partnership between GEE, the University of California, San Diego, and Arizona State University.
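    As a minimal illustration of spectral built-up detection from Landsat bands, one commonly used indicator is the Normalized Difference Built-up Index (NDBI). It is not necessarily among the features the authors used, and the reflectance values below are synthetic.

```python
import numpy as np

# Synthetic Landsat-like surface reflectance for a 2x2 neighborhood.
nir  = np.array([[0.45, 0.30], [0.28, 0.50]])   # near-infrared band
swir = np.array([[0.30, 0.42], [0.40, 0.25]])   # shortwave-infrared band

# Built surfaces tend to reflect more strongly in SWIR than in NIR,
# so NDBI > 0 serves as a rough built-up flag.
ndbi = (swir - nir) / (swir + nir)
built = ndbi > 0.0
```

    A machine learning classifier like the ones trained in GEE would use such indices alongside raw bands and ancillary layers rather than a fixed threshold.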

  5. Thermal Texture Generation and 3D Model Reconstruction Using SfM and GAN

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.; Mizginov, V. A.

    2018-05-01

    Realistic 3D models with textures representing thermal emission of the object are widely used in such fields as dynamic scene analysis, autonomous driving, and video surveillance. Structure from Motion (SfM) methods provide a robust approach for the generation of textured 3D models in the visible range. Still, automatic generation of 3D models from infrared imagery is challenging due to the absence of feature points and low sensor resolution. Recent advances in Generative Adversarial Networks (GAN) have proved that they can perform complex image-to-image transformations such as a transformation of day to night and generation of imagery in a different spectral range. In this paper, we propose a novel method for generation of realistic 3D models with thermal textures using the SfM pipeline and GAN. The proposed method uses visible range images as an input. The images are processed in two ways. Firstly, they are used for point matching and dense point cloud generation. Secondly, the images are fed into a GAN that performs the transformation from the visible range to the thermal range. We evaluate the proposed method using real infrared imagery captured with a FLIR ONE PRO camera. We generated a dataset with 2000 pairs of real images captured in thermal and visible range. The dataset is used to train the GAN network and to generate 3D models using SfM. The evaluation of the generated 3D models and infrared textures proved that they are similar to the ground truth model in both thermal emissivity and geometrical shape.

  6. Radar data processing and analysis

    NASA Technical Reports Server (NTRS)

    Ausherman, D.; Larson, R.; Liskow, C.

    1976-01-01

    Digitized four-channel radar images corresponding to particular areas from the Phoenix and Huntington test sites were generated in conjunction with prior experiments performed to collect X- and L-band synthetic aperture radar imagery of these two areas. The methods for generating this imagery are documented. A secondary objective was the investigation of digital processing techniques for extraction of information from the multiband radar image data. Following the digitization, the remaining resources permitted a preliminary machine analysis to be performed on portions of the radar image data. The results, although necessarily limited, are reported.

  7. INS integrated motion analysis for autonomous vehicle navigation

    NASA Technical Reports Server (NTRS)

    Roberts, Barry; Bazakos, Mike

    1991-01-01

    The use of inertial navigation system (INS) measurements to enhance the quality and robustness of motion analysis techniques used for obstacle detection is discussed with particular reference to autonomous vehicle navigation. The approach to obstacle detection used here employs motion analysis of imagery generated by a passive sensor. Motion analysis of imagery obtained during vehicle travel is used to generate range measurements to points within the field of view of the sensor, which can then be used to provide obstacle detection. Results obtained with an INS integrated motion analysis approach are reviewed.

  8. Inertial navigation sensor integrated motion analysis for autonomous vehicle navigation

    NASA Technical Reports Server (NTRS)

    Roberts, Barry; Bhanu, Bir

    1992-01-01

    Recent work on INS integrated motion analysis is described. Results were obtained with a maximally passive system of obstacle detection (OD) for ground-based vehicles and rotorcraft. The OD approach involves motion analysis of imagery acquired by a passive sensor in the course of vehicle travel to generate range measurements to world points within the sensor FOV. INS data and scene analysis results are used to enhance interest point selection, the matching of the interest points, and the subsequent motion-based computations, tracking, and OD. The most important lesson learned from the research described here is that the incorporation of inertial data into the motion analysis program greatly improves the analysis and makes the process more robust.

  9. The association between brain activity and motor imagery during motor illusion induction by vibratory stimulation

    PubMed Central

    Kodama, Takayuki; Nakano, Hideki; Katayama, Osamu; Murata, Shin

    2017-01-01

    Background: The association between motor imagery ability and brain neural activity that leads to the manifestation of a motor illusion remains unclear. Objective: In this study, we examined the association between the ability to generate motor imagery and brain neural activity leading to the induction of a motor illusion by vibratory stimulation. Methods: The sample consisted of 20 healthy individuals who did not have movement or sensory disorders. We measured the time between the starting and ending points of a motor illusion (the time to illusion induction, TII) and performed electroencephalography (EEG). We conducted a temporo-spatial analysis on brain activity leading to the induction of motor illusions using the EEG microstate segmentation method. Additionally, we assessed the ability to generate motor imagery using the Japanese version of the Movement Imagery Questionnaire-Revised (JMIQ-R) prior to performing the task and examined the associations among brain neural activity levels as identified by the microstate segmentation method, TII, and the JMIQ-R scores. Results: The results showed four typical microstates during TII and significantly higher neural activity in the ventrolateral prefrontal cortex, primary sensorimotor area, supplementary motor area (SMA), and inferior parietal lobule (IPL). Moreover, there were significant negative correlations between the neural activity of the primary motor cortex (M1), SMA, and IPL and TII, and a significant positive correlation between the neural activity of the SMA and the JMIQ-R scores. Conclusion: These findings suggest the possibility that a neural network primarily composed of the neural activity of SMA and M1, which are involved in generating motor imagery, may be the neural basis for inducing motor illusions. This may aid in creating a new approach to neurorehabilitation that enables a more robust reorganization of the neural base for patients with brain dysfunction and motor function disorders. PMID:29172013

  10. Automatic digital surface model (DSM) generation from aerial imagery data

    NASA Astrophysics Data System (ADS)

    Zhou, Nan; Cao, Shixiang; He, Hongyan; Xing, Kun; Yue, Chunyu

    2018-04-01

    Aerial sensors are widely used to acquire imagery for photogrammetric and remote sensing applications. In general, the images have large overlapping regions, which provide abundant redundant geometric and radiometric information for matching. This paper presents a POS-supported dense matching procedure for automatic DSM generation from aerial imagery data. The method uses a coarse-to-fine hierarchical strategy with an effective combination of several image matching algorithms: image radiation pre-processing, image pyramid generation, feature point extraction and grid point generation, multi-image geometrically constrained cross-correlation (MIG3C), global relaxation optimization, multi-image geometrically constrained least squares matching (MIGCLSM), TIN generation and point cloud filtering. The image radiation pre-processing is used to reduce the effects of the inherent radiometric problems and optimize the images. The presented approach essentially consists of three components: a feature point extraction and matching procedure, a grid point matching procedure and a relational matching procedure. The MIGCLSM method is used to achieve potentially sub-pixel accuracy matches and identify some inaccurate and possibly false matches. The feasibility of the method has been tested on aerial images at different scales over different land-cover types. The accuracy evaluation is based on the comparison between the automatically extracted DSMs derived from the precise exterior orientation parameters (EOPs) and those derived from the POS.
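    The cross-correlation component of such a matching pipeline can be sketched with plain normalized cross-correlation (NCC) on synthetic data; the geometric constraints, image pyramid, and least-squares refinement of the full MIG3C/MIGCLSM procedure are omitted.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

rng = np.random.default_rng(0)
left = rng.random((11, 40))
right = np.roll(left, 7, axis=1)       # fabricate a known 7-pixel offset

template = left[3:8, 10:15]            # 5x5 patch from the "left" image
scores = [ncc(template, right[3:8, x:x + 5]) for x in range(40 - 5)]
best = int(np.argmax(scores))          # should land at column 10 + 7 = 17
```

    In the real procedure the NCC search is restricted by the geometric constraints from the POS data, which is what keeps the search space and the false-match rate small.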

  11. EEG datasets for motor imagery brain-computer interface.

    PubMed

    Cho, Hohyun; Ahn, Minkyu; Ahn, Sangtae; Kwon, Moonyoung; Jun, Sung Chan

    2017-07-01

    Most investigators of brain-computer interface (BCI) research believe that BCI can be achieved through induced neuronal activity from the cortex, but not by evoked neuronal activity. Motor imagery (MI)-based BCI is one of the standard concepts of BCI, in that the user can generate induced activity by imagining motor movements. However, variations in performance over sessions and subjects are too severe to overcome easily; therefore, a basic understanding and investigation of BCI performance variation is necessary to find critical evidence of performance variation. Here we present not only EEG datasets for MI BCI from 52 subjects, but also the results of a psychological and physiological questionnaire, EMG datasets, the locations of 3D EEG electrodes, and EEGs for non-task-related states. We validated our EEG datasets by using the percentage of bad trials, event-related desynchronization/synchronization (ERD/ERS) analysis, and classification analysis. After conventional rejection of bad trials, we showed contralateral ERD and ipsilateral ERS in the somatosensory area, which are well-known patterns of MI. Finally, we showed that 73.08% of datasets (38 subjects) included reasonably discriminative information. Our EEG datasets included the information necessary to determine statistical significance; they consisted of well-discriminated datasets (38 subjects) and less-discriminative datasets. These may provide researchers with opportunities to investigate human factors related to MI BCI performance variation, and may also achieve subject-to-subject transfer by using metadata, including a questionnaire, EEG coordinates, and EEGs for non-task-related states. © The Authors 2017. Published by Oxford University Press.
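    The ERD quantity at the heart of such datasets is simple to compute. The sketch below applies the standard percentage-power definition to a synthetic 10 Hz mu rhythm; the sampling rate, interval lengths, and signal are all invented for illustration.

```python
import numpy as np

fs = 250                                # sampling rate in Hz (assumed)
t = np.arange(0, 4, 1 / fs)             # 4 s trial: 2 s rest, then 2 s imagery
rng = np.random.default_rng(1)

# Synthetic sensorimotor signal: a 10 Hz mu rhythm whose amplitude drops
# during motor imagery, plus a little broadband noise.
amp = np.where(t < 2.0, 1.0, 0.4)
eeg = amp * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)

# Since the synthetic signal is already narrowband, mean squared amplitude
# serves as the band power estimate (real EEG is band-pass filtered first).
rest_power = float(np.mean(eeg[t < 2.0] ** 2))
task_power = float(np.mean(eeg[t >= 2.0] ** 2))

# Classic ERD definition: percentage power change relative to the reference
# interval; negative values indicate desynchronization.
erd = (task_power - rest_power) / rest_power * 100.0
```

    Averaging this quantity over trials and electrodes is what produces the contralateral ERD / ipsilateral ERS maps used to validate the datasets.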

  12. Digital techniques for processing Landsat imagery

    NASA Technical Reports Server (NTRS)

    Green, W. B.

    1978-01-01

    An overview of the basic techniques used to process Landsat images with a digital computer, and the VICAR image processing software developed at JPL and available to users through the NASA-sponsored COSMIC computer program distribution center, is presented. Examples are given of subjective processing performed to improve the information display for the human observer, such as contrast enhancement, pseudocolor display and band ratioing, and of quantitative processing using mathematical models, such as classification based on multispectral signatures of different areas within a given scene and geometric transformation of imagery into standard mapping projections. Examples are illustrated by Landsat scenes of the Andes mountains and Altyn-Tagh fault zone in China before and after contrast enhancement and classification of land use in Portland, Oregon. The VICAR image processing software system, which consists of a language translator that simplifies execution of image processing programs and provides a general purpose format so that imagery from a variety of sources can be processed by the same basic set of general applications programs, is described.
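    A minimal example of the subjective processing mentioned above is a linear contrast stretch, which remaps the occupied part of the dynamic range onto the full display range (pixel values invented):

```python
import numpy as np

# A flat-looking 8-bit band occupying only part of the dynamic range.
band = np.array([[60, 70, 80],
                 [75, 90, 100],
                 [65, 85, 95]], dtype=np.uint8)

# Linear contrast stretch: map the scene's [min, max] onto the full [0, 255].
lo, hi = float(band.min()), float(band.max())
stretched = np.round((band.astype(float) - lo) / (hi - lo) * 255.0).astype(np.uint8)
```

    Band ratioing works similarly on pairs of bands (dividing one by the other pixel-wise) to suppress illumination differences, and pseudocolor display maps the single stretched band through a color lookup table.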

  13. Quasi-Epipolar Resampling of High Resolution Satellite Stereo Imagery for Semi Global Matching

    NASA Astrophysics Data System (ADS)

    Tatar, N.; Saadatseresht, M.; Arefi, H.; Hadavand, A.

    2015-12-01

    Semi-global matching is a well-known stereo matching algorithm in the photogrammetry and computer vision communities. Epipolar images are assumed as input to this algorithm. For linear array scanners, however, epipolar curves are not straight lines as they are for frame cameras. Traditional epipolar resampling algorithms demand rational polynomial coefficients (RPCs), a physical sensor model, or ground control points. In this paper we propose a new epipolar resampling method which works without this information. In the proposed method, automatic feature extraction algorithms are employed to generate corresponding features for registering stereo pairs, and the original images are divided into small tiles. By omitting the need for extra information, the speed of the matching algorithm is increased and the need for large temporary memory is reduced. Our experiments on a GeoEye-1 stereo pair captured over the city of Qom, Iran, demonstrate that the epipolar images are generated with sub-pixel accuracy.
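    The semi-global matching recurrence itself is compact. Below is a sketch of cost aggregation along a single left-to-right path on synthetic costs, with assumed penalties P1 and P2; a full SGM implementation aggregates over 8 or 16 path directions and sums the results before taking the disparity minimum.

```python
import numpy as np

def aggregate_path(cost, P1=1.0, P2=8.0):
    """SGM cost aggregation along one path (left to right on a scanline).

    cost: (W, D) array of per-pixel matching costs over D disparities.
    P1 penalizes disparity changes of 1, P2 larger jumps (values assumed)."""
    W, D = cost.shape
    L = np.zeros_like(cost)
    L[0] = cost[0]
    for x in range(1, W):
        prev = L[x - 1]
        m = prev.min()
        up = np.roll(prev, 1)
        up[0] = np.inf                  # no d-1 neighbor at d = 0
        down = np.roll(prev, -1)
        down[-1] = np.inf               # no d+1 neighbor at d = D-1
        best = np.minimum.reduce([prev, up + P1, down + P1, np.full(D, m + P2)])
        L[x] = cost[x] + best - m       # subtract m to keep values bounded
    return L

rng = np.random.default_rng(2)
cost = rng.random((20, 5)) * 0.2
cost[:, 2] = 0.0                        # make disparity 2 cheapest everywhere
disparity = aggregate_path(cost).argmin(axis=1)
```

    The quality of the epipolar resampling matters here because the recurrence only searches along the scanline; residual vertical parallax directly corrupts the cost volume.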

  14. Evaluation of SLAR and thematic mapper MSS data for forest cover mapping using computer-aided analysis techniques

    NASA Technical Reports Server (NTRS)

    Hoffer, R. M. (Principal Investigator); Knowlton, D. J.; Dean, M. E.

    1981-01-01

    A set of training statistics for the 30 meter resolution simulated thematic mapper MSS data was generated based on land use/land cover classes. In addition to this supervised data set, a nonsupervised multicluster block of training statistics is being defined in order to compare the classification results and evaluate the effect of the different training selection methods on classification performance. Two test data sets, one defined using a stratified sampling procedure incorporating a grid system with dimensions of 50 lines by 50 columns and another based on an analyst-supervised set of test fields, were used to evaluate the classifications of the TMS data. The supervised training data set generated training statistics, and a per-point Gaussian maximum likelihood classification of the 1979 TMS data was obtained. The August 1980 MSS data was radiometrically adjusted. The SAR data was redigitized and the SAR imagery was qualitatively analyzed.
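    The per-point Gaussian maximum likelihood rule used for such classifications can be sketched on synthetic training statistics (two invented classes in two spectral bands):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic training pixels in two spectral bands for two land-cover classes.
forest = rng.multivariate_normal([30.0, 60.0], [[4.0, 1.0], [1.0, 4.0]], size=200)
water  = rng.multivariate_normal([10.0, 15.0], [[2.0, 0.0], [0.0, 2.0]], size=200)

def train(samples):
    """Training statistics: class mean vector and covariance matrix."""
    return samples.mean(axis=0), np.cov(samples.T)

def log_likelihood(x, mu, cov):
    """Gaussian log-likelihood (constant terms dropped)."""
    d = x - mu
    return -0.5 * (np.log(np.linalg.det(cov)) + d @ np.linalg.inv(cov) @ d)

stats = [train(forest), train(water)]            # class 0 = forest, 1 = water
pixel = np.array([28.0, 55.0])                   # an unlabeled pixel
label = int(np.argmax([log_likelihood(pixel, mu, cov) for mu, cov in stats]))
```

    The "training statistics" the abstract refers to are exactly the per-class mean vectors and covariance matrices; per-point classification applies the argmax rule independently at every pixel.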

  15. Perceptual Image Compression in Telemedicine

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The next era of space exploration, especially the "Mission to Planet Earth," will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together these techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. In this presentation I will describe some of our preliminary explorations of the applications of our technology to the special problems of telemedicine.
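    The idea of deriving a DCT quantization matrix from viewing conditions can be sketched roughly as follows. This is not the published NASA/IBM formula: the pixels-per-degree value, the contrast-sensitivity fall-off, and the base step size below are all invented to show the shape of the computation.

```python
import numpy as np

N = 8                         # 8x8 DCT block, as in JPEG
ppd = 32.0                    # assumed pixels per degree of visual angle;
                              # the published formula derives this from the
                              # viewing distance and display resolution

# Spatial frequency (cycles/degree) of each DCT basis function.
f = np.arange(N) * ppd / (2 * N)
fu, fv = np.meshgrid(f, f, indexing="ij")
freq = np.hypot(fu, fv)

# Crude stand-in for a contrast sensitivity function: sensitivity falls
# off above roughly 4 cycles/degree.
csf = np.exp(-0.25 * np.maximum(freq - 4.0, 0.0))

# Quantization step inversely proportional to sensitivity: frequencies the
# eye barely sees get the coarsest quantization.
Q = np.clip(np.round(16.0 / csf), 1, 255).astype(int)
```

    Moving the viewer closer (larger `ppd`) shifts each DCT coefficient to a lower retinal frequency, which is why one matrix cannot serve all viewing conditions.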

  16. Digital imaging and remote sensing image generator (DIRSIG) as applied to NVESD sensor performance modeling

    NASA Astrophysics Data System (ADS)

    Kolb, Kimberly E.; Choi, Hee-sue S.; Kaur, Balvinder; Olson, Jeffrey T.; Hill, Clayton F.; Hutchinson, James A.

    2016-05-01

    The US Army's Communications Electronics Research, Development and Engineering Center (CERDEC) Night Vision and Electronic Sensors Directorate (referred to as NVESD) is developing a virtual detection, recognition, and identification (DRI) testing methodology using simulated imagery as a means of augmenting the field testing component of sensor performance evaluation, which is expensive, resource intensive, time consuming, and limited to the available target(s) and existing atmospheric visibility and environmental conditions at the time of testing. Existing simulation capabilities such as the Digital Imaging Remote Sensing Image Generator (DIRSIG) and NVESD's Integrated Performance Model Image Generator (NVIPM-IG) can be combined with existing detection algorithms to reduce cost/time, minimize testing risk, and allow virtual/simulated testing using full spectral and thermal object signatures, as well as those collected in the field. NVESD has developed an end-to-end capability to demonstrate the feasibility of this approach. Simple detection algorithms have been used on the degraded images generated by NVIPM-IG to determine the relative performance of the algorithms on both DIRSIG-simulated and collected images. Evaluating the degree to which the algorithm performance agrees between simulated versus field collected imagery is the first step in validating the simulated imagery procedure.

  17. Imagining a brighter future: the effect of positive imagery training on mood, prospective mental imagery and emotional bias in older adults.

    PubMed

    Murphy, Susannah E; Clare O'Donoghue, M; Drazich, Erin H S; Blackwell, Simon E; Christina Nobre, Anna; Holmes, Emily A

    2015-11-30

    Positive affect and optimism play an important role in healthy ageing and are associated with improved physical and cognitive health outcomes. This study investigated whether it is possible to boost positive affect and associated positive biases in this age group using cognitive training. The effect of computerised imagery-based cognitive bias modification on positive affect, vividness of positive prospective imagery and interpretation biases in older adults was measured. 77 older adults received 4 weeks (12 sessions) of imagery cognitive bias modification or a control condition. They were assessed at baseline, post-training and at a one-month follow-up. Both groups reported decreased negative affect and trait anxiety, and increased optimism across the three assessments. Imagery cognitive bias modification significantly increased the vividness of positive prospective imagery post-training, compared with the control training. Contrary to our hypothesis, there was no difference between the training groups in negative interpretation bias. This is a useful demonstration that it is possible to successfully engage older adults in computer-based cognitive training and to enhance the vividness of positive imagery about the future in this group. Future studies are needed to assess the longer-term consequences of such training and the impact on affect and wellbeing in more vulnerable groups. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  18. Coastal Survey Using Unmanned Aerial Systems

    NASA Astrophysics Data System (ADS)

    Walker, G.

    2012-12-01

    Generating high-resolution 3-dimensional coastal imagery from imagery collected by small unmanned aircraft is opening many opportunities to study marine wildlife and its use of coastal habitats, as well as climate change effects on northern coasts, where storm surges are radically altering the coastline. Additionally, the technology is being evaluated for oil spill response planning and preparation. The University of Alaska Fairbanks works extensively with small unmanned aircraft and recently began evaluating their utility for generating survey-grade mapping of topographic features. When generating 3-D maps of coastal regions, however, there are added challenges that the university has identified and is trying to address. Recent projects with Alaska fisheries and BP Exploration Alaska have demonstrated that small unmanned aircraft can support the generation of map-based products that are nearly impossible to generate with other technologies.

  19. Contours identification of elements in a cone beam computed tomography for investigating maxillary cysts

    NASA Astrophysics Data System (ADS)

    Chioran, Doina; Nicoarǎ, Adrian; Roşu, Şerban; Cǎrligeriu, Virgil; Ianeş, Emilia

    2013-10-01

    Digital processing of two-dimensional cone beam computed tomography slices starts with identification of the contour of the elements within. This paper presents the collective work of specialists in medicine and applied mathematics in computer science on the elaboration and implementation of algorithms for dental 2D imagery.

  20. A Method for Identifying Contours in Processing Digital Images from Computer Tomograph

    NASA Astrophysics Data System (ADS)

    Roşu, Şerban; Pater, Flavius; Costea, Dan; Munteanu, Mihnea; Roşu, Doina; Fratila, Mihaela

    2011-09-01

    The first step in digital processing of two-dimensional computed tomography images is to identify the contour of component elements. This paper deals with the collective work of specialists in medicine and applied mathematics in computer science on elaborating new algorithms and methods in medical 2D and 3D imagery.

  1. Image-based surface reconstruction in geomorphometry - merits, limits and developments

    NASA Astrophysics Data System (ADS)

    Eltner, Anette; Kaiser, Andreas; Castillo, Carlos; Rock, Gilles; Neugirg, Fabian; Abellán, Antonio

    2016-05-01

    Photogrammetry and geosciences have been closely linked since the late 19th century due to the acquisition of high-quality 3-D data sets of the environment, but photogrammetry has so far been restricted to a limited range of remote sensing specialists because of the considerable cost of metric systems for the acquisition and treatment of airborne imagery. Today, a wide range of commercial and open-source software tools enable the generation of 3-D and 4-D models of complex geomorphological features by geoscientists and other non-expert users. In addition, very recent rapid developments in unmanned aerial vehicle (UAV) technology allow for the flexible generation of high-quality aerial surveying and ortho-photography at a relatively low cost. The increasing computing capabilities during the last decade, together with the development of high-performance digital sensors and the important software innovations developed by computer-based vision and visual perception research fields, have extended the rigorous processing of stereoscopic image data to 3-D point cloud generation from a series of non-calibrated images. Structure-from-motion (SfM) workflows are based upon algorithms for efficient and automatic orientation of large image sets without further data acquisition information, examples including robust feature detectors like the scale-invariant feature transform for 2-D imagery. Nevertheless, the importance of carrying out well-established fieldwork strategies, using proper camera settings, ground control points and ground truth for understanding the different sources of errors, still needs to be adapted in common scientific practice. This review intends not only to summarise the current state of the art on using SfM workflows in geomorphometry but also to give an overview of terms and fields of application.
    Furthermore, this article aims to quantify already achieved accuracies and used scales, using different strategies in order to evaluate possible stagnations of current developments and to identify key future challenges. It is our belief that some lessons learned from former articles, scientific reports and book chapters concerning the identification of common errors or "bad practices" and some other valuable information may help in guiding the future use of SfM photogrammetry in geosciences.
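    One core step of an SfM pipeline, triangulating a 3-D point from two oriented views, can be sketched as follows. The camera matrices and the point below are synthetic; in a real workflow the orientations come from feature matching and bundle adjustment, and the image measurements carry noise.

```python
import numpy as np

# Two assumed pinhole cameras: shared intrinsics K, second camera translated
# one unit along x (a simple stereo baseline).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.3, -0.2, 5.0, 1.0])     # ground-truth homogeneous point

def project(P, X):
    x = P @ X
    return x[:2] / x[2]

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation from two views via the SVD null vector."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    Xh = np.linalg.svd(A)[2][-1]
    return Xh[:3] / Xh[3]

x1, x2 = project(P1, X_true), project(P2, X_true)
X_est = triangulate(P1, P2, x1, x2)
```

    Repeating this over thousands of matched features is what produces the dense point clouds discussed in the review; ground control points then scale and georeference the result.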

  2. The Effects of Program Embedded Learning Strategies, Using an Imagery Cue Strategy and an Attention Directing Strategy, to Improve Learning from Micro Computer Based Instruction (MCBI).

    ERIC Educational Resources Information Center

    Taylor, William; And Others

    The effects of the Attention Directing Strategy and Imagery Cue Strategy as program embedded learning strategies for microcomputer-based instruction (MCBI) were examined in this study. Eight learning conditions with identical instructional content on the parts and operation of the human heart were designed: either self-paced or externally-paced,…

  3. A self-paced motor imagery based brain-computer interface for robotic wheelchair control.

    PubMed

    Tsui, Chun Sing Louis; Gan, John Q; Hu, Huosheng

    2011-10-01

    This paper presents a simple self-paced motor imagery based brain-computer interface (BCI) to control a robotic wheelchair. An innovative control protocol is proposed to enable a 2-class self-paced BCI for wheelchair control, in which the user performs path planning and fully controls the wheelchair except for automatic obstacle avoidance, based on a laser range finder, when necessary. In order for users to train their motor imagery control online safely and easily, simulated robot navigation in a specially designed environment was developed. This allowed users to practice motor imagery control with the core self-paced BCI system in a simulated scenario before controlling the wheelchair. The self-paced BCI can then be applied to control a real robotic wheelchair using a protocol similar to that used for the simulated robot. Our emphasis is on allowing more potential users to operate the BCI-controlled wheelchair with minimal training; a simple 2-class self-paced system is adequate with the novel control protocol, resulting in a better transition from offline training to online control. Experimental results have demonstrated the usefulness of the online practice under the simulated scenario, and the effectiveness of the proposed self-paced BCI for robotic wheelchair control.

  4. Information fusion performance evaluation for motion imagery data using mutual information: initial study

    NASA Astrophysics Data System (ADS)

    Grieggs, Samuel M.; McLaughlin, Michael J.; Ezekiel, Soundararajan; Blasch, Erik

    2015-06-01

    As technology and internet use grow at an exponential rate, video and imagery data are becoming increasingly important. Various techniques such as Wide Area Motion Imagery (WAMI), Full Motion Video (FMV), and Hyperspectral Imaging (HSI) are used to collect motion data and extract relevant information. Detecting and identifying a particular object in imagery data is an important step in understanding visual imagery, such as in content-based image retrieval (CBIR). Imagery data is segmented, automatically analyzed, and stored in a dynamic and robust database. In our system, we seek to utilize image fusion methods, which require quality metrics. Many Image Fusion (IF) algorithms have been proposed, but only a few metrics are used to evaluate their performance. In this paper, we seek a robust, objective metric to evaluate the performance of IF algorithms that compares the outcome of a given algorithm to ground truth and reports several types of errors. Given the ground truth of motion imagery data, it computes detection failure, false alarm, precision and recall metrics, background and foreground region statistics, as well as splits and merges of foreground regions. Using the Structural Similarity Index (SSIM), Mutual Information (MI), and entropy metrics, experimental results demonstrate the effectiveness of the proposed methodology for object detection, activity exploitation, and CBIR.
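    The mutual information metric central to this evaluation can be estimated from a joint histogram of two images. A minimal sketch of that idea; the bin count, image size, and random test images below are illustrative choices, not the paper's settings:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram-based mutual information estimate between two images."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()        # joint probability table
    px = pxy.sum(axis=1)             # marginal of img_a
    py = pxy.sum(axis=0)             # marginal of img_b
    nz = pxy > 0                     # skip empty cells to avoid log(0)
    outer = px[:, None] * py[None, :]
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / outer[nz])))

rng = np.random.default_rng(0)
a = rng.random((64, 64))
mi_self = mutual_information(a, a)                     # high: an image shares all information with itself
mi_rand = mutual_information(a, rng.random((64, 64)))  # near zero: independent images
```

    A fused image that preserves more of the ground-truth content scores a higher MI against it, which is what makes MI usable as a fusion quality metric.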

  5. Thermal study of the Missouri River in North Dakota using infrared imagery

    NASA Technical Reports Server (NTRS)

    Crosby, O. A.

    1971-01-01

    Studies of infrared imagery obtained from aircraft at 305- to 1,524-meter altitudes indicate the feasibility of monitoring thermal changes attributable to the operation of thermal electric plants and storage reservoirs, as well as natural phenomena such as tributary inflow and ground water seeps in large rivers. No identifiable sources of ground water inflow below the surface of the river could be found in the imagery. The thermal patterns from the generating plants and the major tributary inflow are readily apparent in imagery obtained from an altitude of 305 meters. Portions of the tape-recorded imagery were processed in a color-coded quantization to enhance the displays and to attach quantitative significance to the data. The study indicates a marked decrease in water temperature in the Missouri River prior to early fall and a moderate increase in temperature in late fall because of the Lake Sakakawea impoundment.

  6. Biocybernetic factors in human perception and memory

    NASA Technical Reports Server (NTRS)

    Lai, D. C.

    1975-01-01

    The objective of this research is to develop biocybernetic techniques for use in the analysis and development of skills required for the enhancement of concrete images of the 'eidetic' type. The scan patterns of the eye during inspection of scenes are treated as indicators of the brain's strategy for the intake of visual information. The authors determine the features that differentiate visual scan patterns associated with superior imagery from scan patterns associated with inferior imagery, and simultaneously differentiate the EEG features correlated with superior imagery from those correlated with inferior imagery. A closely-coupled man-machine system has been designed to generate image enhancement and to train the individual to exert greater voluntary control over his own imagery. The models for EEG signals and saccadic eye movement in the man-machine system have been completed. The report describes the details of these models and discusses their usefulness.

  7. Observation of wave celerity evolution in the nearshore using digital video imagery

    NASA Astrophysics Data System (ADS)

    Yoo, J.; Fritz, H. M.; Haas, K. A.; Work, P. A.; Barnes, C. F.; Cho, Y.

    2008-12-01

    Celerity of incident waves in the nearshore is observed from oblique video imagery collected at Myrtle Beach, S.C. The video camera covers a field of view with length scales of O(100) m. Celerity of waves propagating in shallow water, including the surf zone, is estimated by applying advanced image processing and analysis methods to individual video images sampled at 3 Hz. Original image sequences are processed through video frame differencing and directional low-pass filtering to reduce the noise arising from foam in the surf zone. The breaking wave celerity is computed along a cross-shore transect from the wave crest tracks extracted by a Radon transform-based line detection method. The observed celerity from the nearshore video imagery is larger than the linear wave celerity computed from the measured water depths over the entire surf zone. Compared to the nonlinear shallow water wave equation (NSWE)-based celerity computed using the measured depths and wave heights, the video-based celerity in general shows good agreement over the surf zone except in the regions around the incipient wave breaking locations. In those regions, the observed wave celerity is even larger than the NSWE-based celerity due to the transition of wave crest shapes. The observed celerity from the video imagery can be used to monitor the nearshore geometry through depth inversion based on nonlinear wave celerity theories. For this purpose, the excess celerity across the breaker points needs to be corrected against the nonlinear wave celerity theory applied.
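    The gap the abstract describes between linear and nonlinear celerity estimates can be illustrated with standard shallow-water formulas; a sketch assuming c = sqrt(g*h) for the linear case and a bore-type correction c = sqrt(g*(h + H)) as one common nonlinear estimate (the paper's exact NSWE formulation may differ):

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def linear_celerity(depth_m):
    """Linear shallow-water wave celerity: c = sqrt(g * h)."""
    return math.sqrt(G * depth_m)

def nonlinear_celerity(depth_m, wave_height_m):
    """A bore-type nonlinear estimate: c = sqrt(g * (h + H)).
    One common form; the abstract's NSWE-based celerity may differ."""
    return math.sqrt(G * (depth_m + wave_height_m))

# In 2 m of water with a 0.8 m breaking wave, the nonlinear estimate
# exceeds the linear one, consistent with the video observations.
c_lin = linear_celerity(2.0)         # ~4.43 m/s
c_nl = nonlinear_celerity(2.0, 0.8)  # ~5.24 m/s
```

    Depth inversion runs this relation in reverse: given an observed celerity and a celerity theory, solve for the depth h.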

  8. Determination of turbidity patterns in Lake Chicot from LANDSAT MSS imagery

    NASA Technical Reports Server (NTRS)

    Lecroy, S. R. (Principal Investigator)

    1982-01-01

    A historical analysis of all the applicable LANDSAT imagery was conducted on the turbidity patterns of Lake Chicot, located in the southeastern corner of Arkansas. By examining the seasonal and regional turbidity patterns, a record of sediment dynamics and possible deposition can be obtained. Sketches were generated from the suitable imagery, displaying different intensities of brightness observed in bands 5 and 7 of LANDSAT's multispectral scanner data. Differences in and between bands 5 and 7 indicate variances in the levels of surface sediment concentrations. High sediment loads are revealed when distinct patterns appear in the band 7 imagery. Additionally, the upwelled signal is exponential in nature and saturates in band 5 at low wavelengths for large concentrations of suspended solids.

  9. Imagery Based Elaboration as an Index of EMR Children's Creativity and Incidental Associative Learning.

    ERIC Educational Resources Information Center

    Greeson, Larry E.; Vane, Raymond J.

    1986-01-01

    Educable mentally retarded (EMR) 13- to 15-year-olds (N=19) and matched mental-age comparison subjects (N=22) participated in an imagery-based, associative learning pictorial elaboration task, followed by a delayed test of incidental learning. Both groups were able to generate original elaborations, although fluency and incidental learning scores…

  10. Forest and range mapping in the Houston area with ERTS-1

    NASA Technical Reports Server (NTRS)

    Heath, G. R.; Parker, H. D.

    1973-01-01

    ERTS-1 data acquired over the Houston area has been analyzed for applications to forest and range mapping. In the field of forestry the Sam Houston National Forest (Texas) was chosen as a test site, (Scene ID 1037-16244). Conventional imagery interpretation as well as computer processing methods were used to make classification maps of timber species, condition and land-use. The results were compared with timber stand maps which were obtained from aircraft imagery and checked in the field. The preliminary investigations show that conventional interpretation techniques indicated an accuracy in classification of 63 percent. The computer-aided interpretations made by a clustering technique gave 70 percent accuracy. Computer-aided and conventional multispectral analysis techniques were applied to range vegetation type mapping in the gulf coast marsh. Two species of salt marsh grasses were mapped.

  11. The 3D Recognition, Generation, Fusion, Update and Refinement (RG4) Concept

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Cheeseman, Peter; Smelyanskyi, Vadim N.; Kuehnel, Frank; Morris, Robin D.; Norvig, Peter (Technical Monitor)

    2001-01-01

    This paper describes an active (real time) recognition strategy whereby information is inferred iteratively across several viewpoints in descent imagery. We will show how we use inverse theory within the context of parametric model generation, namely height and spectral reflection functions, to generate model assertions. Using this strategy in an active context implies that, from every viewpoint, the proposed system must refine its hypotheses taking into account the image and the effect of uncertainties as well. The proposed system employs probabilistic solutions to the problem of iteratively merging information (images) from several viewpoints. This involves feeding the posterior distribution from all previous images as a prior for the next view. New approaches will be developed to accelerate the inversion search using novel statistical implementations and by reducing model complexity through foveated vision. Foveated vision refers to imagery where the resolution varies across the image. In this paper, we allow the model to be foveated, where the highest-resolution region is called the foveation region. Typically, the images will have dynamic control of the location of the foveation region. For descent imagery in the Entry, Descent, and Landing (EDL) process, it is possible to have more than one foveation region. This research initiative is directed towards descent imagery in connection with NASA's EDL applications. Three-Dimensional Model Recognition, Generation, Fusion, Update, and Refinement (RGFUR or RG4) for height and the spectral reflection characteristics are in focus for various reasons, one of which is the prospect that their interpretation will provide for real-time active vision for automated EDL.

  12. Task-selective memory effects for successfully implemented encoding strategies.

    PubMed

    Leshikar, Eric D; Duarte, Audrey; Hertzog, Christopher

    2012-01-01

    Previous behavioral evidence suggests that instructed strategy use benefits associative memory formation in paired associate tasks. Two such effective encoding strategies--visual imagery and sentence generation--facilitate memory through the production of different types of mediators (e.g., mental images and sentences). Neuroimaging evidence suggests that regions of the brain support memory reflecting the mental operations engaged at the time of study. That work, however, has not taken into account self-reported encoding task success (i.e., whether participants successfully generated a mediator). It is unknown, therefore, whether task-selective memory effects specific to each strategy might be found when encoding strategies are successfully implemented. In this experiment, participants studied pairs of abstract nouns under either visual imagery or sentence generation encoding instructions. At the time of study, participants reported their success at generating a mediator. Outside of the scanner, participants further reported the quality of the generated mediator (e.g., images, sentences) for each word pair. We observed task-selective memory effects for visual imagery in the left middle occipital gyrus, the left precuneus, and the lingual gyrus. No such task-selective effects were observed for sentence generation. Intriguingly, activity at the time of study in the left precuneus was modulated by the self-reported quality (vividness) of the generated mental images with greater activity for trials given higher ratings of quality. These data suggest that regions of the brain support memory in accord with the encoding operations engaged at the time of study.

  13. Fractal analysis of seafloor textures for target detection in synthetic aperture sonar imagery

    NASA Astrophysics Data System (ADS)

    Nabelek, T.; Keller, J.; Galusha, A.; Zare, A.

    2018-04-01

    Fractal analysis of an image is a mathematical approach to generate surface-related features from an image or image tile that can be applied to image segmentation and object recognition. In undersea target countermeasures, the targets of interest can appear as anomalies in a variety of contexts, i.e., against visually different textures on the seafloor. In this paper, we evaluate the use of fractal dimension as a primary feature and related characteristics as secondary features to be extracted from synthetic aperture sonar (SAS) imagery for the purpose of target detection. We develop three separate methods for computing fractal dimension. Tiles with targets are compared to others from the same background textures without targets. The different fractal dimension feature methods are tested with respect to how well they can be used to detect targets vs. false alarms within the same contexts. These features are evaluated for utility using a set of image tiles extracted from a SAS data set generated by the U.S. Navy in conjunction with the Office of Naval Research. We find that all three methods perform well in the classification task, with a fractional Brownian motion model performing the best among the individual methods. We also find that the secondary features are just as useful, if not more so, in classifying false alarms vs. targets. The best classification accuracy overall, in our experimentation, is found when the features from all three methods are combined into a single feature vector.
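    The paper's three fractal-dimension estimators are not specified in the abstract; as an illustration of the general idea, a box-counting estimate for a binary tile might look like the following (square power-of-two tiles are assumed for simplicity):

```python
import numpy as np

def box_counting_dimension(tile):
    """Estimate fractal dimension of a square binary tile by box counting:
    count occupied boxes N(s) at dyadic box sizes s, then fit
    log N(s) ~ -D * log s."""
    n = tile.shape[0]
    sizes, counts = [], []
    s = n // 2
    while s >= 1:
        occupied = 0
        for i in range(0, n, s):
            for j in range(0, n, s):
                if tile[i:i + s, j:j + s].any():
                    occupied += 1
        sizes.append(s)
        counts.append(occupied)
        s //= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Sanity checks: a filled tile behaves like a 2-D set, a single row like a 1-D set.
d_area = box_counting_dimension(np.ones((64, 64), dtype=bool))
line = np.zeros((64, 64), dtype=bool)
line[32, :] = True
d_line = box_counting_dimension(line)
```

    Seafloor textures fall between these extremes, and targets tend to perturb the local estimate, which is what makes the dimension usable as a detection feature.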

  14. Window-based method for approximating the Hausdorff in three-dimensional range imagery

    DOEpatents

    Koch, Mark W [Albuquerque, NM

    2009-06-02

    One approach to pattern recognition is to use a template from a database of objects and match it to a probe image containing the unknown. Accordingly, the Hausdorff distance can be used to measure the similarity of two sets of points. In particular, the Hausdorff can measure the goodness of a match in the presence of occlusion, clutter, and noise. However, existing 3D algorithms for calculating the Hausdorff are computationally intensive, making them impractical for pattern recognition that requires scanning of large databases. The present invention is directed to a new method that can efficiently, in time and memory, compute the Hausdorff for 3D range imagery. The method uses a window-based approach.
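    The patent's window-based approximation is not detailed in this abstract; as a baseline, the exact Hausdorff distance it accelerates can be computed by brute force, which is O(|A||B|) and precisely what becomes impractical when scanning large 3-D databases:

```python
import numpy as np

def directed_hausdorff(a, b):
    """h(A, B) = max over a in A of min over b in B of ||a - b||."""
    diffs = a[:, None, :] - b[None, :, :]       # all pairwise differences
    dists = np.sqrt((diffs ** 2).sum(axis=-1))  # |A| x |B| distance matrix
    return dists.min(axis=1).max()

def hausdorff(a, b):
    """Symmetric Hausdorff distance H(A, B) = max(h(A,B), h(B,A))."""
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))

# Toy 3-D point sets: the far point (4,0,0) in B dominates the distance.
A = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
B = np.array([[0.0, 0.0, 0.0], [4.0, 0.0, 0.0]])
```

    The max-of-min structure is what makes the measure tolerant of clutter and partial occlusion: a good template match keeps every template point close to some probe point even if the probe contains extra points.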

  15. Real time system design of motor imagery brain-computer interface based on multi band CSP and SVM

    NASA Astrophysics Data System (ADS)

    Zhao, Li; Li, Xiaoqin; Bian, Yan

    2018-04-01

    Motor imagery (MI) is an effective method to promote the recovery of limbs in patients after stroke. An online MI brain-computer interface (BCI) system can enhance the patient's participation and accelerate the recovery process. The traditional method processes the electroencephalogram (EEG) induced by MI with the common spatial pattern (CSP) algorithm, which extracts information from a single frequency band. In order to further improve the classification accuracy of the system, information from two characteristic frequency bands is extracted. The effectiveness of the proposed feature extraction method is verified by off-line analysis of competition data and by analysis of the online system.
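    The CSP step can be sketched in pure NumPy as whitening followed by an eigendecomposition; the toy two-channel covariances and filter count below are illustrative, not the paper's data:

```python
import numpy as np

def csp_filters(cov_a, cov_b, n_pairs=1):
    """Common spatial patterns: whiten the composite covariance, then
    eigendecompose the whitened class-A covariance. Filters at the two
    ends of the eigenvalue spectrum maximize variance for one class
    while minimizing it for the other."""
    d, u = np.linalg.eigh(cov_a + cov_b)
    p = (u / np.sqrt(d)).T                        # whitening: p @ (cov_a+cov_b) @ p.T = I
    vals, vecs = np.linalg.eigh(p @ cov_a @ p.T)  # ascending eigenvalues in [0, 1]
    w = vecs.T @ p                                # one spatial filter per row
    idx = list(range(n_pairs)) + list(range(len(vals) - n_pairs, len(vals)))
    return w[idx]

# Class A has high variance on channel 0, class B on channel 1.
cov_a = np.diag([4.0, 1.0])
cov_b = np.diag([1.0, 4.0])
W = csp_filters(cov_a, cov_b, n_pairs=1)
var_a = np.diag(W @ cov_a @ W.T)  # variances of filtered class-A data
var_b = np.diag(W @ cov_b @ W.T)  # ordering reverses for class B
```

    Log-variances of the filtered trials, computed per frequency band, would then serve as the features fed to an SVM classifier.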

  16. Visualization of simulated urban spaces: inferring parameterized generation of streets, parcels, and aerial imagery.

    PubMed

    Vanegas, Carlos A; Aliaga, Daniel G; Benes, Bedrich; Waddell, Paul

    2009-01-01

    Urban simulation models and their visualization are used to help regional planning agencies evaluate alternative transportation investments, land use regulations, and environmental protection policies. Typical urban simulations provide spatially distributed data about number of inhabitants, land prices, traffic, and other variables. In this article, we build on a synergy of urban simulation, urban visualization, and computer graphics to automatically infer an urban layout for any time step of the simulation sequence. In addition to standard visualization tools, our method gathers data of the original street network, parcels, and aerial imagery and uses the available simulation results to infer changes to the original urban layout and produce a new and plausible layout for the simulation results. In contrast with previous work, our approach automatically updates the layout based on changes in the simulation data and thus can scale to a large simulation over many years. The method in this article offers a substantial step forward in building integrated visualization and behavioral simulation systems for use in community visioning, planning, and policy analysis. We demonstrate our method on several real cases using a 200 GB database for a 16,300 km2 area surrounding Seattle.

  17. Atmospheric Correction of High-Spatial-Resolution Commercial Satellite Imagery Products Using MODIS Atmospheric Products

    NASA Technical Reports Server (NTRS)

    Pagnutti, Mary; Holekamp, Kara; Ryan, Robert E.; Vaughan, Ronald; Russell, Jeffrey A.; Prados, Don; Stanley, Thomas

    2005-01-01

    Remotely sensed ground reflectance is the basis for many inter-sensor interoperability and change detection techniques. Satellite inter-comparisons and accurate vegetation indices such as the Normalized Difference Vegetation Index, which is used to describe or imply a wide variety of biophysical parameters and is defined in terms of near-infrared and red-band reflectance, require the generation of accurate reflectance maps. This generation relies upon the removal of solar illumination, satellite geometry, and atmospheric effects and is generally referred to as atmospheric correction. Atmospheric correction of remotely sensed imagery to ground reflectance, however, has been widely applied to only a few systems. In this study, we atmospherically corrected commercially available, high-spatial-resolution IKONOS and QuickBird imagery using several methods to determine the accuracy of the resulting reflectance maps. We used extensive ground measurement datasets for nine IKONOS and QuickBird scenes acquired over a two-year period to establish reflectance map accuracies. A correction approach using atmospheric products derived from Moderate Resolution Imaging Spectroradiometer (MODIS) data created excellent reflectance maps and demonstrated a reliable, effective method for reflectance map generation.
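    The NDVI mentioned above is defined directly from near-infrared and red-band reflectance; a minimal sketch (the sample reflectance values are illustrative):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index:
    NDVI = (NIR - red) / (NIR + red), bounded in [-1, 1]."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# Healthy vegetation reflects strongly in NIR and absorbs red,
# so its NDVI approaches 1; bare soil sits near 0.
veg = float(ndvi(0.50, 0.08))
soil = float(ndvi(0.25, 0.20))
```

    Because NDVI is a ratio of surface reflectances, uncorrected atmospheric effects bias it directly, which is why the accuracy of the reflectance maps matters.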

  18. A comparison of airborne GEMS/SAR with satellite-borne Seasat/SAR radar imagery - The value of archived multiple data sets

    NASA Technical Reports Server (NTRS)

    Hanson, Bradford C.; Dellwig, Louis F.

    1988-01-01

    In a study concerning the value of using radar imagery from systems with diverse parameters, X-band images of the northern Louisiana salt dome area generated by the airborne Goodyear electronic mapping system (GEMS) are analyzed in conjunction with imagery generated by the satellite-borne Seasat/SAR. The GEMS operated with an incidence angle of 75 to 85 deg and a resolution of 12 m, whereas the Seasat/SAR operated with an incidence angle of 23 deg and a resolution of 25 m. It is found that otherwise unattainable data on land management activities, improved delineation of the drainage net, better definition of surface roughness in cleared areas, and swamp identification became accessible when adjustments for the time lapse between the two missions were made and supporting ground data concerning the physical and vegetative characteristics of the terrain were acquired.

  19. Real-time orthorectification by FPGA-based hardware acceleration

    NASA Astrophysics Data System (ADS)

    Kuo, David; Gordon, Don

    2010-10-01

    Orthorectification, which corrects the perspective distortion of remote sensing imagery and provides accurate geolocation and easy correlation to other images, is a valuable first step in image processing for information extraction. However, the large amount of metadata and the floating-point matrix transformations required to operate on each pixel make this a computation- and I/O (Input/Output)-intensive process. As a result, much imagery is either left unprocessed or loses time-sensitive value in the long processing cycle. However, the computation on each pixel can be reduced substantially by reusing computational results from neighboring pixels, and accelerated by one to two orders of magnitude with a special pipelined hardware architecture. A specialized coprocessor, implemented inside an FPGA (Field Programmable Gate Array) chip and surrounded by vendor-supported hardware IP (Intellectual Property), shares the computation workload with the CPU through a PCI-Express interface. The ultimate speed of one pixel per clock (125 MHz) is achieved by the pipelined systolic array architecture. The optimal partition between software and hardware, the timing profile among image I/O and computation, and the highly automated GUI (Graphical User Interface) that fully exploits this speed increase to maximize overall image production throughput will also be discussed. The software that runs on a workstation with the acceleration hardware orthorectifies 16 megapixels per second, 16 times faster than without the hardware, turning production time from months into days. A real-life success story of an imaging satellite company that adopted such workstations for their orthorectified imagery production will be presented. Candidate image processing computations that could be accelerated more efficiently by the same approach will also be analyzed.

  20. Xpatch prediction improvements to support multiple ATR applications

    NASA Astrophysics Data System (ADS)

    Andersh, Dennis J.; Lee, Shung W.; Moore, John T.; Sullivan, Douglas P.; Hughes, Jeff A.; Ling, Hao

    1998-08-01

    This paper describes an electromagnetic computer prediction code for generating radar cross section (RCS), time-domain signatures, and synthetic aperture radar (SAR) images of realistic 3D vehicles. The vehicle, typically an airplane or a ground vehicle, is represented by a computer-aided design (CAD) file with triangular facets, IGES curved surfaces, or solid geometries. The computer code, Xpatch, based on the shooting-and-bouncing-ray technique, is used to calculate the polarimetric radar return from the vehicles represented by these different CAD files. Xpatch computes the first-bounce physical optics (PO) plus the physical theory of diffraction (PTD) contributions, and calculates the multi-bounce ray contributions by using geometric optics and PO for complex vehicles with materials. It has been found that without the multi-bounce calculations, the radar return is typically 10 to 15 dB too low. Examples of predicted range profiles, SAR imagery, and RCS for several different geometries are compared with measured data to demonstrate the quality of the predictions. Recent enhancements to Xpatch include improvements for millimeter wave applications, hybridization with the finite element method for small geometric features, and augmentation of additional IGES entities to support trimmed and untrimmed surfaces.

  1. XPATCH: a high-frequency electromagnetic scattering prediction code using shooting and bouncing rays

    NASA Astrophysics Data System (ADS)

    Hazlett, Michael; Andersh, Dennis J.; Lee, Shung W.; Ling, Hao; Yu, C. L.

    1995-06-01

    This paper describes an electromagnetic computer prediction code for generating radar cross section (RCS), time domain signatures, and synthetic aperture radar (SAR) images of realistic 3-D vehicles. The vehicle, typically an airplane or a ground vehicle, is represented by a computer-aided design (CAD) file with triangular facets, curved surfaces, or solid geometries. The computer code, XPATCH, based on the shooting and bouncing ray technique, is used to calculate the polarimetric radar return from the vehicles represented by these different CAD files. XPATCH computes the first-bounce physical optics plus the physical theory of diffraction contributions and the multi-bounce ray contributions for complex vehicles with materials. It has been found that the multi-bounce contributions are crucial for many aspect angles of all classes of vehicles. Without the multi-bounce calculations, the radar return is typically 10 to 15 dB too low. Examples of predicted range profiles, SAR imagery, and radar cross sections (RCS) for several different geometries are compared with measured data to demonstrate the quality of the predictions. The comparisons are from the UHF through the Ka frequency ranges. Recent enhancements to XPATCH for MMW applications and target Doppler predictions are also presented.

  2. Content-based image exploitation for situational awareness

    NASA Astrophysics Data System (ADS)

    Gains, David

    2008-04-01

    Image exploitation is of increasing importance to the enterprise of building situational awareness from multi-source data. It involves image acquisition, identification of objects of interest in imagery, storage, search and retrieval of imagery, and the distribution of imagery over possibly bandwidth limited networks. This paper describes an image exploitation application that uses image content alone to detect objects of interest, and that automatically establishes and preserves spatial and temporal relationships between images, cameras and objects. The application features an intuitive user interface that exposes all images and information generated by the system to an operator thus facilitating the formation of situational awareness.

  3. Strategies in Reading Comprehension: VIII. Pictures, Imagery, and Retarded Children's Story Recall. Working Paper No. 214.

    ERIC Educational Resources Information Center

    Bender, Bruce G.; Levin, Joel R.

    Ninety-six educable mentally retarded individuals (10-16 years old) were randomly assigned to one of four experimental conditions to listen to a 20-sentence story. Picture Ss viewed illustrations of the story, Imagery Ss were instructed to generate mental pictures of the story, Repetition Control Ss heard each sentence of the story twice, and…

  4. MODTRAN Radiance Modeling of Multi-Angle Worldview-2 Imagery

    DTIC Science & Technology

    2013-09-01

    In this thesis, multi-angle CHRIS data has been used to validate canopy BRDF models generated using PROSPECT and SAILH radiative transfer models (D'Urso…) …associated with BRDF, and (2) develop software-based atmospheric models, using parameters similar to those found in the imagery, for comparison to…

  5. Regional Land Use Mapping: the Phoenix Pilot Project

    NASA Technical Reports Server (NTRS)

    Anderson, J. R.; Place, J. L.

    1971-01-01

    The Phoenix Pilot Program has been designed to make effective use of past experience in making land use maps and collecting land use information. Conclusions reached from the project are: (1) Land use maps and accompanying statistical information of reasonable accuracy and quality can be compiled at a scale of 1:250,000 from orbital imagery. (2) Orbital imagery used in conjunction with other sources of information when available can significantly enhance the collection and analysis of land use information. (3) Orbital imagery combined with modern computer technology will help resolve the problem of obtaining land use data quickly and on a regular basis, which will greatly enhance the usefulness of such data in regional planning, land management, and other applied programs. (4) Agreement on a framework or scheme of land use classification for use with orbital imagery will be necessary for effective use of land use data.

  6. Stream network analysis and geomorphic flood plain mapping from orbital and suborbital remote sensing imagery application to flood hazard studies in central Texas

    NASA Technical Reports Server (NTRS)

    Baker, V. R. (Principal Investigator); Holz, R. K.; Hulke, S. D.; Patton, P. C.; Penteado, M. M.

    1975-01-01

    The author has identified the following significant results. Development of a quantitative hydrogeomorphic approach to flood hazard evaluation was hindered by (1) problems of resolution and definition of the morphometric parameters which have hydrologic significance, and (2) mechanical difficulties in creating the necessary volume of data for meaningful analysis. Measures of network resolution such as drainage density and basin Shreve magnitude indicated that large scale topographic maps offered greater resolution than small scale suborbital imagery and orbital imagery. The disparity in network resolution capabilities between orbital and suborbital imagery formats depends on factors such as rock type, vegetation, and land use. The problem of morphometric data analysis was approached by developing a computer-assisted method for network analysis. The system allows rapid identification of network properties which can then be related to measures of flood response.

  7. Remote imagery for unmanned ground vehicles: the future of path planning for ground robotics

    NASA Astrophysics Data System (ADS)

    Frederick, Philip A.; Theisen, Bernard L.; Ward, Derek

    2006-10-01

    Remote Imagery for Unmanned Ground Vehicles (RIUGV) uses a combination of high-resolution multi-spectral satellite imagery and advanced commercial off-the-shelf (COTS) object-oriented image processing software to provide automated terrain feature extraction and classification. This information, along with elevation data, infrared imagery, a vehicle mobility model and various meta-data (local weather reports, the Zobler soil map, etc.), is fed into automated path planning software to provide a stand-alone ability to generate rapidly updateable dynamic mobility maps for Manned or Unmanned Ground Vehicles (MGVs or UGVs). These polygon-based mobility maps can reside on an individual platform or a tactical network. When new information is available, change files are generated and ingested into existing mobility maps based on user-selected criteria. Bandwidth concerns are mitigated by the use of shape files for the representation of the data (e.g., each object in the scene is represented by a shape file and thus can be transmitted individually). User input (desired level of stealth, required time of arrival, etc.) determines the priority in which objects are tagged for updates. This paper will also discuss the planned July 2006 field experiment.

  8. Rate distortion optimal bit allocation methods for volumetric data using JPEG 2000.

    PubMed

    Kosheleva, Olga M; Usevitch, Bryan E; Cabrera, Sergio D; Vidal, Edward

    2006-08-01

    Computer modeling programs that generate three-dimensional (3-D) data on fine grids are capable of generating very large amounts of information. These data sets, as well as 3-D sensor/measured data sets, are prime candidates for the application of data compression algorithms. A very flexible and powerful compression algorithm for imagery data is the newly released JPEG 2000 standard. JPEG 2000 also has the capability to compress volumetric data, as described in Part 2 of the standard, by treating the 3-D data as separate slices. As a decoder standard, JPEG 2000 does not describe any specific method to allocate bits among the separate slices. This paper proposes two new bit allocation algorithms for accomplishing this task. The first procedure is rate distortion optimal (for mean squared error), and is conceptually similar to postcompression rate distortion optimization used for coding codeblocks within JPEG 2000. The disadvantage of this approach is its high computational complexity. The second bit allocation algorithm, here called the mixed model (MM) approach, mathematically models each slice's rate distortion curve using two distinct regions to get more accurate modeling at low bit rates. These two bit allocation algorithms are applied to a 3-D meteorological data set. Test results show that the MM approach gives distortion results that are nearly identical to the optimal approach, while significantly reducing computational complexity.
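The rate-distortion-optimal flavor of per-slice bit allocation can be sketched as a Lagrangian (equal-slope) search: each slice independently picks the point minimizing distortion plus a rate penalty, and the multiplier is bisected to meet the bit budget. This is a minimal sketch in the spirit of the record, not the paper's algorithm; the RD points and names are invented for illustration.

```python
# Hedged sketch of Lagrangian (equal-slope) bit allocation across slices.
# Each slice is a list of (rate, distortion) points on its convex RD curve.

def pick(points, lam):
    """Choose the RD point minimizing distortion + lam * rate."""
    return min(points, key=lambda rd: rd[1] + lam * rd[0])

def allocate(slices, budget, iters=60):
    """Bisect on lam until the total chosen rate meets the budget."""
    lo, hi = 0.0, 1e6
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        total = sum(pick(p, lam)[0] for p in slices)
        if total > budget:
            lo = lam   # over budget: penalize rate more heavily
        else:
            hi = lam
    return [pick(p, hi) for p in slices]

demo = [[(0, 10.0), (1, 4.0), (2, 1.0)], [(0, 8.0), (1, 2.0)]]
print(allocate(demo, budget=2))  # [(1, 4.0), (1, 2.0)]
```

The high cost the record attributes to the optimal method comes from needing actual RD points per slice, which in practice means trial encodes; the MM approach replaces those measurements with a two-region analytic model.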

  9. A Real-Time High Performance Computation Architecture for Multiple Moving Target Tracking Based on Wide-Area Motion Imagery via Cloud and Graphic Processing Units

    PubMed Central

    Liu, Kui; Wei, Sixiao; Chen, Zhijiang; Jia, Bin; Chen, Genshe; Ling, Haibin; Sheaff, Carolyn; Blasch, Erik

    2017-01-01

    This paper presents the first attempt at combining Cloud with Graphic Processing Units (GPUs) in a complementary manner within the framework of a real-time high performance computation architecture for the application of detecting and tracking multiple moving targets based on Wide Area Motion Imagery (WAMI). More specifically, the GPU and Cloud Moving Target Tracking (GC-MTT) system applied a front-end web based server to perform the interaction with Hadoop and highly parallelized computation functions based on the Compute Unified Device Architecture (CUDA©). The introduced multiple moving target detection and tracking method can be extended to other applications such as pedestrian tracking, group tracking, and Patterns of Life (PoL) analysis. The cloud and GPUs based computing provides an efficient real-time target recognition and tracking approach as compared to methods when the work flow is applied using only central processing units (CPUs). The simultaneous tracking and recognition results demonstrate that a GC-MTT based approach provides drastically improved tracking with low frame rates over realistic conditions. PMID:28208684

  11. Task-Selective Memory Effects for Successfully Implemented Encoding Strategies

    PubMed Central

    Leshikar, Eric D.; Duarte, Audrey; Hertzog, Christopher

    2012-01-01

    Previous behavioral evidence suggests that instructed strategy use benefits associative memory formation in paired associate tasks. Two such effective encoding strategies, visual imagery and sentence generation, facilitate memory through the production of different types of mediators (e.g., mental images and sentences). Neuroimaging evidence suggests that regions of the brain support memory reflecting the mental operations engaged at the time of study. That work, however, has not taken into account self-reported encoding task success (i.e., whether participants successfully generated a mediator). It is unknown, therefore, whether task-selective memory effects specific to each strategy might be found when encoding strategies are successfully implemented. In this experiment, participants studied pairs of abstract nouns under either visual imagery or sentence generation encoding instructions. At the time of study, participants reported their success at generating a mediator. Outside of the scanner, participants further reported the quality of the generated mediator (e.g., images, sentences) for each word pair. We observed task-selective memory effects for visual imagery in the left middle occipital gyrus, the left precuneus, and the lingual gyrus. No such task-selective effects were observed for sentence generation. Intriguingly, activity at the time of study in the left precuneus was modulated by the self-reported quality (vividness) of the generated mental images with greater activity for trials given higher ratings of quality. These data suggest that regions of the brain support memory in accord with the encoding operations engaged at the time of study. PMID:22693593

  12. Involuntary memories after a positive film are dampened by a visuospatial task: unhelpful in depression but helpful in mania?

    PubMed

    Davies, Charlotte; Malik, Aiysha; Pictet, Arnaud; Blackwell, Simon E; Holmes, Emily A

    2012-01-01

    Spontaneous negative mental images have been extensively researched due to the crucial role they play in conditions such as post-traumatic stress disorder. However, people can also experience spontaneous positive mental images, and these are little understood. Positive images may play a role in promoting healthy positive mood and may be lacking in conditions such as depression. However, they may also occur in problematic states of elevated mood, such as in bipolar disorder. Can we apply an understanding of spontaneous imagery gained by the study of spontaneous negative images to spontaneous positive images? In an analogue of the trauma film studies, 69 volunteers viewed an explicitly positive (rather than traumatic) film. Participants were randomly allocated post-film either to perform a visuospatial task (the computer game 'Tetris') or to a no-task control condition. Viewing the film enhanced positive mood and immediately post-film increased goal setting on a questionnaire measure. The film was successful in generating involuntary memories of specific scenes over the following week. As predicted, compared with the control condition, participants in the visuospatial task condition reported significantly fewer involuntary memories from the film in a diary over the subsequent week. Furthermore, scores on a recognition memory test at 1 week indicated an impairment in voluntary recall of the film in the visuospatial task condition. Clinical implications regarding the modulation of positive imagery after a positive emotional experience are discussed. Generally, boosting positive imagery may be a useful strategy for the recovery of depressed mood. Copyright © 2012 John Wiley & Sons, Ltd.

  13. Meta-image navigation augmenters for unmanned aircraft systems (MINA for UAS)

    NASA Astrophysics Data System (ADS)

    Çelik, Koray; Somani, Arun K.; Schnaufer, Bernard; Hwang, Patrick Y.; McGraw, Gary A.; Nadke, Jeremy

    2013-05-01

    GPS is a critical sensor for Unmanned Aircraft Systems (UASs) due to its accuracy, global coverage and small hardware footprint, but is subject to denial due to signal blockage or RF interference. When GPS is unavailable, position, velocity and attitude (PVA) performance from other inertial and air data sensors is not sufficient, especially for small UASs. Recently, image-based navigation algorithms have been developed to address GPS outages for UASs, since most of these platforms already include a camera as standard equipage. Performing absolute navigation with real-time aerial images requires georeferenced data, either images or landmarks, as a reference. Georeferenced imagery is readily available today, but requires a large amount of storage, whereas collections of discrete landmarks are compact but must be generated by pre-processing. An alternative, compact source of georeferenced data having large coverage area is open source vector maps from which meta-objects can be extracted for matching against real-time acquired imagery. We have developed a novel, automated approach called MINA (Meta Image Navigation Augmenters), which is a synergy of machine-vision and machine-learning algorithms for map aided navigation. As opposed to existing image map matching algorithms, MINA utilizes publicly available open-source geo-referenced vector map data, such as OpenStreetMap, in conjunction with real-time optical imagery from an on-board, monocular camera to augment the UAS navigation computer when GPS is not available. The MINA approach has been experimentally validated with both actual flight data and flight simulation data and results are presented in the paper.

  14. Automatic identification of agricultural terraces through object-oriented analysis of very high resolution DSMs and multispectral imagery obtained from an unmanned aerial vehicle.

    PubMed

    Diaz-Varela, R A; Zarco-Tejada, P J; Angileri, V; Loudjani, P

    2014-02-15

    Agricultural terraces are features that provide a number of ecosystem services. As a result, their maintenance is supported by measures established by the European Common Agricultural Policy (CAP). In the framework of CAP implementation and monitoring, there is a current and future need for the development of robust, repeatable and cost-effective methodologies for the automatic identification and monitoring of these features at farm scale. This is a complex task, particularly when terraces are associated with complex vegetation cover patterns, as happens with permanent crops (e.g. olive trees). In this study we present a novel methodology for automatic and cost-efficient identification of terraces using only imagery from commercial off-the-shelf (COTS) cameras on board unmanned aerial vehicles (UAVs). Using state-of-the-art computer vision techniques, we generated orthoimagery and digital surface models (DSMs) at 11 cm spatial resolution with low user intervention. In a second stage, these data were used to identify terraces using a multi-scale object-oriented classification method. Results show the potential of this method even in highly complex agricultural areas, both regarding DSM reconstruction and image classification. The UAV-derived DSM had a root mean square error (RMSE) lower than 0.5 m when the height of the terraces was assessed against field GPS data. The subsequent automated terrace classification yielded an overall accuracy of 90% based exclusively on spectral and elevation data derived from the UAV imagery. Copyright © 2014 Elsevier Ltd. All rights reserved.
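The DSM accuracy assessment reported above (RMSE of terrace heights against field GPS) is a standard computation; a minimal sketch, with invented example values:

```python
import math

# Illustrative RMSE check of model-derived heights against field GPS
# heights, mirroring the accuracy assessment described in the record.

def rmse(predicted, observed):
    n = len(predicted)
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n)

print(rmse([1.0, 2.0, 3.0], [1.0, 2.5, 2.5]))  # ≈ 0.408
```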

  16. Generating Daily Synthetic Landsat Imagery by Combining Landsat and MODIS Data

    PubMed Central

    Wu, Mingquan; Huang, Wenjiang; Niu, Zheng; Wang, Changyao

    2015-01-01

    Owing to low temporal resolution and cloud interference, there is a shortage of high spatial resolution remote sensing data. To address this problem, this study introduces a modified spatial and temporal data fusion approach (MSTDFA) to generate daily synthetic Landsat imagery. This algorithm was designed to avoid the limitations of the conditional spatial temporal data fusion approach (STDFA), including the constant window for disaggregation and the sensor difference. An adaptive window size selection method is proposed in this study to select the best window size and moving steps for the disaggregation of coarse pixels. The linear regression method is used to remove the influence of differences in sensor systems using disaggregated mean coarse reflectance, by testing and validation in two study areas located in Xinjiang Province, China. The results show that the MSTDFA algorithm can generate daily synthetic Landsat imagery with a high correlation coefficient (R) ranging from 0.646 to 0.986 between synthetic images and the actual observations. We further show that MSTDFA can be applied to 250 m 16-day MODIS MOD13Q1 products and the Landsat Normalized Difference Vegetation Index (NDVI) data by generating a synthetic NDVI image highly similar to the actual Landsat NDVI observation, with a high R of 0.97. PMID:26393607
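The sensor-difference correction the record mentions amounts to a linear (ordinary least-squares) mapping from disaggregated coarse reflectance to the fine sensor's scale. A minimal sketch follows; the variable names and example reflectances are assumptions, not the paper's data.

```python
# Hedged sketch of a linear-regression sensor adjustment: fit an OLS line
# between coarse and fine reflectance samples, then apply it to new values.

def ols_fit(x, y):
    """Return (slope, intercept) of the least-squares line y = slope*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

def correct(coarse, slope, intercept):
    """Map disaggregated coarse reflectance onto the fine sensor's scale."""
    return [slope * c + intercept for c in coarse]

slope, icept = ols_fit([0.1, 0.2, 0.3], [0.12, 0.22, 0.32])
print(correct([0.25], slope, icept))  # ≈ [0.27]
```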

  18. EEG-based classification of imaginary left and right foot movements using beta rebound.

    PubMed

    Hashimoto, Yasunari; Ushiba, Junichi

    2013-11-01

    The purpose of this study was to investigate cortical lateralization of event-related (de)synchronization during left and right foot motor imagery tasks and to determine classification accuracy of the two imaginary movements in a brain-computer interface (BCI) paradigm. We recorded 31-channel scalp electroencephalograms (EEGs) from nine healthy subjects during brisk imagery tasks of left and right foot movements. EEG was analyzed with time-frequency maps and topographies, and the accuracy rate of classification between left and right foot movements was calculated. Beta rebound at the end of imagination (increase of EEG beta rhythm amplitude) was identified from the two EEGs derived from the right-shift and left-shift bipolar pairs at the vertex. This process enabled discrimination between right or left foot imagery at a high accuracy rate (maximum 81.6% in single trial analysis). These data suggest that foot motor imagery has potential to elicit left-right differences in EEG, while BCI using the unilateral foot imagery can achieve high classification accuracy, similar to ordinary BCI, based on hand motor imagery. By combining conventional discrimination techniques, the left-right discrimination of unilateral foot motor imagery provides a novel BCI system that could control a foot neuroprosthesis or a robotic foot. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
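The decision rule implied by the record, comparing post-imagery beta rebound between two bipolar derivations at the vertex, can be caricatured as follows. This is only a sketch: the band-power computation is simplified to mean squared amplitude of an already-filtered epoch, and the side-to-foot mapping shown is an assumed convention, not the paper's.

```python
# Hedged sketch of left/right foot classification from beta rebound in two
# bipolar EEG channels. Filtering, epoching, and the laterality convention
# are all simplified assumptions for illustration.

def band_power(samples):
    """Mean squared amplitude of an (already beta-band-filtered) epoch."""
    return sum(s * s for s in samples) / len(samples)

def classify_foot(left_bipolar_post, right_bipolar_post):
    """Pick the imagined foot from the stronger post-imagery beta rebound;
    the label mapping here is an illustrative assumption."""
    if band_power(left_bipolar_post) > band_power(right_bipolar_post):
        return "left"
    return "right"

print(classify_foot([0.5, -0.6, 0.4], [0.1, -0.2, 0.1]))  # left
```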

  19. Generating high-accuracy urban distribution map for short-term change monitoring based on convolutional neural network by utilizing SAR imagery

    NASA Astrophysics Data System (ADS)

    Iino, Shota; Ito, Riho; Doi, Kento; Imaizumi, Tomoyuki; Hikosaka, Shuhei

    2017-10-01

    In developing countries, urban areas are expanding rapidly, and with such rapid development, short-term monitoring of urban changes becomes important. Constant observation and the creation of high-accuracy, noise-free urban distribution maps are the key requirements for short-term monitoring. SAR satellites are highly suitable for this type of study because they can observe day or night, regardless of weather conditions. The current study presents a methodology for generating high-accuracy urban distribution maps from SAR satellite imagery using a Convolutional Neural Network (CNN), which has shown outstanding results in image classification. Several improvements in SAR polarization combinations and dataset construction were made to increase accuracy. As additional data, Digital Surface Models (DSMs), which are useful for classifying land cover, were added to improve accuracy further. From the obtained results, a high-accuracy urban distribution map satisfying the quality requirements for short-term monitoring was generated. For evaluation, urban changes were extracted by taking the difference of urban distribution maps. Change analysis over a time series of imagery revealed the locations of short-term urban change. Comparisons with optical satellites were performed to validate the results. Finally, analysis of urban changes combining X-band, L-band and C-band SAR satellites was attempted to increase the opportunities for acquiring satellite imagery. Further analysis will be conducted as future work of the present study.
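The change-extraction step described in the record, taking the difference of urban distribution maps from two dates, is straightforward on binary grids. A minimal sketch (grids and labels are illustrative, not the study's data):

```python
# Sketch of urban-change extraction by differencing two binary urban
# distribution maps (1 = urban, 0 = non-urban) from successive dates.

def urban_changes(map_t0, map_t1):
    """Return a grid of +1 (newly urban), -1 (urban removed), 0 (no change)."""
    return [[b - a for a, b in zip(row0, row1)]
            for row0, row1 in zip(map_t0, map_t1)]

before = [[0, 0], [1, 1]]
after  = [[1, 0], [1, 0]]
print(urban_changes(before, after))  # [[1, 0], [0, -1]]
```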

  20. Time delays in flight simulator visual displays

    NASA Technical Reports Server (NTRS)

    Crane, D. F.

    1980-01-01

    It is pointed out that the effects of delays of less than 100 msec in visual displays on pilot dynamic response and system performance are of particular interest at this time because improvements in the latest computer-generated imagery (CGI) systems are expected to reduce CGI display delays to this range. Attention is given to data which quantify the effects of display delays in the range of 0-100 msec on system stability, performance, and pilot dynamic response for a particular choice of aircraft dynamics, display, controller, and task. Conventional control-system design methods, the pilot response data presented, and data for long delays all suggest lead-filter compensation of display delay. Pilot-aircraft system crossover frequency information guides the specification of the compensation filter.
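The rationale for lead-filter compensation can be made concrete: a pure delay of tau seconds contributes phase lag of omega*tau radians at frequency omega, while a first-order lead filter (1 + aTs)/(1 + Ts) with a > 1 contributes positive phase near crossover. The parameter values below are illustrative, not taken from the report.

```python
import math

# Hedged sketch: phase lag of a pure display delay vs. phase lead of a
# first-order lead filter, evaluated at an assumed crossover frequency.

def delay_phase_deg(omega, tau):
    """Phase (degrees) of exp(-s*tau) at s = j*omega: always a lag."""
    return math.degrees(-omega * tau)

def lead_phase_deg(omega, T, a):
    """Phase (degrees) of (1 + a*T*s)/(1 + T*s) at s = j*omega, a > 1."""
    return math.degrees(math.atan(a * T * omega) - math.atan(T * omega))

w = 3.0                            # rad/s, a typical crossover region
print(delay_phase_deg(w, 0.1))     # ≈ -17.2 deg lag from a 100 msec delay
print(lead_phase_deg(w, 0.1, 10))  # positive lead that offsets the lag
```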

  1. The eyes prefer real images

    NASA Technical Reports Server (NTRS)

    Roscoe, Stanley N.

    1989-01-01

    For better or worse, virtual imaging displays are with us in the form of narrow-angle combining-glass presentations, head-up displays (HUD), and head-mounted projections of wide-angle sensor-generated or computer-animated imagery (HMD). All military and civil aviation services and a large number of aerospace companies are involved in one way or another in a frantic competition to develop the best virtual imaging display system. The success or failure of major weapon systems hangs in the balance, and billions of dollars in potential business are at stake. Because of the degree to which national defense is committed to the perfection of virtual imaging displays, a brief consideration of their status, an investigation and analysis of their problems, and a search for realistic alternatives are long overdue.

  2. A METHODOLOGY FOR INTEGRATING IMAGES AND TEXT FOR OBJECT IDENTIFICATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paulson, Patrick R.; Hohimer, Ryan E.; Doucette, Peter J.

    2006-02-13

    Often text and imagery contain information that must be combined to solve a problem. One approach begins with transforming the raw text and imagery into a common structure that contains the critical information in a usable form. This paper presents an application in which the imagery of vehicles and the text from police reports were combined to demonstrate the power of data fusion to correctly identify the target vehicle (e.g., a red 2002 Ford truck identified in a police report) from a collection of diverse vehicle images. The imagery was abstracted into a common signature by first capturing the conceptual models of the imagery experts in software. Our system then (1) extracted fundamental features (e.g., wheel base, color), (2) made inferences about the information (e.g., it's a red Ford) and then (3) translated the raw information into an abstract knowledge signature that was designed to both capture the important features and account for uncertainty. Likewise, the conceptual models of text analysis experts were instantiated into software that was used to generate an abstract knowledge signature that could be readily compared to the imagery knowledge signature. While this experiment's primary focus was to demonstrate the power of text and imagery fusion for a specific example, it also suggested several ways that text and geo-registered imagery could be combined to help solve other types of problems.

  3. The use of ERTS imagery in reservoir management and operation. [New England

    NASA Technical Reports Server (NTRS)

    Cooper, S. (Principal Investigator); Bock, P.; Horowitz, J.; Foran, D.

    1975-01-01

    The author has identified the following significant results. Real time data collection by orbiting satellite relay was found to be both reliable and feasible. ERTS imagery was assessed and it was shown that in most cases better spatial resolution and/or additional spectral bands would be required to satisfy NED's needs. A man-computer interactive system, using cathode ray tube display could solve the problem of an unwieldy mass of data for interpretation.

  4. New Methodology for Computing Subaerial Landslide-Tsunamis: Application to the 2015 Tyndall Glacier Landslide, Alaska

    NASA Astrophysics Data System (ADS)

    George, D. L.; Iverson, R. M.; Cannon, C. M.

    2016-12-01

    Landslide-generated tsunamis pose significant hazards to coastal communities and infrastructure, but developing models to assess these hazards presents challenges beyond those confronted when modeling seismically generated tsunamis. We present a new methodology in which our depth-averaged two-phase model D-Claw (Proc. Roy. Soc. A, 2014, doi: 10.1098/rspa.2013.0819 and doi:10.1098/rspa.2013.0820) is used to simulate all stages of landslide dynamics and subsequent tsunami generation and propagation. D-Claw was developed to simulate landslides and debris flows, but if granular solids are absent, then the D-Claw equations reduce to the shallow-water equations commonly used to model tsunamis. Because the model describes the evolution of solid and fluid volume fractions, it treats both landslides and tsunamis as special cases of a more general class of phenomena, and the landslide and tsunami can be simulated as a single-layer continuum with spatially and temporally evolving solid-grain concentrations. This seamless approach accommodates wave generation via mass displacement and longitudinal momentum transfer, the dominant mechanisms producing impulse waves when large subaerial landslides impact relatively shallow bodies of water. To test our methodology, we used D-Claw to model a large subaerial landslide and resulting tsunami that occurred on October 17, 2015, in Taan Fjord near the terminus of Tyndall Glacier, Alaska. The estimated landslide volume derived from radiated long-period seismicity (C. Stark (2015), Abstract EP51D-08, AGU Fall Meeting) was about 70-80 million cubic meters. Guided by satellite imagery and this volume estimate, we inferred an approximate landslide basal slip surface, and we used material property values identical to those used in our previous modeling of the 2014 Oso, Washington, landslide. With these inputs the modeled tsunami inundation patterns on shorelines compare well with observations derived from satellite imagery.
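The reduction noted in the record, that D-Claw collapses to the shallow-water equations when granular solids are absent, can be written out in one spatial dimension (standard notation, assumed here rather than quoted from the papers):

```latex
\partial_t h + \partial_x (h u) = 0, \qquad
\partial_t (h u) + \partial_x\!\left( h u^2 + \tfrac{1}{2} g h^2 \right) = - g h \,\partial_x b,
```

where $h$ is flow depth, $u$ the depth-averaged velocity, $g$ gravity, and $b$ the basal surface elevation. In D-Claw these equations arise as the zero-solid-fraction limit of the two-phase system, which is what lets a single continuum carry both the landslide and the tsunami.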

  5. Utilizing remote sensing of Thematic Mapper data to improve our understanding of estuarine processes and their influence on the productivity of estuarine-dependent fisheries

    NASA Technical Reports Server (NTRS)

    Browder, J. A.; May, L. N., Jr.; Rosenthal, A.; Baumann, R. H.; Gosselink, J. G.

    1986-01-01

    LANDSAT thematic mapper (TM) data are being used to refine and validate a stochastic spatial computer model to be applied to coastal resource management problems in Louisiana. Two major aspects of the research are: (1) the measurement of area of land (or emergent vegetation) and water and the length of the interface between land and water in TM imagery of selected coastal wetlands (sample marshes); and (2) the comparison of spatial patterns of land and water in the sample marshes of the imagery to that in marshes simulated by a computer model. In addition to activities in these two areas, the potential use of a published autocorrelation statistic is analyzed.
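The land/water measurement described in the record, the length of the land-water interface on classified imagery, can be sketched as counting shared edges between differing 4-neighbors on a binary grid. The grid and cell size below are illustrative assumptions.

```python
# Hedged sketch of land-water interface length on a classified grid
# (1 = land/emergent vegetation, 0 = water): count edges where 4-connected
# neighbors differ, then scale by the cell size.

def interface_length(grid, cell_size=1.0):
    rows, cols = len(grid), len(grid[0])
    edges = 0
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols and grid[r][c] != grid[r][c + 1]:
                edges += 1
            if r + 1 < rows and grid[r][c] != grid[r + 1][c]:
                edges += 1
    return edges * cell_size

marsh = [[1, 1, 0],
         [1, 0, 0]]
print(interface_length(marsh))  # 3.0
```

The same grid also yields the land and water areas directly (counts of 1s and 0s times the cell area), the other quantity the record says is measured from the TM imagery.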

  6. A system to geometrically rectify and map airborne scanner imagery and to estimate ground area. [by computer

    NASA Technical Reports Server (NTRS)

    Spencer, M. M.; Wolf, J. M.; Schall, M. A.

    1974-01-01

    A system of computer programs was developed that performs geometric rectification and line-by-line mapping of airborne multispectral scanner data to ground coordinates and estimates ground area. The system requires aircraft attitude and positional information furnished by ancillary aircraft equipment, as well as ground control points. The geometric correction and mapping procedure locates the scan lines, or the pixels on each line, in terms of map grid coordinates. The area estimation procedure gives ground area for each pixel or for a predesignated parcel specified in map grid coordinates. Exercising the system with simulated data produced both uncorrected video and corrected imagery, and yielded area estimates accurate to better than 99.7%.
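Once pixels or parcel boundaries are expressed in map grid coordinates, the ground-area step reduces to polygon area; a minimal sketch using the shoelace formula (the parcel below is an invented example, not from the report):

```python
# Illustrative sketch of parcel area estimation in map grid coordinates
# via the shoelace formula.

def polygon_area(vertices):
    """Area of a simple polygon given (x, y) ground coordinates."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# A 100 m x 50 m rectangular parcel:
print(polygon_area([(0, 0), (100, 0), (100, 50), (0, 50)]))  # 5000.0
```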

  7. Motor prediction in Brain-Computer Interfaces for controlling mobile robots.

    PubMed

    Geng, Tao; Gan, John Q

    2008-01-01

    EEG-based Brain-Computer Interface (BCI) can be regarded as a new channel for motor control except that it does not involve muscles. Normal neuromuscular motor control has two fundamental components: (1) to control the body, and (2) to predict the consequences of the control command, which is called motor prediction. In this study, after training with a specially designed BCI paradigm based on motor imagery, two subjects learnt to predict the time course of some features of the EEG signals. It is shown that, with this newly-obtained motor prediction skill, subjects can use motor imagery of feet to directly control a mobile robot to avoid obstacles and reach a small target in a time-critical scenario.

  8. Evaluation of High Resolution Imagery and Elevation Data

    DTIC Science & Technology

    2009-06-01

    the value of cutting-edge geospatial tools while keeping the data constant, the present experiment evaluated the effect of higher resolution imagery...and elevation data while keeping the tools constant. The high resolution data under evaluation was generated from TEC’s Buckeye system, an...results. As researchers and developers provide increasingly advanced tools to process data more quickly and accurately, it is necessary to assess each

  9. Dissociations between Imagery and Language Processing.

    DTIC Science & Technology

    1984-08-20

    to form the image on the basis of information stored in memory . We wanted to eliminate such processing in order to assess image maintenance ability...of imagery described in Kosslyn (1980), three processing modules are used in generating an image from information stored in long-term memory . The...PICTURE processing module simply activates the stored information, forming an image in short-term memory . However, this processing module only activates

  10. Multispectral simulation environment for modeling low-light-level sensor systems

    NASA Astrophysics Data System (ADS)

    Ientilucci, Emmett J.; Brown, Scott D.; Schott, John R.; Raqueno, Rolando V.

    1998-11-01

    Image intensifying cameras have been found to be extremely useful in low-light-level (LLL) scenarios including military night vision and civilian rescue operations. These sensors utilize the available visible region photons and an amplification process to produce high contrast imagery. It has been demonstrated that processing techniques can further enhance the quality of this imagery. For example, fusion with matching thermal IR imagery can improve image content when very little visible region contrast is available. To aid in the improvement of current algorithms and the development of new ones, a high fidelity simulation environment capable of producing radiometrically correct multi-band imagery for low-light-level conditions is desired. This paper describes a modeling environment attempting to meet these criteria by addressing the task as two individual components: (1) prediction of a low-light-level radiance field from an arbitrary scene, and (2) simulation of the output from a low-light-level sensor for a given radiance field. The radiance prediction engine utilized in this environment is the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model, a first-principles-based multi-spectral synthetic image generation model capable of producing an arbitrary number of bands in the 0.28 to 20 micrometer region. The DIRSIG model is utilized to produce high spatial and spectral resolution radiance field images. These images are then processed by a user-configurable multi-stage low-light-level sensor model that applies the appropriate noise and modulation transfer function (MTF) at each stage in the image processing chain. This includes the ability to reproduce common intensifying sensor artifacts such as saturation and 'blooming.' Additionally, co-registered imagery in other spectral bands may be simultaneously generated for testing fusion and exploitation algorithms. This paper discusses specific aspects of the DIRSIG radiance prediction for low-light-level conditions, including the incorporation of natural and man-made sources, which emphasizes the importance of accurate BRDF. A description of the implementation of each stage in the image processing and capture chain for the LLL model is also presented. Finally, simulated images are presented and qualitatively compared to lab-acquired imagery from a commercial system.

  11. Human V4 Activity Patterns Predict Behavioral Performance in Imagery of Object Color.

    PubMed

    Bannert, Michael M; Bartels, Andreas

    2018-04-11

    Color is special among basic visual features in that it can form a defining part of objects that are engrained in our memory. Whereas most neuroimaging research on human color vision has focused on responses related to external stimulation, the present study investigated how sensory-driven color vision is linked to subjective color perception induced by object imagery. We recorded fMRI activity in male and female volunteers during viewing of abstract color stimuli that were red, green, or yellow in half of the runs. In the other half we asked them to produce mental images of colored, meaningful objects (such as tomato, grapes, banana) corresponding to the same three color categories. Although physically presented color could be decoded from all retinotopically mapped visual areas, only hV4 allowed predicting colors of imagined objects when classifiers were trained on responses to physical colors. Importantly, only neural signal in hV4 was predictive of behavioral performance in the color judgment task on a trial-by-trial basis. The commonality between neural representations of sensory-driven and imagined object color and the behavioral link to neural representations in hV4 identifies area hV4 as a perceptual hub linking externally triggered color vision with color in self-generated object imagery. SIGNIFICANCE STATEMENT Humans experience color not only when visually exploring the outside world, but also in the absence of visual input, for example when remembering, dreaming, and during imagery. It is not known where neural codes for sensory-driven and internally generated hue converge. In the current study we evoked matching subjective color percepts, one driven by physically presented color stimuli, the other by internally generated color imagery. This allowed us to identify area hV4 as the only site where neural codes of corresponding subjective color perception converged regardless of its origin. 
Color codes in hV4 also predicted behavioral performance in an imagery task, suggesting it forms a perceptual hub for color perception. Copyright © 2018 the authors.
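    The cross-decoding analysis described above (train a classifier on responses to physically presented colors, then test it on imagery trials) can be sketched as follows. This is a minimal illustration with synthetic two-dimensional "voxel patterns" and a nearest-centroid classifier; the study itself used fMRI voxel patterns and its own classification pipeline, so everything below is an assumption for demonstration.

```python
def centroid(rows):
    """Mean pattern of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def nearest_centroid_fit(X, y):
    """Store one centroid per class label."""
    labels = sorted(set(y))
    return {lab: centroid([x for x, l in zip(X, y) if l == lab]) for lab in labels}

def predict(model, x):
    """Assign the label of the nearest class centroid (squared Euclidean)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda lab: dist2(model[lab], x))

# Synthetic "voxel patterns": perception runs train the model...
perc_X = [[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.9], [0.5, 0.5], [0.6, 0.4]]
perc_y = ["red", "red", "green", "green", "yellow", "yellow"]
# ...and imagery runs are held out for testing (cross-decoding).
imag_X = [[0.8, 0.3], [0.3, 0.8], [0.55, 0.45]]
imag_y = ["red", "green", "yellow"]

model = nearest_centroid_fit(perc_X, perc_y)
preds = [predict(model, x) for x in imag_X]
accuracy = sum(p == t for p, t in zip(preds, imag_y)) / len(imag_y)
```

    Above-chance accuracy in this train-on-perception / test-on-imagery scheme is what identifies a shared neural code between seen and imagined color.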

  12. Orienting attention to visual or verbal/auditory imagery differentially impairs the processing of visual stimuli.

    PubMed

    Villena-González, Mario; López, Vladimir; Rodríguez, Eugenio

    2016-05-15

    When attention is oriented toward inner thoughts, as spontaneously occurs during mind wandering, the processing of external information is attenuated. However, the potential effects of thought content on sensory attenuation are still unknown. The present study aims to assess whether the representational format of thoughts, such as visual imagery or inner speech, might differentially affect the sensory processing of external stimuli. We recorded the brain activity of 20 participants (12 women) while they were exposed to a probe visual stimulus in three different conditions: executing a task on the visual probe (externally oriented attention), and two conditions involving inward-turned attention, i.e., generating inner speech and performing visual imagery. Event-related potential results showed that the P1 amplitude, related to the sensory response, was significantly attenuated during both tasks involving inward attention compared with the external task. When the two representational formats were compared, the visual imagery condition showed stronger attenuation of sensory processing than the inner speech condition. Alpha power in visual areas was measured as an index of cortical inhibition. Larger alpha amplitude was found when participants engaged in an internal thought than during the external task, with visual imagery showing even more alpha power than the inner speech condition. Our results show, for the first time to our knowledge, that visual attentional processing of external stimuli during self-generated thoughts is differentially affected by the representational format of the ongoing train of thought. Copyright © 2016 Elsevier Inc. All rights reserved.
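    The alpha-power index mentioned above is a band-limited (roughly 8-12 Hz) spectral power estimate. A minimal sketch, assuming synthetic signals and a plain DFT rather than the study's actual EEG pipeline:

```python
import math, cmath

def band_power(signal, fs, f_lo, f_hi):
    """Average spectral power in [f_lo, f_hi] Hz via a plain DFT (illustrative only)."""
    n = len(signal)
    powers = []
    for k in range(n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            coef = sum(x * cmath.exp(-2j * math.pi * k * t / n)
                       for t, x in enumerate(signal))
            powers.append(abs(coef) ** 2 / n)
    return sum(powers) / len(powers)

fs = 100                              # sampling rate, Hz (assumed)
t = [i / fs for i in range(200)]      # 2 s of data
# Synthetic traces: the inward-attention condition carries a stronger
# 10 Hz (alpha) rhythm, mimicking the cortical-inhibition effect reported.
external = [0.2 * math.sin(2 * math.pi * 10 * ti) for ti in t]
imagery  = [1.0 * math.sin(2 * math.pi * 10 * ti) for ti in t]

alpha_ext = band_power(external, fs, 8, 12)
alpha_img = band_power(imagery, fs, 8, 12)
```

    Comparing `alpha_img` against `alpha_ext` across conditions is the kind of contrast the study reports; real analyses would use proper spectral estimators (e.g. Welch or wavelets) and statistics across trials.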

  13. Real Time, On Line Crop Monitoring and Analysis with Near Global Landsat-class Mosaics

    NASA Astrophysics Data System (ADS)

    Varlyguin, D.; Hulina, S.; Crutchfield, J.; Reynolds, C. A.; Frantz, R.

    2015-12-01

    The presentation will discuss the current status of GDA technology for the operational, automated generation of 10-30 meter near-global mosaics of Landsat-class data for visualization, monitoring, and analysis. The current version of the mosaic combines Landsat 8 and Landsat 7; Sentinel-2A imagery will be added once it is operationally available. The mosaics are surface-reflectance calibrated and analysis ready. They offer the full spatial resolution and all multi-spectral bands of the source imagery. Each mosaic covers all major agricultural regions of the world and a 16-day time window; dates from 2014 to the most current are supported. The mosaics are updated in real time, as soon as GDA downloads Landsat imagery, calibrates it to surface reflectance, and generates data-gap masks (all typically under 10 minutes for a Landsat scene). The technology eliminates the complex, multi-step, hands-on process of data preparation and provides imagery ready for repetitive, field-to-country analysis of crop conditions, progress, acreage, yield, and production. The mosaics can be used for real-time, on-line interactive mapping and time-series drilling via the GeoSynergy webGIS platform. The imagery is of great value for improved, persistent monitoring of global croplands and for the operational in-season analysis and mapping of crops across the globe in the USDA FAS purview, as mandated by the US government. The presentation will overview operational processing of Landsat-class mosaics in support of USDA FAS efforts and will look into 2015 and beyond.

  14. Forest Stand Segmentation Using Airborne LIDAR Data and Very High Resolution Multispectral Imagery

    NASA Astrophysics Data System (ADS)

    Dechesne, Clément; Mallet, Clément; Le Bris, Arnaud; Gouet, Valérie; Hervieu, Alexandre

    2016-06-01

    Forest stands are the basic units for forest inventory and mapping. Stands are large forested areas (e.g., ≥ 2 ha) of homogeneous tree species composition. The accurate delineation of forest stands is usually performed by visual analysis of very high resolution (VHR) optical images by human operators. This work is highly time consuming and should be automated for scalability purposes. In this paper, a method based on the fusion of airborne laser scanning data (lidar) and very high resolution multispectral imagery for automatic forest stand delineation and forest land-cover database update is proposed. The multispectral images give access to the tree species, whereas the 3D lidar point clouds provide geometric information on the trees. Therefore, multi-modal features are computed, both at pixel and object levels. The objects are individual trees extracted from the lidar data. A supervised classification is performed at the object level on the computed features in order to coarsely discriminate the existing tree species in the area of interest. The analysis at tree level is particularly relevant since it significantly improves the tree species classification. A probability map is generated through the tree species classification and combined with the pixel-based feature map in an energy-minimization framework. The proposed energy is then minimized using a standard graph-cut method (namely QPBO with α-expansion) in order to produce a segmentation map with a controlled level of detail. Comparison with an existing forest land-cover database shows that our method provides satisfactory results both in terms of stand labelling and delineation (matching ranges between 94% and 99%).
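    The energy being minimized is of the standard form "unary data terms from per-pixel class probabilities plus a Potts smoothness prior". The paper minimizes it with graph cuts (QPBO with α-expansion); as a hedged toy illustration, the sketch below minimizes the same kind of energy on a tiny grid with a simple iterated-conditional-modes (ICM) sweep instead. The labels, probabilities, and smoothness weight are all made up.

```python
import math

LABELS = ["oak", "pine"]
LAMBDA = 0.5  # Potts smoothness weight (illustrative value)

def unary(prob):
    """Data term: negative log of the per-pixel class probability."""
    return {lab: -math.log(max(prob[lab], 1e-9)) for lab in LABELS}

def icm(probs, n_iter=5):
    """Greedy per-pixel minimization of unary + Potts energy (stand-in for graph cuts)."""
    h, w = len(probs), len(probs[0])
    # Initialize with the maximum-probability label at each pixel.
    labels = [[max(LABELS, key=lambda l: probs[i][j][l]) for j in range(w)]
              for i in range(h)]
    for _ in range(n_iter):
        for i in range(h):
            for j in range(w):
                u = unary(probs[i][j])
                def cost(lab):
                    # Potts term: pay LAMBDA for each disagreeing 4-neighbor.
                    smooth = sum(LAMBDA for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                                 if 0 <= i + di < h and 0 <= j + dj < w
                                 and labels[i + di][j + dj] != lab)
                    return u[lab] + smooth
                labels[i][j] = min(LABELS, key=cost)
    return labels

# A noisy 3x3 probability map: one weakly "pine" pixel inside an "oak" stand.
p_oak = [[0.9, 0.9, 0.9], [0.9, 0.45, 0.9], [0.9, 0.9, 0.9]]
probs = [[{"oak": p, "pine": 1 - p} for p in row] for row in p_oak]
result = icm(probs)
```

    The smoothness term flips the isolated center pixel back to "oak", which is exactly how the energy controls the level of detail of the stand segmentation.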

  15. Investigating the effects of a sensorimotor rhythm-based BCI training on the cortical activity elicited by mental imagery

    NASA Astrophysics Data System (ADS)

    Toppi, J.; Risetti, M.; Quitadamo, L. R.; Petti, M.; Bianchi, L.; Salinari, S.; Babiloni, F.; Cincotti, F.; Mattia, D.; Astolfi, L.

    2014-06-01

    Objective. It is well known that acquiring sensorimotor rhythm (SMR)-based brain-computer interface (BCI) control requires a training period before users can achieve their best possible performance. Nevertheless, the effect of this training procedure on the cortical activity related to mental imagery ability still requires investigation to be fully elucidated. The aim of this study was to gain insights into the effects of SMR-based BCI training on the cortical spectral activity associated with the performance of different mental imagery tasks. Approach. Linear cortical estimation and statistical brain mapping techniques were applied to high-density EEG data acquired from 18 healthy participants performing three different mental imagery tasks. Subjects were divided into two groups: one of BCI-trained subjects, according to their previous exposure (at least six months before this study) to motor imagery-based BCI training, and one of subjects who were naive to any BCI paradigm. Main results. Cortical activation maps obtained for trained and naive subjects indicated different spectral and spatial activity patterns in response to the mental imagery tasks. Long-term effects of the previous SMR-based BCI training were observed on the motor cortical spectral activity specific to the BCI-trained motor imagery task (simple hand movements) and partially generalized to a more complex motor imagery task (playing tennis). In contrast, mental imagery with spatial attention and memory content could elicit recognizable cortical spectral activity even in subjects completely naive to BCI training. Significance. The present findings contribute to our understanding of BCI technology usage and might be of relevance in those clinical conditions where training to master a BCI application is challenging or even not possible.

  16. Investigating the effects of a sensorimotor rhythm-based BCI training on the cortical activity elicited by mental imagery.

    PubMed

    Toppi, J; Risetti, M; Quitadamo, L R; Petti, M; Bianchi, L; Salinari, S; Babiloni, F; Cincotti, F; Mattia, D; Astolfi, L

    2014-06-01

    It is well known that acquiring sensorimotor rhythm (SMR)-based brain-computer interface (BCI) control requires a training period before users can achieve their best possible performance. Nevertheless, the effect of this training procedure on the cortical activity related to mental imagery ability still requires investigation to be fully elucidated. The aim of this study was to gain insights into the effects of SMR-based BCI training on the cortical spectral activity associated with the performance of different mental imagery tasks. Linear cortical estimation and statistical brain mapping techniques were applied to high-density EEG data acquired from 18 healthy participants performing three different mental imagery tasks. Subjects were divided into two groups: one of BCI-trained subjects, according to their previous exposure (at least six months before this study) to motor imagery-based BCI training, and one of subjects who were naive to any BCI paradigm. Cortical activation maps obtained for trained and naive subjects indicated different spectral and spatial activity patterns in response to the mental imagery tasks. Long-term effects of the previous SMR-based BCI training were observed on the motor cortical spectral activity specific to the BCI-trained motor imagery task (simple hand movements) and partially generalized to a more complex motor imagery task (playing tennis). In contrast, mental imagery with spatial attention and memory content could elicit recognizable cortical spectral activity even in subjects completely naive to BCI training. The present findings contribute to our understanding of BCI technology usage and might be of relevance in those clinical conditions where training to master a BCI application is challenging or even not possible.

  17. Mental imagery. Effects on static balance and attentional demands of the elderly.

    PubMed

    Hamel, M F; Lajoie, Yves

    2005-06-01

    Several studies have demonstrated the effectiveness of mental imagery in improving motor performance. However, no research has studied the effectiveness of such a technique on static balance in the elderly. This study evaluated the efficiency of a mental imagery technique aimed at improving static balance by reducing postural oscillations and attentional demands in the elderly. Twenty subjects aged 65 to 90 years old, divided into two groups (8 in the Control group and 12 in the Experimental group), participated in the study. The experimental participants underwent daily mental imagery training for a period of six weeks. Antero-posterior and lateral oscillations and reaction times under a dual-task paradigm were measured, and the Berg Balance Scale, Activities-specific Balance Confidence Scale, and VMIQ questionnaire were administered during both pre-test and post-test. Attentional demands and antero-posterior postural oscillations decreased significantly in the group with mental imagery training compared with those of the Control group. Subjects in the mental imagery group became significantly better in their ability to generate clear, vivid mental images, as indicated by the VMIQ questionnaire, whereas no significant difference was observed for the Activities-specific Balance Confidence Scale or Berg Scale. The results support the psychoneuromuscular and motor coding theories associated with mental imagery.

  18. Assessing mental imagery in clinical psychology: A review of imagery measures and a guiding framework

    PubMed Central

    Pearson, David G.; Deeprose, Catherine; Wallace-Hadrill, Sophie M.A.; Heyes, Stephanie Burnett; Holmes, Emily A.

    2013-01-01

    Mental imagery is an under-explored field in clinical psychology research but presents a topic of potential interest and relevance across many clinical disorders, including social phobia, schizophrenia, depression, and post-traumatic stress disorder. There is currently a lack of a guiding framework from which clinicians may select the domains or associated measures most likely to be of appropriate use in mental imagery research. We adopt an interdisciplinary approach and present a review of studies across experimental psychology and clinical psychology in order to highlight the key domains and measures most likely to be of relevance. This includes a consideration of methods for experimentally assessing the generation, maintenance, inspection and transformation of mental images; as well as subjective measures of characteristics such as image vividness and clarity. We present a guiding framework in which we propose that cognitive, subjective and clinical aspects of imagery should be explored in future research. The guiding framework aims to assist researchers in the selection of measures for assessing those aspects of mental imagery that are of most relevance to clinical psychology. We propose that a greater understanding of the role of mental imagery in clinical disorders will help drive forward advances in both theory and treatment. PMID:23123567

  19. Mapping surface disturbance of energy-related infrastructure in southwest Wyoming--An assessment of methods

    USGS Publications Warehouse

    Germaine, Stephen S.; O'Donnell, Michael S.; Aldridge, Cameron L.; Baer, Lori; Fancher, Tammy; McBeth, Jamie; McDougal, Robert R.; Waltermire, Robert; Bowen, Zachary H.; Diffendorfer, James; Garman, Steven; Hanson, Leanne

    2012-01-01

    We evaluated how well three leading information-extraction software programs (eCognition, Feature Analyst, Feature Extraction) and manual hand digitization interpreted information from remotely sensed imagery of a visually complex gas field in Wyoming. Specifically, we compared how each mapped the area of and classified the disturbance features present on each of three remotely sensed images, including 30-meter-resolution Landsat, 10-meter-resolution SPOT (Satellite Pour l'Observation de la Terre), and 0.6-meter resolution pan-sharpened QuickBird scenes. Feature Extraction mapped the spatial area of disturbance features most accurately on the Landsat and QuickBird imagery, while hand digitization was most accurate on the SPOT imagery. Footprint non-overlap error was smallest on the Feature Analyst map of the Landsat imagery, the hand digitization map of the SPOT imagery, and the Feature Extraction map of the QuickBird imagery. When evaluating feature classification success against a set of ground-truthed control points, Feature Analyst, Feature Extraction, and hand digitization classified features with similar success on the QuickBird and SPOT imagery, while eCognition classified features poorly relative to the other methods. All maps derived from Landsat imagery classified disturbance features poorly. Using the hand digitized QuickBird data as a reference and making pixel-by-pixel comparisons, Feature Extraction classified features best overall on the QuickBird imagery, and Feature Analyst classified features best overall on the SPOT and Landsat imagery. Based on the entire suite of tasks we evaluated, Feature Extraction performed best overall on the Landsat and QuickBird imagery, while hand digitization performed best overall on the SPOT imagery, and eCognition performed worst overall on all three images. 
Error rates for both area measurements and feature classification were prohibitively high on Landsat imagery, while QuickBird was time and cost prohibitive for mapping large spatial extents. The SPOT imagery produced map products that were far more accurate than Landsat and did so at a far lower cost than QuickBird imagery. The degree of map accuracy required; the costs of image acquisition, software, operator time, and computation time; and the tradeoff between spatial extent and resolution should all be weighed when evaluating which combination of imagery and information-extraction method might best serve any given land-use mapping project. When resources permit, acquiring imagery that supports the highest classification and measurement accuracy possible is recommended.

  20. Detection of people in military and security context imagery

    NASA Astrophysics Data System (ADS)

    Shannon, Thomas M. L.; Spier, Emmet H.; Wiltshire, Ben

    2014-10-01

    A high level of manual visual surveillance of complex scenes is dependent solely on the awareness of human operators whereas an autonomous person detection solution could assist by drawing their attention to potential issues, in order to reduce cognitive burden and achieve more with less manpower. Our research addressed the challenge of the reliable identification of persons in a scene who may be partially obscured by structures or by handling weapons or tools. We tested the efficacy of a recently published computer vision approach based on the construction of cascaded, non-linear classifiers from part-based deformable models by assessing performance using imagery containing infantrymen in the open or when obscured, undertaking low level tactics or acting as civilians using tools. Results were compared with those obtained from published upright pedestrian imagery. The person detector yielded a precision of approximately 65% for a recall rate of 85% for military context imagery as opposed to a precision of 85% for the upright pedestrian image cases. These results compared favorably with those reported by the authors when applied to a range of other on-line imagery databases. Our conclusion is that the deformable part-based model method may be a potentially useful people detection tool in the challenging environment of military and security context imagery.
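    The precision and recall figures quoted above come from matching detected bounding boxes against ground-truth boxes. A minimal sketch of that evaluation, using an assumed intersection-over-union (IoU) matching threshold of 0.5 and made-up box coordinates:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def precision_recall(detections, truths, thresh=0.5):
    """Greedy one-to-one matching of detections to ground truth at an IoU threshold."""
    matched = set()
    tp = 0
    for d in detections:
        best = max(range(len(truths)),
                   key=lambda i: iou(d, truths[i]), default=None)
        if best is not None and best not in matched and iou(d, truths[best]) >= thresh:
            matched.add(best)
            tp += 1
    precision = tp / len(detections) if detections else 1.0
    recall = tp / len(truths) if truths else 1.0
    return precision, recall

# Two annotated persons; the detector fires three times (one false alarm).
truths = [(0, 0, 10, 20), (30, 0, 40, 20)]
detections = [(1, 1, 10, 20), (30, 2, 40, 22), (60, 60, 70, 80)]
p, r = precision_recall(detections, truths)
```

    Sweeping the detector's confidence threshold and recomputing these two numbers traces out the precision-recall curve from which operating points such as "65% precision at 85% recall" are read off.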

  1. Evaluation of SLAR and thematic mapper MSS data for forest cover mapping using computer-aided analysis techniques

    NASA Technical Reports Server (NTRS)

    Hoffer, R. M. (Principal Investigator)

    1979-01-01

    The spatial characteristics of the data were evaluated. A program was developed to reduce the spatial distortions resulting from variable viewing distance, and geometrically adjusted data sets were generated. The potential need for some level of radiometric adjustment was evidenced by an along track band of high reflectance across different cover types in the Varian imagery. A multiple regression analysis was employed to explore the viewing angle effect on measured reflectance. Areas in the data set which appeared to have no across track stratification of cover type were identified. A program was developed which computed the average reflectance by column for each channel, over all of the scan lines in the designated areas. A regression analysis was then run using the first, second, and third degree polynomials, for each channel. An atmospheric effect as a component of the viewing angle source of variance is discussed. Cover type maps were completed and training and test field selection was initiated.

  2. Predicting water quality by relating secchi-disk transparency and chlorophyll a measurements to satellite imagery for Michigan Inland Lakes, August 2002

    USGS Publications Warehouse

    Fuller, L.M.; Aichele, Stephen S.; Minnerick, R.J.

    2004-01-01

    Inland lakes are an important economic and environmental resource for Michigan. The U.S. Geological Survey and the Michigan Department of Environmental Quality have been cooperatively monitoring the quality of selected lakes in Michigan through the Lake Water Quality Assessment program. Through this program, approximately 730 of Michigan's 11,000 inland lakes will be monitored once during this 15-year study. Targeted lakes will be sampled during spring turnover and again in late summer to characterize water quality. Because more extensive and more frequent sampling is not economically feasible in the Lake Water Quality Assessment program, the U.S. Geological Survey and Michigan Department of Environmental Quality investigated the use of satellite imagery as a means of estimating water quality in unsampled lakes. Satellite imagery has been successfully used in Minnesota, Wisconsin, and elsewhere to compute the trophic state of inland lakes from predicted secchi-disk measurements. Previous attempts of this kind in Michigan resulted in a poorer fit between observed and predicted data than was found for Minnesota or Wisconsin. This study tested whether estimates could be improved by using atmospherically corrected satellite imagery, whether a more appropriate regression model could be obtained for Michigan, and whether chlorophyll a concentrations could be reliably predicted from satellite imagery in order to compute the trophic state of inland lakes. Although the atmospheric correction did not significantly improve estimates of lake-water quality, a new regression equation was identified that consistently yielded better results than an equation obtained from the literature. A stepwise regression was used to determine an equation that accurately predicts chlorophyll a concentrations in northern Lower Michigan.
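    Regressions of this kind typically relate a log-transformed water-quality variable to satellite band values or band ratios. The sketch below fits a simple one-predictor least-squares model of ln(chlorophyll a) against a band ratio; the predictor choice, the band ratio, and all data values are invented for illustration, and the report's actual equation came from its own stepwise regression.

```python
def ols_fit(x, y):
    """Ordinary least squares for y = b0 + b1 * x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b1 = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
         sum((xi - mx) ** 2 for xi in x)
    return my - b1 * mx, b1

ratio = [1.2, 1.5, 1.8, 2.1]    # hypothetical blue/red band ratio per lake
ln_chl = [2.0, 1.4, 0.8, 0.2]   # hypothetical ln(chlorophyll a) field samples

b0, b1 = ols_fit(ratio, ln_chl)
predict = lambda r: b0 + b1 * r  # ln(chl a) estimate for an unsampled lake
```

    Once calibrated against the sampled lakes, such an equation can be applied per-pixel across the imagery to estimate trophic state for the thousands of unsampled lakes.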

  3. Environmental study of ERTS-1 imagery Lake Champlain Basin and Vermont

    NASA Technical Reports Server (NTRS)

    Lind, A. O. (Principal Investigator)

    1972-01-01

    The author has identified the following significant results. A first-approximation land-type map using three categories of classification was generated for the Burlington area. The identification and mapping of a major turbidity front separating the turbid waters of the southern arm of Lake Champlain from the clearer main water mass was reported on RBV 1 and 2 imagery and on subsequent MSS bands 4 and 5. Significant industrial pollution of Lake Champlain has degraded environmental quality in certain sections of the lake. Wetlands were detected and recognized using a combination of RBV bands 2 and 3. Using first-look RBV band 2 imagery, major ice marginal features were identified by using tonal patterns associated with vegetative cover. Major rivers were detected and recognized through the use of RBV band 3 imagery and MSS bands 6 and 7.

  4. Developing consistent Landsat data sets for large area applications: the MRLC 2001 protocol

    USGS Publications Warehouse

    Chander, G.; Huang, Chengquan; Yang, Limin; Homer, Collin G.; Larson, C.

    2009-01-01

    One of the major efforts in large area land cover mapping over the last two decades was the completion of two U.S. National Land Cover Data sets (NLCD), developed with nominal 1992 and 2001 Landsat imagery under the auspices of the Multi-Resolution Land Characteristics (MRLC) Consortium. Following the successful generation of NLCD 1992, a second-generation MRLC initiative was launched with two primary goals: (1) to develop a consistent Landsat imagery data set for the U.S. and (2) to develop a second-generation National Land Cover Database (NLCD 2001). One of the key enhancements was the formulation of an image preprocessing protocol and implementation of a consistent image processing method. The core data set of the NLCD 2001 database consists of Landsat 7 Enhanced Thematic Mapper Plus (ETM+) images. This letter details the procedures for processing the original ETM+ images and more recent scenes added to the database. NLCD 2001 products include Anderson Level II land cover classes, percent tree canopy, and percent urban imperviousness at 30-m resolution derived from Landsat imagery. The products are freely available for download to the general public from the MRLC Consortium Web site at http://www.mrlc.gov.

  5. To What Extent Can Motor Imagery Replace Motor Execution While Learning a Fine Motor Skill?

    PubMed Central

    Sobierajewicz, Jagna; Szarkiewicz, Sylwia; Przekoracka-Krawczyk, Anna; Jaśkowski, Wojciech; van der Lubbe, Rob

    2016-01-01

    Motor imagery is generally thought to share common mechanisms with motor execution. In the present study, we examined to what extent learning a fine motor skill by motor imagery may substitute physical practice. Learning effects were assessed by manipulating the proportion of motor execution and motor imagery trials. Additionally, learning effects were compared between participants with an explicit motor imagery instruction and a control group. A Go/NoGo discrete sequence production (DSP) task was employed, wherein a five-stimulus sequence presented on each trial indicated the required sequence of finger movements after a Go signal. In the case of a NoGo signal, participants either had to imagine carrying out the response sequence (the motor imagery group), or the response sequence had to be withheld (the control group). Two practice days were followed by a final test day on which all sequences had to be executed. Learning effects were assessed by computing response times (RTs) and the percentages of correct responses (PCs). The electroencephalogram (EEG) was additionally measured on this test day to examine whether motor preparation and the involvement of visual short-term memory (VSTM) depended on the amount of physical/mental practice. Accuracy data indicated strong learning effects. However, a substantial amount of physical practice was required to reach an optimal speed. EEG results suggest the involvement of VSTM for sequences that had less or no physical practice in both groups. The absence of differences between the motor imagery and the control group underlines the possibility that motor preparation may actually resemble motor imagery. PMID:28154614

  6. To What Extent Can Motor Imagery Replace Motor Execution While Learning a Fine Motor Skill?

    PubMed

    Sobierajewicz, Jagna; Szarkiewicz, Sylwia; Przekoracka-Krawczyk, Anna; Jaśkowski, Wojciech; van der Lubbe, Rob

    2016-01-01

    Motor imagery is generally thought to share common mechanisms with motor execution. In the present study, we examined to what extent learning a fine motor skill by motor imagery may substitute physical practice. Learning effects were assessed by manipulating the proportion of motor execution and motor imagery trials. Additionally, learning effects were compared between participants with an explicit motor imagery instruction and a control group. A Go/NoGo discrete sequence production (DSP) task was employed, wherein a five-stimulus sequence presented on each trial indicated the required sequence of finger movements after a Go signal. In the case of a NoGo signal, participants either had to imagine carrying out the response sequence (the motor imagery group), or the response sequence had to be withheld (the control group). Two practice days were followed by a final test day on which all sequences had to be executed. Learning effects were assessed by computing response times (RTs) and the percentages of correct responses (PCs). The electroencephalogram (EEG) was additionally measured on this test day to examine whether motor preparation and the involvement of visual short-term memory (VSTM) depended on the amount of physical/mental practice. Accuracy data indicated strong learning effects. However, a substantial amount of physical practice was required to reach an optimal speed. EEG results suggest the involvement of VSTM for sequences that had less or no physical practice in both groups. The absence of differences between the motor imagery and the control group underlines the possibility that motor preparation may actually resemble motor imagery.

  7. Parallelization of a blind deconvolution algorithm

    NASA Astrophysics Data System (ADS)

    Matson, Charles L.; Borelli, Kathy J.

    2006-09-01

    Often it is of interest to deblur imagery in order to obtain higher-resolution images. Deblurring requires knowledge of the blurring function - information that is often not available separately from the blurred imagery. Blind deconvolution algorithms overcome this problem by jointly estimating both the high-resolution image and the blurring function from the blurred imagery. Because blind deconvolution algorithms are iterative in nature, they can take minutes to days to deblur an image, depending on how many frames of data are used for the deblurring and the platforms on which the algorithms are executed. Here we present our progress in parallelizing a blind deconvolution algorithm to increase its execution speed. This progress includes sub-frame parallelization and a code structure that is not specialized to a specific computer hardware architecture.
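    The parallelization opportunity in such iterative algorithms is that each frame's (or sub-frame's) contribution to the cost function is independent given the current image and blur estimates, so those contributions can be farmed out to workers inside each iteration. The sketch below distributes a per-frame residual computation across a thread pool; the cost function is a trivial stand-in, not the paper's algorithm, and the frame data are synthetic.

```python
from concurrent.futures import ThreadPoolExecutor

def frame_cost(frame, image_estimate):
    """Stand-in per-frame term: sum of squared residuals against the estimate."""
    return sum((f - e) ** 2 for f, e in zip(frame, image_estimate))

def total_cost_parallel(frames, image_estimate, workers=4):
    """Evaluate all independent per-frame terms concurrently, then reduce."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(lambda fr: frame_cost(fr, image_estimate), frames))

# Three noisy observations of the same (1-D, toy) scene.
frames = [[1.0, 2.0, 3.0], [1.1, 2.1, 2.9], [0.9, 1.9, 3.1]]
estimate = [1.0, 2.0, 3.0]
cost = total_cost_parallel(frames, estimate)
```

    In a real implementation each worker would also return a gradient contribution, and the same map/reduce structure ports naturally to processes or MPI ranks, which is one way to keep the code structure hardware-agnostic.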

  8. Visual Odometry for Autonomous Deep-Space Navigation

    NASA Technical Reports Server (NTRS)

    Robinson, Shane; Pedrotty, Sam

    2016-01-01

    Visual Odometry fills two critical needs shared by all future exploration architectures considered by NASA: Autonomous Rendezvous and Docking (AR&D), and autonomous navigation during loss of communications. To do this, a camera is combined with cutting-edge algorithms (called Visual Odometry) into a unit that provides an accurate relative pose between the camera and the object in the imagery. Recent simulation analyses have demonstrated the ability of this new technology to reliably, accurately, and quickly compute a relative pose. This project advances the technology by both preparing the system to process flight imagery and creating an activity to capture said imagery. This technology can provide a pioneering optical navigation platform capable of supporting a wide variety of future mission scenarios: deep-space rendezvous, asteroid exploration, and operation during loss of communications.
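    At its core, relative-pose estimation recovers the rotation and translation that map feature points seen in one image onto their matches in the next. Real systems solve this in 3D (e.g., from an essential matrix); the hedged 2D sketch below uses a Kabsch-style least-squares alignment on synthetic correspondences purely to show the idea, and is not the project's algorithm.

```python
import math

def relative_pose_2d(src, dst):
    """Least-squares 2D rotation + translation mapping src points onto dst."""
    n = len(src)
    cs = [sum(p[i] for p in src) / n for i in (0, 1)]  # source centroid
    cd = [sum(p[i] for p in dst) / n for i in (0, 1)]  # destination centroid
    # Accumulate dot and cross terms of the centered correspondences.
    sxx = sum((s[0]-cs[0])*(d[0]-cd[0]) + (s[1]-cs[1])*(d[1]-cd[1])
              for s, d in zip(src, dst))
    sxy = sum((s[0]-cs[0])*(d[1]-cd[1]) - (s[1]-cs[1])*(d[0]-cd[0])
              for s, d in zip(src, dst))
    theta = math.atan2(sxy, sxx)                       # best-fit rotation angle
    c, s_ = math.cos(theta), math.sin(theta)
    tx = cd[0] - (c*cs[0] - s_*cs[1])                  # translation after rotation
    ty = cd[1] - (s_*cs[0] + c*cs[1])
    return theta, (tx, ty)

# Synthetic correspondences: the camera view rotated 30 degrees, shifted (1, 2).
ang = math.radians(30)
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
dst = [(math.cos(ang)*x - math.sin(ang)*y + 1.0,
        math.sin(ang)*x + math.cos(ang)*y + 2.0) for x, y in src]
theta, t = relative_pose_2d(src, dst)
```

    Chaining such frame-to-frame poses over time is what turns per-image measurements into an odometry estimate of the vehicle's trajectory.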

  9. Strategy combination during execution of memory strategies in young and older adults.

    PubMed

    Hinault, Thomas; Lemaire, Patrick; Touron, Dayna

    2017-05-01

    The present study investigated whether people can combine two memory strategies to encode pairs of words more efficiently than with a single strategy, and whether there are age-related differences in such strategy combination. Young and older adults were asked to encode pairs of words (e.g., satellite-tunnel). For each item, participants were told to use the interactive-imagery strategy (i.e., mentally visualising the two words and making them interact), the sentence-generation strategy (i.e., generating a sentence linking the two words), or strategy combination (i.e., generating a sentence while mentally visualising it). Participants obtained better recall performance on items encoded with strategy combination than on items encoded with the interactive-imagery or sentence-generation strategies. Moreover, we found age-related decline in strategy combination. These findings have important implications for our understanding of the execution of memory strategies, and suggest that strategy combination occurs in a variety of cognitive domains.

  10. Brain-computer interface training combined with transcranial direct current stimulation in patients with chronic severe hemiparesis: Proof of concept study.

    PubMed

    Kasashima-Shindo, Yuko; Fujiwara, Toshiyuki; Ushiba, Junichi; Matsushika, Yayoi; Kamatani, Daiki; Oto, Misa; Ono, Takashi; Nishimoto, Atsuko; Shindo, Keiichiro; Kawakami, Michiyuki; Tsuji, Tetsuya; Liu, Meigen

    2015-04-01

    Brain-computer interface technology has been applied to stroke patients to improve their motor function. Event-related desynchronization during motor imagery, which is used as a brain-computer interface trigger, is sometimes difficult to detect in stroke patients. Anodal transcranial direct current stimulation (tDCS) is known to increase event-related desynchronization. This study investigated the adjunctive effect of anodal tDCS on brain-computer interface training in patients with severe hemiparesis. Eighteen patients with chronic stroke. A non-randomized controlled study. Subjects were divided between a brain-computer interface group and a tDCS-brain-computer interface group and participated in 10 days of brain-computer interface training. Event-related desynchronization was detected in the affected hemisphere during motor imagery of the affected fingers. The tDCS-brain-computer interface group received anodal tDCS before brain-computer interface training. Event-related desynchronization was evaluated before and after the intervention. The Fugl-Meyer Assessment upper extremity motor score (FM-U) was assessed before, immediately after, and 3 months after the intervention. Event-related desynchronization was significantly increased in the tDCS-brain-computer interface group. The FM-U was significantly increased in both groups. The FM-U improvement was maintained at 3 months in the tDCS-brain-computer interface group. Anodal tDCS can be a conditioning tool for brain-computer interface training in patients with severe hemiparetic stroke.
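    Event-related desynchronization is conventionally quantified as the percentage change of band power during the task relative to a pre-task baseline, with negative values indicating desynchronization. A minimal numpy sketch; the alpha band limits and the plain periodogram here are illustrative choices, not taken from the paper:

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Mean spectral power in [f_lo, f_hi] Hz from a plain periodogram."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].mean()

def erd_percent(power_task, power_baseline):
    """ERD as percentage power change relative to baseline;
    negative values indicate desynchronization (the BCI trigger)."""
    return 100.0 * (power_task - power_baseline) / power_baseline
```

For example, if motor imagery halves the amplitude of an 8-12 Hz rhythm, band power drops to a quarter of baseline and the ERD is -75%.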

  11. Real-time maritime scene simulation for ladar sensors

    NASA Astrophysics Data System (ADS)

    Christie, Chad L.; Gouthas, Efthimios; Swierkowski, Leszek; Williams, Owen M.

    2011-06-01

    Continuing interest exists in the development of cost-effective synthetic environments for testing Laser Detection and Ranging (ladar) sensors. In this paper we describe a PC-based system for real-time ladar scene simulation of ships and small boats in a dynamic maritime environment. In particular, we describe the techniques employed to generate range imagery accompanied by passive radiance imagery. Our ladar scene generation system is an evolutionary extension of the VIRSuite infrared scene simulation program and includes all previous features such as ocean wave simulation, the physically-realistic representation of boat and ship dynamics, wake generation and simulation of whitecaps, spray, wake trails and foam. A terrain simulation extension is also under development. In this paper we outline the development, capabilities and limitations of the VIRSuite extensions.

  12. Image selection system. [computerized data storage and retrieval system

    NASA Technical Reports Server (NTRS)

    Knutson, M. A.; Hurd, D.; Hubble, L.; Kroeck, R. M.

    1974-01-01

    An image selection system (ISS) was developed for the NASA-Ames Research Center Earth Resources Aircraft Project. The ISS is an interactive, graphics-oriented computer retrieval system for aerial imagery. An analysis of user coverage requests and retrieval strategies is presented, followed by a complete system description. The data base structure, retrieval processors, command language, interactive display options, file structures, and the system's capability to manage sets of selected imagery are described. A detailed example of an area coverage request is presented graphically.

  13. Integrating Landsat-8, Sentinel-2, and nano-satellite data for deriving atmospherically corrected vegetation indices at enhanced spatio-temporal resolution

    NASA Astrophysics Data System (ADS)

    Houborg, Rasmus; McCabe, Matthew F.; Ershadi, Ali

    2017-04-01

    Flocks of nano-satellites are emerging as an economic resource for overcoming the spatio-temporal constraints of conventional single-sensor satellite missions. Planet Labs operates an expanding constellation of currently more than 40 CubeSats (30 × 10 × 10 cm), which will facilitate daily capture of broadband RGB and near-infrared (NIR) imagery for every location on Earth at a 3-5 m ground sampling distance. However, data acquired by these miniaturized satellites lack rigorous radiometric corrections and radiance conversions and should be used in synergy with the high-quality imagery acquired by conventional large satellites such as Landsat-8 (L8) and Sentinel-2 (S2) in order to realize the full potential of this game-changing observational resource. This study integrates L8, S2 and Planet data acquired over sites in Saudi Arabia and the state of California to derive cross-sensor consistent and atmospherically corrected Vegetation Indices (VIs) that may serve as important metrics of vegetation growth, health, and productivity. An automated framework, based on 6S and satellite-retrieved atmospheric state and aerosol inputs, is first applied to L8 and S2 at-sensor radiances to produce atmospherically corrected VIs. Scale-consistent Planet RGB and NIR imagery is then related to the corrected VI data using a selective, scene-specific, and computationally fast machine learning approach. The developed technique uses the closest pair of Planet and L8/S2 scenes to train the predictive VI models and accounts for changes in cover conditions over the acquisition timespan. Application of the models to full-resolution Planet imagery results in cross-sensor consistent VI estimates at the scale and time of the nano-satellite acquisition. The utility of the approach for reproducing spatial features of L8- and S2-based indices from Planet imagery is evaluated. The technique is generic, computationally efficient, and extendable, and is well suited to implementation within a cloud computing framework for processing over larger domains and time intervals.
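    A toy version of the cross-sensor step: compute a VI such as NDVI from the corrected reference scene, then fit a regressor from Planet band values to that reference VI and apply it to the full-resolution Planet scene. Ordinary least squares stands in here for the paper's scene-specific machine-learning model:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index (epsilon avoids 0/0)."""
    return (nir - red) / (nir + red + 1e-12)

def fit_vi_model(planet_bands, target_vi):
    """Least-squares mapping (with intercept) from Planet RGB+NIR values
    to an atmospherically corrected reference VI. The paper trains a
    scene-specific ML regressor; plain least squares is a stand-in."""
    X = np.column_stack([planet_bands, np.ones(len(planet_bands))])
    coef, *_ = np.linalg.lstsq(X, target_vi, rcond=None)
    return coef

def predict_vi(planet_bands, coef):
    """Apply the fitted mapping to (new) Planet pixels."""
    X = np.column_stack([planet_bands, np.ones(len(planet_bands))])
    return X @ coef
```

Training on the closest-in-time Planet and L8/S2 scene pair, as the abstract describes, limits the land-cover change the model has to absorb between the two acquisitions.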

  14. Pure visual imagery as a potential approach to achieve three classes of control for implementation of BCI in non-motor disorders

    NASA Astrophysics Data System (ADS)

    Sousa, Teresa; Amaral, Carlos; Andrade, João; Pires, Gabriel; Nunes, Urbano J.; Castelo-Branco, Miguel

    2017-08-01

    Objective. The achievement of multiple instances of control with the same type of mental strategy represents a way to improve flexibility of brain-computer interface (BCI) systems. Here we test the hypothesis that pure visual motion imagery of an external actuator can be used as a tool to achieve three classes of electroencephalographic (EEG) based control, which might be useful in attention disorders. Approach. We hypothesize that different numbers of imagined motion alternations lead to distinctive signals, as predicted by distinct motion patterns. Accordingly, a distinct number of alternating sensory/perceptual signals would lead to distinct neural responses as previously demonstrated using functional magnetic resonance imaging (fMRI). We anticipate that differential modulations should also be observed in the EEG domain. EEG recordings were obtained from twelve participants using three imagery tasks: imagery of a static dot, imagery of a dot with two opposing motions in the vertical axis (two motion directions) and imagery of a dot with four opposing motions in vertical or horizontal axes (four directions). The data were analysed offline. Main results. An increase of alpha-band power was found in frontal and central channels as a result of visual motion imagery tasks when compared with static dot imagery, in contrast with the expected posterior alpha decreases found during simple visual stimulation. The successful classification and discrimination between the three imagery tasks confirmed that three different classes of control based on visual motion imagery can be achieved. The classification approach was based on a support vector machine (SVM) and on the alpha-band relative spectral power of a small group of six frontal and central channels. Patterns of alpha activity, as captured by single-trial SVM closely reflected imagery properties, in particular the number of imagined motion alternations. Significance. 
    We found a new mental task based on visual motion imagery with potential for the implementation of multiclass (three-class) BCIs. Our results are consistent with the notion that frontal alpha synchronization is related to high internal processing demands, changing with the number of alternation levels during imagery. Together, these findings suggest the feasibility of pure visual motion imagery tasks as a strategy to achieve multiclass control systems, with potential for BCI and, in particular, neurofeedback applications in non-motor (attentional) disorders.
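    A minimal sketch of the decoding pipeline described above: relative alpha-band power per channel as the feature vector and a linear SVM as the three-class classifier. The sampling rate and the synthetic feature layout are our assumptions; only the six-channel, alpha-band, SVM design follows the abstract:

```python
import numpy as np
from sklearn.svm import SVC

def relative_alpha_power(epoch, fs):
    """Relative spectral power in the alpha band (8-12 Hz) for one
    channel epoch, i.e. alpha power divided by total non-DC power."""
    freqs = np.fft.rfftfreq(epoch.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(epoch)) ** 2
    alpha = (freqs >= 8) & (freqs <= 12)
    return psd[alpha].sum() / psd[1:].sum()   # skip the DC bin

def decode_imagery(features_train, labels_train, features_test):
    """Linear SVM over per-channel relative alpha power
    (rows = trials, columns = the six frontal/central channels)."""
    clf = SVC(kernel="linear")
    clf.fit(features_train, labels_train)
    return clf.predict(features_test)
```

With three balanced imagery classes, chance level is 1/3, so any reliably higher single-trial accuracy indicates usable control signals.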

  15. Quantification of shoreline change along Hatteras Island, North Carolina: Oregon Inlet to Cape Hatteras, 1978-2002, and associated vector shoreline data

    USGS Publications Warehouse

    Hapke, Cheryl J.; Henderson, Rachel E.

    2015-01-01

    Shoreline change spanning twenty-four years was assessed along the coastline of Cape Hatteras National Seashore, at Hatteras Island, North Carolina. The shorelines used in the analysis were generated from georeferenced historical aerial imagery and were used to develop shoreline change rates for Hatteras Island from Oregon Inlet to Cape Hatteras. A total of 14 dates of aerial photographs, ranging from 1978 through 2002, were obtained from the U.S. Army Corps of Engineers Field Research Facility in Duck, North Carolina, and scanned to generate digital imagery. The digital imagery was georeferenced, and high water line shorelines (interpreted from the wet/dry line) were digitized from each date to produce a time series of shorelines for the study area. Rates of shoreline change were calculated for three periods: the full span of the time series, 1978 through 2002, and two approximately decadal subsets, 1978–89 and 1989–2002.
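    Shoreline change rates from a dated series of positions along a transect are commonly computed either as an end-point rate (first vs. last shoreline) or as a least-squares regression rate through all dates; the abstract does not state which statistic the report uses, so both are sketched here:

```python
import numpy as np

def end_point_rate(dates_yr, positions_m):
    """Change rate (m/yr) from only the first and last shorelines."""
    return (positions_m[-1] - positions_m[0]) / (dates_yr[-1] - dates_yr[0])

def linear_regression_rate(dates_yr, positions_m):
    """Change rate (m/yr) from a least-squares line through all
    shoreline positions; uses every date in the time series, so one
    anomalous survey influences the rate less than in the end-point case."""
    slope, _intercept = np.polyfit(dates_yr, positions_m, 1)
    return slope
```

A negative rate denotes shoreline retreat (erosion) and a positive rate denotes accretion, given positions measured seaward along the transect.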

  16. Tell Me a Story About Healthy Snacking and I Will Follow: Comparing the Effectiveness of Self-Generated Versus Message-Aided Implementation Intentions on Promoting Healthy Snacking Habits Among College Students.

    PubMed

    Oh, Hyun Jung; Larose, Robert

    2015-01-01

    In the context of healthy snacking, this study examines whether the quality of mental imagery determines the effectiveness of combining the implementation intention (II) intervention with mental imagery. The study further explores whether providing narrative healthy-snacking scenarios prior to forming an II enhances people's mental imagery experience when they are not motivated to snack healthfully. A 2 × 2 factorial design was employed to test the main effect of providing healthy-snacking scenarios prior to II formation, and whether this effect depends on people's motivation level. The results of the experiment (N = 148) showed significant main and interaction effects of the manipulation (with vs. without reading healthy-snacking scenarios prior to II formation) and motivation level on the ease and vividness of mental imagery. A regression model using the experiment and follow-up survey data (n = 128) showed a significant relationship between ease of mental imagery and actual snacking behavior after controlling for habit strength. The findings suggest that adding a narrative message to the II intervention can be useful, especially when the intervention involves mental imagery and engages less motivated people.

  17. Classification of functional near-infrared spectroscopy signals corresponding to the right- and left-wrist motor imagery for development of a brain-computer interface.

    PubMed

    Naseer, Noman; Hong, Keum-Shik

    2013-10-11

    This paper presents a study on functional near-infrared spectroscopy (fNIRS) indicating that the hemodynamic responses of right- and left-wrist motor imagery have distinct patterns that can be classified using a linear classifier for the purpose of developing a brain-computer interface (BCI). Ten healthy participants were instructed to kinesthetically imagine the right- or left-wrist flexion indicated on a computer screen. Signals from the right and left primary motor cortices were acquired simultaneously using a multi-channel continuous-wave fNIRS system. Using two distinct features (the mean and the slope of change in the oxygenated hemoglobin concentration), a linear discriminant analysis classifier was used to classify the right- and left-wrist motor imageries, yielding average classification accuracies of 73.35% and 83.0%, respectively, over the 10 s task period. Moreover, when the analysis was confined to the 2-7 s span within the overall 10 s task period, the average classification accuracies improved to 77.56% and 87.28%, respectively. These results demonstrate the feasibility of an fNIRS-based BCI and the enhanced performance of the classifier when the initial 2 s span and/or the time span after the peak value are removed. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
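    The two features named above, the mean and the slope of the oxygenated-hemoglobin (HbO) change within an analysis window, feed a linear discriminant analysis (LDA) classifier. A sketch under stated assumptions: the sampling rate and the single-channel feature layout are illustrative, and only the 2-7 s window, the mean/slope features, and LDA follow the abstract:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def hbo_features(hbo, fs, t_start=2.0, t_end=7.0):
    """Mean and slope of the HbO concentration change within the
    2-7 s analysis window highlighted in the paper."""
    seg = hbo[int(t_start * fs):int(t_end * fs)]
    t = np.arange(seg.size) / fs
    slope = np.polyfit(t, seg, 1)[0]     # least-squares trend of the window
    return np.array([seg.mean(), slope])

def classify_wrist_imagery(X_train, y_train, X_test):
    """LDA over (mean, slope) feature vectors, one row per trial."""
    lda = LinearDiscriminantAnalysis()
    lda.fit(X_train, y_train)
    return lda.predict(X_test)
```

Dropping the first 2 s, as the paper reports, excludes the sluggish onset of the hemodynamic response, which carries little class information.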

  18. Synthetic Aperture Radar (SAR) data processing

    NASA Technical Reports Server (NTRS)

    Beckner, F. L.; Ahr, H. A.; Ausherman, D. A.; Cutrona, L. J.; Francisco, S.; Harrison, R. E.; Heuser, J. S.; Jordan, R. L.; Justus, J.; Manning, B.

    1978-01-01

    The available and optimal methods for generating SAR imagery for NASA applications were identified. The SAR image quality and data processing requirements associated with these applications were studied. Mathematical operations and algorithms required to process sensor data into SAR imagery were defined. The architecture of SAR image formation processors was discussed, and technology necessary to implement the SAR data processors used in both general purpose and dedicated imaging systems was addressed.

  19. Influence of circadian rhythms on the temporal features of motor imagery for older adult inpatients.

    PubMed

    Rulleau, Thomas; Mauvieux, Benoit; Toussaint, Lucette

    2015-07-01

    To examine circadian modulation of motor imagery quality in older adult inpatients and determine the best time of day to use motor imagery in rehabilitation activities. Time series posttest only. Inpatient rehabilitation center. Participants were older adult inpatients (N=34) hospitalized for diverse geriatric or neurogeriatric reasons. They were able to sit without assistance, manipulate objects, and walk 10 m in less than 30 seconds without technical help or with a walking stick. None. The executed and imagined durations of writing and walking movements were recorded 7 times a day (9:15 am-4:45 pm), at times compatible with the hours of rehabilitation activities. Motor imagery quality was evaluated by computing the isochrony index (i.e., the absolute difference between the average durations of executed and imagined actions) for each trial and each inpatient. The cosinor method was used to analyze the time series for circadian rhythmicity. Imagined movement durations and the isochrony index exhibited circadian modulations, whereas no such rhythmic changes appeared for executed movements. Motor imagery quality was best late in the morning, at approximately 10:18 am and 12:10 pm for writing and walking, respectively. Cognitive and sensorimotor aspects of motor behaviors thus differed among the older adults. The temporal features of motor imagery showed a clear circadian variation. From a practical perspective, this study offers information on an effective schedule for motor imagery in rehabilitation activities with older adult inpatients. Copyright © 2015 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
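    The cosinor method fits a cosine of fixed period (here 24 h) to a time series by linearizing it into cosine and sine regressors and solving by least squares; the fit returns the mesor (rhythm-adjusted mean), amplitude, and acrophase (time of peak). A compact numpy sketch of the standard single-component cosinor:

```python
import numpy as np

def cosinor_fit(times_h, values, period_h=24.0):
    """Single-component cosinor: values ≈ M + A*cos(w*(t - t_peak)),
    with w = 2*pi/period. Linearized as M + b*cos(wt) + c*sin(wt)
    and solved by least squares. Returns (mesor, amplitude,
    acrophase in hours after midnight)."""
    w = 2 * np.pi / period_h
    X = np.column_stack([np.ones_like(times_h),
                         np.cos(w * times_h),
                         np.sin(w * times_h)])
    mesor, b, c = np.linalg.lstsq(X, values, rcond=None)[0]
    amplitude = np.hypot(b, c)
    acrophase_h = (np.arctan2(c, b) / w) % period_h   # time of the fitted peak
    return mesor, amplitude, acrophase_h
```

Applied to the isochrony index sampled 7 times a day, the acrophase of the fitted rhythm is what localizes the best (or worst) time of day for imagery.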

  20. Vehicle classification in WAMI imagery using deep network

    NASA Astrophysics Data System (ADS)

    Yi, Meng; Yang, Fan; Blasch, Erik; Sheaff, Carolyn; Liu, Kui; Chen, Genshe; Ling, Haibin

    2016-05-01

    Humans have always had a keen interest in understanding activities and the surrounding environment for mobility, communication, and survival. Thanks to recent progress in photography and breakthroughs in aviation, we are now able to capture tens of megapixels of ground imagery, namely Wide Area Motion Imagery (WAMI), at multiple frames per second from unmanned aerial vehicles (UAVs). WAMI serves as a great source for many applications, including security, urban planning and route planning. These applications require fast and accurate image understanding, which is time consuming for humans due to the large data volume and city-scale area coverage. Therefore, automatic processing and understanding of WAMI imagery has been gaining attention in both industry and the research community. This paper focuses on an essential step in WAMI imagery analysis, namely vehicle classification: deciding whether a certain image patch contains a vehicle or not. We collect a set of positive and negative sample image patches for training and testing the detector. Positive samples are 64 × 64 image patches centered on annotated vehicles. We generate two sets of negative images. The first set is generated from positive images with some location shift. The second set of negative patches is generated from randomly sampled patches; we discard a patch if a vehicle happens to lie at its center. Both positive and negative samples are randomly divided into 9,000 training images and 3,000 testing images. We propose to train a deep convolutional network to classify these patches. The classifier is based on a pre-trained AlexNet model in the Caffe library, with a loss function adapted for vehicle classification. The performance of our classifier is compared to several traditional image classification methods using Support Vector Machine (SVM) and Histogram of Oriented Gradient (HOG) features. While the SVM+HOG method achieves an accuracy of 91.2%, the accuracy of our deep network-based classifier reaches 97.9%.
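    The positive/negative patch sampling described above can be sketched directly. The grayscale frame, the rejection radius used to keep accidental vehicles out of the negatives, and the random-number handling are our assumptions for illustration; the 64 × 64 patch size follows the abstract:

```python
import numpy as np

def extract_patches(frame, vehicle_centers, n_negative,
                    patch=64, min_dist=32, seed=0):
    """Build positive patches centred on annotated vehicle locations
    and random negative patches, rejecting any candidate negative whose
    centre falls within min_dist (Chebyshev) of a vehicle."""
    rng = np.random.default_rng(seed)
    h, w = frame.shape[:2]
    half = patch // 2
    pos = [frame[r - half:r + half, c - half:c + half]
           for r, c in vehicle_centers
           if half <= r <= h - half and half <= c <= w - half]
    neg = []
    while len(neg) < n_negative:
        r = int(rng.integers(half, h - half))
        c = int(rng.integers(half, w - half))
        if all(max(abs(r - vr), abs(c - vc)) >= min_dist
               for vr, vc in vehicle_centers):
            neg.append(frame[r - half:r + half, c - half:c + half])
    return np.array(pos), np.array(neg)
```

The shifted-positive negatives mentioned in the abstract (not sketched here) additionally teach the network that a vehicle must sit at the patch centre, sharpening localization.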

  1. The sensory strength of voluntary visual imagery predicts visual working memory capacity.

    PubMed

    Keogh, Rebecca; Pearson, Joel

    2014-10-09

    How much we can actively hold in mind is severely limited and differs greatly from one person to the next. Why some individuals have greater capacities than others is largely unknown. Here, we investigated why such large variations in visual working memory (VWM) capacity might occur by examining the relationship between visual working memory and visual mental imagery. To assess visual working memory capacity, participants were required to remember the orientation of a number of Gabor patches and make subsequent judgments about relative changes in orientation. The sensory strength of voluntary imagery was measured using a previously documented binocular rivalry paradigm. Participants with greater imagery strength also had greater visual working memory capacity; however, they were no better on a verbal number working memory task. Introducing a uniform luminous background during the retention interval of the visual working memory task reduced memory capacity, but only for those with strong imagery. Likewise, for the good imagers, increasing background luminance during imagery generation reduced its effect on subsequent binocular rivalry. Luminance increases did not affect any of the subgroups on the verbal number working memory task. Together, these results suggest that luminance was disrupting sensory mechanisms common to both visual working memory and imagery, and not a general working memory system. The selectivity of the background-luminance disruption suggests that good imagers, unlike moderate or poor imagers, may use imagery as a mnemonic strategy to perform the visual working memory task. © 2014 ARVO.

  2. LANDSAT-4 MSS Geometric Correction: Methods and Results

    NASA Technical Reports Server (NTRS)

    Brooks, J.; Kimmer, E.; Su, J.

    1984-01-01

    An automated image registration system such as that developed for LANDSAT-4 can produce all of the information needed to verify and calibrate the software and to evaluate system performance. The on-line MSS archive generation process, which upgrades systematic correction data to geodetic correction data, is described, as is the control point library build subsystem, which generates control point chips and support data for on-line upgrade of correction data. System performance was evaluated for both temporal and geodetic registration. For temporal registration, 90% errors were computed to be 0.36 IFOV (instantaneous field of view; 1 IFOV = 82.7 meters) cross track and 0.29 IFOV along track. For the actual production runs monitored, the 90% errors were 0.29 IFOV cross track and 0.25 IFOV along track. The system specification is 0.3 IFOV, 90% of the time, both cross and along track. For geodetic registration performance, the model bias was measured by designating control points in the geodetically corrected imagery.

  3. Compact time- and space-integrating SAR processor: performance analysis

    NASA Astrophysics Data System (ADS)

    Haney, Michael W.; Levy, James J.; Michael, Robert R., Jr.; Christensen, Marc P.

    1995-06-01

    Progress made during the previous 12 months toward the fabrication and test of a flight demonstration prototype of the acousto-optic time- and space-integrating real-time SAR image formation processor is reported. Compact, rugged, and low-power analog optical signal processing techniques are used for the most computationally taxing portions of the SAR imaging problem to overcome the size and power consumption limitations of electronic approaches. Flexibility and performance are maintained by the use of digital electronics for the critical low-complexity filter generation and output image processing functions. The results reported for this year include tests of a laboratory version of the RAPID SAR concept on phase history data generated from real high-resolution SAR imagery; a description of the new compact 2D acousto-optic scanner, with a 2D space-bandwidth product approaching 10^6 spots, specified and procured from NEOS Technologies during the last year; and a design and layout of the optical module portion of the flight-worthy prototype.

  4. The effect of auditory verbal imagery on signal detection in hallucination-prone individuals

    PubMed Central

    Moseley, Peter; Smailes, David; Ellison, Amanda; Fernyhough, Charles

    2016-01-01

    Cognitive models have suggested that auditory hallucinations occur when internal mental events, such as inner speech or auditory verbal imagery (AVI), are misattributed to an external source. This has been supported by numerous studies indicating that individuals who experience hallucinations tend to perform in a biased manner on tasks that require them to distinguish self-generated from non-self-generated perceptions. However, these tasks have typically been of limited relevance to inner speech models of hallucinations, because they have not manipulated the AVI that participants used during the task. Here, a new paradigm was employed to investigate the interaction between imagery and perception, in which a healthy, non-clinical sample of participants were instructed to use AVI whilst completing an auditory signal detection task. It was hypothesized that AVI-usage would cause participants to perform in a biased manner, therefore falsely detecting more voices in bursts of noise. In Experiment 1, when cued to generate AVI, highly hallucination-prone participants showed a lower response bias than when performing a standard signal detection task, being more willing to report the presence of a voice in the noise. Participants not prone to hallucinations performed no differently between the two conditions. In Experiment 2, participants were not specifically instructed to use AVI, but retrospectively reported how often they engaged in AVI during the task. Highly hallucination-prone participants who retrospectively reported using imagery showed a lower response bias than did participants with lower proneness who also reported using AVI. Results are discussed in relation to prominent inner speech models of hallucinations. PMID:26435050
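    In a yes/no auditory signal detection task like the one above, "response bias" is usually summarized by the criterion c alongside sensitivity d': a lower (more negative) c means a more liberal bias, i.e. a greater willingness to report a voice in the noise. A scipy-based sketch; the log-linear correction for extreme hit/false-alarm rates is our choice, not necessarily the authors':

```python
import numpy as np
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity d' and response bias c from raw
    trial counts, with a log-linear (+0.5) correction so rates of
    exactly 0 or 1 stay finite under the z-transform."""
    hr = (hits + 0.5) / (hits + misses + 1.0)                 # hit rate
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    zh, zf = norm.ppf(hr), norm.ppf(far)                      # z-transforms
    d_prime = zh - zf
    criterion = -0.5 * (zh + zf)
    return d_prime, criterion
```

The finding above corresponds to hallucination-prone participants showing a lower c when cued to use auditory verbal imagery, with no claim about d' being required.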

  5. Comparing synthetic imagery with real imagery for visible signature analysis: human observer results

    NASA Astrophysics Data System (ADS)

    Culpepper, Joanne B.; Richards, Noel; Madden, Christopher S.; Winter, Neal; Wheaton, Vivienne C.

    2017-10-01

    Synthetic imagery could potentially enhance visible signature analysis by providing a wider range of target images, in differing environmental conditions, than would be feasible to collect in field trials. Achieving this requires a method for generating synthetic imagery that is both verified to be realistic and produces the same visible signature analysis results as real images. Is target detectability, as measured by image metrics, the same for real images and synthetic images of the same scene? Is target detectability, as measured by human observer trials, the same for real images and synthetic images of the same scene, and how realistic do the synthetic images need to be? In this paper we present the results of a small-scale exploratory study of the second question: a photosimulation experiment conducted using digital photographs and synthetic images generated of the same scene. Two sets of synthetic images were created: a high-fidelity set created using an image generation tool, E-on Vue, and a low-fidelity set created using a gaming engine, Unity 3D. The target detection results obtained using digital photographs were compared with those obtained using the two sets of synthetic images. There was a moderate correlation between the high-fidelity synthetic image set and the real images in both the probability of correct detection (Pd: PCC = 0.58, SCC = 0.57) and mean search time (MST: PCC = 0.63, SCC = 0.61). There was no correlation between the low-fidelity synthetic image set and the real images for Pd, but a moderate correlation for MST (PCC = 0.67, SCC = 0.55).
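    The PCC and SCC values quoted above are plain Pearson and Spearman correlations between observer results (per-scene Pd or MST) on the real and synthetic image sets, and can be computed directly with scipy:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def detection_agreement(real_scores, synthetic_scores):
    """Pearson (PCC) and Spearman (SCC) correlation between per-scene
    observer results (e.g. Pd or mean search time) on the real and
    synthetic versions of the same scenes."""
    pcc = pearsonr(real_scores, synthetic_scores)[0]
    scc = spearmanr(real_scores, synthetic_scores)[0]
    return pcc, scc
```

Reporting both is informative because Spearman only requires the synthetic imagery to preserve the rank ordering of scene difficulty, while Pearson additionally penalizes nonlinear distortions of the scores.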

  6. Horizon: The Portable, Scalable, and Reusable Framework for Developing Automated Data Management and Product Generation Systems

    NASA Astrophysics Data System (ADS)

    Huang, T.; Alarcon, C.; Quach, N. T.

    2014-12-01

    Capture, curation, and analysis are the typical activities performed at any given Earth science data center. Modern data management systems must be adaptable to heterogeneous science data formats, scalable to meet a mission's quality-of-service requirements, and able to manage the life cycle of any given science data product. Designing a scalable data management system doesn't happen overnight. It takes countless hours of refining, refactoring, retesting, and re-architecting. The Horizon data management and workflow framework, developed at the Jet Propulsion Laboratory, is a portable, scalable, and reusable framework for developing high-performance data management and product generation workflow systems that automate data capture, curation, and analysis activities. NASA's Physical Oceanography Distributed Active Archive Center (PO.DAAC) relies on its Data Management and Archive System (DMAS), an application of the Horizon framework, as the core data infrastructure that handles the capture and distribution of hundreds of thousands of satellite observations each day, around the clock. The NASA Global Imagery Browse Services (GIBS) is the Earth Observing System Data and Information System (EOSDIS)'s solution for making high-resolution global imagery available to the science communities. The Imagery Exchange (TIE), another application of the Horizon framework, is a core GIBS subsystem responsible for automating data capture and imagery generation in support of the 12 EOSDIS distributed active archive centers and 17 Science Investigator-led Processing Systems (SIPS). This presentation discusses our ongoing effort in refining, refactoring, retesting, and re-architecting the Horizon framework to enable data-intensive science and its applications.

  7. The Design of a High Performance Earth Imagery and Raster Data Management and Processing Platform

    NASA Astrophysics Data System (ADS)

    Xie, Qingyun

    2016-06-01

    This paper summarizes the general requirements and specific characteristics of both a geospatial raster database management system and a raster data processing platform, from a domain-specific perspective as well as from a computing point of view. It also discusses the need for tight integration between the database system and the processing system. These requirements resulted in Oracle Spatial GeoRaster, a global-scale, high-performance earth imagery and raster data management and processing platform. The rationale, design, implementation, and benefits of Oracle Spatial GeoRaster are described. As a database management system, GeoRaster defines an integrated raster data model and supports image compression, data manipulation, general and spatial indices, content- and context-based queries and updates, versioning, concurrency, security, replication, standby, backup and recovery, multitenancy, and ETL. It provides high scalability using computer and storage clustering. As a raster data processing platform, GeoRaster provides basic operations, image processing, raster analytics, and data distribution featuring high-performance computing (HPC). Specifically, HPC features include locality computing, concurrent processing, parallel processing, and in-memory computing. In addition, the APIs and the plug-in architecture are discussed.

  8. Multispectral image analysis for object recognition and classification

    NASA Astrophysics Data System (ADS)

    Viau, C. R.; Payeur, P.; Cretu, A.-M.

    2016-05-01

    Computer and machine vision applications are used in numerous fields to analyze static and dynamic imagery in order to assist or automate decision-making processes. Advancements in sensor technologies now make it possible to capture and visualize imagery at various wavelengths (or bands) of the electromagnetic spectrum. Multispectral imaging has countless applications in various fields including (but not limited to) security, defense, space, medical, manufacturing and archeology. The development of advanced algorithms to process and extract salient information from the imagery is a critical component of the overall system performance. The fundamental objective of this research project was to investigate the benefits of combining imagery from the visual and thermal bands of the electromagnetic spectrum to improve the recognition rates and accuracy of commonly found objects in an office setting. A multispectral dataset (visual and thermal) was captured and features from the visual and thermal images were extracted and used to train support vector machine (SVM) classifiers. The SVM's class prediction ability was evaluated separately on the visual, thermal and multispectral testing datasets.
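    The fusion experiment can be sketched as a comparison of cross-validated SVM accuracy on visual-only features, thermal-only features, and their concatenation (the multispectral case). The feature dimensions, the linear kernel, and the 3-fold protocol are illustrative assumptions; the SVM-per-modality design follows the abstract:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def fused_accuracy(visual_feats, thermal_feats, labels):
    """Cross-validated SVM accuracy for visual-only, thermal-only, and
    concatenated (multispectral) feature vectors, one row per object
    instance; the study's hypothesis is that fusion helps recognition."""
    results = {}
    for name, X in [("visual", visual_feats),
                    ("thermal", thermal_feats),
                    ("fused", np.hstack([visual_feats, thermal_feats]))]:
        results[name] = cross_val_score(SVC(kernel="linear"),
                                        X, labels, cv=3).mean()
    return results
```

Simple concatenation is the most basic fusion scheme; it benefits recognition when the two bands carry complementary cues, e.g. texture in the visual band and emissivity contrast in the thermal band.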

  9. Detection of Hail Storms in Radar Imagery Using Deep Learning

    NASA Technical Reports Server (NTRS)

    Pullman, Melinda; Gurung, Iksha; Ramachandran, Rahul; Maskey, Manil

    2017-01-01

    In 2016, hail was responsible for $3.5 billion and $23 million in damage to property and crops, respectively, making it the second costliest weather phenomenon in the United States. In an effort to improve hail-prediction techniques and reduce the societal impacts associated with hail storms, we propose a deep learning technique that leverages radar imagery for automatic detection of hail storms. The technique is applied to radar imagery from 2011 to 2016 for the contiguous United States and achieved a precision of 0.848. Hail storms are primarily detected through the visual interpretation of radar imagery (Mroz et al., 2017). With radars providing data every two minutes, the detection of hail storms has become a big data task. As a result, scientists have turned to neural networks that employ computer vision to identify hail-bearing storms (Marzban et al., 2001). In this study, we propose a deep Convolutional Neural Network (ConvNet) to understand the spatial features and patterns of radar echoes for detecting hailstorms.
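    For reference, the precision reported above (0.848) is the fraction of detected storms that truly contained hail. A minimal sketch of the metric, with invented labels rather than the study's data:

```python
# precision = true positives / (true positives + false positives)
def precision(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fp) if (tp + fp) else 0.0

# 1 = hail present in the radar scene, 0 = no hail (toy values).
truth      = [1, 1, 0, 0, 1, 0, 1, 0]
detections = [1, 0, 0, 1, 1, 0, 1, 0]
print(precision(truth, detections))  # 3 of 4 detections are correct -> 0.75
```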

  10. Detection and identification of benthic communities and shoreline features in Biscayne Bay

    NASA Technical Reports Server (NTRS)

    Kolipinski, M. C.; Higer, A. L.

    1970-01-01

    Progress made in the development of a technique for identifying and delineating benthic and shoreline communities using multispectral imagery is described. Images were collected with a multispectral scanner system mounted in a C-47 aircraft. Concurrent with the overflight, ecological ground- and sea-truth information was collected at 19 sites in the bay and on the shore. Preliminary processing of the scanner imagery with a CDC 1604 digital computer provided the optimum channels for discernment among different underwater and coastal objects. Automatic mapping of the benthic plants by multiband imagery and the mapping of isotherms and hydrodynamic parameters by digital model can become an effective predictive ecological tool when coupled together. Using the two systems, it appears possible to predict conditions that could adversely affect the benthic communities. With the advent of the ERTS satellites and space platforms, imagery data could be obtained which, when used in conjunction with water-level and meteorological data, would provide for continuous ecological monitoring.

  11. Robotic Vision-Based Localization in an Urban Environment

    NASA Technical Reports Server (NTRS)

    Mchenry, Michael; Cheng, Yang; Matthies

    2007-01-01

    A system of electronic hardware and software, now undergoing development, automatically estimates the location of a robotic land vehicle in an urban environment using a somewhat imprecise map, which has been generated in advance from aerial imagery. This system does not utilize the Global Positioning System and does not include any odometry, inertial measurement units, or any other sensors except a stereoscopic pair of black-and-white digital video cameras mounted on the vehicle. Of course, the system also includes a computer running software that processes the video image data. The software consists mostly of three components corresponding to the three major image-data-processing functions. Visual Odometry: this component automatically tracks point features in the imagery and computes the relative motion of the cameras between sequential image frames. It incorporates a modified version of a visual-odometry algorithm originally published in 1989. The algorithm selects point features, performs multiresolution area-correlation computations to match the features in stereoscopic images, tracks the features through the sequence of images, and uses the tracking results to estimate the six-degree-of-freedom motion of the camera between consecutive stereoscopic pairs of images (see figure). Urban Feature Detection and Ranging: using the same data as those processed by the visual-odometry component, this component strives to determine the three-dimensional (3D) coordinates of vertical and horizontal lines that are likely to be parts of, or close to, the exterior surfaces of buildings. The basic sequence of processes performed by this component is the following: (1) an edge-detection algorithm is applied, yielding a set of linked lists of edge pixels, a horizontal-gradient image, and a vertical-gradient image; (2) straight-line segments of edges are extracted from the linked lists generated in step 1, and any segments longer than an arbitrary threshold (e.g., 30 pixels) are provisionally attributed to buildings or other artificial objects; (3) a gradient-filter algorithm tests these segments to determine whether they represent edges of natural or artificial objects. In somewhat oversimplified terms, the test is based on the assumption that the gradient of image intensity varies little along a segment that represents the edge of an artificial object.
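    The gradient-filter test can be illustrated with a small sketch. The synthetic image, the pixel lists, and the use of the coefficient of variation as the "varies little" criterion are assumptions for illustration, not the mission code:

```python
# Score an edge segment by how little the gradient magnitude varies along it:
# a crisp man-made edge has a strong, nearly constant gradient.
import numpy as np

def gradient_cv(image, pixels):
    """Coefficient of variation of gradient magnitude along (row, col) pixels."""
    gy, gx = np.gradient(image.astype(float))
    mags = np.array([np.hypot(gx[r, c], gy[r, c]) for r, c in pixels])
    return float(mags.std() / mags.mean())

img = np.zeros((40, 40))
img[:, 20:] = 100.0                                   # crisp vertical edge
img += np.random.default_rng(1).normal(0, 1, img.shape)

cv_edge    = gradient_cv(img, [(r, 20) for r in range(5, 35)])  # along the edge
cv_texture = gradient_cv(img, [(r, 5) for r in range(5, 35)])   # noisy background

print(cv_edge < cv_texture)  # the building-like edge varies far less
```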

  12. Activation of the Parieto-Premotor Network Is Associated with Vivid Motor Imagery—A Parametric fMRI Study

    PubMed Central

    Lorey, Britta; Pilgramm, Sebastian; Bischoff, Matthias; Stark, Rudolf; Vaitl, Dieter; Kindermann, Stefan; Munzert, Jörn; Zentgraf, Karen

    2011-01-01

    The present study examined the neural basis of vivid motor imagery using parametric functional magnetic resonance imaging (fMRI). Twenty-two participants performed motor imagery (MI) of six different right-hand movements that differed in pointing-accuracy demands and object involvement, i.e., either none, two big, or two small squares had to be pointed at in alternation, either with or without an object grasped with the fingers. After each imagery trial, participants rated the perceived vividness of motor imagery on a 7-point scale. Results showed that increased perceived imagery vividness was parametrically associated with increasing neural activation within the left putamen, the left premotor cortex (PMC), the posterior parietal cortex of the left hemisphere, the left primary motor cortex, the left somatosensory cortex, and the left cerebellum. Within the right hemisphere, activation was found within the right cerebellum, the right putamen, and the right PMC. It is concluded that the perceived vividness of MI is parametrically associated with neural activity within sensorimotor areas. The results corroborate the hypothesis that MI is an outcome of neural computations based on movement representations located within motor areas. PMID:21655298

  13. Classification of prefrontal activity due to mental arithmetic and music imagery using hidden Markov models and frequency domain near-infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Power, Sarah D.; Falk, Tiago H.; Chau, Tom

    2010-04-01

    Near-infrared spectroscopy (NIRS) has recently been investigated as a non-invasive brain-computer interface (BCI). In particular, previous research has shown that NIRS signals recorded from the motor cortex during left- and right-hand imagery can be distinguished, providing a basis for a two-choice NIRS-BCI. In this study, we investigated the feasibility of an alternative two-choice NIRS-BCI paradigm based on the classification of prefrontal activity due to two cognitive tasks, specifically mental arithmetic and music imagery. Deploying a dual-wavelength frequency domain near-infrared spectrometer, we interrogated nine sites around the frontopolar locations (International 10-20 System) while ten able-bodied adults performed mental arithmetic and music imagery within a synchronous shape-matching paradigm. With the 18 filtered AC signals, we created task- and subject-specific maximum likelihood classifiers using hidden Markov models. Mental arithmetic and music imagery were classified with an average accuracy of 77.2% ± 7.0 across participants, with all participants significantly exceeding chance accuracies. The results suggest the potential of a two-choice NIRS-BCI based on cognitive rather than motor tasks.
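    The maximum-likelihood HMM decision can be sketched with a toy example. Real NIRS signals are continuous and the study trained task- and subject-specific models; here observations are quantized to a 3-symbol alphabet purely to keep the forward algorithm short, and every parameter is invented:

```python
# Classify a sequence by comparing its likelihood under two HMMs, one per task.
import numpy as np

def log_likelihood(obs, pi, A, B):
    """Forward algorithm: log P(obs | HMM with start pi, transitions A, emissions B)."""
    alpha = np.log(pi) + np.log(B[:, obs[0]])
    for o in obs[1:]:
        alpha = np.log(np.exp(alpha) @ A) + np.log(B[:, o])
    return float(np.logaddexp.reduce(alpha))

pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1],
              [0.1, 0.9]])
B_arith = np.array([[0.7, 0.2, 0.1],   # hypothetical "mental arithmetic" model
                    [0.1, 0.2, 0.7]])
B_music = np.array([[0.2, 0.6, 0.2],   # hypothetical "music imagery" model
                    [0.1, 0.2, 0.7]])

obs = [0, 0, 1, 0, 0, 0, 1, 0]         # a low-symbol-dominated sequence
task = ("mental arithmetic"
        if log_likelihood(obs, pi, A, B_arith) > log_likelihood(obs, pi, A, B_music)
        else "music imagery")
print(task)
```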

  14. GIS Integration for Quantitatively Determining the Capabilities of Five Remote Sensors for Resource Exploration

    NASA Technical Reports Server (NTRS)

    Pascucci, R. F.; Smith, A.

    1982-01-01

    To assist the U.S. Geological Survey in carrying out a Congressional mandate to investigate the use of side-looking airborne radar (SLAR) for resources exploration, a research program was conducted to define the contribution of SLAR imagery to structural geologic mapping and to compare this with contributions from other remote sensing systems. Imagery from two SLAR systems and from three other remote sensing systems was interpreted, and the resulting information was digitized, quantified and intercompared using a computer-assisted geographic information system (GIS). The study area covers approximately 10,000 square miles within the Naval Petroleum Reserve, Alaska, and is situated between the foothills of the Brooks Range and the North Slope. The principal objectives were: (1) to establish quantitatively, the total information contribution of each of the five remote sensing systems to the mapping of structural geology; (2) to determine the amount of information detected in common when the sensors are used in combination; and (3) to determine the amount of unique, incremental information detected by each sensor when used in combination with others. The remote sensor imagery that was investigated included real-aperture and synthetic-aperture radar imagery, standard and digitally enhanced LANDSAT MSS imagery, and aerial photos.

  15. Intense imagery movements: a common and distinct paediatric subgroup of motor stereotypies.

    PubMed

    Robinson, Sally; Woods, Martin; Cardona, Francesco; Baglioni, Valentina; Hedderly, Tammy

    2014-12-01

    The aim of this article is to describe a subgroup of children who presented with stereotyped movements in the context of episodes of intense imagery. This is of relevance to current discussions regarding the clinical usefulness of diagnosing motor stereotypies during development. The sample consisted of 10 children (nine males, one female; mean age 8y 6mo [SD 2y 5mo], range 6-15y). Referrals were from acute paediatricians, neurologists, and tertiary epilepsy services. Children were assessed by multidisciplinary teams with expertise in paediatric movement disorders. Stereotypies presented as paroxysmal complex movements involving upper and lower limbs. Imagery themes typically included computer games (60%), cartoons/films (40%), and fantasy scenes (30%). Comorbid developmental difficulties were reported for 80% of children. Brain imaging and electrophysiological investigations had been conducted for 50% of the children before referral to the clinic. The descriptive term 'intense imagery movements' (IIM) was applied if (after interview) the children reported engaging in acts of imagery while performing stereotyped movements. We believe these children may form a common and discrete stereotypy subgroup, with the concept of IIM being clinically useful to ensure the accurate diagnosis and clinical management of this paediatric movement disorder. © 2014 Mac Keith Press.

  16. The users, uses, and value of Landsat and other moderate-resolution satellite imagery in the United States-Executive report

    USGS Publications Warehouse

    Miller, Holly M.; Sexton, Natalie R.; Koontz, Lynne; Loomis, John; Koontz, Stephen R.; Hermans, Caroline

    2011-01-01

    Moderate-resolution imagery (MRI), such as that provided by the Landsat satellites, provides unique spatial information for use by many people both within and outside of the United States (U.S.). However, exactly who these users are, how they use the imagery, and the value and benefits derived from the information are, to a large extent, unknown. To explore these issues, social scientists at the USGS Fort Collins Science Center conducted a study of U.S.-based MRI users from 2008 through 2010 in two parts: (1) a user identification and (2) a user survey. The objectives of this study were to: (1) identify and classify U.S.-based users of this imagery; (2) better understand how and why MRI, and specifically Landsat, is being used; and (3) qualitatively and quantitatively measure the value and societal benefits of MRI (focusing on Landsat specifically). The results of the survey revealed that respondents from multiple sectors use Landsat imagery in many different ways, as demonstrated by the breadth of project locations and scales, as well as application areas. The value of Landsat imagery to these users was demonstrated by the high importance placed on the imagery, the numerous benefits received from projects using Landsat imagery, the negative impacts if Landsat imagery were no longer available, and the substantial willingness to pay for replacement imagery in the event of a data gap. The survey collected information from users who are both part of and apart from the known user community. The diversity of the sample delivered results that provide a baseline of knowledge about the users, uses, and value of Landsat imagery. While the results supply a wealth of information on their own, they can also be built upon through further research to generate a more complete picture of the population of Landsat users as a whole.

  17. The Sargassum Early Advisory System (SEAS)

    NASA Astrophysics Data System (ADS)

    Armstrong, D.; Gallegos, S. C.

    2016-02-01

    The Sargassum Early Advisory System (SEAS) web app was designed to automatically detect Sargassum at sea, forecast movement of the seaweed, and alert users of potential landings. Inspired to help address the economic hardships caused by large landings of Sargassum, the web app automates and enhances the manual tasks conducted by the SEAS group of Texas A&M University at Galveston. The SEAS web app is a modular, mobile-friendly tool that automates the entire workflow from data acquisition to user management. The modules include: (1) an Imagery Retrieval Module to automatically download Landsat-8 Operational Land Imager (OLI) imagery from the United States Geological Survey (USGS); (2) a Processing Module for automatic detection of Sargassum in the OLI imagery and subsequent mapping of these patches onto the HYbrid Coordinate Ocean Model (HYCOM) grid, producing maps that show Sargassum clusters; (3) a forecasting engine fed by HYCOM model currents and winds from weather buoys; and (4) a mobile-phone-optimized geospatial user interface. The user can view the last known position of Sargassum clusters as well as trajectory and location projections for the next 24, 72, and 168 hours. Users can also subscribe to alerts generated for particular areas. Currently, the SEAS web app produces advisories for Texas beaches. The forecasted Sargassum landing locations are validated by reports from Texas beach managers. However, the SEAS web app was designed to expand easily to other areas, and future plans call for extending it to Mexico and the Caribbean islands. The SEAS web app development is led by NASA, with participation by ASRC Federal/Computer Science Corporation and the Naval Research Laboratory, all at Stennis Space Center, and Texas A&M University at Galveston.

  18. Trial-by-trial adaptation of movements during mental practice under force field.

    PubMed

    Anwar, Muhammad Nabeel; Khan, Salman Hameed

    2013-01-01

    The human nervous system tries to minimize the effect of any external perturbing force by modifying its internal model. These modifications affect the subsequent motor commands generated by the nervous system. Adaptive compensation, along with appropriate modification of the internal model, helps reduce human movement errors. In the current study, we examined how motor imagery influences trial-to-trial learning in a robot-based adaptation task. Two groups of subjects performed reaching movements, with or without motor imagery, in a velocity-dependent force field. The results show that reaching movements performed with motor imagery exhibit a relatively more focused generalization pattern and a higher learning rate in the training direction.

  19. Google Earth Engine: a new cloud-computing platform for global-scale earth observation data and analysis

    NASA Astrophysics Data System (ADS)

    Moore, R. T.; Hansen, M. C.

    2011-12-01

    Google Earth Engine is a new technology platform that enables monitoring and measurement of changes in the earth's environment, at planetary scale, on a large catalog of earth observation data. The platform offers intrinsically-parallel computational access to thousands of computers in Google's data centers. Initial efforts have focused primarily on global forest monitoring and measurement, in support of REDD+ activities in the developing world. The intent is to put this platform into the hands of scientists and developing world nations, in order to advance the broader operational deployment of existing scientific methods, and strengthen the ability for public institutions and civil society to better understand, manage and report on the state of their natural resources. Earth Engine currently hosts online nearly the complete historical Landsat archive of L5 and L7 data collected over more than twenty-five years. Newly-collected Landsat imagery is downloaded from USGS EROS Center into Earth Engine on a daily basis. Earth Engine also includes a set of historical and current MODIS data products. The platform supports generation, on-demand, of spatial and temporal mosaics, "best-pixel" composites (for example to remove clouds and gaps in satellite imagery), as well as a variety of spectral indices. Supervised learning methods are available over the Landsat data catalog. The platform also includes a new application programming framework, or "API", that allows scientists access to these computational and data resources, to scale their current algorithms or develop new ones. Under the covers of the Google Earth Engine API is an intrinsically-parallel image-processing system. Several forest monitoring applications powered by this API are currently in development and expected to be operational in 2011. 
Combining science with massive data and technology resources in a cloud-computing framework can offer advantages of computational speed, ease-of-use and collaboration, as well as transparency in data and methods. Methods developed for global processing of MODIS data to map land cover are being adopted for use with Landsat data. Specifically, the MODIS Vegetation Continuous Field product methodology has been applied for mapping forest extent and change at national scales using Landsat time-series data sets. Scaling this method to continental and global scales is enabled by Google Earth Engine computing capabilities. By combining the supervised learning VCF approach with the Landsat archive and cloud computing, unprecedented monitoring of land cover dynamics is enabled.
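    The "best-pixel" compositing idea mentioned above can be sketched with plain NumPy: stack repeated acquisitions of the same scene, mask cloudy pixels, and keep a per-pixel median. The tiny arrays are illustrative stand-ins, not Earth Engine API calls:

```python
# Per-pixel median composite over a stack of scenes; np.nan marks clouds.
import numpy as np

# Three hypothetical 3x3 reflectance scenes of the same area.
scenes = np.array([
    [[0.10, 0.12, np.nan], [0.11, np.nan, 0.13], [0.10, 0.12, 0.11]],
    [[0.11, np.nan, 0.12], [0.10, 0.12, np.nan], [np.nan, 0.11, 0.12]],
    [[np.nan, 0.11, 0.13], [0.12, 0.11, 0.12], [0.11, 0.13, np.nan]],
])

composite = np.nanmedian(scenes, axis=0)   # cloud-free value per pixel
print(composite.round(3))
```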

  20. Mapping broom snakeweed through image analysis of color-infrared photography and digital imagery.

    PubMed

    Everitt, J H; Yang, C

    2007-11-01

    A study was conducted on a south Texas rangeland area to evaluate aerial color-infrared (CIR) photography and CIR digital imagery combined with unsupervised image analysis techniques to map broom snakeweed [Gutierrezia sarothrae (Pursh.) Britt. and Rusby]. Accuracy assessments performed on computer-classified maps of photographic images from two sites had mean producer's and user's accuracies for broom snakeweed of 98.3 and 88.3%, respectively; whereas, accuracy assessments performed on classified maps from digital images of the same two sites had mean producer's and user's accuracies for broom snakeweed of 98.3 and 92.8%, respectively. These results indicate that CIR photography and CIR digital imagery combined with image analysis techniques can be used successfully to map broom snakeweed infestations on south Texas rangelands.
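    The producer's and user's accuracies quoted above come from a standard confusion-matrix assessment: for a class, producer's accuracy is correct pixels divided by the reference (ground-truth) total, and user's accuracy is correct pixels divided by the total the map assigned to that class. A sketch with an invented matrix:

```python
import numpy as np

# Rows = map class, columns = reference class: [snakeweed, other] (toy counts).
confusion = np.array([
    [59,   5],   # mapped as snakeweed
    [ 1, 135],   # mapped as other
])

correct = np.diag(confusion)
producers = correct / confusion.sum(axis=0)  # column totals: reference counts
users     = correct / confusion.sum(axis=1)  # row totals: mapped counts

print(f"producer's accuracy (snakeweed): {producers[0]:.1%}")
print(f"user's accuracy (snakeweed): {users[0]:.1%}")
```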

  1. Onboard Algorithms for Data Prioritization and Summarization of Aerial Imagery

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.; Hayden, David; Thompson, David R.; Castano, Rebecca

    2013-01-01

    Many current and future NASA missions are capable of collecting enormous amounts of data, of which only a small portion can be transmitted to Earth. Communications are limited due to distance, visibility constraints, and competing mission downlinks. Long missions and high-resolution, multispectral imaging devices easily produce data exceeding the available bandwidth. To address this situation, computationally efficient algorithms were developed for analyzing science imagery onboard the spacecraft. These algorithms autonomously cluster the data into classes of similar imagery, enabling selective downlink of representatives of each class and a map classifying the imaged terrain rather than the full dataset, thereby reducing the volume of the downlinked data. A range of approaches was examined, including k-means clustering using image features based on color, texture, and temporal and spatial arrangement.
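    The cluster-then-downlink-representatives idea can be sketched with k-means (here via scikit-learn, which the flight software would not use); the per-image feature vectors are synthetic:

```python
# Cluster image feature vectors, then keep only the image nearest each
# centroid as the downlink representative of its class.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# 60 hypothetical images, each summarized by a 6-d color/texture feature vector.
features = np.vstack([rng.normal(loc=c, scale=0.3, size=(20, 6))
                      for c in (0.0, 2.0, 4.0)])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
reps = [int(np.argmin(np.linalg.norm(features - c, axis=1)))
        for c in km.cluster_centers_]
print(f"downlink {len(reps)} of {len(features)} images:", sorted(reps))
```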

  2. Short-term kinesthetic training for sensorimotor rhythms: effects in experts and amateurs.

    PubMed

    Zapała, Dariusz; Zabielska-Mendyk, Emilia; Cudo, Andrzej; Krzysztofiak, Agnieszka; Augustynowicz, Paweł; Francuz, Piotr

    2015-01-01

    The authors' aim was to examine whether short-term kinesthetic training affects the level of sensorimotor rhythm (SMR) in different frequency bands: alpha (8-12 Hz), lower beta (12.5-16 Hz), and beta (16.5-20 Hz), during the execution of a motor imagery task of closing and opening the right and the left hand, by experts (jugglers, practicing similar exercises on an everyday basis) and amateurs (individuals not practicing any sports). It was found that short kinesthetic training increases the power of the alpha rhythm when executing imagery tasks only in the group of amateurs. Therefore, kinesthetic training may be successfully used as a method of increasing the vividness of motor imagery, for example, in tasks involving the control of brain-computer interfaces based on SMR.
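    The band-power measure behind such SMR analyses can be sketched with a simple periodogram; the sampling rate and the synthetic 10 Hz signal are assumptions, not the study's recordings:

```python
# Estimate power in the alpha (8-12 Hz) and beta (16.5-20 Hz) bands of an
# EEG-like signal from an FFT periodogram.
import numpy as np

fs = 250.0                                   # assumed sampling rate, Hz
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(7)
signal = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, t.size)

freqs = np.fft.rfftfreq(t.size, 1 / fs)
psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * t.size)

def band_power(lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return float(np.sum(psd[mask]))

alpha = band_power(8.0, 12.0)                # contains the 10 Hz rhythm
beta  = band_power(16.5, 20.0)               # noise only, in this toy signal
print(f"alpha/beta power ratio: {alpha / beta:.1f}")
```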

  3. Single image super-resolution via regularized extreme learning regression for imagery from microgrid polarimeters

    NASA Astrophysics Data System (ADS)

    Sargent, Garrett C.; Ratliff, Bradley M.; Asari, Vijayan K.

    2017-08-01

    The advantage of division of focal plane imaging polarimeters is their ability to obtain temporally synchronized intensity measurements across a scene; however, they sacrifice spatial resolution in doing so due to their spatially modulated arrangement of the pixel-to-pixel polarizers and often result in aliased imagery. Here, we propose a super-resolution method based upon two previously trained extreme learning machines (ELM) that attempt to recover missing high frequency and low frequency content beyond the spatial resolution of the sensor. This method yields a computationally fast and simple way of recovering lost high and low frequency content from demosaicing raw microgrid polarimetric imagery. The proposed method outperforms other state-of-the-art single-image super-resolution algorithms in terms of structural similarity and peak signal-to-noise ratio.
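    The regression core of an ELM, a random untrained hidden layer followed by a regularized least-squares readout, can be sketched on a 1-D toy problem. The hidden width, ridge penalty, and target function are assumptions, and this sketch does not reproduce the paper's two-network super-resolution scheme:

```python
# Regularized extreme learning machine (ELM) regression: random hidden
# weights are fixed; only the linear readout is fitted, by ridge regression.
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-1, 1, 200)[:, None]
y = np.sin(3 * X[:, 0])                      # toy target function

n_hidden, lam = 50, 1e-3                     # assumed width and ridge penalty
W = rng.normal(size=(1, n_hidden))           # random input weights, never trained
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                       # hidden activations

# Ridge readout: beta = (H'H + lam*I)^-1 H'y
beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)
rmse = float(np.sqrt(np.mean((H @ beta - y) ** 2)))
print(f"training RMSE: {rmse:.4f}")
```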

  4. Enhanced Sidescan-Sonar Imagery, North-Central Long Island Sound

    USGS Publications Warehouse

    McMullen, K.Y.; Poppe, L.J.; Schattgen, P.T.; Doran, E.F.

    2008-01-01

    The U.S. Geological Survey, National Oceanic and Atmospheric Administration (NOAA), and Connecticut Department of Environmental Protection have been working cooperatively to map the sea-floor geology within Long Island Sound. Sidescan-sonar imagery collected during three NOAA hydrographic surveys (H11043, H11044, and H11045) was used to interpret the surficial-sediment distribution and sedimentary environments within the Sound. The original sidescan-sonar imagery generated by NOAA was used to evaluate hazards to navigation, which does not require consistent tonal matching throughout the survey. In order to fully utilize these data for geologic interpretation, artifacts within the imagery, primarily due to sidescan-system settings (for example, gain changes), processing techniques (for example, lack of across-track normalization) and environmental noise (for example, sea state), need to be minimized. Sidescan-sonar imagery from surveys H11043, H11044, and H11045 in north-central Long Island Sound was enhanced by matching the grayscale tones between adjacent sidescan-sonar lines to decrease the patchwork effect caused by numerous artifacts and to provide a more coherent sidescan-sonar image for use in geologic interpretation.

  5. Cloud cover typing from environmental satellite imagery. Discriminating cloud structure with Fast Fourier Transforms (FFT)

    NASA Technical Reports Server (NTRS)

    Logan, T. L.; Huning, J. R.; Glackin, D. L.

    1983-01-01

    The use of two-dimensional Fast Fourier Transforms (FFTs) subjected to pattern recognition technology for the identification and classification of low altitude stratus cloud structure from Geostationary Operational Environmental Satellite (GOES) imagery was examined. The development of a scene independent pattern recognition methodology, unconstrained by conventional cloud morphological classifications, was emphasized. A technique for extracting cloud shape, direction, and size attributes from GOES visual imagery was developed. These attributes were combined with two statistical attributes (cloud mean brightness, cloud standard deviation), and interrogated using unsupervised clustering and maximum likelihood classification techniques. Results indicate that: (1) the key cloud discrimination attributes are mean brightness, direction, shape, and minimum size; (2) cloud structure can be differentiated at given pixel scales; (3) cloud type may be identifiable at coarser scales; (4) there are positive indications of scene independence which would permit development of a cloud signature bank; (5) edge enhancement of GOES imagery does not appreciably improve cloud classification over the use of raw data; and (6) the GOES imagery must be apodized before generation of FFTs.
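    How a 2-D FFT yields a direction attribute can be sketched as follows: a banded pattern concentrates spectral energy along the axis perpendicular to the bands. The synthetic stripes stand in for GOES cloud imagery; this is not the study's attribute set:

```python
# Locate the dominant spectral peak of a striped pattern with a 2-D FFT.
import numpy as np

n = 64
yy, xx = np.mgrid[0:n, 0:n]
image = np.sin(2 * np.pi * xx / 8.0)         # vertical bands, 8-pixel period

spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
spectrum[n // 2, n // 2] = 0                 # drop the DC term
ky, kx = np.unravel_index(np.argmax(spectrum), spectrum.shape)

# Peak offset from centre gives dominant spatial frequency and direction.
fy, fx = ky - n // 2, kx - n // 2
print(f"dominant wavevector: ({fy}, {fx})")  # energy lies along the x-axis
```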

  6. An Automated Technique for Estimating Daily Precipitation over the State of Virginia

    NASA Technical Reports Server (NTRS)

    Follansbee, W. A.; Chamberlain, L. W., III

    1981-01-01

    Digital IR and visible imagery obtained from a geostationary satellite located over the equator at 75 deg west longitude was provided by NASA and used to obtain a linear relationship between cloud-top temperature and hourly precipitation. Two computer programs written in FORTRAN were used. The first program computes the satellite estimate field from the hourly digital IR imagery. The second program computes the final estimate for the entire state area by comparing five preliminary estimates of 24-hour precipitation with control raingage readings and determining which of the five methods gives the best estimate for the day. The final estimate is then produced by incorporating control gage readings into the winning method. To present reliable precipitation estimates for every cell in Virginia in near real time on a daily, ongoing basis, the technique requires on the order of 125 to 150 daily gage readings by dependable, highly motivated observers distributed as uniformly as feasible across the state.
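    The core step of the first program, a linear cloud-top-temperature-to-rain-rate relationship, can be sketched by least squares; the temperature/rain pairs below are fabricated for illustration:

```python
# Fit rain rate as a linear function of cloud-top temperature.
import numpy as np

# Colder cloud tops (deeper convection) pair with heavier hourly rain.
cloud_top_temp_c = np.array([-70.0, -65.0, -60.0, -55.0, -50.0, -45.0])
rain_mm_per_hr   = np.array([  6.1,   5.2,   4.0,   3.1,   2.2,   0.9])

slope, intercept = np.polyfit(cloud_top_temp_c, rain_mm_per_hr, 1)
estimate = slope * (-62.0) + intercept      # estimated rate for a -62 C cloud top
print(f"rate = {slope:.3f} mm/hr per deg C; estimate at -62 C: {estimate:.1f} mm/hr")
```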

  7. Comparative analysis of feature extraction methods in satellite imagery

    NASA Astrophysics Data System (ADS)

    Karim, Shahid; Zhang, Ye; Asif, Muhammad Rizwan; Ali, Saad

    2017-10-01

    Feature extraction techniques are used extensively in satellite imagery and are receiving considerable attention in remote sensing applications. State-of-the-art feature extraction methods are appropriate to different categories and structures of the objects to be detected. Because each feature extraction method relies on distinctive computations, different types of images are selected to evaluate the performance of the methods, which include binary robust invariant scalable keypoints (BRISK), scale-invariant feature transform (SIFT), speeded-up robust features (SURF), features from accelerated segment test (FAST), histogram of oriented gradients (HOG), and local binary patterns (LBP). Total computational time is calculated to evaluate the speed of each feature extraction method. The extracted features are counted under shadow regions and preprocessed shadow regions to compare the functioning of each method. We have studied the combination of SURF with FAST and with BRISK individually, and found very promising results with an increased number of features and less computational time. Finally, feature-matching performance is compared across all methods.

  8. Computer vision uncovers predictors of physical urban change.

    PubMed

    Naik, Nikhil; Kominers, Scott Duke; Raskar, Ramesh; Glaeser, Edward L; Hidalgo, César A

    2017-07-18

    Which neighborhoods experience physical improvements? In this paper, we introduce a computer vision method to measure changes in the physical appearances of neighborhoods from time-series street-level imagery. We connect changes in the physical appearance of five US cities with economic and demographic data and find three factors that predict neighborhood improvement. First, neighborhoods that are densely populated by college-educated adults are more likely to experience physical improvements-an observation that is compatible with the economic literature linking human capital and local success. Second, neighborhoods with better initial appearances experience, on average, larger positive improvements-an observation that is consistent with "tipping" theories of urban change. Third, neighborhood improvement correlates positively with physical proximity to the central business district and to other physically attractive neighborhoods-an observation that is consistent with the "invasion" theories of urban sociology. Together, our results provide support for three classical theories of urban change and illustrate the value of using computer vision methods and street-level imagery to understand the physical dynamics of cities.

  9. Computer vision uncovers predictors of physical urban change

    PubMed Central

    Naik, Nikhil; Kominers, Scott Duke; Raskar, Ramesh; Glaeser, Edward L.; Hidalgo, César A.

    2017-01-01

    Which neighborhoods experience physical improvements? In this paper, we introduce a computer vision method to measure changes in the physical appearances of neighborhoods from time-series street-level imagery. We connect changes in the physical appearance of five US cities with economic and demographic data and find three factors that predict neighborhood improvement. First, neighborhoods that are densely populated by college-educated adults are more likely to experience physical improvements—an observation that is compatible with the economic literature linking human capital and local success. Second, neighborhoods with better initial appearances experience, on average, larger positive improvements—an observation that is consistent with “tipping” theories of urban change. Third, neighborhood improvement correlates positively with physical proximity to the central business district and to other physically attractive neighborhoods—an observation that is consistent with the “invasion” theories of urban sociology. Together, our results provide support for three classical theories of urban change and illustrate the value of using computer vision methods and street-level imagery to understand the physical dynamics of cities. PMID:28684401

  10. Neurotechnology for intelligence analysts

    NASA Astrophysics Data System (ADS)

    Kruse, Amy A.; Boyd, Karen C.; Schulman, Joshua J.

    2006-05-01

Geospatial Intelligence Analysts are currently faced with an enormous volume of imagery, only a fraction of which can be processed or reviewed in a timely operational manner. Computer-based target detection efforts have failed to yield the speed, flexibility and accuracy of the human visual system. Rather than focus solely on artificial systems, we hypothesize that the human visual system is still the best target detection apparatus currently in use, and with the addition of neuroscience-based measurement capabilities it can surpass the throughput of the unaided human severalfold. Using electroencephalography (EEG), Thorpe et al. [1] described a fast signal in the brain associated with the early detection of targets in static imagery using a Rapid Serial Visual Presentation (RSVP) paradigm. This finding suggests that it may be possible to extract target detection signals from complex imagery in real time utilizing non-invasive neurophysiological assessment tools. To transform this phenomenon into a capability for defense applications, the Defense Advanced Research Projects Agency (DARPA) currently is sponsoring an effort titled Neurotechnology for Intelligence Analysts (NIA). The vision of the NIA program is to revolutionize the way that analysts handle intelligence imagery, increasing both the throughput of imagery to the analyst and overall accuracy of the assessments. Successful development of a neurobiologically-based image triage system will enable image analysts to train more effectively and process imagery with greater speed and precision.

  11. Transition from lab to flight demo for model-based FLIR ATR and SAR-FLIR fusion

    NASA Astrophysics Data System (ADS)

    Childs, Martin B.; Carlson, Karen M.; Pujara, Neeraj

    2000-08-01

Model-based automatic target recognition (ATR) using forward-looking infrared (FLIR) imagery, and using FLIR imagery combined with cues from a synthetic aperture radar (SAR) system, has been successfully demonstrated in the laboratory. For the laboratory demonstration, FLIR images, platform location, sensor data, and SAR cues were read in from files stored on computer disk. This ATR system, however, was intended to ultimately be flown in a fighter aircraft. We discuss the transition from laboratory demonstration to flight demonstration for this system. The obvious changes required were in the interfaces: the flight system must get live FLIR imagery from a sensor; it must get platform location, sensor data, and controls from the avionics computer in the aircraft via the 1553 bus; and it must get SAR cues from the on-board SAR system, also via the 1553 bus. Other changes included the transition to rugged hardware that would withstand the fighter aircraft environment, and the need for the system to be compact and self-contained. Unexpected as well as expected challenges were encountered. We discuss some of these challenges, how they were met, and the performance of the flight-demonstration system.

  12. A Parallel Processing Algorithm for Remote Sensing Classification

    NASA Technical Reports Server (NTRS)

    Gualtieri, J. Anthony

    2005-01-01

A current thread in parallel computation is the use of cluster computers created by networking a few to thousands of commodity general-purpose workstation-level computers using the Linux operating system. For example, on the Medusa cluster at NASA/GSFC, this provides supercomputing performance of 130 Gflops (Linpack benchmark) at moderate cost, $370K. However, to be useful for scientific computing in the area of Earth science, issues of ease of programming, access to existing scientific libraries, and portability of existing code need to be considered. In this paper, I address these issues in the context of tools for rendering earth science remote sensing data into useful products. In particular, I focus on a problem that can be decomposed into a set of independent tasks, which on a serial computer would be performed sequentially, but with a cluster computer can be performed in parallel, giving an obvious speedup. To make the ideas concrete, I consider the problem of classifying hyperspectral imagery where some ground truth is available to train the classifier. In particular I will use the Support Vector Machine (SVM) approach as applied to hyperspectral imagery. The approach will be to introduce notions about parallel computation and then to restrict the development to the SVM problem. Pseudocode (an outline of the computation) will be described and then details specific to the implementation will be given. Then timing results will be reported to show what speedups are possible using parallel computation. The paper will close with a discussion of the results.
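The task decomposition the abstract describes (independent per-pixel classification fanned out to workers, then gathered) can be sketched as follows. This is an illustrative stand-in, not the paper's code: a trivial nearest-centroid rule replaces the SVM, a thread pool stands in for the cluster's worker nodes, and the class names and centroid values are invented for the example.

```python
# Embarrassingly parallel pixel classification: split, map, gather.
from concurrent.futures import ThreadPoolExecutor

# Hypothetical "trained" class centroids in a 3-band feature space.
CENTROIDS = {"water": (0.1, 0.2, 0.1), "veg": (0.2, 0.6, 0.3)}

def classify_chunk(pixels):
    """Classify one independent chunk of pixels (runs on one worker)."""
    labels = []
    for px in pixels:
        dists = {c: sum((a - b) ** 2 for a, b in zip(px, mu))
                 for c, mu in CENTROIDS.items()}
        labels.append(min(dists, key=dists.get))
    return labels

def parallel_classify(pixels, n_workers=4):
    # Strided split into n_workers independent tasks; a process pool or an
    # MPI scatter/gather would replace the thread pool on a real cluster.
    chunks = [pixels[i::n_workers] for i in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as ex:
        results = list(ex.map(classify_chunk, chunks))
    # Re-interleave the chunk results to restore original pixel order.
    labels = [None] * len(pixels)
    for w, res in enumerate(results):
        for j, lab in enumerate(res):
            labels[w + j * n_workers] = lab
    return labels

pixels = [(0.1, 0.25, 0.1), (0.25, 0.55, 0.3), (0.05, 0.15, 0.12)]
print(parallel_classify(pixels, n_workers=2))   # -> ['water', 'veg', 'water']
```

Because every chunk is independent, the speedup is limited only by worker count and the final gather, which is the property the abstract exploits.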

  13. Technology and Art: A Postmodern Reading of Orwell as Advertising.

    ERIC Educational Resources Information Center

    Sayre, Shay; Moriarty, Sandra E.

    The "1984" commercial used by Apple computers to introduce the Macintosh computer is analyzed to compare the production and reception of modern and postmodern imagery by a group of contemporary viewers. The commercial was based on George Orwell's famous novel "1984." The use of the novel as a theme for new product introduction,…

  14. The impact of ageing and gender on visual mental imagery processes: A study of performance on tasks from the Complete Visual Mental Imagery Battery (CVMIB).

    PubMed

    Palermo, Liana; Piccardi, Laura; Nori, Raffaella; Giusberti, Fiorella; Guariglia, Cecilia

    2016-09-01

In this study we aim to evaluate the impact of ageing and gender on different visual mental imagery processes. Two hundred and fifty-one participants (130 women and 121 men; age range = 18-77 years) were given an extensive neuropsychological battery including tasks probing the generation, maintenance, inspection, and transformation of visual mental images (Complete Visual Mental Imagery Battery, CVMIB). Our results show that all mental imagery processes, with the exception of maintenance, are affected by ageing, suggesting that other deficits, such as working memory deficits, could account for this effect. However, the analysis of the transformation process, investigated in terms of mental rotation and mental folding skills, shows a steeper decline in mental rotation, suggesting that age could affect rigid transformations of objects and spare non-rigid transformations. Our study also adds to previous ones in showing gender differences favoring men across the lifespan in the transformation process, and, interestingly, it shows a steeper decline in men than in women in inspecting mental images, which could partially account for the mixed results about the effect of ageing on this specific process. We also discuss the possibility of introducing the CVMIB into clinical assessment in the context of theoretical models of mental imagery.

  15. Classification of wetlands vegetation using small scale color infrared imagery

    NASA Technical Reports Server (NTRS)

    Williamson, F. S. L.

    1975-01-01

    A classification system for Chesapeake Bay wetlands was derived from the correlation of film density classes and actual vegetation classes. The data processing programs used were developed by the Laboratory for the Applications of Remote Sensing. These programs were tested for their value in classifying natural vegetation, using digitized data from small scale aerial photography. Existing imagery and the vegetation map of Farm Creek Marsh were used to determine the optimal number of classes, and to aid in determining if the computer maps were a believable product.

  16. The Importance of Visual Feedback Design in BCIs; from Embodiment to Motor Imagery Learning

    PubMed Central

    Alimardani, Maryam; Nishio, Shuichi; Ishiguro, Hiroshi

    2016-01-01

Brain-computer interfaces (BCIs) have been developed and implemented in many areas as a new communication channel between the human brain and external devices. Despite their rapid growth and broad popularity, inaccurate performance and the cost of user training remain the main issues preventing their application outside research and clinical environments. We previously introduced a BCI system for the control of a very humanlike android that could raise a sense of embodiment and agency in the operators only by imagining a movement (motor imagery) and watching the robot perform it. Also using the same setup, we further discovered that the positive bias of subjects’ performance both increased their sensation of embodiment and improved their motor imagery skills in a short period. In this work, we studied the shared mechanism between the experience of embodiment and motor imagery. We compared the trend of motor imagery learning when two groups of subjects BCI-operated different-looking robots, a very humanlike android’s hands and a pair of metallic grippers. Although our experiments did not show a significant change of learning between the two groups immediately during one session, the android group revealed better motor imagery skills in the follow-up session when both groups repeated the task using the non-humanlike gripper. This result shows that motor imagery skills learnt during the BCI-operation of humanlike hands are more robust to time and visual feedback changes. We discuss the role of embodiment and the mirror neuron system in this outcome and propose the application of androids for efficient BCI training. PMID:27598310

  17. The Importance of Visual Feedback Design in BCIs; from Embodiment to Motor Imagery Learning.

    PubMed

    Alimardani, Maryam; Nishio, Shuichi; Ishiguro, Hiroshi

    2016-01-01

Brain-computer interfaces (BCIs) have been developed and implemented in many areas as a new communication channel between the human brain and external devices. Despite their rapid growth and broad popularity, inaccurate performance and the cost of user training remain the main issues preventing their application outside research and clinical environments. We previously introduced a BCI system for the control of a very humanlike android that could raise a sense of embodiment and agency in the operators only by imagining a movement (motor imagery) and watching the robot perform it. Also using the same setup, we further discovered that the positive bias of subjects' performance both increased their sensation of embodiment and improved their motor imagery skills in a short period. In this work, we studied the shared mechanism between the experience of embodiment and motor imagery. We compared the trend of motor imagery learning when two groups of subjects BCI-operated different-looking robots, a very humanlike android's hands and a pair of metallic grippers. Although our experiments did not show a significant change of learning between the two groups immediately during one session, the android group revealed better motor imagery skills in the follow-up session when both groups repeated the task using the non-humanlike gripper. This result shows that motor imagery skills learnt during the BCI-operation of humanlike hands are more robust to time and visual feedback changes. We discuss the role of embodiment and the mirror neuron system in this outcome and propose the application of androids for efficient BCI training.

  18. A novel latent gaussian copula framework for modeling spatial correlation in quantized SAR imagery with applications to ATR

    NASA Astrophysics Data System (ADS)

    Thelen, Brian T.; Xique, Ismael J.; Burns, Joseph W.; Goley, G. Steven; Nolan, Adam R.; Benson, Jonathan W.

    2017-04-01

With all of the new remote sensing modalities available, and with ever increasing capabilities and frequency of collection, there is a desire to fundamentally understand/quantify the information content in the collected image data relative to various exploitation goals, such as detection/classification. A fundamental approach for this is the framework of Bayesian decision theory, but a daunting challenge is to have significantly flexible and accurate multivariate models for the features and/or pixels that capture a wide assortment of distributions and dependencies. In addition, data can come in the form of both continuous and discrete representations, where the latter is often generated based on considerations of robustness to imaging conditions and occlusions/degradations. In this paper we propose a novel suite of "latent" models fundamentally based on multivariate Gaussian copula models that can be used for quantized data from SAR imagery. For this Latent Gaussian Copula (LGC) model, we derive an approximate, maximum-likelihood estimation algorithm and demonstrate very reasonable estimation performance even for the larger images with many pixels. However, applying these LGC models to large dimensions/images within a Bayesian decision/classification theory is infeasible due to the computational/numerical issues in evaluating the true full likelihood, and we propose an alternative class of novel pseudo-likelihood detection statistics that are computationally feasible. We show in a few simple examples that these statistics have the potential to provide very good and robust detection/classification performance. All of this framework is demonstrated on a simulated SLICY data set, and the results show the importance of modeling the dependencies, and of utilizing the pseudo-likelihood methods.
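The latent construction behind a Gaussian copula model for quantized data can be illustrated at its smallest scale. The sketch below is a drastic simplification of the paper's LGC model, assuming just two 1-bit "pixels" and a known latent correlation; it shows only how thresholding correlated latent Gaussians induces dependence between the quantized values.

```python
import random

random.seed(1)

def sample_pair(rho, threshold=0.0):
    """Draw two correlated latent Gaussians; return their 1-bit quantizations."""
    z1 = random.gauss(0.0, 1.0)
    # Standard construction of a correlated pair from independent normals.
    z2 = rho * z1 + (1.0 - rho ** 2) ** 0.5 * random.gauss(0.0, 1.0)
    return int(z1 > threshold), int(z2 > threshold)

pairs = [sample_pair(rho=0.9) for _ in range(10000)]
agree = sum(a == b for a, b in pairs) / len(pairs)
print(round(agree, 2))   # agreement well above the 0.5 of independent bits
```

The latent correlation survives quantization as co-occurrence structure in the bits, which is the dependency the paper's estimation algorithm recovers from whole images.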

  19. Effect of Different Movement Speed Modes on Human Action Observation: An EEG Study.

    PubMed

    Luo, Tian-Jian; Lv, Jitu; Chao, Fei; Zhou, Changle

    2018-01-01

Action observation (AO) generates event-related desynchronization (ERD) suppressions in the human brain by activating partial regions of the human mirror neuron system (hMNS). The activation of the hMNS response to AO remains controversial for several reasons. Therefore, this study investigated the activation of the hMNS response to a speed factor of AO by controlling the movement speed modes of a humanoid robot's arm movements. Since hMNS activation is reflected by ERD suppressions, electroencephalography (EEG) with BCI analysis methods for ERD suppressions was used as the recording and analysis modality. Six healthy individuals were asked to participate in experiments comprising five different conditions. Four incremental-speed AO tasks and a motor imagery (MI) task involving imaging of the same movement were presented to the individuals. Occipital and sensorimotor regions were selected for BCI analyses. The experimental results showed that hMNS activation was higher in the occipital region but more robust in the sensorimotor region. Since the attended information impacts the activations of the hMNS during AO, the pattern of hMNS activations first rises and subsequently falls to a stable level during incremental-speed modes of AO. The discipline curves suggested that a moderate speed within a decent inter-stimulus interval (ISI) range produced the highest hMNS activations. Since a brain-computer/machine interface (BCI) builds a pathway between human and computer/machine, the discipline curves will help to construct BCIs based on patterns of action observation (AO-BCIs). Furthermore, a new method for constructing non-invasive brain-machine-brain interfaces (BMBIs) with moderate AO-BCI and motor imagery BCI (MI-BCI) was inspired by this paper.
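The ERD suppression quantified in studies like this one is conventionally a relative band-power change between a resting baseline and the task interval. The sketch below shows the standard definition only (with the sign convention chosen so that suppression is positive), not this study's exact pipeline; the power values are invented.

```python
def erd_percent(baseline_power, event_power):
    """Relative band-power change; positive = desynchronization (ERD)."""
    return 100.0 * (baseline_power - event_power) / baseline_power

# Assumed mu-band (8-13 Hz) power values from a sensorimotor channel.
print(erd_percent(baseline_power=4.0, event_power=3.0))   # -> 25.0
```

Comparing this percentage across the four speed conditions is what traces out the rise-then-fall activation curve the abstract describes.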

  20. Photogrammetry of the Viking Lander imagery

    NASA Technical Reports Server (NTRS)

    Wu, S. S. C.; Schafer, F. J.

    1982-01-01

    The problem of photogrammetric mapping which uses Viking Lander photography as its basis is solved in two ways: (1) by converting the azimuth and elevation scanning imagery to the equivalent of a frame picture, using computerized rectification; and (2) by interfacing a high-speed, general-purpose computer to the analytical plotter employed, so that all correction computations can be performed in real time during the model-orientation and map-compilation process. Both the efficiency of the Viking Lander cameras and the validity of the rectification method have been established by a series of pre-mission tests which compared the accuracy of terrestrial maps compiled by this method with maps made from aerial photographs. In addition, 1:10-scale topographic maps of Viking Lander sites 1 and 2 having a contour interval of 1.0 cm have been made to test the rectification method.

  1. Bipolar electrode selection for a motor imagery based brain computer interface

    NASA Astrophysics Data System (ADS)

    Lou, Bin; Hong, Bo; Gao, Xiaorong; Gao, Shangkai

    2008-09-01

A motor imagery based brain-computer interface (BCI) provides a non-muscular communication channel that enables people with paralysis to control external devices using their motor imagination. Reducing the number of electrodes is critical to improving the portability and practicability of the BCI system. A novel method is proposed to reduce the number of electrodes to a total of four by finding the optimal positions of two bipolar electrodes. Independent component analysis (ICA) is applied to find the source components of mu and alpha rhythms, and optimal electrodes are chosen by comparing the projection weights of the sources on each channel. Results from eight subjects demonstrate better classification performance for the optimal layout than for traditional layouts, and the stability of this optimal layout over a one-week interval was further verified.
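Given ICA projection weights, the selection step could look like the sketch below. The pairing rule here (strongest positive projection against strongest negative) is an assumption for illustration, not necessarily the authors' criterion, and the channel names and weights are invented.

```python
def best_bipolar_pair(channel_names, weights):
    """Pick a bipolar pair from a source's per-channel projection weights."""
    # Rank channels by how strongly the source projects onto them.
    ranked = sorted(zip(channel_names, weights), key=lambda cw: abs(cw[1]),
                    reverse=True)
    # Assumed rule: strongest positive projection vs. strongest negative,
    # a natural bipolar derivation spanning the source.
    pos = next(c for c, w in ranked if w > 0)
    neg = next(c for c, w in ranked if w < 0)
    return pos, neg

chans = ["C3", "CP3", "FC3", "Cz"]
weights = [0.9, -0.7, 0.2, -0.1]           # invented ICA projection weights
print(best_bipolar_pair(chans, weights))   # -> ('C3', 'CP3')
```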

  2. Korean coastal water depth/sediment and land cover mapping (1:25,000) by computer analysis of LANDSAT imagery

    NASA Technical Reports Server (NTRS)

    Park, K. Y.; Miller, L. D.

    1978-01-01

    Computer analysis was applied to single date LANDSAT MSS imagery of a sample coastal area near Seoul, Korea equivalent to a 1:50,000 topographic map. Supervised image processing yielded a test classification map from this sample image containing 12 classes: 5 water depth/sediment classes, 2 shoreline/tidal classes, and 5 coastal land cover classes at a scale of 1:25,000 and with a training set accuracy of 76%. Unsupervised image classification was applied to a subportion of the site analyzed and produced classification maps comparable in results in a spatial sense. The results of this test indicated that it is feasible to produce such quantitative maps for detailed study of dynamic coastal processes given a LANDSAT image data base at sufficiently frequent time intervals.

  3. The investigation of brain-computer interface for motor imagery and execution using functional near-infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen; Jiao, Xuejun; Xu, Fengang; Jiang, Jin; Yang, Hanjun; Cao, Yong; Fu, Jiahao

    2017-01-01

Functional near-infrared spectroscopy (fNIRS), which can measure cortical hemoglobin activity, has been widely adopted in brain-computer interfaces (BCIs). To explore the feasibility of recognizing motor imagery (MI) and motor execution (ME) in the same motion, we measured changes in oxygenated hemoglobin (HBO) and deoxygenated hemoglobin (HBR) over the prefrontal cortex (PFC) and motor cortex (MC) while 15 subjects performed hand extension and finger tapping tasks. The mean, slope, quadratic coefficient, and approximate entropy features were extracted from HBO as the input of a support vector machine (SVM). For the four-class fNIRS-BCI classifiers, we achieved 87.65% and 87.58% classification accuracy for the hand extension and finger tapping tasks, respectively. In conclusion, it is effective for fNIRS-BCI to recognize MI and ME in the same motion.
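Two of the HBO features named above, the window mean and the least-squares slope, can be computed as in the sketch below. This is an illustrative reconstruction, not the authors' code; the sampling interval and trace are invented, and the quadratic coefficient and approximate entropy would be computed per window in the same way before feeding all features to the SVM.

```python
def window_features(samples, dt=0.1):
    """Return (mean, least-squares slope) of one HBO window, dt s per sample."""
    n = len(samples)
    t = [i * dt for i in range(n)]
    mean_y = sum(samples) / n
    mean_t = sum(t) / n
    # Ordinary least-squares slope: cov(t, y) / var(t).
    cov = sum((ti - mean_t) * (yi - mean_y) for ti, yi in zip(t, samples))
    var = sum((ti - mean_t) ** 2 for ti in t)
    return mean_y, cov / var

hbo = [0.0, 0.1, 0.2, 0.3, 0.4]   # invented, steadily rising HBO trace
print(tuple(round(v, 3) for v in window_features(hbo)))   # -> (0.2, 1.0)
```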

  4. Computer vision techniques for rotorcraft low-altitude flight

    NASA Technical Reports Server (NTRS)

    Sridhar, Banavar; Cheng, Victor H. L.

    1988-01-01

A description is given of research that applies techniques from computer vision to the automation of rotorcraft navigation. The effort emphasizes the development of a methodology for detecting the ranges to obstacles in the region of interest based on the maximum utilization of passive sensors. The range map derived from the obstacle detection approach can be used as obstacle data for obstacle avoidance in an automatic guidance system and as an advisory display to the pilot. The lack of suitable flight imagery data, however, presents a problem in the verification of concepts for obstacle detection. This problem is being addressed by the development of an adequate flight database and by preprocessing of currently available flight imagery. Some comments are made on future work and how research in this area relates to the guidance of other autonomous vehicles.

  5. Feature Selection for Motor Imagery EEG Classification Based on Firefly Algorithm and Learning Automata

    PubMed Central

    Liu, Aiming; Liu, Quan; Ai, Qingsong; Xie, Yi; Chen, Anqi

    2017-01-01

    Motor Imagery (MI) electroencephalography (EEG) is widely studied for its non-invasiveness, easy availability, portability, and high temporal resolution. As for MI EEG signal processing, the high dimensions of features represent a research challenge. It is necessary to eliminate redundant features, which not only create an additional overhead of managing the space complexity, but also might include outliers, thereby reducing classification accuracy. The firefly algorithm (FA) can adaptively select the best subset of features, and improve classification accuracy. However, the FA is easily entrapped in a local optimum. To solve this problem, this paper proposes a method of combining the firefly algorithm and learning automata (LA) to optimize feature selection for motor imagery EEG. We employed a method of combining common spatial pattern (CSP) and local characteristic-scale decomposition (LCD) algorithms to obtain a high dimensional feature set, and classified it by using the spectral regression discriminant analysis (SRDA) classifier. Both the fourth brain–computer interface competition data and real-time data acquired in our designed experiments were used to verify the validation of the proposed method. Compared with genetic and adaptive weight particle swarm optimization algorithms, the experimental results show that our proposed method effectively eliminates redundant features, and improves the classification accuracy of MI EEG signals. In addition, a real-time brain–computer interface system was implemented to verify the feasibility of our proposed methods being applied in practical brain–computer interface systems. PMID:29117100

  6. Feature Selection for Motor Imagery EEG Classification Based on Firefly Algorithm and Learning Automata.

    PubMed

    Liu, Aiming; Chen, Kun; Liu, Quan; Ai, Qingsong; Xie, Yi; Chen, Anqi

    2017-11-08

    Motor Imagery (MI) electroencephalography (EEG) is widely studied for its non-invasiveness, easy availability, portability, and high temporal resolution. As for MI EEG signal processing, the high dimensions of features represent a research challenge. It is necessary to eliminate redundant features, which not only create an additional overhead of managing the space complexity, but also might include outliers, thereby reducing classification accuracy. The firefly algorithm (FA) can adaptively select the best subset of features, and improve classification accuracy. However, the FA is easily entrapped in a local optimum. To solve this problem, this paper proposes a method of combining the firefly algorithm and learning automata (LA) to optimize feature selection for motor imagery EEG. We employed a method of combining common spatial pattern (CSP) and local characteristic-scale decomposition (LCD) algorithms to obtain a high dimensional feature set, and classified it by using the spectral regression discriminant analysis (SRDA) classifier. Both the fourth brain-computer interface competition data and real-time data acquired in our designed experiments were used to verify the validation of the proposed method. Compared with genetic and adaptive weight particle swarm optimization algorithms, the experimental results show that our proposed method effectively eliminates redundant features, and improves the classification accuracy of MI EEG signals. In addition, a real-time brain-computer interface system was implemented to verify the feasibility of our proposed methods being applied in practical brain-computer interface systems.
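A feature-subset search in the spirit of the firefly algorithm described above can be sketched in a few lines. This toy version is not the paper's implementation: the fitness function is an invented stand-in for SRDA classification accuracy, and the learning-automata adaptation of the move probabilities is replaced by fixed constants.

```python
import random

random.seed(0)

N_FEATURES = 6
INFORMATIVE = {0, 3}   # hypothetical "useful" CSP/LCD feature indices

def fitness(mask):
    # Toy stand-in for classifier accuracy: reward informative features,
    # penalise subset size (the redundancy cost discussed in the text).
    hits = sum(1 for i in INFORMATIVE if mask[i])
    return hits - 0.1 * sum(mask)

def move_towards(firefly, brighter):
    # A dimmer firefly copies each bit of a brighter one with probability
    # beta, with a small random flip for exploration; the paper's LA step
    # adapts these probabilities, fixed here for brevity.
    beta, out = 0.8, list(firefly)
    for i in range(N_FEATURES):
        if random.random() < beta:
            out[i] = brighter[i]
        elif random.random() < 0.1:
            out[i] = 1 - out[i]
    return out

swarm = [[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(8)]
for _ in range(30):                   # generations, with elitism
    swarm.sort(key=fitness, reverse=True)
    swarm = [swarm[0]] + [move_towards(f, swarm[0]) for f in swarm[1:]]

best = max(swarm, key=fitness)
print(sorted(i for i, bit in enumerate(best) if bit))
```

The binary mask plays the role of a feature subset; in the paper the brightness would come from cross-validated accuracy on the EEG feature set rather than this toy objective.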

  7. User-Driven Geolocation of Untagged Desert Imagery Using Digital Elevation Models (Open Access)

    DTIC Science & Technology

    2013-09-12

…non-urban environments such as deserts. Our system generates synthetic skyline views from a DEM and extracts stable concavity-based features from these… …as fine as 100 m². Automatic geolocation of imagery has many exciting use cases; for example, such a tool could semantically organize…

  8. User-Driven Geolocation of Untagged Desert Imagery Using Digital Elevation Models

    DTIC Science & Technology

    2013-01-01

…non-urban environments such as deserts. Our system generates synthetic skyline views from a DEM and extracts stable concavity-based features from these… …as fine as 100 m². Automatic geolocation of imagery has many exciting use cases; for example, such a tool could semantically organize…

  9. Global, Frequent Landsat-class Mosaics for Real Time Crop Monitoring and Analysis

    NASA Astrophysics Data System (ADS)

    Varlyguin, D.; Crutchfield, J.; Hulina, S.; Reynolds, C. A.; Frantz, R.; Tetrault, R. L.

    2016-12-01

The presentation will discuss the current status of GDA technology for operational, automated generation of near-global mosaics of Landsat-class data for visualization, monitoring, and analysis. The current version of the mosaic combines Landsat 8 and Landsat 7; Sentinel-2A and ASTER imagery are to be added shortly. The mosaics are surface-reflectance calibrated and analysis ready. They offer the full spatial resolution and all multi-spectral bands of the source imagery. Each mosaic covers all major agricultural regions of the world for the last 18 months at a 16-day frequency. The mosaics are updated in real time, as soon as GDA downloads the imagery, calibrates it to surface reflectance, and generates data-gap masks (all typically under 10 minutes for a Landsat scene). The best pixel value from the available acquisitions is selected during the mosaic update. The technology eliminates the complex, multi-step, hands-on process of data preparation and provides imagery ready for repetitive, field-to-country analysis of crop conditions, progress, acreages, yield, and production. The mosaics are used for real-time, online interactive mapping and time-series drilling via the GeoSynergy webGIS platform and for offline in-season crop mapping. USDA FAS uses this product for persistent monitoring of selected countries and their croplands and for in-season crop analysis. The presentation will overview Landsat-class mosaics and their use in support of USDA FAS efforts.
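The per-cell best-pixel update can be illustrated with a toy compositor. The scoring rule here is an assumption for the example (higher score wins, standing in for cloud/gap flags or acquisition recency), not GDA's actual selection logic, and the grids are invented.

```python
def composite(observations):
    """observations: list of (value, score) grids; keep max-score value per cell."""
    rows, cols = len(observations[0]), len(observations[0][0])
    return [[max((obs[r][c] for obs in observations),
                 key=lambda vs: vs[1])[0]
             for c in range(cols)]
            for r in range(rows)]

scene_a = [[(0.30, 0.9), (0.00, 0.0)]]   # second cell is a data gap (score 0)
scene_b = [[(0.35, 0.4), (0.25, 0.8)]]
print(composite([scene_a, scene_b]))     # -> [[0.3, 0.25]]
```

Because the update only compares each new acquisition against the stored best, the mosaic can be refreshed incrementally as scenes arrive, which matches the real-time workflow described above.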

  10. The Use of OMPS Near Real Time Products in Volcanic Cloud Risk Mitigation and Smoke/Dust Air Quality Assessments

    NASA Astrophysics Data System (ADS)

    Seftor, C. J.; Krotkov, N. A.; McPeters, R. D.; Li, J. Y.; Durbin, P. B.

    2015-12-01

Near real time (NRT) SO2 and aerosol index (AI) imagery from Aura's Ozone Monitoring Instrument (OMI) has proven invaluable in mitigating the risk posed to air traffic by SO2 and ash clouds from volcanic eruptions. The OMI products, generated as part of NASA's Land, Atmosphere Near real-time Capability for EOS (LANCE) NRT system and available through LANCE and both NOAA's NESDIS and ESA's Support to Aviation Control Service (SACS) portals, are used to monitor the current location of volcanic clouds and to provide input into Volcanic Ash (VA) advisory forecasts. NRT products have recently been developed using data from the Ozone Mapping and Profiler Suite (OMPS) onboard the Suomi NPP platform; they are currently being made available through the SACS portal and will shortly be incorporated into the LANCE NRT system. We will show examples of the use of OMPS NRT SO2 and AI imagery to monitor recent volcanic eruption events. We will also demonstrate the usefulness of OMPS AI imagery to detect and track dust storms and smoke from fires, and how this information can be used to forecast their impact on air quality in areas far removed from their source. Finally, we will show SO2 and AI imagery generated from our OMPS Direct Broadcast data to highlight the capability of our real time system.

  11. Modeling the topography of shallow braided rivers using Structure-from-Motion photogrammetry

    NASA Astrophysics Data System (ADS)

    Javernick, L.; Brasington, J.; Caruso, B.

    2014-05-01

Recent advances in computer vision and image analysis have led to the development of a novel, fully automated photogrammetric method to generate dense 3d point cloud data. This approach, termed Structure-from-Motion or SfM, requires only limited ground-control and is ideally suited to imagery obtained from low-cost, non-metric cameras acquired either at close-range or using aerial platforms. Terrain models generated using SfM have begun to emerge recently and with a growing spectrum of software now available, there is an urgent need to provide a robust quality assessment of the data products generated using standard field and computational workflows. To address this demand, we present a detailed error analysis of sub-meter resolution terrain models of two contiguous reaches (1.6 and 1.7 km long) of the braided Ahuriri River, New Zealand, generated using SfM. A six stage methodology is described, involving: i) hand-held image acquisition from an aerial platform, ii) 3d point cloud extraction using Agisoft PhotoScan, iii) georeferencing on a redundant network of GPS-surveyed ground-control points, iv) point cloud filtering to reduce computational demand as well as vegetation noise, v) optical bathymetric modeling of inundated areas; and vi) data fusion and surface modeling to generate sub-meter raster terrain models. Bootstrapped geo-registration as well as extensive distributed GPS and sonar-based bathymetric check-data were used to quantify the quality of the models generated after each processing step. The results obtained provide the first quantified analysis of SfM applied to model the complex terrain of a braided river. Results indicate that geo-registration errors of 0.04 m (planar) and 0.10 m (elevation) and vertical surface errors of 0.10 m in non-vegetation areas can be achieved from a dataset of photographs taken at 600 m and 800 m above the ground level. These encouraging results suggest that this low-cost, logistically simple method can deliver high quality terrain datasets competitive with those obtained with significantly more expensive laser scanning, and suitable for geomorphic change detection and hydrodynamic modeling.
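The check-data comparison behind the reported error figures reduces to an RMSE of model heights against surveyed points. A minimal sketch with invented heights follows (the 0.04 m and 0.10 m figures above come from the study's much larger GPS and sonar check datasets):

```python
import math

def rmse(predicted, observed):
    """Root-mean-square error of model heights against check heights."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed))
                     / len(observed))

dem_z = [10.02, 9.87, 10.11, 10.30]   # SfM terrain heights at check points
gps_z = [10.00, 9.95, 10.05, 10.25]   # RTK-GPS check heights (invented)
print(round(rmse(dem_z, gps_z), 3))   # -> 0.057
```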

  12. The Use of LIDAR and Volunteered Geographic Information to Map Flood Extents and Inundation

    NASA Astrophysics Data System (ADS)

    McDougall, K.; Temple-Watts, P.

    2012-07-01

Floods are one of the most destructive natural disasters that threaten communities and properties. In recent decades, flooding has claimed more lives, destroyed more houses and ruined more agricultural land than any other natural hazard. The accurate prediction of the areas of inundation from flooding is critical to saving lives and property, but relies heavily on accurate digital elevation and hydrologic models. The 2011 Brisbane floods provided a unique opportunity to capture high resolution digital aerial imagery as the floods neared their peak, allowing the capture of areas of inundation over the various city suburbs. This high quality imagery, together with accurate LiDAR data over the area and publicly available volunteered geographic imagery through repositories such as Flickr, enabled the reconstruction of flood extents and the assessment of both area and depth of inundation for the assessment of damage. In this study, approximately 20 images of flood-damaged properties were utilised to identify the peak of the flood. Accurate position and height values were determined through the use of RTK GPS and conventional survey methods. This information was then utilised in conjunction with river gauge information to generate a digital flood surface. The LiDAR generated DEM was then intersected with the flood surface to reconstruct the area of inundation. The model-determined areas of inundation were then compared to the mapped flood extent from the high resolution digital imagery to assess the accuracy of the process. The paper concludes that accurate flood extent prediction or mapping is possible through this method, although its accuracy is dependent on the number and location of sampled points. The utilisation of LiDAR generated DEMs and DSMs can also provide an excellent mechanism to estimate depths of inundation and hence flood damage.
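The DEM/flood-surface intersection described above amounts to a per-cell difference clamped at zero. A minimal sketch with invented grids (real inputs would be raster arrays at LiDAR resolution):

```python
def inundation_depth(dem, flood_surface):
    """Per-cell depth (m): flood level minus ground, clamped at 0 for dry cells."""
    return [[max(f - z, 0.0) for z, f in zip(dem_row, flood_row)]
            for dem_row, flood_row in zip(dem, flood_surface)]

dem = [[1.0, 2.5], [0.5, 3.0]]        # ground elevations (m), invented
flood = [[2.0, 2.0], [2.0, 2.0]]      # flat 2.0 m flood surface, invented
print(inundation_depth(dem, flood))   # -> [[1.0, 0.0], [1.5, 0.0]]
```

Cells with positive depth delimit the modelled inundation extent, and the depth values feed the damage assessment.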

  13. Region-based automatic building and forest change detection on Cartosat-1 stereo imagery

    NASA Astrophysics Data System (ADS)

    Tian, J.; Reinartz, P.; d'Angelo, P.; Ehlers, M.

    2013-05-01

    In this paper a novel region-based method is proposed for change detection using spaceborne panchromatic Cartosat-1 stereo imagery. In the first step, Digital Surface Models (DSMs) from two dates are generated by semi-global matching. The geometric lateral resolution of the DSMs is 5 m × 5 m and the height accuracy is in the range of approximately 3 m (RMSE). In the second step, mean-shift segmentation is applied to the orthorectified images of the two dates to obtain initial regions. A region intersection and merging strategy is proposed to obtain minimum change regions, and multi-level change vectors are extracted for these regions. Finally, change detection is achieved by combining these features with weighted change vector analysis. The evaluation results demonstrate that the applied DSM generation method is well suited for Cartosat-1 imagery and that the extracted height values can largely improve the change detection accuracy; moreover, it is shown that the proposed change detection method can be used robustly for both forest and industrial areas.
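    The weighted change vector analysis step can be illustrated with a minimal sketch. The per-region features (mean DSM height, mean panchromatic intensity), the weights, and the decision threshold below are invented for illustration and are not the paper's values.

```python
# Toy weighted change vector analysis over regions: each region is a
# feature vector per date; the weighted Euclidean magnitude of the
# difference flags change. Features, weights, and threshold are invented.
import math

def change_magnitude(features_t1, features_t2, weights):
    """Weighted Euclidean magnitude of the change vector for one region."""
    return math.sqrt(sum(w * (b - a) ** 2
                         for a, b, w in zip(features_t1, features_t2, weights)))

# Per-region features: [mean DSM height (m), mean panchromatic intensity].
# Height gets a larger weight, reflecting the finding that DSM-derived
# height strongly improves change detection accuracy.
weights = [4.0, 1.0]
regions = {
    "cleared_forest": ([18.0, 0.30], [3.0, 0.55]),
    "unchanged_lot":  ([ 2.0, 0.40], [2.1, 0.42]),
}
for name, (t1, t2) in regions.items():
    mag = change_magnitude(t1, t2, weights)
    print(name, round(mag, 2), "changed" if mag > 2.0 else "no change")
```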

  14. Age Differences in the Effects of Experimenter-Instructed Versus Self-Generated Strategy Use

    PubMed Central

    Hertzog, Christopher; Price, Jodi; Dunlosky, John

    2013-01-01

    Background/Study Context: Interactive imagery is superior to rote repetition as an encoding strategy for paired associate (PA) recall. Younger and older individuals often rate these strategies as equally effective before they gain experience using each strategy. The present study investigated how experimenter-supervised and participant-chosen strategy experience affected younger and older adults’ knowledge about the effectiveness of these two strategies. Methods: Ninety-nine younger (M = 19.0 years, SD = 1.4) and 90 older adults (M = 70.4 years, SD = 5.2) participated in the experiment. In learning a first PA list participants were either instructed to use imagery or repetition to study specific items (supervised) or could choose their own strategies (unsupervised). All participants were unsupervised on a second PA list to evaluate whether strategy experience affected strategy knowledge, strategy use, and PA recall. Results: Both instruction groups learned about the superiority of imagery use through task experience, downgrading repetition ratings and upgrading imagery ratings on the second list. However, older adults showed less knowledge updating than did younger adults. Previously supervised younger adults increased their imagery use, improving PA recall; older adults maintained a higher level of repetition use. Conclusions: Older adults update knowledge of the differential effectiveness of the rote and imagery strategies, but to a lesser degree than younger adults. Older adults manifest an inertial tendency to continue using the repetition strategy even though they have learned that it is inferior to interactive imagery. PMID:22224949

  15. Determination of circulation and turbidity patterns in Kerr Lake from LANDSAT MSS imagery. [Kerr Lake, Virginia, North Carolina

    NASA Technical Reports Server (NTRS)

    Lecroy, S. R. (Principal Investigator)

    1981-01-01

    Historical LANDSAT imagery was analyzed to determine the circulation and turbidity patterns of Kerr Lake, located on the Virginia-North Carolina border. By examining the seasonal and regional turbidity and circulation patterns, a record of sediment transport and possible deposition can be obtained. Sketches were generated displaying the different intensities of brightness observed in bands 5 and 7 of LANDSAT's multispectral scanner data. Differences in and between bands 5 and 7 indicate variances in the levels of surface sediment concentrations. High sediment loads are revealed when distinct patterns appear in the band 7 imagery. The upwelled signal is exponential in nature and saturates in band 5 at low wavelengths for large concentrations of suspended solids.

  16. Cultural Artifact Detection in Long Wave Infrared Imagery.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Dylan Zachary; Craven, Julia M.; Ramon, Eric

    2017-01-01

    Detection of cultural artifacts from airborne remotely sensed data is an important task in the context of on-site inspections. Airborne artifact detection can reduce the size of the search area the ground-based inspection team must visit, thereby improving the efficiency of the inspection process. This report details two algorithms for detection of cultural artifacts in aerial long wave infrared imagery. The first algorithm creates an explicit model for cultural artifacts, and finds data that fits the model. The second algorithm creates a model of the background and finds data that does not fit the model. Both algorithms are applied to orthomosaic imagery generated as part of the MSFE13 data collection campaign under the spectral technology evaluation project.

  17. Object-Oriented Image Clustering Method Using UAS Photogrammetric Imagery

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Larson, A.; Schultz-Fellenz, E. S.; Sussman, A. J.; Swanson, E.; Coppersmith, R.

    2016-12-01

    Unmanned Aerial Systems (UAS) have been widely used as an imaging modality to obtain remotely sensed multi-band surface imagery, and are growing in popularity due to their efficiency, ease of use, and affordability. Los Alamos National Laboratory (LANL) has employed UAS for geologic site characterization and change detection studies at a variety of field sites. The deployed UAS was equipped with a standard visible-band camera to collect imagery datasets. Based on the imagery collected, we use deep sparse algorithmic processing to detect and discriminate subtle topographic features created or impacted by subsurface activities. In this work, we develop an object-oriented remote sensing imagery clustering method for land cover classification. To improve clustering and segmentation accuracy, instead of using conventional pixel-based clustering methods, we integrate spatial information from neighboring regions to create super-pixels, avoiding salt-and-pepper noise and subsequent over-segmentation. To further improve the robustness of our clustering method, we also incorporate a custom digital elevation model (DEM) dataset generated using a structure-from-motion (SfM) algorithm together with the red, green, and blue (RGB) band data for clustering. In particular, we first employ agglomerative clustering to create an initial segmentation map, in which every object is treated as a single (new) pixel. Based on the new pixels obtained, we generate new features to implement another level of clustering. We apply our clustering method to the RGB+DEM datasets collected at the field site. Through binary clustering and multi-object clustering tests, we verify that our method can accurately separate vegetation from non-vegetation regions and can also differentiate object features on the surface.
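    The "superpixels as new pixels" idea can be sketched with a toy single-linkage agglomerative clustering over mean (R, G, B, elevation) features per segment. All feature values below are invented; this is a minimal stand-in for the multi-level clustering described, not the authors' implementation.

```python
# Each superpixel is reduced to a mean feature vector (R, G, B, DEM
# elevation in metres), then merged agglomeratively (single linkage)
# down to two clusters, i.e. binary vegetation/non-vegetation clustering.

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def agglomerate(features, n_clusters):
    clusters = [[i] for i in range(len(features))]
    while len(clusters) > n_clusters:
        # find the closest pair of clusters (single linkage)
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: min(dist(features[a], features[b])
                                      for a in clusters[ij[0]]
                                      for b in clusters[ij[1]]))
        clusters[i] += clusters.pop(j)
    return clusters

# Mean (R, G, B, elevation) per superpixel: three vegetated, two bare.
feats = [(0.20, 0.55, 0.18, 102.0),
         (0.22, 0.60, 0.20, 103.5),
         (0.18, 0.52, 0.17, 101.8),
         (0.55, 0.50, 0.40, 98.0),
         (0.60, 0.52, 0.45, 97.5)]
groups = sorted(agglomerate(feats, 2), key=len)
print(groups)  # the bare-ground pair separates from the vegetated triple
```

    In the described method the DEM channel is what pulls subtle topographic differences apart even where RGB values are similar; here the elevation term dominates the distance for the same reason.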

  18. LiDAR and IFSAR-Based Flood Inundation Model Estimates for Flood-Prone Areas of Afghanistan

    NASA Astrophysics Data System (ADS)

    Johnson, W. C.; Goldade, M. M.; Kastens, J.; Dobbs, K. E.; Macpherson, G. L.

    2014-12-01

    Extreme flood events are not unusual in semi-arid to hyper-arid regions of the world, and Afghanistan is no exception. Recent flash floods and flash flood-induced landslides took nearly 100 lives and destroyed or damaged nearly 2000 homes in 12 villages within the Guzargah-e-Nur district of Baghlan province in northeastern Afghanistan. With available satellite imagery, flood-water inundation estimation can be accomplished remotely, thereby providing a means to reduce the impact of such flood events by improving shared situational awareness during major flood events. Satellite orbital considerations, weather, cost, data licensing restrictions, and other issues can often complicate the acquisition of appropriately timed imagery. Given the need for tools to supplement imagery where not available, complement imagery when it is available, and bridge the gap between imagery-based flood mapping and traditional hydrodynamic modeling approaches, we have developed a topographic floodplain model (FLDPLN), which has been used to identify and map river valley floodplains with elevation data ranging from 90-m SRTM to 1-m LiDAR. Floodplain "depth to flood" (DTF) databases generated by FLDPLN are completely seamless and modular. FLDPLN has been applied in Afghanistan to flood-prone areas along the northern and southern flanks of the Hindu Kush mountain range to generate a continuum of 1-m increment flood-event models up to 10 m in depth. Elevation data used in this application of FLDPLN included high-resolution, drone-acquired LiDAR (~1 m) and IFSAR (5 m; INTERMAP). Validation of the model has been accomplished using the best available satellite-derived flood inundation maps, such as those issued by UNITAR's Operational Satellite Applications Programme (UNOSAT). Results provide a quantitative approach to evaluating the potential risk to urban/village infrastructure as well as to irrigation systems, agricultural fields, and archaeological sites.

  19. Matched-filter algorithm for subpixel spectral detection in hyperspectral image data

    NASA Astrophysics Data System (ADS)

    Borough, Howard C.

    1991-11-01

    Hyperspectral imagery, spatial imagery with associated wavelength data for every pixel, offers a significant potential for improved detection and identification of certain classes of targets. The ability to make spectral identifications of objects which only partially fill a single pixel (due to range or small size) is of considerable interest. Multiband imagery such as Landsat's 5 and 7 band imagery has demonstrated significant utility in the past. Hyperspectral imaging systems with hundreds of spectral bands offer improved performance. To explore the application of different subpixel spectral detection algorithms, a synthesized set of hyperspectral image data (hypercubes) was generated utilizing NASA earth resources and other spectral data. The data was modified using LOWTRAN 7 to model the illumination, atmospheric contributions, attenuations and viewing geometry to represent a nadir view from 10,000 ft altitude. The base hypercube (HC) represented 16 by 21 spatial pixels with 101 wavelength samples from 0.5 to 2.5 micrometers for each pixel. Insertions were made into the base data to provide random location, random pixel percentage, and random material. Fifteen different hypercubes were generated for blind testing of candidate algorithms. An algorithm utilizing a matched filter in the spectral dimension proved surprisingly good, yielding 100% detections for pixels filled greater than 40% with a standard camouflage paint, and a 50% probability of detection for pixels filled 20% with the paint, with no false alarms. The false alarm rate as a function of the number of spectral bands in the range from 101 to 12 bands was measured and found to increase from zero to 50%, illustrating the value of a large number of spectral bands. This test was on imagery without system noise; the next step is to incorporate typical system noise sources.
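    A spectral matched filter of the general kind described can be sketched as a mean-subtracted correlation between each pixel spectrum and the target signature. This is a simplified form (no covariance whitening), and the spectra below are short invented vectors, not the 101-band data used in the study.

```python
# Simplified spectral matched filter: score each pixel by the normalized
# correlation of its mean-subtracted spectrum with the mean-subtracted
# target signature. High scores flag pixels partially filled by target.

def matched_filter_scores(pixels, target):
    n = len(pixels)
    bands = len(target)
    mean = [sum(p[b] for p in pixels) / n for b in range(bands)]
    t = [target[b] - mean[b] for b in range(bands)]
    t_norm = sum(x * x for x in t) ** 0.5
    scores = []
    for p in pixels:
        d = [p[b] - mean[b] for b in range(bands)]
        d_norm = sum(x * x for x in d) ** 0.5 or 1.0
        scores.append(sum(a * b for a, b in zip(d, t)) / (d_norm * t_norm))
    return scores

paint = [0.1, 0.8, 0.2, 0.7]   # invented "camouflage paint" signature
soil  = [0.4, 0.3, 0.5, 0.3]   # invented background signature
# A 40%-filled mixed pixel: 0.4*paint + 0.6*soil (linear mixing).
mixed = [0.4 * a + 0.6 * b for a, b in zip(paint, soil)]
scene = [soil, soil, mixed, soil]
scores = matched_filter_scores(scene, paint)
print([round(s, 2) for s in scores])  # the mixed pixel scores highest
```

    Even at 40% fill, the mixed pixel's deviation from the scene mean points exactly along the target direction, which is why subpixel detection is possible at all.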

  20. The impact of initialization procedures on unsupervised unmixing of hyperspectral imagery using the constrained positive matrix factorization

    NASA Astrophysics Data System (ADS)

    Masalmah, Yahya M.; Vélez-Reyes, Miguel

    2007-04-01

    The authors proposed in previous papers the use of the constrained Positive Matrix Factorization (cPMF) to perform unsupervised unmixing of hyperspectral imagery. Two iterative algorithms were proposed to compute the cPMF, based on the Gauss-Seidel and penalty approaches to solving optimization problems. Results presented in previous papers have shown the potential of the proposed method to perform unsupervised unmixing of HYPERION and AVIRIS imagery. The performance of iterative methods is highly dependent on the initialization scheme: a good initialization can improve convergence speed, determine whether a global minimum is found, and determine whether spectra with physical relevance are retrieved as endmembers. In this paper, different initializations using random selection, longest-norm pixels, and standard endmember selection routines are studied and compared using simulated and real data.

  1. Interactive 3D Visualization: An Important Element in Dealing with Increasing Data Volumes and Decreasing Resources

    NASA Astrophysics Data System (ADS)

    Gee, L.; Reed, B.; Mayer, L.

    2002-12-01

    Recent years have seen remarkable advances in sonar technology, positioning capabilities, and computer processing power that have revolutionized the way we image the seafloor. The US Naval Oceanographic Office (NAVOCEANO) has updated its survey vessels and launches to the latest generation of technology and now possesses a tremendous ocean observing and mapping capability. However, the systems produce massive amounts of data that must be validated prior to inclusion in various bathymetry, hydrography, and imagery products. The key to meeting the challenge of the massive data volumes was to change the approach that required every data point to be viewed. This was achieved by replacing the traditional line-by-line editing approach with an automated cleaning module and an area-based editor. The approach includes a unique data structure that enables direct access to the full resolution data from the area-based view, including a direct interface to target files and imagery snippets from mosaic and full resolution imagery. The increased data volumes to be processed also offered tremendous opportunities in terms of visualization and analysis, and interactive 3D presentation of the complex multi-attribute data provided a natural complement to the area-based processing. If properly geo-referenced and treated, the complex data sets can be presented in a natural and intuitive manner that allows the integration of multiple components, each at their inherent level of resolution and without compromising the quantitative nature of the data. Artificial sun-illumination, shading, and 3D rendering are used with digital bathymetric data to form natural looking and easily interpretable, yet quantitative, landscapes that allow the user to rapidly identify the data requiring further processing or analysis. Color can be used to represent depth or other parameters (such as backscatter, quality factors, or sediment properties), which can be draped over the DTM, or high resolution imagery can be texture mapped onto the bathymetric data. The presentation will demonstrate the new approach of integrated area-based processing and 3D visualization with a number of data sets from recent surveys.

  2. Accuracy Potential and Applications of MIDAS Aerial Oblique Camera System

    NASA Astrophysics Data System (ADS)

    Madani, M.

    2012-07-01

    Airborne oblique cameras such as the Fairchild T-3A were initially used for military reconnaissance in the 1930s. A modern professional digital oblique camera system such as MIDAS (Multi-camera Integrated Digital Acquisition System) is used to generate lifelike three-dimensional models for visualization, GIS applications, architectural modeling, city modeling, games, simulators, etc. Oblique imagery provides the best vantage for assessing and reviewing changes to the local government tax base, property valuation assessment, and the buying and selling of residential/commercial properties, supporting better decisions in a more timely manner. Oblique imagery is also used for infrastructure monitoring, ensuring the safe operation of transportation, utilities, and facilities. Sanborn Mapping Company acquired one MIDAS system from TrackAir in 2011. This system consists of four tilted (45 degree) cameras and one vertical camera connected to a dedicated data acquisition computer system. The five digital cameras are based on the Canon EOS-1Ds Mark III with Zeiss lenses. The CCD size is 5,616 by 3,744 pixels (21 Mpixels) with a pixel size of 6.4 microns. Multiple flights using different camera configurations (nadir/oblique focal lengths of 28 mm/50 mm and 50 mm/50 mm) were flown over downtown Colorado Springs, Colorado. Boresight flights for the 28 mm nadir camera were flown at 600 m and 1,200 m, and for the 50 mm nadir camera at 750 m and 1,500 m. The cameras were calibrated using a 3D cage and multiple convergent images utilizing the Australis model. In this paper, the MIDAS system is described; a number of real data sets collected during the aforementioned flights are presented together with their associated flight configurations; the data processing workflow, system calibration, and quality control workflows are highlighted; and the achievable accuracy is presented in some detail. This study revealed that an accuracy of about 1 to 1.5 GSD (Ground Sample Distance) for planimetry and about 2 to 2.5 GSD in the vertical can be achieved. Remaining systematic errors were modeled by analyzing residuals using a correction grid. The results of the final bundle adjustments are sufficient to enable Sanborn to produce DEMs/DTMs and orthophotos from the nadir imagery and to create 3D models using georeferenced oblique imagery.
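    The GSD figures quoted in the abstract follow from the standard pinhole relation GSD = pixel size × altitude / focal length, using the stated 6.4 µm pixel size, lens focal lengths, and boresight flying heights:

```python
# Back-of-envelope check of ground sample distance (GSD) from the values
# given in the abstract: 6.4 um pixels, 28 mm / 50 mm lenses, and the
# stated flying heights.

def gsd_m(pixel_size_m, focal_length_m, altitude_m):
    """GSD on the ground for a nadir view (pinhole camera relation)."""
    return pixel_size_m * altitude_m / focal_length_m

print(round(gsd_m(6.4e-6, 0.028, 600), 3))  # 28 mm nadir camera at 600 m
print(round(gsd_m(6.4e-6, 0.050, 750), 3))  # 50 mm nadir camera at 750 m
```

    This gives roughly a 10-14 cm GSD for the nadir configurations, so the reported 1 to 1.5 GSD planimetric accuracy corresponds to roughly decimetre-level ground accuracy.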

  3. Imagery Integration Team

    NASA Technical Reports Server (NTRS)

    Calhoun, Tracy; Melendrez, Dave

    2014-01-01

    The Human Exploration Science Office (KX) provides leadership for NASA's Imagery Integration (Integration 2) Team, an affiliation of experts in the use of engineering-class imagery intended to monitor the performance of launch vehicles and crewed spacecraft in flight. Typical engineering imagery assessments include studying and characterizing the liftoff and ascent debris environments; launch vehicle and propulsion element performance; in-flight activities; and entry, landing, and recovery operations. Integration 2 support has been provided not only for U.S. Government spaceflight (e.g., Space Shuttle, Ares I-X) but also for commercial launch providers, such as Space Exploration Technologies Corporation (SpaceX) and Orbital Sciences Corporation, servicing the International Space Station. The NASA Integration 2 Team is composed of imagery integration specialists from JSC, the Marshall Space Flight Center (MSFC), and the Kennedy Space Center (KSC), who have access to a vast pool of experience and capabilities related to program integration, deployment and management of imagery assets, imagery data management, and photogrammetric analysis. The Integration 2 team is currently providing integration services to commercial demonstration flights, Exploration Flight Test-1 (EFT-1), and the Space Launch System (SLS)-based Exploration Missions (EM)-1 and EM-2. EM-2 will be the first attempt to fly a piloted mission with the Orion spacecraft. The Integration 2 Team provides the customer (both commercial and Government) with access to a wide array of imagery options - ground-based, airborne, seaborne, or vehicle-based - that are available through the Government and commercial vendors. The team guides the customer in assembling the appropriate complement of imagery acquisition assets at the customer's facilities, minimizing costs associated with market research and the risk of purchasing inadequate assets. 
    The NASA Integration 2 capability simplifies the process of securing one-of-a-kind imagery assets and skill sets, such as ground-based fixed and tracking cameras, crew-in-the-loop imaging applications, and the integration of custom or commercial-off-the-shelf sensors onboard spacecraft. For spaceflight applications, the Integration 2 Team leverages modeling, analytical, and scientific resources along with decades of experience and lessons learned to assist the customer in optimizing engineering imagery acquisition and management schemes for any phase of flight - launch, ascent, on-orbit, descent, and landing. The Integration 2 Team guides the customer in using NASA's world-class imagery analysis teams, which specialize in overcoming inherent challenges associated with spaceflight imagery sets. Precision motion tracking, two-dimensional (2D) and three-dimensional (3D) photogrammetry, image stabilization, 3D modeling of imagery data, lighting assessment, and vehicle fiducial marking assessments are available. During a mission or test, the Integration 2 Team provides oversight of imagery operations to verify fulfillment of imagery requirements. The team oversees the collection, screening, and analysis of imagery to build a set of imagery findings. It integrates and corroborates the imagery findings with other mission data sets, generating executive summaries to support time-critical mission decisions.

  4. Application of digital image processing techniques to astronomical imagery 1980

    NASA Technical Reports Server (NTRS)

    Lorre, J. J.

    1981-01-01

    Topics include: (1) polar coordinate transformations (M83); (2) multispectral ratios (M82); (3) maximum entropy restoration (M87); (4) automated computation of stellar magnitudes in nebulosity; (5) color and polarization; (6) aliasing.

  5. Towards a user-friendly brain-computer interface: initial tests in ALS and PLS patients.

    PubMed

    Bai, Ou; Lin, Peter; Huang, Dandan; Fei, Ding-Yu; Floeter, Mary Kay

    2010-08-01

    Patients usually require long-term training for effective EEG-based brain-computer interface (BCI) control due to fatigue caused by the demands for focused attention during prolonged BCI operation. We intended to develop a user-friendly BCI requiring minimal training and less mental load. Testing of BCI performance was investigated in three patients with amyotrophic lateral sclerosis (ALS) and three patients with primary lateral sclerosis (PLS), who had no previous BCI experience. All patients performed binary control of cursor movement. One ALS patient and one PLS patient performed four-directional cursor control in a two-dimensional domain under a BCI paradigm associated with human natural motor behavior using motor execution and motor imagery. Subjects practiced for 5-10 min and then participated in a multi-session study of either binary control or four-directional control, including an online BCI game, over 1.5-2 h in a single visit. Event-related desynchronization and event-related synchronization in the beta band were observed in all patients during the production of voluntary movement, either by motor execution or motor imagery. Online binary control of cursor movement was achieved with an average accuracy of 82.1 ± 8.2% with motor execution and about 80% with motor imagery, whereas offline accuracy after optimization reached 91.4 ± 3.4% with motor execution and 83.3 ± 8.9% with motor imagery. In addition, four-directional cursor control was achieved with an accuracy of 50-60% with motor execution and motor imagery. Patients with ALS or PLS may achieve BCI control without extended training, and fatigue might be reduced during operation of a BCI associated with human natural motor behavior. The development of a user-friendly BCI will promote practical BCI applications in paralyzed patients. Copyright 2010 International Federation of Clinical Neurophysiology. All rights reserved.

  6. Using Mental Imagery Processes for Teaching and Research in Mathematics and Computer Science

    ERIC Educational Resources Information Center

    Arnoux, Pierre; Finkel, Alain

    2010-01-01

    The role of mental representations in mathematics and computer science (for teaching or research) is often downplayed or even completely ignored. Using an ongoing work on the subject, we argue for a more systematic study and use of mental representations, to get an intuition of mathematical concepts, and also to understand and build proofs. We…

  7. Estimating Water Levels with Google Earth Engine

    NASA Astrophysics Data System (ADS)

    Lucero, E.; Russo, T. A.; Zentner, M.; May, J.; Nguy-Robertson, A. L.

    2016-12-01

    Reservoirs serve multiple functions and are vital for storage, electricity generation, and flood control. For many areas, traditional ground-based reservoir measurements may not be available or data dissemination may be problematic. Consistent monitoring of reservoir levels in data-poor areas can be achieved through remote sensing, providing information to researchers and the international community. Estimates of trends and relative reservoir volume can be used to identify water supply vulnerability, anticipate low power generation, and predict flood risk. Image processing with automated cloud computing provides opportunities to study multiple geographic areas in near real-time. We demonstrate the prediction capability of a cloud environment for identifying water trends at reservoirs in the US, and then apply the method to data-poor areas in North Korea, Iran, Azerbaijan, Zambia, and India. The Google Earth Engine cloud platform hosts remote sensing data and can be used to automate reservoir level estimation with multispectral imagery. We combine automated cloud-based analysis from Landsat image classification to identify reservoir surface area trends and radar altimetry to identify reservoir level trends. The study estimates water level trends using three years of data from four domestic reservoirs to validate the remote sensing method, and five foreign reservoirs to demonstrate the method application. We report correlations between ground-based reservoir level measurements in the US and our remote sensing methods, and correlations between the cloud analysis and altimetry data for reservoirs in data-poor areas. The availability of regular satellite imagery and an automated, near real-time application method provides the necessary datasets for further temporal analysis, reservoir modeling, and flood forecasting. 
    All statements of fact, analysis, or opinion are those of the author and do not reflect the official policy or position of the Department of Defense or any of its components or the U.S. Government.
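    The surface-area half of such a method can be sketched with a simple water index. The sketch below uses NDWI = (green − NIR) / (green + NIR), one common choice for classifying water in Landsat imagery; the study's actual classification procedure may differ, and the reflectance values are invented. A real pipeline would run this per image date in Earth Engine and track the resulting areas over time.

```python
# Classify water pixels with NDWI and sum their area to track relative
# reservoir surface area. Reflectance grids are invented; 30 m pixels
# match Landsat's multispectral resolution.

def water_area(green, nir, cell_m2=30 * 30, thresh=0.0):
    """Total area (m^2) of pixels with NDWI above the threshold."""
    area = 0
    for g_row, n_row in zip(green, nir):
        for g, n in zip(g_row, n_row):
            ndwi = (g - n) / (g + n) if (g + n) else 0.0
            if ndwi > thresh:
                area += cell_m2
    return area

green = [[0.10, 0.30, 0.32],
         [0.09, 0.31, 0.30]]
nir   = [[0.35, 0.05, 0.04],
         [0.40, 0.06, 0.33]]
print(water_area(green, nir))  # m^2 of water at 30 m pixel size
```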

  8. Using RGB displays to portray color realistic imagery to animal eyes

    PubMed Central

    Johnsen, Sönke

    2017-01-01

    Abstract RGB displays effectively simulate millions of colors in the eyes of humans by modulating the relative amount of light emitted by 3 differently colored juxtaposed lights (red, green, and blue). The relationship between the ratio of red, green, and blue light and the perceptual experience of that light has been well defined by psychophysical experiments in humans, but is unknown in animals. The perceptual experience of an animal looking at an RGB display of imagery designed for humans is likely to poorly represent an animal’s experience of the same stimulus in the real world. This is due, in part, to the fact that many animals have different numbers of photoreceptor classes than humans do and that their photoreceptor classes have peak sensitivities centered over different parts of the ultraviolet and visible spectrum. However, it is sometimes possible to generate videos that accurately mimic natural stimuli in the eyes of another animal, even if that animal’s sensitivity extends into the ultraviolet portion of the spectrum. How independently each RGB phosphor stimulates each of an animal’s photoreceptor classes determines the range of colors that can be simulated for that animal. What is required to determine optimal color rendering for another animal is a device capable of measuring absolute or relative quanta of light across the portion of the spectrum visible to the animal (i.e., a spectrometer), and data on the spectral sensitivities of the animal’s photoreceptor classes. In this article, we outline how to use such equipment and information to generate video stimuli that mimic, as closely as possible, an animal’s color perceptual experience of real-world objects. Key words: color vision, computer animation, perception, video playback, virtual reality. PMID:29491960
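    The quantal-catch calculation underlying this approach can be sketched numerically: each photoreceptor class catches the sum over wavelength of the stimulus spectrum times its sensitivity. The phosphor and receptor curves below are coarse, invented five-sample spectra used purely for illustration; real use requires measured phosphor emission spectra and the animal's measured spectral sensitivities.

```python
# Quantal catch of each photoreceptor class for each display phosphor:
# catch = sum over wavelength bins of (emission x sensitivity).
# All spectra are invented 5-bin curves (e.g., spanning 400-700 nm).

def quantal_catch(stimulus, sensitivity):
    return sum(s * f for s, f in zip(stimulus, sensitivity))

# Relative emission of three display phosphors across 5 wavelength bins.
phosphors = {"B": [0.9, 0.3, 0.0, 0.0, 0.0],
             "G": [0.0, 0.3, 0.9, 0.3, 0.0],
             "R": [0.0, 0.0, 0.0, 0.4, 0.9]}
# A hypothetical dichromat's two photoreceptor sensitivity curves.
receptors = {"SWS": [0.8, 0.5, 0.1, 0.0, 0.0],
             "LWS": [0.0, 0.1, 0.5, 0.8, 0.4]}

# How independently each phosphor drives each receptor class determines
# the range of colors that can be simulated for this animal.
for p, spec in phosphors.items():
    catches = {r: round(quantal_catch(spec, sens), 2)
               for r, sens in receptors.items()}
    print(p, catches)
```

    Matching a natural object then amounts to solving for the phosphor weights whose summed catches equal the object's catches in every receptor class; if a receptor (e.g., one sensitive to ultraviolet) is not reachable by any phosphor, no weighting can reproduce the stimulus.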

  9. The computer treatment of remotely sensed data: An introduction to techniques which have geologic applications. [image enhancement and thematic classification in Brazil

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Paradella, W. R.; Vitorello, I.

    1982-01-01

    Several aspects of computer-assisted analysis techniques for image enhancement and thematic classification by which LANDSAT MSS imagery may be treated quantitatively are explained. In geological applications, computer processing of digital data possibly allows the fullest use of LANDSAT data, by displaying enhanced and corrected data for visual analysis and by evaluating and assigning each pixel's spectral information to a given class.

  10. Part II: The Effects of Aromatherapy and Guided Imagery for the Symptom Management of Anxiety, Pain, and Insomnia in Critically Ill Patients: An Integrative Review of Current Literature.

    PubMed

    Meghani, Naheed; Tracy, Mary Fran; Hadidi, Niloufar Niakosari; Lindquist, Ruth

    This review is part II of a 2-part series that presents evidence on the effectiveness of aromatherapy and guided imagery for the symptom management of anxiety, pain, and insomnia in adult critically ill patients. Evidence from this review supports the use of aromatherapy for management of pain, insomnia, and anxiety in critically ill patients. Evidence also supports the use of guided imagery for managing these symptoms in critical care; however, the evidence is sparse, mixed, and weak. More studies with larger samples and stronger designs are needed to further establish efficacy of guided imagery for the management of anxiety, pain, and insomnia of critically ill patients; to accomplish this, standardized evidence-based intervention protocols to ensure comparability and to establish optimal effectiveness are needed. Discussion and recommendations related to the use of these therapies in practice and needs for future research in these areas were generated.

  11. Forest cover type analysis of New England forests using innovative WorldView-2 imagery

    NASA Astrophysics Data System (ADS)

    Kovacs, Jenna M.

    For many years, remote sensing has been used to generate land cover type maps to create a visual representation of what is occurring on the ground. One significant use of remote sensing is the identification of forest cover types. New England forests are notorious for their especially complex forest structure and as a result have been, and continue to be, a challenge when classifying forest cover types. To most accurately depict forest cover types occurring on the ground, it is essential to utilize image data that have a suitable combination of both spectral and spatial resolution. The WorldView-2 (WV2) commercial satellite, launched in 2009, is the first of its kind, having both high spectral and spatial resolutions. WV2 records eight bands of multispectral imagery, four more than the usual high spatial resolution sensors, and has a pixel size of 1.85 meters at the nadir. These additional bands have the potential to improve classification detail and classification accuracy of forest cover type maps. For this reason, WV2 imagery was utilized on its own, and in combination with Landsat 5 TM (LS5) multispectral imagery, to evaluate whether these image data could more accurately classify forest cover types. In keeping with recent developments in image analysis, an Object-Based Image Analysis (OBIA) approach was used to segment images of Pawtuckaway State Park and nearby private lands, an area representative of the typical complex forest structure found in the New England region. A Classification and Regression Tree (CART) analysis was then used to classify image segments at two levels of classification detail. Accuracies for each forest cover type map produced were generated using traditional and area-based error matrices, and additional standard accuracy measures (i.e., KAPPA) were generated. The results from this study show that there is value in analyzing imagery with both high spectral and spatial resolutions, and that WV2's new and innovative bands can be useful for the classification of complex forest structures.
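    The error-matrix accuracy measures named in this record (overall accuracy and KAPPA) reduce to a few lines of arithmetic; the 3-class matrix below is hypothetical, not data from the study.

```python
import numpy as np

def accuracy_and_kappa(error_matrix):
    """Overall accuracy and Cohen's KAPPA from a square error (confusion) matrix.

    error_matrix[i][j] = count of samples with reference class i mapped to class j.
    """
    cm = np.asarray(error_matrix, dtype=float)
    n = cm.sum()
    observed = np.trace(cm) / n                                  # overall accuracy
    expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2  # chance agreement
    kappa = (observed - expected) / (1.0 - expected)
    return observed, kappa

# Hypothetical 3-class forest cover type error matrix
oa, kappa = accuracy_and_kappa([[50, 5, 5],
                                [4, 40, 6],
                                [6, 4, 30]])
```

    KAPPA discounts the agreement expected by chance from the row and column marginals, which is why it is reported alongside overall accuracy.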

  12. Critical infrastructure monitoring using UAV imagery

    NASA Astrophysics Data System (ADS)

    Maltezos, Evangelos; Skitsas, Michael; Charalambous, Elisavet; Koutras, Nikolaos; Bliziotis, Dimitris; Themistocleous, Kyriacos

    2016-08-01

    The constant technological evolution in Computer Vision has enabled the development of new techniques which, in conjunction with the use of Unmanned Aerial Vehicles (UAVs), can extract high-quality photogrammetric products for several applications. Dense Image Matching (DIM) is a Computer Vision technique that can generate a dense 3D point cloud of an area or object. The use of UAV systems and DIM techniques is not only a flexible and attractive solution for producing accurate, high-quality photogrammetric results but also a major contribution to cost effectiveness. In this context, this study aims to highlight the benefits of using UAVs for critical infrastructure monitoring by applying DIM. A Multi-View Stereo (MVS) approach using multiple images (RGB digital aerial and oblique images) to fully cover the area of interest is implemented. The application area is an Olympic venue in Attica, Greece, covering an area of 400 acres. The results of our study indicate that the UAV+DIM approach, delivering a 3D point cloud and an orthomosaic, responds very well to the increasingly demanding requirements for accurate and cost-effective applications.

  13. Toward brain-actuated car applications: Self-paced control with a motor imagery-based brain-computer interface.

    PubMed

    Yu, Yang; Zhou, Zongtan; Yin, Erwei; Jiang, Jun; Tang, Jingsheng; Liu, Yadong; Hu, Dewen

    2016-10-01

    This study presented a paradigm for controlling a car using an asynchronous electroencephalogram (EEG)-based brain-computer interface (BCI) and reported the results of a simulation performed in an experimental environment outside the laboratory. The paradigm uses two distinct motor imagery (MI) tasks, imaginary left- and right-hand movements, to generate a multi-task car control strategy consisting of starting the engine, moving forward, turning left, turning right, moving backward, and stopping the engine. Five healthy subjects participated in the online car control experiment, and all successfully controlled the car by following a previously outlined route. Subject S1 exhibited the most satisfactory BCI-based performance, which was comparable to the manual control-based performance. We hypothesize that the proposed self-paced car control paradigm based on EEG signals could potentially be used in car control applications, offering individuals with locked-in disorders a complementary or alternative way to achieve more mobility in the future, as well as a supplementary driving strategy to assist healthy people. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Advanced Ecosystem Mapping Techniques for Large Arctic Study Domains Using Calibrated High-Resolution Imagery

    NASA Astrophysics Data System (ADS)

    Macander, M. J.; Frost, G. V., Jr.

    2015-12-01

    Regional-scale mapping of vegetation and other ecosystem properties has traditionally relied on medium-resolution remote sensing such as Landsat (30 m) and MODIS (250 m). Yet, the burgeoning availability of high-resolution (<=2 m) imagery and ongoing advances in computing power and analysis tools raise the prospect of performing ecosystem mapping at fine spatial scales over large study domains. Here we demonstrate cutting-edge mapping approaches over a ~35,000 km² study area on Alaska's North Slope using calibrated and atmospherically-corrected mosaics of high-resolution WorldView-2 and GeoEye-1 imagery: (1) an a priori spectral approach incorporating the Satellite Imagery Automatic Mapper (SIAM) algorithms; (2) image segmentation techniques; and (3) texture metrics. The SIAM spectral approach classifies radiometrically-calibrated imagery into general vegetation density categories and non-vegetated classes. The SIAM classes were developed globally, and their applicability in arctic tundra environments has not been previously evaluated. Image segmentation, or object-based image analysis, automatically partitions high-resolution imagery into homogeneous image regions that can then be analyzed based on spectral, textural, and contextual information. We applied eCognition software to delineate waterbodies and vegetation classes, in combination with other techniques. Texture metrics were evaluated to determine the feasibility of using high-resolution imagery to algorithmically characterize periglacial surface forms (e.g., ice-wedge polygons), which are an important physical characteristic of permafrost-dominated regions but which cannot be distinguished by medium-resolution remote sensing. These advanced mapping techniques yield products that can provide essential information supporting a broad range of ecosystem science and land-use planning applications in northern Alaska and elsewhere in the circumpolar Arctic.

  15. Window of visibility - A psychophysical theory of fidelity in time-sampled visual motion displays

    NASA Technical Reports Server (NTRS)

    Watson, A. B.; Ahumada, A. J., Jr.; Farrell, J. E.

    1986-01-01

    A film of an object in motion presents on the screen a sequence of static views, while the human observer sees the object moving smoothly across the screen. Questions related to the perceptual identity of continuous and stroboscopic displays are examined. Time-sampled moving images are considered along with the contrast distribution of continuous motion, the contrast distribution of stroboscopic motion, the frequency spectrum of continuous motion, the frequency spectrum of stroboscopic motion, the approximation of the limits of human visual sensitivity to spatial and temporal frequencies by a window of visibility, the critical sampling frequency, the contrast distribution of staircase motion and the frequency spectrum of this motion, and the spatial dependence of the critical sampling frequency. Attention is given to apparent motion, models of motion, image recording, and computer-generated imagery.

  16. The window of visibility: A psychological theory of fidelity in time-sampled visual motion displays

    NASA Technical Reports Server (NTRS)

    Watson, A. B.; Ahumada, A. J., Jr.; Farrell, J. E.

    1983-01-01

    Many visual displays, such as movies and television, rely upon sampling in the time domain. The spatiotemporal frequency spectra for some simple moving images are derived and illustrations of how these spectra are altered by sampling in the time domain are provided. A simple model of the human perceiver which predicts the critical sample rate required to render sampled and continuous moving images indistinguishable is constructed. The rate is shown to depend upon the spatial and temporal acuity of the observer, and upon the velocity and spatial frequency content of the image. Several predictions of this model are tested and confirmed. The model is offered as an explanation of many of the phenomena known as apparent motion. Finally, the implications of the model for computer-generated imagery are discussed.
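    The model's central prediction can be stated compactly (a sketch with symbols chosen here, assuming a one-dimensional pattern and a rectangular approximation of the window; the paper derives the full two-dimensional case):

```latex
% The spectrum of a 1-D pattern translating at velocity r lies on the
% line \omega = -r u in the (u, \omega) plane, where u is spatial and
% \omega temporal frequency. Time-sampling at rate f_s replicates this
% spectrum at offsets k f_s along the temporal axis. If the window of
% visibility is approximated by |u| \le u_l and |\omega| \le \omega_l,
% every replica falls outside the window -- so sampled and continuous
% motion are indistinguishable -- whenever the sample rate exceeds the
% critical value
\[
  f_c \;=\; \omega_l + r\,u_l ,
\]
% which is why the critical sampling frequency grows with both the
% velocity and the spatial frequency content of the image.
```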

  17. Mapping Geology and Vegetation using Hyperspectral Data in Antarctica: Current Challenges, New Solutions and Looking to the Future

    NASA Astrophysics Data System (ADS)

    Black, M.; Riley, T. R.; Fleming, A. H.; Ferrier, G.; Fretwell, P.; Casanovas, P.

    2015-12-01

    Antarctica is a unique and geographically remote environment. Traditional field campaigns investigating geology and vegetation in the region encounter numerous challenges including the harsh polar climate, the invasive nature of the work, steep topography and high infrastructure costs. Additionally, such field campaigns are often limited in terms of spatial and temporal resolution, and particularly, the topographical challenges presented in the Antarctic mean that many areas remain inaccessible. Remote Sensing, particularly hyperspectral imaging, may provide a solution to overcome the difficulties associated with field based mapping in the Antarctic. Planned satellite launches, such as EnMAP and HyspIRI, if successful, will yield large-scale, repeated hyperspectral imagery of Antarctica. Hyperspectral imagery has proven mapping capabilities and can yield greater information than can be attained using multispectral data. As a precursor to future satellite imagery, we utilise hyperspectral imagery from the first known airborne hyperspectral survey carried out in the Antarctic by the British Antarctic Survey and partners in 2011. Multiple imaging spectrometers were simultaneously deployed covering the visible, shortwave and thermal infrared regions of the electromagnetic spectrum. Additional data was generated during a field campaign deploying multiple ground spectrometers covering the same wavelengths as the airborne imagers. We utilise this imagery to assess the current challenges and propose some new solutions for mapping vegetation and geology, which may be directly applicable to future satellite hyperspectral imagery in the Antarctic.

  18. Colors in mind: a novel paradigm to investigate pure color imagery.

    PubMed

    Wantz, Andrea L; Borst, Grégoire; Mast, Fred W; Lobmaier, Janek S

    2015-07-01

    Mental color imagery abilities are commonly measured using paradigms that involve naming, judging, or comparing the colors of visual mental images of well-known objects (e.g., "Is a sunflower darker yellow than a lemon"?). Although this approach is widely used in patient studies, differences in the ability to perform such color comparisons might simply reflect participants' general knowledge of object colors rather than their ability to generate accurate visual mental images of the colors of the objects. The aim of the present study was to design a new color imagery paradigm. Participants were asked to visualize a color for 3 s and then to determine a visually presented color by pressing 1 of 6 keys. We reasoned that participants would react faster when the imagined and perceived colors were congruent than when they were incongruent. In Experiment 1, participants were slower in incongruent than congruent trials but only when they were instructed to visualize the colors. The results in Experiment 2 demonstrate that the congruency effect reported in Experiment 1 cannot be attributed to verbalization of the color that had to be visualized. Finally, in Experiment 3, the congruency effect evoked by mental imagery correlated with performance in a perceptual version of the task. We discuss these findings with respect to the mechanisms that underlie mental imagery and patients suffering from color imagery deficits. (c) 2015 APA, all rights reserved.

  19. [Brain-Computer Interface: the First Clinical Experience in Russia].

    PubMed

    Mokienko, O A; Lyukmanov, R Kh; Chernikova, L A; Suponeva, N A; Piradov, M A; Frolov, A A

    2016-01-01

    Motor imagery is suggested to stimulate the same plastic mechanisms in the brain as a real movement. The brain-computer interface (BCI) controls motor imagery by converting EEG during this process into the commands for an external device. This article presents the results of a two-stage study of the clinical use of non-invasive BCI in the rehabilitation of patients with severe hemiparesis caused by focal brain damage. It was found that the ability to control BCI did not depend on the duration of a disease, brain lesion localization and the degree of neurological deficit. The first step of the study involved 36 patients; it showed that the efficacy of rehabilitation was higher in the group with the use of BCI (the score on the Action Research Arm Test (ARAT) improved from 1 [0; 2] to 5 [0; 16] points, p = 0.012; no significant improvement was observed in the control group). The second step of the study involved 19 patients; the complex BCI-exoskeleton (i.e., with kinesthetic feedback) was used for motor imagery training. The improvement of the motor function of the hands was proved by ARAT (the score improved from 2 [0; 37] to 4 [1; 45.5] points, p = 0.005) and the Fugl-Meyer scale (from 72 [63; 110] to 79 [68; 115] points, p = 0.005).

  20. Large-scale deep learning for robotically gathered imagery for science

    NASA Astrophysics Data System (ADS)

    Skinner, K.; Johnson-Roberson, M.; Li, J.; Iscar, E.

    2016-12-01

    With the explosion of computing power, the intelligence and capability of mobile robotics has dramatically increased over the last two decades. Today, we can deploy autonomous robots to achieve observations in a variety of environments ripe for scientific exploration. These platforms are capable of gathering a volume of data previously unimaginable. Additionally, optical cameras, driven by mobile phones and consumer photography, have rapidly improved in size, power consumption, and quality making their deployment cheaper and easier. Finally, in parallel we have seen the rise of large-scale machine learning approaches, particularly deep neural networks (DNNs), increasing the quality of the semantic understanding that can be automatically extracted from optical imagery. In concert this enables new science using a combination of machine learning and robotics. This work will discuss the application of new low-cost high-performance computing approaches and the associated software frameworks to enable scientists to rapidly extract useful science data from millions of robotically gathered images. The automated analysis of imagery on this scale opens up new avenues of inquiry unavailable using more traditional manual or semi-automated approaches. We will use a large archive of millions of benthic images gathered with an autonomous underwater vehicle to demonstrate how these tools enable new scientific questions to be posed.

  1. An efficient rhythmic component expression and weighting synthesis strategy for classifying motor imagery EEG in a brain computer interface

    NASA Astrophysics Data System (ADS)

    Wang, Tao; He, Bin

    2004-03-01

    The recognition of mental states during motor imagery tasks is crucial for EEG-based brain computer interface research. We have developed a new algorithm by means of a frequency decomposition and weighting synthesis strategy for recognizing imagined right- and left-hand movements. A frequency range from 5 to 25 Hz was divided into 20 band bins for each trial, and the corresponding envelopes of the filtered EEG signals were extracted as a measure of instantaneous power at each frequency band. The dimensionality of the feature space was reduced from 200 (corresponding to 2 s) to 3 by down-sampling the envelopes of the feature signals and subsequently applying principal component analysis. A linear discriminant analysis algorithm was then used to classify the features, owing to its generalization capability. Each frequency band bin was weighted by a function determined according to the classification accuracy during the training process. The classification algorithm was applied to a dataset of nine human subjects, and achieved a classification success rate of 90% in training and 77% in testing. These promising results suggest that the algorithm can serve as a starting point for general-purpose mental-state recognition based on motor imagery tasks.
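    The band-bin envelope extraction, PCA reduction, and linear discriminant step described in this record can be sketched for a single channel as below. This is an illustration, not the authors' code: the Hilbert transform is one common way to obtain envelopes (the abstract does not specify the method), and the per-bin weighting by training accuracy is omitted.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def band_envelope_features(trials, fs, bands=None, step=10):
    """Down-sampled envelopes of band-pass-filtered EEG, one block per band bin.

    trials : array (n_trials, n_samples), single-channel EEG, one row per trial.
    """
    if bands is None:                                    # 20 one-Hz bins, 5-25 Hz
        bands = [(lo, lo + 1) for lo in range(5, 25)]
    feats = []
    for lo, hi in bands:
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        env = np.abs(hilbert(sosfiltfilt(sos, trials, axis=1), axis=1))
        feats.append(env[:, ::step])                     # down-sample the envelope
    return np.concatenate(feats, axis=1)

def pca(X, k=3):
    """Project the feature matrix onto its top-k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def fisher_lda_fit(X, y):
    """Two-class Fisher linear discriminant: weight vector and bias."""
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    Sw = np.cov(X[y == 0].T) + np.cov(X[y == 1].T)       # pooled within-class scatter
    w = np.linalg.solve(Sw + 1e-9 * np.eye(X.shape[1]), m1 - m0)
    b = -0.5 * w @ (m0 + m1)
    return w, b                                          # class 1 where X @ w + b > 0
```

    On synthetic trials whose classes differ in narrow-band power (e.g., 10 Hz vs. 22 Hz), the three PCA components separate cleanly and the discriminant recovers the labels.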

  2. High Resolution Topography of Polar Regions from Commercial Satellite Imagery, Petascale Computing and Open Source Software

    NASA Astrophysics Data System (ADS)

    Morin, Paul; Porter, Claire; Cloutier, Michael; Howat, Ian; Noh, Myoung-Jong; Willis, Michael; Kramer, WIlliam; Bauer, Greg; Bates, Brian; Williamson, Cathleen

    2017-04-01

    Surface topography is among the most fundamental data sets for geosciences, essential for disciplines ranging from glaciology to geodynamics. Two new projects are using sub-meter, commercial imagery licensed by the National Geospatial-Intelligence Agency and open source photogrammetry software to produce a time-tagged 2m posting elevation model of the Arctic and an 8m posting reference elevation model for the Antarctic. When complete, this publicly available data will be at higher resolution than any elevation models that cover the entirety of the Western United States. These two polar projects are made possible due to three equally important factors: 1) open-source photogrammetry software, 2) petascale computing, and 3) sub-meter imagery licensed to the United States Government. Our talk will detail the technical challenges of using automated photogrammetry software; the rapid workflow evolution to allow DEM production; the task of deploying the workflow on one of the world's largest supercomputers; the trials of moving massive amounts of data; and the management strategies the team developed in order to meet deadlines. Finally, we will discuss the implications of this type of collaboration for future multi-team use of leadership-class systems such as Blue Waters, and for further elevation mapping.

  3. Extraction of incident irradiance from LWIR hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Lahaie, Pierre

    2014-10-01

    The atmospheric correction of thermal hyperspectral imagery can be separated into two distinct processes: Atmospheric Compensation (AC) and Temperature and Emissivity Separation (TES). TES requires as input, at each pixel, the ground-leaving radiance and the atmospheric downwelling irradiance, which are the outputs of the AC process. Extracting the downwelling irradiance from imagery requires assumptions about the nature of some of the pixels, the sensor and the atmosphere. A further difficulty is that the sensor's spectral response is often not well characterized. To deal with this unknown, we defined a spectral mean operator that is applied both to the ground-leaving radiance and to a downwelling irradiance computed with MODTRAN. A user selects a number of pixels in the image for which the emissivity is assumed to be known. The emissivity of these pixels is assumed to be smooth, so that the only spectrally fast-varying component of the measured radiance is the downwelling irradiance. Using these assumptions we built an algorithm to estimate the downwelling irradiance, which is applied to all the selected pixels. The estimated irradiance is the average over the spectral channels of the resulting computation. The algorithm performs well in simulation, and results are shown for errors in the assumed emissivity and in the atmospheric profiles. Sensor noise mainly influences the required number of pixels.

  4. A subject-independent pattern-based Brain-Computer Interface

    PubMed Central

    Ray, Andreas M.; Sitaram, Ranganatha; Rana, Mohit; Pasqualotto, Emanuele; Buyukturkoglu, Korhan; Guan, Cuntai; Ang, Kai-Keng; Tejos, Cristián; Zamorano, Francisco; Aboitiz, Francisco; Birbaumer, Niels; Ruiz, Sergio

    2015-01-01

    While earlier Brain-Computer Interface (BCI) studies have mostly focused on modulating specific brain regions or signals, new developments in pattern classification of brain states are enabling real-time decoding and modulation of an entire functional network. The present study proposes a new method for real-time pattern classification and neurofeedback of brain states from electroencephalographic (EEG) signals. It involves the creation of a fused classification model based on the method of Common Spatial Patterns (CSPs) from data of several healthy individuals. The subject-independent model is then used to classify EEG data in real-time and provide feedback to new individuals. In a series of offline experiments involving training and testing of the classifier with individual data from 27 healthy subjects, a mean classification accuracy of 75.30% was achieved, demonstrating that the classification system at hand can reliably decode two types of imagery used in our experiments, i.e., happy emotional imagery and motor imagery. In a subsequent experiment it is shown that the classifier can be used to provide neurofeedback to new subjects, and that these subjects learn to “match” their brain pattern to that of the fused classification model in a few days of neurofeedback training. This finding can have important implications for future studies on neurofeedback and its clinical applications on neuropsychiatric disorders. PMID:26539089
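    The Common Spatial Patterns computation underlying such a classifier can be sketched compactly. This is the textbook two-class formulation, not the authors' fused multi-subject model; the trial shapes and helper names are assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=2):
    """Common Spatial Patterns for a two-class (e.g., motor imagery) problem.

    trials_* : arrays of shape (n_trials, n_channels, n_samples).
    Returns 2*n_pairs spatial filters (rows) that extremize the variance
    ratio between the two classes.
    """
    def mean_cov(trials):
        covs = [x @ x.T / np.trace(x @ x.T) for x in trials]  # normalized covariances
        return np.mean(covs, axis=0)

    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized symmetric eigenproblem Ca w = lambda (Ca + Cb) w;
    # eigh returns eigenvalues in ascending order.
    evals, evecs = eigh(Ca, Ca + Cb)
    pick = np.r_[np.arange(n_pairs), np.arange(len(evals) - n_pairs, len(evals))]
    return evecs[:, pick].T

def csp_features(filters, trial):
    """Normalized log-variance features of one trial after spatial filtering."""
    z = filters @ trial
    var = z.var(axis=1)
    return np.log(var / var.sum())
```

    The filters from the extreme eigenvalues maximize variance for one class while minimizing it for the other, which is what makes the log-variance features linearly separable.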

  5. Functional recovery in upper limb function in stroke survivors by using brain-computer interface A single case A-B-A-B design.

    PubMed

    Ono, Takashi; Mukaino, Masahiko; Ushiba, Junichi

    2013-01-01

    Recent studies suggest that brain-computer interface (BCI) training for chronic stroke patients is useful for improving the motor function of the paretic hand. However, these studies do not clearly show the extent of the BCI's contribution, because they combined BCI with other rehabilitation systems, e.g. an orthosis, a robotic intervention, or electrical stimulation. We therefore compared neurological effects between interventions with neuromuscular electrical stimulation (NMES) combined with motor imagery and BCI-driven NMES, employing an A-B-A-B experimental design. In epoch A, the subject received NMES on the paretic extensor digitorum communis (EDC) and was asked to attempt finger extension simultaneously. In epoch B, the subject received NMES when the BCI system detected a motor-related electroencephalogram change during attempted motor imagery. Both epochs were carried out for 60 min per day, 5 days per week. As a result, EMG activity of the EDC was enhanced by BCI-driven NMES, and significant cortico-muscular coherence was observed at the final evaluation. These results indicate that training by BCI-driven NMES is effective even compared with motor imagery combined with NMES, suggesting the superiority of closed-loop training with BCI-driven NMES over open-loop NMES for chronic stroke patients.

  6. Final Report on Video Log Data Mining Project

    DOT National Transportation Integrated Search

    2012-06-01

    This report describes the development of an automated computer vision system that identifies and inventories road signs from imagery acquired by the Kansas Department of Transportation's road profiling system, which takes images every 26.4 feet...

  7. HVS: an image-based approach for constructing virtual environments

    NASA Astrophysics Data System (ADS)

    Zhang, Maojun; Zhong, Li; Sun, Lifeng; Li, Yunhao

    1998-09-01

    Virtual Reality systems can construct virtual environments which provide an interactive walkthrough experience. Traditionally, walkthrough is performed by modeling and rendering 3D computer graphics in real time. Despite the rapid advance of computer graphics techniques, the rendering engine usually places a limit on scene complexity and rendering quality. This paper presents an approach which uses real-world or synthesized images to compose a virtual environment. The images can be recorded by camera, or synthesized by off-line multispectral image processing of Landsat TM (Thematic Mapper) and SPOT HRV imagery. They are digitally warped on the fly to simulate walking forward/backward, turning left/right, and 360-degree viewing. We have developed a system, HVS (Hyper Video System), based on these principles. HVS improves upon QuickTime VR and Surround Video in walking forward/backward.

  8. The Goddard Profiling Algorithm (GPROF): Description and Current Applications

    NASA Technical Reports Server (NTRS)

    Olson, William S.; Yang, Song; Stout, John E.; Grecu, Mircea

    2004-01-01

    Atmospheric scientists use different methods for interpreting satellite data. In the early days of satellite meteorology, the analysis of cloud pictures from satellites was primarily subjective. As computer technology improved, satellite pictures could be processed digitally, and mathematical algorithms were developed and applied to the digital images in different wavelength bands to extract information about the atmosphere in an objective way. The kind of mathematical algorithm one applies to satellite data may depend on the complexity of the physical processes that lead to the observed image, and how much information is contained in the satellite images both spatially and at different wavelengths. Imagery from satellite-borne passive microwave radiometers has limited horizontal resolution, and the observed microwave radiances are the result of complex physical processes that are not easily modeled. For this reason, a type of algorithm called a Bayesian estimation method is utilized to interpret passive microwave imagery in an objective, yet computationally efficient manner.
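    The Bayesian estimation idea can be sketched as a database retrieval in which candidate profiles are weighted by the likelihood of the observed radiances. This is a schematic of the general approach, not the GPROF implementation; all array shapes and the toy numbers are assumptions.

```python
import numpy as np

def bayesian_retrieval(obs, db_tb, db_param, obs_cov):
    """Bayesian estimate of a geophysical parameter from microwave radiances.

    obs      : (n_chan,) observed brightness temperatures.
    db_tb    : (n_db, n_chan) simulated brightness temperatures per database profile.
    db_param : (n_db,) parameter of interest (e.g., surface rain rate) per profile.
    obs_cov  : (n_chan, n_chan) observation-plus-modeling error covariance.
    Returns the posterior-weighted mean of the parameter over the database.
    """
    d = np.asarray(db_tb, float) - np.asarray(obs, float)
    sinv = np.linalg.inv(obs_cov)
    logw = -0.5 * np.einsum("ij,jk,ik->i", d, sinv, d)   # -0.5 * Mahalanobis distance^2
    w = np.exp(logw - logw.max())                        # stabilize before normalizing
    w /= w.sum()
    return float(w @ np.asarray(db_param, float))
```

    Because the weights are simple exponentials of a quadratic misfit, the estimate is computationally efficient even for large databases, which is the practical appeal noted in the record.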

  9. Assessing the accuracy and repeatability of automated photogrammetrically generated digital surface models from unmanned aerial system imagery

    NASA Astrophysics Data System (ADS)

    Chavis, Christopher

    Using commercial digital cameras in conjunction with Unmanned Aerial Systems (UAS) to generate 3-D Digital Surface Models (DSMs) and orthomosaics is emerging as a cost-effective alternative to Light Detection and Ranging (LiDAR). Powerful software applications such as Pix4D and APS can automate the generation of DSM and orthomosaic products from a handful of inputs. However, the accuracy of these models is relatively untested. The objectives of this study were to generate multiple DSM and orthomosaic pairs of the same area using Pix4D and APS from flights of imagery collected with a lightweight UAS. The accuracy of each individual DSM was assessed in addition to the consistency of the method to model one location over a period of time. Finally, this study determined if the DSMs automatically generated using lightweight UAS and commercial digital cameras could be used for detecting changes in elevation and at what scale. Accuracy was determined by comparing DSMs to a series of reference points collected with survey grade GPS. Other GPS points were also used as control points to georeference the products within Pix4D and APS. The effectiveness of the products for change detection was assessed through image differencing and observance of artificially induced, known elevation changes. The vertical accuracy with the optimal data and model is ≈ 25 cm and the highest consistency over repeat flights is a standard deviation of ≈ 5 cm. Elevation change detection based on such UAS imagery and DSM models should be viable for detecting infrastructure change in urban or suburban environments with little dense canopy vegetation.
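    The accuracy and change-detection checks described in this record reduce to simple computations once the DSMs and reference points are in hand (a sketch with hypothetical helper names; the ~5 cm figure it alludes to is the record's reported repeat-flight standard deviation):

```python
import numpy as np

def vertical_rmse(dsm_z, gps_z):
    """Vertical RMSE of DSM elevations against survey-grade GPS check points."""
    err = np.asarray(dsm_z, float) - np.asarray(gps_z, float)
    return float(np.sqrt(np.mean(err ** 2)))

def change_mask(dsm_t0, dsm_t1, threshold):
    """Binary elevation-change mask from two co-registered DSMs (image differencing).

    The threshold should comfortably exceed the method's repeatability
    (e.g., roughly 3x a ~5 cm repeat-flight standard deviation).
    """
    return np.abs(np.asarray(dsm_t1, float) - np.asarray(dsm_t0, float)) > threshold
```

    Differencing only flags changes larger than the method's own noise floor, which is why the repeatability figure matters as much as the single-model accuracy.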

  10. Research in interactive scene analysis

    NASA Technical Reports Server (NTRS)

    Tenenbaum, J. M.; Barrow, H. G.; Weyl, S. A.

    1976-01-01

    Cooperative (man-machine) scene analysis techniques were developed whereby humans can provide a computer with guidance when completely automated processing is infeasible. An interactive approach promises significant near-term payoffs in analyzing various types of high volume satellite imagery, as well as vehicle-based imagery used in robot planetary exploration. This report summarizes the work accomplished over the duration of the project and describes in detail three major accomplishments: (1) the interactive design of texture classifiers; (2) a new approach for integrating the segmentation and interpretation phases of scene analysis; and (3) the application of interactive scene analysis techniques to cartography.

  11. Utilizing remote sensing of thematic mapper data to improve our understanding of estuarine processes and their influence on the productivity of estuarine-dependent fisheries

    NASA Technical Reports Server (NTRS)

    Browder, Joan A.; May, L. Nelson, Jr.; Rosenthal, Alan; Baumann, Robert H.; Gosselink, James G.

    1987-01-01

    A stochastic spatial computer model addressing coastal resource problems in Louisiana is being refined and validated using thematic mapper (TM) imagery. TM images of brackish marsh sites were processed, and spatial-parameter data were tabulated from TM images of the salt marsh sites. The Fisheries Image Processing System (FIPS) was used to analyze the TM scene. Activities were concentrated on improving the structure of the model and developing a structure and methodology for calibrating the model with spatial-pattern data from the TM imagery.

  12. Remote sensing inputs to landscape models which predict future spatial land use patterns for hydrologic models

    NASA Technical Reports Server (NTRS)

    Miller, L. D.; Tom, C.; Nualchawee, K.

    1977-01-01

    A tropical forest area of Northern Thailand provided a test case of the application of the approach in more natural surroundings. Remote sensing imagery subjected to proper computer analysis has been shown to be a very useful means of collecting spatial data for the science of hydrology. Remote sensing products provide direct input to hydrologic models and practical data bases for planning large and small-scale hydrologic developments. Combining the available remote sensing imagery together with available map information in the landscape model provides a basis for substantial improvements in these applications.

  13. Generating Multispectral VIIRS Imagery in Near Real-Time for Use by the National Weather Service in Alaska

    NASA Astrophysics Data System (ADS)

    Broderson, D.; Dierking, C.; Stevens, E.; Heinrichs, T. A.; Cherry, J. E.

    2016-12-01

    The Geographic Information Network of Alaska (GINA) at the University of Alaska Fairbanks (UAF) uses two direct broadcast antennas to receive data from a number of polar-orbiting weather satellites, including the Suomi National Polar Partnership (S-NPP) satellite. GINA uses data from S-NPP's Visible Infrared Imaging Radiometer Suite (VIIRS) to generate a variety of multispectral imagery products developed with the needs of the National Weather Service operational meteorologist in mind. Multispectral products have two primary advantages over single-channel products. First, they can more clearly highlight some terrain and meteorological features which are less evident in the component single channels. Second, multispectral products present the information from several bands in just one image, thereby sparing the meteorologist unnecessary time interrogating the component single bands individually. With 22 channels available from the VIIRS instrument, the number of possible multispectral products is theoretically huge. A small number of products will be emphasized in this presentation, with the products chosen based on their proven utility in the forecasting environment. Multispectral products can be generated upstream of the end user or by the end user at their own workstation. The advantages and disadvantages of both approaches will be outlined. Lastly, the technique of improving the appearance of multispectral imagery by correcting for atmospheric reflectance at the shorter wavelengths will be described.

  14. Flight data acquisition methodology for validation of passive ranging algorithms for obstacle avoidance

    NASA Technical Reports Server (NTRS)

    Smith, Phillip N.

    1990-01-01

    The automation of low-altitude rotorcraft flight depends on the ability to detect, locate, and navigate around obstacles lying in the rotorcraft's intended flightpath. Computer vision techniques provide a passive method of obstacle detection and range estimation, for obstacle avoidance. Several algorithms based on computer vision methods have been developed for this purpose using laboratory data; however, further development and validation of candidate algorithms require data collected from rotorcraft flight. A data base containing low-altitude imagery augmented with the rotorcraft and sensor parameters required for passive range estimation is not readily available. Here, the emphasis is on the methodology used to develop such a data base from flight-test data consisting of imagery, rotorcraft and sensor parameters, and ground-truth range measurements. As part of the data preparation, a technique for obtaining the sensor calibration parameters is described. The data base will enable the further development of algorithms for computer vision-based obstacle detection and passive range estimation, as well as provide a benchmark for verification of range estimates against ground-truth measurements.

  15. Techniques for Producing Coastal Land Water Masks from Landsat and Other Multispectral Satellite Data

    NASA Technical Reports Server (NTRS)

    Spruce, Joseph P.; Hall, Callie

    2005-01-01

    Coastal erosion and land loss continue to threaten many areas in the United States. Landsat data has been used to monitor regional coastal change since the 1970s. Many techniques can be used to produce coastal land water masks, including image classification and density slicing of individual bands or of band ratios. Band ratios used in land water detection include several variations of the Normalized Difference Water Index (NDWI). This poster discusses a study that compares land water masks computed from unsupervised Landsat image classification with masks from density-sliced band ratios and from the Landsat TM band 5. The greater New Orleans area is employed in this study, due to its abundance of coastal habitats and its vulnerability to coastal land loss. Image classification produced the best results based on visual comparison to higher resolution satellite and aerial image displays. However, density-sliced NDWI imagery from either near infrared (NIR) and blue bands or from NIR and green bands also produced more effective land water masks than imagery from the density-sliced Landsat TM band 5. NDWI based on NIR and green bands is noteworthy because it allows land water masks to be generated from multispectral satellite sensors without a blue band (e.g., ASTER and Landsat MSS). NDWI techniques also have potential for producing land water masks from coarser scaled satellite data, such as MODIS.
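    The green/NIR NDWI band ratio and density slicing described above can be sketched in a few lines; the threshold of 0.0 and the toy digital numbers are illustrative assumptions, not values from the study:

```python
import numpy as np

def ndwi(green, nir):
    """Normalized Difference Water Index: high over water (which
    absorbs NIR strongly) and low over land and vegetation."""
    green = green.astype(float)
    nir = nir.astype(float)
    return (green - nir) / (green + nir + 1e-12)

def land_water_mask(green, nir, threshold=0.0):
    """Density-slice the NDWI image: True where the pixel is water."""
    return ndwi(green, nir) > threshold

# Toy digital numbers: land pixels are NIR-bright, water pixels NIR-dark.
green = np.array([[80.0, 60.0], [70.0, 65.0]])
nir   = np.array([[120.0, 20.0], [110.0, 15.0]])
mask = land_water_mask(green, nir)
```

    Because only green and NIR enter the ratio, the same function works for sensors without a blue band, which is the point made above about ASTER and Landsat MSS.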

  16. Techniques for Producing Coastal Land Water Masks from Landsat and Other Multispectral Satellite Data

    NASA Technical Reports Server (NTRS)

    Spruce, Joe; Hall, Callie

    2005-01-01

    Coastal erosion and land loss continue to threaten many areas in the United States. Landsat data has been used to monitor regional coastal change since the 1970s. Many techniques can be used to produce coastal land water masks, including image classification and density slicing of individual bands or of band ratios. Band ratios used in land water detection include several variations of the Normalized Difference Water Index (NDWI). This poster discusses a study that compares land water masks computed from unsupervised Landsat image classification with masks from density-sliced band ratios and from the Landsat TM band 5. The greater New Orleans area is employed in this study, due to its abundance of coastal habitats and its vulnerability to coastal land loss. Image classification produced the best results based on visual comparison to higher resolution satellite and aerial image displays. However, density-sliced NDWI imagery from either near infrared (NIR) and blue bands or from NIR and green bands also produced more effective land water masks than imagery from the density-sliced Landsat TM band 5. NDWI based on NIR and green bands is noteworthy because it allows land water masks to be generated from multispectral satellite sensors without a blue band (e.g., ASTER and Landsat MSS). NDWI techniques also have potential for producing land water masks from coarser scaled satellite data, such as MODIS.

  17. An automated, open-source pipeline for mass production of digital elevation models (DEMs) from very-high-resolution commercial stereo satellite imagery

    NASA Astrophysics Data System (ADS)

    Shean, David E.; Alexandrov, Oleg; Moratto, Zachary M.; Smith, Benjamin E.; Joughin, Ian R.; Porter, Claire; Morin, Paul

    2016-06-01

    We adapted the automated, open source NASA Ames Stereo Pipeline (ASP) to generate digital elevation models (DEMs) and orthoimages from very-high-resolution (VHR) commercial imagery of the Earth. These modifications include support for rigorous and rational polynomial coefficient (RPC) sensor models, sensor geometry correction, bundle adjustment, point cloud co-registration, and significant improvements to the ASP code base. We outline a processing workflow for ~0.5 m ground sample distance (GSD) DigitalGlobe WorldView-1 and WorldView-2 along-track stereo image data, with an overview of ASP capabilities, an evaluation of ASP correlator options, benchmark test results, and two case studies of DEM accuracy. Output DEM products are posted at ~2 m with direct geolocation accuracy of <5.0 m CE90/LE90. An automated iterative closest-point (ICP) co-registration tool reduces absolute vertical and horizontal error to <0.5 m where appropriate ground-control data are available, with observed standard deviation of ~0.1-0.5 m for overlapping, co-registered DEMs (n = 14, 17). While ASP can be used to process individual stereo pairs on a local workstation, the methods presented here were developed for large-scale batch processing in a high-performance computing environment. We are leveraging these resources to produce dense time series and regional mosaics for the Earth's polar regions.
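    The iterative closest-point (ICP) co-registration step mentioned above can be illustrated with a translation-only toy version. This is only a sketch: the production tools also estimate rotation, reject outliers, and use spatial indexing rather than a brute-force distance matrix.

```python
import numpy as np

def icp_translation(src, ref, iters=20):
    """Translation-only iterative closest point.

    Repeatedly matches each source point to its nearest reference
    point (brute force here) and shifts the source cloud by the mean
    residual, converging once correspondences stabilize.
    """
    shift = np.zeros(src.shape[1])
    for _ in range(iters):
        moved = src + shift
        d = np.linalg.norm(moved[:, None, :] - ref[None, :, :], axis=2)
        nearest = ref[np.argmin(d, axis=1)]
        shift += (nearest - moved).mean(axis=0)
    return shift

rng = np.random.default_rng(0)
ref = rng.random((50, 3))                  # reference point cloud
true_shift = np.array([0.3, -0.2, 1.5])    # e.g. a vertical DEM bias
est = icp_translation(ref - true_shift, ref)
```

    For this noiseless case the estimated shift recovers the applied offset; real DEM pairs add noise, data gaps, and surface change, which is why robust variants are needed.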

  18. A framework for activity detection in wide-area motion imagery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Porter, Reid B; Ruggiero, Christy E; Morrison, Jack D

    2009-01-01

    Wide-area persistent imaging systems are becoming increasingly cost effective, and large areas of the earth can now be imaged at relatively high frame rates (1-2 fps). The efficient exploitation of the large geo-spatial-temporal datasets produced by these systems poses significant technical challenges for image and video analysis and data mining. In recent years significant progress has been made on stabilization, moving object detection, and tracking, and automated systems now generate hundreds to thousands of vehicle tracks from raw data with little human intervention. However, tracking performance at this scale is unreliable, and the average track length is much smaller than the average vehicle route. This is a limiting factor for applications which depend heavily on track identity, e.g., tracking vehicles from their points of origin to their final destinations. In this paper we propose and investigate a framework for wide-area motion imagery (WAMI) exploitation that minimizes the dependence on track identity. In its current form this framework takes noisy, incomplete moving object detection tracks as input, and produces a small set of activities (e.g. multi-vehicle meetings) as output. The framework can be used to focus and direct human users and additional computation, and suggests a path towards high-level content extraction by learning from the human-in-the-loop.

  19. Enhanced detection and visualization of anomalies in spectral imagery

    NASA Astrophysics Data System (ADS)

    Basener, William F.; Messinger, David W.

    2009-05-01

    Anomaly detection algorithms applied to hyperspectral imagery are able to reliably identify man-made objects from a natural environment based on statistical/geometric likelihood. The process is more robust than target identification, which requires precise prior knowledge of the object of interest, but has an inherently higher false alarm rate. Standard anomaly detection algorithms measure deviation of pixel spectra from a parametric model (either statistical or linear mixing) estimating the image background. The topological anomaly detector (TAD) creates a fully non-parametric, graph theory-based, topological model of the image background and measures deviation from this background using codensity. In this paper we present a large-scale comparative test of TAD against 80+ targets in four full HYDICE images using the entire canonical target set for generation of ROC curves. TAD will be compared against several statistics-based detectors including local RX and subspace RX. Even a perfect anomaly detection algorithm would have a high practical false alarm rate in most scenes simply because the user/analyst is not interested in every anomalous object. To assist the analyst in identifying and sorting objects of interest, we investigate coloring of the anomalies with principal components projections using statistics computed from the anomalies. This gives a very useful colorization of anomalies in which objects of similar material tend to have the same color, enabling an analyst to quickly sort and identify anomalies of highest interest.
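    The statistics-based RX baseline mentioned above scores each pixel by the Mahalanobis distance of its spectrum from a Gaussian background model. A minimal global-RX sketch on a synthetic cube (the paper's local and subspace variants differ only in how the background statistics are estimated):

```python
import numpy as np

def rx_detector(cube):
    """Global RX anomaly detector: Mahalanobis distance of each pixel
    spectrum from the scene mean under a Gaussian background model."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b)
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False) + 1e-6 * np.eye(b))
    d = X - mu
    scores = np.einsum('ij,jk,ik->i', d, cov_inv, d)
    return scores.reshape(h, w)

# Toy cube: Gaussian background plus one spectrally distinct pixel.
rng = np.random.default_rng(0)
cube = rng.normal(0.0, 1.0, size=(8, 8, 5))
cube[3, 4] += 10.0   # implanted anomaly
scores = rx_detector(cube)
```

    Thresholding the score image yields the anomaly map; TAD replaces the parametric Gaussian assumption here with a graph-based background model.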

  20. Use of multiresolution wavelet feature pyramids for automatic registration of multisensor imagery

    NASA Technical Reports Server (NTRS)

    Zavorin, Ilya; Le Moigne, Jacqueline

    2005-01-01

    The problem of image registration, or the alignment of two or more images representing the same scene or object, has to be addressed in various disciplines that employ digital imaging. In the area of remote sensing, just like in medical imaging or computer vision, it is necessary to design robust, fast, and widely applicable algorithms that would allow automatic registration of images generated by various imaging platforms at the same or different times and that would provide subpixel accuracy. One of the main issues that needs to be addressed when developing a registration algorithm is what type of information should be extracted from the images being registered, to be used in the search for the geometric transformation that best aligns them. The main objective of this paper is to evaluate several wavelet pyramids that may be used both for invariant feature extraction and for representing images at multiple spatial resolutions to accelerate registration. We find that the bandpass wavelets obtained from the steerable pyramid due to Simoncelli perform best in terms of accuracy and consistency, while the low-pass wavelets obtained from the same pyramid give the best results in terms of the radius of convergence. Based on these findings, we propose a modification of a gradient-based registration algorithm that has recently been developed for medical data. We test the modified algorithm on several sets of real and synthetic satellite imagery.

  1. Use of Multi-Resolution Wavelet Feature Pyramids for Automatic Registration of Multi-Sensor Imagery

    NASA Technical Reports Server (NTRS)

    Zavorin, Ilya; LeMoigne, Jacqueline

    2003-01-01

    The problem of image registration, or alignment of two or more images representing the same scene or object, has to be addressed in various disciplines that employ digital imaging. In the area of remote sensing, just like in medical imaging or computer vision, it is necessary to design robust, fast and widely applicable algorithms that would allow automatic registration of images generated by various imaging platforms at the same or different times, and that would provide sub-pixel accuracy. One of the main issues that needs to be addressed when developing a registration algorithm is what type of information should be extracted from the images being registered, to be used in the search for the geometric transformation that best aligns them. The main objective of this paper is to evaluate several wavelet pyramids that may be used both for invariant feature extraction and for representing images at multiple spatial resolutions to accelerate registration. We find that the band-pass wavelets obtained from the Steerable Pyramid due to Simoncelli perform better than two types of low-pass pyramids when the images being registered have a relatively small amount of nonlinear radiometric variation between them. Based on these findings, we propose a modification of a gradient-based registration algorithm that has recently been developed for medical data. We test the modified algorithm on several sets of real and synthetic satellite imagery.

  2. Colorizing SENTINEL-1 SAR Images Using a Variational Autoencoder Conditioned on SENTINEL-2 Imagery

    NASA Astrophysics Data System (ADS)

    Schmitt, M.; Hughes, L. H.; Körner, M.; Zhu, X. X.

    2018-05-01

    In this paper, we have shown an approach for the automatic colorization of SAR backscatter images, which are usually provided in the form of single-channel gray-scale imagery. Using a deep generative model proposed for the purpose of photograph colorization and a Lab-space-based SAR-optical image fusion formulation, we are able to predict artificial color SAR images, which disclose much more information to the human interpreter than the original SAR data. Future work will aim at further adaptation of the employed procedure to our special case of multi-sensor remote sensing imagery. Furthermore, we will investigate whether the low-level representations learned intrinsically by the deep network can be used for SAR image interpretation in an end-to-end manner.

  3. Spatial reasoning to determine stream network from LANDSAT imagery

    NASA Technical Reports Server (NTRS)

    Haralick, R. M.; Wang, S.; Elliott, D. B.

    1983-01-01

    In LANDSAT imagery, spectral and spatial information can be used to detect the drainage network as well as the relative elevation model in mountainous terrain. To do this, mixed information of material reflectance in the original LANDSAT imagery must be separated. From the material reflectance information, big visible rivers can be detected. From the topographic modulation information, ridges and valleys can be detected and assigned relative elevations. A complete elevation model can be generated by interpolating values for non-ridge and non-valley pixels. The small streams not detectable from material reflectance information can be located in the valleys with flow direction known from the elevation model. Finally, the flow directions of big visible rivers can be inferred by solving a consistent labeling problem based on a set of spatial reasoning constraints.

  4. BCI Competition IV – Data Set I: Learning Discriminative Patterns for Self-Paced EEG-Based Motor Imagery Detection

    PubMed Central

    Zhang, Haihong; Guan, Cuntai; Ang, Kai Keng; Wang, Chuanchu

    2012-01-01

    Detecting motor imagery activities versus non-control in brain signals is the basis of self-paced brain-computer interfaces (BCIs), but also poses a considerable challenge to signal processing due to the complex and non-stationary characteristics of motor imagery as well as non-control. This paper presents a self-paced BCI based on a robust learning mechanism that extracts and selects spatio-spectral features for differentiating multiple EEG classes. It also employs a non-linear regression and post-processing technique for predicting the time-series of class labels from the spatio-spectral features. The method was validated in the BCI Competition IV on Dataset I where it produced the lowest prediction error of class labels continuously. This report also presents and discusses analysis of the method using the competition data set. PMID:22347153

  5. Entropy-aware projected Landweber reconstruction for quantized block compressive sensing of aerial imagery

    NASA Astrophysics Data System (ADS)

    Liu, Hao; Li, Kangda; Wang, Bing; Tang, Hainie; Gong, Xiaohui

    2017-01-01

    A quantized block compressive sensing (QBCS) framework, which incorporates the universal measurement, quantization/inverse quantization, entropy coder/decoder, and iterative projected Landweber reconstruction, is summarized. Under the QBCS framework, this paper presents an improved reconstruction algorithm for aerial imagery, QBCS with entropy-aware projected Landweber (QBCS-EPL), which leverages the full-image sparse transform without a Wiener filter and an entropy-aware thresholding model for wavelet-domain image denoising. By analyzing the functional relation between the soft-thresholding factors and entropy-based bitrates for different quantization methods, the proposed model can effectively remove wavelet-domain noise of bivariate shrinkage and achieve better image reconstruction quality. For the overall performance of QBCS reconstruction, experimental results demonstrate that the proposed QBCS-EPL algorithm significantly outperforms several existing algorithms. With the experiment-driven methodology, the QBCS-EPL algorithm can obtain better reconstruction quality at a relatively moderate computational cost, which makes it more desirable for aerial imagery applications.
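    The thresholding operation at the core of such wavelet-domain denoising is the soft-thresholding (shrinkage) operator. The sketch below shows the operator itself, not the entropy-aware selection of the thresholding factor proposed in the paper:

```python
import numpy as np

def soft_threshold(x, t):
    """Soft-thresholding (shrinkage): zero coefficients smaller than t
    in magnitude and shrink the rest toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

coeffs = np.array([-3.0, -0.5, 0.2, 1.5, 4.0])
shrunk = soft_threshold(coeffs, 1.0)
# small coefficients (|x| <= 1.0) are zeroed; the rest shrink by 1.0
```

    In a projected Landweber loop this operator is applied to the wavelet coefficients after each gradient step, with the factor t tuned (here, via the entropy-based bitrate analysis) to the noise level.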

  6. Clear water radiances for atmospheric correction of coastal zone color scanner imagery

    NASA Technical Reports Server (NTRS)

    Gordon, H. R.; Clark, D. K.

    1981-01-01

    The possibility of computing the inherent sea surface radiance for regions of clear water from coastal zone color scanner (CZCS) imagery given only a knowledge of the local solar zenith angle is examined. The inherent sea surface radiance is related to the upwelling and downwelling irradiances just beneath the sea surface, and an expression is obtained for a normalized inherent sea surface radiance which is nearly independent of solar zenith angle for low phytoplankton pigment concentrations. An analysis of a data base consisting of vertical profiles of upwelled spectral radiance and pigment concentration, which was used in the development of the CZCS program, confirms the virtual constancy of the normalized inherent sea surface radiance at wavelengths of 520 and 550 nm for cases when the pigment concentration is less than 0.25 mg/cu m. A strategy is then developed for using the normalized inherent sea surface radiance in the atmospheric correction of CZCS imagery.
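    A first-order illustration of the normalization idea above: dividing the water-leaving radiance by the cosine of the solar zenith angle removes most of the illumination-geometry dependence. This sketch deliberately omits the atmospheric transmittance term that the full CZCS treatment includes:

```python
import math

def normalized_radiance(lw, solar_zenith_deg):
    """Divide water-leaving radiance by the cosine of the solar zenith
    angle. Atmospheric transmittance is deliberately omitted, so this
    is only the geometric part of the normalization."""
    return lw / math.cos(math.radians(solar_zenith_deg))

# The same clear-water target seen under two sun angles maps to nearly
# the same normalized value once the cosine factor is removed.
low_sun = normalized_radiance(0.5, 60.0)   # illustrative radiances
high_sun = normalized_radiance(1.0, 0.0)
```

    The near-constancy of this normalized quantity at 520 and 550 nm for low pigment concentrations is what makes the clear-water atmospheric correction strategy workable.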

  7. Fukunaga-Koontz transform based dimensionality reduction for hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Ochilov, S.; Alam, M. S.; Bal, A.

    2006-05-01

    The Fukunaga-Koontz transform (FKT) based technique offers some attractive properties for desired-class-oriented dimensionality reduction in hyperspectral imagery. In FKT, feature selection is performed by transforming into a new space where the feature classes have complementary eigenvectors. Dimensionality reduction based on this complementary eigenvector analysis can be described for two classes, desired class and background clutter, such that each basis function best represents one class while carrying the least amount of information from the second class. By selecting the few eigenvectors most relevant to the desired class, one can reduce the dimension of the hyperspectral cube. Since the FKT based technique reduces data size, it provides significant advantages for near real-time detection applications in hyperspectral imagery. Furthermore, the eigenvector selection approach significantly reduces the computational burden of the dimensionality reduction process. The performance of the proposed dimensionality reduction algorithm has been tested using a real-world hyperspectral dataset.
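    The complementary-eigenvector property described above follows from a short linear-algebra construction: after whitening the summed covariance, the two class covariances share eigenvectors and their eigenvalues sum to one. A sketch with synthetic data (the dimensions and class statistics are arbitrary):

```python
import numpy as np

def fkt(S1, S2):
    """Fukunaga-Koontz transform of two class covariance matrices.

    Whiten S1 + S2, then diagonalize the whitened S1. In the new
    basis both classes share eigenvectors and their eigenvalues sum
    to one, so the directions that represent one class best carry the
    least information about the other.
    """
    d, E = np.linalg.eigh(S1 + S2)
    P = E @ np.diag(1.0 / np.sqrt(d)) @ E.T   # whitening transform
    lam, V = np.linalg.eigh(P @ S1 @ P.T)
    return lam, V, P

# Synthetic two-class data with arbitrary statistics.
rng = np.random.default_rng(1)
A = rng.normal(size=(200, 4))
B = rng.normal(size=(200, 4)) @ np.diag([3.0, 1.0, 1.0, 0.2])
lam, V, P = fkt(np.cov(A, rowvar=False), np.cov(B, rowvar=False))
```

    Keeping only the eigenvectors with the largest `lam` (eigenvalues nearest 1) retains the directions most specific to the desired class, which is the dimensionality-reduction step the abstract describes.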

  8. Semantically enabled image similarity search

    NASA Astrophysics Data System (ADS)

    Casterline, May V.; Emerick, Timothy; Sadeghi, Kolia; Gosse, C. A.; Bartlett, Brent; Casey, Jason

    2015-05-01

    Georeferenced data of various modalities are increasingly available for intelligence and commercial use; however, effectively exploiting these sources demands a unified data space capable of capturing the unique contribution of each input. This work presents a suite of software tools for representing geospatial vector data and overhead imagery in a shared high-dimensional vector or "embedding" space that supports fused learning and similarity search across dissimilar modalities. While the approach is suitable for fusing arbitrary input types, including free text, the present work exploits the obvious but computationally difficult relationship between GIS and overhead imagery. GIS provides temporally smoothed but information-limited content, while overhead imagery provides an information-rich but temporally limited perspective. This processing framework includes some important extensions of concepts in the literature but, more critically, presents a means to accomplish them as a unified framework at scale on commodity cloud architectures.

  9. LANDSAT-1 data, its use in a soil survey program

    NASA Technical Reports Server (NTRS)

    Westin, F. C.; Frazee, C. J.

    1975-01-01

    The following applications of LANDSAT imagery were investigated: assistance in recognizing soil survey boundaries, low intensity soil surveys, and preparation of a base map for publishing thematic soils maps. The following characteristics of LANDSAT imagery were tested as they apply to the recognition of soil boundaries in South Dakota and western Minnesota: synoptic views due to the large areas covered, near-orthography and lack of distortion, flexibility of selecting the proper season, data recording in four parts of the spectrum, and the use of computer compatible tapes. A low intensity soil survey of Pennington County, South Dakota was completed in 1974. Low intensity inexpensive soil surveys can provide the data needed to evaluate agricultural land for the remaining counties until detailed soil surveys are completed. In using LANDSAT imagery as a base map for publishing thematic soil maps, the first step was to prepare a mosaic with 20 LANDSAT scenes from several late spring passes in 1973.

  10. Researching on the process of remote sensing video imagery

    NASA Astrophysics Data System (ADS)

    Wang, He-rao; Zheng, Xin-qi; Sun, Yi-bo; Jia, Zong-ren; Wang, He-zhan

    Low-altitude remotely sensed imagery from unmanned air vehicles has the advantages of high resolution, easy acquisition, and real-time access, and has been widely used in mapping, target identification, and other fields in recent years. However, the video images are often unstable, the targets move fast, and the background is complex, which makes such imagery difficult to process. In other fields, especially computer vision, research on video image processing is more extensive and is very helpful for processing low-altitude remotely sensed imagery. Accordingly, this paper analyzes and summarizes video image processing achievements from different fields, including research purposes, data sources, and the pros and cons of each technique, and explores the methods best suited to low-altitude remote sensing video imagery.

  11. STS-53 Discovery, OV-103, DOD Hercules digital electronic imagery equipment

    NASA Technical Reports Server (NTRS)

    1992-01-01

    STS-53 Discovery, Orbiter Vehicle (OV) 103, Department of Defense (DOD) mission Hand-held Earth-oriented Real-time Cooperative, User-friendly, Location, targeting, and Environmental System (Hercules) spaceborne experiment equipment is documented in this table top view. HERCULES is a joint NAVY-NASA-ARMY payload designed to provide real-time high resolution digital electronic imagery and geolocation (latitude and longitude determination) of earth surface targets of interest. HERCULES system consists of (from left to right): a specially modified GRID Systems portable computer mounted atop NASA developed Playback-Downlink Unit (PDU) and the Naval Research Laboratory (NRL) developed HERCULES Attitude Processor (HAP); the NASA-developed Electronic Still Camera (ESC) Electronics Box (ESCEB) including removable imagery data storage disks and various connecting cables; the ESC (a NASA modified Nikon F-4 camera) mounted atop the NRL HERCULES Inertial Measurement Unit (HIMU) containing the three-axis ring-laser gyro.

  12. STS-53 Discovery, OV-103, DOD Hercules digital electronic imagery equipment

    NASA Image and Video Library

    1992-04-22

    STS-53 Discovery, Orbiter Vehicle (OV) 103, Department of Defense (DOD) mission Hand-held Earth-oriented Real-time Cooperative, User-friendly, Location, targeting, and Environmental System (Hercules) spaceborne experiment equipment is documented in this table top view. HERCULES is a joint NAVY-NASA-ARMY payload designed to provide real-time high resolution digital electronic imagery and geolocation (latitude and longitude determination) of earth surface targets of interest. HERCULES system consists of (from left to right): a specially modified GRID Systems portable computer mounted atop NASA developed Playback-Downlink Unit (PDU) and the Naval Research Laboratory (NRL) developed HERCULES Attitude Processor (HAP); the NASA-developed Electronic Still Camera (ESC) Electronics Box (ESCEB) including removable imagery data storage disks and various connecting cables; the ESC (a NASA modified Nikon F-4 camera) mounted atop the NRL HERCULES Inertial Measurement Unit (HIMU) containing the three-axis ring-laser gyro.

  13. A Computer Vision System for Locating and Identifying Internal Log Defects Using CT Imagery

    Treesearch

    Dongping Zhu; Richard W. Conners; Frederick Lamb; Philip A. Araman

    1991-01-01

    A number of researchers have shown the ability of magnetic resonance imaging (MRI) and computer tomography (CT) imaging to detect internal defects in logs. However, if these devices are ever to play a role in the forest products industry, automatic methods for analyzing data from these devices must be developed. This paper reports research aimed at developing a...

  14. Image Understanding Research and Its Application to Cartography and Computer-Based Analysis of Aerial Imagery

    DTIC Science & Technology

    1983-09-01

    Report AI-TR-346, Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts, June 19... [A. Guzman-Arenas...] Testbed Coordinator, 415/859-4395, Artificial Intelligence Center, Computer Science and Technology Division. Prepared for: Defense Advanced Research Projects Agency. ... to support processing of aerial photographs for such military applications as cartography, intelligence, weapon guidance, and targeting. A key

  15. Glacial changes and glacier mass balance at Gran Campo Nevado, Chile during recent decades

    NASA Astrophysics Data System (ADS)

    Schneider, C.; Schnirch, M.; Kilian, R.; Acuña, C.; Casassa, G.

    2003-04-01

    Within the framework of the program Global Land Ice Measurements from Space (GLIMS), a glacier inventory of the Peninsula Muñoz Gamero in the southernmost Andes of Chile (53°S) has been generated using aerial photography and Landsat Thematic Mapper imagery. The peninsula is partly covered by the ice cap of the Gran Campo Nevado (GCN), including several outlet glaciers plus some minor glaciers and firn fields. Altogether the ice-covered areas sum to 260 km2. GCN forms the only major ice body between the Southern Patagonia Icefield and the Strait of Magellan. Its almost unique location in a zone affected year-round by the westerlies makes it a region of key interest for glacier and climate change studies of the west-wind zone of the Southern Hemisphere. A digital elevation model (DEM) was created for the area using aerial imagery from 1942, 1984, and 1998 and a Chilean topographic map (1:100 000). All information was incorporated into a GIS together with satellite imagery from 1986 and 2001. Delineation of glacier inflow from the central plateau of Gran Campo Nevado was accomplished using an automatic module for watershed delineation within the GIS. The GIS served to outline the extent of the present glaciation of the peninsula, as well as to evaluate the derived historic information. The comparison of historic and recent imagery reveals a dramatic glacier retreat during the last 60 years. Some of the outlet glaciers lost more than 20% of their total area during this period. In February and March 2000 an automatic weather station (AWS) was run on a nameless outlet glacier, unofficially called Glaciar Lengua, of the Gran Campo Nevado ice cap. From the computed energy balance, it was possible to derive degree-day factors for Glaciar Lengua. With data from the nearby AWS at the fjord coast (Bahia Bahamondes), we computed ablation for the summer seasons of 1999/2000, 2000/2001 and 2001/2002. Ablation at 450 m a.s.l. sums to about 7 m in 1999/2000, 5.5 m in 2000/2001 and 8.5 m in 2001/2002. This is in excellent accordance (+/-4%) with measurements at 12 m-long ablation stakes that have been drilled into the glacier. The DEM and a GIS layer defining glacier boundaries provide the basis for the distributed calculation of glacier mass balance. It was computed from the degree-day model by applying elevation-corrected temperature and precipitation data to each grid point of the DEM. Furthermore, weather station data from Punta Arenas and Faro Evangelistas since 1905 make it possible to estimate the mass balance of Glaciar Lengua for almost one century. The derived mass balance record indicates a slightly negative mass balance during most of the 20th century. This is in excellent agreement with the result obtained from aerial photography and GIS. The work was conducted as part of the international and interdisciplinary working group “Gran Campo Nevado” and was supported by the German Research Foundation (DFG).
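    The degree-day approach used above reduces to a very small model: ablation is proportional to the sum of positive daily mean temperatures. A sketch with illustrative values (the degree-day factor and temperatures below are not those derived for Glaciar Lengua):

```python
import numpy as np

def degree_day_ablation(daily_mean_temp_c, ddf_mm_per_degday):
    """Degree-day melt model: ablation proportional to the sum of
    positive daily mean temperatures (positive degree days)."""
    pdd = np.maximum(np.asarray(daily_mean_temp_c, dtype=float), 0.0).sum()
    return ddf_mm_per_degday * pdd   # mm water equivalent

# Illustrative daily means (deg C) and degree-day factor.
temps = [4.0, 6.5, -1.0, 3.5, 0.0]
melt = degree_day_ablation(temps, ddf_mm_per_degday=7.0)
# 7.0 * (4.0 + 6.5 + 3.5) = 98.0 mm w.e.
```

    Applying this per grid cell with elevation-corrected temperatures, as the abstract describes, turns the point model into a distributed mass-balance calculation.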

  16. Feasibility of approaches combining sensor and source features in brain-computer interface.

    PubMed

    Ahn, Minkyu; Hong, Jun Hee; Jun, Sung Chan

    2012-02-15

    Brain-computer interface (BCI) provides a new channel for communication between brain and computers through brain signals. Cost-effective EEG provides good temporal resolution, but its spatial resolution is poor and sensor information is blurred by inherent noise. To overcome these issues, spatial filtering and feature extraction techniques have been developed. Source imaging, the transformation of sensor signals into the source space through a source localizer, has gained attention as a new approach for BCI. It has been reported that source imaging yields some improvement in BCI performance. However, there exists no thorough investigation of how source imaging information overlaps with, and is complementary to, sensor information. We hypothesize that information from the source space may overlap with, as well as be exclusive to, information from the sensor space. If this hypothesis is true, more information can be extracted from the sensor and source spaces combined, thereby contributing to more accurate BCI systems. In this work, features from each space (sensor or source), and two strategies combining sensor and source features, are assessed. The information distribution among the sensor, source, and combined spaces is discussed through a Venn diagram for 18 motor imagery datasets. An additional 5 motor imagery datasets from the BCI Competition III site were examined. The results showed that the addition of source information yielded about 3.8% classification improvement for the 18 motor imagery datasets and an average accuracy of 75.56% for the BCI Competition data. Our proposed approach is promising, and improved performance may be possible with a better head model. Copyright © 2011 Elsevier B.V. All rights reserved.

  17. Rapid extraction of image texture by co-occurrence using a hybrid data structure

    NASA Astrophysics Data System (ADS)

    Clausi, David A.; Zhao, Yongping

    2002-07-01

    Calculation of co-occurrence probabilities is a popular method for determining texture features within remotely sensed digital imagery. Typically, the co-occurrence features are calculated by using a grey level co-occurrence matrix (GLCM) to store the co-occurring probabilities. Statistics are applied to the probabilities in the GLCM to generate the texture features. This method is computationally intensive since the matrix is usually sparse, leading to many unnecessary calculations involving zero probabilities when applying the statistics. An improvement on the GLCM method is to utilize a grey level co-occurrence linked list (GLCLL) to store only the non-zero co-occurring probabilities. The GLCLL suffers because, to achieve the preferred computational speed, the list must be kept sorted. An improvement on the GLCLL is to utilize a grey level co-occurrence hybrid structure (GLCHS) based on an integrated hash table and linked list approach. Texture features obtained using this technique are identical to those obtained using the GLCM and GLCLL. The GLCHS method is implemented using the C language in a Unix environment. Based on a Brodatz test image, the GLCHS method is demonstrated to be a superior technique when compared across various window sizes and grey level quantizations. The GLCHS method required, on average, 33.4% (σ = 3.08%) of the computational time required by the GLCLL. Significant computational gains are made using the GLCHS method.
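    The hash-table idea behind the GLCHS can be sketched with a Python dict, which stores only the non-zero co-occurring pairs so that the texture statistics never touch a zero probability. This illustrates the principle, not the authors' C implementation:

```python
from collections import defaultdict
import numpy as np

def cooccurrence_probs(img, dr, dc):
    """Sparse grey-level co-occurrence probabilities for one offset.

    A dict plays the role of the hash table: only grey-level pairs
    that actually co-occur in the window are stored.
    """
    counts = defaultdict(int)
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                counts[(int(img[r, c]), int(img[r2, c2]))] += 1
    total = sum(counts.values())
    return {pair: n / total for pair, n in counts.items()}

def contrast(probs):
    """One classic co-occurrence texture feature, computed by iterating
    over the non-zero entries only."""
    return sum(p * (i - j) ** 2 for (i, j), p in probs.items())

img = np.array([[0, 0, 1],
                [0, 1, 1],
                [2, 2, 2]])
probs = cooccurrence_probs(img, dr=0, dc=1)   # horizontal neighbours
```

    With many grey levels the full GLCM is mostly zeros, so iterating only the stored pairs is where the reported speedup comes from.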

  18. 3D laser scanning and modelling of the Dhow heritage for the Qatar National Museum

    NASA Astrophysics Data System (ADS)

    Wetherelt, A.; Cooper, J. P.; Zazzaro, C.

    2014-08-01

    Curating boats can be difficult. They are complex structures, often demanding to conserve whether in or out of the water; they are usually large, difficult to move on land, and demanding of gallery space. Communicating life on board to a visiting public in the terra firma context of a museum can also be challenging: boats in their native environment are inherently dynamic artifacts, whereas in a museum they can be static and divorced from the maritime context that might inspire engagement. New technologies offer new approaches to these problems. 3D laser scanning and digital modelling offer museums a multifaceted means of recording, monitoring, studying and communicating the watercraft in their care. In this paper we describe the application of 3D laser scanning and subsequent digital modelling to vessels in the museum's dhow collection. Laser scans were further developed using computer-generated imagery (CGI) modelling techniques to produce photorealistic 3D digital models for development into interactive, media-based museum displays. The scans were also used to generate 2D naval lines and orthographic drawings as a lasting curatorial record of the dhows held by the National Museum of Qatar.

  19. Development of hierarchical structures for actions and motor imagery: a constructivist view from synthetic neuro-robotics study.

    PubMed

    Nishimoto, Ryunosuke; Tani, Jun

    2009-07-01

    The current paper presents a neuro-robotics experiment on developmental learning of goal-directed actions. The robot was trained to predict the visuo-proprioceptive flow of achieving a set of goal-directed behaviors through iterative tutor training processes. The learning employed a dynamic neural network model characterized by its multiple time-scale dynamics. The experimental results showed that functional hierarchical structures emerge through stages of development, in which behavior primitives are generated in earlier stages and sequences of primitives for achieving goals appear in later stages. It was also observed that motor imagery is generated in earlier stages than actual behaviors. Our claim that manipulable inner representations emerge through sensory-motor interactions corresponds to Piaget's constructivist view.

  20. Variations on a Theme.

    ERIC Educational Resources Information Center

    Vitali, Julius

    1990-01-01

    Explains an experimental photographic technique starting with a realistic photograph. Using various media (oil painting, video/computer photography, and multiprint imagery) the artist changes the photograph's compositional elements. Outlines the phases of this evolutionary process. Illustrates four images created by the technique. (DB)

  1. Mobile Robot Self-Localization by Matching Range Maps Using a Hausdorff Measure

    NASA Technical Reports Server (NTRS)

    Olson, C. F.

    1997-01-01

    This paper examines techniques for a mobile robot to perform self-localization in natural terrain by comparing a dense range map computed from stereo imagery to a range map in a known frame of reference.

  2. Translation of EEG Spatial Filters from Resting to Motor Imagery Using Independent Component Analysis

    PubMed Central

    Wang, Yijun; Wang, Yu-Te; Jung, Tzyy-Ping

    2012-01-01

    Electroencephalogram (EEG)-based brain-computer interfaces (BCIs) often use spatial filters to improve signal-to-noise ratio of task-related EEG activities. To obtain robust spatial filters, large amounts of labeled data, which are often expensive and labor-intensive to obtain, need to be collected in a training procedure before online BCI control. Several studies have recently developed zero-training methods using a session-to-session scenario in order to alleviate this problem. To our knowledge, a state-to-state translation, which applies spatial filters derived from one state to another, has never been reported. This study proposes a state-to-state, zero-training method to construct spatial filters for extracting EEG changes induced by motor imagery. Independent component analysis (ICA) was separately applied to the multi-channel EEG in the resting and the motor imagery states to obtain motor-related spatial filters. The resultant spatial filters were then applied to single-trial EEG to differentiate left- and right-hand imagery movements. On a motor imagery dataset collected from nine subjects, comparable classification accuracies were obtained by using ICA-based spatial filters derived from the two states (motor imagery: 87.0%, resting: 85.9%), which were both significantly higher than the accuracy achieved by using monopolar scalp EEG data (80.4%). The proposed method considerably increases the practicality of BCI systems in real-world environments because it is less sensitive to electrode misalignment across different sessions or days and does not require annotated pilot data to derive spatial filters. PMID:22666377
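
    The state-to-state idea, deriving unmixing filters from one state and applying them to another, can be sketched with a minimal symmetric FastICA on synthetic data. Everything here is an illustrative assumption (the 2-channel mixing matrix, Laplacian sources, and segment lengths stand in for real EEG and the paper's ICA pipeline); the point is only that filters estimated on a "resting" segment still unmix a later "imagery" segment because the mixing (electrode geometry) is unchanged:

```python
import numpy as np

def fastica_unmix(X, n_iter=300, seed=0):
    """Minimal symmetric FastICA (tanh nonlinearity).

    Returns an unmixing matrix for multichannel data X of shape
    (channels, samples)."""
    X = X - X.mean(axis=1, keepdims=True)
    cov = np.cov(X)
    d, E = np.linalg.eigh(cov)
    whiten = E @ np.diag(d ** -0.5) @ E.T      # whitening matrix
    Z = whiten @ X
    n, m = Z.shape
    W = np.linalg.svd(np.random.default_rng(seed).standard_normal((n, n)))[0]
    for _ in range(n_iter):
        G = np.tanh(W @ Z)
        W = (G @ Z.T) / m - np.diag((1 - G ** 2).mean(axis=1)) @ W
        U, _, Vt = np.linalg.svd(W)            # symmetric decorrelation
        W = U @ Vt
    return W @ whiten

# "resting" and "imagery" segments share the same mixing (electrode geometry)
rng = np.random.default_rng(42)
S = rng.laplace(size=(2, 4000))                # two super-Gaussian sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])         # fixed mixing matrix
X = A @ S
rest, imagery = X[:, :2000], X[:, 2000:]

W = fastica_unmix(rest)                        # filters learned on resting state
recovered = W @ (imagery - imagery.mean(axis=1, keepdims=True))
```

    The recovered components on the imagery segment correlate strongly with the true sources, mirroring the paper's finding that resting-state filters transfer to the motor imagery state.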

  3. Land cover classification in multispectral imagery using clustering of sparse approximations over learned feature dictionaries

    DOE PAGES

    Moody, Daniela I.; Brumby, Steven P.; Rowland, Joel C.; ...

    2014-12-09

    We present results from an ongoing effort to extend neuromimetic machine vision algorithms to multispectral data using adaptive signal processing combined with compressive sensing and machine learning techniques. Our goal is to develop a robust classification methodology that will allow for automated discretization of the landscape into distinct units based on attributes such as vegetation, surface hydrological properties, and topographic/geomorphic characteristics. We use a Hebbian learning rule to build spectral-textural dictionaries that are tailored for classification. We learn our dictionaries from millions of overlapping multispectral image patches and then use a pursuit search to generate classification features. Land cover labels are automatically generated using unsupervised clustering of sparse approximations (CoSA). We demonstrate our method on multispectral WorldView-2 data from a coastal plain ecosystem in Barrow, Alaska. We explore learning from both raw multispectral imagery and normalized band difference indices. We explore a quantitative metric to evaluate the spectral properties of the clusters in order to potentially aid in assigning land cover categories to the cluster labels. In this study, our results suggest CoSA is a promising approach to unsupervised land cover classification in high-resolution satellite imagery.
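
    A drastically simplified sketch of the CoSA pipeline: build a dictionary from image samples, run a sparse pursuit (here a one-atom approximation), then cluster the sparse approximations without supervision. The synthetic two-class "scene", the pixel-sampled dictionary, and the tiny 2-means loop are illustrative assumptions standing in for the paper's Hebbian-learned dictionaries and full pursuit search:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy "multispectral" scene: 4 bands, 32x32 pixels, two latent land-cover types
H = W = 32
labels_true = (np.arange(H)[:, None] < H // 2) * np.ones((H, W), dtype=int)
means = np.array([[0.1, 0.3, 0.5, 0.2],    # cover type 0 spectrum
                  [0.6, 0.2, 0.1, 0.7]])   # cover type 1 spectrum
img = means[labels_true] + 0.02 * rng.standard_normal((H, W, 4))
X = img.reshape(-1, 4)

# 1. dictionary built here by sampling pixels (a stand-in for the
#    Hebbian-learned spectral-textural dictionary of the paper)
D = X[rng.choice(len(X), 16, replace=False)]
D /= np.linalg.norm(D, axis=1, keepdims=True)

# 2. one-atom pursuit: keep only the best-matching atom per pixel
resp = X @ D.T
best = resp.argmax(axis=1)
codes = np.zeros_like(resp)
codes[np.arange(len(X)), best] = resp[np.arange(len(X)), best]
approx = codes @ D                         # sparse approximation of each pixel

# 3. unsupervised clustering of the sparse approximations (tiny 2-means)
centers = approx[[0, -1]]                  # init from two scene corners
for _ in range(10):
    assign = ((approx[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
    centers = np.array([approx[assign == j].mean(0) for j in range(2)])
labels = assign.reshape(H, W)
```

    On this clean toy scene the unsupervised cluster map recovers the two cover types almost perfectly (up to label permutation).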

  4. True and false memory for colour names versus actual colours: support for the visual distinctiveness heuristic in memory for colour information.

    PubMed

    Eslick, Andrea N; Kostic, Bogdan; Cleary, Anne M

    2010-06-01

    In a colour variation of the Deese-Roediger-McDermott (DRM) false memory paradigm, participants studied lists of words critically related to a nonstudied colour name (e.g., "blood, cherry, scarlet, rouge ... "); they later showed false memory for the critical colour name (e.g., "red"). Two additional experiments suggest that participants generate colour imagery in response to such colour-related DRM lists. First, participants claim to experience colour imagery more often following colour-related than standard non-colour-related DRM lists; they also rate their colour imagery as more vivid following colour-related lists. Second, participants exhibit facilitative priming for critical colours in a dot selection task that follows words in the colour-related DRM list, suggesting that colour-related DRM lists prime participants for the actual critical colours themselves. Despite these findings, false memory for critical colour names does not extend to the actual colours themselves (font colours). Rather than leading to source confusion about which colours were self-generated and which were studied, presenting the study lists in varied font colours actually worked to reduce false memory overall. Results are interpreted within the framework of the visual distinctiveness hypothesis.

  5. Detecting photovoltaic solar panels using hyperspectral imagery and estimating solar power production

    NASA Astrophysics Data System (ADS)

    Czirjak, Daniel

    2017-04-01

    Remote sensing platforms have consistently demonstrated the ability to detect, and in some cases identify, specific targets of interest, and photovoltaic solar panels are shown to have a unique spectral signature that is consistent across multiple manufacturers and construction methods. Solar panels are proven to be detectable in hyperspectral imagery using common statistical target detection methods such as the adaptive cosine estimator, and false alarms can be mitigated through the use of a spectral verification process that eliminates pixels lacking the key features of the photovoltaic solar panel reflectance spectrum. The normalized solar panel index is described and is a key component in the false-alarm mitigation process. After spectral verification, these solar panel arrays are confirmed on openly available literal imagery and can be measured using numerous open-source algorithms and tools. The measurements allow for the assessment of overall solar power generation capacity using an equation that accounts for solar insolation, the area of the solar panels, and the efficiency of the panels' conversion of solar energy to power. Using a known location with readily available information, the methods outlined in this paper estimate the power generation capabilities to within 6% of the rated power.
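
    The adaptive cosine estimator mentioned above has a standard closed form, ACE(x) = (sᵀΣ⁻¹x)² / ((sᵀΣ⁻¹s)(xᵀΣ⁻¹x)) after background-mean removal, with Σ the background covariance. A minimal sketch on synthetic spectra (the band count, spectra, and noise levels are invented for illustration, not taken from the paper):

```python
import numpy as np

def ace_scores(X, target):
    """Adaptive cosine estimator scores for pixels X (N, bands)."""
    mu = X.mean(axis=0)
    Xc = X - mu
    Sigma = np.cov(Xc, rowvar=False) + 1e-8 * np.eye(X.shape[1])
    Si = np.linalg.inv(Sigma)
    s = target - mu
    num = (Xc @ Si @ s) ** 2
    den = (s @ Si @ s) * np.einsum('ij,jk,ik->i', Xc, Si, Xc)
    return num / den

rng = np.random.default_rng(0)
bg_mean = np.array([0.3, 0.4, 0.5, 0.4, 0.2])      # background spectrum
panel = np.array([0.1, 0.1, 0.6, 0.7, 0.8])        # "solar panel" signature
pixels = bg_mean + 0.05 * rng.standard_normal((500, 5))
pixels[:5] = panel + 0.01 * rng.standard_normal((5, 5))  # implant 5 targets

scores = ace_scores(pixels, panel)
```

    ACE scores lie in [0, 1] (a squared cosine in whitened space), so a single threshold separates the implanted panel pixels from the background here.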

  6. Defining habitat covariates in camera-trap based occupancy studies

    PubMed Central

    Niedballa, Jürgen; Sollmann, Rahel; Mohamed, Azlan bin; Bender, Johannes; Wilting, Andreas

    2015-01-01

    In species-habitat association studies, both the type and spatial scale of habitat covariates need to match the ecology of the focal species. We assessed the potential of high-resolution satellite imagery for generating habitat covariates using camera-trapping data from Sabah, Malaysian Borneo, within an occupancy framework. We tested the predictive power of covariates generated from satellite imagery at different resolutions and extents (focal patch sizes, 10–500 m around sample points) on estimates of occupancy patterns of six small to medium sized mammal species/species groups. High-resolution land cover information had considerably more model support for small, patchily distributed habitat features, whereas it had no advantage for large, homogeneous habitat features. A comparison of different focal patch sizes including remote sensing data and an in-situ measure showed that patches with a 50-m radius had most support for the target species. Thus, high-resolution satellite imagery proved to be particularly useful in heterogeneous landscapes, and can be used as a surrogate for certain in-situ measures, reducing field effort in logistically challenging environments. Additionally, remotely sensed data provide more flexibility in defining appropriate spatial scales, which we show to impact estimates of wildlife-habitat associations. PMID:26596779
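
    Extracting a covariate over a circular focal patch of a chosen radius, the spatial-scale knob discussed above, reduces to a mask-and-average over a land-cover raster. A minimal sketch (the raster values, 10 m pixel size, and patch radius are illustrative assumptions, not the study's data):

```python
import numpy as np

def focal_mean(raster, row, col, radius_px):
    """Mean raster value inside a circular focal patch centred on a
    camera-trap location (all quantities in pixel units)."""
    rr, cc = np.ogrid[:raster.shape[0], :raster.shape[1]]
    mask = (rr - row) ** 2 + (cc - col) ** 2 <= radius_px ** 2
    return raster[mask].mean()

# toy land-cover raster: fraction of forest cover per 10 m pixel
cover = np.zeros((100, 100))
cover[:, 50:] = 1.0                        # forest in the eastern half

cov_forest = focal_mean(cover, 50, 80, 5)  # 50 m patch inside the forest
cov_open = focal_mean(cover, 50, 20, 5)    # 50 m patch in the open
cov_edge = focal_mean(cover, 50, 50, 5)    # 50 m patch at the forest edge
```

    Re-running the extraction with different `radius_px` values is how the 10–500 m focal patch comparison above would be set up.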

  7. New Tools for Viewing Spectrally and Temporally-Rich Remote Sensing Imagery

    NASA Astrophysics Data System (ADS)

    Bradley, E. S.; Toomey, M. P.; Roberts, D. A.; Still, C. J.

    2010-12-01

    High-frequency, temporally extensive remote sensing datasets (GOES imagery every 30 minutes; Santa Cruz Island webcam imagery every 10 minutes for nearly 5 years) and airborne imaging spectrometry (AVIRIS, with 224 spectral bands) present exciting opportunities for education, synthesis, and analysis. However, the large file volume can make holistic review and exploration difficult. In this research, we explore two options for visualization: (1) a web-based portal for time-series analysis, PanOpt, and (2) Google Earth-based timestamped image overlays. PanOpt is an interactive website (http://zulu.geog.ucsb.edu/panopt/) which integrates high-frequency (GOES) and multispectral (MODIS) satellite imagery with webcam ground-based repeat photography. Side-by-side comparison of satellite imagery with webcam images supports analysis of atmospheric and environmental phenomena. In this proof of concept, we have integrated four years of imagery for a multi-view FogCam on Santa Cruz Island off the coast of Southern California with two years of GOES-11 and four years of MODIS Aqua imagery subsets for the area (14,000 km2). From the PHP-based website, users can search the data (date, time of day, etc.), specify timestep and display size, and then view the image stack as animations or in matrix form. Extracted metrics for regions of interest (ROIs) can be viewed in different formats, including time-series and scatter plots. Through click and mouseover actions on the hyperlink-enabled data points, users can view the corresponding images. This directly melds the quantitative and qualitative aspects and could be particularly effective for education as well as anomaly interpretation. We have also extended this project to Google Earth with timestamped GOES and MODIS image overlays, which can be controlled using the temporal slider and linked to a screen chart of ancillary meteorological data.
The automated ENVI/IDL script for generating KMZ overlays was also applied for generating same-day visualization of AVIRIS acquisitions as part of the Gulf of Mexico oil spill response. This supports location-focused imagery review and synthesis, which is critical for successfully imaging moving targets, such as oil slicks.

  8. Volumetric Forest Change Detection Through Vhr Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Akca, Devrim; Stylianidis, Efstratios; Smagas, Konstantinos; Hofer, Martin; Poli, Daniela; Gruen, Armin; Sanchez Martin, Victor; Altan, Orhan; Walli, Andreas; Jimeno, Elisa; Garcia, Alejandro

    2016-06-01

    Quick and economical ways of detecting of planimetric and volumetric changes of forest areas are in high demand. A research platform, called FORSAT (A satellite processing platform for high resolution forest assessment), was developed for the extraction of 3D geometric information from VHR (very-high resolution) imagery from satellite optical sensors and automatic change detection. This 3D forest information solution was developed during a Eurostars project. FORSAT includes two main units. The first one is dedicated to the geometric and radiometric processing of satellite optical imagery and 2D/3D information extraction. This includes: image radiometric pre-processing, image and ground point measurement, improvement of geometric sensor orientation, quasi-epipolar image generation for stereo measurements, digital surface model (DSM) extraction by using a precise and robust image matching approach specially designed for VHR satellite imagery, generation of orthoimages, and 3D measurements in single images using mono-plotting and in stereo images as well as triplets. FORSAT supports most of the VHR optical imagery commonly used for civil applications: IKONOS, OrbView-3, SPOT-5 HRS, SPOT-5 HRG, QuickBird, GeoEye-1, WorldView-1/2, Pléiades 1A/1B, SPOT 6/7, and sensors of similar type to be expected in the future. The second unit of FORSAT is dedicated to 3D surface comparison for change detection. It allows users to import digital elevation models (DEMs), align them using an advanced 3D surface matching approach and calculate the 3D differences and volume changes between epochs. To this end our 3D surface matching method LS3D is being used. FORSAT is a single source and flexible forest information solution with a very competitive price/quality ratio, allowing expert and non-expert remote sensing users to monitor forests in three and four dimensions from VHR optical imagery for many forest information needs.
The capacity and benefits of FORSAT have been tested in six case studies located in Austria, Cyprus, Spain, Switzerland and Turkey, using optical data from different sensors and with the purpose of monitoring forests with different geometric characteristics. The validation run on the Cyprus dataset is reported and discussed.

  9. Wavelet subband coding of computer simulation output using the A++ array class library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradley, J.N.; Brislawn, C.M.; Quinlan, D.J.

    1995-07-01

    The goal of the project is to produce utility software for off-line compression of existing data and library code that can be called from a simulation program for on-line compression of data dumps as the simulation proceeds. Naturally, we would like the amount of CPU time required by the compression algorithm to be small in comparison to the requirements of typical simulation codes. We also want the algorithm to accommodate a wide variety of smooth, multidimensional data types. For these reasons, the subband vector quantization (VQ) approach employed in earlier work has been replaced by a scalar quantization (SQ) strategy using a bank of almost-uniform scalar subband quantizers in a scheme similar to that used in the FBI fingerprint image compression standard. This eliminates the considerable computational burdens of training VQ codebooks for each new type of data and performing nearest-vector searches to encode the data. The comparison of subband VQ and SQ algorithms in earlier work indicated that, in practice, there is relatively little additional gain from using vector as opposed to scalar quantization on DWT subbands, even when the source imagery is from a very homogeneous population, and our subjective experience with synthetic computer-generated data supports this stance. It appears that a careful study is needed of the tradeoffs involved in selecting scalar vs. vector subband quantization, but such an analysis is beyond the scope of this paper. Our present work is focused on the problem of generating wavelet transform/scalar quantization (WSQ) implementations that can be ported easily between different hardware environments. This is an extremely important consideration given the great profusion of different high-performance computing architectures available, the high cost associated with learning how to map algorithms effectively onto a new architecture, and the rapid rate of evolution in the world of high-performance computing.
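
    The wavelet transform/scalar quantization (WSQ) idea can be sketched with a one-level orthonormal Haar transform and a uniform scalar quantizer per subband. The Haar filter and the step sizes below are illustrative stand-ins for the paper's filter bank and almost-uniform quantizers; the point is that each subband gets its own step, and the reconstruction error stays bounded by the steps:

```python
import numpy as np

def haar2d(a):
    """One level of an orthonormal 2-D Haar transform (even-sized input)."""
    r = np.sqrt(2.0)
    lo = (a[:, ::2] + a[:, 1::2]) / r          # columns: low-pass half
    hi = (a[:, ::2] - a[:, 1::2]) / r          # columns: high-pass half
    cols = np.hstack([lo, hi])
    lo2 = (cols[::2] + cols[1::2]) / r         # rows: low-pass half
    hi2 = (cols[::2] - cols[1::2]) / r         # rows: high-pass half
    return np.vstack([lo2, hi2])

def ihaar2d(c):
    """Inverse of haar2d."""
    r = np.sqrt(2.0)
    h, w = c.shape[0] // 2, c.shape[1] // 2
    cols = np.empty(c.shape)
    cols[::2] = (c[:h] + c[h:]) / r
    cols[1::2] = (c[:h] - c[h:]) / r
    a = np.empty_like(cols)
    a[:, ::2] = (cols[:, :w] + cols[:, w:]) / r
    a[:, 1::2] = (cols[:, :w] - cols[:, w:]) / r
    return a

def quantize_subbands(c, steps):
    """Uniform scalar quantization, one step size per subband (LL, LH, HL, HH)."""
    h, w = c.shape[0] // 2, c.shape[1] // 2
    out = c.copy()
    bands = [(slice(0, h), slice(0, w)),        # LL
             (slice(0, h), slice(w, None)),     # LH
             (slice(h, None), slice(0, w)),     # HL
             (slice(h, None), slice(w, None))]  # HH
    for (rs, cs), q in zip(bands, steps):
        out[rs, cs] = np.round(c[rs, cs] / q) * q
    return out

rng = np.random.default_rng(0)
data = rng.random((16, 16))
steps = [0.05, 0.1, 0.1, 0.2]                  # finer step for the LL band
recon = ihaar2d(quantize_subbands(haar2d(data), steps))
```

    Each reconstructed pixel combines one coefficient from each subband with weight ±1/2, so the worst-case pixel error is half the sum of the per-band half-steps, here 0.5 × (0.025 + 0.05 + 0.05 + 0.1) = 0.1125.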

  10. The Generation and Maintenance of Visual Mental Images: Evidence from Image Type and Aging

    ERIC Educational Resources Information Center

    De Beni, Rossana; Pazzaglia, Francesca; Gardini, Simona

    2007-01-01

    Imagery is a multi-componential process involving different mental operations. This paper addresses whether separate processes underlie the generation, maintenance and transformation of mental images or whether these cognitive processes rely on the same mental functions. We also examine the influence of age on these mental operations for…

  11. Source misattributions and false recognition errors: examining the role of perceptual resemblance and imagery generation processes.

    PubMed

    Foley, Mary Ann; Bays, Rebecca Brooke; Foy, Jeffrey; Woodfield, Mila

    2015-01-01

    In three experiments, we examine the extent to which participants' memory errors are affected by the perceptual features of an encoding series and imagery generation processes. Perceptual features were examined by manipulating the features associated with individual items as well as the relationships among items. An encoding instruction manipulation was included to examine the effects of explicit requests to generate images. In all three experiments, participants falsely claimed to have seen pictures of items presented as words, committing picture misattribution errors. These misattribution errors were exaggerated when the perceptual resemblance between pictures and images was relatively high (Experiment 1) and when explicit requests to generate images were omitted from encoding instructions (Experiments 1 and 2). When perceptual cues made the thematic relationships among items salient, the level and pattern of misattribution errors were also affected (Experiments 2 and 3). Results address alternative views about the nature of internal representations resulting in misattribution errors and refute the idea that these errors reflect only participants' general impressions or beliefs about what was seen.

  12. Meteorological satellite product support and research for project GALE

    NASA Technical Reports Server (NTRS)

    Velden, Christopher S.; Smith, William L.; Achtor, Thomas H.; Menzel, W. Paul

    1988-01-01

    This participation in the Genesis of Atlantic Lows Experiment (GALE) focused on three main areas: (1) real-time support of the field phase, centered on a McIDAS workstation; (2) satellite data collection, archive, product generation, and dissemination; and (3) research into satellite rainfall estimation and data assimilation. Accomplishments include production of a videotape of animated GOES satellite imagery, production of an atlas of GOES satellite imagery, production of a set of 12-hour interval analyses, research into 4-D data assimilation, and production of a set of satellite-estimated rainfall maps.

  13. The American Dream: A Crossover of Community Imagery.

    ERIC Educational Resources Information Center

    Metelka, Charles J.

    Even as conceptual models, distinctions between "rural" and "urban" have become blurred--by changes in transportation, telecommunications, computer technology, business expertise, formal education, health care, and citizenry expectations/knowledge. Two typologies describing future trends and incorporating changes in rural/urban…

  14. Earth Science

    NASA Image and Video Library

    1992-02-27

    This map shows the presence of water vapor over global oceans. The imagery was produced by combining Special Sensor Microwave Imager measurements and computer models. This data will help scientists better understand how weather systems move water vapor from the tropics toward the poles producing precipitation.

  15. Moving Stimuli Facilitate Synchronization But Not Temporal Perception

    PubMed Central

    Silva, Susana; Castro, São Luís

    2016-01-01

    Recent studies have shown that a moving visual stimulus (e.g., a bouncing ball) facilitates synchronization compared to a static stimulus (e.g., a flashing light), and that it can even be as effective as an auditory beep. We asked a group of participants to perform different tasks with four stimulus types: beeps, siren-like sounds, visual flashes (static) and bouncing balls. First, participants performed synchronization with isochronous sequences (stimulus-guided synchronization), followed by a continuation phase in which the stimulus was internally generated (imagery-guided synchronization). Then they performed a perception task, in which they judged whether the final part of a temporal sequence was compatible with the previous beat structure (stimulus-guided perception). Similar to synchronization, an imagery-guided variant was added, in which sequences contained a gap in between (imagery-guided perception). Balls outperformed flashes and matched beeps (powerful ball effect) in stimulus-guided synchronization but not in perception (stimulus- or imagery-guided). In imagery-guided synchronization, performance accuracy decreased for beeps and balls, but not for flashes and sirens. Our findings suggest that the advantages of moving visual stimuli over static ones are grounded in action rather than perception, and they support the hypothesis that the sensorimotor coupling mechanisms for auditory (beeps) and moving visual stimuli (bouncing balls) overlap. PMID:27909419

  17. SPoRT Participation in the GOES-R and JPSS Proving Grounds

    NASA Technical Reports Server (NTRS)

    Jedlovec, Gary; Fuell, Kevin; Smith, Matthew

    2013-01-01

    For the last several years, the NASA Short-term Prediction Research and Transition (SPoRT) project has been working with the various algorithm working groups and science teams to demonstrate the utility of future operational sensors for GOES-R and the suite of instruments for the JPSS observing platforms. For GOES-R, imagery and products have been developed from polar-orbiting sensors such as MODIS and geostationary observations from SEVIRI, simulated imagery, enhanced products derived from existing GOES satellites, and data from ground-based observing systems to generate pseudo or proxy products for the ABI and GLM instruments. The suite of products includes GOES-POES basic and RGB hybrid imagery, total lightning flash products, quantitative precipitation estimates, and convective initiation products. SPoRT is using imagery and products from VIIRS, CrIS, ATMS, and OMPS to show the utility of data and products from their operational counterparts on JPSS. The products include VIIRS imagery in swath form, the GOES-POES hybrid, a suite of RGB products including the air mass RGB using water vapor and ozone channels from CrIS, and several DNB products. Over a dozen SPoRT collaborative WFOs and several National Centers are involved in an intensive evaluation of the operational utility of these products.

  18. Classification of Clouds in Satellite Imagery Using Adaptive Fuzzy Sparse Representation.

    PubMed

    Jin, Wei; Gong, Fei; Zeng, Xingbin; Fu, Randi

    2016-12-16

    Automatic cloud detection and classification using satellite cloud imagery have various meteorological applications such as weather forecasting and climate monitoring, and cloud pattern analysis has recently become a research hotspot. Since satellites sense clouds remotely from space, and different cloud types often overlap and convert into each other, satellite cloud imagery carries inherent fuzziness and uncertainty. Satellite observation is also susceptible to noise, while traditional cloud classification methods are sensitive to noise and outliers, making it hard for them to achieve reliable results. To deal with these problems, a satellite cloud classification method using adaptive fuzzy sparse representation-based classification (AFSRC) is proposed. Firstly, by defining adaptive parameters related to attenuation rate and critical membership, an improved fuzzy membership is introduced to accommodate the fuzziness and uncertainty of satellite cloud imagery; secondly, by effective combination of the improved fuzzy membership function and sparse representation-based classification (SRC), atoms in the training dictionary are optimized; finally, an adaptive fuzzy sparse representation classifier for cloud classification is proposed. Experimental results on FY-2G satellite cloud images show that the proposed method not only improves the accuracy of cloud classification, but also has strong stability and adaptability with high computational efficiency.

  19. Transition, Training, and Assessment of Multispectral Composite Imagery in Support of the NWS Aviation Forecast Mission

    NASA Technical Reports Server (NTRS)

    Fuell, Kevin; Jedlovec, Gary; Leroy, Anita; Schultz, Lori

    2015-01-01

    The NASA/Short-term Prediction, Research, and Transition (SPoRT) Program works closely with NOAA/NWS weather forecasters to transition unique satellite data and capabilities into operations in order to assist with nowcasting and short-term forecasting issues. Several multispectral composite imagery (i.e. RGB) products were introduced to users in the early 2000s to support hydrometeorology and aviation challenges as well as incident support. These activities led to SPoRT collaboration with the GOES-R Proving Ground efforts, where instruments such as MODIS (Aqua, Terra) and S-NPP/VIIRS imagers began to be used as near-realtime proxies to future capabilities of the Advanced Baseline Imager (ABI). One of the composite imagery products introduced to users was the Night-time Microphysics RGB, originally developed by EUMETSAT. SPoRT worked to transition this imagery to NWS users, provide region-specific training, and assess the impact of the imagery on aviation forecast needs. This presentation discusses the method used to interact with users to address specific aviation forecast challenges, including training activities undertaken to prepare for a product assessment. Users who assessed the multispectral imagery ranged from southern U.S. inland and coastal NWS weather forecast offices (WFOs), to those in the Rocky Mountain Front Range region and West Coast, as well as high-latitude forecasters in Alaska. These user-based assessments were documented and shared with the satellite community to support product developers and the broad users of new generation satellite data.

  20. Automatic Generation of High Quality DSM Based on IRS-P5 Cartosat-1 Stereo Data

    NASA Astrophysics Data System (ADS)

    d'Angelo, Pablo; Uttenthaler, Andreas; Carl, Sebastian; Barner, Frithjof; Reinartz, Peter

    2010-12-01

    IRS-P5 Cartosat-1 high resolution stereo satellite imagery is well suited for the creation of digital surface models (DSM). A system for highly automated and operational DSM and orthoimage generation based on IRS-P5 Cartosat-1 imagery is presented, with an emphasis on automated processing and product quality. The proposed system processes IRS-P5 level-1 stereo scenes using the rational polynomial coefficients (RPC) universal sensor model. The described method uses an RPC correction based on DSM alignment instead of using reference images with a lower lateral accuracy, which results in improved geolocation of the DSMs and orthoimages. Following RPC correction, highly detailed DSMs with 5 m grid spacing are derived using Semiglobal Matching. The proposed method is part of an operational Cartosat-1 processor for the generation of a high resolution DSM. Evaluation of 18 scenes against independent ground truth measurements indicates a mean lateral error (CE90) of 6.7 meters and a mean vertical accuracy (LE90) of 5.1 meters.
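
    The CE90 and LE90 figures quoted above are conventionally the 90th percentiles of the horizontal radial error and of the absolute vertical error over the check points. A minimal sketch (the residuals below are hypothetical, not the paper's data):

```python
import numpy as np

def ce90_le90(dx, dy, dz):
    """CE90: 90th percentile of horizontal radial error;
    LE90: 90th percentile of absolute vertical error."""
    return (np.percentile(np.hypot(dx, dy), 90),
            np.percentile(np.abs(dz), 90))

# hypothetical check-point residuals in metres
dx = np.array([1.0, -2.0, 3.0, 0.5])
dy = np.array([0.0, 2.0, -4.0, 0.5])
dz = np.array([-1.0, 2.0, -3.0, 0.5])
ce90, le90 = ce90_le90(dx, dy, dz)
```

    With numpy's default linear interpolation between order statistics, these four residuals give LE90 = 2.7 m.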
