Sample records for dynamic IR scene

  1. A dual-waveband dynamic IR scene projector based on DMD

    NASA Astrophysics Data System (ADS)

    Hu, Yu; Zheng, Ya-wei; Gao, Jiao-bo; Sun, Ke-feng; Li, Jun-na; Zhang, Lei; Zhang, Fang

    2016-10-01

    Infrared scene simulation systems can simulate a wide variety of objects and backgrounds to support dynamic testing and evaluation of EO detection systems in hardware-in-the-loop tests. This paper introduces the basic structure of a dual-waveband dynamic IR scene projector. The system's core device is an IR Digital Micro-mirror Device (DMD), and the radiant source is a miniature high-temperature IR plane blackbody. An IR collimating optical system whose transmission range covers both 3-5 μm and 8-12 μm is designed as the projection optical system. Scene simulation software was developed with Visual C++ and Vega software tools, and a software flow chart is presented. The parameters and test results of the system are given; the system has been applied in IR imaging simulation testing with satisfactory performance.
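
    As a rough illustration of the radiometry behind such a dual-waveband source, the sketch below numerically integrates Planck's law over the 3-5 μm and 8-12 μm bands for an assumed 1000 K plane blackbody; the temperature and the simple trapezoid integration are illustrative assumptions, not values or code from the paper.

      # Band-integrated blackbody radiance for the two projector wavebands.
      # The 1000 K source temperature is an assumption for illustration only.
      import numpy as np

      H = 6.62607015e-34   # Planck constant, J*s
      C = 2.99792458e8     # speed of light, m/s
      KB = 1.380649e-23    # Boltzmann constant, J/K

      def planck_radiance(wavelength_m, temp_k):
          """Spectral radiance, W / (m^2 sr m)."""
          a = 2.0 * H * C**2 / wavelength_m**5
          return a / np.expm1(H * C / (wavelength_m * KB * temp_k))

      def band_radiance(lo_um, hi_um, temp_k, n=2000):
          """Radiance integrated over [lo_um, hi_um], in W / (m^2 sr)."""
          wl = np.linspace(lo_um, hi_um, n) * 1e-6
          y = planck_radiance(wl, temp_k)
          return float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(wl)))  # trapezoid rule

      T = 1000.0  # assumed blackbody source temperature, K
      for lo, hi in [(3.0, 5.0), (8.0, 12.0)]:
          print(f"{lo}-{hi} um radiance at {T:.0f} K: {band_radiance(lo, hi, T):.0f} W/(m^2 sr)")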

  2. Bulk silicon as photonic dynamic infrared scene projector

    NASA Astrophysics Data System (ADS)

    Malyutenko, V. K.; Bogatyrenko, V. V.; Malyutenko, O. Yu.

    2013-04-01

    A Si-based fast (frame rate >1 kHz), large-scale (scene area 100 cm2), broadband (3-12 μm), dynamic, contactless infrared (IR) scene projector is demonstrated. An IR movie appears on the scene through conversion of a visible scenario projected onto a Si scene kept at elevated temperature. The light down-conversion results from free-carrier generation in the bulk Si scene, followed by modulation of its thermal emission output in the spectral band of free-carrier absorption. The experimental setup, an IR movie, figures of merit, and the advantages of the process compared with other projector technologies are discussed.

  3. Programmable personality interface for the dynamic infrared scene generator (IRSG2)

    NASA Astrophysics Data System (ADS)

    Buford, James A., Jr.; Mobley, Scott B.; Mayhall, Anthony J.; Braselton, William J.

    1998-07-01

    As scene generator platforms begin to rely specifically on commercial off-the-shelf (COTS) hardware and software components, high-speed programmable personality interfaces (PPIs) are required for interfacing to infrared (IR) flight computers/processors and complex IR projectors in hardware-in-the-loop (HWIL) simulation facilities. Recent technological advances and innovative applications of established technologies are beginning to allow development of cost-effective PPIs to interface to COTS scene generators. At the U.S. Army Aviation and Missile Command (AMCOM) Missile Research, Development, and Engineering Center (MRDEC), researchers have developed such a PPI to reside between the AMCOM MRDEC IR Scene Generator (IRSG) and either a missile flight computer or the dynamic Laser Diode Array Projector (LDAP). AMCOM MRDEC has developed several PPIs for the first- and second-generation IRSGs (IRSG1 and IRSG2), which are based on Silicon Graphics Incorporated (SGI) Onyx and Onyx2 computers with Reality Engine 2 (RE2) and InfiniteReality (IR/IR2) graphics engines. This paper provides an overview of the PPIs designed, integrated, tested, and verified at AMCOM MRDEC, specifically the IRSG2's PPI.

  4. Steering and positioning targets for HWIL IR testing at cryogenic conditions

    NASA Astrophysics Data System (ADS)

    Perkes, D. W.; Jensen, G. L.; Higham, D. L.; Lowry, H. S.; Simpson, W. R.

    2006-05-01

    In order to increase the fidelity of hardware-in-the-loop ground-truth testing, it is desirable to create a dynamic scene of multiple, independently controlled IR point sources. ATK-Mission Research has developed and supplied the steering mirror systems for the 7V and 10V Space Simulation Test Chambers at the Arnold Engineering Development Center (AEDC), Air Force Materiel Command (AFMC). A portion of the 10V system incorporates multiple target sources beam-combined at the focal point of a 20K cryogenic collimator. Each IR source consists of a precision blackbody with cryogenic aperture and filter wheels mounted on a cryogenic two-axis translation stage. This point source target scene is steered by a high-speed steering mirror to produce further complex motion. The scene changes dynamically in order to simulate an actual operational scene as viewed by the System Under Test (SUT) as it executes various dynamic look-direction changes during its flight to a target. Synchronization and real-time hardware-in-the-loop control is accomplished using reflective memory for each subsystem control and feedback loop. This paper focuses on the steering mirror system and the required tradeoffs of optical performance, precision, repeatability and high-speed motion as well as the complications of encoder feedback calibration and operation at 20K.

  5. Review of infrared scene projector technology-1993

    NASA Astrophysics Data System (ADS)

    Driggers, Ronald G.; Barnard, Kenneth J.; Burroughs, E. E.; Deep, Raymond G.; Williams, Owen M.

    1994-07-01

    The importance of testing IR imagers and missile seekers with realistic IR scenes warrants a review of the current technologies used in dynamic infrared scene projection. These technologies include resistive arrays, deformable mirror arrays, mirror membrane devices, liquid crystal light valves, laser writers, laser diode arrays, and CRTs. Other methods include frustrated total internal reflection, thermoelectric devices, galvanic cells, Bly cells, and vanadium dioxide. A description of each technology is presented along with a discussion of their relative benefits and disadvantages. The current state of each methodology is also summarized. Finally, the methods are compared and contrasted in terms of their performance parameters.

  6. Dynamic modeling method for infrared smoke based on enhanced discrete phase model

    NASA Astrophysics Data System (ADS)

    Zhang, Zhendong; Yang, Chunling; Zhang, Yan; Zhu, Hongbo

    2018-03-01

    The dynamic modeling of infrared (IR) smoke plays an important role in IR scene simulation systems, and its accuracy directly influences system fidelity. However, current IR smoke models cannot provide high fidelity because certain physical characteristics are frequently ignored in the fluid simulation: the discrete phase is simplified as a continuous phase, and the spinning of the IR decoy missile body is ignored. To address this defect, this paper proposes a dynamic modeling method for IR smoke based on an enhanced discrete phase model (DPM). A mathematical simulation model based on the enhanced DPM is built and a dynamic computational fluid mesh is generated. The dynamic model of IR smoke is then established using an extended equivalent-blackbody-molecule model. Experiments demonstrate that this model realizes dynamic modeling of IR smoke with higher fidelity.

  7. An Analysis of the Max-Min Texture Measure.

    DTIC Science & Technology

    1982-01-01

    [Fragmentary front-matter excerpt: the report's list of appendix tables, giving confusion matrices (D1-D10) for Scenes A, B, C, E, and H in the PANC and IR bands.]

  8. Hybrid-mode read-in integrated circuit for infrared scene projectors

    NASA Astrophysics Data System (ADS)

    Cho, Min Ji; Shin, Uisub; Lee, Hee Chul

    2017-05-01

    The infrared scene projector (IRSP) is a tool for evaluating infrared sensors by producing infrared images. Because sensor testing with IRSPs is safer than field testing, the usefulness of IRSPs is widely recognized at present. The important performance characteristics of IRSPs are the thermal resolution and the thermal dynamic range. However, due to an existing trade-off between these requirements, it is often difficult to find a workable balance between them. The conventional read-in integrated circuit (RIIC) can be classified into two types: voltage-mode and current-mode. An IR emitter driven by a voltage-mode RIIC offers a fine thermal resolution. On the other hand, an emitter driven by a current-mode RIIC has the advantage of a wide thermal dynamic range. In order to provide various scenes, i.e., from high-resolution scenes to high-temperature scenes, both of the aforementioned advantages are required. In this paper, a hybrid-mode RIIC which is selectively operated in two modes is proposed. The mode-selective characteristic of the proposed RIIC allows users to generate high-fidelity scenes regardless of the scene content. A prototype of the hybrid-mode RIIC was fabricated using a 0.18-μm 1-poly 6-metal CMOS process. The thermal range and the thermal resolution of the IR emitter driven by the proposed circuit were calculated based on measured data. The estimated thermal dynamic range of the current mode was from 261 K to 790 K, and the estimated thermal resolution of the voltage mode at 300 K was 23 mK with a 12-bit gray-scale resolution.
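
    A back-of-the-envelope check of the trade-off described above: splitting a wide thermal dynamic range into 12-bit gray levels yields a much coarser average step than splitting a narrow range, which is the motivation for the mode switch. The sketch below reuses the ranges quoted in the abstract and assumes a uniform temperature step per gray level, which is a simplification.

      # Rough quantization-step comparison for the two RIIC modes described above.
      # Assumes a uniform temperature step per gray level; the real emitter response
      # is nonlinear in drive level, so these numbers only indicate the magnitude.

      def mean_step_mk(t_low_k, t_high_k, bits):
          """Average temperature step (mK) when the range is split into 2**bits levels."""
          return (t_high_k - t_low_k) / (2 ** bits) * 1e3

      # Current mode: wide dynamic range quoted in the abstract (261 K to 790 K).
      wide = mean_step_mk(261.0, 790.0, 12)
      # Voltage mode: a narrow 100 K span around 300 K, chosen here only for illustration.
      narrow = mean_step_mk(290.0, 390.0, 12)

      print(f"12-bit step over 261-790 K  : {wide:.0f} mK per gray level")
      print(f"12-bit step over a 100 K span: {narrow:.0f} mK per gray level")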

  9. Low-cost real-time infrared scene generation for image projection and signal injection

    NASA Astrophysics Data System (ADS)

    Buford, James A., Jr.; King, David E.; Bowden, Mark H.

    1998-07-01

    As cost becomes an increasingly important factor in the development and testing of infrared sensors and flight computers/processors, the need for accurate hardware-in-the-loop (HWIL) simulations is critical. In the past, expensive and complex dedicated scene generation hardware was needed to attain the fidelity necessary for accurate testing. Recent technological advances and innovative applications of established technologies are beginning to allow development of cost-effective replacements for dedicated scene generators. These new scene generators are mainly constructed from commercial-off-the-shelf (COTS) hardware and software components. At the U.S. Army Aviation and Missile Command (AMCOM) Missile Research, Development, and Engineering Center (MRDEC), researchers have developed such a dynamic IR scene generator (IRSG) built around COTS hardware and software. The IRSG is used to provide dynamic inputs to an IR scene projector for in-band seeker testing and for direct signal injection into the seeker or processor electronics. AMCOM MRDEC has developed a second-generation IRSG, namely IRSG2, using the latest Silicon Graphics Incorporated (SGI) Onyx2 with InfiniteReality graphics. As reported in previous papers, the SGI Onyx Reality Engine 2 is the platform of the original IRSG, now referred to as IRSG1. IRSG1 has been in operation and used daily for the past three years on several IR projection and signal injection HWIL programs. Using this second-generation IRSG, frame rates have increased from 120 Hz to 400 Hz and intensity resolution from 12 bits to 16 bits. The key features of the IRSGs are real-time missile frame rates and frame sizes, a dynamic missile-to-target(s) viewpoint updated each frame in real time by a six-degree-of-freedom (6DOF) system under test (SUT) simulation, multiple dynamic objects (e.g. targets, terrain/background, countermeasures, and atmospheric effects), latency compensation, point-to-extended-source anti-aliased targets, and sensor modeling effects. This paper provides a comparison between the IRSG1 and IRSG2 systems and focuses on the IRSG software, real-time features, and database development tools.

  10. Unique digital imagery interface between a silicon graphics computer and the kinetic kill vehicle hardware-in-the-loop simulator (KHILS) wideband infrared scene projector (WISP)

    NASA Astrophysics Data System (ADS)

    Erickson, Ricky A.; Moren, Stephen E.; Skalka, Marion S.

    1998-07-01

    Providing a flexible and reliable source of IR target imagery is absolutely essential for operation of an IR scene projector in a hardware-in-the-loop simulation environment. The Kinetic Kill Vehicle Hardware-in-the-Loop Simulator (KHILS) at Eglin AFB provides the capability, and requisite interfaces, to supply target IR imagery to its Wideband IR Scene Projector (WISP) from three separate sources at frame rates ranging from 30 to 120 Hz. Video can be input from a VCR source at the conventional 30 Hz frame rate. Pre-canned digital imagery and test patterns can be downloaded into stored memory from the host processor and played back as individual still frames or movie sequences at up to a 120 Hz frame rate. Dynamic real-time imagery can be provided to the KHILS WISP projector system, at a 120 Hz frame rate, from a Silicon Graphics Onyx computer system normally used for generation of digital IR imagery, through a custom CSA-built interface available for either the SGI/DVP or SGI/DD02 interface port. The primary focus of this paper is to describe our technical approach and experience in the development of this unique SGI computer and WISP projector interface.

  11. IR characteristic simulation of city scenes based on radiosity model

    NASA Astrophysics Data System (ADS)

    Xiong, Xixian; Zhou, Fugen; Bai, Xiangzhi; Yu, Xiyu

    2013-09-01

    Reliable modeling of the thermal infrared (IR) signatures of real-world city scenes is required for signature management of civil and military platforms. Traditional modeling methods generally assume that scene objects are individual entities during the physical processes occurring in the infrared range. In reality, however, the physical scene involves convective and conductive interactions between objects as well as radiative interactions between objects. A method based on a radiosity model describes these complex effects and has been developed to enable an accurate simulation of the radiance distribution of city scenes. First, the physical processes affecting the IR characteristics of city scenes are described. Second, heat balance equations are formed by combining the atmospheric conditions, shadow maps, and the scene geometry. Finally, a finite difference method is used to calculate the kinetic temperature of each object surface. A radiosity model is introduced to describe the scattering of radiation between surface elements in the scene. By synthesizing the radiance distribution of objects in the infrared range, the IR characteristics of the scene are obtained. Real infrared images and model predictions are shown and compared. The results demonstrate that this method can realistically simulate the IR characteristics of city scenes; it effectively reproduces infrared shadow effects and the radiative interactions between objects.
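
    As a minimal numerical illustration of the radiosity idea described above (not the authors' code), the sketch below solves the classical radiosity system B = E + rho*F*B for a made-up three-facet geometry; the emission, reflectance, and view-factor values are invented placeholders.

      # Minimal radiosity solve: B = E + rho * F @ B  =>  (I - rho*F) B = E.
      # The three-facet values are invented for illustration; a real city-scene
      # model derives the view factors F from the geometry.
      import numpy as np

      E = np.array([50.0, 120.0, 80.0])      # self-emitted radiance of each facet
      rho = np.diag([0.05, 0.10, 0.08])      # IR reflectance of each facet
      F = np.array([[0.0, 0.3, 0.2],         # row i: fraction of facet i's view
                    [0.25, 0.0, 0.15],       # occupied by each other facet
                    [0.2, 0.1, 0.0]])

      A = np.eye(3) - rho @ F
      B = np.linalg.solve(A, E)              # total outgoing radiance incl. reflections
      print("facet radiosities:", np.round(B, 2))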

  12. Comparative Evaluation of Background Subtraction Algorithms in Remote Scene Videos Captured by MWIR Sensors

    PubMed Central

    Yao, Guangle; Lei, Tao; Zhong, Jiandan; Jiang, Ping; Jia, Wenwu

    2017-01-01

    Background subtraction (BS) is one of the most commonly encountered tasks in video analysis and tracking systems. It distinguishes the foreground (moving objects) from the video sequences captured by static imaging sensors. Background subtraction in remote-scene infrared (IR) video is important and common to many fields. This paper provides a Remote Scene IR Dataset captured by our designed medium-wave infrared (MWIR) sensor. Each video sequence in this dataset is identified with specific BS challenges, and the pixel-wise ground truth of the foreground (FG) for each frame is also provided. A series of experiments were conducted to evaluate BS algorithms on the proposed dataset. The overall performance of the BS algorithms and their processor/memory requirements were compared. Appropriate evaluation metrics and criteria were employed to assess the capability of each BS algorithm to handle the different kinds of BS challenges represented in the dataset. The results and conclusions provide useful references for developing new BS algorithms for remote-scene IR video sequences, and some of them are not limited to remote-scene or IR video but are generic to background subtraction. The Remote Scene IR dataset and the foreground masks detected by each evaluated BS algorithm are available online: https://github.com/JerryYaoGl/BSEvaluationRemoteSceneIR. PMID:28837112
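
    The abstract does not list the metrics used; a common choice for this kind of pixel-wise benchmark is precision, recall, and F-measure, sketched below against synthetic masks under that assumption.

      # Pixel-wise precision / recall / F-measure for one frame, a typical way to
      # score a background-subtraction result against ground truth. The two masks
      # are synthetic stand-ins for a detected-foreground mask and the dataset's
      # ground-truth mask.
      import numpy as np

      def bs_scores(detected, truth):
          detected, truth = detected.astype(bool), truth.astype(bool)
          tp = np.sum(detected & truth)
          fp = np.sum(detected & ~truth)
          fn = np.sum(~detected & truth)
          precision = tp / (tp + fp) if tp + fp else 0.0
          recall = tp / (tp + fn) if tp + fn else 0.0
          f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
          return precision, recall, f1

      rng = np.random.default_rng(0)
      truth = rng.random((240, 320)) < 0.05                 # ~5% of pixels are foreground
      detected = truth ^ (rng.random((240, 320)) < 0.01)    # noisy detection result
      print("precision %.3f  recall %.3f  F1 %.3f" % bs_scores(detected, truth))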

  13. Real-time synchronized multiple-sensor IR/EO scene generation utilizing the SGI Onyx2

    NASA Astrophysics Data System (ADS)

    Makar, Robert J.; O'Toole, Brian E.

    1998-07-01

    An approach to utilize the symmetric multiprocessing environment of the Silicon Graphics Inc. (SGI) Onyx2 has been developed to support the generation of IR/EO scenes in real-time. This development, supported by the Naval Air Warfare Center Aircraft Division (NAWC/AD), focuses on high frame rate hardware-in-the-loop testing of multiple sensor avionics systems. In the past, real-time IR/EO scene generators have been developed as custom architectures that were often expensive and difficult to maintain. Previous COTS scene generation systems, designed and optimized for visual simulation, could not be adapted for accurate IR/EO sensor stimulation. The new Onyx2 connection mesh architecture made it possible to develop a more economical system while maintaining the fidelity needed to stimulate actual sensors. An SGI based Real-time IR/EO Scene Simulator (RISS) system was developed to utilize the Onyx2's fast multiprocessing hardware to perform real-time IR/EO scene radiance calculations. During real-time scene simulation, the multiprocessors are used to update polygon vertex locations and compute radiometrically accurate floating point radiance values. The output of this process can be utilized to drive a variety of scene rendering engines. Recent advancements in COTS graphics systems, such as the Silicon Graphics InfiniteReality, make a total COTS solution possible for some classes of sensors. This paper will discuss the critical technologies that apply to infrared scene generation and hardware-in-the-loop testing using SGI compatible hardware. Specifically, the application of RISS high-fidelity real-time radiance algorithms on the SGI Onyx2's multiprocessing hardware will be discussed. Also, issues relating to external real-time control of multiple synchronized scene generation channels will be addressed.

  14. Image reconstruction of dynamic infrared single-pixel imaging system

    NASA Astrophysics Data System (ADS)

    Tong, Qi; Jiang, Yilin; Wang, Haiyan; Guo, Limin

    2018-03-01

    The single-pixel imaging technique has recently received much attention. Most current single-pixel imaging work addresses relatively static targets or a fixed imaging system, being limited by the number of measurements that can be collected through the single detector. In this paper, we propose a novel dynamic compressive imaging method to solve the imaging problem when the imaging system itself is in motion, for the infrared (IR) rosette scanning system. The relationship between adjacent target images and the scene is analyzed under different system movement scenarios. These relationships are used to build dynamic compressive imaging models. Simulation results demonstrate that the proposed method can improve the reconstruction quality of the IR image and enhance the contrast between the target and the background in the presence of system movement.
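
    The abstract does not specify the reconstruction algorithm; as a generic illustration of single-pixel compressive imaging, the sketch below forms measurements y = Phi x with random on/off patterns and recovers the scene by ridge-regularized least squares. The scene, pattern count, and solver are all illustrative assumptions, not the authors' method.

      # Toy single-pixel imaging: each measurement is the inner product of the
      # scene with one random on/off pattern, and the image is recovered by
      # ridge-regularized least squares. A generic stand-in, not the paper's solver.
      import numpy as np

      rng = np.random.default_rng(1)
      n = 16                                       # 16x16 scene (256 pixels)
      x = np.zeros((n, n)); x[5:10, 6:12] = 1.0    # a simple bright "target"
      x = x.ravel()

      m = 180                                      # fewer measurements than pixels
      Phi = (rng.random((m, n * n)) < 0.5).astype(float)   # random binary patterns
      y = Phi @ x + 0.01 * rng.standard_normal(m)          # detector readings + noise

      lam = 1e-2                                   # ridge regularization parameter
      x_hat = np.linalg.solve(Phi.T @ Phi + lam * np.eye(n * n), Phi.T @ y)
      err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
      print(f"relative reconstruction error: {err:.3f}")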

  15. Development of a high-definition IR LED scene projector

    NASA Astrophysics Data System (ADS)

    Norton, Dennis T.; LaVeigne, Joe; Franks, Greg; McHugh, Steve; Vengel, Tony; Oleson, Jim; MacDougal, Michael; Westerfeld, David

    2016-05-01

    Next-generation Infrared Focal Plane Arrays (IRFPAs) are demonstrating ever-increasing frame rates, dynamic range, and format size while moving to smaller-pitch arrays. These improvements in IRFPA performance and array format have challenged the IRFPA test community to accurately and reliably test them in a hardware-in-the-loop environment utilizing Infrared Scene Projector (IRSP) systems. Rapidly evolving IR seeker and sensor technology has, in some cases, surpassed the capabilities of existing IRSP technology. To meet the demands of future IRFPA testing, Santa Barbara Infrared Inc. is developing an infrared light-emitting diode IRSP system. Design goals of the system include a peak radiance >2.0 W/cm2/sr within the 3.0-5.0 μm waveband, maximum frame rates >240 Hz, and >4 million pixels within a form factor supporting pixel pitches <=32 μm. This paper provides an overview of our current phase of development, system design considerations, and future development work.

  16. Thermal-to-visible transducer (TVT) for thermal-IR imaging

    NASA Astrophysics Data System (ADS)

    Flusberg, Allen; Swartz, Stephen; Huff, Michael; Gross, Steven

    2008-04-01

    We have been developing a novel thermal-to-visible transducer (TVT), an uncooled thermal-IR imager that is based on a Fabry-Perot Interferometer (FPI). The FPI-based IR imager can convert a thermal-IR image to a video electronic image. IR radiation that is emitted by an object in the scene is imaged onto an IR-absorbing material that is located within an FPI. Temperature variations generated by the spatial variations in the IR image intensity cause variations in optical thickness, modulating the reflectivity seen by a probe laser beam. The reflected probe is imaged onto a visible array, producing a visible image of the IR scene. This technology can provide low-cost IR cameras with excellent sensitivity, low power consumption, and the potential for self-registered fusion of thermal-IR and visible images. We will describe characteristics of requisite pixelated arrays that we have fabricated.
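
    A minimal sketch of the read-out mechanism described above: the probe-beam reflectance of an idealized lossless Fabry-Perot cavity (the Airy function) shifts as the absorber's temperature changes its optical thickness. The mirror reflectivity, gap, wavelength, and thermo-optic coefficient below are illustrative assumptions, not device parameters from the paper.

      # Probe-beam reflectance of an idealized (lossless, symmetric) Fabry-Perot
      # cavity as its optical thickness changes with temperature. All numbers are
      # illustrative assumptions.
      import numpy as np

      R_MIRROR = 0.85                 # mirror intensity reflectivity (assumed)
      WL = 850e-9                     # probe laser wavelength, m (assumed)
      N0, D0 = 1.5, 2.0e-6            # refractive index and gap thickness (assumed)
      DNDT = 1e-4                     # effective thermo-optic coefficient, 1/K (assumed)

      def fp_reflectance(delta_t_k):
          """Airy reflectance after a temperature rise delta_t_k of the absorber."""
          n = N0 * (1.0 + DNDT * delta_t_k)
          phase = 4.0 * np.pi * n * D0 / WL              # round-trip phase
          coef = 4.0 * R_MIRROR / (1.0 - R_MIRROR) ** 2  # coefficient of finesse
          s = np.sin(phase / 2.0) ** 2
          return coef * s / (1.0 + coef * s)

      for dT in (0.0, 0.05, 0.10, 0.20):                 # scene-induced temperature rises, K
          print(f"dT = {dT:4.2f} K -> probe reflectance {fp_reflectance(dT):.3f}")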

  17. An Evaluation of Pixel-Based Methods for the Detection of Floating Objects on the Sea Surface

    NASA Astrophysics Data System (ADS)

    Borghgraef, Alexander; Barnich, Olivier; Lapierre, Fabian; Van Droogenbroeck, Marc; Philips, Wilfried; Acheroy, Marc

    2010-12-01

    Ship-based automatic detection of small floating objects on an agitated sea surface remains a hard problem. Our main concern is the detection of floating mines, which proved a real threat to shipping in confined waterways during the first Gulf War, but applications include salvaging, search-and-rescue operations, and perimeter or harbour defense. Detection in the infrared (IR) is challenging because a rough sea appears as a dynamic background of moving objects whose size, shape, and temperature are similar to those of a floating mine. In this paper we apply a selection of background subtraction algorithms to the problem, and we show that recent algorithms such as ViBe and behaviour subtraction, which take into account spatial and temporal correlations within the dynamic scene, significantly outperform the more conventional parametric techniques, with only minimal prior assumptions about the physical properties of the scene.

  18. Visible-Infrared Hyperspectral Image Projector

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew

    2013-01-01

    The VisIR HIP generates spatially-spectrally complex scenes. The generated scenes simulate real-world targets viewed by various remote sensing instruments. The VisIR HIP consists of two subsystems: a spectral engine and a spatial engine. The spectral engine generates spectrally complex uniform illumination that spans the wavelength range between 380 nm and 1,600 nm. The spatial engine generates two-dimensional gray-scale scenes. When combined, the two engines are capable of producing two-dimensional scenes with a unique spectrum at each pixel. The VisIR HIP can be used to calibrate any spectrally sensitive remote-sensing instrument. Tests were conducted on the Wide-field Imaging Interferometer Testbed at NASA's Goddard Space Flight Center. The device is a variation of the calibrated hyperspectral image projector developed by the National Institute of Standards and Technology in Gaithersburg, MD. It uses Gooch & Housego Visible and Infrared OL490 Agile Light Sources to generate arbitrary spectra. The two light sources are coupled to a digital light processing (DLP) digital mirror device (DMD) that serves as the spatial engine. Scenes are displayed on the DMD synchronously with the desired spectrum. Scene/spectrum combinations are displayed in rapid succession, over time intervals that are short compared to the integration time of the system under test.

  19. Demonstration of KHILS two-color IR projection capability

    NASA Astrophysics Data System (ADS)

    Jones, Lawrence E.; Coker, Jason S.; Garbo, Dennis L.; Olson, Eric M.; Murrer, Robert Lee, Jr.; Bergin, Thomas P.; Goldsmith, George C., II; Crow, Dennis R.; Guertin, Andrew W.; Dougherty, Michael; Marler, Thomas M.; Timms, Virgil G.

    1998-07-01

    For more than a decade, there has been considerable discussion about using different IR bands for the detection of low-contrast military targets. Theory predicts that a target can have little to no contrast against the background in one IR band while having a discernible signature in another IR band. A significant amount of effort has been invested in establishing hardware that is capable of simultaneously imaging in two IR bands to take advantage of this phenomenon. Focal plane arrays (FPAs) are starting to materialize with this simultaneous two-color imaging capability. The Kinetic Kill Vehicle Hardware-in-the-loop Simulator (KHILS) team of the Air Force Research Laboratory and the Guided Weapons Evaluation Facility (GWEF), both at Eglin AFB, FL, have spent the last 10 years developing the ability to project dynamic IR scenes to imaging IR seekers. Through the Wideband Infrared Scene Projector (WISP) program, the capability to project two simultaneous IR scenes to a dual-color seeker has been established at KHILS. WISP utilizes resistor arrays to produce the IR energy. Resistor arrays are not ideal blackbodies; the projection of two IR colors with resistor arrays therefore requires two optically coupled arrays. This paper documents the first demonstration of two-color simultaneous projection at KHILS. Agema cameras were used for the measurements. The Agema's HgCdTe detector has responsivity from 4 to 14 microns. A blackbody and two IR filters (MWIR: 4.2 to 7.4 microns, LWIR: 7.7 to 13 microns) were used to calibrate the Agema in two bands. Each filter was placed in front of the blackbody one at a time, and the temperature of the blackbody was stepped up in incremental amounts. The output counts from the Agema were recorded at each temperature. This calibration process established the radiance-to-Agema-output-count curves for the two bands. The WISP optical system utilizes a dichroic beam combiner to optically couple the two resistor arrays. The transmission path of the beam combiner provided the LWIR (6.75 to 12 microns), while the reflective path produced the MWIR (3 to 6.5 microns). Each resistor array was individually projected into the Agema through the beam combiner at incremental output levels. Once again the Agema's output counts were recorded at each resistor array output level. These projections established the resistor-array-output-to-Agema-count curves for the MWIR and LWIR resistor arrays. Using the radiance-to-Agema-count curves, the MWIR and LWIR resistor-array-output-to-radiance curves were established. With the calibration curves established, a two-color movie was projected and compared to the generated movie radiance values. By taking care to correctly account for the spectral qualities of the Agema camera, the calibration filters, and the dichroic beam combiner, the projections matched the theoretical calculations. In the near future, a Lockheed-Martin Multiple Quantum Well camera with true two-color IR capability will be tested.
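
    A minimal numerical sketch of the curve composition described above: a blackbody-radiance-to-counts table and a drive-level-to-counts table are combined, by inverting the first at the counts of the second, to give drive level versus projected radiance. The tabulated values are invented placeholders, not KHILS calibration data.

      # Composing the two calibration curves: blackbody radiance -> camera counts,
      # and resistor-array drive level -> camera counts, give drive level ->
      # projected radiance by inverting the first curve at the counts produced by
      # the second. All tabulated values are invented placeholders.
      import numpy as np

      # (a) blackbody calibration: known in-band radiance vs. recorded camera counts
      radiance_bb = np.array([0.5, 1.0, 2.0, 4.0, 8.0])        # W/(m^2 sr), placeholder
      counts_bb = np.array([210., 400., 780., 1500., 2900.])   # camera counts, placeholder

      # (b) projector calibration: resistor-array drive level vs. recorded counts
      drive = np.array([0.0, 0.25, 0.5, 0.75, 1.0])            # normalized drive level
      counts_proj = np.array([250., 520., 1000., 1800., 2600.])

      # counts -> radiance (inverse of curve a), evaluated at the projector counts
      radiance_proj = np.interp(counts_proj, counts_bb, radiance_bb)
      for d, L in zip(drive, radiance_proj):
          print(f"drive {d:4.2f} -> apparent in-band radiance {L:5.2f} W/(m^2 sr)")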

  20. Integration of an open interface PC scene generator using COTS DVI converter hardware

    NASA Astrophysics Data System (ADS)

    Nordland, Todd; Lyles, Patrick; Schultz, Bret

    2006-05-01

    Commercial-off-the-shelf (COTS) personal computer (PC) hardware is increasingly capable of computing high dynamic range (HDR) scenes for military sensor testing at high frame rates. New electro-optical and infrared (EO/IR) scene projectors feature electrical interfaces that can accept the DVI output of these PC systems. However, military hardware-in-the-loop (HWIL) facilities such as those at the US Army Aviation and Missile Research Development and Engineering Center (AMRDEC) utilize a sizeable inventory of existing projection systems that were designed to use the Silicon Graphics Incorporated (SGI) digital video port (DVP, also known as DVP2 or DD02) interface. To mate the new DVI-based scene generation systems to these legacy projection systems, CG2 Inc., a Quantum3D Company (CG2), has developed a DVI-to-DVP converter called Delta DVP. This device takes progressive-scan DVI input, converts it to digital parallel data, and combines and routes color components to derive a 16-bit-wide luminance channel replicated on a DVP output interface. The HWIL Functional Area of AMRDEC has developed a suite of modular software to perform deterministic real-time, waveband-specific rendering of sensor scenes, leveraging the features of commodity graphics hardware and open source software. Together, these technologies enable sensor simulation and test facilities to integrate scene generation and projection components with diverse pedigrees.
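
    The abstract does not give the exact component-to-bit mapping used by the Delta DVP; one plausible packing, assumed here purely for illustration, is to carry the 16-bit luminance word as a high byte and a low byte in two 8-bit DVI color components.

      # One plausible way to carry a 16-bit luminance word over an 8-bit-per-channel
      # DVI link: high byte in one color component, low byte in another. The actual
      # Delta DVP mapping is not given in the abstract, so this is an assumption.
      import numpy as np

      def pack_16bit(luma16):
          """Split 16-bit luminance into two 8-bit 'color' planes (e.g. R=high, G=low)."""
          luma16 = np.asarray(luma16, dtype=np.uint16)
          return (luma16 >> 8).astype(np.uint8), (luma16 & 0xFF).astype(np.uint8)

      def unpack_16bit(high, low):
          """Recombine the two 8-bit planes into the 16-bit luminance word."""
          return (high.astype(np.uint16) << 8) | low.astype(np.uint16)

      frame = np.random.default_rng(2).integers(0, 2**16, size=(4, 4), dtype=np.uint16)
      r, g = pack_16bit(frame)
      assert np.array_equal(unpack_16bit(r, g), frame)
      print("16-bit luminance round trip through two 8-bit channels OK")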

  1. High-temperature MIRAGE XL (LFRA) IRSP system development

    NASA Astrophysics Data System (ADS)

    McHugh, Steve; Franks, Greg; LaVeigne, Joe

    2017-05-01

    The development of very-large-format infrared detector arrays has challenged the IR scene projector community to develop larger-format infrared emitter arrays. Many scene projector applications also require much higher simulated temperatures than can be generated with current technology. This paper will present an overview of resistive emitter-based (broadband) IR scene projector system development, as well as describe recent progress in emitter materials and pixel designs applicable to legacy MIRAGE XL systems to achieve apparent temperatures >1000 K in the MWIR. These new high-temperature MIRAGE XL (LFRA) Digital Emitter Engines (DEEs) will be "plug and play" equivalents for legacy MIRAGE XL DEEs; the rest of the system is reusable. Under the High Temperature Dynamic Resistive Array (HDRA) development program, Santa Barbara Infrared Inc. (SBIR) is developing a new infrared scene projector architecture capable of producing both very large format (>2k x 2k) resistive emitter arrays and improved emitter pixel technology capable of simulating very high apparent temperatures. During earlier phases of the program, SBIR demonstrated materials with MWIR apparent temperatures in excess of 1500 K. These new emitter materials can be utilized with legacy RIICs to produce pixels that can achieve 7X the radiance of the legacy systems with low cost and low risk. A 'scalable' Read-In Integrated Circuit (RIIC) is also being developed under the same HDRA program to drive the high-temperature pixels. This RIIC will utilize through-silicon via (TSV) and Quilt Packaging (QP) technologies to allow seamless tiling of multiple chips to fabricate very large arrays, and thus overcome the yield limitations inherent in large-scale integrated circuits. These quilted arrays can be fabricated in any N x M size in 512 steps.
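
    A rough illustration of what "7X the radiance of the legacy systems" can mean in apparent temperature: the sketch below numerically inverts band-integrated Planck radiance over 3-5 μm to find the temperature whose in-band radiance is seven times that of an assumed 700 K legacy apparent temperature. Both the legacy temperature and the simple bisection solver are assumptions for illustration, not figures or methods from the paper.

      # Convert a MWIR in-band radiance multiplier into an apparent temperature by
      # numerically inverting band-integrated Planck radiance with bisection.
      # The assumed 700 K legacy apparent temperature is illustrative only.
      import numpy as np

      H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

      def band_radiance(temp_k, lo_um=3.0, hi_um=5.0, n=2000):
          """3-5 um in-band blackbody radiance, W/(m^2 sr), trapezoid rule."""
          wl = np.linspace(lo_um, hi_um, n) * 1e-6
          spectral = 2 * H * C**2 / wl**5 / np.expm1(H * C / (wl * KB * temp_k))
          return float(np.sum((spectral[1:] + spectral[:-1]) / 2 * np.diff(wl)))

      def invert(target, t_lo=200.0, t_hi=4000.0, iters=60):
          """Bisection on the monotonic band radiance to find T matching 'target'."""
          for _ in range(iters):
              mid = 0.5 * (t_lo + t_hi)
              if band_radiance(mid) < target:
                  t_lo = mid
              else:
                  t_hi = mid
          return 0.5 * (t_lo + t_hi)

      legacy_t = 700.0                          # assumed legacy apparent temperature, K
      target = 7.0 * band_radiance(legacy_t)    # "7X the radiance"
      print(f"7x the 3-5 um radiance of {legacy_t:.0f} K corresponds to ~{invert(target):.0f} K")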

  2. New impressive capabilities of SE-workbench for EO/IR real-time rendering of animated scenarios including flares

    NASA Astrophysics Data System (ADS)

    Le Goff, Alain; Cathala, Thierry; Latger, Jean

    2015-10-01

    To provide technical assessments of EO/IR flares and self-protection systems for aircraft, DGA Information Superiority resorts to synthetic image generation to model the operational battlefield of an aircraft as viewed by EO/IR threats. For this purpose, it completed the SE-Workbench suite from OKTAL-SE with functionalities to predict a realistic aircraft IR signature and is now integrating the real-time EO/IR rendering engine of SE-Workbench, called SE-FAST-IR. This engine is a set of physics-based software and libraries that allows preparing and visualizing a 3D scene for the EO/IR domain, taking advantage of recent advances in GPU computing techniques. The recent evolutions concern mainly the realistic and physical rendering of reflections, the rendering of both radiative and thermal shadows, the use of procedural techniques for managing and rendering very large terrains, the implementation of Image-Based Rendering for dynamic interpolation of static plume signatures, and, for aircraft, the dynamic interpolation of thermal states. The next step is the representation of the spectral, directional, spatial, and temporal signature of flares by Lacroix Defense using OKTAL-SE technology. This representation is prepared from experimental data acquired during windblast tests and high-speed track tests, and is based on particle-system mechanisms to model the different components of a flare. The validation of a flare model will comprise a simulation of real trials and a comparison of simulation outputs with experimental results concerning the flare signature and, above all, the behavior of the stimulated threat.

  3. PC Scene Generation

    NASA Astrophysics Data System (ADS)

    Buford, James A., Jr.; Cosby, David; Bunfield, Dennis H.; Mayhall, Anthony J.; Trimble, Darian E.

    2007-04-01

    AMRDEC has successfully tested hardware and software for real-time scene generation for IR and SAL sensors on COTS PC-based hardware and video cards. AMRDEC personnel worked with nVidia and Concurrent Computer Corporation to develop a scene generation system capable of frame rates of at least 120 Hz while frame-locked to an external source (such as a missile seeker) with no dropped frames. Latency measurements and image validation were performed using COTS and in-house developed hardware and software. Software for the scene generation system was developed using OpenSceneGraph.

  4. Experiment research on infrared targets signature in mid and long IR spectral bands

    NASA Astrophysics Data System (ADS)

    Wang, Chensheng; Hong, Pu; Lei, Bo; Yue, Song; Zhang, Zhijie; Ren, Tingting

    2013-09-01

    Since infrared imaging systems play a significant role in military self-defense and fire-control systems, the radiation signature of IR targets has become an important topic in IR imaging applications. IR target signatures can be applied to target identification, especially for small and dim targets, as well as to target IR thermal design. To research and analyze target IR signatures systematically, a practical measurement experiment was carried out under different backgrounds and conditions. An infrared radiation acquisition system based on a cooled MWIR thermal imager and a cooled LWIR thermal imager was developed to capture digital infrared images, and additional instruments were introduced to provide other parameters. From the original image data and the related parameters for a given scene, the IR signature of the target scene of interest can be calculated. Different backgrounds and targets were measured with this approach, and a comparative analysis is presented in this paper as an example. The experiment validates this research approach and is useful for detection performance evaluation and further target identification research.

  5. Design and manufacturing considerations for high-performance gimbals used for land, sea, air, and space

    NASA Astrophysics Data System (ADS)

    Sweeney, Mike; Redd, Lafe; Vettese, Tom; Myatt, Ray; Uchida, David; Sellers, Del

    2015-09-01

    High performance stabilized EO/IR surveillance and targeting systems are in demand for a wide variety of military, law enforcement, and commercial assets for land, sea, air, and space. Operating ranges, wavelengths, and angular resolution capabilities define the requirements for EO/IR optics and sensors, and line of sight stabilization. Many materials and design configurations are available for EO/IR pointing gimbals depending on trade-offs of size, weight, power (SWaP), performance, and cost. Space and high performance military aircraft applications are often driven toward expensive but exceptionally performing beryllium and aluminum beryllium components. Commercial applications often rely on aluminum and composite materials. Gimbal design considerations include achieving minimized mass and inertia simultaneous with demanding structural, thermal, optical, and scene stabilization requirements when operating in dynamic operational environments. Manufacturing considerations include precision lapping and honing of ball bearing interfaces, brazing, welding, and casting of complex aluminum and beryllium alloy structures, and molding of composite structures. Several notional and previously developed EO/IR gimbal platforms are profiled that exemplify applicable design and manufacturing technologies.

  6. Methods of editing cloud and atmospheric layer affected pixels from satellite data

    NASA Technical Reports Server (NTRS)

    Nixon, P. R. (Principal Investigator); Wiegand, C. L.; Richardson, A. J.; Johnson, M. P.

    1981-01-01

    Plotted transects made from south Texas daytime HCMM data show the effect of subvisible cirrus (SCi) clouds in the emissive (IR) band, but the effect is unnoticeable in the reflective (VIS) band. The depression of satellite-indicated temperatures was greatest in the center of SCi streamers and tapered off at the edges. Pixels of uncontaminated land and water features in the HCMM test area shared identical VIS and IR digital count combinations with other pixels representing similar features. A minimum of 0.015 percent repeats of identical VIS-IR combinations is characteristic of land and water features in a scene with 30 percent cloud cover; this increases to 0.021 percent or more when the scene is clear. Pixels having shared VIS-IR combinations less frequent than these amounts are considered cloud-contaminated in the cluster screening method. About twenty percent of SCi was machine-indistinguishable from land features in two-dimensional spectral space (VIS vs. IR).
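
    A minimal sketch of the cluster-screening idea described above (not the authors' code): count how often each VIS-IR digital-count pair occurs in the scene and flag pixels whose pair is rarer than the quoted threshold as possibly cloud-contaminated. The synthetic counts are placeholders.

      # Cluster-screening sketch: flag pixels whose (VIS, IR) digital-count pair is
      # shared by less than a threshold fraction of the scene. The synthetic counts
      # are placeholders; the threshold reuses the 0.015 percent figure quoted above.
      import numpy as np

      rng = np.random.default_rng(3)
      vis = rng.integers(40, 60, size=(200, 200))        # synthetic VIS digital counts
      ir = rng.integers(120, 140, size=(200, 200))       # synthetic IR digital counts
      vis[:3, :3] = rng.integers(150, 250, size=(3, 3))  # a few anomalous pixels
      ir[:3, :3] = rng.integers(30, 80, size=(3, 3))     # standing in for thin cloud

      pairs = vis.astype(np.int64) * 1000 + ir           # encode each VIS-IR pair as one key
      keys, counts = np.unique(pairs, return_counts=True)
      freq = counts / pairs.size                         # fraction of scene sharing each pair

      threshold = 0.015 / 100.0                          # 0.015 percent, as in the abstract
      contaminated = np.isin(pairs, keys[freq < threshold])
      print(f"flagged {contaminated.mean() * 100:.3f}% of pixels as possibly contaminated")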

  7. Making methane visible

    NASA Astrophysics Data System (ADS)

    Gålfalk, Magnus; Olofsson, Göran; Crill, Patrick; Bastviken, David

    2016-04-01

    Methane (CH4) is one of the most important greenhouse gases, and an important energy carrier in biogas and natural gas. Its large-scale emission patterns have been unpredictable and the source and sink distributions are poorly constrained. Remote assessment of CH4 with high sensitivity at a m2 spatial resolution would allow detailed mapping of the near-ground distribution and anthropogenic sources in landscapes but has hitherto not been possible. Here we show that CH4 gradients can be imaged on the

  8. Electrostatic artificial eyelid actuator as an analog micromirror device

    NASA Astrophysics Data System (ADS)

    Goodwin, Scott H.; Dausch, David E.; Solomon, Steven L.; Lamvik, Michael K.

    2005-05-01

    An electrostatic MEMS actuator is described for use as an analog micromirror device (AMD) for high-performance, broadband, hardware-in-the-loop (HWIL) scene generation. Current state-of-the-art technology is based on resistively heated pixel arrays. As these arrays are driven to the higher scene temperatures required by missile defense scenarios, the power required to drive large-format resistive arrays will ultimately become prohibitive. Existing digital micromirror devices (DMDs) are, in principle, capable of generating the required scene irradiances, but suffer from limited dynamic range, limited resolution, and flicker effects. An AMD would be free of these limitations, and so represents a viable alternative for high-performance UV/VIS/IR scene generation. An electrostatic flexible-film actuator technology, developed for use as "artificial eyelid" shutters that protect focal plane sensors against damaging radiation, is suitable as an AMD for analog control of projection irradiance. In shutter applications, the artificial eyelid actuator achieved radii of curvature as low as 25 μm and operated at high voltage (>200 V). Recent testing suggests that these devices are capable of analog operation as reflective microcantilever mirrors appropriate for scene projector systems. In this case, the device would possess a larger radius and operate at lower voltages (20-50 V). Additionally, frame rates greater than 5 kHz have been measured for continuous operation. The paper describes the artificial eyelid technology, preliminary measurements of analog test pixels, and design aspects related to application in scene projection systems. We believe this technology will enable AMD projectors with at least 512 x 512 spatial resolution, non-temporally-modulated output, and pixel response times of <1.25 ms.

  9. Attentional synchrony and the influence of viewing task on gaze behavior in static and dynamic scenes.

    PubMed

    Smith, Tim J; Mital, Parag K

    2013-07-17

    Does viewing task influence gaze during dynamic scene viewing? Research into the factors influencing gaze allocation during free viewing of dynamic scenes has reported that the gaze of multiple viewers clusters around points of high motion (attentional synchrony), suggesting that gaze may be primarily under exogenous control. However, the influence of viewing task on gaze behavior in static scenes and during real-world interaction has been widely demonstrated. To dissociate exogenous from endogenous factors during dynamic scene viewing we tracked participants' eye movements while they (a) freely watched unedited videos of real-world scenes (free viewing) or (b) quickly identified where the video was filmed (spot-the-location). Static scenes were also presented as controls for scene dynamics. Free viewing of dynamic scenes showed greater attentional synchrony, longer fixations, and more gaze to people and areas of high flicker compared with static scenes. These differences were minimized by the viewing task. In comparison with the free viewing of dynamic scenes, during the spot-the-location task fixation durations were shorter, saccade amplitudes were longer, and gaze exhibited less attentional synchrony and was biased away from areas of flicker and people. These results suggest that the viewing task can have a significant influence on gaze during a dynamic scene but that endogenous control is slow to kick in as initial saccades default toward the screen center, areas of high motion and people before shifting to task-relevant features. This default-like viewing behavior returns after the viewing task is completed, confirming that gaze behavior is more predictable during free viewing of dynamic than static scenes but that this may be due to natural correlation between regions of interest (e.g., people) and motion.

  10. Automatic Generation of High Quality DSM Based on IRS-P5 Cartosat-1 Stereo Data

    NASA Astrophysics Data System (ADS)

    d'Angelo, Pablo; Uttenthaler, Andreas; Carl, Sebastian; Barner, Frithjof; Reinartz, Peter

    2010-12-01

    IRS-P5 Cartosat-1 high-resolution stereo satellite imagery is well suited for the creation of digital surface models (DSMs). A system for highly automated and operational DSM and orthoimage generation based on IRS-P5 Cartosat-1 imagery is presented, with an emphasis on automated processing and product quality. The proposed system processes IRS-P5 level-1 stereo scenes using the rational polynomial coefficients (RPC) universal sensor model. The described method uses an RPC correction based on DSM alignment instead of reference images with a lower lateral accuracy, which results in improved geolocation of the DSMs and orthoimages. Following RPC correction, highly detailed DSMs with 5 m grid spacing are derived using semiglobal matching. The proposed method is part of an operational Cartosat-1 processor for the generation of high-resolution DSMs. Evaluation of 18 scenes against independent ground-truth measurements indicates a mean lateral error (CE90) of 6.7 meters and a mean vertical accuracy (LE90) of 5.1 meters.
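
    The CE90 and LE90 figures quoted above are 90th-percentile error statistics; the short sketch below shows one way to compute them from per-checkpoint horizontal and vertical errors, using synthetic error samples rather than the study's data.

      # CE90 / LE90 from per-checkpoint errors: the 90th percentile of the horizontal
      # (circular) and vertical (linear) error magnitudes. Error samples are synthetic.
      import numpy as np

      rng = np.random.default_rng(4)
      dx, dy = rng.normal(0, 3.0, 500), rng.normal(0, 3.0, 500)   # horizontal errors, m
      dz = rng.normal(0, 2.5, 500)                                # vertical errors, m

      ce90 = np.percentile(np.hypot(dx, dy), 90)   # circular error, 90th percentile
      le90 = np.percentile(np.abs(dz), 90)         # linear (vertical) error, 90th percentile
      print(f"CE90 = {ce90:.1f} m, LE90 = {le90:.1f} m")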

  11. A qualitative approach for recovering relative depths in dynamic scenes

    NASA Technical Reports Server (NTRS)

    Haynes, S. M.; Jain, R.

    1987-01-01

    This approach to dynamic scene analysis is a qualitative one. It computes relative depths using very general rules. The depths calculated are qualitative in the sense that the only information obtained is which object is in front of which others. The motion is qualitative in the sense that the only required motion data is whether objects are moving toward or away from the camera. Reasoning, which takes into account the temporal character of the data and the scene, is qualitative. This approach to dynamic scene analysis can tolerate imprecise data because in dynamic scenes the data are redundant.

  12. Validation of the thermal code of RadTherm-IR, IR-Workbench, and F-TOM

    NASA Astrophysics Data System (ADS)

    Schwenger, Frédéric; Grossmann, Peter; Malaplate, Alain

    2009-05-01

    System assessment by image simulation requires synthetic scenarios that can be viewed by the device to be simulated. In addition to physical modeling of the camera, reliable modeling of scene elements is necessary. Software products for modeling target data in the IR should be capable of (i) predicting surface temperatures of scene elements over a long period of time and (ii) computing sensor views of the scenario. For such applications, FGAN-FOM acquired the software products RadTherm-IR (ThermoAnalytics Inc., Calumet, USA) and IR-Workbench (OKTAL-SE, Toulouse, France). Inspection of the accuracy of simulation results by validation is necessary before using these products for applications. In the first step of validation, the performance of both "thermal solvers" was determined through comparison of the computed diurnal surface temperatures of a simple object with the corresponding values from measurements. CUBI is a rather simple geometric object with well-known material parameters, which makes it suitable for testing and validating object models in the IR; it was used in this study as a test body. Comparison of calculated and measured surface temperature values will be presented, together with the results from the FGAN-FOM thermal object code F-TOM. In the second validation step, radiances of the simulated sensor views computed by RadTherm-IR and IR-Workbench will be compared with radiances retrieved from the recorded images taken by the sensor that was simulated. Strengths and weaknesses of the models RadTherm-IR, IR-Workbench, and F-TOM will be discussed.

  13. New technologies for HWIL testing of WFOV, large-format FPA sensor systems

    NASA Astrophysics Data System (ADS)

    Fink, Christopher

    2016-05-01

    Advancements in FPA density and associated wide-field-of-view infrared sensors (>=4000x4000 detectors) have outpaced current-art HWIL technology. Whether testing in optical projection or digital signal injection modes, current-art technologies for infrared scene projection, digital injection interfaces, and scene generation systems simply lack the required resolution and bandwidth. For example, the L3 Cincinnati Electronics ultra-high-resolution MWIR camera deployed in some UAV reconnaissance systems features 16 MP resolution at 60 Hz, while the current upper limit of IR emitter arrays is ~1 MP, and the single-channel dual-link DVI throughput of COTS graphics cards is limited to 2560x1580 pixels at 60 Hz. Moreover, there are significant challenges in real-time, closed-loop, physics-based IR scene generation for large-format FPAs, including the size and spatial detail required for very large area terrains, and multi-channel low-latency synchronization to achieve the required bandwidth. In this paper, the author's team presents some of their ongoing research and technical approaches toward HWIL testing of large-format FPAs with wide-FOV optics. One approach presented is a hybrid projection/injection design, in which digital signal injection is used to augment the resolution of current-art IRSPs, utilizing a multi-channel, high-fidelity, physics-based IR scene simulator in conjunction with a novel image-composition hardware unit to allow projection in the foveal region of the sensor, while non-foveal regions of the sensor array are simultaneously stimulated via direct injection into the post-detector electronics.
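
    The bandwidth gap described above can be made concrete with simple pixel-rate arithmetic, reproduced below; the 1024x1024 emitter format is an assumption standing in for the "~1 MP" current-art figure.

      # Pixel-rate arithmetic behind the bandwidth gap discussed above.
      def pixel_rate(width, height, fps):
          """Pixels per second for a given format and frame rate."""
          return width * height * fps

      sensor = pixel_rate(4000, 4000, 60)      # 16 MP WFOV sensor at 60 Hz
      dvi = pixel_rate(2560, 1580, 60)         # dual-link DVI limit quoted above
      emitter = pixel_rate(1024, 1024, 60)     # ~1 MP current-art emitter array (assumed format)

      print(f"sensor        : {sensor / 1e6:8.1f} Mpix/s")
      print(f"dual-link DVI : {dvi / 1e6:8.1f} Mpix/s  ({sensor / dvi:.1f}x short)")
      print(f"1 MP emitter  : {emitter / 1e6:8.1f} Mpix/s  ({sensor / emitter:.1f}x short)")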

  14. Image based performance analysis of thermal imagers

    NASA Astrophysics Data System (ADS)

    Wegner, D.; Repasi, E.

    2016-05-01

    Due to advances in technology, modern thermal imagers resemble sophisticated image processing systems in functionality. Advanced signal and image processing tools enclosed in the camera body extend the basic image-capturing capability of thermal cameras, in order to enhance the display presentation of the captured scene or of specific scene details. Usually, the implemented methods are proprietary company expertise, distributed without extensive documentation. This makes the comparison of thermal imagers, especially from different companies, a difficult task (or at least a very time-consuming and expensive one, e.g. requiring the execution of a field trial and/or an observer trial). For example, a thermal camera equipped with turbulence mitigation capability constitutes such a closed system. Fraunhofer IOSB has started to build up a system for testing thermal imagers by image-based methods in the lab environment. This will extend our capability of measuring the classical IR-system parameters (e.g. MTF, MTDP, etc.) in the lab. The system is set up around an IR scene projector, which is necessary for the thermal display (projection) of an image sequence for the IR camera under test. The same set of thermal test sequences can be presented to every unit under test; for turbulence mitigation tests, this could be, e.g., the same turbulence sequence. During system tests, gradual variation of input parameters (e.g. thermal contrast) can be applied. First ideas on test scene selection, and on how to assemble an imaging suite (a set of image sequences) for the analysis of imaging thermal systems containing such black boxes in the image-forming path, are discussed.

  15. Automated processing of thermal infrared images of Osservatorio Vesuviano permanent surveillance network by using Matlab code

    NASA Astrophysics Data System (ADS)

    Sansivero, Fabio; Vilardo, Giuseppe; Caputo, Teresa

    2017-04-01

    The permanent thermal infrared surveillance network of Osservatorio Vesuviano (INGV) is composed of 6 stations which acquire IR frames of fumarole fields in the Campi Flegrei caldera and inside the Vesuvius crater (Italy). The IR frames are uploaded to a dedicated server in the Surveillance Center of Osservatorio Vesuviano in order to process the infrared data and to extract all the information they contain. In a first phase, the infrared data are processed by an automated system (A.S.I.R.A. Acq - Automated System of IR Analysis and Acquisition) developed in the Matlab environment with a user-friendly graphical user interface (GUI). ASIRA daily generates time series of residual temperature values of the maximum temperatures observed in the IR scenes after the removal of seasonal effects. These time series are displayed in the Surveillance Room of Osservatorio Vesuviano and provide information about the evolution of the shallow temperature field of the observed areas. In particular, the features of ASIRA Acq include: a) efficient quality selection of IR scenes, b) IR image co-registration with respect to a reference frame, c) seasonal correction using a background-removal methodology, d) filing of the IR matrices and of the processed data in shared archives accessible to interrogation. The daily archived records can also be processed by ASIRA Plot (Matlab code with GUI) to visualize IR data time series and to help in evaluating input parameters for further data processing and analysis. Additional processing features are accomplished in a second phase by ASIRA Tools, Matlab code with a GUI developed to extract further information from the dataset in an automated way. The main functions of ASIRA Tools are: a) the analysis of temperature variations of each pixel of the IR frame in a given time interval, b) the removal of seasonal effects from the temperature of every pixel in the IR frames using an analytic approach (removal of the sinusoidal long-term seasonal component by a polynomial fit Matlab function - LTFC_SCOREF), c) the export of data in different raster formats (i.e. Surfer grd). An interesting example of the elaborations produced by ASIRA Tools is the map of the temperature changing rate, which provides remarkable information about the potential migration of fumarole activity. The high efficiency of Matlab in processing matrix data from IR scenes and the flexibility of this code-developing tool proved very useful for producing applications for volcanic surveillance aimed at monitoring the evolution of the surface temperature field in diffuse degassing volcanic areas.
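
    The ASIRA codes themselves are Matlab applications with GUIs and are not reproduced here; as a language-neutral illustration of the per-pixel seasonal correction they perform (an annual sinusoid plus a slow trend removed by least squares), the sketch below applies such a fit to a synthetic daily temperature series and keeps the residual. The model form and the data are assumptions, not the ASIRA implementation.

      # Per-pixel seasonal correction sketch: fit an annual sinusoid plus a linear
      # trend to a daily temperature series by least squares and keep the residual.
      # The synthetic series stands in for one IR-frame pixel.
      import numpy as np

      days = np.arange(3 * 365)                                   # three years, daily
      rng = np.random.default_rng(5)
      seasonal = 6.0 * np.sin(2 * np.pi * days / 365.25 - 1.0)    # annual cycle, K
      trend = 0.002 * days                                        # slow drift, K/day
      anomaly = np.where((days > 700) & (days < 760), 2.5, 0.0)   # a fumarole "event"
      series = 20.0 + seasonal + trend + anomaly + rng.normal(0, 0.3, days.size)

      # Design matrix: constant, linear term, annual sine and cosine.
      w = 2 * np.pi / 365.25
      A = np.column_stack([np.ones_like(days, dtype=float), days,
                           np.sin(w * days), np.cos(w * days)])
      coef, *_ = np.linalg.lstsq(A, series, rcond=None)
      residual = series - A @ coef                                # de-seasonalized signal
      print(f"mean residual near day 730: {residual[700:760].mean():.2f} K above background")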

  16. Current LWIR HSI Remote Sensing Activities at Defence R&D Canada - Valcartier

    DTIC Science & Technology

    2009-10-01

    measures the IR radiation from a target scene which is optically combined onto a single detector out-of-phase with the IR radiation from a corresponding ... Hyper-Cam-LW. The MODDIFS project involves the development of a leading-edge infrared (IR) hyperspectral sensor optimized for the standoff detection ... essentially offer the optical subtraction capability of the CATSI system but at high spatial resolution using an MCT focal plane array of 8484

  17. Detection of latent bloodstains beneath painted surfaces using reflected infrared photography.

    PubMed

    Farrar, Andrew; Porter, Glenn; Renshaw, Adrian

    2012-09-01

    Bloodstain evidence is a highly valued form of physical evidence commonly found at scenes involving violent crimes. However, painting over bloodstains will often conceal this type of evidence. There is limited research in the scientific literature that describes methods of detecting painted-over bloodstains. This project employed a modified digital single-lens reflex camera to investigate the effectiveness of infrared (IR) photography in detecting latent bloodstain evidence beneath a layer or multiple layers of paint. A qualitative evaluation was completed by comparing images taken of a series of samples using both IR and standard (visible light) photography. Further quantitative image analysis was used to verify the findings. Results from this project indicate that bloodstain evidence can be detected beneath up to six layers of paint using reflected IR; however, the results vary depending on the characteristics of the paint. This technique provides crime scene specialists with a new field method to assist in locating, visualizing, and documenting painted-over bloodstain evidence. © 2012 American Academy of Forensic Sciences.

  18. EO/IR scene generation open source initiative for real-time hardware-in-the-loop and all-digital simulation

    NASA Astrophysics Data System (ADS)

    Morris, Joseph W.; Lowry, Mac; Boren, Brett; Towers, James B.; Trimble, Darian E.; Bunfield, Dennis H.

    2011-06-01

    The US Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) and the Redstone Test Center (RTC) have formed the Scene Generation Development Center (SGDC) to support the Department of Defense (DoD) open source EO/IR scene generation initiative for real-time hardware-in-the-loop and all-digital simulation. Various branches of the DoD have invested significant resources in the development of advanced scene and target signature generation codes. The SGDC goal is to maintain unlimited government rights and controlled access to government open source scene generation and signature codes. In addition, the SGDC provides development support to a multi-service community of test and evaluation (T&E) users, developers, and integrators in a collaborative environment. The SGDC has leveraged the DoD Defense Information Systems Agency (DISA) ProjectForge (https://Project.Forge.mil), which provides a collaborative development and distribution environment for the DoD community. The SGDC will develop and maintain several codes for tactical and strategic simulation, such as the Joint Signature Image Generator (JSIG), the Multi-spectral Advanced Volumetric Real-time Imaging Compositor (MAVRIC), and Office of the Secretary of Defense (OSD) Test and Evaluation Science and Technology (T&E/S&T) thermal modeling and atmospherics packages, such as EOView, CHARM, and STAR. Other utility packages included are the ContinuumCore for real-time messaging and data management and IGStudio for run-time visualization and scenario generation.

  19. Making methane visible

    NASA Astrophysics Data System (ADS)

    Gålfalk, Magnus; Olofsson, Göran; Crill, Patrick; Bastviken, David

    2016-04-01

    Methane (CH4) is one of the most important greenhouse gases, and an important energy carrier in biogas and natural gas. Its large scale emission patterns have been unpredictable and the source and sink distributions are poorly constrained. Remote assessment of CH4 with high sensitivity at m2 spatial resolution would allow detailed mapping of near ground distribution and anthropogenic sources and sinks in landscapes but has hitherto not been possible. Here we show that CH4 gradients can be imaged on

  20. ARGUS/LLNL IR Camera Calibration and Characterization

    DTIC Science & Technology

    1989-11-01

    122 of the 244 rows, once every 1/60 second. The even-numbered detector rows, beginning with row zero, are read out in one field; the odd-numbered...Radiometrically, a very cold reference scene is desirable because the absolute signal level of the reference scene is subtracted from all subsequent...to have effectively zero radiant energy within the spectral passband of the sensor, and so may be ignored. 1.3 LABORATORY EQUIPMENT CONFIGURATION The

  1. Radiometric consistency assessment of hyperspectral infrared sounders

    NASA Astrophysics Data System (ADS)

    Wang, L.; Han, Y.; Jin, X.; Chen, Y.; Tremblay, D. A.

    2015-07-01

    The radiometric and spectral consistency among the Atmospheric Infrared Sounder (AIRS), the Infrared Atmospheric Sounding Interferometer (IASI), and the Cross-track Infrared Sounder (CrIS) is fundamental for the creation of long-term infrared (IR) hyperspectral radiance benchmark datasets for both inter-calibration and climate-related studies. In this study, the CrIS radiance measurements on the Suomi National Polar-orbiting Partnership (SNPP) satellite are directly compared with IASI on MetOp-A and -B at the finest spectral scale and with AIRS on Aqua in 25 selected spectral regions through one year of simultaneous nadir overpass (SNO) observations to evaluate the radiometric consistency of these four hyperspectral IR sounders. The spectra from different sounders are paired together through strict spatial and temporal collocation. The uniform scenes are selected by examining the collocated Visible Infrared Imaging Radiometer Suite (VIIRS) pixels. Their brightness temperature (BT) differences are then calculated by converting the spectra onto common spectral grids. The results indicate that CrIS agrees well with IASI on MetOp-A and IASI on MetOp-B at the longwave IR (LWIR) and middle-wave IR (MWIR) bands with 0.1-0.2 K differences. There are no apparent scene-dependent patterns for BT differences between CrIS and IASI for individual spectral channels. CrIS and AIRS are compared at the 25 spectral regions for both Polar and Tropical SNOs. The combined global SNO datasets indicate that the CrIS-AIRS BT differences are less than or around 0.1 K in 21 of the 25 comparison spectral regions and range from 0.15 to 0.21 K in the remaining 4 spectral regions. CrIS-AIRS BT differences in some comparison spectral regions show weak scene-dependent features.

  2. Radiometric consistency assessment of hyperspectral infrared sounders

    NASA Astrophysics Data System (ADS)

    Wang, L.; Han, Y.; Jin, X.; Chen, Y.; Tremblay, D. A.

    2015-11-01

    The radiometric and spectral consistency among the Atmospheric Infrared Sounder (AIRS), the Infrared Atmospheric Sounding Interferometer (IASI), and the Cross-track Infrared Sounder (CrIS) is fundamental for the creation of long-term infrared (IR) hyperspectral radiance benchmark data sets for both intercalibration and climate-related studies. In this study, the CrIS radiance measurements on Suomi National Polar-orbiting Partnership (SNPP) satellite are directly compared with IASI on MetOp-A and MetOp-B at the finest spectral scale and with AIRS on Aqua in 25 selected spectral regions through simultaneous nadir overpass (SNO) observations in 2013, to evaluate radiometric consistency of these four hyperspectral IR sounders. The spectra from different sounders are paired together through strict spatial and temporal collocation. The uniform scenes are selected by examining the collocated Visible Infrared Imaging Radiometer Suite (VIIRS) pixels. Their brightness temperature (BT) differences are then calculated by converting the spectra onto common spectral grids. The results indicate that CrIS agrees well with IASI on MetOp-A and IASI on MetOp-B at the long-wave IR (LWIR) and middle-wave IR (MWIR) bands with 0.1-0.2 K differences. There are no apparent scene-dependent patterns for BT differences between CrIS and IASI for individual spectral channels. CrIS and AIRS are compared at the 25 spectral regions for both polar and tropical SNOs. The combined global SNO data sets indicate that the CrIS-AIRS BT differences are less than or around 0.1 K among 21 of 25 spectral regions and they range from 0.15 to 0.21 K in the remaining four spectral regions. CrIS-AIRS BT differences in some comparison spectral regions show weak scene-dependent features.
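
    As a rough illustration of the kind of processing such inter-comparisons rely on, the sketch below converts sounder radiance spectra to brightness temperature via the inverse Planck function (wavenumber form) and differences two spectra after resampling one onto the other's grid. The unit conventions, function names, and the use of simple linear interpolation as the common-grid step are assumptions made for this sketch, not details taken from the study.

      import numpy as np

      # Physical constants (SI units)
      H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

      def brightness_temperature(radiance_mw, wavenumber_cm):
          # radiance_mw   : spectral radiance in mW / (m^2 sr cm^-1)
          # wavenumber_cm : wavenumber grid in cm^-1
          nu = np.asarray(wavenumber_cm, dtype=float) * 100.0   # cm^-1 -> m^-1
          rad = np.asarray(radiance_mw, dtype=float) * 1.0e-5   # -> W / (m^2 sr m^-1)
          return (H * C * nu / KB) / np.log1p(2.0 * H * C**2 * nu**3 / rad)

      def bt_difference(wn_a, rad_a, wn_b, rad_b):
          # Resample spectrum B onto grid A (a crude stand-in for proper spectral
          # resampling), then difference the brightness temperatures.
          rad_b_on_a = np.interp(wn_a, wn_b, rad_b)
          return (brightness_temperature(rad_a, wn_a)
                  - brightness_temperature(rad_b_on_a, wn_a))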

  3. Shutterless non-uniformity correction for the long-term stability of an uncooled long-wave infrared camera

    NASA Astrophysics Data System (ADS)

    Liu, Chengwei; Sui, Xiubao; Gu, Guohua; Chen, Qian

    2018-02-01

    For the uncooled long-wave infrared (LWIR) camera, the infrared (IR) irradiation the focal plane array (FPA) receives is a crucial factor that affects the image quality. Ambient temperature fluctuation as well as system power consumption can result in changes in FPA temperature and radiation characteristics inside the IR camera; these will further degrade the imaging performance. In this paper, we present a novel shutterless non-uniformity correction method to compensate for non-uniformity derived from the variation of ambient temperature. Our method combines a calibration-based method and the properties of a scene-based method to obtain correction parameters at different ambient temperature conditions, so that the IR camera performance can be less influenced by ambient temperature fluctuation or system power consumption. The calibration process is carried out in a temperature chamber with slowly changing ambient temperature and a black body as a uniform radiation source. A sufficient number of uniform images is captured and the gain coefficients are calculated during this period. Then, in practical application, the offset parameters are calculated via the least squares method based on the gain coefficients, the captured uniform images and the actual scene. Thus we can get a corrected output through the gain coefficients and offset parameters. The performance of our proposed method is evaluated on realistic IR images and compared with two existing methods. The images we used in experiments are obtained by a 384 × 288 pixel uncooled LWIR camera. Results show that our proposed method can adaptively update correction parameters as the actual target scene changes and is more stable under temperature fluctuation than the other two methods.
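
    For orientation, the sketch below shows the bare two-point form of such a correction: per-pixel gains and offsets from two black-body levels, plus a re-fit of the offsets from a later uniform frame while the gains are kept fixed. Treating the re-fit frame as uniform is an assumption of this sketch; the paper's least-squares offset update against an arbitrary scene is more involved.

      import numpy as np

      def two_point_nuc(flat_low, flat_high):
          # Per-pixel gain and offset from two uniform black-body frame stacks
          # (shape: frames x H x W), the classic two-point calibration.
          low, high = flat_low.mean(axis=0), flat_high.mean(axis=0)
          span = high - low
          gain = span.mean() / span              # normalise so the mean gain is 1
          offset = low.mean() - gain * low       # corrected flats become uniform
          return gain, offset

      def apply_nuc(raw_frame, gain, offset):
          # Apply the linear correction y = G * x + O to a raw frame.
          return gain * raw_frame + offset

      def refit_offset(raw_uniform_frame, gain):
          # Re-estimate offsets from a new (assumed-uniform) frame, e.g. captured
          # at a different ambient temperature, keeping the calibrated gains fixed.
          # A simple stand-in for the least-squares offset update in the abstract.
          corrected_level = (gain * raw_uniform_frame).mean()
          return corrected_level - gain * raw_uniform_frame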

  4. Description of the dynamic infrared background/target simulator (DIBS)

    NASA Astrophysics Data System (ADS)

    Lujan, Ignacio

    1988-01-01

    The purpose of the Dynamic Infrared Background/Target Simulator (DIBS) is to project dynamic infrared scenes to a test sensor; e.g., a missile seeker that is sensitive to infrared energy. The projected scene will include target(s) and background. This system was designed to present flicker-free infrared scenes in the 8 micron to 12 micron wavelength region. The major subassemblies of the DIBS are the laser write system (LWS), vanadium dioxide modulator assembly, scene data buffer (SDB), and the optical image translator (OIT). This paper describes the overall concept and design of the infrared scene projector followed by some details of the LWS and VO2 modulator. Also presented are brief descriptions of the SDB and OIT.

  5. A rain pixel recovery algorithm for videos with highly dynamic scenes.

    PubMed

    Jie Chen; Lap-Pui Chau

    2014-03-01

    Rain removal is a very useful and important technique in applications such as security surveillance and movie editing. Several rain removal algorithms have been proposed in recent years, where photometric, chromatic, and probabilistic properties of the rain have been exploited to detect and remove the rainy effect. Current methods generally work well with light rain and relatively static scenes; when dealing with heavier rainfall in dynamic scenes, these methods give very poor visual results. The proposed algorithm is based on motion segmentation of the dynamic scene. After applying photometric and chromatic constraints for rain detection, rain removal filters are applied to pixels such that their dynamic properties as well as motion occlusion cues are considered; both spatial and temporal information is then adaptively exploited during rain pixel recovery. Results show that the proposed algorithm has a much better performance for rainy scenes with large motion than existing algorithms.
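
    As a rough illustration of the photometric and chromatic rain cues mentioned above, the sketch below flags pixels that show a short, colour-neutral intensity spike between neighbouring frames and fills them from their temporal neighbours. The thresholds, the three-frame window, and the naive recovery step are illustrative assumptions; the paper's motion segmentation and occlusion handling are not reproduced here.

      import numpy as np

      def detect_rain_pixels(prev_f, cur_f, next_f, dI_min=3.0, chroma_tol=10.0):
          # Photometric cue: a short positive intensity spike relative to both
          # temporal neighbours (rain streaks usually last about one frame).
          # Chromatic cue: the spike is roughly equal in R, G and B.
          # Frames are float arrays of shape (H, W, 3); thresholds are illustrative.
          d_prev = cur_f - prev_f
          d_next = cur_f - next_f
          spike = (d_prev.mean(axis=2) > dI_min) & (d_next.mean(axis=2) > dI_min)
          achromatic = (d_prev.max(axis=2) - d_prev.min(axis=2)) < chroma_tol
          return spike & achromatic

      def recover_rain_pixels(prev_f, cur_f, next_f, mask):
          # Naive temporal recovery: replace flagged pixels with the mean of their
          # temporal neighbours (the paper additionally handles motion occlusion).
          out = cur_f.copy()
          out[mask] = 0.5 * (prev_f[mask] + next_f[mask])
          return out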

  6. High-resolution focal plane array IR detection modules and digital signal processing technologies at AIM

    NASA Astrophysics Data System (ADS)

    Cabanski, Wolfgang A.; Breiter, Rainer; Koch, R.; Mauk, Karl-Heinz; Rode, Werner; Ziegler, Johann; Eberhardt, Kurt; Oelmaier, Reinhard; Schneider, Harald; Walther, Martin

    2000-07-01

    Full video format focal plane array (FPA) modules with up to 640 × 512 pixels have been developed for high resolution imaging applications in either mercury cadmium telluride (MCT) mid wave (MWIR) infrared (IR) or platinum silicide (PtSi) and quantum well infrared photodetector (QWIP) technology as low cost alternatives to MCT for high performance IR imaging in the MWIR or long wave spectral band (LWIR). For the QWIPs, a new photovoltaic technology was introduced for improved NETD performance and higher dynamic range. MCT units provide fast frame rates > 100 Hz together with state of the art thermal resolution NETD < 20 mK for short snapshot integration times of typically 2 ms. PtSi and QWIP modules are usually operated in a rolling frame integration mode with frame rates of 30 - 60 Hz and provide thermal resolutions of NETD < 80 mK for PtSi and NETD < 20 mK for QWIP, respectively. Due to the lower quantum efficiency compared to MCT, however, the integration time is typically chosen to be as long as 10 - 20 ms. The heat load of the integrated detector cooler assemblies (IDCAs) could be reduced to a level so low that a 1 W split linear cooler provides sufficient cooling power to operate the modules -- including the QWIP with 60 K operation temperature -- at ambient temperatures up to 65 degrees Celsius. Miniaturized command/control electronics (CCE) available for all modules provide a standardized digital interface with 14 bit analogue to digital conversion for state of the art correctability, access to highly dynamic scenes without any loss of information, and simplified exchangeability of the units. New modular image processing hardware platforms and software for image visualization and nonuniformity correction, including scene based self learning algorithms, had to be developed to accommodate the high data rates of up to 18 Mpixels/s with 14-bit deep data, allowing nonlinear effects to be taken into account and the full NETD to be reached by accurate reduction of residual fixed pattern noise. The main features of these modules are summarized together with measured performance data for long range detection systems with moderately fast to slow F-numbers like F/2.0 - F/3.5. An outlook shows the most recent activities at AIM, heading for multicolor and faster frame rate detector modules based on MCT devices.

  7. Irma 5.1 multisensor signature prediction model

    NASA Astrophysics Data System (ADS)

    Savage, James; Coker, Charles; Thai, Bea; Aboutalib, Omar; Yamaoka, Neil; Kim, Charles

    2005-05-01

    The Irma synthetic signature prediction code is being developed to facilitate the research and development of multisensor systems. Irma was one of the first high resolution Infrared (IR) target and background signature models to be developed for tactical weapon application. Originally developed in 1980 by the Munitions Directorate of the Air Force Research Laboratory (AFRL/MN), the Irma model was used exclusively to generate IR scenes. In 1988, a number of significant upgrades to Irma were initiated including the addition of a laser (or active) channel. This two-channel version was released to the user community in 1990. In 1992, an improved scene generator was incorporated into the Irma model, which supported correlated frame-to-frame imagery. A passive IR/millimeter wave (MMW) code was completed in 1994. This served as the cornerstone for the development of the co-registered active/passive IR/MMW model, Irma 4.0. In 2000, Irma version 5.0 was released, which encompassed several upgrades to both the physical models and software. Circular polarization was added to the passive channel and the Doppler capability was added to the active MMW channel. In 2002, the multibounce technique was added to the Irma passive channel. In the ladar channel, a user-friendly Ladar Sensor Assistant (LSA) was incorporated which provides capability and flexibility for sensor modeling. Irma 5.0 runs on several platforms including Windows, Linux, Solaris, and SGI Irix. Since 2000, additional capabilities and enhancements have been added to the ladar channel including polarization and speckle effects. Work is still ongoing to add a time-jittering model to the ladar channel. A new user interface has been introduced to aid users in the mechanics of scene generation and running the Irma code. The user interface provides a canvas where a user can add and remove objects using mouse clicks to construct a scene. The scene can then be visualized to find the desired sensor position. The synthetic ladar signatures have been validated twice and underwent a third validation test near the end of 04. These capabilities will be integrated into the next release, Irma 5.1, scheduled for completion in the summer of FY05. Irma is currently being used to support a number of civilian and military applications. The Irma user base includes over 130 agencies within the Air Force, Army, Navy, DARPA, NASA, Department of Transportation, academia, and industry. The purpose of this paper is to report the progress of the Irma 5.1 development effort.

  8. Skidmore Clips of Neutral and Expressive Scenarios (SCENES): Novel dynamic stimuli for social cognition research.

    PubMed

    Schofield, Casey A; Weeks, Justin W; Taylor, Lea; Karnedy, Colten

    2015-12-30

    Social cognition research has relied primarily on photographic emotional stimuli. Such stimuli likely have limited ecological validity in terms of representing real world social interactions. The current study presents evidence for the validity of a new stimuli set of dynamic social SCENES (Skidmore Clips of Emotional and Neutral Expressive Scenarios). To develop these stimuli, ten undergraduate theater students were recruited to portray members of an audience. This audience was arranged to display seven varying configurations of social feedback, ranging from unequivocally approving to unequivocally disapproving (including three different versions of balanced/neutral scenes). Validity data were obtained from 383 adult participants recruited from Amazon's Mechanical Turk. Each participant viewed three randomly assigned scenes and provided a rating of the perceived criticalness of each scene. Results indicate that the SCENES reflect the intended range of emotionality, and pairwise comparisons suggest that the SCENES capture distinct levels of critical feedback. Overall, the SCENES stimuli set represents a publicly available (www.scenesstimuli.com) resource for researchers interested in measuring social cognition in the presence of dynamic and naturalistic social stimuli. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  9. The Dynamic Photometric Stereo Method Using a Multi-Tap CMOS Image Sensor.

    PubMed

    Yoda, Takuya; Nagahara, Hajime; Taniguchi, Rin-Ichiro; Kagawa, Keiichiro; Yasutomi, Keita; Kawahito, Shoji

    2018-03-05

    The photometric stereo method enables estimation of surface normals from images that have been captured using different but known lighting directions. The classical photometric stereo method requires at least three images to determine the normals in a given scene. However, this method cannot be applied to dynamic scenes because it is assumed that the scene remains static while the required images are captured. In this work, we present a dynamic photometric stereo method for estimation of the surface normals in a dynamic scene. We use a multi-tap complementary metal-oxide-semiconductor (CMOS) image sensor to capture the input images required for the proposed photometric stereo method. This image sensor can divide the electrons from the photodiode from a single pixel into the different taps of the exposures and can thus capture multiple images under different lighting conditions with almost identical timing. We implemented a camera lighting system and created a software application to enable estimation of the normal map in real time. We also evaluated the accuracy of the estimated surface normals and demonstrated that our proposed method can estimate the surface normals of dynamic scenes.
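
    To make the classical formulation concrete, the sketch below solves the Lambertian photometric stereo system I = L (albedo * n) per pixel by least squares, given K >= 3 images and their known unit lighting directions. It illustrates only the textbook method referred to in the abstract, not the multi-tap CMOS acquisition or the real-time implementation; array shapes and names are our own choices.

      import numpy as np

      def photometric_stereo(images, light_dirs):
          # images     : array of shape (K, H, W), K >= 3 grey-level images
          # light_dirs : array of shape (K, 3), unit lighting direction per image
          # Returns per-pixel unit normals (H, W, 3) and albedo (H, W).
          K, H, W = images.shape
          I = images.reshape(K, -1)                           # (K, H*W)
          # Solve light_dirs @ g = I in the least-squares sense; g = albedo * normal
          g, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)  # (3, H*W)
          albedo = np.linalg.norm(g, axis=0)
          normals = np.where(albedo > 0, g / np.maximum(albedo, 1e-12), 0.0)
          return normals.T.reshape(H, W, 3), albedo.reshape(H, W)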

  10. Advanced radiometric and interferometric millimeter-wave scene simulations

    NASA Technical Reports Server (NTRS)

    Hauss, B. I.; Moffa, P. J.; Steele, W. G.; Agravante, H.; Davidheiser, R.; Samec, T.; Young, S. K.

    1993-01-01

    Smart munitions and weapons utilize various imaging sensors (including passive IR, active and passive millimeter-wave, and visible wavebands) to detect/identify targets at short standoff ranges and in varied terrain backgrounds. In order to design and evaluate these sensors under a variety of conditions, a high-fidelity scene simulation capability is necessary. Such a capability for passive millimeter-wave scene simulation exists at TRW. TRW's Advanced Radiometric Millimeter-Wave Scene Simulation (ARMSS) code is a rigorous, benchmarked, end-to-end passive millimeter-wave scene simulation code for interpreting millimeter-wave data, establishing scene signatures and evaluating sensor performance. In passive millimeter-wave imaging, resolution is limited due to wavelength and aperture size. Where high resolution is required, the utility of passive millimeter-wave imaging is confined to short ranges. Recent developments in interferometry have made possible high resolution applications on military platforms. Interferometry or synthetic aperture radiometry allows the creation of a high resolution image with a sparsely filled aperture. Borrowing from research work in radio astronomy, we have developed and tested at TRW scene reconstruction algorithms that allow the recovery of the scene from a relatively small number of spatial frequency components. In this paper, the TRW modeling capability is described and numerical results are presented.

  11. Rapid gist perception of meaningful real-life scenes: Exploring individual and gender differences in multiple categorization tasks

    PubMed Central

    Vanmarcke, Steven; Wagemans, Johan

    2015-01-01

    In everyday life, we are generally able to dynamically understand and adapt to socially (ir)relevant encounters, and to make appropriate decisions about these. All of this requires an impressive ability to directly filter and obtain the most informative aspects of a complex visual scene. Such rapid gist perception can be assessed in multiple ways. In the ultrafast categorization paradigm developed by Simon Thorpe et al. (1996), participants get a clear categorization task in advance and succeed at detecting the target object of interest (animal) almost perfectly (even with 20 ms exposures). Since this pioneering work, follow-up studies consistently reported population-level reaction time differences on different categorization tasks, indicating a superordinate advantage (animal versus dog) and effects of perceptual similarity (animals versus vehicles) and object category size (natural versus animal versus dog). In this study, we replicated and extended these separate findings by using a systematic collection of different categorization tasks (varying in presentation time, task demands, and stimuli) and focusing on individual differences in terms of, e.g., gender and intelligence. In addition to replicating the main findings from the literature, we find subtle, yet consistent gender differences (women faster than men). PMID:26034569

  12. Enhanced RGB-D Mapping Method for Detailed 3D Indoor and Outdoor Modeling

    PubMed Central

    Tang, Shengjun; Zhu, Qing; Chen, Wu; Darwish, Walid; Wu, Bo; Hu, Han; Chen, Min

    2016-01-01

    RGB-D sensors (sensors with an RGB camera and a depth camera) are novel sensing systems that capture RGB images along with pixel-wise depth information. Although they are widely used in various applications, RGB-D sensors have significant drawbacks with respect to 3D dense mapping, including limited measurement ranges (e.g., within 3 m) and errors in depth measurement that increase with distance from the sensor. In this paper, we present a novel approach to geometrically integrate the depth scene and the RGB scene to enlarge the measurement distance of RGB-D sensors and enrich the details of the model generated from depth images. First, precise calibration for RGB-D sensors is introduced. In addition to the calibration of internal and external parameters for both the IR camera and the RGB camera, the relative pose between the RGB camera and the IR camera is also calibrated. Second, to ensure the pose accuracy of the RGB images, a refined false feature match rejection method is introduced by combining the depth information and initial camera poses between frames of the RGB-D sensor. Then, a global optimization model is used to improve the accuracy of the camera pose, decreasing the inconsistencies between the depth frames in advance. In order to eliminate the geometric inconsistencies between the RGB scene and the depth scene, the scale ambiguity problem encountered during the pose estimation with RGB image sequences can be resolved by integrating the depth and visual information, and a robust rigid-transformation recovery method is developed to register the RGB scene to the depth scene. The benefit of the proposed joint optimization method is first evaluated with the publicly available benchmark datasets collected with Kinect. Then, the proposed method is examined by tests with two sets of datasets collected in both outdoor and indoor environments. The experimental results demonstrate the feasibility and robustness of the proposed method. PMID:27690028

  13. Enhanced RGB-D Mapping Method for Detailed 3D Indoor and Outdoor Modeling.

    PubMed

    Tang, Shengjun; Zhu, Qing; Chen, Wu; Darwish, Walid; Wu, Bo; Hu, Han; Chen, Min

    2016-09-27

    RGB-D sensors (sensors with an RGB camera and a depth camera) are novel sensing systems that capture RGB images along with pixel-wise depth information. Although they are widely used in various applications, RGB-D sensors have significant drawbacks with respect to 3D dense mapping, including limited measurement ranges (e.g., within 3 m) and errors in depth measurement that increase with distance from the sensor. In this paper, we present a novel approach to geometrically integrate the depth scene and the RGB scene to enlarge the measurement distance of RGB-D sensors and enrich the details of the model generated from depth images. First, precise calibration for RGB-D sensors is introduced. In addition to the calibration of internal and external parameters for both the IR camera and the RGB camera, the relative pose between the RGB camera and the IR camera is also calibrated. Second, to ensure the pose accuracy of the RGB images, a refined false feature match rejection method is introduced by combining the depth information and initial camera poses between frames of the RGB-D sensor. Then, a global optimization model is used to improve the accuracy of the camera pose, decreasing the inconsistencies between the depth frames in advance. In order to eliminate the geometric inconsistencies between the RGB scene and the depth scene, the scale ambiguity problem encountered during the pose estimation with RGB image sequences can be resolved by integrating the depth and visual information, and a robust rigid-transformation recovery method is developed to register the RGB scene to the depth scene. The benefit of the proposed joint optimization method is first evaluated with the publicly available benchmark datasets collected with Kinect. Then, the proposed method is examined by tests with two sets of datasets collected in both outdoor and indoor environments. The experimental results demonstrate the feasibility and robustness of the proposed method.
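
    As a generic illustration of the registration step described above, the sketch below estimates a closed-form least-squares similarity transform (scale, rotation, translation) between corresponding 3D point sets, in the spirit of Umeyama's method. It is a stand-in under our own assumptions, not the authors' robust rigid-transformation recovery procedure.

      import numpy as np

      def similarity_transform(src, dst):
          # Finds scale s, rotation R and translation t with  dst ~= s * R @ src + t.
          # src, dst : corresponding 3D points, arrays of shape (N, 3), N >= 3.
          mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
          xs, xd = src - mu_s, dst - mu_d
          cov = xd.T @ xs / len(src)                 # 3x3 cross-covariance
          U, S, Vt = np.linalg.svd(cov)
          D = np.eye(3)
          if np.linalg.det(U @ Vt) < 0:              # avoid reflections
              D[2, 2] = -1.0
          R = U @ D @ Vt
          scale = np.trace(np.diag(S) @ D) / xs.var(axis=0).sum()
          t = mu_d - scale * R @ mu_s
          return scale, R, t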

  14. Using articulated scene models for dynamic 3d scene analysis in vista spaces

    NASA Astrophysics Data System (ADS)

    Beuter, Niklas; Swadzba, Agnes; Kummert, Franz; Wachsmuth, Sven

    2010-09-01

    In this paper we describe an efficient but detailed new approach to analyze complex dynamic scenes directly in 3D. The arising information is important for mobile robots to solve tasks in the area of household robotics. In our work a mobile robot builds an articulated scene model by observing the environment in the visual field or rather in the so-called vista space. The articulated scene model consists of essential knowledge about the static background, about autonomously moving entities like humans or robots and finally, in contrast to existing approaches, information about articulated parts. These parts describe movable objects like chairs, doors or other tangible entities, which could be moved by an agent. The combination of the static scene, the self-moving entities and the movable objects in one articulated scene model enhances the calculation of each single part. The reconstruction process for parts of the static scene benefits from removal of the dynamic parts and in turn, the moving parts can be extracted more easily through the knowledge about the background. In our experiments we show that the system delivers simultaneously an accurate static background model, moving persons and movable objects. This information of the articulated scene model enables a mobile robot to detect and keep track of interaction partners, to navigate safely through the environment and finally, to strengthen the interaction with the user through the knowledge about the 3D articulated objects and 3D scene analysis.

  15. Cortical networks dynamically emerge with the interplay of slow and fast oscillations for memory of a natural scene.

    PubMed

    Mizuhara, Hiroaki; Sato, Naoyuki; Yamaguchi, Yoko

    2015-05-01

    Neural oscillations are crucial for revealing dynamic cortical networks and for serving as a possible mechanism of inter-cortical communication, especially in association with mnemonic function. The interplay of the slow and fast oscillations might dynamically coordinate the mnemonic cortical circuits to rehearse stored items during working memory retention. We recorded simultaneous EEG-fMRI during a working memory task involving a natural scene to verify whether the cortical networks emerge with the neural oscillations for memory of the natural scene. The slow EEG power was enhanced in association with the better accuracy of working memory retention, and accompanied cortical activities in the mnemonic circuits for the natural scene. Fast oscillation showed a phase-amplitude coupling to the slow oscillation, and its power was tightly coupled with the cortical activities for representing the visual images of natural scenes. The mnemonic cortical circuit with the slow neural oscillations would rehearse the distributed natural scene representations with the fast oscillation for working memory retention. The coincidence of the natural scene representations could be obtained by the slow oscillation phase to create a coherent whole of the natural scene in the working memory. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Complex scenes and situations visualization in hierarchical learning algorithm with dynamic 3D NeoAxis engine

    NASA Astrophysics Data System (ADS)

    Graham, James; Ternovskiy, Igor V.

    2013-06-01

    We applied a two stage unsupervised hierarchical learning system to model complex dynamic surveillance and cyber space monitoring systems using a non-commercial version of the NeoAxis visualization software. The hierarchical scene learning and recognition approach is based on hierarchical expectation maximization, and was linked to a 3D graphics engine for validation of learning and classification results and understanding the human - autonomous system relationship. Scene recognition is performed by taking synthetically generated data and feeding it to a dynamic logic algorithm. The algorithm performs hierarchical recognition of the scene by first examining the features of the objects to determine which objects are present, and then determines the scene based on the objects present. This paper presents a framework within which low level data linked to higher-level visualization can provide support to a human operator and be evaluated in a detailed and systematic way.

  17. Real-time generation of infrared ocean scene based on GPU

    NASA Astrophysics Data System (ADS)

    Jiang, Zhaoyi; Wang, Xun; Lin, Yun; Jin, Jianqiu

    2007-12-01

    Infrared (IR) image synthesis for ocean scenes has become more and more important nowadays, especially for remote sensing and military applications. Although a number of works present ready-to-use simulations, those techniques cover only a few of the possible ways in which water interacts with the environment, and the detailed calculation of ocean temperature is rarely considered by previous investigators. With the advance of programmable features of graphics cards, many algorithms previously limited to offline processing have become feasible for real-time usage. In this paper, we propose an efficient algorithm for real-time rendering of an infrared ocean scene using the newest features of programmable graphics processors (GPUs). It differs from previous works in three aspects: adaptive GPU-based ocean surface tessellation, a sophisticated thermal balance equation for the ocean surface, and GPU-based rendering of the infrared ocean scene. Finally some results of infrared images are shown, which are in good accordance with real images.

  18. The Dynamic Photometric Stereo Method Using a Multi-Tap CMOS Image Sensor †

    PubMed Central

    Yoda, Takuya; Nagahara, Hajime; Taniguchi, Rin-ichiro; Kagawa, Keiichiro; Yasutomi, Keita; Kawahito, Shoji

    2018-01-01

    The photometric stereo method enables estimation of surface normals from images that have been captured using different but known lighting directions. The classical photometric stereo method requires at least three images to determine the normals in a given scene. However, this method cannot be applied to dynamic scenes because it is assumed that the scene remains static while the required images are captured. In this work, we present a dynamic photometric stereo method for estimation of the surface normals in a dynamic scene. We use a multi-tap complementary metal-oxide-semiconductor (CMOS) image sensor to capture the input images required for the proposed photometric stereo method. This image sensor can divide the electrons from the photodiode from a single pixel into the different taps of the exposures and can thus capture multiple images under different lighting conditions with almost identical timing. We implemented a camera lighting system and created a software application to enable estimation of the normal map in real time. We also evaluated the accuracy of the estimated surface normals and demonstrated that our proposed method can estimate the surface normals of dynamic scenes. PMID:29510599

  19. Image registration of naval IR images

    NASA Astrophysics Data System (ADS)

    Rodland, Arne J.

    1996-06-01

    In a real world application an image from a stabilized sensor on a moving platform will not be 100 percent stabilized. There will always be a small unknown error in the stabilization due to factors such as dynamic deformations in the structure between the sensor and the reference Inertial Navigation Unit, servo inaccuracies, etc. For a high resolution imaging sensor this stabilization error causes the image to move several pixels in an unknown direction between frames. To be able to detect and track small moving objects from such a sensor, this unknown movement of the sensor image must be estimated. An algorithm that searches for land contours in the image has been evaluated. The algorithm searches for high contrast points distributed over the whole image. As long as moving objects in the scene only cover a small area of the scene, most of the points are located on solid ground. By matching the list of points from frame to frame, the movement of the image due to stabilization errors can be estimated and compensated. The point list is then searched for points whose movement diverges from the estimated stabilization error. These points are then assumed to be located on moving objects. Points assumed to be located on moving objects are gradually exchanged with new points located in the same area. Most of the processing is performed on the list of points and not on the complete image. The algorithm is therefore very fast and well suited for real time implementation. The algorithm has been tested on images from an experimental IR scanner. Stabilization errors were added artificially to the images so that the output from the algorithm could be compared with the artificially added stabilization errors.
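
    A minimal sketch of the frame-to-frame estimation idea is given below: the global stabilization error is taken as a robust (median) displacement of the matched high-contrast points, and points whose motion diverges from it are flagged as belonging to moving objects. The median estimator and the pixel tolerance are assumptions of this sketch rather than details of the algorithm in the paper.

      import numpy as np

      def estimate_stabilisation_error(pts_prev, pts_cur, tol=2.0):
          # pts_prev, pts_cur : (N, 2) arrays of matched point positions (pixels).
          # The global shift is taken as the median displacement, which is robust
          # as long as most points lie on static terrain; points whose displacement
          # diverges from it by more than `tol` pixels are flagged as likely movers.
          disp = pts_cur - pts_prev                    # per-point displacement
          shift = np.median(disp, axis=0)              # robust global shift estimate
          residual = np.linalg.norm(disp - shift, axis=1)
          movers = residual > tol                      # points on moving objects
          return shift, movers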

  20. Developmental changes in attention to faces and bodies in static and dynamic scenes.

    PubMed

    Stoesz, Brenda M; Jakobson, Lorna S

    2014-01-01

    Typically developing individuals show a strong visual preference for faces and face-like stimuli; however, this may come at the expense of attending to bodies or to other aspects of a scene. The primary goal of the present study was to provide additional insight into the development of attentional mechanisms that underlie perception of real people in naturalistic scenes. We examined the looking behaviors of typical children, adolescents, and young adults as they viewed static and dynamic scenes depicting one or more people. Overall, participants showed a bias to attend to faces more than to other parts of the scenes. Adding motion cues led to a reduction in the number, but an increase in the average duration, of face fixations in single-character scenes. When multiple characters appeared in a scene, motion-related effects were attenuated and participants shifted their gaze from faces to bodies, or made off-screen glances. Children showed the largest effects related to the introduction of motion cues or additional characters, suggesting that they find dynamic faces difficult to process and are especially prone to look away from faces when viewing complex social scenes, a strategy that could reduce the cognitive and the affective load imposed by having to divide one's attention between multiple faces. Our findings provide new insights into the typical development of social attention during natural scene viewing, and lay the foundation for future work examining gaze behaviors in typical and atypical development.

  1. IR in Norway

    NASA Astrophysics Data System (ADS)

    Haakenaasen, Randi; Lovold, Stian

    2003-01-01

    Infrared technology in Norway started at the Norwegian Defense Research Establishment (FFI) in the 1960s, and has since then spread to universities, other research institutes and industry. FFI has a large, integrated IR activity that includes research and development in IR detectors, optics design, optical coatings, advanced dewar design, modelling/simulation of IR scenes, and image analysis. Part of the integrated activity is a laboratory for more basic research in materials science and semiconductor physics, in which thin films of CdHgTe are grown by molecular beam epitaxy and processed into IR detectors by various techniques. FFI also has a lot of experience in research and development of tunable infrared lasers for various applications. Norwegian industrial activities include production of infrared homing anti-ship missiles, laser rangefinders, various infrared gas sensors, hyperspectral cameras, and fiberoptic sensor systems for structural health monitoring and offshore oil well diagnostics.

  2. General review of multispectral cooled IR development at CEA-Leti, France

    NASA Astrophysics Data System (ADS)

    Boulard, F.; Marmonier, F.; Grangier, C.; Adelmini, L.; Gravrand, O.; Ballet, P.; Baudry, X.; Baylet, J.; Badano, G.; Espiau de Lamaestre, R.; Bisotto, S.

    2017-02-01

    Multicolor detection capabilities, which bring information on the thermal and chemical composition of the scene, are desirable for advanced infrared (IR) imaging systems. This communication reviews intra- and multiband solutions developed at CEA-Leti, from dual-band molecular beam epitaxy grown Mercury Cadmium Telluride (MCT) photodiodes to plasmon-enhanced multicolor IR detectors and backside pixelated filters. Spectral responses, quantum efficiency and detector noise performances, and system-level pros and cons are discussed with regard to technology maturity, pixel pitch reduction, and affordability. From broadband MWIR-LWIR detection to peaked detection within the MWIR or LWIR bands, the results underline the full range of possibilities developed at CEA-Leti.

  3. Diode Lasers and Light Emitting Diodes Operating at Room Temperature with Wavelengths Above 3 Micrometers

    DTIC Science & Technology

    2011-11-29

    as an active region of mid-infrared LEDs. It should be noted that an active region based on interband transition is equally useful for both laser and...IR LED technology for infrared scene projectors”, Dr. E. Golden, Air Force Research Laboratory, Eglin Air Force Base.  “A stable mid-IR, GaSb...multimode lasers. Single spatial mode 3-3.2 μm diode lasers were developed. LEDs operate at wavelengths above 4 μm at RT. Dual color mid-infrared

  4. High-dynamic-range scene compression in humans

    NASA Astrophysics Data System (ADS)

    McCann, John J.

    2006-02-01

    Single pixel dynamic-range compression alters a particular input value to a unique output value - a look-up table. It is used in chemical and most digital photographic systems having S-shaped transforms to render high-range scenes onto low-range media. Post-receptor neural processing is spatial, as shown by the physiological experiments of Dowling, Barlow, Kuffler, and Hubel & Wiesel. Human vision does not render a particular receptor-quanta catch as a unique response. Instead, because of spatial processing, the response to a particular quanta catch can be any color. Visual response is scene dependent. Stockham proposed an approach to model human range compression using low-spatial frequency filters. Campbell, Ginsberg, Wilson, Watson, Daly and many others have developed spatial-frequency channel models. This paper describes experiments measuring the properties of desirable spatial-frequency filters for a variety of scenes. Given the radiances of each pixel in the scene and the observed appearances of objects in the image, one can calculate the visual mask for that individual image. Here, visual mask is the spatial pattern of changes made by the visual system in processing the input image. It is the spatial signature of human vision. Low-dynamic range images with many white areas need no spatial filtering. High-dynamic-range images with many blacks, or deep shadows, require strong spatial filtering. Sun on the right and shade on the left requires directional filters. These experiments show that variable scene-dependent filters are necessary to mimic human vision. Although spatial-frequency filters can model human scene-dependent appearances, the problem remains that an analysis of the scene is still needed to calculate the scene-dependent strengths of each of the filters for each frequency.

  5. Fast-response IR spatial light modulators with a polymer network liquid crystal

    NASA Astrophysics Data System (ADS)

    Peng, Fenglin; Chen, Haiwei; Tripathi, Suvagata; Twieg, Robert J.; Wu, Shin-Tson

    2015-03-01

    Liquid crystals (LCs) have widespread applications for amplitude modulation (e.g. flat panel displays) and phase modulation (e.g. beam steering). For phase modulation, a 2π phase modulo is required. To extend electro-optic applications into the infrared region (MWIR and LWIR), several key technical challenges have to be overcome: 1. low absorption loss, 2. high birefringence, 3. low operation voltage, and 4. fast response time. After three decades of extensive development, an increasing number of IR devices adopting LC technology have been demonstrated, such as liquid crystal waveguides, laser beam steering at 1.55 μm and 10.6 μm, spatial light modulators in the MWIR (3~5 μm) band, and dynamic scene projectors for infrared seekers in the LWIR (8~12 μm) band. However, several fundamental molecular vibration bands and overtones exist in the MWIR and LWIR regions, which contribute to a high absorption coefficient and hinder widespread application. Therefore, the inherent absorption loss becomes a major concern for IR devices. To suppress IR absorption, several approaches have been investigated: 1) Employing a thin cell gap by choosing a high birefringence liquid crystal mixture; 2) Shifting the absorption bands outside the spectral region of interest by deuteration, fluorination and chlorination; 3) Reducing the overlapping vibration bands by using shorter alkyl chain compounds. In this paper, we report some chlorinated LC compounds and mixtures with a low absorption loss in the near infrared and MWIR regions. To achieve fast response time, we have demonstrated a polymer network liquid crystal with a 2π phase change in the MWIR and a response time of less than 5 ms.

  6. ColorChecker at the beach: dangers of sunburn and glare

    NASA Astrophysics Data System (ADS)

    McCann, John

    2014-01-01

    In High-Dynamic-Range (HDR) imaging, optical veiling glare sets the limits of accurate scene information recorded by a camera. But, what happens at the beach? Here we have a Low-Dynamic-Range (LDR) scene with maximal glare. Can we calibrate a camera at the beach and not be burnt? We know that we need sunscreen and sunglasses, but what about our cameras? The effect of veiling glare is scene-dependent. When we compare RAW camera digits with spotmeter measurements we find significant differences. As well, these differences vary, depending on where we aim the camera. When we calibrate our camera at the beach we get data that is valid for only that part of that scene. Camera veiling glare is an issue in LDR scenes in uniform illumination with a shaded lens.

  7. Infrared radiation scene generation of stars and planets in celestial background

    NASA Astrophysics Data System (ADS)

    Guo, Feng; Hong, Yaohui; Xu, Xiaojian

    2014-10-01

    An infrared (IR) radiation generation model of stars and planets in a celestial background is proposed in this paper. Cohen's spectral template [1] is modified for high spectral resolution and accuracy. Based on the improved spectral template for stars and the blackbody assumption for planets, an IR radiation model is developed which is able to generate the celestial IR background for stars and planets appearing in the sensor's field of view (FOV) for a specified observing date and time, location, viewpoint and spectral band over 1.2 μm ~ 35 μm. In the current model, the initial locations of stars are calculated based on the midcourse space experiment (MSX) IR astronomical catalogue (MSX-IRAC) [2], while the initial locations of planets are calculated using the secular variations of the planetary orbits (VSOP) theory. Simulation results show that the new IR radiation model has higher resolution and accuracy than common models.
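
    The blackbody part of such a model reduces to Planck's law. As a hedged illustration, the sketch below computes the spectral radiance of a blackbody and integrates it over an assumed 8-12 μm band to obtain an in-band radiance for a planet of a given temperature; function names, band limits, and the use of simple numerical quadrature are our own choices, not taken from the paper.

      import numpy as np
      from scipy.integrate import quad

      H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23   # SI constants

      def planck_radiance(wavelength_m, temperature_k):
          # Planck spectral radiance B(lambda, T) in W / (m^2 sr m).
          lam = np.asarray(wavelength_m, dtype=float)
          return (2.0 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * temperature_k))

      def in_band_radiance(temperature_k, band=(8e-6, 12e-6)):
          # Band-integrated radiance (W / m^2 / sr) under the blackbody
          # assumption used for planets; band limits given in metres.
          value, _ = quad(planck_radiance, band[0], band[1], args=(temperature_k,))
          return value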

  8. Cross-sensor comparisons between Landsat 5 TM and IRS-P6 AWiFS and disturbance detection using integrated Landsat and AWiFS time-series images

    USGS Publications Warehouse

    Chen, Xuexia; Vogelmann, James E.; Chander, Gyanesh; Ji, Lei; Tolk, Brian; Huang, Chengquan; Rollins, Matthew

    2013-01-01

    Routine acquisition of Landsat 5 Thematic Mapper (TM) data was discontinued recently and Landsat 7 Enhanced Thematic Mapper Plus (ETM+) has an ongoing problem with the scan line corrector (SLC), thereby creating spatial gaps in the images it acquires. Since temporal and spatial discontinuities of Landsat data are now imminent, it is important to investigate other potential satellite data that can be used to replace Landsat data. We thus cross-compared two near-simultaneous images obtained from Landsat 5 TM and the Indian Remote Sensing (IRS)-P6 Advanced Wide Field Sensor (AWiFS), both captured on 29 May 2007 over Los Angeles, CA. TM and AWiFS reflectances were compared for the green, red, near-infrared (NIR), and shortwave infrared (SWIR) bands, as well as the normalized difference vegetation index (NDVI), based on manually selected polygons in homogeneous areas. All R² values of the linear regressions were found to be higher than 0.99. The temporally invariant cluster (TIC) method was used to calculate the NDVI correlation between the TM and AWiFS images. The NDVI regression line derived from the selected polygons passed through several invariant cluster centres of the TIC density maps and demonstrated that both the scene-dependent polygon regression method and the TIC method can generate accurate radiometric normalization. A scene-independent normalization method was also used to normalize the AWiFS data. Image agreement assessment demonstrated that the scene-dependent normalization using homogeneous polygons provided slightly higher accuracy values than those obtained by the scene-independent method. Finally, the non-normalized and relatively normalized ‘Landsat-like’ AWiFS 2007 images were integrated into 1984 to 2010 Landsat time-series stacks (LTSS) for disturbance detection using the Vegetation Change Tracker (VCT) model. Both scene-dependent and scene-independent normalized AWiFS data sets could generate disturbance maps similar to those generated using the LTSS data set, and their kappa coefficients were higher than 0.97. These results indicate that AWiFS can be used instead of Landsat data to detect multitemporal disturbance in the event of Landsat data discontinuity.
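
    As an illustration of the scene-dependent regression step, the sketch below fits a linear gain and offset between two co-registered bands over pixels flagged as invariant (e.g., from homogeneous polygons or TIC cluster centres) and applies the fit to the target band. It is a generic sketch of relative radiometric normalization, not the exact procedure used in the study; the mask and variable names are assumptions.

      import numpy as np

      def relative_normalisation(ref_band, target_band, invariant_mask):
          # Fit target = a * ref + b over invariant pixels, then map the target
          # band onto the reference sensor's radiometry.
          x = target_band[invariant_mask].ravel()
          y = ref_band[invariant_mask].ravel()
          a, b = np.polyfit(x, y, 1)              # least-squares line y = a*x + b
          return a * target_band + b, (a, b)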

  9. Airborne Infrared and Visible Image Fusion Combined with Region Segmentation

    PubMed Central

    Zuo, Yujia; Liu, Jinghong; Bai, Guanbing; Wang, Xuan; Sun, Mingchao

    2017-01-01

    This paper proposes an infrared (IR) and visible image fusion method introducing region segmentation into the dual-tree complex wavelet transform (DTCWT) domain. This method should effectively improve both the target indication and scene spectrum features of fusion images, and the target identification and tracking reliability of the fusion system, on an airborne photoelectric platform. The method involves segmenting an IR image into regions by significance, identifying the target region and the background region, and then fusing the low-frequency components in the DTCWT domain according to the region segmentation result. For the high-frequency components, region weights are assigned by the information richness of region details to conduct fusion based on both weights and adaptive phases, and a shrinkage function is then introduced to suppress noise. Finally, the fused low-frequency and high-frequency components are reconstructed to obtain the fusion image. The experimental results show that the proposed method can fully extract complementary information from the source images to obtain a fusion image with good target indication and rich information on scene details. They also give a fusion result superior to existing popular fusion methods, based on either subjective or objective evaluation. With good stability and high fusion accuracy, this method can meet the fusion requirements of IR-visible image fusion systems. PMID:28505137

  10. Airborne Infrared and Visible Image Fusion Combined with Region Segmentation.

    PubMed

    Zuo, Yujia; Liu, Jinghong; Bai, Guanbing; Wang, Xuan; Sun, Mingchao

    2017-05-15

    This paper proposes an infrared (IR) and visible image fusion method introducing region segmentation into the dual-tree complex wavelet transform (DTCWT) domain. This method should effectively improve both the target indication and scene spectrum features of fusion images, and the target identification and tracking reliability of the fusion system, on an airborne photoelectric platform. The method involves segmenting an IR image into regions by significance, identifying the target region and the background region, and then fusing the low-frequency components in the DTCWT domain according to the region segmentation result. For the high-frequency components, region weights are assigned by the information richness of region details to conduct fusion based on both weights and adaptive phases, and a shrinkage function is then introduced to suppress noise. Finally, the fused low-frequency and high-frequency components are reconstructed to obtain the fusion image. The experimental results show that the proposed method can fully extract complementary information from the source images to obtain a fusion image with good target indication and rich information on scene details. They also give a fusion result superior to existing popular fusion methods, based on either subjective or objective evaluation. With good stability and high fusion accuracy, this method can meet the fusion requirements of IR-visible image fusion systems.
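
    For intuition, the sketch below performs a much-simplified region-guided fusion: a Gaussian base/detail split stands in for the DTCWT, the IR base layer is kept inside the segmented target region, and details are chosen by local energy as a stand-in for the information-richness weighting. All of these substitutions are our own simplifications of the method described above, not its implementation.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def fuse_ir_visible(ir, vis, target_mask, sigma=4.0):
          # ir, vis     : grayscale float images of identical size
          # target_mask : boolean mask of the segmented target region (from the IR image)
          ir_low, vis_low = gaussian_filter(ir, sigma), gaussian_filter(vis, sigma)
          ir_high, vis_high = ir - ir_low, vis - vis_low

          # Low-frequency part: keep the IR base layer inside the target region,
          # average the base layers elsewhere.
          fused_low = np.where(target_mask, ir_low, 0.5 * (ir_low + vis_low))

          # High-frequency part: pick, pixel-wise, the detail layer with the
          # larger local energy.
          ir_energy = gaussian_filter(ir_high**2, sigma)
          vis_energy = gaussian_filter(vis_high**2, sigma)
          fused_high = np.where(ir_energy >= vis_energy, ir_high, vis_high)
          return fused_low + fused_high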

  11. Utilization of DIRSIG in support of real-time infrared scene generation

    NASA Astrophysics Data System (ADS)

    Sanders, Jeffrey S.; Brown, Scott D.

    2000-07-01

    Real-time infrared scene generation for hardware-in-the-loop has been a traditionally difficult challenge. Infrared scenes are usually generated using commercial hardware that was not designed to properly handle the thermal and environmental physics involved. Real-time infrared scenes typically lack details that are included in scenes rendered in non-real-time by ray-tracing programs such as the Digital Imaging and Remote Sensing Scene Generation (DIRSIG) program. However, executing DIRSIG in real-time while retaining all the physics is beyond current computational capabilities for many applications. DIRSIG is a first principles-based synthetic image generation model that produces multi- or hyper-spectral images in the 0.3 to 20 micron region of the electromagnetic spectrum. The DIRSIG model is an integrated collection of independent first-principles-based sub-models, which work in conjunction to produce radiance field images with high radiometric fidelity. DIRSIG uses the MODTRAN radiation propagation model for exo-atmospheric irradiance, emitted and scattered radiances (upwelled and downwelled) and path transmission predictions. This radiometry submodel utilizes bidirectional reflectance data, accounts for specular and diffuse background contributions, and features path length dependent extinction and emission for transmissive bodies (plumes, clouds, etc.) which may be present in any target, background or solar path. This detailed environmental modeling greatly enhances the number of rendered features and hence the fidelity of a rendered scene. While DIRSIG itself cannot currently be executed in real-time, its outputs can be used to provide scene inputs for real-time scene generators. These inputs can incorporate significant features such as target to background thermal interactions, static background object thermal shadowing, and partially transmissive countermeasures. All of these features represent significant improvements over the current state of the art in real-time IR scene generation.

  12. Real-time scene and signature generation for ladar and imaging sensors

    NASA Astrophysics Data System (ADS)

    Swierkowski, Leszek; Christie, Chad L.; Antanovskii, Leonid; Gouthas, Efthimios

    2014-05-01

    This paper describes development of two key functionalities within the VIRSuite scene simulation program, broadening its scene generation capabilities and increasing accuracy of thermal signatures. Firstly, a new LADAR scene generation module has been designed. It is capable of simulating range imagery for Geiger mode LADAR, in addition to the already existing functionality for linear mode systems. Furthermore, a new 3D heat diffusion solver has been developed within the VIRSuite signature prediction module. It is capable of calculating the temperature distribution in complex three-dimensional objects for enhanced dynamic prediction of thermal signatures. With these enhancements, VIRSuite is now a robust tool for conducting dynamic simulation for missiles with multi-mode seekers.

  13. A Novel Method to Increase LinLog CMOS Sensors’ Performance in High Dynamic Range Scenarios

    PubMed Central

    Martínez-Sánchez, Antonio; Fernández, Carlos; Navarro, Pedro J.; Iborra, Andrés

    2011-01-01

    Images from high dynamic range (HDR) scenes must be obtained with minimum loss of information. For this purpose it is necessary to take full advantage of the quantification levels provided by the CCD/CMOS image sensor. LinLog CMOS sensors satisfy the above demand by offering an adjustable response curve that combines linear and logarithmic responses. This paper presents a novel method to quickly adjust the parameters that control the response curve of a LinLog CMOS image sensor. We propose to use an Adaptive Proportional-Integral-Derivative controller to adjust the exposure time of the sensor, together with control algorithms based on the saturation level and the entropy of the images. With this method the sensor’s maximum dynamic range (120 dB) can be used to acquire good quality images from HDR scenes with fast, automatic adaptation to scene conditions. Adaptation to a new scene is rapid, with a sensor response adjustment of less than eight frames when working in real time video mode. At least 67% of the scene entropy can be retained with this method. PMID:22164083

  14. A novel method to increase LinLog CMOS sensors' performance in high dynamic range scenarios.

    PubMed

    Martínez-Sánchez, Antonio; Fernández, Carlos; Navarro, Pedro J; Iborra, Andrés

    2011-01-01

    Images from high dynamic range (HDR) scenes must be obtained with minimum loss of information. For this purpose it is necessary to take full advantage of the quantification levels provided by the CCD/CMOS image sensor. LinLog CMOS sensors satisfy the above demand by offering an adjustable response curve that combines linear and logarithmic responses. This paper presents a novel method to quickly adjust the parameters that control the response curve of a LinLog CMOS image sensor. We propose to use an Adaptive Proportional-Integral-Derivative controller to adjust the exposure time of the sensor, together with control algorithms based on the saturation level and the entropy of the images. With this method the sensor's maximum dynamic range (120 dB) can be used to acquire good quality images from HDR scenes with fast, automatic adaptation to scene conditions. Adaptation to a new scene is rapid, with a sensor response adjustment of less than eight frames when working in real time video mode. At least 67% of the scene entropy can be retained with this method.
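
    As a rough sketch of the control idea, the fragment below adjusts exposure time with a fixed-gain PID loop driven by the fraction of saturated pixels, and includes an entropy measure of the kind the paper uses as a second control signal. The gains, set-point, and update rule are illustrative assumptions; the adaptive PID and the LinLog response-curve parameters themselves are not modelled here.

      import numpy as np

      def image_entropy(img, bins=256):
          # Shannon entropy of an 8-bit-like image, in bits.
          hist, _ = np.histogram(img, bins=bins, range=(0, bins), density=True)
          p = hist[hist > 0]
          return float(-(p * np.log2(p)).sum())

      class ExposurePID:
          # Minimal PID controller driving exposure time from a saturation error.
          def __init__(self, kp=0.5, ki=0.05, kd=0.1, target_sat=0.01):
              self.kp, self.ki, self.kd, self.target_sat = kp, ki, kd, target_sat
              self.integral, self.prev_err = 0.0, 0.0

          def update(self, frame, exposure_us, sat_level=250):
              # Returns a new exposure time (same units as exposure_us);
              # sat_level assumes roughly 8-bit pixel values.
              sat_frac = float((frame >= sat_level).mean())
              err = self.target_sat - sat_frac        # < 0: too many saturated pixels
              self.integral += err
              deriv = err - self.prev_err
              self.prev_err = err
              correction = self.kp * err + self.ki * self.integral + self.kd * deriv
              return max(1.0, exposure_us * (1.0 + correction))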

  15. Modeling repetitive motions using structured light.

    PubMed

    Xu, Yi; Aliaga, Daniel G

    2010-01-01

    Obtaining models of dynamic 3D objects is an important part of content generation for computer graphics. Numerous methods have been extended from static scenarios to model dynamic scenes. If the states or poses of the dynamic object repeat often during a sequence (but not necessarily periodically), we call such motion a repetitive motion. There are many objects, such as toys, machines, and humans, undergoing repetitive motions. Our key observation is that when a motion-state repeats, we can sample the scene under the same motion state again but using a different set of parameters, thus providing more information about each motion state. This enables the robust acquisition of dense 3D information, which is otherwise difficult for objects with repetitive motions, using only simple hardware. After the motion sequence, we group temporally disjoint observations of the same motion state together and produce a smooth space-time reconstruction of the scene. Effectively, the dynamic scene modeling problem is converted to a series of static scene reconstructions, which are easier to tackle. The varying sampling parameters can be, for example, structured-light patterns, illumination directions, and viewpoints, resulting in different modeling techniques. Based on this observation, we present an image-based motion-state framework and demonstrate our paradigm using either a synchronized or an unsynchronized structured-light acquisition method.

  16. Manipulating the content of dynamic natural scenes to characterize response in human MT/MST.

    PubMed

    Durant, Szonya; Wall, Matthew B; Zanker, Johannes M

    2011-09-09

    Optic flow is one of the most important sources of information for enabling human navigation through the world. A striking finding from single-cell studies in monkeys is the rapid saturation of response of MT/MST areas with the density of optic flow type motion information. These results are reflected psychophysically in human perception in the saturation of motion aftereffects. We began by comparing responses to natural optic flow scenes in human visual brain areas to responses to the same scenes with inverted contrast (photo negative). This changes scene familiarity while preserving local motion signals. This manipulation had no effect; however, the response was only correlated with the density of local motion (calculated by a motion correlation model) in V1, not in MT/MST. To further investigate this, we manipulated the visible proportion of natural dynamic scenes and found that areas MT and MST did not increase in response over a 16-fold increase in the amount of information presented, i.e., response had saturated. This makes sense in light of the sparseness of motion information in natural scenes, suggesting that the human brain is well adapted to exploit a small amount of dynamic signal and extract information important for survival.

  17. Compound simulator IR radiation characteristics test and calibration

    NASA Astrophysics Data System (ADS)

    Li, Yanhong; Zhang, Li; Li, Fan; Tian, Yi; Yang, Yang; Li, Zhuo; Shi, Rui

    2015-10-01

    Hardware-in-the-loop simulation can reproduce in the laboratory the physical radiation of targets and interference sources and the interception process during product flight. Simulating the environment is particularly difficult when the radiation energy is high and the interference model is complicated. Here, the development of an IR scene generator based on a fiber-array imaging transducer with circumferential lamp spot sources is introduced. The IR simulation capability includes effective simulation of aircraft signatures and point-source IR countermeasures. Two point sources acting as interference can move in random two-dimensional directions. To simulate the process of interference release, the radiation and motion characteristics are tested. Through zero calibration of the simulator's optical axis, the radiation can be accurately projected onto the product's detector. The test and calibration results show that the new compound simulator can be used in hardware-in-the-loop simulation trials.

  18. Automatic temperature computation for realistic IR simulation

    NASA Astrophysics Data System (ADS)

    Le Goff, Alain; Kersaudy, Philippe; Latger, Jean; Cathala, Thierry; Stolte, Nilo; Barillot, Philippe

    2000-07-01

    Polygon temperature computation in 3D virtual scenes is fundamental for IR image simulation. This article describes in detail the temperature calculation software and its current extensions, briefly presented in [1]. This software, called MURET, is used by the simulation workshop CHORALE of the French DGA. MURET is a one-dimensional thermal software package that accurately takes into account the material thermal attributes of the three-dimensional scene and the variation of the environmental characteristics (atmosphere) as a function of time. Concerning the environment, absorbed incident fluxes are computed wavelength by wavelength, every half hour, during the 24 hours preceding the time of the simulation. For each polygon, incident fluxes are composed of direct solar fluxes and sky illumination (including diffuse solar fluxes). Concerning the materials, classical thermal attributes such as conductivity, absorption, spectral emissivity, density, specific heat, thickness and convection coefficients are associated with several layers and taken into account. In the future, MURET will be able to simulate permeable natural materials (water influence) and vegetated natural materials (woods). This model of thermal attributes yields a very accurate polygon temperature computation for the complex 3D databases often found in CHORALE simulations. The kernel of MURET consists of an efficient ray tracer that computes the history (over 24 hours) of the shadowed parts of the 3D scene and a library responsible for the thermal computations. The main originality concerns the way the heating fluxes are computed. Using ray tracing, the flux received at each 3D point of the scene accurately takes into account the masking (hidden surfaces) between objects. In addition, this library supplies other thermal modules, such as a thermal shadow computation tool.
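
    As a rough illustration of the kind of computation such a one-dimensional thermal solver performs, the sketch below integrates a single-node surface energy balance (absorbed flux, thermal emission, convection) over time. The single-layer simplification and all material values are assumptions for illustration; MURET's multi-layer model and ray-traced flux history are not reproduced.

```python
import numpy as np

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def surface_temperature(times_s, absorbed_flux, t_air, h_conv=10.0,
                        emissivity=0.9, rho=2300.0, cp=900.0, thickness=0.05):
    """Explicit time integration of a single-layer surface energy balance.

    A toy stand-in for a 1-D polygon thermal solver: one node per polygon,
    with the absorbed flux assumed already shadow-corrected.
    """
    c_areal = rho * cp * thickness          # areal heat capacity, J m^-2 K^-1
    T = np.empty(len(times_s), dtype=float)
    T[0] = t_air[0]
    for i in range(1, len(times_s)):
        dt = times_s[i] - times_s[i - 1]
        q_net = (absorbed_flux[i - 1]
                 - emissivity * SIGMA * T[i - 1] ** 4      # thermal emission
                 - h_conv * (T[i - 1] - t_air[i - 1]))     # convection to air
        T[i] = T[i - 1] + dt * q_net / c_areal
    return T
```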

  19. The thermal background determines how the infrared and visual systems interact in pit vipers.

    PubMed

    Chen, Qin; Liu, Yang; Brauth, Steven E; Fang, Guangzhan; Tang, Yezhong

    2017-09-01

    The thermal infrared (IR) sensing system of pit vipers is believed to complement vision and provide a substitute imaging system in dark environments. Theoretically, the IR system would best image a scene consisting of a homothermal target in cold surroundings as a bright spot on a dark background. To test this hypothesis, we evaluated how the pit viper (Gloydius brevicaudus) discriminates and strikes prey when the background temperature is either higher or lower than that of the prey (approximately 32-33°C) in different parts of the scene. Snakes were tested in a modified predation cage in which background temperatures were set to 26°C on one side and either 33 or 40°C on the opposite side when the eyes, the pit organs or neither sensory system was occluded. When the eyes were blocked, snakes preferred to strike prey on the 26°C side rather than on the 33°C side but showed no bias in the other conditions. Snakes showed no preference for 26 versus 40°C background temperature, although more missed strikes occurred when the eyes were occluded. The results thus revealed that the pit viper IR system can accomplish a 'brightness constancy' computation reflecting the difference between the target and background temperatures, much as the visual system compares the luminance of a figure and the background. Furthermore, the results show that the IR system performs less well for locating prey when the background is warmer than the target. © 2017. Published by The Company of Biologists Ltd.

  20. Thermal monitoring of hydrothermal activity by permanent infrared automatic stations: Results obtained at Solfatara di Pozzuoli, Campi Flegrei (Italy)

    NASA Astrophysics Data System (ADS)

    Chiodini, G.; Vilardo, G.; Augusti, V.; Granieri, D.; Caliro, S.; Minopoli, C.; Terranova, C.

    2007-12-01

    A permanent automatic infrared (IR) station was installed at Solfatara crater, the most active zone of the Campi Flegrei caldera. After a positive in situ calibration of the IR camera, we analyzed 2175 thermal IR images of the same scene from 2004 to 2007. The scene includes a portion of the steam-heated hot soils of Solfatara. The experiment was initiated to detect and quantify temperature changes of the shallow thermal structure of a quiescent volcano such as Solfatara over long periods. Ambient temperature is the main parameter affecting IR temperatures, while air humidity and rain control image quality. A geometric correction of the images was necessary to remove the effects of slow movement of the camera. After a suitable correction, the images give a reliable and detailed picture of the temperature changes over the period October 2004 to January 2007, which suggests that the origin of the changes was linked to anthropogenic activity, vegetation growth, and an increase in the flux of hydrothermal fluids in the area of the hottest fumaroles. Two positive temperature anomalies were registered after the occurrence of two seismic swarms that affected the hydrothermal system of Solfatara in October 2005 and October 2006. It is worth noting that these signs were detected in a system characterized by a low level of activity with respect to systems affected by real volcanic crises, where more pronounced signals would be expected. The results of the experiment show that this kind of monitoring system can be a suitable tool for volcanic surveillance.

  1. Coherent infrared imaging camera (CIRIC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hutchinson, D.P.; Simpson, M.L.; Bennett, C.A.

    1995-07-01

    New developments in 2-D, wide-bandwidth HgCdTe (MCT) and GaAs quantum-well infrared photodetectors (QWIP) coupled with Monolithic Microwave Integrated Circuit (MMIC) technology are now making focal plane array coherent infrared (IR) cameras viable. Unlike conventional IR cameras which provide only thermal data about a scene or target, a coherent camera based on optical heterodyne interferometry will also provide spectral and range information. Each pixel of the camera, consisting of a single photo-sensitive heterodyne mixer followed by an intermediate frequency amplifier and illuminated by a separate local oscillator beam, constitutes a complete optical heterodyne receiver. Applications of coherent IR cameras are numerous and include target surveillance, range detection, chemical plume evolution, monitoring stack plume emissions, and wind shear detection.

  2. Camera pose estimation for augmented reality in a small indoor dynamic scene

    NASA Astrophysics Data System (ADS)

    Frikha, Rawia; Ejbali, Ridha; Zaied, Mourad

    2017-09-01

    Camera pose estimation remains a challenging task for augmented reality (AR) applications. Simultaneous localization and mapping (SLAM)-based methods are able to estimate the six-degrees-of-freedom camera motion while constructing a map of an unknown environment. However, these methods do not provide any reference for where to insert virtual objects, since they have no information about scene structure, and they may fail in cases of occlusion of three-dimensional (3-D) map points or dynamic objects. This paper presents a real-time monocular piecewise planar SLAM method using the planar scene assumption. Using planar structures in the mapping process allows, on the one hand, rendering virtual objects in a meaningful way and, on the other hand, improving the precision of the camera pose and the quality of the 3-D reconstruction of the environment by adding constraints on 3-D points and poses in the optimization process. We propose to exploit the rigid motion of the 3-D planes in the tracking process to enhance the system's robustness in the case of dynamic scenes. Experimental results show that using a constrained planar scene improves our system's accuracy and robustness compared with classical SLAM systems.

  3. Methods of editing cloud and atmospheric layer affected pixels from satellite data

    NASA Technical Reports Server (NTRS)

    Nixon, P. R.; Wiegand, C. L.; Richardson, A. J.; Johnson, M. P. (Principal Investigator)

    1982-01-01

    Subvisible cirrus clouds (SCi) were easily distinguished in mid-infrared (MIR) TIROS-N daytime data from south Texas and northeast Mexico. The MIR (3.55-3.93 micrometer) pixel digital count means of the SCi affected areas were more than 3.5 standard deviations on the cold side of the scene means. (These standard deviations were made free of the effects of unusual instrument error by factoring out the Ch 3 MIR noise on the basis of detailed examination of noisy and noise-free pixels). SCi affected areas in the IR Ch 4 (10.5-11.5 micrometer) appeared cooler than the general scene, but were not as prominent as in Ch 3, being less than 2 standard deviations from the scene mean. Ch 3 and 4 standard deviations and coefficients of variation are not reliable indicators, by themselves, of the presence of SCi because land features can have similar statistical properties.

  4. Parsing Heterogeneity in Autism Spectrum Disorders: Visual Scanning of Dynamic Social Scenes in School-Aged Children

    ERIC Educational Resources Information Center

    Rice, Katherine; Moriuchi, Jennifer M.; Jones, Warren; Klin, Ami

    2012-01-01

    Objective: To examine patterns of variability in social visual engagement and their relationship to standardized measures of social disability in a heterogeneous sample of school-aged children with autism spectrum disorders (ASD). Method: Eye-tracking measures of visual fixation during free-viewing of dynamic social scenes were obtained for 109…

  5. Research on hyperspectral dynamic scene and image sequence simulation

    NASA Astrophysics Data System (ADS)

    Sun, Dandan; Liu, Fang; Gao, Jiaobo; Sun, Kefeng; Hu, Yu; Li, Yu; Xie, Junhu; Zhang, Lei

    2016-10-01

    This paper presents a simulation method for hyperspectral dynamic scenes and image sequences, intended for hyperspectral equipment evaluation and target detection algorithm development. Because of its high spectral resolution, strong band continuity, interference resistance and other advantages, hyperspectral imaging technology has developed rapidly in recent years and is widely used in many areas such as optoelectronic target detection, military defense and remote sensing systems. Digital imaging simulation, as a crucial part of hardware-in-the-loop simulation, can be applied to testing and evaluating hyperspectral imaging equipment with lower development cost and a shorter development period. Meanwhile, visual simulation can produce a large amount of original image data under various conditions for hyperspectral image feature extraction and classification algorithms. Based on a radiation physics model and material characteristic parameters, this paper proposes a digital scene generation method. By building multiple sensor models for different bands and bandwidths, hyperspectral scenes in the visible, MWIR and LWIR bands, with spectral resolutions of 0.01 μm, 0.05 μm and 0.1 μm, have been simulated. The final dynamic scenes are realistic and render in real time at frame rates up to 100 Hz. An image sequence is obtained by saving all the scene gray data from the same viewpoint. The analysis results show that, in both the infrared and visible bands, the grayscale variations of the simulated hyperspectral images are consistent with the theoretical analysis.
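
    The radiation physics underlying such a scene generator can be illustrated by integrating the Planck function over a sensor band for a grey surface. The sketch below computes band radiance for assumed MWIR and LWIR bands and an illustrative temperature and emissivity, omitting atmospheric and reflected terms; it is not the paper's model.

```python
import numpy as np

H = 6.62607e-34   # Planck constant, J s
C = 2.99792e8     # speed of light, m/s
KB = 1.38065e-23  # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance of a blackbody, W m^-2 sr^-1 m^-1."""
    a = 2.0 * H * C ** 2 / wavelength_m ** 5
    b = np.expm1(H * C / (wavelength_m * KB * temp_k))
    return a / b

def band_radiance(temp_k, emissivity, band_start_um, band_end_um, d_lambda_um=0.01):
    """Emitted radiance of a grey surface integrated over one sensor band.

    Illustrates the per-band, per-material radiance a radiation-physics scene
    model can produce; atmosphere and reflected terms are intentionally omitted.
    """
    wl_um = np.arange(band_start_um, band_end_um, d_lambda_um)
    spectral = planck_radiance(wl_um * 1e-6, temp_k)      # per metre of wavelength
    return emissivity * np.trapz(spectral, wl_um * 1e-6)  # W m^-2 sr^-1

# Example: a 300 K surface with emissivity 0.95 seen by MWIR and LWIR bands.
print(band_radiance(300.0, 0.95, 3.0, 5.0), band_radiance(300.0, 0.95, 8.0, 12.0))
```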

  6. Moving through a multiplex holographic scene

    NASA Astrophysics Data System (ADS)

    Mrongovius, Martina

    2013-02-01

    This paper explores how movement can be used as a compositional element in installations of multiplex holograms. My holographic images are created from montages of hand-held video and photo-sequences. These spatially dynamic compositions are visually complex but anchored to landmarks and hints of the capturing process - such as the appearance of the photographer's shadow - to establish a sense of connection to the holographic scene. Moving around in front of the hologram, the viewer animates the holographic scene. A perception of motion then results from the viewer's bodily awareness of physical motion and the visual reading of dynamics within the scene or movement of perspective through a virtual suggestion of space. By linking and transforming the physical motion of the viewer with the visual animation, the viewer's bodily awareness - including proprioception, balance and orientation - play into the holographic composition. How multiplex holography can be a tool for exploring coupled, cross-referenced and transformed perceptions of movement is demonstrated with a number of holographic image installations. Through this process I expanded my creative composition practice to consider how dynamic and spatial scenes can be conveyed through the fragmented view of a multiplex hologram. This body of work was developed through an installation art practice and was the basis of my recently completed doctoral thesis: 'The Emergent Holographic Scene — compositions of movement and affect using multiplex holographic images'.

  7. Speed Limits: Orientation and Semantic Context Interactions Constrain Natural Scene Discrimination Dynamics

    ERIC Educational Resources Information Center

    Rieger, Jochem W.; Kochy, Nick; Schalk, Franziska; Gruschow, Marcus; Heinze, Hans-Jochen

    2008-01-01

    The visual system rapidly extracts information about objects from the cluttered natural environment. In 5 experiments, the authors quantified the influence of orientation and semantics on the classification speed of objects in natural scenes, particularly with regard to object-context interactions. Natural scene photographs were presented in an…

  8. Optical system design of dynamic infrared scene projector based on DMD

    NASA Astrophysics Data System (ADS)

    Lu, Jing; Fu, Yuegang; Liu, Zhiying; Li, Yandong

    2014-09-01

    Infrared scene simulators are now widely used to simulate infrared scenes in the laboratory, which can greatly reduce the research cost of electro-optical systems and offer an economical experimental environment. With the advantages of large dynamic range and high spatial resolution, dynamic infrared projection technology based on the digital micro-mirror device (DMD), which is the key part of the infrared scene simulator, has been rapidly developed and widely applied in recent years. In this paper, the principle of the digital micro-mirror device is briefly introduced and the characteristics of the Digital Light Processing (DLP) system based on the DMD are analyzed. A projection system operating at 8-12 μm with a 1024×768 pixel DMD is designed in ZEMAX. The MTF curve is close to the diffraction-limited curve and the radius of the spot diagram is smaller than that of the Airy disk. The result indicates that the system meets the design requirements.

  9. Real-time maritime scene simulation for ladar sensors

    NASA Astrophysics Data System (ADS)

    Christie, Chad L.; Gouthas, Efthimios; Swierkowski, Leszek; Williams, Owen M.

    2011-06-01

    Continuing interest exists in the development of cost-effective synthetic environments for testing Laser Detection and Ranging (ladar) sensors. In this paper we describe a PC-based system for real-time ladar scene simulation of ships and small boats in a dynamic maritime environment. In particular, we describe the techniques employed to generate range imagery accompanied by passive radiance imagery. Our ladar scene generation system is an evolutionary extension of the VIRSuite infrared scene simulation program and includes all previous features such as ocean wave simulation, the physically-realistic representation of boat and ship dynamics, wake generation and simulation of whitecaps, spray, wake trails and foam. A terrain simulation extension is also under development. In this paper we outline the development, capabilities and limitations of the VIRSuite extensions.

  10. Adaptive foveated single-pixel imaging with dynamic supersampling

    PubMed Central

    Phillips, David B.; Sun, Ming-Jie; Taylor, Jonathan M.; Edgar, Matthew P.; Barnett, Stephen M.; Gibson, Graham M.; Padgett, Miles J.

    2017-01-01

    In contrast to conventional multipixel cameras, single-pixel cameras capture images using a single detector that measures the correlations between the scene and a set of patterns. However, these systems typically exhibit low frame rates, because to fully sample a scene in this way requires at least the same number of correlation measurements as the number of pixels in the reconstructed image. To mitigate this, a range of compressive sensing techniques have been developed which use a priori knowledge to reconstruct images from an undersampled measurement set. Here, we take a different approach and adopt a strategy inspired by the foveated vision found in the animal kingdom—a framework that exploits the spatiotemporal redundancy of many dynamic scenes. In our system, a high-resolution foveal region tracks motion within the scene, yet unlike a simple zoom, every frame delivers new spatial information from across the entire field of view. This strategy rapidly records the detail of quickly changing features in the scene while simultaneously accumulating detail of more slowly evolving regions over several consecutive frames. This architecture provides video streams in which both the resolution and exposure time spatially vary and adapt dynamically in response to the evolution of the scene. The degree of local frame rate enhancement is scene-dependent, but here, we demonstrate a factor of 4, thereby helping to mitigate one of the main drawbacks of single-pixel imaging techniques. The methods described here complement existing compressive sensing approaches and may be applied to enhance computational imagers that rely on sequential correlation measurements. PMID:28439538

  11. Plant cover, soil temperature, freeze, water stress, and evapotranspiration conditions. [south Texas

    NASA Technical Reports Server (NTRS)

    Wiegand, C. L.; Nixon, P. R.; Gausman, H. W.; Namken, L. N.; Leamer, R. W.; Richardson, A. J. (Principal Investigator)

    1981-01-01

    Emissive and reflective data for 10 days, and IR data for 6 nights in south Texas scenes were analyzed after procedures were developed for removing cloud-affected data. HCMM radiometric temperatures were: within 2 C of dewpoint temperatures on nights when air temperature approached dewpoint temperatures; significantly correlated with variables important in evapotranspiration; and, related to freeze severity and planting depth soil temperatures. Vegetation greenness indexes calculated from visible and reflective IR bands of NOAA-6 to -9 meteorological satellites will be useful in the AgRISTARS program for seasonal crop development, crop condition, and drought applications.

  12. A gallery of HCMM images

    NASA Technical Reports Server (NTRS)

    1982-01-01

    A gallery of what might be called the "Best of HCMM" imagery is presented. These 100 images, consisting mainly of Day-VIS, Day-IR, and Night-IR scenes plus a few thermal inertia images, were selected from the collection accrued in the Missions Utilization Office (Code 902) at the Goddard Space Flight Center. They were selected because of both their pictorial quality and their information or interest content. Nearly all the images are the computer processed and contrast stretched products routinely produced by the image processing facility at GSFC. Several LANDSAT images, special HCMM images made by HCMM investigators, and maps round out the input.

  13. Infrared hyperspectral imaging for chemical vapour detection

    NASA Astrophysics Data System (ADS)

    Ruxton, K.; Robertson, G.; Miller, W.; Malcolm, G. P. A.; Maker, G. T.; Howle, C. R.

    2012-10-01

    Active hyperspectral imaging is a valuable tool in a wide range of applications. One such area is the detection and identification of chemicals, especially toxic chemical warfare agents, through analysis of the resulting absorption spectrum. This work presents a selection of results from a prototype midwave infrared (MWIR) hyperspectral imaging instrument that has successfully been used for compound detection at a range of standoff distances. Active hyperspectral imaging utilises a broadly tunable laser source to illuminate the scene with light at a range of wavelengths. While there are a number of illumination methods, the chosen configuration illuminates the scene by raster scanning the laser beam using a pair of galvanometric mirrors. The resulting backscattered light from the scene is collected by the same mirrors and focussed onto a suitable single-point detector, where the image is constructed pixel by pixel. The imaging instrument that was developed in this work is based around an IR optical parametric oscillator (OPO) source with broad tunability, operating in the 2.6 to 3.7 μm (MWIR) and 1.5 to 1.8 μm (shortwave IR, SWIR) spectral regions. The MWIR beam was primarily used as it addressed the fundamental absorption features of the target compounds compared to the overtone and combination bands in the SWIR region, which can be less intense by more than an order of magnitude. We show that a prototype NCI instrument was able to locate hydrocarbon materials at distances up to 15 metres.

  14. Development of an inverse distance weighted active infrared stealth scheme using the repulsive particle swarm optimization algorithm.

    PubMed

    Han, Kuk-Il; Kim, Do-Hwi; Choi, Jun-Hyuk; Kim, Tae-Kuk

    2018-04-20

    The threat posed by detection through infrared (IR) signals is greater than that from other signals such as radar or sonar, because an object detected by an IR sensor cannot easily recognize that it has been detected. Recently, research on actively reducing the IR signature by adjusting the surface temperature of the object has been conducted. In this paper, we propose an active IR stealth algorithm to synchronize the IR signals from the object and the background around the object. The proposed method uses the repulsive particle swarm optimization (RPSO) statistical algorithm to estimate the IR stealth surface temperature that synchronizes the IR signals from the object and the surrounding background by setting the inverse-distance-weighted contrast radiant intensity (CRI) equal to zero. We tested the IR stealth performance in the mid-wavelength infrared (MWIR) and long-wavelength infrared (LWIR) bands for a test plate located at three different positions in a forest scene to verify the proposed method. Our results show that the proposed inverse-distance-weighted active IR stealth technique reduces the contrast radiant intensity between the object and the background by up to 32% compared with the previous method, in which the CRI is determined as the simple signal difference between the object and the background.
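
    A minimal sketch of the zero-contrast idea follows: an inverse-distance-weighted contrast between the object and sampled background radiances, and a plain particle swarm (standing in for the repulsive variant used in the paper) that searches for the surface temperature nulling it. The in-band radiance proxy, weights, bounds and swarm settings are all illustrative assumptions, not the authors' values.

```python
import numpy as np

def idw_contrast(obj_radiance, bg_radiances, distances, power=1.0):
    """Inverse-distance-weighted contrast between object and background samples."""
    w = 1.0 / np.asarray(distances, dtype=float) ** power
    w /= w.sum()
    return float(np.sum(w * (obj_radiance - np.asarray(bg_radiances, dtype=float))))

def stealth_temperature(radiance_of, bg_radiances, distances,
                        t_bounds=(270.0, 330.0), n_particles=30, n_iter=100):
    """Plain particle swarm searching the surface temperature whose in-band
    radiance drives the weighted contrast to zero (a sketch, not RPSO)."""
    rng = np.random.default_rng(1)
    lo, hi = t_bounds
    pos = rng.uniform(lo, hi, n_particles)
    vel = np.zeros(n_particles)
    cost = lambda t: abs(idw_contrast(radiance_of(t), bg_radiances, distances))
    best = pos.copy()
    best_cost = np.array([cost(t) for t in pos])
    g = best[best_cost.argmin()]
    for _ in range(n_iter):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        vel = 0.7 * vel + 1.5 * r1 * (best - pos) + 1.5 * r2 * (g - pos)
        pos = np.clip(pos + vel, lo, hi)
        c = np.array([cost(t) for t in pos])
        improved = c < best_cost
        best[improved], best_cost[improved] = pos[improved], c[improved]
        g = best[best_cost.argmin()]
    return g

# Toy usage: a crude grey-body in-band radiance proxy and three background samples.
radiance_of = lambda t: 5.67e-8 * 0.9 * t ** 4
print(stealth_temperature(radiance_of, bg_radiances=[390.0, 420.0, 405.0],
                          distances=[2.0, 5.0, 9.0]))
```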

  15. Dynamics of scene representations in the human brain revealed by magnetoencephalography and deep neural networks

    PubMed Central

    Cichy, Radoslaw Martin; Khosla, Aditya; Pantazis, Dimitrios; Oliva, Aude

    2017-01-01

    Human scene recognition is a rapid multistep process evolving over time from single scene image to spatial layout processing. We used multivariate pattern analyses on magnetoencephalography (MEG) data to unravel the time course of this cortical process. Following an early signal for lower-level visual analysis of single scenes at ~100 ms, we found a marker of real-world scene size, i.e. spatial layout processing, at ~250 ms indexing neural representations robust to changes in unrelated scene properties and viewing conditions. For a quantitative model of how scene size representations may arise in the brain, we compared MEG data to a deep neural network model trained on scene classification. Representations of scene size emerged intrinsically in the model, and resolved emerging neural scene size representation. Together our data provide a first description of an electrophysiological signal for layout processing in humans, and suggest that deep neural networks are a promising framework to investigate how spatial layout representations emerge in the human brain. PMID:27039703

  16. Research on hyperspectral dynamic scene and image sequence simulation

    NASA Astrophysics Data System (ADS)

    Sun, Dandan; Gao, Jiaobo; Sun, Kefeng; Hu, Yu; Li, Yu; Xie, Junhu; Zhang, Lei

    2016-10-01

    This paper presents a simulation method for hyper-spectral dynamic scenes and image sequences, intended for hyper-spectral equipment evaluation and target detection algorithm development. Because of its high spectral resolution, strong band continuity, interference resistance and other advantages, hyper-spectral imaging technology has developed rapidly in recent years and is widely used in many areas such as optoelectronic target detection, military defense and remote sensing systems. Digital imaging simulation, as a crucial part of hardware-in-the-loop simulation, can be applied to testing and evaluating hyper-spectral imaging equipment with lower development cost and a shorter development period. Meanwhile, visual simulation can produce a large amount of original image data under various conditions for hyper-spectral image feature extraction and classification algorithms. Based on a radiation physics model and material characteristic parameters, this paper proposes a digital scene generation method. By building multiple sensor models for different bands and bandwidths, hyper-spectral scenes in the visible, MWIR and LWIR bands, with spectral resolutions of 0.01 μm, 0.05 μm and 0.1 μm, have been simulated. The final dynamic scenes are realistic and render in real time at frame rates up to 100 Hz. An image sequence is obtained by saving all the scene gray data from the same viewpoint. The analysis results show that, in both the infrared and visible bands, the grayscale variations of the simulated hyper-spectral images are consistent with the theoretical analysis.

  17. Sandia National Laboratories: Bumper crop of partnerships

    Science.gov Websites

    IR Dynamics LLC of Santa Fe is working with Sandia's Nelson Bell (1815) through a Cooperative Research and Development Agreement. IR Dynamics is developing thermochromic materials to control infrared radiation.

  18. Uncooled radiometric camera performance

    NASA Astrophysics Data System (ADS)

    Meyer, Bill; Hoelter, T.

    1998-07-01

    Thermal imaging equipment utilizing microbolometer detectors operating at room temperature has found widespread acceptance in both military and commercial applications. Uncooled camera products are becoming effective solutions to applications currently using traditional, photonic infrared sensors. The reduced power consumption and decreased mechanical complexity offered by uncooled cameras have enabled highly reliable, low-cost, hand-held instruments. Initially these instruments displayed only relative temperature differences, which limited their usefulness in applications such as thermography. Radiometrically calibrated microbolometer instruments are now available. The ExplorIR Thermography camera leverages the technology developed for Raytheon Systems Company's first production microbolometer imaging camera, the Sentinel. The ExplorIR camera has a demonstrated temperature measurement accuracy of 4 degrees Celsius or 4% of the measured value (whichever is greater) over scene temperature ranges of minus 20 degrees Celsius to 300 degrees Celsius (minus 20 degrees Celsius to 900 degrees Celsius for extended range models) and camera environmental temperatures of minus 10 degrees Celsius to 40 degrees Celsius. Direct temperature measurement with high resolution video imaging creates some unique challenges when using uncooled detectors. A temperature controlled, field-of-view limiting aperture (cold shield) is not typically included in the small volume dewars used for uncooled detector packages. The lack of a field-of-view shield allows a significant amount of extraneous radiation from the dewar walls and lens body to affect the sensor operation. In addition, the transmission of the Germanium lens elements is a function of ambient temperature. The ExplorIR camera design compensates for these environmental effects while maintaining the accuracy and dynamic range required by today's predictive maintenance and condition monitoring markets.

  19. Predicting top-of-atmosphere radiance for arbitrary viewing geometries from the visible to thermal infrared: generalization to arbitrary average scene temperatures

    NASA Astrophysics Data System (ADS)

    Florio, Christopher J.; Cota, Steve A.; Gaffney, Stephanie K.

    2010-08-01

    In a companion paper presented at this conference we described how The Aerospace Corporation's Parameterized Image Chain Analysis & Simulation SOftware (PICASSO) may be used in conjunction with a limited number of runs of AFRL's MODTRAN4 radiative transfer code, to quickly predict the top-of-atmosphere (TOA) radiance received in the visible through midwave IR (MWIR) by an earth viewing sensor, for any arbitrary combination of solar and sensor elevation angles. The method is particularly useful for large-scale scene simulations where each pixel could have a unique value of reflectance/emissivity and temperature, making the run-time required for direct prediction via MODTRAN4 prohibitive. In order to be self-consistent, the method described requires an atmospheric model (defined, at a minimum, as a set of vertical temperature, pressure and water vapor profiles) that is consistent with the average scene temperature. MODTRAN4 provides only six model atmospheres, ranging from sub-arctic winter to tropical conditions - too few to cover with sufficient temperature resolution the full range of average scene temperatures that might be of interest. Model atmospheres consistent with intermediate temperature values can be difficult to come by, and in any event, their use would be too cumbersome for use in trade studies involving a large number of average scene temperatures. In this paper we describe and assess a method for predicting TOA radiance for any arbitrary average scene temperature, starting from only a limited number of model atmospheres.
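
    One simple way to realize the idea of covering arbitrary average scene temperatures from a limited number of radiative-transfer runs is table interpolation; the sketch below interpolates band-averaged TOA radiance between a few precomputed values. The temperatures and radiances shown are purely illustrative placeholders, not MODTRAN4 output or the paper's method.

```python
import numpy as np

def toa_radiance(scene_temp_k, table_temps_k, table_radiances):
    """Interpolate band-averaged TOA radiance between precomputed runs.

    table_temps_k / table_radiances represent a small set of model atmospheres
    run once off-line; intermediate average scene temperatures are handled by
    interpolation rather than new radiative-transfer runs.
    """
    return np.interp(scene_temp_k, table_temps_k, table_radiances)

# Hypothetical LWIR band radiances (W m^-2 sr^-1) for four model atmospheres.
temps = np.array([257.0, 273.0, 288.0, 300.0])
radiances = np.array([3.1, 4.0, 5.1, 6.2])
print(toa_radiance(282.0, temps, radiances))
```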

  20. Real-time detection of moving objects from moving vehicles using dense stereo and optical flow

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Matthies, Larry

    2004-01-01

    Dynamic scene perception is very important for autonomous vehicles operating around other moving vehicles and humans. Most work on real-time object tracking from moving platforms has used sparse features or assumed flat scene structures. We have recently extended a real-time, dense stereo system to include real-time, dense optical flow, enabling more comprehensive dynamic scene analysis. We describe algorithms to robustly estimate 6-DOF robot egomotion in the presence of moving objects using dense flow and dense stereo. We then use dense stereo and egomotion estimates to identify other moving objects while the robot itself is moving. We present results showing accurate egomotion estimation and detection of moving people and vehicles under general 6-DOF motion of the robot and independently moving objects. The system runs at 18.3 Hz on a 1.4 GHz Pentium M laptop, computing 160x120 disparity maps and optical flow fields, egomotion, and moving object segmentation. We believe this is a significant step toward general unconstrained dynamic scene analysis for mobile robots, as well as for improved position estimation where GPS is unavailable.
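
    The ingredients named in this abstract (dense stereo, dense optical flow, egomotion compensation) can be sketched with standard OpenCV calls. The snippet below replaces the 6-DOF egomotion estimate with a crude median-flow compensation and flags pixels with large residual motion, so it only illustrates the pipeline, not the paper's algorithm; the thresholds are illustrative.

```python
import cv2
import numpy as np

def dense_motion_cue(left_prev, left_curr, right_curr):
    """Compute dense disparity and optical flow, then flag pixels whose flow
    deviates strongly from the dominant (camera-induced) motion."""
    gray = lambda im: cv2.cvtColor(im, cv2.COLOR_BGR2GRAY) if im.ndim == 3 else im
    lp, lc, rc = gray(left_prev), gray(left_curr), gray(right_curr)

    # Dense stereo disparity (SGBM), in pixels.
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
    disparity = sgbm.compute(lc, rc).astype(np.float32) / 16.0

    # Dense optical flow between consecutive left images.
    flow = cv2.calcOpticalFlowFarneback(lp, lc, None, 0.5, 3, 15, 3, 5, 1.2, 0)

    # Crude egomotion compensation: subtract the median flow vector.
    residual = flow - np.median(flow.reshape(-1, 2), axis=0)
    motion_mag = np.linalg.norm(residual, axis=2)

    # Pixels with large residual motion and valid depth are moving-object candidates.
    return (motion_mag > 2.0) & (disparity > 0)
```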

  1. Real-time detection of moving objects from moving vehicles using dense stereo and optical flow

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Matthies, Larry

    2004-01-01

    Dynamic scene perception is very important for autonomous vehicles operating around other moving vehicles and humans. Most work on real-time object tracking from moving platforms has used sparse features or assumed flat scene structures. We have recently extended a real-time, dense stereo system to include real-time, dense optical flow, enabling more comprehensive dynamic scene analysis. We describe algorithms to robustly estimate 6-DOF robot egomotion in the presence of moving objects using dense flow and dense stereo. We then use dense stereo and egomotion estimates to identify other moving objects while the robot itself is moving. We present results showing accurate egomotion estimation and detection of moving people and vehicles under general 6-DOF motion of the robot and independently moving objects. The system runs at 18.3 Hz on a 1.4 GHz Pentium M laptop, computing 160x120 disparity maps and optical flow fields, egomotion, and moving object segmentation. We believe this is a significant step toward general unconstrained dynamic scene analysis for mobile robots, as well as for improved position estimation where GPS is unavailable.

  2. Real-time Detection of Moving Objects from Moving Vehicles Using Dense Stereo and Optical Flow

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Matthies, Larry

    2004-01-01

    Dynamic scene perception is very important for autonomous vehicles operating around other moving vehicles and humans. Most work on real-time object tracking from moving platforms has used sparse features or assumed flat scene structures. We have recently extended a real-time, dense stereo system to include real-time, dense optical flow, enabling more comprehensive dynamic scene analysis. We describe algorithms to robustly estimate 6-DOF robot egomotion in the presence of moving objects using dense flow and dense stereo. We then use dense stereo and egomotion estimates to identify other moving objects while the robot itself is moving. We present results showing accurate egomotion estimation and detection of moving people and vehicles under general 6-DOF motion of the robot and independently moving objects. The system runs at 18.3 Hz on a 1.4 GHz Pentium M laptop, computing 160x120 disparity maps and optical flow fields, egomotion, and moving object segmentation. We believe this is a significant step toward general unconstrained dynamic scene analysis for mobile robots, as well as for improved position estimation where GPS is unavailable.

  3. The EarthCARE mission BBR instrument: ground testing of radiometric performance

    NASA Astrophysics Data System (ADS)

    Caldwell, Martin E.; Spilling, David; Grainger, William; Theocharous, E.; Whalley, Martin; Wright, Nigel; Ward, Anthony K.; Jones, Edward; Hampton, Joseph; Parker, David; Delderfield, John; Pearce, Alan; Richards, Anthony G.; Munro, Grant J.; Poynz Wright, Oliver; Hampson, Matthew; Forster, David

    2017-09-01

    In the EarthCARE mission the BBR (Broad Band Radiometer) has the role of measuring the net earth radiance (i.e. total reflected-solar and thermally-emitted radiances) from the same earth scene as viewed by the other instruments (aerosol lidar, cloud radar and spectral imager). It makes this measurement at a 10 km scene size and in three view angles. It is an imaging radiometer in that it uses a micro-bolometer linear-array detector (pushbroom orientation) to over-sample the required scenes, with the samples being binned on-ground to produce the 10 km radiance data. For the measurements of total earth radiance, the BBR is based on the heritage of Earth Radiation Budget (ERB) instruments. The ground calibration methods for this type of sensor are technically very similar to those of other EO instruments that measure in the thermal IR, but with added challenges: (1) the thermal-IR measurement has to have a much wider spectral range than normal thermal-IR channels to cover the whole earth-emission spectrum, i.e. 4 to >50 microns; (2) the second channel (reflected solar radiance) must also have a broad response to cover almost the whole solar spectrum, i.e. 0.3 to 4 microns, and this solar channel must be measured on the same radiometric calibration as the thermal channel, which in practice is best done by using the same radiometer for both channels. The radiometer is designed to be very broad-band, i.e. 0.3 to 50 microns (more than two decades), to cover both ranges, and a switchable spectral filter (short-pass cutoff at 4 μm) is used to separate the channels. The on-ground measurements which are required to link the calibration of these channels will be described. A calibration of absolute responsivity in each of the two bands is needed; in the thermal-IR channel this is by the normal method of using a calibrated blackbody test source, and in the solar channel it is by means of a narrow-band (laser) source and a reference radiometer (from NPL). A calibration of relative spectral response is also needed, across this wide range, for the purpose of linking the two channels, and for converting the narrow-band solar channel measurement to broad-band.

  4. Dynamic Textures Modeling via Joint Video Dictionary Learning.

    PubMed

    Wei, Xian; Li, Yuanxiang; Shen, Hao; Chen, Fang; Kleinsteuber, Martin; Wang, Zhongfeng

    2017-04-06

    Video representation is an important and challenging task in the computer vision community. In this paper, we consider the problem of modeling and classifying video sequences of dynamic scenes which could be modeled in a dynamic textures (DT) framework. At first, we assume that image frames of a moving scene can be modeled as a Markov random process. We propose a sparse coding framework, named joint video dictionary learning (JVDL), to model a video adaptively. By treating the sparse coefficients of image frames over a learned dictionary as the underlying "states", we learn an efficient and robust linear transition matrix between two adjacent frames of sparse events in time series. Hence, a dynamic scene sequence is represented by an appropriate transition matrix associated with a dictionary. In order to ensure the stability of JVDL, we impose several constraints on such transition matrix and dictionary. The developed framework is able to capture the dynamics of a moving scene by exploring both sparse properties and the temporal correlations of consecutive video frames. Moreover, such learned JVDL parameters can be used for various DT applications, such as DT synthesis and recognition. Experimental results demonstrate the strong competitiveness of the proposed JVDL approach in comparison with state-of-the-art video representation methods. Especially, it performs significantly better in dealing with DT synthesis and recognition on heavily corrupted data.
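
    The state-transition idea at the core of this framework can be sketched as follows: given per-frame sparse coefficients over a learned dictionary, a regularized least-squares fit yields the linear transition matrix, which can then be rolled forward to synthesize new frames. The sparse coding and dictionary learning steps are assumed done elsewhere, and the ridge term merely stands in for the stability constraints described in the paper.

```python
import numpy as np

def fit_transition(codes, ridge=1e-3):
    """Fit a linear transition matrix A so that codes[:, t+1] ≈ A @ codes[:, t].

    codes: (k, T) matrix of per-frame sparse coefficients over a learned
    dictionary. A sketch of the state-transition idea only.
    """
    S0, S1 = codes[:, :-1], codes[:, 1:]
    k = codes.shape[0]
    # Regularised least squares: A = S1 S0^T (S0 S0^T + lambda I)^-1
    return S1 @ S0.T @ np.linalg.inv(S0 @ S0.T + ridge * np.eye(k))

def synthesize(codes_t0, A, dictionary, n_frames):
    """Roll the learned dynamics forward to synthesize new dynamic-texture frames."""
    frames, s = [], codes_t0.copy()
    for _ in range(n_frames):
        s = A @ s
        frames.append(dictionary @ s)   # decode back to pixel space
    return np.stack(frames)
```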

  5. Automatic acquisition of motion trajectories: tracking hockey players

    NASA Astrophysics Data System (ADS)

    Okuma, Kenji; Little, James J.; Lowe, David

    2003-12-01

    Computer systems that have the capability of analyzing complex and dynamic scenes play an essential role in video annotation. Scenes can be complex in such a way that there are many cluttered objects with different colors, shapes and sizes, and can be dynamic with multiple interacting moving objects and a constantly changing background. In reality, there are many scenes that are complex, dynamic, and challenging enough for computers to describe. These scenes include games of sports, air traffic, car traffic, street intersections, and cloud transformations. Our research is about the challenge of inventing a descriptive computer system that analyzes scenes of hockey games where multiple moving players interact with each other on a constantly moving background due to camera motions. Ultimately, such a computer system should be able to acquire reliable data by extracting the players' motion as their trajectories, query them by analyzing the descriptive information of the data, and predict the motions of some hockey players based on the result of the query. Among these three major aspects of the system, we primarily focus on visual information of the scenes, that is, how to automatically acquire motion trajectories of hockey players from video. More accurately, we automatically analyze the hockey scenes by estimating parameters (i.e., pan, tilt, and zoom) of the broadcast cameras, tracking hockey players in those scenes, and constructing a visual description of the data by displaying trajectories of those players. Many technical problems in vision, such as fast and unpredictable player motions and rapid camera motions, make our challenge worth tackling. To the best of our knowledge, there have not been any automatic video annotation systems for hockey developed in the past. Although there are many obstacles to overcome, our efforts and accomplishments will hopefully establish the infrastructure of an automatic hockey annotation system and become a milestone for research in automatic video annotation in this domain.

  6. The design and application of a multi-band IR imager

    NASA Astrophysics Data System (ADS)

    Li, Lijuan

    2018-02-01

    Multi-band IR imaging systems have many applications in security, national defense, and the petroleum and gas industry, so the relevant technologies have been receiving more and more attention in recent years. When used in missile warning and missile seeker systems, multi-band IR imaging technology offers high target recognition capability and a low false alarm rate if suitable spectral bands are selected. Compared with a traditional single-band IR imager, a multi-band IR imager can make use of spectral features in addition to space and time domain features to discriminate targets from background clutter and decoys. One key task is therefore to select the right spectral bands, in which the feature difference between targets and false targets is evident and can be well utilized. A multi-band IR imager is a useful instrument for collecting multi-band IR images of targets, backgrounds and decoys for spectral band selection studies, at lower cost and with more adjustable parameters than a commercial imaging spectrometer. In this paper, a multi-band IR imaging system is developed that can collect images of various scenes in four spectral bands at each acquisition and can be extended to other short-wave and mid-wave IR band combinations by changing filter groups. The multi-band IR imaging system consists of a broadband optical system, a cryogenic large-array InSb detector, a spinning filter wheel and an electronic processing system. The system's performance has been tested in real data collection experiments.

  7. High-speed three-dimensional shape measurement for dynamic scenes using bi-frequency tripolar pulse-width-modulation fringe projection

    NASA Astrophysics Data System (ADS)

    Zuo, Chao; Chen, Qian; Gu, Guohua; Feng, Shijie; Feng, Fangxiaoyu; Li, Rubin; Shen, Guochen

    2013-08-01

    This paper introduces a high-speed three-dimensional (3-D) shape measurement technique for dynamic scenes by using bi-frequency tripolar pulse-width-modulation (TPWM) fringe projection. Two wrapped phase maps with different wavelengths can be obtained simultaneously by our bi-frequency phase-shifting algorithm. Then the two phase maps are unwrapped using a simple look-up-table based number-theoretical approach. To guarantee the robustness of phase unwrapping as well as the high sinusoidality of projected patterns, TPWM technique is employed to generate ideal fringe patterns with slight defocus. We detailed our technique, including its principle, pattern design, and system setup. Several experiments on dynamic scenes were performed, verifying that our method can achieve a speed of 1250 frames per second for fast, dense, and accurate 3-D measurements.
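
    The bi-frequency unwrapping step can be illustrated with the standard two-wavelength scheme shown below, which recovers the fringe order of the high-frequency phase from the beat phase of the two wrapped maps. The paper's look-up-table number-theoretical variant is not reproduced, and the fringe periods are assumed inputs.

```python
import numpy as np

def dual_wavelength_unwrap(phi_hi, phi_lo, p_hi, p_lo):
    """Temporal phase unwrapping from two wrapped phase maps.

    phi_hi, phi_lo: wrapped phases in (-pi, pi] from high/low frequency fringes
    with periods p_hi < p_lo (in pixels). The equivalent (beat) period must
    cover the measurement range for the beat phase to be unambiguous.
    """
    p_eq = p_hi * p_lo / (p_lo - p_hi)             # beat (equivalent) period
    phi_eq = np.mod(phi_hi - phi_lo, 2 * np.pi)    # unambiguous beat phase
    # Fringe order of the high-frequency phase from the beat phase.
    k = np.round((phi_eq * p_eq / p_hi - phi_hi) / (2 * np.pi))
    return phi_hi + 2 * np.pi * k                  # unwrapped high-frequency phase
```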

  8. Processing of multispectral thermal IR data for geologic applications

    NASA Technical Reports Server (NTRS)

    Kahle, A. B.; Madura, D. P.; Soha, J. M.

    1979-01-01

    Multispectral thermal IR data were acquired with a 24-channel scanner flown in an aircraft over the E. Tintic Utah mining district. These digital image data required extensive computer processing in order to put the information into a format useful for a geologic photointerpreter. Simple enhancement procedures were not sufficient to reveal the total information content because the data were highly correlated in all channels. The data were shown to be dominated by temperature variations across the scene, while the much more subtle spectral variations between the different rock types were of interest. The image processing techniques employed to analyze these data are described.

  9. Visualization of fluid dynamics at NASA Ames

    NASA Technical Reports Server (NTRS)

    Watson, Val

    1989-01-01

    The hardware and software currently used for visualization of fluid dynamics at NASA Ames is described. The software includes programs to create scenes (for example particle traces representing the flow over an aircraft), programs to interactively view the scenes, and programs to control the creation of video tapes and 16mm movies. The hardware includes high performance graphics workstations, a high speed network, digital video equipment, and film recorders.

  10. Noise and contrast comparison of visual and infrared images of hazards as seen inside an automobile

    NASA Astrophysics Data System (ADS)

    Meitzler, Thomas J.; Bryk, Darryl; Sohn, Eui J.; Lane, Kimberly; Bednarz, David; Jusela, Daniel; Ebenstein, Samuel; Smith, Gregory H.; Rodin, Yelena; Rankin, James S., II; Samman, Amer M.

    2000-06-01

    The purpose of this experiment was to quantitatively measure driver performance for detecting potential road hazards in visual and infrared (IR) imagery of road scenes containing varying combinations of contrast and noise. This pilot test is a first step toward comparing various IR and visual sensors and displays for the purpose of an enhanced vision system to go inside the driver compartment. Visible and IR road imagery obtained was displayed on a large screen and on a PC monitor and subject response times were recorded. Based on the response time, detection probabilities were computed and compared to the known time of occurrence of a driving hazard. The goal was to see what combinations of sensor, contrast and noise enable subjects to have a higher detection probability of potential driving hazards.

  11. Effect of Clouds on Apertures of Space-based Air Fluorescence Detectors

    NASA Technical Reports Server (NTRS)

    Sokolsky, P.; Krizmanic, J.

    2003-01-01

    Space-based ultra-high-energy cosmic ray detectors observe fluorescence light from extensive air showers produced by these particles in the troposphere. Clouds can scatter and absorb this light and produce systematic errors in energy determination and spectrum normalization. We study the possibility of using IR remote sensing data from MODIS and GOES satellites to delimit clear areas of the atmosphere. The efficiency for detecting ultra-high-energy cosmic rays whose showers do not intersect clouds is determined for real, night-time cloud scenes. We use the MODIS SST cloud mask product to define clear pixels for cloud scenes along the equator and use the OWL Monte Carlo to generate showers in the cloud scenes. We find the efficiency for cloud-free showers with closest approach of three pixels to a cloudy pixel is 6.5% exclusive of other factors. We conclude that defining a totally cloud-free aperture reduces the sensitivity of space-based fluorescence detectors to unacceptably small levels.
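
    The geometric acceptance cut described above (showers at least three pixels from any cloudy pixel) reduces to a distance-transform test on the cloud mask. A sketch under that assumption follows, with the cloud mask and simulated shower positions supplied by the caller; the full OWL Monte Carlo is not reproduced.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def cloud_free_efficiency(cloud_mask, shower_pixels, min_pixels=3):
    """Fraction of simulated showers landing at least `min_pixels` from a cloud.

    cloud_mask: boolean 2-D array, True where the satellite cloud mask reports
    cloud. shower_pixels: (N, 2) integer row/column positions of showers.
    """
    # Distance from every pixel to the nearest cloudy pixel.
    dist_to_cloud = distance_transform_edt(~cloud_mask)
    r, c = shower_pixels[:, 0], shower_pixels[:, 1]
    return float(np.mean(dist_to_cloud[r, c] >= min_pixels))
```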

  12. Library Search Prefilters for Vehicle Manufacturers to Assist in the Forensic Examination of Automotive Paints.

    PubMed

    Lavine, Barry K; White, Collin G; Ding, Tao

    2018-03-01

    Pattern recognition techniques have been applied to the infrared (IR) spectral libraries of the Paint Data Query (PDQ) database to differentiate between nonidentical but similar IR spectra of automotive paints. To tackle the problem of library searching, search prefilters were developed to identify the vehicle make from IR spectra of the clear coat, surfacer-primer, and e-coat layers. To develop these search prefilters with the appropriate degree of accuracy, IR spectra from the PDQ database were preprocessed using the discrete wavelet transform to enhance subtle but significant features in the IR spectral data. Wavelet coefficients characteristic of vehicle make were identified using a genetic algorithm for pattern recognition and feature selection. Search prefilters to identify automotive manufacturer through IR spectra obtained from a paint chip recovered at a crime scene were developed using 1596 original manufacturer's paint systems spanning six makes (General Motors, Chrysler, Ford, Honda, Nissan, and Toyota) within a limited production year range (2000-2006). Search prefilters for vehicle manufacturer that were developed as part of this study were successfully validated using IR spectra obtained directly from the PDQ database. Information obtained from these search prefilters can serve to quantify the discrimination power of original automotive paint encountered in casework and further efforts to succinctly communicate trace evidential significance to the courts.
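
    A sketch of the preprocessing stage is shown below: each IR spectrum is decomposed with a discrete wavelet transform and the coefficients concatenated into a feature vector. The genetic-algorithm feature selection used in the study is not implemented here; a simple variance ranking is included only as an explicitly labeled stand-in, and the wavelet and decomposition level are assumptions.

```python
import numpy as np
import pywt

def wavelet_features(spectra, wavelet="sym4", level=4):
    """Discrete wavelet transform of IR spectra, returned as feature vectors.

    spectra: (n_samples, n_points) array of absorbance values, all spectra
    sampled on the same axis.
    """
    feats = []
    for s in np.atleast_2d(spectra):
        coeffs = pywt.wavedec(s, wavelet, level=level)
        feats.append(np.concatenate(coeffs))
    return np.vstack(feats)

def select_features(features, n_keep=50):
    """Crude stand-in for the GA-driven selection: keep the coefficients with
    the highest variance across the training spectra."""
    order = np.argsort(features.var(axis=0))[::-1]
    return order[:n_keep]
```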

  13. Dynamics of scene representations in the human brain revealed by magnetoencephalography and deep neural networks.

    PubMed

    Martin Cichy, Radoslaw; Khosla, Aditya; Pantazis, Dimitrios; Oliva, Aude

    2017-06-01

    Human scene recognition is a rapid multistep process evolving over time from single scene image to spatial layout processing. We used multivariate pattern analyses on magnetoencephalography (MEG) data to unravel the time course of this cortical process. Following an early signal for lower-level visual analysis of single scenes at ~100ms, we found a marker of real-world scene size, i.e. spatial layout processing, at ~250ms indexing neural representations robust to changes in unrelated scene properties and viewing conditions. For a quantitative model of how scene size representations may arise in the brain, we compared MEG data to a deep neural network model trained on scene classification. Representations of scene size emerged intrinsically in the model, and resolved emerging neural scene size representation. Together our data provide a first description of an electrophysiological signal for layout processing in humans, and suggest that deep neural networks are a promising framework to investigate how spatial layout representations emerge in the human brain. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  14. Can IR scene projectors reduce total system cost?

    NASA Astrophysics Data System (ADS)

    Ginn, Robert; Solomon, Steven

    2006-05-01

    There is an incredible amount of system engineering involved in turning the typical infrared system needs of probability of detection, probability of identification, and probability of false alarm into focal plane array (FPA) requirements of noise equivalent irradiance (NEI), modulation transfer function (MTF), fixed pattern noise (FPN), and defective pixels. Unfortunately, there are no analytic solutions to this problem, so many approximations and plenty of "seat of the pants" engineering are employed. This leads to conservative specifications, which needlessly drive up system costs by increasing system engineering costs, reducing FPA yields, increasing test costs, increasing rework, and causing never-ending renegotiation of requirements in an effort to rein in costs. These issues do not include the added complexity to the FPA factory manager of trying to meet varied, and changing, requirements for similar products because different customers have made different approximations and flowed down different specifications. Scene generation technology may well be mature and cost-effective enough to generate considerable overall savings for FPA based systems. We will compare the costs and capabilities of various existing scene generation systems and estimate the potential savings if implemented at several locations in the IR system fabrication cycle. The costs of implementing this new testing methodology will be compared to the probable savings in systems engineering, test, rework, yield improvement and others. The diverse requirements and techniques required for testing missile warning systems, missile seekers, and FLIRs will be defined. Last, we will discuss both the hardware and software requirements necessary to meet the new test paradigm and discuss additional cost improvements related to the incorporation of these technologies.

  15. Effects of Scene Modulation Image Blur and Noise Upon Human Target Acquisition Performance.

    DTIC Science & Technology

    1997-06-01

    AFRL-HE-WP-TR-1998-0012, United States Air Force Research Laboratory, interim report (July 1996 - August 1996): Effects of Scene Modulation, Image Blur and Noise Upon Human Target Acquisition. The dilemma in image transmission and display is that we must compromise between the conflicting constraints of dynamic range and noise.

  16. Far-IR transparency and dynamic infrared signature control with novel conducting polymer systems

    NASA Astrophysics Data System (ADS)

    Chandrasekhar, Prasanna; Dooley, T. J.

    1995-09-01

    Materials which possess transparency, coupled with active controllability of this transparency in the infrared (IR), are today an increasingly important requirement for varied applications. These applications include windows for IR sensors, IR-region flat panel displays used in camouflage as well as in communication and sight through night-vision goggles, coatings with dynamically controllable IR-emissivity, and thermal conservation coatings. Among stringent requirements for these applications are large dynamic ranges (color contrast), 'multi-color' or broad-band characteristics, extended cyclability, long memory retention, matrix addressability, small area fabricability, low power consumption, and environmental stability. Among materials possessing the requirements for variation of IR signature, conducting polymers (CPs) appear to be the only materials with dynamic, actively controllable signature and acceptable dynamic range. Conventional CPs such as poly(alkyl thiophene), poly(pyrrole) or poly(aniline) show very limited dynamic range, especially in the far-IR, while also showing poor transparency. We have developed a number of novel CP systems ('system' implying the CP, the selected dopant, the synthesis method, and the electrolyte) with very wide dynamic range (up to 90% in both important IR regions, 3-5 μm and 8-12 μm), high cyclability (to 10^5 cycles with less than 10% optical degradation), nearly indefinite optical memory retention, matrix addressability of multi-pixel displays, very wide operating temperature and excellent environmental stability, low charge capacity, and processability into areas from less than 1 mm2 to more than 100 cm2. The criteria used to design and arrive at these CP systems, together with representative IR signature data, are presented in this paper.

  17. NASA Fundamental Remote Sensing Science Research Program

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The NASA Fundamental Remote Sensing Research Program is described. The program provides a dynamic scientific base which is continually broadened and from which future applied research and development can draw support. In particular, the overall objectives and current studies of the scene radiation and atmospheric effect characterization (SRAEC) project are reviewed. The SRAEC research can be generically structured into four types of activities including observation of phenomena, empirical characterization, analytical modeling, and scene radiation analysis and synthesis. The first three activities are the means by which the goal of scene radiation analysis and synthesis is achieved, and thus are considered priority activities during the early phases of the current project. Scene radiation analysis refers to the extraction of information describing the biogeophysical attributes of the scene from the spectral, spatial, and temporal radiance characteristics of the scene including the atmosphere. Scene radiation synthesis is the generation of realistic spectral, spatial, and temporal radiance values for a scene with a given set of biogeophysical attributes and atmospheric conditions.

  18. Infrared target simulation environment for pattern recognition applications

    NASA Astrophysics Data System (ADS)

    Savakis, Andreas E.; George, Nicholas

    1994-07-01

    The generation of complete databases of IR data is extremely useful for training human observers and testing automatic pattern recognition algorithms. Field data may be used for realism, but require expensive and time-consuming procedures. IR scene simulation methods have emerged as a more economical and efficient alternative for the generation of IR databases. A novel approach to IR target simulation is presented in this paper. Model vehicles at 1:24 scale are used for the simulation of real targets. The temperature profile of the model vehicles is controlled using resistive circuits which are embedded inside the models. The IR target is recorded using an Inframetrics dual channel IR camera system. Using computer processing, we place the recorded IR target in a prerecorded background. The advantages of this approach are: (1) the range and 3D target aspect can be controlled by the relative position between the camera and model vehicle; (2) the temperature profile can be controlled by adjusting the power delivered to the resistive circuit; (3) the IR sensor effects are directly incorporated in the recording process, because the real sensor is used; (4) the recorded target can be embedded in various types of backgrounds recorded under different weather conditions, times of day, etc. The effectiveness of this approach is demonstrated by generating an IR database of three vehicles which is used to train a back propagation neural network. The neural network is capable of classifying vehicle type, vehicle aspect, and relative temperature with a high degree of accuracy.
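    The compositing step described above, placing a recorded IR target chip into a prerecorded background, can be illustrated with a minimal NumPy sketch. The function name, the mask convention, and the synthetic values below are assumptions for illustration only, not the authors' processing chain.

    ```python
    import numpy as np

    def embed_ir_target(background, target_chip, mask, top_left):
        """Paste a recorded IR target chip into a background frame.

        background  : 2-D float array of background radiometric values
        target_chip : 2-D float array containing the recorded target
        mask        : 2-D boolean array, True where the chip contains target pixels
        top_left    : (row, col) position of the chip in the background
        """
        out = background.copy()
        r, c = top_left
        h, w = target_chip.shape
        region = out[r:r + h, c:c + w]          # view into the output frame
        # Replace only the target pixels; chip background pixels are ignored,
        # so the prerecorded background (weather, time of day) is preserved.
        region[mask] = target_chip[mask]
        return out

    # Example: 480x640 synthetic background with a 64x96 target chip at (200, 300)
    bg = np.random.normal(300.0, 2.0, (480, 640))
    chip = np.random.normal(320.0, 5.0, (64, 96))
    msk = np.zeros((64, 96), dtype=bool)
    msk[16:48, 24:72] = True
    composite = embed_ir_target(bg, chip, msk, (200, 300))
    ```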

  19. Single-shot thermal ghost imaging using wavelength-division multiplexing

    NASA Astrophysics Data System (ADS)

    Deng, Chao; Suo, Jinli; Wang, Yuwang; Zhang, Zhili; Dai, Qionghai

    2018-01-01

    Ghost imaging (GI) is an emerging technique that reconstructs the target scene from its correlated measurements with a sequence of patterns. Restricted by the multi-shot principle, GI usually requires long acquisition times and is limited in the observation of dynamic scenes. To handle this problem, this paper proposes a single-shot thermal ghost imaging scheme via a wavelength-division multiplexing technique. Specifically, we generate thousands of correlated patterns simultaneously by modulating a broadband light source with a wavelength-dependent diffuser. These patterns carry the scene's spatial information, and the correlated photons are then coupled into a spectrometer for the final reconstruction. This technique increases the speed of ghost imaging and promotes its application to dynamic ghost imaging with high scalability and compatibility.
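    The correlation step common to thermal ghost imaging, recovering the scene from the covariance between the known patterns and the single-pixel (bucket) measurements, can be sketched as follows. This is a generic textbook-style reconstruction, not the specific spectrometer-based pipeline of the paper; all names and the toy scene are illustrative.

    ```python
    import numpy as np

    def ghost_reconstruct(patterns, bucket):
        """Correlation-based ghost-imaging reconstruction.

        patterns : (M, H, W) array of illumination patterns
        bucket   : (M,) array of bucket-detector measurements
        Returns an (H, W) estimate proportional to the scene reflectivity.
        """
        patterns = patterns.astype(float)
        bucket = bucket.astype(float)
        # <I * S(x)> - <I><S(x)>  (ensemble covariance over the M patterns)
        mean_pattern = patterns.mean(axis=0)
        mean_bucket = bucket.mean()
        return np.tensordot(bucket - mean_bucket,
                            patterns - mean_pattern, axes=1) / len(bucket)

    # Toy example: random binary patterns probing a small synthetic scene
    rng = np.random.default_rng(0)
    scene = np.zeros((32, 32))
    scene[10:22, 12:20] = 1.0
    P = rng.integers(0, 2, size=(4000, 32, 32)).astype(float)
    b = (P * scene).sum(axis=(1, 2))          # simulated bucket signal
    estimate = ghost_reconstruct(P, b)
    ```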

  20. Noise-cancellation-based nonuniformity correction algorithm for infrared focal-plane arrays.

    PubMed

    Godoy, Sebastián E; Pezoa, Jorge E; Torres, Sergio N

    2008-10-10

    The spatial fixed-pattern noise (FPN) inherently generated in infrared (IR) imaging systems severely compromises the quality of the acquired imagery, even rendering such images inappropriate for some applications. The FPN refers to the inability of the photodetectors in the focal-plane array to render a uniform output image when a uniform-intensity scene is being imaged. We present a noise-cancellation-based algorithm that compensates for the additive component of the FPN. The proposed method relies on the assumption that a source of noise correlated to the additive FPN is available to the IR camera. An important feature of the algorithm is that all the calculations are reduced to a simple equation, which allows for the bias compensation of the raw imagery. The algorithm performance is tested using real IR image sequences and is compared to some classical methodologies. © 2008 Optical Society of America
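    The record above states that the correction reduces to a simple per-pixel equation driven by a noise source correlated with the additive FPN. The sketch below shows a generic adaptive (LMS) noise canceller built on the same idea; it is not the authors' exact equation, and the frame/reference shapes and step size are assumptions.

    ```python
    import numpy as np

    def lms_bias_cancel(frames, reference, mu=1e-3):
        """Generic adaptive cancellation of additive FPN using a correlated reference.

        frames    : (T, H, W) raw IR sequence, modeled as scene + additive bias
        reference : (T, H, W) signal correlated with the additive bias
        mu        : LMS step size
        Returns the bias-compensated sequence.
        """
        T, H, W = frames.shape
        w = np.zeros((H, W))                  # per-pixel cancellation weight
        out = np.empty_like(frames, dtype=float)
        for t in range(T):
            est_bias = w * reference[t]       # current bias estimate
            out[t] = frames[t] - est_bias     # compensated frame (residual)
            # LMS update: adjust the weight to reduce residual correlation
            w += mu * out[t] * reference[t]
        return out
    ```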

  1. Methods of editing cloud and atmospheric layer affected pixels from satellite data

    NASA Technical Reports Server (NTRS)

    Nixon, P. R. (Principal Investigator); Wiegand, C. L.; Richardson, A. J.; Johnson, M. P.

    1982-01-01

    Practical methods of computer screening cloud-contaminated pixels from data of various satellite systems are proposed. Examples are given of the location of clouds and representative landscape features in HCMM spectral space of reflectance (VIS) vs emission (IR). Methods of screening out cloud-affected HCMM data are discussed. The character of subvisible absorbing-emitting atmospheric layers (subvisible cirrus or SCi) in HCMM data is considered and radiosonde soundings are examined in relation to the presence of SCi. The statistical characteristics of multispectral meteorological satellite data in clear and SCi-affected areas are discussed. Examples in TIROS-N and NOAA-7 data from several states and Mexico are presented. The VIS-IR cluster screening method for removing clouds is applied to a 262,144-pixel HCMM scene from south Texas and northeast Mexico. The SCi that remain after cluster screening are screened out by applying a statistically determined IR limit.
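    A minimal sketch of the two-stage idea described above: flag bright-and-cold pixels in VIS-IR space as cloud, then apply a statistically determined IR limit to the remaining pixels to catch subvisible cirrus. The thresholds and the sigma factor are illustrative placeholders, not the HCMM values used in the study.

    ```python
    import numpy as np

    def screen_clouds(vis, ir, vis_cloud_min=0.35, ir_cold_max=280.0, sci_sigma=2.0):
        """Flag cloud- and SCi-affected pixels in co-registered VIS/IR imagery.

        vis : reflectance in [0, 1]; ir : brightness temperature in kelvin.
        Returns two boolean masks: (cloud, subvisible_cirrus).
        """
        # Stage 1: bright-and-cold pixels in VIS-IR space are treated as cloud.
        cloud = (vis > vis_cloud_min) & (ir < ir_cold_max)

        # Stage 2: among remaining pixels, unusually cold IR values are flagged
        # as subvisible cirrus using a statistically determined IR limit.
        clear_ir = ir[~cloud]
        ir_limit = clear_ir.mean() - sci_sigma * clear_ir.std()
        sci = (~cloud) & (ir < ir_limit)
        return cloud, sci
    ```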

  2. Contrast, size, and orientation-invariant target detection in infrared imagery

    NASA Astrophysics Data System (ADS)

    Zhou, Yi-Tong; Crawshaw, Richard D.

    1991-08-01

    Automatic target detection in IR imagery is a very difficult task due to variations in target brightness, shape, size, and orientation. In this paper, the authors present a contrast-, size-, and orientation-invariant algorithm based on Gabor functions for detecting targets from a single IR image frame. The algorithm consists of three steps. First, it locates potential targets by using low-resolution Gabor functions which resist noise and background clutter effects; then it removes false targets and eliminates redundant target points based on a similarity measure. These two steps mimic human vision processing but are different from Zeevi's Foveating Vision System. Finally, it uses both low- and high-resolution Gabor functions to verify target existence. This algorithm has been successfully tested on several IR images that contain multiple examples of military vehicles with different sizes, brightness levels, and orientations in various background scenes.
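    The first step, locating candidate targets with low-resolution Gabor functions, can be illustrated with the sketch below. The kernel construction and the energy threshold are generic choices for illustration, not the parameters or similarity measure of the paper.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def gabor_kernel(sigma, wavelength, theta, size=None):
        """Real (even) Gabor kernel: a cosine carrier under a Gaussian envelope."""
        if size is None:
            size = int(6 * sigma) | 1                 # odd support, about +/- 3 sigma
        ax = np.arange(size) - size // 2
        x, y = np.meshgrid(ax, ax)
        xr = x * np.cos(theta) + y * np.sin(theta)
        envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
        return envelope * np.cos(2 * np.pi * xr / wavelength)

    def detect_candidates(image, sigmas=(6.0,), wavelengths=(12.0,),
                          thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4), k=3.0):
        """Locate potential targets as pixels with high low-resolution Gabor energy."""
        energy = np.zeros_like(image, dtype=float)
        for s in sigmas:
            for lam in wavelengths:
                for th in thetas:
                    resp = fftconvolve(image, gabor_kernel(s, lam, th), mode='same')
                    energy += resp**2
        # Candidates: energy well above scene-wide statistics. True contrast
        # invariance would additionally require local normalization, omitted here.
        return energy > energy.mean() + k * energy.std()
    ```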

  3. Contributions of low- and high-level properties to neural processing of visual scenes in the human brain.

    PubMed

    Groen, Iris I A; Silson, Edward H; Baker, Chris I

    2017-02-19

    Visual scene analysis in humans has been characterized by the presence of regions in extrastriate cortex that are selectively responsive to scenes compared with objects or faces. While these regions have often been interpreted as representing high-level properties of scenes (e.g. category), they also exhibit substantial sensitivity to low-level (e.g. spatial frequency) and mid-level (e.g. spatial layout) properties, and it is unclear how these disparate findings can be united in a single framework. In this opinion piece, we suggest that this problem can be resolved by questioning the utility of the classical low- to high-level framework of visual perception for scene processing, and discuss why low- and mid-level properties may be particularly diagnostic for the behavioural goals specific to scene perception as compared to object recognition. In particular, we highlight the contributions of low-level vision to scene representation by reviewing (i) retinotopic biases and receptive field properties of scene-selective regions and (ii) the temporal dynamics of scene perception that demonstrate overlap of low- and mid-level feature representations with those of scene category. We discuss the relevance of these findings for scene perception and suggest a more expansive framework for visual scene analysis. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Author(s).

  4. Contributions of low- and high-level properties to neural processing of visual scenes in the human brain

    PubMed Central

    2017-01-01

    Visual scene analysis in humans has been characterized by the presence of regions in extrastriate cortex that are selectively responsive to scenes compared with objects or faces. While these regions have often been interpreted as representing high-level properties of scenes (e.g. category), they also exhibit substantial sensitivity to low-level (e.g. spatial frequency) and mid-level (e.g. spatial layout) properties, and it is unclear how these disparate findings can be united in a single framework. In this opinion piece, we suggest that this problem can be resolved by questioning the utility of the classical low- to high-level framework of visual perception for scene processing, and discuss why low- and mid-level properties may be particularly diagnostic for the behavioural goals specific to scene perception as compared to object recognition. In particular, we highlight the contributions of low-level vision to scene representation by reviewing (i) retinotopic biases and receptive field properties of scene-selective regions and (ii) the temporal dynamics of scene perception that demonstrate overlap of low- and mid-level feature representations with those of scene category. We discuss the relevance of these findings for scene perception and suggest a more expansive framework for visual scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044013

  5. Infrared negative luminescent devices and higher operating temperature detectors

    NASA Astrophysics Data System (ADS)

    Nash, G. R.; Gordon, N. T.; Hall, D. J.; Ashby, M. K.; Little, J. C.; Masterton, G.; Hails, J. E.; Giess, J.; Haworth, L.; Emeny, M. T.; Ashley, T.

    2004-01-01

    Infrared LEDs and negative luminescent devices, where less light is emitted than in equilibrium, have been attracting an increasing amount of interest recently. They have a variety of applications, including as a ‘source’ of IR radiation for gas sensing; radiation shielding for, and non-uniformity correction of, high sensitivity staring infrared detectors; and dynamic infrared scene projection. Similarly, infrared (IR) detectors are used in arrays for thermal imaging and, discretely, in applications such as gas sensing. Multi-layer heterostructure epitaxy enables the growth of both types of device using designs in which the electronic processes can be precisely controlled and techniques such as carrier exclusion and extraction can be implemented. This enables detectors to be made which offer good performance at higher than normal operating temperatures, and efficient negative luminescent devices to be made which simulate a range of effective temperatures whilst operating uncooled. In both cases, however, additional performance benefits can be achieved by integrating optical concentrators around the diodes to reduce the volume of semiconductor material, and so minimise the thermally activated generation-recombination processes which compete with radiative mechanisms. The integrated concentrators are in the form of Winston cones, which can be formed using an iterative dry etch process involving methane/hydrogen and oxygen. We present results on negative luminescence in the mid- and long-IR wavebands, from devices made from indium antimonide and mercury cadmium telluride, where the aim is sizes greater than 1 cm×1 cm. We also discuss progress on, and the potential for, operating temperature and/or sensitivity improvement of detectors, where very high-performance imaging is anticipated from systems which require no mechanical cooling.

  6. Infrared negative luminescent devices and higher operating temperature detectors

    NASA Astrophysics Data System (ADS)

    Nash, Geoff R.; Gordon, Neil T.; Hall, David J.; Little, J. Chris; Masterton, G.; Hails, J. E.; Giess, J.; Haworth, L.; Emeny, Martin T.; Ashley, Tim

    2004-02-01

    Infrared LEDs and negative luminescent devices, where less light is emitted than in equilibrium, have been attracting an increasing amount of interest recently. They have a variety of applications, including as a 'source' of IR radiation for gas sensing; radiation shielding for, and non-uniformity correction of, high-sensitivity staring infrared detectors; and dynamic infrared scene projection. Similarly, IR detectors are used in arrays for thermal imaging and, discretely, in applications such as gas sensing. Multi-layer heterostructure epitaxy enables the growth of both types of device using designs in which the electronic processes can be precisely controlled and techniques such as carrier exclusion and extraction can be implemented. This enables detectors to be made which offer good performance at higher than normal operating temperatures, and efficient negative luminescent devices to be made which simulate a range of effective temperatures whilst operating uncooled. In both cases, however, additional performance benefits can be achieved by integrating optical concentrators around the diodes to reduce the volume of semiconductor material, and so minimise the thermally activated generation-recombination processes which compete with radiative mechanisms. The integrated concentrators are in the form of Winston cones, which can be formed using an iterative dry etch process involving methane/hydrogen and oxygen. We will present results on negative luminescence in the mid- and long-IR wavebands, from devices made from indium antimonide and mercury cadmium telluride, where the aim is sizes greater than 1 cm × 1 cm. We will also discuss progress on, and the potential for, operating temperature and/or sensitivity improvement of detectors, where very high-performance imaging is anticipated from systems which require no mechanical cooling.

  7. Infrared Negative Luminescent Devices and Higher Operating Temperature Detectors

    NASA Astrophysics Data System (ADS)

    Ashley, Tim

    2003-03-01

    Infrared LEDs and negative luminescent devices, where less light is emitted than in equilibrium, have been attracting an increasing amount of interest recently. They have a variety of applications, including as a 'source' of IR radiation for gas sensing; radiation shielding for, and non-uniformity correction of, high-sensitivity staring infrared detectors; and dynamic infrared scene projection. Similarly, IR detectors are used in arrays for thermal imaging and, discretely, in applications such as gas sensing. Multi-layer heterostructure epitaxy enables the growth of both types of device using designs in which the electronic processes can be precisely controlled and techniques such as carrier exclusion and extraction can be implemented. This enables detectors to be made which offer good performance at higher than normal operating temperatures, and efficient negative luminescent devices to be made which simulate a range of effective temperatures whilst operating uncooled. In both cases, however, additional performance benefits can be achieved by integrating optical concentrators around the diodes to reduce the volume of semiconductor material, and so minimise the thermally activated generation-recombination processes which compete with radiative mechanisms. The integrated concentrators are in the form of Winston cones, which can be formed using an iterative dry etch process involving methane/hydrogen and oxygen. We will present results on negative luminescence in the mid- and long-IR wavebands, from devices made from indium antimonide and mercury cadmium telluride, where the aim is sizes greater than 1 cm × 1 cm. We will also discuss progress on, and the potential for, operating temperature and/or sensitivity improvement of detectors, where very high-performance imaging is anticipated from systems which require no mechanical cooling.

  8. Autonomous UAV-Based Mapping of Large-Scale Urban Firefights

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snarski, S; Scheibner, K F; Shaw, S

    2006-03-09

    This paper describes experimental results from a live-fire data collect designed to demonstrate the ability of IR and acoustic sensing systems to detect and map high-volume gunfire events from tactical UAVs. The data collect supports an exploratory study of the FightSight concept in which an autonomous UAV-based sensor exploitation and decision support capability is being proposed to provide dynamic situational awareness for large-scale battalion-level firefights in cluttered urban environments. FightSight integrates IR imagery, acoustic data, and 3D scene context data with prior time information in a multi-level, multi-step probabilistic-based fusion process to reliably locate and map the array of urban firing events and firepower movements and trends associated with the evolving urban battlefield situation. Described here are sensor results from live-fire experiments involving simultaneous firing of multiple sub/super-sonic weapons (2-AK47, 2-M16, 1 Beretta, 1 Mortar, 1 rocket) with high optical and acoustic clutter at ranges up to 400 m. Sensor-shooter-target configurations and clutter were designed to simulate UAV sensing conditions for a high-intensity firefight in an urban environment. Sensor systems evaluated were an IR bullet tracking system by Lawrence Livermore National Laboratory (LLNL) and an acoustic gunshot detection system by Planning Systems, Inc. (PSI). The results demonstrate convincingly the ability for the LLNL and PSI sensor systems to accurately detect, separate, and localize multiple shooters and the associated shot directions during a high-intensity firefight (77 rounds in 5 sec) in a high acoustic and optical clutter environment with no false alarms. Preliminary fusion processing was also examined that demonstrated an ability to distinguish co-located shooters (shooter density), range to <0.5 m accuracy at 400 m, and weapon type.

  9. Background Registration-Based Adaptive Noise Filtering of LWIR/MWIR Imaging Sensors for UAV Applications

    PubMed Central

    Kim, Byeong Hak; Kim, Min Young; Chae, You Seong

    2017-01-01

    Unmanned aerial vehicles (UAVs) are equipped with optical systems including an infrared (IR) camera such as electro-optical IR (EO/IR), target acquisition and designation sights (TADS), or forward-looking IR (FLIR). However, images obtained from IR cameras are subject to noise such as dead pixels, lines, and fixed pattern noise. Nonuniformity correction (NUC) is a widely employed method to reduce noise in IR images, but it has limitations in removing noise that occurs during operation. Methods have been proposed to overcome the limitations of the NUC method, such as two-point correction (TPC) and scene-based NUC (SBNUC). However, these methods still suffer from unfixed pattern noise. In this paper, a background registration-based adaptive noise filtering (BRANF) method is proposed to overcome the limitations of conventional methods. The proposed BRANF method utilizes background registration processing and robust principal component analysis (RPCA). In addition, image quality verification methods are proposed that can measure the noise filtering performance quantitatively without ground truth images. Experiments were performed for performance verification with middle wave infrared (MWIR) and long wave infrared (LWIR) images obtained from practical military optical systems. As a result, it is found that the image quality improvement rate of BRANF is 30% higher than that of conventional NUC. PMID:29280970

  10. Background Registration-Based Adaptive Noise Filtering of LWIR/MWIR Imaging Sensors for UAV Applications.

    PubMed

    Kim, Byeong Hak; Kim, Min Young; Chae, You Seong

    2017-12-27

    Unmanned aerial vehicles (UAVs) are equipped with optical systems including an infrared (IR) camera such as electro-optical IR (EO/IR), target acquisition and designation sights (TADS), or forward-looking IR (FLIR). However, images obtained from IR cameras are subject to noise such as dead pixels, lines, and fixed pattern noise. Nonuniformity correction (NUC) is a widely employed method to reduce noise in IR images, but it has limitations in removing noise that occurs during operation. Methods have been proposed to overcome the limitations of the NUC method, such as two-point correction (TPC) and scene-based NUC (SBNUC). However, these methods still suffer from unfixed pattern noise. In this paper, a background registration-based adaptive noise filtering (BRANF) method is proposed to overcome the limitations of conventional methods. The proposed BRANF method utilizes background registration processing and robust principal component analysis (RPCA). In addition, image quality verification methods are proposed that can measure the noise filtering performance quantitatively without ground truth images. Experiments were performed for performance verification with middle wave infrared (MWIR) and long wave infrared (LWIR) images obtained from practical military optical systems. As a result, it is found that the image quality improvement rate of BRANF is 30% higher than that of conventional NUC.

  11. A survey of infrared and visual image fusion methods

    NASA Astrophysics Data System (ADS)

    Jin, Xin; Jiang, Qian; Yao, Shaowen; Zhou, Dongming; Nie, Rencan; Hai, Jinjin; He, Kangjian

    2017-09-01

    Infrared (IR) and visual (VI) image fusion is designed to fuse multiple source images into a comprehensive image to boost imaging quality and reduce redundant information, and it is widely used in various imaging equipment to improve the visual ability of humans and robots. The accurate, reliable, and complementary descriptions of the scene in fused images make these techniques widely applicable in various fields. In recent years, a large number of fusion methods for IR and VI images have been proposed due to ever-growing demands and progress in image representation methods; however, no integrated survey of this field has been published in the last several years. Therefore, we survey the algorithmic developments in IR and VI image fusion. In this paper, we first characterize applications based on IR and VI image fusion to present an overview of the research status. Then we present a synthesized survey of the state of the art. Third, the frequently used image fusion quality measures are introduced. Fourth, we perform experiments on typical methods and provide the corresponding analysis. Finally, we summarize the corresponding tendencies and challenges in IR and VI image fusion. This survey concludes that although various IR and VI image fusion methods have been proposed, there remain further improvements and potential research directions in different applications of IR and VI image fusion.

  12. Generalized algebraic scene-based nonuniformity correction algorithm.

    PubMed

    Ratliff, Bradley M; Hayat, Majeed M; Tyo, J Scott

    2005-02-01

    A generalization of a recently developed algebraic scene-based nonuniformity correction algorithm for focal plane array (FPA) sensors is presented. The new technique uses pairs of image frames exhibiting arbitrary one- or two-dimensional translational motion to compute compensator quantities that are then used to remove nonuniformity in the bias of the FPA response. Unlike its predecessor, the generalization does not require the use of either a blackbody calibration target or a shutter. The algorithm has a low computational overhead, lending itself to real-time hardware implementation. The high-quality correction ability of this technique is demonstrated through application to real IR data from both cooled and uncooled infrared FPAs. A theoretical and experimental error analysis is performed to study the accuracy of the bias compensator estimates in the presence of two main sources of error.
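    The core algebraic idea behind this family of scene-based corrections can be sketched in a strongly simplified form: if two frames are related by a known one-pixel translation, the scene samples cancel in the frame difference, leaving differences of per-pixel biases that can be chained into a bias map (up to a constant). The sketch assumes an exact one-pixel horizontal shift and one-dimensional accumulation, which is an illustration of the principle rather than the paper's general two-dimensional estimator.

    ```python
    import numpy as np

    def bias_from_shifted_pair(frame_a, frame_b):
        """Estimate relative bias from two frames related by a one-pixel shift.

        Model: y[i, j] = x[i, j] + b[i, j].  If frame_b sees the scene of
        frame_a moved one column to the right, then x_b[i, j] = x_a[i, j - 1],
        so  y_b[i, j] - y_a[i, j - 1] = b[i, j] - b[i, j - 1].
        Returns a bias map defined up to a per-row constant.
        """
        H, W = frame_a.shape
        bias = np.zeros((H, W))
        diff = frame_b[:, 1:] - frame_a[:, :-1]     # b[:, 1:] - b[:, :-1]
        bias[:, 1:] = np.cumsum(diff, axis=1)       # chain the bias differences
        return bias

    # Correction is then a simple subtraction of the estimated bias:
    # corrected = raw_frame - bias_from_shifted_pair(frame_t, frame_t_plus_1)
    ```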

  13. HDR Imaging for Feature Detection on Detailed Architectural Scenes

    NASA Astrophysics Data System (ADS)

    Kontogianni, G.; Stathopoulou, E. K.; Georgopoulos, A.; Doulamis, A.

    2015-02-01

    3D reconstruction relies on accurate detection, extraction, description and matching of image features. This is even truer for complex architectural scenes, which call for high-quality 3D models without any loss of detail in geometry or color. Illumination conditions influence the radiometric quality of images, as standard sensors cannot properly depict a wide range of intensities in the same scene. Indeed, overexposed or underexposed pixels cause irreplaceable information loss and degrade the digital representation. Images taken under extreme lighting environments may thus be prohibitive for feature detection/extraction and consequently for matching and 3D reconstruction. High Dynamic Range (HDR) images could be helpful for these operators because they broaden the limits of the illumination range that Standard or Low Dynamic Range (SDR/LDR) images can capture and thereby increase the amount of detail contained in the image. Experimental results of this study support this assumption, as they examine state-of-the-art feature detectors applied both on standard dynamic range and HDR images.

  14. High dynamic range image acquisition based on multiplex cameras

    NASA Astrophysics Data System (ADS)

    Zeng, Hairui; Sun, Huayan; Zhang, Tinghua

    2018-03-01

    High dynamic range imaging is an important technology for photoelectric information acquisition: it provides a higher dynamic range and more image detail, and better reflects the real environment's lighting and color information. Currently, high dynamic range image synthesis based on differently exposed image sequences cannot adapt to dynamic scenes: it fails to handle moving targets, resulting in ghosting artifacts. Therefore, a new high dynamic range image acquisition method based on a multiplex camera system is proposed. First, differently exposed image sequences are captured with the camera array; a derivative optical flow method based on color gradients is used to estimate the deviation between images, which are then aligned. Then, a high dynamic range fusion weighting function is established by combining the inverse camera response function with the inter-image deviation, and is applied to generate a high dynamic range image. Experiments show that the proposed method can effectively obtain high dynamic range images of dynamic scenes and achieves good results.
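    The fusion stage described above, weighting aligned exposures in the radiance domain with the inverse camera response, can be sketched in the standard Debevec-style form shown below. The alignment step is assumed to be done, the inverse response curve is assumed known, and the hat-shaped weight is a common illustrative choice rather than the paper's specific weighting function.

    ```python
    import numpy as np

    def merge_exposures(images, exposure_times, inv_crf):
        """Merge aligned multi-exposure images into a log-radiance map.

        images         : (N, H, W) aligned 8-bit frames (single channel)
        exposure_times : (N,) exposure times in seconds
        inv_crf        : length-256 array, g(z) = log exposure for pixel value z
        """
        z = np.asarray(images).astype(int)
        # Hat-shaped weight de-emphasizes under- and over-exposed pixels.
        w = np.minimum(z, 255 - z).astype(float) + 1e-6
        g = inv_crf[z]                                     # (N, H, W)
        log_t = np.log(np.asarray(exposure_times))[:, None, None]
        log_E = (w * (g - log_t)).sum(axis=0) / w.sum(axis=0)
        return log_E                                       # log radiance per pixel
    ```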

  15. Operation Desert Storm: Evaluation of the Air Campaign.

    DTIC Science & Technology

    1997-06-12

    Only report-front-matter fragments of this document are indexed: table-of-contents entries (Weight of Effort and TOE Platform Comparisons; Type of Effort Analysis; Target Sensor Technologies: radar, electro-optical) and glossary entries (DSMAC: Digital Scene Matching Area Correlator; ELE: electrical facilities; EO: electro-optical; EW: electronic warfare; FLIR: forward-looking infrared; FOV...), together with one abstract fragment: "...the exposure of aircraft to clouds, haze, smoke, and high humidity, thereby impeding IR and electro-optical (EO) sensors and laser designators for..."

  16. HDR imaging and color constancy: two sides of the same coin?

    NASA Astrophysics Data System (ADS)

    McCann, John J.

    2011-01-01

    At first, we think that High Dynamic Range (HDR) imaging is a technique for improved recordings of scene radiances. Many of us think that human color constancy is a variation of a camera's automatic white balance algorithm. However, on closer inspection, glare limits the range of light we can detect in cameras and on retinas. All scene regions below middle gray are influenced, more or less, by the glare from the bright scene segments. Instead of accurate radiance reproduction, HDR imaging works well because it preserves the details in the scene's spatial contrast. Similarly, on closer inspection, human color constancy depends on spatial comparisons that synthesize appearances from all the scene segments. Can spatial image processing play similar principal roles in both HDR imaging and color constancy?

  17. Early-Time Excited-State Relaxation Dynamics of Iridium Compounds: Distinct Roles of Electron and Hole Transfer.

    PubMed

    Liu, Xiang-Yang; Zhang, Ya-Hui; Fang, Wei-Hai; Cui, Ganglong

    2018-06-28

    Excited-state and photophysical properties of Ir-containing complexes have been extensively studied because of their potential applications as organic light-emitting diode emitting materials. However, their early-time excited-state relaxation dynamics are less explored computationally. Herein we have employed our recently implemented TDDFT-based generalized surface-hopping dynamics method to simulate excited-state relaxation dynamics of three Ir(III) compounds having distinct ligands. According to our multistate dynamics simulations including five excited singlet states, i.e., Sn (n = 1-5), and ten excited triplet states, i.e., Tn (n = 1-10), we have found that the intersystem crossing (ISC) processes from the Sn to Tn states are very efficient and ultrafast in these three Ir(III) compounds. The corresponding ISC time constants are estimated to be 65, 81, and 140 fs, which are reasonably close to the experimentally measured ca. 80, 80, and 110 fs. In addition, the internal conversion (IC) processes within the respective singlet and triplet manifolds are also ultrafast. These ultrafast IC and ISC processes are caused by large nonadiabatic and spin-orbit couplings, respectively, as well as small energy gaps. Importantly, although these Ir(III) complexes share similar macroscopic phenomena, i.e., ultrafast IC and ISC, their microscopic excited-state relaxation mechanisms and dynamics are qualitatively distinct. Specifically, the dynamical behaviors of the electron and hole, and their roles in modulating the excited-state relaxation dynamics of these Ir(III) compounds, vary from compound to compound. In other words, the electronic properties of the ligands that are coordinated with the central Ir(III) atom play important roles in regulating the microscopic excited-state relaxation dynamics. These gained insights could be useful for rationally designing Ir(III) compounds with excellent photoluminescence.

  18. Active modulation of laser coded systems using near infrared video projection system based on digital micromirror device (DMD)

    NASA Astrophysics Data System (ADS)

    Khalifa, Aly A.; Aly, Hussein A.; El-Sherif, Ashraf F.

    2016-02-01

    Near infrared (NIR) dynamic scene projection systems are used to perform hardware in-the-loop (HWIL) testing of a unit under test operating in the NIR band. The common and complex requirement of a class of these units is a dynamic scene that is spatio-temporally variant. In this paper we apply and investigate active external modulation of NIR laser light in different ranges of temporal frequencies. We use digital micromirror devices (DMDs) integrated as the core of a NIR projection system to generate these dynamic scenes. We deploy the spatial pattern to the DMD controller to simultaneously yield the required amplitude by pulse width modulation (PWM) of the mirror elements as well as the spatio-temporal pattern. Desired modulation and coding of highly stable, high-power visible (red laser at 640 nm) and NIR (diode laser at 976 nm) sources using the combination of different optical masks based on the DMD were achieved. These versatile spatial active coding strategies, for both low and high frequencies in the kHz range for irradiance of different targets, were generated by our system and recorded using VIS-NIR fast cameras. The temporally-modulated laser pulse traces were measured using an array of fast-response photodetectors. Finally, using a high resolution spectrometer, we evaluated the NIR dynamic scene projection system response in terms of preserving the wavelength and band spread of the NIR source after projection.
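    The PWM amplitude rendering mentioned above can be illustrated by decomposing an 8-bit pattern into binary bit planes whose dwell times are weighted by powers of two, so the time-averaged duty cycle of each mirror is proportional to its requested grey level. This is a generic sketch of the scheme; the frame time and the function names are illustrative, not the DMD controller's actual interface.

    ```python
    import numpy as np

    def pwm_bitplanes(pattern_8bit, frame_time_us=1000.0):
        """Decompose an 8-bit pattern into DMD bit planes and dwell times.

        pattern_8bit : (H, W) uint8 desired amplitude pattern
        Returns a list of (bitplane, dwell_time_us) pairs whose weighted sum
        reproduces the requested amplitude over one frame period.
        """
        planes = []
        total_weight = 2**8 - 1
        for bit in range(8):
            plane = (pattern_8bit >> bit) & 1         # binary mirror states
            dwell = frame_time_us * (2**bit) / total_weight
            planes.append((plane.astype(bool), dwell))
        return planes

    # Example: a 256x256 grey-level ramp modulated within a 1 ms frame
    ramp = np.tile(np.arange(256, dtype=np.uint8), (256, 1))
    bitplanes = pwm_bitplanes(ramp)
    ```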

  19. High dynamic range adaptive real-time smart camera: an overview of the HDR-ARTiST project

    NASA Astrophysics Data System (ADS)

    Lapray, Pierre-Jean; Heyrman, Barthélémy; Ginhac, Dominique

    2015-04-01

    Standard cameras capture only a fraction of the information that is visible to the human visual system. This is specifically true for natural scenes including areas of low and high illumination due to transitions between sunlit and shaded areas. When capturing such a scene, many cameras are unable to store the full Dynamic Range (DR), resulting in low quality video where details are concealed in shadows or washed out by sunlight. The imaging technique that can overcome this problem is called HDR (High Dynamic Range) imaging. This paper describes a complete smart camera built around a standard off-the-shelf LDR (Low Dynamic Range) sensor and a Virtex-6 FPGA board. This smart camera, called HDR-ARtiSt (High Dynamic Range Adaptive Real-time Smart camera), is able to produce a real-time HDR live video color stream by recording and combining multiple acquisitions of the same scene while varying the exposure time. This technique appears to be one of the most appropriate and cheapest solutions to enhance the dynamic range of real-life environments. HDR-ARtiSt embeds real-time multiple captures, HDR processing, data display and transfer of a HDR color video for a full sensor resolution (1280 × 1024 pixels) at 60 frames per second. The main contributions of this work are: (1) Multiple Exposure Control (MEC) dedicated to the smart image capture with three alternating exposure times that are dynamically evaluated from frame to frame, (2) Multi-streaming Memory Management Unit (MMMU) dedicated to the memory read/write operations of the three parallel video streams, corresponding to the different exposure times, (3) HDR creation by combining the video streams using a specific hardware version of Debevec's technique, and (4) Global Tone Mapping (GTM) of the HDR scene for display on a standard LCD monitor.
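    As an illustration of the global tone-mapping step mentioned in contribution (4), the sketch below applies a Reinhard-style global operator to an HDR luminance image so it can be shown on an LDR display. This is a generic software sketch, not the camera's hardware implementation; the key value and epsilon are assumptions.

    ```python
    import numpy as np

    def global_tone_map(hdr_luminance, key=0.18, eps=1e-6):
        """Reinhard-style global tone mapping of HDR luminance to [0, 1)."""
        L = np.asarray(hdr_luminance, dtype=float)
        # Scale so the log-average luminance maps to the chosen key value.
        log_avg = np.exp(np.mean(np.log(L + eps)))
        Ls = key * L / log_avg
        return Ls / (1.0 + Ls)              # compressive mapping for LDR display
    ```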

  20. Generative technique for dynamic infrared image sequences

    NASA Astrophysics Data System (ADS)

    Zhang, Qian; Cao, Zhiguo; Zhang, Tianxu

    2001-09-01

    This paper discusses a technique for generating dynamic infrared image sequences. Because an infrared sensor differs from a CCD camera in its imaging mechanism, it forms the infrared image by collecting the infrared radiation of the scene (both target and background). The infrared imaging sensor is strongly affected by atmospheric radiation, environmental radiation, and attenuation along the atmospheric transmission path. The paper therefore first analyzes the imaging influence of these radiation sources and provides the corresponding radiation calculation formulas, treating passive and active scenes separately. It then presents the calculation methods for the passive scene and explains the roles of the scene model, the atmospheric transmission model, and the material physical-attribute databases. Next, based on the infrared imaging model, the design concept, implementation approach, and software framework of the infrared image sequence simulation software are introduced for an SGI workstation. Guided by these ideas, an example of simulated infrared image sequences is presented, using sea and sky as the background, a warship as the target, and an aircraft as the viewpoint. Finally, the simulation is evaluated as a whole and an improvement scheme is proposed.

  1. Forensic botany as a useful tool in the crime scene: Report of a case.

    PubMed

    Margiotta, Gabriele; Bacaro, Giovanni; Carnevali, Eugenia; Severini, Simona; Bacci, Mauro; Gabbrielli, Mario

    2015-08-01

    The ubiquitous presence of plant species makes forensic botany useful for many criminal cases. In particular, bryophytes are useful for forensic investigations because many of them are clonal and widely distributed. Bryophyte shoots can easily become attached to shoes and clothes and can be found on footwear, providing links between a crime scene and individuals. We report the case of the suicide of a young girl that occurred in Siena, Tuscany, Italy. The cause of the traumatic injuries could be ascribed to suicide, to homicide, or to accident. In the absence of eyewitnesses who could testify to the dynamics of the event, the crime scene investigation was fundamental to clarifying what happened. During the scene analysis, some fragments of Tortula muralis Hedw. and Bryum capillare Hedw. were found. The fragments were analyzed by a bryologist in order to compare them with the moss present on the stairs that the victim used immediately before her death. The analysis of these bryophytes found at the crime scene allowed the event to be reconstructed. Even if this evidence is, of course, circumstantial, it can be useful in forensic cases, together with the other evidence, to reconstruct the dynamics of events. Copyright © 2015 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  2. ERTS-1 imagery and native plant distributions

    NASA Technical Reports Server (NTRS)

    Musick, H. B.; Mcginnies, W.; Haase, E.; Lepley, L. K.

    1974-01-01

    A method is developed for using ERTS spectral signature data to determine plant community distribution and phenology without resolving individual plants. An Exotech ERTS radiometer was used near ground level to obtain spectral signatures for a desert plant community, including two shrub species, ground covered with live annuals in April and dead ones in June, and bare ground. It is shown that comparisons of scene types can be made when spectral signatures are expressed as a ratio of red reflectivity to IR reflectivity or when they are plotted as red reflectivity vs. IR reflectivity, in which case the signature clusters of each component are more distinct. A method for correcting and converting the ERTS radiance values to reflectivity values for comparison with ground truth data is appended.
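    The separability described above, expressing each spectral signature as a ratio of red to IR reflectivity (or plotting red against IR), is easy to illustrate. The sketch below only computes the ratio feature; the interpretation in the comment and any thresholds for assigning clusters are illustrative assumptions, not values from the study.

    ```python
    import numpy as np

    def red_ir_ratio(red_reflectivity, ir_reflectivity, eps=1e-6):
        """Return the red/IR reflectivity ratio used to separate scene components.

        Live vegetation typically reflects strongly in the IR and weakly in the
        red, giving a low ratio; bare ground and dead annuals give higher ratios.
        Cluster boundaries would be chosen from the red-vs-IR scatter of the
        measured signatures rather than hard-coded here.
        """
        red = np.asarray(red_reflectivity, dtype=float)
        ir = np.asarray(ir_reflectivity, dtype=float)
        return red / (ir + eps)
    ```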

  3. Imaging spectrometry - Technology and applications

    NASA Technical Reports Server (NTRS)

    Solomon, Jerry E.

    1989-01-01

    The development history and current status of NASA imaging-spectrometer (IS) technology are discussed in a review covering the period 1982-1988. Consideration is given to the Airborne IS first flown in 1982, the second-generation Airborne Visible and IR IS (AVIRIS), the High-Resolution IS being developed for the EOS polar platform, improved two-dimensional focal-plane arrays for the short-wave IR spectral region, and noncollinear acoustooptic tunable filters for use as spectral dispersing elements. Also examined are approaches to solving the data-processing problems posed by the high data volumes of state-of-the-art ISs (e.g., 160 MB per 600 x 600-pixel AVIRIS scene), including intelligent data editing, lossless and lossy data compression techniques, and direct extraction of scientifically meaningful geophysical and biophysical parameters.

  4. The use of an image registration technique in the urban growth monitoring

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Foresti, C.; Deoliveira, M. D. L. N.; Niero, M.; Parreira, E. M. D. M. F.

    1984-01-01

    The use of an image registration program in the studies of urban growth is described. This program permits a quick identification of growing areas with the overlap of the same scene in different periods, and with the use of adequate filters. The city of Brasilia, Brazil, is selected for the test area. The dynamics of Brasilia urban growth are analyzed with the overlap of scenes dated June 1973, 1978 and 1983. The results showed the utilization of the image registration technique for the monitoring of dynamic urban growth.

  5. Eye Movements Reveal the Dynamic Simulation of Speed in Language

    ERIC Educational Resources Information Center

    Speed, Laura J.; Vigliocco, Gabriella

    2014-01-01

    This study investigates how speed of motion is processed in language. In three eye-tracking experiments, participants were presented with visual scenes and spoken sentences describing fast or slow events (e.g., "The lion ambled/dashed to the balloon"). Results showed that looking time to relevant objects in the visual scene was affected…

  6. Enhancement tuning and control for high dynamic range images in multi-scale locally adaptive contrast enhancement algorithms

    NASA Astrophysics Data System (ADS)

    Cvetkovic, Sascha D.; Schirris, Johan; de With, Peter H. N.

    2009-01-01

    For real-time imaging in surveillance applications, visibility of details is of primary importance to ensure customer confidence. If we display High Dynamic Range (HDR) scenes whose contrast spans four or more orders of magnitude on a conventional monitor without additional processing, the results are unacceptable. Compression of the dynamic range is therefore a compulsory part of any high-end video processing chain, because standard monitors are inherently Low Dynamic Range (LDR) devices with at most two orders of magnitude of display dynamic range. In real-time camera processing, many complex scenes are improved with local contrast enhancements, bringing details to the best possible visibility. In this paper, we show how a multi-scale high-frequency enhancement scheme, in which gain is a non-linear function of the detail energy, can be used for the dynamic range compression of HDR real-time video camera signals. We also show the connection of our enhancement scheme to the way the Human Visual System (HVS) processes scenes. Our algorithm simultaneously controls perceived sharpness, ringing ("halo") artifacts (contrast) and noise, resulting in a good balance between detail visibility and freedom from disturbing artifacts. The overall quality enhancement, suitable for both HDR and LDR scenes, is based on a careful selection of the filter types for the multi-band decomposition and a detailed analysis of the signal per frequency band.
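    A minimal sketch of the idea of a multi-band decomposition with an energy-dependent detail gain is given below: each band-pass layer is amplified by a gain that falls off as local detail energy grows, so faint structure is boosted while strong edges are not over-boosted (limiting halos). The filter choices and parameter values are illustrative assumptions, not the paper's tuned design.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def multiscale_enhance(image, sigmas=(1.0, 3.0, 9.0), base_gain=2.0, knee=10.0):
        """Multi-band local contrast enhancement with energy-dependent gain."""
        base = image.astype(float)
        enhanced_details = np.zeros_like(base)
        for sigma in sigmas:
            blurred = gaussian_filter(base, sigma)
            detail = base - blurred                         # band-pass detail layer
            energy = gaussian_filter(np.abs(detail), sigma) # local detail energy
            # Gain shrinks toward 1 where detail energy is already high.
            gain = 1.0 + (base_gain - 1.0) / (1.0 + energy / knee)
            enhanced_details += gain * detail
            base = blurred                                   # continue with coarser band
        return enhanced_details + base                       # add back the low-pass
    ```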

  7. Dynamic binding of visual features by neuronal/stimulus synchrony.

    PubMed

    Iwabuchi, A

    1998-05-01

    When people see a visual scene, certain parts of the visual scene are treated as belonging together and we regard them as a perceptual unit, which is called a "figure". People focus on figures, and the remaining parts of the scene are disregarded as "ground". In Gestalt psychology this process is called "figure-ground segregation". According to current perceptual psychology, a figure is formed by binding various visual features in a scene, and developments in neuroscience have revealed that there are many feature-encoding neurons, which respond to such features specifically. It is not known, however, how the brain binds different features of an object into a coherent visual object representation. Recently, the theory of binding by neuronal synchrony, which argues that feature binding is dynamically mediated by neuronal synchrony of feature-encoding neurons, has been proposed. This review article portrays the problem of figure-ground segregation and features binding, summarizes neurophysiological and psychophysical experiments and theory relevant to feature binding by neuronal/stimulus synchrony, and suggests possible directions for future research on this topic.

  8. A dynamic intron retention program enriched in RNA processing genes regulates gene expression during terminal erythropoiesis

    DOE PAGES

    Pimentel, Harold; Parra, Marilyn; Gee, Sherry L.; ...

    2015-11-03

    Differentiating erythroblasts execute a dynamic alternative splicing program shown here to include extensive and diverse intron retention (IR) events. Cluster analysis revealed hundreds of developmentally dynamic introns that exhibit increased IR in mature erythroblasts, and are enriched in functions related to RNA processing such as SF3B1 spliceosomal factor. Distinct, developmentally-stable IR clusters are enriched in metal-ion binding functions and include mitoferrin genes SLC25A37 and SLC25A28 that are critical for iron homeostasis. Some IR transcripts are abundant, e.g. comprising ~50% of highly-expressed SLC25A37 and SF3B1 transcripts in late erythroblasts, and thereby limiting functional mRNA levels. IR transcripts tested were predominantly nuclear-localized. Splice site strength correlated with IR among stable but not dynamic intron clusters, indicating distinct regulation of dynamically-increased IR in late erythroblasts. Retained introns were preferentially associated with alternative exons with premature termination codons (PTCs). High IR was observed in disease-causing genes including SF3B1 and the RNA binding protein FUS. Comparative studies demonstrated that the intron retention program in erythroblasts shares features with other tissues but ultimately is unique to erythropoiesis. Finally, we conclude that IR is a multi-dimensional set of processes that post-transcriptionally regulate diverse gene groups during normal erythropoiesis, misregulation of which could be responsible for human disease.

  9. A dynamic intron retention program enriched in RNA processing genes regulates gene expression during terminal erythropoiesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pimentel, Harold; Parra, Marilyn; Gee, Sherry L.

    Differentiating erythroblasts execute a dynamic alternative splicing program shown here to include extensive and diverse intron retention (IR) events. Cluster analysis revealed hundreds of developmentally dynamic introns that exhibit increased IR in mature erythroblasts, and are enriched in functions related to RNA processing such as SF3B1 spliceosomal factor. Distinct, developmentally-stable IR clusters are enriched in metal-ion binding functions and include mitoferrin genes SLC25A37 and SLC25A28 that are critical for iron homeostasis. Some IR transcripts are abundant, e.g. comprising ~50% of highly-expressed SLC25A37 and SF3B1 transcripts in late erythroblasts, and thereby limiting functional mRNA levels. IR transcripts tested were predominantly nuclear-localized. Splice site strength correlated with IR among stable but not dynamic intron clusters, indicating distinct regulation of dynamically-increased IR in late erythroblasts. Retained introns were preferentially associated with alternative exons with premature termination codons (PTCs). High IR was observed in disease-causing genes including SF3B1 and the RNA binding protein FUS. Comparative studies demonstrated that the intron retention program in erythroblasts shares features with other tissues but ultimately is unique to erythropoiesis. Finally, we conclude that IR is a multi-dimensional set of processes that post-transcriptionally regulate diverse gene groups during normal erythropoiesis, misregulation of which could be responsible for human disease.

  10. Dual time-resolved temperature-jump fluorescence and infrared spectroscopy for the study of fast protein dynamics.

    PubMed

    Davis, Caitlin M; Reddish, Michael J; Dyer, R Brian

    2017-05-05

    Time-resolved temperature-jump (T-jump) coupled with fluorescence and infrared (IR) spectroscopy is a powerful technique for monitoring protein dynamics. Although IR spectroscopy of the polypeptide amide I mode is more technically challenging, it offers complementary information because it directly probes changes in the protein backbone, whereas fluorescence spectroscopy is sensitive to the environment of specific side chains. With the advent of widely tunable quantum cascade lasers (QCL) it is possible to efficiently probe multiple IR frequencies with high sensitivity and reproducibility. Here we describe a dual time-resolved T-jump fluorescence and IR spectrometer and its application to study protein folding dynamics. A Q-switched Ho:YAG laser provides the T-jump source for both time-resolved IR and fluorescence spectroscopy, which are probed by a QCL and Ti:Sapphire laser, respectively. The Ho:YAG laser simultaneously pumps the time-resolved IR and fluorescence spectrometers. The instrument has high sensitivity, with an IR absorbance detection limit of <0.2 mOD and a fluorescence sensitivity of 2% of the overall fluorescence intensity. Using a computer-controlled QCL to rapidly tune the IR frequency, it is possible to create a T-jump induced difference spectrum from 50 ns to 0.5 ms. This study demonstrates the power of the dual time-resolved T-jump fluorescence and IR spectroscopy to resolve complex folding mechanisms by complementary IR absorbance and fluorescence measurements of protein dynamics. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Dual time-resolved temperature-jump fluorescence and infrared spectroscopy for the study of fast protein dynamics

    NASA Astrophysics Data System (ADS)

    Davis, Caitlin M.; Reddish, Michael J.; Dyer, R. Brian

    2017-05-01

    Time-resolved temperature-jump (T-jump) coupled with fluorescence and infrared (IR) spectroscopy is a powerful technique for monitoring protein dynamics. Although IR spectroscopy of the polypeptide amide I mode is more technically challenging, it offers complementary information because it directly probes changes in the protein backbone, whereas fluorescence spectroscopy is sensitive to the environment of specific side chains. With the advent of widely tunable quantum cascade lasers (QCL) it is possible to efficiently probe multiple IR frequencies with high sensitivity and reproducibility. Here we describe a dual time-resolved T-jump fluorescence and IR spectrometer and its application to study protein folding dynamics. A Q-switched Ho:YAG laser provides the T-jump source for both time-resolved IR and fluorescence spectroscopy, which are probed by a QCL and Ti:Sapphire laser, respectively. The Ho:YAG laser simultaneously pumps the time-resolved IR and fluorescence spectrometers. The instrument has high sensitivity, with an IR absorbance detection limit of < 0.2 mOD and a fluorescence sensitivity of 2% of the overall fluorescence intensity. Using a computer-controlled QCL to rapidly tune the IR frequency, it is possible to create a T-jump induced difference spectrum from 50 ns to 0.5 ms. This study demonstrates the power of the dual time-resolved T-jump fluorescence and IR spectroscopy to resolve complex folding mechanisms by complementary IR absorbance and fluorescence measurements of protein dynamics.

  12. Effective connectivity in the neural network underlying coarse-to-fine categorization of visual scenes. A dynamic causal modeling study.

    PubMed

    Kauffmann, Louise; Chauvin, Alan; Pichat, Cédric; Peyrin, Carole

    2015-10-01

    According to current models of visual perception scenes are processed in terms of spatial frequencies following a predominantly coarse-to-fine processing sequence. Low spatial frequencies (LSF) reach high-order areas rapidly in order to activate plausible interpretations of the visual input. This triggers top-down facilitation that guides subsequent processing of high spatial frequencies (HSF) in lower-level areas such as the inferotemporal and occipital cortices. However, dynamic interactions underlying top-down influences on the occipital cortex have never been systematically investigated. The present fMRI study aimed to further explore the neural bases and effective connectivity underlying coarse-to-fine processing of scenes, particularly the role of the occipital cortex. We used sequences of six filtered scenes as stimuli depicting coarse-to-fine or fine-to-coarse processing of scenes. Participants performed a categorization task on these stimuli (indoor vs. outdoor). Firstly, we showed that coarse-to-fine (compared to fine-to-coarse) sequences elicited stronger activation in the inferior frontal gyrus (in the orbitofrontal cortex), the inferotemporal cortex (in the fusiform and parahippocampal gyri), and the occipital cortex (in the cuneus). Dynamic causal modeling (DCM) was then used to infer effective connectivity between these regions. DCM results revealed that coarse-to-fine processing resulted in increased connectivity from the occipital cortex to the inferior frontal gyrus and from the inferior frontal gyrus to the inferotemporal cortex. Critically, we also observed an increase in connectivity strength from the inferior frontal gyrus to the occipital cortex, suggesting that top-down influences from frontal areas may guide processing of incoming signals. The present results support current models of visual perception and refine them by emphasizing the role of the occipital cortex as a cortical site for feedback projections in the neural network underlying coarse-to-fine processing of scenes. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. The new generation of OpenGL support in ROOT

    NASA Astrophysics Data System (ADS)

    Tadel, M.

    2008-07-01

    OpenGL has been promoted to become the main 3D rendering engine of the ROOT framework. This required a major re-modularization of OpenGL support on all levels, from basic window-system specific interface to medium-level object-representation and top-level scene management. This new architecture allows seamless integration of external scene-graph libraries into the ROOT OpenGL viewer as well as inclusion of ROOT 3D scenes into external GUI and OpenGL-based 3D-rendering frameworks. Scene representation was removed from inside of the viewer, allowing scene-data to be shared among several viewers and providing for a natural implementation of multi-view canvas layouts. The object-graph traversal infrastructure allows free mixing of 3D and 2D-pad graphics and makes implementation of ROOT canvas in pure OpenGL possible. Scene-elements representing ROOT objects trigger automatic instantiation of user-provided rendering-objects based on the dictionary information and class-naming convention. Additionally, a finer, per-object control over scene-updates is available to the user, allowing overhead-free maintenance of dynamic 3D scenes and creation of complex real-time animations. User-input handling was modularized as well, making it easy to support application-specific scene navigation, selection handling and tool management.

  14. Low-cost digital dynamic visualization system

    NASA Astrophysics Data System (ADS)

    Asundi, Anand K.; Sajan, M. R.

    1995-05-01

    High-speed photographic systems such as the image rotation camera, the Cranz-Schardin camera and the drum camera are typically used for recording and visualization of dynamic events in stress analysis, fluid mechanics, etc. All these systems are fairly expensive and generally not simple to use. Furthermore, they are all based on photographic film recording, which requires time-consuming and tedious wet processing of the films. Currently, digital cameras are replacing conventional cameras to a certain extent for static experiments. Recently, there has been a lot of interest in developing and modifying CCD architectures and recording arrangements for dynamic scene analysis. Herein we report the use of a CCD camera operating in the Time Delay and Integration (TDI) mode for digitally recording dynamic scenes. Applications in both solid and fluid impact problems are presented.

  15. Local adaptive contrast enhancement for color images

    NASA Astrophysics Data System (ADS)

    Dijk, Judith; den Hollander, Richard J. M.; Schavemaker, John G. M.; Schutte, Klamer

    2007-04-01

    A camera or display usually has a smaller dynamic range than the human eye. For this reason, objects that can be detected by the naked eye may not be visible in recorded images. Lighting is here an important factor; improper local lighting impairs visibility of details or even entire objects. When a human is observing a scene with different kinds of lighting, such as shadows, he will need to see details in both the dark and light parts of the scene. For grey value images such as IR imagery, algorithms have been developed in which the local contrast of the image is enhanced using local adaptive techniques. In this paper, we present how such algorithms can be adapted so that details in color images are enhanced while color information is retained. We propose to apply the contrast enhancement on color images by applying a grey value contrast enhancement algorithm to the luminance channel of the color signal. The color coordinates of the signal will remain the same. Care is taken that the saturation change is not too high. Gamut mapping is performed so that the output can be displayed on a monitor. The proposed technique can for instance be used by operators monitoring movements of people in order to detect suspicious behavior. To do this effectively, specific individuals should both be easy to recognize and track. This requires optimal local contrast, and is sometimes much helped by color when tracking a person with colored clothes. In such applications, enhanced local contrast in color images leads to more effective monitoring.
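    The scheme described above, applying a grey-value local contrast enhancement to the luminance channel while carrying the color information over with a limited saturation change, can be sketched as follows. The BT.601 luma weights, the simple local mean/gain stretch, and the chroma-scaling limit are illustrative assumptions standing in for the paper's specific enhancement and gamut-mapping steps.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def enhance_color_local(rgb, size=31, gain=1.5, max_chroma_scale=1.3):
        """Local contrast enhancement on the luminance channel of an RGB image.

        rgb : float array in [0, 1], shape (H, W, 3).
        Chroma (RGB minus luma) is retained and only mildly rescaled, so hue
        is preserved while local luminance contrast is stretched.
        """
        rgb = np.clip(rgb.astype(float), 0.0, 1.0)
        y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

        # Simple local-adaptive step: stretch luminance around its local mean.
        local_mean = uniform_filter(y, size)
        y_enh = np.clip(local_mean + gain * (y - local_mean), 0.0, 1.0)

        # Rebuild RGB: scale the chroma offsets with the luminance ratio, but
        # limit the ratio so the saturation change does not become too large.
        ratio = np.clip((y_enh + 1e-6) / (y + 1e-6),
                        1.0 / max_chroma_scale, max_chroma_scale)
        chroma = rgb - y[..., None]
        out = y_enh[..., None] + chroma * ratio[..., None]
        return np.clip(out, 0.0, 1.0)       # crude gamut mapping by clipping
    ```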

  16. The TRICLOBS Dynamic Multi-Band Image Data Set for the Development and Evaluation of Image Fusion Methods

    PubMed Central

    Hogervorst, Maarten A.; Pinkus, Alan R.

    2016-01-01

    The fusion and enhancement of multiband nighttime imagery for surveillance and navigation has been the subject of extensive research for over two decades. Despite the ongoing efforts in this area there is still only a small number of static multiband test images available for the development and evaluation of new image fusion and enhancement methods. Moreover, dynamic multiband imagery is also currently lacking. To fill this gap we present the TRICLOBS dynamic multi-band image data set containing sixteen registered visual (0.4–0.7μm), near-infrared (NIR, 0.7–1.0μm) and long-wave infrared (LWIR, 8–14μm) motion sequences. They represent different military and civilian surveillance scenarios registered in three different scenes. Scenes include (military and civilian) people that are stationary, walking or running, or carrying various objects. Vehicles, foliage, and buildings or other man-made structures are also included in the scenes. This data set is primarily intended for the development and evaluation of image fusion, enhancement and color mapping algorithms for short-range surveillance applications. The imagery was collected during several field trials with our newly developed TRICLOBS (TRI-band Color Low-light OBServation) all-day all-weather surveillance system. This system registers a scene in the Visual, NIR and LWIR part of the electromagnetic spectrum using three optically aligned sensors (two digital image intensifiers and an uncooled long-wave infrared microbolometer). The three sensor signals are mapped to three individual RGB color channels, digitized, and stored as uncompressed RGB (false) color frames. The TRICLOBS data set enables the development and evaluation of (both static and dynamic) image fusion, enhancement and color mapping algorithms. To allow the development of realistic color remapping procedures, the data set also contains color photographs of each of the three scenes. The color statistics derived from these photographs can be used to define color mappings that give the multi-band imagery a realistic color appearance. PMID:28036328

  17. The TRICLOBS Dynamic Multi-Band Image Data Set for the Development and Evaluation of Image Fusion Methods.

    PubMed

    Toet, Alexander; Hogervorst, Maarten A; Pinkus, Alan R

    2016-01-01

    The fusion and enhancement of multiband nighttime imagery for surveillance and navigation has been the subject of extensive research for over two decades. Despite the ongoing efforts in this area there is still only a small number of static multiband test images available for the development and evaluation of new image fusion and enhancement methods. Moreover, dynamic multiband imagery is also currently lacking. To fill this gap we present the TRICLOBS dynamic multi-band image data set containing sixteen registered visual (0.4-0.7μm), near-infrared (NIR, 0.7-1.0μm) and long-wave infrared (LWIR, 8-14μm) motion sequences. They represent different military and civilian surveillance scenarios registered in three different scenes. Scenes include (military and civilian) people that are stationary, walking or running, or carrying various objects. Vehicles, foliage, and buildings or other man-made structures are also included in the scenes. This data set is primarily intended for the development and evaluation of image fusion, enhancement and color mapping algorithms for short-range surveillance applications. The imagery was collected during several field trials with our newly developed TRICLOBS (TRI-band Color Low-light OBServation) all-day all-weather surveillance system. This system registers a scene in the Visual, NIR and LWIR part of the electromagnetic spectrum using three optically aligned sensors (two digital image intensifiers and an uncooled long-wave infrared microbolometer). The three sensor signals are mapped to three individual RGB color channels, digitized, and stored as uncompressed RGB (false) color frames. The TRICLOBS data set enables the development and evaluation of (both static and dynamic) image fusion, enhancement and color mapping algorithms. To allow the development of realistic color remapping procedures, the data set also contains color photographs of each of the three scenes. The color statistics derived from these photographs can be used to define color mappings that give the multi-band imagery a realistic color appearance.

  18. Automatically assessing properties of dynamic cameras for camera selection and rapid deployment of video content analysis tasks in large-scale ad-hoc networks

    NASA Astrophysics Data System (ADS)

    den Hollander, Richard J. M.; Bouma, Henri; van Rest, Jeroen H. C.; ten Hove, Johan-Martijn; ter Haar, Frank B.; Burghouts, Gertjan J.

    2017-10-01

    Video analytics is essential for managing large quantities of raw data that are produced by video surveillance systems (VSS) for the prevention, repression and investigation of crime and terrorism. Analytics is highly sensitive to changes in the scene and to changes in the optical chain, so a VSS with analytics needs careful configuration and prompt maintenance to avoid false alarms. However, there is a trend from static VSS consisting of fixed CCTV cameras towards more dynamic VSS deployments over public/private multi-organization networks, consisting of a wider variety of visual sensors, including pan-tilt-zoom (PTZ) cameras, body-worn cameras and cameras on moving platforms. This trend will lead to more dynamic scenes and more frequent changes in the optical chain, creating structural problems for analytics. If these problems are not adequately addressed, analytics will not be able to continue to meet end users' developing needs. In this paper, we present a three-part solution for managing the performance of complex analytics deployments. The first part is a register containing metadata describing relevant properties of the optical chain, such as intrinsic and extrinsic calibration, and parameters of the scene such as lighting conditions or measures of scene complexity (e.g. number of people). A second part frequently assesses these parameters in the deployed VSS, stores changes in the register, and signals relevant changes in the setup to the VSS administrator. A third part uses the information in the register to dynamically configure analytics tasks based on VSS operator input. In order to support the feasibility of this solution, we give an overview of related state-of-the-art technologies for autocalibration (self-calibration), scene recognition and lighting estimation in relation to person detection. The presented solution allows for rapid and robust deployment of Video Content Analysis (VCA) tasks in large-scale ad-hoc networks.
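
    As a rough sketch of the register described as the first part of the solution, the snippet below defines one record per camera and a simple change test; the field names and thresholds are assumptions for illustration, not the paper's schema.

    ```python
    from dataclasses import dataclass, field
    from typing import List
    import time

    @dataclass
    class OpticalChainRecord:
        """One register entry describing a camera's optical chain and scene parameters."""
        camera_id: str
        intrinsics: List[float]      # e.g. fx, fy, cx, cy
        extrinsics: List[float]      # e.g. a 6-DoF camera pose
        mean_luminance: float        # proxy for lighting conditions
        person_count: int            # proxy for scene complexity
        timestamp: float = field(default_factory=time.time)

    def relevant_change(prev: OpticalChainRecord, curr: OpticalChainRecord,
                        lum_tol: float = 20.0, count_tol: int = 5) -> bool:
        """Signal the VSS administrator when scene parameters have drifted noticeably."""
        return (abs(curr.mean_luminance - prev.mean_luminance) > lum_tol or
                abs(curr.person_count - prev.person_count) > count_tol)
    ```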

  19. General Metropolis-Hastings jump diffusions for automatic target recognition in infrared scenes

    NASA Astrophysics Data System (ADS)

    Lanterman, Aaron D.; Miller, Michael I.; Snyder, Donald L.

    1997-04-01

    To locate and recognize ground-based targets in forward-looking IR (FLIR) images, 3D faceted models with associated pose parameters are formulated to accommodate the variability found in FLIR imagery. Taking a Bayesian approach, scenes are simulated from the emissive characteristics of the CAD models and compared with the collected data by a likelihood function based on sensor statistics. This likelihood is combined with a prior distribution defined over the set of possible scenes to form a posterior distribution. To accommodate scenes with variable numbers of targets, the posterior distribution is defined over parameter vectors of varying dimension. An inference algorithm based on Metropolis-Hastings jump-diffusion processes empirically samples from the posterior distribution, generating configurations of templates and transformations that match the collected sensor data with high probability. The jumps accommodate the addition and deletion of targets and the estimation of target identities; diffusions refine the hypotheses by drifting along the gradient of the posterior distribution with respect to the orientation and position parameters. Previous results on jump strategies analogous to the Metropolis acceptance/rejection algorithm, with proposals drawn from the prior and accepted based on the likelihood, are extended to encompass general Metropolis-Hastings proposal densities. In particular, the algorithm proposes moves by drawing from the posterior distribution over computationally tractable subsets of the parameter space. The algorithm is illustrated by an implementation on a Silicon Graphics Onyx/Reality Engine.
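
    The acceptance step for general Metropolis-Hastings proposal densities can be sketched generically. The code below samples a fixed-dimension pose vector under an assumed posterior and proposal; it illustrates only the acceptance rule, not the paper's jump-diffusion sampler, and the dimension-changing jump moves (adding or deleting targets) would additionally require the matching acceptance ratio for a varying-dimension posterior.

    ```python
    import numpy as np

    def metropolis_hastings(log_post, propose, log_q, x0, n_steps, rng=None):
        """Generic Metropolis-Hastings sampler with an arbitrary proposal density.

        log_post(x): log posterior (up to a constant) over pose parameters.
        propose(x, rng): draw a candidate from the proposal density q(x'|x).
        log_q(x_new, x_old): log q(x_new | x_old), needed when the proposal is asymmetric.
        """
        rng = rng or np.random.default_rng()
        x = np.asarray(x0, dtype=float)
        samples = [x.copy()]
        for _ in range(n_steps):
            x_prop = propose(x, rng)
            log_alpha = (log_post(x_prop) - log_post(x)
                         + log_q(x, x_prop) - log_q(x_prop, x))
            if np.log(rng.uniform()) < min(0.0, log_alpha):   # accept/reject step
                x = x_prop
            samples.append(x.copy())
        return np.array(samples)

    # Toy example: a 3-DoF pose (x, y, orientation) with a Gaussian stand-in "posterior"
    # and a symmetric random-walk proposal, for which the q terms cancel.
    log_post = lambda p: -0.5 * np.sum(p ** 2)
    propose = lambda p, rng: p + 0.3 * rng.standard_normal(p.shape)
    log_q = lambda a, b: 0.0
    chain = metropolis_hastings(log_post, propose, log_q, np.zeros(3), 5000)
    ```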

  20. Small maritime target detection through false color fusion

    NASA Astrophysics Data System (ADS)

    Toet, Alexander; Wu, Tirui

    2008-04-01

    We present an algorithm that produces a fused false color representation of a combined multiband IR and visual imaging system for maritime applications. Multispectral IR imaging techniques are increasingly deployed in maritime operations, to detect floating mines or to find small dinghies and swimmers during search and rescue operations. However, maritime backgrounds usually contain a large amount of clutter that severely hampers the detection of small targets. Our new algorithm exploits the correlation between the target signatures in two different IR frequency bands (3-5 and 8-12 μm) to construct a fused IR image with a reduced amount of clutter. The fused IR image is then combined with a visual image in a false color RGB representation for display to a human operator. The algorithm works as follows. First, both individual IR bands are filtered with a morphological opening top-hat transform to extract small details. Second, a common image is extracted from the two filtered IR bands and assigned to the red channel of an RGB image. Regions of interest that appear in both IR bands remain in this common image, while most uncorrelated noise details are filtered out. Third, the visual band is assigned to the green channel and, after multiplication with a constant (typically 1.6), also to the blue channel. Fourth, the brightness and colors of this intermediate false color image are renormalized by adjusting its first order statistics to those of a representative reference scene. The result of these four steps is a fused color image, with naturalistic colors (bluish sky and grayish water), in which small targets are clearly visible.
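
    The four steps translate directly into a short sketch. The code below makes assumptions where the text leaves room: the "common image" is taken as the per-pixel minimum of the two top-hat filtered bands, the renormalization matches per-channel mean and standard deviation to a reference image, and the kernel size is arbitrary; the band-to-channel assignment and the 1.6 factor follow the text.

    ```python
    import cv2
    import numpy as np

    def fuse_maritime(ir_mwir, ir_lwir, visual, reference_rgb, kernel_size=9):
        """Fused false-color image from two IR bands and a visual band (float32 arrays in [0, 1])."""
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
        # Step 1: morphological opening top-hat extracts small details in each IR band.
        th_mwir = cv2.morphologyEx(ir_mwir, cv2.MORPH_TOPHAT, kernel)
        th_lwir = cv2.morphologyEx(ir_lwir, cv2.MORPH_TOPHAT, kernel)
        # Step 2: the "common" image keeps details present in both bands (assumed: per-pixel minimum).
        red = np.minimum(th_mwir, th_lwir)
        # Step 3: visual band to green and, scaled by 1.6, to blue.
        green = visual
        blue = np.clip(1.6 * visual, 0.0, 1.0)
        fused = np.dstack([red, green, blue]).astype(np.float32)
        # Step 4: renormalize first-order statistics per channel to those of a reference scene.
        for c in range(3):
            ref_mu, ref_sigma = reference_rgb[..., c].mean(), reference_rgb[..., c].std()
            mu, sigma = fused[..., c].mean(), fused[..., c].std() + 1e-6
            fused[..., c] = (fused[..., c] - mu) / sigma * ref_sigma + ref_mu
        return np.clip(fused, 0.0, 1.0)
    ```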

  1. Laser-Based Slam with Efficient Occupancy Likelihood Map Learning for Dynamic Indoor Scenes

    NASA Astrophysics Data System (ADS)

    Li, Li; Yao, Jian; Xie, Renping; Tu, Jinge; Feng, Chen

    2016-06-01

    Location-Based Services (LBS) have attracted growing attention in recent years, especially in indoor environments. The fundamental technique underlying LBS is map building for unknown environments, also known as simultaneous localization and mapping (SLAM) in the robotics community. In this paper, we propose a novel approach for SLAM in dynamic indoor scenes based on a 2D laser scanner mounted on a mobile Unmanned Ground Vehicle (UGV), with the help of a grid-based occupancy likelihood map. Instead of applying scan matching to two adjacent scans, we match the current scan against the occupancy likelihood map learned from all previous scans at multiple scales, to avoid the accumulation of matching errors. Because the points in a scan are acquired sequentially rather than simultaneously, scan distortion is unavoidable to varying extents. To compensate for the scan distortion caused by the motion of the UGV, we integrate the velocity of the laser range finder (LRF) into the scan matching optimization framework. In addition, to reduce the effect of dynamic objects, such as the walking pedestrians that often appear in indoor scenes, we propose a new occupancy likelihood map learning strategy that increases or decreases the probability of each occupancy grid cell after each scan matching. Experimental results in several challenging indoor scenes demonstrate that our proposed approach is capable of providing high-precision SLAM results.
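
    The per-cell probability increase or decrease after each scan matching is, in essence, a log-odds occupancy update. The sketch below shows that standard bookkeeping in a generic form; the cell indexing, the hit/miss increments, and the ray sampling are assumptions, not the authors' multi-scale implementation.

    ```python
    import numpy as np

    class OccupancyLikelihoodMap:
        """Grid map storing log-odds of occupancy, updated after every matched scan."""

        def __init__(self, size=(500, 500), resolution=0.05, l_hit=0.85, l_miss=-0.4):
            self.log_odds = np.zeros(size)
            self.res = resolution
            self.l_hit, self.l_miss = l_hit, l_miss

        def _cell(self, xy):
            # Coordinates are assumed to fall inside the map, with the origin at cell (0, 0).
            return tuple(np.floor(np.asarray(xy) / self.res).astype(int))

        def update(self, sensor_xy, endpoints_xy):
            """Raise occupancy at scan endpoints, lower it along the free rays."""
            sensor = np.asarray(sensor_xy, dtype=float)
            for end in endpoints_xy:
                end = np.asarray(end, dtype=float)
                # Sample the ray from the sensor towards the endpoint and mark cells as free.
                for t in np.linspace(0.0, 1.0, 50, endpoint=False):
                    self.log_odds[self._cell(sensor + t * (end - sensor))] += self.l_miss
                self.log_odds[self._cell(end)] += self.l_hit   # the endpoint cell is occupied

        def probability(self):
            return 1.0 / (1.0 + np.exp(-self.log_odds))        # logistic of the log-odds
    ```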

  2. Frontiers in Chemical Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowlan, Pamela Renee

    2016-05-02

    These are slides dealing with frontiers in chemical physics. The following topics are covered: Time-resolving chemistry with ultrashort pulses in the 0.1-40 THz spectral range; Example: Mid-infrared absorption spectrum of the intermediate state CH2OO; Tracking reaction dynamics through changes in the spectra; Single-shot measurement of the mid-IR absorption dynamics; Applying 2D coherent mid-IR spectroscopy to learn more about transition states; Time-resolving chemical reactions at a catalyst using mid-IR and THz pulses; Studying topological insulators requires a surface-sensitive probe; Nonlinear phonon dynamics in Bi2Se3; THz-pump, SHG-probe as a surface-sensitive coherent 2D spectroscopy; Nanometer and femtosecond spatiotemporal resolution mid-IR spectroscopy; Coherent two-dimensional THz/mid-IR spectroscopy with 10 nm spatial resolution; Perovskite oxides as catalysts; Functionalized graphene for catalysis; Single-shot spatiotemporal measurements; Spatiotemporal pulse measurement; Intense, broad-band THz/mid-IR generation with organic crystals.

  3. Photonically enabled Ka-band radar and infrared sensor subscale testbed

    NASA Astrophysics Data System (ADS)

    Lohr, Michele B.; Sova, Raymond M.; Funk, Kevin B.; Airola, Marc B.; Dennis, Michael L.; Pavek, Richard E.; Hollenbeck, Jennifer S.; Garrison, Sean K.; Conard, Steven J.; Terry, David H.

    2014-10-01

    A subscale radio frequency (RF) and infrared (IR) testbed using novel RF-photonics techniques for generating radar waveforms is currently under development at The Johns Hopkins University Applied Physics Laboratory (JHU/APL) to study target scenarios in a laboratory setting. The linearity of Maxwell's equations allows the use of millimeter wavelengths and scaled-down target models to emulate full-scale RF scene effects. Coupled with passive IR and visible sensors, target motions and heating, and a processing and algorithm development environment, this testbed provides a means to flexibly and cost-effectively generate and analyze multi-modal data for a variety of applications, including verification of digital model hypotheses, investigation of correlated phenomenology, and aiding system capabilities assessment. In this work, concept feasibility is demonstrated for simultaneous RF, IR, and visible sensor measurements of heated, precessing, conical targets and of a calibration cylinder. Initial proof-of-principle results are shown of the Ka-band subscale radar, which models S-band for 1/10th scale targets, using stretch processing and Xpatch models.

  4. Silicon Based Schottky Barrier Infrared Sensors For Power System And Industrial Applications

    NASA Astrophysics Data System (ADS)

    Elabd, Hammam; Kosonocky, Walter F.

    1984-03-01

    Schottky barrier infrared charge coupled device sensors (IR-CCDs) have been developed. PtSi Schottky barrier detectors require cooling to liquid-nitrogen temperature and cover the wavelength range between 1 and 6 μm. The PtSi IR-CCDs can be used in industrial thermography with NEΔT below 0.1°C. Pd2Si Schottky-barrier detectors require cooling to 145 K and cover the spectral range between 1 and 3.5 μm. Pd2Si IR-CCDs can be used for imaging high-temperature scenes with NEΔT around 100°C. Several high-density staring area and line imagers are available. Both interlaced and noninterlaced area imagers can be operated with variable and TV-compatible frame rates as well as various field-of-view angles. The advantages of silicon fabrication technology in terms of cost and high-density structures open the door to the design of special-purpose thermal camera systems for a number of power system and industrial applications.

  5. Real-time motion artifacts compensation of ToF sensors data on GPU

    NASA Astrophysics Data System (ADS)

    Lefloch, Damien; Hoegg, Thomas; Kolb, Andreas

    2013-05-01

    Over the last decade, ToF sensors have attracted many computer vision and graphics researchers. Nevertheless, ToF devices suffer from severe motion artifacts in dynamic scenes as well as low-resolution depth data, which makes a valid correction all the more important. To counterbalance this effect, a pre-processing approach is introduced that greatly improves range image data for dynamic scenes. We first demonstrate the robustness of our approach using simulated data and then validate our method using sensor range data. Our GPU-based processing pipeline enhances range data reliability in real time.

  6. A Simulation Program for Dynamic Infrared (IR) Spectra

    ERIC Educational Resources Information Center

    Zoerb, Matthew C.; Harris, Charles B.

    2013-01-01

    A free program for the simulation of dynamic infrared (IR) spectra is presented. The program simulates the spectrum of two exchanging IR peaks based on simple input parameters. Larger systems can be simulated with minor modifications. The program is available as an executable program for PCs or can be run in MATLAB on any operating system. Source…
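
    Although the article's program is not reproduced here, the physics of two exchanging IR peaks can be illustrated with a standard two-site exchange (Bloch-McConnell-type) lineshape; all parameter names and values below are assumptions chosen for illustration, and frequencies and rates are taken to be in the same angular-frequency units.

    ```python
    import numpy as np

    def two_site_exchange_spectrum(omega, w_a, w_b, k_ab, k_ba, gamma):
        """Absorptive lineshape for two peaks (at w_a, w_b) exchanging with rates k_ab, k_ba.

        Uses I(w) = Re{ 1^T [i(w*I - Omega) - K + gamma*I]^-1 p } with equilibrium
        populations p; gamma is the homogeneous half-width of each peak.
        """
        p = np.array([k_ba, k_ab]) / (k_ab + k_ba)        # equilibrium populations of sites A, B
        K = np.array([[-k_ab, k_ba], [k_ab, -k_ba]])      # exchange generator
        Omega = np.diag([w_a, w_b])
        ones = np.ones(2)
        spectrum = np.empty_like(omega, dtype=float)
        for i, w in enumerate(omega):
            M = 1j * (w * np.eye(2) - Omega) - K + gamma * np.eye(2)
            spectrum[i] = np.real(ones @ np.linalg.solve(M, p.astype(complex)))
        return spectrum

    # Slow vs. fast exchange between two illustrative peak positions.
    w_axis = np.linspace(1900.0, 2030.0, 2000)
    slow = two_site_exchange_spectrum(w_axis, 1950.0, 1980.0, k_ab=0.5, k_ba=0.5, gamma=3.0)
    fast = two_site_exchange_spectrum(w_axis, 1950.0, 1980.0, k_ab=60.0, k_ba=60.0, gamma=3.0)
    ```

    In the slow-exchange limit the expression reduces to two separate Lorentzians, while fast exchange coalesces them into a single motionally narrowed peak, which is the behavior such a simulation program is meant to convey.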

  7. Repercussion of geometric and dynamic constraints on the 3D rendering quality in structurally adaptive multi-view shooting systems

    NASA Astrophysics Data System (ADS)

    Ali-Bey, Mohamed; Moughamir, Saïd; Manamanni, Noureddine

    2011-12-01

    In this paper, a simulator of a multi-view shooting system with parallel optical axes and a structurally variable configuration is proposed. The considered system is dedicated to the production of 3D content for auto-stereoscopic visualization. The global shooting/viewing geometrical process, which is the kernel of this shooting system, is detailed, and the different viewing, transformation and capture parameters are then defined. An appropriate perspective projection model is then derived to build a simulator. The simulator is first used to validate the global geometrical process in the case of a static configuration. Next, it is used to show the limitations of a static configuration of this type of shooting system by considering the case of dynamic scenes, and a dynamic scheme is then developed to allow correct capture of such scenes. After that, the effect of the different geometrical capture parameters on the 3D rendering quality, and whether they need to be adapted, is studied. Finally, some dynamic effects and their repercussions on the 3D rendering quality of dynamic scenes are analyzed using error images and image quantization tools. Simulation and experimental results are presented throughout the paper to illustrate the different points studied. Some conclusions and perspectives end the paper.

  8. Perceptual evaluation of color transformed multispectral imagery

    NASA Astrophysics Data System (ADS)

    Toet, Alexander; de Jong, Michael J.; Hogervorst, Maarten A.; Hooge, Ignace T. C.

    2014-04-01

    Color remapping can give multispectral imagery a realistic appearance. We assessed the practical value of this technique in two observer experiments using monochrome intensified (II) and long-wave infrared (IR) imagery, and color daylight (REF) and fused multispectral (CF) imagery. First, we investigated the amount of detail observers perceive in a short timespan. REF and CF imagery yielded the highest precision and recall measures, while II and IR imagery yielded significantly lower values. This suggests that observers have more difficulty in extracting information from monochrome than from color imagery. Next, we measured eye fixations during free image exploration. Although the overall fixation behavior was similar across image modalities, the order in which certain details were fixated varied. Persons and vehicles were typically fixated first in REF, CF, and IR imagery, while they were fixated later in II imagery. In some cases, color remapping II imagery and fusion with IR imagery restored the fixation order of these image details. We conclude that color remapping can yield enhanced scene perception compared to conventional monochrome nighttime imagery, and may be deployed to tune multispectral image representations such that the resulting fixation behavior resembles the fixation behavior corresponding to daylight color imagery.

  9. High-performance electronic image stabilisation for shift and rotation correction

    NASA Astrophysics Data System (ADS)

    Parker, Steve C. J.; Hickman, D. L.; Wu, F.

    2014-06-01

    A novel low size, weight and power (SWaP) video stabiliser called HALO™ is presented that uses a SoC to combine the high processing bandwidth of an FPGA with the signal processing flexibility of a CPU. An image-based architecture is presented that can adapt the tiling of frames to cope with changing scene dynamics. A real-time implementation is then discussed that can generate several hundred optical flow vectors per video frame to accurately calculate the unwanted rigid-body translation and rotation of camera shake. The performance of the HALO™ stabiliser is comprehensively benchmarked against the respected Deshaker 3.0 off-line stabiliser plugin for VirtualDub. Eight different videos are used for benchmarking, simulating battlefield, surveillance, security and low-level flight applications in both visible and IR wavebands. The results show that HALO™ rivals the performance of Deshaker within its operating envelope. Furthermore, HALO™ may be easily reconfigured to adapt to changing operating conditions or requirements, and can be used to host other video processing functionality such as image distortion correction, fusion and contrast enhancement.
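
    The core estimation step, recovering the unwanted rigid translation and rotation of camera shake from a set of optical-flow correspondences, can be sketched as a 2D least-squares (Procrustes-style) fit; this generic estimator is an illustration, not the HALO implementation.

    ```python
    import numpy as np

    def estimate_rigid_2d(p, q):
        """Least-squares rigid transform (rotation angle and translation) mapping p to q.

        p, q: (N, 2) arrays holding the start and end positions of the optical-flow vectors.
        """
        p_mean, q_mean = p.mean(axis=0), q.mean(axis=0)
        pc, qc = p - p_mean, q - q_mean
        # Optimal rotation for the centered point sets (2D Kabsch/Procrustes solution).
        theta = np.arctan2(np.sum(pc[:, 0] * qc[:, 1] - pc[:, 1] * qc[:, 0]),
                           np.sum(pc[:, 0] * qc[:, 0] + pc[:, 1] * qc[:, 1]))
        rot = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
        return theta, q_mean - rot @ p_mean

    # A stabiliser would then warp each frame with the inverse transform to cancel the shake.
    ```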

  10. Object Scene Flow

    NASA Astrophysics Data System (ADS)

    Menze, Moritz; Heipke, Christian; Geiger, Andreas

    2018-06-01

    This work investigates the estimation of dense three-dimensional motion fields, commonly referred to as scene flow. While great progress has been made in recent years, large displacements and adverse imaging conditions as observed in natural outdoor environments are still very challenging for current approaches to reconstruction and motion estimation. In this paper, we propose a unified random field model which reasons jointly about 3D scene flow as well as the location, shape and motion of vehicles in the observed scene. We formulate the problem as the task of decomposing the scene into a small number of rigidly moving objects sharing the same motion parameters. Thus, our formulation effectively introduces long-range spatial dependencies which commonly employed local rigidity priors are lacking. Our inference algorithm then estimates the association of image segments and object hypotheses together with their three-dimensional shape and motion. We demonstrate the potential of the proposed approach by introducing a novel challenging scene flow benchmark which allows for a thorough comparison of the proposed scene flow approach with respect to various baseline models. In contrast to previous benchmarks, our evaluation is the first to provide stereo and optical flow ground truth for dynamic real-world urban scenes at large scale. Our experiments reveal that rigid motion segmentation can be utilized as an effective regularizer for the scene flow problem, improving upon existing two-frame scene flow methods. At the same time, our method yields plausible object segmentations without requiring an explicitly trained recognition model for a specific object class.

  11. Effects of capacity limits, memory loss, and sound type in change deafness.

    PubMed

    Gregg, Melissa K; Irsik, Vanessa C; Snyder, Joel S

    2017-11-01

    Change deafness, the inability to notice changes to auditory scenes, has the potential to provide insights about sound perception in busy situations typical of everyday life. We determined the extent to which change deafness to sounds is due to the capacity of processing multiple sounds and the loss of memory for sounds over time. We also determined whether these processing limitations work differently for varying types of sounds within a scene. Auditory scenes composed of naturalistic sounds, spectrally dynamic unrecognizable sounds, tones, and noise rhythms were presented in a change-detection task. On each trial, two scenes were presented that were same or different. We manipulated the number of sounds within each scene to measure memory capacity and the silent interval between scenes to measure memory loss. For all sounds, change detection was worse as scene size increased, demonstrating the importance of capacity limits. Change detection to the natural sounds did not deteriorate much as the interval between scenes increased up to 2,000 ms, but it did deteriorate substantially with longer intervals. For artificial sounds, in contrast, change-detection performance suffered even for very short intervals. The results suggest that change detection is generally limited by capacity, regardless of sound type, but that auditory memory is more enduring for sounds with naturalistic acoustic structures.

  12. Comparing the Immediate Effects of a Total Motion Release Warm-up and a Dynamic Warm-up Protocol on the Dominant Shoulder in Baseball Athletes.

    PubMed

    Gamma, Stephen C; Baker, Russell; May, James; Seegmiller, Jeff G; Nasypany, Alan; Iorio, Steven M

    2018-04-10

    Gamma, SC, Baker, R, May, J, Seegmiller, JG, Nasypany, A, and Iorio, SM. Comparing the immediate effects of a total motion release warm-up and a dynamic warm-up protocol on the dominant shoulder in baseball athletes. J Strength Cond Res XX(X): 000-000, 2017-A decrease in total range of motion (ROM) of the dominant shoulder may predispose baseball athletes to increased shoulder injury risk; the most effective technique for improving ROM is unknown. The purpose of this study was to compare the immediate effects of Total Motion Release (TMR) to a generic dynamic warm-up program in baseball athletes. Baseball athletes (n = 20) were randomly assigned to an intervention group: TMR group (TMRG; n = 10) or traditional warm-up group (TWG; n = 10). Shoulder ROM measurements were recorded for internal rotation (IR) and external rotation (ER), the intervention was applied, and postmeasurements were recorded. Each group then received the other intervention and postmeasurements were again recorded. The time main effect (p ≤ 0.001) and the time × group interaction effect were significant (p ≤ 0.001) for IR and ER. Post hoc analysis revealed that TMR produced significant increases in mean IR (p ≤ 0.005, d = 1.52) and ER (p ≤ 0.018, d = 1.22) of the dominant shoulder initially. When groups crossed-over, the TMRG experienced a decrease in mean IR and ER after the dynamic warm-up, whereas the TWG experienced a significant increase in mean IR (p ≤ 0.001, d = 3.08) and ER (p ≤ 0.001, d = 2.56) after TMR intervention. Total Motion Release increased IR and ER of the dominant shoulder more than a dynamic warm-up. Dynamic warm-up after TMR also resulted in decreased IR and ER; however, TMR after dynamic warm-up significantly improved IR and ER. Based on these results, TMR is more effective than a generic dynamic warm-up for improving dominant shoulder ROM in baseball players.

  13. Brief Report: Diminished Gaze Preference for Dynamic Social Interaction Scenes in Youth with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Shaffer, Rebecca C.; Pedapati, Ernest V.; Shic, Frederick; Gaietto, Kristina; Bowers, Katherine; Wink, Logan K.; Erickson, Craig A.

    2017-01-01

    In this study, we present an eye-tracking paradigm, adapted from previous work with toddlers, for assessing social-interaction looking preferences in youth ages 5-17 with ASD and typically-developing controls (TDC). Videos of children playing together (Social Scenes, SS) were presented side-by-side with animated geometric shapes (GS). Participants…

  14. Use of LANDSAT imagery for wildlife habitat mapping in northeast and eastcentral Alaska

    NASA Technical Reports Server (NTRS)

    Laperriere, A. J. (Principal Investigator)

    1976-01-01

    The author has identified the following significant results. Indications are that Alaskan scenes dated later than about September 5th are unsuitable for vegetational analyses. Such fall data exhibit a limited dynamic range relative to summer scenes and the informational content of the data is reduced such that discrimination between many vegetation types is no longer possible.

  15. Assessing Multiple Object Tracking in Young Children Using a Game

    ERIC Educational Resources Information Center

    Ryokai, Kimiko; Farzin, Faraz; Kaltman, Eric; Niemeyer, Greg

    2013-01-01

    Visual tracking of multiple objects in a complex scene is a critical survival skill. When we attempt to safely cross a busy street, follow a ball's position during a sporting event, or monitor children in a busy playground, we rely on our brain's capacity to selectively attend to and track the position of specific objects in a dynamic scene. This…

  16. Plant cover, soil temperature, freeze, water stress, and evapotranspiration conditions. [Lower Rio Grande Valley Test Site: Weslaco, Texas; Falco Reservoir and the Gulf of Mexico

    NASA Technical Reports Server (NTRS)

    Wiegand, C. L.; Nixon, P. R.; Gausman, H. W.; Namken, L. N.; Leamer, R. W.; Richardson, A. J. (Principal Investigator)

    1980-01-01

    The author has identified the following significant results. HCMM day/night coverage 12 hours apart cannot be obtained at 26 deg N latitude; nor have any pairs 36 hours apart been obtained. A day-IR scene and a night scene for two different dates were analyzed. A profile across the test site at the same latitude shows that the two profiles are near mirror images of each other over land surfaces and that the temperatures of two large water bodies, Falcon Reservoir and the Gulf of Mexico, are nearly identical on the two dates. During the time interval between overpasses, the vegetative cover remained static due to winter dormancy. The data suggest that day/night temperature differences measured weeks apart may yield meaningful information about the contrast between daytime maximum and nighttime minimum temperatures for a given site.

  17. The influence of behavioral relevance on the processing of global scene properties: An ERP study.

    PubMed

    Hansen, Natalie E; Noesen, Birken T; Nador, Jeffrey D; Harel, Assaf

    2018-05-02

    Recent work studying the temporal dynamics of visual scene processing (Harel et al., 2016) has found that global scene properties (GSPs) modulate the amplitude of early Event-Related Potentials (ERPs). It is still not clear, however, to what extent the processing of these GSPs is influenced by their behavioral relevance, determined by the goals of the observer. To address this question, we investigated how behavioral relevance, operationalized by the task context, impacts the electrophysiological responses to GSPs. In a set of two experiments we recorded ERPs while participants viewed images of real-world scenes, varying along two GSPs, naturalness (manmade/natural) and spatial expanse (open/closed). In Experiment 1, very little attention to scene content was required as participants viewed the scenes while performing an orthogonal fixation-cross task. In Experiment 2 participants saw the same scenes but now had to actively categorize them, based either on their naturalness or spatial expanse. We found that task context had very little impact on the early ERP responses to the naturalness and spatial expanse of the scenes: P1, N1, and P2 could distinguish between open and closed scenes and between manmade and natural scenes across both experiments. Further, the specific effects of naturalness and spatial expanse on the ERP components were largely unaffected by their relevance for the task. A task effect was found at the N1 and P2 level, but this effect was manifest across all scene dimensions, indicating a general effect rather than an interaction between task context and GSPs. Together, these findings suggest that the extraction of global scene information reflected in the early ERP components is rapid and influenced very little by top-down, observer-based goals. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. Isotropic and anisotropic regimes of the field-dependent spin dynamics in Sr 2 IrO 4 : Raman scattering studies

    DOE PAGES

    Gim, Y.; Sethi, A.; Zhao, Q.; ...

    2016-01-11

    A major focus of experimental interest in Sr2IrO4 has been to clarify how the magnetic excitations of this strongly spin-orbit coupled system differ from the predictions of an isotropic 2D spin-1/2 Heisenberg model and to explore the extent to which strong spin-orbit coupling affects the magnetic properties of iridates. Here, we present a high-resolution inelastic light (Raman) scattering study of the low energy magnetic excitation spectrum of Sr2IrO4 and Eu-doped Sr2IrO4 as functions of both temperature and applied magnetic field. We show that the high-field (H > 1.5 T) in-plane spin dynamics of Sr2IrO4 are isotropic and governed by the interplay between the applied field and the small in-plane ferromagnetic spin components induced by the Dzyaloshinskii-Moriya interaction. However, the spin dynamics of Sr2IrO4 at lower fields (H < 1.5 T) exhibit important effects associated with interlayer coupling and in-plane anisotropy, including a spin-flop transition at Hc in Sr2IrO4 that occurs either discontinuously or via a continuous rotation of the spins, depending upon the in-plane orientation of the applied field. Furthermore, these results show that in-plane anisotropy and interlayer coupling effects play important roles in the low-field magnetic and dynamical properties of Sr2IrO4.

  19. Ultrafast structural molecular dynamics investigated with 2D infrared spectroscopy methods.

    PubMed

    Kraack, Jan Philip

    2017-10-25

    Ultrafast, multi-dimensional infrared (IR) spectroscopy has been advanced in recent years to a versatile analytical tool with a broad range of applications to elucidate molecular structure on ultrafast timescales, and it can be used for samples in many different environments. Following a short and general introduction on the benefits of 2D IR spectroscopy, the first part of this chapter contains a brief discussion on basic descriptions and conceptual considerations of 2D IR spectroscopy. Outstanding classical applications of 2D IR are used afterwards to highlight the strengths and basic applicability of the method. This includes the identification of vibrational coupling in molecules, characterization of spectral diffusion dynamics, chemical exchange of chemical bond formation and breaking, as well as dynamics of intra- and intermolecular energy transfer for molecules in bulk solution and thin films. In the second part, several important, recently developed variants and new applications of 2D IR spectroscopy are introduced. These methods focus on (i) applications to molecules under two- and three-dimensional confinement, (ii) the combination of 2D IR with electrochemistry, (iii) ultrafast 2D IR in conjunction with diffraction-limited microscopy, (iv) several variants of non-equilibrium 2D IR spectroscopy such as transient 2D IR and 3D IR, and (v) extensions of the pump and probe spectral regions for multi-dimensional vibrational spectroscopy towards mixed vibrational-electronic spectroscopies. In light of these examples, the important open scientific and conceptual questions with regard to intra- and intermolecular dynamics are highlighted. Such questions can be tackled with the existing arsenal of experimental variants of 2D IR spectroscopy to promote the understanding of fundamentally new aspects in chemistry, biology and materials science. The final part of the chapter introduces several concepts of currently performed technical developments, which aim at exploiting 2D IR spectroscopy as an analytical tool. Such developments embrace the combination of 2D IR spectroscopy and plasmonic spectroscopy for ultrasensitive analytics, merging 2D IR spectroscopy with ultra-high-resolution microscopy (nanoscopy), future variants of transient 2D IR methods, or 2D IR in conjunction with microfluidics. It is expected that these techniques will allow for groundbreaking research in many new areas of natural sciences.

  20. High Dynamic Range Imaging Using Multiple Exposures

    NASA Astrophysics Data System (ADS)

    Hou, Xinglin; Luo, Haibo; Zhou, Peipei; Zhou, Wei

    2017-06-01

    It is challenging to capture a high-dynamic range (HDR) scene using a low-dynamic range (LDR) camera. This paper presents an approach for improving the dynamic range of cameras by using multiple exposure images of the same scene taken under different exposure times. First, the camera response function (CRF) is recovered by solving a high-order polynomial in which only the ratios of the exposures are used. Then, the HDR radiance image is reconstructed by weighted summation of the individual radiance maps. After that, a novel local tone mapping (TM) operator is proposed for the display of the HDR radiance image. By solving the high-order polynomial, the CRF can be recovered quickly and easily. Taking local image features and histogram statistics into consideration, the proposed TM operator preserves local details efficiently. Experimental results demonstrate the effectiveness of our method. By comparison, the method outperforms other methods in terms of imaging quality.
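
    The weighted-summation reconstruction can be sketched compactly. The snippet below assumes the camera response has already been recovered (a simple gamma curve stands in for the paper's polynomial CRF) and merges the exposures in the log-radiance domain with a hat-shaped weighting function; it is an illustrative baseline, not the authors' method.

    ```python
    import numpy as np

    def merge_hdr(images, exposure_times, gamma=2.2, eps=1e-6):
        """Reconstruct a log-radiance map from differently exposed images of the same scene.

        images: list of float arrays in [0, 1]; exposure_times: exposure of each image (s).
        A gamma curve stands in for the recovered camera response function.
        """
        num = np.zeros_like(images[0], dtype=np.float64)
        den = np.zeros_like(images[0], dtype=np.float64)
        for img, t in zip(images, exposure_times):
            weight = 1.0 - np.abs(2.0 * img - 1.0)                   # hat weight: trust mid-range pixels
            log_irradiance = gamma * np.log(img + eps) - np.log(t)   # inverse CRF in the log domain
            num += weight * log_irradiance
            den += weight
        return num / (den + eps)                                     # weighted average per pixel

    # A tone-mapping operator (e.g. a local, histogram-aware one) would then compress
    # the result for display on a conventional monitor.
    ```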

  1. Multi-sensor analysis of urban ecosystems

    USGS Publications Warehouse

    Gallo, Kevin P.; Ji, Lei

    2004-01-01

    This study examines the synthesis of multiple space-based sensors to characterize the urban environment. Single-scene data (e.g., ASTER visible and near-IR surface reflectance, and land surface temperature data), multi-temporal data (e.g., one year of 16-day MODIS and AVHRR vegetation index data), and DMSP-OLS nighttime light data acquired in the early 1990s and 2000 were evaluated for urban ecosystem analysis. The advantages of a multi-sensor approach for the analysis of urban ecosystem processes are discussed.

  2. SSC Geopositional Assessment of the Advanced Wide Field Sensor

    NASA Technical Reports Server (NTRS)

    Ross, Kenton

    2007-01-01

    The objective is to provide independent verification of IRS geopositional accuracy claims and of the internal geopositional characterization provided by Lutes (2005). Six sub-scenes (quads) were assessed, three from each AWiFS camera. Check points were manually matched to a digital orthophoto quarter quadrangle (DOQQ) reference (assumed accuracy approx. 5 m, RMSE), and were selected to meet or exceed the Federal Geographic Data Committee's guidelines. ESRI ArcGIS was used for data collection, and SSC-written MATLAB scripts were used for data analysis.

  3. Complex Dynamic Scene Perception: Effects of Attentional Set on Perceiving Single and Multiple Event Types

    ERIC Educational Resources Information Center

    Sanocki, Thomas; Sulman, Noah

    2013-01-01

    Three experiments measured the efficiency of monitoring complex scenes composed of changing objects, or events. All events lasted about 4 s, but in a given block of trials, could be of a single type (single task) or of multiple types (multitask, with a total of four event types). Overall accuracy of detecting target events amid distractors was…

  4. Autonomous UAV-based mapping of large-scale urban firefights

    NASA Astrophysics Data System (ADS)

    Snarski, Stephen; Scheibner, Karl; Shaw, Scott; Roberts, Randy; LaRow, Andy; Breitfeller, Eric; Lupo, Jasper; Nielson, Darron; Judge, Bill; Forren, Jim

    2006-05-01

    This paper describes experimental results from a live-fire data collect designed to demonstrate the ability of IR and acoustic sensing systems to detect and map high-volume gunfire events from tactical UAVs. The data collect supports an exploratory study of the FightSight concept in which an autonomous UAV-based sensor exploitation and decision support capability is being proposed to provide dynamic situational awareness for large-scale battalion-level firefights in cluttered urban environments. FightSight integrates IR imagery, acoustic data, and 3D scene context data with prior time information in a multi-level, multi-step probabilistic-based fusion process to reliably locate and map the array of urban firing events and firepower movements and trends associated with the evolving urban battlefield situation. Described here are sensor results from live-fire experiments involving simultaneous firing of multiple sub/super-sonic weapons (2-AK47, 2-M16, 1 Beretta, 1 Mortar, 1 rocket) with high optical and acoustic clutter at ranges up to 400m. Sensor-shooter-target configurations and clutter were designed to simulate UAV sensing conditions for a high-intensity firefight in an urban environment. Sensor systems evaluated were an IR bullet tracking system by Lawrence Livermore National Laboratory (LLNL) and an acoustic gunshot detection system by Planning Systems, Inc. (PSI). The results demonstrate convincingly the ability for the LLNL and PSI sensor systems to accurately detect, separate, and localize multiple shooters and the associated shot directions during a high-intensity firefight (77 rounds in 5 sec) in a high acoustic and optical clutter environment with very low false alarms. Preliminary fusion processing was also examined that demonstrated an ability to distinguish co-located shooters (shooter density), range to <0.5 m accuracy at 400m, and weapon type. The combined results of the high-intensity firefight data collect and a detailed systems study demonstrate the readiness of the FightSight concept for full system development and integration.

  5. Hybrid classical/quantum simulation for infrared spectroscopy of water

    NASA Astrophysics Data System (ADS)

    Maekawa, Yuki; Sasaoka, Kenji; Ube, Takuji; Ishiguro, Takashi; Yamamoto, Takahiro

    2018-05-01

    We have developed a hybrid classical/quantum simulation method to calculate the infrared (IR) spectrum of water. The proposed method achieves much higher accuracy than conventional classical molecular dynamics (MD) simulations at a much lower computational cost than ab initio MD simulations. The IR spectrum of water is obtained as an ensemble average of the eigenvalues of the dynamical matrix constructed by ab initio calculations, using the positions of oxygen atoms that constitute water molecules obtained from the classical MD simulation. The calculated IR spectrum is in excellent agreement with the experimental IR spectrum.
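
    The central numerical step, turning a dynamical matrix into vibrational frequencies that can then be ensemble-averaged into a spectrum, can be sketched as follows; the Hessian here is a placeholder array supplied by the caller, and the unit conversion constant assumes inputs in eV/Angstrom^2 and amu.

    ```python
    import numpy as np

    EV_A2_AMU_TO_CM1 = 521.47   # sqrt(eV / (amu * Angstrom^2)) expressed in cm^-1

    def vibrational_frequencies(hessian, masses_amu):
        """Harmonic frequencies (cm^-1) from a Cartesian Hessian in eV/Angstrom^2.

        hessian: (3N, 3N) second-derivative matrix; masses_amu: length-N atomic masses.
        Negative eigenvalues (imaginary modes) are returned as negative frequencies.
        """
        m = np.repeat(np.asarray(masses_amu, dtype=float), 3)       # one mass per Cartesian row
        inv_sqrt_m = 1.0 / np.sqrt(m)
        dyn = hessian * np.outer(inv_sqrt_m, inv_sqrt_m)            # mass-weighted dynamical matrix
        eigvals = np.linalg.eigvalsh(dyn)
        return np.sign(eigvals) * np.sqrt(np.abs(eigvals)) * EV_A2_AMU_TO_CM1

    # In the hybrid scheme, such frequencies would be computed for many MD snapshots and
    # histogrammed (ensemble-averaged) to build up the IR spectrum.
    ```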

  6. A Saccade Based Framework for Real-Time Motion Segmentation Using Event Based Vision Sensors

    PubMed Central

    Mishra, Abhishek; Ghosh, Rohan; Principe, Jose C.; Thakor, Nitish V.; Kukreja, Sunil L.

    2017-01-01

    Motion segmentation is a critical pre-processing step for autonomous robotic systems to facilitate tracking of moving objects in cluttered environments. Event-based sensors are low-power analog devices that represent a scene by means of asynchronous information updates of only the dynamic details at high temporal resolution and, hence, require significantly fewer calculations. However, motion segmentation using spatiotemporal data is a challenging task due to data asynchrony. Prior approaches for object tracking using neuromorphic sensors perform well when the sensor is static or when a known model of the object to be followed is available. To address these limitations, in this paper we develop a technique for generalized motion segmentation based on spatial statistics across time frames. First, we create micromotion on the platform to facilitate the separation of static and dynamic elements of a scene, inspired by human saccadic eye movements. Second, we introduce the concept of spike-groups as a methodology to partition spatio-temporal event groups, which facilitates computation of scene statistics and characterization of the objects in the scene. Experimental results show that our algorithm is able to classify dynamic objects with a moving camera with a maximum accuracy of 92%. PMID:28316563

  7. Ghost detection and removal based on super-pixel grouping in exposure fusion

    NASA Astrophysics Data System (ADS)

    Jiang, Shenyu; Xu, Zhihai; Li, Qi; Chen, Yueting; Feng, Huajun

    2014-09-01

    A novel multi-exposure image fusion method for dynamic scenes is proposed. The commonly used techniques for high dynamic range (HDR) imaging are based on the combination of multiple differently exposed images of the same scene. The drawback of these methods is that ghosting artifacts will be introduced into the final HDR image if the scene is not static. In this paper, a super-pixel grouping based method is proposed to detect ghosting in the image sequences. We introduce the zero-mean normalized cross correlation (ZNCC) as a measure of similarity between a given exposure image and the reference. The calculation of ZNCC is implemented at the super-pixel level, and the super-pixels which have low correlation with the reference are excluded by adjusting the weight maps for fusion. Without any prior information on the camera response function or exposure settings, the proposed method generates low dynamic range (LDR) images which can be shown directly on conventional display devices, with details preserved and ghost effects reduced. Experimental results show that the proposed method generates high-quality images which have fewer ghost artifacts and provide better visual quality than previous approaches.
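
    The ghost test reduces to a per-super-pixel ZNCC score against the reference exposure. The sketch below computes ZNCC over label regions and zeroes the fusion weights of poorly correlated regions; the labels are assumed to come from any super-pixel segmentation (e.g. SLIC), and the threshold value is illustrative.

    ```python
    import numpy as np

    def zncc(a, b, eps=1e-6):
        """Zero-mean normalized cross correlation between two equally sized pixel sets."""
        a = a - a.mean()
        b = b - b.mean()
        return float(np.sum(a * b) / (np.sqrt(np.sum(a * a) * np.sum(b * b)) + eps))

    def suppress_ghost_weights(exposure, reference, labels, weight_map, threshold=0.6):
        """Zero the fusion weights of super-pixels that correlate poorly with the reference.

        exposure, reference: grey-level images (float arrays of equal size).
        labels: integer super-pixel label per pixel (e.g. from a SLIC segmentation).
        weight_map: this exposure's fusion weights; a corrected copy is returned.
        """
        out = weight_map.copy()
        for lab in np.unique(labels):
            mask = labels == lab
            if zncc(exposure[mask], reference[mask]) < threshold:
                out[mask] = 0.0       # likely a moving object: exclude region from the fusion
        return out
    ```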

  8. Visualization of spatial-temporal data based on 3D virtual scene

    NASA Astrophysics Data System (ADS)

    Wang, Xianghong; Liu, Jiping; Wang, Yong; Bi, Junfang

    2009-10-01

    The main purpose of this paper is to realize three-dimensional dynamic visualization of spatial-temporal data in a three-dimensional virtual scene, using three-dimensional visualization technology combined with GIS, so that people's ability to cognize time and space is enhanced through the design of dynamic symbols and interactive expression. Using particle systems, three-dimensional simulation, virtual reality and other visual means, the situations produced by changes in the spatial location and property information of geographical entities over time can be simulated; their movement and transformation rules can then be explored and analyzed interactively, and history can be replayed and future behavior forecast. The main research objects in this paper are vehicle tracks and typhoon paths and their spatial-temporal data: through three-dimensional dynamic simulation of these tracks, their trends can be monitored in a timely manner and historical tracks replayed. Visualization techniques for spatial-temporal data in a three-dimensional virtual scene provide an excellent cognitive instrument for spatial-temporal information, which not only shows changes and developments in a situation more clearly but can also be used to predict and deduce future developments and changes.

  9. How Early is Infants' Attention to Objects and Actions Shaped by Culture? New Evidence from 24-Month-Olds Raised in the US and China

    PubMed Central

    Waxman, Sandra R.; Fu, Xiaolan; Ferguson, Brock; Geraghty, Kathleen; Leddon, Erin; Liang, Jing; Zhao, Min-Fang

    2016-01-01

    Researchers have proposed that the culture in which we are raised shapes the way that we attend to the objects and events that surround us. What remains unclear, however, is how early any such culturally-inflected differences emerge in development. Here, we address this issue directly, asking how 24-month-old infants from the US and China deploy their attention to objects and actions in dynamic scenes. By analyzing infants' eye movements while they observed dynamic scenes, the current experiment revealed striking convergences, overall, in infants' patterns of visual attention in the two communities, but also pinpointed a brief period during which their attention reliably diverged. This divergence, though modest, suggested that infants from the US devoted relatively more attention to the objects and those from China devoted relatively more attention to the actions in which they were engaged. This provides the earliest evidence for strong overlap in infants' attention to objects and events in dynamic scenes, but also raises the possibility that by 24 months, infants' attention may also be shaped subtly by the culturally-inflected attentional proclivities characteristic of adults in their cultural communities. PMID:26903905

  10. Method for separating video camera motion from scene motion for constrained 3D displacement measurements

    NASA Astrophysics Data System (ADS)

    Gauthier, L. R.; Jansen, M. E.; Meyer, J. R.

    2014-09-01

    Camera motion is a potential problem when a video camera is used to perform dynamic displacement measurements. If the scene camera moves at the wrong time, the apparent motion of the object under study can easily be confused with the real motion of the object. In some cases, it is practically impossible to prevent camera motion, as for instance, when a camera is used outdoors in windy conditions. A method to address this challenge is described that provides an objective means to measure the displacement of an object of interest in the scene, even when the camera itself is moving in an unpredictable fashion at the same time. The main idea is to synchronously measure the motion of the camera and to use those data ex post facto to subtract out the apparent motion in the scene that is caused by the camera motion. The motion of the scene camera is measured by using a reference camera that is rigidly attached to the scene camera and oriented towards a stationary reference object. For instance, this reference object may be on the ground, which is known to be stationary. It is necessary to calibrate the reference camera by simultaneously measuring the scene images and the reference images at times when it is known that the scene object is stationary and the camera is moving. These data are used to map camera movement data to apparent scene movement data in pixel space and subsequently used to remove the camera movement from the scene measurements.
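
    The ex post facto correction amounts to learning, from the calibration interval, how reference-camera motion maps to apparent scene motion in pixel space, and then subtracting the predicted component from the measurement run; the linear least-squares mapping below is an assumption made for illustration.

    ```python
    import numpy as np

    def fit_camera_to_scene_mapping(ref_motion_calib, scene_motion_calib):
        """Fit a linear map A such that apparent scene motion ~= reference motion @ A.

        Both inputs are (T, 2) pixel-displacement series recorded during calibration,
        while the scene object is known to be stationary and only the camera moves.
        """
        A, *_ = np.linalg.lstsq(ref_motion_calib, scene_motion_calib, rcond=None)
        return A                                  # shape (2, 2)

    def remove_camera_motion(scene_motion, ref_motion, A):
        """Subtract the camera-induced apparent motion from the measured scene motion."""
        return scene_motion - ref_motion @ A

    # Usage sketch: calibrate while the object is static, then correct the measurement run.
    # A = fit_camera_to_scene_mapping(ref_calib, scene_calib)
    # true_displacement = remove_camera_motion(scene_run, ref_run, A)
    ```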

  11. Modulation of Temporal Precision in Thalamic Population Responses to Natural Visual Stimuli

    PubMed Central

    Desbordes, Gaëlle; Jin, Jianzhong; Alonso, Jose-Manuel; Stanley, Garrett B.

    2010-01-01

    Natural visual stimuli have highly structured spatial and temporal properties which influence the way visual information is encoded in the visual pathway. In response to natural scene stimuli, neurons in the lateral geniculate nucleus (LGN) are temporally precise – on a time scale of 10–25 ms – both within single cells and across cells within a population. This time scale, established by non stimulus-driven elements of neuronal firing, is significantly shorter than that of natural scenes, yet is critical for the neural representation of the spatial and temporal structure of the scene. Here, a generalized linear model (GLM) that combines stimulus-driven elements with spike-history dependence associated with intrinsic cellular dynamics is shown to predict the fine timing precision of LGN responses to natural scene stimuli, the corresponding correlation structure across nearby neurons in the population, and the continuous modulation of spike timing precision and latency across neurons. A single model captured the experimentally observed neural response, across different levels of contrasts and different classes of visual stimuli, through interactions between the stimulus correlation structure and the nonlinearity in spike generation and spike history dependence. Given the sensitivity of the thalamocortical synapse to closely timed spikes and the importance of fine timing precision for the faithful representation of natural scenes, the modulation of thalamic population timing over these time scales is likely important for cortical representations of the dynamic natural visual environment. PMID:21151356
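
    A generalized linear model of this kind combines a stimulus filter with a spike-history filter inside an exponential nonlinearity. The toy simulator below illustrates that structure only; the filters, bin size and bias are arbitrary assumptions, not the fitted LGN model.

    ```python
    import numpy as np

    def simulate_glm(stimulus, k, h, bias, dt=0.001, rng=None):
        """Simulate spike counts from a Poisson GLM with stimulus and spike-history filters.

        stimulus: 1D array, one value per time bin of width dt (seconds).
        k: stimulus filter, most recent bin last; h: spike-history filter, most recent bin last.
        Conditional intensity: lambda(t) = exp(bias + k . recent stimulus + h . recent spiking).
        """
        rng = rng or np.random.default_rng()
        spikes = np.zeros(len(stimulus))
        for t in range(len(stimulus)):
            x_win = stimulus[max(0, t - len(k) + 1): t + 1]   # recent stimulus, including bin t
            y_win = spikes[max(0, t - len(h)): t]             # recent spiking, excluding bin t
            drive = bias + np.dot(k[-len(x_win):], x_win)
            if len(y_win):
                drive += np.dot(h[-len(y_win):], y_win)
            spikes[t] = rng.poisson(np.exp(drive) * dt)       # expected count in this bin
        return spikes

    # Example: a negative history filter mimics refractoriness right after a spike.
    k = np.array([0.1, 0.3, 0.6])
    h = np.array([-0.5, -2.0])
    counts = simulate_glm(2.0 * np.sin(np.linspace(0.0, 20.0, 2000)), k, h, bias=2.0)
    ```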

  12. The Sport Expert's Attention Superiority on Skill-related Scene Dynamic by the Activation of left Medial Frontal Gyrus: An ERP and LORETA Study.

    PubMed

    He, Mengyang; Qi, Changzhu; Lu, Yang; Song, Amanda; Hayat, Saba Z; Xu, Xia

    2018-05-21

    Extensive studies have shown that a sports expert is superior to a sports novice in the visual perceptual-cognitive processing of sports scene information; however, the attentional and neural basis of this superiority has not been thoroughly explored. The present study examined whether a sports expert has attentional superiority for scene information relevant to his/her sport skill, and explored what factor drives this superiority. To address this problem, EEGs were recorded as participants passively viewed sport scenes (tennis vs. non-tennis) and negative emotional faces in the context of a visual attention task, where pictures of sport scenes or of negative emotional faces randomly followed pictures with overlapping sport scenes and negative emotional faces. ERP results showed that for experts, the evoked potential of attentional competition elicited by the overlap containing a tennis scene was significantly larger than that evoked by the overlap containing a non-tennis scene, while this effect was absent for novices. LORETA showed that the experts' left medial frontal gyrus (MFG) was significantly more active than the right MFG when processing the overlap containing a tennis scene, but the lateralization effect was not significant in novices. These results indicate that experts have attentional superiority for skill-related scene information, even when the scene is accompanied by negative emotional faces, which act as strong distractors prone to causing a negativity bias in the visual field. This superiority is driven by activation of the left MFG and is probably due to self-reference. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  13. A Stochastic-entropic Approach to Detect Persistent Low-temperature Volcanogenic Thermal Anomalies

    NASA Astrophysics Data System (ADS)

    Pieri, D. C.; Baxter, S.

    2011-12-01

    Eruption prediction is a chancy, idiosyncratic affair, as volcanoes often manifest waxing and/or waning pre-eruption emission, geodetic, and seismic behavior that is unsystematic. Thus, fundamental to increased prediction accuracy and precision are good and frequent assessments of the time-series behavior of relevant precursor geophysical, geochemical, and geological phenomena, especially when volcanoes become restless. The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), in orbit since 1999 on the NASA Terra Earth Observing System satellite, is an important capability for detection of thermal eruption precursors (even subtle ones) and increased passive gas emissions. The unique combination of ASTER high spatial resolution multi-spectral thermal IR imaging data (90 m/pixel; 5 bands in the 8-12 um region), combined with simultaneous visible and near-IR imaging data and stereo-photogrammetric capabilities, makes it a useful precursor detection tool, especially for thermal precursors. The JPL ASTER Volcano Archive, consisting of 80,000+ ASTER volcano images, allows systematic analysis of (a) baseline thermal emissions for 1550+ volcanoes, (b) important aspects of the time-dependent thermal variability, and (c) the limits of detection of temporal dynamics of eruption precursors. We are analyzing a catalog of the magnitude, frequency, and distribution of ASTER-documented volcano thermal signatures, compiled from 2000 onward, at 90 m/pixel. Low-contrast thermal anomalies of relatively low apparent absolute temperature (e.g., summit lakes, fumarolically altered areas, geysers, very small sub-pixel hotspots), for which the signal-to-noise ratio may be marginal (e.g., scene confusion due to clouds, water and water vapor, fumarolic emissions, variegated ground emissivity, and their combinations), are particularly important to discern and monitor. We have developed a technique to detect persistent hotspots that takes into account in-scene observed pixel joint frequency distributions over time, temperature contrast, and Shannon entropy. Preliminary analyses of Fogo Volcano and Yellowstone hotspots, among others, indicate that this is a very sensitive technique with good potential to be applied over the entire ASTER global night-time archive. We will discuss our progress in creating the global thermal anomaly catalog as well as our algorithmic approach and results. This work was carried out at the Jet Propulsion Laboratory of the California Institute of Technology under contract to NASA.
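
    As a rough illustration of combining temperature contrast with Shannon entropy over a pixel's time series, the sketch below flags pixels that are repeatedly warmer than the scene background and whose temperature distribution over time is concentrated (low entropy); the thresholds and the combination rule are assumptions for illustration, not the authors' stochastic-entropic algorithm.

    ```python
    import numpy as np

    def shannon_entropy(values, bins=16):
        """Shannon entropy (bits) of the empirical distribution of a 1D sample."""
        hist, _ = np.histogram(values, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        return float(-np.sum(p * np.log2(p)))

    def persistent_hotspots(temps, contrast_k=3.0, entropy_max=2.5, min_fraction=0.8):
        """Flag persistent thermal anomalies in a (T, H, W) brightness-temperature stack.

        A pixel is flagged if it exceeds the scene mean by contrast_k standard deviations
        in at least min_fraction of the scenes AND its temperature time series has
        Shannon entropy below entropy_max (i.e. the behavior is consistent, not noisy).
        """
        t = temps.shape[0]
        scene_mean = temps.reshape(t, -1).mean(axis=1)[:, None, None]
        scene_std = temps.reshape(t, -1).std(axis=1)[:, None, None]
        hot = temps > scene_mean + contrast_k * scene_std            # per-scene contrast test
        persistence = hot.mean(axis=0)                               # fraction of scenes flagged
        entropy = np.apply_along_axis(shannon_entropy, 0, temps)     # per-pixel temporal entropy
        return (persistence >= min_fraction) & (entropy <= entropy_max)
    ```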

  14. The Southampton-York Natural Scenes (SYNS) dataset: Statistics of surface attitude

    PubMed Central

    Adams, Wendy J.; Elder, James H.; Graf, Erich W.; Leyland, Julian; Lugtigheid, Arthur J.; Muryy, Alexander

    2016-01-01

    Recovering 3D scenes from 2D images is an under-constrained task; optimal estimation depends upon knowledge of the underlying scene statistics. Here we introduce the Southampton-York Natural Scenes dataset (SYNS: https://syns.soton.ac.uk), which provides comprehensive scene statistics useful for understanding biological vision and for improving machine vision systems. In order to capture the diversity of environments that humans encounter, scenes were surveyed at random locations within 25 indoor and outdoor categories. Each survey includes (i) spherical LiDAR range data, (ii) high-dynamic-range spherical imagery, and (iii) a panorama of stereo image pairs. We envisage many uses for the dataset and present one example: an analysis of surface attitude statistics, conditioned on scene category and viewing elevation. Surface normals were estimated using a novel adaptive scale selection algorithm. Across categories, surface attitude below the horizon is dominated by the ground plane (0° tilt). Near the horizon, probability density is elevated at 90°/270° tilt due to vertical surfaces (trees, walls). Above the horizon, probability density is elevated near 0° slant due to overhead structure such as ceilings and leaf canopies. These structural regularities represent potentially useful prior assumptions for human and machine observers, and may predict human biases in perceived surface attitude. PMID:27782103
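    As a small companion to the surface-attitude analysis, the sketch below converts unit surface normals into slant and tilt angles in a viewer-centred frame. It assumes the normals have already been estimated (the paper's adaptive scale selection is not reproduced), and the choice of tilt reference axis is a convention, so the zero point may differ from the one used in the dataset.

    ```python
    import numpy as np

    def slant_tilt(normals):
        """Convert unit surface normals (N, 3) to slant and tilt, in degrees.
        Frame convention assumed here: +z points toward the observer and +x is
        the horizontal image axis. Slant is the angle between the normal and
        the line of sight; tilt is the orientation of the normal's projection
        onto the image plane, measured from +x."""
        n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
        slant = np.degrees(np.arccos(np.clip(n[:, 2], -1.0, 1.0)))
        tilt = np.degrees(np.arctan2(n[:, 1], n[:, 0])) % 360.0
        return slant, tilt
    ```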

  15. Resolving the neural dynamics of visual and auditory scene processing in the human brain: a methodological approach

    PubMed Central

    Teng, Santani

    2017-01-01

    In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique provides a spatio-temporally integrated view. In this opinion piece, we argue that progress can be made by a concerted effort based on three pillars of recent methodological development: (i) sensitive analysis techniques such as decoding and cross-classification, (ii) complex computational modelling using models such as deep neural networks, and (iii) integration across imaging methods (magnetoencephalography/electroencephalography, functional magnetic resonance imaging) and models, e.g. using representational similarity analysis. We showcase two recent efforts that have been undertaken in this spirit and provide novel results about visual and auditory scene analysis. Finally, we discuss the limits of this perspective and sketch a concrete roadmap for future research. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044019

  16. Memory-guided attention during active viewing of edited dynamic scenes.

    PubMed

    Valuch, Christian; König, Peter; Ansorge, Ulrich

    2017-01-01

    Films, TV shows, and other edited dynamic scenes contain many cuts, which are abrupt transitions from one video shot to the next. Cuts occur within or between scenes, and often join together visually and semantically related shots. Here, we tested the degree to which memory for the visual features of the precut shot facilitates shifting attention to the postcut shot. We manipulated visual similarity across cuts, and measured how this affected covert attention (Experiment 1) and overt attention (Experiments 2 and 3). In Experiments 1 and 2, participants actively viewed a target movie that randomly switched locations with a second, distractor movie at the time of the cuts. In both experiments, participants were able to deploy attention to the target movie's continuation more rapidly and accurately when visual similarity across the cut was high than when it was low. Experiment 3 tested whether this could be explained by stimulus-driven (bottom-up) priming by feature similarity, using one clip at screen center that was followed by two alternative continuations to the left and right. Here, even the highest similarity across cuts did not capture attention. We conclude that following cuts of high visual similarity, memory-guided attention facilitates the deployment of attention, but this effect is (top-down) dependent on the viewer's active matching of scene content across cuts.

  17. Resolving the neural dynamics of visual and auditory scene processing in the human brain: a methodological approach.

    PubMed

    Cichy, Radoslaw Martin; Teng, Santani

    2017-02-19

    In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique provides a spatio-temporally integrated view. In this opinion piece, we argue that progress can be made by a concerted effort based on three pillars of recent methodological development: (i) sensitive analysis techniques such as decoding and cross-classification, (ii) complex computational modelling using models such as deep neural networks, and (iii) integration across imaging methods (magnetoencephalography/electroencephalography, functional magnetic resonance imaging) and models, e.g. using representational similarity analysis. We showcase two recent efforts that have been undertaken in this spirit and provide novel results about visual and auditory scene analysis. Finally, we discuss the limits of this perspective and sketch a concrete roadmap for future research. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Authors.

  18. Applications of 2D IR spectroscopy to peptides, proteins, and hydrogen-bond dynamics

    PubMed Central

    Kim, Yung Sam; Hochstrasser, Robin M.

    2010-01-01

    Following a survey of 2D IR principles this Feature Article describes recent experiments on the hydrogen-bond dynamics of small ions, amide-I modes, nitrile probes, peptides, reverse transcriptase inhibitors, and amyloid fibrils. PMID:19351162

  19. The detection and discrimination of human body fluids using ATR FT-IR spectroscopy.

    PubMed

    Orphanou, Charlotte-Maria; Walton-Williams, Laura; Mountain, Harry; Cassella, John

    2015-07-01

    Blood, saliva, semen and vaginal secretions are the main human body fluids encountered at crime scenes. Currently presumptive tests are routinely utilised to indicate the presence of body fluids, although these are often subject to false positives and limited to particular body fluids. Over the last decade more sensitive and specific body fluid identification methods have been explored, such as mRNA analysis and proteomics, although these are not yet appropriate for routine application. This research investigated the application of ATR FT-IR spectroscopy for the detection and discrimination of human blood, saliva, semen and vaginal secretions. The results demonstrated that ATR FT-IR spectroscopy can detect and distinguish between these body fluids based on the unique spectral pattern, combination of peaks and peak frequencies corresponding to the macromolecule groups common within biological material. Comparisons with known abundant proteins relevant to each body fluid were also analysed to enable specific peaks to be attributed to the relevant protein components, which further reinforced the discrimination and identification of each body fluid. Overall, this preliminary research has demonstrated the potential for ATR FT-IR spectroscopy to be utilised in the routine confirmatory screening of biological evidence due to its quick and robust application within forensic science. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  20. Common and Innovative Visuals: A sparsity modeling framework for video.

    PubMed

    Abdolhosseini Moghadam, Abdolreza; Kumar, Mrityunjay; Radha, Hayder

    2014-05-02

    Efficient video representation models are critical for many video analysis and processing tasks. In this paper, we present a framework based on the concept of finding the sparsest solution to model video frames. To model the spatio-temporal information, frames from one scene are decomposed into two components: (i) a common frame, which describes the visual information common to all the frames in the scene/segment, and (ii) a set of innovative frames, which depict the dynamic behaviour of the scene. The proposed approach exploits and builds on recent results in the field of compressed sensing to jointly estimate the common frame and the innovative frames for each video segment. We refer to the proposed modeling framework as CIV (Common and Innovative Visuals). We show how the proposed model can be utilized to find scene change boundaries and extend CIV to videos from multiple scenes. Furthermore, the proposed model is robust to noise and can be used for various video processing applications without relying on motion estimation and detection or image segmentation. Results for object tracking, video editing (object removal, inpainting) and scene change detection are presented to demonstrate the efficiency and the performance of the proposed model.
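    The decomposition can be pictured with a much simpler stand-in than the paper's joint compressed-sensing estimator: below, the common frame is taken as the temporal median of a segment and the innovative frames as soft-thresholded residuals. Array shapes and the threshold are assumptions for illustration only.

    ```python
    import numpy as np

    def common_innovative(frames, thresh=10.0):
        """Split a segment's frames (T, H, W) into one common frame plus sparse
        innovative frames. Temporal median + soft thresholding stands in for
        the paper's sparsity-based joint estimation."""
        common = np.median(frames, axis=0)
        residual = frames - common
        innovative = np.sign(residual) * np.maximum(np.abs(residual) - thresh, 0.0)
        return common, innovative
    ```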

  1. Individual differences in the spontaneous recruitment of brain regions supporting mental state understanding when viewing natural social scenes.

    PubMed

    Wagner, Dylan D; Kelley, William M; Heatherton, Todd F

    2011-12-01

    People are able to rapidly infer complex personality traits and mental states even from the most minimal person information. Research has shown that when observers view a natural scene containing people, they spend a disproportionate amount of their time looking at the social features (e.g., faces, bodies). Does this preference for social features merely reflect the biological salience of these features or are observers spontaneously attempting to make sense of complex social dynamics? Using functional neuroimaging, we investigated neural responses to social and nonsocial visual scenes in a large sample of participants (n = 48) who varied on an individual difference measure assessing empathy and mentalizing (i.e., empathizing). Compared with other scene categories, viewing natural social scenes activated regions associated with social cognition (e.g., dorsomedial prefrontal cortex and temporal poles). Moreover, activity in these regions during social scene viewing was strongly correlated with individual differences in empathizing. These findings offer neural evidence that observers spontaneously engage in social cognition when viewing complex social material but that the degree to which people do so is mediated by individual differences in trait empathizing.

  2. Auditory salience using natural soundscapes.

    PubMed

    Huang, Nicholas; Elhilali, Mounya

    2017-03-01

    Salience describes the phenomenon by which an object stands out from a scene. While its underlying processes are extensively studied in vision, mechanisms of auditory salience remain largely unknown. Previous studies have used well-controlled auditory scenes to shed light on some of the acoustic attributes that drive the salience of sound events. Unfortunately, the use of constrained stimuli in addition to a lack of well-established benchmarks of salience judgments hampers the development of comprehensive theories of sensory-driven auditory attention. The present study explores auditory salience in a set of dynamic natural scenes. A behavioral measure of salience is collected by having human volunteers listen to two concurrent scenes and indicate continuously which one attracts their attention. By using natural scenes, the study takes a data-driven rather than experimenter-driven approach to exploring the parameters of auditory salience. The findings indicate that the space of auditory salience is multidimensional (spanning loudness, pitch, spectral shape, as well as other acoustic attributes), nonlinear and highly context-dependent. Importantly, the results indicate that contextual information about the entire scene over both short and long scales needs to be considered in order to properly account for perceptual judgments of salience.

  3. Emotional contexts modulate intentional memory suppression of neutral faces: Insights from ERPs.

    PubMed

    Pierguidi, Lapo; Righi, Stefania; Gronchi, Giorgio; Marzi, Tessa; Caharel, Stephanie; Giovannelli, Fabio; Viggiano, Maria Pia

    2016-08-01

    The main goal of the present work is to gain new insight into the temporal dynamics underlying voluntary memory control for neutral faces associated with neutral, positive and negative contexts. A directed forgetting (DF) procedure was used during EEG recording to answer the question of whether it is possible to forget a face that has been encoded within a particular emotional context. A face-scene phase, in which a neutral face was shown in a neutral or emotional (positive, negative) scene, was followed by a voluntary memory cue (cue phase) indicating whether the face was to be remembered or to be forgotten (TBR or TBF). Memory for faces was then assessed with an old/new recognition task. Behaviorally, we found that it is harder to suppress faces-in-positive-scenes than faces-in-negative- or neutral-scenes. The temporal information obtained from the ERPs showed that: 1) during the face-scene phase, the Late Positive Potential (LPP), which indexes motivated emotional attention, was larger for faces-in-negative-scenes than for faces-in-neutral-scenes; and 2) remarkably, during the cue phase, ERPs were significantly modulated by the emotional contexts. Faces-in-neutral-scenes showed an ERP pattern typically associated with the DF effect, whereas faces-in-positive-scenes elicited the reverse ERP pattern. Faces-in-negative-scenes did not show differences in DF-related neural activity, but a larger N1 amplitude for TBF vs. TBR faces may index early attentional deployment. These results support the hypothesis that the pleasantness or unpleasantness of the contexts (through attentional broadening and narrowing mechanisms, respectively) may modulate the effectiveness of intentional memory suppression for neutral information. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Thematic Mapper Data Quality and Performance Assessment in Renewable Resources/agriculture Remote Sensing

    NASA Technical Reports Server (NTRS)

    Bizzell, R. M.; Prior, H. L.

    1984-01-01

    It is believed that the increased spatial resolution will provide solutions to proportion estimation error due to mixed pixels, and that the increased spectral resolution will provide for the identification of important agricultural features such as crop stage and condition. The results of analyses conducted relative to these hypotheses, using sample segments extracted from the 4-band Detroit scene and the 7-band Mississippi County, Arkansas engineering test scene, are described. Several studies were conducted to evaluate the geometric and radiometric performance of the TM to determine data viability for the more pertinent investigations of TM utility. In most cases this requirement was more than sufficiently satisfied. This allowed the opportunity to take advantage of detailed ground observations for several of the sample segments to assess class separability and the detection of other important features with TM. The results presented regarding these TM characteristics show not only that the increased spatial and spectral resolution captures greater definition of the within-scene variance, but also that the mid-IR bands (5 and 7) are necessary for optimum crop type classification. Both qualitative and quantitative results are presented that describe the improvements gained with the TM, both relative to the MSS and on its own merit.

  5. Water of Hydration Dynamics in Minerals Gypsum and Bassanite: Ultrafast 2D IR Spectroscopy of Rocks.

    PubMed

    Yan, Chang; Nishida, Jun; Yuan, Rongfeng; Fayer, Michael D

    2016-08-03

    Water of hydration plays an important role in minerals, determining their crystal structures and physical properties. Here ultrafast nonlinear infrared (IR) techniques, two-dimensional infrared (2D IR) and polarization selective pump-probe (PSPP) spectroscopies, were used to measure the dynamics and disorder of water of hydration in two minerals, gypsum (CaSO4·2H2O) and bassanite (CaSO4·0.5H2O). 2D IR spectra revealed that water arrangement in freshly precipitated gypsum contained a small amount of inhomogeneity. Following annealing at 348 K, water molecules became highly ordered; the 2D IR spectrum became homogeneously broadened (motional narrowed). PSPP measurements observed only inertial orientational relaxation. In contrast, water in bassanite's tubular channels is dynamically disordered. 2D IR spectra showed a significant amount of inhomogeneous broadening caused by a range of water configurations. At 298 K, water dynamics cause spectral diffusion that sampled a portion of the inhomogeneous line width on the time scale of ∼30 ps, while the rest of inhomogeneity is static on the time scale of the measurements. At higher temperature, the dynamics become faster. Spectral diffusion accelerates, and a portion of the lower temperature spectral diffusion became motionally narrowed. At sufficiently high temperature, all of the dynamics that produced spectral diffusion at lower temperatures became motionally narrowed, and only homogeneous broadening and static inhomogeneity were observed. Water angular motions in bassanite exhibit temperature-dependent diffusive orientational relaxation in a restricted cone of angles. The experiments were made possible by eliminating the vast amount of scattered light produced by the granulated powder samples using phase cycling methods.

  6. Role of Dynamics in the Autoinhibition and Activation of the Hyperpolarization-activated Cyclic Nucleotide-modulated (HCN) Ion Channels

    PubMed Central

    VanSchouwen, Bryan; Akimoto, Madoka; Sayadi, Maryam; Fogolari, Federico; Melacini, Giuseppe

    2015-01-01

    The hyperpolarization-activated cyclic nucleotide-modulated (HCN) ion channels control rhythmicity in neurons and cardiomyocytes. Cyclic AMP allosterically modulates HCN through the cAMP-dependent formation of a tetrameric gating ring spanning the intracellular region (IR) of HCN, to which cAMP binds. Although the apo versus holo conformational changes of the cAMP-binding domain (CBD) have been previously mapped, only limited information is currently available on the HCN IR dynamics, which have been hypothesized to play a critical role in the cAMP-dependent gating of HCN. Here, using molecular dynamics simulations validated and complemented by experimental NMR and CD data, we comparatively analyze HCN IR dynamics in the four states of the thermodynamic cycle arising from the coupling between cAMP binding and tetramerization equilibria. This extensive set of molecular dynamics trajectories captures the active-to-inactive transition that had remained elusive for other CBDs, and it provides unprecedented insight into the role of IR dynamics in HCN autoinhibition and its release by cAMP. Specifically, the IR tetramerization domain becomes more flexible in the monomeric states, removing steric clashes that the apo-CBD structure would otherwise impose. Furthermore, the simulations reveal that the active/inactive structural transition for the apo-monomeric CBD occurs through a manifold of pathways that are more divergent than previously anticipated. Upon cAMP binding, these pathways become disallowed, pre-confining the CBD conformational ensemble to a tetramer-compatible state. This conformational confinement primes the IR for tetramerization and thus provides a model of how cAMP controls HCN channel gating. PMID:25944904

  7. Evaluation of High Dynamic Range Photography as a Luminance Mapping Technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Inanici, Mehlika; Galvin, Jim

    2004-12-30

    The potential, limitations, and applicability of the High Dynamic Range (HDR) photography technique are evaluated as a luminance mapping tool. Multiple exposure photographs of static scenes are taken with a Nikon 5400 digital camera to capture the wide luminance variation within the scenes. The camera response function is computationally derived using the Photosphere software, and is used to fuse the multiple photographs into HDR images. The vignetting effect and point spread function of the camera and lens system are determined. Laboratory and field studies have shown that the pixel values in the HDR photographs can correspond to the physical quantity of luminance with reasonable precision and repeatability.
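    The fusion step can be sketched as below, assuming the camera's inverse response curve (mapping 8-bit pixel values to log exposure) has already been recovered, e.g. by Photosphere; the hat-shaped weighting and variable names are illustrative, not the exact procedure used in the report.

    ```python
    import numpy as np

    def fuse_hdr(images, exposure_times, inv_response):
        """Fuse multi-exposure 8-bit images (list of (H, W) arrays) into a
        relative luminance map. `inv_response[z]` gives log exposure for
        pixel value z (length-256 lookup table)."""
        z = np.stack(images).astype(int)                 # (N, H, W)
        w = 1.0 - 2.0 * np.abs(z / 255.0 - 0.5)          # trust mid-range pixels most
        log_e = inv_response[z] - np.log(np.asarray(exposure_times))[:, None, None]
        return np.exp((w * log_e).sum(0) / (w.sum(0) + 1e-8))  # relative, not absolute, luminance
    ```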

  8. Visual Acuity Using Head-fixed Displays During Passive Self and Surround Motion

    NASA Technical Reports Server (NTRS)

    Wood, Scott J.; Black, F. Owen; Stallings, Valerie; Peters, Brian

    2007-01-01

    The ability to read head-fixed displays on various motion platforms requires the suppression of vestibulo-ocular reflexes. This study examined dynamic visual acuity while viewing a head-fixed display during different self and surround rotation conditions. Twelve healthy subjects were asked to report the orientation of Landolt C optotypes presented on a micro-display fixed to a rotating chair at 50 cm distance. Acuity thresholds were determined by the lowest size at which the subjects correctly identified 3 of 5 optotype orientations at peak velocity. Visual acuity was compared across four different conditions, each tested at 0.05 and 0.4 Hz (peak amplitude of 57 deg/s). The four conditions included: subject rotated in semi-darkness (i.e., limited to background illumination of the display), subject stationary while visual scene rotated, subject rotated around a stationary visual background, and both subject and visual scene rotated together. Visual acuity performance was greatest when the subject rotated around a stationary visual background; i.e., when both vestibular and visual inputs provided concordant information about the motion. Visual acuity performance was most reduced when the subject and visual scene rotated together; i.e., when the visual scene provided discordant information about the motion. Ranges of 4-5 logMAR step sizes across the conditions indicated the acuity task was sufficient to discriminate visual performance levels. The background visual scene can influence the ability to read head-fixed displays during passive motion disturbances. Dynamic visual acuity using head-fixed displays can provide an operationally relevant screening tool for visual performance during exposure to novel acceleration environments.

  9. Space flight visual simulation.

    PubMed

    Xu, L

    1985-01-01

    In this paper, based on the scenes of stars seen by astronauts during orbital flight, we study the mathematical model that must be constructed for a CGI system to realize space flight visual simulation. Considering such factors as the revolution and rotation of the Earth; the exact date, time, and site of orbital injection of the spacecraft; and its orbital flight and attitude motion, we first define all the instantaneous lines of sight and visual fields of the astronauts in space. Then, through a series of coordinate transforms, the pictures of the star scenes changing in time and space are photographed mathematically, one by one. In this procedure, we designed a method of applying "mathematical cutting" three times. Finally, we obtain each instantaneous picture of the star scene observed by the astronauts through the cockpit window. The dynamic shading of the star field by the Earth can also be displayed in the varying scene pictures.
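    The chain of coordinate transforms can be illustrated with elementary rotation matrices, as in the sketch below. The specific rotation sequence, angle names, and field-of-view cut are assumptions chosen for the example; the paper's exact transform chain and "mathematical cutting" steps are not reproduced.

    ```python
    import numpy as np

    def rot(axis, angle):
        """Elementary right-handed rotation matrix about the 'x', 'y' or 'z' axis."""
        c, s = np.cos(angle), np.sin(angle)
        if axis == 'x':
            return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
        if axis == 'y':
            return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

    def stars_through_window(star_dirs_eci, raan, inc, arg_lat, yaw, pitch, roll,
                             half_fov_deg=60.0):
        """Rotate inertial star unit vectors (N, 3) into a window-fixed frame by
        chaining an orbit rotation (RAAN, inclination, argument of latitude)
        with an attitude rotation (yaw, pitch, roll), then keep stars within a
        circular field of view about the +z window boresight. Angles in radians."""
        eci_to_orbit = rot('z', arg_lat) @ rot('x', inc) @ rot('z', raan)
        orbit_to_body = rot('x', roll) @ rot('y', pitch) @ rot('z', yaw)
        v = star_dirs_eci @ (orbit_to_body @ eci_to_orbit).T
        return v[v[:, 2] > np.cos(np.radians(half_fov_deg))]
    ```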

  10. Temporal and spatial neural dynamics in the perception of basic emotions from complex scenes

    PubMed Central

    Costa, Tommaso; Cauda, Franco; Crini, Manuella; Tatu, Mona-Karina; Celeghin, Alessia; de Gelder, Beatrice

    2014-01-01

    The different temporal dynamics of emotions are critical to understanding their evolutionary role in the regulation of interactions with the surrounding environment. Here, we investigated the temporal dynamics underlying the perception of four basic emotions from complex scenes varying in valence and arousal (fear, disgust, happiness and sadness) with the millisecond time resolution of Electroencephalography (EEG). Event-related potentials were computed and each emotion showed a specific temporal profile, as revealed by distinct time segments of significant differences from the neutral scenes. Fear perception elicited significant activity at the earliest time segments, followed by disgust, happiness and sadness. Moreover, fear, disgust and happiness were characterized by two time segments of significant activity, whereas sadness showed only one long-latency time segment of activity. Multidimensional scaling was used to assess the correspondence between neural temporal dynamics and the subjective experience elicited by the four emotions in a subsequent behavioral task. We found a high coherence between these two classes of data, indicating that psychological categories defining emotions have a close correspondence at the brain level in terms of neural temporal dynamics. Finally, we localized the brain regions of time-dependent activity for each emotion and time segment with low-resolution brain electromagnetic tomography. Fear and disgust showed widely distributed activations, predominantly in the right hemisphere. Happiness activated a number of areas mostly in the left hemisphere, whereas sadness showed a limited number of active areas at late latency. The present findings indicate that the neural signature of basic emotions can emerge as the byproduct of dynamic spatiotemporal brain networks as investigated with millisecond-range resolution, rather than in time-independent areas involved uniquely in the processing of one specific emotion. PMID:24214921

  11. Overexpression of insulin-like growth factor-I receptor as a pertinent biomarker for hepatocytes malignant transformation

    PubMed Central

    Yan, Xiao-Di; Yao, Min; Wang, Li; Zhang, Hai-Jian; Yan, Mei-Juan; Gu, Xing; Shi, Yun; Chen, Jie; Dong, Zhi-Zhen; Yao, Deng-Fu

    2013-01-01

    AIM: To investigate the dynamic features of insulin-like growth factor-I receptor (IGF-IR) expression in rat hepatocarcinogenesis, and the relationship between IGF-IR and hepatocyte malignant transformation at the mRNA and protein levels. METHODS: Hepatoma models were induced with 2-fluorenylacetamide (2-FAA) in male Sprague-Dawley rats. Morphological changes of hepatocytes were observed by hematoxylin and eosin staining, and the dynamic expression of liver and serum IGF-IR was quantitatively analyzed by enzyme-linked immunosorbent assay. The distribution of hepatic IGF-IR was localized by immunohistochemistry. Fragments of the IGF-IR gene were amplified by reverse transcription-polymerase chain reaction and confirmed by sequencing. RESULTS: After 2-FAA induction, rat hepatocytes changed dynamically from granule-like degeneration through precancerous lesions to hepatoma formation, with progressively increasing hepatic IGF-IR expression at the mRNA and protein levels. The incidences of liver IGF-IR and IGF-IR mRNA, the specific IGF-IR concentration (ng/mg wet liver), and the serum IGF-IR level (ng/mL) were 0.0%, 0.0%, 0.63 ± 0.17, and 1.33 ± 0.47 in the control group; 50.0%, 61.1%, 0.65 ± 0.2, and 1.51 ± 0.46 in the degeneration group; 88.9%, 100%, 0.66 ± 0.14, and 1.92 ± 0.29 in the precancerous group; and 100%, 100%, 0.96 ± 0.09, and 2.43 ± 0.57 in the cancerous group, respectively. IGF-IR expression in the cancerous group was significantly higher (P < 0.01) than in any other group at both the mRNA and protein levels. A close positive correlation of IGF-IR was found between liver and serum (r = 0.91, t = 14.222, P < 0.01). CONCLUSION: IGF-IR expression may participate in rat hepatocarcinogenesis, and its abnormality may serve as an early marker of hepatocyte malignant transformation. PMID:24106410

  12. Using VIS/NIR and IR spectral cameras for detecting and separating crime scene details

    NASA Astrophysics Data System (ADS)

    Kuula, Jaana; Pölönen, Ilkka; Puupponen, Hannu-Heikki; Selander, Tuomas; Reinikainen, Tapani; Kalenius, Tapani; Saari, Heikki

    2012-06-01

    Detecting invisible details and separating mixed evidence is critical for forensic inspection. If this can be done reliably and quickly at the crime scene, irrelevant objects do not require further examination at the laboratory. This speeds up the inspection process and releases resources for other critical tasks. This article reports on tests carried out at the University of Jyväskylä in Finland, together with the Central Finland Police Department and the National Bureau of Investigation, on detecting and separating forensic details with hyperspectral technology. In the tests, evidence was sought at a simulated violent burglary scene using VTT's 500-900 nm wavelength VNIR camera, Specim's 400-1000 nm VNIR camera, and Specim's 1000-2500 nm SWIR camera. The tested details were dried blood on a ceramic plate, a stain of four types of mixed and absorbed blood, and blood which had been washed off a table. Other examined details included untreated latent fingerprints, gunshot residue, primer residue, and layered paint on small pieces of wood. All cameras could detect visible details and separate mixed paint. The SWIR camera could also separate four types of human and animal blood which were mixed in the same stain and absorbed into a fabric. None of the cameras, however, could detect primer residue, untreated latent fingerprints, or blood that had been washed off. The results are encouraging and indicate the need for further studies. They also emphasize the importance of creating optimal imaging conditions at the crime scene for each kind of subject and background.

  13. Adaptive convergence nonuniformity correction algorithm.

    PubMed

    Qian, Weixian; Chen, Qian; Bai, Junqi; Gu, Guohua

    2011-01-01

    Nowadays, convergence and ghosting artifacts are common problems in scene-based nonuniformity correction (NUC) algorithms. In this study, we introduce the idea of spatial frequency into scene-based NUC. We then present a convergence speed factor, which adaptively changes the convergence speed according to changes in the scene dynamic range; in effect, the role of the convergence speed factor is to decrease the standard deviation of the statistical data. The spatial relativity characteristic of the nonuniformity was summarized from a large amount of experimental statistical data and was used to correct the convergence speed factor, making it more stable. Finally, real and simulated infrared image sequences were applied to demonstrate the positive effect of our algorithm.
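    A minimal way to picture an adaptive-step scene-based NUC is the LMS-style sketch below, where each frame's learning rate is scaled by the scene dynamic range; the update rule, low-pass target, and constants are illustrative assumptions, not the authors' exact formulation.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def lms_nuc(frames, base_lr=0.05):
        """Scene-based non-uniformity correction with an adaptive convergence
        speed. `frames` is a (T, H, W) raw infrared sequence; returns the
        corrected frames."""
        gain = np.ones(frames.shape[1:])
        offset = np.zeros(frames.shape[1:])
        corrected = []
        for f in frames.astype(float):
            y = gain * f + offset                  # current correction estimate
            target = uniform_filter(y, size=7)     # spatial low-pass as the desired output
            err = y - target
            dr = np.ptp(f)                         # scene dynamic range of this frame
            lr = base_lr * dr / (dr + 255.0)       # adaptive convergence speed factor
            gain -= lr * err * f / (np.abs(f).max() + 1e-6)
            offset -= lr * err
            corrected.append(y)
        return np.stack(corrected)
    ```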

  14. Time-Resolved IR-Absorption Spectroscopy of Hot-Electron Dynamics in Satellite and Upper Conduction Bands in GaP

    NASA Technical Reports Server (NTRS)

    Cavicchia, M. A.; Alfano, R. R.

    1995-01-01

    The relaxation dynamics of hot electrons in the X6 and X7 satellite and upper conduction bands in GaP was directly measured by femtosecond UV-pump-IR-probe absorption spectroscopy. From a fit to the induced IR-absorption spectra the dominant scattering mechanism giving rise to the absorption at early delay times was determined to be intervalley scattering of electrons out of the X7 upper conduction-band valley. For long delay times the dominant scattering mechanism is electron-hole scattering. Electron transport dynamics of the upper conduction band of GaP has been time resolved.

  15. Assessment of Thematic Mapper band-to-band registration by the block correlation method

    NASA Technical Reports Server (NTRS)

    Card, D. H.; Wrigley, R. C.; Mertz, F. C.; Hall, J. R.

    1983-01-01

    Rectangular blocks of pixels from one band image were statistically correlated against blocks centered on identical pixels from a second band image. The block pairs were shifted in pixel increments both vertically and horizontally with respect to each other, and the shift yielding the maximum correlation coefficient was taken as the best estimate of registration error for each block pair. For the band combinations of the Arkansas scene studied, the misregistration of TM spectral bands within the noncooled focal plane lies well within the 0.2 pixel target specification. Misregistration between the middle IR bands is well within this specification also. The thermal IR band has an apparent misregistration with TM band 7 of approximately 3 pixels in each direction. TM band 3 has a misregistration of approximately 0.2 pixel in the across-scan direction and 0.5 pixel in the along-scan direction with both TM bands 5 and 7.
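    For a single block pair, the procedure can be illustrated by the whole-pixel search below; band names, block size, and search radius are placeholders, and subpixel estimates such as the 0.2-pixel figures quoted above additionally require interpolating the correlation surface, which is not shown.

    ```python
    import numpy as np

    def block_misregistration(band_a, band_b, center, size=32, max_shift=3):
        """Shift a block from band_b around the same location in band_a and
        return the (row, col) shift with the highest correlation coefficient.
        `center` must lie at least `max_shift` pixels inside the image borders."""
        r, c = center
        ref = band_a[r:r + size, c:c + size].ravel()
        best_rho, best_shift = -2.0, (0, 0)
        for dr in range(-max_shift, max_shift + 1):
            for dc in range(-max_shift, max_shift + 1):
                blk = band_b[r + dr:r + dr + size, c + dc:c + dc + size].ravel()
                rho = np.corrcoef(ref, blk)[0, 1]
                if rho > best_rho:
                    best_rho, best_shift = rho, (dr, dc)
        return best_shift, best_rho
    ```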

  16. Assessment of Thematic Mapper Band-to-band Registration by the Block Correlation Method

    NASA Technical Reports Server (NTRS)

    Card, D. H.; Wrigley, R. C.; Mertz, F. C.; Hall, J. R.

    1985-01-01

    Rectangular blocks of pixels from one band image were statistically correlated against blocks centered on identical pixels from a second band image. The block pairs were shifted in pixel increments both vertically and horizontally with respect to each other, and the shift yielding the maximum correlation coefficient was taken as the best estimate of registration error for each block pair. For the band combinations of the Arkansas scene studied, the misregistration of TM spectral bands within the noncooled focal plane lies well within the 0.2 pixel target specification. Misregistration between the middle IR bands is well within this specification also. The thermal IR band has an apparent misregistration with TM band 7 of approximately 3 pixels in each direction. TM band 3 has a misregistration of approximately 0.2 pixel in the across-scan direction and 0.5 pixel in the along-scan direction with both TM bands 5 and 7.

  17. Fast and robust wavelet-based dynamic range compression and contrast enhancement model with color restoration

    NASA Astrophysics Data System (ADS)

    Unaldi, Numan; Asari, Vijayan K.; Rahman, Zia-ur

    2009-05-01

    Recently, we proposed a wavelet-based dynamic range compression algorithm to improve the visual quality of digital images captured from high dynamic range scenes under non-uniform lighting conditions. The fast image enhancement algorithm, which provides dynamic range compression while preserving the local contrast and tonal rendition, is also a good candidate for real-time video processing applications. Although the colors of the enhanced images produced by the proposed algorithm are consistent with the colors of the original image, the algorithm fails to produce color-constant results for some "pathological" scenes that have very strong spectral characteristics in a single band. The linear color restoration process is the main reason for this drawback; hence, a different approach is required for the final color restoration process. In this paper, the latest version of the proposed algorithm, which deals with this issue, is presented. The results obtained by applying the algorithm to numerous natural images show strong robustness and high image quality.

  18. Research on 3D virtual campus scene modeling based on 3ds Max and VRML

    NASA Astrophysics Data System (ADS)

    Kang, Chuanli; Zhou, Yanliu; Liang, Xianyue

    2015-12-01

    With the rapid development of modern technology, digital information management and virtual reality simulation have become research hotspots. A 3D virtual campus model can not only express real-world objects naturally, realistically and vividly, but can also extend the campus in the dimensions of time and space, combining the school environment with its information. This paper mainly uses 3ds Max to create three-dimensional models of campus buildings, special land areas, and other campus objects. Dynamic interactive functions are then realized by programming the object models built in 3ds Max with VRML. This research focuses on virtual campus scene modeling technology and VRML scene design, and on optimization strategies for real-time processing in the scene design workflow; the approach preserves texture map image quality while improving the running speed of image texture mapping. According to the features and architecture of Guilin University of Technology, 3ds Max, AutoCAD and VRML were used to model the different objects of the virtual campus. Finally, the resulting virtual campus scene is summarized.

  19. The Neural Dynamics of Attentional Selection in Natural Scenes.

    PubMed

    Kaiser, Daniel; Oosterhof, Nikolaas N; Peelen, Marius V

    2016-10-12

    The human visual system can only represent a small subset of the many objects present in cluttered scenes at any given time, such that objects compete for representation. Despite these processing limitations, the detection of object categories in cluttered natural scenes is remarkably rapid. How does the brain efficiently select goal-relevant objects from cluttered scenes? In the present study, we used multivariate decoding of magneto-encephalography (MEG) data to track the neural representation of within-scene objects as a function of top-down attentional set. Participants detected categorical targets (cars or people) in natural scenes. The presence of these categories within a scene was decoded from MEG sensor patterns by training linear classifiers on differentiating cars and people in isolation and testing these classifiers on scenes containing one of the two categories. The presence of a specific category in a scene could be reliably decoded from MEG response patterns as early as 160 ms, despite substantial scene clutter and variation in the visual appearance of each category. Strikingly, we find that these early categorical representations fully depend on the match between visual input and top-down attentional set: only objects that matched the current attentional set were processed to the category level within the first 200 ms after scene onset. A sensor-space searchlight analysis revealed that this early attention bias was localized to lateral occipitotemporal cortex, reflecting top-down modulation of visual processing. These results show that attention quickly resolves competition between objects in cluttered natural scenes, allowing for the rapid neural representation of goal-relevant objects. Efficient attentional selection is crucial in many everyday situations. For example, when driving a car, we need to quickly detect obstacles, such as pedestrians crossing the street, while ignoring irrelevant objects. How can humans efficiently perform such tasks, given the multitude of objects contained in real-world scenes? Here we used multivariate decoding of magnetoencephalography data to characterize the neural underpinnings of attentional selection in natural scenes with high temporal precision. We show that brain activity quickly tracks the presence of objects in scenes, but crucially only for those objects that were immediately relevant for the participant. These results provide evidence for fast and efficient attentional selection that mediates the rapid detection of goal-relevant objects in real-world environments. Copyright © 2016 the authors.
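    The cross-decoding scheme (train on isolated objects, test on cluttered scenes, time point by time point) can be sketched as below with a generic linear classifier from scikit-learn; the array shapes and the choice of classifier are assumptions, not the authors' exact pipeline.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    def cross_decode_timecourse(X_isolated, y_isolated, X_scene, y_scene):
        """Train on sensor patterns evoked by isolated objects and test on
        patterns evoked by scenes, independently at every time point.
        X arrays are (trials, sensors, timepoints); y holds category labels."""
        n_times = X_isolated.shape[2]
        accuracy = np.zeros(n_times)
        for t in range(n_times):
            clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
            clf.fit(X_isolated[:, :, t], y_isolated)
            accuracy[t] = clf.score(X_scene[:, :, t], y_scene)
        return accuracy
    ```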

  20. Infrared Imaging for Inquiry-Based Learning

    ERIC Educational Resources Information Center

    Xie, Charles; Hazzard, Edmund

    2011-01-01

    Based on detecting long-wavelength infrared (IR) radiation emitted by the subject, IR imaging shows temperature distribution instantaneously and heat flow dynamically. As a picture is worth a thousand words, an IR camera has great potential in teaching heat transfer, which is otherwise invisible. The idea of using IR imaging in teaching was first…

  1. VIIRS day-night band gain and offset determination and performance

    NASA Astrophysics Data System (ADS)

    Geis, J.; Florio, C.; Moyer, D.; Rausch, K.; De Luccia, F. J.

    2012-09-01

    On October 28th, 2011, the Visible-Infrared Imaging Radiometer Suite (VIIRS) was launched on-board the Suomi National Polar-orbiting Partnership (NPP) spacecraft. The instrument has 22 spectral bands: 14 reflective solar bands (RSB), 7 thermal emissive bands (TEB), and a Day Night Band (DNB). The DNB is a panchromatic, solar reflective band that provides visible through near infrared (IR) imagery of earth scenes with radiances spanning 7 orders of magnitude. In order to function over this large dynamic range, the DNB employs a focal plane array (FPA) consisting of three gain stages: the low gain stage (LGS), the medium gain stage (MGS), and the high gain stage (HGS). The final product generated from a DNB raw data record (RDR) is a radiance sensor data record (SDR). Generation of the SDR requires accurate knowledge of the dark offsets and gain coefficients for each DNB stage. These are measured on-orbit and stored in lookup tables (LUT) that are used during ground processing. This paper will discuss the details of the offset and gain measurement, data analysis methodologies, the operational LUT update process, and results to date including a first look at trending of these parameters over the early life of the instrument.
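    Conceptually, the stage selection can be illustrated as below: apply a per-stage dark offset and gain, then keep, per pixel, the most sensitive stage that is not saturated. The per-stage scalar tables and the assumed saturation level are simplifications for illustration; the operational SDR algorithm uses far more detailed per-detector lookup tables.

    ```python
    import numpy as np

    SATURATION = 4095.0   # illustrative full-scale count, not the flight value

    def dnb_radiance(counts, offsets, gains):
        """Convert DNB counts from the three gain stages (ordered HGS, MGS, LGS)
        to radiance via radiance = (counts - offset) / gain, keeping for each
        pixel the most sensitive unsaturated stage. `counts` is (3, H, W);
        `offsets` and `gains` are length-3 arrays."""
        counts = np.asarray(counts, dtype=float)
        offsets = np.asarray(offsets, dtype=float)
        gains = np.asarray(gains, dtype=float)
        rad = (counts - offsets[:, None, None]) / gains[:, None, None]
        unsaturated = counts < SATURATION
        stage = np.argmax(unsaturated, axis=0)   # first unsaturated stage; 0 (HGS) if all saturate
        return np.take_along_axis(rad, stage[None], axis=0)[0], stage
    ```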

  2. When anticipation beats accuracy: Threat alters memory for dynamic scenes.

    PubMed

    Greenstein, Michael; Franklin, Nancy; Martins, Mariana; Sewack, Christine; Meier, Markus A

    2016-05-01

    Threat frequently leads to the prioritization of survival-relevant processes. Much of the work examining threat-related processing advantages has focused on the detection of static threats or long-term memory for details. In the present study, we examined immediate memory for dynamic threatening situations. We presented participants with visually neutral, dynamic stimuli using a representational momentum (RM) paradigm, and manipulated threat conceptually. Although the participants in both the threatening and nonthreatening conditions produced classic RM effects, RM was stronger for scenarios involving threat (Exps. 1 and 2). Experiments 2 and 3 showed that this effect does not generalize to the nonthreatening objects within a threatening scene, and that it does not extend to arousing happy situations. Although the increased RM effect for threatening objects by definition reflects reduced accuracy, we argue that this reduced accuracy may be offset by a superior ability to predict, and thereby evade, a moving threat.

  3. Dynamic Target Acquisition: Empirical Models of Operator Performance.

    DTIC Science & Technology

    1980-08-01

    Tabulated ordered means are reported for a 30,000 ft initial slant range, covering the Signature X Scene Complexity interaction (active- and inactive-target FLIR signatures at low, medium, and high scene complexity) and the Signature X Speed interaction.

  4. An HDR imaging method with DTDI technology for push-broom cameras

    NASA Astrophysics Data System (ADS)

    Sun, Wu; Han, Chengshan; Xue, Xucheng; Lv, Hengyi; Shi, Junxia; Hu, Changhong; Li, Xiangzhi; Fu, Yao; Jiang, Xiaonan; Huang, Liang; Han, Hongyin

    2018-03-01

    Conventionally, high-dynamic-range (HDR) imaging is based on taking two or more pictures of the same scene with different exposures. However, because of the high-speed relative motion between the camera and the scene, this technique is hard to apply to push-broom remote sensing cameras. For HDR imaging in push-broom remote sensing applications, the present paper proposes an innovative method that can generate HDR images without redundant image sensors or optical components. Specifically, this paper adopts an area-array CMOS (complementary metal oxide semiconductor) sensor with digital-domain time-delay-integration (DTDI) technology for imaging, instead of adopting more than one row of image sensors, thereby taking more than one picture of the same scene with different exposures. A new HDR image is then obtained by fusing the two original images with a simple algorithm. In the experiment, the dynamic range (DR) of the image increased by 26.02 dB. The proposed method is proved to be effective and has potential in other imaging applications where there is relative motion between the camera and the scene.
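    The idea can be reduced to the toy fusion below, which builds two effective exposures by accumulating different numbers of repeated line samples in the digital domain and keeps the long accumulation wherever it is unsaturated; the stage counts, full-scale value, and fusion rule are illustrative assumptions, not the paper's algorithm.

    ```python
    import numpy as np

    def dtdi_hdr_line(line_samples, n_short=2, n_long=16, full_scale=1023.0):
        """Fuse one scene line from repeated samples (shape: samples x pixels).
        Short and long digital TDI accumulations emulate two exposures; the
        short one is rescaled to the long exposure before substitution."""
        short_acc = line_samples[:n_short].sum(axis=0)
        long_acc = line_samples[:n_long].sum(axis=0)
        saturated = line_samples[:n_long].max(axis=0) >= full_scale
        return np.where(saturated, short_acc * (n_long / n_short), long_acc)
    ```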

  5. Automatic segmentation of trees in dynamic outdoor environments

    USDA-ARS?s Scientific Manuscript database

    Segmentation in dynamic outdoor environments can be difficult when the illumination levels and other aspects of the scene cannot be controlled. Specifically in agricultural contexts, a background material is often used to shield a camera's field of view from other rows of crops. In this paper, we ...

  6. ULTRAFAST CHEMISTRY: Using Time-Resolved Vibrational Spectroscopy for Interrogation of Structural Dynamics

    NASA Astrophysics Data System (ADS)

    Nibbering, Erik T. J.; Fidder, Henk; Pines, Ehud

    2005-05-01

    Time-resolved infrared (IR) and Raman spectroscopy elucidates molecular structure evolution during ultrafast chemical reactions. Following vibrational marker modes in real time provides direct insight into the structural dynamics, as is evidenced in studies on intramolecular hydrogen transfer, bimolecular proton transfer, electron transfer, hydrogen bonding during solvation dynamics, bond fission in organometallic compounds and heme proteins, cis-trans isomerization in retinal proteins, and transformations in photochromic switch pairs. Femtosecond IR spectroscopy monitors the site-specific interactions in hydrogen bonds. Conversion between excited electronic states can be followed for intramolecular electron transfer by inspection of the fingerprint IR- or Raman-active vibrations in conjunction with quantum chemical calculations. Excess internal vibrational energy, generated either by optical excitation or by internal conversion from the electronic excited state to the ground state, is observable through transient frequency shifts of IR-active vibrations and through nonequilibrium populations as deduced by Raman resonances.

  7. Memory conformity and the perceived accuracy of self versus other.

    PubMed

    Allan, Kevin; Midjord, J Palli; Martin, Doug; Gabbert, Fiona

    2012-02-01

    Here, we demonstrate that the decision to conform to another person's memory involves a strategic trade-off that balances the accuracy of one's own memory against that of another person. We showed participants three household scenes, one for 30 s, one for 60 s, and one for 120 s. Half were told that they would encode each scene for half as long as their virtual partner, and half were told that they would encode each scene for twice as long as their virtual partner. On a subsequent two-alternative-forced choice (2AFC) memory test, the simulated answer of the partner (accurate, errant, or no response) was shown before participants responded. Conformity to the partner's responses was significantly enhanced for the 30-s versus the 60- and 120-s scenes. This pattern, however, was present only in the group who believed that they had encoded each scene for half as long as their partner, even though the short-duration scene had the lowest baseline 2AFC accuracy in both groups and was also subjectively rated as the least memorable by both groups. Our reliance on other people's memory is therefore dynamically and strategically adjusted according to knowledge of the conditions under which we and other people have acquired different memories.

  8. Micromachined single-level nonplanar polycrystalline SiGe thermal microemitters for infrared dynamic scene projection

    NASA Astrophysics Data System (ADS)

    Malyutenko, V. K.; Malyutenko, O. Yu.; Leonov, V.; Van Hoof, C.

    2009-05-01

    The technology for self-supported membraneless polycrystalline SiGe thermal microemitters, their design, and performance are presented. The 128-element arrays with a fill factor of 88% and a 2.5-μm-thick resonant cavity have been grown by low-pressure chemical vapor deposition and fabricated using surface micromachining technology. The 200-nm-thick 60×60 μm2 emitting pixels enforced with a U-shape profile pattern demonstrate a thermal time constant of 2-7 ms and an apparent temperature of 700 K in the 3-5 and 8-12 μm atmospheric transparency windows. The application of the devices to the infrared dynamic scene simulation and their benefit over conventional planar membrane-supported emitters are discussed.

  9. MO-F-CAMPUS-J-03: Sorting 2D Dynamic MR Images Using Internal Respiratory Signal for 4D MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wen, Z; Hui, C; Beddar, S

    Purpose: To develop a novel algorithm to extract an internal respiratory signal (IRS) for sorting dynamic magnetic resonance (MR) images in order to achieve four-dimensional (4D) MR imaging. Methods: Dynamic MR images were obtained with balanced steady state free precession by acquiring each two-dimensional sagittal slice repeatedly for more than one breathing cycle. To generate a robust IRS, we used 5 different representative internal respiratory surrogates in both the image space (body area) and the Fourier space (the first two low-frequency phase components in the anterior-posterior direction, and the first two low-frequency phase components in the superior-inferior direction). A clustering algorithm was then used to search for a group of similar individual internal signals, which was then used to formulate the final IRS. A phantom study and a volunteer study were performed to demonstrate the effectiveness of this algorithm. The IRS was compared to the signal from the respiratory bellows. Results: The IRS computed by our algorithm matched well with the bellows signal in both the phantom and the volunteer studies. On average, the normalized cross correlation between the IRS and the bellows signal was 0.97 in the phantom study and 0.87 in the volunteer study, respectively. The average difference between the end inspiration times in the IRS and bellows signal was 0.18 s in the phantom study and 0.14 s in the volunteer study, respectively. 4D images sorted based on the IRS showed minimal mismatch artifacts, and the motion of the anatomy was coherent with the respiratory phases. Conclusion: A novel algorithm was developed to generate an IRS from dynamic MR images to achieve 4D MR imaging. The performance of the IRS was comparable to that of the bellows signal. It can be easily implemented in the clinic and potentially could replace the use of external respiratory surrogates. This research was partially funded by the Center for Radiation Oncology Research from UT MD Anderson Cancer Center.
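    A simplified stand-in for the surrogate-combination step is sketched below: candidate surrogate time series are z-scored, sign-aligned, and averaged after discarding those that disagree with the group. The correlation threshold and the consensus rule are assumptions; the paper's clustering algorithm is not reproduced.

    ```python
    import numpy as np

    def internal_respiratory_signal(surrogates, min_mean_corr=0.7):
        """Combine candidate respiratory surrogates (list of equal-length 1-D
        time series) into a single internal respiratory signal."""
        s = np.array([(x - np.mean(x)) / (np.std(x) + 1e-9) for x in surrogates])
        signs = np.sign([np.corrcoef(s[0], x)[0, 1] for x in s])
        s *= signs[:, None]                                  # align breathing polarity
        keep = np.corrcoef(s).mean(axis=1) >= min_mean_corr  # drop dissenting surrogates
        return s[keep].mean(axis=0)
    ```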

  10. Paramedic Checklists do not Accurately Identify Post-ictal or Hypoglycaemic Patients Suitable for Discharge at the Scene.

    PubMed

    Tohira, Hideo; Fatovich, Daniel; Williams, Teresa A; Bremner, Alexandra; Arendts, Glenn; Rogers, Ian R; Celenza, Antonio; Mountain, David; Cameron, Peter; Sprivulis, Peter; Ahern, Tony; Finn, Judith

    2016-06-01

    The objective of this study was to assess the accuracy and safety of two pre-defined checklists to identify prehospital post-ictal or hypoglycemic patients who could be discharged at the scene. A retrospective cohort study of lower acuity, adult patients attended by paramedics in 2013, and who were either post-ictal or hypoglycemic, was conducted. Two self-care pathway assessment checklists (one each for post-ictal and hypoglycemia) designed as clinical decision tools for paramedics to identify patients suitable for discharge at the scene were used. The intention of the checklists was to provide paramedics with justification to not transport a patient if all checklist criteria were met. Actual patient destination (emergency department [ED] or discharge at the scene) and subsequent events (eg, ambulance requests) were compared between patients who did and did not fulfill the checklists. The performance of the checklists against the destination determined by paramedics was also assessed. Totals of 629 post-ictal and 609 hypoglycemic patients were identified. Of these, 91 (14.5%) and 37 (6.1%) patients fulfilled the respective checklist. Among those who fulfilled the checklist, 25 (27.5%) post-ictal and 18 (48.6%) hypoglycemic patients were discharged at the scene, and 21 (23.1%) and seven (18.9%) were admitted to hospital after ED assessment. Amongst post-ictal patients, those fulfilling the checklist had more subsequent ambulance requests (P=.01) and ED attendances with seizure-related conditions (P=.04) within three days than those who did not. Amongst hypoglycemic patients, there were no significant differences in subsequent events between those who did and did not meet the criteria. Paramedics discharged five times more hypoglycemic patients at the scene than the checklist predicted with no significant differences in the rate of subsequent events. Four deaths (0.66%) occurred within seven days in the hypoglycemic cohort, and none of them were attributed directly to hypoglycemia. The checklists did not accurately identify patients suitable for discharge at the scene within the Emergency Medical Service. Patients who fulfilled the post-ictal checklist made more subsequent health care service requests within three days than those who did not. Both checklists showed similar occurrence of subsequent events to paramedics' decision, but the hypoglycemia checklist identified fewer patients who could be discharged at the scene than paramedics actually discharged. Reliance on these checklists may increase transportations to ED and delay initiation of appropriate treatment at a hospital. Tohira H , Fatovich D , Williams TA , Bremner A , Arendts G , Rogers IR , Celenza A , Mountain D , Cameron P , Sprivulis P , Ahern T , Finn J . Paramedic checklists do not accurately identify post-ictal or hypoglycaemic patients suitable for discharge at the scene. Prehosp Disaster Med. 2016;31(3):282-293.

  11. Communication: nanosecond folding dynamics of an alpha helix: time-dependent 2D-IR cross peaks observed using polarization-sensitive dispersed pump-probe spectroscopy.

    PubMed

    Panman, Matthijs R; van Dijk, Chris N; Meuzelaar, Heleen; Woutersen, S

    2015-01-28

    We present a simple method to measure the dynamics of cross peaks in time-resolved two-dimensional vibrational spectroscopy. By combining suitably weighted dispersed pump-probe spectra, we eliminate the diagonal contribution to the 2D-IR response, so that the dispersed pump-probe signal contains the projection of only the cross peaks onto one of the axes of the 2D-IR spectrum. We apply the method to investigate the folding dynamics of an alpha-helical peptide in a temperature-jump experiment and find characteristic folding and unfolding time constants of 260 ± 30 and 580 ± 70 ns at 298 K.

  12. Forensic applications of infrared imaging for the detection and recording of latent evidence.

    PubMed

    Lin, Apollo Chun-Yen; Hsieh, Hsing-Mei; Tsai, Li-Chin; Linacre, Adrian; Lee, James Chun-I

    2007-09-01

    We report on a simple method to record infrared (IR) reflected images in a forensic science context. Light sources using ultraviolet light have been used previously in the detection of latent prints, but the use of infrared light has been subject to less investigation. IR light sources were used to search for latent evidence, and the images were captured either by video or with a digital camera having a CCD array sensitive to IR wavelengths. Bloodstains invisible to the eye, inks, tire prints, gunshot residue, and charred documents on dark backgrounds were selected as typical types of evidence that may be identified during a forensic investigation. All the evidence types could be detected and identified using a range of photographic techniques. In this study, a one-in-eight dilution of blood could be detected on 10 different samples of black cloth. When using 81 black writing inks, the observation rates were 95%, 88% and 42% for permanent markers, fountain pens and ball-point pens, respectively, on the three kinds of dark cloth. The black particles of gunshot residue scattered around the entrance hole were still observed under IR light at a distance of 60 cm, for three different shooting ranges. A requirement of IR reflectivity is that there is a contrast between the latent evidence and the background. In the absence of this contrast no latent image will be detected, which is the case for all light sources. The use of a video camera allows the recording of images either at a scene or in the laboratory. This report highlights and demonstrates the robustness of IR imaging for detecting and recording the presence of latent evidence.

  13. A review on brightness preserving contrast enhancement methods for digital image

    NASA Astrophysics Data System (ADS)

    Rahman, Md Arifur; Liu, Shilong; Li, Ruowei; Wu, Hongkun; Liu, San Chi; Jahan, Mahmuda Rawnak; Kwok, Ngaiming

    2018-04-01

    Image enhancement is an imperative step for many vision-based applications. For image contrast enhancement, popular methods adopt the principle of spreading the captured intensities throughout the allowed dynamic range according to predefined distributions. However, these algorithms take little or no account of maintaining the mean brightness of the original scene, which is of paramount importance for conveying the true scene illumination characteristics to the viewer. Although a significant number of reviews on contrast enhancement methods have been published, an updated review of brightness-preserving image enhancement methods is still scarce. In this paper, a detailed survey is performed on those particular methods that specifically aim to maintain the overall scene illumination characteristics while enhancing the digital image.
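
    One widely cited member of this family is brightness-preserving bi-histogram equalization (BBHE), which splits the histogram at the input mean and equalizes each half within its own output range so that the global mean is approximately preserved. The sketch below is a minimal grayscale implementation of that idea under those assumptions, not a re-implementation of any particular method surveyed in the paper.

```python
import numpy as np

def bbhe(img):
    """Brightness-preserving bi-histogram equalization for 8-bit grayscale.

    The histogram is split at the image mean; the lower sub-histogram is
    equalized onto [0, mean] and the upper one onto [mean + 1, 255], so the
    global mean brightness is approximately preserved.
    """
    img = np.asarray(img, dtype=np.uint8)
    mean = int(img.mean())

    def equalize(sub, lo, hi):
        # Build a lookup table mapping codes 0..255 onto [lo, hi] using the
        # cumulative distribution of the sub-population `sub`.
        hist, _ = np.histogram(sub, bins=256, range=(0, 256))
        cdf = np.cumsum(hist).astype(np.float64)
        cdf /= max(cdf[-1], 1.0)
        return lo + cdf * (hi - lo)

    lut = np.empty(256, dtype=np.float64)
    lower, upper = img[img <= mean], img[img > mean]
    lut[:mean + 1] = equalize(lower, 0, mean)[:mean + 1]
    lut[mean + 1:] = equalize(upper, mean + 1, 255)[mean + 1:]
    return lut[img].astype(np.uint8)

# Example: a dark synthetic gradient keeps roughly the same mean brightness.
test = (np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64)) * 120).astype(np.uint8)
out = bbhe(test)
print(test.mean(), out.mean())
```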

  14. Development and validation of the AFIT scene and sensor emulator for testing (ASSET)

    NASA Astrophysics Data System (ADS)

    Young, Shannon R.; Steward, Bryan J.; Gross, Kevin C.

    2017-05-01

    ASSET is a physics-based model used to generate synthetic data sets of wide field of view (WFOV) electro-optical and infrared (EO/IR) sensors with realistic radiometric properties, noise characteristics, and sensor artifacts. It was developed to meet the need for applications where precise knowledge of the underlying truth is required but is impractical to obtain for real sensors. For example, due to accelerating advances in imaging technology, the volume of data available from WFOV EO/IR sensors has drastically increased over the past several decades, and as a result, there is a need for fast, robust, automatic detection and tracking algorithms. Evaluation of these algorithms is difficult for objects that traverse a wide area (100-10,000 km) because obtaining accurate truth for the full object trajectory often requires costly instrumentation. Additionally, tracking and detection algorithms perform differently depending on factors such as the object kinematics, environment, and sensor configuration. A variety of truth data sets spanning these parameters are needed for thorough testing, which is often cost prohibitive. The use of synthetic data sets for algorithm development allows for full control of scene parameters with full knowledge of truth. However, in order for analysis using synthetic data to be meaningful, the data must be truly representative of real sensor collections. ASSET aims to provide a means of generating such representative data sets for WFOV sensors operating in the visible through thermal infrared. The work reported here describes the ASSET model, as well as provides validation results from comparisons to laboratory imagers and satellite data (e.g. Landsat-8).
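
    The kind of sensor effects such a model layers onto a rendered scene can be illustrated with a generic (not ASSET-specific) radiometric chain: photon flux is converted to electrons, then shot noise, photo-response nonuniformity, a dark offset, read noise, saturation and quantization are applied. All parameter values in this sketch are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_frame(photon_flux, qe=0.7, t_int=1e-3, full_well=30000.0,
                   read_noise_e=20.0, prnu_sigma=0.01, dark_offset_e=50.0,
                   adc_bits=14):
    """Apply a generic EO/IR focal-plane model to a photon-flux image.

    photon_flux: photons per pixel per second reaching each detector.
    Returns digital numbers with shot noise, photo-response nonuniformity
    (PRNU), a fixed dark offset, read noise, saturation and quantization.
    """
    h, w = photon_flux.shape
    prnu = 1.0 + prnu_sigma * rng.standard_normal((h, w))     # per-pixel gain FPN
    mean_e = qe * photon_flux * t_int * prnu + dark_offset_e
    electrons = rng.poisson(mean_e).astype(np.float64)        # shot noise
    electrons += read_noise_e * rng.standard_normal((h, w))   # read noise
    electrons = np.clip(electrons, 0.0, full_well)            # saturation
    dn = electrons / full_well * (2 ** adc_bits - 1)
    return np.round(dn).astype(np.int32)

scene = 1e7 * np.ones((128, 128))    # flat scene, photons / pixel / s
frame = simulate_frame(scene)
print(frame.mean(), frame.std())
```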

  15. Mitochondrial Dynamics Tracking with Two-Photon Phosphorescent Terpyridyl Iridium(III) Complexes

    NASA Astrophysics Data System (ADS)

    Huang, Huaiyi; Zhang, Pingyu; Qiu, Kangqiang; Huang, Juanjuan; Chen, Yu; Ji, Liangnian; Chao, Hui

    2016-02-01

    Mitochondrial dynamics, including fission and fusion, control the morphology and function of mitochondria, and disruption of mitochondrial dynamics leads to Parkinson’s disease, Alzheimer’s disease, metabolic diseases, and cancers. Currently, many types of commercial mitochondria probes are available, but high excitation energy and low photo-stability render them unsuitable for tracking mitochondrial dynamics in living cells. Therefore, mitochondrial targeting agents that exhibit superior anti-photo-bleaching ability, deep tissue penetration and intrinsically high three-dimensional resolutions are urgently needed. Two-photon-excited compounds that use low-energy near-infrared excitation lasers have emerged as non-invasive tools for cell imaging. In this work, terpyridyl cyclometalated Ir(III) complexes (Ir1-Ir3) are demonstrated as one- and two-photon phosphorescent probes for real-time imaging and tracking of mitochondrial morphology changes in living cells.

  16. Multi-scale dynamical behavior of spatially distributed systems: a deterministic point of view

    NASA Astrophysics Data System (ADS)

    Mangiarotti, S.; Le Jean, F.; Drapeau, L.; Huc, M.

    2015-12-01

    Physical and biophysical systems are spatially distributed systems. Their behavior can be observed or modelled spatially at various resolutions. In this work, a deterministic point of view is adopted to analyze multi-scale behavior, taking a set of ordinary differential equations (ODE) as the elementary part of the system. To perform the analyses, scenes of study are generated based on ensembles of identical elementary ODE systems. Without any loss of generality, their dynamics are chosen to be chaotic in order to ensure sensitivity to initial conditions, a fundamental property of the atmosphere under unstable conditions [1]. The Rössler system [2] is used for this purpose for both its topological and algebraic simplicity [3,4]. Two cases are considered: the chaotic oscillators composing the scene of study are taken either as independent, or in phase synchronization. Scale behaviors are analyzed by considering the scene of study as aggregations (basically obtained by spatially averaging the signal) or as associations (obtained by concatenating the time series). The global modeling technique is used to perform the numerical analyses [5]. One important result of this work is that, under phase synchronization, a scene of aggregated dynamics can be approximated by the elementary system composing the scene, but with a modified parameterization [6]. This is shown based on numerical analyses, then demonstrated analytically and generalized to a larger class of ODE systems. Preliminary applications to cereal crops observed from satellite are also presented. [1] Lorenz, Deterministic nonperiodic flow, J. Atmos. Sci., 20, 130-141 (1963). [2] Rössler, An equation for continuous chaos, Phys. Lett. A, 57, 397-398 (1976). [3] Gouesbet & Letellier, Global vector-field reconstruction by using a multivariate polynomial L2 approximation on nets, Phys. Rev. E, 49, 4955-4972 (1994). [4] Letellier, Roulin & Rössler, Inequivalent topologies of chaos in simple equations, Chaos, Solitons & Fractals, 28, 337-360 (2006). [5] Mangiarotti, Coudret, Drapeau & Jarlan, Polynomial search and global modeling, Phys. Rev. E, 86(4), 046205 (2012). [6] Mangiarotti, Modélisation globale et Caractérisation Topologique de dynamiques environnementales, Habilitation à Diriger des Recherches, Univ. Toulouse 3 (2014).
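
    The construction described above can be reproduced numerically: integrate an ensemble of identical, independent Rössler systems from slightly perturbed initial conditions and form the aggregated signal by spatially averaging one observable. The standard chaotic parameter set (a=0.2, b=0.2, c=5.7) and the ensemble size are assumptions for illustration; the phase-synchronized case and the global modeling step are not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rossler_ensemble(t, state, a=0.2, b=0.2, c=5.7):
    """Right-hand side of N uncoupled Rossler systems stacked as (N, 3)."""
    x, y, z = state.reshape(-1, 3).T
    dx = -y - z
    dy = x + a * y
    dz = b + z * (x - c)
    return np.column_stack((dx, dy, dz)).ravel()

n = 50                                             # elementary systems in the "scene"
rng = np.random.default_rng(1)
init = np.tile([1.0, 1.0, 0.0], (n, 1)) + 0.1 * rng.standard_normal((n, 3))

t_eval = np.linspace(0.0, 200.0, 4000)
sol = solve_ivp(rossler_ensemble, (0.0, 200.0), init.ravel(),
                t_eval=t_eval, rtol=1e-8, atol=1e-10)

x_all = sol.y.reshape(n, 3, -1)[:, 0, :]           # x variable of every oscillator
aggregated = x_all.mean(axis=0)                    # spatial average = aggregated dynamics
print(aggregated.shape, aggregated.std())
```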

  17. Achieving ultra-high temperatures with a resistive emitter array

    NASA Astrophysics Data System (ADS)

    Danielson, Tom; Franks, Greg; Holmes, Nicholas; LaVeigne, Joe; Matis, Greg; McHugh, Steve; Norton, Dennis; Vengel, Tony; Lannon, John; Goodwin, Scott

    2016-05-01

    The rapid development of very-large format infrared detector arrays has challenged the IR scene projector community to also develop larger-format infrared emitter arrays to support the testing of systems incorporating these detectors. In addition to larger formats, many scene projector users require much higher simulated temperatures than can be generated with current technology in order to fully evaluate the performance of their systems and associated processing algorithms. Under the Ultra High Temperature (UHT) development program, Santa Barbara Infrared Inc. (SBIR) is developing a new infrared scene projector architecture capable of producing both very large format (>1024 x 1024) resistive emitter arrays and improved emitter pixel technology capable of simulating very high apparent temperatures. During earlier phases of the program, SBIR demonstrated materials with MWIR apparent temperatures in excess of 1400 K. New emitter materials have subsequently been selected to produce pixels that achieve even higher apparent temperatures. Test results from pixels fabricated using the new material set will be presented and discussed. A 'scalable' Read In Integrated Circuit (RIIC) is also being developed under the same UHT program to drive the high temperature pixels. This RIIC will utilize through-silicon via (TSV) and Quilt Packaging (QP) technologies to allow seamless tiling of multiple chips to fabricate very large arrays, and thus overcome the yield limitations inherent in large-scale integrated circuits. Results of design verification testing of the completed RIIC will be presented and discussed.

  18. Application of Fourier transform infrared (FT-IR) spectroscopy in determination of microalgal compositions.

    PubMed

    Meng, Yingying; Yao, Changhong; Xue, Song; Yang, Haibo

    2014-01-01

    Fourier transform infrared spectroscopy (FT-IR) was applied to algal strain screening and to monitoring cell composition dynamics in the marine microalga Isochrysis zhangjiangensis during cultivation. The lipid, carbohydrate, and protein contents of samples determined by traditional methods validated the accuracy of the FT-IR method. For algal screening, the band absorption ratios of lipid/amide I and carbo/amide I from FT-IR measurements allowed the selection of Isochrysis sp. and Tetraselmis subcordiformis as the most promising lipid and carbohydrate producers, respectively. The cell composition dynamics of I. zhangjiangensis measured by FT-IR revealed a diversion of carbon allocation from protein to carbohydrate and neutral lipid when nitrogen-replete cells were subjected to nitrogen limitation. The carbo/amide I band absorption ratio was also demonstrated to reflect the physiological status of T. subcordiformis under nutrient stress. FT-IR thus serves as a tool for the simultaneous measurement of lipid, carbohydrate, and protein content in cells. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. Non-rigid Reconstruction of Casting Process with Temperature Feature

    NASA Astrophysics Data System (ADS)

    Lin, Jinhua; Wang, Yanjie; Li, Xin; Wang, Ying; Wang, Lu

    2017-09-01

    Off-line reconstruction of rigid scenes has made great progress in the past decade. However, on-line reconstruction of non-rigid scenes is still a very challenging task. The casting process is a non-rigid reconstruction problem: it is a highly dynamic molding process lacking geometric features. In order to reconstruct the casting process robustly, an on-line fusion strategy is proposed for dynamic reconstruction of the casting process. Firstly, the geometric and flow features of the casting are parameterized in the form of a TSDF (truncated signed distance field), a volumetric representation; the parameterized casting guarantees real-time tracking and optimal deformation of the casting process. Secondly, the data structure of the volume grid is extended to hold a temperature value, and a temperature interpolation function is built to generate the temperature of each voxel. This data structure allows dynamic tracking of the casting temperature during the deformation stages. Then, sparse RGB features are extracted from the casting scene to search for correspondences between the geometric representation and the depth constraint. The extracted color data guarantees robust tracking of the flowing motion of the casting. Finally, the optimal deformation of the target space is transformed into a nonlinear regularized variational optimization problem. This optimization step achieves smooth and optimal deformation of the casting process. The experimental results show that the proposed method can reconstruct the casting process robustly and reduce drift in the non-rigid reconstruction of the casting.
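
    A minimal sketch of the extended volume element described above: each voxel stores a truncated signed distance, a fusion weight and a temperature, and temperatures at arbitrary points are obtained by trilinear interpolation. The class layout and field names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

class ThermalTSDFVolume:
    """Voxel grid storing a TSDF value, a fusion weight and a temperature."""

    def __init__(self, shape=(64, 64, 64), voxel_size=0.01, truncation=0.03):
        self.voxel_size = voxel_size
        self.truncation = truncation
        self.tsdf = np.ones(shape, dtype=np.float32)       # truncated signed distance
        self.weight = np.zeros(shape, dtype=np.float32)    # fusion weight
        self.temperature = np.zeros(shape, dtype=np.float32)

    def integrate_voxel(self, idx, sdf, temp, w=1.0):
        """Weighted running average of TSDF and temperature at one voxel."""
        d = np.clip(sdf / self.truncation, -1.0, 1.0)
        w_old = self.weight[idx]
        self.tsdf[idx] = (w_old * self.tsdf[idx] + w * d) / (w_old + w)
        self.temperature[idx] = (w_old * self.temperature[idx] + w * temp) / (w_old + w)
        self.weight[idx] = w_old + w

    def temperature_at(self, point):
        """Trilinear interpolation of temperature at a metric point (x, y, z)."""
        g = np.asarray(point) / self.voxel_size
        i0 = np.floor(g).astype(int)
        f = g - i0
        t = 0.0
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    wgt = ((f[0] if dx else 1 - f[0]) *
                           (f[1] if dy else 1 - f[1]) *
                           (f[2] if dz else 1 - f[2]))
                    t += wgt * self.temperature[i0[0] + dx, i0[1] + dy, i0[2] + dz]
        return t

vol = ThermalTSDFVolume()
vol.integrate_voxel((10, 10, 10), sdf=0.005, temp=950.0)
print(vol.temperature_at((0.10, 0.10, 0.10)))   # lands exactly on voxel (10, 10, 10)
```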

  20. Spatial-area selective retrieval of multiple object-place associations in a hierarchical cognitive map formed by theta phase coding.

    PubMed

    Sato, Naoyuki; Yamaguchi, Yoko

    2009-06-01

    The human cognitive map is known to be hierarchically organized consisting of a set of perceptually clustered landmarks. Patient studies have demonstrated that these cognitive maps are maintained by the hippocampus, while the neural dynamics are still poorly understood. The authors have shown that the neural dynamic "theta phase precession" observed in the rodent hippocampus may be capable of forming hierarchical cognitive maps in humans. In the model, a visual input sequence consisting of object and scene features in the central and peripheral visual fields, respectively, results in the formation of a hierarchical cognitive map for object-place associations. Surprisingly, it is possible for such a complex memory structure to be formed in a few seconds. In this paper, we evaluate the memory retrieval of object-place associations in the hierarchical network formed by theta phase precession. The results show that multiple object-place associations can be retrieved with the initial cue of a scene input. Importantly, according to the wide-to-narrow unidirectional connections among scene units, the spatial area for object-place retrieval can be controlled by the spatial area of the initial cue input. These results indicate that the hierarchical cognitive maps have computational advantages on a spatial-area selective retrieval of multiple object-place associations. Theta phase precession dynamics is suggested as a fundamental neural mechanism of the human cognitive map.

  1. An automated approach for tone mapping operator parameter adjustment in security applications

    NASA Astrophysics Data System (ADS)

    Krasula, Lukáš; Narwaria, Manish; Le Callet, Patrick

    2014-05-01

    High Dynamic Range (HDR) imaging has been gaining popularity in recent years. Different from traditional low dynamic range (LDR) content, HDR content tends to be visually more appealing and realistic as it can represent the dynamic range of the visual stimuli present in the real world. As a result, more scene details can be faithfully reproduced and the visual quality tends to improve. HDR can also be directly exploited for new applications such as video surveillance and other security tasks. Since more scene details are available in HDR, it can help in identifying/tracking visual information which otherwise might be difficult with typical LDR content due to factors such as lack/excess of illumination, extreme contrast in the scene, etc. On the other hand, with HDR, there might be issues related to increased privacy intrusion. To display HDR content on a regular screen, tone-mapping operators (TMOs) are used. In this paper, we present a universal method for tuning TMO parameters in order to maintain as many details as possible, which is desirable in security applications. The method's performance is verified on several TMOs by comparing the outcomes from tone-mapping with default and optimized parameters. The results suggest that the proposed approach preserves more information, which could be advantageous for security surveillance but, on the other hand, prompts consideration of a possible increase in privacy intrusion.
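
    As an illustration of this kind of parameter tuning, the sketch below applies a global Reinhard-style operator and coarsely searches its key parameter to maximize the entropy of the tone-mapped output, a crude stand-in for "details preserved". It is a simplified example under those assumptions, not the universal tuning method proposed in the paper.

```python
import numpy as np

def reinhard_global(lum_hdr, key=0.18, eps=1e-6):
    """Global Reinhard-style tone mapping of HDR luminance to [0, 1)."""
    log_avg = np.exp(np.mean(np.log(lum_hdr + eps)))   # log-average luminance
    scaled = key / log_avg * lum_hdr
    return scaled / (1.0 + scaled)

def entropy(img, bins=256):
    """Shannon entropy of the tone-mapped image, used as a crude detail proxy."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(2)
hdr = np.exp(rng.normal(0.0, 2.5, size=(256, 256)))    # synthetic HDR luminance

keys = np.geomspace(0.01, 1.0, 20)
best_key = max(keys, key=lambda k: entropy(reinhard_global(hdr, k)))
print("selected key parameter:", best_key)
```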

  2. Motion-based nonuniformity correction in DoFP polarimeters

    NASA Astrophysics Data System (ADS)

    Kumar, Rakesh; Tyo, J. Scott; Ratliff, Bradley M.

    2007-09-01

    Division of Focal Plane polarimeters (DoFP) operate by integrating an array of micropolarizer elements with a focal plane array. These devices have been investigated for over a decade, and example systems have been built in all regions of the optical spectrum. DoFP devices have the distinct advantage that they are mechanically rugged, inherently temporally synchronized, and optically aligned. They have the concomitant disadvantage that each pixel in the FPA has a different instantaneous field of view (IFOV), meaning that the polarization component measurements that go into estimating the Stokes vector across the image come from four different points in the field. In addition to IFOV errors, microgrid camera systems operating in the LWIR have the additional problem that FPA nonuniformity (NU) noise can be quite severe. The spatial differencing nature of a DoFP system exacerbates the residual NU noise that is remaining after calibration, and is often the largest source of false polarization signatures away from regions where IFOV error dominates. We have recently presented a scene based algorithm that uses frame-to-frame motion to compensate for NU noise in unpolarized IR imagers. In this paper, we have extended that algorithm so that it can be used to compensate for NU noise on a DoFP polarimeter. Furthermore, the additional information provided by the scene motion can be used to significantly reduce the IFOV error. We have found a reduction of IFOV error by a factor of 10 if the scene motion is known exactly. Performance is reduced when the motion must be estimated from the scene, but still shows a marked improvement over static DoFP images.
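
    The IFOV issue can be seen directly in the baseline way Stokes parameters are estimated on a DoFP array: each 2x2 superpixel supplies the four polarizer orientations from four different scene points. The sketch below implements that naive superpixel estimate for an assumed 0/45/90/135-degree layout; it does not include the motion-based nonuniformity or IFOV compensation proposed in the paper.

```python
import numpy as np

def stokes_from_superpixels(frame):
    """Naive Stokes estimation from a DoFP frame.

    Assumed micropolarizer layout in every 2x2 superpixel:
        [[0 deg, 45 deg],
         [135 deg, 90 deg]]
    Each Stokes image has half the spatial resolution, and the four
    measurements come from four different instantaneous fields of view.
    """
    i0 = frame[0::2, 0::2].astype(np.float64)
    i45 = frame[0::2, 1::2].astype(np.float64)
    i135 = frame[1::2, 0::2].astype(np.float64)
    i90 = frame[1::2, 1::2].astype(np.float64)

    s0 = 0.5 * (i0 + i45 + i90 + i135)          # total intensity
    s1 = i0 - i90                               # 0/90 linear polarization
    s2 = i45 - i135                             # 45/135 linear polarization
    dolp = np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-9)
    return s0, s1, s2, dolp

frame = np.random.default_rng(3).integers(100, 4000, size=(480, 640))
s0, s1, s2, dolp = stokes_from_superpixels(frame)
print(s0.shape, dolp.mean())
```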

  3. Development of an ultra-high temperature infrared scene projector at Santa Barbara Infrared Inc.

    NASA Astrophysics Data System (ADS)

    Franks, Greg; Laveigne, Joe; Danielson, Tom; McHugh, Steve; Lannon, John; Goodwin, Scott

    2015-05-01

    The rapid development of very-large format infrared detector arrays has challenged the IR scene projector community to develop correspondingly larger-format infrared emitter arrays to support the testing needs of systems incorporating these detectors. As with most integrated circuits, fabrication yields for the read-in integrated circuit (RIIC) that drives the emitter pixel array are expected to drop dramatically with increasing size, making monolithic RIICs larger than the current 1024x1024 format impractical and unaffordable. Additionally, many scene projector users require much higher simulated temperatures than current technology can generate to fully evaluate the performance of their systems and associated processing algorithms. Under the Ultra High Temperature (UHT) development program, Santa Barbara Infrared Inc. (SBIR) is developing a new infrared scene projector architecture capable of producing both very large format (>1024x1024) resistive emitter arrays and improved emitter pixel technology capable of simulating very high apparent temperatures. During an earlier phase of the program, SBIR demonstrated materials with MWIR apparent temperatures in excess of 1000K. New emitter materials have subsequently been selected to produce pixels that achieve even higher apparent temperatures. Test results from pixels fabricated using the new material set will be presented and discussed. Also in development under the same UHT program is a 'scalable' RIIC that will be used to drive the high temperature pixels. This RIIC will utilize through-silicon vias (TSVs) and quilt packaging (QP) technologies to allow seamless tiling of multiple chips to fabricate very large arrays, and thus overcome the inherent yield limitations of very-large-scale integrated circuits. Current status of the RIIC development effort will also be presented.

  4. Progress in high-level exploratory vision

    NASA Astrophysics Data System (ADS)

    Brand, Matthew

    1993-08-01

    We have been exploring the hypothesis that vision is an explanatory process, in which causal and functional reasoning about potential motion plays an intimate role in mediating the activity of low-level visual processes. In particular, we have explored two of the consequences of this view for the construction of purposeful vision systems: Causal and design knowledge can be used to (1) drive focus of attention, and (2) choose between ambiguous image interpretations. An important result of visual understanding is an explanation of the scene's causal structure: How action is originated, constrained, and prevented, and what will happen in the immediate future. In everyday visual experience, most action takes the form of motion, and most causal analysis takes the form of dynamical analysis. This is even true of static scenes, where much of a scene's interest lies in how possible motions are arrested. This paper describes our progress in developing domain theories and visual processes for the understanding of various kinds of structured scenes, including structures built out of children's constructive toys and simple mechanical devices.

  5. Structure preserving clustering-object tracking via subgroup motion pattern segmentation

    NASA Astrophysics Data System (ADS)

    Fan, Zheyi; Zhu, Yixuan; Jiang, Jiao; Weng, Shuqin; Liu, Zhiwen

    2018-01-01

    Tracking clustering objects with similar appearances simultaneously in collective scenes is a challenging task in the field of collective motion analysis. Recent work on clustering-object tracking often suffers from poor tracking accuracy and terrible real-time performance due to the neglect or the misjudgment of the motion differences among objects. To address this problem, we propose a subgroup motion pattern segmentation framework based on a multilayer clustering structure and establish spatial constraints only among objects in the same subgroup, which entails having consistent motion direction and close spatial position. In addition, the subgroup segmentation results are updated dynamically because crowd motion patterns are changeable and affected by objects' destinations and scene structures. The spatial structure information combined with the appearance similarity information is used in the structure preserving object tracking framework to track objects. Extensive experiments conducted on several datasets containing multiple real-world crowd scenes validate the accuracy and the robustness of the presented algorithm for tracking objects in collective scenes.

  6. The role of forensic botany in crime scene investigation: case report and review of literature.

    PubMed

    Aquila, Isabella; Ausania, Francesco; Di Nunzio, Ciro; Serra, Arianna; Boca, Silvia; Capelli, Arnaldo; Magni, Paola; Ricci, Pietrantonio

    2014-05-01

    Management of a crime scene is the process of ensuring accurate and effective collection and preservation of physical evidence. Forensic botany can provide significant supporting evidence during criminal investigations. The aim of this study is to demonstrate the importance of forensic botany at the crime scene. We report a case of a woman affected by dementia who had disappeared from nursing care and was found dead near the banks of a river that flowed under a railroad. Two possible ways of access to the crime scene were identified and denominated "Path A" and "Path B." The types of soil and plants along both were identified, and a botanical survey was performed. Samples of Xanthium orientalis subsp. italicum were identified. At autopsy, the woman's fall was found to have resulted in external injuries and a vertebral fracture. Botanical evidence is important when crime scene and autopsy findings are not sufficient to define the dynamics and the modality of death. © 2014 American Academy of Forensic Sciences.

  7. Modelling Technology for Building Fire Scene with Virtual Geographic Environment

    NASA Astrophysics Data System (ADS)

    Song, Y.; Zhao, L.; Wei, M.; Zhang, H.; Liu, W.

    2017-09-01

    Building fire is a dangerous event that can lead to disaster and massive destruction, and its management and handling have always attracted much interest from researchers. An integrated Virtual Geographic Environment (VGE) is a good choice for building fire safety management and emergency decision making, in which a richer and more realistic fire process can be computed dynamically and the results of fire simulations and analyses can be much more accurate as well. To model a building fire scene with VGE, the application requirements and modelling objectives of the building fire scene were analysed in this paper. Then, the four core elements of modelling a building fire scene (the building space environment, the fire event, the indoor Fire Extinguishing System (FES), and the indoor crowd) were implemented, and the relationships between the elements were also discussed. Finally, with the theory and framework of VGE, the building fire scene system with VGE was designed across the data environment, the model environment, the expression environment, and the collaborative environment. The functions and key techniques in each environment are also analysed, which may provide a reference for further development and other research on VGE.

  8. A novel automated method for doing registration and 3D reconstruction from multi-modal RGB/IR image sequences

    NASA Astrophysics Data System (ADS)

    Kirby, Richard; Whitaker, Ross

    2016-09-01

    In recent years, the use of multi-modal camera rigs consisting of an RGB sensor and an infrared (IR) sensor have become increasingly popular for use in surveillance and robotics applications. The advantages of using multi-modal camera rigs include improved foreground/background segmentation, wider range of lighting conditions under which the system works, and richer information (e.g. visible light and heat signature) for target identification. However, the traditional computer vision method of mapping pairs of images using pixel intensities or image features is often not possible with an RGB/IR image pair. We introduce a novel method to overcome the lack of common features in RGB/IR image pairs by using a variational methods optimization algorithm to map the optical flow fields computed from different wavelength images. This results in the alignment of the flow fields, which in turn produce correspondences similar to those found in a stereo RGB/RGB camera rig using pixel intensities or image features. In addition to aligning the different wavelength images, these correspondences are used to generate dense disparity and depth maps. We obtain accuracies similar to other multi-modal image alignment methodologies as long as the scene contains sufficient depth variations, although a direct comparison is not possible because of the lack of standard image sets from moving multi-modal camera rigs. We test our method on synthetic optical flow fields and on real image sequences that we created with a multi-modal binocular stereo RGB/IR camera rig. We determine our method's accuracy by comparing against a ground truth.
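
    The core idea of aligning the two modalities through their optical-flow fields rather than through pixel intensities can be caricatured with a brute-force sketch: given dense flow fields already computed for the RGB and IR streams, search for the integer shift that best overlays their flow-magnitude maps. The paper's variational optimization is far more general; this numpy-only version with synthetic flow fields is only meant to show the principle.

```python
import numpy as np

def align_flow_fields(flow_rgb, flow_ir, max_shift=10):
    """Find the integer (dy, dx) shift such that IR flow at (y + dy, x + dx)
    best matches RGB flow at (y, x), comparing flow magnitudes (SSD)."""
    mag_rgb = np.linalg.norm(flow_rgb, axis=-1)
    mag_ir = np.linalg.norm(flow_ir, axis=-1)
    h, w = mag_rgb.shape
    best_err, best_shift = np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Overlapping crops of the two magnitude maps for this shift.
            a = mag_rgb[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
            b = mag_ir[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
            err = np.mean((a - b) ** 2)
            if err < best_err:
                best_err, best_shift = err, (dy, dx)
    return best_shift

# Synthetic test: the IR flow field is the RGB flow field shifted by (3, -5).
rng = np.random.default_rng(4)
flow_rgb = rng.normal(size=(120, 160, 2)).cumsum(axis=0).cumsum(axis=1) * 0.01
flow_ir = np.roll(flow_rgb, shift=(3, -5), axis=(0, 1))
print(align_flow_fields(flow_rgb, flow_ir))    # expected: (3, -5)
```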

  9. Application of infrared uncooled cameras in surveillance systems

    NASA Astrophysics Data System (ADS)

    Dulski, R.; Bareła, J.; Trzaskawka, P.; Piątkowski, T.

    2013-10-01

    The recent necessity to protect military bases, convoys, and patrols has given a serious impetus to the development of multisensor security systems for perimeter protection. One of the most important devices used in such systems is the IR camera. The paper discusses the technical possibilities and limitations of using an uncooled IR camera in a multi-sensor surveillance system for perimeter protection. Effective detection ranges depend on the class of the sensor used and on the observed scene itself. Application of an IR camera increases the probability of intruder detection regardless of the time of day or weather conditions, and simultaneously decreases the false alarm rate produced by the surveillance system. The role of IR cameras in the system is discussed, as well as the technical possibilities for detecting a human being. A comparison of commercially available IR cameras capable of achieving the desired ranges was made. The spatial resolution required for detection, recognition, and identification was calculated. The simulation of detection ranges was done using a new model for predicting target acquisition performance which uses the Targeting Task Performance (TTP) metric. Like its predecessor, the Johnson criteria, the new model bounds range performance with image quality. The scope of the presented analysis is limited to the estimation of detection, recognition, and identification ranges for typical thermal cameras with uncooled microbolometer focal plane arrays. This type of camera is most widely used in security systems because of its competitive price-to-performance ratio. Detection, recognition, and identification range calculations were made, and the results for devices with selected technical specifications were compared and discussed.

  10. Irma 5.1 multisensor signature prediction model

    NASA Astrophysics Data System (ADS)

    Savage, James; Coker, Charles; Edwards, Dave; Thai, Bea; Aboutalib, Omar; Chow, Anthony; Yamaoka, Neil; Kim, Charles

    2006-05-01

    The Irma synthetic signature prediction code is being developed to facilitate the research and development of multi-sensor systems. Irma was one of the first high resolution, physics-based Infrared (IR) target and background signature models to be developed for tactical weapon applications. Originally developed in 1980 by the Munitions Directorate of the Air Force Research Laboratory (AFRL/MN), the Irma model was used exclusively to generate IR scenes. In 1988, a number of significant upgrades to Irma were initiated including the addition of a laser (or active) channel. This two-channel version was released to the user community in 1990. In 1992, an improved scene generator was incorporated into the Irma model, which supported correlated frame-to-frame imagery. A passive IR/millimeter wave (MMW) code was completed in 1994. This served as the cornerstone for the development of the co-registered active/passive IR/MMW model, Irma 4.0. In 2000, Irma version 5.0 was released which encompassed several upgrades to both the physical models and software. Circular polarization was added to the passive channel, and a Doppler capability was added to the active MMW channel. In 2002, the multibounce technique was added to the Irma passive channel. In the ladar channel, a user-friendly Ladar Sensor Assistant (LSA) was incorporated which provides capability and flexibility for sensor modeling. Irma 5.0 runs on several platforms including Windows, Linux, Solaris, and SGI Irix. Irma is currently used to support a number of civilian and military applications. The Irma user base includes over 130 agencies within the Air Force, Army, Navy, DARPA, NASA, Department of Transportation, academia, and industry. In 2005, Irma version 5.1 was released to the community. In addition to upgrading the Ladar channel code to an object oriented language (C++) and providing a new graphical user interface to construct scenes, this new release significantly improves the modeling of the ladar channel and includes polarization effects, time jittering, speckle effect, and atmospheric turbulence. More importantly, the Munitions Directorate has funded three field tests to verify and validate the re-engineered ladar channel. Each of the field tests was comprehensive and included one month of sensor characterization and a week of data collection. After each field test, the analysis included comparisons of Irma predicted signatures with measured signatures, and if necessary, refining the model to produce realistic imagery. This paper will focus on two areas of the Irma 5.1 development effort: report on the analysis results of the validation and verification of the Irma 5.1 ladar channel, and the software development plan and validation efforts of the Irma passive channel. As scheduled, the Irma passive code is being re-engineered using object oriented language (C++), and field data collection is being conducted to validate the re-engineered passive code. This software upgrade will remove many constraints and limitations of the legacy code including limits on image size and facet counts. The field test to validate the passive channel is expected to be complete in the second quarter of 2006.

  11. Dynamics of a Room Temperature Ionic Liquid in Supported Ionic Liquid Membranes vs the Bulk Liquid: 2D IR and Polarized IR Pump-Probe Experiments.

    PubMed

    Shin, Jae Yoon; Yamada, Steven A; Fayer, Michael D

    2017-01-11

    Supported ionic liquid membranes (SILMs) are membranes that have ionic liquids impregnated in their pores. SILMs have been proposed as advanced carbon capture materials. Two-dimensional infrared (2D IR) and polarization selective IR pump-probe (PSPP) techniques were used to investigate the dynamics of reorientation and spectral diffusion of the linear triatomic anion, SeCN-, in poly(ether sulfone) (PES) membranes and the room-temperature ionic liquid (RTIL) 1-ethyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide (EmimNTf2). The dynamics in the bulk EmimNTf2 were compared to the dynamics in the SILM samples. Two PES membranes, PES200 and PES30, have pores with average sizes of ∼300 nm and ∼100 nm, respectively. Despite the relatively large pore sizes, the measurements reveal that the reorientation of SeCN- and the RTIL structural fluctuations are substantially slower in the SILMs than in the bulk liquid. The complete orientational randomization slows from 136 ps in the bulk to 513 ps in the PES30. 2D IR measurements yield three time scales for structural spectral diffusion (SSD), that is, the time evolution of the liquid structure. The slowest decay constant increases from 140 ps in the bulk to 504 ps in the PES200 and increases further to 1660 ps in the PES30. The results suggest that changes at the interface propagate out and influence the RTIL structural dynamics even more than a hundred nanometers from the polymer surface. The differences between the IL dynamics in the bulk and in the membranes suggest that studies of bulk RTIL properties may be poor guides to their use in SILMs in carbon capture applications.

  12. Temporal dynamics of the knowledge-mediated visual disambiguation process in humans: a magnetoencephalography study.

    PubMed

    Urakawa, Tomokazu; Ogata, Katsuya; Kimura, Takahiro; Kume, Yuko; Tobimatsu, Shozo

    2015-01-01

    Disambiguation of a noisy visual scene with prior knowledge is an indispensable task of the visual system. To adequately adapt to a dynamically changing visual environment full of noisy visual scenes, the implementation of knowledge-mediated disambiguation in the brain is imperative and essential for proceeding as fast as possible under the limited capacity of visual image processing. However, the temporal profile of the disambiguation process has not yet been fully elucidated in the brain. The present study attempted to determine how quickly knowledge-mediated disambiguation began to proceed along visual areas after the onset of a two-tone ambiguous image using magnetoencephalography with high temporal resolution. Using the predictive coding framework, we focused on activity reduction for the two-tone ambiguous image as an index of the implementation of disambiguation. Source analysis revealed that a significant activity reduction was observed in the lateral occipital area at approximately 120 ms after the onset of the ambiguous image, but not in preceding activity (about 115 ms) in the cuneus when participants perceptually disambiguated the ambiguous image with prior knowledge. These results suggested that knowledge-mediated disambiguation may be implemented as early as approximately 120 ms following an ambiguous visual scene, at least in the lateral occipital area, and provided an insight into the temporal profile of the disambiguation process of a noisy visual scene with prior knowledge. © 2014 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  13. Total variation approach for adaptive nonuniformity correction in focal-plane arrays.

    PubMed

    Vera, Esteban; Meza, Pablo; Torres, Sergio

    2011-01-15

    In this Letter we propose an adaptive scene-based nonuniformity correction method for fixed-pattern noise removal in imaging arrays. It is based on the minimization of the total variation of the estimated irradiance, and the resulting function is optimized by an isotropic total variation approach making use of an alternating minimization strategy. The proposed method provides enhanced results when applied to a diverse set of real IR imagery, accurately estimating the nonuniformity parameters of each detector in the focal-plane array at a fast convergence rate, while also forming fewer ghosting artifacts.
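
    A stripped-down variant of the idea can be written as gradient descent on a per-pixel additive offset that minimizes a smoothed isotropic total variation of the corrected frames. The published method also estimates per-detector gains and uses an alternating minimization, so the offset-only sketch below, with its fixed step size and synthetic data, is only illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tv_gradient(u, eps=1e-3):
    """Gradient of the smoothed isotropic TV energy sum(sqrt(|grad u|^2 + eps^2))."""
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]              # forward differences
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    mag = np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)
    px, py = gx / mag, gy / mag
    div = np.zeros_like(u)                         # divergence of (px, py)
    div[:, 0] += px[:, 0]
    div[:, 1:] += px[:, 1:] - px[:, :-1]
    div[0, :] += py[0, :]
    div[1:, :] += py[1:, :] - py[:-1, :]
    return -div                                    # dE/du

def estimate_offsets(frames, n_iter=300, lr=0.1):
    """Estimate per-pixel additive fixed-pattern offsets from a frame stack."""
    offset = np.zeros_like(frames[0])
    for _ in range(n_iter):
        grad = np.zeros_like(offset)
        for f in frames:
            grad -= tv_gradient(f - offset)        # chain rule: d(f - offset)/d offset = -1
        offset -= lr * grad / len(frames)
        offset -= offset.mean()                    # the global level is unobservable
    return offset

# Synthetic test: smooth, changing scenes plus a fixed high-frequency offset pattern.
rng = np.random.default_rng(5)
fpn = 5.0 * rng.standard_normal((64, 64))
frames = [50.0 * gaussian_filter(rng.standard_normal((64, 64)), sigma=8) + fpn
          for _ in range(8)]
est = estimate_offsets(frames)
print(np.corrcoef(est.ravel(), (fpn - fpn.mean()).ravel())[0, 1])   # clearly positive
```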

  14. Robotic vision techniques for space operations

    NASA Technical Reports Server (NTRS)

    Krishen, Kumar

    1994-01-01

    Automation and robotics for space applications are being pursued for increased productivity, enhanced reliability, increased flexibility, higher safety, and for the automation of time-consuming tasks and those activities which are beyond the capacity of the crew. One of the key functional elements of an automated robotic system is sensing and perception. As the robotics era dawns in space, vision systems will be required to provide the key sensory data needed for multifaceted intelligent operations. In general, the three-dimensional scene/object description, along with location, orientation, and motion parameters will be needed. In space, the absence of diffused lighting due to a lack of atmosphere gives rise to: (a) high dynamic range (10^8) of scattered sunlight intensities, resulting in very high contrast between shadowed and specular portions of the scene; (b) intense specular reflections causing target/scene bloom; and (c) loss of portions of the image due to shadowing and presence of stars, Earth, Moon, and other space objects in the scene. In this work, developments for combating the adverse effects described earlier and for enhancing scene definition are discussed. Both active and passive sensors are used. The algorithm for selecting appropriate wavelength, polarization, look angle of vision sensors is based on environmental factors as well as the properties of the target/scene which are to be perceived. The environment is characterized on the basis of sunlight and other illumination incident on the target/scene and the temperature profiles estimated on the basis of the incident illumination. The unknown geometrical and physical parameters are then derived from the fusion of the active and passive microwave, infrared, laser, and optical data.

  15. Color appearance and color rendering of HDR scenes: an experiment

    NASA Astrophysics Data System (ADS)

    Parraman, Carinna; Rizzi, Alessandro; McCann, John J.

    2009-01-01

    In order to gain a deeper understanding of the appearance of coloured objects in a three-dimensional scene, the research introduces a multidisciplinary experimental approach. The experiment employed two identical 3-D Mondrians, which were viewed and compared side by side. Each scene was subjected to different lighting conditions. First, we used an illumination cube to diffuse the light and illuminate all the objects from each direction. This produced a low-dynamic-range (LDR) image of the 3-D Mondrian scene. Second, in order to make a high-dynamic-range (HDR) image of the same objects, we used a directional 150W spotlight and an array of WLEDs assembled in a flashlight. The scenes were significant as each contained exactly the same three-dimensional painted colour blocks that were arranged in the same position in the still life. The blocks comprised 6 hue colours and 5 tones from white to black. Participants from the CREATE project were asked to consider the change in the appearance of a selection of colours according to lightness, hue, and chroma, and to rate how the change in illumination affected appearance. We measured the light coming to the eye from still-life surfaces with a colorimeter (Yxy). We captured the scene radiance using multiple exposures with a number of different cameras. We have begun a programme of digital image processing of these scene capture methods. This multi-disciplinary programme continues until 2010, so this paper is an interim report on the initial phases and a description of the ongoing project.

  16. Fixed Pattern Noise pixel-wise linear correction for crime scene imaging CMOS sensor

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Messinger, David W.; Dube, Roger R.; Ientilucci, Emmett J.

    2017-05-01

    Filtered multispectral imaging might be a useful technique for crime scene documentation and evidence detection due to its abundant spectral information as well as its non-contact and non-destructive nature. A low-cost, portable multispectral crime scene imaging device would therefore be highly useful and efficient. The second-generation crime scene imaging system uses a CMOS imaging sensor to capture the spatial scene and bandpass Interference Filters (IFs) to capture spectral information. Unfortunately, CMOS sensors suffer from severe spatial non-uniformity compared to CCD sensors, and the major cause is Fixed Pattern Noise (FPN). IFs suffer from a "blue shift" effect and introduce spatially and spectrally correlated errors. Therefore, FPN correction is critical to enhance crime scene image quality and is also helpful for spatial-spectral noise de-correlation. In this paper, a pixel-wise linear radiance to Digital Count (DC) conversion model is constructed for the crime scene imaging CMOS sensor. The pixel-wise conversion gain Gi,j and Dark Signal Non-Uniformity (DSNU) Zi,j are calculated. The conversion gain is further divided into four components: an FPN row component, an FPN column component, a defects component, and the effective photo-response signal component. The conversion gain is then corrected by averaging out the FPN column and row components and the defects component so that the sensor conversion gain is uniform. Based on the corrected conversion gain and the image incident radiance estimated by inverting the pixel-wise linear radiance-to-DC model, the spatial uniformity of the corrected image can be enhanced to seven times that of the raw image, and the larger the image DC value within its dynamic range, the greater the enhancement.
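
    The pixel-wise linear model DC(i,j) = G(i,j)·L + Z(i,j) can be calibrated from flat-field stacks at known radiance levels and then inverted to recover radiance; simple row/column gain components can additionally be flattened. The calibration style, variable names and synthetic data below are assumptions for illustration, not the procedure used in the paper.

```python
import numpy as np

def calibrate_pixelwise(flat_stacks, radiances):
    """Fit DC = G * L + Z per pixel from flat-field stacks at known radiances.

    flat_stacks: list of arrays (n_frames, H, W), one stack per radiance level.
    radiances:   list of the known radiance values L.
    Returns per-pixel conversion gain G and dark-signal nonuniformity Z.
    """
    means = np.stack([s.mean(axis=0) for s in flat_stacks])   # (n_levels, H, W)
    L = np.asarray(radiances, dtype=np.float64)
    L_mean, dc_mean = L.mean(), means.mean(axis=0)
    # Per-pixel least-squares slope and intercept over the radiance levels.
    gain = ((L[:, None, None] - L_mean) * (means - dc_mean)).sum(axis=0) \
        / ((L - L_mean) ** 2).sum()
    dsnu = dc_mean - gain * L_mean
    return gain, dsnu

def flatten_row_col(gain):
    """Remove separable row/column components of the gain map (striping FPN)."""
    g_mean = gain.mean()
    row = gain.mean(axis=1, keepdims=True) - g_mean
    col = gain.mean(axis=0, keepdims=True) - g_mean
    return gain - row - col

# Synthetic sensor with row/column gain striping and random dark-signal offsets.
rng = np.random.default_rng(6)
H, W = 64, 64
true_gain = 100.0 + 5.0 * rng.standard_normal((H, 1)) + 5.0 * rng.standard_normal((1, W))
true_dsnu = 20.0 * rng.standard_normal((H, W))
levels = [0.5, 1.0, 2.0]
stacks = [true_gain * L + true_dsnu + rng.normal(0.0, 2.0, (16, H, W)) for L in levels]

gain, dsnu = calibrate_pixelwise(stacks, levels)
radiance_est = (stacks[1][0] - dsnu) / gain          # invert the model for one frame
print(radiance_est.mean())                           # close to 1.0
print(gain.std(), flatten_row_col(gain).std())       # striping largely removed
```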

  17. An Over 90 dB Intra-Scene Single-Exposure Dynamic Range CMOS Image Sensor Using a 3.0 μm Triple-Gain Pixel Fabricated in a Standard BSI Process.

    PubMed

    Takayanagi, Isao; Yoshimura, Norio; Mori, Kazuya; Matsuo, Shinichiro; Tanaka, Shunsuke; Abe, Hirofumi; Yasuda, Naoto; Ishikawa, Kenichiro; Okura, Shunsuke; Ohsawa, Shinji; Otaka, Toshinori

    2018-01-12

    To respond to the high demand for high dynamic range imaging suitable for moving objects with few artifacts, we have developed a single-exposure dynamic range image sensor by introducing a triple-gain pixel and a low noise dual-gain readout circuit. The developed 3 μm pixel is capable of having three conversion gains. Introducing a new split-pinned photodiode structure, linear full well reaches 40 ke-. Readout noise under the highest pixel gain condition is 1 e- with a low noise readout circuit. Merging two signals, one with high pixel gain and high analog gain, and the other with low pixel gain and low analog gain, a single-exposure high dynamic range (SEHDR) signal is obtained. Using this technology, a 1/2.7", 2M-pixel CMOS image sensor has been developed and characterized. The image sensor also employs an on-chip linearization function, yielding a 16-bit linear signal at 60 fps, and an intra-scene dynamic range of higher than 90 dB was successfully demonstrated. This SEHDR approach inherently mitigates the artifacts from moving objects or time-varying light sources that can appear in the multiple exposure high dynamic range (MEHDR) approach.
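
    The dual-gain merge can be illustrated generically: use the high-gain sample where it is below saturation and otherwise the low-gain sample rescaled by the gain ratio, giving one linear HDR value per pixel from a single exposure. The gain ratio, bit depth and saturation level below are illustrative assumptions, not the sensor's actual design values.

```python
import numpy as np

def merge_dual_gain(dn_high, dn_low, gain_ratio=16.0, sat_level=0.9 * 4095):
    """Merge high-gain and low-gain readouts of the same exposure.

    dn_high: digital numbers from the high-gain path (clips early, low noise).
    dn_low:  digital numbers from the low-gain path (covers bright scene parts).
    Where the high-gain sample is unsaturated it is used directly; elsewhere
    the low-gain sample is rescaled by the gain ratio to the same linear scale.
    """
    dn_high = np.asarray(dn_high, dtype=np.float64)
    dn_low = np.asarray(dn_low, dtype=np.float64)
    return np.where(dn_high < sat_level, dn_high, dn_low * gain_ratio)

# Synthetic scene spanning a wide range of signal levels ("electrons").
signal = np.geomspace(1.0, 5e4, 1000)
dn_high = np.clip(signal, 0, 4095)          # high-gain path saturates early
dn_low = np.clip(signal / 16.0, 0, 4095)    # low-gain path covers the highlights
hdr = merge_dual_gain(dn_high, dn_low)
print(hdr.min(), hdr.max(), 20 * np.log10(hdr.max() / hdr.min()))   # dynamic range in dB
```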

  18. An Over 90 dB Intra-Scene Single-Exposure Dynamic Range CMOS Image Sensor Using a 3.0 μm Triple-Gain Pixel Fabricated in a Standard BSI Process †

    PubMed Central

    Takayanagi, Isao; Yoshimura, Norio; Mori, Kazuya; Matsuo, Shinichiro; Tanaka, Shunsuke; Abe, Hirofumi; Yasuda, Naoto; Ishikawa, Kenichiro; Okura, Shunsuke; Ohsawa, Shinji; Otaka, Toshinori

    2018-01-01

    To respond to the high demand for high dynamic range imaging suitable for moving objects with few artifacts, we have developed a single-exposure dynamic range image sensor by introducing a triple-gain pixel and a low noise dual-gain readout circuit. The developed 3 μm pixel is capable of having three conversion gains. Introducing a new split-pinned photodiode structure, linear full well reaches 40 ke−. Readout noise under the highest pixel gain condition is 1 e− with a low noise readout circuit. Merging two signals, one with high pixel gain and high analog gain, and the other with low pixel gain and low analog gain, a single-exposure high dynamic range (SEHDR) signal is obtained. Using this technology, a 1/2.7”, 2M-pixel CMOS image sensor has been developed and characterized. The image sensor also employs an on-chip linearization function, yielding a 16-bit linear signal at 60 fps, and an intra-scene dynamic range of higher than 90 dB was successfully demonstrated. This SEHDR approach inherently mitigates the artifacts from moving objects or time-varying light sources that can appear in the multiple exposure high dynamic range (MEHDR) approach. PMID:29329210

  19. Heart rate recovery in elite Spanish male athletes.

    PubMed

    Peinado, A B; Benito, P J; Barriopedro, M; Lorenzo, I; Maffulli, N; Calderón, F J

    2014-06-01

    During postexercise recovery, heart rate (HR) initially falls rapidly, followed by a period of slower decrease, until resting values are reached. The aim of the present work was to examine the differences in the recovery heart rate (RHR) between athletes engaged in static and dynamic sports. The study subjects were 294 federated sportsmen competing at the national and international level in sports classified using the criteria of Mitchell et al. as either prevalently static (N.=89) or prevalently dynamic (N.=205). Within the dynamic group, the subjects who practised the most dynamic sports were assigned to further subgroups: triathlon (N.=20), long distance running (N.=58), cycling (N.=28) and swimming (N.=12). All athletes were subjected to a maximum exertion stress test and their HR recorded at 1, 2, 3 and 4 min (RHR1,2,3,4) into the HR recovery period. The following indices of recovery (IR) were then calculated: IR1=(HRpeak-RHR1,2,3,4)/(HRmax-HRrest)*100, IR2=(HRpeak-RHR1,2,3,4)/(HRmax/HRpeak), and IR3=HRpeak-RHR1,2,3,4. The differences in the RHR and IR for the static and dynamic groups were examined using two way ANOVA. The RHR at minutes 2 (138.7±15.2 vs. 134.8±14.4 beats·min⁻¹) and 3 (128.5±15.2 vs. 123.3±14.4 beats·min⁻¹) were significantly higher for the static group (Group S) than the dynamic group (Group D), respectively. Significant differences were seen between Group D and S with respect to IR1 at minutes 1 (26.4±8.7 vs. 24.8±8.4%), 2 (43.8±8.1 vs. 41.5±7.8%), 3 (52.1±8.3 vs. 49.1±8%) and 4 (56.8±8.6 vs. 55.4±7.4%) of recovery. For IR2, significant differences were seen between the same groups at minutes 2 (59.7±12.5 vs. 55.9±10.8 beats·min⁻¹) and 3 (71.0±13.5 vs. 66.1±11.4 beats·min⁻¹) of recovery. Finally, for IR3, the only significant difference between Group D and S was recorded at minute 3 of recovery (72.2±12.5 vs. 66.2±11.5 beats·min⁻¹). This work provides information on RHR of a large population of elite Spanish athletes, and shows marked differences in the way that HR recovers in dynamic and static sports.

  20. Signature modelling and radiometric rendering equations in infrared scene simulation systems

    NASA Astrophysics Data System (ADS)

    Willers, Cornelius J.; Willers, Maria S.; Lapierre, Fabian

    2011-11-01

    The development and optimisation of modern infrared systems necessitates the use of simulation systems to create radiometrically realistic representations (e.g. images) of infrared scenes. Such simulation systems are used in signature prediction, the development of surveillance and missile sensors, signal/image processing algorithm development and aircraft self-protection countermeasure system development and evaluation. Even the most cursory investigation reveals a multitude of factors affecting the infrared signatures of real-world objects. Factors such as spectral emissivity, spatial/volumetric radiance distribution, specular reflection, reflected direct sunlight, reflected ambient light, atmospheric degradation and more, all affect the presentation of an object's instantaneous signature. The signature is furthermore dynamically varying as a result of internal and external influences on the object, resulting from the heat balance comprising insolation, internal heat sources, aerodynamic heating (airborne objects), conduction, convection and radiation. In order to accurately render the object's signature in a computer simulation, the rendering equations must therefore account for all the elements of the signature. In this overview paper, the signature models, rendering equations and application frameworks of three infrared simulation systems are reviewed and compared. The paper first considers the problem of infrared scene simulation in a framework for simulation validation. This approach provides concise definitions and a convenient context for considering signature models and subsequent computer implementation. The primary radiometric requirements for an infrared scene simulator are presented next. The signature models and rendering equations implemented in OSMOSIS (Belgian Royal Military Academy), DIRSIG (Rochester Institute of Technology) and OSSIM (CSIR & Denel Dynamics) are reviewed. In spite of these three simulation systems' different application focus areas, their underlying physics-based approach is similar. The commonalities and differences between the different systems are investigated, in the context of their somewhat different application areas. The application of an infrared scene simulation system towards the development of imaging missiles and missile countermeasures is briefly described. Flowing from the review of the available models and equations, recommendations are made to further enhance and improve the signature models and rendering equations in infrared scene simulators.
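
    The radiometric bookkeeping common to such simulators can be written compactly: the apparent spectral radiance of a surface element is its emitted grey-body term plus the reflected ambient term, attenuated by the atmosphere, plus path radiance. The sketch below, based on Planck's law and illustrative values for emissivity, transmittance and path radiance, is a generic example rather than the specific rendering equation of OSMOSIS, DIRSIG or OSSIM.

```python
import numpy as np

H_PLANCK = 6.62607015e-34   # J s
C_LIGHT = 2.99792458e8      # m / s
K_BOLTZ = 1.380649e-23      # J / K

def planck_radiance(wavelength_m, temperature_k):
    """Blackbody spectral radiance, W / (m^2 sr m)."""
    x = H_PLANCK * C_LIGHT / (wavelength_m * K_BOLTZ * temperature_k)
    return (2.0 * H_PLANCK * C_LIGHT ** 2 / wavelength_m ** 5) / np.expm1(x)

def apparent_radiance(wavelength_m, t_surface, emissivity, t_ambient, tau_atm, l_path):
    """Grey-body apparent radiance at the sensor:
    atmospheric transmittance * (emitted + reflected ambient) + path radiance."""
    emitted = emissivity * planck_radiance(wavelength_m, t_surface)
    reflected = (1.0 - emissivity) * planck_radiance(wavelength_m, t_ambient)
    return tau_atm * (emitted + reflected) + l_path

wl = np.linspace(3e-6, 5e-6, 200)                    # MWIR band, metres
l_path = 0.05 * planck_radiance(wl, 300.0)           # crude path-radiance placeholder
L = apparent_radiance(wl, t_surface=350.0, emissivity=0.9,
                      t_ambient=300.0, tau_atm=0.7, l_path=l_path)
band_radiance = np.sum(0.5 * (L[1:] + L[:-1]) * np.diff(wl))   # trapezoidal band integral
print(band_radiance)                                 # W / (m^2 sr) in the 3-5 um band
```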

  1. Two-Magnon Raman Scattering and Pseudospin-Lattice Interactions in Sr_{2}IrO_{4} and Sr_{3}Ir_{2}O_{7}.

    PubMed

    Gretarsson, H; Sung, N H; Höppner, M; Kim, B J; Keimer, B; Le Tacon, M

    2016-04-01

    We have used Raman scattering to investigate the magnetic excitations and lattice dynamics in the prototypical spin-orbit Mott insulators Sr_{2}IrO_{4} and Sr_{3}Ir_{2}O_{7}. Both compounds exhibit pronounced two-magnon Raman scattering features with different energies, line shapes, and temperature dependencies, which in part reflect the different influence of long-range frustrating exchange interactions. Additionally, we find strong Fano asymmetries in the line shapes of low-energy phonon modes in both compounds, which disappear upon cooling below the antiferromagnetic ordering temperatures. These unusual phonon anomalies indicate that the spin-orbit coupling in Mott-insulating iridates is not sufficiently strong to quench the orbital dynamics in the paramagnetic state.

  2. Assessment of vegetation change in a fire-altered forest landscape

    NASA Technical Reports Server (NTRS)

    Jakubauskas, Mark E.; Lulla, Kamlesh P.; Mausel, Paul W.

    1990-01-01

    This research focused on determining the degree to which differences in burn severity relate to postfire vegetative cover within a Michigan pine forest. Landsat MSS data from June 1973 and TM data from October 1982 were classified using an unsupervised approach to create prefire and postfire cover maps of the study area. Using a raster-based geographic information system (GIS), the maps were compared, and a map of vegetation change was created. An IR/red band ratio from a June 1980 Landsat scene was classified to create a map of three degrees of burn severity, which was then compared with the vegetation change map using a GIS. Classification comparisons of pine and deciduous forest classes (1973 to 1982) revealed that the most change in vegetation occurred in areas subjected to the most intense burn. Two classes of regenerating forest comprised the majority of the change, while the remaining change was associated with shrub vegetation or another forest class.

  3. Comparison of the information content of data from the LANDSAT 4 Thematic Mapper and the multispectral scanner

    NASA Technical Reports Server (NTRS)

    Price, J. C.

    1984-01-01

    Evaluation of information contained in data from the visible and near-IR channels of LANDSAT 4 TM and MSS for five agricultural scenes shows that the TM provides a significant advance in information gathering capability as expressed in terms of bits per pixel or bits per unit area. The six reflective channels of the TM acquire 18 bits of information per pixel out of a possible 48 bits, while the four MSS channels acquire 10 bits of information per pixel out of a possible 28 bits. Thus the TM and MSS are equally efficient in gathering information (18/48 to approximately 10/28), contrary to the expected tendency toward lower efficiency as spatial resolution is improved and spectral channels are added to an observing system. The TM thermal IR data appear to be of interest mainly for mapping water bodies, which do not change temperature during the day, for assessing surface moisture, and for monitoring thermal features associated with human activity.

  4. A simple solubility test for the discrimination of acrylic and modacrylic fibers.

    PubMed

    Suga, Keisuke; Narita, Yuji; Suzuki, Shinichi

    2014-05-01

    In a crime scene investigation, single fibers play an important role as significant trace physical evidence. Acrylic fibers are frequently encountered in forensic analysis. Currently, acrylic and modacrylic fibers are not clearly discriminated in Japan: using FT-IR results alone, some acrylics were difficult to separate clearly into acrylic and modacrylic fibers. The solubility test is a primitive but convenient and useful method, and the Japan Industrial Standards (JIS) recommend FT-IR and a solubility test to distinguish acrylic from modacrylic fibers. However, with the recommended JIS dissolution test using 100% N,N-dimethylformamide (DMF) as the solvent, some acrylics could not be discriminated. In this report, we used a DMF and ethanol (90:10, v/v) solvent. The JIS method could not discriminate 6 of 60 acrylics, whereas the DMF and ethanol (90:10, v/v) solvent clearly discriminated 59 of the 60 fibers (43 acrylic and 16 modacrylic fibers), with only one modacrylic fiber incorrectly identified as acrylic. © 2014 American Academy of Forensic Sciences.

  5. Synchronous Computer-Mediated Dynamic Assessment: A Case Study of L2 Spanish Past Narration

    ERIC Educational Resources Information Center

    Darhower, Mark Anthony

    2014-01-01

    In this study, dynamic assessment is employed to help understand the developmental processes of two university Spanish learners as they produce a series of past narrations in a synchronous computer mediated environment. The assessments were conducted in six weekly one-hour chat sessions about various scenes of a Spanish language film. The analysis…

  6. Observation of Water-Protein Interaction Dynamics with Broadband Two-Dimensional Infrared Spectroscopy

    NASA Astrophysics Data System (ADS)

    De Marco, Luigi; Haky, Andrew; Tokmakoff, Andrei

    Two-dimensional infrared (2D IR) spectroscopy has proven itself an indispensable tool for studying molecular dynamics and intermolecular interactions on ultrafast timescales. Using a novel source of broadband mid-IR pulses, we have collected 2D IR spectra of protein films at varying levels of hydration. With 2D IR, we can directly observe coupling between water's motions and the protein's. Protein films provide us with the ability to discriminate hydration waters from bulk water and thus give us access to studying water dynamics along the protein backbone, fluctuations in the protein structure, and the interplay between the molecular dynamics of the two. We present two representative protein films: poly-L-proline (PLP) and hen egg-white lysozyme (HEWL). Having no N-H groups, PLP allows us to look at water dynamics without interference from resonant energy transfer between the protein N-H stretch and the water O-H stretch. We conclude that at low hydration levels water-protein interactions dominate, and the water's dynamics are tied to those of the protein. In HEWL films, we take advantage of the robust secondary structure to partially deuterate the film, allowing us to spectrally distinguish the protein core from the exterior. From this, we show that resonant energy transfer to water provides an effective means of dissipating excess energy within the protein, while maintaining the structure. These methods are general and can easily be extended to studying specific protein-water interactions.

  7. Acquisition and Analysis of Dynamic Responses of a Historic Pedestrian Bridge using Video Image Processing

    NASA Astrophysics Data System (ADS)

    O'Byrne, Michael; Ghosh, Bidisha; Schoefs, Franck; O'Donnell, Deirdre; Wright, Robert; Pakrashi, Vikram

    2015-07-01

    Video based tracking is capable of analysing bridge vibrations that are characterised by large amplitudes and low frequencies. This paper presents the use of video images and associated image processing techniques to obtain the dynamic response of a pedestrian suspension bridge in Cork, Ireland. This historic structure is one of the four suspension bridges in Ireland and is notable for its dynamic nature. A video camera is mounted on the river-bank and the dynamic responses of the bridge have been measured from the video images. The dynamic response is assessed without the need of a reflector on the bridge and in the presence of various forms of luminous complexities in the video image scenes. Vertical deformations of the bridge were measured in this regard. The video image tracking for the measurement of dynamic responses of the bridge were based on correlating patches in time-lagged scenes in video images and utilising a zero mean normalised cross correlation (ZNCC) metric. The bridge was excited by designed pedestrian movement and by individual cyclists traversing the bridge. The time series data of dynamic displacement responses of the bridge were analysed to obtain the frequency domain response. Frequencies obtained from video analysis were checked against accelerometer data from the bridge obtained while carrying out the same set of experiments used for video image based recognition.
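
    The patch-matching step described above can be illustrated with a brute-force ZNCC search. The sketch below (plain NumPy; the function names, window size and search radius are illustrative assumptions, since the paper's implementation details are not given in the abstract) scores candidate patches in the next frame against a template from the previous frame and returns the displacement with the highest correlation.

```python
import numpy as np

def zncc(template, candidate):
    """Zero mean normalised cross correlation between two equally sized patches.
    Returns a value in [-1, 1]; 1 is a perfect match up to brightness/contrast."""
    a = template.astype(float) - template.mean()
    b = candidate.astype(float) - candidate.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def track_patch(prev_frame, next_frame, top_left, size, search_radius=15):
    """Locate a patch from prev_frame in next_frame by exhaustively maximising
    ZNCC over a square search window; returns (dy, dx) and the best score."""
    y0, x0 = top_left
    h, w = size
    template = prev_frame[y0:y0 + h, x0:x0 + w]
    best_score, best_shift = -1.0, (0, 0)
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + h > next_frame.shape[0] or x + w > next_frame.shape[1]:
                continue
            score = zncc(template, next_frame[y:y + h, x:x + w])
            if score > best_score:
                best_score, best_shift = score, (dy, dx)
    return best_shift, best_score
```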

  9. Evaluating methods for controlling depth perception in stereoscopic cinematography

    NASA Astrophysics Data System (ADS)

    Sun, Geng; Holliman, Nick

    2009-02-01

    Existing stereoscopic imaging algorithms can create static stereoscopic images with perceived depth control function to ensure a compelling 3D viewing experience without visual discomfort. However, current algorithms do not normally support standard Cinematic Storytelling techniques. These techniques, such as object movement, camera motion, and zooming, can result in dynamic scene depth change within and between a series of frames (shots) in stereoscopic cinematography. In this study, we empirically evaluate the following three types of stereoscopic imaging approaches that aim to address this problem. (1) Real-Eye Configuration: set camera separation equal to the nominal human eye interpupillary distance. The perceived depth on the display is identical to the scene depth without any distortion. (2) Mapping Algorithm: map the scene depth to a predefined range on the display to avoid excessive perceived depth. A new method that dynamically adjusts the depth mapping from scene space to display space is presented in addition to an existing fixed depth mapping method. (3) Depth of Field Simulation: apply Depth of Field (DOF) blur effect to stereoscopic images. Only objects that are inside the DOF are viewed in full sharpness. Objects that are far away from the focus plane are blurred. We performed a human-based trial using the ITU-R BT.500-11 Recommendation to compare the depth quality of stereoscopic video sequences generated by the above-mentioned imaging methods. Our results indicate that viewers' practical 3D viewing volumes are different for individual stereoscopic displays and viewers can cope with much larger perceived depth range in viewing stereoscopic cinematography in comparison to static stereoscopic images. Our new dynamic depth mapping method does have an advantage over the fixed depth mapping method in controlling stereo depth perception. The DOF blur effect does not provide the expected improvement for perceived depth quality control in 3D cinematography. We anticipate the results will be of particular interest to 3D filmmaking and real time computer games.
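
    The mapping approach in (2) can be illustrated by a simple linear remapping of scene depth onto the display's comfortable range; the abstract does not give the paper's actual mapping function, so the following is only a hedged illustration:

    \[ Z' \;=\; Z'_{\min} + \frac{Z - Z_{\min}}{Z_{\max} - Z_{\min}}\,\bigl(Z'_{\max} - Z'_{\min}\bigr), \]

    where [Z_min, Z_max] is the depth range of the scene and [Z'_min, Z'_max] is the perceived-depth range the display can comfortably present. A fixed mapping keeps Z_min and Z_max constant for a sequence, whereas the dynamic method proposed in the paper re-estimates them as the scene depth changes from shot to shot.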

  10. Ultrafast Dynamics of Energetic Materials

    DTIC Science & Technology

    2014-01-23

    redistributed in condensed-phase materials. In this subproject we developed a technique termed three-dimensional IR-Raman spectroscopy that allowed us to...Fang, 2011, "The distribution of local enhancement factors in surface-enhanced Raman-active substrates and the vibrational dynamics in the liquid phase...3. (invited) "Vibrational energy and molecular thermometers in liquids: Ultrafast IR-Raman spectroscopy", Brandt C. Pein and Dana D. Dlott, To

  11. Vibrational dynamics (IR, Raman, NRVS) and DFT study of new antitumor tetranuclear stannoxane cluster, Sn(IV)-oxo-{di-o-vanillin} dimethyl dichloride

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arjmand, F.; Sharma, S.; Usman, M.

    2016-06-21

    The vibrational dynamics of a newly synthesized tetrastannoxane was characterized with a combination of experimental (Raman, IR and tin-based nuclear resonance vibrational spectroscopy) and computational (DFT/B3LYP) methods, with an emphasis on the vibrations of the tin sites. The cytotoxic activity revealed a significant regression selectively against the human pancreatic cell lines.

  12. Monitoring the long term stability of the IRS-P6 AWiFS sensor using the Sonoran and RVPN sites

    NASA Astrophysics Data System (ADS)

    Chander, Gyanesh; Sampath, Aparajithan; Angal, Amit; Choi, Taeyoung; Xiong, Xiaoxiong

    2010-10-01

    This paper focuses on the radiometric and geometric assessment of the Indian Remote Sensing (IRS-P6) Advanced Wide Field Sensor (AWiFS) using the Sonoran Desert and Railroad Valley Playa, Nevada (RVPN) ground sites. Image-to-Image (I2I) accuracy and relative band-to-band (B2B) accuracy were measured. I2I accuracy of the AWiFS imagery was assessed by measuring the imagery against the Landsat Global Land Survey (GLS) 2000. The AWiFS images were typically registered to within one pixel of the GLS 2000 mosaic images. The B2B process used the same concepts as the I2I, except that, instead of a reference image and a search image, the individual bands of a multispectral image are tested against each other. The B2B results showed that all the AWiFS multispectral bands are registered to sub-pixel accuracy. Using the limited number of scenes available over these ground sites, the reflective bands of the AWiFS sensor indicate a long-term drift in the top-of-atmosphere (TOA) reflectance. Because of the limited availability of AWiFS scenes over these ground sites, a comprehensive evaluation of the radiometric stability using these sites alone is not possible. To overcome this limitation, a cross-comparison between AWiFS and the Landsat 7 (L7) Enhanced Thematic Mapper Plus (ETM+) was performed using image statistics based on large common areas observed by the two sensors within 30 minutes of each other. Regression curves and coefficients of determination for the TOA trends from these sensors were generated to quantify the uncertainty in these relationships and to provide an assessment of the calibration differences between the sensors.

  13. Robotics On-Board Trainer (ROBoT)

    NASA Technical Reports Server (NTRS)

    Johnson, Genevieve; Alexander, Greg

    2013-01-01

    ROBoT is an on-orbit version of the ground-based Dynamics Skills Trainer (DST) that astronauts use for training on a frequent basis. This software consists of two primary software groups. The first series of components is responsible for displaying the graphical scenes. The remaining components are responsible for simulating the Mobile Servicing System (MSS), the Japanese Experiment Module Remote Manipulator System (JEMRMS), and the H-II Transfer Vehicle (HTV) Free Flyer Robotics Operations. The MSS simulation software includes: Robotic Workstation (RWS) simulation, a simulation of the Space Station Remote Manipulator System (SSRMS), a simulation of the ISS Command and Control System (CCS), and a portion of the Portable Computer System (PCS) software necessary for MSS operations. These components all run under the CentOS 4.5 Linux operating system. The JEMRMS simulation software includes real-time, hardware-in-the-loop (HIL) dynamics, manipulator multi-body dynamics, and a moving-object contact model with Trick's discrete-time scheduling. The JEMRMS DST will be used as a functional proficiency and skills trainer for flight crews. The HTV Free Flyer Robotics Operations simulation software adds a functional simulation of HTV vehicle controllers, sensors, and data to the MSS simulation software. These components are intended to support HTV ISS visiting vehicle analysis and training. The scene generation software will use DOUG (Dynamic On-orbit Ubiquitous Graphics) to render the graphical scenes. DOUG runs on a laptop running the CentOS 4.5 Linux operating system. DOUG is an OpenGL-based 3D computer graphics rendering package. It uses pre-built three-dimensional models of on-orbit ISS and space shuttle systems elements, and provides real-time views of various station and shuttle configurations.

  14. Better Pictures in a Snap

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Retinex Imaging Processing, winner of NASA's 1999 Space Act Award, is commercially available through TruView Imaging Company. With this technology, amateur photographers use their personal computers to improve the brightness, scene contrast, detail, and overall sharpness of images with increased ease. The process was originally developed for remote sensing of the Earth by researchers at Langley Research Center and Science and Technology Corporation (STC). It automatically enhances a digital image in terms of dynamic range compression, color independence from the spectral distribution of the scene illuminant, and color/lightness rendition. As a result, the enhanced digital image is much closer to the scene perceived by the human visual system under all kinds and levels of lighting variations. TruView believes there are other applications for the software in medical imaging, forensics, security, reconnaissance, mining, assembly, and other industrial areas.
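
    The core idea named above (dynamic range compression with independence from the illuminant) can be sketched with a single-scale retinex: the log of the image minus the log of a blurred estimate of the illuminant. This is a minimal illustration only; the NASA/STC method is a more elaborate multi-scale retinex with color restoration, and the surround scale and output rescaling below are illustrative assumptions rather than its actual parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(image, sigma=80.0):
    """Single-scale retinex on a single-channel image: log of the image minus
    the log of a Gaussian-blurred estimate of the scene illuminant, rescaled
    to [0, 1] for display. sigma controls the size of the surround."""
    img = image.astype(float) + 1.0            # offset avoids log(0)
    illuminant = gaussian_filter(img, sigma)   # smooth surround ~ illuminant estimate
    retinex = np.log(img) - np.log(illuminant)
    lo, hi = retinex.min(), retinex.max()
    return (retinex - lo) / (hi - lo) if hi > lo else np.zeros_like(retinex)
```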

  15. Functional neuroanatomy of intuitive physical inference

    PubMed Central

    Mikhael, John G.; Tenenbaum, Joshua B.; Kanwisher, Nancy

    2016-01-01

    To engage with the world—to understand the scene in front of us, plan actions, and predict what will happen next—we must have an intuitive grasp of the world’s physical structure and dynamics. How do the objects in front of us rest on and support each other, how much force would be required to move them, and how will they behave when they fall, roll, or collide? Despite the centrality of physical inferences in daily life, little is known about the brain mechanisms recruited to interpret the physical structure of a scene and predict how physical events will unfold. Here, in a series of fMRI experiments, we identified a set of cortical regions that are selectively engaged when people watch and predict the unfolding of physical events—a “physics engine” in the brain. These brain regions are selective to physical inferences relative to nonphysical but otherwise highly similar scenes and tasks. However, these regions are not exclusively engaged in physical inferences per se or, indeed, even in scene understanding; they overlap with the domain-general “multiple demand” system, especially the parts of that system involved in action planning and tool use, pointing to a close relationship between the cognitive and neural mechanisms involved in parsing the physical content of a scene and preparing an appropriate action. PMID:27503892

  16. Functional neuroanatomy of intuitive physical inference.

    PubMed

    Fischer, Jason; Mikhael, John G; Tenenbaum, Joshua B; Kanwisher, Nancy

    2016-08-23

    To engage with the world-to understand the scene in front of us, plan actions, and predict what will happen next-we must have an intuitive grasp of the world's physical structure and dynamics. How do the objects in front of us rest on and support each other, how much force would be required to move them, and how will they behave when they fall, roll, or collide? Despite the centrality of physical inferences in daily life, little is known about the brain mechanisms recruited to interpret the physical structure of a scene and predict how physical events will unfold. Here, in a series of fMRI experiments, we identified a set of cortical regions that are selectively engaged when people watch and predict the unfolding of physical events-a "physics engine" in the brain. These brain regions are selective to physical inferences relative to nonphysical but otherwise highly similar scenes and tasks. However, these regions are not exclusively engaged in physical inferences per se or, indeed, even in scene understanding; they overlap with the domain-general "multiple demand" system, especially the parts of that system involved in action planning and tool use, pointing to a close relationship between the cognitive and neural mechanisms involved in parsing the physical content of a scene and preparing an appropriate action.

  17. Nitromethane decomposition under high static pressure.

    PubMed

    Citroni, Margherita; Bini, Roberto; Pagliai, Marco; Cardini, Gianni; Schettino, Vincenzo

    2010-07-29

    The room-temperature pressure-induced reaction of nitromethane has been studied by means of infrared spectroscopy in conjunction with ab initio molecular dynamics simulations. The evolution of the IR spectrum during the reaction has been monitored at 32.2 and 35.5 GPa performing the measurements in a diamond anvil cell. The simulations allowed the characterization of the onset of the high-pressure reaction, showing that its mechanism has a complex bimolecular character and involves the formation of the aci-ion of nitromethane. The growth of a three-dimensional disordered polymer has been evidenced both in the experiments and in the simulations. On decompression of the sample, after the reaction, a continuous evolution of the product is observed with a decomposition into smaller molecules. This behavior has been confirmed by the simulations and represents an important novelty in the scene of the known high-pressure reactions of molecular systems. The major reaction product on decompression is N-methylformamide, the smallest molecule containing the peptide bond. The high-pressure reaction of crystalline nitromethane under irradiation at 458 nm was also experimentally studied. The reaction threshold pressure is significantly lowered by the electronic excitation through two-photon absorption, and methanol, not detected in the purely pressure-induced reaction, is formed. The presence of ammonium carbonate is also observed.

  18. Accurate screening for insulin resistance in PCOS women using fasting insulin concentrations.

    PubMed

    Lunger, Fabian; Wildt, Ludwig; Seeber, Beata

    2013-06-01

    The aims of this cross-sectional study were to evaluate the relative agreement of both static and dynamic methods of diagnosing IR in women with polycystic ovary syndrome (PCOS) and to suggest a simple screening method for IR. All participants underwent serial blood draws for hormonal profiling and lipid assessment, a 3 h, 75 g load oral glucose tolerance test (OGTT) with every 15 min measurements of glucose and insulin, and an ACTH stimulation test. The prevalence of IR ranged from 12.2% to 60.5%, depending on the IR index used. Based on largest area under the curve on receiver operating curve (ROC) analyses, the dynamic indices outperformed the static indices with glucose to insulin ratio and fasting insulin (fInsulin) demonstrating the best diagnostic properties. Applying two cut-offs representing fInsulin extremes (<7 and >13 mIU/l, respectively) gave the diagnosis in 70% of the patients with high accuracy. Currently utilized indices for assessing IR give highly variable results in women with PCOS. The most accurate indices based on dynamic testing can be time-consuming and labor-intensive. We suggest the use of fInsulin as a simple screening test, which can reduce the number of OGTTs needed to routinely assess insulin resistance in women with PCOS.
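
    The two-cut-off screening rule mentioned above can be written as a few lines of code. The thresholds come from the abstract; the labels and the handling of intermediate values are assumptions made only for illustration, and this sketch is not clinical guidance.

```python
def screen_insulin_resistance(fasting_insulin_miu_per_l):
    """Two-threshold screen using the fasting insulin cut-offs quoted above
    (mIU/l): below the lower cut-off IR is unlikely, above the upper cut-off
    likely, and intermediate values are referred for OGTT-based dynamic testing."""
    if fasting_insulin_miu_per_l < 7:
        return "IR unlikely"
    if fasting_insulin_miu_per_l > 13:
        return "IR likely"
    return "indeterminate - confirm with an OGTT-based dynamic index"
```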

  19. Irma 5.2 multi-sensor signature prediction model

    NASA Astrophysics Data System (ADS)

    Savage, James; Coker, Charles; Thai, Bea; Aboutalib, Omar; Chow, Anthony; Yamaoka, Neil; Kim, Charles

    2007-04-01

    The Irma synthetic signature prediction code is being developed by the Munitions Directorate of the Air Force Research Laboratory (AFRL/MN) to facilitate the research and development of multi-sensor systems. There are over 130 users within the Department of Defense, NASA, Department of Transportation, academia, and industry. Irma began as a high-resolution, physics-based Infrared (IR) target and background signature model for tactical weapon applications and has grown to include: a laser (or active) channel (1990), improved scene generator to support correlated frame-to-frame imagery (1992), and passive IR/millimeter wave (MMW) channel for a co-registered active/passive IR/MMW model (1994). Irma version 5.0 was released in 2000 and encompassed several upgrades to both the physical models and software; host support was expanded to Windows, Linux, Solaris, and SGI Irix platforms. In 2005, version 5.1 was released after an extensive verification and validation of an upgraded and reengineered active channel. Since 2005, the reengineering effort has focused on the Irma passive channel. Field measurements for the validation effort include the unpolarized data collection. Irma 5.2 is scheduled for release in the summer of 2007. This paper will report the validation test results of the Irma passive models and discuss the new features in Irma 5.2.

  20. Variability of eye movements when viewing dynamic natural scenes.

    PubMed

    Dorr, Michael; Martinetz, Thomas; Gegenfurtner, Karl R; Barth, Erhardt

    2010-08-26

    How similar are the eye movement patterns of different subjects when free viewing dynamic natural scenes? We collected a large database of eye movements from 54 subjects on 18 high-resolution videos of outdoor scenes and measured their variability using the Normalized Scanpath Saliency, which we extended to the temporal domain. Even though up to about 80% of subjects looked at the same image region in some video parts, variability usually was much greater. Eye movements on natural movies were then compared with eye movements in several control conditions. "Stop-motion" movies had almost identical semantic content as the original videos but lacked continuous motion. Hollywood action movie trailers were used to probe the upper limit of eye movement coherence that can be achieved by deliberate camera work, scene cuts, etc. In a "repetitive" condition, subjects viewed the same movies ten times each over the course of 2 days. Results show several systematic differences between conditions both for general eye movement parameters such as saccade amplitude and fixation duration and for eye movement variability. Most importantly, eye movements on static images are initially driven by stimulus onset effects and later, more so than on continuous videos, by subject-specific idiosyncrasies; eye movements on Hollywood movies are significantly more coherent than those on natural movies. We conclude that the stimuli types often used in laboratory experiments, static images and professionally cut material, are not very representative of natural viewing behavior. All stimuli and gaze data are publicly available at http://www.inb.uni-luebeck.de/tools-demos/gaze.
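
    The Normalized Scanpath Saliency used above compares fixations against a z-scored saliency (or gaze-density) map. The sketch below shows the standard static-frame form in NumPy; the paper's extension to the temporal domain (scoring each video frame against the other subjects' gaze) and its foveation details are not given in the abstract, so they are omitted here.

```python
import numpy as np

def normalized_scanpath_saliency(saliency_map, fixations):
    """Normalised Scanpath Saliency: z-score the saliency (or gaze-density) map,
    then average the z-values at the fixated pixel locations. Values above zero
    mean fixations fall on above-average saliency."""
    s = saliency_map.astype(float)
    s = (s - s.mean()) / (s.std() + 1e-12)
    return float(np.mean([s[row, col] for row, col in fixations]))
```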

  1. Atomic structure of self-organizing iridium induced nanowires on Ge(001)

    NASA Astrophysics Data System (ADS)

    Kabanov, N. S.; Heimbuch, R.; Zandvliet, H. J. W.; Saletsky, A. M.; Klavsyuk, A. L.

    2017-05-01

    The atomic structure of self-organizing iridium (Ir) induced nanowires on Ge(001) is studied by density functional theory (DFT) calculations and variable-temperature scanning tunneling microscopy. The Ir induced nanowires are aligned in a direction perpendicular to the Ge(001) substrate dimer rows, have a width of two atoms and are completely kink-less. Density functional theory calculations show that the Ir atoms prefer to dive into the Ge(001) substrate and push up the neighboring Ge substrate atoms. The nanowires are composed of Ge atoms and not Ir atoms as previously assumed. The regions in the vicinity of the nanowires are very dynamic, even at temperatures as low as 77 K. Time-resolved scanning tunneling microscopy measurements reveal that this dynamics is caused by buckled Ge substrate dimers that flip back and forth between their two buckled configurations.

  2. Time-resolved photoelectron spectroscopy of IR-driven electron dynamics in a charge transfer model system.

    PubMed

    Falge, Mirjam; Fröbel, Friedrich Georg; Engel, Volker; Gräfe, Stefanie

    2017-08-02

    If the adiabatic approximation is valid, electrons smoothly adapt to molecular geometry changes. In contrast, as a characteristic of diabatic dynamics, the electron density does not follow the nuclear motion. Recently, we have shown that the asymmetry in time-resolved photoelectron spectra serves as a tool to distinguish between these dynamics [Falge et al., J. Phys. Chem. Lett., 2012, 3, 2617]. Here, we investigate the influence of an additional, moderately intense infrared (IR) laser field, as often applied in attosecond time-resolved experiments, on such asymmetries. This is done using a simple model for coupled electronic-nuclear motion. We calculate time-resolved photoelectron spectra and their asymmetries and demonstrate that the spectra directly map the bound electron-nuclear dynamics. From the asymmetries, we can trace the IR field-induced population transfer and both the field-driven and intrinsic (non-)adiabatic dynamics. This holds true when considering superposition states accompanied by electronic coherences. The latter are observable in the asymmetries for sufficiently short XUV pulses to coherently probe the coupled states. It is thus documented that the asymmetry is a measure for phases in bound electron wave packets and non-adiabatic dynamics.

  3. Determination of cloud fields from analysis of HIRS2/MSU sounding data. [20 channel infrared and 4 channel microwave atmospheric sounders]

    NASA Technical Reports Server (NTRS)

    Susskind, J.; Reuter, D.

    1986-01-01

    IR and microwave remote sensing data collected with the HIRS2 and MSU sensors on the NOAA polar-orbiting satellites were evaluated for their effectiveness as bases for determining cloud cover and cloud physical characteristics. Techniques employed to adjust for day-night alterations in the radiance fields are described, along with computational procedures applied to compare scene pixel values with reference values for clear skies. Sample results are provided for the mean cloud coverage detected over South America and Africa in June 1979, with attention given to concurrent surface pressure and cloud-top pressure values.

  4. Multi exposure image fusion algorithm based on YCbCr space

    NASA Astrophysics Data System (ADS)

    Yang, T. T.; Fang, P. Y.

    2018-05-01

    To solve the problem that scene details and visual effects are difficult to optimize in high dynamic range image synthesis, we propose a multi-exposure image fusion algorithm that processes low dynamic range images in YCbCr space and applies weighted blending to the luminance and chrominance components separately. The experimental results show that the method retains the color of the fused image while balancing details in the bright and dark regions of the high dynamic range image.
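
    The abstract does not give the weighting function, so the sketch below only illustrates the idea of a per-pixel weighted blend in YCbCr: a Gaussian "well-exposedness" weight (a common choice borrowed from Mertens-style exposure fusion) is computed from the luminance channel and applied to luminance and chroma alike. The array layout and sigma are assumptions.

```python
import numpy as np

def fuse_exposures_ycbcr(ycbcr_stack, sigma=0.2):
    """Per-pixel weighted blend of differently exposed frames already converted
    to YCbCr (array of shape N x H x W x 3, values in [0, 1]). The weights
    favour mid-range luminance, so well-exposed pixels dominate the result."""
    y = ycbcr_stack[..., 0]                                # luminance, N x H x W
    weights = np.exp(-0.5 * ((y - 0.5) / sigma) ** 2)      # well-exposedness weight
    weights = weights / (weights.sum(axis=0, keepdims=True) + 1e-12)
    return (weights[..., None] * ycbcr_stack).sum(axis=0)  # blended H x W x 3 image
```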

  5. Temporal dynamics of motor cortex excitability during perception of natural emotional scenes

    PubMed Central

    Borgomaneri, Sara; Gazzola, Valeria

    2014-01-01

    Although it is widely assumed that emotions prime the body for action, the effects of visual perception of natural emotional scenes on the temporal dynamics of the human motor system have scarcely been investigated. Here, we used single-pulse transcranial magnetic stimulation (TMS) to assess motor excitability during observation and categorization of positive, neutral and negative pictures from the International Affective Picture System database. Motor-evoked potentials (MEPs) from TMS of the left motor cortex were recorded from hand muscles, at 150 and 300 ms after picture onset. In the early temporal condition we found an increase in hand motor excitability that was specific for the perception of negative pictures. This early negative bias was predicted by interindividual differences in the disposition to experience aversive feelings (personal distress) in interpersonal emotional contexts. In the later temporal condition, we found that MEPs were similarly increased for both positive and negative pictures, suggesting an increased reactivity to emotionally arousing scenes. By highlighting the temporal course of motor excitability during perception of emotional pictures, our study provides direct neurophysiological support for the evolutionary notions that emotion perception is closely linked to action systems and that emotionally negative events require motor reactions to be more urgently mobilized. PMID:23945998

  6. Guided exploration in virtual environments

    NASA Astrophysics Data System (ADS)

    Beckhaus, Steffi; Eckel, Gerhard; Strothotte, Thomas

    2001-06-01

    We describe an application supporting alternating interaction and animation for the purpose of exploration in a surround-screen projection-based virtual reality system. The exploration of an environment is a highly interactive and dynamic process in which the presentation of objects of interest can give the user guidance while exploring the scene. Previous systems for automatic presentation of models or scenes need either cinematographic rules, direct human interaction, framesets or precalculation (e.g. precalculation of paths to a predefined goal). We report on the development of a system that can deal with rapidly changing user interest in objects of a scene or model as well as with dynamic models and changes of the camera position introduced interactively by the user. It is implemented as a potential-field based camera data generating system. In this paper we describe the implementation of our approach in a virtual art museum on the CyberStage, our surround-screen projection-based stereoscopic display. The paradigm of guided exploration is introduced describing the freedom of the user to explore the museum autonomously. At the same time, if requested by the user, guided exploration provides just-in-time navigational support. The user controls this support by specifying the current field of interest in high-level search criteria. We also present an informal user study evaluating this approach.

  7. MIRAGE: system overview and status

    NASA Astrophysics Data System (ADS)

    Robinson, Richard M.; Oleson, Jim; Rubin, Lane; McHugh, Stephen W.

    2000-07-01

    Santa Barbara Infrared's (SBIR) MIRAGE (Multispectral InfraRed Animation Generation Equipment) is a state-of-the-art dynamic infrared scene projector system. Imagery from the first MIRAGE system was presented to the scene simulation community during last year's SPIE AeroSense 99 Symposium. Since that time, SBIR has delivered five MIRAGE systems. This paper will provide an overview of the MIRAGE system and discuss the current status of the MIRAGE. Included is an update of system hardware, and the current configuration. Proposed upgrades to this configuration and options will be discussed. Updates on the latest installations, applications and measured data will also be presented.

  8. Photophysical dynamics of the efficient emission and photosensitization of [Ir(pqi)2(NN)]+ complexes.

    PubMed

    Zanoni, Kassio P S; Ito, Akitaka; Grüner, Malte; Murakami Iha, Neyde Y; de Camargo, Andrea S S

    2018-01-23

    The photophysical dynamics of three complexes in the highly emissive [Ir(pqi)2(NN)]+ series were investigated, aiming at unique photophysical features and applications in light-emitting and singlet-oxygen-sensitizing research fields. Rational elucidation and Franck-Condon analyses of the observed emission spectra in nitrile solutions at 298 and 77 K reveal the true emissive nature of the lowest-lying triplet excited state (T1), consisting of a hybrid ³MLCT/LC Ir(pqi)→pqi state. Emissive deactivations from T1 occur mainly by very intense, yellow-orange phosphorescence with high quantum yields and radiative rates. The emission nature experimentally verified is corroborated by theoretical calculations (TD-DFT), with T1 arising from a mixing of several transitions induced by spin-orbit coupling, mainly ascribed to ³MLCT/LC Ir(pqi)→pqi with increasing contributions of ³MLCT/LLCT Ir(pqi)→NN. The microsecond-lived emission of T1 is rapidly quenched by molecular oxygen, with efficient generation of singlet oxygen. Our findings show that the photophysics of [Ir(pqi)2(NN)][PF6] complexes is suitable for many applications, from the active layer of electroluminescent devices to photosensitizers for photodynamic therapy and theranostics.

  9. Hebbian learning in a model with dynamic rate-coded neurons: an alternative to the generative model approach for learning receptive fields from natural scenes.

    PubMed

    Hamker, Fred H; Wiltschut, Jan

    2007-09-01

    Most computational models of coding are based on a generative model according to which the feedback signal aims to reconstruct the visual scene as close as possible. We here explore an alternative model of feedback. It is derived from studies of attention and thus, probably more flexible with respect to attentive processing in higher brain areas. According to this model, feedback implements a gain increase of the feedforward signal. We use a dynamic model with presynaptic inhibition and Hebbian learning to simultaneously learn feedforward and feedback weights. The weights converge to localized, oriented, and bandpass filters similar as the ones found in V1. Due to presynaptic inhibition the model predicts the organization of receptive fields within the feedforward pathway, whereas feedback primarily serves to tune early visual processing according to the needs of the task.
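
    A minimal sketch of the two ingredients named above (gain-type feedback and Hebbian learning with normalisation) is given below. It deliberately omits the presynaptic inhibition and the full dynamics of the paper's model; the function name, the rectification and the learning rate are assumptions made only to illustrate how feedback-as-gain differs from feedback-as-reconstruction.

```python
import numpy as np

def hebbian_step(x, W, feedback_gain=None, lr=1e-3):
    """One step of a rate-coded layer in which feedback acts as a multiplicative
    gain on the feedforward drive (rather than as a reconstruction target),
    followed by a Hebbian update with row normalisation to keep weights bounded.
    x: input (n_in,), W: weights (n_out, n_in), feedback_gain: per-unit gain (n_out,)."""
    drive = W @ x
    gain = np.ones_like(drive) if feedback_gain is None else feedback_gain
    y = np.maximum(gain * drive, 0.0)              # gain-modulated, rectified response
    W = W + lr * np.outer(y, x)                    # Hebbian: co-activity strengthens weights
    W = W / (np.linalg.norm(W, axis=1, keepdims=True) + 1e-12)
    return y, W
```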

  10. Structural dynamics inside a functionalized metal–organic framework probed by ultrafast 2D IR spectroscopy

    DOE PAGES

    Nishida, Jun; Tamimi, Amr; Fei, Honghan; ...

    2014-12-15

    One key property of metal-organic frameworks (MOFs) is their structural elasticity. Here we show that 2D IR spectroscopy with pulse-shaping techniques can probe the ultrafast structural fluctuations of MOFs. 2D IR data, obtained from a vibrational probe attached to the linkers of the UiO-66 MOF at low concentration, revealed that the structural fluctuations have time constants of 7 and 670 ps with no solvent. Filling the MOF pores with dimethylformamide (DMF) slows the structural fluctuations by reducing the ability of the MOF to undergo deformations, and the dynamics of the DMF molecules are also greatly restricted. Finally, methodology advances were required to remove the severe light scattering caused by the macroscopic-sized MOF particles, eliminate interfering oscillatory components from the 2D IR data, and address Förster vibrational excitation transfer.

  11. IR thermography for dynamic detection of laminar-turbulent transition

    NASA Astrophysics Data System (ADS)

    Simon, Bernhard; Filius, Adrian; Tropea, Cameron; Grundmann, Sven

    2016-05-01

    This work investigates the potential of infrared (IR) thermography for the dynamic detection of laminar-turbulent transition. The experiments are conducted on a flat plate at velocities of 8-14 m/s, and the transition of the laminar boundary layer to turbulence is forced by a disturbance source which is turned on and off with frequencies up to 10 Hz. Three different heating techniques are used to apply the required difference between fluid and structure temperature: a heated aluminum structure is used as an internal structure heating technique, a conductive paint acts as a surface bounded heater, while an IR heater serves as an example for an external heating technique. For comparison of all heating techniques, a normalization is introduced and the frequency response of the measured IR camera signal is analyzed. Finally, the different heating techniques are compared and consequences for the design of experiments on laminar-turbulent transition are discussed.

  12. Resonant and resistive dual-mode uncooled infrared detectors toward expanded dynamic range and high linearity

    NASA Astrophysics Data System (ADS)

    Li, Xin; Liang, Ji; Zhang, Hongxiang; Yang, Xing; Zhang, Hao; Pang, Wei; Zhang, Menglun

    2017-06-01

    This paper reports an uncooled infrared (IR) detector based on a micromachined piezoelectric resonator operating in resonant and resistive dual-modes. The two sensing modes achieved IR responsivities of 2.5 Hz/nW and 900 μdB/nW, respectively. Compared with the single mode operation, the dual-mode measurement improves the limit of detection by two orders of magnitude and meanwhile maintains high linearity and responsivity in a higher IR intensity range. A combination of the two sensing modes compensates for its own shortcomings and provides a much larger dynamic range, and thus, a wider application field of the proposed detector is realized.

  13. Impact of amorphization on the electronic properties of Zn-Ir-O systems.

    PubMed

    Muñoz Ramo, David; Bristowe, Paul D

    2016-09-01

    We analyze the geometry and electronic structure of a series of amorphous Zn-Ir-O systems using classical molecular dynamics followed by density functional theory taking into account two different charge states of Ir (+3 and  +4). The structures obtained consist of a matrix of interconnected metal-oxygen polyhedra, with Zn adopting preferentially a coordination of 4 and Ir a mixture of coordinations between 4 and 6 that depend on the charge state of Ir and its concentration. The amorphous phases display reduced band gaps compared to crystalline ZnIr2O4 and exhibit localized states near the band edges, which harm their transparency and hole mobility. Increasing amounts of Ir in the Ir(4+) phases decrease the band gap further while not altering it significantly in the Ir(3+) phases. The results are consistent with recent transmittance and resistivity measurements.

  14. Dynamic Denoising of Tracking Sequences

    PubMed Central

    Michailovich, Oleg; Tannenbaum, Allen

    2009-01-01

    In this paper, we describe an approach to the problem of simultaneously enhancing image sequences and tracking the objects of interest represented by the latter. The enhancement part of the algorithm is based on Bayesian wavelet denoising, which has been chosen due to its exceptional ability to incorporate diverse a priori information into the process of image recovery. In particular, we demonstrate that, in dynamic settings, useful statistical priors can come both from some reasonable assumptions on the properties of the image to be enhanced as well as from the images that have already been observed before the current scene. Using such priors forms the main contribution of the present paper which is the proposal of the dynamic denoising as a tool for simultaneously enhancing and tracking image sequences. Within the proposed framework, the previous observations of a dynamic scene are employed to enhance its present observation. The mechanism that allows the fusion of the information within successive image frames is Bayesian estimation, while transferring the useful information between the images is governed by a Kalman filter that is used for both prediction and estimation of the dynamics of tracked objects. Therefore, in this methodology, the processes of target tracking and image enhancement “collaborate” in an interlacing manner, rather than being applied separately. The dynamic denoising is demonstrated on several examples of SAR imagery. The results demonstrated in this paper indicate a number of advantages of the proposed dynamic denoising over “static” approaches, in which the tracking images are enhanced independently of each other. PMID:18482881
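
    The prediction/estimation role of the Kalman filter in this framework can be illustrated with the generic predict and update steps for a constant-velocity track. The matrices and noise levels below are illustrative assumptions, and the coupling to the Bayesian wavelet denoiser described above is omitted; the point is only how estimates from previous frames carry information into the current one.

```python
import numpy as np

def kalman_predict(x, P, F, Q):
    """Propagate the state estimate and covariance through the motion model."""
    return F @ x, F @ P @ F.T + Q

def kalman_update(x, P, z, H, R):
    """Correct the prediction with a new measurement z (e.g. a denoised detection)."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Constant-velocity model for one coordinate of a tracked object (dt = 1 frame).
F = np.array([[1.0, 1.0], [0.0, 1.0]])             # state: [position, velocity]
H = np.array([[1.0, 0.0]])                         # only position is observed
Q = 0.01 * np.eye(2)                               # process noise (illustrative)
R = np.array([[1.0]])                              # measurement noise (illustrative)

x, P = np.zeros(2), np.eye(2)
x, P = kalman_predict(x, P, F, Q)                  # predict where the object will be
x, P = kalman_update(x, P, np.array([1.2]), H, R)  # fuse the new frame's measurement
```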

  15. Impact of seasonality upon the dynamics of a novel pathogen in a seabird colony

    NASA Astrophysics Data System (ADS)

    O'Regan, S. M.

    2008-11-01

    A seasonally perturbed variant of the basic Susceptible-Infected-Recovered (SIR) model in epidemiology is considered in this paper. The effect of seasonality on an IR system of ordinary differential equations describing the dynamics of a novel pathogen, e.g., highly pathogenic avian influenza, in a seabird colony is investigated. The method of Lyapunov functions is used to determine the long-term behaviour of this system. Numerical simulations of the seasonally perturbed IR system indicate that the system exhibits complex dynamics as the amplitude of the seasonal perturbation term is increased. These findings suggest that seasonality may exert a considerable effect on the dynamics of epidemics in a seabird colony.
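
    For reference, a seasonally forced SIR system in its generic textbook form (which may differ in detail from the system analysed in the paper) reads

    \[ \frac{dS}{dt} = \mu N - \beta(t)\,\frac{SI}{N} - \mu S, \qquad \frac{dI}{dt} = \beta(t)\,\frac{SI}{N} - (\gamma + \mu)\,I, \qquad \frac{dR}{dt} = \gamma I - \mu R, \]

    with a sinusoidal transmission rate \( \beta(t) = \beta_0\bigl(1 + \varepsilon\cos(2\pi t)\bigr) \); the amplitude \( \varepsilon \) plays the role of the seasonal perturbation term whose increase drives the complex dynamics reported above.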

  16. Quantifying the microphysical impacts of fire aerosols on clouds in Indonesia using remote sensing observations

    NASA Astrophysics Data System (ADS)

    Tosca, M. G.; Diner, D. J.; Garay, M. J.; Kalashnikova, O. V.

    2012-12-01

    Fire-emitted aerosols modify cloud and precipitation dynamics by acting as cloud condensation nuclei in what is known as the first and second aerosol indirect effect. The cloud response to the indirect effect varies regionally and is not well understood in the highly convective tropics. We analyzed nine years (2003-2011) of aerosol data from the Multi-angle Imaging SpectroRadiometer (MISR), and fire emissions data from the Global Fire Emissions Database, version 3 (GFED3) over southeastern tropical Asia (Indonesia), and identified scenes that contained both a high atmospheric aerosol burden and large surface fire emissions. We then collected scenes from the Cloud Profiling Radar (CPR) on board the CLOUDSAT satellite that corresponded both spatially and temporally to the high-burning scenes from MISR, and identified differences in convective cloud dynamics over areas with varying aerosol optical depths. Differences in overpass times (MISR in the morning, CLOUDSAT in the afternoon) improved our ability to infer that changes in cloud dynamics were a response to increased or decreased aerosol emissions. Our results extended conclusions from initial studies over the Amazon that used remote sensing techniques to identify cloud fraction reductions in high-burning areas (Koren et al., 2004; Rosenfeld, 1999). References: Koren, I., Y.J. Kaufman, L.A. Remer and J.V. Martins (2004), Measurement of the effect of Amazon smoke on inhibition of cloud formation, Science, 303, 1342-1345; Rosenfeld, D. (1999), TRMM observed first direct evidence of smoke from forest fires inhibiting rainfall, Geophys. Res. Lett., 26, 3105.

  17. Utilising E-on Vue and Unity 3D scenes to generate synthetic images and videos for visible signature analysis

    NASA Astrophysics Data System (ADS)

    Madden, Christopher S.; Richards, Noel J.; Culpepper, Joanne B.

    2016-10-01

    This paper investigates the ability to develop synthetic scenes in an image generation tool, E-on Vue, and a gaming engine, Unity 3D, which can be used to generate synthetic imagery of target objects across a variety of conditions in land environments. Developments within these tools and gaming engines have allowed the computer gaming industry to dramatically enhance the realism of the games they develop; however they utilise short cuts to ensure that the games run smoothly in real-time to create an immersive effect. Whilst these short cuts may have an impact upon the realism of the synthetic imagery, they do promise a much more time efficient method of developing imagery of different environmental conditions and to investigate the dynamic aspect of military operations that is currently not evaluated in signature analysis. The results presented investigate how some of the common image metrics used in target acquisition modelling, namely the Δμ1, Δμ2, Δμ3, RSS, and Doyle metrics, perform on the synthetic scenes generated by E-on Vue and Unity 3D compared to real imagery of similar scenes. An exploration of the time required to develop the various aspects of the scene to enhance its realism are included, along with an overview of the difficulties associated with trying to recreate specific locations as a virtual scene. This work is an important start towards utilising virtual worlds for visible signature evaluation, and evaluating how equivalent synthetic imagery is to real photographs.

  18. High Dynamic Range Digital Imaging of Spacecraft

    NASA Technical Reports Server (NTRS)

    Karr, Brian A.; Chalmers, Alan; Debattista, Kurt

    2014-01-01

    The ability to capture engineering imagery with a wide degree of dynamic range during rocket launches is critical for post launch processing and analysis [USC03, NNC86]. Rocket launches often present an extreme range of lightness, particularly during night launches. Night launches present a two-fold problem: capturing detail of the vehicle and scene that is masked by darkness, while also capturing detail in the engine plume.

  19. Human, Nature, Dynamism: The Effects of Content and Movement Perception on Brain Activations during the Aesthetic Judgment of Representational Paintings

    PubMed Central

    Di Dio, Cinzia; Ardizzi, Martina; Massaro, Davide; Di Cesare, Giuseppe; Gilli, Gabriella; Marchetti, Antonella; Gallese, Vittorio

    2016-01-01

    Movement perception and its role in aesthetic experience have been often studied, within empirical aesthetics, in relation to the human body. No such specificity has been defined in neuroimaging studies with respect to contents lacking a human form. The aim of this work was to explore, through functional magnetic resonance imaging (fMRI), how perceived movement is processed during the aesthetic judgment of paintings using two types of content: human subjects and scenes of nature. Participants, untutored in the arts, were shown the stimuli and asked to make aesthetic judgments. Additionally, they were instructed to observe the paintings and to rate their perceived movement in separate blocks. Observation highlighted spontaneous processes associated with aesthetic experience, whereas movement judgment outlined activations specifically related to movement processing. The ratings recorded during aesthetic judgment revealed that nature scenes received higher scores than human content paintings. The imaging data showed similar activation, relative to baseline, for all stimuli in the three tasks, including activation of occipito-temporal areas, posterior parietal, and premotor cortices. Contrast analyses within the aesthetic judgment task showed that human content activated, relative to nature, precuneus, fusiform gyrus, and posterior temporal areas, whose activation was prominent for dynamic human paintings. In contrast, nature scenes activated, relative to human stimuli, occipital and posterior parietal cortex/precuneus, involved in visuospatial exploration and pragmatic coding of movement, as well as central insula. Static nature paintings further activated, relative to dynamic nature stimuli, central and posterior insula. Besides insular activation, which was specific for aesthetic judgment, we found a large overlap in the activation pattern characterizing each stimulus dimension (content and dynamism) across observation, aesthetic judgment, and movement judgment tasks. These findings support the idea that the aesthetic evaluation of artworks depicting both human subjects and nature scenes involves a motor component, and that the associated neural processes occur quite spontaneously in the viewer. Furthermore, considering the functional roles of posterior and central insula, we suggest that nature paintings may evoke aesthetic processes requiring an additional proprioceptive and sensori-motor component implemented by "motor accessibility" to the represented scenario, which is needed to judge the aesthetic value of the observed painting. PMID:26793087

  20. Design and implementation of a scene-dependent dynamically selfadaptable wavefront coding imaging system

    NASA Astrophysics Data System (ADS)

    Carles, Guillem; Ferran, Carme; Carnicer, Artur; Bosch, Salvador

    2012-01-01

    A computational imaging system based on wavefront coding is presented. Wavefront coding provides an extension of the depth-of-field at the expense of a slight reduction of image quality. This trade-off results from the amount of coding used. By using spatial light modulators, a flexible coding is achieved which permits it to be increased or decreased as needed. In this paper a computational method is proposed for evaluating the output of a wavefront coding imaging system equipped with a spatial light modulator, with the aim of making it possible to implement the most suitable coding strength for a given scene. This is achieved in an unsupervised manner, so the whole system acts as a dynamically self-adaptable imaging system. The program presented here controls the spatial light modulator and the camera, and also processes the images in a synchronised way in order to implement the dynamic system in real time. A prototype of the system was implemented in the laboratory and illustrative examples of the performance are reported in this paper.
    Program summary. Program title: DynWFC (Dynamic WaveFront Coding). Catalogue identifier: AEKC_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKC_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 10 483. No. of bytes in distributed program, including test data, etc.: 2 437 713. Distribution format: tar.gz. Programming language: LabVIEW 8.5 with NI Vision and the MinGW C compiler. Computer: Tested on PC (Intel Pentium). Operating system: Tested on Windows XP. Classification: 18. Nature of problem: The program implements an enhanced wavefront coding imaging system able to adapt the degree of coding to the requirements of a specific scene. The program controls the acquisition by a camera, the display of a spatial light modulator and the image processing operations synchronously. The spatial light modulator is used to implement the phase mask with flexibility, given the trade-off between depth-of-field extension and image quality achieved. The action of the program is to evaluate the depth-of-field requirements of the specific scene and subsequently control the coding established by the spatial light modulator, in real time.

  1. Delocalization and stretch-bend mixing of the HOH bend in liquid water

    NASA Astrophysics Data System (ADS)

    Carpenter, William B.; Fournier, Joseph A.; Biswas, Rajib; Voth, Gregory A.; Tokmakoff, Andrei

    2017-08-01

    Liquid water's rich sub-picosecond vibrational dynamics arise from the interplay of different high- and low-frequency modes evolving in a strong yet fluctuating hydrogen bond network. Recent studies of the OH stretching excitations of H2O indicate that they are delocalized over several molecules, raising questions about whether the bending vibrations are similarly delocalized. In this paper, we take advantage of an improved 50 fs time-resolution and broadband infrared (IR) spectroscopy to interrogate the 2D IR lineshape and spectral dynamics of the HOH bending vibration of liquid H2O. Indications of strong bend-stretch coupling are observed in early time 2D IR spectra through a broad excited state absorption that extends from 1500 cm-1 to beyond 1900 cm-1, which corresponds to transitions from the bend to the bend overtone and OH stretching band between 3150 and 3550 cm-1. Pump-probe measurements reveal a fast 180 fs vibrational relaxation time, which results in a hot-ground state spectrum that is the same as observed for water IR excitation at any other frequency. The fastest dynamical time scale is 80 fs for the polarization anisotropy decay, providing evidence for the delocalized or excitonic character of the bend. Normal mode analysis conducted on water clusters extracted from molecular dynamics simulations corroborate significant stretch-bend mixing and indicate delocalization of δHOH on 2-7 water molecules.

  2. Ultrafast Silicon Photonics with Visible to Mid-Infrared Pumping of Silicon Nanocrystals.

    PubMed

    Diroll, Benjamin T; Schramke, Katelyn S; Guo, Peijun; Kortshagen, Uwe R; Schaller, Richard D

    2017-10-11

    Dynamic optical control of infrared (IR) transparency and refractive index is achieved using boron-doped silicon nanocrystals excited with mid-IR optical pulses. Unlike previous silicon-based optical switches, large changes in transmittance are achieved without a fabricated structure by exploiting strong light coupling of the localized surface plasmon resonance (LSPR) produced from free holes of p-type silicon nanocrystals. The choice of optical excitation wavelength allows for selectivity between hole heating and carrier generation through intraband or interband photoexcitation, respectively. Mid-IR optical pumping heats the free holes of p-Si nanocrystals to effective temperatures greater than 3500 K. Increases of the hole effective mass at high effective hole temperatures lead to a subpicosecond change of the dielectric function, resulting in a redshift of the LSPR, modulating mid-IR transmission by as much as 27%, and increasing the index of refraction by more than 0.1 in the mid-IR. Low hole heat capacity dictates subpicosecond hole cooling, substantially faster than carrier recombination, and negligible heating of the Si lattice, permitting mid-IR optical switching at terahertz repetition frequencies. Further, the energetic distribution of holes at high effective temperatures partially reverses the Burstein-Moss effect, permitting the modulation of transmittance at telecommunications wavelengths. The results presented here show that doped silicon, particularly in micro- or nanostructures, is a promising dynamic metamaterial for ultrafast IR photonics.

  3. Ultrafast Silicon Photonics with Visible to Mid-Infrared Pumping of Silicon Nanocrystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diroll, Benjamin T.; Schramke, Katelyn S.; Guo, Peijun

    Dynamic optical control of infrared (IR) transparency and refractive index is achieved using boron-doped silicon nanocrystals excited with mid-IR optical pulses. Also, unlike previous silicon-based optical switches, large changes in transmittance are achieved without a fabricated structure by exploiting strong light coupling of the localized surface plasmon resonance (LSPR) produced from free holes of p-type silicon nanocrystals. The choice of optical excitation wavelength allows selectivity between hole heating and carrier generation through intraband or interband photoexcitation, respectively. Mid-IR optical pumping heats the free holes of p-Si nanocrystals to effective temperatures greater than 3500 K. Increases of the hole effective mass at high effective hole temperatures lead to a sub-picosecond change of the dielectric function resulting in a redshift of the LSPR, modulating mid-IR transmission by as much as 27% and increasing the index of refraction by more than 0.1 in the mid-IR. Low hole heat capacity dictates sub-picosecond hole cooling, substantially faster than carrier recombination, and negligible heating of the Si lattice, permitting mid-IR optical switching at terahertz repetition frequencies. Further, the energetic distribution of holes at high effective temperatures partially reverses the Burstein-Moss effect, permitting modulation of transmittance at telecommunications wavelengths. Lastly, the results presented here show that doped silicon, particularly in micro- or nanostructures, is a promising dynamic metamaterial for ultrafast IR photonics.

  4. Ultrafast Silicon Photonics with Visible to Mid-Infrared Pumping of Silicon Nanocrystals

    DOE PAGES

    Diroll, Benjamin T.; Schramke, Katelyn S.; Guo, Peijun; ...

    2017-09-11

    Dynamic optical control of infrared (IR) transparency and refractive index is achieved using boron-doped silicon nanocrystals excited with mid-IR optical pulses. Also, unlike previous silicon-based optical switches, large changes in transmittance are achieved without a fabricated structure by exploiting strong light coupling of the localized surface plasmon resonance (LSPR) produced from free holes of p-type silicon nanocrystals. The choice of optical excitation wavelength allows selectivity between hole heating and carrier generation through intraband or interband photoexcitation, respectively. Mid-IR optical pumping heats the free holes of p-Si nanocrystals to effective temperatures greater than 3500 K. Increases of the hole effective mass at high effective hole temperatures lead to a sub-picosecond change of the dielectric function resulting in a redshift of the LSPR, modulating mid-IR transmission by as much as 27% and increasing the index of refraction by more than 0.1 in the mid-IR. Low hole heat capacity dictates sub-picosecond hole cooling, substantially faster than carrier recombination, and negligible heating of the Si lattice, permitting mid-IR optical switching at terahertz repetition frequencies. Further, the energetic distribution of holes at high effective temperatures partially reverses the Burstein-Moss effect, permitting modulation of transmittance at telecommunications wavelengths. Lastly, the results presented here show that doped silicon, particularly in micro- or nanostructures, is a promising dynamic metamaterial for ultrafast IR photonics.

  5. High-resolution dynamic imaging and quantitative analysis of lung cancer xenografts in nude mice using clinical PET/CT

    PubMed Central

    Wang, Ying Yi; Wang, Kai; Xu, Zuo Yu; Song, Yan; Wang, Chu Nan; Zhang, Chong Qing; Sun, Xi Lin; Shen, Bao Zhong

    2017-01-01

    Considering that the general application of dedicated small-animal positron emission tomography/computed tomography is limited, clinical PET/CT might be an acceptable alternative in many situations. The aim was to estimate the feasibility of using clinical PET/CT with [F-18]-fluoro-2-deoxy-D-glucose for high-resolution dynamic imaging and quantitative analysis of cancer xenografts in nude mice. Dynamic clinical PET/CT scans were performed on xenografts for 60 min after injection with [F-18]-fluoro-2-deoxy-D-glucose. Scans were reconstructed with or without the SharpIR method in two phases, and the mice were sacrificed to extract major organs and tumors, using ex vivo γ-counting as a reference. Strikingly, we observed that the image quality and the correlation between the quantitative data from clinical PET/CT and the ex vivo counting were better with the SharpIR reconstructions than without. Our data demonstrate that a clinical PET/CT scanner with SharpIR reconstruction is a valuable tool for imaging small animals in preclinical cancer research, offering dynamic imaging parameters, good image quality and accurate data quantification. PMID:28881772

  6. High-resolution dynamic imaging and quantitative analysis of lung cancer xenografts in nude mice using clinical PET/CT.

    PubMed

    Wang, Ying Yi; Wang, Kai; Xu, Zuo Yu; Song, Yan; Wang, Chu Nan; Zhang, Chong Qing; Sun, Xi Lin; Shen, Bao Zhong

    2017-08-08

    Because the general application of dedicated small-animal positron emission tomography/computed tomography (PET/CT) is limited, clinical PET/CT might be an acceptable alternative in many situations. The aim of this study was to estimate the feasibility of using clinical PET/CT with [F-18]-fluoro-2-deoxy-D-glucose for high-resolution dynamic imaging and quantitative analysis of cancer xenografts in nude mice. Dynamic clinical PET/CT scans were performed on xenografts for 60 min after injection of [F-18]-fluoro-2-deoxy-D-glucose. Scans were reconstructed with or without the SharpIR method in two phases, and the mice were then sacrificed to extract major organs and tumors, with ex vivo γ-counting used as a reference. Strikingly, we observed that both the image quality and the correlation between the quantitative data from clinical PET/CT and the ex vivo counting were better with the SharpIR reconstructions than without. Our data demonstrate that a clinical PET/CT scanner with SharpIR reconstruction is a valuable tool for imaging small animals in preclinical cancer research, offering dynamic imaging parameters, good image quality and accurate data quantification.

  7. Enhanced Graphics for Extended Scale Range

    NASA Technical Reports Server (NTRS)

    Hanson, Andrew J.; Chi-Wing Fu, Philip

    2012-01-01

    Enhanced Graphics for Extended Scale Range is a computer program for rendering fly-through views of scene models that include visible objects differing in size by large orders of magnitude. An example would be a scene showing a person in a park at night with the moon, stars, and galaxies in the background sky. Prior graphical computer programs exhibit arithmetic and other anomalies when rendering scenes containing objects that differ enormously in scale and distance from the viewer. The present program dynamically repartitions distance scales of objects in a scene during rendering to eliminate almost all such anomalies in a way compatible with implementation in other software and in hardware accelerators. By assigning depth ranges corresponding to rendering precision requirements, either automatically or under program control, this program spaces out object scales to match the precision requirements of the rendering arithmetic. This action includes an intelligent partition of the depth buffer ranges to avoid known anomalies from this source. The program is written in C++, using OpenGL, GLUT, and GLUI standard libraries, and nVidia GEForce Vertex Shader extensions. The program has been shown to work on several computers running UNIX and Windows operating systems.
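    As a rough sketch of the depth-repartitioning idea described above (the binning rule and the max_ratio parameter are illustrative choices of ours, not the program's actual logic), a huge depth range can be split into per-pass bins whose far/near ratio stays small enough for the available depth-buffer precision:

```python
import math

def partition_depth_range(near, far, max_ratio=1.0e4):
    """Split [near, far] into per-pass depth bins whose far/near ratio never
    exceeds max_ratio, so each rendering pass keeps adequate depth-buffer
    precision. Illustrative sketch only; the program described above chooses
    its partition automatically or under program control."""
    n_bins = max(1, math.ceil(math.log(far / near) / math.log(max_ratio)))
    ratio = (far / near) ** (1.0 / n_bins)
    bins = [(near * ratio**i, near * ratio**(i + 1)) for i in range(n_bins)]
    return bins[::-1]  # render the farthest bin first, clearing depth between passes

# e.g. from 1 m in front of the viewer out to galactic distances (~1e21 m)
for zn, zf in partition_depth_range(1.0, 1e21):
    print(f"pass: near = {zn:.3e}  far = {zf:.3e}")
```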

  8. Context matters: Anterior and posterior cortical midline responses to sad movie scenes.

    PubMed

    Schlochtermeier, L H; Pehrs, C; Bakels, J-H; Jacobs, A M; Kappelhoff, H; Kuchinke, L

    2017-04-15

    Narrative movies can create powerful emotional responses. While recent research has advanced the understanding of neural networks involved in immersive movie viewing, their modulation within a movie's dynamic context remains inconclusive. In this study, 24 healthy participants passively watched sad scene climaxes taken from 24 romantic comedies, while brain activity was measured using functional magnetic resonance imaging (fMRI). To study effects of context, the sad scene climaxes were presented with either coherent scene context, replaced non-coherent context or without context. In a second viewing, the same clips were rated continuously for sadness. The ratings varied over time with peaks of experienced sadness within the assumed climax intervals. Activations in anterior and posterior cortical midline regions increased if presented with both coherent and replaced context, while activation in the temporal gyri decreased. This difference was more pronounced for the coherent context condition. Psychophysiological interaction (PPI) analyses showed a context-dependent coupling of midline regions with occipital visual and sub-cortical reward regions. Our results demonstrate the pivotal role of midline structures and their interaction with perceptual and reward areas in processing contextually embedded socio-emotional information in movies. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Decision Aiding in Europe: Assessment Report,

    DTIC Science & Technology

    1983-05-26

    ... does not need extreme realism; rather, he needs a dynamic scene represen[tation] ... combined to yield an attractive index of mental workload. In the same ... graphic functions but are not specifically European. Cinematic ... multicriteria aspirations are often contradictory and cannot be achieved simulta[neously] ...

  10. Reflectance of vegetation, soil, and water

    NASA Technical Reports Server (NTRS)

    Wiegand, C. L. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. The ability to read the 24-channel MSS CCT tapes, select specified agricultural land use areas from the CCT, and perform multivariate statistical and pattern recognition analyses has been demonstrated. The 5 optimum channels chosen for classifying an agricultural scene were, in the order of their selection, the far-red visible, short reflective IR, visible blue, thermal infrared, and ultraviolet portions of the electromagnetic spectrum. Although chosen by a training set containing only vegetal categories, the optimum 4 channels discriminated pavement, water, bare soil, and building roofs, as well as the vegetal categories. Among the vegetal categories, sugar cane and cotton had distinctive signatures that distinguished them from grass and citrus. Acreages estimated spectrally by the computer for the test scene were acceptably close to acreages estimated from aerial photographs for cotton, sugar cane, and water. Many nonfarmable land resolution elements representing drainage ditch, field road, and highway right-of-way as well as farm headquarters area fell into the grass, bare soil plus weeds, and citrus categories and lessened the accuracy of the farmable acreage estimates in these categories. The expertise developed using the 24-channel data will be applied to the ERTS-1 data.

  11. An Enhanced Algorithm for Automatic Radiometric Harmonization of High-Resolution Optical Satellite Imagery Using Pseudoinvariant Features and Linear Regression

    NASA Astrophysics Data System (ADS)

    Langheinrich, M.; Fischer, P.; Probeck, M.; Ramminger, G.; Wagner, T.; Krauß, T.

    2017-05-01

    The growing number of available optical remote sensing data providing large spatial and temporal coverage enables the coherent and gapless observation of the earth's surface on the scale of whole countries or continents. To produce datasets of that size, individual satellite scenes have to be stitched together forming so-called mosaics. Here the problem arises that the different images feature varying radiometric properties depending on the momentary acquisition conditions. The interpretation of optical remote sensing data is to a great extent based on the analysis of the spectral composition of an observed surface reflection. Therefore the normalization of all images included in a large image mosaic is necessary to ensure consistent results concerning the application of procedures to the whole dataset. In this work an algorithm is described which enables the automated spectral harmonization of satellite images to a reference scene. As the proposed algorithm has already demonstrated stable and satisfactory performance in operational use, processing a large number of SPOT-4/-5, IRS LISS-III and Landsat-5 scenes in the frame of the European Environment Agency's Copernicus/GMES Initial Operations (GIO) High-Resolution Layer (HRL) Forest mapping for 20 Western, Central and (South)Eastern European countries, its reliability is further evaluated for application to newer Sentinel-2 multispectral imaging products. The results show that the algorithm is comparably efficient for the processing of satellite image data from sources other than the sensor configurations it was originally designed for.
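    A minimal sketch of the pseudoinvariant-feature (PIF) linear-regression step described above; here the PIF mask is supplied by the caller, whereas the operational algorithm selects PIFs automatically and applies further quality checks:

```python
import numpy as np

def harmonize_band(target, reference, pif_mask):
    """Radiometrically align one band of `target` to `reference` using a
    per-band linear fit over pseudoinvariant-feature (PIF) pixels.
    Illustrative sketch only, assuming a boolean PIF mask is already known."""
    x = target[pif_mask].astype(np.float64)
    y = reference[pif_mask].astype(np.float64)
    gain, offset = np.polyfit(x, y, deg=1)   # least-squares line y = gain*x + offset
    return gain * target + offset

# toy example: a scene that is 10% too bright with a small additive bias
rng = np.random.default_rng(0)
reference = rng.uniform(0, 1000, size=(100, 100))
target = 1.1 * reference + 25 + rng.normal(0, 2, size=reference.shape)
pif_mask = rng.uniform(size=reference.shape) < 0.05   # pretend 5% of pixels are PIFs
corrected = harmonize_band(target, reference, pif_mask)
print(np.abs(corrected - reference).mean())           # residual near the 2-count noise level
```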

  12. Low-cost thermal-IR imager for an Earth observation microsatellite

    NASA Astrophysics Data System (ADS)

    Oelrich, Brian D.; Underwood, Craig I.

    2017-11-01

    A new class of thermal infrared (TIR) Earth Observation (EO) data will become available with the flight of miniature TIR EO instruments in a multiple micro-satellite constellation. This data set will provide a unique service for those wishing to analyse trends or rapidly detect anomalous changes in the TIR characteristics of the Earth's surface or atmosphere (e.g. fire detection). Following a preliminary study of potential mission applications, uncooled commercial-off-the-shelf (COTS) technology was selected to form the basis of a low-cost, compact instrument capable of complementing existing visible and near IR EO capabilities on a sub-100kg Surrey micro-satellite. The preliminary 2-3 kg instrument concept has been designed to yield a 325 m ground sample distance over a 200 km swath width from a constellation altitude of 700 km. The radiometric performance, enhanced with time-delayed integration (TDI), is expected to yield a NETD less than 0.5 K for a 300 K ground scene. Fabrication and characterization of a space-ready instrument is planned for late 2004.
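    The quoted benefit of time-delayed integration (TDI) can be sanity-checked with the usual first-order relation (a textbook estimate, not a figure from this instrument): averaging $N_{\mathrm{TDI}}$ stages reduces temporal noise roughly as

    $$\mathrm{NETD}_{\mathrm{TDI}}\approx\frac{\mathrm{NETD}_{\mathrm{single}}}{\sqrt{N_{\mathrm{TDI}}}},$$

    so, with purely hypothetical numbers, a single-stage NETD of 1.5 K combined with 16 TDI stages would give roughly 0.38 K, inside the sub-0.5 K target quoted for a 300 K scene.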

  13. Radiometric modeling and calibration of the Geostationary Imaging Fourier Transform Spectrometer (GIFTS) ground based measurement experiment

    NASA Astrophysics Data System (ADS)

    Tian, Jialin; Smith, William L.; Gazarik, Michael J.

    2008-12-01

    The ultimate remote sensing benefits of the high resolution Infrared radiance spectrometers will be realized with their geostationary satellite implementation in the form of imaging spectrometers. This will enable dynamic features of the atmosphere's thermodynamic fields and pollutant and greenhouse gas constituents to be observed for revolutionary improvements in weather forecasts and more accurate air quality and climate predictions. As an important step toward realizing this application objective, the Geostationary Imaging Fourier Transform Spectrometer (GIFTS) Engineering Demonstration Unit (EDU) was successfully developed under the NASA New Millennium Program, 2000-2006. The GIFTS-EDU instrument employs three focal plane arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The GIFTS calibration is achieved using internal blackbody calibration references at ambient (260 K) and hot (286 K) temperatures. In this paper, we introduce a refined calibration technique that utilizes Principal Component (PC) analysis to compensate for instrument distortions and artifacts, thereby enhancing the absolute calibration accuracy. This method is applied to data collected during the GIFTS Ground Based Measurement (GBM) experiment, together with simultaneous observations by the accurately calibrated AERI (Atmospheric Emitted Radiance Interferometer), both simultaneously zenith viewing the sky through the same external scene mirror at ten-minute intervals throughout a cloudless day at Logan, Utah on September 13, 2006. The accurately calibrated GIFTS radiances are produced using the first four PC scores in the GIFTS-AERI regression model. Temperature and moisture profiles retrieved from the PC-calibrated GIFTS radiances are verified against radiosonde measurements collected throughout the GIFTS sky measurement period. Using the GIFTS GBM calibration model, we compute the calibrated radiances from data collected during the moon tracking and viewing experiment events, from which we derive the lunar surface temperature and emissivity associated with the moon viewing measurements.
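    A schematic sketch of the principal-component regression idea described above, i.e. regressing the leading PC scores of the instrument spectra against coincident reference spectra; the synthetic data and the simple least-squares fit are our own illustration, not the GIFTS-AERI model itself:

```python
import numpy as np

def pc_regression_calibration(inst, ref, n_pc=4):
    """Fit a linear model from the first n_pc principal-component scores of the
    instrument spectra to coincident reference spectra, then apply it.
    `inst`, `ref`: (n_obs, n_channels) arrays on a common spectral grid."""
    mean = inst.mean(axis=0)
    anom = inst - mean
    _, _, vt = np.linalg.svd(anom, full_matrices=False)   # principal components via SVD
    scores = anom @ vt[:n_pc].T                           # (n_obs, n_pc)
    design = np.hstack([np.ones((len(inst), 1)), scores])
    coeffs, *_ = np.linalg.lstsq(design, ref, rcond=None)
    return design @ coeffs                                # calibrated radiances

# synthetic check: low-rank "spectra" plus a simple gain/offset distortion
rng = np.random.default_rng(1)
basis = rng.normal(size=(3, 50))                          # 3 underlying spectral shapes
inst = rng.normal(size=(200, 3)) @ basis + rng.normal(0, 0.05, size=(200, 50))
ref = 0.9 * inst + 5.0                                    # pretend reference calibration
cal = pc_regression_calibration(inst, ref)
print(np.abs(cal - ref).mean())                           # small residual
```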

  14. Radiometric Modeling and Calibration of the Geostationary Imaging Fourier Transform Spectrometer (GIFTS)Ground Based Measurement Experiment

    NASA Technical Reports Server (NTRS)

    Tian, Jialin; Smith, William L.; Gazarik, Michael J.

    2008-01-01

    The ultimate remote sensing benefits of the high resolution Infrared radiance spectrometers will be realized with their geostationary satellite implementation in the form of imaging spectrometers. This will enable dynamic features of the atmosphere's thermodynamic fields and pollutant and greenhouse gas constituents to be observed for revolutionary improvements in weather forecasts and more accurate air quality and climate predictions. As an important step toward realizing this application objective, the Geostationary Imaging Fourier Transform Spectrometer (GIFTS) Engineering Demonstration Unit (EDU) was successfully developed under the NASA New Millennium Program, 2000-2006. The GIFTS-EDU instrument employs three focal plane arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The GIFTS calibration is achieved using internal blackbody calibration references at ambient (260 K) and hot (286 K) temperatures. In this paper, we introduce a refined calibration technique that utilizes Principal Component (PC) analysis to compensate for instrument distortions and artifacts, thereby enhancing the absolute calibration accuracy. This method is applied to data collected during the GIFTS Ground Based Measurement (GBM) experiment, together with simultaneous observations by the accurately calibrated AERI (Atmospheric Emitted Radiance Interferometer), both simultaneously zenith viewing the sky through the same external scene mirror at ten-minute intervals throughout a cloudless day at Logan, Utah on September 13, 2006. The accurately calibrated GIFTS radiances are produced using the first four PC scores in the GIFTS-AERI regression model. Temperature and moisture profiles retrieved from the PC-calibrated GIFTS radiances are verified against radiosonde measurements collected throughout the GIFTS sky measurement period. Using the GIFTS GBM calibration model, we compute the calibrated radiances from data collected during the moon tracking and viewing experiment events, from which we derive the lunar surface temperature and emissivity associated with the moon viewing measurements.

  15. Dynamic thermal signature prediction for real-time scene generation

    NASA Astrophysics Data System (ADS)

    Christie, Chad L.; Gouthas, Efthimios (Themie); Williams, Owen M.; Swierkowski, Leszek

    2013-05-01

    At DSTO, a real-time scene generation framework, VIRSuite, has been developed in recent years, within which trials data are predominantly used for modelling the radiometric properties of the simulated objects. Since in many cases the data are insufficient, a physics-based simulator capable of predicting the infrared signatures of objects and their backgrounds has been developed as a new VIRSuite module. It includes transient heat conduction within the materials, and boundary conditions that take into account the heat fluxes due to solar radiation, wind convection and radiative transfer. In this paper, an overview is presented, covering both the steady-state and transient performance.
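    A minimal sketch of the kind of transient calculation described above: one-dimensional heat conduction through a slab with a surface boundary condition combining solar loading, wind convection and radiative exchange. All material and weather values below are hypothetical placeholders, not VIRSuite parameters:

```python
import numpy as np

def surface_temperature_history(hours=24, dx=0.01, nx=20, dt=1.0,
                                k=1.0, rho=2000.0, cp=900.0,
                                h_conv=10.0, t_air=290.0, emissivity=0.9,
                                solar_peak=600.0, absorptivity=0.7):
    """Explicit 1-D transient conduction through a slab with a solar/convective/
    radiative boundary at the surface and a fixed temperature at depth.
    Illustrative sketch only (e.g. the sky is crudely assumed to radiate at
    air temperature)."""
    sigma = 5.670e-8                      # Stefan-Boltzmann constant, W m^-2 K^-4
    alpha = k / (rho * cp)                # thermal diffusivity, m^2 s^-1
    assert alpha * dt / dx**2 < 0.5       # explicit-scheme stability condition
    T = np.full(nx, t_air)                # initial temperature profile, K
    history = []
    for step in range(int(hours * 3600 / dt)):
        t = step * dt
        sun = max(0.0, solar_peak * np.sin(2 * np.pi * t / 86400.0))  # day/night cycle
        q = (absorptivity * sun
             + h_conv * (t_air - T[0])
             + emissivity * sigma * (t_air**4 - T[0]**4))   # net surface flux, W m^-2
        Tn = T.copy()
        Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
        Tn[0] = T[0] + dt / (rho * cp * dx) * (q + k * (T[1] - T[0]) / dx)
        Tn[-1] = t_air                     # deep boundary held at air temperature
        T = Tn
        if step % 3600 == 0:
            history.append(T[0])           # hourly surface temperature, K
    return history

print([round(v, 1) for v in surface_temperature_history()])
```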

  16. Approximating SIR-B response characteristics and estimating wave height and wavelength for ocean imagery

    NASA Technical Reports Server (NTRS)

    Tilley, David G.

    1987-01-01

    NASA Space Shuttle Challenger SIR-B ocean scenes are used to derive directional wave spectra for which speckle noise is modeled as a function of Rayleigh random phase coherence downrange and Poisson random amplitude errors inherent in the Doppler measurement of along-track position. A Fourier filter that preserves SIR-B image phase relations is used to correct the stationary and dynamic response characteristics of the remote sensor and scene correlator, as well as to subtract an estimate of the speckle noise component. A two-dimensional map of sea surface elevation is obtained after the filtered image is corrected for both random and deterministic motions.

  17. The Orbital Maneuvering Vehicle Training Facility visual system concept

    NASA Technical Reports Server (NTRS)

    Williams, Keith

    1989-01-01

    The purpose of the Orbital Maneuvering Vehicle (OMV) Training Facility (OTF) is to provide effective training for OMV pilots. A critical part of the training environment is the Visual System, which will simulate the video scenes produced by the OMV Closed-Circuit Television (CCTV) system. The simulation will include camera models, dynamic target models, moving appendages, and scene degradation due to the compression/decompression of video signal. Video system malfunctions will also be provided to ensure that the pilot is ready to meet all challenges the real-world might provide. One possible visual system configuration for the training facility that will meet existing requirements is described.

  18. Monitoring combat wound healing by IR hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Howle, Chris R.; Spear, Abigail M.; Gazi, Ehsan; Crane, Nicole J.

    2016-03-01

    In recent conflicts, battlefield injuries consist largely of extensive soft tissue injuries from blasts and high energy projectiles, including gunshot wounds. Repair of these large, traumatic wounds requires aggressive surgical treatment, including multiple surgical debridements to remove devitalised tissue and to reduce bacterial load. Identifying those patients with wound complications, such as infection and impaired healing, could greatly assist health care teams in providing the most appropriate and personalised care for combat casualties. Candidate technologies to deliver this benefit include the fusion of imaging and optical spectroscopy to enable rapid identification of key markers. Hence, a novel system based on IR negative contrast imaging (NCI) is presented that employs an optical parametric oscillator (OPO) source comprising a periodically-poled LiNbO3 (PPLN) crystal. The crystal operates in the shortwave and midwave IR spectral regions (ca. 1.5 - 1.9 μm and 2.4 - 3.8 μm, respectively). Wavelength tuning is achieved by translating the crystal within the pump beam. System size and complexity are minimised by the use of single element detectors and the intracavity OPO design. Images are composed by raster scanning the monochromatic beam over the scene of interest; the reflection and/or absorption of the incident radiation by target materials and their surrounding environment provide a method for spatial location. Initial results using the NCI system to characterise wound biopsies are presented here.

  19. Use of a compact range approach to evaluate rf and dual-mode missiles

    NASA Astrophysics Data System (ADS)

    Willis, Kenneth E.; Weiss, Yosef

    2000-07-01

    This paper describes a hardware-in-the-loop (HWIL) system developed for testing Radio Frequency (RF), Infra-Red (IR), and Dual-Mode missile seekers. The system consists of a unique hydraulic five-axis (three seeker axes plus two target axes) Flight Motion Table (FMT), an off-axis parabolic reflector, and electronics required to generate the signals to the RF feeds. RF energy that simulates the target is fed into the reflector from three orthogonal feeds mounted on the inner target axis, at the focal point area of the parabolic reflector. The parabolic reflector, together with the three RF feeds (the Compact Range), effectively produces a far-field image of the target. Both FMT target axis motion and electronic control of the RF beams (deflection) modify the simulated line-of-sight target angles. Multiple targets, glint, multi-path, ECM, and clutter can be introduced electronically. To evaluate dual-mode seekers, the center section of the parabolic reflector is replaced with an IR- transparent, but RF-reflective section. An IR scene projector mounts to the FMT target axes, with its image focused on the intersection of the FMT seeker axes. The system eliminates the need for a large anechoic chamber and 'Target Wall' or target motion system used with conventional HWIL systems. This reduces acquisition and operating costs of the facility.

  20. Modeling and performance assessment in QinetiQ of EO and IR airborne reconnaissance systems

    NASA Astrophysics Data System (ADS)

    Williams, John W.; Potter, Gary E.

    2002-11-01

    QinetiQ are the technical authority responsible for specifying the performance requirements for the procurement of airborne reconnaissance systems, on behalf of the UK MoD. They are also responsible for acceptance of delivered systems, overseeing and verifying the installed system performance as predicted and then assessed by the contractor. Measures of functional capability are central to these activities. The conduct of these activities utilises the broad technical insight and wide range of analysis tools and models available within QinetiQ. This paper focuses on the tools, methods and models that are applicable to systems based on EO and IR sensors. The tools, methods and models are described, and representative output for systems that QinetiQ has been responsible for is presented. The principal capability applicable to EO and IR airborne reconnaissance systems is the STAR (Simulation Tools for Airborne Reconnaissance) suite of models. STAR generates predictions of performance measures such as GRD (Ground Resolved Distance) and GIQE (General Image Quality Equation) NIIRS (National Imagery Interpretability Rating Scale). It also generates images representing sensor output, using the scene generation software CAMEO-SIM and the imaging sensor model EMERALD. The simulated image 'quality' is fully correlated with the predicted non-imaging performance measures. STAR also generates image and table data that is compliant with STANAG 7023, which may be used to test ground station functionality.

  1. Boat, wake, and wave real-time simulation

    NASA Astrophysics Data System (ADS)

    Świerkowski, Leszek; Gouthas, Efthimios; Christie, Chad L.; Williams, Owen M.

    2009-05-01

    We describe the extension of our real-time scene generation software VIRSuite to include the dynamic simulation of small boats and their wakes within an ocean environment. Extensive use has been made of the programmability available in the current generation of GPUs. We have demonstrated that real-time simulation is feasible, even including such complexities as dynamical calculation of the boat motion, wake generation and calculation of an FFT-generated sea state.
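    A small sketch of FFT-based sea-state synthesis in the spirit described above; the Phillips-type spectrum, grid size and wind speed are illustrative choices, not the VIRSuite implementation, which runs on the GPU and also animates the field over time:

```python
import numpy as np

def fft_sea_surface(n=256, patch=100.0, wind_speed=8.0, g=9.81, seed=0):
    """Generate a random sea-surface height field by shaping complex white
    noise with a Phillips-type spectrum and inverse-FFT'ing it.
    Minimal static sketch (no time evolution, no wind-direction weighting)."""
    rng = np.random.default_rng(seed)
    k = 2 * np.pi * np.fft.fftfreq(n, d=patch / n)   # wavenumbers, rad/m
    kx, ky = np.meshgrid(k, k)
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                                   # avoid divide-by-zero at k = 0
    L = wind_speed**2 / g                            # largest wave sustained by the wind
    phillips = np.exp(-1.0 / (k2 * L**2)) / k2**2    # isotropic Phillips-like spectrum
    phillips[0, 0] = 0.0                             # remove the mean component
    noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return np.fft.ifft2(noise * np.sqrt(phillips)).real

h = fft_sea_surface()
print(h.shape, round(float(h.std()), 4))
```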

  2. Taming Crowded Visual Scenes

    DTIC Science & Technology

    2014-08-12

    Nolan Warner, Mubarak Shah. Tracking in Dense Crowds Using Prominence and Neighborhood Motion Concurrence, IEEE Transactions on Pattern Analysis ... of computer vision, computer graphics and evacuation dynamics by providing a common platform, and provides ... areas that includes Computer Vision, Computer Graphics, and Pedestrian Evacuation Dynamics. Despite the

  3. High dynamic range hyperspectral imaging for camouflage performance test and evaluation

    NASA Astrophysics Data System (ADS)

    Pearce, D.; Feenan, J.

    2016-10-01

    This paper demonstrates the use of high dynamic range processing applied to the specific technique of hyperspectral imaging with linescan spectrometers. The technique provides an improvement in signal to noise for reflectance estimation. This is demonstrated for field measurements of rural scenes collected with a ground-based linescan spectrometer. Once fully developed, the specific application is expected to improve the colour estimation approaches and consequently the test and evaluation accuracy of camouflage performance tests. Data are presented on both field and laboratory experiments that have been used to evaluate the improvements granted by the adoption of high dynamic range data acquisition in the field of hyperspectral imaging. High dynamic range imaging is well suited to the hyperspectral domain due to the large variation in solar irradiance across the visible and short wave infra-red (SWIR) spectrum coupled with the wavelength dependence of the nominal silicon detector response. Under field measurement conditions it is generally impractical to provide artificial illumination; consequently, an adaptation of the hyperspectral imaging and reflectance estimation process has been developed to accommodate the solar spectrum. This is shown to improve the signal to noise ratio for the reflectance estimation process of scene materials in the 400-500 nm and 700-900 nm regions.
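    A minimal sketch of the exposure-bracketing idea behind the high dynamic range acquisition described above; it assumes a linear, dark-corrected sensor and a static scene, and the weighting and saturation threshold are illustrative choices:

```python
import numpy as np

def merge_exposures(frames, exposure_times, saturation=0.95):
    """Merge bracketed exposures of the same static scene into one
    high-dynamic-range radiance estimate: scale each frame by its exposure
    time and average, ignoring saturated samples."""
    frames = np.asarray(frames, dtype=np.float64)
    times = np.asarray(exposure_times, dtype=np.float64)[:, None, None]
    valid = frames < saturation                  # drop clipped samples
    scaled = frames / times                      # per-frame radiance estimate
    weight = valid * times                       # longer exposures weigh more (better SNR)
    return (scaled * weight).sum(axis=0) / np.maximum(weight.sum(axis=0), 1e-12)

def reflectance(scene_hdr, white_hdr):
    """Reflectance estimate against a white reference measured the same way."""
    return scene_hdr / np.maximum(white_hdr, 1e-12)

# synthetic check: bright pixels saturate in the long exposure but are recovered
rng = np.random.default_rng(2)
truth = rng.uniform(0.01, 2.0, size=(4, 8))
times = [0.1, 0.4, 1.6]
frames = [np.clip(truth * t + rng.normal(0, 0.005, truth.shape), 0, 1) for t in times]
print(np.abs(merge_exposures(frames, times) - truth).max())
```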

  4. Temporal dynamics of motor cortex excitability during perception of natural emotional scenes.

    PubMed

    Borgomaneri, Sara; Gazzola, Valeria; Avenanti, Alessio

    2014-10-01

    Although it is widely assumed that emotions prime the body for action, the effects of visual perception of natural emotional scenes on the temporal dynamics of the human motor system have scarcely been investigated. Here, we used single-pulse transcranial magnetic stimulation (TMS) to assess motor excitability during observation and categorization of positive, neutral and negative pictures from the International Affective Picture System database. Motor-evoked potentials (MEPs) from TMS of the left motor cortex were recorded from hand muscles, at 150 and 300 ms after picture onset. In the early temporal condition we found an increase in hand motor excitability that was specific for the perception of negative pictures. This early negative bias was predicted by interindividual differences in the disposition to experience aversive feelings (personal distress) in interpersonal emotional contexts. In the later temporal condition, we found that MEPs were similarly increased for both positive and negative pictures, suggesting an increased reactivity to emotionally arousing scenes. By highlighting the temporal course of motor excitability during perception of emotional pictures, our study provides direct neurophysiological support for the evolutionary notions that emotion perception is closely linked to action systems and that emotionally negative events require motor reactions to be more urgently mobilized. © The Author (2013). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  5. Investigating the benefits of scene linking for a pathway HMD: from laboratory flight experiments to flight tests

    NASA Astrophysics Data System (ADS)

    Schmerwitz, Sven; Többen, Helmut; Lorenz, Bernd; Iijima, Tomoko; Kuritz-Kaiser, Anthea

    2006-05-01

    Pathway-in-the-sky displays enable pilots to accurately fly difficult trajectories. However, these displays may drive pilots' attention to the aircraft guidance task at the expense of other tasks, particularly when the pathway display is located head-down. A pathway HUD may be a viable solution to overcome this disadvantage. Moreover, the pathway may mitigate the perceptual segregation between the static near domain and the dynamic far domain and hence may improve attention switching between both sources. In order to more comprehensively overcome the perceptual near-to-far domain disconnect, alphanumeric symbols could be attached to the pathway, leading to a HUD design concept called 'scene-linking'. Two studies are presented that investigated this concept. The first study used a simplified laboratory flight experiment. Pilots (N=14) flew a curved trajectory through mountainous terrain and had to detect display events (discrete changes in a command speed indicator to be matched with current speed) and outside scene events (hostile SAM station on ground). The speed indicators were presented in superposition to the scenery either in fixed position or scene-linked to the pathway. Outside scene event detection was found to be improved with scene linking; however, flight-path tracking was markedly deteriorated. In the second study a scene-linked pathway concept was implemented on a monocular retinal scanning HMD and tested in real flights on a Do228 involving 5 test pilots. The flight test focused mainly on usability issues of the display in combination with an optical head tracker. Visual and instrument departure and approach tasks were evaluated comparing HMD navigation with standard instrument or terrestrial navigation. The study revealed limitations of the HMD regarding its see-through capability, field of view, weight and wearing comfort, which were shown to have a strong influence on pilot acceptance rather than calling the display concept itself into question.

  6. Bringing color to emotion: The influence of color on attentional bias to briefly presented emotional images.

    PubMed

    Bekhtereva, Valeria; Müller, Matthias M

    2017-10-01

    Is color a critical feature in emotional content extraction and involuntary attentional orienting toward affective stimuli? Here we used briefly presented emotional distractors to investigate the extent to which color information can influence the time course of attentional bias in early visual cortex. While participants performed a demanding visual foreground task, complex unpleasant and neutral background images were displayed in color or grayscale format for a short period of 133 ms and were immediately masked. Such a short presentation poses a challenge for visual processing. In the visual detection task, participants attended to flickering squares that elicited the steady-state visual evoked potential (SSVEP), allowing us to analyze the temporal dynamics of the competition for processing resources in early visual cortex. Concurrently we measured the visual event-related potentials (ERPs) evoked by the unpleasant and neutral background scenes. The results showed (a) that the distraction effect was greater with color than with grayscale images and (b) that it lasted longer with colored unpleasant distractor images. Furthermore, classical and mass-univariate ERP analyses indicated that, when presented in color, emotional scenes elicited more pronounced early negativities (N1-EPN) relative to neutral scenes, than when the scenes were presented in grayscale. Consistent with neural data, unpleasant scenes were rated as being more emotionally negative and received slightly higher arousal values when they were shown in color than when they were presented in grayscale. Taken together, these findings provide evidence for the modulatory role of picture color on a cascade of coordinated perceptual processes: by facilitating the higher-level extraction of emotional content, color influences the duration of the attentional bias to briefly presented affective scenes in lower-tier visual areas.

  7. Impact of age-related macular degeneration on object searches in realistic panoramic scenes.

    PubMed

    Thibaut, Miguel; Tran, Thi-Ha-Chau; Szaffarczyk, Sebastien; Boucart, Muriel

    2018-05-01

    This study investigated whether realistic immersive conditions with dynamic indoor scenes presented on a large, hemispheric panoramic screen covering 180° of the visual field improved the visual search abilities of participants with age-related macular degeneration (AMD). Twenty-one participants with AMD, 16 age-matched controls and 16 young observers were included. Realistic indoor scenes were presented on a panoramic five metre diameter screen. Twelve different objects were used as targets. The participants were asked to search for a target object, shown on paper before each trial, within a room composed of various objects. A joystick was used for navigation within the scene views. A target object was present in 24 trials and absent in 24 trials. The percentage of correct detection of the target, the percentage of false alarms (that is, the detection of the target when it was absent), the number of scene views explored and the search time were measured. The search time was slower for participants with AMD than for the age-matched controls, who in turn were slower than the young participants. The participants with AMD were able to accomplish the task with a performance of 75 per cent correct detections. This was slightly lower than older controls (79.2 per cent) while young controls were at ceiling (91.7 per cent). Errors were mainly due to false alarms resulting from confusion between the target object and another object present in the scene in the target-absent trials. The outcomes of the present study indicate that, under realistic conditions, although slower than age-matched, normally sighted controls, participants with AMD were able to accomplish visual searches of objects with high accuracy. © 2017 Optometry Australia.

  8. Adaptive Correlation Concepts for Non-Compatible Imagery.

    DTIC Science & Technology

    1981-10-31

    5.4 Ohio Files 26 and 27 Image Segments . . . . 34; 5.5 HSV Files 9 and 10 Image Segments . . . . 35; 5.6 Scene 1, Vertical Profiles Through Peak ... Simulation Options Summary ... cross-correlation and absolute-difference metrics ... different spectral bands ...

  9. Effect of Ischemia Duration and Protective Interventions on the Temporal Dynamics of Tissue Composition After Myocardial Infarction

    PubMed Central

    Fernández-Jiménez, Rodrigo; Galán-Arriola, Carlos; Sánchez-González, Javier; Agüero, Jaume; López-Martín, Gonzalo J.; Gomez-Talavera, Sandra; Garcia-Prieto, Jaime; Benn, Austin; Molina-Iracheta, Antonio; Barreiro-Pérez, Manuel; Martin-García, Ana; García-Lunar, Inés; Pizarro, Gonzalo; Sanz, Javier; Sánchez, Pedro L.; Fuster, Valentin

    2017-01-01

    Rationale: The impact of cardioprotective strategies and ischemia duration on postischemia/reperfusion (I/R) myocardial tissue composition (edema, myocardium at risk, infarct size, salvage, intramyocardial hemorrhage, and microvascular obstruction) is not well understood. Objective: To study the effect of ischemia duration and protective interventions on the temporal dynamics of myocardial tissue composition in a translational animal model of I/R by the use of state-of-the-art imaging technology. Methods and Results: Four 5-pig groups underwent different I/R protocols: 40-minute I/R (prolonged ischemia, controls), 20-minute I/R (short-duration ischemia), prolonged ischemia preceded by preconditioning, or prolonged ischemia followed by postconditioning. Serial cardiac magnetic resonance (CMR)-based tissue characterization was done in all pigs at baseline and at 120 minutes, day 1, day 4, and day 7 after I/R. Reference myocardium at risk was assessed by multidetector computed tomography during the index coronary occlusion. After the final CMR, hearts were excised and processed for water content quantification and histology. Five additional healthy pigs were euthanized after baseline CMR as reference. Edema formation followed a bimodal pattern in all 40-minute I/R pigs, regardless of cardioprotective strategy and the degree of intramyocardial hemorrhage or microvascular obstruction. The hyperacute edematous wave was ameliorated only in pigs showing cardioprotection (ie, those undergoing short-duration ischemia or preconditioning). In all groups, CMR-measured edema was barely detectable at 24 hours postreperfusion. The deferred healing-related edematous wave was blunted or absent in pigs undergoing preconditioning or short-duration ischemia, respectively. CMR-measured infarct size declined progressively after reperfusion in all groups. CMR-measured myocardial salvage, and the extent of intramyocardial hemorrhage and microvascular obstruction varied dramatically according to CMR timing, ischemia duration, and cardioprotective strategy. Conclusions: Cardioprotective therapies, duration of index ischemia, and the interplay between these greatly influence temporal dynamics and extent of tissue composition changes after I/R. Consequently, imaging techniques and protocols for assessing edema, myocardium at risk, infarct size, salvage, intramyocardial hemorrhage, and microvascular obstruction should be standardized accordingly. PMID:28596216

  10. Photoinduced coherent acoustic phonon dynamics inside Mott insulator Sr2IrO4 films observed by femtosecond X-ray pulses

    NASA Astrophysics Data System (ADS)

    Zhang, Bing-Bing; Liu, Jian; Wei, Xu; Sun, Da-Rui; Jia, Quan-Jie; Li, Yuelin; Tao, Ye

    2017-04-01

    We investigate the transient photoexcited lattice dynamics in a layered perovskite Mott insulator Sr2IrO4 film by femtosecond X-ray diffraction using a laser plasma-based X-ray source. The ultrafast structural dynamics of Sr2IrO4 thin films are determined by observing the shift and broadening of (0012) Bragg diffraction after excitation by 1.5 eV and 3.0 eV pump photons for films with different thicknesses. The observed transient lattice response can be well interpreted as a distinct three-step dynamics due to the propagation of coherent acoustic phonons generated by photoinduced quasiparticles (QPs). Employing a normalized phonon propagation model, we found that the photoinduced angular shifts of the Bragg peak collapse into a universal curve after introducing normalized coordinates to account for different thicknesses and pump photon energies, pinpointing the origin of the lattice distortion and its early evolution. In addition, a transient photocurrent measurement indicates that the photoinduced QPs are charge neutral excitons. Mapping the phonon propagation and correlating its dynamics with the QP by ultrafast X-ray diffraction (UXRD) establish a powerful way to study electron-phonon coupling and uncover the exotic physics in strongly correlated systems under nonequilibrium conditions.

  11. Band Edge Dynamics and Multiexciton Generation in Narrow Band Gap HgTe Nanocrystals.

    PubMed

    Livache, Clément; Goubet, Nicolas; Martinez, Bertille; Jagtap, Amardeep; Qu, Junling; Ithurria, Sandrine; Silly, Mathieu G; Dubertret, Benoit; Lhuillier, Emmanuel

    2018-04-11

    Mercury chalcogenide nanocrystals and especially HgTe appear as an interesting platform for the design of low cost mid-infrared (mid-IR) detectors. Nevertheless, their electronic structure and transport properties remain poorly understood, and some critical aspects such as the carrier relaxation dynamics at the band edge have been pushed under the rug. Some of the previous reports on dynamics are setup-limited, and all of them have been obtained using photon energy far above the band edge. These observations raise two main questions: (i) what are the carrier dynamics at the band edge and (ii) should we expect some additional effect (multiexciton generation (MEG)) as such narrow band gap materials are excited far above the band edge? To answer these questions, we developed a high-bandwidth setup that allows us to understand and compare the carrier dynamics resonantly pumped at the band edge in the mid-IR and far above the band edge. We demonstrate that fast (>50 MHz) photoresponse can be obtained even in the mid-IR and that MEG is occurring in HgTe nanocrystal arrays with a threshold around 3 times the band edge energy. Furthermore, the photoresponse can be effectively tuned in magnitude and sign using a phototransistor configuration.

  12. Elucidating the Vibrational Fingerprint of the Flexible Metal–Organic Framework MIL-53(Al) Using a Combined Experimental/Computational Approach

    PubMed Central

    2018-01-01

    In this work, mid-infrared (mid-IR), far-IR, and Raman spectra are presented for the distinct (meta)stable phases of the flexible metal–organic framework MIL-53(Al). Static density functional theory (DFT) simulations are performed, allowing for the identification of all IR-active modes, which is unprecedented in the low-frequency region. A unique vibrational fingerprint is revealed, resulting from aluminum-oxide backbone stretching modes, which can be used to clearly distinguish the IR spectra of the closed- and large-pore phases. Furthermore, molecular dynamics simulations based on a DFT description of the potential energy surface enable determination of the theoretical Raman spectrum of the closed- and large-pore phases for the first time. An excellent correspondence between theory and experiment is observed. Both the low-frequency IR and Raman spectra show major differences in vibrational modes between the closed- and large-pore phases, indicating changes in lattice dynamics between the two structures. In addition, several collective modes related to the breathing mechanism in MIL-53(Al) are identified. In particular, we rationalize the importance of the trampoline-like motion of the linker for the phase transition. PMID:29449906

  13. Mapping and controlling ultrafast dynamics of highly excited H2 molecules by VUV-IR pump-probe schemes

    DOE PAGES

    Sturm, F. P.; Tong, X. M.; Palacios, A.; ...

    2017-01-09

    Here, we used ultrashort femtosecond vacuum ultraviolet (VUV) and infrared (IR) pulses in a pump-probe scheme to map the dynamics and nonequilibrium dissociation channels of excited neutral H2 molecules. A nuclear wave packet is created in the B $^1\Sigma_u^+$ state of the neutral H2 molecule by absorption of the ninth harmonic of the driving infrared laser field. Due to the large stretching amplitude of the molecule excited in the B $^1\Sigma_u^+$ electronic state, the effective H2+ ionization potential changes significantly as the nuclear wave packet vibrates in the bound, highly electronically and vibrationally excited B potential-energy curve. We probed such dynamics by ionizing the excited neutral molecule using time-delayed VUV or IR radiation. We identified the nonequilibrium dissociation channels by utilizing three-dimensional momentum imaging of the ion fragments. We also found that different dissociation channels can be controlled, to some extent, by changing the IR laser intensity and by choosing the wavelength of the probe laser light. Furthermore, we concluded that even in a benchmark molecular system such as H2*, the interpretation of the nonequilibrium multiphoton and multicolor ionization processes is still a challenging task, requiring intricate theoretical analysis.

  14. Airflow analyses using thermal imaging in Arizona's Meteor Crater as part of METCRAX II

    NASA Astrophysics Data System (ADS)

    Grudzielanek, A. Martina; Vogt, Roland; Cermak, Jan; Maric, Mateja; Feigenwinter, Iris; Whiteman, C. David; Lehner, Manuela; Hoch, Sebastian W.; Krauß, Matthias G.; Bernhofer, Christian; Pitacco, Andrea

    2016-04-01

    In October 2013 the second Meteor Crater Experiment (METCRAX II) took place at the Barringer Meteorite Crater (aka Meteor Crater) in north central Arizona, USA. Downslope-windstorm-type flows (DWF), the main research objective of METCRAX II, were measured by a comprehensive set of meteorological sensors deployed in and around the crater. During two weeks of METCRAX II five infrared (IR) time lapse cameras (VarioCAM® hr research & VarioCAM® High Definition, InfraTec) were installed at various locations on the crater rim to record high-resolution images of the surface temperatures within the crater from different viewpoints. Changes of surface temperature are indicative of air temperature changes induced by flow dynamics inside the crater, including the DWF. By correlating thermal IR surface temperature data with meteorological sensor data during intensive observational periods the applicability of the IR method of representing flow dynamics can be assessed. We present evaluation results and draw conclusions relative to the application of this method for observing air flow dynamics in the crater. In addition we show the potential of the IR method for METCRAX II in 1) visualizing airflow processes to improve understanding of these flows, and 2) analyzing cold-air flows and cold-air pooling.

  15. Perception of Object-Context Relations: Eye-Movement Analyses in Infants and Adults

    PubMed Central

    Bornstein, Marc H.; Mash, Clay; Arterberry, Martha E.

    2011-01-01

    Twenty-eight 4-month-olds’ and 22 20-year-olds’ attention to object-context relations was investigated using a common eye-movement paradigm. Infants and adults scanned both objects and contexts. Infants showed equivalent preferences for animals and vehicles and for congruent and incongruent object-context relations overall, more fixations of objects in congruent object-context relations, more fixations of contexts in incongruent object-context relations, more fixations of objects than contexts in vehicle scenes, and more fixation shifts in incongruent than congruent vehicle scenes. Adults showed more fixations of congruent than incongruent scenes, vehicles than animals, and objects than contexts, equal fixations of animals and their contexts but more fixations of vehicles than their contexts, and more shifts of fixation when inspecting animals in context than vehicles in context. These findings for location, number, and order of eye movements indicate that object-context relations play a dynamic role in the development and allocation of attention. PMID:21244146

  16. Fly-through viewpoint video system for multi-view soccer movie using viewpoint interpolation

    NASA Astrophysics Data System (ADS)

    Inamoto, Naho; Saito, Hideo

    2003-06-01

    This paper presents a novel method for virtual view generation that allows viewers to fly through a real soccer scene. A soccer match is captured by multiple cameras at a stadium and images of arbitrary viewpoints are synthesized by view-interpolation of two real camera images near the given viewpoint. In the proposed method, cameras do not need to be strongly calibrated, but epipolar geometry between the cameras is sufficient for the view-interpolation. Therefore, it can easily be applied to a dynamic event even in a large space, because the efforts for camera calibration can be reduced. A soccer scene is classified into several regions and virtual view images are generated based on the epipolar geometry in each region. Superimposition of the images completes virtual views for the whole soccer scene. An application for fly-through observation of a soccer match is introduced as well as the algorithm of the view-synthesis and experimental results.

  17. Slow changing postural cues cancel visual field dependence on self-tilt detection.

    PubMed

    Scotto Di Cesare, C; Macaluso, T; Mestre, D R; Bringoux, L

    2015-01-01

    Interindividual differences influence the multisensory integration process involved in spatial perception. Here, we assessed the effect of visual field dependence on self-tilt detection relative to upright, as a function of static vs. slow changing visual or postural cues. To that aim, we manipulated slow rotations (i.e., 0.05° s(-1)) of the body and/or the visual scene in pitch. Participants had to indicate whether they felt being tilted forward at successive angles. Results show that thresholds for self-tilt detection substantially differed between visual field dependent/independent subjects, when only the visual scene was rotated. This difference was no longer present when the body was actually rotated, whatever the visual scene condition (i.e., absent, static or rotated relative to the observer). These results suggest that the cancellation of visual field dependence by dynamic postural cues may rely on a multisensory reweighting process, where slow changing vestibular/somatosensory inputs may prevail over visual inputs. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. 'Working behind the scenes'. An ethical view of mental health nursing and first-episode psychosis.

    PubMed

    Moe, Cathrine; Kvig, Erling I; Brinchmann, Beate; Brinchmann, Berit S

    2013-08-01

    The aim of this study was to explore and reflect upon mental health nursing and first-episode psychosis. Seven multidisciplinary focus group interviews were conducted, and data analysis was influenced by a grounded theory approach. The core category was found to be a process named 'working behind the scenes'. It is presented along with three subcategories: 'keeping the patient in mind', 'invisible care' and 'invisible network contact'. Findings are illuminated with the ethical principles of respect for autonomy and paternalism. Nursing care is dynamic, and clinical work moves along continuums between autonomy and paternalism and between ethical reflective and non-reflective practice. 'Working behind the scenes' is considered to be in a paternalistic area, containing an ethical reflection. Treating and caring for individuals experiencing first-episode psychosis demands an ethical awareness and great vigilance by nurses. The study is a contribution to reflection upon everyday nursing practice, and the conclusion concerns the importance of making invisible work visible.

  19. Photorealistic scene presentation: virtual video camera

    NASA Astrophysics Data System (ADS)

    Johnson, Michael J.; Rogers, Joel Clark W.

    1994-07-01

    This paper presents a low cost alternative for presenting photo-realistic imagery during the final approach, which often is a peak workload phase of flight. The method capitalizes on `a priori' information. It accesses out-the-window `snapshots' from a mass storage device, selecting the snapshots that deliver the best match for a given aircraft position and runway scene. It then warps the snapshots to align them more closely with the current viewpoint. The individual snapshots, stored as highly compressed images, are decompressed and interpolated to produce a `clear-day' video stream. The paper shows how this warping, when combined with other compression methods, saves considerable amounts of storage; compression factors from 1000 to 3000 were achieved. Thus, a CD-ROM today can store reference snapshots for thousands of different runways. Dynamic scene elements not present in the snapshot database can be inserted as separate symbolic or pictorial images. When underpinned by an appropriate suite of sensor technologies, the methods discussed indicate an all-weather virtual video camera is possible.

  20. Reversible Information Flow across the Medial Temporal Lobe: The Hippocampus Links Cortical Modules during Memory Retrieval

    PubMed Central

    Cooper, Elisa; Henson, Richard N.

    2013-01-01

    A simple cue can be sufficient to elicit vivid recollection of a past episode. Theoretical models suggest that upon perceiving such a cue, disparate episodic elements held in neocortex are retrieved through hippocampal pattern completion. We tested this fundamental assumption by applying functional magnetic resonance imaging (fMRI) while objects or scenes were used to cue participants' recall of previously paired scenes or objects, respectively. We first demonstrate functional segregation within the medial temporal lobe (MTL), showing domain specificity in perirhinal and parahippocampal cortices (for object-processing vs scene-processing, respectively), but domain generality in the hippocampus (retrieval of both stimulus types). Critically, using fMRI latency analysis and dynamic causal modeling, we go on to demonstrate functional integration between these MTL regions during successful memory retrieval, with reversible signal flow from the cue region to the target region via the hippocampus. This supports the claim that the human hippocampus provides the vital associative link that integrates information held in different parts of cortex. PMID:23986252

  1. SIR-B ocean-wave enhancement with fast Fourier transform techniques

    NASA Technical Reports Server (NTRS)

    Tilley, David G.

    1987-01-01

    Shuttle Imaging Radar (SIR-B) imagery is Fourier filtered to remove the estimated system-transfer function, reduce speckle noise, and produce ocean scenes with a gray scale that is proportional to wave height. The SIR-B system response to speckled scenes of uniform surfaces yields an estimate of the stationary wavenumber response of the imaging radar, modeled by the 15 even terms of an eighth-order two-dimensional polynomial. Speckle can also be used to estimate the dynamic wavenumber response of the system due to surface motion during the aperture synthesis period, modeled with a single adaptive parameter describing an exponential correlation along track. A Fourier filter can then be devised to correct for the wavenumber response of the remote sensor and scene correlation, with subsequent subtraction of an estimate of the speckle noise component. A linearized velocity bunching model, combined with a surface tilt and hydrodynamic model, is incorporated in the Fourier filter to derive estimates of wave height from the radar intensities corresponding to individual picture elements.
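    A schematic sketch of the kind of Fourier-domain correction described above: divide the image power spectrum by an estimated system wavenumber response, subtract a speckle-noise floor, and resynthesize the image while preserving the original phase. The flat response and zero noise floor in the usage example are placeholders, not the SIR-B model, whose response is an eighth-order polynomial fitted to speckle statistics:

```python
import numpy as np

def correct_wave_spectrum(image, system_response, noise_floor, eps=1e-3):
    """Correct an ocean-scene image for the imaging-system wavenumber
    response and subtract an estimated speckle-noise floor, keeping the
    original Fourier phase. Schematic sketch of the approach only."""
    spec = np.fft.fft2(image)
    power = np.abs(spec) ** 2
    corrected_power = np.maximum(power / np.maximum(system_response, eps) - noise_floor, 0.0)
    new_mag = np.sqrt(corrected_power)            # rescaled magnitude, phase preserved
    return np.fft.ifft2(new_mag * np.exp(1j * np.angle(spec))).real

# trivial check: with a flat response and no noise floor, the image is unchanged
img = np.random.default_rng(3).normal(size=(64, 64))
response = np.ones((64, 64))                      # hypothetical flat wavenumber response
out = correct_wave_spectrum(img, response, noise_floor=0.0)
print(np.allclose(out, img, atol=1e-8))
```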

  2. Temporal dynamics of different cases of bi-stable figure-ground perception.

    PubMed

    Kogo, Naoki; Hermans, Lore; Stuer, David; van Ee, Raymond; Wagemans, Johan

    2015-01-01

    Segmentation of a visual scene in "figure" and "ground" is essential for perception of the three-dimensional layout of a scene. In cases of bi-stable perception, two distinct figure-ground interpretations alternate over time. We were interested in the temporal dynamics of these alternations, in particular when the same image is presented repeatedly, with short blank periods in-between. Surprisingly, we found that the intermittent presentation of Rubin's classical "face-or-vase" figure, which is frequently taken as a standard case of bi-stable figure-ground perception, often evoked perceptual switches during the short presentations and stabilization was not prominent. Interestingly, bi-stable perception of Kanizsa's anomalous transparency figure did strongly stabilize across blanks. We also found stabilization for the Necker cube, which we used for comparison. The degree of stabilization (and the lack of it) varied across stimuli and across individuals. Our results indicate, against common expectation, that the stabilization phenomenon cannot be generally evoked by intermittent presentation. We argue that top-down feedback factors such as familiarity, semantics, expectation, and perceptual bias contribute to the complex processes underlying the temporal dynamics of bi-stable figure-ground perception. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Morphology of drying blood pools

    NASA Astrophysics Data System (ADS)

    Laan, Nick; Smith, Fiona; Nicloux, Celine; Brutin, David; D-Blood project Collaboration

    2016-11-01

    Often blood pools are found on crime scenes, providing information concerning the events and sequence of events that took place on the scene. However, there is a lack of knowledge concerning the drying dynamics of blood pools. This study focuses on the drying process of blood pools to determine what relevant information can be obtained for the forensic application. We recorded the drying process of blood pools with a camera and measured their weight. We found that the drying process can be separated into five distinct stages: coagulation, gelation, rim desiccation, centre desiccation, and final desiccation. Moreover, we found that the weight of the blood pool diminishes similarly and in a reproducible way for blood pools created in various conditions. In addition, we verify that the size of the blood pools is directly related to their volume and the wettability of the surface. Our study clearly shows that blood pools dry in a reproducible fashion. This preliminary work highlights the difficult task that blood pool analysis represents in forensic investigations, and how internal and external parameters influence its dynamics. We conclude that understanding the drying process dynamics would be an advancement in the timeline reconstitution of events. ANR funded project: D-Blood Project.

  4. Individual predictions of eye-movements with dynamic scenes

    NASA Astrophysics Data System (ADS)

    Barth, Erhardt; Drewes, Jan; Martinetz, Thomas

    2003-06-01

    We present a model that predicts saccadic eye-movements and can be tuned to a particular human observer who is viewing a dynamic sequence of images. Our work is motivated by applications that involve gaze-contingent interactive displays on which information is displayed as a function of gaze direction. The approach therefore differs from standard approaches in two ways: (1) we deal with dynamic scenes, and (2) we provide means of adapting the model to a particular observer. As an indicator for the degree of saliency we evaluate the intrinsic dimension of the image sequence within a geometric approach implemented by using the structure tensor. Out of these candidate saliency-based locations, the currently attended location is selected according to a strategy found by supervised learning. The data are obtained with an eye-tracker and subjects who view video sequences. The selection algorithm receives candidate locations of current and past frames and a limited history of locations attended in the past. We use a linear mapping that is obtained by minimizing the quadratic difference between the predicted and the actually attended location by gradient descent. Being linear, the learned mapping can be quickly adapted to the individual observer.
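    A small sketch of the learning step described above: a linear map from per-frame feature vectors (candidate salient locations plus recent fixation history, however the caller encodes them) to the next attended location, fitted by gradient descent on the squared prediction error. The feature encoding and learning-rate values are illustrative, not the authors' exact setup:

```python
import numpy as np

def fit_linear_gaze_map(features, fixations, lr=1e-3, epochs=200):
    """Learn a linear map (plus bias) from feature vectors to the next
    attended (x, y) location by gradient descent on the mean squared error."""
    n, d = features.shape
    W = np.zeros((d, 2))
    b = np.zeros(2)
    for _ in range(epochs):
        pred = features @ W + b
        err = pred - fixations                  # (n, 2) residuals
        W -= lr * features.T @ err / n          # gradient of the mean squared error
        b -= lr * err.mean(axis=0)
    return W, b

# synthetic check: fixations that really are a linear function of the features
rng = np.random.default_rng(4)
X = rng.normal(size=(500, 6))
W_true = rng.normal(size=(6, 2))
Y = X @ W_true + 0.01 * rng.normal(size=(500, 2))
W, b = fit_linear_gaze_map(X, Y, lr=0.1, epochs=500)
print(round(float(np.abs(X @ W + b - Y).mean()), 3))      # close to the 0.01 noise level
```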

  5. First-principles study of the infrared spectra of the ice Ih (0001) surface

    DOE PAGES

    Pham, T. Anh; Huang, P.; Schwegler, E.; ...

    2012-08-22

    Here, we present a study of the infrared (IR) spectra of the (0001) deuterated ice surface based on first-principles molecular dynamics simulations. The computed spectra show a good agreement with available experimental IR measurements. We identified the bonding configurations associated with specific features in the spectra, allowing us to provide a detailed interpretation of IR signals. We computed the spectra of several proton ordered and disordered models of the (0001) surface of ice, and we found that IR spectra do not appear to be a sensitive probe of the microscopic arrangement of protons at ice surfaces.

  6. Computational model of lightness perception in high dynamic range imaging

    NASA Astrophysics Data System (ADS)

    Krawczyk, Grzegorz; Myszkowski, Karol; Seidel, Hans-Peter

    2006-02-01

    An anchoring theory of lightness perception by Gilchrist et al. [1999] explains many characteristics of the human visual system, such as lightness constancy and its spectacular failures, which are important in the perception of images. The principal concept of this theory is the perception of complex scenes in terms of groups of consistent areas (frameworks). Such areas, following the gestalt theorists, are defined by regions of common illumination. The key aspect of image perception is the estimation of lightness within each framework through anchoring to the luminance perceived as white, followed by the computation of the global lightness. In this paper we provide a computational model for the automatic decomposition of HDR images into frameworks. We derive a tone mapping operator which predicts lightness perception of real-world scenes and aims at its accurate reproduction on low dynamic range displays. Furthermore, such a decomposition into frameworks opens new grounds for local image analysis in view of human perception.
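
    The decomposition-plus-anchoring idea can be illustrated with a deliberately crude sketch: approximate the frameworks by clustering log-luminance (the paper's actual decomposition is more sophisticated), anchor each framework to the luminance it treats as white (here a high percentile), and express lightness relative to that anchor. The 1-D k-means step, the percentile anchor, the display mapping, and all names below are illustrative assumptions, not the authors' operator:

        import numpy as np

        def anchoring_tone_map(luminance, n_frameworks=3, anchor_pct=95.0):
            """Crude anchoring-style tone mapping sketch:
            1. cluster pixels into 'frameworks' by log-luminance (1-D k-means);
            2. within each framework, anchor to the luminance taken as white
               (here the anchor_pct percentile);
            3. lightness = log10(L / anchor), squeezed into a display range."""
            logL = np.log10(np.maximum(luminance, 1e-6)).ravel()

            centers = np.percentile(logL, np.linspace(10, 90, n_frameworks))
            for _ in range(20):                      # simple 1-D k-means
                labels = np.argmin(np.abs(logL[:, None] - centers[None, :]), axis=1)
                for k in range(n_frameworks):
                    if np.any(labels == k):
                        centers[k] = logL[labels == k].mean()

            lightness = np.zeros_like(logL)
            for k in range(n_frameworks):
                mask = labels == k
                if np.any(mask):
                    anchor = np.percentile(logL[mask], anchor_pct)   # local "white"
                    lightness[mask] = logL[mask] - anchor
            # map roughly two decades below white onto [0, 1] for an LDR display
            return np.clip(lightness.reshape(luminance.shape) / 2.0 + 1.0, 0.0, 1.0)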

  7. Creating cinematic wide gamut HDR-video for the evaluation of tone mapping operators and HDR-displays

    NASA Astrophysics Data System (ADS)

    Froehlich, Jan; Grandinetti, Stefan; Eberhardt, Bernd; Walter, Simon; Schilling, Andreas; Brendel, Harald

    2014-03-01

    High quality video sequences are required for the evaluation of tone mapping operators and high dynamic range (HDR) displays. We provide scenic and documentary scenes with a dynamic range of up to 18 stops. The scenes are staged using professional film lighting, make-up and set design to enable the evaluation of image and material appearance. To address challenges for HDR-displays and temporal tone mapping operators, the sequences include highlights entering and leaving the image, brightness changing over time, high contrast skin tones, specular highlights and bright, saturated colors. HDR-capture is carried out using two cameras mounted on a mirror-rig. To achieve a cinematic depth of field, digital motion picture cameras with Super-35mm size sensors are used. We provide HDR-video sequences to serve as a common ground for the evaluation of temporal tone mapping operators and HDR-displays. They are available to the scientific community for further research.
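
    Although the paper's mirror-rig pipeline is not reproduced here, the underlying fusion step for two registered, differently exposed streams is commonly an exposure-weighted average of linearized pixel values. The sketch below shows that generic merge; the hat-shaped weighting and the assumption of linear (de-gamma'd) frames with known exposure times are illustrative choices, not the authors' method:

        import numpy as np

        def merge_two_exposures(img_short, img_long, t_short, t_long):
            """Fuse two registered, linear frames with exposure times
            t_short < t_long (pixel values normalized to [0, 1]) into one HDR
            radiance map.  Near-saturated pixels in the long exposure are
            down-weighted, so highlights come from the short exposure."""
            def weight(img):
                # hat weight: trust mid-range pixels, distrust values near 0 or 1
                return np.clip(1.0 - np.abs(2.0 * img - 1.0), 0.05, 1.0)

            w_s, w_l = weight(img_short), weight(img_long)
            return (w_s * img_short / t_short + w_l * img_long / t_long) / (w_s + w_l)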

  8. An improved contrast enhancement algorithm for infrared images based on adaptive double plateaus histogram equalization

    NASA Astrophysics Data System (ADS)

    Li, Shuo; Jin, Weiqi; Li, Li; Li, Yiyang

    2018-05-01

    Infrared thermal images can reflect the thermal-radiation distribution of a particular scene. However, the contrast of the infrared images is usually low. Hence, it is generally necessary to enhance the contrast of infrared images in advance to facilitate subsequent recognition and analysis. Based on the adaptive double plateaus histogram equalization, this paper presents an improved contrast enhancement algorithm for infrared thermal images. In the proposed algorithm, the normalized coefficient of variation of the histogram, which characterizes the level of contrast enhancement, is introduced as feedback information to adjust the upper and lower plateau thresholds. The experiments on actual infrared images show that compared to the three typical contrast-enhancement algorithms, the proposed algorithm has better scene adaptability and yields better contrast-enhancement results for infrared images with more dark areas or a higher dynamic range. Hence, it has high application value in contrast enhancement, dynamic range compression, and digital detail enhancement for infrared thermal images.
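
    For reference, a non-adaptive double plateaus equalization step can be written in a few lines; the paper's contribution is the feedback rule, based on the normalized coefficient of variation of the histogram, that chooses the two thresholds automatically, which is not reproduced here. Threshold values, bit depth, and names below are illustrative assumptions:

        import numpy as np

        def double_plateau_equalize(img_u16, t_up, t_low, out_levels=256):
            """Double plateaus histogram equalization for a 16-bit IR frame.
            Bin counts are clipped from above at t_up (limits the stretch given
            to large uniform backgrounds) and raised to at least t_low for
            non-empty bins (protects small or dim details); a standard CDF
            remap then produces an 8-bit display image."""
            hist, _ = np.histogram(img_u16.ravel(), bins=65536, range=(0, 65536))
            plateau = np.clip(hist, 0, t_up)
            plateau[(hist > 0) & (plateau < t_low)] = t_low
            cdf = np.cumsum(plateau).astype(np.float64)
            cdf_min = cdf[cdf > 0].min()
            norm = np.maximum(cdf - cdf_min, 0.0) / (cdf[-1] - cdf_min + 1e-12)
            lut = np.round(norm * (out_levels - 1)).astype(np.uint8)
            return lut[img_u16]

    An adaptive variant would recompute t_up and t_low each frame and adjust them until the chosen contrast measure reaches a target value.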

  9. A self-synchronized high speed computational ghost imaging system: A leap towards dynamic capturing

    NASA Astrophysics Data System (ADS)

    Suo, Jinli; Bian, Liheng; Xiao, Yudong; Wang, Yongjin; Zhang, Lei; Dai, Qionghai

    2015-11-01

    High quality computational ghost imaging needs to acquire a large number of correlated measurements between the to-be-imaged scene and different reference patterns, so ultra-high speed data acquisition is of crucial importance in real applications. To raise the acquisition efficiency, this paper reports a high speed computational ghost imaging system using a 20 kHz spatial light modulator together with a 2 MHz photodiode. Technically, synchronization between such high frequency illumination and the bucket detector needs nanosecond trigger precision, so developing a synchronization module is quite challenging. To handle this problem, we propose a simple and effective computational self-synchronization scheme by building a general mathematical model and introducing a high precision synchronization technique. The resulting acquisition is around 14 times faster than the state of the art and takes an important step towards ghost imaging of dynamic scenes. Besides, the proposed scheme is a general approach with high flexibility for readily incorporating other illuminators and detectors.
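
    The reconstruction step behind computational ghost imaging (separate from the paper's self-synchronization contribution) is a correlation between the known illumination patterns and the bucket-detector values. A minimal simulated example, with all sizes and names chosen purely for illustration:

        import numpy as np

        def ghost_image(patterns, bucket):
            """Differential correlation reconstruction for computational ghost
            imaging.  patterns: (N, H, W) patterns shown on the SLM;
            bucket: (N,) single-pixel (photodiode) measurements."""
            b = bucket - bucket.mean()
            p = patterns - patterns.mean(axis=0)
            # pixel-wise <(B - <B>) (P - <P>)> over the N measurements
            return np.tensordot(b, p, axes=(0, 0)) / len(bucket)

        # toy acquisition: a 32x32 square scene probed by 5000 random patterns
        H = W = 32
        scene = np.zeros((H, W)); scene[8:24, 8:24] = 1.0
        patterns = np.random.rand(5000, H, W)
        bucket = (patterns * scene).sum(axis=(1, 2))
        recon = ghost_image(patterns, bucket)

    The large number of correlated measurements needed for a clean image is what makes the 20 kHz modulation and precise triggering described above so important.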

  10. Neural networks: Alternatives to conventional techniques for automatic docking

    NASA Technical Reports Server (NTRS)

    Vinz, Bradley L.

    1994-01-01

    Automatic docking of orbiting spacecraft is a crucial operation involving the identification of vehicle orientation as well as complex approach dynamics. The chaser spacecraft must be able to recognize the target spacecraft within a scene and achieve accurate closing maneuvers. In a video-based system, a target scene must be captured and transformed into a pattern of pixels. Successful recognition lies in the interpretation of this pattern. Due to their powerful pattern recognition capabilities, artificial neural networks offer a potential role in interpretation and automatic docking processes. Neural networks can reduce the computational time required by existing image processing and control software. In addition, neural networks are capable of recognizing and adapting to changes in their dynamic environment, enabling enhanced performance, redundancy, and fault tolerance. Most neural networks are robust to failure, capable of continued operation with a slight degradation in performance after minor failures. This paper discusses the particular automatic docking tasks neural networks can perform as viable alternatives to conventional techniques.

  11. Towards a Molecular Movie: Real Time Observation of Hydrogen Bond Breaking by Transient 2D-IR Spectroscopy in a Cyclic Peptide

    NASA Astrophysics Data System (ADS)

    Kolano, Christoph; Helbing, Jan; Sander, Wolfram; Hamm, Peter

    Transient two-dimensional infrared spectroscopy (T2D-IR) has been used to observe in real time the non-equilibrium structural dynamics of intramolecular hydrogen bond breaking in a small cyclic disulfide-bridged peptide.

  12. Diode laser absorption sensors for gas-dynamic and combustion flows

    NASA Technical Reports Server (NTRS)

    Allen, M. G.

    1998-01-01

    Recent advances in room-temperature, near-IR and visible diode laser sources for telecommunication, high-speed computer networks, and optical data storage applications are enabling a new generation of gas-dynamic and combustion-flow sensors based on laser absorption spectroscopy. In addition to conventional species concentration and density measurements, spectroscopic techniques for temperature, velocity, pressure and mass flux have been demonstrated in laboratory, industrial and technical flows. Combined with fibreoptic distribution networks and ultrasensitive detection strategies, compact and portable sensors are now appearing for a variety of applications. In many cases, the superior spectroscopic quality of the new laser sources compared with earlier cryogenic, mid-IR devices is allowing increased sensitivity of trace species measurements, high-precision spectroscopy of major gas constituents, and stable, autonomous measurement systems. The purpose of this article is to review recent progress in this field and suggest likely directions for future research and development. The various laser-source technologies are briefly reviewed as they relate to sensor applications. Basic theory for laser absorption measurements of gas-dynamic properties is reviewed and special detection strategies for the weak near-IR and visible absorption spectra are described. Typical sensor configurations are described and compared for various application scenarios, ranging from laboratory research to automated field and airborne packages. Recent applications of gas-dynamic sensors for air flows and fluxes of trace atmospheric species are presented. Applications of gas-dynamic and combustion sensors to research and development of high-speed flows, aeropropulsion engines, and combustion emissions monitoring are presented in detail, along with emerging flow control systems based on these new sensors. Finally, technology in nonlinear frequency conversion, UV laser materials, room-temperature mid-IR materials and broadly tunable multisection devices is reviewed to suggest new sensor possibilities.
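
    The "basic theory" referred to above is, in its simplest form, the Beer-Lambert relation for a single absorption transition; the notation below is a standard textbook form rather than the article's own:

        \[
          \frac{I_t(\nu)}{I_0(\nu)} \;=\; \exp\!\bigl[-\,S(T)\,\phi(\nu)\,P\,x_i\,L\bigr],
        \]

    where S(T) is the temperature-dependent line strength, \phi(\nu) the line-shape function, P the total pressure, x_i the mole fraction of the absorbing species, and L the optical path length. Gas temperature is then typically inferred from the ratio of integrated absorbances of two transitions with different lower-state energies, and velocity from the Doppler shift of the absorption line center.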

  13. Conformational switching between protein substates studied with 2D IR vibrational echo spectroscopy and molecular dynamics simulations.

    PubMed

    Bagchi, Sayan; Thorpe, Dayton G; Thorpe, Ian F; Voth, Gregory A; Fayer, M D

    2010-12-30

    Myoglobin is an important protein for the study of structure and dynamics. Three conformational substates have been identified for the carbonmonoxy form of myoglobin (MbCO). These are manifested as distinct peaks in the IR absorption spectrum of the CO stretching mode. Ultrafast 2D IR vibrational echo chemical exchange experiments are used to observe switching between two of these substates, A(1) and A(3), on a time scale of <100 ps for two mutants of wild-type Mb. The two mutants are a single mutation of Mb, L29I, and a double mutation, T67R/S92D. Molecular dynamics (MD) simulations are used to model the structural differences between the substates of the two MbCO mutants. The MD simulations are also employed to examine the substate switching in the two mutants as a test of the ability of MD simulations to predict protein dynamics correctly for a system in which there is a well-defined transition over a significant potential barrier between two substates. For one mutant, L29I, the simulations show that translation of the His64 backbone may differentiate the two substates. The simulations accurately reproduce the experimentally observed interconversion time for the L29I mutant. However, MD simulations exploring the same His64 backbone coordinate fail to display substate interconversion for the other mutant, T67R/S92D, thus pointing to the likely complexity of the underlying protein interactions. We anticipate that understanding conformational dynamics in MbCO via ultrafast 2D IR vibrational echo chemical exchange experiments can help to elucidate fast conformational switching processes in other proteins.

  14. Proton transfer mediated by the vibronic coupling in oxygen core ionized states of glyoxalmonoxime studied by infrared-X-ray pump-probe spectroscopy.

    PubMed

    Felicíssimo, V C; Guimarães, F F; Cesar, A; Gel'mukhanov, F; Agren, H

    2006-11-30

    The theory of IR-X-ray pump-probe spectroscopy beyond the Born-Oppenheimer approximation is developed and applied to the study of the dynamics of intramolecular proton transfer in glyoxalmonoxime leading to the formation of the tautomer 2-nitrosoethenol. Due to the IR pump pulses the molecule gains sufficient energy to promote a proton to a weakly bound well. A femtosecond X-ray pulse snapshots the wave packet route and, hence, the dynamics of the proton transfer. The glyoxalmonoxime molecule contains two chemically nonequivalent oxygen atoms that possess distinct roles in the hydrogen bond, a hydrogen donor and an acceptor. Core ionizations of these form two intersecting core-ionized states, the vibronic coupling between which along the OH stretching mode partially delocalizes the core hole, resulting in a hopping of the core hole from one site to another. This, in turn, affects the dynamics of the proton transfer in the core-ionized state. The quantum dynamical simulations of X-ray photoelectron spectra of glyoxalmonoxime driven by strong IR pulses demonstrate the general applicability of the technique for studies of intramolecular proton transfer in systems with vibronic coupling.

  15. Molecular dynamics simulation of nonlinear spectroscopies of intermolecular motions in liquid water.

    PubMed

    Yagasaki, Takuma; Saito, Shinji

    2009-09-15

    Water is the most extensively studied of liquids because of both its ubiquity and its anomalous thermodynamic and dynamic properties. The properties of water are dominated by hydrogen bonds and hydrogen bond network rearrangements. Fundamental information on the dynamics of liquid water has been provided by linear infrared (IR), Raman, and neutron-scattering experiments; molecular dynamics simulations have also provided insights. Recently developed higher-order nonlinear spectroscopies open new windows into the study of the hydrogen bond dynamics of liquid water. For example, the vibrational lifetimes of stretches and a bend, intramolecular features of water dynamics, can be accurately measured and are found to be on the femtosecond time scale at room temperature. Higher-order nonlinear spectroscopy is expressed by a multitime correlation function, whereas traditional linear spectroscopy is given by a one-time correlation function. Thus, nonlinear spectroscopy yields more detailed information on the dynamics of condensed media than linear spectroscopy. In this Account, we describe the theoretical background and methods for calculating higher order nonlinear spectroscopy; equilibrium and nonequilibrium molecular dynamics simulations, and a combination of both, are used. We also present the intermolecular dynamics of liquid water revealed by fifth-order two-dimensional (2D) Raman spectroscopy and third-order IR spectroscopy. 2D Raman spectroscopy is sensitive to couplings between modes; the calculated 2D Raman signal of liquid water shows large anharmonicity in the translational motion and strong coupling between the translational and librational motions. Third-order IR spectroscopy makes it possible to examine the time-dependent couplings. The 2D IR spectra and three-pulse photon echo peak shift show the fast frequency modulation of the librational motion. A significant effect of the translational motion on the fast frequency modulation of the librational motion is elucidated by introducing the "translation-free" molecular dynamics simulation. The isotropic pump-probe signal and the polarization anisotropy decay show fast transfer of the librational energy to the surrounding water molecules, followed by relaxation to the hot ground state. These theoretical methods do not require frequently used assumptions and can thus be called ab initio methods; together with multidimensional nonlinear spectroscopies, they provide powerful methods for examining the inter- and intramolecular details of water dynamics.
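
    The distinction drawn above between one-time and multitime correlation functions can be written compactly. In the standard response-function formalism (schematic form, not the authors' notation), the linear IR absorption line shape follows from a single-time dipole correlation function,

        \[
          I(\omega) \;\propto\; \int_{-\infty}^{\infty} dt\, e^{\,i\omega t}\,
          \langle \boldsymbol{\mu}(t)\cdot\boldsymbol{\mu}(0) \rangle ,
        \]

    whereas third-order (2D IR, photon echo) signals are governed by a three-time response function built from nested commutators of the dipole operator,

        \[
          R^{(3)}(t_3,t_2,t_1) \;\propto\;
          \bigl\langle \bigl[\bigl[\bigl[\mu(t_1{+}t_2{+}t_3),\,\mu(t_1{+}t_2)\bigr],
          \,\mu(t_1)\bigr],\,\mu(0)\bigr] \bigr\rangle ,
        \]

    which is why the higher-order experiments, and the equilibrium/nonequilibrium MD machinery used to simulate them, resolve mode couplings and time-dependent frequency fluctuations that the linear spectrum averages over.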

  16. Study of jamming of the frequency modulation infrared seekers

    NASA Astrophysics Data System (ADS)

    Qian, Fang; Guo, Jin; Shao, Jun-feng; Wang, Ting-feng

    2013-09-01

    The threat posed by IR-guided missiles has driven the extensive proliferation of airborne IR countermeasures. The aim of a countermeasure system is to inject false information into a sensor system to create confusion. Many optical seekers have a single detector that is used to sense the position of the target aircraft in the field of view. A seeker has a spinning reticle in the focal plane of the optical system that collects energy from the thermal scene and focuses it onto the detector. In this paper, the principle of the conical-scan FM reticle is analyzed, and the effect of amplitude- or frequency-modulated mid-infrared laser pulses on the reticle system is simulated. When the ratio of jamming energy to target radiation (repression) gradually increases, the azimuth error and the misalignment angle error become larger. The results show that simply increasing the intensity of the jamming light achieves little; it mainly increases the signal strength received by the FM reticle system, so the target is more easily exposed. A slow variation of the jamming amplitude will warp the azimuth information received by the seeker, but it cannot completely break the missile's track on the target. If the repression and the jamming frequency change at the same time, the jamming effect is more pronounced. When the angular frequency of the jamming signal is twice the carrier frequency of the reticle system, the seeker can no longer receive an accurate signal and jamming is achieved. The jamming mechanism of the conical-scan FM IR seeker is described, which is helpful for the design of airborne IR countermeasure systems.

  17. Application of DIRI dynamic infrared imaging in reconstructive surgery

    NASA Astrophysics Data System (ADS)

    Pawlowski, Marek; Wang, Chengpu; Jin, Feng; Salvitti, Matthew; Tenorio, Xavier

    2006-04-01

    We have developed the BioScanIR System based on a QWIP (Quantum Well Infrared Photodetector). Data collected by this sensor are processed using DIRI (Dynamic Infrared Imaging) algorithms. The combination of the DIRI data processing methods with the unique characteristics of the QWIP sensor permits the creation of a new imaging modality capable of detecting minute changes in temperature at the surface of tissue and organs associated with blood perfusion due to certain diseases such as cancer, vascular disease and diabetes. The BioScanIR System has been successfully applied in reconstructive surgery to localize donor flap feeding vessels (perforators) during the pre-surgical planning stage. The device is also used in post-surgical monitoring of skin flap perfusion. Since the BioScanIR is mobile, it can be moved to the bedside for such monitoring. In comparison to other modalities, the BioScanIR can localize perforators in a single 20-second scan, with definitive results available in minutes. The algorithms used include the Fast Fourier Transform (FFT), motion artifact correction, spectral analysis and thermal image scaling. The BioScanIR is completely non-invasive and non-toxic, requires no exogenous contrast agents and is free of ionizing radiation. In addition to reconstructive surgery applications, the BioScanIR has shown promise as a useful functional imaging modality in neurosurgery, drug discovery in pre-clinical animal models, wound healing and peripheral vascular disease management.

  18. Effects of Pre-Encoding Stress on Brain Correlates Associated with the Long-Term Memory for Emotional Scenes

    PubMed Central

    Wirkner, Janine; Weymar, Mathias; Löw, Andreas; Hamm, Alfons O.

    2013-01-01

    Recent animal and human research indicates that stress around the time of encoding enhances long-term memory for emotionally arousing events, but the neural evidence remains unclear. In the present study we used the ERP old/new effect to investigate brain dynamics underlying the long-term effects of acute pre-encoding stress on memory for emotional and neutral scenes. Participants were exposed either to the Socially Evaluated Cold Pressor Test (SECPT) or a warm water control procedure before viewing 30 unpleasant, 30 neutral and 30 pleasant pictures. Two weeks after encoding, recognition memory was tested using 90 old and 90 new pictures. Emotional pictures were better recognized than neutral pictures in both groups and related to an enhanced centro-parietal ERP old/new difference (400–800 ms) during recognition, which suggests better recollection. Most interestingly, pre-encoding stress exposure specifically increased the ERP old/new effect for emotional (unpleasant) pictures, but not for neutral pictures. These enhanced ERP old/new differences for emotional (unpleasant) scenes were particularly pronounced for those participants who reported high levels of stress during the SECPT. The results suggest that acute pre-encoding stress specifically strengthens brain signals of emotional memories, substantiating a facilitating role of stress on memory for emotional scenes. PMID:24039697

  19. Reduced gaze following and attention to heads when viewing a "live" social scene.

    PubMed

    Gregory, Nicola Jean; Lόpez, Beatriz; Graham, Gemma; Marshman, Paul; Bate, Sarah; Kargas, Niko

    2015-01-01

    Social stimuli are known to both attract and direct our attention, but most research on social attention has been conducted in highly controlled laboratory settings lacking in social context. This study examined the role of social context on viewing behaviour of participants whilst they watched a dynamic social scene, under three different conditions. In two social groups, participants believed they were watching a live webcam of other participants. The socially-engaged group believed they would later complete a group task with the people in the video, whilst the non-engaged group believed they would not meet the people in the scene. In a third condition, participants simply free-viewed the same video with the knowledge that it was pre-recorded, with no suggestion of a later interaction. Results demonstrated that the social context in which the stimulus was viewed significantly influenced viewing behaviour. Specifically, participants in the social conditions allocated less visual attention towards the heads of the actors in the scene and followed their gaze less than those in the free-viewing group. These findings suggest that by underestimating the impact of social context in social attention, researchers risk coming to inaccurate conclusions about how we attend to others in the real world.

  20. Reduced Gaze Following and Attention to Heads when Viewing a "Live" Social Scene

    PubMed Central

    Gregory, Nicola Jean; Lόpez, Beatriz

    2015-01-01

    Social stimuli are known to both attract and direct our attention, but most research on social attention has been conducted in highly controlled laboratory settings lacking in social context. This study examined the role of social context on viewing behaviour of participants whilst they watched a dynamic social scene, under three different conditions. In two social groups, participants believed they were watching a live webcam of other participants. The socially-engaged group believed they would later complete a group task with the people in the video, whilst the non-engaged group believed they would not meet the people in the scene. In a third condition, participants simply free-viewed the same video with the knowledge that it was pre-recorded, with no suggestion of a later interaction. Results demonstrated that the social context in which the stimulus was viewed significantly influenced viewing behaviour. Specifically, participants in the social conditions allocated less visual attention towards the heads of the actors in the scene and followed their gaze less than those in the free-viewing group. These findings suggest that by underestimating the impact of social context in social attention, researchers risk coming to inaccurate conclusions about how we attend to others in the real world. PMID:25853239

  1. The Hip-Hop club scene: Gender, grinding and sex.

    PubMed

    Muñoz-Laboy, Miguel; Weinstein, Hannah; Parker, Richard

    2007-01-01

    Hip-Hop culture is a key social medium through which many young men and women from communities of colour in the USA construct their gender. In this study, we focused on the Hip-Hop club scene in New York City with the intention of unpacking narratives of gender dynamics from the perspective of young men and women, and how these relate to their sexual experiences. We conducted a three-year ethnographic study that included ethnographic observations of Hip-Hop clubs and their social scene, and in-depth interviews with young men and young women aged 15-21. This paper describes how young people negotiate gender relations on the dance floor of Hip-Hop clubs. The Hip-Hop club scene represents a context or setting where young men's masculinities are contested by the social environment, where women challenge hypermasculine privilege and where young people can set the stage for what happens next in their sexual and emotional interactions. Hip-Hop culture therefore provides a window into the gender and sexual scripts of many urban minority youth. A fuller understanding of these patterns can offer key insights into the social construction of sexual risk, as well as the possibilities for sexual health promotion, among young people in urban minority populations.

  2. Modelling an advanced ManPAD with dual band detectors and a rosette scanning seeker head

    NASA Astrophysics Data System (ADS)

    Birchenall, Richard P.; Richardson, Mark A.; Butters, Brian; Walmsley, Roy

    2012-01-01

    Man Portable Air Defence Systems (ManPADs) have been a favoured anti-aircraft weapon since their appearance on the military proliferation scene in the mid 1960s. Since this introduction there has been a 'cat and mouse' game of missile countermeasures (CMs) and aircraft-protection counter-countermeasures (CCMs) as missile designers attempt to defeat aircraft platform protection equipment. Magnesium Teflon Viton (MTV) flares protected the target aircraft until missile engineers discovered the art of flare rejection using techniques including track memory and track angle bias. These early CCMs relied upon triggering techniques such as the rise-rate method, which would simply sense a sudden increase in target energy and assume that a flare CM had been released by the target aircraft. This was not as reliable as first thought, as aspect changes (bringing another engine into the field of view) or glint from the sun could inadvertently trigger a CCM when not needed. The introduction of dual band detectors in the 1980s saw a major advance in CCM capability, allowing comparisons between two distinct IR bands to be made and thus an MTV flare to be recognized with minimal false alarms. The development of the rosette scan seeker in the 1980s complemented this advancement, allowing the scene in the missile field of view (FOV) to be scanned by a much smaller (1/25) instantaneous FOV (IFOV), with the spectral comparisons being made at each scan point. This took the ManPAD from a basic IR energy detector to a pseudo-imaging system capable of analysing individual elements of its overall FOV, allowing more complex and robust CCMs to be developed. This paper continues the work published in [1,2] and describes the method used to model an advanced ManPAD with a rosette scanning seeker head and robust CCMs similar to the Raytheon Stinger RMP.

  3. The Advanced Linked Extended Reconnaissance & Targeting Technology Demonstration project

    NASA Astrophysics Data System (ADS)

    Edwards, Mark

    2008-04-01

    The Advanced Linked Extended Reconnaissance & Targeting (ALERT) Technology Demonstration (TD) project is addressing many operational needs of the future Canadian Army's Surveillance and Reconnaissance forces. Using the surveillance system of the Coyote reconnaissance vehicle as an experimental platform, the ALERT TD project aims to significantly enhance situational awareness by fusing multi-sensor and tactical data, developing automated processes, and integrating beyond line-of-sight sensing. The project is exploiting important advances made in computer processing capability, displays technology, digital communications, and sensor technology since the design of the original surveillance system. As the major research area within the project, concepts are discussed for displaying and fusing multi-sensor and tactical data within an Enhanced Operator Control Station (EOCS). The sensor data can originate from the Coyote's own visible-band and IR cameras, laser rangefinder, and ground-surveillance radar, as well as from beyond line-of-sight systems such as mini-UAVs and unattended ground sensors. Video-rate image processing has been developed to assist the operator to detect poorly visible targets. As a second major area of research, automatic target cueing capabilities have been added to the system. These include scene change detection, automatic target detection and aided target recognition algorithms processing both IR and visible-band images to draw the operator's attention to possible targets. The merits of incorporating scene change detection algorithms are also discussed. In the area of multi-sensor data fusion, up to Joint Defence Labs level 2 has been demonstrated. The human factors engineering aspects of the user interface in this complex environment are presented, drawing upon multiple user group sessions with military surveillance system operators. The paper also presents Lessons Learned from the project. The ALERT system has been used in a number of C4ISR field trials, most recently at Exercise Empire Challenge in China Lake CA, and at Trial Quest in Norway. Those exercises provided further opportunities to investigate operator interactions. The paper concludes with recommendations for future work in operator interface design.

  4. Dynamic fluctuation of proteins watched in real time

    PubMed Central

    Ormos, Pál

    2008-01-01

    The dynamic nature of protein function is a fundamental concept in the physics of proteins. Although the basic ideas are generally well accepted, most experimental evidence is of an indirect nature. A detailed characterization of the dynamics is necessary for a deeper understanding. The dynamic fluctuations thought crucial for function span an extremely broad range of time scales, starting in the picosecond regime. Recently, a few new experimental techniques have emerged that permit the direct observation of dynamical phenomena. Notably, pulsed infrared (IR) spectroscopy has been applied with great success to observe structural changes with picosecond time resolution. Using two-dimensional IR vibrational echo chemical exchange spectroscopy, Ishikawa and co-workers [Ishikawa et al. (2008), Proc. Natl. Acad. Sci. U.S.A. 101, 14402–14407] managed to observe the transition between well-defined conformational substates of carbonmonoxy myoglobin directly. This is an important step in improving our insight into the details of protein function. PMID:19436491

  5. Nonlinear-optical properties of thick composite media with vanadium dioxide nanoparticles. II. Self-focusing of mid-IR radiation

    NASA Astrophysics Data System (ADS)

    Vinogradova, O. P.; Ostrosablina, A. A.; Sidorov, A. I.

    2006-02-01

    This paper presents the experimental and theoretical results of a study of the interaction of pulsed laser radiation with thick composite media containing nanoparticles of vanadium dioxide (VO2). It is established that the reversible semiconductor-metal phase transition that occurs in the VO2 nanoparticles under the action of radiation can produce self-focusing of the mid-IR radiation by the formation of a photoinduced dynamic lens. An analysis is carried out of how the radiation intensity affects the dynamics of the given process.

  6. Visualizing Chemistry with Infrared Imaging

    ERIC Educational Resources Information Center

    Xie, Charles

    2011-01-01

    Almost all chemical processes release or absorb heat. The heat flow in a chemical system reflects the process it is undergoing. By showing the temperature distribution dynamically, infrared (IR) imaging provides a salient visualization of the process. This paper presents a set of simple experiments based on IR imaging to demonstrate its enormous…

  7. Simulations of the infrared, Raman, and 2D-IR photon echo spectra of water in nanoscale silica pores

    DOE PAGES

    Burris, Paul C.; Laage, Damien; Thompson, Ward H.

    2016-05-20

    Vibrational spectroscopy is frequently used to characterize nanoconfined liquids and probe the effect of the confining framework on the liquid structure and dynamics relative to the corresponding bulk fluid. However, it is still unclear what molecular-level information can be obtained from such measurements. In this paper, we address this question by using molecular dynamics (MD) simulations to reproduce the linear infrared (IR), Raman, and two-dimensional IR (2D-IR) photon echo spectra for water confined within hydrophilic (hydroxyl-terminated) silica mesopores. To simplify the spectra the OH stretching region of isotopically dilute HOD in D2O is considered. An empirical mapping approach is used to obtain the OH vibrational frequencies, transition dipoles, and transition polarizabilities from the MD simulations. The simulated linear IR and Raman spectra are in good general agreement with measured spectra of water in mesoporous silica reported in the literature. The key effect of confinement on the water spectrum is a vibrational blueshift for OH groups that are closest to the pore interface. The blueshift can be attributed to the weaker hydrogen bonds (H-bonds) formed between the OH groups and silica oxygen acceptors. Non-Condon effects greatly diminish the contribution of these OH moieties to the linear IR spectrum, but these weaker H-bonds are readily apparent in the Raman spectrum. The 2D-IR spectra have not yet been measured and thus the present results represent a prediction. Lastly, the simulated spectra indicate that it should be possible to probe the slower spectral diffusion of confined water compared to the bulk liquid by analysis of the 2D-IR spectra.

  8. Simulations of the infrared, Raman, and 2D-IR photon echo spectra of water in nanoscale silica pores.

    PubMed

    Burris, Paul C; Laage, Damien; Thompson, Ward H

    2016-05-21

    Vibrational spectroscopy is frequently used to characterize nanoconfined liquids and probe the effect of the confining framework on the liquid structure and dynamics relative to the corresponding bulk fluid. However, it is still unclear what molecular-level information can be obtained from such measurements. In this paper, we address this question by using molecular dynamics (MD) simulations to reproduce the linear infrared (IR), Raman, and two-dimensional IR (2D-IR) photon echo spectra for water confined within hydrophilic (hydroxyl-terminated) silica mesopores. To simplify the spectra the OH stretching region of isotopically dilute HOD in D2O is considered. An empirical mapping approach is used to obtain the OH vibrational frequencies, transition dipoles, and transition polarizabilities from the MD simulations. The simulated linear IR and Raman spectra are in good general agreement with measured spectra of water in mesoporous silica reported in the literature. The key effect of confinement on the water spectrum is a vibrational blueshift for OH groups that are closest to the pore interface. The blueshift can be attributed to the weaker hydrogen bonds (H-bonds) formed between the OH groups and silica oxygen acceptors. Non-Condon effects greatly diminish the contribution of these OH moieties to the linear IR spectrum, but these weaker H-bonds are readily apparent in the Raman spectrum. The 2D-IR spectra have not yet been measured and thus the present results represent a prediction. The simulated spectra indicate that it should be possible to probe the slower spectral diffusion of confined water compared to the bulk liquid by analysis of the 2D-IR spectra.

  9. Simulations of the infrared, Raman, and 2D-IR photon echo spectra of water in nanoscale silica pores

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burris, Paul C.; Thompson, Ward H., E-mail: wthompson@ku.edu; Laage, Damien, E-mail: damien.laage@ens.fr

    2016-05-21

    Vibrational spectroscopy is frequently used to characterize nanoconfined liquids and probe the effect of the confining framework on the liquid structure and dynamics relative to the corresponding bulk fluid. However, it is still unclear what molecular-level information can be obtained from such measurements. In this paper, we address this question by using molecular dynamics (MD) simulations to reproduce the linear infrared (IR), Raman, and two-dimensional IR (2D-IR) photon echo spectra for water confined within hydrophilic (hydroxyl-terminated) silica mesopores. To simplify the spectra the OH stretching region of isotopically dilute HOD in D2O is considered. An empirical mapping approach is used to obtain the OH vibrational frequencies, transition dipoles, and transition polarizabilities from the MD simulations. The simulated linear IR and Raman spectra are in good general agreement with measured spectra of water in mesoporous silica reported in the literature. The key effect of confinement on the water spectrum is a vibrational blueshift for OH groups that are closest to the pore interface. The blueshift can be attributed to the weaker hydrogen bonds (H-bonds) formed between the OH groups and silica oxygen acceptors. Non-Condon effects greatly diminish the contribution of these OH moieties to the linear IR spectrum, but these weaker H-bonds are readily apparent in the Raman spectrum. The 2D-IR spectra have not yet been measured and thus the present results represent a prediction. The simulated spectra indicate that it should be possible to probe the slower spectral diffusion of confined water compared to the bulk liquid by analysis of the 2D-IR spectra.

  10. Great Lakes Demonstration 2

    DTIC Science & Technology

    2012-06-01

    [Only fragments of this report are recoverable from the extracted record: a figure caption referencing a laser fluorometer; acronym-list entries (District Response Advisory Team; DRMM, Dynamic Risk Management Model; EPA, Environmental Protection Agency; FL, laser fluorometer; FOSC, Federal On-Scene...); and an excerpt noting that during this evolution the Hollyhock experimented with applying its ice-breaking capabilities to cut channels and pockets into the ice for oil.]

  11. A "H--ll-Fired Story": Hawthorne's Rhetoric of Rumor.

    ERIC Educational Resources Information Center

    Harshbarger, Scott

    1994-01-01

    Considers Nathaniel Hawthorne's literary technique of providing various, often conflicting, accounts of a narrative scene or event. Analyzes Hawthorne's rhetoric of rumor as featured in "The Scarlet Letter." Shows how Hawthorne tried to translate the dynamics of interpersonal communication into print in this novel. (HB)

  12. Robust tracking of respiratory rate in high-dynamic range scenes using mobile thermal imaging

    PubMed Central

    Cho, Youngjun; Julier, Simon J.; Marquardt, Nicolai; Bianchi-Berthouze, Nadia

    2017-01-01

    The ability to monitor the respiratory rate, one of the vital signs, is extremely important for the medical treatment, healthcare and fitness sectors. In many situations, mobile methods, which allow users to undertake everyday activities, are required. However, current monitoring systems can be obtrusive, requiring users to wear respiration belts or nasal probes. Alternatively, contactless digital image sensor based remote-photoplethysmography (PPG) can be used. However, remote PPG requires an ambient source of light, and does not work properly in dark places or under varying lighting conditions. Recent advances in thermographic systems have shrunk their size, weight and cost, to the point where it is possible to create smart-phone based respiration rate monitoring devices that are not affected by lighting conditions. However, mobile thermal imaging is challenged in scenes with high thermal dynamic ranges (e.g. due to the different environmental temperature distributions indoors and outdoors). This challenge is further amplified by general problems such as motion artifacts and low spatial resolution, leading to unreliable breathing signals. In this paper, we propose a novel and robust approach for respiration tracking which compensates for the negative effects of variations in the ambient temperature and motion artifacts and can accurately extract breathing rates in highly dynamic thermal scenes. The approach is based on tracking the nostril of the user and using local temperature variations to infer inhalation and exhalation cycles. It has three main contributions. The first is a novel Optimal Quantization technique which adaptively constructs a color mapping of absolute temperature to improve segmentation, classification and tracking. The second is the Thermal Gradient Flow method that computes thermal gradient magnitude maps to enhance the accuracy of the nostril region tracking. Finally, we introduce the Thermal Voxel method to increase the reliability of the captured respiration signals compared to the traditional averaging method. We demonstrate the extreme robustness of our system to track the nostril-region and measure the respiratory rate by evaluating it during controlled respiration exercises in high thermal dynamic scenes (e.g. strong correlation (r = 0.9987) with the ground truth from the respiration-belt sensor). We also demonstrate how our algorithm outperformed standard algorithms in settings with different amounts of environmental thermal changes and human motion. We open the tracked ROI sequences of the datasets collected for these studies (i.e. under both controlled and unconstrained real-world settings) to the community to foster work in this area. PMID:29082079
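
    Separately from the Optimal Quantization, Thermal Gradient Flow, and Thermal Voxel components, the final step of turning a tracked nostril-region signal into a breathing rate can be sketched very simply: exhalation warms and inhalation cools the region, so the rate appears as the dominant spectral peak of the ROI temperature signal within a physiological band. The band limits and names below are illustrative assumptions, not the paper's parameters:

        import numpy as np

        def respiration_rate_bpm(roi_means, fps, f_lo=0.1, f_hi=0.85):
            """Estimate the respiratory rate (breaths per minute) from the mean
            temperature of a tracked nostril ROI sampled at fps frames/s.
            The rate is taken as the dominant spectral peak between f_lo and
            f_hi Hz (here roughly 6-51 breaths per minute)."""
            x = np.asarray(roi_means, dtype=float)
            x = x - x.mean()
            spec = np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2
            freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
            band = (freqs >= f_lo) & (freqs <= f_hi)
            return 60.0 * freqs[band][np.argmax(spec[band])]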

  13. Robust tracking of respiratory rate in high-dynamic range scenes using mobile thermal imaging.

    PubMed

    Cho, Youngjun; Julier, Simon J; Marquardt, Nicolai; Bianchi-Berthouze, Nadia

    2017-10-01

    The ability to monitor the respiratory rate, one of the vital signs, is extremely important for the medical treatment, healthcare and fitness sectors. In many situations, mobile methods, which allow users to undertake everyday activities, are required. However, current monitoring systems can be obtrusive, requiring users to wear respiration belts or nasal probes. Alternatively, contactless digital image sensor based remote-photoplethysmography (PPG) can be used. However, remote PPG requires an ambient source of light, and does not work properly in dark places or under varying lighting conditions. Recent advances in thermographic systems have shrunk their size, weight and cost, to the point where it is possible to create smart-phone based respiration rate monitoring devices that are not affected by lighting conditions. However, mobile thermal imaging is challenged in scenes with high thermal dynamic ranges (e.g. due to the different environmental temperature distributions indoors and outdoors). This challenge is further amplified by general problems such as motion artifacts and low spatial resolution, leading to unreliable breathing signals. In this paper, we propose a novel and robust approach for respiration tracking which compensates for the negative effects of variations in the ambient temperature and motion artifacts and can accurately extract breathing rates in highly dynamic thermal scenes. The approach is based on tracking the nostril of the user and using local temperature variations to infer inhalation and exhalation cycles. It has three main contributions. The first is a novel Optimal Quantization technique which adaptively constructs a color mapping of absolute temperature to improve segmentation, classification and tracking. The second is the Thermal Gradient Flow method that computes thermal gradient magnitude maps to enhance the accuracy of the nostril region tracking. Finally, we introduce the Thermal Voxel method to increase the reliability of the captured respiration signals compared to the traditional averaging method. We demonstrate the extreme robustness of our system to track the nostril-region and measure the respiratory rate by evaluating it during controlled respiration exercises in high thermal dynamic scenes (e.g. strong correlation (r = 0.9987) with the ground truth from the respiration-belt sensor). We also demonstrate how our algorithm outperformed standard algorithms in settings with different amounts of environmental thermal changes and human motion. We open the tracked ROI sequences of the datasets collected for these studies (i.e. under both controlled and unconstrained real-world settings) to the community to foster work in this area.

  14. Experimental implementations of 2D IR spectroscopy through a horizontal pulse shaper design and a focal plane array detector

    PubMed Central

    Ghosh, Ayanjeet; Serrano, Arnaldo L.; Oudenhoven, Tracey A.; Ostrander, Joshua S.; Eklund, Elliot C.; Blair, Alexander F.; Zanni, Martin T.

    2017-01-01

    Aided by advances in optical engineering, two-dimensional infrared spectroscopy (2D IR) has developed into a promising method for probing structural dynamics in biophysics and material science. We report two new advances for 2D IR spectrometers. First, we report a fully reflective and totally horizontal pulse shaper, which significantly simplifies alignment. Second, we demonstrate the applicability of mid-IR focal plane arrays (FPAs) as suitable detectors in 2D IR experiments. FPAs have more pixels than conventional linear arrays and can be used to multiplex optical detection. We simultaneously measure the spectra of a reference beam, which improves the signal-to-noise ratio by a factor of 4, and of two additional beams that serve as orthogonally polarized probe pulses for 2D IR anisotropy experiments. PMID:26907414

  15. 2D IR spectra of cyanide in water investigated by molecular dynamics simulations

    USGS Publications Warehouse

    Lee, Myung Won; Carr, Joshua K.; Göllner, Michael; Hamm, Peter; Meuwly, Markus

    2013-01-01

    Using classical molecular dynamics simulations, the 2D infrared (IR) spectroscopy of CN− solvated in D2O is investigated. Depending on the force field parametrizations, most of which are based on multipolar interactions for the CN− molecule, the frequency-frequency correlation function and observables computed from it differ. Most notably, models based on multipoles for CN− and TIP3P for water yield quantitatively correct results when compared with experiments. Furthermore, the recent finding that T1 times are sensitive to the van der Waals ranges on the CN− is confirmed in the present study. For the linear IR spectrum, the best model reproduces the full widths at half maximum almost quantitatively (13.0 cm−1 vs. 14.9 cm−1) if the rotational contribution to the linewidth is included. Without the rotational contribution, the lines are too narrow by about a factor of two, which agrees with Raman and IR experiments. The computed tilt angles (or nodal slopes) α as a function of the 2D IR waiting time compare favorably with the measured ones, and the frequency fluctuation correlation function is invariably found to contain three time scales: a sub-ps, a 1 ps, and one on the 10-ps time scale. These time scales are discussed in terms of the structural dynamics of the surrounding solvent and it is found that the longest time scale (≈10 ps) most likely corresponds to solvent exchange between the first and second solvation shell, in agreement with interpretations from nuclear magnetic resonance measurements.
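
    The frequency fluctuation correlation function discussed above is computed from the instantaneous-frequency trajectory produced by the simulations and then fit with a few decaying components to extract the time scales. The sketch below (illustrative names and sampling interval, not the authors' code) shows that workflow:

        import numpy as np
        from scipy.optimize import curve_fit

        def ffcf(freqs, max_lag):
            """C(t) = <dw(t) dw(0)> from an instantaneous-frequency trajectory
            (e.g. cm^-1 per MD frame), evaluated out to max_lag frames."""
            dw = np.asarray(freqs, dtype=float)
            dw = dw - dw.mean()
            n = len(dw)
            return np.array([np.dot(dw[:n - k], dw[k:]) / (n - k) for k in range(max_lag)])

        def tri_exp(t, a1, t1, a2, t2, a3, t3):
            # three decay components, matching the sub-ps / 1 ps / ~10 ps picture
            return a1 * np.exp(-t / t1) + a2 * np.exp(-t / t2) + a3 * np.exp(-t / t3)

        # hypothetical usage: frames every 10 fs, correlate out to 20 ps
        # c = ffcf(freq_traj, max_lag=2000)
        # t = np.arange(2000) * 0.01                 # time axis in ps
        # params, _ = curve_fit(tri_exp, t, c, p0=[1.0, 0.1, 0.5, 1.0, 0.2, 10.0])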

  16. Dynamic Optical Filtration

    NASA Technical Reports Server (NTRS)

    Chretien, Jean-Loup (Inventor); Lu, Edward T. (Inventor)

    2005-01-01

    A dynamic optical filtration system and method effectively blocks bright light sources without impairing view of the remainder of the scene. A sensor measures light intensity and position so that selected cells of a shading matrix may interrupt the view of the bright light source by a receptor. A beamsplitter may be used so that the sensor may be located away from the receptor. The shading matrix may also be replaced by a digital micromirror device, which selectively sends image data to the receptor.
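
    A minimal sketch of the control logic implied above: threshold the sensor image, map bright pixels onto shading-matrix cells, and switch those cells opaque. The aligned-geometry assumption, the cell layout, and the names are illustrative, not the inventors' exact design:

        import numpy as np

        def update_shading_matrix(sensor_img, matrix_shape, threshold):
            """Decide which cells of a shading matrix to darken so bright sources
            are blocked while the rest of the scene stays visible.  Assumes the
            sensor image and the shading matrix are aligned views of the same
            scene (e.g. via the beamsplitter) and that the image dimensions are
            integer multiples of the matrix dimensions."""
            h, w = sensor_img.shape
            mh, mw = matrix_shape
            cells = sensor_img.reshape(mh, h // mh, mw, w // mw).max(axis=(1, 3))
            return cells > threshold        # True = cell switched opaque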

  17. Dynamic optical filtration

    NASA Technical Reports Server (NTRS)

    Chretien, Jean-Loup (Inventor); Lu, Edward T. (Inventor)

    2005-01-01

    A dynamic optical filtration system and method effectively blocks bright light sources without impairing view of the remainder of the scene. A sensor measures light intensity and position so that selected cells of a shading matrix may interrupt the view of the bright light source by a receptor. A beamsplitter may be used so that the sensor may be located away from the receptor. The shading matrix may also be replaced by a digital micromirror device, which selectively sends image data to the receptor.

  18. Image-Based Reconstruction and Analysis of Dynamic Scenes in a Landslide Simulation Facility

    NASA Astrophysics Data System (ADS)

    Scaioni, M.; Crippa, J.; Longoni, L.; Papini, M.; Zanzi, L.

    2017-12-01

    The application of image processing and photogrammetric techniques to dynamic reconstruction of landslide simulations in a scaled-down facility is described. Simulations are also used here for active-learning purpose: students are helped understand how physical processes happen and which kinds of observations may be obtained from a sensor network. In particular, the use of digital images to obtain multi-temporal information is presented. On one side, using a multi-view sensor set up based on four synchronized GoPro 4 Black® cameras, a 4D (3D spatial position and time) reconstruction of the dynamic scene is obtained through the composition of several 3D models obtained from dense image matching. The final textured 4D model allows one to revisit in dynamic and interactive mode a completed experiment at any time. On the other side, a digital image correlation (DIC) technique has been used to track surface point displacements from the image sequence obtained from the camera in front of the simulation facility. While the 4D model may provide a qualitative description and documentation of the experiment running, DIC analysis output quantitative information such as local point displacements and velocities, to be related to physical processes and to other observations. All the hardware and software equipment adopted for the photogrammetric reconstruction has been based on low-cost and open-source solutions.
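
    For a single surface point, the DIC step described above reduces to template matching by normalized cross-correlation between consecutive frames. The OpenCV-based sketch below is a conceptual illustration with assumed window sizes, not the software actually used in the facility:

        import cv2
        import numpy as np

        def dic_displacement(frame0, frame1, point, half=15, search=40):
            """Track one surface point between two grayscale frames of the same
            size and dtype (e.g. 8-bit) using normalized cross-correlation, a
            minimal digital image correlation step.  Returns the (dx, dy) pixel
            displacement; the point must lie at least half + search pixels from
            the image border."""
            x, y = point
            tpl = frame0[y - half:y + half + 1, x - half:x + half + 1]
            win = frame1[y - half - search:y + half + search + 1,
                         x - half - search:x + half + search + 1]
            res = cv2.matchTemplate(win, tpl, cv2.TM_CCOEFF_NORMED)
            _, _, _, max_loc = cv2.minMaxLoc(res)
            return max_loc[0] - search, max_loc[1] - search

    Dividing the displacement by the frame interval gives the local surface velocity that can then be related to the physical process and to the other sensor observations.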

  19. Digital amateur observations of Venus at 0.9μm

    NASA Astrophysics Data System (ADS)

    Kardasis, E.

    2017-09-01

    Venus' atmosphere is extremely dynamic, yet it is very difficult to observe any features on it in the visible and even in the near-IR range. Digital observations with planetary cameras in recent years routinely produce high-quality images, especially in the near-infrared (0.7-1μm), since IR wavelengths are less influenced by Earth's atmosphere and Venus' atmosphere is partially transparent in this spectral region. Continuous observations over a few hours may track dark atmospheric features on the dayside and determine their motion. In this work we present such observations and some dark-feature motion measurements at 0.9μm. Ground-based observations at this wavelength are rare and are complementary to in situ observations by JAXA's Akatsuki orbiter, which also studies the atmospheric dynamics of Venus in this band with the IR1 camera.

  20. Enhanced Vibrational Echo Correlation Spectrometer for the Study of Molecular Dynamics, Structures, and Analytical Applications

    DTIC Science & Technology

    2006-09-10

    ultrafast IR 2D vibrational echo spectrometer. The major improvement involved a new dual MCT array detector composed of two 32 x 1 element MCT IR... detector arrays. The dual array makes it possible to improve the signal-to-noise ratio in the heterodyne detection of the vibrational echo signal. To...are dispersed in a monochromator and then detected with the new 2x32-element MCT IR array detector. As discussed above, the function of the local

  1. Diogenite-like Features in the Spitzer IRS (5-35 micrometers) Spectrum of 956 ELISA

    NASA Technical Reports Server (NTRS)

    Lim, Lucy F.; Emery, Joshua P.; Moskovitz, Nicholas A.

    2009-01-01

    We report preliminary results from the Spitzer Infrared Spectrograph (IRS) observations of the V-type asteroid 956 Elisa. Elisa was observed as part of a campaign to measure the 5.2-38 micron spectra of small basaltic asteroids with the Spitzer IRS. Targets include members of the dynamical family of the unique large differentiated asteroid 4 Vesta ("Vestoids"), several outer-main-belt basaltic asteroids whose orbits exclude them from originating on 4 Vesta, and the basaltic near-Earth asteroid 4055 Magellan.

  2. Ultrafast 2D IR microscopy

    PubMed Central

    Baiz, Carlos R.; Schach, Denise; Tokmakoff, Andrei

    2014-01-01

    We describe a microscope for measuring two-dimensional infrared (2D IR) spectra of heterogeneous samples with μm-scale spatial resolution, sub-picosecond time resolution, and the molecular structure information of 2D IR, enabling the measurement of vibrational dynamics through correlations in frequency, time, and space. The setup is based on a fully collinear “one beam” geometry in which all pulses propagate along the same optics. Polarization, chopping, and phase cycling are used to isolate the 2D IR signals of interest. In addition, we demonstrate the use of vibrational lifetime as a contrast agent for imaging microscopic variations in molecular environments. PMID:25089490

  3. Quantifying the radiative and microphysical impacts of fire aerosols on cloud dynamics in the tropics using temporally offset satellite observations

    NASA Astrophysics Data System (ADS)

    Tosca, M. G.; Diner, D. J.; Garay, M. J.; Kalashnikova, O.

    2013-12-01

    Anthropogenic fires in Southeast Asia and Central America emit smoke that affects cloud dynamics, meteorology, and climate. We measured the cloud response to direct and indirect forcing from biomass burning aerosols using aerosol retrievals from the Multi-angle Imaging SpectroRadiometer (MISR) and non-synchronous cloud retrievals from the MODerate resolution Imaging Spectroradiometer (MODIS) from collocated morning and afternoon overpasses. Level 2 data from thirty-one individual scenes acquired between 2006 and 2010 were used to quantify changes in cloud fraction, cloud droplet size, cloud optical depth and cloud top temperature from morning (10:30am local time) to afternoon (1:30pm local time) in the presence of varying aerosol burdens. We accounted for large-scale meteorological differences between scenes by normalizing observed changes to the mean difference per individual scene. Elevated AODs reduced cloud fraction and cloud droplet size and increased cloud optical depths in both Southeast Asia and Central America. In mostly cloudy regions, aerosols significantly reduced cloud fraction and cloud droplet sizes, but in clear skies, cloud fraction, cloud optical thickness and cloud droplet sizes increased. In clouds with vertical development, aerosols reduced cloud fraction via semi-direct effects but spurred cloud growth via indirect effects. These results imply a positive feedback loop between anthropogenic burning and cloudiness in both Central America and Southeast Asia, and are consistent with previous studies linking smoke aerosols to both cloud reduction and convective invigoration.

  4. Using auditory pre-information to solve the cocktail-party problem: electrophysiological evidence for age-specific differences.

    PubMed

    Getzmann, Stephan; Lewald, Jörg; Falkenstein, Michael

    2014-01-01

    Speech understanding in complex and dynamic listening environments requires (a) auditory scene analysis, namely auditory object formation and segregation, and (b) allocation of the attentional focus to the talker of interest. There is evidence that pre-information is actively used to facilitate these two aspects of the so-called "cocktail-party" problem. Here, a simulated multi-talker scenario was combined with electroencephalography to study scene analysis and allocation of attention in young and middle-aged adults. Sequences of short words (combinations of brief company names and stock-price values) from four talkers at different locations were simultaneously presented, and the detection of target names and the discrimination between critical target values were assessed. Immediately prior to speech sequences, auditory pre-information was provided via cues that either prepared auditory scene analysis or attentional focusing, or non-specific pre-information was given. While performance was generally better in younger than older participants, both age groups benefited from auditory pre-information. The analysis of the cue-related event-related potentials revealed age-specific differences in the use of pre-cues: Younger adults showed a pronounced N2 component, suggesting early inhibition of concurrent speech stimuli; older adults exhibited a stronger late P3 component, suggesting increased resource allocation to process the pre-information. In sum, the results argue for an age-specific utilization of auditory pre-information to improve listening in complex dynamic auditory environments.

  5. Using auditory pre-information to solve the cocktail-party problem: electrophysiological evidence for age-specific differences

    PubMed Central

    Getzmann, Stephan; Lewald, Jörg; Falkenstein, Michael

    2014-01-01

    Speech understanding in complex and dynamic listening environments requires (a) auditory scene analysis, namely auditory object formation and segregation, and (b) allocation of the attentional focus to the talker of interest. There is evidence that pre-information is actively used to facilitate these two aspects of the so-called “cocktail-party” problem. Here, a simulated multi-talker scenario was combined with electroencephalography to study scene analysis and allocation of attention in young and middle-aged adults. Sequences of short words (combinations of brief company names and stock-price values) from four talkers at different locations were simultaneously presented, and the detection of target names and the discrimination between critical target values were assessed. Immediately prior to speech sequences, auditory pre-information was provided via cues that either prepared auditory scene analysis or attentional focusing, or non-specific pre-information was given. While performance was generally better in younger than older participants, both age groups benefited from auditory pre-information. The analysis of the cue-related event-related potentials revealed age-specific differences in the use of pre-cues: Younger adults showed a pronounced N2 component, suggesting early inhibition of concurrent speech stimuli; older adults exhibited a stronger late P3 component, suggesting increased resource allocation to process the pre-information. In sum, the results argue for an age-specific utilization of auditory pre-information to improve listening in complex dynamic auditory environments. PMID:25540608

  6. High dynamic range infrared radiometry and imaging

    NASA Technical Reports Server (NTRS)

    Coon, Darryl D.; Karunasiri, R. P. G.; Bandara, K. M. S. V.

    1988-01-01

    The use of cryogenically cooled, extrinsic silicon infrared detectors in an unconventional mode of operation that offers an unusually large dynamic range is described. The system performs intensity-to-frequency conversion at the focal plane via simple circuits with very low power consumption. The incident IR intensity controls the repetition rate of short-duration output pulses over a pulse-rate dynamic range of about 10^6. Theory indicates the possibility of a monotonic and approximately linear response over the full dynamic range. A comparison between theoretical and experimental results shows that the model provides a reasonably good description of the experimental data. Measurements of survivability with a very intense IR source were made on these devices and found to be very encouraging. Evidence continues to indicate that some variations in interpulse time intervals are deterministic rather than probabilistic.
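
    The response described above can be illustrated with a minimal sketch. The calibration constants below (dark pulse rate, responsivity) are purely hypothetical and are not taken from the paper; the sketch only shows how an approximately linear intensity-to-frequency conversion maps incident power onto pulse repetition rate over roughly six decades.

    ```python
    import numpy as np

    # Hypothetical calibration constants, chosen only for illustration.
    DARK_RATE_HZ = 1.0             # pulse rate with no incident IR
    RESPONSIVITY_HZ_PER_NW = 10.0  # pulse-rate increase per nW of incident IR

    def pulse_rate(ir_power_nw):
        """Approximately linear intensity-to-frequency conversion."""
        return DARK_RATE_HZ + RESPONSIVITY_HZ_PER_NW * np.asarray(ir_power_nw, dtype=float)

    def mean_interpulse_interval_s(ir_power_nw):
        """Mean time between output pulses for a given incident power."""
        return 1.0 / pulse_rate(ir_power_nw)

    # Sweep incident power over six decades and report the resulting pulse rates.
    powers_nw = np.logspace(-1, 5, 7)   # 0.1 nW .. 100 uW
    for p, f in zip(powers_nw, pulse_rate(powers_nw)):
        print(f"{p:12.1f} nW -> {f:14.1f} pulses/s")
    ```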

  7. Change deafness for real spatialized environmental scenes.

    PubMed

    Gaston, Jeremy; Dickerson, Kelly; Hipp, Daniel; Gerhardstein, Peter

    2017-01-01

    The everyday auditory environment is complex and dynamic; often, multiple sounds co-occur and compete for a listener's cognitive resources. 'Change deafness', framed as the auditory analog to the well-documented phenomenon of 'change blindness', describes the finding that changes presented within complex environments are often missed. The present study examines a number of stimulus factors that may influence change deafness under real-world listening conditions. Specifically, an AX (same-different) discrimination task was used to examine the effects of both spatial separation over a loudspeaker array and the type of change (sound source additions and removals) on discrimination of changes embedded in complex backgrounds. Results using signal detection theory and accuracy analyses indicated that, under most conditions, errors were significantly reduced for spatially distributed relative to non-spatial scenes. A second goal of the present study was to evaluate a possible link between memory for scene contents and change discrimination. Memory was evaluated by presenting a cued recall test following each trial of the discrimination task. Results using signal detection theory and accuracy analyses indicated that recall ability was similar in terms of accuracy, but there were reductions in sensitivity compared to previous reports. Finally, the present study used a large and representative sample of outdoor, urban, and environmental sounds, presented in unique combinations of nearly 1000 trials per participant. This enabled the exploration of the relationship between change perception and the perceptual similarity between change targets and background scene sounds. These (post hoc) analyses suggest both a categorical and a stimulus-level relationship between scene similarity and the magnitude of change errors.
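
    As a concrete illustration of the signal detection analysis mentioned above, the sketch below computes sensitivity (d') and criterion from hit and false-alarm counts in a same-different change task. The function name and the counts are illustrative assumptions, not values from the study.

    ```python
    from statistics import NormalDist

    def dprime(hits, misses, false_alarms, correct_rejections):
        """Sensitivity (d') and criterion (c) from response counts, using a
        log-linear correction so hit/false-alarm rates never reach 0 or 1."""
        hit_rate = (hits + 0.5) / (hits + misses + 1.0)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        z = NormalDist().inv_cdf
        d = z(hit_rate) - z(fa_rate)
        c = -0.5 * (z(hit_rate) + z(fa_rate))
        return d, c

    # Illustrative counts only (not data from the study).
    d, c = dprime(hits=70, misses=30, false_alarms=20, correct_rejections=80)
    print(f"d' = {d:.2f}, criterion c = {c:.2f}")
    ```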

  8. Quick realization of a ship steering training simulation system by virtual reality

    NASA Astrophysics Data System (ADS)

    Sun, Jifeng; Zhi, Pinghua; Nie, Weiguo

    2003-09-01

    This paper addresses two problems of a ship handling simulator. Firstly, 360° scene generation, especially 3D dynamic sea-wave modeling, is described. Secondly, a multi-computer implementation of the ship handling simulator is presented. This paper also gives the experimental results of the proposed ship handling simulator.

  9. Another Vision of Progressivism: Marion Richardson's Triumph and Tragedy.

    ERIC Educational Resources Information Center

    Smith, Peter

    1996-01-01

    Profiles the career and contributions of English art teacher Marion Richardson (1892-1946). A dynamic and assertive woman, Richardson changed British primary and secondary art teaching for many years through her ideas and practices. She often used "word pictures" (narrative descriptions of scenes or emotions) to inspire her students. (MJP)

  10. Forces and Motion: How Young Children Understand Causal Events

    ERIC Educational Resources Information Center

    Goksun, Tilbe; George, Nathan R.; Hirsh-Pasek, Kathy; Golinkoff, Roberta M.

    2013-01-01

    How do children evaluate complex causal events? This study investigates preschoolers' representation of "force dynamics" in causal scenes, asking whether (a) children understand how single and dual forces impact an object's movement and (b) this understanding varies across cause types (Cause, Enable, Prevent). Three-and-a half- to…

  11. The Contexts of Composing: A Dynamic Scene with Movable Centers.

    ERIC Educational Resources Information Center

    Wiley, Mark L.

    An examination of the transformations that the concept of genius undergoes when viewed through the apparently incommensurable expressivistic and social views of composing helps to reconcile phenomenologically objective descriptions of composing with value-laden descriptions of the self in the act of writing. When the description of composition is…

  12. Pedagogical Potentialities in the Dynamic Symbolism of Videocy.

    ERIC Educational Resources Information Center

    Fantaousakis, Chrysoula

    2001-01-01

    Examines the communicative effectiveness of content presented in the audiovisual mode of discourse. Ninety children in three grade levels individually viewed four scenes from an audiovisual cartoon. Questions the value placed on the audiovisual mode of communication and addresses its power to organize and present cultural knowledge. (Author/VWL)

  13. Get-in-the-Zone (GITZ) Transition Display Format for Changing Camera Views in Multi-UAV Operations

    DTIC Science & Technology

    2008-12-01

    the multi-UAV operator will switch between dynamic and static missions, each potentially involving very different scenario environments and task...another. Inspired by cinematography techniques to help audiences maintain spatial understanding of a scene across discrete film cuts, use of a

  14. Simulation of the FRP Product

    NASA Astrophysics Data System (ADS)

    Paugam, Ronan; Wooster, Martin; Johnston, Joshua; Gastellu-Etchegorry, Jean-Philippe

    2014-05-01

    Among the different alternative remote sensing technologies for estimating global fire carbon emission, thermally based measures of fire radiative power (FRP; and its temporal integration, fire radiative energy or FRE) have the potential to capture the spatial and temporal variability of fire occurrence. It has been shown that a strong linear relationship exists between the total amount of thermal radiant energy emitted by a fire over its lifetime (the FRE) and the amount of fuel burned. Since all vegetation is 50(±5)% carbon, it is in theory a simple matter to measure the FRE and estimate the carbon release. In a fire inventory like the Global Fire Assimilation System (GFAS), the total carbon emission is derived from a gridded FRE product forced by the MODIS observations, using Ct = β × FRE × Ef, where β is a conversion factor initially estimated from small-scale experiments as β = 0.368 and later derived for different biomes by comparison with the Global Fire Emission Database (GFED). The sensitivities of the above equation to (i) different types of fire activity (i.e., flaming, smoldering, torching), (ii) sensor view angles or (iii) soot/smoke absorption have not yet been well studied. The investigation of these types of sensitivity, and of the information content of thermal IR observations of actively burning fires in general, is one of the primary subjects of this study. Our approach is based on a combination of observational work and simulations conducted via the linkage of different fire models and the 3D radiative transfer (RT) model DART operating in the thermal domain. The radiation properties of a fire as seen from above its plume (e.g. by a space- or airborne sensor) depend on the temperature distribution, the gas concentration (mainly CO2, H2O), and the amount, shape, distribution and optical properties of the soot particles in the flame (where they are emitting) and in the cooling plume (where they are mainly absorbing). While gas and soot radiative properties can be estimated from the literature, their concentration and temperature are calculated from the output of fire models. Due to the large range of length scales involved in fire dynamics, a twofold approach is used to model the fire scene with (i) first the multi-phase model WFDS, which can handle fire sizes ranging from 1 m2 to 1 ha with a particular focus on flame-plume interaction, and (ii) then the mesoscale model WRF-fire, which can handle larger fires and the plume-atmosphere interaction (e.g. pyroconvection). In the former case, as the radiative transfer in WFDS is based on a gray-body assumption (WFDS only focuses on fire dynamics), the main challenge is to derive the radiative properties of the different components of the fire scene (soot and gas) for the different bands (optical and IR) solved in DART in order to re-process a multispectral RT. In the latter case, because WRF-fire is run at a resolution of tens of meters, pyrolysis and combustion processes cannot be resolved, and predicting the fire front dynamics requires an empirical model based on the Rothermel equation and the level set method. It is therefore necessary to use empirical relationships to determine: (i) the 3D structure of the flame defined by flame length, flame height and fire front depth derived from the rate of spread and residence time, (ii) the gas and soot concentration profile within the flame, and (iii) the convective flux generated by the flame.
    The development of these empirical relationships presents one of the main challenges of this work. Though this work is still ongoing, first results show the potential impact of view angle on the evaluation of FRP.
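
    The GFAS-style conversion quoted above, Ct = β × FRE × Ef, can be made concrete with a short sketch. The value β = 0.368 is the one cited in the abstract; its units (kg of dry matter per MJ) and the FRE and emission-factor numbers below are illustrative assumptions rather than results from this work.

    ```python
    def total_emission_kg(fre_mj, beta=0.368, emission_factor=0.5):
        """Estimate the emitted mass of a species from fire radiative energy.

        fre_mj          -- time-integrated fire radiative power, FRE (MJ)
        beta            -- FRE-to-fuel-consumption factor (kg/MJ, assumed units)
        emission_factor -- mass emitted per kg of fuel burned (kg/kg, illustrative)
        """
        return beta * fre_mj * emission_factor

    # Example: a fire that radiated 1.2e4 MJ over its lifetime.
    print(f"Emitted mass: {total_emission_kg(1.2e4):.0f} kg")
    ```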

  15. Unattended real-time re-establishment of visibility in high dynamic range video and stills

    NASA Astrophysics Data System (ADS)

    Abidi, B.

    2014-05-01

    We describe a portable unattended persistent surveillance system that corrects for harsh illumination conditions, where bright sunlight creates mixed contrast effects, i.e., heavy shadows and washouts. These effects result in high dynamic range scenes, where illuminance can vary from a few lux to six-figure values. When using regular monitors and cameras, such a wide span of illuminations can only be visualized if the actual range of values is compressed, leading to the creation of saturated and/or dark noisy areas and a loss of information in these areas. Images containing extreme mixed contrast cannot be fully enhanced from a single exposure, simply because all the information is not present in the original data; active intervention in the acquisition process is required. A software package, capable of integrating multiple types of COTS and custom cameras, ranging from Unmanned Aerial Systems (UAS) data links to digital single-lens reflex cameras (DSLRs), is described. Hardware and software are integrated via a novel smart data acquisition algorithm, which communicates to the camera the parameters that would maximize information content in the final processed scene. A fusion mechanism is then applied to the smartly acquired data, resulting in an enhanced scene where information in both dark and bright areas is revealed. Multi-threading and parallel processing are exploited to produce automatic real-time, full-motion corrected video. A novel enhancement algorithm was also devised to process data from legacy and non-controllable cameras. The software accepts and processes pre-recorded sequences and stills, enhances visible, night vision, and infrared data, and applies successfully to nighttime and dark scenes. Various user options are available, integrating custom functionalities of the application into intuitive and easy-to-use graphical interfaces. The ensuing increase in visibility in surveillance video and intelligence imagery will improve the performance and timely decision making of the human analyst, as well as that of unmanned systems performing automatic data exploitation, such as target detection and identification.
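
    The fusion step can be pictured with a deliberately simple sketch: each exposure is weighted per pixel by how well exposed it is, and the weighted frames are blended. This is a generic "well-exposedness" fusion, not the authors' smart-acquisition pipeline, and all parameter values are assumptions.

    ```python
    import numpy as np

    def fuse_exposures(frames, sigma=0.2):
        """Naive per-pixel exposure fusion: weight each frame by how close its
        pixels are to mid-gray, then blend. `frames` is a list of grayscale
        images scaled to [0, 1], acquired at different exposures."""
        stack = np.stack(frames, axis=0)
        weights = np.exp(-0.5 * ((stack - 0.5) / sigma) ** 2) + 1e-6
        weights /= weights.sum(axis=0, keepdims=True)
        return (weights * stack).sum(axis=0)

    # Synthetic under-, mid- and over-exposed versions of the same scene.
    rng = np.random.default_rng(0)
    scene = rng.uniform(0.0, 1.0, size=(4, 4))
    frames = [np.clip(scene * gain, 0.0, 1.0) for gain in (0.3, 1.0, 3.0)]
    print(fuse_exposures(frames))
    ```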

  16. Animation of natural scene by virtual eye-movements evokes high precision and low noise in V1 neurons

    PubMed Central

    Baudot, Pierre; Levy, Manuel; Marre, Olivier; Monier, Cyril; Pananceau, Marc; Frégnac, Yves

    2013-01-01

    Synaptic noise is thought to be a limiting factor for computational efficiency in the brain. In visual cortex (V1), ongoing activity is present in vivo, and spiking responses to simple stimuli are highly unreliable across trials. Stimulus statistics used to plot receptive fields, however, are quite different from those experienced during natural visuomotor exploration. We recorded V1 neurons intracellularly in the anaesthetized and paralyzed cat and compared their spiking and synaptic responses to full field natural images animated by simulated eye-movements to those evoked by simpler (grating) or higher dimensionality statistics (dense noise). In most cells, natural scene animation was the only condition where high temporal precision (in the 10–20 ms range) was maintained during sparse and reliable activity. At the subthreshold level, irregular but highly reproducible membrane potential dynamics were observed, even during long (several 100 ms) “spike-less” periods. We showed that both the spatial structure of natural scenes and the temporal dynamics of eye-movements increase the signal-to-noise ratio by a non-linear amplification of the signal combined with a reduction of the subthreshold contextual noise. These data support the view that the sparsening and the time precision of the neural code in V1 may depend primarily on three factors: (1) broadband input spectrum: the bandwidth must be rich enough for recruiting optimally the diversity of spatial and time constants during recurrent processing; (2) tight temporal interplay of excitation and inhibition: conductance measurements demonstrate that natural scene statistics narrow selectively the duration of the spiking opportunity window during which the balance between excitation and inhibition changes transiently and reversibly; (3) signal energy in the lower frequency band: a minimal level of power is needed below 10 Hz to reach consistently the spiking threshold, a situation rarely reached with visual dense noise. PMID:24409121

  17. Animation of natural scene by virtual eye-movements evokes high precision and low noise in V1 neurons.

    PubMed

    Baudot, Pierre; Levy, Manuel; Marre, Olivier; Monier, Cyril; Pananceau, Marc; Frégnac, Yves

    2013-01-01

    Synaptic noise is thought to be a limiting factor for computational efficiency in the brain. In visual cortex (V1), ongoing activity is present in vivo, and spiking responses to simple stimuli are highly unreliable across trials. Stimulus statistics used to plot receptive fields, however, are quite different from those experienced during natural visuomotor exploration. We recorded V1 neurons intracellularly in the anaesthetized and paralyzed cat and compared their spiking and synaptic responses to full field natural images animated by simulated eye-movements to those evoked by simpler (grating) or higher dimensionality statistics (dense noise). In most cells, natural scene animation was the only condition where high temporal precision (in the 10-20 ms range) was maintained during sparse and reliable activity. At the subthreshold level, irregular but highly reproducible membrane potential dynamics were observed, even during long (several 100 ms) "spike-less" periods. We showed that both the spatial structure of natural scenes and the temporal dynamics of eye-movements increase the signal-to-noise ratio by a non-linear amplification of the signal combined with a reduction of the subthreshold contextual noise. These data support the view that the sparsening and the time precision of the neural code in V1 may depend primarily on three factors: (1) broadband input spectrum: the bandwidth must be rich enough for recruiting optimally the diversity of spatial and time constants during recurrent processing; (2) tight temporal interplay of excitation and inhibition: conductance measurements demonstrate that natural scene statistics narrow selectively the duration of the spiking opportunity window during which the balance between excitation and inhibition changes transiently and reversibly; (3) signal energy in the lower frequency band: a minimal level of power is needed below 10 Hz to reach consistently the spiking threshold, a situation rarely reached with visual dense noise.

  18. Multiple pedestrian detection using IR LED stereo camera

    NASA Astrophysics Data System (ADS)

    Ling, Bo; Zeifman, Michael I.; Gibson, David R. P.

    2007-09-01

    As part of the U.S. Department of Transportation's Intelligent Vehicle Initiative (IVI) program, the Federal Highway Administration (FHWA) is conducting R&D in vehicle safety and driver information systems. There is an increasing number of applications where pedestrian monitoring is of high importance. Vision-based pedestrian detection in outdoor scenes is still an open challenge. People dress in very different colors that sometimes blend with the background, wear hats or carry bags, and stand, walk and change directions unpredictably. The background is varied, containing buildings, moving or parked cars, bicycles, street signs, signals, etc. Furthermore, existing pedestrian detection systems perform only during daytime, making it impossible to detect pedestrians at night. Under FHWA funding, we are developing a multi-pedestrian detection system using an IR LED stereo camera. This system, without using any templates, detects pedestrians through statistical pattern recognition utilizing 3D features extracted from the disparity map. A new IR LED stereo camera is being developed, which can help detect pedestrians during both daytime and nighttime. Using image differencing and denoising, we have also developed new methods to estimate the disparity map of pedestrians in near real time. Our system will have a hardware interface with the traffic controller through wireless communication. Once pedestrians are detected, traffic signals at the street intersections will change phases to alert the drivers of approaching vehicles. Initial test results using images collected at a street intersection show that our system can detect pedestrians in near real time.
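
    For readers unfamiliar with disparity maps, the sketch below shows a generic way to estimate one from a rectified stereo pair using standard OpenCV block matching. It is not the image-differencing/denoising method developed in this work; the file names and parameter values are placeholders.

    ```python
    import cv2
    import numpy as np

    def disparity_map(left_gray, right_gray, num_disparities=64, block_size=15):
        """Generic block-matching disparity estimate for a rectified stereo pair
        (8-bit grayscale frames). OpenCV returns fixed-point disparities x16."""
        matcher = cv2.StereoBM_create(numDisparities=num_disparities,
                                      blockSize=block_size)
        return matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0

    # Usage with hypothetical rectified IR frames:
    # left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    # right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
    # disp = disparity_map(left, right)
    ```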

  19. Evaluation of appropriate sensor specifications for space based ballistic missile detection

    NASA Astrophysics Data System (ADS)

    Schweitzer, Caroline; Stein, Karin; Wendelstein, Norbert

    2012-10-01

    The detection and tracking of ballistic missiles (BMs) during launch or cloud break using satellite-based electro-optical (EO) sensors is a promising possibility for pre-instructing early warning and fire control radars. However, the successful detection of a BM depends on the applied infrared (IR) channel, as emission and reflection of threat and background vary in different spectral (IR) bands and for different observation scenarios. In addition, the spatial resolution of the satellite-based system also conditions the signal-to-clutter ratio (SCR) and therefore the predictability of the flight path. Generally available satellite images provide data in spectral bands that are suitable for remote sensing applications and earth surface observations. However, in the field of BM early warning, these bands are not of interest, making the simulation of background data essential. The paper focuses on the analysis of IR bands suitable for missile detection by trading off the suppression of background signature against threat signal strength. This comprises a radiometric overview of the background radiation in different spectral bands for different climates and seasons as well as for various cloud types and covers. A brief investigation of the BM signature and its trajectory within a threat scenario is presented. Moreover, the influence on the SCR caused by different observation scenarios and varying spatial resolution is pointed out. The paper also introduces the software used for simulating natural background spectral radiance images, MATISSE ("Advanced Modeling of the Earth for Environment and Scenes Simulation") by ONERA [1].
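
    One commonly used definition of the signal-to-clutter ratio, sketched below, is the target excess over the background mean divided by the background (clutter) standard deviation. The paper does not spell out its exact formulation, so this definition and the radiance numbers are assumptions for illustration only.

    ```python
    import numpy as np

    def signal_to_clutter_ratio(target_radiance, background_patch):
        """SCR = (target - mean background) / std of background clutter."""
        background = np.asarray(background_patch, dtype=float)
        return (target_radiance - background.mean()) / background.std()

    # Illustrative numbers: a plume pixel against a cloudy background patch.
    rng = np.random.default_rng(1)
    background = rng.normal(loc=10.0, scale=0.8, size=(32, 32))  # W m^-2 sr^-1
    print(f"SCR = {signal_to_clutter_ratio(14.0, background):.1f}")
    ```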

  20. Directional satellite thermal IR measurements and modeling of a forest in winter and their relationship to air temperature

    NASA Astrophysics Data System (ADS)

    Balick, Lee K.; Ballard, Jerrell R., Jr.; Smith, James A.; Goltz, Stewart M.

    2002-01-01

    Data assimilation methods applied to hydrologic models can incorporate spatially distributed maps of near-surface temperature, especially if such measurements can be reliably inferred from satellite observations. Uncalibrated thermal IR imagery is sometimes scaled to temperature units to obtain such observations, using the assumption that dense forest canopies are close to air temperature. For fully leafed deciduous forest canopies in the summer, this approximation is usually valid within 2 °C. In a leafless canopy, however, the materials viewed are thick boles and branches and the forest floor, which can store heat and yield significantly higher variations. Winter coniferous forests are intermediate, with needles and branches being the predominant viewed materials. The US Department of Energy's Multispectral Thermal Imager (MTI) is an experimental satellite with the capability to perform quantitative scene measurements in the reflective and thermal infrared regions. Its multispectral thermal IR capability enables quantitative surface temperature retrieval if pixel emissivity is known. MTI is pointable and targeted the Howland, Maine, AmeriFlux research site operated by the University of Maine multiple times in the winter and spring of 2001. Supporting meteorological and optical depth measurements also were made from three towers at the site. Directional thermal models of forest woody materials and needles are driven by the surface measurements and compared to satellite data to help evaluate the relationship between air temperature and satellite thermal measurements as a function of look angle, day and night.

  1. Evidence for the interaction of the IRS 16 wind with the ionized and molecular gas at the Galactic center

    NASA Technical Reports Server (NTRS)

    Yusef-Zadeh, Farhad; Wardle, Mark

    1993-01-01

    We present a number of high-resolution radio images showing evidence for the dynamical interaction of the outflow arising from the IRS 16 complex with the ionized gas associated with the Northern Arm of Sgr A West, and with the northwestern segment of the circumnuclear molecular disk which engulfs the inner few parsecs of the Galactic center. We suggest that the wind disturbs the dynamics of the Northern Arm within 0.1 pc of the center, is responsible for the waviness of the arm at larger distances, and is collimated by Sgr A West and the circumnuclear disk. The waviness is discussed in terms of the Rayleigh-Taylor instability induced by the ram pressure of the wind incident on the surface of the Northern Arm. Another consequence of this interaction is the strong mid-IR polarization of the Northern Arm in the vicinity of the IRS 16 complex which is explained as a result of the ram pressure of the wind compressing the gas and the magnetic field.

  2. Effect of ionizing radiation exposure on Trypanosoma cruzi ubiquitin-proteasome system.

    PubMed

    Cerqueira, Paula G; Passos-Silva, Danielle G; Vieira-da-Rocha, João P; Mendes, Isabela Cecilia; de Oliveira, Karla A; Oliveira, Camila F B; Vilela, Liza F F; Nagem, Ronaldo A P; Cardoso, Joseane; Nardelli, Sheila C; Krieger, Marco A; Franco, Glória R; Macedo, Andrea M; Pena, Sérgio D J; Schenkman, Sérgio; Gomes, Dawidson A; Guerra-Sá, Renata; Machado, Carlos R

    2017-03-01

    In recent years, proteasome involvement in the damage response induced by ionizing radiation (IR) became evident. However, whether the proteasome plays a direct or indirect role in the IR-induced damage response is still unclear. Trypanosoma cruzi is a human parasite capable of remarkably high tolerance to IR, suggesting a highly efficient damage response system. Here, we investigate the role of the T. cruzi proteasome in the damage response induced by IR. We exposed epimastigotes to high doses of gamma rays and analyzed the expression and subcellular localization of several components of the ubiquitin-proteasome system. We show that proteasome inhibition increases IR-induced cell growth arrest and that proteasome-mediated proteolysis is altered after parasite exposure. We observed nuclear accumulation of 19S and 20S proteasome subunits in response to IR treatments. Intriguingly, the dynamic of 19S particle nuclear accumulation was more similar to the dynamic observed for Rad51 nuclear translocation than to that observed for 20S. On the other hand, the 20S increase and nuclear translocation could be related to an increase of its regulator PA26 and high levels of proteasome-mediated proteolysis in vitro. The intersection between the opposed peaks of 19S and 20S protein levels was marked by nuclear accumulation of both 20S and 19S together with ubiquitin, suggesting a role of the ubiquitin-proteasome system in nuclear protein turnover at that time. Our results reveal the importance of proteasome-mediated proteolysis in the T. cruzi IR-induced damage response, suggesting that the proteasome is also involved in T. cruzi IR tolerance. Moreover, our data support a possible direct/signaling role of 19S in DNA damage repair. Based on these results, we speculate that spatial and temporal differences between the 19S particle and the 20S proteasome control the proteasome's multiple roles in the IR damage response. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Teledyne H1RG, H2RG, and H4RG Noise Generator

    NASA Technical Reports Server (NTRS)

    Rauscher, Bernard J.

    2015-01-01

    This paper describes the near-infrared detector system noise generator (NG) that we wrote for the James Webb Space Telescope (JWST) Near Infrared Spectrograph (NIRSpec). NG simulates many important noise components, including: (1) white "read noise", (2) residual bias drifts, (3) pink 1/f noise, (4) alternating column noise, and (5) picture frame noise. By adjusting the input parameters, NG can simulate noise for Teledyne's H1RG, H2RG, and H4RG detectors with and without Teledyne's SIDECAR ASIC IR array controller. NG can be used as a starting point for simulating astronomical scenes by adding dark current, scattered light, and astronomical sources to the results from NG. NG is written in Python 3.4.
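
    Two of the listed components (white read noise and pink 1/f noise) can be mimicked with the short sketch below. It is not the NG code itself; the frame size, noise amplitudes and the simple spectral-shaping approach are assumptions made for illustration.

    ```python
    import numpy as np

    def white_read_noise(shape, sigma=10.0, rng=None):
        """Gaussian white read noise in DN (sigma is an illustrative value)."""
        rng = rng or np.random.default_rng()
        return rng.normal(0.0, sigma, size=shape)

    def pink_noise(n_samples, rng=None):
        """Approximate 1/f ('pink') noise by shaping white noise in the
        frequency domain, returned in readout order with unit variance."""
        rng = rng or np.random.default_rng()
        spectrum = np.fft.rfft(rng.normal(size=n_samples))
        freqs = np.fft.rfftfreq(n_samples)
        freqs[0] = freqs[1]              # avoid dividing by zero at DC
        spectrum /= np.sqrt(freqs)       # amplitude ~ 1/sqrt(f) -> power ~ 1/f
        pink = np.fft.irfft(spectrum, n=n_samples)
        return pink / pink.std()

    # Toy 256x256 frame combining the two components (NG models several more).
    shape = (256, 256)
    frame = white_read_noise(shape) + 3.0 * pink_noise(shape[0] * shape[1]).reshape(shape)
    print(f"frame std = {frame.std():.1f} DN")
    ```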

  4. HALO: a reconfigurable image enhancement and multisensor fusion system

    NASA Astrophysics Data System (ADS)

    Wu, F.; Hickman, D. L.; Parker, Steve J.

    2014-06-01

    Contemporary high definition (HD) cameras and affordable infrared (IR) imagers are set to dramatically improve the effectiveness of security, surveillance and military vision systems. However, the quality of imagery is often compromised by camera shake, or poor scene visibility due to inadequate illumination or bad atmospheric conditions. A versatile vision processing system called HALO™ is presented that can address these issues, by providing flexible image processing functionality on a low size, weight and power (SWaP) platform. Example processing functions include video distortion correction, stabilisation, multi-sensor fusion and image contrast enhancement (ICE). The system is based around an all-programmable system-on-a-chip (SoC), which combines the computational power of a field-programmable gate array (FPGA) with the flexibility of a CPU. The FPGA accelerates computationally intensive real-time processes, whereas the CPU provides management and decision making functions that can automatically reconfigure the platform based on user input and scene content. These capabilities enable a HALO™ equipped reconnaissance or surveillance system to operate in poor visibility, providing potentially critical operational advantages in visually complex and challenging usage scenarios. The choice of an FPGA based SoC is discussed, and the HALO™ architecture and its implementation are described. The capabilities of image distortion correction, stabilisation, fusion and ICE are illustrated using laboratory and trials data.

  5. Hyperspectral imaging using novel LWIR OPO for hazardous material detection and identification

    NASA Astrophysics Data System (ADS)

    Ruxton, Keith; Robertson, Gordon; Miller, Bill; Malcolm, Graeme P. A.; Maker, Gareth T.

    2014-05-01

    Current stand-off hyperspectral imaging detection solutions that operate in the mid-wave infrared (MWIR), nominally the 2.5 - 5 μm spectral region, are limited by the number of absorption bands that can be addressed. This issue is most apparent when evaluating a scene with multiple absorbers with overlapping spectral features, making accurate material identification challenging. This limitation can be overcome by moving to the long-wave IR (LWIR) region, which is rich in characteristic absorption features and can provide ample molecular information in order to perform presumptive identification relative to a spectral library. This work utilises an instrument platform to perform negative contrast imaging using a novel LWIR optical parametric oscillator (OPO) as the source. The OPO offers continuous tuning in the region 5.5 - 9.5 μm, which includes a number of molecular vibrations associated with the target material compositions. Scanning the scene of interest whilst sweeping the wavelength of the OPO emission will highlight the presence of a suspect material, and by analysing the resulting absorption spectrum, presumptive identification is possible. This work presents a selection of initial results using the LWIR hyperspectral imaging platform on a range of white powder materials to highlight the benefit of operating in the LWIR region compared to the MWIR.

  6. Dynamics of molecules in extreme rotational states

    PubMed Central

    Yuan, Liwei; Teitelbaum, Samuel W.; Robinson, Allison; Mullin, Amy S.

    2011-01-01

    We have constructed an optical centrifuge with a pulse energy that is more than 2 orders of magnitude larger than previously reported instruments. This high pulse energy enables us to create large enough number densities of molecules in extreme rotational states to perform high-resolution state-resolved transient IR absorption measurements. Here we report the first studies of energy transfer dynamics involving molecules in extreme rotational states. In these studies, the optical centrifuge drives CO2 molecules into states with J ∼ 220 and we use transient IR probing to monitor the subsequent rotational, translational, and vibrational energy flow dynamics. The results reported here provide the first molecular insights into the relaxation of molecules with rotational energy that is comparable to that of a chemical bond.

  7. Surface enrichment of Pt in stable Pt-Ir nano-alloy particles on MgAl2O4 spinel in oxidizing atmosphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Wei-Zhen; Nie, Lei; Cheng, Yingwen

    With the capability of MgAl2O4 spinel {111} nano-facets in stabilizing small Rh, Ir and Pt particles, bimetallic Ir-Pt catalysts on the same support were investigated, aiming at further lowering the catalyst cost by substituting expensive Pt with cheaper Ir in the bulk. Small Pt-Ir nano-alloy particles (<2 nm) were successfully stabilized on the spinel {111} nano-facets as expected. Interestingly, the methanol oxidative dehydrogenation (ODH) rate on the surface Pt atoms increases with oxidizing aging but decreases upon reducing treatment, whereas Ir is almost inactive under the same reaction conditions. Up to a threefold enhancement in Pt exposure was achieved when the sample was oxidized at 800 °C in air for 1 week and subsequently reduced by H2 for 2 h, demonstrating successful surface enrichment of Pt on Pt-Ir nano-alloy particles. A dynamic stabilization mechanism involving wetting

  8. EC-QCL mid-IR transmission spectroscopy for monitoring dynamic changes of protein secondary structure in aqueous solution on the example of β-aggregation in alcohol-denaturated α-chymotrypsin.

    PubMed

    Alcaráz, Mirta R; Schwaighofer, Andreas; Goicoechea, Héctor; Lendl, Bernhard

    2016-06-01

    In this work, a novel EC-QCL-based setup for mid-IR transmission measurements in the amide I region is introduced for monitoring dynamic changes in secondary structure of proteins. For this purpose, α-chymotrypsin (aCT) acts as a model protein, which gradually forms intermolecular β-sheet aggregates after adopting a non-native α-helical structure induced by exposure to 50% TFE. In order to showcase the versatility of the presented setup, the effects of varying pH values and protein concentration on the rate of β-aggregation were studied. The influence of the pH value on the initial reaction rate was studied in the range of pH 5.8-8.2. Results indicate an increased aggregation rate at elevated pH values. Furthermore, the widely accessible concentration range of the laser-based IR transmission setup was utilized to investigate β-aggregation across a concentration range of 5-60 mg mL^-1. For concentrations lower than 20 mg mL^-1, the aggregation rate appears to be independent of concentration. At higher values, the reaction rate increases linearly with protein concentration. Extended MCR-ALS was employed to obtain pure spectral and concentration profiles of the temporal transition between α-helices and intermolecular β-sheets. Comparison of the global solutions obtained by the modelled data with results acquired by the laser-based IR transmission setup at different conditions shows excellent agreement. This demonstrates the potential and versatility of the EC-QCL-based IR transmission setup to monitor dynamic changes of protein secondary structure in aqueous solution at varying conditions and across a wide concentration range. Graphical abstract: EC-QCL IR spectroscopy for monitoring protein conformation change.

  9. Characterizing Woody Vegetation Spectral and Structural Parameters with a 3-D Scene Model

    NASA Astrophysics Data System (ADS)

    Qin, W.; Yang, L.

    2004-05-01

    Quantification of structural and biophysical parameters of woody vegetation is of great significance in understanding vegetation condition, dynamics and functionality. Such information over a landscape scale is crucial for global and regional land cover characterization, global carbon-cycle research, forest resource inventories, and fire fuel estimation. While great efforts and progress have been made in mapping general land cover types over large areas, at present the ability to quantify regional woody vegetation structural and biophysical parameters is limited. One approach to address this research issue is through an integration of a physically based 3-D scene model with multiangle and multispectral remote sensing data and in-situ measurements. The first step of this work is to model woody vegetation structure and its radiation regime using a physically based 3-D scene model and field data, before a robust operational algorithm can be developed for retrieval of important woody vegetation structural/biophysical parameters. In this study, we use an advanced 3-D scene model recently developed by Qin and Gerstl (2000), based on L-systems and radiosity theories. This 3-D scene model has been successfully applied to semi-arid shrubland to study structure and radiation regime at a regional scale. We apply this 3-D scene model to a more complicated and heterogeneous forest environment dominated by deciduous and coniferous trees. The data used in this study are from a field campaign conducted by NASA in a portion of the Superior National Forest (SNF) near Ely, Minnesota during the summers of 1983 and 1984, supplemented by data collected during our revisit to the same area of the SNF in the summer of 2003. The model is first validated with reflectance measurements at different scales (ground observations, helicopter, aircraft, and satellite). Then its ability to characterize the structural and spectral parameters of the forest scene is evaluated. Based on the results from this study and the current multi-spectral and multi-angular satellite data (MODIS, MISR), a robust retrieval system to estimate woody vegetation structural/biophysical parameters is proposed.

  10. Dynamic full-field infrared imaging with multiple synchrotron beams

    PubMed Central

    Stavitski, Eli; Smith, Randy J.; Bourassa, Megan W.; Acerbo, Alvin S.; Carr, G. L.; Miller, Lisa M.

    2013-01-01

    Microspectroscopic imaging in the infrared (IR) spectral region allows for the examination of spatially resolved chemical composition on the microscale. More than a decade ago, it was demonstrated that diffraction-limited spatial resolution can be achieved when an apertured, single-pixel IR microscope is coupled to the high brightness of a synchrotron light source. Nowadays, many IR microscopes are equipped with multi-pixel Focal Plane Array (FPA) detectors, which dramatically improve data acquisition times for imaging large areas. Recently, progress has been made toward efficiently coupling synchrotron IR beamlines to multi-pixel detectors, but these efforts utilize expensive and highly customized optical schemes. Here we demonstrate the development and application of a simple optical configuration that can be implemented on most existing synchrotron IR beamlines in order to achieve full-field IR imaging with diffraction-limited spatial resolution. Specifically, the synchrotron radiation fan is extracted from the bending magnet and split into four beams that are combined on the sample, allowing it to fill a large section of the FPA. With this optical configuration, we are able to oversample an image by more than a factor of two, even at the shortest wavelengths, making image restoration through deconvolution algorithms possible. High chemical sensitivity, rapid acquisition times, and superior signal-to-noise characteristics of the instrument are demonstrated. The unique characteristics of this setup enabled the real-time study of heterogeneous chemical dynamics with diffraction-limited spatial resolution for the first time. PMID:23458231

  11. Comparison of infrared and 3D digital image correlation techniques applied for mechanical testing of materials

    NASA Astrophysics Data System (ADS)

    Krstulović-Opara, Lovre; Surjak, Martin; Vesenjak, Matej; Tonković, Zdenko; Kodvanj, Janoš; Domazet, Željko

    2015-11-01

    To investigate the applicability of infrared thermography as a tool for acquiring dynamic yielding in metals, a comparison of infrared thermography with three-dimensional digital image correlation has been made. Dynamic tension tests and three-point bending tests of aluminum alloys have been performed to evaluate results obtained by IR thermography and to determine the capabilities and limits of these two methods. Both approaches detect plastification zone migrations during the yielding process. The results of the tension test and three-point bending test proved the validity of the IR approach as a method for evaluating the dynamic yielding process when used on complex structures such as cellular porous materials. The stability of the yielding process in the three-point bending test, in contrast to the fluctuation of the plastification front in the tension test, is of great importance for the validation of numerical constitutive models. The research proved strong performance, robustness and reliability of the IR approach when used to evaluate yielding during dynamic loading processes, while the 3D DIC method proved to be superior in the low-velocity loading regimes. This research, based on two basic tests, confirmed the conclusions and suggestions presented in our previous research on porous materials, where middle-wave infrared thermography was applied.

  12. S-NPP VIIRS thermal band spectral radiance performance through 18 months of operation on-orbit

    NASA Astrophysics Data System (ADS)

    Moeller, Chris; Tobin, Dave; Quinn, Greg

    2013-09-01

    The Suomi National Polar-orbiting Partnership (S-NPP) satellite, carrying the first Visible Infrared Imager Radiometer Suite (VIIRS), was successfully launched on October 28, 2011, with first light on November 21, 2011. The passive cryo-radiator cooler doors were opened on January 18, 2012, allowing the cold focal planes (S/MWIR and LWIR) to cool to the nominal operating temperature of 80 K. After an early on-orbit functional checkout period, an intensive Cal/Val (ICV) phase has been underway. During the ICV, the VIIRS SDR performance for thermal emissive bands (TEB) has been under evaluation using on-orbit comparisons between VIIRS and the CrIS instrument on S-NPP, as well as VIIRS and the IASI instrument on MetOp-A. CrIS has spectral coverage of VIIRS bands M13, M15, M16, and I5, while IASI covers all VIIRS TEB. These comparisons largely verify that the VIIRS TEB SDR are performing within or nearly within pre-launch requirements across the full dynamic range of these VIIRS bands, with the possible exception of warm scenes (<280 K) in band M12, as suggested by VIIRS-IASI comparisons. The comparisons with CrIS also indicate that the VIIRS Half Angle Mirror (HAM) reflectance versus scan (RVS) is well characterized, given that the VIIRS-CrIS differences show little or no dependence on scan angle. The VIIRS-IASI and VIIRS-CrIS findings closely agree for bands M13, M15, and M16 for warm scenes, but small offsets exist at cold scenes for M15, M16, and particularly M13. IASI comparisons also show that the spectral out-of-band (OOB) influence on the VIIRS SDR is <0.05 K for all bands across the full dynamic range, with the exception of very cold scenes in band M13, where the OOB influence reaches 0.10 K. TEB performance, outside of small adjustments to the SDR algorithm and supporting look-up tables, has been very stable through 18 months on-orbit. Preliminary analysis from an S-NPP underflight using a NASA ER-2 aircraft with the SHIS instrument (NIST-traceable source) confirms TEB SDR accuracy as compliant for a typical warm earth scene (285-290 K).

  13. Dynamic changes in oxygenation of intracranial tumor and contralateral brain during tumor growth and carbogen breathing: A multisite EPR oximetry with implantable resonators

    PubMed Central

    Hou, Huagang; Dong, Ruhong; Li, Hongbin; Williams, Benjamin; Lariviere, Jean P.; Hekmatyar, S.K.; Kauppinen, Risto A.; Khan, Nadeem; Swartz, Harold

    2013-01-01

    Introduction Several techniques currently exist for measuring tissue oxygen; however, technical difficulties have limited their usefulness and general application. We report a recently developed electron paramagnetic resonance (EPR) oximetry approach with multiple-probe implantable resonators (IRs) that allows repeated measurements of oxygen in tissue at depths of greater than 10 mm. Methods The EPR signal-to-noise (S/N) ratio of two-probe IRs was compared with that of LiPc deposits. The feasibility of intracranial tissue pO2 measurements by EPR oximetry using IRs was tested in normal rats and rats bearing intracerebral F98 tumors. The dynamic changes in tissue pO2 were assessed during repeated hyperoxia with carbogen breathing. Results A 6–10 times increase in the S/N ratio was observed with IRs as compared to LiPc deposits. The mean brain pO2 of normal rats was stable and increased significantly during carbogen inhalation in experiments repeated for 3 months. The pO2 of F98 glioma declined gradually, while the pO2 of the contralateral brain remained essentially the same. Although a significant increase in glioma pO2 was observed during carbogen inhalation, this effect declined in experiments repeated over days. Conclusion EPR oximetry with IRs provides a significant increase in S/N ratio. The ability to repeatedly assess orthotopic glioma pO2 is likely to play a vital role in understanding the dynamics of tissue pO2 during tumor growth and therapies designed to modulate tumor hypoxia. This information could then be used to optimize chemoradiation by scheduling treatments at times of increased glioma oxygenation. PMID:22033225

  14. iTrack: instrumented mobile electrooculography (EOG) eye-tracking in older adults and Parkinson's disease.

    PubMed

    Stuart, Samuel; Hickey, Aodhán; Galna, Brook; Lord, Sue; Rochester, Lynn; Godfrey, Alan

    2017-01-01

    Detection of saccades (fast eye-movements) within raw mobile electrooculography (EOG) data involves complex algorithms which typically process data acquired during seated static tasks only. Processing of data during dynamic tasks such as walking is relatively rare and complex, particularly in older adults or people with Parkinson's disease (PD). Development of algorithms that can be easily implemented to detect saccades is required. This study aimed to develop an algorithm for the detection and measurement of saccades in EOG data during static (sitting) and dynamic (walking) tasks, in older adults and PD. Eye-tracking via mobile EOG and an infra-red (IR) eye-tracker (with video) was performed with a group of older adults (n = 10) and PD participants (n = 10) (⩾50 years). Horizontal saccades made between targets set 5°, 10° and 15° apart were first measured while seated. Horizontal saccades were then measured while a participant walked and executed a 40° turn left and right. The EOG algorithm was evaluated by comparing the number of correct saccade detections and agreement (ICC(2,1)) between output from visual inspection of eye-tracker videos and the IR eye-tracker. The EOG algorithm detected 75-92% of saccades compared to video inspection and IR output during static testing, with fair to excellent agreement (ICC(2,1) 0.49-0.93). However, during walking, EOG saccade detection reduced to 42-88% compared to video inspection or IR output, with poor to excellent agreement (ICC(2,1) 0.13-0.88) between methodologies. The algorithm was robust during seated testing but less so during walking, which was likely due to increased measurement and analysis error with a dynamic task. Future studies may consider a combination of EOG and IR for comprehensive measurement.
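
    A minimal velocity-threshold saccade detector, of the kind commonly applied to calibrated EOG traces, is sketched below. It is not the algorithm developed in this study; the sampling rate, thresholds and the synthetic trace are illustrative assumptions.

    ```python
    import numpy as np

    def detect_saccades(eog_deg, fs_hz, vel_thresh_deg_s=30.0, min_dur_s=0.01):
        """Return (start_index, end_index) pairs where the eye velocity of a
        horizontal EOG trace (calibrated to degrees) exceeds a threshold."""
        velocity = np.gradient(np.asarray(eog_deg, dtype=float)) * fs_hz  # deg/s
        above = np.abs(velocity) > vel_thresh_deg_s
        saccades, start = [], None
        for i, flag in enumerate(above):
            if flag and start is None:
                start = i
            elif not flag and start is not None:
                if (i - start) / fs_hz >= min_dur_s:
                    saccades.append((start, i))
                start = None
        return saccades

    # Synthetic trace: fixation, a 10-degree rightward saccade, fixation.
    fs = 250.0
    trace = np.concatenate([np.zeros(100), np.linspace(0.0, 10.0, 10), np.full(100, 10.0)])
    print(detect_saccades(trace, fs))
    ```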

  15. Immune-related tumor response dynamics in melanoma patients treated with pembrolizumab: Identifying markers for clinical outcome and treatment decisions

    PubMed Central

    Nishino, Mizuki; Giobbie-Hurder, Anita; Manos, Michael P.; Bailey, Nancy D.; Buchbinder, Elizabeth I.; Ott, Patrick A.; Ramaiya, Nikhil H.; Hodi, F. Stephen

    2017-01-01

    Purpose Characterize tumor burden dynamics during PD-1 inhibitor therapy and investigate the association with overall survival (OS) in advanced melanoma. Experimental Design The study included 107 advanced melanoma patients treated with pembrolizumab. Tumor burden dynamics were assessed on serial CT scans using irRECIST and were studied for the association with OS. Results Among 107 patients, 96 patients had measurable tumor burden and 11 had non-target lesions alone at baseline. In the 96 patients, maximal tumor shrinkage ranged from -100% to 567% (median: -18.5%). Overall response rate was 44% (42/96; 5 irCR, 37 irPR). Tumor burden remained <20% increase from baseline throughout therapy in 57 patients (55%). Using a 3-month landmark analysis, patients with <20% tumor burden increase from baseline had longer OS than patients with ≥20% increase (12-month OS rate: 82% vs. 53%). In extended Cox models, patients with <20% tumor burden increase during therapy had significantly reduced hazards of death (HR=0.19, 95%CI:0.08–0.43, p<0.0001 univariate; HR=0.18, 95%CI:0.08-0.41, p<0.0001, multivariable). Four patients (4%) experienced pseudoprogression; 3 patients had target lesion increase with subsequent response, which was noted after confirmed irPD. One patient without measurable disease progressed with a new lesion that subsequently regressed. Conclusions Tumor burden increase of <20% from the baseline during pembrolizumab therapy was associated with longer OS, providing a practical marker for guiding treatment decisions that needs to be prospectively validated. Pseudoprogressors may experience response after confirmed irPD, indicating a limitation of the current strategy for immune-related response evaluations. Evaluations of patients without measurable disease may require further attention. PMID:28592629

  16. Nonlocal Coulomb correlations in pure and electron-doped Sr2IrO4: Spectral functions, Fermi surface, and pseudo-gap-like spectral weight distributions from oriented cluster dynamical mean-field theory

    NASA Astrophysics Data System (ADS)

    Martins, Cyril; Lenz, Benjamin; Perfetti, Luca; Brouet, Veronique; Bertran, François; Biermann, Silke

    2018-03-01

    We address the role of nonlocal Coulomb correlations and short-range magnetic fluctuations in the high-temperature phase of Sr2IrO4 within state-of-the-art spectroscopic and first-principles theoretical methods. Introducing an "oriented-cluster dynamical mean-field scheme", we compute momentum-resolved spectral functions, which we find to be in excellent agreement with angle-resolved photoemission spectra. We show that while short-range antiferromagnetic fluctuations are crucial to accounting for the electronic properties of Sr2IrO4 even in the high-temperature paramagnetic phase, long-range magnetic order is not a necessary ingredient of the insulating state. Upon doping, an exotic metallic state is generated, exhibiting cuprate-like pseudo-gap spectral properties, for which we propose a surprisingly simple theoretical mechanism.

  17. Learned saliency transformations for gaze guidance

    NASA Astrophysics Data System (ADS)

    Vig, Eleonora; Dorr, Michael; Barth, Erhardt

    2011-03-01

    The saliency of an image or video region indicates how likely it is that the viewer of the image or video fixates that region due to its conspicuity. An intriguing question is how we can change the video region to make it more or less salient. Here, we address this problem by using a machine learning framework to learn, from a large set of eye movements collected on real-world dynamic scenes, how to alter the saliency level of the video locally. We derive saliency transformation rules by performing spatio-temporal contrast manipulations (on a spatio-temporal Laplacian pyramid) on the particular video region. Our goal is to improve visual communication by designing gaze-contingent interactive displays that change, in real time, the saliency distribution of the scene.

  18. Holographic three-dimensional telepresence using large-area photorefractive polymer.

    PubMed

    Blanche, P-A; Bablumian, A; Voorakaranam, R; Christenson, C; Lin, W; Gu, T; Flores, D; Wang, P; Hsieh, W-Y; Kathaperumal, M; Rachwal, B; Siddiqui, O; Thomas, J; Norwood, R A; Yamamoto, M; Peyghambarian, N

    2010-11-04

    Holography is a technique that is used to display objects or scenes in three dimensions. Such three-dimensional (3D) images, or holograms, can be seen with the unassisted eye and are very similar to how humans see the actual environment surrounding them. The concept of 3D telepresence, a real-time dynamic hologram depicting a scene occurring in a different location, has attracted considerable public interest since it was depicted in the original Star Wars film in 1977. However, the lack of sufficient computational power to produce realistic computer-generated holograms and the absence of large-area and dynamically updatable holographic recording media have prevented realization of the concept. Here we use a holographic stereographic technique and a photorefractive polymer material as the recording medium to demonstrate a holographic display that can refresh images every two seconds. A 50 Hz nanosecond pulsed laser is used to write the holographic pixels. Multicoloured holographic 3D images are produced by using angular multiplexing, and the full parallax display employs spatial multiplexing. 3D telepresence is demonstrated by taking multiple images from one location and transmitting the information via Ethernet to another location where the hologram is printed with the quasi-real-time dynamic 3D display. Further improvements could bring applications in telemedicine, prototyping, advertising, updatable 3D maps and entertainment.

  19. How music alters a kiss: superior temporal gyrus controls fusiform-amygdalar effective connectivity.

    PubMed

    Pehrs, Corinna; Deserno, Lorenz; Bakels, Jan-Hendrik; Schlochtermeier, Lorna H; Kappelhoff, Hermann; Jacobs, Arthur M; Fritz, Thomas Hans; Koelsch, Stefan; Kuchinke, Lars

    2014-11-01

    While watching movies, the brain integrates the visual information and the musical soundtrack into a coherent percept. Multisensory integration can lead to emotion elicitation on which soundtrack valences may have a modulatory impact. Here, dynamic kissing scenes from romantic comedies were presented to 22 participants (13 females) during functional magnetic resonance imaging scanning. The kissing scenes were either accompanied by happy music, sad music or no music. Evidence from cross-modal studies motivated a predefined three-region network for multisensory integration of emotion, consisting of fusiform gyrus (FG), amygdala (AMY) and anterior superior temporal gyrus (aSTG). The interactions in this network were investigated using dynamic causal models of effective connectivity. This revealed bilinear modulations by happy and sad music with suppression effects on the connectivity from FG and AMY to aSTG. Non-linear dynamic causal modeling showed a suppressive gating effect of aSTG on fusiform-amygdalar connectivity. In conclusion, fusiform to amygdala coupling strength is modulated via feedback through aSTG as region for multisensory integration of emotional material. This mechanism was emotion-specific and more pronounced for sad music. Therefore, soundtrack valences may modulate emotion elicitation in movies by differentially changing preprocessed visual information to the amygdala. © The Author (2013). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  20. A gaze-contingent display to study contrast sensitivity under natural viewing conditions

    NASA Astrophysics Data System (ADS)

    Dorr, Michael; Bex, Peter J.

    2011-03-01

    Contrast sensitivity has been extensively studied over the last decades and there are well-established models of early vision that were derived by presenting the visual system with synthetic stimuli such as sine-wave gratings near threshold contrasts. Natural scenes, however, contain a much wider distribution of orientations, spatial frequencies, and both luminance and contrast values. Furthermore, humans typically move their eyes two to three times per second under natural viewing conditions, but most laboratory experiments require subjects to maintain central fixation. We here describe a gaze-contingent display capable of performing real-time contrast modulations of video in retinal coordinates, thus allowing us to study contrast sensitivity when dynamically viewing dynamic scenes. Our system is based on a Laplacian pyramid for each frame that efficiently represents individual frequency bands. Each output pixel is then computed as a locally weighted sum of pyramid levels to introduce local contrast changes as a function of gaze. Our GPU implementation achieves real-time performance with more than 100 fps on high-resolution video (1920 by 1080 pixels) and a synthesis latency of only 1.5ms. Psychophysical data show that contrast sensitivity is greatly decreased in natural videos and under dynamic viewing conditions. Synthetic stimuli therefore only poorly characterize natural vision.
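
    The pyramid machinery described above can be sketched in a few lines: build a Laplacian pyramid by repeated blur-downsample-upsample differences, then recombine the levels with per-level gains. The real display applies gaze-dependent local weights per pixel on the GPU; the global gains, filter sizes and SciPy-based implementation below are simplifying assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, zoom

    def build_laplacian_pyramid(image, levels=4):
        """Band-pass pyramid via blur, downsample and upsample differences."""
        pyramid, current = [], np.asarray(image, dtype=float)
        for _ in range(levels - 1):
            low = gaussian_filter(current, sigma=1.0)
            down = low[::2, ::2]
            up = zoom(down, 2, order=1)[:current.shape[0], :current.shape[1]]
            pyramid.append(current - up)   # band-pass residual
            current = down
        pyramid.append(current)            # low-pass residual
        return pyramid

    def recombine(pyramid, gains):
        """Weighted sum of pyramid levels; the scalar gains stand in for the
        gaze-contingent local weights used in the actual display."""
        out = pyramid[-1] * gains[-1]
        for band, g in zip(reversed(pyramid[:-1]), reversed(gains[:-1])):
            out = zoom(out, 2, order=1)[:band.shape[0], :band.shape[1]] + g * band
        return out

    # With all gains equal to 1 the original frame is recovered (up to rounding);
    # lowering one gain attenuates the contrast in that frequency band.
    frame = np.random.default_rng(2).uniform(size=(64, 64))
    rebuilt = recombine(build_laplacian_pyramid(frame), gains=[1.0, 1.0, 1.0, 1.0])
    print(float(np.abs(rebuilt - frame).max()))
    ```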

  1. How music alters a kiss: superior temporal gyrus controls fusiform–amygdalar effective connectivity

    PubMed Central

    Deserno, Lorenz; Bakels, Jan-Hendrik; Schlochtermeier, Lorna H.; Kappelhoff, Hermann; Jacobs, Arthur M.; Fritz, Thomas Hans; Koelsch, Stefan; Kuchinke, Lars

    2014-01-01

    While watching movies, the brain integrates the visual information and the musical soundtrack into a coherent percept. Multisensory integration can lead to emotion elicitation on which soundtrack valences may have a modulatory impact. Here, dynamic kissing scenes from romantic comedies were presented to 22 participants (13 females) during functional magnetic resonance imaging scanning. The kissing scenes were either accompanied by happy music, sad music or no music. Evidence from cross-modal studies motivated a predefined three-region network for multisensory integration of emotion, consisting of fusiform gyrus (FG), amygdala (AMY) and anterior superior temporal gyrus (aSTG). The interactions in this network were investigated using dynamic causal models of effective connectivity. This revealed bilinear modulations by happy and sad music with suppression effects on the connectivity from FG and AMY to aSTG. Non-linear dynamic causal modeling showed a suppressive gating effect of aSTG on fusiform–amygdalar connectivity. In conclusion, fusiform to amygdala coupling strength is modulated via feedback through aSTG as region for multisensory integration of emotional material. This mechanism was emotion-specific and more pronounced for sad music. Therefore, soundtrack valences may modulate emotion elicitation in movies by differentially changing preprocessed visual information to the amygdala. PMID:24298171

  2. Generation, recognition, and consistent fusion of partial boundary representations from range images

    NASA Astrophysics Data System (ADS)

    Kohlhepp, Peter; Hanczak, Andrzej M.; Li, Gang

    1994-10-01

    This paper presents SOMBRERO, a new system for recognizing and locating 3D, rigid, non-moving objects from range data. The objects may be polyhedral or curved, partially occluding, touching or lying flush with each other. For data collection, we employ 2D time-of-flight laser scanners mounted to a moving gantry robot. By combining sensor and robot coordinates, we obtain 3D Cartesian coordinates. Boundary representations (Breps) provide view-independent geometry models that are both efficiently recognizable and derivable automatically from sensor data. SOMBRERO's methods for generating, matching and fusing Breps are highly synergetic. A split-and-merge segmentation algorithm with dynamic triangulation builds a partial (2.5D) Brep from scattered data. The recognition module matches this scene description with a model database and outputs recognized objects, their positions and orientations, and possibly surfaces corresponding to unknown objects. We present preliminary results in scene segmentation and recognition. Partial Breps corresponding to different range sensors or viewpoints can be merged into a consistent, complete and irredundant 3D object or scene model. This fusion algorithm itself uses the recognition and segmentation methods.

  3. Salient contour extraction from complex natural scene in night vision image

    NASA Astrophysics Data System (ADS)

    Han, Jing; Yue, Jiang; Zhang, Yi; Bai, Lian-fa

    2014-03-01

    The theory of center-surround interaction in the non-classical receptive field can be applied to night vision information processing. In this work, an optimized compound receptive field modulation method is proposed to extract salient contours from complex natural scenes in low-light-level (LLL) and infrared images. The key idea is that multi-feature analysis can recognize inhomogeneity in the modulatory coverage more accurately, and that center-surround groupings whose structure satisfies the Gestalt rule deserve a high connection probability. Computationally, a multi-feature contrast-weighted inhibition model is presented to suppress background and lower mutual inhibition among contour elements; a fuzzy connection facilitation model is proposed to enhance contour response, connect discontinuous contours, and further eliminate randomly distributed noise and texture; and a multi-scale iterative attention method is designed to carry out the dynamic modulation process and extract contours of targets at multiple sizes. This work provides a series of biologically motivated, high-performance computational visual models for contour detection in cluttered night vision scenes.

  4. Continuous video coherence computing model for detecting scene boundaries

    NASA Astrophysics Data System (ADS)

    Kang, Hang-Bong

    2001-07-01

    Scene boundary detection is important in the semantic understanding of video data and is usually based on coherence between shots. To measure this coherence, two approaches have been proposed: a discrete approach and a continuous approach. In this paper, we use the continuous approach and propose some modifications to the causal First-In-First-Out (FIFO) short-term-memory-based model. One modification is that we allow a dynamic memory size so that coherence is computed reliably regardless of the size of each shot. Another modification is that some shots can be removed from the memory buffer independently of the FIFO rule; these removed shots contain no or only small foreground objects. Using this model, we detect scene boundaries by computing shot coherence. In computing coherence, we add a new term, the number of intermediate shots between the two shots being compared, because the effect of intermediate shots is important in computing shot recall. In addition, we also consider shot activity, which is important for reflecting human perception. We evaluated our computing model on videos of different genres and obtained reasonable results.
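
    For illustration, a minimal sketch of a memory-based shot-coherence computation in the spirit of the record above. The shot descriptor (one feature vector per shot), the cosine similarity, and the exponential discount on the intermediate-shot term are assumptions made for the sketch, not the authors' exact formulation.

```python
import numpy as np

def shot_coherence(shots, memory_size=8, alpha=0.1):
    """Toy coherence score per shot boundary.

    shots: list of feature vectors (e.g., mean color histograms), one per shot.
    Coherence at boundary i compares shot i against the shots held in a
    short-term memory buffer, discounted by how many shots lie in between.
    Low coherence suggests a scene boundary.
    """
    coherence = []
    for i in range(1, len(shots)):
        memory = shots[max(0, i - memory_size):i]      # FIFO short-term memory
        scores = []
        for k, past in enumerate(memory):
            gap = len(memory) - 1 - k                  # intermediate shots between the pair
            sim = float(np.dot(past, shots[i]) /
                        (np.linalg.norm(past) * np.linalg.norm(shots[i]) + 1e-9))
            scores.append(sim * np.exp(-alpha * gap))  # discount distant shots
        coherence.append(max(scores))
    return coherence

# Usage: scene boundaries are declared where coherence drops below a threshold
rng = np.random.default_rng(0)
shots = [rng.random(16) for _ in range(20)]
print(shot_coherence(shots)[:5])
```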

  5. A compressed sensing method with analytical results for lidar feature classification

    NASA Astrophysics Data System (ADS)

    Allen, Josef D.; Yuan, Jiangbo; Liu, Xiuwen; Rahmes, Mark

    2011-04-01

    We present an innovative way to autonomously classify LiDAR points into bare earth, building, vegetation, and other categories. One desirable product of LiDAR data is the automatic classification of the points in the scene. Our algorithm automatically classifies scene points using compressed sensing methods, via Orthogonal Matching Pursuit and a generalized K-means clustering algorithm, to extract buildings and foliage from a Digital Surface Model (DSM). This technology reduces manual editing while being cost-effective for large-scale automated global scene modeling. Quantitative analyses are provided using Receiver Operating Characteristic (ROC) curves to show probability of detection and false alarm for building versus vegetation classification. Histograms are shown with sample size metrics. Our inpainting algorithms then fill the voids where buildings and vegetation were removed, utilizing Computational Fluid Dynamics (CFD) techniques and Partial Differential Equations (PDE) to create an accurate Digital Terrain Model (DTM) [6]. Inpainting preserves building height contour consistency and edge sharpness of identified inpainted regions. Qualitative results illustrate other benefits such as Terrain Inpainting's unique ability to minimize or eliminate undesirable terrain data artifacts.
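
    For illustration, a minimal Orthogonal Matching Pursuit routine of the kind the record above builds on. The random dictionaries, sparsity level, and nearest-class residual rule are assumptions for the sketch rather than details taken from the paper.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: approximate y as a k-sparse combination
    of columns of the dictionary D (columns assumed unit-norm)."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # atom most correlated with residual
        if j not in support:
            support.append(j)
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x = np.zeros(D.shape[1])
    x[support] = coeffs
    return x

# Toy use: classify a LiDAR feature vector by which class dictionary explains it best
rng = np.random.default_rng(1)
D_ground = rng.normal(size=(32, 20)); D_ground /= np.linalg.norm(D_ground, axis=0)
D_veg = rng.normal(size=(32, 20)); D_veg /= np.linalg.norm(D_veg, axis=0)
y = D_ground @ rng.normal(size=20) * 0.1 + D_ground[:, 3]
errs = {name: np.linalg.norm(y - D @ omp(D, y, k=5))
        for name, D in (("ground", D_ground), ("vegetation", D_veg))}
print(min(errs, key=errs.get))
```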

  6. An unusual pedestrian road trauma: from forensic pathology to forensic veterinary medicine.

    PubMed

    Aquila, Isabella; Di Nunzio, Ciro; Paciello, Orlando; Britti, Domenico; Pepe, Francesca; De Luca, Ester; Ricci, Pietrantonio

    2014-01-01

    Traffic accidents have increased in the last decade, pedestrians being the most affected group. At autopsy, it is evident that the most common cause of pedestrian death is central nervous system injury, followed by skull base fractures, internal bleeding, lower limb haemorrhage, skull vault fractures, cervical spinal cord injury and airway compromise. The attribution of accident responsibility can be realised through reconstruction of road accident dynamics, investigation of the scene, survey of the vehicle involved and examination of the victim(s). A case study concerning a car accident where both humans and pets were involved is reported here. Investigation and reconstruction of the crime scene were conducted by a team consisting of forensic pathologists and forensic veterinarians. At the scene investigation, the pedestrian and his dog were recovered on the side of the road. An autopsy and a necropsy were conducted on the man and the dog, respectively. In addition, a complete inspection of the sports utility vehicle (SUV) implicated in the road accident was conducted. The results of the autopsy and necropsy were compared and the information was used to reconstruct the collision. This unusual case was solved through the collaboration between forensic pathology and veterinary forensic medicine, emphasising the importance of this kind of co-operation to solve a crime scene concerning both humans and animals. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  7. Best-next-view algorithm for three-dimensional scene reconstruction using range images

    NASA Astrophysics Data System (ADS)

    Banta, J. E.; Zhien, Yu; Wang, X. Z.; Zhang, G.; Smith, M. T.; Abidi, Mongi A.

    1995-10-01

    The primary focus of the research detailed in this paper is to develop an intelligent sensing module capable of automatically determining the optimal next sensor position and orientation during scene reconstruction. To facilitate a solution to this problem, we have assembled a system for reconstructing a 3D model of an object or scene from a sequence of range images. Candidates for the best-next-view position are determined by detecting and measuring occlusions to the range camera's view in an image. Ultimately, the candidate that will reveal the greatest amount of unknown scene information is selected as the best-next-view position. Our algorithm uses ray tracing to determine how much new information a given sensor perspective will reveal. We have tested our algorithm successfully on several synthetic range data streams, and found the system's results to be consistent with an intuitive human search. The models recovered by our system from range data compared well with the ideal models. Essentially, we have proven that range information of physical objects can be employed to automatically reconstruct a satisfactory dynamic 3D computer model at a minimal computational expense. This has obvious implications in the contexts of robot navigation, manufacturing, and hazardous materials handling. The algorithm we developed requires no a priori information to find the best-next-view position.
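
    As a rough illustration of this kind of next-view scoring, the sketch below casts rays on a 2-D occupancy grid and counts how many unknown cells a candidate viewpoint would reveal before hitting an occupied cell. The grid representation, candidate list, and scoring rule are assumptions for the sketch (the paper works with 3-D range images), not the authors' implementation.

```python
import numpy as np

def unknown_cells_revealed(grid, viewpoint, n_rays=180, max_range=40):
    """Score a candidate viewpoint on a 2-D occupancy grid by casting rays and
    counting UNKNOWN cells seen before the first OCCUPIED cell.
    grid values: 0 = free, 1 = occupied, -1 = unknown."""
    h, w = grid.shape
    seen = set()
    for ang in np.linspace(0.0, 2 * np.pi, n_rays, endpoint=False):
        dx, dy = np.cos(ang), np.sin(ang)
        for step in range(1, max_range):
            x = int(round(viewpoint[0] + dx * step))
            y = int(round(viewpoint[1] + dy * step))
            if not (0 <= x < w and 0 <= y < h):
                break
            if grid[y, x] == 1:          # occluded: stop this ray
                break
            if grid[y, x] == -1:
                seen.add((x, y))
    return len(seen)

# Pick the best next view among hypothetical candidate positions
grid = -np.ones((60, 60), dtype=int)
grid[20:40, 20:40] = 0                    # already explored region
grid[30, 25:35] = 1                       # a wall casting occlusions
candidates = [(25, 22), (35, 38), (22, 35)]
best = max(candidates, key=lambda p: unknown_cells_revealed(grid, p))
print("best next view:", best)
```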

  8. Dynamic Evolution of the Chloroplast Genome in the Green Algal Classes Pedinophyceae and Trebouxiophyceae.

    PubMed

    Turmel, Monique; Otis, Christian; Lemieux, Claude

    2015-07-01

    Previous studies of trebouxiophycean chloroplast genomes revealed little information regarding the evolutionary dynamics of this genome because taxon sampling was too sparse and the relationships between the sampled taxa were unknown. We recently sequenced the chloroplast genomes of 27 trebouxiophycean and 2 pedinophycean green algae to resolve the relationships among the main lineages recognized for the Trebouxiophyceae. These taxa and the previously sampled members of the Pedinophyceae and Trebouxiophyceae are included in the comparative chloroplast genome analysis we report here. The 38 genomes examined display considerable variability at all levels, except gene content. Our results highlight the high propensity of the rDNA-containing large inverted repeat (IR) to vary in size, gene content and gene order as well as the repeated losses it experienced during trebouxiophycean evolution. Of the seven predicted IR losses, one event demarcates a superclade of 11 taxa representing 5 late-diverging lineages. IR expansions/contractions account not only for changes in gene content in this region but also for changes in gene order and gene duplications. Inversions also led to gene rearrangements within the IR, including the reversal or disruption of the rDNA operon in some lineages. Most of the 20 IR-less genomes are more rearranged compared with their IR-containing homologs and tend to show an accelerated rate of sequence evolution. In the IR-less superclade, several ancestral operons were disrupted, a few genes were fragmented, and a subgroup of taxa features a G+C-biased nucleotide composition. Our analyses also unveiled putative cases of gene acquisitions through horizontal transfer. © The Author(s) 2015. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  9. Vision Based SLAM in Dynamic Scenes

    DTIC Science & Technology

    2012-12-20

    …the correct relative poses between cameras at frame F. For this purpose, we detect and match SURF features between cameras in different groups, and … all cameras in such a challenging case. For a comparison, we disabled the 'inter-camera pose estimation' and applied the 'intra-camera pose esti…

  10. The Changed/Changing Educational Scene: The State of the Art.

    ERIC Educational Resources Information Center

    Bunke, Clinton R.; And Others

    A special attempt was made to organize this document in terms of the requirements of its readership (members of the Commission on Children, White House Conference, 1980). To this end, the first section, entitled "Gaining Perspective," provides an overview of developments, expectations, issues, dynamics, culture shock, and future trends in…

  11. Comprehension of Infrequent Subject-Verb Agreement Forms: Evidence from French-Learning Children

    ERIC Educational Resources Information Center

    Legendre, Geraldine; Barriere, Isabelle; Goyet, Louise; Nazzi, Thierry

    2010-01-01

    Two comprehension experiments were conducted to investigate whether young French-learning children (N = 76) are able to use a single number cue in subject-verb agreement contexts and match a visually dynamic scene with a corresponding verbal stimulus. Results from both preferential looking and pointing demonstrated significant comprehension in…

  12. Toward a Script Theory of Guidance in Computer-Supported Collaborative Learning

    ERIC Educational Resources Information Center

    Fischer, Frank; Kollar, Ingo; Stegmann, Karsten; Wecker, Christof

    2013-01-01

    This article presents an outline of a script theory of guidance for computer-supported collaborative learning (CSCL). With its 4 types of components of internal and external scripts (play, scene, role, and scriptlet) and 7 principles, this theory addresses the question of how CSCL practices are shaped by dynamically reconfigured internal…

  13. Action Learning--A Process Which Supports Organisational Change Initiatives

    ERIC Educational Resources Information Center

    Joyce, Pauline

    2012-01-01

    This paper reflects on how action learning sets (ALSs) were used to support organisational change initiatives. It sets the scene with contextualising the inclusion of change projects in a masters programme. Action learning is understood to be a dynamic process where a team meets regularly to help individual members address issues through a highly…

  14. Orbifold Schur index and IR formula

    NASA Astrophysics Data System (ADS)

    Imamura, Yosuke

    2018-04-01

    We discuss an orbifold version of the Schur index, defined as the supersymmetric partition function on S^3/Z_n × S^1. We first give a general formula for Lagrangian theories obtained by the localization technique, and then suggest a generalization of the Cordova and Shao IR formula. We confirm that the generalized IR formula gives the correct answer for systems with free hypermultiplets if we tune the background fields so that they are invariant under the orbifold action. Unfortunately, we find disagreement for theories with dynamical vector multiplets.

  15. Research on simulation technology of full-path infrared tail flame tracking of photoelectric theodolite in complicated environment

    NASA Astrophysics Data System (ADS)

    Wu, Hai-ying; Zhang, San-xi; Liu, Biao; Yue, Peng; Weng, Ying-hui

    2018-02-01

    The photoelectric theodolite is an important means of realizing tracking, detection, quantitative measurement and performance evaluation of weapon systems on an ordnance test range. With increasing stability requirements for target tracking in complex environments, infrared scene simulation with a high sense of reality and complex interference has become an indispensable technical means of evaluating the tracking performance of a photoelectric theodolite. The tail flame is the most important infrared radiation source of the weapon system, and a highly realistic dynamic tail flame is a key element of photoelectric theodolite infrared scene simulation and imaging tracking tests. In this paper, an infrared simulation method for full-path tracking of the tail flame by a photoelectric theodolite is proposed, addressing the tail flame's faint boundary and its irregular, multi-regulated points. In this work, real tail-flame images are employed, and infrared texture conversion technology is used to generate a DDS texture for a particle system map. Thus, dynamic, real-time tail flame simulation results with high fidelity from the theodolite perspective can be obtained during the tracking process.

  16. Guided filter-based fusion method for multiexposure images

    NASA Astrophysics Data System (ADS)

    Hou, Xinglin; Luo, Haibo; Qi, Feng; Zhou, Peipei

    2016-11-01

    It is challenging to capture a high-dynamic range (HDR) scene using a low-dynamic range camera. A weighted sum-based image fusion (IF) algorithm is proposed so as to express an HDR scene with a high-quality image. This method mainly includes three parts. First, two image features, i.e., gradients and well-exposedness are measured to estimate the initial weight maps. Second, the initial weight maps are refined by a guided filter, in which the source image is considered as the guidance image. This process could reduce the noise in initial weight maps and preserve more texture consistent with the original images. Finally, the fused image is constructed by a weighted sum of source images in the spatial domain. The main contributions of this method are the estimation of the initial weight maps and the appropriate use of the guided filter-based weight maps refinement. It provides accurate weight maps for IF. Compared to traditional IF methods, this algorithm avoids image segmentation, combination, and the camera response curve calibration. Furthermore, experimental results demonstrate the superiority of the proposed method in both subjective and objective evaluations.
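
    A minimal sketch of a weighted-sum exposure fusion with guided-filter-refined weight maps, in the spirit of the record above. The Laplacian-based detail measure, the Gaussian well-exposedness curve, the box-filter guided filter, and the file names are illustrative assumptions, not the paper's exact formulation.

```python
import cv2
import numpy as np

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Minimal grayscale guided filter (He et al. formulation with box filters)."""
    ksize = (2 * radius + 1, 2 * radius + 1)
    mean = lambda x: cv2.blur(x, ksize)
    mean_I, mean_p = mean(guide), mean(src)
    cov_Ip = mean(guide * src) - mean_I * mean_p
    var_I = mean(guide * guide) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return mean(a) * guide + mean(b)

def fuse_exposures(images, radius=8, eps=1e-3):
    """Weighted-sum multi-exposure fusion with guided-filter-refined weights.
    images: list of float32 BGR images in [0, 1] of the same size."""
    weights = []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        grad = np.abs(cv2.Laplacian(gray, cv2.CV_32F))            # detail measure
        well_exposed = np.exp(-((gray - 0.5) ** 2) / (2 * 0.2 ** 2))
        w = grad * well_exposed + 1e-6
        weights.append(guided_filter(gray, w, radius, eps))       # guidance = source image
    weights = np.clip(np.stack(weights), 1e-6, None)
    weights /= weights.sum(axis=0, keepdims=True)                 # normalise per pixel
    fused = sum(w[..., None] * img for w, img in zip(weights, images))
    return np.clip(fused, 0.0, 1.0)

# Usage with hypothetical file names for the bracketed exposures
exposures = [cv2.imread(f, cv2.IMREAD_COLOR).astype(np.float32) / 255.0
             for f in ("under.jpg", "mid.jpg", "over.jpg")]
cv2.imwrite("fused.png", (fuse_exposures(exposures) * 255).astype(np.uint8))
```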

  17. IR-drop analysis for validating power grids and standard cell architectures in sub-10nm node designs

    NASA Astrophysics Data System (ADS)

    Ban, Yongchan; Wang, Chenchen; Zeng, Jia; Kye, Jongwook

    2017-03-01

    Since chip performance and power are highly dependent on the operating voltage, a robust power distribution network (PDN) is of utmost importance in designs to provide a reliable voltage without voltage (IR) drop. However, the rapid increase of parasitic resistance and capacitance (RC) in interconnects makes IR-drop much worse with technology scaling. This paper shows various IR-drop analyses in sub-10nm designs. The major objectives are to validate standard cell architectures, where different sizes of power/ground and metal tracks are evaluated, and to validate the PDN architecture, where types of power hook-up approaches are evaluated with IR-drop calculation. To estimate IR-drops in 10nm-and-below technologies, we first prepare physically routed designs given standard cell libraries, where we use open RISC RTL, synthesize the CPU, and apply placement and routing with process-design kits (PDKs). Then, static and dynamic IR-drop flows are set up with commercial tools. Using the IR-drop flow, we compare standard cell architectures and analyze impacts on performance, power, and area (PPA) relative to previous technology-node designs. With this IR-drop flow, we can select the PDN structure that is best against IR-drop as well as the type of standard cell library.

  18. Epitaxial growth of Al9Ir2 intermetallic compound on Al(100): Mechanism and interface structure

    NASA Astrophysics Data System (ADS)

    Kadok, J.; Pussi, K.; Šturm, S.; Ambrožič, B.; Gaudry, É.; de Weerd, M.-C.; Fournée, V.; Ledieu, J.

    2018-04-01

    The adsorption of Ir adatoms on Al(100) has been investigated under various exposures and temperature conditions. The experimental and theoretical results reveal a diffusion of Ir adatoms within the Al(100) surface selvedge already at 300 K. Above 593 K, two domains of a (√5×√5)R26.6° phase are identified by low energy electron diffraction (LEED) and scanning tunneling microscopy measurements. This phase corresponds to the initial growth of an Al9Ir2 compound at the Al(100) surface. The Al9Ir2 intermetallic domains are terminated by bulklike pure Al layers. The structural stability of Al9Ir2(001) grown on Al(100) has been analyzed by density functional theory based calculations. Dynamical LEED analysis is consistent with an Ir adsorption leading to the growth of an Al9Ir2 intermetallic compound. We propose that the epitaxial relationship Al9Ir2(001) ∥ Al(100) and Al9Ir2[100] ∥ Al[031]/[013] originates from a matching of Al atomic arrangements present both on Al(100) and on pure Al(001) layers present in the Al9Ir2 compound. Finally, the interface between Al9Ir2 precipitates and the Al matrix has been characterized by transmission electron microscopy measurements. The cross-sectional observations are consistent with the formation of Al9Ir2(001) compounds. These measurements indicate an important Ir diffusion within Al(100) near the surface region. The coherent interface between Al9Ir2 and the Al matrix is sharp.

  19. Kernel Density Estimation as a Measure of Environmental Exposure Related to Insulin Resistance in Breast Cancer Survivors.

    PubMed

    Jankowska, Marta M; Natarajan, Loki; Godbole, Suneeta; Meseck, Kristin; Sears, Dorothy D; Patterson, Ruth E; Kerr, Jacqueline

    2017-07-01

    Background: Environmental factors may influence breast cancer; however, most studies have measured environmental exposure in neighborhoods around home residences (static exposure). We hypothesize that tracking environmental exposures over time and space (dynamic exposure) is key to assessing total exposure. This study compares breast cancer survivors' exposure to walkable and recreation-promoting environments using dynamic Global Positioning System (GPS) and static home-based measures of exposure in relation to insulin resistance. Methods: GPS data from 249 breast cancer survivors living in San Diego County were collected for one week along with fasting blood draw. Exposure to recreation spaces and walkability was measured for each woman's home address within an 800 m buffer (static), and using a kernel density weight of GPS tracks (dynamic). Participants' exposure estimates were related to insulin resistance (using the homeostatic model assessment of insulin resistance, HOMA-IR) controlled by age and body mass index (BMI) in linear regression models. Results: The dynamic measurement method resulted in greater variability in built environment exposure values than did the static method. Regression results showed no association between HOMA-IR and home-based, static measures of walkability and recreation area exposure. GPS-based dynamic measures of both walkability and recreation area were significantly associated with lower HOMA-IR (P < 0.05). Conclusions: Dynamic exposure measurements may provide important evidence for community- and individual-level interventions that can address cancer risk inequities arising from environments wherein breast cancer survivors live and engage. Impact: This is the first study to compare associations of dynamic versus static built environment exposure measures with insulin outcomes in breast cancer survivors. Cancer Epidemiol Biomarkers Prev; 26(7); 1078-84. ©2017 American Association for Cancer Research.
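
    A toy sketch of the dynamic versus static exposure contrast described above: exposure is computed as a Gaussian-kernel-weighted sum of nearby environmental features, evaluated either along the whole GPS track or only at the home location. The projected coordinates, 200 m bandwidth, Gaussian kernel, and unit feature weights are illustrative assumptions, not the study's actual GIS workflow.

```python
import numpy as np

def kde_exposure(track_xy, feature_xy, feature_value=None, bandwidth=200.0):
    """Dynamic (GPS-based) exposure: kernel-weighted sum of environmental
    features around every point of a movement track.

    track_xy:   (N, 2) projected GPS coordinates in metres
    feature_xy: (M, 2) locations of the environmental feature (e.g. park entrances)
    feature_value: optional (M,) weights such as park area; defaults to 1
    """
    track_xy = np.asarray(track_xy, dtype=float)
    feature_xy = np.asarray(feature_xy, dtype=float)
    vals = np.ones(len(feature_xy)) if feature_value is None else np.asarray(feature_value, float)
    # Pairwise distances between track points and features
    d = np.linalg.norm(track_xy[:, None, :] - feature_xy[None, :, :], axis=2)
    k = np.exp(-0.5 * (d / bandwidth) ** 2)          # Gaussian kernel weight
    # Average over the track so longer tracks are not automatically "more exposed"
    return float((k * vals).sum(axis=1).mean())

def static_exposure(home_xy, feature_xy, feature_value=None, bandwidth=200.0):
    """Static comparison: the same kernel evaluated at the home location only."""
    return kde_exposure(np.asarray(home_xy, float)[None, :], feature_xy,
                        feature_value, bandwidth)

rng = np.random.default_rng(3)
track = np.cumsum(rng.normal(scale=30.0, size=(500, 2)), axis=0)   # toy week of movement
parks = rng.uniform(-2000, 2000, size=(15, 2))
print(kde_exposure(track, parks), static_exposure(track[0], parks))
```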

  20. Active visual search in non-stationary scenes: coping with temporal variability and uncertainty

    NASA Astrophysics Data System (ADS)

    Ušćumlić, Marija; Blankertz, Benjamin

    2016-02-01

    Objective. State-of-the-art experiments for studying neural processes underlying visual cognition often constrain sensory inputs (e.g., static images) and our behavior (e.g., fixed eye-gaze, long eye fixations), isolating or simplifying the interaction of neural processes. Motivated by the non-stationarity of our natural visual environment, we investigated the electroencephalography (EEG) correlates of visual recognition while participants overtly performed visual search in non-stationary scenes. We hypothesized that visual effects (such as those typically used in human-computer interfaces) may increase temporal uncertainty (with reference to fixation onset) of cognition-related EEG activity in an active search task and therefore require novel techniques for single-trial detection. Approach. We addressed fixation-related EEG activity in an active search task with respect to stimulus-appearance styles and dynamics. Alongside popping-up stimuli, our experimental study embraces two composite appearance styles based on fading-in, enlarging, and motion effects. Additionally, we explored whether the knowledge obtained in the pop-up experimental setting can be exploited to boost the EEG-based intention-decoding performance when facing transitional changes of visual content. Main results. The results confirmed our initial hypothesis that the dynamic of visual content can increase temporal uncertainty of the cognition-related EEG activity in active search with respect to fixation onset. This temporal uncertainty challenges the pivotal aim to keep the decoding performance constant irrespective of visual effects. Importantly, the proposed approach for EEG decoding based on knowledge transfer between the different experimental settings gave a promising performance. Significance. Our study demonstrates that the non-stationarity of visual scenes is an important factor in the evolution of cognitive processes, as well as in the dynamic of ocular behavior (i.e., dwell time and fixation duration) in an active search task. In addition, our method to improve single-trial detection performance in this adverse scenario is an important step in making brain-computer interfacing technology available for human-computer interaction applications.

  1. Protective effect of mitochondrial-targeted antioxidant MitoQ against iron ion 56Fe radiation induced brain injury in mice.

    PubMed

    Gan, Lu; Wang, Zhenhua; Si, Jing; Zhou, Rong; Sun, Chao; Liu, Yang; Ye, Yancheng; Zhang, Yanshan; Liu, Zhiyuan; Zhang, Hong

    2018-02-15

    Exposure to iron ion 56Fe radiation (IR) during space missions poses a significant risk to the central nervous system, and radiation exposure is intimately linked to the production of reactive oxygen species (ROS). MitoQ is a mitochondria-targeted antioxidant that has been shown to decrease oxidative damage and lower mitochondrial ROS in a number of animal models. Therefore, the present study aimed to investigate the role of the mitochondria-targeted antioxidant MitoQ against 56Fe particle irradiation-induced oxidative damage and mitochondrial dysfunction in mouse brains. Increased ROS levels were observed in mouse brains after IR compared with the control group. Enhanced ROS production leads to disruption of cellular antioxidant defense systems, mitochondrial respiration dysfunction, altered mitochondrial dynamics and increased release of cytochrome c (cyto c) from mitochondria into the cytosol, resulting in apoptotic cell death. MitoQ reduced IR-induced oxidative stress (decreased ROS production and increased SOD and CAT activities) with decreased lipid peroxidation as well as reduced protein and DNA oxidation. MitoQ also protected mitochondrial respiration after IR. In addition, MitoQ increased the expression of mitofusin 2 (Mfn2) and optic atrophy 1 (OPA1), and decreased the expression of dynamin-like protein (Drp1). MitoQ also suppressed mitochondrial DNA damage, cyto c release, and caspase-3 activity in IR-treated mice compared to the control group. These results demonstrate that MitoQ may protect against IR-induced brain injury. Copyright © 2018 Elsevier Inc. All rights reserved.

  2. Ultrabroadband Two-Dimensional Coherent Optical Spectrometer for Directed Energy Trapping in Quantum Dynamical Systems

    DTIC Science & Technology

    2015-12-04

    …in the 2DFT spectrum. [Figure 8 caption fragment] Comparison of 2DFT spectra: absolute-value 2DFT spectra of (a) IR-144 cyanine dye and (b) LH2, reconstructed from a subset of the Hadamard-encoded measurements [10% (819 spatial masks) for IR-144 and 35% (2867 spatial masks) for LH2]. Diagonal peaks arise from…

  3. Impact of Infrared Lunar Laser Ranging on Lunar Dynamics

    NASA Astrophysics Data System (ADS)

    Viswanathan, Vishnu; Fienga, Agnès; Manche, Hervé; Gastineau, Mickael; Courde, Clément; Torre, Jean-Marie; Exertier, Pierre; Laskar, Jacques; LLR Observers : Astrogeo-OCA, Apache Point, McDonald Laser Ranging Station, Haleakala Observatory, Matera Laser Ranging Observatory

    2016-10-01

    Since 2015, in addition to the traditional green (532 nm), infrared (1064 nm) has been the preferred wavelength for lunar laser ranging at the Calern lunar laser ranging (LLR) site in France. Due to the better atmospheric transmission of IR with respect to green, nearly 3 times as many normal points have been obtained in IR as in green [C. Courde et al. 2016]. In our study, in addition to the historical data obtained from various other LLR sites, we include the recent IR normal points obtained from Calern over a one-year time span (2015-2016), constituting about 4.2% of the data spread over 46 years of LLR. The near-even distribution of data provided by IR in both the spatial and temporal domains helps us to improve constraints on the internal structure of the Moon modeled within the planetary ephemeris INPOP [Fienga et al. 2015]. IERS-recommended models have been used in the data reduction software GINS (GRGS, CNES) [V. Viswanathan et al. 2015]. Constraints provided by GRAIL on the lunar gravitational potential and Love numbers have been taken into account in the least-squares fit procedure. New estimates of the dynamical parameters of the lunar core will be presented.

  4. A Q-switched Ho:YAG laser assisted nanosecond time-resolved T-jump transient mid-IR absorbance spectroscopy with high sensitivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Deyong; Li, Yunliang; Li, Hao

    2015-05-15

    Knowledge of the dynamical structure of a protein is an important clue to understanding its biological function in vivo. Temperature-jump (T-jump) time-resolved transient mid-IR absorbance spectroscopy is a powerful tool in elucidating the protein dynamical structures and the folding/unfolding kinetics of proteins in solution. A home-built setup of T-jump time-resolved transient mid-IR absorbance spectroscopy with high sensitivity is developed, which is composed of a Q-switched Cr, Tm, Ho:YAG laser with an output wavelength at 2.09 μm as the T-jump heating source, and a continuously working CO laser tunable from 1580 to 1980 cm⁻¹ as the IR probe. The results demonstrate that this system has a sensitivity of 1 × 10⁻⁴ ΔOD for single-wavelength detection, and 2 × 10⁻⁴ ΔOD for spectral detection in the amide I′ region, as well as a temporal resolution of 20 ns. Moreover, the data quality coming from the CO laser is comparable to that obtained using a commercial quantum cascade laser.

  5. Ultrafast dynamics of localized magnetic moments in the unconventional Mott insulator Sr2IrO4

    DOE PAGES

    Krupin, O.; Dakovski, G. L.; Kim, B. J.; ...

    2016-06-16

    Here, we report a time-resolved study of the ultrafast dynamics of the magnetic moments formed by the J_eff = 1/2 states in Sr2IrO4 by directly probing the localized iridium 5d magnetic state through resonant x-ray diffraction. Using optical pump–hard x-ray probe measurements, two relaxation time scales were determined: a fast fluence-independent relaxation is found to take place on a time scale of 1.5 ps, followed by a slower relaxation on a time scale of 500 ps–1.5 ns.

  6. Electron dynamics and prompt ablation of aluminum surface excited by intense femtosecond laser pulse

    NASA Astrophysics Data System (ADS)

    Ionin, A. A.; Kudryashov, S. I.; Makarov, S. V.; Seleznev, L. V.; Sinitsyn, D. V.

    2014-12-01

    Thin aluminum film homogeneously heated by intense IR femtosecond laser pulses exhibits on the excitation timescale consequent fluence-dependent rise and drop of the IR-pump self-reflectivity, followed by its final saturation at higher fluences F > 0.3 J/cm2. This prompt optical dynamics correlates with the initial monotonic increase in the accompanying laser-induced electron emission, which is succeeded by its non-linear (three-photon) increase for F > 0.3 J/cm2. The underlying electronic dynamics is related to the initial saturation of IR resonant interband transitions in this material, followed by its strong instantaneous electronic heating via intraband transitions during the pump pulse resulting in thermionic emission. Above the threshold fluence of 0.3 J/cm2, the surface electronic heating is balanced during the pump pulse by simultaneous cooling via intense plasma removal (prompt ablation). The relationship between the deposited volume energy density in the film and its prompt electronic temperature derived from the self-reflection measurements using a Drude model, demonstrates a kind of electron "liquid-vapor" phase transition, driven by strong cubic optical non-linearity of the photo-excited aluminum.

  7. Surface enrichment of Pt in stable Pt-Ir nano-alloy particles on MgAl2O4 spinel in oxidizing atmosphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Wei -Zhen; Nie, Lei; Cheng, Yingwen

    With the capability of MgAl2O4 spinel {111} nano-facets in stabilizing small Rh, Ir and Pt particles, bimetallic Ir-Pt catalysts on the same support were investigated in this paper, aiming at further lowering the catalyst cost by substituting expensive Pt with cheaper Ir in the bulk. Small Pt-Ir nano-alloy particles (< 2 nm) were successfully stabilized on the spinel {111} nano-facets as expected. Interestingly, the methanol oxidative dehydrogenation (ODH) rate on the surface Pt atoms increases with oxidizing aging but decreases upon reducing treatment, whereas Ir is almost inactive under the same reaction conditions. Up to a threefold enhancement in Pt exposure was achieved when the sample was oxidized at 800 °C in air for 1 week and subsequently reduced by H2 for 2 h, demonstrating successful surface enrichment of Pt on Pt-Ir nano-alloy particles. Finally, a dynamic stabilization mechanism involving wetting/nucleation seems to be responsible for the evolution of surface compositions upon cyclic oxidizing and reducing thermal treatments.

  8. Surface enrichment of Pt in stable Pt-Ir nano-alloy particles on MgAl2O4 spinel in oxidizing atmosphere

    DOE PAGES

    Li, Wei -Zhen; Nie, Lei; Cheng, Yingwen; ...

    2017-01-13

    With the capability of MgAl2O4 spinel {111} nano-facets in stabilizing small Rh, Ir and Pt particles, bimetallic Ir-Pt catalysts on the same support were investigated in this paper, aiming at further lowering the catalyst cost by substituting expensive Pt with cheaper Ir in the bulk. Small Pt-Ir nano-alloy particles (< 2 nm) were successfully stabilized on the spinel {111} nano-facets as expected. Interestingly, the methanol oxidative dehydrogenation (ODH) rate on the surface Pt atoms increases with oxidizing aging but decreases upon reducing treatment, whereas Ir is almost inactive under the same reaction conditions. Up to a threefold enhancement in Pt exposure was achieved when the sample was oxidized at 800 °C in air for 1 week and subsequently reduced by H2 for 2 h, demonstrating successful surface enrichment of Pt on Pt-Ir nano-alloy particles. Finally, a dynamic stabilization mechanism involving wetting/nucleation seems to be responsible for the evolution of surface compositions upon cyclic oxidizing and reducing thermal treatments.

  9. Dynamic Range and Sensitivity Requirements of Satellite Ocean Color Sensors: Learning from the Past

    NASA Technical Reports Server (NTRS)

    Hu, Chuanmin; Feng, Lian; Lee, Zhongping; Davis, Curtiss O.; Mannino, Antonio; McClain, Charles R.; Franz, Bryan A.

    2012-01-01

    Sensor design and mission planning for satellite ocean color measurements requires careful consideration of the signal dynamic range and sensitivity (specifically here signal-to-noise ratio or SNR) so that small changes of ocean properties (e.g., surface chlorophyll-a concentrations or Chl) can be quantified while most measurements are not saturated. Past and current sensors used different signal levels, formats, and conventions to specify these critical parameters, making it difficult to make cross-sensor comparisons or to establish standards for future sensor design. The goal of this study is to quantify these parameters under uniform conditions for widely used past and current sensors in order to provide a reference for the design of future ocean color radiometers. Using measurements from the Moderate Resolution Imaging Spectroradiometer onboard the Aqua satellite (MODISA) under various solar zenith angles (SZAs), typical (L_typical) and maximum (L_max) at-sensor radiances from the visible to the shortwave IR were determined. The L_typical values at an SZA of 45 deg were used as constraints to calculate SNRs of 10 multiband sensors at the same L_typical radiance input and 2 hyperspectral sensors at a similar radiance input. The calculations were based on clear-water scenes with an objective method of selecting pixels with minimal cross-pixel variations to assure target homogeneity. Among the widely used ocean color sensors that have routine global coverage, MODISA ocean bands (1 km) showed 2-4 times higher SNRs than the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) (1 km) and comparable SNRs to the Medium Resolution Imaging Spectrometer (MERIS)-RR (reduced resolution, 1.2 km), leading to different levels of precision in the retrieved Chl data product. MERIS-FR (full resolution, 300 m) showed SNRs lower than MODISA and MERIS-RR with the gain in spatial resolution. SNRs of all MODISA ocean bands and SeaWiFS bands (except the SeaWiFS near-IR bands) exceeded those from prelaunch sensor specifications after adjusting the input radiance to L_typical. The tabulated L_typical, L_max, and SNRs of the various multiband and hyperspectral sensors under the same or similar radiance input provide references to compare sensor performance in product precision and to help design future missions such as the Geostationary Coastal and Air Pollution Events (GEO-CAPE) mission and the Pre-Aerosol-Clouds-Ecosystems (PACE) mission currently being planned by the U.S. National Aeronautics and Space Administration (NASA).
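
    A toy sketch of the homogeneous-scene SNR estimate underlying comparisons like the one above: keep the most uniform local windows of a clear-water band and report mean/standard deviation there. The window size, coefficient-of-variation criterion, and keep fraction are assumptions for the sketch; the published method's exact pixel-selection rules and the adjustment of SNR to a common L_typical are not reproduced.

```python
import numpy as np

def scene_snr(band, window=5, keep_fraction=0.01):
    """Estimate sensor SNR from a clear-water scene: keep the most homogeneous
    local windows (smallest coefficient of variation) and report mean/std there."""
    h, w = band.shape
    stats = []
    for i in range(0, h - window, window):
        for j in range(0, w - window, window):
            patch = band[i:i + window, j:j + window].astype(np.float64)
            m, s = patch.mean(), patch.std(ddof=1)
            if m > 0:
                stats.append((s / m, m, s))
    stats.sort(key=lambda t: t[0])                    # most homogeneous first
    keep = stats[:max(1, int(len(stats) * keep_fraction))]
    mean_l = np.mean([m for _, m, _ in keep])
    noise = np.mean([s for _, _, s in keep])
    return mean_l, mean_l / noise                     # (radiance level, SNR at that level)

# Toy check: a synthetic band with 1% multiplicative noise should give SNR near 100
rng = np.random.default_rng(7)
band = 10.0 * (1.0 + 0.01 * rng.standard_normal((1000, 1000)))
print(scene_snr(band))
```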

  10. A novel role of HIF-1α/PROX-1/LYVE-1 axis on tissue regeneration after renal ischaemia/reperfusion in mice.

    PubMed

    Meng, Fanwei

    2018-04-10

    Renal ischaemia reperfusion (I/R) is a common clinical condition with a high morbidity and mortality rate. To date, there is no effective treatment for I/R-induced renal injury. We hypothesize that the angiogenesis and lymphangiogenesis markers prospero homeobox-1 (PROX-1) and lymphatic endothelial hyaluronan receptor-1 (LYVE-1) are critical during I/R. Kunming mice were subjected to I/R and observed for the following eight consecutive days. Pathological changes and protein distribution were assessed by H&E staining, immunohistochemistry and immunofluorescence confocal analysis. After I/R treatment, renal pathology was changed. HIF-1α was induced at the early stage and colocalised with PROX-1 mainly in the renal tubular region, whereas PROX-1 and LYVE-1 were colocalised in the endothelial region of the glomerulus. In this study, we revealed dynamic changes of the HIF-1α/PROX-1/LYVE-1 axis in different regions after I/R and demonstrated for the first time that it is activated during I/R repair.

  11. Long-term assessment of the CALIPSO Imaging Infrared Radiometer (IIR) calibration and stability through simulated and observed comparisons with MODIS/Aqua and SEVIRI/Meteosat

    NASA Astrophysics Data System (ADS)

    Garnier, Anne; Scott, Noëlle A.; Pelon, Jacques; Armante, Raymond; Crépeau, Laurent; Six, Bruno; Pascal, Nicolas

    2017-04-01

    The quality of the calibrated radiances of the medium-resolution Imaging Infrared Radiometer (IIR) on-board the CALIPSO (Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation) satellite is quantitatively evaluated from the beginning of the mission in June 2006. Two complementary relative and stand-alone approaches are used, which are related to comparisons of measured brightness temperatures and to model-to-observations comparisons, respectively. In both cases, IIR channels 1 (8.65 µm), 2 (10.6 µm), and 3 (12.05 µm) are paired with the Moderate Resolution Imaging Spectroradiometer (MODIS)/Aqua Collection 5 companion channels 29, 31, and 32, respectively, as well as with the Spinning Enhanced Visible and Infrared Imager (SEVIRI)/Meteosat companion channels IR8.7, IR10.8, and IR12, respectively. These pairs were selected before launch to meet radiometric, geometric, and space-time constraints. The prelaunch studies were based on simulations and sensitivity studies using the 4A/OP radiative transfer model and the more than 2300 atmospheres of the climatological Thermodynamic Initial Guess Retrieval (TIGR) input dataset further sorted into five air mass types. Using data from over 9.5 years of on-orbit operation, and following the relative approach technique, collocated measurements of IIR and of its companion channels have been compared at all latitudes over ocean, during day and night, and for all types of scenes in a wide range of brightness temperatures. The relative approach shows an excellent stability of IIR2-MODIS31 and IIR3-MODIS32 brightness temperature differences (BTDs) since launch. A slight trend within the IIR1-MODIS29 BTD, that equals -0.02 K yr-1 on average over 9.5 years, is detected when using the relative approach at all latitudes and all scene temperatures. For very cold scene temperatures (190-200 K) in the tropics, each IIR channel is warmer than its MODIS companion channel by 1.6 K on average. For the stand-alone approach, clear sky measurements only are considered, which are directly compared with simulations using 4A/OP and collocated ERA-Interim (ERA-I) reanalyses. The clear sky mask is derived from collocated observations from IIR and the CALIPSO lidar. Simulations for clear sky pixels in the tropics reproduce the differences between IIR1 and MODIS29 within 0.02 K and between IIR2 and MODIS31 within 0.04 K, whereas IIR3-MODIS32 is larger than simulated by 0.26 K. The stand-alone approach indicates that the trend identified from the relative approach originates from MODIS29, whereas no trend (less than ±0.004 K yr-1) is identified for any of the IIR channels. Finally, using the relative approach, a year-by-year seasonal bias between nighttime and daytime IIR-MODIS BTD was found at mid-latitude in the Northern Hemisphere. It is due to a nighttime IIR bias as determined by the stand-alone approach, which originates from a calibration drift during day-to-night transitions. The largest bias is in June and July when IIR2 and IIR3 are warmer by 0.4 K on average, and IIR1 is warmer by 0.2 K.

  12. VO2/TiN Plasmonic Thermochromic Smart Coatings for Room-Temperature Applications.

    PubMed

    Hao, Qi; Li, Wan; Xu, Huiyan; Wang, Jiawei; Yin, Yin; Wang, Huaiyu; Ma, Libo; Ma, Fei; Jiang, Xuchuan; Schmidt, Oliver G; Chu, Paul K

    2018-03-01

    Vanadium dioxide/titanium nitride (VO2/TiN) smart coatings are prepared by hybridizing thermochromic VO2 with plasmonic TiN nanoparticles. The VO2/TiN coatings can control infrared (IR) radiation dynamically in accordance with the ambient temperature and illumination intensity. The coatings block IR light under strong illumination at 28 °C but are IR transparent under weak irradiation conditions or at a low temperature of 20 °C. The VO2/TiN coatings exhibit a good integral visible transmittance of up to 51% and excellent IR switching efficiency of 48% at 2000 nm. These unique advantages make VO2/TiN promising as smart energy-saving windows. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Radiometric calibration of wide-field camera system with an application in astronomy

    NASA Astrophysics Data System (ADS)

    Vítek, Stanislav; Nasyrova, Maria; Stehlíková, Veronika

    2017-09-01

    Camera response function (CRF) is widely used for the description of the relationship between scene radiance and image brightness. Most common application of CRF is High Dynamic Range (HDR) reconstruction of the radiance maps of imaged scenes from a set of frames with different exposures. The main goal of this work is to provide an overview of CRF estimation algorithms and compare their outputs with results obtained under laboratory conditions. These algorithms, typically designed for multimedia content, are unfortunately quite useless with astronomical image data, mostly due to their nature (blur, noise, and long exposures). Therefore, we propose an optimization of selected methods to use in an astronomical imaging application. Results are experimentally verified on the wide-field camera system using Digital Single Lens Reflex (DSLR) camera.
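
    As a pointer to how an estimated response feeds the HDR reconstruction mentioned above, here is a minimal Debevec-style merge of differently exposed frames into a relative radiance map. The hat weighting, the gamma-like inverse response, and the synthetic frames are assumptions for the sketch; the record itself compares CRF estimation algorithms, which this sketch does not reproduce.

```python
import numpy as np

def merge_hdr(frames, exposure_times, inv_crf):
    """Merge differently exposed frames into a relative radiance map, assuming a
    known inverse camera response inv_crf (maps pixel value 0..255 to log exposure).

    frames: list of uint8 grayscale images of identical size
    exposure_times: matching list of exposure times in seconds
    """
    eps = 1e-6
    num = np.zeros(frames[0].shape, dtype=np.float64)
    den = np.zeros_like(num)
    for img, t in zip(frames, exposure_times):
        z = img.astype(np.int32)
        w = np.minimum(z, 255 - z).astype(np.float64) + eps   # hat weighting: trust mid-tones
        num += w * (inv_crf[z] - np.log(t))                   # per-frame log-radiance estimate
        den += w
    return np.exp(num / den)                                  # relative scene radiance

# Toy usage with a gamma-like response assumed for illustration
gamma = 2.2
inv_crf = np.log(np.clip(np.linspace(0, 1, 256), 1e-4, 1.0) ** gamma)
rng = np.random.default_rng(5)
radiance_true = rng.uniform(0.01, 10.0, size=(480, 640))
times = [1 / 500, 1 / 60, 1 / 8]
frames = [np.clip((radiance_true * t) ** (1 / gamma) * 255, 0, 255).astype(np.uint8)
          for t in times]
radiance = merge_hdr(frames, times, inv_crf)   # should track radiance_true up to scale
```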

  14. A Modeling Method of Fluttering Leaves Based on Point Cloud

    NASA Astrophysics Data System (ADS)

    Tang, J.; Wang, Y.; Zhao, Y.; Hao, W.; Ning, X.; Lv, K.; Shi, Z.; Zhao, M.

    2017-09-01

    Leaves falling gently or fluttering are a common phenomenon in natural scenes, and the authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes. The leaf-falling model has wide applications in the fields of animation and virtual reality. We propose a novel modeling method of fluttering leaves based on point clouds in this paper. According to leaf shape, leaf weight and wind speed, three basic trajectories of falling leaves are defined: rotation falling, roll falling and screw-roll falling. At the same time, a parallel algorithm based on OpenMP is implemented to satisfy real-time requirements in practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.
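
    The three named trajectories lend themselves to a simple parametric sketch; the specific curves, descent rate, and wind-drift term below are illustrative assumptions and not the paper's point-cloud-based formulation.

```python
import numpy as np

def leaf_trajectory(kind, duration=5.0, dt=0.02, wind=0.5, drop_rate=0.6,
                    radius=0.3, freq=1.2, start=(0.0, 0.0, 10.0)):
    """Toy parametric centre-of-mass paths for the three falling styles named in
    the record above: 'rotation', 'roll', and 'screw_roll'."""
    t = np.arange(0.0, duration, dt)
    x0, y0, z0 = start
    z = z0 - drop_rate * t                                   # steady descent
    drift = wind * t                                         # downwind drift
    if kind == "rotation":                                   # swinging side to side
        x = x0 + drift + radius * np.sin(2 * np.pi * freq * t)
        y = np.full_like(t, y0)
    elif kind == "roll":                                     # rolling along one axis
        x = x0 + drift
        y = y0 + radius * np.sin(2 * np.pi * freq * t)
    elif kind == "screw_roll":                               # spiralling descent
        x = x0 + drift + radius * np.cos(2 * np.pi * freq * t)
        y = y0 + radius * np.sin(2 * np.pi * freq * t)
    else:
        raise ValueError(kind)
    return np.stack([x, y, np.maximum(z, 0.0)], axis=1)

# One call per leaf; a real system would evaluate many leaves in parallel
paths = [leaf_trajectory(k) for k in ("rotation", "roll", "screw_roll")]
```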

  15. GIFTS SM EDU Data Processing and Algorithms

    NASA Technical Reports Server (NTRS)

    Tian, Jialin; Johnson, David G.; Reisse, Robert A.; Gazarik, Michael J.

    2007-01-01

    The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) Sensor Module (SM) Engineering Demonstration Unit (EDU) is a high resolution spectral imager designed to measure infrared (IR) radiances using a Fourier transform spectrometer (FTS). The GIFTS instrument employs three Focal Plane Arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The raw interferogram measurements are radiometrically and spectrally calibrated to produce radiance spectra, which are further processed to obtain atmospheric profiles via retrieval algorithms. This paper describes the processing algorithms involved in the calibration stage. The calibration procedures can be subdivided into three stages. In the pre-calibration stage, a phase correction algorithm is applied to the decimated and filtered complex interferogram. The resulting imaginary part of the spectrum contains only the noise component of the uncorrected spectrum. Additional random noise reduction can be accomplished by applying a spectral smoothing routine to the phase-corrected blackbody reference spectra. In the radiometric calibration stage, we first compute the spectral responsivity based on the previous results, from which, the calibrated ambient blackbody (ABB), hot blackbody (HBB), and scene spectra can be obtained. During the post-processing stage, we estimate the noise equivalent spectral radiance (NESR) from the calibrated ABB and HBB spectra. We then implement a correction scheme that compensates for the effect of fore-optics offsets. Finally, for off-axis pixels, the FPA off-axis effects correction is performed. To estimate the performance of the entire FPA, we developed an efficient method of generating pixel performance assessments. In addition, a random pixel selection scheme is designed based on the pixel performance evaluation.
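
    The ambient/hot blackbody step described above is, in simplified real-valued form, the standard two-point radiometric calibration. The sketch below shows that step only (responsivity from the two reference views, then counts to radiance), under the assumption of a linear detector with an additive offset, and omits the complex phase correction, spectral smoothing, and off-axis corrections of the actual GIFTS processing.

```python
import numpy as np

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(wavenumber_cm, temp_k):
    """Planck spectral radiance in W/(m^2 sr cm^-1) at wavenumber (cm^-1)."""
    nu = np.asarray(wavenumber_cm, dtype=float) * 100.0          # to m^-1
    b = 2.0 * H * C**2 * nu**3 / (np.exp(H * C * nu / (KB * temp_k)) - 1.0)
    return b * 100.0                                             # back to per cm^-1

def two_point_calibration(scene, abb, hbb, t_abb, t_hbb, wavenumber_cm):
    """Simplified two-point (ambient/hot blackbody) radiometric calibration:
    responsivity from the two reference views, then scene counts -> radiance."""
    b_abb = planck_radiance(wavenumber_cm, t_abb)
    b_hbb = planck_radiance(wavenumber_cm, t_hbb)
    responsivity = (hbb - abb) / (b_hbb - b_abb)                 # counts per radiance unit
    return (scene - abb) / responsivity + b_abb

# Toy spectra: fabricate instrument counts with a smooth responsivity and offset
wn = np.linspace(680.0, 1150.0, 512)                             # LWIR band, cm^-1
resp = 1e4 * np.exp(-((wn - 900.0) / 300.0) ** 2)
offset = 50.0
t_abb, t_hbb, t_scene = 290.0, 330.0, 265.0
abb = resp * planck_radiance(wn, t_abb) + offset
hbb = resp * planck_radiance(wn, t_hbb) + offset
scene = resp * planck_radiance(wn, t_scene) + offset
recovered = two_point_calibration(scene, abb, hbb, t_abb, t_hbb, wn)
# recovered matches planck_radiance(wn, t_scene) to numerical precision
```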

  16. Vegetation in transition: the Southwest's dynamic past century

    Treesearch

    Raymond M. Turner

    2005-01-01

    Monitoring that follows long-term vegetation changes often requires selection of a temporal baseline. Any such starting point is to some degree artificial, but in some instances there are aids that can be used as guides to baseline selection. Matched photographs duplicating scenes first recorded on film a century or more ago reveal changes that help select the starting...

  17. Generation of chemical movies: FT-IR spectroscopic imaging of segmented flows.

    PubMed

    Chan, K L Andrew; Niu, X; deMello, A J; Kazarian, S G

    2011-05-01

    We have previously demonstrated that FT-IR spectroscopic imaging can be used as a powerful, label-free detection method for studying laminar flows. However, to date, the speed of image acquisition has been too slow for the efficient detection of moving droplets within segmented flow systems. In this paper, we demonstrate the extraction of fast FT-IR images with acquisition times of 50 ms. This approach allows efficient interrogation of segmented flow systems where aqueous droplets move at a speed of 2.5 mm/s. Consecutive FT-IR images separated by 120 ms intervals allow the generation of chemical movies at eight frames per second. The technique has been applied to the study of microfluidic systems containing moving droplets of water in oil and droplets of protein solution in oil. The presented work demonstrates the feasibility of the use of FT-IR imaging to study dynamic systems with subsecond temporal resolution.

  18. A receptor and neuron that activate a circuit limiting sucrose consumption.

    PubMed

    Joseph, Ryan M; Sun, Jennifer S; Tam, Edric; Carlson, John R

    2017-03-23

    The neural control of sugar consumption is critical for normal metabolism. In contrast to sugar-sensing taste neurons that promote consumption, we identify a taste neuron that limits sucrose consumption in Drosophila. Silencing of the neuron increases sucrose feeding; optogenetic activation decreases it. The feeding inhibition depends on the IR60b receptor, as shown by behavioral analysis and Ca2+ imaging of an IR60b mutant. The IR60b phenotype shows a high degree of chemical specificity when tested with a broad panel of tastants. An automated analysis of feeding behavior in freely moving flies shows that IR60b limits the duration of individual feeding bouts. This receptor and neuron provide the molecular and cellular underpinnings of a new element in the circuit logic of feeding regulation. We propose a dynamic model in which sucrose acts via IR60b to activate a circuit that inhibits feeding and prevents overconsumption.

  19. Computational infrared and two-dimensional infrared photon echo spectroscopy of both wild-type and double mutant myoglobin-CO proteins.

    PubMed

    Choi, Jun-Ho; Kwak, Kyung-Won; Cho, Minhaeng

    2013-12-12

    The CO stretching mode of both wild-type and double mutant (T67R/S92D) MbCO (carbonmonoxymyoglobin) proteins is an ideal infrared (IR) probe for studying the local electrostatic environment inside the myoglobin heme pocket. Recently, to elucidate the conformational switching dynamics between two distinguishable states, extensive IR absorption, IR pump-probe, and two-dimensional (2D) IR spectroscopic studies for various mutant MbCO's have been performed by the Fayer group. They showed that the 2D IR spectroscopy of the double mutant, which has a peroxidase enzyme activity, reveals a rapid chemical exchange between two distinct states, whereas that of the wild-type does not. Despite the fact that a few simulation studies on these systems were already performed and reported, such complicated experimental results have not been fully reproduced nor described in terms of conformational state-to-state transition processes. Here, we first develop a distributed vibrational solvatochromic charge model for describing the CO stretch frequency shift reflecting local electric potential changes. Then, by carrying out molecular dynamics simulations of the two MbCO's and examining their CO frequency trajectories, it becomes possible to identify a proper reaction coordinate consisting of His64 imidazole ring rotation and its distance to the CO ligand. From the 2D surfaces of the resulting potential of mean forces, the spectroscopically distinguished A1 and A3 states of the wild-type as well as two more substates of the double mutant are identified and their vibrational frequencies and distributions are separately examined. Our simulated IR absorption and 2D IR spectra of the two MbCO's are directly compared with the previous experimental results reported by the Fayer group. The chemical exchange rate constants extracted from the two-state kinetic analyses of the simulated 2D IR spectra are in excellent agreement with the experimental values. On the basis of the quantitative agreement between the simulated spectra and experimental ones, we further examine the conformational differences in the heme pockets of the two proteins and show that the double mutation, T67R/S92D, suppresses the A1 population, restricts the imidazole ring rotation, and increases hydrogen-bond strength between the imidazole Nε-H and the oxygen atom of the CO ligand. It is believed that such delicate change of distal His64 imidazole ring dynamics induced by the double mutation may be responsible for its enhanced peroxidase catalytic activity as compared to the wild-type myoglobin.

  20. Molecular and Structural Traits of Insulin Receptor Substrate 1/LC3 Nuclear Structures and Their Role in Autophagy Control and Tumor Cell Survival.

    PubMed

    Lassak, Adam; Dean, Mathew; Wyczechowska, Dorota; Wilk, Anna; Marrero, Luis; Trillo-Tinoco, Jimena; Boulares, A Hamid; Sarkaria, Jann N; Del Valle, Luis; Peruzzi, Francesca; Ochoa, Augusto; Reiss, Krzysztof

    2018-05-15

    Insulin receptor substrate 1 (IRS-1) is a common cytosolic adaptor molecule involved in signal transduction from insulin and insulin-like growth factor I (IGF-I) receptors. IRS-1 can also be found in the nucleus. We report here a new finding of unique IRS-1 nuclear structures, which we observed initially in glioblastoma biopsy specimens and glioblastoma xenografts. These nuclear structures can be reproduced in vitro by the ectopic expression of IRS-1 cDNA cloned in frame with the nuclear localization signal (NLS-IRS-1). In these structures, IRS-1 localizes at the periphery, while the center harbors a key autophagy protein, LC3. These new nuclear structures are highly dynamic, rapidly exchange IRS-1 molecules with the surrounding nucleoplasm, disassemble during mitosis, and require a growth stimulus for their reassembly and maintenance. In tumor cells engineered to express NLS-IRS-1, the IRS-1/LC3 nuclear structures repress autophagy induced by either amino acid starvation or rapamycin treatment. In this process, IRS-1 nuclear structures sequester LC3 inside the nucleus, possibly preventing its cytosolic translocation and the formation of new autophagosomes. This novel mechanism provides a quick and reversible way of inhibiting autophagy, which could counteract autophagy-induced cancer cell death under severe stress, including anticancer therapies. Copyright © 2018 American Society for Microbiology.

  1. The low dose gamma ionising radiation impact upon cooperativity of androgen-specific proteins.

    PubMed

    Filchenkov, Gennady N; Popoff, Eugene H; Naumov, Alexander D

    2014-01-01

    This paper examines the effects of ionising radiation (γ-IR, 0.5 Gy) on serum testosterone (T), on the characteristics of testosterone-binding globulin (TeBG) and the androgen receptor (AR), and, in parallel, on the activity of an androgen (A)-responsive enzyme, hexokinase (HK). The relationships between T levels and the parameters of the proteins that mediate androgenic regulation are then analyzed over the post-IR dynamics. The IR-stress adjustment data show that measurements of TeBG and AR cooperativity are useful for more precise assessment of endocrine A-control in the relevant radiation emergencies. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. Hot Carrier Dynamics in the X Valley in Si and Ge Measured by Pump-IR-Probe Absorption Spectroscopy

    NASA Technical Reports Server (NTRS)

    Wang, W. B.; Cavicchia, M. A.; Alfano, R. R.

    1996-01-01

    Si is the semiconductor of choice on the nanoelectronics roadmap into the next century for computers and other nanodevices. With growing interest in Si, Ge, and Si_mGe_n strained superlattices, knowledge of the carrier relaxation processes in these materials and structures has become increasingly important. The limited time resolution of earlier studies of carrier dynamics in Ge and Si, performed using Nd:glass lasers, was not sufficient to observe the fast cooling processes. In this paper, we present a direct measurement of hot carrier dynamics in the satellite X valley in Si and Ge by time-resolved infrared (IR) absorption spectroscopy, and show the potential of our technique to identify whether the X valley is the lowest conduction valley in semiconductor materials and structures.

  3. DynAOI: a tool for matching eye-movement data with dynamic areas of interest in animations and movies.

    PubMed

    Papenmeier, Frank; Huff, Markus

    2010-02-01

    Analyzing gaze behavior with dynamic stimulus material is of growing importance in experimental psychology; however, there is still a lack of efficient analysis tools that are able to handle dynamically changing areas of interest. In this article, we present DynAOI, an open-source tool that allows for the definition of dynamic areas of interest. It works automatically with animations that are based on virtual three-dimensional models. When one is working with videos of real-world scenes, a three-dimensional model of the relevant content needs to be created first. The recorded eye-movement data are matched with the static and dynamic objects in the model underlying the video content, thus creating static and dynamic areas of interest. A validation study asking participants to track particular objects demonstrated that DynAOI is an efficient tool for handling dynamic areas of interest.
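
    As a simplified illustration of the matching step only (DynAOI itself derives areas of interest from full three-dimensional object models), the sketch below assigns gaze samples to per-frame rectangular areas of interest; the frame-indexed bounding boxes and the data layout are assumptions made for this example.

```python
import numpy as np

def match_gaze_to_dynamic_aois(gaze, aois):
    """Assign each gaze sample to a dynamic AOI, frame by frame.

    gaze : array of shape (n_samples, 3) with columns (frame, x, y).
    aois : dict mapping AOI name -> array of shape (n_frames, 4) holding
           per-frame bounding boxes (x_min, y_min, x_max, y_max).
           Illustrative layout only; DynAOI works from 3D object models.
    """
    labels = []
    for frame, x, y in gaze:
        hit = "none"
        for name, boxes in aois.items():
            x0, y0, x1, y1 = boxes[int(frame)]
            if x0 <= x <= x1 and y0 <= y <= y1:
                hit = name
                break
        labels.append(hit)
    return np.array(labels)

# Toy usage: one AOI moving rightward over 100 frames, three gaze samples.
n_frames = 100
moving_box = np.stack([np.arange(n_frames, dtype=float),         # x_min
                       np.full(n_frames, 10.0),                  # y_min
                       np.arange(n_frames, dtype=float) + 50.0,  # x_max
                       np.full(n_frames, 60.0)], axis=1)         # y_max
gaze_samples = np.array([[0, 25.0, 30.0], [10, 5.0, 30.0], [50, 80.0, 40.0]])
print(match_gaze_to_dynamic_aois(gaze_samples, {"ball": moving_box}))
```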

  4. Generalized assorted pixel camera: postcapture control of resolution, dynamic range, and spectrum.

    PubMed

    Yasuma, Fumihito; Mitsunaga, Tomoo; Iso, Daisuke; Nayar, Shree K

    2010-09-01

    We propose the concept of a generalized assorted pixel (GAP) camera, which enables the user to capture a single image of a scene and, after the fact, control the tradeoff between spatial resolution, dynamic range and spectral detail. The GAP camera uses a complex array (or mosaic) of color filters. A major problem with using such an array is that the captured image is severely under-sampled for at least some of the filter types. This leads to reconstructed images with strong aliasing. We make four contributions in this paper: 1) we present a comprehensive optimization method to arrive at the spatial and spectral layout of the color filter array of a GAP camera. 2) We develop a novel algorithm for reconstructing the under-sampled channels of the image while minimizing aliasing artifacts. 3) We demonstrate how the user can capture a single image and then control the tradeoff of spatial resolution to generate a variety of images, including monochrome, high dynamic range (HDR) monochrome, RGB, HDR RGB, and multispectral images. 4) Finally, the performance of our GAP camera has been verified using extensive simulations that use multispectral images of real world scenes. A large database of these multispectral images has been made available at http://www1.cs.columbia.edu/CAVE/projects/gap_camera/ for use by the research community.

  5. Attractive Flicker--Guiding Attention in Dynamic Narrative Visualizations.

    PubMed

    Waldner, Manuela; Le Muzic, Mathieu; Bernhard, Matthias; Purgathofer, Werner; Viola, Ivan

    2014-12-01

    Focus+context techniques provide visual guidance in visualizations by giving strong visual prominence to elements of interest while the context is suppressed. However, finding a visual feature to enhance for the focus to pop out from its context in a large dynamic scene, while leading to minimal visual deformation and subjective disturbance, is challenging. This paper proposes Attractive Flicker, a novel technique for visual guidance in dynamic narrative visualizations. We first show that flicker is a strong visual attractor in the entire visual field, without distorting, suppressing, or adding any scene elements. The novel aspect of our Attractive Flicker technique is that it consists of two signal stages: The first "orientation stage" is a short but intensive flicker stimulus to attract the attention to elements of interest. Subsequently, the intensive flicker is reduced to a minimally disturbing luminance oscillation ("engagement stage") as visual support to keep track of the focus elements. To find a good trade-off between attraction effectiveness and subjective annoyance caused by flicker, we conducted two perceptual studies to find suitable signal parameters. We showcase Attractive Flicker with the parameters obtained from the perceptual statistics in a study of molecular interactions. With Attractive Flicker, users were able to easily follow the narrative of the visualization on a large display, while the flickering of focus elements was not disturbing when observing the context.

  6. Super-resolved FT-IR spectroscopy: Strategies, challenges, and opportunities for membrane biophysics.

    PubMed

    Li, Jessica J; Yip, Christopher M

    2013-10-01

    Direct correlation of molecular conformation with local structure is critical to studies of protein- and peptide-membrane interactions, particularly in the context of membrane-facilitated aggregation, and disruption or disordering. Infrared spectroscopy has long been a mainstay for determining molecular conformation, following folding dynamics, and characterizing reactions. While tremendous advances have been made in improving the spectral and temporal resolution of infrared spectroscopy, it has only been with the introduction of scanned-probe techniques that exploit the raster-scanning tip as either a source, scattering tool, or measurement probe that researchers have been able to obtain sub-diffraction limit IR spectra. This review will examine the history of correlated scanned-probe IR spectroscopies, from their inception to their use in studies of molecular aggregates, membrane domains, and cellular structures. The challenges and opportunities that these platforms present for examining dynamic phenomena will be discussed. This article is part of a Special Issue entitled: FTIR in membrane proteins and peptide studies. Copyright © 2013 Elsevier B.V. All rights reserved.

  7. Qualitative and numerical investigations of the impact of a novel pathogen on a seabird colony

    NASA Astrophysics Data System (ADS)

    O'Regan, S. M.; Kelly, T. C.; Korobeinikov, A.; O'Callaghan, M. J. A.; Pokrovskii, A. V.

    2008-11-01

    Understanding the dynamics of novel pathogens in dense populations is crucial to public and veterinary health as well as wildlife ecology. Seabirds live in crowded colonies numbering several thousands of individuals. The long-term dynamics of avian influenza H5N1 virus in a seabird colony with no existing herd immunity are investigated using sophisticated mathematical techniques. The key characteristics of seabird population biology and the H5N1 virus are incorporated into a Susceptible-Exposed-Infected-Recovered (SEIR) model. Using the theory of integral manifolds, the SEIR model is reduced to a simpler system of two differential equations depending on the infected and recovered populations only, termed the IR model. The results of numerical experiments indicate that the IR model and the SEIR model are in close agreement. Using Lyapunov's direct method, the equilibria of the SEIR and the IR models are proven to be globally asymptotically stable in the positive quadrant.
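
    For orientation, a generic SEIR skeleton of the kind the authors start from is sketched below; the demographic and transmission terms are schematic and do not reproduce the seabird-specific parametrization of the paper. The integral-manifold reduction described above then eliminates S and E, leaving a planar system in the infected and recovered populations (I, R) alone.

```latex
% Schematic SEIR system (not the paper's exact seabird model).
% beta: transmission rate, sigma: inverse latent period, gamma: recovery rate,
% mu: demographic turnover, N = S + E + I + R.
\begin{aligned}
\dot{S} &= \mu N - \beta\,\frac{S I}{N} - \mu S, &
\dot{E} &= \beta\,\frac{S I}{N} - (\sigma + \mu) E, \\
\dot{I} &= \sigma E - (\gamma + \mu) I, &
\dot{R} &= \gamma I - \mu R .
\end{aligned}
```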

  8. Magnetic structure and excitation spectrum of the hyperhoneycomb Kitaev magnet β-Li2IrO3

    NASA Astrophysics Data System (ADS)

    Ducatman, Samuel; Rousochatzakis, Ioannis; Perkins, Natalia B.

    2018-03-01

    We present a theoretical study of the static and dynamical properties of the three-dimensional, hyperhoneycomb Kitaev magnet β-Li2IrO3. We argue that the observed incommensurate order can be understood in terms of a long-wavelength twisting of a nearby commensurate period-3 state, with the same key qualitative features. The period-3 state shows very different structure when either the Kitaev interaction K or the off-diagonal exchange anisotropy Γ is dominant. A comparison of the associated static spin structure factors with reported scattering experiments in zero and finite fields gives strong evidence that β-Li2IrO3 lies in the regime of dominant Kitaev coupling, and that the Heisenberg exchange J is much weaker than both K and Γ. Our predictions for the magnon excitation spectra, the dynamical spin structure factors, and their polarization dependence provide additional distinctive fingerprints that can be checked experimentally.

  9. Ultrafast forward and backward electron transfer dynamics of coumarin 337 in hydrogen-bonded anilines as studied with femtosecond UV-pump/IR-probe spectroscopy.

    PubMed

    Ghosh, Hirendra N; Verma, Sandeep; Nibbering, Erik T J

    2011-02-10

    Femtosecond infrared spectroscopy is used to study both forward and backward electron transfer (ET) dynamics between coumarin 337 (C337) and the aromatic amine solvents aniline (AN), N-methylaniline (MAN), and N,N-dimethylaniline (DMAN), where all the aniline solvents can donate an electron but only AN and MAN can form hydrogen bonds with C337. The formation of a hydrogen bond with AN and MAN is confirmed with steady state FT-IR spectroscopy, where the C═O stretching vibration is a direct marker mode for hydrogen bond formation. Transient IR absorption measurements in all solvents show an absorption band at 2166 cm(-1), which has been attributed to the C≡N stretching vibration of the C337 radical anion formed after ET. Forward electron transfer dynamics is found to be biexponential with time constants τ(ET)(1) = 500 fs, τ(ET)(2) = 7 ps in all solvents. Despite the presence of hydrogen bonds of C337 with the solvents AN and MAN, no effect has been found on the forward electron transfer step. Because of the absence of an H/D isotope effect on the forward electron transfer reaction of C337 in AN, hydrogen bonds are understood to play a minor role in mediating electron transfer. In contrast, direct π-orbital overlap between C337 and the aromatic amine solvents causes ultrafast forward electron transfer dynamics. Backward electron transfer dynamics, in contrast, is dependent on the solvent used. Standard Marcus theory explains the observed backward electron transfer rates.

  10. Dynamic fracture toughness of ASME SA508 Class 2a ASME SA533 grade A Class 2 base and heat affected zone material and applicable weld metals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Logsdon, W.A.; Begley, J.A.; Gottshall, C.L.

    1978-03-01

    The ASME Boiler and Pressure Vessel Code, Section III, Article G-2000, requires that dynamic fracture toughness data be developed for materials with specified minimum yield strengths greater than 50 ksi to provide verification and utilization of the ASME specified minimum reference toughness K_IR curve. In order to qualify ASME SA508 Class 2a and ASME SA533 Grade A Class 2 pressure vessel steels (minimum yield strengths equal to 65 kip/in.² and 70 kip/in.², respectively) per this requirement, dynamic fracture toughness tests were performed on these materials. All dynamic fracture toughness values of SA508 Class 2a base and HAZ material, SA533 Grade A Class 2 base and HAZ material, and applicable weld metals exceeded the ASME specified minimum reference toughness K_IR curve.

  11. Infrared signatures of the peptide dynamical transition: A molecular dynamics simulation study

    NASA Astrophysics Data System (ADS)

    Kobus, Maja; Nguyen, Phuong H.; Stock, Gerhard

    2010-07-01

    Recent two-dimensional infrared (2D-IR) experiments on a short peptide 3₁₀-helix in chloroform solvent [E. H. G. Backus et al., J. Phys. Chem. B 113, 13405 (2009)] revealed an intriguing temperature dependence of the homogeneous line width, which was interpreted in terms of a dynamical transition of the peptide. To explain these findings, extensive molecular dynamics simulations at various temperatures were performed in order to construct the free energy landscape of the system. The study recovers the familiar picture of a glass-forming system, which below the glass transition temperature Tg is trapped in various energy basins, while it diffuses freely between these basins above Tg. In fact, one finds at Tg ≈ 270 K a sharp rise of the fluctuations of the backbone dihedral angles, which reflects conformational transitions of the peptide. The corresponding CO frequency fluctuations are found to be a sensitive probe of the peptide conformational dynamics from femtosecond to nanosecond time scales and lead to 2D-IR spectra that qualitatively match the experiment. The calculated homogeneous line width, however, does not show the biphasic temperature dependence observed in experiment.

  12. Evaluation of Alternate Concepts for Synthetic Vision Flight Displays With Weather-Penetrating Sensor Image Inserts During Simulated Landing Approaches

    NASA Technical Reports Server (NTRS)

    Parrish, Russell V.; Busquets, Anthony M.; Williams, Steven P.; Nold, Dean E.

    2003-01-01

    A simulation study was conducted in 1994 at Langley Research Center that used 12 commercial airline pilots repeatedly flying complex Microwave Landing System (MLS)-type approaches to parallel runways under Category IIIc weather conditions. Two sensor insert concepts of 'Synthetic Vision Systems' (SVS) were used in the simulated flights, with a more conventional electro-optical display (similar to a Head-Up Display with raster capability for sensor imagery), flown under less restrictive visibility conditions, used as a control condition. The SVS concepts combined the sensor imagery with a computer-generated image (CGI) of an out-the-window scene based on an onboard airport database. Various scenarios involving runway traffic incursions (taxiing aircraft and parked fuel trucks) and navigational system position errors (both static and dynamic) were used to assess the pilots' ability to manage the approach task with the display concepts. The two SVS sensor insert concepts contrasted the simple overlay of sensor imagery on the CGI scene without additional image processing (the SV display) to the complex integration (the AV display) of the CGI scene with pilot-decision aiding using both object and edge detection techniques for detection of obstacle conflicts and runway alignment errors.

  13. A Drastic Change in Background Luminance or Motion Degrades the Preview Benefit.

    PubMed

    Osugi, Takayuki; Murakami, Ikuya

    2017-01-01

    When some distractors (old items) precede some others (new items) in an inefficient visual search task, the search is restricted to new items, and yields a phenomenon termed the preview benefit. It has recently been demonstrated that, in this preview search task, the onset of repetitive changes in the background disrupts the preview benefit, whereas a single transient change in the background does not. In the present study, we explored this effect with dynamic background changes occurring in the context of realistic scenes, to examine the robustness and usefulness of visual marking. We examined whether preview benefit in a preview search task survived through task-irrelevant changes in the scene, namely a luminance change and the initiation of coherent motion, both occurring in the background. Luminance change of the background disrupted preview benefit if it was synchronized with the onset of the search display. Furthermore, although the presence of coherent background motion per se did not affect preview benefit, its synchronized initiation with the onset of the search display did disrupt preview benefit if the motion speed was sufficiently high. These results suggest that visual marking can be destroyed by a transient event in the scene if that event is sufficiently drastic.

  14. Trained Eyes: Experience Promotes Adaptive Gaze Control in Dynamic and Uncertain Visual Environments

    PubMed Central

    Taya, Shuichiro; Windridge, David; Osman, Magda

    2013-01-01

    Current eye-tracking research suggests that our eyes make anticipatory movements to a location that is relevant for a forthcoming task. Moreover, there is evidence to suggest that with more practice anticipatory gaze control can improve. However, these findings are largely limited to situations where participants are actively engaged in a task. We ask: does experience modulate anticipatory gaze control while passively observing a visual scene? To tackle this we tested people with varying degrees of experience of tennis, in order to uncover potential associations between experience and eye-movement behaviour while they watched tennis videos. The number, size, and accuracy of saccades (rapid eye movements) made around 'events' that are critical for the scene context (i.e., hits and bounces) were analysed. Overall, we found that experience improved anticipatory eye movements while watching tennis clips. In general, those with extensive experience showed greater accuracy of saccades to upcoming event locations; this was particularly prevalent for events in the scene that carried high uncertainty (i.e., ball bounces). The results indicate that, even when passively observing, our gaze control system utilizes prior relevant knowledge in order to anticipate upcoming uncertain event locations. PMID:23951147

  15. Optical system for object detection and delineation in space

    NASA Astrophysics Data System (ADS)

    Handelman, Amir; Shwartz, Shoam; Donitza, Liad; Chaplanov, Loran

    2018-01-01

    Object recognition and delineation is an important task in many environments, such as in crime scenes and operating rooms. Marking evidence or surgical tools and attracting the attention of the surrounding staff to the marked objects can affect people's lives. We present an optical system comprising a camera, computer, and small laser projector that can detect and delineate objects in the environment. To prove the optical system's concept, we show that it can operate in a hypothetical crime scene in which a pistol is present and automatically recognize and segment it by various computer-vision algorithms. Based on such segmentation, the laser projector illuminates the actual boundaries of the pistol and thus allows the persons in the scene to comfortably locate and measure the pistol without holding any intermediator device, such as an augmented reality handheld device, glasses, or screens. Using additional optical devices, such as diffraction grating and a cylinder lens, the pistol size can be estimated. The exact location of the pistol in space remains static, even after its removal. Our optical system can be fixed or dynamically moved, making it suitable for various applications that require marking of objects in space.

  16. VMD DisRg: New User-Friendly Implement for calculation distance and radius of gyration in VMD program

    PubMed Central

    Falsafi-Zadeh, Sajad; Karimi, Zahra; Galehdari, Hamid

    2012-01-01

    Molecular dynamics simulation is a practical and powerful technique for the analysis of protein structure. Several programs have been developed to facilitate such investigations; among them, Visual Molecular Dynamics (VMD) is one of the most frequently used. One of the beneficial properties of VMD is that it can be extended by designing new plug-ins. We introduce here a new VMD facility for analyzing distances and the radius of gyration of biopolymers such as proteins and DNA. Availability: The database is available for free at http://trc.ajums.ac.ir/HomePage.aspx/?TabID/=12618/&Site/=trc.ajums.ac/&Lang/=fa-IR PMID:22553393
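
    For reference, the mass-weighted radius of gyration that such a tool reports is conventionally defined as below; this is the standard definition, and the plug-in's exact weighting options are not specified in the abstract.

```latex
% Standard mass-weighted radius of gyration for a selection of N atoms.
R_g = \sqrt{\frac{\sum_{i=1}^{N} m_i\,\lVert \mathbf{r}_i - \mathbf{r}_{\mathrm{com}} \rVert^{2}}
                 {\sum_{i=1}^{N} m_i}},
\qquad
\mathbf{r}_{\mathrm{com}} = \frac{\sum_{i=1}^{N} m_i\,\mathbf{r}_i}{\sum_{i=1}^{N} m_i}.
```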

  17. Nonlinear optical properties of thick composite media with vanadium dioxide nanoparticles. I. Self-defocusing of radiation in the visible and near-IR regions

    NASA Astrophysics Data System (ADS)

    Ostrosablina, A. A.; Sidorov, A. I.

    2005-07-01

    This paper presents the experimental and theoretical results of a study of the interaction of pulsed laser radiation with thick composite media containing nanoparticles of vanadium dioxide (VO2). It establishes that the reversible semiconductor-metal phase transition that occurs in VO2 nanoparticles under the action of radiation can produce self-defocusing of radiation in the visible and near-IR regions because of the formation of a photoinduced dynamic lens. An analysis is carried out of how the radiation intensity affects the dynamics of these processes. It is shown that photoinduced absorption and scattering play a role in forming the nonlinear optical response.

  18. Electronic and thermally tunable infrared metamaterial absorbers

    NASA Astrophysics Data System (ADS)

    Shrekenhamer, David; Miragliotta, Joseph A.; Brinkley, Matthew; Fan, Kebin; Peng, Fenglin; Montoya, John A.; Gauza, Sebastian; Wu, Shin-Tson; Padilla, Willie J.

    2016-09-01

    In this paper, we report a computational and experimental study using tunable infrared (IR) metamaterial absorbers (MMAs) to demonstrate frequency tunable (7%) and amplitude modulation (61%) designs. The dynamic tuning of each structure was achieved through the addition of an active material, liquid crystals (LC) or vanadium dioxide (VO2), within the unit cell of the MMA architecture. In both systems, an applied stimulus (electric field or temperature) induced a dielectric change in the active material and subsequent variation in the absorption and reflection properties of the MMA in the mid- to long-wavelength region of the IR (MWIR and LWIR, respectively). These changes were observed to be reversible for both systems and dynamic in the LC-based structure.

  19. CO2 removal by solid amine sorbents. 1: Experimental studies of amine resin IR-45 with regard to spacecraft applications. 2: Computer program for predicting the transient performance of solid amine sorbent systems

    NASA Technical Reports Server (NTRS)

    Wright, R. M.; Hwang, K. C.

    1973-01-01

    The sorbent behavior of solid amine resin IR-45 with regard to potential use in regenerative CO2-removal systems for manned spacecraft is considered. Measurements of equilibrium sorption capacity of IR-45 for water and for CO2 are reported, and the dynamic mass transfer behavior of IR-45 beds is studied under conditions representative of those expected in a manned spacecraft. A digital computer program was written for the transient performance prediction of CO2 removal systems comprised of solid amine beds. Also evaluated are systems employing inorganic molecular-sieve sorbents. Tests show that there is definitely an effect of water loading on the absorption rate.

  20. Scene recognition following locomotion around a scene.

    PubMed

    Motes, Michael A; Finlay, Cory A; Kozhevnikov, Maria

    2006-01-01

    Effects of locomotion on scene-recognition reaction time (RT) and accuracy were studied. In experiment 1, observers memorized an 11-object scene and made scene-recognition judgments on subsequently presented scenes from the encoded view or different views (i.e., scenes were rotated or observers moved around the scene, both from 40 degrees to 360 degrees). In experiment 2, observers viewed different 5-object scenes on each trial and made scene-recognition judgments from the encoded view or after moving around the scene, from 36 degrees to 180 degrees. Across experiments, scene-recognition RT increased (in experiment 2 accuracy decreased) with angular distance between encoded and judged views, regardless of how the viewpoint changes occurred. The findings raise questions about conditions in which locomotion produces spatially updated representations of scenes.

  1. Multispectral Detection with Metal-Dielectric Filters: An Investigation in Several Wavelength Bands with Temporal Coupled-Mode Theory

    NASA Astrophysics Data System (ADS)

    Lesmanne, Emeline; Espiau de Lamaestre, Roch; Boutami, Salim; Durantin, Cédric; Dussopt, Laurent; Badano, Giacomo

    2016-09-01

    Multispectral infrared (IR) detection is of great interest to enhance our ability to gather information from a scene. Filtering is a low-cost alternative to the complex multispectral device architectures to which the IR community has devoted much attention. Multilayer dielectric filters are standard in industry, but they require changing the thickness of at least one layer to tune the wavelength. Here, we pursue an approach based on apertures in a metallic layer of fixed thickness, in which the filtered wavelengths are selected by varying the aperture geometry. In particular, we study filters made of at least one sheet of resonating apertures in metal embedded in dielectrics. We will discuss two interesting problems that arise when one attempts to design such filters. First, metallic absorption must be taken into account. Second, the form and size of the pattern is limited by lithography. We will present some design examples and an attempt at explaining the filtering behavior based on the temporal coupled mode theory. That theory models the filter as a resonator interacting with the environment via loss channels. The transmission is solely determined by the loss rates associated with those channels. This model allows us to give a general picture of the filtering performance and compare their characteristics at different wavelength bands.
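
    As context for the coupled-mode picture invoked above, a textbook single-resonance form of the temporal coupled-mode transmission is sketched below; the decay rates into the two port channels and into metallic absorption are the loss channels the abstract refers to, while the symbols themselves are generic and not taken from the paper.

```latex
% Generic single-mode temporal coupled-mode transmission: a resonator at
% omega_0 coupled to an input port (rate 1/tau_1), an output port (rate
% 1/tau_2), and a metallic absorption channel (rate 1/tau_abs).
T(\omega) =
  \frac{\dfrac{4}{\tau_1 \tau_2}}
       {(\omega - \omega_0)^{2}
        + \left(\dfrac{1}{\tau_1} + \dfrac{1}{\tau_2}
        + \dfrac{1}{\tau_{\mathrm{abs}}}\right)^{2}}.
```

    At resonance the peak transmission is fixed entirely by the ratio of the radiative rates to the absorption rate, which is the sense in which the transmission is determined solely by the loss rates.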

  2. Penehyclidine hydrochloride regulates mitochondrial dynamics and apoptosis through p38MAPK and JNK signal pathways and provides cardioprotection in rats with myocardial ischemia-reperfusion injury.

    PubMed

    Feng, Min; Wang, Lirui; Chang, Siyuan; Yuan, Pu

    2018-05-31

    The potential mechanism of penehyclidine hydrochloride (PHC) against myocardial ischemia-reperfusion (I/R) injury has not been fully elucidated. The aim of the present study was to reveal whether mitochondrial dynamics, apoptosis, and MAPKs were involved in the cardioprotective effect of this drug on myocardial I/R injury. Ninety healthy adult male Wistar rats were separately pretreated with normal saline (0.9%); PHC; and signal pathway blockers of MAPKs, Drp1, and Bcl-2. Coronary artery ligation and subsequent reperfusion were performed to induce myocardial I/R injury. Echocardiography was performed. Myocardial enzymes and oxidative stress markers were detected. Myocardial cell apoptotic rates and infarct sizes were measured. Mitochondrial function was evaluated. Expression levels of MAPKs, mitochondria regulatory proteins (Drp1, Mfn1/2), and apoptosis-related proteins (Bcl-2, Bax) were determined. PHC pretreatment improved myocardial abnormalities (dysfunction, injury, infarct size, and apoptotic rate), mitochondrial abnormalities (dysfunction and fission), and excessive oxidative stress and inhibited the activities of p38MAPK and JNK signal pathways in rats with myocardial I/R injury (P < 0.05). Additionally, p38MAPK and JNK blockers (SB239063 and SP600125, respectively) had an effect on rats same as that of PHC. Although Drp1 blocker (Mdivi-1) showed a similar cardioprotective effect (P < 0.05), it did not affect the expression of MAPKs and apoptosis-related proteins (P > 0.05). In addition, Bcl-2 blocker (ABT-737) caused a high expression of Drp1 and a low expression of Mfn1/2 (P < 0.05). PHC regulated mitochondrial dynamics and apoptosis through p38MAPK and JNK signal pathways and provided cardioprotection in rats with myocardial I/R injury. Copyright © 2018 Elsevier B.V. All rights reserved.

  3. Effect of solvent polarity on the vibrational dephasing dynamics of the nitrosyl stretch in an Fe(II) complex revealed by 2D IR spectroscopy.

    PubMed

    Brookes, Jennifer F; Slenkamp, Karla M; Lynch, Michael S; Khalil, Munira

    2013-07-25

    The vibrational dephasing dynamics of the nitrosyl stretching vibration (ν(NO)) in sodium nitroprusside (SNP, Na2[Fe(CN)5NO]·2H2O) are investigated using two-dimensional infrared (2D IR) spectroscopy. The ν(NO) in SNP acts as a model system for the nitrosyl ligand found in metalloproteins which play an important role in the transportation and detection of nitric oxide (NO) in biological systems. We perform a 2D IR line shape study of the ν(NO) in the following solvents: water, deuterium oxide, methanol, ethanol, ethylene glycol, formamide, and dimethyl sulfoxide. The frequency of the ν(NO) exhibits a large vibrational solvatochromic shift of 52 cm(-1), ranging from 1884 cm(-1) in dimethyl sulfoxide to 1936 cm(-1) in water. The vibrational anharmonicity of the ν(NO) varies from 21 to 28 cm(-1) in the solvents used in this study. The frequency-frequency correlation functions (FFCFs) of the ν(NO) in SNP in each of the seven solvents are obtained by fitting the experimentally obtained 2D IR spectra using nonlinear response theory. The fits to the 2D IR line shape reveal that the spectral diffusion time scale of the ν(NO) in SNP varies from 0.8 to 4 ps and is negatively correlated with the empirical solvent polarity scales. We compare our results with the experimentally determined FFCFs of other charged vibrational probes in polar solvents and in the active sites of heme proteins. Our results suggest that the vibrational dephasing dynamics of the ν(NO) in SNP reflect the fluctuations of the nonhomogeneous electric field created by the polar solvents around the nitrosyl and cyanide ligands. The solute solvent interactions occurring at the trans-CN ligand are sensed through the π-back-bonding network along the Fe-NO bond in SNP.
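
    Line-shape analyses of this kind commonly assume a multi-exponential (Kubo-type) frequency-frequency correlation function of the form sketched below; the number of terms and the motionally narrowed (delta-function) component are fitting choices, and the parameters reported in the abstract correspond to the slowest decay, so this form is illustrative rather than the paper's exact parametrization.

```latex
% Generic Kubo-type FFCF used in 2D IR line-shape fitting.
% Delta_i: frequency-fluctuation amplitudes; tau_i: correlation times;
% the delta-function term collects motionally narrowed (homogeneous) dynamics.
C(t) = \langle \delta\omega(t)\,\delta\omega(0) \rangle
     = \frac{\delta(t)}{T_2^{*}} + \sum_{i} \Delta_i^{2}\, e^{-t/\tau_i}.
```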

  4. Smoking scenes in popular Japanese serial television dramas: descriptive analysis during the same 3-month period in two consecutive years.

    PubMed

    Kanda, Hideyuki; Okamura, Tomonori; Turin, Tanvir Chowdhury; Hayakawa, Takehito; Kadowaki, Takashi; Ueshima, Hirotsugu

    2006-06-01

    Japanese serial television dramas are becoming very popular overseas, particularly in other Asian countries. Exposure to smoking scenes in movies and television dramas has been known to trigger initiation of habitual smoking in young people. Smoking scenes in Japanese dramas may affect the smoking behavior of many young Asians. We examined smoking scenes and smoking-related items in serial television dramas targeting young audiences in Japan during the same season in two consecutive years. Fourteen television dramas targeting the young audience broadcast between July and September in 2001 and 2002 were analyzed. A total of 136 h 42 min of television programs were divided into unit scenes of 3 min (a total of 2734 unit scenes). All the unit scenes were reviewed for smoking scenes and smoking-related items. Of the 2734 3-min unit scenes, 205 (7.5%) were actual smoking scenes and 387 (14.2%) depicted smoking environments with the presence of smoking-related items, such as ash trays. In 185 unit scenes (90.2% of total smoking scenes), actors were shown smoking. Actresses were less frequently shown smoking (9.8% of total smoking scenes). Smoking characters in dramas were in the 20-49 age group in 193 unit scenes (94.1% of total smoking scenes). In 96 unit scenes (46.8% of total smoking scenes), at least one non-smoker was present in the smoking scenes. The smoking locations were mainly indoors, including offices, restaurants and homes (122 unit scenes, 59.6%). The most common smoking-related items shown were ash trays (in 45.5% of smoking-item-related scenes) and cigarettes (in 30.2% of smoking-item-related scenes). Only 3 unit scenes (0.1 % of all scenes) promoted smoking prohibition. This was a descriptive study to examine the nature of smoking scenes observed in Japanese television dramas from a public health perspective.

  5. Cortical Dynamics of Contextually Cued Attentive Visual Learning and Search: Spatial and Object Evidence Accumulation

    ERIC Educational Resources Information Center

    Huang, Tsung-Ren; Grossberg, Stephen

    2010-01-01

    How do humans use target-predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, humans can learn that a certain combination of objects may define a context for a kitchen and trigger a more efficient…

  6. English- and Mandarin-Learning Infants' Discrimination of Actions and Objects in Dynamic Events

    ERIC Educational Resources Information Center

    Chen, Jie; Tardif, Twila; Pulverman, Rachel; Casasola, Marianella; Zhu, Liqi; Zheng, Xiaobei; Meng, Xiangzhi

    2015-01-01

    The present studies examined the role of linguistic experience in directing English and Mandarin learners' attention to aspects of a visual scene. Specifically, they asked whether young language learners in these 2 cultures attend to differential aspects of a word-learning situation. Two groups of English and Mandarin learners, 6-8-month-olds (n =…

  7. Got a Match? Ion Extraction GC-MS Characterization of Accelerants Adsorbed in Charcoal Using Negative Pressure Dynamic Headspace Concentration

    ERIC Educational Resources Information Center

    Anzivino, Barbara; Tilley, Leon J.; Ingalls, Laura R.; Hall, Adam B.; Drugan, John E.

    2009-01-01

    An undergraduate organic chemistry experiment demonstrating real-life application of GC-MS to arson accelerant identification is described. Students are given the task of comparing a sample recovered from a "crime scene" to that from a "suspect's clothing". Accelerants subjected to different conditions are recovered using a quick and simple…

  8. The Influence of Presentation Modality on the Social Comprehension of Naturalistic Scenes in Adults with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Gedek, Haley M.; Pantelis, Peter C.; Kennedy, Daniel P.

    2018-01-01

    The comprehension of dynamically unfolding social situations is made possible by the seamless integration of multimodal information merged with rich intuitions about the thoughts and behaviors of others. We examined how high-functioning adults with autism spectrum disorder and neurotypical controls made a complex social judgment (i.e. rating the…

  9. Orientation Preferences and Motion Sickness Induced in a Virtual Reality Environment.

    PubMed

    Chen, Wei; Chao, Jian-Gang; Zhang, Yan; Wang, Jin-Kun; Chen, Xue-Wen; Tan, Cheng

    2017-10-01

    Astronauts' orientation preferences tend to correlate with their susceptibility to space motion sickness (SMS). Orientation preferences appear universally, since variable sensory cue priorities are used between individuals. However, SMS susceptibility changes after proper training, while orientation preferences seem to be intrinsic proclivities. The present study was conducted to investigate whether orientation preferences change if susceptibility is reduced after repeated exposure to a virtual reality (VR) stimulus environment that induces SMS. A horizontal supine posture was chosen to create a sensory context similar to weightlessness, and two VR devices were used to produce a highly immersive virtual scene. Subjects were randomly allocated to an experimental group (trained through exposure to a provocative rotating virtual scene) and a control group (untrained). All subjects' orientation preferences were measured twice with the same interval, but the experimental group was trained three times during the interval, while the control group was not. Trained subjects were less susceptible to SMS, with symptom scores reduced by 40%. Compared with untrained subjects, trained subjects' orientation preferences were significantly different between pre- and posttraining assessments. Trained subjects depended less on visual cues, whereas few subjects demonstrated the opposite tendency. Results suggest that visual information may be inefficient and unreliable for body orientation and stabilization in a rotating visual scene, while reprioritizing preferences for different sensory cues was dynamic and asymmetric between individuals. The present findings should facilitate customization of efficient and proper training for astronauts with different sensory prioritization preferences and dynamic characteristics.Chen W, Chao J-G, Zhang Y, Wang J-K, Chen X-W, Tan C. Orientation preferences and motion sickness induced in a virtual reality environment. Aerosp Med Hum Perform. 2017; 88(10):903-910.

  10. Adaptive fusion of infrared and visible images in dynamic scene

    NASA Astrophysics Data System (ADS)

    Yang, Guang; Yin, Yafeng; Man, Hong; Desai, Sachi

    2011-11-01

    Multimodal sensor fusion has been widely employed in various surveillance and military applications. A variety of image fusion techniques, including PCA, wavelet, curvelet, and HSV methods, have been proposed in recent years to improve human visual perception for object detection. One of the main challenges for visible and infrared image fusion is to automatically determine an optimal fusion strategy for different input scenes at an acceptable computational cost. In this paper, we propose a fast, adaptive, feature-selection-based image fusion method for obtaining a high-contrast image from visible and infrared sensors for target detection. First, fuzzy c-means clustering is applied to the infrared image to highlight possible hotspot regions, which are treated as potential target locations. The region surrounding each target area is then segmented as the background region. Image fusion is applied locally to the selected target and background regions by computing different linear combinations of color components from the registered visible and infrared images. After obtaining the different fused images, histogram distributions are computed on these local fusion results as the fusion feature set. A variance-ratio measure based on linear discriminant analysis (LDA) is employed to rank the feature set, and the most discriminative combination is selected for fusing the whole image. Because the feature selection is performed over time, the process dynamically determines the most suitable fusion for different scenes. Experiments are conducted on the OSU Color-Thermal database and the TNO Human Factors dataset. The fusion results indicate that our proposed method achieves competitive performance compared with other fusion algorithms at a relatively low computational cost.
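
    The selection step can be made concrete with a compact sketch: candidate fusions are formed as fixed linear combinations of the visible channels and the IR image, and the combination that best separates a hotspot (target) mask from its background under a Fisher-style variance ratio is kept. The candidate weights, the two-cluster fuzzy c-means, and the separability score below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def fuzzy_cmeans_2(x, m=2.0, iters=50):
    """Minimal 2-cluster fuzzy c-means on a 1-D intensity array.
    Returns each pixel's membership in the higher-mean ('hot') cluster."""
    x = x.ravel().astype(float)
    centers = np.array([x.min(), x.max()])               # crude initialization
    for _ in range(iters):
        d = np.abs(x[None, :] - centers[:, None]) + 1e-9
        u = 1.0 / d ** (2.0 / (m - 1.0))
        u /= u.sum(axis=0)                                # memberships sum to 1
        centers = (u ** m @ x) / (u ** m).sum(axis=1)
    return u[np.argmax(centers)]

def variance_ratio(fused, target_mask, background_mask):
    """Fisher-style separability of a fused image between two regions."""
    t, b = fused[target_mask], fused[background_mask]
    return (t.mean() - b.mean()) ** 2 / (t.var() + b.var() + 1e-9)

def adaptive_fusion(visible, ir, candidate_weights):
    """Pick the linear visible/IR combination with the best target/background
    separability. visible: (H, W, 3) in [0, 1]; ir: (H, W) in [0, 1]."""
    hot = fuzzy_cmeans_2(ir).reshape(ir.shape) > 0.5      # hotspot mask
    best_score, best_fused = -np.inf, None
    for w in candidate_weights:                           # w = (wR, wG, wB, wIR)
        fused = visible @ np.asarray(w[:3]) + w[3] * ir
        score = variance_ratio(fused, hot, ~hot)
        if score > best_score:
            best_score, best_fused = score, fused
    return best_fused, best_score

# Toy usage with random data and a few illustrative weight vectors.
rng = np.random.default_rng(0)
vis, ir_img = rng.random((64, 64, 3)), rng.random((64, 64))
weights = [(0.3, 0.3, 0.3, 0.1), (0.1, 0.1, 0.1, 0.7), (0.0, 0.5, 0.0, 0.5)]
fused, score = adaptive_fusion(vis, ir_img, weights)
print(fused.shape, round(float(score), 4))
```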

  11. Measuring river from the cloud - River width algorithm development on Google Earth Engine

    NASA Astrophysics Data System (ADS)

    Yang, X.; Pavelsky, T.; Allen, G. H.; Donchyts, G.

    2017-12-01

    Rivers are some of the most dynamic features of the terrestrial land surface. They help distribute freshwater, nutrients, and sediment, and they are also responsible for some of the greatest natural hazards. Despite their importance, our understanding of river behavior is limited at the global scale, in part because we do not have a river observational dataset that spans both time and space. Remote sensing data represent a rich, largely untapped resource for observing river dynamics. In particular, publicly accessible archives of satellite optical imagery, which date back to the 1970s, can be used to study the planview morphodynamics of rivers at the global scale. Here we present an image processing algorithm, developed on the Google Earth Engine cloud-based platform, that automatically extracts river centerlines and widths from Landsat 5, 7, and 8 scenes at 30 m resolution. Our algorithm uses the latest monthly global surface water history dataset and the existing Global River Width from Landsat (GRWL) dataset to efficiently extract river masks from each Landsat scene. A combination of distance transform and skeletonization techniques is then used to extract river centerlines. Finally, our algorithm calculates the wetted river width at each centerline pixel, perpendicular to the local centerline direction. We validated this algorithm using in situ data estimated from 16 USGS gauge stations (N=1781). We find that 92% of the width differences are within 60 m (i.e., the length of two Landsat pixels). Leveraging Earth Engine's infrastructure of collocated data and processing power, our goal is to use this algorithm to reconstruct the morphodynamic history of rivers globally by processing over 100,000 Landsat 5 scenes covering 1984 to 2013.
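
    A minimal offline sketch of the mask-to-width step is given below; it is not the Earth Engine implementation. It assumes an already-extracted binary river mask, uses scikit-image and SciPy, and approximates the width at each centerline pixel from the distance transform rather than measuring strictly perpendicular to the local centerline direction.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize

def river_centerline_widths(river_mask, pixel_size_m=30.0):
    """Estimate river widths along the centerline of a binary water mask.

    river_mask : 2-D boolean array, True where the pixel is river water.
    Returns (rows, cols, widths_m) for every centerline pixel. Width is
    approximated as (2 * distance-to-bank - 1) pixels, a discrete stand-in
    for the perpendicular width measurement described in the abstract.
    """
    mask = np.asarray(river_mask, dtype=bool)
    dist_to_bank = distance_transform_edt(mask)   # pixels to nearest non-water
    centerline = skeletonize(mask)                # 1-pixel-wide skeleton
    rows, cols = np.nonzero(centerline)
    widths_m = (2.0 * dist_to_bank[rows, cols] - 1.0) * pixel_size_m
    return rows, cols, widths_m

# Toy usage: a straight synthetic "river" 5 pixels (150 m) wide.
toy = np.zeros((40, 100), dtype=bool)
toy[18:23, :] = True
r, c, w = river_centerline_widths(toy)
print(len(r), float(np.median(w)))   # expect a median width near 150 m
```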

  12. Parsing heterogeneity in autism spectrum disorders: visual scanning of dynamic social scenes in school-aged children.

    PubMed

    Rice, Katherine; Moriuchi, Jennifer M; Jones, Warren; Klin, Ami

    2012-03-01

    To examine patterns of variability in social visual engagement and their relationship to standardized measures of social disability in a heterogeneous sample of school-aged children with autism spectrum disorders (ASD). Eye-tracking measures of visual fixation during free-viewing of dynamic social scenes were obtained for 109 children with ASD (mean age, 10.2 ± 3.2 years), 37 of whom were matched with 26 typically-developing (TD) children (mean age, 9.5 ± 2.2 years) on gender, age, and IQ. The smaller subset allowed between-group comparisons, whereas the larger group was used for within-group examinations of ASD heterogeneity. Between-group comparisons revealed significantly attenuated orientation to socially salient aspects of the scenes, with the largest effect size (Cohen's d = 1.5) obtained for reduced fixation on faces. Within-group analyses revealed a robust association between higher fixation on the inanimate environment and greater social disability. However, the associations between fixation on the eyes and mouth and social adaptation varied greatly, even reversing, when comparing different cognitive profile subgroups. Although patterns of social visual engagement with naturalistic social stimuli are profoundly altered in children with ASD, the social adaptivity of these behaviors varies for different groups of children. This variation likely represents different patterns of adaptation and maladaptation that should be traced longitudinally to the first years of life, before complex interactions between early predispositions and compensatory learning take place. We propose that variability in these early mechanisms of socialization may serve as proximal behavioral manifestations of genetic vulnerabilities. Copyright © 2012 American Academy of Child and Adolescent Psychiatry. Published by Elsevier Inc. All rights reserved.

  13. Enabling model customization and integration

    NASA Astrophysics Data System (ADS)

    Park, Minho; Fishwick, Paul A.

    2003-09-01

    Until fairly recently, dynamic model content and presentation were treated as synonymous. For example, if one were to take a data flow network, which captures the dynamics of a target system in terms of the flow of data through nodal operators, then one would often standardize on rectangles and arrows for the model display. The increasing web emphasis on XML, however, suggests that the network model can have its content specified in an XML language, and then the model can be represented in a number of ways depending on the chosen style. We have developed a formal method, based on styles, that permits a model to be specified in XML and presented in 1D (text), 2D, and 3D. This method allows for customization and personalization to exert their benefits beyond e-commerce, to the area of model structures used in computer simulation. This customization leads naturally to solving the bigger problem of model integration - the act of taking models of a scene and integrating them with that scene so that there is only one unified modeling interface. This work focuses mostly on customization, but we address the integration issue in the future work section.

  14. Robust curb detection with fusion of 3D-Lidar and camera data.

    PubMed

    Tan, Jun; Li, Jian; An, Xiangjing; He, Hangen

    2014-05-21

    Curb detection is an essential component of Autonomous Land Vehicles (ALV), especially important for safe driving in urban environments. In this paper, we propose a fusion-based curb detection method through exploiting 3D-Lidar and camera data. More specifically, we first fuse the sparse 3D-Lidar points and high-resolution camera images together to recover a dense depth image of the captured scene. Based on the recovered dense depth image, we propose a filter-based method to estimate the normal direction within the image. Then, by using the multi-scale normal patterns based on the curb's geometric property, curb point features fitting the patterns are detected in the normal image row by row. After that, we construct a Markov Chain to model the consistency of curb points which utilizes the continuous property of the curb, and thus the optimal curb path which links the curb points together can be efficiently estimated by dynamic programming. Finally, we perform post-processing operations to filter the outliers, parameterize the curbs and give the confidence scores on the detected curbs. Extensive evaluations clearly show that our proposed method can detect curbs with strong robustness at real-time speed for both static and dynamic scenes.
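
    To make the consistency step concrete, the sketch below runs a Viterbi-style dynamic program over per-row curb candidates: each row contributes a score for how well each column fits the curb pattern, and a pairwise penalty discourages large column jumps between consecutive rows. The score matrix and the quadratic smoothness penalty are assumptions for illustration, not the exact Markov-chain energy used in the paper.

```python
import numpy as np

def optimal_curb_path(unary, jump_penalty=0.5):
    """Viterbi-style dynamic program for a single curb path through an image.

    unary : (n_rows, n_cols) array; higher values mean a better
            curb-pattern fit at that (row, column) position.
    jump_penalty : weight on the squared column change between rows,
                   a simple stand-in for the curb's continuity prior.
    Returns the column index chosen in each row.
    """
    n_rows, n_cols = unary.shape
    cols = np.arange(n_cols)
    score = unary[0].astype(float).copy()       # best score ending at each column
    backptr = np.zeros((n_rows, n_cols), dtype=int)
    for r in range(1, n_rows):
        # transition[j, k]: penalty for moving from column k (row r-1) to j (row r)
        transition = jump_penalty * (cols[:, None] - cols[None, :]) ** 2
        total = score[None, :] - transition     # maximize score minus penalty
        backptr[r] = np.argmax(total, axis=1)
        score = unary[r] + np.max(total, axis=1)
    path = np.empty(n_rows, dtype=int)          # backtrack the best path
    path[-1] = int(np.argmax(score))
    for r in range(n_rows - 1, 0, -1):
        path[r - 1] = backptr[r, path[r]]
    return path

# Toy usage: a noisy diagonal "curb" in a 20 x 30 score map.
rng = np.random.default_rng(1)
scores = rng.normal(0.0, 0.1, size=(20, 30))
for r in range(20):
    scores[r, 5 + r // 2] += 1.0                # strong response along the curb
print(optimal_curb_path(scores))
```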

  15. Learning to recognize objects on the fly: a neurally based dynamic field approach.

    PubMed

    Faubel, Christian; Schöner, Gregor

    2008-05-01

    Autonomous robots interacting with human users need to build and continuously update scene representations. This entails the problem of rapidly learning to recognize new objects under user guidance. Based on analogies with human visual working memory, we propose a dynamical field architecture, in which localized peaks of activation represent objects over a small number of simple feature dimensions. Learning consists of laying down memory traces of such peaks. We implement the dynamical field model on a service robot and demonstrate how it learns 30 objects from a very small number of views (about 5 per object are sufficient). We also illustrate how properties of feature binding emerge from this framework.
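
    Dynamic field architectures of this kind are typically built on Amari-type field dynamics together with a memory-trace equation; a generic form is sketched below, where the interaction kernel, resting level, and time constants are placeholders rather than the values used in the robot implementation.

```latex
% Generic dynamic neural field with a memory trace (schematic).
% u(x,t): activation over a feature dimension x; h < 0: resting level;
% s(x,t): input; w: lateral interaction kernel; f: sigmoidal output;
% m(x,t): memory trace, typically updated only where the field is active.
\begin{aligned}
\tau\,\dot{u}(x,t) &= -u(x,t) + h + s(x,t)
  + \int w(x - x')\, f\!\big(u(x',t)\big)\, dx' + c_{m}\, m(x,t),\\
\tau_{m}\,\dot{m}(x,t) &= -m(x,t) + f\!\big(u(x,t)\big).
\end{aligned}
```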

  16. 2D IR spectroscopy reveals the role of water in the binding of channel-blocking drugs to the influenza M2 channel.

    PubMed

    Ghosh, Ayanjeet; Wang, Jun; Moroz, Yurii S; Korendovych, Ivan V; Zanni, Martin; DeGrado, William F; Gai, Feng; Hochstrasser, Robin M

    2014-06-21

    Water is an integral part of the homotetrameric M2 proton channel of the influenza A virus, which not only assists proton conduction but could also play an important role in stabilizing channel-blocking drugs. Herein, we employ two dimensional infrared (2D IR) spectroscopy and site-specific IR probes, i.e., the amide I bands arising from isotopically labeled Ala30 and Gly34 residues, to probe how binding of either rimantadine or 7,7-spiran amine affects the water dynamics inside the M2 channel. Our results show, at neutral pH where the channel is non-conducting, that drug binding leads to a significant increase in the mobility of the channel water. A similar trend is also observed at pH 5.0 although the difference becomes smaller. Taken together, these results indicate that the channel water facilitates drug binding by increasing its entropy. Furthermore, the 2D IR spectral signatures obtained for both probes under different conditions collectively support a binding mechanism whereby amantadine-like drugs dock in the channel with their ammonium moiety pointing toward the histidine residues and interacting with a nearby water cluster, as predicted by molecular dynamics simulations. We believe these findings have important implications for designing new anti-influenza drugs.

  17. O-H anharmonic vibrational motions in Cl(-)···(CH3OH)(1-2) ionic clusters. Combined IRPD experiments and AIMD simulations.

    PubMed

    Beck, Jordan P; Cimas, Alvaro; Lisy, James M; Gaigeot, Marie-Pierre

    2014-02-05

    The structures of Cl(-)-(Methanol)1,2 clusters have been unraveled combining Infrared Predissociation (IR-PD) experiments and DFT-based molecular dynamics simulations (DFT-MD) at 100 K. The dynamical IR spectra extracted from DFT-MD provide the initial 600 cm(-1) large anharmonic red-shift of the O-H stretch from uncomplexed methanol (3682 cm(-1)) to Cl(-)-(Methanol)1 complex (3085 cm(-1)) as observed in the IR-PD experiment, as well as the subtle supplementary blue- and red-shifts of the O-H stretch in Cl(-)-(Methanol)2 depending on the structure. The anharmonic vibrational calculations remarkably provide the 100 cm(-1) O-H blue-shift when the two methanol molecules are simultaneously organized in the anion first hydration shell (conformer 2A), while they provide the 240 cm(-1) O-H red-shift when the second methanol is in the second hydration shell of Cl(-) (conformer 2B). RRKM calculations have also shown that 2A/2B conformers interconvert on a nanosecond time-scale at the estimated 100 K temperature of the clusters formed by evaporative cooling of argon prior to the IR-PD process. Copyright © 2013 Elsevier B.V. All rights reserved.

  18. Improving GEOS-5 seven day forecast skill by assimilation of quality controlled AIRS temperature profiles

    NASA Astrophysics Data System (ADS)

    Susskind, J.; Rosenberg, R. I.

    2016-12-01

    The GEOS-5 Data Assimilation System (DAS) generates a global analysis every six hours by combining the previous six-hour forecast for that time period with contemporaneous observations. These observations include in-situ observations as well as those taken by satellite-borne instruments, such as AIRS/AMSU on EOS Aqua and CrIS/ATMS on S-NPP. Operational data assimilation methodology assimilates observed channel radiances R_i for IR sounding instruments such as AIRS and CrIS, but only for those channels i in a given scene whose radiances are thought to be unaffected by clouds. A limitation of this approach is that radiances in most tropospheric sounding channels are affected by clouds under partial cloud cover conditions, which occur most of the time. The AIRS Science Team Version-6 retrieval algorithm generates cloud-cleared radiances (CCRs) for each channel in a given scene, which represent the radiances AIRS would have observed if the scene were cloud free, and then uses them to determine quality controlled (QC'd) temperature profiles T(p) under all cloud conditions. There are potential advantages to assimilating either AIRS QC'd CCRs or QC'd T(p) instead of R_i, in that the spatial coverage of observations is greater under partial cloud cover. We tested these two alternate data assimilation approaches by running three parallel data assimilation experiments over different time periods using GEOS-5. Experiment 1 assimilated all observations as done operationally, Experiment 2 assimilated QC'd values of AIRS CCRs in place of AIRS radiances, and Experiment 3 assimilated QC'd values of T(p) in place of observed radiances. Assimilation of QC'd AIRS T(p) resulted in significant improvement in seven-day forecast skill compared to assimilation of CCRs or assimilation of observed radiances, especially in the Southern Hemisphere extratropics.

  19. Dynamic behaviour of coastal sedimentation in the Lions Gulf. [France

    NASA Technical Reports Server (NTRS)

    Guy, M. (Principal Investigator)

    1974-01-01

    The author has identified the following significant results. A number of ERTS-1 images covering this geographical zone were studied and compared with cartographic maps, air photographs, and thermal-IR images. Old and recent sediments leave traces in the landscape which are decoded by interpreting the shapes of the clear zones forming a network against the black background representing water and humid zones. Current sedimentation and its mechanism were investigated. It had been hoped that a regular sequence of images would make it possible to follow the dynamics of the Rhone and the coastal rivers in relation to meteorological conditions. In any event only a small number of images spread over a wide period of time were obtained, and a complete study was therefore impossible. However, in comparing some of the ERTS-1 images certain thermal-IR images and information on the flow of the Rhone provided some clarification of mechanisms associated with river dynamics.

  20. Bond deformation paths and electronic instabilities of ultraincompressible transition metal diborides: Case study of OsB2 and IrB2

    NASA Astrophysics Data System (ADS)

    Zhang, R. F.; Legut, D.; Wen, X. D.; Veprek, S.; Rajan, K.; Lookman, T.; Mao, H. K.; Zhao, Y. S.

    2014-09-01

    The energetically most stable orthorhombic structure of OsB2 and IrB2 is dynamically stable for OsB2 but unstable for IrB2. Both diborides have substantially lower shear strength in their easy slip systems than their metal counterparts. This is attributed to an easy sliding facilitated by out-of-plane weakening of metallic Os-Os bonds in OsB2 and by an in-plane bond splitting instability in IrB2. A much higher shear resistance of Os-B and B-B bonds than Os-Os ones is found, suggesting that the strengthened Os-B and B-B bonds are responsible for hardness enhancement in OsB2. In contrast, an in-plane electronic instability in IrB2 limits its strength. The electronic structure of deformed diborides suggests that the electronic instabilities of 5d orbitals are their origin of different bond deformation paths. Neither IrB2 nor OsB2 can be intrinsically superhard.
