Science.gov

Sample records for luminance mapping technique

  1. Evaluation of High Dynamic Range Photography as a Luminance Mapping Technique

    SciTech Connect

    Inanici, Mehlika; Galvin, Jim

    2004-12-30

    The potential, limitations, and applicability of the High Dynamic Range (HDR) photography technique are evaluated as a luminance mapping tool. Multiple exposure photographs of static scenes are taken with a Nikon 5400 digital camera to capture the wide luminance variation within the scenes. The camera response function is computationally derived using the Photosphere software, and is used to fuse the multiple photographs into HDR images. The vignetting effect and point spread function of the camera and lens system are determined. Laboratory and field studies have shown that the pixel values in the HDR photographs can correspond to the physical quantity of luminance with reasonable precision and repeatability.
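
    The fusion step described above can be illustrated with a minimal sketch (not the Photosphere implementation): given exposure-bracketed images and an assumed camera response function, each pixel is mapped back to relative radiance and combined with a weighted average; a single spot-luminance reading then scales the relative values to absolute luminance.

    ```python
    import numpy as np

    def fuse_hdr(images, exposure_times, response, weight=None):
        """Fuse exposure-bracketed 8-bit grayscale images into a relative
        radiance map (Debevec-style weighted average in the log domain).

        images         : list of 2-D uint8 arrays, all the same shape
        exposure_times : list of exposure times in seconds
        response       : length-256 array, log of the inverse camera response g(Z)
        weight         : length-256 array favouring mid-range pixel values
        """
        if weight is None:
            z = np.arange(256)
            weight = np.minimum(z, 255 - z).astype(float)  # triangular weighting
        num = np.zeros(images[0].shape, dtype=float)
        den = np.zeros_like(num)
        for img, t in zip(images, exposure_times):
            w = weight[img]
            num += w * (response[img] - np.log(t))   # log relative radiance
            den += w
        log_E = num / np.maximum(den, 1e-6)
        return np.exp(log_E)                         # relative radiance map

    # Hypothetical calibration: one spot reading (e.g. 120 cd/m^2 from a
    # luminance meter) fixes the absolute scale:
    # luminance_map = relative_map * (120.0 / relative_map[row, col])
    ```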

  2. Mapping luminance onto lightness in vision and art

    NASA Astrophysics Data System (ADS)

    Gilchrist, Alan

    2013-03-01

    Most natural images span a large range of luminance values, often a thousand-to-one and sometimes a million-to-one (Heckaman and Fairchild, 2009). This luminance range must be mapped by the visual system onto a scale of perceived gray shades (called lightness) with a range of roughly thirty-to-one (90% to 3% reflectance). For the painter who wants to represent this scene on a canvas, the challenge is a bit different. The high-range scene must be represented using pigments with a range of only thirty-to-one. Let's begin with vision. Even without a high range in the scene, understanding the mapping of luminance onto lightness has proven elusive. But we can think of the problem as having two parts: Anchoring and scaling.
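
    As a rough illustration of the anchoring idea, the rule below anchors the highest luminance to white at 90% reflectance (a common assumption in the lightness literature, consistent with the 90%-3% range quoted above, but not a result of this paper) and then applies a standard compressive scale as a stand-in scaling function.

    ```python
    import numpy as np

    def lightness_from_luminance(lum):
        """Map a luminance image onto estimated lightness using the
        'highest luminance = white' anchoring rule and a CIE L* scale.
        Illustrative only; real anchoring also involves area and framework effects."""
        refl = 0.9 * lum / lum.max()          # anchor: max luminance -> 90% reflectance
        refl = np.clip(refl, 1e-6, 0.9)
        # CIE 1976 lightness used here as a generic compressive scaling function
        Lstar = np.where(refl > 0.008856,
                         116.0 * np.cbrt(refl) - 16.0,
                         903.3 * refl)
        return Lstar
    ```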

  3. A new device for dynamic luminance mapping and glare risk assessment in buildings

    NASA Astrophysics Data System (ADS)

    Borisuit, Apiparn; Münch, Mirjam; Deschamps, Laurent; Kämpf, Jérôme; Scartezzini, Jean-Louis

    2012-10-01

    High dynamic range imaging has been shown to be a reliable tool to assess luminance maps and glare risk probability in buildings. However, there are some limitations related to image capture time, especially when dealing with highly dynamic and contrasted daylight situations. We used a newly developed prototype of a digital camera which contains a high dynamic range pixel array chip with a logarithmic scale for encoding. This type of camera allows luminance, contrast and contrast direction to be assessed effectively by taking only a single image or by performing real-time recordings. The device was equipped with a fisheye lens and V-lambda filters to adapt the camera's spectral sensitivity to the human eye. After spectral as well as photometric calibration and vignetting correction, the device was tested to perform luminance mapping of real scenes. The results showed that luminance maps of a room can be efficiently assessed under dynamic daylight and mixed day- and electric lighting conditions in a very short time (i.e. 100 ms) when compared to classical HDR imaging techniques. This allows us to calculate glare indexes of a scene simultaneously. The camera opens up a variety of new applications as a useful tool for architects, building designers and lighting experts. The device can be used to easily monitor daylight availability and glare indexes in existing buildings, and further developments for advanced (day-)lighting control can be envisaged.
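
    A logarithmically encoding pixel array maps a very wide luminance range into a limited digital range; recovering luminance then amounts to inverting an exponential once gain and offset have been calibrated photometrically. The sketch below illustrates that inversion with placeholder constants, not the calibration of the prototype described above.

    ```python
    import numpy as np

    def dn_to_luminance(dn, a=0.02, b=0.035, dn_dark=40.0):
        """Invert a logarithmic sensor response DN = dn_dark + (1/b) * ln(L / a),
        i.e. L = a * exp(b * (DN - dn_dark)).  a, b and dn_dark are hypothetical
        calibration constants that would be obtained against a luminance standard."""
        return a * np.exp(b * (np.asarray(dn, dtype=float) - dn_dark))
    ```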

  4. Covariance mapping techniques

    NASA Astrophysics Data System (ADS)

    Frasinski, Leszek J.

    2016-08-01

    Recent technological advances in the generation of intense femtosecond pulses have made covariance mapping an attractive analytical technique. The laser pulses available are so intense that often thousands of ionisation and Coulomb explosion events will occur within each pulse. To understand the physics of these processes the photoelectrons and photoions need to be correlated, and covariance mapping is well suited for operating at the high counting rates of these laser sources. Partial covariance is particularly useful in experiments with x-ray free electron lasers, because it is capable of suppressing pulse fluctuation effects. A variety of covariance mapping methods is described: simple, partial (single- and multi-parameter), sliced, contingent and multi-dimensional. The relationship to coincidence techniques is discussed. Covariance mapping has been used in many areas of science and technology: inner-shell excitation and Auger decay, multiphoton and multielectron ionisation, time-of-flight and angle-resolved spectrometry, infrared spectroscopy, nuclear magnetic resonance imaging, stimulated Raman scattering, directional gamma ray sensing, welding diagnostics and brain connectivity studies (connectomics). This review gives practical advice for implementing the technique and interpreting the results, including its limitations and instrumental constraints. It also summarises recent theoretical studies, highlights unsolved problems and outlines a personal view on the most promising research directions.
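
    In its simplest form the map is just the shot-averaged covariance between two spectra, C(x, y) = <X(x)Y(y)> - <X(x)><Y(y)>; a minimal sketch with hypothetical array names:

    ```python
    import numpy as np

    def covariance_map(X, Y):
        """X, Y : arrays of shape (n_shots, n_channels), e.g. time-of-flight
        spectra recorded shot by shot.  Returns C(x, y) = <XY> - <X><Y>."""
        Xc = X - X.mean(axis=0)
        Yc = Y - Y.mean(axis=0)
        return Xc.T @ Yc / X.shape[0]
    ```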

  5. Nonlinear mapping of the luminance in dual-layer high dynamic range displays

    NASA Astrophysics Data System (ADS)

    Guarnieri, Gabriele; Ramponi, Giovanni; Bonfiglio, Silvio; Albani, Luigi

    2009-02-01

    It has long been known that the human visual system (HVS) has a nonlinear response to luminance. This nonlinearity can be quantified using the concept of just noticeable difference (JND), which represents the minimum amplitude of a specified test pattern an average observer can discern from a uniform background. The JND depends on the background luminance following a threshold versus intensity (TVI) function. It is possible to define a curve which maps physical luminances into a perceptually linearized domain. This mapping can be used to optimize a digital encoding by minimizing the visibility of quantization noise. It is also commonly used in medical applications to display images in a manner adapted to the characteristics of the display device. High dynamic range (HDR) displays, which are beginning to appear on the market, can display luminance levels outside the range in which most standard mapping curves are defined. In particular, dual-layer LCD displays are able to extend the gamut of luminance offered by conventional liquid crystals towards the black region; in such areas suitable and HVS-compliant luminance transformations need to be determined. In this paper we propose a method which is primarily targeted at the extension of the DICOM curve used in medical imaging, but also has a more general application. The method can be modified to compensate for the ambient light, which can be significantly greater than the black level of an HDR display and can consequently reduce the visibility of details in dark areas.
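
    The perceptual linearization idea can be sketched generically: starting from the display black level, luminance steps are accumulated one JND at a time using a TVI model, and the resulting table maps physical luminance to a perceptually uniform index. The constant-Weber-fraction TVI below is a crude placeholder, not the DICOM grayscale standard display function and not the extension proposed in the paper.

    ```python
    import numpy as np

    def jnd_table(L_min, L_max, weber=0.01):
        """Build a luminance table spaced by one JND per step, assuming a
        constant Weber fraction (a crude TVI model; the real DICOM curve is
        based on a measured contrast-sensitivity model instead)."""
        levels = [L_min]
        while levels[-1] < L_max:
            levels.append(levels[-1] * (1.0 + weber))
        return np.array(levels)

    def encode(luminance, table):
        """Map physical luminance to a perceptually linear (JND) index."""
        return np.searchsorted(table, luminance)

    table = jnd_table(0.05, 500.0)   # hypothetical dual-layer display range
    print(len(table), "JND steps available for quantization")
    ```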

  6. C IV and C III] reverberation mapping of the luminous quasar PG 1247+267

    SciTech Connect

    Trevese, D.; Saturni, F. G.; Perna, M.; Dadina, M.

    2014-11-10

    So far the masses of about 50 active galactic nuclei (AGNs) have been measured through the reverberation mapping technique (RM). Most measurements have been performed for objects of moderate luminosity and redshift, based on Hβ, which is also used to calibrate the scaling relation that allows single-epoch (SE) mass determination based on AGN luminosity and the width of different emission lines. Due to the complex structure and gas dynamics of the relevant emission region, the SE masses obtained from the C IV(1549 Å) line show a large spread around the mean values. Direct RM measures of C IV exist for only six AGNs of low luminosity and redshift, and only one luminous quasar. Since 2003, we have collected photometric and spectroscopic observations of PG 1247+267, the most luminous quasar ever analyzed for RM. We provide light curves for the continuum and for C IV(1549 Å) and C III](1909 Å), and measures of the reverberation time lags based on the SPEAR method. The sizes of the line emission regions assume a ratio of R(C III])/R(C IV) ∼ 2, similar to the case of Seyfert galaxies, indicating for the first time a similar ionization stratification in a luminous quasar and low-luminosity nuclei. Due to the relatively small size of the broad line region and the relatively narrow line widths, we estimate a small mass and an anomalously high Eddington ratio. We discuss the possibility that either the shape of the emission region or an amplification of the luminosity caused by gravitational lensing may be partly responsible for the result.
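
    For context, the virial mass estimate underlying reverberation mapping combines the measured time lag (which sets the emission-region size R = c τ) with the line velocity width, M = f c τ Δv² / G. The numbers below are hypothetical placeholders, not the lags or widths derived in the paper.

    ```python
    # Virial black-hole mass from a reverberation lag: M = f * c * tau * dv^2 / G
    c     = 2.998e10      # speed of light, cm/s
    G     = 6.674e-8      # gravitational constant, cm^3 g^-1 s^-2
    M_sun = 1.989e33      # solar mass, g

    tau_days = 140.0      # hypothetical rest-frame lag, days
    dv_kms   = 3000.0     # hypothetical line velocity dispersion, km/s
    f        = 5.5        # commonly adopted virial factor (calibration-dependent)

    R = c * tau_days * 86400.0              # emission-region size, cm
    M = f * R * (dv_kms * 1e5) ** 2 / G     # mass, g
    print(f"M_BH ~ {M / M_sun:.2e} solar masses")
    ```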

  7. A methodology for luminance map retrieval using airborne hyperspectral and photogrammetric data

    NASA Astrophysics Data System (ADS)

    Pipia, Luca; Alamús, Ramon; Tardà, Anna; Pérez, Fernando; Palà, Vicenç; Corbera, Jordi

    2014-10-01

    This paper puts forward a methodology developed at the Institut Cartogràfic i Geològic de Catalunya (ICGC) to quantify upwelling light flux using hyperspectral and photogrammetric airborne data. The work was carried out in the frame of a demonstrative study requested by the municipality of Sant Cugat del Vallès, in the vicinity of Barcelona (Spain), and aimed to envisage a new approach to assess artificial lighting policies and actions as an alternative to field campaigns. Hyperspectral and high resolution multispectral/panchromatic data were acquired simultaneously over urban areas. In order to avoid moon light contributions, data were acquired during the first days of the new moon phase. Hyperspectral data were radiometrically calibrated. Then, National Center for Environmental Prediction (NCEP) atmospheric profiles were employed to estimate the actual Column Water Vapor (CWV) to be passed to ModTran5.0 for the atmospheric transmissivity τ calculation. At-the-ground radiance was finally integrated using the photopic sensitivity curve to generate a luminance map (cd m-2) of the flown area by mosaicking the different flight tracks. In an attempt to improve the spatial resolution and enhance the dynamic range of the luminance map, a sensor-fusion strategy was finally looked into. DMC photogrammetric data acquired simultaneously with the hyperspectral information were converted into at-the-ground radiance and upscaled to CASI spatial resolution. High-resolution (HR) luminance maps with enhanced dynamic range were finally generated by linearly fitting up-scaled DMC mosaics to the CASI-based luminance information. In the end, a preliminary assessment of the methodology is carried out using non-simultaneous in-situ measurements.
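
    The final integration step is the standard photopic weighting, L_v = 683 ∫ L_e(λ) V(λ) dλ. A minimal numerical sketch is given below; the Gaussian V(λ) is a rough stand-in for the tabulated CIE curve, and the flat spectrum is a hypothetical input, not ICGC data.

    ```python
    import numpy as np

    def luminance_from_radiance(wl_nm, spectral_radiance):
        """Integrate at-the-ground spectral radiance (W m^-2 sr^-1 nm^-1)
        against a photopic sensitivity curve to get luminance in cd m^-2."""
        # Rough analytic stand-in for the CIE 1931 V(lambda) curve
        V = np.exp(-0.5 * ((wl_nm - 555.0) / 45.0) ** 2)
        return 683.0 * np.trapz(spectral_radiance * V, wl_nm)

    wl  = np.arange(400.0, 701.0, 5.0)     # hyperspectral band centres, nm
    L_e = np.full_like(wl, 1e-3)           # hypothetical flat spectrum
    print(luminance_from_radiance(wl, L_e), "cd m-2")
    ```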

  8. Visuoauditory mappings between high luminance and high pitch are shared by chimpanzees (Pan troglodytes) and humans

    PubMed Central

    Ludwig, Vera U.; Adachi, Ikuma; Matsuzawa, Tetsuro

    2011-01-01

    Humans share implicit preferences for certain cross-sensory combinations; for example, they consistently associate higher-pitched sounds with lighter colors, smaller size, and spikier shapes. In the condition of synesthesia, people may experience such cross-modal correspondences to a perceptual degree (e.g., literally seeing sounds). So far, no study has addressed the question whether nonhuman animals share cross-modal correspondences as well. To establish the evolutionary origins of cross-modal mappings, we tested whether chimpanzees (Pan troglodytes) also associate higher pitch with higher luminance. Thirty-three humans and six chimpanzees were required to classify black and white squares according to their color while hearing irrelevant background sounds that were either high-pitched or low-pitched. Both species performed better when the background sound was congruent (high-pitched for white, low-pitched for black) than when it was incongruent (low-pitched for white, high-pitched for black). An inherent tendency to pair high pitch with high luminance hence evolved before the human lineage split from that of chimpanzees. Rather than being a culturally learned or a linguistic phenomenon, this mapping constitutes a basic feature of the primate sensory system. PMID:22143791

  10. A MAP OF THE INTEGRATED SACHS-WOLFE SIGNAL FROM LUMINOUS RED GALAXIES

    SciTech Connect

    Granett, Benjamin R.; Neyrinck, Mark C.; Szapudi, Istvan

    2009-08-10

    We construct a map of the time derivative of the gravitational potential traced by Sloan Digital Sky Survey luminous red galaxies (LRGs). The potential decays on large scales due to cosmic acceleration, leaving an imprint on cosmic microwave background (CMB) radiation through the integrated Sachs-Wolfe (ISW) effect. With a template fit, we directly measure this signature on the CMB at a 2{sigma} confidence level. The measurement is consistent with the cross-correlation statistic, strengthening the claim that dark energy is indeed the cause of the correlation. This new approach potentially simplifies the cosmological interpretation. Our constructed linear ISW map shows no evidence for degree-scale cold and hot spots associated with supervoid and supercluster structures. This suggests that the linear ISW effect in a concordance {lambda}CDM cosmology is insufficient to explain the strong CMB imprints from these structures that we previously reported.

  11. Brain Friendly Techniques: Mind Mapping

    ERIC Educational Resources Information Center

    Goldberg, Cristine

    2004-01-01

    Mind Mapping can be called the Swiss Army Knife for the brain, a total visual thinking tool, or a multi-handed thought catcher. Invented by Tony Buzan in the early 1970s and used by millions around the world, it is a method that can be part of a repertoire of techniques when teaching information literacy, planning, presenting, thinking, and so…

  12. Mapping luminous blue compact galaxies with VIRUS-P. Morphology, line ratios, and kinematics

    NASA Astrophysics Data System (ADS)

    Cairós, L. M.; Caon, N.; García Lorenzo, B.; Kelz, A.; Roth, M.; Papaderos, P.; Streicher, O.

    2012-11-01

    Context. Blue compact galaxies (BCG) are narrow emission-line systems that undergo a violent burst of star formation. They are compact, low-luminosity galaxies, with blue colors and low chemical abundances, which offer us a unique opportunity to investigate collective star formation and its effects on galaxy evolution in a relatively simple, dynamically unperturbed environment. Spatially resolved spectrophotometric studies of BCGs are essential for a better understanding of the role of starburst-driven feedback processes on the kinematical and chemical evolution of low-mass galaxies near and far. Aims: We carry out an integral field spectroscopy (IFS) study of a sample of luminous BCGs, with the aim to probe the morphology, kinematics, dust extinction, and excitation mechanisms of their warm interstellar medium (ISM). Methods: We obtained IFS data for five luminous BCGs with VIRUS-P, the prototype instrument for the Visible Integral Field Replicable Unit Spectrograph, attached to the 2.7 m Harlan J. Smith Telescope at the McDonald Observatory. VIRUS-P consists of a square array of 247 optical fibers, which covers a 109″ × 109″ field of view, with a spatial sampling of 4.2 arcsec and a 0.3 filling factor. We observed in the 3550-5850 Å spectral range, with a resolution of 5 Å FWHM. From these data we built two-dimensional maps of the continuum and the most prominent emission lines ([O ii] λ3727, Hγ, Hβ and [O iii] λ5007), and investigated the morphology of diagnostic emission-line ratios and the extinction patterns in the ISM as well as stellar and gas kinematics. Additionally, from integrated spectra we inferred total line fluxes and luminosity-weighted extinction coefficients and gas-phase metallicities. Results: All galaxies exhibit an overall regular morphology in the stellar continuum, while their warm ISM morphology is more complex: in II Zw 33 and Mrk 314, the star-forming regions are aligned along a chain-structure; Haro 1, NGC 4670 and III Zw 102

  13. Simple techniques for thematic mapping

    NASA Astrophysics Data System (ADS)

    Derenyi, Eugene E.

    A digital image restitution and mensuration software package was developed and installed in a geographic information system. Mapping is performed by monoscopic digitization on the image display screen. Two of the three image restitution schemes employ a rigorous mathematical model for generating the object space coordinates, while the third one is based on a close approximation. No photogrammetric or cartographic experience is necessary to follow these procedures. The primary application of this mapping tool is seen as thematic mapping.

  14. Window defect planar mapping technique

    NASA Technical Reports Server (NTRS)

    Minton, F. R.; Minton, U. O. (Inventor)

    1976-01-01

    A method of planar mapping defects in a window having an edge surface and a planar surface. The method is comprised of steps for mounting the window on a support surface. Then a light sensitive paper is placed adjacent to the window surface. A light source is positioned adjacent to the window edge. The window is then illuminated with the source of light for a predetermined interval of time. Defects on the surface of the glass, as well as in the interior of the glass are detected by analyzing the developed light sensitive paper. The light source must be in the form of optical fibers or a light tube whose light transmitting ends are placed near the edge surface of the window.

  15. Digital Mapping Techniques '07 - Workshop Proceedings

    USGS Publications Warehouse

    Soller, David R.

    2008-01-01

    The Digital Mapping Techniques '07 (DMT'07) workshop was attended by 85 technical experts from 49 agencies, universities, and private companies, including representatives from 27 state geological surveys. This year's meeting, the tenth in the annual series, was hosted by the South Carolina Geological Survey, from May 20-23, 2007, on the University of South Carolina campus in Columbia, South Carolina. Each DMT workshop has been coordinated by the U.S. Geological Survey's National Geologic Map Database Project and the Association of American State Geologists (AASG). As in previous years' meetings, the objective was to foster informal discussion and exchange of technical information, principally in order to develop more efficient methods for digital mapping, cartography, GIS analysis, and information management. At this meeting, oral and poster presentations and special discussion sessions emphasized: 1) methods for creating and publishing map products (here, 'publishing' includes Web-based release); 2) field data capture software and techniques, including the use of LIDAR; 3) digital cartographic techniques; 4) migration of digital maps into ArcGIS Geodatabase format; 5) analytical GIS techniques; and 6) continued development of the National Geologic Map Database.

  16. Partial covariance mapping techniques at FELs

    NASA Astrophysics Data System (ADS)

    Frasinski, Leszek

    2014-05-01

    The development of free-electron lasers (FELs) is driven by the desire to access the structure and chemical dynamics of biomolecules with atomic resolution. Short, intense FEL pulses have the potential to record x-ray diffraction images before the molecular structure is destroyed by radiation damage. However, even during the shortest, few-femtosecond pulses currently available, there are some significant changes induced by massive ionisation and the onset of Coulomb explosion. To interpret the diffraction images it is vital to gain insight into the electronic and nuclear dynamics during multiple core and valence ionisations that compete with Auger cascades. This paper focuses on a technique that is capable of probing these processes. The covariance mapping technique is well suited to the high intensity and low repetition rate of FEL pulses. While the multitude of charges ejected at each pulse overwhelms conventional coincidence methods, an improved technique of partial covariance mapping can cope with hundreds of photoelectrons or photoions detected at each FEL shot. The technique, however, often reveals spurious, uninteresting correlations that spoil the maps. This work will discuss the strengths and limitations of various forms of covariance mapping techniques. Quantitative information extracted from the maps will be linked to theoretical modelling of ionisation and fragmentation paths. Special attention will be given to critical experimental parameters, such as counting rate, FEL intensity fluctuations, vacuum impurities or detector efficiency and nonlinearities. Methods of assessing and optimising signal-to-noise ratio will be described. Emphasis will be put on possible future developments such as multidimensional covariance mapping, compensation for various experimental instabilities and improvements in the detector response. This work has been supported by the EPSRC, UK (grants EP/F021232/1 and EP/I032517/1).
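
    The partial covariance correction removes correlations driven by a common fluctuating parameter, here taken to be the FEL pulse intensity I: pcov(X, Y; I) = cov(X, Y) - cov(X, I) cov(I, Y) / var(I). A minimal single-parameter sketch, with hypothetical array names:

    ```python
    import numpy as np

    def partial_covariance_map(X, Y, I):
        """X, Y : (n_shots, n_channels) spectra; I : (n_shots,) pulse intensity.
        Returns the single-parameter partial covariance map."""
        n = X.shape[0]
        Xc, Yc, Ic = X - X.mean(0), Y - Y.mean(0), I - I.mean()
        cov_xy = Xc.T @ Yc / n                # plain covariance map
        cov_xi = Xc.T @ Ic / n                # correlation of each channel with I
        cov_iy = Ic @ Yc / n
        return cov_xy - np.outer(cov_xi, cov_iy) / Ic.var()
    ```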

  17. Three-dimensional spectroscopy of local luminous compact blue galaxies: kinematic maps of a sample of 22 objects

    NASA Astrophysics Data System (ADS)

    Pérez-Gallego, J.; Guzmán, R.; Castillo-Morales, A.; Gallego, J.; Castander, F. J.; Garland, C. A.; Gruel, N.; Pisano, D. J.; Zamorano, J.

    2011-12-01

    We use three-dimensional optical spectroscopy observations of a sample of 22 local luminous compact blue galaxies (LCBGs) to create kinematic maps. By means of these, we classify the kinematics of these galaxies into three different classes: rotating disc (RD), perturbed rotation (PR) and complex kinematics (CK). We find 48 per cent are RDs, 28 per cent are PRs and 24 per cent are CKs. RDs show rotational velocities that range between ~50 and ~200 km s^-1, and dynamical masses that range between ~1 × 10^9 and ~3 × 10^10 M⊙. We also address the following two fundamental questions through the study of the kinematic maps: (i) What processes are triggering the current starburst in LCBGs? We search our maps of the galaxy velocity fields for signatures of recent interactions and close companions that may be responsible for the enhanced star formation in our sample. We find that 5 per cent of objects show evidence of a recent major merger, 10 per cent of a minor merger and 45 per cent of a companion. This argues in favour of ongoing interactions with close companions as a mechanism for the enhanced star formation activity in these galaxies. (ii) What processes may eventually quench the current starburst in LCBGs? Velocity and velocity width maps, together with emission line ratio maps, can reveal signatures of active galactic nuclei (AGNs) activity or supernova (SN)-driven galactic winds that could halt the current burst. We find only 5 per cent of objects with clear evidence of AGN activity and 27 per cent with kinematics consistent with SN-driven galactic winds. Therefore, a different mechanism may be responsible for quenching the star formation in LCBGs. Finally, from our analysis, we find that the velocity widths of RDs, rather than accounting exclusively for the rotational nature of these objects, may account as well for other kinematic components and may not be good tracers of their dynamical masses.

  18. Multiscale mapping: Physical concepts and mathematical techniques

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This is an introductory summary for papers either invited or a part of a symposium at the 18th World Congress of Soil Science, July 2006 in Philadelphia. The symposium, titled "Multiscale Mapping of Soil Properties for Environmental Studies, Agriculture, and Decision Making," focused on techniques u...

  19. Digital Mapping Techniques '06 - Workshop Proceedings

    USGS Publications Warehouse

    Soller, David R.

    2007-01-01

    The Digital Mapping Techniques '06 (DMT'06) workshop was attended by more than 110 technical experts from 51 agencies, universities, and private companies, including representatives from 27 state geological surveys (see Appendix A of these Proceedings). This workshop was similar in nature to the previous nine meetings, which were held in Lawrence, Kansas (Soller, 1997), Champaign, Illinois (Soller, 1998), Madison, Wisconsin (Soller, 1999), Lexington, Kentucky (Soller, 2000), Tuscaloosa, Alabama (Soller, 2001), Salt Lake City, Utah (Soller, 2002), Millersville, Pennsylvania (Soller, 2003), Portland, Oregon (Soller, 2004), and Baton Rouge, Louisiana (Soller, 2005). This year's meeting was hosted by the Ohio Geological Survey, from June 11-14, 2006, on the Ohio State University campus in Columbus, Ohio. As in the previous meetings, the objective was to foster informal discussion and exchange of technical information. It is with great pleasure that I note that the objective was successfully met, as attendees continued to share and exchange knowledge and information, and renew friendships and collegial work begun at past DMT workshops. Each DMT workshop has been coordinated by the Association of American State Geologists (AASG) and U.S. Geological Survey (USGS) Data Capture Working Group, the latter of which was formed in August 1996 to support the AASG and the USGS in their effort to build a National Geologic Map Database (see Soller, this volume, and http://ngmdb.usgs.gov/info/standards/datacapt/). The Working Group was formed because increased production efficiencies, standardization, and quality of digital map products were needed for the database, and for the State and Federal geological surveys, to provide more high-quality digital maps to the public. At the 2006 meeting, oral and poster presentations and special discussion sessions emphasized: 1) methods for creating and publishing map products (here, 'publishing' includes Web-based release); 2) field data capture

  20. Investigation of light scattering as a technique for detecting discrete soot particles in a luminous flame

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The practicability of using a classical light-scattering technique, involving comparison of angular scattering intensity patterns with theoretically determined Mie and Rayleigh patterns, to detect discrete soot particles (diameter less than 50 nm) in premixed propane/air and propane/oxygen-helium flames is considered. The experimental apparatus employed in this investigation included a laser light source, a flat-flame burner, specially coated optics, a cooled photomultiplier detector, and a lock-in voltmeter readout. Although large, agglomerated soot particles were detected and sized, it was not possible to detect small, discrete particles. The limiting factor appears to be background scattering by the system's optics.

  1. Structural Modeling Using "Scanning and Mapping" Technique

    NASA Technical Reports Server (NTRS)

    Amos, Courtney L.; Dash, Gerald S.; Shen, J. Y.; Ferguson, Frederick; Noga, Donald F. (Technical Monitor)

    2000-01-01

    Supported by NASA Glenn Center, we are in the process of developing a structural damage diagnostic and monitoring system for rocket engines, which consists of five modules: Structural Modeling, Measurement Data Pre-Processor, Structural System Identification, Damage Detection Criterion, and Computer Visualization. The function of the system is to detect damage as it is incurred by the engine structures. The scientific principle used to identify damage is to utilize the changes in the vibrational properties between the pre-damaged and post-damaged structures. The vibrational properties of the pre-damaged structure can be obtained based on an analytic computer model of the structure. Thus, as the first stage of the whole research plan, we currently focus on the first module - Structural Modeling. Three computer software packages are selected, and will be integrated for this purpose. They are PhotoModeler-Pro, AutoCAD-R14, and MSC/NASTRAN. AutoCAD is the most popular PC-CAD system currently available in the market. For our purpose, it serves as an interface to generate structural models of any particular engine parts or assembly, which are then passed to MSC/NASTRAN for extracting structural dynamic properties. Although AutoCAD is a powerful structural modeling tool, the complexity of engine components requires a further improvement in structural modeling techniques. We are working on a so-called "scanning and mapping" technique, which is relatively new. The basic idea is to produce a full and accurate 3D structural model by tracing on multiple overlapping photographs taken from different angles. There is no need to input point positions, angles, distances or axes. Photographs can be taken by any type of camera with different lenses. With the integration of such a modeling technique, the capability of structural modeling will be enhanced. The prototypes of any complex structural components will be produced by PhotoModeler first based on existing similar

  2. Digital Mapping Techniques '11–12 workshop proceedings

    USGS Publications Warehouse

    Soller, David R.

    2014-01-01

    At these meetings, oral and poster presentations and special discussion sessions emphasized: (1) methods for creating and publishing map products (here, "publishing" includes Web-based release); (2) field data capture software and techniques, including the use of LiDAR; (3) digital cartographic techniques; (4) migration of digital maps into ArcGIS Geodatabase formats; (5) analytical GIS techniques; and (6) continued development of the National Geologic Map Database.

  3. Process Mapping: Tools, Techniques, & Critical Success Factors.

    ERIC Educational Resources Information Center

    Kalman, Howard K.

    2002-01-01

    Explains process mapping as an analytical tool and a process intervention that performance technologists can use to improve human performance by reducing error variance. Highlights include benefits of process mapping; and critical success factors, including organizational readiness, time commitment by participants, and the availability of a…

  4. Concept Mapping: A Critical Thinking Technique

    ERIC Educational Resources Information Center

    Harris, Charles M.; Zha, Shenghua

    2013-01-01

    Concept mapping, graphically depicting the structure of abstract concepts, is based on the observation that pictures and line drawings are often more easily comprehended than the words that represent an abstract concept. The efficacy of concept mapping for facilitating critical thinking was assessed in four sections of an introductory psychology…

  5. A Novel Tsunami Inundation Mapping Technique

    NASA Astrophysics Data System (ADS)

    Watts, P.; Walder, J. S.; Waythomas, C. F.

    2003-12-01

    Traditional tsunami inundation maps have focused on maximum on-land inundation. Occasionally, the inundation maps are supplemented with information about maximum water velocity or timing. We demonstrate using several case studies the utility of producing maps showing 1) on-land inundation area, 2) maximum water velocity, 3) maximum water flux, 4) time of wave arrival, 5) time of wave maximum, and 6) time of wave departure. Map attributes 1-3 are related to water motion and are of particular value to scientists involved in tsunami hazard assessment. Attributes 4-6 are related to the timing of wave events and are of particular value to emergency managers. However, this raw form of inundation mapping is not easily interpreted within the usual GIS context because water velocities and fluxes are not readily understood by lay people. To improve on the GIS approach to portraying tsunami information, we employ simple engineering models illustrating the consequences of tsunami attack in a way that lay audiences can readily understand. In particular, we present maps that depict regions where a human would be knocked down, where cars would be moved, where various sizes of sediment would be moved, where anchors would break, and where ships would be moved laterally by a significant amount. Each map is separated into three distinct layers, each layer representing a different level of hazard. For example, the car movement map shows results for three sizes of automobile. Each region is depicted as a separate GIS layer. Given the uncertainty involved in numerical simulations, as well as the many local variables that can alter the outcome during a real event, these maps are meant to provide general guidelines as to the outcome of tsunami attack.
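
    A hedged sketch of the layered-hazard idea: each GIS layer is a thresholded combination of the simulated flow variables. The threshold values below are placeholder illustrations only, not the engineering criteria used in the study.

    ```python
    import numpy as np

    def hazard_layers(depth_m, speed_ms):
        """Classify simulated inundation cells into simple hazard layers.
        depth_m, speed_ms : 2-D arrays of maximum flow depth and speed.
        Thresholds are illustrative placeholders."""
        flux = depth_m * speed_ms                    # depth-times-velocity proxy
        return {
            "person_knocked_down":   flux > 0.5,
            "car_moved":             flux > 1.5,
            "coarse_sediment_moved": speed_ms > 2.0,
        }                                            # each mask becomes one GIS layer
    ```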

  6. Achromatic Emission Velocity Measurements in Luminous Flows

    NASA Technical Reports Server (NTRS)

    Schneider, S. J.; Fulghum, S. F.; Rostler, P. S.

    1997-01-01

    A new velocity measurement instrument for luminous flows was developed by Science Research Laboratory for NASA. The SIEVE (Segmented Image Emission VElocimeter) instrument uses broadband light emitted by the flow for the velocity measurement. This differs from other velocimetry techniques in that it does not depend on laser illumination and/or light scattering from particles in the flow. The SIEVE is a passive, non-intrusive diagnostic. By moving and adjusting the imaging optics, the SIEVE can provide three-dimensional mapping of a flow field and determine turbulence scale size. A SIEVE instrument was demonstrated on an illuminated rotating disk to evaluate instrument response and noise and on an oxy-acetylene torch to measure flame velocities. The luminous flow in rocket combustors and plumes is an ideal subject for the SIEVE velocity measurement technique.

  7. Luminous presence

    NASA Astrophysics Data System (ADS)

    Dawson, Paula

    2008-02-01

    The Luminous Presence project examines the use of standard film language in the framing, angle, and point of view of holographic subjects through eight digital holographic stereograms: seven at 25 x 25 cm (Hail, Water, Rain, Snow, Sun, Text, Imprint) and one at 1.5 x 1 m (Luminous Presences). However, before embarking on a discussion of how filmic language can be used in digital holograms, it is first important to explain why this line of investigation could be fruitful. Undoubtedly several of the compositional practices which sprung up and evolved throughout the development of the diverse forms of the holographic medium have contributed to a unique hologram pictorial language; however, it is well known that the reading of visual imagery of any type relies a great deal on the viewer's knowledge and experience of other images. The lens-recorded imagery of film is a far more familiar language than that of holograms, and the correlation between certain filmic pictorial conventions and emotional responses is well documented and understood. In short, the language of film contains a highly nuanced vocabulary of shot types and lens types (which may be criticised as being formulaic) yet is effective in lending emotion to figures.

  8. A satellite remote sensing technique for geological structure horizon mapping

    SciTech Connect

    Fraser, A.; Huggins, P.; Rees, J.

    1996-08-01

    A Satellite Remote Sensing Technique is demonstrated for generating near surface geological structure data. This technique enables the screening of large areas and the targeting of seismic acquisition during hydrocarbon exploration. This is of particular advantage in terrains where surveying is logistically difficult. Landsat Thematic Mapper (TM) data and a high resolution Digital Elevation Model (DEM) are used to identify and map outcropping horizons. These are used to reconstruct the near surface structure. The technique is applied in Central Yemen, which is characterised by 'layer-cake' geology and low dipping terrain. The results are validated using 2D seismic data. The near surface map images faults and structure not apparent in the raw data. Comparison with the structure map generated from 2D seismic data indicates very good structural and fault correlation. The near surface map successfully highlights areas of potential closure at reservoir depths.

  9. Mapping the climate: guidance on appropriate techniques to map climate variables and their uncertainty

    NASA Astrophysics Data System (ADS)

    Kaye, N. R.; Hartley, A.; Hemming, D.

    2012-02-01

    Maps are a crucial asset in communicating climate science to a diverse audience, and there is a wealth of software available to analyse and visualise climate information. However, this availability makes it easy to create poor maps as users often lack an underlying cartographic knowledge. Unlike traditional cartography, where many known standards allow maps to be interpreted easily, there is no standard mapping approach used to represent uncertainty (in climate or other information). Consequently, a wide range of techniques have been applied for this purpose, and users may spend unnecessary time trying to understand the mapping approach rather than interpreting the information presented. Furthermore, communicating and visualising uncertainties in climate data and climate change projections, using for example ensemble based approaches, presents additional challenges for mapping that require careful consideration. The aim of this paper is to provide background information and guidance on suitable techniques for mapping climate variables, including uncertainty. We assess a range of existing and novel techniques for mapping variables and uncertainties, comparing "intrinsic" approaches that use colour in much the same way as conventional thematic maps with "extrinsic" approaches that incorporate additional geometry such as points or features. Using cartographic knowledge and lessons learned from mapping in different disciplines we propose the following 6 general mapping guidelines to develop a suitable mapping technique that represents both magnitude and uncertainty in climate data:

    - use a sensible sequential or diverging colour scheme;

    - use appropriate colour symbolism if it is applicable;

    - ensure the map is usable by colour blind people;

    - use a data classification scheme that does not misrepresent the data;

    - use a map

  10. Map Your Way to Speech Success! Employing Mind Mapping as a Speech Preparation Technique

    ERIC Educational Resources Information Center

    Paxman, Christina G.

    2011-01-01

    Mind mapping has gained considerable credibility recently in corporations such as Boeing and Nabisco, as well as in the classroom in terms of preparing for examinations and preparing for speeches. A mind map is a graphic technique for organizing an individual's thoughts and other information. It harnesses the full range of cortical skills--word,…

  11. Mapping techniques for aligning sulci across multiple brains☆

    PubMed Central

    Tosun, Duygu; Rettmann, Maryam E.; Prince, Jerry L.

    2015-01-01

    Visualization and mapping of function on the cortical surface is difficult because of its sulcal and gyral convolutions. Methods to unfold and flatten the cortical surface for visualization and measurement have been described in the literature. This makes visualization and measurement possible, but comparison across multiple subjects is still difficult because of the lack of a standard mapping technique. In this paper, we describe two methods that map each hemisphere of the cortex to a portion of a sphere in a standard way. To quantify how accurately the geometric features of the cortex – i.e., sulci and gyri – are mapped into the same location, sulcal alignment across multiple brains is analyzed, and probabilistic maps for different sulcal regions are generated to be used in automatic labelling of segmented sulcal regions. PMID:15450224

  12. Automated thermal mapping techniques using chromatic image analysis

    NASA Technical Reports Server (NTRS)

    Buck, Gregory M.

    1989-01-01

    Thermal imaging techniques are introduced using a chromatic image analysis system and temperature sensitive coatings. These techniques are used for thermal mapping and surface heat transfer measurements on aerothermodynamic test models in hypersonic wind tunnels. Measurements are made on complex vehicle configurations in a timely manner and at minimal expense. The image analysis system uses separate wavelength filtered images to analyze surface spectral intensity data. The system was initially developed for quantitative surface temperature mapping using two-color thermographic phosphors but was found useful in interpreting phase change paint and liquid crystal data as well.
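
    In two-color thermographic-phosphor work the measurable is the ratio of the two wavelength-filtered intensity images, converted to temperature through a calibration curve. The sketch below illustrates that conversion with a hypothetical calibration table; it is not the chromatic image analysis system described above.

    ```python
    import numpy as np

    def temperature_map(img_band1, img_band2, calib_ratio, calib_temp):
        """Convert two wavelength-filtered phosphor images to temperature by
        interpolating the measured intensity ratio on a calibration curve.
        calib_ratio must be monotonically increasing for np.interp."""
        ratio = img_band1 / np.maximum(img_band2, 1e-6)
        return np.interp(ratio, calib_ratio, calib_temp)

    # Hypothetical calibration acquired against an instrumented reference sample
    calib_ratio = np.array([0.2, 0.4, 0.8, 1.6, 3.2])
    calib_temp  = np.array([300.0, 350.0, 400.0, 450.0, 500.0])   # kelvin
    ```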

  13. Comparison of the Reliability and Validity of Scores from Two Concept-Mapping Techniques.

    ERIC Educational Resources Information Center

    Ruiz-Primo, Maria Araceli; Schultz, Susan E.; Li, Min; Shavelson, Richard J.

    2001-01-01

    Reports the results of a study that compared two concept-mapping techniques, one high-directed, "fill-in-the-map" and one low-directed, "construct-a-map-from-scratch". Examines whether: (1) skeleton map scores were sensitive to the sample; (2) the two types of skeleton maps were equivalent; and (3) the two mapping techniques provided similar…

  14. Extended mapping and characteristics techniques for inverse aerodynamic design

    NASA Technical Reports Server (NTRS)

    Sobieczky, H.; Qian, Y. J.

    1991-01-01

    Some ideas for using hodograph theory, mapping techniques and methods of characteristics to formulate typical aerodynamic design boundary value problems are developed. The inverse method of characteristics is shown to be a fast tool for design of transonic flow elements as well as supersonic flows with given shock waves.

  15. Uncertainty in mapping urban air quality using crowdsourcing techniques

    NASA Astrophysics Data System (ADS)

    Schneider, Philipp; Castell, Nuria; Lahoz, William; Bartonova, Alena

    2016-04-01

    Small and low-cost sensors measuring various air pollutants have become available in recent years owing to advances in sensor technology. Such sensors have significant potential for improving high-resolution mapping of air quality in the urban environment as they can be deployed in comparatively large numbers and therefore are able to provide information at unprecedented spatial detail. However, such sensor devices are subject to significant and currently little understood uncertainties that affect their usability. Not only do these devices exhibit random errors and biases of occasionally substantial magnitudes, but these errors may also shift over time. In addition, there often tends to be significant inter-sensor variability even when supposedly identical sensors from the same manufacturer are used. We need to quantify accurately these uncertainties to make proper use of the information they provide. Furthermore, when making use of the data and producing derived products such as maps, the measurement uncertainties that propagate throughout the analysis need to be clearly communicated to the scientific and non-scientific users of the map products. Based on recent experiences within the EU-funded projects CITI-SENSE and hackAIR we discuss the uncertainties along the entire processing chain when using crowdsourcing techniques for mapping urban air quality. Starting with the uncertainties exhibited by the sensors themselves, we present ways of quantifying the error characteristics of a network of low-cost microsensors and show suitable statistical metrics for summarizing them. Subsequently, we briefly present a data-fusion-based method for mapping air quality in the urban environment and illustrate how we propagate the uncertainties of the individual sensors throughout the mapping system, resulting in detailed maps that document the pixel-level uncertainty for each concentration field. Finally, we present methods for communicating the resulting spatial uncertainty
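
    One simple way to carry sensor uncertainty through to a map, sketched below, is inverse-variance weighting: each grid cell receives the precision-weighted mean of the observations that fall in it, together with the standard deviation of that estimate. This is a generic illustration under stated assumptions, not the CITI-SENSE or hackAIR data-fusion system.

    ```python
    import numpy as np

    def fuse_cell(values, sigmas):
        """Precision-weighted mean and its one-sigma uncertainty for one map cell.
        values : sensor readings falling in the cell
        sigmas : their one-sigma uncertainties (e.g. from co-location calibration)."""
        values = np.asarray(values, dtype=float)
        w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
        mean = np.sum(w * values) / np.sum(w)
        sigma_mean = np.sqrt(1.0 / np.sum(w))
        return mean, sigma_mean

    print(fuse_cell([42.0, 55.0, 47.0], [8.0, 15.0, 10.0]))  # hypothetical readings
    ```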

  16. Geological mapping in northwestern Saudi Arabia using LANDSAT multispectral techniques

    NASA Technical Reports Server (NTRS)

    Blodget, H. W.; Brown, G. F.; Moik, J. G.

    1975-01-01

    Various computer enhancement and data extraction systems using LANDSAT data were assessed and used to complement a continuing geologic mapping program. Interactive digital classification techniques using both the maximum-likelihood and thresholding statistical approaches achieve very limited success in areas of highly dissected terrain. Computer enhanced imagery developed by color compositing stretched MSS ratio data was constructed for a test site in northwestern Saudi Arabia. Initial results indicate that several igneous and sedimentary rock-types can be discriminated.

  17. Geological mapping in northwestern Saudi Arabia using LANDSAT multispectral techniques

    NASA Technical Reports Server (NTRS)

    Blodget, H. W.; Brown, G. F.; Moik, J. G.

    1975-01-01

    Various computer enhancement and data extraction systems using LANDSAT data were assessed and used to complement a continuing geologic mapping program. Interactive digital classification techniques using both the parallel-piped and maximum-likelihood statistical approaches achieve very limited success in areas of highly dissected terrain. Computer enhanced imagery developed by color compositing stretched MSS ratio data was constructed for a test site in northwestern Saudi Arabia. Initial results indicate that several igneous and sedimentary rock types can be discriminated.

  18. Luminous efficiency functions at higher intensities

    NASA Astrophysics Data System (ADS)

    Harrington, Lawrence Kent

    Two psychophysical measurement techniques, flicker photometry and successive heterochromatic brightness matching, were used to measure changes in luminance efficiency functions with increasing levels of light adaptation. Both measurement techniques were performed using the same optical system and the same seven healthy adults as subjects. Measurements were taken at four reference stimulus intensities, 1, 10, 100 and 1000 foot-lamberts. Luminous efficiency was found to depend on both the technique and the reference stimulus intensity with which the measurements were taken. For heterochromatic brightness matching, luminous efficiency increased for longer wavelengths as reference intensity increased. Peak luminous efficiency shifted from approximately 540nm to greater than 600nm with increasing intensity for all seven subjects. Peak luminous efficiency was constant for flicker photometry across all intensities but the function narrowed slightly at 100 foot-lamberts.

  19. Strain mapping accuracy improvement using super-resolution techniques.

    PubMed

    Bárcena-González, G; Guerrero-Lebrero, M P; Guerrero, E; Fernández-Reyes, D; González, D; Mayoral, A; Utrilla, A D; Ulloa, J M; Galindo, P L

    2016-04-01

    Super-resolution (SR) software-based techniques aim at generating a final image by combining several noisy frames with lower resolution from the same scene. A comparative study on high-resolution high-angle annular dark field images of InAs/GaAs QDs has been carried out in order to evaluate the performance of the SR technique. The obtained SR images present enhanced resolution and a higher signal-to-noise ratio (SNR) and sharpness than the experimental images. In addition, SR is also applied in the field of strain analysis using digital image processing applications such as geometrical phase analysis and peak pairs analysis. The precision of the strain mappings can be improved when SR methodologies are applied to experimental images. PMID:26501744
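
    The core of software-based SR is registering several low-resolution noisy frames on a finer grid and combining them. The shift-and-add sketch below (known integer sub-pixel shifts, plain averaging) captures the idea; practical SR for HAADF images uses sub-pixel registration and more robust estimators, so treat this as an illustration only.

    ```python
    import numpy as np

    def shift_and_add(frames, shifts, factor=2):
        """frames : list of (H, W) low-resolution images of the same scene
        shifts : per-frame (dy, dx) offsets in high-resolution pixels,
                 each assumed to lie in range(factor)
        factor : upsampling factor.  Returns the averaged high-res estimate."""
        H, W = frames[0].shape
        acc = np.zeros((H * factor, W * factor))
        cnt = np.zeros_like(acc)
        for img, (dy, dx) in zip(frames, shifts):
            acc[dy::factor, dx::factor] += img   # place each frame on its sub-grid
            cnt[dy::factor, dx::factor] += 1
        return acc / np.maximum(cnt, 1)
    ```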

  20. Applying field mapping refractive beam shapers to improve holographic techniques

    NASA Astrophysics Data System (ADS)

    Laskin, Alexander; Williams, Gavin; McWilliam, Richard; Laskin, Vadim

    2012-03-01

    Performance of various holographic techniques can be essentially improved by homogenizing the intensity profile of the laser beam using beam shaping optics, for example the achromatic field mapping refractive beam shapers like πShaper. The operational principle of these devices presumes transformation of the laser beam intensity profile from Gaussian to flat-top with high flatness of the output wavefront, preservation of beam consistency, providing a collimated output beam of low divergence, high transmittance, extended depth of field, negligible residual wave aberration, and an achromatic design that provides the capability to work with several laser sources with different wavelengths simultaneously. Applying these beam shapers brings serious benefits to Spatial Light Modulator based techniques like Computer Generated Holography or Dot-Matrix mastering of security holograms, since uniform illumination of an SLM allows simplifying mathematical calculations and increasing predictability and reliability of the imaging results. Another example is multicolour Denisyuk holography, where the achromatic πShaper provides uniform illumination of a field at various wavelengths simultaneously. This paper will describe some design basics of the field mapping refractive beam shapers and optical layouts for their application in holographic systems. Examples of real implementations and experimental results will be presented as well.
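
    The "field mapping" itself can be sketched from energy conservation: equating the encircled power of the input Gaussian with that of the desired uniform disc gives the radial ray mapping. The code below shows that idealized, lossless relation as an illustration; it is not the πShaper design.

    ```python
    import numpy as np

    def gaussian_to_flattop_mapping(r_in, w_in, R_out):
        """Radial ray mapping r_out(r_in) that converts a Gaussian beam of
        1/e^2 radius w_in into a uniform disc of radius R_out, obtained by
        equating encircled power: (r_out/R_out)^2 = 1 - exp(-2 r_in^2 / w_in^2)."""
        frac = 1.0 - np.exp(-2.0 * (r_in / w_in) ** 2)   # Gaussian encircled power
        return R_out * np.sqrt(frac)

    r = np.linspace(0.0, 3.0, 7)   # hypothetical input radii, mm
    print(gaussian_to_flattop_mapping(r, w_in=2.0, R_out=5.0))
    ```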

  1. Duodenal luminal nutrient sensing

    PubMed Central

    Rønnestad, Ivar; Akiba, Yasutada; Kaji, Izumi; Kaunitz, Jonathan D

    2016-01-01

    The gastrointestinal mucosa is exposed to numerous chemical substances and microorganisms, including macronutrients, micronutrients, bacteria, endogenous ions, and proteins. The regulation of mucosal protection, digestion, absorption and motility is signaled in part by luminal solutes. Therefore, luminal chemosensing is an important mechanism enabling the mucosa to monitor luminal conditions, such as pH, ion concentrations, nutrient quantity, and microflora. The duodenal mucosa shares luminal nutrient receptors with lingual taste receptors in order to detect the five basic tastes, in addition to essential nutrients, and unwanted chemicals. The recent ‘de-orphanization' of nutrient sensing G protein-coupled receptors provides an essential component of the mechanism by which the mucosa senses luminal nutrients. In this review, we will update the mechanisms of luminal nutrient sensing and their underlying physiological and pathological roles, with a main focus on the duodenal mucosa. PMID:25113991

  2. Applications of Mapping and Tomographic Techniques in Gem Sciences

    NASA Astrophysics Data System (ADS)

    Shen, A. H.

    2014-12-01

    Gem Sciences are scientific studies of gemstones - their genesis, provenance, synthesis, enhancement, treatment and identification. As high quality forms of specific minerals, gemstones exhibit unusual physical properties that are usually unseen in their regular counterparts. Most gemstones are colored by trace elements incorporated in the crystal lattice during various growth stages, forming coloration zones of various scales. Studying the spectral and chemical contrast across color zones helps elucidate the origins of color. These studies are done with microscope-equipped UV-visible spectrometers and LA-ICPMS in modern gemological laboratories. In the case of diamonds, their colored zones arise from various structural defects incorporated in different growth zones and are studied with FTIR spectrometers fitted with an IR microscope and with laser photoluminescence spectrometers. Advancement in modern synthetic techniques such as chemical vapor deposition (CVD) has created some problems for identification. Some exploratory experiments in carbon isotope mapping were done on diamonds using SIMS. The most important issue in pearls is to determine whether a particular pearl is cultured or natural; the price difference can be enormous. The classical identification is done with x-ray radiographs, which clearly show the bead and the nacre. Modern cultured-pearl techniques have eliminated the need for an artificial bead, using a small piece of tissue instead. Nowadays, computer x-ray tomography (CT) scanning devices are used to produce a clear image of the interior of a pearl. In the Chinese jade market, filling fissures with epoxy and/or wax is very commonly seen. We are currently exploring the Magnetic Resonance Imaging (MRI) technique to map the distribution of artificial resin within polycrystalline aggregates.

  3. Gluing for Raman lidar systems using the lamp mapping technique.

    PubMed

    Walker, Monique; Venable, Demetrius; Whiteman, David N

    2014-12-20

    In the context of combined analog and photon counting (PC) data acquisition in a Lidar system, glue coefficients are defined as constants used for converting an analog signal into a virtual PC signal. The coefficients are typically calculated using Lidar profile data taken under clear, nighttime conditions since, in the presence of clouds or high solar background, it is difficult to obtain accurate glue coefficients from Lidar backscattered data. Here we introduce a new method in which we use the lamp mapping technique (LMT) to determine glue coefficients in a manner that does not require atmospheric profiles to be acquired and permits accurate glue coefficients to be calculated when adequate Lidar profile data are not available. The LMT involves scanning a halogen lamp over the aperture of a Lidar receiver telescope such that the optical efficiency of the entire detection system is characterized. The studies shown here involve two Raman lidar systems; the first from Howard University and the second from NASA/Goddard Space Flight Center. The glue coefficients determined using the LMT and the Lidar backscattered method agreed within 1.2% for the water vapor channel and within 2.5% for the nitrogen channel for both Lidar systems. We believe this to be the first instance of the use of laboratory techniques for determining the glue coefficients for Lidar data analysis. PMID:25608203
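
    Whatever the source of the paired signals (atmospheric profiles or the LMT scans described above), the gluing itself reduces to a linear fit of the analog signal against the photon-counting rate over the range where both are valid. A minimal sketch of that linear gluing, with placeholder validity limits rather than real dead-time figures:

    ```python
    import numpy as np

    def glue_coefficients(analog, counts, count_min=0.5, count_max=10.0):
        """Fit virtual_counts = a * analog + b over the region where the
        photon-counting channel is linear.  count_min/count_max (MHz) are
        placeholder limits; real limits depend on detector dead time."""
        analog, counts = np.asarray(analog, float), np.asarray(counts, float)
        mask = (counts > count_min) & (counts < count_max)
        a, b = np.polyfit(analog[mask], counts[mask], 1)
        return a, b

    def glue(analog, counts, a, b, count_max=10.0):
        """Use PC data where it is valid, the scaled analog signal elsewhere."""
        virtual = a * np.asarray(analog, float) + b
        return np.where(np.asarray(counts, float) < count_max, counts, virtual)
    ```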

  4. Large areas elemental mapping by ion beam analysis techniques

    NASA Astrophysics Data System (ADS)

    Silva, T. F.; Rodrigues, C. L.; Curado, J. F.; Allegro, P.; Moro, M. V.; Campos, P. H. O. V.; Santos, S. B.; Kajiya, E. A. M.; Rizzutto, M. A.; Added, N.; Tabacniks, M. H.

    2015-07-01

    The external beam line of the Laboratory for Material Analysis with Ion Beams (LAMFI) is a versatile setup for multi-technique analysis. X-ray detectors for Particle Induced X-ray Emission (PIXE) measurements, a gamma-ray detector for Particle Induced Gamma-ray Emission (PIGE), and a particle detector for scattering analysis, such as Rutherford Backscattering Spectrometry (RBS), were already installed. In this work, we present some results using a large (60-cm range) XYZ computer-controlled sample positioning system, completely developed and built in our laboratory. The XYZ stage was installed at the external beam line, and its high spatial resolution (better than 5 μm over the full range) enables positioning the sample with high accuracy and high reproducibility. The combination of a sub-millimeter beam with the large-range XYZ robotic stage is being used to produce elemental maps of large areas in all sorts of samples, such as paintings, ceramics, stones, and fossils. Due to its particular characteristics, this is a unique device for multi-technique analysis of large areas. With the continuous development of the external beam line at LAMFI, coupled to the robotic XYZ stage, it is becoming a robust and reliable option for regular analysis of trace elements (Z > 5), competing with traditional in-vacuum ion beam analysis and offering the advantage of automatic rastering.

  5. Cognitive Mapping Techniques: Implications for Research in Engineering and Technology Education

    ERIC Educational Resources Information Center

    Dixon, Raymond A.; Lammi, Matthew

    2014-01-01

    The primary goal of this paper is to present the theoretical basis and application of two types of cognitive maps, concept map and mind map, and explain how they can be used by educational researchers in engineering design research. Cognitive mapping techniques can be useful to researchers as they study students' problem solving strategies…

  6. Interaction-Point Phase-Space Characterization using Single-Beam and Luminous-Region Measurements at PEP-II

    SciTech Connect

    Kozanecki, W.; Bevan, A.J.; Viaud, B.F.; Cai, Y.; Fisher, A.S.; O'Grady, C.; Lindquist, B.; Roodman, A.; Thompson, J.M.; Weaver, M.; /SLAC

    2008-09-09

    We present an extensive experimental characterization of the e± phase space at the interaction point of the SLAC PEP-II B-Factory that combines a detailed mapping of luminous-region observables using the BABAR detector with stored-beam measurements by accelerator techniques.

  7. Hydrocarbon microseepage mapping using signature based target detection techniques

    NASA Astrophysics Data System (ADS)

    Soydan, Hilal; Koz, Alper; Şebnem Düzgün, H.; Aydin Alatan, A.

    2015-10-01

    In this paper, we compare conventional methods for detecting hydrocarbon seepage anomalies with signature-based detection algorithms. The Crosta technique [1] is selected as the baseline for the conventional approach in the experimental comparisons. The Crosta technique utilizes the characteristic bands of the searched target for principal component transformation in order to determine the components characterizing the target of interest. The Desired Target Detection and Classification Algorithm (DTDCA), Spectral Matched Filter (SMF), and Normalized Correlation (NC) are employed for signature-based target detection. Signature-based target detection algorithms are applied to the whole spectrum, benefiting from the information stored in all spectral bands. The selected methods are applied to a multispectral Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) image of the study region, with an atmospheric correction applied prior to running the algorithms. ASTER provides multispectral bands covering the visible, shortwave, and thermal infrared regions, which serves as a useful tool for the interpretation of areas with hydrocarbon anomalies. The exploration area is the Gemrik Anticline, located in South East Anatolia, Adıyaman, Bozova Oil Field, where microseeps can be observed with almost no vegetation cover. The spectral signatures collected with an Analytical Spectral Devices Inc. (ASD) spectrometer from the reference valley [2] have been utilized as input to the signature-based detection algorithms. The experiments have indicated that DTDCA and SMF outperform the Crosta technique by locating the microseepage patterns along the migration pathways with better contrast. On the other hand, NC has not been able to map the searched target with a visible distinction. It is concluded that the signature-based algorithms can be more effective than the conventional methods for the detection of microseepage-induced anomalies.
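
    To make the signature-based approach concrete, here is a minimal sketch of a spectral matched filter applied to an image cube, under the usual assumption that the background mean and covariance are estimated from the scene itself; the random cube and the injected target signature are illustrative, not ASTER data or the ASD field spectra used in the study.

        import numpy as np

        def spectral_matched_filter(cube, target):
            """Per-pixel matched-filter score for a (rows, cols, bands) cube
            and a (bands,) target signature, using scene-derived statistics."""
            rows, cols, bands = cube.shape
            X = cube.reshape(-1, bands).astype(float)
            mu = X.mean(axis=0)
            cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(bands)  # regularized
            cov_inv = np.linalg.inv(cov)
            d = target - mu
            scores = (X - mu) @ cov_inv @ d / (d @ cov_inv @ d)
            return scores.reshape(rows, cols)

        # Toy scene: correlated background plus a weak sub-pixel target.
        rng = np.random.default_rng(1)
        rows, cols, bands = 60, 60, 9
        background = rng.normal(0.2, 0.02, (rows, cols, bands)).cumsum(axis=2)
        target_sig = np.linspace(0.1, 0.9, bands)
        cube = background.copy()
        cube[30, 30] = 0.7 * cube[30, 30] + 0.3 * target_sig
        score = spectral_matched_filter(cube, target_sig)
        print("peak score at pixel:", np.unravel_index(np.argmax(score), score.shape))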

  8. Semantic Data And Visualization Techniques Applied To Geologic Field Mapping

    NASA Astrophysics Data System (ADS)

    Houser, P. I. Q.; Royo-Leon, M.; Munoz, R.; Estrada, E.; Villanueva-Rosales, N.; Pennington, D. D.

    2015-12-01

    Geologic field mapping involves the use of technology before, during, and after visiting a site. Geologists utilize hardware such as Global Positioning Systems (GPS) connected to mobile computing platforms such as tablets, with software such as ESRI's ArcPad, to produce maps and figures for a final analysis and report. Hand-written field notes contain important information and drawings or sketches of specific areas within the field study. Our goal is to collect and geo-tag final and raw field data into a cyber-infrastructure environment with an ontology that allows for large-scale data processing, visualization, sharing, and searching, aiding in connecting field research with prior research in the same area and/or aiding experiment replication. Online searches of a specific field area return results such as weather data from NOAA and QuakeML seismic data from USGS. These results can then be saved to a field mobile device and searched while in the field, where there is no Internet connection. To accomplish this we created the GeoField ontology service using the Web Ontology Language (OWL) and the Protégé software. Advanced queries on the dataset can be made using reasoning capabilities that go beyond those of a standard database service. Further improvements include the automated discovery of data relevant to a specific field site and visualization techniques aimed at enhancing analysis and collaboration while in the field by draping data over mobile views of the site using augmented reality. A case study is being performed at the University of Texas at El Paso's Indio Mountains Research Station located near Van Horn, Texas, an active multi-disciplinary field study site. The user can interactively move the camera around the study site and view their data digitally. Geologists can check their data against the site in real time and improve collaboration with another person, as both parties have the same interactive view of the data.

  9. Phase reversal technique decreases cortical stimulation time during motor mapping.

    PubMed

    Simon, Mirela V; Sheth, Sameer A; Eckhardt, Christine A; Kilbride, Ronan D; Braver, Diana; Williams, Ziv; Curry, William; Cahill, Dan; Eskandar, Emad N

    2014-06-01

    Neurophysiologic mapping of the primary motor cortex (PMC) is commonly used in supratentorial surgery. Electrical cortical stimulation is guided by anatomic landmarks towards the precentral gyrus, with recording of the triggered primary motor responses (TPMR) in the contralateral hemibody. Thus, factors such as distortion of the pericentral anatomy, small surgical fields, brain shifts and miscalibrated neuronavigational systems may lengthen the process and result in unnecessary stimulations, increasing the probability of triggering seizures. We hypothesized that central sulcus localization via the median somatosensory evoked potentials phase reversal technique (MSSEP PRT) accurately guides the surgeon, resulting in prompt identification of the PMC with minimal electrical stimulation. Multivariate Cox regression was used to study the impact of MSSEP PRT on the time spent performing electrical cortical stimulation to TPMR. The analysis was adjusted for presence of increased cortical excitability, high motor thresholds, lesions close to the PMC and fMRI data, in 100 consecutive standardized motor mapping procedures for brain tumor resection and epilepsy surgery. Phase reversal and a change in morphology of the recorded somatosensory evoked potentials quadrupled (hazard ratio [HR] 4.13, p<0.0001) and doubled (HR 2.14, p=0.02) the rate of obtaining TPMR, respectively. A 1 mA increase in motor threshold decreased the rate by 9% (HR 0.91, p=0.0002). Afterdischarges triggered before TPMR and lesions in close proximity to the PMC decreased the rate of TPMR by 76% (HR 0.23, p<0.0001) and 48% (HR 0.52, p=0.04), respectively. Informative PRT decreases stimulation time; afterdischarges triggered before TPMR, high motor thresholds and lesions close to the PMC increase it. PMID:24679940

  10. Optical coherence tomography investigations of ceramic lumineers

    NASA Astrophysics Data System (ADS)

    Fernandes, Luana O.; Graça, Natalia D. R. L.; Melo, Luciana S. A.; Silva, Claudio H. V.; Gomes, Anderson S. L.

    2016-02-01

    Lumineers are veneer laminates used as an alternative for aesthetic dental solutions of the highest quality, but the only current means of assessing their performance is visual inspection. The objective of this study was to use the Optical Coherence Tomography (OCT) technique, operating in the spectral domain, to analyze in vivo, in a single patient, 14 lumineers 180 days after cementation. It was possible to observe various kinds of changes in the cementing line and the laminate. It was concluded that OCT is an effective and promising method for clinical evaluation of the cementing line in lumineers.

  11. Digital Mapping Techniques '09-Workshop Proceedings, Morgantown, West Virginia, May 10-13, 2009

    USGS Publications Warehouse

    Soller, David R.

    2011-01-01

    As in the previous years' meetings, the objective was to foster informal discussion and exchange of technical information, principally in order to develop more efficient methods for digital mapping, cartography, GIS analysis, and information management. At this meeting, oral and poster presentations and special discussion sessions emphasized (1) methods for creating and publishing map products (here, "publishing" includes Web-based release); (2) field data capture software and techniques, including the use of LiDAR; (3) digital cartographic techniques; (4) migration of digital maps into ArcGIS Geodatabase format; (5) analytical GIS techniques; and (6) continued development of the National Geologic Map Database.

  12. Techniques, problems and uses of mega-geomorphological mapping

    NASA Technical Reports Server (NTRS)

    Embleton, C.

    1985-01-01

    A plea for a program of global geomorphological mapping based on remote sensing data is presented. It is argued that the program is a necessary step in bringing together the rapidly evolving concepts of plate tectonics with the science of geomorphology. Geomorphologists are urged to bring temporal scales into their subject and to abandon their recent isolation from tectonics and geological history. It is suggested that a start be made with a new geomorphological map of Europe, utilizing the latest space technology.

  13. A close-range photogrammetric technique for mapping neotectonic features in trenches

    USGS Publications Warehouse

    Fairer, G.M.; Whitney, J.W.; Coe, J.A.

    1989-01-01

    Close-range photogrammetric techniques and newly available computerized plotting equipment were used to map exploratory trench walls that expose Quaternary faults in the vicinity of Yucca Mountain, Nevada. Small-scale structural, lithologic, and stratigraphic features can be rapidly mapped by the photogrammetric method. This method is more accurate and significantly more rapid than conventional trench-mapping methods, and the analytical plotter is capable of producing cartographic definition of high resolution when detailed trench maps are necessary. -from Authors

  14. Tumor mapping: new rationales, adjusted techniques, expanded applications.

    PubMed

    Cinberg, J Z; Katlein, S B; Vogl, S E

    1983-10-01

    Eleven consecutive patients with cancers of the oropharynx (4), hypopharynx (4), and oral cavity (3) were endoscoped and "tumor mapped" with a modified tattoo solution before beginning non-surgical antineoplastic therapy. The tattooed outlines were clearly visible at 7 weeks (5 patients after induction chemotherapy); at 14 weeks (2 patients after induction chemotherapy and radiotherapy); and between 12 and 16 weeks (4 patients after radiotherapy). The "tumor mapping" aided both the establishment of appropriate resection margins in cancers that had diminished in response to non-surgical therapy and the recognition of a tumor's lack of response to nonsurgical antineoplastic treatment. PMID:6194395

  15. Boson mapping techniques applied to constant gauge fields in QCD

    NASA Technical Reports Server (NTRS)

    Hess, Peter Otto; Lopez, J. C.

    1995-01-01

    Pairs of coordinates and derivatives of the constant gluon modes are mapped to new gluon-pair fields and their derivatives. Applying this mapping to the Hamiltonian of constant gluon fields results, for large coupling constants, in an effective Hamiltonian that separates into one part describing a scalar field and another part describing a field with spin two. The ground state is dominated by pairs of gluons coupled to color and spin zero, with slight admixtures of color-zero, spin-two pairs. SU(2) was used as the color group.

  16. A comparison of two conformal mapping techniques applied to an aerobrake body

    NASA Technical Reports Server (NTRS)

    Hommel, Mark J.

    1987-01-01

    Conformal mapping is a classical technique which has been utilized for solving problems in aerodynamics and hydrodynamics. Conformal mapping has been successfully applied in the construction of grids around airfoils, engine inlets and other aircraft configurations. Conformal mapping techniques were applied to an aerobrake body having an axis of symmetry. Two different approaches were utilized: (1) Karman-Trefftz transformation; and (2) Point Wise Schwarz Christoffel transformation. In both cases, the aerobrake body was mapped onto a near circle, and a grid was generated in the mapped plane. The mapped body and grid were then mapped back into physical space and the properties of the associated grids were examined. Advantages and disadvantages of both approaches are discussed.
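
    As a small, hedged illustration of the first of those two transforms, the sketch below evaluates a Karman-Trefftz map (which reduces to the Joukowski transform for n = 2) and uses it to carry concentric circles from the near-circle plane into nested closed grid curves; the parameters b and n and the circle radii are illustrative choices and are not the aerobrake geometry or grids of the paper.

        import numpy as np

        def karman_trefftz(zeta, b=1.0, n=1.9):
            """Karman-Trefftz transform written in its ratio form,
            (z - n*b)/(z + n*b) = ((zeta - b)/(zeta + b))**n."""
            t = ((zeta - b) / (zeta + b)) ** n
            return n * b * (1.0 + t) / (1.0 - t)

        # Concentric circles |zeta| = R in the mapped (near-circle) plane become
        # nested closed curves in the physical plane, sketching a body-fitted grid.
        theta = np.linspace(0.0, 2.0 * np.pi, 361)
        for R in (1.05, 1.2, 1.5, 2.0):
            ring = R * np.exp(1j * theta)
            z = karman_trefftz(ring)
            print(f"R = {R}: x extent {z.real.min():+.3f} .. {z.real.max():+.3f}")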

  17. Knowledge mapping as a technique to support knowledge translation.

    PubMed Central

    Ebener, S.; Khan, A.; Shademani, R.; Compernolle, L.; Beltran, M.; Lansang, Ma; Lippman, M.

    2006-01-01

    This paper explores the possibility of integrating knowledge mapping into a conceptual framework that could serve as a tool for understanding the many complex processes, resources and people involved in a health system, and for identifying potential gaps within knowledge translation processes in order to address them. After defining knowledge mapping, this paper presents various examples of the application of this process in health, before looking at the steps that need to be taken to identify potential gaps, to determine to what extent these gaps affect the knowledge translation process and to establish their cause. This is followed by proposals for interventions aimed at strengthening the overall process. Finally, potential limitations on the application of this framework at the country level are addressed. PMID:16917651

  18. Simulating and mapping spatial complexity using multi-scale techniques

    USGS Publications Warehouse

    De Cola, L.

    1994-01-01

    A central problem in spatial analysis is the mapping of data for complex spatial fields using relatively simple data structures, such as those of a conventional GIS. This complexity can be measured using such indices as multi-scale variance, which reflects spatial autocorrelation, and multi-fractal dimension, which characterizes the values of fields. These indices are computed for three spatial processes: Gaussian noise, a simple mathematical function, and data for a random walk. Fractal analysis is then used to produce a vegetation map of the central region of California based on a satellite image. This analysis suggests that real world data lie on a continuum between the simple and the random, and that a major GIS challenge is the scientific representation and understanding of rapidly changing multi-scale fields. -Author
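
    For readers who want to experiment with the idea, the sketch below computes a crude multi-scale variance index by block-averaging a field at several scales and recording the variance of each averaged version; the definition and the toy fields (Gaussian noise versus a smooth sinusoid) are illustrative stand-ins, not the exact indices or datasets of the study.

        import numpy as np

        def multiscale_variance(field, scales=(1, 2, 4, 8, 16)):
            """Variance of block-averaged versions of a square 2-D field."""
            out = {}
            n = field.shape[0]
            for s in scales:
                m = (n // s) * s
                blocks = field[:m, :m].reshape(m // s, s, m // s, s).mean(axis=(1, 3))
                out[s] = float(blocks.var())
            return out

        rng = np.random.default_rng(0)
        noise = rng.normal(size=(128, 128))                     # spatially uncorrelated
        x, y = np.meshgrid(np.linspace(0, 1, 128), np.linspace(0, 1, 128))
        smooth = np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)  # strongly autocorrelated
        print("noise :", multiscale_variance(noise))    # variance collapses with scale
        print("smooth:", multiscale_variance(smooth))   # variance persists across scales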

  19. Research Techniques Made Simple: Immunofluorescence Antigen Mapping in Epidermolysis Bullosa.

    PubMed

    Has, Cristina; He, Yinghong

    2016-07-01

    Inherited epidermolysis bullosa is a group of genetic blistering diseases with a broad spectrum of clinical severity and molecular defects. Epidermolysis bullosa results from mutations in genes encoding proteins involved in cell-cell and cell-matrix adhesion in the epidermis. Immunofluorescence antigen mapping makes use of monoclonal antibodies against proteins of the dermal-epidermal junction zone to determine the layer of skin where cleavage occurs and the relative protein abundance. It allows the diagnosis of the type and subtype of inherited epidermolysis bullosa and sheds light on molecular mechanisms underlying the disease. Immunofluorescence mapping steps include obtaining a skin biopsy sample, processing the biopsy material, antigen-antibody interaction on tissue, washing, incubation with fluorescently conjugated secondary antibodies, mounting, observation under a fluorescence microscope, and interpretation. A minimal antibody panel allows discrimination of the main epidermolysis bullosa subtypes. Extended panels can be used depending on the diagnostic or scientific question to be addressed. Immunofluorescence mapping contributed to significant progress in understanding epidermolysis bullosa, including identification of new underlying genetic mutations, mutation mechanisms, and the presence of revertant mosaicism. It is also an important tool in the assessment of the efficacy of experimental therapeutic approaches. PMID:27342035

  20. Mapping Diffuse Seismicity Using Empirical Matched Field Processing Techniques

    SciTech Connect

    Wang, J; Templeton, D C; Harris, D B

    2011-01-21

    The objective of this project is to detect and locate more microearthquakes using the empirical matched field processing (MFP) method than can be detected using only conventional earthquake detection techniques. We propose that empirical MFP can complement existing catalogs and techniques. We test our method on continuous seismic data collected at the Salton Sea Geothermal Field during November 2009 and January 2010. In the Southern California Earthquake Data Center (SCEDC) earthquake catalog, 619 events were identified in our study area during this time frame, whereas our MFP technique identified 1094 events. Therefore, we believe that the empirical MFP method combined with conventional methods significantly improves the network detection ability in an efficient manner.

  1. Development of predictive mapping techniques for soil survey and salinity mapping

    NASA Astrophysics Data System (ADS)

    Elnaggar, Abdelhamid A.

    Conventional soil maps represent a valuable source of information about soil characteristics; however, they are subjective, very expensive, and time-consuming to prepare. They also do not include explicit information about the conceptual mental model used in developing them, about their accuracy, or about the error associated with them. Decision tree analysis (DTA) was successfully used in retrieving the expert knowledge embedded in old soil survey data. This knowledge was efficiently used in developing predictive soil maps for the study areas in Benton and Malheur Counties, Oregon, and in assessing their consistency. A retrieved soil-landscape model from a reference area in Harney County was extrapolated to develop a preliminary soil map for the neighboring unmapped part of Malheur County. The developed map had a low prediction accuracy, and only a few soil map units (SMUs) were predicted with significant accuracy, mostly shallow SMUs that either have a lithic contact with the bedrock or developed on a duripan. On the other hand, the soil map developed from field data was predicted with very high accuracy (overall about 97%). Salt-affected areas of the Malheur County study area are indicated by their high spectral reflectance and are easily discriminated in the remote sensing data. However, remote sensing data fail to distinguish between the different classes of soil salinity. Using the DTA method, five classes of soil salinity were successfully predicted with an overall accuracy of about 99%. Moreover, the calculated area of salt-affected soil was overestimated when mapped using remote sensing data compared with that predicted using DTA. Hence, DTA could be a very helpful approach for developing soil survey and soil salinity maps in a more objective, effective, less expensive and quicker way based on field data.
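
    A minimal sketch of the decision-tree classification step, assuming scikit-learn is available; the covariates (slope, a reflectance index, depth to water table), the synthetic rule generating the salinity classes, and the resulting accuracy are illustrative stand-ins, not the study's soil-landscape model or field data.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.metrics import accuracy_score

        rng = np.random.default_rng(42)
        n = 1000
        X = np.column_stack([
            rng.uniform(0, 30, n),   # slope (degrees)
            rng.uniform(0, 1, n),    # reflectance index
            rng.uniform(0, 5, n),    # depth to water table (m)
        ])
        # Synthetic rule standing in for a retrieved soil-landscape model,
        # discretized into five salinity classes.
        y = np.digitize(0.5 * X[:, 0] - 10 * X[:, 1] - 2 * X[:, 2], bins=[-8, -4, 0, 5])

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
        print("overall accuracy:", round(accuracy_score(y_te, tree.predict(X_te)), 3))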

  2. Middle Atmosphere Program. Handbook for MAP. Volume 15: Balloon techniques

    NASA Technical Reports Server (NTRS)

    Murcray, D. G. (Editor)

    1985-01-01

    Some techniques employed by investigators using balloons to obtain data on the properties of the middle atmosphere are discussed. Much effort has gone into developing instruments that can be used on small balloons to measure temperature and variable species; these efforts are discussed. Remote sensing techniques used to obtain data on atmospheric composition are described. Measurements of stratospheric ions and stratospheric aerosols are also discussed.

  3. Quantification of Geologic Lineaments by Manual and Machine Processing Techniques. [Landsat satellites - mapping/geological faults

    NASA Technical Reports Server (NTRS)

    Podwysocki, M. H.; Moik, J. G.; Shoup, W. C.

    1975-01-01

    The effect of operator variability and subjectivity in lineament mapping was studied, together with methods to minimize or eliminate these problems through several machine preprocessing approaches. Mapped lineaments of a test landmass were used and the results were compared statistically. The total number of fractures mapped by the operators and their average lengths varied considerably, although comparison of lineament directions revealed some consensus. A summary map (785 linears) produced by overlaying the maps generated by the four operators shows that only 0.4 percent were recognized by all four operators, 4.7 percent by three, 17.8 percent by two, and 77 percent by one operator. Similar results were obtained in comparing these results with another independent group. This large amount of variability suggests a need for the standardization of mapping techniques, which might be accomplished by a machine-aided procedure. Two methods of machine-aided mapping were tested, both simulating directional filters.

  4. THE TOTAL LUMINOUS EFFICIENCY OF LUMINOUS BACTERIA.

    PubMed

    Harvey, E N

    1925-09-18

    Methods are described for measuring the light emitted by an emulsion of luminous bacteria of given thickness, and calculating the light emitted by a single bacterium, measuring 1.1 x 2.2 micra, provided there is no absorption of light in the emulsion. At the same time, the oxygen consumed by a single bacterium was measured by recording the time for the bacteria to use up .9 of the oxygen dissolved in sea water from air (20 per cent oxygen). The luminescence intensity does not diminish until the oxygen concentration falls below 2 per cent, when the luminescence diminishes rapidly. Above 2 per cent oxygen (when the oxygen dissolving in sea water from pure oxygen at 760 mm. Hg pressure = 100 per cent) the bacteria use equal amounts of oxygen in equal times, while below 2 per cent oxygen it seems very likely that rate of oxygen absorption is proportional to oxygen concentration. By measuring the time for a tube of luminous bacteria of known concentration saturated with air (20 per cent oxygen) to begin to darken (2 per cent oxygen) we can calculate the oxygen absorbed by one bacterium per second. The bacteria per cc. are counted on a blood counting slide or by a centrifugal method, after measuring the volume of a single bacterium (1.695 x 10(-12) cc.). Both methods gave results in good agreement with each other. The maximum value for the light from a single bacterium was 24 x 10(-14) lumens or 1.9 x 10(-14) candles. The maximum value for lumen-seconds per mg. of oxygen absorbed was 14. The average value for lumen-seconds per mg. O(2) was 9.25. The maximum values were selected in calculating the efficiency of light production, since some of the bacteria counted may not be producing light, although they may still be using oxygen. The "diet" of the bacteria was 60 per cent glycerol and 40 per cent peptone. To oxidize this mixture each mg. of oxygen would yield 3.38 gm. calories or 14.1 watts per second. 1 lumen per watt is therefore produced by a normal bacterium which
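
    Taking the figures quoted above at face value, the "1 lumen per watt" efficiency can be checked with a line of arithmetic (the 4.184 J per gram-calorie conversion is assumed, not stated in the abstract):

        # Back-of-envelope check of the luminous efficiency quoted in the abstract.
        cal_per_mg_O2 = 3.38            # gram-calories released per mg of oxygen
        joules_per_cal = 4.184
        energy_J = cal_per_mg_O2 * joules_per_cal    # ~14.1 J per mg O2, as stated
        lumen_seconds_per_mg = 14.0                  # maximum value from the abstract
        print("joules per mg O2 :", round(energy_J, 1))
        print("lumens per watt  :", round(lumen_seconds_per_mg / energy_J, 2))  # ~1 lm/W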

  5. MetricMap: an embedding technique for processing distance-based queries in metric spaces.

    PubMed

    Wang, Jason T L; Wang, Xiong; Shasha, Dennis; Zhang, Kaizhong

    2005-10-01

    In this paper, we present an embedding technique, called MetricMap, which is capable of estimating distances in a pseudometric space. Given a database of objects and a distance function for the objects, which is a pseudometric, we map the objects to vectors in a pseudo-Euclidean space with a reasonably low dimension while preserving the distance between two objects approximately. Such an embedding technique can be used as an approximate oracle to process a broad class of distance-based queries. It is also adaptable to data mining applications such as data clustering and classification. We present the theory underlying MetricMap and conduct experiments to compare MetricMap with other methods including MVP-tree and M-tree in processing the distance-based queries. Experimental results on both protein and RNA data show the good performance and the superiority of MetricMap over the other methods. PMID:16240772
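
    The abstract does not spell out MetricMap's construction, so as a generic stand-in the sketch below embeds objects given only a pairwise distance matrix, using classical multidimensional scaling, and checks how well the embedded vectors preserve the original distances; it illustrates the kind of distance-preserving embedding being discussed, not the MetricMap algorithm itself.

        import numpy as np

        def classical_mds(D, k=2):
            """Embed n objects with pairwise distance matrix D into R^k
            (classical MDS; a generic stand-in, not MetricMap itself)."""
            n = D.shape[0]
            J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
            B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
            w, V = np.linalg.eigh(B)
            idx = np.argsort(w)[::-1][:k]                # k largest eigenvalues
            scale = np.sqrt(np.clip(w[idx], 0.0, None))
            return V[:, idx] * scale

        rng = np.random.default_rng(1)
        objects = rng.normal(size=(30, 5))               # 30 objects in a 5-D space
        D = np.linalg.norm(objects[:, None, :] - objects[None, :, :], axis=-1)
        emb = classical_mds(D, k=3)
        D_emb = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
        print("mean absolute distance error:", round(float(np.mean(np.abs(D - D_emb))), 4))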

  6. Digitizing zone maps, using modified LARSYS program. [computer graphics and computer techniques for mapping

    NASA Technical Reports Server (NTRS)

    Giddings, L.; Boston, S.

    1976-01-01

    A method for digitizing zone maps is presented, starting with colored images and producing a final one-channel digitized tape. This method automates the work previously done interactively on the Image-100 and Data Analysis System computers of the Johnson Space Center (JSC) Earth Observations Division (EOD). A color-coded map was digitized through color filters on a scanner to form a digital tape in LARSYS-2 or JSC Universal format. The taped image was classified by the EOD LARSYS program on the basis of training fields included in the image. Numerical values were assigned to all pixels in a given class, and the resulting coded zone map was written on a LARSYS or Universal tape. A unique spatial filter option permitted zones to be made homogeneous and edges of zones to be abrupt transitions from one zone to the next. A zoom option allowed the output image to have arbitrary dimensions in terms of number of lines and number of samples on a line. Printouts of the computer program are given and the images that were digitized are shown.

  7. An Electronic Engineering Curriculum Design Based on Concept-Mapping Techniques

    ERIC Educational Resources Information Center

    Toral, S. L.; Martinez-Torres, M. R.; Barrero, F.; Gallardo, S.; Duran, M. J.

    2007-01-01

    Curriculum design is a concern in European Universities as they face the forthcoming European Higher Education Area (EHEA). This process can be eased by the use of scientific tools such as Concept-Mapping Techniques (CMT) that extract and organize the most relevant information from experts' experience using statistics techniques, and helps a…

  8. My Solar System: A Developmentally Adapted Eco-Mapping Technique for Children

    ERIC Educational Resources Information Center

    Curry, Jennifer R.; Fazio-Griffith, Laura J.; Rohr, Shannon N.

    2008-01-01

    Counseling children requires specific skills and techniques, such as play therapy and expressive arts, to address developmental manifestations and to facilitate the understanding of presenting problems. This article outlines an adapted eco-mapping activity that can be used as a creative counseling technique with children in order to promote…

  9. Feasibility of a borehole VHF radar technique for fracture mapping

    SciTech Connect

    Chang, H.T.

    1984-01-01

    Experiments were conducted to establish the feasibility of a downhole high-frequency electromagnetic technique for location of fractures in the vicinity of boreholes. An existing flame-cut slot in granite was filled with salt water to simulate a brine-filled fracture. A transmitter consisting of a phased dual-dipole array arranged to provide a directional signal toward the fracture was installed in a borehole opposite the fracture. A receiver operated at 30 to 300 MHz was also located in the same borehole. The radar returns from the simulated fracture were detectable in boreholes located at distances of up to 12 meters from the fracture. These results indicate for the first time the feasibility of a downhole VHF radar for use in a single borehole for detection of fractures located away from the borehole.

  10. Digital Mapping Techniques '08—Workshop Proceedings, Moscow, Idaho, May 18–21, 2008

    USGS Publications Warehouse

    Soller, David R.

    2009-01-01

    The Digital Mapping Techniques '08 (DMT'08) workshop was attended by more than 100 technical experts from 40 agencies, universities, and private companies, including representatives from 24 State geological surveys. This year's meeting, the twelfth in the annual series, was hosted by the Idaho Geological Survey, from May 18-21, 2008, on the University of Idaho campus in Moscow, Idaho. Each DMT workshop has been coordinated by the U.S. Geological Survey's National Geologic Map Database Project and the Association of American State Geologists (AASG). As in previous years' meetings, the objective was to foster informal discussion and exchange of technical information, principally in order to develop more efficient methods for digital mapping, cartography, GIS analysis, and information management. At this meeting, oral and poster presentations and special discussion sessions emphasized (1) methods for creating and publishing map products (here, "publishing" includes Web-based release); (2) field data capture software and techniques, including the use of LiDAR; (3) digital cartographic techniques; (4) migration of digital maps into ArcGIS Geodatabase format; (5) analytical GIS techniques; and (6) continued development of the National Geologic Map Database.

  11. Improving predictive mapping of deep-water habitats: Considering multiple model outputs and ensemble techniques

    NASA Astrophysics Data System (ADS)

    Robert, Katleen; Jones, Daniel O. B.; Roberts, J. Murray; Huvenne, Veerle A. I.

    2016-07-01

    In the deep sea, biological data are often sparse; hence models capturing relationships between observed fauna and environmental variables (acquired via acoustic mapping techniques) are often used to produce full-coverage species assemblage maps. Many statistical modelling techniques are being developed, but there remains a need to determine the most appropriate mapping techniques. Predictive habitat modelling approaches (redundancy analysis, maximum entropy and random forest) were applied to a heterogeneous section of seabed on Rockall Bank, NE Atlantic, for which landscape indices describing the spatial arrangement of habitat patches were calculated. The predictive maps were based on remotely operated vehicle (ROV) imagery transects and high-resolution autonomous underwater vehicle (AUV) sidescan backscatter maps. Area under the curve (AUC) and accuracy indicated similar performances for the three models tested, but performance varied by species assemblage, with the transitional species assemblage showing the weakest predictive performance. Spatial predictions of habitat suitability differed between statistical approaches, but niche similarity metrics showed redundancy analysis and random forest predictions to be most similar. As no single statistical technique could be found to outperform the others when all assemblages were considered, ensemble mapping techniques, in which the outputs of many models are combined, were applied. They showed higher accuracy than any single model. Different statistical approaches for predictive habitat modelling possess varied strengths and weaknesses, and by examining the outputs of a range of modelling techniques and their differences, more robust predictions, with better described variation and areas of uncertainty, can be achieved. As improvements to prediction outputs can be achieved without additional costly data collection, ensemble mapping approaches have clear value for spatial management.
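
    A minimal sketch of the ensemble step, assuming scikit-learn: two different classifiers are fitted to synthetic presence/absence data with made-up covariates (standing in for quantities such as depth, backscatter and rugosity), and their predicted probabilities are simply averaged; none of this is the study's actual data, model set or evaluation design.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(7)
        n = 2000
        X = np.column_stack([rng.normal(size=n),    # e.g. depth
                             rng.normal(size=n),    # e.g. backscatter
                             rng.uniform(size=n)])  # e.g. rugosity
        p = 1.0 / (1.0 + np.exp(-(1.5 * X[:, 0] - 2.0 * X[:, 2])))
        y = rng.binomial(1, p)                      # synthetic presence/absence

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        models = [RandomForestClassifier(n_estimators=200, random_state=0),
                  LogisticRegression(max_iter=1000)]
        probs = []
        for m in models:
            m.fit(X_tr, y_tr)
            pr = m.predict_proba(X_te)[:, 1]
            probs.append(pr)
            print(type(m).__name__, "AUC:", round(roc_auc_score(y_te, pr), 3))
        ensemble = np.mean(probs, axis=0)           # unweighted average of model outputs
        print("Ensemble AUC:", round(roc_auc_score(y_te, ensemble), 3))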

  12. Criss-cross mapping BD+30 3639: a new kinematic analysis technique

    NASA Astrophysics Data System (ADS)

    Steffen, W.; Tamayo, F.; Koning, N.

    2011-10-01

    We introduce a new method to analyze proper motion data, called "criss-cross" mapping. It emphasizes regions where proper motion vector extensions cross or converge. From a superposition of lines drawn through the vectors, a map is generated that helps to interpret the kinematic data. The new mapping technique is applied to the young planetary nebula BD+30 3639. The data are approximately 200 internal proper motion measurements from Li, Harrington, & Borkowski (2002). From the criss-cross mapping of BD+30 3639, we conclude that the kinematic center is offset approximately 0.5 arcsec to the south-east of the central star. The mapping also shows evidence for a non-homologous expansion of the nebula that is consistent with a disturbance aligned with the bipolar molecular bullets.
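
    The criss-cross idea is simple enough to prototype: draw a long line through each proper-motion vector and count, per pixel, how many lines pass through it; the brightest region then approximates the kinematic centre. The toy vectors below are synthetic and roughly radial from an off-centre origin; they are not the Li, Harrington, & Borkowski measurements.

        import numpy as np

        def crisscross_map(positions, vectors, grid=(200, 200), extent=2.0, half_len=2.0):
            """Accumulate, per pixel, the number of extended proper-motion
            vectors (lines) passing through it."""
            img = np.zeros(grid)
            t = np.linspace(-half_len, half_len, 800)      # parameter along each line
            for (x0, y0), (vx, vy) in zip(positions, vectors):
                norm = np.hypot(vx, vy)
                if norm == 0:
                    continue
                xs = x0 + t * vx / norm
                ys = y0 + t * vy / norm
                i = ((ys + extent) / (2 * extent) * (grid[0] - 1)).astype(int)
                j = ((xs + extent) / (2 * extent) * (grid[1] - 1)).astype(int)
                ok = (i >= 0) & (i < grid[0]) & (j >= 0) & (j < grid[1])
                hit = np.zeros(grid, dtype=bool)
                hit[i[ok], j[ok]] = True                   # count each line once per pixel
                img += hit
            return img

        rng = np.random.default_rng(3)
        origin = np.array([0.3, -0.2])                     # "true" kinematic centre
        pos = rng.uniform(-1.5, 1.5, size=(200, 2))
        vec = pos - origin + rng.normal(scale=0.05, size=(200, 2))
        m = crisscross_map(pos, vec)
        print("brightest pixel (row, col):", np.unravel_index(np.argmax(m), m.shape))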

  13. Mapping electrodynamic features of the high-latitude ionosphere from localized observations - Technique

    NASA Technical Reports Server (NTRS)

    Richmond, A. D.; Kamide, Y.

    1988-01-01

    This paper describes a novel procedure for mapping high-latitude electric fields and currents and their associated magnetic variations, using sets of localized observational data derived from different types of measurements. The technique provides a formalism for incorporating simultaneously such different classes of data as electric fields from radars and satellites, electric currents from radars, and magnetic perturbations at the ground and at satellite heights; the technique also uses available statistical information on the averages and variances of electrodynamic fields. The technique provides a more rigorous way of quantitatively estimating high-latitude electric field and current patterns than other methods and has a capability to quantify the errors in the mapped fields, based on the distribution of available data, their errors, and the statistical variances of the fields. The technique is illustrated by an application to a substorm which was analyzed by Kamide et al. (1982) by an earlier technique.

  14. Digital Mapping Techniques '10-Workshop Proceedings, Sacramento, California, May 16-19, 2010

    USGS Publications Warehouse

    Soller, David R.

    2012-01-01

    The Digital Mapping Techniques '10 (DMT'10) workshop was attended by 110 technical experts from 40 agencies, universities, and private companies, including representatives from 19 State geological surveys (see Appendix A). This workshop, hosted by the California Geological Survey, May 16-19, 2010, in Sacramento, California, was similar in nature to the previous 13 meetings (see Appendix B). The meeting was coordinated by the U.S. Geological Survey's (USGS) National Geologic Map Database project. As in the previous meetings, the objective was to foster informal discussion and exchange of technical information. It is with great pleasure that I note that the objective was again successfully met, as attendees continued to share and exchange knowledge and information, and renew friendships and collegial work begun at past DMT workshops. At this meeting, oral and poster presentations and special discussion sessions emphasized (1) methods for creating and publishing map products ("publishing" includes Web-based release); (2) field data capture software and techniques, including the use of LiDAR; (3) digital cartographic techniques; (4) migration of digital maps into ArcGIS Geodatabase format; (5) analytical GIS techniques; and (6) continued development of the National Geologic Map Database.

  15. The Most Luminous Supernovae

    NASA Astrophysics Data System (ADS)

    Sukhbold, Tuguldur; Woosley, S. E.

    2016-04-01

    Recent observations have revealed a stunning diversity of extremely luminous supernovae, seemingly increasing in radiant energy without bound. We consider simple approximate limits for what existing models can provide for the peak luminosity and total radiated energy of non-relativistic, isotropic stellar explosions. The brightest possible supernova is a Type I explosion powered by a sub-millisecond magnetar with field strength B ~ a few × 10^13 G. In extreme cases, such models might reach a peak luminosity of 2 × 10^46 erg s^-1 and radiate a total energy of up to 4 × 10^52 erg. Other less luminous models are also explored, including prompt hyper-energetic explosions in red supergiants, pulsational pair-instability supernovae, pair-instability supernovae, and colliding shells. Approximate analytic expressions and limits are given for each case. Excluding magnetars, the peak luminosity is near 3 × 10^44 erg s^-1 for the brightest models and the corresponding limit on total radiated energy is 3 × 10^51 erg. Barring new physics, supernovae with a light output over 3 × 10^51 erg must be rotationally powered, either during the explosion itself or after, the most obvious candidate being a rapidly rotating magnetar. A magnetar-based model for the recent transient event ASASSN-15lh is presented that strains, but does not exceed, the limits of what the model can provide.
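
    A rotationally powered ceiling of this order can be checked on the back of an envelope: the spin energy of a neutron star is E = (1/2) I Ω² with Ω = 2π/P. The sketch below assumes a typical moment of inertia of about 10^45 g cm², a value not quoted in the abstract.

        import math

        I = 1.0e45                        # assumed neutron-star moment of inertia, g cm^2
        for P_ms in (1.0, 0.7):           # spin periods near or below one millisecond
            omega = 2.0 * math.pi / (P_ms * 1.0e-3)   # rad/s
            E_rot = 0.5 * I * omega ** 2              # erg (g cm^2 s^-2)
            print(f"P = {P_ms} ms: E_rot ~ {E_rot:.1e} erg")
        # ~2e52 erg at 1 ms and ~4e52 erg at 0.7 ms, consistent with the
        # ~4 x 10^52 erg radiated-energy ceiling quoted above.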

  16. Evaluation of Landslide Mapping Techniques and LiDAR-based Conditioning Factors

    NASA Astrophysics Data System (ADS)

    Mahalingam, R.; Olsen, M. J.

    2014-12-01

    Landslides are a major geohazard, which result in significant human, infrastructure, and economic losses. Landslide susceptibility mapping can help communities plan and prepare for these damaging events. Mapping landslide-susceptible locations using GIS and remote sensing techniques has gained popularity over the past three decades. These efforts use a wide variety of procedures and consider a wide range of factors. Unfortunately, each study is often completed differently and independently of others. Further, the quality of the datasets used varies in terms of source, data collection, and generation, which can propagate errors or inconsistencies into the resulting output maps. Light detection and ranging (LiDAR) has proved to have higher accuracy in representing the continuous topographic surface, which can help minimize this uncertainty. The primary objectives of this paper are to investigate the applicability and performance of terrain factors in landslide hazard mapping, to determine whether LiDAR-derived datasets (slope, slope roughness, terrain roughness, stream power index and compound topographic index) can be used for predictive mapping without data representing other common landslide conditioning factors, and to evaluate the differences in landslide susceptibility mapping using widely used statistical approaches. The aforementioned factors were used to produce landslide susceptibility maps for a 140 km2 study area in northwest Oregon using six representative techniques: frequency ratio, weights of evidence, logistic regression, discriminant analysis, artificial neural network, and support vector machine. Most notably, the research showed an advantage in selecting fewer critical conditioning factors. The most reliable factors could all be derived from a single LiDAR DEM, reducing the need for laborious and costly data gathering. Most of the six techniques showed similar statistical results; however, ANN showed less accuracy for predictive mapping. Keywords : Li

  17. Lidar Luminance Quantizer

    NASA Technical Reports Server (NTRS)

    Quilligan, Gerard; DeMonthier, Jeffrey; Suarez, George

    2011-01-01

    This innovation addresses challenges in lidar imaging, particularly with the detection scheme and the shapes of the detected signals. Ideally, the echoed pulse widths should be extremely narrow to resolve fine detail at high event rates. However, narrow pulses require wideband detection circuitry with increased power dissipation to minimize thermal noise. Filtering is also required to shape each received signal into a form suitable for processing by a constant fraction discriminator (CFD) followed by a time-to-digital converter (TDC). As the intervals between the echoes decrease, the finite bandwidth of the shaping circuits blends the pulses into an analog signal (luminance) with multiple modes, reducing the ability of the CFD to discriminate individual events.

  18. The Impact on Incorporating Collaborative Concept Mapping with Coteaching Techniques in Elementary Science Classes

    ERIC Educational Resources Information Center

    Jang, Syh-Jong

    2010-01-01

    The purpose of this research was to evaluate a collaborative concept-mapping technique that was integrated into coteaching in fourth-grade science classes in order to examine students' performance and attitudes toward the experimental teaching method. There are two fourth-grade science teachers and four classes with a total of 114 students…

  19. USING HYPERSPECTAL IMAGERY AND SPECTRAL UNMIXING TECHNIQUES FOR MAPPING GRAIN SORGHUM YIELD VARIABILITY

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Vegetation indices derived from remotely sensed imagery are commonly used to estimate crop yields. Spectral unmixing techniques provide an alternative approach to quantifying crop canopy abundance within each pixel and have the potential for mapping crop yield variability. The objective of this stud...
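
    A minimal sketch of linear spectral unmixing with non-negative least squares (SciPy), using made-up four-band endmember reflectances for a crop canopy and bare soil; the numbers and the sum-to-one rescaling are illustrative assumptions, not the method or data of the study.

        import numpy as np
        from scipy.optimize import nnls

        # Endmember matrix (bands x endmembers): columns are canopy and soil spectra.
        E = np.array([[0.05, 0.25],
                      [0.08, 0.30],
                      [0.45, 0.32],
                      [0.50, 0.35]])

        def unmix(pixel, E):
            """Non-negative least-squares abundances, rescaled to sum to one."""
            a, _ = nnls(E, pixel)
            s = a.sum()
            return a / s if s > 0 else a

        true_abund = np.array([0.7, 0.3])   # 70 % canopy, 30 % soil
        pixel = E @ true_abund + 0.005 * np.random.default_rng(0).normal(size=4)
        print("estimated abundances (canopy, soil):", np.round(unmix(pixel, E), 3))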

  20. A technique for determining the deuterium/hydrogen contrast map in neutron macromolecular crystallography.

    PubMed

    Chatake, Toshiyuki; Fujiwara, Satoru

    2016-01-01

    A difference in the neutron scattering length between hydrogen and deuterium leads to a high density contrast in neutron Fourier maps. In this study, a technique for determining the deuterium/hydrogen (D/H) contrast map in neutron macromolecular crystallography is developed and evaluated using ribonuclease A. The contrast map between the D2O-solvent and H2O-solvent crystals is calculated in real space, rather than in reciprocal space as performed in previous neutron D/H contrast crystallography. The present technique can thus utilize all of the amplitudes of the neutron structure factors for both D2O-solvent and H2O-solvent crystals. The neutron D/H contrast maps clearly demonstrate the powerful detectability of H/D exchange in proteins. In fact, alternative protonation states and alternative conformations of hydroxyl groups are observed at medium resolution (1.8 Å). Moreover, water molecules can be categorized into three types according to their tendency towards rotational disorder. These results directly indicate improvement in the neutron crystal structure analysis. This technique is suitable for incorporation into the standard structure-determination process used in neutron protein crystallography; consequently, more precise and efficient determination of the D-atom positions is possible using a combination of this D/H contrast technique and standard neutron structure-determination protocols. PMID:26894536

  1. Techniques for Down-Sampling a Measured Surface Height Map for Model Validation

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2012-01-01

    This software allows one to down-sample a measured surface map for model validation, not only without introducing any re-sampling errors, but also while eliminating the existing measurement noise and measurement errors. The software tool implementing the two new techniques can be used in all optical model validation processes involving large space optical surfaces.

  2. Concept Mapping as a Technique for Program Theory Development: An Illustration Using Family Support Programs. Notes

    ERIC Educational Resources Information Center

    Rosas, Scott R.

    2005-01-01

    The emergence of theory-driven evaluation has drawn attention to the need for practical ways for evaluators to construct program theory to achieve the benefits of the approach. This article illustrates the use of concept mapping as a technique to explicate underlying program theory in the context of family support programs. In this example, 29…

  3. Comparing Computer-Supported Dynamic Modeling and "Paper & Pencil" Concept Mapping Technique in Students' Collaborative Activity

    ERIC Educational Resources Information Center

    Komis, Vassilis; Ergazaki, Marida; Zogza, Vassiliki

    2007-01-01

    This study aims at highlighting the collaborative activity of two high school students (age 14) in the cases of modeling the complex biological process of plant growth with two different tools: the "paper & pencil" concept mapping technique and the computer-supported educational environment "ModelsCreator". Students' shared activity in both cases…

  4. Accuracy of vertical radial plume mapping technique in measuring lagoon gas emission

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Recently, the U.S. Environmental Protection Agency (USEPA) posted a ground-based optical remote sensing method on its website called OTM 10 for measuring fugitive gas emission flux from area sources such as closed landfills. The OTM 10 utilizes the vertical radial plume mapping (VRPM) technique to c...

  5. Information Landscaping: Information Mapping, Charting, Querying and Reporting Techniques for Total Quality Knowledge Management.

    ERIC Educational Resources Information Center

    Tsai, Bor-sheng

    2003-01-01

    Total quality management and knowledge management are merged and used as a conceptual model to direct and develop information landscaping techniques through the coordination of information mapping, charting, querying, and reporting. Goals included: merge citation analysis and data mining, and apply data visualization and information architecture…

  6. Acoustic emission source location in complex structures using full automatic delta T mapping technique

    NASA Astrophysics Data System (ADS)

    Al-Jumaili, Safaa Kh.; Pearson, Matthew R.; Holford, Karen M.; Eaton, Mark J.; Pullin, Rhys

    2016-05-01

    An easy to use, fast to apply, cost-effective, and very accurate non-destructive testing (NDT) technique for damage localisation in complex structures is key for the uptake of structural health monitoring (SHM) systems. Acoustic emission (AE) is a viable technique that can be used for SHM, and one of its most attractive features is the ability to locate AE sources. The time of arrival (TOA) technique is traditionally used to locate AE sources, and relies on the assumption of a constant wave speed within the material and an uninterrupted propagation path between the source and the sensor. In complex structural geometries and complex materials such as composites, this assumption is no longer valid. Delta T mapping was developed in Cardiff in order to overcome these limitations; this technique uses artificial sources on an area of interest to create training maps, which are then used to locate subsequent AE sources. However, operator expertise is required to select the best data from the training maps and to choose the correct parameter to locate the sources, which can be a time-consuming process. This paper presents a new and improved, fully automatic delta T mapping technique in which a clustering algorithm is used to automatically identify and select the highly correlated events at each grid point whilst the "Minimum Difference" approach is used to determine the source location. This removes the requirement for operator expertise, saving time and preventing human errors. A thorough assessment was conducted to evaluate the performance and the robustness of the new technique. In the initial test, the results showed an excellent reduction in running time as well as improved accuracy in locating AE sources, as a result of the automatic selection of the training data. Furthermore, because the process is performed automatically, this is now a very simple and reliable technique, as the potential sources of error related to manual manipulation are removed.
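
    To illustrate the underlying delta-T idea (setting aside the clustering and automation that are the contribution of the paper), the sketch below builds a synthetic training map of arrival-time differences on a grid and locates a source with a "minimum difference" search; the sensor layout and the constant wave speed used here to synthesise the training data are assumptions, whereas in the real technique the training map is measured with artificial sources precisely so that no wave-speed assumption is needed.

        import numpy as np

        # Illustrative sensor layout on a 0.5 m x 0.5 m plate (not the paper's setup).
        sensors = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5], [0.5, 0.5]])
        pairs = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
        speed = 5000.0   # m/s, used only to synthesise the training map here

        def delta_t(point, sensors, pairs, speed):
            t = np.linalg.norm(sensors - point, axis=1) / speed
            return np.array([t[i] - t[j] for i, j in pairs])

        # "Training map": one delta-T vector stored at every grid node.
        xs = np.linspace(0.0, 0.5, 51)
        ys = np.linspace(0.0, 0.5, 51)
        grid_dt = np.array([[delta_t(np.array([x, y]), sensors, pairs, speed)
                             for x in xs] for y in ys])

        def locate(measured_dt, grid_dt, xs, ys):
            """Minimum-difference search over the training map."""
            err = np.linalg.norm(grid_dt - measured_dt, axis=2)
            iy, ix = np.unravel_index(np.argmin(err), err.shape)
            return xs[ix], ys[iy]

        source = np.array([0.12, 0.34])
        measured = delta_t(source, sensors, pairs, speed)
        print("located source (x, y):", locate(measured, grid_dt, xs, ys))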

  7. A novel technique for visualizing high-resolution 3D terrain maps

    NASA Astrophysics Data System (ADS)

    Dammann, John

    2007-02-01

    A new technique is presented for visualizing high-resolution terrain elevation data. It produces realistic images at small scales on the order of the data resolution and works particularly well when natural objects are present. Better visualization at small scales opens up new applications, like site surveillance for security and Google Earth-type local search and exploration tasks that are now done with 2-D maps. The large 3-D maps are a natural for high-resolution stereo display. The traditional technique drapes a continuous surface over the regularly spaced elevation values. This technique works well when displaying large areas or in cities with large buildings, but falls apart at small scales or for natural objects like trees. The new technique visualizes the terrain as a set of disjoint square patches. It is combined with an algorithm that identifies smooth areas within the scene. Where the terrain is smooth, such as in grassy areas, roads, parking lots and rooftops, it warps the patches to create a smooth surface. For trees or shrubs or other areas where objects are under-sampled, however, the patches are left disjoint. This has the disadvantage of leaving gaps in the data, but the human mind is very adept at filling in this missing information. It has the strong advantage of making natural terrain look realistic, trees and bushes look stylized but still look natural and are easy to interpret. Also, it does not add artifacts to the map, like filling in blank vertical walls where there are alcoves and other structure and extending bridges and overpasses down to the ground. The new technique is illustrated using very large 1-m resolution 3-D maps from the Rapid Terrain Visualization (RTV) program, and comparisons are made with traditional visualizations using these maps.

  8. A wafer mapping technique for residual stress in surface micromachined films

    NASA Astrophysics Data System (ADS)

    Schiavone, G.; Murray, J.; Smith, S.; Desmulliez, M. P. Y.; Mount, A. R.; Walton, A. J.

    2016-09-01

    The design of MEMS devices employing movable structures is crucially dependent on the mechanical behaviour of the deposited materials. It is therefore important to be able to fully characterize the micromachined films and predict with confidence the mechanical properties of patterned structures. This paper presents a characterization technique that enables the residual stress in MEMS films to be mapped at the wafer level by using microstructures released by surface micromachining. These dedicated MEMS test structures and the associated measurement techniques are used to extract localized information on the strain and Young's modulus of the film under investigation. The residual stress is then determined by numerically coupling this data with a finite element analysis of the structure. This paper illustrates the measurement routine and demonstrates it with a case study using electrochemically deposited alloys of nickel and iron, which are particularly prone to developing high levels of residual stress. The results show that the technique enables wafer mapping of film non-uniformities and identifies wafer-to-wafer differences. A comparison between the results obtained from the mapping technique and conventional wafer bow measurements highlights the benefits of using a procedure tailored to films that are non-uniform, patterned and surface-micromachined, as opposed to simple standard stress extraction methods. The presented technique reveals detailed information that is generally unexplored when using conventional stress extraction methods such as wafer bow measurements.

  9. Parametric techniques for characterizing myocardial tissue by magnetic resonance imaging (part 1): T1 mapping.

    PubMed

    Perea Palazón, R J; Ortiz Pérez, J T; Prat González, S; de Caralt Robira, T M; Cibeira López, M T; Solé Arqués, M

    2016-01-01

    The development of myocardial fibrosis is a common process in the appearance of ventricular dysfunction in many heart diseases. Magnetic resonance imaging makes it possible to accurately evaluate the structure and function of the heart, and its role in the macroscopic characterization of myocardial fibrosis by late enhancement techniques has been widely validated clinically. Recent studies have demonstrated that T1-mapping techniques can quantify diffuse myocardial fibrosis and the expansion of the myocardial extracellular space in absolute terms. However, further studies are necessary to validate the usefulness of this technique in the early detection of tissue remodeling at a time when implementing early treatment would improve a patient's prognosis. This article reviews the state of the art for T1 mapping of the myocardium, its clinical applications, and its limitations. PMID:26944850

  10. MODIS Snow Cover Mapping Decision Tree Technique: Snow and Cloud Discrimination

    NASA Technical Reports Server (NTRS)

    Riggs, George A.; Hall, Dorothy K.

    2010-01-01

    Accurate mapping of snow cover continues to challenge cryospheric scientists and modelers. The Moderate-Resolution Imaging Spectroradiometer (MODIS) snow data products have been used since 2000 by many investigators to map and monitor snow cover extent for various applications. Users have reported on the utility of the products and also on problems encountered. Three problems or hindrances in the use of the MODIS snow data products that have been reported in the literature are: cloud obscuration, snow/cloud confusion, and snow omission errors in thin or sparse snow cover conditions. Implementation of the MODIS snow algorithm in a decision tree technique using surface reflectance input to mitigate those problems is being investigated. The objective of this work is to use a decision tree structure for the snow algorithm. This should alleviate snow/cloud confusion and omission errors and provide a snow map with classes that convey information on how snow was detected, e.g. snow under clear sky or snow under cloud, to give users flexibility in interpreting and deriving a snow map. Results of a snow cover decision tree algorithm are compared to the standard MODIS snow map and found to exhibit improved ability to alleviate snow/cloud confusion in some situations, allowing up to an approximately 5% increase in mapped snow cover extent, and thus accuracy, in some scenes.

  11. Digital Mapping Techniques '05--Workshop Proceedings, Baton Rouge, Louisiana, April 24-27, 2005

    USGS Publications Warehouse

    Soller, David R.

    2005-01-01

    Introduction: The Digital Mapping Techniques '05 (DMT'05) workshop was attended by more than 100 technical experts from 47 agencies, universities, and private companies, including representatives from 25 state geological surveys (see Appendix A). This workshop was similar in nature to the previous eight meetings, held in Lawrence, Kansas (Soller, 1997), in Champaign, Illinois (Soller, 1998), in Madison, Wisconsin (Soller, 1999), in Lexington, Kentucky (Soller, 2000), in Tuscaloosa, Alabama (Soller, 2001), in Salt Lake City, Utah (Soller, 2002), in Millersville, Pennsylvania (Soller, 2003), and in Portland, Oregon (Soller, 2004). This year's meeting was hosted by the Louisiana Geological Survey, from April 24-27, 2005, on the Louisiana State University campus in Baton Rouge, Louisiana. As in the previous meetings, the objective was to foster informal discussion and exchange of technical information. It is with great pleasure I note that the objective was successfully met, as attendees continued to share and exchange knowledge and information, and to renew friendships and collegial work begun at past DMT workshops. Each DMT workshop has been coordinated by the Association of American State Geologists (AASG) and U.S. Geological Survey (USGS) Data Capture Working Group, which was formed in August 1996 to support the AASG and the USGS in their effort to build a National Geologic Map Database (see Soller and Berg, this volume, and http://ngmdb.usgs.gov/info/standards/datacapt/). The Working Group was formed because increased production efficiencies, standardization, and quality of digital map products were needed for the database, and for the State and Federal geological surveys, to provide more high-quality digital maps to the public. At the 2005 meeting, oral and poster presentations and special discussion sessions emphasized: 1) methods for creating and publishing map products (here, 'publishing' includes Web-based release); 2) field data capture software and

  12. Interrogation of a cascaded FBG sensor using a wavelength-to-delay mapping technique

    NASA Astrophysics Data System (ADS)

    Clement, Juan; Hervás, Javier; Fernández-Pousa, Carlos R.; Barrera, David; Pastor, Daniel; Sales, Salvador; Capmany, José

    2015-09-01

    An optical fiber sensor composed of six standard FBGs in cascade is interrogated using a technique based on wavelength-to-delay mapping. A microwave-modulated optical signal enters the FBG cascade and, after being sliced and reflected, experiences dispersion in standard single-mode fiber. The Bragg wavelengths of the FBGs are mapped into the delays that determine the peaks in the system's electrical impulse response. The Bragg wavelength shifts are calculated from the differences between the delays associated with the FBG samples. A resolution of 9.2 pm in Bragg wavelength shift is demonstrated.
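
    A minimal sketch of the wavelength-to-delay conversion that this kind of interrogation relies on, assuming the standard chromatic-dispersion relation delta_tau = D * L * delta_lambda with generic single-mode-fiber dispersion and an assumed fiber length; the numbers are illustrative, not the authors' experimental parameters.

        # Assumed, generic parameters (not the authors' values):
        D_PS_PER_NM_KM = 17.0   # chromatic dispersion of standard SMF near 1550 nm
        L_KM = 10.0             # length of the dispersive fiber

        def delay_to_wavelength_shift_nm(delta_tau_ps):
            """Convert a measured delay shift (ps) into a Bragg wavelength shift (nm)
            via delta_tau = D * L * delta_lambda."""
            return delta_tau_ps / (D_PS_PER_NM_KM * L_KM)

        # Under these assumptions, a ~1.56 ps delay change maps to ~9.2 pm.
        print(round(delay_to_wavelength_shift_nm(1.56) * 1e3, 1), "pm")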

  13. A novel technique for mapping the disparity of off-terrain objects

    NASA Astrophysics Data System (ADS)

    Suliman, A.; Zhang, Y.; Al-Tahir, R.

    2016-04-01

    Third-dimension information is of great importance for several remote sensing applications, such as building detection. The main data source for these applications is very high resolution (VHR) satellite images, which allow detailed mapping of complex environments. Stereo VHR satellite images allow the extraction of two correlated types of third-dimension information: disparity and elevation information. While the disparity is measured directly, the elevation information is derived computationally. To measure the disparity information, two overlapping images are matched. However, for the backward and forward off-nadir VHR stereo images, building facades occlude areas and hence create many data gaps. When the disparity is required to represent only the off-terrain objects, interpolation and normalization techniques are typically used. However, in dense urban environments, these techniques destroy the quality of the generated data. Therefore, this paper proposes a registration-based technique to measure the disparity of the above-ground objects. The technique includes constructing epipolar images and registering them using common terrain-level features to allow direct disparity mapping for the off-terrain objects. After implementation, the negative effects of occlusion in the off-nadir VHR stereo images are mitigated through direct disparity mapping of the above-ground objects, bypassing the interpolation and normalization steps.

  14. A satellite remote-sensing technique for geological horizon structure mapping

    SciTech Connect

    Fraser, A.J.; Huggins, P.; Cleverley, P.H.; Rees, J.L.

    1995-12-31

    A Satellite Remote Sensing technique is demonstrated which provides accurate and cost-effective near-surface geological structure data. In the exploration phase the technique enables the rapid and inexpensive screening of open licences and the targeting of seismic acquisition, particularly important in terrains of difficult data acquisition. This paper describes the satellite data used, the technique of horizon surface data extraction and the analysis of a case study from Yemen. Landsat Thematic Mapper (TM) data and a high resolution digital elevation model (DEM), generated from stereo SPOT panchromatic images, are used in conjunction to identify a number of outcropping horizons and map their spatial position and height. Geological contacts are identified and digitised from the Landsat TM data and the elevations of these points taken from the digital elevation data. The extracted x,y,z co-ordinates are then gridded to construct a horizon structure map. The technique is applied to an area of central Yemen which is characterised by a near-surface 'layer cake' geological structure in an extremely low dipping terrain (less than 1°). The remote sensing interpretation is validated by comparison with 2D seismic across the area. Regional flexural structures with bed dips of as little as 0.25° can be mapped. Trend analysis and residual calculations on the horizon structure map show the technique's ability to identify and quantify horizon deformation related to faulting. Surface geological structure was successfully interpolated into the subsurface, indicating potential fault closure at reservoir target depths.
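
    Since the core computational step is gridding scattered x, y, z contact points into a horizon surface, here is a minimal sketch of that step using SciPy's gridded interpolation; the coordinates, dip and noise level of the synthetic contact points are hypothetical, not data from the Yemen study.

        import numpy as np
        from scipy.interpolate import griddata

        # Hypothetical (x, y) positions of digitised geological contacts and their
        # elevations z sampled from a DEM (stand-ins for the real extracted points).
        rng = np.random.default_rng(0)
        xy = rng.uniform(0, 10_000, size=(200, 2))            # metres
        z = (350 + 0.004 * xy[:, 0] - 0.002 * xy[:, 1]
             + rng.normal(0, 2, 200))                         # gently dipping horizon + noise

        # Regular grid onto which the horizon structure map is interpolated.
        gx, gy = np.meshgrid(np.linspace(0, 10_000, 101), np.linspace(0, 10_000, 101))
        horizon = griddata(xy, z, (gx, gy), method='linear')

        print("gridded horizon shape:", horizon.shape)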

  15. Mapping of terrain by computer clustering techniques using multispectral scanner data and using color aerial film

    NASA Technical Reports Server (NTRS)

    Smedes, H. W.; Linnerud, H. J.; Woolaver, L. B.; Su, M. Y.; Jayroe, R. R.

    1972-01-01

    Two clustering techniques were used for terrain mapping by computer of test sites in Yellowstone National Park. One test was made with multispectral scanner data using a composite technique which consists of (1) a strictly sequential statistical clustering which is a sequential variance analysis, and (2) a generalized K-means clustering. In this composite technique, the output of (1) is a first approximation of the cluster centers. This is the input to (2), which consists of steps to improve the determination of cluster centers by iterative procedures. Another test was made using the three emulsion layers of color-infrared aerial film as a three-band spectrometer. Relative film densities were analyzed using a simple clustering technique in three-color space. Important advantages of the clustering technique over conventional supervised computer programs are (1) human intervention, preparation time, and manipulation of data are reduced, (2) the computer map gives an unbiased indication of where best to select the reference ground control data, (3) use of easy-to-obtain, inexpensive film, and (4) the geometric distortions can be easily rectified by simple standard photogrammetric techniques.
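
    A minimal sketch of the two-stage idea, assuming a crude one-pass distance-threshold seeding as a stand-in for the sequential variance analysis, followed by K-means refinement of those seed centers; the synthetic four-band pixels and the threshold are illustrative.

        import numpy as np
        from sklearn.cluster import KMeans

        def sequential_seed_centers(pixels, threshold):
            """One-pass seeding: start a new cluster whenever a pixel lies farther
            than `threshold` from every existing center (a simplified stand-in for
            the sequential variance analysis described in the abstract)."""
            centers = [pixels[0]]
            for p in pixels[1:]:
                if min(np.linalg.norm(p - c) for c in centers) > threshold:
                    centers.append(p)
            return np.array(centers)

        # Hypothetical 4-band multispectral pixels (rows = pixels, cols = bands).
        rng = np.random.default_rng(1)
        pixels = np.vstack([rng.normal(m, 5, size=(300, 4)) for m in (40, 80, 120)])

        seeds = sequential_seed_centers(pixels, threshold=40.0)
        km = KMeans(n_clusters=len(seeds), init=seeds, n_init=1).fit(pixels)
        print("clusters found:", len(seeds), "| label counts:", np.bincount(km.labels_))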

  16. Contemporary Mapping Techniques of Complex Cardiac Arrhythmias - Identifying and Modifying the Arrhythmogenic Substrate.

    PubMed

    Koutalas, Emmanuel; Rolf, Sascha; Dinov, Borislav; Richter, Sergio; Arya, Arash; Bollmann, Andreas; Hindricks, Gerhard; Sommer, Philipp

    2015-05-01

    Cardiac electrophysiology has moved a long way forward during recent decades in the comprehension and treatment of complex cardiac arrhythmias. Contemporary electroanatomical mapping systems, along with state-of-the-art technology in the manufacture of electrophysiology catheters and cardiac imaging modalities, have significantly enriched our armamentarium, enabling the implementation of various mapping strategies and techniques in electrophysiology procedures. Beyond conventional mapping strategies, ablation of complex fractionated electrograms and rotor ablation in atrial fibrillation ablation procedures, the identification and modification of the underlying arrhythmogenic substrate has emerged as a strategy that leads to improved outcomes. Arrhythmogenic substrate modification also has a major role in ventricular tachycardia ablation procedures. Optimisation of contact between tissue and catheter and image integration are a further step forward to augment our precision and effectiveness. Hybridisation of existing technologies with a reasonable cost should be our goal over the next few years. PMID:26835095

  17. Contemporary Mapping Techniques of Complex Cardiac Arrhythmias – Identifying and Modifying the Arrhythmogenic Substrate

    PubMed Central

    Koutalas, Emmanuel; Rolf, Sascha; Dinov, Borislav; Richter, Sergio; Arya, Arash; Bollmann, Andreas; Hindricks, Gerhard

    2015-01-01

    Cardiac electrophysiology has moved a long way forward during recent decades in the comprehension and treatment of complex cardiac arrhythmias. Contemporary electroanatomical mapping systems, along with state-of-the-art technology in the manufacture of electrophysiology catheters and cardiac imaging modalities, have significantly enriched our armamentarium, enabling the implementation of various mapping strategies and techniques in electrophysiology procedures. Beyond conventional mapping strategies, ablation of complex fractionated electrograms and rotor ablation in atrial fibrillation ablation procedures, the identification and modification of the underlying arrhythmogenic substrate has emerged as a strategy that leads to improved outcomes. Arrhythmogenic substrate modification also has a major role in ventricular tachycardia ablation procedures. Optimisation of contact between tissue and catheter and image integration are a further step forward to augment our precision and effectiveness. Hybridisation of existing technologies with a reasonable cost should be our goal over the next few years. PMID:26835095

  18. Using a Concept Mapping Tool with a Photograph Association Technique (CoMPAT) to Elicit Children's Ideas about Microbial Activity

    ERIC Educational Resources Information Center

    Byrne, Jenny; Grace, Marcus

    2010-01-01

    Concept mapping is a technique used to provide a visual representation of an individual's ideas about a concept or set of related concepts. This paper describes a concept mapping tool using a photograph association technique (CoMPAT) that is considered to be a novel way of eliciting children's ideas. What children at 11 years of age know about…

  19. Digital mapping techniques '00, workshop proceedings - May 17-20, 2000, Lexington, Kentucky

    USGS Publications Warehouse

    Soller, David R.

    2000-01-01

    Introduction: The Digital Mapping Techniques '00 (DMT'00) workshop was attended by 99 technical experts from 42 agencies, universities, and private companies, including representatives from 28 state geological surveys (see Appendix A). This workshop was similar in nature to the first three meetings, held in June, 1997, in Lawrence, Kansas (Soller, 1997), in May, 1998, in Champaign, Illinois (Soller, 1998a), and in May, 1999, in Madison, Wisconsin (Soller, 1999). This year's meeting was hosted by the Kentucky Geological Survey, from May 17 to 20, 2000, on the University of Kentucky campus in Lexington. As in the previous meetings, the objective was to foster informal discussion and exchange of technical information. When, based on discussions at the workshop, an attendee adopts or modifies a newly learned technique, the workshop clearly has met that objective. Evidence of learning and cooperation among participating agencies continued to be a highlight of the DMT workshops (see example in Soller, 1998b, and various papers in this volume). The meeting's general goal was to help move the state geological surveys and the USGS toward development of more cost-effective, flexible, and useful systems for digital mapping and geographic information systems (GIS) analysis. Through oral and poster presentations and special discussion sessions, emphasis was given to: 1) methods for creating and publishing map products (here, 'publishing' includes Web-based release); 2) continued development of the National Geologic Map Database; 3) progress toward building a standard geologic map data model; 4) field data-collection systems; and 5) map citation and authorship guidelines. Four representatives of the GIS hardware and software vendor community were invited to participate. The four annual DMT workshops were coordinated by the AASG/USGS Data Capture Working Group, which was formed in August, 1996, to support the Association of American State Geologists and the USGS in their effort

  20. Breast density mapping based upon system calibration, x-ray techniques, and FFDM images

    NASA Astrophysics Data System (ADS)

    Chen, Biao; Smith, Andrew P.; Jing, Zhenxue; Wu, Tao

    2007-03-01

    Clinical studies have correlated high breast density with a woman's risk of breast cancer. A breast density measurement that can quantitatively depict the volume distribution and percentage of dense tissues in breasts would be very useful for risk factor assessment of breast cancer, and might be more predictive of risk than the common but subjective and coarse 4-point BIRADS scale. This paper proposes to use a neural-network mapping to compute the breast density information based upon system calibration data, x-ray techniques, and Full Field Digital Mammography (FFDM) images. The mapping consists of four modules, namely, system calibration, generator of beam quality, generator of normalized absorption, and a multi-layer feed-forward neural network. As the core of breast density mapping, the network accepts the x-ray target/filter combination, normalized x-ray absorption, pixel-wise breast thickness map, and x-ray beam quality during image acquisition as input elements, and exports a pixel-wise breast density distribution and a single breast density percentage for the imaged breast. Training and testing data sets for the design and verification of the network were formulated from calibrated x-ray beam quality, imaging data with a step wedge phantom under a variety of x-ray imaging techniques, and nominal breast densities of tissue equivalent materials. The network was trained using a Levenberg-Marquardt-based back-propagation learning method. Various thickness and glandular density phantom studies were performed with clinical x-ray techniques. Preliminary results showed that the neural network mapping is promising in accurately computing glandular density distribution and breast density percentage.
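
    A minimal sketch of a feed-forward mapping from acquisition features to a density value, using synthetic inputs (beam quality, normalized absorption, breast thickness) and a made-up target rule; scikit-learn does not provide Levenberg-Marquardt training, so a standard solver stands in here purely for illustration.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Synthetic stand-ins for the mapping inputs and output (illustrative only):
        # columns = (beam quality, normalized absorption, breast thickness in mm).
        rng = np.random.default_rng(2)
        X = rng.uniform([0.3, 0.1, 20.0], [0.7, 0.9, 80.0], size=(500, 3))
        y = 0.2 + 0.6 * X[:, 1] - 0.003 * X[:, 2] + rng.normal(0, 0.01, 500)  # made-up rule

        net = MLPRegressor(hidden_layer_sizes=(16, 16), solver='lbfgs',
                           max_iter=2000, random_state=0).fit(X, y)
        print("predicted density fraction for one pixel:", net.predict([[0.5, 0.4, 45.0]])[0])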

  1. Technique of sentinel lymph node biopsy and lymphatic mapping during laparoscopic colon resection for cancer

    PubMed Central

    Bianchi, PP; Andreoni, B; Rottoli, M; Celotti, S; Chiappa, A; Montorsi, M

    2007-01-01

    Background: The utility of lymph node mapping to improve staging in colon cancer is still under evaluation. Laparoscopic colectomy for colon cancer has been validated in multi-centric trials. This study assessed the feasibility and technical aspects of lymph node mapping in laparoscopic colectomy for colon cancer. Methods: A total of 42 patients with histologically proven colon cancer were studied from January 2006 to September 2007. Exclusion criteria were: advanced disease (clinical stage III), rectal cancer, previous colon resection and contraindication to laparoscopy. Lymph-nodal status was assessed preoperatively by computed tomography (CT) scan and intra-operatively with the aid of laparoscopic ultrasound. Before resection, 2–3 ml of Patent Blue V dye was injected sub-serosally around the tumour. Coloured lymph nodes were marked as sentinel (SN) with metal clips or suture and laparoscopic colectomy with lymphadenectomy completed as normal. In case of failure of the intra-operative procedure, an ex vivo SN biopsy was performed on the colectomy specimen after resection. Results: A total number of 904 lymph nodes were examined, with a median number of 22 lymph nodes harvested per patient. The SN detection rate was 100%, an ex vivo lymph node mapping was necessary in four patients. Eleven (26.2%) patients had lymph-nodal metastases and in five (45.5%) of these patients, SN was the only positive lymph node. There were two (18.2%) false-negative SN. In three cases (7.1%) with aberrant lymphatic drainage, lymphadenectomy was extended. The accuracy of SN mapping was 95.2% and negative predictive value was 93.9%. Conclusions: Laparoscopic lymphatic mapping and SN removal is feasible in laparoscopic colectomy for colon cancer. The ex vivo technique is useful as a salvage technique in case of failure of the intra-operative procedure. Prospective studies are justified to determine the real accuracy and false-negative rate of the technique. PMID:22275957

  2. Estimation of flood environmental effects using flood zone mapping techniques in Halilrood Kerman, Iran.

    PubMed

    Boudaghpour, Siamak; Bagheri, Majid; Bagheri, Zahra

    2014-01-01

    High flood occurrences with large environmental damages show a growing trend in Iran. Dynamic movements of water during a flood cause different environmental damages in geographical areas with different characteristics such as topographic conditions. In general, environmental effects and damages caused by a flood in an area can be investigated from different points of view. The current study aims at detecting environmental effects of flood occurrences in the Halilrood catchment area of Kerman province in Iran using flood zone mapping techniques. The intended flood zone map was produced in four steps. Steps 1 to 3 calculate and estimate the flood zone map in the study area, while step 4 estimates the environmental effects of flood occurrence. Based on our studies, a wide range of accuracy for estimating the environmental effects of flood occurrence was achieved by using flood zone mapping techniques. Moreover, it was identified that the existence of the Jiroft dam in the study area can decrease the flood zone from 260 hectares to 225 hectares and can also decrease flood peak intensity by 20%. As a result, about 14% of the flood zone in the study area can be saved environmentally. PMID:25649059

  3. Development of two-dimensional mapping technique by in-air-PIXE with metal capillary

    NASA Astrophysics Data System (ADS)

    Fujita, N.; Ishii, K.; Ogawa, H.

    2011-05-01

    We have developed a two-dimensional mapping technique with in-air-PIXE (2D-PIXE) using a metal capillary as a guide to extract the ion beam into air. The metal capillary is a conventional injection needle with a 200 μm inside diameter. For a target consisting of a character made of copper wires on an aluminum base, 2D-PIXE measurements were performed by irradiation with a 3 MeV proton beam. As a result, the character tended to be restored clearly by this method. We discuss the resulting two-dimensional map from the viewpoint of signal-to-noise ratio and resolution. This technique is expected to be applicable to various fields such as biology, nanotechnology, archeology and so on.

  4. GIS-based statistical mapping technique for block-and-ash pyroclastic flow and surge hazards

    NASA Astrophysics Data System (ADS)

    Widiwijayanti, C.; Voight, B.; Hidayat, D.; Schilling, S.

    2008-12-01

    Assessments of pyroclastic flow (PF) hazards are commonly based on mapping of PF and surge deposits and estimations of inundation limits, and/or computer models of varying degrees of sophistication. In volcanic crises a PF hazard map may be sorely needed, but limited time, exposures, or safety aspects may preclude fieldwork, and insufficient time or baseline data may be available for reliable dynamic simulations. We have developed a statistically constrained simulation model for block-and-ash PFs to estimate potential areas of inundation by adapting methodology from Iverson et al. (1998) for lahars. The predictive equations for block-and-ash PFs are calibrated with data from many volcanoes and given by A = (0.05-0.1)V^(2/3), B = (35-40)V^(2/3), where A is cross-sectional area of inundation, B is planimetric area and V is deposit volume. The proportionality coefficients were obtained from regression analyses and comparison of simulations to mapped deposits. The method embeds the predictive equations in a GIS program coupled with DEM topography, using the LAHARZ program of Schilling (1998). Although the method is objective and reproducible, any PF hazard zone so computed should be considered as an approximate guide only, due to uncertainties on coefficients applicable to individual PFs, DEM details, and release volumes. Gradational nested hazard maps produced by these simulations reflect in a sense these uncertainties. The model does not explicitly consider dynamic behavior, which can be important. Surge impacts must be extended beyond PF hazard zones and we have explored several approaches to do this. The method has been used to supply PF hazard maps in two crises: Merapi 2006; and Montserrat 2006-2007. We have also compared our hazard maps to actual recent PF deposits and to maps generated by several other model techniques.
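
    A minimal sketch of the quoted scaling relations, evaluating the cross-sectional and planimetric inundation areas over the stated coefficient ranges for an assumed deposit volume; the example volume is illustrative.

        def inundation_areas(volume_m3, c_cross=(0.05, 0.1), c_plan=(35.0, 40.0)):
            """Cross-sectional (A) and planimetric (B) inundation areas, in m^2, from
            the calibrated scalings A = c1*V^(2/3) and B = c2*V^(2/3) quoted above.
            Returns (A_low, A_high), (B_low, B_high) for the coefficient ranges."""
            v23 = volume_m3 ** (2.0 / 3.0)
            return tuple(c * v23 for c in c_cross), tuple(c * v23 for c in c_plan)

        # Example: an assumed 1e6 m^3 block-and-ash flow.
        A, B = inundation_areas(1e6)
        print("A (m^2):", A, "| B (m^2):", B)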

  5. New mapping techniques help assess the health of Hawaii's coral reefs

    USGS Publications Warehouse

    Field, M.E.; Chavez, P.S., Jr.; Evans, K.R.; Cochran, S.A.

    2001-01-01

    The U.S. Geological Survey (USGS) is working closely with academic institutions and state and Federal agencies to assess the factors that affect the health of Hawaii's and our Nation's coral reefs. In order to establish a basis from which scientists can objectively detect changes in reef health, the USGS and its cooperators are applying many new techniques to the mapping and monitoring of coral reefs in Hawaii.

  6. Mineral classification map using MF and SAM techniques: A case study in the Nohwa Island, Korea

    SciTech Connect

    Son, Young-Sun; Yoon, Wang-Jung

    2015-03-10

    The purpose of this study is to map pyrophyllite distribution at the surface of the Nohwa deposit, Korea, by using Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) data. For this, a combined Spectral Angle Mapper (SAM) and Matched Filtering (MF) technique based on a mathematical algorithm was applied. The regional distribution of high-grade and low-grade pyrophyllite in the Nohwa deposit area could be differentiated by this method. The results of this study show that ASTER data analysis using a combination of SAM and MF techniques will assist in exploration of pyrophyllite at the exposed surface.
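
    The Spectral Angle Mapper step reduces to the angle between a pixel spectrum and a reference spectrum; a minimal sketch follows, with purely illustrative six-band reflectance values standing in for ASTER SWIR spectra.

        import numpy as np

        def spectral_angle(pixel, reference):
            """Spectral Angle Mapper: angle (radians) between a pixel spectrum and a
            reference (endmember) spectrum; smaller angles mean closer matches."""
            cos_t = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
            return np.arccos(np.clip(cos_t, -1.0, 1.0))

        # Hypothetical 6-band reflectance spectra (values are illustrative only).
        pyrophyllite_ref = np.array([0.42, 0.38, 0.20, 0.35, 0.40, 0.37])
        pixel = np.array([0.40, 0.36, 0.22, 0.33, 0.41, 0.36])
        print("spectral angle (rad):", spectral_angle(pixel, pyrophyllite_ref))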

  7. Remote Sensing and GIS Techniques to map Groundwater Recharge and Discharge

    NASA Astrophysics Data System (ADS)

    Tweed, S.; Leblanc, M.; Webb, J.; Lubczynski, M.; Stagnitti, F.

    2006-12-01

    In salinity-prone catchments, effective management of water resources involves balancing the requirements of sustainable resource allocation with salinity mitigation programs and ecosystem protection. Understanding groundwater recharge and discharge processes is a key component to achieving this balance. In this study, surface and sub-surface features are linked with recharge and discharge processes by incorporating GIS and remote sensing mapping techniques for an unconfined basalt aquifer in a salinity- and drought-prone region of southeast Australia. The basalt aquifer covers ~11,500 km² in an agriculturally intensive region. The approach requires (a) knowledge of local hydrogeological processes, to choose appropriate surface/sub-surface indicators, (b) adequate remote sensing and GIS techniques as well as necessary datasets to map these indicators, and (c) validation of the results at test sites and at the catchment scale. This approach, applied systematically across a catchment, provides a framework for mapping recharge and discharge areas. A key component in assigning surface and sub-surface indicators is their relevance to the dominant recharge and discharge processes occurring, and using appropriate remote sensing and GIS techniques with the capacity to identify these.

  8. A semi-quantitative technique for mapping potential aquifer productivity on the national scale: example of England and Wales (UK)

    NASA Astrophysics Data System (ADS)

    Abesser, Corinna; Lewis, Melinda

    2015-12-01

    The development and validation of aquifer productivity and depth-to-source maps for England and Wales are described. Aquifer productivity maps can provide valuable support for the assessment, planning and management of groundwater and renewable heat energy resources. Aquifer productivity is often mapped using geostatistical interpolation techniques such as kriging, but these techniques tend to be unsuitable for mapping at the national scale due to the high data (and time) demands. A methodology is outlined for mapping aquifer productivity at the national scale using existing national-scale data sets. Pumping test data are used to characterise the potential borehole yields that different geological formations of varying lithologies and ages can provide. Based on this analysis and using expert knowledge, the corresponding map codes on the geological map are assigned to potential productivity classes. The subsurface (concealed) extent of aquifer units is mapped from geophysical data and, together with the attributed geological map, provides the bedrock-aquifer productivity map. Drilling and pumping costs can be an important consideration when evaluating the feasibility of developing a groundwater source. Thus, a map of the approximate depth to source is developed alongside the aquifer productivity map. The maps are validated using independent data sets, and map performance is compared against performance from maps derived by random and uniform attribution. The results show that the maps successfully predict potential productivity and approximate depth to the water source, although the utility of the depth-to-source map could be improved by increasing the vertical discretisation at which depth intervals are mapped.

  9. Luminous-flux measurements by an absolute integrating sphere

    NASA Astrophysics Data System (ADS)

    Rastello, Maria Luisa; Miraldi, Elio; Pisoni, Paolo

    1996-08-01

    We present an original implementation of the absolute-sphere method recently proposed by Ohno. The luminous-flux unit, the lumen, is realized by means of an integrating sphere with an opening calibrated by a luminous-intensity standard placed outside. The adapted experimental setup permits one to measure luminous-flux values between 5 and 2500 lm with a significant improvement with respect to the simulated performances reported in the literature. Traditionally, the luminous-flux unit, the lumen, is realized by goniophotometric techniques in which the luminous-intensity distribution is measured and integrated over the whole solid angle. Thus sphere results are compared with those obtained with the Istituto Elettrotecnico Nazionale goniophotometer. In particular, a set of standards, characterized by luminous-flux values of approximately 2000 lm, has been calibrated with both techniques. We highlight some of the problems encountered. Experimental results show that the agreement between the two methods is within the estimated uncertainty and suggest promising areas for future research.

  10. Advanced Ecosystem Mapping Techniques for Large Arctic Study Domains Using Calibrated High-Resolution Imagery

    NASA Astrophysics Data System (ADS)

    Macander, M. J.; Frost, G. V., Jr.

    2015-12-01

    Regional-scale mapping of vegetation and other ecosystem properties has traditionally relied on medium-resolution remote sensing such as Landsat (30 m) and MODIS (250 m). Yet the burgeoning availability of high-resolution (<=2 m) imagery and ongoing advances in computing power and analysis tools raise the prospect of performing ecosystem mapping at fine spatial scales over large study domains. Here we demonstrate cutting-edge mapping approaches over a ~35,000 km² study area on Alaska's North Slope using calibrated and atmospherically-corrected mosaics of high-resolution WorldView-2 and GeoEye-1 imagery: (1) an a priori spectral approach incorporating the Satellite Imagery Automatic Mapper (SIAM) algorithms; (2) image segmentation techniques; and (3) texture metrics. The SIAM spectral approach classifies radiometrically-calibrated imagery into general vegetation density categories and non-vegetated classes. The SIAM classes were developed globally, and their applicability in arctic tundra environments has not been previously evaluated. Image segmentation, or object-based image analysis, automatically partitions high-resolution imagery into homogeneous image regions that can then be analyzed based on spectral, textural, and contextual information. We applied eCognition software to delineate waterbodies and vegetation classes, in combination with other techniques. Texture metrics were evaluated to determine the feasibility of using high-resolution imagery to algorithmically characterize periglacial surface forms (e.g., ice-wedge polygons), which are an important physical characteristic of permafrost-dominated regions but which cannot be distinguished by medium-resolution remote sensing. These advanced mapping techniques yield products that provide essential information supporting a broad range of ecosystem science and land-use planning applications in northern Alaska and elsewhere in the circumpolar Arctic.

  11. Testing the PV-Theta Mapping Technique in a 3-D CTM Model Simulation

    NASA Technical Reports Server (NTRS)

    Frith, Stacey M.

    2004-01-01

    Mapping lower stratospheric ozone into potential vorticity (PV)- potential temperature (Theta) coordinates is a common technique employed to analyze sparse data sets. Ozone transformed into a flow-following dynamical coordinate system is insensitive to meteorological variations. Therefore data from a wide range of times/locations can be compared, so long as the measurements were made in the same airmass (as defined by PV). Moreover, once a relationship between ozone and PV/Theta is established, a full 3D ozone field can be estimated from this relationship and the 3D analyzed PV field. However, ozone data mapped in this fashion can be hampered by noisy PV fields, or "mis-matches" in the resolution and/or exact location of the ozone and PV measurements. In this study, we investigate the PV-ozone relationship using output from a recent 50-year run of the Goddard 3D chemical transport model (CTM). Model constituents are transported using off-line dynamics from the finite volume general circulation model (FVGCM). By using the internally consistent model PV and ozone fields, we minimize noise due to mis-matching and resolution issues. We calculate correlations between model ozone and PV throughout the stratosphere, and test the sensitivity of the technique to initial data resolution. To do this we degrade the model data to that of various satellite instruments, then compare the mapped fields derived from the sub-sampled data to the full resolution model data. With these studies we can determine appropriate limits for the PV-theta mapping technique in latitude, altitude, and as a function of original data resolution.
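
    A minimal sketch of the mapping idea, assuming ozone is treated as a smooth function of (PV, theta) within an airmass: sparse samples with coincident PV and theta are binned into a lookup table that could then be evaluated on an analyzed 3-D PV/theta field. The synthetic samples and bin edges are illustrative, not model or satellite data.

        import numpy as np

        # Synthetic stand-ins: sparse "observations" of ozone with coincident PV and
        # potential temperature (theta); all values are illustrative.
        rng = np.random.default_rng(6)
        pv = rng.uniform(0.0, 10.0, 2000)                  # PVU
        theta = rng.uniform(400.0, 600.0, 2000)            # K
        ozone = 2.0 + 0.3 * pv + 0.004 * (theta - 400.0) + rng.normal(0, 0.05, 2000)  # ppmv

        # Build the ozone(PV, theta) lookup table by binned averaging.
        pv_edges = np.linspace(0, 10, 11)
        th_edges = np.linspace(400, 600, 11)
        sums, _, _ = np.histogram2d(pv, theta, bins=[pv_edges, th_edges], weights=ozone)
        counts, _, _ = np.histogram2d(pv, theta, bins=[pv_edges, th_edges])
        lookup = sums / np.maximum(counts, 1)

        # A full 3-D ozone field could now be estimated by indexing this table with
        # analyzed PV and theta at every grid point.
        print("lookup table shape:", lookup.shape)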

  12. Bioclimatic and vegetation mapping of a topographically complex oceanic island applying different interpolation techniques

    NASA Astrophysics Data System (ADS)

    Garzón-Machado, Víctor; Otto, Rüdiger; del Arco Aguilar, Marcelino José

    2014-07-01

    Different spatial interpolation techniques have been applied to construct objective bioclimatic maps of La Palma, Canary Islands. Interpolation of climatic data on this topographically complex island with strong elevation and climatic gradients represents a challenge. Furthermore, meteorological stations are not evenly distributed over the island, with few stations at high elevations. We carried out spatial interpolations of the compensated thermicity index (Itc) and the annual ombrothermic index (Io), in order to obtain appropriate bioclimatic maps by using automatic interpolation procedures, and to establish their relation to potential vegetation units for constructing a climatophilous potential natural vegetation map (CPNV). For this purpose, we used five interpolation techniques implemented in a GIS: inverse distance weighting (IDW), ordinary kriging (OK), ordinary cokriging (OCK), multiple linear regression (MLR) and MLR followed by ordinary kriging of the regression residuals. Two topographic variables (elevation and aspect), derived from a high-resolution digital elevation model (DEM), were included in OCK and MLR. The accuracy of the interpolation techniques was examined by the results of the error statistics of test data derived from comparison of the predicted and measured values. Best results for both bioclimatic indices were obtained with the MLR method with interpolation of the residuals, showing the highest R² of the regression between observed and predicted values and lowest values of root mean square errors. MLR with correction of interpolated residuals is an attractive interpolation method for bioclimatic mapping on this oceanic island since it permits one to fully account for easily available geographic information but also takes into account local variation of climatic data.
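
    As a concrete example of the simplest of the five interpolators compared, here is a plain-NumPy inverse distance weighting sketch; the station coordinates and index values are hypothetical, and no search radius or anisotropy is modelled.

        import numpy as np

        def idw(xy_obs, z_obs, xy_new, power=2.0):
            """Inverse distance weighting of observations z_obs at xy_obs onto xy_new."""
            d = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=2)
            d = np.maximum(d, 1e-9)                 # avoid division by zero at stations
            w = 1.0 / d**power
            return (w @ z_obs) / w.sum(axis=1)

        # Hypothetical station coordinates (km) and a bioclimatic index value.
        stations = np.array([[2.0, 3.0], [8.0, 1.0], [5.0, 9.0], [1.0, 7.0]])
        index = np.array([120.0, 310.0, 80.0, 150.0])
        targets = np.array([[4.0, 4.0], [7.0, 7.0]])
        print(idw(stations, index, targets))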

  13. Bioclimatic and vegetation mapping of a topographically complex oceanic island applying different interpolation techniques.

    PubMed

    Garzón-Machado, Víctor; Otto, Rüdiger; del Arco Aguilar, Marcelino José

    2014-07-01

    Different spatial interpolation techniques have been applied to construct objective bioclimatic maps of La Palma, Canary Islands. Interpolation of climatic data on this topographically complex island with strong elevation and climatic gradients represents a challenge. Furthermore, meteorological stations are not evenly distributed over the island, with few stations at high elevations. We carried out spatial interpolations of the compensated thermicity index (Itc) and the annual ombrothermic Index (Io), in order to obtain appropriate bioclimatic maps by using automatic interpolation procedures, and to establish their relation to potential vegetation units for constructing a climatophilous potential natural vegetation map (CPNV). For this purpose, we used five interpolation techniques implemented in a GIS: inverse distance weighting (IDW), ordinary kriging (OK), ordinary cokriging (OCK), multiple linear regression (MLR) and MLR followed by ordinary kriging of the regression residuals. Two topographic variables (elevation and aspect), derived from a high-resolution digital elevation model (DEM), were included in OCK and MLR. The accuracy of the interpolation techniques was examined by the results of the error statistics of test data derived from comparison of the predicted and measured values. Best results for both bioclimatic indices were obtained with the MLR method with interpolation of the residuals showing the highest R2 of the regression between observed and predicted values and lowest values of root mean square errors. MLR with correction of interpolated residuals is an attractive interpolation method for bioclimatic mapping on this oceanic island since it permits one to fully account for easily available geographic information but also takes into account local variation of climatic data. PMID:23686111

  14. Applying data fusion techniques for benthic habitat mapping and monitoring in a coral reef ecosystem

    NASA Astrophysics Data System (ADS)

    Zhang, Caiyun

    2015-06-01

    Accurate mapping and effective monitoring of benthic habitat in the Florida Keys are critical in developing management strategies for this valuable coral reef ecosystem. For this study, a framework was designed for automated benthic habitat mapping by combining multiple data sources (hyperspectral, aerial photography, and bathymetry data) and four contemporary image processing techniques (data fusion, Object-based Image Analysis (OBIA), machine learning, and ensemble analysis). In the framework, a 1-m digital aerial photograph was first merged with 17-m hyperspectral imagery and 10-m bathymetry data using a pixel/feature-level fusion strategy. The fused dataset was then preclassified by three machine learning algorithms (Random Forest, Support Vector Machines, and k-Nearest Neighbor). Final object-based habitat maps were produced through ensemble analysis of the outcomes from the three classifiers. The framework was tested for classifying group-level (3-class) and code-level (9-class) habitats in a portion of the Florida Keys. Informative and accurate habitat maps were achieved with an overall accuracy of 88.5% and 83.5% for the group-level and code-level classifications, respectively.
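
    A minimal sketch of the ensemble step, assuming a simple majority vote over the three classifier families named in the abstract; the synthetic feature matrix stands in for the fused pixel/object features, and all parameters are illustrative rather than those used in the study.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier, VotingClassifier
        from sklearn.svm import SVC
        from sklearn.neighbors import KNeighborsClassifier

        # Synthetic stand-in for fused spectral/bathymetry features and 3 habitat classes.
        X, y = make_classification(n_samples=600, n_features=10, n_informative=6,
                                   n_classes=3, random_state=0)

        ensemble = VotingClassifier(
            estimators=[('rf', RandomForestClassifier(n_estimators=100, random_state=0)),
                        ('svm', SVC()),
                        ('knn', KNeighborsClassifier(n_neighbors=5))],
            voting='hard')                     # simple majority vote across the three classifiers
        ensemble.fit(X[:500], y[:500])
        print("held-out accuracy:", ensemble.score(X[500:], y[500:]))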

  15. Mapping small wetlands of Kenya and Tanzania using remote sensing techniques

    NASA Astrophysics Data System (ADS)

    Mwita, E.; Menz, G.; Misana, S.; Becker, M.; Kisanga, D.; Boehme, B.

    2013-04-01

    Although wetlands in Tanzania and Kenya have great potential for agricultural production and a multitude of uses, many of them are not even documented on official maps. Lack of official recognition has done little to prevent their overutilization. As the wetlands continue to play remarkable roles in the movement of people and terrestrial species in the region, it is important that they are monitored and properly managed. This study was undertaken in the Usambara highlands and the Pangani floodplain in Tanzania, and the Mount Kenya highlands and Laikipia floodplain in Kenya, to map the different types of wetlands in terms of their size, density, spatial distribution and use patterns. Remote sensing techniques and field surveys were adopted, and 51 wetlands were identified in flood plains within the semi-arid and sub-humid lowlands, and in inland valleys in the region. The detailed maps generated showed the intensity of wetland use, inland valleys being the most intensively used, and are useful in monitoring changes in wetlands for their effective management. The use of imagery at multiple spatial resolutions, combined with field survey and GIS, produced satisfactory results for the delineation and mapping of small wetlands and their uses.

  16. Making Accurate Topographic Maps of the Schoolyard Using Ideas and Techniques Learned and Adapted from Multi-beam Sonar Mapping of the Arctic Ocean

    NASA Astrophysics Data System (ADS)

    Fuerst, S. I.; Roberts, J. D.

    2010-12-01

    Having participated in a University of Rhode Island Project Armada expedition to join the University of New Hampshire Center for Coastal and Oceanographic Studies in making multi-beam sonar contour maps of the Arctic Ocean floor, I was able to bring the principles learned from this trip to my earth science high school students and create a project in our "mapping the earth" unit. Students learn basic surveying techniques and create authentic, accurately detailed topographic maps of the schoolyard. Models of their maps are then constructed of either Styrofoam or wood which enables them to make the transition from a 2-dimensional map to a 3-dimensional representation. Even though our maps are created using sticks, line levels, compasses and GPS, the scientific concepts of using location and elevation data to draw contour lines are identical to those used in underwater mapping. Once the students understand the science in mapping and creating contour maps to scale on graph paper by hand, they are able to easily relate this knowledge to what I was doing onboard ship using multi-beam sonar and computer mapping programs. We would like to share with you the lab and techniques that we have developed to make this activity possible with minimal materials and simple technology. As a background extension, it is also possible to replicate sonar measurements using an aquarium, food coloring, and a surface grid to map the topography of a teacher created landscape on the aquarium bottom. Earth Science students using simple tools to accurately map the topography of the school grounds

  17. Validating Flood Mapping Products Using a Digital Elevation Model Comparison Technique

    NASA Astrophysics Data System (ADS)

    Fayne, J.

    2014-12-01

    This preliminary study assessed the validity of a pixel analysis elevation comparison technique and determined necessary steps for improvement. The pixel analysis sought to assess the probability of a flood occurring in a particular area by comparing the spatial extent of flood mapping products to the local elevation. The method was developed to determine if the physical relationship between elevation and floods as shown in satellite images is accurately represented in a flood mapping product. The data incorporated in this study are raster digital elevation model (DEM) tiles, a scene from Landsat 5 during a flood period, and a scene from the NASA DEVELOP Flood Disasters Team Flood Product. Pixels representing flooded areas were compared to the elevation height pixels using horizontal transect lines to create pixel value profiles across a 727 km transect of Vietnam and Cambodia. The elevation model comparison validates the Flood Product by depicting water presence in alignment with areas of low elevation. Initial findings indicate that the technique can be used to improve the assessment of flood mapping products in transects less than 10 km. Future research will focus on streamlining the pixel analysis process to yield comprehensive results for larger areas.

  18. Performance Analysis of the Microsoft Kinect Sensor for 2D Simultaneous Localization and Mapping (SLAM) Techniques

    PubMed Central

    Kamarudin, Kamarulzaman; Mamduh, Syed Muhammad; Shakaff, Ali Yeon Md; Zakaria, Ammar

    2014-01-01

    This paper presents a performance analysis of two open-source, laser scanner-based Simultaneous Localization and Mapping (SLAM) techniques (i.e., Gmapping and Hector SLAM) using a Microsoft Kinect to replace the laser sensor. Furthermore, the paper proposes a new system integration approach whereby a Linux virtual machine is used to run the open source SLAM algorithms. The experiments were conducted in two different environments: a small room with no features and a typical office corridor with desks and chairs. Using the data logged from real-time experiments, each SLAM technique was simulated and tested with different parameter settings. The results show that the system is able to achieve real-time SLAM operation. The system implementation offers a simple and reliable way to compare the performance of a Windows-based SLAM algorithm with the algorithms typically implemented in a Robot Operating System (ROS). The results also indicate that certain modifications to the default laser scanner-based parameters are able to improve the map accuracy. However, the limited field of view and range of the Kinect's depth sensor often cause the map to be inaccurate, especially in featureless areas; therefore, the Kinect sensor is not a direct replacement for a laser scanner, but rather offers a feasible alternative for 2D SLAM tasks. PMID:25490595

  19. Performance analysis of the Microsoft Kinect sensor for 2D Simultaneous Localization and Mapping (SLAM) techniques.

    PubMed

    Kamarudin, Kamarulzaman; Mamduh, Syed Muhammad; Shakaff, Ali Yeon Md; Zakaria, Ammar

    2014-01-01

    This paper presents a performance analysis of two open-source, laser scanner-based Simultaneous Localization and Mapping (SLAM) techniques (i.e., Gmapping and Hector SLAM) using a Microsoft Kinect to replace the laser sensor. Furthermore, the paper proposes a new system integration approach whereby a Linux virtual machine is used to run the open source SLAM algorithms. The experiments were conducted in two different environments: a small room with no features and a typical office corridor with desks and chairs. Using the data logged from real-time experiments, each SLAM technique was simulated and tested with different parameter settings. The results show that the system is able to achieve real-time SLAM operation. The system implementation offers a simple and reliable way to compare the performance of a Windows-based SLAM algorithm with the algorithms typically implemented in a Robot Operating System (ROS). The results also indicate that certain modifications to the default laser scanner-based parameters are able to improve the map accuracy. However, the limited field of view and range of the Kinect's depth sensor often cause the map to be inaccurate, especially in featureless areas; therefore, the Kinect sensor is not a direct replacement for a laser scanner, but rather offers a feasible alternative for 2D SLAM tasks. PMID:25490595

  20. Urban Mapping and Growth Prediction using Remote Sensing and GIS Techniques, Pune, India

    NASA Astrophysics Data System (ADS)

    Sivakumar, V.

    2014-11-01

    This study aims to map the urban area in and around the Pune region between 1991 and 2010, and to predict its probable future growth using remote sensing and GIS techniques. Landsat TM and ETM+ satellite images from 1991, 2001 and 2010 were used for analyzing the urban land use class. The urban class was extracted/mapped using a supervised classification technique with a maximum likelihood classifier. An accuracy assessment was carried out for the classified maps; the achieved overall accuracy and Kappa statistic were 86.33% and 0.76, respectively. A transition probability matrix and area change were obtained using the different classified images. A plug-in was developed in QGIS software (open source) based on a Markov Chain model algorithm for predicting probable urban growth for the future year 2021. Based on the available data set, the results show that the urban area is expected to grow much larger by 2021 when compared to 2010. This study provides an insight into understanding of urban growth and aids in subsequent infrastructure planning, management and decision-making.
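
    A minimal sketch of the Markov Chain projection step, assuming a hypothetical 3-class transition probability matrix estimated between two classified dates; one matrix multiplication projects class areas forward by one time step. All numbers are illustrative, not the study's results.

        import numpy as np

        # Hypothetical land-cover transition probabilities between two dates
        # (rows: from-class, cols: to-class); classes = [urban, vegetation, other].
        P = np.array([[0.95, 0.02, 0.03],
                      [0.10, 0.85, 0.05],
                      [0.08, 0.07, 0.85]])

        area_2010 = np.array([250.0, 400.0, 350.0])   # km^2 per class (illustrative)

        # One Markov step projects the 2010 areas forward under the same rates.
        area_future = area_2010 @ P
        print("projected areas (km^2):", area_future.round(1))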

  1. Micro-scale strain mapping technique: a tool to quantify strain partitioning during creep deformation

    NASA Astrophysics Data System (ADS)

    Quintanilla-Terminel, Alejandra; Zimmerman, Mark; Evans, Brian; Kohlstedt, David

    2016-04-01

    Several deformation mechanisms interact to accommodate plastic deformation. Quantifying the contribution of each to the total strain is necessary for establishing a better link between observed microstructures and mechanical data, as well as to allow more confident extrapolation from laboratory to natural conditions. In this contribution, we present the experimental and computational technique involved in micro-scale strain mapping (MSSM). The MSSM technique relies on analyzing the relative displacement of initially regularly spaced markers after deformation. We present several microfabrication techniques that permit us to pattern various rocks with micrometric and nanometric metal markers, as well as the challenges faced in working at high temperatures and pressures. A Hough transform algorithm was used to detect the markers and automate the strain analysis as much as possible. The von Mises strain is calculated for a set of n points and their relative displacements, which allows us to map the strain at different length scales. We applied the MSSM technique to study strain partitioning during creep deformation of Carrara marble and San Carlos olivine at a confining pressure, Pc, of 300 MPa and homologous temperatures of 0.3 to 0.6. We measured the local strain and strain heterogeneity produced during creep deformation of split cylinders of Carrara marble under conventional triaxial loading to inelastic strains of 11 to 36% at a strain rate of 3×10⁻⁵ s⁻¹, Pc = 300 MPa and 400 °C < T < 700 °C. We conclude that the evolution of deformation structures in marble takes place over a substantial interval in strain and that the duration of this interval depends on strain rate, temperature, and pressure. Our first results on strain mapping of olivine deformed at T = 1150 °C and Pc = 300 MPa demonstrate promise for characterizing intragranular strain and better defining the contribution of grain boundary sliding to the total strain.
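
    A minimal sketch of the strain-mapping step, assuming marker displacements have already been extracted onto a regular grid: displacement gradients give the 2-D strain components, and an equivalent (von Mises) strain is formed under an assumed incompressible, small-strain approximation. The grid spacing and synthetic displacement field are illustrative, not the MSSM data.

        import numpy as np

        def von_mises_strain_map(ux, uy, spacing):
            """Equivalent (von Mises) strain from gridded marker displacements ux, uy,
            assuming small, volume-conserving strain (a simplified stand-in for the
            marker-based analysis described above)."""
            dux_dy, dux_dx = np.gradient(ux, spacing)
            duy_dy, duy_dx = np.gradient(uy, spacing)
            exx, eyy = dux_dx, duy_dy
            exy = 0.5 * (dux_dy + duy_dx)
            ezz = -(exx + eyy)                      # incompressibility assumption
            return np.sqrt((2.0 / 3.0) * (exx**2 + eyy**2 + ezz**2 + 2.0 * exy**2))

        # Synthetic marker displacements on a 50x50 grid with 10-micron spacing.
        y, x = np.mgrid[0:50, 0:50] * 10.0          # micrometres
        ux = 0.002 * x + 0.01 * np.sin(y / 80.0)    # illustrative shear + gradient
        uy = -0.002 * y
        strain = von_mises_strain_map(ux, uy, spacing=10.0)
        print("mean equivalent strain:", strain.mean().round(4))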

  2. Stacked reverberation mapping

    NASA Astrophysics Data System (ADS)

    Fine, S.; Shanks, T.; Green, P.; Kelly, B. C.; Croom, S. M.; Webster, R. L.; Berger, E.; Chornock, R.; Burgett, W. S.; Chambers, K. C.; Kaiser, N.; Price, P. A.

    2013-07-01

    Over the past 20 years reverberation mapping has proved one of the most successful techniques for studying the local (<1 pc) environment of supermassive black holes that drive active galactic nuclei. Key successes of reverberation mapping have been direct black hole mass estimates, the radius-luminosity relation for the Hβ line and the calibration of single-epoch mass estimators commonly employed up to z ~ 7. However, observing constraints mean that few studies have been successful at z > 0.1, or for the more-luminous quasars that make up the majority of current spectroscopic samples, or for rest-frame ultraviolet emission lines available in optical spectra of z > 0.5 objects. Previously, we described a technique for stacking cross-correlations to obtain reverberation mapping results at high z. Here, we present the first results from a campaign designed for this purpose. We construct stacked cross-correlation functions for the C IV and Mg II lines and find a clear peak in both. We find that the peak in the Mg II correlation is at longer lags than C IV, consistent with previous results at low redshift. For the C IV sample, we are able to bin by luminosity and find evidence for increasing lags for more-luminous objects. This C IV radius-luminosity relation is consistent with previous studies but with a fraction of the observational cost.

  3. Mapping forest vegetation with ERTS-1 MSS data and automatic data processing techniques

    NASA Technical Reports Server (NTRS)

    Messmore, J.; Copeland, G. E.; Levy, G. F.

    1975-01-01

    This study was undertaken with the intent of elucidating the forest mapping capabilities of ERTS-1 MSS data when analyzed with the aid of LARS' automatic data processing techniques. The site for this investigation was the Great Dismal Swamp, a 210,000 acre wilderness area located on the Middle Atlantic coastal plain. Due to inadequate ground truth information on the distribution of vegetation within the swamp, an unsupervised classification scheme was utilized. Initially pictureprints, resembling low resolution photographs, were generated in each of the four ERTS-1 channels. Data found within rectangular training fields was then clustered into 13 spectral groups and defined statistically. Using a maximum likelihood classification scheme, the unknown data points were subsequently classified into one of the designated training classes. Training field data was classified with a high degree of accuracy (greater than 95%), and progress is being made towards identifying the mapped spectral classes.

  4. Mapping forest vegetation with ERTS-1 MSS data and automatic data processing techniques

    NASA Technical Reports Server (NTRS)

    Messmore, J.; Copeland, G. E.; Levy, G. F.

    1975-01-01

    This study was undertaken with the intent of elucidating the forest mapping capabilities of ERTS-1 MSS data when analyzed with the aid of LARS' automatic data processing techniques. The site for this investigation was the Great Dismal Swamp, a 210,000 acre wilderness area located on the Middle Atlantic coastal plain. Due to inadequate ground truth information on the distribution of vegetation within the swamp, an unsupervised classification scheme was utilized. Initially pictureprints, resembling low resolution photographs, were generated in each of the four ERTS-1 channels. Data found within rectangular training fields was then clustered into 13 spectral groups and defined statistically. Using a maximum likelihood classification scheme, the unknown data points were subsequently classified into one of the designated training classes. Training field data was classified with a high degree of accuracy (greater than 95 percent), and progress is being made towards identifying the mapped spectral classes.

  5. Tropical Cyclone Vulnerability Mapping Using Geospatial Techniques: Application to a Coastal Upazila in Bangladesh

    NASA Astrophysics Data System (ADS)

    Hoque, M. A. A.; Phinn, S. R.; Roelfsema, C. M.; Childs, I.

    2015-12-01

    Cyclones are one of the most catastrophic natural disasters. Globally, many coastal regions are vulnerable to different categories of cyclones. In Bangladesh, disasters from tropical cyclones are annual occurrences in coastal areas, and the intensity and extent of damage due to tropical cyclones are very high. An appropriate mapping approach is essential for producing detailed vulnerability assessments that deliver useful information for reducing the impacts of cyclones on people, property and the environment. The present study developed and tested a vulnerability mapping approach for tropical cyclone impacts in Sarankhola upazila, a 151 km² local government area located in coastal Bangladesh. The study applied the approach by integrating remote sensing, field data and multi-criteria evaluation at regional scales covering <1000 km². Seven criteria concerned with cyclone impacts were considered: elevation, slope, geomorphology, proximity to coastline, proximity to cyclone track, land use and population density. Thematic raster map layers were prepared for each criterion and weighted using the Analytical Hierarchy Process (AHP) with sensitivity analysis. The Weighted Linear Combination (WLC) technique was used to overlay the standardized criterion maps with their weights to produce the vulnerability map. Our results indicated that 15% of the study area had very high vulnerability, mostly close to the river and in densely populated areas, with 40% of the area, largely cropland, classed as high vulnerability. Around 25% of the area was classified as moderate vulnerability, covering most of the forests. The low and very low vulnerability areas account for 12% and 8%, respectively. This approach provided very promising results that were verified by field survey, demonstrating its strong applicability for assessing the vulnerability of coastal Bangladesh to tropical cyclones.
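
    A minimal sketch of the Weighted Linear Combination overlay, assuming three standardized criterion rasters and AHP-style weights; all raster values, weights and class breaks are hypothetical.

        import numpy as np

        # Hypothetical standardized criterion rasters (0-1, higher = more vulnerable)
        # and AHP-style weights summing to 1; all numbers illustrative.
        criteria = {
            'elevation': np.random.default_rng(3).random((4, 4)),
            'proximity_coast': np.random.default_rng(4).random((4, 4)),
            'population': np.random.default_rng(5).random((4, 4)),
        }
        weights = {'elevation': 0.5, 'proximity_coast': 0.3, 'population': 0.2}

        # Weighted Linear Combination: vulnerability = sum_i w_i * x_i per pixel.
        vulnerability = sum(weights[k] * criteria[k] for k in criteria)

        # Slice into five classes (very low ... very high) with assumed breaks.
        classes = np.digitize(vulnerability, bins=[0.2, 0.4, 0.6, 0.8])
        print(classes)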

  6. Brain MAPS: an automated, accurate and robust brain extraction technique using a template library

    PubMed Central

    Leung, Kelvin K.; Barnes, Josephine; Modat, Marc; Ridgway, Gerard R.; Bartlett, Jonathan W.; Fox, Nick C.; Ourselin, Sébastien

    2011-01-01

    Whole brain extraction is an important pre-processing step in neuro-image analysis. Manual or semi-automated brain delineations are labour-intensive and thus not desirable in large studies, meaning that automated techniques are preferable. The accuracy and robustness of automated methods are crucial because human expertise may be required to correct any sub-optimal results, which can be very time consuming. We compared the accuracy of four automated brain extraction methods: Brain Extraction Tool (BET), Brain Surface Extractor (BSE), Hybrid Watershed Algorithm (HWA) and a Multi-Atlas Propagation and Segmentation (MAPS) technique we have previously developed for hippocampal segmentation. The four methods were applied to extract whole brains from 682 1.5T and 157 3T T1-weighted MR baseline images from the Alzheimer's Disease Neuroimaging Initiative database. Semi-automated brain segmentations with manual editing and checking were used as the gold standard to compare with the results. The median Jaccard index of MAPS was higher than those of HWA, BET and BSE in 1.5T and 3T scans (p < 0.05, all tests), and the 1st-99th centile range of the Jaccard index of MAPS was smaller than those of HWA, BET and BSE in 1.5T and 3T scans (p < 0.05, all tests). HWA and MAPS were found to be best at including all brain tissues (median false negative rate ≤ 0.010% for 1.5T scans and ≤ 0.019% for 3T scans, both methods). The median Jaccard index of MAPS was similar in both 1.5T and 3T scans, whereas those of BET, BSE and HWA were higher in 1.5T scans than 3T scans (p < 0.05, all tests). We found that the diagnostic group had a small effect on the median Jaccard index of all four methods. In conclusion, MAPS had relatively high accuracy and low variability compared to HWA, BET and BSE in MR scans with and without atrophy. PMID:21195780

  7. A technique for the reconstruction of a map of continuous curves from interference fringes

    NASA Astrophysics Data System (ADS)

    Palevicius, Paulius; Ragulskis, Minvydas; Janušas, Giedrius; Palevicius, Arvydas

    2014-08-01

    The well-known phase-shifting approach for three-dimensional surface measurement uses multiple fringe patterns along with a phase-shifting algorithm to obtain 3-D profiles with high accuracy, though this approach is not applicable to dynamic object measurement techniques such as time-averaged holography, or to cases when only a single interference fringe pattern is available. In such cases the fringe tracing method, which is based on localization of the centers of interference fringes, can be used. We propose a technique for the reconstruction of the contour map from fringe patterns which comprises standard image processing techniques and a scheme for reconstruction of the map of continuous curves from the binary matrix of pixels representing fringe centers. The approach of dividing the image into grid cells is taken, and such problems as derivation of approximate line equations in each cell using the Hough transform, grouping contacting cells into curves, and interpolation between curves with fractures are solved. The functionality of this approach is demonstrated for a demanding optical image containing fractures and noise.

  8. A New Active Cavitation Mapping Technique for Pulsed HIFU Applications – Bubble Doppler

    PubMed Central

    Li, Tong; Khokhlova, Tatiana; Sapozhnikov, Oleg; Hwang, Joo Ha; Sapozhnikov, Oleg; O’Donnell, Matthew

    2015-01-01

    In this work, a new active cavitation mapping technique for pulsed high-intensity focused ultrasound (pHIFU) applications termed bubble Doppler is proposed and its feasibility tested in tissue-mimicking gel phantoms. pHIFU therapy uses short pulses, delivered at low pulse repetition frequency, to cause transient bubble activity that has been shown to enhance drug and gene delivery to tissues. The current gold standard for detecting and monitoring cavitation activity during pHIFU treatments is passive cavitation detection (PCD), which provides minimal information on the spatial distribution of the bubbles. B-mode imaging can detect hyperecho formation, but has very limited sensitivity, especially to small, transient microbubbles. The bubble Doppler method proposed here is based on a fusion of the adaptations of three Doppler techniques that had been previously developed for imaging of ultrasound contrast agents – color Doppler, pulse inversion Doppler, and decorrelation Doppler. Doppler ensemble pulses were interleaved with therapeutic pHIFU pulses using three different pulse sequences and standard Doppler processing was applied to the received echoes. The information yielded by each of the techniques on the distribution and characteristics of pHIFU-induced cavitation bubbles was evaluated separately, and found to be complementary. The unified approach - bubble Doppler – was then proposed to both spatially map the presence of transient bubbles and to estimate their sizes and the degree of nonlinearity. PMID:25265178

  9. Equivalent Dynamic Stiffness Mapping technique for identifying nonlinear structural elements from frequency response functions

    NASA Astrophysics Data System (ADS)

    Wang, X.; Zheng, G. T.

    2016-02-01

    A simple and general Equivalent Dynamic Stiffness Mapping technique is proposed for identifying the parameters or the mathematical model of a nonlinear structural element from steady-state primary harmonic frequency response functions (FRFs). The Equivalent Dynamic Stiffness is defined as the complex ratio between the internal force and the displacement response of the unknown element. Obtained from the measured response frequencies and amplitudes, the real and imaginary parts of the Equivalent Dynamic Stiffness are plotted as discrete points in a three-dimensional space over displacement amplitude and frequency; these are called the real and the imaginary Equivalent Dynamic Stiffness maps, respectively. These points form a repeatable surface because the Equivalent Dynamic Stiffness is only a function of the corresponding data, as derived in the paper. The mathematical model of the unknown element can then be obtained by surface-fitting these points with special functions selected from prior knowledge of the nonlinearity type, or with ordinary polynomials if the type of nonlinearity is not known in advance. An important merit of this technique is its capability of dealing with strong nonlinearities exhibiting complicated frequency response behaviors such as jumps and breaks in resonance curves. In addition, this technique greatly simplifies the test procedure: there is no need to pre-identify the underlying linear parameters, and the method uses the measured excitation forces and responses without requiring strict control of the excitation force during the test. The proposed technique is demonstrated and validated with four classical single-degree-of-freedom (SDOF) numerical examples and one experimental example. An application of this technique for identification of nonlinearity in multiple-degree-of-freedom (MDOF) systems is also illustrated.
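
    To make the mapping concrete, the sketch below computes the complex Equivalent Dynamic Stiffness as the ratio of internal force to displacement response and fits an ordinary polynomial surface over displacement amplitude and frequency, as one would when the nonlinearity type is not known in advance. The variable names and polynomial order are illustrative assumptions, not the authors' code.

    ```python
    import numpy as np

    def equivalent_dynamic_stiffness(F, X):
        """Complex ratio between the internal force F and the displacement
        response X of the unknown element at the primary harmonic."""
        return F / X

    def fit_stiffness_surface(A, omega, K_part, order=3):
        """Least-squares fit of one part (real or imaginary) of the Equivalent
        Dynamic Stiffness over displacement amplitude A and frequency omega with
        an ordinary polynomial surface, for use when the nonlinearity type is
        not known in advance."""
        terms = [A**i * omega**j for i in range(order + 1) for j in range(order + 1)]
        Phi = np.column_stack(terms)
        coeffs, *_ = np.linalg.lstsq(Phi, K_part, rcond=None)
        return coeffs
    ```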

  10. Utility of Close-Range Remote Sensing Techniques for Mapping Topography and Bathymetry in Small Streams

    NASA Astrophysics Data System (ADS)

    Fonstad, M. A.; Dietrich, J.

    2012-12-01

    River science is in the midst of a methodological revolution, where various remote sensing approaches have been applied to the problem of mapping river forms and processes at high resolution and over large areas. Most of these advances have been driven by airborne and spaceborne remote sensing platforms, yielding ground resolutions at meter-scale or larger. Close-range remote-sensing approaches (helicopters, UAS platforms, helikites) have yielded much higher resolutions, but with increased costs, range limitations, and/or technical complexity. Many river researchers, however, still measure riverscapes in small study areas and with a desire for high precision measurements. As such, it is reasonable to infer that ground-based remote sensing techniques might yield highly precise, low-cost, and easy to acquire information in small streams and be useful for a large user base. Some of the small-stream features that are potentially mappable with close-range imaging include particle sizes, riparian vegetation, water depth, water surface elevations and superelevations, and riverbank and near-floodplain topography. Some of the important comparative metrics include (a) spatial resolution, (b) precision and accuracy, (c) cost, (d) ease of use, (e) speed and ability to map large areas, (f) the importance of specific observation platform, (g) software and hardware considerations, and (h) systematic vs. random errors. In order to compare the utility of mapping approaches against these various metrics, we have mapped a 100 m stream test reach in eastern Oregon with both a high-quality dSLR camera and a Kinect active near-IR distance sensor. The camera can map topography through the Structure from Motion (SfM) photogrammetric approach, and can map bathymetry using either SfM or through spectral depth approaches. The Kinect potentially can map above-water topography for land and water. We test the camera-based methods and the Kinect sensor on two platforms. First, by hand

  11. Spatial Field Variability Mapping of Rice Crop using Clustering Technique from Space Borne Hyperspectral Data

    NASA Astrophysics Data System (ADS)

    Moharana, S.; Dutta, S.

    2015-12-01

    Precision farming refers to field-specific management of an agricultural crop at a spatial scale, with the aim of obtaining the highest achievable yield; to achieve this, spatial information on field variability is essential. The spatial variability occurring within an agricultural field can be revealed more readily by employing spectral techniques on hyperspectral rather than multispectral imagery. However, an advanced algorithm needs to be developed to make full use of the rich information content in hyperspectral data. In the present study, the potential of hyperspectral data acquired from a space platform was examined to map the field variation of a paddy crop and to discriminate its species. The high-dimensional data, Hyperion L1R products comprising 242 narrow spectral bands at 30 m ground resolution acquired over Assam, India (30th Sept and 3rd Oct, 2014), underwent the necessary pre-processing steps followed by geometric correction using the Hyperion L1GST product. Finally, an atmospherically corrected and spatially subset image consisting of 112 bands was obtained. By employing an advanced clustering algorithm, 12 different clusters of spectral waveforms of the crop were generated from six paddy fields for each image. The findings showed that some clusters were well discriminated, representing specific rice genotypes, while other clusters were mixed and were treated as a single rice genotype. As the vegetation index (VI) is the best indicator for vegetation mapping, three ratio-based VI maps were also generated and unsupervised classification was performed on them. The 12 clusters of paddy crop so obtained were mapped spatially onto the derived VI maps. From these findings, heterogeneity was clearly captured in one of the six rice plots (rice plot no. 1) and was also observed in the remaining five rice plots. The degree of heterogeneity was found to be greater in rice plot no. 6 than in the other plots. Subsequently, spatial variability of paddy field was
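
    A minimal sketch of the clustering step is shown below: per-pixel Hyperion spectra are grouped into 12 spectral classes. The paper's "advanced clustering algorithm" is not specified, so standard k-means from scikit-learn is used here as a stand-in, and the array names are assumptions.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def cluster_spectra(cube, n_clusters=12):
        """Group the per-pixel spectra of a hyperspectral cube (rows x cols x
        bands) into spectral classes with k-means and return a label image."""
        rows, cols, bands = cube.shape
        spectra = cube.reshape(-1, bands).astype(float)
        labels = KMeans(n_clusters=n_clusters, n_init=10,
                        random_state=0).fit_predict(spectra)
        return labels.reshape(rows, cols)
    ```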

  12. Spectropolarimetry of hot, luminous stars

    NASA Technical Reports Server (NTRS)

    Schulte-Ladbeck, Regina E.

    1994-01-01

    I review polarimetric observations of presumably single, hot luminous stars. The stellar types discussed are OB stars, B(e) supergiants, Luminous Blue Variables (LBV), Wolf-Rayet (W-R) stars, and type II supernovae (SN). It is shown that variable, intrinsic polarization is a common phenomenon in that part of the Hertzsprung-Russell (HR) diagram which these stars occupy. However, much observational work remains to be done before we can answer the most basic, statistical questions about the polarimetric properties of different groups of hot, luminous stars. Insight into the diagnostic power of polarization observations has been gained, but cannot be exploited without detailed models. Thus, while polarimetric observations do tell us that the mass-loss processes of all types of massive stars are time-dependent and anisotropic, the significance that this might have for the accuracy of their stellar parameters and evolutionary paths remains elusive.

  13. Mapping Glauconite Units Using Remote Sensing Techniques in North East of Iran

    NASA Astrophysics Data System (ADS)

    Ahmadirouhani, R.; Samiee, S.

    2014-10-01

    Glauconite is a greenish ferric-iron silicate mineral with a micaceous structure, characteristically formed in shallow marine environments. Glauconite has been used as a pigmentation agent for oil paint, as a contaminant remover in environmental studies, as a source of potassium in plant fertilizers, and in other industries. The Koppeh-Dagh basin extends across Iran, Afghanistan and Turkmenistan, and glauconite units exist in this basin. In this research, remote sensing techniques such as Spectral Angle Mapper (SAM) classification, band ratios and band composition methods were applied to SPOT, ASTER and Landsat data in three steps in order to enhance and map glauconitic units in the Koppeh-Dagh structural zone in northeast Iran.
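
    Of the techniques listed, the Spectral Angle Mapper is easy to state compactly: each pixel spectrum is compared with a reference glauconite spectrum by the angle between them, and small angles indicate a close match. The sketch below is a generic NumPy implementation of that definition; the cube layout, reference spectrum and threshold are assumptions, not values from the study.

    ```python
    import numpy as np

    def spectral_angle_map(cube, reference):
        """Spectral Angle Mapper: angle (radians) between each pixel spectrum of
        a (rows x cols x bands) cube and a reference spectrum; small angles
        indicate a close spectral match."""
        pix = cube.reshape(-1, cube.shape[-1]).astype(float)
        ref = np.asarray(reference, float)
        cosang = pix @ ref / (np.linalg.norm(pix, axis=1) * np.linalg.norm(ref) + 1e-12)
        return np.arccos(np.clip(cosang, -1.0, 1.0)).reshape(cube.shape[:2])

    # Example: flag pixels as glauconite-like where the angle is below a threshold
    # glauconite_mask = spectral_angle_map(aster_cube, glauconite_ref) < 0.1
    ```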

  14. Hotspot decorations map plasmonic patterns with the resolution of scanning probe techniques.

    PubMed

    Valev, V K; Silhanek, A V; Jeyaram, Y; Denkova, D; De Clercq, B; Petkov, V; Zheng, X; Volskiy, V; Gillijns, W; Vandenbosch, G A E; Aktsipetrov, O A; Ameloot, M; Moshchalkov, V V; Verbiest, T

    2011-06-01

    In high definition mapping of the plasmonic patterns on the surfaces of nanostructures, the diffraction limit of light remains an important obstacle. Here we demonstrate that this diffraction limit can be completely circumvented. We show that upon illuminating nanostructures made of nickel and palladium, the resulting surface-plasmon pattern is imprinted on the structures themselves; the hotspots (regions of local field enhancement) are decorated with overgrowths, allowing for their subsequent imaging with scanning-probe techniques. The resulting resolution of plasmon pattern imaging is correspondingly improved. PMID:21702624

  15. Brain-Mapping Techniques for Evaluating Poststroke Recovery and Rehabilitation: A Review

    PubMed Central

    Eliassen, James C.; Boespflug, Erin L.; Lamy, Martine; Allendorfer, Jane; Chu, Wen-Jang; Szaflarski, Jerzy P.

    2009-01-01

    Brain-mapping techniques have proven to be vital in understanding the molecular, cellular, and functional mechanisms of recovery after stroke. This article briefly summarizes the current molecular and functional concepts of stroke recovery and addresses how various neuroimaging techniques can be used to observe these changes. The authors provide an overview of various techniques including diffusion-tensor imaging (DTI), magnetic resonance spectroscopy (MRS), ligand-based positron emission tomography (PET), single-photon emission computed tomography (SPECT), regional cerebral blood flow (rCBF) and regional metabolic rate of glucose (rCMRglc) PET and SPECT, functional magnetic resonance imaging (fMRI), near infrared spectroscopy (NIRS), electroencephalography (EEG), magnetoencephalography (MEG), and transcranial magnetic stimulation (TMS). The discussion, set in the context of poststroke recovery research, informs readers about the applications and limitations of these techniques in rehabilitation research. The authors also provide suggestions on using these techniques in tandem to more thoroughly address the outstanding questions in the field. PMID:19008203

  16. Mapping.

    ERIC Educational Resources Information Center

    Kinney, Douglas M.; McIntosh, Willard L.

    1979-01-01

    The area of geological mapping in the United States in 1978 increased greatly over that reported in 1977; state geological maps were added for California, Idaho, Nevada, and Alaska last year. (Author/BB)

  17. The creation of a digital soil map for Cyprus using decision-tree classification techniques

    NASA Astrophysics Data System (ADS)

    Camera, Corrado; Zomeni, Zomenia; Bruggeman, Adriana; Noller, Joy; Zissimos, Andreas

    2014-05-01

    Considering the increasing threats that soils are experiencing, especially in semi-arid Mediterranean environments like Cyprus (erosion, contamination, sealing and salinisation), producing a high-resolution, reliable soil map is essential for further soil conservation studies. This study aims to create a 1:50,000 soil map covering the area under the direct control of the Republic of Cyprus (5,760 km2). The study consists of two major steps. The first is the creation of a raster database of predictive variables selected according to the scorpan formula (McBratney et al., 2003). Of particular interest is the possibility of using, as soil properties, data from three older island-wide soil maps and the recently published geochemical atlas of Cyprus (Cohen et al., 2011). Ten highly characteristic elements were selected and used as predictors in the present study. For the other factors, the usual variables were used: temperature and aridity index for climate; total loss on ignition, vegetation and forestry type maps for organic matter; the DEM and related relief derivatives (slope, aspect, curvature, landscape units); bedrock, surficial geology and geomorphology (Noller, 2009) for parent material and age; and a sub-watershed map to better constrain location with respect to parent-material sources. In the second step, the digital soil map is created using the Random Forests package in R. Random Forests is a decision-tree classification technique in which many trees, instead of a single one, are developed and compared to increase the stability and reliability of the prediction. The model is trained and verified on areas where a published 1:25,000 soil map obtained from field work is available, and it is then applied for predictive mapping to the other areas. Preliminary results obtained in a small area of the plain around the city of Lefkosia, where eight different soil classes are present, show very good capabilities of the method. The Random Forest approach reproduces soil
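
    The prediction step can be sketched as follows. The original work used the Random Forests package in R; the scikit-learn version below is an equivalent illustration in which the predictor stack, training labels and tree count are assumed names and values rather than the study's actual configuration.

    ```python
    from sklearn.ensemble import RandomForestClassifier

    # X_train: scorpan predictor values sampled where the 1:25,000 field-mapped
    # soil classes are available; y_train: those soil classes.
    # X_grid: the same predictors for every raster cell to be mapped.
    def predict_soil_map(X_train, y_train, X_grid, n_trees=500):
        rf = RandomForestClassifier(n_estimators=n_trees, oob_score=True,
                                    random_state=0, n_jobs=-1)
        rf.fit(X_train, y_train)
        print("Out-of-bag accuracy:", rf.oob_score_)   # internal validation
        return rf.predict(X_grid)                      # predicted soil class per cell
    ```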

  18. Nonuniformity correction using a flood technique and 1:1 mapping

    NASA Astrophysics Data System (ADS)

    Olson, Eric M.; Bergin, Thomas P.

    2001-08-01

    Although the sparse grid Non-Uniformity Correction (NUC) technique can accurately correct individual emitters on a resistor array, it is not a good solution for all projection applications. Due to busbar robbing, the sparse grid NUC breaks down when a large number of emitters are turned on simultaneously. For this case, a more appropriate NUC data collection method is needed. This method involves measuring the entire resistor array at once with a 1:1 mapping between the projector and NUC sensor. Then busbar effects, measured during the NUC data collection, can be accounted for and corrected. This paper presents details pertaining to the flood NUC technique and results. This NUC system is implemented at the Kinetic Kill Vehicle Hardware In the Loop Simulator (KHILS) at Eglin AFB, Florida.
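
    The KHILS flood NUC implementation is not described in detail here, but the underlying idea of building per-emitter correction tables from whole-array flood measurements (so that busbar effects are embedded in the calibration data) can be sketched with a simple two-point gain/offset model. All names and the linearity assumption are illustrative.

    ```python
    import numpy as np

    def flood_nuc_tables(low_frames, high_frames, low_cmd, high_cmd):
        """Per-emitter gain/offset tables from two flood measurements in which
        the whole resistor array is driven at a low and a high command level and
        imaged with a 1:1 emitter-to-pixel mapping, so busbar effects are
        present in the calibration data."""
        low = np.mean(low_frames, axis=0)     # averaged flood frames, low level
        high = np.mean(high_frames, axis=0)   # averaged flood frames, high level
        gain = (high_cmd - low_cmd) / (high - low)
        offset = low_cmd - gain * low
        return gain, offset

    def commands_for(desired_output, gain, offset):
        """Map the desired apparent output of each emitter to the drive command
        that produces it under the linear two-point model."""
        return gain * desired_output + offset
    ```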

  19. Delineating the Rattlesnake Springs, New Mexico Watershed Using Shallow Subsurface Geophysical Techniques and Geologic Mapping

    NASA Astrophysics Data System (ADS)

    Doser, D. I.; Langford, R. P.; Boykov, N. D.; Baker, M. R.; Kaip, G. M.

    2007-12-01

    Rattlesnake Springs serves as the sole water source for Carlsbad Caverns National Park. The recent development of oil and gas leases and agricultural lands surrounding the springs has led to concern about contamination of the karst aquifer. We have used geophysical techniques, combined with geologic mapping, to delineate possible fracture systems in the gypsum and carbonate bedrock that feed the spring system. Our initial work has focused on a 700 m by 700 m region surrounding the springs. We conducted a series of ground conductivity surveys with follow-up DC resistivity surveys (Wenner array vertical electrical soundings and a pole- pole survey) to determine variations in soil grain size and moisture content. Surface geologic mapping was used to identify a series of Holocene terraces and valleys that incise the terraces. Our combined results suggest that northwest-southeast and north-south trending fractures and dissolution features control regional water flow. Relict spring valleys are found to the west of the present springs. A pole-pole survey conducted around the perimeter of the springs suggests main water flow into the springs occurs from the northwest. We plan to complete a precision gravity survey in September and October 2007 to map bedrock topography and determine its relation to structural and dissolution features. Ground penetrating radar data will be collected on the northwestern side of the springs in an attempt to better delineate structures controlling inflow into the springs.

  20. Mapping of a river using close range photogrammetry technique and unmanned aerial vehicle system

    NASA Astrophysics Data System (ADS)

    Room, M. H. M.; Ahmad, A.

    2014-02-01

    Photogrammetry is a technique that can be used to record information about any feature without direct contact. Nowadays, a combination of photogrammetry and Unmanned Aerial Vehicle (UAV) systems is widely used for various applications, especially for large-scale mapping. UAV systems offer several advantages in terms of cost and image resolution compared to terrestrial photogrammetry and remote sensing systems. The combination of photogrammetry and UAVs has therefore created a new term, UAV photogrammetry. The aim of this study is to investigate the ability of a UAV system to map a river at very close distance. A digital camera is attached to the Hexacopter UAV and it is flown at 2 m above the ground surface to produce aerial photos. Then, the aerial photos are processed to create two photogrammetric products as output: a mosaicked orthophoto and a digital image. Both products are assessed using the root mean square error (RMSE). The RMSE of the X and Y coordinates is ±0.009 m and ±0.033 m, respectively. In conclusion, photogrammetry and the UAV system offer reliable accuracy for mapping a river model, along with advantages in terms of cost efficiency, high ground resolution and rapid data acquisition.

  1. Robust biological parametric mapping: an improved technique for multimodal brain image analysis

    NASA Astrophysics Data System (ADS)

    Yang, Xue; Beason-Held, Lori; Resnick, Susan M.; Landman, Bennett A.

    2011-03-01

    Mapping the quantitative relationship between structure and function in the human brain is an important and challenging problem. Numerous volumetric, surface, region of interest and voxelwise image processing techniques have been developed to statistically assess potential correlations between imaging and non-imaging metrics. Recently, biological parametric mapping has extended the widely popular statistical parametric approach to enable application of the general linear model to multiple image modalities (both for regressors and regressands) along with scalar valued observations. This approach offers great promise for direct, voxelwise assessment of structural and functional relationships with multiple imaging modalities. However, as presented, the biological parametric mapping approach is not robust to outliers and may lead to invalid inferences (e.g., artifactual low p-values) due to slight mis-registration or variation in anatomy between subjects. To enable widespread application of this approach, we introduce robust regression and robust inference in the neuroimaging context of application of the general linear model. Through simulation and empirical studies, we demonstrate that our robust approach reduces sensitivity to outliers without substantial degradation in power. The robust approach and associated software package provides a reliable way to quantitatively assess voxelwise correlations between structural and functional neuroimaging modalities.
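
    The core idea of replacing ordinary least squares with a robust fit in the voxelwise general linear model can be illustrated with a self-contained iteratively reweighted least-squares routine using Huber weights. This is a didactic stand-in, not the authors' released toolbox, and the tuning constant is the conventional default rather than a value from the paper.

    ```python
    import numpy as np

    def huber_irls(X, y, k=1.345, n_iter=50, tol=1e-8):
        """Robust GLM fit by iteratively reweighted least squares with Huber
        weights: large residuals are down-weighted so that outlier voxels or
        mis-registered subjects do not dominate the estimate."""
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        for _ in range(n_iter):
            r = y - X @ beta
            scale = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12
            u = np.abs(r) / scale
            w = np.where(u <= k, 1.0, k / u)          # Huber weight function
            sw = np.sqrt(w)
            beta_new = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
            if np.max(np.abs(beta_new - beta)) < tol:
                break
            beta = beta_new
        return beta
    ```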

  2. Proceedings of a workshop on digital mapping techniques; methods for geologic map data capture, management, and publication - June 2 - 5, 1997, Lawrence, Kansas

    USGS Publications Warehouse

    Soller, David R.

    1997-01-01

    Introduction: From June 2-5, 1997, selected technical representatives of the USGS and State geological surveys participated in the 'AASG/USGS Digital Mapping Techniques' workshop in Lawrence, Kansas. The workshop was initiated by the AASG/USGS Data Capture Working Group, and was hosted by the Kansas Geological Survey (KGS). With a focus on methods for data capture and digital map production, the goal was to help move the state surveys and the USGS toward development of more cost-effective, flexible, and useful systems for digital mapping and GIS analysis.

  3. Study of lumineers' interfaces by means of optical coherence tomography

    NASA Astrophysics Data System (ADS)

    de Andrade Borges, Erica; Fernandes Cassimiro-Silva, Patrícia; Osório Fernandes, Luana; Leônidas Gomes, Anderson Stevens

    2015-06-01

    OCT has been used to evaluate dental materials, and is employed here to evaluate lumineers for the first time. Lumineers are used as esthetic indirect restorations, and after wear and aging, several undesirable features such as gaps, bubbles and mismatch can appear that would otherwise only be seen by invasive analysis. A spectral-domain OCT system (SD-OCT, 930 nm central wavelength) was used to noninvasively evaluate the lumineer-cement-tooth interface. We analyzed 20 lumineer-tooth specimens that were prepared on bovine teeth and randomly allocated to 4 experimental groups (n=5) with two different cementation techniques and two different cementing agents (RelyX U200 and RelyX Veneer, 3M ESPE, with the adhesive recommended by the manufacturer). The lumineers were made of lithium disilicate and obtained using a vacuum injection technique. The analysis was performed using 2D and 3D OCT images obtained before and after cementing and after the thermal cycling process used to simulate thermal stress in the oral cavity. Initial measurements showed that the SD-OCT was able to see through the 500 μm thick lumineer as delivered by the manufacturer, and internal stress was observed. Failures were found in the cementing process and also after ageing simulation by thermal cycling. Adhesive failures such as bubbles, gaps and degradation of the cementation line are the natural precursors of other defects reported in several clinical follow-up studies (detachments, fractures and cracks). Bubble dimensions ranging from 146 μm to 1427 μm were measured, and OCT was validated as a precise investigative tool for evaluation of the lumineer-cement-tooth interface.

  4. Multiple mapping techniques of glaciovolcanic regions: Remote sensing and field mapping of Askja (Dyngjufjöll), Iceland

    NASA Astrophysics Data System (ADS)

    Ellis, M. K.; Graettinger, A. H.; Reath, K. A.; Skilling, I. P.; Ramsey, M. S.; Hughes, C. G.

    2011-12-01

    The Icelandic landscape is dominated by basaltic glaciovolcanic and Holocene post-glacial landforms. The lack of chemical diversity has led to significant simplification of regional maps. This includes areas of historic volcanic activity, such as Askja (Dyngjufjöll) in central Iceland. The purpose of this study is to improve the resolution of the regional map through the identification of variations within glaciovolcanic units and to more accurately constrain the products of historic eruptions. As part of a greater project to map the 45 km2 glaciovolcanic deposits of Askja volcano, Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) based mapping of the region surrounding the Dyngjufjöll massif was undertaken using previous datasets and recent field work for validation. Maps were based on 3 ASTER scenes obtained during overpasses in August 2010 and 2011. Field mapping of the glaciovolcanic deposits conducted in August 2010 and 2011, previous geologic maps of the Holocene deposits (Annertz 1985, Sigvaldason 1992) and previous remote sensing based investigations (Hjartardottir et al. 2009) were utilized to validate Visible/Near-Infrared (VNIR) and Thermal Infrared (TIR) based maps. Laboratory based TIR data were collected from samples of representative lithologic units within the field area, including glaciovolcanic deposits (pillow lavas, hyaloclastite, etc.), historical deposits (1875 pumice, 1960s basaltic lavas), and Holocene basaltic lavas from Askja. This project produced a highly detailed map of the distribution of the remaining eroded 1875 deposit within the proximal (<50 km) vent area, as well as the complexity of neighboring glaciovolcanic massifs. Additionally, the products of this investigation demonstrate the potential for remote sensing based mapping of glaciovolcanic deposits throughout Iceland and other well exposed geologic regions (e.g., British Columbia). Remote sensing aided mapping will benefit glaciovolcanic studies by

  5. Computer controlled techniques for high emission density mapping of thermionic cathodes

    NASA Astrophysics Data System (ADS)

    Gibson, J. W.; Thomas, R. E.

    1985-12-01

    Some of the techniques commonly used (e.g. SLEEP and the thermionic emission microscope) for measuring emission or work function uniformity of thermionic cathode surfaces require the use of very low or near-zero current densities; thus the cathode is characterized at current densities and temperatures much lower than those of a normally operating cathode. The system reported on here uses a high-voltage pulse technique and is capable of measuring emission densities in the range 1 to 80 A/cm2 at normal cathode operating temperatures. The cathode surface is scanned with an anode having a 0.025 mm aperture whose position is controlled by computer-operated stepping motors. The current through the aperture to a collector electrode is measured using a sample-and-hold amplifier. Pulsing and sampling are computer-synchronized with the scanning, and data for each pulse are accumulated and can be processed and displayed in several ways using the computer, including a detailed "three-dimensional" map of either the electron emission density or work function variations. The entire surface of the cathode or any portion of it can be mapped in steps as small as 0.001 mm (1 μm), but typically steps of 5-100 μm were used. Measurements are presented illustrating the uniformity or nonuniformity of the electron emission densities and work functions for type-B and type-M cathodes.
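
    Once an emission-density map has been measured, a corresponding work-function map can be derived from the Richardson-Dushman equation, J = A·T²·exp(−φ/kT). The sketch below applies this conversion pixel by pixel; it assumes zero-field conditions (no Schottky correction for the pulsed high-voltage extraction) and is not the instrument's actual software.

    ```python
    import numpy as np

    K_B_EV = 8.617e-5   # Boltzmann constant [eV/K]
    A_R = 120.17        # ideal Richardson constant [A cm^-2 K^-2]

    def work_function_map(emission_map_acm2, T_kelvin):
        """Convert a measured emission-density map J(x, y) [A/cm^2] at cathode
        temperature T [K] into an effective work-function map [eV] using the
        Richardson-Dushman equation J = A_R * T^2 * exp(-phi / (k*T))."""
        return K_B_EV * T_kelvin * np.log(A_R * T_kelvin**2 / emission_map_acm2)
    ```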

  6. Creep deformation behavior in eutectic Sn-Ag solder joints using a novel mapping technique

    SciTech Connect

    Lucas, J.P.; Guo, F.; McDougall, J.; Bieler, T.R.; Subramanian, K.N.; Park, J.K.

    1999-11-01

    Creep deformation behavior was measured for 60-100 μm thick solder joints. The solder joints investigated consisted of: (1) non-composite solder joints made with eutectic Sn-Ag solder, and (2) composite solder joints with eutectic Sn-Ag solder containing 20 vol.%, 5 μm diameter in-situ Cu6Sn5 intermetallic reinforcements. All creep testing in this study was carried out at room temperature. Creep deformation of the solder joints was assessed both qualitatively and quantitatively. Creep deformation was analyzed using a novel mapping technique in which a geometrically regular line pattern was etched over the entire solder joint using excimer laser ablation. During creep, the laser-ablation (LA) pattern becomes distorted due to deformation in the solder joint. By imaging the distortion of the laser-ablation patterns using the SEM, actual deformation mapping for the entire solder joint is revealed. The technique involves sequential optical/digital imaging of the deformation versus time history during creep. By tracing and recording the deformation of the LA patterns on the solder over intervals of time, local creep data are obtained at many locations in the joint. This analysis enables global and localized creep shear strains and strain rates to be determined.

  7. Fingerprint detection and mapping using a phase shifted coherent gradient sensing technique.

    PubMed

    Dhanotia, Jitendra; Prakash, Satya; Bhatia, Vimal; Prakash, Shashi

    2016-07-10

    In this paper, a full field technique for mapping a latent fingerprint using a coherent gradient sensing (CGS) sensor is proposed. Collimated light from an He-Ne laser illuminates a specimen comprising a fingerprint implanted onto a reflecting surface. Reflected light from the specimen is analyzed using the CGS sensor comprising a pair of gratings. Reflected light carries information regarding the depth and orientation of furrows and ridges in the fingerprint. The topological information of the fingerprint is retrieved using four-step phase shifting interferometry. Well-defined 2D and 3D phase plots have been reconstructed to map the topography of the human fingerprint. The recorded slope data reconstructs the information regarding the separation and depth of the ridges in the latent fingerprint. The proposed technique is noninvasive and full field and does not require any kind of chemical or physical treatment. The sensor is very simple, yields interferometric sensitivity, and has the advantages of easy alignment, compactness, and low cost. PMID:27409305
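
    The four-step phase-shifting step mentioned above reduces to a single arctangent per pixel when the four interferograms are recorded at 0, π/2, π and 3π/2 phase shifts. The sketch below shows that wrapped-phase computation only; the subsequent 2-D phase unwrapping and calibration to physical ridge depth are omitted.

    ```python
    import numpy as np

    def four_step_wrapped_phase(I1, I2, I3, I4):
        """Wrapped phase from four interferograms recorded at 0, pi/2, pi and
        3*pi/2 phase shifts: phi = arctan2(I4 - I2, I1 - I3). The result lies in
        (-pi, pi] and still requires 2-D phase unwrapping before conversion to
        ridge/furrow topography."""
        return np.arctan2(I4.astype(float) - I2, I1.astype(float) - I3)
    ```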

  8. Surface three-dimensional velocity map imaging: A new technique for the study of photodesorption dynamics

    SciTech Connect

    Ji Yuanyuan; Koehler, Sven P. K.; Auerbach, Daniel J.; Wodtke, Alec M.

    2010-07-15

    The authors describe the application of a combination of velocity map imaging and time-of-flight (TOF) techniques to obtain three-dimensional velocity distributions for surface photodesorption. They have established a systematic alignment procedure to achieve correct and reproducible experimental conditions. It includes four steps: (1) optimization of the velocity map imaging ion optics' voltages to achieve optimum velocity map imaging conditions; (2) alignment of the surface normal with the symmetry axis (ion flight axis) of the ion optics; (3) determination of TOF distance between the surface and the ionizing laser beam; (4) alignment of the position of the ionizing laser beam with respect to the ion optics. They applied this set of alignment procedures and then measured Br(²P₃/₂) (Br) and Br(²P₁/₂) (Br*) atoms photodesorbing from a single crystal of KBr after exposure to 193 nm light. They analyzed the velocity flux and energy flux distributions for motion normal to the surface. The Br* normal energy distribution shows two clearly resolved peaks at approximately 0.017 and 0.39 eV, respectively. The former is slightly faster than expected for thermal desorption at the surface temperature and the latter is hyperthermal. The Br normal energy distribution shows a single broad peak that is likely composed of two hyperthermal components. The capability that surface three-dimensional velocity map imaging provides for measuring state-specific velocity distributions in all three dimensions separately and simultaneously for the products of surface photodesorption or surface reactions holds great promise to contribute to our understanding of these processes.

  9. Use of Technology-Assisted Techniques of Mind Mapping and Concept Mapping in Science Education: A Constructivist Study

    ERIC Educational Resources Information Center

    Balim, Ali Günay

    2013-01-01

    The study aims to investigate the effects of using mind maps and concept maps on students' learning of concepts in science courses. A total of 51 students participated in this study which used a quasi-experimental research design with pre-test/post-test control groups. The constructivist-inspired study was carried out in the sixth-grade…

  10. A Hybrid Technique for Thickness-Map Visualization of the Hip Cartilages in MRI

    NASA Astrophysics Data System (ADS)

    Khanmohammadi, Mahdieh; Zoroofi, Reza Aghaiezadeh; Nishii, Takashi; Tanaka, Hisashi; Sato, Yoshinobu

    Quantification of the hip cartilages is clinically important. In this study, we propose an automatic technique for segmentation and visualization of the acetabular and femoral head cartilages based on clinically obtained multi-slice T1-weighted MR data and a hybrid approach. We follow a knowledge-based approach by employing several features such as the anatomical shapes of the hip femoral and acetabular cartilages and the corresponding image intensities. We estimate the center of the femoral head by a Hough transform and then automatically select the volume of interest. We then automatically segment the hip bones by a self-adaptive vector quantization technique. Next, we localize the articular central line by a modified Canny edge detector based on first and second derivative filters along radial lines originating from the femoral head center, together with anatomical constraints. We then roughly segment the acetabular and femoral head cartilages using the derivative images obtained in the previous step and a top-hat filter. Final masks of the acetabular and femoral head cartilages are generated automatically by employing the rough results, the estimated articular central line and the anatomical knowledge. Next, we generate a thickness map for each cartilage in the radial direction based on a Euclidean distance. The three-dimensional pelvic bones, the acetabular and femoral cartilages and the corresponding thicknesses are overlaid and visualized. The techniques have been implemented in a C++ and MATLAB environment. We have evaluated and demonstrated the usefulness of the proposed techniques on 40 clinical multi-slice MR hip datasets.
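
    Two of the building blocks above, the Hough-transform estimate of the femoral head center and a Euclidean-distance-based thickness measure, can be sketched with OpenCV as follows. The Hough parameters and the distance-transform thickness approximation are illustrative assumptions, not the authors' pipeline.

    ```python
    import cv2
    import numpy as np

    def femoral_head_center(slice_u8):
        """Estimate the femoral head centre on one 8-bit slice with a circular
        Hough transform (parameter values are illustrative only)."""
        circles = cv2.HoughCircles(slice_u8, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                                   param1=80, param2=30, minRadius=15, maxRadius=60)
        if circles is None:
            return None
        x, y, r = circles[0, 0]
        return (float(x), float(y)), float(r)

    def thickness_map(cartilage_mask):
        """Rough per-pixel cartilage thickness from a binary mask: twice the
        Euclidean distance to the nearest background pixel approximates the
        local thickness near the medial axis of the cartilage layer."""
        dist = cv2.distanceTransform(cartilage_mask.astype(np.uint8), cv2.DIST_L2, 5)
        return 2.0 * dist
    ```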

  11. Rethinking Program Assessment through the Use of Program Alignment Mapping Technique

    ERIC Educational Resources Information Center

    Liu, Min; Wrobbel, Duff; Blankson, Isaac

    2010-01-01

    Curriculum mapping is a well-known assessment tool used to articulate and align a curriculum. The authors present an improved method of curriculum alignment that combines the traditional curriculum mapping with what is often called prerequisite mapping. This improved method of curriculum alignment mapping we label Program Alignment Mapping (PAM)…

  12. MRI technique for the snapshot imaging of quantitative velocity maps using RARE

    NASA Astrophysics Data System (ADS)

    Shiko, G.; Sederman, A. J.; Gladden, L. F.

    2012-03-01

    A quantitative PGSE-RARE pulse sequence was developed and successfully applied to the in situ dissolution of two pharmaceutical formulations dissolving over a range of timescales. The new technique was chosen over other existing fast velocity imaging techniques because it is T2 weighted, not T2∗ weighted, and is, therefore, robust for imaging time-varying interfaces and flow in magnetically heterogeneous systems. The complex signal was preserved intact by separating odd and even echoes to obtain two phase maps which are then averaged in post-processing. Initially, the validity of the technique was shown when imaging laminar flow in a pipe. Subsequently, the dissolution of two drugs was followed in situ, where the technique enables the imaging and quantification of changes in the form of the tablet and the flow field surrounding it at high spatial and temporal resolution. First, the complete 3D velocity field around an eroding salicylic acid tablet was acquired at a resolution of 98 × 49 μm2, within 20 min, and monitored over ˜13 h. The tablet was observed to experience a heterogeneous flow field and, hence a heterogeneous shear field, which resulted in the non-symmetric erosion of the tablet. Second, the dissolution of a fast dissolving immediate release tablet was followed using one-shot 2D velocity images acquired every 5.2 s at a resolution of 390 × 390 μm2. The quantitative nature of the technique and fast acquisition times provided invaluable information on the dissolution behaviour of this tablet, which had not been attainable previously with conventional quantitative MRI techniques.

  13. 78 FR 70964 - Luminant Generation Company, LLC

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-27

    ... From the Federal Register Online via the Government Publishing Office NUCLEAR REGULATORY COMMISSION Luminant Generation Company, LLC AGENCY: Nuclear Regulatory Commission. ACTION: Combined license... for four consecutive weeks of a combined license (COL) application from Luminant Generation...

  14. 78 FR 66785 - Luminant Generation Company, LLC

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-06

    ... From the Federal Register Online via the Government Publishing Office NUCLEAR REGULATORY COMMISSION Luminant Generation Company, LLC AGENCY: Nuclear Regulatory Commission. ACTION: Notice of receipt... consecutive weeks of ] a combined license (COL) application from Luminant Generation Company, LLC....

  15. 78 FR 68100 - Luminant Generation Company, LLC

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-13

    ... From the Federal Register Online via the Government Publishing Office NUCLEAR REGULATORY COMMISSION Luminant Generation Company, LLC AGENCY: U.S. Nuclear Regulatory Commission (NRC). ACTION... consecutive weeks of a combined license (COL) application from Luminant Generation Company, LLC....

  16. 78 FR 69710 - Luminant Generation Company, LLC

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-20

    ... From the Federal Register Online via the Government Publishing Office NUCLEAR REGULATORY COMMISSION Luminant Generation Company, LLC AGENCY: U.S. Nuclear Regulatory Commission (NRC). ACTION... consecutive weeks of a combined license (COL) application from Luminant Generation Company, LLC....

  17. The Development of Luminance- and Texture-Defined Form Perception during the School-Aged Years

    ERIC Educational Resources Information Center

    Bertone, Armando; Hanck, Julie; Guy, Jacalyn; Cornish, Kim

    2010-01-01

    The objective of the present study was to assess the development of luminance- and texture-defined static form perception in school-aged children. This was done using an adapted Landolt-C technique where C-optotypes were defined by either luminance or texture information, the latter necessitating extra-striate neural processing to be perceived.…

  18. Updating Landsat-derived land-cover maps using change detection and masking techniques

    NASA Technical Reports Server (NTRS)

    Likens, W.; Maw, K.

    1982-01-01

    The California Integrated Remote Sensing System's San Bernardino County Project was devised to study the utilization of a data base at a number of jurisdictional levels. The present paper discusses the implementation of change-detection and masking techniques in the updating of Landsat-derived land-cover maps. A baseline landcover classification was first created from a 1976 image, then the adjusted 1976 image was compared with a 1979 scene by the techniques of (1) multidate image classification, (2) difference image-distribution tails thresholding, (3) difference image classification, and (4) multi-dimensional chi-square analysis of a difference image. The union of the results of methods 1, 3 and 4 was used to create a mask of possible change areas between 1976 and 1979, which served to limit analysis of the update image and reduce comparison errors in unchanged areas. The techniques of spatial smoothing of change-detection products, and of combining results of difference change-detection algorithms are also shown to improve Landsat change-detection accuracies.
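
    Method (2) above, difference-image distribution-tails thresholding, can be sketched in a few lines: pixels whose difference value falls in the tails of the difference distribution are flagged as candidate change, and masks from several methods are then combined by union. The threshold and array names are assumptions.

    ```python
    import numpy as np

    def tail_threshold_change_mask(img_t1, img_t2, tail=2.0):
        """Difference-image 'distribution tails' thresholding: flag pixels whose
        band-summed difference lies more than `tail` standard deviations from
        the mean of the difference distribution."""
        diff = img_t2.astype(float) - img_t1.astype(float)
        d = diff.reshape(diff.shape[0], diff.shape[1], -1).sum(axis=2)
        return np.abs(d - d.mean()) > tail * d.std()

    # The update mask is the union of several change-detection results, e.g.:
    # possible_change = mask_multidate | mask_diff_class | mask_chi_square
    ```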

  19. Probabilistic flood inundation mapping of ungauged rivers: Linking GIS techniques and frequency analysis

    NASA Astrophysics Data System (ADS)

    Sarhadi, Ali; Soltani, Saeed; Modarres, Reza

    2012-08-01

    This study presents an exhaustive methodology for floodplain mapping at ungauged rivers. To present our methodology, we selected the Halilrud basin and Jiroft city in southeastern Iran as an example of hazardous regions. To estimate flood quantiles for different return periods at ungauged reaches, we used regional flood frequency analysis. By using the well-known L-moments approach and related criteria, a homogeneous region was formed and the three-parameter log-normal distribution was identified as the robust regional frequency distribution. The hydro-geomorphic characteristics and the land use properties of the catchments were then extracted using RS&GIS techniques to establish multivariate regional regression models between hydro-geomorphic characteristics and flood quantiles. After delineation of the catchments for the ungauged reaches, flood quantiles at the outlets of these reaches, an important factor in flood inundation, were estimated for different probabilities using the regional regression models. To delineate flood hazard maps and to enhance the accuracy of the hydraulic modeling, we applied stereoscopic Cartosat-1 satellite images along with the Rational Polynomial Coefficients to extract a high-resolution DTM and the detailed channel parameterization required by the 1D hydraulic model HEC-RAS. The GIS-based HEC-GeoRAS pre- and post-processor was also used for careful optimization of the geometry features for realistic visualization of the flood-prone areas. Information on some historical flood events was also used to evaluate the hydraulic model's performance in predicting flood inundation. Finally, vulnerable areas were intersected with land use maps extracted from IRS-P6 satellite images to identify the critical infrastructure and valuable land use classes affected by floods at different return periods.
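
    For the flood-quantile step, a minimal sketch is shown below: annual maximum flows are fitted with a three-parameter log-normal distribution and quantiles are read off at the desired return periods. Plain maximum likelihood via SciPy is used here for brevity, whereas the study itself estimated regional growth curves with L-moments; the function and variable names are assumptions.

    ```python
    import numpy as np
    from scipy import stats

    def flood_quantiles(annual_maxima, return_periods=(2, 10, 25, 50, 100)):
        """Fit a 3-parameter log-normal distribution to annual maximum flows and
        return the flood quantile for each return period T (non-exceedance
        probability 1 - 1/T)."""
        shape, loc, scale = stats.lognorm.fit(np.asarray(annual_maxima, float))
        return {T: float(stats.lognorm.ppf(1.0 - 1.0 / T, shape, loc, scale))
                for T in return_periods}
    ```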

  20. [Endoscopic management of luminal stenosis in inflammatory bowel disease].

    PubMed

    Lorenzo-Zúñiga, Vicente; García-Planella, Esther; Moreno De Vega, Vicente; Domènech, Eugeni; Boix, Jaume

    2012-01-01

    Luminal stenosis is frequent in Crohn's disease (CD) due to transmural involvement. Before any endoscopic treatment, the presence of neoplastic stenosis should always be excluded. Endoscopic balloon dilatation has been used in several series to treat benign stenosis, mainly in CD with involvement of the distal ileum, colon or surgical anastomosis, with success rates of 51% to 85%, although recurrence is high. The concomitant use of injected steroids (triamcinolone) after endoscopic dilatation produces longer-lasting results, but there are few published reports. In patients with luminal stenosis refractory to conventional endoscopy, three emerging techniques may be useful: self-expanding metallic stents, biodegradable endoprostheses and intralesional infliximab injection. PMID:22341673

  1. Mapping air pollution using Earth observation techniques for cultural heritage sites

    NASA Astrophysics Data System (ADS)

    Agapiou, Athos; Nisantzi, Argyro; Lysandrou, Vasiliki; Mamouri, Rodanthi; Alexakis, Dimitrios D.; Themistocleous, Kyriacos; Sarris, Apostolos; Hadjimitsis, Diofantos G.

    2013-08-01

    Air pollutants, together with climatic parameters, are of major importance for the deterioration of cultural heritage monuments. Atmospheric pollution is widely recognized as one of the major anthropogenic threats to architectural cultural heritage, in particular when associated with water absorption phenomena. Atmospheric particle deposition on the surfaces of monuments of cultural heritage interest may cause an aesthetic impact induced by a series of chemical reactions. Therefore, there is a need for systematic monitoring and mapping of air pollution in areas where important archaeological sites and monuments are found. Earth observation techniques, such as the use of satellite images for the retrieval of Aerosol Optical Thickness (AOT), are ideal for this purpose. In this paper, all important monuments of the Paphos District, listed by the Department of Antiquities of Cyprus, have been mapped using Geographical Information Systems. Several recent (2012) MODIS satellite images (both Aqua and Terra) have been used to extract the AOT values in this area. Multi-temporal analysis was performed to identify high-risk areas where AOT values are considered to be high. In situ observations have also been carried out to verify the results.

  2. Inventory and mapping of flood inundation using interactive digital image analysis techniques

    USGS Publications Warehouse

    Rohde, Wayne G.; Nelson, Charles A.; Taranik, J.V.

    1979-01-01

    LANDSAT digital data and color infra-red photographs were used in a multiphase sampling scheme to estimate the area of agricultural land affected by a flood. The LANDSAT data were classified with a maximum likelihood algorithm. Stratification of the LANDSAT data, prior to classification, greatly reduced misclassification errors. The classification results were used to prepare a map overlay showing the areal extent of flooding. These data also provided the statistics required to estimate sample size in a two-phase sampling scheme, and provided quick, accurate estimates of the areas flooded for the first phase. The measurements made in the second phase, based on ground data and photo-interpretation, were used with two-phase sampling statistics to estimate the area of agricultural land affected by flooding. These results show that LANDSAT digital data can be used to prepare map overlays showing the extent of flooding on agricultural land and, with two-phase sampling procedures, can provide acreage estimates with sampling errors of about 5 percent. This procedure provides a technique for rapidly assessing the areal extent of flood conditions on agricultural land and would provide a basis for designing a sampling framework to estimate the impact of flooding on crop production.

  3. Quantitative mapping of pore fraction variations in silicon nitride using an ultrasonic contact scan technique

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Kiser, James D.; Swickard, Suzanne M.; Szatmary, Steven A.; Kerwin, David P.

    1993-01-01

    An ultrasonic scan procedure using the pulse-echo contact configuration was employed to obtain maps of pore fraction variations in sintered silicon nitride samples in terms of ultrasonic material properties. Ultrasonic velocity, attenuation coefficient, and reflection coefficient images were obtained simultaneously over a broad band of frequencies (e.g., 30 to 110 MHz) by using spectroscopic analysis. Liquid and membrane (dry) coupling techniques and longitudinal and shear-wave energies were used. The major results include the following: Ultrasonic velocity (longitudinal and shear wave) images revealed and correlated with the extent of average through-thickness pore fraction variations in the silicon nitride disks. Attenuation coefficient images revealed pore fraction nonuniformity due to the scattering that occurred at boundaries between regions of high and low pore fraction. Velocity and attenuation coefficient images were each nearly identical for machined and polished disks, making the method readily applicable to machined materials. Velocity images were similar for wet and membrane coupling. Maps of apparent Poisson's ratio constructed from longitudinal and shear-wave velocities quantified Poisson's ratio variations across a silicon nitride disk. Thermal wave images of a disk indicated transient thermal behavior variations that correlated with observed variations in pore fraction and velocity and attenuation coefficients.

  4. Supersonic shear imaging: a new technique for soft tissue elasticity mapping.

    PubMed

    Bercoff, Jérémy; Tanter, Mickaël; Fink, Mathias

    2004-04-01

    Supersonic shear imaging (SSI) is a new ultrasound-based technique for real-time visualization of soft tissue viscoelastic properties. Using ultrasonic focused beams, it is possible to remotely generate mechanical vibration sources radiating low-frequency shear waves inside tissues. Relying on this concept, SSI creates such a source and moves it at a supersonic speed. In analogy with the "sonic boom" created by a supersonic aircraft, the resulting shear waves interfere constructively along a Mach cone, creating two intense plane shear waves. These waves propagate through the medium and are progressively distorted by tissue heterogeneities. An ultrafast scanner prototype is able to both generate this supersonic source and image (5000 frames/s) the propagation of the resulting shear waves. Using inversion algorithms, the shear elasticity of the medium can be mapped quantitatively from this propagation movie. SSI enables tissue elasticity mapping in less than 20 ms, even in strongly viscous media such as breast tissue. Modalities such as shear compounding can be implemented by tilting the shear waves in different directions to improve the elasticity estimation. Results validating SSI in heterogeneous phantoms are presented. The first in vivo investigations, made on healthy volunteers, emphasize the potential clinical applicability of SSI for breast cancer detection. PMID:15139541
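
    The final inversion from the shear-wave propagation movie to elasticity can be illustrated with a simple time-of-flight estimate: a linear fit of arrival time against lateral position gives the local shear-wave speed c, from which μ = ρc² and, for nearly incompressible soft tissue, E ≈ 3μ. This is a didactic sketch, not the scanner's inversion algorithm, and the density value is an assumption.

    ```python
    import numpy as np

    RHO = 1000.0  # assumed soft-tissue density [kg/m^3]

    def elasticity_from_arrivals(lateral_pos_m, arrival_time_s):
        """Local shear-wave speed from a linear fit of arrival time versus
        lateral position (time of flight), then mu = rho * c^2 and E ~ 3 * mu
        for a nearly incompressible, purely elastic medium."""
        slope, _ = np.polyfit(lateral_pos_m, arrival_time_s, 1)  # slope = 1 / c
        c = 1.0 / slope
        mu = RHO * c ** 2          # shear modulus [Pa]
        return mu, 3.0 * mu        # (shear modulus, Young's modulus estimate)
    ```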

  5. Performance prediction of four-contact vertical Hall-devices using a conformal mapping technique

    NASA Astrophysics Data System (ADS)

    Yang, Huang; Yue, Xu; Yufeng, Guo

    2015-12-01

    Instead of the conventional design with five contacts in the sensor active area, innovative vertical Hall devices (VHDs) with four contacts and six contacts are asymmetrical in structural design but symmetrical in the current flow that can be well fit for the spinning current technique for offset elimination. In this article, a conformal mapping calculation method is used to predict the performance of asymmetrical VHD embedded in a deep n-well with four contacts. Furthermore, to make the calculation more accurate, the junction field effect is also involved into the conformal mapping method. The error between calculated and simulated results is less than 5% for the current-related sensitivity, and approximately 13% for the voltage-related sensitivity. This proves that such calculations can be used to predict the optimal structure of the vertical Hall-devices. Project supported by the Natural Science Foundation of Jiangsu Province, China (Nos. BK20131379, BK20141431) and the Graduate Research and Innovation Projects of Jiangsu Province (No. SJLX_0373).

  6. Mapping the temporal pole with a specialized electrode array: technique and preliminary results

    PubMed Central

    Abel, Taylor J.; Rhone, Ariane E.; Nourski, Kirill V.; Granner, Mark A.; Oya, Hiroyuki; Griffiths, Timothy D.; Tranel, Daniel T.; Kawasaki, Hiroto; Howard, Matthew A.

    2014-01-01

    Temporopolar cortex plays a crucial role in the pathogenesis of temporal lobe epilepsy and subserves important cognitive functions. Because of its shape and position in the middle cranial fossa, complete electrode coverage of the temporal pole (TP) is difficult to achieve using existing devices. We designed a novel TP electrode array that conforms to the surface of temporopolar cortex and achieves dense electrode coverage of this important brain region. A multi-pronged electrode array was designed that can be placed over the surface of the TP using a straightforward insertion technique. Twelve patients with medically intractable epilepsy were implanted with the TP electrode array for purposes of seizure localization. Select patients underwent cognitive mapping by electrocorticographic (ECoG) recording from the TP during a naming task. Use of the array resulted in excellent TP electrode coverage in all patients. High quality ECoG data were consistently obtained for purposes of delineating seizure activity and functional mapping. During a naming task, significant increases in ECoG power were observed within localized subregions of the TP. One patient developed a transient neurological deficit thought to be related to the mass effect of multiple intracranial recording arrays, including the TP array. This deficit resolved following removal of all electrodes. The TP electrode array overcomes limitations of existing devices and enables clinicians and researchers to obtain optimal multi-site recordings from this important brain region. PMID:24480831

  7. Pilot Application of 3d Underwater Imaging Techniques for Mapping Posidonia Oceanica (L.) Delile Meadows

    NASA Astrophysics Data System (ADS)

    Rende, F. S.; Irving, A. D.; Lagudi, A.; Bruno, F.; Scalise, S.; Cappa, P.; Montefalcone, M.; Bacci, T.; Penna, M.; Trabucco, B.; Di Mento, R.; Cicero, A. M.

    2015-04-01

    Seagrass communities are considered one of the most productive and complex marine ecosystems. Seagrasses belong to a small group of 66 species that can form extensive meadows in all coastal areas of our planet. Posidonia oceanica beds are the most characteristic ecosystem of the Mediterranean Sea, and should be constantly monitored, preserved and maintained, as specified by EU Habitats Directive for priority habitats. Underwater 3D imaging by means of still or video cameras can allow a detailed analysis of the temporal evolution of these meadows, but also of the seafloor morphology and integrity. Video-photographic devices and open source software for acquiring and managing 3D optical data rapidly became more and more effective and economically viable, making underwater 3D mapping an easier task to carry out. 3D reconstruction of the underwater scene can be obtained with photogrammetric techniques that require just one or more digital cameras, also in stereo configuration. In this work we present the preliminary results of a pilot 3D mapping project applied to the P. oceanica meadow in the Marine Protected Area of Capo Rizzuto (KR, Calabria Region - Italy).

  8. Mapping to Irregular Torus Topologies and Other Techniques for Petascale Biomolecular Simulation

    PubMed Central

    Phillips, James C.; Sun, Yanhua; Jain, Nikhil; Bohm, Eric J.; Kalé, Laxmikant V.

    2014-01-01

    Currently deployed petascale supercomputers typically use toroidal network topologies in three or more dimensions. While these networks perform well for topology-agnostic codes on a few thousand nodes, leadership machines with 20,000 nodes require topology awareness to avoid network contention for communication-intensive codes. Topology adaptation is complicated by irregular node allocation shapes and holes due to dedicated input/output nodes or hardware failure. In the context of the popular molecular dynamics program NAMD, we present methods for mapping a periodic 3-D grid of fixed-size spatial decomposition domains to 3-D Cray Gemini and 5-D IBM Blue Gene/Q toroidal networks to enable hundred-million atom full machine simulations, and to similarly partition node allocations into compact domains for smaller simulations using multiple-copy algorithms. Additional enabling techniques are discussed and performance is reported for NCSA Blue Waters, ORNL Titan, ANL Mira, TACC Stampede, and NERSC Edison. PMID:25594075

  9. A Multi-Resolution Nonlinear Mapping Technique for Design and Analysis Applications

    NASA Technical Reports Server (NTRS)

    Phan, Minh Q.

    1998-01-01

    This report describes a nonlinear mapping technique where the unknown static or dynamic system is approximated by a sum of dimensionally increasing functions (one-dimensional curves, two-dimensional surfaces, etc.). These lower dimensional functions are synthesized from a set of multi-resolution basis functions, where the resolutions specify the level of details at which the nonlinear system is approximated. The basis functions also cause the parameter estimation step to become linear. This feature is taken advantage of to derive a systematic procedure to determine and eliminate basis functions that are less significant for the particular system under identification. The number of unknown parameters that must be estimated is thus reduced and compact models obtained. The lower dimensional functions (identified curves and surfaces) permit a kind of "visualization" into the complexity of the nonlinearity itself.

  10. A Multi-Resolution Nonlinear Mapping Technique for Design and Analysis Application

    NASA Technical Reports Server (NTRS)

    Phan, Minh Q.

    1997-01-01

    This report describes a nonlinear mapping technique where the unknown static or dynamic system is approximated by a sum of dimensionally increasing functions (one-dimensional curves, two-dimensional surfaces, etc.). These lower dimensional functions are synthesized from a set of multi-resolution basis functions, where the resolutions specify the level of detail at which the nonlinear system is approximated. The basis functions also cause the parameter estimation step to become linear. This feature is taken advantage of to derive a systematic procedure to determine and eliminate basis functions that are less significant for the particular system under identification. The number of unknown parameters that must be estimated is thus reduced and compact models are obtained. The lower dimensional functions (identified curves and surfaces) permit a kind of "visualization" into the complexity of the nonlinearity itself.

  11. DIGITAL PROCESSING TECHNIQUES FOR IMAGE MAPPING WITH LANDSAT TM AND SPOT SIMULATOR DATA.

    USGS Publications Warehouse

    Chavez, Pat S., Jr.

    1984-01-01

    To overcome certain problems associated with the visual selection of Landsat TM bands for image mapping, the author used a quantitative technique that ranks the 20 possible three-band combinations based upon their information content. Standard deviations and correlation coefficients can be used to compute a value called the Optimum Index Factor (OIF) for each of the 20 possible combinations. SPOT simulator images were digitally processed and compared with Landsat-4 Thematic Mapper (TM) images covering a semi-arid region in northern Arizona and a highly vegetated urban area near Washington, D. C. Statistical comparisons indicate that more radiometric or color information exists in certain TM three-band combinations than in the three SPOT bands.
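
    The OIF ranking is commonly computed as the sum of the three bands' standard deviations divided by the sum of the absolute values of their pairwise correlation coefficients; higher values indicate more variance and less redundancy. A minimal sketch for a hypothetical six-band stack (so that 20 three-band combinations exist):

      # Rank all three-band combinations of a multiband image by the
      # Optimum Index Factor: OIF = sum(std devs) / sum(|pairwise correlations|).
      from itertools import combinations
      import numpy as np

      bands = np.random.rand(6, 512, 512)        # hypothetical 6-band image stack
      flat = bands.reshape(bands.shape[0], -1)
      std = flat.std(axis=1)
      corr = np.corrcoef(flat)

      scores = []
      for i, j, k in combinations(range(bands.shape[0]), 3):
          oif = (std[i] + std[j] + std[k]) / (
              abs(corr[i, j]) + abs(corr[i, k]) + abs(corr[j, k]))
          scores.append(((i, j, k), oif))

      # Highest OIF = most information (high variance, low redundancy).
      for combo, oif in sorted(scores, key=lambda s: s[1], reverse=True)[:5]:
          print(combo, round(oif, 3))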

  12. Application of the lamp mapping technique for overlap function for Raman lidar systems.

    PubMed

    Walker, Monique; Venable, Demetrius; Whiteman, David N; Sakai, Tetsu

    2016-04-01

    Traditionally, the lidar water vapor mixing ratio (WVMR) is corrected for overlap using data from another instrument, such as a radiosonde. Here we introduce a new experimental method to determine the overlap function using the lamp mapping technique (LMT), which relies on the lidar optics and detection system. The LMT discussed here involves a standard halogen lamp being scanned over the aperture of a Raman lidar telescope in synchronization with the lidar detection system [Appl. Opt. 50, 4622 (2011), doi:10.1364/AO.50.004622; Appl. Opt. 53, 8538 (2014), doi:10.1364/AO.53.008535]. In this paper, we show results for an LMT-determined overlap function for individual channels, as well as a WVMR overlap function. We found that the LMT-determined WVMR overlap functions deviate by less than 5% from the traditional radiosonde-determined overlap. PMID:27139656

  13. A new context-model for the pseudo-distance technique in lossless compression of color-mapped images

    NASA Astrophysics Data System (ADS)

    Koc, Basar; Arnavut, Ziya

    2012-10-01

    In this work, we propose a method that utilizes a new context model along with a pseudo-distance technique in compression of color-mapped images. Graphic Interchange Format (GIF) and Portable Network Graphics (PNG) are two of the well-known and frequently used techniques for the compression of color-mapped images. There are several techniques that achieve better compression results than GIF and PNG; however, most of these techniques need two passes on the image data, while others do not run in linear time. The pseudo-distance technique runs in linear time and requires only one pass. We show that using the proposed context model along with the pseudo-distance technique yields better results than both PNG and GIF.

  14. Chromospheres of Luminous Cool Stars

    NASA Astrophysics Data System (ADS)

    Dupree, Andrea K.; Avrett, Eugene

    2015-08-01

    Ultraviolet imaging of Alpha Orionis (Betelgeuse) reveals a complex variable chromospheric structure. Such atmospheres in luminous cool stars can affect features in the optical spectrum. Constructing semi-empirical model atmospheres of luminous stars including the temperature rise due to a chromosphere allows us to predict potential effects on optical transitions. The radiative transfer code, PANDORA, calculates line strengths in an LTE or non-LTE formulation with spherical symmetry, and includes velocity fields when present. Various aspects of the line calculations and their impact on equivalent widths will be discussed, including developing appropriate chromospheric models, comparison to a pure radiative equilibrium model, transitions sensitive to non-LTE, and the effects of a realistic spherical non-LTE approximation as compared to a plane-parallel approximation. We discuss the extent to which a chromosphere can impact the determination of stellar abundances.

  15. Mapping aboveground woody biomass using forest inventory, remote sensing and geostatistical techniques.

    PubMed

    Yadav, Bechu K V; Nandy, S

    2015-05-01

    Mapping forest biomass is fundamental for estimating CO₂ emissions, and planning and monitoring of forests and ecosystem productivity. The present study attempted to map aboveground woody biomass (AGWB) integrating forest inventory, remote sensing and geostatistical techniques, viz., direct radiometric relationships (DRR), k-nearest neighbours (k-NN) and cokriging (CoK), and to evaluate their accuracy. A part of the Timli Forest Range of Kalsi Soil and Water Conservation Division, Uttarakhand, India was selected for the present study. Stratified random sampling was used to collect biophysical data from 36 sample plots of 0.1 ha (31.62 m × 31.62 m) size. Species-specific volumetric equations were used for calculating volume and multiplied by specific gravity to get biomass. Three forest-type density classes, viz. 10-40, 40-70 and >70% of Shorea robusta forest and four non-forest classes were delineated using on-screen visual interpretation of IRS P6 LISS-III data of December 2012. The volume in different strata of forest-type density ranged from 189.84 to 484.36 m(3) ha(-1). The total growing stock of the forest was found to be 2,024,652.88 m(3). The AGWB ranged from 143 to 421 Mgha(-1). Spectral bands and vegetation indices were used as independent variables and biomass as the dependent variable for DRR, k-NN and CoK. After validation and comparison, the k-NN method with Mahalanobis distance (root mean square error (RMSE) = 42.25 Mgha(-1)) was found to be the best method, followed by fuzzy distance and Euclidean distance with RMSE of 44.23 and 45.13 Mgha(-1) respectively. DRR was found to be the least accurate method with RMSE of 67.17 Mgha(-1). The study highlighted the potential of integrating forest inventory, remote sensing and geostatistical techniques for forest biomass mapping. PMID:25930205
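
    The k-NN step singled out as most accurate can be sketched as follows, assuming a small table of per-plot spectral predictors and field-measured biomass (the arrays below are synthetic placeholders, not the study data); the Mahalanobis metric is supplied to scikit-learn through its inverse-covariance parameter. This is an illustration of the general approach, not the authors' workflow.

      # k-NN imputation of aboveground woody biomass from spectral predictors,
      # using a Mahalanobis distance. X (bands/indices per plot) and
      # y (field biomass, Mg/ha) are hypothetical.
      import numpy as np
      from sklearn.neighbors import KNeighborsRegressor
      from sklearn.model_selection import cross_val_predict

      rng = np.random.default_rng(1)
      X = rng.normal(size=(36, 5))             # 36 plots x 5 spectral predictors
      y = 140 + 280 * rng.random(36)           # biomass in the 143-421 Mg/ha range

      VI = np.linalg.inv(np.cov(X, rowvar=False))  # inverse covariance for Mahalanobis
      knn = KNeighborsRegressor(n_neighbors=5, weights="distance",
                                metric="mahalanobis", metric_params={"VI": VI})

      pred = cross_val_predict(knn, X, y, cv=6)
      rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
      print("cross-validated RMSE (Mg/ha):", round(rmse, 2))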

  16. Application of Geologic Mapping Techniques and Autonomous Feature Detection to Future Exploration of Europa

    NASA Astrophysics Data System (ADS)

    Bunte, M. K.; Tanaka, K. L.; Doggett, T.; Figueredo, P. H.; Lin, Y.; Greeley, R.; Saripalli, S.; Bell, J. F.

    2013-12-01

    Europa's extremely young surface age, evidence for extensive resurfacing, and indications of a sub-surface ocean elevate its astrobiological potential for habitable environments and make it a compelling focus for study. Knowledge of the global distribution and timing of Europan geologic units is a key step in understanding the history of the satellite and for identifying areas relevant for exploration. I have produced a 1:15M scale global geologic map of Europa which represents a proportionate distribution of four unit types and associated features: plains, linea, chaos, and crater materials. Mapping techniques differ somewhat from other planetary maps but do provide a method to establish stratigraphic markers and to illustrate the surface history through four periods of formation as a function of framework lineament cross-cutting relationships. Correlations of observed features on Europa with Earth analogs enforce a multi-process theory for formation rather than the typical reliance on the principle of parsimony. Lenticulae and microchaos are genetically similar and most likely form by diapirism. Platy and blocky chaos units, endmembers of archetypical chaos, are best explained by brine mobilization. Ridges account for the majority of lineaments and may form by a number of methods indicative of local conditions; most form by either tidal pumping or shear heating. The variety of morphologies exhibited by bands indicates that multiple formation mechanisms apply once fracturing of the brittle surface over a ductile subsurface is initiated. Mapping results support the interpretation that Europa's shell has thickened over time resulting in changes in the style and intensity of deformation. Mapping serves as an index for change detection and classification, aids in pre-encounter targeting, and supports the selection of potential landing sites. Highest priority target areas are those which indicate geophysical activity by the presence of volcanic plumes, outgassing, or

  17. Effect of display polarity and luminance contrast on visual lobe shape characteristics.

    PubMed

    Tsang, Steve N H; Chan, Alan H S; Yu, R F

    2012-01-01

    The effect of display polarity and luminance contrast on visual lobe (effective visual field) shape characteristics was studied using three levels of luminance contrast with combinations of positive and negative polarities. The binocular effective visual field for a detection task, with a peripherally presented target (V) embedded in a homogeneous competing background (Xs), was mapped on 24 imaginary axes passing through the fixation point. The results showed that visual lobes mapped using positive polarity were statistically larger in area, rounder and more regular in shape than those for negative polarity. The medium contrast condition lobes were more symmetric and regular than low contrast condition lobes, and lobe area and perimeter increased with increasing luminance contrast ratio. Under the interaction of positive polarity and high luminance contrast, visual lobes were found to be larger, smoother and rounder. The high level of luminance and contrast, however, resulted in a higher degree of visual discomfort. The results indicated that positive polarity and contrast of medium (26:1) to high (41:1) levels are possible display settings for better visual lobe characteristics and better anticipated search performance. Practitioner Summary: The effect of display polarity and luminance contrast on visual lobe shape characteristics was examined with uniform stimulus materials in this study. The results help to identify the optimum display settings for luminance contrast and display polarity to enhance lobe shape characteristics and hence search performance in industrial inspection tasks. PMID:22676836
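
    Shape indices of the kind reported (area, perimeter, roundness) can be derived from the detection extents measured along the 24 mapping axes by treating the lobe boundary as a polygon in polar coordinates. A minimal sketch with hypothetical radii:

      # Derive simple lobe shape indices from detection extents measured on
      # 24 equally spaced axes through the fixation point (radii are hypothetical).
      import numpy as np

      radii = 4.0 + 1.5 * np.random.rand(24)          # detection extent per axis (deg)
      angles = np.deg2rad(np.arange(24) * 15.0)
      x, y = radii * np.cos(angles), radii * np.sin(angles)

      # Shoelace formula for polygon area; edge lengths for the perimeter.
      area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
      perimeter = float(np.sum(np.hypot(np.diff(np.append(x, x[0])),
                                        np.diff(np.append(y, y[0])))))
      roundness = 4.0 * np.pi * area / perimeter ** 2  # 1.0 for a perfect circle
      print(round(area, 2), round(perimeter, 2), round(roundness, 3))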

  18. A self-trained classification technique for producing 30 m percent-water maps from Landsat data

    USGS Publications Warehouse

    Rover, Jennifer R.; Wylie, Bruce K.; Ji, Lei

    2010-01-01

    Small bodies of water can be mapped with moderate-resolution satellite data using methods where water is mapped as subpixel fractions using field measurements or high-resolution images as training datasets. A new method, developed from a regression-tree technique, uses a 30 m Landsat image for training the regression tree that, in turn, is applied to the same image to map subpixel water. The self-trained method was evaluated by comparing the percent-water map with three other maps generated from established percent-water mapping methods: (1) a regression-tree model trained with a 5 m SPOT 5 image, (2) a regression-tree model based on endmembers and (3) a linear unmixing classification technique. The results suggest that subpixel water fractions can be accurately estimated when high-resolution satellite data or intensively interpreted training datasets are not available, which increases our ability to map small water bodies or small changes in lake size at a regional scale.
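
    A minimal sketch of the self-training idea under stated assumptions: confidently pure water and land pixels are labelled from the same image (here via a hypothetical NDWI threshold), a regression tree is fitted to their band values, and the tree is applied back to every pixel to estimate percent water. This illustrates the general approach only, not the authors' exact regression-tree model.

      # Self-trained subpixel water mapping sketch. Thresholds, band ordering and
      # the image stack are assumptions for illustration.
      import numpy as np
      from sklearn.tree import DecisionTreeRegressor

      bands = np.random.rand(6, 400, 400)              # hypothetical Landsat stack
      green, swir = bands[1], bands[4]
      ndwi = (green - swir) / (green + swir + 1e-9)

      features = bands.reshape(6, -1).T
      fraction = np.full(ndwi.size, np.nan)
      fraction[ndwi.ravel() > 0.4] = 1.0               # confidently water
      fraction[ndwi.ravel() < -0.2] = 0.0              # confidently land

      train = ~np.isnan(fraction)
      tree = DecisionTreeRegressor(max_depth=8, min_samples_leaf=50)
      tree.fit(features[train], fraction[train])

      percent_water = tree.predict(features).reshape(400, 400) * 100.0
      print("scene mean percent water:", round(float(percent_water.mean()), 2))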

  19. Threading Dislocation Characterization and Stress Mapping Depth Profiling via Ray Tracing Technique

    NASA Astrophysics Data System (ADS)

    Zhou, Tianyi

    Zinc oxide (ZnO) has been well known as a transparent, dielectric, piezoelectric and wide band gap material. The potential capabilities have been demonstrated for a wide range of applications such as piezoelectric transducers, gas sensors, optical waveguides and transparent electrodes. It could also be applied as a substrate material for GaN-based devices. However, while some applications have already been realized, issues relating to crystalline defects remain a barrier to the successful realization of several others. In this thesis, the central focus of Chapter II is to characterize threading dislocations in hydrothermally grown ZnO substrates through simulation work as well as other techniques. The goal of this study is to find the origin of threading dislocations and design strategies to mitigate their negative effects by either reducing their densities or completely eliminating them. In Chapter III, the technique of SMART (stress mapping analysis via ray tracing) is discussed in detail to measure residual stress in packaged silicon circuits. Residual stress plays an important role in the performance and lifetime of single crystal device material. There are two main advantages of SMART compared with other techniques: (a) all six components of the stress tensor can be evaluated; (b) it is non-destructive and leaves no damaging trace on the sample. In this study, our goal is to build a relationship between stress distribution and depth. The concept of penetration depth is critically important in this study, and its value may cause great changes in the real-space stress distribution. A new function is applied to obtain better fitting curves. Data in this study are obtained at various penetration depths; each measurement represents an exponentially decaying weighted average of the actual stress values, in other words, the measured stress profile is the Laplace transform of the real stress profile. A mathematical procedure is described to determine the real stress profile from the Laplace profile. Experiment

  20. Spatial epidemiological techniques in cholera mapping and analysis towards a local scale predictive modelling

    NASA Astrophysics Data System (ADS)

    Rasam, A. R. A.; Ghazali, R.; Noor, A. M. M.; Mohd, W. M. N. W.; Hamid, J. R. A.; Bazlan, M. J.; Ahmad, N.

    2014-02-01

    Cholera spatial epidemiology is the study of the spread and control of the disease's spatial pattern and epidemics. Previous studies have shown that multi-factorial causation such as human behaviour, ecology and other infectious risk factors influences the disease outbreaks. Thus, understanding the spatial pattern and possible interrelated factors of the outbreaks is crucial and warrants in-depth study. This study focuses on the integration of geographical information system (GIS) and epidemiological techniques in exploratory analysis of the cholera spatial pattern and distribution in the selected district of Sabah. Spatial Statistic and Pattern tools in ArcGIS and Microsoft Excel software were utilized to map and analyze the reported cholera cases and other data used. Meanwhile, a cohort study in epidemiological technique was applied to investigate multiple outcomes of the disease exposure. The general spatial pattern of cholera was highly clustered, showing that the disease spread easily from one place or person to others, especially within 1500 meters of infected persons and locations. Although the cholera outbreaks in the districts are not critical, the disease could become endemic in crowded areas, unhygienic environments, and places close to contaminated water. It was also strongly believed that the coastal water of the study areas has a possible relationship with cholera transmission and phytoplankton bloom, since these areas recorded higher case numbers. GIS demonstrates a vital spatial epidemiological technique in determining the distribution pattern and elucidating hypotheses about the generation of the disease. The next phase of the research will apply advanced geo-analysis methods and other disease risk factors to produce a local-scale predictive risk model of the disease in Malaysia.

  1. A Riemannian geometric mapping technique for identifying incompressible equivalents to subsonic potential flows

    NASA Astrophysics Data System (ADS)

    German, Brian Joseph

    This research develops a technique for the solution of incompressible equivalents to planar steady subsonic potential flows. Riemannian geometric formalism is used to develop a gauge transformation of the length measure followed by a curvilinear coordinate transformation to map the given subsonic flow into a canonical Laplacian flow with the same boundary conditions. The effect of the transformation is to distort both the immersed profile shape and the domain interior nonuniformly as a function of local flow properties. The method represents the full nonlinear generalization of the classical methods of Prandtl-Glauert and Karman-Tsien. Unlike the classical methods which are "corrections," this method gives exact results in the sense that the inverse mapping produces the subsonic full potential solution over the original airfoil, up to numerical accuracy. The motivation for this research was provided by an observed analogy between linear potential flow and the special theory of relativity that emerges from the invariance of the d'Alembert wave equation under Lorentz transformations. This analogy is well known in an operational sense, being leveraged widely in linear unsteady aerodynamics and acoustics, stemming largely from the work of Kussner. Whereas elements of the special theory can be invoked for compressibility effects that are linear and global in nature, the question posed in this work was whether other mathematical techniques from the realm of relativity theory could be used to similar advantage for effects that are nonlinear and local. This line of thought led to a transformation leveraging Riemannian geometric methods common to the general theory of relativity. A gauge transformation is used to geometrize compressibility through the metric tensor of the underlying space to produce an equivalent incompressible flow that lives not on a plane but on a curved surface. In this sense, forces owing to compressibility can be ascribed to the geometry of space in
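
    For context, the two classical corrections named above adjust an incompressible pressure coefficient Cp0 to a subsonic free-stream Mach number with a single global rule, which is exactly what the Riemannian mapping generalizes into a nonlinear, local transformation. A minimal sketch of those classical rules (values are illustrative only):

      # Classical global compressibility corrections that the Riemannian mapping
      # generalizes: scale an incompressible pressure coefficient Cp0 to a
      # subsonic free-stream Mach number M.
      import math

      def prandtl_glauert(cp0, mach):
          return cp0 / math.sqrt(1.0 - mach ** 2)

      def karman_tsien(cp0, mach):
          beta = math.sqrt(1.0 - mach ** 2)
          return cp0 / (beta + (mach ** 2 / (1.0 + beta)) * cp0 / 2.0)

      cp0 = -0.5                       # hypothetical incompressible value
      for mach in (0.3, 0.5, 0.7):
          print(mach, round(prandtl_glauert(cp0, mach), 3),
                round(karman_tsien(cp0, mach), 3))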

  2. The derivation of tropospheric column ozone using the TOR approach and mapping technique

    NASA Astrophysics Data System (ADS)

    Yang, Qing

    2007-12-01

    Tropospheric ozone columns (TCOs) derived from differences between the Dutch-Finnish Aura Ozone Monitoring Instrument (OMI) measurements of the total atmospheric ozone column and the Aura Microwave Limb Sounder (MLS) measurements of stratospheric ozone columns are discussed. Because the measurements by these two instruments are not spatially coincident, interpolation techniques, with emphasis on mapping the stratospheric columns in space and time using the relationships between lower stratospheric ozone and potential vorticity (PV) and geopotential heights (Z), are evaluated at mid-latitudes. It is shown that this PV mapping procedure produces somewhat better agreement in comparisons with ozonesonde measurements, particularly in winter, than does simple linear interpolation of the MLS stratospheric columns or the use of typical coincidence criteria at mid-latitudes. The OMI/MLS derived tropospheric columns are calculated to be 4 Dobson units (DU) smaller than the sonde measured columns at mid-latitudes. This mean difference is consistent with the MLS (version 1.5) stratospheric ozone columns being high relative to Stratospheric Aerosol and Gas Experiment (SAGE II) columns by 3 DU. Standard deviations between the derived tropospheric columns and those measured by ozonesondes are 9 DU (30%) annually but they are just 6 DU (15%) in summer. Uncertainties in the interpolated MLS stratospheric columns are likely to be the primary cause of these standard deviations. An important advantage of the PV mapping approach is that it works well when MLS data are missing (e.g., when an orbit of measurements is missing). In the comparisons against ozonesonde measurements, it provides up to twice as many comparisons compared to the other techniques. The OMI/MLS derived tropospheric ozone columns have been compared with corresponding columns based on the Tropospheric Emission Spectrometer (TES) measurements, and Regional chEmical trAnsport Model (REAM) simulations. The variability of

  3. Comparison of the Biological Impacts of the Fluoride Compounds by Graphical Risk Visualization Map Technique.

    PubMed

    Usuda, Kan; Kono, Rei; Ueno, Takaaki; Ito, Yuichi; Dote, Tomotaro; Yokoyama, Hirotaka; Kono, Koichi; Tamaki, Junko

    2015-09-01

    Various fluoride compounds are widely used in industry. The present risk assessment study was conducted using a series of inorganic binary fluorides of the type XFn, where X(n) = Na(+), K(+), Li(+), Mg(2+), Ca(2+), Sr(2+), Ba(2+), Al(3+), Nd(3+), La(3+), Ce(3+), Sm(3+), Gd(3+), Y(3+), Yb(2+), and Zn(2+). The aqueous solutions of these salts were orally administered to 16 experimental groups (one for each of the salts tested). The levels of fluoride, N-acetyl-β-D-glucosaminidase in cumulative 24-h urine samples and creatinine clearance were measured to assess possible acute renal damage. The levels of fluoride, alanine aminotransferase, and aspartate aminotransferase were also determined in serum samples to assess possible acute hepatic damage. The results reveal that sodium fluoride (NaF), potassium fluoride (KF), and zinc fluoride tetrahydrate (ZnF2·4H2O) can carry the fluoride ion into the bloodstream and that it is excreted via urine more readily than the other compounds tested. These fluorides were assigned the highest risk impact factor. Most of the rare earth fluorides are insoluble in water while those of groups 2 and 13 of the periodic table are slightly soluble, so that they do not have a significant negative risk. These findings suggest that the biological impact of fluoride depends on the accompanying counter ion and its solubility. The risk map obtained in the present study shows that the graphical visualization map technique employed is a valuable new tool to assess the toxicological risk of chemical compounds. PMID:25749926

  4. Can Social Semantic Web Techniques Foster Collaborative Curriculum Mapping In Medicine?

    PubMed Central

    Finsterer, Sonja; Cremer, Jan; Schenkat, Hennig

    2013-01-01

    Background Curriculum mapping, which is aimed at the systematic realignment of the planned, taught, and learned curriculum, is considered a challenging and ongoing effort in medical education. Second-generation curriculum managing systems foster knowledge management processes including curriculum mapping in order to give comprehensive support to learners, teachers, and administrators. The large quantity of custom-built software in this field indicates a shortcoming of available IT tools and standards. Objective The project reported here aims at the systematic adoption of techniques and standards of the Social Semantic Web to implement collaborative curriculum mapping for a complete medical model curriculum. Methods A semantic MediaWiki (SMW)-based Web application has been introduced as a platform for the elicitation and revision process of the Aachen Catalogue of Learning Objectives (ACLO). The semantic wiki uses a domain model of the curricular context and offers structured (form-based) data entry, multiple views, structured querying, semantic indexing, and commenting for learning objectives (“LOs”). Semantic indexing of learning objectives relies on both a controlled vocabulary of international medical classifications (ICD, MeSH) and a folksonomy maintained by the users. An additional module supporting the global checking of consistency complements the semantic wiki. Statements of the Object Constraint Language define the consistency criteria. We evaluated the application by a scenario-based formative usability study, where the participants solved tasks in the (fictional) context of 7 typical situations and answered a questionnaire containing Likert-scaled items and free-text questions. Results At present, ACLO contains roughly 5350 operational (ie, specific and measurable) objectives acquired during the last 25 months. The wiki-based user interface uses 13 online forms for data entry and 4 online forms for flexible searches of LOs, and all the forms are

  5. Low Altitude AVIRIS Data for Mapping Land Cover in Yellowstone National Park: Use of Isodata Clustering Techniques

    NASA Technical Reports Server (NTRS)

    Spruce, Joe

    2001-01-01

    Yellowstone National Park (YNP) contains a diversity of land cover. YNP managers need site-specific land cover maps, which may be produced more effectively using high-resolution hyperspectral imagery. ISODATA clustering techniques have aided operational multispectral image classification and may benefit certain hyperspectral data applications if optimally applied. In response, a study was performed for an area in northeast YNP using 11 select bands of low-altitude AVIRIS data calibrated to ground reflectance. These data were subjected to ISODATA clustering and Maximum Likelihood Classification techniques to produce a moderately detailed land cover map. The latter has good apparent overall agreement with field surveys and aerial photo interpretation.
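
    A minimal unsupervised clustering sketch in the spirit of the ISODATA step is shown below; plain k-means is used as a stand-in (ISODATA additionally splits and merges clusters), and the band stack and cluster count are hypothetical assumptions, not the study's configuration.

      # Unsupervised clustering of selected hyperspectral bands into spectral
      # classes, in the spirit of the ISODATA step (ISODATA also splits/merges
      # clusters, which plain k-means omits).
      import numpy as np
      from sklearn.cluster import KMeans

      reflectance = np.random.rand(11, 300, 300)        # 11 selected AVIRIS bands
      pixels = reflectance.reshape(11, -1).T            # rows = pixels

      kmeans = KMeans(n_clusters=12, n_init=10, random_state=0).fit(pixels)
      cluster_map = kmeans.labels_.reshape(300, 300)

      # Clusters would next be labelled into land cover classes using field
      # surveys and aerial photo interpretation, then regrouped as needed.
      print(np.bincount(cluster_map.ravel()))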

  6. Comparing and Reconciling Traditional Field and Photogeologic Mapping Techniques: Lessons from the San Francisco Volcanic Field, Arizona

    NASA Technical Reports Server (NTRS)

    Skinner, J. A., Jr.; Eppler, D. B.; Bleacher, J. E.; Evans, C. A.; Feng, W.; Gruener, J.; Hurwitz, D. M.; Janoiko, B.; Whitson, P.

    2014-01-01

    Cartographic products and - specifically - geologic maps provide critical assistance for establishing physical and temporal frameworks of planetary surfaces. The technical methods that result in the creation of geologic maps vary depending on how observations are made as well as the overall intent of the final products [1-3]. These methods tend to follow a common linear work flow, including the identification and delineation of spatially and temporally discrete materials (units), the documentation of their primary (emplacement) and secondary (erosional) characteristics, analysis of the relative and absolute age relationships between these materials, and the collation of observations and interpretations into an objective map product. The "objectivity" of a map is critical for cross comparison with overlapping maps and topical studies, as well as for its relevance to scientific posterity. However, the "accuracy" and "correctness" of a geologic map are very much subject to debate. This can be evidenced by comparison of existing geologic maps at various scales, particularly those compiled through field- and remote-based mapping efforts. Our study focuses on comparing the fidelity of (1) "Apollo-style" geologic investigations, where typically non-geologist crew members follow static traverse routes established through pre-mission planning, and (2) "traditional" field-based investigations, where geologists are given free rein to observe without preplanned routes. This abstract summarizes the regional geology wherein our study was conducted, presents the geologic map created from traditional field mapping techniques, and offers basic insights into how geologic maps created from different tactics can be reconciled in support of exploratory missions. Additional abstracts [4-6] from this study discuss various exploration and science results of these efforts.

  7. Studying AGN Feedback with Galactic Outflows in Luminous Obscured Quasar

    NASA Astrophysics Data System (ADS)

    Sun, Ai-Lei

    2016-01-01

    Feedback from active galactic nuclei (AGN) has been proposed as an important quenching mechanism to suppress star formation in massive galaxies. We investigate the most direct form of AGN feedback - galactic outflows - in the most luminous obscured AGN (L>10^45 erg/s) from the SDSS sample in the nearby universe (z<0.2). Using ALMA and Magellan observations to target molecular and ionized outflows, we find that luminous AGN can impact the dynamics and phase of the galactic medium, and confirm the complex multi-phase and multi-scaled nature of the feedback phenomenon. In particular, we found that most of these luminous AGN host ionized outflows. The outflow size, velocity, and energetics correlate with the AGN luminosity, and can be very extended (r > 10 kpc) and fast (v > 1000 km/s) for the most luminous ones. I end by presenting a new technique to find extended ionized outflows using broadband imaging surveys, and to characterize their occurrence rate, morphology, size distribution, and their dependence on the AGN luminosity. This technique will open a new window for feedback studies in the era of large-scale optical imaging surveys, e.g., HSC and then LSST.

  8. Coal fire mapping of East Basuria Colliery, Jharia coalfield using vertical derivative technique of magnetic data

    NASA Astrophysics Data System (ADS)

    Pal, S. K.; Vaish, Jitendra; Kumar, Sahadev; Bharti, Abhay Kumar

    2016-02-01

    The present study deals with coal fire mapping of the East Basuria Colliery, Jharia coalfield, India, using the magnetic method. It is based on the fact that a rise in temperature results in significant changes in the magnetic susceptibility and thermo-remanent magnetization (TRM) of the overlying rocks. Magnetism increases slowly with the rise of temperature until the Curie temperature. Generally, rock/overburden loses magnetization and becomes paramagnetic on heating to the Curie temperature, which results in a significant reduction in magnetic susceptibility. However, magnetism increases significantly after cooling below the Curie temperature. Several data processing methods such as diurnal correction, reduction to pole (RTP), and first and second vertical derivatives have been used for analysis of the magnetic data and their interpretation. It is observed that the total magnetic field intensity anomaly of the area varies approximately from 44850 to 47460 nT and the residual magnetic anomaly varies approximately from -1323 to 1253 nT. The range of the magnetic anomaly after RTP is approximately 1050-1450 nT. About 20 low magnetic anomaly zones associated with active coal fire regions and 11 high magnetic anomaly zones associated with non-coal fire regions have been identified using vertical derivative techniques.
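
    Vertical derivatives of a gridded anomaly are typically computed in the wavenumber domain: the Fourier transform of the field is multiplied by the radial wavenumber |k| (or |k|^2 for the second derivative) and transformed back. A minimal sketch with a hypothetical grid and spacing:

      # First and second vertical derivatives of a gridded magnetic anomaly
      # (wavenumber-domain method). The grid and spacing are hypothetical.
      import numpy as np

      anomaly = np.random.rand(256, 256)                 # residual field (nT)
      dx = dy = 25.0                                     # grid spacing (m)

      ky = 2 * np.pi * np.fft.fftfreq(anomaly.shape[0], d=dy)
      kx = 2 * np.pi * np.fft.fftfreq(anomaly.shape[1], d=dx)
      k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)   # radial wavenumber

      first_vd = np.real(np.fft.ifft2(np.fft.fft2(anomaly) * k))       # nT/m
      second_vd = np.real(np.fft.ifft2(np.fft.fft2(anomaly) * k ** 2))
      print(first_vd.shape, float(first_vd.std()), float(second_vd.std()))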

  9. Classification of a set of vectors using self-organizing map- and rule-based technique

    NASA Astrophysics Data System (ADS)

    Ae, Tadashi; Okaniwa, Kaishirou; Nosaka, Kenzaburou

    2005-02-01

    Various objects, such as pictures, music, texts, etc., exist in our environment. We form a view of these objects by looking, reading or listening. Our view is deeply connected with our behaviors, and is very important for understanding them. We form a view of an object and decide the next action (data selection, etc.) based on that view. Such a series of actions constructs a sequence. Therefore, we propose a method which acquires a view as a vector from several words describing the view, and apply the vector to sequence generation. We focus on sequences of data that a user selects from a multimedia database containing pictures, music, movies, etc. These data cannot be stereotyped, because each user's view of them differs. Therefore, we represent the structure of the multimedia database by the vector representing the user's view together with the stereotyped vector, and acquire sequences containing this structure as elements. Such vectors can be classified by a SOM (Self-Organizing Map). The Hidden Markov Model (HMM) is a method for generating sequences. Therefore, we use an HMM in which each state corresponds to a representative vector of the user's view, and acquire sequences containing the change of the user's view. We call it the Vector-state Markov Model (VMM). We introduce rough set theory as a rule-based technique, which plays the role of classifying sets of data such as the "Tour" sets.
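
    As an illustration of the classification stage only, the sketch below trains a tiny Self-Organizing Map from scratch on synthetic "view" vectors and reads off each vector's best-matching unit; the data, grid size and training schedule are assumptions, and the HMM/VMM and rough-set stages are not shown.

      # Minimal Self-Organizing Map: cluster view vectors onto a small 2-D grid
      # of representative vectors. Data, grid size and schedules are hypothetical.
      import numpy as np

      rng = np.random.default_rng(0)
      data = rng.random((500, 8))                 # 500 view vectors, 8 features
      grid = (6, 6)
      weights = rng.random(grid + (data.shape[1],))
      coords = np.stack(np.meshgrid(np.arange(grid[0]), np.arange(grid[1]),
                                    indexing="ij"), axis=-1).astype(float)

      n_iter, lr0, sigma0 = 3000, 0.5, 2.0
      for t in range(n_iter):
          x = data[rng.integers(len(data))]
          # Best-matching unit and a Gaussian neighbourhood around it.
          bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), grid)
          frac = t / n_iter
          lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
          dist2 = ((coords - np.array(bmu, dtype=float)) ** 2).sum(-1)
          h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
          weights += lr * h * (x - weights)

      # Each vector is now represented by the index of its best-matching unit.
      labels = [np.unravel_index(np.argmin(((weights - v) ** 2).sum(-1)), grid)
                for v in data[:5]]
      print(labels)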

  10. Mapping of explosive contamination using GC/chemiluminescence and ion mobility spectrometry techniques

    NASA Astrophysics Data System (ADS)

    Miller, Carla J.; Glenn, D. F.; Hartenstein, Steven D.; Hallowell, Susan F.

    1998-12-01

    Recent efforts at the Idaho National Engineering and Environmental Laboratory (INEEL) have included mapping explosive contamination resulting from manufacturing and carrying improvised explosive devices (IEDs). Two types of trace detection equipment were used to determine levels of contamination from designated sampling areas. A total of twenty IEDs were constructed: ten using TNT and ten using C-4. Two test scenarios were used. The first scenario tracked the activities of a manufacturer who straps the device onto an independent courier. The courier then performed a series of activities to simulate waiting in an airport. The second scenario tracked the activities of a manufacturer who also served as the courier. A sample set for each test consisted of thirty samples from various locations on each IED manufacturer, thirty from each IED courier, twenty-five from the manufacturing area, and twenty-five from the courier area. Pre-samples and post-samples were collected for analysis with each detection technique. Samples analyzed by gc/chemiluminescence were taken by swiping a teflon- coated sampling swipe across the surface of the sampling area to pick up any explosive particles. Samples analyzed by ion mobility spectrometry (IMS) were taken from the clothing of the manufacturer and courier by vacuuming the surface and collecting particulates on a fiberglass filter. Samples for IMS analysis from the manufacturing and courier rooms were taken by wiping a cotton sampling swipe across the surface area. Currently, building IEDs and monitoring the explosive contamination is being directed toward detection with portal monitors.

  11. Real-time PCR mapping of DNaseI-hypersensitive sites using a novel ligation-mediated amplification technique

    PubMed Central

    Follows, George A.; Janes, Mary E.; Vallier, Ludovic; Green, Anthony R.; Gottgens, Berthold

    2007-01-01

    Mapping sites within the genome that are hypersensitive to digestion with DNaseI is an important method for identifying DNA elements that regulate transcription. The standard approach to locating these DNaseI-hypersensitive sites (DHSs) has been to use Southern blotting techniques, although we, and others, have recently published alternative methods using a range of technologies including high-throughput sequencing and genomic array tiling paths. In this article, we describe a novel protocol to use real-time PCR to map DHSs. Advantages of the technique reported here include the small cell numbers required for each analysis and rapid, relatively low-cost experiments with minimal need for specialist equipment. Presented examples include comparative DHS mapping of known TAL1/SCL regulatory elements between human embryonic stem cells and K562 cells. PMID:17389645

  12. Unveiling the hearts of luminous and ultra-luminous infrared galaxy mergers with laser guide star adaptive optics

    NASA Astrophysics Data System (ADS)

    Medling, Anne M.

    2013-03-01

    Gas-rich galaxies across cosmic time exhibit one or both of two phenomena: ongoing star formation and an active galactic nucleus indicating current black hole accretion. These two processes are important mechanisms through which galaxies evolve and grow, but their effects are difficult to disentangle. Both will use up some available gas, and both are capable of producing winds strong enough to eject remaining gas from the galaxy. One must look at high spatial resolutions in order to separate the dynamical effects of star formation going on near the nucleus of a galaxy from the black hole growth going on in the nucleus. We present high spatial resolution integral field spectroscopy of fifteen nearby luminous and ultra-luminous infrared galaxies. These systems are extremely bright in the infrared exactly because they host powerful starbursts and active nuclei, which in turn heat the surrounding dust. Our data provide resolved stellar and gaseous kinematics of the central kiloparsec of each of these systems by removing atmospheric blurring with adaptive optics, an observing technique that measures the turbulence in the Earth's atmosphere and then uses a deformable mirror to correct the resulting distortions. Our kinematic maps reveal nuclear disks of gas and stars with radii ~ a few hundred parsecs surrounding the central black holes. Because the stellar and gas kinematics match well, we conclude that the stars are forming in situ from the gas in the disks. These disks may be the progenitors of kinematically decoupled cores seen in many isolated elliptical galaxies, and may have a significant effect on the merger rate of binary black holes. Additionally, these disks may be used to measure black hole masses which, when combined with host galaxy properties and placed on scaling relations, indicate that black holes grow as or more quickly than their host galaxies during a merger. This suggests that a sudden burst of black hole growth in the final stages of the merger

  13. The Effect of Using the Story-Mapping Technique on Developing Tenth Grade Students' Short Story Writing Skills in EFL

    ERIC Educational Resources Information Center

    Ibnian, Salem Saleh Khalaf

    2010-01-01

    The current study aimed at investigating the effect of using the story-mapping technique on developing tenth grade students' short story writing skills in EFL. The study attempted to answer the following questions: (1) What are the short story writing skills needed for tenth grade students in EFL?; and (2) What is the effect of using the…

  14. Notes for the improvement of a remote sensing multispectral data non-supervised classification and mapping technique

    NASA Technical Reports Server (NTRS)

    Dalton, C. C.

    1973-01-01

    Examined are: (1) the sequential clustering technique for the unsupervised automatic classification and mapping of earth resources satellite data, (2) theoretical analysis of the tests which were used, and (3) derivation of an alternative set of tests and their necessary algorithm.

  15. Observing Equatorial Thermospheric Winds and Temperatures with a New Mapping Technique

    NASA Astrophysics Data System (ADS)

    Faivre, M. W.; Meriwether, J. W.; Sherwood, P.; Veliz, O.

    2005-12-01

    Application of the Fabry-Perot interferometer (FPI) at Arequipa, Peru (16.4S, 71.4 W) to measure the Doppler shifts and Doppler broadenings in the equatorial O(1D) 630-nm nightglow has resulted in numerous detections of a large-scale thermospheric phenomenon called the Midnight Temperature Maximum (MTM). A recent detector upgrade with a CCD camera has improved the accuracy of these measurements by a factor of 5. Temperature increases of 50 to 150K have been measured during nights in April and July, 2005, with error bars less than 10K after averaging in all directions. Moreover, the meridional wind measurements show evidence for a flow reversal from equatorward to poleward near local midnight for such events. A new observing strategy based upon the pioneering work of Burnside et al.[1981] maps the equatorial wind and temperature fields by observing in eight equally-spaced azimuth directions, each with a zenith angle of 60 degrees. Analysis of the data obtained with this technique gives the mean wind velocities in the meridional and zonal directions as well as the horizontal gradients of the wind field for these directions. Significant horizontal wind gradients are found for the meridional direction but not for the zonal direction. The zonal wind blows eastward throughout the night with a maximum speed of ~150 m/s near the middle of the night and then decreases towards zero just before dawn. In general, the fastest poleward meridional wind is observed near mid-evening. By the end of the night, the meridional flow tends to be more equatorward at speeds of about 50 m/s. Using the assumption that local time and longitude are equivalent over a period of 30 minutes, a map of the horizontal wind field vector field is constructed over a range of 12 degrees latitude centered at 16.5 S. Comparison between MTM nights and quiet nights (no MTM) revealed significant differences in the horizontal wind fields. Using the method of Fourier decomposition of the line-of-sight winds
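
    The underlying geometry can be sketched as a least-squares fit: each line-of-sight Doppler wind at azimuth a and zenith angle z samples u sin(a) sin(z) + v cos(a) sin(z), so the mean zonal and meridional winds follow from the eight azimuth measurements. The full analysis also fits horizontal gradient terms, omitted here; the numbers below are synthetic.

      # Recover mean zonal (u) and meridional (v) winds from line-of-sight winds
      # measured at eight equally spaced azimuths, 60-degree zenith angle.
      import numpy as np

      az = np.deg2rad(np.arange(0.0, 360.0, 45.0))   # eight azimuths (from north)
      ze = np.deg2rad(60.0)

      u_true, v_true = 120.0, -40.0                  # hypothetical winds (m/s)
      los = (u_true * np.sin(az) + v_true * np.cos(az)) * np.sin(ze)
      los += np.random.normal(0.0, 5.0, los.size)    # measurement noise

      # Design matrix: each LOS wind is u*sin(az)*sin(ze) + v*cos(az)*sin(ze).
      A = np.column_stack([np.sin(az) * np.sin(ze), np.cos(az) * np.sin(ze)])
      (u_fit, v_fit), *_ = np.linalg.lstsq(A, los, rcond=None)
      print(round(float(u_fit), 1), round(float(v_fit), 1))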

  16. Evaluation of EREP techniques for geological mapping. [southern Pyrenees and Ebro basin in Spain

    NASA Technical Reports Server (NTRS)

    Vandermeermohr, H. E. C.; Srivastava, G. S. (Principal Investigator)

    1975-01-01

    The author has identified the following significant results. Skylab photographs may be successfully utilized for preparing a reconnaissance geological map in the areas where no maps or semi-detailed maps exist. Large coverage of area and regional perspective from Skylab photographs can help better coordination in regional mapping. It is possible to delineate major structural trends and other features like mega-lineaments, geofractures, and faults, which have evaded their detection by conventional methods. The photointerpretability is better in areas dominated by sedimentary rocks. Rock units of smaller extent and having poor geomorphic expressions are difficult to map. Demarcation of quaternary river alluvium can be made with better precision and ease with the Skylab photographs. Stereoscopic viewing greatly helps in interpretation of area structures. Skylab photographs are not good for preparing geological maps larger than 1:270,000 scale.

  17. Soil zymography - A novel technique for mapping enzyme activity in the rhizosphere

    NASA Astrophysics Data System (ADS)

    Spohn, Marie

    2014-05-01

    The effect of plant roots on microbial activity in soil at the millimeter scale is poorly understood. One reason for this is that spatially explicit methods for the study of microbial activity in soil are limited. Here we present a quantitative in situ technique for mapping the distribution of exoenzymes in soil along with some results about the effects of roots on exoenzyme activity in soil. In the first study we showed that both acid and alkaline phosphatase activity were up to 5.4-times larger in the rhizosphere of Lupinus albus than in the bulk soil. While acid phosphatase activity (produced by roots and microorganisms) was closely associated with roots, alkaline phosphatase activity (produced only by microorganisms) was more widely distributed, leading to a 2.5-times larger area of activity of alkaline than of acid phosphatase. These results indicate a spatial differentiation of different ecophysiological groups of organic phosphorus mineralizing organisms in the rhizosphere which might alleviate a potential competition for phosphorus between them. In a second study, cellulase, chitinase and phosphatase activities were analyzed in the presence of living Lupinus polyphyllus roots and dead/dying roots (in the same soils 10, 20 and 30 days after cutting the L. polyphyllus shoots). The activity of all three enzymes was 9.0 to 13.9-times higher at the living roots compared to the bulk soil. Microhotspots of cellulase, chitinase and phosphatase activity in the soil were found up to 60 mm away from the living roots. Ten days after shoot cutting, the areas of high cellulase and phosphatase activity extended up to 55 mm away from the nearest root, while the extension of the area of chitinase activity did not change significantly. At the root, cellulase and chitinase activity increased first at the root tips after shoot cutting and showed maximal activity 20 days after shoot cutting. The number and activity of microhotspots of chitinase activity was maximal 10

  18. Side-scan sonar mapping: Pseudo-real-time processing and mosaicking techniques

    SciTech Connect

    Danforth, W.W.; Schwab, W.C.; O'Brien, T.F. ); Karl, H. )

    1990-05-01

    The US Geological Survey (USGS) surveyed 1,000 km{sup 2} of the continental shelf off San Francisco during a 17-day cruise, using a 120-kHz side-scan sonar system, and produced a digitally processed sonar mosaic of the survey area. The data were processed and mosaicked in real time using software developed at the Lamont-Doherty Geological Observatory and modified by the USGS, a substantial task due to the enormous amount of data produced by high-resolution side-scan systems. Approximately 33 megabytes of data were acquired every 1.5 hr. The real-time sonar images were displayed on a PC-based workstation and the data were transferred to a UNIX minicomputer where the sonar images were slant-range corrected, enhanced using an averaging method of desampling and a linear-contrast stretch, merged with navigation, geographically oriented at a user-selected scale, and finally output to a thermal printer. The hard-copy output was then used to construct a mosaic of the survey area. The final product of this technique is a UTM-projected map-mosaic of sea-floor backscatter variations, which could be used, for example, to locate appropriate sites for sediment sampling to ground truth the sonar imagery while still at sea. More importantly, reconnaissance surveys of this type allow for the analysis and interpretation of the mosaic during a cruise, thus greatly reducing the preparation time needed for planning follow-up studies of a particular area.
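
    The slant-range correction step mentioned above converts each sample's slant distance into horizontal ground range using the towfish altitude, after which amplitudes are resampled onto a uniform ground-range axis for mosaicking. A minimal single-ping sketch with hypothetical geometry:

      # Slant-range correction for one side-scan sonar ping: convert sample slant
      # ranges to ground ranges given towfish altitude, then resample the raw
      # backscatter onto a uniform ground-range axis. Values are hypothetical.
      import numpy as np

      n_samples = 1024
      max_slant = 100.0                       # metres to the far edge of the swath
      altitude = 12.0                         # towfish height above the seafloor

      slant = np.linspace(0.0, max_slant, n_samples)
      amplitude = np.random.rand(n_samples)   # raw backscatter for this ping

      valid = slant > altitude                # returns after the first bottom echo
      ground = np.sqrt(slant[valid] ** 2 - altitude ** 2)

      # Resample onto an evenly spaced ground-range axis for mosaicking.
      ground_axis = np.linspace(0.0, ground.max(), n_samples)
      corrected = np.interp(ground_axis, ground, amplitude[valid])
      print(corrected.shape)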

  19. Properties of unusually luminous supernovae

    NASA Astrophysics Data System (ADS)

    Pan, Tony Shih Arng

    This thesis is a theoretical study of the progenitors, event rates, and observational properties of unusually luminous supernovae (SNe), and aims to identify promising directions for future observations. In Chapter 2, we present model light curves and spectra of pair-instability supernovae (PISNe) over a range of progenitor masses and envelope structures for Pop III stars. We calculate the rates and detectability of PISNe, core-collapse supernovae (CCSNe), and Type Ia SNe at the Epoch of Reionization with the James Webb Space Telescope (JWST), which can be used to determine the contribution of Pop III versus Pop II stars toward ionizing the universe. Although CCSNe are the least intrinsically luminous supernovae, Chapter 5 shows that a JWST survey targeting known galaxy clusters with Einstein radii > 35" should discover gravitationally lensed CCSNe at redshifts exceeding z = 7--8. In Chapter 3, we explain how the Pop II/I progenitors of observed PISNe in the local universe can be created via mergers in runaway collisions in young, dense star clusters, despite copious mass loss via line-driven winds. The PISN rate from this mechanism is consistent with the observed volumetric rate, and the Large Synoptic Survey Telescope could discover ~10^2 such PISNe per year. In Chapter 4, we identify 10 star clusters which may host PISN progenitors with masses up to 600 solar masses formed via runaway collisions. We estimate the probabilities of these very massive stars being in eclipsing binaries to be ≳ 30%, and find that their transits can be detected even under the contamination of the background cluster light, due to mean transit depths of ~10^6 solar luminosities. In Chapter 6, we show that there could be X-ray analogues of optically super-luminous SNe that are powered by the conversion of the kinetic energy of SN ejecta into radiation upon its collision with a dense but optically-thin circumstellar shell. We find shell configurations that can convert a large fraction of the SN

  20. Enhancing Local Climate Projections of Precipitation: Assets and Limitations of Quantile Mapping Techniques for Statistical Downscaling

    NASA Astrophysics Data System (ADS)

    Ivanov, Martin; Kotlarski, Sven; Schär, Christoph

    2015-04-01

    The Swiss CH2011 scenarios provide a portfolio of climate change scenarios for the region of Switzerland, specifically tailored for use in climate impact research. Although widely applied by a variety of end-users, these scenarios are subject to several limitations related to the underlying delta change methodology. Examples are difficulties to appropriately account for changes in the spatio-temporal variability of meteorological fields and for changes in extreme events. The recently launched ELAPSE project (Enhancing local and regional climate change projections for Switzerland) is connected to the EU COST Action VALUE (www.value-cost.eu) and aims at complementing CH2011 by further scenario products, including a bias-corrected version of daily scenarios at the site scale. For this purpose the well-established empirical quantile mapping (QM) methodology is employed. Here, daily temperature and precipitation output of 15 GCM-RCM model chains of the ENSEMBLES project is downscaled and bias-corrected to match observations at weather stations in Switzerland. We consider established QM techniques based on all empirical quantiles or linear interpolation between the empirical percentiles. In an attempt to improve the downscaling of extreme precipitation events, we also apply a parametric approximation of the daily precipitation distribution by a dynamically weighted mixture of a Gamma distribution for the bulk and a Pareto distribution for the right tail for the first time in the context of QM. All techniques are evaluated and intercompared in a cross-validation framework. The statistical downscaling substantially improves virtually all considered distributional and temporal characteristics as well as their spatial distribution. The empirical methods have in general very similar performances. The parametric method does not show an improvement over the empirical ones. Critical sites and seasons are highlighted and discussed. Special emphasis is placed on investigating the
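
    A minimal empirical quantile-mapping sketch in the spirit of the established variants mentioned (linear interpolation between empirical percentiles): the correction is built from calibration-period model and observed percentiles and then applied to the series to be downscaled. All series below are synthetic placeholders.

      # Empirical quantile mapping: correct a model series so that its
      # distribution matches station observations over a calibration period,
      # interpolating linearly between empirical percentiles.
      import numpy as np

      rng = np.random.default_rng(2)
      obs_cal   = rng.gamma(2.0, 3.0, 5000)      # observed daily precipitation
      model_cal = rng.gamma(2.0, 4.5, 5000)      # biased model, calibration period
      model_fut = rng.gamma(2.2, 4.5, 5000)      # model series to be corrected

      p = np.linspace(1.0, 99.0, 99)
      model_q = np.percentile(model_cal, p)
      obs_q   = np.percentile(obs_cal, p)

      # Map each model value through model percentile -> observed percentile.
      corrected = np.interp(model_fut, model_q, obs_q)
      print(round(float(model_fut.mean()), 2), round(float(corrected.mean()), 2),
            round(float(obs_cal.mean()), 2))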

  1. Applications of remote sensing techniques to county land use and flood hazard mapping

    NASA Technical Reports Server (NTRS)

    Clark, R. B.; Conn, J. S.; Miller, D. A.; Mouat, D. A.

    1975-01-01

    The application of remote sensing in Arizona is discussed. Land use and flood hazard mapping completed by the Applied Remote Sensing Program is described. Areas subject to periodic flood inundation are delineated and land use maps monitoring the growth within specific counties are provided.

  2. The Art of Teaching Map and Compass: Instructional Techniques, Curricular Formats and Practical Field Exercises.

    ERIC Educational Resources Information Center

    Watters, Ron

    This paper discusses the value of teaching map and compass skills in the classroom or an outdoor situation. Navigation is the most basic of all outdoor skills. A map and compass curriculum can be taught to anyone, is inexpensive, and is easily incorporated in a variety of educational situations. General teaching principles are outlined: (1) start…

  3. Investigating the Use of ICT-Based Concept Mapping Techniques on Creativity in Literacy Tasks

    ERIC Educational Resources Information Center

    Riley, Nigel R.; Ahlberg, Mauri

    2004-01-01

    The key research question in this small-scale study focuses on the effects that an ICT (information and communications technologies)-based concept mapping intervention has on creativity and writing achievement in 10-11-year-old primary age pupils. The data shows that pupils using a concept mapping intervention significantly improve their NFER…

  4. Benthic habitat mapping: A review of progress towards improved understanding of the spatial ecology of the seafloor using acoustic techniques

    NASA Astrophysics Data System (ADS)

    Brown, Craig J.; Smith, Stephen J.; Lawton, Peter; Anderson, John T.

    2011-05-01

    This review examines the various strategies and methods used to produce benthic habitat maps using acoustic remote sensing techniques, coupled with in situ sampling. The applications of three acoustic survey techniques are examined in detail: single-beam acoustic ground discrimination systems, sidescan sonar systems, and multi-beam echo sounders. Over the past decade we have witnessed the nascence of the field of benthic habitat mapping and, on the evidence of the literature reviewed in this paper, have seen a rapid evolution in the level of sophistication in our ability to image and thus map seafloor habitats. As acoustic survey tools have become ever more complex, new methods have been tested to segment, classify and combine these data with biological ground truth sample data. Although the specific methods used to derive habitat maps vary considerably, the review indicates that studies can generally be categorized into one of three over-arching strategies; 1) Abiotic surrogate mapping; 2) Assemble first, predict later (unsupervised classification); 3) Predict first, assemble later (supervised classification). Whilst there is still no widely accepted agreement on the best way to produce benthic habitat maps, all three strategies provide valuable map resources to support management objectives. Whilst there is still considerable work to be done before we can answer many of the outstanding technological, methodological, ecological and theoretical questions that have been raised here, the review concludes that the advent of spatial ecological studies founded on high-resolution environmental data sets will undoubtedly help us to examine patterns in community and species distributions. This is a vital first step in unraveling ecological complexities and thus providing improved spatial information for management of marine systems.

  5. A regional ionospheric TEC mapping technique over China and adjacent areas on the basis of data assimilation

    NASA Astrophysics Data System (ADS)

    Aa, Ercha; Huang, Wengeng; Yu, Shimei; Liu, Siqing; Shi, Liqin; Gong, Jiancun; Chen, Yanhong; Shen, Hua

    2015-06-01

    In this paper, a regional total electron content (TEC) mapping technique over China and adjacent areas (70°E-140°E and 15°N-55°N) is developed on the basis of a Kalman filter data assimilation scheme driven by Global Navigation Satellite Systems (GNSS) data from the Crustal Movement Observation Network of China and International GNSS Service. The regional TEC maps can be generated accordingly with the spatial and temporal resolution being 1°×1° and 5 min, respectively. The accuracy and quality of the TEC mapping technique have been validated through the comparison with GNSS observations, the International Reference Ionosphere model values, the global ionosphere maps from Center for Orbit Determination of Europe, and the Massachusetts Institute of Technology Automated Processing of GPS TEC data from Madrigal database. The verification results indicate that great systematic improvements can be obtained when data are assimilated into the background model, which demonstrates the effectiveness of this technique in providing accurate regional specification of the ionospheric TEC over China and adjacent areas.
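
    The assimilation step can be illustrated with a single Kalman analysis update: a background TEC map is corrected toward GNSS-derived TEC observations through an observation operator H. The grid size, covariances and point-sampling H below are assumptions for illustration, not the operational scheme.

      # Minimal Kalman-filter analysis step for gridded TEC assimilation:
      # background map x_b is updated toward observations y through H.
      import numpy as np

      n_grid, n_obs = 100, 20                    # flattened 10x10 grid, 20 GNSS obs
      rng = np.random.default_rng(3)

      x_b = 10.0 + rng.random(n_grid)            # background TEC (TECU)
      P_b = np.eye(n_grid) * 4.0                 # background error covariance
      H = np.zeros((n_obs, n_grid))
      H[np.arange(n_obs), rng.integers(0, n_grid, n_obs)] = 1.0  # point sampling
      R = np.eye(n_obs) * 1.0                    # observation error covariance
      y = H @ x_b + rng.normal(0.0, 1.0, n_obs) + 2.0            # biased obs

      # Kalman gain and analysis update.
      K = P_b @ H.T @ np.linalg.inv(H @ P_b @ H.T + R)
      x_a = x_b + K @ (y - H @ x_b)
      P_a = (np.eye(n_grid) - K @ H) @ P_b
      print(round(float(x_b.mean()), 2), round(float(x_a.mean()), 2))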

  6. Facilitating the exploitation of ERTS-1 imagery using snow enhancement techniques. [geological fault maps of Massachusetts and Connecticut

    NASA Technical Reports Server (NTRS)

    Wobber, F. J. (Principal Investigator); Martin, K. R.; Amato, R. V.; Leshendok, T.

    1973-01-01

    The author has identified the following significant results. The applications of ERTS-1 imagery for geological fracture mapping regardless of season have been repeatedly confirmed. The enhancement provided by a differential cover of snow increases the number and length of fracture-lineaments which can be detected with ERTS-1 data and accelerates the fracture mapping process for a variety of practical applications. The geological mapping benefits of the program will be realized in geographic areas where data are most needed - complex glaciated terrain and areas of deep residual soils. ERTS-1 derived fracture-lineament maps which provide detail well in excess of existing geological maps are not available in the Massachusetts-Connecticut area. The large quantity of new data provided by ERTS-1 may accelerate and improve field mapping now in progress in the area. Numerous other user groups have requested data on the techniques. This represents a major change in operating philosophy for groups who to date judged that snow obscured geological detail.

  7. Groundwater vulnerability and risk mapping in a geologically complex area by using stable isotopes, remote sensing and GIS techniques

    NASA Astrophysics Data System (ADS)

    Dimitriou, Elias; Zacharias, Ierotheos

    2006-11-01

    Groundwater vulnerability and risk mapping is a relatively new scientific approach for facilitating planning and decision-making processes in order to protect this valuable resource. A pan-European methodology for aquifer vulnerability has recently been developed by assessing all the existing relevant techniques, with emphasis on karstic environments. In this study, state-of-the-art methods and tools such as remote sensing, isotopic investigations and GIS have been implemented to map the groundwater vulnerability and pollution risk in a geologically complex area of W. Greece. An updated land use map has been developed from a Landsat 7 ETM+ image processed with image analysis software, while the detailed hydrogeologic properties of the area have been recorded with an intensive isotopic study. The local groundwater vulnerability map has been produced following the aforementioned pan-European method in a GIS environment, while the risk map, the final product of the study, has been developed by combining the vulnerability and land use maps. The results indicated that areas composed of highly tectonized calcareous formations represented high vulnerability and risk zones, while forested areas away from the karstic aquifer showed moderate to low vulnerability. Moreover, human activities increase the pollution risk in lowland areas consisting of sedimentary deposits that have been classified as moderately vulnerable. The methodology performed efficiently in this study and, owing to its accuracy and relatively easy implementation, can be used as a decision support tool for local authorities.

  8. Apparent speed increases at low luminance

    PubMed Central

    Vaziri-Pashkam, Maryam; Cavanagh, Patrick

    2009-01-01

    To investigate the effect of luminance on apparent speed, subjects adjusted the speed of a low-luminance rotating grating (0.31 cd/m²) to match that of a high-luminance one (1260 cd/m²). Above 4 Hz, subjects overestimated the speed of the low-luminance grating. This overestimation increased as a function of temporal rate and reached about 30% at temporal rates around 10 Hz. The speed overestimation became significant once the lower luminance was 2.4 log units lower than the high-luminance comparison. Next, the role of motion smear in speed overestimation was examined. First, it was shown that the length of the perceived motion smear increased at low luminances. Second, the length of the visible smear was manipulated by changing the presentation time of the stimuli. Speed overestimation was reduced at shorter presentation times. Third, the speed of a blurred stimulus was compared to that of a stimulus with sharp edges, and the blurred stimulus was judged to move faster. These results indicate that the length of motion smear following a target contributes to its perceived speed and that this leads to speed overestimation at low luminance where motion traces lengthen because of increased persistence. PMID:19146275

  9. Comparison of manually produced and automated cross country movement maps using digital image processing techniques

    NASA Technical Reports Server (NTRS)

    Wynn, L. K.

    1985-01-01

    The Image-Based Information System (IBIS) was used to automate the cross-country movement (CCM) mapping model developed by the Defense Mapping Agency (DMA). Existing terrain factor overlays and a CCM map, produced by DMA for the Fort Lewis, Washington area, were digitized and reformatted into geometrically registered images. Terrain factor data from Slope, Soils, and Vegetation overlays were entered into IBIS, and were then combined utilizing IBIS-programmed equations to implement the DMA CCM model. The resulting IBIS-generated CCM map was then compared with the digitized manually produced map to test similarity. The numbers of pixels comprising each CCM region were compared between the two map images, and the percent agreement between each pair of regional counts was computed. The mean percent agreement equalled 86.21%, with an areally weighted standard deviation of 11.11%. Calculation of Pearson's correlation coefficient yielded +0.997. In some cases, the IBIS-calculated map code differed from the DMA codes: analysis revealed that IBIS had calculated the codes correctly. These highly positive results demonstrate the power and accuracy of IBIS in automating models which synthesize a variety of thematic geographic data.

  10. Analysis of terrain map matching using multisensing techniques for applications to autonomous vehicle navigation

    NASA Technical Reports Server (NTRS)

    Page, Lance; Shen, C. N.

    1991-01-01

    This paper describes skyline-based terrain matching, a new method for locating the vantage point of laser range-finding measurements on a global map previously prepared by satellite or aerial mapping. Skylines can be extracted from the range-finding measurements and modelled from the global map, and are represented in parametric, cylindrical form with azimuth angle as the independent variable. The three translational parameters of the vantage point are determined with a three-dimensional matching of these two sets of skylines.
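
    The following is a hedged, heavily simplified illustration of the skyline-matching idea only: for each candidate vantage point on a global elevation map, a model skyline (maximum elevation angle versus azimuth) is computed and compared with the skyline extracted from the range data; the best candidate minimizes the squared difference. The grid spacing, ray sampling and brute-force search are placeholders, not the paper's parametric matching method.

```python
import numpy as np

def model_skyline(dem, cell, x0, y0, z0, n_az=360, max_range=200):
    """Skyline elevation angle (rad) vs azimuth seen from (x0, y0, z0) on a DEM."""
    az = np.linspace(0, 2 * np.pi, n_az, endpoint=False)
    skyline = np.full(n_az, -np.pi / 2)
    for k, a in enumerate(az):
        for r in range(1, max_range):
            i = int(round(y0 + r * np.sin(a)))   # march outward along the azimuth
            j = int(round(x0 + r * np.cos(a)))
            if not (0 <= i < dem.shape[0] and 0 <= j < dem.shape[1]):
                break
            angle = np.arctan2(dem[i, j] - z0, r * cell)
            skyline[k] = max(skyline[k], angle)  # keep the highest elevation angle
    return skyline

def match_error(observed_skyline, modelled_skyline):
    """Mean squared mismatch between measured and modelled skylines."""
    return float(np.mean((observed_skyline - modelled_skyline) ** 2))
```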

  11. Techniques and applications of SAR interferometry for ERS-1: Topographic mapping, change detection, and slope measurement

    NASA Technical Reports Server (NTRS)

    Werner, Charles L.; Hensley, Scott; Goldstein, Richard M.; Rosen, Paul A.; Zebker, Howard A.

    1993-01-01

    SAR (Synthetic Aperture Radar) interferometry allows the measurement of high resolution topography of terrain from repeat orbit SAR data sets obtained by the ERS-1 AMI (Active Microwave Instrument). The system parameters which affect the accuracy of the topographic maps are errors in determination of the interferometric baseline, non parallel orbit tracks, decorrelation caused by baseline length, thermal noise, and surface change. Surface change can be observed in the interferograms either through decorrelation of the interferometric phase, or coherent phase shifts caused by locally uniform surface displacements. Phase gradient maps can be derived directly from the complex interferograms and can be transformed into surface slopes mapped onto a geometrically corrected grid.
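
    A minimal sketch of the two quantities mentioned above, assuming two co-registered single-look complex (SLC) images held as numpy arrays; flat-earth removal, baseline estimation and phase unwrapping are omitted, so this is not the authors' processing chain.

```python
import numpy as np

def interferogram_phase(slc1, slc2):
    """Wrapped interferometric phase of two co-registered complex SLC arrays."""
    return np.angle(slc1 * np.conj(slc2))

def phase_gradient(phase):
    """Azimuth and range phase gradients of a wrapped interferogram."""
    # Differencing via complex exponentials avoids spurious 2*pi jumps.
    d_azimuth = np.angle(np.exp(1j * np.diff(phase, axis=0)))
    d_range = np.angle(np.exp(1j * np.diff(phase, axis=1)))
    return d_azimuth, d_range
```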

  12. An automated technique for detailed μ-FTIR mapping of diamond and spectral deconvolution

    NASA Astrophysics Data System (ADS)

    Howell, Dan; Griffin, Bill; O'Neill, Craig; O'Reilly, Suzanne; Pearson, Norman; Handley, Heather

    2010-05-01

    other commonly found defects and impurities, whether these are intrinsic defects like platelets, extrinsic defects like hydrogen or boron atoms, or inclusions of minerals or fluids. Recent technological developments in the field of spectroscopy allow detailed μ-FTIR analysis to be performed rapidly in an automated fashion. The Nicolet iN10 microscope has an integrated design that maximises signal throughput and allows spectra to be collected with greater efficiency than is possible with conventional μ-FTIR spectrometer-microscope systems. Combining this with a computer-controlled x-y stage allows for the automated measuring of several thousand spectra in only a few hours. This affords us the ability to record 2D IR maps of diamond plates with minimal effort, but has created the need for an automated technique to process the large quantities of IR spectra and obtain quantitative data from them. We will present new software routines that can process large batches of IR spectra, including baselining, conversion to absorption coefficient, and deconvolution to identify and quantify the various nitrogen components. Possible sources of error in each step of the process will be highlighted so that the data produced can be critically assessed. The end result will be the production of various false colour 2D maps that show the distribution of nitrogen concentrations and aggregation states, as well as other identifiable components.
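
    A hedged sketch of two early steps in such a batch-processing routine: a simple linear baseline correction and conversion of (decadic) absorbance to an absorption coefficient using the plate thickness. The anchor-window wavenumbers and thickness handling are illustrative assumptions, not the authors' routines.

```python
import numpy as np

def linear_baseline(wavenumber, absorbance, left=(3995, 4000), right=(650, 655)):
    """Subtract a straight baseline fitted to two nominally featureless windows."""
    mask = ((wavenumber >= min(left)) & (wavenumber <= max(left))) | \
           ((wavenumber >= min(right)) & (wavenumber <= max(right)))
    slope, intercept = np.polyfit(wavenumber[mask], absorbance[mask], 1)
    return absorbance - (slope * wavenumber + intercept)

def absorption_coefficient(absorbance, thickness_cm):
    """Convert baseline-corrected decadic absorbance to absorption coefficient (cm^-1)."""
    return absorbance / thickness_cm
```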

  13. Language Mapping with Navigated Repetitive TMS: Proof of Technique and Validation

    PubMed Central

    Tarapore, Phiroz E.; Findlay, Anne M.; Honma, Susanne M.; Mizuiri, Danielle; Houde, John F.; Berger, Mitchel S.; Nagarajan, Srikantan S.

    2013-01-01

    Objective Lesion-based mapping of speech pathways has been possible only during invasive neurosurgical procedures using direct cortical stimulation (DCS). However, navigated transcranial magnetic stimulation (nTMS) may allow for lesion-based interrogation of language pathways noninvasively. Although not lesion-based, magnetoencephalographic imaging (MEGI) is another noninvasive modality for language mapping. In this study, we compare the accuracy of nTMS and MEGI with DCS. Methods Subjects with lesions around cortical language areas underwent preoperative nTMS and MEGI for language mapping. nTMS maps were generated using a repetitive TMS protocol to deliver trains of stimulations during a picture naming task. MEGI activation maps were derived from adaptive spatial filtering of beta-band power decreases prior to overt speech during picture naming and verb generation tasks. The subjects subsequently underwent awake language mapping via intraoperative DCS. The language maps obtained from each of the 3 modalities were recorded and compared. Results nTMS and MEGI were performed on 12 subjects. nTMS yielded 21 positive language disruption sites (11 speech arrest, 5 anomia, and 5 other) while DCS yielded 10 positive sites (2 speech arrest, 5 anomia, and 3 other). MEGI isolated 32 sites of peak activation with language tasks. Positive language sites were most commonly found in the pars opercularis for all three modalities. In 9 instances the positive DCS site corresponded to a positive nTMS site, while in 1 instance it did not. In 4 instances, a positive nTMS site corresponded to a negative DCS site, while 169 instances of negative nTMS and DCS were recorded. The sensitivity of nTMS was therefore 90%, specificity was 98%, the positive predictive value was 69% and the negative predictive value was 99% as compared with intraoperative DCS. MEGI language sites for verb generation and object naming correlated with nTMS sites in 5 subjects, and with DCS sites in 2 subjects
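
    A worked check of the reported accuracy figures, using only the counts quoted in the abstract (9 true positives, 1 false negative, 4 false positives, 169 true negatives); variable names are illustrative.

```python
tp, fn, fp, tn = 9, 1, 4, 169
sensitivity = tp / (tp + fn)   # 0.90  -> reported as 90%
specificity = tn / (tn + fp)   # 0.977 -> reported as 98%
ppv = tp / (tp + fp)           # 0.692 -> reported as 69%
npv = tn / (tn + fn)           # 0.994 -> reported as 99%
print(sensitivity, specificity, ppv, npv)
```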

  14. A Hydrogeologic Map of the Death Valley Region, Nevada and California, Developed Using GIS Techniques

    USGS Publications Warehouse

    Faunt, Claudia C.; D'Agnese, Frank A.; Turner, A. Keith

    1997-01-01

    In support of Yucca Mountain site characterization studies, a hydrogeologic framework was developed, and a hydrogeologic map was constructed for the Death Valley region. The region, covering approximately 100,000 km² along the Nevada-California border near Las Vegas, is characterized by isolated mountain ranges juxtaposed against broad, alluvium-filled valleys. Geologic conditions are typical of the Basin and Range Province; a variety of sedimentary and igneous intrusive and extrusive rocks have been subjected to both compressional and extensional deformation. The regional ground-water flow system can best be described as a series of connected intermontane basins in which ground-water flow occurs in basin-fill deposits, carbonate rocks, clastic rocks, and volcanic rocks. Previous investigations have developed more site-specific hydrogeologic relationships; however, few have described all the lithologies within the Death Valley regional ground-water flow system. Information required to characterize the hydrogeologic units in the region was obtained from regional geologic maps and reports. Map data were digitized from regional geologic maps and combined into a composite map using a geographic information system. This map was simplified to show 10 laterally extensive hydrogeologic units with distinct hydrologic properties. The hydraulic conductivity values for the hydrogeologic units range over 15 orders of magnitude due to the variability in burial depth and degree of fracturing.

  15. A hydrogeologic map of the Death Valley region, Nevada, and California, developed using GIS techniques

    SciTech Connect

    Faunt, C.C.; D'Agnese, F.A.; Turner, A.K.

    1997-12-31

    In support of Yucca Mountain site characterization studies, a hydrogeologic framework was developed, and a hydrogeologic map was constructed for the Death Valley region. The region, covering approximately 100,000 km² along the Nevada-California border near Las Vegas, is characterized by isolated mountain ranges juxtaposed against broad, alluvium-filled valleys. Geologic conditions are typical of the Basin and Range Province; a variety of sedimentary and igneous intrusive and extrusive rocks have been subjected to both compressional and extensional deformation. The regional ground-water flow system can best be described as a series of connected intermontane basins in which ground-water flow occurs in basin-fill deposits, carbonate rocks, clastic rocks, and volcanic rocks. Previous investigations have developed more site-specific hydrogeologic relationships; however, few have described all the lithologies within the Death Valley regional ground-water flow system. Information required to characterize the hydrogeologic units in the region was obtained from regional geologic maps and reports. Map data were digitized from regional geologic maps and combined into a composite map using a geographic information system. This map was simplified to show 10 laterally extensive hydrogeologic units with distinct hydrologic properties. The hydraulic conductivity values for the hydrogeologic units range over 15 orders of magnitude due to the variability in burial depth and degree of fracturing.

  16. Thermal mapping of mountain slopes on Mars by application of a Differential Apparent Thermal Inertia technique

    NASA Astrophysics Data System (ADS)

    Kubiak, Marta; Mège, Daniel; Gurgurewicz, Joanna; Ciazela, Jakub

    2015-04-01

    Thermal inertia (P) is an important property of geologic surfaces that essentially describes the resistance to temperature (T) change as heat is added. Most remote sensing data describe the surface only. P is a volume property that is sensitive to the composition of the subsurface, down to a depth reached by the diurnal heating wave. As direct measurement of P is not possible on Mars, thermal inertia models (Fergason et al., 2006) and deductive methods (the Apparent Thermal Inertia: ATI and Differential Apparent Thermal Inertia: DATI) are used to estimate it. ATI is computed as (1 - A) / (Tday - Tnight), where A is albedo. Due to the lack of thermal daytime images with maximum land surface temperature (LST) and nighttime images with minimum LST in the Valles Marineris region, the ATI method is difficult to apply. Instead, we have explored the DATI technique (Sabol et al., 2006). DATI is calculated based on shorter time (t) intervals with a high |ΔT/Δt| gradient (in the morning or in the afternoon) and is proportional to the day/night temperature difference (ATI), and hence P. Mars, which exhibits exceptionally high |ΔT/Δt| gradients due to the lack of vegetation and its thin atmosphere, is especially suitable for the DATI approach. Here we present a new deductive method for high-resolution differential apparent thermal inertia (DATI) mapping for areas of highly contrasted relief (e.g., Valles Marineris). In contrast to the thermal inertia models, our method takes local relief characteristics (slopes and aspects) into account. This is crucial as topography highly influences A and ΔT measurements. In spite of the different approach, DATI values in the flat areas are in the same range as the values obtained by Fergason et al. (2006). They provide, however, more accurate information for geological interpretations of hilly or mountainous terrains. Sabol, D. E., Gillespie, A. R., McDonald, E., and Danilina, I., 2006. Differential Thermal Inertia of Geological Surfaces. In
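
    A minimal sketch of the ATI expression quoted above and of a DATI-like variant built from a short-interval temperature difference; the array names, the time interval and the absence of any slope/aspect correction are simplifying assumptions, not the authors' implementation.

```python
import numpy as np

def ati(albedo, t_day, t_night):
    """Apparent thermal inertia: (1 - A) / (Tday - Tnight), temperatures in K."""
    return (1.0 - albedo) / (t_day - t_night)

def dati(albedo, t1, t2, dt_hours):
    """DATI-like index from two temperatures separated by a short interval
    chosen where the |dT/dt| gradient is high (e.g. mid-morning)."""
    return (1.0 - albedo) / np.abs((t2 - t1) / dt_hours)
```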

  17. Using a Concept Mapping Tool with a Photograph Association Technique (CoMPAT) to Elicit Children's Ideas about Microbial Activity

    NASA Astrophysics Data System (ADS)

    Byrne, Jenny; Grace, Marcus

    2010-03-01

    Concept mapping is a technique used to provide a visual representation of an individual's ideas about a concept or set of related concepts. This paper describes a concept mapping tool using a photograph association technique (CoMPAT) that is considered to be a novel way of eliciting children's ideas. What children at 11 years of age know about particular concepts related to microbial activity is presented and discussed in relation to the effectiveness of CoMPAT as a tool to draw out their ideas. It is proposed that this tool could be used to explore ideas about this and other science concepts from children in other age groups, and where language may be a barrier to participation.

  18. Nonlinear mapping technique for data visualization and clustering assessment of LIBS data: application to ChemCam data.

    PubMed

    Lasue, J; Wiens, R C; Stepinski, T F; Forni, O; Clegg, S M; Maurice, S

    2011-07-01

    ChemCam is a remote laser-induced breakdown spectroscopy (LIBS) instrument that will arrive on Mars in 2012, on-board the Mars Science Laboratory Rover. The LIBS technique is crucial to accurately identify samples and quantify elemental abundances at various distances from the rover. In this study, we compare different linear and nonlinear multivariate techniques to visualize and discriminate clusters in two dimensions (2D) from the data obtained with ChemCam. We have used principal components analysis (PCA) and independent components analysis (ICA) for the linear tools and compared them with the nonlinear Sammon's map projection technique. We demonstrate that the Sammon's map gives the best 2D representation of the data set, with optimization values from 2.8% to 4.3% (0% is a perfect representation), together with an entropy value of 0.81 for the purity of the clustering analysis. The linear 2D projections result in three (ICA) and five times (PCA) more stress, and their clustering is markedly less pure, with entropy values of about 1.8, more than twice that of the Sammon's map. We show that the Sammon's map algorithm is faster and gives a slightly better representation of the data set if the initial conditions are taken from the ICA projection rather than the PCA projection. We conclude that the nonlinear Sammon's map projection is the best technique for combining data visualization and clustering assessment of the ChemCam LIBS data in 2D. PCA and ICA projections on more dimensions would improve on these numbers at the cost of the intuitive interpretation of the 2D projection by a human operator. PMID:21331488
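
    A hedged sketch of the comparison logic: linear 2D projections (PCA, ICA) are scored by their Sammon stress, the quantity the nonlinear Sammon map minimises. The random matrix standing in for the LIBS spectra is an assumption; this is not the ChemCam pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.decomposition import PCA, FastICA

def sammon_stress(X, Y):
    """Sammon stress between original data X and 2D embedding Y (0 = perfect)."""
    d = pdist(X)          # pairwise distances in the original space
    D = pdist(Y)          # pairwise distances in the 2D projection
    mask = d > 0
    return np.sum((d[mask] - D[mask]) ** 2 / d[mask]) / np.sum(d[mask])

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 50))     # stand-in for a matrix of LIBS spectra
X[:100, :5] += 3.0                 # crude two-cluster structure for illustration
for name, model in [("PCA", PCA(n_components=2)),
                    ("ICA", FastICA(n_components=2, random_state=0, max_iter=1000))]:
    Y = model.fit_transform(X)
    print(name, sammon_stress(X, Y))
```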

  19. Moderately luminous Type II supernovae

    NASA Astrophysics Data System (ADS)

    Inserra, C.; Pastorello, A.; Turatto, M.; Pumo, M. L.; Benetti, S.; Cappellaro, E.; Botticella, M. T.; Bufano, F.; Elias-Rosa, N.; Harutyunyan, A.; Taubenberger, S.; Valenti, S.; Zampieri, L.

    2013-07-01

    Context. Core-collapse Supernovae (CC-SNe) descend from progenitors more massive than about 8 M⊙. Because of the young age of the progenitors, the ejecta may eventually interact with the circumstellar medium (CSM) via highly energetic processes detectable in the radio, X-ray, ultraviolet (UV) and, sometimes, in the optical domains. Aims: In this paper we present ultraviolet, optical and near infrared observations of five Type II SNe, namely SNe 2009dd, 2007pk, 2010aj, 1995ad, and 1996W. Together with a few other SNe, they form a group of moderately luminous Type II events. We investigate the photometric similarities and differences among these bright objects. We also attempt to characterise them by analysing the spectral evolutions, in order to find some traces of CSM-ejecta interaction. Methods: We collected photometry and spectroscopy with several telescopes in order to construct well-sampled light curves and spectral evolutions from the photospheric to the nebular phases. Both photometry and spectroscopy indicate a degree of heterogeneity in this sample. Modelling the data of SNe 2009dd, 2010aj and 1995ad allows us to constrain the explosion parameters and the properties of the progenitor stars. Results: The light curves have luminous peak magnitudes (-16.95 < M_B < -18.70). The ejected masses of ⁵⁶Ni for three SNe span a wide range of values (2.8 × 10⁻² M⊙ < M(⁵⁶Ni) < 1.4 × 10⁻¹ M⊙), while for a fourth (SN 2010aj) we could determine a stringent upper limit (7 × 10⁻³ M⊙). Clues of interaction, such as the presence of high velocity (HV) features of the Balmer lines, are visible in the photospheric spectra of SNe 2009dd and 1996W. For SN 2007pk we observe a spectral transition from a Type IIn to a standard Type II SN. Modelling the observations of SNe 2009dd, 2010aj and 1995ad with radiation hydrodynamics codes, we infer kinetic plus thermal energies of about 0.2-0.5 foe, initial radii of 2-5 × 10¹³ cm and ejected masses of ~5.0-9.5 M⊙. Conclusions: These

  20. Mapping the petroleum system - An investigative technique to explore the hydrocarbon fluid system

    USGS Publications Warehouse

    Magoon, L.B.; Dow, W.G.

    2000-01-01

    Creating a petroleum system map includes a series of logical steps that require specific information to explain the origin in time and space of discovered hydrocarbon occurrences. If used creatively, this map provides a basis on which to develop complementary plays and prospects. The logical steps include the characterization of a petroleum system (that is, to identify, map, and name the hydrocarbon fluid system) and the summary of these results on a folio sheet. A petroleum system map is based on the understanding that there are several levels of certainty from "guessing" to "knowing" that specific oil and gas accumulations emanated from a particular pod of active source rock. Levels of certainty start with the close geographic proximity of two or more accumulations, continue with the close stratigraphic proximity, followed by the similarities in bulk properties, and then detailed geochemical properties. The highest level of certainty includes the positive geochemical correlation of the hydrocarbon fluid in the accumulations to the extract of the active source rock. A petroleum system map is created when the following logic is implemented. Implementation starts when the oil and gas accumulations of a petroleum province are grouped stratigraphically and geographically. Bulk and geochemical properties are used to further refine the groups through the determination of genetically related oil and gas types. To this basic map, surface seeps and well shows are added. Similarly, the active source rock responsible for these hydrocarbon occurrences is mapped to further define the extent of the system. A folio sheet constructed for a hypothetical case study of the Deer-Boar(.) petroleum system illustrates this methodology.

  1. General comparison of ozone vertical profiles obtained by various techniques during the 1983 MAP/GLOBUS campaign

    NASA Technical Reports Server (NTRS)

    Matthews, W. A.; Aimedieu, P.; Megie, G.; Pelon, J.; Attmannspacher, W.; Komhyr, W.; Marche, P.; De La Noe, J.; Rigaud, P.; Robbins, D. E.

    1987-01-01

    As part of the 1983 MAP/GLOBUS campaign, atmospheric ozone profile measurements were made using a large variety of different techniques both from balloon platforms and the ground. It is shown that, for most techniques, the measured height distributions agree to within ±5 percent with the exception of the remote visible absorption method. This ±5 percent uncertainty is of the order of the individual intersystem accuracy. It is suggested that since the differences with the visible absorption method are in magnitude rather than in form, the absorption cross-section data could be the possible cause for the discrepancy.

  2. Applying Value Stream Mapping Technique for Production Improvement in a Manufacturing Company: A Case Study

    NASA Astrophysics Data System (ADS)

    Jeyaraj, K. L.; Muralidharan, C.; Mahalingam, R.; Deshmukh, S. G.

    2013-01-01

    The purpose of this paper is to explain how value stream mapping (VSM) is helpful in lean implementation and to develop the road map to tackle improvement areas to bridge the gap between the existing state and the proposed state of a manufacturing firm. Through this case study, the existing stage of manufacturing is mapped with the help of VSM process symbols and the biggest improvement areas like excessive TAKT time, production, and lead time are identified. Some modifications in current state map are suggested and with these modifications future state map is prepared. Further TAKT time is calculated to set the pace of production processes. This paper compares the current state and future state of a manufacturing firm and witnessed 20 % reduction in TAKT time, 22.5 % reduction in processing time, 4.8 % reduction in lead time, 20 % improvement in production, 9 % improvement in machine utilization, 7 % improvement in man power utilization, objective improvement in workers skill level, and no change in the product and semi finished product inventory level. The findings are limited due to the focused nature of the case study. This case study shows that VSM is a powerful tool for lean implementation and allows the industry to understand and continuously improve towards lean manufacturing.
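
    For readers unfamiliar with the takt-time calculation referred to above, the following is an illustrative example using invented figures (not taken from the case study): takt time is net available working time per period divided by customer demand for that period.

```python
# Two 8-hour shifts per day minus two 30-minute breaks, against a daily demand
# of 180 units; all numbers are hypothetical.
net_available_minutes = 2 * 8 * 60 - 2 * 30
daily_demand_units = 180
takt_time_minutes = net_available_minutes / daily_demand_units
print(f"Takt time: {takt_time_minutes:.2f} min per unit")   # 5.00 min per unit
```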

  3. Frequency analysis of a task-evoked pupillary response: Luminance-independent measure of mental effort.

    PubMed

    Peysakhovich, Vsevolod; Causse, Mickaël; Scannella, Sébastien; Dehais, Frédéric

    2015-07-01

    Pupil diameter is a widely studied cognitive load measure which, despite its convenience for non-intrusive operator state monitoring in complex environments, is still not available for in situ measurements because of numerous methodological limitations. The most important of these limitations is the influence of the pupillary light reflex. Hence, there is a need to provide a pupil-based cognitive load measure that is independent of light conditions. In this paper, we present a promising technique of pupillary signal analysis resulting in a luminance-independent measure of mental effort that could be used in real time without a priori assumptions about lighting conditions. Twenty-two participants performed a short-term memory task under different screen luminance conditions. Our results showed that the amplitude of pupillary dilation due to load on memory was luminance-dependent, with higher amplitude corresponding to the lower-luminance condition. Furthermore, our experimentation showed that load on memory and luminance factors express themselves differently according to frequency. Therefore, as our statistical analysis revealed, the ratio between the low (0-1.6 Hz) and high frequency (1.6-4 Hz) bands (LF/HF ratio) of the power spectral density of the pupillary signal is sensitive to cognitive load but not to luminance. Our results are promising for the measurement of load on memory in ecological settings. PMID:25941013
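
    A hedged sketch of the band-ratio measure described above: the power spectral density of a pupil-diameter trace is estimated with Welch's method and the power in a low band (0-1.6 Hz) is divided by the power in a high band (1.6-4 Hz). The sampling rate and the synthetic signal are assumptions for illustration, not the study's recording setup.

```python
import numpy as np
from scipy.signal import welch

def lf_hf_ratio(pupil, fs, split_hz=1.6, high_hz=4.0):
    """LF/HF power ratio of a pupillary signal sampled at fs Hz."""
    f, pxx = welch(pupil, fs=fs, nperseg=min(len(pupil), 4 * int(fs)))
    low = (f >= 0) & (f < split_hz)
    high = (f >= split_hz) & (f <= high_hz)
    return np.trapz(pxx[low], f[low]) / np.trapz(pxx[high], f[high])

fs = 60.0                                   # e.g. a 60 Hz eye tracker
t = np.arange(0, 120, 1 / fs)
rng = np.random.default_rng(2)
pupil = 4 + 0.2 * np.sin(2 * np.pi * 0.3 * t) + 0.05 * rng.normal(size=t.size)
print(lf_hf_ratio(pupil, fs))
```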

  4. Dust near luminous ultraviolet stars

    NASA Technical Reports Server (NTRS)

    Henry, Richard C.

    1993-01-01

    This report describes research activities related to the Infrared Astronomical Satellite (IRAS) sky survey. About 745 luminous stars were examined for the presence of interstellar dust heated by a nearby star. The 'cirrus' discovered by IRAS is thermal radiation from interstellar dust at moderate and high galactic latitudes. IRAS locates the dust, which must (at some level) scatter ultraviolet starlight. Although it was expected that thermal emission would be found around virtually every star, most stars showed no detectable emission, and the emission that was found is not uniform. It is not that the star is embedded in 'an interstellar medium'; rather, what is found are discrete clouds that are heated by starlight. An exception is the dearth of clouds near the very hottest stars, implying that the very hottest stars play an active role with respect to destroying or substantially modifying the dust clouds over time. The other possibility is simply that the hottest stars are located in regions lacking in dust, which is counter-intuitive. A bibliography of related journal articles is attached.

  5. Dust near luminous ultraviolet stars

    NASA Technical Reports Server (NTRS)

    Henry, Richard C.

    1992-01-01

    More than 700 luminous stars in the infrared astronomical satellite (IRAS) Skyflux plates were examined for the presence of dust heated by a nearby star. This dust may be distinguished from the ubiquitous cool cirrus by its higher temperature and thus enhanced 60 micron emission. More than 120 dust clouds were found around only 106 of the stars with a volume filling factor of 0.006 and an intercloud separation of 46 pc. A region of dust smoothly distributed through the volume of space heated by the star could not be found and hence an upper limit of 0.05 cm⁻³ is placed on the equivalent gas density in the intercloud regions. The clouds have an average density of 0.22 cm⁻³ and a radius of 1.9 pc, albeit with wide variations in their properties. Two different scale heights of 140 and 540 pc were found. This was interpreted as evidence for different distributions of dust in and out of the galactic disk.

  6. Processing techniques for the production of an experimental computer-generated shaded-relief map

    USGS Publications Warehouse

    Judd, Damon D.

    1986-01-01

    The data consisted of forty-eight 1° by 1° blocks of resampled digital elevation model (DEM) data. These data were digitally mosaicked and assigned colors based on intervals of elevation values. The color-coded data set was then used to create a shaded-relief image that was photographically composited with cartographic line information to produce a shaded-relief map. The majority of the processing was completed at the National Mapping Division EROS Data Center in Sioux Falls, South Dakota.
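
    The record describes a DEM-based shaded-relief workflow done photographically; the sketch below shows the standard slope/aspect hillshade formula as it would be computed numerically today, purely as an illustration of the underlying calculation (cell size and illumination angles are assumptions).

```python
import numpy as np

def hillshade(dem, cellsize=30.0, azimuth_deg=315.0, altitude_deg=45.0):
    """Return a 0-255 shaded-relief image from a DEM (2D elevation array)."""
    az = np.radians(360.0 - azimuth_deg + 90.0)      # illuminant azimuth, math convention
    alt = np.radians(altitude_deg)                   # illuminant altitude above horizon
    dzdy, dzdx = np.gradient(dem, cellsize)
    slope = np.arctan(np.hypot(dzdx, dzdy))
    aspect = np.arctan2(-dzdx, dzdy)
    shaded = (np.sin(alt) * np.cos(slope)
              + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(255 * shaded, 0, 255).astype(np.uint8)
```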

  7. Mapping of Primary Instructional Methods and Teaching Techniques for Regularly Scheduled, Formal Teaching Sessions in an Anesthesia Residency Program.

    PubMed

    Vested Madsen, Matias; Macario, Alex; Yamamoto, Satoshi; Tanaka, Pedro

    2016-06-01

    In this study, we examined the regularly scheduled, formal teaching sessions in a single anesthesiology residency program to (1) map the most common primary instructional methods, (2) map the use of 10 known teaching techniques, and (3) assess if residents scored sessions that incorporated active learning as higher quality than sessions with little or no verbal interaction between teacher and learner. A modified Delphi process was used to identify useful teaching techniques. A representative sample of each of the formal teaching session types was mapped, and residents anonymously completed a 5-question written survey rating the session. The most common primary instructional methods were computer slides-based classroom lectures (66%), workshops (15%), simulations (5%), and journal club (5%). The number of teaching techniques used per formal teaching session averaged 5.31 (SD, 1.92; median, 5; range, 0-9). Clinical applicability (85%) and attention grabbers (85%) were the 2 most common teaching techniques. Thirty-eight percent of the sessions defined learning objectives, and one-third of sessions engaged in active learning. The overall survey response rate equaled 42%, and passive sessions had a mean score of 8.44 (range, 5-10; median, 9; SD, 1.2) compared with a mean score of 8.63 (range, 5-10; median, 9; SD, 1.1) for active sessions (P = 0.63). Slides-based classroom lectures were the most common instructional method, and faculty used an average of 5 known teaching techniques per formal teaching session. The overall education scores of the sessions as rated by the residents were high. PMID:27243580

  8. Low-Altitude AVIRIS Data for Mapping Land Cover in Yellowstone National Park: Use of Isodata Clustering Techniques

    NASA Technical Reports Server (NTRS)

    Spruce, Joseph P.

    2001-01-01

    Northeast Yellowstone National Park (YNP) has a diversity of forest, range, and wetland cover types. Several remote sensing studies have recently been done in this area, including the NASA Earth Observations Commercial Applications Program (EOCAP) hyperspectral project conducted by Yellowstone Ecosystems Studies (YES) on the use of hyperspectral imaging for assessing riparian and in-stream habitats. In 1999, YES and NASA's Commercial Remote Sensing Program Office began collaborative study of this area, assessing the potential of synergistic use of hyperspectral, synthetic aperture radar (SAR), and multiband thermal data for mapping forest, range, and wetland land cover. Since the beginning, a quality 'reference' land cover map has been desired as a tool for developing and validating other land cover maps produced during the project. This paper recounts an effort to produce such a reference land cover map using low-altitude Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data and unsupervised classification techniques. The main objective of this study is to assess ISODATA classification for mapping land cover in Northeast YNP using select bands of low-altitude AVIRIS data. A secondary, more long-term objective is to assess the potential for improving ISODATA-based classification of land cover through use of principal components analysis and minimum noise fraction (MNF) techniques. This paper will primarily report on work regarding the primary research objective. This study focuses on an AVIRIS cube acquired on July 23, 1999, by the confluence of Soda Butte Creek with the Lamar River. Range and wetland habitats dominate the image with forested habitats being a comparatively minor component of the scene. The scene generally tracks from southwest to northeast. Most of the scene is valley bottom with some lower side slopes occurring on the western portion. Elevations within the AVIRIS scene range from approximately 1998 to 2165 m above sea level, based on US
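
    A hedged sketch of the unsupervised classification step: scikit-learn has no ISODATA implementation, so k-means is used here as a stand-in clustering of selected AVIRIS bands (ISODATA additionally splits and merges clusters between iterations). The band-cube shape and class count are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_bands(cube, n_classes=10, random_state=0):
    """cube: (rows, cols, bands) reflectance array -> (rows, cols) class map."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands)                       # one sample per pixel
    labels = KMeans(n_clusters=n_classes, n_init=10,
                    random_state=random_state).fit_predict(X)
    return labels.reshape(rows, cols)

# Usage (hypothetical variable): class_map = cluster_bands(aviris_subset, n_classes=15)
```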

  9. Comparison of regression and kriging techniques for mapping the average annual precipitation of Turkey

    NASA Astrophysics Data System (ADS)

    Bostan, P. A.; Heuvelink, G. B. M.; Akyurek, S. Z.

    2012-10-01

    Accurate mapping of the spatial distribution of annual precipitation is important for many applications in hydrology, climatology, agronomy, ecology and other environmental sciences. In this study, we compared five different statistical methods to predict spatially the average annual precipitation of Turkey using point observations of annual precipitation at meteorological stations and spatially exhaustive covariate data (i.e. elevation, aspect, surface roughness, distance to coast, land use and eco-region). The methods compared were multiple linear regression (MLR), ordinary kriging (OK), regression kriging (RK), universal kriging (UK), and geographically weighted regression (GWR). Average annual precipitation of Turkey from 1970 to 2006 was measured at 225 meteorological stations that are fairly uniformly distributed across the country, with a somewhat higher spatial density along the coastline. The observed annual precipitation varied between 255 mm and 2209 mm with an average of 628 mm. The annual precipitation was highest along the southern and northern coasts and low in the centre of the country, except for the area near the Van Lake, Keban and Ataturk Dams. To compare the performance of the interpolation techniques the total dataset was first randomly split in ten equally sized test datasets. Next, for each test data set the remaining 90% of the data comprised the training dataset. Each training dataset was then used to calibrate and apply the spatial prediction model. Predictions at the test dataset locations were compared with the observed test data. Validation was done by calculating the Root Mean Squared Error (RMSE), R-square and Standardized MSE (SMSE) values. According to these criteria, universal kriging is the most accurate with an RMSE of 178 mm, an R-square of 0.61 and an SMSE of 1.06, whilst multiple linear regression performed worst (RMSE of 222 mm, R-square of 0.39, and SMSE of 1.44). Ordinary kriging, UK using only elevation and geographically
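
    A hedged sketch of the validation scheme, applied to the multiple-linear-regression baseline only: a random 10-fold split, predictions on each held-out fold, then RMSE and R-square. The SMSE criterion also needs the model's prediction variances, so it is omitted here, and the covariate matrix is a placeholder.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error, r2_score

def cross_validate_mlr(X, y, n_splits=10, seed=0):
    """X: station covariates (elevation, distance to coast, ...); y: annual precipitation (mm)."""
    obs, pred = [], []
    for train, test in KFold(n_splits=n_splits, shuffle=True, random_state=seed).split(X):
        model = LinearRegression().fit(X[train], y[train])
        obs.append(y[test])
        pred.append(model.predict(X[test]))
    obs, pred = np.concatenate(obs), np.concatenate(pred)
    rmse = np.sqrt(mean_squared_error(obs, pred))
    return rmse, r2_score(obs, pred)
```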

  10. Implementation of Concept Mapping to Novices: Reasons for Errors, a Matter of Technique or Content?

    ERIC Educational Resources Information Center

    Conradty, Catherine; Bogner, Franz X.

    2010-01-01

    Concept mapping is discussed as a means to promote meaningful learning and in particular progress in reading comprehension skills. Its increasing implementation necessitates the acquisition of adequate knowledge about frequent errors in order to make available an effective introduction to the new learning method. To analyse causes of errors, 283…

  11. Colony mapping: A new technique for monitoring crevice-nesting seabirds

    USGS Publications Warehouse

    Renner, H.M.; Renner, M.; Reynolds, J.H.; Harping, A.M.A.; Jones, I.L.; Irons, D.B.; Byrd, G.V.

    2006-01-01

    Monitoring populations of auklets and other crevice-nesting seabirds remains problematic, although numerous methods have been attempted since the mid-1960s. Anecdotal evidence suggests several large auklet colonies have recently decreased in both abundance and extent, concurrently with vegetation encroachment and succession. Quantifying changes in the geographical extent of auklet colonies may be a useful alternative to monitoring population size directly. We propose a standardized method for colony mapping using a randomized systematic grid survey with two components: a simple presence/absence survey and an auklet evidence density survey. A quantitative auklet evidence density index was derived from the frequency of droppings and feathers. This new method was used to map the colony on St. George Island in the southeastern Bering Sea and results were compared to previous colony mapping efforts. Auklet presence was detected in 62 of 201 grid cells (each grid cell = 2500 m²) by sampling a randomly placed 16 m² plot in each cell; estimated colony area = 155 000 m². The auklet evidence density index varied by two orders of magnitude across the colony and was strongly correlated with means of replicated counts of birds socializing on the colony surface. Quantitatively mapping all large auklet colonies is logistically feasible using this method and would provide an important baseline for monitoring colony status. Regularly monitoring select colonies using this method may be the best means of detecting changes in distribution and population size of crevice-nesting seabirds. © The Cooper Ornithological Society 2006.
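
    An arithmetic check of the colony-area estimate quoted above, using only the figures given in the abstract.

```python
cells_with_presence = 62
cell_area_m2 = 2500
estimated_colony_area = cells_with_presence * cell_area_m2
print(estimated_colony_area)   # 155000 m^2, matching the reported value
```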

  12. ERTS-1 imagery interpretation techniques in the Tennessee Valley. [land use and soil mapping

    NASA Technical Reports Server (NTRS)

    Bodenheimer, R. E. (Principal Investigator)

    1974-01-01

    The author has identified the following significant results. The feasibility of delineating major soil associations and land uses through computerized analyses is discussed. Useful and potential applications in detecting landscape change and land use mapping are described. Recommendations for improving the data processing effort in a multidisciplinary program are presented.

  13. Metateaching and the Instructional Map. Teaching Techniques/Strategies Series, Volume 1.

    ERIC Educational Resources Information Center

    Timpson, William M.

    This book describes a conceptual framework, the "Instructional Map"--a metaphor for envisioning the interconnectedness of teacher and student, teaching and learning, and content and process--that can help teachers plan, sort their way through course material and instructional options, interact with students, and reflect upon progress made and what…

  14. Creating High-Resolution Maps of Leaf Water Isotopes Using IM-CRDS and IRMS Techniques

    NASA Astrophysics Data System (ADS)

    Gerlein-Safdi, C.; Sinkler, C. J.; Caylor, K. K.

    2014-12-01

    Since the development of isotope ratio infrared spectroscopy (IRIS), the applications of water isotope analysis have been increasing. Here, we present a new protocol to create high-resolution maps of the leaf water isotopes ¹⁸O and ²H. We use the Picarro induction module (IM-CRDS) combined with an isotope analyzer (L2130-i) to sample up to 25 locations in one half of each leaf. Each sampling location corresponds to four samples (6 mm outside-diameter punched holes) punched next to each other. In the induction module, an induction coil heats a metal holder containing the leaf sample. The sample releases water vapor that is then sent to the isotope analyzer. The IM-CRDS allows us to significantly reduce the sample size and therefore increase the sample density, compared to the traditional cryogenic extraction method. Using spatial analysis tools, we create high-resolution spatial maps of each isotope as well as d-excess maps. The water in the second half of the leaf is extracted by cryogenic extraction and analyzed using both IRIS and isotope ratio mass spectrometry. The isotopic composition of the extracted water is compared to the average composition calculated from the maps and used for calibration. We present applications of this protocol to the analysis of the spatio-temporal evolution of foliar uptake in Colocasia esculenta under laboratory conditions.
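
    A sketch of turning point measurements into a leaf-scale map: deuterium excess is computed from the two delta values (d = δ²H − 8·δ¹⁸O, the standard definition) and the point values are interpolated onto a grid. The punch-hole coordinates, grid step and choice of cubic interpolation are assumptions, not the authors' spatial-analysis workflow.

```python
import numpy as np
from scipy.interpolate import griddata

def d_excess(delta2h, delta18o):
    """Deuterium excess (per mil) from delta2H and delta18O values."""
    return np.asarray(delta2h) - 8.0 * np.asarray(delta18o)

def leaf_map(xy, values, grid_step=0.5):
    """xy: (n, 2) sampling coordinates in mm; values: isotope values (per mil)."""
    xy = np.asarray(xy)
    gx, gy = np.meshgrid(np.arange(xy[:, 0].min(), xy[:, 0].max(), grid_step),
                         np.arange(xy[:, 1].min(), xy[:, 1].max(), grid_step))
    return griddata(xy, values, (gx, gy), method="cubic")
```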

  15. Using Concept Mapping Techniques to Compare Stakeholder Groups' Perceptions of Tech Prep.

    ERIC Educational Resources Information Center

    Roegge, Chris A.; And Others

    A study was conducted to develop a conceptual framework for tech prep programs based on the perceptions of personnel involved in the planning and implementation of local tech prep programs. A structured conceptualization process called concept mapping was used to develop a pictorial representation of stakeholders' perceptions of tech prep. Using a…

  16. Planning and Measuring Luminance Contrast in Staircases.

    PubMed

    Houck, Leif D; Gundersen, Kristoffer; Strengen, Ola

    2016-01-01

    Norwegian legislation has requirements concerning luminance contrast for different elements in staircases. This paper investigates how architects work to meet the requirements and how the luminance contrasts actually built can be measured; finally, 21 staircases are measured using two different methods. The results show that some architects do not reflect on luminance contrasts at all, some use their "experience", and some try to measure the reflectance values of different materials during planning. The investigations also reveal that there is no official, predefined way to verify luminance contrast, and that different approaches will give different results. To measure the built staircases, it was necessary to develop a defined measurement method. The results of the measurements generally show that only a few of the staircases studied fully meet the legislation requirements. PMID:27534331
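
    For context, a hedged sketch of the quantity under discussion. Definitions of luminance contrast differ between standards; two common forms (Weber and Michelson) are shown, computed from measured luminances or reflectance factors. Which form the Norwegian requirements intend is not asserted here, and the example values are invented.

```python
def weber_contrast(l_object, l_background):
    """(L1 - L2) / L2, with L2 the background (lower-luminance) surface."""
    return (l_object - l_background) / l_background

def michelson_contrast(l1, l2):
    """(Lmax - Lmin) / (Lmax + Lmin), symmetric in the two surfaces."""
    lmax, lmin = max(l1, l2), min(l1, l2)
    return (lmax - lmin) / (lmax + lmin)

# Example: a step nosing of 80 cd/m^2 against a tread of 20 cd/m^2 gives
# Weber contrast 3.0 and Michelson contrast 0.6.
print(weber_contrast(80, 20), michelson_contrast(80, 20))
```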

  17. Detection and mapping of illicit drugs and their metabolites in fingermarks by MALDI MS and compatibility with forensic techniques.

    PubMed

    Groeneveld, G; de Puit, M; Bleay, S; Bradshaw, R; Francese, S

    2015-01-01

    Despite the proven capabilities of Matrix Assisted Laser Desorption Ionisation Mass Spectrometry (MALDI MS) in laboratory settings, research is still needed to integrate this technique into current forensic fingerprinting practice. Optimised protocols enabling the compatible application of MALDI to developed fingermarks will allow additional intelligence to be gathered around a suspect's lifestyle and activities prior to the deposition of their fingermarks while committing a crime. The detection and mapping of illicit drugs and metabolites in latent fingermarks would provide intelligence that is beneficial for both police investigations and court cases. This study investigated MALDI MS detection and mapping capabilities for a large range of drugs of abuse and their metabolites in fingermarks; the detection and mapping of a mixture of these drugs in marks, with and without prior development with cyanoacrylate fuming or Vacuum Metal Deposition, was also examined. Our findings indicate the versatility of MALDI technology and its ability to retrieve chemical intelligence either by detecting the compounds investigated or by using their ion signals to reconstruct 2D maps of fingermark ridge details. PMID:26118853

  18. Detection and mapping of illicit drugs and their metabolites in fingermarks by MALDI MS and compatibility with forensic techniques

    PubMed Central

    Groeneveld, G.; de Puit, M.; Bleay, S.; Bradshaw, R.; Francese, S.

    2015-01-01

    Despite the proven capabilities of Matrix Assisted Laser Desorption Ionisation Mass Spectrometry (MALDI MS) in laboratory settings, research is still needed to integrate this technique into current forensic fingerprinting practice. Optimised protocols enabling the compatible application of MALDI to developed fingermarks will allow additional intelligence to be gathered around a suspect’s lifestyle and activities prior to the deposition of their fingermarks while committing a crime. The detection and mapping of illicit drugs and metabolites in latent fingermarks would provide intelligence that is beneficial for both police investigations and court cases. This study investigated MALDI MS detection and mapping capabilities for a large range of drugs of abuse and their metabolites in fingermarks; the detection and mapping of a mixture of these drugs in marks, with and without prior development with cyanoacrylate fuming or Vacuum Metal Deposition, was also examined. Our findings indicate the versatility of MALDI technology and its ability to retrieve chemical intelligence either by detecting the compounds investigated or by using their ion signals to reconstruct 2D maps of fingermark ridge details. PMID:26118853

  19. Detection and mapping of illicit drugs and their metabolites in fingermarks by MALDI MS and compatibility with forensic techniques

    NASA Astrophysics Data System (ADS)

    Groeneveld, G.; de Puit, M.; Bleay, S.; Bradshaw, R.; Francese, S.

    2015-06-01

    Despite the proven capabilities of Matrix Assisted Laser Desorption Ionisation Mass Spectrometry (MALDI MS) in laboratory settings, research is still needed to integrate this technique into current forensic fingerprinting practice. Optimised protocols enabling the compatible application of MALDI to developed fingermarks will allow additional intelligence to be gathered around a suspect’s lifestyle and activities prior to the deposition of their fingermarks while committing a crime. The detection and mapping of illicit drugs and metabolites in latent fingermarks would provide intelligence that is beneficial for both police investigations and court cases. This study investigated MALDI MS detection and mapping capabilities for a large range of drugs of abuse and their metabolites in fingermarks; the detection and mapping of a mixture of these drugs in marks, with and without prior development with cyanoacrylate fuming or Vacuum Metal Deposition, was also examined. Our findings indicate the versatility of MALDI technology and its ability to retrieve chemical intelligence either by detecting the compounds investigated or by using their ion signals to reconstruct 2D maps of fingermark ridge details.

  20. Compact radio sources in luminous infrared galaxies

    NASA Astrophysics Data System (ADS)

    Parra, Rodrigo

    2007-08-01

    Radio interferometry is an observational technique of high sensitivity and incomparably high spatial resolution. Moreover, because radio waves can freely propagate through interstellar dust and gas, it allows the study of regions of the universe completely obscured at other wavelengths. This thesis reports the observational and theoretical results of my research during the past four years, which are mostly based on interferometric radio data. The COLA sample is an infrared-selected sample of active star-forming galaxies. We conducted 6 cm VLA and VLBI snapshot observations of the northern half of this sample. The radio emission seen at VLA scales is consistent with being powered by star formation activity because it follows the far infrared to radio correlation. We detect 22% of the sample sources in our VLBI snapshots. Based on luminosity arguments, we argue that these sub-parsec VLBI sources are powered by AGN activity. Furthermore, we find that VLBI detections are preferentially found in sources whose VLA scale structures have the highest peak brightnesses, suggesting a strong correlation between compact starburst and AGN activity. This observational result is consistent with the theoretical picture of an Eddington-limited nuclear starburst acting as the last valve in the pipeline transporting the gas from kiloparsec scales onto the accretion disc of a buried AGN. Arp 220 is the archetypical ultra luminous infrared galaxy. For many years this source has been known to harbour a compact (~100 pc) cluster of unresolved 18 cm bright sources believed to be bright core collapse supernovae. Using multiwavelength VLBI observations, we obtained for the first time radio spectra for 18 of these sources. We find that over half of them have spectra consistent with young supernovae. The rest can be better explained as older supernova remnants interacting with the high density starburst ISM. This finding allowed us to constrain the number of possible scenarios for the Arp 220

  1. STAR FORMATION IN TWO LUMINOUS SPIRAL GALAXIES

    SciTech Connect

    Hunter, Deidre A.; Ashburn, Allison; Wright, Teresa; Elmegreen, Bruce G.; Rubin, Vera C.; Józsa, Gyula I. G.; Struve, Christian

    2013-10-01

    We examined star formation in two very luminous (M_V = –22 to –23) Sc-type spiral galaxies, NGC 801 and UGC 2885, using ultra-deep Hα images. We combine these Hα images with UBV and Two-Micron All-Sky Survey JHK images and H I maps to explore the star formation characteristics of disk galaxies at high luminosity. Hα traces star formation in these galaxies to 4-6 disk scale lengths, but the lack of detection of Hα further out is likely due to the loss of Lyman continuum photons. Considering gravitational instabilities alone, we find that the gas and stars in the outer regions are marginally stable in an average sense, but considering dissipative gas and radial and azimuthal forcing, the outer regions are marginally unstable to forming spiral arms. Star formation is taking place in spiral arms, which are regions of locally higher gas densities. Furthermore, we have traced smooth exponential stellar disks over four magnitudes in V-band surface brightness and 4-6 disk scale lengths, in spite of a highly variable gravitational instability parameter. Thus, gravitational instability thresholds do not seem relevant to the stellar disk. One possibility for creating an exponential disk is that the molecular cloud densities and star formation rates have exponential profiles and this fact forces the stellar disk to build up such a profile. Another possibility is that the stellar disk is continuously adjusted to an exponential shape regardless of the star formation profile, for example, through global dynamical processes that scatter stars. However, such scattering processes are only known to operate in spiral systems, in which case they cannot explain the same dilemma of smooth exponential disks observed in dwarf irregular galaxies.

  2. Eta Carinae and Other Luminous Blue Variables

    NASA Technical Reports Server (NTRS)

    Corcoran, M. F.

    2006-01-01

    Luminous Blue Variables (LBVs) are believed to be evolved, extremely massive stars close to the Eddington Limit and hence prone to bouts of large-scale, unstable mass loss. I discuss current understanding of the evolutionary state of these objects, the role duplicity may play and known physical characteristics of these stars using the X-ray luminous LBVs Eta Carinae and HD 5980 as test cases.

  3. Mapping the pharyngeal and intestinal pH of Caenorhabditis elegans and real-time luminal pH oscillations using extended dynamic range pH-sensitive nanosensors.

    PubMed

    Chauhan, Veeren M; Orsi, Gianni; Brown, Alan; Pritchard, David I; Aylott, Jonathan W

    2013-06-25

    Extended dynamic range pH-sensitive ratiometric nanosensors, capable of accurately mapping the full physiological pH range, have been developed and used to characterize the pH of the pharyngeal and intestinal lumen of Caenorhabditis elegans in real time. Nanosensors, 40 nm in diameter, were prepared by conjugating pH-sensitive fluorophores, carboxyfluorescein (FAM) and Oregon Green (OG) in a 1:1 ratio, and a reference fluorophore, 5-(and-6)-carboxytetramethylrhodamine (TAMRA), to an inert polyacrylamide matrix. Accurate ratiometric pH measurements were calculated through determination of the fluorescence ratio between the pH-sensitive and reference fluorophores. Nanosensors were calibrated with an automated image analysis system and validated to demonstrate a pH measurement resolution of ±0.17 pH units. The motility of C. elegans populations, as an indicator for viability, showed that nematodes treated with nanosensors, at concentrations ranging from 50.00 to 3.13 mg/mL, were not statistically different from nematodes not challenged with nanosensors up to a period of 4 days (p < 0.05). The nanosensors were also found to remain in the C. elegans lumen >24 h after the nanosensor challenge was removed. The pH of the viable C. elegans lumen was found to range from 5.96 ± 0.31 in the anterior pharynx to 3.59 ± 0.09 in the posterior intestine. The pharyngeal pumping rate, which dictates the transfer of ingested material from the pharynx to the intestine, was found to be temperature dependent. Imaging C. elegans at 4 °C reduced the pharyngeal pumping rate to 7 contractions/min and enabled the reconstruction of rhythmic pH oscillations in the intestinal lumen in real time with fluorescence microscopy. PMID:23668893
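
    A hedged sketch of a ratiometric calibration of the general kind described above: the fluorescence ratio (pH-sensitive channel over reference channel) is fit to a sigmoidal, Boltzmann-type calibration curve, which is then inverted to convert measured ratios to pH. The functional form and any sample data are assumptions, not the paper's automated image-analysis calibration.

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(ph, r_min, r_max, pka, slope):
    """Sigmoidal ratio-vs-pH calibration curve."""
    return r_min + (r_max - r_min) / (1.0 + np.exp((pka - ph) / slope))

def fit_calibration(ph_values, ratios):
    """Fit the calibration curve to measured (pH, ratio) pairs."""
    p0 = [min(ratios), max(ratios), float(np.median(ph_values)), 0.5]
    popt, _ = curve_fit(boltzmann, ph_values, ratios, p0=p0, maxfev=10000)
    return popt            # r_min, r_max, pKa, slope

def ratio_to_ph(ratio, r_min, r_max, pka, slope):
    """Invert the calibration curve for a ratio strictly between r_min and r_max."""
    return pka - slope * np.log((r_max - r_min) / (ratio - r_min) - 1.0)
```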

  4. Simultaneous chromatic and luminance human electroretinogram responses

    PubMed Central

    Parry, Neil R A; Murray, Ian J; Panorgias, Athanasios; McKeefry, Declan J; Lee, Barry B; Kremers, Jan

    2012-01-01

    The parallel processing of information forms an important organisational principle of the primate visual system. Here we describe experiments which use a novel chromatic–achromatic temporal compound stimulus to simultaneously identify colour and luminance specific signals in the human electroretinogram (ERG). Luminance and chromatic components are separated in the stimulus; the luminance modulation has twice the temporal frequency of the chromatic modulation. ERGs were recorded from four trichromatic and two dichromatic subjects (1 deuteranope and 1 protanope). At isoluminance, the fundamental (first harmonic) response was elicited by the chromatic component in the stimulus. The trichromatic ERGs possessed low-pass temporal tuning characteristics, reflecting the activity of parvocellular post-receptoral mechanisms. There was very little first harmonic response in the dichromats’ ERGs. The second harmonic response was elicited by the luminance modulation in the compound stimulus and showed, in all subjects, band-pass temporal tuning characteristic of magnocellular activity. Thus it is possible to concurrently elicit ERG responses from the human retina which reflect processing in both chromatic and luminance pathways. As well as providing a clear demonstration of the parallel nature of chromatic and luminance processing in the human retina, the differences that exist between ERGs from trichromatic and dichromatic subjects point to the existence of interactions between afferent post-receptoral pathways that are in operation from the earliest stages of visual processing. PMID:22586211
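
    A hedged toy illustration of the frequency-tagging logic described above: the luminance component is modulated at twice the temporal frequency of the chromatic component, so the first harmonic of the response isolates the chromatic pathway and the second harmonic the luminance pathway. The synthetic "ERG" and the frequencies are assumptions, not the recorded data.

```python
import numpy as np

fs, dur = 1000.0, 2.0                       # sampling rate (Hz), duration (s)
f_chrom = 4.0                               # chromatic modulation frequency
f_lum = 2 * f_chrom                         # luminance modulation at twice that frequency
t = np.arange(0, dur, 1 / fs)
erg = 1.0 * np.sin(2 * np.pi * f_chrom * t) + 0.5 * np.sin(2 * np.pi * f_lum * t)

spectrum = np.abs(np.fft.rfft(erg)) / (len(t) / 2)     # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(len(t), 1 / fs)
first_harmonic = spectrum[np.argmin(np.abs(freqs - f_chrom))]   # chromatic response
second_harmonic = spectrum[np.argmin(np.abs(freqs - f_lum))]    # luminance response
print(first_harmonic, second_harmonic)      # ~1.0 and ~0.5 for this toy signal
```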

  5. Techniques for the creation of land use maps and tabulations from Landsat imagery

    NASA Technical Reports Server (NTRS)

    Angelici, G. L.; Bryant, N. A.

    1977-01-01

    Methods for creating color thematic maps and land use tabulations, employing both Landsat imagery and computer image processing, are discussed. The system, the Multiple Input Land Use System (MILUS), has been tested in the metropolitan section of Dayton, Ohio. Training areas for land use were first digitized by coordinates and then transformed onto an image of white lines on a black background. This image was added to a Landsat image of the same area. Then multispectral classification was performed. A tape of digitized census tract boundaries was computer interfaced to yield an image of tract boundaries on a background registered to the thematic land-use map. Using a data management system, the data were then used to produce figures for the area and percent of land use in each tract. Future work is expected to convert most of the steps into interactive processing. This would greatly reduce the time needed to edit and register the data sets.

  6. On extracting brightness temperature maps from scanning radiometer data. [techniques for algorithm design

    NASA Technical Reports Server (NTRS)

    Argentiero, P.; Garza-Robles, R.

    1980-01-01

    The extraction of brightness temperature maps from scanning radiometer data is described as a typical linear inverse problem. Spatial quantization and parameter estimation are described and suggested as an advantageous approach to a solution. Since this approach takes into explicit account the multivariate nature of the problem, it permits an accurate determination of the most detailed resolution extractable from the data as well as explicitly defining the possible compromises between accuracy and resolution. To illustrate the usefulness of the method described for algorithm design and accuracy prediction, it was applied to the problem of providing brightness temperature maps during the NOSS flight segment. The most detailed possible resolution was determined and a curve which displays the possible compromises between accuracy and resolution was provided.
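
    As a rough illustration of the linear inverse problem described (not the authors' algorithm), the sketch below models each radiometer sample as an antenna-weighted average of a spatially quantized brightness-temperature map and recovers the map by regularized least squares; the weighting matrix, grid and noise level are made-up assumptions, and the regularization weight plays the role of the accuracy-versus-resolution compromise.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Quantize the scene into n map cells; each of the m radiometer samples is a
    # weighted average of nearby cells (a crude stand-in for the antenna pattern).
    n, m = 40, 120
    cells = np.arange(n)
    centers = np.linspace(0, n - 1, m)
    A = np.exp(-0.5 * ((cells[None, :] - centers[:, None]) / 2.0) ** 2)
    A /= A.sum(axis=1, keepdims=True)

    truth = 250 + 30 * np.sin(2 * np.pi * cells / n)   # "true" brightness temperatures (K)
    y = A @ truth + rng.normal(0, 1.0, m)              # noisy radiometer samples

    # Regularized least-squares retrieval: a larger lam gives a smoother map
    # (coarser effective resolution) but less noise amplification.
    lam = 1.0
    x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
    print(np.sqrt(np.mean((x_hat - truth) ** 2)))      # RMS retrieval error (K)
    ```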

  7. Phase Space Tomography: A Simple, Portable and Accurate Technique to Map Phase Spaces of Beams with Space Charge

    SciTech Connect

    Stratakis, D.; Kishek, R. A.; Bernal, S.; Walter, M.; Haber, I.; Fiorito, R.; Thangaraj, J. C. T.; Quinn, B.; Reiser, M.; O'Shea, P. G.; Li, H.

    2006-11-27

    In order to understand the charged particle dynamics, e.g. the halo formation, emittance growth, x-y energy transfer and coupling, knowledge of the actual phase space is needed. Over the past decade there has been an increasing number of articles that use tomography to map the beam phase space and measure the beam emittance. These studies were performed at high energy facilities where the effect of space charge was negligible and therefore not considered in the analysis. This work extends the tomography technique to beams with space charge. In order to simplify the analysis, linear forces were assumed. By carefully modeling the tomography process using the particle-in-cell code WARP we test the validity of our assumptions and the accuracy of the reconstructed phase space. Finally, we report experimental results of phase space mapping at the University of Maryland Electron Ring (UMER) using tomography.

  8. Displaying and evaluating engineering properties and natural hazards using geomorphic mapping techniques: Telluride, Colorado

    SciTech Connect

    Gunawan, I.; Giardino, J.R.; Tchakerian, V.P. . Geography Dept.)

    1992-01-01

    Telluride, located in the San Juan mountains of southwestern Colorado, is situated in a glacially carved, fluvially modified alpine valley. Today this chic setting is experiencing rapid urban development resulting from flourishing tourist traffic during both the winter ski season and the summer vacation period. A new development, Mountain Village, is being built on an extensive and complex landslide that has only received superficial scrutiny. Recent fast growth is placing considerable pressure on pristine, undeveloped land. This timely quandary incorporates the interaction between prospective development, geomorphic processes, engineering factors, economic feasibility, and land-use adjudication. In an attempt to respond to these issues the State of Colorado enacted Senate Bill 35 (1972) and House Bills 1034 (1974) and 1041 (1974), all mandating assessment of the natural hazards of an area, preparatory to development. The key to evaluating the natural hazards is to comprehend the geomorphic processes. The area is highly faulted with associated mineralization. Whereas the upper slopes are composed of massive rhyodacitic-tuff breccias and flows, the valley is sculpted from shales, sandstones, and conglomerates. Several periods of glaciation occurred in the area. Glacial till, talus slopes, avalanche chutes and cones, rock glaciers, alluvium, and landslides have been identified in the field and mapped on aerial photographs. Many of the slopes in the area are active. The authors have constructed a geomorphic map (1:12,500) that shows geology, landforms, geomorphic processes and engineering properties. This map can be used by regulatory agencies in identifying areas of natural hazards potentially sensitive to development.

  9. Mapping extreme rainfall in a mountainous region using geostatistical techniques: a case study in Scotland

    NASA Astrophysics Data System (ADS)

    Prudhomme, Christel; Reed, Duncan W.

    1999-10-01

    The spatial pattern of precipitation is known to be highly dependent on meteorological conditions and relief. However, the relationships between precipitation and topography in mountainous areas are not very well known, partly because of the complex topography in these regions, and partly because of the sparsity of information available to study such relationships in high elevation areas. The purpose of the investigation was to find a method of mapping extreme rainfall in the mountainous region of Scotland, which was easy to use and to understand, and which gave satisfactory results both in terms of statistical performance and consistency with meteorological mechanisms. Among the interpolation methods described in the literature, ordinary kriging and modified residual kriging have been found attractive by reason of their simplicity and ease of use. Both methods have been applied to map an index of extreme rainfall, the median of the annual maximum daily rainfall (RMED), in the mountainous region of Scotland. Rainfall records from a network of 1003 raingauges are used, covering Scotland with uneven density. A 4-parameter regression equation developed in a previous study, relating a transformed variable of RMED to topographical variables, is used in the modified residual kriging method. Comparing the relative performances of ordinary kriging and modified residual kriging shows that the use of topographical information helps to compensate for the lack of local data from which any interpolation method, such as ordinary kriging, might suffer, thus improving the final mapping.
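
    A minimal sketch of the modified residual kriging idea as described (regression of a transformed RMED on topographic covariates, spatial interpolation of the residuals, then recombination). Here a Gaussian-process regressor stands in for the kriging step, a log transform stands in for the study's transformation, and all coordinates, covariates and values are hypothetical.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Hypothetical gauge data: coordinates, topographic covariates, and RMED (mm)
    rng = np.random.default_rng(1)
    xy = rng.uniform(0, 100, size=(200, 2))          # easting/northing (km)
    topo = rng.uniform(0, 1, size=(200, 4))          # 4 topographic descriptors
    rmed = 30 + topo @ np.array([20, 10, 5, 8]) + rng.normal(0, 3, 200)

    # Step 1: regression of (transformed) RMED on the topographic variables
    reg = LinearRegression().fit(topo, np.log(rmed))
    resid = np.log(rmed) - reg.predict(topo)

    # Step 2: spatial interpolation of the residuals (kriging stand-in)
    gp = GaussianProcessRegressor(kernel=RBF(20.0) + WhiteKernel(0.01)).fit(xy, resid)

    # Step 3: prediction at a new site = regression estimate + interpolated residual
    xy_new, topo_new = np.array([[50.0, 50.0]]), np.array([[0.5, 0.5, 0.5, 0.5]])
    print(np.exp(reg.predict(topo_new) + gp.predict(xy_new)))
    ```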

  10. Subsurface object recognition by means of regularization techniques for mapping coastal waters floor

    NASA Astrophysics Data System (ADS)

    Jiménez-Rodríguez, Luis O.; Umana-Diaz, Alejandra; Diaz-Santos, Jose; Neira-Carolina, Gerardino; Morales-Morales, Javier; Rodriguez, Eladio

    2005-10-01

    A fundamental challenge to Remote Sensing is mapping the ocean floor in coastal shallow waters where variability, due to the interaction between the coast and the sea, can bring significant disparity in the optical properties of the water column. The objects to be detected, coral reefs, sands and submerged aquatic vegetation, have weak signals, with temporal and spatial variation. In real scenarios the absorption and backscattering coefficients have spatial variation due to different sources of variability (river discharge, different depths of shallow waters, water currents) and temporal fluctuations. This paper presents the development of algorithms for retrieving information and its application to the recognition, classification and mapping of objects under coastal shallow waters. A mathematical model that simplifies the radiative transfer equation was used to quantify the interaction between the object of interest, the medium and the sensor. The retrieval of information requires the development of mathematical models and processing tools in the area of inversion, image reconstruction and detection. The algorithms developed were applied to one set of remotely sensed data: high-resolution HYPERION hyperspectral imagery. An inverse problem arises as this spectral data is used for mapping the ocean shallow waters floor. The Tikhonov method of regularization was used in the inversion process to estimate the bottom albedo of the ocean floor using a priori information in the form of stored spectral signatures, previously measured, of objects of interest, such as sand, corals, and sea grass.
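
    A minimal sketch of a Tikhonov-regularized inversion of the general kind outlined, with a made-up linear operator standing in for the simplified radiative-transfer model and a small spectral library acting as the a priori information; every matrix and number here is an illustrative assumption, not the authors' algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    n_bands, n_classes = 60, 3                                # bands; sand, coral, sea grass
    library = rng.uniform(0.05, 0.5, (n_bands, n_classes))    # stored bottom spectra

    # Forward model (stand-in for the simplified radiative-transfer equation):
    # measured reflectance = H @ (library @ abundances) + noise
    H = np.diag(rng.uniform(0.3, 0.9, n_bands))               # water-column attenuation
    true_abund = np.array([0.6, 0.3, 0.1])
    y = H @ library @ true_abund + rng.normal(0, 0.005, n_bands)

    # Tikhonov-regularized estimate of the bottom-albedo abundances:
    # minimize ||y - H L a||^2 + alpha * ||a - a_prior||^2
    alpha = 0.05
    a_prior = np.full(n_classes, 1.0 / n_classes)
    A = H @ library
    a_hat = np.linalg.solve(A.T @ A + alpha * np.eye(n_classes),
                            A.T @ y + alpha * a_prior)
    print(a_hat)          # estimated mix of sand / coral / sea grass
    ```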

  11. A new mapping technique for conversion of slant TEC to vertical TEC based on Computerized Ionospheric Tomography

    NASA Astrophysics Data System (ADS)

    Bhuyan, Pradip; Bhuyan, Pradip; Bhuyan, Kalyan

    The most frequently used mapping function for converting slant TEC to vertical TEC uses a single layer model with the assumption that all free electrons are concentrated in an infinitesimally thick spherical shell at the mean ionospheric height and containing the ionospheric pierce point. Spatial structures present in the ionosphere are not taken into account in such single layer models. A three dimensional mapping algorithm developed by Mannucci et al. (1999) uses three independent constant density slabs stacked vertically to model the electron density, resulting in a reduced level of error in the TEC maps. We describe a new approach based on Computerized Ionospheric Tomography (CIT) to convert STEC to VTEC. The new method is independent of any assumption regarding the electron density distribution of the ionosphere. In this method, the ionosphere region of interest is divided into pixels and TEC is represented as the sum of the integration of empirical functions within the pixels, which are intersected by the path along which TEC is measured. Using a suitable inversion algorithm, the empirical function within each pixel is retrieved from TEC data recorded simultaneously at a meridional chain of GPS receivers. The VTEC values are then easily obtained as the sum of the integration of the empirical functions within each pixel along a vertical path. The CIT method is applied for converting STEC to VTEC using GPS TEC data collected at 12 locations across India since 2003. The stations are aligned along three meridional chains. The vertical TEC values obtained from the CIT method are then compared to VTEC obtained from a single layer model. Results have shown that the CIT can be suitably adapted as a mapping technique, which takes into account the presence of spatial structures in the ionosphere. Keywords: Ionosphere (Indian equatorial and low latitude ionosphere, Vertical Total Electron Content, mapping functions, computerized ionospheric tomography)
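
    For orientation, the sketch below shows (a) the standard single-layer (thin-shell) conversion that the abstract contrasts against, and (b) the pixel-sum form of slant TEC used in a tomographic formulation, solved here with plain least squares; the shell height, path-length matrix and TEC values are illustrative assumptions.

    ```python
    import numpy as np

    R_E, H_ION = 6371.0, 350.0          # Earth radius and assumed shell height (km)

    def single_layer_vtec(stec, elev_deg):
        """Standard thin-shell conversion: VTEC = STEC * cos(z'), where z' is the
        zenith angle at the ionospheric pierce point."""
        z = np.radians(90.0 - elev_deg)
        sin_zp = R_E / (R_E + H_ION) * np.sin(z)
        return stec * np.sqrt(1.0 - sin_zp ** 2)

    # In the tomographic (CIT) formulation, each slant TEC is instead written as a
    # sum over the pixels crossed by the ray: STEC_i = sum_j A[i, j] * Ne[j];
    # Ne is recovered by inverting A, and VTEC follows from a vertical sum of Ne.
    A = np.array([[1.0, 0.5, 0.0],       # toy path-length matrix (3 rays x 3 pixels)
                  [0.2, 1.0, 0.3],
                  [0.0, 0.6, 1.0]])
    stec_obs = np.array([12.0, 18.0, 15.0])
    ne = np.linalg.lstsq(A, stec_obs, rcond=None)[0]
    print(single_layer_vtec(20.0, 30.0), ne)
    ```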

  12. Object based technique for delineating and mapping 15 tree species using VHR WorldView-2 imagery

    NASA Astrophysics Data System (ADS)

    Mustafa, Yaseen T.; Habeeb, Hindav N.

    2014-10-01

    Monitoring and analyzing forests and trees are required tasks for managing and establishing a good plan for forest sustainability. To achieve such a task, information and data collection on the trees is required. The fastest and relatively low-cost technique is satellite remote sensing. In this study, we proposed an approach to identify and map 15 tree species in the Mangish sub-district, Kurdistan Region-Iraq. Image-objects (IOs) were used as the tree species mapping unit. This is achieved using the shadow index, normalized difference vegetation index and texture measurements. Four classification methods (Maximum Likelihood, Mahalanobis Distance, Neural Network, and Spectral Angle Mapper) were used to classify IOs using selected IO features derived from WorldView-2 imagery. Results showed that overall accuracy increased by 5-8% using the Neural Network method compared with the other methods, with a Kappa coefficient of 69%. This technique gives reasonable results for the classification of various tree species by applying the Neural Network method with IO techniques to WorldView-2 imagery.

  13. Instantaneous high-resolution multiple-frequency measurement system based on frequency-to-time mapping technique.

    PubMed

    Nguyen, Tuan A; Chan, Erwin H W; Minasian, Robert A

    2014-04-15

    A new microwave photonic instantaneous frequency measurement system that can simultaneously measure multiple-frequency signals while achieving very high resolution and wide frequency measurement range is presented. It is based on the frequency-to-time mapping technique implemented using a frequency shifting recirculating delay line loop and a narrowband optical filter realized by the in-fiber stimulated Brillouin scattering effect. Experimental results demonstrate the realization of a multiple-frequency measurement capability over a frequency range of 0.1-20 GHz that can be extended to 90 GHz, and with a measurement resolution of 250 MHz. PMID:24979008

  14. Elemental X-ray mapping of agglutinated foraminifer tests: a non- destructive technique for determining compositional characteristics.

    USGS Publications Warehouse

    Commeau, R.F.; Reynolds, Leslie A.; Poag, C.W.

    1985-01-01

    The composition of agglutinated foraminiferal tests varies remarkably in response to local substrate characteristics, physiochemical properties of the water column and species-dependent selectivity of test components. We have employed a technique that combines a scanning electron microscope with an energy dispersive X-ray spectrometer system to identify major and minor elemental constituents of agglutinated foraminiferal walls. As a sample is bombarded with a beam of high energy electrons, X-rays are generated that are characteristic of the elements present. As a result, X-ray density maps can be produced for each of several elements present in the tests of agglutinated foraminifers. -Authors

  15. An Evaluation of Image Based Techniques for Early Wildfire Detection and Fuel Mapping

    SciTech Connect

    Gabbert, Dustin W.

    2015-05-01

    Few events can cause the catastrophic impact to ecology, infrastructure, and human safety of a wildland fire along the wildland urban interface. The suppression of natural wildland fires over the past decade has caused a buildup of dry, dead surface fuels: a condition that, coupled with the right weather conditions, can cause large destructive wildfires that are capable of threatening both ancient tree stands and manmade infrastructure. Firefighters use fire danger models to determine staffing needs on high fire risk days; however models are only as effective as the spatial and temporal density of their observations. OKFIRE, an Oklahoma initiative created by a partnership between Oklahoma State University and the University of Oklahoma, has proven that fire danger assessments close to the fire – both geographically and temporally – can give firefighters a significant increase in their situational awareness while fighting a wildland fire. This paper investigates several possible solutions for a small Unmanned Aerial System (UAS) which could gather information useful for detecting ground fires and constructing fire danger maps. Multiple fire detection and fuel mapping programs utilize satellites, manned aircraft, and large UAS equipped with hyperspectral sensors to gather useful information. Their success provides convincing proof of the utility that could be gained from low-altitude UAS gathering information at the exact time and place firefighters and land managers are interested in. Close proximity, both geographically and operationally, to the end can reduce latency times below what could ever be possible with satellite observation. This paper expands on recent advances in computer vision, photogrammetry, and infrared and color imagery to develop a framework for a next-generation UAS which can assess fire danger and aid firefighters in real time as they observe, contain, or extinguish wildland fires. It also investigates the impact information gained by this

  16. Village Level Tsunami Threat Maps for Tamil Nadu, SE Coast of India: Numerical Modeling Technique

    NASA Astrophysics Data System (ADS)

    MP, J.; Kulangara Madham Subrahmanian, D.; V, R. M.

    2014-12-01

    The Indian Ocean tsunami (IOT) devastated several countries of the North Indian Ocean. India is one of the worst affected countries after Indonesia and Sri Lanka. In India, Tamil Nadu suffered the most, with fatalities exceeding 8,000 people. Historical records show that tsunamis have invaded the shores of Tamil Nadu in the past and have made people realize that the tsunami threat looms over Tamil Nadu and that it is necessary to evolve strategies for tsunami threat management. The IOT has brought to light that tsunami inundation and runup varied within short distances, and for tsunami disaster management, large-scale maps showing areas that are likely to be affected by a future tsunami are needed. Therefore, a threat assessment for six villages, including Mamallapuram (also called Mahabalipuram), which is famous for its rock-cut temples, in the northern part of the Tamil Nadu state of India has been carried out, and threat maps categorizing the coast into areas of different degrees of threat have been prepared. The threat was assessed by numerical modeling using the TUNAMI N2 code considering different tsunamigenic sources along the Andaman-Sumatra trench. While GEBCO and C-Map data were used for bathymetry, land elevation data were generated by an RTK-GPS survey for a distance of 1 km from the shore and by SRTM for the inland areas. The model results show that in addition to the Sumatra source which generated the IOT in 2004, earthquakes originating in Car Nicobar and North Andaman can inflict more damage. The North Andaman source can generate a massive tsunami, and an earthquake of magnitude more than Mw 9 can not only affect Tamil Nadu but also the entire southeast coast of India. The runup water level is used to demarcate the tsunami threat zones in the villages using GIS.

  17. Evaluation of historical museum interior lighting system using fully immersive virtual luminous environment

    NASA Astrophysics Data System (ADS)

    Navvab, Mojtaba; Bisegna, Fabio; Gugliermetti, Franco

    2013-05-01

    Saint Rocco Museum, a historical building in Venice, Italy, is used as a case study to explore the performance of its lighting system and the impact of visible light on viewing the large-size art works. The transition from three-dimensional architectural rendering to three-dimensional virtual luminance mapping and visualization within a virtual environment is described as an integrated optical method for its application toward preservation of the cultural heritage of the space. Lighting simulation programs represent color as RGB triplets in a device-dependent color space such as ITU-R BT709. Prerequisite for this is a 3D model which can be created within this computer aided virtual environment. The onsite measured surface luminance, chromaticity and spectral data were used as input to established real-time indirect illumination and physically based algorithms to produce the best approximation for RGB to be used as an input to generate the image of the objects. Conversion of RGB to and from spectra has been a major undertaking in order to match the infinite number of spectra to create the same colors that were defined by RGB in the program. The ability to simulate light intensity, candle power and spectral power distributions provides an opportunity to examine the impact of color inter-reflections on historical paintings. VR offers an effective technique to quantify the visible light impact on human visual performance under a precisely controlled representation of the light spectrum that could be experienced in 3D format in a virtual environment as well as in historical visual archives. The system can easily be expanded to include other measurements and stimuli.

  18. A new technique for selective identification and mapping of enhancers within long genomic sequences.

    PubMed

    Chernov, Igor; Stukacheva, Elena; Akopov, Sergey; Didych, Dmitry; Nikolaev, Lev; Sverdlov, Eugene

    2008-05-01

    We report a new experimental method of direct selection, identification, and mapping of potential enhancer sequences within extended stretches of genomic DNA. The method allows simultaneous cloning of many sequences instead of tedious screening of individual ones, thus providing a robust and high-throughput approach to the mapping of enhancers. The selection procedure is based on the ability of such sequences to activate a minimal promoter that drives expression of a selective gene. To this end, a mixture of short DNA fragments derived from the segment of interest was cloned in a retroviral vector containing the neomycin phosphotransferase II gene under control of a cytomegalovirus (CMV) minimal promoter. The pool of retroviruses obtained was used to infect HeLa cells and then to select neomycin-resistant colonies containing constructs with enhancer-like sequences. The pool of the genomic fragments was rescued by PCR and cloned, forming a library of the potential enhancers. Fifteen enhancer-like fragments were selected from a 1-Mb human genome locus, and the enhancer activity of 13 of them was verified in a transient transfection reporter gene assay. The sequences selected were found to be predominantly located near 5' regions of genes or within gene introns. PMID:18476831

  19. The Topography of Visuospatial Attention as Revealed by a Novel Visual Field Mapping Technique

    PubMed Central

    Brefczynski-Lewis, Julie A.; Datta, Ritobrato; Lewis, James W.; DeYoe, Edgar A.

    2009-01-01

    Previously, we and others have shown that attention can enhance visual processing in a spatially specific manner that is retinotopically mapped in the occipital cortex. However, it is difficult to appreciate the functional significance of the spatial pattern of cortical activation just by examining the brain maps. In this study, we visualize the neural representation of the “spotlight” of attention using a back-projection of attention-related brain activation onto a diagram of the visual field. In the two main experiments, we examine the topography of attentional activation in the occipital and parietal cortices. In retinotopic areas, attentional enhancement is strongest at the locations of the attended target, but also spreads to nearby locations and even weakly to restricted locations in the opposite visual field. The dispersion of attentional effects around an attended site increases with the eccentricity of the target in a manner that roughly corresponds to a constant area of spread within the cortex. When averaged across multiple observers, these patterns appear consistent with a gradient model of spatial attention. However, individual observers exhibit complex variations that are unique but reproducible. Overall, these results suggest that the topography of visual attention for each individual is composed of a common theme plus a personal variation that may reflect their own unique “attentional style.” PMID:18752412

  20. High-Resolution Debris Flow Volume Mapping with Unmanned Aerial Systems (uas) and Photogrammetric Techniques

    NASA Astrophysics Data System (ADS)

    Adams, M. S.; Fromm, R.; Lechner, V.

    2016-06-01

    Debris flows cause an average of €30 million in damages and 1-2 fatalities every year in Austria. Detailed documentation of their extent and magnitude is essential for understanding, preventing and mitigating these natural hazard events. The recent development of unmanned aerial systems (UAS) has provided a new possibility for on-demand high-resolution monitoring and mapping. Here, we present a study where the spatial extent and volume of a large debris flow event were mapped with different UAS fitted with commercial off-the-shelf sensors. Orthophotos and digital terrain models (DTM) were calculated using structure-from-motion photogrammetry software. Terrain height differences caused by the debris flow in the catchment and valley floor were derived by subtracting the pre-event airborne laser scanning (ALS) DTM from a post-event UAS-DTM. The analysis of the volumetric sediment budget showed that approximately 265,000 m³ of material was mobilised in the catchment, of which 45,000 m³ settled there; of the material that reached the valley floor, 120,000 m³ was deposited, while another 10,000 m³ was eroded from there. The UAS-results were validated against ALS data and imagery from a traditional manned-aircraft photogrammetry campaign. In conclusion, the UAS-data can reach an accuracy and precision comparable to manned aircraft data, but with the added benefits of higher flexibility, easier repeatability, fewer operational constraints and higher spatial resolution.
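
    A minimal sketch of the DTM-differencing step described (post-event UAS DTM minus pre-event ALS DTM, summed into deposition and erosion volumes); the grids, cell size and noise threshold below are illustrative assumptions rather than the study's actual data.

    ```python
    import numpy as np

    # Hypothetical 1 m resolution grids: pre-event ALS DTM and post-event UAS DTM
    cell_area = 1.0 * 1.0                          # m^2 per grid cell
    rng = np.random.default_rng(3)
    dtm_pre = rng.normal(1500.0, 5.0, (500, 500))
    dtm_post = dtm_pre + rng.normal(0.0, 0.3, (500, 500))

    dz = dtm_post - dtm_pre                        # terrain height change (m)
    noise_level = 0.15                             # ignore changes below DTM accuracy
    dz[np.abs(dz) < noise_level] = 0.0

    deposited = dz[dz > 0].sum() * cell_area       # m^3 of accumulated material
    eroded = -dz[dz < 0].sum() * cell_area         # m^3 of removed material
    print(deposited, eroded, deposited - eroded)   # volumetric sediment budget
    ```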

  1. Combining Electrical Techniques to map a Till Aquitard for Quantifying Lateral Flows and Improved Recharge Estimation

    NASA Astrophysics Data System (ADS)

    Thatcher, K. E.; Mackay, R.

    2007-12-01

    Where low permeability layers are present in the unsaturated zone, groundwater recharge can be significantly modified by lateral flows. To improve estimates of the magnitude and spatial distribution of lateral flows, a well defined model of the unsaturated zone hydraulic properties is required. Electromagnetic (EM) surveys, using Geonics EM31 and EM34, along with Electrical Resistivity Tomography (ERT) have been used in the Tern Catchment, Shropshire, UK to determine the distribution of Quaternary glacial deposits above the Triassic sandstone aquifer. The deposits are generally less than 10m thick and comprise low permeability lodgement till and high permeability outwash. Modelling studies have shown the depth and slope of the till surface to be key parameters controlling the magnitude of lateral flows with recharge focussed at the till edge. The distribution of permeability within the till is of secondary importance. The spatial extent of the till is well constrained by EM data and is shown to be continuous. ERT profiles provide data on the depth to the till surface in detailed 2D sections. Combining the two data sets has enabled the depth estimates from the ERT surveys to be extrapolated across a 2D map area. Recharge estimates based on the depth maps take into account lateral flows across the top of the till and show that these flows can contribute significantly to catchment recharge.

  2. Comparison of VTEC from ground-based space geodetic techniques based on ray-traced mapping factors

    NASA Astrophysics Data System (ADS)

    Heinkelmann, Robert; Alizadeh, M. Mahdi; Schuh, Harald; Deng, Zhiguo; Zus, Florian; Etemadfard, M. Hossein

    2016-07-01

    For the derivation of vertical total electron content (VTEC) from slant total electron content (STEC), usually a standard approach is used based on mapping functions that assume a single-layer model of the ionosphere (e.g. IERS Conventions 2010). In our study we test the standard approach against a recently developed alternative which is based on station specific ray-traced mapping factors. For the evaluation of this new mapping concept, we compute VTEC at selected Very Long Baseline Interferometry (VLBI) stations using the dispersive delays and the corresponding formal errors obtained by observing extra-galactic radio sources at two radio frequencies in S- and X-bands by the permanent geodetic/astrometric program organized by the IVS (International VLBI Service for Geodesy and Astrometry). Additionally, by applying synchronous sampling and a consistent analysis configuration, we determine VTEC at Global Navigation Satellite System (GNSS) antennas using GPS (Global Positioning System) and/or GLONASS (Globalnaja nawigazionnaja sputnikowaja Sistema) observations provided by the IGS (International GNSS Service) that are operated in the vicinity of the VLBI antennas. We compare the VTEC time series obtained by the individual techniques over a period of about twenty years and describe their characteristics qualitatively and statistically. The length of the time series allows us to assess the long-term climatology of ionospheric VTEC during the last twenty years.

  3. Making sense of crowdsourced observations: Data fusion techniques for real-time mapping of urban air quality

    NASA Astrophysics Data System (ADS)

    Schneider, Philipp; Castell, Nuria; Vogt, Matthias; Lahoz, William; Bartonova, Alena

    2015-04-01

    With an ever-increasing amount of environmental observations available through crowdsourcing, one of the major emerging challenges is how to best make sense of the vast amount of collected observations and how to provide citizens and other end-users with a relevant value-added product. Regarding air quality, a high-density network of low-cost sensors provided by crowdsourcing has significant potential for improving spatial mapping in general and in urban areas in particular. However, most datasets of observations made within a crowdsourcing framework contain substantial data gaps and the observations are generally point measurements, which are only representative of a relatively small area. This poses a significant challenge for mapping applications. One way to overcome these issues is to combine the crowdsourced data with spatially continuous data from a model. We present a novel data fusion-based technique for combining real-time crowdsourced observations with model output that makes it possible to provide highly detailed, up-to-date maps of urban air quality. The EU-funded CITI-SENSE project is deploying a dense network of low-cost sensors measuring air quality in eight cities around Europe. These crowdsourced observations are used for mapping urban air quality in real time by fusing them with data obtained from statistical or deterministic air quality models. Data fusion techniques allow for combining observations with model data in a mathematically objective way and therefore provide a means of adding value to both the observations and the model. The observations are improved by filling spatio-temporal gaps in the data and the model is improved by constraining it with observations. The model further provides detailed spatial patterns in areas where no observations are available. As such, data fusion of observations from high-density low-cost sensor networks together with models can contribute to significantly improving urban-scale air quality mapping. The data fusion
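
    A minimal sketch of one simple data-fusion scheme in this spirit (not the CITI-SENSE implementation): observation-minus-model residuals at the sensor locations are interpolated onto the model grid and added back, so the fused map follows the sensors nearby and the model elsewhere; the locations, concentrations and the inverse-distance interpolator are illustrative assumptions.

    ```python
    import numpy as np

    def idw(xy_obs, values, xy_grid, power=2.0):
        """Inverse-distance-weighted interpolation of point values onto a grid."""
        d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=2)
        w = 1.0 / np.maximum(d, 1e-6) ** power
        return (w * values).sum(axis=1) / w.sum(axis=1)

    rng = np.random.default_rng(4)
    xy_obs = rng.uniform(0, 10, (50, 2))            # low-cost sensor locations (km)
    obs = rng.uniform(10, 60, 50)                   # NO2 observations (ug/m3)
    xy_grid = np.stack(np.meshgrid(np.linspace(0, 10, 50),
                                   np.linspace(0, 10, 50)), -1).reshape(-1, 2)
    model_grid = np.full(len(xy_grid), 30.0)        # model field on the grid
    model_at_obs = np.full(len(xy_obs), 30.0)       # model sampled at the sensors

    # Fuse: interpolate the observation-minus-model residuals onto the grid and
    # add them to the model field.
    fused = model_grid + idw(xy_obs, obs - model_at_obs, xy_grid)
    print(fused.reshape(50, 50).shape)
    ```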

  4. Automated techniques for visualization and mapping of articular cartilage in MR images of the osteoarthritic knee: a base technique for the assessment of microdamage and submicro damage.

    PubMed

    Cashman, Peter M M; Kitney, Richard I; Gariba, Munir A; Carter, Mary E

    2002-03-01

    The purpose of this paper is to describe automated techniques for the visualization and mapping of articular cartilage in magnetic resonance (MR) images of the osteoarthritic knee. The MR sequences and analysis software which will be described allow the assessment of cartilage damage using a range of standard scanners. With high field strength systems it would be possible, using these techniques, to assess micro-damage. The specific aim of the paper is to develop and validate software for automated segmentation and thickness mapping of articular cartilage from three-dimensional (3-D) gradient-echo MR images of the knee. The method can also be used for MR-based assessment of tissue engineered grafts. Typical values of cartilage thickness over seven defined regions can be obtained in patients with osteoarthritis (OA) and control subjects without OA. Three groups of patients were studied. The first group comprised patients with moderate OA in the age range 45-73 years. The second group comprised asymptomatic volunteers of 50-65 years; the third group, younger volunteers selected by clinical interview, history and X-ray. In this paper, sagittal 3-D spoiled-gradient steady-state acquisition images were obtained using a 1.5-T GE whole-body scanner with a specialist knee coil. For validation bovine and porcine cadaveric knees were given artificial cartilage lesions and then imaged. The animal validations showed close agreement between direct lesion measurements and those obtained from the MR images. The feasibility of semi-automated segmentation is demonstrated. Regional cartilage thickness values are seen as having practical application for fully automated detection of OA lesions even down to the submicrometer level. PMID:16689221

  5. Simplified soil-gas sensing techniques for plume mapping and remediation monitoring

    SciTech Connect

    Kampbell, D.H.; Wilson, J.T.; Ostendorf, D.W.

    1991-01-01

    Soil gas measurements were taken in a beach sand matrix of the unsaturated zone above a ground water plume contaminated by a spill of nearly 35,000 gallons of aviation gasoline. The soil gas sampling and analysis strategy provided the required information for mapping the plume and vertical profile measurements with a minimal expenditure of resources and work time. Analysis of a calibration gas and replicate sampling showed that the apparatus used gave reasonably correct soil gas constituent measurements. Boundaries of the plume were defined and a hot spot was located downgradient from the original spill location. Elevated carbon dioxide above the contaminated capillary fringe indicated microbial respiration activity. A biodegradation model developed from the vertical profile data predicted very low oxygen at the water table and little or no loss of hydrocarbon emissions to the atmosphere.

  6. Application of Near-Surface Geophysical Techniques for Earthquake Microzonation Mapping in the Ottawa, Ontario Region

    NASA Astrophysics Data System (ADS)

    Crow, H.; Motazedian, D.; Hunter, J. A.; Pugin, A.; Brooks, G.; Pyne, M.; Pullan, S.; Khaheshi-Banab, K.

    2009-05-01

    Earthquake Microzonation maps of the Ottawa area have been developed by a combined team of researchers from the Geological Survey of Canada (GSC) and Carleton University. The city of Ottawa has an areal extent of 2796 square kilometers, consisting of three basic geological/geotechnical units: bedrock outcrop (15%), a thin veneer of glacial deposits (30%) and thick post-glacial lacustrine and marine sediments (Champlain Sea) known locally as the Leda Clay (55%). Following the guidelines of the current National Building Code of Canada (NBCC), soil classification zones were established according to the National Earthquake Hazards Reduction Program (NEHRP) which are based on shear wave velocity measurements of the upper 30m of soil/rock and are related to ground-motion amplification. For adequate characterization of the soil properties, geophysical data were collected at 680 surface refraction-reflection sites, 35 MASW sites, 31 borehole geophysical sites, and 185 passive soil resonance sites. Twenty-five line-km of high resolution Minivib Landstreamer shear wave reflection were collected in areas of complex bedrock topography. In addition, a borehole database compiled from existing water-well and geotechnical drilling consisting of approximately 32000 borings from previous GSC work was converted into the three major geotechnical units and assigned shear wave velocity-depth functions to each borehole site based on an inverse distance weighting algorithm of adjacent shear wave measurement sites. We have found extremely low shear wave velocities associated with the post-glacial sediments (˜150m/s), and extremely high shear wave velocities associated with the competent Paleozoic or PreCambrian bedrock (˜2800 m/s), yielding very large seismic impedance ratios. Within the city limits we have shown that all six NEHRP zones occur, and that changes between solid rock (NERHP zone A) and very soft soil (NEHRP zone E or F) can occur within a lateral distance of a few hundred

  7. Improvement of global ionospheric VTEC maps by using kriging interpolation technique

    NASA Astrophysics Data System (ADS)

    Orús, R.; Hernández-Pajares, M.; Juan, J. M.; Sanz, J.

    2005-11-01

    In this work, a geostatistic interpolation algorithm, called kriging, has been applied to improve the Technical University of Catalonia (UPC) global ionospheric maps (GIMs) computed with GPS data. This new UPC GIM, from now on the UPC kriging GIM, has lower RMS in the observed slant total electron content (STEC) than the current UPC GIM and IGS GIM. Improvements are about 16% and 2%, respectively, using the data of several worldwide distributed GPS stations (self-consistency test). The UPC kriging GIM also presents a better performance than the UPC GIM regarding the vertical total electron content (VTEC) measurements from TOPEX/Poseidon and JASON dual-frequency altimetric data. For both standard deviation and RMS, this improvement is about 0.3 TECU (6%) and 0.1 TECU (3%) for TOPEX and JASON, respectively. Moreover, a certain accuracy improvement of the resultant IGS GIM has also been shown when the UPC kriging GIM replaces the present UPC GIM in the corresponding combination.

  8. Infant motion: detection (M:D) ratios for chromatically defined and luminance-defined moving stimuli.

    PubMed

    Dobkins, K R; Teller, D Y

    1996-10-01

    In order to assess the relative contributions of chromatic vs luminance information to motion processing in infants, we employed a motion:detection (M:D) paradigm. Stimuli consisted of 27 deg by 40 deg, 0.25 c/deg sinusoidal gratings moving at 22 deg/sec (5.6 Hz), and were either chromatically defined or luminance-defined. Contrast thresholds for direction-of-motion (M) were obtained using a directional eye movement technique. Contrast thresholds for detection (D) were obtained using forced-choice preferential looking. M:D threshold ratios were obtained for individual infant subjects, and results were compared to those of adults. As expected, adult M:D threshold ratios were near 1:1 for luminance-defined stimuli, but greater than 1:1 for chromatically defined stimuli. This suggests that, for adults, luminance-defined, but not chromatically defined, stimuli are detected by mechanisms labeled for direction of motion. By contrast, infant M:D ratios for chromatically and luminance-defined stimuli were approximately equal and close to 1:1, suggesting that, for infants, luminance- as well as chromatically defined stimuli are detected by mechanisms that are labeled for direction of motion. PMID:8944288

  9. X-ray mapping technique: a preliminary study in discriminating gunshot residue particles from aggregates of environmental occupational origin.

    PubMed

    Cardinetti, Bruno; Ciampini, Claudio; D'Onofrio, Carlo; Orlando, Giovanni; Gravina, Luciano; Ferrari, Francesco; Di Tullio, Donatello; Torresi, Luca

    2004-06-30

    The possibility of detection of lead-antimony-barium aggregates from non-firearm sources is confirmed according to the tests performed on brake pads, and on firework and automobile workers. Moreover, information on particles taken from cartridge cases shows the relatively feeble importance of morphology in distinguishing gunshot residues (GSRs). Furthermore, the presence of other elements (e.g., iron) in the spectrum is not conclusive either. In this panorama, the possibility of discriminating gunshot residue particles from other non-firearm lead-antimony-barium aggregates is investigated: the proposed method is based on the X-ray mapping technique--currently used in the Reparto Carabinieri Investigazioni Scientifiche in Rome, the forensic service of the Italian Carabinieri--according to which the spatial distribution of the emission energy of each element of the sample is pictured. Gunshot residues present the same lead-antimony-barium distribution (or at least the same antimony-barium distribution with lead nodules), whereas some other environmental occupational aggregates do not (showing different plaques of lead, antimony, and barium). So, the X-ray mapping technique can offer a new fundamental evaluation parameter in the analysis of gunshot residues with scanning electron microscopy/energy-dispersive (SEM/EDS) spectrometry, and new standards could be considered. PMID:15177626

  10. Remote sensing techniques for mapping range sites and estimating range yield

    NASA Technical Reports Server (NTRS)

    Benson, L. A.; Frazee, C. J.; Waltz, F. A.; Reed, C.; Carey, R. L.; Gropper, J. L.

    1974-01-01

    Image interpretation procedures for determining range yield and for extrapolating range information were investigated for an area of the Pine Ridge Indian Reservation in southwestern South Dakota. Soil and vegetative data collected in the field utilizing a grid sampling design and digital film data from color infrared film and black and white films were analyzed statistically using correlation and regression techniques. The pattern recognition techniques used were K-class, mode seeking, and thresholding. The herbage yield equation derived for the detailed test site was used to predict yield for an adjacent similar field. The herbage yield estimate for the adjacent field was 1744 lbs. of dry matter per acre and was favorably compared to the mean yield of 1830 lbs. of dry matter per acre based upon ground observations. Also an inverse relationship was observed between vegetative cover and the ratio of MSS 5 to MSS 7 of ERTS-1 imagery.

  11. Development of a stereo analysis algorithm for generating topographic maps using interactive techniques of the MPP

    NASA Technical Reports Server (NTRS)

    Strong, James P.

    1987-01-01

    A local area matching algorithm was developed on the Massively Parallel Processor (MPP). It is an iterative technique that first matches coarse or low resolution areas and at each iteration performs matches of higher resolution. Results so far show that when good matches are possible in the two images, the MPP algorithm matches corresponding areas as well as a human observer. To aid in developing this algorithm, a control or shell program was developed for the MPP that allows interactive experimentation with various parameters and procedures to be used in the matching process. (This would not be possible without the high speed of the MPP). With the system, optimal techniques can be developed for different types of matching problems.

  12. Application of satellite data and LARS's data processing techniques to mapping vegetation of the Dismal Swamp. M.S. Thesis - Old Dominion Univ.

    NASA Technical Reports Server (NTRS)

    Messmore, J. A.

    1976-01-01

    The feasibility of using digital satellite imagery and automatic data processing techniques as a means of mapping swamp forest vegetation was considered, using multispectral scanner data acquired by the LANDSAT-1 satellite. The site for this investigation was the Dismal Swamp, a 210,000 acre swamp forest located south of Suffolk, Va. on the Virginia-North Carolina border. Two basic classification strategies were employed. The initial classification utilized unsupervised techniques which produced a map of the swamp indicating the distribution of thirteen forest spectral classes. These classes were later combined into three informational categories: Atlantic white cedar (Chamaecyparis thyoides), Loblolly pine (Pinus taeda), and deciduous forest. The subsequent classification employed supervised techniques which mapped Atlantic white cedar, Loblolly pine, deciduous forest, water and agriculture within the study site. A classification accuracy of 82.5% was produced by unsupervised techniques compared with 89% accuracy using supervised techniques.

  13. A surface-based technique for mapping homotopic interhemispheric connectivity: Development, characterization, and clinical application.

    PubMed

    Tobyne, Sean M; Boratyn, Daria; Johnson, Jessica A; Greve, Douglas N; Mainero, Caterina; Klawiter, Eric C

    2016-08-01

    The functional organization of the human brain consists of a high degree of connectivity between interhemispheric homologous regions. The degree of homotopic organization is known to vary across the cortex and homotopic connectivity is high in regions that share cross-hemisphere structural connections or are activated by common input streams (e.g., the visual system). Damage to one or both regions, as well as damage to the connections between homotopic regions, could disrupt this functional organization. Here we introduce and test a computationally efficient technique, surface-based homotopic interhemispheric connectivity (sHIC), that leverages surface-based registration and processing techniques in an attempt to improve the spatial specificity and accuracy of cortical interhemispheric connectivity estimated with resting state functional connectivity. This technique is shown to be reliable both within and across subjects. sHIC is also characterized in a dataset of nearly 1000 subjects. We confirm previous results showing increased interhemispheric connectivity in primary sensory regions, and reveal a novel rostro-caudal functionally defined network level pattern of sHIC across the brain. In addition, we demonstrate a structural-functional relationship between sHIC and atrophy of the corpus callosum in multiple sclerosis (r = 0.2979, p = 0.0461). sHIC presents as a sensitive and reliable measure of cortical homotopy that may prove useful as a biomarker in neurologic disease. Hum Brain Mapp 37:2849-2868, 2016. © 2016 Wiley Periodicals, Inc. PMID:27219660
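
    Conceptually, the core of such a homotopic connectivity measure is a vertex-wise correlation between homologous left- and right-hemisphere time series after surface-based registration to a symmetric template. The sketch below shows only that final correlation step on toy data; it is not the sHIC pipeline itself, and the array shapes are assumptions.

    ```python
    import numpy as np

    def homotopic_connectivity(left_ts, right_ts):
        """Vertex-wise Pearson correlation between homologous left/right time series.

        left_ts, right_ts: arrays of shape (n_vertices, n_timepoints), assumed to be
        already resampled to a symmetric surface template so that row i of each
        array is the same (mirrored) cortical location.
        """
        lz = (left_ts - left_ts.mean(1, keepdims=True)) / left_ts.std(1, keepdims=True)
        rz = (right_ts - right_ts.mean(1, keepdims=True)) / right_ts.std(1, keepdims=True)
        return (lz * rz).mean(axis=1)          # one connectivity value per vertex pair

    rng = np.random.default_rng(5)
    shared = rng.normal(size=(1000, 200))      # toy shared signal per vertex pair
    left = shared + 0.5 * rng.normal(size=(1000, 200))
    right = shared + 0.5 * rng.normal(size=(1000, 200))
    print(homotopic_connectivity(left, right).mean())
    ```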

  14. Three-dimensional electron paramagnetic resonance imaging technique for mapping porosity in ceramics

    SciTech Connect

    Kordas, G.; Kang, Y.H. )

    1991-04-01

    This paper reports on a three-dimensional (3D) electron paramagnetic resonance imaging (EPRI) method which was developed to probe the structure and size of pores in ceramic materials. The imaging device that was added to the EPR instrument consisted of a computer-controlled current source and magnetic field gradient. This add-on facility was tested using a well-defined diphenylpicrylhydrazyl phantom sample. Pumice was then used to demonstrate the potential of the technique. This stone was immersed in a 0.5 mm ¹⁵N-substituted perdeuterated tempone water solution to fill the pores with spin labels. Images were reconstructed using a filtered back-projection technique. A two-dimensional (2D) imaging plane was constructed by collecting 33 projection planes over 180°. A 3D image was derived from 22 planes, each constructed by 22 projections. At present, the facility allows a resolution of 69 and 46 µm for 2D and 3D imaging, respectively. Advancements of the imaging apparatus, software, and line width of the spin labels will be needed to enhance the resolution of this technique.
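
    A minimal sketch of filtered back-projection reconstruction of the kind mentioned, using scikit-image's Radon transform routines on a toy 2-D phantom in place of real EPR projection data; the phantom, image size and number of projection angles are illustrative assumptions.

    ```python
    import numpy as np
    from skimage.transform import radon, iradon

    # Toy 2-D "pore" phantom standing in for the spin-label distribution
    image = np.zeros((128, 128))
    image[40:60, 50:80] = 1.0
    image[90:100, 30:45] = 0.7

    # Simulate projections at 33 angles over 180 degrees (as in the 2-D case
    # described), then reconstruct with filtered back-projection.
    theta = np.linspace(0.0, 180.0, 33, endpoint=False)
    sinogram = radon(image, theta=theta)
    reconstruction = iradon(sinogram, theta=theta)

    print(reconstruction.shape)
    ```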

  15. Karst features detection and mapping using airphotos, DSMs and GIS techniques

    NASA Astrophysics Data System (ADS)

    Kakavas, M. P.; Nikolakopoulos, K. G.; Zagana, E.

    2015-10-01

    The aim of this work is to detect and qualify natural karst depressions in the Aitoloakarnania Prefecture, Western Greece, using remote sensing data in conjunction with Geographical Information Systems - GIS. The study area is a part of the Ionian geotectonic zone, and its geological background consists of the Triassic Evaporites. The Triassic carbonate breccias were formed as a result of the tectonic and orogenetic setting of the external Hellenides and the diapir phenomena of the Triassic Evaporites. The landscape is characterized by exokarst features, closed depressions in the Triassic carbonate breccias. At the outset of this study, an in situ observation was performed in order to identify dolines and swallow holes. The creation of sinkholes, in general, is based on the collapse of the surface layer due to chemical dissolution of carbonate rocks. In the current study airphoto stereopairs, DSMs and GIS were combined in order to detect and map the karst features. Thirty-seven airphotos were imported into Leica Photogrammetry Suite and a stereo model of the study area was created. Then, in 3D view, possible karst features were detected and digitized. Those sites were verified during the in situ survey. ASTER GDEM, SRTM DEM, a high resolution airphoto DSM created from the Greek Cadastral and a DEM from digitized contours of the 1/50,000 topographic map were also evaluated in a GIS environment for the automatic detection of the karst depressions. The results are presented in this study.

  16. Conformations of rat brain hexokinase: studies with monoclonal antibodies and peptide mapping techniques

    SciTech Connect

    Smith, A.D.; Wilson, J.E.

    1987-05-01

    Brain hexokinase (HK) consists of a single polypeptide chain with Mr ≈ 100,000. Limited proteolysis of native HK with trypsin yields three major fragments, thought to correspond to discrete structural domains, with molecular masses of approximately 10, 50, and 40 kDa, and derived from the N-terminal, central, and C-terminal regions, respectively. Additional tryptic cleavage sites become susceptible in conformations induced by binding of specific ligands. A library of monoclonal antibodies has been developed for use as probes in examining the structure and function of domains present in HK. Epitopes for several of these have been mapped to specific structural regions in HK. Immunoblotting experiments with these antibodies have been useful in defining the location of susceptible proteolytic cleavage sites in various ligand-induced conformations of HK. Effects of ligand binding on epitope recognition have indicated that several of the antibodies interact with epitopes perturbed by ligand-induced conformational changes. Effects of antibody binding on function have also provided a basis for establishing structure-function relationships. For example, several of the monoclonal antibodies inhibit binding of hexokinase to mitochondria and all recognize epitopes located in the 10 kDa N-terminal domain, consistent with other results indicating that this region is critical to the binding function of hexokinase.

  17. Basic forest cover mapping using digitized remote sensor data and automated data processing techniques

    NASA Technical Reports Server (NTRS)

    Coggeshall, M. E.; Hoffer, R. M.

    1973-01-01

    Remote sensing equipment and automatic data processing techniques were employed as aids in the institution of improved forest resource management methods. On the basis of automatically calculated statistics derived from manually selected training samples, the feature selection processor of LARSYS selected, upon consideration of various groups of the four available spectral regions, a series of channel combinations whose automatic classification performances (for six cover types, including both deciduous and coniferous forest) were tested, analyzed, and further compared with automatic classification results obtained from digitized color infrared photography.

  18. Lessons in modern digital field geology: Open source software, 3D techniques, and the new world of digital mapping

    NASA Astrophysics Data System (ADS)

    Pavlis, Terry; Hurtado, Jose; Langford, Richard; Serpa, Laura

    2014-05-01

    Although many geologists refuse to admit it, it is time to put paper-based geologic mapping into the historical archives and move to the full potential of digital mapping techniques. For our group, flat map digital geologic mapping is now a routine operation in both research and instruction. Several software options are available, and basic proficiency with the software can be learned in a few hours of instruction and practice. The first practical field GIS software, ArcPad, remains a viable, stable option on Windows-based systems. However, the vendor seems to be moving away from ArcPad in favor of mobile software solutions that are difficult to implement without GIS specialists. Thus, we have pursued a second software option based on the open source program QGIS. Our QGIS system uses the same shapefile-centric data structure as our ArcPad system, including similar pop-up data entry forms and generic graphics for easy data management in the field. The advantage of QGIS is that the same software runs on virtually all common platforms except iOS, although the Android version remains unstable as of this writing. A third software option we are experimenting with for flat map-based field work is Fieldmove, a derivative of the 3D-capable program Move developed by Midland Valley. Our initial experiments with Fieldmove are positive, particularly with the new, inexpensive (<300 Euros) Windows tablets. However, the lack of flexibility in data structure makes for cumbersome workflows when trying to interface our existing shapefile-centric data structures to Move. Nonetheless, in spring 2014 we will experiment with full-3D immersion in the field using the full Move software package in combination with ground based LiDAR and photogrammetry. One new workflow suggested by our initial experiments is that field geologists should consider using photogrammetry software to capture 3D visualizations of key outcrops. This process is now straightforward in several software packages, and

  19. West Java Snack Mapping based on Snack Types, Main Ingredients, and Processing Techniques

    NASA Astrophysics Data System (ADS)

    Nurani, A. S.; Subekti, S.; Ana

    2016-04-01

    The research was motivated by lack of literature on archipelago snack especially from West Java. It aims to explore the snack types, the processing techniques, and the main ingredients by planning a learning material on archipelago cake especially from West Java. The research methods used are descriptive observations and interviews. The samples were randomly chosen from all regions in West Java. The findings show the identification of traditional snack from West java including: 1. snack types which are similar in all regions as research sample namely: opak, rangginang, nagasari, aliagrem, cuhcur, keripik, semprong, wajit, dodol, kecimpring, combro, tape ketan, and surabi. The typical snack types involve burayot (Garut), simping kaum (Purwakarta), surabi hejo (Karawang), papais cisaat (Subang), Papais moyong, opak bakar (Kuningan), opak oded, ranggesing (Sumedang), gapit, tapel (Cirebon), gulampo, kue aci (Tasikmalaya), wajit cililin, gurilem (West Bandung), and borondong (Bandung District); 2. various processing techniques namely: steaming, boiling, frying, caramelizing, baking, grilling, roaster, sugaring; 3. various main ingredients namely rice, local glutinous rice, rice flour, glutinous rice flour, starch, wheat flour, hunkue flour, cassava, sweet potato, banana, nuts, and corn; 4. snack classification in West Java namely (1) traditional snack, (2) creation-snack, (3) modification-snack, (4) outside influence-snack.

  20. A comparison of two-dimensional techniques for converting magnetocardiogram maps into effective current source distributions

    NASA Astrophysics Data System (ADS)

    Ogata, K.; Kandori, A.; Miyashita, T.; Sekihara, K.; Tsukada, K.

    2011-01-01

    The aim of this study was to develop a method for converting the pseudo two-dimensional current given by a current-arrow map (CAM) into the physical current. The physical current distribution is obtained by the optimal solution in a least mean square sense with Tikhonov regularization (LMSTR). In the current dipole simulation, the current pattern differences (ΔJ) between the results of the CAM and the LMSTR with several regularization parameters (α = 10⁻¹-10⁻¹⁵) are calculated. In magnetocardiographic (MCG) analysis, the depth (zd) of a reconstruction plane is chosen by using the coordinates of the sinus node, which is estimated from MCG signals at the early p-wave. The ΔJs at p-wave peaks, QRS-complex peaks, and T-wave peaks of MCG signals for healthy subjects are calculated. Furthermore, correlation coefficients and regression lines are also calculated from the current values of the CAM and the LMSTR during p-waves, QRS-complex, and T-waves of MCG signals. In the simulation, the ΔJs (α ≈ 10⁻¹⁰) had a minimal value. The ΔJs (α = 10⁻¹⁰) at p-wave peaks, QRS-complex peaks, and T-wave peaks of MCG signals for healthy subjects also had minimal value. The correlation coefficients of the current values given by the CAM and the LMSTR (α = 10⁻¹⁰) were greater than 0.9. Furthermore, slopes (y) of the regression lines are correlated with the depth (zd) (r = -0.93). Consequently, the CAM value can be transformed into the LMSTR current value by multiplying it by the slope (y) obtained from the depth (zd). In conclusion, the result given by the CAM can be converted into an effective physical current distribution by using the depth (zd).
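
    A minimal sketch of the two ingredients described, under illustrative assumptions: a Tikhonov-regularized least-squares estimate of a planar current distribution from field samples, and the simple rescaling of a CAM value by the depth-dependent regression slope; the lead-field matrix, regularization weight and slope value below are made up.

    ```python
    import numpy as np

    def lmstr_current(B, L, alpha=0.01):
        """Tikhonov-regularized least-squares estimate of a planar current
        distribution J from magnetic-field samples B, given a lead-field matrix L
        (all quantities here are illustrative stand-ins)."""
        return np.linalg.solve(L.T @ L + alpha * np.eye(L.shape[1]), L.T @ B)

    def cam_to_physical(j_cam, slope_at_depth):
        """Rescale a CAM current value by the regression slope y(z_d) found for
        the chosen reconstruction depth."""
        return slope_at_depth * j_cam

    rng = np.random.default_rng(6)
    L = rng.normal(size=(64, 200))            # 64 sensors, 200 source nodes (toy sizes)
    J_true = rng.normal(size=200)
    B = L @ J_true + 1e-3 * rng.normal(size=64)
    print(lmstr_current(B, L).shape, cam_to_physical(2.0, 0.35))
    ```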

  1. Redox and speciation mapping of rock thin sections using high spatial resolution full-field imaging technique

    NASA Astrophysics Data System (ADS)

    de Andrade, V.; Susini, J.; Salomé, M.; Beraldin, O.; Heymes, T.; Lewin, E.

    2009-04-01

    Because of their complex genesis, natural rocks are most often heterogeneous systems, with various scale-level heterogeneities in both chemistry and structure. In the last decade, the dramatic improvements of hyperspectral imaging techniques provided new tools for accurate material characterisation. Most of these micro- and nano-analytical techniques rely on scanning instruments, which offer high spatial resolution but suffer from long acquisition times imposing practical limits on the field of view. Conversely, full-field imaging techniques rely on a fast parallel acquisition but have limited resolution. Although soft X-ray full-field microscopes based on Fresnel zone plates are commonly used for high resolution imaging, their combination with spectroscopy is challenging and 2D chemical mapping still difficult. For harder X-rays, a lensless X-ray microscope based on a simple propagation geometry is easier and can be readily used for 2D spectro-microscopy. A full-field experimental setup was optimized at the ESRF-ID21 beamline to image iron redox and speciation distributions in rock thin sections. The setup comprises a Si111 or Si220 (E = 0.4 eV) monochromator, a special sample stage and a sensitive camera associated with a brand new GGG:Eu light conversion scintillator and high magnification visible light optics. The pixel size ranges from 1.6 to 0.16 µm according to the optic used. This instrument was used to analyse phyllosilicates and oxides of metamorphic sediments coming from the Aspromonte nappes-pile in Calabria. Iron chemical state distributions were derived - from images of 1000 × 2000 × 30 µm³ rock thin sections - by subtraction of absorption images above and below the Fe K-edge. Using an automatic stitching reconstruction, a wide-field image (4 × 3 mm² with a 1 µm² resolution, for a total of about 12 million pixels) of the total Fe elemental distribution was produced. Moreover, µ-XANES analyses (more than 1 million individual µ-XANES spectra) were performed
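
    A minimal sketch of the image-arithmetic step described (subtracting absorption images recorded below and above the Fe K-edge to map total iron), plus a crude two-energy ratio as a stand-in for pixel-wise XANES analysis; the images and energies are synthetic assumptions.

    ```python
    import numpy as np

    def edge_jump_map(mu_below, mu_above):
        """Difference of absorption images just below and above the Fe K-edge:
        a larger jump means more total iron in that pixel."""
        return mu_above - mu_below

    def two_energy_ratio(mu_e1, mu_e2):
        """Crude ratio of images at two energies sensitive to Fe2+ vs. Fe3+,
        a stand-in for full pixel-wise XANES fitting."""
        return mu_e1 / np.maximum(mu_e2, 1e-9)

    rng = np.random.default_rng(7)
    below = rng.uniform(0.1, 0.3, (512, 512))     # synthetic pre-edge image
    above = rng.uniform(0.2, 0.9, (512, 512))     # synthetic post-edge image
    fe_total = edge_jump_map(below, above)
    print(fe_total.mean(), two_energy_ratio(above, below).mean())
    ```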

  2. Measuring trace gas emission from multi-distributed sources using vertical radial plume mapping (VRPM) and backward Lagrangian stochastic (bLS) techniques

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Two micrometeorological techniques for measuring trace gas emission rates from distributed area sources were evaluated using a variety of synthetic area sources. The accuracy of the vertical radial plume mapping (VRPM) and the backward Lagrangian (bLS) techniques with an open-path optical spectrosco...

  3. The Views of the Teachers about the Mind Mapping Technique in the Elementary Life Science and Social Studies Lessons Based on the Constructivist Method

    ERIC Educational Resources Information Center

    Seyihoglu, Aysegul; Kartal, Ayca

    2010-01-01

    The purpose of this study is to reveal the opinions of teachers on using the mind mapping technique in Life Science and Social Studies lessons. The participants of the study are 20 primary education teachers. In this study, a semi-structured interview technique was used. For content analysis, the themes and codes were defined, based on the views…

  4. FBGs cascade interrogation technique based on wavelength-to-delay mapping and KLT analysis

    NASA Astrophysics Data System (ADS)

    Hervás, J.; Barrera, D.; Fernández-Pousa, Carlos R.; Sales, S.

    2016-05-01

    The Karhunen-Loeve transform (KLT) is applied to the coarsely sampled impulse response generated by an FBG cascade in order to calculate the temperature change experienced by the FBGs. Thanks to a dispersive medium, the wavelength shift produced by the temperature change translates into a delay shift of the sample generated by each FBG; this delay shift is captured by the eigenvalues calculated by the KLT routine, allowing the temperature variation to be measured. Although each FBG sample is represented by only four points, a continuous temperature measurement can be performed thanks to the KLT algorithm. This amounts to roughly a three-order-of-magnitude reduction in the number of points, giving the method low computational complexity. Simulations are performed to validate the interrogation technique and estimate its performance, and an experimental example is provided to demonstrate real operation.
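
    The KLT step amounts to an eigen-decomposition of a sample covariance matrix; the sketch below shows that generic operation in Python/NumPy (it is not the authors' interrogation routine, and the four-points-per-FBG sampling is only reflected in the shape of the input array).

    import numpy as np

    def klt_eigenvalues(samples):
        """Eigenvalues of the covariance of a set of coarsely sampled traces.

        samples : (n_measurements, n_points) array, e.g. 4 points per FBG response.
        In a KLT-based scheme the leading eigenvalues track the delay shift.
        """
        x = samples - samples.mean(axis=0)
        cov = x.T @ x / (len(samples) - 1)
        return np.linalg.eigvalsh(cov)[::-1]   # largest first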

  5. A microwave technique for mapping ice temperature in the Arctic seasonal sea ice zone

    SciTech Connect

    St. Germain, K.M.; Cavalieri, D.J.

    1997-07-01

    A technique for deriving ice temperature in the Arctic seasonal sea ice zone from passive microwave radiances has been developed. The algorithm operates on brightness temperatures derived from the Special Sensor Microwave/Imager (SSM/I) and uses ice concentration and type from a previously developed thin ice algorithm to estimate the surface emissivity. Comparisons of the microwave derived temperatures with estimates derived from infrared imagery of the Bering Strait yield a correlation coefficient of 0.93 and an RMS difference of 2.1 K when coastal and cloud contaminated pixels are removed. SSM/I temperatures were also compared with a time series of air temperature observations from Gambell on St. Lawrence Island and from Point Barrow, AK weather stations. These comparisons indicate that the relationship between the air temperature and the ice temperature depends on ice type.

  6. A snapshot of radiation therapy techniques and technology in Queensland: An aid to mapping undergraduate curriculum

    SciTech Connect

    Bridge, Pete; Carmichael, Mary-Ann; Brady, Carole; Dry, Allison

    2013-03-15

    Undergraduate students studying the Bachelor of Radiation Therapy at Queensland University of Technology (QUT) attend clinical placements in a number of department sites across Queensland. To ensure that the curriculum prepares students for the most common treatments and current techniques in use in these departments, a curriculum matching exercise was performed. A cross-sectional census was performed on a pre-determined “Snapshot” date in 2012. This was undertaken by the clinical education staff in each department who used a standardized proforma to count the number of patients as well as prescription, equipment, and technique data for a list of tumour site categories. This information was combined into aggregate anonymized data. All 12 Queensland radiation therapy clinical sites participated in the Snapshot data collection exercise to produce a comprehensive overview of clinical practice on the chosen day. A total of 59 different tumour sites were treated on the chosen day and as expected the most common treatment sites were prostate and breast, comprising 46% of patients treated. Data analysis also indicated that intensity-modulated radiotherapy (IMRT) use is relatively high with 19.6% of patients receiving IMRT treatment on the chosen day. Both IMRT and image-guided radiotherapy (IGRT) indications matched recommendations from the evidence. The Snapshot method proved to be a feasible and efficient method of gathering useful data to inform curriculum matching. Frequency of IMRT use in Queensland matches or possibly exceeds that indicated in the literature. It is recommended that future repetition of the study be undertaken in order to monitor trends in referral patterns and new technology implementation.

  7. A snapshot of radiation therapy techniques and technology in Queensland: An aid to mapping undergraduate curriculum

    PubMed Central

    Bridge, Pete; Carmichael, Mary-Ann; Brady, Carole; Dry, Allison

    2013-01-01

    Introduction Undergraduate students studying the Bachelor of Radiation Therapy at Queensland University of Technology (QUT) attend clinical placements in a number of department sites across Queensland. To ensure that the curriculum prepares students for the most common treatments and current techniques in use in these departments, a curriculum matching exercise was performed. Methods A cross-sectional census was performed on a pre-determined “Snapshot” date in 2012. This was undertaken by the clinical education staff in each department who used a standardized proforma to count the number of patients as well as prescription, equipment, and technique data for a list of tumour site categories. This information was combined into aggregate anonymized data. Results All 12 Queensland radiation therapy clinical sites participated in the Snapshot data collection exercise to produce a comprehensive overview of clinical practice on the chosen day. A total of 59 different tumour sites were treated on the chosen day and as expected the most common treatment sites were prostate and breast, comprising 46% of patients treated. Data analysis also indicated that intensity-modulated radiotherapy (IMRT) use is relatively high with 19.6% of patients receiving IMRT treatment on the chosen day. Both IMRT and image-guided radiotherapy (IGRT) indications matched recommendations from the evidence. Conclusion The Snapshot method proved to be a feasible and efficient method of gathering useful data to inform curriculum matching. Frequency of IMRT use in Queensland matches or possibly exceeds that indicated in the literature. It is recommended that future repetition of the study be undertaken in order to monitor trends in referral patterns and new technology implementation. PMID:26229604

  8. Mapping Proxy Sensitivity: A New Technique for Compositional Analysis of Cultured Biominerals and Inorganically Precipitated Materials

    NASA Astrophysics Data System (ADS)

    Gagnon, A. C.; DePaolo, D. J.; DeYoreo, J.; Spero, H. J.; Russell, A. D.

    2011-12-01

    Mineral composition is controlled by a host of environmental factors during precipitation. To build accurate paleo-reconstructions we need to separate the impact of each parameter on proxy behavior and use these data to build a chemical-scale understanding of mineral growth. Biomineral culture and inorganic precipitation experiments, where growth parameters can be manipulated independently, are uniquely suited to calibrate proxies and probe mechanism. Culture and precipitation experiments often involve overgrowth of an initial material. For example, seed crystals are used to control mineralogy and avoid nucleation during inorganic precipitation, while culture experiments in marine organisms typically start with wild specimens. New growth corresponding to the experimental conditions must be resolved from the initial material. Separation is typically achieved using microanalysis, skeletal dissection, or estimates of the initial mass and composition. Each approach imposes limits on the accuracy, precision or types of materials that can be analyzed. Slow growth rates and complicated geometries can make these techniques especially challenging when applied to biominerals. We present a method of compositional analysis for use in biological culture and inorganic growth experiments that overcomes many of these challenges. This method relies on growth in a mixed element stable isotope spike, requires neither the initial mass nor the initial composition to be known, harnesses the precision and sensitivity of bulk analysis, and applies even when it is impossible to physically identify newly grown material. Error analysis suggests this method can significantly improve the precision of metal/calcium measurements in experimentally grown material compared to current methods. Furthermore, the method can isolate different events through time, separating, for example, the impact of day and night cycles on biomineral composition. We will present metal/calcium ratios measured using the

  9. Surface Electrical Conductivity Prediction by Soil Moisture and Electromagnetic Mapping Techniques: Implication for Landmine Detection Technologies

    NASA Astrophysics Data System (ADS)

    Katsube, J.; McNairn, H.; Keating, P. K.; Das, Y.; Dyke, L.; Best, M. E.; Singhroy, V.; Connell-Madore, S.; Hunter, J.; Klassen, R.; Dilabio, R.; Moore, A.

    2004-05-01

    Electrical conductivity (EC) can be a source of significant signal interference in landmine detection, implying that soil EC prediction is necessary in order to carry out safe demining operations in landmine-affected countries around the world. A fundamental study on soil EC mechanisms and their relationship to moisture content has been carried out in order to increase the accuracy of soil EC prediction when using data from various sensors, such as remote sensing, airborne, and surficial electromagnetic (EM) methods. Results indicate that soil moisture consists of free water filling pore spaces and bound water which forms adsorbed water layers on the grain surfaces. The responses of these two water phases to drying rates and EC are very different, to the extent that a moist clay-poor soil may have low EC but a dry clay-rich soil may have higher EC. This is a result not only of the bound water layers being a significant source of EC, but also of the capillary component of the free water reacting differently to the different grain sizes of the soil. The capillary water forms important electrically conductive bridges between the adsorbed water layers on the grains that constitute the soil. This implies that information on soil texture, mineralogy, and their distribution is required for accurate EC prediction. Whereas information on these soil characteristics may be acquired by remote sensing and soil maps, soil moisture content is likely to vary from the time of data acquisition to that of demining operations, implying that methods to predict these changes are required. In addition, soil-type inhomogeneity, such as vertical and horizontal variation, can also be a source of inaccuracies in moisture and EC predictions. However, these investigations also indicate that a wide-band electrical-frequency signal may be able to provide information not only on metallic mineral content, but also on pore space, clay mineral type, and water content. In addition, applications of

  10. Regional Mapping of Coupled Fluxes of Carbon and Water Using Multi-Sensor Fusion Techniques

    NASA Astrophysics Data System (ADS)

    Schull, M. A.; Anderson, M. C.; Semmens, K. A.; Yang, Y.; Gao, F.; Hain, C.; Houborg, R.

    2014-12-01

    In an ever-changing climate there is an increasing need to measure the fluxes of water, energy and carbon so that decision makers can implement policies that will help mitigate the effects of climate change. In an effort to improve drought monitoring, water resource management and agricultural assessment capabilities, a multi-scale and multi-sensor framework for routine mapping of land-surface fluxes of water and energy at field to regional scales has been established. The framework uses the ALEXI (Atmosphere Land Exchange Inverse)/DisALEXI (Disaggregated ALEXI) suite of land-surface models forced by remotely sensed data from Landsat, MODIS (MODerate resolution Imaging Spectroradiometer), and GOES (Geostationary Operational Environmental Satellite). Land-surface temperature (LST) can be an effective substitute for in-situ surface moisture observations and a valuable metric for constraining land-surface fluxes at sub-field scales. The adopted multi-scale thermal-based land surface modeling framework facilitates regional to local downscaling of water and energy fluxes by using a combination of shortwave reflective and thermal infrared (TIR) imagery from GOES (4-10 km; hourly), MODIS (1 km; daily), and Landsat (30-100 m; bi-weekly). In this research the ALEXI/DisALEXI modeling suite is modified to incorporate carbon fluxes using a stomatal resistance module, which replaces the Priestley-Taylor latent heat approximation. In the module, canopy-level nominal light-use efficiency (βn) is the parameter that modulates the flux of water and carbon in and out of the canopy. Leaf chlorophyll (Chl) is a key parameter for quantifying variability in photosynthetic efficiency to facilitate the spatial distribution of coupled carbon and water retrievals. Spatial distributions of Chl are retrieved from Landsat (30 m) using a surface reflectance dataset as input to the REGularized canopy reFLECtance (REGFLEC) tool. The modified ALEXI/DisALEXI suite is applied to regions of rain-fed and

  11. Pedogeochemical mapping of heavy metals with kriging techniques: Lead and Chromium in soils of the Barcelona province (NE Spain)

    NASA Astrophysics Data System (ADS)

    Bech, J.; Tume, P.; Sánchez, P.; Reverter, F.; Bech, J.; Lansac, A.; Longan, L.; Oliver, T.

    2009-04-01

    Analysis of the spatial distribution of heavy metals in soils is of fundamental importance in a number of applications, including the delineation of potentially polluted spots at unsampled sites. In this study, 316 topsoil samples (0-20 cm), located on an approximately 5 km regular grid covering the Barcelona province (7728 km²), have been examined. The geology in this area is varied: granodiorites and Paleozoic shales are predominant in the SE-E, and Mesozoic and Tertiary limestones, dolomites, marlstones, gypsum and sandstones in the remainder of the province. The heavy metals considered in this paper were lead and chromium. Soil samples (<2 mm fraction) were analyzed by standard methods. Pb and Cr in aqua regia digests (ISO 11466:2002) were determined by ICP-ES (Polyscan 61E Spectrometer). Experimental variograms were analysed, considering both omni-directional and anisotropic cases. Theoretical variograms fitted to the experimental data were used to map heavy metal concentrations on a 1 km regular grid using kriging techniques. Finally, the pedogeochemical maps obtained are discussed in terms of the soil properties, human activities and geological characteristics of the region.
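
    The experimental (omni-directional) variogram underlying such maps can be computed with a few lines of Python/NumPy; the sketch below is generic, and the lag binning is an arbitrary illustrative choice, not the binning used in the study.

    import numpy as np

    def empirical_semivariogram(coords, values, lag_edges):
        """Omni-directional experimental semivariogram.

        coords    : (n, 2) sample locations (e.g. in km)
        values    : (n,) concentrations (e.g. Pb in mg/kg)
        lag_edges : 1D array of lag-distance bin edges (empty bins yield NaN)
        """
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        g = 0.5 * (values[:, None] - values[None, :]) ** 2
        iu = np.triu_indices(len(values), k=1)          # unique sample pairs
        dist, gamma = d[iu], g[iu]
        return np.array([gamma[(dist >= lo) & (dist < hi)].mean()
                         for lo, hi in zip(lag_edges[:-1], lag_edges[1:])])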

  12. An easy-to-use technique to characterize cardiodynamics from first-return maps on ΔRR-intervals

    NASA Astrophysics Data System (ADS)

    Fresnel, Emeline; Yacoub, Emad; Freitas, Ubiratan; Kerfourn, Adrien; Messager, Valérie; Mallet, Eric; Muir, Jean-François; Letellier, Christophe

    2015-08-01

    Heart rate variability analysis using 24-h Holter monitoring is frequently performed to assess the cardiovascular status of a patient. The present retrospective study is based on the beat-to-beat interval variations, or ΔRR, which offer a better view of the underlying structures governing the cardiodynamics than the common RR-intervals. By investigating data for three groups of adults (with normal sinus rhythm, congestive heart failure, and atrial fibrillation, respectively), we showed that the first-return maps built on ΔRR can be classified according to three structures: (i) a moderate central disk, (ii) a reduced central disk with well-defined segments, and (iii) a large triangular shape. These three very different structures can be distinguished by computing a Shannon entropy based on a symbolic dynamics and an asymmetry coefficient, here introduced to quantify the balance between accelerations and decelerations in the cardiac rhythm. The probability P111111 of successive heart beats without large beat-to-beat fluctuations allows the regularity of the cardiodynamics to be assessed. A characteristic time scale, corresponding to the partition inducing the largest Shannon entropy, was also introduced to quantify the ability of the heart to modulate its rhythm: it was significantly different for the three structures of first-return maps. A blind test was performed to validate the technique.
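
    A minimal Python/NumPy sketch of the quantities discussed above: the ΔRR first-return pairs and a Shannon entropy computed from a symbolic (box) partition of the (ΔRR_n, ΔRR_n+1) plane. The 25 ms bin width stands in for the characteristic scale mentioned in the abstract and is an arbitrary assumption.

    import numpy as np

    def shannon_entropy_drr(rr_ms, bin_width_ms=25.0):
        """Shannon entropy of the symbolic first-return map built on dRR."""
        drr = np.diff(np.asarray(rr_ms, dtype=float))   # beat-to-beat variations
        x, y = drr[:-1], drr[1:]                        # first-return pairs
        ix = np.floor(x / bin_width_ms).astype(int)
        iy = np.floor(y / bin_width_ms).astype(int)
        _, counts = np.unique(np.stack([ix, iy], axis=1), axis=0, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))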

  13. Scatterer size and concentration estimation technique based on a 3D acoustic impedance map from histologic sections

    NASA Astrophysics Data System (ADS)

    Mamou, Jonathan; Oelze, Michael L.; O'Brien, William D.; Zachary, James F.

    2001-05-01

    Accurate estimates of scatterer parameters (size and acoustic concentration) are beneficial adjuncts for characterizing disease from ultrasonic backscatter measurements. An estimation technique was developed to obtain parameter estimates from the Fourier transform of the spatial autocorrelation function (SAF). A 3D impedance map (3DZM) is used to obtain the SAF of tissue. 3DZMs are obtained by aligning digitized light-microscope images from histologic preparations of tissue. Estimates were obtained for simulated 3DZMs containing randomly located spherical scatterers: relative errors were less than 3%. Estimates were also obtained from a rat fibroadenoma and a 4T1 mouse mammary tumor (MMT). Tissues were fixed (10% neutral-buffered formalin), embedded in paraffin, serially sectioned, and stained with H&E. 3DZM results were compared to estimates obtained independently from ultrasonic backscatter measurements. For the fibroadenoma and MMT, average scatterer diameters were 91 and 31.5 μm, respectively. Ultrasonic measurements yielded average scatterer diameters of 105 and 30 μm, respectively. The 3DZM estimation scheme showed results similar to those obtained by the independent ultrasonic measurements. The 3D impedance maps show promise as a powerful tool to characterize ultrasonic scattering sites in tissue. [Work supported by the University of Illinois Research Board.]
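
    The core 3DZM operation is the estimation of the spatial autocorrelation function and its Fourier transform; by the Wiener-Khinchin theorem this is equivalent to the power spectrum of the mean-removed impedance map, as in the short sketch below (the form-factor fitting used to extract size and concentration is omitted).

    import numpy as np

    def impedance_power_spectrum(zmap):
        """Power spectrum (FFT of the spatial autocorrelation) of a 3D impedance map."""
        z = zmap - zmap.mean()
        return np.fft.fftshift(np.abs(np.fft.fftn(z)) ** 2)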

  14. An easy-to-use technique to characterize cardiodynamics from first-return maps on ΔRR-intervals.

    PubMed

    Fresnel, Emeline; Yacoub, Emad; Freitas, Ubiratan; Kerfourn, Adrien; Messager, Valérie; Mallet, Eric; Muir, Jean-François; Letellier, Christophe

    2015-08-01

    Heart rate variability analysis using 24-h Holter monitoring is frequently performed to assess the cardiovascular status of a patient. The present retrospective study is based on the beat-to-beat interval variations, or ΔRR, which offer a better view of the underlying structures governing the cardiodynamics than the common RR-intervals. By investigating data for three groups of adults (with normal sinus rhythm, congestive heart failure, and atrial fibrillation, respectively), we showed that the first-return maps built on ΔRR can be classified according to three structures: (i) a moderate central disk, (ii) a reduced central disk with well-defined segments, and (iii) a large triangular shape. These three very different structures can be distinguished by computing a Shannon entropy based on a symbolic dynamics and an asymmetry coefficient, here introduced to quantify the balance between accelerations and decelerations in the cardiac rhythm. The probability P111111 of successive heart beats without large beat-to-beat fluctuations allows the regularity of the cardiodynamics to be assessed. A characteristic time scale, corresponding to the partition inducing the largest Shannon entropy, was also introduced to quantify the ability of the heart to modulate its rhythm: it was significantly different for the three structures of first-return maps. A blind test was performed to validate the technique. PMID:26328562

  15. Epitope mapping by epitope excision, hydrogen/deuterium exchange, and peptide-panning techniques combined with in silico analysis.

    PubMed

    Clementi, Nicola; Mancini, Nicasio; Criscuolo, Elena; Cappelletti, Francesca; Clementi, Massimo; Burioni, Roberto

    2014-01-01

    The fine characterization of protective B cell epitopes plays a pivotal role in the development of novel vaccines. The development of epitope-based vaccines is, in fact, not possible without a clear definition of the antigenic regions involved in the binding between the protective antibody (Ab) and its molecular target. To achieve this result, different epitope-mapping approaches have been widely described (Clementi et al. Drug Discov Today 18(9-10):464-471, 2013). Nowadays, the best way to characterize an Ab-bound region is still the resolution of the Ab-antigen (Ag) co-crystal structure. Unfortunately, crystallization approaches are not always feasible. However, experimental strategies aimed at predicting the Ab-Ag interaction, followed by in silico analysis of the results, may be good surrogate approaches. Here, we review a few experimental techniques followed by the use of "basic" informatics tools for the analysis of the results. PMID:24515481

  16. Digital snow mapping technique using LANDSAT data and General Electric IMAGE 100 system. [Dinwoody Creek in Wind River Mountains, Wyoming

    NASA Technical Reports Server (NTRS)

    Dallam, W. C.; Foster, J. L.

    1975-01-01

    In this study, a technique and procedures using the General Electric IMAGE 100 system were derived for performing a snow cover analysis of small watersheds for quasi-operational application. The study area was the Wind River Mountains of west-central Wyoming. A small watershed, namely Dinwoody Creek, was selected as a test site. LANDSAT data and U-2 imagery were used in the analysis. From a minimal-snowcover LANDSAT scene, multispectral analysis was performed, yielding the distribution of forest, bare rock, grassland, water, and snow within the watershed. The forest and bare rock themes were saved and registered with other scenes containing greater snow cover. Likewise, elevation contours from a digitized map were stored and superimposed over the snowpack areas.

  17. An interpretation of a geologic map of Fannin County, Texas, prepared by ADP techniques from ERTS MSS data

    NASA Technical Reports Server (NTRS)

    Henderson, J. A., Jr.; Gardner, J. J.; Cipra, J. E.

    1973-01-01

    Multispectral scanner data, collected by ERTS-1 from an area in northeast Texas underlain by structurally simple Cretaceous sedimentary rocks, were analyzed using several automatic data processing techniques. Training areas for the computer analysis were selected in three ways, using: (1) an unsupervised classifier, (2) the unsupervised classifier to further refine training areas, and (3) available ground information. The first two methods utilize the capability of the unsupervised classifier to group resolution elements with similar spectral properties. With the imagery produced by these procedures, several geologic features can be identified. The most easily recognizable features are streams and other water bodies. The approximate locations of contacts between several rock units can be mapped.

  18. Visualizing Mars Using Virtual Reality: A State of the Art Mapping Technique Used on Mars Pathfinder

    NASA Technical Reports Server (NTRS)

    Stoker, C.; Zbinden, E.; Blackmon, T.; Nguyen, L.

    1999-01-01

    We describe an interactive terrain visualization system which rapidly generates and interactively displays photorealistic three-dimensional (3-D) models produced from stereo images. This product, first demonstrated on Mars Pathfinder, is interactive, 3-D, and can be viewed in an immersive display, which qualifies it for the name Virtual Reality (VR). The use of this technology on Mars Pathfinder was the first use of VR for geologic analysis. A primary benefit of using VR to display geologic information is that it provides an improved perception of depth and spatial layout of the remote site. The VR aspect of the display allows an operator to move freely in the environment, unconstrained by the physical limitations of the perspective from which the data were acquired. Virtual Reality offers a way to archive and retrieve information in a way that is intuitively obvious. Combining VR models with stereo display systems can give the user a sense of presence at the remote location. The capability to interactively perform measurements from within the VR model offers unprecedented ease in performing operations that are normally time consuming and difficult using other techniques. Thus, Virtual Reality can be a powerful cartographic tool. Additional information is contained in the original extended abstract.

  19. Evolution of luminous IRAS galaxies: Radio imaging

    NASA Technical Reports Server (NTRS)

    Neff, S. G.; Hutchings, J. B.

    1993-01-01

    In a recent study of IRAS galaxies' optical morphologies, we found that luminous IR sources lie in the IR color-luminosity plane in groups which separate out by optical spectroscopic type and also by degree of tidal disturbance. We found that the most luminous steep-IR-spectrum sources are generally galaxies in the initial stages of a major tidal interaction. Galaxies with active nuclei were generally found to have flatter IR spectra, to cover a range of IR luminosity, and to be in the later stages of a tidal interaction. We proposed a sequence of events by which luminous IR sources evolve: they start as interacting or merging galaxies, some develop active nuclei, and most undergo extensive star-formation in their central regions. Another way to study these objects and their individual evolution is to study their radio morphologies. Radio emission may arise at a detectable level from supernovae in star-forming regions and/or the appearance of an active nucleus can be accompanied by a nuclear radio source (which may develop extended structure). Therefore, the compact radio structure may trace the evolution of the inner regions of IRAS-luminous sources. If the radio sources are triggered by the interactions, we would expect to find the radio morphology related to the optical 'interactivity' of the systems. Here, we explore using the radio emission of IRAS galaxies as a possible tracer of galaxy evolution. We present and discuss observations of the compact radio morphology of 111 luminous IRAS-selected active galaxies covering a wide range of IR and optical properties.

  20. Mapping of groundwater potential zones in Salem Chalk Hills, Tamil Nadu, India, using remote sensing and GIS techniques.

    PubMed

    Thilagavathi, N; Subramani, T; Suresh, M; Karunanidhi, D

    2015-04-01

    This study introduces remote sensing and geographic information system (GIS) techniques for mapping groundwater potential zones. Remote sensing and GIS techniques have been used to map the groundwater potential zones in the Salem Chalk Hills, Tamil Nadu, India. Charnockites and fissile hornblende biotite gneiss are the major rock types in this region. Dunites and peridotites are the ultramafic rocks which cut across the foliation planes of the gneisses and are highly weathered. The area hosts magnesite and chromite deposits, which are excavated by five mining companies using bench mining. The thickness of the weathered and fractured zone varies from 2.2 to 50 m in the gneissic formation and 5.8 to 55 m in the charnockite. At the contacts of gneiss and charnockite, the thickness ranges from 9.0 to 90.8 m, favoring good groundwater potential. The mine lease area is underlain by fractured and sheared hornblende biotite gneiss, where groundwater potential is good. Water catchment tanks within a 5 km radius are small to moderate in size and are only seasonal; they remain dry during summer. As perennial water resources are remote, the domestic and agricultural activities in this region depend mainly upon groundwater resources. The mines are located in a gently sloping area, and accumulation of water is not observed except in mine pits, even during the monsoon period. Therefore, it is essential to map the groundwater potential zones for proper management of the aquifer system. Satellite imagery was also used to extract lineaments, hydrogeomorphic landforms, drainage patterns, and land use, which are the major controlling factors for the occurrence of groundwater. Various thematic layers pertaining to groundwater occurrence, such as geology, geomorphology, land use/land cover, lineament, lineament density, drainage, drainage density, slope, and soil, were generated using GIS tools. By integrating all the above thematic layers based on the ranks and
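
    The final integration step implied above (combining ranked thematic layers with relative weights) can be sketched as a simple weighted overlay; the layer names and weights below are assumptions for illustration, not those used in the study.

    import numpy as np

    def groundwater_potential(layers, weights):
        """Rank-and-weight overlay of thematic rasters.

        layers  : dict name -> 2D array of integer ranks (e.g. 1 = poor ... 5 = good)
        weights : dict name -> relative weight (illustrative values only)
        """
        index = sum(weights[k] * layers[k].astype(float) for k in layers)
        return index / sum(weights.values())   # higher = better groundwater potential

    # e.g. groundwater_potential({"geology": geo, "lineament_density": lin, "slope": slp},
    #                            {"geology": 3, "lineament_density": 2, "slope": 1})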

  1. Surface charge microscopy: novel technique for mapping charge-mosaic surfaces in electrolyte solutions.

    PubMed

    Yin, Xihui; Drelich, Jaroslaw

    2008-08-01

    The effective surface potential, called the zeta potential, is commonly determined from electrophoretic mobility measurements for particles moving in a solution in response to an electric field applied between two electrodes. The situation can be reversed, with the solution being forced to flow through a plug of packed particles, and the streaming potential of the particles can be calculated. A significant limitation of these electrokinetic measurements is that only an average value of the zeta potential/streaming potential is measured--regardless of whether the surface charge distribution is homogeneous or otherwise. However, in real-world situations, nearly all solids (and liquids) of technological significance exhibit surface heterogeneities. To detect heterogeneities in surface charge, analytical tools which provide accurate and spatially resolved information about the material surface potential--particularly at microscopic and submicroscopic resolutions--are needed. In this study, atomic force microscopy (AFM) was used to measure the surface interaction forces between a silicon nitride AFM cantilever and a multiphase volcanic rock. The experiments were conducted in electrolyte solutions with different ionic strengths and pH values. The colloidal force measurements were carried out stepwise across the boundary between adjacent phases. At each location, the force-distance curves were recorded. Surface charge densities were then calculated by fitting the experimental data with a DLVO theoretical model. Significant differences between the surface charge densities of the two phases and gradual transitions in the surface charge density at the interface were observed. It is demonstrated that this novel technique can be applied to examine one- and two-dimensional distributions of the surface potential. PMID:18620435
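
    The fitting step can be illustrated with a generic, simplified DLVO force law (exponential double-layer repulsion plus non-retarded van der Waals attraction) and a standard least-squares fit; the functional form, parameterization, and initial guesses are assumptions, not the exact expressions used by the authors to extract surface charge densities.

    import numpy as np
    from scipy.optimize import curve_fit

    def dlvo_force(D_nm, F_el, debye_nm, vdw_term):
        """Simplified DLVO force (nN) versus tip-sample separation D (nm)."""
        return F_el * np.exp(-D_nm / debye_nm) - vdw_term / D_nm ** 2

    # D, F: separation (nm) and force (nN) extracted from one force-distance curve
    # popt, pcov = curve_fit(dlvo_force, D, F, p0=(1.0, 10.0, 0.1))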

  2. Luminal perfusion of isolated gastric glands.

    PubMed

    Waisbren, S J; Geibel, J; Boron, W F; Modlin, I M

    1994-04-01

    We have extended to rabbit gastric glands the technique for perfusing single isolated renal tubules. We isolated glands by hand dissection and used concentric glass pipettes to hold them and perfuse their lumina. Parietal cells (PCs), which tended to be located toward the gland opening, were identified by their pyramidal shape, large size, and autofluorescence. Chief cells (CCs) were identified by their round shape and smaller size. In some experiments, we perfused the lumen with hydroxypyrenetrisulfonate, a pH-sensitive fluorophore, at pH 7.4 and used digital image processing to monitor luminal pH (pHl). Solutions were buffered with N-2-hydroxyethylpiperazine-N'-2-ethanesulfonic acid to pH 7.4 at 37 degrees C. With fast perfusion, we found no evidence of decreased pHl, even with stimulation by 10 microM carbachol. With slow perfusion, pHl often fell below the dye's sensitive range (pH < 5), especially at low perfusate buffering power. In other experiments, we loaded cells with the pH-sensitive dye 2',7'-bis(2-carboxyethyl)-5(6)-carboxyfluorescein and monitored intracellular pH (pHi) in multiple individual PCs and CCs in a single gland. Mean pHi was 7.21 +/- 0.02 (n = 136 cells) for PCs and 7.27 +/- 0.03 (n = 103) for CCs. To examine the response to decreased pHl and basolateral pH (pHb), we lowered pHb to 6.4 or lowered pHl to 3.4 or 1.4. Lowering pHb to 6.4 for approximately 1 min caused pHi to fall reversibly by 0.39 +/- 0.05 (n = 53) in PCs and 0.58 +/- 0.03 (n = 50) in CCs. Lowering pHl to 3.4 or 1.4 caused no significant pHi changes in PCs (n = 38 and 82) or in CCs (n = 44 and 77). Carbachol did not affect the response to changes in pHl or pHb. We conclude that the apical surfaces of PCs and CCs are unusually resistant to extreme pH gradients. PMID:8178950

  3. Techniques for Revealing 3d Hidden Archeological Features: Morphological Residual Models as Virtual-Polynomial Texture Maps

    NASA Astrophysics Data System (ADS)

    Pires, H.; Martínez Rubio, J.; Elorza Arana, A.

    2015-02-01

    The recent developments in 3D scanning technologies have not been accompanied by corresponding developments in visualization interfaces. We are still using the same types of visual codes as when maps and drawings were made by hand. The information available in 3D scanning data sets is not being fully exploited by current visualization techniques. In this paper we present recent developments regarding the use of 3D scanning data sets for revealing invisible information from archaeological sites. These sites are affected by a common problem: decay processes, such as erosion, that never cease and endanger the persistence of the last vestiges of some peoples and cultures. Rock art engravings, or epigraphic inscriptions, are among the most affected by these processes because they are, by their very nature, carved into the surface of rocks that are often exposed to climatic agents. The study and interpretation of these motifs and texts are strongly conditioned by the degree of conservation of the imprints left by our ancestors. Every single detail in the remaining carvings can make a huge difference in the conclusions drawn by specialists. We have selected two case studies severely affected by erosion to present the results of the ongoing work dedicated to exploring in new ways the information contained in 3D scanning data sets. A new method for depicting subtle morphological features on the surface of objects or sites has been developed. It allows human-made patterns that are still present at the surface, but invisible to the naked eye or to any other archaeological inspection technique, to be contrasted. It was called the Morphological Residual Model (MRM) because of its ability to contrast the shallowest morphological details, to which we refer as residuals, contained in the wider forms of the backdrop. Afterwards, we have simulated the process of building Polynomial Texture Maps - a widespread technique that has been contributing to archaeological studies for some years - in a 3D virtual environment using the results of MRM

  4. BIA interpretation techniques for vegetation mapping using thematic mapper false color composites (interim report for San Carlos Reservation)

    USGS Publications Warehouse

    Bonner, W.J.; English, T.C.; Haas, R.H.; Feagan, T.R.; McKinley, R.A.

    1987-01-01

    The Bureau of Indian Affairs (BIA) is responsible for the natural resource management of approximately 52 million acres of Trust lands in the contiguous United States. The lands are distributed in a "patchwork" fashion throughout the country. Management responsibilities on these areas include minerals, range, timber, fish and wildlife, agricultural, cultural, and archaeological resources. In an age of decreasing natural resources and increasing natural resource values, effective multiple-resource management is critical. BIA has adopted a "systems approach" to natural resource management which utilizes Geographic Information System (GIS) technology. The GIS encompasses a continuum of spatial and relational data elements and includes functional capabilities such as data collection, data entry, database development, data analysis, database management, display, and report generation. In support of database development activities, BIA and BLM/TGS conducted a cooperative effort to investigate the potential of 1:100,000-scale Thematic Mapper (TM) False Color Composites (FCCs) for providing vegetation information suitable for input to the GIS and for later incorporation into a generalized Bureau-wide land cover map. Land cover information is critical, as the majority of reservations currently have no land cover information in either map or digital form. This poster outlines an approach which includes the manual interpretation of land cover using TM FCCs, the digitizing of interpreted polygons, and the editing of digital data based upon ground-truthing exercises. An efficient and cost-effective methodology for generating large-area land cover information is illustrated for the Mineral Strip area on the San Carlos Indian Reservation in Arizona. Techniques which capitalize on the knowledge of local natural resources professionals, while minimizing machine-processing requirements, are suggested.

  5. Mapping forested wetlands in the Great Zhan River Basin through integrating optical, radar, and topographical data classification techniques.

    PubMed

    Na, X D; Zang, S Y; Wu, C S; Li, W L

    2015-11-01

    Knowledge of the spatial extent of forested wetlands is essential to many studies, including wetland functioning assessment, greenhouse gas flux estimation, and wildlife suitable habitat identification. For discriminating forested wetlands from their adjacent land cover types, researchers have resorted to image analysis techniques applied to numerous remotely sensed data. While these have met with some success, there is still no consensus on the optimal approach for mapping forested wetlands. To address this problem, we examined two machine learning approaches, random forest (RF) and K-nearest neighbor (KNN) algorithms, and applied these two approaches within the frameworks of pixel-based and object-based classification. The RF and KNN algorithms were constructed using predictors derived from Landsat 8 imagery, Radarsat-2 advanced synthetic aperture radar (SAR), and topographical indices. The results show that the object-based classifications performed better than the per-pixel classifications using the same algorithm (RF) in terms of overall accuracy, and the difference in their kappa coefficients is statistically significant (p < 0.01). There were noticeable omissions of forested and herbaceous wetlands in the per-pixel classifications using the RF algorithm. For the object-based image analysis, there were also statistically significant differences (p < 0.01) in kappa coefficient between the results based on the RF and KNN algorithms. The object-based classification using RF provided a more visually adequate distribution of the land cover types of interest, while the object-based classifications using the KNN algorithm showed noticeable commission errors for forested wetlands and omission errors for agricultural land. This research shows that object-based classification with RF using optical, radar, and topographical data improved the mapping accuracy of land cover and provides a feasible approach to discriminating forested wetlands from the other land cover types in the forested area. PMID
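
    A minimal scikit-learn sketch of the two classifiers compared above, evaluated with the kappa coefficient; the feature and label files are hypothetical placeholders for the per-object (or per-pixel) predictors derived from Landsat 8, Radarsat-2 SAR, and topographic indices.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import cohen_kappa_score

    X, y = np.load("features.npy"), np.load("labels.npy")   # hypothetical inputs
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    for name, clf in [("RF", RandomForestClassifier(n_estimators=500, random_state=0)),
                      ("KNN", KNeighborsClassifier(n_neighbors=5))]:
        clf.fit(X_tr, y_tr)
        print(name, "kappa =", round(cohen_kappa_score(y_te, clf.predict(X_te)), 3))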

  6. Lithostratigraphic interpretation from joint analysis of seismic tomography and magnetotelluric resistivity models using self-organizing map techniques

    NASA Astrophysics Data System (ADS)

    Bauer, K.; Muñoz, G.; Moeck, I.

    2012-12-01

    The combined interpretation of different models as derived from seismic tomography and magnetotelluric (MT) inversion represents a more efficient approach to determine the lithology of the subsurface compared with the separate treatment of each discipline. Such models can be developed independently or by application of joint inversion strategies. After the step of model generation using different geophysical methodologies, a joint interpretation work flow includes the following steps: (1) adjustment of a joint earth model based on the adapted, identical model geometry for the different methods, (2) classification of the model components (e.g. model blocks described by a set of geophysical parameters), and (3) re-mapping of the classified rock types to visualise their distribution within the earth model, and petrophysical characterization and interpretation. One possible approach for the classification of multi-parameter models is based on statistical pattern recognition, where different models are combined and translated into probability density functions. Classes of rock types are identified in these methods as isolated clusters with high probability density function values. Such techniques are well-established for the analysis of two-parameter models. Alternatively we apply self-organizing map (SOM) techniques, which have no limitations in the number of parameters to be analysed in the joint interpretation. Our SOM work flow includes (1) generation of a joint earth model described by so-called data vectors, (2) unsupervised learning or training, (3) analysis of the feature map by adopting image processing techniques, and (4) application of the knowledge to derive a lithological model which is based on the different geophysical parameters. We show the usage of the SOM work flow for a synthetic and a real data case study. Both tests rely on three geophysical properties: P velocity and vertical velocity gradient from seismic tomography, and electrical resistivity
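
    To make the SOM work flow concrete, the sketch below trains a small self-organizing map on joint data vectors (e.g. P velocity, velocity gradient, and resistivity per model block) and returns the best-matching unit of each vector; it is a generic, minimal implementation, not the authors' code, and all sizes and learning parameters are arbitrary assumptions.

    import numpy as np

    def train_som(data, rows=10, cols=10, n_iter=5000, lr0=0.5, sigma0=3.0, seed=0):
        """Train a minimal SOM on (n_samples, n_features) data vectors."""
        rng = np.random.default_rng(seed)
        w = rng.random((rows, cols, data.shape[1]))
        grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
        for t in range(n_iter):
            frac = t / n_iter
            lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
            x = data[rng.integers(len(data))]
            bmu = np.unravel_index(np.argmin(((w - x) ** 2).sum(-1)), (rows, cols))
            h = np.exp(-((grid - np.array(bmu)) ** 2).sum(-1) / (2 * sigma ** 2))[..., None]
            w += lr * h * (x - w)                      # Gaussian neighbourhood update
        return w

    def best_matching_units(data, w):
        """Map each data vector to its winning SOM node (row, col)."""
        d = ((w[None, ...] - data[:, None, None, :]) ** 2).sum(-1)
        flat = d.reshape(len(data), -1).argmin(axis=1)
        return np.column_stack(np.unravel_index(flat, w.shape[:2]))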

  7. Genomic Profiling in Luminal Breast Cancer

    PubMed Central

    Gluz, Oleg; Hofmann, Daniel; Würstlein, Rachel; Liedtke, Cornelia; Nitz, Ulrike; Harbeck, Nadia

    2013-01-01

    Summary The developments in gene expression analysis have made it possible to sub-classify hormone receptor-positive (luminal) breast cancer into different prognostic subgroups. This sub-classification is currently used in clinical routine in the form of prognostic signatures (e.g. the 21-gene Oncotype DX®, the 70-gene Mammaprint®). As yet, the optimal method for sub-classification has not been defined. Moreover, there is no evidence from prospective trials. This review explores widely used genomic signatures in luminal breast cancer, making a critical appraisal of the evidence from retrospective/prospective trials. It is based on a systematic literature search performed using Medline (accessed September 2013) and abstracts presented at the Annual Meetings of the American Society of Clinical Oncology and the San Antonio Breast Cancer Symposium. PMID:24550749

  8. White LEDs with limit luminous efficacy

    NASA Astrophysics Data System (ADS)

    Lisitsyn, V. M.; Lukash, V. S.; Stepanov, S. A.; Yangyang, Ju

    2016-01-01

    In the most promising and widespread gallium-nitride-based LEDs, emission is generated in the blue spectral region, with a maximum at about 450 nm, and is converted to visible light with the desired spectrum by means of a phosphor. The thermal energy released in the conversion is determined by the difference between the energies of the excitation and emission quanta and by the phosphor quantum yield. Heat losses manifest themselves as a decrease in the luminous efficacy. Heating significantly reduces an LED's efficiency and lifetime. In addition, as the temperature rises, the emission output and the efficiency of the spectral conversion decrease. Therefore, the reduction of the energy losses caused by heating is crucial for LED development. In this paper, heat losses in phosphor-converted LEDs (hereinafter, chips) during spectrum conversion are estimated. The limit values of the luminous efficacy for white LEDs are evaluated.
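
    A back-of-the-envelope Python sketch of the conversion (Stokes) loss that contributes to this limit; the emission wavelength below is an illustrative assumption, not a value from the paper.

    PUMP_NM = 450.0        # blue pump wavelength (nm)
    EMIT_NM = 560.0        # assumed mean phosphor emission wavelength (nm)

    # Fraction of each converted photon's energy released as heat:
    stokes_loss = 1.0 - PUMP_NM / EMIT_NM
    print(f"Stokes loss per converted photon: {stokes_loss:.1%}")   # about 20%

    # The luminous efficacy of the resulting spectrum is
    # K = 683 lm/W * integral(V(l) * S(l) dl) / integral(S(l) dl),
    # which approaches 683 lm/W only for light concentrated near 555 nm;
    # realistic white spectra stay well below that limit.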

  9. Orbital masses of nearby luminous galaxies

    SciTech Connect

    Karachentsev, Igor D.; Kudrya, Yuri N. E-mail: yukudrya@gmail.com

    2014-09-01

    We use observational properties of galaxies accumulated in the Updated Nearby Galaxy Catalog to derive the dark matter mass of luminous galaxies via the motions of their companions. Data on the orbital-to-stellar mass ratio are presented for 15 luminous galaxies situated within 11 Mpc of us: the Milky Way, M31, M81, NGC 5128, IC342, NGC 253, NGC 4736, NGC 5236, NGC 6946, M101, NGC 4258, NGC 4594, NGC 3115, NGC 3627, and NGC 3368, as well as for a composite suite around other nearby galaxies of moderate and low luminosity. The typical ratio for these galaxies is M_orb/M_* = 31, corresponding to a mean local density of matter Ω_m = 0.09, i.e., one-third of the global cosmic density. This quantity seems to be rather an upper limit on the dark matter density, since the peripheral population of the suites may suffer from the presence of fictitious unbound members. We note that the Milky Way and M31 halos have smaller dimensions and lower stellar masses than those of the other 13 nearby luminous galaxies. However, the dark-to-stellar mass ratio for both the Milky Way and M31 is typical of the other neighboring luminous galaxies. The distortion in the Hubble flow observed around the Local Group and five other neighboring groups yields their total masses within the radius of the zero-velocity surface, R_0; these masses are slightly lower than the orbital and virial values. This difference may be due to the effect of dark energy producing a kind of 'mass defect' within R_0.

  10. Luminal Conversion and Immunoregulation by Probiotics

    PubMed Central

    Ganesh, Bhanu Priya; Versalovic, James

    2015-01-01

    Beneficial microbes are responsible for the synthesis of nutrients and metabolites that are likely important for the maintenance of mammalian health. Many nutrients and metabolites derived from the gut microbiota by luminal conversion have been implicated in the development, homeostasis and function of innate and adaptive immunity. These factors clearly suggest that intestinal microbiota may influence host immunity via microbial metabolite-dependent mechanisms. We describe how intestinal microbes including probiotics generate microbial metabolites that modulate mucosal and systemic immunity. PMID:26617521

  11. High-Resolution Sea Ice Topography Mapping using UAS-based Sensors and Structure-from-Motion Techniques

    NASA Astrophysics Data System (ADS)

    Saiet, E., II

    2015-12-01

    Digital Elevation Models (DEMs) of sea ice are key descriptors of the surface boundary between ice and atmosphere. High-resolution (meter-scale) and accurate (cm-scale) DEM data are required to correctly understand surface-atmosphere interactions in arctic environments. Beyond that, high-quality DEM data are also needed to understand sea ice stability and to quantify the suitability of sea ice as a means of navigation both on and through the ice. Due to the stringent accuracy requirements of sea ice topography mapping, Lidar data are often used to generate sea ice DEMs. Despite their proven performance, Lidar sensors are expensive and difficult to operate, especially in harsh and remote Arctic environments. Hence, alternative, more efficient solutions need to be found. To address this issue, this study investigates the applicability of two recent technical innovations to sea ice DEM production: (1) we analyze the performance of Structure from Motion (SfM) techniques for sea ice topography mapping. SfM is an image processing technique that has recently gained momentum in the geosciences and enables high-quality DEM production from images of uncalibrated off-the-shelf cameras; (2) we investigate the applicability of Unmanned Aerial Systems (UAS) as platforms for our camera systems. UAS have significant advantages for Arctic applications due to their high flexibility, low cost, and ability to fly in environments deemed risky for manned operations. Combined, SfM and UAS may result in an intriguing alternative to traditional sensors. Using data from a 2015 field campaign near Barrow, Alaska, we showcase the DEM measurement performance that can be achieved with UAS-based sensors and SfM processing. In addition to showing examples of DEM products, we will provide results of an extensive performance analysis where DEM measurements were compared to ground observations and DEMs from alternative sources. To analyze the impact of flight-track information on DEM quality, we first

  12. Observations of Luminous Infrared Galaxies with Herschel

    NASA Astrophysics Data System (ADS)

    Armus, Lee

    2014-01-01

    A major result of the IRAS survey was the discovery of a large population of luminous infrared galaxies (LIRGs) which emit a significant fraction of their bolometric luminosity in the far-infrared. LIRGs cover the full range of morphologies from isolated disk galaxies to advanced mergers, exhibiting enhanced star-formation rates and a higher fraction of Active Galactic Nuclei (AGN) compared to less luminous galaxies. A detailed study of low-redshift LIRGs is critical for our understanding of the cosmic evolution of galaxies and black holes, since LIRGs comprise the bulk of the cosmic far-infrared background and dominate star formation at 0.5 < z < 1. With ISO, it was possible to measure the full suite of infrared diagnostic lines in local normal and luminous infrared galaxies for the first time, but samples were small and observations challenging. With Herschel, we have been able to study large samples of low-redshift LIRGs, and even probe the physical conditions in powerful starburst galaxies out to significant redshifts. By combining the Herschel data with those from Spitzer, it is now possible to understand the heating and cooling of the dust and gas in complete samples of LIRGs for the first time. I will review recent results from a number of GTO, OTKP and GO programs in an attempt to summarize the advances we have made in understanding star formation and black hole accretion in LIRGs as a direct result of the Herschel mission.

  13. Pulsations and outbursts of luminous blue variables

    SciTech Connect

    Cox, A.N.; Guzik, J.A.; Soukup, M.S.; Despain, K.M.

    1997-06-01

    We propose an outburst mechanism for the most luminous stars in our own and other galaxies. These million-solar-luminosity stars, with masses (after earlier mass loss) of between 20 and maybe 70 solar masses, are pulsationally unstable for both radial and low-degree nonradial modes. Some of these modes are "strange," meaning mostly that the pulsations are concentrated near the stellar surface and have very rapid growth rates in linear theory. The pulsation driving is by both the high iron-line opacity (near 150,000 K) and the helium opacity (near 30,000 K) kappa effects. Periods range from 5 to 40 days. Depending on the composition, pulsations periodically produce luminosities above the Eddington limit for deep layers. The radiative luminosity creates an outward push that readily drives the very low-gamma envelope to very large outburst radii. A key point is that a super-Eddington luminosity cannot be taken up by the sluggish convection rapidly enough to prevent an outward acceleration of much of the envelope. As the helium abundance in the envelope material increases through ordinary wind mass loss and luminous blue variable outbursts, the opacity in the deep pulsation-driving layers decreases. This makes the current Eddington luminosity even higher, so that pulsations can then no longer give radiative luminosities exceeding the limit. For the lower-mass and lower-luminosity luminous blue variables there is considerably less iron-line opacity driving, and pulsations are almost all caused by the helium-ionization kappa effect.

  14. Applications of the Halo Model to Large Scale Structure Measurements of the Luminous Red Galaxies

    NASA Astrophysics Data System (ADS)

    Reid, Beth A.; Spergel, D. N.; Bode, P.

    2009-01-01

    The power spectrum of density fluctuations in the evolved universe provides constraints on cosmological parameters that are complementary to the CMB and other astronomical probes. The Sloan Digital Sky Survey (SDSS) Luminous Red Galaxy (LRG) sample probes a volume of 3 Gpc³, and systematic errors in modeling the nonlinearities limit our ability to extract information on the shape of the linear power spectrum. In Chapter 2 of this dissertation we present the 'Counts-in-Cylinders' (CiC) technique and use it to measure the multiplicity function of groups of LRGs in SDSS. We use the Halo Occupation Distribution description of the galaxy-matter mapping and N-body simulations to connect this observation with constraints on the distribution of LRGs in dark matter halos. In Chapter 3 we study the effects of resolution on statistics relating to both the large- and small-scale distributions and motions of matter and dark matter halos. We combine these results to produce a large set of high-quality mock LRG catalogs that reproduce the higher-order statistics in the density field probed by the CiC technique. Using these catalogs we present a detailed analysis of the method used in Tegmark et al. 2006 to estimate the LRG power spectrum, and find that the large nonlinear correction necessary for their analysis is degenerate with changes in the linear spectrum we wish to constrain. We show that the CiC group-finding method of Chapter 2 can be used to reconstruct the underlying halo density field. The power spectrum of this field has only percent-level deviations from the underlying matter power spectrum, and will therefore provide tighter constraints on cosmological parameters. Techniques presented in this dissertation will be useful for the final analysis of the SDSS LRGs and for upcoming surveys probing much larger volumes. B.A.R. gratefully acknowledges support from the NSF Graduate Research Fellowship.
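
    A minimal sketch of the Counts-in-Cylinders step for a redshift-space box (line of sight along z); it is a brute-force O(n²) illustration with hypothetical cylinder dimensions, whereas a tree-based neighbour search would be used for an SDSS-sized sample.

    import numpy as np

    def counts_in_cylinders(pos, r_perp, r_par):
        """Companion counts per galaxy within a cylinder of transverse radius
        r_perp and line-of-sight half-depth r_par (same units as pos)."""
        dperp = np.linalg.norm(pos[:, None, :2] - pos[None, :, :2], axis=-1)
        dpar = np.abs(pos[:, None, 2] - pos[None, :, 2])
        in_cyl = (dperp < r_perp) & (dpar < r_par)
        np.fill_diagonal(in_cyl, False)        # exclude the galaxy itself
        return in_cyl.sum(axis=1)

    # The multiplicity function is then the histogram of the resulting group sizes.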

  15. Relation between acid back-diffusion and luminal surface hydrophobicity in canine gastric mucosa: Effects of salicylate and prostaglandin

    SciTech Connect

    Goddard, P.J.

    1989-01-01

    The stomach is thought to be protected from luminal acid by a gastric mucosal barrier that restricts the diffusion of acid into the tissue. This study tested the hypothesis that the hydrophobic luminal surface of canine gastric mucosa incubated in Ussing chambers impedes the back-diffusion of luminal acid into the tissue. Isolated sheets of mucosa were treated with cimetidine to inhibit spontaneous acid secretion and incubated under conditions that prevented significant secretion of luminal bicarbonate. By measuring acid loss from the luminal compartment using the pH-stat technique, acid back-diffusion was continuously monitored; potential difference (PD) was measured as an index of tissue viability. Tissue luminal surface hydrophobicity was estimated by contact angle analysis at the end of each experiment. Addition of 16,16-dimethyl prostaglandin E2 to the nutrient compartment enhanced luminal surface hydrophobicity but did not reduce acid back-diffusion in tissues that maintained a constant PD. 10 mM salicylate at pH 4.00 in the luminal compartment reduced surface hydrophobicity, but this decrease did not occur if 1 μg/ml prostaglandin was present in the nutrient solution. Despite possessing relatively hydrophilic and relatively hydrophobic surface properties, respectively, acid back-diffusion in the absence of salicylate was not significantly different between these two groups. Neither group maintained a PD after incubation with salicylate. Lastly, radiolabeled salicylate was used to calculate the free (non-salicylate-associated) acid loss in tissues incubated with salicylate and/or prostaglandin. No significant correlation was found between free acid back-diffusion and luminal surface hydrophobicity. These data do not support the hypothesis that acid back-diffusion is impeded by the hydrophobic surface presented by isolated canine gastric mucosa.

  16. Combination of techniques for mapping structural and functional connectivity of soil erosion processes: a case study in a small watershed

    NASA Astrophysics Data System (ADS)

    Seeger, Manuel; Taguas, Encarnación; Brings, Christine; Wirtz, Stefan; Rodrigo Comino, Jesus; Albert, Enrique; Ries, Johabbes B.

    2016-04-01

    experiment). The measurements were conducted at representative areas identified in advance by the preceding mapping. Preliminary results show that the rills are highly effective in producing sediments, and also in quickly connecting the different sources with the catchment's outlet. At the same time, they act as a disconnecting feature for the areas of observation, as they may lead the runoff (and the transported sediments) outside the catchment. On the other hand, the experiments showed that the evidently degraded areas produce only very delayed runoff, and thus also sediments, whereas the areas with stable deep soils show evidence of fast runoff and erosive responses. The preliminary results of the combination of mapping and experimental techniques demonstrate the different levels at which functional and structural connectivity have to be evaluated. The latter may be, as a geomorphological feature, the result of former process distributions, whereas the directly observable (functional) connectivity may shift in time due to internal feedbacks, such as the result of soil degradation.

  17. Optimal Electromagnetic (EM) Geophysical Techniques to Map the Concentration of Subsurface Ice and Adsorbed Water on Mars and the Moon

    NASA Astrophysics Data System (ADS)

    Stillman, D. E.; Grimm, R. E.

    2013-12-01

    Water ice is ubiquitous in our Solar System and is a probable target for planetary exploration. Mapping the lateral and vertical concentration of subsurface ice from or near the surface could determine the origin of lunar and martian ice and quantify a much-needed resource for human exploration. Determining subsurface ice concentration on Earth is not trivial and has been attempted previously with electrical resistivity tomography (ERT), ground penetrating radar (GPR), airborne EM (AEM), and nuclear magnetic resonance (NMR). These EM geophysical techniques do not actually detect ice, but rather the absence of unfrozen water. This causes a non-unique interpretation of frozen and dry subsurface sediments. This works well in the Arctic because most locations are not dry. However, for planetary exploration, liquid water is exceedingly rare and subsurface mapping must discriminate between an ice-rich and a dry subsurface. Luckily, nature has provided a unique electrical signature of ice: its dielectric relaxation. The dielectric relaxation of ice creates a temperature and frequency dependence of the electrical properties and varies the relative dielectric permittivity from ~3.1 at radar frequencies to >100 at low frequencies. On Mars, sediments smaller than silt size can hold enough adsorbed unfrozen water to complicate the measurement. This is because the presence of adsorbed water also creates frequency-dependent electrical properties. The dielectric relaxation of adsorbed water and ice can be separated as they have different shapes and frequency ranges as long as a spectrum spanning the two relaxations is measured. The volume concentration of ice and adsorbed water is a function of the strength of their relaxations. Therefore, we suggest that capacitively-coupled dielectric spectroscopy (a.k.a. spectral induced polarization or complex resistivity) can detect the concentration of both ice and adsorbed water in the subsurface. To prove this concept we have collected
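
    The separation described here rests on each relaxation contributing its own dispersive term to the complex permittivity; a minimal two-relaxation forward model, with invented relaxation strengths, time constants and Cole-Cole broadening exponents, might look like:

        import numpy as np

        def cole_cole(freq_hz, eps_inf, d_eps, tau_s, alpha=0.0):
            """Single Cole-Cole relaxation term of the complex relative permittivity.
            alpha = 0 reduces to a pure Debye relaxation."""
            w = 2.0 * np.pi * freq_hz
            return eps_inf + d_eps / (1.0 + (1j * w * tau_s) ** (1.0 - alpha))

        freq = np.logspace(-2, 6, 200)             # 10 mHz .. 1 MHz spectrum
        # Hypothetical ice relaxation (strong, slow) plus adsorbed-water relaxation (weaker, faster).
        eps_ice = cole_cole(freq, eps_inf=3.1, d_eps=95.0, tau_s=1e-3, alpha=0.05)
        eps_adw = cole_cole(freq, eps_inf=0.0, d_eps=15.0, tau_s=1e-6, alpha=0.3)
        eps_total = eps_ice + eps_adw

        # The two loss peaks (imaginary part) sit near 1/(2*pi*tau) and can be fitted
        # separately when the measured spectrum spans both; their strengths (d_eps)
        # scale with the ice and adsorbed-water volume fractions.
        loss = -eps_total.imag
        print("dominant loss peak near", freq[np.argmax(loss)], "Hz")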

  18. Utilizing Dynamic Form Generation and Image Map Techniques to Construct an Interface to an Astronomical and Geophysical INGRES Database

    NASA Astrophysics Data System (ADS)

    Dorland, B. N.; Snyder, W. A.; Jones, R. D.; Heinicke, S.; Becker, D. A.

    The Backgrounds Data Center (BDC), located in the Space Sciences Division (SSD) of the Naval Research Laboratory (NRL), is the designated archive for celestial and earth backgrounds data collected by Ballistic Missile Defense Organization (BMDO) science research programs, including the upcoming Midcourse Space Experiment (MSX) data set. We extract metadata, populate relational database catalogs with it, and use these catalogs to locate the archived data products that our users request. The advent of Jason Ng's (NCSA) GSQL protocols has allowed us to construct World-Wide Web interfaces to our catalogs, greatly improving their utility to users. We have modified these scripts to work with our INGRES RDBMS. We have enhanced the standard GSQL interface by incorporating 'on the fly' form and graphical image construction. With dynamic forms, users generate their own forms by pre-selecting the query parameters they wish to use to search the databases. Users can also select query complexity, ranging from rank novice to direct interaction with Structured Query Language (SQL). Dynamic image mapping adds a graphical layer to the WWW forms interface and permits users to select data by interacting with images only. These techniques allow for an uncluttered and intuitive presentation of the catalog databases to users.
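
    Stripped of the GSQL/INGRES specifics, the 'on the fly' form idea amounts to emitting a query form that contains only the fields a user has pre-selected; a generic sketch (the field names are invented, not the BDC catalog schema):

        # Build a minimal HTML query form from a user's pre-selected parameters.
        # Field names below are illustrative, not the actual catalog schema.
        FIELDS = {
            "target_name": "text",
            "ra_deg": "number",
            "dec_deg": "number",
            "obs_date": "date",
            "sensor": "text",
        }

        def build_form(selected, action="/cgi-bin/query"):
            rows = []
            for name in selected:
                if name not in FIELDS:
                    continue
                rows.append(f'  <label>{name}: '
                            f'<input name="{name}" type="{FIELDS[name]}"></label><br>')
            body = "\n".join(rows)
            return f'<form method="post" action="{action}">\n{body}\n  <input type="submit">\n</form>'

        print(build_form(["target_name", "obs_date"]))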

  19. Locating pollutant emission sources with optical remote sensing measurements and an improved one-dimensional radial plume mapping technique.

    PubMed

    Wu, Chang-fu; Lin, Shih-Chun; Yeh, Cheng-Kai

    2012-04-01

    Previous studies have shown that there was a relatively large amount of uncertainty along the major wind direction in the results of locating emission sources using the one-dimensional radial plume mapping (RPM(1D)) technique based on optical remote sensing measurements. This paper proposes setting up an additional monitoring line that is perpendicular to the original scanning beam geometry to reduce this uncertainty. We first conducted a computer simulation study using the Gaussian dispersion model to generate the downwind concentrations of plumes from 400 source locations in a 201 m × 201 m spatial domain under various wind directions (n = 181). The optical remote sensing instrument was assumed to be at (0, 0) with two perpendicular monitoring lines, each of which had three beam segments of equal length. Each pair of the reconstructed downwind concentration profiles was then used to trace back to the source locations. The results showed that the accuracy of the method and its uncertainty were improved by using the proposed two-line RPM(1D) approach rather than the original one-line RPM(1D) approach at most simulated source locations. In a follow-up field experiment, a tracer gas was released at the coordinate of (100, 100). The release location was covered within the 0.25- to 0.5-probability area of the estimated results, and the distance between the actual and estimated source locations was 18.4 m (9.2% of the longest beam path). PMID:22382995
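
    The simulation step can be sketched with a standard ground-level Gaussian plume: the quantity each optical beam segment sees is the line integral of the plume concentration along that segment. The source position, wind direction, emission rate and dispersion coefficients below are arbitrary illustrations, not the values used in the paper.

        import numpy as np

        def plume_conc(down, cross, q=1.0, u=2.0, a=0.22, b=0.20):
            """Ground-level Gaussian plume concentration for a ground source; `down` is
            the downwind distance (m), `cross` the crosswind offset (m). The power-law
            dispersion coefficients are placeholders, not a specific stability class."""
            c = np.zeros_like(down, dtype=float)
            ok = down > 0.0                               # no concentration upwind of the source
            sig_y, sig_z = a * down[ok] ** 0.9, b * down[ok] ** 0.85
            c[ok] = q / (np.pi * u * sig_y * sig_z) * np.exp(-0.5 * (cross[ok] / sig_y) ** 2)
            return c

        def beam_pic(p0, p1, source, wind_deg, n=400):
            """Path-integrated concentration along a straight optical beam from p0 to p1."""
            t = np.linspace(0.0, 1.0, n)[:, None]
            pts = (1.0 - t) * p0 + t * p1
            wind = np.array([np.cos(np.radians(wind_deg)), np.sin(np.radians(wind_deg))])
            rel = pts - source
            down = rel @ wind
            cross = rel @ np.array([-wind[1], wind[0]])
            step = np.linalg.norm(p1 - p0) / (n - 1)
            return np.trapz(plume_conc(down, cross), dx=step)

        # Instrument at (0, 0); two perpendicular monitoring lines, each split into three
        # equal beam segments; source at (100, 100) m with the wind blowing toward (0, 0).
        src = np.array([100.0, 100.0])
        segments = [((0, 0), (67, 0)), ((67, 0), (134, 0)), ((134, 0), (201, 0)),
                    ((0, 0), (0, 67)), ((0, 67), (0, 134)), ((0, 134), (0, 201))]
        for start, end in segments:
            pic = beam_pic(np.array(start, float), np.array(end, float), src, wind_deg=225.0)
            print(start, end, round(pic, 4))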

  20. Deep Resistivity Sounding (DRS) technique for mapping of sub-trappean sediments - A case study from central India

    NASA Astrophysics Data System (ADS)

    Veeraiah, B.; Ashok Babu, G.

    2014-06-01

    Mapping of sub-trappean Mesozoic sediments is a complex geophysical problem of great interest to the oil industry worldwide. Sediments underlying the traps are of varying thickness, causing considerable uncertainty in the interpretation of geophysical data. The Deep Resistivity Sounding (DRS) technique is an effective, non-invasive electrical geophysical tool for delineating sub-trappean sediments to large depths (~3-4 km), owing to the marked resistivity contrast between the trap, the sediments, and the basement. As part of an integrated geophysical study, eighty-one deep resistivity soundings were conducted in the eastern part of the Deccan flood basalt region, central India. The region is situated in the western part of the Indian subcontinent and is covered by the Deccan traps, mostly exposed, except for a few places where thin Tertiary sediments are present. DRS data were acquired with the Schlumberger electrode configuration up to a maximum current electrode separation of 10 km using high-power deep resistivity equipment. The transmitter can send a maximum current on the order of 20 A, and the receiver can measure signals at the microvolt level. Layer parameters, i.e., the resistivities and thicknesses of the various layers, were determined from 1D modeling of all eighty-one DRS stations. To explain the layered earth structure, geo-electric sections were prepared along a few profiles, and the results from the two profiles showing the maximum sediment thickness are discussed in the present paper.
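
    For reference, the apparent resistivity measured with a Schlumberger array follows from the geometric factor of the electrode spread; the helper below applies the standard textbook formula and is not tied to the particular equipment described here.

        import math

        def schlumberger_apparent_resistivity(ab_over_2, mn, delta_v, current):
            """Apparent resistivity (ohm-m) for a Schlumberger array.
            ab_over_2 : half the current-electrode separation AB/2 (m)
            mn        : potential-electrode separation MN (m)
            delta_v   : measured potential difference (V)
            current   : injected current (A)
            """
            geometric_factor = math.pi * (ab_over_2 ** 2 - (mn / 2.0) ** 2) / mn
            return geometric_factor * delta_v / current

        # Example: AB/2 = 1000 m, MN = 40 m, a 50 microvolt signal at 20 A.
        print(schlumberger_apparent_resistivity(1000.0, 40.0, 50e-6, 20.0))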

  1. Evaluation of SLAR and simulated thematic mapper MSS data for forest cover mapping using computer-aided analysis techniques

    NASA Technical Reports Server (NTRS)

    Hoffer, R. M.; Dean, M. E.; Knowlton, D. J.; Latty, R. S.

    1982-01-01

    Kershaw County, South Carolina, was selected as the study site for analyzing simulated thematic mapper MSS data and dual-polarized X-band synthetic aperture radar (SAR) data. The impact of the improved spatial and spectral characteristics of the LANDSAT D thematic mapper data on computer-aided analysis for forest cover type mapping was examined, as well as the value of synthetic aperture radar data for differentiating forest and other cover types. The utility of pattern recognition techniques for analyzing SAR data was assessed. Topics covered include: (1) collection of TMS and reference data; (2) reformatting, geometric and radiometric rectification, and spatial resolution degradation of TMS data; (3) development of training statistics and test data sets; (4) evaluation of different numbers and combinations of wavelength bands on classification performance; (5) comparison among three classification algorithms; and (6) the effectiveness of the principal component transformation in data analysis. The collection, digitization, reformatting, and geometric adjustment of SAR data are also discussed. Image interpretation results and classification results are presented.

  2. Light Echoes from Luminous Transients - Status of the Field and Future Prospects

    NASA Astrophysics Data System (ADS)

    Welch, Douglas L.; Rest, Armin

    2014-06-01

    Scattered-light echoes from luminous transients (supernovae, luminous blue variables/supernova impostors) are unique in their ability to preserve an observable record of the state of a transient in outburst for centuries. Unlike conventional (i.e. intentional) time-series records, the presence of interstellar dust on the light echo ellipsoid from a given outburst allows photometric and spectroscopic study at any epoch where the scattered-light signal reaches a useful signal-to-noise threshold. Additionally, dust located at different positions on an outburst's light-echo ellipsoid will preserve a record of the hemispheric emission of light from the outburst as seen from each dust location, allowing the degree of asymmetry of an outburst to be studied. A more recent application of scattered-light echoes employs the fact that multiple outbursts of high-amplitude, unstable, luminous variables each produce their own light-echo ellipsoids, and dust located on such ellipsoids provides an opportunity for multiple outbursts to be studied at the same (modern) epoch. This talk will provide a brief summary of the evolution of light echoes from their use as a tool to map out the interstellar medium around luminous events to their richer and more recent use for investigating the outbursts themselves. Important findings from observations of scattered-light echoes regarding SN1987A, Cas A, and eta Car will be highlighted and near-term prospects for new work and new observational opportunities will be presented.

  3. Evolution Of Map Display Optical Systems

    NASA Astrophysics Data System (ADS)

    Boot, Alan

    1983-06-01

    It is now over 20 years since Ferranti plc introduced optically projected map displays into operational aircraft navigation systems. Then, as now, it was the function of the display to present an image of a topographical map to a pilot or navigator with his present position clearly identified. Then, as now, the map image was projected from a reduced image stored on colour microfilm. Then, as now, the fundamental design problems are the same. In the exposed environment of an aircraft cockpit, where brightness levels may vary from those associated with direct sunlight on the one hand to starlight on the other, how does one design an optical system with sufficient luminance, contrast and resolution when, in the daytime, sunlight may fall on the display or in the pilot's eyes, and at night the display luminance must not detract from the pilot's ability to pick up external cues? This paper traces the development of Ferranti plc optically projected map displays from the early V Bomber and the ill-fated TSR2 displays to the Harrier and Concorde displays. It then goes on to the development of combined map and electronic displays (COMED), showing how an earlier design, as fitted to Tornado, has been developed into the current COMED design which is fitted to the F-18 and Jaguar aircraft. In each of the above display systems particular features of optical design interest are identified and their impact on the design as a whole is discussed. The use of prisms both for optical rotation and translation, techniques for the maximisation of luminance, the problems associated with contrast enhancement, particularly with polarising filters in the presence of optically active materials, the use of aerial image combining systems and the impact of the pilot interface on the system parameters are all included. Perhaps the most interesting result in considering the evolution of map displays has not been so much the designer's solutions in overcoming the various design problems but

  4. Facilitating the exploitation of ERTS imagery using snow enhancement techniques. [geological mapping of New England test area

    NASA Technical Reports Server (NTRS)

    Wobber, F. J.; Martin, K. R. (Principal Investigator); Amato, R. V.; Leshendok, T.

    1974-01-01

    The author has identified the following significant results. The procedure for conducting a regional geological mapping program utilizing snow-enhanced ERTS-1 imagery has been summarized. While it is recognized that mapping procedures in geological programs will vary from area to area and from geologist to geologist, it is believed that the procedure tested in this project is applicable over a wide range of mapping programs. The procedure is designed to maximize the utility and value of ERTS-1 imagery and aerial photography within the initial phase of geological mapping programs. Sample products which represent interim steps in the mapping formula (e.g. the ERTS Fracture-Lineament Map) have been prepared. A full account of these procedures and products will be included within the Snow Enhancement Users Manual.

  5. The Formation of Primordial Luminous Objects

    SciTech Connect

    Ripamonti, Emanuele; Abel, Tom; /KIPAC, Menlo Park

    2005-08-04

    structure formation. In these notes we will leave the discussion of feedback to the lecture notes by Ferrara & Salvaterra and by Madau & Haardt in this same book and focus only on the aspects of the formation of the first objects. The advent of cosmological numerical hydrodynamics in particular allows a fresh new look at these questions. Hence, these notes will touch on aspects ranging from theoretical cosmology to chemistry, computer science, hydrodynamics and atomic physics. For further reading and more references on the subject we refer the reader to other relevant reviews such as Barkana & Loeb 2001, and more recently Ciardi & Ferrara 2004, Glover 2004 and Bromm & Larson 2004. In these notes, we try to give a brief introduction to only the most relevant aspects. We will start with a brief overview of the relevant cosmological concepts in section 2, followed by a discussion of the properties of primordial material (with particular emphasis on its cooling and its chemistry) in section 3. We will then review the technique and the results of numerical simulations in sections 4 and 5: the former will deal with detailed 3D simulations of the formation of gaseous clouds which are likely to transform into luminous objects, while the latter will examine results (mostly from 1D codes) about the modalities of such a transformation. Finally, in section 6 we will critically discuss the results of the previous sections, examining their consequences and comparing them to our present knowledge of the universe.

  6. Luminal B breast cancer subtype displays a dicotomic epigenetic pattern.

    PubMed

    Bediaga, Naiara G; Beristain, Elena; Calvo, Borja; Viguri, María A; Gutierrez-Corres, Borja; Rezola, Ricardo; Ruiz-Diaz, Irune; Guerra, Isabel; de Pancorbo, Marian M

    2016-01-01

    Luminal B breast tumors have aggressive clinical and biological features, and constitute the most heterogeneous molecular subtype, both clinically and molecularly. Unfortunately, the immunohistochemistry correlate of the luminal B subtype remains imprecise, and it has now become of paramount importance to define a classification scheme capable of segregating luminal tumors into clinically meaningful subgroups that may be used clinically to guide patient management. With the aim of unraveling the DNA methylation profiles of the luminal subtypes currently being most used in the clinical setting, we have quantified the DNA methylation level of 27,578 CpG sites in 17 luminal B (ER+, Ki67 ≥ 20 % or PgR < 20 % and HER2-), 8 luminal A (ER+ and Ki67 < 20 %) and 4 luminal B-HER2+ (ER+ and HER2+) breast cancer samples by using the Illumina Infinium methylation microarray approach. Unsupervised hierarchical clustering revealed that DNA methylation stratifies luminal B samples into two categories with differing epigenetic and clinical features. One subgroup of luminal B samples showed a methylator phenotype and clustered with the lumB-HER tumors, while the other showed fewer methylation events and clustered with the luminal A samples. A 3 CpG marker panel capable of discriminating methylator versus non-methylator luminal B samples was identified and further validated in an independent cohort of patients. Our results provide evidence that DNA methylation and, more specifically, a panel of 3 CpG markers, enables the stratification of luminal B samples into two categories with differing epigenetic and clinical features and support the utilization of this panel for therapeutic stratification of patients with luminal breast cancer. PMID:27330889
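
    Unsupervised hierarchical clustering of methylation beta values, of the kind used here to split the luminal B samples, can be sketched on synthetic data as follows; the CpG count, distance metric and linkage are illustrative choices, not those of the study.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import pdist

        rng = np.random.default_rng(1)
        # Synthetic beta values (samples x CpGs): a "methylator" group shifted upward.
        low  = np.clip(rng.normal(0.25, 0.10, size=(9, 300)), 0.0, 1.0)
        high = np.clip(rng.normal(0.60, 0.10, size=(8, 300)), 0.0, 1.0)
        betas = np.vstack([low, high])

        # Agglomerative clustering with Euclidean distance and average linkage.
        dist = pdist(betas, metric="euclidean")
        tree = linkage(dist, method="average")
        labels = fcluster(tree, t=2, criterion="maxclust")
        print(labels)        # two groups, matching the simulated methylation split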

  7. Constraining Emission Models of Luminous Blazar Sources

    SciTech Connect

    Sikora, Marek; Stawarz, Lukasz; Moderski, Rafal; Nalewajko, Krzysztof; Madejski, Greg; /KIPAC, Menlo Park /SLAC

    2009-10-30

    Many luminous blazars which are associated with quasar-type active galactic nuclei display broad-band spectra characterized by a large luminosity ratio of their high-energy ({gamma}-ray) and low-energy (synchrotron) spectral components. This large ratio, reaching values up to 100, challenges the standard synchrotron self-Compton models by means of substantial departures from the minimum power condition. Luminous blazars also typically have very hard X-ray spectra, and those in turn seem to challenge hadronic scenarios for the high energy blazar emission. As shown in this paper, no such problems are faced by the models which involve Comptonization of radiation provided by a broad-line-region, or dusty molecular torus. The lack or weakness of bulk Compton and Klein-Nishina features indicated by the presently available data favors production of {gamma}-rays via up-scattering of infrared photons from hot dust. This implies that the blazar emission zone is located at parsec-scale distances from the nucleus, and as such is possibly associated with the extended, quasi-stationary reconfinement shocks formed in relativistic outflows. This scenario predicts characteristic timescales for flux changes in luminous blazars to be days/weeks, consistent with the variability patterns observed in such systems at infrared, optical and {gamma}-ray frequencies. We also propose that the parsec-scale blazar activity can be occasionally accompanied by dissipative events taking place at sub-parsec distances and powered by internal shocks and/or reconnection of magnetic fields. These could account for the multiwavelength intra-day flares occasionally observed in powerful blazar sources.

  8. Nonlinear Pulsation Modeling of Luminous Blue Variables

    SciTech Connect

    Despain, Kate M.; Guzik, Joyce A.; Cox, Arthur N.

    1997-12-31

    Using an updated version of the Ostlie and Cox (1993) nonlinear hydrodynamics code, we show the results of Luminous Blue Variable (LBV) envelope models based on evolution models with initial masses of 50-80 solar masses, including mass loss. The models use OPAL opacities, contain 60-120 Lagrangian zones, include time-dependent convection, and are given an initial photospheric radial velocity amplitude of 1 km/sec. Our goal is to explain the reason for the LBV instability strip and suggest a cause for LBV outbursts observed in massive stars in our Galaxy as well as the LMC and SMC.

  9. Luminous variables in the Quintuplet cluster

    NASA Astrophysics Data System (ADS)

    Glass, I. S.; Matsumoto, S.; Carter, B. S.; Sekiguchi, K.

    1999-03-01

    We report observations of variability amongst the stars of the `Quintuplet' cluster located about 30 pc in projection from the centre of the Galaxy. Two of the five cocoon star members, which may be protostars or peculiar Wolf-Rayet stars, are seen to vary slowly with moderate amplitude (0.4-0.5 mag). The bright star within the `Pistol' H ii region, suspected of being a luminous blue variable (LBV), has in fact been found to show variability, confirming its tentative classification. A second nearby presumed LBV also varies. One of the apparent Quintuplet members is likely to be a Mira variable along the same line of sight.

  10. Analysis of the nonlinear behavior of shear-Alfvén modes in tokamaks based on Hamiltonian mapping techniques

    SciTech Connect

    Briguglio, S. Vlad, G.; Fogaccia, G.; Di Troia, C.; Fusco, V.; Wang, X.; Zonca, F.

    2014-11-15

    We present a series of numerical simulation experiments set up to illustrate the fundamental physics processes underlying the nonlinear dynamics of Alfvénic modes resonantly excited by energetic particles in tokamak plasmas and of the ensuing energetic particle transports. These phenomena are investigated by following the evolution of a test particle population in the electromagnetic fields computed in self-consistent MHD-particle simulation performed by the HMGC code. Hamiltonian mapping techniques are used to extract and illustrate several features of wave-particle dynamics. The universal structure of resonant particle phase space near an isolated resonance is recovered and analyzed, showing that bounded orbits and untrapped trajectories, divided by the instantaneous separatrix, form phase space zonal structures, whose characteristic non-adiabatic evolution time is the same as the nonlinear time of the underlying fluctuations. Bounded orbits correspond to a net outward resonant particle flux, which produces a flattening and/or gradient inversion of the fast ion density profile around the peak of the linear wave-particle resonance. The connection of this phenomenon to the mode saturation is analyzed with reference to two different cases: a Toroidal Alfvén eigenmode in a low shear magnetic equilibrium and a weakly unstable energetic particle mode for stronger magnetic shear. It is shown that, in the former case, saturation is reached because of radial decoupling (resonant particle redistribution matching the mode radial width) and is characterized by a weak dependence of the mode amplitude on the growth rate. In the latter case, saturation is due to resonance detuning (resonant particle redistribution matching the resonance width) with a stronger dependence of the mode amplitude on the growth rate.
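
    The phase-space picture invoked above, with bounded (trapped) orbits inside an instantaneous separatrix and untrapped trajectories outside, can be visualized with the simplest Hamiltonian mapping, the Chirikov standard map; this toy iteration only illustrates the diagnostic and is not the HMGC test-particle machinery.

        import numpy as np

        def standard_map(theta, p, k=0.9, n_steps=500):
            """Iterate the Chirikov standard map, keeping theta unwrapped so that
            circulating (untrapped) orbits are easy to tell apart from trapped ones."""
            traj = np.empty((n_steps, 2))
            for n in range(n_steps):
                p = p + k * np.sin(theta)        # kick
                theta = theta + p                # rotation (left unwrapped on purpose)
                traj[n] = theta, p
            return traj

        # Orbits started inside the primary island (around theta = pi, p = 0) librate and
        # stay bounded in theta ("trapped"); orbits started well outside the separatrix
        # (|p| larger than roughly 2*sqrt(k)) circulate and theta grows without bound.
        for theta0, p0 in [(np.pi, 0.2), (np.pi, 0.6), (np.pi, 2.5)]:
            orbit = standard_map(theta0, p0)
            span = orbit[:, 0].max() - orbit[:, 0].min()
            kind = "trapped" if span < 2 * np.pi else "circulating"
            print(f"theta0=pi  p0={p0:.1f}  theta span = {span:8.2f}  -> {kind}")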

  11. Application of advanced seismic reflection imaging techniques to mapping permeable zones at Dixie Valley, Nevada. Final technical report

    SciTech Connect

    1998-02-18

    Multifold seismic reflection data from the Dixie Valley geothermal field in Nevada were reprocessed using a nonlinear optimization scheme called simulated annealing to model subsurface acoustic velocities, followed by a pre-stack Kirchhoff migration to produce accurate and detailed depth-migrated images of subsurface structure. In contrast to conventional processing techniques, these methods account for significant lateral variations in velocity and thus have the potential ability to image steeply-dipping faults and fractures that may affect permeability within geothermal fields. The optimization scheme develops two-dimensional velocity models to within 6% of velocities obtained from well and surface geologic data. Only the seismic data (i.e., first arrival times of P waves) are used to construct the velocity models and pre-stack migration images, and no other a priori assumptions are invoked. Velocities obtained by processing individual seismic tracks were integrated to develop a block diagram of velocities to 2.3 km depth within the Dixie Valley geothermal field. Details of the tectonic and stratigraphic structure allowed three dimensional extension of the interpretations of two dimensional data. Interpretations of the processed seismic data are compared with well data, surface mapping, and other geophysical data. The Dixie Valley fault along the southeastern Stillwater Range Piedmont is associated with a pronounced lateral velocity gradient that is interpreted to represent the juxtaposition of relatively low velocity basin-fill strata in the hanging wall against higher velocity crystalline rocks in the footwall. The down-dip geometry of the fault was evaluated by inverting arrival times from a negative move-out event, which we associate with the dipping fault plane, on individual shot gathers for seismic line SRC-3 for the location and depth of the associated reflection points on the fault.
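
    The velocity-model step, simulated annealing driven purely by first-arrival times, can be caricatured on a toy problem: perturb a layered model, keep perturbations that reduce the travel-time misfit, and occasionally accept worse ones with a temperature-controlled probability. The layer geometry, straight-ray travel times and cooling schedule below are simplifications, not the scheme applied to the Dixie Valley data.

        import numpy as np

        rng = np.random.default_rng(42)

        # Toy forward model: vertical-incidence travel time down through horizontal layers.
        thicknesses = np.array([300.0, 700.0, 1300.0])            # m
        true_v = np.array([1800.0, 3200.0, 5000.0])               # m/s

        def travel_times(v):
            return np.cumsum(thicknesses / v)                      # arrival time at each layer base

        observed = travel_times(true_v) + rng.normal(0.0, 0.002, size=3)

        def misfit(v):
            return np.sum((travel_times(v) - observed) ** 2)

        # Simulated annealing over the three layer velocities.
        v = np.full(3, 2500.0)                                     # starting model
        cost, temp = misfit(v), 0.1
        for _ in range(20000):
            trial = np.clip(v + rng.normal(0.0, 50.0, size=3), 1000.0, 7000.0)
            c = misfit(trial)
            if c < cost or rng.random() < np.exp(-(c - cost) / temp):
                v, cost = trial, c
            temp *= 0.9995                                          # geometric cooling schedule

        print("recovered:", np.round(v), " true:", true_v)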

  12. Topical report on subsurface fracture mapping from geothermal wellbores. Phase I. Pulsed radar techniques. Phase II. Conventional logging methods. Phase III. Magnetic borehole ranging

    SciTech Connect

    Hartenbaum, B.A.; Rawson, G.

    1980-09-01

    To advance the state-of-the-art in Hot Dry Rock technology, an evaluation is made of (i) the use of radar to map far-field fractures, (ii) the use of more than twenty different conventional well logging tools to map borehole-fracture intercepts, and (iii) the use of magnetic dipole ranging to determine the relative positions of the injection well and the production well within the fractured zone. It is found that according to calculations, VHF backscatter radar has the potential for mapping fractures within a distance of 50 ± 20 meters from the wellbore. A new technique for improving fracture identification is presented. Analyses of extant data indicate that when used synergistically the (1) caliper, (2) resistivity dipmeter, (3) televiewer, (4) television, (5) impression packer, and (6) acoustic transmission are useful for mapping borehole-fracture intercepts. Improvements in both data interpretation techniques and high temperature operation are required. The surveying of one borehole from another appears feasible at ranges of up to 200 to 500 meters by using a low frequency magnetic field generated by a moderately strong dipole source (a solenoid) located in one borehole, a sensitive B field detector that traverses part of the second borehole, narrow band filtering, and special data inversion techniques.

  13. Framework for the mapping of the monthly average daily solar radiation using an advanced case-based reasoning and a geostatistical technique.

    PubMed

    Lee, Minhyun; Koo, Choongwan; Hong, Taehoon; Park, Hyo Seon

    2014-04-15

    For an effective photovoltaic (PV) system, it is necessary to accurately determine the monthly average daily solar radiation (MADSR) and to develop an accurate MADSR map, which can simplify the decision-making process for selecting a suitable location for the PV system installation. Therefore, this study aimed to develop a framework for the mapping of the MADSR using an advanced case-based reasoning (CBR) approach and a geostatistical technique. The proposed framework consists of the following procedures: (i) the geographic scope for the mapping of the MADSR is set, and the measured MADSR and meteorological data in the geographic scope are collected; (ii) using the collected data, the advanced CBR model is developed; (iii) using the advanced CBR model, the MADSR at unmeasured locations is estimated; and (iv) by applying the measured and estimated MADSR data to a geographic information system, the MADSR map is developed. A practical validation was conducted by applying the proposed framework to South Korea. It was determined that the MADSR map developed through the proposed framework is improved in terms of accuracy. The developed MADSR map can be used for estimating the MADSR at unmeasured locations and for determining the optimal location for the PV system installation. PMID:24635702
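
    The estimation step (iii) can be caricatured with a plain nearest-case scheme: retrieve the measured stations whose attributes most resemble the unmeasured location and blend their MADSR values by similarity. This is a stand-in for, and far simpler than, the advanced CBR model and geostatistical interpolation of the paper; the attributes and numbers are invented.

        import numpy as np

        def estimate_madsr(query, cases, values, k=3):
            """Estimate MADSR at an unmeasured location from the k most similar
            measured cases, weighting by inverse Euclidean distance in standardized
            attribute space. `cases` is (n, d) attributes, `values` is (n,) MADSR."""
            mu, sd = cases.mean(axis=0), cases.std(axis=0)
            z_cases = (cases - mu) / sd
            z_query = (np.asarray(query) - mu) / sd
            d = np.linalg.norm(z_cases - z_query, axis=1)
            idx = np.argsort(d)[:k]
            w = 1.0 / (d[idx] + 1e-9)
            return float(np.average(values[idx], weights=w))

        # Attributes per station (invented): [latitude, sunshine hours, mean temperature].
        stations = np.array([[37.5, 5.8, 12.4],
                             [35.1, 6.9, 14.9],
                             [36.3, 6.1, 13.2],
                             [33.5, 7.3, 15.8]])
        madsr = np.array([3.9, 4.6, 4.1, 4.9])      # kWh/m^2/day, illustrative
        print(estimate_madsr([36.0, 6.3, 13.5], stations, madsr))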

  14. Peripheral visual response time and retinal luminance-area relations

    NASA Technical Reports Server (NTRS)

    Haines, R. F.

    1975-01-01

    Experiments were undertaken to elucidate the stimulus luminance-retinal area relationship that underlies response time (RT) behavior. Mean RT was significantly faster to stimuli imaged beyond about 70 deg of arc from the fovea when their luminance was increased by an amount equal to the foveal stimulus luminance multiplied by the cosine of the angle between the peripheral stimuli and the line of sight. This and additional data are discussed in relation to previous psychophysical data and to possible response mechanisms.

  15. Contribution of a luminance-dependent S-cone mechanism to non-assimilative color spreading in the watercolor configuration

    PubMed Central

    Kimura, Eiji; Kuroki, Mikako

    2014-01-01

    In the watercolor configuration composed of wavy double contours, both assimilative and non-assimilative color spreading have been demonstrated depending on the luminance conditions of the inner and outer contours (IC and OC, respectively). This study investigated how the induced color in the watercolor configuration was modulated by combinations of the IC and the OC color, particularly addressing non-assimilative color spreading. In two experiments, the IC color was fixed to a certain color and combined with various colors selected from a hue circle centered at the background white color. Color spreading was quantified with a chromatic cancelation technique. Results showed that both the magnitude and the apparent hue of the color spreading were largely changed with the luminance condition. When the IC contrast (Weber contrast of the IC to the background luminance) was smaller in size than the OC contrast (higher IC luminance condition), the color spreading was assimilative. When the luminance condition was reversed and the IC contrast was greater than the OC contrast (lower IC luminance condition), the color spreading was non-assimilative and yellowish. When the color spreading was analyzed in terms of cone-opponent excitations, the results were consistent with the interpretation that the color spreading is explainable by a combination of chromatic diffusion from the IC and chromatically opponent induction from the OC. The color spreading in the higher IC luminance condition mainly reflected the chromatic diffusion by both (L–M) and S cone-opponent mechanisms. The non-assimilative color spreading in the lower IC luminance condition mostly reflected S-cone mediated opponent induction and the contribution of −S inducing mechanisms was differentially large. These findings provided several constraints on possible visual mechanisms underlying the watercolor effect. PMID:25538602

  17. Luminous Stars in Galaxies Beyond 3 Mpc

    NASA Astrophysics Data System (ADS)

    Whitmore, B. C.; Wfc3 Science Oversight Committee

    2011-06-01

    I am mainly interested in the formation and destruction of young star clusters in nearby star forming galaxies such as the Antennae, M83, and M51. One of the first analysis steps is to throw out all those pesky stars that keep contaminating my young cluster samples. Recently, spurred on by our new Wide Field Camera 3 (WFC3) Early Release Science data of galaxies including M83, NGC 4214, M82, NGC 2841, and Cen A, we began taking a closer look at the stellar component. Questions we are addressing are: 1) what are the most luminous stars, 2) how can we use them to help study the destruction of star clusters and the population of the field, 3) what fraction of stars, at least the bright stars, are formed in the field, in associations, and in compact clusters. In this contribution we describe some of the beginning steps in this process. More specifically, we describe how we separate stars from clusters in our galaxies, and describe how candidate Luminous Blue Variables (LBVs) and "Single Star" HII (SSHII) regions have been identified.

  18. The Most Luminous Galaxies Found by WISE

    NASA Astrophysics Data System (ADS)

    Eisenhardt, Peter; Tsai, Chao-Wei; Wu, Jingwen; Griffith, Roger; Yan, Lin; Stern, Daniel; Stanford, Adam; Blain, Andrew; Benford, Dominic; Bridge, Carrie; Petty, Sara; Assef, Roberto; Donoso, Emilio; Lake, Sean

    2012-08-01

    NASA's Wide-field Infrared Survey Explorer (WISE) has surveyed the entire sky at 3.4, 4.6, 12, and 22 μm (W1, W2, W3, and W4), reaching sensitivities hundreds of times deeper than IRAS. We have used WISE photometry to select an all-sky sample of objects which are extremely luminous. The objects are prominent in W4, but faint or undetected in W1 and W2 (W12drops). Followup spectroscopy shows that most of the sources have redshifts z>2. In combination with Herschel photometry, these redshifts lead to typical L_bol > 10^13 L_⊙, with ~10% exceeding 10^14 L_⊙. High resolution adaptive optics imaging shows these objects are typically unlensed. We request NOAO time to obtain redshifts and optical and near IR photometry of the complete all-sky sample of the brightest W12drops, all of which are in our Herschel program. This will fulfill the primary WISE objective of finding the most extreme luminous IR galaxies in the Universe. These superlative objects will be the most fruitful for detailed studies of the physics of star formation, AGN fueling, and feedback in the most active galaxies.

  19. Night vision goggle luminance disparity and the Pulfrich phenomenon

    NASA Astrophysics Data System (ADS)

    Pinkus, Alan; Task, Harry L.

    2004-09-01

    When night vision goggle (NVG) image intensifier tubes (I2Ts) are replaced during maintenance, the output luminances of the two channels must not exceed a ratio of 1.5 (brighter channel luminance divided by the dimmer channel luminance) in order to meet the current allowed binocular luminance disparity specification. Two studies were performed to investigate the validity of this requirement. The first study estimated thresholds of binocular luminance disparity detection for observers looking through NVGs. For eight observers, the 25% corrected-for-chance probability of detecting an ocular luminance difference yielded an average ratio of 1.43, indicating that the current 1.5 specification is perhaps too loose. The second study investigated the Pulfrich phenomenon, a pseudo-stereo effect that can be induced by presenting luminance imbalances to the eyes. This study created NVG luminance imbalances using neutral density (ND) filters and then investigated whether or not the various imbalance levels were sufficient to cause the Pulfrich phenomenon to be perceived. Results indicated an imbalance ratio of 1.10 was insufficient to cause the effect to be seen, but a ratio of 1.26 was sufficient (p <= 0.0003) for the effect to be seen, at least part of the time. Based on these results, it is apparent that the allowed binocular luminance disparity ratio should probably be tightened to at least 1.3 with a goal of 1.2.
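
    Where a luminance imbalance is created by placing a neutral density filter of optical density d over one channel, as in the second study, the binocular luminance ratio follows directly from the filter transmittance T = 10^{-d}; this is a generic relation, and the particular d values shown are only inferred for illustration:

        \[ \frac{L_\mathrm{bright}}{L_\mathrm{dim}} = \frac{1}{T} = 10^{d}, \qquad d = 0.1 \Rightarrow 10^{0.1} \approx 1.26, \qquad d \approx 0.04 \Rightarrow 10^{0.04} \approx 1.10 \]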

  20. Luminance-model-based DCT quantization for color image compression

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Peterson, Heidi A.

    1992-01-01

    A model is developed to approximate visibility thresholds for discrete cosine transform (DCT) coefficient quantization error based on the peak-to-peak luminance of the error image. Experimentally measured visibility thresholds for R, G, and B DCT basis functions can be predicted by a simple luminance-based detection model. This model allows DCT coefficient quantization matrices to be designed for display conditions other than those of the experimental measurements: other display luminances, other veiling luminances, and other spatial frequencies (different pixel spacings, viewing distances, and aspect ratios).
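
    A sketch of how such a model turns into a quantization matrix, assuming (for illustration only) a parabolic dependence of log threshold on log spatial frequency with made-up parameter values rather than the calibrated model of the paper; the quantization step is set to twice the threshold so that the peak quantization error stays at or below threshold.

        import numpy as np

        def dct_quant_matrix(n=8, pix_per_deg=32.0, t_min=0.02, f_peak=4.0, k=1.6):
            """Quantization matrix from a toy luminance-threshold model: the detection
            threshold (in normalized luminance units) is assumed constant below f_peak
            and to rise as a parabola in log10 spatial frequency above it; t_min,
            f_peak and k are illustrative, not calibrated, values."""
            i = np.arange(n)
            f = i * pix_per_deg / (2.0 * n)                 # basis-function frequencies (cyc/deg)
            fij = np.hypot(f[:, None], f[None, :])
            fij[0, 0] = f[1] / 2.0                          # avoid log(0) for the DC term
            logratio = np.log10(np.maximum(fij, f_peak) / f_peak)
            thresholds = t_min * 10.0 ** (k * logratio ** 2)
            return 2.0 * thresholds                         # step = 2 * threshold

        np.set_printoptions(precision=3, suppress=True)
        print(dct_quant_matrix())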

  1. Making large class basic histology lectures more interactive: The use of draw-along mapping techniques and associated educational activities.

    PubMed

    Kotzé, Sanet Henriët; Mole, Calvin Gerald

    2015-01-01

    At Stellenbosch University, South Africa, basic histology is taught to a combination class of almost 400 first-year medical, physiotherapy, and dietetic students. Many students often find the amount of work in basic histology lectures overwhelming and consequently lose interest. The aim was to determine if a draw-along mapping activity would focus students during large class lectures. After each lecture on three basic histology tissues, a guided draw-along mapping session covering the work from the lecture was introduced in the form of a click-advance PowerPoint presentation which was used to demonstrate the unfolding of an "ideal" map. The lecturer simultaneously drew a similar map using an overhead projector allowing the students to draw their own maps on blank sheets of paper along with the lecturer. Students remained attentive during the activity and many participated in answering informal questions posed by the lecturer as the map-making session progressed. After the last session, students completed an anonymous, voluntary questionnaire (response rate of 78%). The majority of students found the draw-along maps useful (94%) and believed that their use should be continued in the future (93%). A significant increase (P < 0.001) was found in the test results of student cohorts who were given the current intervention compared to cohorts from previous years who were given mind maps as handouts only or had no intervention. The use of the draw-along mapping sessions was successful in focusing students during large class lectures while also providing them with a useful tool for their studies. PMID:25650015

  2. MEASUREMENT OF THE BROAD-LINE REGION SIZE IN A LUMINOUS MACHO QUASAR

    SciTech Connect

    Chelouche, Doron; Daniel, Eliran; Kaspi, Shai E-mail: shai@wise.tau.ac.il

    2012-05-10

    We measure the broad emission line region (BLR) size of a luminous, L {approx} 10{sup 47} erg s{sup -1}, high-z quasar using broadband photometric reverberation mapping. To this end, we analyze {approx}7.5 years of photometric data for MACHO 13.6805.324 (z {approx_equal} 1.72) in the B and R MACHO bands and find a time delay of 180 {+-} 40 days in the rest frame of the object. Given the spectral-variability properties of high-z quasars, we associate this lag with the rest-UV iron emission blends. Our findings are consistent with a simple extrapolation of the BLR size-luminosity relation in local active galactic nuclei to the more luminous, high-z quasar population. Long-term spectroscopic monitoring of MACHO 13.6805.324 may be able to directly measure the line-to-continuum time delay and test our findings.
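
    Estimating the delay comes down to cross-correlating two unevenly sampled light curves over trial lags; an interpolated cross-correlation in the spirit of, but much simpler than, the analysis above might look like:

        import numpy as np

        def ccf_lag(t1, f1, t2, f2, lags):
            """Interpolated cross-correlation: curve 2 is assumed to echo curve 1 after
            `lag` days, so f1(t) is compared with curve 2 interpolated at t + lag, and
            the lag that maximizes the Pearson correlation is returned."""
            r = np.array([np.corrcoef(f1, np.interp(t1 + lag, t2, f2))[0, 1] for lag in lags])
            return lags[np.argmax(r)], r

        # Toy light curves: the "R-band" curve echoes the "B-band" curve 180 days later.
        rng = np.random.default_rng(3)
        grid = np.arange(0.0, 2701.0, 30.0)
        continuum = np.cumsum(rng.normal(0.0, 1.0, grid.size))      # smooth random walk
        t_b = np.sort(rng.uniform(0.0, 2700.0, 400))
        f_b = np.interp(t_b, grid, continuum) + rng.normal(0.0, 0.05, t_b.size)
        t_r = np.sort(rng.uniform(0.0, 2700.0, 400))
        f_r = np.interp(t_r - 180.0, grid, continuum) + rng.normal(0.0, 0.05, t_r.size)

        best_lag, _ = ccf_lag(t_b, f_b, t_r, f_r, lags=np.arange(0.0, 400.0, 5.0))
        print("recovered lag (days):", best_lag)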

  3. Use of remote sensing techniques for geological hazard surveys in vegetated urban regions. [multispectral imagery for lithological mapping

    NASA Technical Reports Server (NTRS)

    Stow, S. H.; Price, R. C.; Hoehner, F.; Wielchowsky, C.

    1976-01-01

    The feasibility of using aerial photography for lithologic differentiation in a heavily vegetated region is investigated using multispectral imagery obtained from LANDSAT satellite and aircraft-borne photography. Delineating and mapping of localized vegetal zones can be accomplished by the use of remote sensing because a difference in morphology and physiology results in different natural reflectances or signatures. An investigation was made to show that these local plant zones are affected by altitude, topography, weathering, and gullying; but are controlled by lithology. Therefore, maps outlining local plant zones were used as a basis for lithologic map construction.

  4. A methodology for small scale rural land use mapping in semi-arid developing countries using orbital imagery. Part 5: Experimental and operational techniques of mapping land use

    NASA Technical Reports Server (NTRS)

    Vangenderen, J. L. (Principal Investigator); Lock, B. F.

    1976-01-01

    The author has identified the following significant results. The scope of the preprocessing techniques was restricted to standard material from the EROS Data Center, accompanied by some enlarging procedures and the use of the diazo process. The investigation has shown that the most appropriate sampling strategy for this study is the stratified random technique. A viable sampling procedure, together with a method for determining the minimum number of sample points needed to test the results of any interpretation, is presented.
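
    One common way to fix the minimum number of sample points for testing an interpretation is the binomial sample-size formula for a target accuracy p, allowable error E, and confidence coefficient z; whether this is the method adopted in the report is not stated, so the following is only a generic illustration:

        \[ n = \frac{z^{2}\,p\,(1-p)}{E^{2}}, \qquad z = 1.96,\ p = 0.85,\ E = 0.05 \;\Rightarrow\; n \approx 196 \]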

  5. Study of USGS/NASA land use classification system. [compatibility of land use classification system with computer processing techniques employed for land use mapping from ERTS data

    NASA Technical Reports Server (NTRS)

    Spann, G. W.; Faust, N. L.

    1974-01-01

    It is known from several previous investigations that many categories of land-use can be mapped via computer processing of Earth Resources Technology Satellite data. The results are presented of one such experiment using the USGS/NASA land-use classification system. Douglas County, Georgia, was chosen as the test site for this project. It was chosen primarily because of its recent rapid growth and future growth potential. Results of the investigation indicate an overall land-use mapping accuracy of 67% with higher accuracies in rural areas and lower accuracies in urban areas. It is estimated, however, that 95% of the State of Georgia could be mapped by these techniques with an accuracy of 80% to 90%.

  6. The mixture problem in computer mapping of terrain: Improved techniques for establishing spectral signature, atmospheric path radiance, and transmittance. [in Colorado

    NASA Technical Reports Server (NTRS)

    Smedes, H. W.; Hulstrom, R. L.; Ranson, K. J.

    1975-01-01

    The results of LANDSAT and Skylab research programs on the effects of the atmosphere on computer mapping of terrain include: (1) the concept of a ground truth map needs to be drastically revised; (2) the concept of training areas and test areas is not as simple as generally thought because of the problem of pixels that represent a mixture of terrain classes; (3) this mixture problem needs to be more widely recognized and dealt with by techniques of calculating spectral signatures of mixed classes, or by other methods; (4) atmospheric effects should be considered in computer mapping of terrain and in monitoring changes; and (5) terrain features may be used as calibration panels on the ground, from which atmospheric conditions can be determined and monitored. Results are presented of a test area in mountainous terrain of south-central Colorado for which an initial classification was made using simulated mixture-class spectral signatures and actual LANDSAT-1-MSS data.
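
    A mixed pixel's signature can be modeled as a linear combination of pure-class signatures weighted by their area fractions, so recovering the fractions is a small least-squares problem; the band values below are invented for illustration.

        import numpy as np

        # Mean spectral signatures of three pure classes in four bands (invented values).
        endmembers = np.array([[22.0, 30.0, 55.0, 80.0],    # conifer
                               [35.0, 45.0, 60.0, 70.0],    # rock/soil
                               [15.0, 20.0, 25.0, 30.0]])   # water

        def unmix(pixel, em):
            """Least-squares area fractions for one mixed pixel, clipped to be
            non-negative and renormalized to sum to one (a crude stand-in for
            properly constrained unmixing)."""
            frac, *_ = np.linalg.lstsq(em.T, pixel, rcond=None)
            frac = np.clip(frac, 0.0, None)
            return frac / frac.sum()

        # A pixel that is 60% conifer, 30% rock/soil, 10% water.
        mixed = 0.6 * endmembers[0] + 0.3 * endmembers[1] + 0.1 * endmembers[2]
        print(np.round(unmix(mixed, endmembers), 3))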

  7. Visual Comfort Analysis of Innovative Interior and Exterior Shading Systems for Commercial Buildings using High Resolution Luminance Images

    SciTech Connect

    Konis, Kyle; Lee, Eleanor; Clear, Robert

    2011-01-11

    The objective of this study was to explore how calibrated high dynamic range (HDR) images (luminance maps) acquired in real world daylit environments can be used to characterize, evaluate, and compare visual comfort conditions of innovative facade shading and light-redirecting systems. Detailed (1536 x 1536 pixel) luminance maps were time-lapse acquired from two view positions in an unoccupied full scale testbed facility. These maps were analyzed using existing visual comfort metrics to quantify how innovative interior and exterior shading systems compare to conventional systems under real sun and sky conditions over a solstice-to-solstice test interval. The results provide a case study in the challenges and potential of methods of visualizing, evaluating and summarizing daily and seasonal variation of visual comfort conditions computed from large sets of image data.
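
    Once a calibrated luminance map is available, simple comfort-related summaries fall out directly; the sketch below computes a few such summaries (mean luminance, a task-to-surround luminance ratio, and the fraction of pixels above a fixed cap) from a synthetic map. The 2000 cd/m^2 cap and the window and task regions are placeholders, not the metrics or geometry used in the study.

        import numpy as np

        def comfort_summaries(lum, task_slice, surround_slice, cap=2000.0):
            """Crude luminance-map summaries: mean luminance, mean task/surround
            luminance ratio, and the fraction of pixels exceeding `cap` (cd/m^2)."""
            task = lum[task_slice]
            surround = lum[surround_slice]
            return {
                "mean_luminance": float(lum.mean()),
                "task_to_surround_ratio": float(task.mean() / surround.mean()),
                "fraction_above_cap": float((lum > cap).mean()),
            }

        # Synthetic 1536 x 1536 "luminance map": dim interior with a bright window patch.
        rng = np.random.default_rng(7)
        lum = rng.lognormal(mean=4.0, sigma=0.6, size=(1536, 1536))      # ~50-100 cd/m^2
        lum[200:700, 900:1400] *= 60.0                                   # bright glazing area
        print(comfort_summaries(lum,
                                task_slice=np.s_[800:1200, 200:800],
                                surround_slice=np.s_[0:400, 0:600]))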

  8. The Milky Way's Most Luminous Star Clusters: Engines of Galaxy Evolution

    NASA Astrophysics Data System (ADS)

    Rahman, Mubdi; Matzner, C. D.; Moon, D.

    2012-01-01

    Massive young star clusters and OB associations (M > 10^4 Msun) dominate the energetic feedback from stars into the interstellar medium. They contain the most massive and luminous stars in the Galaxy, which shape their environments through winds, ionizing flux, radiation pressure, and eventually supernovae, destroying their natal molecular clouds and inflating superbubbles. Few such clusters have been identified in our Galaxy. We systematically investigate the most luminous H II regions, which we identify using the WMAP foreground maps. We find that the 13 most luminous sources produce one-third of the Galaxy's total ionizing luminosity, all with expected powering populations of M > 4×10^4 Msun. These populations are grouped in small numbers of clusters or associations for each WMAP source. The emission from these regions is dominated by the diffuse component at large radii (10-70 pc) indicating a high leaking fraction of ionizing photons. Using 8 micron maps from Spitzer GLIMPSE and published radio recombination line observations, we resolve the large (>1°) WMAP sources into 40 star forming complexes (SFCs) exhibiting shell morphology with evidence of expansion due to a central powering source. We develop a method, based on differential extinction of the galactic disk, to identify the SFC's powering cluster candidates with 2MASS. We identify 25 candidate clusters within the 40 SFCs having extinctions consistent with their distances. With near-infrared spectroscopy from the New Technology Telescope, we have confirmed the existence of the most massive of these associations, the Dragonfish Association, with M = 10^5 Msun. Of the 50 sampled stars, we identify 2 Luminous Blue Variable candidates, a Wolf-Rayet, and 15 O-type stars, consistent with the yield expected from the candidate contamination rate, verifying the candidate cluster identification method. This investigation produces the most complete picture of the upper-end of the Galaxy's cluster mass function to

  9. OBSCURATION BY GAS AND DUST IN LUMINOUS QUASARS

    SciTech Connect

    Usman, S. M.; Murray, S. S.; Hickox, R. C.; Brodwin, M.

    2014-06-10

    We explore the connection between absorption by neutral gas and extinction by dust in mid-infrared (IR) selected luminous quasars. We use a sample of 33 quasars at redshifts 0.7 < z ≲ 3 in the 9 deg{sup 2} Boötes multiwavelength survey field that are selected using Spitzer Space Telescope Infrared Array Camera colors and are well-detected as luminous X-ray sources (with >150 counts) in Chandra observations. We divide the quasars into dust-obscured and unobscured samples based on their optical to mid-IR color, and measure the neutral hydrogen column density N {sub H} through fitting of the X-ray spectra. We find that all subsets of quasars have consistent power law photon indices Γ ≈ 1.9 that are uncorrelated with N {sub H}. We classify the quasars as gas-absorbed or gas-unabsorbed if N {sub H} > 10{sup 22} cm{sup –2} or N {sub H} < 10{sup 22} cm{sup –2}, respectively. Of 24 dust-unobscured quasars in the sample, only one shows clear evidence for significant intrinsic N {sub H}, while 22 have column densities consistent with N {sub H} < 10{sup 22} cm{sup –2}. In contrast, of the nine dust-obscured quasars, six show evidence for intrinsic gas absorption, and three are consistent with N {sub H} < 10{sup 22} cm{sup –2}. We conclude that dust extinction in IR-selected quasars is strongly correlated with significant gas absorption as determined through X-ray spectral fitting. These results suggest that obscuring gas and dust in quasars are generally co-spatial, and confirm the reliability of simple mid-IR and optical photometric techniques for separating quasars based on obscuration.

  10. A novel flood detection mapping using multi-temporal RADARSAT-2 and TerraSAR-X images through an optimized Taguchi based classification technique

    NASA Astrophysics Data System (ADS)

    Pradhan, Biswajeet

    2016-07-01

    Floods are considered one of the most common natural disasters in Malaysia. Preparation of an accurate flood inventory map is the basic step in flood risk management. Flood detection remains a significantly complex process, however, due to cloud cover in tropical areas, especially in Malaysia. Moreover, most available techniques are expensive and time-consuming. Therefore, in the present study an efficient approach is presented to identify flooded areas by means of multi-temporal RADARSAT-2 and single TerraSAR-X images. The proposed framework was tested at two tropical areas in Malaysia, Kelantan (2014 flood) and Kuala Terengganu (2009 flood), to map the flooded areas. Multi-temporal RADARSAT-2, single TerraSAR-X and Landsat images were classified based on a rule-based object-oriented technique. Then, different levels of image segmentation were performed to distinguish the boundaries of objects of various dimensions and scales. Finally, a novel Taguchi-based method was employed to optimize the segmentation parameters. After the completion of segmentation, the rules were defined and the images were classified to produce an accurate flood inventory map for both the 2014 flood event in Kelantan and the 2009 flood event in Kuala Terengganu. Finally, the classification results were assessed through the confusion matrix. In this research, the combination of techniques and the optimization approach were applied as a pioneering approach for flood detection. The flood inventory map obtained by using the proposed approach shows the efficiency of the methodology, which can be applied by other researchers and decision makers to construct flood inventory maps. Keywords: Radarsat 2; Multispectral imagery; flood detection; Taguchi; rule-based classification
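
    The Taguchi step amounts to evaluating the segmentation only at the level combinations of an orthogonal array instead of a full factorial grid, then picking the best level of each parameter from the main effects. A stripped-down sketch with three parameters at three levels, an L9 array, and a stand-in quality score (the real workflow scores actual segmentations against reference flood polygons):

        import numpy as np

        # L9(3^4) orthogonal array; the first three columns host three factors at three levels.
        L9 = np.array([[0, 0, 0], [0, 1, 1], [0, 2, 2],
                       [1, 0, 1], [1, 1, 2], [1, 2, 0],
                       [2, 0, 2], [2, 1, 0], [2, 2, 1]])

        levels = {"scale": [25, 50, 100], "shape": [0.1, 0.5, 0.9], "compactness": [0.1, 0.5, 0.9]}
        names = list(levels)

        def segmentation_score(scale, shape, compactness):
            """Stand-in for a segmentation-quality objective; in the real workflow this
            would run the segmentation and score it against reference flood polygons."""
            return -((scale - 50) ** 2 / 2500 + (shape - 0.5) ** 2 + (compactness - 0.9) ** 2)

        scores = np.array([segmentation_score(**{n: levels[n][row[i]] for i, n in enumerate(names)})
                           for row in L9])

        # Taguchi-style main effects: average the score over the runs at each level of
        # each factor, then keep the best level per factor (9 runs instead of 27).
        best = {n: levels[n][int(np.argmax([scores[L9[:, i] == lev].mean() for lev in range(3)]))]
                for i, n in enumerate(names)}
        print(best)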

  11. Introduction of digital soil mapping techniques for the nationwide regionalization of soil condition in Hungary; the first results of the DOSoReMI.hu (Digital, Optimized, Soil Related Maps and Information in Hungary) project

    NASA Astrophysics Data System (ADS)

    Pásztor, László; Laborczi, Annamária; Szatmári, Gábor; Takács, Katalin; Bakacsi, Zsófia; Szabó, József; Dobos, Endre

    2014-05-01

    Due to former soil surveys and mapping activities, a significant amount of soil information has accumulated in Hungary. Present soil data requirements are mainly fulfilled with these available datasets, either by their direct use or after specific and generally ad hoc thematic and/or spatial inference. Because of the increasingly frequent discrepancies between the available and the expected data, there may be notable shortcomings in the accuracy and reliability of the delivered products. With the recently started DOSoReMI.hu project (Digital, Optimized, Soil Related Maps and Information in Hungary), we aim to significantly extend how countrywide soil information requirements can be satisfied in Hungary. We have started to compile digital soil-related maps that optimally fulfil national and international demands in terms of thematic, spatial and temporal accuracy. The spatial resolution of the targeted countrywide digital thematic maps is at least 1:50,000 (approx. 50-100 meter raster resolution). DOSoReMI.hu results are also planned to contribute to the European part of GSM.net products. In addition to auxiliary spatial data themes related to soil-forming factors and/or indicative environmental elements, we rely heavily on the various national soil databases. The set of applied digital soil mapping techniques is being gradually broadened, incorporating and eventually integrating geostatistical, data mining and GIS tools. In our paper we will present the first results. - Regression kriging (RK) has been used for the spatial inference of certain quantitative data, such as particle size distribution components, rootable depth and organic matter content. In the course of RK-based mapping, spatially segmented categorical information provided by the SMUs of the Digital Kreybig Soil Information System (DKSIS) has also been used in the form of indicator variables. - Classification and regression trees (CART) were
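
    Regression kriging, as named in the abstract, combines a trend fitted on covariates with kriging of the regression residuals; a compact sketch with an assumed exponential covariance (the covariate, variogram parameters and data are all invented):

        import numpy as np

        def regression_kriging(coords, covars, z, pred_coords, pred_covars,
                               sill=1.0, rng_m=5000.0, nugget=1e-6):
            """Regression kriging sketch: ordinary least squares trend on covariates
            plus simple kriging of the residuals with an exponential covariance.
            sill / rng_m / nugget are assumed variogram parameters, not fitted ones."""
            X = np.column_stack([np.ones(len(z)), covars])
            beta, *_ = np.linalg.lstsq(X, z, rcond=None)
            resid = z - X @ beta

            def cov(a, b):
                d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
                return sill * np.exp(-d / rng_m)

            K = cov(coords, coords) + nugget * np.eye(len(z))
            k0 = cov(coords, pred_coords)
            resid_pred = k0.T @ np.linalg.solve(K, resid)

            X0 = np.column_stack([np.ones(len(pred_coords)), pred_covars])
            return X0 @ beta + resid_pred

        # Toy data: coordinates in metres, one covariate (e.g. elevation), target z.
        rng = np.random.default_rng(5)
        coords = rng.uniform(0, 20000, size=(60, 2))
        elev = rng.uniform(100, 500, size=(60, 1))
        z = 2.0 + 0.004 * elev[:, 0] + rng.normal(0, 0.2, 60)
        pred = regression_kriging(coords, elev, z,
                                  pred_coords=np.array([[10000.0, 10000.0]]),
                                  pred_covars=np.array([[300.0]]))
        print(pred)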

  12. Near surface geophysics techniques and geomorphological approach to reconstruct the hazard cave map in historical and urban areas

    NASA Astrophysics Data System (ADS)

    Lazzari, M.; Loperte, A.; Perrone, A.

    2009-04-01

    This work, carried out with an integrated methodological approach, focuses on the use of near-surface geophysical techniques, such as ground penetrating radar (GPR) and electrical resistivity tomography (ERT), together with geomorphological analysis, in order to reconstruct the cave distribution and geometry in an urban context and, in particular, in historical centres. Many historical Mediterranean sites grew from an original rupestrian settlement whose memory later generations have often lost, with new urban areas built over them and burying any trace. The interaction during recent centuries between human activity (cave excavation, the birth and growth of urban areas) and the character of the natural environment has produced a progressive increase in the hazard and vulnerability of several sites. The reconstruction of a detailed map of cave distribution is the first step in defining the anthropic and geomorphological hazard in urban areas, a fundamental basis for planning and risk assessment. The integrated near-surface geophysical and geomorphological techniques have been applied to the case study of the hilltop town of Tursi and its older nucleus, called Rabatana, located in the south-western sector of Basilicata (southern Italy), an interesting example of the deep bond between the natural and man-made environments of a precious cultural heritage. The history of this settlement has always been deeply connected with the characteristics of the neighbouring environment, and it seems possible that the first settlement was built by excavating the slopes of the sandy relief. It was a typical rupestrian settlement, where meteoric water was stored inside cisterns excavated on the slopes. During recent centuries, the increase in territory development by humans produced an increase in cave excavation in the Tursi-Rabatana urban area. To reconstruct the extremely complex near-surface hypogeal environment excavated in the sandy layers, a geophysical

  13. Luminal pulse velocity in a superluminal medium

    NASA Astrophysics Data System (ADS)

    Amano, Heisuke; Tomita, Makoto

    2015-12-01

    To investigate the physical meaning of the pulse peak in fast and slow light media, we experimentally investigated the propagation of differently shaped pulses, controlling the sharpness of the pulse peak. Symmetric behavior with respect to fast and slow light was observed in traditional Gaussian pulses; that is, propagated pulses were advanced or delayed, respectively, whereas the pulse shape remained unchanged. This symmetry broke down when the pulse peak was sharpened; in the fast light medium, the sharp pulse peak propagated with luminal velocity, and the transmitted pulse deformed into a characteristic asymmetric profile. In contrast, in the slow light medium, a time-delayed smooth peak appeared with a bending point at t = 0. This symmetry breaking with respect to fast and slow light is a universal characteristic of pulse propagation in causal dispersive systems. The sharp pulse peak can be recognized as a bending nonanalytical point and may be capable of transferring information.

  14. Shell nebulae around luminous evolved stars

    NASA Technical Reports Server (NTRS)

    Dufour, Reginald J.

    1989-01-01

    Shell nebulae around luminous Population I Wolf-Rayet, Of, and P-Cygni stars are astrophysically interesting since they are indicators of pre-supernova mass loss and how such massive stars prepare their surrounding interstellar medium prior to explosion. Some twenty-odd such nebulae are known, for which detailed study of their morphological and spectroscopic characteristics has only begun in this decade. In this paper, some of these characteristics are reviewed in general, and new observations are reported. Emphasis has been placed on several 'prototype' objects (NGC 7635, NGC 2359, NGC 6888, and the Eta Carinae condensations) to illustrate the varied massive-star mass-loss, the physics of their winds and shell ejecta, and related nucleosynthesis effects in the compositions of the winds and shells.

  15. Three-Dimensional Mapping of Ozone-Induced Injury in the Nasal Airways of Monkeys Using Magnetic Resonance Imaging and Morphometric Techniques

    SciTech Connect

    Carey, Stephen A.; Minard, Kevin R.; Trease, Lynn L.; Wagner, James G.; Garcia, Guilherme M.; Ballinger, Carol A.; Kimbell, Julia; Plopper, Charles G.; Corley, Rick A.; Postlewait, Ed; Harkema, Jack R.

    2007-03-01

    Age-related changes in gross and microscopic structure of the nasal cavity can alter local tissue susceptibility as well as the dose of inhaled toxicant delivered to susceptible sites. This article describes a novel method for the use of magnetic resonance imaging, 3-dimensional airway modeling, and morphometric techniques to characterize the distribution and magnitude of ozone-induced nasal injury in infant monkeys. Using this method, we are able to generate age-specific, 3-dimensional, epithelial maps of the nasal airways of infant Rhesus macaques. The principal nasal lesions observed in this primate model of ozone-induced nasal toxicology were neutrophilic rhinitis, along with necrosis and exfoliation of the epithelium lining the anterior maxilloturbinate. These lesions, induced by acute or cyclic (episodic) exposures, were examined by light microscopy, quantified by morphometric techniques, and mapped on 3-dimensional models of the nasal airways. Here, we describe the histopathologic, imaging, and computational biology methods developed to efficiently characterize, localize, quantify, and map these nasal lesions. By combining these techniques, the location and severity of the nasal epithelial injury were correlated with epithelial type, nasal airway geometry, and local biochemical and molecular changes on an individual animal basis. These correlations are critical for accurate predictive modeling of exposure-dose-response relationships in the nasal airways, and subsequent extrapolation of nasal findings in animals to humans for developing risk assessment.

  16. Application of Geophysical Techniques to Identify and Map the Benthic Habitat and Sub- bottom Sediments of Delaware Bay

    NASA Astrophysics Data System (ADS)

    Madsen, J.; Wilson, B.; Carter, D.

    2006-12-01

    The Coastal Program of the Delaware Department of Natural Resources and Environmental Control is engaged in a bottom and sub-bottom imaging project to identify and map the benthic habitat and sub-bottom sediments of the Delaware Bay. The project, now in the third year of its five-year implementation, integrates the use of three types of acoustical systems: RoxAnn seabed classification, chirp sub-bottom profiling, and multi-beam bathymetric mapping. Verification of the acoustic data with bottom and sub-bottom sediments is performed through the collection of grab and core samples and underwater video images. All this information is being compiled into a GIS database to enable coastal decision makers to effectively manage this estuarine resource. The RoxAnn system measures bottom hardness and roughness along a ship's track, and when correlated with analyses of ground-truth grab samples and video, can be used to classify bottom sediments. Regional maps of sediment type based on the RoxAnn classification are constructed by interpolating between track lines. This project has identified the spatial extent and relative density of oyster shell on the bay bottom, while also determining the composition of regional surrounding sediments. Sub-bottom profiles, collected using an Edgetech X-STAR chirp sonar system, are being used to map potential, and past, offshore sand borrow sites that can be/have been used for beach replenishment and to study the deposition and/or erosion of sediments and the evolution of the Delaware Estuary over the past 10,000 years. In locations along Delaware Bay where beach replenishment is necessary, the chirp data has been integrated with additional information from sediment cores to develop maps of suitable sand deposits based upon location, thickness, overburden, and grain size. The sub-bottom data has been used to map the Holocene/Pre-Holocene boundary and to constrain the paleo-environmental setting of the recent transgressive sea-level sequence in the

  17. Covariation of Color and Luminance Facilitate Object Individuation in Infancy

    ERIC Educational Resources Information Center

    Woods, Rebecca J.; Wilcox, Teresa

    2010-01-01

    The ability to individuate objects is one of our most fundamental cognitive capacities. Recent research has revealed that when objects vary in color or luminance alone, infants fail to individuate those objects until 11.5 months. However, color and luminance frequently covary in the natural environment, thus providing a more salient and reliable…

  18. Effects of luminance and spatial noise on interferometric contrast sensitivity

    NASA Astrophysics Data System (ADS)

    Coletta, Nancy J.; Sharma, Vineeta

    1995-10-01

    Optical properties of the eye contribute to the reduced visibility of spatial patterns at low luminance. To study the limits of spatial vision when optical factors are minimized, we measured contrast-sensitivity functions (CSF's) for 543.5-nm laser interference fringes imaged directly on the retina. Measurements were made in the fovea at four luminance levels, ranging from 0.3 to 300 photopic trolands (Td). At each luminance the fraction of coherent light in the stimulus pattern was varied to assess the masking effects of laser speckle, which is visible as spatial noise in fields of coherent light. Compared with published CSF's obtained under natural viewing conditions, interferometric CSF's were similar in height but broader, with the range of visibility being extended to higher spatial frequencies. The masking effects of speckle were greatest at the highest luminance and were negligible at the lowest luminance. For low coherent fractions, contrast sensitivity improved over the entire luminance range at a rate consistent with a square-root law; with purely coherent light, sensitivity tended to level off at approximately 30 Td because of speckle masking. The results indicate that the optical quality of the eye reduces the spatial bandwidth of vision even at luminances near the foveal threshold. The change in interference fringe visibility with luminance is consistent with noise-limited behavior, and the masking
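
    The square-root behaviour reported here matches the de Vries-Rose regime of photon-noise-limited detection, in which contrast sensitivity S grows with the square root of the luminance L (equivalently, threshold contrast falls as the inverse square root):

        S(L) \propto \sqrt{L} \quad\Longleftrightarrow\quad C_{\mathrm{thresh}}(L) \propto L^{-1/2}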

  19. Identifying the occurrence of lightning and transient luminous events by nadir spectrophotometric observation

    NASA Astrophysics Data System (ADS)

    Adachi, Toru; Sato, Mitsuteru; Ushio, Tomoo; Yamazaki, Atsushi; Suzuki, Makoto; Kikuchi, Masayuki; Takahashi, Yukihiro; Inan, Umran S.; Linscott, Ivan; Hobara, Yasuhide; Frey, Harald U.; Mende, Stephen B.; Chen, Alfred B.; Hsu, Rue-Ron; Kusunoki, Kenichi

    2016-07-01

    We propose a new technique to identify the occurrence of lightning and transient luminous events (TLEs) using multicolor photometric data obtained by spaceborne nadir measurements. We estimate the spectral characteristics of lightning and TLEs by converting the optical data obtained by the ISUAL limb experiment to the GLIMS nadir geometry. We find that the estimated spectral shapes of TLE-accompanied lightning are clearly different from those of pure lightning. The obtained results show that (1) the intensity of FUV signals and (2) the ratio of 337 nm to red (609-753 nm) spectral irradiance are useful for identifying the occurrence of TLEs. The occurrence probabilities of TLEs are 10%, 40%, and 80% for lightning events with 337/red spectral irradiance ratios of 0.95, 2.95, and 14.79, respectively. By using the 60% criterion of the 337/red ratio and the existence of FUV emissions, we classify the 1039 GLIMS-observed lightning events into 828 pure lightning and 211 TLE-accompanied lightning events. Since the GLIMS trigger level is adjusted to observe extremely bright events, the occurrence probability of TLEs obtained here most probably reflects the characteristics of energetic lightning. The estimated global map is consistent with previously determined distributions: the highest activities of lightning and TLEs are found over the North/South American continents, the African continent, and Asian maritime regions. While the absolute occurrence numbers of pure lightning and TLE-accompanied lightning are found to maximize in the equatorial region, the occurrence probability of TLEs possibly increases somewhat in the mid-latitude region. Since the occurrence probabilities of TLEs are higher over the ocean than over land, it is likely that the GLIMS-observed TLEs are due primarily to elves, which tend to occur more frequently over the ocean.
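
    A minimal Python sketch of the two-criterion event classification described above; the numerical ratio threshold below is an illustrative placeholder for the value at which the estimated TLE occurrence probability reaches roughly 60%, not the actual GLIMS figure.

        def classify_event(fuv_detected, irr_337, irr_red, ratio_threshold=6.0):
            """Flag a nadir-observed lightning event as TLE-accompanied or pure lightning.

            ratio_threshold is a hypothetical stand-in for the 337/red irradiance
            ratio corresponding to a ~60% TLE occurrence probability.
            """
            if irr_red <= 0:
                return "unclassified"
            if fuv_detected and (irr_337 / irr_red) >= ratio_threshold:
                return "TLE-accompanied lightning"
            return "pure lightning"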

  20. (-)-Reboxetine inhibits muscle nicotinic acetylcholine receptors by interacting with luminal and non-luminal sites.

    PubMed

    Arias, Hugo R; Ortells, Marcelo O; Feuerbach, Dominik

    2013-11-01

    The interaction of (-)-reboxetine, a non-tricyclic norepinephrine selective reuptake inhibitor, with muscle-type nicotinic acetylcholine receptors (AChRs) in different conformational states was studied by functional and structural approaches. The results established that (-)-reboxetine: (a) inhibits (±)-epibatidine-induced Ca(2+) influx in human (h) muscle embryonic (hα1β1γδ) and adult (hα1β1εδ) AChRs in a non-competitive manner and with potencies IC50=3.86±0.49 and 1.92±0.48 μM, respectively, (b) binds to the [(3)H]TCP site with ~13-fold higher affinity when the Torpedo AChR is in the desensitized state compared to the resting state, (c) enhances [(3)H]cytisine binding to the resting but activatable Torpedo AChR but not to the desensitized AChR, suggesting desensitizing properties, (d) overlaps the PCP luminal site located between rings 6' and 13' in the Torpedo but not human muscle AChRs. In silico mutation results indicate that ring 9' is the minimum structural component for (-)-reboxetine binding, and (e) interacts with non-luminal sites located within the transmembrane segments of the Torpedo AChR γ subunit, and at the α1/ε transmembrane interface of the adult muscle AChR. In conclusion, (-)-reboxetine non-competitively inhibits muscle AChRs by binding to the TCP luminal site and by inducing receptor desensitization (maybe by interacting with non-luminal sites), a mechanism that is shared by tricyclic antidepressants. PMID:23917086

  1. Enhancement of Luminous Efficacy by Random Patterning of Phosphor Matrix

    NASA Astrophysics Data System (ADS)

    Fellows, Natalie; Masui, Hisashi; Diana, Frederic; DenBaars, Steven P.; Nakamura, Shuji

    We have demonstrated the ability to increase the luminous flux and luminous efficacy of white light-emitting diodes (LEDs) by randomly patterning the surface of the yellow phosphor matrix. The phosphor was moved away from the LED die by placing it on top of a silicone optic and then roughening the surface of the phosphor/resin mixture. It was found that the roughening increases the luminous flux and efficacy by 10% over the smooth, non-patterned phosphor mixture. The roughened sample’s operating voltage, luminous flux, luminous efficacy, CCT, color coordinates, and CRI were 3.2 V, 7.4 lm, 115.6 lm/W, 4244 K, (0.388, 0.448), and 61 at 20 mA, CW, and room temperature operation. A brief presentation on phosphor scattering is introduced to help explain the effect of the roughening.
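
    The quoted efficacy is simply the luminous flux divided by the electrical input power at the stated operating point:

        \eta_v = \frac{\Phi_v}{V I} = \frac{7.4\ \mathrm{lm}}{3.2\ \mathrm{V} \times 0.020\ \mathrm{A}} \approx 115.6\ \mathrm{lm/W}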

  3. Perceptions of Pre-Service Social Sciences Teachers Regarding the Concept of "Geography" by Mind Mapping Technique

    ERIC Educational Resources Information Center

    Ozturk Demirbas, Cagri

    2013-01-01

    The objective of this study is to present the perceptions of preservice social sciences teachers regarding the concept of geography. The study group consists of 46 preservice social sciences teachers studying at Ahi Evran University. The data were collected in December 2010. Mind maps were used as data collection tools…

  4. EVALUATION OF TEMPORAL VARIATIONS IN HYDRAULIC CAPTURE DUE TO CHANGING FLOW PATTERNS USING MAPPING AND MODELING TECHNIQUES

    SciTech Connect

    SPILIOTOPOULOS AA; SWANSON LC; SHANNON R; TONKIN MJ

    2011-04-07

    Robust performance evaluation represents one of the most challenging aspects of groundwater pump-and-treat (P&T) remedy implementation. In most cases, the primary goal of the P&T system is hydraulic containment, and ultimately recovery, of contaminants to protect downgradient receptors. Estimating the extent of hydraulic containment is particularly challenging under changing flow patterns due to variable pumping, boundaries and/or other conditions. We present a systematic approach to estimate hydraulic containment using multiple lines of evidence based on (a) water-level mapping and (b) groundwater modeling. Capture Frequency Maps (CFMs) are developed by particle tracking on water-level maps developed for each available water-level data set using universal kriging. In a similar manner, Capture Efficiency Maps (CEMs) are developed by particle tracking on water levels calculated using a transient groundwater flow model: tracking is undertaken independently for each stress period using a very low effective porosity, depicting the 'instantaneous' fate of each particle in each stress period. Although conceptually similar, the two methods differ in their underlying assumptions and their limitations: their use together identifies areas where containment may be reliable (i.e., where the methods are in agreement) and where containment is uncertain (typically, where the methods disagree). A field-scale example is presented to illustrate these concepts.
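
    A simplified Python sketch of the capture-frequency idea (not the authors' code): a particle is stepped down the hydraulic gradient on each interpolated water-level grid, and the fraction of water-level realizations in which it reaches an extraction well approximates the capture frequency at its starting location. Grid spacing, step length, well coordinates and the head arrays are hypothetical.

        import numpy as np

        def particle_captured(head, start, wells, dx=10.0, step=5.0, n_steps=2000, capture_radius=15.0):
            """Advect one particle down-gradient across a 2-D head grid; positions are (row_m, col_m)."""
            gy, gx = np.gradient(head, dx)                  # head gradient along rows and columns
            pos = np.array(start, dtype=float)
            ny, nx = head.shape
            for _ in range(n_steps):
                i = int(np.clip(round(pos[0] / dx), 0, ny - 1))
                j = int(np.clip(round(pos[1] / dx), 0, nx - 1))
                grad = np.array([gy[i, j], gx[i, j]])
                if np.linalg.norm(grad) < 1e-12:
                    return False                            # stagnation point, particle stops
                pos -= step * grad / np.linalg.norm(grad)   # groundwater flows down-gradient
                if any(np.linalg.norm(pos - np.asarray(w)) < capture_radius for w in wells):
                    return True                             # particle reaches an extraction well
            return False

        def capture_frequency(head_maps, start, wells):
            """Fraction of kriged water-level maps whose flow field delivers the particle to a well."""
            return sum(particle_captured(h, start, wells) for h in head_maps) / len(head_maps)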

  5. Referential Mapping: A Technique for Attaching Functional Significance to the Innovative Utterances of an African Grey Parrot (Psittascus Erithacus).

    ERIC Educational Resources Information Center

    Pepperberg, Irene M.

    1990-01-01

    Spontaneous combinations and phonological variations of the vocalizations of an African Grey parrot were treated as if they were intentional requests or comments. The success of these "referential mapping" procedures in attaching functional significance to the parrot's vocalizations may have implications for intervention programs for humans with…

  6. Making Large Class Basic Histology Lectures More Interactive: The Use of Draw-Along Mapping Techniques and Associated Educational Activities

    ERIC Educational Resources Information Center

    Kotzé, Sanet Henriët; Mole, Calvin Gerald

    2015-01-01

    At Stellenbosch University, South Africa, basic histology is taught to a combination class of almost 400 first-year medical, physiotherapy, and dietetic students. Many students often find the amount of work in basic histology lectures overwhelming and consequently lose interest. The aim was to determine if a draw-along mapping activity would…

  7. Location and mapping of hydrologic regimes and thermal waters in Hawai'i: the use of electrical geophysical techniques

    USGS Publications Warehouse

    Kauahikaua, Jim

    1990-01-01

    Useful information on several different hydrologic regimes can be obtained using a combination of electrical geophysical surveys. Fresh-water lenses and laterally-confined aquifers can be located and mapped. However, more studies are required to evaluate the utility of geophysical methods for assessing perched aquifers.

  8. Can Selforganizing Maps Accurately Predict Photometric Redshifts?

    NASA Technical Reports Server (NTRS)

    Way, Michael J.; Klose, Christian

    2012-01-01

    We present an unsupervised machine-learning approach that can be employed for estimating photometric redshifts. The proposed method is based on a vector quantization called the self-organizing-map (SOM) approach. A variety of photometrically derived input values were utilized from the Sloan Digital Sky Survey's main galaxy sample, luminous red galaxy, and quasar samples, along with the PHAT0 data set from the Photo-z Accuracy Testing project. Regression results obtained with this new approach were evaluated in terms of root-mean-square error (RMSE) to estimate the accuracy of the photometric redshift estimates. The results demonstrate competitive RMSE and outlier percentages when compared with several other popular approaches, such as artificial neural networks and Gaussian process regression. SOM RMSE results (using δz = z_phot - z_spec) are 0.023 for the main galaxy sample, 0.027 for the luminous red galaxy sample, 0.418 for quasars, and 0.022 for PHAT0 synthetic data. The results demonstrate that there are nonunique solutions for estimating SOM RMSEs. Further research is needed in order to find more robust estimation techniques using SOMs, but the results herein are a positive indication of their capabilities when compared with other well-known methods.
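
    A minimal, self-contained Python sketch of the SOM regression idea (not the authors' implementation): train a small map on photometric inputs, assign each cell the mean spectroscopic redshift of its training members, predict by best-matching unit, and score with the RMSE of δz. Array names are placeholders.

        import numpy as np

        def train_som(data, grid=(12, 12), iters=20000, lr0=0.5, sigma0=3.0, seed=0):
            """Train a rectangular SOM with a Gaussian neighbourhood and exponential decay."""
            rng = np.random.default_rng(seed)
            h, w = grid
            weights = rng.normal(size=(h, w, data.shape[1]))
            cells = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
            for t in range(iters):
                x = data[rng.integers(len(data))]
                bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), (h, w))
                lr = lr0 * np.exp(-t / iters)
                sigma = max(sigma0 * np.exp(-t / iters), 0.5)
                dist2 = ((cells - np.array(bmu)) ** 2).sum(-1)
                weights += lr * np.exp(-dist2 / (2 * sigma**2))[..., None] * (x - weights)
            return weights

        def best_matching_unit(weights, x):
            return np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), weights.shape[:2])

        def som_photoz(train_phot, train_zspec, test_phot):
            weights = train_som(train_phot)
            z_sum = np.zeros(weights.shape[:2])
            n_hit = np.zeros(weights.shape[:2])
            for x, z in zip(train_phot, train_zspec):
                i, j = best_matching_unit(weights, x)
                z_sum[i, j] += z
                n_hit[i, j] += 1
            cell_z = np.where(n_hit > 0, z_sum / np.maximum(n_hit, 1), train_zspec.mean())
            return np.array([cell_z[best_matching_unit(weights, x)] for x in test_phot])

        # rmse = np.sqrt(np.mean((som_photoz(phot_train, z_train, phot_test) - z_test) ** 2))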

  9. The Luminous Convolution Model-The light side of dark matter

    NASA Astrophysics Data System (ADS)

    Cisneros, Sophia; Oblath, Noah; Formaggio, Joe; Goedecke, George; Chester, David; Ott, Richard; Ashley, Aaron; Rodriguez, Adrianna

    2014-03-01

    We present a heuristic model for predicting the rotation curves of spiral galaxies. The Luminous Convolution Model (LCM) utilizes Lorentz-type transformations of very small changes in the photon's frequencies from curved space-times to construct a dynamic mass model of galaxies. These frequency changes are derived using the exact solution to the exterior Kerr wave equation, as opposed to a linearized treatment. The LCM Lorentz-type transformations map between the emitter and the receiver rotating galactic frames, and then to the associated flat frames in each galaxy where the photons are emitted and received. This treatment necessarily rests upon estimates of the luminous matter in both the emitter and the receiver galaxies. The LCM is tested on a sample of 22 randomly chosen galaxies, represented in 33 different data sets. LCM fits are compared to the Navarro, Frenk & White (NFW) Dark Matter Model and to the Modified Newtonian Dynamics (MOND) model when possible. The high degree of sensitivity of the LCM to the initial assumption of the luminous mass-to-light ratio (M/L) of a given galaxy is demonstrated. We demonstrate that the LCM is successful across a wide range of spiral galaxies for predicting the observed rotation curves. This work was carried out through the generous support of the MIT Dr. Martin Luther King Jr. Fellowship program.

  10. Mapping the intracellular distribution of carbon nanotubes after targeted delivery to carcinoma cells using confocal Raman imaging as a label-free technique

    NASA Astrophysics Data System (ADS)

    Lamprecht, C.; Gierlinger, N.; Heister, E.; Unterauer, B.; Plochberger, B.; Brameshuber, M.; Hinterdorfer, P.; Hild, S.; Ebner, A.

    2012-04-01

    The uptake of carbon nanotubes (CNTs) by mammalian cells and their distribution within cells has been widely studied in recent years due to their increasing use for biomedical purposes. The two main imaging techniques used are confocal fluorescence microscopy and transmission electron microscopy (TEM). The former, however, requires labeling of the CNTs with fluorescent dyes, while the latter is a work-intensive technique that is unsuitable for in situ bio-imaging. Raman spectroscopy, on the other hand, presents a direct, straightforward and label-free alternative. Confocal Raman microscopy can be used to image the CNTs inside cells, exploiting the strong Raman signal connected to different vibrational modes of the nanotubes. In addition, cellular components, such as the endoplasmic reticulum and the nucleus, can be mapped. We first validate our method by showing that accurate results can be obtained only when the CNTs’ G band is used for intracellular mapping, as mapping of the radial breathing mode (RBM) reveals only a small fraction of the CNTs. We then take a closer look at the exact localization of the nanotubes inside cells after folate receptor-mediated endocytosis and show that, after 8-10 h incubation, the majority of CNTs are localized around the nucleus. In summary, Raman imaging has enormous potential for imaging CNTs inside cells, which is yet to be fully realized.
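
    A rough Python sketch of building an intracellular CNT map from a confocal Raman hyperspectral cube by summing intensity over the G-band window (around 1590 cm^-1) after subtracting a linear baseline; the band limits, baseline handling and array names are illustrative assumptions rather than the authors' processing pipeline.

        import numpy as np

        def g_band_map(cube, wavenumbers, band=(1550.0, 1620.0)):
            """cube: (ny, nx, n_channels) Raman intensities; wavenumbers: (n_channels,) in cm^-1, ascending."""
            lo, hi = np.searchsorted(wavenumbers, band)
            seg = cube[:, :, lo:hi].astype(float)
            # linear baseline drawn between the first and last channel of the window
            ramp = np.linspace(0.0, 1.0, hi - lo)
            baseline = seg[:, :, [0]] + ramp * (seg[:, :, -1] - seg[:, :, 0])[..., None]
            return np.clip(seg - baseline, 0.0, None).sum(axis=-1)   # bright pixels indicate CNT accumulation

        # g_map = g_band_map(raman_cube, wavenumber_axis)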

  11. An evaluation of machine processing techniques of ERTS-1 data for user applications. [urban land use and soil association mapping in Indiana

    NASA Technical Reports Server (NTRS)

    Landgrebe, D.

    1974-01-01

    A broad study is described to evaluate a set of machine analysis and processing techniques applied to ERTS-1 data. Based on the analysis results in urban land use analysis and soil association mapping, together with previously reported results in general earth surface feature identification and crop species classification, a profile of the general applicability of this procedure is beginning to emerge. In the hands of a user who knows well the information needed from the data and is also familiar with the region to be analyzed, it appears that significantly useful information can be generated by these methods. When supported by preprocessing techniques such as the geometric correction and temporal registration capabilities, final products readily usable by user agencies appear possible. In parallel with application, through further research, there is much potential for further development of these techniques, both with regard to providing higher performance and in new situations not yet studied.

  12. The integrated analyses of digital field mapping techniques and traditional field methods: implications from the Burdur-Fethiye Shear Zone, SW Turkey as a case-study

    NASA Astrophysics Data System (ADS)

    Elitez, İrem; Yaltırak, Cenk; Zabcı, Cengiz; Şahin, Murat

    2015-04-01

    Precise geological mapping is one of the most important issues in geological studies. Documenting the spatial distribution of geological bodies and their contacts plays a crucial role in interpreting the tectonic evolution of any region. Although traditional field techniques are still accepted as the most fundamental tools in the construction of geological maps, we suggest that the integration of digital technologies into the classical methods significantly increases the resolution and the quality of such products. We follow these steps in integrating the digital data with the traditional field observations. First, we create the digital elevation model (DEM) of the region of interest by interpolating the digital contours of 1:25,000 scale topographic maps to a ground pixel resolution of 10 m. The non-commercial Google Earth satellite imagery and geological maps of previous studies are draped over the interpolated DEMs in the second stage. The integration of all spatial data is done by using the market-leading GIS software, ESRI ArcGIS. We make a preliminary interpretation of major structures as tectonic lineaments and stratigraphic contacts. These preliminary maps are checked and precisely coordinated during the field studies by using mobile tablets and/or phablets with GPS receivers. The same devices are also used in measuring and recording the geologic structures of the study region. Finally, all digitally collected measurements and observations are added to the GIS database and we finalise our geological map with all available information. We applied this integrated method to map the Burdur-Fethiye Shear Zone (BFSZ) in southwest Turkey. The BFSZ is an active sinistral 60-to-90 km-wide shear zone, which extends for about 300 km on land between Suhut-Cay in the northeast and Köyceğiz Lake-Kalkan in the southwest. Numerous studies suggest contradictory models not only about the evolution but also about the fault geometry of this
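
    A minimal Python sketch of the DEM-interpolation step described above (digitised contour vertices interpolated onto a regular 10 m grid) using SciPy; array names are placeholders, and a production workflow would normally use a dedicated topo-to-raster style interpolator rather than plain linear gridding.

        import numpy as np
        from scipy.interpolate import griddata

        def contours_to_dem(x, y, z, cell=10.0):
            """Interpolate scattered contour vertices (x, y, elevation) onto a regular grid."""
            xi = np.arange(x.min(), x.max() + cell, cell)
            yi = np.arange(y.min(), y.max() + cell, cell)
            gx, gy = np.meshgrid(xi, yi)
            dem = griddata((x, y), z, (gx, gy), method="linear")   # NaN outside the convex hull
            return gx, gy, dem

        # gx, gy, dem = contours_to_dem(cx, cy, cz)   # vertices digitised from 1:25,000 contour lines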

  13. Independence of color and luminance edges in natural scenes.

    PubMed

    Hansen, Thorsten; Gegenfurtner, Karl R

    2009-01-01

    Form vision is traditionally regarded as processing primarily achromatic information. Previous investigations into the statistics of color and luminance in natural scenes have claimed that luminance and chromatic edges are not independent of each other and that any chromatic edge most likely occurs together with a luminance edge of similar strength. Here we computed the joint statistics of luminance and chromatic edges in over 700 calibrated color images from natural scenes. We found that isoluminant edges exist in natural scenes and were not rarer than pure luminance edges. Most edges combined luminance and chromatic information but to varying degrees such that luminance and chromatic edges were statistically independent of each other. Independence increased along successive stages of visual processing from cones via postreceptoral color-opponent channels to edges. The results show that chromatic edge contrast is an independent source of information that can be linearly combined with other cues for the proper segmentation of objects in natural and artificial vision systems. Color vision may have evolved in response to the natural scene statistics to gain access to this independent information. PMID:19152717

  14. The Review of Nuclear Microscopy Techniques: An Approach for Nondestructive Trace Elemental Analysis and Mapping of Biological Materials

    PubMed Central

    Mulware, Stephen Juma

    2015-01-01

    The properties of many biological materials often depend on the spatial distribution and concentration of the trace elements present in a matrix. Scientists have over the years tried various techniques, including classical physical and chemical analytical techniques, each with its own level of accuracy. However, with the development of spatially sensitive submicron beams, the nuclear microprobe techniques using focused proton beams for the elemental analysis of biological materials have yielded significant success. In this paper, the basic principles of the commonly used microprobe techniques of STIM, RBS, and PIXE for trace elemental analysis are discussed. The details of sample preparation, detection, and data collection and analysis are discussed. Finally, an application of the techniques to the analysis of corn roots for elemental distribution and concentration is presented. PMID:26664356

  16. AI-based technique for tracking chains of discontinuous symbols and its application to the analysis of topographic maps

    NASA Astrophysics Data System (ADS)

    Mecocci, Alessandro; Lilla, Massimiliano

    1994-12-01

    Automatic digitization of topographic maps is a very important task nowadays. Among the different elements of a topographic map, discontinuous lines represent important information. Generally they are difficult to track because they show very large gaps and abrupt direction changes. In this paper an architecture that automates the digitization of discontinuous lines (dot-dot lines, dash-dot-dash lines, dash-asterisk lines, etc.) is presented. The tracking process must detect the elementary symbols and then concatenate these symbols into a significant chain that represents the line. The proposed architecture is composed of a common kernel, based on a suitable modification of the A* algorithm, that starts different auxiliary processes depending on the particular line to be tracked. Three auxiliary processes are considered: search strategy generation (SSG), which is responsible for the strategy used to scan the image pixels; low-level symbol detection (LSD), which decides if a certain image region around the pixel selected by the SSG is an elementary symbol; and cost evaluation (CE), which gives the quality of each symbol with respect to the global course of the line. The whole system has been tested on a 1:50,000 map furnished by the Istituto Geografico Militare Italiano (IGMI). The results were very good for different types of discontinuous lines. Over the whole map (i.e., about 80 Mbytes of digitized data), 95% of the elementary symbols of the lines have been correctly chained. The operator time required to correct misclassifications is a small part of the time needed to manually digitize the discontinuous lines.

  17. Meter-scale Urban Land Cover Mapping for EPA EnviroAtlas Using Machine Learning and OBIA Remote Sensing Techniques

    NASA Astrophysics Data System (ADS)

    Pilant, A. N.; Baynes, J.; Dannenberg, M.; Riegel, J.; Rudder, C.; Endres, K.

    2013-12-01

    US EPA EnviroAtlas is an online collection of tools and resources that provides geospatial data, maps, research, and analysis on the relationships between nature, people, health, and the economy (http://www.epa.gov/research/enviroatlas/index.htm). Using EnviroAtlas, you can see and explore information related to the benefits (e.g., ecosystem services) that humans receive from nature, including clean air, clean and plentiful water, natural hazard mitigation, biodiversity conservation, food, fuel, and materials, recreational opportunities, and cultural and aesthetic value. EPA developed several urban land cover maps at very high spatial resolution (one-meter pixel size) for a portion of EnviroAtlas devoted to urban studies. This urban mapping effort supported analysis of relations among land cover, human health and demographics at the US Census Block Group level. Supervised classification of 2010 USDA NAIP (National Agricultural Imagery Program) digital aerial photos produced eight-class land cover maps for several cities, including Durham, NC, Portland, ME, Tampa, FL, New Bedford, MA, Pittsburgh, PA, Portland, OR, and Milwaukee, WI. Semi-automated feature extraction methods were used to classify the NAIP imagery: genetic algorithms/machine learning, random forest, and object-based image analysis (OBIA). In this presentation we describe the image processing and fuzzy accuracy assessment methods used, and report on some sustainability and ecosystem service metrics computed using this land cover as input (e.g., carbon sequestration from USFS iTREE model; health and demographics in relation to road buffer forest width). We also discuss the land cover classification schema (a modified Anderson Level 1 after the National Land Cover Data (NLCD)), and offer some observations on lessons learned. [Figure: Meter-scale urban land cover in Portland, OR overlaid on NAIP aerial photo. Streets, buildings and individual trees are identifiable.]
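
    As a hedged illustration of one ingredient of the workflow above (pixel-based random forest classification of the four NAIP bands), the following Python sketch is not the EnviroAtlas production chain; the training samples, class codes and image array are placeholders.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import accuracy_score

        def classify_naip(samples, labels, image):
            """samples: (n, 4) NAIP R, G, B, NIR values; labels: (n,) land-cover codes; image: (rows, cols, 4)."""
            X_tr, X_te, y_tr, y_te = train_test_split(samples, labels, test_size=0.3, random_state=0)
            rf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0).fit(X_tr, y_tr)
            print("hold-out accuracy:", accuracy_score(y_te, rf.predict(X_te)))
            rows, cols, bands = image.shape
            return rf.predict(image.reshape(-1, bands)).reshape(rows, cols)   # per-pixel class map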

  18. Applications of the halo model to large scale structure measurements of the Luminous Red Galaxies

    NASA Astrophysics Data System (ADS)

    Reid, Beth Ann

    The power spectrum of density fluctuations in the evolved universe provides constraints on cosmological parameters that are complementary to cosmic microwave background and other astronomical probes. The Sloan Digital Sky Survey (SDSS) Luminous Red Galaxy (LRG) sample probes a volume of ~3 Gpc^3, and systematic errors in modeling the nonlinearities limit our ability to extract information on the shape of the linear power spectrum. There are three main effects that distort the observed power spectrum from the linear power spectrum: nonlinear gravitational evolution, redshift space distortions, and a nonlinear relation between the galaxy density field and the underlying matter density field. In this thesis we introduce a new method to mitigate the latter two distortions and rely on carefully tuned N-body simulations to model the first. In Chapter 2 we present the technique 'Counts-in-Cylinders' (CiC) and use it to measure the multiplicity function of groups of LRGs in SDSS. We use the Halo Occupation Distribution description of the galaxy-matter mapping and N-body simulations to connect this observation with constraints on the distribution of LRGs in dark matter halos. In Chapter 3 we study the effects of resolution on statistics relating to both the large and small scale distributions and motions of matter and dark matter halos. We combine these results to produce a large set of high quality mock LRG catalogs that reproduce the higher order statistics in the density field probed by the CiC technique. Using these catalogs we present a detailed analysis of the method used in Tegmark et al. (2006) to estimate the LRG power spectrum, and find that the large nonlinear correction necessary for their analysis is degenerate with changes in the linear spectrum we wish to constrain. We show that the CiC group-finding method in Chapter 2 can be used to reconstruct the underlying halo density field. The power spectrum of this field has only percent-level deviations from
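
    A simplified Python sketch of a counts-in-cylinders measurement (counting companions within a projected radius and a line-of-sight half-length in comoving coordinates); the radius and length defaults are arbitrary placeholders rather than the parameters adopted in the thesis.

        import numpy as np
        from scipy.spatial import cKDTree

        def counts_in_cylinders(xy, z_los, r_perp=0.8, l_par=20.0):
            """xy: (n, 2) transverse comoving positions; z_los: (n,) line-of-sight positions (same units)."""
            tree = cKDTree(xy)                     # 2-D tree on the transverse coordinates
            counts = np.zeros(len(xy), dtype=int)
            for i, neighbours in enumerate(tree.query_ball_point(xy, r_perp)):
                counts[i] = sum(1 for j in neighbours
                                if j != i and abs(z_los[j] - z_los[i]) < l_par)
            return counts

        # multiplicity_function = np.bincount(counts_in_cylinders(xy_lrg, los_lrg))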

  19. A Second Luminous Blue Variable in the Quintuplet Cluster

    NASA Astrophysics Data System (ADS)

    Geballe, T. R.; Najarro, F.; Figer, D. F.

    2000-02-01

    H- and K-band moderate-resolution and 4 μm high-resolution spectra have been obtained for FMM 362, a bright star in the Quintuplet Cluster near the Galactic center. The spectral features in these bands closely match those of the Pistol Star, a luminous blue variable and one of the most luminous stars known. The new spectra and previously obtained photometry imply a very high luminosity for FMM 362, L ≥ 10^6 L_⊙, and a temperature of 10,000-13,000 K. Based on its luminosity, temperature, photometric variability, and similarities to the Pistol Star, we conclude that FMM 362 is a luminous blue variable.

  20. Evaluation of expanded uncertainties in luminous intensity and illuminance calibrations

    SciTech Connect

    Sametoglu, Ferhat

    2008-11-01

    Detector-based calibration methods and expressions for the calculation of photometric uncertainties related to uncertainties in the calibration of the luminous intensity of a light source, the illuminance responsivity of a photometer head, and the calibration factors of an illuminance meter are discussed. These methods permit luminous intensity calibrations of incandescent light sources, illuminance responsivity calibrations of photometer heads, and calibration-factor determinations for illuminance meters to be carried out with relative expanded uncertainties (with a level of confidence of 95.45%) of 0.4%, 0.4%, and 0.6%, respectively.
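
    For reference, an expanded uncertainty quoted at a 95.45% level of confidence corresponds to a coverage factor k = 2 applied to the combined standard uncertainty (assuming an approximately normal distribution), so a relative expanded uncertainty of 0.4% implies a relative combined standard uncertainty of about 0.2%:

        U = k\,u_c,\qquad k = 2 \;\Rightarrow\; \frac{U}{y} = 0.4\% \;\Leftrightarrow\; \frac{u_c}{y} \approx 0.2\%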

  1. Ultra Luminous X-ray Sources

    NASA Astrophysics Data System (ADS)

    Webb, N. A.; Godet, O.

    2015-12-01

    Ultra Luminous X-ray sources (ULXs) are X-ray bright objects that are not coincident with the central nucleus of the host galaxy and which have luminosities that exceed the Eddington limit for a stellar mass black hole, typically L > 3 × 10^{39} erg s^{-1} for a black hole of 20 M_⊙. The nature of these objects is still unclear. However, it is possible that these sources do not form a single class of objects. Many ULXs may house stellar mass black holes accreting at super-Eddington rates, even if the physical mechanism for such high accretion rates is still not understood. Some ULXs may contain intermediate mass black holes (~1 × 10^{2} - ~1 × 10^{5} M_⊙). These elusive black holes are thought to be the building blocks of the more massive supermassive black holes, observed at the centre of many galaxies. Other ULXs may not be accreting black holes at all. Recent evidence for the different types of ULXs is presented in this paper.
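
    The quoted threshold follows from the Eddington limit for spherically accreting ionized hydrogen; for a 20 M_⊙ black hole it is of the same order as the 3 × 10^{39} erg s^{-1} figure used above:

        L_{\mathrm{Edd}} = \frac{4\pi G M m_p c}{\sigma_T} \simeq 1.26\times10^{38}\left(\frac{M}{M_\odot}\right)\ \mathrm{erg\,s^{-1}}
        \;\Rightarrow\; L_{\mathrm{Edd}}(20\,M_\odot) \approx 2.5\times10^{39}\ \mathrm{erg\,s^{-1}}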

  2. Chandra Observations of 12 Luminous Red Quasars

    SciTech Connect

    Urrutia, T; Lacy, M; Gregg, M D; Becker, R H

    2005-03-11

    The authors present results of a study of 12 dust-reddened quasars with 0.4 < z < 2.65 and reddenings in the range 0.15 < E(B-V) < 1.7. They obtained ACIS-S X-ray spectra of these quasars, estimated the column densities towards them, and hence obtained the gas:dust ratios in the material obscuring the quasars. They detect all but one of the red quasars in the X-rays. Even though there is no obvious correlation between the X-ray determined column densities of the sources and their optical color or reddening, all of the sources show absorbed X-ray spectra. When they correct the luminosity for absorption, the sources can be placed among luminous quasars; these objects therefore belong to the group of high-luminosity analogues of the sources contributing to the X-ray background seen in deep X-ray observations. Such sources are also found in serendipitous shallow X-ray surveys. There is a hint that the mean spectral slope of the red quasars is higher than that of normal, unobscured quasars, which could be an indication of higher accretion rates and/or an evolutionary effect. They investigate the number density of these sources compared to type 2 AGN based on the X-ray background and estimate how many moderate-luminosity red quasars may be found in deep X-ray fields.

  3. Marine transducing bacteriophage attacking a luminous bacterium.

    PubMed

    Keynan, A; Nealson, K; Sideropoulos, H; Hastings, J W

    1974-08-01

    The isolation and partial characterization of a marine bacteriophage attacking a strain of luminous bacteria is described, including some physical, biological, and genetic properties. It is a DNA phage of density 1.52 with a long flexible tail and an apparently icosahedral head. With respect to stability in suspension, it has a rather specific requirement for the sodium ion in high concentration; it is further stabilized by the addition of calcium and magnesium ions. These same ions are likewise all required for both good plating efficiency and plaque uniformity. Although it goes through a typical lytic growth cycle (about 45 min), with a burst size of 100, and no stable lysogens have been isolated, it is nevertheless a transducing phage specifically for the tryptophan region, transducing several, but not all, independently isolated Trp(-) auxotrophs to prototrophy. No other auxotrophs of a variety of amino acids were transduced by this phage to prototrophy. Phage infection does not change the normal expression of the luminescent system, and light remains at near normal levels until cell lysis occurs. PMID:16789143

  4. Extremely Luminous Far-infrared Sources (ELFS)

    NASA Technical Reports Server (NTRS)

    Harwit, Martin; Houck, James R.; Soifer, B. Thomas; Palumbo, Giorgio G. C.

    1987-01-01

    The Infrared Astronomical Satellite (IRAS) survey uncovered a class of Extremely Luminous Far Infrared Sources (ELFS), exhibiting luminosities up to and occasionally exceeding 10^12 L_⊙. Arguments are presented to show that sources with luminosities L ≥ 3 × 10^10 L_⊙ may represent gas-rich galaxies in collision. The more conventional explanation of these sources as sites of extremely active star formation fails to explain the observed low optical luminosities of ELFS as well as their high infrared excess. In contrast, a collisional model heats gas to a temperature of ~10^6 K, where cooling takes place in the extreme ultraviolet. The UV is absorbed by dust and converted into far infrared radiation (FIR) without generation of appreciable optical luminosity. Gas recombination as it cools generates a Lyman alpha photon only once for every two extreme ultraviolet (~50 eV) photons emitted by the 10^6 K gas. That accounts for the high infrared excess. Finally, the model also is able to explain the observed luminosity distribution of ELFS as well as many other traits.

  5. Galaxy Clustering Around Nearby Luminous Quasars

    NASA Technical Reports Server (NTRS)

    Fisher, Karl B.; Bahcall, John N.; Kirhakos, Sofia; Schneider, Donald P.

    1996-01-01

    We examine the clustering of galaxies around a sample of 20 luminous low-redshift (z ≲ 0.30) quasars observed with the Wide Field Camera-2 on the Hubble Space Telescope (HST). The HST resolution makes possible galaxy identification brighter than V = 24.5 and as close as 1 min or 2 min to the quasar. We find a significant enhancement of galaxies within a projected separation of ≲ 100 h^-1 kpc of the quasars. If we model the QSO/galaxy correlation function as a power law with a slope given by the galaxy/galaxy correlation function, we find that the ratio of the QSO/galaxy to galaxy/galaxy correlation functions is 3.8 ± 0.8. The galaxy counts within r < 15 h^-1 kpc of the quasars are too high for the density profile to have an appreciable core radius (≳ 100 h^-1 kpc). Our results reinforce the idea that low-redshift quasars are located preferentially in groups of 10-20 galaxies rather than in rich clusters. We see no significant difference in the clustering amplitudes derived from radio-loud and radio-quiet subsamples.

  6. Mapping erosion-sensitive areas after wildfires using fieldwork, remote sensing, and geographic information systems techniques on a regional scale

    NASA Astrophysics Data System (ADS)

    Pérez-Cabello, F.; de la Riva Fernández, J.; Montorio Llovería, R.; García-Martín, A.

    2006-12-01

    Alterations in the hydrological cycle following wildfire due to the loss of ground cover vegetation and changes in soil properties have been documented in many studies. Nevertheless, the rapid process of vegetation recovery reduces such negative effects. Vegetation cover before fire, fire severity, and geophysical properties are important factors that control spatial discontinuities involved in the vegetation-covering process. The objective of this study was to estimate the probability of high erosion in order to map erosion-sensitive areas after fire. The analysis was carried out in different plant communities burnt by summer wildfires in the pre-Pyrenean area (Spain). Three-year Landsat Thematic Mapper (TM) images have been used for mapping wildfire areas and severity levels. Conversion to spectral reflectance has been applied for radiometric correction by normalizing topographic and atmospheric effects. Likewise, other physical variables have also been incorporated into the geographic information system (GIS): vegetation types, parent material, illumination, slope, aspect, and precipitation. The dependent variable has been characterized by means of fieldwork and a photointerpretation process based on high-resolution digital aerial orthophotographs taken 11-12 years after the fire. Different logistic regression models have been used for mapping the probability of erosion. Results indicate that prefire normalized difference vegetation index values and aspect are the most important variables for estimating erosion-sensitive areas after fire (Nagelkerke R^2 = 0.66; Kappa values = 0.65). Finally, the use of nonparametric models with environmental digital information based on GIS can facilitate the management of burnt areas.
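
    A schematic Python version of the logistic-regression mapping step (pre-fire NDVI, aspect and related GIS layers as predictors of the high-erosion class, evaluated with a Kappa statistic); the feature arrays and evaluation split are placeholders for the study's actual data handling.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import cohen_kappa_score

        def erosion_probability_model(X, y):
            """X: (n, k) predictors such as pre-fire NDVI, aspect, slope; y: (n,) 1 = high erosion observed."""
            model = LogisticRegression(max_iter=1000).fit(X, y)
            kappa = cohen_kappa_score(y, model.predict(X))     # in practice, score on a hold-out sample
            return model, kappa

        # prob_map = model.predict_proba(grid_features)[:, 1].reshape(grid_shape)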

  7. A comparison of six analytical disease mapping techniques as applied to West Nile Virus in the coterminous United States

    PubMed Central

    Griffith, Daniel A

    2005-01-01

    West Nile Virus has quickly become a serious problem in the United States (US). Its extremely rapid diffusion throughout the country argues for a better understanding of its geographic dimensions. The percentages of deaths among reported human cases in both 2003 and 2004, for the 48 coterminous US states, are analyzed with a range of spatial statistical models, seeking to furnish a fuller appreciation of the variety of models available to researchers interested in analytical disease mapping. Comparative results indicate that no single spatial statistical model specification furnishes a preferred description of these data, although normal approximations appear to furnish some questionable implications. Findings also suggest several possible future research topics. PMID:16076391

  8. High-luminance LEDs replace incandescent lamps in new applications

    NASA Astrophysics Data System (ADS)

    Evans, David L.

    1997-04-01

    The advent of high-luminance AlInGaP and InGaN LED technologies has prompted the use of LED devices in new applications formerly illuminated by incandescent lamps. The luminous efficiencies of these new LED technologies equal or exceed those attainable with incandescent sources, with reliability factors that far exceed those of incandescent sources. The need for a highly efficient, dependable, and cost-effective replacement for incandescent lamps is being fulfilled with high-luminance LED lamps. This paper briefly describes some of the new applications incorporating high-luminance LED lamps: traffic signals and roadway signs for traffic management, automotive exterior lighting, active matrix and full-color displays for commercial advertising, commercial aircraft panel lighting, and military aircraft NVG-compatible lighting.

  9. Single luminal epithelial progenitors can generate prostate organoids in culture

    PubMed Central

    Chua, Chee Wai; Shibata, Maho; Lei, Ming; Toivanen, Roxanne; Barlow, LaMont J.; Bergren, Sarah K.; Badani, Ketan K.; McKiernan, James M.; Benson, Mitchell C.; Hibshoosh, Hanina; Shen, Michael M.

    2014-01-01

    The intrinsic ability to display self-organizing morphogenetic properties in ex vivo culture may represent a general property of tissue stem cells. Here we show that single luminal stem/progenitor cells can generate prostate organoids in a three-dimensional culture system in the absence of stroma. Organoids generated from CARNs (castration-resistant Nkx3.1-expressing cells) or normal prostate epithelium exhibit tissue architecture containing luminal and basal cells, undergo long-term expansion in culture, and display functional androgen receptor signaling. Lineage-tracing demonstrates that luminal cells are favored for organoid formation, and generate basal cells in culture. Furthermore, tumor organoids can initiate from CARNs after oncogenic transformation, and from mouse models of prostate cancer, and can facilitate analyses of drug response. Finally, we provide evidence supporting the feasibility of organoid studies of human prostate tissue. Our studies underscore the progenitor properties of luminal cells, and identify in vitro approaches for studying prostate biology. PMID:25241035

  10. Cue Combination of Conflicting Color and Luminance Edges.

    PubMed

    Sharman, Rebecca J; McGraw, Paul V; Peirce, Jonathan W

    2015-12-01

    Abrupt changes in the color or luminance of a visual image potentially indicate object boundaries. Here, we consider how these cues to the visual "edge" location are combined when they conflict. We measured the extent to which localization of a compound edge can be predicted from a simple maximum likelihood estimation model using the reliability of chromatic (L-M) and luminance signals alone. Maximum likelihood estimation accurately predicted the pattern of results across a range of contrasts. Predictions consistently overestimated the relative influence of the luminance cue; although L-M is often considered a poor cue for localization, it was used more than expected. This need not indicate that the visual system is suboptimal but that its priors about which cue is more useful are not flat. This may be because, although strong changes in chromaticity typically represent object boundaries, changes in luminance can be caused by either a boundary or a shadow. PMID:27551364
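
    The maximum likelihood estimation prediction tested here weights each cue by its reliability (inverse variance), giving the combined edge-location estimate and its variance as:

        \hat{x} = w_{\mathrm{lum}} x_{\mathrm{lum}} + w_{\mathrm{chrom}} x_{\mathrm{chrom}},\qquad
        w_i = \frac{1/\sigma_i^2}{1/\sigma_{\mathrm{lum}}^2 + 1/\sigma_{\mathrm{chrom}}^2},\qquad
        \sigma_{\hat{x}}^2 = \Big(\frac{1}{\sigma_{\mathrm{lum}}^2} + \frac{1}{\sigma_{\mathrm{chrom}}^2}\Big)^{-1}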

  12. Fluorescence lifetime imaging of optically levitated aerosol: a technique to quantitatively map the viscosity of suspended aerosol particles.

    PubMed

    Fitzgerald, C; Hosny, N A; Tong, H; Seville, P C; Gallimore, P J; Davidson, N M; Athanasiadis, A; Botchway, S W; Ward, A D; Kalberer, M; Kuimova, M K; Pope, F D

    2016-08-21

    We describe a technique to measure the viscosity of stably levitated single micron-sized aerosol particles. Particle levitation allows the aerosol phase to be probed in the absence of potentially artefact-causing surfaces. To achieve this feat, we combined two laser-based techniques: optical trapping for aerosol particle levitation, using a counter-propagating laser beam configuration, and fluorescence lifetime imaging microscopy (FLIM) of molecular rotors for the measurement of viscosity within the particle. Unlike other techniques used to measure aerosol particle viscosity, this allows for the non-destructive probing of viscosity of aerosol particles without interference from surfaces. The well-described viscosity of sucrose aerosol, under a range of relative humidity conditions, is used to validate the technique. Furthermore, we investigate a pharmaceutically-relevant mixture of sodium chloride and salbutamol sulphate under humidities representative of in vivo drug inhalation. Finally, we provide a methodology for incorporating molecular rotors into already levitated particles, thereby making the FLIM/optical trapping technique applicable to real-world aerosol systems, such as atmospheric aerosols and those generated by pharmaceutical inhalers. PMID:27430158

  13. Luminal flow alteration in presence of the stent

    NASA Astrophysics Data System (ADS)

    Bernad, E. S.; Hudrea, C. I.; Bernad, S. I.; Totorean, A. F.; Bosioc, A. I.

    2015-12-01

    Luminally protruding struts alter blood flow, creating areas of recirculation, separation, and stagnation. The impact of flow alterations around struts varies as the strut geometrical parameters change. We quantified the influence of the luminal flow alterations due to the presence of the stent struts by performing two-dimensional numerical simulations. Idealized computer models can facilitate an understanding of in-stent restenosis that is impossible to achieve in vivo.

  14. Luminance controlled pupil size affects Landolt C task performance. Revision

    SciTech Connect

    Berman, S.M.; Fein, G.; Jewett, D.L.; Ashford, F.

    1993-02-01

    Subjects judged the orientation of a 2 min. gap Landolt C located at a distance of 2.4 m. The stimuli were presented in central vision on a CRT, at low to medium contrast. The effects of varying the spectrum and luminance of surround lighting were assessed on both pupil size (measured using infrared pupillometry during task performance) and task accuracy. The task display was protected from the surround lighting, so that its luminance and contrast could be varied independently of the changes in the surround lighting. Indirect surround illumination was provided by either two illuminants of very different scotopic spectral content but with the same photopic luminance (Experiments 1 and 3), or by using the same illuminant at two different luminance levels (Experiment 2). In Experiment 3, the effect of changing surround spectrum was compared to the effect of varying task background luminance between 12 cd/m^2 and 73 cd/m^2. In all experiments, scotopically enhanced surround lighting produced pupil areas which were reduced by almost 50% in comparison with surround lighting with relatively less scotopic luminance. Concomitantly there was improvement in Landolt C task performance with the scotopically enhanced surround lighting at all contrast and luminance levels. In these experiments, smaller pupil sizes were associated with significantly better visual-task performance in spite of lower task retinal illuminance when compared to the condition with larger pupils. These results suggest that changes in surround spectrum can compensate for the effect on task performance of a reduction in task luminance and support the hypothesis that lighting energy savings could accrue in the workplace by shifting lamp spectra to obtain greater scotopic efficacy.

  15. Luminance controlled pupil size affects Landolt C task performance

    SciTech Connect

    Berman, S.M.; Fein, G.; Jewett, D.L.; Ashford, F.

    1993-02-01

    Subjects judged the orientation of a 2 min. gap Landolt C located at a distance of 2.4 m. The stimuli were presented in central vision on a CRT, at low to medium contrast. The effects of varying the spectrum and luminance of surround lighting were assessed on both pupil size (measured using infrared pupillometry during task performance) and task accuracy. The task display was protected from the surround lighting, so that its luminance and contrast could be varied independently of the changes in the surround lighting. Indirect surround illumination was provided by either two illuminants of very different scotopic spectral content but with the same photopic luminance (Experiments 1 and 3), or by using the same illuminant at two different luminance levels (Experiment 2). In Experiment 3, the effect of changing surround spectrum was compared to the effect of varying task background luminance between 12 cd/m[sup 2] and 73 cd/m[sup 2]. In all experiments, scotopically enhanced surround lighting produced pupil areas which were reduced by almost 50% in comparison with surround lighting with relatively less scotopic luminance. Concomitantly there was improvement in Landolt C task performance with the scotopically enhanced surround lighting at all contrast and luminance levels. In these experiments, smaller pupil sizes were associated with significantly better visual-task performance in spite of lower task retinal illuminance when compared to the condition with larger pupils. These results suggest that changes in surround spectrum can compensate for the effect on task performance of a reduction in task luminance and supports the hypothesis that lighting energy savings could accrue in the workplace by shifting lamp spectra to obtain greater scotopic efficacy.

  16. WARM MOLECULAR GAS IN LUMINOUS INFRARED GALAXIES

    SciTech Connect

    Lu, N.; Zhao, Y.; Xu, C. K.; Mazzarella, J. M.; Howell, J.; Appleton, P.; Lord, S.; Schulz, B.; Gao, Y.; Armus, L.; Díaz-Santos, T.; Surace, J.; Isaak, K. G.; Petric, A. O.; Charmandaris, V.; Evans, A. S.; Inami, H.; Iwasawa, K.; Leech, J.; Sanders, D. B.; and others

    2014-06-01

    We present our initial results on the CO rotational spectral line energy distribution (SLED) of the J to J–1 transitions from J = 4 up to 13 from Herschel SPIRE spectroscopic observations of 65 luminous infrared galaxies (LIRGs) in the Great Observatories All-Sky LIRG Survey. The observed SLEDs change on average from one peaking at J ≤ 4 to a broad distribution peaking around J ∼ 6 to 7 as the IRAS 60-to-100 μm color, C(60/100), increases. However, the ratios of a CO line luminosity to the total infrared luminosity, L {sub IR}, show the smallest variation for J around 6 or 7. This suggests that, for most LIRGs, ongoing star formation (SF) is also responsible for a warm gas component that emits CO lines primarily in the mid-J regime (5 ≲ J ≲ 10). As a result, the logarithmic ratios of the CO line luminosity summed over CO (5–4), (6–5), (7–6), (8–7) and (10–9) transitions to L {sub IR}, log R {sub midCO}, remain largely independent of C(60/100), and show a mean value of –4.13 (≡log R{sub midCO}{sup SF}) and a sample standard deviation of only 0.10 for the SF-dominated galaxies. Including additional galaxies from the literature, we show, albeit with a small number of cases, the possibility that galaxies, which bear powerful interstellar shocks unrelated to the current SF, and galaxies, in which an energetic active galactic nucleus contributes significantly to the bolometric luminosity, have their R {sub midCO} higher and lower than R{sub midCO}{sup SF}, respectively.
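
    For reference, the mid-J ratio used above can be written out explicitly from the quantities named in the abstract; the -4.13 and 0.10 are the quoted sample mean and standard deviation for the SF-dominated galaxies, not formal uncertainties:

      \log R_{\mathrm{midCO}} \equiv \log_{10}\!\left[\frac{L_{\mathrm{CO}(5\text{-}4)}+L_{\mathrm{CO}(6\text{-}5)}+L_{\mathrm{CO}(7\text{-}6)}+L_{\mathrm{CO}(8\text{-}7)}+L_{\mathrm{CO}(10\text{-}9)}}{L_{\mathrm{IR}}}\right],\qquad \log R^{\mathrm{SF}}_{\mathrm{midCO}} = -4.13 \pm 0.10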

  17. FUSE Observations of Luminous Cool Stars

    NASA Astrophysics Data System (ADS)

    Dupree, A. K.; Young, P. R.; Ake, T. B.

    2000-12-01

    Luminous cool stars can address the evolution of magnetic activity and the dynamics of stellar winds and mass loss. The region of yellow supergiants in the HR diagram contains stars of intermediate mass both with coronas and those possessing a hot outer atmosphere in the presence of a strong wind (the ``hybrid'' stars). These hybrid objects hold particular significance for evolution studies because they represent the physically important connection between solar-like stars (with coronas and fast winds of low mass-loss rate) and the cool supergiant stars (Alpha Ori-like) with cool outer atmospheres and massive winds. The Far Ultraviolet Spectroscopic Explorer (FUSE) measured the chromospheric and transition region emissions of the bright G2 Ib supergiant Beta Draconis (HD 159181) on 9 May 2000. Two exposures through the large aperture totaled 7695 s and were obtained in all channels covering the region λλ 912-1180. Emission from chromospheric and transition region ions (C III, O VI, Si III, S IV, S VI) is detected along with a number of low ion stages. Profiles of strong lines are asymmetric suggesting the presence of a wind. A short exposure (3260 s) of Alpha Aquarii (HD 209750), a hybrid supergiant also of spectral type G2 Ib, was obtained June 29, 2000. Dynamics of the atmospheres can be inferred from line profiles. The atmospheric temperature distribution, densities, and scale sizes can be evaluated from line fluxes to characterize the differences between a coronal star and a hybrid supergiant. FUSE is a NASA Origins mission operated by The Johns Hopkins University. Funding for this research is provided through NASA Contract NAS5-32985.

  18. Warm Molecular Gas in Luminous Infrared Galaxies

    NASA Astrophysics Data System (ADS)

    Lu, N.; Zhao, Y.; Xu, C. K.; Gao, Y.; Armus, L.; Mazzarella, J. M.; Isaak, K. G.; Petric, A. O.; Charmandaris, V.; Díaz-Santos, T.; Evans, A. S.; Howell, J.; Appleton, P.; Inami, H.; Iwasawa, K.; Leech, J.; Lord, S.; Sanders, D. B.; Schulz, B.; Surace, J.; van der Werf, P. P.

    2014-06-01

    We present our initial results on the CO rotational spectral line energy distribution (SLED) of the J to J-1 transitions from J = 4 up to 13 from Herschel SPIRE spectroscopic observations of 65 luminous infrared galaxies (LIRGs) in the Great Observatories All-Sky LIRG Survey. The observed SLEDs change on average from one peaking at J <= 4 to a broad distribution peaking around J ~ 6 to 7 as the IRAS 60-to-100 μm color, C(60/100), increases. However, the ratios of a CO line luminosity to the total infrared luminosity, L IR, show the smallest variation for J around 6 or 7. This suggests that, for most LIRGs, ongoing star formation (SF) is also responsible for a warm gas component that emits CO lines primarily in the mid-J regime (5 <~ J <~ 10). As a result, the logarithmic ratios of the CO line luminosity summed over CO (5-4), (6-5), (7-6), (8-7) and (10-9) transitions to L IR, log R midCO, remain largely independent of C(60/100), and show a mean value of -4.13 (≡ log R^SF_midCO) and a sample standard deviation of only 0.10 for the SF-dominated galaxies. Including additional galaxies from the literature, we show, albeit with a small number of cases, the possibility that galaxies, which bear powerful interstellar shocks unrelated to the current SF, and galaxies, in which an energetic active galactic nucleus contributes significantly to the bolometric luminosity, have their R midCO higher and lower than R^SF_midCO, respectively. Based on Herschel observations. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.

  19. Luminous red galaxies in hierarchical cosmologies

    NASA Astrophysics Data System (ADS)

    Almeida, C.; Baugh, C. M.; Wake, D. A.; Lacey, C. G.; Benson, A. J.; Bower, R. G.; Pimbblet, K.

    2008-06-01

    Luminous red galaxies (LRGs) are much rarer and more massive than L* galaxies. Coupled with their extreme colours, LRGs therefore provide a demanding testing ground for the physics of massive galaxy formation. We present the first self-consistent predictions for the abundance and properties of LRGs in hierarchical structure formation models. We test two published models which use quite different mechanisms to suppress the formation of massive galaxies: the Bower et al. model which invokes `active galactic nuclei (AGN) feedback' to prevent gas from cooling in massive haloes and the Baugh et al. model which relies upon a `superwind' to eject gas before it is turned into stars. Without adjusting any parameters, the Bower et al. model gives an excellent match to the observed luminosity function of LRGs in the Sloan Digital Sky Survey (with a median redshift of z = 0.24) and to their clustering; the Baugh et al. model is less successful in these respects. Both models fail to match the observed abundance of LRGs at z = 0.5 to better than a factor of ~2. In the models, LRGs are typically bulge-dominated systems with stellar masses of ~2 × 10^11 h^-1 M_solar and velocity dispersions of σ ~ 250 km s^-1. Around half of the stellar mass in the model LRGs is already formed by z ~ 2.2 and is assembled into one main progenitor by z ~ 1.5; on average, only 25 per cent of the mass of the main progenitor is added after z ~ 1. LRGs are predicted to be found in a wide range of halo masses, a conclusion which relies on properly taking into account the scatter in the formation histories of haloes. Remarkably, we find that the correlation function of LRGs is predicted to be a power law down to small pair separations, in excellent agreement with observational estimates. Neither the Bower et al. nor the Baugh et al. model is able to reproduce the observed radii of LRGs.

  20. Evolution of local luminous compact blue galaxies

    NASA Astrophysics Data System (ADS)

    Rabidoux, Katherine; Pisano, Daniel J.

    2015-01-01

    Luminous compact blue galaxies (LCBGs) are a type of very blue, very compact star-forming galaxy that was common at z~1 but is rare in the local universe. While it is clear from this discrepancy that LCBGs must be a rapidly-evolving class of galaxy, it is not clear what type(s) of galaxy they become. Fortunately, since they are bright and nearby, the rare examples of z~0 LCBGs are easily studied across a large range of wavelengths. We have conducted a study of z~0 analogs to the z~1 LCBGs to investigate their galaxy-wide internal properties in order to determine what is triggering their current episode of star formation, for how long the star formation can continue, and what the galaxies may become once their star formation rates decrease from current levels. We have taken resolved H I observations of nine LCBGs and unresolved radio continuum observations of 35 LCBGs and combined this data with archival broad-band data to probe their global properties. We conclude that LCBGs are rotationally-supported, star-forming disk galaxies that, while they may be forming small central bulges or bars, are highly unlikely to evolve into dwarf elliptical, dwarf spheroidal, or elliptical galaxies on their own due to their masses and rotation velocities. LCBGs will likely fade to be spiral galaxies with lower surface brightnesses once their current episodes of star formation conclude. In addition, we have modeled the SEDs of the LCBGs in our sample to determine whether LCBGs' star formation is ramping up or winding down, and for how much longer their current active phase of star formation will last. We have begun to put together a picture of the current evolutionary stage of this class of galaxies, and have better constrained their future evolutionary paths.

  1. Friction force microscopy: a simple technique for identifying graphene on rough substrates and mapping the orientation of graphene grains on copper

    NASA Astrophysics Data System (ADS)

    Marsden, A. J.; Phillips, M.; Wilson, N. R.

    2013-06-01

    At just a single atom thick, graphene is challenging to distinguish from its substrate using conventional techniques. In this paper we show that friction force microscopy (FFM) is a simple and quick technique for identifying graphene on a range of samples, from growth substrates to rough insulators. We show that FFM is particularly effective for characterizing graphene grown on copper where it can correlate the graphene growth to the three-dimensional surface topography. Atomic lattice stick-slip friction is readily resolved and enables the crystallographic orientation of the graphene to be mapped nondestructively, reproducibly and at high resolution. We expect FFM to be similarly effective for studying graphene growth on other metal/locally crystalline substrates, including SiC, and for studying growth of other two-dimensional materials such as molybdenum disulfide and hexagonal boron nitride.

  2. Application of hyperspectral techniques to multispectral data: spectral mixture analysis (SMA) in mapping of emergent macrophytes in a water-hyacinth-infested area

    NASA Astrophysics Data System (ADS)

    Idawo, Cuthbert; Jajah, Munzer; Laneve, Giovanni

    2004-02-01

    Water hyacinth (Eichhornia crassipes (Mart.) Solms) is an invasive aquatic macrophyte that has infested Lake Victoria, East Africa, since the late 1980s. It has been associated with major negative economic and ecological impacts on this important East African water resource. Remote sensing technology has significant potential for mapping this fast-growing floating weed in an area that is largely inaccessible for field measurements. Our study site is the Winam Gulf, on the Kenyan part of the lake, which has had the highest reported infestation in recent years. The paper describes a study to evaluate the ability of ETM+ multispectral imagery to map water hyacinth and associated macrophytes in the hyacinth-infested Winam Gulf. By applying hyperspectral techniques to multispectral data, a spectral mixture analysis was undertaken using image-derived endmembers. The study also evaluated an alternative way of acquiring emergent macrophytic endmembers in cases where hyperspectral data, spectrometric measurements and spectral libraries are lacking. The results demonstrate that, whereas it is possible to discriminate and map the different spectral constituents, a spectral library of the endmembers under investigation would be required for positive identification, especially for macrophytes that are spectrally similar, fast growing, spatially varying in concentration (density), and non-static in nature.
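
    A minimal sketch of the linear spectral mixture analysis step described above, written in Python/NumPy. The array names (pixels, endmembers) and the crude non-negativity handling are illustrative assumptions, not the authors' implementation; in practice a constrained unmixing solver would normally be used.

      import numpy as np

      def unmix(pixels, endmembers):
          # pixels: (n_pixels, n_bands) ETM+ reflectances; endmembers: (n_endmembers, n_bands)
          # image-derived spectra (e.g. water hyacinth, other macrophytes, open water).
          fractions, *_ = np.linalg.lstsq(endmembers.T, pixels.T, rcond=None)
          fractions = np.clip(fractions.T, 0.0, None)          # crude non-negativity constraint
          sums = fractions.sum(axis=1, keepdims=True)
          return fractions / np.where(sums == 0, 1.0, sums)    # fractions sum to one per pixel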

  3. The Luminous Convolution Model for Galaxy Rotation Curves

    NASA Astrophysics Data System (ADS)

    Rubin, Shanon; Mucci, Maria; Sophia Cisneros Collaboration; Kennard Chng Collaboration; Meagan Crowley Collaboration

    2016-03-01

    The LCM takes as input only the observed luminous matter profile from galaxies, and allows us to confirm these observed data by considering frame-dependent effects from the luminous mass profile of the Milky Way. The LCM is useful when looking at galaxies that have similar total enclosed mass but varying distributions. For example, variations in luminous matter profiles from a diffuse galaxy correlate with the LCM's five different Milky Way models equally well, but LCM fits for a centrally condensed galaxy distinguish between Milky Way models. In this presentation, we show how the rotation curve data of such galaxies can be used to constrain Milky Way luminous mass modeling, with the physical characteristics of each galaxy used to interpret the fitting. Current investigations are presented showing how the convolved parameters of Keplerian predictions with rotation curve observations can be extracted with respect to the crossing location of the relative curvature versus the assumed luminous mass profiles from photometry. Since there currently exists no direct constraint on photometric estimates of the luminous mass in these systems, the LCM gives the first constraint, based on the orthogonal measurement of Doppler-shifted spectra from characteristic emitters.

  4. Dynamics of backlight luminance for using smartphone in dark environment

    NASA Astrophysics Data System (ADS)

    Na, Nooree; Jang, Jiho; Suk, Hyeon-Jeong

    2014-02-01

    This study developed a dynamic backlight luminance that gradually changes over time for comfortable use of a smartphone display in a dark environment. The study was carried out in two stages. In the first stage, a user test was conducted to identify the optimal luminance by assessing facial squint level, subjective glare evaluation, eye blink frequency and users' subjective preferences. Based on the results of the user test, the dynamics of the backlight luminance were designed around two levels: the optimal level for initial viewing, to avoid sudden glare or fatigue to users' eyes, and the optimal level for constant viewing, which is comfortable but also bright enough for continuous reading of the displayed material. The luminance for initial viewing starts from 10 cd/m2 and gradually increases to 40 cd/m2 over 20 seconds for users' visual comfort during constant viewing. In the second stage, a validation test of the dynamic backlight luminance was conducted to verify the effectiveness of the developed dynamics, involving users' subjective preferences, eye blink frequency, and brainwave analysis using electroencephalography (EEG), confirming that the proposed dynamic backlighting enhances users' visual comfort and visual cognition, particularly when using smartphones in a dark environment.
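
    A small illustrative sketch of the luminance schedule described above. The two levels (10 and 40 cd/m2) and the 20 s figure come from the abstract; interpreting the 20 s as the duration of the transition and using a linear profile are assumptions, since the exact shape of the ramp is not specified here.

      def backlight_luminance(t_seconds, start=10.0, end=40.0, ramp=20.0):
          # Backlight luminance (cd/m2) as a function of time since the display was switched on.
          if t_seconds >= ramp:
              return end
          return start + (end - start) * (t_seconds / ramp)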

  5. Mapping of Moho and Moho Transition Zone (MTZ) in Samail ophiolites of Sultanate of Oman using remote sensing technique

    NASA Astrophysics Data System (ADS)

    Rajendran, Sankaran; Nasir, Sobhi

    2015-08-01

    The Moho and Moho Transition Zone (MTZ) of the Samail ophiolite of the Sultanate of Oman are characteristic of potential occurrences of chromite deposits, hydrothermal mineralization and serpentinization. Mapping the Moho and MTZ, and discriminating them within the mafic and ultramafic rocks of the ophiolite sequence, is therefore significant and important. The present study describes the remote sensing spectral characters of minerals and rocks of the Moho and MTZ and discriminates the Moho of Wadi Al Abyad of the Nakhl massif, and of the Wadi Nidab and Wadi Abda regions of the Sumail massif, in the visible and near infrared (VNIR) and short wavelength infrared (SWIR) spectral regions using low-cost multispectral satellite data of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER). Results of this study show that the red-green-blue (RGB) color composite images of ASTER spectral bands 8, 3 and 1, and 8, 7 and 4 are able to delineate the Moho and MTZ of the regions. The RGB images of ASTER band ratios (4/8, 4/1, 3/2 * 4/3 and (1 + 3)/2, (4 + 6)/5, (7 + 9)/8) are capable of discriminating the mantle material (ultramafic harzburgites) from the crustal rocks (mafic gabbros). The occurrence of such rock types is demonstrated by detection of their minerals using the Spectral Angle Mapper (SAM) image processing method. The presence of the Moho and MTZ, and the associated lithologies, was verified in the field at Wadi Al Abyad, Wadi Nidab, Wadi Abda, Wadi Tayin, Wadi Fizh and several locations of the Nakhl regions of the Samail ophiolites. The laboratory study shows the occurrence of typical minerals, namely olivine, orthopyroxene and clinopyroxene in the harzburgite, and minerals such as plagioclase, clinopyroxene, hornblende, orthopyroxene and olivine in the layered gabbro. The spectral properties of the rocks were studied using a Portable Infrared Mineral Analyzer (PIMA) spectrometer and the occurrences of the minerals were confirmed by X-ray diffraction (XRD) analyses. This study demonstrates the sensor
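
    A hedged sketch of the band-ratio composites listed above, assuming b is a dict of co-registered ASTER band arrays (b[4] is the band-4 reflectance, and so on) with non-zero values; the per-channel min-max stretch is an illustrative choice, not the authors' exact processing.

      import numpy as np

      def stretch(c):
          # Linear stretch of one channel to [0, 1] for display.
          return (c - c.min()) / (c.max() - c.min())

      def ratio_composite_a(b):
          # RGB = 4/8, 4/1, (3/2) * (4/3)
          return np.dstack([stretch(b[4] / b[8]),
                            stretch(b[4] / b[1]),
                            stretch((b[3] / b[2]) * (b[4] / b[3]))])

      def ratio_composite_b(b):
          # RGB = (1 + 3)/2, (4 + 6)/5, (7 + 9)/8
          return np.dstack([stretch((b[1] + b[3]) / b[2]),
                            stretch((b[4] + b[6]) / b[5]),
                            stretch((b[7] + b[9]) / b[8])])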

  6. Development of techniques for producing static strata maps and development of photointerpretive methods based on multitemporal LANDSAT data

    NASA Technical Reports Server (NTRS)

    Colwell, R. N. (Principal Investigator); Hay, C. M.; Thomas, R. W.; Benson, A. S.

    1977-01-01

    Progress in the evaluation of the static stratification procedure and the development of alternative photointerpretive techniques to the present LACIE procedure for the identification of training fields is reported. Statistically significant signature controlling variables were defined for use in refining the stratification procedure. A subset of the 1973-74 Kansas LACIE segments for wheat was analyzed.

  7. Development of techniques for producing static strata maps and development of photointerpretation methods based on multitemporal LANDSAT data

    NASA Technical Reports Server (NTRS)

    Colwell, R. N. (Principal Investigator); Hay, C. M.; Thomas, R. W.; Benson, A. S.

    1976-01-01

    The progress of research conducted in support of the Large Area Crop Inventory Experiment (LACIE) is documented. Specific tasks include (1) evaluation of the static stratification procedure and modification of that procedure if warranted, and (2) the development of alternative photointerpretative techniques to the present LACIE procedures for the identification and selection of training fields (areas).

  8. Evaluation of SLAR and thematic mapper MSS data for forest cover mapping using computer-aided analysis techniques. [south carolina

    NASA Technical Reports Server (NTRS)

    Hoffer, R. M. (Principal Investigator)

    1979-01-01

    A literature review on radar and spectral band information was conducted and a NC-130 mission was flown carrying the NS001 scanner system which basically corresponds to the channel configuration of the proposed thematic mapper. Aerial photography and other reference data were obtained for the study site, an area approximately 290 sq miles in north central South Carolina. A cover type map was prepared and methods were devised for reformatting and geometrically correcting MSS CRT data. Arrangements were made to obtain LANDSAT data for dates approximating the NC-130 mission. Because of the waveband employed to obtain SEASAT radar data, it was decided to determine if X-band (2.40 cm to 3.75 cm wavelength) imagery is available.

  9. Techniques of low technology sampling of air pollution by metals: a comparison of concentrations and map patterns.

    PubMed Central

    Lloyd, O L; Gailey, F A

    1987-01-01

    During a 17 month survey of air pollution in the town of Armadale, central Scotland, the concentrations of some metals (iron, manganese, zinc, lead, copper, chrome, nickel, cadmium, and cobalt) were measured in seven types of low technology sampler--four indigenous and three transplanted--at 47 sites. The geographical patterns of the concentrations in the samplers were compared on two types of map. For most metals, sites with high concentrations were present close to the foundry and also in the north of the town. The differences between the patterns of pollution shown by the various types of sampler probably reflected differing mechanisms for collection and different affinities for various sizes and types of metal particle. PMID:3620375

  10. Evaluation of SLAR and thematic mapper MSS data for forest cover mapping using computer-aided analysis techniques

    NASA Technical Reports Server (NTRS)

    Hoffer, R. M. (Principal Investigator)

    1979-01-01

    The spatial characteristics of the data were evaluated. A program was developed to reduce the spatial distortions resulting from variable viewing distance, and geometrically adjusted data sets were generated. The potential need for some level of radiometric adjustment was evidenced by an along track band of high reflectance across different cover types in the Varian imagery. A multiple regression analysis was employed to explore the viewing angle effect on measured reflectance. Areas in the data set which appeared to have no across track stratification of cover type were identified. A program was developed which computed the average reflectance by column for each channel, over all of the scan lines in the designated areas. A regression analysis was then run using the first, second, and third degree polynomials, for each channel. An atmospheric effect as a component of the viewing angle source of variance is discussed. Cover type maps were completed and training and test field selection was initiated.

  11. Initial Verification of the GPS-LEO Occultation Technique of Mapping the Atmosphere with the GPS-MET Experiment

    NASA Technical Reports Server (NTRS)

    Hajj, G. A.; Kursinski, E. R.; Bertiger, W. I.; Leroy, S. S.; Romans, L. J.; Schofield, J. T.

    1995-01-01

    The radio occultation technique was first used to observe Earth's atmosphere in April 1995 when a high performance Global Positioning System (GPS) receiver was placed into a low-Earth orbit. When a signal from the GPS travels through the ionosphere and the neutral atmosphere, and is received by a low-Earth orbiter (LEO) satellite, occultation data is generated. How that data is analyzed is presented.

  12. Development of techniques for producing static strata maps and development of photointerpretive methods based on multitemporal LANDSAT data

    NASA Technical Reports Server (NTRS)

    Colwell, R. N. (Principal Investigator)

    1977-01-01

    The results and progress of work conducted in support of the Large Area Crop Inventory Experiment (LACIE) are documented. Research was conducted for two tasks. These tasks include: (1) evaluation of the UCB static stratification procedure and modification of that procedure if warranted; and (2) the development of alternative photointerpretive techniques to the present LACIE procedure for the identification and selection of training areas for machine-processing of LACIE segments.

  13. Morphological classification of local luminous infrared galaxies

    NASA Astrophysics Data System (ADS)

    Psychogyios, A.; Charmandaris, V.; Diaz-Santos, T.; Armus, L.; Haan, S.; Howell, J.; Le Floc'h, E.; Petty, S. M.; Evans, A. S.

    2016-06-01

    We present analysis of the morphological classification of 89 luminous infrared galaxies (LIRGs) from the Great Observatories All-sky LIRG Survey (GOALS) sample, using non-parametric coefficients, and compare their morphology as a function of wavelength. We rely on images that were obtained in the optical (B- and I-band) as well as in the infrared (H-band and 5.8 μm). Our classification is based on the calculation of the Gini and M20 (the second-order moment of the brightest 20 per cent of a galaxy's light) non-parametric coefficients, which we explore as a function of stellar mass (M⋆), infrared luminosity (LIR), and star formation rate (SFR). We investigate the relation between M20, the specific SFR (sSFR) and the dust temperature (Tdust) in our galaxy sample. We find that M20 is a better morphological tracer than Gini, as it allows us to distinguish double systems from isolated and post-merger LIRGs. The effectiveness of M20 as a morphological tracer increases with increasing wavelength, from the B to the H band. In fact, the multi-wavelength analysis allows us to identify a region in the Gini-M20 parameter space where ongoing mergers reside, regardless of the band used to calculate the coefficients. In particular, when the coefficients are measured in the H band, this region can be used to identify ongoing mergers with minimal contamination from LIRGs in other merger stages. We also find that, while the sSFR is positively correlated with M20 when measured in the mid-infrared, i.e. star-bursting galaxies show more compact emission, it is anti-correlated with the B-band-based M20. We interpret this as a spatial decoupling between obscured and unobscured star formation, whereby the ultraviolet/optical size of an LIRG experiencing an intense dust-enshrouded central starburst is larger than its mid-infrared size, since the contrast between the nuclear and the extended disk emission is smaller in the mid-infrared. This has important implications for high redshift surveys of dusty sources, where sizes of galaxies
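
    For readers unfamiliar with the two coefficients, the sketch below implements their standard non-parametric definitions (Gini of the sorted pixel fluxes, and the normalised second-order moment of the brightest 20 per cent of the flux). It illustrates the general method, not the authors' pipeline; in particular, the flux-weighted centroid used here is a simplification of the usual choice of the centre that minimises the total moment.

      import numpy as np

      def gini(flux):
          # flux: flat array of a galaxy's pixel fluxes (assumed background subtracted).
          f = np.sort(np.abs(flux.ravel()))
          n = f.size
          return ((2 * np.arange(1, n + 1) - n - 1) * f).sum() / (f.mean() * n * (n - 1))

      def m20(flux, x, y):
          # flux, x, y: flat arrays of pixel fluxes and their coordinates.
          f = flux.ravel()
          xc, yc = np.average(x, weights=f), np.average(y, weights=f)   # flux centroid (simplification)
          m_i = f * ((x - xc) ** 2 + (y - yc) ** 2)
          order = np.argsort(f)[::-1]                          # brightest pixels first
          brightest = np.cumsum(f[order]) <= 0.2 * f.sum()
          brightest[0] = True                                  # always keep the single brightest pixel
          return np.log10(m_i[order][brightest].sum() / m_i.sum())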

  14. An Interferometric Ka-band Synthetic Aperture Radar: A New Technique for Glacier and Ice-sheet Topography Mapping

    NASA Astrophysics Data System (ADS)

    Moller, D. K.; Aaron, K.; Gim, Y.; Heavey, B.; Hodges, R.; Nicolson, A.; Rengarajan, S.; Rignot, E.; Rogez, F.; Sadowy, G.; Simard, M.; Zawadzki, M.

    2006-12-01

    The estimation of the mass balance of ice sheets and glaciers on Earth is a problem of considerable scientific and societal importance. The Greenland and Antarctic ice sheets together hold enough ice to raise global sea level by 80 m. The annual exchange of mass on the ice sheets is equivalent to 8 mm/yr of sea level, so any fluctuation in that level of exchange is significant on the global scale. A key measurement for understanding, monitoring and forecasting these changes is ice-surface topography, for both ice-sheet and glacial regions. As such, NASA identified "ice topographic mapping instruments capable of providing precise elevation and detailed imagery data for measurements on glacial scales for detailed monitoring of ice sheet, and glacier changes" as a science priority for the most recent ESTO Instrument Incubator Program (IIP) opportunities. Funded under this opportunity is the technological development for a Ka-band (35 GHz) single-pass, digitally beamformed interferometric synthetic aperture radar (InSAR). Unique to this concept is the ability to map a significant swath regardless of cloud cover, with measurement accuracies comparable to lidar altimeters but with variable resolution appropriate to the differing scales of interest over ice sheets and glaciers. By diverging from the more traditional profiling measurements employed to date (i.e., radar altimeters and lidars), we are able to offer the potential to significantly advance the spatiotemporal observational capabilities for both ice sheets and glaciers. Dubbed the Glacier and Land Ice Surface Topography Interferometer (GLISTIN), the instrument and mission present several significant challenges. In particular, under the IIP program we are designing, building and demonstrating a large Ka-band antenna array with integrated digital receivers, utilizing digital beamforming to preserve both antenna gain and swath. These technology items will ultimately be integrated into a complete interferometric

  15. Salmonella Transiently Reside in Luminal Neutrophils in the Inflamed Gut

    PubMed Central

    Loetscher, Yvonne; Wieser, Andreas; Lengefeld, Jette; Kaiser, Patrick; Schubert, Sören; Heikenwalder, Mathias; Hardt, Wolf-Dietrich; Stecher, Bärbel

    2012-01-01

    Background: Enteric pathogens need to grow efficiently in the gut lumen in order to cause disease and ensure transmission. The interior of the gut forms a complex environment comprising the mucosal surface area and the inner gut lumen with epithelial cell debris and food particles. Recruitment of neutrophils to the intestinal lumen is a hallmark of non-typhoidal Salmonella enterica infections in humans. Here, we analyzed the interaction of gut luminal neutrophils with S. enterica serovar Typhimurium (S. Tm) in a mouse colitis model. Results: Upon S. Tmwt infection, neutrophils transmigrate across the mucosa into the intestinal lumen. We detected a majority of pathogens associated with luminal neutrophils 20 hours after infection. Neutrophils are viable and actively engulf S. Tm, as demonstrated by live microscopy. Using S. Tm mutant strains defective in tissue invasion we show that pathogens are mostly taken up in the gut lumen at the epithelial barrier by luminal neutrophils. In these luminal neutrophils, S. Tm induces expression of genes typically required for its intracellular lifestyle such as siderophore production iroBCDE and the Salmonella pathogenicity island 2 encoded type three secretion system (TTSS-2). This shows that S. Tm at least transiently survives and responds to engulfment by gut luminal neutrophils. Gentamicin protection experiments suggest that the life-span of luminal neutrophils is limited and that S. Tm is subsequently released into the gut lumen. This “fast cycling” through the intracellular compartment of gut luminal neutrophils would explain the high fraction of TTSS-2 and iroBCDE expressing intra- and extracellular bacteria in the lumen of the infected gut. Conclusion: In conclusion, live neutrophils recruited during acute S. Tm colitis engulf pathogens in the gut lumen and may thus actively engage in shaping the environment of pathogens and commensals in the inflamed gut. PMID:22493718

  16. Preference for luminance histogram regularities in natural scenes.

    PubMed

    Graham, Daniel; Schwarz, Bianca; Chatterjee, Anjan; Leder, Helmut

    2016-03-01

    Natural scene luminance distributions typically have positive skew, and for single objects, there is evidence that higher skew is a correlate (but not a guarantee) of glossiness. Skewness is also relevant to aesthetics: preference for glossy single objects (with high skew) has been shown even in infants, and skewness is a good predictor of fruit freshness. Given that primate vision appears to efficiently encode natural scene luminance variation, and given evidence that natural scene regularities may be a prerequisite for aesthetic perception in the spatial domain, here we ask whether humans in general prefer natural scenes with more positively skewed luminance distributions. If humans generally prefer images with the higher-order regularities typical of natural scenes and/or shiny objects, we would expect this to be the case. By manipulating luminance distribution skewness (holding mean and variance constant) for individual natural images, we show that in fact preference varies inversely with increasing positive skewness. This finding holds for: artistic landscape images and calibrated natural scenes; scenes with and without glossy surfaces; landscape scenes and close-up objects; and noise images with natural luminance histograms. Across conditions, humans prefer images with skew near zero over higher skew images, and they prefer skew lower than that of the unmodified scenes. These results suggest that humans prefer images with luminances that are distributed relatively evenly about the mean luminance, i.e., images with similar amounts of light and dark. We propose that our results reflect an efficient processing advantage of low-skew images over high-skew images, following evidence from prior brain imaging results. PMID:25872178
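
    For reference, the skewness manipulated above (with mean and variance held fixed) is the standard third standardised moment of the luminance distribution:

      \mathrm{skew}(L) = \frac{\mathbb{E}\!\left[(L-\mu)^{3}\right]}{\sigma^{3}}, \qquad \mu = \mathbb{E}[L],\; \sigma^{2} = \mathbb{E}\!\left[(L-\mu)^{2}\right]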

  17. LED light engine concept with ultra-high scalable luminance

    NASA Astrophysics Data System (ADS)

    Hoelen, Christoph; de Boer, Dick; Bruls, Dominique; van der Eyden, Joost; Koole, Rolf; Li, Yun; Mirsadeghi, Mo; Vanbroekhoven, Vincent; Van den Bergh, John-John; Van de Voorde, Patrick

    2016-03-01

    Although LEDs have been introduced successfully in many general lighting applications during the past decade, high brightness light source applications are still suffering from the limited luminance of LEDs. High power LEDs are generally limited in luminance to ca 100 Mnit (10^8 lm m^-2 sr^-1) or less, while dedicated devices for projection may achieve luminance values up to ca 300 Mnit with phosphor converted green. In particular for high luminous flux applications with limited étendue, like in front projection systems, only very modest luminous flux values in the beam can be achieved with LEDs compared to systems based on discharge lamps. In this paper we introduce a light engine concept based on a light converter rod pumped with blue LEDs that breaks through the étendue and brightness limits of LEDs, enabling LED light source luminance values that are more than 4 times higher than what can be achieved with LEDs so far. In LED front projection systems, green LEDs are the main limiting factor. With our green light emitting modules, peak luminance values well above 1.2 Gnit have been achieved, enabling doubling of the screen brightness of LED based DLP projection systems, and even more when this technology is applied to other colors as well. This light source concept, introduced as the ColorSpark High Lumen Density (HLD) LED technology, enables a breakthrough in the performance of LED-based light engines not only for projection, where >2700 ANSI lm was demonstrated, but for a wide variety of high brightness applications.
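
    The étendue limit mentioned above follows from a standard radiometric relation (general optics, not a result of this paper): for a source of uniform luminance L feeding an optical system of fixed étendue G, the luminous flux delivered into the beam is bounded by

      \Phi \le L\,G, \qquad G = n^{2}\!\iint \cos\theta \,\mathrm{d}A\,\mathrm{d}\Omega,

    so for a projection system whose étendue is set by the display panel and optics, raising the source luminance is the only way to raise the flux in the beam.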

  18. Evaluation of SLAR and thematic mapper MSS data for forest cover mapping using computer-aided analysis techniques

    NASA Technical Reports Server (NTRS)

    Hoffer, R. M. (Principal Investigator)

    1981-01-01

    Training and test data sets for CAM1S from NS-001 MSS data for two dates (geometrically adjusted to 30 meter resolution) were used to evaluate wavelength band. Two sets of tapes containing digitized HH and HV polarization data were obtained. Because the SAR data on the 9 track tapes contained no meaningful data, the 7 track tapes were copied onto 9 track tapes at LARS. The LARSYS programs were modified and a program was written to reformat the digitized SAR data into a LARSYS format. The radar imagery is being qualitatively interpreted. Results are to be used to identify possible cover types, to produce a classification map to aid in the numerical evaluation classification of radar data, and to develop an interpretation key for radar imagery. The four spatial resolution data sets were analyzed. A program was developed to reduce the spatial distortions resulting from variable viewing distance, and geometrically adjusted data sets were generated. A flowchart of steps taken to geometrically adjust a data set from the NS-001 scanner is presented.

  19. STEAM - Statistical Template Estimation for Abnormality Mapping: A personalized DTI analysis technique with applications to the screening of preterm infants.

    PubMed

    Booth, Brian G; Miller, Steven P; Brown, Colin J; Poskitt, Kenneth J; Chau, Vann; Grunau, Ruth E; Synnes, Anne R; Hamarneh, Ghassan

    2016-01-15

    We introduce the STEAM DTI analysis engine: a whole brain voxel-based analysis technique for the examination of diffusion tensor images (DTIs). Our STEAM analysis technique consists of two parts. First, we introduce a collection of statistical templates that represent the distribution of DTIs for a normative population. These templates include various diffusion measures from the full tensor, to fractional anisotropy, to 12 other tensor features. Second, we propose a voxel-based analysis (VBA) pipeline that is reliable enough to identify areas in individual DTI scans that differ significantly from the normative group represented in the STEAM statistical templates. We identify and justify choices in the VBA pipeline relating to multiple comparison correction, image smoothing, and dealing with non-normally distributed data. Finally, we provide a proof of concept for the utility of STEAM on a cohort of 134 very preterm infants. We generated templates from scans of 55 very preterm infants whose T1 MRI scans show no abnormalities and who have normal neurodevelopmental outcome. The remaining 79 infants were then compared to the templates using our VBA technique. We show: (a) that our statistical templates display the white matter development expected over the modeled time period, and (b) that our VBA results detect abnormalities in the diffusion measurements that relate significantly to both the presence of white matter lesions and neurodevelopmental outcomes at 18 months. Most notably, we show that STEAM produces personalized results while also being able to highlight abnormalities across the whole brain and at the scale of individual voxels. While we show the value of STEAM on DTI scans from a preterm infant cohort, STEAM can be equally applied to other cohorts as well. To facilitate this whole-brain personalized DTI analysis, we made STEAM publicly available at http://www.sfu.ca/bgb2/steam. PMID:26515903
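
    A highly simplified sketch of the kind of voxel-wise comparison underlying such a template-based analysis: z-scoring an individual's diffusion measure against normative mean and standard-deviation templates. This is an illustration only, not the STEAM pipeline itself, which additionally handles smoothing, non-normally distributed data and multiple-comparison correction.

      import numpy as np

      def voxelwise_z(subject_map, template_mean, template_std, z_thresh=3.0):
          # subject_map, template_mean, template_std: co-registered 3-D arrays (e.g. FA).
          safe_std = np.where(template_std > 0, template_std, np.inf)
          z = (subject_map - template_mean) / safe_std
          return z, np.abs(z) > z_thresh   # candidate abnormal voxels, before any correction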

  20. Method for estimating potential wetland extent by utilizing streamflow statistics and flood-inundation mapping techniques: Pilot study for land along the Wabash River near Terre Haute, Indiana

    USGS Publications Warehouse

    Kim, Moon H.; Ritz, Christian T.; Arvin, Donald V.

    2012-01-01

    Potential wetland extents were estimated for a 14-mile reach of the Wabash River near Terre Haute, Indiana. This pilot study was completed by the U.S. Geological Survey in cooperation with the U.S. Department of Agriculture, Natural Resources Conservation Service (NRCS). The study showed that potential wetland extents can be estimated by analyzing streamflow statistics with the available streamgage data, calculating the approximate water-surface elevation along the river, and generating maps by use of flood-inundation mapping techniques. Planning successful restorations for Wetland Reserve Program (WRP) easements requires a determination of areas that show evidence of being in a zone prone to sustained or frequent flooding. Zone determinations of this type are used by WRP planners to define the actively inundated area and make decisions on restoration-practice installation. According to WRP planning guidelines, a site needs to show evidence of being in an "inundation zone" that is prone to sustained or frequent flooding for a period of 7 consecutive days at least once every 2 years on average in order to meet the planning criteria for determining a wetland for a restoration in agricultural land. By calculating the annual highest 7-consecutive-day mean discharge with a 2-year recurrence interval (7MQ2) at a streamgage on the basis of available streamflow data, one can determine the water-surface elevation corresponding to the calculated flow that defines the estimated inundation zone along the river. By using the estimated water-surface elevation ("inundation elevation") along the river, an approximate extent of potential wetland for a restoration in agricultural land can be mapped. As part of the pilot study, a set of maps representing the estimated potential wetland extents was generated in a geographic information system (GIS) application by combining (1) a digital water-surface plane representing the surface of inundation elevation that sloped in the downstream
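
    A short sketch of the 7MQ2 statistic described above, assuming daily_discharge is a pandas Series of daily mean discharge indexed by a DatetimeIndex. Taking the median of the annual maxima as the 2-year recurrence value is a simplification; a formal frequency analysis would fit a probability distribution to the annual series.

      import pandas as pd

      def seven_mq2(daily_discharge: pd.Series) -> float:
          rolling7 = daily_discharge.rolling(window=7).mean()              # 7-consecutive-day means
          annual_max = rolling7.groupby(daily_discharge.index.year).max()  # annual highest value
          return annual_max.median()   # ~2-year recurrence (50% annual exceedance)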

  1. Spatiotemporal Mapping of Motility in Ex Vivo Preparations of the Intestines

    PubMed Central

    Kendig, Derek M.; Hurst, Norm R.; Grider, John R.

    2016-01-01

    Multiple approaches have been used to record and evaluate gastrointestinal motility including: recording changes in muscle tension, intraluminal pressure, and membrane potential. All of these approaches depend on measurement of activity at one or multiple locations along the gut simultaneously which are then interpreted to provide a sense of overall motility patterns. Recently, the development of video recording and spatiotemporal mapping (STmap) techniques have made it possible to observe and analyze complex patterns in ex vivo whole segments of colon and intestine. Once recorded and digitized, video records can be converted to STmaps in which the luminal diameter is converted to grayscale or color [called diameter maps (Dmaps)]. STmaps can provide data on motility direction (i.e., stationary, peristaltic, antiperistaltic), velocity, duration, frequency and strength of contractile motility patterns. Advantages of this approach include: analysis of interaction or simultaneous development of different motility patterns in different regions of the same segment, visualization of motility pattern changes over time, and analysis of how activity in one region influences activity in another region. Video recordings can be replayed with different timescales and analysis parameters so that separate STmaps and motility patterns can be analyzed in more detail. This protocol specifically details the effects of intraluminal fluid distension and intraluminal stimuli that affect motility generation. The use of luminal receptor agonists and antagonists provides mechanistic information on how specific patterns are initiated and how one pattern can be converted into another pattern. The technique is limited by the ability to only measure motility that causes changes in luminal diameter, without providing data on intraluminal pressure changes or muscle tension, and by the generation of artifacts based upon experimental setup; although, analysis methods can account for these issues
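
    A minimal sketch of the diameter-map (Dmap) construction described above, assuming diameters is a list of equal-length 1-D arrays, one per video frame, giving the measured luminal diameter at each position along the segment. The grayscale coding and the orientation (rows = frames, columns = position) are illustrative choices, not the protocol's exact output format.

      import numpy as np

      def diameter_map(diameters):
          dmap = np.vstack(diameters).astype(float)        # (n_frames, n_positions)
          lo, hi = dmap.min(), dmap.max()
          return np.uint8(255 * (dmap - lo) / (hi - lo))   # 0 = most constricted, 255 = most dilated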

  2. Comparative study of software techniques for 3D mapping of perforators in deep inferior epigastric artery perforator flap planning

    PubMed Central

    Hunter-Smith, David J.; Rozen, Warren Matthew

    2016-01-01

    Background: Computed tomographic (CT) angiography (CTA) is widely considered the gold standard imaging modality for preoperative planning of autologous breast reconstruction with the deep inferior epigastric artery (DIEA) perforator (DIEP) flap. Improved anatomical understanding from CTA has translated to enhanced clinical outcomes. To achieve this, the use of appropriate CT hardware and software is vital. Various CT scanners and contrast materials have been demonstrated to consistently produce adequate scan data. However, the availability of affordable and easily accessible imaging software capable of generating 3D volume-rendered perforator images of clinically useful quality has been lacking. Osirix (Pixmeo, Geneva, Switzerland) is a free, readily available medical image processing software that shows promise. We have previously demonstrated in a case report the usefulness of Osirix in localizing perforators and their course. Methods: In the current case series of 50 consecutive CTA scans, we compare the accuracy of Osirix to a commonly used proprietary 3D imaging software, Siemens Syngo InSpace 4D (Siemens, Erlangen, Germany), in identifying perforator number and location. Moreover, we compared both programs to intraoperative findings. Results: We report a high rate of concordance between Osirix and Siemens Syngo InSpace 4D (99.6%). Both programs correlated closely with operative findings (92.2%). Most of the discrepancies were found in the lateral row perforators (90%). Conclusions: In the current study, we report accuracy of Osirix comparable to that of Siemens Syngo InSpace 4D, a proprietary software, in mapping perforators. However, it provides the added advantages of being free, easy to use, and portable, with potentially superior quality of the 3D reconstructed images. PMID:27047778

  3. Water Through Life, A New Technique for Mapping Shallow Water Tables in Arid and Semi-Arid Climates using Color Infrared Aerial Photographs

    NASA Astrophysics Data System (ADS)

    Fielding, G.

    2003-04-01

    Two of the fundamental issues in water resources in arid regions are (1) the ability to accurately predict the presence of groundwater shallow enough to support riparian ecosystems and (2) the ability to assess the vulnerability of those ecosystems to withdrawals by an ever-increasing human population. A new technique for finding areas of shallow groundwater in arid and semi-arid environments has been developed that addresses both of these concerns by using the relative health of natural vegetation as an indicator of perennial shallow groundwater in environments where water is the main biolimiting factor to growth. The technique revolves around the differences in the spectral response between: actively transpiring vegetation; dormant vegetation; and dry, bare soil in the 400-900 nm range as recorded by color infrared film in the dry pre-monsoon months. Distilling out only the active vegetation from aerial photographs was achieved through the creation of an index-based filter using readily available, inexpensive photo processing software. The output of the filter was carefully designed to maximize the qualitative interpretability by an analyst through the careful selection of display colors that are tuned to the maximum sensitivity range of the human vision system. When the analyst combines the qualitative interpretation of the spatial distribution of active vegetation with an understanding of the rooting depth of the native species it becomes possible to extrapolate a quantitative, basin-scale reconnaissance level map which defines the lateral extent of areas of shallow (<20 m ± 5 m) groundwater with a spatial accuracy of just a few meters. The research plan for the development of this technique also explored the potential for conducting the entire analysis procedure in three dimensions by projecting the filtered aerial photographs onto 10 m resolution Digital Elevation Models (DEMs). When this is done and the geomorphology of the region is carefully considered the
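
    An illustrative index-based filter in the spirit of the one described above, assuming a scanned colour-infrared photograph in which the image's red channel records near-infrared and its green channel records red light (the usual CIR film mapping). The NDVI-style index and the threshold are assumptions; the paper builds its filter in photo-processing software and does not give the exact formula here.

      import numpy as np

      def active_vegetation_mask(cir_image, threshold=0.3):
          # cir_image: (rows, cols, 3) array from a scanned colour-infrared photograph.
          nir = cir_image[..., 0].astype(float)
          red = cir_image[..., 1].astype(float)
          denom = np.where((nir + red) == 0, 1.0, nir + red)
          ndvi = (nir - red) / denom
          return ndvi > threshold   # True where vegetation appears actively transpiring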

  4. Luminal Ca(2+) dynamics during IP3R mediated signals.

    PubMed

    Lopez, Lucia F; Dawson, Silvina Ponce

    2016-01-01

    The role of cytosolic Ca(2+) on the kinetics of inositol 1,4,5-trisphosphate receptors (IP3Rs) and on the dynamics of IP3R-mediated Ca(2+) signals has been studied at large both experimentally and by modeling. The role of luminal Ca(2+) has not been investigated with that much detail although it has been found that it is relevant for signal termination in the case of Ca(2+) release through ryanodine receptors. In this work we present the results of observing the dynamics of luminal and cytosolic Ca(2+) simultaneously in Xenopus laevis oocytes. Combining observations and modeling we conclude that there is a rapid mechanism that guarantees the availability of free Ca(2+) in the lumen even when a relatively large Ca(2+) release is evoked. Comparing the dynamics of cytosolic and luminal Ca(2+) during a release, we estimate that they are consistent with 80% of luminal Ca(2+) being buffered. The rapid availability of free luminal Ca(2+) correlates with the observation that the lumen occupies a considerable volume in several regions across the images. PMID:27232767

  5. An Exponential Luminous Efficiency Model for Hypervelocity Impact into Regolith

    NASA Technical Reports Server (NTRS)

    Swift, W. R.; Moser, D. E.; Suggs, R. M.; Cooke, W. J.

    2011-01-01

    The flash of thermal radiation produced as part of the impact-crater forming process can be used to determine the energy of the impact if the luminous efficiency is known. From this energy the mass and, ultimately, the mass flux of similar impactors can be deduced. The luminous efficiency, eta, is a unique function of velocity with an extremely large variation in the laboratory range of under 6 km/s but a necessarily small variation with velocity in the meteoric range of 20 to 70 km/s. Impacts into granular or powdery regolith, such as that on the moon, differ from impacts into solid materials in that the energy is deposited via a serial impact process which affects the rate of deposition of internal (thermal) energy. An exponential model of the process is developed which differs from the usual polynomial models of crater formation. The model is valid for the early time portion of the process and focuses on the deposition of internal energy into the regolith. The model is successfully compared with experimental luminous efficiency data from both laboratory impacts and from lunar impact observations. Further work is proposed to clarify the effects of mass and density upon the luminous efficiency scaling factors. Keywords: hypervelocity impact, impact flash, luminous efficiency, lunar impact, meteoroid.
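
    The energy-to-mass step described above can be written compactly: with E_rad the radiated flash energy, v the assumed impact velocity and eta the luminous efficiency,

      E_{\mathrm{rad}} = \eta\,E_{\mathrm{kin}} = \tfrac{1}{2}\,\eta\, m v^{2} \quad\Longrightarrow\quad m = \frac{2\,E_{\mathrm{rad}}}{\eta\, v^{2}}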

  6. Luminance-dependent changes in mesopic visual contrast sensitivity

    PubMed Central

    Smith, Robert A.

    1973-01-01

    Spatial and temporal modulation transfer functions have been measured as a function of luminance at scotopic and mesopic levels. It is found that throughout the scotopic range the data can be accounted for by a two-process visual system, with the processes having the following properties. Simple excitatory process: This mechanism summates over somewhat less than 1 square degree of visual angle and over about 200 msec of time. These properties do not change with luminance, but the over-all sensitivity of the mechanism follows the De Vries—Rose law. The temporal properties of this mechanism follow those of the scotopic b-wave. At high luminances, the temporal, but not spatial, properties of this mechanism break down in a manner which had not been studied. Low-frequency inhibitory process: This process is manifest as a decrease in sensitivity from that of the simple excitatory process. Its effect is observed only when both spatial and temporal frequency are low, suggesting that it has a larger (perhaps 3 ×) summation area in both space and time. The inhibitory process is not observed below a certain threshold luminance, which is highly dependent upon the configuration of the stimulus. For a suitable stimulus, this threshold will be well into the scotopic luminance range. It is suggested that these two processes represent psychophysical correlates of the centre and surround of retinal receptive fields. PMID:4702414
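
    For reference, the De Vries—Rose law invoked above states that, over the relevant luminance range, contrast sensitivity grows as the square root of the mean luminance (equivalently, the threshold contrast falls as the inverse square root):

      S(L) \propto \sqrt{L} \quad\Longleftrightarrow\quad \left(\frac{\Delta L}{L}\right)_{\mathrm{threshold}} \propto L^{-1/2}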

  7. Identification of a Novel Luminal Molecular Subtype of Breast Cancer

    PubMed Central

    Dvorkin-Gheva, Anna; Hassell, John A.

    2014-01-01

    The molecular classification of human breast tumors has afforded insights into subtype-specific biological processes, patient prognosis and response to therapies. However, using current methods roughly one quarter of breast tumors cannot be classified into one or another molecular subtype. To explore the possibility that the unclassifiable samples might comprise one or more novel subtypes we employed a collection of publicly available breast tumor datasets with accompanying clinical information to assemble 1,593 transcript profiles: 25% of these samples could not be assigned to one of the current molecular subtypes of breast cancer. All of the unclassifiable samples could be grouped into a new molecular subtype, which we termed “luminal-like”. We also identified the luminal-like subtype in an independent collection of tumor samples (NKI295). We found that patients harboring tumors of the luminal-like subtype have a better prognosis than those with basal-like breast cancer, a similar prognosis to those with ERBB2+, luminal B or claudin-low tumors, but a worse prognosis than patients with luminal A or normal-like breast tumors. Our findings suggest the occurrence of another molecular subtype of breast cancer that accounts for the vast majority of previously unclassifiable breast tumors. PMID:25076125

  8. Luminal Ca2+ dynamics during IP3R mediated signals

    NASA Astrophysics Data System (ADS)

    Lopez, Lucia F.; Ponce Dawson, Silvina

    2016-06-01

    The role of cytosolic Ca2+ on the kinetics of inositol 1,4,5-trisphosphate receptors (IP3Rs) and on the dynamics of IP3R-mediated Ca2+ signals has been studied at large both experimentally and by modeling. The role of luminal Ca2+ has not been investigated with that much detail although it has been found that it is relevant for signal termination in the case of Ca2+ release through ryanodine receptors. In this work we present the results of observing the dynamics of luminal and cytosolic Ca2+ simultaneously in Xenopus laevis oocytes. Combining observations and modeling we conclude that there is a rapid mechanism that guarantees the availability of free Ca2+ in the lumen even when a relatively large Ca2+ release is evoked. Comparing the dynamics of cytosolic and luminal Ca2+ during a release, we estimate that they are consistent with 80% of luminal Ca2+ being buffered. The rapid availability of free luminal Ca2+ correlates with the observation that the lumen occupies a considerable volume in several regions across the images.

  9. Evaluation of SLAR and thematic mapper MSS data for forest cover mapping using computer-aided analysis techniques

    NASA Technical Reports Server (NTRS)

    Hoffer, R. M. (Principal Investigator)

    1980-01-01

    The column normalizing technique was used to adjust the data for variations in the amplitude of the signal due to look angle effects with respect to solar zenith angle along the scan lines (i.e., across columns). Evaluation of the data set containing the geometric and radiometric adjustments indicates that it should be satisfactory for further processing and analysis. Software was developed for degrading the spatial resolution of the aircraft data to produce a total of four data sets for further analysis. The quality of LANDSAT 2 CCT data for the test site is good for channels four, five, and six. Channel seven was not present on the tape. The data received were reformatted and analysis of the test site area was initiated.
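
    A small sketch of a column-normalising adjustment of the kind described above: each image column is rescaled so that its mean matches the overall scene mean, removing the smooth across-track (look-angle) trend. This illustrates the general idea only, not the LARSYS implementation.

      import numpy as np

      def column_normalize(band):
          # band: 2-D array of one channel, shape (scan_lines, columns).
          col_means = band.mean(axis=0)
          safe = np.where(col_means == 0, 1.0, col_means)
          return band * (band.mean() / safe)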

  10. Detecting, mapping and monitoring of land subsidence in Jharia Coalfield, Jharkhand, India by spaceborne differential interferometric SAR, GPS and precision levelling techniques

    NASA Astrophysics Data System (ADS)

    Chatterjee, R. S.; Thapa, Shailaja; Singh, K. B.; Varunakumar, G.; Raju, E. V. R.

    2015-08-01

    The study aims at detection, mapping and monitoring of land subsidence in Jharia Coalfield, Jharkhand, India by spaceborne DInSAR, GPS and precision levelling techniques. Using multi-frequency C- and L-band DInSAR, both slowly and rapidly subsiding areas were identified and DInSAR-based subsidence maps were prepared. C-band DInSAR was found useful for detection of slowly subsiding areas whereas L-band DInSAR for rapidly subsiding and/or adverse land cover areas. Due to the dynamic nature of mining and adverse land cover, temporal decorrelation poses a serious problem particularly in C-band DInSAR. Specially designed InSAR coherence guided adaptive filtering was found useful to highlight the deformation fringes. Collateral GPS and levelling observations were conducted in three test sites to validate DInSAR measurements and to determine the net displacement vectors. We observed an appreciable horizontal displacement component of land subsidence in all the test sites. For comparison of results, we calculated InSAR coherence weighted LOS displacement rates from the unwrapped differential interferograms of smaller spatial subsets and LOS projected ground-based displacement rates in three test sites. We found good agreement between DInSAR and ground-based measurements except for the C-band observation in the Dobari test site, primarily due to the large difference in observation periods and temporally inconsistent land subsidence. Collateral spaceborne and ground-based observations were also found useful for characterization of subsidence phenomena to determine net displacement vector and horizontal displacement component. In coal mining areas with spatially scattered and temporally irregular land subsidence phenomena, the adopted methodology can be used successfully for detection, mapping and monitoring of the subsiding areas vulnerable to future collapse. This will facilitate efficient planning and designing of surface infrastructures and other developmental structures in the mining

  11. Integrating pan-sharpening and classifier ensemble techniques to map an invasive plant (Spartina alterniflora) in an estuarine wetland using Landsat 8 imagery

    NASA Astrophysics Data System (ADS)

    Ai, Jinquan; Gao, Wei; Gao, Zhiqiang; Shi, Runhe; Zhang, Chao; Liu, Chaoshun

    2016-04-01

    Accurate mapping of invasive species in a cost-effective way is the first step toward understanding and predicting the impact of their invasions. However, it is challenging in coastal wetlands due to the confounding effects of biodiversity and tides on spectral reflectance. The aim of this work is to describe a method, based on the integration of pan-sharpening and classifier ensemble techniques, for improving the accuracy of mapping an invasive plant (Spartina alterniflora). A framework was designed to achieve this goal. Five candidate image fusion algorithms, including principal component analysis fusion algorithm, modified intensity-hue-saturation fusion algorithm, wavelet-transform fusion algorithm, Ehlers fusion algorithm, and Gram-Schmidt fusion algorithm, were applied to pan-sharpen Landsat 8 Operational Land Imager (OLI) imagery. We assessed the five fusion algorithms with respect to spectral and spatial fidelity using visual inspection and quantitative quality indicators. The optimal fused image was selected for subsequent analysis. Then, three classifiers, namely, maximum likelihood, artificial neural network, and support vector machine, were employed to preclassify the fused and raw OLI 30-m band images. Final object-based S. alterniflora maps were generated through classifier ensemble analysis of outcomes from the three classifiers. The results showed that the introduced method obtained high classification accuracy, with an overall accuracy of 90.96% and balanced misclassification errors between S. alterniflora and its coexistent species. We recommend that future research adopt the proposed method for monitoring long-term or multiseasonal changes in the land coverage of invasive wetland plants.
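
    A minimal sketch of the classifier-ensemble step described above (per-pixel majority voting over the three classification maps; the actual ensemble rule and class codes used in the study may differ):

```python
import numpy as np

def majority_vote(label_maps):
    """Combine per-pixel class labels from several classifiers by majority vote."""
    stack = np.stack(label_maps, axis=0)                  # (n_classifiers, H, W)
    n_classes = int(stack.max()) + 1
    votes = np.stack([(stack == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0)                           # most frequent class per pixel

# Toy 2x2 maps (1 = S. alterniflora, 0 = other) from three hypothetical classifiers.
ml  = np.array([[1, 0], [1, 1]])
ann = np.array([[1, 1], [0, 1]])
svm = np.array([[0, 0], [1, 1]])
print(majority_vote([ml, ann, svm]))                      # [[1 0], [1 1]]
```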

  12. Cosmological baryonic and matter densities from 600000 SDSS luminous red galaxies with photometric redshifts

    NASA Astrophysics Data System (ADS)

    Blake, Chris; Collister, Adrian; Bridle, Sarah; Lahav, Ofer

    2007-02-01

    We analyse MegaZ-LRG, a photometric-redshift catalogue of luminous red galaxies (LRGs) based on the imaging data of the Sloan Digital Sky Survey (SDSS) 4th Data Release. MegaZ-LRG, presented in a companion paper, contains >10^6 photometric redshifts derived with ANNZ, an artificial neural network method, constrained by a spectroscopic subsample of ~13000 galaxies obtained by the 2dF-SDSS LRG and Quasar (2SLAQ) survey. The catalogue spans the redshift range 0.4 < z < 0.7 with an rms redshift error σz ~ 0.03(1 + z), covering 5914 deg^2 to map out a total cosmic volume 2.5 h^-3 Gpc^3. In this study we use the most reliable 600000 photometric redshifts to measure the large-scale structure using two methods: (1) a spherical harmonic analysis in redshift slices, and (2) a direct re-construction of the spatial clustering pattern using Fourier techniques. We present the first cosmological parameter fits to galaxy angular power spectra from a photometric-redshift survey. Combining the redshift slices with appropriate covariances, we determine best-fitting values for the matter density Ωm and baryon density Ωb of Ωm h = 0.195 +/- 0.023 and Ωb/Ωm = 0.16 +/- 0.036 (with the Hubble parameter h = 0.75 and scalar index of primordial fluctuations nscalar = 1 held fixed). These results are in agreement with and independent of the latest studies of the cosmic microwave background radiation, and their precision is comparable to analyses of contemporary spectroscopic-redshift surveys. We perform an extensive series of tests which conclude that our power spectrum measurements are robust against potential systematic photometric errors in the catalogue. We conclude that photometric-redshift surveys are competitive with spectroscopic surveys for measuring cosmological parameters in the simplest 'vanilla' models. Future deep imaging surveys have great potential for further improvement, provided that systematic errors can be controlled.
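
    For readers who want the implied densities spelled out, the quoted best-fitting values combine by simple arithmetic (central values only, no error propagation):

```python
# Central values quoted in the abstract, with h = 0.75 held fixed.
h = 0.75
omega_m = 0.195 / h        # from the constraint Omega_m * h = 0.195  -> ~0.26
omega_b = 0.16 * omega_m   # from the ratio Omega_b / Omega_m = 0.16  -> ~0.042
print(f"Omega_m ~ {omega_m:.3f}, Omega_b ~ {omega_b:.3f}")
```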

  13. Structure of glycosylated NPC1 luminal domain C reveals insights into NPC2 and Ebola virus interactions.

    PubMed

    Zhao, Yuguang; Ren, Jingshan; Harlos, Karl; Stuart, David I

    2016-03-01

    Niemann-Pick type C1 (NPC1) is an endo/lysosomal membrane protein involved in intracellular cholesterol trafficking, and its luminal domain C is an essential endosomal receptor for Ebola and Marburg viruses. We have determined the crystal structure of glycosylated NPC1 luminal domain C and find that all seven possible sites are glycosylated. Mapping the disease mutations onto the glycosylated structure reveals a potential binding face for NPC2. Knowledge-based docking of NPC1 onto Ebola viral glycoprotein and sequence analysis of filovirus susceptible and refractory species reveal four critical residues, H418, Q421, F502 and F504, some or all of which are likely responsible for the species-specific susceptibility to the virus infection. PMID:26846330

  14. Transient liquid-crystal technique used to produce high-resolution convective heat-transfer-coefficient maps

    NASA Astrophysics Data System (ADS)

    Hippensteele, Steven A.; Poinsatte, Philip E.

    1993-08-01

    In this transient technique the preheated isothermal model wall simulates the classic one-dimensional, semi-infinite wall heat transfer conduction problem. By knowing the temperature of the air flowing through the model, the initial temperature of the model wall, and the surface cooling rate measured at any location with time (using the fast-response liquid-crystal patterns recorded on video tape), the heat transfer coefficient can be calculated for the color isothermal pattern produced. Although the test was run transiently, the heat transfer coefficients are for the steady-state case. The upstream thermal boundary condition was considered to be isothermal. This transient liquid-crystal heat-transfer technique was used in a transient air tunnel in which a square-inlet, 3-to-1 exit transition duct was placed. The duct was preheated prior to allowing room temperature air to be suddenly drawn through it. The resulting isothermal contours on the duct surfaces were revealed using a surface coating of thermochromic liquid crystals that display distinctive colors at particular temperatures. A video record was made of the temperature and time data for all points on the duct surfaces during each test. The duct surfaces were uniformly heated using two heating systems: the first was an automatic temperature-controlled heater blanket completely surrounding the test duct like an oven, and the second was an internal hot-air loop through the inside of the test duct. The hot-air loop path was confined inside the test duct by insulated heat dams located at the inlet and exit ends of the test duct. A recirculating fan moved hot air into the duct inlet, through the duct, out of the duct exit, through the oven, and back to the duct inlet. The temperature nonuniformity of the test duct model wall was held very small. Test results are reported for two inlet Reynolds numbers of 200,000 and 1,150,000 (based on the square-inlet hydraulic diameter) and two free-stream turbulence

  15. Transient liquid-crystal technique used to produce high-resolution convective heat-transfer-coefficient maps

    NASA Technical Reports Server (NTRS)

    Hippensteele, Steven A.; Poinsatte, Philip E.

    1993-01-01

    In this transient technique the preheated isothermal model wall simulates the classic one-dimensional, semi-infinite wall heat transfer conduction problem. By knowing the temperature of the air flowing through the model, the initial temperature of the model wall, and the surface cooling rate measured at any location with time (using the fast-response liquid-crystal patterns recorded on video tape), the heat transfer coefficient can be calculated for the color isothermal pattern produced. Although the test was run transiently, the heat transfer coefficients are for the steady-state case. The upstream thermal boundary condition was considered to be isothermal. This transient liquid-crystal heat-transfer technique was used in a transient air tunnel in which a square-inlet, 3-to-1 exit transition duct was placed. The duct was preheated prior to allowing room temperature air to be suddenly drawn through it. The resulting isothermal contours on the duct surfaces were revealed using a surface coating of thermochromic liquid crystals that display distinctive colors at particular temperatures. A video record was made of the temperature and time data for all points on the duct surfaces during each test. The duct surfaces were uniformly heated using two heating systems: the first was an automatic temperature-controlled heater blanket completely surrounding the test duct like an oven, and the second was an internal hot-air loop through the inside of the test duct. The hot-air loop path was confined inside the test duct by insulated heat dams located at the inlet and exit ends of the test duct. A recirculating fan moved hot air into the duct inlet, through the duct, out of the duct exit, through the oven, and back to the duct inlet. The temperature nonuniformity of the test duct model wall was held very small. Test results are reported for two inlet Reynolds numbers of 200,000 and 1,150,000 (based on the square-inlet hydraulic diameter) and two free-stream turbulence
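
    The data reduction sketched in the two records above rests on the one-dimensional semi-infinite-solid solution for a wall suddenly exposed to convection, (T_s - T_i)/(T_air - T_i) = 1 - exp(h^2 αt/k^2) erfc(h √(αt)/k). A minimal SciPy sketch of inverting that relation for h is shown below; the wall properties and temperatures are illustrative placeholders, not values from the report:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import erfcx      # erfcx(x) = exp(x**2) * erfc(x), overflow-safe

def surface_theta(h, t, k, alpha):
    """Dimensionless surface temperature change of a semi-infinite wall suddenly
    exposed to convection: 1 - exp(x**2)*erfc(x), with x = h*sqrt(alpha*t)/k."""
    x = h * np.sqrt(alpha * t) / k
    return 1.0 - erfcx(x)

def solve_h(T_surface, T_initial, T_air, t, k, alpha):
    """Invert the semi-infinite-wall solution for the convective coefficient h."""
    target = (T_surface - T_initial) / (T_air - T_initial)
    return brentq(lambda h: surface_theta(h, t, k, alpha) - target, 1e-3, 1e5)

# Placeholder properties for an acrylic-like wall during a cool-down test.
k, alpha = 0.19, 1.1e-7              # W/(m K), m^2/s  (assumed, not from the report)
h = solve_h(T_surface=34.0, T_initial=45.0, T_air=20.0, t=12.0, k=k, alpha=alpha)
print(f"h ~ {h:.0f} W/(m^2 K)")
```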

  16. Luminance Effects on Neural Mechanism at Photopic Level

    NASA Astrophysics Data System (ADS)

    Ge, Jing-Jing; Wang, Zhao-Qi

    2011-05-01

    In order to determine the influence of luminance at the photopic level on the neural mechanism, a neural contrast sensitivity function (NCSF) measurement system is established. The contrast sensitivity function (CSF) of the visual system and the modulation transfer function (MTF) of the eye's optical system are first measured with corresponding instruments, respectively. Then the NCSF is calculated as the ratio of CSF to MTF. Four individual eyes are involved in the cases of green light and white light. With increasing luminance, the tendency of the variation of the CSFs is similar to that of the NCSFs, while the gain is larger than that of the NCSFs, especially in the region of higher spatial frequency. It is the NCSF, rather than the CSF, that reflects the luminance sensitivity in the retina-brain neural system, because the influence of the eye's optical system is excluded.
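
    The neural estimate described above is simply the pointwise ratio of the two measured functions at each spatial frequency f:

```latex
\mathrm{NCSF}(f) = \frac{\mathrm{CSF}(f)}{\mathrm{MTF}(f)}
```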

  17. Mesopic luminance assessed with minimally distinct border perception.

    PubMed

    Raphael, Sabine; MacLeod, Donald I A

    2015-01-01

    In photopic vision, the border between two fields is minimally distinct when the two fields are isoluminant; that is, when the achromatic luminance of the two fields is equal. The distinctness of a border between extrafoveal reference and comparison fields was used here as an isoluminance criterion under a variety of adaptation conditions ranging from photopic to scotopic. The adjustment was done by trading off the amount of blue against the amount of red in the comparison field. Results show that isoluminant border settings are linear under all constant adaptation conditions, though varying with state of adaptation. The relative contribution of rods and cones to luminance was modeled such that the linear sum of the suitably weighted scotopic and photopic luminance is constant for the mesopic isoluminant conditions. The relative weights change with adapting intensity in a sigmoid fashion and also depend strongly on the position of the border in the visual field. PMID:26223024

  18. Mesopic luminance assessed with minimally distinct border perception

    PubMed Central

    Raphael, Sabine; MacLeod, Donald I. A.

    2015-01-01

    In photopic vision, the border between two fields is minimally distinct when the two fields are isoluminant; that is, when the achromatic luminance of the two fields is equal. The distinctness of a border between extrafoveal reference and comparison fields was used here as an isoluminance criterion under a variety of adaptation conditions ranging from photopic to scotopic. The adjustment was done by trading off the amount of blue against the amount of red in the comparison field. Results show that isoluminant border settings are linear under all constant adaptation conditions, though varying with state of adaptation. The relative contribution of rods and cones to luminance was modeled such that the linear sum of the suitably weighted scotopic and photopic luminance is constant for the mesopic isoluminant conditions. The relative weights change with adapting intensity in a sigmoid fashion and also depend strongly on the position of the border in the visual field. PMID:26223024
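
    The weighting model described in the two records above can be written compactly as below; the sigmoid form for the weight w is a schematic of the verbal description, with μ and σ standing in for a fitted midpoint and slope (not the paper's fitted parameters):

```latex
L_{\mathrm{mes}} \;\propto\; w(I_a)\,L_{\mathrm{phot}} + \bigl(1 - w(I_a)\bigr)\,L_{\mathrm{scot}},
\qquad
w(I_a) = \frac{1}{1 + e^{-(\log_{10} I_a - \mu)/\sigma}}
```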

  19. A Second Luminous Blue Variable in the Quintuplet Cluster.

    PubMed

    Geballe; Najarro; Figer

    2000-02-20

    H- and K-band moderate-resolution and 4 µm high-resolution spectra have been obtained for FMM 362, a bright star in the Quintuplet Cluster near the Galactic center. The spectral features in these bands closely match those of the Pistol Star, a luminous blue variable and one of the most luminous stars known. The new spectra and previously obtained photometry imply a very high luminosity for FMM 362, L ≥ 10^6 L⊙, and a temperature of 10,000-13,000 K. Based on its luminosity, temperature, photometric variability, and similarities to the Pistol Star, we conclude that FMM 362 is a luminous blue variable. PMID:10655174

  20. Lunar eclipse photometry: absolute luminance measurements and modeling.

    PubMed

    Hernitschek, Nina; Schmidt, Elmar; Vollmer, Michael

    2008-12-01

    The Moon's time-dependent luminance was determined during the 9 February 1990 and 3 March 2007 total lunar eclipses by using calibrated, industry standard photometers. After the results were corrected to unit air mass and to standard distances for both Moon and Sun, an absolute calibration was accomplished by using the Sun's known luminance and a pre-eclipse lunar albedo of approximately 13.5%. The measured minimum level of brightness in the total phase of both eclipses was relatively high, namely -3.32 m(vis) and -1.7 m(vis), which hints at the absence of pronounced stratospheric aerosol. The light curves were modeled in such a way as to let the Moon move through an artificial Earth shadow composed of a multitude of disk and ring zones, containing a relative luminance data set from an atmospheric radiative transfer calculation. PMID:19037352

  1. Asymmetric effects of luminance and chrominance in the watercolor illusion.

    PubMed

    Coia, Andrew J; Crognale, Michael A

    2014-01-01

    When bounded by a line of sufficient contrast, the desaturated hue of a colored line will spread over an enclosed area, an effect known as the watercolor illusion. The contrast of the two lines can be in luminance, chromaticity, or a combination of both. The effect is most salient when the enclosing line has greater contrast with the background than the line that induces the spreading color. In most prior experiments with watercolor spreading, the luminance of both lines has been lower than the background. An achromatic version of the illusion exists where a dark line will spread while being bounded by either a darker or brighter line. In a previous study we measured the strength of the watercolor effect in which the colored inducing line was isoluminant to the background, and found an illusion for both brighter and darker achromatic outer contours. We also found that the strength of spreading is stronger for bluish (+S cone input) colors compared to yellowish (-S cone input) ones, when bounded by a dark line. The current study set out to measure the hue dependence of the watercolor illusion when inducing colors are flanked with brighter (increment) as opposed to darker outer lines. The asymmetry in the watercolor effect with S cone input was enhanced when the inducing contrast was an increment rather than a decrement. Further experiments explored the relationship between the perceived contrast of these chromatic lines when paired with luminance increments and decrements and revealed that the perceived contrast of luminance increments and decrements is dependent on which isoluminant color they are paired with. In addition to known hue asymmetries in the watercolor illusion there are asymmetries between luminance increments and decrements that are also hue dependent. These latter asymmetries may be related to the perceived contrast of the hue/luminance pairings. PMID:25309396

  2. Asymmetric effects of luminance and chrominance in the watercolor illusion

    PubMed Central

    Coia, Andrew J.; Crognale, Michael A.

    2014-01-01

    When bounded by a line of sufficient contrast, the desaturated hue of a colored line will spread over an enclosed area, an effect known as the watercolor illusion. The contrast of the two lines can be in luminance, chromaticity, or a combination of both. The effect is most salient when the enclosing line has greater contrast with the background than the line that induces the spreading color. In most prior experiments with watercolor spreading, the luminance of both lines has been lower than the background. An achromatic version of the illusion exists where a dark line will spread while being bounded by either a darker or brighter line. In a previous study we measured the strength of the watercolor effect in which the colored inducing line was isoluminant to the background, and found an illusion for both brighter and darker achromatic outer contours. We also found that the strength of spreading is stronger for bluish (+S cone input) colors compared to yellowish (−S cone input) ones, when bounded by a dark line. The current study set out to measure the hue dependence of the watercolor illusion when inducing colors are flanked with brighter (increment) as opposed to darker outer lines. The asymmetry in the watercolor effect with S cone input was enhanced when the inducing contrast was an increment rather than a decrement. Further experiments explored the relationship between the perceived contrast of these chromatic lines when paired with luminance increments and decrements and revealed that the perceived contrast of luminance increments and decrements is dependent on which isoluminant color they are paired with. In addition to known hue asymmetries in the watercolor illusion there are asymmetries between luminance increments and decrements that are also hue dependent. These latter asymmetries may be related to the perceived contrast of the hue/luminance pairings. PMID:25309396

  3. Human Mammary Luminal Epithelial Cells Contain Progenitors to Myoepithelial Cells

    SciTech Connect

    Pechoux, Christine; Gudjonsson, Thorarinn; Ronnov-Jessen, Lone; Bissell, Mina J; Petersen, Ole

    1999-02-01

    The origin of the epithelial and myoepithelial cells in the human breast has not been delineated. In this study we have addressed whether luminal epithelial cells and myoepithelial cells are vertically connected, i.e., whether one is the precursor for the other. We used a primary culture assay allowing preservation of basic phenotypic traits of luminal epithelial and myoepithelial cells in culture. The two cell types were then separated immunomagnetically using antibodies directed against lineage-specific cell surface antigens into at best 100% purity. The cellular identity was ascertained by cytochemistry, immunoblotting, and 2-D gel electrophoresis. Luminal epithelial cells were identified by strong expression of cytokeratins 18 and 19 while myoepithelial cells were recognized by expression of vimentin and α-smooth muscle actin. We used a previously devised culture medium (CDM4) that allows vigorous expansion of proliferative myoepithelial cells and also devised a medium (CDM6) that allowed sufficient expansion of differentiated luminal epithelial cells based on addition of hepatocyte growth factor/scatter factor. The two different culture media supported each lineage for at least five passages without signs of interconversion. We used parallel cultures where we switched culture media, thus testing the ability of each lineage to convert to the other. Whereas the myoepithelial lineage showed no signs of interconversion, a subset of luminal epithelial cells, gradually, but distinctly, converted to myoepithelial cells. We propose that in the mature human breast, it is the luminal epithelial cell compartment that gives rise to myoepithelial cells rather than the other way around.

  4. Early ERPs to faces: aging, luminance, and individual differences

    PubMed Central

    Bieniek, Magdalena M.; Frei, Luisa S.; Rousselet, Guillaume A.

    2013-01-01

    Recently, Rousselet et al. reported a 1 ms/year delay in visual processing speed in a sample of 62 healthy subjects (Frontiers in Psychology 2010, 1:19). Here, we replicate this finding in an independent sample of 59 subjects and investigate the contribution of optical factors (pupil size and luminance) to the age-related slowdown and to individual differences in visual processing speed. We conducted two experiments. In experiment 1 we recorded EEG from subjects aged 18–79. Subjects viewed images of faces and phase scrambled noise textures under nine luminance conditions, ranging from 0.59 to 60.8 cd/m2. We manipulated luminance using neutral density filters. In experiment 2, 10 young subjects (age < 35) viewed similar stimuli through pinholes ranging from 1 to 5 mm. In both experiments, subjects were tested twice. We found a 1 ms/year slowdown in visual processing that was independent of luminance. Aging effects became visible around 125 ms post-stimulus and did not affect the onsets of the face-texture ERP differences. Furthermore, luminance modulated the entire ERP time-course from 60 to 500 ms. Luminance effects peaked in the N170 time window and were independent of age. Importantly, senile miosis and individual differences in pupil size did not account for aging differences and inter-subject variability in processing speed. The pinhole manipulation also failed to match the ERPs of old subjects to those of young subjects. Overall, our results strongly suggest that early ERPs to faces (<200 ms) are delayed by aging and that these delays are of cortical, rather than optical origin. Our results also demonstrate that even late ERPs to faces are modulated by low-level factors. PMID:23717297

  5. Stochastic uncertainty models for the luminance consistency assumption.

    PubMed

    Corpetti, Thomas; Mémin, Etienne

    2012-02-01

    In this paper, a stochastic formulation of the brightness consistency used in many computer vision problems involving dynamic scenes (for instance, motion estimation or point tracking) is proposed. Usually, this model, which assumes that the luminance of a point is constant along its trajectory, is expressed in a differential form through the total derivative of the luminance function. This differential equation linearly links the point velocity to the spatial and temporal gradients of the luminance function. However, when dealing with images, the available information only holds at discrete time and on a discrete grid. In this paper, we formalize the image luminance as a continuous function transported by a flow known only up to some uncertainties related to such a discretization process. Relying on stochastic calculus, we define a formulation of the luminance function preservation in which these uncertainties are taken into account. From such a framework, it can be shown that the usual deterministic optical flow constraint equation corresponds to our stochastic evolution under some strong constraints. These constraints can be relaxed by imposing a weaker temporal assumption on the luminance function and also in introducing anisotropic intensity-based uncertainties. We also show that these uncertainties can be computed at each point of the image grid from the image data and hence provide meaningful information on the reliability of the motion estimates. To demonstrate the benefit of such a stochastic formulation of the brightness consistency assumption, we have considered a local least-squares motion estimator relying on this new constraint. This new motion estimator significantly improves the quality of the results. PMID:21791410
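
    For reference, the deterministic brightness-constancy (optical flow) constraint that this stochastic formulation generalizes is, for a luminance function I(x, y, t) transported with apparent velocity v:

```latex
\frac{dI}{dt} = \frac{\partial I}{\partial t} + \nabla I \cdot \mathbf{v} = 0
```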

  6. Retinal Ganglion Cell Adaptation to Small Luminance Fluctuations

    PubMed Central

    Freeman, Daniel K.; Graña, Gilberto

    2010-01-01

    To accommodate the wide input range over which the visual system operates within the narrow output range of spiking neurons, the retina adjusts its sensitivity to the mean light level so that retinal ganglion cells can faithfully signal contrast, or relative deviations from the mean luminance. Given the large operating range of the visual system, the majority of work on luminance adaptation has involved logarithmic changes in light level. We report that luminance gain controls are recruited for remarkably small fluctuations in luminance as well. Using spike recordings from the rat optic tract, we show that ganglion cell responses to a brief flash of light are modulated in amplitude by local background fluctuations of as little as 15% contrast. The time scale of the gain control is rapid (<125 ms), at least for on cells. The retinal locus of adaptation precedes the ganglion cell spike generator because response gain changes of on cells were uncorrelated with firing rate. The mechanism seems to reside within the inner retinal network and not in the photoreceptors, because the adaptation profiles of on and off cells differed markedly. The response gain changes follow Weber's law, suggesting that network mechanisms of luminance adaptation described in previous work modulate retinal ganglion cell sensitivity, not just when we move between different lighting environments, but also as our eyes scan a visual scene. Finally, we show that response amplitude is uniformly reduced for flashes on a modulated background that has spatial contrast, indicating that another gain control that integrates luminance signals nonlinearly over space operates within the receptive field center of rat ganglion cells. PMID:20538771

  7. Probing the luminal microenvironment of reconstituted epithelial microtissues.

    PubMed

    Cerchiari, Alec E; Samy, Karen E; Todhunter, Michael E; Schlesinger, Erica; Henise, Jeff; Rieken, Christopher; Gartner, Zev J; Desai, Tejal A

    2016-01-01

    Polymeric microparticles can serve as carriers or sensors to instruct or characterize tissue biology. However, incorporating microparticles into tissues for in vitro assays remains a challenge. We exploit three-dimensional cell-patterning technologies and directed epithelial self-organization to deliver microparticles to the lumen of reconstituted human intestinal microtissues. We also develop a novel pH-sensitive microsensor that can measure the luminal pH of reconstituted epithelial microtissues. These studies offer a novel approach for investigating luminal microenvironments and drug-delivery across epithelial barriers. PMID:27619235

  8. Finite-element analyses of blade and slot coating flows using an implicit pseudo-solid domain mapping technique coupled with unstructured grids

    SciTech Connect

    Chen, K.S.; Schunk, P.R.; Sackinger, P.A.

    1994-12-04

    In coating processes (e.g., in blade coating) the flow domain inherently contains free surfaces and three-phase contact lines, and characteristic length scales of flow features in the dimension transverse to the web-movement vary by an order of magnitude or more (from a fraction of a millimeter or more down to tens of microns or less). The presence of free surfaces and three-phase contact lines, and the sudden changes of flow geometry and directions create difficulties in theoretical analyses of such flows. Though simulations of coating flows via finite-element methods using structured grids have reportedly been demonstrated in the literature, achieving high efficiency of such numerical experiments remains a grand challenge -- mainly due to difficulties in local mesh-refinement and in avoiding unacceptably distorted grids. High efficiency of computing steady flow fields under various process conditions is crucial in shortening turn-around time in design and optimization of coating-flow processes. In this paper we employ a fully-implicit, pseudo-solid, domain mapping technique coupled with unstructured meshes to analyze blade and slot coating flows using Galerkin's method with finite element basis functions. We demonstrate the robustness and efficiency of our unique technique in circumventing shortcomings of mesh-motion schemes currently being used in the coating-flow research community. Our goal is to develop an efficient numerical tool, together with a suitable optimization toolkit, that can be used routinely in design and optimization of coating-flow processes.

  9. Nanoscale mapping of lithium-ion diffusion in a cathode within an all-solid-state lithium-ion battery by advanced scanning probe microscopy techniques.

    PubMed

    Zhu, Jing; Lu, Li; Zeng, Kaiyang

    2013-02-26

    High-resolution real-space mapping of Li-ion diffusion in the LiNi(1/3)Co(1/3)Mn(1/3)O₂ cathode within an all-solid-state thin film Li-ion battery has been conducted using advanced scanning probe microscopy techniques, namely, band excitation electrochemical strain microscopy (BE-ESM) and conductive atomic force microscopy. In addition, local variations of the electrochemical response in the LiNi(1/3)Co(1/3)Mn(1/3)O₂ thin film cathode at different cycling stages have been investigated. This work demonstrates the unique feature and applications of the BE-ESM technique on battery research. The results allow us to establish a direct relationship of the changes in ionic mobility as well as the electrochemical activity at the nanoscale with the numbers of charge/discharge cycles. Furthermore, various factors influencing the BE-ESM measurements, including sample mechanical properties (e.g., elastic and dissipative properties) as well as surface electrical properties, have also been studied to investigate the coupling effects on the electrochemical strain. The study on the relationships between the Li-ion redistribution and microstructure of the electrode materials within thin film Li-ion battery will provide further understanding of the electrochemical degradation mechanisms of Li-ion rechargeable batteries at the nanoscale. PMID:23336441

  10. Luminal surface fabrication for cardiovascular prostheses

    NASA Technical Reports Server (NTRS)

    Deininger, William D.; Gabriel, Stephen B.

    1987-01-01

    Methodology used for the production of highly ordered surfaces on polytetrafluoroethylene (PTFE) Teflon, for use of this material as a surface layer in small-diameter vascular prostheses, is described. Directed argon-ion beam sputter etching is used to produce highly ordered specific surface morphologies, and photolithographic techniques are used for pattern definition. The methodology also includes surface polishing of the samples, sputter deposition of the intermediary layer of gold to enhance photoresist adhesion and improve pattern resolution, and rotating of the sample to minimize the effects of ion-beam nonuniformity and asymmetry. Use of these techniques resulted in the formation of regular arrays of sharply defined hexagonal pillars with smooth vertical walls and large relief.

  11. Three-dimensional segmentation of luminal and adventitial borders in serial intravascular ultrasound images

    NASA Technical Reports Server (NTRS)

    Shekhar, R.; Cothren, R. M.; Vince, D. G.; Chandra, S.; Thomas, J. D.; Cornhill, J. F.

    1999-01-01

    Intravascular ultrasound (IVUS) provides exact anatomy of arteries, allowing accurate quantitative analysis. Automated segmentation of IVUS images is a prerequisite for routine quantitative analyses. We present a new three-dimensional (3D) segmentation technique, called active surface segmentation, which detects luminal and adventitial borders in IVUS pullback examinations of coronary arteries. The technique was validated against expert tracings by computing correlation coefficients (range 0.83-0.97) and William's index values (range 0.37-0.66). The technique was statistically accurate, robust to image artifacts, and capable of segmenting a large number of images rapidly. Active surface segmentation enabled geometrically accurate 3D reconstruction and visualization of coronary arteries and volumetric measurements.

  12. Mapping the optimal forest road network based on the multicriteria evaluation technique: the case study of Mediterranean Island of Thassos in Greece.

    PubMed

    Tampekis, Stergios; Sakellariou, Stavros; Samara, Fani; Sfougaris, Athanassios; Jaeger, Dirk; Christopoulou, Olga

    2015-11-01

    The sustainable management of forest resources can only be achieved through a well-organized road network designed with the optimal spatial planning and the minimum environmental impacts. This paper describes the mapping of the spatial layout of the optimal forest road network and the evaluation of the environmental impacts caused to the natural environment, based on the multicriteria evaluation (MCE) technique, on the Mediterranean island of Thassos in Greece. Data analysis and its presentation are achieved through a spatial decision support system using the MCE method with the contribution of geographic information systems (GIS). With the use of the MCE technique, we evaluated the intensity of human impact on the forest ecosystem as well as the ecosystem's absorption of the impacts caused by the construction of forest roads. For the human impact intensity evaluation, the criteria that were used are as follows: the forest's protection percentage, the forest road density, the applied skidding means (with either the use of tractors or the cable logging systems in timber skidding), the timber skidding direction, the visitors' number and truck load, the distance between forest roads and streams, the distance between forest roads and the forest boundaries, and the probability that the forest roads are located on sites with unstable soils. In addition, for the ecosystem's absorption evaluation, we used forestry, topographical, and social criteria. The MCE technique described in this study provides a powerful, useful, and easy-to-use tool for combining the sustainable utilization of natural resources with environmental protection in Mediterranean ecosystems. PMID:26463090
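
    A generic sketch of the weighted-overlay core of such an MCE analysis (the criterion names, weights, and rasters below are placeholders, not the criteria or weights used in the study):

```python
import numpy as np

def weighted_overlay(criteria, weights):
    """Weighted linear combination of criterion rasters, each rescaled to 0-1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-6
    out = np.zeros_like(next(iter(criteria.values())), dtype=float)
    for name, layer in criteria.items():
        norm = (layer - layer.min()) / (layer.max() - layer.min() + 1e-12)
        out += weights[name] * norm
    return out

# Placeholder 3x3 rasters standing in for real GIS criterion layers.
rng = np.random.default_rng(1)
criteria = {"road_density": rng.random((3, 3)),
            "slope": rng.random((3, 3)),
            "distance_to_streams": rng.random((3, 3))}
weights = {"road_density": 0.5, "slope": 0.3, "distance_to_streams": 0.2}
print(weighted_overlay(criteria, weights))
```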

  13. SN 2010U: A LUMINOUS NOVA IN NGC 4214

    SciTech Connect

    Humphreys, Roberta M.; Helton, L. Andrew; Prieto, Jose L.; Rosenfield, Philip; Williams, Benjamin; Murphy, Jeremiah; Dalcanton, Julianne; Gilbert, Karoline; Kochanek, Christopher S.; Stanek, K. Z.; Khan, Rubab; Szczygiel, Dorota; Mogren, Karen; Fesen, Robert A.; Milisavljevic, Dan

    2010-07-20

    The luminosity, light curve, post-maximum spectrum, and lack of a progenitor on deep pre-outburst images suggest that SN 2010U was a luminous, fast nova. Its outburst magnitude is consistent with that for a fast nova using the maximum magnitude-rate of decline relationship for classical novae.

  14. Modulation of intestinal L-glutamate transport by luminal leptin.

    PubMed

    Fanjul, Carmen; Barrenetxe, Jaione; Lostao, María Pilar; Ducroc, Robert

    2015-06-01

    Leptin is secreted into the digestive tract and contributes to the absorption of dietary molecules by regulating transporter activity. Here, we studied the effect of luminal leptin on the intestinal transport of L-glutamate, an important component of the human diet. We examined the effect of leptin on L-glutamate uptake in rat intestine in vitro, measuring glutamate-induced short-circuit current (Isc) in Ussing chambers and L-[(3)H (U)]-glutamate uptake in jejunal everted rings. Glutamate-induced Isc was only observed in Na(+)-free conditions. This Isc was concentration (1-60 mmol L(-1)) and pH dependent. Luminal leptin increased glutamate Isc (∼100 %). The dose-response curve showed a biphasic pattern, with maximal stimulations observed at 10(-13) and 10(-10) mmol L(-1) that were sensitive to a leptin receptor antagonist. In everted rings, two glutamate transport mechanisms were distinguished: a Na(+)-dependent, H(+)-independent, that was inhibited by leptin (∼20 %), and a Na(+)-independent but H(+)-dependent, that was enhanced by leptin (∼20 %), in line with data obtained in Ussing chambers. Altogether, these data reveal an original, non-monotonic effect of luminal leptin in the intestine and demonstrate a new role for this hormone in the modulation of L-glutamate transport, showing that luminal active gut peptides can influence the absorption of amino acids. PMID:25935421

  15. Vocal Fold Epithelial Response to Luminal Osmotic Perturbation

    ERIC Educational Resources Information Center

    Sivasankar, Mahalakshmi; Fisher, Kimberly V.

    2007-01-01

    Purpose: Dry-air challenges increase the osmolarity of fluid lining the luminal surface of the proximal airway. The homeostasis of surface fluid is thought to be essential for voice production and laryngeal defense. Therefore, the authors hypothesized that viable vocal fold epithelium would generate a water flux to reduce an osmotic challenge (150…

  16. An Exponential Luminous Efficiency Model for Hypervelocity Impact into Regolith

    NASA Technical Reports Server (NTRS)

    Swift, Wesley R.; Moser, D.E.; Suggs, Robb M.; Cooke, W.J.

    2010-01-01

    The flash of thermal radiation produced as part of the impact-crater forming process can be used to determine the energy of the impact if the luminous efficiency is known. From this energy the mass and, ultimately, the mass flux of similar impactors can be deduced. The luminous efficiency, η, is a unique function of velocity, with an extremely large variation in the laboratory range of under 8 km/s but a necessarily small variation with velocity in the meteoric range of 20 to 70 km/s. Impacts into granular or powdery regolith, such as that on the moon, differ from impacts into solid materials in that the energy is deposited via a serial impact process which affects the rate of deposition of internal (thermal) energy. An exponential model of the process is developed which differs from the usual polynomial models of crater formation. The model is valid for the early time portion of the process and focuses on the deposition of internal energy into the regolith. The model is successfully compared with experimental luminous efficiency data from laboratory impacts and from astronomical determinations, and scaling factors are estimated. Further work is proposed to clarify the effects of mass and density upon the luminous efficiency scaling factors.
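
    The role the luminous efficiency plays in the mass estimate can be sketched in a few lines; the definition E_radiated = η · ½ m v² follows the description above, while the numerical values of η, v, and the flash energy below are placeholders, not results from the paper:

```python
def impactor_mass(E_radiated, v, eta):
    """Invert E_radiated = eta * (1/2) * m * v**2 for the impactor mass m (SI units)."""
    return 2.0 * E_radiated / (eta * v**2)

# Placeholder numbers: a 1e6 J flash from a 24 km/s impact with eta = 2e-3.
m = impactor_mass(E_radiated=1.0e6, v=24_000.0, eta=2.0e-3)
print(f"inferred impactor mass ~ {m:.2f} kg")
```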

  17. Astronomy. ASASSN-15lh: A highly super-luminous supernova.

    PubMed

    Dong, Subo; Shappee, B J; Prieto, J L; Jha, S W; Stanek, K Z; Holoien, T W-S; Kochanek, C S; Thompson, T A; Morrell, N; Thompson, I B; Basu, U; Beacom, J F; Bersier, D; Brimacombe, J; Brown, J S; Bufano, F; Chen, Ping; Conseil, E; Danilet, A B; Falco, E; Grupe, D; Kiyota, S; Masi, G; Nicholls, B; Olivares E, F; Pignata, G; Pojmanski, G; Simonian, G V; Szczygiel, D M; Woźniak, P R

    2016-01-15

    We report the discovery of ASASSN-15lh (SN 2015L), which we interpret as the most luminous supernova yet found. At redshift z = 0.2326, ASASSN-15lh reached an absolute magnitude of Mu ,AB = -23.5 ± 0.1 and bolometric luminosity Lbol = (2.2 ± 0.2) × 10(45) ergs s(-1), which is more than twice as luminous as any previously known supernova. It has several major features characteristic of the hydrogen-poor super-luminous supernovae (SLSNe-I), whose energy sources and progenitors are currently poorly understood. In contrast to most previously known SLSNe-I that reside in star-forming dwarf galaxies, ASASSN-15lh appears to be hosted by a luminous galaxy (MK ≈ -25.5) with little star formation. In the 4 months since first detection, ASASSN-15lh radiated (1.1 ± 0.2) × 10(52) ergs, challenging the magnetar model for its engine. PMID:26816375

  18. ASASSN-15lh: A highly super-luminous supernova

    NASA Astrophysics Data System (ADS)

    Dong, Subo; Shappee, B. J.; Prieto, J. L.; Jha, S. W.; Stanek, K. Z.; Holoien, T. W.-S.; Kochanek, C. S.; Thompson, T. A.; Morrell, N.; Thompson, I. B.; Basu, U.; Beacom, J. F.; Bersier, D.; Brimacombe, J.; Brown, J. S.; Bufano, F.; Chen, Ping; Conseil, E.; Danilet, A. B.; Falco, E.; Grupe, D.; Kiyota, S.; Masi, G.; Nicholls, B.; Olivares E., F.; Pignata, G.; Pojmanski, G.; Simonian, G. V.; Szczygiel, D. M.; Woźniak, P. R.

    2016-01-01

    We report the discovery of ASASSN-15lh (SN 2015L), which we interpret as the most luminous supernova yet found. At redshift z = 0.2326, ASASSN-15lh reached an absolute magnitude of Mu,AB = -23.5 ± 0.1 and bolometric luminosity Lbol = (2.2 ± 0.2) × 10^45 ergs s^-1, which is more than twice as luminous as any previously known supernova. It has several major features characteristic of the hydrogen-poor super-luminous supernovae (SLSNe-I), whose energy sources and progenitors are currently poorly understood. In contrast to most previously known SLSNe-I that reside in star-forming dwarf galaxies, ASASSN-15lh appears to be hosted by a luminous galaxy (MK ≈ -25.5) with little star formation. In the 4 months since first detection, ASASSN-15lh radiated (1.1 ± 0.2) × 10^52 ergs, challenging the magnetar model for its engine.

  19. On the Social Traits of Luminous Blue Variables

    NASA Astrophysics Data System (ADS)

    Humphreys, Roberta M.; Weis, Kerstin; Davidson, Kris; Gordon, Michael S.

    2016-07-01

    In a recent paper, Smith & Tombleson state that the luminous blue variables (LBVs) in the Milky Way and the Magellanic Clouds are isolated; they are not spatially associated with young O-type stars. They propose a novel explanation that would overturn the standard view of LBVs. In this paper we test their hypothesis for the LBVs in M31 and M33, as well as the LMC and SMC. We show that in M31 and M33 the LBVs are associated with luminous young stars and supergiants that are appropriate to their luminosities and positions on the H-R diagram. Moreover, in the Smith and Tombleson scenario most of the LBVs should be runaway stars, but the stars’ velocities are consistent with their positions in the respective galaxies. In the Magellanic Clouds, those authors’ sample was a mixed population. We reassess their analysis, removing seven stars that have no clear relation to LBVs. When we separate the more massive classical and the less luminous LBVs, the classical LBVs have a distribution similar to the late O-type stars, while the less luminous LBVs have a distribution like the red supergiants. None of the confirmed LBVs have high velocities or are candidate runaway stars. These results support the accepted description of LBVs as evolved massive stars that have shed a lot of mass and are now close to their Eddington limit.

  20. Profile of a Growing Urban School: The Lumin Experience

    ERIC Educational Resources Information Center

    Ford, Terry

    2015-01-01

    This fairytale-come-true began with an idealistic public school teacher just out of college who lived in the neighborhood of her students. In stages, working with a community organizing group consisting mainly of concerned parents, Terry Ford founded what is now called Lumin Education, a network of campuses serving more than six hundred children…

  1. Can Self-Organizing Maps Accurately Predict Photometric Redshifts?

    NASA Astrophysics Data System (ADS)

    Way, M. J.; Klose, C. D.

    2012-03-01

    We present an unsupervised machine-learning approach that can be employed for estimating photometric redshifts. The proposed method is based on a vector quantization called the self-organizing-map (SOM) approach. A variety of photometrically derived input values were utilized from the Sloan Digital Sky Survey's main galaxy sample, luminous red galaxy, and quasar samples, along with the PHAT0 data set from the Photo-z Accuracy Testing project. Regression results obtained with this new approach were evaluated in terms of root-mean-square error (RMSE) to estimate the accuracy of the photometric redshift estimates. The results demonstrate competitive RMSE and outlier percentages when compared with several other popular approaches, such as artificial neural networks and Gaussian process regression. SOM RMSE results (using Δz = zphot - zspec) are 0.023 for the main galaxy sample, 0.027 for the luminous red galaxy sample, 0.418 for quasars, and 0.022 for PHAT0 synthetic data. The results demonstrate that there are nonunique solutions for estimating SOM RMSEs. Further research is needed in order to find more robust estimation techniques using SOMs, but the results herein are a positive indication of their capabilities when compared with other well-known methods.
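
    As a rough, self-contained illustration of the SOM-regression idea (a toy NumPy map trained on synthetic colours, not the authors' pipeline, data, or map size), the sketch below trains a small map, assigns each cell the mean spectroscopic redshift of its training members, predicts test redshifts from the best-matching unit, and reports the RMSE of Δz = zphot - zspec:

```python
import numpy as np

rng = np.random.default_rng(42)

def train_som(X, grid=(10, 10), n_iter=5000, lr0=0.5, sigma0=3.0):
    """Train a toy rectangular self-organizing map on feature matrix X of shape (n, d)."""
    h, w = grid
    weights = rng.random((h, w, X.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for it in range(n_iter):
        x = X[rng.integers(len(X))]
        bmu = np.unravel_index(((weights - x) ** 2).sum(axis=-1).argmin(), (h, w))
        frac = it / n_iter                               # decay schedule
        lr = lr0 * (1.0 - frac)
        sigma = sigma0 * (1.0 - frac) + 0.5
        dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
        nb = np.exp(-dist2 / (2.0 * sigma ** 2))[..., None]
        weights += lr * nb * (x - weights)               # pull neighbourhood toward x
    return weights

def bmu_index(weights, x):
    return np.unravel_index(((weights - x) ** 2).sum(axis=-1).argmin(), weights.shape[:2])

# Synthetic "photometry": four colours that depend (noisily) on redshift.
n = 2000
z = rng.uniform(0.0, 0.7, n)
colours = np.column_stack([z + 0.05 * rng.standard_normal(n) for _ in range(4)])
train, test = slice(0, 1500), slice(1500, None)
som = train_som(colours[train])

# Assign each SOM cell the mean spectroscopic z of the training objects it captures.
cell_z_sum = np.zeros(som.shape[:2]); cell_n = np.zeros(som.shape[:2])
for x, zt in zip(colours[train], z[train]):
    i, j = bmu_index(som, x)
    cell_z_sum[i, j] += zt; cell_n[i, j] += 1
cell_z = np.where(cell_n > 0, cell_z_sum / np.maximum(cell_n, 1), z[train].mean())

z_phot = np.array([cell_z[bmu_index(som, x)] for x in colours[test]])
rmse = np.sqrt(np.mean((z_phot - z[test]) ** 2))
print(f"toy SOM photo-z RMSE: {rmse:.3f}")
```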

  2. Analysis of abdominal wounds made by surgical trocars using functional luminal imaging probe (FLIP) technology.

    PubMed

    McMahon, Barry P; O'Donovan, Deidre; Liao, Donghua; Zhao, Jingbo; Schiretz, Rich; Heninrich, Russell; Gregersen, Hans

    2008-09-01

    The aim was to use a novel functional luminal imaging probe for evaluation of wound defects and tissue damage resulting from the use of trocars. Following general anesthesia of 4 adult pigs, 6 different trocars were randomly inserted at preselected locations in the porcine abdominal wall. The functional luminal imaging probe was used to profile the trocar holes during bag distension from 8 axial cross-sectional area measurements. The cross-sectional areas and pressure in the bag were recorded and exported to Matlab for analysis and data display. Geometric profiles were generated, and the minimum cross-sectional area and hole length (abdominal wall thickness) were used as endpoints. Successful distensions were made in all cases. The slope of the contours increased away from the narrowest point of the hole. The slope increased more rapidly toward the inner abdominal wall than toward the outer wall. The slope of the linear trend lines for the cross-sectional area-pressure relation represents the compliance at the narrowest point in the wall. The hole length (abdominal wall thickness) could be obtained at different cross-sectional area cutoff points. A cutoff point of 300 mm(2) gave good results when compared to the length of the hole measured after the tissue was excised. This technique represents a new and straightforward way to evaluate the effects of trocars on the abdominal wall. It may also prove useful in comparing techniques and technology from different manufacturers. PMID:18757380
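
    The compliance estimate described above reduces to a linear fit of the narrowest cross-sectional area against bag pressure; a minimal sketch with made-up readings (units and numbers are illustrative only):

```python
import numpy as np

# Hypothetical paired readings at the narrowest cross-section during bag distension.
pressure = np.array([5.0, 10.0, 15.0, 20.0, 25.0])        # bag pressure (arbitrary units)
csa_mm2 = np.array([120.0, 150.0, 178.0, 205.0, 236.0])   # minimum cross-sectional area

slope, intercept = np.polyfit(pressure, csa_mm2, deg=1)
print(f"compliance at the narrowest point ~ {slope:.1f} mm^2 per pressure unit")
```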

  3. A stent for co-delivering paclitaxel and nitric oxide from abluminal and luminal surfaces: Preparation, surface characterization, and in vitro drug release studies

    NASA Astrophysics Data System (ADS)

    Gallo, Annemarie; Mani, Gopinath

    2013-08-01

    Most drug-eluting stents currently available are coated with anti-proliferative drugs on both abluminal (toward blood vessel wall) and luminal (toward lumen) surfaces to prevent neointimal hyperplasia. While the abluminal delivery of anti-proliferative drugs is useful for controlling neointimal hyperplasia, the luminal delivery of such drugs impairs or prevents endothelialization which causes late stent thrombosis. This research is focused on developing a bidirectional dual drug-eluting stent to co-deliver an anti-proliferative agent (paclitaxel - PAT) and an endothelial cell promoting agent (nitric oxide - NO) from abluminal and luminal surfaces of the stent, respectively. Phosphonoacetic acid, a polymer-free drug delivery platform, was initially coated on the stents. Then, the PAT and NO donor drugs were co-coated on the abluminal and luminal stent surfaces, respectively. The co-coating of drugs was collectively confirmed by the surface characterization techniques such as Fourier transform infrared spectroscopy, scanning electron microscopy (SEM), 3D optical surface profilometry, and contact angle goniometry. SEM showed that the integrity of the co-coating of drugs was maintained without delamination or cracks formation occurring during the stent expansion experiments. In vitro drug release studies showed that the PAT was released from the abluminal stent surfaces in a biphasic manner, which is an initial burst followed by a slow and sustained release. The NO was burst released from the luminal stent surfaces. Thus, this study demonstrated the co-delivery of PAT and NO from abluminal and luminal stent surfaces, respectively. The stent developed in this study has potential applications in inhibiting neointimal hyperplasia as well as encouraging luminal endothelialization to prevent late stent thrombosis.

  4. Calcium transport in bovine rumen epithelium as affected by luminal Ca concentrations and Ca sources

    PubMed Central

    Schröder, Bernd; Wilkens, Mirja R; Ricken, Gundula E; Leonhard-Marek, Sabine; Fraser, David R; Breves, Gerhard

    2015-01-01

    The quantitative role of different segments of the gastrointestinal tract for Ca absorption, the respective mechanisms, and their regulation are not fully identified for ruminants, that is, cattle. In different in vitro experiments the forestomach wall has been demonstrated to be a major site for active Ca absorption in sheep and goats. In order to further clarify the role of the bovine rumen for Ca transport with special attention to luminal Ca concentrations, its ionic form, and pH, electrophysiological and unidirectional flux rate measurements were performed with isolated bovine rumen epithelial tissues. For Ca flux studies (Jms, Jsm), the in vitro Ussing chamber technique was applied. A standard RT-PCR method was used to characterize TRPV6 and PMCA1 as potential contributors to transepithelial active Ca transport. At Ca concentrations of 1.2 mmol L−1 on both sides of the tissues, Jms were higher than Jsm, resulting under some conditions in significant Ca net flux rates (Jnet), indicating the presence of active Ca transport. In the absence of an electrical gradient, Jnet could significantly be stimulated in the presence of luminal short-chain fatty acids (SCFAs). Increasing the luminal Ca concentrations up to 11.2 mmol L−1 resulted in significant increases in Jms without influencing Jsm. Providing Ca in its form as respective chloride, formate, or propionate salts there was no significant effect on Jms. No transcripts specific for Ca channel TRPV6 could be demonstrated. Our results indicate different mechanisms for Ca absorption in bovine rumen as compared with those usually described for the small intestines. PMID:26564067

  5. Calcium transport in bovine rumen epithelium as affected by luminal Ca concentrations and Ca sources.

    PubMed

    Schröder, Bernd; Wilkens, Mirja R; Ricken, Gundula E; Leonhard-Marek, Sabine; Fraser, David R; Breves, Gerhard

    2015-11-01

    The quantitative role of different segments of the gastrointestinal tract for Ca absorption, the respective mechanisms, and their regulation are not fully identified for ruminants, that is, cattle. In different in vitro experiments the forestomach wall has been demonstrated to be a major site for active Ca absorption in sheep and goats. In order to further clarify the role of the bovine rumen for Ca transport with special attention to luminal Ca concentrations, its ionic form, and pH, electrophysiological and unidirectional flux rate measurements were performed with isolated bovine rumen epithelial tissues. For Ca flux studies (Jms, Jsm), the in vitro Ussing chamber technique was applied. A standard RT-PCR method was used to characterize TRPV6 and PMCA1 as potential contributors to transepithelial active Ca transport. At Ca concentrations of 1.2 mmol L(-1) on both sides of the tissues, Jms were higher than Jsm, resulting under some conditions in significant Ca net flux rates (Jnet), indicating the presence of active Ca transport. In the absence of an electrical gradient, Jnet could significantly be stimulated in the presence of luminal short-chain fatty acids (SCFAs). Increasing the luminal Ca concentrations up to 11.2 mmol L(-1) resulted in significant increases in Jms without influencing Jsm. Providing Ca in its form as respective chloride, formate, or propionate salts there was no significant effect on Jms. No transcripts specific for Ca channel TRPV6 could be demonstrated. Our results indicate different mechanisms for Ca absorption in bovine rumen as compared with those usually described for the small intestines. PMID:26564067

  6. Using maps in genealogy

    USGS Publications Warehouse

    U.S. Geological Survey

    1999-01-01

    Maps are one of many sources you may need to complete a family tree. In genealogical research, maps can provide clues to where our ancestors may have lived and where to look for written records about them. Beginners should master basic genealogical research techniques before starting to use topographic maps.

  7. Quantitative DNA fiber mapping

    DOEpatents

    Gray, Joe W.; Weier, Heinz-Ulrich G.

    1998-01-01

    The present invention relates generally to the DNA mapping and sequencing technologies. In particular, the present invention provides enhanced methods and compositions for the physical mapping and positional cloning of genomic DNA. The present invention also provides a useful analytical technique to directly map cloned DNA sequences onto individual stretched DNA molecules.

  8. AN INTEGRAL FIELD STUDY OF ABUNDANCE GRADIENTS IN NEARBY LUMINOUS INFRARED GALAXIES

    SciTech Connect

    Rich, J. A.; Kewley, L. J.; Dopita, M. A.; Torrey, P.; Rupke, D. S. N.

    2012-07-01

    We present, for the first time, metallicity maps of 10 luminous infrared galaxies (LIRGs), generated using data from the Wide Field Spectrograph on the ANU 2.3 m telescope, and discuss the abundance gradients and distribution of metals in these systems. We have carried out optical integral field spectroscopy (IFS) of several LIRGs in various merger phases to investigate the merger process. In a major merger of two spiral galaxies with preexisting disk abundance gradients, the changing distribution of metals can be used as a tracer of gas flows in the merging system as low-metallicity gas is transported from the outskirts of each galaxy to their nuclei. We employ this fact to probe merger properties by using the emission lines in our IFS data to calculate the gas-phase metallicity in each system. We create abundance maps and subsequently derive a metallicity gradient from each map. We compare our measured gradients to merger stage as well as several possible tracers of merger progress and observed nuclear abundances. We discuss our work in the context of previous abundance gradient observations and compare our results to new galaxy merger models that trace the metallicity gradient. Our results agree with the observed flattening of metallicity gradients as a merger progresses. We compare our results with new theoretical predictions that include chemical enrichment. Our data show remarkable agreement with these simulations.
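
    Deriving a gradient from an abundance map amounts to fitting metallicity against deprojected galactocentric radius; a schematic sketch on a synthetic map (placeholder pixel scale and input gradient, not the paper's data):

```python
import numpy as np

# Synthetic 50x50 map of 12 + log(O/H) declining by 0.03 dex/kpc from the centre.
n, kpc_per_pixel = 50, 0.5
y, x = np.indices((n, n))
radius_kpc = np.hypot(x - n / 2, y - n / 2) * kpc_per_pixel
rng = np.random.default_rng(3)
metallicity = 8.8 - 0.03 * radius_kpc + 0.02 * rng.standard_normal((n, n))

# Fit a straight line to all spaxels within the mapped radius.
mask = radius_kpc < 12.0
slope, intercept = np.polyfit(radius_kpc[mask], metallicity[mask], deg=1)
print(f"recovered gradient: {slope:.3f} dex/kpc (input: -0.030)")
```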

  9. [Molecular structure of luminal diuretic receptors].

    PubMed

    Gamba, G

    1995-01-01

    Since day to day sodium and water intake is more or less constant, the output by urinary sodium excretion is the key to maintain extracellular fluid volume within physiologic ranges. To achieve this goal, the kidneys ensure that most of the large quantities of filtered sodium are reabsorbed, a function that takes place in the proximal tubule, the loop of Henle and the distal tubule, and then the kidneys adjust the small amount of sodium that is excreted in urine in such a way that sodium balance is maintained. This adjustment occurs in the collecting duct. Three groups of diuretic-sensitive sodium transport mechanisms have been identified in the apical membranes of the distal nephron based on their different sensitivities to diuretics and requirements for chloride and potassium: 1) the sulfamoylbenzoic (or bumetanide)-sensitive Na+:K+:2Cl- and Na+:Cl- symporters in the thick ascending loop of Henle; 2) the benzothiadiazine (or thiazide)-sensitive Na+:Cl- cotransporter in the distal tubule; and 3) the amiloride-sensitive Na+ channel in the collecting tubule. The inhibition of any one of these proteins by diuretics results in increased urinary sodium excretion. Recently, the use of molecular biology techniques, especially functional expression cloning in Xenopus laevis oocytes, has led to the identification of cDNAs encoding members of the three groups of diuretic-sensitive transport proteins. The present paper reviews the primary structure and some aspects of the relationship between structure and function of these transporters as well as the new protein families emerging from these sequences. It also discusses the future implications of these discoveries for the physiology and pathophysiology of kidney disease and sodium-retaining states. PMID:7569367

  10. An evaluation of organic light emitting diode monitors for medical applications: Great timing, but luminance artifacts

    PubMed Central

    Elze, Tobias; Taylor, Christopher; Bex, Peter J.

    2013-01-01

    Purpose: In contrast to the dominant medical liquid crystal display (LCD) technology, organic light-emitting diode (OLED) monitors control the display luminance via separate light-emitting diodes for each pixel and are therefore supposed to overcome many previously documented temporal artifacts of medical LCDs. We assessed the temporal and luminance characteristics of the only currently available OLED monitor designed for use in the medical treatment field (SONY PVM2551MD) and checked the authors’ main findings with another SONY OLED device (PVM2541). Methods: Temporal properties of the photometric output were measured with an optical transient recorder. Luminances of the three color primaries and white for all 256 digital driving levels (DDLs) were measured with a spectroradiometer. Between the luminances of neighboring DDLs, just noticeable differences were calculated according to a perceptual model developed for medical displays. Luminances of full screen (FS) stimuli were compared to luminances of smaller stimuli with identical DDLs. Results: All measured luminance transition times were below 300 μs. Luminances were independent of the luminance in the preceding frame. However, for the single color primaries, up to 50.5% of the luminances of neighboring DDLs were not perceptually distinguishable. If two color primaries were active simultaneously, between 36.7% and 55.1% of neighboring luminances for increasing DDLs of the third primary were even decreasing. Moreover, luminance saturation effects were observed when too many pixels were active simultaneously. This effect was strongest for white; a small white patch was close to 400 cd/m2, but in FS the luminance of white saturated at 162 cd/m2. Due to different saturation levels, the luminance of FS green and FS yellow could exceed the luminance of FS white for identical DDLs. Conclusions: The OLED temporal characteristics are excellent and superior to those of LCDs. However, the OLEDs revealed severe

  11. Assessment of planetary geologic mapping techniques for Mars using terrestrial analogs: The SP Mountain area of the San Francisco Volcanic Field, Arizona

    USGS Publications Warehouse

    Tanaka, K.L.; Skinner, J.A., Jr.; Crumpler, L.S.; Dohm, J.M.

    2009-01-01

    We photogeologically mapped the SP Mountain region of the San Francisco Volcanic Field in northern Arizona, USA to evaluate and improve the fidelity of approaches used in geologic mapping of Mars. This test site, which was previously mapped in the field, is chiefly composed of Late Cenozoic cinder cones, lava flows, and alluvium perched on Permian limestone of the Kaibab Formation. Faulting and folding have deformed the older rocks and some of the volcanic materials, and fluvial erosion has carved drainage systems and deposited alluvium. These geologic materials and their formational and modificational histories are similar to those for regions of the Martian surface. We independently prepared four geologic maps using topographic and image data at resolutions that mimic those commonly used to map the geology of Mars (taking into account that Martian features such as lava flows are commonly much larger than their terrestrial counterparts). We primarily based our map units and stratigraphic relations on geomorphology, color contrasts, and cross-cutting relationships. Afterward, we compared our results with previously published field-based mapping results, including detailed analyses of the stratigraphy and of the spatial overlap and proximity of the field-based vs. remote-based (photogeologic) map units, contacts, and structures. Results of these analyses provide insights into how to optimize the photogeologic mapping of Mars (and, by extension, other remotely observed planetary surfaces). We recommend the following: (1) photogeologic mapping as an excellent approach to recovering the general geology of a region, along with examination of local, high-resolution datasets to gain insights into the complexity of the geology at outcrop scales; (2) delineating volcanic vents and lava-flow sequences conservatively and understanding that flow abutment and flow overlap are difficult to distinguish in remote data sets; (3) taking care to

  12. Transient luminous events above two mesoscale convective systems

    NASA Astrophysics Data System (ADS)

    Lang, Timothy; Rutledge, Steve; Lyons, Walt; Cummer, Steve; Li, Jingbo; Macgorman, Don

    2010-05-01

    Two warm-season mesoscale convective systems (MCSs) were analyzed with respect to production of transient luminous events (TLEs), mainly sprites. Sprites were documented over the lightning mapping array (LMA) network in Oklahoma, USA, using highly sensitive optical cameras operated at Yucca Ridge in Ft. Collins, Colorado, as part of our Sprites 2007 field campaign. Information about charge moment changes in lightning flashes was obtained by the National Charge Moment Change Network (CMCN). Cloud-to-ground lightning data were obtained from the National Lightning Detection Network (NLDN). The 20 June 2007 symmetric MCS produced 282 observed TLEs over a 4-h period, during which time the storm's intense convection weakened and its stratiform region strengthened. In contrast to previous sprite studies, the stratiform charge layer involved in producing the TLE-parent positive cloud-to-ground (+CG) lightning flash was situated at upper levels as opposed to near the melting level. This layer was physically connected to an even higher upper-level convective positive charge region via a downward-sloping pathway. The average altitude discharged by TLE-parent flashes during TLE activity was 8.2 km above mean sea level (MSL; -25 °C). The 9 May 2007 asymmetric MCS produced 25 observed TLEs over a 2-h period, during which the storm's convection rapidly weakened before recovering later. The 9 May storm best fit the conventional model of low-altitude positive charge playing the dominant role in sprite production; however, the average altitude discharged during the TLE phase of flashes still was higher than the melting level: 6.1 km MSL (-15 °C). The average TLE-parent +CG flash in the symmetric 20 June case initiated at higher altitude, discharged a substantially larger area, had a larger peak current, and tapped positive charge at higher altitude compared to the asymmetric 9 May case. Analysis of full charge moment change (CMC) data from TLE-parent +CGs in these two cases

  13. Infrared spectroscopy of radio-luminous OH/IR stars

    NASA Technical Reports Server (NTRS)

    Jones, Terry Jay; Hyland, A. R.; Fix, John D.; Cobb, Michael L.

    1988-01-01

    Low-resolution 1.5-2.5-micron spectra for 21 radio-luminous OH/IR stars are presented. These spectra divide into two broad classes. Those with very strong water-vapor absorption closely resemble the spectra of classical Mira variables and are classified Type VM. Those with weaker water-vapor absorption, but still showing strong CO absorption, resemble the spectra of true core-burning supergiants and are classified Type SG. Comparison of the classification of 30 radio-luminous OH/IR stars with their Delta(V)s and luminosities suggests this classification is a good indicator of the intrinsic nature of the underlying star. There is some evidence, however, that some true supergiants (massive main-sequence progenitors) develop the pulsation properties and photospheric characteristics of the Mira-like OH/IR stars when they become optically obscured OH/IR stars.

  14. Luminous fabric devices for wearable low-level light therapy

    PubMed Central

    Shen, Jing; Chui, Chunghin; Tao, Xiaoming

    2013-01-01

    In this paper, a flexible luminous fabric device was developed and investigated for wearable, three-dimensionally fitted low-level light therapy. The fabric device exhibited excellent optical and thermal properties. Its optical power density and operating temperature were stable during 10 hours of use. In vitro experiments demonstrated a significant increase in collagen production in human fibroblasts irradiated by the fabric device, compared with fibroblasts not exposed to light irradiation. A series of tests was conducted to assess the safety of the fabric for human skin contact according to ISO standard 10993-1:2003. The results showed that there was no potential hazard when the luminous fabrics were in direct contact with human skin. PMID:24409391

  15. Luminance Uniformity Evaluation of Electronic Displays Based on Visual Characteristics

    NASA Astrophysics Data System (ADS)

    Morimoto, Yuichi; Takagi, Yuji; Asano, Toshio

    A novel method for evaluating the luminance uniformity of Flat Panel Displays (FPDs) has been developed. The method is capable of evaluating the edge gradients of non-uniform regions. This is accomplished by converting the original image into a visual sensitivity image using a contrast sensitivity function filter. The experiments were conducted using pseudo non-uniform samples and real LCD displays. The average of the visual sensitivity values over the non-uniform regions correlated strongly with the grades obtained by human evaluation. The proposed method can be used to evaluate luminance uniformity grades at any viewing distance, which is very important for uniformity evaluation.
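    A minimal sketch of the core step, converting a luminance image into a visual sensitivity image by weighting its spatial-frequency content with a contrast sensitivity function, is given below. The Mannos-Sakrison CSF and the pixels-per-degree value are assumptions for illustration, not details taken from the paper.

```python
# Illustrative sketch (not the authors' implementation): weight the spatial
# frequency content of a luminance image with a contrast sensitivity
# function (CSF). The Mannos-Sakrison CSF is one common choice; the viewing
# distance sets the cycles-per-degree scale via pixels_per_degree.
import numpy as np

def csf_mannos_sakrison(f_cpd):
    """CSF value at spatial frequency f (cycles/degree)."""
    return 2.6 * (0.0192 + 0.114 * f_cpd) * np.exp(-(0.114 * f_cpd) ** 1.1)

def visual_sensitivity_image(luminance, pixels_per_degree):
    """Filter a luminance map in the Fourier domain with the CSF."""
    ny, nx = luminance.shape
    fy = np.fft.fftfreq(ny) * pixels_per_degree   # cycles/degree
    fx = np.fft.fftfreq(nx) * pixels_per_degree
    f = np.hypot(*np.meshgrid(fx, fy))            # radial frequency grid
    spectrum = np.fft.fft2(luminance - luminance.mean())
    return np.fft.ifft2(spectrum * csf_mannos_sakrison(f)).real

# Example: a synthetic display with a dim patch (mura) in the centre
img = np.full((256, 256), 100.0)
img[100:140, 110:150] -= 3.0                       # 3% luminance dip
sens = visual_sensitivity_image(img, pixels_per_degree=40)
print("peak visual-sensitivity response:", abs(sens).max())
```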

  16. Mapping gas-phase organic reactivity and concomitant secondary organic aerosol formation: chemometric dimension reduction techniques for the deconvolution of complex atmospheric data sets

    NASA Astrophysics Data System (ADS)

    Wyche, K. P.; Monks, P. S.; Smallbone, K. L.; Hamilton, J. F.; Alfarra, M. R.; Rickard, A. R.; McFiggans, G. B.; Jenkin, M. E.; Bloss, W. J.; Ryan, A. C.; Hewitt, C. N.; MacKenzie, A. R.

    2015-07-01

    Highly non-linear dynamical systems, such as those found in atmospheric chemistry, necessitate hierarchical approaches to both experiment and modelling in order to ultimately identify and achieve fundamental process-understanding in the full open system. Atmospheric simulation chambers comprise an intermediate in complexity, between a classical laboratory experiment and the full, ambient system. As such, they can generate large volumes of difficult-to-interpret data. Here we describe and implement a chemometric dimension reduction methodology for the deconvolution and interpretation of complex gas- and particle-phase composition spectra. The methodology comprises principal component analysis (PCA), hierarchical cluster analysis (HCA) and partial least-squares discriminant analysis (PLS-DA). These methods are, for the first time, applied to simultaneous gas- and particle-phase composition data obtained from a comprehensive series of environmental simulation chamber experiments focused on biogenic volatile organic compound (BVOC) photooxidation and associated secondary organic aerosol (SOA) formation. We primarily investigated the biogenic SOA precursors isoprene, α-pinene, limonene, myrcene, linalool and β-caryophyllene. The chemometric analysis is used to classify the oxidation systems and resultant SOA according to the controlling chemistry and the products formed. Results show that "model" biogenic oxidative systems can be successfully separated and classified according to their oxidation products. Furthermore, a holistic view of the results across both the gas and particle phases shows how differences in SOA formation chemistry, initiated in the gas phase, go on to govern the differences between the various BVOC SOA compositions. The results obtained are used to describe the particle composition in the context of the oxidised gas-phase matrix. An extension of the technique, which incorporates into the statistical models data from anthropogenic (i
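    As a minimal illustration of the first dimension-reduction step named above (PCA on autoscaled composition matrices), the sketch below projects synthetic sample spectra onto two principal components; scikit-learn is assumed to be available and the data are invented.

```python
# Minimal sketch, not the study's workflow: PCA of composition matrices
# (samples x m/z channels) after autoscaling, as a first dimension-reduction
# step. The spectra below are synthetic stand-ins for chamber data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# 30 chamber samples (e.g. two different BVOC systems), 120 channels each
spectra = np.vstack([
    rng.normal(loc=1.0, scale=0.2, size=(15, 120)),
    rng.normal(loc=1.5, scale=0.2, size=(15, 120)),
])

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(spectra))
print("PC1/PC2 scores of first sample:", scores[0])
```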

  17. Mapping and analyzing Conservation Reserve Program (CRP) enrollment patterns from 1991 to 2011 in Nelson County, North Dakota, using remote sensing and GIS techniques

    NASA Astrophysics Data System (ADS)

    Roehrdanz, Nicholas L.

    Since its inception in 1985, the federally managed Conservation Reserve Program (CRP) has contributed to land-use/land-cover change (LUCC) in areas throughout North Dakota. Concurrently, the Devils Lake Basin and surrounding Prairie Pothole Region (PPR) in North Dakota has experienced pervasive lake and wetland flooding. Unsurprisingly, a clustering of CRP enrollment in certain counties within the basin has occurred, seemingly coinciding with the flooding. Analysis of historical county-level CRP enrollment data for counties in North Dakota revealed that Nelson County, which is partially within the basin, has developed as a CRP hotspot in the state and has had the greatest increase in the density of CRP acreage among the counties in the region. We hypothesize that this high enrollment is the response of farmers losing arable lands and/or field access to the rising waters in the region, thus making CRP enrollment an economically viable option. This study uses Landsat data and GIS analysis to document LUCC associated with CRP grassland and pervasive lake and wetland flooding in Nelson County, and the forces driving it. Because CRP field locations are not available from the federal government, we used multi-temporal classification techniques (three scenes per year) to derive land-cover maps from Landsat Thematic Mapper data for five growing seasons (1984, 1991, 1998, 2005, and 2011). We mapped CRP grassland at more than 90% accuracy with validation data derived from interpretation of historical aerial photography and, in the case of 2011, data gathered in the field. LUCC analysis was done using raster GIS. We found an increase in the amount of CRP grassland in the study area between 1991 (19,688 ha) and 2005 (35,612 ha) and then a decline to 2011 (27,856 ha). Spatial analysis revealed a clustering of CRP in 1991 in the Sheyenne and Goose river valleys, likely attributable to those lands being considered of greater conservation importance. By 1998, a
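    The classification step described above can be illustrated by stacking several within-season scenes into one feature vector per pixel and training a supervised classifier against reference data. The sketch below uses a random forest as a stand-in, since the abstract does not name the classifier, and all arrays are synthetic.

```python
# Illustrative sketch only: multi-temporal supervised classification of
# stacked Landsat features. The study's classifier is not named in the
# abstract, so a random forest is used as a stand-in; the data are synthetic
# stand-ins for real imagery and reference polygons.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_pixels = 5000
labels = rng.integers(0, 3, size=n_pixels)   # 0=CRP grass, 1=crop, 2=water
# 6 TM bands x 3 scenes per growing season = 18 features per pixel,
# with a small class-dependent offset so the classes are separable
features = rng.normal(size=(n_pixels, 18)) + labels[:, None] * 0.8

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("overall accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```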

  18. Infrared atomic hydrogen line formation in luminous stars

    NASA Technical Reports Server (NTRS)

    Krolik, J. H.; Smith, H. A.

    1981-01-01

    Infrared atomic hydrogen lines observed in luminous stars, generally attributed to compact circumstellar H II regions, can also be formed in the winds likely to emanate from these stars. Implications are discussed for the class of obscured infrared point sources showing these lines, and an illustrative model is derived for the BN object in Orion. Such stellar winds should also produce weak, but detectable, radio emission.

  19. Race-associated biological differences among Luminal A breast tumors.

    PubMed

    D'Arcy, Monica; Fleming, Jodie; Robinson, Whitney R; Kirk, Erin L; Perou, Charles M; Troester, Melissa A

    2015-07-01

    African-American (AA) women have higher breast cancer-specific mortality rates. A higher prevalence of the worse outcome Basal-like breast cancer subtype contributes to this, but AA women also have higher mortality even within the more favorable outcome Luminal A breast cancers. These differences may reflect treatment or health care access issues, inherent biological differences, or both. To identify potential biological differences by race among Luminal A breast cancers, gene expression data from 108 Caucasian (CAU) and 57 AA breast tumors were analyzed. Race-associated genes were evaluated for associations with survival. Finally, expression of race- and survival-associated genes was evaluated in normal tissue of AA and CAU women. Six genes (ACOX2, MUC1, CRYBB2, PSPH, SQLE, TYMS) were differentially expressed by race among Luminal A breast cancers and were associated with survival (HR <0.8, HR >1.25). For all six genes, tumors in AA women had higher expression of poor prognosis genes (CRYBB2, PSPH, SQLE, TYMS) and lower expression of good prognosis genes (ACOX2, MUC1). A score based on all six genes predicted survival in a large independent dataset (HR = 1.9 top vs. bottom quartile, 95% CI: 1.4-2.5). For four genes, normal tissue of AA and CAU women showed similar expression (ACOX2, MUC1, SQLE, TYMS); however, the poor outcome-associated genes CRYBB2 and PSPH were more highly expressed in AA versus CAU women's normal tissue. This analysis identified gene expression differences that may contribute to mortality disparities and suggests that among Luminal A breast tumors there are biological differences between AA and CAU patients. Some of these differences (CRYBB2 and PSPH) may exist from the earliest stages of tumor development, or may even precede malignancy. PMID:26109344
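    A plausible reconstruction of a six-gene score of the kind described, built by z-scoring each gene and summing with signs set by prognostic direction, is sketched below with synthetic expression values; the published scoring method may differ.

```python
# Plausible reconstruction, not the published scoring method: a six-gene
# risk score formed by z-scoring each gene across tumours and summing with
# signs set by prognostic direction (+1 poor prognosis, -1 good prognosis).
# Expression values are synthetic.
import numpy as np
import pandas as pd

signs = {"CRYBB2": +1, "PSPH": +1, "SQLE": +1, "TYMS": +1,
         "ACOX2": -1, "MUC1": -1}

rng = np.random.default_rng(3)
expr = pd.DataFrame(rng.normal(size=(100, 6)), columns=list(signs))  # 100 tumours

z = (expr - expr.mean()) / expr.std()                  # per-gene z-scores
risk_score = sum(sign * z[gene] for gene, sign in signs.items())
quartile = pd.qcut(risk_score, 4, labels=["Q1", "Q2", "Q3", "Q4"])
print(quartile.value_counts())
```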

  20. MODELING PROXIMAL TUBULE CELL HOMEOSTASIS: TRACKING CHANGES IN LUMINAL FLOW

    PubMed Central

    Weinstein, Alan M.; Sontag, Eduardo D.

    2009-01-01

    During normal kidney function, there are routinely wide swings in proximal tubule fluid flow and proportional changes in Na+ reabsorption across tubule epithelial cells. This "glomerulotubular balance" occurs in the absence of any substantial change in cell volume, and thus presents the challenge of coordinating luminal membrane solute entry with peritubular membrane solute exit. In this work, linear optimal control theory is applied to generate a configuration of regulated transporters that could achieve this result. A previously developed model of rat proximal tubule epithelium is linearized about a physiologic reference condition; the approximate linear system is recast as a dynamical system; and a Riccati equation is solved to yield the optimal linear feedback that stabilizes Na+ flux, cell volume, and cell pH. The first observation is that optimal feedback control is largely consigned to three physiologic variables: cell volume, cell electrical potential, and lateral intercellular hydrostatic pressure. Parameter modulation by cell volume stabilizes cell volume; parameter modulation by electrical potential or interspace pressure acts to stabilize Na+ flux and cell pH. This feedback control is utilized in a tracking problem, in which reabsorptive Na+ flux varies over a factor of two. The resulting control parameters consist of two terms, an autonomous term and a feedback term, and both terms include transporters on both luminal and peritubular cell membranes. Overall, the increase in Na+ flux is achieved with upregulation of luminal Na+/H+ exchange and Na+-glucose cotransport, with increased peritubular Na+-3HCO3− and K+-Cl− cotransport, and with increased Na+, K+-ATPase activity. The configuration of activated transporters emerges as a testable hypothesis of the molecular basis for glomerulotubular balance. It is suggested that the autonomous control component at each cell membrane could represent the cytoskeletal effects of luminal flow. PMID:19280266
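    The control recipe summarised above (linearize, solve a Riccati equation, form the optimal feedback gain) is sketched below in generic form; the matrices are small placeholders, not the linearized proximal tubule model.

```python
# Generic sketch of the optimal-control recipe described in the abstract:
# solve the continuous algebraic Riccati equation and form the linear
# feedback gain. A, B, Q, R are small placeholders, not the tubule model.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-2.0, -0.5]])   # linearized state dynamics
B = np.array([[0.0], [1.0]])               # regulated transporter input
Q = np.eye(2)                              # state penalty (e.g. volume, pH)
R = np.array([[1.0]])                      # control-effort penalty

P = solve_continuous_are(A, B, Q, R)       # Riccati solution
K = np.linalg.solve(R, B.T @ P)            # optimal feedback u = -K x
print("feedback gain K:", K)
```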

  1. Methods for Cultivation of Luminal Parasitic Protists of Clinical Importance

    PubMed Central

    Clark, C. Graham; Diamond, Louis S.

    2002-01-01

    Cultivation of luminal protistan parasites has a long history. In this review we discuss the methods and media that are most widely used for the establishment and maintenance of the following organisms in culture: Entamoeba histolytica, Giardia intestinalis, Trichomonas vaginalis, Dientamoeba fragilis, Blastocystis hominis, and Balantidium coli. While cultivation is of limited importance in the diagnostic laboratory, it is essential to most research laboratories, and it is toward the latter that this review is primarily aimed. PMID:12097242

  2. Identifying Luminous AGN in Deep Surveys: Revised IRAC Selection Criteria

    NASA Astrophysics Data System (ADS)

    Donley, Jennifer; Koekemoer, A. M.; Brusa, M.; Capak, P.; Cardamone, C. N.; Civano, F.; Ilbert, O.; Impey, C. D.; Kartaltepe, J.; Miyaji, T.; Salvato, M.; Sanders, D. B.; Trump, J. R.; Zamorani, G.

    2012-01-01

    Spitzer IRAC selection is a powerful tool for identifying luminous AGN. The AGN selection wedges currently in use, however, are heavily contaminated by star-forming galaxies, especially at high redshift. Using the large samples of luminous AGN and high-redshift star-forming galaxies in COSMOS, we redefine the AGN selection criteria for use in deep IRAC surveys. The new IRAC criteria are designed to be both highly complete and reliable, and incorporate the best aspects of the current AGN selection wedges and of infrared power-law selection while excluding high redshift star-forming galaxies selected via the BzK, DRG, LBG, and SMG criteria. At QSO-luminosities of log L(2-10 keV)>44, the new IRAC criteria recover 75% of the hard X-ray and IRAC-detected XMM-COSMOS sample, yet only 37% of the IRAC AGN candidates have X-ray counterparts, a fraction that rises to 51% in regions with Chandra exposures of 50-160 ks. X-ray stacking of the individually X-ray non-detected AGN candidates leads to a hard X-ray signal indicative of heavily obscured to mildly Compton-thick obscuration (log NH >= 23.7). While IRAC selection recovers a substantial fraction of luminous unobscured and obscured AGN, it is incomplete to low-luminosity and host-dominated AGN.

  3. Characterization of halogen lamps as secondary standard of luminous flux

    NASA Astrophysics Data System (ADS)

    Marques, A. A. R.; Sanchez, O., Jr.; Ferreira, A. F. G., Jr.

    2011-09-01

    This work presents a characterization study of halogen lamps, concerning their output, current, and voltage drift during seasoning, with regard to the use of these lamps as secondary standards of luminous flux. The 200 W halogen lamps are seasoned for 30 hours, and during the seasoning period the relative drift of the lamp illuminance, current, and voltage is measured every 3 minutes. The illuminance is measured using a photometer with a detector head, the lamp voltage is measured using a 6.5-digit voltmeter, and the current is measured using a 6.5-digit voltmeter and a 0.1 Ohm standard resistor. The lamp current is controlled by a calibrated current source with stability better than 1 mA. To reduce stray light, baffles are positioned between the lamp and the detector head. The experimental assembly is aligned with a He-Ne laser. Illuminance, current, and voltage data are acquired by software built in a LabVIEW database. Among the 5 lamps seasoned, the best result shows an illuminance variation of 0.04% per hour. This lamp is chosen to become the secondary standard, and its luminous flux is measured using an Ulbricht integrating sphere. This method allows the laboratory to create secondary standards of luminous flux for its routine tests and measurements and to supply these standards to Brazilian industry.
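    The drift figure quoted above can be obtained by fitting a line to the seasoning log and expressing the slope in percent per hour, as in the sketch below; the readings are synthetic.

```python
# Minimal sketch (synthetic data): relative illuminance drift during
# seasoning, estimated as the slope of a linear fit in percent per hour.
import numpy as np

hours = np.arange(0, 30, 0.05)                        # a reading every 3 min
rng = np.random.default_rng(4)
illuminance = 1000 * (1 + 0.0004 * hours) + rng.normal(0, 0.2, hours.size)

slope, _ = np.polyfit(hours, illuminance, 1)
drift_pct_per_hour = 100 * slope / illuminance.mean()
print(f"drift: {drift_pct_per_hour:.3f} % per hour")
```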

  4. Night sky luminance under clear sky conditions: Theory vs. experiment

    NASA Astrophysics Data System (ADS)

    Kocifaj, Miroslav

    2014-05-01

    Sky glow is caused by both natural phenomena and factors of anthropogenic origin; of the latter, ground-based light sources are the most important contributors, since they emit a spatially distributed spectral radiant intensity that is further modulated by local atmospheric optics and perceived as the diffuse light of the night sky. In other words, sky glow is closely related to a city's shape and pattern of luminaire distribution, in practical effect an almost arbitrary deployment and random orientation of heterogeneous electrical light sources. Thus the luminance gradation function measured in a suburban zone or near the edges of a city is linked to the City Pattern, and vice versa. It is shown that clear sky luminance/radiance data recorded in an urban area can be used to retrieve the bulk luminous/radiant intensity distribution if some a priori information on atmospheric aerosols is available. For instance, the single scattering albedo of aerosol particles is required under low turbidity conditions, as demonstrated in a targeted experiment in the city of Frýdek-Mistek. One of the main advantages of the retrieval method presented in this paper is that the single scattering approximation is satisfactorily accurate in characterizing the light field near the ground, because the dominant contribution to the sky glow originates from beams propagated along short optical paths.

  5. Detecting Exomoons around Self-luminous Giant Exoplanets through Polarization

    NASA Astrophysics Data System (ADS)

    Sengupta, Sujan; Marley, Mark S.

    2016-06-01

    Many of the directly imaged self-luminous gas-giant exoplanets have been found to have cloudy atmospheres. Scattering of the emergent thermal radiation from these planets by the dust grains in their atmospheres should locally give rise to significant linear polarization of the emitted radiation. However, the observable disk-averaged polarization should be zero if the planet is spherically symmetric. Rotation-induced oblateness may yield a net non-zero disk-averaged polarization if the planets have sufficiently high spin rotation velocity. On the other hand, when a large natural satellite or exomoon transits a planet with a cloudy atmosphere along the line of sight, the asymmetry induced during the transit should give rise to a net non-zero, time-resolved linear polarization signal. The peak amplitude of such time-dependent polarization may be detectable even for slowly rotating exoplanets. Therefore, we suggest that large exomoons around directly imaged self-luminous exoplanets may be detectable through time-resolved imaging polarimetry. Adopting detailed atmospheric models for several values of effective temperature and surface gravity that are appropriate for self-luminous exoplanets, we present the polarization profiles of these objects in the infrared during the transit phase and estimate the peak amplitude of polarization that occurs during the inner contacts of the transit ingress/egress phase. The peak polarization is predicted to range between 0.1% and 0.3% in the infrared.

  6. Cartographic mapping study

    NASA Technical Reports Server (NTRS)

    Wilson, C.; Dye, R.; Reed, L.

    1982-01-01

    The errors associated with planimetric mapping of the United States using satellite remote sensing techniques are analyzed. Assumptions concerning the state of the art achievable for satellite mapping systems and platforms in the 1995 time frame are made. An analysis of these performance parameters is made using an interactive cartographic satellite computer model, after first validating the model using LANDSAT 1 through 3 performance parameters. An investigation of current large scale (1:24,000) US National mapping techniques is made. Using the results of this investigation, and current national mapping accuracy standards, the 1995 satellite mapping system is evaluated for its ability to meet US mapping standards for planimetric and topographic mapping at scales of 1:24,000 and smaller.

  7. Coupling of cerebral blood flow and oxygen metabolism is conserved for chromatic and luminance stimuli in human visual cortex.

    PubMed

    Leontiev, Oleg; Buracas, Giedrius T; Liang, Christine; Ances, Beau M; Perthen, Joanna E; Shmuel, Amir; Buxton, Richard B

    2013-03-01

    The ratio of the changes in cerebral blood flow (CBF) and cerebral metabolic rate of oxygen (CMRO(2)) during brain activation is a critical determinant of the magnitude of the blood oxygenation level dependent (BOLD) response measured with functional magnetic resonance imaging (fMRI). Cytochrome oxidase (CO), a key component of oxidative metabolism in the mitochondria, is non-uniformly distributed in visual area V1 in distinct blob and interblob regions, suggesting significant spatial variation in the capacity for oxygen metabolism. The goal of this study was to test whether CBF/CMRO(2) coupling differed when these subpopulations of neurons were preferentially stimulated, using chromatic and luminance stimuli to preferentially stimulate either the blob or interblob regions. A dual-echo spiral arterial spin labeling (ASL) technique was used to measure CBF and BOLD responses simultaneously in 7 healthy human subjects. When the stimulus contrast levels were adjusted to evoke similar CBF responses (mean 65.4% ± 19.0% and 64.6% ± 19.9%, respectively for chromatic and luminance contrast), the BOLD responses were remarkably similar (1.57% ± 0.39% and 1.59% ± 0.35%) for both types of stimuli. We conclude that CBF-CMRO(2) coupling is conserved for the chromatic and luminance stimuli used, suggesting a consistent coupling for blob and inter-blob neuronal populations despite the difference in CO concentration. PMID:23238435

  8. Lunar impact flashes from Geminids: analysis of luminous efficiencies and the flux of large meteoroids on Earth

    NASA Astrophysics Data System (ADS)

    Ortiz, J. L.; Madiedo, J. M.; Morales, N.; Santos-Sanz, P.; Aceituno, F. J.

    2015-11-01

    We analyse lunar impact flashes recorded by our team during runs in December 2007, 2011, 2013 and 2014. In total, 12 impact flashes with magnitudes ranging between 7.1 and 9.3 in V band were identified. From these, nine events could be linked to the Geminid stream. Using these observations, the ratio of luminous energy emitted in the flashes to the kinetic energy of the impactors is estimated for meteoroids of the Geminid stream. By making use of the known Geminid meteoroid flux on Earth, we found this ratio to be 2.1 × 10⁻³ on average. We compare this luminous efficiency with other estimations derived in the past for other meteoroid streams and also compare it with other estimations that we present here for the first time by making use of crater diameter measurements. We think that the luminous efficiency has to be revised downwards, not upwards, at least for sporadic impacts. This implies an increase in the influx of kilogram-sized and larger bodies on Earth relative to what has been derived thus far through the lunar impact flash monitoring technique.
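    The luminous efficiency is simply the ratio of the energy radiated by the flash to the impactor's kinetic energy. The arithmetic below reproduces a ratio of the order reported, but the mass, speed, and radiated energy are assumed values, not measurements from the paper.

```python
# Illustrative arithmetic only (not the paper's photometric reduction):
# luminous efficiency as the ratio of radiated flash energy to impactor
# kinetic energy. Mass, speed and radiated energy below are assumed values.
mass_kg = 0.05                       # assumed Geminid meteoroid mass
speed_m_s = 34e3                     # assumed lunar impact speed
e_kinetic = 0.5 * mass_kg * speed_m_s**2

e_radiated = 6.0e4                   # assumed energy emitted by the flash, J
efficiency = e_radiated / e_kinetic
print(f"luminous efficiency ~ {efficiency:.1e}")
```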

  9. Mapping the University Learning Environment.

    ERIC Educational Resources Information Center

    Mitchell, Alice A.; And Others

    The perceptual mapping technique was used to solicit University of Maryland college freshmen's perceptions of the campus environment. In perceptual mapping respondents are provided with a map of an indoor or outdoor location and their feelings or perceptions about areas on the map are solicited. During this study students participating in regular…

  10. Assessing the effects of physical and perceived luminance contrast on RT and TMS-induced percepts.

    PubMed

    Knight, Ramisha; Mazzi, Chiara; Savazzi, Silvia

    2015-12-01

    Simple reaction times (RTs) are inversely related to the luminance of a visual region, with RT increasing as luminance decreases, and decreasing as luminance increases. A potential discrepancy in the link between luminance and RT, however, stems from the perception of luminance itself. Here, we tested whether RT is modulated by a measurable amount of light (luminance) or by the perceived amount of light (brightness), as two test regions having the same luminance can be perceived as having different brightness. The current study investigates the effects of brightness using probes and artificial percepts, i.e., transcranial magnetic stimulation (TMS)-induced light and dark percepts. In Experiment 1, participants performed an RT task to light and dark probes presented over two backgrounds, one exhibiting a physical luminance and the other exhibiting perceptual brightness. Experiment 2 tested whether perceptual brightness could influence RT and the frequency of artificial percepts. We found that while brightness contrast modulated RT to the dark probes, the frequency of artificial percepts was susceptible to both physical and perceived changes in luminance. These data suggest that some behavioral responses can be influenced by an illusion of brightness, wherein there is no actual change in luminance, as well as the perception of TMS-induced percepts. PMID:26314754

  11. Luminous Infrared Sources in the Local Group: Identifying the Missing Links in Massive Star Evolution

    NASA Astrophysics Data System (ADS)

    Britavskiy, N.; Bonanos, A. Z.; Mehner, A.

    2015-01-01

    We present the first systematic survey of dusty massive stars (RSGs, LBVs, sgB[e]) in nearby galaxies, with the goal of understanding their importance in massive star evolution. Using the fact that these stars are bright in mid-infrared colors due to dust, we provide a technique for selecting and identifying dusty evolved stars based on the results of Bonanos et al. (2009, 2010), Britavskiy et al. (2014), and archival Spitzer/IRAC photometry. We present the results of our spectroscopic follow-up of luminous infrared sources in the Local Group dwarf irregular galaxies: Pegasus, Phoenix, Sextans A and WLM. The survey aims to complete the census of dusty massive stars in the Local Group.

  12. Luminance measurement to evaluate the damage of notched FRP plates in static load

    SciTech Connect

    Hyakutake, H.; Yamamoto, T.

    1995-11-01

    The validity of the damage criterion for notched FRP plates based on the concept of severity near the notch root is subjected to further experimental scrutiny. An experimental program is presented which examines the effect of notch geometry on the damage near the notch root of FRP plates. This is accomplished by obtaining experimental data on notched specimens of a glass cloth/epoxy laminate for a wide range of notch geometries in tension and bending. The process of initiation and growth of damage near the notch root was measured by means of the luminance measurement technique with a CCD camera. The experiment shows that the growth of the damage zone near the notch root was governed predominantly by both the notch-root radius and the maximum elastic stress at the notch root, while it was independent of notch depth and type of loading. On the basis of the concept of severity, the experimental results can be clearly elucidated.

  13. High-throughput imaging of self-luminous objects through a single optical fibre

    NASA Astrophysics Data System (ADS)

    Barankov, Roman; Mertz, Jerome

    2014-11-01

    Imaging through a single optical fibre offers attractive possibilities in many applications such as micro-endoscopy or remote sensing. However, the direct transmission of an image through an optical fibre is difficult because spatial information is scrambled upon propagation. We demonstrate an image transmission strategy where spatial information is first converted to spectral information. Our strategy is based on a principle of spread-spectrum encoding, borrowed from wireless communications, wherein object pixels are converted into distinct spectral codes that span the full bandwidth of the object spectrum. Image recovery is performed by numerical inversion of the detected spectrum at the fibre output. We provide a simple demonstration of spread-spectrum encoding using Fabry-Perot etalons. Our technique enables the two-dimensional imaging of self-luminous (that is, incoherent) objects with high throughput in principle independent of pixel number. Moreover, it is insensitive to fibre bending, contains no moving parts and opens the possibility of extreme miniaturization.
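    The recovery step can be pictured as a linear inverse problem: each object pixel carries a known spectral code, the detector records their weighted sum, and the image is recovered by numerical inversion. The sketch below illustrates this with a random code matrix; it is a conceptual model, not the authors' optical implementation.

```python
# Conceptual sketch of spread-spectrum encoding (not the authors' optical
# implementation): each object pixel i contributes a known spectral code
# A[:, i]; the measured output spectrum is y = A @ x, and the pixel
# intensities x are recovered by least-squares inversion.
import numpy as np

rng = np.random.default_rng(5)
n_wavelengths, n_pixels = 400, 64

A = rng.uniform(0, 1, size=(n_wavelengths, n_pixels))   # broadband codes
x_true = rng.uniform(0, 1, size=n_pixels)               # self-luminous object
y = A @ x_true + rng.normal(0, 0.01, n_wavelengths)     # detected spectrum

x_rec, *_ = np.linalg.lstsq(A, y, rcond=None)
print("max reconstruction error:", np.abs(x_rec - x_true).max())
```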

  14. Extensive Transcriptomic and Genomic Analysis Provides New Insights about Luminal Breast Cancers

    PubMed Central

    Tishchenko, Inna; Milioli, Heloisa Helena; Riveros, Carlos; Moscato, Pablo

    2016-01-01

    Despite constituting approximately two thirds of all breast cancers, the luminal A and B tumours are poorly classified at both clinical and molecular levels. There are contradictory reports on the nature of these subtypes: some define them as intrinsic entities, others as a continuum. With the aim of addressing these uncertainties and identifying molecular signatures of patients at risk, we conducted a comprehensive transcriptomic and genomic analysis of 2,425 luminal breast cancer samples. Our results indicate that the separation between the molecular luminal A and B subtypes—per definition—is not associated with intrinsic characteristics evident in the differentiation between other subtypes. Moreover, t-SNE and MST-kNN clustering approaches based on 10,000 probes, associated with luminal tumour initiation and/or development, revealed the close connections between luminal A and B tumours, with no evidence of a clear boundary between them. Thus, we considered all luminal tumours as a single heterogeneous group for analysis purposes. We first stratified luminal tumours into two distinct groups by their HER2 gene cluster co-expression: HER2-amplified luminal and ordinary-luminal. The former group is associated with distinct transcriptomic and genomic profiles, and poor prognosis; it comprises approximately 8% of all luminal cases. For the remaining ordinary-luminal tumours we further identified the molecular signature correlated with disease outcomes, exhibiting an approximately continuous gene expression range from low to high risk. Thus, we employed four virtual quantiles to segregate the groups of patients. The clinico-pathological characteristics and ratios of genomic aberrations are concordant with the variations in gene expression profiles, hinting at a progressive staging. The comparison with the current separation into luminal A and B subtypes revealed a substantially improved survival stratification. Concluding, we suggest a review of the definition of

  15. Extensive Transcriptomic and Genomic Analysis Provides New Insights about Luminal Breast Cancers.

    PubMed

    Tishchenko, Inna; Milioli, Heloisa Helena; Riveros, Carlos; Moscato, Pablo

    2016-01-01

    Despite constituting approximately two thirds of all breast cancers, the luminal A and B tumours are poorly classified at both clinical and molecular levels. There are contradictory reports on the nature of these subtypes: some define them as intrinsic entities, others as a continuum. With the aim of addressing these uncertainties and identifying molecular signatures of patients at risk, we conducted a comprehensive transcriptomic and genomic analysis of 2,425 luminal breast cancer samples. Our results indicate that the separation between the molecular luminal A and B subtypes-per definition-is not associated with intrinsic characteristics evident in the differentiation between other subtypes. Moreover, t-SNE and MST-kNN clustering approaches based on 10,000 probes, associated with luminal tumour initiation and/or development, revealed the close connections between luminal A and B tumours, with no evidence of a clear boundary between them. Thus, we considered all luminal tumours as a single heterogeneous group for analysis purposes. We first stratified luminal tumours into two distinct groups by their HER2 gene cluster co-expression: HER2-amplified luminal and ordinary-luminal. The former group is associated with distinct transcriptomic and genomic profiles, and poor prognosis; it comprises approximately 8% of all luminal cases. For the remaining ordinary-luminal tumours we further identified the molecular signature correlated with disease outcomes, exhibiting an approximately continuous gene expression range from low to high risk. Thus, we employed four virtual quantiles to segregate the groups of patients. The clinico-pathological characteristics and ratios of genomic aberrations are concordant with the variations in gene expression profiles, hinting at a progressive staging. The comparison with the current separation into luminal A and B subtypes revealed a substantially improved survival stratification. Concluding, we suggest a review of the definition of

  16. Analysis of the Potent Prognostic Factors in Luminal-Type Breast Cancer

    PubMed Central

    Kim, Han-Sung; Park, Inseok; Cho, Hyun Jin; Yang, Keunho; Bae, Byung Noe; Kim, Ki Whan; Han, Sehwan; Kim, Hong-Joo; Kim, Young-Duck

    2012-01-01

    Purpose: Luminal-type breast cancer has a good prognosis compared to other types, such as the human epidermal growth factor receptor 2 and triple-negative types. Luminal-type breast cancer is classified into luminal A and B according to the proliferation index. We investigated the clinicopathological factors that affect the prognosis of the luminal-type subgroups. Methods: We reviewed the medical records and pathology reports of 159 luminal-type breast cancer patients who were treated between February 2005 and November 2007. We divided luminal-type breast cancer into luminal A and B according to Ki-67 (cutoff value, 14%) and analyzed clinicopathologic factors such as age at diagnosis, intensity score of estrogen receptor and progesterone receptor, histologic grade, and Bcl-2. Moreover, we compared the disease-free survival (DFS) of each group. Results: In the univariate analysis, age (p=0.004), tumor size (p=0.010), lymph node metastasis (p=0.001), and Bcl-2 (p=0.002) were statistically significant factors in luminal-type breast cancer. In the multivariate analysis, lymph node status (p=0.049) and Bcl-2 (p=0.034) were significant factors in luminal-type breast cancer. In the subgroup analysis, increased Bcl-2 (cutoff value, 33%) was related to a longer DFS in the luminal B group (p=0.004). Conclusion: In our study, luminal A breast cancer showed a longer DFS than luminal B breast cancer; further, Bcl-2 may be a potent prognostic factor in luminal-type breast cancer. PMID:23346168
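    A sketch of the stratification and survival comparison described above, using the 14% Ki-67 cutoff and a log-rank test on synthetic data, is given below; the lifelines package is assumed to be available and none of the numbers come from the study.

```python
# Hedged sketch with synthetic data (lifelines assumed available): stratify
# luminal tumours into A vs. B with the 14% Ki-67 cutoff and compare
# disease-free survival with a log-rank test.
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(6)
ki67 = rng.uniform(0, 60, 159)              # % Ki-67-positive cells
dfs_months = rng.exponential(80, 159)       # disease-free survival times
event = rng.integers(0, 2, 159)             # 1 = recurrence observed

luminal_b = ki67 >= 14.0                    # cutoff value used in the study
result = logrank_test(dfs_months[luminal_b], dfs_months[~luminal_b],
                      event_observed_A=event[luminal_b],
                      event_observed_B=event[~luminal_b])
print("log-rank p-value:", result.p_value)
```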

  17. Using Micro-Gravity Techniques to Map Alluvium Thickness and Pleistocene Location of the West Branch of the Susquehanna River Near Muncy, Pennsylvania

    NASA Astrophysics Data System (ADS)

    Sirianni, M.; Hayes, B.; Jacob, R. W.

    2013-12-01

    Laurentide glaciation during the early Pleistocene (~970 ka) dammed the northeast-flowing West Branch of the Susquehanna River (WBSR), scouring bedrock and creating 100-km-long glacial Lake Lesley near the Great Bend at Muncy, Pennsylvania. Local drill logs and well data indicate that subsequent paleo-outwash floods and modern fluvial processes have deposited as much as 30 meters of alluvium in this area, but little is known about the valley fill architecture and the bedrock-alluvium interface. This project used micro-gravity techniques to determine whether variations in the thickness of the valley fill were detectable as deviations in the gravitational field, in order to map the bedrock-alluvium interface. A LaCoste and Romberg Gravitron unit was used to collect gravitational field readings at 50 locations over 5 transects across the Muncy Creek and WBSR valleys (approximately 30 km2), with at least two gravity base stations per transect. Latitude, longitude, and ground surface elevation at each location were measured using an OPUS-corrected Trimble RTK-GPS unit. Base stations were chosen based on ease of access, given the necessity of repeat measurements. Gravity measurement locations were selected and marked to provide easy access and repeat measurements. The gravimeter was returned to a base station within every two hours, and a looping procedure was used to determine drift and maximize confidence in the gravity measurements. A two-minute calibration reading at each station was used to minimize any tares in the data. The Gravitron digitally recorded finite-impulse-response-filtered gravity measurements every 20 seconds at each station. A measurement period of 15 minutes was used for each base station occupation and a minimum of 5 minutes at all other locations. Longer or multiple measurements were utilized at some sites if drift or other externalities (e.g., train or truck traffic) were affecting readings. Average, median, standard deviation and 95% confidence
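    The looping procedure supports a simple drift correction: repeated base-station readings define a drift curve in time that is interpolated and subtracted from each field reading. The sketch below shows this with invented values.

```python
# Simplified sketch of loop-based drift correction (invented numbers, not
# the survey data): repeated base-station readings define a drift curve in
# time, which is interpolated and subtracted from every station reading.
import numpy as np

# Base-station occupations: hours since loop start, reading in mGal
base_t = np.array([0.0, 1.8, 3.9])
base_g = np.array([981_250.120, 981_250.132, 981_250.151])

drift = base_g - base_g[0]                       # cumulative instrument drift
station_t = np.array([0.6, 1.1, 2.5, 3.2])       # field-station times
station_g = np.array([981_249.870, 981_249.512, 981_250.004, 981_249.655])

corrected = station_g - np.interp(station_t, base_t, drift)
print("drift-corrected readings (mGal):", corrected)
```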

  18. LUMINOUS SATELLITES VERSUS DARK SUBHALOS: CLUSTERING IN THE MILKY WAY

    SciTech Connect

    Bozek, Brandon; Wyse, Rosemary F. G.; Gilmore, Gerard

    2013-08-01

    The observed population of the Milky Way satellite galaxies offers a unique testing ground for galaxy formation theory on small scales. Our novel approach was to investigate the clustering of the known Milky Way satellite galaxies and to quantify the amount of substructure within their distribution using a two-point correlation function statistic in each of three spaces: configuration space, line-of-sight velocity space, and four-dimensional (4D) phase space. These results were compared to those for three sets of subhalos in the Via Lactea II cold dark matter (CDM) simulation defined to represent the luminous dwarfs. We found no evidence at a significance level above 2σ of substructure within the distribution of the Milky Way satellite galaxies in any of the three spaces. The 'luminous' subhalo sets are more strongly clustered than are the Milky Way satellites in all three spaces and over a broader range of scales in 4D phase space. Each of the 'luminous' subhalo sets are clustered as a result of substructure within their line-of-sight velocity space distributions at greater than 3σ significance, whereas the Milky Way satellite galaxies are randomly distributed in line-of-sight velocity space. While our comparison is with only one CDM simulation, the inconsistencies between the Milky Way satellite galaxies and the Via Lactea II subhalo sets for all clustering methods suggest a potential new 'small-scale' tension between CDM theory and the observed Milky Way satellites. Future work will obtain a more robust comparison between the observed Milky Way satellites and CDM theory by studying additional simulations.
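    The configuration-space statistic used above can be illustrated with the natural two-point correlation estimator, xi(r) = DD/RR - 1, comparing data pair counts to those of a random catalogue; the sketch below uses synthetic positions and is not the paper's exact estimator.

```python
# Conceptual sketch only (not the paper's exact estimator): the natural
# two-point correlation estimator xi(r) = DD/RR - 1 for a small set of 3D
# satellite positions, using a random comparison catalogue in the same volume.
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(7)
data = rng.uniform(-250, 250, size=(25, 3))          # kpc, synthetic satellites
rand = rng.uniform(-250, 250, size=(2500, 3))        # random comparison catalogue

bins = np.linspace(0, 300, 16)
dd, _ = np.histogram(pdist(data), bins=bins)         # data-data pair counts
rr, _ = np.histogram(pdist(rand), bins=bins)         # random-random pair counts

n_d, n_r = len(data), len(rand)
norm = (n_r * (n_r - 1)) / (n_d * (n_d - 1))         # pair-count normalisation
xi = norm * dd / np.maximum(rr, 1) - 1
print("xi(r) per bin:", np.round(xi, 2))
```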

  19. Mapping gas-phase organic reactivity and concomitant secondary organic aerosol formation: chemometric dimension reduction